FOSDEM 2020

2020-02-01 - 2020-02-02

Days:

Day 1: 01.02.2020
Day 2: 02.02.2020

Tracks on Day 1 (01.02.2020):

Keynotes / History / Internet [Room Janson]
Community and Ethics / Containers and Security / Databases [Room K.1.105 (La Fontaine)]
Lightning Talks [Room H.2215 (Ferrer)]
Free Java [Room H.1302 (Depage)]
Software Defined Networking [Room H.1308 (Rolin)]
DNS / Web Performance [Room H.1309 (Van Rijn)]
Open Source Computer Aided Modeling and Design [Room H.2213]
MySQL, MariaDB and Friends [Room H.2214]
BOFs (Track B - in H.3242) [Room H.3242]
BOFs (Track C - in H.3244) [Room H.3244]
BOFs (Track A - in J.1.106) [Room J.1.106]
Collaborative Information and Content Management Applications / Coding for Language Communities [Room AW1.120]
Erlang, Elixir and Friends / Graph Systems and Algorithms [Room AW1.121]
Ada [Room AW1.125]
Open Research Tools and Technologies [Room AW1.126]
Game Development [Room K.3.201]
RISC-V / Retrocomputing [Room K.3.401]
LLVM [Room K.4.201]
Graphics [Room K.4.401]
Hardware-aided Trusted Computing / Open Source Firmware, BMC and Bootloader [Room K.4.601]
Security [Room UA2.114 (Baudoux)]
Quantum Computing [Room UA2.118 (Henriot)]
Legal and Policy Issues [Room UA2.220 (Guillissen)]
Testing and Automation [Room UB2.147]
Python [Room UB2.252A (Lameere)]
Certification [Room UB4.132]
Open Document Editors [Room UB4.136]
Workshops [Room UB4.228]
Backup and Recovery / Dependency Management [Room UD2.119]
Infra Management [Room UD2.120 (Chavanne)]
Containers [Room UD2.208 (Decroly)]
Embedded, Mobile and Automotive [Room UD2.218A]
Freedom [Room UD2.Corridor]

Tracks on Day 2 (02.02.2020):

Freedom / Keynotes [Room Janson]
Databases / Miscellaneous [Room K.1.105 (La Fontaine)]
Lightning Talks [Room H.2215 (Ferrer)]
JavaScript [Room H.1302 (Depage)]
Software Defined Storage [Room H.1308 (Rolin)]
Virtualization and IaaS [Room H.1309 (Van Rijn)]
Open Source Design [Room H.2213]
PostgreSQL [Room H.2214]
BOFs (Track B - in H.3242) [Room H.3242]
BOFs (Track C - in H.3244) [Room H.3244]
BOFs (Track A - in J.1.106) [Room J.1.106]
Free Software Radio [Room AW1.120]
BSD [Room AW1.121]
Minimalistic, Experimental and Emerging Languages [Room AW1.125]
Geospatial [Room AW1.126]
Distributions [Room K.3.201]
Rust [Room K.3.401]
Debugging Tools [Room K.4.201]
Hardware Enablement [Room K.4.401]
Microkernels and Component-based OS [Room K.4.601]
Mozilla [Room UA2.114 (Baudoux)]
Kotlin [Room UA2.118 (Henriot)]
Decentralized Internet and Privacy [Room UA2.220 (Guillissen)]
Open Media [Room UB2.147]
Go [Room UB2.252A (Lameere)]
Certification [Room UB4.132]
Continuous Integration and Continuous Deployment [Room UB4.136]
Workshops [Room UB4.228]
HPC, Big Data, and Data Science [Room UB5.132]
Community devroom [Room UB5.230]
Free Tools and Editors [Room UD2.119]
Monitoring and Observability [Room UD2.120 (Chavanne)]
Real Time Communications [Room UD2.208 (Decroly)]
Internet of Things [Room UD2.218A]
Keysigning [Room UD2.Corridor]

Welcome to FOSDEM 2020

Speaker: FOSDEM Staff
Room: Janson
Track: Keynotes
Time: 09:30 - 09:55

FOSDEM welcome and opening talk.

Welcome to FOSDEM 2020!

The Linux Kernel: We have to finish this thing one day ;)

Speaker: Thorsten Leemhuis
Room: Janson
Track: Keynotes
Time: 10:00 - 10:50

Today's Linux kernel and the ones from the early FOSDEM days still have some things in common, but in the end are totally different beasts. This talk will take a closer look at how the Linux kernel and its development during those twenty years evolved and adapted to new expectations.

The talk will thus mention some of the changes to the Linux kernel that had the most impact in those two decades; many might have seemed small or insignificant in the beginning, but turned out to pave the way for groundbreaking technologies like Android, Cloud or Containers we all deal with today. The speaker will also take a closer look at how Linux kernel development itself changed during those twenty years, but also cover aspects that didn't change much – like the email-driven development model, which in the days of GitHub and GitLab might look anarchic and from the Middle Ages to some. The speaker will also step on thin ice and speculate for a few minutes about what the future might hold, as there are a few strong indicators that Linux is entering a new era and slowly but steadily gaining more aspects of a microkernel.

LibreOffice turns ten and what's next

Speaker: Michael Meeks
Room: Janson
Track: History
Time: 11:00 - 11:50

From ten years of LibreOffice, how can you apply what we learned to your project? What is going on in LibreOffice today, and where is it going? And how can you re-use or contribute to the story?

Come hear the story of LibreOffice, the reasons we started, and some of the highlights: successes, failures and other lessons learned from our first ten years. Hear how our initial decisions and vision of openness and vendor neutrality panned out. See what has been learned about building an effective commercial ecosystem, with certification.

Hear about the trajectory of technical updates and how we rejuvenated an open-source code-base through massive re-factoring, as well as re-targeting for web and mobile.

Catch up with the latest in Online innovation, optimization and scalability work, as well as our growing integration with lots of other Open Source projects.

Finally, catch up with the latest and greatest feature/function improvements as we move towards LibreOffice 7, and find out how you can best get involved with the next decade of the LibreOffice story.

Over Twenty Years Of Automation

Speaker: James Shubin
Room: Janson
Track: History
Time: 12:00 - 12:50

Over the past twenty years, the automation landscape has changed dramatically.
As our hunger for complex technical infrastructure increased, and our ability to keep up with these demands faltered, we've outsourced a lot of the work to third parties and cloud providers.
We'll step backwards and show where we came from, and where we're going.
If we don't understand this future, and step up to the challenge, then we eventually won't control our own computers anymore.
We'll discuss this timeline from a tools perspective and showcase many live demos of the past, present, and what will be possible in the future.
This presentation will contain many demos and interactive examples. I will showcase some modern ideas I have with my Free Software project called mgmtconfig.

Blender, Coming of Age

Speaker: Ton Roosendaal
Room: Janson
Track: History
Time: 13:00 - 13:50

The presentation is going to be audiovisual and entertaining; based on a number of short videos, I want to tell the story of Blender: starting in the late 90s, how Blender became open source, going over the big milestones for Blender, and ending with the fast growth of our project and the interest of the film and game industry. Blender is now a more mature project, which involves different dynamics than it used to. How are we going to tackle the challenges of the industry, while not losing the community that brought us this far?

The Hidden Early History of Unix

Speaker: Warner Losh
Room: Janson
Track: History
Time: 14:00 - 14:50

The early history of Unix is much richer than is usually presented. There are many hidden gems that are little known and quite interesting to learn about. The dates of many of the "firsts" for Unix accomplishments are much earlier than people think. This well-researched talk explores those gems and firsts, and shares many artifacts from the early days of Unix. Many of these artifacts have only come to light in recent years, and change how we view the early history of Unix. Even the oldest of greybeards will learn things they didn't know about Unix from this talk.

Most histories of Unix follow the same old boring plan: talk about Multics, Ken scrounging the pdp-7, moving to the pdp-11, rewriting in C and then the explosion that happened with V6 and V7 before jumping into the Unix wars between AT&T and BSD followed by something about Linux (either pro or con depending on the speaker's politics). We've all seen it, and many can predict which "classic" pictures will be used, the points that will be made, and the arcs drawn.



This talk is nothing like that. It brings all the early years of Unix to life in a unique way. The early years of Unix were surprisingly rich. The author will use original sources to take you on a tour of many of the firsts in Unix and explore the community ties key to Unix's early success. Many of today's fads, like microkernels, hypervisors, multiprocessing and user-mode execution, actually happened early on in Unix's history, long before they were today's fads. "What's old is new again" has never been so apt. You'll be surprised to learn how early each of these things happened. Come see the secret history of Unix as it played out both in obscure business units of AT&T and in the worldwide user groups who banded together to support each other when AT&T wouldn't. You'll see footage of early machines as well as the first real Unix application: Space Travel (newly rediscovered and restored by the TUHS group). See first hand the machines, programs, newsletters and documentation that together weave a rich tale of innovation, community and working within constraints. Learn how today's open source movement owes a debt to these early communities and how they paved the way for Unix to become the open and ubiquitous system it is today and helped sow the seeds for the communities of today.

Generation gaps

Speaker: Liam Proven
Room: Janson
Track: History
Time: 15:00 - 15:50

For as long as computers have been around, roughly every 10-15 years, the whole industry underwent a reset and reinvented itself anew… until the early 1990s, when somehow, the industry skipped a generation. Instead, it looked backwards, and adopted an older model of computing. The cost has been very high and is holding back the development of the entire field.

This talk looks at how we turned to the past instead of the future, what we missed out on as a result, and how to move forward. It follows on from an earlier proposal, but takes a different tack and should stand alone. It looks at where personal computers might have gone – but didn't – in the 1980s and 1990s, and at a sampling of advanced OS technologies that never caught on – Plan 9, Inferno, Taos, Oberon – and the cool stuff we lost out on as a result.



It will end with trying to identify the next such generation gap, and why we should consider starting afresh rather than adapting current tech for the next gen.

HTTP/3 for everyone

Speaker: Daniel Stenberg
Room: Janson
Track: Internet
Time: 16:00 - 16:50

HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC.

HTTP/3 is the designated name for the next version of the protocol, currently under development within the QUIC working group in the IETF.


Daniel Stenberg does a presentation about HTTP/3 and QUIC. Why the new protocols are deemed necessary, how they work, how they change how things are sent over the network and what some of the coming deployment challenges will be.
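
One deployment detail worth knowing: since browsers still make their first connection over TCP, servers advertise HTTP/3 support with the Alt-Svc response header (RFC 7838), and clients switch to QUIC on later requests. Below is a minimal sketch of parsing such a header; the sample value is illustrative, not taken from any real server.

```python
# Sketch: parsing an Alt-Svc header (RFC 7838), the mechanism a server
# uses to advertise HTTP/3 support to clients that connected over TCP.

def parse_alt_svc(value):
    """Return a list of (protocol-id, authority, max-age) tuples."""
    services = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.strip().split(";")]
        proto, _, authority = parts[0].partition("=")
        authority = authority.strip('"')
        max_age = 86400  # RFC 7838 default when "ma" is absent
        for param in parts[1:]:
            key, _, val = param.partition("=")
            if key.strip() == "ma":
                max_age = int(val)
        services.append((proto, authority, max_age))
    return services

header = 'h3=":443"; ma=3600, h2=":443"'
print(parse_alt_svc(header))  # [('h3', ':443', 3600), ('h2', ':443', 86400)]
```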

State of the Onion

Speaker: Pili Guerra
Room: Janson
Track: Internet
Time: 17:00 - 17:50

The Tor Project is building usable free software to fight surveillance and censorship across the globe. In this talk we'll give an update on what we got up to during 2019, what happened in the wider Tor ecosystem, and what lies ahead of us.

During the past year the Tor Project has been working hard on improving the software, building and training communities around the world as well as creating an anti-censorship team and roadmap that can push forward technologies to circumvent censorship.



This talk will cover major milestones we achieved and will give an outline about what is lying ahead. In particular, we'll talk about our work to scale the network so it can cope with increased demand as we move forward with our plans for mainstream adoption of Tor Browser and the Tor network.



We will also share updates about our anti-censorship efforts, a year on from the formation of a dedicated Anti-Censorship team, and their work on next generation pluggable transports. Moreover, we'll explain our defense against website traffic fingerprinting attacks and plans for improving onion services and making them more usable (DDoS resistance, better user interfaces for authentication and dealing with errors).



Finally, we'll shed some light on efforts to get Tor support directly embedded into other browsers, like Firefox and Brave, and on educating users, both by reorganizing the content on our website, creating dedicated community and developer portals, and delivering extensive trainings throughout the world.

SCION

Speaker: Mateusz Kowalski, Kamila Součková
Room: Janson
Track: Internet
Time: 18:00 - 18:50

Do you know where your internet traffic flows? Does it go through China even if you don't want it to? SCION is a new internet architecture aimed at solving this problem. We will show how you can easily join the already existing worldwide network.

The current Internet was not designed with control and security considerations in mind: incidents such as the hijacking of all traffic for YouTube by a Pakistani ISP in February 2008, the Cloudflare DNS service hijacked by AnchNet in May 2018, or a large chunk of European mobile traffic being rerouted through China in June 2019 show that we cannot quite trust the current Internet. SCION is a proposed future Internet architecture aiming to offer high availability and security, even in the presence of actively malicious network operators and devices.



Designing a new Internet from scratch gives us the opportunity to make it work a lot better: we are aiming to notably improve security, availability, and performance. At the same time, just replacing the Internet would not be feasible, and thus we also emphasise practical concerns, such as incremental deployment and backwards compatibility. Thanks to that, SCION is currently the only clean-slate Internet architecture with a world-wide research network and production deployments in several large institutions in Switzerland; and you can start using it today.



In the first part of this talk, we will walk you through the current state of SCION design and implementation, showing how it provides its important features.

The world-wide test deployment, SCIONLab, consists of around 50 different points-of-presence around the globe, many of them connected via direct, BGP-free links. Having many independent organizations belonging to a continually evolving network introduces some non-trivial challenges of managing what you don’t own, which we will also talk about.



We will show a live demo presenting how easy it is today for the end user to join the network and start using the available services. We will also present what taking down a part of the network can look like, and how SCION prevents a scenario of traffic passing through China or Pakistan.



To close the talk, we will very briefly present the future plans and the direction in which we want the project to evolve.

How FOSS could revolutionize municipal government

Speaker: Danese Cooper
Room: K.1.105 (La Fontaine)
Track: Community and Ethics
Time: 10:00 - 10:50

Free and Open Source software has revolutionized the software industry and nearly all other areas of human endeavor, but until now its reach into actual governance at the municipal citizen level has not been very deep. Initiatives like Code for America have encountered challenges driving acceptance of FOSS alternatives to proprietary software for citizen governance. At the same time, the gap between citizen need and cities’ capabilities has widened. But several new projects are aiming to change this state of affairs to help empower citizens and improve municipal services worldwide through adoption of FOSS. Learn about these efforts and how you can get involved.

The Selfish Contributor Explained

Speaker: James Bottomley
Room: K.1.105 (La Fontaine)
Track: Community and Ethics
Time: 11:00 - 11:50

It has become very popular in the last several years to think of free and open source as a community-forward activity; indeed, the modern approach is to try to form a community or foundation first and do the code second. There is also much talk about maintainer burn-out and community exploitation. However, the same people who talk about this still paraphrase the most famous quote from the Cathedral and the Bazaar: "scratching your own itch". They forget this is your own itch, not everyone else's, because Open Source begins as a selfish activity. The fact that communities do form around a load of selfish competing individuals is actually a testament to the unacknowledged power of open source to co-opt the selfish instinct and make it synergistic to some communal good.



This talk will explore the selfish origins of free and open source and dissect the co-option power it has to form broad and deep communities from what are apparently simple transactional engagements. We'll also explain how some of the more recent community failures have been engendered by the concentration on long term community to the detriment of enabling purely transactional mechanics by which initially selfish contributors come to the project.

The origins of selfish contributions, while called out in the founding canons of the open source and free software movements, were initially not very well recognized until the first open source projects (and here the author will use the example he's familiar with: the Linux kernel) tried to engage successfully with companies trying to utilize the projects for their own ends. We became adept at explaining why forking is bad, how your cost structure will rise exponentially if you do it, and how the cheapest method of achieving your business goals is in fact to work with the existing community on whatever feature the company was trying to add as part of its business plan. In fact, those of us who did company engagement became transactional salespeople, offering a corporation the achievement of its business goals in exchange for some unexpected community engagement.



Nowadays, all companies have drunk the open source Kool-Aid and all pledge their loyalty to further the interests of the community, and this previous learning is all but forgotten in the third and fourth waves of open source. However, at their hearts, corporations are still the same business-goal-focused transactional entities we had to deal with in the early days of open source, and an understanding of how to co-opt their transactional nature would go a long way to assisting new open source developers in their quest to form communities.



This talk will begin with some history of the Linux kernel's corporate engagement, explore and explain some of the processes within open source development that lead to the conversion of transactionalism into long term community synergy (the truly astounding ability of open source to co-opt selfishness) and then give a few examples of how serving the community rather than enabling transactionalism can give rise to burn out and apparent exploitation. In conclusion we'll give some simple rules to enable this co-opting and suggest how some of the competing interests of the ethical source and social justice movements might likewise be co-opted.

The Ethics Behind Your IoT

Speaker: Molly de Blanc
Room: K.1.105 (La Fontaine)
Track: Community and Ethics
Time: 12:00 - 12:50

Internet of Things (IoT) devices are part of the future we were promised. Armed with our mobile devices, we can control everything from our cars to our toasters to the doors of our homes. Along with convenience, IoT devices bring us ethical quandaries, as designers and users. We need to consider the ethical implications of the technologies we are building and ask ourselves not just about the ways they are being used, for both good and evil, but the potential use cases we might encounter in the future.

IoT devices are becoming increasingly prevalent in our lives -- even my water filter is wifi enabled. In this session, we'll be looking at two case studies of how one might interact with IoT devices, and then consider the ethical implications of these devices, focused on the social impacts they can have on an individual or a society. While we will focus on smart doorbells and smart locks and situations in which installing these could significantly impact quality of life, we will touch on other common IoT devices.

Freedom and AI: Can Free Software include ethical AI systems?

Speaker: Justin W. Flory, Michael Nolan
Room: K.1.105 (La Fontaine)
Track: Community and Ethics
Time: 13:00 - 13:50

Despite the number of working groups, advisory committees, and coordination roundtables, there is little progress towards creating more ethical and safe AI systems. AI systems are deployed in increasingly fragile contexts. From law enforcement to humanitarian aid, several organizations use AI powered systems to make or inform critical decisions with increasingly outsized side effects.



What is a rights-based approach for designing minimally safe and transparent guidelines for AI systems? In this talk, we explore what a Free AI system might look like. Then, taking research and guidelines from organizations such as Google and the UN Office for the Coordination of Humanitarian Affairs, we propose practical policies and tools to ensure those building an AI system respect user freedom. Lastly, we propose the outlines of a new kind of framework where all derivative works also respect those freedoms.

Rights-based approaches are commonly used within humanitarian contexts to approach problems that the sector faces. In this talk, we use the same approach to address the issue of “unethical” AI systems: first defining a set of rights that we feel must be respected, then proposing a number of methods that we feel help ensure a system respects those rights, and lastly proposing an organizational and regulatory framework that we feel could help encourage and enforce the use of those methods by developers of AI systems.

How Containers and Kubernetes re-defined the GNU/Linux Operating System

Speaker: Daniel Riek
Room: K.1.105 (La Fontaine)
Track: Containers and Security
Time: 14:00 - 14:50

Free Software (as in Freedom) had won. The vertically integrated Cloud now is the predominant operational paradigm and is threatening to undermine software freedom. To many all seems lost, but the world keeps changing and decentralized compute is making a comeback. Containers and Kubernetes are already having a deep impact on the Linux operating system (OS) that goes well beyond DevOps and cloud-native applications. The concepts of application-centric packaging, process isolation through Linux containers, and immutable infrastructure are shaking up the core traditions of today's GNU/Linux operating systems. These concepts are also challenging the assumptions and approaches derived from the past 40+ years of work that originated with UNIX. The Linux distribution as we know it is coming to an end, and is being replaced by a new concept of containerized, multi-instance, multi-user applications, which can be deployed in scale-out environments as well as for widely distributed compute scenarios. In this session, we'll assess this new OS environment in the times of '5G' de-centralized cloud and take a deep look at the consequences this new OS model has for both developers and admins.

This talk will draw on the history of compute in general, and Free and Open Source in particular, to explain an evolution of paradigms from the GNU/Linux distribution to modern Free Software application platforms built on Kubernetes, and how they can shape the future of compute in the face of major technological changes.

Fixing the Kubernetes clusterfuck

Speaker: Kris Nova
Room: K.1.105 (La Fontaine)
Track: Containers and Security
Time: 15:00 - 15:50

Kubernetes is complex, and extremely vulnerable. In 2019 we explored the complexity of the Kubernetes codebase, and the antipatterns therein. This year we want to look at understanding how we observe our cluster at runtime. Let's live code some C and C++ and explore the libraries that bring Wireshark, Falco, and Sysdig to life. We concretely demonstrate how we are able to audit a Kubernetes system, by taking advantage of auditing the kernel's syscall information while enriching this data with meta information from Kubernetes.

We start off by presenting the problem of Kubernetes security at runtime. We discuss concerns with namespace and privilege escalation in a Kubernetes environment. We discover how auditing the kernel gives us visibility into both the container layer, as well as the underlying system layer.



We look at building an eBPF probe, or kernel module to begin auditing syscall metrics. We discover how we are able to pull those out of the kernel into userspace, and start exploring powerful patterns for using these metrics to secure a Kubernetes cluster.
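
The enrichment step described above can be sketched in miniature: raw syscall events carry little more than a container id, so a tool joins them with metadata fetched from the Kubernetes API to produce cluster-aware audit records. All ids, pod names and field names below are hypothetical, not the actual schema used by Falco or Sysdig.

```python
# Sketch of enriching kernel-side syscall events with Kubernetes
# metadata. Everything here is illustrative sample data.

# What a kernel probe might emit: container id plus syscall details.
raw_events = [
    {"container_id": "3f2a", "syscall": "openat", "arg": "/etc/shadow"},
    {"container_id": "9c81", "syscall": "execve", "arg": "/bin/sh"},
]

# What the Kubernetes API could tell us about those containers.
pod_metadata = {
    "3f2a": {"pod": "payments-7d4f", "namespace": "prod"},
    "9c81": {"pod": "debug-shell", "namespace": "dev"},
}

def enrich(events, metadata):
    """Attach pod/namespace context to each raw syscall event."""
    out = []
    for ev in events:
        meta = metadata.get(ev["container_id"], {})
        out.append({**ev, **meta})  # merge event fields with pod context
    return out

for record in enrich(raw_events, pod_metadata):
    print(record)
```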



The audience walks away understanding how the kernel treats containers, and how we are able to easily make sense of them. The audience also walks away equipped with an OSS toolkit for understanding, observing, and securing a Kubernetes environment.

Address Space Isolation in the Linux Kernel

Speaker: James Bottomley, Mike Rapoport
Room: K.1.105 (La Fontaine)
Track: Containers and Security
Time: 16:00 - 16:50

Security is a big problem, especially in the cloud with container workloads. This presentation investigates improving security in the Linux kernel itself. The first target is securing sensitive application data, for instance, private keys.

Address space isolation has been used to protect the kernel and userspace programs from each other since the invention of virtual memory.
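
That baseline guarantee is easy to see from userspace: a write in one process's address space is invisible to another. A minimal illustration follows (the talk is about extending the same idea to mappings inside the kernel):

```python
# Separate processes have separate address spaces, so a mutation in a
# child process never leaks back into the parent.
import multiprocessing

counter = [0]  # lives in this process's address space

def bump(q):
    counter[0] = 999   # mutates only the child's own copy
    q.put(counter[0])

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=bump, args=(q,))
    p.start()
    child_value = q.get()
    p.join()
    print(child_value, counter[0])  # 999 0 -- parent's copy is untouched
```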



Assuming that kernel bugs, and therefore exploits, are inevitable, it might be worth isolating parts of the kernel to minimize the damage these exploits can cause. Moreover, restricted mappings in kernel mode may improve mitigation of hardware speculation vulnerabilities.



There are several ongoing efforts to use restricted address spaces in Linux kernel for various use cases:
* speculation vulnerabilities mitigation in KVM
* support for memory areas visible only in a single owning context
* hardening of the Linux containers



We are going to present the approach for the implementation of restricted mappings in the Linux kernel and how this implementation would be used with various use-cases.



We are also going to take a closer look at the possibility of assigning an address space to Linux namespaces, so that tasks running in namespace A have a different view of kernel memory mappings than tasks running in namespace B. For instance, by keeping all the objects in a network namespace private, we can achieve levels of isolation equivalent to running a separate network stack.

Guix: Unifying provisioning, deployment, and package management in the age of containers

Speaker: Ludovic Courtès
Room: K.1.105 (La Fontaine)
Track: Containers and Security
Time: 17:00 - 17:50

This talk will reflect on what GNU Guix has to offer to users and how it compares to other approaches—from CONDA and pip to Flatpak and Docker. Guix is not only a transactional package manager and declarative GNU/Linux distribution: it’s also an environment manager, a container provisioning tool, and more. We will describe these tools and our journey to 1.0, emphasizing key properties that set Guix apart: reproducibility, transparency, and hackability.

When it comes to software deployment, we are getting used to a new distribution of roles among many actors: traditional distros take care of core software pieces; “application bundles” à la Docker/Flatpak provide complex applications; Cabal, Gem, npm, pip, and friends take care of language-specific software; and Puppet/Ansible orchestrate the whole thing. Each of these tools has its justification, but the end result is a maze that’s hard to deal with.



In this talk I will present GNU Guix, a software deployment toolbox and its associated distro that feature transactional upgrades and rollbacks, declarative OS deployment inspired by Nix, and reproducible builds. I will show how Guix approaches a variety of use cases: “package management” à la apt-get, environment management à la VirtualEnv, Ansible-style declarative OS deployment, and container provisioning à la Docker.



Guix emphasizes programmability, and one of its salient features is that it provides a unified Scheme programming environment to deal with all the aspects of configuration, deployment, and system management—including service management by PID 1. I will illustrate how this works out on standalone Guix systems, and show the benefits of the approach by discussing applications that take advantage of Guix as a library to support reproducible software deployment.



Last, I will reflect on the road that led to Guix 1.0 six months ago and present some of the challenges ahead.

AMENDMENT LumoSQL - Experiments with SQLite, LMDB and more

Speaker: Dan Shearer
Room: K.1.105 (La Fontaine)
Track: Databases
Time: 18:00 - 18:50

LumoSQL is an experimental fork of SQLite, the embeddable database library found in everything from Android to iOS to Firefox. As a replacement for fopen(), SQLite is a good choice for single-writer applications and disconnected, slow and small devices. Modern IoT and application use cases are increasingly multi-writer, fast, high-capacity and internet-connected, and LumoSQL aims to address these very different modern needs. LumoSQL initially aims to improve speed and reliability by replacing the internal key-value store with LMDB, updating and fixing a prototype from 2013, and allowing multiple storage backends. Next up we are designing the architecture for replacing the write-ahead log system (as used by all other open and closed source databases) with a single-level store, drawing on LMDB as an example of a single-level store in production at scale. Challenges so far involve code archeology, understanding and updating benchmarking, designing a system for keeping parity with upstream code changes, file format migration, and identifying bugs in both SQLite and LMDB.
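
The single-writer behaviour mentioned above is easy to observe with the sqlite3 module bundled with Python; this sketch assumes only stock SQLite in its default rollback-journal mode:

```python
# While one connection holds the write lock, a second writer is refused
# with SQLITE_BUSY ("database is locked").
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer1 = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer1.execute("CREATE TABLE t (x INTEGER)")
writer1.execute("BEGIN IMMEDIATE")            # take the write lock now
writer1.execute("INSERT INTO t VALUES (1)")

writer2 = sqlite3.connect(path, timeout=0)    # fail instead of waiting
refused = False
try:
    writer2.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError:              # "database is locked"
    refused = True

print("second writer refused:", refused)      # second writer refused: True
writer1.execute("COMMIT")
```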



Please do join us in testing and improving at https://github.com/LumoSQL/LumoSQL .



In this talk we welcome questions and contributions. This conference has many SQLite users and developers. What do you want to see?

LumoSQL is a combination of two embedded data storage C language libraries: SQLite and LMDB. LumoSQL is an updated version of Howard Chu's 2013 proof of concept combining the two codebases. Howard's LMDB library has become ubiquitous on the basis of performance and reliability, so the 2013 claims that it greatly increased the performance of SQLite seem credible. D. Richard Hipp's SQLite is relied on by many millions of people on a daily basis (every Android and Firefox user, to name just two of the thousands of projects that use SQLite), so an improved version of SQLite would benefit billions of people.



The original code changes btree.c in SQLite 3.7.17 to use LMDB 0.9.9. It takes some work to replicate the original results, not only because much has changed since, but because, as a proof of concept, there was no project established to package the code or make it accessible. LumoSQL revives the original code and shows how it is still relevant in 2019. The premise seems sound. Some bugs have been fixed in both LMDB and the prototype SQLightning code.



There need to be multiple backends: initially the original SQLite on-disk format and LMDB, at first for compatibility and conversion purposes. However, the ability to add more backends is very attractive, and there are already draft designs for where that could lead.



The design taking shape for tracking SQLite upstream may be useful to other projects: an automated process can handle most changes that do not touch certain basic APIs.



Write-Ahead Logs are in every single widely-used database today: a concurrency model developed in the 1990s is now the only option in both closed and open source SQL databases. There are pros and cons to WALs, but the merge-back step of a WAL lacks atomicity, which shows up as corruption and reliability issues at speed and scale. Databases go to very expensive lengths to avoid this, but combined with the lack of real-time integrity checking in almost all databases, it is a fundamental problem, especially for modern SQLite-type use cases.
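The journaling layer discussed here can be observed in stock SQLite from Python's built-in sqlite3 module. This is a plain-SQLite sketch of the storage layer LumoSQL replaces, not LumoSQL code:

```python
import os
import sqlite3
import tempfile

# On-disk SQLite databases default to a rollback journal; switching to
# write-ahead logging is a single PRAGMA. LumoSQL's initial work swaps
# this storage layer (btree.c and its b-tree format) for LMDB instead.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal"

conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('answer', '42')")
conn.commit()

# Committed pages sit in demo.db-wal until a checkpoint merges them back
# into the main file; that merge-back step is what the talk contrasts
# with a single-level store such as LMDB's.
print(os.path.exists(path + "-wal"))  # True while the WAL file is live
```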

Back

Civil society needs Free Software hackers

Home

Speaker: Matthias Kirschner
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 12:00 - 12:15
Event link: View original entry

More and more traditional processes in our society now incorporate, and are influenced by, software.

Processes that decide for example: Who will be able to go to which university? Who will be invited for a job interview? How long does someone have to go to jail?



Therefore many organisations which work for people's rights are now confronted with the problems proprietary software creates for society. Pupils' associations, unions, human rights organisations, environmental organisations -- all of them need to understand how software works to do their work in our society.



To continue to fulfil their role, civil society organisations need to understand how processes are implemented in software; they need to be able to challenge the assumptions, the values, and the way programmers designed them, and to understand how they could be changed.



In short: in a world in which more and more of our lives is controlled by software, civil society organisations need us Free Software hackers to support them in doing their job.

Back

A tool for Community Supported Agriculture (CSA) management, OpenOlitor

Home

Speaker: Mikel Cordovilla
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 12:20 - 12:35
Event link: View original entry

OpenOlitor is an open-source SaaS tool facilitating the organization and management of CSA (Community Supported Agriculture) communities. The tool covers a large spectrum of functionalities needed by CSAs, such as member management, emailing, invoicing, share planning and delivery, absence scheduling, etc. The software is organized and monitored by an international community that promotes the tool, helps operate it and supports interested communities. To promote the sustainability of the tool and this international community, an organization based on CSS (Community Supported Software) has been proposed.

1 - Introduction



The Community Supported Agriculture movement has grown considerably in the last few years, proposing a new model of food production and distribution. CSA initiatives connect producers and consumers directly. The consumer receives a basket of locally produced, fresh products on a subscription basis. Risk is shared among all participants of the community, whether consumer or producer. The growing popularity of the CSA model has led to larger communities. Consequently, the management effort for these organizations is becoming unaffordable on a volunteer basis.
In such conditions, software that helps with the repetitive tasks and connects all participants of these communities can be particularly supportive.
The main motivation for the creation of OpenOlitor (OO) is to respond to the growing popularity of the CSA model, with the aim of keeping the model economically viable and sustainable by reducing the time invested in management tasks through software which facilitates easy organization.



2 - OpenOlitor



OpenOlitor is a web-based, open source tool facilitating the organization and management of CSAs (Community Supported Agriculture)
communities. This tool is composed of two different access points:






2.1 - Functionalities

OpenOlitor (OO) covers the main functionalities CSAs need:






2.2 - Basic architecture



The OO platform is divided into a back-end and a front-end:






3 - Current numbers



Currently, seven CSAs are using OpenOlitor to support their organizations in a production environment. Three more CSAs are in the process of transitioning to this digital tool. A few numbers extracted from the last two and a half years put into perspective the work volume facilitated by OO:






4 - Hosting and Operations



As shown by the architecture, the database and the front-end are CSA specific but the server can be used by multiple organizations. We promote the idea of hosting communities where a group of CSAs share the costs for the hosting and the effort for the operational work. This model is currently in use for all CSAs using the software.



5 - Sustainability model



As a financial model we promote the idea of Community Supported Software (CSS). This model is directly inspired by the CSA model, where all participants share the risk and responsibility of the food production. Transferring this idea to software, an organization working with OO is invited to participate in the maintenance, operation, support and further development of the software. A fixed financial contribution is defined per basket delivered. All contributions are shared among all projects using the tool. Using this model, the effort and risk are equally shared, independently of the size of the CSA.



6 - Legal organizations supporting OO



This software is organized and monitored by an international community that promotes this tool:



OpenOlitor International: An informal (not legally constituted) organization composed of around 10 members from different European countries. This organization is in charge of the vision and principles of all interactions with the software. This group meets periodically to decide on OO's main features and how to evolve the platform. Everyone interested in OO is invited to join;
OpenOlitor association: A non-profit organization based in Switzerland; the legal body managing the funds received from initiatives and public sources;
OpenOlitor operations association: The organization in charge of the operation and support of the Swiss-based CSAs;
Sunu: A German-based organization that promotes digitalization for CSAs. They promote the use of OO and provide operations and support for German CSAs.

Back

What's in my food? Open Food Facts, the Wikipedia of Food

Home

Speaker: Pierre Slamich
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 12:40 - 12:55
Event link: View original entry

Open Food Facts is a collaborative and crowdsourced database of food products from the whole planet, licensed under the Open Database License (ODBL). It was launched in 2012, and today it is powered by 27000 contributors who have collected data and images for over 1 million products in 178 countries (and growing strong…)
This is the opportunity to learn more about Open Food Facts, and the latest developments of the project.

Scan a product using your phone, take a picture, and you're already part of the Open Food Facts revolution!



In this talk we'll show how Open Food Facts leverages open source technologies such as Perl, Python, TensorFlow, MongoDB, Java, Swift, React and Flutter as well as the great power of communities to open data of public interest for health & science, as well as unforeseen applications in your daily life.



We will also introduce you to Open Beauty Facts, for freeing your cosmetic cupboard: shampoos, toothpastes, lipsticks, etc.



How does it work?
Using our Android or iPhone app, you can easily scan the barcode of products from your home or local store.
You can either check them out (thanks to the decoding and comparison tools) or contribute pictures of their labels, assisted by our label-reading AI.
The same can also be done from the website, where additional tools are available to fill in the product details from the labels, navigate or visualise the database in various ways, or access the APIs and raw data to make your own tools and analyses.
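For readers curious about the API route, here is a small Python sketch of parsing a product lookup. The endpoint URL and field names follow the public read API as I understand it, so treat them as assumptions and check the current API documentation; the product values below are invented for illustration:

```python
import json

# Documented-style read endpoint (assumption; verify against the API docs):
BARCODE = "3017620422003"
API_URL = f"https://world.openfoodfacts.org/api/v0/product/{BARCODE}.json"

# A hardcoded sample response with the typical shape, so the sketch
# runs offline; a real client would fetch API_URL instead.
sample_response = json.loads("""
{
  "status": 1,
  "product": {
    "product_name": "Example spread",
    "brands": "ExampleBrand",
    "nutriments": {"energy-kcal_100g": 539}
  }
}
""")

def summarize(resp):
    """Return a one-line product summary, or None if the lookup failed."""
    if resp.get("status") != 1:
        return None
    p = resp["product"]
    kcal = p.get("nutriments", {}).get("energy-kcal_100g")
    return f"{p['product_name']} ({p.get('brands', '?')}): {kcal} kcal/100g"

print(summarize(sample_response))  # Example spread (ExampleBrand): 539 kcal/100g
```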



Open Food Facts is developed and managed by a community of open source, open data and food enthusiasts and is organised as a non-profit association. All its creations are open:
- the collected data is published as Open Data,
- the software running the server(s) is open source and reusable (it was also used to create the Open Beauty Facts database),
- the mobile applications are open source as well.

Back

Web3 - the Internet of Freedom, Value, and Trust

Home

Speaker: Bruno Škvorc
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 13:00 - 13:15
Event link: View original entry

For as long as human society has existed, humans have been unable to trust each other. For millennia, we relied on middlemen to establish business or legal relationships. With the advent of Web 2.0, we delegated the establishment of personal connections to middlemen as well, and the system has turned against us. The middlemen abuse our needs and their power, and we find ourselves chained to convenience at the expense of our own thoughts, our own privacy. Web3 is a radical new frontier ready to turn the status quo on its head, and these are the technologies we're using to make it happen.

Back

Next, the programmable web browser

Home

Speaker: Atlas Engineer
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 13:20 - 13:35
Event link: View original entry

While current browsers expose their internals through an API and limit access to the host system, Next doesn't, allowing for infinite extensibility and inviting users to program their web browser. On top of that, it doesn't tie itself to a particular platform (we currently provide bindings to WebKit and WebEngine) and allows for live code reloads, thanks to the Common Lisp language, about which we'll share our experience too.

Next is a keyboard-oriented, extensible web browser designed for power users. While most (all?) current browsers expose their internals through an API, Next exposes its entire code to the user. Modern browsers limit access to the host system, and Next doesn't, allowing for a broad range of new features. Similar projects have failed due to being tied to a particular platform (Vimperator, Conkeror…), but Next's choice is to have its core written as an independent library, and to provide bindings to web engines (currently WebKit and WebEngine are supported). Next's magic touch is its live coding capability: we can develop a command from a REPL, compile the function and try the changes immediately, without restarting anything. Or just edit our init file and reload it into the current session. This flexibility comes for free thanks to the Common Lisp language, our experience with which we'd like to share too.



Next is usable as of today. It features ad-blocking (only basic for now, contributions are welcome), multi-selection, bookmarks, session persistence, Emacs and Vim-like shortcuts, history seen as a tree, fuzzy completion everywhere, basic git-forking and file management interfaces…



We are entering a new phase of development, with, hopefully, some announcements that we can make public by the time of the conference :)

Back

AMENDMENT Weblate: open-source continuous localization platform

Home

Speaker: Michal Čihař
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 13:40 - 13:55
Event link: View original entry

Please note that this talk will now be given by Michal Čihař instead of Václav Zbránek.



You will learn how to localize your project easily and with little effort, the open-source way. No repetitive work, no manual work with translation files anymore. Weblate is unique for its tight integration with VCS. Set it up once and start engaging the community of translators. More languages translated means more happy users of your software. Be like openSUSE, Fedora, and many more, and speak your users' language now thanks to Weblate!

I will show you the main perks of Weblate and the setup of a project. If you have a project with an open repo and you want to start translating it, bring your git:// link and we will set it up right on the spot. FOSDEM is a great time and place to found your translating community.

Back

Kapow! A Web Framework for the Shell

Home

Speaker: Roberto Abdelkader Martínez Pérez
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 14:00 - 14:15
Event link: View original entry

This talk is about Kapow!, an open source web framework for the shell developed by BBVA Innovation Labs. We will cover the current development of the project, including an overview of Kapow!'s technology stack and the recent release of the first stable version.

The motivation behind the project was to create an adapter between the shell and the web, allowing users to expose command-line programs as HTTP services with a high degree of freedom and flexibility, not imposing any predefined behavior.
The project is based on an open specification.



Kapow! supports a growing list of HTTP features, including forms, websockets and streaming. The architecture is based on exposing a private REST API through which the shell can interact with incoming user HTTP requests.



Leveraging this architecture, an ecosystem of specialized tools is planned to help with common high-level tasks in Kapow! services.
For example:
- Shell commands that can interact with incoming web browser requests and render specialized web pages.
- Automatic mapping of shell command parameters to REST API through bash completion scripts.
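Purely as an illustrative sketch of the adapter idea (shell command in, HTTP response out), here is a minimal Python analogue built on the standard library. The route names and commands are made up for the demo; this is not Kapow!'s actual syntax or API:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative route table: each path maps to a fixed command line.
ROUTES = {
    "/hello": ["echo", "hello from the shell"],
    "/hostname": ["hostname"],
}

class ShellHandler(BaseHTTPRequestHandler):
    """Serve each routed path by running its command and returning stdout."""

    def do_GET(self):
        cmd = ROUTES.get(self.path)
        if cmd is None:
            self.send_error(404)
            return
        out = subprocess.run(cmd, capture_output=True).stdout
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=8080):
    # e.g. curl http://127.0.0.1:8080/hello
    HTTPServer(("127.0.0.1", port), ShellHandler).serve_forever()
```

Kapow! goes much further (streaming, request bodies, the private REST API for the shell), but the core shell-to-web mapping is the same shape.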



Roberto will give some demos of creating Kapow! services from scratch (exposing nmap, tcpdump and pandoc as services).

Back

Yjs: A CRDT framework for shared editing

Home

Speaker: Kevin Jahns
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 14:20 - 14:35
Event link: View original entry

Shared editing is the ability to collaboratively edit the same text in real-time. The market for shared editing solutions is fragmented: once you choose a solution, you are locked into a specific editor and a specific backend. Yjs is a data synchronization framework that aims to enable shared editing in all open-source editors using any networking stack.

Yjs is a data synchronization framework that is specifically designed for creating shared editing applications like Google Docs. The number of editors that Yjs supports is steadily growing. At this time we have implemented shared-editing support for six of the most prominent open-source rich-text editors, including Quill, ProseMirror, and CodeMirror. We are currently working on integrating Yjs in Gutenberg, the new WordPress editor.



Since Yjs is network agnostic and does not require a central authority to resolve sync conflicts, it is possible to use any networking stack to share updates to the collaborative document. We created an ecosystem of modules that handle data synchronization over, for example, WebRTC, Dat Protocol, IPFS, and traditional client-server connections via WebSockets.
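Yjs implements a sophisticated sequence CRDT; as a toy illustration of the property the paragraph above relies on (replicas merge updates in any order, over any transport, and converge without a server), here is a minimal grow-only-set CRDT sketched in Python. This is not Yjs's actual algorithm, only the simplest structure with the same convergence guarantee:

```python
class GSet:
    """Grow-only set CRDT: merge is set union, which is commutative,
    associative and idempotent, so replicas converge regardless of the
    order (or duplication) of update delivery."""

    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other):
        self.items |= other.items

# Two replicas edit concurrently, then exchange state in opposite orders.
a, b = GSet(), GSet()
a.add("edit-by-alice")
b.add("edit-by-bob")
a.merge(b)
b.merge(a)
assert a.items == b.items == {"edit-by-alice", "edit-by-bob"}
```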



In this lightning talk, we want to show the huge potential of Yjs that hopefully will make it the go-to solution for creating shared editing applications on the web.

Back

Encrypt your collaboration with CryptPad

Home

Speaker: Ludovic Dubost
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 14:40 - 14:55
Event link: View original entry

We'll show a real demonstration of how you can encrypt your data and collaborate with others in real-time using the CryptPad Open Source project.
This demonstration will include real-time Wysiwyg, Text, Kanban, Spreadsheet, File storage and Teams features, allowing you to share your documents securely with your friends and co-workers.

The Internet business model is about "surveillance capitalism": every day our data is used to target us with ever more invasive advertisements, and every day data is leaked by Internet businesses and cloud providers.



CryptPad uses end-to-end encryption to protect your data, while including innovative algorithms to allow collaboration between users. With CryptPad, the cloud host cannot read your data, and if data leaks, hackers only get encrypted data which they cannot read without the keys.



Start ditching proprietary, privacy-invading cloud services for a free software, privacy-protecting alternative.



Come join the 20000 weekly users of the main cryptpad.fr instance and the 300 other instances available, including those of the German Pirate Party and C3 Wien.



Come join us in restoring our privacy.

Back

Protect your data objects, not your network connections

Home

Speaker: Stephan Schwichtenberg
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 15:00 - 15:15
Event link: View original entry

Agenda



1) Current situation: complicated & incomplete threat models
2) Concepts worth looking into:
   - data sovereignty
   - named data networks
   - zero trust
3) Our hands-on experience with the above

The current state of play is that protecting data is a tedious task that involves many stakeholders and blocks resources.
The shift from on-premise to private/public cloud systems requires a careful inspection of the assumed threat model.
The application of a zero-trust model is one radical shift: authenticate and authorize at any given point in your
IT landscape, but most importantly, break with the assumption that we can design systems that are "safe and secure".
The talk will highlight the above-mentioned concepts and give a brief outline of a new approach called Named Data
Networking (NDN) and how it could improve the situation in terms of data sovereignty.
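To make the "protect the object, not the connection" idea concrete: in named data networking, verification data travels with the named object, so integrity can be checked no matter which channel, cache or peer delivered it. A stdlib-only Python sketch follows; real NDN uses per-packet public-key signatures, so the HMAC here is only a stand-in, and the key and names are invented for the demo:

```python
import hashlib
import hmac
import json

KEY = b"demo-shared-key"  # hypothetical key, for illustration only

def seal(name, content):
    """Bind a data object's name and content to a verification tag."""
    payload = json.dumps({"name": name, "content": content}).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    """Check the tag; the transport that delivered the object is irrelevant."""
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

packet, tag = seal("/home/sensor/temp/7", "21.5C")
assert verify(packet, tag)             # valid no matter who delivered it
assert not verify(packet + b" ", tag)  # any tampering is detected
```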

Back

Optimizing sandbox creation with a FUSE file system

Home

Speaker: Julio Merino
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 15:20 - 15:35
Event link: View original entry

The Bazel build system sandboxes each action (e.g. each compiler invocation) to ensure the action only has access to declared inputs and that the action only generates the promised outputs. This ensures that the execution of each build tool is deterministic and not subject to system-wide state. Unfortunately, creating these sandboxes is costly, and every millisecond added to the runtime of each action has a large impact on total build time. Just consider that Bazel focuses on large-ish builds with thousands of actions in them: each little inefficiency quickly multiplies and can result in significant slowdowns, and developers always want faster build times. In this talk, I'll explain how Bazel implements sandboxing and I will cover a FUSE file system I've been developing, sandboxfs, to optimize this process. I'll go into the details of the file system, explain how it started as a Go project and was then rewritten in Rust, and then show some performance metrics.
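As a rough illustration of the per-action cost being optimized: Bazel's classic sandboxes materialize, for each action, a directory containing only the declared inputs (traditionally via symlinks). The Python toy below sketches that idea; it is a simplification of what Bazel actually does, and sandboxfs's contribution is to synthesize this view in FUSE instead of building it file by file:

```python
import os
import tempfile

def make_symlink_sandbox(declared_inputs):
    """Toy sandbox: link each declared input into a fresh directory.
    A real build system does this (plus output capture and cleanup)
    for every action; with thousands of inputs per action, the cost
    of creating and tearing down these trees adds up quickly."""
    root = tempfile.mkdtemp(prefix="sandbox-")
    for path in declared_inputs:
        os.symlink(os.path.abspath(path),
                   os.path.join(root, os.path.basename(path)))
    return root

# Usage sketch with one hypothetical declared input:
with open(os.path.join(tempfile.mkdtemp(), "main.c"), "wb") as f:
    f.write(b"int main(void) { return 0; }\n")
sandbox = make_symlink_sandbox([f.name])
print(os.listdir(sandbox))  # only the declared input is visible
```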

Outline:




Back

Indexing Encrypted Data Using Bloom Filters

Home

Speaker: Claude Warren
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 15:40 - 15:55
Event link: View original entry

Bloom filters are a probabilistic data structure that can tell us with certainty where things are not. They use one-way hash functions to build a probabilistic representation of an object. This talk will address how this structure can be used to provide an index into encrypted data that can be made publicly available with minimal risk.

The talk will cover how Bloom filters are constructed, the Flat Bloofi indexing implementation, how to take the properties to be indexed and create Bloom filters, and how to associate each Bloom filter with the encrypted object in the index.
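As a generic sketch of the construction (not the Flat Bloofi layout from the talk), here is a minimal Bloom filter in Python, with its k hash positions derived from salted SHA-256:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over an m-bit integer with k salted hashes.
    For the encrypted-index use case, the indexed properties are hashed
    into a filter stored next to the ciphertext: the filter can answer
    "definitely not here" without revealing the plaintext values."""

    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, item):
        # Derive k bit positions by salting the one-way hash with i.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False is definitive; True only means "possibly present".
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice@example.org")
assert bf.might_contain("alice@example.org")  # never a false negative
print(bf.might_contain("mallory@example.org"))  # false positives possible but rare
```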



The result is an extremely fast index that can retrieve data items containing partial keys.



After this talk participants will be able to provide search capabilities across a collection of encrypted objects.



Code examples will be provided.

Back

Verifpal

Home

Speaker: Nadim Kobeissi
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 16:00 - 16:15
Event link: View original entry

Verifpal is new software for verifying the security of cryptographic protocols. Building upon contemporary research in symbolic formal verification, Verifpal’s main aim is to appeal more to real-world practitioners, students and engineers without sacrificing comprehensive formal verification features. Verifpal represents a serious attempt at making the formal analysis of advanced cryptographic systems such as Signal and TLS 1.3 easier to achieve.

Contemporary research in symbolic formal verification has led to confirming security guarantees (as well as finding attacks) in secure channel protocols such as TLS and Signal. However, formal verification in general has not managed to significantly exit the academic bubble. Verifpal is new software for verifying the security of cryptographic protocols that aims to work better for real-world practitioners, students and engineers, without sacrificing comprehensive formal verification features.



In order to achieve this, Verifpal introduces a new, intuitive language for modeling protocols that is easier to write and understand than the languages employed by existing tools. Its formal verification paradigm is also designed explicitly to provide protocol modeling that avoids user error. By modeling principals explicitly and with discrete states, Verifpal models are able to be written in a way that reflects how protocols are described in the real world. At the same time, Verifpal is able to model protocols under an active attacker with unbounded sessions and fresh values, and supports queries for advanced security properties such as forward secrecy or key compromise impersonation.



Verifpal has already been used to verify security properties for Signal, Scuttlebutt, TLS 1.3 and other protocols. It is a community-focused project, and available under a GPLv3 license.



An Intuitive Protocol Modeling Language:
The Verifpal language is meant to illustrate protocols close to how one may describe them in an informal conversation, while still being precise and expressive enough for formal modeling. Verifpal reasons about the protocol model with explicit principals: Alice and Bob exist and have independent states.



Modeling that Avoids User Error:
Verifpal does not allow users to define their own cryptographic primitives. Instead, it comes with built-in cryptographic functions — this is meant to remove the potential for users to define fundamental cryptographic operations incorrectly.



Easy to Understand Analysis Output:
When a contradiction is found for a query, the result is presented in a readable format that ties the attack to a real-world scenario. This is done by using terminology to indicate how the attack could have been possible, such as through a man-in-the-middle on ephemeral keys.



Friendly and Integrated Software:
Verifpal comes with a Visual Studio Code extension that offers syntax highlighting and, soon, live query verification within Visual Studio Code, allowing developers to obtain insights on their model as they are writing it.

Back

Mandos

Home

Speaker: Teddy Hogeborn
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 16:20 - 16:35
Event link: View original entry

Disk encryption is essential for physical computer security, but is seldom used due to the trouble of remembering and typing a password at every restart. We describe Mandos, a program which solves this problem, its security model, the underlying concepts of its design, and some of its evolution over the 10 years since its initial release.

Any security system must have a clear view of its intended threat model – i.e. what threats it is actually intended to protect against; the specific choices and tradeoffs made for Mandos will be explained. Another danger in security system design is the risk of non-use; i.e. that the system will not be used because of some real or perceived drawback, such as complexity. The deliberate design choices of Mandos, involving low-interaction, “invisible” and automatic features, will be covered. If possible, the many necessary changes made since the last FOSDEM talk in 2015 will also be described.

Back

RedWax - trust only yourself

Home

Speaker: Dirk-Willem van Gulik
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 16:40 - 16:55
Event link: View original entry

In this talk we will show you, practically and hands-on, how you can secure your application, a small user community or an environment using industry best-of-breed security, fully self-supporting and without having to rely on a central certificate authority, big tech or other central powerhouses. As devolved & federated, or as central, as you want - you set the rules. Working for you, with your standard-issue iPhone, your router or your laptop, out of the box.



Project Redwax delivers industry best-practice crypto in a small package. Available today. If you know how to install the Apache webserver - you are almost there.



Project Redwax lets you download a set of simple, easy-to-deploy tools that capture and hard-code a lot of industry best practice and specialist PKI knowledge, so that they can be put into the hands of a much wider community than the few specialist industries currently served. It provides a non-centralised, interoperable, open-standard, open-source, fully federated trust network where participants are not required to ask permission and can be self-sufficient.



This presentation covers what is available today and our plans for taking this further: engaging with the wider open source community so that together we can support individuals, organisations and (small) companies in getting best-of-breed, distributed, peer-to-peer security, confidentiality and privacy without having to rely on central infrastructures.

Wouldn’t it be nice to be able to trust your own environment without having to trust a corporation or government? Wouldn’t it be nice to take the sting out of certificate management?



With some hands-on examples we introduce the audience to the advantages of running your own certificate authority for securing IoT in and around your home and establishing a trusted channel for exchanging information with your friends.



This project (and code) helps you to decentralize trust management so that the values of security, confidentiality and privacy can be upheld in public infrastructure and private interactions. We strengthen existing technologies and infrastructure by providing a modular, very simple and, foremost, practical set of tools to manage public-key-based trust infrastructures as currently used.



With support from NLnet, project Redwax has made the code available under the Apache license from an infrastructure that is firmly rooted in Europe. And the good news - it is simple: if you know how to install the Apache webserver, you are almost there. We are working with the unix/linux community to get the code into standard distros and cloud-init scripts so that these capabilities are made available to the wider community of application developers.



This presentation covers what is available today and our plans for taking this further: engaging with the wider open source community so that together we can support individuals, organisations and (small) companies in getting best-of-breed, distributed, peer-to-peer security, confidentiality and privacy without having to rely on central infrastructures, rare knowledge or big interests.



Speaker: Dirk-Willem van Gulik (Dirkx@apache.org)



BIO
Dirk-Willem van Gulik



During the startup years, Dirk-Willem van Gulik helped shape the world-wide web. He was one of the founders, and the first president, of the Apache Software Foundation, and worked on standards such as HTTP at the Internet Engineering Task Force. He has worked for the Joint Research Centre of the European Commission, the United Nations, telecommunications firms, the BBC and several satellite & space agencies, and founded several startups. He participated in different international standards bodies, such as the IETF and W3C, on metadata, GIS, PKI, security, architecture and Internet standards. Dirk built the initial engineering team at Covalent, the first open source company, and was one of the founders of Asemantics, a leader in Enterprise Information Integration which helped make the Semantic Web a reality. He then initiated Joost.com, a peer-to-peer based video platform, and built and led the team that created the world's first instant-play P2P viewer and a back-office system with user-profile-driven advert targeting and payment settlements. He was the Chief Technical Architect at the BBC, where he helped shape the audience-facing delivery platform Forge in time for the Olympics, and where he made information security and compliance a core enabler for business processes. He currently works on several medical and privacy-intensive security projects with a heavy emphasis on architecture and governance. When not at work, he loves to sail, hang out at makerspaceleiden.nl or play with his Lego.

Back

KDE Itinerary

Home

Speaker: Volker Krause
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 17:00 - 17:15
Event link: View original entry

Getting your itinerary presented in a unified, well structured and always up to date fashion, rather than as advertisement-overloaded HTML emails or via countless vendor apps, has become a standard feature of digital assistants such as the Google platform. While very useful and convenient, it comes at a heavy privacy cost. Besides sensitive information such as passport or credit card numbers, the correlation of travel data from a large pool of users exposes a lot about people's work, interests and relationships. Just not using such services is one way to escape this; another is to build a privacy-respecting alternative ourselves!

Standing on the shoulders of KDE, Wikidata, Navitia, OpenStreetMap and a few other FOSS communities, we have been exploring over the past two years what it would take to build a free and privacy-respecting travel assistant, resulting in a number of building blocks and the "KDE Itinerary" application. In this talk we will look at what has been built, how, and what can be done with it now. In particular we will review the different types of data digital travel assistants rely on, where we can get them from, and at what impact to your privacy.



The most obvious data source is your personal booking information. Extracting data from reservation documents is possible for a number of different input formats, such as emails, PDF files or Apple Wallet passes, considering structured annotations and barcodes, but also by using vendor-specific extractors for unstructured data. All of this is done locally on your own devices, without any online access.



Reservation data is then augmented from open data sources such as Wikidata and OpenStreetMap to fill in often missing but crucial information, such as timezones or the geographic coordinates of departure and arrival locations. And finally we need real-time traffic data as well, such as that provided by Navitia as open data for ground-based transport.



We will of course also look at how features based on that data can be integrated into applications. While there is the KDE Itinerary mobile application, presenting this information in a timeline view with access to the corresponding boarding passes or ticket tokens, the underlying building blocks are explicitly meant to be reusable for wider integration. This might be of particular interest for people working on e.g. email applications or digital assistants.



Should the author fail to show up to this presentation it might be that last year's fixes for the barcode scanners at the Brussels airport station didn't work after all ;-)

Back

Gate project

Home

Speaker Timo Savola
Room H.2215 (Ferrer)
Track Lightning Talks
Time 17:20 - 17:35
Event link: View original entry

This presentation is an introduction to an open source project I have been working on for five years.



Building on WebAssembly, Gate makes it possible to snapshot running programs and resume them in diverse environments: unlike with other snapshot-and-restore solutions, the snapshots are portable across CPU architectures and operating systems. Part of the solution is appropriate resource abstraction. The presentation includes a quick demonstration of migration of a program between x86-64 and ARM64 machines.

A subproject of Gate is a specialized WebAssembly compiler implemented in Go.



Gate is at the stage where a proof-of-concept has been implemented; it works. Its future would greatly benefit from wider community engagement, so I wish to present the project and its ideas to a technical user and developer audience.

Back

The pool next to the ocean: How to bring OpenSource skills to more people

Home

Speaker Johannes Tigges
Room H.2215 (Ferrer)
Track Lightning Talks
Time 17:40 - 17:55
Event link: View original entry

The pool next to the ocean: How to bring OpenSource skills to more people



OpenSource powers the world and is everywhere with more and more enterprises and large companies understanding the value of it and the need to be able to be a good OpenSource citizen.
However, not everyone in those enterprises has the skills to participate in OpenSource communities, feels ready to contribute something, or knows how to create and run a vibrant OpenSource community. I have observed that there are two distinct groups of people: those with an OSS background and the ability and willingness to operate in that domain, and those that will likely only use OSS without ever contributing or participating.
Let's change that and build a bridge between those two groups while generating value for the enterprise making it more likely to receive support for this activity.
InnerSource, the application of OpenSource principles and practices within the enterprise, can be this bridge.
You'll learn about creating opportunities for people who haven't been exposed to OpenSource collaboration to learn about the OpenSource ways of collaboration in a safe environment within their organization by creating shared projects internally that follow OpenSource practices and principles.
You'll also learn how organizations can profit from cross-team/silo collaboration and knowledge exchange, and from the acquisition of very valuable skills by their employees, which can facilitate the successful transition of those internal projects into OpenSource and the creation of vibrant communities around them.
This approach is successfully used by many enterprises, and I'm part of a community that has built, and is building, open-sourced training material for this.
Attend this talk if you want to learn how to deal with silo issues within your company, how to facilitate your company's way of transitioning projects to OpenSource, or how to build up the skills to successfully interact with OpenSource projects. Also attend if you want to hear a bit about freely available training material explaining InnerSource concepts for people who haven't been involved in it yet.

Back

Tracking local storage configuration on linux

Home

Speaker Alasdair Kergon
Room H.2215 (Ferrer)
Track Lightning Talks
Time 18:00 - 18:15
Event link: View original entry

This short talk will discuss a few of the difficulties of tracking and monitoring storage devices on Linux systems today and discuss some projects that are trying to improve the situation.

Back

Concept Programming, from ideas to code

Home

Speaker Christophe de Dinechin
Room H.2215 (Ferrer)
Track Lightning Talks
Time 18:20 - 18:35
Event link: View original entry

Programming is the art of turning ideas into code.
Ideas and code do not live in the same space. Any translation is lossy.
Concept programming is a cross-language approach that focuses on this translation process, and helps identify often overlooked classes of issues.

Ideas and code do not live in the same space. Consequently, any translation is lossy. But this loss is not immediately visible. For example, how does your programming language coerce you into coding a concept as simple as "maximum" or "list" in a way that is generally full of warts?
Concept programming is a cross-language approach that focuses on this translation process, and helps identify often overlooked classes of issues. It separates the "concept space" and the "code space", and focuses on how the mechanics in one space translate (or not) into the other.



It introduces a few core ideas:




Back

DeskConnD: Secure, cross-platform IPC on the network

Home

Speaker Omer Akram
Room H.2215 (Ferrer)
Track Lightning Talks
Time 18:40 - 18:55
Event link: View original entry

DeskConnD is a cross-platform, python based daemon that uses Crossbar and WAMP to enable end-to-end encrypted IPC over the network.

In this talk Omer Akram will present his new project, which he has been working on for the past year, mainly to make it easy for developers to add functionality to their apps that was previously cumbersome to implement. Having no platform-specific dependencies enables the project to be cross-platform.



Combining proven technologies like WebSocket/WAMP and ZeroConf, DeskConnD allows different components of a distributed app on the local network to communicate securely and do messaging based on the RPC and PubSub paradigms.

Back

State of OpenJDK

Home

Speaker Mark Reinhold
Room H.1302 (Depage)
Track Free Java
Time 10:30 - 11:15
Event link: View original entry

A review of the past year in the life of the OpenJDK Community, and a look at what's ahead.

Back

Project Loom: Advanced concurrency for fun and profit

Home

Speaker Andrew Haley
Room H.1302 (Depage)
Track Free Java
Time 11:20 - 11:45
Event link: View original entry

Project Loom, an OpenJDK project, is "intended to explore, incubate and deliver Java VM features and APIs built on top of them for the purpose of supporting easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform." These features include Lightweight Threads, delimited continuations, and tail-call elimination.



The speaker, a Project Loom team member, will describe the project in depth, in particular the gnarly details of how the coroutine and continuation scheduling mechanism works, and a new feature, Scoped Locals.
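To see the scale problem Loom targets, consider what today's APIs force on us. This runnable sketch (the class name, pool size and task count are illustrative, not from the talk) funnels many small tasks through a bounded platform-thread pool, the workaround that Loom's lightweight threads aim to make unnecessary:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerTaskDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger done = new AtomicInteger();
        // Today: a small bounded pool, because each platform thread carries
        // a large native stack and is costly to create.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10_000; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // With Loom's lightweight threads, the natural thread-per-task style
        // (spawning one thread per submission) becomes cheap enough to use
        // directly, without pooling as a workaround.
        System.out.println(done.get()); // 10000
    }
}
```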

Back

TornadoVM: A Virtual Machine for Exploiting ​High-Performance Heterogeneous ​Execution of Java Programs​

Home

Speaker Thanos Stratikopoulos
Room H.1302 (Depage)
Track Free Java
Time 11:50 - 12:15
Event link: View original entry

The proliferation of heterogeneous hardware in recent years means that almost every system we program includes a mix of computing elements, each with different hardware characteristics, enabling programmers to improve performance while reducing energy consumption. These new heterogeneous devices include multi-core CPUs, GPUs and FPGAs. This trend has been accompanied by changes in software development norms that do not necessarily favor programmers. A prime example is the two most popular heterogeneous programming languages, CUDA and OpenCL, which expose several low-level features through their APIs, making them difficult for non-expert users to use.



Instead of using low-level programming languages, developers in industry and academia tend to use higher-level, object-oriented programming languages, typically executed on managed runtime environments, such as Java, R, and JavaScript. Although many programmers might expect that such programming languages would have already been adapted for transparent execution on heterogeneous hardware, the reality is that their support is either very limited or absent.



In this talk, we present TornadoVM (https://github.com/beehive-lab/TornadoVM), a heterogeneous programming framework for Java programs. TornadoVM co-exists with standard JVMs (e.g., OpenJDK) that implement the JVMCI. TornadoVM consists of three components: 1) a simple API for composing pipelines of existing Java methods, 2) an optimizing JIT compiler that extends the Graal compiler with hardware-aware optimizations that generate OpenCL C code, and 3) a runtime system that executes TornadoVM specific bytecodes, performs memory management, and schedules the code for execution on GPUs, multicore CPUs, and FPGAs. Essentially, TornadoVM is a “VM-in-a-VM” that can adapt execution completely dynamically and transparently to the user, always finding the highest-performing combination of hardware accelerators through dynamic reconfiguration.
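As a rough illustration of the programming model described above, here is a minimal vector-addition sketch. The `@Parallel` annotation and `TaskSchedule` pipeline shown in the comments follow TornadoVM's public examples, but treat the exact signatures as indicative rather than authoritative; the executable part below is plain Java so it runs on any JVM:

```java
public class VectorAdd {
    // Under TornadoVM the loop would carry an @Parallel annotation so the
    // JIT compiler can map iterations onto OpenCL work-items, e.g.:
    //   static void add(float[] a, float[] b, float[] c) {
    //       for (@Parallel int i = 0; i < c.length; i++) c[i] = a[i] + b[i];
    //   }
    static void add(float[] a, float[] b, float[] c) {
        for (int i = 0; i < c.length; i++) c[i] = a[i] + b[i];
    }

    public static void main(String[] args) {
        float[] a = {1, 2, 3}, b = {4, 5, 6}, c = new float[3];
        // With TornadoVM on the classpath, the same method is composed into
        // a pipeline and offloaded to a GPU/FPGA/multicore device, roughly:
        //   new TaskSchedule("s0").task("t0", VectorAdd::add, a, b, c)
        //                         .streamOut(c).execute();
        add(a, b, c); // plain-JVM execution so this sketch runs anywhere
        System.out.println(c[0] + "," + c[1] + "," + c[2]);
    }
}
```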

Back

ByteBuffers are dead, long live ByteBuffers!

Home

Speaker Maurizio Cimadamore
Room H.1302 (Depage)
Track Free Java
Time 12:20 - 13:00
Event link: View original entry

Abstract: Direct buffers are, to date, the only way to access foreign,
off-heap memory. Despite their success, direct buffers suffer from some
limitations --- stateful-ness, lack of addressing space,
non-deterministic deallocation, to name a few --- which makes them a
less-than-ideal choice under certain workloads. In this talk we paint
the path to the future: a safe, supported and efficient foreign memory
access API for Java. By providing a more targeted solution to the
problem of accessing foreign memory, developers will not only be freed
from the above limitations, but will also enjoy improved
performance, as the new API is designed from the ground up with JIT
optimizations in mind --- all without sacrificing memory access safety.
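The stateful-ness the abstract mentions is easy to demonstrate with today's API. A small runnable sketch; the comment about the replacement refers to the `jdk.incubator.foreign` package incubating at the time (JDK 14), whose details may change:

```java
import java.nio.ByteBuffer;

public class BufferLimits {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        buf.putInt(42);   // stateful: the write advances the buffer's position
        buf.flip();       // required bookkeeping before reading back
        System.out.println(buf.getInt());
        // Other limits: addressing is int-based, so one buffer tops out at
        // ~2 GB, and deallocation is non-deterministic (tied to GC of the
        // buffer object). The proposed foreign-memory access API offers
        // long-based addressing and explicit, deterministic closing, e.g.
        //   try (MemorySegment s = MemorySegment.allocateNative(16)) { ... }
    }
}
```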

Back

Free at Last! The Tale of Jakarta EE

Home

Speaker Mike Milinkovich
Room H.1302 (Depage)
Track Free Java
Time 13:05 - 13:30
Event link: View original entry

In September 2017 Oracle announced that it would be migrating governance of the Java EE platform to the Eclipse Foundation, the home of MicroProfile. Two years later Jakarta EE 8 shipped, signaling the successful completion of that move. As a result, Free Java has a new home for a significant piece of the Java ecosystem. A home which is purely open source, vendor neutral, and community led.



This talk will be about how the long and painful journey from Java EE to Jakarta EE unfolded. But more importantly it will focus on how the new Jakarta EE community works, and how there is a new, open, specification process for Java APIs (other than SE) that is available for the community. We are looking forward to welcoming many of those interested in Free Java to participate in driving new innovation in Java APIs for cloud and other exciting use cases.

Back

Shenandoah 2.0

Home

Speaker Roman Kennke
Room H.1302 (Depage)
Track Free Java
Time 13:35 - 14:00
Event link: View original entry

Shenandoah GC landed in JDK 12 about a year ago, giving OpenJDK
another low-pause garbage collector. It has undergone substantial
changes since then. Specifically, we have a new barrier scheme, and have
eliminated the extra forwarding-pointer word per object, thus
substantially reducing memory footprint. After giving a general
introduction to the OpenJDK GC landscape and Shenandoah GC, this talk
focuses on those recent changes in Shenandoah and what's in them for you.
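For readers who want to try the collector themselves, a minimal allocation-churn sketch; the flags in the comments are the ones documented for JDK 12-era builds that include Shenandoah (the workload itself is illustrative):

```java
public class GcDemo {
    public static void main(String[] args) {
        // Run with Shenandoah and GC logging on a JDK 12+ build, e.g.:
        //   java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
        //        -Xlog:gc GcDemo
        // The loop below churns short-lived allocations so the collector
        // has something to do.
        byte[][] junk = new byte[64][];
        for (int i = 0; i < 100_000; i++) {
            junk[i % 64] = new byte[1024]; // old slot becomes garbage
        }
        System.out.println("allocated");
    }
}
```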

Back

JMC & JFR - 2020 Vision

Home

Speaker Jie Kang
Room H.1302 (Depage)
Track Free Java
Time 14:05 - 14:45
Event link: View original entry

JDK Flight Recorder provides production time profiling and diagnostics
via a compact events-based infrastructure inside the Hotspot JVM
itself. JDK Mission Control is a stand-alone application that provides
real-time monitoring information for Hotspot JVMs, as well as tools to
read, automatically analyze and visualize flight recording data
produced by JDK Flight Recorder.



By the time this talk is presented, JMC 7.1.0 will (hopefully) have been
out for a little while. This presentation covers what is new and the
roadmap for the upcoming JMC 8. We will also discuss recent changes in
the project, such as the move to Skara. Towards the end we will demo
how JDK Flight Recorder and JMC core libraries can be used to diagnose
applications deployed in container orchestration platforms.

Back

Hacking on GraalVM: A (very) Rough Guide

Home

Speaker Andrew Dinn
Josh Matsuoka
Room H.1302 (Depage)
Track Free Java
Time 14:50 - 15:15
Event link: View original entry

The GraalVM project provides, among other options, a means to deliver
Java programs as compact, self-contained, fast-startup native images.
GraalVM has been moving from research to development for quite a few
years now. However, it is only just beginning to be properly integrated
with the latest OpenJDK releases and there is still much to be done to
get it fully productized and to improve usability and performance.



This talk will recount our joint experiences of trying to add new and/or
improved capabilities to the GraalVM code base. Our story will
stumble gracelessly from one pitfall to the next cock-up in the hope
that by exposing and explaining our own history of lamentable error and
occasional failure you will be able to avoid being doomed to repeat it.



We will provide a guide to getting started and building GraalVM; an
overview of how the compiler, native image generator and other
elements of the GraalVM toolkit operate, plus a map of what code sits
where in the source tree and how it fits together; and tips for
debugging the Graal compiler and native image generator -- all the tasks
you will need to perform in order to attain a vantage point from which
to change or add to the current functionality.

Back

Reducing OpenJDK Java Garbage Collection times with stack allocation

Home

Speaker Nikola Grcevski
Room H.1302 (Depage)
Track Free Java
Time 15:20 - 15:45
Event link: View original entry

In this talk we'll explore ways that the JVM can reduce the object allocation rate of Java programs automatically by performing stack allocation of objects that are known to be local to a method, or in compiler terms non-escaping. The discussion is focused on employing the escape analysis optimization in the OpenJDK Hotspot C2 compiler to determine which Java objects can be stack allocated, and how this optimization can reduce pressure on the Java JVM garbage collectors.
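A minimal sketch of the kind of allocation site this optimization targets; the `Point` class and method here are illustrative, not taken from the talk:

```java
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // 'p' never escapes distSq: it is not stored, returned or passed on.
    // C2's escape analysis can already scalar-replace simple cases like
    // this; full stack allocation generalizes the benefit to non-escaping
    // objects that aren't eligible for scalar replacement.
    static int distSq(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        // Without the optimization, a naive execution allocates one Point
        // per call, all of it garbage; escape analysis is controlled by
        // -XX:+DoEscapeAnalysis (on by default in HotSpot).
        long sum = 0;
        for (int i = 0; i < 1_000; i++) sum += distSq(i, i);
        System.out.println(sum);
    }
}
```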



We'll show some results on how various real-world applications can benefit from such optimizations and describe how we prototyped this in OpenJDK. Our work is only at the prototype stage at this moment, and we are looking for more data to understand how broadly applicable this optimization is. This work wouldn't be possible without free open source access to Java.

Back

G1: To infinity and beyond

Home

Speaker Stefan Johansson
Room H.1302 (Depage)
Track Free Java
Time 15:50 - 16:15
Event link: View original entry

Abstract: G1 has been around for quite some time now and since JDK 9 it
is the default garbage collector in OpenJDK. The community working on G1
is quite big and the contributions over the last few years have made a
significant impact on the overall performance. This talk will focus on
some of these features and how they have improved G1 in various ways. We
will also take a brief look at what features we have lined up for the
future.

Back

Just-in-time compiling Java in 2020

Home

Speaker Martin Doerr
Room H.1302 (Depage)
Track Free Java
Time 16:20 - 16:45
Event link: View original entry

Client compiler, server compiler, JVMCI, Graal ... What are we using today and how do they work together?
In this talk I'll give an overview of the Just-in-time compilers included in OpenJDK and explain how to play with them.
I'll also address Just-in-time compiler threads and resource-related issues.
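As a starting point for "playing with" the compilers, here is a small warm-up sketch; the VM switches in the comments are standard HotSpot/JVMCI flags, and the workload itself is illustrative:

```java
public class JitDemo {
    static long work(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        // Calling the method repeatedly lets tiered compilation kick in:
        // C1 (client compiler) first, then C2 (server compiler) once hot.
        // Useful switches to observe and steer this:
        //   -XX:+PrintCompilation       log each JIT compilation as it happens
        //   -XX:TieredStopAtLevel=1     stop at C1, never promote to C2
        //   -XX:+UnlockExperimentalVMOptions -XX:+UseJVMCICompiler
        //                               use a JVMCI compiler (Graal) as C2
        long s = 0;
        for (int i = 0; i < 20_000; i++) s = work(1_000);
        System.out.println(s);
    }
}
```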

Back

Helpful NullPointerExceptions - The little thing that became a JEP

Home

Speaker Christoph Langer
Room H.1302 (Depage)
Track Free Java
Time 16:50 - 17:15
Event link: View original entry

One of the most prevalent - if not the most prevalent - exception types in Java is the NullPointerException. While Java set out to overcome
the mistakes one can make when programming in languages like C/C++ by not exposing pointers in the Java language, the misleading term 'pointer' sneaked
into this exception. Until now, NullPointerExceptions thrown by the runtime system didn't contain messages. All you had was a callstack and a line number.
But in typical expressions and statements there are several dereferences in one line at which an NPE can occur.



We - some engineers in the SAP team - thought this could be helped by a little enhancement. The new NPE message gives precise information about the location
and tries to explain what was going on when a null reference was encountered. However, due to its prominent nature, it eventually became a JEP.



In my talk I'll demonstrate the improvements that come with this enhancement. I will lift the hood a little and provide a glance at its implementation details.
And finally I'll say some words about the current status and share some ideas for further improvements in the area of exception messages.
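A minimal sketch of the multi-dereference situation the abstract describes; the quoted message shape follows JEP 358 ("Helpful NullPointerExceptions", enabled in JDK 14 with `-XX:+ShowCodeDetailsInExceptionMessages`) and is indicative rather than verbatim:

```java
public class NpeDemo {
    static class A { B b; }
    static class B { C c; }
    static class C { int x; }

    public static void main(String[] args) {
        A a = new A();
        a.b = new B();        // a.b is set, but a.b.c stays null
        try {
            int v = a.b.c.x;  // three dereferences on a single line:
                              // which one failed? The line number alone
                              // cannot tell you.
        } catch (NullPointerException e) {
            // With the enhancement enabled, the message pinpoints the
            // failing step, roughly:
            //   Cannot read field "x" because "a.b.c" is null
            System.out.println("NPE caught");
        }
    }
}
```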

Back

Taming Metaspace: a look at the machinery, and a proposal for a better one

Home

Speaker Thomas Stüfe
Room H.1302 (Depage)
Track Free Java
Time 17:20 - 17:45
Event link: View original entry

When examining memory footprint of a JVM process, the delta between Java
heap usage and actual working set size can be surprisingly large. The JVM
uses off-heap memory for a lot of things: thread stacks, compiler arenas,
code heap, byte buffers, GC control... however, one of the largest
consumers of off-heap memory can be class metadata. Class metadata are
stored in Metaspace, which includes the Compressed Class Space.



The talk will explore what Metaspace actually is and what is stored there;
describe the architecture of the Metaspace allocator and the Compressed
Class Space; how it interacts with the GC; how it is sized. We will
highlight waste areas and demonstrate how to use jcmd to examine Metaspace
internals.
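A tiny runnable sketch to pair with the jcmd inspection mentioned above; `VM.metaspace` is the diagnostic command shipped with recent JDKs (10+), shown here as a comment, and the class loaded is arbitrary:

```java
public class MetaspaceDemo {
    public static void main(String[] args) throws Exception {
        // Metaspace grows with the number and size of loaded classes, not
        // with the Java heap. On a live JVM you can examine it with:
        //   jcmd <pid> VM.metaspace
        // and cap it with -XX:MaxMetaspaceSize=<size>.
        // Loading a class is what actually consumes Metaspace:
        Class<?> c = Class.forName("java.util.ArrayList");
        System.out.println(c.getSimpleName());
    }
}
```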



The current implementation of the Metaspace allocator suffers from a number
of shortcomings. They can manifest in excessive waste and a certain
"clinginess" - an unwillingness to let go of unused memory.



At SAP, we developed an improved version which is more frugal with memory
and provides a much improved elasticity. So the second part of this talk
will concentrate on our new implementation. We will highlight the
differences to the old architecture, demonstrate advantages and examine how
it works.

Back

The OpenJDK JVM : Securing a moving target or What could possibly go wrong?

Home

Speaker Andrew Dinn
Room H.1302 (Depage)
Track Free Java
Time 17:50 - 18:15
Event link: View original entry

The OpenJDK Java Virtual Machine presents some interesting challenges
when it comes to guarding against potential vulnerabilities. This talk
will explain how dynamic class-loading, JIT compilation, speculative
compilation and other aspects of the JVM's operation create a moving
attack surface that presents some very different challenges to those
found in other programs or runtimes.



This talk won't say anything about specific vulnerabilities but it will
identify a few areas of the OpenJDK JVM where some of these unique types
of vulnerability have been identified and resolved. It may teach you
some things you didn't know about the complexity of the JVM and
hopefully reassure you that the OpenJDK devs are very aware of what
could possibly go wrong. Whether we have got it all right is left as a
follow-up exercise for attendees.

Back

JRuby Startup and AOT

Home

Speaker Charles Nutter
Thomas Enebo
Room H.1302 (Depage)
Track Free Java
Time 18:20 - 19:00
Event link: View original entry

Rubyists work from a command line, which makes JRuby startup time a critical concern. Traditionally, the JVM has not been optimized for startup, but that's changing. This talk will explore all available options for making a heavy runtime like JRuby start up quickly, from using class data caching services like Hotspot's CDS and OpenJ9's Quickstart to ahead-of-time compilation of JRuby using GraalVM's Native Image. We'll compare approaches and trade-offs.

Back

Fundamental Technologies We Need to Work on for Cloud-Native Networking

Home

Speaker Magnus Karlsson
Room H.1308 (Rolin)
Track Software Defined Networking
Time 10:30 - 11:10
Event link: View original entry

Many people and companies are betting that cloud-native networking
will be the preferred way of implementing network functions in an easy
and scalable way. It is based around the tenets of modularity, high
availability, scalability, low-overhead networking, and ease of
deployment. A number of companies, such as Google, have shown that it
is really possible to achieve these properties with it. But the
architectural basis of cloud-native is quite different from that of
virtualization-based NFV; nevertheless, in many cases we
continue to use the software packages that were designed for that
drastically different architecture. The question is: how well does
the current set of open source projects used in NFV work in a
cloud-native environment, and what needs to change in them in order to
realize the cloud-native vision?



In this presentation, I will define what I mean by cloud-native
networking and from that derive the system requirements needed to realize
that vision. Based on these requirements, we can deduce a number of
basic architectural properties, features and services that are needed
in the system to be able to satisfy these requirements. Then I will go
through the most popular open source projects such as Linux, DPDK and
OVS and see how they satisfy these architectural properties and
features. The main contribution of this presentation will be to show
what we need to work on within these SW packages in order to realize
cloud-native networking. Or maybe we need completely new SW projects
to be able to achieve this.

Back

Skydive

Home

Speaker Sylvain Baubeau
Sylvain Afchain
Room H.1308 (Rolin)
Track Software Defined Networking
Time 11:10 - 11:30
Event link: View original entry

Skydive is an open source real-time network topology and protocols analyzer providing a comprehensive way of understanding what is happening in your network infrastructure.

Skydive is a toolbox to monitor, visualize and troubleshoot an infrastructure.



It first collects all the information about the physical and logical infrastructure: network interfaces, Linux and Open vSwitch bridges, network namespaces, Docker/runc containers, Kubernetes objects, virtual machines and more. All these objects are stored in a graph, allowing the operator to visualize and query the whole topology. On top of this, Skydive is able to inject, capture and analyze traffic at any point of this infrastructure, using various techniques such as AF_PACKET, eBPF, DPDK or sFlow samples. This makes possible use cases like:



• troubleshoot dropped packets

• get metrics and bandwidth statistics for specific traffic

• trigger alerts on detection of abnormal traffic

• get latency along the whole path of a flow

• and a lot more...


Skydive is agnostic to any SDN, container orchestration engine or virtualization platform. That being said, Skydive has plugins for specific technologies such as Kubernetes/Istio, OpenStack, Open vSwitch, Network Service Mesh, OpenContrail, VPP and more.



This talk will present the architecture of Skydive and demonstrate some use cases through a demo.

Back

Do you really see what’s happening on your NFV infrastructure?

Home

Speaker Emma Foley
Krzysztof Kepka
Room H.1308 (Rolin)
Track Software Defined Networking
Time 11:30 - 12:10
Event link: View original entry

As CoSPs (communications service providers) accelerate their adoption of SDN and NFV technologies, the increased need for metrics, performance measurement and benchmarking becomes a focus, to ensure the continued delivery of “best in class” services. As NFV environments have grown in size and complexity, the tools required to gain this greater visibility into the NFVi need to continue to evolve to meet the requirements for manageability, serviceability and resiliency.



Using Collectd as a metrics collection tool, OPNFV Barometer monitors the performance of the NFVi resources and has the capability to expose these insights via open industry standard interfaces to analytics or MANO components for potential enforcement or corrective actions. Barometer works with related open source technologies and communities (collectd, DPDK, OpenStack, Prometheus, SAF, etc.) to provide numerous metrics and events that address various different use cases such as service healing, power optimization and ensuring application QoS.

Back

Endless Network Programming − An Update from eBPF Land

Home

Speaker Quentin Monnet
Room H.1308 (Rolin)
Track Software Defined Networking
Time 12:10 - 12:30
Event link: View original entry

The Linux kernel networking capabilities have been undergoing major changes over the last years. At the heart of the performance gain, eBPF (extended Berkeley Packet Filter) and XDP (eXpress Data Path) have brought new possibilities in terms of tracing and network packet processing. eBPF is a trendy topic in the Linux world, and today it needs little introduction among the SDN and NFV community. But the technology is still under heavy development, bringing new features, more flexibility, and better performance to the users. This presentation is an update on the latest evolutions in the eBPF world!



Many of those changes occur directly inside the eBPF subsystem architecture. New program types are being added. Early constraints such as the maximal number of instructions for programs, or the unavailability of loops, are changing. The internals are improved with support for debug information (BTF) or 32-bit instructions. And many new mechanisms are implemented, such as global data support, the “BPF trampoline”, batched map operations, dynamic linking. Let's review all the latest trends in eBPF kernel development!



But beyond kernel code, eBPF has grown as a full ecosystem, with a variety of tools used to work with it, or to build upon it. Bpftool, a reference utility to manage eBPF programs, keeps evolving. The networking projects using eBPF keep growing in number (e.g. Katran, Suricata, Sysdig, Hubble, Libkefir) or in features (e.g. Cilium). Let's review (briefly) some of those projects that assert eBPF as one of the essential fast dataplane solutions in the Linux world.

Back

Replacing iptables with eBPF in Kubernetes with Cilium

Home

Speaker Michal Rostecki
Swami Vasudevan
Room H.1308 (Rolin)
Track Software Defined Networking
Time 12:30 - 12:50
Event link: View original entry

Michal Rostecki is a Software Engineer working at SUSE. He's working on Cilium, both upstream and on integrating it with openSUSE Kubic Project and SUSE CaaS Platform.



Swaminathan Vasudevan is a Software Engineer working at SUSE. He worked upstream on Neutron networking and has now moved to working on Cilium, the openSUSE Kubic Project and SUSE CaaS Platform.

Cilium is an open source project which provides networking, security and load balancing for application services that are deployed using Linux container technologies by using the native eBPF technology in the Linux kernel.
In this presentation we will talk about:
- The evolution of BPF filters and the advantages of eBPF filters and their use cases in Linux today, especially how Cilium networking utilizes eBPF filters to secure Kubernetes workloads with increased performance when compared to legacy iptables.
- How Cilium uses SOCKMAP for layer 7 policy enforcement
- How Cilium integrates with Istio and handles L7 Network Policies with Envoy Proxies.
- The new features since the last release, such as running a Kubernetes cluster without kube-proxy, providing cluster-wide NetworkPolicies, and providing a fully distributed networking and security observability platform for cloud-native workloads.

Back

Analyzing DPDK applications with eBPF

Home

Speaker Stephen Hemminger
Room H.1308 (Rolin)
Track Software Defined Networking
Time 12:50 - 13:10
Event link: View original entry

One of the challenges of writing software network applications is observing the inputs, the outputs, and what the application is doing with them. Linux provides a rich tool set with eBPF, but integrating this into a DPDK application is challenging. The DPDK libraries for capturing are incomplete, which leads to lots of time spent debugging the tools. This talk addresses these issues, recommends solutions and proposes enhancements to make developers' lives easier.

The DPDK provides a limited form of packet capture, but it only works on a single interface with no filtering and inaccurate timestamps.
I go over what packet capture does now, how it can be improved, and how it can be integrated with other tracing.



This talk is an extension of the talk (based on community feedback) given in 2019 at the DPDK userspace summit.

Back

XDP and page_pool API

Home

Speaker Ilias Apalodimas
Lorenzo Bianconi
Room H.1308 (Rolin)
Track Software Defined Networking
Time 13:10 - 13:30
Event link: View original entry

XDP support is an increasing trend in network devices. XDP's main goal is
processing packets at the lowest point in the software stack, avoiding
overheads. Memory recycling of received buffers, achieved through
the in-kernel page_pool API, plays a fundamental role in the increased performance.
Adding XDP support to a driver can be non-trivial. In this talk we'll demonstrate
how porting a standard ethernet driver (mvneta/netsec) to XDP and the
page_pool API can boost performance.
Part of the page_pool evolution involves adding recycling support
to the kernel's SKB stack and leveraging the increased performance
attributes of the API.

Back

Weave Net, an Open Source Container Network

Home

Speaker Bryan Boreham
Room H.1308 (Rolin)
Track Software Defined Networking
Time 13:30 - 14:10
Event link: View original entry

A tour of the internals of Weave Net, one of the most popular container networks:
design challenges and lessons learned from five years in the wild. Including
Kubernetes integration and how CNI was born.



Weave Net is written in Go, using many Linux kernel features such as veths, bridges and iptables.
Aimed at developers rather than network engineers, Weave Net tries to be self-configuring and
find the best available transport between nodes. The control plane operates via gossip,
with no central point of control.

Back

Rethinking kubernetes networking with SRv6 and Contiv-VPP

Home

Speaker Ahmed Abdelsalam
Miroslaw Walukiewicz
Filip Gschwandtner
Daniel Bernier
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time14:10 - 15:00
Event linkView original entry

Kubernetes (k8s) is currently the de-facto standard for container orchestration. However, k8s does not provide any solution for handling container networking. Instead, it offloads the networking to third-party certified plugins called CNI plugins. Contiv-VPP is a k8s CNI plugin that offers fast I/O by leveraging the carrier-grade capabilities of VPP and DPDK in the dataplane.



The adoption of containers and microservices calls for IPv6 to provide addressing and reachability for such a massive number of endpoints. SRv6 leverages the IPv6 dataplane to provide overlay networking, traffic engineering, load balancing, network policy and service chaining.



In this talk, we present an SRv6-based solution for k8s networking. We will show how SRv6 is used for pod-to-pod communication, k8s services and service function chaining (SFC), and how SRv6 solves several k8s networking challenges. We will also show the integration of our solution in Contiv-VPP. This solution is the result of combined effort between Bell Canada, Cisco and Pantheon.tech.

Back

Akraino Edge KNI blueprint

Home

Speaker Yolanda Robla Mota
Ricardo Noriega
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time15:00 - 15:40
Event linkView original entry

Blueprints in the Kubernetes-Native Infrastructure Blueprint Family leverage best practices and tools from the Kubernetes community to declaratively manage edge computing stacks at scale and with a consistent, uniform user experience from the infrastructure up to the services and from developer environments to production environments on bare metal or on public cloud.



All blueprints in this family share the following characteristics:






Come and see the leading edge!

Launched in 2018, Akraino Edge Stack aims to create an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications.



As part of the Akraino project, the Kubernetes-Native Infrastructure blueprint family represents the reference edge stack managed as a declarative platform, where controllers monitor a system for deviations between the user-declared target state and reality and take corrective
actions to reconcile reality with the declared target state.



KNI blueprints cover two different use cases:






With Kubernetes-Native Infrastructure, learn about the k8s way of managing infrastructure. By defining a declarative state, the edge administrator can manage thousands of sites following an innovative GitOps approach.



If you are interested in these exciting topics, don't miss the talk!

Back

Fast QUIC sockets for cloud networking

Home

Speaker Nathan Skrzypczak
Aloys Augustin
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time15:40 - 16:20
Event linkView original entry

QUIC was introduced by Google to move the transport protocol implementation out of the kernel, and is now being standardized in the IETF. It provides both encryption and multiplexing, and will be the default transport for HTTP/3. In this talk we'll present the work we've done investigating whether QUIC would benefit from vectorized packet processing, the impact it has on performance and how it can be consumed by external applications.



VPP (vector packet processing) is a fast network data plane, part of the Linux Foundation FD.io project, providing fast network functions on top of DPDK. It provides optimized TCP & UDP support, allowing significant performance improvements. In this presentation, we'll discuss:




Back

Mixing kool-aids! Accelerate the internet with AF_XDP & DPDK

Home

Speaker Ciara Loftus
Kevin Laatz
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time16:20 - 16:40
Event linkView original entry

With its recent advancements, AF_XDP is gaining popularity in the high-performance packet processing space. As a result, existing frameworks for packet processing, such as DPDK, are integrating AF_XDP support to provide more options for moving packets to user space applications. The challenge with such integration is that both AF_XDP and frameworks like DPDK have their own assumptions and constraints about, for example, how to align or manage packet buffers, making the integration less straightforward than it might appear at first glance.



This talk takes a look at the usability of AF_XDP before kernel v5.4, then dives into the recent challenges we encountered when integrating DPDK and AF_XDP, and how we made changes (on both sides) to allow the two to work together in a much more seamless manner.

Back

Dial your Networking Code up to 11

Home

Speaker Bruce Richardson
Harry van Haaren
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time16:40 - 17:20
Event linkView original entry

Modern CPUs provide a wide variety of Single-Instruction-Multiple-Data (SIMD) instructions, or vector instructions, for operating on larger blocks of data than with regular instructions. Though thought of by many programmers primarily as instructions for doing calculations in parallel on arrays of data, these vector instructions can actually be used in other ways to accelerate packet processing applications. This talk goes through a number of examples in open-source projects, such as DPDK and OVS, where vector instructions have been used to boost performance significantly, and explains the general techniques used that can be applied to other applications.

The talk focuses on the work done on DPDK and OVS to leverage the SSE and AVX instruction sets for packet acceleration. It shows how the different tasks to be performed in those applications can be mapped to SIMD instructions, and presents general guidelines on how to think about packet processing work from a vectorization viewpoint. It also discusses some considerations in application design so as to allow the app to run with best performance on a variety of platforms, each of which may have different instruction sets available.

Back

Userspace networking: beyond the kernel bypass with RDMA!

Home

Speaker Benoît Ganne
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time17:20 - 17:40
Event linkView original entry

While userspace networking has demonstrated great performance benefits, it does come with greater complexity than kernel networking.



In parallel, Remote Direct Memory Access (RDMA) was developed as an efficient way to move data in HPC and storage clusters with great success.



Key properties of this technology are also highly desirable for userspace networking: native integration with the operating system (OS), OS bypass and a very efficient software interface.
RDMA-capable network adapters are now enabling standard Ethernet networking functions through the RDMA interface, allowing userspace networking software such as VPP to achieve extreme performance while integrating transparently with the OS.



We'll present:




Back

Vita: high-speed traffic encryption on x86_64 with Snabb

Home

Speaker Max Rottenkolber
RoomH.1308 (Rolin)
TrackSoftware Defined Networking
Time17:40 - 18:00
Event linkView original entry

Vita is a high-performance IPsec VPN gateway designed with medium and large network operators in mind. It is written in a high-level language (Lua) using the Snabb networking toolkit and achieves high performance via networking in userspace, i.e. bypassing the kernel network stack.



This talk will discuss Vita and how it was developed using Snabb. Topics include: fast software networking using a dynamic, high-level language; cryptographic ciphers implemented in software, accelerated by x86 extensions; modern cryptography; limiting complexity; multi-core scaling; YANG-enabled control planes; minimalist NIC drivers.

Back

DNS Devroom Opening

Home

Speaker Shane Kerr
Pieter Lexis
Peter van Dijk
RoomH.1309 (Van Rijn)
TrackDNS
Time10:30 - 10:35
Event linkView original entry

Welcome to the DNS DevRoom

Back

DNS Management in OpenStack

Home

Speaker Graham Hayes
RoomH.1309 (Van Rijn)
TrackDNS
Time10:35 - 11:05
Event linkView original entry

All major clouds have integrated DNS management these days, and OpenStack is one of them. We will run through the OpenStack DNS (Designate) project - how it works, why we laid it out the way we did, how you can use it, and how other OpenStack components can leverage it.

We will run through the general architecture of the project, and show how we can remain a simple control layer over multiple DNS servers and service providers.



We will show how you can run Designate standalone, as a multi-tenant API for managing DNS inside your company, and how you can use the ability to have multiple pools of servers available for multiple purposes.



Finally we will show the myriad of both OpenStack and other Open Source software integrations for DNS management, and DNS-01 ACME validation.

Back

HashDNS and FQDNDHCP

Home

Speaker Renzo Davoli
RoomH.1309 (Van Rijn)
TrackDNS
Time11:10 - 11:40
Event linkView original entry

Would you like a DNS server for IPv6 where adding a new node is as simple as typing in its name?
If the answer is yes, try HashDNS.

IPv6 autoconfiguration methods can give addresses to interfaces but do not provide any means of configuring the DNS, so autoconfiguration is only suitable for clients. If a host has to act as a server, it must have a fully qualified domain name, and the DNS service has to map that name to its IP address.



In the Internet of Threads scenario, the number of network nodes can be orders of magnitude higher than before, as each process or thread can be a node. This idea of hash-based IPv6 addresses is a viable solution to the problem of managing DNS resolution in IoTh environments.



The host part of an IPv6 address can be computed as the result of a hash function applied to the fully qualified domain name.



In this way it is possible to write a DNS server able to resolve the address of any hostname in a sub-domain, given only the network prefix of that sub-domain.



The installation of a new node of the network (computer, namespace, IoTh process) is as simple as providing it with its IPv6 address (the one obtained by concatenating the network prefix and the host address computed by the hash function).
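The derivation described above can be sketched as follows (a minimal illustration: the choice of SHA-256 truncated to 64 bits is an assumption for this sketch, and HashDNS's actual hash construction may differ):

```python
import hashlib
import ipaddress

def hash_ipv6(prefix: str, fqdn: str) -> ipaddress.IPv6Address:
    """Derive an IPv6 address from a /64 network prefix and an FQDN.

    The host part (lower 64 bits) is a hash of the fully qualified
    domain name, so a DNS server holding only the prefix can recompute
    the address for any name in the sub-domain, with no per-host
    configuration.
    """
    net = ipaddress.IPv6Network(prefix)
    # Hash the lowercased FQDN and keep 64 bits for the host part.
    digest = hashlib.sha256(fqdn.lower().encode("ascii")).digest()
    host_bits = int.from_bytes(digest[:8], "big")
    return ipaddress.IPv6Address(int(net.network_address) | host_bits)

addr = hash_ipv6("2001:db8:1234::/64", "node1.example.org")
# The same name always maps to the same address, regardless of case.
assert addr == hash_ipv6("2001:db8:1234::/64", "NODE1.example.org")
```

Because the mapping is deterministic, renaming or adding a node changes nothing on the server side: the resolver simply recomputes the hash on each query.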



There is no need to change the configuration of the DNS.



Actually the installation of a new node (or its renaming) is even simpler than that. The host can use a DHCP service designed to get the right address from the DNS server given its fully qualified domain name.



So a system administrator has nothing more to do than assign the new node its name (they just have to baptize the new node).

Back

State of djbdnscurve6

Home

Speaker Erwin Hoffmann (feh)
RoomH.1309 (Van Rijn)
TrackDNS
Time11:45 - 12:05
Event linkView original entry

fehQlibs and djbdnscurve6 both provide a DNS library supporting IPv6 link-local unicast (LLU) addresses. The inclusion and use of IPv6 LLU addresses is discussed. While the typical Unix /etc/resolv.conf applies system-wide and its Microsoft Windows counterpart works per interface, here application-specific DNS settings can be used.

Overview:
1. Background and heritage on fehQlibs and djbdnscurve6
2. Application specific DNS resolver settings
3. Integration of IPv6 LLU addresses - benefits
4. Integration of IPv6 LLU addresses - recipe
5. Outlook and future challenges

Back

Testing DoH and DoT servers, compliance and performance

Home

Speaker Stéphane Bortzmeyer
RoomH.1309 (Van Rijn)
TrackDNS
Time12:10 - 12:30
Event linkView original entry

Of course, encrypting DNS is necessary for privacy and security, as
for every other Internet protocol. That's why DoT and DoH deployment
is very important, so that users can safely go to a resolver they
trust. Now it is time to assess the technical compliance and
performance of these trusted resolvers. We will talk about the things
that could and should be tested against DoT and DoH servers and how to
implement them. We will then discuss performance measurements, especially
the opportunities brought by parallelism (both in DoT and DoH)
and the challenges they create for measurements.
This talk will be inspired by the development of a tool which is, at
this stage, in a very alpha state.

Back

Improving BIND 9 Code Quality

Home

Speaker Ondřej Surý
RoomH.1309 (Van Rijn)
TrackDNS
Time12:35 - 13:05
Event linkView original entry

BIND 9 consists of a huge and old codebase. In this talk, I would like to show you all the tools that we use on a regular basis to improve, refactor and make the BIND 9 code safer. I'll show examples of the various Google/LLVM sanitizers, Cppcheck, LLVM scan-build and semantic patching using Coccinelle.

Back

unwind(8)

Home

Speaker Florian Obser
RoomH.1309 (Van Rijn)
TrackDNS
Time13:10 - 13:40
Event linkView original entry

DNS is easy. You type fosdem.org in your browser's address bar, hit enter and you will be greeted by your favorite open-source event's start page. Actually...

We will introduce unwind(8) - an always-running, validating DNS recursive nameserver, answering queries on localhost (127.0.0.1). We will explain its privilege-separated design and show that it is secure to run this daemon by default. We will then show how its novel approach of observing changes in network location and actively probing the quality of the local network improves the user experience of DNS resolution. The focus will be on laptops that move through many networks, some good, some bad, some outright hostile.



We will compare unwind(8) to prior solutions and show how its design enables it to run without user intervention.

Back

extending catalog zones

Home

Speaker Leo Vandewoestijne
RoomH.1309 (Van Rijn)
TrackDNS
Time13:45 - 14:00
Event linkView original entry

Another approach to automating maintenance.

Catalog zones are DNS zones containing dynamic configuration,
or better said "configuration data":

the domain zones to be loaded/unloaded in your DNS daemon.



This already works fine in BIND, albeit slowly when running millions of domains.
So I wish to add a section that would allow iterating over the relevant data.

Back

The Different Ways of Minimizing ANY

Home

Speaker Edward Lewis
RoomH.1309 (Van Rijn)
TrackDNS
Time14:05 - 14:25
Event linkView original entry

The DNS protocol has features that have grown to become liabilities. The query type "ANY" is one. Earlier this year, a published RFC described how a DNS server may respond to such queries while reducing the liability. But the document does not define a definitive means for a server to signal that it is deviating from the original protocol. This presentation measures the impact of having no definitive means specified and examines the "fear, uncertainty, and doubt" of lacking explicit signals.

The "minimal ANY responses" RFC (Providing Minimal-Sized Responses to DNS Queries That Have QTYPE=ANY, a.k.a. RFC 8482) results in about 1% of the TLD nameservers indicating they are minimizing ANY responses.
That's (only) about 250 cases.
What is troubling is that about 9 different responses have been observed to indicate the response is "minimized":
9 different ways in just 250 samples, "fuzzing" the protocol.
The moral of this tale is that "fuzzing" the protocol is worrisome. (Not that minimizing ANY is a bad thing.)

Back

Check Yourself Before You Wreck Yourself

Home

Speaker Nic Jansma
RoomH.1309 (Van Rijn)
TrackWeb Performance
Time15:00 - 15:35
Event linkView original entry

Boomerang is an open-source Real User Monitoring (RUM) JavaScript library used by thousands of websites to measure their visitor's experiences. The developers behind Boomerang take pride in building a reliable and performant third-party library that everyone can use without being concerned about its measurements affecting their site. We recently performed and shared an audit of Boomerang's performance, to help communicate its "cost of doing business", and in doing so we found several areas of code that we wanted to improve. We'll discuss how we performed the audit, some of the improvements we've made, how we're testing and validating our changes, and the real-time telemetry we capture for our library to ensure we're having as little of an impact as possible on the sites we're included on.

Boomerang is an open-source Real User Monitoring (RUM) JavaScript library used by thousands of websites to measure their visitor's experiences.



Boomerang runs on billions of page loads a day, either via the open-source library or as part of Akamai's mPulse RUM service. The developers behind Boomerang take pride in building a reliable and performant third-party library that everyone can use without being concerned about its measurements affecting their site.



Recently, we performed and shared an audit of Boomerang's performance, to help communicate the "cost of doing business" of including Boomerang on a page while it takes its measurements. In doing the audit, we found several areas of code that we wanted to improve and have been making continuous improvements ever since. We've taken ideas and contributions from the OSS community, and have built a Performance Lab that helps "lock in" our improvements by continuously measuring the metrics that are important to us.



We'll discuss how we performed the audit, some of the improvements we've made, how we're testing and validating our changes, and the real-time telemetry we capture on our library to ensure we're having as little of an impact as possible on the sites we're included on.

Back

Metrics and models for Web performance evaluation

Home

Speaker Dario Rossi
RoomH.1309 (Van Rijn)
TrackWeb Performance
Time15:40 - 16:15
Event linkView original entry

The World Wide Web is still among the most prominent Internet applications. While the Web landscape has been in perpetual movement since the very beginning,
these last few years have witnessed some noteworthy proposals such as SPDY, HTTP/2 and QUIC, which profoundly reshape the application-layer protocol family.
To measure the impact of such changes, going beyond the classic W3C notion of page load time, a number of Web performance metrics have been proposed (such as
SpeedIndex, Above-The-Fold and variants). At the same time, there is still limited understanding of how these metrics correlate with user
perception (e.g., user ratings, user-perceived page load time, etc.). In this talk, we discuss the state of the art in metrics and models for Web
performance evaluation, and their correlation with user experience through several real-world studies. Additional information, software and datasets are
available at https://webqoe.telecom-paristech.fr
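As a concrete example of a metric going beyond page load time, SpeedIndex (as popularized by WebPagetest) integrates visual incompleteness over time. A minimal sketch of that computation, assuming visual-completeness samples in percent at millisecond timestamps:

```python
def speed_index(samples):
    """Approximate SpeedIndex from (time_ms, visual_completeness_pct) samples.

    SpeedIndex is the integral over time of (1 - visual completeness),
    so pages that render most of their content early score lower (better)
    even if their total load time is identical.
    """
    si = 0.0
    for (t0, vc0), (t1, _vc1) in zip(samples, samples[1:]):
        # Rectangle rule: completeness stays at vc0 over [t0, t1).
        si += (t1 - t0) * (1.0 - vc0 / 100.0)
    return si

# Two pages finish at 4000 ms, but the first paints 80% at 500 ms
# while the second only reaches 80% at 3500 ms.
fast = speed_index([(0, 0), (500, 80), (4000, 100)])
slow = speed_index([(0, 0), (3500, 80), (4000, 100)])
assert fast < slow  # same load time, very different user experience
```

This is exactly the kind of metric the talk contrasts with user-perceived quality.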

Back

Hint, Hint, Font Loading Matters!

Home

Speaker Sia Karamalegos
RoomH.1309 (Van Rijn)
TrackWeb Performance
Time16:20 - 16:55
Event linkView original entry

We all love fonts. From Google Fonts to Typekit, Hoefler&Co and more, they give character and tone to our websites. The downside of fonts is that they can really slow down our loads. In this talk we'll learn about common pitfalls like critical request depth and how to use resource hints to play tricks with latency to load web applications faster. We'll walk through a network profile to understand what's going on in the browser and how to make it faster.

Back

The ultimate guide to HTTP resource prioritization

Home

Speaker Robin Marx
RoomH.1309 (Van Rijn)
TrackWeb Performance
Time17:00 - 17:35
Event linkView original entry

Come learn about how browsers try to guess in what order web page resources should be loaded and how servers use that information to often (accidentally) make your web page slower instead.
We look at what resource prioritization is, how it's often implemented terribly in modern HTTP/2 stacks and how we're trying to fix it in QUIC and HTTP/3.
We use clear visualizations and images to help explain the nuances in this complex topic and also muse a bit on whether prioritization actually has that large an impact on web performance.

HTTP/2 started the move from multiple parallel TCP connections to a single underlying pipe. QUIC and HTTP/3 continue that trend.
While this reduces the connection overhead and lets congestion controllers do their work, it also means we no longer send data in a truly parallel fashion.
As such, we need to be careful about how exactly we send our resource data, as some files are more important than others to achieve good web performance.



To help regulate this, HTTP/2 introduced a complex prioritization mechanism. Browsers use complex heuristics to try to estimate the importance of a resource and, with varying success, communicate their preferences to the servers.
It has however become clear that this scheme does not work well in practice. Between server implementation bugs, questionable browser choices, and bufferbloat in caches and network setups, HTTP/2 prioritization is sometimes more of a liability than a useful feature.



For this reason, this feature is being completely reworked in HTTP/3 over QUIC. However, that opens a whole new can of worms.
One of QUIC's main features for improving performance over TCP is that it removes "head-of-line blocking": if one resource suffers packet loss, others can still make progress.
That is... if there are other resources in progress! What performs well on lossy links turns out to be exactly what you want to prevent on high-speed connections.
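The trade-off can be illustrated with a toy scheduler (an illustrative sketch only, not any real HTTP/2 or HTTP/3 implementation): sending resources sequentially completes the first one much sooner, while fair interleaving delays every resource until near the very end.

```python
def completion_times(sizes, interleave):
    """Simulate sending resources over one pipe at 1 byte per tick.

    Returns the tick at which each resource finishes. With
    interleave=False, resources are sent strictly one after another
    (sequential prioritization); with interleave=True, one byte of each
    unfinished resource is sent per round (fair round-robin).
    """
    if not interleave:
        done, tick = [], 0
        for size in sizes:
            tick += size
            done.append(tick)
        return done
    remaining = list(sizes)
    done = [None] * len(sizes)
    tick = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                tick += 1
                remaining[i] -= 1
                if remaining[i] == 0:
                    done[i] = tick
    return done

# Two 100-byte resources: sequential delivers the first at tick 100,
# while round-robin delays both until the very end (ticks 199 and 200).
seq = completion_times([100, 100], interleave=False)
rr = completion_times([100, 100], interleave=True)
```

Round-robin spreads each resource's bytes across more packets, which is exactly what helps under random loss, and exactly what hurts time-to-first-complete-resource on a clean, fast link.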



Along the way, we also discuss existing options for web developers to impact the browser's heuristics and server behaviour (such as resource hints (e.g., preload) and the upcoming priority hints).



Finally, we ask how we got into this terrible state of things to begin with: if people made so many mistakes implementing HTTP/2 prioritization, why didn't anyone really notice until three years later?
Could it be that its impact on web performance is actually limited? Or have we just not seen its full potential yet?



We make this complex topic approachable with plenty of visualizations and animations.
The content is mainly based on our own research (and papers) and that of others in the web community, such as Patrick Meenan and Andy Davies.

Back

Shipping a performance API on Chromium

Home

Speaker Nicolás Peña Moreno
RoomH.1309 (Van Rijn)
TrackWeb Performance
Time17:40 - 18:15
Event linkView original entry

Adding new web performance APIs to the web is a complex process. In this talk, I'll go over the steps we went through to ship the Element Timing API in Chromium, which enables measuring rendering timing of image and text content. You'll learn about the process to ship an API exposing performance information to web developers. There were many steps involved in the process: engaging with developers and other browser vendors, brainstorming, privacy and security reviews, Origin Trials, posting an Intent, and addressing questions and ideas after the API has shipped.

Back

The journey of building OpenSpeedMonitor

Home

Speaker Stefan Burnicki
Nils Kuhn
RoomH.1309 (Van Rijn)
TrackWeb Performance
Time18:20 - 18:55
Event linkView original entry

Keeping track of and caring about web performance is hard with constantly changing standards, improving browsers, frameworks and devices.
It gets even harder when you develop a tool that has to meet these changing requirements.
Eight years ago, as an IT service provider, we were faced with the task of permanently monitoring the performance of one of the largest e-commerce platforms. After initially using WebPagetest, we quickly needed to develop our own features.
What started as minor extensions became a separate project over time.
In this talk, we would like to take you on the journey we have taken developing OpenSpeedMonitor. You will hear about some unexpected challenges, what we learned the hard way, and why we would have failed years ago if we hadn't decided to develop FLOSS.

Back

Designing and Producing Open Source Hardware with FOSS/OSHW tools

Home

Speaker Tsvetan Usunov
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time10:30 - 10:50
Event linkView original entry

We can set up a small electronic assembly/production "factory" at home for less than EUR 1000.
I will try to explain every step from the design to the final product:

We live in exciting times. It was never as easy to design and produce electronic devices as it is today.
This really unleashes people's creativity.
The Open Source Hardware movement helps people to study, modify, improve and share designs and knowledge.
Today we have FOSS CAD tools like KiCad to design our electronic boards.
There are multiple choices for manufacturing PCBs, even in small quantities.
There are a lot of places to source components at low cost.
We can set up a small electronic assembly/production "factory" at home for less than EUR 1000.
I will try to explain every step from the design to the final product:




  1. How to design your product with KiCad

  2. How to generate files for production

  3. Where to order your PCBs

  4. Where to source the components for the assembly

  5. How to set up a small "factory" at home on a budget

  6. How to certify your OSHW project at OSHWA.org




and I will demonstrate a do-it-yourself oven, a solder paste printer, and manual pick-and-place tools which can be used for production.

Back

LibrePCB Status Update

Home

Speaker Urban Bruhin
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time10:55 - 11:15
Event linkView original entry

An overview of what's new in LibrePCB since the last presentation at FOSDEM 2018, and a short live demonstration to see LibrePCB in action.

Back

Open-source design ecosystems around FreeCAD

Home

Speaker Yorik van Havre
Brad Collette
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time11:20 - 11:50
Event linkView original entry

A walk through the different ways in which people from different areas and backgrounds use the same application (FreeCAD), the impact this has on their workflows, and even on FreeCAD development.

The FreeCAD project gathers a community of developers and users coming from a very large array of specialties: Makers, mechanical engineers, civil engineers, electronics engineers, architects, opticians, graphic designers, etc. All these people using the same software is a unique opportunity to explore and build cross-discipline workflows, and have people coming from one field learn unusual ways from other fields. This constant interchange of paradigms also influences FreeCAD development itself, and connects it to other fields and applications too, to create larger ecosystems. In this talk, we will show some examples of how this happens in different areas.

Back

ngspice open source circuit simulator

Home

Speaker Holger Vogt
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time11:55 - 12:15
Event linkView original entry

An update on the development activities leading to ngspice-32 will be presented. Its interface to KiCad has been extended, PSPICE device model compatibility and OpAmp convergence have been improved, and several bugs have been fixed.



The VBIC bipolar model and the VDMOS power MOS model now incorporate the self heating effect.



This leads to the second part of the talk: ngspice may very well be used to simulate thermal device behavior. Heat generation, transport and temperatures are translated into electrical signals. Thus we simulate two circuits: the electrical circuit with its power losses, and the thermal circuit with any resulting device heating, its feedback on the electrical behavior, and the external cooling measures that need to be provided. Some circuit examples will be given.
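The electro-thermal translation mentioned above is commonly done by mapping dissipated power to a current source, temperature to a node voltage, and thermal resistance/capacitance to R and C. A sketch of a single thermal RC stage (an illustration of the analogy only, not ngspice itself; all component values here are made up):

```python
import math

def junction_temperature(t, power_w, r_th, c_th, t_ambient=25.0):
    """Temperature of a device modeled as one thermal RC stage.

    In the electro-thermal analogy, dissipated power (W) acts as a
    current source, thermal resistance (K/W) as R, and thermal
    capacitance (J/K) as C, so the temperature follows the classic RC
    step response toward t_ambient + power_w * r_th.
    """
    tau = r_th * c_th  # thermal time constant in seconds
    return t_ambient + power_w * r_th * (1.0 - math.exp(-t / tau))

# 2 W dissipated into Rth = 10 K/W, Cth = 0.5 J/K: after many time
# constants the junction settles near 25 + 2 * 10 = 45 degrees C.
t_final = junction_temperature(100.0, 2.0, 10.0, 0.5)
```

In a real ngspice run, this RC stage would be an actual circuit driven by the instantaneous power loss of the electrical circuit, closing the feedback loop the talk describes.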

Back

Towards CadQuery 2.0

Home

Speaker Adam Urbanczyk
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time12:20 - 12:40
Event linkView original entry

CadQuery (CQ) [1] is a Python library for building parametric 3D models. The overarching
design goal is to be extremely fluent and as close as possible to the design intent. CQ is based on
the open source CAD kernel from OpenCascade and therefore offers industry-standard B-Rep
modeling capabilities and allows exporting to lossless formats such as STEP as well as lossy ones
such as STL. Originally it used Python bindings based on FreeCAD [2], but recently we switched to
PythonOCC [3] to be more flexible and have full access to the underlying CAD kernel capabilities.
In the talk I will summarize the current status of the CQ project, show some interesting
usage examples and discuss newly implemented features. Furthermore, I will elaborate on the future
plans of the core development team and touch on some of the challenges of maintaining a project
such as CQ. I will also present a fairly new addition to the CQ ecosystem – CQ-editor [4]. It is a
Python/PyQt5-based lightweight cross-platform GUI editor that makes it possible to quickly develop and
preview CQ 3D models. It also offers graphical debugging and CQ stack introspection capabilities,
which dramatically lowers the entry barrier for trying out and using CQ.



References



[1] https://github.com/CadQuery/cadquery
[2] https://www.freecadweb.org
[3] https://github.com/tpaviot/pythonocc-core
[4] https://github.com/CadQuery/CQ-editor

Back

KiCad: Back to the Future

Home

Speaker Wayne Stambaugh
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time12:45 - 13:15
Event linkView original entry

I will talk about KiCad's role in the Open Hardware design movement and how it is remarkably similar to the early days of the Free, Libre, Open Source Software (FLOSS) movement and what it means for the future of Open Hardware.

Back

Pocket Science Lab from Development to Production

Home

Speaker Mario Behling
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time13:20 - 13:40
Event linkView original entry

In this talk we will cover the development path of the Pocket Science Lab (PSLab) board from version one in 2014 to today and outline how we use tools like KiCad to bring the device to large scale production. We will also share some major issues that we solved to get the device manufacturing ready and challenges that lie ahead of us like ensuring thorough device testing at production.

In this talk we will cover the development path of the Pocket Science Lab (PSLab) board from version one in 2014 to today and outline how we use tools like KiCad to bring the device to large-scale production. We will also share some major issues that we solved to get the device manufacturing-ready and challenges that lie ahead of us, like ensuring thorough device testing at production. The goal of Pocket Science Lab is to create an Open Source hardware device (open on all layers) and software applications that can be used for experiments. The tiny pocket lab provides an array of instruments for doing science and engineering experiments. It provides the functions of numerous measurement tools, including an oscilloscope, a waveform generator, a frequency counter, programmable voltage and current sources, and even a component to control robots with up to four servos.

Back

Designing functional objects with functional objects

Home

Speaker Marius Kintel
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time13:45 - 14:05
Event linkView original entry

Reflecting on OpenSCAD's 10 years of history and what we've learned and discovered along the way. Discussion on opportunities and potential avenues forward, and some stories from the trenches.

Back

Leveraging Open Source Designs

Home

Speaker Lasse Mönch
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time14:10 - 14:20
Event linkView original entry

Incorporating new components into PCBs is a difficult task that often requires reading multiple datasheets and creating prototypes to get it right. The funny thing is: every engineer needs to re-read reference designs! Even though there are tens of thousands of designs with new components documented and available on GitHub, it is almost impossible to find a relevant project. The solution? Instead of using GitHub search, which only retrieves files by filename, our approach creates a local database that takes the search results from GitHub and then parses the components used inside the PCB designs to index them. That way, you can easily search for a component and get the most relevant designs as a reference.
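The indexing step described above can be sketched as an inverted index from part numbers to projects (a minimal illustration: the one-line "comp <part>" file format here is made up for the sketch; real KiCad schematic parsing is considerably more involved):

```python
import re
from collections import defaultdict

# Hypothetical simplified schematic format: one "comp <part-number>" per line.
COMP_RE = re.compile(r"^comp\s+(\S+)", re.MULTILINE)

def build_index(projects):
    """Map each component part number to the set of projects using it.

    `projects` maps a project name to its (simplified) schematic text;
    the result is an inverted index that answers component searches
    with the relevant reference designs.
    """
    index = defaultdict(set)
    for name, schematic in projects.items():
        for part in COMP_RE.findall(schematic):
            index[part].add(name)
    return index

projects = {
    "board-a": "comp STM32F103\ncomp AMS1117",
    "board-b": "comp STM32F103\ncomp ESP32",
}
index = build_index(projects)
# Searching a part number now returns every design that uses it:
assert index["STM32F103"] == {"board-a", "board-b"}
```

Scaled up to thousands of repositories, the same structure lets a search for one part number return every indexed design using it, instead of a filename match.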



This talk gives an overview of the software that was created, discusses the difficulties that were overcome, and outlines the potential for improvement in future work.



We have currently indexed 8,000 KiCad projects on GitHub containing 37,000 electronic components as a proof of concept. The project was kindly supported by AISLER with a server instance for a prototype of the search engine, which can be accessed at https://search-dev.aisler.net

Back

Fritzing - the past, the present and the future

Home

Speaker Kjell Morgenstern
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time14:25 - 14:45
Event linkView original entry

Making electronics accessible to the broad public was made possible mainly by Arduino, Raspberry Pi and, last but not least, Fritzing. Back in 2009, it was a pain to get from loose wiring on a breadboard to a PCB. Fritzing was the first to come up with a unique breadboard view and a simple-to-use PCB layout. Fast forward 10 years to FOSDEM 2019: Fritzing was in a major crisis. Despite well over 200,000 users, thousands of downloads per day and an enthusiastic community, development had stalled. It has now been rebooted, and the project is regaining momentum. So what has happened between last year and this year?
This talk will give a rough introduction to Fritzing and its ecosystem, including how we overcame the problems, learned from our mistakes and how we plan to keep improving Fritzing in the future.

Back

Sparselizard: a general purpose multiphysics FEM library

Home

Speaker Alexandre Halbach
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time14:50 - 15:10
Event linkView original entry

This presentation describes sparselizard: a fast, general, robust and user-friendly finite element C++ library with high potential for low-maintenance integration into open-source simulation tools. A large range of validated examples demonstrates that the library can simulate heavily nonlinear multiphysics problems involving mechanical, fluid, electric, magnetic and electromagnetic physics, among others. Its robustness, speed and user-friendliness are also demonstrated.

Back

Open CASCADE Technology - an introduction and overview

Home

Speaker Alexander Malyshev
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time15:15 - 15:45
Event linkView original entry

Open CASCADE Technology (OCCT) is a framework for B-Rep modeling. The presentation highlights key features available in its toolkits.
The following topics are covered:
- What is OCCT?
- Development facts
- OCCT architecture
- Modeling algorithms
- Visualization
- Data exchange
- Version history

Back

AMENDMENT STEP Reduce

Home

Speaker Seth Hillbrand
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time15:50 - 16:10
Event linkView original entry

STEP files generated by many FOSS modelling programs are much larger than those generated by the equivalent commercial software packages. This does not meaningfully matter for most files. However, for libraries of STEP files containing thousands of models, the extra bytes in downloads and storage can be problematic.



This talk introduces a single-purpose library and command line utility called STEPReduce that removes the superfluous elements, replacing them with their equivalent, existing in-file references. I will show a reduction of 50% file size in both compressed and uncompressed STEP files over the full KiCad STEP library. I will also show the resulting improvement in complex model load times using the industry standard OpenCascade import routines. Finally, I will show the utility's verification routines of solid model equivalence between compressed and uncompressed files.
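The core idea of replacing superfluous duplicate entities with references to one shared definition can be sketched in a few lines. This toy version works on a simplified `#id=BODY;` line format and is not the actual STEPReduce algorithm; real STEP entities span lines and nest references far more deeply.

```python
# Deduplicate identical entity definitions and rewrite references
# to the removed duplicates so they point at the surviving entity.
import re

def deduplicate(step_lines):
    canonical = {}   # entity body -> id of its first occurrence
    remap = {}       # duplicate id -> canonical id
    kept = []
    for line in step_lines:
        m = re.match(r'#(\d+)=(.*);$', line)
        eid, body = m.group(1), m.group(2)
        if body in canonical:
            remap['#' + eid] = '#' + canonical[body]
        else:
            canonical[body] = eid
            kept.append(line)
    # rewrite references to removed duplicates in the surviving lines
    def sub(match):
        return remap.get(match.group(0), match.group(0))
    return [re.sub(r'#\d+', sub, line) for line in kept]

lines = [
    "#1=CARTESIAN_POINT('',(0.,0.,0.));",
    "#2=CARTESIAN_POINT('',(0.,0.,0.));",   # duplicate of #1
    "#3=VERTEX_POINT('',#2);",              # reference gets rewritten to #1
]
reduced = deduplicate(lines)
print(reduced)
```

Since identical geometry (origins, unit directions, placements) recurs constantly in exported models, even this naive pass removes a large share of lines.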



Please note that this talk replaces one entitled "News from gEDA/gaf" that was due to have been given by Roland Lutz, who has sent his apologies but is now unable to attend as he has fallen ill. We wish him a speedy recovery.

Back

Gmsh

Home

Speaker Christophe Geuzaine
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time16:15 - 16:35
Event linkView original entry

Gmsh (http://gmsh.info) is an open source finite element mesh generator with built-in pre- and post-processing facilities. Under continuous development for the last two decades, it has become the de facto standard for open source finite element mesh generation, with a large user community in both academia and industry. In this talk I will present an overview of Gmsh, and highlight recent developments including the support for constructive solid geometry, new robust and parallel meshing algorithms, flexible solver integration and a new multi-language Application Programming Interface in C++, C, Python and Julia.

Back

AXIOM - open source cinema camera

Home

Speaker Sebastian Pichelhofer
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time16:40 - 17:00
Event linkView original entry

The presentation will give a brief overview of the project's history and lessons learned during the course of developing a high-tech camera device as a community project. We also want to demo and explain the produced hardware, enclosures and sample footage, and then look at the challenges still ahead. The last 5 minutes are reserved for Q&A.

Back

Horizon EDA - Version 1.0

Home

Speaker Lukas Kramer
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time17:05 - 17:25
Event linkView original entry

This talk covers my motivation for starting a new EDA package in 2016 and the main ideas behind Horizon, as well as what has changed in the last year. I'll also go into my short- and long-term plans for the project.

Horizon EDA is a from-scratch EDA package with a focus on useful parts management, rule-driven design and good usability. It has already proven its suitability for medium-complexity projects, in the board design for my master's thesis and in various hobby projects.




Back

OpenPiton: An Open-Source Framework for EDA Tool Development

Home

Speaker David Wentzlaff
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time17:30 - 17:50
Event linkView original entry

As contemporary industrial ASIC designs have reached transistor counts in the hundreds of billions, EDA tools must have the scalability to handle such large designs. However, few open-source RTL designs reflect the scale that industrial ASICs have reached. In this talk, we will present OpenPiton, a scalable, tiled manycore design that can reach as many as 65,536 cores in a single chip, and up to 500 million cores in a multi-chip design. The modularity and scalability of the OpenPiton design enable EDA tool developers to test their tools' functionality at contemporary scales and adapt their development for future larger designs. With its many configurability options, extensive scalability, and heterogeneity, the OpenPiton platform is well placed to supercharge open-source EDA tool development and pave the way for a completely open-source ASIC synthesis and back-end flow tested using open-source designs.

Speaker bio:
David Wentzlaff is an associate professor of electrical engineering at Princeton University. Wentzlaff's research has earned several awards, among them an NSF CAREER award, DARPA Young Faculty Award, AFOSR Young Investigator Prize, induction into the MICRO Hall of Fame, and the ASPLOS WACI Test-of-Time Award. He received his M.S. and Ph.D. from MIT and received a B.S. in electrical engineering from the University of Illinois at Urbana-Champaign. He was Lead Architect and Founder of Tilera Corporation, a multicore chip manufacturer now owned by Mellanox. David's current research interests include how to create manycore microprocessors customized specifically for Cloud computing environments, how to design computer architectures in a post Moore’s Law world, and how to reduce the impact of computing on the environment by optimizing computer architecture for fully biodegradable substrates. Many of the research projects created by Wentzlaff’s group have been open-sourced including the PriME simulator, OpenPiton, and PRGA.



Links:
https://parallel.princeton.edu/openpiton/
https://github.com/PrincetonUniversity/openpiton

Back

Designing Hardware, Journey from Novice to Not Bad

Home

Speaker Eric Herman
Kendrick Shaw
Stephanie Medlock
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time17:55 - 18:15
Event linkView original entry

The three main contributors to the OpenElectronicsLab projects started out as relative novices. The wealth of online resources and some trial and error open the doors to the world of hardware design.



This talk will reflect on what lowered the barriers, the insights gained, and what had to be done when things turned out to be harder than expected, aiming to encourage hesitant novices to start designing their own hardware.

Back

Finite element modeling with the deal.II software library

Home

Speaker Wolfgang Bangerth
RoomH.2213
TrackOpen Source Computer Aided Modeling and Design
Time18:20 - 18:50
Event linkView original entry

The finite element method has been the method of choice for simulating the deformation of solids as well as the flow of many kinds of fluids for nearly 70 years now. In the case of solids, it provides a general framework to describe how a body reacts to external stimuli by modeling how deformation affects the internally stored energy. While most software that implements the method used to be homegrown for a particular purpose, the 2000s have seen the emergence of large, professionally developed, open source software libraries that provide a broad range of functionality and make the implementation of such codes straightforward. I will give an overview of one of these libraries, deal.II, and how and where it is used.

Back

Welcome to the MySQL, MariaDB & Friends Devroom 2020

Home

Speaker Frédéric Descamps
Ian Gilfillan
RoomH.2214
TrackMySQL, MariaDB and Friends
Time10:30 - 10:40
Event linkView original entry

Welcome to the FOSDEM MySQL, MariaDB & Friends Devroom 2020

Back

MySQL 8 vs MariaDB 10.4

Home

Speaker Peter Zaitsev
RoomH.2214
TrackMySQL, MariaDB and Friends
Time10:40 - 11:00
Event linkView original entry

MySQL 8 and MariaDB 10.4 are the latest major versions of MySQL and MariaDB. While MariaDB started as a slightly different MySQL variant, it has grown into a very different database platform that diverges further with every release.



In this presentation, we will look into the differences between MySQL and MariaDB in core areas such as SQL features, query optimization, replication, storage engines, and security, and discuss the unique features and capabilities that MySQL 8 and MariaDB 10.4 offer compared to each other.

Back

MyRocks in the Wild Wild West!

Home

Speaker Alkin Tezuysal
RoomH.2214
TrackMySQL, MariaDB and Friends
Time11:10 - 11:30
Event linkView original entry

In this talk, we'll walk through RocksDB technology and look into areas where MyRocks is a good fit by comparison to other engines such as InnoDB. We will go over internals, benchmarks, and tuning of MyRocks engine. We also aim to explore the benefits of using MyRocks within the MySQL ecosystem. Attendees will be able to conclude with the latest development of tools and integration within MySQL.

Back

How Safe is Asynchronous Master-Master Setup?

Home

Speaker Sveta Smirnova
RoomH.2214
TrackMySQL, MariaDB and Friends
Time11:40 - 12:00
Event linkView original entry

It is common knowledge that built-in asynchronous master-master (active-active) replication is not safe. I remember times when the official MySQL User Reference Manual stated that such an installation is not recommended for production use. Some experts repeat this claim even now.



While this statement is generally true, I worked with thousands of shops that successfully avoided asynchronous replication limitations in active-active setups.



In this talk, I will show how they did it, and demonstrate situations where asynchronous master-master replication is the best possible high availability option, beating solutions such as Galera or InnoDB Cluster. I will also cover common mistakes leading to disasters.

Back

The consequences of sync_binlog != 1

Home

Speaker Jean-François Gagné
RoomH.2214
TrackMySQL, MariaDB and Friends
Time12:10 - 12:30
Event linkView original entry

Have you ever needed to get some additional write throughput from MySQL? If so, you probably found that setting sync_binlog to 0 (and trx_commit to 2) gives you an extra performance boost. Like all such easy optimisations, it comes at a cost. This talk explains how this tuning works, presents its consequences and makes recommendations to avoid them. This will bring us to the details of how MySQL commits transactions and how those are replicated to slaves. Come to this talk to learn how to get the benefit of this tuning the right way and to learn some replication internals.

Back

Overview of encryption features

Home

Speaker Hrvoje Matijakovic
RoomH.2214
TrackMySQL, MariaDB and Friends
Time12:40 - 13:00
Event linkView original entry

MariaDB, MySQL and Percona Server provide some features in this space, but currently no single product covers all the needs (at least none available as FOSS).
This talk will provide an overview of Data-at-Rest-Encryption features in MySQL, MariaDB and Percona Server for MySQL, their availability across versions, and status (experimental/GA).

Back

What's new in ProxySQL 2.0?

Home

Speaker Nick Vyzas
RoomH.2214
TrackMySQL, MariaDB and Friends
Time13:10 - 13:30
Event linkView original entry

ProxySQL, the high performance, high availability, protocol-aware proxy for MySQL is now GA in version 2.0. This version introduces several new features, like causal reads using GTID, better support for AWS Aurora, native support for Galera Cluster, LDAP authentication and SSL for client connections.



This session provides an overview of the most important new features.

Back

SELinux fun with MySQL and friends

Home

Speaker Matthias C
Ivan Groenewold
RoomH.2214
TrackMySQL, MariaDB and Friends
Time13:40 - 14:00
Event linkView original entry

SELinux (Security-Enhanced Linux) provides an enhanced security mechanism for more advanced access control and auditing. It allows your application software and your system users to access only the resources they have been preconfigured to access. Of course, when you want to move your data or log files to a non-standard location, these policies will stop MySQL from starting.



The easy way out is obviously to set SELinux to disabled or permissive. But someone once said: "Every time you disable SELinux a kitten dies". We'll show you a few ways to find out whether it really is SELinux that is blocking you, and how to update the policies to keep your system properly secured.

Back

Running MySQL in Kubernetes in real life

Home

Speaker Sami Ahlroos
RoomH.2214
TrackMySQL, MariaDB and Friends
Time14:10 - 14:30
Event linkView original entry

Running databases in Kubernetes has come a long way.



Focusing on MySQL, we will explore the challenges and issues of running production databases in Kubernetes. We'll look at the opportunities and benefits of running in Kubernetes too.
While rolling out a database is easy enough, things can get interesting when production tasks are undertaken.
How do you achieve scaling – whether that's scaling up or down? How do you know that your latest backup will restore safely?
We will also take a look at an open source solution for monitoring your database deployments, adding support for Kubernetes as a robust production environment.




Outline:
- Introduction
- Installing MySQL in Kubernetes
- Scaling up, scaling down
- Backup, restore, verification
- An open source monitoring solution
- What could possibly go wrong?



Takeaways:
This presentation should encourage the audience to embrace the possibilities of running production databases on Kubernetes, and will help attendees understand the dos and don'ts of such a deployment.

Back

ALTER TABLE improvements in MariaDB Server

Home

Speaker Marko Mäkelä
RoomH.2214
TrackMySQL, MariaDB and Friends
Time14:40 - 15:00
Event linkView original entry

ALTER TABLE in MySQL used to copy the table contents row by row. We can do much better; in the best case, allow instantaneous schema changes, even for nontrivial changes, such as ADD COLUMN…AFTER and DROP COLUMN. This talk describes how ALTER TABLE has been improved over the years for the InnoDB storage engine in MySQL 5.1, 5.5, 5.6, 5.7, and MariaDB Server 10.2, 10.3, 10.4, 10.5, mostly by the presenter.

The talk enumerates different classes of ALTER TABLE operations:




  1. operations involving nothing but metadata changes

  2. operations that can be performed instantly by introducing a backward-compatible data file format change and 'faking' the operation (ADD or DROP COLUMN)

  3. operations that can avoid rebuilding a table

  4. operations that must rebuild the table

  5. variations of the previous 2 classes that allow concurrent modifications to the table




We also show how ALTER TABLE can be executed concurrently on multiple nodes in statement-based replication.
Finally, we cover some theoretical limits of what kind of ALTER TABLE operations can be supported without rebuilding the table, by introducing an optional validation step and on-demand conversion of records in previous schema versions of the table.
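The "instant" class of operations above can be illustrated with a toy model: existing rows stay untouched in their write-time format, the schema change only updates metadata, and missing trailing columns are filled from defaults at read time. This is a sketch of the general idea only; InnoDB's actual record-format handling is far more involved.

```python
# Toy table where ADD COLUMN is a metadata-only (O(1)) operation:
# old rows keep their original width, reads fill in the defaults.
class Table:
    def __init__(self, columns):
        self.columns = list(columns)   # (name, default) pairs
        self.rows = []                 # rows kept in write-time format

    def insert(self, *values):
        self.rows.append(list(values))

    def add_column(self, name, default):
        # "instant" schema change: no row is rewritten
        self.columns.append((name, default))

    def select_all(self):
        # pad each row with defaults for columns added after it was written
        return [row + [d for _, d in self.columns[len(row):]]
                for row in self.rows]

t = Table([("id", None), ("name", None)])
t.insert(1, "alpha")
t.add_column("status", "active")       # rows stay untouched on "disk"
t.insert(2, "beta", "inactive")        # new rows use the new format
print(t.select_all())
```

The validation step mentioned above corresponds to checking, on demand, that old-format rows can still be converted to the newest schema version.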

Back

Rewinding time with System Versioned Tables

Home

Speaker Sergei Golubchik
RoomH.2214
TrackMySQL, MariaDB and Friends
Time15:10 - 15:30
Event linkView original entry

Imagine, you're given a time machine. A fairly limited one, it cannot transport you anywhere. Still, it can show you the past, what your tables looked like at any given point in time. This is exactly what the SQL:2011 standard and MariaDB 10.3+ are giving you. System versioned tables allow you to rewind time and see their content as it was in the past — all using normal SELECT statements. This talk will show how to create system versioned tables, how to use them and how not to kill the performance when doing that. It will present various new applications and use cases that became possible now. Having a time machine, what will you use it for?

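The mechanics behind system versioning can be emulated to show the idea: each row version carries a [row_start, row_end) validity interval, and a query "as of" time T filters on row_start <= T < row_end. SQLite stands in for MariaDB here purely to keep the sketch self-contained and runnable; MariaDB 10.3+ does all of this transparently once a table is declared WITH SYSTEM VERSIONING.

```python
# Emulate system-versioned rows with explicit validity intervals.
import sqlite3

INF = 10**9  # stand-in for "row is still current"
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE price_hist (item TEXT, price REAL, row_start INT, row_end INT)")

def set_price(item, price, now):
    # close the current version, open a new one
    db.execute("UPDATE price_hist SET row_end=? WHERE item=? AND row_end=?",
               (now, item, INF))
    db.execute("INSERT INTO price_hist VALUES (?,?,?,?)", (item, price, now, INF))

def price_as_of(item, t):
    # the SQL:2011 equivalent: SELECT ... FOR SYSTEM_TIME AS OF t
    row = db.execute(
        "SELECT price FROM price_hist WHERE item=? AND row_start<=? AND row_end>?",
        (item, t, t)).fetchone()
    return row[0] if row else None

set_price("widget", 10.0, now=100)
set_price("widget", 12.5, now=200)
print(price_as_of("widget", 150))   # the pre-update version
print(price_as_of("widget", 250))   # the current version
```

The performance caveat from the talk is visible even here: every UPDATE grows the table, so history partitioning and indexes on the interval columns matter.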

Back

Knocking down the barriers of ORDER BY LIMIT queries with MariaDB 10.5

Home

Speaker Varun Gupta
RoomH.2214
TrackMySQL, MariaDB and Friends
Time15:40 - 16:00
Event linkView original entry

The talk will start with a recap of how MariaDB (or MySQL) handles the
ORDER BY LIMIT optimization, with examples demonstrating why the current
optimizer is not good enough.



Further, the talk will describe how the optimizer in MariaDB 10.5 mostly
solves the issue, the remaining unresolved issues and how DBAs can tackle them.




For the first part of the talk, I will discuss the possible strategies by
which ORDER BY LIMIT optimization is handled in MariaDB (or MySQL)



The strategies are:
1) Using an ordered index (ref, range or index scan)
2) Using filesort on the first non-const table
3) Using filesort on the temporary table, that stores the output of the join
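The cost trade-off behind these strategies can be felt in miniature: a full sort of N rows costs O(N log N), while a bounded priority queue (what filesort can use internally when the LIMIT is known) keeps only the top k rows at O(N log k). This is an analogy in plain Python, not the MariaDB implementation.

```python
# Compare "sort everything, then cut" with a bounded top-k selection
# for the equivalent of: SELECT ... ORDER BY col LIMIT 10
import heapq
import random

random.seed(42)
rows = [(random.random(), i) for i in range(10000)]

# Strategy: full filesort, then apply the LIMIT
full_sort = sorted(rows)[:10]

# Strategy: priority queue bounded to the LIMIT size
pq = heapq.nsmallest(10, rows)

print(full_sort == pq)  # both strategies return the same rows
```

An ordered index (strategy 1) would be cheaper still: it reads only the first 10 entries and sorts nothing, which is why the optimizer's choice between these plans matters.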



Then I will discuss how the current MariaDB/MySQL optimizer makes the choice between the strategies and show
the situations where it will never get a good query plan.



For the second part of the talk, I will describe how a new cost-based
optimization in MariaDB 10.5 solves the above issue.
The talk will contain details about how the costs were taken into account
during the optimization phase. Further, with the help of examples
I would demonstrate how the execution differs for this new optimization
and how this leads to improved performance for ORDER BY LIMIT queries.

Back

CPU performance analysis for MySQL using Hot/Cold Flame Graph

Home

Speaker Vinicius Grippa
RoomH.2214
TrackMySQL, MariaDB and Friends
Time16:10 - 16:30
Event linkView original entry

Come to see some real-life examples of how you can do CPU profiling with perf and eBPF/BCC, to create FlameGraphs and ColdGraphs visualizations of the on-CPU/off-CPU time spent by the database. Based on these visualizations and reading the database source code (this is why we love Open Source!) you can quickly gain insight about what's burning CPU (FlameGraphs) and what's causing CPU to wait (ColdGraphs), and with this knowledge you will be several steps closer to answering "what's consuming all that CPU time".

Back

Hash Join in MySQL 8

Home

Speaker Erik Frøseth
RoomH.2214
TrackMySQL, MariaDB and Friends
Time16:40 - 17:00
Event linkView original entry

JOIN is one of the most common operations in a database system, and for a long time, the only algorithm for executing a join in MySQL was a variation of the nested loop algorithm. But starting from MySQL 8.0.18, it is now possible to execute joins using hash join. This presentation will walk you through how we were able to implement hash join using our new iterator executor, how hash join in MySQL works, when it is used, and everything else that is worth knowing about hash join in MySQL.
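The difference between the two algorithms is easy to sketch: a nested loop scans the inner input once per outer row, while a hash join builds a hash table on one input once and probes it with a single pass over the other. Illustrative only; MySQL's executor adds chunked spilling to disk and many other refinements.

```python
# Nested loop join vs. hash join over the same inputs.
def nested_loop_join(left, right, key):
    # O(len(left) * len(right)) comparisons
    return [(l, r) for l in left for r in right if l[key] == r[key]]

def hash_join(left, right, key):
    # build phase: hash one input on the join key
    buckets = {}
    for l in left:
        buckets.setdefault(l[key], []).append(l)
    # probe phase: a single pass over the other input
    return [(l, r) for r in right for l in buckets.get(r[key], [])]

users = [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]
orders = [{"id": 1, "total": 5}, {"id": 1, "total": 7}, {"id": 3, "total": 2}]

nl = nested_loop_join(users, orders, "id")
hj = hash_join(users, orders, "id")
print(len(hj))  # 2 matching pairs, same result either way
```

The build side is ideally the smaller input, since it must fit in memory (or be spilled in chunks) while the probe side is streamed.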

Back

Comparing Hash Join solution, the good, the bad and the worse.

Home

Speaker Marco Tusa (the Grinch)
RoomH.2214
TrackMySQL, MariaDB and Friends
Time17:10 - 17:30
Event linkView original entry

MySQL 8.0.18 (finally) comes with a long-awaited and much-desired hash-join implementation,
one that was already present in other pseudo-MySQL distributions like MariaDB.
But what is hash join, how does it work, what problems does it solve, and when and how should you use it?
Last but not least: are all the different implementations doing the same things, or do they act and perform differently?
We are going to take a short journey through hash-join implementations and answer all these questions.

Back

MySQL 8.0: Secure your MySQL Replication Deployment

Home

Speaker Pedro Figueiredo
RoomH.2214
TrackMySQL, MariaDB and Friends
Time17:40 - 18:00
Event linkView original entry

Data protection is an extensive and hot topic. Making sure that
whoever accesses your data has a well-established identity and is
authorized can be a complex and hard task. Moreover, nowadays
data tends to move around quickly between different instances of the
same service, different services and different data consumers. This
often implies that data traverses different administrative domains. It
is key that MySQL handles, stores and replicates data complying
with the security requirements that business and regulations demand.



This session showcases the new developments in MySQL 8.0 that tighten
replication-related security setups and reduce the attack surface of
the different replication topologies. We will talk about secure
inter-server communication, encryption of replication data at rest
and the new features that make the replication applier run under a
specific security context. Come and learn about security related
replication features in MySQL 8.0.

Back

Automating schema migration flow with GitHub Actions, skeema & gh-ost

Home

Speaker Shlomi Noach
RoomH.2214
TrackMySQL, MariaDB and Friends
Time18:10 - 18:30
Event linkView original entry

Schema migration is more than running an ALTER TABLE. It is about designing, reviewing, approving, queuing, scheduling, executing, auditing, controlling and versioning the changes.



At GitHub we run multiple migrations per day, and much of this flow used to be manual, taking a significant toll on the databases team. In this session we illustrate how we automated migrations away using free and open source solutions, based on a trusted development flow.




We highlight the use of the skeema tool, with GitHub Actions, git flow and gh-ost.

Back

20 mins to write a MySQL Shell Plugin

Home

Speaker Frédéric Descamps
RoomH.2214
TrackMySQL, MariaDB and Friends
Time18:40 - 19:00
Event linkView original entry

MySQL Shell is a new client for MySQL. It comes with multiple functionalities, like the adminAPI commands to set up and operate a MySQL InnoDB Cluster, but also upgrade checks, JSON import, parallel import and more. It also allows you to communicate with the MySQL Server in SQL, Python or JavaScript!
During this session we will write a plugin from scratch to extend the Shell using the MySQL Shell Plugin Framework. The code will be written live in Python.
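A hedged sketch of what such a plugin might look like: `shell.create_extension_object()`, `shell.add_extension_object_member()` and `shell.register_global()` are part of the MySQL Shell plugin framework (available from 8.0.17), while the `demo` global and the report function below are made up for illustration. The formatting logic is plain Python, so it also runs outside mysqlsh.

```python
# Hypothetical Shell plugin: a "demo" global with one member function.
def format_sizes(rows):
    """Render (schema, size_bytes) pairs as human-readable lines."""
    return ["{}: {:.1f} MiB".format(name, size / 2**20) for name, size in rows]

def schema_sizes(session):
    """Print per-schema data+index size (runs inside mysqlsh)."""
    rows = session.run_sql(
        "SELECT table_schema, SUM(data_length + index_length) "
        "FROM information_schema.tables GROUP BY table_schema").fetch_all()
    for line in format_sizes(rows):
        print(line)

# Registration only makes sense inside mysqlsh, where 'shell' is defined.
if "shell" in globals():
    obj = shell.create_extension_object()
    shell.add_extension_object_member(
        obj, "schemaSizes", schema_sizes,
        {"brief": "Print per-schema data+index size."})
    shell.register_global("demo", obj, {"brief": "Demo plugin."})

print(format_sizes([("test", 3 * 2**20)]))
```

Dropped into the Shell's plugin directory, a file like this would make `demo.schemaSizes(session)` callable from both the Python and JavaScript modes.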

Back

Apache Camel BoF

Home

Speaker Zoran Regvart
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time11:00 - 12:00
Event linkView original entry

Apache Camel is a free software integration framework from the Apache Software Foundation. This meetup is for anyone wishing to meet and discuss Apache Camel development, share experiences and meet other folks from the Apache Camel community in meatspace.

Apache Camel has been around for a while and is used quite widely in all sorts of software integration projects. Camel version 3 was recently released and the community has kicked off several sub-projects: Camel K - a serverless, cloud native integration on top of Kubernetes, Camel Quarkus - low memory, fast startup support utilizing Quarkus, and Camel Kafka Connector - for running Camel inside of Kafka. Let's meet and discuss these initiatives or just talk and exchange ideas around Camel. Anyone wishing to present on any topic relating to Apache Camel is welcome to do so, though please be mindful of the time - we have one hour in this session and would like everyone to have the chance to participate.

Back

FFmpeg BOF

Home

Speaker
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time14:00 - 15:00
Event linkView original entry

Back

RIOT BOF

Home

Speaker
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time16:00 - 17:00
Event linkView original entry

Back

VideoLan BOF

Home

Speaker
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time17:00 - 18:00
Event linkView original entry

Back

Sailfish OS BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time15:00 - 16:00
Event linkView original entry

Back

Nethserver BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time16:00 - 17:00
Event linkView original entry

Back

TinyGo

Home

Speaker Ron Evans
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time11:00 - 12:00
Event linkView original entry

Back

Sourcehut & aerc meetup

Home

Speaker Drew DeVault
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time12:00 - 13:00
Event linkView original entry

Members of the closely linked Sourcehut and aerc communities meet up to put faces to names and discuss the present and future of both projects, and to collect stickers.

Sourcehut is a free/libre project hosting platform with sophisticated git hosting, mailing lists, continuous integration, and more. We'll be discussing the remaining tasks for the alpha, planning and seeking feedback for the beta, and showing off cool stuff added in the past year.



aerc is a FOSS email client for your terminal, designed especially for software developers, which integrates nicely into the mailing list workflow endorsed by Sourcehut. As it is early in its development, you'll have a chance to discuss what you'd like to see in your dream email client and speculate wildly on the lofty goals it aims to achieve.

Back

Open Source Storage BoF

Home

Speaker thingee
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time13:00 - 14:00
Event linkView original entry

Please join us for our Birds of a Feather session on open source storage. We'll be talking about Ceph, Gluster, and all other things open source and storage. Mike Perez, community human for Ceph at Red Hat, will be leading the discussion and is happy to give some cool demos during the BoF.


Back

Tor Meetup

Home

Speaker Pili Guerra
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time14:00 - 15:00
Event linkView original entry

Tor Meetup

Back

A best practices guide for FLOSS community managers

Home

Speaker Karsten Wade
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time15:00 - 16:00
Event linkView original entry

tl;dr - contribute to a community management guide written for practitioners, by practitioners; bring your internet-enabled device.



Whether you are a community manager or any other contributor to a free/libre open source software (FLOSS) project, you'll find yourself learning and thinking of good methods and practices for having a happy, successful, and productive project. Community managers and members share these practices with each other all the time -- in talks, in person, as blog posts, and sometimes as a stand-alone book. But has anyone ever really collaborated on a guide that is for practitioners and collaboratively written BY practitioners?



Actually, yes: 10 years ago The Open Source Way 1.0 was written and released. In the passage of that decade, the best practices around caring for FLOSS projects have evolved, as has the work of the people in those community shepherding roles.



We have embarked on a revamp of this guide, and we invite you to come learn about, participate in, and contribute to this living document in real time during this BOF.



We will begin the BOF with a brief introduction to the guide itself, what it currently covers, and an overview of the narrative being told from and to community management practitioners. We'll then get hands-on with the contribution process, starting with a walkthrough of what is needed for the 2.0 release. After we make some real-time submissions to the guide, the BOF participants can begin working together in small groups or individually on portions of the guide, covering both content and the publication toolchain.

Back

Replicant Meetup

Home

Speaker: David Ludovino
Room: J.1.106
Track: BOFs (Track A - in J.1.106)
Time: 16:00 - 17:30
Event link: View original entry

Replicant is a fully free Android distribution that runs on several devices: a free software mobile operating system that puts the emphasis on freedom, privacy, and security.



This meeting is for everyone interested in the Replicant project (users, developers, device vendors, etc.). Among other things, we will present the ongoing efforts on Replicant 9 and discuss how we should move forward. Everyone's point of view is welcome.

Here's a non-exhaustive list of the topics that will be up for debate:






We will try to set up an audio conference (through Mumble) for those who cannot come to FOSDEM. For that we need several unidirectional (cardioid) microphones, either lapel or handheld, that can be connected to a laptop. If you have one, please bring it.

Back

Creating Sustainable Public Sector Open Source Communities

Home

Speaker: OSOR team
Room: J.1.106
Track: BOFs (Track A - in J.1.106)
Time: 17:30 - 19:00
Event link: View original entry

The Open Source Observatory (OSOR) of the European Commission is an online collection that provides its community with an information observatory, community-building activities, and assistance and support services. On behalf of OSOR, we will present the currently ongoing study towards a guidelines document for creating sustainable open source communities in the public sector.
In this context, OSOR is producing guidelines for creating sustainable open source communities within the public sector. The purpose of the guidelines is to act as a practical tool that can be used by public sector officials interested in establishing open source communities or by members of such communities. The production of the guidelines is a multi-step process, involving desk research, primary data collection, the development of four case studies and interviews with key stakeholders.
After presenting the objectives and the approach used to produce the guidelines, the OSOR representatives will present the preliminary findings, including the key success factors associated with healthy communities. The audience will also be invited to brainstorm in groups on the key success factors of sustainable OSS communities and to identify the key components that our guidelines should contain.

The production of the guidelines is a multi-step process, involving both desk research and primary data collection. More specifically, our team has conducted an in-depth literature review, followed by a questionnaire targeting OSS communities in the public sector, which will be running between January and February 2020. Following the analysis of data collected from the questionnaire, our team will develop four case studies illustrating successful and failed OSS initiatives in the public sector. Interviews with key case study stakeholders are to be conducted within each case study.
The key objective of our workshop at FOSDEM is not only to present our preliminary findings to the audience but to also obtain their views on our findings and future guidelines. We want to ensure that OSOR puts forward truly community driven guidelines.
During the workshop, we will recount the findings so far, drawn from the literature review and the survey data. More specifically, we will present the key success factors, and their components, associated with sustainable OSS communities, as well as some interesting failed and successful public sector OSS initiatives. The audience will then be invited to brainstorm in groups on the key success factors of sustainable OSS communities and to identify the key components that our guidelines should contain.
We will also invite the audience to contribute to our ongoing survey and invite them to get in touch with the OSOR community for further ideas on our guidelines and case studies.



The target group of OSOR is wide-ranging within the open source community. It includes policy makers, IT managers, IT developers, researchers and students, and OSS advocates and enthusiasts in general. OSOR invites members of the audience who are interested in the development of OSS within public administrations, community managers and members, developers and organisations who would like to learn more about the project of measuring the health and sustainability of an open source project. The audience is also invited to follow the work of OSOR and the European Commission on OSS more closely. OSOR promotes collaboration, sharing and digital development through various services provided to its community.

Back

Building a distributed knowledge base with a Linked Data powered CMS

Home

Speaker: Sander Van Dooren
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 10:30 - 10:55
Event link: View original entry

After a quick overview of the semantic web landscape, we'll look into a practical application of Linked Data in the public sector: the Joinup platform of the European Commission.
Joinup makes use of Linked Data to harvest information from various sources into a knowledge base. We'll have a look at the various components that we needed to build (all Open Source) to make the CMS system (Drupal) 'Linked Data enabled'.
We'll look at the technical problems that needed to be solved and the solutions we came up with, as well as the opportunities that this technology can bring to the CMS world.

Back

From 0 to Intranet in 20 minutes with XWiki

Home

Speaker: Anca Luca
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 11:00 - 11:25
Event link: View original entry

Sharing knowledge in a team of people working together (a company, association, study group or any other kind of project) is key to its long-term success. Even if it is not identified as a main concern from the beginning, setting up tools and processes that allow knowledge to be accumulated and organized correctly ends up on the TODO list at some point, often with the wish that it had been done earlier. Thus, the best solution is one that is quick enough to set up from the very beginning and versatile enough to incrementally develop into a proper solid fortress of knowledge.

In this presentation I will use the XWiki platform to incrementally build a collaborative intranet from scratch and will try to address some frequent needs of knowledge sharing in a team, using already-made add-ons or new tools (list not exhaustive):
* Blog, for unidirectional communication,
* Meeting notes, for spoken knowledge not to be lost,
* File Manager for just dropping files to share,
* Task manager for lightweight ticketing,
* Holiday requests and Recruitment applications for team management,
* Unstructured free content sharing, using standard wiki pages and page templates,
* Structured content tailored to fit your exact specific needs,
* Multi-wiki separation for teams isolation,
* Powerful search in all this,
* Authentication add-ons to plug your own users management,
* Easy look & feel setup, to brand it as your own.

Back

ONLYOFFICE: How to securely collaborate on documents within content management applications

Home

Speaker: Alex Mikheev
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 11:30 - 11:55
Event link: View original entry

ONLYOFFICE is an open-source HTML5-based office suite for editing and collaborating on text documents, spreadsheets, and presentations online. Here, maximizing format compatibility, establishing browser-agnostic content display and optimizing real-time data transfer in co-authoring are the core principles in building applicable editing software.



Demand from end users indicates that deeper integration into productivity solutions and content management applications is one of the main directions for ONLYOFFICE. We would like to share our experience in building connectors that allow users to edit and co-author their documents securely right within the platform they use.



Presentation milestones:




Back

A dozen more things you didn't know Nextcloud could do

Home

Speaker: Jos Poortvliet
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 12:00 - 12:25
Event link: View original entry

With Nextcloud you can sync, share and collaborate on data, but you don't need to put your photos, calendars or chat logs on an American server. Nope, Nextcloud is self-hosted and 100% open source! Thanks to hundreds of apps, Nextcloud can do a lot and in this talk, I will highlight some cool things.

Consider this a follow-up to my talk last year about 200 things Nextcloud can do! An update on what's new and some cool new stuff. What, what is Nextcloud? Let's see. A private cloud is one way to put it, though that's a contradiction of course. It is a way to share your data, sync your files, communicate and collaborate with others - without giving your data to GAFAM! Keep it on your own server, or something close (like a local hosting provider or data center). Nextcloud is a PHP app that does all that, and more! Easy to use, secure (really) and fully open source of course.

Back

Bringing Collabora Online to your web app

Home

Speaker: Michael Meeks
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 12:30 - 12:55
Event link: View original entry

The Collabora Online code-base can bring the power of LibreOffice into
an iframe inside your web app. Come and hear how this works, how to integrate
secure, collaborative document editing with your software, and about all the
latest greatest work going on there.

Collabora Online uses a WOPI-like protocol, and it is rather simple to integrate.
Come hear about the three REST methods you need for a simple integration,
as well as the wealth of options that control how collaboration works.
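For orientation, the three methods in question are, in WOPI terms, CheckFileInfo, GetFile and PutFile. Below is a minimal sketch in Python of what a WOPI-style host could implement over an in-memory store; the function names, document store, and response fields shown are illustrative assumptions, not Collabora's actual integration code.

```python
import json

# Illustrative in-memory "document store"; a real WOPI host would
# back this with its own storage and access control.
DOCS = {"doc1": {"name": "report.odt", "contents": b"Hello", "version": 1}}

def check_file_info(file_id):
    """WOPI CheckFileInfo: the metadata the editor needs before opening."""
    doc = DOCS[file_id]
    return json.dumps({
        "BaseFileName": doc["name"],
        "Size": len(doc["contents"]),
        "UserCanWrite": True,
        "Version": str(doc["version"]),
    })

def get_file(file_id):
    """WOPI GetFile: return the raw document bytes to the editor."""
    return DOCS[file_id]["contents"]

def put_file(file_id, body):
    """WOPI PutFile: store the bytes saved by the editor."""
    DOCS[file_id]["contents"] = body
    DOCS[file_id]["version"] += 1
    return json.dumps({"Version": str(DOCS[file_id]["version"])})
```

In a real integration these handlers sit behind HTTP routes (metadata, download and upload for a given file ID), and the host hands the editor a WOPI URL plus an access token that each request must carry.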



Hear about some of our integrations into Nextcloud, Kolab,
Mattermost, Moodle, ownCloud, and many more.



See the internals of Collabora Online, and how you can get involved with
building, debugging, and developing it, and check out some of the new features
for Mobile and PC browser that we've been working on to make life better
for our users.

Back

More than one tool for collaborating on writing the Tiki CMS

Home

Speaker: Jean-Marc Libs
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 13:00 - 13:25
Event link: View original entry

The Tiki Wiki CMS Groupware software community obviously uses Tiki itself for collaboration and knowledge management.
Yet, many other software tools or infrastructures are used. I will review and explain how and why.

Tiki Wiki CMS Groupware was initially released in 2002 and is still very much alive with a vibrant community.
We obviously want to use Tiki itself for collaboration and knowledge management as much as possible, but the real situation is, we use a lot more tools.
We leverage the fact that Tiki is part of WikiSuite by using the other WikiSuite software, but even these don't cover everything we use.
The reasons vary from old habits to convenience or improved efficiency and they are different for each collaboration tool.
I will review them and explain why we use them and how useful they are to us. Also, I will mention our plans for the future.

Back

Wikibase Ecosystem

Home

Speaker: Lydia Pintscher
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 13:30 - 13:55
Event link: View original entry

Wikidata, Wikimedia's knowledge base, has been very successful since its inception 7 years ago. Wikidata's general purpose data about the world is powering everything from Wikipedia to your digital personal assistant. Its linked, machine readable data is collected and maintained by a community of over 20000 people. But not all data should and can be in Wikidata. Instead we are taking the software powering Wikidata, Wikibase, to new places. We empower communities and institutions all around the world who want to collect structured, machine-readable data about a topic area of their choice to run their own Wikibase. These Wikibase instances are then connected to form a thriving ecosystem. In this talk we'll go over what Wikibase is, where it's coming from and what it is enabling right now.

Back

Decentralized collaborative applications

Home

Speaker: TG x
Room: AW1.120
Track: Collaborative Information and Content Management Applications
Time: 14:00 - 14:25
Event link: View original entry

A data-centric, offline-first approach to decentralized collaborative application development focusing on data ownership and privacy.

Exploring replicated mergeable data structure stores as building blocks of decentralized applications that enable asynchronous collaboration and offline search. These stores are combined with peer-to-peer gossip-based protocols that provide pub/sub, dissemination, and recommendation services, both over the internet and on local and mobile proximity networks, thereby forming interest-based networks that facilitate the discovery of personally relevant content and people.
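As a toy illustration of the "replicated mergeable data structure" idea (a hypothetical minimal example, not the store the talk describes): a grow-only set merges by union, so replicas edited offline converge no matter the order in which gossip delivers their states.

```python
class GSet:
    """Grow-only set: the simplest mergeable (CRDT-style) structure.
    Replicas only add elements; merge is set union, which is
    commutative, associative and idempotent, so any delivery order
    converges to the same state."""
    def __init__(self, items=()):
        self.items = set(items)

    def add(self, x):
        self.items.add(x)

    def merge(self, other):
        return GSet(self.items | other.items)

# Two replicas edited independently while offline...
a = GSet({"note-1"})
b = GSet({"note-2"})
a.add("note-3")
# ...converge regardless of merge order.
assert a.merge(b).items == b.merge(a).items == {"note-1", "note-2", "note-3"}
```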

Back

The unsupervised free CAT for low resource languages

Home

Speaker: Alberto Massidda
Room: AW1.120
Track: Coding for Language Communities
Time: 15:00 - 16:00
Event link: View original entry

We present: 1) a full pipeline for unsupervised machine translation training (making use of monolingual corpora) for languages with few available resources; 2) a translation server exposing that unsupervised MT through an HTTP API compatible with the Moses toolkit, a once prominent MT system; 3) a Docker-packaged version of the EU-funded free Computer Aided Translation (CAT) tool MateCAT for ease of deployment.
This full translation pipeline enables a non-technical user, speaking a non-FIGS language for which parallel corpora are scarce, to start translating documents and software following translation industry standards.

Localization within the community suffers from the fragmentation of technologies (too wide a gap between commercial Computer Aided Translation tools and free ones), from scarce language resources (making it difficult to train a machine translation system) and from the lack of clear and robust pipelines to get started.
Low-resource language communities suffer the most, since MT systems require training corpora of millions of words and the industry has settled on expecting the massive corpora available for FIGS (French, Italian, German, Spanish) languages.
Moreover, the community suffers from a lack of adoption of established technologies and workflows, leading to reinventing the wheel and suboptimal outcomes.
Today we would like to present a connector for the implementation of an unsupervised MT (by Artetxe et al.) that claims a BLEU score of 26 with limited language resources (enough for a support system), integrated with MateCAT, an industry-grade, free, web-based tool funded by the EU, in order to provide a more viable alternative to resorting to Google Translate and commercial LSPs.
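For context on the Moses-compatible HTTP API: the Moses toolkit's server component conventionally exposes an XML-RPC translate method that accepts and returns a struct carrying a "text" field. A client along those lines might look like the sketch below; the endpoint path and field names are assumptions to verify against the actual server.

```python
import xmlrpc.client

def make_proxy(server_url):
    """Connect to a Moses-style XML-RPC translation server,
    e.g. a local instance such as http://localhost:8080/RPC2
    (URL is illustrative, not prescribed by the talk)."""
    return xmlrpc.client.ServerProxy(server_url)

def translate(proxy, source_text):
    """Moses server convention (assumed here): a `translate` method
    taking and returning a struct with a "text" field."""
    reply = proxy.translate({"text": source_text})
    return reply["text"]
```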

Back

Lexemes in Wikidata

Home

Speaker: Lydia Pintscher
Room: AW1.120
Track: Coding for Language Communities
Time: 16:00 - 16:30
Event link: View original entry

Wikidata, Wikimedia's knowledge base, has been collecting general purpose data about the world for 7 years now. This data powers Wikipedia but also many applications outside Wikimedia, like your digital personal assistant. In recent years Wikidata's community has also started collecting lexicographical data in order to provide a large data set of machine-readable data about words in hundreds of languages. In this talk we will explore how Wikidata enables thousands of volunteers to describe their languages and make it available as a source of data for systems that do automated translation, text generation and more.

Back

Nuspell: version 3 of the new spell checker

Home

Speaker: Sander van Geloven
Room: AW1.120
Track: Coding for Language Communities
Time: 16:30 - 17:00
Event link: View original entry

Nuspell version 3 is a FOSS spell checker written in pure C++17. It extensively supports character encodings, locales, compounding, affixing and complex morphology. Existing spell checking in web browsers, office suites, IDEs and other text editors can use it as a drop-in replacement. Nuspell supports 90 languages, suggestions and personal dictionaries.

In this talk we will summarize the functionality of Nuspell version 3 and provide easy-to-follow examples of how to use it as a command-line tool or link against the C++ library. Newly made integrations in Firefox and Enchant will be discussed. The audience will be invited to further integrate Nuspell into their software, create new language bindings, port it to other operating systems and help grow its community. This new spell checker has grown from an MVP into a faster and more complete spell checker.

Back

AMENDMENT Weblate! Localize your project the developer way: continuously, flawlessly, community-driven, and open-source

Home

Speaker: Michal Čihař
Room: AW1.120
Track: Coding for Language Communities
Time: 17:00 - 18:00
Event link: View original entry

Please note that this talk will now be given by Michal Čihař instead of Václav Zbránek.



The presentation will show you how to localize your project easily, with little effort, the open-source way. Why did we start Weblate? We said no to repetitive work, and no to manual work with translation files. Weblate is unique for its tight integration with version control. Set it up once and start engaging a community of translators. More languages translated means more happy users of your software. Be like openSUSE, Fedora, and many more, and speak your users' language now thanks to Weblate! I will show you the main perks of Weblate and how to set up a project. If you have a project with an open repository and you want to start translating it, bring your git:// link and we will set it up right on the spot. FOSDEM is a great time and place to found your translating community. And I am looking forward to answering all your questions!

Back

Open Edge Hardware and Software for Natural Language Translation and Understanding

Home

Speakers: Alexander Rush, Thierry Tambe
Room: AW1.120
Track: Coding for Language Communities
Time: 18:00 - 18:30
Event link: View original entry

The last half decade has seen a major increase in the accuracy of deep learning methods for natural language translation and understanding. However many users still interact with these systems through proprietary models served on specialized cloud hardware. In this talk we discuss co-design efforts between researchers in natural language processing and computer architecture to develop an open-source software/hardware system for natural language translation and understanding across languages. With this system, users can access state-of-the-art models for translation, speech, and classification, and also run these models efficiently on edge device open-hardware designs.



Our work combines two open-source development efforts, OpenNMT and FlexNLP. The OpenNMT project is a multi-year collaborative project for creating an ecosystem for neural machine translation and neural sequence learning. Started in December 2016 by the Harvard NLP group and SYSTRAN, the project has since been used in many research and industry applications. The project includes highly configurable model architectures and training procedures, efficient model serving capabilities for use in real world applications, and extensions to tasks such as text generation, tagging, summarization, image to text, and speech to text. FlexNLP is an open-source fully retargetable hardware accelerator targeted for natural language processing. Its hardware design is targeted to key NLP computational functions such as attention mechanisms and layer normalization that are often overlooked by today’s CNN or RNN hardware accelerators. FlexNLP’s rich instruction set architecture and microarchitecture enable a diverse set of computations and operations that are paramount for end-to-end inference on state-of-the-art attention-based NLP models. Together they provide an open pipeline for both model training and edge device deployment.

Back

Poio Predictive Text

Home

Speaker: Peter Bouda
Room: AW1.120
Track: Coding for Language Communities
Time: 18:30 - 19:00
Event link: View original entry

The Poio project develops language technologies to support communication in lesser-used and under-resourced languages on and with electronic devices. Within the Poio project we develop text input services with text prediction and transliteration for mobile devices and desktop users to allow conversation between individuals and in online communities.

In this lightning talk I will present the current architecture of the Poio Corpus, our corpus collection and data management pipeline. I will show how to add a new language to the corpus and how you can use the pipeline to build language models for the predictive text technology. Our goal is to make collaboration with language communities as smooth as possible, so that developers, data engineers and speakers of under-resourced languages can collaborate to build grassroots language technologies. Poio started as a language revitalization project at the Interdisciplinary Centre for Social and Language Documentation in Minde, Portugal, a non-profit organization dedicated to the documentation and preservation of linguistic heritage.
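To illustrate what "building language models for predictive text" can mean at its simplest, here is a toy word-bigram predictor in Python; it is illustrative only and not Poio's actual pipeline.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus_text):
    """Count word bigrams in a corpus: for each word, how often
    each following word occurs."""
    words = corpus_text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, prev_word, k=3):
    """Suggest the k most frequent continuations of prev_word."""
    return [w for w, _ in model[prev_word.lower()].most_common(k)]

model = train_bigrams("the cat sat on the mat and the cat slept")
assert predict(model, "the") == ["cat", "mat"]
```

Real predictive-text models are larger and smarter than this, but the shape is the same: a corpus goes in, a next-word ranking per context comes out.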

Back

Farwest Demo

Home

Speaker: Loïc Hoguin
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 10:30 - 10:50
Event link: View original entry

Farwest is an Erlang framework for building RESTful Web applications and APIs.



Well-written Farwest applications apply the HATEOAS principles and as a result can be interacted with using a single generic client. This entirely removes the need to write a separate client per API and lets servers decide how the data is best consumed by everyone.
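To make the single-client idea concrete: a hypermedia-driven client hardcodes only the entry point and follows link relations advertised in each response. A toy Python sketch of the principle, with a dict standing in for the HTTP layer (the response shape and URLs are hypothetical, not Farwest's wire format):

```python
def follow(resources, start, rel):
    """Generic hypermedia client step: fetch a resource, then follow
    the link with the requested relation. The client knows no URLs
    beyond the entry point; the server's links drive navigation.
    `resources` stands in for an HTTP server: URL -> representation."""
    doc = resources[start]
    return resources[doc["links"][rel]]

# Hypothetical API: the entry point advertises where the data lives.
API = {
    "/": {"data": "welcome", "links": {"documents": "/docs"}},
    "/docs": {"data": ["a", "b"], "links": {"self": "/docs"}},
}
assert follow(API, "/", "documents")["data"] == ["a", "b"]
```

If the server later moves /docs elsewhere, this client keeps working unchanged, which is exactly the per-API-client work HATEOAS removes.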



This demo will show how to use Farwest to write a simple API to a document oriented database.

Back

OpenTelemetry: an XKCD 927 Success Story

Home

Speaker: Greg Mefford
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 11:00 - 11:20
Event link: View original entry

Learn how distributed tracing can revolutionize the way you troubleshoot errors and performance issues, in both monolithic and distributed micro-service architectures.



OpenTelemetry is an industry standard for distributed tracing, merging the tech and communities of OpenCensus and OpenTracing.


Back

Debugging and tracing a production RabbitMQ node

Home

Speaker: Gabriele Santomaggio
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 11:30 - 11:50
Event link: View original entry

In this talk, we will see how to debug and trace a running RabbitMQ node.
Erlang remote access and remote debugging are built-in features of Erlang/Elixir.

With these features, it is possible to see what's happening inside a BEAM node (such as RabbitMQ).
The BEAM ships with a set of tools, such as etop, eprof, dbg and fprof, that work much like their Linux counterparts.
I will also show how to use "dynamic loading" to add non-native code to a running BEAM.

Back

Keep Calm and Use Nerves

Home

Speaker: Arjan Scherpenisse
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 12:00 - 12:20
Event link: View original entry

Intended as an introduction to Nerves, the IoT platform for the BEAM, this talk is a journey through the land of library ecosystems, device drivers and pixel manipulators, in search of the holy grail: a stable and maintainable IoT device.

The Nerves project (https://nerves-project.org/) is a framework for building IoT devices with Elixir. In this talk I will explain how a Nerves project is structured, and then show and demonstrate one of the projects I built with it, focusing on the development experience and the state of the Nerves ecosystem.

Back

Lumen

Home

Speaker: Luke Imhoff
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 12:30 - 12:50
Event link: View original entry

Lumen is an alternative compiler, interpreter and runtime to the BEAM designed for WebAssembly. Lumen allows Erlang and Elixir to run in the browser.

The Lumen project is a reimplementation of the BEAM in Rust. Using Rust, Lumen is able to leverage the cutting edge tools of the Rust WASM ecosystem. Compiling Elixir and Erlang from source to LLVM IR, the Lumen compiler is able to do whole program optimizations allowing for dead-code elimination of parts of the user application, OTP, and the runtime itself. Eliminating the dead code makes shipping OTP size-competitive with JavaScript frameworks, while retaining the benefits of thousands of concurrent processes and supervision trees.

Back

CoffeeBeam

Home

Speaker: Viktor Gergely
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 13:00 - 13:20
Event link: View original entry

The speaker started experimenting with running BEAM modules on Android during the summer of 2019. A prototype called CoffeeBeam was created that is capable of loading and running BEAM files on Android. The solution also contains a virtual machine that provides a lightweight Erlang runtime system. Most of the implemented functionality is independent of the source language of the BEAM files, so the platform is easily extensible to support further languages on the BEAM. During the talk, the speaker will present a real-life example of running a BEAM file on Android, while presenting the concepts of the implementation and sharing the story of this journey.

CoffeeBeam: a BEAM VM for Android



Goal



CoffeeBeam is a lightweight Erlang virtual machine that provides easy integration of BEAM files with Android applications. The current alternative solutions provide almost fully functional Erlang runtime systems in the form of an Erlang shell on Android devices. CoffeeBeam, however, follows a different approach, targeting easy integration of pre-compiled BEAM files into standalone Android applications. The characteristics of Android devices are the focus: they provide a large amount of memory, while CPU usage needs to be optimized for longer battery life. Communication between Erlang and the Android application should be transparent to provide a better user experience.



Use Case



Let's assume that you chose a language on the BEAM to implement your application logic efficiently. CoffeeBeam provides a framework to build on this logic and enable communication between the BEAM and the Android application with only minor changes to your original code. The demonstrated example is a TicTacToe game where the game logic is implemented in Erlang and extended with a graphical user interface implemented as an Android activity in Java.



Application Architecture



Android Activity



The TicTacToe game is implemented as an Android activity which is a common way of creating interactive applications. The activity contains the view for displaying textual information (game name and user instructions depending on the state of the game) and widgets (game board and new game button) for initiating user actions towards the game logic.



CoffeeBeam VM



The CoffeeBeam VM provides the runtime system for the game logic. It is written in Java and included as a .jar library inside the Android application source code. Starting and stopping the VM are connected to the onCreate() and onDestroy() callbacks of the activity.



Game logic



The flow of the game and the computer player's intelligence are implemented as an Erlang module (approximately 250 lines of code), and the compiled .beam file is packaged into the Android application as a resource.



Communication



The BeamClient class provides an interface for starting and stopping the VM, and manages communication between the VM and the Android application through function calls and callback functions. The default behavior can be redefined by extending the BeamClient class. The forms of communication are described in detail below.



Function call in the VM



User actions in the Android application are translated into function calls in the VM using the apply(String module, String function, ErlList args) method of the BeamClient class. The function call implies creating a new process in the Erlang VM and applying module:function with the list of args. The TicTacToe game logic provides the following functions:






Handle function result in Android



When the Erlang function is executed in the VM, the result of the function initiates a callback in the BeamClient as handleResult(ErlTerm result). In the TicTacToe example, the process identifier of the game process is returned as the result of the tictactoe:start() function. The returned value can be used to send Erlang messages to the game process during the game.



Handle function callback in Android



Each call in the form of beamclient:function(arg) in the Erlang modules results in a BeamClient callback handleCall(String function, ErlTerm arg). Each game event invokes a beamclient:update({Event, Board}) function call that is translated into handleCall callback in the Android application.



Summary and contribution



CoffeeBeam executes BEAM files in a lightweight VM that can be packaged into the Android application. The above TicTacToe example showed how to include the Erlang game logic in the Android application that provides the graphical user interface. The game flow runs in a separate process in the CoffeeBeam VM, and the communication with Android is done through BeamClient function calls and callbacks.



The CoffeeBeam VM is open source and available for further development to extend the VM functionality or implement customizations for other languages running on the BEAM. The source code with documented interface is available at: https://github.com/vikger/coffeebeam.

Back

Going Meta with Elixir's Macros

Home

Speaker: Wiebe-Marten Wijnja
Room: AW1.121
Track: Erlang, Elixir and Friends
Time: 13:30 - 13:50
Event link: View original entry

Compilation and execution are as different as night and day. Or are they? By blurring the lines, Elixir (and the BEAM VM) enable some very powerful and useful meta-programming techniques.



In this presentation, Marten will talk about running and generating code at compile-time, Elixir's hygienic macros, and how to compile and hot-reload altered or extra code, while your program is running!
Besides explaining these concepts, their usefulness will be motivated using various practical real-world examples.

Back

Processes & Grains

Home

Speaker Evadne Wu
RoomAW1.121
TrackErlang, Elixir and Friends
Time14:00 - 14:20
Event linkView original entry

A popular way to manage long-running state in Erlang and Elixir programs is by using processes; this model is well-understood and well-supported, but remains firmly rooted within known orthodoxy. Within this session, I shall demonstrate the application of the Orleans model to existing Erlang and Elixir applications, review existing work done by the community, and compare this way of programming against other models. For maximum enjoyment, the audience is advised to possess working knowledge of Erlang and Elixir. Some background knowledge in Web applications will be helpful as well. The session will be presented with a live demo in Elixir.

Back

Designing a performant and scalable graph processing python package

Home

Speaker Vincent Cave
RoomAW1.121
TrackGraph Systems and Algorithms
Time15:00 - 15:20
Event linkView original entry

Python has proven to be a popular choice for data scientists in
the domain of graph analytics. The multitude of freely available
frameworks and Python packages allows applications to be developed
quickly through ease of expression and reuse of code.
With petabytes of data generated every day and an ever-evolving
landscape of hardware solutions, we observe that a graph processing
framework should expose the following characteristics: ease of
use, scalability, interoperability across data formats, and
portability across hardware vendors.
While existing Python packages have helped drive application
development, our assessment is that none of them address all of
the aforementioned challenges.
We propose a community-led, open-source effort to design and
build a graph processing Python library that specifically addresses
these challenges.

Back

Graffiti

Home

Speaker Sylvain Baubeau
RoomAW1.121
TrackGraph Systems and Algorithms
Time15:25 - 15:45
Event linkView original entry

Graffiti is the graph engine of Skydive, an open source network analysis tool. Graffiti was created from scratch to provide the features required by Skydive: being distributed and replicated, storing the whole history of the graph, allowing subscription to graph events over WebSocket, and supporting visualization.

Skydive (https://skydive.network) is an open source analysis tool. It collects information about an infrastructure topology, such as network interfaces, Linux bridges, network namespaces, containers and virtual machines, and stores it in a graph database called Graffiti (https://github.com/skydive-project/skydive/tree/master/graffiti)



The graph is:



- distributed: some agents hold only a portion of the graph
- replicated: for high availability and load distribution
- historical: every change to the graph is archived, allowing retrieval of the graph at any point in time, or of all revisions of a set of nodes and edges during a period of time


A custom implementation of the Gremlin language is used to query the graph, with additional steps to, for instance, specify the time context of a query.
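The idea of an archived, queryable-in-time graph can be sketched with a toy model. This is a conceptual illustration only, not Graffiti's actual implementation; the class and method names are invented. Every mutation is appended to a log, and the graph at time t is recovered by replaying the log up to t, which is roughly what a time-context step selects.

```python
# Toy versioned graph: every change is archived, so the graph can be
# reconstructed at any point in time. Not Graffiti's real implementation.
class HistoricalGraph:
    def __init__(self):
        self.log = []  # append-only archive of (time, op, node)

    def add_node(self, t, node):
        self.log.append((t, "add", node))

    def del_node(self, t, node):
        self.log.append((t, "del", node))

    def at(self, t):
        """Return the set of nodes present at time t (a time-context query)."""
        nodes = set()
        for when, op, node in sorted(self.log):
            if when > t:
                break
            nodes.add(node) if op == "add" else nodes.discard(node)
        return nodes

g = HistoricalGraph()
g.add_node(1, "eth0")
g.add_node(2, "br0")
g.del_node(3, "eth0")
print(g.at(2))  # both interfaces exist at t=2
print(g.at(3))  # only "br0" remains after the deletion
```

A real engine would index the archive rather than replay it linearly, but the retrieval semantics are the same.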



In addition to the core engine, a WebSocket-based user interface, built on D3JS, is available to visualize and interact with the graph.



This presentation will showcase a demo of Graffiti and make the case for using it in your own projects.

Back

The Neo4j Graph Algorithms Library: An Overview

Home

Speaker Max Kießling
RoomAW1.121
TrackGraph Systems and Algorithms
Time15:50 - 16:10
Event linkView original entry

Graph algorithms play an increasingly important role in real-world applications. The Neo4j Graph Algorithms library contains a set of ~50 graph algorithms covering a lot of different problem domains. In our talk, we’ll present the architecture of the library and demonstrate the different execution phases using a real world example.

Graph algorithms play an increasingly important role in real-world applications. Use-cases that we see in the wild are related to fraud detection, retail recommendation and identifying influencers for marketing campaigns. The Neo4j Graph Algorithms library contains a set of ~50 graph algorithms covering the above-mentioned problem domains.



Running a graph algorithm in Neo4j involves three essential steps: loading the graph from the database into an optimized in-memory format, executing the algorithm, and streaming or writing the results. For the user, these steps are hidden behind single procedure calls, integrated into the Cypher query language.



In our talk, we will explain and demonstrate what happens in the system when a user calls an algorithm procedure. This involves scanning Neo4j store files, constructing our in-memory graph representation and executing an algorithm via our Java Graph API.



Attendees will learn how to set up and use the Neo4j Graph Algorithms Library. Furthermore, they will gain a good understanding of how the library works internally and how to tune it for specific use-cases.

Back

Gunrock: High-Performance Graph Analytics for the GPU

Home

Speaker Muhammad Osama
RoomAW1.121
TrackGraph Systems and Algorithms
Time16:15 - 16:35
Event linkView original entry

Gunrock is a CUDA library for graph-processing designed specifically for the GPU. It uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on vertex or edge frontiers. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies, particularly in the area of fine-grained load balancing, with a high-level programming model that allows programmers to quickly develop new graph primitives that scale from one to many GPUs on a node with small code size and minimal GPU programming knowledge.
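The frontier-centric, bulk-synchronous model can be illustrated on a CPU in a few lines of Python. This is a conceptual sketch only (Gunrock itself is CUDA/C++ and heavily load-balanced): each step expands the whole current frontier into the next one.

```python
# Conceptual frontier-based BFS: each iteration processes the whole
# frontier in bulk, in the spirit of Gunrock's advance/filter operators.
def bfs_levels(adj, source):
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for v in frontier:              # on a GPU this loop is data-parallel
            for w in adj.get(v, []):
                if w not in level:      # filter already-visited vertices
                    level[w] = depth
                    next_frontier.append(w)
        frontier = next_frontier        # bulk-synchronous step boundary
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The value of the abstraction is that a new primitive only has to describe what happens to a frontier per step; the framework supplies the parallelism and load balancing.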



Features of Gunrock include:
- Best-in-class performance among GPU graph analytics frameworks
- A large number of graph applications (28 at last count)
- A data-centric programming model targeted at GPUs that offers advantages over other programming models
- A programming model that scales to multiple GPUs with high performance while still using the same code as a single-GPU primitive



Gunrock began in 2013 as a project under DARPA's XDATA program and is currently the performance reference in DARPA's HIVE program. Gunrock is also in development as a component in NVIDIA's RAPIDS platform for data analytics. The Gunrock team actively develops and improves Gunrock under an Apache 2.0 license.



https://gunrock.github.io/

Back

Hardware-Software Co-Design for Efficient Graph Application Computations on Emerging Architectures

Home

Speaker Margaret Martonosi
Aninda Manocha
RoomAW1.121
TrackGraph Systems and Algorithms
Time16:40 - 17:00
Event linkView original entry

Graph databases and applications have attracted much attention in the past few years due to the efficiency with which they can represent big data, connecting different layers of data structures and allowing analysis while preserving contextual relationships.
This has resulted in a fast-growing community that has been developing various database and algorithmic innovations in this area, many of which will be gathering together in this conference. We joined this field as computer architecture researchers and are currently building a complete hardware-software design, called DECADES, that aims to accelerate the execution of these algorithms.



From a computer architecture point of view, applications involving dense matrix operations such as neural networks have garnered much attention for their acceleration through specialized hardware such as GPUs and TPUs, while graph applications remain difficult to improve even with modern specialized accelerator designs. The reason for this is the characteristic pointer-based data structures of graph applications and the resulting irregular memory accesses performed by many of these workloads. Such irregular memory accesses result in memory latency bottlenecks that dominate the total execution time. In this talk, as part of the DECADES infrastructure, we present an elegant hardware-software co-design solution, named FAST-LLAMAs, to overcome these memory bottlenecks and thus accelerate graph and sparse applications in an energy-efficient way.




We propose a 40-minute talk that includes a rigorous characterization of the problem and an in-depth analysis of our software-hardware co-design solution, FAST-LLAMAs. We will present results based on a simulated model of our system that show significant performance improvements (up to 8x), as well as energy improvements (up to 20x), on a set of fundamental graph algorithms and important real-world datasets. Our system is completely open-source and includes a compiler and a cycle-accurate simulator. Our proposed system is compatible with, and easily extendable to, many of the open-source graph analytics and database frameworks, and we are excited to engage with the open-source community of this increasingly important domain.



The work is part of a large collaboration from three academic groups: Margaret Martonosi (PI Princeton), David Wentzlaff (PI Princeton), Luca Carloni (PI Columbia) with students/researchers: Juan L. Aragón (U. of Murcia, Spain), Jonathan Balkind, Ting-Jung Chang, Fei Gao, Davide Giri, Paul J. Jackson, Aninda Manocha, Opeoluwa Matthews, Tyler Sorensen, Esin Türeci, Georgios Tziantzioulis, and Marcelo Orenes Vera. In addition to the submission author, portions of the talk may be offered by others in the collaboration.

Back

Programmable Unified Memory Architecture (PUMA)

Home

Speaker Stijn Eyerman
RoomAW1.121
TrackGraph Systems and Algorithms
Time17:05 - 17:25
Event linkView original entry

Large scale graph analytics is essential to analyze relationships in big data sets. To that end, the DARPA HIVE program targets a leap in power-efficient graph analytics. In response to this program, Intel proposes the Programmable Unified Memory Architecture (PUMA). Based on insights from graph workload analysis, PUMA consists of many multi-threaded cores, fine-grained memory and network accesses, a globally shared address space and powerful offload engines. In this talk, we will describe the PUMA architecture, both in terms of hardware and the software ecosystem. We will provide initial simulation-based performance estimations, showing that for graph analysis applications, a PUMA node will outperform a conventional compute node by one to two orders of magnitude. Additionally, PUMA will continue to scale across multiple nodes, which is a challenge in conventional multi-node setups.

Back

Cypher enhancements for sharded and federated graph databases

Home

Speaker Tobias Johansson
Petr Janouch
RoomAW1.121
TrackGraph Systems and Algorithms
Time17:30 - 17:50
Event linkView original entry

In this talk we will introduce enhancements to the Cypher graph query language, enabling queries spanning multiple graphs, intended for use in sharding and federation scenarios.
We will also present our experience with sharding the LDBC Social Network Benchmark dataset.

Back

AMENDMENT Raphtory: Streaming analysis of distributed temporal graphs

Home

Speaker Ben Steer
RoomAW1.121
TrackGraph Systems and Algorithms
Time17:55 - 18:15
Event linkView original entry

Temporal graphs capture the development of relationships within data throughout time. This model fits naturally within a streaming architecture, where new events can be inserted directly into the graph upon arrival from a data source and compared to related entities or historical state. However, the vast majority of graph processing systems only consider traditional graph analysis on static data, with some outliers supporting batched updating and temporal analysis across graph snapshots. This talk will cover recent work defining a temporal graph model which can be updated via event streams, and will investigate the challenges of distribution and graph maintenance. Notable challenges include: partitioning a graph built from a stream, with the additional complexity of managing trade-offs between structural locality (proximity to neighbours) and temporal locality (proximity to an entity's history); synchronising graph state across the cluster and handling out-of-order updates, without a central ground truth limiting scalability; and managing memory constraints while performing analysis in parallel with ongoing update ingestion.
To address these challenges, we introduce Raphtory, a system which maintains temporal graphs over a distributed set of partitions, ingesting and processing parallel updates in near real-time. Raphtory's core components consist of Graph Routers and Graph Partition Managers. Graph Routers attach to a given input stream and convert raw data into graph updates, forwarding these to the Graph Partition Manager handling the affected entity. Graph Partition Managers contain a partition of the overall graph, inserting updates into the histories of affected entities at the correct chronological position. This removes the need for centralised synchronisation, as commands may be executed in any given arrival order whilst resulting in the same history. To deal with memory constraints, Partition Managers both compress older history and set an absolute threshold for memory usage. If this threshold is met, a cut-off point is established, requiring all updates prior to this time to be transferred to offline storage. Once the cluster is established and ingesting the selected input, analysis on the graph is permitted via Analysis Managers. These connect to the cluster, broadcasting requests to all Partition Managers, which execute the algorithm. Analysis may be completed on the live graph (the most up-to-date version), at any point back through its history, or as a temporal query over a range of time. Additionally, multiple Analysis Managers may operate concurrently on the graph, with previously unseen algorithms compiled at run-time, thus allowing modification of ongoing analysis without re-ingesting the data.
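The order-independence property described above can be sketched in a few lines. This is an illustration of the idea only, not Raphtory's implementation: if each update is inserted at its chronological position in an entity's history, any arrival order produces the same final history.

```python
import bisect

# Illustrative sketch of order-independent history maintenance:
# each update is inserted at its chronological position, so the
# final history is identical for any arrival order.
class EntityHistory:
    def __init__(self):
        self.history = []  # list of (timestamp, update), kept sorted

    def ingest(self, timestamp, update):
        bisect.insort(self.history, (timestamp, update))

in_order = EntityHistory()
for ev in [(1, "created"), (2, "property set"), (3, "deleted")]:
    in_order.ingest(*ev)

out_of_order = EntityHistory()
for ev in [(3, "deleted"), (1, "created"), (2, "property set")]:
    out_of_order.ingest(*ev)

print(in_order.history == out_of_order.history)  # True
```

This is why no central ground truth is needed: partitions converge on the same history regardless of delivery order.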
Raphtory is an ongoing project, but is open source and available for use now. Raphtory is fully containerised for ease of installation and deployment and much work has gone into making it simple for users to ingest their own data sources, create custom routers and perform their desired analysis.
The proposed talk will discuss the benefits of viewing data as a temporal graph, the current version of Raphtory and how someone could get involved with the project. We shall also touch on several areas of possible expansion at the end for discussion with those interested.

The intended audience for this talk is a mixture of data scientists and graph engineers. It will be quite high-level, but will introduce some interesting ideas about how to view data through the lens of a temporal graph, as well as novel systems solutions for distribution, maintenance and processing.

Back

Temporal Graph Analytics with GRADOOP

Home

Speaker Kevin Gomez
Christopher Rost
RoomAW1.121
TrackGraph Systems and Algorithms
Time18:20 - 18:40
Event linkView original entry

The temporal analysis of evolving graphs is an important requirement in many domains but hardly supported in current graph database and graph processing systems. We, therefore, extended the distributed graph analytics framework Gradoop for time-related graph analysis by introducing a new temporal property graph data model. Our model supports bitemporal time dimensions for vertices and edges to represent both rollback and historical information. In addition to the data model, we introduce several time-dependent operators (e.g., Snapshot, Diff and Grouping) that natively support the natural evolution of the graph. Since this is an extension of Gradoop, the temporal operators are compatible and can be combined with the already known operators to build complex analytical tasks in a declarative way. In our talk, we will give a brief overview of the Gradoop system, the temporal property graph model and how we support the time-dependent analysis of large graphs. Based on real-world use-cases, we show the expressiveness and flexibility of our temporal operators and how they can be composed to answer complex analytical questions.
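Operators of the kind mentioned above can be sketched as interval filters. This is a toy model, not Gradoop's actual (Flink-based) implementation; the edge format and function names are invented: each element carries a validity interval, Snapshot keeps what was valid at a given time, and Diff compares two snapshots.

```python
# Toy temporal property graph: each edge carries a [start, end) validity
# interval. snapshot(t) keeps what was valid at time t; diff compares
# two snapshots. Not Gradoop's actual operators.
def snapshot(edges, t):
    return {e for (e, start, end) in edges if start <= t < end}

def diff(edges, t1, t2):
    before, after = snapshot(edges, t1), snapshot(edges, t2)
    return {"added": after - before, "removed": before - after}

edges = [
    ("alice-knows-bob", 0, 10),
    ("bob-knows-carol", 5, 8),
]
print(snapshot(edges, 6))  # both edges are valid at t=6
print(diff(edges, 6, 9))   # "bob-knows-carol" expired between t=6 and t=9
```

A bitemporal model as in Gradoop adds a second (transaction-time) interval per element, but the filtering principle is the same.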

Back

Weaviate OSS Smart Graph

Home

Speaker Bob van Luijt
RoomAW1.121
TrackGraph Systems and Algorithms
Time18:40 - 19:00
Event linkView original entry

Weaviate is an open-source smart graph that aims to allow anyone, anywhere, at any time to create their own semantic search engines, knowledge graphs or knowledge networks. Weaviate is RESTful and GraphQL API based and built on top of a semantic vector storage mechanism called the contextionary. Because all data is stored in the vector space, Weaviate is ideal for:
- semantically searching through the knowledge graph;
- automatically classifying entities in the graph;
- creating easy-to-use knowledge mappings.



Because the use of formal ontologies is optional, Weaviate can be used to create a P2P knowledge network, which we want to present during this conference.



This is a follow up after the initial design was shared during last year's FOSDEM.

Problem



Creating a knowledge graph can be a complex endeavor, let alone the integration of semantic search models. Bain & Company research among US enterprise CTOs shows that 59% of them believe they lack the capabilities to generate meaningful business insights from their data, and 85% said it would require substantial investments to improve their data platforms.



Solution



Weaviate aims to let anyone create large, enterprise-scale knowledge graphs as straightforwardly as possible. Weaviate's feature set allows anyone to:
- semantically search through the knowledge graph;
- automatically classify entities in the knowledge graph;
- create easy-to-use knowledge mappings.



Weaviate's Contextionary



Weaviate's Contextionary is the semantic vector storage mechanism that stores data (unlike traditional storage mechanisms) based on its semantic meaning. For example, if someone stores information about a company with the name Apple, this data object would be found closely related to concepts like the iPhone.
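The effect can be illustrated with toy word vectors. The numbers below are made up for the example; Weaviate's contextionary uses a pre-trained embedding space of far higher dimension. Objects are stored as vectors, and semantic neighbours are simply the nearest vectors.

```python
import math

# Toy semantic vector store: hand-made 3-dimensional vectors stand in
# for a real pre-trained embedding space. All values are invented.
vectors = {
    "Apple":  (0.9, 0.8, 0.1),
    "iPhone": (0.85, 0.75, 0.2),
    "Banana": (0.1, 0.2, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def nearest(name):
    """The stored object most semantically similar to `name`."""
    others = [w for w in vectors if w != name]
    return max(others, key=lambda w: cosine(vectors[name], vectors[w]))

print(nearest("Apple"))  # -> iPhone
```

In the toy space, "Apple" lands next to "iPhone" rather than "Banana", which is exactly the behaviour the abstract describes for data objects in the contextionary.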



Because the pre-trained machine learning model is used algorithmically (as opposed to being retrained), Weaviate is able to learn new concepts quickly and in near-realtime. This allows the user to update and manipulate the knowledge graph directly.



Demo & Use cases



During the session, we want to show a few recent use cases to demonstrate how Weaviate can be used. The demo will include:
- querying;
- semantic querying;
- adding concepts;
- going from an ontology to a schema;
- and more.



Knowledge network



Because of Weaviate's contextionary, a formal ontology is optional (e.g., "a company with the name Netflix" is semantically similar to "a business with the identifier Netflix Inc."). This allows multiple Weaviate instances to connect and communicate over a peer-to-peer (P2P) network to exchange knowledge: the knowledge network. During the session, we want to demonstrate the first prototype of this network.



More information



More information can be found on our website: https://www.semi.technology/documentation/weaviate/current/

Back

Welcome to the Ada DevRoom

Home

Speaker Dirk Craeynest
Jean-Pierre Rosen
RoomAW1.125
TrackAda
Time10:30 - 10:35
Event linkView original entry

Welcome to the Ada Developer Room at FOSDEM 2020, which is organized
by Ada-Belgium in cooperation with Ada-Europe.

Ada-Belgium and Ada-Europe are non-profit organizations set up
to promote the use of the Ada programming language and related
technology, and to disseminate knowledge and experience into academia,
research and industry in Belgium and Europe, respectively. Ada-Europe has
member-organizations, such as Ada-Belgium, in various countries,
and direct members in many other countries.

Back

An Introduction to Ada for Beginning and Experienced Programmers

Home

Speaker Jean-Pierre Rosen
RoomAW1.125
TrackAda
Time10:35 - 11:20
Event linkView original entry

An overview of the main features of the Ada language, with special
emphasis on those features that make it especially attractive for
free software development.

Ada is a feature-rich language, but what really makes Ada stand out is
that its features are nicely integrated towards serving the goals of
software engineering. If you prefer to spend your time on designing
elegant solutions rather than on low-level debugging, if you think
that software should not fail, if you like to build programs from
readily available components that you can trust, you should really
consider Ada.

Back

HAC: the Compiler which will Never Become Big

Home

Speaker Gautier de Montmollin
RoomAW1.125
TrackAda
Time11:30 - 11:50
Event linkView original entry

In the Ada world, we are surrounded by impressive and professional
tools that can handle large and complex projects. Did you ever
dream of a tiny, incomplete but compatible system to play with?
Are you too impatient, when developing small pieces of code, for
long compile-bind-link-run cycles? Are you a beginner intimidated by
project files and sophisticated tools? Then HAC (the HAC Ada Compiler,
or the Hello-world Ada Compiler) is for you.

HAC is a revival of the SmallAda project, which supported the "Pascal
subset" plus tasking.

Back

Tracking Performance of a Big Application from Dev to Ops

Home

Speaker Philippe Waroquiers
RoomAW1.125
TrackAda
Time12:00 - 12:50
Event linkView original entry

This talk describes how performance aspects of a big Air Traffic Flow
Management mission critical application are tracked from development
to operations.

Tracking performance is needed when new functionality is added, to
balance the additional services versus the resource increase needed.
Measuring and tracking performance is also critical to ensure a new
release can cope with the current or expected load.



We will discuss various aspects such as which tools and techniques
are used for performance tracking and measurements, what are the
traps and pitfalls encountered for these activities. The application
in question is using Ada, but most of the items discussed are not
particularly Ada related.

Back

Cappulada: What we've Learned

Home

Speaker Johannes Kliemann
RoomAW1.125
TrackAda
Time13:00 - 13:20
Event linkView original entry

Last year I presented Cappulada, a C++ binding generator for Ada that
intended to overcome the shortcomings of existing solutions and to
provide usable bindings even for complex C++ code.



This year I want to show our conclusions on why automatic bindings
between C++ and Ada are hard (if not impossible) and where existing
solutions (including our own) fail.

Back

Programming ROS2 Robots with RCLAda

Home

Speaker Alejandro Mosteo
RoomAW1.125
TrackAda
Time13:30 - 13:50
Event linkView original entry

The Robot Operating System (ROS) is one of the chief frameworks
for service robotics research and development. The next iteration
of this framework, ROS2, aims to improve critical shortcomings of
its predecessor like deterministic memory allocation and real-time
characteristics.



RCLAda is a binding to the ROS2 framework that enables the programming
of ROS2 nodes in pure Ada with seamless integration into the ROS2
workflow.

Back

Live Demo of Ada's Distribution Features

Home

Speaker Jean-Pierre Rosen
RoomAW1.125
TrackAda
Time14:00 - 14:50
Event linkView original entry

Ada incorporates in its standard a model for distributed execution.
It is an abstract model that does not depend on a particular kind of
network or any other means of communication, and that preserves full
typing control across partitions. This presentation briefly exposes the
principles of Ada's distribution model, then shows the possibilities
with live demos across different machines and operating systems.

Back

Writing Shared Memory Parallel Programs in Ada

Home

Speaker Jan Verschelde
RoomAW1.125
TrackAda
Time15:00 - 15:20
Event linkView original entry

Tasks in Ada are effective to speed up computations on multicore
processors. In writing parallel programs we determine the granularity
of the parallelism with respect to the memory management. We have to
decide on the size of each job, the mapping of the jobs to the tasks,
and on the location of the input and output data for each job.



A multitasked Newton's method will show the effectiveness of Ada to
speed up the computation of power series. This application belongs
to the free and open source package PHCpack, a package to solve
polynomial systems by polynomial homotopy continuation.

Back

Spunky: a Genode Kernel in Ada/SPARK

Home

Speaker Martin Stein
RoomAW1.125
TrackAda
Time15:30 - 15:50
Event linkView original entry

The Genode OS framework is an open-source tool kit for building highly
secure component-based operating systems scaling from embedded devices
to dynamic desktop systems. It runs on a variety of microkernels
like SeL4, NOVA, and Fiasco OC as well as on Linux and the Muen SK.
But the project also features its own microkernel named "base-hw"
written in C++ like most of the Genode framework.



Spunky is a pet project of mine. Simply put, it's an approach to
re-implement the design of the "base-hw" kernel first in Ada and
later in SPARK with the ultimate goal to prove its correctness.
It is also an opportunity to learn how Genode can benefit from Ada
and SPARK in general and promote the use of safety-oriented languages
in the project.

Back

Alire: Ada Has a Package Manager

Home

Speaker Fabien Chouteau
Pierre-Marie de Rodat
Alejandro Mosteo
RoomAW1.125
TrackAda
Time16:00 - 16:50
Event linkView original entry

Alire (Ada LIbrary REpository) is a package manager project for the
Ada/SPARK community. The goal of a package manager is to facilitate
collaboration within the community and to lower the barrier of entry
for beginners.

In this talk we will present the Alire project, what it can do for
you and how you can contribute and give more visibility to your
Ada/SPARK projects.



We will also provide a tutorial to show how to use Alire to create
a library and then publish it for others to use.

Back

Protect Sensitive Data with Ada Keystore

Home

Speaker Stephane Carrez
RoomAW1.125
TrackAda
Time17:00 - 17:20
Event linkView original entry

Storing passwords and secret configuration is a challenge for an
application. Ada Keystore is a library that stores arbitrary content
by encrypting it in a secure keystore (AES-256, HMAC-256).

The talk presents the project and shows how to use the Ada Keystore
library to get or store secret information in a secure manner.
The presentation explains how Ada features such as types, protected
types, tasks, pre/post conditions have helped during the development
of this project.

Back

EUgen: a European Project Proposal Generator

Home

Speaker Riccardo Bernardini
RoomAW1.125
TrackAda
Time17:30 - 17:50
Event linkView original entry

Whoever has written a research project proposal knows how unnerving it
can be. The actual project description (made of work packages, tasks,
deliverable items, ...) has lots of redundancies and cross-references
that make its coherence as frail as a house of cards. For example,
if the duration of a task is changed, most probably you'll need to
update the effort in person-months of the task and of the enclosing
work package; you must update the start date of dependent tasks and
the delivery date of any deliverable items; most probably the
WP efforts and length need updating too; not to mention the need to
update all the summary tables (summary of efforts, deliverables,
...) and the GANTT chart too. Any small change is likely to start a ripple
of updates, and the probability of forgetting something and producing an
incoherent project description is large. Given the harsh competition
in project funding, if your project is incoherent the probability of
getting funded is nil.



One day I got sick of this state of affairs and wrote my own project
generator: 10k lines of Ada code that reads a non-redundant project
description from a simple-format text file and produces a set of files
ready to be imported into the proposal, GANTT chart included. The user
can specify dependencies between different items (e.g., this deliverable
is produced at the end of this task, this milestone is reached when
this deliverable is available, this task must begin after this other
task...) and the program automatically computes all the dates.
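The automatic date computation can be sketched as a pass over the dependency graph: each item starts when everything it depends on has ended. This is a conceptual Python sketch under invented names; EUgen itself is 10k lines of Ada and handles far more item kinds.

```python
# Conceptual sketch of dependency-driven scheduling: an item starts
# when all of its prerequisites have ended. Names and data are invented.
def schedule(tasks, deps):
    """tasks: {name: duration_in_months}; deps: {name: [prerequisites]}"""
    start, end = {}, {}

    def resolve(name):
        if name not in end:
            # start = latest end time among prerequisites (0 if none)
            start[name] = max((resolve(d) for d in deps.get(name, [])), default=0)
            end[name] = start[name] + tasks[name]
        return end[name]

    for name in tasks:
        resolve(name)
    return start, end

tasks = {"T1.1": 6, "T1.2": 4, "D1.1": 0}   # D1.1: a deliverable, zero duration
deps = {"T1.2": ["T1.1"], "D1.1": ["T1.2"]} # D1.1 is due when T1.2 ends
start, end = schedule(tasks, deps)
print(start["T1.2"], end["D1.1"])  # -> 6 10
```

Change one duration and every dependent date follows automatically, which is precisely the ripple of updates the previous paragraph complains about doing by hand.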

Both the input parser and the output processors are implemented using a plugin
structure that makes it easy to write new parsers to read different
formats, or new output processors to produce output in different
formats. Currently, a parser for a simple ad-hoc format and an output
processor that produces LaTeX files are provided; a new processor based
on the template expander protypo is currently being implemented.



Did I eat my own dog food? Well, yes, I did. I used it to write a
proposal (still under evaluation) and it served me well.

Back

On Rapid Application Development in Ada

Home

Speaker Tomasz Maluszycki
RoomAW1.125
TrackAda
Time18:00 - 18:20
Event linkView original entry

In the Ada world we typically write mission critical software that
just has to work, but in a way one could argue that a lot more software
is mission critical than is usually admitted.



What does it take to actually perform rapid application development
in any language? Can we do it in Ada and why would we do so?

A quick look into some language features that can be [ab]used to
enable quick development of 'just a prototype' - which, as practice
shows, is often deployed into production, usually without proper
quality controls or predictable outcomes.

Back

Ada-TOML: a TOML Parser for Ada

Home

Speaker Pierre-Marie de Rodat
RoomAW1.125
TrackAda
Time18:30 - 18:50
Event linkView original entry

The world of generic structured data formats is full of contenders:
the mighty XML, the swift JSON, the awesome YAML, ... Alas, there
is no silver bullet: XML is very verbose, JSON is not convenient for
humans to write, YAML is known to be hard to parse, and so on.



TOML is yet another format whose goal is to be a good configuration
language: obvious semantics, convenient to write and easy to parse
in general-purpose programming languages.



In this talk, I'll shortly describe the TOML format and show a few
use cases in the real world. I'll then present the ada-toml library
itself: its high-level architecture and examples.

Back

Informal Discussions & Closing

Home

Speaker Dirk Craeynest
Jean-Pierre Rosen
Room AW1.125
Track Ada
Time 18:50 - 19:00

Informal discussion on ideas and proposals for future events.

Back

The good and the bad sides of developing open source tools for neuroscience

Home

Speaker Jan Grewe
Room AW1.126
Track Open Research Tools and Technologies
Time 10:30 - 11:00

The reproducibility crisis has shocked the scientific
community. Several papers describe this issue, and the scientific
community has taken steps to address it. For example, several
initiatives have been founded to foster openness and standardisation
in different scientific communities (e.g. the INCF[1] for the
neurosciences). Journals encourage sharing of the data underlying
the presented results; some even make it a requirement.



What is the role of open source solutions in this respect? Where are the problems with
open source projects in (neuro-)sciences?



In this presentation I will address these questions using the example
of the entirely open-source workflow in our laboratory[2] and
our efforts, undertaken together with the German Neuroinformatics Node
(G-Node[5]), to develop generic solutions for storing metadata[3]
as well as for unifying data and metadata storage[4].



[1] https://incf.org
[2] https://github.com/bendalab
[3] https://github.com/g-node/python-odml
[4] https://github.com/g-node/nix
[5] https://g-node.org

Back

Challenges and opportunities in scientific software development

Home

Speaker Julia Sprenger
Room AW1.126
Track Open Research Tools and Technologies
Time 11:00 - 11:30

The approaches used in software development in an industry setting and in a scientific environment exhibit a number of fundamental differences. In industry, modern team development tools and methods (version control, continuous integration, Scrum, ...) are used to develop software in teams, with a focus on the final software product. In contrast, in a scientific environment a large fraction of code is produced by individual scientists who lack thorough training in software development and who program with a specific research goal in mind. Indeed, it is only in the last decades that scientific software development has started to become a fully recognized part of scientific work, and formal training in software development is still largely missing from the scientific curricula of many universities. Additionally, due to the exploratory nature of the scientific method at the frontier of knowledge, most projects require the implementation of custom code. The combination of these circumstances promotes the development of scientific code not suited for sharing and long-term maintenance, limiting the reusability and reproducibility of scientific data and findings. The systematic development and adoption of open source packages by the scientific community can remedy this situation. Here we present exemplary open source packages from the field of neuroscience and discuss the special requirements for open source software development and services in this research area.



Acknowledgements:
This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 785907 (Human Brain Project SGA2). Supported by the NFDI Neuroscience Initiative.

Back

NeuroFedora: Enabling Free/Open Neuroscience

Home

Speaker Aniket Pradhan
Room AW1.126
Track Open Research Tools and Technologies
Time 11:30 - 12:00

NeuroFedora is an initiative to provide a ready-to-use Fedora-based Free/Open source software platform for neuroscience. We believe that, like Free software, science should be free for all to use, share, modify, and study. The use of Free software also aids reproducibility, data sharing, and collaboration in the research community. By making the tools used in the scientific process easier to use, NeuroFedora aims to take a step towards this ideal.

The computer has become an indispensable resource in modern neuroscience. From gathering data, simulating computational models, and analysing large amounts of information, to collaboration and communication tools for community development, software is now a necessary part of the research pipeline.



The neuroscience community is gradually moving to the use of Free/Open Source software (FOSS); however, the software tools used in neuroscience research are often complicated and sophisticated to use. Researchers from fields other than computing must spend considerable resources on setting up and managing the computing environment and the software tools. This limits the portability of the software, making its installation time-consuming and sometimes tricky.



We present NeuroFedora - a Fedora-based operating system for neuroscientists.
We have leveraged the infrastructure resources of the FOSS Fedora community to develop an operating system that includes a plethora of ready-to-use neuroscience software.
We follow the standard software development and quality assurance practices set out by the Fedora community to provide an integrated platform for researchers to use.
Furthermore, NeuroFedora is well integrated with other software, such as desktop environments, text editors, and other daily-use and development tools.



A NeuroFedora lab image is now available, with over 130 neuroscience packages ready to use. With up-to-date documentation at neuro.fedoraproject.org and 120+ packages in the queue, we encourage more FOSS enthusiasts to join the team to help NeuroFedora better aid the open (neuro)science and research community.

Back

Spotlight on Free Software Building Blocks for a Secure Health Data Infrastructure

Home

Speaker Markus Suhr
Marcel Parciak
Room AW1.126
Track Open Research Tools and Technologies
Time 12:00 - 12:30

Health data is traditionally held and processed in large and complex mazes of hospital information systems. The market is dominated by vendors offering monolithic and proprietary software due to the critical nature of the supported processes and - in some cases - due to legal requirements. The “digital transformation”, “big data” and “artificial intelligence” are some of the hypes that demand improved exchange of health care data in routine health care and medical research alike. Exchanging data at these scales requires open data formats and protocols, multi-stakeholder collaboration, and agile development. As an example, HL7, the de-facto messaging standards organization in medicine, noticed a much more positive response from the medical research community to its openly available FHIR specification than to the for-members-only, XML-based HL7v3 messaging standard specification.
While some past (or rather: ongoing) projects on a national scale in the German health care system have tried centralized, top-down specification and development approaches, more recent infrastructure projects embrace the competitive collaboration of a decentralized, bottom-up strategy. As a result, the importance and recognition of free software are increasing in the Medical Informatics research community.
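To make the contrast concrete: FHIR resources are plain JSON against an openly published schema, so any language's standard tooling can read them. Below is a hypothetical, heavily trimmed Patient resource (the field names follow the FHIR specification; the values are invented and not from the talk):

```python
# Reading a minimal FHIR Patient resource with nothing but the stdlib.
import json

patient = json.loads("""
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1980-04-02"
}
""")

assert patient["resourceType"] == "Patient"
assert patient["name"][0]["given"] == ["Jane"]
```

No member-only tooling or XML stack is required, which is part of why the research community responded so positively to the open specification.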



In a series of rapid spotlights, we present tools and frameworks that serve as cornerstones for the envisioned health data exchange infrastructure, including: Organization and collaboration tools; data extraction from clinical source systems, data transformation and de-identification; data management systems and long-term archival using persistent globally-unique object identifiers; federated queries across multiple independently managed clinical data integration centers.



We aim to encourage participants to actively add tools and frameworks to the discussion and highlight their experiences and challenges with using open systems in Medical Informatics.

Speaker bio:



Marcel Parciak and Markus Suhr are research associates at the University Medical Center Göttingen (UMG), Department of Medical Informatics.



Marcel graduated from the Göttingen Medical Informatics Master program in 2018 and is currently a PhD student, investigating the challenges of data provenance in medical research. He is a system architect for the HiGHmed project that facilitates innovative federated infrastructure for cross-organisational secondary use of health care data.



Markus started his professional career in 2014 as a system administrator and software developer at the UMG hospital data center. He joined the Department of Medical Informatics in 2017, becoming lead developer for a free software project and working on multiple biomedical research projects. Since 2019 he has been technical lead for the newly created Medical Data Integration Center. Markus is a supporter of the Free Software Foundation Europe.

Back

DataLad

Home

Speaker Michael Hanke
Room AW1.126
Track Open Research Tools and Technologies
Time 12:30 - 13:00

Contemporary sciences are heavily data-driven, but today's data management technologies and sharing practices fall at least a decade behind their software ecosystem counterparts.
Merely providing file access is insufficient for a simple reason: data are not static. Data often do (and should!) continue to evolve; file formats can change, bugs will be fixed, new data are added, and derived data need to be integrated.
While (distributed) version control systems are a de-facto standard for open source software development, a similar level of tooling and culture is not present in the open data community.



The lecture introduces DataLad, a software tool that aims to address this problem by providing a feature-rich API (command line and Python) for the joint management of all digital objects of science: source code, data artifacts (as well as their derivatives), and essential utilities, such as container images of the computational environments employed.
A DataLad dataset represents a comprehensive and actionable unit that can be used privately, or be published on today's cyberinfrastructure (GitLab, GitHub, Figshare, S3, Google Drive, etc.) to facilitate large and small-scale collaborations.



In addition to essential version control tasks, DataLad aids data discovery by supporting a plurality of evolving metadata description standards. Moreover, DataLad is able to capture data provenance information in a way that enables programmatic re-execution of computations, and as such provides a key feature for the implementation of reproducible science.
DataLad is extensible and customizable to fine-tune its functionality to specific domains (e.g., a field of science or organizational requirements).



DataLad is built on a few key principles:




  1. DataLad only knows about two things: Datasets and files.
    A DataLad dataset is a collection of files in folders.
    And a file is the smallest unit any dataset can contain.
    At its core, DataLad is a completely domain-agnostic, general-purpose tool to manage data.


  2. A dataset is a Git repository.
    All features of the version control system Git
    also apply to everything managed by DataLad.


  3. A DataLad dataset can take care of managing and version controlling arbitrarily large data.
    To do this, it has an optional annex for (large) file content.
    Thanks to this annex, DataLad can track files that are TBs in size
    (something that Git alone could not do). This allows you to restore previous versions of data,
    transform and work with it while capturing all provenance,
    or share it with whomever you want. At the same time, DataLad does all of the magic
    necessary to get this important feature to work quietly in the background.
    The annex is set up automatically, and the tool git-annex manages it all under the hood.


  4. DataLad follows the social principle to
    minimize custom procedures and data structures. DataLad will not transform
    your files into something that only DataLad or a specialized tool can read.
    A PDF file (or any other type of
    file) stays a PDF file (or whatever other type of file it was)
    whether it is managed by DataLad or not. This guarantees that users will not lose
    data or data access if DataLad vanishes from their system, or even if DataLad
    vanishes from the face of the Earth. Using DataLad thus does not require or generate
    data structures that can only be used or read with DataLad -- DataLad does not
    tie you down, it liberates you.


  5. Furthermore, DataLad is developed for complete decentralization.
    There is no required central server or service necessary to use DataLad. In this
    way, no central infrastructure needs to be maintained (or paid for) --
    your own laptop is the perfect place for your DataLad project to live, as is your
    institution's webserver, or any other common computational infrastructure you
    might be using.


  6. Simultaneously, though, DataLad aims to
    maximize the (re-)use of existing 3rd-party data resources and infrastructure.
    Users can use existing central infrastructure should they want to.
    DataLad works with any infrastructure from GitHub to
    Dropbox, Figshare, or institutional repositories,
    enabling users to harvest all of the advantages of their preferred
    infrastructure without tying anyone down to central services.
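Principle 3 (the annex) rests on content-addressed storage, the idea underlying git-annex. The sketch below is a hypothetical illustration of that idea only, not DataLad's actual code: real git-annex uses Git-managed symlinks or pointer files and supports several key backends. The function names (`annex_key`, `add_to_annex`) are mine.

```python
import hashlib
from pathlib import Path

def annex_key(path: Path) -> str:
    """Derive a content-addressed key from file content, as
    checksum-based git-annex backends do."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def add_to_annex(path: Path, annex_dir: Path) -> Path:
    """Move file content into the annex store and leave a tiny
    pointer behind; the pointer is cheap to version in Git while
    the (possibly huge) content lives outside Git's object store."""
    key = annex_key(path)
    annex_dir.mkdir(parents=True, exist_ok=True)
    store = annex_dir / key
    if not store.exists():           # identical content is stored once
        store.write_bytes(path.read_bytes())
    path.write_text(key)             # replace file with its pointer
    return store
```

Because the key is derived from the content, every historical version of a file remains retrievable as long as its keyed copy exists somewhere, which is what lets a dataset version-control terabytes without bloating the Git repository itself.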



Back

Frictionless Data for Reproducible Research

Home

Speaker Lilly Winfree
Room AW1.126
Track Open Research Tools and Technologies
Time 13:00 - 13:30

Generating insight and conclusions from research data is often not a straightforward process. Data can be hard to find, archived in difficult-to-use formats, poorly structured and/or incomplete. These issues create “friction” and make it difficult to use, publish and share data. The Frictionless Data initiative (https://frictionlessdata.io/) at Open Knowledge Foundation (http://okfn.org) aims to reduce friction in working with data, with a goal to make it effortless to transport data among different tools and platforms for further analysis, and with an emphasis on reproducible research and open data. The Frictionless Data project comprises a set of specifications (https://frictionlessdata.io/specs/) for data and metadata interoperability, accompanied by a collection of open source software libraries (https://frictionlessdata.io/software/) that implement these specifications, and a range of best practices for data management. Over the past year and a half, we have been working specifically with the researcher community to prototype using Frictionless Data’s open source tools to improve researchers’ data workflows and champion reproducibility. This talk will discuss the technical ideas behind Frictionless Data for research and will also showcase recent collaborative use cases, such as how oceanographers implemented Frictionless Data tooling into their data ingest pipelines to integrate disparate data while maintaining quality metadata in an easy-to-use interface.
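For a concrete sense of the specifications, here is a small, invented datapackage.json descriptor in the style of the Frictionless Data specs (the field layout follows the published Data Package and Table Schema specifications; the resource names and values are illustrative, not from the talk):

```python
# A Data Package descriptor: the data stays in plain CSV, while the
# descriptor carries the metadata and column schema that travel with it.
import json

descriptor = json.loads("""
{
  "name": "ocean-temperatures",
  "resources": [
    {
      "name": "measurements",
      "path": "measurements.csv",
      "schema": {
        "fields": [
          {"name": "station_id", "type": "string"},
          {"name": "measured_at", "type": "datetime"},
          {"name": "temperature_c", "type": "number"}
        ]
      }
    }
  ]
}
""")

fields = descriptor["resources"][0]["schema"]["fields"]
assert [f["name"] for f in fields] == ["station_id", "measured_at", "temperature_c"]
```

Any tool that understands the spec can validate, load, or transport the CSV without guessing at its structure, which is how "friction" is reduced when data moves between platforms.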

Expected prior knowledge / intended audience



The audience should be familiar with the themes of research, using data in various forms from various sources, and scientific computing. The talk is intended for those interested in data management, data cleaning, metadata, and using open research data.



Speaker bio



Lilly Winfree is the Product Owner of the Frictionless Data for Reproducible Research Project at Open Knowledge Foundation, where she solves researchers’ technical data management issues. She has her PhD in neuroscience, and has been active in the open data, open source, and open science communities for four years. Lilly has given numerous conference presentations and workshops over the past decade, and enjoys presenting on technical topics to technical and non-technical audiences.



Links to code / slides / material for the talk (optional)



https://github.com/frictionlessdata/
http://frictionlessdata.io/software/



Links to previous talks by the speaker



Workshop presentation: http://bit.ly/FDepfl
Talk from a previous position: https://youtu.be/4Jqu8mBXcmA

Back

On the road to sustainable research software.

Home

Speaker Mateusz Kuzak
Room AW1.126
Track Open Research Tools and Technologies
Time 13:30 - 13:45

ELIXIR is an intergovernmental organization that brings together life science resources across Europe. These resources include databases, software tools, training materials, cloud storage, and supercomputers.

One of the goals of ELIXIR [1] is to coordinate these resources so that they form a single infrastructure. This infrastructure makes it easier for scientists to find and share data, exchange expertise, and agree on best practices. ELIXIR's activities are divided into five areas, known as “platforms”: Data, Tools, Interoperability, Compute and Training. The ELIXIR Tools Platform works to improve the discovery, quality and sustainability of software resources. The Software Best Practices task of the Tools Platform aims to raise the quality and sustainability of research software by producing, adopting, promoting and measuring information standards and best practices applied to the software development life cycle. We have published four simple recommendations (4OSS) to encourage best practices in research software [2] and the Top 10 metrics for life science software good practices [3].



The 4OSS simple recommendations are as follows:
- Develop publicly accessible open-source code from day one.
- Make software easy to discover by providing software metadata via a popular community registry.
- Adopt a licence and comply with the licences of third-party dependencies.
- Have clear and transparent contribution, governance and communication processes.



In order to encourage researchers and developers to adopt the 4OSS recommendations and build FAIR (Findable, Accessible, Interoperable and Reusable) software, the best practices group, in partnership with the ELIXIR Training Platform, The Carpentries [4][5], and other communities, is creating a collection of training materials [6]. The next step is to adopt, promote, and recognise these information standards and best practices by developing comprehensive guidelines for software curation, and through workshops that train researchers and developers in the adoption of software best practices and the improvement of the usability of research software tools.



Additionally, the ELIXIR Software Best Practices WG is currently developing a Software Management Plan in the context of the metrics for assessing the adoption of good software development practices [7], and will subsequently develop practical guidelines to support its implementation in ELIXIR projects. We will work with the newly formed ReSA (Research Software Alliance) to facilitate the adoption of this plan by the broader community.
In the past year, the Working Group has also been working on improving the tooling and practices around software citation. This work has been done in collaboration with the eLife journal, the Software Sustainability Institute, Datacite, and Software Heritage, over multiple sprint events: (i) BOSC CoFest, (ii) eLife Innovation Sprint, (iii) FORCE19 Research Software Hackathon, and (iv) BioHackathon.



[1] “ELIXIR | A distributed infrastructure for life-science information” Internet: https://www.elixir-europe.org/, [Sep. 16, 2018]
[2] Jiménez RC, Kuzak M, Alhamdoosh M et al. (2017) “Four simple recommendations to encourage best practices in research software” F1000Research [Online]. 6:876. https://doi.org/10.12688/f1000research.11407.1
[3] Top 10 metrics for life science software good practices https://doi.org/10.12688/f1000research.9206.1
[4] “carpentries.org” Internet: carpentries.org, Sep. 15, 2018 [Sep. 16, 2018]
[5] “ELIXIR teams up with The Carpentries to boost its training programme | ELIXIR”, Internet: https://www.elixir-europe.org/news/elixir-carpentries-agreement, Aug. 17, 2018 [Sep. 16, 2018]
[6] SoftDev4Research/4OSS-lesson https://doi.org/10.5281/zenodo.2565040
[7] Top 10 metrics for life science software good practices https://doi.org/10.12688/f1000research.9206.1

Back

Stylo: a user-friendly text editor for humanities scholars

Home

Speaker Antoine Fauchié
Room AW1.126
Track Open Research Tools and Technologies
Time 13:45 - 14:00

As an editor for WYSIWYM text, Stylo is designed to change the entire digital editorial chain of scholarly journals in the field of human sciences.



Stylo (https://stylo.ecrituresnumeriques.ca) is designed to simplify the writing and editing of scientific articles in the humanities and social sciences. It is intended for authors and publishers engaged in high quality scientific publishing. Although the structuring of documents is fundamental for digital distribution, this aspect is currently delayed until the end of the editorial process. This task should, however, be undertaken early on in the process; it must be considered by the author himself. The philosophy behind Stylo consists in returning the task of managing the publication markup to researchers. This repositioning of tasks relating to the editorial process relies on the author’s semantic rather than graphic skills.



This lightning talk will be the opportunity to present this tool and several publishing projects realized with Stylo.

Back

Using Advene to accompany research in AudioVisual Digital Humanities

Home

Speaker Olivier Aubert
Room AW1.126
Track Open Research Tools and Technologies
Time 14:00 - 14:15

Advene is a video annotation platform (free software) that aims at accompanying scholars in their audiovisual analyses workflow. It promotes flexible and evolving annotation structures and interfaces in order to deal with the inherent dynamic nature of analysis. In this presentation, I will present the platform itself, and illustrate its usage through existing Digital Humanities projects that use it, from structuring videos for interview analyses to implementing a workflow for semantic annotation of movies.

The Advene project (Annotate Digital Video, Exchange on the NEt) aims
at providing a model and a format to share annotations about digital
video documents (movies, courses, conferences...), as well as tools to
edit and visualize the hypervideos generated from both the annotations
and the audiovisual documents.



With the Advene software, teachers, moviegoers, etc. can exchange
multimedia comments and analyses about video documents. The project
also aims at studying the way that communities of users (teachers,
moviegoers, students...) will use these self-publishing tools to share
their audiovisual "readings", and to envision new editing and viewing
interfaces for interactive comment and analysis of audiovisual
content.



Advene is a GPL desktop application, based on Python 3, GTK and GStreamer.
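The annotation model described above can be pictured with a minimal, hypothetical sketch (this is not Advene's actual data model; the names `Annotation` and `at_time` are mine): an annotation is typed content anchored to a time span, and hypervideo views are built by querying annotations against the playback position.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A typed note anchored to a time span (milliseconds) of a video."""
    type: str
    begin: int
    end: int
    content: str

def at_time(annotations, t: int):
    """All annotations active at playback time t -- the basic query
    behind displaying comments alongside a playing video."""
    return [a for a in annotations if a.begin <= t < a.end]

notes = [
    Annotation("chapter", 0, 60000, "Intro"),
    Annotation("comment", 30000, 45000, "Camera movement"),
]
assert [a.content for a in at_time(notes, 35000)] == ["Intro", "Camera movement"]
```

Because the structure is just typed, timestamped data, annotation schemas can evolve with the analysis, which is the flexibility the abstract emphasises.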

Back

Shrivelling world

Home

Speaker Nicolas Roelandt
Room AW1.126
Track Open Research Tools and Technologies
Time 14:15 - 14:30

Representing geographical time-space is a fundamental issue in geography, addressing core questions of the discipline, i.e. where places are and what distances separate them. Yet, considering the properties of geographical time-space shaped by transport means, no satisfying cartographic representation - including classical maps and plastic space approaches - has been proposed so far.
The "shriveling_world" project aims at producing images of the global geographical time-space, using the third dimension, as in time-space relief maps. The word "shriveling" was introduced by Waldo Tobler in his comments on the Mathis-L'Hostis time-space relief image, in order to describe the complex contraction process suggested by the model.
The FOSDEM presentation aims at opening the code to the scientific community, now that the application is close to a first functional version.

Back

Empowering social scientists with web mining tools

Home

Speaker Guillaume Plique
Room AW1.126
Track Open Research Tools and Technologies
Time 14:30 - 15:00

Web mining, as represented mostly by scraping & crawling practices, is not a straightforward task and requires a variety of skills related to web technologies.
However, web mining can be incredibly useful to the social sciences, since it enables researchers to tap into a formidable source of information about society.



But researchers may not have the possibility to invest copious amounts of time into learning web technologies inside and out. They usually rely on engineers to collect data from the web.
The object of this talk is to explain how Sciences Po's médialab designed & developed tools to empower researchers and enable them to perform web mining tasks to answer their research questions. We will tackle examples of such issues during this talk.






Examples will be taken mainly from the minet CLI tool and the artoo.js bookmarklet.
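To give a flavour of the skill gap involved, here is a toy, hypothetical scraping rule written against Python's standard library (none of this code comes from minet or artoo.js; those tools automate and package exactly this kind of work so researchers don't have to write it):

```python
# Extract every <h2 class="title"> from a page -- the kind of one-off
# extraction rule a researcher typically needs.
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

def scrape_titles(html: str) -> list[str]:
    parser = TitleScraper()
    parser.feed(html)
    return [t for t in parser.titles if t]

html = '<html><body><h2 class="title">First</h2><p>x</p><h2 class="title">Second</h2></body></html>'
assert scrape_titles(html) == ["First", "Second"]
```

Even this small example demands familiarity with HTML structure, parsers, and edge cases; add fetching, throttling and pagination and the engineering burden becomes clear.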



Speaker



Guillaume Plique is a research engineer working for Sciences Po's médialab. He assists social sciences researchers daily with their methods and maintains a variety of FOSS tools geared toward the social sciences community as well as developers.

Back

Revamping OpenRefine

Home

Speaker Antonin Delpeuch
Room AW1.126
Track Open Research Tools and Technologies
Time 15:00 - 15:30

OpenRefine is a data transformation tool popular in many communities: data journalism, semantic web, GLAMs, scientific research… In this talk I give an overview of our recent efforts to revamp this project as it approaches its 10th anniversary. We are working on exciting improvements that should help alleviate some of the most salient issues faced by our users. My intention is not to lecture the audience about how to deal with technical debt or how to grow a contributor community - I instead seek feedback and want to spark discussions about our choices. Let us know what you think and help us take good care of this fantastic tool!

Back

Pocket infrastructures to bridge reproducible research, live coding, civic hacktivism and data feminism for/from the Global South

Home

Speaker Santiago Bragagnolo
Room AW1.126
Track Open Research Tools and Technologies
Time 15:30 - 16:00

We will showcase Grafoscopio, a flexible, extensible, self contained "pocket infrastructure", which simplifies infrastructure to amplify participation, so reproducible research and publishing, agile data storytelling and custom data visualization can be used in fields like investigative journalism, data feminism and civic hacktivism. We will show prototypes developed with Grafoscopio in the previously mentioned domains, the motivations behind Grafoscopio and the local community practices around it that deconstruct binary relations of power (software developer/user, data producer / consumer, software binary / source code, male/female) and approach reproducible research practices and tools from a perspective located and embodied in a particular place of the Global South in Latin America and in contrast/dialogue with Global North perspectives.

Reproducible research (and publishing) has been confined mostly to academic places. But it has a lot of potential in several other places like investigative journalism, data feminism and civic hacktivism, as we have showcased by building several prototypes, including: making the so called "Panama Papers" data leak story reproducible; creating domain specific visualizations for medicine information released by 16 governments; porting the Spanish Data Journalism Handbook and the Data Feminism book to our "pocket infrastructures" and the creation of agile and resilient tools and practices to write and publish together (see proposal links for a detailed view of such prototypes).



To bridge reproducible research and publishing, agile data storytelling and custom data visualization with the previously mentioned domains, we have co-designed, developed, used and extended a set of inclusive approaches and tools for/from the Global South, which we call "pocket infrastructures". Our "pocket infrastructures" simplify infrastructure to amplify participation; they are mostly self-contained, flexible and extensible, work well with good, low or no connectivity, and run on a variety of hardware, from a USB drive to low-end computers, to servers and the "cloud", and anything in between and beyond.
This is in sharp contrast with exclusionary approaches like "Big Data" or others that start with big and/or overcomplex infrastructures and are getting traction in the Global North (or are being imported from there to the Global South as the "only way" forward regarding reproducibility).



Grafoscopio is one of such pocket infrastructures for reproducible research and publishing, agile visualization and data storytelling and this lecture will showcase Grafoscopio, the motivations behind it, and some prototypes developed with it, and the community practices that allow the development of such projects and prototypes deconstructing also binary relations of power (software developer/user, data producer / consumer, software binary / source code, male/female).



Update: this talk has been proposed by Offray Luna who unfortunately couldn't make it to FOSDEM. Santiago Bragagnolo will present this talk on his behalf.

Back

Journalists are researchers like any others

Home

Speaker Anne L'Hôte
Bruno Thomas
Room AW1.126
Track Open Research Tools and Technologies
Time 16:00 - 16:30

We are not journalists. But we are developers working for journalists. When we receive leaks, we are flooded by the huge number of documents and the huge number of questions that journalists have while trying to dig into the leak. Among others:



* Where to begin?
* How many documents mention "tax avoidance"?
* How many languages are in this leak?
* How many documents are in CSV?


Journalists have more or less the same questions as researchers! So to help them answer all these questions, we developed Datashare. In a nutshell, Datashare is a tool to answer all your questions about a corpus of documents: just like Google, but without Google and without sending information to Google. It extracts content and metadata from all types of documents and indexes them. Then, it detects people, locations, organizations and email addresses. The web interface exposes all of that to give you a complete overview of your corpus and to let you search through it. Datashare also lets you star and tag documents.
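The core mechanics of "extract, then index, then search" can be sketched with a toy inverted index (a hypothetical illustration only; Datashare itself builds on mature components for document extraction and full-text search rather than anything like this):

```python
import re
from collections import defaultdict

def build_index(corpus: dict[str, str]) -> dict[str, set[str]]:
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for term in re.findall(r"[a-z]+", text.lower()):
            index[term].add(doc_id)
    return index

def search(index, query: str) -> set[str]:
    """Documents containing every term of the query (AND search)."""
    terms = re.findall(r"[a-z]+", query.lower())
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

corpus = {
    "doc1": "Offshore tax avoidance schemes",
    "doc2": "Annual tax report",
}
idx = build_index(corpus)
assert search(idx, "tax avoidance") == {"doc1"}
```

Once every document is indexed this way, questions like "how many documents mention tax avoidance?" become a single lookup rather than a manual trawl through the leak.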



We didn't want to reinvent the wheel, so we reused assets that have been proven to work well. How did we end up with Datashare from a heterogeneous environment? Initially we had:



- a command line tool to extract text from huge document corpora
- a proof of concept of NLP pipelines in Java
- a shared index based on Blacklight / RoR and Solr
- open source tools and frameworks


Issues we had to fix:



- UX
- scalability of Solr with millions of documents
- integration of all the tools in one
- maintainability and robustness while increasing code base

Back

AMENDMENT Transforming scattered analyses into a documented, reproducible and shareable workflow

Home

Speaker Sébastien Rochette
Room AW1.126
Track Open Research Tools and Technologies
Time 16:30 - 17:00

This presentation is a feedback from experience on helping a researcher transforming a series of scattered analyses into a documented, reproducible and shareable workflow.

Time allocated by researchers to program the analyses required to answer their scientific questions is usually low compared to other tasks. As a result, multiple small experiments are developed and outputs are gathered as best as possible to be presented in a scientific paper. However, science is not only about sharing results but also about sharing methods. How can we make our results reproducible when we have developed multiple, usually undocumented analyses? What do we do if the program is only applicable to our own computer's directory architecture? It is always possible to take time to rewrite, re-arrange and document analyses when we want or have to share them. Here, I will take the example of a "collaboration fest" where we dissected the R scripts of a researcher in ecology. We started a reproducible, documented and open-source R package along with its website, automatically built using continuous integration: https://cesco-lab.github.io/Vigie-Chiro_scripts/.

However, can we think, earlier in the process, of a better way to use our small programming time slots by adopting a method that will save us time in the future? To this end, I will present a documentation-first method that takes little time while writing analyses but saves a lot when the time comes to share your work.
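The talk's example is in R, but the documentation-first idea translates to any language. A minimal Python sketch of it (my own illustration using doctest; the function and data are invented): write the documented usage example first, then the implementation, so the documentation doubles as a test from day one.

```python
def mean_abundance(counts):
    """Return the mean of a list of species counts.

    Documentation-first: this docstring, with its runnable example,
    is written before the function body, so the intended usage is
    fixed and checkable from the start.

    >>> mean_abundance([2, 4, 6])
    4.0
    """
    return sum(counts) / len(counts)

if __name__ == "__main__":
    # The example in the docstring is executed as a test.
    import doctest
    doctest.testmod()
```

In R, the equivalent workflow would use roxygen2 examples and vignettes checked by continuous integration, as in the package linked above.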

Session type (Lecture or Lightning Talk)



Lecture



Session length (20-40 min, 10 min for a lightning talk)



30 min



Expected prior knowledge / intended audience



No prior knowledge expected. The example will be about building documentation for R software, but any developer, using any programming language, may be interested in the method adopted.



Speaker bio



Sébastien Rochette has a PhD in marine ecology. After a few years as a researcher in ecology, he joined ThinkR (https://rtask.thinkr.fr), a company giving courses and consultancy around the R software. Along with commercial activities, he is highly involved in the development of open-source R packages. He also shares his experience with the R community through free tutorials, blog posts, online help and conferences. https://statnmap.com/



Links to code / slides / material for the talk (optional)



I wrote a blog post in French about what I am planning to present: https://thinkr.fr/transformer-plusieurs-scripts-eparpilles-en-beau-package-r/

This topic is also related to another blog post: https://rtask.thinkr.fr/when-development-starts-with-documentation/



Links to previous talks by the speaker



Talks about R are in my Github repository: https://github.com/statnmap/prez/. The "README" lists talks that have a live recorded video.

As a researcher, I also gave multiple talks about marine science, modelling and other topics related to my research.



Please note that this talk was originally scheduled to be at 17h.
The talk originally in this slot was "Developing from the field." by Audrey Baneyx and Robin de Mourat which will now take place at 17h.

Back

Developing from the field

Home

Speaker Robin De Mourat
RoomAW1.126
TrackOpen Research Tools and Technologies
Time17:00 - 17:30
Event linkView original entry

Software design and development within interdisciplinary research teams is a specific activity which closely associates makers and practitioners in the equipment of experimental research methods and practices. This closeness allows practitioners to tackle research endeavours’ specific requirements, such as understanding the methodological assumptions encoded within the tools. It also induces a specific relationship between “makers” and their publics of “users”: a non-commercial, situated and case-based crafting process, implying shifting roles and complex decision making. How does this peculiar context affect the design and valorization practices around open research tools and their evolution? What are the benefits and difficulties of such settings, in terms of work organization, pedagogical approaches, and scientific methodology? What can be shared with other contexts such as activism or journalism? Grounded in the presentation of several case studies of research tools’ design and development elaborated at the médialab of Sciences Po, this talk will offer an account of how an interdisciplinary research environment affects and dialogues with established methods of design (“participative design”, “user experience research”), development (“agile methods”), and tool valorization and socialization.

Audrey Baneyx has a PhD in artificial intelligence from Paris 6 University. She is a research engineer at the médialab (Sciences Po, Paris), where she works at the intersection of digital methods, knowledge modelling and the design of pedagogical storytelling. She teaches digital culture and methods and, as a mediator, develops communities of practitioners around the médialab tools. She co-leads a research group focusing on gender issues online.



Robin de Mourat is a research designer at the médialab laboratory (Sciences Po, Paris). He works at the intersection between academic equipment and inquiry practices, combining a background in product design, design history & theory, and human-computer interaction, with diverse material and discursive experiments in the Humanities and Social Sciences. He has participated in the making of texts, things and conversations about the epistemology of design activities, interdisciplinary methodologies, and social & cultural studies of scholarly practices. He has been involved for several years in the development of advanced tools for academic writing and publishing in humanities and social sciences contexts.



Please note that this talk was originally scheduled to be at 16h30.
The talk originally in this slot was "Transforming scattered analyses into a documented, reproducible and shareable workflow" by Sébastien Rochette which will now take place at 16h30.
Note also that Audrey Baneyx couldn't make it; she will not co-present, although she co-wrote the talk.

Back

A community-driven approach towards open innovation for research communication

Home

Speaker Emmy Tsang
RoomAW1.126
TrackOpen Research Tools and Technologies
Time17:30 - 18:00
Event linkView original entry

The advancement of web technologies has created an opportunity for developing tools for real-time collaborations, text-mining, interactive data visualisations, sharing reproducible compute environments, etc. These tools can change the ways researchers share, discover, consume and evaluate research and help promote open science and encourage responsible research behaviours.



Through its Innovation Initiative, eLife invests heavily in software development, new product design, collaboration and outreach so that the potential for improvements in the digital communication of new research can start to be realised. In particular, we exclusively support the development of open-source tools, with extensible capabilities, that can be used, adopted and modified by any interested party, and we actively engage the community of open innovators.



In this talk, we will introduce the following projects:
* Reproducible Document Stack (RDS), an open-tool stack capturing code, data and compute environment in a live paper to improve research reproducibility (see demo here)
* Fostering collaboration and innovation through hacking: eLife Innovation Sprint



We believe that openness is crucial to the future of research, and by supporting the community and promoting open-source research software, we can help build a culture towards integral, collaborative, open and reusable research. We hope to share some of our visions and learnings, and invite feedback and contributions from the wider open-source community on the next steps forward.

Speaker bio



Emmy Tsang is the Innovation Community Manager at eLife, a non-profit organisation with the mission to accelerate research communication and discovery. She is responsible for the day-to-day running of the eLife Innovation Initiative, which supports the development of open-source tools, technologies and processes aimed at improving the discovery, sharing, consumption and evaluation of scientific research. Prior to joining eLife, Emmy completed a PhD in neuroscience at the European Molecular Biology Laboratory in Rome, Italy. She is passionate about building communities, fostering collaborations and developing technological solutions to make research more open, reproducible and user-friendly.



Twitter: @eLifeInnovation / @emmy_ft



Back

The Journal of Open Source Software

Home

Speaker Karthik Ram
RoomAW1.126
TrackOpen Research Tools and Technologies
Time18:00 - 18:30
Event linkView original entry

Researchers rarely cite software they use as part of their research. As a result, research software and the time spent developing it have become invisible scholarly contributions. This lack of visibility reduces the incentives that are necessary to produce and share high quality software that is essential for the progress of science. The Journal of Open Source Software (JOSS) is an open source, open access journal primarily designed to make it easier for those individuals authoring research software to gain career credit for their work by publishing short software papers. Software papers are a recognized mechanism for authors of research software to create a citable ‘entity’ which can easily be cited in journals and as such directly impact a researcher’s career via established metrics such as the h-index. JOSS is unique in that it only accepts very short (~ 1-2 pages) papers, with short summaries and links to the software repository. In that sense, the software papers are not the focus of the review. Instead, we ask reviewers to conduct a thorough review of the associated software (which must be open source), ensuring that it is well documented, straightforward to install and functions as expected. In this talk I will describe the origin of JOSS and the impact it has had on open source in research, and also touch upon issues such as sustainability and credit.
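As a side note on the h-index mentioned above: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch of the computation (an illustration of the metric, not anything from JOSS itself):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cited, start=1):
        if c >= rank:   # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

A citable software paper lets a repository's citations feed into exactly this kind of metric.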

Back

DSpace 7: A major leap forward for the leading institutional repository platform

Home

Speaker Bram Luyten
RoomAW1.126
TrackOpen Research Tools and Technologies
Time18:30 - 19:00
Event linkView original entry

The DSpace community is anticipating its largest release ever in 2020 with DSpace 7 ( https://wiki.duraspace.org/display/DSPACE/DSpace+Release+7.0+Status ). The platform is used in thousands of research institutions around the globe and powers systems including dspace.mit.edu, dash.harvard.edu and openknowledge.worldbank.org. If you download an academic paper through Google Scholar today, there is a good chance it is served to you by a DSpace institutional repository.



The talk aims to briefly introduce the scope and usage of the DSpace software. Attendees will learn how the governance of the DSpace community is structured, and what led to the decision to drop the two legacy UIs, JSPUI and XMLUI, in favour of an endeavour to introduce Angular as the new UI layer.



The most relevant piece of the presentation for the FOSDEM audience will be an outline of the tooling and best practices applied in the community, together with a pro-and-con evaluation.



We are very keen to learn from other participants in the audience what they would advise, on both a technical and organisational level, going forward.



Previous presentations on DSpace 7



Introducing DSpace 7
DSpace 7 Configurable Entities
The DSpace 7 Angular UI from a user perspective

Back

Welcome to game development devroom

Home

Speaker Julian Murgia
George Marques
RoomK.3.201
TrackGame Development
Time10:30 - 10:40
Event linkView original entry

Welcome to FOSDEM game development devroom! We'll present what this is all about and invite you to participate.

Back

Python for Godot

Home

Speaker Emmanuel Leblond
RoomK.3.201
TrackGame Development
Time10:45 - 11:10
Event linkView original entry

Godot is an incredible open source game engine. Among its key features, it comes packed with a scripting language called GDScript, loosely based on Python.
But could it be even better? Could we use the real Python to code our games in Godot?



And maybe even more importantly, is it really a good idea?

Godot is an advanced, feature-packed, multi-platform 2D and 3D open source game engine.
The project has joined the Software Freedom Conservancy, and its growing community gives it hope of becoming a real alternative to Unity and GameMaker.



This talk presents the Godot-Python project, which aims to bring Python as a fully integrated language into Godot.



We will have a look at Godot’s internal architecture, as it is itself a real interpreter with its own garbage collector, dynamic typing, introspection and even a built-in custom scripting language.
All of this has to work alongside our Python interpreter and communicate back and forth with it.



We will then dig into Godot-Python's design choices, both past and current; this project went through a lot of trial and error ;-)



Finally, we will discuss the pros and cons of using Python as a scripting language for Godot versus the traditional GDScript.



The audience should have some basic knowledge of C-level computing (static vs dynamic languages, compilation & linking).

Back

0 A.D.: Graphics Pipeline

Home

Speaker Vladislav Belov
RoomK.3.201
TrackGame Development
Time11:15 - 12:00
Event linkView original entry

A story about the graphics pipeline of 0 A.D. (an open-source game by Wildfire Games) and its issues.



Back

Game development with OpenXR

Home

Speaker Christoph Haag
RoomK.3.201
TrackGame Development
Time12:35 - 13:00
Event linkView original entry

Last year Khronos released OpenXR, an open API for using XR hardware. In this talk we will look at the practical side of creating VR applications and games with OpenXR.

Since the Oculus VR development kit started a resurgence of consumer VR, game development has largely been relegated to proprietary VR APIs and runtimes. Khronos reacted by creating an open API for using XR hardware and released OpenXR 1.0 in July 2019. Collabora implemented the OpenXR API in a runtime nicknamed Monado, built on open source VR hardware drivers. With these building blocks, VR applications can now use standardized APIs and run on a FOSS stack.



In this talk, Christoph will give an overview of the feature set of the OpenXR API and the practical side of creating VR applications and games with OpenXR. We will look at low-level code using the OpenXR API directly, as well as an OpenXR plugin for the Godot engine.

Back

Open lightning talks

Home

Speaker George Marques
RoomK.3.201
TrackGame Development
Time13:05 - 14:00
Event linkView original entry

This hour is dedicated to people who want to come up and shortly present their project, without having to schedule a full talk.

Bring your open source game-related project (be it an engine, game, demo, tool, or something else) and showcase it to our fellow gamedev friends. Talks should be at most five minutes, with no time allotted for questions. If you want to bring a laptop to show something, have it ready and plug it in/unplug it as fast as possible (this will be part of your five minutes). Please be understanding so everyone can have a chance. Contact the room managers on the event day if you want to present something.

Back

Java & Games

Home

Speaker Mahmoud Abdelghany
RoomK.3.201
TrackGame Development
Time14:05 - 15:00
Event linkView original entry

According to the interwebs, Java is one of the most popular programming languages in the multiverse.

And yet, when it comes to games, its popularity seems to dwindle.



Why though...

Is it really not suited for game development?

Is the language as bad as the critics claim??

Is GC...EVIL!?



I have this side project of porting Doom 3 from C/C++ to Java. And even though Doom 3 is a ~15-year-old game, it is still a massive AAA code base. So I believe my experiences there can adequately answer a lot of the questions that come to mind when you think of Java & Games.



During this talk, I will try to convince you that Java is a very viable game prototyping/development tool.

Either that, or I will add more fuel to the flame...

Back

Double your contributors using these 3 simple tricks!

Home

Speaker Eshed Shaham
RoomK.3.201
TrackGame Development
Time15:05 - 15:35
Event linkView original entry

For some arcane reason contributors spend their precious time on open source game projects. Why do they do this? And more importantly: What can you do to make them do it on your project?

FOSS game projects live on the motivation of their contributors. This motivation usually consists of two parts: the desire to contribute to FOSS game development in general and the desire to contribute to a specific project. While the former is very interesting to discuss, our ability to affect it is negligible. Thankfully, controlling the latter is definitely possible, and it is crucial to the survival of our projects.
In this talk we will analyse different characteristics of FOSS game projects with regard to contributor recruitment and retention. We will try to present practical steps to lower the risk of your current or future projects dying of inactivity and obscurity.

Back

Benefits of porting Godot Engine to Vulkan

Home

Speaker Juan Linietsky
RoomK.3.201
TrackGame Development
Time15:45 - 16:45
Event linkView original entry

Godot 4.0 is in the process of being ported from OpenGL to the Vulkan rendering API.
This new technology provides new challenges and benefits for improving quality and performance,
which will be explained and compared during this presentation.


Back

Blender projects for 2020

Home

Speaker Dalai Felinto
RoomK.3.201
TrackGame Development
Time16:55 - 17:15
Event linkView original entry

An in-depth look at the development process of virtual reality in Blender. This project started in 2014 and illustrates well the development channels, how Blender does onboarding, how the development team collaborates with other interested parties, and the role the community plays in these projects.

Back

Reloading Escoria

Home

Speaker Julian Murgia
RoomK.3.201
TrackGame Development
Time17:15 - 17:45
Event linkView original entry

Escoria is a Libre framework for the creation of point-and-click adventure games with the MIT-licensed Godot Engine. Since its release, Godot Engine has changed a lot at a fast pace, while the Escoria code was still based on old Godot 1.x features.



In this presentation, I'll present the current state of Escoria and discuss the process of rewriting it as a Godot Engine plugin. I'll cover its architecture and design, which allow adventure game creators to use Godot Engine as a full-featured editor for their adventure games.

Escoria is a Libre framework for the creation of point-and-click adventure games with the MIT-licensed Godot Engine. It was initially developed for the adventure game The Interactive Adventures of Dog Mendonça and Pizzaboy® and later streamlined for broader usage and open-sourced, as promised to the backers of the Dog Mendonça Kickstarter campaign. Since its release, Godot Engine has changed a lot at a fast pace, while the Escoria code was still based on old Godot 1.x features. It is more a collection of dedicated scripts than a proper Godot Engine layer designed for point-and-click adventure games. Evolving Escoria is quite a big task, so it might be better to start over with a new project architecture while keeping its most interesting features in the process.



In this presentation, I'll present the current state of Escoria and discuss the process of rewriting it as a Godot Engine plugin. I'll cover its architecture and design, which allow adventure game creators to use Godot Engine as a full-featured editor for their adventure games.

Back

Spring & Steam, an Odyssey

Home

Speaker Eshed Shaham
RoomK.3.201
TrackGame Development
Time17:50 - 18:30
Event linkView original entry

The Spring RTS Engine has been in active development since 2005. In the past few years, two of its games, Evolution RTS and Zero-K, have been released on Steam. The journey to these releases was long and difficult. Let's regale ourselves with tales of the adventures of the devs, learn many things that you shouldn't do, and also learn what to do in case you've already done what you shouldn't have.

Back

How lowRISC made its Ibex RISC-V CPU core faster

Home

Speaker Greg Chadwick
RoomK.3.401
TrackRISC-V
Time10:30 - 10:50
Event linkView original entry

Ibex implements RISC-V 32-bit I/E MC, M-Mode, U-Mode and PMP. It uses an in-order 2-stage pipeline and is best suited for area- and power-sensitive rather than high-performance applications. However, there is scope for meaningful performance gains without major impact on power or area. This talk describes work done at lowRISC to analyse and improve the performance of Ibex. The RTL of an Ibex system is simulated using Verilator to run CoreMark and Embench, and the traces are analysed to identify the major sources of stalls within them. This informs where improvements should be targeted. The open source implementation tools Yosys and OpenSTA are used to assess the potential timing and area impacts of these improvements. In this talk you’ll learn about the pipeline of Ibex, methods to analyse the performance of a CPU microarchitecture, and how to use Yosys and OpenSTA to analyse what limits clock frequency in a design.

Back

BlackParrot

Home

Speaker Dan Petrisko
RoomK.3.401
TrackRISC-V
Time10:50 - 11:10
Event linkView original entry

BlackParrot is a Linux-capable, cache-coherent RISC-V multicore, designed for efficiency and ease of use. In this talk, we will provide an architectural overview of BlackParrot, focusing on the design principles and development process as well as the software and hardware ecosystems surrounding the core. We will also discuss the project roadmap and our plans to engage the open-source community. Last, we will demonstrate a multithreaded RISC-V program running on top of Linux on a multicore BlackParrot FPGA implementation.

BlackParrot aims to be the default open-source, Linux-capable, cache-coherent RV64GC multicore used by the world. Although originally developed by the University of Washington and Boston University, BlackParrot strives to be community-driven and infrastructure-agnostic: a core which is Pareto-optimal in terms of power, performance, area and complexity. In order to ensure BlackParrot is easy to use, integrate, modify and, most importantly, trust, development is guided by three core principles: Be Tiny, Be Modular, and Be Friendly. Development efforts have prioritized ease of use and silicon validation as first-order design metrics, so that users can quickly get started and trust that their results will be representative of state-of-the-art ASIC designs. BlackParrot is ideal as the basis for a research platform, a lightweight accelerator host, or a standalone Linux core.

Back

The HammerBlade RISC-V Manycore

Home

Speaker Michael Taylor
Max H Ruttenberg
RoomK.3.401
TrackRISC-V
Time11:10 - 11:30
Event linkView original entry

HammerBlade is an open source RISC-V manycore that has been under development since 2015 and has already been silicon validated with a 511-core chip in 16nm TSMC. It features extensions to the RISC-V ISA that target GPU-competitive performance for parallel programs (i.e. GPGPU) including graphs and ML workloads. In this talk we will overview the components of the HW and software ecosystem in the latest version, and show you how to get up and running as an open source user or contributor in either SW or HW on Amazon F1 cloud FPGAs.

HammerBlade is an open source RISC-V manycore that has been under development since 2015 and has already been silicon-validated with a 511-core chip in 16nm TSMC. It features extensions to the RISC-V ISA that target GPU-competitive performance for parallel programs (i.e. GPGPU), including graph and ML workloads. In this talk we will overview the components of the HW and software ecosystem in the latest version, and show you how to get up and running as an open source user or contributor in either SW or HW on Amazon F1 cloud FPGAs. We will cover the HW architecture, the CUDA-like programming environment, the runtime software, our Amazon F1 cloud emulation and cosimulation environments, and our suite of performance analysis tools.

Back

Open ESP

Home

Speaker Luca Carloni
Davide Giri
RoomK.3.401
TrackRISC-V
Time11:30 - 11:50
Event linkView original entry

ESP is an open-source research platform for RISC-V systems-on-chip that integrate many hardware accelerators.

ESP provides a vertically integrated design flow from software development and hardware integration to full-system prototyping on FPGA. For application developers, ESP offers domain-specific automated solutions to synthesize new accelerators for their software and map it onto the heterogeneous SoC architecture. For hardware engineers, ESP offers automated solutions to integrate their accelerator designs into the complete SoC.

The participants in this FOSDEM20 event will learn how to use ESP from the viewpoints of both application developers and hardware engineers by following a series of short hands-on tutorials embedded in the lecture.
Conceived as a heterogeneous integration platform and tested through years of teaching at Columbia University, ESP is intrinsically suited to foster collaborative engineering of RISC-V based SoCs across the open-source community.




ESP is an open-source research platform to design and program heterogeneous systems-on-chip (SoCs). A heterogeneous SoC combines multiple general-purpose processor cores and many specialized hardware accelerators.






ESP provides automated solutions to (a) synthesize new accelerators, (b) integrate them with RISC-V processors and other third party accelerators into a complete SoC, (c) rapidly prototype the SoC on an FPGA board, and (d) run software applications that take advantage of these accelerators.
ESP contributes to the open-source movement by supporting the realization of more scalable architectures for SoCs that integrate more heterogeneous components, thanks to a more flexible design methodology that accommodates different specification languages and design flows.






With ESP's automation capabilities, it is easy to realize FPGA-based prototypes of complete SoCs. For example, an SoC may feature the Ariane RISC-V processor core booting Linux, a multi-plane network-on-chip supporting a partitioned memory hierarchy with multiple DRAM controllers, and tens of loosely-coupled accelerators that execute coarse-grained tasks exchanging large amounts of data with DRAM through direct-memory access (DMA). These accelerators can be third-party open-source hardware components that “speak” the AXI protocol (e.g. the NVIDIA NVDLA accelerator for deep learning) or new accelerators that can be synthesized with different design flows from specifications written in different languages, including: C with Xilinx Vivado HLS, SystemC with Cadence Stratus HLS, Keras TensorFlow and PyTorch with hls4ml, Chisel, SystemVerilog, Verilog, and VHDL.






Information technology has entered the age of heterogeneous computing. Across a variety of application domains, computing systems rely on highly heterogeneous architectures that combine multiple general-purpose processors with specialized hardware accelerators. The complexity of these systems, however, threatens to widen the gap between the capabilities provided by semiconductor technologies and the productivity of computer engineers. ESP tackles this challenge by raising the level of abstraction in the design process, simplifying the domain-specific programming of heterogeneous architectures, and leveraging the potential of the emerging open-source hardware movement.






Building on years of research on communication-based system-level design at Columbia University, ESP combines an architecture and a methodology. The flexible tile-based architecture simplifies the integration of heterogeneous components by balancing regularity and specialization. The companion methodology raises the level of abstraction to system-level design, thus promoting closer collaboration among software programmers and hardware engineers. Through the automatic generation of device drivers from pre-designed templates, ESP simplifies the invocation of accelerators from user-level applications executing on top of Linux. Through the automatic generation of a multi-plane network-on-chip from a parameterized model, the ESP architecture can scale to accommodate many processors, tens of accelerators, and a distributed memory hierarchy. A set of ESP Platform Services provides pre-validated solutions for accelerator configuration, memory management, sharing of system resources, and dynamic frequency scaling, among others.






To date, the majority of the open-source hardware (OSH) efforts related to RISC-V have focused on the development of processor cores that implement the RISC-V ISA and small-scale SoCs that connect these cores with tightly-coupled functional units and coprocessors, typically through bus-based interconnects. Meanwhile, there have been less efforts in developing solutions for large-scale SoCs that combine RISC-V cores with many loosely-coupled components, such as coarse-grain accelerators, interconnected with a network-on-chip (NoC). Compared to other RISC-V related projects, ESP is focused on scalability (with the NoC-based architecture), heterogeneity (with emphasis on loosely-coupled accelerators), and flexibility (with support of different design flows). Just like the ESP architecture simplifies the integration of heterogeneous components developed by different teams, the ESP methodology embraces the use of heterogeneous design flows for component development.






ESP has been developed by the System-Level Design (SLD) group in the Department of Computer Science at Columbia University during the past seven years. The SLD group has published over a dozen scientific papers in peer-reviewed conferences and journals to describe the most innovative aspects of ESP. ESP was released as an open-source project via GitHub in the summer of 2019.

Back

Building Loosely-coupled RISC-V Accelerators

Home

Speaker Schuyler Eldridge
RoomK.3.401
TrackRISC-V
Time11:50 - 12:10
Event linkView original entry

The burgeoning RISC-V hardware ecosystem includes a number of microprocessor implementations [1, 3] and SoC generation frameworks [1, 2, 7]. However, while accelerator “sockets” have been defined and used (e.g., Rocket Chip’s custom coprocessor/RoCC), accelerators require additional collateral to be generated, such as structured metadata descriptions, hardware wrappers, and device drivers. Requiring manual effort to generate this collateral proves both time-consuming and error-prone and is at odds with an agile approach to hardware design. However, the existence and use of hardware construction languages and hardware compilers provides a means to automate this process. Through the use of the Chisel hardware description language [4] and the FIRRTL hardware compiler [5], we provide a definition of an abstract accelerator template which users then implement to integrate an accelerator with the Embedded Scalable Platform (ESP) System-on-Chip platform [2, 8]. Through the use of this template, we are able to automatically generate the XML metadata necessary to integrate accelerators with the ESP platform, and work on generating all collateral is in progress. Our accelerator template is open source software provided under an Apache 2.0 license [6].



[1] CHIPS Alliance Rocket Chip. GitHub Repository. Online: https://github.com/chipsalliance/rocket-chip.



[2] Columbia University Embedded scalable platform. git repository. Online: https://github.com/sld-columbia/esp.



[3] ETH Zurich Ariane. GitHub Repository. Online: https://github.com/pulp-platform/ariane.



[4] Freechips Project Chisel3. GitHub Repository. Online: https://github.com/freechipsproject/chisel3.



[5] Freechips Project FIRRTL. GitHub Repository. Online: https://github.com/freechipsproject/firrtl.



[6] IBM ESP Chisel accelerators. GitHub Repository. Online: https://github.com/ibm/esp-chisel-accelerators.



[7] Princeton University OpenPiton. GitHub Repository. Online: https://github.com/PrincetonUniversity/openpiton.



[8] ESP: The open-source heterogeneous system-on-chip platform. Online: https://www.esp.cs.columbia.edu/.

Back

ERASER: Early-stage Reliability And Security Estimation for RISC-V

Home

Speaker Karthik Swaminathan
RoomK.3.401
TrackRISC-V
Time12:10 - 12:30
Event linkView original entry

RISC-V processors have gained acceptance across a wide range of computing domains, from IoT to embedded/mobile-class and even server-class processing systems. In systems ranging from connected cars and autonomous vehicles to those on board satellites and spacecraft, these processors are targeted to function in safety-critical settings, where Reliability, Availability and Serviceability (RAS) considerations are of paramount importance. Along with potential system vulnerabilities caused primarily by random errors, these processors may also be sensitive to targeted errors, possibly from malicious entities, which raises serious concerns regarding the security and safety of the processing system. Consequently, such systems necessitate the incorporation of RAS considerations right from an early stage of processor design.



While the hardware and software ecosystem around RISC-V has been steadily maturing, there have been limited developments in early-stage reliability-aware design and verification. The Early-stage Reliability And Security Estimation for RISC-V (ERASER) tool attempts to address this shortcoming. It is an open source framework aimed at providing directions for incorporating reliability and security features at an early, pre-silicon stage of design. These directions include what kind of protection to apply and which components within the processor to apply it to. The proposed infrastructure comprises an open source toolchain for early-stage modeling of latch vulnerability in a RISC-V core (SERMiner [1]), a tool for automated generation of stressmarks that maximize the likelihood of a transient-failure-induced error (Microprobe (RISC-V) [2]), and verification by means of statistical and/or targeted fault injection (Chiffre [3]). While the infrastructure targets any core that uses the RISC-V ISA, the repository provides an end-to-end flow for the Rocket core [4].



ERASER thus evaluates “RAS-readiness”: the effectiveness of protection techniques in a processor design such that processor vulnerability, in terms of Failures-In-Time (FIT) rate, is minimized for a specified power/performance overhead. FIT rate is defined as the number of failures in one billion hours of operation and is a standard vulnerability metric used in industry.
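As a quick worked example of the FIT arithmetic (the per-component rates below are invented): FIT rates of independent components add up, and a FIT rate converts directly into a mean time between failures.

```python
def fit_to_mtbf_hours(fit):
    # FIT = failures per 10^9 device-hours, so MTBF (hours) = 10^9 / FIT.
    return 1e9 / fit

def system_fit(component_fits):
    # FIT rates of independent components simply add.
    return sum(component_fits)

total = system_fit([50, 120, 30])        # hypothetical per-component FITs
print(total, fit_to_mtbf_hours(total))   # 200 FIT -> 5,000,000 hours MTBF
```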



ERASER is an open source tool available for download at https://github.com/IBM/eraser. The tool currently supports analysis of all latches in the design across a single Rocket core and the generation of stressmarks that can be used to evaluate the vulnerability of these latches. In addition to radiation-induced soft errors, we plan to extend ERASER to also model errors due to voltage noise, thermal and aging-induced failures, both in memory and logic, and generate representative stressmarks.



ERASER is an initial effort to devise a comprehensive methodology for RAS analysis, particularly for open-source hardware, with the hope that it spurs further research and development into reliability-aware design both in industry and academia.



References:




  1. K. Swaminathan, R. Bertran, H. Jacobson, P. Kudva, P. Bose, ‘Generation of Stressmarks for Early-stage Soft-error Modeling’, International Conference on Dependable Systems and Networks (DSN) 2019


  2. S. Eldridge, R. Bertran, A. Buyuktosunoglu, P. Bose, ‘MicroProbe: An Open Source Microbenchmark Generator, Ported to the RISC-V ISA’, 7th RISC-V Workshop, 2017


  3. S. Eldridge, A. Buyuktosunoglu and P. Bose, ‘Chiffre: A Configurable Hardware Fault Injection Framework for RISC-V Systems’, 2nd Workshop on Computer Architecture Research with RISC-V (CARRV), 2018


  4. Krste Asanović, Rimas Avižienis, Jonathan Bachrach, Scott Beamer, David Biancolin, Christopher Celio, Henry Cook, Palmer Dabbelt, John Hauser, Adam Izraelevitz, Sagar Karandikar, Benjamin Keller, Donggyu Kim, John Koenig, Yunsup Lee, Eric Love, Martin Maas, Albert Magyar, Howard Mao, Miquel Moreto, Albert Ou, David Patterson, Brian Richards, Colin Schmidt, Stephen Twigg, Huy Vo, and Andrew Waterman, The Rocket Chip Generator, Technical Report UCB/EECS-2016-17, EECS Department, University of California, Berkeley, April 2016



The attached figure shows a representative flow for the RAS estimation methodology. An initial characterization of all instructions in the RISC-V ISA is carried out via RTL simulation using an existing core model (e.g. the Rocket core). The simulation is configured to generate VCD (Value Change Dump) files for every single-instruction testcase. The SERMiner tool parses these VCD files to determine latch activities across the core, aggregated at a macro (or RTL module) level. Based on these per-instruction latch activities, SERMiner outputs an instruction sequence, which forms the basis of the SER stressmark to be generated by Microprobe (RISC-V).

Microprobe (RISC-V) is a microbenchmark generation tool capable of generating microbenchmarks geared towards specific architecture- and microarchitecture-level characterization. One of its key applications is the generation of stressmarks, or viruses, that target various worst-case corners of processor operation. These stressmarks may be targeted at maximizing power, voltage noise, temperature, or, as in the case of this tool, soft-error vulnerability. The generated stressmark is then used to produce a list of latches that show a high residency and hence a high SER vulnerability.

These latches are the focus of fault-injection-based validation experiments using the Chiffre tool. Chiffre provides a framework for automatically instrumenting a hardware design with run-time configurable fault injectors. The vulnerable latches obtained from running the generated stressmarks through the Rocket core model, and then through SERMiner, are earmarked for targeted fault injection experiments using Chiffre. The objective of these experiments is to further prune the list of vulnerable latches by eliminating those that are derated, that is, those that do not affect the overall output even when a fault is injected into them.
Focusing any and all protection strategies on this final list of latches would maximize RAS coverage across the entire core.
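The first step of the flow, extracting per-signal activity from VCD dumps, can be illustrated with a toy parser (a sketch for a simplified dump format; SERMiner's actual implementation is far more involved):

```python
from collections import Counter

def count_toggles(vcd_lines):
    """Count value changes per signal in a (simplified) VCD dump.
    Handles scalar value changes like '1!' and ignores everything else."""
    names, toggles = {}, Counter()
    for line in vcd_lines:
        line = line.strip()
        if line.startswith("$var"):
            # e.g. "$var wire 1 ! latch_q $end" -> id '!', name 'latch_q'
            parts = line.split()
            names[parts[3]] = parts[4]
        elif line and line[0] in "01xz" and len(line) >= 2:
            toggles[names.get(line[1:], line[1:])] += 1
    return toggles

dump = [
    "$var wire 1 ! latch_q $end",
    "$enddefinitions $end",
    "#0", "0!", "#5", "1!", "#10", "0!",
]
print(count_toggles(dump))
```

Aggregating such per-signal counts per RTL module gives the macro-level activity profile the abstract describes.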



Ongoing and future work:



ERASER currently supports analysis of all latches in the design across a single Rocket core, and the generated stressmarks can be used to evaluate the vulnerability of these latches. Most on-chip memory structures, such as register files and caches, are equipped with parity/ECC protection and as such are protected against most radiation-induced soft errors. However, they are still vulnerable to supply-voltage noise, thermal and aging-induced failures, and other hard or permanent errors. We plan to extend ERASER to model such errors, both in memory and logic, and to generate stressmarks representative of worst-case thermal emergencies and voltage noise, in addition to soft errors.

Back

RISC-V Software and Firmware Development in the Cloud Using OpenPiton+Ariane on Amazon F1

Home

Speaker David Wentzlaff
Jonathan Balkind
Room K.3.401
Track RISC-V
Time 12:30 - 12:50
Event link: View original entry

RISC-V application, OS, and firmware development has been slowed by the lack of "real hardware" available for developers to work with. With the rise of FPGAs in the cloud and the recent release of the OpenPiton+Ariane manycore platform on Amazon's F1 cloud FPGA platform, we propose using 1-12 core OpenPiton+Ariane processors emulated on F1 to develop RISC-V software and firmware. In this talk, we will give an accelerated tutorial on how to get started with OpenPiton+Ariane, the spec-compliant RISC-V platform it offers, and how the firmware and OS can be modified and run on top. We will show a number of applications built and running for our present Debian distribution and the software development environment that this offers. We will then highlight how hardware and software can be co-designed on OpenPiton+Ariane with the ability to recompile the hardware underlying the cloud FPGA image and deploy it for use by others. This platform is serving as a basis for software and hardware development for the DECADES project, a project investigating heterogeneous manycore and hardware accelerator based designs with support for orchestrated data movement.




http://openpiton.org
https://openpiton-blog.princeton.edu/2019/10/bringing-openpiton-to-amazon-ec2-f1-fpgas/



OpenPiton+Ariane contributors include:
Jonathan Balkind, Grigory Chirkov, Yaosheng Fu, Adi Fuchs, Fei Gao, Alexey Lavrov, Ang Li, Xiaohua Liang, Katie Lim, Matthew Matl, Michael McKeown, Tri Nguyen, Samuel Payne, Michael Schaffner, Mohammad Shahrad, Jinzheng Tu, Florian Zaruba, Yanqi Zhou, Georgios Tziantzioulis, Luca Benini, David Wentzlaff



DECADES is a large collaboration from three academic groups: Margaret Martonosi (PI Princeton), David Wentzlaff (PI Princeton), Luca Carloni (PI Columbia) with students/researchers: Jonathan Balkind, Ting-Jung Chang, Fei Gao, Davide Giri, Paul Jackson, Paolo Mantovani, Luwa Matthews, Aninda Manocha, Tyler Sorensen, Jinzheng Tu, Esin Türeci, Georgios Tziantzioulis, and Marcelo Orenes Vera. In addition to the submission author, portions of the talk may be offered by others in the collaboration.

Back

Cacheable Overlay Manager RISC-V

Home

Speaker Ofer Shinaar
Room K.3.401
Track RISC-V
Time 12:50 - 13:10
Event link: View original entry

We would like to present an overlay technique for RISC-V, developed by Western Digital (WD) and open sourced.
This firmware feature acts as a software “paging” manager. It is threaded together with the real-time code and integrated into the toolchain.
Cacheable Overlay Manager RISC-V (ComRV) is a technique that fits memory-limited embedded devices (such as IoT devices) and does not need any hardware support.
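The idea of a software "paging" manager for code can be sketched in a few lines (purely illustrative Python; ComRV itself is RISC-V firmware and its loading and eviction policies differ): overlays are loaded on demand into a bounded set of resident slots, evicting the least recently used when full.

```python
from collections import OrderedDict

class OverlayManager:
    """Toy software 'paging' of code overlays: keep at most `slots`
    overlays resident, loading on demand and evicting LRU."""

    def __init__(self, slots, load_fn):
        self.slots, self.load_fn = slots, load_fn
        self.resident = OrderedDict()  # overlay id -> loaded code

    def call(self, overlay_id, *args):
        if overlay_id not in self.resident:
            if len(self.resident) >= self.slots:
                self.resident.popitem(last=False)  # evict least recently used
            self.resident[overlay_id] = self.load_fn(overlay_id)
        self.resident.move_to_end(overlay_id)      # mark as recently used
        return self.resident[overlay_id](*args)

# Hypothetical "overlay storage": functions loaded on first call.
storage = {"sqr": lambda x: x * x, "inc": lambda x: x + 1}
mgr = OverlayManager(slots=1, load_fn=storage.__getitem__)
print(mgr.call("sqr", 4), mgr.call("inc", 4))  # -> 16 5
```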

Back

RISC-V Boot flow: What's next ?

Home

Speaker Atish Patra
Room K.3.401
Track RISC-V
Time 13:10 - 13:30
Event link: View original entry

The RISC-V boot flow has come a long way in recent times by leveraging various open-source bootloader/firmware projects. This has also helped achieve a well-supported, standard boot flow for RISC-V. As a result, developers can use the same bootloaders to boot Linux on RISC-V as they do on other architectures. Currently, U-Boot is used as the last-stage bootloader and OpenSBI as the machine-mode runtime service provider, but there is more work to be done. A few such future work items include U-Boot SPL support, UEFI boot in RISC-V Linux, and boot protocol improvements. This talk will focus on some of these ongoing efforts, which are necessary to declare that RISC-V is truly ready for world domination.

Back

Oreboot

Home

Speaker Ryan O'Leary
Room K.3.401
Track RISC-V
Time 13:30 - 13:50
Event link: View original entry

Oreboot = Coreboot - C. Oreboot is a fully open-source power-on-reset and romstage firmware written in Rust. Oreboot can boot a HiFive RISC-V processor to Linux with a Go user-mode.



Oreboot rethinks the firmware driver models. Each driver is distilled to four basic functions: init, pread, pwrite and shutdown. This interface allows us to make convenient higher-level drivers such as a "union driver" which duplicates a single write operation to multiple drivers. This makes consoles which have multiple underlying UART drivers elegant.
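The union-driver idea can be sketched in Python (illustrative only; Oreboot's real drivers are Rust implementing this four-function interface):

```python
class MemDriver:
    """Minimal driver exposing the four-function interface described above."""
    def __init__(self):
        self.buf = b""
    def init(self): self.buf = b""
    def pwrite(self, data, offset=0): self.buf += data
    def pread(self, size, offset=0): return self.buf[offset:offset + size]
    def shutdown(self): self.buf = b""

class UnionDriver:
    """Duplicates every operation across several underlying drivers,
    e.g. a console backed by multiple UARTs."""
    def __init__(self, drivers): self.drivers = drivers
    def init(self):
        for d in self.drivers: d.init()
    def pwrite(self, data, offset=0):
        for d in self.drivers: d.pwrite(data, offset)
    def shutdown(self):
        for d in self.drivers: d.shutdown()

a, b = MemDriver(), MemDriver()
console = UnionDriver([a, b])
console.init()
console.pwrite(b"boot: ok\n")     # one write lands in both "UARTs"
print(a.pread(9), b.pread(9))
```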



By using the Rust programming language, Oreboot has a leg up in terms of security and reliability compared to contemporary firmware written in C or assembly. Rust's borrow checker ensures pointers are not used after being freed and proves at compile time that coroutines are thread-safe.



In this talk, we will also present a short overview of the basics of Rust, how our driver model incorporates coroutines and the bootflow of Oreboot.

Back

RISC-V Hypervisors

Home

Speaker Anup Patel
Room K.3.401
Track RISC-V
Time 13:50 - 14:10
Event link: View original entry

The RISC-V H-extension (aka hypervisor extension) is suitable for both Type-1 and Type-2 hypervisors. We have ported two hypervisors to RISC-V: Xvisor (Type-1) and KVM (Type-2). We will show the current state and future work for both hypervisors.

Back

Port luajit to RISC-V

Home

Speaker Anton Kuzmin
Room K.3.401
Track RISC-V
Time 14:10 - 14:30
Event link: View original entry

There is a need for lightweight tools for experiments with RISC-V custom extensions. Adding support for custom instructions in binutils/gcc/llvm is out of reach for many hardware architects. LuaJIT includes a small and powerful assembler, dynasm, accessible from within the Lua interpreter. Currently dynasm supports the following 32- and 64-bit instruction sets: x86, x64, ARM, PowerPC, and MIPS, and it is a natural next step to extend this support to RISC-V.
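To give a flavour of what a RISC-V dynasm backend must ultimately emit, here is a small Python sketch of the base-ISA I-type instruction encoding (the helper function is ours for illustration; dynasm's real interface differs):

```python
def addi(rd, rs1, imm):
    """Encode RISC-V ADDI (I-type): imm[11:0] | rs1 | funct3=000 | rd | opcode=0x13."""
    assert -2048 <= imm < 2048 and 0 <= rd < 32 and 0 <= rs1 < 32
    return ((imm & 0xFFF) << 20) | (rs1 << 15) | (0b000 << 12) | (rd << 7) | 0x13

print(hex(addi(1, 0, 5)))   # addi x1, x0, 5  -> 0x500093
print(hex(addi(0, 0, 0)))   # addi x0, x0, 0 is the canonical NOP -> 0x13
```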



Lua itself is a very compact and simple yet powerful dynamic language. Its JIT compiler (LuaJIT) makes it one of the fastest, if not the fastest, interpreted languages, and it is used in many projects, so having it run on RISC-V would be useful beyond the mere internal need for an experimental platform.

Outline

* Project scope
* Development platforms
* Benchmarks and baseline
* Deviation (side project)
* Yet another Forth and yet another assembler

Back

Welcome to the Retrocomputing DevRoom 2020

Home

Speaker Pau Garcia Quiles (pgquiles)
François Revol (mmu_man)
Room K.3.401
Track Retrocomputing
Time 14:55 - 15:00
Event link: View original entry

A quick introduction to the 3rd edition of the retro-computing devroom.

Back

Alpha Waves, the first 3D platformer ever

Home

Speaker Christophe de Dinechin
Room K.3.401
Track Retrocomputing
Time 15:00 - 15:30
Event link: View original entry

Alpha Waves is the first 3D platform game ever, according to the Guinness Book of Records.



Initially developed on the Atari ST, it was then ported to the Amiga and the IBM PC.
The technology later gave rise to Alone in the Dark, a major game that launched Infogrames into the big league.
This is the history of that game.

The game was initially developed on the Atari ST, comprising 17,000 lines of 68K assembly code.
It was later ported to the Amiga and the IBM PC, and was the first and only assembly-language game that Infogrames ever ported to another CPU.
The technology developed for that game later inspired Frederick Raynal to develop Alone in the Dark, a major game that launched Infogrames into the big league.



This talk is the history of that game by its developer. It will cover:




Back

BASICODE: the 8-bit programming API that crossed the Berlin Wall

Home

Speaker Rob Hagemans
Room K.3.401
Track Retrocomputing
Time 15:35 - 16:05
Event link: View original entry

At the height of the Cold War, BASIC programs exchanged by radio and cassette tape gave young people in socialist Eastern and capitalist Western Europe a rare insight into each other's worlds. BASICODE was a transmission format and an API developed by the Dutch public broadcasting service to overcome the challenge of exchanging open source hobby programs in the highly fragmented 8-bit computing landscape of the early 1980s, which was dominated by mutually incompatible versions of BASIC. Somewhat improbably, the format was picked up across the Iron Curtain in the German Democratic Republic, where it experienced its age of greatest popularity. The need for programs to work on platforms with widely different capabilities and incompatible syntaxes forced BASICODE to be simply structured, highly regulated and relatively well documented. This makes it ideally suited for implementation in a web browser.
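The API idea (standardized subroutine entry points backed by platform-native implementations) can be sketched as a dispatch table; note that the line numbers and routines below are placeholders for illustration, not the actual BASICODE assignments.

```python
class Platform:
    """One platform's native implementation behind the standard API."""
    def __init__(self):
        self.screen = []
    def clear_screen(self):
        self.screen = []
    def print_text(self, s):
        self.screen.append(s)

def gosub(platform, number, arg=None):
    # A BASICODE-style program only ever calls fixed subroutine numbers;
    # each platform maps those numbers to its own routines.
    table = {100: platform.clear_screen,   # illustrative numbering only
             110: platform.print_text}
    return table[number](arg) if arg is not None else table[number]()

p = Platform()
gosub(p, 100)            # "clear screen" via the standard entry point
gosub(p, 110, "HELLO")   # "print" via the standard entry point
print(p.screen)
```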


Back

Retro music - Open Cubic Player

Home

Speaker Stian Sebastian Skjelstad
Room K.3.401
Track Retrocomputing
Time 16:10 - 16:40
Event link: View original entry

This presentation mixes together multiple topics:
* Porting a DOS multimedia program to Linux/BSD
* How music was composed and stored in old games, from the 8-bit era up to the Amiga (and partly the DOS world)
* How audio/music hardware works: C64 SID, ZX Spectrum and the like, Amiga, AdLib FM synth OPL2/3, and General MIDI

As a child, I was exposed to Cubic Player, a text-based music player. All the colorful text mesmerized me, and it gave direct visual feedback on how the music was generated.



During my teenage years I learned programming and was introduced to Linux. The source code for everything was available: the kernel, multimedia libraries, tools, all kinds of APIs. If you wonder how anything works, you can inspect it. If you are unable to resolve a specific detail in your own project, try to find another project that has done something similar! But for playback of these Amiga-style modules, there were no programs with the same charm as Cubic Player. Programs like MikMod, XMMS and Audacious only displayed a playlist, maybe instrument names, and maybe a spectrum analyzer.



When I discovered that Cubic Player had been released as Open Cubic Player, I grabbed the source code and started to study it. All of it was based on direct hardware access, as it was written for DOS: direct access to video memory/hardware, raw keyboard scan codes, hardware interrupts for timers allowing background rendering of audio when needed, etc. A natural candidate for a novice programmer to port?



Slowly I went through the original source code, one file at a time.
* Lots of logic could be kept
* Some could be thrown away, like the direct audio hardware drivers
* Some needed heavy rewrites, like the video output
* Assembly needed to be rewritten as inline C
* The timer interrupt porting caused a zero-day kernel bug to be discovered https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2004-0554



When text appeared on the screen and audio (highly distorted in the beginning) came out, it was a magical feeling. Progress was faster once the goal felt well within reach.



When you make your code public, people start to use it... and you start to get feedback like:
* The program fails to compile on other platforms
* The program crashes on other platforms
* Specific crashes that only happen for a few people - compiler bugs (these were more common back in the day)






How did old systems play back music? Using specific hardware circuits.
How is music composed, stored and played back?



We will take a deep dive into these two questions on these platforms:



* ZX-Spectrum 128 .AY files - using code chopped out from "aylet"
* Atari ST .YM files - using the ST Sound Library
* C64 .SID files - libsidplay
* OPL2/OPL3 style - libadplug
* Amiga-style modules
* General MIDI - TiMidity++



Open Source makes it possible to combine the work and effort of multiple volunteers. Not everything is always an easy walk in the park, but it sure makes things easier than writing everything from scratch when there is known working code that can be used as a template or wrapped around.

Back

Reviving Minitel

Home

Speaker Frédéric Bisson
Room K.3.401
Track Retrocomputing
Time 16:45 - 17:15
Event link: View original entry

Using web technologies, one can easily emulate the Minitel ecosystem: not only the iconic French terminal but also its servers. This has been made possible by ES6 and WebSockets.


Back

Reverse engineering a VIC-20 expansion cartridge

Home

Speaker Ben Dooks
Room K.3.401
Track Retrocomputing
Time 17:20 - 17:50
Event link: View original entry

Going from seeing an image of a cartridge that I would have loved to have had for my VIC-20 to working out how it was built and then making my own.

Starting with a brief overview of the VIC-20 and its capabilities, the talk then looks at the "vixen" 16 KiB RAM expansion, from the initial view of the internals made available on the internet through to a full implementation.



The initial overview will go through the identification of the components, the technologies involved (such as SRAM and DRAM) and the VIC-20 expansion bus. It will show what sorts of technologies could go into implementing this kind of expansion, and how these can be discounted either by the age of the technology or by component complexity.



Techniques for further reversing a circuit from the physical device, how the actual device circuit was then discovered, and the subsequent implementation and testing will be shown.

Back

Running a mainframe on your laptop (for fun and profit)

Home

Speaker Jeroen Baten
Room K.3.401
Track Retrocomputing
Time 17:55 - 18:25
Event link: View original entry

Yes, this talk is about running your own mainframe on your own hardware. Mainframes are old, yes, but they are still very much alive. New hardware is still being developed, and there are a lot of fresh jobs in this area too. A lot of mainframes run COBOL workloads. COBOL is far from a dead language: it processes an estimated 85% of all business transactions, and 5 billion lines of new COBOL code are written every year. In this session the speaker will help you take your first steps towards running your own mainframe. If you like, you can then continue to build your knowledge of mainframe systems using the links provided during the talk. Come on in and learn the basics of a completely different computer system! And it will take you less than an hour to do that!

Back

Arcade game port to ZX Spectrum

Home

Speaker Rui Martins
Room K.3.401
Track Retrocomputing
Time 18:30 - 19:00
Event link: View original entry

Reverse engineering an arcade game and re-implementing it on an 8-bit system is an engineering exercise where compromises have to be made in order to accomplish the goal, since the target machine is severely underpowered compared with the source machine.



The process of accomplishing this, and all it involves, will be presented.

Remaking an arcade game (via reverse engineering) on an 8-bit system is an engineering exercise where compromises have to be made in order to accomplish the goal, since the target machine is severely underpowered compared with the source machine.



Starting with graphics capabilities, passing through CPU limitations (clock speed and architecture) and ending with multimedia capabilities, every single one needs to be addressed with a suitable compromise.
Some "hacks" and optimizations need to be applied to use the 8-bit hardware capabilities in a convenient way and overcome the huge handicap between architectures.



For practical and example purposes, the reverse engineering of the arcade game "Magical Drop II" will be presented, along with how it became the ZX Spectrum game "Extruder".



The software was developed in Zilog Z80 assembly, and several tips and tricks that facilitate and help the conversion process will be shown.

Back

Debugging with LLVM

Home

Speaker Andrzej Warzynski
Graham Hunter
Room K.4.201
Track LLVM
Time 10:30 - 11:10
Event link: View original entry

The aim of this presentation is to showcase the technologies available in LLVM that aid debugging. We will focus on LLDB, the debugger, and sanitisers (e.g. AddressSanitizer and LeakSanitizer). No prior knowledge is required, but if you're familiar with GDB or Valgrind then this talk will introduce you to alternatives available within LLVM.



LLDB is a very powerful and extensible command-line debugger available on Linux, macOS, FreeBSD, Windows and Android. It is used internally by Xcode and Android Studio and is available on various hardware platforms (e.g. x86, Arm, AArch64, PowerPC, MIPS). LLDB is built as a set of reusable components which heavily leverage existing libraries in LLVM. It has a very powerful expression-evaluation engine, an intuitive command-line interface (with tab completion), easy-to-navigate help pages and a "graphical" user interface. In this presentation we will explore basic usage as well as some lesser-known features. LLDB has come a long way, and we want to present how intuitive, helpful and powerful it can be when used pragmatically.



While LLDB will let you easily examine and debug a program at the point of failure, it can be harder to diagnose the underlying problem if it occurred before the program crashed or printed an incorrect result. LLVM provides extra features in the form of 'sanitizers' to help find the root cause of such problems, like accessing a wrong-but-still-valid memory address or unintentionally wrapping a signed integer value. This presentation will explore how to use the sanitizers to debug programs, with examples of the bugs they can catch.

Back

Benchmarking LLVM using Embench

Home

Speaker Jeremy Bennett
Room K.4.201
Track LLVM
Time 11:15 - 11:55
Event link: View original entry

Dhrystone and CoreMark have been the de facto standard microcontroller benchmark suites for the last thirty years, but these benchmarks no longer reflect the needs of modern embedded systems. Embench™ was explicitly designed to meet the requirements of modern connected embedded systems. The benchmarks are free, relevant, portable, and well implemented.



In this talk we will present the results of benchmarking Clang/LLVM for various IoT-class architectures using Embench. We shall look at:
- how code size and speed vary across architectures when compiling with Clang/LLVM
- how Clang/LLVM performance has evolved over time
- how Clang/LLVM compares against other compilers, notably GCC
- the effectiveness of various compilation techniques (LTO, Combined Elimination, Profile-Guided Optimization)



The aim is not to show which architecture or compiler is best, but to gain insight into the detail of the compilation process, so that all compilers and architectures can learn from each other.
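Cross-architecture comparisons of this kind are typically reported as a geometric mean of per-benchmark ratios against a reference, so that no single benchmark dominates. A sketch (the benchmark names are real Embench benchmarks, but the sizes are invented for illustration):

```python
import math

def geomean(xs):
    # Geometric mean: the right average for ratios.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical code sizes (bytes) for a reference and a test compiler.
ref  = {"aha-mont64": 1200, "crc32": 800, "nettle-aes": 2600}
test = {"aha-mont64": 1080, "crc32": 840, "nettle-aes": 2340}

ratios = [test[b] / ref[b] for b in ref]   # < 1.0 means smaller code
print(round(geomean(ratios), 3))
```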

Back

Confronting Clang and Fedora

Home

Speaker Serge Guelton (serge-sans-paille)
Room K.4.201
Track LLVM
Time 12:10 - 12:50
Event link: View original entry

GCC is the default toolchain for building C/C++ packages in Fedora. Meanwhile, OpenMandriva already builds most of its packages with the LLVM toolchain, and the Debian archive is regularly rebuilt with a recent version of Clang... So could we try that for Fedora?



This talk describes an on-going effort to achieve that goal while keeping the same compiler feature set as GCC.

Subtopics of the talk include:




Back

LLVM and GCC

Home

Speaker Jeremy Bennett
RoomK.4.201
TrackLLVM
Time12:55 - 13:35
Event linkView original entry

At the GNU Tools Cauldron we held a panel discussion on how GCC and LLVM can work together. The video of that discussion can be seen at https://www.youtube.com/watch?v=PnbJOSZXynA. We proposed a similar discussion to be held at the LLVM Developers Meeting, but the reviewers suggested that such a discussion would be better held as part of the FOSDEM LLVM Devroom, since that was more likely to attract GNU developers as well.



We wish to explore how Clang/LLVM and GCC can work together effectively.



The participants will explore opportunities for co-operation between the projects. Areas to be covered include:






The compilers are part of wider projects providing all the components of the tool chain, and we anticipate the discussion will roam to low level utilities, source code debuggers and libraries as well. We hope the output of the discussion will inform future work between the two communities.



The panelists are




Back

LLVM meets Code Property Graphs

Home

Speaker Alex Denisov
Fabian Yamaguchi
Room K.4.201
Track LLVM
Time 13:50 - 14:30
Event link: View original entry

The security of computer systems fundamentally depends on the quality of the underlying software. Despite a long line of research in academia and industry, security vulnerabilities regularly manifest in program code. Consequently, they remain one of the primary causes of security breaches today.
The discovery of software vulnerabilities is a classic yet challenging problem of the security domain. In the last decade, several production-grade solutions with favorable outcomes have appeared.



The Code Property Graph[1] (or CPG) is one such solution. A CPG is a representation of a program that combines properties of abstract syntax trees, control-flow graphs, and program dependence graphs in a joint data structure.
There exist two companion tools[2][3] that allow traversals over code property graphs in order to find vulnerabilities and to extract other interesting properties.
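The idea of querying a joint graph can be sketched with a toy example (this is not the actual CPG schema or query API): tag each edge with the subgraph it belongs to, then express a vulnerability query as a traversal restricted to certain edge kinds.

```python
# Toy "code property graph": nodes are program points, edges are tagged
# with the subgraph they belong to (AST / CFG / DDG). Names are invented.
edges = [
    ("read_input", "buf", "DDG"),     # data flows from the source into buf
    ("buf", "strcpy_call", "DDG"),    # buf reaches the dangerous sink
    ("main", "read_input", "CFG"),
    ("main", "strcpy_call", "CFG"),
]

def reachable(src, dst, kind):
    """Depth-first search restricted to one edge kind."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack += [b for a, b, k in edges if a == n and k == kind]
    return False

# "Does attacker-controlled input reach strcpy?" as a data-flow traversal.
print(reachable("read_input", "strcpy_call", "DDG"))
```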



In this talk, we want to cover the following topics:






[1] https://ieeexplore.ieee.org/document/6956589
[2] https://github.com/ShiftLeftSecurity/codepropertygraph
[3] https://ocular.shiftleft.io

Back

LLVM and Python

Home

Speaker Batuhan Taşkaya
Room K.4.201
Track LLVM
Time 14:35 - 15:15
Event link: View original entry

Python's history with LLVM spans at least a decade. This session will cover how Python implementations have tried to use LLVM, such as CPython's Unladen Swallow branch (PEP 3146) and attempts from PyPy, and why they failed. It will then show which current Python projects use LLVM for speed, such as Numba, as well as Python libraries for working with LLVM IR. In the end, it will mention new ideas that could unite the powers of both LLVM and Python.

This talk is about LLVM's influence on Python's ecosystem. It is targeted at language developers who want to integrate LLVM, and at developers who are curious about why dynamic languages cannot simply harness LLVM for speed-ups. It will start with Python's implementations and the approaches they take. The Unladen Swallow branch, which tried to integrate LLVM into CPython (sponsored by Google), is a good example of these approaches. There were also attempts on the PyPy side, but they failed because of the dynamic nature of Python. After these failures, we will come back to the present and show projects that benefit from LLVM to speed up Python, especially on the scientific side, such as Numba (which offers JITting via LLVM). Besides these projects, there are also a few projects that offer an interface to LLVM, such as llvmpy and llvmlite. I have been using llvmlite for about a year in my side projects and toy languages, and these projects have the potential to inspire developers to work with LLVM and build languages on top of it. In the end, the talk will show what the future of these two big projects (LLVM and Python) looks like and how we can participate.

Back

Flang : The Fortran frontend of LLVM

Home

Speaker Kiran Chandramohan
Room K.4.201
Track LLVM
Time 15:20 - 16:00
Event link: View original entry

This talk introduces Flang (F18), the new Fortran frontend of LLVM being written in modern C++. The talk will provide a brief introduction to Flang, motivation for writing this new compiler, design principles, architecture, status, and an invitation to contribute.

The F18 project started at PGI/NVIDIA as a new Fortran frontend designed to work with LLVM. The aim of the project is to create a modern Fortran frontend (targeting the Fortran 2018 standard) in modern C++. In April 2019, it was accepted as an LLVM project (https://lists.llvm.org/pipermail/llvm-dev/2019-April/131703.html).



The parser and semantic analysis are implemented in a way that provides a strong correspondence to the standards document. It is hoped that this correspondence will help in the development of new features and will become the testbed for deciding future Fortran standard features. The frontend also embraces the newly open-sourced MLIR framework for language-specific optimisations. This will be done through a new dialect called FIR (https://www.youtube.com/watch?v=ff3ngdvUang). MLIR will also be used for creating an OpenMP dialect. The project also hopes to share code with the Clang frontend. While the parser/AST will not be shared, code will be shared in the driver, OpenMP codegen, etc.



In this presentation, we hope to cover the technical details mentioned in the paragraph above, the status of implementation and also give an invitation to contribute.

Back

Ask LLVM developers Anything Panel

Home

Speaker Kristof Beyls
Peter Smith
Nick Desaulniers
Jeroen Dobbelaere
RoomK.4.201
TrackLLVM
Time16:05 - 16:45
Event linkView original entry

Ever wondered how the LLVM project and community works?
Want to get some advice on how to most effectively contribute?
Now is your chance at FOSDEM to ask experienced developers directly. This
panel will host a number of experienced LLVM developers, answering any
relevant questions from the audience.

In case questions from the audience do not fill the entire slot, a set of
prepared questions focussed on how to get started working with and contributing
to LLVM will be raised for the panelists to answer.



The panel will consist of the following experienced contributors to
LLVM: Kristof Beyls, Peter Smith, Jeroen Dobbelaere, Nick Desaulniers

Back

Automating Programming and Development of Heterogeneous SoCs with LLVM Tools

Home

Speaker Joshua Mack
Nirmal Kumbhare
RoomK.4.201
TrackLLVM
Time17:00 - 17:40
Event linkView original entry

Historically, programming heterogeneous systems has been quite a challenge. While programming support for basic general-purpose accelerators such as GPUs has become quite mature in many ways, general heterogeneous SoCs in particular can feature a much broader range of accelerators in their efforts to minimize power consumption while maximizing performance. Many SoCs, though, are designed with accelerators tailored for the domain -- such as signal processing -- in which they’ll be used: Domain-Specific SoCs. As SoC platforms become ever-more heterogeneous, we think that application developers shouldn’t need to waste time reading datasheets or APIs for SoC-specific kernel extensions just to take full advantage of their hardware. With this in mind, in this talk we will discuss strategies we are using to automate mapping of LLVM-compatible languages to heterogeneous platforms with no intervention (not even #pragmas) from the programmer.



To this end, we present our prototype of a software stack that seeks to address both of these needs. To meet the first need, we developed an LLVM-based hybrid compile/run-time toolchain to extract the semantic operations being performed in a given application. With these semantic operations extracted, we can link in additional libraries that enable dispatch of certain kernels (such as a Fast Fourier Transform) to accelerators on the SoC without user intervention. To evaluate the functionality of this toolchain, we developed a runtime system built on top of QEMU+Linux that includes scheduling and task dispatch capabilities targeting hypothetical SoC configurations. This enables behavioral modeling of these accelerators before silicon (or even FPGA) implementations are available. The focus here will be on the LLVM-mapping aspects, but a brief overview of our SoC simulation environment will be presented as well.
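The dispatch idea described above — route a recognized semantic operation to an accelerator when one claims it, otherwise fall back to the CPU — can be caricatured in a few lines. This is a hedged toy sketch; the class and names are hypothetical, not the authors' actual runtime API.

```python
class Accelerator:
    """Toy stand-in for one programmable element on a heterogeneous SoC."""
    def __init__(self, name, supported_ops):
        self.name = name
        self.supported_ops = set(supported_ops)

    def supports(self, op):
        return op in self.supported_ops

    def run(self, op, args):
        # A real runtime would enqueue the kernel on the device here.
        return (self.name, op, args)

def dispatch(op, args, accelerators, cpu_impl):
    """Route a recognized semantic operation (e.g. "fft") to the first
    accelerator that claims it; otherwise fall back to the CPU."""
    for accel in accelerators:
        if accel.supports(op):
            return accel.run(op, args)
    return ("cpu", op, cpu_impl(*args))
```

The hard part — and the subject of the talk — is filling the `op` slot automatically, by recognizing semantic operations in LLVM IR rather than asking the programmer to annotate them.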

Back

HPVM: Extending LLVM For Compiling to Heterogeneous Parallel Systems

Home

Speaker Vikram Adve
RoomK.4.201
TrackLLVM
Time17:45 - 18:25
Event linkView original entry

TITLE: HPVM: Extending LLVM For Compiling to Heterogeneous Parallel Systems



SPEAKER: Vikram Adve, University of Illinois at Urbana-Champaign



Abstract:



We will present a detailed description of HPVM, an extension to LLVM for
compiling to heterogeneous parallel systems. HPVM aims to make it much
easier to develop compilers for diverse parallel hardware, and to implement
parallel languages (including domain-specific languages) for such hardware.
We will briefly describe at a high-level the key parallel abstraction of
hierarchical dataflow graphs used in HPVM, and then focus on how HPVM is
integrated on top of LLVM. A second part of the talk will briefly describe
how we are extending HPVM to enable greater energy efficiency and
performance by taking advantage of approximation opportunities in
application domains such as machine learning and image processing. To
conclude, I will briefly discuss how HPVM might be added as a dialect in
MLIR so that other MLIR dialects and MLIR-based compilers can use HPVM for
code generation to diverse heterogeneous hardware targets, including GPUs,
FPGAs, and custom accelerators.




Background



LLVM has been extraordinarily successful as a compiler infrastructure for
enabling a wide range of compilers and compiler-based tools for scalar and
vector processors, and for supporting GPU compilers for OpenCL and CUDA.
LLVM has seen only limited use, however, for other classes of target
architectures, such as reconfigurable hardware (FPGAs) and domain-specific
accelerators such as for machine learning, image processing, signal
processing, graph processing, and other emerging domains. More generally,
heterogeneous system-on-chip (SoC) architectures are becoming increasingly
important, especially in "edge computing," but LLVM has largely been
limited to the host CPU and GPU on such SoCs, even though the number of
other programmable components on these systems has been steadily increasing.



Overview



In this talk, I will describe an extension of LLVM for developing a compiler
infrastructure -- Heterogeneous Parallel Virtual Machine, or HPVM -- for
heterogeneous parallel systems [1]. I will briefly describe at a high-level
the key parallel abstraction of hierarchical dataflow graphs used in HPVM to
describe heterogeneous parallelism, where ordinary LLVM code is used to
represent the computational tasks. The main focus of this part of the
talk is how HPVM is integrated on top of LLVM. First, HPVM has been
implemented as a set of intrinsic functions that extend the LLVM
infrastructure. Second, the HPVM code generation framework reuses existing
LLVM (and other) back-ends, in order to leverage existing (often well-tuned)
code generators for individual programmable hardware elements, such as NVPTX
for NVIDIA GPUs, Intel's SPIR-V code generator for Intel SSE and AVX vector
hardware, and Altera's AOCL compiler for targeting Altera's FPGAs.
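The hierarchical dataflow-graph abstraction can be sketched in a few lines of Python. This is a toy model (all names invented), not HPVM's actual intrinsic-based encoding: leaf nodes hold ordinary computational code, and internal nodes contain whole subgraphs.

```python
class DataflowNode:
    """Leaf node: a computational task (ordinary code, in HPVM's model)."""
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)

    def run(self, results):
        return self.fn(*(results[d.name] for d in self.deps))

class GraphNode(DataflowNode):
    """Internal node: its body is itself a dataflow graph (hierarchy)."""
    def __init__(self, name, children, deps=()):
        super().__init__(name, None, deps)
        self.children = children

    def run(self, results):
        inner = run_graph(self.children)
        return inner[self.children[-1].name]  # last child produces the output

def run_graph(nodes):
    # Execute nodes in dependency order (assumed topologically sorted,
    # as a real scheduler would guarantee).
    results = {}
    for node in nodes:
        results[node.name] = node.run(results)
    return results
```

In HPVM the equivalent structure is encoded with LLVM intrinsic functions, so the graph lives inside ordinary LLVM IR and existing back-ends can compile the leaves.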



A second part of the talk will briefly describe how we are extending
HPVM to enable greater energy efficiency and performance by taking
advantage of approximation opportunities in application domains such
as machine learning and image processing. In particular, we are
currently developing ApproxHPVM, an extension of HPVM that supports a
range of algorithmic and hardware-level approximation mechanisms [2].
Moreover, ApproxHPVM only requires application programmers to specify
high-level, "end-to-end" design goals such as the maximum allowable
accuracy loss in a neural network or loss of image quality (e.g.,
PSNR) and the system automatically selects, optimizes and maps
approximation choices for individual coarse-grain tensor operations in
the application. The goal is to make sophisticated and well-tested
approximation techniques widely accessible to application developers.
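To make the selection problem concrete, here is a hedged toy sketch of budgeted approximation choice: each tensor operation offers candidate (speedup, accuracy-loss) pairs, and a greedy pass picks the most profitable candidates until the end-to-end loss budget is spent. The real ApproxHPVM tuner is far more sophisticated; the numbers and names here are invented.

```python
def pick_approximations(ops, budget):
    """Greedy selection of per-op approximations under an end-to-end
    accuracy-loss budget.

    ops maps an op name to candidate (speedup, accuracy_loss) pairs;
    (1.0, 0.0) denotes the exact version.  Candidates offering the
    most speedup per unit of accuracy loss are taken first, as long
    as the cumulative loss stays within budget.
    """
    choice = {op: (1.0, 0.0) for op in ops}
    spent = 0.0
    candidates = sorted(
        ((s, l, op) for op, cs in ops.items() for s, l in cs if l > 0),
        key=lambda t: -(t[0] / t[1]),
    )
    for speedup, loss, op in candidates:
        if choice[op] == (1.0, 0.0) and spent + loss <= budget:
            choice[op] = (speedup, loss)
            spent += loss
    return choice
```

The point of the sketch is the shape of the interface: the programmer supplies only the budget, and the system decides per-operation.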



To conclude, I will briefly discuss how HPVM and ApproxHPVM might be added
as a dialect in MLIR so that other MLIR dialects and MLIR-based compilers
can use HPVM for diverse heterogeneous hardware targets, including GPUs,
FPGAs, and custom accelerators.



Target Audience



The intended target audience for this talk falls into broadly two classes.
The first includes compiler practitioners and researchers interested in
compiling to heterogeneous systems, such as SoCs, FPGAs, and other
"edge-compute" hardware. The second includes language implementers
interested in implementing or porting domain-specific languages such as
TensorFlow, Halide, SPIRAL, and others to heterogeneous parallel systems.



Takeaways



We envision several takeaways for the audience: (1) Understand how to
develop an extension of LLVM that makes it easier to target emerging
hardware platforms not sufficiently well-supported by the existing LLVM IR
and code generation framework. (2) Expose attendees to the opportunities and
challenges in supporting and reasoning about approximate computations in a
compiler framework. (3) Discuss the opportunities and limitations of using
HPVM for supporting heterogeneous parallel systems in the context of MLIR.



Web Site and Software Availability



More information about HPVM is available at http://hpvm.cs.illinois.edu/.
The HPVM infrastructure is implemented as an extension to LLVM. To date,
the software is being developed using an internal Git repository at Illinois
and has been shared with collaborators at IBM and at Harvard University.
We will make it available publicly in open-source form on Github before the
FOSDEM conference.



REFERENCES



[1] Maria Kotsifakou, Prakalp Srivastava, Matthew D. Sinclair,
Rakesh Komuravelli, Vikram S. Adve and Sarita V. Adve, “HPVM:
Heterogeneous Parallel Virtual Machine.” Proceedings of Principles and
Practice of Parallel Programming (PPoPP), Feb 2018, Vösendorf / Wien,
Austria.



[2] Hashim Sharif, Prakalp Srivastava, Mohammed Huzaifa, Maria
Kotsifakou, Yasmin Sarita, Nathan Zhou, Keyur Joshi, Vikram S. Adve,
Sasa Misailovic and Sarita V. Adve, “ApproxHPVM: A Portable Compiler
IR for Accuracy-aware Optimizations,” OOPSLA 2019, October 2019,
Athens, Greece.

Back

libliftoff status update

Home

Speaker Simon Ser
RoomK.4.401
TrackGraphics
Time11:00 - 11:25
Event linkView original entry

This talk will explain some basics about KMS, introduce libliftoff, describe the library's current status and outline the next steps.

Many DRM drivers have been exposing overlay planes for quite some time. Overlay planes can save power by scanning out client buffers directly, skipping composition. While Wayland compositors and the X server usually take advantage of the cursor plane (and sometimes can use the primary plane to directly scan out a client's buffer), overlay planes are under-used. The exception is Weston, which tries to use overlay planes; other compositors ignore them.



The main challenge is to figure out how to assign buffers coming from clients to hardware planes. The only API exposed by KMS is atomic test commits, so user-space needs to try different combinations. It would be nice to have a common library shared between compositors to de-duplicate the work.
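To make the search problem concrete, here is a hedged Python caricature of what such a library automates (libliftoff itself is written in C; the callback stands in for an atomic commit issued with the TEST_ONLY flag):

```python
from itertools import permutations

def assign_layers(layers, planes, test_commit):
    """Brute-force layer-to-plane assignment.

    test_commit(mapping) stands in for a KMS atomic commit performed
    with the TEST_ONLY flag: it returns True iff the hardware accepts
    the proposed configuration.  Returns the first accepted mapping
    from layer to plane, or None if nothing fits.
    """
    for chosen in permutations(planes, len(layers)):
        mapping = dict(zip(layers, chosen))
        if test_commit(mapping):
            return mapping
    return None
```

In practice the search must be much smarter than raw enumeration — plane counts and ordering constraints explode quickly — which is exactly the complexity a shared library can hide from every compositor.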



During the XDC 2019 conference we discussed libliftoff, an attempt at designing such a library. Feedback was positive from both compositor writers and driver developers. We discussed the API, the potential pitfalls and future goals. The scope of the library has been expanded: libliftoff could also provide feedback to clients so that they allocate buffers suitable for hardware planes. Additionally, because the KMS API makes it tricky to find the best way to make use of hardware planes, libliftoff could grow some vendor-specific plugins.

Back

Zink Update

Home

Speaker Erik Faye-Lund
RoomK.4.401
TrackGraphics
Time11:30 - 11:55
Event linkView original entry

A short update on the state of Zink, an OpenGL implementation on top of Vulkan, now that it's upstream in Mesa.

Back

Modernizing mesa3d.org

Home

Speaker Erik Faye-Lund
RoomK.4.401
TrackGraphics
Time12:00 - 12:55
Event linkView original entry

mesa3d.org is stuck on web 1.0 technology, but let's see what we can do about it.

This is a Birds-Of-a-Feather session, which starts off with a short presentation as an introduction about the current state of affairs.

Back

AMENDMENT Nouveau Status update

Home

Speaker Karol Herbst
RoomK.4.401
TrackGraphics
Time13:00 - 14:00
Event linkView original entry

I will talk about:
* features
* ongoing work and necessary reworks
* big and important technical issues
* overall state of the Nouveau project



This is a replacement for Manasi Navare's "Enabling 8K displays" talk, which got cancelled as Intel rejected her travel request again.

Back

FOSS Virtual & Augmented Reality

Home

Speaker Jakob Bornecrantz
RoomK.4.401
TrackGraphics
Time14:00 - 14:55
Event linkView original entry

This talk will cover Monado and Khronos' OpenXR standard, give an overview of the current state of open source VR and what lies ahead, and go into some details of how tracking is done inside Monado, showing off its current state.

VR took off for the consumer with the release of Oculus consumer hardware. But the hardware lacked open source drivers and Linux support in general. The consumer VR space has since grown from a Kickstarter campaign into a large industry. This growth has its downsides: multiple companies compete with their own APIs. Luckily these companies have agreed to work on a single API under the Khronos umbrella. Now that OpenXR has been released and the Monado project has been getting more stable, it is possible to do good VR on a completely open stack.

Back

Back to the Linux Framebuffer!

Home

Speaker Nicolas Caramelli
RoomK.4.401
TrackGraphics
Time15:00 - 15:55
Event linkView original entry

Although KMS/DRM can replace the Linux Framebuffer, there are a number of programs and libraries that can be built on top of the Linux Framebuffer (without X11 or Wayland dependencies) and that might still be worth considering. The Linux Framebuffer allows direct access to pixels: we will illustrate it with various rendering tools (Fbpad, Fbi, NetSurf, MPlayer, ...), but also with drawing libraries such as Cairo or Evas, and multimedia frameworks like FFmpeg or GStreamer.
The Mesa 3D project makes OpenGL rendering possible using only the Linux Framebuffer with GLFBDev or EGL: mesa-demos and yagears programs will be shown.
We will then cover graphics libraries (GLUT, SDL, EFL, GTK, Qt) that allow high-level applications to run directly on top of the Linux Framebuffer with no compositor. An example will be described using either WebKitGTK or QtWebKit for the rendering of an HTML5 media player and a WebGL sample, using the Linux Framebuffer port of those libraries and toolkits.
This talk is inspired by the HiGFXback project which aims at preserving historical backends used for graphics on GNU/Linux systems.

Back

The TTM memory manager

Home

Speaker Christian König
RoomK.4.401
TrackGraphics
Time16:00 - 16:55
Event linkView original entry

TTM is the memory manager in the Linux kernel used by graphics drivers with dedicated VRAM.



It was added to the mainline kernel in June 2009 and has seen numerous changes since; we are now more or less running into a dead end with its design.



This talk outlines TTM's current functionality, the design problems we ran into, and what we can do to fix them.

Back

Pattern Based Code Generation for GPUs

Home

Speaker Ian Romanick
RoomK.4.401
TrackGraphics
Time17:00 - 17:55
Event linkView original entry

Automatic, pattern-based code generation for Mesa's compiler infrastructure has been a long standing dream. Nearly a decade ago experiments were conducted using systems like BURS and lburg. Each of these attempts encountered various insurmountable road blocks. In the intervening years, both software and GPU architectures have changed significantly. These changes have enabled a code-generator generator to be a reality. The design and implementation of one system will be presented. In addition to the successes, various difficulties and rough edges will be detailed.
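As a flavor of what a pattern-based code-generator generator produces, here is a toy greedy tree-tiler that covers `add(mul(a, b), c)` with a single fused multiply-add when the target offers one. The instruction names and tree encoding are invented for illustration; real systems like BURS compute optimal tilings from a declarative pattern grammar.

```python
def select(tree, use_madd=True):
    """Toy greedy tree-tiler for instruction selection.

    Trees are (op, left, right) tuples; leaves are register names.
    When use_madd is set, the tiler covers add(mul(a, b), c) with a
    single fused 'madd' instruction instead of a mul followed by an
    add.  Returns the emitted instruction list.
    """
    code = []
    counter = [0]

    def fresh():
        counter[0] += 1
        return f"t{counter[0]}"

    def walk(node):
        if isinstance(node, str):  # leaf: value already in a register
            return node
        op, left, right = node
        # Pattern: add(mul(a, b), c) -> fused multiply-add.
        if use_madd and op == "add" and isinstance(left, tuple) and left[0] == "mul":
            a, b, c = walk(left[1]), walk(left[2]), walk(right)
            dst = fresh()
            code.append(f"madd {dst}, {a}, {b}, {c}")
            return dst
        x, y = walk(left), walk(right)
        dst = fresh()
        code.append(f"{op} {dst}, {x}, {y}")
        return dst

    walk(tree)
    return code
```

The generator-generator's job is to produce matchers like this automatically from pattern descriptions, instead of having them hand-written per backend.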

Back

A Vulkan driver for the RPI4

Home

Speaker Andreas Bergmeier
RoomK.4.401
TrackGraphics
Time18:00 - 18:25
Event linkView original entry

For the Raspberry Pi 4 I started implementing a Vulkan driver. This talk will give a guide on how to approach such a task, what my expectations are and what I have learned so far.

With the release of the Raspberry Pi 4 it becomes theoretically more viable to use it in GPU-heavy scenarios. Even ordinary software like GNOME Shell, Chromium and games fall into that category.
Sadly, neither Broadcom nor the Raspberry Pi Foundation currently provides a Vulkan driver. Since I want as much performance (and as little overheating) as possible, I started writing a Vulkan driver.
This entails learning kernel and Mesa internals as well as trying to understand Gallium. All that I have learned so far, I will try to share in this talk.

Back

libratbag

Home

Speaker Filipe Laíns
RoomK.4.401
TrackGraphics
Time18:30 - 18:55
Event linkView original entry

This talk will give an update on the progress being done in libratbag as well as present the new projects we have planned. If there's time I will also show how you should be able to write your own driver and debug existing drivers. This is a talk about libratbag updates, planned projects and a code demo.

Back

Opening

Home

Speaker Vasily A. Sartakov
RoomK.4.601
TrackHardware-aided Trusted Computing
Time10:30 - 10:40
Event linkView original entry


Back

AMENDMENT Be secure with Rust & Intel SGX

Home

Speaker Jethro G. Beekman
RoomK.4.601
TrackHardware-aided Trusted Computing
Time10:40 - 11:15
Event linkView original entry

Intel Software Guard Extensions (SGX) makes software secure from the outside. Rust makes it secure from the inside. This workshop will introduce you to Rust and the Fortanix® Enclave Development Platform (EDP) for Rust: how it works, what you can do with it, and why Rust is such a good fit for SGX.



Please note that this was previously scheduled for 11:15 and replaces the talk on "Introduction to the CoSMIX Compiler" by Yan Michalevsky.

Back

AMENDMENT The Confidential Consortium Framework

Home

Speaker Amaury Chamayou
RoomK.4.601
TrackHardware-aided Trusted Computing
Time11:20 - 11:55
Event linkView original entry

The Confidential Consortium Framework is an open-source framework for building permissioned confidential multi-party services. It leverages hardware trusted execution environments to provide strong confidentiality, integrity, and high performance. CCF implements consortium-based programmable and auditable governance.



Please note that this talk was originally scheduled to begin at the later time of 11:50.

Back

AMENDMENT EActors: an actor-based programming framework for Intel SGX

Home

Speaker Vasily A. Sartakov
RoomK.4.601
TrackHardware-aided Trusted Computing
Time12:00 - 12:35
Event linkView original entry

In this talk I will present EActors, an actor framework that is tailored to SGX and offers a more seamless, flexible and efficient use of trusted execution – especially for applications demanding multiple enclaves. EActors disentangles the interaction with enclaves and, among them, from costly execution mode transitions. It features lightweight fine-grained parallelism based on the concept of actors, thereby avoiding costly SGX SDK provided synchronisation constructs. Finally, EActors offers a high degree of freedom to execute actors, either untrusted or trusted, depending on security requirements and performance demands.



Please note that this talk was originally scheduled to begin at the later time of 12:25

Back

AMENDMENT A Tale of Two Worlds: Assessing the Vulnerability of Enclave Shielding Runtimes

Home

Speaker Jo Van Bulck
RoomK.4.601
TrackHardware-aided Trusted Computing
Time12:40 - 13:15
Event linkView original entry

This talk analyzes the vulnerability space arising in Trusted
Execution Environments (TEEs) when interfacing a trusted enclave
application with untrusted, potentially malicious code. Considerable
research and industry effort has gone into developing TEE runtime
libraries with the purpose of transparently shielding enclave
application code from an adversarial environment. However, our analysis
reveals that shielding requirements are generally not well-understood in
real-world TEE runtime implementations. We expose several sanitization
vulnerabilities at the level of the Application Binary Interface (ABI)
and the Application Programming Interface (API) that can lead to
exploitable memory safety and side-channel vulnerabilities in the
compiled enclave. Mitigation of these vulnerabilities is not as simple
as ensuring that pointers are outside enclave memory. In fact, we
demonstrate that state-of-the-art mitigation techniques such as Intel’s
edger8r, Microsoft’s “deep copy marshalling”, or even memory-safe
languages like Rust fail to fully eliminate this attack surface. Our
analysis reveals 35 enclave interface sanitization vulnerabilities in 8
major open-source shielding frameworks for Intel SGX, RISC-V, and Sancus
TEEs. We practically exploit these vulnerabilities in several attack
scenarios to leak secret keys from the enclave or enable remote code
reuse. We have responsibly disclosed our findings, leading to 5
designated CVE records and numerous security patches in the vulnerable
open-source projects, including the Intel SGX-SDK, Microsoft Open
Enclave, Google Asylo, and the Rust compiler.



Please note that this talk was originally scheduled to begin at the later time of 13:00

Back

AMENDMENT HOWTO build a product with OP-TEE

Home

Speaker Rouven Czerwinski
RoomK.4.601
TrackHardware-aided Trusted Computing
Time13:20 - 13:55
Event linkView original entry

OP-TEE is an open source implementation of the GPD TEE specifications. However, deploying OP-TEE in a real-world product requires more than just integrating it into the system, since the integrator needs to ensure that all security requirements are met. This talk will outline a common set of these requirements and show the necessary changes based on NXP i.MX6 platforms.



Please note that this talk was originally scheduled to begin at the later time of 13:35

Back

AMENDMENT Demo: SGX-LKL

Home

Speaker Thiago Zagatti
RoomK.4.601
TrackHardware-aided Trusted Computing
Time14:00 - 14:30
Event linkView original entry

SGX-LKL is a library OS designed to run unmodified Linux binaries inside SGX enclaves. It uses the Linux Kernel Library (LKL) and a modified version of musl to provide system support for complex applications within the enclave. SGX-LKL has support for in-enclave user-level threading, signal handling, and paging. This demo presents an overview of SGX-LKL and demonstrates how popular applications can be ported and executed within SGX-LKL.



Please note that this talk was originally scheduled to begin at the later time of 14:10

Back

Open source UEFI and TianoCore

Home

Speaker Leif Lindholm
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time15:00 - 15:25
Event linkView original entry

Historically, the UEFI forum has been a bit rubbish at interacting with open source development, but this is improving.



This talk gives a background on why (both the rubbish and the improvement) and what is being done.



Also, a brief update on news for the TianoCore/EDK2 project.

After much lawyerly fun, the UEFI Self-Certification Testsuite (SCT) was released under an OSI license (BSD2) at the end of 2018.
We will explain why this is useful, and how it has helped with the addition of UEFI support in U-Boot, as well as helped improve EDK2 code quality.



We have a new process, referred to as "code first", for drafting changes to the UEFI specification in public.
This talk will give an overview of the process, as well as one of the first exercises of it: the definition of audio APIs (for accessibility or, you know, DOOM).



Also, an update on changes in the EDK2 reference implementation, including dropping the CLA and upcoming RISC-V support.



And finally Ard Biesheuvel will give a quick overview of a plan to unify the Linux kernel boot protocol on EFI platforms across architectures.

Back

Discover UEFI with U-Boot

Home

Speaker Heinrich Schuchardt
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time15:30 - 15:55
Event linkView original entry

The Unified Extensible Firmware Interface (UEFI) is the default for booting most Linux and BSD distributions. But the complexity of the UEFI standard does not offer an easy entry point for new developers. The U-Boot firmware provides a lightweight UEFI implementation. Using booting from iSCSI with U-Boot and iPXE as an example, let's delve into the UEFI API.



The UEFI sub-system in U-Boot has developed from barely starting GRUB to supporting complex UEFI applications like iPXE and the EFI shell and passing most of the UEFI compliance tests for the implemented protocols and services.



The session gives an overview of UEFI's boot-time and runtime services with a focus on driver binding. The challenges of integrating the UEFI subsystem with U-Boot's infrastructure are described and an outlook is provided.



Questions this talk should answer:
- How does the UEFI driver model work?
- How does this integrate with U-Boot?
- What to expect next in U-Boot's UEFI implementation?

Back

Heads OEM device ownership/reownership : A tamper evident approach to remote integrity attestation

Home

Speaker Thierry Laurion
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time16:00 - 16:25
Event linkView original entry

Insurgo had engaged itself in the adventure of facilitating security accessibility and received NlNet funding to do exactly that. Now it wants to get developers involved and expand funding.



The goal of this is to bridge the gap between reasonably secure OS (QubesOS) and slightly more secure hardware (Heads) to help privacy-focused users and those that are vulnerable. But we need to prepare for the future now!



Insurgo has challenged the status quo that has been prevalent since 2015 and has made it possible for OEMs to preinstall QubesOS, thanks to the Heads Open Source Firmware (OSF) and its own PrivacyBeast QubesOS certified branch, not yet merged upstream due to the limited time and resources of a one-man effort that needs additional collaboration.



The integrity of the firmware and boot files is already remotely sealed and can be attested over smartphone (TPMTOTP) and from the bundled Librem Key/Nitrokey Pro 2 (HOTP), prior to shipping. Thanks to HOTP-enabled USB security dongles bound to shipped products, the user can visually validate that the hardware they've received is in the OEM-attested state, prior to complete re-ownership, which regenerates all required secrets from a trustable recovery environment (Heads OSF) thanks to a re-ownership wizard that guides the user to completion.



This is just the beginning of the adventure and the road ahead requires your help. Insurgo wants to propel this movement forward.



Today's secure hardware (REAL open-source-initialized hardware, e.g. the RYF KGPE-D16, Replicant-supported phones, Sandy Bridge/Ivy Bridge based boards such as the x230) struggles to stay current with upstream code and compliance requirements. LineageOS dropped support for the i9300. Coreboot dropped support for the KGPE-D16 platform. And the list will grow if no measures are taken to fund the maintainership of privacy-focused projects that are taken for granted until support is finally dropped. This is a real problem requiring real solutions.



New efforts to support future REAL Open Source Hardware (newly Respect Your Freedom [RYF] certified hardware, e.g. the Talos II from Raptor Engineering, future POWER10-based hardware) are neither currently under active development nor currently supported by QubesOS. This needs to change. Now.



There is an opportunity for transition. This requires leadership, developers and funding.
This is why we've created the Insurgo Initiative on the OpenCollective platform.



This is where transparent funding will be available to the public for open source R&D. Please consider participating through code contributions!

Insurgo is making today's most trustworthy hardware (a TRULY neutered and deactivated Intel ME, no FSP, no binary blobs whatsoever except the EC firmware in the Root of Trust) available to the masses through remote attestation over Heads OSF.



NlNet is helping make Heads compatible with the T530, T430, T420 and X220, which are widely available, binary-blob-free hardware platforms, thanks to a partnership with 9elements under an NlNet grant.
NlNet funding is also enabling development of remote administration of QubesOS over Tor hidden services when needed, thanks to an ongoing partnership with both the Qubes OS Project and Whonix.



But what about other work needed to ease accessibility of tomorrow's secure hardware and technologies?



Insurgo decided to give back to Open Source Firmware (OSF) related communities and will publicly announce a novel approach to supporting required open source projects.
As a premiere, we plan to give back 25% of Insurgo's net profit on sales to the Insurgo Initiative, hosted on OpenCollective.



Those funds will be available to Open Source projects in the form of bounties, to be paid out upon proof of work of agreed contributions.



The idea here is that open source tickets (issues) can be used as bounties, and if knowledgeable people knew funds were available for needed work, they'd be more incentivized to address them.
Developers could then be rewarded for their efforts and paid for completing tasks, similar to how open source funds (OpenTech, NlNet, etc.) provide funding for larger projects.



The Insurgo Initiative will be self funded and potentially expanded through international partnerships, while the goal stays the same: supporting a future where security is more accessible to the public.



Here are some projects needing additional funding and more developer awareness right now. Big funds and grant applications are great, but the funding process has issues:
not every developer wants to go through the application process, which requires management skills and work that is not just about coding.
There are awesome developers out there whose help is greatly needed.



How do we appropriately match developers with pertinent issues? We can fix this with the right mission and funding.
Insurgo's mission is for accessible security.



Bounty tags are being added to projects that lack funding, to help address the problems currently standing in the way of their completion:






The main problem we seem to face with many projects can be seen over and over again: a lack of maintainership.



No one can carry on a project for too long without becoming overwhelmed/drained by it.
We need to fairly distribute this work and make sure contributions are incentivized and fairly paid.



In this talk, I will go quickly over past work, the current situation, and where Insurgo wants to go.



Welcome aboard!

Back

Improving the Security of Edge Computing Services

Home

Speaker Daniel Kiper
Piotr Król
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time16:30 - 16:55
Event linkView original entry

For the last several years, hypervisors have played a key role in platform
security by reducing the possible attack surface. At the same time, the hype
surrounding edge computing and Internet of Things gateways has led to an
increase in network appliance devices. Our target was to create a less-insecure
virtual network appliance using TrenchBoot, Trusted Platform Module 2.0 and the
AMD SKINIT Dynamic Root of Trust for Measurement to establish a Xen hypervisor
with a meta-virtualized pfSense firewall. We are going to present it together
with an update on the status of TrenchBoot support for AMD processors.
This appliance is built on the apu2, a reliable low-SWaP x86 device from the
Swiss OEM PC Engines. It can be used as a Small Office / Home Office firewall
or an industrial edge device, and has mostly open-source hardware, coreboot
firmware, mPCIe extensibility and an extended support lifecycle for the
embedded Central Processing Unit and motherboard.
In this talk, we will show how to create a system which offloads a significant
portion of computation to edge devices while maintaining security. Using
a simple, well-known platform, we will conduct a secure boot using the Static
Root of Trust for Measurement with coreboot, move to the Dynamic Root of Trust
for Measurement via SKINIT in TrenchBoot, and use all of this to provide a
complete chain of trust for the Xen hypervisor and a virtual firewall appliance
isolated by an input–output memory management unit (IOMMU) from the physical
network interface controller (NIC) devices. We will present benchmark data
on virtualization overhead, explain how this complexity can still be practical,
and outline the value of this stack. In the second part of the presentation we
will discuss the current status of Intel TXT development in GRUB and the Linux
kernel.

Back

Introducing AUTOREV

Home

Speaker Patrick Rudolph
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time17:00 - 17:25
Event linkView original entry

Modern Open Source boot firmware ships with an increasing number of BLOBs. While it's often claimed that they ease integration,
they make the lives of Open Source developers harder, as it's not documented what is done inside the BLOBs and what should be done
outside of them.



We will show how to trace the MMIO accesses of BLOBs in firmware using Open Source tools. As analysing the traces for possible
branches and loops is hard and stressful work, we created our own framework for automatic reverse engineering.
Our framework can capture and analyse MMIO traces, fuzz the BLOB under test, and finally generate readable code in a high-level
language, such as C, for easy analysis.



During this talk, we will discuss the legal side, the motivation behind reverse engineering, and the benefit for the Open Source community.
We will explain the problems we faced, and explain the basic concept used, with examples from the real world.

Back

Look at ME!

Home

Speaker Daniel Maslowski (CyReVolt)
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time17:30 - 17:55
Event linkView original entry

With Intel's Firmware Support Package (FSP) and the recent release of a
redistributable firmware binary
for the Management Engine, it has become possible to share full firmware images
for modern x86 platforms and potentially audit the binaries. Yet, reverse
engineering, decompilation and disassembly are still not permitted. However,
thanks to previous research, we can have a closer look at the binary data and
come to a few conclusions. This talk briefly summarizes the fundamentals of
developing custom and open source firmware, followed by a quick guide through
the process of analyzing the binaries without actually violating the terms to
understand a few bits, and finally poses a statement on the political issues
that researchers, repair technicians and software developers are facing
nowadays, taking into account how consumers are affected and how they
eventually perceive the situation.

Back

Capsule Update & LVFS: Improving system firmware updates

Home

Speaker Brian Richardson
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time18:00 - 18:25
Event linkView original entry

As the rich capabilities of platforms increase, so does their complexity. As hypervisors and operating systems harden their attack surfaces, malware has been moving deeper into the platform. For example, a modern laptop may have over 15 updatable firmware elements, each with low-level access to a specific hardware domain. From the early days of proprietary BIOS in the 1980’s and 1990’s, to the world of standards in the 2000’s, to the post-PC world of the last few years, the nature of firmware has changed. In order to provide security guarantees for platform firmware, the servicing model of the platform takes center stage.



This session discusses the evolution of platform servicing using examples based on device firmware, non-host/system on a chip (SOC) firmware, and implementation of the Unified Extensible Firmware Interface (UEFI). A modern servicing model features elements for component-based update, resiliency in case of unexpected conditions, a more seamless user experience, lower friction for update integration, and telemetry for a view into platform health and firmware inventory.



This talk will discuss current trends in standards such as UEFI and the associated EDK II firmware, and how the Linux Vendor Firmware Service (LVFS) uses these components as part of a holistic, open source approach to seamless firmware updates.

Back

Opening Intel Server firmware based on OpenBMC example

Home

Speaker Maciej Lawniczak
Przemyslaw Czarnowski
RoomK.4.601
TrackOpen Source Firmware, BMC and Bootloader
Time18:30 - 18:55
Event linkView original entry

Have you ever heard of the Baseboard Management Controller? It has been black-box firmware for managing servers since the last century … now it’s open. OpenBMC is a Linux Foundation project with the goal of producing an open source implementation of the BMC firmware stack. It is a vendor-independent Linux distribution created using the Yocto Project that provides a complete set of manageability features. Backbone technologies in OpenBMC include D-Bus and systemd. With an embedded web server, it provides a user-friendly WebUI and a Redfish interface for easy server management using modern RESTful APIs. Intel, as one of the founding companies, offers additional functionality on top of the OpenBMC implementation, which will be presented as part of this presentation.



In this talk we will:
- tell you a short history and overview of OpenBMC
- have a quick view on OpenBMC architecture (Yocto, Dbus, systemd)
- show what’s new in the latest 2.7 release and what is planned for 2.8 (Feb 2020)
- talk about Intel specific features available in OpenBMC
- tell you how to contribute to OpenBMC project
- give you a guide on how to modify, build and run the project on target BMC on Intel server



Audience: software engineers, validation engineers, embedded software architects, data center administrators

Back

Threat Modelling for Developers

Home

Speaker Arne Padmos
RoomUA2.114 (Baudoux)
TrackSecurity
Time10:30 - 10:55
Event linkView original entry

What threats do we need to take into account when building a system? A key method for answering this question is an approach called threat modelling, whereby security problems can be anticipated during the design phase. This talk discusses major threat-modelling approaches, and includes concrete examples of how to apply them to software-intensive systems.

Back

AMENDMENT Custom Crypto Policies by Examples

Home

Speaker Tomáš Mráz
RoomUA2.114 (Baudoux)
TrackSecurity
Time11:00 - 11:25
Event linkView original entry

Managing the set of allowed cryptographic algorithms, so as to disallow algorithms that are weak or restricted by certification standards, is a complicated task. The talk will introduce the system-wide crypto-policies concept and implementation as an attempt to help system administrators with this task.



This talk replaces the "OSINT" talk that was scheduled initially, as David Busby could not attend on short notice.

The system-wide crypto-policies were introduced in Fedora 21 in 2014. Since then the tool has evolved a lot, especially with the recent introduction of run-time generation of the configuration from a policy definition file and the introduction of the sub-policy concept. This feature is called custom crypto policies. The crypto-policies nowadays help manage crypto algorithms not only in Fedora but also in Red Hat Enterprise Linux 8. It will be shown what the policy definition file looks like and how it is converted to the actual crypto library configuration.
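As an illustration of the mechanism described above, a custom sub-policy on a Fedora-style system is a small drop-in file of directives; the file name, path and directives below are an assumed example, not taken from the talk:

```
# Assumed example: /etc/crypto-policies/policies/modules/NOSHA1.pmod
# Directives subtract (or add) algorithms relative to the base policy.
hash = -SHA1
sign = -RSA-SHA1
```

The sub-policy would then be applied on top of a base policy, e.g. with `update-crypto-policies --set DEFAULT:NOSHA1`, which regenerates the per-library configuration at run time.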

Back

Securing Existing Software using Formally Verified Libraries

Home

Speaker Tobias Reiher
RoomUA2.114 (Baudoux)
TrackSecurity
Time11:30 - 11:55
Event linkView original entry

Security vulnerabilities are still very common in today's software. Formal methods could improve the situation, but program verification remains a complex and time-consuming task. Often, the verification of existing software is infeasible, and a complete rewrite can be prohibitively expensive. Neither, however, is necessarily required to improve on the current state. By replacing critical parts of existing software with verified code, security can be strengthened significantly with moderate effort.



We show the feasibility of this approach by the example of a FLOSS TLS implementation. The basis of our PoC is the TLS 1.3 library Fizz which is written in C++. The existing message parser was replaced by a verified version implemented in the SPARK language. Our RecordFlux toolset was used to automatically generate the parser based on a formal message specification. With the SPARK tools we can prove automatically that an attacker cannot cause any overflows, runtime errors or undefined state by sending malformed messages to the modified library. Because of mismatches in the data structures used in C++ and SPARK, some glue code had to be written manually to integrate the verified parser into Fizz. Still, the modified TLS implementation shows only a slight performance loss while providing higher security.

Back

SpecFuzz: Bringing Spectre-type vulnerabilities to the surface

Home

Speaker Oleksii Oleksenko
RoomUA2.114 (Baudoux)
TrackSecurity
Time12:00 - 12:25
Event linkView original entry

Spectre-type attacks are a real threat to secure systems because a successful attack can undermine even an application that would be traditionally considered safe.
SpecFuzz is the first tool that enables fuzzing for such vulnerabilities.

The key is a novel concept of speculation exposure:
The program is instrumented to simulate speculative execution in software by forcefully executing the code paths that could be triggered due to mispredictions, thereby making the speculative memory accesses visible to integrity checkers (e.g., AddressSanitizer).
Combined with the conventional fuzzing techniques, speculation exposure enables more precise identification of potential vulnerabilities compared to state-of-the-art static analyzers.
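A language-neutral sketch of the idea, in Python rather than the compiler instrumentation SpecFuzz actually uses: the "mispredicted" path is executed deliberately so a sanitizer-style checker can observe the out-of-bounds access, and then its effects are discarded.

```python
# Conceptual stand-in for "speculation exposure" (SpecFuzz itself instruments
# compiled code): execute the path a branch misprediction would take, let a
# checker record the out-of-bounds access, then roll back.

def speculative_read(data, i, oob_log):
    if not (i < len(data)):        # the bounds check fails...
        oob_log.append(i)          # ...but the checker sees the access the
        return None                # mispredicted path would have made
    return data[i]                 # architecturally visible path

data = [10, 20, 30]
oob = []
assert speculative_read(data, 1, oob) == 20 and oob == []
speculative_read(data, 7, oob)     # force the "mispredicted" path
assert oob == [7]                  # the latent gadget is surfaced
```

A fuzzer then only has to find inputs that reach such flagged accesses, which is exactly the conventional fuzzing loop the abstract mentions.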



Technical report: https://arxiv.org/abs/1905.10311

Back

Falco Internals 101 : Syscalls processing for security analysis

Home

Speaker Lorenzo Fontana
RoomUA2.114 (Baudoux)
TrackSecurity
Time12:30 - 12:55
Event linkView original entry

Linux syscalls can be used as an entry point for security analysis on Linux. However, reading and processing every system call in userspace creates a unique set of challenges.
In this talk we are going to see exactly what those challenges are and how we solved them in the Falco project.

One of the ways to have broad visibility into our systems, when doing security analysis is to go and ask our questions directly to the Linux kernel.
For this purpose, at a very foundational level, in every Linux system we find the syscalls interface. It’s certain that every user space process goes through this part of the kernel.



Starting with this assumption, the immediate conclusion is that we can just go and ask the syscalls: “Yo syscalls! What’s happening in my system?”. While this reasoning might seem very simple, reading and processing every single syscall in userspace results in a set of challenges unique to this domain.






Part of the solution for Falco is to have two alternative drivers, a Kernel module and an eBPF driver talking to userspace using a Ring buffer but you have to come to this talk to hear the rest!
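The producer/consumer arrangement can be sketched as follows (a toy Python stand-in for the driver-to-userspace ring buffer; the real one lives in kernel memory, and the drop policy is what keeps the kernel from ever blocking):

```python
from collections import deque

class EventRing:
    """Toy fixed-size event channel: the producer (kernel driver side) must
    never block, so when the consumer lags, events are dropped and counted
    instead of queued without bound."""

    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def push(self, event):
        """Producer side: drop rather than block when the ring is full."""
        if len(self.buf) >= self.capacity:
            self.dropped += 1
        else:
            self.buf.append(event)

    def drain(self):
        """Consumer side: take everything currently buffered."""
        events, self.buf = list(self.buf), deque()
        return events

ring = EventRing(capacity=4)
for syscall_event in range(6):      # producer outpaces the consumer
    ring.push(syscall_event)
assert ring.drain() == [0, 1, 2, 3]
assert ring.dropped == 2            # surfaced to userspace as drop counters
```

The drop counter is the visible symptom of the "userspace can't keep up" challenge the abstract alludes to.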

Back

Docker Security considerations & Incident Analysis

Home

Speaker John Lionis
RoomUA2.114 (Baudoux)
TrackSecurity
Time13:00 - 13:25
Event linkView original entry

In this presentation we consider the increased use of Docker in corporate environments.
Docker has found widespread use during the past years, mostly because it is
very easy to use, economical with respect to the resources used, and fast and easy
to deploy when compared with a full-blown virtual machine. More and more servers
are being operated as Docker hosts on which micro-services run in containers.
From a security point of view, two aspects arise in the context of this talk and
its inherent time limitations. The first is the already much-discussed question:
“is it secure?”. The second is the less analyzed aspect of incident analysis
and the changes introduced with respect to known methods and evidence. In this
presentation we will briefly outline some security considerations about Docker and
the average user, and then examine how Docker changes the workflow of incident
analysis and forensics in its environment.

Back

Incrementality and deck functions

Home

Speaker Gilles Van Assche
RoomUA2.114 (Baudoux)
TrackSecurity
Time13:30 - 13:55
Event linkView original entry

Protocols in symmetric cryptography are often built from block ciphers, with a fixed input and output size, while variable sizes are handled through their modes of use. Incrementality, namely, the ability to efficiently compute the output for increasing inputs, or to request longer outputs, is often a property of the implementation rather than an explicit feature of a mode.
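Python's standard library happens to ship an object with both kinds of incrementality, the SHAKE extendable-output functions, which makes the property easy to demonstrate (SHAKE is not a deck function, but it illustrates the interface the talk describes):

```python
import hashlib

# Input incrementality: absorb a growing input without re-processing it.
h = hashlib.shake_128()
h.update(b"session part 1")
h.update(b" | session part 2")

one_shot = hashlib.shake_128(b"session part 1 | session part 2")
assert h.digest(32) == one_shot.digest(32)

# Output incrementality: a longer output extends a shorter one, so more
# output bytes can be requested later without recomputing from scratch.
assert one_shot.digest(64)[:32] == one_shot.digest(32)
```

A deck function adds keying and the "doubly-extendable" input sequence on top of this, but the two assertions above are exactly the efficiency properties the definition makes explicit.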



A doubly-extendable cryptographic keyed (or deck) function is a new kind of object that makes incrementality an integral part of its definition. Writing modes for various applications, such as authenticated encryption of a network channel or disk encryption with a wide block cipher, on top of a deck function turns out to be a simple exercise and leads to less error-prone implementations than on top of a block cipher. We illustrate this with the session-supporting authenticated encryption modes SANE and SANSE. (Sessions naturally protect a continuous flow of messages or a client-server dialog.)



While a deck function can be constructed from existing primitives, like a block cipher, we show two more natural ways of making a deck function in practice.




Back

How Transparent Data Encryption is built in MySQL and Percona Server?

Home

Speaker Robert Golebiowski
RoomUA2.114 (Baudoux)
TrackSecurity
Time14:00 - 14:25
Event linkView original entry

How is Transparent Data Encryption built in MySQL and Percona Server?
- Keyrings: what are they used for? What is the difference between using a server back-end (keyring_vault) versus a file back-end (keyring_file)? How does it affect server startup, and why? Why is per-server separation needed in Vault Server?
- How does Master Key encryption work? How is it built at the page level? How do we know which key to fetch to decrypt a table? How do we know that the key used is the correct one? How do we make sure that we can decrypt a table when we need it?
- What crypto algorithms are used?
- How does Master Key rotation work? Why is it needed?
- What is KEYRING encryption, and what are encryption threads?
- How does binlog encryption work in 5.7, and how does it work in 8.0?
- How does undo log/redo log encryption work?

In this presentation, we'll take a deep dive into the world of transparent data encryption for open source databases, looking at how it is implemented in MySQL and Percona Server for MySQL. By the end of the talk, you'll have a better understanding of transparent data encryption and will be aware of things to take into account when interacting with encrypted databases in your applications.
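The reason Master Key rotation is cheap is the two-tier key hierarchy: data pages are encrypted with per-tablespace keys, and only those small keys are wrapped by the Master Key. A toy Python sketch of the scheme (XOR stands in for AES here; illustrative only, not real cryptography):

```python
import secrets

def xor(key, blob):
    """Toy cipher stand-in for AES: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

master_key = secrets.token_bytes(16)   # held in the keyring (file or Vault)
table_key = secrets.token_bytes(16)    # per-tablespace key

page = xor(table_key, b"row data on an InnoDB page")   # data at rest
wrapped = xor(master_key, table_key)                   # stored in the header

# Master Key rotation: re-wrap the table key; data pages stay untouched.
new_master = secrets.token_bytes(16)
wrapped = xor(new_master, xor(master_key, wrapped))

# Decryption still works: unwrap the table key, then decrypt the page.
assert xor(xor(new_master, wrapped), page) == b"row data on an InnoDB page"
```

This is why rotating the Master Key is fast even on large databases: only the small wrapped keys are re-encrypted, never the pages themselves.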

Back

Secure logging with syslog-ng

Home

Speaker Stephan Marwedel
RoomUA2.114 (Baudoux)
TrackSecurity
Time14:30 - 14:55
Event linkView original entry

The design, implementation, and configuration of the secure logging service. Its aim is to provide tamper-evident logging, i.e., to adequately protect the log records of an information system against tampering and to provide a sensor indicating attack attempts. The secure logging service achieves this by authentically encrypting each log record with an individual cryptographic key used only once, and by protecting the integrity of the whole log archive with a cipher-based message authentication
code. Each attempt to tamper with either an individual log record or the log archive itself will be immediately detected during log archive verification. Therefore, an attacker can no longer tamper with log records without being detected, which greatly enhances the use of log archives in forensic investigations.

Log records are normally produced by any information system in order to perform monitoring during normal operations and for troubleshooting in case of technical problems. Log information is equally important for retaining the security of an information system, as security-relevant events are recorded and can later be monitored for unusual patterns which may indicate an attack attempt. Examples include log-on and log-off, startup and shutdown, network service access, network filter rule application, storage access, etc. Log records may also contain valuable information about a system that a potential attacker intends to compromise. If attackers are able to successfully compromise a system, they are also able to tamper with log records, potentially hiding their traces. This makes forensic analysis extremely difficult, as no reliable data source about system behavior immediately before the attack is available to a security analyst performing incident investigation. Therefore, log information should be appropriately protected. The aim of the secure logging service is to provide tamper-evident logging, i.e., to adequately protect the log records of an information system and to provide a sensor indicating attack attempts. The secure logging service achieves this by authentically encrypting each log record with an individual cryptographic key used only once, and by protecting the integrity of the whole log archive with a cryptographic authentication code. Each attempt to tamper with either an individual log record or the log archive itself will be immediately detected during log archive verification. Therefore, an attacker can no longer tamper with log records without being detected.

Most information systems rely on standards in order to provide logging services. One of the most widely adopted standards is the syslog protocol, which is specified in RFC 5424. Many implementations of this protocol are available. A popular extensible implementation with additional features is syslog-ng, an enhanced logging daemon with advanced features for input and output. Furthermore, it features capabilities for log message filtering, rewriting, and routing, and can be used as a drop-in replacement for existing log daemons on UNIX systems. The implementation of the secure logging service providing tamper evidence and confidentiality of system logs, based on the template mechanism of syslog-ng, is presented together with an application example.

Back

Protecting plaintext secrets in configuration files

Home

Speaker Moisés Guimarães
RoomUA2.114 (Baudoux)
TrackSecurity
Time15:00 - 15:25
Event linkView original entry

Applications and services rely on configuration data in order to be customized. We will talk about how to keep secrets in a safer place than plaintext configuration files.

The configparser module is Python's standard configuration file parser, and many projects rely on it to achieve easy configuration with plaintext files. OpenStack Common Libraries (Oslo) has an alternative called oslo.config, with additional sources of input like command line arguments or environment variables. With the addition of a feature called source drivers last year, we are now able to increase the security of configuration values by storing them in a safer place.
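The layering idea can be sketched with the standard library alone (this is not the oslo.config API, just an illustration of resolving an option from a more trusted source before falling back to the plaintext file):

```python
import configparser
import os

# A "source driver" in miniature: an environment variable stands in for a
# secret manager and takes precedence over the plaintext config file.
# The APP_ prefix and option naming scheme are assumptions for this sketch.

def get_option(parser, section, option, env_prefix="APP_"):
    env_name = f"{env_prefix}{section}_{option}".upper()
    if env_name in os.environ:               # more trusted source wins
        return os.environ[env_name]
    return parser.get(section, option)       # plaintext fallback

cfg = configparser.ConfigParser()
cfg.read_string("[database]\npassword = insecure-default\n")

assert get_option(cfg, "database", "password") == "insecure-default"
os.environ["APP_DATABASE_PASSWORD"] = "from-the-vault"
assert get_option(cfg, "database", "password") == "from-the-vault"
```

oslo.config generalizes this chain into pluggable drivers, which is what allows a backend like HashiCorp Vault (via Castellan) to serve values transparently.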



This talk focuses on the new source driver that integrates oslo.config and Castellan, another Oslo module specialized in talking to secret managers, and shows how we can store our sensitive configuration data using HashiCorp Vault.

Back

Application Whitelisting in Linux Environment

Home

Speaker Radovan Sroka
RoomUA2.114 (Baudoux)
TrackSecurity
Time15:30 - 15:55
Event linkView original entry

Are you a sysadmin and feeling paranoid? Let's promote security hardening to another level.
Perhaps, with the concept of Application Whitelisting you will be able to sleep again.

In this session we are going to explain the Application Whitelisting idea and its implementation, the benefits from a security point of view, and how it differs from competitors.
We are going to show how to create a new set of rules based on the distribution defaults for given examples.
As a result, attendees should be able to set up the Application Whitelisting framework on their own server or workstation.



This presentation is based on Red Hat/Fedora Linux environment.

Back

seccomp — Your Next Layer of Defense

Home

Speaker Philipp Krenn
RoomUA2.114 (Baudoux)
TrackSecurity
Time16:00 - 16:25
Event linkView original entry

Why should you allow all possible system calls from your application when you know that you only need some? If you have ever wondered the same, then this is the right talk for you. We are covering:






Because your security approach can always use an additional layer of protection.
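One accessible way to apply this idea without writing BPF filters by hand is systemd's seccomp integration; a hypothetical unit drop-in might look like this (the unit name is an assumption, the directives are standard systemd ones):

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/myapp.service.d/seccomp.conf
[Service]
# Allow only the syscalls a typical long-running service needs...
SystemCallFilter=@system-service
# ...and additionally forbid the privileged group within that set.
SystemCallFilter=~@privileged
# Have blocked calls fail with EPERM instead of killing the process.
SystemCallErrorNumber=EPERM
```

Multiple SystemCallFilter= lines are merged by systemd, so an allowlist can be narrowed with `~`-prefixed denylists.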

Back

Kernel Runtime Security Instrumentation

Home

Speaker Florent Revest
RoomUA2.114 (Baudoux)
TrackSecurity
Time16:30 - 16:55
Event linkView original entry

KRSI (Kernel Runtime Security Instrumentation) is an ongoing effort at Google to upstream an LSM (Linux Security Module) instrumentable using eBPF (extended Berkeley Packet Filter) to the Linux kernel.



KRSI allows system owners to dynamically attach eBPF programs to security hooks and write MAC and audit policies without having to reboot or patch the kernel, thereby enabling a new class of system security and auditing software.



This talk presents the main concepts behind KRSI: it introduces the technologies leveraged and presents the API exposed to users.

Back

Using SELinux with container runtimes

Home

Speaker Lukas Vrabec
RoomUA2.114 (Baudoux)
TrackSecurity
Time17:00 - 17:25
Event linkView original entry

Lukas Vrabec built a new standalone tool, udica, for generating SELinux policy profiles for containers by automatically inspecting them. We will focus on why udica is needed in the container world and how it can make SELinux and containers work better together. We will show real examples where SELinux separation for containers had to be turned off because the generic SELinux type container_t was too tight. With a tool like udica, users and developers can easily customize the policy with limited SELinux policy-writing skills. Come and see how easily you too can create a custom SELinux policy for your containers!

This talk will explain how SELinux works with containers. We will show how to enable/disable SELinux using multiple different container runtimes and define the default types. One issue with these types is that they are tough to customize. The two default types for running containers are container_t, a fully confined domain which eliminates any use of the host files unless they are relabeled, and spc_t, the type containers run with when SELinux is disabled for container separation (--privileged mode). As an example, if you had a container that you wanted to gather logs from /var/log on the host and send them to a centralized server, you would have to disable SELinux separation.



Writing custom policy for each container that needed additional access would be very difficult and require a container policy writer.

Back

The hairy issue of e2e encryption in instant messaging

Home

Speaker Winfried Tilanus
RoomUA2.114 (Baudoux)
TrackSecurity
Time17:30 - 17:55
Event linkView original entry

End-to-end encryption is often regarded as the holy grail of security. But when you start implementing it, it soon becomes a security hell. Does it really protect against the threats it should protect against? And watch out for the pitfalls when implementing it: almost everybody fails there!

Let's start with the conclusion of this talk: after twenty years of designing and analyzing high-security instant messaging systems, I have come to the conclusion that end-to-end encryption (e2ee) in instant messaging is snake oil. It creates a false sense of security.



First of all, the threat model underneath e2ee has fundamental flaws: it doesn't deliver protection against the threats commonly named to justify it. And if that isn't enough, there are a lot of issues that make a proper implementation very hard to get right. To name a few: key verification, one-to-many messages, store-and-forward, and archiving.



But let's not end this talk all in black. Though we aren't there yet, there are some developments that may solve these issues. I will name those too.

Back

What you most likely did not know about sudo…

Home

Speaker Peter Czanik
RoomUA2.114 (Baudoux)
TrackSecurity
Time18:00 - 18:25
Event linkView original entry

Everybody knows sudo, right? Sudo allows a system administrator to give certain users the ability to run some commands as root, while logging the executed commands and their arguments. It is installed by default on almost all Linux systems, and is available for most commercial UNIX systems. Still, even system administrators often only know it is the “prefix” to use before entering a command requiring root privileges. Learn how much more this simple looking tool can do!

Most of the time, the default configuration allows a group of users to run any command:
%wheel ALL=(ALL) ALL
It’s a good first step, better than using the root account directly: this way you can trace who ran which commands on the system. But there are many more possibilities when configuring sudo that make your system more secure.
Life is simple when you have to give a single user access to a single command. But as soon as you have multiple users with the same access rights, the configuration is not just shorter but also easier to maintain if you use aliases.
For added security, you can add a hash of binaries to sudo. This way, if the binary changes for any reason, such as being modified through a successful exploit, you can prevent it from being used.
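A hypothetical sudoers fragment combining both ideas, aliases and a binary hash (user names, commands and the digest value are placeholders, not from the talk):

```
# Edit with visudo. One alias-based rule covers the whole team:
User_Alias  ADMINS   = alice, bob
Cmnd_Alias  SERVICES = /usr/bin/systemctl restart nginx, \
                       /usr/bin/systemctl restart postgresql
ADMINS ALL = (root) SERVICES

# Pin a command to a digest: if the binary changes, sudo refuses to run it.
# (The digest below is a placeholder, not a real hash.)
alice ALL = (root) sha256:0123456789abcdef... /usr/sbin/service
```

sudoers accepts sha224/sha256/sha384/sha512 digest prefixes in front of a command path.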



Using sudo does not make much sense without proper logging and alerting. There are three major possibilities:
- syslog: all events are logged to syslog. For additional security, collect sudo logs centrally, so a malicious user cannot delete them easily.
- e-mail: sudo can send e-mail alerts on different kinds of failures
- debug: in depth logging of subsystems, mostly useful for developers
Session recording is a fourth possibility. The terminal output can be saved in a local file and played back. You can play back what happened, even if the user started up an interactive shell.



Instead of maintaining the sudoers file on each of your systems, you can use LDAP to configure sudo. It has some differences compared to a sudoers file, but also many advantages:
- local users cannot manipulate the rules,
- it is easier to maintain,
- changes go live immediately.



Starting with version 1.8, sudo has a plugin-based architecture. You can replace or extend sudo functionality using plugins. This way users can keep starting applications the usual way using sudo, but have a different configuration or policy engine behind the sudo command. There are both open source and commercial plugins available. For example the sudo_pair plugin – developed in Rust – enables monitoring and makes it possible to require interactive approval of sessions.



As you can see, sudo has several lesser-known features that can make monitoring and access management easier for large organizations.

Back

Getting started with quantum software development

Home

Speaker Tomas Babej
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time10:30 - 11:00
Event linkView original entry

Back

Quantum machine learning with PennyLane

Home

Speaker Joshua Izaac
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time11:05 - 11:40
Event linkView original entry

Back

Quantum computing hardware and control systems

Home

Speaker Felix Tripier
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time11:50 - 12:25
Event linkView original entry

Back

The role of open source in building quantum computing ecosystem from scratch

Home

Speaker Hakob Avetisyan
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time12:35 - 13:10
Event linkView original entry

Back

Quantum Advantage and Quantum Computing in the Real World

Home

Speaker Mark Mattingley-Scott
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time13:20 - 13:55
Event linkView original entry

Back

Quantum circuit optimisation, verification, and simulation with PyZX

Home

Speaker John van de Wetering
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time14:05 - 14:40
Event linkView original entry

Back

SimulaQron - a simulator for developing quantum internet software

Home

Speaker Axel Dahlberg
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time14:50 - 15:25
Event linkView original entry

Back

AMENDMENT Quantum Game with Photons: Tensors in TypeScript, Visualized

Home

Speaker Piotr Migdal
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time15:30 - 15:55
Event linkView original entry

This talk is a late addition that replaces "The Role of Open Source Frameworks in Quantum Computing and Technologies" by Jack Hidary.

Back

AMENDMENT Simulating noisy quantum devices with QuTiP

Home

Speaker Boxi Li
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time16:00 - 16:25
Event linkView original entry

Simulation is a powerful tool for understanding quantum phenomena. The simulation of quantum circuits is usually achieved by gate matrix products, as in Qiskit and ProjectQ. Thanks to its Lindblad master equation solver, QuTiP is developing a simulator that simulates quantum circuits at the level of the real quantum evolution happening in your quantum device. This framework offers pulse-level control and a flexible way of simulating noise. On top of it, we are going to define different back-ends for different physical systems such as cavity QED, ion traps or superconducting systems. The simulator can help to improve your quantum device, study the effect of noise, and create noise-mitigation schemes.
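What a master-equation solver computes can be sketched without QuTiP: the snippet below Euler-integrates the Lindblad equation for single-qubit amplitude damping, the simplest noise channel such a simulator would model (pure-Python 2x2 matrices; illustrative only).

```python
import math

# Lindblad equation for amplitude damping with rate g:
#   d rho/dt = g * ( s- rho s+  -  1/2 {s+ s-, rho} )
# The excited-state population should decay as exp(-g*t).

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b, scale=1.0):
    """Return a + scale*b, elementwise."""
    return [[a[i][j] + scale * b[i][j] for j in range(2)] for i in range(2)]

s_minus = [[0, 1], [0, 0]]          # lowering operator |0><1|
s_plus = [[0, 0], [1, 0]]           # raising operator  |1><0|
n_op = mul(s_plus, s_minus)         # number operator   |1><1|

rho = [[0.0, 0.0], [0.0, 1.0]]      # start in the excited state |1>
g, dt, steps = 1.0, 1e-3, 1000      # integrate to t = 1

for _ in range(steps):
    jump = mul(mul(s_minus, rho), s_plus)
    anti = add(mul(n_op, rho), mul(rho, n_op))
    drho = add(jump, anti, scale=-0.5)      # jump - (1/2) * anticommutator
    rho = add(rho, drho, scale=g * dt)      # Euler step

# Population of |1> at t = 1 should be close to exp(-1) ~ 0.368.
assert abs(rho[1][1] - math.exp(-1.0)) < 0.01
```

QuTiP's solver does the same kind of integration with arbitrary Hamiltonians, collapse operators and pulse-level controls, which is what makes device-level noise studies possible.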

Back

AMENDMENT bloqit

Home

Speaker Thomas Storwick
Kelly Zheng
RoomUA2.118 (Henriot)
TrackQuantum Computing
Time16:30 - 16:55
Event linkView original entry

This talk is a late addition that replaces "Computing with TensorNetwork & QML Tools" by Stefan Leichenauer

Back

Quantum classifiers, robust data encodings, and software to implement them

Home

Speaker: Ryan LaRose
Room: UA2.118 (Henriot)
Track: Quantum Computing
Time: 17:05 - 17:40
Event link: View original entry

Back

Quantum computer brands: connecting apples and oranges

Home

Speaker: Petar Korponaić
Room: UA2.118 (Henriot)
Track: Quantum Computing
Time: 17:50 - 18:25
Event link: View original entry

Back

Quantum Open Source Foundation

Home

Speaker: Mark Fingerhuth
Room: UA2.118 (Henriot)
Track: Quantum Computing
Time: 18:30 - 19:00
Event link: View original entry

Back

Welcome to the Legal & Policy Issues DevRoom

Home

Speaker: Tom Marble
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 10:30 - 10:35
Event link: View original entry

Welcome to the Legal & Policy Issues DevRoom, including an overview of how the new Collaboration and Debate sessions will work.

Back

Technology challenges for privacy: the case of decentralized social media

Home

Speaker: Cristina DeLisle
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 10:35 - 11:00
Event link: View original entry

As decentralized social media gathers more users, the privacy-by-design-and-default principles of the GDPR align well with the design model it proposes. This talk will tackle the main advantages and challenges of this approach, from the perspective of data protection legislation and privacy architectural strategies.

Social media platforms have been a central feature of our generation, and as we grow toward understanding their power and taking part in their evolution, we realize the challenges they impose. One of these is how to protect users' personal data and ensure that processing is done in accordance with legislation such as the GDPR.
Decentralized social media has developed as a space where personal data ownership is a priority, emerging as an alternative to centralized platforms. Not coincidentally, these platforms are mostly open source software, as transparency and giving users control of their data go hand in hand with this ambition. Blockchain-based social media networks and projects built on top of the ActivityPub protocol are some of the most popular examples of alternatives that have gathered significant numbers of users, or data subjects under the GDPR.
One of the main architectural strategies for building software that is private by design and by default is data separation: the processing of personal data should be performed in a distributed manner whenever possible. As the GDPR lists privacy by design and by default as core principles, decentralized social networking offers a significant advantage over centralized solutions. One heuristic to take from this: if the future is privacy-oriented, then social media will be decentralized.
This talk will offer an analysis of the main benefits and challenges that decentralized social media poses, from the points of view of personal data protection legislation and privacy design patterns for software architecture.

Back

DEBATE: Should FOSS licenses be enforced at all?

Home

Speakers: Pamela Chestek, Giovanni Battista Gallus, Marc Jones, McCoy Smith
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 11:00 - 11:50
Event link: View original entry

In a perfect world, lawyers (and the entire legal system) would not be necessary. And in a perfect FOSS world, everyone would respect each and every provision of every license. The reality is, however, very different, and enforcement may be a necessary evil. This need does not mean we have to open the gates to be flooded by "copyleft trolls", but rather to establish a sound enforcement policy, in order to unleash the lawyers only for the most blatant and repeated violations.

Affirmative position: FOSS licenses should not be enforced.




  1. First Affirmative Constructive (1AC) = 7 minutes
    a. Cross-examination of First Affirmative by Second Negative = 3 minutes

  2. First Negative Constructive (1NC) = 7 minutes
    a. Cross-examination of First Negative by First Affirmative = 3 minutes

  3. Second Affirmative Constructive (2AC) = 7 minutes
    a. Cross-examination of Second Affirmative by First Negative = 3 minutes

  4. Second Negative Constructive (2NC) = 7 minutes
    a. Cross-examination of Second Negative by Second Affirmative = 3 minutes

  5. First Negative Rebuttal (1NR) = 3 minutes

  6. First Affirmative Rebuttal (1AR) = 3 minutes

  7. Second Negative Rebuttal (2NR) = 3 minutes

  8. Second Affirmative Rebuttal (2AR) = 3 minutes


Back

DEBATE: Does Careful Inventory of Licensing Bill of Materials Have Real Impact on FOSS License Compliance?

Home

Speakers: Bradley M. Kuhn, Carol Smith, Jeff McAffer
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 12:00 - 12:50
Event link: View original entry

Projects today often have thousands of FOSS dependencies. Since risk
flows downstream in the supply chain, projects inherit and pass on the
risks of all their dependencies. In response, licensing bill-of-materials
tools often seek to push well-formed licensing inventory
data upstream in an effort to ease downstream compliance
challenges. At the same time, there has been a stark increase in
license violations, especially, though not exclusively, of copyleft
licenses. Is this approach to improving compliance working?

Affirmative position: Compliance at scale through tool-driven assembly of bills of materials is essential for FOSS




  1. First Affirmative Constructive (1AC) = 7 minutes
    a. Cross-examination of First Affirmative by Second Negative = 3 minutes

  2. First Negative Constructive (1NC) = 7 minutes
    a. Cross-examination of First Negative by First Affirmative = 3 minutes

  3. Second Affirmative Constructive (2AC) = 7 minutes
    a. Cross-examination of Second Affirmative by First Negative = 3 minutes

  4. Second Negative Constructive (2NC) = 7 minutes
    a. Cross-examination of Second Negative by Second Affirmative = 3 minutes

  5. First Negative Rebuttal (1NR) = 3 minutes

  6. First Affirmative Rebuttal (1AR) = 3 minutes

  7. Second Negative Rebuttal (2NR) = 3 minutes

  8. Second Affirmative Rebuttal (2AR) = 3 minutes


Back

COLLAB: How can we give users standing in free/open software/hardware?

Home

Speaker: Italo Vignoli
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 13:00 - 13:25
Event link: View original entry

How can we give users standing in free/open software/hardware?
How can we motivate end users to care about FOSS if
they can't express their preference? What tools do we have beyond
the "court of public opinion"? Can we invent a NEW legal hack?

Back

COLLAB: The optics of the policy

Home

Speaker: Nathan Willis
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 13:30 - 13:55
Event link: View original entry

Photography policies have begun to appear at free-software events in recent years. These policies typically seek to address personal privacy concerns for event attendees, but they sometimes conflict with the event's desire to record talks, Q&A periods, and social gatherings in public spaces. If not drafted with care, photo policies also run the risk of creating ambiguities for journalists, other attendees making personal photo or video recordings, and members of event-hosting organizations or the public. This session will be an open discussion about photo and video-recording policies, online tagging policies, and related personal-privacy policies, with the goal of clarifying the requirements, needs, and intents of all stakeholders in the FOSS community, so that future event organizers have a solid framework from which to draft clear policies that fit their situations.

Free-software events, like free-software projects, have to maintain a delicate balance between openness as a broad principle and privacy as an individual concern. In the past few years, more and more free-software events and community projects have developed "photo policies" that are intended to define when and how individuals and groups may be captured in media from the event, and when and how those same people may be identified in that media. But a haphazard approach to policy writing can create unintentional ambiguities, such as how to define when an individual is the "subject" of a photograph or merely in the background. And free-software communities must also take care to write policies that do not come into conflict with local law, especially when events take place in public spaces. Finally, event organizers need to ensure that their photo policies, real-name policies, press policies, and session-recording consent policies work in concert with one another, not in conflict.



This session will be a broad discussion of photography policies and how they interact with other policy concerns. The intent will be to enumerate the concerns of all stakeholders, identify potential areas of confusion, note best practices, and — most importantly — establish resources and spaces for further discussion for project and community members creating photo policies in the future.

Back

DEBATE: The 4 Freedoms and OSD are outdated and no longer relevant in 2020

Home

Speakers: Neil McGovern, Andrew Katz, Matt Jarvis, Luis Villa, Frank Karlitschek, Amanda Brock
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 14:00 - 14:50
Event link: View original entry

Are the FSF's 4 Freedoms and the OSI's Open Source Definition out
of date in 2020, and should they be replaced?

Affirmative position: OSD/FSD is now irrelevant




  1. First Affirmative Constructive (1AC) = 7 minutes
    a. Cross-examination of First Affirmative by Second Negative = 3 minutes

  2. First Negative Constructive (1NC) = 7 minutes
    a. Cross-examination of First Negative by First Affirmative = 3 minutes

  3. Second Affirmative Constructive (2AC) = 7 minutes
    a. Cross-examination of Second Affirmative by First Negative = 3 minutes

  4. Second Negative Constructive (2NC) = 7 minutes
    a. Cross-examination of Second Negative by Second Affirmative = 3 minutes

  5. First Negative Rebuttal (1NR) = 3 minutes

  6. First Affirmative Rebuttal (1AR) = 3 minutes

  7. Second Negative Rebuttal (2NR) = 3 minutes

  8. Second Affirmative Rebuttal (2AR) = 3 minutes


Back

DEBATE: Should licenses be designed to advance general social goals?

Home

Speakers: John Sullivan, Molly de Blanc, James Vasile, Josh Simmons, Dashiell Renaud
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 15:00 - 15:50
Event link: View original entry

We have seen several licenses proposed as "open source" that
carry some obligation or restriction related to ethics or
other social goals. Is this a good direction for FOSS license drafting?

Affirmative position: FOSS licenses should advance social goals




  1. First Affirmative Constructive (1AC) = 7 minutes
    a. Cross-examination of First Affirmative by Second Negative = 3 minutes

  2. First Negative Constructive (1NC) = 7 minutes
    a. Cross-examination of First Negative by First Affirmative = 3 minutes

  3. Second Affirmative Constructive (2AC) = 7 minutes
    a. Cross-examination of Second Affirmative by First Negative = 3 minutes

  4. Second Negative Constructive (2NC) = 7 minutes
    a. Cross-examination of Second Negative by Second Affirmative = 3 minutes

  5. First Negative Rebuttal (1NR) = 3 minutes

  6. First Affirmative Rebuttal (1AR) = 3 minutes

  7. Second Negative Rebuttal (2NR) = 3 minutes

  8. Second Affirmative Rebuttal (2AR) = 3 minutes


Back

DEBATE: Does FOSS need sustainability?

Home

Speakers: Marc Jones, Philippe Ombredanne, Luis Villa, Mehdi Medjaoui
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 16:00 - 16:50
Event link: View original entry

Several prominent FOSS projects have changed their FOSS licenses to
alternate licenses that make software available, but with additional
restrictions intended to help financially sustain FOSS development and
combat "strip mining" by software-as-a-service providers. Additionally,
several related organizations have recently jumped into the role of
helping sustain open source by providing (for a fee) funding conduits,
fundraising services, or other mechanisms to route money to maintainers.

Affirmative position: FOSS benefits from sustainability efforts




  1. First Affirmative Constructive (1AC) = 7 minutes
    a. Cross-examination of First Affirmative by Second Negative = 3 minutes

  2. First Negative Constructive (1NC) = 7 minutes
    a. Cross-examination of First Negative by First Affirmative = 3 minutes

  3. Second Affirmative Constructive (2AC) = 7 minutes
    a. Cross-examination of Second Affirmative by First Negative = 3 minutes

  4. Second Negative Constructive (2NC) = 7 minutes
    a. Cross-examination of Second Negative by Second Affirmative = 3 minutes

  5. First Negative Rebuttal (1NR) = 3 minutes

  6. First Affirmative Rebuttal (1AR) = 3 minutes

  7. Second Negative Rebuttal (2NR) = 3 minutes

  8. Second Affirmative Rebuttal (2AR) = 3 minutes


Back

Oracle v. Google: What are the implications for FOSS?

Home

Speaker: Van Lindberg
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 17:00 - 17:25
Event link: View original entry

All the merits briefs for Oracle v. Google will be filed a couple of weeks before FOSDEM 2020. This will be a rundown of the positions argued by various groups: how are the parties positioning the questions presented? What are the various amici arguing? Are there any positions that will be particularly impactful for FOSS groups and users?

On Friday, Nov. 15, the U.S. Supreme Court agreed to hear Oracle v. Google. It is hard to overstate how impactful this decision will be on FOSS. For the first time in a generation, the Supreme Court will be evaluating how copyright and software interact - and they will be discussing it in the context of GPL-licensed Java.



Right before FOSDEM, all the briefs by all parties will be due. As we sit in Brussels, the court clerks will be reading the various briefs and creating a "bench memo" for each justice, summarizing the arguments being advanced by both Oracle and Google, as well as the points raised by different amici.



This presentation will be a verbal "bench memo" for those in the FOSS community. Rather than advance a particular view, we will try to understand the scope of issues being argued, and how they may affect Free and Open Source Software in the United States - and worldwide.

Back

Legal Organizer's Panel

Home

Speakers: Tom Marble, Bradley M. Kuhn, Karen Sandler, Richard Fontana
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 17:30 - 18:20
Event link: View original entry

The Legal & Policy Issues DevRoom Organizers gather to reflect on our DevRoom talks and the FOSS issues of the day.

Back

Improving the culture of automated testing in FOSS

Home

Speaker: Alexandros Frantzis
Room: UB2.147
Track: Testing and Automation
Time: 10:30 - 11:00
Event link: View original entry

In this talk we will explore some of the FOSS specific mentalities and
practices that may discourage adoption of comprehensive automated testing, and
present advice for promoting and sustaining automated testing in FOSS projects.

Automated testing is on the rise in the FOSS world, but there is still ample
room for improvement when it comes to sufficiently comprehensive automated
tests. The test suites of many FOSS projects leave a lot to be desired,
a result that's often affected by useful FOSS practices that are
taken too far. Identifying such practices, like placing excessive trust in code
reviews because "given enough eyeballs, all bugs are shallow", or leaving tests
for later in the spirit of "release often, release early", is a first step in
having a discussion that will hopefully convince more projects to embrace
automated testing, and improve the quality of FOSS overall.

Back

Welcome to KernelCI

Home

Speaker: Guillaume Tucker
Room: UB2.147
Track: Testing and Automation
Time: 11:05 - 11:35
Event link: View original entry

KernelCI is a project dedicated to testing the upstream Linux kernel.
Originally created by Linaro in 2014, it started a new chapter by
becoming a Linux Foundation project in October 2019. Its future looks
bright, with plenty of opportunities for new contributors to join.

The chosen one



The upstream kernel testing landscape is pretty wide, rich and
diverse, in the same way that the Linux kernel is. But as there is
only one upstream kernel, it became clear that there should also be
one main test system associated with it. KernelCI was chosen to
fulfil this role, being rather neutral, versatile and based on a
distributed architecture.



A welcoming place



While the project now has a governing board via the Linux Foundation
membership, its involvement with the kernel community is only getting
stronger. It is of utmost importance to keep the roadmap aligned with
expectations from maintainers and developers to preserve the integrity
and overall purpose of project. In fact, it now needs to become an
easy tool to use by anyone who wants to add tests for their subsystem
or their hardware and for anyone to reproduce those tests locally.



An exciting year ahead



This is a new beginning for KernelCI, with many of its prior
limitations now being removed thanks to the framework provided by the
Linux Foundation. Contributors to the code, tests, hardware labs and
new project members will all have a great influence by joining the
project at this very special point in time. Now is the time to come
and help shape it as a successful project for the years to come.



Slides: gtucker-kernelci-fosdem-2020.pdf

Back

Abusing GitLab CI to Test Kernel Patches

Home

Speaker: Nikolai Kondrashov
Room: UB2.147
Track: Testing and Automation
Time: 11:40 - 12:20
Event link: View original entry

See how Red Hat's CKI project uses GitLab CI to test kernel patches as soon as they're posted to mailing lists.

Red Hat's CKI project uses GitLab CI to organize and track its pipelines, lint, patch, and build Linux kernels, and oversee testing. It also uses a number of supporting systems to discover kernel patches and commits, maintain hardware inventory, provision hardware and VMs, run tests, and finally record and report results.



See which tricks the project pulls to tie all these parts together and test patches posted to several mailing lists, commits to 15+ git repos, builds done by other build systems, as well as oddities like the stable kernel patch queue and parts of its own software stack.



Making such extensive use of a CI system inevitably uncovers its limitations, and a list of these will also be presented along with some possible solutions.

Back

OpenQA with the JDP data analyses framework

Home

Speaker: Richard Palethorpe
Room: UB2.147
Track: Testing and Automation
Time: 12:25 - 13:15
Event link: View original entry

An overview of SUSE's Linux kernel testing in OpenQA: how we keep track of known issues, explore test results, and use other features of JDP. The JDP framework is written in Julia, uses Redis as a distributed data cache and Jupyter for interactive reporting. OpenQA is a large application used for testing operating systems and displaying the results.

Back

Automated Performance Testing for Virtualization with MMTests

Home

Speaker: Dario Faggioli
Room: UB2.147
Track: Testing and Automation
Time: 13:20 - 13:50
Event link: View original entry

Which benchmark? How many VMs? How big is each VM? Are they all equal, or are they different? What's the host OS? What are the guest OSes? In other words, when doing virtualization performance testing, the matrix of test cases tends to explode pretty quickly. This talk will show how we enhanced an existing benchmarking suite, MMTests, to deal a little better with such complexity, and what our further activities and plans are for even more and better automation.

Functional testing is already hard enough in virtualization; for instance, we need to make sure that things work with different combinations of OS versions in hosts and guests. Performance testing is even more so. In fact, there are many more things to consider, such as how many VMs we use, how big they are, whether they are equally sized or different, what to run in them, and how to partition the host resources for them. And this is true whether you have a specific (virtualized) workload and some KPI to meet, in which case you need testing and benchmarking to figure out whether the tuning you have done has brought you there, or you wonder how well (or how badly) a certain configuration of both your host and your guests works for a number of workloads.



This talk will introduce the problem, showing how the size and the complexity of a typical 'virtualization performance testing matrix' really tend to explode. As an example, we will show how some specific characteristics of a virtualized system were, despite tuning, preventing us from achieving the desired performance levels. Then we will illustrate how, at SUSE, we do automated performance benchmarking, how we enhanced the tool most used for bare-metal benchmarks (the MMTests suite) to make it much more useful on virtualized systems, and how we are integrating it with other tools to push the level of automation even further and achieve something that really resembles a virtualization performance CI system.

Back

Auto-healing cluster through negative testing

Home

Speaker: Rajat Singh
Room: UB2.147
Track: Testing and Automation
Time: 13:55 - 14:20
Event link: View original entry

OCS stands for OpenShift Container Storage. It provides container-based storage for OCP (OpenShift Container Platform) and is easily scalable to bare metal, VMs and cloud platforms.
Auto-healing is a property of an OCS cluster whereby a cluster component is healed automatically when it passes through an unexpected condition. A component can be a node, a network interface, a service, etc. To make sure auto-healing works just fine, we introduced negative testing.
Negative testing is defined as a testing type that checks a system under unexpected conditions. In this presentation, we're going to talk about what role negative testing plays and how to negative-test components such as a node, by shutting it down, deploying a heavy workload, etc. Similarly, for the network component, we are going to see what happens when the public network is disconnected, along with many more scenarios.
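The essence of such a negative test can be sketched generically (an illustration with invented names, not OCS's actual API): inject a failure into a component, then assert that auto-healing brings it back within a deadline.

```python
# Generic negative-test sketch: kill a component, then poll until the
# supervising auto-healer restores it, failing the test on timeout.
import time

class FakeNode:
    """Stand-in for a cluster component watched by an auto-healer."""
    def __init__(self):
        self.up = True
        self._down_since = None

    def kill(self):                      # the "negative" action
        self.up, self._down_since = False, time.monotonic()

    def poll(self):                      # auto-healer restarts it after a short delay
        if not self.up and time.monotonic() - self._down_since > 0.1:
            self.up = True
        return self.up

def assert_recovers(component, timeout=5.0, interval=0.05):
    """Poll the component until it reports healthy, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if component.poll():
            return True
        time.sleep(interval)
    raise AssertionError("component did not auto-heal within %.1fs" % timeout)

node = FakeNode()
node.kill()                              # unexpected condition: node goes down
recovered = assert_recovers(node)        # passes only if auto-healing works
```

In a real cluster the "kill" would be a node shutdown or network disconnect, and "poll" a health check against the storage platform.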

Back

Introducing OpenTAP - Open Test Automation Project

Home

Speaker: Rolf Madsen
Room: UB2.147
Track: Testing and Automation
Time: 14:25 - 14:55
Event link: View original entry

OpenTAP is a project aimed at automation in the test and measurement space. It is designed for test and measurement of hardware in R&D and manufacturing, but is moving more towards software testing, e.g. with usage in cloud infrastructure testing. The project started as an internal product by Keysight Technologies and is used as the core of many products and solutions deployed around the world. As of 2019, we have released OpenTAP under the Mozilla Public License v2 and are working on building a community around it. The release was influenced by the team attending FOSDEM over the last few years; we also welcome you for a more detailed discussion at our booth on Saturday.



What we want is to grow an environment for people to share, leverage and co-develop test system plugins and solutions.



In this short talk, we will explain the basic concepts of OpenTAP, what it can be used for, and how to get started.

Back

One test output format to unite them all

Home

Speaker: Boris Feld
Room: UB2.147
Track: Testing and Automation
Time: 15:00 - 15:30
Event link: View original entry

Over the last several years, software quality tools have evolved: CI systems are more and more scalable, there are more testing libraries than ever, they are more mature than ever, and we have seen the rise of new tools to improve the quality of the code we craft.



Unfortunately, most of our CI systems still just launch a script and check the return code, most testing libraries don't allow fine-grained selection of which tests to launch, and most advanced CI innovations, such as parallel running and remote execution, are not available to developers on their workstations.



Each language community has its own set of tools, libraries, and command-line and visual interfaces, increasing the effort for developers to learn, or re-learn, how to write, run and debug tests in each language.



How can we improve the situation? In this talk, I will present one of my projects, LITF (https://github.com/Lothiraldan/litf), a new protocol for test running and test output, as well as BALTO (https://github.com/lothiraldan/balto), a test orchestrator using this new format. Thanks to this format, BALTO can execute several test suites in different languages, remotely on a Kubernetes cluster, and all in parallel. In any case, that is the goal of the stable version.
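The general idea behind such a protocol can be sketched as follows. This is NOT LITF's actual schema (see the project repository for that); it is only an illustration of the common pattern: each runner, whatever its language, emits one JSON object per line, so a single orchestrator can consume every stream the same way.

```python
# Illustrative line-delimited JSON test-output stream (invented field names),
# showing how a language-agnostic orchestrator could aggregate results.
import json

raw_stream = """\
{"_type": "test_result", "id": "tests/test_math.py::test_add", "outcome": "passed", "duration": 0.002}
{"_type": "test_result", "id": "tests/test_math.py::test_div", "outcome": "failed", "duration": 0.004}
"""

results = [json.loads(line) for line in raw_stream.splitlines()]
failed = [r["id"] for r in results if r["outcome"] == "failed"]
print(failed)  # ['tests/test_math.py::test_div']
```

Because the stream is structured, the orchestrator can re-run only the failed test IDs, merge streams from parallel workers, or render results in any UI.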

Back

Releasing Software with GitOps

Home

Speaker: Sean McGinnis
Room: UB2.147
Track: Testing and Automation
Time: 15:35 - 16:05
Event link: View original entry

Overview of the process the OpenStack community uses to manage all software releases through automation around Git commits.

The OpenStack community has enabled a lot of automation around releasing their software. This presentation will give an overview of how code reviews are used to manage release activity. It will step through the use of Zuul CI jobs to perform validation of requests before they are accepted, and how commits are used to trigger jobs to tag and release the software, update documentation, trigger requirements updates, and other follow on work that needs to happen whenever new code is released.

Back

Writing Go(od) Tests

Home

Speaker: Nikki Attea
Room: UB2.147
Track: Testing and Automation
Time: 16:10 - 16:55
Event link: View original entry

Every year we hear great content about how to develop in Go, but rarely do we focus on how to test in Go. Well-written tests are critical to the success of a project, and more often than not, they can help drive developers to design features in simpler and more concise ways.

In this talk, I'll explain the importance of test-driven development and provide some tactics for implementing the practice in your daily work and on your team. I'll dive into the testing, require, and assert packages to dissect which function calls are appropriate for different use cases, and present several ways to write Go tests for each scope, including unit, integration and e2e. I'll also discuss how to refactor code to make it more testable (with examples), so you can optimize and simplify Go code for robust and reliable Go tests. Lastly, I will cover race conditions to help you debug concurrency-related problems. Let's write Go(od) tests!

Back

CANCELLED Testing apps with third-party API integrations

Home

Speaker: Carolyn Stransky (carostran)
Room: UB2.147
Track: Testing and Automation
Time: 17:00 - 17:10
Event link: View original entry

As reliance on third-party services becomes more prevalent in our ecosystem, developers need cost-effective, secure and reliable ways to mock these services. In this talk, we will briefly examine strategies and best practices for testing apps that make heavy use of third-party API integrations.

With such a short session, I want to focus on the bits that are the most useful for developers today. Whether you’re working in a hip startup or traditional company with legacy code, you’re likely going to need ways to mock services for both dev/staging environments and testing. So this will include a (very short) demo of Unmock, the fuzz testing library that I maintain, with references to other tools like Nock and PollyJS. After this session, audience members should walk away with practical ways to improve their testing practices for REST APIs and third-party integrations.

Back

Testing a large testing software

Home

Speaker: Rémi Duraffort
Room: UB2.147
Track: Testing and Automation
Time: 17:15 - 17:45
Event link: View original entry

LAVA is an automated validation architecture primarily aimed at testing deployments of systems based around the Linux kernel on ARM devices, specifically ARMv7 and later.
LAVA is becoming the de facto standard for testing software (bootloader, kernel, userspace) on development boards (rpi, juno, beagle, ...). It's used by many projects to build large testing systems, such as KernelCI.



Testing a testing system like LAVA is sometimes a tricky task. In order to test LAVA, we had to develop some specific tools (meta-lava, DummySYS, lavafed, ...), which I will present during this talk.

Back

Correlation analysis in automated testing

Home

Speaker: Łukasz Wcisło
Room: UB2.147
Track: Testing and Automation
Time: 17:50 - 18:10
Event link: View original entry

Correlation analysis is a statistical method used to discover whether there
is a relationship between two variables, and how strong that relationship might
be. A correlation coefficient is a numerical measure of such correlation.
According to the Cauchy–Schwarz inequality it has a value between +1 and −1,
where 1 is total positive linear correlation, 0 is no linear correlation, and −1
is total negative linear correlation. One of the axioms of automated testing is
that tests are independent, and therefore their correlation coefficient should
be equal to 0. But often it isn't. In this work, we are going to present
a method for evaluating the quality of test suites based on the correlation
coefficient, and for finding their weak points. Using PC Engines open-source
firmware regression test results, which are based on over 140 automated tests
run with 2 flavors of software on 4 different platforms, we will show how
quality can be described numerically, and how those results can be used to
optimize test criteria.

In automated testing, every test can have only two
expected output values: pass or fail. Pearson's correlation
coefficient is originally defined as the covariance of the two variables
divided by the product of their standard deviations, so the first question was
how to compute it for Boolean variables. We assumed that the only value that
matters is the failure of a test. During the lecture, we will present how
mathematical analysis can reveal potential flaws in test criteria by targeting
cases that have a large chance of failing simultaneously.
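The approach above can be sketched with a small stdlib-only example (my own illustration, with invented test data): encode each run of a test as 1 (fail) or 0 (pass) and compute Pearson's correlation coefficient between two tests across the same runs.

```python
# Pearson correlation for Boolean (pass/fail) test outcomes:
# covariance of the two series divided by the product of their std deviations.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 1 = failure, 0 = pass, over eight regression runs (made-up data)
test_a = [1, 0, 1, 0, 1, 0, 0, 1]
test_b = [1, 0, 1, 0, 1, 0, 0, 1]   # fails exactly when test_a fails
test_c = [0, 1, 0, 1, 0, 1, 1, 0]   # fails exactly when test_a passes

print(pearson(test_a, test_b))  # ~ 1.0: these tests are clearly not independent
print(pearson(test_a, test_c))  # ~ -1.0
```

A coefficient near +1 between two supposedly independent tests is exactly the kind of redundancy or shared failure cause the talk proposes to hunt for.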

Back

How to fail successfully and reliably

Home

Speaker: Saleem Siddiqui
Room: UB2.147
Track: Testing and Automation
Time: 18:15 - 19:00
Event link: View original entry


Back

Extending sudo in Python

Home

Speaker: Peter Czanik
Room: UB2.252A (Lameere)
Track: Python
Time: 10:30 - 10:55
Event link: View original entry

From my talk you will learn about some lesser-known features of sudo, and how you can make your security more flexible by extending sudo using Python.

Everybody knows sudo, right? Sudo allows a system administrator to give certain users the ability to run some commands as root, while logging the executed commands and their arguments. It is installed by default on almost all Linux systems and is available for most commercial UNIX systems. Sudo allows you to fine-tune access policies, record sessions, and do extensive logging. Still, even system administrators often know it only as the "prefix" to use before entering a command requiring root privileges, and don't realize its true powers.



Did you know that with version 1.8, sudo changed to a plugin-based architecture? You can extend or even replace basic functionality through plugins. While plugins provide the ultimate flexibility, and there are both open source and commercial plugins for sudo available, it is not easy to extend sudo. This is why a Python plugin is on its way to sudo (and will be released before FOSDEM).



Sudo has a number of well defined APIs for plugins (https://www.sudo.ws/man/sudo_plugin.man.html). The Python plugin builds on these APIs. For example, you can set your own policies using the policy API, or access what is happening on the screen using the I/O API (used by session recording).



Using Python for extending sudo makes development not just easier (no development environment necessary), but opens up many new possibilities. For example, you can develop a plugin which analyzes on-screen activity in real-time, and breaks the session if the infamous “rm -fr /” command appears on screen. As multiple I/O plugins can work in parallel, you do not have to give up session recording to analyze sessions in real-time from Python.
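As a loose sketch of the kind of real-time check such an I/O plugin could run (the function name and the naive substring match are invented for illustration; a real plugin would implement the I/O-log callbacks of sudo's Python plugin API and return a reject code instead of a bool):

```python
# Hypothetical core of an I/O-monitoring sudo plugin: scan terminal output
# for a destructive command and signal that the session should be broken.
# In a real plugin this check would live in an I/O-log callback; here it is
# a plain function so the logic can be shown (and tested) standalone.
DANGEROUS_PATTERNS = ("rm -fr /", "rm -rf /")

def should_break_session(screen_buffer: str) -> bool:
    """Return True if the on-screen activity matches a dangerous pattern."""
    # Naive substring matching; a production check would parse more carefully.
    return any(pattern in screen_buffer for pattern in DANGEROUS_PATTERNS)

print(should_break_session("ls -l /tmp"))      # harmless
print(should_break_session("$ rm -fr / ..."))  # triggers the break
```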




Back

Boosting Python with Rust

Home

Speaker Raphaël Gomès
RoomUB2.252A (Lameere)
TrackPython
Time11:00 - 11:25
Event linkView original entry

While working on the Mercurial version control system, we hit our heads against the limits of Python's performance. In this talk we will see how Python and Rust can cohabit and play off each other's strengths to improve a big open-source project, and what advances have been made in bridging the two languages.

Back

How to write a scikit-learn compatible estimator/transformer

Home

Speaker Adrin Jalali
RoomUB2.252A (Lameere)
TrackPython
Time11:30 - 11:55
Event linkView original entry

This is a hands-on short tutorial on how to write your own estimator or transformer
which can be used in a scikit-learn pipeline, and works seamlessly with the other
meta-estimators of the library.



It also includes how they can be conveniently tested with a simple set of tests.

In many data science related tasks, use-case specific requirements force us to
slightly modify the behavior of some of the estimators or transformers present
in scikit-learn. Some of the tips and requirements are not necessarily well documented
by the library, and it can be cumbersome to track down those details.



In this short tutorial, we go through an example of writing our own estimator,
test it against scikit-learn's common tests, and see how it behaves inside
a pipeline and a grid search.
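To make the conventions concrete, here is a minimal, dependency-free stand-in for a transformer; a real one would inherit from sklearn.base.BaseEstimator and TransformerMixin so that pipelines and grid search can clone and configure it:

```python
# A minimal stand-in for a scikit-learn transformer, written without the
# library so it stays self-contained. The key conventions it follows:
# learned attributes end in a trailing underscore, and fit() returns self.
class MeanCenterer:
    """Learn column means in fit(), subtract them in transform()."""

    def fit(self, X, y=None):
        n = len(X)
        self.means_ = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
        return self  # fit() must return self, per scikit-learn convention

    def transform(self, X):
        return [[x - m for x, m in zip(row, self.means_)] for row in X]

centered = MeanCenterer().fit([[1.0, 2.0], [3.0, 4.0]]).transform([[1.0, 2.0]])
print(centered)  # [[-1.0, -1.0]]
```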



There have also been recent developments related to the general API of the estimators
which require slight modifications by third-party developers. I will cover these
changes and point you to the activities to watch, as well as some of the private utilities
you can use to improve your experience of developing an estimator.



The materials of the talk will be available on GitHub as a Jupyter notebook.

Back

Why is Django 3.0 a revolution for building websites with Python?

Home

Speaker Rémy Hubscher
RoomUB2.252A (Lameere)
TrackPython
Time12:00 - 12:25
Event linkView original entry

For almost 20 years, we relied on a CGI-inspired protocol called WSGI to handle HTTP requests and responses in Python software.
Because Python is single-threaded, we relied on a couple of hacks, such as Gunicorn or uWSGI, to share a socket between multiple processes.
However, the cost of all these processes was rather heavy and error prone.



Through Django Channels, Andrew Godwin paved the way for a better way of creating web services with Python. This work landed in Django 3.0.
Let's explore how it works and why it's worth it!
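Django 3.0's async support is built on the ASGI interface. A minimal, framework-free ASGI application (driven by hand rather than by a real server such as uvicorn) shows the shape of that interface:

```python
import asyncio

# A minimal ASGI application: one async callable per connection, so a single
# process can serve many requests concurrently without the per-process
# socket-sharing hacks WSGI deployments needed.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI!"})

# Drive the app by hand instead of running a real server.
async def call_app():
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(message):
        sent.append(message)
    await app({"type": "http", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(call_app())
print(messages[0]["status"], messages[1]["body"])
```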

Back

Will somebody *please* tell me what's going on?

Home

Speaker Stephen Finucane
RoomUB2.252A (Lameere)
TrackPython
Time12:30 - 12:55
Event linkView original entry

How does one manage and document change in Python projects, be that new features or deprecation or removal of a feature? Let's explore some of the tools a Python developer can keep in their toolbox for just this purpose.

Software rarely stands still (unless it's TeX). Things are added, things are removed, things break and are then hopefully fixed. Managing this, from both the developer and user perspective, can be tough. In this talk we examine and compare some of the tools that one can use to make this process easier, such as 'debtcollector', 'reno' and 'towncrier', and contrast these with alternatives used in other projects. This talk would mainly be of interest to developers of open source libraries, though the same tooling can be used for any Python library or application that wishes to maintain stable interfaces and/or document changes in their product.
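Much of what a tool like debtcollector automates boils down to emitting a DeprecationWarning at call time; a stdlib-only sketch of the pattern (the decorator here is our own, not debtcollector's API):

```python
import functools
import warnings

# The pattern deprecation helpers automate: flag an old API at call time so
# users see the warning before the symbol is finally removed.
def deprecated(replacement):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__}() is deprecated; use {replacement}() instead",
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("new_api")
def old_api(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)
print(result, caught[0].category.__name__)
```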

Back

Discover Static Code Analysis in Python with Coala Framework

Home

Speaker Lionel Lonkap Tsamba
RoomUB2.252A (Lameere)
TrackPython
Time13:00 - 13:25
Event linkView original entry

As developers, we aim to provide code that matches our team's code style, looks good and behaves correctly. Static code analysis (SCA) tools are one way to achieve that. But with multi-language projects and all kinds of code-related needs, it's difficult to address all those use cases without juggling a large number of SCA tools.



Coala is a — language agnostic — static code analysis framework that provides a common command-line interface for linting and fixing all your code.



It is written in Python and supports well over 50 languages in addition to language-independent routines. So, instead of building new analysis tools from scratch, you can now build your own custom logic and let coala deal with the rest.



This talk introduces the audience to the Coala Framework and guides them through how they can use it to build routines to do almost anything they want with their code.
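The heart of a coala routine (a "bear") is a small function that inspects file contents and yields findings. The sketch below mimics that idea in plain Python; the names are invented, and a real bear would subclass coala's bear classes and yield Result objects:

```python
# Illustration of the idea behind a coala "bear": a small routine that takes
# a file's contents and yields findings, leaving the framework to handle
# file collection, configuration, and result presentation.
def long_line_bear(filename, lines, max_length=79):
    """Yield (filename, line_number, message) for overly long lines."""
    for number, line in enumerate(lines, start=1):
        if len(line.rstrip("\n")) > max_length:
            yield (filename, number, f"Line longer than {max_length} characters")

findings = list(long_line_bear("demo.py", ["short\n", "x" * 100 + "\n"]))
print(findings)
```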

AUDIENCE


Python Developers



LEVEL


Beginner / Intermediate / Advanced



Notes



This talk is for Python developers with any level of experience.



At the end of the talk, attendees will learn:







Back

When Python meets GraphQL: Managing contributors identities in your open source project

Home

Speaker Miguel-Ángel Fernández
RoomUB2.252A (Lameere)
TrackPython
Time13:30 - 13:55
Event linkView original entry

SortingHat is an open source Python tool that helps to manage the different contributor identities within an open source project. Under the hood SortingHat relies on a relational database, which can be queried via SQL, command line or directly via its Python interface. However, these ways of interacting with SortingHat hinder its integration with external tools, web interfaces and new web technologies (e.g., Django, REST services). To overcome these obstacles, we have evolved SortingHat's architecture using a GraphQL model based on the Graphene-Django implementation.
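As a toy illustration of the problem SortingHat addresses (the data model and merge rule below are invented for the sketch, not SortingHat's actual schema), consider grouping the names a contributor uses under each email address:

```python
from collections import defaultdict

# A toy version of the identity problem: the same contributor shows up under
# several names, and we want to unify them into one profile per identity key.
commits = [
    {"name": "Jane Doe", "email": "jane@example.org"},
    {"name": "jdoe", "email": "jane@example.org"},
    {"name": "Jane Doe", "email": "jane.doe@company.com"},
]

def merge_by_email(records):
    """Group all names seen for each email address."""
    profiles = defaultdict(set)
    for record in records:
        profiles[record["email"]].add(record["name"])
    return dict(profiles)

print(merge_by_email(commits))
```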



This talk describes our experience in migrating to GraphQL, from adapting the SortingHat functionalities to refactoring the unit tests. Furthermore, we also comment on lessons learned, and on the advantages and drawbacks of this new approach.



SortingHat is one of the core tools of GrimoireLab, an open-source software analytics platform that is part of the CHAOSS project (Community Health Analytics Open Source Software) under the umbrella of the Linux Foundation.

Back

Follow Your Celery Tasks

Home

Speaker Nicolas Crocfer
RoomUB2.252A (Lameere)
TrackPython
Time14:00 - 14:25
Event linkView original entry

Every Python developer who wants to run asynchronous tasks should know Celery. If you have already used it, you know how great it is! But you may also have discovered how complicated it can be to follow the state of a complex workflow. Celery Director is a tool we created at OVH to fix this problem: using some concepts of Event Sourcing, Celery Director helps us follow the whole lifecycle of our workflows. It allows us to check when a problem occurred and relaunch the whole DAG (or just a subpart, if tasks are not completely idempotent). During this talk we will introduce the different concepts of Celery Director and then give a demonstration of it.
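The core idea (record each task's state so a relaunch can resume from the failing subpart) can be sketched in a few lines; this illustrates the concept only, and is not Celery Director's actual implementation:

```python
# Sketch of state-tracked workflow relaunch: tasks already marked "success"
# are skipped, so a second run resumes from the failing step.
def run_workflow(tasks, state):
    """Run (name, func) pairs in order, skipping completed ones."""
    for name, func in tasks:
        if state.get(name) == "success":
            continue  # already done on a previous run
        try:
            func()
            state[name] = "success"
        except Exception:
            state[name] = "error"
            break  # stop here; a later relaunch resumes from this task
    return state

log = []
tasks = [
    ("extract", lambda: log.append("extract")),
    ("transform", lambda: 1 / 0),          # fails on the first run
    ("load", lambda: log.append("load")),
]
state = run_workflow(tasks, {})
print(state)  # transform failed, load never ran

# "Fix" the failing task and relaunch: extract is skipped, the rest runs.
tasks[1] = ("transform", lambda: log.append("transform"))
state = run_workflow(tasks, state)
print(state, log)
```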


Back

Asyncio: understanding async and await in Python

Home

Speaker Jonathan Slenders
RoomUB2.252A (Lameere)
TrackPython
Time14:30 - 14:55
Event linkView original entry

Often when asyncio is discussed, people think of it as a high-performance concurrency programming paradigm for Python. In this talk, however, we approach asyncio from a different angle, one that will possibly help some of you finally get what asyncio is about. It's not only about performance, but at least as much about correctness and readability of concurrent applications.

Concurrency is hard to get right.






It is known that for multithreaded applications, synchronization is hard to get right. Doing it wrong can either lead to deadlocks or broken data structures.



We will have a look at how using asyncio differs from using threads, when it's better, and what pitfalls there are.



This talk should be a good introduction for anyone just starting with asyncio, but can also clarify things for people that are using asyncio already. I expect people to have at least used some form of concurrency, either threads or an event loop like we have in JavaScript.
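One way to see the correctness argument: under asyncio a task can only be interrupted at an await, so a read-modify-write with no await in the middle needs no lock. A small, deterministic illustration:

```python
import asyncio

# In a multithreaded program the increment below would need a lock, because
# a thread can be preempted between the read and the write. Under asyncio,
# control only switches at an await, so as long as no await sits inside the
# read-modify-write, the update cannot be torn.
counter = 0

async def worker(iterations):
    global counter
    for _ in range(iterations):
        counter += 1            # no await in between: effectively atomic
        await asyncio.sleep(0)  # yield control to the other task

async def main():
    await asyncio.gather(worker(1000), worker(1000))

asyncio.run(main())
print(counter)  # always 2000, with no lock in sight
```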

Back

Production-time Profiling for Python

Home

Speaker Julien Danjou
RoomUB2.252A (Lameere)
TrackPython
Time15:00 - 15:25
Event linkView original entry

Learn how to scrutinize your Python applications in order to optimize them and make them run faster.

Getting inside knowledge of how your Python application runs is critical to achieving the best performance. Profiling is a means to achieve this: by gathering all the runtime information available about the execution of your program, you may be able to understand how to optimize it. However, profiling running code in production can be a real challenge, as it requires the profiler to be noninvasive and have low overhead.



Therefore, to profile production services, statistical profiling is the favored analysis method. By regularly sampling your program's activity, you'll be able to find production code bottlenecks down to the line of code. Profiling services that run with a real workload ensures that you are collecting valuable data and are not guessing what the performance barrier might be.



This talk explains how it’s possible to build a statistical profiler that collects information about CPU time usage, memory allocation, and other information — all that while respecting the need for low overhead, data export format, and granularity. We’ll dig into some of the operating systems and CPython internals to understand how to build the best profiler possible.
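A toy statistical profiler can be built from the standard library alone: a background thread periodically samples the main thread's current frame via sys._current_frames() and tallies function names. Real profilers record full stacks for all threads with far lower overhead; this sketch only demonstrates the sampling principle:

```python
import collections
import sys
import threading
import time

samples = collections.Counter()

def sampler(main_thread_id, interval=0.001, duration=0.3):
    """Sample the main thread's current function name at a fixed interval."""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        frame = sys._current_frames().get(main_thread_id)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

def busy_loop():
    total = 0
    for i in range(10_000_000):
        total += i
    return total

# Sample the main thread while it runs the busy loop; the hot function
# should dominate the tallies.
thread = threading.Thread(target=sampler, args=(threading.main_thread().ident,))
thread.start()
busy_loop()
thread.join()
print(samples.most_common(3))
```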

Back

Introduction to Reactive Programming with RxPY

Home

Speaker Romain Picard
RoomUB2.252A (Lameere)
TrackPython
Time15:30 - 15:55
Event linkView original entry

Reactive Programming is an event-based programming method. ReactiveX is a cross-platform implementation of Reactive Programming. It is heavily inspired by functional programming and contains many operators that allow you to create, modify, and combine streams of events. Moreover, it is composable and extensible. This short introduction presents Reactive Programming through RxPY, the Python implementation of ReactiveX.
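The flavor of the ReactiveX operator pipeline can be captured in a pocket-sized Observable. Real RxPY code looks more like rx.of(...).pipe(ops.map(...)).subscribe(...), with schedulers and error/completion signals that this sketch deliberately omits:

```python
# A miniature Observable: operators like map/filter compose into a pipeline,
# and nothing runs until subscribe() pulls items through it.
class Observable:
    def __init__(self, source):
        self._source = source

    def map(self, fn):
        return Observable(fn(item) for item in self._source)

    def filter(self, predicate):
        return Observable(item for item in self._source if predicate(item))

    def subscribe(self, on_next):
        for item in self._source:
            on_next(item)

received = []
(Observable(range(10))
    .filter(lambda x: x % 2 == 0)
    .map(lambda x: x * x)
    .subscribe(received.append))
print(received)  # [0, 4, 16, 36, 64]
```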

The aim of this talk is to present RxPY to people who have never used it, or who used RxPY v1:




Back

Introducing HTTPX

Home

Speaker Tom Christie
RoomUB2.252A (Lameere)
TrackPython
Time16:00 - 16:25
Event linkView original entry

HTTPX is a next-generation HTTP client that supports HTTP/2 and HTTP/1.1.



It can be used in high-performance async web frameworks, using either asyncio or trio, and is able to support making large numbers of requests concurrently.



This talk will introduce HTTPX, demonstrate some of its features, and talk through the motivation and aims for the project.
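The concurrency pattern HTTPX enables looks like the sketch below, with fetch() as an offline stand-in so the example runs anywhere; with HTTPX itself you would await client.get(url) inside an httpx.AsyncClient:

```python
import asyncio

# Fire many requests at once and gather the responses. fetch() is a stub
# standing in for a real HTTP call so the example needs no network.
async def fetch(url):
    await asyncio.sleep(0.01)  # pretend this is network latency
    return (url, 200)

async def main(urls):
    # All fetches run concurrently on one event loop.
    return await asyncio.gather(*(fetch(u) for u in urls))

urls = [f"https://example.org/item/{i}" for i in range(5)]
results = asyncio.run(main(urls))
print(results[0])
```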

The talk will cover:




Back

FoxDot and the Summer of 2019

Home

Speaker Moisés Guimarães
RoomUB2.252A (Lameere)
TrackPython
Time16:30 - 16:55
Event linkView original entry

Yeah, this is about my last summer. But I promise to focus on the story of how I was able to do four amazing lightning talks with Python and a harmonica.



This one is not about technical stuff; it is about finding something where Python overlaps with your hobbies and sharing it back with the community.

This talk is about writing songs and playing music with Python. Back in 2017 I came to know FoxDot, a Python wrapper around SuperCollider, a super popular open source synthesizer. Since then, I have been using it to create entertaining lightning talks, and I would like to cover a bit more than just a lightning talk this time.

Back

Monads in Python: why and how?

Home

Speaker Vincent Perez
RoomUB2.252A (Lameere)
TrackPython
Time17:00 - 17:25
Event linkView original entry

In this talk I would give some motivating examples behind the idea of monads in Python, and show some implementation examples. I'd also show how we can leverage AST transformations to make the Python syntax more amenable to the use of monads. I have already given a talk on this topic during Pycon France 2018 in Lille. Unfortunately, the video footage has been lost, but the original slides can be found here: https://slides.com/v-perez/pythonic-monads-in-real-life#/



If this talk was selected, I'd probably update it a bit to account for the feedback I received, and new ideas I may have.
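For readers unfamiliar with the motivation, a classic example is a Maybe type that threads "might be missing" through a chain of calls without nested None checks. This is a generic sketch, not the speaker's implementation:

```python
# A minimal Maybe monad: bind() applies a function only when a value is
# present, so a failure anywhere short-circuits the rest of the chain.
class Maybe:
    def __init__(self, value):
        self.value = value

    def bind(self, fn):
        """Apply fn if a value is present; propagate the empty case."""
        if self.value is None:
            return self
        return fn(self.value)

def safe_div(x, y):
    return Maybe(None) if y == 0 else Maybe(x / y)

result = (Maybe(10)
          .bind(lambda x: safe_div(x, 2))
          .bind(lambda x: safe_div(x, 0))   # fails here...
          .bind(lambda x: safe_div(x, 5)))  # ...so this never runs
print(result.value)  # None: the division by zero short-circuits the chain
```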


Back

repcloud

Home

Speaker Federico Campoli
RoomUB2.252A (Lameere)
TrackPython
Time17:30 - 17:55
Event linkView original entry

repcloud is a tool for repacking PostgreSQL databases hosted in the cloud, written in Python 3.

While pg_repack is a far better approach for repacking tables online, it is not always possible to install the extension on a PostgreSQL instance hosted in the cloud (e.g. Heroku).
Hence repcloud, which can help rebuild tables online using simple SQL and some PostgreSQL magic.
The author will explain how the project started, its functionality, and its limitations.

Back

Thoth - a recommendation engine for Python applications

Home

Speaker Fridolín Pokorný
RoomUB2.252A (Lameere)
TrackPython
Time18:00 - 18:25
Event linkView original entry

Project Thoth is a recommendation engine that collects information about software packages and container images, such as installation and assembly issues, runtime crashes, or performance data. This information is subsequently used in a recommendation engine that searches a large state space of libraries and recommends the best possible combination of libraries for your application using reinforcement learning. Let's have a look at how this information is collected and how the large state space is explored to resolve the best application stack for your Python application, based on different aspects.

The Python ecosystem is experiencing significant growth and popularity, especially with the hype that machine learning, data science, and AI are creating. As the ecosystem grows, it is often not straightforward to decide which libraries, in which versions, are the most suitable for an application. Project Thoth is a recommendation engine which aggregates various characteristics of Python packages, called "observations", and uses them to recommend the best possible software stack (a fully pinned-down list of dependencies) suitable for the user's runtime environment and the application's purpose. In this talk, we give an overview of project Thoth, the main ideas in its data aggregation, and its recommendation engine based on reinforcement learning principles. We will also show how you can benefit from Thoth's recommendations.

Back

The Pythran compiler, 7 years later

Home

Speaker Serge Guelton (serge-sans-paille)
RoomUB2.252A (Lameere)
TrackPython
Time18:30 - 18:55
Event linkView original entry

7 years ago, a first file was committed to the Pythran git repo in order to create a compiler from Python to C++. The project now has hundreds of downloads per day on PyPI and has grown into a cross-platform compiler for scientific programs. This talk walks through the initial ideas, sorting the good ones from the bad, and compares the approach with other major Python compilers for scientific programs, most notably Cython, PyPy and Numba.
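A taste of the Pythran approach: the export annotation lives in a comment, so the very same file runs unmodified under CPython and can be compiled to a native module with `pythran file.py`. The signature below is our own example, not from the talk:

```python
# A function in the style Pythran compiles: plain numeric Python, annotated
# with a #pythran export comment. CPython ignores the comment, so this file
# also runs as ordinary Python.

#pythran export dot(float list, float list)
def dot(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```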

Relevant topics include :




Back

LPI Exam Session 1

Home

Speaker LPI Team
RoomUB4.132
TrackCertification
Time13:00 - 15:00
Event linkView original entry

LPI offers discounted certification exams at FOSDEM

As in previous years, the Linux Professional Institute (LPI) will offer discounted certification exams to FOSDEM attendees.
LPI offers level 1, level 2 and level 3 certification exams at FOSDEM with an almost 50% discount.



For further information and instructions see https://fosdem.org/certification.

Back

LPI Exam Session 2

Home

Speaker LPI Team
RoomUB4.132
TrackCertification
Time15:30 - 17:30
Event linkView original entry

LPI offers discounted certification exams at FOSDEM

As in previous years, the Linux Professional Institute (LPI) will offer discounted certification exams to FOSDEM attendees.
LPI offers level 1, level 2 and level 3 certification exams at FOSDEM with an almost 50% discount.



For further information and instructions see https://fosdem.org/certification.

Back

OpenOffice Build System

Home

Speaker Peter Kovacs
RoomUB4.136
TrackOpen Document Editors
Time10:30 - 10:55
Event linkView original entry

This talk will be about the OpenOffice build system: how it works today and the issues with it. The talk will also highlight current development in this area and where it might move in the future (development plans).



Scheduled length will be 20 min + questions.

Currently the build system is a mixture of dmake, gmake, ant and other tools. After a short walkthrough, I would like to describe the vision we have for the future build system we are working towards.

Back

Contributing to LibreOffice without C++ knowledge

Home

Speaker Ilmari Lauhakangas
RoomUB4.136
TrackOpen Document Editors
Time11:00 - 11:25
Event linkView original entry

A good grasp of C++ is rather useful when it comes to improving LibreOffice. However, in the project there are vital roles and tasks that do not involve writing C++. This talk explores these other ways of contributing.

Back

coverity and oss-fuzz issue solving

Home

Speaker Caolán McNamara
RoomUB4.136
TrackOpen Document Editors
Time11:30 - 11:55
Event linkView original entry

Back

LibreOffice lockdown and encryption improvements

Home

Speaker Thorsten Behrens
RoomUB4.136
TrackOpen Document Editors
Time12:00 - 12:25
Event linkView original entry

LibreOffice has had built-in support for working with encrypted documents for a long time (with some recent improvements adding OpenPGP support). However, support for more fine-grained control of what a user can do with access-restricted documents was missing.
Come and see what recent improvements we implemented for LibreOffice 6.4 and 6.5 to permit fine-grained access controls on individual LibreOffice documents, matching the feature set of the MS Rights Management Solution.

Back

Prioritizing is key

Home

Speaker Xisco Fauli
RoomUB4.136
TrackOpen Document Editors
Time12:30 - 12:55
Event linkView original entry

Back

Proposal to inspect and highlight styles in Writer

Home

Speaker Heiko Tietze
RoomUB4.136
TrackOpen Document Editors
Time13:00 - 13:25
Event linkView original entry

Styles are the essence of a text processor. And while experts love to unleash the power of LibreOffice Writer, styles are at the same time a major source of nuisance. In particular, when you receive documents from other people, it can be quite difficult to understand the applied formatting and to fix issues. This talk presents two ideas for improved feedback.

Back

LibreOffice Theme Changer

Home

Speaker Rania Amina
RoomUB4.136
TrackOpen Document Editors
Time13:30 - 13:55
Event linkView original entry

LibreOffice is a free and open source office suite that is very popular today. It is used in many different user segments, ranging from individuals to communities, education, and even companies. It would be very interesting to have LibreOffice themes specific to each segment. For this reason, we (the LibreOffice Indonesia Community) took the initiative to create a dedicated extension to manage themes in LibreOffice; we call it LO-TC (read: Lotis), the LibreOffice Theme Changer.

LibreOffice is a free and open source office suite that is very popular today. It is used in many different user segments, ranging from individuals to communities, education, and even companies. It would be very interesting to have LibreOffice themes specific to each segment. This would, of course, also further strengthen the fact that LibreOffice is truly free software.
Unfortunately, the features for setting themes in LibreOffice are currently limited. As of now, LibreOffice only provides 6 theme choices for users. A previous version, although it had many bugs and problems, offered a Persona theme menu that I found quite interesting. For this reason, my friends in the LibreOffice Indonesia Community and I took the initiative to create a dedicated extension to manage themes in LibreOffice; we call it LO-TC (read: Lotis), the LibreOffice Theme Changer.
LO-TC was originally just a simple bash script that allows users to change some visual components in LibreOffice, some of which are:



• Images in headers and footers
• Intro or splash screen
• Colors in the application, and
• Icons (optional)


Because it is only based on bash scripts, LO-TC can currently be used only on Linux and Mac (with a few adjustments). Given this limitation, we finally decided to rewrite LO-TC as an extension so that it can later be used on many operating systems.
The various LibreOffice installation models (via distribution repositories, snap, flatpak, etc.) pose quite daunting challenges for developers. In addition, this is our first extension project, so there are a lot of new things we must understand in the process of working on the latest LO-TC. The current status of LO-TC development can be found here: https://github.com/libreofficeid/LO-TC-GUI

Back

Creating Word Clouds with OpenOffice

Home

Speaker Andrea Pescetti
RoomUB4.136
TrackOpen Document Editors
Time14:00 - 14:25
Event linkView original entry

OpenOffice Writer offers all features needed for basic frequency analysis and visualization. We'll see how to do automated text analysis and simple word clouds without using specialized external tools.
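The computation behind a word cloud is just frequency counting. For comparison, the kind of analysis the talk performs inside Writer takes only a few lines of Python (the tokenizer here is deliberately simplistic):

```python
import collections
import re

# Word-frequency counting, the core of any word cloud: tokenize, count,
# and keep the most common words for sizing in the visualization.
def word_frequencies(text, top=3):
    words = re.findall(r"[a-z']+", text.lower())
    return collections.Counter(words).most_common(top)

sample = "the quick brown fox jumps over the lazy dog the fox"
print(word_frequencies(sample))
```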

Back

Crowdfunding to advance open document editors

Home

Speaker Simon Phipps
RoomUB4.136
TrackOpen Document Editors
Time14:30 - 14:55
Event linkView original entry

Productivity software like LibreOffice has long been sustained by the commercial activities of community members as well as the contributions of countless volunteers. That's also driven standards engagement, like the work around Open Document Format (ODF). But the cloud is slowly strangling the desktop support business, and spare-time volunteers may not be enough for complex, mature software. The Document Foundation has been innovating to sustain LibreOffice and ODF; this talk will describe the COSM and TDC projects, and ask whether similar approaches might sustain other open source desktop software.

Back

Online Open Document Editing New Possibilities

Home

Speaker Cor Nouws
RoomUB4.136
TrackOpen Document Editors
Time15:00 - 15:25
Event linkView original entry

Open Document editing is, like many things in life, more and more an online activity. Collabora took the important first steps in 2015. Since then much work has been done, and LibreOffice and Collabora Online have grown enormously in capability. This presentation will guide you through the various areas. And in the Q&A, let's talk about expectations for the future.

Back

Make Online yours

Home

Speaker Pedro Pinto Silva
RoomUB4.136
TrackOpen Document Editors
Time15:30 - 15:55
Event linkView original entry


Collabora Online - The driving force behind putting LibreOffice in the cloud - is quite flexible in the means that you can alter to your personal taste without the need to change other core components.
Tag along and see how you can customize Online's look and feel without breaking a sweat, using mainly CSS and SVG! Don't know much about web technologies? No problem! There are no prerequisites to attend; I'll be talking in a casual fashion, with examples, and will hopefully illustrate each step of the way.

Back

Online: wrestling web Copy/Paste to usability

Home

Speaker Michael Meeks
RoomUB4.136
TrackOpen Document Editors
Time16:00 - 16:25
Event linkView original entry

Overcoming the synchronous web copy/paste APIs' limitations in real
browsers is extremely non-trivial. Come & hear how we provide rich
copy/paste support across browsers and inside our app, and hear a bit
about how this mess should be fixed.

Collabora has been working to provide a good user-experience for Collabora
Online - bringing LibreOffice to the web, and a particularly
challenging aspect of this has been copy/paste. One of our challenges
is that by design we keep our document data on the server, which is at
the end of an asynchronous web-socket. Another challenge is the
impossibly baroque and arguably mis-designed set of clipboard APIs
that we have to work with.



Hear a story of how we defeated the issues, as well as the somewhat
sad UX compromises we were forced to make for the hard cases.

Back

Integrate Collabora Online with web applications

Home

Speaker Andras Timar
RoomUB4.136
TrackOpen Document Editors
Time16:30 - 16:55
Event linkView original entry

Come and hear how to integrate Collabora Online – a powerful online office suite based on LibreOffice code – with web applications. Learn about how Collabora developers helped to develop solutions by extending the WOPI-like API and PostMessage API of Collabora Online.

Back

LibreOffice Online adoption into 1&1 Mail&Media ecosystem

Home

Speaker Eduard Ardeleanu
RoomUB4.136
TrackOpen Document Editors
Time17:00 - 17:25
Event linkView original entry

Back

Collabora Office Android app gory details

Home

Speaker Jan Holesovsky
RoomUB4.136
TrackOpen Document Editors
Time17:30 - 17:55
Event linkView original entry

The LibreOffice Android app consists of the LibreOffice core as the native code and a Java part that takes care of compositing the tiles, input handling, etc. It is hard to maintain, because everything that has been implemented in LibreOfficeKit for Online has to be ported to Java - which is a huge amount of work.



For the Collabora Office Android app, we have tried a new approach - to build on top of work pioneered by Tor Lillqvist for iOS: Using the native code for the rendering, the Online JavaScript for the composition of tiles, input handling, etc. and only a thin Java layer to instantiate a WebView where the JS lives.



Come and see the current state! And don't worry, all the work is contributed back to the LibreOffice code too :-)

Back

Lightning talk session

Home

Speaker Thorsten Behrens
RoomUB4.136
TrackOpen Document Editors
Time18:00 - 18:55
Event linkView original entry

Back

Open Source Hardware and Soldering Workshop

Home

Speaker Tsvetan Usunov
RoomUB4.228
TrackWorkshops
Time10:30 - 19:00
Event linkView original entry

Open Source Hardware room with soldering workshops on both days.
The Day 1 soldering workshop will be dedicated to through-hole technology and is good for beginners who have no experience soldering components.



Besides the soldering workshop, we will show the latest OSHW boards we are working on; you are welcome to join and show your own OSHW projects too.

With this soldering workshop we will show that assembling printed circuit boards is not hard to learn.



We designed a special board with through-hole components for FOSDEM - the FOSDEM MUSIC BOX, which is Arduino-programmable and can play music.



During the soldering workshop we will introduce the electronic components used on the PCB, how to identify them, and how to recognize components with polarity.



We will teach you the basics of soldering, what good and bad solder joints look like, and what a cold solder joint is.
At the end of the workshop you will have built your own Music Box and will be able to program it with the Arduino IDE to play music.

Back

Past, Present and Future of DRLM project

Home

Speaker Didac Oliveira
RoomUD2.119
TrackBackup and Recovery
Time10:30 - 11:00
Event linkView original entry

A brief introduction to the DRLM project, its features and the news in the 2.3.x release, plus a presentation of the new DRLM version 3 architecture and its development state.

This talk will explain our vision for the future of the DRLM project, which will continue evolving in DRLM 2.x while the new DRLMv3 is being developed.



For DRLMv3 we've prepared a complete development environment on Docker to provide an easy and fast way to contribute to the project.



In this session we'll show all the interesting new features in DRLMv2, a preview of what has been developed in DRLMv3 at the time of this presentation, and how easy it is to get a complete DRLMv3 development
environment up with a couple of commands.



We don't want to spoil anything, but this talk may be of interest for developers looking for a FLOSS project to contribute to ;).

Back

Relax-and-Recover (ReaR) Basics

Home

Speaker Gratien D'haese
Johannes Meixner
RoomUD2.119
TrackBackup and Recovery
Time11:05 - 11:35
Event linkView original entry

Introducing Relax-and-Recover (ReaR) for novice users. What is it and what can it mean for you? Is Disaster Recovery something you should consider or not? How can ReaR assist you with DR?
ReaR can store the details about your systems on disk (NAS, USB, SAN, ...) or on the network (PXE, NFS, CIFS, ...), including a complete backup. It also creates a bootable image which you need to recreate your system from scratch.
Furthermore, thanks to its modular concept, ReaR integrates perfectly with external backup solutions (be they commercial and/or open source) to do the backup and restore part, which makes ReaR very scalable in big enterprises. ReaR even scales to cloud solutions and is at the heart of another great project (DRLM, the Disaster Recovery Linux Manager).

Relax-and-Recover (ReaR) is the de facto standard generic (bare metal) disaster recovery framework for all kinds of Linux systems.
ReaR is in common use by admins for disaster recovery on thousands and thousands of Linux server systems.
The first part of the "ReaR Basics" talk will be presented by Gratien D'haese (one of the co-founders of ReaR).



In the second part of the talk, Johannes Meixner (one of the main developers of ReaR) will use his own laptop to demonstrate the ReaR disaster recovery framework.
Johannes will give a real live demo, without a safety net, of how ReaR is used to recover his own laptop from soft errors such as deleted essential files
(rm -r /lib...) and/or a destroyed partition table/bootloader (dd if=/dev/zero of=/dev/sdX). Come and see for yourself how ReaR can save your day!
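For readers who want a feel for what configuring ReaR involves, a minimal /etc/rear/local.conf might look like the sketch below. The variable names (OUTPUT, BACKUP, BACKUP_URL) are from the ReaR documentation; the server name and path are placeholders.

```sh
# /etc/rear/local.conf -- minimal sketch, adjust to your environment
OUTPUT=ISO                                # produce a bootable recovery ISO
BACKUP=NETFS                              # use ReaR's built-in file backup
BACKUP_URL=nfs://backup.example.com/rear  # NFS share for image and backup
```

With such a configuration in place, `rear mkbackup` creates the rescue image plus backup, and booting the rescue image and running `rear recover` restores the system.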

Back

Relax-and-Recover (ReaR) Mass Deployment

Home

Speaker Gratien D'haese
RoomUD2.119
TrackBackup and Recovery
Time11:40 - 12:05
Event linkView original entry

Installing and configuring ReaR on thousands of Linux systems can become a nightmare: it is hard to keep track of what was done where, and whether it succeeded. Luckily, using configuration management software we can do this quite easily.
In this talk we will guide you through a recipe on how we have done this for a multi-national company.

Back

Overview of Bareos

Home

Speaker Andreas Rogge
RoomUD2.119
TrackBackup and Recovery
Time12:10 - 12:35
Event linkView original entry

This talk will give a quick overview of Bareos and the new features in Bareos 19.2.

Back

oVirt-Plugin for Bareos

Home

Speaker Andreas Rogge
RoomUD2.119
TrackBackup and Recovery
Time12:40 - 12:55
Event linkView original entry

Backing up virtual machines in larger environments is usually not a simple task. With the new oVirt plugin for Bareos you can now easily back up and restore your oVirt virtual machines.
This talk will give a short introduction to how Bareos backs up oVirt virtual machines.

Back

Preserve Kubernetes state using Heptio Velero

Home

Speaker Harshita Sharma
RoomUD2.119
TrackBackup and Recovery
Time13:00 - 13:15
Event linkView original entry

Stateful applications like databases need to preserve their state: client data from one session must be saved in persistent storage for use in the next. Managing state in Kubernetes is difficult because the system’s dynamism is too chaotic for most databases to handle, so backing up data is very important, especially in case of node failures, disk failures, etc.
Velero is an open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
In this talk, I will elaborate on why, how and when to use Velero for your Kubernetes cluster and volumes.
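As a rough sketch of what day-to-day Velero use looks like, the helper below assembles Velero CLI invocations. The flag names (--include-namespaces, --ttl) follow the upstream Velero documentation, but verify them against your version; the code only builds the argument vector, it does not run anything.

```python
def velero_backup_cmd(name, namespaces, ttl_hours=72):
    """Assemble a `velero backup create` invocation covering the given
    namespaces, with an expiry TTL. Building the argv as a list keeps it
    easy to hand to subprocess.run() without shell-quoting issues."""
    return [
        "velero", "backup", "create", name,
        "--include-namespaces", ",".join(namespaces),
        "--ttl", f"{ttl_hours}h0m0s",
    ]

def velero_restore_cmd(backup_name):
    """Assemble the matching `velero restore create` invocation."""
    return ["velero", "restore", "create", "--from-backup", backup_name]

print(velero_backup_cmd("nightly-db", ["db", "queue"]))
```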

Back

Percona XtraBackup Current and Future State

Home

Speaker Tyler Duzan
RoomUD2.119
TrackBackup and Recovery
Time13:20 - 13:35
Event linkView original entry

A brief overview of the current state of the backup tool, architecture, MySQL 8.0 support, new cloud native features, and the roadmap.

During this brief lecture, I will present the roadmap for Percona XtraBackup, talk about the importance of our Cloud direction, why PXB 8.0 is a separate binary, and how PXB fits into our Percona Distribution model for 2020.

Back

Percona Backup for MongoDB: Status and Plans

Home

Speaker Mykola Marzhan
RoomUD2.119
TrackBackup and Recovery
Time13:40 - 13:55
Event linkView original entry

A brief overview of the current state of the backup tool, its architecture, existing features, and the roadmap.

Back

Self-hosted server backups for the paranoid

Home

Speaker Axel Tripier
RoomUD2.119
TrackBackup and Recovery
Time14:00 - 14:30
Event linkView original entry

Quarkslab is a French company specializing in information security R&D, consulting and software development.



Due to strong data security constraints imposing self-hosted solutions, coupled with limited resources in a fast-growing environment, data safety has been a pain point in our infrastructure.



After our backup server failed, we decided to build a new backup system from scratch, adapted to our needs and using technologies we were familiar with, to back up 30+ Linux servers.






In this talk, we will present how our old backup system failed, the key requirements we learned from this failure, and how we designed and implemented a new backup system based on Borg Backup, borgmatic, SSH, Python and FreeNAS to solve those requirements.
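One design decision every such system has to make is retention. The sketch below (purely illustrative, not Quarkslab's code) mimics the keep-daily/keep-weekly thinning that `borg prune` applies to a series of snapshot dates; the real `borg prune` applies each rule independently and supports more granularities.

```python
from datetime import date, timedelta

def prune(snapshot_dates, keep_daily=7, keep_weekly=4):
    """Return the snapshot dates to keep: the newest `keep_daily` days,
    then one snapshot per ISO week for `keep_weekly` older weeks.
    Simplified compared to borg prune, which evaluates each rule
    independently over the full series."""
    keep, daily, weekly = set(), set(), set()
    for d in sorted(snapshot_dates, reverse=True):
        week = d.isocalendar()[:2]  # (ISO year, ISO week number)
        if len(daily) < keep_daily:
            daily.add(d)
            keep.add(d)
            continue
        if week not in weekly and len(weekly) < keep_weekly:
            weekly.add(week)
            keep.add(d)
    return sorted(keep)
```

For 30 consecutive daily snapshots this keeps the 7 most recent days plus one snapshot for each of the 4 preceding ISO weeks.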



We will conclude by listing the shortcomings and possible improvements of our approach, and by evaluating our solution against seven important properties every backup system should have.






Some interesting features of our new backup solution: strong data safety and security, fully self-hosted, built only on open-source tools, simple to set up, and easy to understand.



One specific requirement we solved was for the sysadmin team to be blind to the data they back up, managing only the process itself.



This lets people working on confidential projects on dedicated, access-restricted servers still use a centralized and resilient backup system without compromising data and server security.






We will open-source our Ansible roles and Python scripts on GitHub before FOSDEM.

Back

FASTEN: Scaling static analyses to ecosystems

Home

Speaker Georgios Gousios
RoomUD2.119
TrackDependency Management
Time15:00 - 15:30
Event linkView original entry

As recent events such as the left-pad incident and the Equifax data breach have demonstrated, dependencies on networks of external libraries can expose projects to significant operational and
compliance risks, as well as security implications that are difficult to assess. FASTEN introduces fine-grained, method-level tracking of dependencies on top of existing dependency management networks. In our talk, we will present how FASTEN works on top of the Rust/Cargo and Java/Maven ecosystems.
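To see why method-level tracking matters, consider a toy call graph (the names and data model below are illustrative, not FASTEN's): package-level analysis flags any project that transitively depends on a vulnerable library, while call-graph reachability can show that the vulnerable method is never actually invoked.

```python
from collections import deque

# Toy call graph: caller -> callees, with edges crossing package
# boundaries. All identifiers are made up for illustration.
CALL_GRAPH = {
    "app.main": ["libA.parse", "libA.render"],
    "libA.parse": ["libB.unescape"],
    "libA.render": [],
    "libB.unescape": [],
    "libB.exec_template": [],  # the vulnerable method
}

def reaches(entry, target, graph):
    """Breadth-first search: is `target` reachable from `entry`?"""
    seen, queue = set(), deque([entry])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        queue.extend(graph.get(node, []))
    return False

# The app depends on libB (via libA), so a package-level tool flags it,
# but the vulnerable method is unreachable from the app's entry point:
print(reaches("app.main", "libB.exec_template", CALL_GRAPH))  # False
print(reaches("app.main", "libB.unescape", CALL_GRAPH))       # True
```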

Back

There's no sustainability problem in FOSS

Home

Speaker Carol Smith
Duane O'Brien
RoomUD2.119
TrackDependency Management
Time15:30 - 16:00
Event linkView original entry

The community seems to be rife with conversations about our sustainability problems. Do we actually have one? We’ll lead a discussion and debate around how we as a community can think about these issues, while drawing out the nuanced aspects of each as well as their potential solutions.

When something like left-pad or event-stream happens, how much responsibility should be taken on by companies who deployed a dependency that was critical enough to their operations that removing it created immediate crisis, but not well supported or understood enough that there was any kind of mitigation strategy or backup plan?



And yet, when you look at OpenSSL, curl, and other pieces of open source infrastructure that live in our dependency chains, there are many examples of projects that are important enough to be critical, but are under-resourced to the point that maintainers are having to make quality-of-life tradeoffs to stay on top of the project. We are responsible for ensuring that our shared dependencies are sustainably developed. But who is holding us accountable?



If a maintainer is driving themselves to burnout because they are supporting too many of their open source projects, don’t they bear some responsibility for that choice?



But how are we supposed to untangle which of the thousands of dependencies that we use are in most need of support - and what kind of support they prefer?



Is there a sustainability problem in FOSS after all?



This presentation will be co-presented with Duane O'Brien, Head of Open Source at Indeed.com, the world’s #1 jobs site.

Back

Comparing dependency management issues across packaging ecosystems

Home

Speaker Tom Mens
RoomUD2.119
TrackDependency Management
Time16:00 - 16:30
Event linkView original entry

In the last couple of years, the Software Engineering Lab of the University of Mons has extensively studied different aspects of dependency management within and across different package management ecosystems, including Cargo, npm, Packagist, Rubygems, CPAN, CRAN and NuGet. These ecosystems contain a large number of package releases with many interdependencies. They face challenges related to their scale, complexity, and rate of evolution. Typical problems are backward incompatible package updates, and the increasing proportion of fragile packages due to an excessive number of transitive dependencies.

This talk reports on our findings based on multiple empirical studies that we have conducted to understand different aspects of dependency management and their practical implications. This includes:
* the outdatedness of package dependencies, the transitive impact of such "technical lag", and its relation to the presence of bugs and security vulnerabilities.
* the impact of using either more permissive or more restrictive version constraints on dependencies.
* the virtues and limitations of being compliant with semantic versioning, a common policy to inform dependents whether new releases of software packages introduce possibly backward incompatible changes.
* the impact of specific characteristics, policies and tools used by the packaging ecosystem and its supporting community on all of the above.
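The semantic versioning and version-constraint points above can be made concrete with a small sketch (not from the talk): classifying a release as a major/minor/patch bump, and checking an npm-style caret range, ignoring the special handling of 0.x versions and pre-release tags.

```python
def parse(version):
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def bump_type(old, new):
    """Classify a release as 'major', 'minor' or 'patch' per semver."""
    o, n = parse(old), parse(new)
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    return "patch"

def caret_allows(constraint, candidate):
    """npm-style caret range: ^1.2.3 allows >=1.2.3 and <2.0.0
    (simplified: the 0.x special case is not handled)."""
    base = parse(constraint.lstrip("^"))
    cand = parse(candidate)
    return cand >= base and cand[0] == base[0]

print(bump_type("1.4.2", "2.0.0"))      # major
print(caret_allows("^1.2.3", "1.9.0"))  # True
print(caret_allows("^1.2.3", "2.0.0"))  # False
```

A dependent pinned with a caret range implicitly trusts upstream to follow semver: a backward-incompatible change shipped as a minor release slips straight through the constraint.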



The contents of the talk will be adapted to the target audience of open source software practitioners, but will be primarily based on the following peer-reviewed scientific articles:
* What do package dependencies tell us about semantic versioning? Alexandre Decan, Tom Mens. IEEE Transactions on Software Engineering, 2019. https://doi.org/10.1109/TSE.2019.2918315
* An empirical comparison of dependency network evolution in seven software packaging ecosystems. Alexandre Decan, Tom Mens, Philippe Grosjean. Empirical Software Engineering 24(1):381-416, 2019. https://doi.org/10.1007/s10664-017-9589-y
* A formal framework for measuring technical lag in component repositories and its application to npm. Ahmed Zerouali, Tom Mens, Jesus Gonzalez‐Barahona, Alexandre Decan, Eleni Constantinou, Gregorio Robles. Journal of Software: Evolution and Process 31(8), 2019. https://doi.org/10.1002/smr.2157
* On the Impact of Security Vulnerabilities in the npm Package Dependency Network. Alexandre Decan, Tom Mens, Eleni Constantinou. International Conference on Mining Software Repositories, 2018. https://doi.org/10.1145/3196398.3196401
* On the Evolution of Technical Lag in the npm Package Dependency Network. Alexandre Decan, Tom Mens, Eleni Constantinou. International Conference on Software Maintenance and Evolution, 2018. https://doi.org/10.1109/ICSME.2018.00050

Back

Building Confidence & Overcoming Insecurity

Home

Speaker Jeff McAffer
RoomUD2.119
TrackDependency Management
Time16:30 - 17:00
Event linkView original entry

The days of having only a few open source dependencies are over. Projects often have thousands of open source dependencies in their supply chain and companies may have millions. Even worse, risk is viral -- projects inherit and pass on the risks of all their dependencies. At the same time, software is shipping more frequently.



This creates numerous challenges for commercial and open source projects of any size -- how to discover the myriad of components being used across a range of ecosystems and scenarios, where to get high quality data to drive smart decisions, how to capture and evaluate comprehensive policies.



Enabling high-confidence, rapid delivery requires integrating supply chain management automation deep into the engineering system. Core to this is accurate discovery and identification of dependencies, and trustworthy, high-quality compliance and security data about the discovered components.



In this talk we detail the challenges in this space, look at various approaches such as ClearlyDefined, a crowd-sourced, open source project aimed at discovering and curating compliance data about open source components, and relate experiences running high performance, massive scale compliance systems for a wide range of open source and commercial projects.

Back

Precise, cross-project code navigation at GitHub scale

Home

Speaker Douglas Creager
RoomUD2.119
TrackDependency Management
Time17:00 - 17:30
Event linkView original entry

GitHub has recently added Code Navigation features (jump to definition and find all references) that let you navigate code directly on github.com. For the languages that we support, we extract and store symbol information for every named branch and tag, of every repository, public or private, with no configuration necessary. The compute and storage requirements to do this for all of the code on GitHub are quite large. In this talk, we'll discuss some of the trade-offs we've made to make this tractable at GitHub's scale, to be able to operate and monitor this service effectively, and to let us add support for new languages quickly and easily. We'll also talk about our ongoing work to extend Code Navigation to handle links that cross package and repository boundaries.

Back

Spack's new Concretizer

Home

Speaker Todd Gamblin
RoomUD2.119
TrackDependency Management
Time17:30 - 18:00
Event linkView original entry

Dependency resolution is deceptively complex; simply selecting a set of compatible versions for an arbitrary network of dependencies is NP-hard. Much effort has been spent on this problem for modern single-language ecosystems, but many of these ecosystems rely on natively compiled libraries, and dependency managers often fail at managing the additional complexities that native libraries entail. Further, dependency resolution has traditionally been modeled as a SAT problem, where the package manager should find any workable solution to satisfy package constraints. However, any solution may not be good enough. Users want the most tested, most optimized, or most secure configuration, and this is a SAT problem coupled with complex optimization.



Spack is a package/dependency manager rapidly gaining popularity in High Performance Computing (HPC) that aims to address many of the complexities of native, multi-language, cross-platform dependency management. Spack has recently been reworked to use Answer Set Programming (ASP), a declarative logic programming paradigm that also provides sophisticated facilities for optimization. This talk will cover how we’ve been able to model the compiler toolchain, ISA, build options, ABI, and other constraints on native libraries. We’ll also talk about how ASP has been a useful tool for finding optimized dependency configurations. This work can be used to improve dependency resolvers in general — so that they can prefer more secure or tested configurations instead of simply selecting the most recent workable versions.

Expected prior knowledge / intended audience:
Audience should have basic knowledge of build systems and compiled languages, but we'll explain this up front with some brief background. The talk is aimed broadly -- for users, developers, packagers, researchers, package manager implementors, and HPC administrators.
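The "any solution vs. best solution" distinction can be illustrated with a deliberately tiny brute-force resolver (nothing like Spack's actual ASP-based solver, which handles vastly larger and richer constraint spaces): instead of stopping at the first feasible assignment, it keeps the feasible assignment that maximizes the chosen versions.

```python
from itertools import product

# Illustrative inputs, not Spack's real data model.
VERSIONS = {"a": [1, 2, 3], "b": [1, 2]}
# Constraints as predicates over a candidate assignment.
CONSTRAINTS = [
    lambda s: not (s["a"] >= 3 and s["b"] < 2),  # a@3 requires b@2 or newer
    lambda s: s["b"] <= s["a"],                  # b must not outpace a
]

def solve(versions, constraints):
    """Enumerate all assignments, keep the feasible ones, and return
    the assignment maximizing the version tuple -- i.e. an *optimal*
    solution rather than merely *a* solution."""
    best = None
    for combo in product(*versions.values()):
        assignment = dict(zip(versions, combo))
        if all(check(assignment) for check in constraints):
            key = tuple(assignment[p] for p in sorted(assignment))
            if best is None or key > best[0]:
                best = (key, assignment)
    return best[1] if best else None

print(solve(VERSIONS, CONSTRAINTS))  # {'a': 3, 'b': 2}
```

Real optimization criteria ("most tested", "most secure") simply replace the maximization key; the exponential enumeration is what ASP solvers avoid.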

Back

Package managers: resolve differences

Home

Speaker William Bartholomew
RoomUD2.119
TrackDependency Management
Time18:00 - 18:45
Event linkView original entry

Package managers have become the default way for managing dependencies for most projects but they’re not without their challenges and risks. In this panel we bring together experts representing several popular package managers for a lively discussion on package management best practices, the state of package management communities, and a look forward at what we can expect to see in the future.

Join our facilitators as they put representatives of popular package managers on the spot with difficult questions on package management infrastructure, security, and compliance.



We’ll tackle topics such as:
* Versioning and naming
* Knowing the full graph of packages you’re consuming
* Best practices for securing your use of package managers
* Finding and resolving vulnerabilities in packages you’re using
* Malicious packages and typo-squatting
* Meeting your open source license obligations
* Dealing with dependencies that aren’t packages
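On the typo-squatting point, the core detection idea is simple enough to sketch (the package list and threshold below are illustrative; production registries use more elaborate heuristics): flag a new package name that sits within a small edit distance of a popular one.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

POPULAR = {"requests", "lodash", "express", "numpy"}

def typosquat_suspects(name, popular=POPULAR, threshold=2):
    """Popular names within `threshold` edits of `name` (excluding an
    exact match): candidates the new name may be squatting on."""
    return sorted(p for p in popular
                  if 0 < edit_distance(name, p) <= threshold)

print(typosquat_suspects("reqeusts"))  # ['requests']
```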



With package managers becoming the default way for managing dependencies, they are now a critical part of the software supply chain and while at first each package manager appears quite different, they share common requirements, are used in similar workflows, and are all targets for malicious actors.



In this panel we will focus on those common problems so that regardless of which package manager you use, you’ll come away with a breadth of knowledge on how to securely use package managers in your software supply chain.

Back

Introducing Tanka

Home

Speaker Malcolm Holmes
Tom Braack
RoomUD2.120 (Chavanne)
TrackInfra Management
Time10:30 - 10:55
Event linkView original entry

Introducing Tanka, a scalable Jsonnet-based tool for deploying and managing Kubernetes infrastructure.

There are various tools available for managing Kubernetes resources, such as Helm and Kustomize. Ksonnet offered a powerful approach with tremendous promise, but was discontinued by its developers. In this presentation we will introduce Tanka, a drop-in replacement for Ksonnet developed at Grafana Labs and available on GitHub. For those not familiar with Ksonnet, we will introduce the Jsonnet language and demonstrate its power as a way of interacting with Kubernetes. We will then demonstrate some of the enhancements we have already made to Tanka, explain our roadmap for the tooling, and argue why we believe it is already the best-of-breed configuration management solution for Kubernetes.
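For readers new to Jsonnet, a plain-Jsonnet sketch (no Tanka or ksonnet libraries assumed) shows the basic idea: Kubernetes objects become the output of functions, so common structure is written once and instantiated per service.

```jsonnet
// A function templating a Deployment; evaluate with the jsonnet CLI
// (or, within a Tanka project, via its environment tooling).
local deployment(name, image, replicas=1) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name },
  spec: {
    replicas: replicas,
    selector: { matchLabels: { app: name } },
    template: {
      metadata: { labels: { app: name } },
      spec: { containers: [{ name: name, image: image }] },
    },
  },
};

{
  grafana: deployment('grafana', 'grafana/grafana:6.5.0', replicas=2),
}
```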

Back

Using OpenAPI to Maximise Your Pulp 3 Experience

Home

Speaker Dennis Kliban
RoomUD2.120 (Chavanne)
TrackInfra Management
Time11:00 - 11:25
Event linkView original entry

Pulp (https://pulpproject.org) enables users to organize and distribute software. Now that Pulp 3.0 is generally available, it’s time to integrate it into your software delivery workflows. While the REST API is the primary integration point, it is the OpenAPI schema definition of that API that enables users to build client libraries in various languages. These clients simplify the integration with Pulp 3.



This talk will provide a brief introduction to OpenAPI, followed by a demonstration of how to use Pulp's OpenAPI schema to generate a Python client for Pulp's REST API. The Python client will then be used to perform various workflows in Pulp 3.
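The heart of client generation is mechanical: walk the schema's paths and turn each (method, path, operationId) triple into a typed call. The sketch below does that walk over a trimmed, hand-written slice shaped like Pulp's schema (the paths and operationIds are illustrative, not Pulp's exact ones).

```python
# A trimmed, hypothetical slice of an OpenAPI schema document.
SCHEMA = {
    "paths": {
        "/pulp/api/v3/repositories/": {
            "get": {"operationId": "repositories_list"},
            "post": {"operationId": "repositories_create"},
        },
        "/pulp/api/v3/tasks/{task_href}/": {
            "get": {"operationId": "tasks_read"},
        },
    }
}

def operations(schema):
    """Yield (HTTP method, path, operationId) triples -- the raw
    material a generator turns into client methods."""
    for path, verbs in schema["paths"].items():
        for verb, op in verbs.items():
            yield verb.upper(), path, op["operationId"]

for entry in operations(SCHEMA):
    print(entry)
```

Tools such as openapi-generator automate exactly this, also mapping request/response schemas to typed models.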

Back

Doomed are the dinosaurs!

Home

Speaker David Heijkamp
RoomUD2.120 (Chavanne)
TrackInfra Management
Time11:30 - 11:55
Event linkView original entry

It may be hard to imagine, but some sysadmins do not operate in ideal, tightly controlled circumstances. Apparently, not every developer, application or organization is ready for Kubernetes…



In this presentation we will share a real world use case: deploying and configuring a brand new natural history museum. We’ll show how we built the museum with open source software and config management tools, dealing with a broad set of technologies, a tight schedule, a sector dominated by traditional organizations fixated on proprietary solutions and a whole bunch of actual fossils. We’ll show how far we’ve come, and what choices we made along the way.



Check out this talk if you want to see how Ansible, MAAS, PlatformIO, Nextcloud and other tools were used to not just automatically deploy and configure Linux based media players, games and digital signage screens, but also to manage Cumulus Linux-based switches, OPNsense firewalls, show controllers, Arduino microcontrollers, KNX gateways, projectors and even the odd OSX machine.


Back

Compliance management with OpenSCAP and Ansible

Home

Speaker Amit Upadhye
RoomUD2.120 (Chavanne)
TrackInfra Management
Time12:00 - 12:25
Event linkView original entry

Managing compliance of a large IT environment is a complex and challenging task. Today's hybrid cloud environments have differing life cycles; with many short-lived systems such as cloud instances, it is difficult to manage compliance on the go. This talk focuses on how OpenSCAP policies, tools and Ansible can be used to gain greater control over compliance in large environments.

Compliance management with OpenSCAP



Enterprise computing environments may consist of thousands of computer systems running multiple applications and services. These systems are accessed by a large and diverse set of users and applications. To gain greater control over the security of these vast environments, a standard and unified way to scan systems for compliance with security policies is needed.



This talk focuses on using SCAP tools to retain control over large environments, scan for compliance with a desired policy, and use Ansible to remediate detected problems:



Install and use the SCAP Security Guide.
Evaluate a server's compliance with the requirements specified by a policy from the SCAP Security Guide using OpenSCAP tools.
Create a tailoring file to adjust the policy's security checks so that they’re relevant and correct for a specific system and its use case.
Run Ansible Playbooks, included in the SCAP Security Guide, to remediate compliance checks that failed an OpenSCAP scan.
Demonstration
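The evaluate-then-remediate loop hinges on reading the scan results. The sketch below parses a minimal, hand-written fragment in the shape of XCCDF results (real files written by `oscap xccdf eval --results` are far larger, and the rule ids here are made up) to collect the rules that failed, i.e. the candidates for the SCAP Security Guide's remediation playbooks.

```python
import xml.etree.ElementTree as ET

# A minimal fragment in the shape of XCCDF scan results.
RESULTS = """<TestResult xmlns="http://checklists.nist.gov/xccdf/1.2">
  <rule-result idref="xccdf_rule_sshd_disable_root_login">
    <result>fail</result>
  </rule-result>
  <rule-result idref="xccdf_rule_ensure_firewalld_running">
    <result>pass</result>
  </rule-result>
</TestResult>"""

NS = {"x": "http://checklists.nist.gov/xccdf/1.2"}

def failed_rules(xml_text):
    """Return the rule ids whose result is 'fail'."""
    root = ET.fromstring(xml_text)
    return [rr.get("idref")
            for rr in root.findall("x:rule-result", NS)
            if rr.findtext("x:result", namespaces=NS) == "fail"]

print(failed_rules(RESULTS))  # ['xccdf_rule_sshd_disable_root_login']
```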
Back

AMENDMENT Introduction to Ansible collections

Home

Speaker Abhijeet Kasurde
Ganesh Nalawade
RoomUD2.120 (Chavanne)
TrackInfra Management
Time12:30 - 12:55
Event linkView original entry

The talk will give an introduction to Ansible collections, covering the collection structure and how to deliver Ansible content with collections.

The agenda for this talk will be:
1) What are Ansible collections?
2) Why are Ansible collections required?
3) Deep dive into collections (init, create, build, publish)
4) Demo
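The init/build/publish steps in the agenda revolve around the collection's galaxy.yml metadata file. A minimal sketch (values are placeholders; the required keys follow the Ansible documentation) looks like:

```yaml
# galaxy.yml -- minimal collection metadata, illustrative values
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
  - Your Name <you@example.com>
```

A skeleton with this file is created by `ansible-galaxy collection init my_namespace.my_collection`; `ansible-galaxy collection build` then produces a tarball that `ansible-galaxy collection publish` uploads to Galaxy.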

Back

AMENDMENT Designing for Failure

Home

Speaker Walter Heck
RoomUD2.120 (Chavanne)
TrackInfra Management
Time13:00 - 13:25
Event linkView original entry

Designing for Failure



While we all work very hard to build highly available, fault-tolerant and
resilient applications and infrastructures, the end goal is currently often
something along the lines of loosely coupled microservices with zero downtime in
mind.
Upgrades are tied to CI/CD pipelines and we should be sipping pina coladas
on the beach. Time to unleash the Chaos Monkey, because that is what Netflix
does, so we should try it as well.



Now, the backend DB failed. The middleware application is returning errors, and
your frontend is showing a fancy 5xx.



While each layer is able to scale independently or fail over to another region,
even a simple timeout at the DB can cause a cascading failure.



The application is designed to work, not designed to recover from failure.



Designing for failure applies to both software development and infrastructure
architecture, and I'd like to talk about a couple of points to highlight this
paradigm.



Please note that this talk replaces one entitled "Introduction to Metal³" that was due to have been given by Stephen Benjamin, who has sent his apologies but is now unable to attend.

Back

Ephemeral Environments For Developers In Kubernetes

Home

Speaker Jeff Knurek
RoomUD2.120 (Chavanne)
TrackInfra Management
Time13:30 - 13:55
Event linkView original entry

A key aspect of a microservice architecture is to make sure individual services work in isolation. But as a developer, it's also important to make sure the service works in the full system. Providing developers with a way to run pre-production code in a multi-service environment is challenging.



Making use of existing Helm charts and defaulting to production configuration does part of the work. Also important is being able to build on tools like Telepresence or Ksync for debugging in k8s. But while these great tools are available, what has been lacking is the easy-to-use, single command that gives a developer a place to work with their own full, self-contained system. There are now a few open source solutions that do just that (like Garden, Acyl, & Armador). In this talk, Jeff will break down how these tools work and what makes them different.

Back

Code Workload Management into the Control Plane

Home

Speaker Michael Hrivnak
RoomUD2.120 (Chavanne)
TrackInfra Management
Time14:00 - 14:25
Event linkView original entry

Join us to learn why Operators are the leading and default approach for managing workloads on Kubernetes. We will pull back the curtain to show you exactly what an Operator is, how to make one, and what it means to be “Kubernetes Native”.

SREs automate every aspect of workload management. Applying this mentality to the Kubernetes space, a pattern has emerged for coding such automation directly into the control plane. By adding native extensions to the Kubernetes API that are tailored to individual workloads, the Operator pattern enables infrastructure and workloads to be managed side-by-side with one set of tooling and access control.
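The "native extension" half of the pattern is a custom resource: the workload's desired state expressed as a Kubernetes object that the Operator continuously reconciles. A sketch in the style of the classic operator tutorials (the group, kind and field are illustrative, not a real API):

```yaml
# A custom resource an Operator would reconcile: declare the desired
# state, and the Operator's controller makes the cluster match it.
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  size: 3        # desired number of memcached replicas
```

Because it is an ordinary API object, the same kubectl tooling and RBAC that govern built-in resources govern the workload too.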



Join us to learn why Operators are the leading and default approach for managing workloads on Kubernetes. We will pull back the curtain to show you exactly what an Operator is, how to make one, and what it means to be “Kubernetes Native”. To close we will discuss use cases from the field; how real organizations have created and/or re-used Operators to automate their operations.

Back

Immutable deployments: the new classic way for service deployment

Home

Speaker Matteo Valentini
RoomUD2.120 (Chavanne)
TrackInfra Management
Time14:30 - 14:55
Event linkView original entry

The immutable infrastructure paradigm is often associated with relatively new concepts like containers and orchestrators such as Kubernetes. This talk will illustrate how to obtain the same result using mostly classic concepts and tools, and simple cloud platforms.

In particular, we will show the usage of:






The illustrated approach is based on lessons learned in almost two years of using this methodology on a production service.

Back

Foreman meets Ansible

Home

Speaker Adam Růžička
RoomUD2.120 (Chavanne)
TrackInfra Management
Time15:00 - 15:25
Event linkView original entry

This talk focuses on how Ansible and Foreman integrate with each other and what
added value users can get when using Ansible from Foreman. It describes two
primary approaches of using Ansible from Foreman. The first is a traditional
configuration management approach, where hosts are kept in a predefined state,
while the other works in a more remote execution fashion. The talk goes over
several scenarios and demonstrates how Foreman can leverage Ansible to
effortlessly solve the issues present in the given scenarios.


Back

Hacking Terraform for fun and profit

Home

Speaker Anton Babenko
RoomUD2.120 (Chavanne)
TrackInfra Management
Time15:30 - 15:55
Event linkView original entry

Using Terraform is often simple; extending it to do what YOU want can be challenging (or “impossible”). Want to manage unsupported resources? Maintain lots of resources? Integrate the non-integrable? This talk is an advanced guide to HOW to extend, integrate and execute Terraform to get things DONE.

If you’ve been using Terraform just by following the official documentation, you are not getting everything out of it.



As soon as one cloud provider announces a new service or a feature, you dream that Terraform has zero-day support for it. Well, it is not always like this, and I will show what we can do about it.



Are you using Terraform and keep asking yourself why you should copy-paste so much? What if you need to manage more than a dozen resources with Terraform (e.g., hundreds of GitHub repositories with permissions, or hundreds of IAM users and their permissions)? How can you use Terraform to manage absolutely ANY type of resource? What is beyond Terraform modules? What is a really dynamic module, and how does Terraform 0.12 help us?



Let's look at advanced and very unusual solutions for how Terraform can be extended, integrated, executed, or merely hacked to get the job done with the help of external open-source services and integrations.
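One Terraform 0.12 answer to the "hundreds of repositories" copy-paste problem is for_each, which fans a single resource block out over a collection. A sketch (the github_repository argument names follow the GitHub provider docs; verify against your provider version):

```hcl
# One resource block manages an arbitrary set of repositories,
# replacing hundreds of copy-pasted blocks (for_each is Terraform 0.12+).
variable "repositories" {
  type = set(string)
}

resource "github_repository" "managed" {
  for_each   = var.repositories
  name       = each.value
  visibility = "private"
}
```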

Back

Building a self healing system with SaltStack

Home

Speaker Gareth J Greenaway
RoomUD2.120 (Chavanne)
TrackInfra Management
Time16:00 - 16:25
Event linkView original entry

As the number of servers that we are responsible for increases, the ability to manage issues on those systems becomes more and more difficult. Situations arise like log files filling up disks, failed login attempts that could be brute force attacks, and unwanted processes and services running. Using the Beacon and Reactor systems of SaltStack, we can monitor systems for these scenarios and have SaltStack restore them to the desired state. In this talk, we’ll look at some real-life examples of these scenarios and how SaltStack can help to automatically heal the systems.

As the number of servers that we are responsible for increases, the ability to manage issues on those systems becomes more and more difficult.
Situations arise like log files filling up disks, failed login attempts that could be brute force attacks, and unwanted processes and services running.



Using the Beacon system of SaltStack, we can monitor a system for these & other scenarios.
Pairing this with the Reactor system, we can have SaltStack restore those systems to the desired state.
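As a sketch of the wiring (config shapes modelled on the Salt documentation's diskusage beacon; paths and the SLS file are placeholders): a beacon on the minion emits an event when a mount crosses a threshold, and a reactor mapping on the master fires a state in response.

```yaml
# Minion config: emit an event when / exceeds 90% usage,
# checking every 120 seconds.
beacons:
  diskusage:
    - /: 90%
    - interval: 120

# Master config: react to diskusage beacon events by running
# a cleanup state (an SLS file you supply).
reactor:
  - 'salt/beacon/*/diskusage/*':
    - /srv/reactor/cleanup.sls
```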



In this talk, we’ll look at some real-life examples of these scenarios and how SaltStack can help to automatically heal the systems.



The talk will include:




Back

Infrastructure testing, it's a real thing!

Home

Speaker Paul Stack
RoomUD2.120 (Chavanne)
TrackInfra Management
Time16:30 - 17:25
Event linkView original entry

Software developers have been testing their code for years. Why is it still not a common thing for infrastructure and operations people? We are in an era where it is expected everyone moves fast. Moving too fast can negatively affect our customers so it's vital that we ensure the changes we make to our infrastructure are tested like other code changes would be.



In this talk, Paul is going to demonstrate some of the methods for testing infrastructure code. The talk will demonstrate how to establish fast feedback loops that provisions infrastructure, as well as being able to check that the code adheres to company policies, and has not drifted from the plan of record as specified in our infrastructure as code repository.


Mgmt Config: Autonomous Datacentres

Speaker: James Shubin
Room: UD2.120 (Chavanne)
Track: Infra Management
Time: 17:30 - 17:55

Mgmt is a real-time automation tool that is fast and safe. One goal of the tool is to let users model and manage infrastructure that was previously very difficult or impossible to manage.



The tool has two main parts: the engine, and the language. This presentation will have a large number of demos of the language.



To showcase this future, we'll show some exciting real-time demos that include scheduling, distributed state machines, and reversible resources.



As we get closer to a 0.1 release that we'll recommend as "production ready", we'll look at the last remaining features that we're aiming to land by then.



Finally we'll talk about some of the future designs we're planning and discuss our free mentoring program that helps interested hackers get involved and improve their coding, sysadmin, and devops abilities.

Gofish - a Go library for Redfish and Swordfish

Speaker: Sean McGinnis
Room: UD2.120 (Chavanne)
Track: Infra Management
Time: 18:00 - 18:25

Gofish is a Golang library for interacting with Redfish and Swordfish enabled devices. This presentation will give an overview of the current state of the library and how it can be used to manage compute and storage resources using a common, standard API.

Podman - The Powerful Container Multi-Tool

Speaker: Sascha Grunert
Room: UD2.208 (Decroly)
Track: Containers
Time: 10:30 - 10:50

Podman is the container management tool of choice when it comes to boosting
day-to-day development tasks around containers. The journey of Podman started as
a drop-in replacement for docker, but nowadays it's even more than just that.
For example, Podman is capable of managing pods and running containers without
being root, and it supports fine-grained configuration.

In this presentation, we will deep dive into the exciting world of Podman. We
discover how Podman fits into the containers ecosystem, learn about the
architecture behind the project and utilize practical examples for daily
development tasks.



For example, we will learn how rootless containers work technically, how to
safely share resources between multiple containers, and which benefits a
daemon-less container management tool like Podman provides. All these exciting
features will be explained through live demos, which leave room for an open
discussion at the end of the talk.

Lazy distribution of container images

Speaker: Akihiro Suda
Room: UD2.208 (Decroly)
Track: Containers
Time: 10:55 - 11:15

The biggest problem of the OCI Image Spec is that a container cannot be started until all the tarball layers are downloaded, even though more than 90% of the tarball contents are often unneeded for the actual workload.



This session will show state-of-the-art alternative image formats, which allow runtime implementations to start a container without waiting for all its image contents to be locally available.



In particular, this session will focus on CRFS/stargz and its implementation status in containerd (https://github.com/containerd/containerd/issues/3731).
The plan for BuildKit integration will be shown as well.
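
The core idea behind these lazy formats can be sketched in a few lines. This is an illustration only, not the real stargz layout: a seekable layer lets the runtime fetch individual file entries on first access instead of downloading the whole tarball up front:

```python
# Sketch of the "lazy pull" idea behind formats like CRFS/stargz
# (illustrative; the real format is a seekable tar.gz with an index).

class RemoteLayer:
    """Pretend registry blob: per-file contents addressable by path."""
    def __init__(self, files):
        self.files = files
        self.bytes_fetched = 0

    def fetch(self, path):
        data = self.files[path]
        self.bytes_fetched += len(data)
        return data

class LazyLayer:
    """Fetch-on-first-access view over a RemoteLayer, with a local cache."""
    def __init__(self, remote):
        self.remote = remote
        self.cache = {}

    def read(self, path):
        if path not in self.cache:
            self.cache[path] = self.remote.fetch(path)
        return self.cache[path]

remote = RemoteLayer({
    "/bin/app": b"x" * 1000,         # what the workload actually needs
    "/usr/share/docs": b"y" * 9000,  # never touched at runtime
})
layer = LazyLayer(remote)

layer.read("/bin/app")  # the container can start after fetching only this
print(remote.bytes_fetched)  # 1000 of 10000 total bytes
```

This is why the abstract notes that often more than 90% of a tarball is unneeded: with an eagerly-pulled OCI layer, all 10000 bytes above would have to arrive before the container starts.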

BPF as a revolutionary technology for the container landscape

Speaker: Daniel Borkmann
Room: UD2.208 (Decroly)
Track: Containers
Time: 11:20 - 11:50

BPF, as a foundational technology in the Linux kernel, provides a powerful tool for systems developers and users to dynamically reprogram and customize the kernel to meet their needs and solve real-world problems, without having to be a kernel expert. Thanks to BPF, we have reached the point where we can stop carrying legacy accumulated over decades of development grounded in a more traditional networking environment, one typically far more static than your average Kubernetes cluster. In the age of containers, these legacy mechanisms are no longer the best tool for the job, especially in terms of performance, reliability, scalability, and operations. This talk provides a few examples of how BPF allows us to rethink container networking, based on recent work we did in Cilium. Among other things, the audience will learn about running a fully functioning Kubernetes cluster without iptables, Netfilter, and thus without kube-proxy, in a scalable and secure way with the help of BPF and Cilium.

Kata Containers on openSUSE

Speaker: Ralf Haferkamp
Room: UD2.208 (Decroly)
Track: Containers
Time: 11:55 - 12:15

Kata Containers provide a secure container runtime offering an experience close to that of native containers, while providing stronger workload isolation and host infrastructure security by using hardware virtualization technology. This is particularly useful when containers are used to host and run third-party applications. In this presentation, after a short intro to Kata, we will demonstrate how easy it is to install and use on openSUSE. We will show it in action both as part of a podman setup as well as within a full-featured Kubernetes environment.

With containers becoming not only the preferred way of deploying applications, but also the building blocks of microservice architectures, infrastructure security and workload isolation concerns are being raised. The Kata Containers open-source project addresses these concerns by using virtualization technology, in compliance with the "defense in depth" design principle. It is also a very flexible, dynamic and fast-moving project, with many components that need to be integrated with each other.



This presentation will illustrate how easy it can already be to use Kata as a container runtime on top of the openSUSE distribution. After giving a short introduction to Kata Containers and its architecture, we will demo how we have integrated Kata into openSUSE and how it can be used with podman to run containers in a secure and isolated fashion. As Kata is compatible with the OCI (Open Container Initiative) runtime specification, it can seamlessly replace or coexist with other runtimes (e.g. runc) in existing container engines (podman, CRI-O, docker, ...), even inside a Kubernetes cluster. We will therefore be able to show how native containers and strongly isolated Kata containers can run together on the same platform. Finally, we will also demonstrate how to set up Kata Containers as an alternative runtime inside a Kubernetes cluster.

Evolution of kube-proxy

Speaker: Laurent Bernaille
Room: UD2.208 (Decroly)
Track: Containers
Time: 12:20 - 12:50

Kube-proxy enables access to Kubernetes services (virtual IPs backed by pods) by configuring client-side load-balancing on nodes. The first implementation relied on a userspace proxy which was not very performant. The second implementation used iptables and is still the one used in most Kubernetes clusters. Recently, the community introduced an alternative based on IPVS.
This talk will start with a description of the different modes and how they work. It will then focus on the IPVS implementation, the improvements it brings, the issues we encountered and how we fixed them as well as the remaining challenges and how they could be addressed. Finally, the talk will present alternative solutions based on eBPF such as Cilium.
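
At its core, every kube-proxy mode implements the same translation: a connection to a service's virtual IP is steered to one backend pod. The toy model below illustrates that idea only (it is not kube-proxy code); the round-robin choice mimics IPVS's default "rr" scheduler, and the IPs are made up:

```python
# Toy model of the client-side load balancing kube-proxy configures on
# every node: a virtual service IP maps to a rotating set of pod
# endpoints. Illustrative sketch, with invented addresses.

import itertools

class VirtualService:
    def __init__(self, vip, endpoints):
        self.vip = vip
        self._rr = itertools.cycle(endpoints)  # round-robin over pod IPs

    def pick_endpoint(self):
        """Pick the backend for a new connection to the virtual IP."""
        return next(self._rr)

svc = VirtualService("10.96.0.10:80",
                     ["10.244.1.5:8080", "10.244.2.7:8080"])

# Each new connection to the virtual IP lands on the next endpoint.
picks = [svc.pick_endpoint() for _ in range(4)]
print(picks)
# ['10.244.1.5:8080', '10.244.2.7:8080', '10.244.1.5:8080', '10.244.2.7:8080']
```

The implementations differ in where this mapping lives: in a userspace proxy process, in a chain of iptables rules, or in the kernel's IPVS tables, which is what drives the performance differences the talk examines.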

Container Live Migration

Speaker: Adrian Reber
Room: UD2.208 (Decroly)
Track: Containers
Time: 12:55 - 13:15

The difficult task of checkpointing and restoring a process is used in many container runtimes to implement container live migration. This talk will give details on how CRIU is able to checkpoint and restore processes, how it is integrated in different container runtimes, and which optimizations CRIU offers to decrease the downtime during container migration.



In this talk I want to provide details on how CRIU checkpoints and restores a process: starting with ptrace() to pause the process, how parasite code is injected so the process can be checkpointed from within its own address space, how CRIU transforms itself into the restored process during restore, and how SELinux and seccomp state is restored.



I want to end this talk with an overview of how CRIU is integrated in different container runtimes to implement container live migration.

Supervising and emulating syscalls

Speaker: Christian Brauner
Room: UD2.208 (Decroly)
Track: Containers
Time: 13:20 - 13:40

Recently the kernel landed seccomp support for SECCOMP_RET_USER_NOTIF, which enables a process (the supervisee) to retrieve an fd for its seccomp filter. This fd can then be handed to another (usually more privileged) process (the supervisor). The supervisor will then be able to receive seccomp messages about the syscalls having been performed by the supervisee.



We have integrated this feature into userspace and currently make heavy use of it to intercept mknod(), mount(), and other syscalls in user namespaces, i.e. in containers.
For example, if the mknod() syscall matches a device in a pre-determined whitelist, the privileged supervisor will perform the mknod() syscall in lieu of the unprivileged supervisee and report back to the supervisee on the success or failure of its attempt. If the syscall does not match a device in the whitelist, we simply report an error.
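
The supervisor's decision step described above can be sketched as plain whitelist logic. This is an illustration only: real interception goes through the seccomp notification fd and kernel structs, not Python, and the device list here is invented:

```python
# Sketch of a supervisor's decision logic for intercepted mknod()
# calls (illustrative only; not the real kernel/userspace interface).

import errno

# Pre-determined whitelist of (type, major, minor) device numbers the
# supervisor is willing to create on behalf of the container.
WHITELIST = {
    ("char", 1, 3),  # /dev/null
    ("char", 1, 8),  # /dev/random
}

def handle_mknod(dev_type, major, minor):
    """Return 0 on success, or a negative errno like the kernel would."""
    if (dev_type, major, minor) in WHITELIST:
        # A real supervisor would perform mknod() here with its own
        # privileges, in lieu of the unprivileged supervisee.
        return 0
    return -errno.EPERM

print(handle_mknod("char", 1, 3))   # 0: allowed, supervisor creates it
print(handle_mknod("block", 8, 0))  # negative errno: not whitelisted
```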



This talk is going to show how this works, what limitations we have run into, and what future improvements we plan to make in the kernel.

Below Kubernetes: Demystifying container runtimes

Speaker: Thierry Carrez
Room: UD2.208 (Decroly)
Track: Containers
Time: 13:45 - 14:05

Today, the task of running containers involves a lot of technologies and levels of abstraction, and it can be difficult to understand, or just to keep up. How do CRI-O and containerd overlap? Does Kata Containers compete with Firecracker? Is there any relationship between OCI and CRI? How many different meanings can "container runtime" have?



In this talk, we will navigate this treacherous sea of overlapping technologies and acronyms that take care of running container workloads, below Kubernetes all the way down to the Linux kernel. We will present at a high-level how these technologies, interfaces and levels of abstraction combine and overlap, and hopefully clarify which are spec vs. implementation, which are complementary, and which are alternative solutions.
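
One way to keep the acronyms straight is to lay the layers out as a table, each speaking a defined interface to the layer below. This is a simplification of the landscape the talk maps out, not an exhaustive taxonomy:

```python
# A simplified view of the container runtime stack below Kubernetes:
# (component, interface it speaks downward, note).
stack = [
    ("Kubernetes (kubelet)", "CRI",      "Container Runtime Interface: a spec"),
    ("containerd / CRI-O",   "OCI",      "high-level runtimes implementing CRI"),
    ("runc / Kata / gVisor", "syscalls", "low-level OCI runtimes"),
    ("Linux kernel",         None,       "namespaces, cgroups, seccomp, ..."),
]

for component, speaks_to_below, note in stack:
    print(f"{component:24} -> {speaks_to_below or '-'} ({note})")
```

Note the spec-vs-implementation split the talk highlights: CRI and OCI are specifications, while containerd, CRI-O, runc, Kata, and gVisor are implementations at different levels.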




Linux memory management at scale

Speaker: Chris Down
Room: UD2.208 (Decroly)
Track: Containers
Time: 14:10 - 14:40

Memory management is an extraordinarily complex and widely misunderstood topic. It is also one of the most fundamental concepts to understand in order to produce coherent, stable, and efficient systems and containers, especially at scale. In this talk, we will go over how to compose reliable memory-heavy, multi-container systems that can withstand production incidents, and go over examples of how Facebook is achieving this in production at the cutting edge. We'll also go over the open-source technologies we're building to make this work at scale at a density that has never been achieved before.



We will go over widely-misunderstood Linux memory management concepts which are important to site reliability and container management with an engineer who works on the Linux kernel's memory subsystem, busting commonly held misconceptions about things like swap and memory constraints, and giving advice on key and bleeding-edge kernel concepts like PSI, cgroup v2, memory protection, and other important container-related topics along the way.
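
PSI (Pressure Stall Information), one of the kernel concepts mentioned above, is exposed by the kernel in files like /proc/pressure/memory. The sketch below parses a sample line of that format (the sample values are invented; the field layout follows the kernel's documented PSI output):

```python
# Parse one line of PSI output, e.g. from /proc/pressure/memory:
#   some avg10=0.00 avg60=1.25 avg300=0.80 total=123456
# "some" means at least one task was stalled on memory; avgN are
# percentages over N-second windows, total is stalled microseconds.

def parse_psi(line):
    kind, *fields = line.split()
    out = {"kind": kind}
    for field in fields:
        key, value = field.split("=")
        out[key] = float(value)
    return out

sample = "some avg10=0.00 avg60=1.25 avg300=0.80 total=123456"
stats = parse_psi(sample)
print(stats["avg60"])  # 1.25
```

In production, trending these averages per cgroup is what lets tooling react to memory pressure before the OOM killer does.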

Running full Linux systems in containers, at scale

Speaker: Stéphane Graber
Room: UD2.208 (Decroly)
Track: Containers
Time: 14:45 - 15:05

LXD is a system container manager; its goal is to safely run full Linux systems at very high density and low overhead.
Containers may be created from pre-made images, covering most Linux distributions, or by importing an existing virtual machine or physical system.



Advanced resource controls and device passthrough are available to expose as much or as little of the system's resources as desired to those containers.
Snapshot and backup tooling is available to safeguard those containers and data.
Storage pools and networks can be used to offer a variety of storage and network options to the containers.



Management happens through a REST API with a default CLI client.
LXD has built-in support for clustering which makes it trivial to scale a deployment to dozens of servers, all acting as one virtual LXD server.



In this presentation, we'll go over LXD's main features through a demonstration including usage of LXD's clustering abilities, running a variety of Linux distributions and converting existing systems to containers.

How (Not) To Containerise Securely

Speaker: Andrew Martin
Room: UD2.208 (Decroly)
Track: Containers
Time: 15:10 - 15:40

This talk details low level exploitable issues with container and Kubernetes deployments. We focus on lessons learned, and show attendees how to ensure that they do not fall victim to avoidable attacks.

Andy has made mistakes. He's seen even more. And in this talk he details the best and the worst of the container and Kubernetes security problems he's experienced, exploited, and remediated.



See how to bypass security controls, exploit insecure defaults, evade detection, and root clusters externally (and more!) in this interactive and highly technical appraisal of the container and cluster security landscape.

Using crio-lxc with Kubernetes

Speakers: Tycho Andersen, Mike McCracken
Room: UD2.208 (Decroly)
Track: Containers
Time: 15:45 - 16:15

Running application containers within Kubernetes presents a challenge to the operator for quickly handling security updates - every container must be patched, rebuilt and re-tested, and then updated separately. The slowest dev turnaround of all your containers is the fastest you can fully update your cluster.



However, for many fixes, the application likely will not care which compatible version of a system library it is using.
Using AtomFS, operators can update individual libraries inside app containers without a rebuild. Containers using an AtomFS storage backend can simply be restarted after a fix is applied, and they will see it reflected in their filesystems.



The AtomFS storage backend requires minor changes to your container runtime, and we demonstrate it with the LXC runtime and crio-lxc, an adapter to enable using LXC-based containers in Kubernetes using CRI-O.



In this talk Tycho will cover how AtomFS works and what changes are needed to make application container builds work with AtomFS, and he will fix an exploitable issue live, without a rebuild.

Containers and Steam

Speaker: Simon McVittie
Room: UD2.208 (Decroly)
Track: Containers
Time: 16:20 - 16:40

The availability of namespaces inside user sessions is increasing, and Valve's Steam game distribution platform is taking advantage of this for better gaming on Linux.



A recent beta of Steam for Linux adds pressure-vessel, an experimental mechanism developed by Collabora to put games in containers. This gives the game partial isolation from various aspects of the host system, and in particular allows it to use a runtime library stack that is not entangled with the host's, with different games using different runtimes.



Meanwhile, the unofficial Steam Flatpak app distributed on Flathub puts the entire Steam client and all of its games in a container. This gives the Steam client more thorough isolation from the host system, but all the games have to share that single container.



In this talk, pressure-vessel developer and Flatpak contributor Simon McVittie will compare the two approaches and the challenges they encounter, and look at where Steam containers might go in the future.

Distributed HPC Applications with Unprivileged Containers

Speakers: Felix Abecassis, Jonathan Calmels
Room: UD2.208 (Decroly)
Track: Containers
Time: 16:45 - 17:15

We will present the challenges in doing distributed deep learning training at scale on shared heterogeneous infrastructure. At NVIDIA, we use containers extensively in our GPU clusters for both HPC and deep learning applications. We love containers for how they simplify software packaging and enable reproducibility without sacrificing performance. Docker is a popular tool for running application containers on Linux, and while it is possible to enable container workflows for users by granting them access to the docker daemon, the security impact needs to be carefully considered, especially in a shared environment. Relying on docker for the container runtime also requires a large amount of complicated boilerplate code to start multi-node jobs using the Message Passing Interface (MPI) for communication. In this presentation, we will introduce a new lightweight container runtime inspired by LXC and an associated plugin for the Slurm Workload Manager. Together, these two open-source projects enable a more secure architecture for our clusters, while also enabling a smoother user experience with containers on multi-node clusters.

There are many container runtimes available, but none met all of our needs for running distributed applications with no performance overhead and no privileged helper tools. For our use case, we built a simple container runtime called enroot - it's a tool to turn traditional container images into lightweight unprivileged sandboxes; a modern chroot. One key feature is that enroot remaps all UIDs inside the container to a single UID on the host. So, unlike runtimes which rely on /etc/subuid and /etc/subgid, with enroot there is no risk of overlapping UID ranges on a node, and no need to synchronize ranges across the cluster. It is also trivial to remap to UID 0 inside the container which enables users to safely run apt-get install to add their own packages. Enroot is also configured to automatically mount drivers and devices for accelerators from NVIDIA and Mellanox using enroot's flexible plugin system. Finally, enroot is highly optimized to download and unpack large docker images, which is particularly useful for images containing large applications.
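
The UID-remapping contrast described above can be sketched as two mapping functions. This is an illustration of the idea only, not enroot's actual implementation:

```python
# Sketch of the two UID-mapping strategies described above
# (illustrative; real mappings are set up via the kernel's
# /proc/<pid>/uid_map, not application code).

def enroot_host_uid(container_uid, job_owner_uid):
    """enroot-style: every container UID collapses onto the job
    owner's single host UID, so ranges can never overlap per node."""
    return job_owner_uid

def subuid_host_uid(container_uid, range_start, range_len=65536):
    """subuid-style: each user gets a delegated range of host UIDs,
    which must not overlap between users and must be kept in sync
    across the cluster."""
    if not 0 <= container_uid < range_len:
        raise ValueError("container UID outside delegated range")
    return range_start + container_uid

print(enroot_host_uid(0, 1000))     # root in the container -> host UID 1000
print(subuid_host_uid(0, 100000))   # root -> host UID 100000 under subuid
```

The single-UID collapse is also what makes remapping to UID 0 inside the container safe: "root" in the sandbox is still just the job owner on the host.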



We also created a new plugin for the Slurm Workload Manager which adds command-line flags for job submission. When the “--container-image” flag is set, our plugin imports a container image, unpacks it on the local filesystem, creates namespaces for the container, and then attaches the current job to these new namespaces. Therefore, tasks transparently land inside of the container with minimal friction. Users can even make use of the PMI2 or PMIx APIs to coordinate workloads inside the containers without needing to invoke mpirun, further streamlining the user experience. Currently, the plugin works with two different tools - enroot and LXC. It could be extended to other container runtimes in the future.

Kubernetes on ARM64

Speaker: Jean-Frederic Clere
Room: UD2.208 (Decroly)
Track: Containers
Time: 17:20 - 17:40

Building a Kubernetes cloud using the Raspberry Pi 4.
The RPI4/4G offers enough memory and CPU to build an educational Kubernetes cluster.
The presentation will show how to put the pieces together to get an Apache Tomcat
operator to deploy a small web application in the resulting RPI4 Kubernetes cloud.

We will show:
- how to build a kernel for the RPI4 and use it to make a bootable SD card
- how to configure it to use the WiFi board
- how to prepare Docker images for ARM64
- how to join the Kubernetes master
- how to use the weave plugin to set up the Kubernetes internal network
- how to build and install the operator for Apache Tomcat
- and then run a small webapp using the operator.

Inspektor Gadget and traceloop

Speaker: Alban Crequy
Room: UD2.208 (Decroly)
Track: Containers
Time: 17:45 - 18:05

I will present Inspektor Gadget and traceloop, a tracing tool to trace system calls in cgroups or in containers using BPF and overwritable ring buffers.

Many people use the “strace” tool to synchronously trace system calls using ptrace. Traceloop similarly traces system calls but asynchronously in the background, using BPF and tracing per cgroup. I’ll show how it can be integrated with systemd and with Kubernetes via Inspektor Gadget.



Traceloop's traces are recorded in a fast, in-memory, overwritable ring buffer, like a flight recorder. As opposed to “strace”, tracing can be permanently enabled on systemd services or Kubernetes pods and inspected in case of a crash. This is like an always-on “strace in the past”.
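
The flight-recorder behaviour boils down to a fixed-size, overwritable ring buffer: new entries silently evict the oldest ones, so the buffer always holds the most recent syscalls. The sketch below illustrates that semantics only; traceloop implements it with BPF map buffers, not Python:

```python
# Minimal model of an overwritable ("flight recorder") ring buffer:
# writing never blocks or fails, and only the most recent `capacity`
# entries survive to be inspected after a crash.

from collections import deque

class FlightRecorder:
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest entries are overwritten

    def record(self, syscall):
        self.buf.append(syscall)

    def dump(self):
        """What you'd inspect after a crash: the last `capacity` events."""
        return list(self.buf)

rec = FlightRecorder(capacity=3)
for call in ["open", "read", "write", "close", "exit_group"]:
    rec.record(call)

print(rec.dump())  # ['write', 'close', 'exit_group']
```

The trade-off versus a plain log is deliberate: bounded memory and zero back-pressure on the traced workload, at the cost of losing older history.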



Traceloop uses BPF through the gobpf library. Several new features have been added in gobpf for the needs of traceloop: support for overwritable ring buffers and swapping buffers when the userspace utility dumps the buffer.

Extending and embedding: containerd project use cases

Speaker: Phil Estes
Room: UD2.208 (Decroly)
Track: Containers
Time: 18:10 - 18:30

Over the past year, projects looking to extend and embed core container runtime functionality have looked to containerd and its clean API and extension points as a valuable resource. In this talk we'll look at the projects which have extended or embedded containerd for specific use cases and how containerd has enabled these uses via its design. We will also do a brief project update for the broader container ecosystem and community.

As containerd reaches its fourth birthday, it has already been adopted as a container runtime underneath Kubernetes in public cloud providers and various developer tools and platforms. In this talk we'll look deeper at the architecture choices and clean API layer which has enabled further use of containerd as an embedded and extensible runtime in additional projects, like Amazon's Firecracker integration, Kata's use of the v2 shim API, and Microsoft Azure's creation of the Teleport registry feature. We'll also look at in-flight work with CERN, Google, and others around remote "pre-seeded" snapshotters which allow for significant speedups in container startup time with special-case clusters like CERN's compute cloud.

CANCELLED: A way of GPU virtualization for containers

Speaker: Shengbo Song
Room: UD2.208 (Decroly)
Track: Containers
Time: 18:35 - 19:00

Please note that this talk has been cancelled as the speaker is unfortunately unable to attend FOSDEM.



Containers are widely used in clouds due to their light weight and scalability. GPUs have powerful parallel processing capabilities that are adopted to accelerate the execution of applications. In a cloud environment, containers may require one or more GPUs to fulfill the resource requirements of application execution, while on the other hand giving a container exclusive GPU resources usually leaves those resources underutilized. Therefore, how to share GPUs among containers becomes an attractive problem for cloud providers. In this presentation, we propose an approach, called vCUDA, to sharing GPU memory and computing resources among containers. vCUDA partitions physical GPUs into multiple virtual GPUs and assigns the virtual GPUs to containers as requested. Elastic resource allocation and dynamic resource allocation are adopted to improve resource utilization. The experimental results show that vCUDA only causes 1.015% overhead on average and that it effectively allocates and isolates GPU resources among containers.

Integrating Julius Speech Recognition Engine

Speaker: Thierry Bultel
Room: UD2.218A
Track: Embedded, Mobile and Automotive
Time: 10:30 - 10:55

This presentation deals with the integration of Julius Speech Recognition Engine.



The aim of this Proof of Concept is to have a connectionless speech engine, working on an embedded device,
integrated as a binding of the AGL Application Framework.
The recognition uses Deep Neural Network realtime decoding and, for more reliable results and better performance,
uses a grammar.



Julius does not support wake words out of the box; some hacking was done to enable them in an efficient way.
Tests have been done on Renesas' H3 and UP Squared boards.

Building Homebridge with the Yocto Project

Speaker: Leon Anavi
Room: UD2.218A
Track: Embedded, Mobile and Automotive
Time: 11:00 - 11:25

Homebridge is a lightweight NodeJS server that emulates the Apple HomeKit API. Combined with versatile plugins, it allows you to make any device HomeKit-compatible.
In this presentation you will learn how Homebridge works and how to integrate it into a custom embedded Linux distribution built with the Yocto Project and OpenEmbedded. We will go through the exact steps for leveraging the latest release of Poky, the reference system of the Yocto Project, with systemd, X11, openbox, the surf web browser, nodejs, npm, Homebridge and some of its most popular plugins. Only open source software will be used, without any commercial licenses.
Practical examples for home automation with Homebridge on the Raspberry Pi and the new STM32MP1 development boards will be demonstrated. The end result is an embedded device mounted in a rack with a DIN rail that provides a simple and user-friendly way to manage and configure Homebridge out of the box. The talk is appropriate for beginners.

Homebridge is a lightweight NodeJS server that you can run on your home network to emulate the Apple HomeKit API. Started more than 5 years ago and available on GitHub under the Apache License 2.0, Homebridge has a large and vibrant open source community.
Multiple plugins allow Homebridge to handle user requests either via Siri or the Home app, and this way to make any device HomeKit-compatible. The Raspberry Pi is the perfect platform for hobbyists to install a local Homebridge instance. However, the installation of Homebridge on Raspbian requires numerous steps, and despite the excellent tutorials, users without previous Linux experience face difficulties. Another disadvantage is that Raspbian is available only as 32-bit images, which don't use the full capabilities of the ARMv8 64-bit processors on the Raspberry Pi 3 and 4.
The Yocto Project and OpenEmbedded provide all the required tools to create a custom Linux distribution that out of the box offers a user-friendly experience for configuring Homebridge in just a few easy steps. In this presentation we do a code review of the meta-homebridge Yocto/OE layer and walk through the exact steps for creating a lightweight Linux distribution with a graphical user interface and a web browser that acts like a kiosk. We will integrate and configure popular open source software tools such as the Linux kernel, systemd, X11, openbox, the surf web browser, nodejs, npm and, of course, Homebridge. Thanks to the meta-raspberrypi BSP Yocto/OE layer, we will be able to unleash the full power of the Raspberry Pi 3 and 4 by building 64-bit images.
At the end of the presentation, demonstrations and tips for making an embedded device mounted in a rack with a DIN rail will be shared. We will also mention the new STM32MP1 industrial-grade development boards as alternatives to the Raspberry Pi for this and similar projects in the maker community.
This talk brings benefits to the ecosystems of several open source communities. It will spread the word about Homebridge and significantly improve the getting-started experience for users. Practical examples of using the Yocto Project and OpenEmbedded for makers will be revealed. As part of the development efforts for meta-homebridge, a couple of new recipes, surf (a simple web browser) and stalonetray (an X11 system tray), have been contributed upstream to meta-openembedded.
The talk is appropriate for beginners. No previous experience is required. Hopefully, this presentation will encourage the audience to try out Homebridge and leverage their knowledge of the Yocto Project and OpenEmbedded with the example of this real-world, entirely open source project.

Building an embedded VoIP network for video intercom systems

Speaker: Elisa Nectoux
Room: UD2.218A
Track: Embedded, Mobile and Automotive
Time: 11:30 - 11:55

IP video intercom systems combined with smartphones can leverage regular RTP/SIP VoIP technology to offer a new set of services to end-users: getting a notification when visitors press the doorbell, seeing them on video before answering the call, interacting with them via voice and video, and deciding to open the door, at home or anywhere else via Wi-Fi or 3G coverage.



Linphone (a SIP user-agent) and Flexisip (a SIP proxy server) can be integrated into IP video door phones, in-house panels and video surveillance devices to build a complete VoIP network.



Linphone and Flexisip use open standards to reliably send the audio and video streams captured from IP video intercoms to in-house devices, including smartphones and tablets, connected either to a local network or to the public internet.
These open source SIP-based software solutions can run perfectly on small hardware devices with reduced footprint, and can easily be integrated into GNU/Linux embedded systems, thanks to their Yocto packages.



This lecture will describe how Linphone and Flexisip can be used together to build an embedded SIP network dedicated to home automation or video surveillance.
The network architecture used in these contexts can also be deployed in other areas, such as the emergency services or the Internet of Things.

Linphone and Flexisip can be integrated into IP video intercom systems to make the audio and video capabilities of a door entry panel accessible by in-house control screens and smartphones, connected either to a local network or to the public internet.



Indeed, the Linphone software fits well in embedded systems, which makes it a good candidate for use in home automation devices, such as outdoor panels or indoor monitors, where video is to be captured or displayed.
However, a SIP user-agent by itself is not sufficient for setting up a fully functional SIP network: we propose the use of Flexisip, which is also able to run with a reduced footprint on embedded devices as well as in a large-scale cloud deployment, to fork incoming calls to in-house monitoring panels, smartphones or tablets.



When used together, Linphone and Flexisip offer advanced features for IP door phones and video monitoring systems, such as:
- HD video and HD voice (with support for H.264 and H.265 hardware accelerated codecs, and Opus codec)
- Call forking with early media video
- ICE, STUN and TURN support for optimised NAT traversal allowing peer-to-peer audio and video connections whenever possible
- secure user authentication with TLS client certificates
- Interconnection with push notifications systems, for reliably notifying of people ringing the door
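The signalling behind these features is plain-text SIP. As a rough illustration of what a user-agent such as Linphone emits when the door bell is pressed, here is a minimal sketch of building and parsing a SIP INVITE in Python; the addresses, tags and branch value are made-up placeholders, not output from Linphone or Flexisip.

```python
# Minimal sketch of the SIP signalling underlying a door-bell call: the
# intercom (caller) sends an INVITE, and the proxy forks it to every
# registered in-house device. All names below are illustrative; real
# user-agents build these messages for you.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Build a bare-bones SIP INVITE request (RFC 3261 framing)."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP intercom.example.local;branch=z9hG4bK776",
        f"From: <sip:{caller}>;tag=doorbell1",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_request_line(message: str) -> tuple:
    """Return (method, request-URI) from a SIP request."""
    method, uri, _version = message.split("\r\n", 1)[0].split(" ")
    return method, uri

invite = build_invite("door@example.local", "panel@example.local", "abc123")
print(parse_request_line(invite))  # ('INVITE', 'sip:panel@example.local')
```

A SIP proxy such as Flexisip forwards a message like this to each registered contact of the callee, which is what makes call forking to several in-house devices possible.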

Back

ROS2: The evolution of the Robot Operating System

Home

Speaker Jose Luis Rivero
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 12:00 - 12:25
Event link: View original entry

At FOSDEM 2013, Open Robotics gave an overview of the Robot Operating System (ROS), an open software integration framework for robots created in 2007. After more than a decade of great success, powering everything from RoboCup teams to NASA robots in space, ROS2 was born to overcome the limitations reported by roboticists all around the globe. It's an exciting time.



This talk will explain the design changes and technical motivations that led to the creation of ROS2, giving a quick overview of its new features: multi-platform support, embedded devices, real time, etc.

Back

Introduction to Eclipse iceoryx

Home

Speaker Simon Hoinkis
Christian Eltzschig
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 12:30 - 12:55
Event link: View original entry

Bosch has open sourced a true zero-copy middleware for inter-process communication
on modern robotics and vehicle computers. The shared memory based
solution is compatible with Linux/QNX and achieves data independent communication
in constant time without serializing data. We would like to present our
recent development towards an open-source release and demonstrate our performance
and timing benchmarks on a privately developed embedded robot.

At FOSDEM 2018 Bosch presented OpenADx, an initiative to collaborate with the open source community and accelerate the development of automated driving.
At FOSDEM 2020 we would like to present the first project under the OpenADx umbrella, called Eclipse iceoryx™.
Over the course of its company history, Bosch has established a solid understanding of the needs and requirements of the automotive domain in terms of liability, reliability, safety and determinism. At a time when highly automated driving is hitting the road, these aspects become more and more important. An automated driving kit is a networked system that processes a sensor data stream in the range of GBytes/s. This naturally raises the need for an efficient and lightweight data transfer mechanism. Our group at Bosch focuses on tackling exactly this challenge. In order to approach the problem of distributing high-frequency, high-throughput data streams on fusion and planning computers, we have developed a solution which can guarantee a time-constant communication channel independently of the size of the data to be transported. Our approach is based on shared memory, which allows for transparently mapping the same range of memory into multiple processes and thus enables true zero-copy communication. Given the nature of shared memory, efficient data transport can thus be realized solely by passing pointers to memory addresses from publishers to subscribers. iceoryx is fully compatible with the ROS2 and Adaptive AUTOSAR APIs and can be used as an implementation for both.
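The pointer-passing idea can be mimicked in a few lines with POSIX shared memory. This toy Python sketch only illustrates the concept, iceoryx itself being a C++ middleware with its own memory management:

```python
# Toy illustration of zero-copy transport: a publisher writes a payload
# into shared memory once, and a subscriber reads it in place. Only the
# segment name (in effect, a pointer) crosses the "channel", so the
# transfer cost does not grow with the payload size. This mirrors the
# concept only; it is not how iceoryx is implemented.
from multiprocessing import shared_memory

# Publisher: allocate a shared segment and fill it once.
payload = b"sensor-frame-0001" * 4
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# Instead of copying the payload, publish only its "address".
published_ref = (shm.name, len(payload))

# Subscriber: attach to the same segment and read in place.
name, size = published_ref
view = shared_memory.SharedMemory(name=name)
received = bytes(view.buf[:size])
print(received == payload)  # True

view.close()
shm.close()
shm.unlink()
```

In a real middleware the published reference would be a queue entry pointing into a pre-allocated memory pool, so that publishing stays constant-time and lock behaviour stays deterministic.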

Back

Building a low-cost test fixture

Home

Speaker Guillaume Vier
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 13:00 - 13:25
Event link: View original entry

When printed circuit boards come out of the assembly line, a test fixture is required to perform functional testing and program the firmware.
These fixtures, called bed of nails, are sturdy setups usually built for high volume production, and can be quite costly.
The goal of this talk is to describe how you can build your own low-cost fixture with basic PCB design skills and off-the-shelf components.

Functional testing of printed circuit boards (PCB) is typically done with a bed of nails fixture. The fixture holds the PCB in place over spring-loaded probes that make contact with the board’s test points.
The probes can be connected to an acquisition system that runs functional tests and to a programmer that flashes production firmware.
The Internet has tutorials and DIY kits suitable for PCBs with large test points (on a 2.54 mm grid). However, as PCBs get smaller and more crowded, test points have to be smaller and closer to each other.
The goal of this talk is to describe how to build a test fixture with tighter requirements (test points with 0.6 mm diameter and 1.27 mm spacing) on a tight budget.



Main talking points:

- Making a PCB (with Kicad) to hold the probes in place and align the device under test (DUT)

- Using a Raspberry Pi Zero to instrument the setup and communicate with the DUT

- Running OpenOCD on the Raspberry Pi Zero to flash the production firmware.
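As a sketch of the last step, the fixture's Raspberry Pi can drive OpenOCD from a small script. The config file names, firmware path and flash address below are placeholders for whatever matches your probe and MCU; the snippet only assembles the command line:

```python
# Sketch of how the fixture's Raspberry Pi could drive OpenOCD to flash
# production firmware onto the DUT. The interface/target config names,
# firmware path and base address are hypothetical placeholders; check
# your OpenOCD install for the files matching your setup. We only build
# the command here; run it with subprocess on the real fixture.
import subprocess  # noqa: F401  (used on the real fixture)

def openocd_flash_cmd(firmware: str, base_addr: int) -> list:
    """Build an OpenOCD invocation that flashes `firmware` and exits."""
    return [
        "openocd",
        "-f", "interface/raspberrypi-native.cfg",  # placeholder probe config
        "-f", "target/stm32f1x.cfg",               # placeholder target config
        "-c", f"program {firmware} verify reset exit {base_addr:#x}",
    ]

cmd = openocd_flash_cmd("build/production.bin", 0x08000000)
print(" ".join(cmd))
# On the fixture itself:
# subprocess.run(cmd, check=True)
```

Building the command separately from running it makes the flashing step easy to log and to unit-test without hardware attached.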

Back

How to integrate secure elements

Home

Speaker Michael Schloh von Bennewitz
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 13:30 - 13:55
Event link: View original entry

In this half hour we study aspects of physically and cryptographically secure hardware (often termed a secure element, or SE) and its integration into existing circuits. We illustrate the utility of such integration by inspecting a cryptocurrency wallet design, and explain the difficulty presented by the nondisclosure agreements (NDAs) common in the industry. We examine several hardware devices, study their parts under a close-range circuit camera, and offer instructions on their use.

Building secure applications involves research of new technology while leveraging well known practices, for example when using ECDSA to secure systems with low power devices.



In this half hour lecture, we study an in-depth example of using crypto-accelerated hardware to research such secure applications.



We review common cryptography practices.






We propose new hardware-supported techniques.






We end by viewing a number of real device hardware circuits under a close-range microscope, and possibly offer a device petting zoo to encourage exploration of the hardware.

Back

Embedded systems, the road to Linux

Home

Speaker Angelo Dureghello
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 14:00 - 14:50
Event link: View original entry

As soon as you are on Linux, you are generally confident with the embedded system, whatever it is.
But the boot process often hides some mysteries, and understanding the details may help you recover a
bricked board or upgrade or replace a bootloader. The path explained here starts by comparing some different
SoCs, passing through the ROM boot loader, static RAM, SDRAM init, the secondary bootloader, and so on,
until the last "jump" to Linux. The most common non-volatile boot devices will be introduced. Basic
knowledge is enough for the audience.

Back

boot/loader — How to boot Linux and nothing else

Home

Speaker Łukasz Stelmach
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 15:00 - 15:25
Event link: View original entry

To boot Linux on a new ARM/ARM64 platform we have to port both Linux and a bootloader to that platform. Aside from the platform setup code, we need to add and maintain some device drivers in both Linux and the bootloader. We decided to avoid the extra effort and get rid of the dedicated bootloader. With a few dozen lines of assembly code, the Linux kernel and a pinch of userland tools, the "boot" kernel was running with all the cool features we wanted! Then we used kexec(2) to start a "full" Linux kernel.

How to boot Linux and nothing else



To boot Linux on a new ARM/ARM64 platform we have to port both Linux and a bootloader to that platform. Aside from the platform setup code, we need to add at least some platform-specific device drivers to both Linux and the bootloader, and maintain those drivers in both trees. We decided to avoid the extra effort and get rid of the dedicated bootloader.



We took a widely available Odroid XU4 board and replaced its bootloader (U-Boot) with a few dozen lines of assembly code, a Linux kernel and a pinch of userland tools. The "boot" kernel was running with all the cool features we wanted! Then we used kexec(2) to start a "full" Linux kernel.



Dedicated bootloaders perform two types of tasks: platform-specific setup and management (starting an OS, managing OS updates). We show that Linux is a better environment in which to implement management procedures.
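The handoff described above can be sketched with the standard kexec-tools commands (`kexec -l` to stage a kernel, `kexec -e` to jump to it). The kernel, initrd and command line below are illustrative placeholders; the script only assembles the invocations:

```python
# A sketch of the kexec(2)-based handoff: the minimal "boot" kernel
# inspects the platform, selects a "full" kernel image, and replaces
# itself with it. Paths and the kernel command line are illustrative;
# the flags are the standard kexec-tools ones (-l to load, -e to boot).
import subprocess  # noqa: F401  (used on the real board)

def kexec_commands(kernel: str, initrd: str, cmdline: str) -> list:
    """Return the two kexec-tools invocations for a kernel handoff."""
    return [
        ["kexec", "-l", kernel,
         "--initrd", initrd,
         "--command-line", cmdline],   # stage the "full" kernel
        ["kexec", "-e"],               # jump to it (does not return)
    ]

for cmd in kexec_commands("/boot/vmlinuz-full",
                          "/boot/initrd-full.img",
                          "root=/dev/mmcblk0p2 ro"):
    print(" ".join(cmd))
    # On the target: subprocess.run(cmd, check=True)
```

Because the selection logic runs in full Linux userland, it can use filesystems, networking or signature checks before deciding which kernel to stage, which is exactly the management flexibility argued for above.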



We want to share our experience and encourage others to join our effort to use Linux Kernel as a bootloader on ARM/ARM64 platforms.



Target audience



The presentation is meant for everyone interested in how the Linux kernel handles the boot process, especially developers who commit to arch/* directories and bootloader developers. We present our experience with the ARM Odroid XU4 board, but we expect people working on other platforms will benefit too.



How we want to improve the ecosystem



Development of new bootloading code for ARM and other embedded platforms. We believe using the Linux kernel for this task is beneficial in three ways. Less platform-specific code needs to be created and maintained in different repositories. General-purpose code like filesystem drivers or the network stack is maintained better in Linux than in U-Boot. This, as well as the wide variety of available libraries, makes Linux a better environment in which to develop the advanced management functions (e.g. OS updates, security checks, etc.) found in a contemporary bootloader.

Back

PipeWire in the Automotive Industry

Home

Speaker George Kiagiadakis
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 15:30 - 15:55
Event link: View original entry

PipeWire has recently been adopted by Automotive Grade Linux for its implementation of the low-level platform audio service, replacing entirely previous solutions. Getting there had, of course, many challenges. In this talk, George is going to talk about how PipeWire has managed to overcome these challenges and has evolved to support automotive use cases and hardware through the design and implementation of a new, reusable, session & policy management component, WirePlumber.

Back

WPE, The WebKit port for Embedded platforms

Home

Speaker Philippe Normand
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 16:00 - 16:25
Event link: View original entry

WPEWebKit is a WebKit flavour (also known as a port) specially crafted for embedded platforms and use-cases. During this talk I will present WPEWebKit's architecture with a special emphasis on its multimedia backend based on GStreamer. I will also demonstrate various use-cases for WPE, ranging from kiosk apps and set-top-box user interfaces to advanced scenarios such as Web overlays for live TV broadcasting.

WPEWebKit is designed for simplicity and performance. It allows application developers to easily deploy hardware-accelerated, fullscreen (or not) browsers with multimedia support that are as small (in both memory usage and disk space) and light as possible, while implementing the most relevant HTML specifications.



Traditionally WebKit ports are associated with a specific widget toolkit library (GTK, Qt, Cocoa,...) but WPEWebKit breaks with this monolithic design and thus enables a new range of use-cases. By delegating the final web page rendering to dedicated view-backends, WPEWebKit allows flexible and tight integration in a wide range of hardware platforms. We also provide a Qt5 QML plugin that can easily replace the deprecated QtWebKit-based module.



WPEWebKit leverages GStreamer for its multiple multimedia backends, ensuring your WPEWebKit-based browser supports zero-copy hardware video decoding on the most common embedded platforms such as the Raspberry Pi, i.MX6 and i.MX8M SoCs.



WPEWebKit can also be used in pure GStreamer applications! Thanks to the GstWPE plugin, web pages can be "injected" into GStreamer pipelines as audio and video streams. This new plugin thus enables use-cases such as HTML overlays.



WPEWebKit is an open source project with a growing community, and it is developed within the ecosystem of the WebKit project, which powers many open source and proprietary web browsers.

Back

How Yocto's extra tools help industrial projects

Home

Speaker Pierre Ficheux
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 16:30 - 17:20
Event link: View original entry

Yocto is the most famous "build system" for embedded Linux. During this talk we'll study how to use some Yocto features to help the development of a free industrial project. We will study the eSDK (extended cross-toolchain), Ptest and Testimage (CI), Devtool and Devshell (recipe modification). We will also learn how to be in compliance with the GPLv3 / LGPLv3 licenses thanks to the "archiver" class (and more).

Back

The State of PTXdist

Home

Speaker Roland Hieber
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 17:30 - 17:55
Event link: View original entry

PTXdist has been around as a build tool for embedded systems for more than 16 years now, but many new features have been added recently. Most importantly, this includes support for kconfig diffs and layered BSPs, infrastructure for code signing and license compliance, a homepage with online documentation and a cute logo, as well as several small improvements. This talk gives new and old users an overview of the current feature set and the core concepts behind PTXdist.

Back

lognplot - logging and plotting data from micro's

Home

Speaker Windel Bouwman
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 18:00 - 18:25
Event link: View original entry

Embedded systems are hard to debug. Complex systems have a lot of variables.
When debugging those systems, we often log data into some files, and visualize
them later on, using Excel, matplotlib or something else.



This talk is about tracing and logging. What are the options we have as embedded
software developers? I will present the lognplot tool, a project to plot incoming data
on the fly.

Embedded systems are hard to debug. Complex systems have a lot of variables.
When debugging those systems, we often log data into some files, and visualize
them later on, using Excel, matplotlib or something else.



This talk is about tracing and logging. What are the options we have as embedded
software developers? I will present the lognplot tool, a project to plot incoming data
on the fly. There are two implementations, one in Python and one in Rust
with gtk-rs. The data is stored internally in a zoomable format, allowing
large sets of data to be browsed easily.



During the talk you will learn how to draw a chart, and how to aggregate
large sets of data into summaries.
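One common way to make large data sets browsable is to fold raw samples into fixed-size chunk summaries and draw the summaries when zoomed out. The following is a sketch of that general technique, not lognplot's actual internal format:

```python
# Aggregate a long signal into per-chunk (min, max, mean, count)
# summaries. When zoomed out, a plot draws the summaries (e.g. a
# min/max band plus a mean line); when zoomed in, it falls back to raw
# samples. This is a sketch of the technique, not lognplot internals.
from dataclasses import dataclass

@dataclass
class Summary:
    minimum: float
    maximum: float
    mean: float
    count: int

def summarize(samples: list, chunk: int) -> list:
    """Fold raw samples into fixed-size chunk summaries."""
    out = []
    for i in range(0, len(samples), chunk):
        part = samples[i:i + chunk]
        out.append(Summary(min(part), max(part),
                           sum(part) / len(part), len(part)))
    return out

signal = [float(x % 10) for x in range(1000)]
levels = summarize(signal, 100)
print(len(levels), levels[0].minimum, levels[0].maximum)  # 10 0.0 9.0
```

Stacking such summaries at several chunk sizes gives the "zoomable format" property: the number of points drawn stays bounded no matter how much data has been logged.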



I will demo an STM32 serial wire viewer output connected to this tool
to enable live tracing of an embedded system.

Back

U:Kit: Open-source software and hardware smoke detector

Home

Speaker Slavey Karadzhov
Room UD2.218A
Track Embedded, Mobile and Automotive
Time 18:30 - 18:55
Event link: View original entry

The presenter will show the audience U:Kit (https://github.com/attachix/ukit).
U:Kit is an open source (software and hardware) smoke and motion detector, built with the help of open source tools.
U:Kit is easy to assemble, has a plastic case, and can be attached to the ceiling and used with minimal effort, even by non-technical people. But that is just the tip of the iceberg.
The presenter will share with the audience his experience in creating the devices and talk about some of the software and hardware challenges that he and his team had to solve.

Back

CANCELLED United Nations Technology and Innovation Labs

Home

Speaker
Room UD2.Corridor
Track Freedom
Time 18:00 - 18:50
Event link: View original entry

Amanda is the chair of the United Nations Technology and Innovation Labs' Open source and IP Advisory Board and will give an overview of the work being done by the labs and take the audience through a couple of case studies using data and blockchain for good in an open way.

Amanda will explain the goals and workings of the UNTIL Open Source Advisory Board and the opportunities for further community engagement with the labs, allowing a wider open community to be built supporting the labs through contributions and mentoring and the potential opportunities for Fellowship placings within the labs.
She will also look at the first projects working in the labs, with the Advisory and the open data and blockchain models that they have applied to these, using case studies.

Back

Open Source Under Attack

Home

Speaker Chris Aniszczyk
Max Sills
Michael Cheng
Room Janson
Track Freedom
Time 09:00 - 09:50
Event link: View original entry

Whether it is "Open Core", the Mongo SSPL or the Commons Clause, the core ethos of open source has been under attack for some time. As those parties who seek to limit the promise of free software enjoy more and more success, the community will need stronger and more forceful tools to defend itself. Presenters: Michael Cheng (Facebook), Max Sills (Google), Chris Aniszczyk (Linux Foundation)

Back

Is the Open door closing?

Home

Speaker Krzysztof Daniel
Room Janson
Track Freedom
Time 10:00 - 10:50
Event link: View original entry

"Open Source" has been wildly successful, no doubt.



Yet, in recent years, we have seen a massive amount of failed 'open' projects.



Why is that?



I have identified 10+ scenarios in which the 'Open' approach works. But what is most interesting is that those scenarios have enabling conditions, and while those conditions are often taken for granted, they should not be.



Not every 'Open' project is sustainable. Not every project is worth adopting or contributing to.



During the presentation, we will look into what works and why, and what to expect from different 'Open' initiatives. We will cover almost everything that can be open - starting from hardware, through software, education, and we will end up covering Open Governments.



Each sector is different, and for some of them, the 'Open' approach will not work. Come and see what I have found out in this space during my research, and evaluate whether you are working on the right project.



Because the only resource you will never get back is time.

If you are using or contributing to a software project, especially on your own, you certainly want to know whether your project has a chance of slipping into oblivion.
The described scenarios will not only help you answer that question, but will also help you figure out what is most important for your project right now.



The scenarios that will be covered include:




Back

The core values of software freedom

Home

Speaker Matthias Kirschner
Room Janson
Track Freedom
Time 11:00 - 11:50
Event link: View original entry

If you are a Free Software (Open Source Software) developer, do you have to follow an open development model or a certain business model? Do you have to believe in or be a supporter of socialism, capitalism, or liberalism? Do we, when we work for software freedom, have to agree on certain positions on privacy, intelligence services, the military, the climate catastrophe, nuclear power, vaccinations, or animal rights?



Or should we accept to have different views or even allow each other not to discuss certain views, because what brings us together are other values?

I will argue that the core values of our movement are that everybody, no matter what background, can use the software for every purpose without discrimination. That everybody is allowed to study how software works. That you are always allowed to share your software with others, either to help other human beings or to make money. And that no individual, organisation or government should be forced to change their behaviour because of the software, but according to our principles, adapt and thereby improve the software for themselves and others.



Furthermore, the talk is intended as a plea for more respect and diversity in Free Software communities. It will be argued that, while sticking to those values, we should treat decently those who hold other beliefs, or who have different opinions, or none at all, about a topic we ourselves care about; and that we should not put an emphasis on our other beliefs while working together on Free Software/Open Source Software, but instead work with other groups or movements to bring those other topics forward.

Back

Why open infrastructure matters

Home

Speaker Thierry Carrez
Room Janson
Track Freedom
Time 12:00 - 12:50
Event link: View original entry

A lot of open source developers choose to deploy their software on infrastructure based on proprietary software. Behind this apparent paradox is the need to adapt to changing environments, adopt new technologies fast, and use increasing amounts of computing power. Open infrastructure (computing, networking and storage infrastructure based on open source software) has a lot to offer, but it's easy to overlook if you don't take the time to take a step back and analyze the situation rationally. In this talk, Thierry Carrez, VP of Engineering at the OSF, explains all the reasons why open infrastructure matters, and why it makes sense for you to adopt it today.

Outline of this talk:




Back

Why the GPL is great for business

Home

Speaker Frank Karlitschek
Room Janson
Track Freedom
Time 13:00 - 13:50
Event link: View original entry

In the past few years we have seen a lot of discussion around free software licenses and claims that they are bad for companies. This talk debunks that claim and shows how free software licenses are actually great for startups, if done right.

In the last few years we have seen a lot of discussion in the open source and free software startup space around licenses. Several companies stepped forward and claimed that it's not possible to build a working company on top of a free software product. Some changed the license of their product to a proprietary license like the Commons Clause or the Business Source License. They claim that this is needed to 'save' free software. This talk describes why this is fundamentally wrong. It's possible to build a working startup and company on top of a free software product. This talk discusses how companies like Red Hat, SUSE and Nextcloud manage to have a 100% free software product, including a big contributor community, while still being able to pay developers and grow.

Back

AMENDMENT Open Source Won, but Software Freedom Hasn't Yet

Home

Speaker Bradley M. Kuhn
Karen Sandler
Room Janson
Track Freedom
Time 14:00 - 14:50
Event link: View original entry

Karen and Bradley, building on the substantial feedback from last year's keynote, follow up their 2019 FOSDEM keynote with real-world suggestions, ideas, and discussion about how we, as software freedom activists, can live in a world with so much proprietary software. Software freedom is hard to find, but we can find it together, and we can support each other when we must face the proprietary software world and make hard decisions. Let's figure it out together and support each other!

History never unfolds as we would expect. It's surprising and jarring that we've achieved both so much and so little. Every day, there is more Free and Open Source Software (FOSS) in the world than ever in history, but it's also a little bit harder each day to live a life that avoids proprietary software. Today's world of software technology is a ridiculous paradox.



Most software that we depend on every day is under someone else's control. Whether it's the cloud service run by a big company, the medical devices that keep us alive, or the JavaScript application for everything from our banking to our social media, the code that handles our most sensitive data and life-essential computing tasks is usually proprietary. Even Linux-based devices, which are ubiquitous, rarely comply with the GPL and are therefore more or less as proprietary as any other device. Linux is everywhere, yet early FOSS adopters have never had less software freedom than we do today.



Once upon a time, it was viable for someone living in the industrialized world to function in daily society in pure software freedom. In those days, being a software freedom activist was akin to being a vegan or vegetarian: activists could (relatively conveniently) live a lifestyle that reflected our values and proved our sociopolitical point in mundane, daily terms.



Leading by example is not so easy anymore. The strongest supporters of software freedom among us, if they chose to remain living in the industrialized world, make compromises. Our political opponents tell us that our cause is misguided since these compromises "aren't so bad". Meanwhile, our would-be political allies question our commitment to the cause because we carry devices with some proprietary firmwares. Navigating this complex climate may well be the hardest challenge we face.



Cooptation is commonplace for social justice movements, and the cooptation process can be ongoing for decades. The software freedom movement is a few years into this cooptation: this is precisely why we see major leaders stand up and shout "Open Source is the default; Open Source has won!" while presenting slides from a MacBook. The most difficult days don't lie behind us; they lie ahead.



This talk is about surviving the personal struggle of software freedom activism in this current climate. Many of us want a world with only FOSS and no proprietary software, but we won't get there in our lifetimes. How do we live our lives to maximal effect to carry forward the torch of software freedom both in this generation and onto the next? How do we weather the inevitable failures and seemingly insurmountable challenges as we watch what was once FOSS slowly become proprietary again, or see new technologies exist only as proprietary, or, even worse, exist as a warped version of FOSS that "seems open" but fails to give most software freedoms to most users? Let's learn and explore together how to survive as activists now that the going got tough.



Please note that this replaces the talk originally scheduled to be given by Amanda Brock on "United Nations Technology and Innovation Labs".

Back

Regaining control of your smartphone with postmarketOS and Maemo Leste

Home

Speaker Merlijn B. W. Wajer
Bart Ribbers
Room Janson
Track Freedom
Time 15:00 - 15:50
Event link: View original entry

Linux mobile software and GNU/Linux distributions are currently not widely available for smartphones. This talk covers why it is desirable to have GNU/Linux (not: Android or Android-based) on your smartphone, what the current state of various software attempts at Linux on smartphones is, what progress has been made, and will also dive into the available old and new hardware (including the PinePhone and Librem 5) to run the software & distributions on.

Smartphones running regular (F)OSS Linux distributions are not common. We intend to provide an overview of the current Linux FOSS mobile stacks, distributions that package/provide the mobile stacks and to discuss the hardware that one can use to run this software. We will provide additional details for the postmarketOS distribution and for Maemo Leste (Debian based FOSS mobile software). We also hope to go into some detail about the upcoming PinePhone (https://www.pine64.org/pinephone/)



postmarketOS is a distribution based on Alpine, with a focus on minimalism, security and mobile software. postmarketOS supports many old and new smartphones with varying degrees of support, and also packages/ships with various mobile software suites like Plasma Mobile, Maemo/Hildon, Phosh and more.



Maemo Leste is based on Maemo Fremantle (from the Nokia N900 days), but completely open source. It's a repository on top of Debian/Devuan that pulls in the entire Maemo/Hildon user interface and suite of applications. Building on top of a proven set of interfaces, Maemo Leste also aims to be mostly compatible with Maemo the way many people might remember it, with a modern twist.



Pine64 (known for ARM laptops and SBCs (single-board computers)) has decided to get into the mobile business with the PineTab and the PinePhone device. Aiming to deliver developer devices in 2020Q1 and enthusiast end-user devices in 2020Q2, they've energized software developers who are writing mobile interfaces for Linux and have been producing a mobile phone at a remarkable pace. We will show the Pine64 device and discuss the current state of Linux support on the device.



We plan to give live demos during the presentation, but will have pre-recorded videos as fallback.

Back

FOSSH - 2000 to 2020 and beyond!

Home

Speaker Jon 'maddog' Hall
Room Janson
Track Keynotes
Time 16:00 - 16:50
Event link: View original entry

maddog reviews the past twenty years of FOSSH, and dares to predict the next twenty years.

Last year maddog gave a talk about computer science and FOSSH from 1969 to the present, but compressing fifty years of computer science into fifty minutes (OK, maddog took more time) was hard.



This year maddog will be talking about the twenty years leading up to today and making projections (always dangerous with maddog, because they often come true) into the future.



With only another fifty years to cover, maddog may actually stay within his time slot, but bring a beer (just in case).

Back

FOSDEM@20 - A Celebration

Home

Speaker Steven Goodwin
Room Janson
Track Keynotes
Time 17:00 - 17:50
Event link: View original entry

Since 2001, FOSDEM has been a ritualistic retreat for thousands of free and open source-minded developers.



Incorporating contributions gathered from across the FOSDEM community, this talk reflects on those past events and makes comment on the changes in technology, community, and the conference itself.

For those new to FOSDEM this talk provides a lively and interesting discussion on how FOSDEM began and has changed over the years, told through the eyes of one of its most loyal supporters, Steven Goodwin, who has attended them all and, since 2003, has been documenting his (mis-)adventures in "The FOSDEM Diaries."



And for those veterans, it provides a trip down memory lane and a solid reminder that we were all young once, and that we should help the current generation in the way we were helped all those years ago!



This talk gives real insight into how communities change over time, and how those new to the fold can break into an arena that can often appear as one clique of cranky old developers! Steve will also explain the origin of the FOSDEM drinking game 'beermat buckeroo'!

Back

Closing FOSDEM 2020

Home

Speaker FOSDEM Staff
Room Janson
Track Keynotes
Time 17:50 - 18:15
Event link: View original entry

Some closing words. Don't miss it!

Back

SECCOMP your PostgreSQL

Home

Speaker Joe Conway
Room K.1.105 (La Fontaine)
Track Databases
Time 09:00 - 09:50
Event link: View original entry

SECCOMP ("SECure COMPuting with filters") is a Linux kernel syscall filtering mechanism which allows reduction of the kernel attack surface by preventing (or at least audit logging) normally unused syscalls. Recent security best-practices recommend, and certain highly security-conscious organizations are beginning to require, that SECCOMP be used to the extent possible. The major web browsers, container runtime engines, and systemd are all examples of software that already support SECCOMP.



This talk covers SECCOMP applied to PostgreSQL via 2 different methods -- namely top-down using systemd, and at the session level using a PostgreSQL extension called pgseccomp. The two methods will be explained and compared. We will also discuss how and why the two methods might be used in conjunction. Finally, a process to determine the list of expected/legitimate PostgreSQL kernel syscalls is described.
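As a concrete illustration of the top-down systemd method, a drop-in like the following could constrain a PostgreSQL service with a syscall allow-list. `SystemCallFilter=` and `SystemCallErrorNumber=` are real systemd directives and `@system-service` is one of systemd's predefined syscall groups, but the file path and the choice of sets here are a hypothetical sketch, not the configuration recommended in the talk.

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/postgresql.service.d/seccomp.conf
# SystemCallFilter= is systemd's seccomp front-end: with an allow-list,
# any syscall outside the listed sets is rejected. SystemCallErrorNumber=
# makes blocked syscalls fail with EPERM instead of killing the process.
# Tighten the set only after auditing which syscalls PostgreSQL uses.
[Service]
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
```

The session-level method (pgseccomp) complements this: the service-wide filter caps what any backend can ever do, while the extension can apply stricter per-session policies inside that envelope.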

Back

dqlite: High-availability SQLite

Home

Speaker Free Ekanayaka
Room K.1.105 (La Fontaine)
Track Databases
Time 10:00 - 10:50
Event link: View original entry

SQLite has proven extremely successful at providing applications with a powerful, portable and embeddable SQL engine that can handle most of their data storage needs.



Unfortunately, SQLite offers neither replication nor fault tolerance. These two features are, however, very important for the rising Edge/IoT market: dqlite delivers both of them.



dqlite is a C library which exposes a SQLite database over the network and replicates it using the Raft algorithm, with built-in automatic failover.



It lets you build and operate a fault-tolerant cluster of nodes, each running an instance of the user application.



dqlite was created to support clustering in the LXD container management project, where it has been used for over a year. In this talk we will look at its design, implementation and various use cases.

Distributed systems are ubiquitous these days: we need to commoditize the underlying technologies and algorithms, making them easy to consume. The dqlite project offers application developers the opportunity to build on top of a storage engine which is as easy and convenient as a plain SQLite database, but also meets higher durability and fault tolerance requirements.
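The Raft-based commit rule that gives dqlite its durability can be sketched in a few lines. This is a toy simulation of majority-quorum replication, not dqlite's actual C API; all names are illustrative:

```python
def replicate(entry, nodes):
    """Minimal sketch of a Raft-style commit: an entry is durable once a
    majority of the cluster (quorum) has appended it to its log, which is
    what lets the cluster survive the loss of a minority of nodes."""
    quorum = len(nodes) // 2 + 1
    acks = 0
    for n in nodes:
        if n["up"]:                 # only reachable nodes can acknowledge
            n["log"].append(entry)
            acks += 1
    return acks >= quorum           # committed only with majority acks

cluster = [{"up": True, "log": []} for _ in range(3)]
assert replicate("INSERT ...", cluster)       # 3/3 acks: committed
cluster[2]["up"] = False
assert replicate("UPDATE ...", cluster)       # 2/3 acks: still a majority
cluster[1]["up"] = False
assert not replicate("DELETE ...", cluster)   # 1/3 acks: cannot commit
```

With three nodes the cluster tolerates one failure; with five, two.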

Back

MySQL Goes to 8!

Home

Speaker Geir Høydalsvik
RoomK.1.105 (La Fontaine)
TrackDatabases
Time11:00 - 11:50
Event linkView original entry

The latest and greatest version of MySQL is MySQL 8.



Currently the most used version of MySQL is MySQL 5.7. This talk will highlight what is new in MySQL 8.0 - a huge step forward for our users. MySQL 8.0 delivers significant improvements on all fronts, such as dramatically improved SQL, GIS, and JSON support. The talk will also cover the MySQL Document Store (MySQL = NoSQL + SQL) and MySQL InnoDB Cluster (HA out of the box) as well as MySQL Shell which ensures power, freedom, and flexibility for the Developer.

Back

SWIM - Protocol to Build a Cluster

Home

Speaker Vladislav Shpilevoy
RoomK.1.105 (La Fontaine)
TrackDatabases
Time12:00 - 12:50
Event linkView original entry

SWIM is a relatively new protocol for discovering and monitoring cluster nodes and for disseminating events and data between them. The protocol is extremely lightweight and decentralised, and its speed and load per node do not depend on cluster size.



The protocol solves several tasks at once. The first is to build and keep up to date the topology of a cluster without explicit configuration. The task is quite intricate because:






According to the protocol, cluster nodes broadcast packets and send p2p ping requests. Broadcast helps to discover new nodes, p2p pings help to detect failure of a known node.
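The ping mechanics described above can be sketched as a toy simulation (illustrative names only, not a real SWIM implementation, which works over UDP and gossips suspicion before declaring a node dead):

```python
import random

class Member:
    """One cluster member in a toy SWIM round."""
    def __init__(self, name, alive=True):
        self.name, self.alive = name, alive

    def ping(self, target):
        # Stand-in for a UDP ping: succeeds iff both ends are alive.
        return self.alive and target.alive

def probe(prober, target, members, k=3):
    """One failure-detection step: direct ping first, then ask k random
    peers to ping the target on our behalf. Indirect pings guard against
    a bad link between prober and target. Returns True if the target is
    judged alive this round; a real SWIM node would mark it 'suspected'
    and gossip that, rather than declaring it dead immediately."""
    if prober.ping(target):
        return True
    peers = [m for m in members if m is not prober and m is not target]
    helpers = random.sample(peers, min(k, len(peers)))
    return any(p.ping(target) for p in helpers)

members = [Member(f"n{i}") for i in range(5)]
members[4].alive = False
assert probe(members[0], members[1], members)      # healthy node passes
assert not probe(members[0], members[4], members)  # dead node fails all pings
```

Because each node probes only a constant number of peers per round, the per-node load stays flat as the cluster grows.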



The second task is event dissemination in a cluster. An event can be a node failure, a UUID change, an IP address update, a new node appearing - anything that affects cluster state. Sometimes users define their own event types. When a node learns about an event, it needs to disseminate that event to the other nodes. The SWIM protocol describes an algorithm for detecting and disseminating events, and gives the following guarantees:






In this talk I will explain how SWIM works, how it was implemented and which essential improvements were made along the way, how to use SWIM, and what the practical performance results are.



The implementation is part of the Tarantool DBMS, the biggest Russian open-source DBMS. Tarantool is currently moving toward better scalability, with improvements in horizontal scaling, cluster-wide computations, and cluster management. The SWIM protocol implementation was recently released as part of that roadmap.

Back

Improving protections against speculative execution side channel

Home

Speaker David Stewart
RoomK.1.105 (La Fontaine)
TrackMiscellaneous
Time13:00 - 13:50
Event linkView original entry

Speculative execution side channel methods pose new challenges not only to system administrators, users and security experts but also to developers. Developers can use different techniques to harden their code and reduce the feasibility of a malicious actor using these methods to leak secrets. But what is a secret? How can someone leak any of my data using these methods? This presentation introduces some architectural concepts that these methods exploit. It will also present how these methods work and how malicious actors might try to infer data from other users and programs. We will introduce some of the techniques that developers can use for mitigation, together with details about specific challenges that developers in different programming languages might face when implementing these mitigation techniques. Finally, we will present some of the mitigations that we are introducing in software to help ensure that these techniques cannot be exploited in production environments.

No security or computer architecture background is required. Basic to intermediate programming skills are recommended.



Attendees will come away with a better understanding of what speculative execution side channel issues are, how they work, and what they really mean for developers.

Back

SaBRe: Load-time selective binary rewriting

Home

Speaker Paul-Antoine Arras
RoomK.1.105 (La Fontaine)
TrackMiscellaneous
Time14:00 - 14:50
Event linkView original entry

Abstract



Binary rewriting is a technique that consists of disassembling a program in order to modify its instructions, with
many applications, e.g. monitoring, debugging, reverse engineering and reliability. However, existing solutions suffer from well-known
shortcomings in terms of soundness, performance and usability.



We present SaBRe, a novel load-time framework for selective binary rewriting. SaBRe rewrites specific constructs of
interest — mainly system calls and function prologues — when the program is loaded into memory. This enables users to intercept those constructs at runtime
via a modular architecture allowing custom plugins to be linked with SaBRe using a simple and flexible
API. We also discuss the theoretical underpinnings of disassembling and rewriting, including conditions for
coverage, accuracy, and correctness; and how they affect SaBRe.



We developed two backends for SaBRe — one for x86_64 and one for RISC-V — which were in turn used to
implement two open-source plugins: a fast system call tracer and a fault injector. Our evaluation
shows that SaBRe imposes little performance overhead, between 0.2% and 4.3% on average.
In addition to explaining the architecture of SaBRe and demonstrating its performance,
we also show, with a concrete example, how easy it is to create a new plugin for SaBRe.



SaBRe is free and open-source software released under the GPLv3 license,
originally developed within the Software Reliability Group at Imperial College London.

Introduction



The goal of binary rewriting is to add, delete and replace
instructions in binary code. There are two main types of binary
rewriting techniques: static and dynamic.
In static binary rewriting, the binary file is statically rewritten on disk, while
in dynamic binary rewriting it is rewritten in memory, as the
program executes.



Static binary rewriting has the advantage
that the rewriting process does not incur any overhead during
execution, as it is performed before the program starts running.
However, static binary rewriting is hard to get right: creating a
valid modified executable on disk is challenging, and correctly
identifying all the code in the program is error-prone in the
presence of variable-length instructions and indirect jumps.



By contrast, dynamic binary rewriting modifies the code in
memory, during program execution. This is typically accomplished by translating one basic block at a time and caching the
results, with branch instructions modified to point to already
translated code. Since translation is done at runtime, when the
instructions are issued and the targets of indirect branches are
already resolved, dynamic binary rewriting does not encounter
the challenges discussed above for static binary rewriting.
However, the translation is heavyweight and incurs a large
runtime overhead.



In this presentation, we introduce SaBRe, a system that implements
a novel design point for binary rewriting. Unlike prior techniques, SaBRe operates at load-time, after the program is
loaded into memory, but before it starts execution. Like static
binary rewriting techniques, SaBRe rewrites the code in-place,
but the translation is done in memory, as for dynamic binary
rewriting. To achieve a high level of both performance and reliability,
SaBRe relies by default on trampolines, which are extremely efficient
and can be used more than 99.99% of the time, and only falls back
on illegal instructions triggering a signal handler for pathological
cases.
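The "selective" part of the rewriting can be illustrated with a toy scan for the x86-64 `syscall` opcode (0x0F 0x05). This is only a sketch of the idea: SaBRe disassembles the code properly rather than byte-matching, precisely to avoid false positives inside longer instructions:

```python
SYSCALL = b"\x0f\x05"  # encoding of the x86-64 `syscall` instruction

def find_rewrite_sites(code: bytes):
    """Return the offset of every `syscall` instruction in a code blob.
    A load-time rewriter would patch each site with a jump to a
    trampoline that invokes the plugin before (or instead of) the real
    system call, falling back to an illegal-instruction trap only in
    pathological cases where a trampoline does not fit."""
    sites, start = [], 0
    while (i := code.find(SYSCALL, start)) != -1:
        sites.append(i)
        start = i + 1
    return sites

# mov eax, 60 ; syscall  --  a minimal exit(...) stub
code = bytes.fromhex("b83c000000" "0f05")
assert find_rewrite_sites(code) == [5]
```

Because only these known sites are touched, the rest of the binary runs unmodified at native speed.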



The main limitation of SaBRe is that it is designed
to rewrite only certain types of constructs, namely system
calls (including vDSO), function prologues and some architecture-
specific instructions (e.g. RDTSC in x86). However, as we illustrate
later on, this is enough to support a variety of tasks, with
much lower overhead than with dynamic binary rewriting and
without incurring the precision limitations of static binary
rewriting.



We implemented two binary rewriters based on this design:
one for x86_64 and one for RISC-V code. Both rewriters
feature a flexible API, which we used to implement three
different plugins: a fast system call tracer, a multi-version
execution system (not open-sourced yet) and a fault injector.
In summary, our main contributions are:
1. A new design point for selective binary rewriting which
translates code in memory in-place at load time, before
the program starts execution.
2. An implementation of this approach for two architectures, one for x86_64 and the other for RISC-V.
3. A comprehensive evaluation using two open-source plugins: a fast strace-like
system call tracer and a fault injector.
4. An extremely simple API that can be leveraged by users to
implement and integrate their own plugins.

Back

The year of the virtual Linux desktop

Home

Speaker Lubosz Sarnecki
RoomK.1.105 (La Fontaine)
TrackMiscellaneous
Time15:00 - 15:50
Event linkView original entry

We made the Linux desktop work in VR. Join me to hear about the history and future of xrdesktop and the FOSS XR landscape.

With tracked controllers, heads and hands, AR and VR introduced the requirement for a new set of user interactions. In this talk you will learn about existing implementations and how the classical UX model with keyboard and mouse translates to these new concepts. I will highlight the technical aspect of these requirements and how they were solved in xrdesktop. Featuring 3D window management and synthesis for traditional input, xrdesktop is a software stack that integrates VR in the GNOME and KDE desktop environments. You will also get an overview of the history and status of Open Source in AR and VR.

Back

Making & Breaking Matrix's E2E encryption

Home

Speaker Matthew Hodgson
RoomK.1.105 (La Fontaine)
TrackMiscellaneous
Time16:00 - 16:50
Event linkView original entry

Matrix is an open protocol and open network for decentralised real-time communication; shifting control over communication from the big proprietary silos back to the general population of the Internet. In 2016 we added E2E Encryption based on the Double Ratchet, and since then have been working away on getting the encryption so polished that we can transparently turn it on by default everywhere. In this talk, we'll show how we have finally done this, what the blockers were, and then try to smash the encryption to pieces to illustrate the potential attacks and how we mitigate them.

Matrix is an ambitious project to build an open decentralised real-time communication network; providing an open standard protocol and open source reference implementations, letting anyone and everyone spin up a Matrix server and retake control of their real-time communication. Matrix is looked after by the non-profit Matrix.org Foundation, and as of October 2019 we have over 11.5M addressable users and around 40K servers on the public network.



Over the course of 2019 we spent a huge amount of time finalising Matrix's end-to-end encryption so we could finally turn it on by default without compromising any of the behaviour users had grown accustomed to in non-encrypted rooms. Specifically, the main remaining blockers were:






However, we have finally got there, and this talk will demonstrate how the final E2EE implementation works; the final problems we had to solve; the threat model we have implemented; and how we're doing on rolling it out across the whole network. More interestingly, we will then demonstrate a variety of attacks against the encryption (e.g. shoulder-surfing QR codes during device verification; MITMing TLS; acting as a malicious server implementation; global passive adversary) to demonstrate how well we handle them.

Back

AMENDMENT Regaining sovereignty over your router

Home

Speaker Lucas Lasota
RoomH.2215 (Ferrer)
TrackLightning Talks
Time10:00 - 10:15
Event linkView original entry

Router Freedom is the right of everyone to choose their own router/modem to access the internet. For years, the FSFE has been successfully fighting for Router Freedom in Germany. Now the debate has reached the European level, with Internet Service Providers imposing their specific routers on customers. Based on the lessons learnt in previous years, the FSFE is preparing an activity package for organisations and activists to assist them in their fight for Router Freedom in their own countries, as well as on the EU level.



Please note that this talk replaces one entitled "Open Source - Killing standards organizations or saving them" by Charles Eckel that has been rescheduled to be given at 11:40 instead.

Router Freedom is the right of everyone to choose their own router/modem to access the internet. In Europe, it has been protected since 2016 by Directive 2015/2120, the Net Neutrality Directive.



However, the status of Router Freedom in Europe differs from country to country. The core of the debate is the question of where the Network Termination Point (NTP) is located. This defines where the network of the ISP ends and where the network of the user begins. If the modem and router are considered part of the Internet Service Provider's (ISP) infrastructure, a user cannot claim sovereignty over their communication and security. Moreover, ISPs across Europe have been imposing their own routers on customers in clear violation of the net neutrality norms.



When FSFE started the Compulsory Routers campaign in 2013, the legal situation in Germany was unclear and some ISPs were already infringing Router Freedom. Three years later, we have been successful: a law has been established which defines the NTP as the “plug” on the wall, so routers are considered the users' equipment and therefore not subject to contract limitations.



Based on the lessons learnt in previous years, the FSFE is preparing an activity package for organisations and activists to assist them in their fight for Router Freedom in their own countries, as well as on the EU level.



The lightning talk will present the basic concepts and show how the audience can organise the necessary steps to start advocating for Router Freedom in their own countries.

Back

emissions API

Home

Speaker Timo Nogueira Brockmeyer
RoomH.2215 (Ferrer)
TrackLightning Talks
Time10:20 - 10:35
Event linkView original entry

The European Space Agency’s Sentinel-5P satellite is built to monitor air quality data (carbon monoxide, sulfur dioxide, ozone, …). All data gathered is publicly available. If you know what to do with those data sets, great - but if not:



Emissions API’s mission is to provide easy access to this data without the need to be an expert in satellite data analysis and without having to process terabytes of data.



This way, we hope to empower others to easily build apps that use this data – e.g. visually showing emissions of countries over time.

Achievement of climate goals has so far been verifiable only by a very small group of people with specialized know-how. As a result, public discussion remains abstract and elusive for many people. Easy access to emissions data provides a more general audience with the opportunity to form a fact-based opinion. For example, one could evaluate the effectiveness of environmental regulations - such as diesel driving bans in inner cities or new sulfur limits in shipping - by comparing actual measurements from before and after on a map.



Emissions API is a solution that provides simple access to emissions data of climate-relevant gases. For this purpose, data of the European Space Agency’s Sentinel-5P earth observation satellite will be prepared in such a way that it allows programmers easy access without the need to have a scientific background in the field.



The project strives to create an application interface which lowers the barrier to use the data for visualization and/or analysis.
Tackling the problem



The project’s core is an API, which can be used to query the processed data. For this purpose, we develop a cloud service which queries the freely accessible data of Sentinel-5P, aggregates it, stores it in a cache and makes it available.
Target audience



This project targets developers who want to build their own services based on the satellite data of the Copernicus program, but who do not want to work with huge amounts of scientific data directly. We will provide examples and libraries to quickly get you started without being an expert in satellite data analysis.

Back

git-issue

Home

Speaker Diomidis Spinellis
RoomH.2215 (Ferrer)
TrackLightning Talks
Time10:40 - 10:55
Event linkView original entry

Git-issue is a minimalist decentralized issue management system based on Git,
offering (optional) bidirectional integration with GitHub and GitLab issue management.
It has the following advantages over other systems.




Back

The Heptapod project

Home

Speaker Georges Racinet
RoomH.2215 (Ferrer)
TrackLightning Talks
Time11:00 - 11:15
Event linkView original entry

Heptapod is a friendly fork of GitLab CE that supports the Mercurial DVCS.
Today, Bitbucket begins dropping its support for Mercurial. Heptapod can provide a nice new home for projects that have to migrate out of Bitbucket.
We are looking for contributors - lots of different skills can be useful.

Mercurial is a free software distributed version control system (DVCS) written primarily in Python, with an intuitive command line interface and strong, safe history rewriting features.



Mercurial is in active development and in use at several large organisations, which
appreciate especially its extensibility and its ability to handle very large repositories.



However, Mercurial has been somewhat lacking public exposure in the past few years for not being a first class citizen in the prominent integrated hosting and collaboration solutions.
This culminated recently with Bitbucket announcing last summer its plan to drop support for Mercurial, in particular planning to stop accepting new repositories by February 1st, 2020 (that's the first day of this FOSDEM edition!).



In this talk, we will present the Heptapod project, which brings Mercurial support to GitLab Community Edition, the well-known open-source integrated platform for source collaboration and dev-ops. Lately, GitLab CE has been selected by some major free software projects, such as Debian and GNOME, to name only a few.



Several free and open-source projects have successfully migrated from Bitbucket to Heptapod. We are willing to help more of them doing so, either by hosting them directly if possible (contact us) or by giving them a hand in the transition.



Heptapod is a community-driven effort whose development involves many programming languages: Ruby, Go, Python, JavaScript and potentially Rust, but one does not need to be an expert in all of these to start contributing.



We invite interested people to join us on our Heptapod instance (of course); there's a bit of low-hanging fruit to grab there.

Back

puavo.org

Home

Speaker Juha Erkkilä
RoomH.2215 (Ferrer)
TrackLightning Talks
Time11:20 - 11:35
Event linkView original entry

A Finnish company Opinsys has provided and maintained
many thousands of Linux desktops in Finnish schools for
about fifteen years now. This is a short introduction
to the technology (called Puavo) they have developed
for this purpose.

Puavo can be used to manage Linux desktops suitable
for school environments. It is a combination of web
software (Puavo Web) and a specially configured system
(Puavo OS) using Debian GNU/Linux operating system as
its base. Puavo Web is built for managing user accounts
and devices. Puavo OS is designed for large-scale
deployment in primary and secondary schools. The source
code for both is free software under GPLv2+ license.

Back

AMENDMENT Open Source - Killing standards organizations or saving them

Home

Speaker Charles Eckel
RoomH.2215 (Ferrer)
TrackLightning Talks
Time11:40 - 11:55
Event linkView original entry

Open source communities move quickly, value running code, and docs are best effort at best. Standards move slowly, value precise specs, and negotiate compromises for broad alignment. Given these differences, why would open source communities fraternize with standards orgs? Standards orgs such as IETF and MEF realize they need to change to remain relevant. By embracing open source, standards orgs benefit from the speed and collaborative spirit of open source and get timely feedback on the clarity and correctness of standards as they evolve in parallel with running code. Open source communities gain users, address additional use cases, and gain the stability of standards to ease integration efforts and avoid forks. This session explores this evolution in standards orgs, highlights areas of mutual interest, and shares ideas on the benefit of closer collaboration.



Please note that this talk replaces one entitled "Open Source for students, by students" that was due to have been given by Kristin Yim. Also this talk was originally scheduled to be given at 10:00.

By collaborating with standards organizations and supporting existing and evolving standards, the open source community gains users, addresses a larger set of use cases, and benefits from the stability of standards, which can help avoid harmful forking and ease integration efforts. Standards orgs benefit from the speed and collaborative spirit characteristic of open source, and they gain timely and critical feedback on the clarity and correctness of their standards as they evolve iteratively and in parallel with the open source code. The end result is open source code that is more consumable by industry, and standards that are more consumable by the open source community.

Back

Open Adult Education: a curriculum to bridge the digital skills gap with free and open source technologies

Home

Speaker Pia Groenewolt
RoomH.2215 (Ferrer)
TrackLightning Talks
Time12:00 - 12:15
Event linkView original entry

The OPEN-AE project is developing an open and modular curriculum to train e-facilitators and trainers working with people who need upskilling and reskilling in free and open culture. The OPEN-AE curriculum is meant to be modular and adaptable to the immediate training needs of the e-facilitators. The training aims to introduce the trainers to the values of free and open culture and empower them with those values so they can be transmitted to low-skilled adults. The training will also have modules supporting a transition to free and open source software and culture. Open-AE is not merely about teaching free and open source software, but aims to have the trainers be active participants in the culture, knowing how to license works openly and how to collaborate and develop as a collective.




ALL DIGITAL is a network that wants to ensure every European (or rather, every person) can be empowered by the digital transformation; in short, we work in the field of digital inclusion. Our organisation started slightly over ten years ago as a grassroots movement among digital competency centres, and this scaled to the European level. Today, 43% of Europeans lack basic digital skills, with half of them having no digital skills at all.



At the moment, ALL DIGITAL is coordinating the OPEN-AE project, which aims to introduce free and open-source technologies to those who do not have enough digital skills or are in need of upskilling. The project consortium involves partners working in the sector of digital inclusion running digital competency centres around Europe, plus one European network, ALL DIGITAL. The project aims to bridge the digital skills gap with free and open source technologies by developing a training curriculum directed at e-facilitators working in the non-formal sector. This curriculum will be open and modular, allowing trainers to immediately adapt it to their training needs.



Open-AE aims not merely to be a one-off project: it wants to start a movement in the digital inclusion sector to ensure the digital skills gap is not bridged merely with proprietary software, but that users are empowered, with access to the relevant software and the knowledge to keep participating in open culture after the training ends. Many in the digital inclusion sector are intimidated by free and open source software, believing they need more than basic skills to use it effectively. Breaking down this image of free and open source software and making it more approachable is essential to ensure lower-skilled users can not only access and use digital technologies but are empowered by them.



The Open-AE consortium would like to present their work at FOSDEM to engage in a dialogue with developers and the open source community about activities directed at bridging the digital skills gap, to learn what work developers are doing, and to engage with the open community in a direct capacity. Two partners based in Brussels, the European network ALL DIGITAL and Maks (Medie en Actie in Kureghem), will present the work. The presentation will cover the need to work in the field of digital inclusion.



The presentation will cover the challenges the digital inclusion sector faces in approaching free and open source technologies, the process by which the curriculum was developed, the curriculum itself, and how to scale this movement and carry it further.

Back

Prototyping the Internet of Things with Wyliodrin STUDIO

Home

Speaker Alexandru Radovici
RoomH.2215 (Ferrer)
TrackLightning Talks
Time12:20 - 12:35
Event linkView original entry

In 2014, teaching a Raspberry Pi programming course was a real challenge, mostly due to the lack of development devices. This is how we came up with the idea of building Wyliodrin STUDIO.



Wyliodrin STUDIO is an easy to use IDE for the Internet of Things that enables remote control over embedded devices. While it is a good prototyping tool, the platform also targets students and educators who want to get started in the IoT field. It is designed to help both technical and non-technical people to get started with programming devices such as the Raspberry Pi.



In this talk we aim to present Wyliodrin STUDIO, how it works and how we and other universities used it to teach IoT technologies in classes such as computer science, power engineering and film directing.

Wyliodrin STUDIO is an open source, web-based IDE designed for fast prototyping of Internet of Things applications.



We built this platform because we needed an affordable way of programming embedded devices such as the Raspberry Pi. Since 2014, when the first version of the platform was released, we have improved the solution so that it can now be used to remotely program, control and monitor devices. At the moment, the platform is compatible with devices such as the Raspberry Pi and BeagleBone Black and supports Python, Node.js and visual, block-based programming languages.



Since 2014 the platform has been used by companies such as Intel and Cisco and in institutions such as UCLA, USC, Ulm University and Toronto Public Library.



The purpose of our presentation is not only to make an overview of Wyliodrin STUDIO's characteristics, but also introduce the audience to our experience in teaching IoT courses and how the platform helped us.

Back

DuckDB

Home

Speaker Hannes Mühleisen
RoomH.2215 (Ferrer)
TrackLightning Talks
Time12:40 - 12:55
Event linkView original entry

We present DuckDB, our new, Open Source embedded analytical data management system.

Data management systems have evolved into large monolithic database servers running as stand-alone processes. This is partly a result of the need to serve requests from many clients simultaneously and partly due to data integrity requirements. While powerful, stand-alone systems require considerable effort to set up properly and data access is constricted by their client protocols. There exists a completely separate use case for data management systems, those that are embedded into other processes where the database system is a linked library that runs completely within a "host" process. The most well-known representative of this group is SQLite, the most widely deployed SQL database engine with more than a trillion databases in active use. SQLite strongly focuses on transactional (OLTP) workloads, and contains a row-major execution engine operating on a B-Tree storage format. As a consequence, SQLite's performance on analytical (OLAP) workloads is very poor.
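For contrast, the embedded, in-process model that SQLite pioneered (and that DuckDB adopts for analytics) looks like this from Python's standard library - the whole engine runs inside the host process, with no server to set up:

```python
import sqlite3

# The database engine is just a linked library: connect() opens it
# inside this process, and queries are ordinary function calls.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?)",
                [("a", 1.5), ("a", 2.5), ("b", 4.0)])
avg = con.execute(
    "SELECT sensor, AVG(value) FROM readings "
    "GROUP BY sensor ORDER BY sensor"
).fetchall()
assert avg == [("a", 2.0), ("b", 4.0)]
```

DuckDB exposes the same usage pattern, but with an engine built for exactly this kind of aggregation query.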



There is a clear need for embeddable analytical data management. This need stems from two main sources: interactive data analysis and edge computing. Interactive data analysis is performed using tools such as R or Python. The basic data management operators available in these environments through extensions (dplyr, Pandas, etc.) closely resemble stacked relational operators, much like in SQL queries, but lack full-query optimization and transactional storage. Embedded analytical data management is also desirable for edge computing scenarios. For example, connected power meters currently forward data to a central location for analysis. This is problematic due to bandwidth limitations, especially on radio interfaces, and also raises privacy concerns. An embeddable analytical database is very well-equipped to support this use case, with data analyzed on the edge node. The two use cases of interactive analysis and edge computing appear orthogonal. But surprisingly, the different use cases yield similar requirements.



In this talk, we present our new system, DuckDB. DuckDB is a new purpose-built embeddable relational database management system created at the Database Architectures group of the CWI. DuckDB is available as Open-Source software under the permissive MIT license. To the best of our knowledge, there currently exists no purpose-built embeddable analytical database despite the clear need outlined above. DuckDB is no research prototype but built to be widely used, with millions of test queries run on each commit to ensure correct operation and completeness of the SQL interface.



DuckDB is built from the ground up with analytical query processing in mind. As storage, DuckDB uses a single-file format with tables partitioned into columnar segments. Data is loaded into memory using a traditional buffer manager, however, the blocks that are loaded are significantly larger than that of a traditional OLTP system to allow for efficient random seeks of blocks. Queries are processed using a vectorized query processing engine to allow for high performance batch processing and SIMD optimizations.
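The vectorized model described above can be illustrated in miniature: operators consume whole vectors of values instead of one tuple at a time, so interpretation overhead is paid once per batch (the batch size below is illustrative; real engines pick sizes that keep a vector in CPU cache):

```python
def scan_batches(column, batch_size=1024):
    """Yield a column in fixed-size vectors, as a vectorized engine's
    scan operator would."""
    for i in range(0, len(column), batch_size):
        yield column[i:i + batch_size]

def vectorized_sum(column):
    # Each operator invocation processes a whole vector, amortizing
    # dispatch overhead that a row-at-a-time engine pays per tuple.
    return sum(sum(batch) for batch in scan_batches(column))

assert vectorized_sum(list(range(10_000))) == 49_995_000
```

In a real engine each per-batch step would additionally be a tight loop over a columnar array, amenable to SIMD.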

Back

Apache DataSketches

Home

Speaker Claude Warren
RoomH.2215 (Ferrer)
TrackLightning Talks
Time13:00 - 13:15
Event linkView original entry

In the analysis of big data there are often problem queries that don't scale because they require huge compute resources to generate exact results, or don't parallelize well. Examples include count-distinct, quantiles, most frequent items, joins, matrix computations, and graph analysis. Algorithms that can produce accuracy-guaranteed approximate answers for these problem queries are a required toolkit for modern analysis systems that need to process massive amounts of data quickly. For interactive queries there may not be other viable alternatives, and in the case of real-time streams, these specialized algorithms, called stochastic, streaming, sublinear algorithms, or 'sketches', are the only known solution. This technology has helped Yahoo successfully reduce data processing times from days to hours or minutes on a number of its internal platforms and has enabled subsecond queries on real-time platforms that would have been infeasible without sketches. This talk provides a short introduction to sketching and to Apache DataSketches, an open source library of these algorithms designed for large production analysis systems.

Fast:
Sketches are fast. The sketch algorithms in this library process data in a single pass and are suitable for both real-time and batch processing. Sketches enable streaming computation of set expression cardinalities, quantiles, frequency estimation and more. This allows simplification of a system's architecture and fast queries of heretofore difficult computational tasks.



Big Data Platforms:
This library has been specifically designed for big data platforms. Included are adaptors for Hadoop Pig, Hive, Spark, Druid, and PostgreSQL, which can also serve as examples for other systems, along with many other capabilities typically required in big data analysis systems, such as a Memory package for managing large off-heap memory data structures. Our sketch library is implemented in Java, C++ and Python and provides binary compatibility across languages and platforms. Some of our sketches provide off-Java-heap capability, which dramatically improves performance in large systems. Our APIs provide a rich set of options for fine-tuning performance parameters that are particularly important for large systems.



Analysis:
Built-in Theta Sketch set operators (Union, Intersection, Difference) produce sketches as a result (and not just a number) enabling full set expressions of cardinality, such as ((A ∪ B) ∩ (C ∪ D)) \ (E ∪ F). This capability along with predictable and superior accuracy (compared with Include/Exclude approaches) enable unprecedented analysis capabilities for fast queries.

Back

Go REUSE to license your code

Home

Speaker Max Mehl
RoomH.2215 (Ferrer)
TrackLightning Talks
Time13:20 - 13:35
Event linkView original entry

Developing Free Software is fun; dealing with the licensing and copyright information is not. The REUSE project changes that! With three simple steps, it makes adding and reading licensing and copyright information easy for both humans and machines. In this presentation, Max Mehl will guide you through the REUSE principles and present how to make clear licensing simple.

If you want to grant users the freedom to use, study, share, and improve your software, you have to grant those freedoms in the license of the software. To encourage people to develop Free Software, we help developers understand and apply Free Software licensing. REUSE, started in 2017, contributes to this goal. Any project following the initiative's recommendations makes its copyright and licensing information readable to both humans and machines. This way, we want to ensure that individuals, organisations and companies who are re-using code are aware of the license terms chosen by the original author.



REUSE does not reinvent the wheel. On the contrary, it integrates nicely into development processes and other best practices for Free Software licensing. Additionally, there are tools and documentation to help you get started. We will have a closer look at these during this talk.

Back

PICTOR: A free-to-use open source radio telescope

Home

Speaker Apostolos Spanakis-Misirlis
RoomH.2215 (Ferrer)
TrackLightning Talks
Time13:40 - 13:55
Event linkView original entry

PICTOR, located in Athens, Greece, consists of a 1.5-meter parabolic antenna that allows anyone to make continuous and spectral (i.e. hydrogen line) drift-scan observations of the radio sky in the 1300~1700 MHz regime for free. The goal of this effort is to introduce students, educators, astronomers and others to the majesty of the radio sky, promoting radio astronomy education without the need to build a large and expensive radio telescope.



PICTOR is a fully open source (software & hardware) project: https://github.com/0xCoto/PICTOR

PICTOR is a free-to-use open source and open hardware radio telescope that aims to promote radio astronomy on a budget. It consists of a 1.5 meter parabolic dish antenna, a 1420 MHz-optimized feedhorn, a two stage low noise amplifier (LNA) with a built-in high-pass filter, and an RTL-SDR. Future upgrades may also use higher-bandwidth SDRs, such as the LimeSDR Mini.



This radio telescope allows users to measure hydrogen line emissions from our galaxy. Under certain conditions, hydrogen atoms can emit photons with a wavelength of 21 cm, which corresponds to a frequency of 1420.405 MHz. Because our galaxy is so rich in neutral hydrogen, a radio telescope like PICTOR is capable of detecting such faint radio emissions. When the telescope is pointing at the galactic plane (the Milky Way band), the intensity around 1420 MHz gets significantly stronger. Radio astronomers are able to use information like the Doppler shift such emissions have undergone to determine neutral hydrogen concentration, map the spiral geometry of our galaxy, and even provide evidence for the existence of dark matter by plotting the rotation curve of the Milky Way!
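As an illustration of that last point, the radial velocity of a hydrogen cloud follows from the standard non-relativistic Doppler relation (textbook physics, not PICTOR-specific code):

```javascript
// Radial velocity from the Doppler shift of the 21 cm hydrogen line.
// Non-relativistic approximation: v = c * (f0 - f_observed) / f0.
const C = 299792.458;    // speed of light, km/s
const F0 = 1420.405751;  // hydrogen line rest frequency, MHz

function radialVelocity(observedMHz) {
  // Positive result = source receding (observed frequency redshifted).
  return C * (F0 - observedMHz) / F0;
}
```

A line observed at 1420.0 MHz, for instance, corresponds to a cloud receding at roughly 85 km/s; combining such velocities across galactic longitudes is what yields the rotation curve.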



In order to observe with PICTOR, users can simply visit www.pictortelescope.com, click "Observe", fill in their observation parameters (frequency, observing duration, etc.) and submit the observation. As soon as the observation is finished, they receive an email with the observation data and the parameters they entered.



Since the initial launch, PICTOR has received many updates and improvements, particularly in the software backend: providing more data to users, using advanced techniques to increase the signal-to-noise ratio by calibrating spectra and mitigating radio frequency interference (RFI) where present, and more.



There is also a PDF for users who are unfamiliar with radio astronomy and radio telescopes to get started: https://www.pictortelescope.com/Observingtheradioskywith_PICTOR.pdf



PICTOR is a fully open source (software & hardware) project, and everything can be found on the GitHub repository: https://www.github.com/0xCoto/PICTOR

Back

Advancing science with Dataverse

Home

Speaker Philip Durbin
RoomH.2215 (Ferrer)
TrackLightning Talks
Time14:00 - 14:15
Event linkView original entry

Dataverse is open source research data repository software installed by 52 institutions around the world and translated into ten languages. It facilitates data sharing, allowing researchers to replicate and build upon each other's work and receive academic credit in the form of citations for publishing data. Data deposited into Dataverse installations is made more discoverable through harvesting of metadata via standard protocols, publication to registries such as DataCite, and indexing into scholarly search engines such as Google Dataset Search. Data exploration is enabled by a variety of tools contributed by the international Dataverse community that make use of Dataverse APIs to get data in and out. These APIs also enable a variety of integrations with scholarly publishing systems such as electronic lab notebooks, journal systems, reproducibility platforms, and more.

In this talk, a core developer of Dataverse will introduce the audience to the world of scholarly publishing, making the case for data publication and how it contributes to the advancement of science. Emphasis will be placed on how Dataverse goes beyond simply being open source by being friendly to contributions from newcomers.

Back

Towards decentralized alternatives for code collaboration

Home

Speaker Alexis Sellier (cloudhead)
RoomH.2215 (Ferrer)
TrackLightning Talks
Time14:20 - 14:35
Event linkView original entry

This talk will walk the audience through radicle, a peer-to-peer protocol for code collaboration. It will touch on the design approach of the radicle stack and outline the project's challenges.

Radicle is being built to provide a convenient collaboration workflow without intermediaries or central servers. Issues, patches and code review items can be shared seamlessly between peers and interacted with on the user's machine, just like one interacts with a git repository.



Designed around peer-to-peer networking, it offers a new design space for bazaar-style development. Login systems are replaced by public key cryptography, hosted issue trackers are replaced by local peer replication, and the idea of a single canonical upstream is replaced by a peer-to-peer model familiar to the open source hackers of the 90s and early 2000s—updated with a new protocol for open & trusted peer collaboration.

Back

Getting started with FPGA's for Packet Processing

Home

Speaker Miroslaw Walukiewicz
RoomH.2215 (Ferrer)
TrackLightning Talks
Time14:40 - 14:55
Event linkView original entry

The FPGA is an emerging technology that can increase the performance of packet processing, driven by the need to handle increasing protocol complexity.
There are many system models in which an FPGA is more or less suitable, depending on the use-case.



The presentation will introduce these system models, showing how to use an FPGA inside larger systems.
The models include inline, lookaside and fast-path packet processing, each requiring a different system-level approach to make the accelerated system more usable and easier to integrate with existing components.



In this talk we cover the challenges related to integrating an FPGA system with software components such as DPDK/kernel drivers and orchestration.
Examples of real FPGA deployments will be presented, based on the Intel Programmable Acceleration Card family.

The talk describes a new usage of FPGA technology for packet processing. It presents opportunities, problems to be solved and challenges related to using complex programmable systems for mass deployment.



It covers typical use-cases and some basic rules defining how to use an FPGA system efficiently and integrate it with existing software stacks and existing orchestration systems like OpenStack or Kubernetes.

Back

Quality diagrams with PyCirkuit

Home

Speaker Orestes Mas
RoomH.2215 (Ferrer)
TrackLightning Talks
Time15:00 - 15:15
Event linkView original entry

I'd like to present PyCirkuit, a small Python application acting as a front-end to circuit-macros and the dpic language, allowing the creation of high-quality graphics and circuit diagrams to be included in LaTeX and other documents.



This application is inspired by "Cirkuit", a KDE4 application written by Matteo Agostinelli.

Back

License compliance for embedded Linux devices with Buildroot

Home

Speaker Luca Ceresoli
RoomH.2215 (Ferrer)
TrackLightning Talks
Time15:20 - 15:35
Event linkView original entry

Producing a Linux-based electronic device requires putting together lots of open source software packages, which is a complex task.



Complying with the licensing obligations for each of them is also complex, especially if you are not a lawyer. Not complying is immoral, illegal and risky.



Discover how your build system can help you!

With live demos, Luca will introduce you to:




Back

Open Source Support Program by OTA

Home

Speaker Gratien D'haese
RoomH.2215 (Ferrer)
TrackLightning Talks
Time15:40 - 15:55
Event linkView original entry

The Open Technology Assembly (OTA), formerly known as the Belgian Unix Users Group, is a non-profit organisation whose main goal was to promote and organise meetings around Unix and Open Source in general (mainly more than a decade ago).
Over the years the Internet took over, interest in our meetings declined, and we finally stopped organising meetings altogether. However, as a non-profit organisation we would like to do something useful, such as supporting and funding new open source related projects.

The Open Technology Assembly (OTA) will open an Open Source Support program to which projects can apply for funding. It would be nice if the chosen open source project owners submitted a (lightning) talk for next year's FOSDEM; however, this is not obligatory.
We will also organise an independent jury to go over the submitted projects and make a list of the projects which will get funding. Before FOSDEM starts we will have web pages explaining most of the details.

Back

NGI Zero: A treasure trove of tech awesome

Home

Speaker Michiel Leenaars
RoomH.2215 (Ferrer)
TrackLightning Talks
Time16:00 - 16:15
Event linkView original entry

The Next Generation Internet initiative is the first concerted effort in Europe to put significant public funding to hands-on work to really fix the internet. The long term vision of the initiative is to make the internet what we need and expected it to be in the first place: Resilient. Trustworthy. Sustainable. The concrete mission of the Next Generation Internet initiative is to "re-imagine and re-engineer the Internet for the third millennium and beyond". With new projects starting all the time, the density of awesome open source, open hardware, new science and new standards in-the-making is already intense: about 200 projects are currently on their way. These range from encrypted synchronisation for calendars and address books to symbolical protocol verification, from an open hardware RISC-V SoC to removing binary seeds from operating systems, from ethical search to the Fediverse etc.



NGI Zero offers funding to independent researchers and FOSS developers working on free and open projects in the area of privacy and trust enhancing technologies and on search, discovery and discoverability. It also offers an elaborate 'pipeline' of supporting activities that live up to high standards (sometimes called 'walk the talk') in terms of security, privacy, accessibility, open source licensing, standardisation, packaging, etc. The talk will provide an overview of the awesome R&D that is now in the pipeline, how these projects are supported, and everything you need to know about the various opportunities to 'come and work for the internet'.

NGI Zero Discovery and NGI Zero PET are a significant and ambitious effort by a large group of organisations led by NLnet foundation (which was instrumental in pioneering the early internet in Europe):






The budget for the effort is kindly provided by the European Commission.

Back

European Software Engineering funded research

Home

Speaker Luis C. Busquets Pérez
RoomH.2215 (Ferrer)
TrackLightning Talks
Time16:20 - 16:35
Event linkView original entry

This lightning talk will explain the latest funding results of the European Framework Programme for Research (Horizon 2020) concerning software engineering.

Back

FOSDEM infrastructure review

Home

Speaker Richard Hartmann
RoomH.2215 (Ferrer)
TrackLightning Talks
Time16:40 - 16:55
Event linkView original entry

Informational and fun.

Back

WebMIDI

Home

Speaker Steven Goodwin
RoomH.1302 (Depage)
TrackJavaScript
Time09:00 - 09:25
Event linkView original entry

The connection between Web and MIDI seems as likely as the joining of Garlic with Bread! Yet, we now have the power to create music from the web browser! Either by generating MIDI files for later manipulation, or as live instruments, WebMIDI provides us with the power to build some amazing online music applications.

In this short-form talk, Steven will cover the internal details of the MIDI protocol and its related file format, showing how music is represented by computers. This covers both the recording and playback of music within the browser and in external applications. It comes complete with live demos, synthesized sounds, and occasional music jokes!
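For a taste of the protocol: MIDI channel messages are just a few bytes, and raw byte arrays are exactly what the Web MIDI API sends and receives. The helpers below are a hypothetical sketch, not code from the talk:

```javascript
// MIDI channel messages are 2-3 bytes: a status byte (message type ORed
// with the channel number) followed by data bytes in the 0-127 range.

// Note On: status 0x90 | channel, then note number and velocity.
// Middle C is note 60.
function noteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}

// Note Off: status 0x80 | channel, note number, release velocity 0.
function noteOff(channel, note) {
  return [0x80 | (channel & 0x0f), note & 0x7f, 0];
}

// In a browser, these bytes could be sent to a synth via the Web MIDI API:
//   const access = await navigator.requestMIDIAccess();
//   const output = [...access.outputs.values()][0];
//   output.send(noteOn(0, 60, 100));                    // play middle C
//   setTimeout(() => output.send(noteOff(0, 60)), 500); // release it
```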

Back

Building Decentralized Social Virtual Reality using WebXR on your browser

Home

Speaker Rabimba Karanjai
RoomH.1302 (Depage)
TrackJavaScript
Time09:30 - 09:55
Event linkView original entry

Learn how to build auditable, privacy-aware social VR experiences right inside your webpage in JavaScript using the WebXR API: cross-platform, instant, and running on every device with a browser, adapting to the Mixed Reality capability of your user.
This session will give you a short primer on the WebXR APIs and hands-on experience building a small social VR experience using open source tools and JavaScript in your browser.

Virtual Reality (VR) is undoubtedly one of the most sought-after technologies of the present time. Everybody is trying to make their presence felt in the sphere, and one of the prime use cases of VR is shared spaces.
It opens up the possibility of shared learning spaces, meeting spaces and even training spaces, in real time and in an interactive way.
However, these kinds of experiences are often mired in privacy concerns: who is hosting the space, what data is retained, and who is observing the behaviour, on top of the data retention implications.



What if we could build Virtual Reality experiences right in the browser using JavaScript, with all the shared-room logic in an open, auditable way? And what if we could host it in a decentralized way, making it resistant to takedowns and giving users the ability to host their own rooms?



Interested in social VR and the possibilities it opens, and want to get your hands wet, but don't want to invest in learning another tool? Don't want to commit to a steep learning curve or buy expensive machines up front? Or are you afraid of the walled garden of SDKs? In this talk we rip open the veiled wall of proprietary VR with Web Virtual Reality, designed to run in the browser on any device, including your phone. Learn how to build auditable, privacy-aware social VR experiences right inside your webpage in JavaScript using the WebXR API, adapting to the Mixed Reality capability of your user.

Back

PWAs on steroids

Home

Speaker Trishul Goel
RoomH.1302 (Depage)
TrackJavaScript
Time10:00 - 10:25
Event linkView original entry

PWAs bring the best of both mobile and native apps to users. PWAs equipped with service workers provide features like offline availability, push notifications, etc. Now, with modern web APIs, PWAs reach beyond the browser and into hardware. Consider turning a bulb on and off with your PWA; sounds cool? Let's learn how in this talk!

PWAs have been limited to offline availability and push notifications for a long time. I want to show developers that there is much more to PWAs, and how to interact with more device-specific features using modern web APIs like Web Bluetooth, speech synthesis, speech recognition, Web Share, etc. With this talk I want to give attendees a compelling reason to develop a PWA instead of a mobile app for their next project. I plan to give a quick introduction to service workers and then focus on the web APIs, their usage and implementation, with demos.

Back

Write Safer JavaScript Applications with TypeScript!

Home

Speaker Davy Engone (Hackages)
RoomH.1302 (Depage)
TrackJavaScript
Time10:30 - 10:55
Event linkView original entry

In 25 minutes, you'll learn how TypeScript can help you build safer and more resilient JavaScript applications. Expect some demos, some code and a lot of fun!

Back

XR adds: “Try before you buy”

Home

Speaker Anastasiia Miroshnichenko
RoomH.1302 (Depage)
TrackJavaScript
Time11:00 - 11:25
Event linkView original entry

One day people will wake up and realize that Augmented Reality is here.
Advertisers are keen on AR because it can create higher levels of engagement than traditional ads. The power of gamification, and of people experimenting with AR technology, helps them make better decisions as consumers.
You can find an ad in your news feed, open the camera, and preview a product in the "Real" world.

One day people will wake up and realize that AR is here. There are many questions around this sometimes-overhyped sector, and the biggest of them is: how will it make money?
There are several answers to that, and we will take a look at the most popular ones.



Advertisers are keeping an eye on AR because it can create higher levels of engagement than traditional advertisements.



People are very interested in things like shopping and ads that help them make better decisions.
On the other side, brands want their customers to spend more time thinking about their products and personalizing them to their needs.



And XR (Extended Reality) can enable that!

Back

Creating symphonies in JavaScript

Home

Speaker Steven Goodwin
RoomH.1302 (Depage)
TrackJavaScript
Time11:30 - 11:55
Event linkView original entry

In 1996 I read about György Ligeti’s mechanical music (notably Poème Symphonique) and started to explore the possibility of precisely timed music. Knowing that computers now allowed more programmatic control over sound than they did in 1962, when Ligeti wrote his piece, I set about writing algorithms to generate MIDI files, which ultimately became Symphony 1 in C# minor.

This talk covers the methods of generating MIDI files algorithmically in JavaScript, using those from Symphony 1 as examples, showing how you can turn an abstract idea into music. It also discusses the additional processes and work necessary to turn a core idea into a full album. Along the way Steven touches on copyright, music trivia, geeky tech stuff, and the philosophy of music that can never be performed by humans.
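To illustrate the approach, here is a minimal sketch of generating a Standard MIDI File algorithmically in JavaScript. The helpers are hypothetical and greatly simplified compared to the tooling behind Symphony 1, but the byte layout follows the Standard MIDI File format:

```javascript
// Build a format-0 (single track) Standard MIDI File as a byte array.

function u16(n) { return [(n >> 8) & 0xff, n & 0xff]; }
function u32(n) { return [(n >>> 24) & 0xff, (n >>> 16) & 0xff, (n >>> 8) & 0xff, n & 0xff]; }
function chars(s) { return [...s].map(c => c.charCodeAt(0)); }

// Delta times are encoded as variable-length quantities: 7 bits per byte,
// high bit set on all but the last byte.
function vlq(n) {
  const bytes = [n & 0x7f];
  while ((n >>= 7) > 0) bytes.unshift((n & 0x7f) | 0x80);
  return bytes;
}

// Turn a list of {note, duration} (duration in ticks) into a .mid byte array.
function buildMidi(notes, ticksPerQuarter = 96) {
  const events = [];
  for (const { note, duration } of notes) {
    events.push(...vlq(0), 0x90, note, 100);      // note on, velocity 100
    events.push(...vlq(duration), 0x80, note, 0); // note off after `duration`
  }
  events.push(...vlq(0), 0xff, 0x2f, 0x00);       // end-of-track meta event
  return [
    ...chars('MThd'), ...u32(6), ...u16(0), ...u16(1), ...u16(ticksPerQuarter),
    ...chars('MTrk'), ...u32(events.length), ...events,
  ];
}

// An algorithm can now "compose": e.g. an ascending C# minor arpeggio
// of quarter notes, each lasting one quarter (96 ticks).
const bytes = buildMidi([61, 64, 68, 73].map(note => ({ note, duration: 96 })));
```

Writing `bytes` to disk yields a playable `.mid` file; the interesting part, of course, is the algorithm that chooses the notes.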

Back

AMENDMENT JavaScript lightning talks

Home

Speaker Ludovic Gasc
RoomH.1302 (Depage)
TrackJavaScript
Time12:00 - 12:25
Event linkView original entry

Take the opportunity of the lunch break to present your projects, tips or anything else, as long as it's linked with JavaScript and open source.

Three lightning talks have been presented:




  1. Flutter and React Native by Taym Haddadi

  2. Svelte.js by ?

  3. Haxe by Laurent Deketelaere




(This replaces 'Improve React App using design patterns' by Ankush Behl)

Back

AMENDMENT JavaScript on Microcontrollers

Home

Speaker Niels Leenheer
RoomH.1302 (Depage)
TrackJavaScript
Time12:30 - 12:55
Event linkView original entry

Building an IoT device feels out of reach to a lot of web developers. Most of us didn’t study electrical engineering, and never learned to solder. And programming a microcontroller is on a completely different level than building a website.
But that is just not true anymore. Nowadays microcontrollers have become powerful enough to run JavaScript. And that opens up IoT to all of us. With just 20 lines of code we can build a connected lightbulb, and within seconds we can wirelessly deploy it. Write a couple of lines more, and you can even remotely control a car. And that is precisely what we are going to do.



(This replaces 'Let's Get Func-y' by Jemima Abu)

Back

Are PWAs ready to take over the world?

Home

Speaker Jarek Lipski
RoomH.1302 (Depage)
TrackJavaScript
Time13:00 - 13:25
Event linkView original entry

This talk offers a walk-through of the main PWA features and a comparison of how they behave across different platforms (Linux, Android, iOS) and on various web browsers (Firefox, Chrome, Safari). Practical code examples will come from Sojourner, a FOSDEM conference companion app. We will also discuss some UX/UI challenges and their potential solutions specific to PWAs.

During this talk we will discuss the following aspects of PWAs and illustrate them with practical code examples.



Performance and network reliability:






Persistent Storage:






Installation:






Design:






At the end we will summarize the state of PWAs in 2020 and for which types of applications they work best.



Examples used to illustrate this talk will come from Sojourner, a FOSDEM conference companion app written using the Vue.js framework.

Back

2nd Generation JavaScript Frameworks & Libraries: Beyond Angular, React, and Vue!

Home

Speaker Geertjan Wielenga
RoomH.1302 (Depage)
TrackJavaScript
Time13:30 - 13:55
Event linkView original entry

An overview of an interesting new development over the past years -- many vendors, large and small, have been making their JavaScript-based technology stacks available on GitHub. What does that mean and how to evaluate this development? Find out in this session, which includes small code demos and tips and tricks.

Did you know that over the past few years, large enterprises have been developing and open sourcing their JavaScript technology stacks? On GitHub, you'll find solutions by ING, Uber, PayPal, the Financial Times, Oracle, Microsoft, and many others. Some of these are software vendors, while others are in a variety of other industries. Each of them starts from open source frameworks and libraries, and all of them are interested in contributions.



The session, with several live coding scenarios, focuses on something that's been going on below the surface, mostly unseen: large enterprises are using open source solutions in the JavaScript ecosystem (e.g., React, Vue, Knockout, Angular), developing their own internal tech stacks, and then pushing these stacks out to GitHub.



Let's explore the advantages of these and see what can be done and how practical these developments are.

Back

Pushing the limits of the Web with WebAssembly

Home

Speaker Jon Lopez Garcia
RoomH.1302 (Depage)
TrackJavaScript
Time14:00 - 14:25
Event linkView original entry

The Web is omnipresent nowadays and meets most of the needs of our applications.



For almost 10 years, leading browsers have been working hard to push the boundaries and performance of our connected apps.



This is also the main reason why WebAssembly, the new binary standard initially implemented by Safari, Chrome, Firefox and Edge, appeared. It allows the execution, in the browser, of your favourite programming languages at almost native speed.

During this session, we will review the techniques browsers currently use to optimise the execution of JavaScript code, outline the limitations of those solutions, present the benefits provided by WebAssembly, and review its performance.
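As a minimal taste of what the standard defines, the following hand-encoded module exports an `add` function; the byte layout follows the public WebAssembly binary format, and the same module runs in any compliant engine (browsers or Node.js):

```javascript
// A tiny WebAssembly module encoded by hand, exporting add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // type section: one function type (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // function section: one function, using type index 0
  0x03, 0x02, 0x01, 0x00,
  // export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Synchronous compilation is fine for tiny modules; real applications
// would use WebAssembly.instantiateStreaming over the network.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const add = instance.exports.add;
```

In practice one compiles C, C++ or Rust to such modules with a toolchain rather than writing bytes by hand, but the example shows how small the format's core really is.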



Enjoy the future of the Web, now!

Back

State of Node.js Core

Home

Speaker Ludovic Gasc
RoomH.1302 (Depage)
TrackJavaScript
Time14:30 - 14:55
Event linkView original entry

Node.js is now over a decade old. With Node.js 12 just entering into long-term support, and Node.js 13 being released, let us take a look at the new features, breaking changes, and what is next.

Node.js is an open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a browser.



Corporate users of Node.js software include Groupon, IBM, LinkedIn, Microsoft, Netflix, PayPal, SAP, Walmart, and Yahoo!.



(Ludovic Gasc replaces Bethany Griggs as speaker.)

Back

Serverless.com framework

Home

Speaker Kirill Kolyaskin
RoomH.1302 (Depage)
TrackJavaScript
Time15:00 - 15:25
Event linkView original entry

Who said that everything about serverless computing should be proprietary?
Do you want to deploy your functions and infrastructure in an open source way?
Do you want a modular, JS-based tool for it?
Come and learn about Serverless.com, an open source multi-cloud tool which supports Kubeless, AWS Lambda, Azure Functions and many more!

A short talk covering:
- what is serverless computing?
- why is it important to have an open-source deployment tool?
- how the Serverless Framework works
- advantages, weak points and lessons learned

Back

New features of Vue 3.0

Home

Speaker Martin Naughton
RoomH.1302 (Depage)
TrackJavaScript
Time15:30 - 15:55
Event linkView original entry

Vue 3.0 is scheduled to be released in Q1 2020. With lots of new features in Vue.js 3.0, we look at the biggest ones and how they can be used in your code base to improve your programming experience.

Title: New features in Vue.js 3.0



Description:
Vue 3.0 is scheduled to be released in Q1 2020. It has been 6 years since the first version (0.6) of Vue.js was released. Since then the community has grown, and it is now among the top 3 most popular JavaScript frameworks. The current release, Vue.js 2.6, has a lot of plugins that expand its programming and functionality features, and some of these features have been incorporated into the Vue 3.0 code base.



The talk will explain the following new features and demo how they are used.




Back

How to create Javascript-powered Smartglasses

Home

Speaker Ruben van der Leun
RoomH.1302 (Depage)
TrackJavaScript
Time16:00 - 16:25
Event linkView original entry

After having worked at an AR-focused company with the Vuzix, a Google Glass-like product, Ruben had a lot of fun with the hardware and really wanted one at home to play around with. Unfortunately, they are too expensive to buy just for hobby purposes.



This session will cover what was required to build his own wearable, the pitfalls, the compromises, and the sheer joy of saying "Screw it, I'll build it myself!".

With the use of a Raspberry Pi, a Vufine display and a lot of JavaScript, I've managed to cobble together a hobby project where I essentially have my own JavaScript-powered smartglasses. The main reason I built it is that I want to prototype ideas for what you could do with the platform, but I think it's also an interesting example of how much you can do with JavaScript out of the box. The focus of the talk is how the stack works, and how I'm building a platform that will allow web developers to easily test out their own ideas for what smartglasses could do.

Back

Web of Twins

Home

Speaker Philippe Coval
RoomH.1302 (Depage)
TrackJavaScript
Time16:30 - 16:55
Event linkView original entry

An overview of the Web of Things concept and the Mozilla WebThings IoT platform, demonstrated using the A-Frame 3D framework for building virtual reality experiences.

The Web of Things connects real-world objects to the World Wide Web.
Mozilla has proposed an open source implementation of the Web of Things concept
to connect and control smart home devices with privacy by design.



Once the WebThings platform is in place, WoT can be used for many other purposes.



As a demonstration, devices can also be "mirrored" in a virtual world
and interacted with differently using XR devices.



Using the A-Frame framework it's very easy to create models
and keep the digital twins updated in real time on the Web.



Each component of this "Web of Twins" experiment will be detailed,
from sensors and actuators to rich 3D user interfaces and more.

Back

STS in Ceph Object Storage

Home

Speaker Pritha Srivastava
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time09:00 - 09:30
Event linkView original entry

Ceph is an open source, highly scalable, software defined storage that provides object, file and block interfaces under a unified system. Ceph Object Storage Gateway (RGW) provides a RESTful object storage interface to the Ceph Storage cluster. It provides an interface that is compatible with a large subset of AWS S3 APIs.



In this talk we discuss the implementation of a subset of the APIs of AWS Secure Token Service (STS). AWS STS is a web service which enables identity federation and cross-account access by providing temporary security credentials.



Ceph Object Storage Gateway now supports some APIs of AWS STS, particularly related to web identity federation and cross-account access. The advantages of these temporary credentials are that they automatically expire after a certain duration, provide limited access (via IAM policies) to resources, are provided to the user upon request, and obviate the need for users/applications to save permanent security credentials, thereby removing a potential security loophole.



As an example, consider a web application that has users and needs access to RGW S3 buckets to read/write large files. The application can delegate identity management to a trusted third-party identity provider (IDP). It can get temporary credentials from STS after authenticating with the IDP and access the required RGW S3 buckets.
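As a hedged sketch of that flow, the request a client sends to the STS endpoint is an ordinary AWS-style query (`AssumeRoleWithWebIdentity` is the standard AWS STS action). The endpoint, role ARN and token below are placeholders; in practice a client such as boto3 would sign and send this for you:

```python
from urllib.parse import urlencode

def build_sts_request(endpoint, role_arn, session_name, web_token, duration=3600):
    """Build an AssumeRoleWithWebIdentity query for an STS-compatible endpoint."""
    params = {
        "Action": "AssumeRoleWithWebIdentity",  # standard AWS STS action
        "Version": "2011-06-15",                # AWS STS API version
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": web_token,          # JWT obtained from the IDP
        "DurationSeconds": str(duration),       # credentials auto-expire
    }
    return f"{endpoint}/?{urlencode(params)}"

# hypothetical RGW endpoint and role, for illustration only
url = build_sts_request(
    "http://rgw.example.com:8000",
    "arn:aws:iam:::role/S3Access",
    "web-app-session",
    "<oidc-jwt>",
)
print(url)
```

The temporary credentials in the response (access key, secret key, session token) are then used for ordinary S3 requests against RGW until they expire.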

Outline of the talk:




  1. Introduction to Ceph and Ceph Object Storage Gateway

  2. Current authentication mechanisms in Ceph Object Storage Gateway

  3. AWS Secure Token Service

  4. STS APIs implemented in Ceph Object Storage

  5. Advantages of using STS

  6. Example

  7. Future Work


Back

NFS Ganesha

Home

Speaker Arjun Sharma
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time09:35 - 10:05
Event linkView original entry

NFS-Ganesha is an extensible user-space NFS server that supports NFS v3, v4, v4.1, v4.2, pNFS, and the 9P protocol. It has an easily pluggable architecture called FSAL (File System Abstraction Layer), which enables seamless integration with many filesystem backends (GlusterFS, Ceph, etc.). There will be a discussion of the components along with an architectural explanation of NFS Ganesha, a detailed look at how a request flows through its various layers, and some critical aspects of using NFS Ganesha. Along with a discussion of "your first contribution to NFS Ganesha", the audience will be engaged in a collaborative session and, with a live demo, take a detailed look at the Clustered HA implementation using pacemaker/corosync, with a specific example of a distributed storage system, GlusterFS.



Finally, there will be an open dialogue about the inclusion of Transport Layer Security in NFS Ganesha. One major drawback of NFS is the lack of encryption for data in transit between the NFS server and client. This is widely frowned upon, and there seems to be a communication gap within the community about its development, which I hope to close, reviving the conversation so that progress can begin.

The intentions behind this talk:



-> Introduce architectural explanation of NFS Ganesha
-> A short code walk-through to explore crucial features of NFS Ganesha
-> Live demo of Clustered High Availability implementation using pacemaker/corosync
-> Talk about the implementation of Transport Layer Security (TLS) in NFS Ganesha
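For context on the FSAL architecture discussed above, a minimal NFS-Ganesha export backed by the GlusterFS FSAL typically looks like the sketch below (hostname, volume and path are placeholders, not taken from the talk):

```
EXPORT {
    Export_Id = 1;
    Path = "/gvol";
    Pseudo = "/gvol";
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;       # route requests through the Gluster FSAL
        Hostname = "localhost";
        Volume = "gvol";      # Gluster volume served by this export
    }
}
```

Swapping the FSAL block (e.g. for Ceph) is what lets the same NFS front end serve different storage backends.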

Back

Evolution of path based Geo-replication in Gluster

Home

Speaker Hari Gowtham
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time10:10 - 10:30
Event linkView original entry

As data is becoming more and more important in the world, we can't afford to lose it even in a natural calamity. We will see how Geo-Replication came about to solve this problem for us and how it has evolved over time.
Through this session, users will learn how easy it is to set up Geo-replication for Gluster to use for their storage and to back up their data, with minimal understanding of storage and Linux. Basic Gluster knowledge will make it even easier.

Back

Run ZFS in userspace

Home

Speaker Harshita Sharma
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time10:35 - 10:50
Event linkView original entry

When running in user space, ZFS uses a user-space binary called ztest.
In cStor, we followed a similar approach to create a binary called 'zrepl' that is part of cStor. It has been built using libraries similar to those used for ztest and contains the transactional, pooled storage layers.
cStor uses ZFS behind the scenes by running it in user space. In this talk we will discuss in detail how we used ZFS in userspace for the cStor storage engine and highlight a few challenges that our team faced while building this data engine.

Back

What's new in Samba ?

Home

Speaker Jeremy Allison
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time10:55 - 11:30
Event linkView original entry

The presentation will give an overview of all the changes happening in the Samba project code, from the fileserver virtual filesystem (VFS) rewrite, the new features in the SMB3 code, the quest to remove the old SMB1 protocol and much more. Improvements in Samba scalability, clustering and the Active Directory code will be discussed.



The intended audience is anyone who uses the Samba code, creates products with Samba or is interested in the SMB protocol.

The presentation will give an overview of all the changes happening in the Samba project code, from the fileserver virtual filesystem (VFS) rewrite, the new features in the SMB3 code, the quest to remove the old SMB1 protocol and much more. Improvements in Samba scalability, clustering and the Active Directory code will be discussed. I'll also cover the changes to Samba development and tooling, and how we are modernizing the code base to stay relevant in the Cloud-connected world of software defined storage.



This can be either a 30 minute talk (20 mins + 5 questions) or 60 minute talk (45+10 for questions).

Back

Asynchronous Directory Operations in CephFS

Home

Speaker Patrick Donnelly
Jeff Layton
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time11:35 - 12:15
Event linkView original entry

Metadata-heavy workloads are often the bane of networked and clustered filesystems. Directory operations (create and unlink, in particular) usually involve making a synchronous request to a server on the network, which can be very slow.



CephFS however has a novel mechanism for delegating the ability for clients to do certain operations locally. While that mechanism has mostly been used to delegate capabilities on normal files in the past, it's possible to extend this to cover certain types of directory operations as well.



The talk will describe work that is being done to bring asynchronous directory operations to CephFS. It will cover the design and tradeoffs necessary to allow for asynchronous directory operations, discuss the server and client-side infrastructure being added to support it, and what performance gains we expect to gain from this.
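The latency argument behind this work can be illustrated with a toy model (plain Python asyncio, not CephFS code): N synchronous creates pay N network round trips one after another, while asynchronous creates overlap them:

```python
import asyncio
import time

LATENCY = 0.01  # simulated 10 ms server round trip

async def create(name):
    await asyncio.sleep(LATENCY)  # stand-in for one network request
    return name

async def sync_creates(n):
    # classic behaviour: wait for each reply before issuing the next create
    for i in range(n):
        await create(f"file{i}")

async def async_creates(n):
    # issue all creates at once and await them together
    await asyncio.gather(*(create(f"file{i}") for i in range(n)))

t0 = time.perf_counter()
asyncio.run(sync_creates(20))
sync_t = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(async_creates(20))
async_t = time.perf_counter() - t0

print(f"sync: {sync_t:.3f}s  async: {async_t:.3f}s")
```

The real mechanism is different (the client performs the operation locally under a delegated capability rather than batching requests), but the win comes from the same place: removing the synchronous round trip from the critical path.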


Back

AMENDMENT Rook Cloud Native Storage for Kubernetes

Home

Speaker Alexander Trost
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time12:20 - 12:55
Event linkView original entry

What Rook is, the architecture of Rook, and the storage it runs in Kubernetes.
We'll also take a look at new features added to Rook.

The talk will give an overview of what Rook can do and what is new since last year's talk about Rook Ceph storage.
The overview will cover what Rook is and its architecture.
The second part will show newly added features of Rook.



Agenda:
* What is Rook
* Architecture of Rook
* New features
  * Ceph
  * New Storage Backend: Yugabyte
  * EdgeFS
* Upcoming



Target audience are people interested in Rook, Ceph and Kubernetes.

Back

Building Blocks for Containerized Ceph

Home

Speaker Jose Rivera
Rohan Gupta
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time13:00 - 13:45
Event linkView original entry

Originally, Kubernetes PersistentVolumes (PVs) could only present storage to containers as filesystems. Now, raw block PersistentVolumes (PVs) allow applications to consume storage in a new way. In particular, Rook-Ceph now makes use of them to provide the backing store for its clustered storage in a more Kubernetes-like fashion and with improved security. Now we can rethink the notion of how we structure our storage clusters, moving the focus away from static nodes and basing them on more dynamic, resilient storage devices.

Originally, Kubernetes PersistentVolumes (PVs) could only present storage to containers as filesystems. However, some applications prefer to use block storage, usually for reasons of performance, and have no need for a full filesystem. Several such applications have had to go as far as directly accessing local system directories to get the functionality they need in Kubernetes.



Raw block PVs are a relatively new feature that went beta in Kubernetes 1.13. They allow Kubernetes to present storage to containers as block devices, removing the need for provisioners to format filesystems on top of them. This not only allows for greater performance for the applications that expect it but also helps improve security by reducing the level of permissions such an application's containers require to run.
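As a sketch, a raw block claim differs from a filesystem claim only in its `volumeMode` (the claim name and size below are illustrative, not from the talk):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-osd-data          # hypothetical claim consumed by a storage Pod
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block            # present a raw device, no filesystem on top
  resources:
    requests:
      storage: 100Gi
```

The consuming container then receives the device through `volumeDevices` (a device path) rather than `volumeMounts` (a mounted filesystem).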



Rook-Ceph is the Ceph operator for the Rook project. It provides resilient storage by running the various Ceph components as containers and managing them via Kubernetes. Originally, it would bind-mount system directories to manipulate the storage devices it consumed. It now leverages raw block PVs to store its data, expanding the types of storage it can consume. In particular, cloud environments are now a space where its storage Pods can migrate in response to node failures and have the storage devices move with their Pods.



For the uninitiated, this presentation will start with an overview of how storage is modeled and presented in Kubernetes. It will then describe how that storage was originally consumed by Rook-Ceph, what we changed about it, and the consequences (both good and bad) of those changes.

Back

Explicitly Supporting Stretch Clusters in Ceph

Home

Speaker Gregory Farnum
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time13:50 - 14:25
Event linkView original entry

Ceph is an open source distributed object store, network block device, and file system designed for reliability, performance, and scalability. While Ceph is designed for use in a single data center, users have deployed “stretch” clusters across multiple data centers for many years, and deploying Ceph to back Red Hat’s OpenShift Container Storage product required us to support that workload explicitly and well — in particular, in the face of netsplits.
This requires improvements to our “monitor” leader elections and to the “OSD” peering process to keep data available without breaking our data integrity guarantees. This talk presents the whole cycle of that work from an algorithm and programmer perspective: the dangers we identified, the changes we needed, the architecture changes to support faster test iteration and coding, and the results.
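A toy sketch (not Ceph code) of the quorum arithmetic behind the netsplit problem described above:

```python
# Why a symmetric two-site cluster cannot survive a netsplit
# without a tie-breaker vote in a third location.
def has_quorum(votes_reachable, total_votes):
    """A side may elect a leader only with a strict majority of votes."""
    return votes_reachable > total_votes // 2

# Two data centers, two monitors each: a netsplit leaves 2 of 4 votes
# visible on either side, so neither side may proceed.
print(has_quorum(2, 4))  # neither half has a majority

# Add a fifth, tie-breaker monitor in a third site: whichever data center
# can still reach it holds 3 of 5 votes and keeps the cluster available.
print(has_quorum(3, 5))
```

The strict-majority rule is what protects data integrity: two isolated halves can never both believe they hold quorum.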

Back

A 'Thin Arbiter' for glusterfs replication

Home

Speaker Ravishankar N.
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time14:30 - 15:15
Event linkView original entry

Maintaining consistency in replication is a challenging problem involving locking of nodes, quorum checks and reconciliation of state, all of which impact performance of the I/O path if not done right. In a distributed system, a minimum of 3 nodes storing metadata is imperative to achieve consensus and prevent the dreaded split-brain state. Gluster has had solutions like trusted 3-way replication or the '2 replica + 1 arbiter' configuration to achieve this.



The latest in the series is a 'Thin Arbiter (TA)', which is more minimalist than the existing '1 arbiter', targeted at container platforms and stretch cluster deployments. A TA node can be deployed outside a gluster cluster and can be shared by multiple gluster volumes. It requires zilch storage space and does not affect I/O path latencies in the happy case. This talk describes the design, working and deployment of TA and the potential gotchas one needs to be aware of while choosing this solution.



The intended audience is sysadmins/dev-ops personnel who might want to try out the thin-arbiter volume and troubleshoot any operational issues that may arise.

The Thin Arbiter (TA) is different from normal arbitration logic in the sense that even if only one file is bad in one of the copies of the replica, it marks that entire replica unavailable (despite it having other files in it that are healthy), until it is healed and syncs up to the other good copy. While this might seem like a very bad idea for a highly available system, it works very well to prevent split-brains due to intermittent network disconnects rather than a whole node going off-line indefinitely. In talking about this feature, my talk will cover:
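The "one bad file marks the whole replica bad" behaviour can be sketched with a toy model (illustrative Python, not Gluster code):

```python
# Toy model of thin-arbiter bookkeeping: the TA stores a single
# per-replica flag rather than per-file state, so one bad file
# poisons the whole replica until heal completes.
class ThinArbiter:
    def __init__(self):
        self.bad_replica = None

    def mark_bad(self, replica):
        if self.bad_replica is None:
            self.bad_replica = replica  # whole replica becomes unreadable

    def readable(self, replica):
        return replica != self.bad_replica

    def heal_done(self):
        self.bad_replica = None  # replica has synced up to the good copy

ta = ThinArbiter()
ta.mark_bad("replica-1")            # one stale file on replica-1
print(ta.readable("replica-1"))     # entire replica-1 is now avoided
print(ta.readable("replica-0"))     # reads are served from the good copy
ta.heal_done()
print(ta.readable("replica-1"))     # available again after heal
```

Keeping only one flag per replica is what lets the TA stay tiny and off the happy-path: it only has to answer "which copy, if any, is stale?".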




Back

Management of Storage on OpenShift

Home

Speaker Ankush Behl
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time15:20 - 15:45
Event linkView original entry

This talk will walk through how users can deploy storage on OpenShift and manage it all from the browser. With just a few clicks and almost zero questions asked, we will demonstrate how anyone can deploy & manage storage like never before. From beginners to experts, this session has fun bits for every storage enthusiast.

The goal is to enable every participant to set up their own storage cluster and manage it from the comfort of their browser.
No hefty configurations, just a few clicks to get going.



Key Takeaways
- Operator Lifecycle Manager Overview
- Storage Operators Overview
- Container Storage in OpenShift
- Rook, Ceph
- Management Console for Storage

Back

The history of error correction and detection and how it led to Ceph’s Erasure Coding Techniques

Home

Speaker Danny Abukalam
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time15:50 - 16:20
Event linkView original entry

70 years of academic innovation in the development of error correction codes have led to the advanced erasure coding techniques that we use in Ceph. Learn more about how these came about, the different types, how they work, and how we use them in distributed storage today.

Erasure Coding is the latest in a long line of error detection and correction approaches developed over the last 70 years, all of which have had an impact on the way we approach storing and recovering data in sensible and efficient ways. I'll give an overview of the main approaches over the years, including the parity bit, Hamming codes, RAID, Reed-Solomon codes, and how they have impacted media storage, distributed storage, and their usage in other unexpected ways. I'll then provide an overview of erasure coding across distributed storage and specifically Ceph.
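The simplest of these codes, single parity as used in RAID-4/5, fits in a few lines of Python; Reed-Solomon generalizes the same idea to multiple recoverable losses, which is what erasure-coded pools in distributed storage rely on:

```python
# A toy single-parity erasure code: the parity block is the bytewise
# XOR of the data blocks, so any one lost block can be rebuilt by
# XORing the survivors with the parity.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose data[1]; XOR of the surviving blocks with parity recovers it,
# because every byte appears an even number of times except the lost one.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered)
```

The storage cost is one extra block for the whole stripe, versus a full copy per replica under plain replication; that trade-off is the core appeal of erasure coding.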

Back

Ephemeral Pinning: A Dynamic Metadata Management Strategy for CephFS

Home

Speaker Sidharth Anupkrishnan
RoomH.1308 (Rolin)
TrackSoftware Defined Storage
Time16:25 - 17:00
Event linkView original entry

Having a separate cluster of Metadata Servers (MDS) is a well-known design strategy among distributed file-system architectures. One challenge faced by this approach is how to distribute metadata among the MDSs. Unlike data storage and its associated I/O throughput, which can be scaled linearly with the number of storage devices, file-system metadata is a fairly complex entity to scale due to its hierarchical nature. At first glance, a pure hashing-based metadata distribution strategy seems like a perfect fit. But this is not exactly the case. What are the pitfalls, then? Too many inter-MDS hops (due to POSIX traversal semantics) and loss of hierarchical locality degrade file-system performance; as a result, this is not beneficial for a workload whose directory hierarchy grows in depth rather than breadth. CephFS's metadata balancer takes a different approach by partitioning metadata sub-trees across MDSs, thereby preserving good locality. Although efficient, this involves a lot of back-and-forth migrations of sub-trees, and the locality benefits are sometimes trumped by sub-optimal distributions.



In this talk, we present a new metadata distribution strategy employed in CephFS - Ephemeral Pinning. This strategy combines the benefits of hashing and naive sub-tree partitioning by intelligently pinning sub-trees to MDSs so as to obtain a balanced distribution as the workload metadata grows by depth and breadth. A consistent hashing based load balancer helps in maintaining an optimal distribution during addition or failure of MDSs.
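The consistent-hashing property mentioned above can be sketched with a minimal hash ring (illustrative Python, not the CephFS balancer): adding an MDS moves only a fraction of the directories, whereas a naive `hash(dir) % n` scheme would reshuffle most of them:

```python
import hashlib
from bisect import bisect

def _h(key):
    """Deterministic integer hash for ring placement."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # each node owns many virtual points on the ring for smoother balance
        self._ring = sorted((_h(f"{n}#{v}"), n)
                            for n in nodes for v in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def node_for(self, item):
        # an item belongs to the first ring point at or after its hash
        idx = bisect(self._keys, _h(item)) % len(self._ring)
        return self._ring[idx][1]

dirs = [f"/home/user{i}" for i in range(1000)]
three = Ring(["mds.0", "mds.1", "mds.2"])
four = Ring(["mds.0", "mds.1", "mds.2", "mds.3"])

moved = sum(1 for d in dirs if three.node_for(d) != four.node_for(d))
print(f"{moved} of {len(dirs)} directories moved after adding an MDS")
```

Roughly a quarter of the directories move when going from three ranks to four; the rest keep their pinning, which is exactly the property you want when MDSs are added or fail.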

This talk will cover the following key ideas:






This talk would be beneficial for every distributed file-system project that handles file metadata separately. The audience will get an overview of existing metadata distribution strategies - their pitfalls and benefits - and the reasons why we at CephFS came up with this approach. The benefits of using consistent hashing for distributing metadata are also discussed.

Back

AMENDMENT In Memory of our Friend, Lars Kurth

Home

Speaker Ian Jackson
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time09:45 - 10:00
Event linkView original entry

A tribute, from the FOSDEM Virtualization & IaaS Devroom, to the memory of Lars Kurth



Please note that this is a late addition to the schedule, and the program will now be starting 10 minutes earlier than originally scheduled.

Back

Landscape of new challenges in modern virtualization platforms

Home

Speaker Olivier Lambert
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time10:00 - 10:30
Event linkView original entry

Because virtualization is everywhere, new challenges in the IT world are revealing that this crucial component has to be improved on a regular basis. This requires a lot of coordination between Open Source projects as well as intense research and development efforts.



NVMe storage performance revealing hidden bottlenecks, Intel CPU flaws changing the security landscape regarding isolation, increasing complexity of stacks requiring more and more components working together, hardware specialization, new protocols, new use cases on top (k8s): these are a few of the challenges that a virtualization platform must answer in 2020.



We'll first see a landscape of these new challenges, then the possible approaches to solve them, and finally a concrete example of what the XCP-ng project is doing to integrate all these changes in a fully Open Source fashion, inside a turnkey Xen distro.

Back

Public clouds and vulnerable CPUs: are we secure?

Home

Speaker Vitaly Kuznetsov
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time10:30 - 11:00
Event linkView original entry

A whole bunch of CPU vulnerabilities were revealed in the past few years:
Meltdown and Spectre, SSB, L1TF and MDS -- and there's little hope that we've
seen them all. Every time a new vulnerability is released, big cloud
providers claim on day 1 that their hosts were updated and that their users
are secure. Is this so, or do we also need to do something inside our Linux
guests to mitigate these vulnerabilities? And do we have the required tools
to actually do the mitigations? Are all of them enabled by default or not? And,
if not, why? In this talk I'll try to answer these questions.

The talk will cover recently discovered CPU vulnerabilities starting with
Meltdown and Spectre. I will go through them and try to highlight 'public
cloud specifics': what has to/can be done in the infrastructure of the
cloud and what has to/can be done inside Linux guests depending on the desired
level of security and usage patterns.
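One concrete tool guests do have: since Linux 4.15 the kernel reports per-vulnerability mitigation status under sysfs, which a few lines of Python can read (on non-Linux systems the helper below simply returns an empty mapping):

```python
from pathlib import Path

def read_mitigations(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability: mitigation status} as reported by the kernel."""
    root = Path(base)
    if not root.is_dir():
        return {}  # pre-4.15 kernel or non-Linux system
    return {f.name: f.read_text().strip() for f in sorted(root.iterdir())}

# typical entries include meltdown, spectre_v1, spectre_v2, l1tf, mds
for name, status in read_mitigations().items():
    print(f"{name:20s} {status}")
```

This is the quickest way to check, inside a guest, whether a given mitigation is active, disabled, or reports the CPU as not affected.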

Back

virtio-fs

Home

Speaker Stefan Hajnoczi
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time11:00 - 11:30
Event linkView original entry

This talk covers the new virtio-fs shared file system that allows a host directory tree to be shared with guests. Sharing files with the guest is required by several use cases including container VMs, File-System-as-a-Service, and traditional virtualization. virtio-fs goes further than previous attempts by taking advantage of the co-location of the guest and host using DAX to share the host page cache. This presentation explains how to use virtio-fs, a bit about how it works internally, and the current status.

virtio-fs is a new shared file system giving access to a directory that both the host and guests can access. Traditionally shared file systems have been used to make data available to the guest during installation, boot a guest from a directory tree on the host, or to develop code on the host and test it in-place without copying files into the guest. New use cases including container VMs and File-System-as-a-Service have introduced new requirements that virtio-fs is designed to meet.



Previous attempts at shared file systems have included virtio-9p or simply used network file systems. virtio-fs is unique because it is possible to access files directly from the host page cache. This eliminates data copies and communication, resulting in lower memory footprint and higher performance for many workloads.



This presentation explains how to use virtio-fs and covers its architecture. It includes performance benchmarks showing how various features perform and a comparison with virtio-9p.

Back

io_uring in QEMU: high-performance disk I/O for Linux

Home

Speaker Julia Suvorova
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time11:30 - 12:00
Event linkView original entry

io_uring is a new kernel asynchronous I/O processing mechanism proposed as a much faster alternative to conventional Linux AIO. Patches were merged in Linux 5.1 and delivered the promised performance boost. We decided to integrate it into QEMU to make virtualized storage devices work more efficiently. Let's take a look at how io_uring works in QEMU.

io_uring enhances the existing Linux AIO API and provides QEMU a flexible interface, allowing you to use the desired set of features: submission polling, completion polling, fd and memory buffer registration. By explaining these features we will come to examples of how and when to use them to get the most out of io_uring. Expect many benchmarks with different QEMU I/O engines and userspace storage solutions (SPDK).



You will get a brief overview of the new kernel feature, how we used it in QEMU, combined its capabilities to speed up storage in VMs and what performance we achieved. Should io_uring be the new default AIO engine in QEMU? Come and find out!

Back

Lightweight virtualization in the Cloud and at the Edge

Home

Speaker Anastassios Nanos
Babis Chalios
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time12:00 - 12:30
Event linkView original entry

Running applications in the Cloud has changed the way users develop and ship
their code. Quite recently, the community has given rise to microservices-based
approaches, towards solutions that follow the paradigm of Platform-, Software-,
and Function-as-a-Service (PaaS, SaaS, and FaaS respectively).



To accommodate user demands, while maintaining security and isolation, Cloud
vendors have adopted a hybrid approach where user workloads are being executed
in lightweight sandboxed environments, where micro-hypervisors provide the
isolation and container-based images facilitate application deployment. As a
result, lighter virtualization stacks remain a key aspect of maximizing
performance in a multi-tenant but isolated environment.



To this end, we started experimenting with various Virtual Machine Monitors
(VMMs) that could provide the ideal trade-off between performance, flexibility
and application portability. In this talk, we present the design of a minimal
VMM, based on KVM, residing entirely in the Linux Kernel and showcase the
merits and shortcomings (minimal footprint, security concerns), for each
use-case (Cloud FaaS, edge multi-tenancy). Additionally, we present our
experience from porting Firecracker to a low-power device (RPi4) demonstrating
the merits of lightweight hypervisor stacks for flexible application execution
at the edge.

Back

LXD for mixed system containers and VM workloads

Home

Speaker Stéphane Graber
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time12:30 - 13:00
Event linkView original entry

LXD is most known as a system container manager, offering a simple user experience and images for most Linux distributions.
It also offers a simple REST API, network and storage management, project views and easy clustering to dozens of hosts.



Over the past few months, LXD has now grown the ability to run virtual machines alongside containers, using the exact same REST API and configuration.
This presentation will cover that new feature, why it was done, where it's at now and where we're going with it, as well as provide a quick demo of setting up a small LXD cluster and running both containers and virtual machines on it.

LXD is an open source system container manager developed by the team behind LXC, written in Go, that has been around for over 5 years now.
It's widely used both on servers, running the backend of services such as Travis-CI and on everyday devices like Chromebooks.
Containers are created from images with prebuilt images available for most Linux distributions.



Multiple hosts can easily be clustered together to form one large virtual host, exposing the exact same API as a single host would.
Storage pools and networks can also be created and managed through LXD and resources can be segmented into projects.



With the addition of virtual machine support (through qemu), it is now possible to manage a mixed deployment of virtual machines and containers, sharing the same configuration, storage and networks. With the use of our built-in agent, the exact same operations that are normally possible against a container also become possible against virtual machines, including executing commands, transferring files, ...



Existing API clients for LXD can also now drive both containers and virtual machines without any API changes required.

Back

oVirt 4k - teaching an old dog new tricks

Home

Speaker Nir Soffer
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time13:00 - 13:30
Event linkView original entry

Teaching oVirt to work with 4k storage.

How can we have compression and deduplication using VDO, the new Linux
compression layer? How can we use the latest and greatest disk drives?
We need to support disks with a 4k block size.



oVirt is your best friend when you need to manage your virtualized data
center, but when it was created 10 years ago, support for 4k storage was
not considered. Can you teach an old dog new tricks? Sure you can!



In this talk we will share what we learned implementing 4k storage
support in oVirt. We will present the challenges of teaching an old and
stubborn code base to work with disks using 4k storage, and how we
addressed them: introducing storage format v5, moving from sectors to
bytes, detecting block size on file storage, improving testing in the
storage area, adding new 4k APIs to sanlock and improving qemu block
size detection.



Audience:
oVirt administrators interested in utilizing the latest and greatest
features and hardware. Developers looking for new ways to deal with old
code.

Back

Edge Clouds with OpenNebula

Home

Speaker Vlastimil Holer
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time13:30 - 14:00
Event linkView original entry

Edge computing is currently getting a lot of traction thanks to the growing availability of rented computing resources around the world. The idea is based on moving the core computational logic and storage to distant locations that are closer to the entities they interact with (e.g. users or sensors). The benefits come from improving network latencies, increasing user experience with the provided service, and lowering the transfers to the central locations. Edge clouds bring the flexibility and proven workflows of cloud computing to the edge.



OpenNebula is an open source framework to build private and hybrid clouds based on KVM, LXD, and/or VMware vCenter. While the main domain is the corporate private on-premises cloud, it comes with simple and extensible tooling ("oneprovision") for automated deployment of edge clouds. When provided with a deployment descriptor, it allocates the physical hosts on the public bare-metal cloud provider, configures all necessary services (e.g. install libvirt/KVM or LXD), and enables them for use in OpenNebula. The process is as simple as running a command-line tool and the cloud administrator gets a fully usable configured edge cluster in a few minutes.



As part of a usability validation exercise, we successfully deployed public gaming servers from scratch to running services on 17 different locations worldwide in just 25 minutes: https://opennebula.org/opennebula-a-lightning-fast-video-gaming-edge-use-case-2 .



This talk introduces the OpenNebula "edge" concept and shows the current state, capabilities, and limitations of edge cloud deployment tooling. It explores the difficulties of running the IaaS-in-IaaS cloud and demonstrates with practical examples the use of tooling and management of edge deployments.

Back

Baremetal at the Edge

Home

Speaker Ilya Etingof
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time14:00 - 14:30
Event linkView original entry

Deploying bare metal machines at the edge of the cloud may not play well with the conventional PXE protocol suite. In this presentation we will explain how the emerging virtual media boot technology can significantly improve the scalability, reliability and security of the cloud.

In this talk, the latest advancements in bare metal provisioning service (ironic) will be explained and PXE-less machine deployment will be demonstrated showcasing two scenarios - bare metal management within the OpenStack cloud and a stand-alone ironic use-case (e.g. within a container orchestration system).

Back

A VM journey from VMware to Kubernetes

Home

Speaker Marek Libra
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time14:30 - 15:00
Event linkView original entry

Kubernetes became the primary platform for managing containerized applications.
In connection with KubeVirt, it can manage both containers and virtual machines in a single cluster, enabling mixed workloads and giving a second life to existing legacy workloads based on virtual machines which might not be feasible to containerize for either technical or business reasons.



Consolidation of the so-far distinct clusters for VMs and containers is the next logical step.
Come and see an end-to-end conversion of a VMware virtual machine into Kubernetes.

Conversion of a virtual machine from VMware to Kubernetes will be presented.



An attendee will learn:
- briefly about KubeVirt (virtualization add-on for Kubernetes)
- how to convert an existing VM to Kubernetes
- implementation aspects (deep-dive)
- about ongoing development and how to participate

Back

Back to the future

Home

Speaker Eyal Shenitzky
Daniel Erez
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time15:00 - 15:30
Event linkView original entry

Do you need to go back in time to restore data from important VMs? oVirt
does not provide a time machine yet, but you can build one using oVirt
backup APIs.



Building on changed-block tracking in qemu and the upcoming libvirt backup
API, oVirt will provide an API to perform incremental backups. You will be
able to back up VMs more efficiently, downloading only changed blocks.
Incremental backup will be simpler and more reliable, not requiring
creating and deleting snapshots. Uploading will support on-the-fly
conversion from raw to qcow2 when restoring disks.



In this talk we will travel into the future, introducing the oVirt
incremental backup API for starting and ending backups, and the
ovirt-imageio API for downloading changed blocks. Finally, we will travel
back to the past, and show how to restore raw guest data into new disks.
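The changed-block idea can be sketched with a toy model (plain Python, not the oVirt or qemu API): a dirty bitmap records which blocks were written, and an incremental backup copies only those before clearing the bitmap:

```python
BLOCK = 512

class Disk:
    """Toy disk with qemu-style dirty-block tracking."""
    def __init__(self, nblocks):
        self.blocks = [b"\0" * BLOCK] * nblocks
        self.dirty = set()  # indices of blocks written since last backup

    def write(self, idx, data):
        self.blocks[idx] = data
        self.dirty.add(idx)

    def incremental_backup(self):
        # copy only the blocks dirtied since the previous checkpoint
        delta = {i: self.blocks[i] for i in sorted(self.dirty)}
        self.dirty.clear()  # checkpoint: next backup starts from here
        return delta

disk = Disk(1024)
disk.write(7, b"x" * BLOCK)
disk.incremental_backup()          # first backup picks up block 7

disk.write(42, b"y" * BLOCK)
delta = disk.incremental_backup()  # second backup: only block 42 moves
print(sorted(delta))
```

Restore is the reverse walk: apply the full backup, then replay each incremental delta in order, which is why no snapshots need to be created or deleted along the way.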



Audience:
Backup vendors and virtualization developers, interested in utilizing
incremental backup API. Also, oVirt administrators and users interested in
peeking into the future of oVirt.



Session summary:
Peek into the future of oVirt backup API.


Back

Running virtual machines out of thin air

Home

Speaker Nir Soffer
Daniel Erez
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time15:30 - 16:00
Event linkView original entry

How to run virtual machines in oVirt without copying their disks.

Have you ever wanted to run your virtual machine in oVirt without copying its huge disks?
Tired of checking the slow transfer progress? We did, and we have a good plan to avoid the wait!



In this talk we show how oVirt can start a virtual machine without copying its disks,
using an external disk accessed via NBD or other protocols supported by qemu.
Once the virtual machine is running, the disks are copied in the background to oVirt storage.
This reduces downtime from minutes to seconds, and can be used in many scenarios,
such as importing virtual machines from other systems (even from foreign systems via virt-v2v),
previewing backups before restore, and provisioning virtual machines.
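The "run first, copy later" idea can be modelled in a few lines. This is a toy sketch, not the real NBD-based mechanism (in practice qemu mirrors the disk while the guest runs); it just shows a disk that serves reads immediately while its blocks migrate to local storage:

```python
class ThinDisk:
    """Serve guest reads immediately while blocks migrate in the background."""

    def __init__(self, source: bytes, block: int = 4):
        self.source = source          # e.g. a remote image exposed over NBD
        self.block = block
        self.local = {}               # blocks already copied to local storage

    def read(self, i: int) -> bytes:
        # The guest never waits for the full copy: serve from local storage
        # if the block has arrived, otherwise straight from the source.
        if i in self.local:
            return self.local[i]
        return self.source[i * self.block:(i + 1) * self.block]

    def copy_step(self) -> bool:
        """Copy one missing block; returns True while work remains."""
        total = len(self.source) // self.block
        for i in range(total):
            if i not in self.local:
                self.local[i] = self.source[i * self.block:(i + 1) * self.block]
                return True
        return False

disk = ThinDisk(b"AAAABBBBCCCCDDDD")
assert disk.read(2) == b"CCCC"        # usable before any copying has happened
while disk.copy_step():               # background copy, one block at a time
    pass
assert all(disk.read(i) == disk.source[i * 4:(i + 1) * 4] for i in range(4))
```

The VM is usable from the first read; the downtime cost is decoupled from the disk size, which is the point of the approach described above.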



Audience:
Virtualization administrators and developers who are interested in oVirt architecture and would like a peek into future development.

Back

Reaching "EPYC" Virtualization Performance

Home

Speaker Dario Faggioli
RoomH.1309 (Van Rijn)
TrackVirtualization and IaaS
Time16:00 - 16:30
Event linkView original entry

Virtualization brings many advantages, but what about the overhead it introduces? What about performance? This talk will show how great virtualization performance can be achieved, if proper tuning is applied to all the components of the system: hypervisor, host and guests, for both Xen and KVM. As a case study, we will describe how we tuned our OS in order to be able to reach, inside VMs, close to baremetal performance, on a server powered by a CPU from the AMD EPYC 7002 (codename "Rome") series. We will, of course, show the benchmarks proving that (run on KVM), even when memory encryption is used.

Virtualization is great because it decouples the software from the hardware on top of which it runs, and this brings benefits in terms of flexibility, security, reliability and cost savings. But what about the overhead that this, unavoidably, introduces?



Well, often enough, a virtualized system is really able to fulfill its goals with an acceptable quality of service, efficient exploitation of HW resources, satisfactory user experience, etc., only if all the components are configured properly. This is not entirely new, as baremetal systems need tuning too, but in a virtualized environment one has to take care of tuning both the host and the guests. And beware that the interactions between all the different components may not always be straightforward, especially on a large server with a complex CPU architecture, such as anything based on the AMD EPYC 7002 (codename "Rome") series of processors.



This talk will go over some of the typical virtualization “tuning tricks” (for both Xen and KVM). Then, as a case study, we will illustrate how we managed to reach, inside Virtual Machines, a performance level that almost matches the one of the host, on a server powered by a CPU from the AMD EPYC 7002 series. In fact, we will show the results of running CPU and memory intensive benchmarks (on KVM) with and without the suggested tuning. Last (but not least :-D), we will show the impact that the Secure Encrypted Virtualization (SEV) technology has on performance.

Back

Introduction to the devroom and the Open Source Design collective

Home

Speaker Bernard Tyers
Amit Nambiar
RoomH.2213
TrackOpen Source Design
Time09:30 - 09:50
Event linkView original entry

A brief introduction to the Open Source Design collective.

Every year we take some time to introduce the Open Source Design collective, what we do, where to find us and how to get involved.

Back

Open Source design - Africa

Home

Speaker Peace Ojemeh
RoomH.2213
TrackOpen Source Design
Time10:00 - 10:20
Event linkView original entry

Open Source Community Africa (OSCA) is a movement that promotes and drives the open source culture within and across Africa. We aim to bridge the diversity gap in open source culture through advocacy, because of the potential and great energy coming from the continent. This presentation will help bring the African ecosystem closer to existing platforms, adding more diversity, including representation of the black/African community, and showcasing how mentorship and training centred around open source are doing.

This presentation will cover the challenges and limitations of design contributions from Africa, and what OSCA is doing to bridge the gap. As a designer who has been contributing to open source, I have faced the challenge of convincing maintainers that design is as important as code. I came to understand the processes because I was patient enough to learn about the structure in order to influence it, but not every designer has the mindset to leave their comfort zone. So what I have been able to do as a designer is to get Open Source Community Africa to take on this challenge as a project, as I believe collaboration is a process that designers should learn.

Back

What are we talking about when we say "open design"?

Home

Speaker Manufactura Independente
RoomH.2213
TrackOpen Source Design
Time10:30 - 10:50
Event linkView original entry

As designers working with Libre/Open Source software, we have a strong opinion on how tools shape practice. And as designers, in general, we care deeply about our methods, workflows, principles and licenses. For some time now we've been debating the issue of terminology: how we describe our practice to others and how free/libre software terms and ideas can be integrated into design methods.



In this talk, we want to propose an exercise in labeling. Building up from conversations we had with other designers moving in the sphere of Libre Software and Libre Culture, we want to:
- present terms and try to define their scope in the hopes of improving the ways in which open design can be explained to others
- distinguish between possible stances inside the sphere of open design, such as using libre tools vs. proprietary toolchains, designing for free software vs. designing with free software
- explore how we understand other designers/studios/communities' ideas and practices
- better understand how we can present our views and work vis-à-vis other design approaches -- especially to "traditional" and proprietary-oriented audiences



This talk is a follow-up to "Open Design, Libre Graphics: Why terminology matters", that we presented at Libre Graphics Meeting 2019. This was the starting point of a discussion we feel the need to bring to the table, along with other designers that share the love for F/LOSS.


Back

Some Excerpts from the Theory of Design in Architecture

Home

Speaker Amit Nambiar
RoomH.2213
TrackOpen Source Design
Time11:00 - 11:20
Event linkView original entry

This talk discusses some theories from architectural discourse which attempt to solve problems that require critical and creative thinking. It is interesting to see the overflows and overlaps of theories between design (architecture) and construction (engineering) in solving similar underlying problems. The lengthy history of the field throws up some interesting references and may expose ideas applicable to new-age design+engineering problems.

An architect today is a professional in the construction industry tasked with conceptualizing physical spaces and bringing them into reality. Etymologically the word finds its origins in ancient Greek, comprising 'arkhi' and 'tekton', which loosely translate to 'master' and 'builder' respectively.



Historically, architectural design and construction were carried out by artisans such as stone masons and carpenters, who were coordinated by a "master builder". There was no clear distinction between architect and engineer. This has created a body of theoretical knowledge which spans different industries.



This created a field of study spanning the spectrum of creative thinking and logical thinking. The talk sheds light on some interesting examples of how 'architects' have dealt with problems in such a space.

Back

UI/UX Tips & Tricks for developers

Home

Speaker Ecaterina Moraru
RoomH.2213
TrackOpen Source Design
Time11:30 - 11:50
Event linkView original entry

I will present some general UI/UX tips & tricks that will help you design better. Everyone should know the basic principles and patterns of design, and once you understand them you will naturally integrate them in your work.

UI/UX is a craft. The more you practice it, the better you are at it. Some people argue that you need to have 'good taste' in order to be a designer, to be the 'artsy type'. While this might be true for Graphic Design, Branding and Visual Arts in general, when it comes to Interface, Interaction and Product Design, the focus is more on practicality and 'common sense'.

Back

Accessibility in MuseScore

Home

Speaker Peter Jonas
Marc Sabatella
RoomH.2213
TrackOpen Source Design
Time12:00 - 12:20
Event linkView original entry

MuseScore is the world's most popular sheet music program. It is used by millions of musicians around the world, including many who are blind, partially sighted, or who struggle to use a traditional mouse-based interface. In this presentation, we share our experience in making a popular open source program accessible to keyboard and screen reader control.

A screen reader is a "robot voice" that describes what happens on the screen, which enables people who are blind to use a computer. Screen readers cannot "see pixels", so it is up to the programmer to tell the screen reader what is going on. We give tips on how to do this in Qt's traditional C++ framework as well as its JavaScript-based QML language.



Screen readers know how to deal with buttons, menus and text, but they have no idea what a treble clef is! As a music notation program, many parts of MuseScore's interface are custom widgets that do not have a counterpart in traditional UI design. This poses additional challenges when it comes to implementing accessibility. We share the thinking behind our design decisions in MuseScore, and how these may be applied to other programs.



Keyboard navigation is a key aspect of accessibility, not only for people who are blind, but also for people who are motor-impaired, or ordinary users who find it quicker to use the keyboard than the mouse. There is more to keyboard navigation than shortcuts and getting around with the Tab key! We talk about how we have introduced groupings and hierarchy into MuseScore’s design to improve the experience for keyboard users.



Our accessibility work is undertaken in partnership with UKAAF and RNIB, two leading accessibility organisations based in the UK.



MuseScore is written in C++ / Qt, with some JavaScript / QML. It is available for Windows, macOS and Linux under GPL version 2.

Back

Gitflow Design

Home

Speaker Diogo Sergio
RoomH.2213
TrackOpen Source Design
Time12:30 - 12:50
Event linkView original entry

Gitflow design as mentioned in the description is a git workflow for designers and design work. It's meant to be open, platform-agnostic and help minimise dependencies on proprietary software and help to increase collaboration.



By using git, we get to take advantage of many useful features such as controlled access, review processes, feedback systems, version-controlled files, previewing changes in context with side-by-side diffs, a detailed history of changes and more: something developers have had for years, but that designers never really took advantage of.

Gitflow Design exists as a way to mitigate issues commonly found in the design workflows of open-source software projects, where the work created never makes it into git. This means such work is not being tracked and doesn't have an auditable history. Files also might not be stored in a common place accessible to everyone, so it's always hit and miss how to gain access to them; and sometimes, because they were created with proprietary software, they can sit behind a closed gateway, and as contributors come and go from the project their access can be lost.



By using git we make things easier and open for anyone wanting to collaborate, and hopefully streamline the work process by connecting the development and design repositories together. For this to be a success we need to adopt a design workflow that focuses on open-source ideals, so that no one is restricted by proprietary software and gatekeepers. Below we introduce such a workflow, which we have been testing. It's called Gitflow Design.

Back

UXBOX, the time for an open source online prototyping platform has arrived

Home

Speaker Pablo Ruiz-Múzquiz
RoomH.2213
TrackOpen Source Design
Time13:00 - 13:20
Event linkView original entry

Hello World UXBOX! This will be our first public announcement of the coming of UXBOX, the open source prototyping online platform based on SVG. We will share our vision and the 2020 product roadmap, explaining the resources that are committed to them. We will perform a quick demo and hope to start a productive conversation with the Open Source Design Community.

After 2 years of painstakingly slow development, Kaleidos (UXBOX sponsor, also Taiga creator) has finally decided to devote full-time resources and relevant investment to once and for all create a modern open source UX/UI prototyping online platform. With open standards (SVG) as a core feature and value, we hope to contribute a much needed platform to the Open Source Design community, which we consider to be instrumental to make UXBOX the best online prototyping tool out there, period.
This short talk will share the vision of the product, its current state and the 2020 roadmap. Moreover, we want to nurture a healthy and welcoming community around the development and usage of UXBOX, and we will give some hints on how we plan to do this so we can already take great feedback from FOSDEM attendees.
UXBOX is being developed by open source enthusiasts coming from the tech and UX/UI trenches that have already built 20+ products for startups and launched Taiga in the past. Kaleidos, its umbrella company, has raised seed money and is devoting funds to make sure there is an amazing multidisciplinary team focused on UXBOX, also able to engage with a newly born community of users and contributors.

Back

Using biometric gadgets for express-tests in the UX/UI research

Home

Speaker Dmitriy Kostiuk
RoomH.2213
TrackOpen Source Design
Time13:30 - 13:50
Event linkView original entry

Estimating the user’s physical and mental state with a set of special measuring devices can be helpful in detecting bottlenecks in human-computer interaction. Until recent years, evaluating cognitive and physical load from biometric parameters (heart rate, galvanic skin response, brain waves, gaze direction, etc.) was too expensive to be widely adopted for FLOSS. However, contemporary consumer-grade gadgets targeted at fitness and entertainment are much more affordable, and precise enough to be used in UX/UI comparisons. Still, their different primary goal often complicates their use in research. The talk will highlight which devices are the most suitable for research purposes in the open-source world (the ones having open-source and GNU/Linux frameworks to access biometric data). Gadgets covered in the talk are fitness trackers, EEG headsets, and eye trackers. Patterns of getting data and problems with ciphering and licensing will be discussed, as well as brief biometry usage scenarios and examples of UI express-testing.

Back

Beyond the Pile of Knobs: Usability and Design for Privacy, Security, Safety & Consent

Home

Speaker Georgia Bullen
RoomH.2213
TrackOpen Source Design
Time14:00 - 14:20
Event linkView original entry

Simply Secure will share examples of how we can design while centering the needs of the most vulnerable. We will present the problems, e.g. why common UX patterns make consenting, or refusing consent, so difficult in practice, and why open source security tools are often associated with bulky user interfaces and inaccessible jargon, and share findings from our 5 years of working with projects in the Internet Freedom, Digital Rights, Media Justice, Translation, Training, Civic Tech and Development communities.

Security and privacy are usually handed off to development teams as technical challenges, with the design and user experience as an afterthought, meaning that as designers, we are building off of limited examples and a small research field. With security risks posing a real threat on the internet, design and usability are critical to building safer, more trustworthy technology, because users will work around poorly designed experiences.



In this talk, we will share examples of how we can design while centering the needs of the most vulnerable. We will present the problems, e.g. why common UX patterns make consenting, or refusing consent, so difficult in practice, and why open source security tools are often associated with bulky user interfaces and inaccessible jargon, and share findings from our 5 years of working with projects in the Internet Freedom, Digital Rights, Media Justice, Translation, Training, Civic Tech and Development communities.



Simply Secure is a nonprofit that supports practitioners by putting people at the center of trustworthy technology. Launched in 2014, our work focuses on building technology that enhances and protects human dignity by centering the needs of vulnerable populations. We use a human-centered approach because we believe that the user experience of a device, program, or application plays a critical role in building trustworthy technology. At a minimum, responsible user experience (UX) offers timely, comprehensible, and actionable information to users — it gives them genuine agency in interacting with the system. Fundamentally our goal is to support practitioners in developing the skills needed to work on the wicked problems presented by technology today.

Back

Jumpstarting your business with Odoo

Home

Speaker Jeroen Baten
RoomH.2213
TrackOpen Source Design
Time14:30 - 14:50
Event linkView original entry

In this talk I will describe the process of discovering the wonders of Odoo when I got the project to write a book about it. My writing tools are Vim, Git, Asciidoctor-pdf and Inkscape. I will describe both the technical and the organisational challenges during the writing process.

Back

File sharing & storage for human rights organizations

Home

Speaker Allon Bar
Abigail Garner
RoomH.2213
TrackOpen Source Design
Time15:00 - 15:20
Event linkView original entry

Least Authority is presenting a design research project that looks at open source file storage and sharing solutions for human rights organizations. We will present the project, the first phase of our research process, and outlook on next steps that involve adapting our open source tools.

Least Authority's mission is to build and strengthen ethical and usable technology solutions that advance digital security and support privacy as a fundamental human right. We support open source projects, such as Tahoe-LAFS, Gridsync, Magic Folders and Magic Wormhole. This session presents a design research project, where we investigate file storing/file sharing needs of human rights organisations. As an outcome of this research, we are exploring how we can adapt our open source tools to best meet use cases and usability requirements of human rights organisations. This presentation will focus on the research process and the development of findings with the purpose of making open source tools available to meet human rights organizations’ needs. The project is funded by the Open Technology Fund.

Back

Design contributions to OSS: Learnings from the Open Design project at Ushahidi

Home

Speaker Eriol Fox
RoomH.2213
TrackOpen Source Design
Time15:30 - 15:50
Event linkView original entry

Ushahidi builds OSS humanitarian tools, remotely, for some of the most marginalized people across the globe. To tackle the systemic problems of how to ‘open source’ a design effort and bring the community along with the ‘on-staff’ Ushahidi designers, we’ve been piloting a series of design events on our OSS crisis communication tool TenFour with our partners Designit and Adobe. Together, we’re looking to solve the problems of how open source design can work by engaging through meaningful technology that makes a difference in the world.



We’re here to take you through that journey and what we’ve learnt about design contributions to OSS.

In this session, we'll briefly cover the history of the project and the main problems we attempted to solve, and we'll present the learnings and adaptations to our workshop framework and methodology that aim to engage design teams and individuals who are not yet 'on board' with OSS as an ethos or movement.



Looking into some of the deeper, more abstract motivations for design professionals to contribute, as well as practical tips on structuring issues, labelling and maintaining design (and extended functions like research, UX and product management), you'll leave with a set of tools and methods you can apply to your OSS project to engage with designers.

Back

Designing to change it all

Home

Speaker Winfried Tilanus
RoomH.2213
TrackOpen Source Design
Time16:00 - 16:20
Event linkView original entry

At the ‘SamenBeter’ project we firmly believe a good product is a product that improves how people work. So we start with designing the change we want before we even start thinking about a product. What does that look like in practice? Come and see, but be prepared for a fast ride!

Is it possible to design a radical change in the healthcare system? Why is the first product we build for that an authorization standard? Why are we talking about the ‘personality of a system’? How come the people designing it ended up in community centres, playing a game about privacy with the visitors? What is that ‘developer journey’ we are talking about? Why are we making such a big fuss about a license nobody seems to read?



The ‘SamenBeter’ project has a modest goal: to change healthcare, all of it. And because we are fuelled by design thinking, we are making deliberate design choices about everything: about what we want to change in healthcare, about the process of getting there, about the role of technology, about the values the technology should adhere to, about the way we should develop the technology, about the way the technology will be adapted and about how the community should be developed.



In this talk I will showcase the SamenBeter project as an example of how design is about much more than user interfaces. It is about designing processes, user interactions and the adaptation of the product itself. And last but not least: why did we take the effort to design everything and not just a product? Yes, it is too much to fit in one talk, so fasten your seatbelts!

Back

Pitch your project

Home

Speaker Bernard Tyers
Amit Nambiar
RoomH.2213
TrackOpen Source Design
Time16:30 - 16:50
Event linkView original entry

If you contribute to a free / open source project in need of design contributions, come and pitch it to the designers in the room.

In this session, FOSS projects are given time to present and ask for contributions from the designers in the room. Each project is given 3 minutes to present. In those 3 minutes, they should briefly introduce the project, explain what design help they need, and provide contact details so designers can reach them after FOSDEM.



The Open Source Design collective will use the information to submit a "job" for each project to the Open Source Design "jobs board" (https://opensourcedesign.net/jobs/), so that the request for design help reaches not just the designers in the room, but also the wider design community.

Back

Fibonacci Spirals and 21 Ways to Contribute to Postgres—Beyond Code

Home

Speaker Claire Giordano
RoomH.2214
TrackPostgreSQL
Time10:00 - 10:50
Event linkView original entry

Postgres is growing like gangbusters: in popularity, in adoption, and in the size of the ecosystem. And over 400 developers contribute code to Postgres today: their expertise, design chops, and skill are big factors in the increasing popularity of Postgres. But what if you’re not a developer? Are there things you can do to help grow the usage and popularity of Postgres? And are these non-code ways to contribute to Postgres important? Valued? Will they make a real difference?



If you love Postgres and want to help drive Fibonacci growth of the Postgres community, this talk is for you. I’ll walk through 21 different (and important) ways to contribute to Postgres—along with tips and resources for getting started.

Back

Find your slow queries, and fix them!

Home

Speaker Stephen Frost
RoomH.2214
TrackPostgreSQL
Time11:00 - 11:50
Event linkView original entry

Where, oh where, is all that time going? What in the world is that database thing doing?! This talk will help you understand what's happening (and why) and how to analyze poor query performance. We'll also go over steps and strategies to take to improve them and get the performance (and scalability!) you need.



It all starts with figuring out which queries are slow; we'll do that by going through the various PostgreSQL configuration options for logging queries and a few helpful modules for getting even more information about ongoing queries. Next we'll go over EXPLAIN and EXPLAIN ANALYZE output for select queries, and what the EXPLAIN output means in terms of how the query is being executed. Lastly (this is the good part: you have to stay til the end to get it!) we'll go over ways to improve the queries, including index creation, rewriting the query to allow PG to use a different plan, and how to tune parameters for specific queries.
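As a starting point, the query-logging options mentioned above can look like this in postgresql.conf. The thresholds are illustrative, and both pg_stat_statements and auto_explain must be loaded via shared_preload_libraries (a restart is required for that setting):

```
# postgresql.conf (illustrative values)
shared_preload_libraries = 'pg_stat_statements,auto_explain'
log_min_duration_statement = 250        # log statements slower than 250 ms
auto_explain.log_min_duration = 250     # also log the plans of slow statements
auto_explain.log_analyze = on           # include actual row counts and timings
```

With this in place, slow statements and their plans show up in the server log, and pg_stat_statements aggregates per-query timing for later analysis.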

Back

A Deep Dive into PostgreSQL Indexing

Home

Speaker Ibrar Ahmed
RoomH.2214
TrackPostgreSQL
Time12:00 - 12:50
Event linkView original entry

Indexes are a basic feature of relational databases, and PostgreSQL offers a rich collection of options to developers and designers. To take full advantage of these, users need to understand the basic concept of indexes, so as to be able to compare the different index types and how they apply to different application scenarios. Only then can you make an informed decision about your database index strategy and design. One thing is for sure: not all indexes are appropriate for all circumstances, and using a ‘wrong’ index can have the opposite effect to the one you intend, and problems might only surface in production. Armed with more advanced knowledge, you can avoid this worst-case scenario! We’ll take a look at how to use pg_stat_statements to find opportunities for adding indexes to your database. We’ll look at when to add an index, and when adding an index is unlikely to result in a good solution. So should you add an index to every column? Come and discover why this strategy is rarely recommended as we take a deep dive into PostgreSQL indexing.
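The talk is about PostgreSQL, but the core effect (an index turning a full scan into an index search) can be demonstrated in a self-contained way with Python's built-in sqlite3 module; in Postgres you would verify the same thing with EXPLAIN (ANALYZE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, i % 100) for i in range(10_000)])

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM t WHERE val = 42"
before = plan(q)                 # no index: the whole table is scanned
con.execute("CREATE INDEX idx_val ON t (val)")
after = plan(q)                  # now the planner can use idx_val

assert "SCAN" in before and "idx_val" not in before
assert "idx_val" in after
```

The same experiment in Postgres, run before and after CREATE INDEX, is exactly how you check whether an index you added is actually being used.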

Back

PostgreSQL on K8S at Zalando: Two years in production

Home

Speaker Alexander Kukushkin
RoomH.2214
TrackPostgreSQL
Time13:00 - 13:50
Event linkView original entry

Many DBAs avoid any kind of cloud offering and prefer to run their databases on dedicated hardware. At the same time, companies demand that Postgres run at scale, efficiently, automated and well integrated into the infrastructure landscape. The arrival of Kubernetes provided good building blocks and an API to interact with, and with them a way to solve many problems at the infrastructure level.



The database team at Zalando started running highly-available PostgreSQL clusters on Kubernetes more than two years ago. In this talk I am going to share how we automate all routine operations, providing developers with easy-to-use tools to create, manage and monitor their databases, avoiding commercial lock-in and saving costs. I will also show the open-source tools we have built to deploy and manage PostgreSQL clusters on Kubernetes by writing short manifests describing a few essential properties of the result.



Operating a few hundred PostgreSQL clusters in a containerized environment has also generated observations and learnings which we want to share: infrastructure problems (AWS), how engineers use our Postgres setup and what happens when the load becomes critical.

Back

An ultimate guide to upgrading your PostgreSQL installation

Home

Speaker Ilya Kosmodemiansky
RoomH.2214
TrackPostgreSQL
Time14:00 - 14:50
Event linkView original entry

Even an experienced PostgreSQL DBA can not always say that upgrading between major versions of Postgres is an easy task, especially if there are some special requirements, such as downtime limitations or if something goes wrong. For less experienced DBAs anything more complex than dump/restore can be frustrating.



In this talk I will describe why we need a special procedure to upgrade between major versions, how that can be achieved and what sort of problems can occur. I will review all possible ways to upgrade your cluster, from classical pg_upgrade to old-school Slony or modern methods like logical replication. For each approach, I will give a brief explanation of how it works (limited by the scope of this talk, of course), examples of how to perform the upgrade and some advice on potentially problematic steps. Besides, I will touch upon such topics as integration of upgrade tools and procedures with other software: connection brokers, operating system package managers, automation tools, etc. This talk would not be complete if I did not cover cases where something goes wrong, and how to deal with them.

Back

The State of (Full) Text Search in PostgreSQL 12

Home

Speaker Jimmy Angelakos
RoomH.2214
TrackPostgreSQL
Time15:00 - 15:50
Event linkView original entry

How to navigate the rich but confusing field of (Full) Text Search in PostgreSQL. A short introduction will explain the concepts involved, followed by a discussion of functions, operators, indexes and collation support in Postgres as they relate to searching for text. Examples of usage will be provided, along with some stats demonstrating the differences.
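As a taste of the concepts involved: Postgres reduces documents to a tsvector (a set of normalized lexemes) and matches them against a tsquery. A deliberately simplified Python model of that matching (real Postgres additionally does stemming, stop words, ranking and GIN/GiST indexing) might look like:

```python
def to_tsvector(doc: str) -> set:
    """Crude stand-in for to_tsvector: lowercase words, no stemming."""
    return {w.strip(".,!?").lower() for w in doc.split()}

def matches(vector: set, query_all: list) -> bool:
    """Stand-in for @@ with a tsquery of AND-ed terms ('text & search')."""
    return all(term in vector for term in query_all)

docs = {
    1: "PostgreSQL offers full text search",
    2: "Text search uses GIN indexes",
    3: "Collation support in Postgres",
}
vectors = {i: to_tsvector(d) for i, d in docs.items()}

hits = [i for i, v in vectors.items() if matches(v, ["text", "search"])]
assert hits == [1, 2]
```

In Postgres the equivalent check is `to_tsvector('english', doc) @@ to_tsquery('text & search')`, with the tsvector typically precomputed and indexed.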

Back

RTFM

Home

Speaker Federico Campoli
RoomH.2214
TrackPostgreSQL
Time16:00 - 16:50
Event linkView original entry

Reading the manual before starting a new task is always good practice.



However, situations like delivery pressure or lapses of attention can lead to wrong assumptions that cause unpredictable results or even disasters.



The talk, in a semi-serious way, will walk the audience through some of the corner cases caused by neglecting the good practice of RTFM.

Back

Plain Text Accounting BOF

Home

Speaker
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time13:00 - 13:30
Event linkView original entry

Back

OpenSCAD BOF

Home

Speaker
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time14:00 - 14:30
Event linkView original entry

Back

TPM dev BOF

Home

Speaker
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time14:30 - 15:15
Event linkView original entry

Back

NGI Meetup

Home

Speaker Michiel Leenaars
RoomH.3242
TrackBOFs (Track B - in H.3242)
Time15:30 - 17:00
Event linkView original entry

The Next Generation Internet initiative is one of the most substantial efforts in recent years to move the state of technology forward. It consists currently of over 200 projects, ranging from open hardware, middleware, web services, ActivityPub and cryptography to more fair search technology and decentralised internet tools. More projects are being added through open calls regularly. There are some twenty different talks related to this programme at FOSDEM 2020! This Birds of a Feather is for anyone interested in or involved with the Next Generation Internet initiative.

Back

Linux on Mobile BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time10:00 - 11:00
Event linkView original entry

Back

Terminus DB BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time11:00 - 11:30
Event linkView original entry

Back

purism librem 5 BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time11:30 - 12:30
Event linkView original entry

Back

pinephone porters BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time12:30 - 13:30
Event linkView original entry

Back

PINETIME BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time13:30 - 14:30
Event linkView original entry

Back

Blockchain BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time14:30 - 15:30
Event linkView original entry

Back

Fedora Modularity BOF

Home

Speaker
RoomH.3244
TrackBOFs (Track C - in H.3244)
Time16:00 - 16:30
Event linkView original entry

Back

GitLab BoF

Home

Speaker Ray Paik
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time10:00 - 11:00
Event linkView original entry

GitLab is a complete DevOps platform, delivered as a single application, changing the way Development, Security, and Ops teams collaborate and build software. GitLab is also an open source project with a wider community of over 2,500 contributors[1] since we started as an open source project in 2011.



This meetup is for GitLab team members, users, contributors, and others who want to meet with community members attending FOSDEM. Anyone who wants to share their learnings from using GitLab and discuss how we can improve GitLab (both the product and the community) is welcome to join us!



[1] https://contributors.gitlab.com

Back

Weblate meetup

Home

Speaker Michal Čihař
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time11:00 - 12:00
Event linkView original entry

Weblate future, features, bugs, collaboration between users and other related topics.

Back

Kodi Community Meeting

Home

Speaker Sascha Bergner
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time12:00 - 13:00
Event linkView original entry

Back

OpenHMD Community meeting 2020

Home

Speaker TheOnlyJoey
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time14:00 - 15:30
Event linkView original entry

Open Source VR driver development, reverse engineering and community gathering of OpenHMD.



We will discuss the last year of development, including positional tracking progress, some big changes in the release schedule, and discuss new steps for the project!

Gathering for the OpenHMD open source AR/VR community.



We have been working on a Boost-licensed driver for AR/VR/MR hardware for platform- and hardware-independent use. During this BoF session we will go over the current state of open source VR, review the progress we have made, and have an open discussion about VR and open source. In our 2017 session we even hacked together basic camera support for the Oculus CV1; since then we have fixed up our then-recent Sony PSVR driver and made plans and collaborations with other open source developers.



If you are interested in this field, develop games/engines, have experience with reverse engineering (USB), like Computer Vision or DSP logic, please come and hang out!

Back

Room Unavailable

Home

Speaker
RoomJ.1.106
TrackBOFs (Track A - in J.1.106)
Time15:30 - 17:00
Event linkView original entry

This BOF Room is only available until 15.30 today.

Back

Free Software Radio Devroom Introduction and Hackfest Review

Home

Speaker Philip Balister
Andrej Rode
RoomAW1.120
TrackFree Software Radio
Time09:00 - 09:30
Event linkView original entry

Greetings and plans for the day and future

Back

Modernizing Distribution of SDR Tools and Libraries with Conan

Home

Speaker Brennan Ashton
RoomAW1.120
TrackFree Software Radio
Time09:30 - 10:00
Event linkView original entry

There are many great open source libraries and tools that make up the software defined radio ecosystem, but we have unfortunately created a high bar for consuming this software, and an even higher bar for using modern versions. In this presentation we look at how we can use modern C/C++ package management with Conan to simplify the lives of users who want the latest versions without living in dependency hell.

Users of SDR software have spent far too much time staring at CMake builds and wondering how to obtain a missing build-time dependency. The package feeds from major distributions contain a lot of the popular software, but they tend to lag development significantly and lock users into specific versions. If you want to run the latest code, you are back to compiling it. It does not have to be this way. Using Conan, these applications and their dependencies can be managed in a modern way, where users can pull binary packages when available and easily rebuild packages when needed, all in a sandboxed environment.



This talk covers the following points:
* What modern C/C++ package management is
* How it can simplify consumption from an SDR user's perspective (I just want to play with my hardware...)
* Demo of using this workflow with some packaged SDR applications
* Example of how this packaging works for a library
* What we can do better as developers to simplify this task

Back

AMENDMENT Channel Equalization using GNU Radio

Home

Speaker Josh Morman
RoomAW1.120
TrackFree Software Radio
Time10:00 - 10:30
Event linkView original entry

We examine the use of equalizers in wireless communication systems, how they are implemented in GNU Radio, and how the existing GR equalizer functionality can be extended with a new OOT module using training-based adaptation. The theory of multipath channels and ISI, and how to overcome them with adaptive equalization, will be reviewed and shown with interactive flowgraphs.
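
The training-based adaptation in question can be sketched outside GNU Radio in a few lines of pure Python. The channel taps, step size and symbol alphabet below are illustrative choices, not values from the talk's flowgraphs:

```python
# Minimal training-based LMS equalizer over a toy ISI channel.
import random

random.seed(1)
symbols = [random.choice([-1.0, 1.0]) for _ in range(2000)]

# A two-tap channel introduces inter-symbol interference.
h = [1.0, 0.5]
rx = [h[0] * symbols[n] + h[1] * (symbols[n - 1] if n else 0.0)
      for n in range(len(symbols))]

# LMS adaptation of a short FIR equalizer against known training symbols.
num_taps, mu = 5, 0.01
w = [0.0] * num_taps
errors = []
for n in range(num_taps, len(rx)):
    window = rx[n - num_taps + 1:n + 1][::-1]            # newest sample first
    y = sum(wi * xi for wi, xi in zip(w, window))        # equalizer output
    e = symbols[n] - y                                   # error vs training symbol
    w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # LMS tap update
    errors.append(e * e)

early = sum(errors[:200]) / 200
late = sum(errors[-200:]) / 200
print(late < early)  # True: adaptation reduces the squared error
```

The converged taps approximate the inverse of the channel, which is exactly what an adaptive equalizer block in a flowgraph does sample by sample.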



Please note that this talk was originally scheduled to be given at 2:30 PM and will now take place at 10:00 AM.

Back

How to evolve the GNU Radio scheduler

Home

Speaker Marcus Müller
RoomAW1.120
TrackFree Software Radio
Time10:30 - 11:00
Event linkView original entry

GNU Radio is the most widely used software radio stack for research and development on PC-style hardware, having enabled hundreds of high-rate applications. I'll discuss where its limits are, where we need to stick to GNU Radio's recipe for SDR success, and where to disruptively address its architectural shortcomings.

Today's GNU Radio hits hard limits when it comes to a few things that are absolutely crucial for modern communication stacks: it doesn't make any guarantees on latency, and its architecture doesn't allow for tight integration with hardware accelerators. And whilst most communications are packet-based, packetized data is a second-class citizen in the kingdom of sample streams that is GNU Radio.



In this talk, we'll discuss why that is the case, what can be remedied within the current framework, and what cannot. We'll try to assess which usage paradigms are worth keeping for the future of GNU Radio, and what needs to change.
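
To make the stream-centric model concrete, here is a deliberately naive sketch of a scheduler pushing buffers of samples through a chain of blocks. All class and function names are invented; the real GNU Radio scheduler (circular buffers, one thread per block, backpressure) is far more elaborate:

```python
# Sketch of a stream-centric flowgraph: blocks exchange buffers of
# samples and the scheduler calls each block whenever input is available.
# Note there is no notion of latency deadlines or packet boundaries —
# exactly the limitations the talk discusses.
class Block:
    def __init__(self, fn):
        self.fn = fn            # per-chunk processing function
        self.downstream = []

    def connect(self, other):
        self.downstream.append(other)
        return other

def run_flowgraph(source_samples, head, chunk=4):
    """Push fixed-size chunks through the chain, as a stream scheduler would."""
    out = []
    for i in range(0, len(source_samples), chunk):
        buf = source_samples[i:i + chunk]
        block = head
        while True:
            buf = block.fn(buf)
            if not block.downstream:
                out.extend(buf)
                break
            block = block.downstream[0]
    return out

# double -> add-one pipeline over a sample stream
double = Block(lambda xs: [2 * x for x in xs])
double.connect(Block(lambda xs: [x + 1 for x in xs]))
print(run_flowgraph([1, 2, 3, 4, 5, 6], double))  # [3, 5, 7, 9, 11, 13]
```

Packets or accelerator offload do not fit naturally into this loop, which is one way to see why they end up as second-class citizens.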

Back

A Rose by Any Other Name Would Run Just as Long

Home

Speaker John Brunhaver
RoomAW1.120
TrackFree Software Radio
Time11:00 - 11:30
Event linkView original entry

Abstract: Radio-based communication systems and imagers operate under real-time constraints. Offloading computation to an FPGA seems like a solution for speeding up your application, but comes with many pitfalls. Specifically, software-oriented implementations fail to achieve the interface bandwidths or computational throughput required to see a speed-up. In this talk, we will discuss the organization of common compute motifs in software-defined radio and their complexity in time and resources on FPGAs.

Rough goals of the talk:
1) Communicate why FPGA acceleration is attractive
2) Discuss common pitfalls
2a) A behaviorally oriented accelerator
2b) Starving the beast: failing to provide the required data bandwidth
2c) A processor-oriented runtime that creates execution overheads
3) Thinking about accelerators: what do they look like?
3a) FFT
3b) Correlators
3c) Matrix-vector multiply
4) Building an off-load model
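
An off-load model of the kind point 4 describes can be reduced to a back-of-the-envelope calculation: offloading only pays off when the compute time saved exceeds the time spent moving data over the link. All numbers below are illustrative, not measurements:

```python
# Simple offload break-even model: "starving the beast" happens when the
# link transfer time dominates the accelerator's compute advantage.
def offload_pays_off(n_bytes, cpu_gflops, fpga_gflops, link_gbps, flops_per_byte):
    flops = n_bytes * flops_per_byte
    t_cpu = flops / (cpu_gflops * 1e9)
    # Offload cost = transfer out + accelerator compute + transfer back.
    t_fpga = 2 * n_bytes / (link_gbps * 1e9 / 8) + flops / (fpga_gflops * 1e9)
    return t_fpga < t_cpu

# Low arithmetic intensity: transfer dominates and offload loses.
print(offload_pays_off(1_000_000, 50, 500, 10, flops_per_byte=1))     # False
# High arithmetic intensity (e.g. a large FFT or correlator): offload wins.
print(offload_pays_off(1_000_000, 50, 500, 10, flops_per_byte=1000))  # True
```

The crossover point is set by arithmetic intensity, which is why FFTs, correlators and matrix-vector multiplies are the motifs worth moving to the FPGA.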

Back

gr-satellites latest developments

Home

Speaker Daniel Estévez
RoomAW1.120
TrackFree Software Radio
Time11:30 - 12:00
Event linkView original entry

gr-satellites is a GNU Radio out-of-tree module with the goal of decoding every Amateur satellite. Currently it supports more than 80 different satellites. Since GNU Radio 3.8 was released last summer, gr-satellites has seen a lot of development and important changes. A refactored version, to be released as gr-satellites 3.0, is in the works. This version brings more modularity to avoid code duplication, more flexibility in the inputs and outputs the user can employ, and aims to improve integration with other tools. Satellites are defined using a YAML file, and the GNU Radio flowgraph is constructed on the fly by a Python script that connects so-called "component" blocks. Advanced users can also use the components directly in their own flowgraphs. This talk gives an overview of the gr-satellites 3.0 development progress.
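
The declarative idea can be illustrated with a toy sketch: a satellite is described by data (a YAML file in the real project) and the decode chain is wired together at runtime from named components. The component names and spec layout below are invented, not gr-satellites' actual schema:

```python
# Sketch of building a decode chain from a declarative satellite spec.
# In gr-satellites the spec comes from yaml.safe_load() and the
# components are GNU Radio hier blocks; here they are toy functions.
REGISTRY = {
    "demodulator": lambda cfg: (lambda bits: bits[::2]),   # toy: drop every 2nd bit
    "deframer":    lambda cfg: (lambda bits: bits[1:-1]),  # toy: strip sync flags
}

def build_chain(spec):
    """Instantiate and compose the components listed in the spec, in order."""
    stages = [REGISTRY[s["component"]](s.get("options", {})) for s in spec["data"]]
    def decode(samples):
        for stage in stages:
            samples = stage(samples)
        return samples
    return decode

sat_spec = {  # what would come out of parsing a satellite YAML file
    "name": "DEMO-SAT",
    "data": [
        {"component": "demodulator"},
        {"component": "deframer"},
    ],
}
decode = build_chain(sat_spec)
print(decode([1, 0, 1, 1, 0, 0, 1, 0]))  # [1, 0]
```

Because the chain is data-driven, adding a satellite means adding a spec file rather than duplicating flowgraph code, which is the modularity point the abstract makes.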

Back

r2cloud - Decode satellite signals on Raspberry Pi

Home

Speaker Andrey Rodionov
RoomAW1.120
TrackFree Software Radio
Time12:00 - 12:30
Event linkView original entry


  1. Java for digital signal processing
     • why Java?
     • how to do digital signal processing in Java, with some examples
     • decoding LRPT (with images) and BPSK (with real data)

  2. A working base station network
     • how it differs from SatNOGS
     • testing and code coverage: an enterprise approach to building communication software

  3. Plans, Q&A


Back

Platform independent CPU/FPGA co-design: the OscImp-Digital framework

Home

Speaker Jean-Michel Friedt
RoomAW1.120
TrackFree Software Radio
Time12:30 - 13:00
Event linkView original entry

Combining the flexibility of FPGA hardware configuration with the high abstraction level of an operating system running on a general purpose central processing unit (CPU) requires mastering a broad range of knowledge, from low-level hardware configuration to kernel drivers to libraries and userspace applications. While some vendor-specific frameworks tackle the challenge, we focus on a vendor-independent solution applicable to current FPGA System-on-Chip providers: the OscImp Digital framework provides a comprehensive set of FPGA IP, associated Linux drivers, libraries and userspace examples based on GNU Radio running on the embedded CPU. We demonstrate its use on the Redpitaya platform processing baseband signals as well as on the Zynq, most significantly associated with the AD9363 radiofrequency frontend on the PlutoSDR board. In both cases, the FPGA is not only used to stream I/Q coefficients but also to pre-process the datastream in order to reduce bandwidth and efficiently feed the CPU: we demonstrate embedded FM broadcast radio reception as well as GPS decoding on a custom PlutoSDR bitstream. The framework is available at https://github.com/oscimp/oscimpDigital
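
The bandwidth-reduction argument is easy to see in miniature: pre-filtering and decimating in the FPGA means the CPU receives R times fewer samples. A boxcar filter stands in here for the real FIR/CIC stages in the framework's IP:

```python
# Filter-and-decimate sketch: average non-overlapping windows of r
# samples (a boxcar low-pass) and keep one output per window, so the
# downstream CPU sees 1/r of the original sample rate.
def filter_and_decimate(samples, r):
    return [sum(samples[i:i + r]) / r
            for i in range(0, len(samples) - r + 1, r)]

stream = list(range(16))                  # stand-in for I/Q samples off the ADC
reduced = filter_and_decimate(stream, 4)
print(reduced)   # [1.5, 5.5, 9.5, 13.5] — 4x less data for the CPU to ingest
```

In the real design this runs in fabric at the ADC rate, which is precisely what lets a modest embedded CPU keep up with a wideband front-end.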

Back

Striving for Performance Portability of Software Radio Software in the Era of Heterogeneous SoCs

Home

Speaker Jeffrey Vetter
RoomAW1.120
TrackFree Software Radio
Time13:00 - 13:30
Event linkView original entry

Future heterogeneous DSSoCs will be extraordinarily complex in terms of processors, memory hierarchies, and interconnection networks. To manage this complexity, architects, system software designers, and application developers need design and programming technologies that are flexible, accurate, efficient, and productive. Recently, our team has started to explore the mapping of GNU Radio to various heterogeneous SoCs in order to understand how programming technologies can support the goal of making this SDR framework performance portable. Using our software stack, we are porting several SDR applications to GPUs from NVIDIA, AMD, and ARM, and to NVIDIA Xavier SoCs, Qualcomm Snapdragon, and Xilinx Zynq devices. Our current approach uses a directive-based programming model and a new intelligent runtime scheduler to port and execute the workflows. We are evaluating several open programming models to enable performance portability; initially, these include directive-based compilers, OpenCL, and SYCL. All of these approaches generate tasks that are then queued and scheduled by our open-source intelligent runtime scheduler, which is a critical component of our approach. Initial performance results appear promising; however, more automation will further broaden deployment. We have also developed a host of tools to examine and profile SDR workflows and modules. Specifically, these analysis tools enable automated characterization of the behavioral and computational features of GNU Radio blocks and workflows. The static tools in GR-tools help developers create ontologies and queries to classify GR modules based on custom scenarios. The dynamic toolset provides automated profiling of GR workflows and presents detailed statistics on how components in a given software defined radio application perform. GR-tools also produces a graph-based representation of the analyzed data and provides powerful visualization options to filter and display the information obtained from the static and dynamic tools. Our software is open source and will be made available to the community.

Back

Cooperative Perception in Future Cars using GNU Radio

Home

Speaker Augusto Vega
RoomAW1.120
TrackFree Software Radio
Time13:30 - 14:00
Event linkView original entry

Speaker: Augusto Vega, IBM Research (NY, USA)



The phenomenon of self-driving (autonomous) vehicles is a symbol of the grand re-emergence of artificial intelligence and robotics as a promising technology. The most general model of future vehicular transportation is that of artificially intelligent, connected, autonomous vehicles (CAVs) [1].



In this talk, we present a representative open-source application for CAVs operating as a collaborative swarm and communicating via GNU Radio. The application, called ERA [2], incorporates local sensing, creation of occupancy grid maps, vehicle-to-vehicle (V2V) communication of grid maps between neighboring vehicles using GNU Radio-based dedicated short-range communication (DSRC), and map fusion to create a joint higher-accuracy grid map [3]. Specifically, each vehicle in ERA uses its onboard sensors to generate local occupancy grid maps, which it communicates to other nearby vehicles using DSRC. When a vehicle receives occupancy maps from nearby cars, it merges them with the locally-generated occupancy maps, expanding the scope and increasing the accuracy of this vehicle's perception. The DSRC transceiver adopted in ERA is an open-source GNU Radio implementation of the IEEE 802.11p standard by Bastian Bloessl [4], while perception and map creation are implemented using ROS (Robot Operating System) [5]. We created a software interface between GNU Radio and ROS which enables proper execution and interaction of both frameworks.
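
The map-fusion step can be sketched with a toy grid of occupancy probabilities. The independent-evidence combination rule below is a common textbook choice, not necessarily the one ERA uses:

```python
# Toy occupancy-grid fusion: each cell holds the probability that it is
# occupied (0.5 = unknown). Two vehicles' grids are merged cell by cell
# assuming the two observations are independent evidence.
def fuse_cells(p1, p2):
    return (p1 * p2) / (p1 * p2 + (1 - p1) * (1 - p2))

def fuse_grids(local, received):
    return [[fuse_cells(a, b) for a, b in zip(row_l, row_r)]
            for row_l, row_r in zip(local, received)]

local    = [[0.5, 0.9], [0.2, 0.5]]   # this vehicle's view
received = [[0.9, 0.9], [0.5, 0.1]]   # neighbouring vehicle's view over DSRC
fused = fuse_grids(local, received)
print(fused[0][0])   # ~0.9: an unknown cell adopts the peer's belief
print(fused[0][1])   # > 0.9: agreeing observations reinforce each other
```

This is the "expanding the scope and increasing the accuracy" effect: cells one vehicle cannot see inherit the peer's estimate, and cells both see become more confident.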



In addition to presenting a deep dive into ERA's code, we will also show performance analysis results of ERA (including its GNU Radio components) and discuss potential acceleration opportunities for performance and efficiency improvement -- including optimizations of Viterbi decoding and complex exponential through hardware acceleration. We believe that ERA can help to fill the gap between the fast-growing CAV R&D domain and GNU Radio, specifically when it comes to the wireless communication aspect of future vehicles.



[1] A. Vega, A. Buyuktosunoglu, P. Bose, “Towards "Smarter" Vehicles Through Cloud-Backed Swarm Cognition,” Intelligent Vehicles Symposium 2018: 1079-1086.



[2] ERA. URL: https://github.com/IBM/era



[3] E. Sisbot, A. Vega, A. Paidimarri, J. Wellman, A. Buyuktosunoglu, P. Bose, D. Trilla, “Multi-Vehicle Map Fusion using GNU Radio,” Proceedings of The GNU Radio Conference 2019, 4(1).



[4] B. Bloessl, “IEEE 802.11 a/g/p transceiver for GNU radio,” URL: https://github.com/bastibl/gr-ieee802-11



[5] ROS. URL: https://www.ros.org




Back

srsLTE project update

Home

Speaker Andre Puschmann
RoomAW1.120
TrackFree Software Radio
Time14:00 - 14:30
Event linkView original entry

The talk will provide an update on past, ongoing and future features of srsLTE. We'll give an overview of the features that were added last year, talk about our testing infrastructure, and discuss upcoming features like 5G-NR, NB-IoT, C-V2X and Carrier Aggregation.

Back

AMENDMENT The Space Operations Facility of FH Aachen (FHASOF)

Home

Speaker Sacha Tholl
Hannah Walther
RoomAW1.120
TrackFree Software Radio
Time14:30 - 15:00
Event linkView original entry

The Space Operations Facility is a multi-mission ham ground station located in Aachen, Germany, operated by students. We perform telemetry, tracking, and command of our own satellites, and also of foreign amateur radio satellites on a best-effort basis, for purely educational purposes. Because we obtain only a very limited amount of funding, we rely entirely on freeware, open-source software and the "do it yourself" experience. In this talk, we explain why and how we use open-source software in our daily operations and what our user experience is from a mission operator's point of view.



This timeslot was originally for the talk Channel Equalization using GNU Radio. That talk now takes place at 10:00 AM due to a last-minute timing conflict.

Back

Task Scheduling of Software-Defined Radio Kernels in Heterogeneous Chips: Opportunities and Challenges

Home

Speaker Augusto Vega
RoomAW1.120
TrackFree Software Radio
Time15:00 - 15:30
Event linkView original entry

Speaker: Augusto Vega, IBM Research (NY, USA)



The proliferation of 'heterogeneous' chip multiprocessors in recent years has reached unprecedented levels, especially in the context of IoT and distributed edge computing (e.g. connected and autonomous vehicles). By combining the right set of hardware resources (cores, accelerators, chip interconnects and memory technology) along with an adequate software stack (operating system and programming interface), heterogeneous chips have become an effective high-performance and low-power computing alternative.



However, heterogeneous architectures come with new challenges. Fundamentally, the complexity derived from the design's heterogeneous nature challenges the effective scheduling of tasks (processes), a scenario that becomes even more critical when real-time execution deadlines must be met. This is particularly important in the context of GNU Radio, given that its underlying scheduler is completely unaware of chip heterogeneity today. Early stage prototyping and evaluation of GNU Radio scheduling policies in heterogeneous platforms becomes a valuable asset in the design process of a future GNU Radio scheduler.



In this talk, we present a new open-source simulator for fast prototyping of task scheduling policies, called STOMP (Scheduling Techniques Optimization in heterogeneous Multi-Processors) [1]. It is written in Python and implemented as a queue-based discrete-event simulator with a convenient interface that allows users and researchers to "plug in" new scheduling policies in a simple manner. We also present a systematic approach to task scheduling in heterogeneous platforms through the evaluation of a set of progressively more "intelligent" scheduling policies using STOMP. We rely on synthetic kernels representative of a GNU Radio application [2], including functions like Viterbi decoding and fast Fourier transform (FFT) that have to be scheduled across general-purpose cores, GPUs or hardware accelerators to meet the application's real-time deadlines. We will show results indicating that relatively simple scheduling policies can satisfy real-time requirements when they are properly designed to take advantage of the heterogeneous nature of the underlying chip multiprocessor.
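
The structure of such a queue-based discrete-event simulator with pluggable scheduling policies can be shown in miniature. The task times, server types and policies below are invented for illustration, not taken from STOMP:

```python
# Minimal queue-based discrete-event scheduler: tasks carry per-server-type
# execution times, and a pluggable policy picks which idle server runs the
# next queued task. Completion events drive the simulated clock.
import heapq

def simulate(tasks, servers, policy):
    """tasks: list of {server-type: runtime}; servers: list of type names."""
    free = list(range(len(servers)))          # indices of idle servers
    queue = list(tasks)
    events, now, finished = [], 0.0, 0
    while finished < len(tasks):
        while queue and free:
            task = queue.pop(0)
            s = policy(task, free, servers)   # the pluggable decision
            free.remove(s)
            heapq.heappush(events, (now + task[servers[s]], s))
        now, s = heapq.heappop(events)        # advance to the next completion
        free.append(s)
        finished += 1
    return now                                # makespan

def first_free(task, free, servers):          # heterogeneity-unaware
    return free[0]

def fastest_free(task, free, servers):        # heterogeneity-aware
    return min(free, key=lambda s: task[servers[s]])

tasks = [{"cpu": 5.0, "fft": 1.0}, {"cpu": 1.0, "fft": 5.0}]
servers = ["cpu", "fft"]
print(simulate(tasks, servers, first_free))    # 5.0: naive placement
print(simulate(tasks, servers, fastest_free))  # 1.0: matches tasks to servers
```

Even this toy shows the paper's point: a slightly smarter policy that exploits heterogeneity can change the makespan dramatically, which is what makes fast policy prototyping worthwhile.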



[1] STOMP. URL: https://github.com/IBM/stomp
[2] ERA. URL: https://github.com/IBM/era




Back

SDR4IoT - Using SDR for IoT Device Fingerprinting and Localization

Home

Speaker Alexis DUQUE
RoomAW1.120
TrackFree Software Radio
Time15:30 - 16:00
Event linkView original entry

This talk will present the results of our experimentation at the i.Lab Wireless Testbed in Ghent, in the context of the FED4Fire+ H2020 project. Our project aims to collect raw radio frequency (RF) signals of widely used radio protocols for Internet of Things (IoT) devices in the 2.4 GHz ISM band, such as Bluetooth Low Energy and LoRa, using software-defined radio (SDR).
This will allow us to collect a large, reliable and reproducible dataset of RF fingerprints. This dataset will then be used to develop deep learning algorithms for IoT device fingerprinting and localization. Our use case is the authentication of autonomous vehicles or robots in a building according to their localization, without any over-the-air key exchange algorithm.

Back

openwifi

Home

Speaker Xianjun Jiao
RoomAW1.120
TrackFree Software Radio
Time16:00 - 16:30
Event linkView original entry

An open source "Wi-Fi chip design" (to be released under the AGPLv3) will be presented, and a live demo will be shown in the room! The design is based on SDR (software-defined radio) and offers full-stack 802.11a/g/n capabilities on FPGA and ARM Linux (Xilinx Zynq SoC + AD9361 RF front-end). It conforms to the Linux mac80211 framework and behaves just like a COTS Wi-Fi chip under Linux. The main components of the design are: RF front-end control; PHY; low-MAC; interfacing (DMA, registers) with the ARM; a mac80211-compliant Linux driver; high-MAC (mac80211 framework); and Linux user-space tools (ifconfig, iwconfig, dedicated tools via netlink). Since it is an SDR-based "white box" design instead of a commercial "black box" chip, you can do Wi-Fi research and customization without any reverse engineering effort.



Why does it fit FOSDEM?



It will be the first open source project offering a full-stack Wi-Fi SDR implementation. Lots of people, especially wireless network/security researchers, SDR researchers and hackers, will be interested in it. We are eager to show the demo in the room and hear feedback from the community. Potential contributors are also very welcome, and we will be glad to offer help.

Features:






Performance (AP: openwifi at channel 44, client: TL-WDN4200 N900 Wireless Dual Band USB Adapter. iperf test):






Supported SDR platforms:




Back

Software Defined Radio based scientific instrumentation

Home

Speaker Jean-Michel Friedt
RoomAW1.120
TrackFree Software Radio
Time16:30 - 17:00
Event linkView original entry

Software Defined Radio is best known for receiving and processing radiofrequency signals transmitted over the ether. However, many scientific experiments benefit from the flexibility, stability and reconfigurability of digital signal processing even when handling radiofrequency signals. In this presentation, we address two demonstrations of this concept. First, readily available SDR hardware is used to replace general purpose laboratory instruments (spectrum analyzer, lock-in amplifier) for characterizing radiofrequency acoustic transducers (filters, resonators). The benefit of SDR lies in communication bandwidth: while general purpose instrument communication protocols (GPIB, VXI-11 over Ethernet) require hundreds of milliseconds or seconds to transfer data, SDR platforms stream I/Q coefficients at high bandwidth, collected on the fly over a ZeroMQ socket by the (GNU/Octave) processing software. We demonstrate a 10,000-fold bandwidth gain when converting a general purpose instrument experiment to an SDR approach. Another approach is to address high-bandwidth radiofrequency oscilloscopes as a radiofrequency source for time-of-flight measurement. The gr-oscilloscope GNU Radio source demonstrates how to communicate between GNU Radio and laboratory-grade equipment, here oscilloscopes, for processing discontinuous data streams with GNU Radio.

Back

Welcome to the BSD devroom

Home

Speaker Rodrigo Osorio
RoomAW1.121
TrackBSD
Time09:00 - 09:05
Event linkView original entry

Welcome to the BSD devroom

Back

Orchestrating jails with nomad and pot

Home

Speaker Luca Pizzamiglio
RoomAW1.121
TrackBSD
Time09:10 - 09:55
Event linkView original entry

Docker and Kubernetes are changing the way services and applications are deployed in the Linux world.
What about FreeBSD?
Two years ago we presented pot, another jail abstraction framework. Over time, the pot framework has developed to provide container-like features.
The plugin interface provided by Nomad (a container orchestrator) allowed us to develop a driver for pot, enabling Nomad to orchestrate pot jails.
In this talk, we'd like to present this ambitious FreeBSD-based alternative to Docker and Kubernetes.

Back

OpenSMTPD over the clouds

Home

Speaker Giovanni Bechis
RoomAW1.121
TrackBSD
Time10:00 - 11:00
Event linkView original entry

OpenSMTPD gained filter support in its latest version and is now an SMTP server that can compete with other better-known mail servers; it can be used to handle both incoming and outgoing mail flows in a secure way.
Its simple configuration and its "secure by design" approach make it one of the best candidates for mail server software.

Now that OpenSMTPD has filter support, it can be used as a mail server with all the features of other email servers.
This is the story of how OpenSMTPD can be set up in a high-availability environment and how other OpenBSD tools can be used to achieve that goal.

Back

NetBSD - Not Just For Toasters

Home

Speaker Benny Siegert
RoomAW1.121
TrackBSD
Time11:05 - 11:35
Event linkView original entry

NetBSD may seem like an exotic choice for an operating system. But it is actually a decent desktop OS for developers and a rock-solid server OS, in the cloud as well as on old hardware. Come for the freedom from systemd, stay for the great packages, modern features and enthusiastic community!

In this talk, I will give reasons why adopting NetBSD makes sense, show some cool hardware that you can run NetBSD on, and talk about new features in the upcoming NetBSD 9 release.

Back

FreeBSD Around the World!

Home

Speaker Deb Goodkin
RoomAW1.121
TrackBSD
Time11:40 - 12:20
Event linkView original entry

The FreeBSD Foundation has been supporting the FreeBSD Project and community for 20 years! In this talk, I'm going to share a little history of FreeBSD and the Foundation, how the Project works, and why you should get involved. I'll continue to share how the Foundation has been advocating for FreeBSD around the world, and what you can do to help bring on new users and contributors. Finally, I'll cover significant projects we are supporting to keep FreeBSD relevant, stable, and secure.

Back

FreeBSD and LLVM support

Home

Speaker David Carlier
RoomAW1.121
TrackBSD
Time12:25 - 13:05
Event linkView original entry

We will explain what the LLVM project is all about and how central it is to the FreeBSD operating system, as it is used to build the base system. First, we'll go through its major components and what is supported.

Back

Break your BSD kernel

Home

Speaker Maciej Grochowski
RoomAW1.121
TrackBSD
Time13:10 - 14:10
Event linkView original entry

Fuzzing is an efficient technique for finding bugs and vulnerabilities in software.
Today's BSD-based operating systems allow using such techniques to test kernel code easily.
This talk is designed to be a starting point for everyone who would like to start their journey fuzzing a BSD kernel, and provides all the necessary information.

The kernel is the central part of most modern operating systems. This place where hardware meets software controls the main subsystems: the networking stack (and other communication stacks), file systems, security and many others.
Because of this, the security of the overall system relies on the safety of the kernel.
One well-proven technique for testing software security is fuzzing.
Over the last couple of years, researchers have found a long list of vulnerabilities in many popular open source projects thanks to the efficiency of this technique.
Kernel fuzzing was always more complicated than fuzzing userspace programs. Nevertheless, the situation is constantly improving, and today's entry barrier is much lower than it used to be, thanks to the improvements made in recent years.
Over the last couple of years, NetBSD has grown strong in new security features in the BSD world, such as sanitizers and fuzzers.
Thanks to the work of the community, it has become an attractive target for people interested in operating systems and security.
Fuzzing can also be a very beneficial technique for kernel and driver developers who want to improve or test the security of their code.
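
The coverage-guided feedback loop at the heart of modern kernel fuzzers such as syzkaller can be shown with a toy: mutate inputs, keep those that reach new coverage, and report crashes. The "kernel" here is a deliberately buggy two-byte parser standing in for a real syscall interface, and all numbers are arbitrary:

```python
# Toy coverage-guided fuzzer. Coverage is modelled as the set of branch
# labels a run touches; inputs that reach new coverage join the corpus,
# which steers mutation toward the planted bug.
import random

def buggy_parser(data, trace):
    """Pretend kernel entry point; records branch coverage into `trace`."""
    trace.add("enter")
    if data[:1] == b"K":
        trace.add("magic")
        if data[1:2] == b"!":
            raise RuntimeError("kernel panic")   # the planted bug

def fuzz(rounds=100000, seed=0):
    rng = random.Random(seed)
    corpus, seen = [b"AA"], set()
    for _ in range(rounds):
        data = bytearray(rng.choice(corpus))
        data[rng.randrange(len(data))] = rng.randrange(256)  # 1-byte mutation
        trace = set()
        try:
            buggy_parser(bytes(data), trace)
        except RuntimeError:
            return bytes(data)                   # crashing input found
        if not trace <= seen:                    # new coverage: keep the input
            seen |= trace
            corpus.append(bytes(data))
    return None

crash = fuzz()
print(crash)
```

Blind random testing would need roughly 256² tries to hit the bug; keeping the intermediate input that reaches the "magic" branch splits the search into two much cheaper stages, which is exactly the payoff of coverage feedback.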

Back

KDE on FreeBSD

Home

Speaker Adriaan de Groot
RoomAW1.121
TrackBSD
Time14:15 - 14:30
Event linkView original entry

The state of KDE (the Plasma desktop and applications) on FreeBSD, what works, what needs better support lower in the stack. How do we get rid of HAL?

Back

NetBSD Native APIs - A Userland Perspective (Audio, Input)

Home

Speaker Nia Alarie
RoomAW1.121
TrackBSD
Time14:35 - 15:20
Event linkView original entry

For a while now, Nia has been working to encourage the usage of native NetBSD APIs in third-party software.



One of these is the rather interesting and unique native audio API, distinct from OSS, inherited from an early version of the Solaris API with extensions and improvements made over the years.



In this talk, Nia describes the advantages of using NetBSD's native audio API in comparison to other alternatives.



She also discusses her improvements to third-party software to encourage usage and adoption of NetBSD APIs, including her struggles getting a working Wayland compositor on an operating system with very different input and event APIs to Linux.

Back

X11 and Wayland: A tale of two implementations

Home

Speaker raichoo
RoomAW1.121
TrackBSD
Time15:25 - 15:55
Event linkView original entry

In this talk I will outline my journey implementing my X11 window manager hikari and, shortly after, the corresponding Wayland compositor. hikari is a stacking window manager/compositor with some tiling capabilities. It is still more or less a work in progress and currently targets FreeBSD only, but it will be ported to Linux and other operating systems supporting Wayland once it has reached some degree of stability and feature completeness.

This talk covers:




Back

Graphing FreeBSD disk utilization with Prometheus

Home

Speaker Thomas Steen Rasmussen
RoomAW1.121
TrackBSD
Time16:00 - 17:00
Event linkView original entry

All in a day's work: How to write a Prometheus gstat_exporter and integrate it into a Grafana Dashboard

Back

Making poetry with Racket

Home

Speaker Jérôme Martin
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time09:30 - 09:50
Event linkView original entry

Racket allows us to create languages on the fly. It's extremely practical for making DSLs (domain specific languages), but can it also be used to make art?
That's what we'll see in this talk, making (executable) poetry with Racket!

I've always been fascinated by languages, be it programming languages, "natural" ones, or constructed ones.
The way streams of words convey meaning, information, requests, but also emotions, thoughts, feelings, stories, mood...
Humans have been using languages for thousands of years. It's one of our most important creations and the one we use the most.
For a long time, we only had two sorts of language: natural and constructed. But since the invention of computing, and the concept of "software", we now have a third sort: programming languages.
We've been writing programming languages for the better part of a century now, but can we consider programming languages to be languages humans could also talk with?
Can we consider making art with them? Poetry?
Let's try out the wonderful language forge that is Racket, see if we can learn something along the way, and have fun!

Back

A small, FRP DSL for distributed systems

Home

Speaker James Shubin
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time09:50 - 10:10
Event linkView original entry

Mgmt is a next gen config management tool that takes a fresh look at existing automation problems.
The tool has two main parts: the engine, and the language.
This presentation will have a large number of demos of the language.
The language is a minimalistic, functional, reactive DSL.
It was designed both to constrain the user, with safe types and no core looping constructs, and to empower the user to build powerful real-time distributed systems.
This year we will expand on last year's talk by showing more of the core language features: classes, functions, closures and more!
Finally we'll talk about some of the future designs we're planning and make it easy for new users to get involved and help shape the project.

Back

XL, an extensible programming language

Home

Speaker Christophe de Dinechin
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time10:10 - 10:30
Event linkView original entry

XL is an extensible programming language, designed to grow with Moore's law instead of being killed by it.
Extensible means that programmers can add features and notations much like they would add functions or classes in existing languages.
The mechanisms are based on meta-programming, and are a bit similar to the macros that gave Lisp its longevity, but with interesting twists.
As a proof of this extensibility, basic arithmetic (addition, multiplication, etc) or control statements (if-then-else, loops, etc) are implemented by libraries in XL, yet offer similar performance and usability as built-in features in C++.
Another validation of the extensibility is Tao3D, an XL-based interactive graphic language that will be used to give the presentation.

Every 5 years or so, a major paradigm arrives that makes earlier languages obsolete. Past examples include object-oriented programming, distributed programming and application extension languages. The inability of classical programming languages to integrate such changes also leads to a number of niche languages implementing minority paradigms, ranging from Prolog to make to yaml.



Extensible in XL means that programmers can add features and notations much like they would add functions or classes in existing languages. XL is quite minimalist, since all this is done using a single operator written "is".
This operator can be used to define variables (X is 0), functions (is_even X is X mod 2 = 0), multi-operator expressions (X in Y..Z is X >= Y and X < Z), or programming constructs. Loops are defined in XL as follows:



loop Body is
    Body
    loop Body


This extensibility mechanism is therefore based on meta-programming, and is in that way similar to the macros that gave Lisp its longevity (Lisp was the first to normalize object-oriented programming, with CLOS). But XL has interesting twists.
For starters, there is a strong focus on making notations match concepts. For example, XL will let you write (1 + 2 * 3) or program if statements that look exactly like those in other languages. This is actually important.



This talk will give three proofs of this extensibility:






Tao3D will be used to give the presentation, to give an idea of what XL is (or rather was) capable of.



The language is currently being redefined in a more precise way, including a fully revamped type system. You can access the work-in-progress definition here: https://c3d.github.io/xl/ (also in links).

Back

Forth - The New Synthesis

Home

Speaker Ulrich Hoffmann
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time10:30 - 10:50
Event linkView original entry

The "new synthesis" of Forth is an ongoing effort in the spirit of the Forth Modification Laboratory workshops. Its aim is to identify the essentials of Forth and to combine them in a new way, to build systems that can scale down as Forth always did
and can scale up to large applications and development projects.



The new synthesis is guided by two principles: biological analogy and disaggregation.



We scrutinize many aspects of traditional and modern Forth implementations, trying to separate techniques that are normally deeply intertwined. After isolating the techniques, we strive to combine them in new ways.



The talk describes two milestones of the ongoing project:






We try to use Forth wherever possible in order to minimize semantic and formalism mismatches. Everything should be readily available - no hidden secrets.



Of course many of the subjects we are looking at have been used by others in the Forth community and outside - we are dwarfs standing on the shoulders of giants - however we believe our new synthesis to be original.

Our findings in the new synthesis so far can be summarized:






This talk will go into the details of preForth and seedForth and will show how source code tokenizing works, as well as how to extend seedForth into a modern, interactive yet minimal programming environment (<1000 LOC).



References



[1] Implementing the Forth Inner Interpreter in High Level Forth, Ulrich Hoffmann, EuroForth Conference 2016, Reichenau, 2016

[2] Stack of Stacks, Ulrich Hoffmann, Forth Tagung 2017, Kalkar, 2017

[3] A Recognizer Influenced Handler Based Outer Interpreter Structure, EuroForth 2017, Bad Vöslau, 2017

[4] Bootstrapping Forth, Forth Tagung 2018, Linux Hotel, Essen, 2018

[5] A descriptor based approach to Forth strings, Andrew Read and Ulrich Hoffmann, EuroForth conference, Edinburgh, 2018

[6] String descriptors on GitHub https://github.com/Anding/descriptor-based-strings

[7] preForth and seedForth on GitHub https://github.com/uho/preForth

Back

A minimal pure object-oriented reflective language

Home

Speaker Stephane Ducasse
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time10:50 - 11:10
Event linkView original entry

Pharo is a minimalist reflective dynamically typed object-oriented language. Pharo is inspired by Smalltalk: its full syntax fits on a postcard.
Its model is simple: everything is an object instance of a class, methods are all public virtual, attributes are first-class objects and are protected. There is single inheritance and
traits. And nothing else! (see http://mooc.pharo.org). Still, Pharo is a real language that is starting to be used in industry: see http://pharo.org/success and http://consortium.pharo.org. The entire Pharo stack is MIT-licensed.
The Pharo reflective core is bootstrapped from source code. Experiments show that we can get a kernel down to 11 KB (just enough to add two small integers) and that a simple web app can be
trimmed down to 500 KB.

In this talk I will present Pharo in a nutshell: Syntax, model, but also user stories.
I will show the vision of the project and where we want to go. I will present some key architectural choices.
I will show some powerful features such as stack on the fly reification and their application: contextual breakpoint, on the fly program transformation.



Bio: I'm one of the core developers of Pharo, head of the consortium, and I help the consortium engineers.

Back

Bootstrapping minimal reflective language kernels

Home

Speaker Pablo Tesone
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time11:10 - 11:30
Event linkView original entry

In this talk, we present a series of tools for bootstrapping small, hand-crafted language kernels.
Smaller kernels allow us to run applications in resource-limited environments (IoT).
They also allow us to study language modifications and extensions, easing the evolution of new and existing languages.
These experiments are performed in a fully debuggable simulated environment, allowing us to overcome common mistakes and problems.
This is not only an experimental environment, as it allows us to generate production-ready language kernels.



We use Pharo to generate language kernels that are intended to run on top of the Pharo VM.
These tools are also used to bootstrap bigger systems, such as Pharo itself.

The current explosion of embedded systems (i.e., IoT, Edge Computing) implies the need for generating tailored and customized software for them. Different approaches have been taken for building, deploying, updating and debugging these systems, although there is still no standard way to do this.
The goal of this talk is to present the tools and techniques necessary for building, debugging, and deploying custom small language kernels.



Kernel languages are generated from a combination of language definition and the description of the elements and processes to generate the runtime.
Kernel languages must be consistent internally and in relation to the environment where they run (e.g. the VM, the OS).
Languages that are bootstrapped from their source code are not new.
However, correctly defining a working consistent language kernel is a complex manual task without support from tools to debug or test before the generation of the language and its deployment.
This complexity limits the study of new languages, the creation of prototypes and the evolution of existing ones.



We present a set of tools to overcome the different difficulties that bootstrapping a reflective language kernel presents, allowing us to simulate the kernel, debug it, validate it and generate it.
Moreover, our proposed approach offers tools to detect many common mistakes and causes of error.



We center our solution on reflective object-oriented languages that run on top of a VM.
Our tool uses Pharo and generates languages to run on top of its VM.

Back

Universal package & service discovery with Guix

Home

Speaker Pierre Neidhardt
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time11:30 - 11:50
Event linkView original entry

GNU Guix is a universal functional package manager and operating system which
respects the freedom of computer users. It focuses on bootstrappability and
reproducibility to give the users strong guarantees on the integrity of the full
software stack they are running. It supports atomic upgrades and roll-backs
which make for an effectively unbreakable system.

I'll present how I intend to leverage the Guile programming language to boost
searchability of packages and services via intuitive user interfaces and semantics.



Guix, like many other package managers, suffers from usability
issues when it comes to the explorability of its content (that is,
packages and services), given the exorbitant amount of software out
there. Users should be able to discover the programs they need
for a specific task, without having to know them in advance. It should
be easy to specify build options for the packages, e.g. "build without
sound" or "add IPFS support." They should not have to waste time
writing their own hacks and scripts when ready-to-use services already
exist and are just waiting to be discovered.



So how do we improve this discoverability while keeping ease
of use?



In the context of the Next Generation Internet initiative, I've started
working on enhancing search, discovery and reusability of packages and
services.



"A universal software navigator on steroids -- for everyone."

Back

GNU Mes

Home

Speaker Jan Nieuwenhuizen (janneke)
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time11:50 - 12:10
Event linkView original entry

Last year GNU Mes brought the Reduced Binary Seed bootstrap to GNU Guix: gcc, glibc and binutils were removed and the size of the bootstrap binaries went down from 250MB to 130MB. This year we introduce the Scheme-only bootstrap: Awk, Bash, Core Utilities, Grep, Gzip, Make, Sed, Tar are replaced by Gash and Gash Core Utils, halving the size of the Guix bootstrap seed again, to 60MB. Next up, the Full Source bootstrap!

Back

Lisp everywhere!

Home

Speaker Pjotr Prins
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time12:10 - 12:40
Event linkView original entry

Minimalism matters in computing. Minimalism allows for smaller systems
that take fewer resources and consume less energy. More importantly,
free and open source minimalism allows for secure systems that are
easy to understand. Minimalism is also educational and brings back the
fun of the early days of computing, when people learned to understand
systems from the ground up. As a co-organizer of this devroom I will
talk about my journey through many programming languages, ending up
with Scheme (a minimal Lisp). Lisp is the second oldest language still
in use today, and it is growing. I'll show you that once you master
Lisp you can use it everywhere: for software deployment, in the shell
and the editor, for debugging, for systems programming and in the
browser. As a matter of fact, Lisp is everywhere!

Back

Celebrating Guile 2020

Home

Speaker Andy Wingo
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time12:40 - 13:10
Event linkView original entry

Guile maintainer Andy Wingo shares his thoughts on the last lap of the race to Guile 3. We'll go over ways that Guile got faster, more capable, and more minimal at the same time.

New languages are often lovely and minimal but don't have a wide user community. To the extent that an old language has a community, it also has a legacy burden of supporting that community's old code. How should these be balanced? Is there a balance?



In this talk, Andy Wingo takes the opportunity of the Guile 3 release to reflect on change and continuity: how can a language stay minimal over time, and how is Guile working towards this goal? We cover cases in which things have gone well, not so well, as well as ongoing challenges and opportunities.

Back

Introduction to G-Expressions

Home

Speaker Christopher Marusich
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time13:10 - 13:40
Event linkView original entry

This talk will present an overview of G-Expressions and how the GNU Guix project uses them.

The GNU Guix project invented G-Expressions to make it easier to "stage" data or code, in the form of S-Expressions, for later manipulation or evaluation. They are similar to S-Expressions, hence the name, but they provide useful code staging features beyond what can be easily accomplished with just "quasiquote" and "unquote". A high-level object (such as a Guix package) can be included in a G-Expression; the transitive dependencies of that high-level object will then be automatically carried along with the G-Expression. When the G-Expression is converted to an S-Expression and stored on disk for later manipulation or evaluation, the high-level object will be automatically "lowered" to an appropriate representation (such as the package's output path) via a "compiler". Compared to direct manipulation of S-Expressions, G-Expressions can provide a simpler and more intuitive way to stage data or code.



The Guix project uses G-Expressions to accomplish a wide variety of tasks, including:




Back

Let me tell you about Raku

Home

Speaker Juan Julián Merelo
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time13:40 - 14:00
Event linkView original entry

Most languages steadily incorporate new programming concepts in new releases, and new languages have these concepts already baked in. These concepts relate to how functions are defined and invoked, to different data structures, and to working with things like Unicode. There's a language, Raku, that incorporates most of the new concepts that have appeared in this century. This talk is an introduction to the language by way of the concepts it uses.

Known as Perl 6 until October 14th this year, and released at Christmas 2015, Raku (https://raku.org) was designed as "the language for the next 100 years", and as such it was created with the intention of incorporating most modern programming concepts. With the same motto as its (kind-of) predecessor Perl, "There's more than one way to do it", Raku is a multi-paradigm language that is functional, asynchronous, object-oriented, and has interesting new features like grammars.
In this talk I'll take a look at a dozen features of modern languages; I'll illustrate every feature with examples from different languages, trying to get through the different concepts of Raku by way of how they are implemented in other languages.
Finally, we'll see a few examples of Raku, showing how its rich feature set makes it ideal for learning new programming concepts, as well as putting them to good use to solve your own problems.

Back

Minimalistic typed Lua is here

Home

Speaker Hisham Muhammad
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time14:00 - 14:30
Event linkView original entry

In this talk I will present a typed dialect of Lua with a minimalistic implementation. I will discuss the choices that went into the design, implementation and development approach. We will also discuss whether Lua's minimalism is retained, and ponder the nature of the resulting dialect. This is a sequel to last year's talk, in which I discussed the challenges of typing dynamic languages, and Lua in particular; here I present the results achieved since then.

Back

RaptorJIT: a fast, dynamic systems programming language

Home

Speaker Max Rottenkolber
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time14:30 - 14:50
Event linkView original entry

RaptorJIT is a Lua implementation suitable for high-performance low-level system programming. With the project scope reduced to the systems programming domain, we want RaptorJIT to fit one use case and excel at it, and we're not afraid of radical change.



This talk will be about our efforts to reduce the project's complexity to improve maintainability and pave the way for new features. A story about porting the LuaJIT interpreter from assembly to C, ruthless trade-offs, and ambitious performance targets in an expressive language.



Topics include: predictable performance in JIT compilers, always-on profilers, and memory safety in low-level programming.

Back

The best of both worlds?

Home

Speaker Steph Hobbs
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time14:50 - 15:20
Event linkView original entry

Long has raged the war between static and dynamic typing proponents. Dynamic typing promises speedy development, less verbose code, and happier developers. Static typing promises to find bugs earlier, help you fix them when they're found, and ease refactoring. Crystal is a statically typed language, but with several novel features aimed in a different direction: the perfect compromise between the two. In this talk I will cover the history and basics of Crystal, and explore the type system which makes Crystal unique.

Back

Nim on everything

Home

Speaker Peter Munch-Ellingsen
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time15:20 - 15:50
Event linkView original entry

Nim is an interesting new language whose design is focused around the concept of a small core and great extensibility through a powerful macro system and multiple compilation targets. In this talk I want to showcase how Nim compiles to both C/C++ and JavaScript, and what this means for how easy interoperability and targeting many different platforms can be. I will show how the same language can be used for programming anything from the tiniest resource-scarce microcontrollers, to websites or web-technology based desktop applications (like Electron), to normal desktop applications and server applications.

Back

Move semantics in Nim

Home

Speaker Andreas Rumpf (Araq)
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time15:50 - 16:20
Event linkView original entry

This talk explains Nim's move semantics and their connection to reference counting, how Nim's model differs from C++'s, and why move semantics can offer superior performance. Deterministic memory management in Nim has never been easier.

Back

Designing an ultra low-overhead multithreading runtime for Nim

Home

Speaker Mamy Ratsimbazafy
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time16:20 - 16:40
Event linkView original entry

While multithreading abstractions are consolidating around a couple of basic primitives built on the notions of tasks and futures, under the hood the implementations differ vastly.
The abstraction "details" are significant in the current era, as developers now have to find parallelism opportunities for 16+ cores on consumer CPUs.



We go over the design space of task-parallel and data-parallel multithreading runtime libraries and present a unique, scalable approach
based on message passing.
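The message-passing idea can be illustrated generically in Python (this is my sketch of the general concept, not the actual design of the runtime presented in the talk): workers receive task messages over channels instead of sharing mutable state, and results flow back as messages too.

```python
# Generic sketch of message-passing parallelism (illustrative only,
# not the runtime's actual design): workers receive task messages
# over channels instead of sharing mutable state.
import queue
import threading

def worker(inbox, results):
    while True:
        task = inbox.get()
        if task is None:           # shutdown message
            break
        results.put(task * task)   # result sent back as a message

inbox, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(inbox, results)) for _ in range(4)]
for t in threads:
    t.start()
for n in range(8):                 # tasks are just messages
    inbox.put(n)
for _ in threads:                  # one shutdown message per worker
    inbox.put(None)
for t in threads:
    t.join()
out = sorted(results.get() for _ in range(8))
print(out)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Because all coordination happens through the two queues, no lock around shared task state is needed in user code.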

Back

Async await in Nim

Home

Speaker Dominik Picheta
RoomAW1.125
TrackMinimalistic, Experimental and Emerging Languages
Time16:40 - 17:00
Event linkView original entry

The most basic high-level API for async IO uses callbacks, but working with those becomes convoluted very quickly. A great solution is async await, but implementing it in a language is a complex endeavour, unless your language is flexible enough, with strong enough metaprogramming support, to make it possible without modifications to the compiler. Nim is one such language, and its async await implementation lives entirely inside the standard library. In this talk I will describe how async await in Nim works, both at the syntax level and the event loop level.
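The callback-vs-await contrast described above can be sketched generically in Python rather than Nim; asyncio stands in for whichever event loop the language provides, and the step names are made up for illustration:

```python
# Generic sketch (Python, not Nim) of why callbacks get convoluted
# and how async/await flattens the same control flow.
import asyncio

# Callback style: each step takes a continuation; nesting deepens
# quickly as soon as steps depend on each other's results.
def step1(done):
    done("users")

def step2(prev, done):
    done(prev + "+posts")

def run_callbacks(done):
    step1(lambda a: step2(a, done))

# async/await style: the same chain reads top-to-bottom.
async def step1_async():
    await asyncio.sleep(0)
    return "users"

async def step2_async(prev):
    await asyncio.sleep(0)
    return prev + "+posts"

async def run_await():
    a = await step1_async()
    return await step2_async(a)

result = []
run_callbacks(result.append)
awaited = asyncio.run(run_await())
print(result[0], awaited)  # -> users+posts users+posts
```

Both forms compute the same value; the await version simply lets the compiler (or, in Nim's case, a macro) do the continuation bookkeeping.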

Back

istsos3: Data Analysis and statistical tools and unit conversions

Home

Speaker Rahul Chauhan
RoomAW1.126
TrackGeospatial
Time09:00 - 09:15
Event linkView original entry

Project Name: Data Analysis and statistical tool suite (GSoC 2017)
The primary goal of my project was to create an OAT (data analysis and statistics) extension for the RESTful Web API. The OAT extension provides data analysis and statistical tools for istSOS; it is used to automate the creation of statistical documents using the OAT library (FREEWAT) and to harvest data from an istSOS server.



Project Name: istSOS - Support of unit conversion in istsos3 (GSoC 2018)
The primary aim of my project is to add plugins for converting units of measure in istSOS3. The user can convert a unit into another specified unit. For unit-of-measure conversion in istSOS3 we added the postgresql-unit and pint libraries, which offer powerful unit-conversion features along with many specialized functions, such as instant unit conversion, and support for all types of operations in istsos3.

Project Name: istSOS (OSGeo) - Data Analysis and statistical tool suite (GSoC 2017)



This section comprises the following parts:
1. OAT installation
2. Implemented OAT methods



OAT is a Python package that is integrated into the FREEWAT environment through an interface exposing its features to modelers and non-programmer users. The OAT library implements two main classes: the Sensor class, which is designed to handle time-series data and metadata, and the Method class, which is designed to represent a processing method. The library applies the behavioral visitor pattern, which allows the separation of an algorithm from the object on which it operates: thanks to this design pattern it is possible to add a new processing capability by simply extending the Method class, without the need to modify the Sensor class. From a dependency point of view, OAT takes advantage of the pandas (McKinney, 2010), NumPy and SciPy (Van der Walt et al. 2011) packages.
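The Sensor/Method separation described above can be sketched as follows; the class names follow the description, but the method signatures and the Mean example are illustrative, not OAT's actual API:

```python
# Sketch of the visitor-pattern split described above: processing
# methods visit a Sensor, so new capabilities extend Method without
# ever modifying the Sensor class. Names are illustrative, not OAT's.

class Sensor:
    """Holds a time series as (timestamp, value) pairs plus metadata."""
    def __init__(self, name, series):
        self.name = name
        self.series = list(series)

    def process(self, method):
        # Double dispatch: hand this sensor to the visiting Method.
        return method.compute(self)


class Method:
    """Base class for processing methods; extend it to add capabilities."""
    def compute(self, sensor):
        raise NotImplementedError


class Mean(Method):
    """A new processing capability, added without touching Sensor."""
    def compute(self, sensor):
        values = [v for _, v in sensor.series]
        return sum(values) / len(values)


sensor = Sensor("temperature", [("2020-02-01T09:00", 4.0), ("2020-02-01T10:00", 6.0)])
mean = sensor.process(Mean())
print(mean)  # -> 5.0
```

Adding, say, a moving average is then just another Method subclass, which is the extensibility property the abstract emphasizes.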



Project Name: istSOS (OSGeo) - Support of unit conversion in istsos3 (GSoC 2018)
The primary aim of my project is to add plugins for converting units of measure in istSOS3. The user can convert a unit into another specified unit. For unit-of-measure conversion in istSOS3 we added the postgresql-unit and pint libraries, which offer powerful unit-conversion features along with many specialized functions, and which support all types of operations on istsos3 data, such as addition, subtraction, multiplication and division with magnitudes and units.
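The underlying idea of quantities that carry both magnitude and unit can be sketched in a few lines; this is a toy illustration of the concept, not the actual pint or postgresql-unit API:

```python
# Toy sketch of unit-of-measure conversion (illustrative, not the
# pint/postgresql-unit API): a magnitude paired with a unit, converted
# via factors relative to a base unit, with addition across units.

FACTORS = {"m": 1.0, "km": 1000.0, "cm": 0.01}  # metres as the base unit

class Quantity:
    def __init__(self, magnitude, unit):
        self.magnitude, self.unit = magnitude, unit

    def to(self, unit):
        # Convert through the base unit, then into the target unit.
        base = self.magnitude * FACTORS[self.unit]
        return Quantity(base / FACTORS[unit], unit)

    def __add__(self, other):
        # Bring the operand into our unit before adding magnitudes.
        return Quantity(self.magnitude + other.to(self.unit).magnitude, self.unit)

total = Quantity(2, "km") + Quantity(500, "m")
print(total.magnitude, total.unit)  # -> 2.5 km
```

Libraries like pint generalize exactly this: a registry of units and factors, plus arithmetic that converts operands before combining them.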

Back

STAC: Search and discovery of geospatial assets

Home

Speaker Matthias Mohr
RoomAW1.126
TrackGeospatial
Time09:20 - 09:40
Event linkView original entry

The talk introduces STAC, the SpatioTemporal Asset Catalog specification. It aims to enable a cloud-native geospatial future by providing a common layer of metadata for better search and discovery. It is an emerging open standard to catalog and expose geospatial data from different sources either in a static or dynamic way.



We’ll cover the core set of metadata fields for STAC Catalogs, Collections, and Items first, along with available extensions for describing different types of data (EO, SAR, Point Cloud, etc.). With the basics of STAC in hand, the talk will go through the Open Source ecosystem for working with STAC metadata: validators, graphical user interfaces and client command line tools and libraries for search, access, and exploitation.

The SpatioTemporal Asset Catalog (STAC) specification is an emerging standard to catalog and expose geospatial data from different sources. It aims to enable a cloud-native geospatial future by providing a common layer of metadata for better search and discovery.



This talk gives a detailed overview of STAC and the way it allows for static and dynamic implementations at the same time. The simple concept of static catalogs living alongside the data on cloud file storage (e.g., AWS S3, GCS) by adding small JSON files is highlighted before talking through the dynamic searchable APIs built on top of the new OGC API – Features standard.
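The "small JSON files" idea behind static catalogs can be sketched as follows; the field names follow the STAC Item layout, but this is an abbreviated, hypothetical example rather than a spec-complete Item (the version string is the one current around early 2020):

```python
# Abbreviated, hypothetical STAC Item illustrating the "small JSON
# files alongside the data" idea; see the STAC spec for the complete
# set of required fields.
import json

item = {
    "type": "Feature",
    "stac_version": "0.9.0",
    "id": "scene-001",
    "bbox": [4.3, 50.8, 4.4, 50.9],
    "geometry": {"type": "Point", "coordinates": [4.35, 50.85]},
    "properties": {"datetime": "2020-02-01T00:00:00Z"},
    "assets": {"thumbnail": {"href": "scene-001/thumb.png"}},
    "links": [{"rel": "root", "href": "../catalog.json"}],
}

# A static catalog is nothing more than files like this one, written
# next to the data and tied together through the "links" entries.
text = json.dumps(item, indent=2)
print(json.loads(text)["id"])  # -> scene-001
```

A dynamic STAC API serves the same JSON structures, but generated on demand behind a searchable endpoint instead of as files on object storage.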



The talk will cover the core set of metadata fields for STAC Catalogs, Collections, and Items, along with available extensions for describing different types of data (EO, SAR, Point Cloud, etc.). With the basics of STAC in hand, the talk will go through the Open Source ecosystem for working with STAC metadata: validators, graphical user interfaces and client command line tools and libraries for search, access, exploitation and API generation.



The specification is an open standard developed on GitHub by a wide range of organizations with a strong focus on extensibility to support various domains. It encourages interested parties to extend the specification for their needs for a future of interoperable discovery and work with geospatial data. An ecosystem of Open Source tooling is evolving around the specification.

Back

openEO: Interoperable geoprocessing in the cloud

Home

Speaker Matthias Mohr
RoomAW1.126
TrackGeospatial
Time09:40 - 10:00
Event linkView original entry

openEO is a new API specification for Earth Observation data cubes that supports data extraction, processing and viewing.



Both the standard and its implementations are Open Source projects, which itself rely on Open Source libraries under the hood, such as GRASS GIS, GDAL, Geotrellis, Rasdaman, or provide a standardized interface to proprietary systems such as Google Earth Engine. Client implementations are available for JavaScript, R, Python, QGIS and web browsers.



This talk will show an overview of the main capabilities, and available client and backend implementations.

Earth Observation data are becoming too large to be downloaded locally for analysis. Also, the way they are organised (as tiles, or granules: files containing the imagery for a small part of the Earth and a single observation date) makes it unnecessarily complicated to analyse them. The solution is to store these data in the cloud, on compute back-ends, process them there, and browse the results or download resulting figures or numbers. Unfortunately, data and APIs are too often proprietary solutions, and lock-in to a service provider happens easily, so an interoperable standard across service providers is much needed.



The aim of openEO is to develop an open API to connect R, Python, JavaScript and other clients to big Earth observation cloud back-ends in a simple and unified way. With such an API, each client can work with every service provider, and it becomes possible to compare them in terms of capacity, cost, and results (validation, reproducibility).



The specification is centered around Earth Observation data cubes that support data extraction, processing and viewing. It specifies a set of common processes to be used, so that switching between service providers is less of a problem. Both the standard and its implementations are Open Source projects. Under the hood, the client and backend implementations rely on Open Source libraries, such as GRASS GIS, GDAL, Geotrellis and Rasdaman, but also provide a standardized interface to proprietary systems such as Google Earth Engine.

Back

GeoServer Basics

Home

Speaker Jody Garnett
RoomAW1.126
TrackGeospatial
Time10:05 - 10:25
Event linkView original entry

GeoServer Basics
Welcome to GeoServer, a popular web service for publishing your geospatial data using industry standards for vector, raster and mapping.



Are you just getting started with GeoServer, or considering it for the first time?



This presentation is here to help, introducing the basics of:
Usage: Concepts used to connect to your data and publish as a spatial service.
Context: What role GeoServer plays in your organization and what value the application provides.
Community: How the project is managed, and a discussion of the upcoming activities.



Attend this presentation to get a running start on using GeoServer in your organization.

Back

GeoNetwork Basics

Home

Speaker Jody Garnett
RoomAW1.126
TrackGeospatial
Time10:25 - 10:45
Event linkView original entry

GeoNetwork Basics
Welcome to GeoNetwork, a leading web service for keeping track of the spatial information used by your organization.



Jody is an experienced open source community member, digging into what this technology offers, and how it is used. This presentation shares these findings with you, and touches on what makes GeoNetwork succeed:



We look at what GeoNetwork is for, the business challenge it faces, and the amazing technical approach taken by the technology.
For context, we look at the core layer publishing workflow to see what is required.
We peek under the hood at how the editor works, and discover the central super-power of GeoNetwork.
We look at examples of how GeoNetwork has been extended by organizations, to see what is possible with this technology.



Attend this presentation for an informative tour of the GeoNetwork ecosystem.

Back

Getting inspired by open software for a web site: g3n.fyi

Home

Speaker Thomas Bremer
RoomAW1.126
TrackGeospatial
Time10:50 - 11:10
Event linkView original entry

So you are here at FOSDEM in Brussels. Also sightseeing? Geocaching? Have you tried to optimize your way along the sights, or to find many caches without making it a hike? Then you've got the traveling salesman problem! It is famous in computer science because finding the optimum is extremely difficult, while good approximations can be found easily.



Last year we had a talk about 3geonames.org, where the Hilbert curve was mentioned as being used in name generation. While researching this space-filling curve, it turned out that such curves give good approximations for the traveling salesman problem; this has already been evaluated scientifically. Route finding using these curves is extremely simple, whereas other algorithms need much more computational effort. Using a space-filling curve to find a route proposal and improving it with the 2-opt optimization algorithm gives the quality of 2-opt at high speed. It is even fast enough to keep pace with interactive changes of the waypoints on a moving map display.
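The two-step approach described above can be sketched as follows (a minimal illustration, not the speaker's implementation; the waypoints and grid size are made up): sort the waypoints by their index along a Hilbert curve to get a cheap initial tour, then polish it with 2-opt.

```python
import math

def hilbert_index(order, x, y):
    """Position of grid cell (x, y) along a Hilbert curve covering a
    2**order x 2**order grid (classic bit-twiddling formulation)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def tour_length(points, tour):
    # Closed tour: i - 1 wraps around, so the last leg is counted too.
    return sum(math.dist(points[tour[i - 1]], points[tour[i]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Reverse segments as long as doing so shortens the closed tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(points, candidate) < tour_length(points, tour):
                    tour, improved = candidate, True
    return tour

waypoints = [(3, 7), (12, 2), (8, 8), (1, 1), (14, 14), (6, 3)]
# Step 1: a good initial tour, almost for free, from the Hilbert order.
initial = sorted(range(len(waypoints)),
                 key=lambda i: hilbert_index(4, *waypoints[i]))
# Step 2: polish it with 2-opt.
route = two_opt(waypoints, initial)
```

Because the Hilbert order already keeps nearby points adjacent, 2-opt starts close to a good tour and converges quickly, which is what makes the interactive use case feasible.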



This mechanism gives short routes for your sightseeing or geocaching planning, and can also be used professionally if you have to visit several locations on a single tour, as in package delivery, meals on wheels, or elderly care.

Back

Arabesque: a geographic flow visualization application

Home

Speaker Nicolas Roelandt
RoomAW1.126
TrackGeospatial
Time11:15 - 11:35
Event linkView original entry

Arabesque is a FLOSS web application dedicated to flow mapping and analysis. Using web technologies, it provides tools for the user to load origin/destination (O/D) data. The user can easily modify filter parameters or add new ones. A default symbology is proposed, but the user can define their own.



Arabesque aims to provide a fast, lightweight framework to visualize and explore flow data, with special attention to graphics and symbology in order to produce beautiful and accurate flow maps.



This presentation will give a short overview of the project and a quick demo of the application. It extends the presentation given at FOSS4G 2019 with the release, code publication, updates and a live demo.

Geographic flow visualization (gFlowiz) is an interdisciplinary project dedicated to flows and networks in the geoweb.
It is led by a team of French researchers and engineers in Geography, Cartography and Computer Science from both IFSTTAR and CNRS.



A state of the art of current issues in flow and movement analysis on the geoweb was produced by compiling around 70 applications into a thematic dashboard, and a survey of about 200 respondents on flow map usages and needs was carried out.



The results were combined to create the specifications of the Arabesque application.
Arabesque is a FLOSS web application dedicated to flow mapping and analysis. Using web technologies, it provides tools for the user to load origin/destination data in robust and well-documented formats (CSV or GeoJSON). Arabesque displays the most significant data at launch, but the user can easily modify filter parameters or add new ones on nodes and/or links. A default symbology is proposed, but the color scales, size, transparency and shape of objects can be modified as well.



Arabesque aims to provide a fast, lightweight framework to visualize and explore flow data.
Special attention has been paid to graphics and the use of correct symbology in order to produce
beautiful and accurate flow maps.



This talk will give a short overview of the gFlowiz project and a quick demo of the Arabesque application.







Back

Boost.Geometry R-tree - speeding up geographical computation.

Home

Speaker Adam Wulkiewicz
RoomAW1.126
TrackGeospatial
Time11:40 - 12:00
Event linkView original entry

What is a spatial index? How does an R-tree work? How do you create one in C++, and how do you use it with geographical data in order to solve problems on the surface of an ellipsoid?



We give answers to these questions and show solutions using the utilities provided by the Boost.Geometry library, which is currently used to provide GIS support in MySQL. We discuss the properties of the R-tree and show the differences between variants of this spatial index. This is a practical talk presenting a set of example problems and various solutions to them. The complexity and performance of the algorithms are discussed to allow the listener to choose the best solution for a specific purpose.

Back

Testing Navit using Device Farms

Home

Speaker Patrick Höhn
RoomAW1.126
TrackGeospatial
Time12:05 - 12:25
Event linkView original entry

Back

Reverse Geocoding is not easy

Home

Speaker Ilya Zverev
RoomAW1.126
TrackGeospatial
Time12:30 - 12:50
Event linkView original entry

Having seen a dozen different OpenStreetMap-based geocoders, I did not expect to find myself writing another one. But here I am, tasked with making a reverse geocoder better than the industry-standard Nominatim. It turns out to be a fun and not so straightforward task. Let's see what can go wrong.

Back

Working with spatial trajectories in Boost Geometry

Home

Speaker Vissarion Fysikopoulos
RoomAW1.126
TrackGeospatial
Time12:55 - 13:15
Event linkView original entry

Recently there has been growing interest in geospatial trajectory computing. We call trajectories sequences of time-stamped locations. As the technology for tracking moving objects becomes cheaper and more accurate, massive amounts of spatial trajectories are generated nowadays by smartphones, infrastructure, computer games, natural phenomena, and many other sources.



In this talk we will present the set of tools available in Boost.Geometry for working with trajectories, highlighting the latest as well as older library developments. Starting with basic operations like length, distance and closest-point computations between trajectories, we move on to more advanced operations like compression (simplification) as well as the conceptually opposite operation of densification, by interpolating or generating random points on a given trajectory. We conclude with the important topic of similarity measurements between trajectories.



All implemented algorithms are parameterized using Boost.Geometry's strategy mechanism, which controls the accuracy-efficiency trade-off, and work for three different coordinate systems (namely cartesian, spherical and ellipsoidal), each of which comes with its own advantages and limitations.

Back

MobilityDB

Home

Speaker Mahmoud Sakr
RoomAW1.126
TrackGeospatial
Time13:20 - 13:40
Event linkView original entry

MobilityDB is an open source moving object database system (https://github.com/ULB-CoDE-WIT/MobilityDB). Its core function is to efficiently store and query mobility tracks, such as vehicle GPS trajectories. It is engineered on top of PostgreSQL and PostGIS, providing spatiotemporal data management via SQL. It integrates with the PostgreSQL ecosystem, allowing for complex architectures such as mobility stream processing and cloud deployments.



The presentation will explain the architecture of MobilityDB, its database types, indexes, and operations. An end-to-end example will be demonstrated, from data preparation, loading and transformation through querying and visualization. This presentation will be of special interest to the PostgreSQL community and to professionals in the transportation domain.



This presentation will build on our talks in PGConf.ru 2019, and FOSS4G Belgium 2019.

MobilityDB is an open source PostgreSQL extension that adds support for temporal and spatio-temporal objects to PostgreSQL and PostGIS. MobilityDB implements the Moving Features specification from the Open Geospatial Consortium (OGC).



Features:
- Time types: Period, PeriodSet, and TimestampSet which, in addition to the TimestampTz type provided by PostgreSQL, are used to represent time spans.
- Temporal types: tbool, tint, tfloat, and ttext, which are based on the bool, int, float, and text types provided by PostgreSQL and are used to represent basic types that evolve over time.
- Spatio-temporal types: tgeompoint and tgeogpoint, which are based on the geometry and geography types provided by PostGIS (restricted to 2D or 3D points) and are used to represent points that evolve over time.
- Range types: intrange and floatrange, which are used to represent ranges of int and float values.



All these types come with an extensive set of functions and operators. GiST and SP-GiST index support for these types is also provided.
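To make the types concrete, here is a hypothetical SQL sketch (the table and column names are invented for illustration; function names such as length and atPeriod follow the MobilityDB manual, so verify them against your installed version):

```sql
-- A point that evolves over time: one row per vehicle trip.
CREATE TABLE trips (
  vehicle_id integer,
  trip       tgeompoint
);

-- Distance travelled by each vehicle.
SELECT vehicle_id, length(trip) AS metres FROM trips;

-- Restrict each trip to the part observed on 1 February 2020.
SELECT vehicle_id, atPeriod(trip, period '[2020-02-01, 2020-02-02)')
FROM trips;
```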

Back

Geo-spatial queries on multi-petabyte weather data archives

Home

Speaker Emanuele Danovaro
RoomAW1.126
TrackGeospatial
Time13:45 - 14:05
Event linkView original entry

Geo-spatial queries on multi-petabyte weather data archives
John Hanley, Nicolau Manubens, Tiago Quintino, James Hawkes, Emanuele Danovaro



Weather forecasts produced by ECMWF and environment services from the Copernicus programme act as a vital input for many downstream simulations and applications. A variety of products, such as ECMWF reanalyses and archived forecasts, are additionally available to users via the MARS archive and the Copernicus data portal. Transferring, storing and locally modifying large volumes of such data prior to integration currently presents a significant challenge to users. The key aim of ECMWF's effort in the H2020 LEXIS project is to provide tools for data query and pre-processing close to the data archives, facilitating fast and seamless application integration by enabling precise and efficient data delivery to the end-user.



ECMWF aims to implement a set of services to efficiently select, retrieve and pre-process multi-dimensional meteorological data by allowing multi-dimensional queries, including spatio-temporal and domain-specific constraints. These services are exploited by LEXIS partners to design complex workflows that mitigate the effects of natural hazards and investigate the water-food-energy nexus.



This talk will give a general overview of the LEXIS project and its main aims and objectives. It will present the pilot applications exploiting ECMWF data as the main driver of complex workflows on HPC and cloud computing resources. In particular, it will focus on how ECMWF's data services will provide geospatial queries on multi-dimensional peta-scale datasets, and how this will improve overall workflow performance and enable access to new data for the pilot users.



This work is supported by the Lexis project and has been partly funded by the European Commission's ICT activity of the H2020 Programme under grant agreement number: 825532.

Back

actinia: geoprocessing in the cloud

Home

Speaker Markus Neteler
RoomAW1.126
TrackGeospatial
Time14:10 - 14:30
Event linkView original entry

With a rapidly increasing wealth of Earth Observation (EO) and geodata, the demand for scalable geoprocessing solutions is growing as well. Following the paradigm of bringing the algorithms to the data, we developed the cloud-based geoprocessing platform actinia (https://actinia.mundialis.de and https://github.com/mundialis/actinia_core). This free and open source solution is able to ingest and analyse large volumes of data in parallel. actinia provides an HTTP REST API around GRASS GIS functionality, extended by ESA SNAP and user scripts written in Python. Core functionality includes the processing of raster and vector data as well as time series of satellite images. The backend is connected to the full Landsat and Copernicus Sentinel archives. actinia has been an OSGeo Community Project since 2019 and is a backend of the openEO.org API (EU H2020 project).

Back

neat-EO.pink: Deep Learning Computer Vision patterns extraction at scale

Home

Speaker Olivier Courtin
RoomAW1.126
TrackGeospatial
Time14:35 - 14:55
Event linkView original entry

neat-EO.pink, a Deep Learning Computer Vision framework for geospatial imagery,
allows you to perform the following at scale:






This presentation will focus on the latest enhancements of RoboSat.pink,
and mainly on:




Back

Apache Spark on planet scale

Home

Speaker Denis Chaplygin
RoomAW1.126
TrackGeospatial
Time15:00 - 15:20
Event linkView original entry

Apache Spark is an open-source distributed general-purpose cluster-computing framework with implicit data parallelism. OpenStreetMap is a huge database of features found on the Earth's surface. Working with that database is hard, so Spark is a natural solution to OSM's size-related processing issues. I'm going to show how to load OSM data into Spark, run processing algorithms like extract/merge or render, and how using Spark improves the development process and greatly cuts processing times.

I will show how to use the Spark OSM DataSource to load data into a Spark DataFrame, and how to use Spark for OSM data merging/extraction, simple analysis, rendering, etc. The talk will also mention a multithreaded OSM PBF parser that can be used independently of Spark or any other processing library.

Back

Creating GPX tracks from cycle routes in OpenStreetMap

Home

Speaker Henry Miskin
RoomAW1.126
TrackGeospatial
Time15:25 - 15:45
Event linkView original entry

Across Europe there are many regional, national and international cycle routes; they provide safe ways for families and friends to travel and explore by bike. They can, however, be hard to follow: overgrown vegetation can hide directions, signs are subject to vandalism, and sometimes it is just easy to miss a turn.



Having freely available GPX tracks for cycle routes means people can better plan their journeys and avoid wrong turns when following a route. OpenStreetMap is the best source of information for cycle routes, and these relations can easily be downloaded using the Overpass API.



In this talk I will present an Open Source tool to download GPX tracks of cycle routes, and a website for people to download the generated GPX files. I will discuss some of the nuances of how cycle routes are stored as relations and what processing needs to be performed in order to create a continuous route. In addition, I will speak about how the tool can be used to identify inconsistencies in OpenStreetMap data.
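The processing needed to form a continuous route can be sketched roughly like this (a simplified illustration with made-up node ids; real relations also involve member roles, gaps and forks). An OSM route relation stores an unordered list of ways, each an ordered list of node ids, which must be stitched end-to-end, reversing ways where necessary:

```python
def chain_ways(ways):
    """Order an unordered list of ways (lists of node ids) into one
    continuous node sequence, reversing ways where needed."""
    remaining = [list(w) for w in ways]
    track = remaining.pop(0)
    while remaining:
        for i, way in enumerate(remaining):
            if way[0] == track[-1]:          # way continues the track
                track += way[1:]
            elif way[-1] == track[-1]:       # way continues, but reversed
                track += way[::-1][1:]
            elif way[-1] == track[0]:        # way precedes the track
                track = way[:-1] + track
            elif way[0] == track[0]:         # way precedes, but reversed
                track = way[::-1][:-1] + track
            else:
                continue                     # this way doesn't attach yet
            remaining.pop(i)
            break
        else:
            raise ValueError("route is not continuous")
    return track

# Three ways stored in arbitrary order and direction:
ways = [[4, 5, 6], [3, 2, 1], [3, 4]]
route = chain_ways(ways)   # route == [1, 2, 3, 4, 5, 6]
```

The ValueError branch is exactly where a tool like the one presented can flag inconsistencies in the OpenStreetMap data, such as a gap in the relation.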

Back

Divide and map. Now.

Home

Speaker Jiri Vlasak
RoomAW1.126
TrackGeospatial
Time15:50 - 16:10
Event linkView original entry

There is the Tasking Manager by the Humanitarian OpenStreetMap Team (HOT). We use it heavily during mapathons (mapping for developing countries). The Tasking Manager serves one primary purpose: take a large area to be mapped and split it into squares a human can map in a few minutes. With this divide-and-map approach, we can map a lot.



There are some issues with the Tasking Manager, however. The main problem is performance -- it is slow and fails under heavy request load.



The next is the architecture of the Tasking Manager. It is wrong, in my opinion.



Last but not least, the Tasking Manager is not community-driven. The Tasking Manager is a product of HOT for which you can download the source code.



In the talk, I want to introduce Divide and map. Now. -- damn project. It is an alternative to the Tasking Manager that tries to fix the issues noted above.

I am going to publish the full description between Dec 2019 and Jan 2020, as it is not ready yet. The damn project is not fully released yet, although parts of it are developed as open source:






EDIT: The project is released:




Back

Integration Processes

Home

Speaker María Arias de Reyna
RoomAW1.126
TrackGeospatial
Time16:15 - 16:35
Event linkView original entry

To run our software we need a flow of data going through it. Usually we write glue scripts to move data from one component to the next. Integration processes are the "glue" between these software pieces: automating the data flows, adding conditional steps, handling credentials in a secure way, and so on. That is usually a tedious and repetitive task that lots of developers do again and again. With integration process frameworks we can make it simpler and reuse expertise from other developers. We could even build entire workflows without writing a single line of code.

Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data. It is based on Enterprise Integration Patterns, which help you solve integration problems by applying best practices out of the box. Camel is one of the most active projects in the Apache Foundation and is the base of many other FOSS projects.



Syndesis is an open source project that helps non-developers create complex integrations easily through a graphical interface.

Back

Wallonia GeoChallenge

Home

Speaker Emmanuel Jauquet
RoomAW1.126
TrackGeospatial
Time16:40 - 17:00
Event linkView original entry

During the first months of 2020, the Geoportail of Wallonia and some partners will organise a public event called « The GeoChallenge ».



In a few words, the idea is to ask participants to address the expectations of public services and citizens by using Wallonia's geographic information and resources.



More than a hackathon, the event will last a few weeks with the idea to create results that translate into concrete benefits.



The first call for proposals will be launched in February and will run until 16 March.

Back

Do Linux Distributions Still Matter with Containers?

Home

Speaker Scott Mccarty
RoomK.3.201
TrackDistributions
Time09:00 - 09:55
Event linkView original entry

In the beginning there was compiling and static linking. My first programs when I was 10 years old worked like that. Then, we discovered dynamic linking. This was great because we could now patch one library and all of the programs would pick up the change on restart. But we created another problem - dependencies. Throughout the history of computing we have solved one problem and created another. Containers are no different. This talk will walk through why we invented Linux distros and why we should continue to appreciate them in a world full of container images...




The short answer is yes. Yes, they still matter because of several reasons:




  1. A Linux distribution is a bunch of human beings who work together to create a dependency tree of software packages. This dependency tree is still convenient in container images


  2. There is a TON of knowledge embedded in systemd unit files on how to properly start/stop commonly used daemons


  3. Mapping CVEs to packages with metadata is still useful in a container


  4. Standardized C libraries like glibc are used by binaries, by interpreters like Python, and even by virtual machine managers like the JVM


  5. Critical libraries like libssl, openssl, and hardware accelerated bridges, are useful to everyone


  6. Linux distros are a connection point with gravity which builds community. Community is what solves problems


  7. Host and container image portability (glibc can actually take different code paths depending on what hardware is made available by the kernel; also, glibc has a minimum/maximum kernel version that it supports well, fixed when compiled)



Back

What's up on Haiku?

Home

Speaker François Revol (mmu_man)
RoomK.3.201
TrackDistributions
Time10:00 - 10:25
Event linkView original entry

What are the new features in the upcoming R1/beta2? How did the packaging system work out? How to make your software easier to port to it, and how to contribute?

Haiku is a Free Software Operating System, inspired by the BeOS, which focuses on personal computing.



It's been in the making for more than 18 years now. We'll see what's coming up for the R1/beta2 release.



The packaging system has been integrated for some years now, as a different approach to software distribution. Did it live up to its promise? How well does it scale with the growing number of available packages?



What are the specifics of Haiku that you should care about when writing portable software?



How to contribute to various parts of the system?

Back

Homebrew: Features and Funding

Home

Speaker Mike McQuaid
RoomK.3.201
TrackDistributions
Time10:30 - 10:55
Event linkView original entry

A talk about the Homebrew package manager: how we've been working with our users to introduce new features to subsets of users, encouraging users to donate to the project, and communicating both of these to as many users as possible without being annoying.

Back

GNU Guix as an alternative to the Yocto Project

Home

Speaker Mathieu Othacehe
RoomK.3.201
TrackDistributions
Time11:00 - 11:25
Event linkView original entry

This talk demonstrates how to use GNU Guix to build a root filesystem for an
embedded device (Pine A64 LTS). I will also try to answer the question:
could GNU Guix be a viable alternative to the Yocto project and what would be
the benefits of using GNU Guix?

Working as an embedded software engineer, I've been using the Yocto and Buildroot
projects to create root filesystems for embedded devices. While Buildroot is
only suitable for small embedded systems, Yocto does scale well but is a
really complex tool.



Moreover, both tools are difficult to handle for developers without a strong
understanding of Linux system integration and, on the other hand, do not
provide APIs and introspection tools for integrators.



In this talk, I want to explore the possibility of using GNU Guix as an
alternative to the Yocto project to generate embedded root filesystems.



After 7 years of existence, with more than 10,000 packages and 4 supported
architectures, GNU Guix can be used as a transactional package manager and as an
advanced distribution of the GNU operating system running on the Linux kernel.



What would be missing to cover all Yocto features? How could the embedded
developer benefit from GNU Guix features such as its high level Scheme API,
package substitution mechanism, strong reliability and reproducibility?



To provide some real-world application, I'll compare the process of adding
support for a new board (Pine A64 LTS) on Yocto and GNU Guix. Then I'll
compare how to configure, build and flash a small root filesystem for
that same board with the two tools.

Back

Software distribution: new points of failure

Home

Speaker Alexander E. Patrakov
RoomK.3.201
TrackDistributions
Time11:30 - 11:55
Event linkView original entry

There is a multitude of software and code ecosystems: Linux distribution packages, language-specific (e.g. Python or node.js) modules, third-party desktop themes, git repositories, and recently also Flatpak and Snap. Users thus obtain software and code mainly from the network. This talk explores what can go wrong in such code delivery mechanisms, and what actually went wrong when a new threat materialized: networks in certain countries started to be unreliable "thanks" to their governments (a classic example: https://isitblockedinrussia.com/?host=7-zip.org == true). It also covers what technical steps can be taken so that the said ecosystems survive as censorship and overblocking spread further across the globe.

The focus will be on how mirror networks and CDNs operate (and what's the difference and why it matters), illustrated by examples of Debian mirrors and NPM. Both availability and integrity concerns regarding code delivery will be discussed.

Back

Reinventing Home Directories

Home

Speaker Lennart Poettering
RoomK.3.201
TrackDistributions
Time12:00 - 12:55
Event linkView original entry

The concept of home directories on Linux/UNIX has changed little in the last 39 years. It's time to take a closer look and bring them up to today's standards regarding encryption, storage, authentication, user records, and more.



In this talk we'll discuss "systemd-homed", a new component for systemd that reworks how we do home directories on Linux, adds strong encryption that makes sense, supports automatic enumeration and hot-plugged home directories, and more.

Back

Using systemd security features to build a more secure distro

Home

Speaker Zbigniew Jędrzejewski-Szmek
RoomK.3.201
TrackDistributions
Time13:00 - 13:55
Event linkView original entry

Systemd provides a bunch of features which can be used to contain and secure services,
making security and isolation primitives provided by the kernel accessible to system programs.
This allows service authors to write much simpler code, and often to avoid any integration
with the operating system for security purposes.
Unfortunately, those features are still not widely used, possibly because developers
want to maintain compatibility with a wide range of systems.



I'll talk about the features that are the most useful,
how they can be used in practice, and how this could be used
to make a noticeable change in security at the distribution level.

The list of security features that systemd provides is long and growing:
First, it performs setup like creating runtime directories and opening sockets, so the service doesn't need privileges.
Second, it makes it easy to run services as unprivileged users, removing a whole set of problems.
Third, it uses kernel features like mount and network namespaces, capabilities, and resource limits to constrain services.
Fourth, it implements additional filters using BPF (per-service firewalls, device controller).
Fifth, it cleans up resources after the service is done, again removing the need for privileges.
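Most of these primitives are switched on with single unit-file directives. A sketch of a hardened service (the service name and binary path are invented; directive names as documented in systemd.exec(5) and systemd.resource-control(5)):

```ini
# /etc/systemd/system/example-daemon.service
[Service]
ExecStart=/usr/bin/example-daemon
DynamicUser=yes            # run as a transient, unprivileged user
RuntimeDirectory=example   # /run/example is created and cleaned up for us
ProtectSystem=strict       # read-only view of the file system
ProtectHome=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
IPAddressDeny=any          # per-service BPF firewall ...
IPAddressAllow=localhost   # ... allowing only loopback traffic
```

None of this requires a single line of code in the daemon itself, which is exactly the point of moving such policy into the unit file.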



We could use this to vastly simplify services and to provide an additional level of security for system services.
Some distributions are making use of this, but not nearly enough.
Fedora is probably at the forefront, but the common case is still to run as root with full access to everything the service doesn't need.
Debian is now discussing a General Resolution to drop SysV Init compatibility and empower packagers to use all systemd features.
Full support in the two biggest distro families would motivate upstreams to make systemd their "baseline" and build more secure services.
New features like "dynamic users" could be used to make Linux systems take more modern approaches to system security.



I want the talk to serve as a prompt for a general discussion of how we could modernize service packaging in distros
to avoid reimplementing security features in individual daemons, and how to stop the 90's mentality of running everything as root.

Back

Introducing libeconf

Home

Speaker Richard Brown
RoomK.3.201
TrackDistributions
Time14:00 - 14:25
Event linkView original entry

systemd has a very distribution-friendly way of providing its configuration: distributions provide defaults in /usr, and users can override things either selectively or entirely with their own files in /etc.
This is especially nice for distributions wishing to be in some way stateless, support a read-only root filesystem, or provide some kind of factory-reset.
libeconf is a newly written C library to ease the adoption of similar configuration layering in other programs across the Linux ecosystem.
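The layering idea looks roughly like this (file names invented for illustration): vendor defaults live under /usr, the administrator overrides only what they need in /etc, and libeconf presents the merged result to the program.

```ini
# /usr/etc/example.conf -- vendor defaults shipped by the distribution
[main]
loglevel = warning
color = auto

# /etc/example.conf -- local admin override; only this key changes
[main]
loglevel = debug
```

A program reading both layers through libeconf would see loglevel = debug and color = auto; deleting the file in /etc amounts to a factory reset.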

This talk will give a brief introduction to libeconf, show how to use it in your existing programs, and demonstrate some examples that have already adopted libeconf (e.g. PAM, util-linux, rebootmgr).



The session will also share some future plans and welcome suggestions for future contributions, especially for additional features, language bindings, etc.



The target audience is primarily developers of 'low level' distribution plumbing (e.g. core daemons and services, package managers) who are most likely to benefit from libeconf, but the talk might be of interest to anyone developing services for Linux distributions.

Back

GRUB upstream and distros cooperation

Home

Speaker Daniel Kiper
RoomK.3.201
TrackDistributions
Time14:30 - 14:55
Event linkView original entry

The presentation will discuss the current state of GRUB upstream development and cooperation with distributions.

The first half of the presentation will focus on the past year's, current and future development efforts. The second half will discuss cooperation between GRUB upstream and distros. In general it will show the current progress of the project and its main pain points. One of the goals of the presentation is to solicit some help from the community: the maintainers are quite busy and are not able to solve all issues themselves, so help from others is greatly appreciated. A Q&A session is planned at the end of the presentation.

Back

Integrating new major components on fast and slow moving distributions

Home

Speaker Frederic Crozat
RoomK.3.201
TrackDistributions
Time15:00 - 15:55
Event linkView original entry

Upgrading big components in Linux distributions is hard. But integrating them while minimizing regressions (for stable distributions) and not slowing down release pace (for rolling releases) requires a lot of process and tooling.



Let's deep dive in those.

Over the previous months, the openSUSE and SUSE teams have worked together on integrating the latest GNOME release (3.34) into their various distributions, while minimizing duplicated work and sharing as much code as possible.



We'll describe how it was done for the three different flavors of distributions:
- openSUSE Tumbleweed (rolling release)
- SUSE Linux Enterprise 15 SP2 (Enterprise release)
- openSUSE Leap 15.2 (stable release)

Back

Fedora rawhide packages gating, for real!

Home

Speaker Pierre-Yves Chibon
RoomK.3.201
TrackDistributions
Time16:00 - 16:55
Event linkView original entry

Rawhide, the rolling development version of Fedora, has often been considered too broken and too unstable, sometimes to the detriment of the development of stable releases as well.
In the near future this should no longer happen, as now every change made to every package landing in Fedora rawhide can be gated based on test results.



This talk will walk you through the processes and changes that Fedora landed to make this idea a reality.

Rawhide is the development version of Fedora. It is the version from which stable Fedora releases branch, and thus every change made to it will trickle down to the next stable release. This also means that there are times in the development of Fedora when changes landing in rawhide can (and do!) have a detrimental effect on the next stable release.



With the rawhide package gating initiative, Fedora has gained mechanisms to test and gate packages based on the results of these tests.



In this talk we will go through the mechanisms built to allow this gating, how it works, and how to debug issues with it.
We will also gladly receive feedback from Fedora contributors who have interacted with it.

Back

Discover dependency license information with ClearlyDefined

Home

Speaker Jeff Mendoza
RoomK.3.401
TrackRust
Time09:00 - 09:25
Event linkView original entry

Complying with license obligations can involve a lot of hurdles. This results in developers skipping compliance or failing to achieve correct compliance. To compound this, package authors sometimes fail to describe the license of their package clearly, or omit license information for included files. ClearlyDefined is a community-curated repository of discovered license information for Crates packages, among other types.



This talk will cover the importance of the license obligations of the dependencies of your Rust package, tool, or application; how to discover the license information of those packages; and a demonstration of Cargo-compatible tooling that allows automated license recordkeeping and notice file generation as part of your CI system.

Back

cargo deny

Home

Speaker Jake Shadle
RoomK.3.401
TrackRust
Time09:30 - 09:55
Event linkView original entry

A talk about cargo-deny, why we created it, and how it helps us manage our dependencies in the long term.

The slides are available on GitHub; they use reveal-md for presentation, or you can just browse the contents of all the slides in their text form in slides.md.
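As a rough illustration (the keys below are assumptions based on common cargo-deny usage, not material from the talk), dependency policy for cargo-deny typically lives in a `deny.toml` next to `Cargo.toml`:

```toml
# Hypothetical minimal deny.toml sketch; consult cargo-deny's docs for the
# authoritative schema.

[licenses]
# Only allow dependencies under these SPDX licenses.
allow = ["MIT", "Apache-2.0"]

[bans]
# Warn when the dependency graph pulls in multiple versions of one crate.
multiple-versions = "warn"

[advisories]
# Fail the check if a dependency has a known security advisory.
vulnerability = "deny"
```

A check would then run as part of CI, e.g. via `cargo deny check`.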

Back

Packaging Rust programs in GNU Guix

Home

Speaker Efraim Flashner
RoomK.3.401
TrackRust
Time10:00 - 10:25
Event linkView original entry

Rust is a language with a healthy ecosystem and a strong developer base. With built-in dependency management, it's easy to build and install new programs, even for those who have never used the language. But how is its adoption among Linux distros?
Come with me as we figure out how best to package Rust libraries and binaries in Linux distributions which demand total control over dependency management.

Back

rustdoc: beyond documentation

Home

Speaker Guillaume Gomez
RoomK.3.401
TrackRust
Time10:30 - 10:50
Event linkView original entry

The Rust compiler comes with a few tools, and rustdoc is one of them. It is THE standard Rust tool for generating documentation for your crates.




You can write documentation using the "///" or "//!" patterns (which are syntactic sugar over #[doc = "..."]).



It generates HTML which can be used locally without needing an internet connection. The documentation search runs in JavaScript directly in your browser. It has an integrated source code viewer. You can pick different themes (and even add one yourself). It works with JavaScript disabled. It provides settings to make browsing your docs more comfortable. You can generate docs with extra content (take a look at https://docs.rs/pwnies for a good example!).



But it doesn't only generate documentation: it also adds things for each type that you didn't know were available, thanks to the "Auto Trait Implementations" and "Blanket Implementations" sections.



In addition to generating documentation, it provides functionality such as an integrated documentation test runner (and those tests can themselves be quite customized!). It also provides lints that you can deny (missing_docs, missing_doc_code_examples).
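These pieces can be sketched in a single file (illustrative only; the crate name `mycrate` in the doc test is a placeholder):

```rust
//! Crate-level docs written with `//!` (syntactic sugar for `#![doc = "..."]`).

/// Item docs written with `///` (syntactic sugar for `#[doc = "..."]`).
///
/// The fenced block below is a doc test: rustdoc's test runner compiles
/// and executes it when you run `cargo test`.
///
/// ```
/// assert_eq!(mycrate::add(2, 3), 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // The same code a doc test would run, executed directly here.
    assert_eq!(add(2, 3), 5);
}
```

Running `cargo doc` renders the comments into HTML, and denying `missing_docs` would flag any public item without a `///` block.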



With all this, rustdoc is already a quite complete tool. But more will come in the future:




Back

Rusty instruments

Home

Speaker Benedict Gaster
RoomK.3.401
TrackRust
Time10:50 - 11:35
Event linkView original entry

This talk will introduce the Muses project, which applies programming language theory and practice, physical computing, networking, and musical theory to the design and implementation of Digital Musical Instruments. Rust is a key ingredient in the Muses project, providing a robust and performant foundation for cross-platform, desktop, and embedded system development.



The talk will give a brief introduction to the Muses project as a whole and then focus on the use of Rust in developing a selection of very different components
in the system, and its benefits for these wildly varying use cases.



Demos of the Digital Musical Instruments with Rust at their heart will be shown throughout the talk.

Controller and gesture interaction with audio and/or visual media is today ubiquitous, requiring the development of intuitive software solutions for interaction design. Designing and building these interfaces often requires extensive domain expertise in audio and visual media creation, e.g. from the musician, but additionally in engineering and software development. In this talk we focus on custom controller-based interactive systems for sound and musical performance, with a focus on an intuitive and simple design process that is accessible to artists.



A large part of the software developed for these systems is low-level system code, where direct access to hardware and understandable performance are hard requirements. Historically, these systems are written in C/C++, and in the case of embedded systems C is still the language of choice. With the emergence of the systems programming language Rust, an alternative for developing these systems is now with us, with support for high-level features such as traits, type inference, pattern matching, and of course its affine-based type system for pointers.



This talk will introduce the Muses project, which applies programming language theory and practice, physical computing, networking, and musical theory to the design and implementation of Digital Musical Instruments. Rust is a key ingredient in the Muses project, providing a robust and performant foundation for cross-platform, desktop, and embedded system development.



A high-level overview of the schedule is:






The demonstration will include the following physical components:






The framework also includes an approach to automatically generating interfaces from a DSL for SVG interfaces, written largely in Haskell, but with a tessellation pipeline written in Rust. However, while this will be mentioned in passing it is not the intention of this talk to cover this aspect of the system in detail. (For more information on this, see the provided link for the project website and associated papers, also linked from the site.)



Expected prior knowledge / intended audience



Knowledge of programming will be expected, and prior use of C/C++, Rust, or another systems programming language would be useful.



Audio topics will be introduced throughout the talk, and it is not expected that audience members have a musical background.



Speaker bio



Dr Benedict R. Gaster is an Associate Professor at the University of the West of England, where he is the director of the Computer Science Research Centre, within which he also leads the Physical Computing group. His research focuses on the design of embedded platforms for musical expression and, more generally, the IoT. He is the co-founder of Bristol LoRaWAN, a low-power wide-area network for the city of Bristol, and is the technical lead for a city-wide project on pollution monitoring for communities, having developed UWE Sense, a hardware platform for cheap sensing. Along with his PhD students, and in collaboration with UWE's music tech department, he is developing a new audio platform based on ARM micro-controllers, using the Rust programming language to build faster and more robust sound!



Previously Benedict worked at Qualcomm and AMD, where he was a co-designer of the programming language OpenCL and the lead developer of AMD's OpenCL compiler. He has a PhD in computer science for his work on type systems for extensible records and variants. He has published extensively and has given numerous presentations, including ones at FOSDEM on Rust and LoRaWAN.



Links to some previous talks by the speaker



Below are some examples of recent talks:




Back

Optimizing rav1e

Home

Speaker Luca Barbato
RoomK.3.401
TrackRust
Time11:40 - 12:25
Event linkView original entry

rav1e is a fast AV1 encoder written in Rust (and plenty of assembly), released monthly.



Since the 0.1.0 release, we try to make sure we provide an adequate speed or quality boost compared to the previous release.



This talk is about what tools are available in the Rust ecosystem and which practices worked best for us.

The presentation will touch on the following topics:
- Exploring a codebase and profiling it effectively, both for CPU usage and memory usage.
- Which optimization strategies worked best for us (critical path analysis vs peak consumer)
- Benchmarking and tracing



I'll provide examples of what tools worked well or not so well, and what I currently consider the best and most promising tools for each task.
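As a minimal sketch of the kind of measurement involved (hand-rolled and illustrative only; rav1e itself relies on dedicated benchmarking crates and profilers, not this), a wall-clock micro-benchmark over a toy workload might look like:

```rust
use std::time::{Duration, Instant};

// Run a closure `iterations` times and report the elapsed wall-clock time.
// This is the crudest possible benchmark: no warm-up, no statistics.
fn time_it<F: FnMut()>(label: &str, iterations: u32, mut f: F) -> Duration {
    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    let elapsed = start.elapsed();
    println!("{}: {:?} over {} iterations", label, elapsed, iterations);
    elapsed
}

fn main() {
    // Toy workload standing in for an encoder hot path.
    let data: Vec<u64> = (0..10_000).collect();
    let mut sum = 0u64;
    let elapsed = time_it("sum", 100, || {
        sum = data.iter().sum();
    });
    assert_eq!(sum, 49_995_000); // sum of 0..=9999
    assert!(elapsed >= Duration::ZERO);
}
```

Real profiling work of the kind the talk covers would then attribute that time to functions and allocations, rather than only measuring totals.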

Back

sled and rio

Home

Speaker Tyler Neely
RoomK.3.401
TrackRust
Time12:30 - 12:55
Event linkView original entry

sled is an embedded database that takes advantage of modern lock-free indexing and flash-friendly storage. rio is a pure-Rust io_uring library unlocking the Linux kernel's new asynchronous IO interface. This short talk will cover techniques that have been used to take advantage of modern hardware and kernels while optimizing for long-term developer happiness in a complex, correctness-critical Rust codebase.

Back

RedisJSON

Home

Speaker Christoph Zimmermann
RoomK.3.401
TrackRust
Time13:00 - 13:45
Event linkView original entry

Over the last decade, Redis has become one of the most popular NoSQL DBs, delivering on the promise of high throughput and low latency. What started as a pure C code base is gradually being augmented with Rust due to the trifecta of safety, concurrency, and speed. A primary example is the RedisJSON module, which turns Redis into a document DB.



The talk outlines the principal architecture of the re-implementation of RedisJSON, the challenges encountered, and the solutions to them. The focus is on practical aspects rather than conveying theoretical knowledge. A comparison with other open source document DBs concludes this presentation, concentrating on latency and throughput aspects.

Back

Sharing memories of Python and Rust

Home

Speaker Raphaël Gomès
RoomK.3.401
TrackRust
Time13:50 - 14:15
Event linkView original entry

The Mercurial version control system now has an option for running Rust code to improve performance. In this talk we will explore the challenges of using Rust efficiently inside a performance-sensitive Python project with more than 10 years of backwards compatibility.
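One common way to expose Rust to a Python codebase is through a C-compatible boundary that Python can call via its FFI machinery. The sketch below is purely illustrative (Mercurial's actual bindings use dedicated Python-binding crates; `count_lines` is a hypothetical function, not its real API):

```rust
// A C-ABI entry point: Python could call this through ctypes/cffi after the
// crate is built as a cdylib. Illustrative only; not Mercurial's real API.
#[no_mangle]
pub extern "C" fn count_lines(buf: *const u8, len: usize) -> usize {
    // SAFETY: the caller must pass a valid pointer/length pair.
    let bytes = unsafe { std::slice::from_raw_parts(buf, len) };
    bytes.iter().filter(|&&b| b == b'\n').count()
}

fn main() {
    // Exercise the boundary from Rust itself, as a Python caller would.
    let text = b"a\nb\nc\n";
    assert_eq!(count_lines(text.as_ptr(), text.len()), 3);
}
```

The interesting challenges the talk covers sit exactly at this boundary: who owns the memory, how errors cross it, and how to avoid copying large buffers between the two runtimes.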

Back

Building WebGPU with Rust

Home

Speaker Dzmitry Malyshau
RoomK.3.401
TrackRust
Time14:20 - 15:05
Event linkView original entry

WebGPU is a new graphics and compute API designed from the ground up by a W3C community group. It's a successor to WebGL, which also has a chance to become a standard on native platforms. We are going to talk about the API itself as well as our Rust-based implementation "wgpu".

Expected prior knowledge / intended audience: basic familiarity with one of the graphics APIs is good but not required.
We'll explain in detail how this is different from existing APIs.



"wgpu" is the native implementation of this API in Rust, running on top of Vulkan, Metal, D3D12, D3D11, and potentially OpenGL.
This is a talk about the API architecture being designed as well as our implementation of it.
We want to share the experience of leveraging the power of Rust ecosystem and language in order to build this level of abstraction.
We'll show a few demos and spread excitement about the new API.



The talk is related to the FOSDEM 2018 talk about gfx-rs: the old talk mentioned WebGPU as one of the clients for the low-level abstraction.

Back

Progress of Rust and WASM in 2019

Home

Speaker Ilya Baryshnikov
RoomK.3.401
TrackRust
Time15:10 - 15:35
Event linkView original entry

There has been huge progress in Rust tools for WebAssembly in the last year. Let's review some of the most notable changes. The talk is mostly about wasm-bindgen.

Back

Rustifying the Virtual Machine Introspection ecosystem

Home

Speaker Mathieu Tarral
RoomK.3.401
TrackRust
Time15:40 - 16:05
Event linkView original entry

TLDR: Even though VM Introspection has lots of applicable areas,
it has not reached its full potential yet, due to a fragmented ecosystem.



This talk will quickly review the state of the technology, and we
will present our solution: a Rust library aiming to solve the issue mentioned before,
in order to make VMI a commodity for app developers in the future, enabling them to do:






Targeting any VMI compatible hypervisor or emulator.






From stealth malware analysis to OS hardening through fuzzing, virtual machine
introspection is expanding the possibilities offered by our hypervisors,
shifting our view of virtual machines from opaque containers to fully
transparent and instrumentable systems.



Today the VMI ecosystem is made of a multitude of applications, each targeting one
hypervisor or emulator, with their own semantic library. (Examples include
Drakvuf, PANDA, PyREBox, icebox, etc.) If we want to make the most out of VMI
in the future, we need to build the libraries that will unify this ecosystem and
let developers focus on what matters: building quality VMI apps.



This is where libmicrovmi comes into play. It aims to solve this problem by
providing a core foundation library, written in Rust, that is cross-platform,
hypervisor-agnostic and emulator-agnostic, and on top of which higher-level
libraries and apps can build.



Rust makes a lot of sense for VMI for two main reasons:






Therefore Rust is the de facto choice for VMI apps in the future, and we are
building it today, by providing libmicrovmi, a new foundation for VMI.



Libmicrovmi has drivers for:




What is VMI ?



Virtual Machine Introspection is a concept born in a 2003 research paper titled
"A Virtual Machine Introspection Based Architecture for Intrusion Detection".
The idea resides in inspecting and understanding the real-time high-level state
of a virtual machine, based on the hardware layer, for security purposes.



Since then the technology has made its way, from research and academic
developments to being fully integrated and supported into mainstream
hypervisors, like Xen.



What are the use-cases ?



The first adopters of VMI were malware sandbox providers. Since
common malware has a tendency to hide from debuggers, the level of stealth
reached with this technology made it perfectly suited for the job, alongside
the full-system view.



Today VMI has grown to be applied in various domains:






What is the state of the technology today ?



As of today, Xen is the leading hypervisor, having had VMI APIs since 2011. And since
2017, both KVM and VirtualBox have patches available, which have even been reviewed on the
mailing list in KVM's case.



Regarding the libraries available, LibVMI stands out, as it provides a unified,
hypervisor-agnostic, VMI API to applications, and a well-known malware analysis
framework (Drakvuf) is based on it.



However, most VMI applications today do not share the same common set of core
libraries, which makes the ecosystem fragmented and hard to deal with, where a lot
of effort is spent solving the same problems everyone has, isolated in their
own stacks.



Why Rust ?



This is where Rust comes into play. The language itself combines three important features:






Building this core library that will unify the ecosystem is the goal of libmicrovmi.
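The unifying idea can be sketched as a hypervisor-agnostic driver trait with one backend per hypervisor or emulator. All names below are hypothetical illustrations, not libmicrovmi's real API, and the backend here is a mock rather than a Xen/KVM/VirtualBox driver:

```rust
// Hypothetical minimal driver interface that VMI apps could program against.
trait Introspectable {
    fn read_physical(&self, paddr: u64, buf: &mut [u8]) -> Result<(), String>;
    fn pause(&mut self) -> Result<(), String>;
    fn resume(&mut self) -> Result<(), String>;
}

// A mock backend standing in for a real hypervisor driver.
struct MockDriver {
    memory: Vec<u8>, // pretend guest physical memory
}

impl Introspectable for MockDriver {
    fn read_physical(&self, paddr: u64, buf: &mut [u8]) -> Result<(), String> {
        let start = paddr as usize;
        let end = start + buf.len();
        let src = self
            .memory
            .get(start..end)
            .ok_or_else(|| "read out of bounds".to_string())?;
        buf.copy_from_slice(src);
        Ok(())
    }
    fn pause(&mut self) -> Result<(), String> { Ok(()) }
    fn resume(&mut self) -> Result<(), String> { Ok(()) }
}

fn main() {
    // An app pauses the guest, reads memory, and resumes: the same sequence
    // regardless of which backend implements the trait.
    let mut drv = MockDriver { memory: vec![0xAA; 16] };
    drv.pause().unwrap();
    let mut buf = [0u8; 4];
    drv.read_physical(8, &mut buf).unwrap();
    assert_eq!(buf, [0xAA; 4]);
    drv.resume().unwrap();
}
```

Higher-level semantic libraries would then sit on top of such a trait, with the backend chosen at runtime.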



Related work



I have been building a hypervisor-level debugger, based on LibVMI. It can
introspect a Windows guest and debug a specific process, while providing a GDB
stub to be plugged into your favorite GDB frontend (IDA, radare2, etc.).



Expected knowledge



The audience will need a bit of familiarity with virtualization concepts; this
will be enough to understand the idea of introspection.



They can be totally new to Rust, as I once was a few months ago.

Back

zbus: yet another D-Bus library

Home

Speaker Zeeshan Ali
RoomK.3.401
TrackRust
Time16:10 - 16:55
Event linkView original entry

In this talk, I will present zbus, a D-Bus crate written from scratch. D-Bus is an inter-process communication mechanism, available and used on almost all modern Linux desktops and many embedded systems. I will start with why I felt the need to take this huge undertaking on my shoulders, followed by the design goals, the challenges faced and how I overcame them during the development.

Back

Low-end platform profiling with HawkTracer profiler

Home

Speaker Marcin Kolny
RoomK.4.201
TrackDebugging Tools
Time10:05 - 10:30
Event linkView original entry

HawkTracer is a low-overhead, instrumentation-based profiler built at Amazon Video for platforms with limited capabilities. It's written in C but can be used with almost any other language (we've successfully used it with JavaScript, Lua, Python and Rust). It's highly extensible (at compile time) and portable, so it can be run on almost any embedded device. In this talk I'll introduce the architecture of the profiler, present its advantages and limitations, show how you can instrument code, and demonstrate the profiler in action by running it with an example cross-language (Rust, C++ and Python) project.

Back

GDB pipelines -- convenience iteration over inferior data structures

Home

Speaker Matthew Malcomson
RoomK.4.201
TrackDebugging Tools
Time10:35 - 11:05
Event linkView original entry

We introduce a GDB plugin for working with large data structures in the inferior.



This plugin brings some of the flexibility of Unix pipelines to the GDB command prompt, providing the ability to conveniently run some action on every element in a data structure that matches certain criteria.



One big aim of this plugin is to make it easy and convenient for a user to write their own sub-commands to iterate over the data structures used in their own program.



This is intended for anyone who has found difficulty inspecting large data structures from inside GDB.

MDB -- the debugger on Solaris -- has a feature called "walkers" that is used to great effect when inspecting the contents of large data structures in the Solaris Kernel.



We introduce a GDB plugin to provide the same type of functionality.



Similar to Unix pipelines, one can now flexibly write a surprisingly powerful command by combining several "walkers".



Some examples are:
- Search an inferior data structure for nodes that are malformed.



 gdb-pipe <mywalker> <startnode> | if ! <some verification test>





This plugin has a strong aim to make it easy for users to write "walkers" over their own data structures, and already has "walkers" for the open source projects "neovim" and "GCC".



We would like to discuss possible future directions for this plugin with regards to speed improvements to work on extremely large data structures, and how there could be a tie-in with pretty-printers.

Back

The GDB Text User Interface

Home

Speaker Tom Tromey
RoomK.4.201
TrackDebugging Tools
Time11:10 - 11:40
Event linkView original entry

GDB has had a curses-based interface for many years. Come see what new features are available and how it can improve your debugging experience.

This talk will cover GDB's text user interface (the "TUI"). In particular we'll discuss the benefits of the rewrite, the new features that are available, and how you can easily extend it yourself. A fun demo will be included.

Back

Memcheck Reloaded

Home

Speaker Julian Seward
RoomK.4.201
TrackDebugging Tools
Time11:45 - 12:25
Event linkView original entry

Valgrind's Memcheck tool reports various kinds of errors. Among the most important are those where an if-condition or a memory address uses undefined data. Detecting that reliably on optimized code is challenging, and recent compiler development has made the problem worse.

Two years ago, at FOSDEM 2018, I gave a talk describing the techniques Memcheck uses to achieve a very low false positive rate. But by 2018 both GCC and Clang were routinely emitting code with branches on uninitialised data. Surprisingly, there are situations where such code is correct. Unfortunately Memcheck assumes that every conditional branch is important, and so emits many complaints when this happens.



The worst thing was, this problem couldn't be solved using the bag of tricks we'd accumulated over Memcheck's decade-plus lifetime. Our options didn't look good. But in early 2019 it became clear how to fix this: enhance Valgrind's trace generation machinery to analyse more than one basic block at a time, and use that to recover the source-level &&-expressions, which can then be instrumented precisely. This talk tells the story.



The implementation (appears to!) work. If all goes well, it will ship in the upcoming 3.16 release.

Back

Postmodern strace

Home

Speaker Dmitry Levin
RoomK.4.201
TrackDebugging Tools
Time12:30 - 13:00
Event linkView original entry

strace is a diagnostic, debugging and instructional utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state. In this talk the maintainer of strace will describe new features implemented since FOSDEM 2018.

Several interesting features were implemented within strace project since FOSDEM 2018, including:






In this talk the maintainer of strace will describe these new features and demonstrate what kinds of problems they help to solve.

Back

strace: fight for performance

Home

Speaker Eugene Syromyatnikov
RoomK.4.201
TrackDebugging Tools
Time13:05 - 13:35
Event linkView original entry

The talk gives an overview of various optimisations implemented in strace over the past several years. While most of them are quite trivial (like caching of frequently-used data or avoiding syscalls whenever possible), some of them are a bit more tricky (like usage of seccomp BPF programs for avoiding excessive ptrace stops) and/or target more specific use cases (like the infamous thread queueing patch[1], which was carried as a RHEL downstream patch for almost 10 years).



[1] https://gitlab.com/strace/strace/commit/e0f0071b36215de8a592bf41ec007a794b550d45

Back

strace --seccomp-bpf: a look under the hood

Home

Speaker Paul Chaignon
RoomK.4.201
TrackDebugging Tools
Time13:40 - 14:10
Event linkView original entry

strace is known to add significant overhead to any application it traces.
Even when users are interested in a handful of syscalls, strace will by
default intercept all syscalls made by the observed processes, involving
several context switches per syscall. Since strace v5.3, the
--seccomp-bpf option allows reducing this overhead, by stopping observed
processes only at syscalls of interest. This option relies on seccomp-bpf
and inherits a few of its limitations.



In this talk, we will describe the default behavior of ptrace and strace,
to understand the problem --seccomp-bpf addresses. We will then detail
the inner workings of the new option, as seen from ptrace (seccomp-stops)
and bpf (syscall matching algorithms). Finally, we'll discuss limitations
of the new option and avenues for improvement.




Part of this talk is covered in the following blog post:
https://pchaigno.github.io/strace/2019/10/02/introducing-strace-seccomp-bpf.html.

Back

Tools and mechanisms to debug BPF programs

Home

Speaker Quentin Monnet
RoomK.4.201
TrackDebugging Tools
Time14:15 - 14:55
Event linkView original entry

By allowing programs to be safely loaded from user space and executed in the kernel, eBPF (extended Berkeley Packet Filter) has brought new possibilities to the Linux kernel, in particular in terms of tracing and network processing.



But when a program fails to load, or when it does not return the expected values, what tools do we have to examine, inspect and debug eBPF objects? This talk focuses on the different tools and mechanisms available to help eBPF developers debug their programs, at the different stages of the workflow. From bpftool to test runs, let's find the best way to track bugs!

Back

Support for mini-debuginfo in LLDB

Home

Speaker Konrad Kleine
RoomK.4.201
TrackDebugging Tools
Time15:00 - 15:15
Event linkView original entry

The "official" mini-debuginfo man-page describes the topic best:



Some systems ship pre-built executables and libraries that have a
special .gnu_debugdata section. This feature is called MiniDebugInfo.
This section holds an LZMA-compressed object and is used to supply extra
symbols for backtraces.



The intent of this section is to provide extra minimal debugging information
for use in simple backtraces. It is not intended to be a replacement for
full separate debugging information (see Separate Debug Files).



In this talk I'll explain what it took to implement support for mini-debuginfo
in LLDB, how we've tested it, and what to think about when implementing this
support (e.g. merging .symtab and .gnu_debugdata sections).

Back

The elfutils debuginfod server

Home

Speaker Mark Wielaard
Frank Ch. Eigler
RoomK.4.201
TrackDebugging Tools
Time15:20 - 16:00
Event linkView original entry

Debugging data is a necessary evil. It is necessary for running debuggers in situ, some tracing tools, or for coredump analysis. It is evil because it is big - potentially many times the size of the binaries. Therefore, software distributions have conflicting needs to generate & keep this data but not burden everyone with its storage.



We will review some degrees of freedom for debugging data distribution, across compiled languages and OS distributions, identifying some of the best practices. We will identify the remaining shortcomings that necessitate exploring yet another way of making debuginfo data available.



We will present the elfutils debuginfod server, where a web service offers a lightweight, build-id-indexed lookup of debuginfo-related data on demand. This service is designed to run on a nearby host, private or shared within teams, or even run by OS distributions. Clients built into various debugging tools will be demonstrated.

Back

Debugging apps running in Kubernetes

Home

Speaker Jeff Knurek
RoomK.4.201
TrackDebugging Tools
Time16:05 - 16:35
Event linkView original entry

New tools are coming out to make it possible to add breakpoints and debug running code in a Kubernetes Pod. This talk will present an overview of some of these tools. We'll cover tools that make it easy to update the code that's running in a Pod (Squash/Ksync/Telepresence). And we'll also cover how to connect your IDE to the code and set breakpoints.


Back

BSP generator for 3000+ ARM microcontrollers

Home

Speaker Fabien Chouteau
RoomK.4.401
TrackHardware Enablement
Time09:00 - 09:25
Event linkView original entry

For embedded developers using alternative programming languages, but also for anyone using third-party driver frameworks such as libopencm3, one of the main pain points in starting to use a microcontroller is making a Board Support Package.



Things like linker scripts or startup code (crt0) not only require skill, but also information that is not always easily accessible.



In this talk we will present a tool that generates linker scripts, startup code, and low-level hardware bindings for 3000+ ARM microcontrollers, using information extracted from archives provided as part of the ARM Cortex Microcontroller Software Interface Standard (CMSIS).
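For context, the kind of artifact such a generator emits looks roughly like this minimal GNU ld memory layout (illustrative addresses and sizes, not taken from any specific device or from the tool itself):

```ld
/* Hypothetical minimal linker script sketch for a small Cortex-M part. */
MEMORY
{
  flash (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
  ram   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}

SECTIONS
{
  .text : { *(.vectors) *(.text*) } > flash        /* vector table + code */
  .data : { *(.data*) } > ram AT > flash           /* initialized data, copied by crt0 */
  .bss  : { *(.bss*) } > ram                       /* zero-initialized by crt0 */
}
```

The generator's job is to fill in the device-specific origins, lengths, and peripheral bindings from the CMSIS data so developers never write this by hand.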

Back

On-hardware debugging of IP cores with free tools

Home

Speaker Anton Kuzmin
RoomK.4.401
TrackHardware Enablement
Time09:30 - 09:55
Event linkView original entry

An approach to the challenges of on-FPGA debugging of IP cores based on
free software tools is demonstrated. Various aspects and related problems
of on-hardware debugging are presented, along with the tools to
address them, such as OpenOCD, sigrok/PulseView, GHDL, etc. A real-life
working configuration and the missing bits of software are accompanied by
a live debug session demo running on open-source hardware.

Debugging hardware blocks on an FPGA is always challenging and can
be frustrating, especially with no reliable tools at hand. Way too
often the process turns into developing and debugging the tools
instead of the target design.



Commercial solutions are available (SignalTap, ChipScope, Synopsys
Identify RTL Debugger, MicroSemi Smart Debug), at the same time there
are a lot of well known problems associated with them: vendor lock,
single target, closed source and not always flexible enough, license
terms and costs.



Owing to free software developers, essential tools for
on-hardware debugging of IP cores are available today. However, there
are problems associated with these tools too. Among the most notable
ones are weak integration between separate tools, and small bits of
code and config files missing here and there. A working combination of
tools, along with explanations of how they may be used together to
debug IP cores, is provided. The presentation covers free
software such as GHDL, sigrok/PulseView, and OpenOCD. The source code of free
IP cores, all configuration and script files, and the presentation slides
will be available in a dedicated repository on GitHub.



A live demonstration will be presented of PulseView connected to an in-FPGA
logic analyzer via a JTAG interface, working in parallel with a GDB
debug session on a RISC-V soft-core CPU in the same FPGA, on
open and low-cost hardware.



An outline of the open tasks and possible future development
directions concludes the presentation.

Back

Continuous Integration for Open Hardware Projects

Home

Speaker Mario Behling
RoomK.4.401
TrackHardware Enablement
Time10:00 - 10:25
Event linkView original entry

While it is standard to build every single code commit using CI systems and deploy new code automatically, we are only at the beginning of automation for designing hardware. In this talk I will share our experience with continuous integration tools in FOSSASIA hardware projects, and specifically our Pocket Science Lab. I will outline opportunities and challenges for implementing CI processes for hardware.




With PSLab, apart from the firmware, we have connected CI processes to our hardware repository. This means each design change can be automatically built into a digital prototype. Electronics materials are largely standardized, and with KiCad we are even able to create package lists and Gerber files automatically. Furthermore, we deploy to Kitspace using a YAML file. There, any user can order all components and the board through a one-click process. Every version could easily be built here.

Back

Open Source Firmware Testing at Facebook

Home

Speaker Andrea Barberio
Marco Guerri
RoomK.4.401
TrackHardware Enablement
Time10:30 - 10:55
Event linkView original entry

We have talked extensively about LinuxBoot, a Linux-based environment intended to be integrated into the firmware on the boot ROM. This time we want to talk about how we test LinuxBoot before it goes to production. We will talk about ConTest, an open-source continuous and on-demand system testing framework that we designed to be modular, validating, and infrastructure-agnostic, and how it is helping us validate open source firmware on our datacenter platforms.

With LinuxBoot we became vendors of our own system firmware. In order to go to production we need a reliable quality assurance process, and firmware testing was a necessity. In this talk we are presenting ConTest (short for Continuous Testing), a modular framework aimed at automating system testing workflows, like firmware validation and provisioning. ConTest has several goals in mind: being open source and community-driven; validate as much as possible at compile time and at job submission time, to minimize unnecessary operations and run-time failures; being lightweight and infrastructure-agnostic, so it can run in Facebook’s datacenters as well as on a Raspberry Pi; being composable, thanks to an interface-and-plugins architecture; being user-oriented so that it’s not necessary to know the internals to use it effectively; and being metrics and events driven, so that users can gain valuable insights about their jobs, more than just success rate (e.g. micro-benchmarking and trend analysis).



ConTest is aimed at anyone who needs to automate system-level testing. Various plugins are provided out of the box, with examples of how to use them. Users can combine them like building blocks, using a simple job description format based on JSON, into test scenarios of variable complexity. When the default plugins are not enough, for example to talk to a custom service, users can develop new plugins and plug them in as if they were part of the core framework. Open-sourcing your own plugins is always appreciated!
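To make the building-block idea concrete, here is a sketch of what such a JSON job description and its submission-time validation could look like. The schema, plugin names and field names below are invented for illustration; they are not ConTest's actual format.

```python
import json

# Hypothetical job description in the spirit of a JSON-based test framework;
# the actual ConTest schema and plugin names may differ.
job = {
    "JobName": "firmware-boot-test",
    "Runs": 3,
    "TestDescriptors": [
        {
            "TargetManagerName": "TargetList",  # plugin choosing machines to test
            "Steps": [
                {"name": "sshcmd", "parameters": {"command": ["flashrom", "-w", "fw.bin"]}},
                {"name": "sshcmd", "parameters": {"command": ["reboot"]}},
            ],
        }
    ],
}

# Validating the description at job submission time, before any hardware is
# touched, is one way to catch errors early and avoid run-time failures.
def validate(desc):
    assert desc.get("JobName"), "job needs a name"
    assert desc.get("Runs", 0) >= 1, "job must run at least once"
    for td in desc["TestDescriptors"]:
        assert td["Steps"], "each descriptor needs at least one step"
    return True

print(validate(job))          # the description is well-formed
print(len(json.dumps(job)))   # it serializes to plain JSON for submission
```

The point of the sketch is the workflow, not the schema: steps are opaque plugin invocations, so swapping `sshcmd` for a custom plugin leaves the job format unchanged.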

Back

AMENDMENT How to run Linux on RISC-V

Home

Speaker Drew Fustini
Room K.4.401
Track Hardware Enablement
Time 11:00 - 11:55
Event link View original entry

Want to run Linux with RISC-V on Open Source Hardware? This talk will explore the current options including how open source FPGA tools can be leveraged to build open Linux-capable systems.

I will introduce the open RISC-V instruction set architecture (ISA) and explain how it is enabling a new generation of open source chip design. I will also discuss the importance of free software FPGA tools like yosys for synthesis and nextpnr for place and route, and how SymbiFlow is leveraging bitstream documentation from Project IceStorm (iCE40), Project Trellis (ECP5), and Project X-Ray (Xilinx).



I will explain how others and I teamed up at Hackaday Supercon to get Linux running on a RISC-V core in the ECP5 FPGA badge. I will explain what LiteX is and how it enabled us to quickly build a System-on-Chip (SoC) capable of running Linux.



In conclusion, I will explore the landscape of open source chip design projects and the Linux-capable RISC-V silicon chips on the horizon for 2020.



This talk is a replacement for Jasper Nuyens' cancelled "FreedomEV 2.0" talk, as Jasper decided to not attend FOSDEM20.

Back

A free toolchain for 0.01 € - computers

Home

Speaker Philipp Klaus Krause
Room K.4.401
Track Hardware Enablement
Time 12:00 - 12:25
Event link View original entry

The Taiwanese company Padauk makes small 8-bit microcontrollers, the smallest of which are available at 0.01 € even in small quantities. Even the larger ones cost just a few cents; a particularly interesting feature is the hardware multithreading support available in larger devices.
Until recently, the only available toolchain was Padauk's non-free toolchain based around their "MINI-C" IDE (which, despite the name, is just a bit of C-like syntactic sugar coating for assembler, and in no way a C implementation).



In 2019, an effort to provide a free alternative resulted in a full free toolchain. Documentation beyond that provided by Padauk was created by reverse-engineering. A free design for a programmer along with firmware was created. Assembler, simulator and a Small Device C Compiler (SDCC) backend were written.

The Padauk microcontrollers (µC) are small 8-bit Systems-on-a-Chip (SoC). Their program memory (PROM or Flash) ranges from 0.5 KW to 4 KW with word sizes from 13 to 16 bits. Their data memory ranges from 60 B to 256 B with 8-bit bytes. The µC have only a few peripherals; however, some of the larger devices are barrel processors with support for up to 8 hardware threads. This allows the emulation of even timing-critical peripherals in software. Padauk also supplies a programmer and a non-free "MINI-C" IDE. There is a lack of documentation when it comes to aspects not needed by users of MINI-C. In particular, there is no documentation of opcodes and very little information on the programming protocol.
There are 4 subarchitectures, which we name by the word size of the program memory: pdk13, pdk14, pdk15 and pdk16. There is some variation in the form of optional instructions within the subarchitectures.
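For orientation, the subarchitecture naming and the memory figures quoted above can be tabulated; the byte counts are straightforward arithmetic from the stated word sizes, not additional data from Padauk.

```python
# Subarchitectures are named after the program-memory word size (per the text).
subarch_word_bits = {"pdk13": 13, "pdk14": 14, "pdk15": 15, "pdk16": 16}

def prom_bits(kilowords, word_bits):
    """Total program-memory size in bits for a part with the given
    program memory in kilowords (1 KW = 1024 words)."""
    return int(kilowords * 1024 * word_bits)

# A large 4 KW pdk16 part holds 65536 bits (64 Kbit) of program memory,
# while the smallest 0.5 KW pdk13 parts hold only 6656 bits.
print(prom_bits(4, subarch_word_bits["pdk16"]))
print(prom_bits(0.5, subarch_word_bits["pdk13"]))
```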



In 2019, a full free toolchain for these µC was created.
The necessary documentation was reverse-engineered. A free programmer design and firmware were created. For the pdk13, pdk14 and pdk15, we wrote free assemblers and simulators. We also wrote an SDCC backend for these. SDCC is a free C compiler that emphasizes standards compliance and generating efficient code for small devices; while not up to the level of GCC and LLVM, it tends to hold up well against many non-free compilers targeting small devices (see e.g. the FOSDEM 2018 talk "The free toolchain for the STM8"). While stack handling on the Padauk µC is much better than on e.g. small Microchip PIC devices, it is still not efficient (in particular, there is no stack-pointer-relative addressing mode). Thus SDCC does not place local variables on the stack by default, which makes functions non-reentrant and is not standard-compliant, but is a common choice in such cases (see e.g. the mcs51 backend in SDCC, or Keil for MCS-51). However, this approach does not work well for devices with hardware multithreading (i.e. a few pdk14 and all pdk16 devices).

Back

Status of AMD platforms in coreboot

Home

Speaker Michał Żygowski
Room K.4.401
Track Hardware Enablement
Time 12:30 - 12:55
Event link View original entry

The presentation is about AMD's involvement in coreboot evolution and development. It gives a high-level overview of the silicon vendor's engagement in the history of the coreboot project. The presentation may contain a few technical aspects of firmware and BIOS; however, the intended audience is not only firmware and BIOS developers but also free and libre hardware enthusiasts. If you are interested in the future of famous platforms like the Asus KGPE-D16, Lenovo G505S, or PC Engines apu1/apu2, please attend the presentation.

The history of AMD cooperation in the coreboot project reaches back to 2007, when
the first contribution appeared for the Geode LX processors. AMD's open-source
support continued for many years (with some breaks). This presentation will
briefly introduce the history of AMD and coreboot, the evolution of the code and
processors, the creation of CIMX and AGESA, and so on. It will also show the
gradual change in AMD's attitude to open source and the introduction of binary
platform initialization. Binary blobs, strongly disliked by the open-source
community, started to cause problems and raised the need for workarounds to
support basic processor features. Soon after that, AMD stopped supporting the
coreboot community. Moreover, recent coreboot releases started to enforce
certain requirements on the features supported by the silicon code base.
Interest in aging platforms kept fading, and many of them (including fully open
ones) are being dropped from the main tree. Recently AMD released its newest
AGESA with the cooperation of hired coreboot developers, but only for Google and
its Chromebooks based on Ryzen processors. These are hard times for open
firmware on AMD platforms. If you are curious about the present status of AMD
boards and hardware in coreboot (for example the famous Asus KGPE-D16, Lenovo
G505S, and PC Engines apu1/apu2) and what future awaits them, this presentation
will give you a good overview.

Back

Open Source Hardware for Industrial use

Home

Speaker Tsvetan Usunov
Room K.4.401
Track Hardware Enablement
Time 13:00 - 13:55
Event link View original entry

Olimex has been designing Open Source Hardware Linux computers since 2012.
They have been adopted by hundreds of manufacturers all around the world and prove that the Open Source business model is sustainable.
The lecture is about the advantages which OSHW brings to industrial vendors and what drives their decision to use our boards.
We will explain the benefits for SoC vendors of having OSHW designs with their ICs, the end-user benefits, and how OSHW helps us to improve our products and make them better and better.

Back

FOSDEM Video Box

Home

Speaker Mark Van den Borre
Luc Verhaegen
Gerry
Room K.4.401
Track Hardware Enablement
Time 14:00 - 14:55
Event link View original entry

A bespoke OSHW HDMI video capture solution is being developed for use at FOSDEM and other open source conferences. This talk will explain the what, why, how and hopefully when.

FOSDEM is unique in that it has 750+ talks over two days, in more than 28 parallel tracks. All are captured and streamed out live, and a sanitised version of each talk is re-encoded for separate viewing after the event. For each track there are (at least) two capture boxes: one for the speaker's laptop (which also feeds the projector), and at least one for a camera. This means FOSDEM has close to 60 video capture boxes deployed.



The FOSDEM video boxes are also used at several other conferences, and we are in active contact with several more, as each conference basically has the same problem to solve, albeit at a smaller scale than FOSDEM.



Today, our capture boxes are an amalgamation of many different devices tied together, and while the whole is working surprisingly well, it is far from ideal. Some bits are non-free, some bits are hard or even impossible to control, and the result is bulky and, relatively speaking, expensive. Sourcing the exact same components again to expand the current array of boxes is nigh impossible, and the bulk of these boxes means that deployment is more of a hassle than it could be.



To solve almost all of our issues, we are creating a bespoke HDMI capture solution by tying a HDMI-to-parallel-RGB decoder chip to the camera input of the venerable Allwinner A20 SoC. The Allwinner A20 was chosen due to its feature set, large community (linux-sunxi), its advanced upstream support and the availability of OSHW board designs. We were lucky to find a suitable board in the Olimex Lime2 (which is OSHW), which exposes all the necessary IO pins on pinheaders.



Capturing 1280x720@50Hz, encoding it to h.264 for local storage and streaming over the network, while displaying the captured frames directly to HDMI or VGA (projector) and a status LCD, while also capturing (and passing through) audio, is a challenge on this cheap, low-power but open hardware. It forces us to make use of a wide array of SoC HW blocks, not all of which previously had driver support, and we are close to saturating the available memory and bus bandwidth.
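A back-of-envelope calculation shows why the bandwidth is tight. The pixel format here is an assumption (16 bits per pixel, as for YUV 4:2:2 capture); the talk does not specify it.

```python
# Rough bandwidth estimate for the capture pipeline described above.
width, height, fps = 1280, 720, 50
bytes_per_pixel = 2  # assumption: YUV 4:2:2, i.e. 16 bits/pixel

capture_rate = width * height * bytes_per_pixel * fps  # bytes per second
mb_per_s = capture_rate / 1e6

# Each frame is written to DRAM once by the capture engine and read back at
# least once by the encoder and once by the display engine, so the effective
# memory traffic is a multiple of this raw figure.
print(f"raw capture: {mb_per_s:.0f} MB/s")
```

Roughly 92 MB/s of raw capture, multiplied by the several producers and consumers of each frame, is a substantial share of what a single-channel DDR3 SoC like the A20 can sustain in practice.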



So while a good part of this talk is about describing the bigger problem we are trying to solve, this project very much is an issue of hardware enablement.

Back

Using OSHW and OSS for building your custom hardware platform

Home

Speaker Priit Laes
Room K.4.401
Track Hardware Enablement
Time 15:00 - 15:55
Event link View original entry

This talk describes a journey (and a bunch of bragging stories) of designing and implementing an extendable hardware platform utilizing OSHW and OSS.

We talk about:




Back

Extending the lifetime of smartphones with Replicant, a fully free Android distribution

Home

Speaker Denis Carikli (GNUtoo)
Room K.4.401
Track Hardware Enablement
Time 16:00 - 16:55
Event link View original entry

After a very quick introduction on Replicant and the smartphones ecosystem, we will look at what affects smartphones' and tablets' lifetime and how to increase it by making Replicant more sustainable.

Back

seL4 Microkernel Status Update

Home

Speaker Gernot Heiser
Room K.4.601
Track Microkernels and Component-based OS
Time 09:00 - 09:50
Event link View original entry

I will give an overview of where seL4 stands today in terms of functionality, verification, ecosystem, deployment and community. The focus will be on what has happened in seL4 land over the past 12 months, which is a lot: seL4 Foundation, RISC-V support and introducing time protection.

The biggest news of the year is that we are in the process of setting up the seL4 Foundation, as an open, transparent and neutral organisation tasked with growing the seL4 ecosystem. It will bring together developers of the seL4 kernel, developers of seL4-based components and frameworks, and those deploying seL4-based systems. Its focus will be on coordinating, directing and standardising development of the seL4 ecosystem in order to reduce barriers to adoption, raising funds for accelerating development, and ensuring clarity of verification claims. I will report on the state of this.



The other big development is that we are closing in on completing verified seL4 on the open RISC-V architecture. This includes the functional correctness proof (which guarantees that the kernel is free of implementation bugs), the binary correctness proof (which guarantees that the compiler did not introduce bugs) and the transition to the new mixed-criticality scheduling model, which supports the safe co-location of critical real-time software with untrusted components, even if the latter can preempt the former.



Finally, on the research side we have introduced the new concept of time protection (the temporal equivalent of the established memory protection) that allows us to systematically prevent information leakage through timing channels.

Back

M³: Taking Microkernels to the Next Level

Home

Speaker Nils Asmussen
Room K.4.601
Track Microkernels and Component-based OS
Time 10:00 - 10:35
Event link View original entry

Current microkernels have been shown to provide advantages in terms of security, robustness, and flexibility of systems. However, in recent years the hardware has added new challenges that need to be addressed as well, demanding approaches that include the hardware in the picture. First, hardware is getting more and more heterogeneous: it consists not only of general-purpose cores but also contains various accelerators. Second, system designers need to integrate untrusted third-party components (e.g., accelerators or modems) to meet today's performance, energy, and development-time demands. And third, security vulnerabilities such as Meltdown, Spectre, and Fallout have shown that today's complex general-purpose cores should no longer be trusted to properly enforce isolation boundaries between different software components.



In my talk, I will present a new system architecture that takes existing microkernel ideas to the "next level" to address the mentioned challenges. We use a hardware/operating system co-design consisting of a small and simple hardware component, called trusted communication unit (TCU), that we add next to each processing element (core, accelerator, modem, etc.) and an operating system, called M³, that takes advantage of it. The TCU provides a uniform interface for all processing elements, simplifying the management and usage of heterogeneous processing elements, and enables secure communication between arbitrary processing elements. M³ is designed as a microkernel-based system and runs its components on different processing elements with TCU-based communication channels between them. To account for the security vulnerabilities in today's cores, M³ places components onto different and physically isolated processing elements by default, but allows sharing of processing elements as a fallback.

Back

HelenOS in the Year of the Pig

Home

Speaker Jakub Jermář
Room K.4.601
Track Microkernels and Component-based OS
Time 10:40 - 10:55
Event link View original entry

This is going to be an all-encompassing update talk for HelenOS developments that happened in the Year of the Pig (since the last FOSDEM).

Back

Linux Kernel Library

Home

Speaker Hajime Tazaki
Room K.4.601
Track Microkernels and Component-based OS
Time 11:00 - 11:25
Event link View original entry

LKL (Linux Kernel Library) aims to allow reusing the Linux kernel code as
extensively as possible with minimal effort and reduced maintenance
overhead. It allows us to link the kernel as a library into any program
that wishes to call into Linux kernel code via ordinary function calls.
There are many use cases: reading/writing files without going through
system calls, deploying experimental protocol implementations without
either updating the host kernel or installing kernel modules, using a
customized kernel in a container instance, building a unikernel based on
the existing rumprun framework, testing/fuzzing kernel implementations in
userspace, etc.



In this talk, we are going to present




Back

Phantom OS

Home

Speaker Dmitry Zavalishin
Room K.4.601
Track Microkernels and Component-based OS
Time 11:30 - 11:55
Event link View original entry

Phantom OS is an operating system based on orthogonal persistence. Applications do not notice OS shutdown and restart, even an abrupt restart. It is guaranteed that applications will be restarted in a consistent state.


Back

Gneiss: A Nice Component Framework in SPARK

Home

Speaker Johannes Kliemann
Room K.4.601
Track Microkernels and Component-based OS
Time 12:00 - 12:25
Event link View original entry

Gneiss is an abstraction layer for component-based environments that aims to
provide a foundation for formally provable components. It enables the creation
of platform-independent, asynchronous components in SPARK and provides
function contracts that allow proving correct interaction with the
underlying platform.

Back

A Component-based Environment for Android Apps

Home

Speaker Alexander Senier
Room K.4.601
Track Microkernels and Component-based OS
Time 12:30 - 12:55
Event link View original entry

With 2.5 billion active users, Android is the most widely deployed mobile operating system in the world. Its vast complexity, paired with a monolithic architecture, regularly results in severe security issues like the infamous Stagefright bug. In this presentation we talk about an ongoing research project which aims at running Android applications on top of the component-based Genode OS framework and securing them using formally verified components. We discuss how Android applications interact, how well this matches the semantics of Genode, and what it takes to support unmodified Android apps.

Back

Demonstration of the Sculpt Operating System

Home

Speaker Norman Feske
Room K.4.601
Track Microkernels and Component-based OS
Time 13:00 - 13:50
Event link View original entry

Sculpt OS is a novel general-purpose operating system designed from the ground up and implemented using the building blocks of the Genode OS framework. It started with the vision of a truly trustworthy OS that combines a completely new system structure with microkernels, capability-based security, sandboxed device drivers, and virtual machines. The talk is a live demonstration of the current incarnation of Sculpt.

The Genode OS framework is an operating-system technology created from scratch. Over the past decade, it steadily evolved from a fairly obscure research prototype to a practical day-to-day operating system.



Being a component-based system designed after the principle of least privilege from the very beginning, it breaks with many concepts that we take for granted in traditional operating systems, e.g., the central role of files. Instead, Genode introduces a novel way of composing system scenarios out of building blocks where the building blocks are able to cooperate without ultimately trusting each other. Those building blocks include not only applications but also all classical OS functionalities including kernels, device drivers, file systems, and protocol stacks.



In 2018 - after more than 10 years of developing Genode in a shadowy corner of the open-source community - the project created Sculpt OS, which is a Genode-based general-purpose OS for commodity PC hardware. Since it is not derived from any existing OS, Sculpt re-approaches established concepts like the installation, configuration, and spawning of programs from a new angle. This is reflected by its custom user interface.



Besides presenting the motivation and the fundamental ideas behind Genode, the talk will introduce and demonstrate the current state of Sculpt OS, draw connections to related open-source projects, and give a glimpse on the project's future plans.

Back

A Brief Survey through Genode's ARMv8 Playground

Home

Speaker Stefan Kalkowski
Room K.4.601
Track Microkernels and Component-based OS
Time 14:00 - 14:25
Event link View original entry

The Genode project has evolved over the past decade into a versatile toolkit for general-purpose computing. Even though support for ARM embedded devices has been an inherent part of it since the very beginning, the focus of the past years was more x86-centered, with Sculpt, the desktop incarnation of Genode, being the prime example. Recently, Genode's ARMv8 port ignited extensive development efforts to support more sophisticated workloads on top of modern embedded and mobile devices.



The talk provides an overview of the current ARMv8 Genode landscape, its ambitions and potential. It will include a live demonstration of recent achievements, from device support up to hardware-assisted virtualization, on top of the NXP i.MX8 SoC.

Back

NOVA Microhypervisor on ARMv8-A

Home

Speaker Udo Steinberg
Room K.4.601
Track Microkernels and Component-based OS
Time 14:30 - 15:20
Event link View original entry

NOVA is a modern open-source microhypervisor that can host unmodified guest operating systems next to critical host applications. Although originally developed for the x86 virtualization extensions of Intel and AMD, the internals of the microhypervisor and its external API were designed with flexibility in mind, such that the code could also be ported to other architectures.



In this talk we present the first ever version of NOVA on ARMv8-A. We will show how the NOVA abstractions map onto the ARM architecture, how modern virtualization features such as GIC and SMMU are being used, discuss the ongoing evolution of the NOVA API and how the ARM port differs from the earlier x86 version.



The talk will conclude with a short demo, an outlook into the NOVA roadmap and the formal verification efforts around the code base, as well as opportunities for collaboration with the NOVA community.

Back

The HIPPEROS RTOS

Home

Speaker Antonio Paolillo
Room K.4.601
Track Microkernels and Component-based OS
Time 15:30 - 15:55
Event link View original entry

HIPPEROS is an upcoming open source RTOS that was developed at ULB and by a former spin-off company of ULB.
The talk will be a presentation followed by an open discussion about the main architecture principles of the HIPPEROS kernel and OS, what platforms and architectures we support and our agenda regarding open source.

This multitasking RTOS is specifically designed to take advantage of multi-core platforms for critical, hard real-time applications that must be predictable. It targets high-end embedded platforms that exhibit heterogeneous parallelism. The HIPPEROS kernel is designed from scratch in order to support recent process models present in the real-time systems research literature.
The RTOS is based on a micro-kernel with an asymmetric master/slave architecture. This natively supports parallel processor architectures by dedicating one core to the heavy operations of the kernel (scheduling, memory management, etc.), while the other cores execute user-mode applications and serve real-time tasks with very little interference.



The OS kernel was designed and implemented by a team made of people from the ULB PARTS laboratory (http://parts.ulb.ac.be/).
The goal was to create a spin-off company around the topic of Real-Time Operating Systems, including the creation of a new micro-kernel for high-end embedded systems with an innovative software architecture, backed by research (both theoretical and applied), and designed, developed and maintained with "good" (and agile) software design methodology.
Within this business, a side objective was to maintain strong links with universities and the research world by validating the OS design in an academic environment and through continuous research activities.



Currently, the company is in liquidation. The state of the project is frozen, but we have started an initiative aiming to open source the code base. This would allow external contributions and therefore continued maintenance and development of new features.
HIPPEROS could then become a test ground for academics and industrial users who want to try new ideas regarding the reliability and efficiency of their systems. Without such an undertaking, the code base might just disappear, cancelling out the seven-year team effort.

Back

Unikraft: A Unikernel Toolkit

Home

Speaker Simon Kuenzer
Room K.4.601
Track Microkernels and Component-based OS
Time 16:00 - 16:25
Event link View original entry

Although unikernels - images containing specialized OS primitives and libraries targeting a specific application - have shown impressive performance potential (e.g., fast I/O of 40 Gbps, fast instantiation in the millisecond range, minimal memory footprints of only KBs and a minimal trusted computing base), creating them has proven to be a complicated and time-consuming process. This is mostly because operating system components have to be individually specialized and developed for each target application and target platform.



In this talk we give an update on the Unikraft open source project. Unikraft is a toolkit for creating specialized unikernels, and it aims to remove the need for time-consuming, expert work. In the past two years, the community has put a lot of effort into supporting OS functionality, drivers, and platforms, porting libraries, and providing tools to ease porting of existing applications. We will give an overview of all the exciting achievements and conclude with an outlook on recent project directions: binary compatibility (Linux ABI), support for a wide range of compiled and interpreted languages (e.g., WebAssembly, Go, Python, Ruby, etc.), enhanced safety features, and the ability to seamlessly produce images ready to run as extremely lean VMs, containers, or directly on bare metal. The aim is that Unikraft will represent a step forward towards wider adoption of unikernels beyond the research community.

We have spent quite a bit of our time over the last years developing unikernels - highly specialized kernels targeting specific applications. We were originally interested in them for virtualized network functions because of their fantastic performance benefits: tiny memory footprints, boot times comparable to those of processes, and fast I/O performance, to name a few.



Despite the fact that this work and work from several others is proof of their potential, unikernels have yet to see massive adoption. One of the main showstoppers is development time: for instance, developing Minipython, a MicroPython unikernel, took the better part of 3 months to put together and test. ClickOS, a unikernel for NFV, was the result of a couple of years of work. What's particularly bad about this development model, besides the considerable time spent, is that each unikernel was basically a "throwaway": every time we wanted to create a new unikernel targeting a different application and a different platform, we would start more or less from scratch. This comes from the fact that each application has different OS dependencies and benefits from different optimizations and specializations of these layers.



Two years ago, we started Unikraft as an open source incubator project under the umbrella of the Xen Project and the Linux Foundation. Our goal is to build a common pool of decomposed OS functionalities, called libraries, where various unikernel projects can share implementations and optimizations with each other. The project provides unikernel builders with tools that help them select the needed libraries and configurations. Unikraft's build system quickly and automatically creates images tailored to the needs of their specific applications. Users can choose multiple target platforms (e.g., extremely lean VMs, containers, or directly bare metal) without having to do additional work for each of them.
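The library-selection step described above can be pictured as transitive dependency resolution over the pool of decomposed OS libraries. The library names and dependency edges below are invented for illustration; they are not Unikraft's actual library set.

```python
# Toy dependency resolver in the spirit of a unikernel build system:
# start from what the application needs and pull in libraries transitively.
# Library names and edges are hypothetical.
deps = {
    "app-web": ["lib-libc", "lib-netstack"],
    "lib-netstack": ["lib-alloc"],
    "lib-libc": ["lib-alloc", "lib-vfs"],
    "lib-vfs": [],
    "lib-alloc": [],
}

def resolve(root, graph):
    """Return the set of libraries that must go into the image for `root`."""
    selected, stack = set(), [root]
    while stack:
        lib = stack.pop()
        if lib not in selected:
            selected.add(lib)
            stack.extend(graph[lib])  # follow this library's own dependencies
    return selected

image_libs = resolve("app-web", deps)
print(sorted(image_libs))
```

Because only the transitively required libraries end up in the image, a different application with fewer dependencies automatically produces a smaller, more specialized unikernel.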



We are going to present the efforts and achievements made by the community in the last two years. We will also give an outlook on recent project directions: binary compatibility (Linux ABI), support for a wide range of compiled and interpreted languages (e.g., WebAssembly, Go, Python, Ruby, etc.), and enhanced safety and protection features. Time permitting, we will show the audience a live demo.

Back

VUOS: Give Your Processes a New VU

Home

Speaker Renzo Davoli
Room K.4.601
Track Microkernels and Component-based OS
Time 16:30 - 16:55
Event link View original entry

VUOS is a different perspective on namespaces, anykernels and related concepts. The main idea behind VUOS is that it is possible to give processes their own "view" using partial virtual machines.



A partial virtual machine intercepts system call requests and operates like a filter: system calls can be forwarded to the kernel of the hosting system or processed by the partial virtual machine hypervisor.
In this way processes see a mix of resources provided by the kernel (of which they have the same view as other processes) and virtual resources. It is possible to mount filesystems, load networking stacks, change the structure of the file system tree, and create virtual devices.
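The filtering idea can be sketched as a dispatch step: each request is either served by a virtualization module or forwarded to the host kernel unchanged. This is a toy model of the concept only; VUOS achieves it by intercepting real system calls, and the names below are invented.

```python
# Toy model of a partial virtual machine's filter: each "syscall" is either
# handled by a userspace virtualization module or forwarded to the host kernel.

virtual_mounts = {
    # path prefix -> handler implementing the virtual resource in userspace
    "/proc/fake": lambda path: b"virtual contents\n",
}

def host_open(path):
    # Stand-in for forwarding the request to the real host kernel.
    return f"host:{path}"

def vu_open(path):
    for prefix, handler in virtual_mounts.items():
        if path.startswith(prefix):
            return handler(path)   # processed by the hypervisor, a user process
    return host_open(path)         # forwarded: same view as any other process

print(vu_open("/proc/fake"))       # served virtually
print(vu_open("/etc/hostname"))    # forwarded to the host kernel
```

The key property the sketch illustrates is the mix: paths outside the virtual mounts behave exactly as they do for unvirtualized processes.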



The hypervisor is just a user process, so while it gives processes a new perspective, it does not widen the attack surface of the kernel.

Back

How Firefox upholds its values and keeps up with change

Home

Speaker Emma Humphries
Room UA2.114 (Baudoux)
Track Mozilla
Time 09:00 - 09:25
Event link View original entry

How the Firefox team changed how we think about shipping features, replacing a process biased towards those with the loudest voices and the luxury of time with a process that is more inclusive and reduces risk to Firefox users when we ship.

In this talk you'll learn how the Firefox team changed how we ship new features in the browser, and adopted industry best practices including gradual deployments and feature flags.
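As an illustration of the mechanism behind gradual deployments and feature flags (this is a generic sketch, not Firefox's actual implementation), a common approach is to hash a stable client identifier into a bucket and compare it against the rollout percentage:

```python
import hashlib

# Sketch of percentage-based gradual rollout behind a feature flag.
# The hashing scheme and flag name are illustrative, not Firefox's.
def is_enabled(flag, client_id, rollout_percent):
    digest = hashlib.sha256(f"{flag}:{client_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100        # stable bucket in [0, 100)
    return bucket < rollout_percent

# A given client always lands in the same bucket, so raising the rollout
# from 10% to 50% only ever adds users; nobody flips back and forth.
enabled_10 = {c for c in range(1000) if is_enabled("new-ui", c, 10)}
enabled_50 = {c for c in range(1000) if is_enabled("new-ui", c, 50)}
print(enabled_10 <= enabled_50)
```

The stability of the bucket is what makes a rollout reversible and measurable: the same cohort keeps the feature at every percentage step, so regressions can be attributed to it.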



Why?






I'll talk about what we learned and how it's shaped how we release Firefox at Mozilla.

Back

Thunderbird in 2020 and Beyond

Home

Speaker Ryan Lee Sipes
Room UA2.114 (Baudoux)
Track Mozilla
Time 09:30 - 10:00
Event link View original entry

We take a look at Thunderbird's roadmap for the future with Ryan Sipes, Thunderbird's Community and Business Development Manager, and break down what the project has planned for 2020 and beyond. Despite parts of the Internet saying a few years ago that Thunderbird was on its deathbed, the team has been able to put together a lot of resources and developers to create a better email client for the world. And we have bigger plans for the coming years.

The Thunderbird project was in a dire situation a few years ago, having been spun out from the Mozilla Corporation and the paid team pulled off the project. Many people thought it was only a matter of time before the Email client would fall into a state of disrepair, and that the community working on it was only prolonging the inevitable. But the community managed to prove them wrong.



In the past few years Thunderbird has been able to put together a healthy amount of resources, hire developers, and address longstanding issues that threatened the project. Now we take a look at the roadmap for 2020 and beyond to share where we are planning to go and how we intend to get there. We'll look at new features, share our philosophy on how we decide what to pursue with Thunderbird, and even share some secret plans that will be revealed in this talk!



Join Thunderbird team members: Ryan Sipes, the Community and Business Development Manager, and Magnus Melin, the project's Technical Manager for a talk that will please and surprise you.

Back

Make it accessible

Home

Speaker Gabriele Falasca
Room UA2.114 (Baudoux)
Track Mozilla
Time 10:00 - 10:25
Event link View original entry

This talk focuses on Web Accessibility, namely the practice of ensuring that people with disabilities—be it physical, situational or socio-economic—have access to and can interact with websites and applications.




We will begin the talk sharing the concept of an interface, intended as a layer between two parties that do not speak the same language. We’ll continue talking about accessibility devices and myths that surround them. Then we will dive into how to refactor codebases for screen reader optimization, following accessibility guidelines that improve the user experience for all people. We will conclude by sharing some useful tools that help developers build accessible webpages.




Back

Is the web rea11y for all?

Home

Speaker Ioana Chiorean
RoomUA2.114 (Baudoux)
TrackMozilla
Time11:00 - 11:25
Event linkView original entry

The web is present in pretty much everyone's life nowadays. But is it really for all? How do we make it smooth to use for those with disabilities, for those with limited time and internet connectivity, or, going even further, for those using the web for the first time?
Let's learn how the devtools in Firefox can help us improve everyone's experience without much sacrifice on our end.

A11y and inclusion are getting more traction nowadays, but people still think they address only those with disabilities. As a person without any official disability, I found myself in this situation and felt confused about how the web should work (I was about to lose money on conference tickets because I could not see the categories). Some developers are a bit defensive, saying they don't have users to build more a11y for, or the time to add these features. Showing the devtools Firefox has for accessibility demonstrates how quickly you can check your page and how little it costs. Not knowing all the best practices won't be a blocker either: Firefox will give you suggestions. In this way I want to bring attention to a11y and inclusion in a more technical way.

Back

What are the Top 10 Frustrations for Web Developers and Designers?

Home

Speaker Kadir Topal
RoomUA2.114 (Baudoux)
TrackMozilla
Time11:30 - 12:10
Event linkView original entry

The MDN Web Developer Needs Assessment is the first edition of an annual study providing a prioritized list of designer and developer needs.

As an industry working on the Web, as a platform and set of tools, we recognized a critical voice was missing when it came to making decisions about feature development — that of web designers and developers.




We put this report together with the help of more than 30 stakeholders from the MDN Product Advisory Board member organizations and the input of more than 28,000 developers and designers from 173 countries who took the twenty minutes necessary to complete the survey entirely. That’s more than 10,000 hours contributed by the community to help us understand their pain points, wants, and needs. With that involvement, we believe the MDN Web DNA is the largest web developer and designer focused research study ever conducted.



Their input now, and in future versions, will influence how browser vendors prioritize feature development so we can address the needs of designers and developers, both on and off the Web. By producing the report annually, we can track needs and pain points over time so we can see the impact of our efforts.



A critical aspect of the report is that it provides a voice for communities of practitioners. We did not tailor it to current assessments and priorities of participating browser vendors. A single browser vendor does not own it.

Back

Discover the New Firefox Profiler

Home

Speaker Nazım Can Altınova
RoomUA2.114 (Baudoux)
TrackMozilla
Time12:15 - 12:40
Event linkView original entry

Using a profiling tool helps developers get detailed information about the execution of their application and allows them to understand its behavior.



The Firefox Profiler is a profiler that is built into Firefox. It has tighter integration with Firefox than external profilers. It can provide more information and insight into what the browser is doing. It can also show the memory usage and Firefox internal code execution.



During the talk, I will explain how to capture a good profile and how to analyze profile data. I will share Firefox Profiler-specific features like memory tooling and the single tab/advanced views, and how to use them. I will also share the future of the Firefox Profiler!

Web applications are more popular than ever, and users expect more from them. Unfortunately, users increasingly run applications on low-powered devices, which strains your application’s performance. Managing application performance can be challenging, as modern applications have many dependencies and their complexity can hide the issues.



Using profiling tools to look for potential bottlenecks can significantly reduce the number of problems in your application. Profiling gives you detailed information about the execution of your application and allows you to understand its behavior.



The Firefox Profiler is a profiler that is built into Firefox and is available at https://profiler.firefox.com/. It has tighter integration with Firefox than external profilers. It can provide more information and insight into what the browser is doing. Aside from understanding the execution of a web page, it can also show the memory usage and Firefox internal code execution.



The intended audience is all web developers and people who want to contribute to Mozilla by helping us analyze the Firefox performance issues by capturing profiles and filing bugs.

Back

Web compatibility and ML

Home

Speaker Giannis Giannelos
RoomUA2.114 (Baudoux)
TrackMozilla
Time12:45 - 13:10
Event linkView original entry

In 2019, Mozilla's Open Innovation and WebCompat teams joined forces to improve the process of gathering web compatibility issues. One of the experiments was to introduce machine learning capabilities into the triaging process and automate some steps. This talk is about the early steps: how we got some hands-on experience with machine learning, what we've achieved so far, and potential next steps.

Back

Facilitating distributed deterministic computation with WASI

Home

Speaker Jakub Konka
RoomUA2.114 (Baudoux)
TrackMozilla
Time13:15 - 13:55
Event linkView original entry

WebAssembly System Interface (WASI) is a brilliant new community effort to standardise the use of WebAssembly (Wasm) outside the browser environment. Initiated by Mozilla and now under the umbrella of the Bytecode Alliance, WASI has the potential to revolutionise the way we think about "build once, run anywhere" in a truly secure manner. But could WASI also lend itself to the task of running arbitrary code within a network of distributed, untrusted nodes such as BOINC or Golem Network, while ensuring that the results received are indeed correct? The short answer is yes, provided determinism of computation can be enforced, which opens up many ways of verifying the results. Enforcing determinism is rather difficult to achieve on other platforms such as the JVM, but is now possible thanks to Wasm and WASI. This talk will delve deep into the inner workings of the WASI spec and its go-to implementation, the wasi-common library, and explore how and whether determinism can be enforced at the WASI syscall level.

Back

Loanwords, Agriculture & WebAssembly

Home

Speaker Andre Garzia
RoomUA2.114 (Baudoux)
TrackMozilla
Time14:00 - 14:25
Event linkView original entry

In this talk we take a whirlwind tour through untranslatable language jargon and native indigenous agricultural techniques to help us think about where the web ecosystem appears to be headed and how to switch the current course towards a more sustainable future.

Back

AMENDMENT A Mozilla IoT Forecast that's Sunny and Clear -- No Clouds!

Home

Speaker Philippe Coval
RoomUA2.114 (Baudoux)
TrackMozilla
Time14:30 - 14:55
Event linkView original entry




WebThings (by Mozilla) is an open source smart home implementation to improve privacy, security, and interoperability.

Want to manage your own private smart home? Want your connected things to be interoperable across brands, securely accessible and controllable over the web? Come see how to run your entire smart home on the edge, in your own home, no clouds required! This talk demonstrates how to run the WebThings Gateway on a Raspberry Pi (or in a Docker container on your favorite platform) to manage IoT devices that you build or buy. You'll also learn how to build your own "web things", in minutes, using open source WebThings framework libraries.

Back

AMENDMENT Generate a DeepSpeech model with the help of your community

Home

Speaker Daniele Scasciafratte
RoomUA2.114 (Baudoux)
TrackMozilla
Time15:00 - 15:25
Event linkView original entry

The story of how Mozilla Italia added the Italian language to Common Voice and, after a year, generated the language model.
With the help of many people across the various related projects, we developed tools and scripts, found and gathered sentences, did promotion, and finally generated the model for Italian.

A common issue in Common Voice is how to join and involve a community instead of doing all the tasks alone. Discourse is full of the same questions, but there is no story or tutorial that shows how this can be a way to work together toward the same result, to the benefit of a whole country or region.
Mozilla is an inclusive community, but the use case or story behind a specific community is often hidden, and this is a problem: Mozilla is open to everyone, but bureaucracy or missing experience can be a blocker.



Please note that this replaces the talk '(re)Activating the Common Voice project at a local level' by Redon Skikuli.

Back

Privacy by Design

Home

Speaker Trishul Goel
RoomUA2.114 (Baudoux)
TrackMozilla
Time15:30 - 15:55
Event linkView original entry

Most websites leak user data, and most of the time (~90%) the site admins don't know it. These leaks happen via tag managers, third-party CDNs, embeds, fonts, etc. In this talk I would like to discuss the opportunities developers have to avoid these leaks while developing websites. This talk will help them ensure their websites do exactly what they intend them to do. The side effect of this is always better performance 😎

Every other website on the internet uses third-party trackers to improve the product and provide a better user experience, which eventually leads to a compromise of user privacy. As developers, we are responsible for making our users feel safe and doing whatever it takes to secure their privacy.
In this talk we will discuss the data leaks which happen while using third-party trackers, and walk through the measures we can take to avoid them and ensure the privacy of our users, right from development time.

Back

What Makes People Come and What Makes Them Stay

Home

Speaker Gloria Dwomoh
RoomUA2.114 (Baudoux)
TrackMozilla
Time16:00 - 16:30
Event linkView original entry

Over the years the tech industry has been trying to change its diversity and inclusion statistics, but that has proven a hard nut to crack. This is a talk about what makes people come, but also what makes people stay; because diversity is inviting people to the dance, while inclusion is enabling them to join it. Let's figure out how you can make people come and want to stay in your organizations and teams, and let's look at one use case where Mozilla did just that.

Back

Kotlin DevRoom Welcoming Remarks

Home

Speaker Nicola Corti
RoomUA2.118 (Henriot)
TrackKotlin
Time09:00 - 09:10
Event linkView original entry

Welcoming participants to the first edition of the Kotlin DevRoom @ FOSDEM

Back

Useful coroutine patterns for Android applications

Home

Speaker Erik Hellman
RoomUA2.118 (Henriot)
TrackKotlin
Time09:15 - 09:55
Event linkView original entry

Kotlin Coroutines are a great match for implementing common features in Android applications. In this presentation, we will go through a number of patterns that solve problems we often encounter when implementing traditional asynchronous features.

Coroutines can help us build robust solutions for asynchronous work in Android applications, but it can be difficult to learn which pattern should be used in which scenario. When should you use launch instead of async? When should you use a Channel instead of a Flow, or maybe both? What use is a conflated channel, and why is the catch() operator important? All this and more will be covered in this talk.
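The launch/async and catch() questions above can be sketched in a few lines. This is a minimal illustration using kotlinx.coroutines, not code from the talk:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// async returns a Deferred<T>: use it when you need a result back.
// launch returns a plain Job: use it for fire-and-forget work.
fun computeConcurrently(): Int = runBlocking {
    val a: Deferred<Int> = async { 19 }   // both blocks run concurrently
    val b: Deferred<Int> = async { 23 }
    a.await() + b.await()                 // suspend until both complete
}

// catch() handles upstream failures declaratively, instead of wrapping
// the whole collection in try/catch.
fun safeValues(): List<Int> = runBlocking {
    flow {
        emit(1)
        emit(2)
        throw IllegalStateException("boom")
    }
        .catch { emit(-1) }   // replace the failure with a fallback emission
        .toList()
}

fun main() {
    println(computeConcurrently())  // prints 42
    println(safeValues())           // prints [1, 2, -1]
}
```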

Back

Migrating FOSDEM Companion to Kotlin

Home

Speaker Christophe Beyls
RoomUA2.118 (Henriot)
TrackKotlin
Time10:00 - 10:25
Event linkView original entry

FOSDEM Companion is currently the most used mobile application at FOSDEM. It has been around since 2014 and is updated every year. In 2020, it's finally made the big leap to Kotlin!

The app has been entirely rewritten using the Kotlin programming language. This talk will cover the conversion process, and how the new code makes use of language features and APIs that are not available in Java to become more than a simple Java conversion.



For example, we'll talk about:






The talk will be illustrated by many code examples.



Intended audiences: Android developers with medium experience looking to improve their Kotlin codebases. Java developers curious about Kotlin.
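As a taste of what such a conversion gains, here is a small sketch (illustrative only, not code from the app) of features unavailable in Java: data classes, null safety, and extension functions:

```kotlin
// A data class gives equals/hashCode/toString/copy for free --
// dozens of lines of boilerplate in the equivalent Java.
data class Event(val title: String, val room: String?)

// An extension function adds behavior without a utility class,
// and ?: (the elvis operator) handles the nullable room safely.
fun Event.display(): String = "$title @ ${room ?: "TBD"}"

fun main() {
    val talk = Event("Migrating FOSDEM Companion to Kotlin", null)
    println(talk.display())  // prints Migrating FOSDEM Companion to Kotlin @ TBD
}
```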

Back

Idiomatic Kotlin Microservices

Home

Speaker Christoph Pickl
RoomUA2.118 (Henriot)
TrackKotlin
Time10:30 - 10:55
Event linkView original entry

Although Kotlin is, from a language perspective, 100% interoperable with Java, the slight paradigm shift (nullability) can cause some pain when using Java frameworks: e.g. the need for private lateinit var when using JUnit, having to use compiler plugins to open up Spring-annotated beans, or a generated zero-arg constructor for Hibernate.
This talk goes through a full-fledged alternative to the common Java stack for writing microservices, using Ktor (web framework), Kodein (DI container), Exposed (SQL library), Spek (test framework), the Gradle Kotlin DSL, ...

This live coding session aims to share my experiences of how to combine (and how not to combine!) several open source libraries into a maintainable, scalable, sustainable, and all-the-other-ables-you-can-think-of codebase, based on several years of experience using Kotlin in production in different companies. Using libraries written from scratch for Kotlin lets us fully exploit this beautiful language: its functional nature, extension function types, null type safety, delegated properties, its preference for immutability, and simply the feeling of writing proper, idiomatic Kotlin. The demo itself will cover a basic CRUD web service, following a TDD approach and resulting in a production-ready artifact.



The target audience is primarily developers/architects; the talk requires basic knowledge of the language (intermediate level) and preferably some experience with the existing Java ecosystem.
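The lateinit pain point mentioned above can be illustrated in a few lines (a generic sketch with made-up class names, not the talk's actual stack):

```kotlin
// Java-style field injection forces a mutable, initially unset property,
// which Kotlin can only express as `lateinit var`:
class FieldInjectedService {
    lateinit var repository: String   // a framework assigns this after construction
    fun greet() = "Hello from $repository"
}

// Kotlin-first libraries prefer constructor injection: the dependency is
// immutable and can never be observed uninitialized.
class ConstructorInjectedService(private val repository: String) {
    fun greet() = "Hello from $repository"
}

fun main() {
    val s = FieldInjectedService()
    s.repository = "db"   // simulating the framework's injection step
    println(s.greet())
    println(ConstructorInjectedService("db").greet())
}
```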

Back

Automate your workflows with Kotlin

Home

Speaker Martin Bonnin
Michel Gauzins
RoomUA2.118 (Henriot)
TrackKotlin
Time11:00 - 11:25
Event linkView original entry

From git housekeeping to releasing. From keeping a sane backlog to
handling internationalization and monitoring status, the life of a
developer involves more than meets the eye.



This talk will be a feedback about how we use Kotlin extensively to:






That's a lot to cover, and the goal of this talk is not to dig into
each technology but rather to inspire and show the range of
possibilities opened up by Kotlin.


Back

AMENDMENT Modern asynchronism with coroutines

Home

Speaker Geoffrey Métais
RoomUA2.118 (Henriot)
TrackKotlin
Time11:30 - 11:55
Event linkView original entry

Presentation of Kotlin Coroutines, a great new framework for asynchronous programming which makes it more powerful and easier to manage.



Please note that this talk replaces one entitled "Experimenting with the Kotlin Compiler" that was due to have been given by Jossi Wolf, who has sent his apologies but is now unable to attend as he has fallen ill. We wish him a speedy recovery.

Kotlin coroutines is a framework inspired by Go channels and C#'s await feature.
It introduces the principle of suspending functions and enhances it with powerful tools like Channels and Flows.



In this talk I will present the framework and its best practices, with code examples to show how to leverage it.
This is a report on VLC-Android's migration to coroutines, which improved threading capabilities and performance and helped suppress callback hells.

Back

Communication Break Down | Coroutines

Home

Speaker Bob Dahlberg
RoomUA2.118 (Henriot)
TrackKotlin
Time12:00 - 12:25
Event linkView original entry

Coroutines are great, I think we all agree on that. But as async, and possibly parallel, programming becomes easier, the risk of sharing mutable variables between coroutines rises.



When the boundaries are abstracted away we should rely on safe ways to communicate between our coroutines.



In this session, I will go through safe and unsafe ways of communication between different coroutines, and why it's not safe to think of them purely as threads.

Back

How Kotlin can change developer experience with modern graphics APIs

Home

Speaker Giuseppe Barbieri
RoomUA2.118 (Henriot)
TrackKotlin
Time12:30 - 12:55
Event linkView original entry

Creating modern games in Kotlin forces you to use non-object-oriented graphics APIs and to get your hands dirty with native resource management. With wrappers around OpenGL and Vulkan, we make powerful graphics familiar to JVM devs, and type-safe.

Vulkan and OpenGL are the cross-platform standards for creating modern games and graphics. However, their respectively stateless and state-based designs bear little relation to an object-oriented language like Kotlin, and you have to be careful handling native resources when interacting with the APIs.



The wrappers we have built at Kotlin Graphics allow these APIs to become object oriented. This talk will teach attendees how to use our wrappers, VKK and GLN, for Vulkan and OpenGL respectively, in order to create performant, modern graphics.



Graphic power meets Kotlin expressiveness



VKK and GLN bring features such as type safety through inline classes and enums, DSL constructs, extension functions, typealiases, less verbosity, a pure JVM allocation strategy, and an easy-to-pick-up system. Because of the type-safety guarantee, we can directly call the native methods without performing potentially expensive checks.



This allows users of Kotlin to create performant, modern games like those you could build in C or C++.
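The inline-class mechanism behind that type safety can be sketched generically; GlBuffer and GlTexture below are hypothetical names for illustration, not the actual VKK/GLN API:

```kotlin
// With raw Int handles, a texture id could silently be passed where a
// buffer id is expected. Value (inline) classes make that a compile error,
// yet erase to a plain Int at runtime, so the safety costs nothing.
@JvmInline
value class GlBuffer(val id: Int)

@JvmInline
value class GlTexture(val id: Int)

fun bindBuffer(buffer: GlBuffer): String = "bound buffer ${buffer.id}"

fun main() {
    println(bindBuffer(GlBuffer(7)))   // prints bound buffer 7
    // bindBuffer(GlTexture(7))        // does not compile: type mismatch
}
```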

Back

Improve your Android app with coroutines

Home

Speaker Geoffrey Métais
RoomUA2.118 (Henriot)
TrackKotlin
Time13:00 - 13:25
Event linkView original entry

Using the kotlinx.coroutines library in Android development.



Presentation of coroutines integration and how it improved VLC on Android

A demonstration of using the kotlinx.coroutines framework for application development.



Coroutines helped a lot to improve the VLC app's performance on Android.
This talk is a report on this conversion and a collection of good practices.



We will see how coroutines helped make the app more performant and maintainable,
especially the benefit of structured concurrency and how greatly it helps solve most asynchrony problems:
use your components' lifecycle to scope asynchronous jobs,
get rid of callback APIs with the Flow API,
and more examples of how to overcome GUI framework limitations with coroutines.

Back

Confessions of a Serial K–otlin Multiplatform–er

Home

Speaker Eugenio Marletti
RoomUA2.118 (Henriot)
TrackKotlin
Time13:30 - 13:55
Event linkView original entry

What is Multiplatform?




  1.  HELL




    • harder to develop, no documentation


    • just a trend, failed in the past


    • not suitable for performant apps that feel “native”

       




  2.  HEAVEN




    • simpler to develop, removes platform barriers


    • write half the code, ship in half the time


    • only need to hire “generalists” instead of “specialists”

       




  3.  OTHER  ✔︎




    • elaborate in 25 minutes or less:

      ________________________________

       

       




Back

Kotlin MP: Into the Multi-Verse

Home

Speaker Nishant Srivastava
RoomUA2.118 (Henriot)
TrackKotlin
Time14:00 - 14:25
Event linkView original entry

Kotlin Multiplatform is the new kid on the cross-platform block, although its approach is very different from what you have seen in the past. The new approach uses Kotlin/Native to compile Kotlin to native binaries for specific target platforms, which run without a virtual machine, enabling simplified code sharing across multiple platforms.
In this talk, you will be introduced to Kotlin/Native and see a demonstration of how to build a Kotlin Multiplatform app that runs on both iOS and Android using shared Kotlin code.

Back

Multiplatform Kotlin Library Development

Home

Speaker Russell Wolf
RoomUA2.118 (Henriot)
TrackKotlin
Time14:30 - 14:55
Event linkView original entry

Multiplatform Kotlin facilitates code-sharing by making platform-agnostic portions of the standard library available in common code that is written once but can run on any target. As Multiplatform development really starts to take off over the next year, there must also be a robust ecosystem of third party libraries available to application developers.



I’ll talk through what it looks like to create such a library, with lessons from my experiences building one of the early libraries in the mobile Multiplatform space. We'll talk about how to find shared abstractions around different platform APIs, how to handle the fast-paced evolution of this environment, and what this all felt like as a first-time library developer. When we're done, you’ll be ready to leverage the growing ecosystem as well as make your own contributions.

Back

AMENDMENT From Swagger to Kotlin via Gradle

Home

Speaker Nicola Corti
RoomUA2.118 (Henriot)
TrackKotlin
Time15:00 - 15:35
Event linkView original entry

When writing networking code, you want to make sure the code between the client and server implementation is consistent. After spending hours defining the APIs with your backend engineers, you’ll probably need to spend even more time implementing those details over to your code (and guess what… your backend engineers will be doing the same!). This process is boring, time consuming, and prone to errors.



Wouldn’t it be cool if this process were automated? If you could have classes and interfaces that mirror your API definition? And if they were automatically tested and ready to use out of the box?



It can be done!



In this talk I'm going to present swagger-gradle-codegen, an open source Gradle plugin I developed to automatically generate Kotlin data classes and Retrofit APIs from a Swagger spec file. The plugin is specifically designed to simplify Android developers' lives and can easily be integrated into a CI pipeline to automate code generation in your project.



We will see some of the challenges I faced with code generation and I'm going to discuss the feature roadmap together with other contributors. If you're looking for a Kotlin open source project to contribute to, this could be your chance!



Please note that this talk replaces one entitled "Bridge the physical world: Kotlin/Native on Raspberry Pi" that was due to have been given by Qian Jin, who has sent her apologies but is now unable to attend FOSDEM this year.

Back

Dissecting the inline keyword in Kotlin

Home

Speaker Suraj Shah
RoomUA2.118 (Henriot)
TrackKotlin
Time15:40 - 16:15
Event linkView original entry

Kotlin has a keyword called inline. While mostly auto-suggested by the IDE, this little optimization forms the backbone of features like coroutines, the APIs for sequences and collections, and a lot more!

Java 8 introduced the concept of lambdas at the language level, using bytecode instructions introduced in Java 7. Kotlin tries to make this lambda functionality available even for applications targeting Java 6! How does it do it? What optimizations does it make to achieve this compatibility?



How does this feature form the backbone of features like coroutines and the APIs for collections and sequences?



How it doesn't stop there, introducing the concept of inline classes, which enable Kotlin to have unsigned integers.



And how all of this is done by adding support for just one keyword in Kotlin!
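A minimal sketch of what inlining does (illustrative, not code from the talk):

```kotlin
// Without `inline`, every call to a higher-order function allocates a
// Function object for its lambda. With `inline`, the compiler pastes both
// the function body and the lambda body into the call site: no allocation,
// and non-local `return` from the lambda becomes legal.
inline fun measure(block: () -> Unit): Long {
    val start = System.nanoTime()
    block()   // at compile time, the lambda body is copied in here
    return System.nanoTime() - start
}

fun sumTo(n: Int): Int {
    var total = 0                                         // captured without boxing,
    val elapsed = measure { for (i in 1..n) total += i }  // thanks to inlining
    check(elapsed >= 0)
    return total
}

fun main() {
    println(sumTo(100))  // prints 5050
}
```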

Back

Designing a DSL with Kotlin

Home

Speaker Nicolas Frankel
RoomUA2.118 (Henriot)
TrackKotlin
Time16:20 - 17:00
Event linkView original entry

Kotlin is one of those “new” JVM languages that are currently rocking the boat. Although it’s made a great impact on Android, it’s equally good on the server side. Since Domain-Specific Languages are constrained by the language they run on, Kotlin frees developers from Java's fluent builders to propose something better.



Using the Vaadin web framework as an example, I’ll demo how one can design one's own DSL with Kotlin.
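The core trick behind such DSLs is the lambda with receiver. Here is a toy builder to show the shape (it emits HTML-ish strings and is not the Vaadin DSL from the talk):

```kotlin
// Each block in the DSL is a lambda with receiver: inside `page { ... }`,
// calls like heading() resolve against the Page instance being built.
class Page {
    private val parts = mutableListOf<String>()
    fun heading(text: String) { parts += "<h1>$text</h1>" }
    fun paragraph(text: String) { parts += "<p>$text</p>" }
    fun render(): String = parts.joinToString("")
}

fun page(build: Page.() -> Unit): String = Page().apply(build).render()

fun main() {
    val html = page {
        heading("FOSDEM")                        // `this` is the Page receiver
        paragraph("Designing a DSL with Kotlin")
    }
    println(html)  // prints <h1>FOSDEM</h1><p>Designing a DSL with Kotlin</p>
}
```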

Back

Close lid to encrypt

Home

Speaker Tim Dittler
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time09:00 - 09:10
Event linkView original entry

Today, hard disk encryption only protects users' data when their machine is shut down.
"Close lid to encrypt" aims to extend this protection to suspend mode.

Hard disk encryption is a necessity for everyone who fears the physical theft or seizure of their device. However, your data is only protected while the machine is shut down, and most people rarely shut down their devices anymore. Usually, you just close the lid of your notebook and you're on your way.



"Close lid to encrypt" aims to improve the privacy of your data. When you close the lid of your notebook, it goes into sleep/suspend mode. All processes are frozen and don't need to access your hard disk anymore. We use this opportunity to clean the keys of your encrypted devices and suspend them as well. Therefore, the data on your hard drive is protected.
When resuming your computer, you must re-enter the password of your encrypted volumes. But then you're just where you've been working before.



To make all this work, we rely on a small kernel patch, the cryptsetup project, initramfs and cgroups2. "Close lid to encrypt" currently focuses on Debian and its derivatives, and we plan to bring all code upstream. This effort is funded by the German Prototypefund.

Back

AMENDMENT Almonit: Decentralized Websites and Web Services

Home

Speaker Eyal Ron
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time09:10 - 09:20
Event linkView original entry

Almonit is a project for decentralized websites and web services.



Decentralized websites and web services are an alternative to the way the web functions today. They combine decentralized storage (like IPFS), decentralized name services (like ENS) and P2P networks in order to replace the server-based model of the web.



This lecture describes the Almonit project, its architecture, the technical details of the technology and the ecosphere in which it is created.



Come discover the state-of-the-art of this up-and-coming area!



This talk has been accepted late, due to Open and federated identities with ID4me by Vittorio Bertola being promoted to a full talk at 11:00.

How does the web function? Generally speaking, it uses servers to provide content or services, and DNS to provide access to those servers.



The Almonit project provides an alternative to the traditional server-DNS model. Instead of a server, we use decentralized storage (like IPFS), and instead of DNS, we use a decentralized name service (like ENS). We provide tools, methods and a portal for creating and accessing the decentralized web.



The decentralized websites and web services scene is very new. In its current form, it was developed mostly in 2019. So far, it contains a few dozen websites experimenting with the technology, but its potential is promising. We envision it being used to create community-driven websites which would be made, moderated and maintained by a community instead of a central entity.



This lecture describes the Almonit project, its architecture and technical details. We also present the ecosphere of decentralized websites and web services in which the project is created, and survey the past, present and also possible future use-cases of this technology, including its limitations.

Back

Identity Box

Home

Speaker Marcin Czenko
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time09:20 - 09:30
Event linkView original entry

Society is becoming increasingly aware of the importance of protecting digital information, and it is becoming clear that the current centralized model has come to an end.
The future of the Internet is distributed. Unsupervised, unmoderated access, affordable storage, data replication, and built-in security and privacy are the most important aspects of the Internet of the future.



Unfortunately, a global, reliable, decentralized network cannot be built without actual physical nodes, in contrast to the thousands of nodes in a centralized cloud data center. Only by building an open network of physical nodes can we pave our way as a society to the decentralized Internet of the future.



Identity Box is a personal P2P networking device giving you access to a global network of distributed storage, digital identity, and distributed personal apps. It is a community effort to
build the next-generation, decentralized infrastructure that enables an open platform for privacy-preserving ecosystems.

Most of the data today belongs to just a handful of companies. Personal documents, photographs, videos, and the things that we put online in general contain lots of sensitive information; information that we would rather keep private. Very often the same companies that provide more or less "complimentary" storage space also help us manage our whole digital existence. The combination of data and identity information is powerful, and it underpins well-established business models where the users' data, or the users themselves, become the product. Allowing sensitive data to be kept by well-known service providers makes it easier than ever for criminal organizations, but also the state, to gain insight into data that they have no right to access.



Our sensitive personal data is kept by the state, healthcare organizations, financial institutions, and corporations. We do not have control over this data, and our access to it is limited. Every institution storing the data not only has its own policies but also uses proprietary technologies to access the data. These data silos make interoperability hard and give institutions almost complete freedom to use the data without the consent of the user.



Society is becoming increasingly aware of the importance of protecting digital content, and it is becoming clear that the current centralized model has come to an end.
The future of the Internet is distributed. Unsupervised, unmoderated access, affordable and unlimited storage, and built-in security and privacy are the most important aspects of the Internet of the future.



Unfortunately, a global, reliable, decentralized network cannot be built without actual physical nodes, in contrast to the thousands of nodes in centralized cloud data centers. Users need to be re-introduced to the concept of decentralization and learn the advantages of technologies like self-sovereign identity and content-addressable networks. Only by building an open network of physical nodes can we pave our way as a society to the decentralized Internet of the future. Building the decentralized Internet of the future is therefore a community effort, where all participants become the actual owners of the distributed global infrastructure.



To support this community movement, we propose Identity Box: a personal P2P networking device giving you access to the global network of distributed storage, digital identity, and distributed personal apps.



Identity Box is a physical device, but at the same time far more than just a piece of hardware. Together with the included software and the Identity App, Identity Box enables an ecosystem of rich, distributed personal applications. It supports IPFS, Self-Sovereign Identity, and end-to-end encrypted storage. And that's just the beginning.



Join us in building the decentralized Internet of the future!

Back

AMENDMENT Android Content Providers for the Web

Home

Speaker Friedger Müffke
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time09:30 - 09:40
Event linkView original entry

Inspired by the concept of sharing data between apps on Android devices through Content Providers, this talk explains how this can be achieved on the Web today using decentralized identity and storage (identity hubs).



This talk has been accepted late to replace "Decentralized object storage An open source decentralized object storage" by Ivan Fraixedes. Due to health issues Ivan's talk had to be cancelled. We wish him a speedy recovery.

On mobile devices, users have got used to giving apps permission to access their personal data: the address book, calendar, pictures, or any other files. Re-entering or copying the data into different apps makes no sense. On the web, however, users do it every day. Contact details are re-imported, pictures transferred, videos copied. Some web apps support the integration of third-party services that keep the data in one place, but these apps thereby assume that users have an account with those services, and both apps and users then depend on the services' terms and conditions. This dependency can be removed with decentralized identity and storage.



In this talk, two current projects are presented that offer APIs to use personal data across web apps: 3Box storage and Gaia storage. It is explained how apps can make use of this type of storage and how cryptographic keys (belonging to a decentralized identity) protect the users' data.



While these projects are already used today (as demonstrated with existing apps), it is exciting to see that the different projects are working towards a standard for these APIs. The current status is presented briefly at the end of the session.

Back

Librecast: Privacy and Decentralization with Multicast

Home

Speaker Brett Sheffield
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time09:40 - 09:50
Event linkView original entry

Written in 2001, RFC 3170 states: "IP Multicast will play a prominent role on the Internet in the coming years. It is a requirement, not an option, if the Internet is going to scale. Multicast allows application developers to add more functionality without significantly impacting the network."



Nearly two decades later, multicast is still largely ignored and misunderstood.



This talk explains why multicast is the missing piece in the decentralization puzzle, and how multicast can help the Internet continue to scale, better protect our privacy, solve IoT problems, and make polar bears happier, all at the same time.

NB: This is a shortened lightning-talk version of the full talk given at Linux.conf.au in January.



There are many common misconceptions about multicast, including that it is only useful for streaming video and audio. It does so much more than that.



Multicast is really about group communication. It is, by definition, the most efficient way to distribute data to groups of nodes.



Multicast brings with it a very different way of thinking about distributed systems, and about what is possible on the Internet, from database replication to chatops, server federation, configuration management, and monitoring.



Even applications, such as chat, which are fundamentally multicast in nature, are being built on top of unicast protocols. There is a Better Way.



Unicast networking leads to centralised and inefficient systems that are more open to attack and censorship. This talk will show how multicast allows for more efficient, decentralized designs, leading to increased efficiency and much-reduced energy consumption. This is better for our democracy, human rights and our planet.



Multicast lets us do things that would be impossible with unicast. Imagine sending software updates to a billion IoT nodes simultaneously, using just one tiny virtual server.



At a time when even the web is moving to UDP with HTTP/3 and WebRTC, it is time we took a serious look at what we're missing by not using multicast at the network layer to underpin our Internet protocols.



We'll discuss how you can start using multicast in your project today, and how multicast design and thinking differs from unicast. We'll cover some of the different types of IP multicast, the basics of multicast routing, how to build in TCP-like reliability and take a look forward to how improvements in multicast can make a better Internet for the future.
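To make the group-communication model concrete, here is a minimal sketch of IPv4 multicast using only the Python standard library. The group address and port are arbitrary example values chosen from the administratively scoped range, not anything prescribed by the talk:

```python
import socket
import struct

GROUP = "239.1.1.1"   # example address from the administratively scoped range
PORT = 5007

def make_receiver(group: str, port: int) -> socket.socket:
    """Create a UDP socket subscribed to an IPv4 multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP takes a struct ip_mreq: group address + local interface
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def make_sender(ttl: int = 1) -> socket.socket:
    """Create a UDP socket for sending to a multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # TTL 1 keeps the datagram on the local network segment
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

# Usage sketch (requires a multicast-capable network):
#   rx = make_receiver(GROUP, PORT)
#   make_sender().sendto(b"hello, group", (GROUP, PORT))
#   print(rx.recvfrom(1024))
```

Note that one `sendto` reaches every subscribed receiver: the network, not the sender, fans the datagram out, which is exactly the efficiency argument the talk makes.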

Back

SCION

Home

Speaker Mateusz Kowalski
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time09:50 - 10:00
Event linkView original entry

Please note this is a lightning-fast version of our full talk taking place on Saturday at 18:00 in the Main Track



Do you know where your internet traffic flows? Does it go through China even if you don't want it to? SCION is a new internet architecture aimed at solving this problem. We will show how you can easily join the already existing worldwide network.

The current Internet was not designed with control and security considerations in mind: incidents such as the hijacking of all traffic for YouTube by a Pakistani ISP in February 2008, the Cloudflare DNS service hijacked by AnchNet in May 2018, or a large chunk of European mobile traffic being rerouted through China in June 2019 show that we cannot quite trust the current Internet. SCION is a proposed future Internet architecture aiming to offer high availability and security, even in the presence of actively malicious network operators and devices.



Designing a new Internet from scratch gives us the opportunity to make it work a lot better: we are aiming to notably improve security, availability, and performance. At the same time, just replacing the Internet would not be feasible, and thus we also emphasise practical concerns, such as incremental deployment and backwards compatibility. Thanks to that, SCION is currently the only clean-slate Internet architecture with a world-wide research network and production deployments in several large institutions in Switzerland; and you can start using it today.



In this lightning talk, we will briefly present the current state of SCION implementation, focusing on how it provides its most important features:






We will point you to resources showing how easy it is today for an end user to join the network and start using the available services through the worldwide test deployment, SCIONLab, which consists of around 50 points of presence around the globe, many of them connected via direct, BGP-free links.

Back

RFC 1984

Home

Speaker Esther Payne
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time10:00 - 10:30
Event linkView original entry

In 1996, Brian E. Carpenter of the IAB and Fred Baker of the IETF wrote a joint statement on cryptographic technology and the Internet. This RFC wasn't a request for a technical standard; it was a statement of their concerns about governments trying to restrict or interfere with cryptography. They felt there was a need to offer "all Internet users an adequate degree of privacy".



Since that time, successive governments around the world have sought to build back doors into encrypted apps and services to access more citizen and visitor data. In July 2019, the US Attorney General William Barr stated: "Some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety", i.e., for the sake of security, Americans should accept weakened encryption. The head of the FBI also claimed that weakening encryption wouldn't break it. At the moment the US government is actively trying to stop Facebook implementing end-to-end encryption across its suite of apps.



In Australia, the metadata retention laws have been abused against journalists, with 58 searches carried out by the AFP; in 2015, ACT police carried out 115 metadata searches. UK officials have a cavalier attitude towards the EU SIS database, which tracks undocumented migrants, missing people, stolen cars, and suspected criminals.



The EU isn't immune to this either, with France considering facial recognition for its government services.



IETF Session 105 mentioned privacy and concerns with the mass collection of data. While the IAB and IESG were worried about US export controls on cryptography there is an argument for RFC 1984 to be updated to include the unnecessary mass collection of data and to use it as a term for IT professionals, privacy advocates and the public to rally behind.



In this talk let's recount a brief history of governments around the world wanting to weaken encryption as RFC 1984 warned us about.



We live in a time when citizens put data into commercial, healthcare, and government systems to access services; some services are only accessible online. From CCTV to Facebook, people have little understanding of why mass collection of data is dangerous, and there is little scrutiny, from Scotland to the US, of who can access that data.



Open surveillance is only a small part of the picture when profiling citizens. It still counts as personal data when combined with metadata and the actual data that people put into social media and services like ancestry DNA test kits. Businesses that use CCTV have to put up signs warning the public that they are recording. So-called anonymized data still contains identifiers that can be tied to individuals.



Let's talk about Ovid and peacocks. Let's explore how to expand the RFC to cover recent developments in surveillance capitalism, with governments accessing that data but not securing it. We need to make it clear that weakened encryption and the mass collection and careless retention of data are not acceptable. RFC 1984 became a Best Current Practice in 2015; we need to do more to raise awareness and to implement it in our projects.

Why we need to implement RFC 1984:



"The Internet Architecture Board (IAB) and the Internet Engineering
Steering Group (IESG),[...] are concerned by the need for increased
protection of international commercial transactions on the Internet,
and by the need to offer all Internet users an adequate degree of
privacy.
"



I'd like to start by briefly mentioning Ovid and the legend of Io. Ovid was anti-authoritarian in the time of Augustus, having been exiled by the Emperor. He wrote The Metamorphoses, an epic poem about Greek myths with the theme of transformation. The myth of Io is often used as a metaphor for surveillance, with Io suffering restriction of liberty and abuse by authority. Being turned into a cow was bad enough; to make things worse, she was constantly watched by an agent of Hera, another authority: Argus (Argus Panoptes), the hundred-eyed giant. Argus is a great name for a security firm; in fact, there are quite a few firms that use an eye in their logo.



Pop culture, like Neil Gaiman's American Gods on Amazon, has also referenced this legend to show surveillance and how it conveys power to authority. A modern interpretation of the myth could argue that Hermes sending Argus to sleep in order to kill him is a good metaphor for opposing actors using exploits to subvert and disable surveillance and gain access to citizens' data. We focus more on Argus, the agent of surveillance, than on Io, who was violated, changed, and then incarcerated under surveillance against her will.



Argus Panoptes inspired the idea of the Panopticon, a building designed by the English philosopher Jeremy Bentham as a prison that could be observed by a single guard. Our Internet is in danger of becoming a virtual panopticon for future citizens. The EFF has already started thinking about this with Panopticlick, which lets you test who's tracking you through your browser. So who's watching us?



Of course, this explanation and the metaphor come from a Western perspective. Privacy doesn't mean the same thing in all countries and cultures. Neither does the symbolism of the peacock.



Many IT professionals treat RFCs as more like guidelines (see rfc-clueless.org). Popular email services like Me.com, Outlook.com, and even gmail.com have been listed on RFC Ignorant and then its successor, RFC Clueless. Sadly, the giants often ignore RFCs, which undermines the idea of interoperable standards and protocols and leaves us in danger of being at the mercy of large hosting giants.



A narrative has threaded through the media since that time: privacy is dead, and you need to give up that freedom to stay safe. Politicians like the UK Prime Minister David Cameron stated in 2015:



"In our country, do we want to allow a means of communication between people which even in extremis, with a signed warrant from the home secretary personally, we cannot read? Up until now, governments have said: 'No, we must not.'"



Malcolm Turnbull, the Australian Prime Minister, stated in 2017 that "the laws of Australia take precedence over the laws of mathematics."



With organizations like Palantir providing information to ICE to target illegal immigrants in the US, and the UK Home Office deliberately destroying data in the Windrush scandal, it's clear that human rights, specifically the right to privacy, are in danger. Recently the EU confirmed that UK Border Force officials had illegally copied Schengen SIS data to third-party organizations based in the US.



That's before I even start on repressive regimes where that data can and will be used to oppress citizens of that regime.



The recent IETF Session 105 mentioned privacy and concerns with the mass collection of data. While the IAB and IESG were worried about US export controls on cryptography, there is an argument for RFC 1984 to be updated to include the unnecessary mass collection of data and to use it as a term for IT professionals, privacy advocates, and the public to rally behind.



I propose a brief history of governments around the world wanting to weaken encryption, as RFC 1984 warned us about:



" The IAB and IESG are therefore disturbed to note that various
governments have actual or proposed policies on access to
cryptographic technology that either:



(a) impose restrictions by implementing export controls; and/or



(b) restrict commercial and private users to weak and inadequate mechanisms such as short cryptographic keys; and/or



(c) mandate that private decryption keys should be in the hands of the government or of some other third party; and/or



(d) prohibit the use of cryptology entirely, or permit it only to specially authorized organizations."



RFC 1984 was explicitly named to reference an Orwellian society that uses mass surveillance. Let's expand that beyond encryption to the mass collection of data and ask: how do we limit this? How do we limit access to this data? How do we stop the nightmare?

Back

Fixing healthcare data exchange with decentralized FOSS

Home

Speaker Steven van der Vegt
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time10:30 - 11:00
Event linkView original entry

In The Netherlands we have an interesting problem: in 2011, weeks before going live, the national electronic health record system was shut down by our senate, which decided not to intervene further and to let the market fix the problem. Now, nine years later, the market has made a mess of it: there is no uniform way of exchanging medical data in The Netherlands.
Architects write countless pages of solutions and the government pours millions into subsidised programs, but the problem only gets bigger.



So, two years ago, together with a group of other software vendors, we started a foundation called Nuts. The goal is to end this impasse by building an open source, decentralised infrastructure that nobody controls and everyone can use. It should be cheap to join, private by design, and rely on technology rather than lawyers.



Our infrastructure allows parties to exchange data "peer to peer", helping them solve only four generic problems: user identity, patient consent, discovery of endpoints, and logging.



In this talk I would like to show our architecture, explain the choices we made and what we have learned while working with distributed software, and share some anecdotes about what happens when you pitch such an idea to the establishment.






A little more background: I'm one of the main devs. The system is written mostly in Go, with some parts in Java. Every software vendor can spin up a node and join the network. Patient consents are distributed via a DLT (Corda) and are only stored on the nodes of vendors who already process the patient's data. No medical data flows through the system; Nuts is only used to connect parties and provide a level of trust. Identities are managed by a self-sovereign identity system called IRMA (irma.app), which is based on IBM's Idemix.

Back

AMENDMENT Open and federated identities with ID4me

Home

Speaker Vittorio Bertola
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time11:00 - 11:30
Event linkView original entry

Online identities are the cornerstone on which data-based capitalism is built, so Google, Facebook, and other OTTs are trying to dominate them and close them into silos. The ID4me platform extends OpenID Connect to create an open and federated architecture that allows any number of providers to interoperate, giving control back to users and a role to community service providers.



This talk was promoted from a lightning talk because, unfortunately, the talk "P2P how and Kademlia" by Kishan Sagathiya had to be cancelled due to administrative issues. It was moved from 09:10 to 11:00 and becomes a 30-minute lecture.

In recent years, the Internet has been increasingly centralized into the hands of GAFAM and other over-the-top companies that have built walled gardens in fields like messaging and social networks. More and more of these companies have user data monetization and targeted advertising as a core revenue stream; thus, tracking people across their Internet activities is necessary to their existence.



This is why they have also built closed identity systems that supply single-sign-on and very easy sign up for new websites and services, at the expense of privacy and user control. As managing hundreds of separate accounts is inconvenient and insecure, and as alternatives such as password managers are not easy enough for the average Internet user, clicking on “sign in with Google” or “sign in with Facebook” has become a very common choice.



We think that this is bad for the Internet in general, and thus we are creating a platform that allows anyone to provide identities, creating an open, public and federated single-sign-on and data management system. We are extending OpenID Connect just a bit, the bit that is necessary to break the silos and allow interoperability; and we are basing the system on the DNS, the widely available and already federated naming directory of the Internet.
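As an illustration of the DNS-based approach, the sketch below parses the kind of semicolon-separated key=value identity record that ID4me publishes in a TXT record under the user's domain. The record name, field names, and values used here are assumptions for illustration, drawn from public ID4me material, not a normative description of the specification:

```python
def parse_id4me_record(txt: str) -> dict:
    """Parse a semicolon-separated key=value identity record.

    Assumed example record shape (field names may differ in deployments):
      v=OID1;iss=auth.example.net;clp=agent.example.org
    where `iss` would name the identity authority (an OpenID Connect issuer)
    and `clp` the identity agent holding the user's profile data.
    """
    fields = dict(part.split("=", 1) for part in txt.split(";") if "=" in part)
    if fields.get("v") != "OID1":
        raise ValueError("not an ID4me identity record")
    return fields

# In a real client the TXT record would come from a (DNSSEC-validated) DNS
# lookup of the user's identifier; here we parse a literal example string.
record = parse_id4me_record("v=OID1;iss=auth.example.net;clp=agent.example.org")
print(record["iss"])  # → auth.example.net
```

The point of the design is visible even in this toy: because discovery goes through the DNS, any domain owner can point their identifier at any authority, which is what breaks the silo model.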



The talk will explain how the system works and encourage participation and contributions.

Back

GNUnet: A network protocol stack for building secure, distributed, and privacy-preserving applications

Home

Speaker Martin Schanzenbach
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time11:30 - 12:00
Event linkView original entry

In this talk we will give a brief introduction into the GNUnet peer-to-peer framework, its architecture and existing applications.
This includes details on the p2p overlay, double-ratchet messaging channels (CADET) the GNU Name System (GNS) and a selection of other applications and features.
We present the current status of the project, the roadmap as well as ways to participate and use GNUnet.

GNUnet is a new network protocol stack for building secure, distributed, and privacy-preserving applications. With strong roots in academic research, our goal is to replace the old insecure Internet protocol stack.



GNUnet is typically run as an overlay network on top of the existing Internet infrastructure forming the basis of a hybrid peer-to-peer mesh and relay backbone for applications to run on. It could just as well be run independently of the Internet, over dedicated radio and cable.



GNUnet is made for a free and open society: It's a self-organizing network and it is free software as in freedom. GNUnet puts you in control of your data. You determine which data to share with whom, and you're not pressured to accept compromises.

Back

Knocking Down the Nest

Home

Speaker Devan Carpenter
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time12:00 - 12:30
Event linkView original entry

More and more people are inviting corporate-controlled networked devices into their homes. Can we make truly "smart devices" which we control, and communicate with directly instead of through the cloud? We're building a privacy-preserving and peer-to-peer IoT platform: secushareBOX

Lightbulbs, thermostats, video cameras, maybe even toasters: people are putting all kinds of networked devices in their homes these days. The majority of these devices can only be controlled through proprietary, centralized, cloud-based services, with the data and metadata being ingested by the surveillance machine.



Let's build a better "Internet of Things".



secushareBOX is a peer-to-peer, privacy-preserving project for remote system management, including embedded devices, conceived as an alternative to the centralized pattern so common to IoT platforms. With secushareBOX you communicate directly with your devices and manage access control with your peers and other systems, as modeled in a social graph. Using GNUnet as our underlying p2p framework, we inherit an active network of nodes through which we can route our traffic in a manner that protects our metadata and uses end-to-end encryption for all connections.



This talk will introduce the ideals and concepts of the project, followed by a demo of the current state and a discussion of our future plans.

Back

Peer-to-peer collaboration, search & discovery

Home

Speaker TG x
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time12:30 - 13:00
Event linkView original entry

A data-centric, offline-first approach to decentralized collaborative application development focusing on data ownership and privacy.

Exploring replicated mergeable data structure stores as building blocks of decentralized applications that enable asynchronous collaboration and offline search. Combined with peer-to-peer gossip-based protocols that provide pub/sub, dissemination, and recommendation services, both over the internet and on local and mobile proximity networks, these form interest-based networks that facilitate discovery of personally relevant content and people.
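One common way to realise a "replicated mergeable data structure store" is with CRDTs, where any two replicas can merge in any order and still converge. The following last-writer-wins map is a minimal hypothetical sketch of that idea, not code from the project:

```python
from dataclasses import dataclass, field

@dataclass
class LWWMap:
    """A last-writer-wins map: a minimal mergeable replicated data structure.

    Each key stores (logical_timestamp, replica_id, value); merge keeps the
    entry with the highest (timestamp, replica_id) pair, so any two replicas
    converge regardless of merge order (merge is commutative and idempotent).
    """
    replica_id: str
    clock: int = 0
    entries: dict = field(default_factory=dict)

    def set(self, key, value):
        self.clock += 1
        self.entries[key] = (self.clock, self.replica_id, value)

    def get(self, key):
        entry = self.entries.get(key)
        return entry[2] if entry else None

    def merge(self, other: "LWWMap"):
        for key, entry in other.entries.items():
            if key not in self.entries or entry[:2] > self.entries[key][:2]:
                self.entries[key] = entry
        # advance the local clock past anything seen, so later writes win
        self.clock = max(self.clock,
                         max((e[0] for e in other.entries.values()), default=0))

# Two devices edit offline, then sync in either order and converge:
a, b = LWWMap("alice"), LWWMap("bob")
a.set("title", "Draft")
b.set("title", "Final"); b.set("tags", "p2p")
a.merge(b); b.merge(a)
assert a.entries == b.entries
print(a.get("title"), a.get("tags"))  # → Final p2p
```

Because merging needs no coordination, replicas can exchange their state opportunistically over gossip, on proximity networks or over the internet, which is exactly the offline-first collaboration pattern the abstract describes.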

Back

DAT protocol in the browser: Progress and Challenges

Home

Speaker Sam Macbeth
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time13:00 - 13:30
Event linkView original entry

Dweb protocols, like DAT and IPFS, promise significant benefits over the standard client-server protocols for web content. Particularly for self-hosting and -publishing, these protocols could reduce barriers to entry by eliminating server costs as well as promoting data ownership. Despite this, there has been no adoption of these protocols in mainstream browsers yet. This talk gives an overview of work to add native-like support for the DAT protocol to Gecko-based browsers. We discuss the limitations of the current WebExtension APIs in Chrome and Firefox for this purpose, and how Firefox's libdweb project improves on this. We present the dat-webext browser extension which implements DAT support in Firefox on Desktop and for Geckoview on Android.

This talk will cover the content of the following two blog posts, as well as more recent developments:




Back

An Introduction to the Tor Ecosystem for Developers

Home

Speaker Alexander Færøy
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time13:30 - 14:00
Event linkView original entry

Tor is a free and open-source anonymization system that allows people around the world to use the internet safely. The Tor network itself is operated by volunteer individuals and organizations around the globe; it carries around 200 Gbit/s of traffic and helps somewhere between 2,000,000 and 8,000,000 users every day. The Tor ecosystem is much larger than the anonymity system itself: The Tor Project, the non-profit behind it, also develops and maintains a web browser based on Mozilla Firefox, monitors the network, works on emerging anti-censorship technology, and works with translators and with the downstream distributions that package Tor for free software operating systems.



In this presentation, we will have a look at what it takes to develop and maintain an anonymity system like Tor and the various other components in the Tor ecosystem. We will look at what The Tor Project has been up to lately, primarily with a focus on core Tor itself. However, we will also have a look at some of our recent developments with anti-censorship technology. Finally, we will have a look at how the participant can contribute to the Tor project.



No prior knowledge of Tor is necessary to participate.

Back

OpenPush

Home

Speaker Marcus Hoffmann
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time14:00 - 14:30
Event linkView original entry

Push messages are an essential part of connected mobile devices. They are also one of the critical missing pieces in the open source Android ecosystem.
Until now, free Android apps have had to either implement their own push notification system, do without push messaging, or use the proprietary Google Cloud Messaging service.
In this talk I will introduce OpenPush, a free and open source, self-hosted and decentralized replacement for Google Cloud Messaging.

We expect both long battery life and instant notifications from our mobile devices.
When implementing your own mobile push functionality, you can usually optimize for only one of these goals.
This is especially true if the user runs multiple applications, each with its own persistent connection for push notifications.
To combat the battery drain associated with maintaining multiple connections, Google introduced the Google Cloud Messaging (GCM) framework, which has recently become Firebase Cloud Messaging (FCM). FCM relies on the proprietary Google Play Services framework being available on the Android device. Using FCM also requires including the proprietary FCM client library in open source Android apps like Signal, Wire or even Firefox, which makes them effectively non-free software that cannot be distributed via the fully free F-Droid repository.
Additionally, all push notifications delivered via FCM pass through Google's servers, leaving a metadata trace, even if the notification is an empty wakeup event or the message content is encrypted.



Decentralized, self-hosted systems like Matrix, Nextcloud, or Rocket.Chat currently still depend on Google's infrastructure and terms of service for delivering push notifications.



In this talk I'll present a self-hosted, free alternative push messaging implementation that can run either alongside FCM or as a replacement for it.
The talk will give a general architecture overview and walk through the design and implementation challenges of a push messaging service.



Further, I'll present how OpenPush can be used by different projects and discuss some additional ideas on how the wider ecosystem could look in the future.
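To illustrate the multiplexing idea behind a unified push service, here is a hypothetical in-memory sketch: one broker queue stands in for the single persistent connection per device, shared by every registered app. The class and method names are invented for illustration and do not reflect OpenPush's actual API or wire protocol:

```python
from collections import defaultdict
from queue import Queue

class PushBroker:
    """Toy push broker: one queue (standing in for one persistent
    connection) per device, shared by all registered apps."""

    def __init__(self):
        self.devices = {}                      # device_id -> Queue
        self.registrations = defaultdict(set)  # app_id -> set of device_ids

    def connect(self, device_id):
        self.devices[device_id] = Queue()

    def register(self, app_id, device_id):
        self.registrations[app_id].add(device_id)

    def push(self, app_id, payload):
        # The broker never inspects payloads, so apps can end-to-end
        # encrypt them; only routing metadata is visible here.
        for device_id in self.registrations[app_id]:
            self.devices[device_id].put((app_id, payload))

    def poll(self, device_id):
        q = self.devices[device_id]
        items = []
        while not q.empty():
            items.append(q.get())
        return items

broker = PushBroker()
broker.connect("phone-1")
broker.register("chat-app", "phone-1")
broker.register("mail-app", "phone-1")
broker.push("chat-app", b"ciphertext-1")
broker.push("mail-app", b"ciphertext-2")
print(broker.poll("phone-1"))
```

The battery argument from the abstract falls out of this shape: however many apps register, the device maintains one connection to one broker, which the user can self-host instead of relying on Google's servers.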

Back

The Path to Peer-to-Peer Matrix

Home

Speaker Matthew Hodgson
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time14:30 - 15:00
Event linkView original entry

Matrix is an open source project run by the non-profit Matrix.org Foundation, dedicated to building an open protocol and communication network for decentralised, encrypted communication, providing a viable open alternative to WhatsApp, Slack, Discord, and other proprietary communication silos. In this talk we will show off the work we've been doing over the last year to shift Matrix from a decentralised-server architecture to a fully decentralised-client p2p architecture, by running clientside homeservers and experimenting with libp2p and friends as a p2p transport. We'll also show the route we'll follow over the coming year from proof-of-concept to the live Matrix network.

Traditionally, Matrix decentralises communication by replicating conversation history over a mesh of servers, so that no single server owns a given conversation. Meanwhile, users connect to their homeserver from clients via plain HTTPS + DNS. This has the significant disadvantage that for users to have full control and ownership over their communication, they need to run their own server, which comes with a cost and requires them to be proficient sysadmins. In order to fully democratise communication and eliminate the compulsory dependency on a homeserver, we've started seriously working on making Matrix run as a P2P protocol: compiling homeservers to run clientside and using P2P transports such as libp2p, while seamlessly supporting all existing Matrix clients (e.g. Riot.im), bots, and bridges with negligible changes. This work includes:






In this talk we'll show off our progress so far, and lay out the path forwards over the coming year as we go from proof-of-concept to the live Matrix network.
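As a toy illustration of replicating conversation history across servers, the sketch below merges per-server copies of a room's events into one consistent timeline. This is an invented stand-in to convey the idea, not Matrix's actual event DAG or state-resolution algorithm:

```python
def merge_timeline(*replicas):
    """Merge per-server copies of a room's events into one timeline.

    Each event is (event_id, depth, body); copies may overlap. Take the
    union of the events, then order by (depth, event_id) so every server
    derives the same view from the same set of events.
    """
    seen = {}
    for events in replicas:
        for event in events:
            seen[event[0]] = event          # dedupe by event id
    return sorted(seen.values(), key=lambda e: (e[1], e[0]))

# Two servers hold overlapping copies of the conversation:
server_a = [("$1", 1, "hello"), ("$2", 2, "hi")]
server_b = [("$2", 2, "hi"), ("$3", 2, "hey")]
merged = merge_timeline(server_a, server_b)
print([e[0] for e in merged])  # → ['$1', '$2', '$3']
```

Because the merge depends only on the set of events and not on the order servers are consulted, no single server owns the conversation, which is the property the p2p work preserves when the "server" moves into the client.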

Back

Building a Web App that Doesn’t Trust the Server

Home

Speaker Daniel Huigens
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time15:00 - 15:30
Event linkView original entry

How do you know WhatsApp Web isn’t spying on your messages, despite the end-to-end encryption? Why did Signal decide to build a desktop application instead of a web app?



Open Source clients are a necessary, but unfortunately not sufficient, requirement for guaranteeing privacy. This talk explores two other issues: how to securely deliver that source code, and how to securely deliver the encryption keys that users use to communicate. It also presents our proposed solutions to these problems.

At ProtonMail, we’re aiming to build a web application that gives users the guarantee that we are physically unable to read their email, even if we wanted to. This comes with a set of challenges: how can the user trust the source code that comes from the server (without reading it each time), and how can the user trust the public keys that they receive (without hosting key signing parties, however fun they may be :)).



We currently support self-hosting, and manual key verification and pinning as solutions to these issues, respectively. However, these are highly manual solutions. This talk will present two projects we’ve been working on to provide privacy guarantees without requiring any action: Source Code Transparency and Key Transparency.
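Manual key pinning, one of the interim solutions mentioned above, boils down to storing a fingerprint of a contact's public key on first contact and comparing it on every later session. The helper below is a hypothetical illustration using only Python's standard library, not ProtonMail's implementation:

```python
import hashlib
import hmac

def fingerprint(public_key_bytes: bytes) -> str:
    """Hex SHA-256 fingerprint of a serialized public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def verify_pin(public_key_bytes: bytes, pinned_fingerprint: str) -> bool:
    """Accept the key only if it matches the previously pinned fingerprint.

    hmac.compare_digest compares in constant time, avoiding leaking the
    position of a mismatch via timing.
    """
    return hmac.compare_digest(fingerprint(public_key_bytes), pinned_fingerprint)

# First contact: pin the fingerprint. Later sessions: verify against it.
key = b"-----BEGIN PGP PUBLIC KEY BLOCK----- ...example..."
pin = fingerprint(key)
assert verify_pin(key, pin)
assert not verify_pin(b"a different key served by a malicious server", pin)
```

The limitation the talk addresses is visible here: pinning only protects users who performed the manual first-contact step, which is what an automatic Key Transparency scheme is meant to replace.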



Finally, we’ll also briefly discuss what kind of APIs we could add to browsers to make it easier to develop web apps that don’t trust the server.

Back

MaadiX, your cloud in your hands

Home

Speaker Maddish Falzoni (MaadiX)
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time15:30 - 16:00
Event linkView original entry

Avoiding repressive surveillance, circumventing censorship and protecting privacy can become a complicated and costly challenge. Many of the available alternatives do not completely solve the problems of trust, centralization of information and dependency on whoever is offering the services to us. Initiatives that offer alternative tools often become targets of censorship and repressive surveillance. Others do not include all the services the community needs, or require a minimum of technical knowledge, forcing organizations to keep using applications offered by third parties or to forgo them altogether.



MaadiX is a solution that reverses this imbalance in favor of users, giving them back control over their communications and data, as well as over all the applications they need to process them, and facilitating the adoption and maintenance of server-side, privacy-oriented, secure, censorship-circumventing technologies.

MaadiX provides one-click installation of advanced free and open source applications, such as an email server, Mailman 3, OpenVPN, Rocket.Chat, ownCloud, Nextcloud, OnlyOffice and LibreOffice Online, among many others, on remote or local private servers, without needing access to their system and data, and providing updates and technical support.



MaadiX acts as a repository of 'recipes' that provides all the instructions and commands needed to automatically install and configure applications from a graphical interface.



The catalogs are served through Puppet modules, but we've changed the way these technologies work, improving them in order to avoid creating yet another centralized Google-like model and to avoid having access to users' systems and data.



MaadiX has been reviewed by external security auditors. We would like to discuss with the audience how to better balance security and usability, and to share how MaadiX works, as well as which communities are around the project or using it.

Back

Decentralizing OAuth2.0 in a post-GDPR world for full privacy and portability

Home

Speaker Mehdi Medjaoui
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time16:00 - 16:30
Event linkView original entry

Users want their data back, along with the ability to transfer it the way they want to the platform they want. This is users' freedom in a digital world. Today, because of the design of current authorization protocols/frameworks like OAuth2.0, power is concentrated in the identity providers, who decide which applications are allowed to access their APIs, and the user has no say in it. New regulations like GDPR have appeared to enforce this freedom for users by law, but there is not yet tooling for developers to make GDPR data ownership and GDPR data portability happen in practice.
To really decentralize data permissions away from platform control, put users in control of their privacy and make companies GDPR compliant, the OAuth2.0 dance now needs to be updated into a stateless flow, and GDPR authorizations and agreements need to be tokenized to make them programmable for developers.
In this talk, Mehdi will explain how you can use open source technologies to automate GDPR requests for your users, build APIs on top of GDPR takeouts, export GDPR third-party user data into your system, and tokenize your GDPR agreements to make them programmable for compliance.

Making GDPR programmable and adding decentralization of data portability to OAuth2.0



In the classic OAuth 2.0 flows, the authorization server and the resource server are behind the same firewall, giving full power and control over sharing capabilities to the Identity Provider (e.g. Facebook, Amazon, Google). The Identity Provider decides what can be shared with whom via its API, and the user is limited to exporting whatever data the Identity Provider allows.
Because of new regulations about data portability (GDPR in Europe and CCPA in California), every user can now request a full export of their data to be stored anywhere, breaking the Identity Providers' monopoly and control. In that context, users can now fully own a copy of their data and share it with whomever they want. They can become theoretically independent from their previous Identity Providers, either by becoming their own Identity Provider if they are able to set up a server themselves, or by choosing the Identity Provider that delivers the best value in managing their personal data and permissions.
As we have seen with Bitcoin, a large majority of users will still want to delegate authorizations to a trusted third party to manage permissions, as they do today with banks for their money, or with wallet managers for their Bitcoins/cryptocurrencies. In the Alias protocol ecosystem, users decide where their data is stored (on the server of their choice) and choose the Alias authorization server that will manage its permissions.



Introducing Alias protocol



Alias is a protocol enabling decentralized data export authorizations. When implemented, Alias enables users to decide to share the data they want, with whom they want, without limitations from any centralized Identity Provider, and with fine-grained control.
Technically, Alias is a decentralized protocol based on OAuth 2.0, where each user, identified by a cryptographic alias, can let third parties ("clients") access their data stored in servers ("resource servers"). Access to the data is controlled by an authorization server that manages permissions and scopes. The main innovation of Alias is that the resource server and the authorization server do not need to be behind the same firewall, enabling users to freely decide, in full control, who stores their data and who manages permissions in a decentralized way.

Back

Who will Decentralise the Fediverse?

Home

Speaker kyzh
RoomUA2.220 (Guillissen)
TrackDecentralized Internet and Privacy
Time16:30 - 17:00
Event linkView original entry

The promise of the internet has not been kept. In mainstream tech and open source alike, social media technology has failed a lot of people, who often face surveillance and abuse instead of valuable human interaction, or technology for technology's sake.



Software like Mastodon has signaled a significant step forward towards a vision of how we can take existing social media and distribute power so that people can benefit.
In many respects, the experience is still not ideal; this talk highlights some of the key points that can make or break the Fediverse.

The talk will be split into three parts:
Mostly looking at the past and the Fediverse's history.
The present and its set of challenges.
The future, and some proposals on how to overcome those challenges.
Questions.



Past:



The success of Mastodon and other AP-compatible software brought a lot of people from different spheres together, and with that diversity the network took off.



Free and Open Source enthusiasts, activists, hacktivists, sex workers, G+/Tumblr/Twitter refugees, communities of interest (tabletop, craft, parenthood, etc.) and people who simply wanted a more personal place to socialise all mingled together and really pushed the envelope of what a social network can be.



Within this mix of people and interests, the most marginalized people challenged the status quo and got us better tools than we ever had, like post visibility, content warnings and image descriptions, as standard ways of communicating.



Forward to today:



There are a few major obstacles to the fediverse operating well:
* A naive vision of moderation and hostile actors constantly put community moderators on the back foot. Gab and Kiwi Farms showed us that admins need to be constantly on the lookout if they want to protect their communities.






Future






Questions



Questions from the audience.

Back

Tesselle image viewer

Home

Speaker Arnaud Pichon
RoomUB2.147
TrackOpen Media
Time09:30 - 09:55
Event linkView original entry

Tesselle is an open source image viewer allowing anyone to open, annotate and share big images on the web. It is part of the "Quinoa" project family, a suite of digital storytelling tools tailored for the FORCCAST teaching program and the scientific activities of Sciences Po's médialab.

Tesselle is a tool for annotating and publishing large, very large, huge images!
To scale to high-resolution images, it embeds a tiling feature to seamlessly display and navigate them on the web.
It lets you comment on specific portions of photographs, maps or visualizations, and explore and analyse visual items in detail and with precision.
Furthermore, Tesselle allows you to export your work as a simple folder to publish anywhere on the web.
These features have been built to allow scholars to create stories by crafting annotations on cartographic maps, artwork analyses, network visualizations... Using images opens up many use cases.



It is built using Typescript, React and Leaflet.
As a standalone serverless webapp, Tesselle has run into some pitfalls:
- What is the fastest way to tile an image on the front-end?
- Are there enough tools in a browser or shall we bring WebAssembly in?
- How to handle memory management when dealing with hundreds of tiles?
- Can we beat a "native" image viewer?
- How should we handle sharing and embedding?
- Is IIIF an appropriate standard?
- What are our limitations?
Those are some of the questions we had / have to answer while building Tesselle.



https://github.com/medialab/tesselle/

Back

The Rise and Fall and Rise of JPEG2000

Home

Speaker Aaron Boxer
RoomUB2.147
TrackOpen Media
Time10:00 - 10:25
Event linkView original entry

JPEG 2000 was developed to replace the very successful JPEG standard, but it has instead remained a niche codec. With recent updates to the standard speeding up decoding by 10x, is world domination around the corner? This talk will describe many of the sophisticated features that JPEG 2000 offers, and discuss why a 20-year-old standard may be the codec of the future.

Released in 2000 as a potential replacement for the wildly successful JPEG standard, JPEG 2000 is a versatile codec with many sophisticated features including:



Superior compression at low bit rates
Storage of multiple resolutions in a single bitstream
Precise rate control without re-compression
Lossy and lossless compression
Progression by resolution, component, spatial region or quality


It is an essential codec in medical imaging, digital cinema and remote sensing. However, due to its high complexity,
it has remained a niche codec that never gained the popularity of its predecessor.



All of this is about to change with the recently released High Throughput JPEG 2000 standard that speeds up the codec by up to 10x,
while leaving almost all of its features intact. This will propel it into the mainstream, particularly in broadcast and digital cinema.



I will talk about the history of JPEG 2000, give an overview of its features and discuss the upcoming changes.
I will also talk about current and planned GStreamer support for JPEG 2000.

Back

Rendering QML to make videos in Kdenlive

Home

Speaker Akhil Gangadharan Kurungadathil
RoomUB2.147
TrackOpen Media
Time10:30 - 10:55
Event linkView original entry

How QML, a language prominently used for designing UIs, can be used to create title video clips containing text and/or images, which can then be rendered and composited over videos in the video editing process. Kdenlive's Google Summer of Code 2019 project tried to achieve this, and the work is still under active development.

QML is used primarily for UI development in Qt applications, providing an easy way of designing and creating interactive, clean and modern UIs. Kdenlive is a popular open source non-linear video editor, and it currently makes use of XML to describe a title clip (a clip containing text or images that is composited over videos). XML requires more processing in the backend, as one needs to explicitly write code for, say, an animation of the text. Using QML eases this restriction, making the backend more robust and maintainable, as rendering QML makes use of a dedicated Qt Scene Graph. Kdenlive's Google Summer of Code 2019 student tried to achieve this by creating a new rendering backend library and a new MLT QML producer, which is still under active development. Owing to the dedicated scene graph used while rendering, this could also possibly lead to greater overall performance.

Back

GStreamer on the Magic Leap One

Home

Speaker Xavier Claessens
RoomUB2.147
TrackOpen Media
Time11:00 - 11:25
Event linkView original entry

The Magic Leap One is a pair of augmented reality glasses. Let's run an open source browser (Mozilla Servo) on it using the GStreamer multimedia framework.

The Magic Leap One device runs a custom OS called LuminOS, derived from Android with Java stripped out. Servo is Mozilla's browser written in Rust that uses GStreamer to render multimedia content.



Presentation of the challenges and solutions found to make it happen:
- GStreamer's Meson build system.
- Stagefright-ish API in the SDK.
- Completely new audio API, with 3D space localization.
- OpenGL rendering, including stereoscopic SBS.
- It's now all upstream!

Back

GPAC 1.0 Overview

Home

Speaker Jean Le Feuvre
RoomUB2.147
TrackOpen Media
Time11:30 - 11:55
Event linkView original entry

In this talk, we present the next release of GPAC, the complete rearchitecture of its streaming core, the many new features and possibilities of the multimedia framework. Get ready for a lot of OTT/IP streaming and broadcast, encryption, packaging and media composition!

Back

IMSC Open Source Projects

Home

Speaker Andreas Tai
RoomUB2.147
TrackOpen Media
Time12:00 - 12:25
Event linkView original entry

IMSC is the Internet Media Subtitle and Caption Profile of the W3C Timed Text Markup Languages. The presentation will show how to combine different open source tools to create, render and validate IMSC subtitles. The focus will be on an open-source editor for IMSC.

Back

Which video network streaming protocol should I use?

Home

Speaker Olivier Crête
RoomUB2.147
TrackOpen Media
Time12:30 - 12:55
Event linkView original entry

Open source stacks such as GStreamer, FFmpeg and Upipe now implement a large number of different ways to stream audio and video over a network. Just to name a few, there are RTSP, SRT, RIST, WebRTC, HLS, DASH, AES67, SmoothStreaming and RTMP! Some are for local networks and some target the Internet; depending on the use case, these protocols have different upsides and downsides. To create a successful project, one needs to select the best-suited technology. I'll go over the various protocols and explain how they relate to each other, along with their individual advantages and inconveniences.

Back

FOSS in Animation

Home

Speaker Frank Rousseau
RoomUB2.147
TrackOpen Media
Time13:00 - 13:25
Event linkView original entry

The Animation industry has always been ruled by proprietary software, mainly from Autodesk, Adobe and The Foundry. But recently we have noticed a rise of interest in software like Blender or Krita. Alongside them, initiatives like the Academy Software Foundation are popping up. Last but not least, more and more studios publish the sources of their in-house software. During this conference, we'll explain how a typical production pipeline works. Then, we'll discuss how open source impacts animation productions and what we can expect for the future. As a conclusion, I'll explain how studios increasingly collaborate through free and open source software.

Back

dav1d: 1 year later

Home

Speaker Jean-Baptiste Kempf
RoomUB2.147
TrackOpen Media
Time13:30 - 13:55
Event linkView original entry

dav1d is an open source decoder for the AV1 format, focused on being fast and lean.

It was started a bit more than one year ago. This talk will look at where the project is now, how fast we have made AV1 decoding, and what is left to do on this project.

Back

rav1e - 0.3.0 and after

Home

Speaker Luca Barbato
RoomUB2.147
TrackOpen Media
Time14:00 - 14:25
Event linkView original entry

rav1e is an open source AV1 encoder.



We'll see what makes it fairly unique, besides the choice of Rust as its main development language.

We'll see what we did in the past releases, what design choices we took and what we plan to do in the next two releases.



By February we will have the release 0.2.0 and the release 0.3.0 out. I'll present what's coming in the release 0.4.0 and 0.5.0.



This will include some performance evaluation and a description of some of the features that are currently unique to rav1e.

Back

Spleeter by Deezer

Home

Speaker Anis Khlif
Félix Voituret
RoomUB2.147
TrackOpen Media
Time14:30 - 14:55
Event linkView original entry

Source separation, stem separation and de-mixing are different ways of referring to the same problem: recovering the mono-instrument tracks that were mixed together to produce a music file. Recently, the research team at Deezer released free and open source software, as well as trained models, to perform multi-source separation of music with state-of-the-art accuracy.
In this presentation we look back on our journey to open sourcing the Spleeter library, from doing the ground research and training the models to releasing them. We put emphasis on the technological challenges that had to be solved, as well as the practical and legal considerations that came into play.

Released on October 29th, 2019, the Spleeter (https://github.com/deezer/spleeter) GitHub repository received more than 5000 stars in its first week online, and plenty of positive feedback as well as press coverage. This talk will explain how we went from research code to this fairly easy-to-use open Python library, which integrates pre-trained models for inference and re-training.



While not a broadly known topic, the problem of source separation has interested a large community of music signal researchers for a couple of decades now. It starts from a simple observation: music recordings are usually a mix of several individual instrument tracks (lead vocal, drums, bass, piano, etc.). The task of music source separation is: given a mix, can we recover these separate tracks (sometimes called stems)? This has many potential applications: think remixes, upmixing, active listening and educational purposes, but also pre-processing for other tasks such as transcription.



The current state-of-the-art systems are starting to give convincing results on very wide catalogs of tracks, but the possibility of training such models remains largely bound by training data availability. In the case of copyrighted material like music, getting access to enough data is a pain point, and a source of inequality between research teams. Besides, an essential feature of good scientific research is that it must be reproducible by others. For these reasons, and to level the playing field, we decided to release not only the code but also our models, pretrained on a carefully crafted in-house dataset.



Specific topics on which our presentation will dwell are:
- technical aspects of the models architecture and training
- software design, and how to leverage tensorflow’s API in a user facing python library
- how to package and version a code that leverages pre-trained models and that can be run on different architectures: CPU and GPU.
- licensing and legal concerns
- what we learned along the way
- legacy

Back

Functional audio and video stream generation with Liquidsoap

Home

Speaker Romain Beauxis
RoomUB2.147
TrackOpen Media
Time15:00 - 15:25
Event linkView original entry

The talk will give a general overview of the Liquidsoap language, and put focus on recent new features: support for HLS, efficient video, etc.

Generating multimedia streams, such as for webradios or live YouTube channels, is a complicated task. One needs to face low-level issues (properly encoding and distributing the streams), mid-level issues (normalization, signal processing, color grading, etc.) and high-level issues such as generating the stream from a wide variety of sources (local files, other streams, live interventions, user requests, etc.) and properly combining them (performing transitions, adding commercials, varying the contents during the day, etc.). In this talk, we present Liquidsoap, a dedicated high-level functional language, which allows performing all these tasks in a modular way, with strong guarantees that the stream will not fail in the long run.

Back

Building an Open-Source based audio streaming platform

Home

Speaker Maxime Bugeia
RoomUB2.147
TrackOpen Media
Time15:30 - 15:55
Event linkView original entry

Learn how Radiofrance leverages open source software to transport, encode, deliver and monitor audio streams in the cloud. You will get a global infrastructure overview of a platform that serves audio streams at scale.

What we will talk about:
- How we chose our audio streaming protocols (HLS and Icecast).
- How we transport audio from the corporate IT systems to the cloud with the SRT protocol.
- How we use Liquidsoap as a streaming server to implement high-availability logic and to encode and mux our streams.
- How we monitor our system with Prometheus and Grafana.
- How to scale an audio streaming platform to 80 radios and 200K+ concurrent listeners.

Back

The moldability of mpv

Home

Speaker David Heijkamp
RoomUB2.147
TrackOpen Media
Time16:00 - 16:15
Event linkView original entry

Many museums around the world use commercial closed source solutions to present media and interact with their visitors. Biodiversity museum Naturalis has decided to present most of its interactive content using open source software. In this presentation we talk about our experiences with mpv as the go-to tool for a diverse selection of media setups in our brand new museum, and how we made it work with show controllers, Arduino devices, and open source content and config management tools.

Back

Getting Your Virtual Hands On RIST

Home

Speaker Sergio Ammirata
RoomUB2.147
TrackOpen Media
Time16:20 - 17:00
Event linkView original entry

There are a number of error correction protocols that provide backwards error correction. These are commonly used to transport media streams from remotes to the content provider, or from the content provider to distribution. They allow, for example, streams from a pro basketball game to be transported over the public Internet from stadium to network NOC without error; or, as another example, packages of ethnic TV channels to be moved from continent to continent. Players include DVEO, which uses the proprietary Dozer protocol for which the speaker holds the patent; WOWZA, which uses a customized SRT based on open source; and a few more. They all work on the principle of shooting off a bunch of UDP packets from one IP to another, setting up a buffer, and then using an automatic re-request mechanism to ask for re-sends of lost or corrupted UDP packets. RIST was designed with the participation of several vendors to bring some of the features normally found in proprietary error correction protocols into the free and open source world. It may even become a "lingua franca" between vendors. VLC, Upipe and GStreamer can already reassemble and play back RIST-transported streams. We will talk about a new open source project that provides an easy-to-use library for RIST, and we'll discuss two pre-packaged images we've made available for AWS, Azure, VMware and KVM. With these images, you can send a RIST encoded stream from cloud to end user viewer, or from cloud to cloud.

Back

The State of Go

Home

Speaker Francesc Campoy
Maartje Eyskens
RoomUB2.252A (Lameere)
TrackGo
Time09:00 - 09:30
Event linkView original entry

Go 1.14 is planned to be released in February 2020 and this talk covers what's coming up in it.



We'll talk about new features and fixes in Go, and new proposals for Go 2: all of the new things you might have missed.

This has been a staple talk of the Go devroom, opening the stage every year, and has always been a successful one.

Back

Functional Programming with Go

Home

Speaker Dylan Meeus
RoomUB2.252A (Lameere)
TrackGo
Time09:30 - 10:00
Event linkView original entry

Are you tired of seeing Object Oriented code everywhere, with mutations, side-effects and headaches? Luckily, writing Go does not have to be that way! Functional programming is perfectly possible within Go, hence we can leverage FP techniques to make our code more robust, testable and fun.

Functional Programming in Go
Go is a multi-paradigm language, yet most code you encounter ‘in the wild’ is written in a mostly object-oriented way. However, Go also allows us to write code in a functional way, which offers certain advantages over a more traditional “OO” approach. What follows in this description is also the general flow of how it will be presented.



What is FP?
First we’ll start by defining what we mean by functional programming. Superficially, Go might not look like what you expect a functional language to look like. But we’re really just missing the syntactic sugar here, as a lot of the underlying concepts that are central to functional programming are reflected in Go. Hence it is important to take a look at what many programmers consider requirements for being “functional”, such as higher-order functions, recursion (with tail-call optimization), purity and idempotence, and how these requirements are met (or aren’t met) by Go.



How to leverage them in Go
Once we have convinced ourselves that Go gives us the building blocks for writing FP code, we’ll dive into some concrete examples of what we can do with Go. We’ll look at function currying, continuation-passing style programming, pure functions, recursion (without TCO), monads and more. It’s important to highlight here why we want to use these constructions, and when. In the best case, you’ll learn how to leverage them in your own codebase. At worst, you’ll have seen some cool things with Go. Don’t be put off if you don’t know these terms yet; we’ll start with the easy concepts you’ve probably already used, such as recursion, and work our way up to the more complex ones.



Using libraries
Go actually has libraries that provide an API for programming in a more functional style. We’ll give them an honorable mention but they won’t be the focus of the talk, as you can get started easily without them. But, they do offer certain things we “miss” in Go by default (like Generics).



Benefits
Writing Go code in this style has numerous benefits over our traditional approach. My goal in this talk is not just to show you cool things you can do with Go, but also why you would want to use them. You’ll also see that introducing them to an existing codebase is easy, and that FP is really not as scary as it might sound!



Downsides
Using this style of programming is not entirely a walk in the park. There’s a price to be paid for writing functional code in Go; the main one is that you’ll take a performance hit. But the performance hit might not be where you expect it! Functional programming is one tool in your toolbox: it’ll greatly empower you to solve certain problems, while it’ll help you shoot yourself in the foot on others.



Bonus benefits!
Yes, you’ll even take away something from this talk you might not have expected! A lot of people think of Haskell when they hear “functional programming”, which might have scared them away from it. In this talk you’ll get a look at functional programming with a familiar syntax and a language you already love. This will help you understand the underlying concepts and see how they relate to Haskell and other functional languages, where the syntax might be a bit different but the idea remains the same.



Do I need prior knowledge of FP?
No, absolutely not! You don’t need to have done functional programming before to benefit from this talk. There are concepts for all levels of understanding of functional programming. If you don’t know anything about functional programming yet, you’ll discover it in this talk. And if you’re already an FP-wizard who dreams in Haskell, you’ll learn how to transfer that understanding to Go.

Back

Porting Go to NetBSD/arm64

Home

Speaker Maya Rashish
RoomUB2.252A (Lameere)
TrackGo
Time10:00 - 10:30
Event linkView original entry

An introduction to calling conventions, thread-local storage, signal handling and how they relate to Go, in the context of my new port of Go to NetBSD/arm64.

Running a weird operating system comes at the cost of having to adjust software to run on it.
Go is probably one of the hardest projects to adjust.
Doing so required learning the guts of:
- ARM64 calling conventions
- signal handling
- thread-local storage
- a lot of Go-specifics



All of these will be discussed in this lecture.

Back

Build real-world gaming hardware with TinyGo

Home

Speaker Daniel Esteban
RoomUB2.252A (Lameere)
TrackGo
Time10:30 - 11:00
Event linkView original entry

Learn about the many fun possibilities of using Go on microcontrollers like Arduino to make gaming-related hardware.

After a brief overview of the new features of TinyGo (http://tinygo.org), we'll move on to some cool and easy-to-make smart toys that run Go. From classic PONG and a "Simon says" device to a pocket gaming console, and some other surprises. We'll end with an Open LED Race competition and the chance to win hardware and make your own TinyGo device.



This talk will feature bright lights and sounds, maybe lasers too.

Back

Diversity, Finally

Home

Speaker Ronna Steinberg
RoomUB2.252A (Lameere)
TrackGo
Time11:00 - 11:30
Event linkView original entry

What if we decided to solve, once and for all, the problem of underrepresentation in the Go community of women, gender minorities, people of color, or any other group the same way we handle our problems in production, by identifying "bugs" and then fixing them? Can it even be done? What if we took the engineering approach? Ronna is planning to convince you it's not a matter of if, but a matter of how, and we are going to analyze some of the statistics, find where the problems actually lie, and build a Trello card full of achievable tasks to address them.

Back

From Go to Kubernetes CRDs and Back

Home

Speaker Stefan Schimanski
RoomUB2.252A (Lameere)
TrackGo
Time11:30 - 12:00
Event linkView original entry

Kubernetes is built using Golang. CustomResourceDefinitions are the primary extension points for bringing custom data into a Kubernetes cluster. This hands-on talk is about the workflow of API definitions in Golang, generation of OpenAPI schemas as part of the CRD, client and informer generation and how to use these to process data in real-time using logic implemented in Golang controllers.

This hands-on talk is an introduction to the creation of CustomResource based API extensions for Kubernetes clusters. Following an example project we will






The talk does not require any knowledge of Kubernetes, just some Golang experience for understanding API type definitions.

Back

Deterministic debugging with Delve

Home

Speaker Derek Parker
RoomUB2.252A (Lameere)
TrackGo
Time12:00 - 12:30
Event linkView original entry

In this talk I will begin by delivering the "State of Delve" update. In similar fashion to the popular "State of Go" talk, I will discuss all of the exciting new features and changes that have happened over the past year, since last FOSDEM. Following that, I will give a live demo showcasing how Delve can leverage Mozilla rr to perform deterministic debugging: I will introduce the concept and the power that comes with it, then debug an otherwise unpredictable, highly concurrent program. Attendees will walk away with immediate practical knowledge they can apply in their day-to-day debugging.

It's been a year since last FOSDEM and a lot has changed with Delve! I will discuss all the new features and changes that have been implemented in the last year.



Following the "State of Delve" introduction, I will dig into how Delve can be utilized to perform deterministic debugging. This style of debugging enables users to record the execution of their process and "play it back" in a deterministic fashion in order to more quickly and efficiently perform root cause analysis on a bug that may otherwise be difficult to reproduce or track down. This section of the talk will begin by introducing the concept of deterministic debugging and why it is so useful and powerful. Once everyone is familiar with the concept I will launch into a live demo showcasing how to leverage this debugging approach to track down and fix a bug which is hard to reproduce and happens only intermittently.



Attendees will walk away with practical knowledge that they can begin applying to their debugging problems immediately.

Back

Classify things in Go: the easy way.

Home

Speaker Sheimy Rahman
RoomUB2.252A (Lameere)
TrackGo
Time12:30 - 13:00
Event linkView original entry

Go and publicly available trained models offer great potential: a fast way to build "eyes around the world", also known as classifiers. And with great power come great opportunities, such as building fantastic applications that make our world a better place through technology, in just a few steps.

Go and publicly available trained models offer great potential: a fast way to build "eyes around the world", also known as classifiers. And with great power come great opportunities, such as building fantastic applications that make our world a better place through technology.
The Go language has the GoCV package, which provides bindings to the most modern and advanced computer vision libraries, such as OpenCV.
In this talk, I'll demonstrate how to use public models from TensorFlow Hub and the OpenCV library to easily build classifiers for APIs, taking a super leap from draft to a working classifier in a few steps!
The idea is to demystify the concepts behind classifiers, show how to build one in a few steps, make rankers accessible to the business, show how Go does this in a uniquely scalable and performant way, and of course encourage the community to contribute and share more trained models so they can become more and more accurate!

Back

Dragons of CGO

Home

Speaker K.S. Bhaskar
RoomUB2.252A (Lameere)
TrackGo
Time13:00 - 13:30
Event linkView original entry

YottaDB is a mature, hierarchical key-value, free / open source NoSQL database which is used in enterprise-scale mission-critical applications in banking and healthcare, and also scales down to fit on IoT devices like the Raspberry Pi Zero, as well as applications in-between (like the University of Antwerp library catalog system). When a customer funded us to develop a Go API to YottaDB, we thought it would be a straightforward project. But it was a very painful exercise for us. The presentation discusses the problems we faced, and how we overcame them.

Go is a popular language for writing highly concurrent software, and works well when used in isolation. Using Go alongside software written in other languages (such as C) can be done, but there are some hidden dragons to keep an eye out for. In addition to obvious problems, such as calling C variadic functions, other more subtle problems are hidden deep in the depths of Go documentation.



Consider callback functions; how does one pass a function pointer from Go to C to provide a callback? There are strict limitations enforced by Go on what pointers may be passed to C routines, intended to prevent faults resulting from Go structures being garbage collected without knowledge of the C code. Without being able to pass pointers, even function pointers, how do we “pass” a callback function to the C code to callback into? Furthermore, how do we pass data to the callback function, since we can’t pass Go structures?



Given that we can’t pass Go structures to C code, at some point we will need to allocate C structures to store data for the C code to operate on. Go promises one thing about memory allocated in C land; it will not keep track of it for you. The garbage collector will gladly clean up any Go structures no longer needed, but will not clean up the associated C memory. How can one write code which isn’t likely to result in memory leaks, using this model?



Perhaps the most difficult challenge to overcome is that Go makes no promises about what thread is running code. Go does its best to hide the identity of Go routines from the user, so they won’t rely on this metadata for handling code execution. This presents a problem for many C applications, which often use POSIX mutexes to control access, and the owner of a mutex needs to belong to a specific thread. How can one write Go applications that allow the concurrency Go users expect, without trashing the libraries they are calling?



Of course, none of the solutions we talk about here are any good unless you can compile your program. The hidden dragons of Go also lurk behind the “go build“ command; fitting in the required C flags requires knowledge of, among other things, pkg-config, a systems tool used behind the scenes by programs like CMake and autotools.



We had to tackle all these problems, and many more, during the development of the YottaDB Go wrapper (https://gitlab.com/YottaDB/Lang/YDBGo and https://yottadb.com). This presentation hopes to pass some of our hard-learned lessons to other programmers who will use Go to interface with non-Go libraries and utilities.

Back

Advanced debugging techniques of Go code

Home

Speaker Andrii Soldatenko
RoomUB2.252A (Lameere)
TrackGo
Time13:30 - 14:00
Event linkView original entry

In an ideal world, you would write Go code, compile it, and it would work perfectly the first time. Unfortunately, it doesn't work that way. There are many books and articles about how to write good Go code, but not many about how to debug it efficiently. In my talk I'll try to cover this important topic.
Go is a young programming language with excellent development tools. In my talk I'll cover how to use these tools efficiently to debug your code. I'll start with the history of debuggers, then show you how to debug Go itself, in case you need to find a bug in the language. Then I'll demonstrate how to effectively debug microservices using Docker and Kubernetes, what remote debugging is and how to apply it to an application that has already been deployed, how to debug unit tests and not only code, and some tricks for debugging command-line applications.

My talk is about:
- comparing Go debuggers (Delve, gdb) in real-world applications;
- how to effectively debug inside containers (using remote debuggers);
- how to use Mozilla rr to record and replay your Go app (https://rr-project.org/);
- how to dig into slices using gdb.
I use Docker and Kubernetes to show how to debug applications in different environments, without lots of Kubernetes detail, instead showing tips and tricks to speed up debugging your microservices.

Back

Debug code generation in Go

Home

Speaker Jaana Dogan
RoomUB2.252A (Lameere)
TrackGo
Time14:00 - 14:30
Event linkView original entry

If you are interested in what machine code Go generates and how, this talk is for you. By learning more about compilation, you can avoid unnecessary hand-crafted optimizations, or learn enough about the compiler to become a contributor to the Go compiler.

Have you ever optimized some Go code, only to realize later that the Go compiler already performs the same optimization automatically? Have you ever tried to understand what makes up the Go compiler? Or have you ever wondered how you can inspect the machine code generated from Go source code? If you are interested in what machine code Go generates and how, this talk is for you. By learning more about compilation, you can avoid unnecessary hand-crafted optimizations, or learn enough about the compiler to become a contributor to the Go compiler.

Back

Uplift your Linux systems programming skills with systemd and D-Bus

Home

Speaker Leonid Vasilyev
RoomUB2.252A (Lameere)
TrackGo
Time14:30 - 15:00
Event linkView original entry

Systemd has been the de-facto standard process manager in all mainstream Linux distributions for almost a decade.
D-Bus is the most widely used inter-process communication mechanism on a local host. It's used in many core apps on the Linux desktop.



Yet both systemd and D-Bus are undervalued.
Very often, programs that are only intended to run on Linux attempt to re-implement (with bugs) what systemd and D-Bus already provide
(for example: watchdog functionality, reliable process termination, notifying another program about some event, coordination between multiple processes).



The goal of this talk is to shift your perspective on systemd and D-Bus (using concrete practical examples in Go),
and to show how the basic building blocks these systems provide can be re-used in software you write for a modern Linux system.

This is an exploratory talk. The intent is to look at systemd and D-Bus from a different angle.



Most current tutorials about systemd focus on operating a service like Apache, nginx or Redis.
D-Bus tutorials are very abstract and basic, and lack concrete, useful use-cases.



I plan to present a few recent additions to systemd, such as portable services and resource control,
as well as re-introduce a few existing concepts, like sd-notify, watchdogs and transient units.



On D-Bus, I plan to show how to use the bus abstraction and a few neat features,
like passing file descriptors and receiving notifications.



The focus is on how to not re-invent things that systemd and D-Bus do much better.



Examples are given as a few simple Go programs, with full source available on GitHub.



The intended audience is anyone who writes and operates Go code on Linux.
Preferred audience experience: basic knowledge of Linux and Go; familiarity with systemd and D-Bus concepts would be useful as well.

Back

Deep Learning For Gophers

Home

Speaker Rashmi Nagpal
RoomUB2.252A (Lameere)
TrackGo
Time15:00 - 15:30
Event linkView original entry

Software has not eaten the world yet, but it has certainly changed the way the world was before. Software has given us humans a new superpower: the power of artificial neural networks. The goal of those networks is to help us answer the question: “Given X, predict Y with Z% accuracy”. This is where deep learning comes into the picture. Let's build the basic building block of deep learning: a neural network.

Back

Speed up the monolith

Home

Speaker Alessio Caiazza
RoomUB2.252A (Lameere)
TrackGo
Time15:30 - 16:00
Event linkView original entry

GitLab is a Ruby on Rails application, but this didn’t prevent us from having fun with Go.
Learn how we decomposed our monolith by writing a smart reverse proxy in Go that handles I/O intensive operations.
A technique that every web app can use, regardless of the company stack.

We set a deadline for releasing a cloud-native version of GitLab and put a team of engineers to work planning the Helm charts, splitting several components into independently scalable pods. The team faced a few challenges.



GitLab’s main codebase is written in Ruby, which has a global interpreter lock. We relied on NFS to asynchronously upload files from our workers fleet. Removing shared file system by uploading directly from the controller was not an option. We wanted to move to an object storage based solution, but that was a paid feature and we had to port it to the open-source codebase. Oh, we also needed to make sure the rest of our engineers could keep shipping new features at our regular monthly cadence.



At the same time, we were planning our infrastructure migration from Azure to Google Cloud. Removing this intermediate state, where a file is on GitLab server NFS but not yet uploaded to the object storage, would have made the migration a lot easier.



We had to remove the NFS dependency to make GitLab easily deployable on Kubernetes, and we needed a performant multi-cloud object storage uploader viable also for on-prem installations: a solution that would work from a single-server setup up to GitLab.com scale.



Luckily we had already written Workhorse, a smart reverse proxy written in Go for handling Git operations. It was time to extend Workhorse's capabilities, leveraging the full power of goroutines.



We had a plan, but the devil is in the details. Allow me to guide you through this journey. During the talk I’ll tell you how a Ruby on Rails company began to write Go code, how we implemented an object storage uploader inside our proxy, the problems we faced, and the tradeoffs we made to deliver this in time.

Back

Go Lightning Talks

Home

Speaker
RoomUB2.252A (Lameere)
TrackGo
Time16:00 - 17:00
Event linkView original entry

At the end of the day we will have lightning talks of 8 minutes in the Go Devroom!
Each talk will be 8 minutes long; the CfP for these is open until a few hours before the talks start, to give everyone the chance to submit a proposal.

Back

LPI Exam Session 3

Home

Speaker LPI Team
RoomUB4.132
TrackCertification
Time09:30 - 11:30
Event linkView original entry

LPI offers discounted certification exams at FOSDEM

As in previous years, the Linux Professional Institute (LPI) will offer discounted certification exams to FOSDEM attendees.
LPI offers level 1, level 2 and level 3 certification exams at FOSDEM with an almost 50% discount.



For further information and instructions see https://fosdem.org/certification.

Back

LibreOffice Exam Session 1

Home

Speaker LibreOffice Team
RoomUB4.132
TrackCertification
Time12:00 - 13:00
Event linkView original entry

LibreOffice Certifications are designed to recognize professionals in the areas of development, migrations and training who have the technical capabilities and the real-world experience to provide value-added services to enterprises and organizations deploying LibreOffice on a large number of PCs.

In the future, LibreOffice Certifications will be extended to Level 1 and Level 2 Support professionals.



The LibreOffice Certification is not targeted to end users, although Certified Training Professionals will be able to provide such a service upon request (although not as a LibreOffice Certification). In general, end user certification is managed by organizations with a wider reach such as the Linux Professional Institute.

Back

LibreOffice Exam Session 2

Home

Speaker LibreOffice Team
RoomUB4.132
TrackCertification
Time13:30 - 14:30
Event linkView original entry

LibreOffice Certifications are designed to recognize professionals in the areas of development, migrations and training who have the technical capabilities and the real-world experience to provide value-added services to enterprises and organizations deploying LibreOffice on a large number of PCs.

In the future, LibreOffice Certifications will be extended to Level 1 and Level 2 Support professionals.



The LibreOffice Certification is not targeted to end users, although Certified Training Professionals will be able to provide such a service upon request (although not as a LibreOffice Certification). In general, end user certification is managed by organizations with a wider reach such as the Linux Professional Institute.

Back

LibreOffice Exam Session 3

Home

Speaker LibreOffice Team
RoomUB4.132
TrackCertification
Time15:00 - 16:00
Event linkView original entry

LibreOffice Certifications are designed to recognize professionals in the areas of development, migrations and training who have the technical capabilities and the real-world experience to provide value-added services to enterprises and organizations deploying LibreOffice on a large number of PCs.

In the future, LibreOffice Certifications will be extended to Level 1 and Level 2 Support professionals.



The LibreOffice Certification is not targeted to end users, although Certified Training Professionals will be able to provide such a service upon request (although not as a LibreOffice Certification). In general, end user certification is managed by organizations with a wider reach such as the Linux Professional Institute.

Back

Continuous Delivery starts with Continuous Infrastructure

Home

Speaker Kris Buytaert
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time09:00 - 09:40
Event linkView original entry

Most organisations start their journey towards Continuous Delivery with their development teams, or often their web or mobile teams. I’ve seen many of these journeys fail because “ops” was not included in the picture. The organisation assumed DevOps didn’t need ops. So the team didn’t adapt, didn’t provide the right stacks, couldn’t support the tools. I’ve started a number of successful journeys with the ops teams doing Continuous Delivery of their infrastructure as code. They changed their mindset, allowing them to understand, support and onboard the development teams. This talk will document that approach with some supporting cases and examples.



Taking it one step further, we'll showcase how to do Continuous Delivery of your Infrastructure as Code, obviously with open source tools.

Back

An event based approach for CI/CD pipelines

Home

Speaker Emelie Pettersson
Fredrik Fristedt
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time09:45 - 10:25
Event linkView original entry

How can we listen to when new upstream software has been tested to the extent that we feel comfortable integrating it into our software? How can we communicate about new artifacts available for others to integrate? How can we see what has been integrated where? How can we achieve traceability across pipelines run on different tooling infrastructure? How can we visualize our pipelines to follow changes from source code to customer deployment?



We will describe these challenges and show how we tackled them using self-documenting integration pipelines providing traceability and visualization to benefit multiple needs in the organization. Our presentation is based on our experience from large-scale software development.

Back

Mario’s adventures in Tekton land

Home

Speaker Andrea Frittoli
Vincent Demeester
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time10:30 - 11:10
Event linkView original entry

In this talk, the speakers will present their experiences about using Tekton - a cloud-native pipeline system - to test, release and continuously deploy itself.

Tekton is a Kubernetes-native, lightweight, easy-to-manage CI/CD pipeline engine. Pipeline building blocks can be reused, version-controlled and curated in a catalogue that embeds best practices. Tekton, hosted by the CD Foundation, aspires to be the common denominator in CI/CD, modelling what Kubernetes has become in cloud-native application development. The Tekton team wanted to make sure that the project is going in the right direction by "dogfooding", i.e. by using Tekton to run its own automation "plumbing". The initial continuous integration setup embedded most of the testing pipelines in bash scripts. The speakers replaced this with Tekton, improving the readability of the pipelines and the reproducibility of CI runs. Eventually, they moved on to continuously delivering Tekton and its pipelines via Tekton. In this talk, the speakers will share their experiences of using a cloud-native pipeline system to test, release and continuously deploy itself.

Back

Test Software On Emulated Hardware In Containers... In The Cloud

Home

Speaker Sean A. Parker
Paul Schroeder
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time11:15 - 11:55
Event linkView original entry

The project modernizes the traveler information systems of an international railway and transportation company, including the modernization and renewal of traveler-facing devices at the train stations. For a variety of devices, ranging from 20-year-old x86 PC/104-based embedded systems up to modern 64-bit multi-core systems, a Buildroot-based Linux system and a custom application stack are being developed.

In this talk, we will show how we use a fully automated CI pipeline to build our custom application components resulting in deployable Linux disk images. These images are then containerized and deployed on our Kubernetes cluster. Using Qemu in our containers allows us to simulate external hardware normally connected through serial interfaces and is the basis for automated tests.




Back

AMENDMENT Advancing the Future of CI/CD Together

Home

Speaker Tracy Miranda
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time12:00 - 12:15
Event linkView original entry

The Continuous Delivery Foundation was launched in 2019 as the new home of the FOSS projects Jenkins, Jenkins X, Spinnaker and Tekton.
The foundation is also a community to advance adoption of CI/CD best practices and tools. This talk outlines the initiatives and ways to get involved so we can all work together to accelerate CI/CD adoption.



Please note that this talk replaces one entitled "Infrastructure CICD with KubeVirt and Tekton" that was due to have been given by Tyler Auerbeck, who unfortunately wasn't able to travel to FOSDEM.

The Continuous Delivery Foundation hosts key CI/CD projects. This talk gives a brief overview of those projects and how we are working toward interoperability between them.
We also look at the goals of the CDF and key initiatives such as CI/CD landscape, security, diversity and MLOPs. This talk will share how you can get involved so we can all work together in open source to drive forward the direction of CI/CD and make software delivery better for everyone.

Back

Choosing The Right Deployment Strategy

Home

Speaker Viktor Farcic
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time12:20 - 13:00
Event linkView original entry

Deployment strategies affect everyone, no matter whether we are focused only on a single aspect of the application lifecycle or we are in full control. The way we deploy affects the architecture, testing, monitoring, and many other aspects. And not only that, but we can say that architecture, testing, and monitoring affect the way we deploy. All those things are closely related and affect each other.



We'll discuss different deployment strategies and answer a couple of questions. Is your application stateful or stateless? Does its architecture permit scaling? How do you roll back? How do you scale up and down? Do you need your application to always be running? Should you use Kubernetes Deployments instead of, let's say, StatefulSets? Answers to those questions will not help much unless we are familiar with some of the most commonly used deployment strategies. Not only will that knowledge help us choose which one to pick, but it might even influence the architecture of our applications.

For many people, deploying applications is transparent or even irrelevant. If you are a developer, you might be focused on writing code and allowing magic to happen. By magic, I mean letting other people and departments figure out how to deploy your code. Similarly, you might be oblivious to deployments. You might be a tester, or you might have some other role not directly related to system administration, operations, or infrastructure. Now, I doubt that you are one of the oblivious. The chances are that you would not be even reading this if that's the case. If, against all bets, you do belong to the deployment-is-not-my-thing group, the only thing I can say is that you are wrong.



Deployment strategies affect everyone, no matter whether we are focused only on a single aspect of the application lifecycle or we are in full control. The way we deploy affects the architecture, testing, monitoring, and many other aspects. And not only that, but we can say that architecture, testing, and monitoring affect the way we deploy. All those things are closely related and affect each other in ways that might not be obvious at first glance.



We'll discuss different deployment strategies and answer a couple of questions. Is your application stateful or stateless? Does its architecture permit scaling? How do you roll back? How do you scale up and down? Do you need your application to always be running? Should you use Kubernetes Deployments instead of, let's say, StatefulSets? Those are only a few of the questions you need to answer to choose the right deployment mechanism. But answers to those questions will not help much unless we are familiar with some of the most commonly used deployment strategies. Not only will that knowledge help us choose which one to pick, but it might even influence the architecture of our applications.



We'll explore serverless, recreate, rolling update, and canary deployment strategies and we'll automate them all using Jenkins X.
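As a concrete point of reference for one of these strategies, a Kubernetes rolling update is configured declaratively on the Deployment itself; a minimal sketch, with all names and numbers illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above replicas
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: app
          image: example/app:1.2.3
```

Changing the image then replaces pods gradually while keeping capacity within the configured bounds, and `kubectl rollout undo` handles the rollback.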

Back

Progressive Delivery

Home

Speaker Carlos Sanchez
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time13:05 - 13:45
Event linkView original entry

Progressive Delivery makes it easier to adopt Continuous Delivery by deploying new versions to a subset of users, evaluating their correctness and performance before rolling them out to all users, and rolling them back if they don't match some key metrics. Canary deployments are one of the techniques in Progressive Delivery, used in companies like Facebook to roll out new versions gradually. But good news: you don't need to be Facebook to take advantage of it.



We will demo how to create a fully automated Progressive Delivery pipeline with Canary deployments and rollbacks in Kubernetes using Jenkins X, an open source platform for cloud native CI/CD in Kubernetes, and Flagger, a project that uses Istio and Prometheus to automate Canary rollouts and rollbacks.

Back

A Practical CI/CD Framework for Machine Learning at Massive Scale

Home

Speaker Alejandro Saucedo
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time13:50 - 14:05
Event linkView original entry

Managing production machine learning systems at scale has uncovered new challenges that have required fundamentally different approaches from those of traditional software engineering and data science. In this talk, we'll provide key insights on MLOps, which often encompasses the concepts around monitoring, deployment, orchestration and continuous delivery for machine learning. We will be covering a hands-on example where we will be training, deploying and monitoring ML at scale. We'll be using Jenkins X (+ Prow & Tekton) to deploy/promote these models across multiple environments. We will use KIND (Kubernetes in Docker) to run integration tests in our development environment. Finally, we'll be using Seldon to orchestrate & monitor these models leveraging advanced ML techniques.

Managing production machine learning systems at scale has uncovered new challenges that have required fundamentally different approaches from those of traditional software engineering and data science. In this talk, we'll provide key insights on MLOps, which often encompasses the concepts around monitoring, deployment, orchestration and continuous delivery for machine learning. We will be covering a hands-on example where we will be training, deploying and monitoring ML at scale. We'll be using Jenkins X (+ Prow & Tekton) to deploy/promote these models across multiple environments. We will use KIND (Kubernetes in Docker) to run integration tests in our development environment. Finally, we'll be using Seldon to orchestrate & monitor these models leveraging advanced ML techniques.

Back

Our road to a k8s/GKE based Closed Build Environment

Home

Speaker Siegfried Kiermayer
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time14:10 - 14:50
Event linkView original entry

My team built a new Closed Build Environment for building release packages on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE).



I'd like to take you on a small journey through a variety of topics we came across (open for change):






And others



A simple real-life example of how my team is doing it. Looking forward to inspiring others and getting their feedback!

Back

From a Pipeline to a Government Cloud

Home

Speaker Toby Lorne (tlwr)
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time14:55 - 15:35
Event linkView original entry

Since 2016, the UK Government has been running an open-source, cross-government Platform-as-a-Service (PaaS) to make it easier and cheaper to build government services. The GOV.UK PaaS is built on BOSH and Cloud Foundry, and is deployed using Concourse.



Concourse is "an open-source continuous thing-doer", with abstractions that help build pipelines quickly, and for extending the behaviour of the system.



This presentation will provide an introduction to Concourse, and then describe how the GOV.UK PaaS team use Concourse to continuously deploy a whole PaaS whilst ensuring high-availability and minimal impact to services and users.

Toby Lorne is a site reliability engineer working at the UK Government Digital Service on the GOV.UK Platform-as-a-Service.



This presentation is split into four parts:




  1. An overview of the architecture and implementation of GOV.UK PaaS: Terraform - a tool for managing infrastructure as code; BOSH - a tool for release engineering and software lifecycle management; Cloud Foundry - a set of components for Platform-as-a-Service; Prometheus & Grafana - monitoring and visualisation tools


  2. An introduction to Concourse: how it works; the abstractions; the design decisions, patterns, and anti-patterns


  3. A walkthrough of the pipelines used in deployment and development


  4. An examination of patterns used in the GOV.UK PaaS deployment pipeline, and how you could use these patterns in your pipelines.



Back

Deployment to hardware

Home

Speaker William Salmon
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time15:40 - 16:20
Event linkView original entry

Our project takes a fun, road-following app which leverages a basic neural network and deploys it to real hardware with an OStree update system. This has meant managing a variety of different CI runners: GPU, aarch64 and x86_64. These have a variety of different dependencies and drivers, and interface with a number of services and caches.



I will focus on how we constructed and developed our CI pipelines to build, test and integrate a number of disparate components to produce images and push updates into an OStree server to be deployed over the air onto our hardware.

Back

AMENDMENT How secure is your build/server?

Home

Speaker Patrick Debois
RoomUB4.136
TrackContinuous Integration and Continuous Deployment
Time16:25 - 17:00
Event linkView original entry

We have learned that we need to trust others, but as our parents used to say - don’t trust strangers. So we secure our production server more than ever.



Yet, there is this no-man's land: “the build server”. We think it’s time to take a closer look at some of the good practices around securing builds & artifacts to improve our day to day level of trust.



Please note that this talk replaces one entitled "Safe, gated and integrated GitOps for Kubernetes" that was due to have been given by Mohammed Naser, who unfortunately is now unable to present. We wish him a speedy recovery.

Development has changed over the years, from doing everything yourself to a 3rd party package for every function. Operations has changed too, running your own servers is now considered an exception. To the cloud!
We have learned that we need to trust others, but as our parents used to say - don’t trust strangers. So we secure our production server more than ever.



Yet, in the middle sits this no-man's land: “the CI server”. We think it’s time to take a closer look at some of the good practices around securing builds & artifacts to improve our day to day level of trust.



With Mark Sherman's statement "Development is now assembly" in mind, the talk will focus more on the package/artifact/repository aspect, and less on application security inside the code itself or at the OS/machine level.






It will explain these topics using practical/code examples from the Node.js and Docker ecosystems.
All this will be presented from the different viewpoints of "dev", "sec" and "ops".



Let’s take ownership of our trust; we are already responsible when things go wrong anyway.
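One of the good practices in this space, pinning artifacts by cryptographic digest (the idea behind npm lockfile integrity fields and Docker image digests), can be sketched in a few lines. This is an illustrative sketch, not the speaker's tooling; the names and the fake artifact are invented:

```python
import hashlib
import hmac

def artifact_digest(data: bytes) -> str:
    """Hex SHA-512 digest of an artifact's raw bytes."""
    return hashlib.sha512(data).hexdigest()

def verify_artifact(data: bytes, pinned: str) -> bool:
    """Accept the artifact only if it matches the digest recorded when the
    dependency was first vetted; constant-time compare to be safe."""
    return hmac.compare_digest(artifact_digest(data), pinned)

package = b"fake tarball contents"
pin = artifact_digest(package)   # recorded in a lockfile at vetting time
assert verify_artifact(package, pin)
assert not verify_artifact(package + b"tampered", pin)
```

A build server that re-verifies every fetched artifact against such pins turns "trust the registry" into "trust what we reviewed once".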

Back

Open Source Hardware and Soldering Workshop

Home

Speaker Tsvetan Usunov
RoomUB4.228
TrackWorkshops
Time10:00 - 17:00
Event linkView original entry

Open Source Hardware room with two days of soldering workshops.



The Day 2 soldering workshop will be dedicated to Surface Mount Technology and is good for beginners who have no experience with SMT.



Besides the soldering workshop, we will show the latest OSHW boards we are working on; you are welcome to join and show your own OSHW projects too.

We will solder a Binary Watch PCB made exclusively with SMT components.



During the soldering workshop we will introduce the electronic components used on the PCB, how to identify them, and how to recognize components with polarity.



We will teach you the basics of SMT soldering: how to print solder paste with a stencil, how to reflow using hot air, what good and bad solder joints look like, and what a cold solder joint is.
At the end of the workshop you will have built your own binary watch.

Back

Introducing HPC with a Raspberry Pi cluster

Home

Speaker Colin Sauze
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time09:00 - 09:25
Event linkView original entry

This talk will discuss the development of a RaspberryPi cluster for teaching an introduction to HPC.



The motivation for this was to overcome four key problems faced by new HPC users:




  1. The availability of a real HPC system, and the effect that running training courses can have on the real system; conversely, the varying availability of spare resources on the real system can cause problems for the training course.

  2. A fear of using a large and expensive HPC system for the first time, and worries that doing something wrong might damage the system.

  3. That HPC systems are very abstract systems sitting in data centres that users never see, making it difficult for them to understand exactly what it is they are using.

  4. That new users fail to understand resource limitations, in part because the vast resources in modern HPC systems allow a lot of mistakes to be made before running out of resources. A more resource-constrained system makes this easier to understand.




The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi and attempts to keep that environment as close to a "real" HPC system as possible. The issues in trying to automate the installation process will also be covered.

Back

Building an open source data lake at scale in the cloud

Home

Speaker Adrian Woodhead
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time09:30 - 09:55
Event linkView original entry

This presentation will give an overview of the various tools, software, patterns and approaches that Expedia Group uses to operate a number of large scale data lakes in the cloud and on premise. The data journey undertaken by the
Expedia Group is probably similar to many others who have been operating in this space over the past two decades - scaling out from relational databases to on premise Hadoop clusters to a much wider ecosystem in the cloud. This talk
will give an overview of that journey and then describe the various open source components that Expedia Group have used and built to create multi-petabyte data lakes. These include existing open source projects like Hive, Hadoop, Terraform,
Docker, Kubernetes as well as open source tools that we built to overcome some of the unexpected challenges we faced. The first of these is Circus Train — a dataset replication tool that copies Hive tables between clusters and clouds. We will also discuss various other options for dataset replication and what unique features Circus Train has. The second tool is Waggle Dance — a federated Hive metadata service that enables querying of data stored across multiple Hive metastores. We will then look at Apiary - a means to simplify the deployment of the various components of an open source data lake at scale including the Hive metastore, Waggle Dance, S3 bucket access, metadata change notifications and much more. We focus on actual problems and solutions that have arisen in a huge, organically grown corporation, rather than idealised architectures.

Back

Magic Castle: Terraforming the Cloud for HPC

Home

Speaker Félix-Antoine Fortin
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time10:00 - 10:25
Event linkView original entry

Compute Canada provides HPC infrastructures and support to every academic research institution in Canada. In recent years, Compute Canada has started distributing research software to its HPC clusters with the CERN software distribution service, CVMFS. This opened the possibility of accessing the software from almost any location, and therefore of replicating the Compute Canada experience outside of its physical infrastructure.



From these new possibilities emerged an open-source software project named Magic Castle, which aims to recreate the Compute Canada user experience in public clouds. Magic Castle uses the open-source software Terraform and HashiCorp Language (HCL) to define the virtual machines, volumes, and networks that are required to replicate a virtual HPC infrastructure. The infrastructure definition is packaged as a Terraform module that users can customize as they require. Once their cluster is deployed, the user is provided with a complete HPC cluster software environment including a Slurm scheduler, a Globus Endpoint, JupyterHub, LDAP, DNS, and over 3000 research software packages compiled by experts with EasyBuild. Magic Castle is compatible with AWS, Microsoft Azure, Google Cloud, OpenStack, and OVH.



Compute Canada staff has been using this software to deploy ephemeral clusters for training purposes every other week for the past two years. Magic Castle is also gaining in popularity with HPC cluster users for development, testing, and continuous integration.



In this talk, we will give a live demonstration of the creation of a cluster. We will present the architecture of Magic Castle, explain infrastructure and provisioning design, and present use cases. We will conclude by describing some of the challenges experienced while developing this novel usage of Terraform.

Back

Maggy: Asynchronous distributed hyperparameter optimization based on Apache Spark

Home

Speaker Moritz Meister
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time10:30 - 10:55
Event linkView original entry

Maggy is an open-source framework built on Apache Spark, for asynchronous parallel execution of trials for machine learning experiments. In this talk, we will present our work to tackle search as a general purpose method efficiently with Maggy, focusing on hyperparameter optimization. We show that an asynchronous system enables state-of-the-art optimization algorithms and allows extensive early stopping in order to increase the number of trials that can be performed in a given period of time on a fixed amount of resources.

In "The Bitter Lesson of AI", Rich Sutton (father of reinforcement learning) claimed that general purpose methods (like search and learning) that scale with increased computation are the future of AI. Apache Spark is a general purpose framework for scaling out data processing with available compute, but there are challenges in making Spark's bulk-synchronous execution mechanism work efficiently with search and (deep) learning.
In this talk, we will present our work on Maggy, an open-source framework to tackle search as a general purpose method efficiently on Spark. Spark can be used to deploy basic optimizers (grid search, random search, differential evolution) proposing combinations of hyperparameters (trials) that are run synchronously in parallel on executors. However, many such trials perform poorly, and a lot of CPU and hardware accelerator cycles are wasted on trials that could be stopped early, freeing up resources for other trials. What is needed is support for asynchronous mechanisms.
Maggy is an asynchronous hyperparameter optimization framework built on Spark that is able to transparently schedule and manage hyperparameter trials, by allowing limited communication, thereby increasing resource utilization, and massively increasing the number of trials that can be performed in a given period of time on a fixed amount of resources. Maggy is also built to support parallel ablation studies and applies to black box optimization/search problems in general. We will report on the gains we have seen in reduced time to find good hyperparameters and improved utilization of GPU hardware. Finally, we will perform a live demo on a Jupyter notebook, showing how to integrate Maggy in existing PySpark applications.
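The asynchronous early-stopping idea can be illustrated with a toy, pure-Python sketch. No Spark or Maggy APIs appear here; the objective function, the stopping rule, and all names are invented for illustration:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Toy objective: training "score" grows each epoch, fastest near lr = 0.1.
def epoch_gain(lr):
    return max(0.0, 1.0 - abs(lr - 0.1))

class Stopper:
    """Asynchronous early stopping: kill a trial whose running score falls
    far below the best score observed so far at the same epoch."""
    def __init__(self):
        self.best = {}                  # epoch -> best score seen so far
        self.lock = threading.Lock()

    def report(self, epoch, score):
        with self.lock:
            self.best[epoch] = max(score, self.best.get(epoch, score))
            return score < 0.5 * self.best[epoch]   # True => stop this trial

def run_trial(lr, stopper, epochs=10):
    score = 0.0
    for epoch in range(epochs):
        score += epoch_gain(lr)         # one "epoch" of training
        if stopper.report(epoch, score):
            return lr, score, True      # stopped early, worker freed
    return lr, score, False

stopper = Stopper()
grid = [0.001, 0.01, 0.1, 0.5, 0.9]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda lr: run_trial(lr, stopper), grid))

best_lr, best_score, stopped = max(results, key=lambda r: r[1])
assert best_lr == 0.1 and not stopped
```

Because each trial reports per epoch rather than only at the end, poor trials release their worker long before their full budget is spent, which is exactly the resource-utilization gain the talk describes (Maggy achieves this on Spark executors rather than threads).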

Back

Snorkel Beambell - Real-time Weak Supervision on Apache Flink

Home

Speaker Suneel Marthi
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time11:00 - 11:25
Event linkView original entry

The advent of Deep Learning models has led to a massive growth of real-world machine learning. Deep Learning allows Machine Learning Practitioners to get the state-of-the-art score on benchmarks without any hand-engineered features. These Deep Learning models rely on massive hand-labeled training datasets which is a bottleneck in developing and modifying machine learning models.



Most large scale Machine Learning systems today like Google’s DryBell use some form of Weak Supervision to construct lower quality, large scale training datasets that can be used to continuously retrain and deploy models in a real-world scenario.



The challenge with continuous retraining is that one needs to maintain prior state (e.g., the learning functions in case of Weak Supervision or a pre-trained model like BERT or Word2Vec for Transfer Learning) that is shared across multiple streams, while continuously updating the model. Apache Beam’s Stateful Stream processing capabilities are a perfect match here including support for scalable Weak Supervision.



Prior work on using Beam’s State coupled with Flink’s dynamic processing capabilities to store and update word embeddings for real-time Online Topic Modeling of text has been presented at Flink Forward Berlin 2018. Similar streaming pipelines would also work for real-time model updates using Weak Supervision and Transfer Learning. In this talk, we’ll look at Snorkel BeamBell, a framework that leverages Stanford’s Snorkel library for Weak Supervision and Apache Beam for large-scale weak supervision: online labeling of large amounts of data that can continuously learn new classification models based on Stateful Learning Functions and user feedback.



The audience would come away with a better understanding of how Weak Supervision with Apache Beam’s stateful stream processing can be used to accelerate the labeling of training data, and real-time training and update of machine learning models.

Back

Efficient Model Selection for Deep Neural Networks on Massively Parallel Processing Databases

Home

Speaker Frank McQuillan
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time11:30 - 11:55
Event linkView original entry

In this session we will present an efficient way to train many deep learning model configurations at the same time with Greenplum, a free and open source massively parallel database based on PostgreSQL. The implementation involves distributing data to the workers that have GPUs available and hopping model state between those workers, without sacrificing reproducibility or accuracy. Then we apply optimization algorithms to generate and prune the set of model configurations to try.

Deep neural networks are revolutionizing many machine learning applications, but hundreds of trials may be needed to generate a good model architecture and associated hyperparameters. This is the challenge of model selection. It is time consuming and expensive, especially if you are only training one model at a time.



Massively parallel processing databases can have hundreds of workers, so can you use this parallel compute architecture to address the challenge of model selection for deep nets, in order to make it faster and cheaper?



It’s possible!



We will demonstrate results from this project using a version of Hyperband, which is a well known hyperparameter optimization algorithm, and the deep learning frameworks Keras and TensorFlow, all running on Greenplum database using Apache MADlib. Other topics will include architecture, scalability results and bright opportunities for the future.
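Hyperband's inner loop, successive halving, is easy to sketch in pure Python. The objective and numbers below are toys; the actual implementation runs the surviving configurations in parallel across database workers:

```python
def successive_halving(configs, evaluate, budget=1, eta=2):
    """One bracket: train all configs briefly, keep the best 1/eta,
    multiply the training budget by eta, and repeat until one remains."""
    rung = list(configs)
    while len(rung) > 1:
        scored = sorted(rung, key=lambda c: evaluate(c, budget), reverse=True)
        rung = scored[: max(1, len(rung) // eta)]
        budget *= eta
    return rung[0]

# Toy "training": validation score improves with budget, best near lr = 0.1.
def evaluate(lr, budget):
    return budget * (1.0 - abs(lr - 0.1))

configs = [0.001, 0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9]
best = successive_halving(configs, evaluate)
assert best == 0.1
```

The appeal for a massively parallel database is that each rung is an embarrassingly parallel batch of short training runs, which maps naturally onto hundreds of segment workers.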



We look forward to presenting this topic at FOSDEM’20!

Back

Predictive Maintenance

Home

Speaker Felix Bert
Corvin Jaedicke
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time12:00 - 12:25
Event linkView original entry

Predictive maintenance and condition monitoring for remote heavy machinery are compelling endeavors to reduce maintenance cost and increase availability. Beneficial factors for such endeavors include the degree of interconnectedness, availability of low cost sensors, and advances in predictive analytics. This work presents a condition monitoring platform built entirely from open-source software. A real world industry example for an escalator use case from Deutsche Bahn underlines the advantages of this approach.



Audio analysis is performed on milliseconds of audio data to get accurate predictions of an asset's condition. Even with this high-resolution knowledge about the equipment under supervision, sensitive alarming for our customers requires a system-of-systems approach taking into account up to several months of data.



This talk highlights the challenges and learnings involved in building the platform and high-level aggregation for our alarming system.
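The kind of short-window audio analysis described above can be sketched as follows. The window size, threshold factor, and signals are invented for illustration and unrelated to the actual platform:

```python
import math

def rms_windows(samples, window):
    """Split a signal into fixed-size windows (a few milliseconds of audio
    samples each) and return the RMS energy of every window."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + window]) / window)
        for i in range(0, len(samples) - window + 1, window)
    ]

def flag_anomalies(energies, baseline, factor=3.0):
    """Short-term alarm: a window is suspicious if its energy is far above
    the machine's healthy baseline. A production system would aggregate
    months of such flags before alerting a customer."""
    return [e > factor * baseline for e in energies]

healthy = [0.1, -0.1] * 50                     # quiet, periodic signal
fault = healthy[:80] + [2.0, -2.0] * 10        # loud "knock" at the end
energies = rms_windows(fault, window=20)
flags = flag_anomalies(energies, baseline=0.1)
assert flags == [False, False, False, False, True]
```

The gap between these per-window flags and a trustworthy customer alarm is exactly the high-level aggregation layer the talk covers.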

Back

Towards reproducible Jupyter notebooks

Home

Speaker Ludovic Courtès
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time12:30 - 12:40
Event linkView original entry

Jupyter has become a tool of choice for researchers willing to share a narrative and supporting code that their peers can re-run. This talk is about Jupyter’s Achilles’ heel: software deployment. I will present Guix-Jupyter, which aims to make notebooks self-contained and to support reproducible deployment.

Jupyter has become a tool of choice for researchers in data science and other fields. Jupyter Notebooks allow them to share a narrative and supporting code that their peers can re-run, which is why it is often considered a good tool for reproducible science.



However, Jupyter Notebooks do not describe their software dependencies, which significantly hinders reproducibility: What if your peer runs a different Python version? What if your notebook depends on a library that your peer hasn’t installed? What will happen if you try to run your notebook in a few years?



All these issues are being addressed by tools such as Binder and its friend repo2docker. These solutions, though, do not address what we think is the core issue: that notebooks lack information about their software dependencies.



In this talk I will present our take on this problem, Guix-Jupyter. Guix-Jupyter allows users to annotate their notebook with information about their run-time environment. Those annotations are interpreted and Guix takes care of deploying the dependencies described. Furthermore, Guix-Jupyter ensures that code runs in an isolated environment (a container) as a way to maximize reproducibility.



Guix-Jupyter is work-in-progress and we are eager to share our approach and get your feedback!

Back

Buildtest: HPC Software Stack Testing Framework

Home

Speaker Shahzeb Siddiqui
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time12:45 - 12:55
Event linkView original entry

HPC support teams are often tasked with installing scientific software for their user community, and the complexity of managing a large software stack gets very challenging. Software installation brings many challenges that require a team with domain expertise and countless hours of troubleshooting to build an optimal software stack tuned to the architecture. In the past decade, two software build tools (EasyBuild, Spack) have emerged that are widely accepted in the HPC community to accelerate building a complete software stack for HPC systems. The support team is constantly fulfilling software requests for end-users, which leads to an ever-growing software ecosystem. Once a piece of software is installed, the support team hands it off to the user without any testing, because scientific software requires domain expertise to test. Some software packages ship with a test suite that can be run post-build, while many have no mechanism for testing at all. This poses a knowledge gap between the HPC support team and end-users on the type of testing to do. Some HPC centers may have developed in-house test scripts suitable for testing their software, but these tests are not portable due to hardcoded paths and are often site-dependent. In addition, there is no collaboration between HPC sites in building a test repository that would benefit the community. In this talk I will present buildtest, a framework to automate software testing for a software stack, along with several module operations that would be of interest to the HPC support team.

An HPC computing environment is a tightly coupled system that includes a cluster of nodes and accelerators interconnected with a high-speed interconnect, a parallel filesystem, multiple storage tiers, a batch scheduler for users to submit jobs to the cluster, and a software stack for users to run their workflows. A software stack is a collection of compilers, MPI, libraries, system utilities, and scientific packages, typically installed in a parallel filesystem. A module tool like environment-modules or Lmod is generally used for loading the software environment into the users’ shell environment.



Software is packaged in various forms that determine how it is installed. A few package formats are: binary, Makefile, CMake, Autoconf, GitHub, PyPI, Conda, RPM, tarball, rubygem, MakeCp, jar, and many more. So many packaging formats create a burden for the HPC support team, who must learn a unique build process for each one. Software build tools like EasyBuild and Spack can build 1000+ software packages by supporting many packaging formats to address all sorts of software builds. EasyBuild and Spack provide end-to-end software build automation that helps HPC sites build a very large software stack with many combinatorial software configurations. During installation, some packages provide a test harness that can be executed via EasyBuild or Spack, which typically invokes make test or ctest for packages that follow the ConfigureMake, Autoconf, or CMake install process.



Many HPC sites rely on their users to test the software stack, and some sites may develop in-house test scripts to run sanity checks for popular scientific tools. Despite these efforts, there is little or no collaboration between HPC sites on sharing tests, because they are site-specific and often come with no documentation. At many sites, the HPC support team doesn't have the time to conduct software stack testing because (1) it lacks domain expertise and is understaffed, and (2) there is no standard test suite and framework to automate test building and execution. Frankly, HPC support teams are so busy with important day-to-day operations and engineering projects that software testing is either neglected or left to end-users. This demands a concerted effort by the HPC community to build a strong open-source community around software stack testing.



There are two points that need to be addressed. First, we need a framework to automatically test the installed software stack. Second, we need to build a test repository for scientific software that is community-driven and reusable amongst the HPC community. An automated test framework is a harness for automating the test creation process, but it requires community contributions to accumulate this repository on a per-package basis. To begin with, buildtest focuses on sanity-checking the software stack, so tests need to be generic, with simple examples that can be compiled easily. In the future, buildtest will focus on domain-specific tests once there is a strong community behind the project.
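The core idea of generating and running sanity checks from declarative recipes can be sketched in a few lines. This is an illustrative mock, not buildtest's actual recipe schema or API; the recipe fields and checks are invented:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# A minimal, declarative sanity-check recipe: what to run and what to
# expect. A real framework would also handle modules, compilers,
# and scheduler submission.
RECIPES = [
    {"name": "python_version", "run": [sys.executable, "--version"], "returncode": 0},
    {"name": "python_math",    "run": [sys.executable, "-c", "import math"], "returncode": 0},
]

def run_sanity_checks(recipes, logdir):
    """Execute each recipe, archive its output for later inspection,
    and report pass/fail per test."""
    results = {}
    for r in recipes:
        proc = subprocess.run(r["run"], capture_output=True, text=True)
        Path(logdir, r["name"] + ".log").write_text(proc.stdout + proc.stderr)
        results[r["name"]] = proc.returncode == r["returncode"]
    return results

with tempfile.TemporaryDirectory() as logdir:
    results = run_sanity_checks(RECIPES, logdir)
assert all(results.values())
```

Because the recipes contain no hardcoded site paths, the same test definitions could be shared between sites, which is precisely the collaboration gap the talk identifies.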

Back

Facilitating HPC job debugging through job scripts archival

Home

Speaker Andy Georges
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time13:00 - 13:10
Event linkView original entry

SArchive is a lightweight tool to facilitate debugging HPC job issues by providing support teams with the exact version of the job script that is run in the HPC job in an archive either on the filesystem, in Elasticsearch, or by producing it to a Kafka topic.

HPC schedulers usually keep a version of the user’s job script in their spool directory for the lifetime of the job, i.e., from job submission until the job has run to completion, whether successfully or not. However, once the job has completed, the job script and associated files are removed to avoid stacking up a large number of files. HPC systems typically run several million jobs, if not many more, over their lifetime; it is not feasible to keep them all in the spool directory. In case a job failed, user support teams are often asked to help figure out the cause of the failure. On these occasions, it often helps if the exact job script is available. Since a typical scheduler setup will make changes to every submitted script through, e.g., a submission filter, simply reconstructing what the user submitted requires an extra hoop: running the given script through the filter(s). Furthermore, users may have tweaked, changed, or removed the job script, which may add to the difficulty of debugging the issue at hand.



SArchive aims to address this problem by providing user support teams with an exact copy of the script that was run, along with the exact additional files used by the scheduler, e.g., to set up the environment in which the job runs. It can be argued that making a backup copy is actually the job of the scheduler itself, but we decided to use a tool outside the scheduler. This has the advantages that (i) one need not have access to the scheduler’s source code (not all schedulers are open source) and (ii) sites running multiple schedulers need not make changes to each of them, but only to SArchive, which should be a fairly limited effort, if any at all. SArchive is currently tailored towards the Slurm scheduler (hence the name), but it also supports the Torque resource manager. Adding support for other schedulers should be fairly straightforward; pull requests are welcome :)



Currently, SArchive provides three archival options: storing archived files inside a file hierarchy, shipping them to Elasticsearch, or producing them to a Kafka topic. File archival is pretty feature-complete; the code for shipping to Elasticsearch and Kafka is still under development and only covers what is needed in our (HPCUGent) specific setup, which may evolve.
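The essence of such a tool is copying each job script out of the spool directory before the scheduler deletes it. A minimal, hypothetical Python sketch of the file-hierarchy option follows (the real SArchive reacts to filesystem events rather than polling, and its layout differs):

```python
import shutil
import tempfile
import time
from pathlib import Path

def archive_new_scripts(spool: Path, archive: Path, seen: set) -> list:
    """One polling pass: copy any job script not yet archived out of the
    scheduler's spool directory into a dated archive hierarchy."""
    copied = []
    for script in spool.glob("job.*"):
        if script.name not in seen:
            dest = archive / time.strftime("%Y%m%d") / script.name
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(script, dest)       # preserves timestamps too
            seen.add(script.name)
            copied.append(dest)
    return copied

with tempfile.TemporaryDirectory() as tmp:
    spool, archive = Path(tmp, "spool"), Path(tmp, "archive")
    spool.mkdir()
    (spool / "job.1001").write_text("#!/bin/bash\nsrun ./simulate\n")
    seen = set()
    copied = archive_new_scripts(spool, archive, seen)
    (spool / "job.1001").unlink()            # scheduler cleans up after the job
    assert len(copied) == 1
    assert copied[0].read_text().startswith("#!/bin/bash")
```

Because the copy happens outside the scheduler, the approach needs no scheduler source access, exactly as the description argues.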

Back

Sharing Reproducible Results in a Container

Home

Speaker Efraim Flashner
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time13:15 - 13:25
Event linkView original entry

Containers do a great job separating out different parts of a system, making sure that they don't interact unless we want them to. What happens when a colleague hands us a project they've written and we're supposed to host it for them? They're not programmers, they're scientists. Who knows what they have in their program? How can we keep it up to date and deployed with a minimum of fuss?
Come and see how we've solved this problem with Guix, from rebuilding or replacing the dependencies with modern versions like a pro, having only the bare minimum required software in the container, deploying in an artisanally crafted container like a hero, and upgrading and rolling back when ready.

Back

Putting Artificial Intelligence back into people's hands

Home

Speaker Vincent Lequertier
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time13:30 - 13:55
Event linkView original entry

Artificial intelligence is now widespread for critical tasks such as crime recidivism risk assessment, credit risk scoring, job application review or disease detection. Because it has more and more impact on our lives, it becomes essential to make auditable AI software so that everyone can benefit from it and participate in its development.



This talk will present the methods that can be used to build fairness into artificial intelligence and explain how to control its progress thanks to the four freedoms of Free Software.

The talk is divided into three parts:






For each part the context will be presented as well as possible solutions.
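One concrete example of "building fairness in" is measuring it first. Below is a minimal sketch of one common criterion, demographic parity; the metric choice and the data are illustrative, not necessarily what the speaker uses:

```python
def positive_rate(preds, groups, g):
    """Fraction of group g that receives the positive (favourable) outcome."""
    sel = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(sel) / len(sel)

def demographic_parity_gap(preds, groups):
    """Difference in favourable-outcome rates between the most- and
    least-favoured groups; 0 is perfectly fair by this simple criterion."""
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy credit-scoring decisions (1 = approved) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
assert gap == 0.5   # group "a" approved 75% of the time, group "b" 25%
```

Making such a metric auditable by anyone is where the four freedoms of Free Software come in: the check itself must be inspectable and re-runnable.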

Back

GraphBLAS: A linear algebraic approach for high-performance graph algorithms

Home

Speaker Gabor Szarnyas
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time14:00 - 14:25
Event linkView original entry

There is increasing interest in applying graph analytical techniques to a wide array of problems, many operating on large-scale graphs with billions of edges. While graph algorithms and their complexity are textbook material, efficient implementation of such algorithms is still a major challenge for a number of reasons. First, the irregular and unstructured nature of graphs leads to a massive amount of random data access, which makes it difficult to use typical caching and parallelization techniques. Second, to optimize their code, developers need to be aware of the nuances of the underlying hardware, which at the very least consists of multiple CPU cores but often also incorporates heterogeneous components such as GPUs or even FPGAs. During the last decade, a number of graph programming models (such as Google's Pregel) have been proposed, but most of these focused on defining high-level abstractions for distributed execution environments and introduced a significant runtime overhead.



A potential approach for defining efficient graph processing algorithms is to exploit the well-known duality of graphs and sparse adjacency matrices, using matrix operations to capture algorithms. Surprisingly, only a few recent research prototypes have used this model, with little consensus on the set of necessary building blocks. The GraphBLAS initiative (launched in 2013) aims to define a standard to capture graph algorithms in the language of linear algebra, following in the footsteps of the BLAS standard which, starting four decades ago, revolutionized scientific computing by defining constructs on dense matrices.



In this talk, I give an overview of the GraphBLAS standard and its key components. First, I illustrate how matrix operations on various semirings correspond to the steps in graph algorithms. I then use these operations to present fundamental graph algorithms such as breadth-first search, shortest paths, and the clustering coefficient. Finally, I demonstrate the scalability of the GraphBLAS-based algorithms with the LDBC Graphalytics benchmark. The presented implementations are available open-source as part of LAGraph, a library built on top of GraphBLAS to demonstrate how to design efficient algorithms in linear algebra.
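The graph/matrix duality can be sketched without any GraphBLAS implementation: a toy BFS where each step is a boolean-semiring "matrix-vector product", with adjacency sets standing in for a sparse matrix (the graph and names are illustrative):

```python
# Graph as a boolean sparse adjacency "matrix": row i holds the set of
# out-neighbours of vertex i.
ADJ = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}, 4: set()}

def bfs_levels(adj, source):
    """Breadth-first search expressed as repeated vector-matrix products
    over the boolean (OR, AND) semiring."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        # Boolean semiring "matvec": union of the neighbour sets of the
        # frontier, masked by the not-yet-visited vertices (the GraphBLAS
        # complement mask).
        frontier = set().union(*(adj[v] for v in frontier)) - level.keys()
        for v in frontier:
            level[v] = depth
    return level

assert bfs_levels(ADJ, 0) == {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

Swapping the semiring changes the algorithm: with (min, +) instead of (OR, AND), the same structure computes single-source shortest paths, which is the key elegance of the GraphBLAS approach.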

Intended audience: Developers interested in implementing high-performance graph algorithms.



Expected prior knowledge: familiarity with linear algebra helps, but we only use basic concepts such as matrix-matrix multiplication.

Back

Selecting a Finite Element Analysis Backend for Exascale Fusion Reactor Simulations

Home

Speaker Aleksander J. Dubas
RoomUB5.132
TrackHPC, Big Data, and Data Science
Time14:30 - 14:55
Event linkView original entry

Accelerating the development of fusion energy requires large scale simulations on cutting edge supercomputing resources.
Great hardware is only half the challenge and the software must be scalable to match.
This talk presents an objective approach to selecting a suitable backend for fusion simulations.

The UKAEA's mission is to develop commercially viable fusion energy.
Current fusion technology is yet to break even
on power out compared to power in,
thus designs for future reactors,
which necessarily must exceed break even,
carry a great amount of uncertainty.
With cost estimates of a first of a kind fusion reactor
in the order of billions of euros,
any design flaw making it through to the construction stage
will be an expensive mistake.



Thankfully, software can help.
By simulating a fusion reactor prior to construction,
the design can be tested and refined for a considerably lower cost.
However, covering all the necessary scales and physics
for a digital twin of a fusion reactor
requires computational resources at the exascale.



In this work, a number of potential finite element backends
for a multiphysics reactor simulation are evaluated.
The sheer scale makes open source a practical necessity
and scalability is the primary performance metric.
From the plethora of open source finite element libraries,
the most promising are selected
and compared against a number of objective, unbiased criteria.



None of the tested backends scored perfectly in all criteria,
so a method and rationale for weighting the results
to select the best one for the purpose is presented.
The aspects of open source projects
that are important to high performance computing are highlighted.

Back

Build for your microarchitecture: experiences with Spack and archspec

Home

Speaker Todd Gamblin
Room UB5.132
Track HPC, Big Data, and Data Science
Time 15:00 - 15:25
Event link View original entry

In HPC, software is typically distributed as source code, so that users can build optimized software that takes advantage of specific microarchitectures and other hardware. While this approach provides a lot of flexibility, building software from source remains a huge barrier for users accustomed to simple, fast binary package managers. Most package managers and container registries label binaries with a high-level architecture family name, e.g., x86_64 or ppc64le, but there is no standard way to label binaries for specific microarchitectures (haswell, skylake, power9, zen2, etc.).



We’ll present a new project called “archspec” that aims to bridge this gap. Archspec provides a standard set of human-understandable labels for many popular microarchitectures. It models compatibility relationships between microarchitectures, and it aggregates information on ISA extensions, compiler support, and compiler flags needed to optimize these machines. Finally, it provides a standard set of names for both microarchitectures and ISA features. These features allow container tools and package managers to detect, build, and use optimized binaries.
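To make the compatibility model concrete, here is a minimal sketch of the idea, assuming a hand-written parent table: a binary built for target T runs on host H if T is H itself or one of H's ancestors. The target names are real microarchitectures, but the table and functions below are illustrative, not archspec's actual API or data.

```python
# Toy model of microarchitecture compatibility as a DAG of parents.
# Illustrative only; archspec's real model carries far more data
# (ISA features, compiler support, optimization flags).

PARENTS = {
    "x86_64": [],
    "haswell": ["x86_64"],
    "skylake": ["haswell"],
    "cascadelake": ["skylake"],
}

def ancestors(uarch):
    """All microarchitectures that `uarch` is backward-compatible with."""
    seen = set()
    stack = [uarch]
    while stack:
        for parent in PARENTS[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def can_run(host, target):
    """Can a binary compiled for `target` run on `host`?"""
    return target == host or target in ancestors(host)

print(can_run("skylake", "haswell"))      # True: skylake extends haswell
print(can_run("haswell", "cascadelake"))  # False: host lacks the newer ISA
```

A package manager or container registry could use such a relation to pick, from several labeled binaries, the most specific one the host supports.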



Archspec grew out of the Spack package manager, but it is intended for widespread use by other build, packaging, and containerization tools. We will describe how it has been used in practice so far, how it has simplified writing generic packages, and our plans to get contributions from vendors and the broader community.

Expected prior knowledge / intended audience:
Audience should have basic knowledge of build systems, as well as some knowledge about processor architectures. There will be some brief background on this in the talk. This will be interesting to HPC users, developers, packagers, and admins, as well as to anyone writing tools that deal with microarchitecture metadata (like container systems).



Speaker bio:
Todd Gamblin is a Senior Principal Member of Technical Staff in the Advanced Technology Office in Livermore Computing at Lawrence Livermore National Laboratory. His research focuses on scalable tools for measuring, analyzing, and visualizing parallel performance data. In addition to his research, Todd leads LLNL's DevRAMP (Reproducibility, Analysis, Monitoring, and Performance) team and the Software Packaging Technologies project in the U.S. Exascale Computing Project. He created Spack, a popular open source HPC package management tool with a community of over 450 contributors. Todd has been at LLNL since 2008.



Links to code / slides / material for the talk (optional):
To be provided closer to FOSDEM.



Links to previous talks by the speaker:
https://www.youtube.com/watch?v=DRuyPDdNr0M
https://www.youtube.com/watch?v=edpgwyOD79E&t=2891s
https://www.youtube.com/watch?v=BxNOxHu6FAI
https://insidehpc.com/2019/03/spack-a-package-manager-for-hpc/
https://www.youtube.com/watch?v=iTLBkpHskzA



See https://tgamblin.github.io/cv/todd-cv.pdf for more (including tutorials and other presentations at major conferences)

Back

HPC on OpenStack

Home

Speaker Ümit Seren
Room UB5.132
Track HPC, Big Data, and Data Science
Time 15:30 - 15:55
Event link View original entry

HPC systems have traditionally been operated as monolithic installations on bare-metal hardware, primarily used by users with a computational background to submit classic batch jobs. However, the commoditization of compute resources and the introduction of new scientific fields such as the life sciences to high performance computing have caused a shift in this paradigm. Today, an increasing amount of biological software is made accessible through web portals. This improved ease of use has led towards a democratization of access to computational resources.
Users from those fields don't have the same computational knowledge as traditional HPC users from physics or chemistry, and they additionally require different kinds of workloads and applications that don't fit traditional non-interactive batch-scheduling resource management systems. Additionally, cloud computing is becoming more and more relevant, and various efforts to lift HPC into the cloud have been started.



We manage the HPC infrastructure for 3 life science and 2 particle physics institutions at the Vienna Bio Center (VBC). For the new HPC system that was procured at the end of 2018, we decided to go with an on-prem cloud framework based on OpenStack to accommodate the various emerging workflows and programs. OpenStack is not a finished product and requires a considerable amount of engineering. It took us around 2 years of testing and engineering to feel confident in deploying the new HPC infrastructure on top of OpenStack. Since summer 2019 we have had our 200-node production SLURM cluster running on top of VMs in OpenStack.



In this talk we want to share our experiences from our endeavor into HPC on OpenStack. We want to briefly discuss the reasoning behind HPC in the cloud and specifically OpenStack.
Oftentimes these kinds of projects either completely fade away in case of failure or get published in a high-level white paper that is only useful as marketing material.
We want to share our honest experience from both the implementer's and the operator's perspective. We discuss how we use 3 environments to test updates and configuration changes. We will also explain our approach to automation and infrastructure as code, all the way from the underlying infrastructure to the SLURM payload, and how we keep our sanity using development procedures around pull requests and code reviews. We will also share some stories from the trenches, such as why you still learn new things about OpenStack after 1000 deploys, or discover that a simple config change can destroy performance.
This talk will contain information that you won't find in success stories or white papers but that is hopefully very helpful for anyone who considers deploying HPC on OpenStack.

Back

Interactive applications on HPC systems

Home

Speaker Erich Birngruber
Room UB5.132
Track HPC, Big Data, and Data Science
Time 16:00 - 16:25
Event link View original entry

Exploratory data analysis has increased the demand for interactive tools. In the same way, workshops and other teaching events often benefit from immediate and on-demand access to preconfigured, interactive environments.



For low resource requirements, these interactive environments can be run on workstations. However, as user count and resource demand increase, these setups become more complex. While these frameworks typically provide good support for cloud-based deployments on container orchestration platforms, it is often preferable to deploy them on existing compute infrastructure that provides access to both software packages and the data to be analysed. Deployment on HPC batch systems specifically brings challenges in how to handle authentication, user identities, and job submission.



The architecture of these applications can, in most cases, be considered as following the master-minion paradigm. One central component manages user access and acts as a gateway. It launches one or multiple per-user instances of a compute component that provides the actual user environment.



We want to demonstrate how we provide applications like Galaxy, Jupyterhub, and RStudio to scientists of the Vienna Biocenter. The presentation will focus on the similarities and pitfalls of these deployments. We run the web application gateway based on our standardized container environment. The compute components run as SLURM jobs on the CLIP batch environment (CBE). Specific focus will be placed on the integration of web-based Single-Sign-On, and how we address the management of user identities for starting jobs on the batch system. Sources and configuration examples on the specific setup will be provided.
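As an illustration of this gateway-plus-batch-job pattern, a JupyterHub configuration could delegate session spawning to SLURM via the batchspawner project. This is a minimal sketch under stated assumptions: the partition name, resource limits, and batch script are placeholders, not the presenters' actual setup.

```python
# jupyterhub_config.py (sketch): a central gateway (JupyterHub)
# launching per-user compute sessions as SLURM jobs via batchspawner.
# The `c` object is provided by JupyterHub when it loads this file.
# Partition, resources, and the script template are hypothetical.
import batchspawner  # noqa: F401

c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"

c.SlurmSpawner.req_partition = "interactive"  # hypothetical partition name
c.SlurmSpawner.req_memory = "4G"
c.SlurmSpawner.req_runtime = "08:00:00"
c.SlurmSpawner.batch_script = """#!/bin/bash
#SBATCH --partition={partition}
#SBATCH --mem={memory}
#SBATCH --time={runtime}
#SBATCH --output={homedir}/jupyterhub_slurm.log
{cmd}
"""
```

In such a setup the gateway itself can run in a container environment while each user's notebook server is accounted for and scheduled like any other batch job.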



After the operator’s perspective, we will pan to the end-user’s view. Beginners and workshop situations typically call for a static, pre-configured setup of the user session. Contrary to that, advanced users will want to customize their execution environment as much as possible. We will explore how scientists can tailor the setup to their individual needs.



Finally, we will summarize the setups of the applications in a high-level comparison from both the operator's and the end-user's perspectives.

Back

Building cloud-based data services to enable earth-science workflows across HPC centres

Home

Speaker John Hanley
Room UB5.132
Track HPC, Big Data, and Data Science
Time 16:30 - 16:55
Event link View original entry

Weather forecasts produced by ECMWF and environment services by the Copernicus programme act as a vital input for many downstream simulations and applications. A variety of products, such as ECMWF reanalyses and archived forecasts, are additionally available to users via the MARS archive and the Copernicus data portal. Transferring, storing and locally modifying large volumes of such data prior to integration currently presents a significant challenge to users. The key aim for ECMWF within the H2020 HiDALGO project is to migrate some of these tasks to the cloud, thereby facilitating fast and seamless application integration by enabling precise and efficient data delivery to the end-user. The required cloud infrastructure development will also feed into ECMWF's contribution to the European Weather Cloud pilot which is a collaborative cloud development project between ECMWF and EUMETSAT.



ECMWF and its HiDALGO partners aim to implement a set of services that enable the simulation of complex global challenges which require massive high performance computing resources alongside state-of-the-art data analytics and visualization.



ECMWF's role in the project will be to enable seamless integration of two pilot applications with its meteorological data and services, delivered via ECMWF's Cloud and orchestrated by bespoke HiDALGO workflows. The demonstrated workflows show the increased value not only of weather forecasts but also of derived forecasts for air quality, as provided by the Copernicus Atmosphere Monitoring Service (CAMS).



The HiDALGO use-case workflows are comprised of four main components: pre-processing, numerical simulation, post-processing and visualization. The core simulations are ideally suited to running in a dedicated HPC environment, due to their large computational demands, coupled with the heavy communication overhead between parallel processes. However, the pre-/post-processing and visualisation tasks generally do not demand more than a few cores to compute and do not require message passing between instances, hence they are good candidates to run in a cloud environment. Enabling, managing and orchestrating the integration of both HPC and cloud environments to improve overall performance is the key goal of HiDALGO.
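The placement reasoning above can be sketched as a toy routing rule; the stage definitions and core-count threshold below are purely illustrative, not HiDALGO's actual orchestration logic:

```python
# Toy sketch of routing workflow stages to HPC or cloud resources.
# Tightly coupled (message-passing) stages need an HPC interconnect;
# loosely coupled few-core stages fit ordinary cloud instances.
# Threshold and stages are hypothetical.

def placement(stage):
    if stage["needs_mpi"] or stage["cores"] > 64:
        return "hpc"
    return "cloud"

workflow = [
    {"name": "pre-processing",  "cores": 4,    "needs_mpi": False},
    {"name": "simulation",      "cores": 4096, "needs_mpi": True},
    {"name": "post-processing", "cores": 8,    "needs_mpi": False},
    {"name": "visualisation",   "cores": 2,    "needs_mpi": False},
]
print({s["name"]: placement(s) for s in workflow})
# {'pre-processing': 'cloud', 'simulation': 'hpc',
#  'post-processing': 'cloud', 'visualisation': 'cloud'}
```

A real orchestrator would of course also weigh data locality and transfer costs, which is exactly the coupling problem HiDALGO targets.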



This talk will give a general overview of the HiDALGO project and its main aims and objectives. It will present the two test pilot applications which will be used for integration, and an overview of the general workflows and services within HiDALGO. In particular, it will focus on how ECMWF's cloud data and services will couple with the test pilot applications to improve overall workflow performance and enable access to new data for the pilot users.



This work is supported by the HiDALGO project and has been partly funded by the European Commission's ICT activity of the H2020 Programme under grant agreement number: 824115.

Back

Community DevRoom Welcoming Remarks

Home

Speaker Laura Czajkowski
Leslie Hawthorn
Room UB5.230
Track Community devroom
Time 09:00 - 09:05
Event link View original entry

Welcoming remarks and housekeeping items from your Community DevRoom 2020 Co-Chairs, Laura Czajkowski, Leslie Hawthorn, and Shirley Bailes

Back

Applying Open Culture Practices across Distributed Teams

Home

Speaker Katrina Novakovic
Room UB5.230
Track Community devroom
Time 09:05 - 09:30
Event link View original entry

Distributed teams are where the people you work with aren't physically co-located, i.e. they're at another office building, at home, or at an outsourced company abroad. They're becoming increasingly popular, for DevOps and other teams, due to recruitment, diversity, flexibility and cost savings. Challenges arise due to timezones, language barriers, cultures and ways of working. People actively participating in Open Source communities tend to be effective in distributed teams. This session looks at how to apply core Open Source principles to distributed teams in Enterprise organisations, and at the importance of shared purposes and goals, (mis)communication, leading vs managing teams, and sharing and learning. We'll also look at practical aspects of what's worked well for others, such as alternatives to daily standups, promoting video conferencing, time management and virtual coffee breaks. This session is relevant for those leading or working in distributed teams who want to know how to cultivate an inclusive culture of increased trust and collaboration that leads to increased productivity and performance. All are welcome to attend.

Back

Organizing Open Source for Cities

Home

Speaker Jacob Green
Room UB5.230
Track Community devroom
Time 09:35 - 10:00
Event link View original entry

Open Source is vital in the expansion wave of smart cities. Yet, where is the sustainable municipal open innovation economic engine/s, and how do we start them spinning at scale? — Only through structured collaboration and community. We present the community collaboration efforts, accomplishments, and vision of the partners behind the launch of the Johns Hopkins Open Source Program Office for Open Cities, the community creation efforts of the City of Paris's open source city services platform Lutece, and the interactions with and between Baltimore communities, Paris communities, and open source communities and institutions.

Open Source is vital in the expansion wave of smart cities. Yet, where are the sustainable municipal open innovation economic engines, and how do we start them spinning at scale? Only through structured collaboration and community. The open source communities and institutions are highly successful at this in other industries. In cities, open source is not enough; we also need open data, open standards, etc. As we scale, openness and transparency, interoperability, feedback mechanisms, security, non-bias, and privacy become dominating design requirements. Accelerating the scale of good solutions needs help and structure. There are 18,000 municipalities in the US alone, currently siloed, and meaningful technical and community cooperation is minimal. We need a new flexible institutional framework to advance cooperation and scaling within our interdisciplinary design requirements.



The open source program office is a successful industry construct in the open source world, and we aim to investigate adapting this construct to accelerate and scale open cities; from open source software & services, open data, and standards, to non-bias, security, privacy, access, diversity, and above all TRUST!



Johns Hopkins University has launched what is believed to be the first OSPO in higher education, in part to support Open Cities, including Baltimore. Jacob Green from Mosslabs.io will discuss the launch of the JHU OSPO, its initial initiatives, and collaborations with the City of Paris.

Back

The next generation of contributors is not on IRC

Home

Speaker Matthew Broberg
Room UB5.230
Track Community devroom
Time 10:05 - 10:30
Event link View original entry

There is some combination of a turf war and a diaspora happening in the open source communities I participate in. There are synchronous and async channels galore. Every one of them has fans and haters with firmly held opinions on how it's the best or worst thing in the world.



Let's take a step back and take a look at the landscape together. What are our communities searching for when they hop into communication channels? How do we meet new members where they are comfortable in order to be more welcoming?



As a self-defined member of the GitHub generation of open source enthusiasts, I'd like to start a conversation from my personal experience and then jump into research on the options available to us today. I hope we can leave with a view of the world spanning across channels, with a focus on our contributors.

Topics to be discussed:
- Defining new contributors - where are they coming from and why
- Empathize with an inexperienced contributor trying to jump right in
- Introduce the concept of "a third place"
- The challenges of synchronous communication (IRC, Slack, Gitter, others)
- Options to focus on asynchronous channels (Email, Discourse, others)
- Understanding when you need which
- A pitch for why IRC could be great with Matrix.org (Riot.im)
- Bringing it back to the new contributor with specific examples
- An optimistic conclusion of our collective efforts to improve

Back

The Ethics of Open Source

Home

Speaker Don Goodman-Wilson
Room UB5.230
Track Community devroom
Time 10:35 - 10:55
Event link View original entry

Open Source was supposed to level the playing field for creating and consuming software by reducing the monopolistic power of companies building proprietary software. But we didn't get the kind of democratized gift economy we were expecting. Instead, we are seeing open source creating opportunities and incentives for the already privileged to create new, and exacerbate existing, injustices. To the extent that we want to use software to create a just world, we should reject the Open Source ideology, and start thinking seriously about what comes next.

The Open Source Movement has always been focused on code. The result is a system that sadly neglects people, and now many maintainers are in a bad place, as they struggle to figure out how to make ends meet even as their labor creates immense value for others, and how to avoid making the lives of others worse through weaponized code. We find ourselves in this position because the key Open Source values exacerbate an existing injustice; by valuing the consumers of code over the producers of code, Open Source helps concentrate power in the hands of already powerful economic actors at the expense of maintainers. I feel that not only could we do better, we have a moral imperative to find better development models.



Building on Scanlon's contractualist theory of morality, we will apply it to the world of open source—and the results will shock you. Open Source as an ideology is focused first and foremost on code, rather than people. As I have argued in the past, and will continue to argue, morality is about people first and foremost. This by itself doesn’t damn the Open Source movement, but it doesn’t take a whole lot of digging into the heart of Open Source to see that it has created a context in which maintainers are dehumanized, atrocities are visited upon innocent third parties, and large wealthy corporations are lionized. If we think that people matter, then we must reject the Open Source ideology. I’ve not the foggiest idea what comes next, but it’s time to start having a serious conversation about what a collaborative development model that values people first and foremost looks like.

Back

Engineers, Call Your Policy People!

Home

Speaker Astor Nummelin Carlberg
Paula Grzegorzewska
Room UB5.230
Track Community devroom
Time 11:00 - 11:25
Event link View original entry

OpenForum Europe's mission is to connect communities, organisations and developers with policymakers. In this talk we will discuss what we've learned during our campaign against the Copyright Directive and present an important opportunity to engage with European policymaking today: providing input to the European Commission’s study on the impact of Open Source Software and Hardware.

Anyone can get involved in influencing policies surrounding FOSS, either by building relationships with the policy or government relations teams within companies and organisations, or by engaging directly with organisations specialising in digital policy. One of the direct ways is to participate in our study, which aims to fill a major gap in the debate, as there are currently no authoritative studies on the economic impact of Open Source. Our project will focus on the influence of OSS and OSH on the European economy and take lessons from comparative perspectives from around the world, all in order to serve as a scientific basis for the development of future OS policies in Europe with potential global importance.



OFE and Fraunhofer ISI will conduct the study together, and we want to invite the broader OS community to get involved just like we did throughout our successful Copyright Directive campaign. This is more important than ever, as there are new challenges and opportunities for FOSS on the horizon, and we all know: the community is key.

Back

Building Ethical Software Under Capitalism

Home

Speaker Deb Nicholson
Room UB5.230
Track Community devroom
Time 11:30 - 11:50
Event link View original entry

The software that is the easiest to build -- the software that is the easiest to fund the development of -- tends to serve those who are already extremely well-served. So, how do we bridge the gap between what society needs and what many people with money want to fund? Free and open source software platforms can get us part of the way there, but without some big changes, it won't be enough. Let's talk structure!

We want to provide useful, intuitive, non-invasive software that all people can use, whether they personally have money for fancy customizations or not. But the software that is the easiest to build -- the software that is the easiest to fund the development of -- tends to serve those who are already extremely well-served. A technology community that primarily serves privileged people, while leaving all other users behind is not one we should expect people to spend their unpaid or volunteer time on. And for certain reprehensible functions, no one should be building the software at all, under any license. So, how do we bridge the gap between what society needs and what many people with money want to fund?



This talk will cover:



* Non-profits, fundraising and community-building
* Small businesses, co-ops and other niches
* Possible changes to the broader landscape


If we want to build a better world, we will have to move beyond quick fixes and silver bullets. Free and open source software platforms can get us part of the way there, but without some big changes, it won't be enough. We need to build ethical structures for the creation of ethical software.

Back

Cognitive biases, blindspots and inclusion

Home

Speaker Allon Mureinik
Room UB5.230
Track Community devroom
Time 11:55 - 12:20
Event link View original entry

Open source thrives on diversity. The last couple of years have seen huge strides in that respect, with codes of conduct and initiatives like the Contributor Covenant. While these advancements are crucial, they are not enough. In order to be truly inclusive, it's not enough for the community members to be welcoming and unbiased; the communities' processes and procedures must also support inclusiveness by not only making marginalized members welcome, but allowing them to fully participate.

Back

Growing Sustainable Contributions Through Ambassador Networks

Home

Speaker Shilla Saebi
Alison Yu
Room UB5.230
Track Community devroom
Time 12:25 - 12:50
Event link View original entry

Open Source Program Offices are utilizing ambassador programs more and more. We'll talk about why we decided to implement ambassador programs, how we implemented them, got buy-in (from a time and budget standpoint), and more.



We'll both talk about how we use this program in our respective companies to scale and reach thousands of developers internally. We'll also throw in a few case studies and lessons learned throughout our (ongoing) journeys.

Comcast and Indeed are committed to fostering open source contributions to the external projects that we depend on. One type of program that both companies use is an Open Source Ambassador Program to help new and experienced individuals contribute to open source projects that they use.



During this talk we’ll go over what an ambassador program is, how we decided to use them in our organizations, the path to buy-in and budget approval, how they were implemented, results we saw, and lessons learned. We’ll present specific case studies of how our Ambassador Programs helped with specific campaigns and how that fosters open source sustainability.



At Indeed we’ll compare results from Hacktoberfest in 2018 and 2019, and the results that we saw before and after implementing the Open Source Ambassador Program.



At Comcast, the Open Source Ambassador working group was formed in 2018; its ambassadors focus on Open Source practices, raising awareness, compliance and strategy. The aim of the OSAP is to foster discussion across the various Comcast tech offices and to understand Open Source needs and issues across the organization.

Back

Bringing back ethics to open source

Home

Speaker Tobie Langel
Room UB5.230
Track Community devroom
Time 12:55 - 13:15
Event link View original entry

Most discussions around ethical licenses today consider the Open Source Definition (OSD) with the same reverence as Moses did the tablets delivered to him on Mount Sinai.



The OSD is in fact much more mundane than that. And it tells us more about its authors than about the open source movement in general; had open source been born in less privileged circles, ethical considerations would have been baked in from the start.



With that in mind, let's revisit what we're actually trying to collectively achieve through the open source movement, and reconsider the notion that its mission requires that we allow the software we build to be used in violation of Human Rights.



There are minimally-disruptive changes that can be made to the OSD and to existing licenses which would put ethical concerns center stage, where they belong, and help us foster responsibility and accountability within our community and within software vendors.



We'll look at the past attempts at creating ethical licenses and why they have failed. We'll ask all of the hard questions, even those we don't have good answers to yet. And we'll propose a new, multi-pronged approach to this issue. One that we believe, while more demanding to implement, has a much better chance of success than previous attempts have had.

Back

Be The Leader You Need in Open Source

Home

Speaker Megan Sanicki
Room UB5.230
Track Community devroom
Time 13:20 - 13:45
Event link View original entry

Stronger open source leadership can address a myriad of sustainability challenges and there is a call for more leaders in every project. Good news! Every contributor is a leader either through self leadership, leading others, or leading the community, yet most people have never been trained on how to lead.



This talk provides the leadership training you need and covers:
- Why strengthen community leadership
- Key leadership and emotional intelligence principles
- Practical ways to lead as a contributor

Open source crossed the chasm into mainstream with users in all industries. Maintaining the users’ trust and sustaining innovation is key to open source’s success.



However, in a world where communities are passionate, multicultural, and primarily use online communication, it is challenging to move communities towards a shared vision in a frictionless, sustainable way. Community challenges can impact innovation, putting user adoption at risk and even more importantly, hurting community members.



Stronger open source leadership can address these challenges and there is a call for more leaders in every project. Good news! Every contributor is a leader either through self leadership, leading others, or leading the community, yet most people have never been trained on how to lead.



This talk provides the leadership training you need and covers:
- Why strengthen community leadership
- Key leadership and emotional intelligence principles
- Practical ways to lead as a contributor

Back

Building Community for your Company’s OSS Projects

Home

Speaker Dawn Foster
Room UB5.230
Track Community devroom
Time 13:50 - 14:10
Event link View original entry

Your company has just started an open source project, but where is the community? This talk provides practical tips and suggestions along with what not to do when building a community around your company’s open source project.

Building a community around your company’s open source project is no easy task, and there is no magic bullet or one size fits all solution. However, there are some things that you can do (or not do) to increase the chances of successfully building a community for your project.



A few of the dos and don’ts covered in this talk include:






The audience will walk away with practical advice about building communities for corporate open source projects.

Back

Lessons Learned from Cultivating Open Source Projects and Communities

Home

Speaker JJ Asghar
Room UB5.230
Track Community devroom
Time 14:15 - 14:40
Event link View original entry

Over the last decade, I've had the privilege professionally of building and cultivating some Open Source projects and communities. I've grown other projects along the way, some successful and some not. I've learned a ton on this journey; honestly, I still am, and I want to tell this story.

Over the last decade, I've had the privilege professionally of building and cultivating some Open Source projects and communities. To start off, this isn't a tools talk; this is a talk about the soft skills you have to have to be able to succeed as a leader in an Open Source project. My journey started with tending the frequently asked questions for a small Linux distribution called CRUX, and years later I moved professionally to the OpenStack-Chef project to build OpenStack clouds. I've grown other projects along the way and helped build tooling and communities, some successful and still running today, others just flashes in the pan. I've learned a ton on this journey; honestly, I still am, but I have some hard-learned lessons, and hopefully I can warn of pitfalls that can cause wasted cycles and pain.
I’ll be going over:



- This isn’t a tools talk
- Scoping your project
- Personally-backed
- Corporate-backed
- Empathy and audience are important
- Celebrations
- Defeats
- Successful traits of Open Source projects
- Trust
- Clear vision
- Have a plan to move on if needed
- Honestly, is it even worth this hassle?
Back

Free software in education

Home

Speaker Dominik George
Niels Hradek
Room UB5.230
Track Community devroom
Time 14:45 - 15:10
Event link View original entry

Schools are where the IT and software users of tomorrow are made, and next to teaching digital skills, educating on privacy and the consequences of using different types of software and services plays an important role. We would like to report on various projects from the field.

Helping schools and teachers with using free software is far more involved than just selling a product. Where big companies have huge budgets for advertising and marketing, free software projects have to attract educators with the power of the community.



The most exciting upside of this is that our community not only sells a product, but wants to get people involved. One of the benefits of free software for educators is that all our community goals play into their hands: free software is the basis for extending independence, democracy and all the values modern schools are supposed to convey into the digital lives of students.



As a person or project getting involved with free software in education, there are many challenges and opportunities. Teckids and the projects around it have collected experience from the work with schools, teachers, political decision makers and free software developers that we would like to share with the community.

Back

Engaging Enterprise consumers of OSS

Home

Speaker Jacob Redding
RoomUB5.230
TrackCommunity devroom
Time15:10 - 15:35
Event linkView original entry

It is no secret that open source software is a foundational element of many enterprise IT and software development strategies, and it's also no secret that the rate of participation, contribution, or support amongst many enterprise companies lags significantly behind the adoption rate. Higher rates of participation are seen in software-based or software-forward companies founded in the past decade, but older companies have been slow to adapt. The solution to participation is often seen as a cultural shift, but this only accounts for a portion of the lack of participation. Motivation and incentive structures, legal structures, and project governance and management structural alignments can have a bigger impact on enterprise participation in open source projects. In this talk I'd like to discuss a mixture of academic research and my personal real-world experience in bridging the gap between enterprise development and open source projects.

Over the past decade Open Source has grown to become the de facto standard and preferred software for Enterprise businesses and government agencies and continues to be the choice amongst small businesses and non-profits. We are now entering a new era where not only are these organizations adopting the use of Open Source, but they are actively participating in and contributing back to the projects. This is a large shift that requires change both within the Enterprise organizations and in the open source projects to welcome these new contributors and methods of working. However, not all organizations are ready for this shift and not all open source projects are ready to handle enterprise collaboration.



As an example, motivation and incentive structures are often misaligned, with an open source project incentivizing long-term participation and enterprise software development encouraging short and fast development and deployment. The problem is compounded when discussing the role of system integrators, outsourced development agencies, and consulting agencies, which are frequently used to accelerate the development of enterprise software. These mismatched timelines, management practices, and incentive structures can lead to reduced participation in open source software. However, there are changes that can be made on both sides to counteract this tension, which can lead to greater participation of enterprise software developers in open source software.



In this talk I’d like to call upon academic research and lessons learned in other industries. In particular I pull lessons from the following areas:



“Community Development as a Process” - 1970 - Lee J. Cary - A collection of academic research on community development organizations, with reviews of the psychological (Warren C. Haggstrom) and sociological (Willis A. Sutton, Jr.) implications of development organizations (including discussions of burnout and its impact on the individual and community), and the role of the agent in the community development process (Robert Morris)



“Governing The Commons” - 1990 - Elinor Ostrom - An academic review of institutions for collective action, including longitudinal studies of fisheries and forest management.



“Roles of Boundary Organizations” - 2008 - Siobhan O’Mahony and Beth A. Bechky (University of California, Davis) - Research drawing on social movement and organizational theory that reviews the roles of a boundary organization (associations, foundations, etc.) in managing four critical domains - governance, membership, ownership, and control over production - to provide analytic levers for determining when boundary organizations work.



“How Firms Leverage Crowds and Communities for Open Innovation” - 2016 - Joel West (Keck Graduate Institute) and Jonathan Sims (Babson College) - Research on crowds and communities, identifying a third form — a crowd-community hybrid — that combines attributes of both



"100 Years of Sustainability" - 2019 - Me :) Jacob Redding - A review of the American Society of Composers, Authors and Publishers (ASCAP) and the lessons the open source world could learn and adapt to build the next century of sustainability in open source development and innovation.



In addition I want to bring in my own personal experience from 12+ years growing the Drupal Open Source project, spending time as a developer, founding board member, and founding Executive Director, and watching the project grow from a handful of developers to over 30,000 active contributors. I also pull from the other side of the table in my current role as an Open Source Strategy/Governance lead within Accenture - a systems integrator with nearly 500,000 employees worldwide. In my role at Accenture I craft our internal open source strategies, including the use of Innersource and the legal and cultural shifts needed to encourage more open source participation.

Back

Recognising Burnout

Home

Speaker Andrew Hutchings
RoomUB5.230
TrackCommunity devroom
Time15:40 - 16:05
Event linkView original entry

Mental health is becoming an increasingly important topic. For this talk Andrew will focus on one particular aspect of mental health, burnout. Including his own personal experiences of when it can get really bad and steps that could be taken to help catch it early.

Working in technology can be extremely demanding and stressful. People put a lot of passion and a lot of themselves into what they do, removing the separation between their work and themselves. This can lead to burnout, which in many ways is similar to depression.



In this talk Andrew will talk through his personal experience of his worst case of burnout including the mental and physical toll it took, as well as giving advice on how to spot it early and ways to help mitigate against it.

Back

How Does Innersource Impact on the Future of Upstream Contributions?

Home

Speaker Bradley M. Kuhn
RoomUB5.230
TrackCommunity devroom
Time16:10 - 16:35
Event linkView original entry

Innersource is a growing phenomenon that is widely viewed as an improvement over existing regimes of proprietary silos within for-profit corporate walls. The bargain it strikes is compelling but curious: developers yield benefits that please them regarding software sharing & improvement, while companies succeed in keeping their software crown jewels locked up & locked down.
How will that impact software freedom? Will it increase or decrease upstream contribution? Will developers use Innersource as a jumping-off point to FLOSS contribution, or will silos stay siloed? What can Open Source Program Offices do to mitigate the downsides of Innersource in an effort to increase FOSS-curious employee retention and interest? This talk explores these issues.


Back

Building a thriving community in company-led open source projects

Home

Speaker Ray Paik
RoomUB5.230
TrackCommunity devroom
Time16:40 - 17:00
Event linkView original entry

Traditional open source projects involve contributors from different organizations with the goal of addressing a common challenge for the participating organizations or even the entire industry. Today, there are also many companies with businesses built around open source software. At first glance, it may seem impossible to build an open and authentic community for a company-based open source project when you need to balance the needs of the company with those of the community. In this talk, you will find that there are actually a lot of similarities between contributors in company-based and traditional open source projects. Additionally, it is possible to build a thriving community by satisfying the intrinsic motivation of contributors and providing opportunities for community members to have a voice in how the community (or even the business) is run. The presenter will also discuss how they transitioned from working as a community manager in foundation-based projects to a company-based open source project.

Some of the attributes of successful open source projects include welcoming diverse viewpoints, lowering barriers to entry, being responsive to community contributions, and remaining transparent about decision making. When you are building a community for a company-based open source project, there’s no reason why you cannot incorporate most (if not all) of these aspects.



Additionally, community members often decide to get involved in an open source community (whether it is company-based or otherwise) because they are passionate about the technology, culture, and/or other members of the community. So when you are building a community in a company-based open source project, it is important to understand your members’ (especially intrinsic) motivations and to focus on cultivating an environment where community members can have a fulfilling experience. In your community, you may also be able to provide opportunities for professional growth (e.g. in programming, leadership, collaboration, and other skills) that community members can benefit from.



There are of course differences between company-based and traditional open source communities, and you will need to adopt different approaches in areas such as communications, community governance, metrics, recognition, etc. The presenter will be sharing their learnings, including their transition from traditional open source communities to a (for-profit) open source company.

Back

Welcome to the Free Tools & Editors Room!

Home

Speaker Geertjan Wielenga
Lars Vogel
Trisha Gee
RoomUD2.119
TrackFree Tools and Editors
Time09:00 - 09:05
Event linkView original entry

A quick introduction to the room, the sessions, and the team that put everything together. :-)

Back

Insights into the Eclipse Open Source Project - News from the Eclipse Platform and IDE Project

Home

Speaker Lars Vogel
RoomUD2.119
TrackFree Tools and Editors
Time09:05 - 09:25
Event linkView original entry

Join this talk to learn about the current status of the Eclipse IDE Open Source projects. We'll talk about new developments, our improvements in the development process, performance improvements, and new features in the Eclipse IDE.

In this talk, we'll demo the new features of the Eclipse IDE and show the improvements of the latest, best, and fastest Eclipse IDE ever.

Back

Surfing the Tsunami - News from the IntelliJ IDEA Community

Home

Speaker Trisha Gee
RoomUD2.119
TrackFree Tools and Editors
Time09:30 - 09:50
Event linkView original entry

With releases of Java coming thick and fast every 6 months, it's a full-time job staying on top of the features. If your IDE can help you here, it's one less thing to worry about. IntelliJ IDEA Community had three releases this year, each one improving support for modern versions of Java, but that's not the only thing on offer.



Join this session to see what's new in IntelliJ IDEA Community (the free one!). This is not limited to support for new language features, which some of us might not get to use for ages; it also includes better support for things developers do every day, and improved performance and stability, because an all-singing, all-dancing IDE is all well and good, but if it's not usable those features mean nothing.

Back

From Oracle to Apache - News from the NetBeans Community

Home

Speaker Geertjan Wielenga
RoomUD2.119
TrackFree Tools and Editors
Time09:55 - 10:15
Event linkView original entry

NetBeans is now a top level Apache project! How did it get to Apache and what's the state of the donation process? What are the new features and how can you get involved? Join this session to find out!

Back

New Java Features & Apache NetBeans

Home

Speaker Jan Lahoda
RoomUD2.119
TrackFree Tools and Editors
Time10:20 - 10:40
Event linkView original entry

The Java platform is experiencing an outburst of cool new features: recently, local variable type inference, switch enhancements, and multi-line string literals have been added to the Java language. Many other features are in the pipeline and actively worked on, like simple data carriers and pattern matching for the Java language, or value classes for the Java virtual machine. These features are delivered quickly, thanks to the recently adopted, predictable, six-month schedule of major Java SE releases. This new release cadence means new Java platform features are delivered twice every year!



In this talk, we will show a live demo of many of the recently added and newly developed features and improvements for the Java platform, including those mentioned above. The Apache NetBeans IDE will be used to demonstrate the features.

Back

OpenBeans IDE - Creating an Apache NetBeans Distribution

Home

Speaker Emilian Bold
RoomUD2.119
TrackFree Tools and Editors
Time10:45 - 11:05
Event linkView original entry

A talk about how OpenBeans, an Apache NetBeans IDE distribution, was made possible by pkgsrc, the NetBSD package management framework.

OpenBeans IDE started in Nov 2018 under the name CoolBeans. This was an Apache NetBeans IDE 'distribution' which repackaged the Apache NetBeans modules, plus some other modules that were not yet available (such as the C/C++ support modules, JavaEE modules, native notifications on macOS/Windows, etc.).



CoolBeans was open sourced under the plain ISC license under the OpenBeans name.



OpenBeans produces Windows installers as well as macOS disk images for end users. It does this using pkgsrc from NetBSD, which seems to be a very good match for distribution-like projects.

Back

LSP for Java and GraalVM Development

Home

Speaker Martin Entlicher
RoomUD2.119
TrackFree Tools and Editors
Time11:10 - 11:30
Event linkView original entry

In this session, we will explore how the Language Server Protocol, LSP, is used to aid developers. We will cover not only the Java language, but also other languages, especially those supported by the GraalVM, like JavaScript, R, Python, Ruby. This includes development in various IDEs and editors, for example the NetBeans IDE and VisualStudio Code.

Back

Eclipse Loves LSP - Achieving More with Less

Home

Speaker Alexander Kurtakov
RoomUD2.119
TrackFree Tools and Editors
Time11:35 - 11:55
Event linkView original entry

Eclipse and the Language Server Protocol - what, why, how, and most importantly the visible results for users thanks to it.

Brief description of how it all started. Followed by examples of the success stories benefiting almost every Eclipse user by getting better tools while minimizing the burden on Eclipse developers.

Back

Language Server Protocol & Debug Adapter Protocol to the Rescue of Web Development in Eclipse IDE

Home

Speaker Mickael Istria
RoomUD2.119
TrackFree Tools and Editors
Time12:00 - 12:20
Event linkView original entry

The state of Web (HTML, CSS, JS...) development in Eclipse IDE used to be bad. Indeed, some internal parsers had to be maintained to follow the frequent and major changes in those standards; the developer community behind them couldn't catch up, leaving the tools in a pretty bad and outdated state. Fortunately, the rise of technologies like TextMate grammars, Language Servers and Debug Adapters as reusable components has allowed the Eclipse ecosystem to catch up with the best tools for web development. By integrating pieces of VSCode and others, and with a relatively small investment, Eclipse Wild Web Developer provides a quite comfortable and efficient tool set for a wide variety of web-based projects. In this presentation, we'll demo Wild Web Developer in practice, explaining as we demo how the LSP/DAP world is leveraged to enable those productive workflows.

Back

Flutter Development in Eclipse

Home

Speaker Jonas Hungershausen
RoomUD2.119
TrackFree Tools and Editors
Time12:25 - 12:45
Event linkView original entry

The Eclipse IDE was once heavily used for mobile app development on Android. The Android Development Tools (ADT) for Eclipse have since been deprecated and replaced by a dedicated Android Studio. To make mobile app development possible again, we leveraged the language server protocol to provide stable support for the Dart language and the Flutter SDK in the Eclipse IDE.



This talk will provide an overview of the Dartboard project including its Dart language support as well as the Flutter development experience.

Back

Emacs Should Be Emacs Lisp - Thoughts on the Future of Emacs

Home

Speaker Tom Tromey
RoomUD2.119
TrackFree Tools and Editors
Time12:50 - 13:10
Event linkView original entry

Emacs Lisp is good, actually, and Emacs should primarily be written in Emacs Lisp. This talk will describe a way forward.

The future of the Emacs core is a frequent topic in the Emacs community. Should the core be in C? Or Rust? Should Elisp continue to be the scripting language? Or Guile? Or Python, Perl (the "Perfect Emacs Rewriting Language") or JS?



This talk advocates the rarely discussed view that Emacs Lisp is good, actually, and that Emacs should be written in Emacs Lisp. Threading, compilation, and addressing Emacs' other low-level deficiencies will be covered, with an eye toward a practical way to roll out the changes.

Back

AMENDMENT Just-in-Time Programming

Home

Speaker Neil C Smith
RoomUD2.119
TrackFree Tools and Editors
Time13:15 - 13:35
Event linkView original entry

A few thoughts on the trials and tribulations of building a just-in-time programming system in Java. But wait, isn’t it a JIT-compiled language anyway? Job done … next talk … no, wait...

Java may have JIT-compiled bytecode, and that makes it a useful tool for this endeavour, but that doesn’t get us from source code to running code. And anyway, there’s far more to a useful just-in-time programming system than just getting code running.



PraxisLIVE is a hybrid-visual IDE for live programming. At its heart is PraxisCORE, a forest-of-actors runtime for real-time programming, bringing aspects of Erlang, SmallTalk and Extempore into the Java world. We’ll explore the inter-relationship of liveness and feedback between runtimes and tools … possibly with the help of a few bleepy noises!



Please note that this replaces a talk on "TerosHDL - Open Source IDE for FPGA Developers" by Carlos Alberto.

Back

Projectional Editing and Its Implications

Home

Speaker Václav Pech
RoomUD2.119
TrackFree Tools and Editors
Time13:40 - 14:00
Event linkView original entry

Let’s shake some of the dogmas that constrain our programming worldview. In this session, I would like to take you to an alternative world - a world where programming languages are not parsed, a world where languages can be downloaded from the Internet and plugged easily into your IDE. A world where you have the power to customize the languages that you use. You’ll see that projectional editing in JetBrains MPS gives you incredible freedom in how you express your thoughts. It allows you to choose notations that best fit the task at hand. Your code can be edited as text, tables, diagrams, a form, or a combination of those. This is especially useful for Domain-specific languages and we’ll see real-life examples from domains such as the insurance industry, embedded software development, bioinformatics, enterprise systems and legislation. We’ll also discuss the downsides and integration challenges that projectional editors face. My goal is that you’ll leave this session inspired, enriched and motivated to try something new.

Back

IntelliJ Elixir - Elixir Plugin for JetBrains IDEs

Home

Speaker Luke Imhoff
RoomUD2.119
TrackFree Tools and Editors
Time14:05 - 14:25
Event linkView original entry

Using Java, Kotlin, and GrammarKit to reimplement the Erlang- and Yecc-based Elixir grammar for static analysis of Elixir source and BEAM bytecode. How decompiling and disassembly tools can quickly settle optimization arguments.

IntelliJ Elixir is the Elixir plugin for JetBrains IDEs like IntelliJ and RubyMine. It uses the JetBrains OpenAPI, JFlex, and GrammarKit to reimplement the Elixir grammar, which is natively implemented as a bespoke Erlang lexer and Yecc LALR parser. This meant translating a recursive Erlang lexer into a strict regular-expression state machine used by JFlex, with some interesting extensions needed. Porting the grammar from LALR Yecc to the LL Pratt parser generated by GrammarKit involved understanding the non-universality of BNF. Reimplementation and extensive testing of the plugin led to finding bugs in native Elixir, showing that alternative implementations of languages in editors and tools can find bugs in the original implementations. The BEAM bytecode decompiler and disassembler has led to a better understanding of how the VM optimizes different Elixir code.

Back

VSCode Extension for OpenShift Developers

Home

Speaker Sudhir Verma
RoomUD2.119
TrackFree Tools and Editors
Time14:30 - 14:50
Event linkView original entry

An IDE-based extension to run an instance of OpenShift on the local machine.



It makes it easy to use all OpenShift-related commands in VSCode to create, build, and deploy an application on OpenShift.



Key points:




  1. The OpenShift VSCode extension and its dependencies.

  2. Easy installation of the extension from the VSCode Marketplace.

  3. How to create Projects, applications, components, Services, Storage and more in the OpenShift VSCode extension (Demo)




Link: https://github.com/redhat-developer/vscode-openshift-tools

Back

Developer Workspace As Code - Is Developer Heaven in the Cloud?

Home

Speaker Lukas Krejci
RoomUD2.119
TrackFree Tools and Editors
Time14:55 - 15:15
Event linkView original entry

Developer workspaces are assembled using outdated wiki pages and usually require weeks to "get right". What if the IDE and workspace setup (in the broadest sense) lived directly with the code?

In the age of devops many things live with the source code. We compose our applications from Kubernetes/OpenShift objects and CI/CD pipelines, and IDEs have their configuration in dot files, all defined along with the source code of our precious applications. But how do you define a common development environment/workspace for your developers? Let's take a look at the devfile - a declarative format for specifying the developer workspace with all the tools developers need to code, build, test, and debug their applications - and at how it enables Eclipse Che to be the next-gen Kubernetes-native IDE for developer teams.
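For a flavour of the format, a minimal devfile sketch might look like the following (illustrative only; field names vary between devfile versions, and the image and names are hypothetical):

```yaml
apiVersion: 1.0.0
metadata:
  name: petclinic-dev-environment   # hypothetical workspace name
components:
  - alias: maven-tools
    type: dockerimage
    image: quay.io/example/maven-jdk11:latest   # hypothetical tooling image
    memoryLimit: 768Mi
commands:
  - name: build
    actions:
      - type: exec
        component: maven-tools
        command: mvn package
```

Because the file lives in the repository, every developer (and Eclipse Che itself) can reconstruct the same workspace from it.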

Back

FaaS You Like It: Create Serverless Functions & Run Anywhere

Home

Speaker Ewan Slater
RoomUD2.119
TrackFree Tools and Editors
Time15:20 - 15:40
Event linkView original entry

"Serverless" is a hot topic right now, and something that a lot of developers are keen to try.



A lot of focus has been on implementations that are proprietary to and only run on a single provider's cloud.



In this talk, I'll show how you can develop "serverless" functions on your laptop, with an open source platform and run them where you like.

However, if you attend a Serverless event, you may come away with the impression that it's a world of proprietary walled gardens from the major cloud providers.



In this talk I will cover:




Back

Bach.java: Lightweight Java Build Tool for Modular Projects

Home

Speaker Christian Stein
RoomUD2.119
TrackFree Tools and Editors
Time15:45 - 16:05
Event linkView original entry

I present "Bach.java" - a lightweight build tool for Java. Bach.java uses jshell/java to build
modular Java projects. It supports a "zero installation" run mode, convention-over-configuration
pragmatism, a simple properties file to tweak defaults, and an API to build custom projects.



In the spirit of Rémi Forax, who wrote: "No need to be a maven to be able to use a build tool",
Bach.java is targeted at coders of small to mid-size Java projects who want to focus on their
ideas and modules instead of learning and taming a build tool.

Back

Unit Testing with JUnit Jupiter - How to Use the new JUnit Jupiter Platform

Home

Speaker Karl Heinz Marbaise
RoomUD2.119
TrackFree Tools and Editors
Time16:15 - 16:35
Event linkView original entry

Starting with the new JUnit Jupiter platform, which has been available for some time now, there are many more options than with JUnit 4 or TestNG. This lecture will show the differences of the JUnit Jupiter platform and how it can be used to write better unit and/or integration tests.

It will be shown how to migrate to the JUnit Jupiter platform and which things should be considered or changed, based on the differences between JUnit Jupiter and JUnit 4/TestNG. Furthermore, a short overview of dynamic tests will be given, and extensions will also be taken into account.

Back

Intro

Home

Speaker Richard Hartmann
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time09:00 - 09:05
Event linkView original entry

Introduction and welcome to the monitoring and observability devroom

Back

Distributed Tracing for beginners

Home

Speaker Juraci Paixão Kröhling
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time09:10 - 09:35
Event linkView original entry

Distributed tracing is a tool that belongs to every developer's tool belt, but what it actually can do remains a mystery to most developers.



In this slideless talk, we will introduce you to the world of distributed tracing by developing a cloud native application from scratch and applying all important distributed tracing concepts in practice, at first by hand and then by using existing libraries to automate our work.



You will learn not only what distributed tracing is, but how it works, what it can do and what it can’t. By the end of this talk, you will have working knowledge to start using distributed tracing tools with your new projects, as well as with your legacy ones.
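To give a flavour of the "by hand" part, a span is just a timed unit of work that carries a trace id shared across services and a pointer to its parent. A minimal sketch (illustrative only, not the talk's code; a real tracer would report spans to a collector such as Jaeger):

```python
import time
import uuid

class Span:
    """A minimal span: one timed unit of work within a trace."""

    def __init__(self, name, trace_id=None, parent_id=None):
        self.name = name
        self.trace_id = trace_id or uuid.uuid4().hex  # shared by all spans in a trace
        self.span_id = uuid.uuid4().hex
        self.parent_id = parent_id
        self.start = time.time()

    def child(self, name):
        # Context propagation: the child keeps the trace id and points at us.
        return Span(name, trace_id=self.trace_id, parent_id=self.span_id)

    def finish(self):
        # A real tracer would ship this record to a tracing backend.
        return {"trace": self.trace_id, "span": self.span_id,
                "parent": self.parent_id, "name": self.name,
                "duration": time.time() - self.start}

root = Span("http-request")          # incoming request starts the trace
db = root.child("db-query")          # nested work becomes a child span
record = db.finish()
assert record["trace"] == root.trace_id and record["parent"] == root.span_id
```

Propagating `trace_id`/`span_id` across process boundaries (e.g. in HTTP headers) is exactly what libraries like OpenTracing automate.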

Back

Grafana: Successfully correlate metrics, logs, and traces

Home

Speaker Andrej Ocenas
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time09:50 - 10:15
Event linkView original entry

This talk presents the current capabilities of Grafana to integrate metrics, logs, and traces, and shows how to set up both Grafana and application code to be able to correlate all three in Grafana. It assumes some familiarity with Grafana to follow the how-to steps, but should be suitable for beginner users. Afterwards it shows the future direction of Grafana in the context of "Experiences", for an even more seamless experience when correlating data from multiple data sources.


Back

Jaegertracing in Ceph

Home

Speaker Deepika Upadhyay
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time10:30 - 10:55
Event linkView original entry

Jaeger and OpenTracing provide ready-to-use tracing services for distributed systems and are becoming a widely used de facto standard because of their ease of use. Making use of these libraries, Ceph can reach a much-improved monitoring state, gaining visibility into its background distributed processes. This would, in turn, improve the way Ceph is debugged, "making Ceph more transparent" in identifying abnormalities.
In this session, the audience will learn about using distributed tracing in large-scale distributed systems like Ceph, get an overview of Jaegertracing in Ceph, and see how someone can use it for debugging Ceph.

Back

Stories around ModBus

Home

Speaker Richard Hartmann
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time11:10 - 11:35
Event linkView original entry

Society would end if all ModBus stopped working overnight. Good thing it has zero security built in. Still, it's useful to get data out of industrial systems, be they a datacenter or a power plant.

Back

Monitoring strawberries

Home

Speaker Jean-Marc Davril
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time11:50 - 12:15
Event linkView original entry

According to the United Nations, 2.5 billion more people will be living in cities by 2050. This trend has caused indoor farming to draw a lot of attention and effort in recent years, in an attempt to scale the production of highly nutritious, healthy food inside cities.



Over the past 3 years, Agricool has recycled 20 industrial containers into farms that grow strawberries, herbs and salads, in the very heart of cities, and without any pesticide. These urban farms are currently operated in Paris and Dubaï.



Operating a fleet of indoor farms presents a diverse set of observability challenges. At the most traditional end of the observability spectrum, engineers rely on devops tools to operate computers, microservices, and an IoT infrastructure embedded inside the farms. On the other hand, living organisms like strawberry plants bring their own observability requirements, such as disease detection, physiological measurements, nutrient absorption, water analysis, or exposure rate to pollinating bumblebees.



The purpose of this talk is to highlight observability challenges and best practices that are specific to indoor farming, and to illustrate them through the learnings that were made at Agricool when building observability pipelines.


Back

Querying millions to billions of metrics with M3DB's inverted index

Home

Speaker Rob Skillington
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time12:30 - 12:55
Event linkView original entry

The cardinality of monitoring data we are collecting today continues to rise, in no small part due to the ephemeral nature of containers and compute platforms like Kubernetes. Querying a flat dataset comprised of an increasing number of metrics requires searching through millions, and in some cases billions, of metrics to select a subset to display or alert on. The ability to use wildcards or regex within the tag names and values of these metrics and traces is becoming less of a nice-to-have feature and more of a necessity as ad-hoc exploratory queries grow in popularity.



In this talk we will look at how Prometheus introduced the concept of a reverse index existing side-by-side with a traditional column based TSDB in a single process. We will then walk through the evolution of M3’s metric index, starting with ElasticSearch and evolving over the years to the current M3DB reverse index. We will give an in depth overview of the alternate designs and dive deep into the architecture of the current distributed index and the optimizations we’ve made in order to fulfill wildcards and regex queries across billions of metrics.
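To make the core idea concrete: an inverted index for metrics maps each tag name/value pair to the set of series that carry it, so an exact-match query becomes a set intersection. A toy sketch (illustrative only, nothing like M3DB's actual architecture):

```python
from collections import defaultdict

# Series id -> its tags (what the column-based TSDB stores values for).
series = {
    1: {"__name__": "http_requests", "job": "api", "instance": "a"},
    2: {"__name__": "http_requests", "job": "api", "instance": "b"},
    3: {"__name__": "cpu_seconds", "job": "db", "instance": "a"},
}

# Inverted index: (tag name, tag value) -> set of matching series ids.
index = defaultdict(set)
for sid, tags in series.items():
    for name, value in tags.items():
        index[(name, value)].add(sid)

def query(**matchers):
    """Return the ids of series matching ALL exact tag matchers."""
    sets = [index[(k, v)] for k, v in matchers.items()]
    return set.intersection(*sets) if sets else set()

assert query(job="api") == {1, 2}
assert query(job="api", instance="a") == {1}
```

Supporting wildcard or regex matchers means expanding a matcher into the union of every `(name, value)` posting set whose value matches, which is where most of the engineering effort at billion-metric scale goes.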

Back

Secret History of Prometheus Histograms

Home

Speaker Björn Rabenstein (Beorn)
RoomUD2.120 (Chavanne)
TrackMonitoring and Observability
Time13:10 - 13:35
Event linkView original entry

Representing distributions in a metrics-based monitoring system is both important and hard. Doing it right unlocks many powerful use cases that would otherwise require expensive event processing. Prometheus offers the somewhat weirdly named Histogram and Summary metric types for distributions. How have they become what they are today with all their weal and woe? To help understand the present, let's shed light on the past. Studying this piece of Prometheus's history will also allow a glimpse of the bigger picture, why certain things are the way they are in Prometheus, and which parts of the original vision are still awaiting fulfillment.
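For context on what makes the Prometheus Histogram type distinctive: observations are counted into cumulative buckets with fixed upper bounds (the `le` labels), plus a running sum and count, which is what lets distributions be aggregated across instances. A minimal stdlib sketch of that bucketing (not the actual client library):

```python
import bisect

class MiniHistogram:
    """Cumulative-bucket histogram, in the spirit of Prometheus 'le' buckets."""

    def __init__(self, bounds=(0.1, 0.5, 1.0, 5.0)):
        self.bounds = sorted(bounds)             # upper bound of each bucket
        self.buckets = [0] * (len(bounds) + 1)   # final bucket is +Inf
        self.count = 0
        self.total = 0.0

    def observe(self, value):
        # Cumulative semantics: increment this bucket and every larger one.
        i = bisect.bisect_left(self.bounds, value)
        for j in range(i, len(self.buckets)):
            self.buckets[j] += 1
        self.count += 1
        self.total += value

h = MiniHistogram()
for v in (0.05, 0.3, 2.0):
    h.observe(v)
# Cumulative counts: le=0.1 -> 1, le=0.5 -> 2, le=1.0 -> 2, le=5.0 -> 3, +Inf -> 3
assert h.buckets == [1, 2, 2, 3, 3] and h.count == 3
```

Because buckets only ever count, quantiles must be estimated server-side from these counts, which is the trade-off against the Summary type's client-side quantiles.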

Back

Are You Testing Your Observability? Patterns for Instrumenting Your Services

Home

Speaker Bartek Plotka
Kemal Akkoyun
Room UD2.120 (Chavanne)
Track Monitoring and Observability
Time 13:50 - 14:15
Event link: View original entry

Observability is the key to understanding how your application runs and behaves in action. This is especially true for distributed environments like Kubernetes, where users run Cloud-Native microservices.



Among many other observability signals like logs and traces, the metrics signal has a substantial role. Sampled measurements observed throughout the system are crucial for monitoring the health of the applications, and they enable real-time, actionable alerting. While there are many robust open-source libraries, in various languages, that allow us to easily instrument services for backends like Prometheus, there are still numerous ways to make a mistake or misuse those tools.



During this talk, two engineers from Red Hat, Kemal and Bartek (Prometheus and Thanos project maintainer), will discuss valuable patterns and best practices for instrumenting your application. The speakers will go through common pitfalls and failure cases while sharing valuable insights and methods to avoid those mistakes. In addition, this talk will demonstrate how to leverage unit testing to verify the correctness of your observability signals, how it helps, and why it is important. Last but not least, the talk will include a demo of an example instrumented application based on the experience and projects we maintain.



The audience will leave knowing how to answer the following important questions:



What are the essential metrics that services should have?
Should you test your observability? What are the ways to test it on a unit-test level?
What are the common mistakes while instrumenting services and how to avoid them?



And more!

The end goal of this talk is to show the audience how to harness the power of metric-based instrumentation in their applications. We would like to share some pragmatic best practices and common patterns that we learned while maintaining several open-source projects.



During this talk:



We will discuss valuable patterns and best practices for instrumenting libraries and applications.
We will go through a set of common pitfalls and failure cases, and methods to avoid those mistakes. Some of the topics we plan to mention: common cardinality issues, summaries vs histograms, choosing histogram buckets, testing, instrumenting libraries vs applications, common middlewares, etc.
We will demonstrate why, when and how to leverage unit testing to verify your observability signals.
We plan to present a demo of an example instrumented application. We plan to use Go as the example language, but the talk should be mostly language-agnostic.
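As an illustration of one of the pitfalls above, here is a minimal, dependency-free sketch of how a Prometheus-style histogram maps observations into cumulative buckets. The bucket bounds and class are ours, not the official client API, but they show why bucket choice determines what questions a histogram can answer.

```python
import bisect

class Histogram:
    """Sketch of a Prometheus-style histogram: cumulative buckets plus sum/count.

    Real Prometheus clients add a +Inf bucket implicitly and export
    cumulative counts per `le` (less-or-equal) bound.
    """

    def __init__(self, buckets=(0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0)):
        self.bounds = sorted(buckets)
        self.counts = [0] * (len(self.bounds) + 1)  # last slot is +Inf
        self.total = 0.0
        self.count = 0

    def observe(self, value):
        # Each observation lands in the first bucket whose bound is >= value.
        self.counts[bisect.bisect_left(self.bounds, value)] += 1
        self.total += value
        self.count += 1

    def cumulative(self):
        # Export-style cumulative counts, as Prometheus does (le="...").
        out, running = [], 0
        for c in self.counts:
            running += c
            out.append(running)
        return out

h = Histogram()
for latency in (0.004, 0.02, 0.02, 0.3, 2.5):
    h.observe(latency)
print(h.cumulative())  # last element equals total observation count
```

Note how the 2.5 s observation only shows up in the implicit +Inf bucket: any quantile estimated from these buckets is bounded by the largest finite bound, which is exactly why choosing buckets to match your SLOs matters.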

Back

How to measure Linux Performance Wrong

Home

Speaker Peter Zaitsev
Room UD2.120 (Chavanne)
Track Monitoring and Observability
Time 14:30 - 14:55
Event link: View original entry

In this presentation, we will look at typical mistakes in measuring or interpreting Linux performance. Do you use LoadAvg to assess whether your CPU is overloaded, or Disk Utilization to see if your disks are? We will look into these and a number of other metrics that are often misunderstood and/or misused, and provide suggestions for better ways to measure Linux performance.
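As a taste of the topic: on Linux, load average also counts tasks in uninterruptible (D-state) sleep, so even dividing it by CPU count is only a rough signal of CPU saturation. This small sketch parses a `/proc/loadavg`-style line; the helper names are ours and purely illustrative.

```python
import os

def parse_loadavg(text):
    """Parse a /proc/loadavg line into the 1/5/15-minute load averages."""
    one, five, fifteen = (float(x) for x in text.split()[:3])
    return one, five, fifteen

def per_cpu_load(one_minute_load, ncpus=None):
    """Normalize load by CPU count. A value > 1.0 *may* mean CPU saturation,
    but since Linux load also includes tasks blocked in uninterruptible
    (D) sleep, a high value can reflect I/O wait rather than CPU demand."""
    ncpus = ncpus or os.cpu_count() or 1
    return one_minute_load / ncpus

sample = "2.45 1.10 0.75 3/612 12345"  # format of /proc/loadavg on Linux
one, five, fifteen = parse_loadavg(sample)
print(per_cpu_load(one, ncpus=4))  # 0.6125 on a 4-CPU box
```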

Back

From Zero to Useless to Hero: Make Runtime Data Useful in Teams

Home

Speaker Florian Lautenschlager
Robert Hoffmann
Room UD2.120 (Chavanne)
Track Monitoring and Observability
Time 15:10 - 15:35
Event link: View original entry

We introduced distributed tracing, central logging with trace correlation and monitoring with Prometheus and Grafana in a large internationally distributed software development project from the beginning. The result: Nobody used it.



In this talk we show the good and not-so-good lessons we learned while introducing and operating the observability tools. We show which extensions and conventions were necessary in order to bring about a cultural change and to awaken enthusiasm for these tools. Today the tools are first-class citizens, and people complain loudly when they are not available.

Back

Grafana-As-Code: Fully reproducible Grafana dashboards with Grafonnet

Home

Speaker Julien Pivotto
Malcolm Holmes
Room UD2.120 (Chavanne)
Track Monitoring and Observability
Time 15:50 - 16:15
Event link: View original entry

Grafana configuration can nowadays be fully done as code, which enables code review, code reuse, and in general better workflows when working with dashboards.



This talk will present Grafonnet, a Jsonnet library to generate Grafana dashboards, along with some tips and tricks about how to use it efficiently and how to fully manage your Grafana instances from code. We will also explore how Jsonnet and Grafonnet enable collaboration on dashboards using Mixins, and explain how to push dashboards to Grafana, either via Kubernetes or directly through the Grafana API.
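Grafonnet itself is Jsonnet, but the dashboards-as-code idea is language-independent. As a rough illustration, this Python sketch builds a heavily simplified dashboard JSON and the payload shape typically POSTed to Grafana's dashboard API; field names follow the Grafana dashboard JSON model only loosely, so treat them as assumptions rather than a faithful schema.

```python
import json

def panel(title, expr, panel_id):
    """A minimal graph panel with one Prometheus query (simplified model)."""
    return {
        "id": panel_id,
        "title": title,
        "type": "graph",
        "targets": [{"expr": expr, "refId": "A"}],
    }

def dashboard(title, panels):
    return {"title": title, "panels": panels, "schemaVersion": 16}

dash = dashboard(
    "Service overview",
    [
        panel("Request rate", 'rate(http_requests_total[5m])', 1),
        panel("Error rate", 'rate(http_requests_total{code=~"5.."}[5m])', 2),
    ],
)

# Payload shape for POSTing to Grafana's dashboard API.
payload = json.dumps({"dashboard": dash, "overwrite": True})
print(payload[:60])
```

Because the dashboard is generated from functions, the two panels above share one definition: exactly the reuse and review workflow the talk argues for, which Grafonnet provides natively in Jsonnet.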


Back

Monitoring of a Large-Scale University Network: Lessons Learned and Future Directions

Home

Speaker Simone Mainardi
Tobias Appel
Room UD2.120 (Chavanne)
Track Monitoring and Observability
Time 16:30 - 16:55
Event link: View original entry

The complexity of network monitoring strongly depends on the size of the network under observation. Challenges in monitoring large-scale networks arise not only from dealing with a large volume of traffic, but also from keeping track of all traffic sources, destinations, and who-talks-to-whom communications. Analyzing this information makes it possible to uncover behaviors that would not have been visible by merely observing common metrics such as bytes and packets. The drawback is that extra pressure is put on the monitoring system as well as on the downstream data- and timeseries-stores.



This talk presents a case study based on the monitoring of a large-scale university network. Challenges faced, findings, and lessons learned will be examined. It will be shown how to make sense of the input data to properly manage and reduce its scale as early as possible in the monitoring system. The discussion will also highlight the advantages and limitations of the open-source software components of the monitoring system. In particular, the open-source network monitoring tool ntopng and the timeseries-store InfluxDB will be considered. It will be shown what happens when ntopng and InfluxDB are pushed to their limits and beyond, and what can be done to ensure their smooth operation. Relevant findings, behaviors uncovered in the network traffic, and future directions will conclude the talk. The intended audience is technical and managerial individuals who are familiar with network monitoring.


Back

Janus as a WebRTC "enabler"

Home

Speaker Lorenzo Miniero
Room UD2.208 (Decroly)
Track Real Time Communications
Time 09:20 - 09:35
Event link: View original entry

This talk will cover several aspects related to Janus as a WebRTC "enabler" for non-WebRTC applications. In particular, it will focus on the RTP management in Janus, namely how to use it as input/output to interact with external applications for different use cases.

Janus is an open source and general purpose WebRTC server. Its modular nature makes it easy to implement heterogeneous multimedia applications based on WebRTC, whether it's for conferencing, talking to a SIP infrastructure, broadcasting a stream or interacting with an IoT device. One of its strongest points is the ability to seamlessly involve plain RTP within the context of a WebRTC communication, whether it's for feeding media to a WebRTC endpoint, or using a WebRTC stream somewhere else: this makes Janus a good WebRTC "enabler" for platforms that may not be aware of, or compliant with, the WebRTC specification.



This talk will cover the different features Janus provides implementers with, when it comes to RTP. In particular, it will introduce the Streaming plugin (RTP- and RTSP-to-WebRTC broadcaster), the SIP/NoSIP plugins (for legacy VoIP integration) and the so-called RTP forwarders (to relay media coming from WebRTC sources as plain RTP to external endpoints), and explain how these different components can be used together in different scenarios, whether it's just to increase scalability or to implement a complex and rich multimedia application. Besides, it will say a few words about how simulcast, SRTP and recordings can be part of the picture.
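To make the "plain RTP" side of the picture concrete, here is a sketch of the fixed 12-byte RTP header from RFC 3550, which is ultimately what an RTP forwarder emits per packet. The payload type and field values in the example are arbitrary.

```python
import struct

def rtp_header(payload_type, seq, timestamp, ssrc, marker=False):
    """Pack the fixed 12-byte RTP header from RFC 3550
    (version 2, no padding, no extension, no CSRC list)."""
    byte0 = 2 << 6                      # version=2, P=0, X=0, CC=0
    byte1 = (0x80 if marker else 0) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# e.g. an Opus packet (dynamic payload type 111) forwarded as plain RTP
hdr = rtp_header(payload_type=111, seq=1, timestamp=960, ssrc=0x1234ABCD)
print(hdr.hex())  # 806f0001000003c01234abcd
```

A forwarder just prepends such a header to each media payload and sends it over UDP to the external endpoint, so any RTP-aware tool can consume the stream without speaking WebRTC.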

Back

Build your own ENUM server using CGRateS

Home

Speaker Teofil Voivozeanu
Room UD2.208 (Decroly)
Track Real Time Communications
Time 09:40 - 09:55
Event link: View original entry

In this talk, Teo will explain how you can use CGRateS as an ENUM server, together with other subsystems, to achieve advanced ENUM functionality such as number portability or least cost routing.
CGRateS is a battle-tested Enterprise Billing Suite with support for various prepaid and postpaid billing modes.

As part of the recipe for such an implementation, a few major CGRateS components will be exemplified: AttributeService (combined with FilterService), which will be used to replace data in events arriving in CGRateS based on filters, and SupplierService, which can be used to select your desired provider based on different strategies such as least cost, highest cost, weight, load distribution, or a mix of these (for instance, if two providers have the same cost, the selection can automatically fall through to the weight strategy).

Back

Linphone Instant Messaging Encryption

Home

Speaker Elisa Nectoux
Room UD2.208 (Decroly)
Track Real Time Communications
Time 10:00 - 10:20
Event link: View original entry

For many years, Linphone has been one of the most active free communication software projects. Originally focused on voice, additional functionalities such as video, group chat and presence were rapidly added. All of these communication modes raise privacy concerns.



To achieve a good level of privacy, users must be able to ensure that their communications can be read only by their intended recipients; in particular, no one operating the server infrastructure crossed by the messages should be in a position to compromise the secrecy of the communication. Basically, this is what end-to-end encryption aims to achieve.



Linphone does implement end-to-end encryption for voice and video communications, thanks to ZRTP (RFC 6189). For messaging, however, security was previously provided only by point-to-point encryption based on SIP over TLS. To bring users of instant messaging the same level of security, we decided to implement end-to-end encryption for messaging too, including group chat. Linphone Instant Messaging Encryption follows state-of-the-art methods for forward secrecy and MitM detection.



This discussion will focus on the protocol extensions to existing SIP standards, implementation challenges and future extensions.

Back

Collaboration between Free RTC projects

Home

Speaker Daniel Pocock
Timothée Floure
Room UD2.208 (Decroly)
Track Real Time Communications
Time 10:25 - 10:45
Event link: View original entry

This session will discuss how we can share infrastructure and resources between projects, with the goals of reducing administrative burdens, reducing duplication of effort and increasing interoperability between our solutions. To satisfy user expectations, interoperability is more critical in the field of real-time communications than in any other free software ecosystem. In particular, we will look at how to share management of an event calendar, Planet sites, repositories, CI for interop testing and various other tools.

Back

XMPP: get your shopping cart ready!

Home

Speaker Winfried Tilanus
Room UD2.208 (Decroly)
Track Real Time Communications
Time 10:50 - 11:10
Event link: View original entry

The eXtensibility of XMPP makes it extremely powerful. But it is easy to get lost in the supermarket of extensions. In this talk I will guide you through the extensions: how to read the labels? How to cook some common recipes? I will also show some exotic but nice ingredients. And last but not least: we will be having some protocol fun!

The eXtensible Messaging and Presence Protocol, XMPP, has extensibility at its core. It is because of this extensibility that there is a vivid XMPP ecosystem: it is easy to adapt XMPP to new developments and to new use cases. The XMPP Standards Foundation maintains a list of extensions to XMPP. In this talk I will dive into this list of extensions: what kinds of extensions are there? What statuses can an extension have? Which extensions should you use in some common use cases? And I will touch on some exotic extensions for use cases you may never have thought about. Oh, and what is the story about these ‘humorous’ extensions, can a protocol be humorous?

Back

Crossing the Bifröst - Bridging All The Things with Matrix

Home

Speaker Will Hunt
Room UD2.208 (Decroly)
Track Real Time Communications
Time 11:15 - 11:35
Event link: View original entry

Matrix is an open source project developing an open protocol and network for decentralised end-to-end-encrypted communication, providing a viable open alternative to the proprietary communication silos of WhatsApp, Slack, Discord and friends. One of Matrix's main goals is to provide a highest-common-denominator open network which can bridge together existing communication silos. In this talk, we'll show off Bifröst, our new application framework for building bridges, and demonstrate high quality bridging with XMPP, Slack, Discord, WhatsApp, and more!

Matrix owes its name to the idea of binding together existing communication platforms into an open 'matrix' in which they can interoperate. Over the last year the wider Matrix ecosystem has been focusing increasingly on bridging, with lots of exciting development happening around:






We'll explain all the flavours of bridging available these days (bot-based, virtual users, puppeted, gatewaying, etc.) and show off Bifröst providing full gatewayed bridging between Matrix & XMPP (so that anyone in Matrix can reach anywhere in XMPP and vice versa), as well as double-bridging (e.g. IRC<->Slack via Matrix) - and generally illustrate how Matrix can be used to heal fragmented communities which have ended up shattered between Slack, Discord, IRC and other platforms.



(Will Hunt has replaced Matthew Hodgson as the speaker.)

Back

High quality VoIP platforms with Kamailio

Home

Speaker Henning Westerholt
Room UD2.208 (Decroly)
Track Real Time Communications
Time 11:40 - 11:55
Event link: View original entry

Kamailio, a widely-used open source SIP server, is used to implement large and complex real-time communication platforms. Ensuring a good user experience, performance and quality can be a challenge in these environments. The talk will present different ways to use the power of Kamailio to support a modern testing and debugging workflow.




Kamailio contains different modules that help you test and debug your configuration. Different usage examples based on real-world problems are used to explain them in detail. The usual way to find problems in code is to use a debugger to step through it; due to the architecture of Kamailio, this setup needs a bit more preparation. Additionally, the talk will show how to step through your configuration language execution to debug complicated logic.



Testing config logic that involves several modules together can be challenging due to the many moving parts involved. Kamailio can support you here with a dedicated and easy-to-use component testing infrastructure. This test infrastructure is based on Docker containers, is freely available and can also help in your own setup.



Ensuring good code quality is especially important for a stable and secure VoIP server. Kamailio uses different static code analyzers and other testing methods to improve the quality of the server. The talk will give you an overview of these tools and highlight how to use them yourself.

Back

AMENDMENT RTC: A sea of opportunities

Home

Speaker Ruben Homs
Room UD2.208 (Decroly)
Track Real Time Communications
Time 12:00 - 12:20
Event link: View original entry

The love for technology is what binds us and pushes us to keep improving. Though there are a lot of thriving RTC projects, collaboration between them is rare. This results in duplication of efforts, differing standards and lack of interoperability. In this session we will discuss the opportunities that lie in wait for the Free RTC space and how we're trying to tackle some of these issues through the Open VoIP Alliance.



NOTE: This is a late addition to the schedule to fill a gap left by a speaker who cancelled.

Back

VoIP Troubleshooting and Monitoring FAQs

Home

Speaker Oleg Agafonov
Room UD2.208 (Decroly)
Track Real Time Communications
Time 12:25 - 12:40
Event link: View original entry

It's most likely that you have already heard about the SIP3 platform which allows you to monitor and troubleshoot your VoIP infrastructure.



Over the last year the SIP3 team has grown and added tons of nice features to the platform.



This talk will cover the past, present and future of SIP3, and will be useful both for newcomers and for people who've played with SIP3 in the past.

Back

WebRTC isn't just for (video) conference calls

Home

Speaker Tim Panton
Room UD2.208 (Decroly)
Track Real Time Communications
Time 12:45 - 13:05
Event link: View original entry

WebRTC is showing up in many places: security cameras, baby monitors, game streaming, autonomous cars, etc.
I'll describe the advantages of WebRTC in these devices but also the challenges of non-mainstream usage.
I'll bring a demo or two and some sample code.

Back

AMENDED Wazo Platform

Home

Speaker Benoit Aubas
Room UD2.208 (Decroly)
Track Real Time Communications
Time 13:10 - 13:30
Event link: View original entry

Learn what Wazo Platform is, how it evolved from Wazo and Xivo, what the vision for this Open Source project is, how we leverage Asterisk, Kamailio and RTPEngine in this vision, and how to get involved.

Please note this talk was originally advertised as being given by Mathias Wolff.

Back

HOMER 2020

Home

Speaker Lorenzo Mangani
Room UD2.208 (Decroly)
Track Real Time Communications
Time 13:35 - 13:50
Event link: View original entry

HOMER 2020: The future of the HEP Stack

Meet the latest HOMER and its groundbreaking features, and learn about our project vision for the future of HEP and OSS RTC monitoring tools and technologies.

Back

Nextcloud Talk

Home

Speaker Björn Schießle
Room UD2.208 (Decroly)
Track Real Time Communications
Time 13:55 - 14:15
Event link: View original entry

Real-time communication often happens in teams these days: at work, in your sports club, your Free Software project and in many other places. For a long time this was, and in many areas still is, dominated by centralized and proprietary tools. While there are many great Free Software projects out there which fill the gap, Nextcloud Talk is unique in that it integrates into a complete collaboration platform. You can have your files, calendar, contacts, project plan and any other data in the same place where the communication takes place, all nicely integrated. You can collaboratively edit office documents (text, spreadsheets, presentations, ...) or markdown files while having a chat, video or audio call, for example. This talk will introduce you to some of the unique features Nextcloud Talk can offer to your teams.

Nextcloud Talk is part of the Nextcloud collaboration platform, a complete Free Software and on-premise cloud solution which allows you to manage all kinds of data and work collaboratively on it, both within one Nextcloud instance and even across different Nextcloud instances. Nextcloud Talk is based on WebRTC and offers a nicely integrated tool for text, video and audio chats. As Nextcloud Talk is a first-class citizen in the Nextcloud platform, it is well integrated with all the other components. You can share documents from Nextcloud Files into a room and every user will directly be able to see, sync and edit the files. You can work on a document with your team while having a call or chat, and many more things are possible. While you can find great Free Software tools for every single task these days, the uniqueness of Nextcloud Talk is its integration into one collaboration platform, so that all the different areas work nicely together, often in a way which even outperforms proprietary competitors. This talk will introduce the audience to the possibilities of Nextcloud Talk and show how they can reach a new level of productivity for their group with the collaboration platform Nextcloud.

Back

Modern VoIP in Modern Infrastructures

Home

Speaker Giacomo Vacca
Room UD2.208 (Decroly)
Track Real Time Communications
Time 14:20 - 14:35
Event link: View original entry

In recent years we have seen huge changes in IT infrastructures and concepts. VoIP architectures, too, are evolving towards Software Defined Telecoms. In this talk we'll see how VoIP solutions are being shaped by the Cloud, discuss the open points, and share some thoughts about the future.



This is co-authored by Giacomo Vacca and Federico Cabiddu.

Back

AMENDMENT Jitsi: video conferencing for the privacy minded

Home

Speaker Saúl Ibarra Corretgé
Room UD2.208 (Decroly)
Track Real Time Communications
Time 14:40 - 15:00
Event link: View original entry

Jitsi is a set of Open Source projects which provide state-of-the-art videoconferencing capabilities. In this presentation we will explore the Jitsi ecosystem from a privacy-minded point of view.

Communicating privately over a public network (especially if we are using video) can be challenging. Jitsi provides the necessary tools to do so; we'll explore the security model employed by our tools and how to set up a Jitsi instance with ease, while respecting your privacy.



NOTE: This talk was moved from the 9:00 original time, replacing a talk on "Migrating reSIProcate to C99 stdint types" by Izabela Bakollari.

Back

Introducing Falconieri: Remote Provisioning Service as a Service

Home

Speaker Matteo Valentini
Room UD2.208 (Decroly)
Track Real Time Communications
Time 15:05 - 15:25
Event link: View original entry

Remote Provisioning Service is a service offered by phone vendors to quickly and easily provide a configuration to a phone.
Despite the advantage of having a phone ready to use without any specific network configuration (except for an internet connection), there are some drawbacks, such as a different API for each vendor.
Falconieri tries to unify all the vendor-specific APIs under a single set of HTTP REST APIs.

This talk will present the motivations behind the creation of Falconieri and the technical choices made.
It will also illustrate the APIs of the four currently supported vendors:




Back

Asterisk: A Project Update

Home

Speaker Matthew Fredrickson
Room UD2.208 (Decroly)
Track Real Time Communications
Time 15:30 - 15:45
Event link: View original entry

This talk will be about what's happened this last year in the world of Asterisk, including what's happened in the latest major release of Asterisk (Asterisk 17) as well as a discussion of some of the more recent developments that have happened since 17's release.


Back

Chat Over IMAP (COI): State of the Union

Home

Speaker Robert Virkus
Room UD2.208 (Decroly)
Track Real Time Communications
Time 15:50 - 16:05
Event link: View original entry

With the free & open COI standard we enable every mail user to chat via email. We presented this idea last year, have in the meantime launched the COI plugin of the Dovecot IMAP server and the OX COI Messenger app. In this talk you will learn how the basic idea evolved over time, what we have learned during the journey, where we are heading to and: Why and how you should join us on that trip.

We presented the idea at last year's FOSDEM: instead of trusting companies like Facebook or Tencent and relying on their infrastructure for personal communication, we wanted to create a new chat ecosystem based on open standards and federated infrastructure. As much as we like privacy-centered messengers like Signal, it's just another silo and you have to trust the provider.
In the end, all popular messaging services today are proprietary, closed and operated by single providers.
This has unwelcome consequences we are not willing to accept anymore:
Your provider knows all about your social network: when you communicate with whom, the frequency of your communication and the number, type and length of your messages – even with end-to-end encryption in place.
You are locked in as a user: you cannot simply change to another provider, because your friends and peers are also using your current provider. If you want to switch you would need to convince everyone to follow. And of course you would lose all your conversation history if you dare to move away.
The network is controlled by a single party; you cannot start hosting your own WhatsApp, WeChat or FB Messenger service. They set the rules and usually do not give you access to the API for creating your own software.
So why not take an existing network based on open protocols which is already used by almost everyone? Why not use... email? It already provides a federated infrastructure and is based on open standards like SMTP and IMAP. Why not build a chat ecosystem on top of it? We called it COI - Chat Over IMAP.
This was the initial idea, and some key players joined the initiative right from the beginning: Dovecot (the most popular IMAP server), DeltaChat (an open source email based messenger) and Open-Xchange (an open source email technology and service provider) joined forces and kicked off COI - Chat Over IMAP.
The Dovecot team started working on extending the existing IMAP protocol and built some services on top to allow encrypted push notifications, reduce latency, etc. The DeltaChat developers worked on improving the client's core and adjusting it to the needs of the Open-Xchange team, who worked on compatible Flutter-based clients for Android and iOS. Of course everything was a little more complicated than most of us thought, but in the end we made it: in October 2019 we presented beta versions of iOS and Android COI Messenger clients and we introduced the COI plugin for Dovecot.
In this talk we will share the main challenges we have been facing while adding real-time messaging features to IMAP and how we solved them. And we will give insights into the problems we ran into in client development where we had to combine a Rust-based DeltaChat-Core code base with Flutter based mobile UIs and platform-specific native features.

We would also like to share with you what the main outstanding challenges are that might still stop us from being the WhatsApp killer we'd love to be.
And of course we would love to encourage you to join the party.
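To make the "chat on top of IMAP/SMTP" idea concrete, here is a sketch that builds a chat-style message as an ordinary email. The `Chat-Version` marker header follows the Delta Chat convention; treat the exact header name and value as an assumption rather than as the COI specification.

```python
from email.message import EmailMessage

def chat_message(sender, recipient, text):
    """Build a chat-style message as a plain RFC 5322 email.
    The Chat-Version marker header lets chat-aware clients and servers
    (like a COI-style Dovecot plugin) distinguish chat from regular mail;
    its exact name/value here is an assumption, not the COI spec."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Chat: " + text[:20]
    msg["Chat-Version"] = "1.0"
    msg.set_content(text)
    return msg

msg = chat_message("alice@example.org", "bob@example.org", "Hi Bob!")
print(msg["Chat-Version"], msg.get_content().strip())
```

Because the result is a standard MIME message, it travels over any existing SMTP/IMAP infrastructure unchanged; only chat-aware endpoints treat it specially.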

Back

Reach for the Clouds With OpenSIPS 3.0

Home

Speaker Liviu Chircu
Room UD2.208 (Decroly)
Track Real Time Communications
Time 16:10 - 16:30
Event link: View original entry

Tune in and get up-to-date with the philosophy and features behind the major
version shift in the latest OpenSIPS 3.0 release.

OpenSIPS 3.0 embraces the cloud movement with open arms and aims to be easier to deploy and
a lot more enjoyable to maintain. You can now automate routing logic updates to hundreds of
instances with a single click, thanks to the long-awaited ability to reload the routing script which
is finally here!



Individual running instances of OpenSIPS 3.0 are now capable of automatically upscaling or downscaling
themselves according to the volume of traffic that is running through them. By auto-forking more SIP
worker processes during peak day hours and un-forking them at night, OpenSIPS 3.0 maintains steady
performance while minimizing the costs of the cloud instances which host it.



Say goodbye to "opensipsctl" and meet the new Python3-based tool for managing 3.x instances:
"opensips-cli". While retaining the majority of features of its predecessor, opensips-cli builds
upon them, adding an intuitive way of interacting with multiple instances, the ability to filter, trace
and troubleshoot specific calls which take place on any of them or to ask for an instance
diagnosis, which will instantly pinpoint issues such as insufficient memory, slow SQL/NoSQL service,
slow DNS service, overflowing UDP queues and many more!



Presented by Liviu Chircu and Răzvan Crainea

Back

How many engineers does it take to change an IOT light bulb?

Home

Speaker Benjamin Henrion (zoobab)
Room UD2.218A
Track Internet of Things
Time 09:00 - 09:10
Event link: View original entry

Welcome to the IOT devroom

How many engineers does it take to change an IOT light bulb?



Let's discuss the Philips Hue architecture.



"Our new cloud maintains a permanent, open socket connection into every Hue home, 24x7, everywhere in the world to obtain real-time performance"
"- Holy fcking what?"

Back

Checkpointing in a real time OS for transiently-powered embedded systems

Home

Speaker David Garriou
Room UD2.218A
Track Internet of Things
Time 09:10 - 09:30
Event link: View original entry

Some constrained embedded systems cannot use batteries, those are called transiently-powered embedded systems.
They can be equipped with a non volatile RAM (NVRAM) and a super capacitor for gathering energy when available.
Developing in such an environment is not straightforward.
During this talk, we will present our work on a constrained real-time OS that is tolerant to power loss, on an MSP430-based platform.
This platform is an MSP430FR5994 Launchpad equipped with FRAM and a super capacitor.

This talk deals with constrained transiently-powered embedded systems equipped with non-volatile RAM (NVRAM).
Transiently-powered systems may be autonomous sensors, sensor networks, mobile devices without batteries, or any system where the use of batteries is infeasible.
In our case, such systems are equipped with a super capacitor that gathers energy and releases it to the system for a short time (from a few tens of milliseconds to a few minutes).
Programming such a device is difficult because it operates only intermittently, as energy is available.
Such devices must not reboot but continue their computations across successive powered periods, meaning that they must keep their states and values even when losing power.
Using NVRAM may seem to be an easy solution, but that is not the case. Indeed, using NVRAM as a kind of RAM is likely to lead the system to altered behaviour and an inconsistent state.



Our motivation is to provide a transiently-powered computing platform that accommodates both the usual sensing and transmission functions and computations as heavy as possible, a.k.a. edge computing.
This platform shall abstract the use of NVRAM or at least assist the user in the design of its application.



Our test platform is based on an AUTOSAR-compliant open source real-time OS, Trampoline (https://github.com/TrampolineRTOS/trampoline), already used inside high-end vehicles.
Trampoline is very light, configurable, suitable for constrained devices.
We have made some experimentations on a MSP430FR5994 platform, it is equipped with a MSP430 and a 0.22F super capacitor.
Our device shall start just once, and then restart from the last checkpoint on each power recovery.
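As a rough illustration of the checkpoint-and-restart idea described above (this is not Trampoline's actual mechanism; a file stands in for a region of FRAM):

```python
import json
import os

CHECKPOINT = "nvram_checkpoint.json"  # stands in for a region of FRAM

def save_checkpoint(state):
    """Atomically persist task state, as if committing it to NVRAM."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic swap avoids a torn checkpoint

def restore_checkpoint():
    """On power recovery, resume from the last consistent checkpoint."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "acc": 0}  # very first boot

# a computation that survives simulated power losses at any point,
# because it only ever restarts from a consistent checkpoint
state = restore_checkpoint()
while state["step"] < 10:
    state["acc"] += state["step"]
    state["step"] += 1
    save_checkpoint(state)  # checkpoint after every consistent update
print(state["acc"])  # sum of 0..9 = 45
```

The atomic swap is the essential part: writing state directly into NVRAM mid-update is exactly the "NVRAM as RAM" trap the abstract warns about.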



During this talk we will present the current state of our experiments, that is to say:
- Real-time and estimation of remaining energy;
- Task scheduling model for such devices;
- Platform energy consumption models: start and restart;
- Peripherals states and initializations.

Back

Building composable IOT toolsets with Docker, Node-Red and OpenOCD

Home

Speaker Dimitri del Marmol (ddm)
RoomUD2.218A
TrackInternet of Things
Time09:30 - 09:50
Event linkView original entry

We will demonstrate how to quickly develop simple tools to: check for a GPIO state, communicate with an I2C OLED screen or gdb into a running firmware.

Thanks to multiarch/qemu-user-static, it is now trivially easy to build arm32 docker images from any workstation or server. These images can be shared on public repositories like any other x86 image.



Single board computers such as the Raspberry Pi have built-in Linux primitives for basic manipulation of GPIOs and buses (I2C, UART, SPI), such as /sys/class/gpio or /dev/i2c*.
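As a sketch of those primitives, the helpers below drive the legacy /sys/class/gpio interface; the base path is a parameter only so the code can be exercised without hardware:

```python
import os

def export_gpio(pin, base="/sys/class/gpio"):
    """Ask the kernel to expose a GPIO pin via sysfs."""
    with open(os.path.join(base, "export"), "w") as f:
        f.write(str(pin))

def read_gpio(pin, base="/sys/class/gpio"):
    """Read the current level (0 or 1) of an exported pin."""
    with open(os.path.join(base, "gpio%d" % pin, "value")) as f:
        return int(f.read().strip())

def write_gpio(pin, value, base="/sys/class/gpio"):
    """Drive an exported output pin high or low."""
    with open(os.path.join(base, "gpio%d" % pin, "value"), "w") as f:
        f.write("1" if value else "0")
```

Pin direction is configured the same way through the gpioN/direction file; note that newer kernels favour the character-device API (libgpiod) over this sysfs interface.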



Node-RED makes it possible to visually develop simple workflows that can leverage these kernel primitives, but also other packages like OpenOCD.



Most of the presentation will be live with just a few stops on GitHub or the Docker Hub. No slides should be necessary.

Back

Making an IoT robot

Home

Speaker Philippe Coval
RoomUD2.218A
TrackInternet of Things
Time09:50 - 10:10
Event linkView original entry

The technical barrier to targeting low-cost microcontrollers can be too high for many developers already used to high-level APIs. But did you know that those devices can run many operating systems, like NuttX, which is inspired by POSIX (as is Linux)? And it goes even beyond C APIs: JavaScript runtimes like IoT.js can be supported too. IoT.js can also load JS community modules such as Generic-sensor-lite, to support sensors and actuators, or webthing-iotjs, to enable a REST API for embedded applications.

A robot demonstration running on IoT.js will be explained: from the porting work to support a new STM32F7 board, to generating the PWM signals that drive the servo motors, to providing a REST API for "Web of Things" (WoT) applications. To illustrate the "digital twin" concept, the robot can interact with a WebVR application built with the A-Frame framework and bridged to the Mozilla WebThings protocol.

Back

AI at the edge with Tensorflow Lite to Design the Future of Vertical Farming

Home

Speaker Alexis DUQUE
RoomUD2.218A
TrackInternet of Things
Time10:10 - 10:30
Event linkView original entry

While Machine Learning is usually deployed in the cloud, lightweight versions of these algorithms that fit for constrained IoT systems such as microcontrollers are appearing.
Using Machine Learning "at the edge" indeed has several advantages: it reduces network latency, provides better privacy, and works offline.
In this presentation, we will demonstrate how to deploy Deep Learning algorithms on IoT devices thanks to TensorFlow Lite. We will see how to use it to design a smart vertical farming system able to predict and optimize plant growth, at home or in developing countries where a reliable Internet connection is still missing.
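One reason TensorFlow Lite models fit on microcontrollers is 8-bit quantization. The arithmetic behind its affine quantization scheme can be sketched as follows (the scale and zero point here are made-up values; real ones come from the model converter):

```python
def quantize(x, scale, zero_point):
    """Map a float to int8 using TFLite-style affine quantization."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover an approximate float from the int8 value."""
    return (q - zero_point) * scale

scale, zero_point = 0.05, 10   # illustrative parameters only
x = 1.37
q = quantize(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
print(q, round(x_hat, 4))  # 37 1.35
```

Storing weights and activations as int8 instead of float32 cuts memory by 4x at the cost of a small, bounded rounding error, which is what makes inference on MCU-class hardware practical.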

In this talk I will show how trending technologies like IoT, Machine Learning and TensorFlow can make the world better :)
I will discuss how we can use TensorFlow Lite on IoT devices and evaluate its performance and limits.
I will explain our use case in vertical farming, show code snippets and give a short demo.




Back

Sphactor: actor model concurrency for creatives

Home

Speaker Arnaud Loonstra
RoomUD2.218A
TrackInternet of Things
Time10:30 - 10:50
Event linkView original entry

We propose a combined visual and text-based programming environment based on the actor model, suitable for novice to expert programmers. This model encompasses simple communicating entities which easily scale from utilizing threads inside one computer to massive distributed computer systems. Our proposal is well suited to IoT scenarios, creative coding practices and rapid prototyping. The prototype utilizes ZeroMQ transports and embeds Python for easy creation of actors.

Sphactor is currently a research project for a concurrent-programming framework suitable for novice users while retaining the features needed by expert programmers. The library has an actor model at its core and ships with a GUI application to manage actor dependencies visually and to program individual actors using a classical text-based approach.



One of the initial questions behind Sphactor was the fact that students who want to access new technologies often need to be experienced programmers, which is hardly ever the case. As an example, students need to access motion capture sensors, but only an SDK is provided. We can overcome this hurdle by adding some software to make the sensors more accessible. We found that most tools used by students in our academy can use OSC (Open Sound Control) out of the box, so we started transmitting sensor data over OSC. This has proven to be very comfortable for students. We then ran into the situation that for every technology we needed to develop a piece of software to translate its features to OSC. To prevent creating a jungle of tools, we started researching how we could create a general intermediate layer between technologies, using OSC as a transport. This is a common question in the IoT world.



Continued research showed us that students using existing tools and frameworks were hardly ever utilizing all the processors in their machines. This is because the tools they operate are designed only for single-threaded use. Tools utilizing all processors are very rare, especially for novice users. Message passing is one of the fundamental models for concurrent programming, and is actually very similar to what we were already doing in our intermediate software layer, and to what happens in common IoT scenarios when processing sensor data.



These situations are driving the development of Sphactor. We currently have a prototype ready for testing, which we will demonstrate and talk about. Sphactor is being researched by the HKU University of the Arts Utrecht in the Netherlands for use in creative processes and as an educational environment for programming and interacting with new technologies. Libsphactor is developed in C using ZeroMQ's czmq framework. The GUI is written in C++ using an immediate mode UI with minimal dependencies.



This research project is a continuation of research which was presented in the FOSDEM IOT devroom in 2015 and 2016.

Back

Tarantool Cartridge

Home

Speaker Artur Barsegyan
RoomUD2.218A
TrackInternet of Things
Time10:50 - 11:10
Event linkView original entry

We will talk about Tarantool Cartridge, a framework for creating distributed applications. The applications are based on Tarantool, a blazing-fast in-memory database and a Lua application server in one.
Instances interconnect with each other via the SWIM protocol, a UDP-based gossip protocol. We believe that our platform can bring brand new opportunities to the IoT world: we have an aggregated solution for many potential IoT use cases.
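As a toy illustration of why gossip-style dissemination (the protocol family SWIM belongs to) scales so well, here is a round-based simulation; it conveys the intuition only and is not Tarantool's implementation:

```python
import random

def gossip_rounds(n_nodes, fanout=3, seed=42):
    """Simulate rumor spreading: each informed node tells `fanout`
    randomly chosen peers per round until every node has heard it."""
    rng = random.Random(seed)  # seeded for reproducibility
    informed = {0}             # node 0 starts with the update
    rounds = 0
    while len(informed) < n_nodes:
        for node in list(informed):
            for peer in rng.sample(range(n_nodes), fanout):
                informed.add(peer)
        rounds += 1
    return rounds

print(gossip_rounds(100))  # converges in a handful of rounds, O(log n)
```

The informed set roughly multiplies each round, so even large clusters converge quickly, which is what lets membership and failure information spread over plain UDP without any central coordinator.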

Back

How to build Webthings?

Home

Speaker Christian Paul
RoomUD2.218A
TrackInternet of Things
Time11:10 - 11:30
Event linkView original entry

Mozilla WebThings is a smart home platform built with privacy by design. It is an implementation of the Web of Things concepts specified by the W3C. The presentation explains how to create new things and interact with the gateway using addon adapters.

Mozilla's WebThing schemas specify many IoT devices. They are also flexible enough to describe any device using generic types. Standalone devices can be interacted with using the WebThings REST API or through the Mozilla gateway. The gateway is designed to be extensible using addon adapters. A couple of my contributions involving sensors and virtual things will be demonstrated. Recipes will be shared to build your own adapters using your favorite language (JS, Python, etc.). The addons can easily be deployed on your Raspberry Pi for self-hosted home automation.
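To give an idea of what such a schema looks like, here is a simplified, hypothetical Thing Description and how a client might list the REST endpoints for its properties; the field names loosely follow Mozilla's WebThing schemas but are not taken verbatim from them:

```python
# A simplified description of a hypothetical temperature sensor,
# loosely modelled on Mozilla's WebThing schemas (illustrative only).
thing = {
    "title": "Living-room sensor",
    "@type": ["TemperatureSensor"],
    "properties": {
        "temperature": {
            "type": "number",
            "unit": "degree celsius",
            "links": [{"href": "/things/sensor/properties/temperature"}],
        }
    },
}

def property_endpoints(description):
    """Collect the REST endpoints a client would GET to read properties."""
    return {
        name: prop["links"][0]["href"]
        for name, prop in description["properties"].items()
    }

print(property_endpoints(thing))
```

Because the description is plain JSON, an adapter in any language only has to publish such a document and answer HTTP requests on the advertised hrefs.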

Back

Astarte: A Data-First approach to IoT

Home

Speaker Dario Freddi
RoomUD2.218A
TrackInternet of Things
Time11:30 - 11:50
Event linkView original entry

Even though the IoT buzz has been around for years, ecosystems are still scattered and developers must usually patch together a number of solutions to achieve their goals. Astarte is a free software, opinionated "blackbox" solution which aims at empowering developers with a platform which puts Data as the first-class citizen rather than focusing on Device-to-Cloud communication, and can scale to production-tier deployments easily.

Astarte enables developers to skip the details of all the plumbing in IoT data collection, and skip straight to easy-to-use mechanisms for harnessing data produced by IoT devices through analytics, AI, or simply visualisation.
The talk will go over Astarte's design and architecture, both from a plumbing perspective and from the daily developer usage. The live demo will show how to install Astarte in a Kubernetes Cluster, set up a Device (real or a simulator) to interact with Astarte, and build a minimal web application to interact with it - all in the timespan of the talk.

Back

Building IoT solutions with Eclipse IoT technology

Home

Speaker Sven Erik Jeroschewski
RoomUD2.218A
TrackInternet of Things
Time11:50 - 12:10
Event linkView original entry

The IoT working group within the Eclipse Foundation is a joint effort to develop generic building blocks for creating IoT solutions. As of now, it hosts over 30 projects, which address different aspects of realizing IoT use cases. The vast number of projects allows the design of tailored IoT solutions, but bears the risk that people get lost in the wide range of projects. The recently introduced Eclipse IoT Packages project will help here: it aims to provide pre-bundled software packages for the IoT.



In this talk, we are going to introduce the work of the Eclipse IoT working group and showcase selected projects, focusing on how one can use those technologies to build custom, domain-specific IoT solutions. Specifically, we will focus on the message hub Eclipse Hono, the digital twin solution Eclipse Ditto, the update manager Eclipse hawkBit, and Eclipse Vorto, a description language for IoT devices. We further plan to show how one can combine and use the projects in a sensible way within the Eclipse IoT Packages project.

Building your own backend for an IoT-based solution can be difficult, as one needs to solve a number of challenges, among others connecting a large number of devices in a scalable way, abstracting access to the devices, and managing the software on the devices. Within the Eclipse Foundation, developers contribute to over 30 projects to tackle these and further issues. The working group is driven by various partners from academia and industry, including the strategic members Red Hat, Eurotech and Bosch. In this talk we intend to give an overview of the ongoing work in the working group and to showcase some of the projects developed in that context. Namely, we plan to focus on the following projects:



Eclipse Hono – A scalable message hub for connecting a large number of IoT devices to the cloud, supporting various protocols.



Eclipse Ditto – Abstracting access to the state and actions of a physical IoT device in the digital world. This is done by providing HTTP and WebSocket endpoints that other applications can use to access the device's data or to send commands to the device.



Eclipse hawkBit – Manage the software version of an IoT device and manage campaigns for rolling out new software updates.



Eclipse Vorto – Description language for stating the capabilities and features of an IoT device. The Eclipse Vorto models can then be used by other tools e.g. to provide an API for the features or generate implementations that are already integrated with the backend.



Based on those projects it is possible to build your own IoT solution, for which we will give examples during this talk. Moreover, we will introduce the new Eclipse IoT Packages project, which aims to provide pre-bundled packages of Eclipse IoT projects. The aim of these packages is to support developers who want to leverage Eclipse IoT technology in their specific context or domain.

Back

IoT Updates with IPv6 Multicast

Home

Speaker Brett Sheffield
RoomUD2.218A
TrackInternet of Things
Time12:10 - 12:30
Event linkView original entry

Could we update a billion IoT nodes from just one tiny virtual server?



Could a server that is behind a completely closed inbound firewall, using no caching, no CDNs and which never accepts any inbound traffic communicate at massive scale?



How can we handle flow control, with no feedback mechanism?

Let's find out!

Back

IoT with CircuitPython

Home

Speaker David Glaude
RoomUD2.218A
TrackInternet of Things
Time12:30 - 12:50
Event linkView original entry

Introduction to CircuitPython and how to build basic IoT projects without a development environment.

A brief history of CircuitPython
CircuitPython vs MicroPython



Hello World demo:
1. Hello World in REPL
2. Hello World in a Python script
3. Blink (the electronic Hello World)
4. Cheerlights (the internet connectivity Hello World)
5. Hide and Seek (a BLE Hello World?)



CircuitPython-supported hardware used for the IoT demos:
* nRF52840 (Nordic Semiconductor) with built-in BLE
* ATSAMD51 (Microchip) M4 with an AirLift (an ESP32 used as a WiFi co-processor)

Back

PSLab.io

Home

Speaker Mario Behling
RoomUD2.218A
TrackInternet of Things
Time12:50 - 13:00
Event linkView original entry

PSLab is a small USB-powered IoT board for making measurements. It comes with slots for ESP WiFi chips and Bluetooth, and can be used as a hardware extension for Android phones or PCs. PSLab has a built-in oscilloscope, multimeter, wave generator, logic analyzer and power source, and we are constantly adding more digital instruments. To start measuring, connect two wires to the relevant pins and use the Android or desktop apps to view and collect the data. You can also plug in hundreds of compatible I²C standard sensors to the PSLab pin slots, or even control robots with the robotic arm tool.


Back

Erlang and Elixir on IoT devices using AtomVM

Home

Speaker Davide Bettio
RoomUD2.218A
TrackInternet of Things
Time13:00 - 13:10
Event linkView original entry

This talk will present AtomVM, a tiny portable virtual machine that allows Elixir and Erlang code to run on microcontrollers with less than 500 KB of RAM, such as the ESP32 or several STM32 models.

Erlang and Elixir are really good at handling network packets, running concurrent processes, dealing with faults, writing testable software and enabling rapid development.
All these features are relevant for any language or framework that targets IoT devices, so Erlang and Elixir might be a good choice for them. However, the BEAM, the standard Erlang and Elixir VM, cannot fit on those devices. AtomVM aims to overcome this limitation by implementing a tiny virtual machine from scratch.



This talk aims to present AtomVM and to show how a functional language such as Elixir might boost development of IoT projects.

Back

IOT Lightning Talks

Home

Speaker
RoomUD2.218A
TrackInternet of Things
Time13:10 - 13:30
Event linkView original entry

Show us your IoT pet project, 5 minutes each, don't be shy!

Show us your IoT pet project, 5 minutes each, no need to register. If you have a proposal, send a quick email to zoobabATgmail.com.

Back

IoT Projects in FLOSS Foundations

Home

Speaker Alvaro del Castillo
RoomUD2.218A
TrackInternet of Things
Time13:30 - 14:00
Event linkView original entry

A data-based analysis of IoT projects in FLOSS foundations.

In the last decade, Industry 4.0 has emerged as a revolution of traditional industry, and the Internet of Things (IoT) is at the core of it. The Apache, Eclipse and Linux foundations, three of the main actors in open source, have put in place their own IoT architectures, powered by different open source projects. In this talk, these architectures are compared and a common architecture is identified based on emerging standards, with a special focus on the edge. The common architecture is then used to classify the different open source projects.



For each project, an activity and community analysis based on data extracted from Git and GitHub issue trackers is performed using the GrimoireLab platform, a powerful open source tool for software analytics. Finally, the data obtained is used to understand the open source IoT landscape in terms of companies involved, leading projects, technologies adopted and communities.



A total of 55 projects have been analyzed, and all of them are classified into the categories Edge, Cloud, Enterprise and Tools. Inside the Edge category, five subcategories are defined: OS and virtualization on devices, communication protocols, data processing, platforms for interoperability, and applications. For all projects, activity (commits) and community size (people committing) are extracted and analyzed as time series.



The data will be presented as dashboards that all attendees can consult online and all the data could be shared with interested people for further analysis.




Back

Kubernetes of Things

Home

Speaker Stefan Schimanski
RoomUD2.218A
TrackInternet of Things
Time14:00 - 14:30
Event linkView original entry

Kubernetes allows its API surface to be extended with custom objects called CustomResources (CRs), whose JSON contents are described via OpenAPI schemas. The REST API allows real-time notification of changes to be sent to multiple listeners. These sound like the ingredients necessary to build an open IoT platform. This talk is about using Kubernetes CustomResources as Things, i.e. namespaced objects representing sensors and actuators. It is based on a case study implementing this idea, following the example of a deep sea station with valves and pumps, and a controller controlling the air pressure in the station.
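A hypothetical sketch of what such a Thing-as-CustomResource could look like; all names below are illustrative, not taken from the talk:

```yaml
# A "Valve" Thing whose spec is validated by an OpenAPI schema,
# as the abstract describes (illustrative names only).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: valves.things.example.org
spec:
  group: things.example.org
  scope: Namespaced
  names:
    plural: valves
    singular: valve
    kind: Valve
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                open:
                  type: boolean
            status:
              type: object
              properties:
                pressure:
                  type: number
---
# One physical valve in the deep sea station, as a namespaced object.
apiVersion: things.example.org/v1
kind: Valve
metadata:
  name: airlock-valve-1
spec:
  open: false
```

A controller can then watch Valve objects for spec changes (the desired state) and write observed sensor readings back into status, which is the standard Kubernetes reconciliation pattern applied to physical devices.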

Back

Insight Fieldtracks

Home

Speaker yanosz
RoomUD2.218A
TrackInternet of Things
Time14:30 - 15:00
Event linkView original entry

Understanding the course of a drill is important for both steering and evaluation. Fieldtracks implements localization and tracking based on BLE, the ESP32 and mesh networking, for indoor and outdoor environments. This talk will introduce you to Fieldtracks and its challenges.

Back in 2018, we started implementing a small prototype for indoor tracking using inexpensive and compact Espressif ESP32 devices. Beaconing and scanning at the same time, the devices transmit their results over MQTT for real-time analysis and recording.



The first results looked promising and shifted the focus towards user-experience (UX) and visualization. An Angular client allows accessing data in real-time, utilizing a distributed on-site network integrated into the cloud. On-site, UX also concerns the rapid tactical deployment of mesh networks utilizing various links (IEEE 802.3 / 802.11, ITU G.992.5 aka DSL).



The initial deployment happened in September 2019. It demonstrated the general feasibility and revealed room for a lot of UI and UX improvements, from which the need for expertise and new contributors has arisen.



Let's have a look at the challenges we encountered building the network and see why it's exciting to build an IoT network to be used in the field. We would like to invite you to work with us, and welcome some networking later on.
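As background on the kind of technique BLE tracking builds on: distance is commonly estimated from received signal strength with the log-distance path-loss model. A sketch follows; the reference power and path-loss exponent are assumed values that must be calibrated per environment, and the abstract does not state Fieldtracks' exact method:

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model:
        rssi = tx_power - 10 * n * log10(d)
    solved for d:
        d = 10 ** ((tx_power - rssi) / (10 * n))
    tx_power: measured RSSI at 1 m (assumed); n: path-loss exponent
    (about 2 in free space, higher indoors)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

print(rssi_to_distance(-59))  # at the reference power: 1.0 m
print(rssi_to_distance(-79))  # 20 dB weaker with n=2: 10.0 m
```

In practice RSSI is noisy, which is why real deployments combine many scanners, smoothing, and careful calibration rather than trusting a single reading.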

Back

WolfBoot

Home

Speaker Daniele Lacamera
RoomUD2.218A
TrackInternet of Things
Time15:00 - 15:30
Event linkView original entry

Firmware updates in IoT pose a new set of security risks. Secure bootloaders can be handy to deploy new versions of the firmware on those devices that are only reachable through a remote connection.



wolfBoot is a portable, GPL-licensed, OS-agnostic secure bootloader solution for 32-bit microcontrollers, relying on wolfCrypt for firmware authentication and providing secure firmware update mechanisms.



Due to the minimalist design of the bootloader and the tiny HAL API, wolfBoot is completely independent from any OS or bare-metal application, and can be easily ported and integrated in existing embedded software projects to provide a secure firmware update mechanism.
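wolfBoot itself authenticates images with wolfCrypt public-key signatures; as a deliberately simplified stand-in, the sketch below uses an HMAC tag over the image to show the verify-before-boot flow only, not wolfBoot's actual image format or algorithm:

```python
import hashlib
import hmac

KEY = b"factory-provisioned-secret"  # stand-in for a real verification key

def sign_image(firmware: bytes) -> bytes:
    """Producer side: append an authentication tag to the image."""
    tag = hmac.new(KEY, firmware, hashlib.sha256).digest()
    return firmware + tag

def verify_and_boot(image: bytes) -> bool:
    """Bootloader side: refuse to boot unless the tag checks out."""
    firmware, tag = image[:-32], image[-32:]
    expected = hmac.new(KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

good = sign_image(b"\x7fELF...firmware v2...")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])  # one corrupted byte
print(verify_and_boot(good), verify_and_boot(bad))  # True False
```

The point of a public-key scheme, as used by wolfBoot, is that the device only ever stores the public half, so extracting the key from a device does not let an attacker sign new firmware, unlike in this symmetric toy.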



This presentation will focus on the implementation details and the design choices of the project, and on the ports to ARM Cortex-M and 32-bit RISC-V microcontrollers.

Back

Using Micropython to develop an IoT sensor platform with an Augmented Reality UI

Home

Speaker Nicholas Herriot
RoomUD2.218A
TrackInternet of Things
Time15:30 - 16:00
Event linkView original entry

IoT with Augmented Reality



Anyone with a curious mind who is interested in how to marry the physical world and IoT with the virtual. This talk is pitched at intermediate level, but for beginners extensive documentation and a GitHub repo are available to read and learn from. It shows the tiny Micropython Pyboard D being used as a tiny web server serving up an Augmented Reality display. The display shows sensor data overlaid on a marker used to identify the IoT device. The work is open sourced from a project being run at the Samsung Research Institute in the UK.

Using Micropython to develop an IoT multimode sensor platform with an Augmented Reality UI



Who is this for



Anyone with a curious mind who is interested in how to marry the physical world and IoT with the virtual. This talk is pitched at intermediate level, but for beginners extensive documentation and a GitHub repo are available to read and learn from.



What is it about



The story of a project at Samsung to capture data for use in AI projects, which then became an open-source platform whose innovative ideas proved more important than the original concept: a multimode sensor platform using Augmented Reality for visualization.



Will I See The Real Prototype



Yes, there will be a demonstration of the prototype, and it will be used during the talk to explain the concepts.



What Technologies Will It Cover



This talk will cover a microcontroller with pluggable sensors, using Micropython as the main software development platform. For the server-side components, the talk will use a micro web server to serve sensor data and a simple AR display. For Augmented Reality, the talk will switch to the HTML and JavaScript needed to render objects in the browser for the demo.
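The core of such a micro web server is assembling raw HTTP responses by hand. A minimal sketch that runs under standard CPython as well as Micropython-style environments (the function and field names are illustrative, not from the project):

```python
import json

def sensor_response(readings):
    """Build a raw HTTP/1.1 response carrying sensor data as JSON,
    the kind of payload an AR front end could poll for its overlay."""
    body = json.dumps(readings).encode()
    headers = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: application/json\r\n"
        "Content-Length: %d\r\n"
        "\r\n" % len(body)
    ).encode()
    return headers + body

resp = sensor_response({"temperature": 21.5, "humidity": 40})
print(resp.decode().splitlines()[0])  # HTTP/1.1 200 OK
```

On the board, this response would simply be written to an accepted socket; the browser side (A-Frame, AR.js) only sees ordinary JSON over HTTP.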



What Will I Learn



Hopefully this talk will serve as an introduction to IoT concepts for developers interested in this tech, using Micropython for embedded systems and showing how to use it. You will see in this talk:
1. How to use a tiny microcontroller with real sensors using Micropython.
2. How to connect 'things' to the platform.
3. A micro web server to serve content.
4. The web side, using web technologies like A-Frame and AR.js to provide AR capabilities.



In Summary



This talk will move from the small embedded world of IoT and sensors using Micropython, through the general micro web server, to the concepts of Augmented Reality and using a mobile browser to view data from the sensors.



Resources



The main GitHub repo for the project is here. It's open source and still changing. Be sure to keep track of it! :-)

Back

TinyGo: Fast, Small, Concurrent: Choose Three

Home

Speaker Ron Evans
RoomUD2.218A
TrackInternet of Things
Time16:00 - 16:30
Event linkView original entry

This talk will show, using live demos, why TinyGo (http://tinygo.org), the Go compiler for "small places", is an excellent choice for next-generation IoT and embedded devices.


Back

Zyre: p2p messaging to fuck the cloud

Home

Speaker Benjamin Henrion (zoobab)
RoomUD2.218A
TrackInternet of Things
Time16:30 - 17:00
Event linkView original entry

Pieter Hintjens' last IoT project, running OpenWRT and the Zyre p2p library.

Zyre has the potential to change the cloud paradigm, with auto-discovery inside the LAN and without requiring any internet connectivity.



Your TV can then discover your fridge.
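That LAN auto-discovery works by broadcasting small UDP beacons. A sketch of building and parsing a beacon following the ZRE layout (a 'ZRE' signature, a version byte, the node's UUID, and its mailbox port); details are simplified relative to the real wire protocol:

```python
import struct
import uuid

def make_beacon(node_uuid: uuid.UUID, port: int) -> bytes:
    """ZRE-style discovery beacon: signature + version + UUID + port,
    broadcast over UDP so LAN peers find each other with no server."""
    return b"ZRE" + bytes([1]) + node_uuid.bytes + struct.pack(">H", port)

def parse_beacon(data: bytes):
    """Return (uuid, mailbox_port) for a valid beacon, else None."""
    if len(data) != 22 or data[:3] != b"ZRE" or data[3] != 1:
        return None
    return uuid.UUID(bytes=data[4:20]), struct.unpack(">H", data[20:22])[0]

me = uuid.uuid4()
peer_id, peer_port = parse_beacon(make_beacon(me, 49152))
print(peer_id == me, peer_port)  # True 49152
```

A node that hears a beacon with an unknown UUID connects to the advertised port, which is how the TV finds the fridge without any broker or cloud account.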



3 years ago, I was working with Pieter Hintjens, the main author of ZeroMQ and organiser of the IoT devroom, on a demo setup for his last conference, at IoT Munich.



When he came back from Munich, he was diagnosed with terminal lung cancer.



We demonstrated a pile of OpenWRT routers blinking LED lamps in an orchestrated way.



Most of my work consisted of assembling the hardware, writing OpenWRT packages, and setting up CI/CD for the devices.



I will show how to build OpenWRT packages with new cluster technologies, like Kubernetes.

Back

PGP Keysigning

Home

Speaker FOSDEM Staff
RoomUD2.Corridor
TrackKeysigning
Time14:00 - 16:00
Event linkView original entry

The FOSDEM 2020 PGP Keysigning

The keysigning event takes place in the corridor on the second level of the U building. There is no fixed end time; previous editions lasted approximately one hour per 100 keys on the list. You must register before the conference to take part. Please bring the printed list, a pen and an appropriate form of identification with you to the event.



Please note that you must register your key at least a week in advance of the conference.