Speaker | FOSDEM Staff
---|---
Room | Janson
Track | Keynotes
Time | 09:30 - 09:55
FOSDEM welcome and opening talk.
Welcome to FOSDEM 2020!
Speaker | Thorsten Leemhuis
---|---
Room | Janson
Track | Keynotes
Time | 10:00 - 10:50
Today's Linux kernel and the ones from the early FOSDEM days still have some things in common, but in the end are totally different beasts. This talk will take a closer look at how the Linux kernel and its development during those twenty years evolved and adapted to new expectations.
The talk will thus mention some of the changes to the Linux kernel that had the most impact in those two decades; many might have seemed small or insignificant in the beginning, but turned out to pave the path for groundbreaking technologies like Android, cloud, or containers that we all deal with today. The speaker will also take a closer look at how Linux kernel development itself changed during those twenty years, but also cover aspects that didn't change much – like the email-driven development model, which in the days of GitHub and GitLab might look anarchic, like something from the Middle Ages, to some. The speaker will also step onto thin ice and speculate for a few minutes about what the future might hold, as there are a few strong indicators that Linux is entering a new era and slowly but steadily gaining more aspects of a microkernel.
Speaker | Michael Meeks
---|---
Room | Janson
Track | History
Time | 11:00 - 11:50
From ten years of LibreOffice, how can you apply what we learned to your project? What is going on in LibreOffice today, and where is it going? And how can you re-use or contribute to the story?
Come hear about the story of LibreOffice, the reasons we
started - and some of the highlights: successes, failures and other
lessons learned from our first ten years. Hear how our initial
decisions and vision of openness and vendor neutrality panned
out. See what has been learned about building an effective commercial
ecosystem, with certification.
Hear about the trajectory of technical updates and how we
rejuvenated an open-source code-base through massive refactoring, as
well as re-targeting for web and mobile.
Catch up with the latest in Online innovation, optimization
and scalability work as well as our growing integration with lots of
other Open Source projects.
Finally catch up with the latest and greatest feature/function
improvements as we move towards LibreOffice 7, and find out how you
can best get involved with the next decade of the LibreOffice story.
Speaker | James Shubin
---|---
Room | Janson
Track | History
Time | 12:00 - 12:50
Over the past twenty years, the automation landscape has changed dramatically.
As our hunger for complex technical infrastructure increased, and our ability to keep up with these demands faltered, we've outsourced a lot of the work to third parties and cloud providers.
We'll step backwards and show where we came from, and where we're going.
If we don't understand this future, and step up to the challenge, then we eventually won't control our own computers anymore.
We'll discuss this timeline from a tools perspective and showcase many live demos of the past, present, and what will be possible in the future.
This presentation will contain many demos and interactive examples. I will showcase some modern ideas I have with my Free Software project called mgmtconfig.
Speaker | Ton Roosendaal
---|---
Room | Janson
Track | History
Time | 13:00 - 13:50
The presentation is going to be audiovisual and entertaining; based on a number of short videos I want to tell the story of Blender. Starting in the late 90s, how Blender became open source, going over the big milestones for Blender, and ending with the fast growth of our project and the interest of the film and game industry. Blender is a more mature project now, which involves different dynamics than it used to. How are we going to tackle the challenges of the industry, while not losing the community that brought us this far?
Speaker | Warner Losh
---|---
Room | Janson
Track | History
Time | 14:00 - 14:50
The early history of Unix is much richer than is usually presented. There are many hidden gems that are little known and quite interesting to learn about. The dates of many of the "firsts" for Unix accomplishments are much earlier than people think. This well-researched talk explores those gems and firsts, and shares many artifacts from the early days of Unix. Many of these artifacts have only come to light in recent years, and they change how we view the early history of Unix. Even the oldest of grey beards will learn things they didn't know about Unix from this talk.
Most histories of Unix follow the same old boring plan: talk about Multics, Ken scrounging the pdp-7, moving to the pdp-11, rewriting in C and then the explosion that happened with V6 and V7 before jumping into the Unix wars between AT&T and BSD followed by something about Linux (either pro or con depending on the speaker's politics). We've all seen it, and many can predict which "classic" pictures will be used, the points that will be made, and the arcs drawn.
This talk is nothing like that. It brings all the early years of Unix to life in a unique way. The early years of Unix were surprisingly rich. The author will use original sources to take you on a tour of many of the firsts in Unix and explore the community ties key to Unix's early success. Many of today's fads, like microkernels, hypervisors, multiprocessing and user mode execution, actually happened early on in Unix's history, long before they were today's fads. "What's old is new again" has never been so apt. You'll be surprised to learn how early each of these things happened. Come see the secret history of Unix as it played out both in obscure business units of AT&T and in the worldwide users groups who banded together to support each other when AT&T wouldn't. You'll see footage of early machines as well as the first real Unix application: Space Travel (newly rediscovered and restored by the TUHS group). See first hand the machines, programs, newsletters and documentation that together weave a rich tale of innovation, community and working within constraints. Learn how today's open source movement owes a debt to these early communities, and how they paved the way for Unix to become the open and ubiquitous system it is today and helped sow the seeds for the communities of today.
Speaker | Liam Proven
---|---
Room | Janson
Track | History
Time | 15:00 - 15:50
For as long as computers have been around, roughly every 10-15 years, the whole industry underwent a reset and reinvented itself anew… until the early 1990s, when somehow, the industry skipped a generation. Instead, it looked backwards, and adopted an older model of computing. The cost has been very high and is holding back the development of the entire field.
This talk looks at how we turned to the past instead of the future, what we missed out on as a result, and how to move forward. It follows on from the above proposal, but takes a different tack and should stand alone. It looks at where personal computers might have – but didn't – go in the 1980s and 1990s, and at a sampling of advanced OS technologies that never caught on – Plan 9, Inferno, Taos, Oberon – and the cool stuff we lost out on as a result.
It will end with trying to identify the next such generation gap, and why we should consider starting afresh rather than adapting current tech for the next gen.
Speaker | Daniel Stenberg
---|---
Room | Janson
Track | Internet
Time | 16:00 - 16:50
HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC.
HTTP/3 is the designated name for the coming next version of the protocol that is currently under development within the QUIC working group in the IETF.
Daniel Stenberg presents HTTP/3 and QUIC: why the new protocols are deemed necessary, how they work, how they change the way things are sent over the network, and what some of the coming deployment challenges will be.
Speaker | Pili Guerra
---|---
Room | Janson
Track | Internet
Time | 17:00 - 17:50
The Tor Project is building usable free software to fight surveillance and censorship across the globe. In this talk we'll give an update on what we got up to during 2019, what happened in the wider Tor ecosystem, and what lies ahead of us.
During the past year the Tor Project has been working hard on improving the software, building and training communities around the world as well as creating an anti-censorship team and roadmap that can push forward technologies to circumvent censorship.
This talk will cover the major milestones we achieved and will give an outline of what lies ahead. In particular, we'll talk about our work to scale the network so it can cope with increased demand as we move forward with our plans for mainstream adoption of Tor Browser and the Tor network.
We will also share updates about our anti-censorship efforts, a year on from the formation of a dedicated Anti-Censorship team, and their work on next generation pluggable transports. Moreover, we'll explain our defense against website traffic fingerprinting attacks and plans for improving onion services and making them more usable (DDoS resistance, better user interfaces for authentication and dealing with errors).
Finally, we'll shed some light on efforts to get Tor support directly embedded into other browsers, like Firefox and Brave, and on educating users by reorganizing the content on our website, creating dedicated community and developer portals, and delivering extensive trainings throughout the world.
Speaker | Mateusz Kowalski, Kamila Součková
---|---
Room | Janson
Track | Internet
Time | 18:00 - 18:50
Do you know where your internet traffic flows? Does it go through China even if you don't want it to? SCION is a new internet architecture aimed at solving this problem. We will show how you can easily join the already existing worldwide network.
The current Internet was not designed with control and security considerations in mind: incidents such as the hijacking of all traffic for YouTube by a Pakistani ISP in February 2008, the Cloudflare DNS service hijacked by AnchNet in May 2018, or a large chunk of European mobile traffic being rerouted through China in June 2019 show that we cannot quite trust the current Internet. SCION is a proposed future Internet architecture aiming to offer high availability and security, even in the presence of actively malicious network operators and devices.
Designing a new Internet from scratch gives us the opportunity to make it work a lot better: we are aiming to notably improve security, availability, and performance. At the same time, just replacing the Internet would not be feasible, and thus we also emphasise practical concerns, such as incremental deployment and backwards compatibility. Thanks to that, SCION is currently the only clean-slate Internet architecture with a world-wide research network and production deployments in several large institutions in Switzerland; and you can start using it today.
In the first part of this talk, we will walk you through the current state of SCION's design and implementation, showing how it provides its important features.
The world-wide test deployment, SCIONLab, consists of around 50 different points-of-presence around the globe, many of them connected via direct, BGP-free, links. Having many independent organizations belonging to a continually evolving network introduces some non-trivial challenges of managing what you don’t own, which we will also talk about.
We will show a live demo presenting how easy it is today for the end user to join the network and start using the available services. We will also present what taking down a part of the network can look like, and how SCION prevents a scenario of traffic passing through China or Pakistan.
To close the talk, we will very briefly present the future plans and the direction in which we want the project to evolve.
Speaker | Danese Cooper
---|---
Room | K.1.105 (La Fontaine)
Track | Community and Ethics
Time | 10:00 - 10:50
Free and Open Source software has revolutionized the Software Industry and nearly all other areas of human endeavor, but until now its reach into actual governance at the municipal citizen level has not been very deep. Initiatives like Code for America have encountered challenges driving acceptance for FOSS alternatives to proprietary software for citizen governance. At the same time, the gap between citizen need and cities' capabilities has widened. But several new projects are aiming to change this state of affairs to help empower citizens and improve municipal services worldwide through adoption of FOSS. Learn about these efforts and how you can get involved.
Speaker | James Bottomley
---|---
Room | K.1.105 (La Fontaine)
Track | Community and Ethics
Time | 11:00 - 11:50
It has become very popular in the last several years to think of free and open source as a community-forward activity; indeed the modern approach is to try and form a community or foundation first and do code second. There is also much talk about maintainer burn-out and community exploitation. However, the same people who talk about this still paraphrase the most famous quote from The Cathedral and the Bazaar: "scratching your own itch". They forget this is your own itch, not everyone else's, because Open Source begins as a selfish activity. The fact that communities do form around a load of selfish competing individuals is actually a testament to the unacknowledged power of open source to co-opt the selfish instinct and make it synergistic to some communal good.
This talk will explore the selfish origins of free and open source and dissect the co-option power it has to form broad and deep communities from what are apparently simple transactional engagements. We'll also explain how some of the more recent community failures have been engendered by the concentration on long term community to the detriment of enabling purely transactional mechanics by which initially selfish contributors come to the project.
The origins of selfish contributions, while called out in the founding canons of the open source and free software movements, were initially not very well recognized until the first open source projects (and here the author will use the example he's familiar with: the Linux kernel) tried to engage successfully with companies trying to utilize the projects for their own ends. We became adept at explaining why forking is bad, how your cost structure will rise exponentially if you do it, and how the cheapest method of achieving your business goals is in fact to work with the existing community on whatever feature the company was trying to add as part of its business plan. In fact, those of us who did company engagement became transactional salespeople, offering a corporation the achievement of its business goals in exchange for some unexpected community engagement.
Nowadays, all companies have drunk the open source Kool-Aid and all pledge their loyalty to further the interests of the community, and this previous learning is all but forgotten in the third and fourth waves of open source. However, at their hearts, corporations are still the same business-goal-focused transactional entities we had to deal with in the early days of open source, and an understanding of how to co-opt their transactional nature would go a long way to assisting new open source developers in their quest to form communities.
This talk will begin with some history of the Linux kernel's corporate engagement, explore and explain some of the processes within open source development that lead to the conversion of transactionalism into long term community synergy (the truly astounding ability of open source to co-opt selfishness) and then give a few examples of how serving the community rather than enabling transactionalism can give rise to burn out and apparent exploitation. In conclusion we'll give some simple rules to enable this co-opting and suggest how some of the competing interests of the ethical source and social justice movements might likewise be co-opted.
Speaker | Molly de Blanc
---|---
Room | K.1.105 (La Fontaine)
Track | Community and Ethics
Time | 12:00 - 12:50
Internet of Things (IoT) devices are part of the future we were promised. Armed with our mobile devices, we can control everything from our cars to our toasters to the doors of our homes. Along with convenience, IoT devices bring us ethical quandaries, as designers and users. We need to consider the ethical implications of the technologies we are building and ask ourselves not just about the ways they are being used, for both good and evil, but about the potential use cases we might encounter in the future.
IoT devices are becoming increasingly prevalent in our lives -- even my water filter is wifi enabled. In this session, we'll be looking at two case studies of how one might interact with IoT devices, and then consider the ethical implications of these devices, focused on the social impacts they can have on an individual or a society. While we will focus on smart doorbells and smart locks and situations in which installing these could significantly impact quality of life, we will touch on other common IoT devices.
Speaker | Justin W. Flory, Michael Nolan
---|---
Room | K.1.105 (La Fontaine)
Track | Community and Ethics
Time | 13:00 - 13:50
Despite the number of working groups, advisory committees, and coordination roundtables, there is little progress towards creating more ethical and safe AI systems. AI systems are deployed in increasingly fragile contexts. From law enforcement to humanitarian aid, several organizations use AI powered systems to make or inform critical decisions with increasingly outsized side effects.
What is a rights-based approach for designing minimally safe and transparent guidelines for AI systems? In this talk, we explore what a Free AI system might look like. Then, taking research and guidelines from organizations such as Google and the UN Office for the Coordination of Humanitarian Affairs, we propose practical policies and tools to ensure those building an AI system respect user freedom. Lastly, we propose the outlines of a new kind of framework where all derivative works also respect those freedoms.
Rights-based approaches are commonly used within humanitarian contexts to approach problems that the sector faces. In this talk, we use the same approach to tackle the issue of "unethical" AI systems. We do so by first defining a set of rights that we feel must be respected, then proposing a number of methods that we feel help ensure a system respects those rights, and lastly proposing an organizational and regulatory framework that we feel could help encourage and enforce the use of those methods by developers of AI systems.
Speaker | Daniel Riek
---|---
Room | K.1.105 (La Fontaine)
Track | Containers and Security
Time | 14:00 - 14:50
Free Software (as in Freedom) had won. The vertically integrated Cloud now is the predominant operational paradigm and is threatening to undermine software freedom. To many all seems lost, but the world keeps changing and decentralized compute is making a comeback. Containers and Kubernetes are already having a deep impact on the Linux operating system (OS) that goes well beyond DevOps and cloud-native applications. The concepts of application-centric packaging, process isolation through Linux containers, and immutable infrastructure are shaking up the core traditions of today's GNU/Linux operating systems. These concepts are also challenging the assumptions and approaches derived from the past 40+ years of work that originated with UNIX. The Linux distribution as we know it is coming to an end, and is being replaced by a new concept of containerized, multi-instance, multi-user applications, which can be deployed in scale-out environments as well as for widely distributed compute scenarios. In this session, we'll assess this new OS environment in the times of '5G' de-centralized cloud and take a deep look at the consequences this new OS model has for both developers and admins.
This talk will draw on the history of compute in general and Free and Open Source software in particular to explain an evolution of paradigms from the GNU/Linux distribution to modern Free Software application platforms built on Kubernetes, and how they can shape the future of compute in the face of major technological changes.
Speaker | Kris Nova
---|---
Room | K.1.105 (La Fontaine)
Track | Containers and Security
Time | 15:00 - 15:50
Kubernetes is complex, and extremely vulnerable. In 2019 we explored the complexity of the Kubernetes codebase, and the antipatterns therein. This year we want to look at understanding how we observe our cluster at runtime. Let's live code some C and C++ and explore the libraries that bring Wireshark, Falco, and Sysdig to life. We concretely demonstrate how we are able to audit a Kubernetes system, by taking advantage of auditing the kernel's syscall information while enriching this data with meta information from Kubernetes.
We start off by presenting the problem of Kubernetes security at runtime. We discuss concerns with namespace and privilege escalation in a Kubernetes environment. We discover how auditing the kernel gives us visibility into both the container layer and the underlying system layer.
We look at building an eBPF probe, or kernel module to begin auditing syscall metrics. We discover how we are able to pull those out of the kernel into userspace, and start exploring powerful patterns for using these metrics to secure a Kubernetes cluster.
The audience walks away understanding how the kernel treats containers, and how we are able to easily make sense of them. The audience also walks away equipped with an OSS toolkit for understanding, observing, and securing a Kubernetes environment.
Speaker | James Bottomley, Mike Rapoport
---|---
Room | K.1.105 (La Fontaine)
Track | Containers and Security
Time | 16:00 - 16:50
Security is a big problem, especially in the cloud for container workloads. This presentation investigates improving security in the Linux kernel itself. The first target is securing sensitive application data, for instance, private keys.
Address space isolation has been used to protect the kernel and userspace programs from each other since the invention of virtual memory.
Assuming that kernel bugs, and therefore exploits, are inevitable, it might be worth isolating parts of the kernel to minimize the damage these exploits can cause. Moreover, restricted mappings in kernel mode may improve mitigation of hardware speculation vulnerabilities.
There are several ongoing efforts to use restricted address spaces in Linux kernel for various use cases:
* speculation vulnerabilities mitigation in KVM
* support for memory areas visible only in a single owning context
* hardening of the Linux containers
We are going to present the approach for the implementation of restricted mappings in the Linux kernel and how this implementation would be used with various use-cases.
We are also going to take a closer look at the possibility of assigning an address space to Linux namespaces, so that tasks running in namespace A have a different view of kernel memory mappings than tasks running in namespace B. For instance, by keeping all the objects in a network namespace private, we can achieve levels of isolation equivalent to running a separate network stack.
Speaker | Ludovic Courtès
---|---
Room | K.1.105 (La Fontaine)
Track | Containers and Security
Time | 17:00 - 17:50
This talk will reflect on what GNU Guix has to offer to users and how it compares to other approaches—from CONDA and pip to Flatpak and Docker. Guix is not only a transactional package manager and declarative GNU/Linux distribution: it’s also an environment manager, a container provisioning tool, and more. We will describe these tools and our journey to 1.0, emphasizing key properties that set Guix apart:
reproducibility, transparency, and hackability.
When it comes to software deployment, we are getting used to a new distribution of roles among many actors: traditional distros take care
of core software pieces, “application bundles” à la Docker/Flatpak provide complex applications, Cabal, Gem, npm, pip, and friends take care of language-specific software, and Puppet/Ansible orchestrate the whole thing. Each of these tools has its justification, but the end result is a maze that’s hard to deal with.
In this talk I will present GNU Guix, a software deployment toolbox and its associated distro that feature transactional upgrades and rollbacks, declarative OS deployment inspired by Nix, and reproducible builds. I will show how Guix approaches a variety of use cases: “package management” à la apt-get, environment management à la VirtualEnv, Ansible-style declarative OS deployment, and container provisioning à la Docker.
Guix emphasizes programmability, and one of its salient features is that it provides a unified Scheme programming environment to deal with all the aspects of configuration, deployment, and system management—including service management by PID 1. I will illustrate how
this works out on standalone Guix systems, and show the benefits of the approach by discussing applications that take advantage of Guix as a library to support reproducible software deployment.
Last, I will reflect on the road that led to Guix 1.0 six months ago and present some of the challenges ahead.
Speaker | Dan Shearer
---|---
Room | K.1.105 (La Fontaine)
Track | Databases
Time | 18:00 - 18:50
LumoSQL is an experimental fork of SQLite, the embeddable database library underpinning everything from Android to iOS to Firefox. As a replacement for fopen(), SQLite is a good choice for single-writer applications and disconnected, slow and small devices. Modern IoT and application use cases are increasingly multi-writer, fast, high-capacity and internet-connected, and LumoSQL aims to address these very different modern needs. LumoSQL initially aims to improve speed and reliability by replacing the internal key-value store with LMDB, by updating and fixing a prototype from 2013, and by allowing multiple storage backends. Next up, we are designing the architecture for replacing the write-ahead log system (as used by all other open and closed source databases) with a single-level store, drawing on LMDB as an example of a single-level store in production at scale. Challenges so far involve code archeology, understanding and updating benchmarking, designing a system for keeping parity with upstream code changes, file format migration, and identifying bugs in both SQLite and LMDB.
Please do join us in testing and improving at https://github.com/LumoSQL/LumoSQL.
In this talk we welcome questions and contributions. This conference has many SQLite users and developers. What do you want to see?
LumoSQL is a combination of two embedded data storage C language libraries: SQLite and LMDB. LumoSQL is an updated version of Howard Chu's 2013 proof of concept combining the codebases. Howard's LMDB library has become ubiquitous on the basis of performance and reliability, so the 2013 claims of it greatly increasing the performance of SQLite seem credible. D Richard Hipp's SQLite is relied on by many millions of people on a daily basis (every Android and Firefox user, as just two projects of the thousands that use SQLite) so an improved version of SQLite would benefit billions of people.
The original code changes btree.c in SQLite 3.7.17 to use LMDB 0.9.9. It takes some work to replicate the original results, because not only has much changed since, but as a proof of concept there was no project established to package the code or make it accessible. LumoSQL revives the original code and shows how it is still relevant in 2019. The premise seems sound. Some bugs have been fixed in LMDB and the prototype SQLightning work.
There need to be multiple backends, initially the original SQLite on-disk format and LMDB, at first for compatibility and conversion purposes. However, the ability to have more backends is very attractive, and already there are draft designs for where that could lead.
The design taking shape for tracking SQLite upstream may be useful to other projects, where an automated process can handle most changes that do not touch some of the basic APIs.
Write-Ahead Logs are in every single widely-used database today: a concurrency model developed in the 1990s and now the only option in both closed and open source SQL databases. There are pros and cons to WALs, but the merge-back model of WALs lacks atomicity, which becomes obvious in corruption and reliability issues at speed and scale. Databases go to very expensive efforts to avoid this, but combined with a lack of real-time integrity checking in almost all databases, this is a fundamental problem, especially for modern SQLite-type use cases.
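As a concrete illustration of the "replacement for fopen()" claim, here is a minimal sketch (plain SQLite via Python's standard sqlite3 module, with a hypothetical notes.db file; nothing LumoSQL-specific) of how an application embeds the library instead of hand-rolling file I/O:

```python
import sqlite3

# SQLite is embedded and serverless: opening a database is just opening a
# local file; there is no server process to install or administer.
conn = sqlite3.connect("notes.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")

# A transaction gives the atomicity that a plain fopen()/fwrite() lacks.
with conn:
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello, embedded SQL",))

for row in conn.execute("SELECT id, body FROM notes"):
    print(row)

conn.close()
```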
Speaker | Matthias Kirschner
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 12:00 - 12:15
More and more traditional processes in our society now incorporate, and are influenced by, software.
Processes that decide, for example: Who will be able to go to which university? Who will be invited for a job interview? How long does someone have to go to jail?
Therefore many organisations which work for people's rights are now confronted with the problems proprietary software creates for society. Pupils' associations, unions, human rights organisations, environmental organisations -- all of them need to understand how software works to do their work in our society.
To continue to fulfil their role, civil society needs to understand how processes are implemented in software; they need to be able to challenge the assumptions, the values, and the way programmers designed them, and to have a better understanding of how they could be changed.
In short: in a world in which more and more of our lives are controlled by software, civil society organisations need us as Free Software hackers to support them in doing their job.
Speaker | Mikel Cordovilla
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 12:20 - 12:35
OpenOlitor is a SaaS open-source tool facilitating the organization and management of CSA (Community Supported Agriculture) communities. The tool covers a large spectrum of functionalities needed by CSAs, such as member management, emailing, invoicing, share planning and delivery, absence scheduling, etc. The software is organized and monitored by an international community that promotes the tool, helps operate it and supports the interested communities. In order to promote the sustainability of the tool and this international community, an organization based on CSS (Community Supported Software) has been proposed.
The Community Supported Agriculture movement has grown considerably over the last few years, proposing a new model of food production and distribution. CSA initiatives connect producers and consumers directly. The consumer receives a basket of locally produced, fresh products on a subscription basis. Risk is shared among all participants of the community, independently of whether they are a consumer or producer. The growing popularity of the CSA model has led to larger communities. Consequently, the management effort for these organizations is becoming unaffordable on a volunteer basis.
Under such conditions, software that helps with redundant tasks and connects all participants of these communities can be particularly valuable.
The main motivations for the creation of OpenOlitor (OO) are to respond to the growing popularity of the CSA model and to keep it economically viable and sustainable, reducing the time invested in management tasks by building software which facilitates easy organization.
OpenOlitor is a web-based, open source tool facilitating the organization and management of CSA (Community Supported Agriculture) communities. This tool is composed of two different access points.
The OO platform is divided into a back-end and a front-end:
- Front-end: the members portal (https://github.com/OpenOlitor/openolitor-client-kundenportal)
- Back-end: programmed in Scala, using the Akka library for concurrency and distributed systems. A single server can be shared among multiple initiatives, but a separate MariaDB database is created per initiative; this model guarantees data ownership and privacy. The code is publicly accessible on GitHub: https://github.com/OpenOlitor/openolitor-server
Currently, seven CSAs are using OpenOlitor to support their organizations in a production environment. Three more CSAs are in the process of transitioning to this digital tool. A few numbers extracted from the last two and a half years put the work volume facilitated by OO into perspective:
As shown by the architecture, the database and the front-end are CSA-specific, but the server can be used by multiple organizations. We promote the idea of hosting communities, where a group of CSAs share the costs of hosting and the effort of operational work. This model is currently in use for all CSAs using the software.
As a financial model, we promote the idea of Community Supported Software (CSS). This model is directly inspired by the CSA model, where all participants share the risk and responsibility of food production. Transferring this idea to software, an organization working with OO is invited to participate in the maintenance, operation, support and further development of the software. A fixed financial contribution is defined per basket delivered. All contributions are shared across all projects using the tool. Using this model, the effort and risk are shared equally, independently of the size of the CSA.
This software is organized and monitored by an international community that promotes this tool:
OpenOlitor International: a not legally formed organization composed of around 10 members from different European countries. This organization is in charge of the vision and principles of all interactions with the software. The group meets periodically to decide OO's main features and how to evolve the platform. Everyone interested in OO is invited to join;
OpenOlitor association: a non-profit organization based in Switzerland. The legal body managing the funds received from initiatives and public sources;
OpenOlitor operations association: the organization in charge of the operation and support of the Swiss-based CSAs;
Sunu: a German-based organization that promotes digitalization for CSAs. They promote the use of OO and provide operations and support for German CSAs.
Speaker | Pierre Slamich
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 12:40 - 12:55
Open Food Facts is a collaborative and crowdsourced database of food products from the whole planet, licensed under the Open Database License (ODBL). It was launched in 2012, and today it is powered by 27000 contributors who have collected data and images for over 1 million products in 178 countries (and growing strong…)
This is the opportunity to learn more about Open Food Facts, and the latest developments of the project.
Scan a product using your phone, take a picture, and you're already part of the Open Food Facts revolution!
In this talk we'll show how Open Food Facts leverages open source technologies such as Perl, Python, TensorFlow, MongoDB, Java, Swift, React and Flutter, as well as the great power of communities, to open data of public interest for health and science, and to enable unforeseen applications in your daily life.
We will also introduce you to Open Beauty Facts, for freeing your cosmetic cupboard: shampoos, toothpastes, lipsticks, etc.
How does it work?
Using our Android or iPhone app, you can easily scan the barcode of products from your home or local store.
You can either check them out (thanks to the decoding and comparison tools) or contribute pictures of their labels, assisted by our label-reading AI.
The same can also be done from the website, where additional tools are available to fill in the product details from the labels, navigate or visualise the database in various ways, or access the APIs and raw data to make your own tools and analysis.
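For example, the public read API can be queried with a few lines of Python. The URL pattern below follows the project's documented v0 API; the barcode and field names are illustrative and the schema may evolve:

```python
import json
import urllib.request

barcode = "3017620422003"  # illustrative example barcode
url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"

# Fetch the product record as JSON from the public read API.
with urllib.request.urlopen(url) as response:
    data = json.load(response)

product = data.get("product", {})
print(product.get("product_name"))      # e.g. the product's display name
print(product.get("nutriscore_grade"))  # field availability varies by product
```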
Open Food Facts is developed and managed by a community of open source, open data and food enthusiasts and is organised as a non-profit association. All its creations are open:
- the collected data is published as Open Data,
- the software running the server(s) is open source and reusable (it was also used to create the Open Beauty Facts database),
- the mobile applications are open source as well.
Speaker | Bruno Škvorc
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 13:00 - 13:15
For as long as human society has existed, humans have been unable to fully trust each other. For millennia, we relied on middlemen to establish business or legal relationships. With the advent of Web 2.0, we also delegated the establishment of personal connections, and the system has turned against us. The middlemen abuse our needs and their power, and we find ourselves chained to convenience at the expense of our own thoughts, our own privacy. Web3 is a radical new frontier ready to turn the status quo on its head, and these are the technologies we're using to make it happen.
Speaker | Atlas Engineer
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 13:20 - 13:35
While current browsers expose their internals only through an API and limit access to the host system, Next doesn't, allowing for infinite extensibility and inviting users to program their web browser. On top of that, it doesn't tie itself to a particular platform (we currently provide bindings to WebKit and WebEngine) and allows for live code reloads, thanks to the Common Lisp language, about which we'll share our experience too.
Next is a keyboard-oriented, extensible web browser designed for power users. While most (all?) current browsers expose their internals through an API, Next exposes its entire code to the user. Modern browsers limit access to the host system, and Next doesn't, allowing for a broad range of new features. Similar projects have failed due to being tied to a particular platform (Vimperator, Conkeror…), but Next's choice is to have its core written as an independent library, and to provide bindings to web engines (currently WebKit and WebEngine are supported). Next's magic touch is its live coding capability: we can develop a command from a REPL, compile the function and try the changes immediately, without restarting anything. Or just edit our init file and reload it into the current session. This flexibility comes for free thanks to the Common Lisp language, the experience with which we'd like to share too.
Next is usable as of today. It features ad-blocking (only basic for now, contributions are welcome), multi-selection, bookmarks, session persistence, Emacs and Vim-like shortcuts, history seen as a tree, fuzzy completion everywhere, basic git-forking and file management interfaces…
We are entering a new phase of development, with hopefully announcements that we can make public at the time of the conference :)
Speaker | Michal Čihař
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 13:40 - 13:55
Please note that this talk will now be given by Michal Čihař instead of Václav Zbránek.
You will learn how to localize your project easily and with little effort, the open-source way. No more repetitive work, no more manual work with translation files. Weblate is unique for its tight integration with version control. Set it up once and start engaging the community of translators. More languages translated means more happy users of your software. Be like openSUSE, Fedora, and many more, and speak your users' language now thanks to Weblate!
I will show you the main perks of Weblate and the setup of a project. If you have a project with an open repo and you want to start translating it, bring your git:// link, and we will set it up right on the spot. FOSDEM is a great time and place to found your translation community.
Speaker | Roberto Abdelkader Martínez Pérez
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 14:00 - 14:15
This talk is about Kapow!, an open source web framework for the shell developed by BBVA Innovation Labs. We will talk about the current development of the project, including an overview of Kapow!'s technology stack and the recent release of the first stable version.
The motivation behind the project was to create an adapter between the shell and the web, allowing users to expose command-line programs as HTTP services with a high degree of freedom and flexibility, without imposing any predefined behavior.
The project is based on an open specification.
Kapow! supports an increasing list of HTTP features, including forms, websockets and streaming. The architecture is based on exposing a private REST API through which the shell can interact with the incoming user HTTP requests.
Leveraging this architecture, an ecosystem of specialized tools is planned to help with common high-level tasks in Kapow! services.
For example:
- Shell commands that can interact with incoming web browser requests and render specialized web pages.
- Automatic mapping of shell command parameters to REST API through bash completion scripts.
Roberto will do some demos of creating Kapow! services from scratch (exposing nmap, tcpdump and pandoc as services).
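To make the adapter idea concrete without reproducing Kapow!'s actual syntax, here is a minimal conceptual sketch in Python: an HTTP handler that shells out to a command-line program and returns its output. Kapow! itself is far more flexible (routes, a private REST API, streaming), so treat this only as an illustration of the shell-to-web idea:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy shell-to-web adapter in the spirit of Kapow! (not Kapow! itself):
# every GET runs a fixed command and returns its output as the response body.
COMMAND = ["uptime"]  # any command-line program could be plugged in here

class ShellHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        result = subprocess.run(COMMAND, capture_output=True, timeout=10)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ShellHandler).serve_forever()
```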
Speaker | Kevin Jahns
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 14:20 - 14:35
Shared editing is the ability to collaboratively edit the same text in real-time. The market for shared editing solutions is fragmented. Once you choose a solution you will be locked into using a specific editor and a specific backend. Yjs is a data synchronization framework that aims to enable shared editing in all open-source editors using any networking stack.
Yjs is a data synchronization framework that is specifically designed for creating shared editing applications like Google Docs. The number of editors that Yjs supports is steadily growing. At this time we have implemented shared-editing support for six of the most prominent open-source rich-text editors, including Quill, ProseMirror, and CodeMirror. We are currently working on integrating Yjs into Gutenberg, the new WordPress editor.
Since Yjs is network agnostic and does not require a central authority to resolve sync conflicts, it is possible to use any networking stack to share updates to the collaborative document. We created an ecosystem of modules that handle data synchronization over, for example, WebRTC, the Dat protocol, IPFS, and traditional client-server connections via WebSockets.
In this lightning talk, we want to show the huge potential of Yjs that hopefully will make it the go-to solution for creating shared editing applications on the web.
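Yjs's actual sequence CRDTs are considerably more sophisticated, but the property described above (replicas that merge concurrent updates over any transport, with no central authority) can be sketched with the simplest state-based CRDT, a grow-only set:

```python
# A grow-only set: the simplest state-based CRDT. Yjs's text CRDTs are far
# more elaborate, but they share the property shown here: replicas can merge
# updates in any order and still converge, with no central server involved.
class GSet:
    def __init__(self):
        self.items = set()

    def add(self, item):           # a local edit
        self.items.add(item)

    def merge(self, other):        # the sync step: set union is commutative,
        self.items |= other.items  # associative and idempotent

a, b = GSet(), GSet()
a.add("edit-from-alice")
b.add("edit-from-bob")

# Exchange states over any transport (WebSockets, WebRTC, ...) and merge.
a.merge(b)
b.merge(a)
assert a.items == b.items  # both replicas converge to the same state
```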
Speaker | Ludovic Dubost
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 14:40 - 14:55
We'll show a real demonstration of how you can encrypt your data and collaborate with others in real-time using the CryptPad Open Source project.
This demonstration will include the real-time Wysiwyg, Text, Kanban, Spreadsheet, File storage and Teams features, allowing you to share your documents securely with your friends and co-workers.
The Internet business model is built on "surveillance capitalism": every day our data is used to target us with more and more invasive advertisements, and every day data is leaked by Internet businesses and cloud providers.
CryptPad uses end-to-end encryption to protect your data, while including innovative algorithms to allow collaboration between users. With CryptPad, the hosting provider cannot read your data, and if data leaks, hackers only get encrypted data which they cannot read without the keys.
Start ditching proprietary, privacy-invading cloud services for a free software, privacy-protecting alternative.
Come join the 20000 weekly users of the cryptpad.fr main instance and the 300 other instances available, including the German Pirate Party and C3 Wien instances.
Come join us in restoring our privacy.
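The essence of the end-to-end model is that encryption happens on the client, so the server only ever stores ciphertext. CryptPad implements this in JavaScript in the browser; the following is only a conceptual sketch using the third-party Python cryptography package, not CryptPad's actual scheme:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated and kept on the client side; the server never sees it.
key = Fernet.generate_key()
client = Fernet(key)

ciphertext = client.encrypt(b"meeting notes: ship the release on Friday")

# This is all a hosting provider (or a hacker who steals its disks) gets:
print(ciphertext[:32], b"...")

# Only someone holding the key can recover the document.
print(client.decrypt(ciphertext))
```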
Speaker | Stephan Schwichtenberg
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 15:00 - 15:15
Agenda
1) Current situation: complicated & incomplete threat models
2) Concepts worth looking into:
- data sovereignty
- named data networks
- zero trust
3) Our hands-on experience with the above
Protecting data today is a tedious task that involves many stakeholders and ties up resources.
The shift from on-premise to private/public cloud systems requires a careful inspection of an assumed threat model.
The application of a zero-trust model is one radical shift to authenticate and authorize at any given point in your
IT landscape, but most importantly it breaks with the assumption that we can design systems that are "safe and secure".
The talk will highlight the above-mentioned concepts and will give a brief outline of a new approach called Named Data Networking (NDN) and how this could improve the situation in terms of data sovereignty.
Speaker | Julio Merino
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 15:20 - 15:35
The Bazel build system sandboxes each action (e.g. each compiler invocation) to ensure the action only has access to declared inputs and that the action only generates the promised outputs. This ensures that the execution of each build tool is deterministic and not subject to system-wide state. Unfortunately, creating these sandboxes is costly, and every millisecond added to the runtime of each action has a large impact on total build time. Just consider that Bazel focuses on large-ish builds with thousands of actions in them: each little inefficiency quickly multiplies and can result in significant slowdowns, and developers always want faster build times. In this talk, I'll explain how Bazel implements sandboxing and I will cover a FUSE file system I've been developing, sandboxfs, to optimize this process. I'll go into the details of the file system, explain how it started as a Go project and was then rewritten in Rust, and then show some performance metrics.
Outline:
Speaker | Claude Warren
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 15:40 - 15:55
Bloom filters are a probabilistic data structure that tells us where things are not. They utilize one-way hash functions to build a probabilistic representation of an object. This talk will address how this structure can be used to provide an index into encrypted data that can be made publicly available with minimal risk.
The talk will cover how Bloom filters are constructed, the Flat Bloofi indexing implementation, how to take the properties to be indexed and create Bloom filters, and how to associate each Bloom filter with the encrypted object in the index.
The result is an extremely fast index that can retrieve data items containing partial keys.
After this talk participants will be able to provide search capabilities across a collection of encrypted objects.
Code examples will be provided.
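As a taste of the construction, here is a minimal Bloom filter sketch in Python, deriving the k hash positions from a one-way hash (SHA-256). It only illustrates the basic structure; the Flat Bloofi indexing discussed in the talk is considerably more refined:

```python
import hashlib

class BloomFilter:
    """Bit array plus k one-way hash functions: membership tests may yield
    false positives but never false negatives, so the filter can say
    definitively where things are *not*."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions by salting a one-way hash with an index.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Index searchable properties, then publish the filter next to the ciphertext:
# it reveals (probabilistically) which keys are present, not the plaintext.
bf = BloomFilter()
bf.add("surname:warren")
assert bf.might_contain("surname:warren")
assert not bf.might_contain("surname:kobeissi")  # almost certainly False
```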
Speaker | Nadim Kobeissi
---|---
Room | H.2215 (Ferrer)
Track | Lightning Talks
Time | 16:00 - 16:15
Verifpal is new software for verifying the security of cryptographic protocols. Building upon contemporary research in symbolic formal verification, Verifpal’s main aim is to appeal more to real-world practitioners, students and engineers without sacrificing comprehensive formal verification features. Verifpal represents a serious attempt at making the formal analysis of advanced cryptographic systems such as Signal and TLS 1.3 easier to achieve.
Contemporary research in symbolic formal verification has led to confirming security guarantees (as well as finding attacks) in secure channel protocols such as TLS and Signal. However, formal verification in general has not managed to significantly exit the academic bubble. Verifpal is new software for verifying the security of cryptographic protocols that aims to work better for real-world practitioners, students and engineers without sacrificing comprehensive formal verification features.
In order to achieve this, Verifpal introduces a new, intuitive language for modeling protocols that is easier to write and understand than the languages employed by existing tools. Its formal verification paradigm is also designed explicitly to provide protocol modeling that avoids user error. By modeling principals explicitly and with discrete states, Verifpal models are able to be written in a way that reflects how protocols are described in the real world. At the same time, Verifpal is able to model protocols under an active attacker with unbounded sessions and fresh values, and supports queries for advanced security properties such as forward secrecy or key compromise impersonation.
Verifpal has already been used to verify security properties for Signal, Scuttlebutt, TLS 1.3 and other protocols. It is a community-focused project, and available under a GPLv3 license.
An Intuitive Protocol Modeling Language:
The Verifpal language is meant to illustrate protocols close to how one may describe them in an informal conversation, while still being precise and expressive enough for formal modeling. Verifpal reasons about the protocol model with explicit principals: Alice and Bob exist and have independent states.
Modeling that Avoids User Error:
Verifpal does not allow users to define their own cryptographic primitives. Instead, it comes with built-in cryptographic functions — this is meant to remove the potential for users to define fundamental cryptographic operations incorrectly.
Easy to Understand Analysis Output:
When a contradiction is found for a query, the result is related in a readable format that ties the attack to a real-world scenario. This is done by using terminology to indicate how the attack could have been possible, such as through a man-in-the-middle on ephemeral keys.
Friendly and Integrated Software:
Verifpal comes with a Visual Studio Code extension that offers syntax highlighting and, soon, live query verification within Visual Studio Code, allowing developers to obtain insights on their model as they are writing it.
Speaker |
Teddy Hogeborn |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 16:20 - 16:35 |
Event link | View original entry |
Disk encryption is essential for physical computer security, but is seldom used due to the trouble of remembering and typing a password at every restart. We describe Mandos, a program which solves this problem, its security model, the underlying concepts of its design, and some of its evolution over the 10 years since its initial release.
Any security system must have a clear view of its intended threat model, i.e. what threats it is actually intended to protect against; the specific choices and tradeoffs made for Mandos will be explained. Another danger of security system design is the risk of non-use, i.e. that the system will not be used because of some real or perceived drawbacks, such as complexity. The deliberate design choices of Mandos, involving low-interaction, "invisible" and automatic features, will be covered. If possible, the many necessary changes made since the last FOSDEM talk in 2015 will also be described.
Speaker |
Dirk-Willem van Gulik |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 16:40 - 16:55 |
Event link | View original entry |
In this talk we will show you, in a practical, hands-on way, how you can secure your application, a small user community or an environment using industry best-of-breed security, fully self-supporting and without having to rely on a central certificate authority, big tech or other central powerhouses. As devolved and federated, or as central, as you want - you set the rules. Working for you, with your standard-issue iPhone, your router or your laptop out of the box.
Project Redwax produces industry best-practice crypto in a small package. Available today. If you know how to install the Apache webserver, you are almost there.
Project Redwax lets you download a set of simple, easy-to-deploy tools that capture and hard-code a lot of industry best practice and specialist PKI knowledge, so that they can be put into the hands of a much wider community than the few specialist industries currently served. It provides a non-centralised, interoperable, open-standard, open-source, fully federated trust network where participants are not required to ask permission and can be self-sufficient.
This presentation presents what is available today and our plans for taking this further: engaging with the wider open source community so that together we can support individuals, organisations and (small) companies in getting best-of-breed, distributed, peer-to-peer security, confidentiality and privacy without having to rely on central infrastructures.
Wouldn’t it be nice to be able to trust your own environment without having to trust a corporation or government? Wouldn’t it be nice to take the sting out of certificate management?
With some hands-on examples we introduce the audience to the advantages of running your own certificate authority for securing IoT devices in and around your home and establishing a trusted channel for exchanging information with your friends.
This project (and code) helps you to decentralize trust management so that the values security, confidentiality and privacy can be upheld in public infrastructure and private interactions. We strengthen the existing technologies and infrastructure by providing a modular, very simple and foremost practical set of tools to manage public key based trust infrastructures as currently used.
With support from NLNET, Project Redwax has made the code available under the Apache license from an infrastructure that is firmly rooted in Europe. And the good news: it is simple. If you know how to install the Apache webserver, you are almost there. We are working with the Unix/Linux community to get the code into standard distros and cloud-init scripts, so that these capabilities become available to a wider community of application developers.
This presentation presents what is available today and our plans for taking this further: engaging with the wider open source community so that together we can support individuals, organisations and (small) companies in getting best-of-breed, distributed, peer-to-peer security, confidentiality and privacy without having to rely on central infrastructures, rare knowledge or big interests.
Bio: Dirk-Willem van Gulik
During the startup years, Dirk-Willem van Gulik helped shape the world-wide web. He was one of the founders, and the first president, of the Apache Software Foundation, and worked on standards such as HTTP at the Internet Engineering Task Force. He has worked for the Joint Research Centre of the European Commission, the United Nations, telecommunications firms, the BBC and several satellite and space agencies, and founded several startups. He participated in international standards bodies such as the IETF and W3C on metadata, GIS, PKI, security, architecture and Internet standards. Dirk built the initial engineering team at Covalent, the first open source company, and was one of the founders of Asemantics, a leader in Enterprise Information Integration which helped make the Semantic Web a reality. He then initiated Joost.com, a peer-to-peer video platform, and built and led the team that created the world's first instant-play P2P viewer and a back-office system with user-profile-driven advert targeting and payment settlements. He was the Chief Technical Architect at the BBC, where he helped shape the audience-facing delivery platform Forge in time for the Olympics and made information security and compliance a core enabler for business processes. He currently works on several medical and privacy-intensive security projects with a heavy emphasis on architecture and governance. When not at work, he loves to sail, hang out at the makerspaceleiden.nl or play with his Lego.
Speaker |
Volker Krause |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 17:00 - 17:15 |
Event link | View original entry |
Getting your itinerary presented in a unified, well structured and always up to date fashion rather than as advertisement overloaded HTML emails or via countless vendor apps has become a standard feature of digital assistants such as the Google platform. While very useful and convenient, it comes at a heavy privacy cost. Besides sensitive information such as passport or credit card numbers, the correlation of travel data from a large pool of users exposes a lot about people's work, interests and relationships. Just not using such services is one way to escape this, or we build a privacy-respecting alternative ourselves!
Standing on the shoulders of KDE, Wikidata, Navitia, OpenStreetMap and a few other FOSS communities, we have been exploring over the past two years what it would take to build a free and privacy-respecting travel assistant, resulting in a number of building blocks and the "KDE Itinerary" application. In this talk we will look at what has been built, and how, and what can be done with this now. In particular we will review the different types of data digital travel assistants rely on, where we can get that data from, and what the impact on your privacy is.
The most obvious data source is your personal booking information. Extracting data from reservation documents is possible from a number of different input formats, such as emails, PDF files or Apple Wallet passes, considering structured annotations and barcodes, but also by using vendor-specific extractors for unstructured data. All of this is done locally on your own devices, without any online access.
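As an illustration of the structured-annotation case (a hypothetical sketch, not KDE Itinerary's actual C++ extractor), the kind of schema.org JSON-LD block some airlines embed in booking emails can be pulled out with nothing more than a local HTML parse; the email snippet below is made up:

```python
# Extract schema.org JSON-LD annotations from a booking email, offline.
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.reservations = []

    def handle_starttag(self, tag, attrs):
        self.in_jsonld = tag == "script" and ("type", "application/ld+json") in attrs

    def handle_data(self, data):
        if self.in_jsonld:
            self.reservations.append(json.loads(data))
            self.in_jsonld = False

email_html = """
<html><body><script type="application/ld+json">
{"@type": "FlightReservation", "reservationNumber": "XYZ123",
 "reservationFor": {"@type": "Flight", "flightNumber": "SN2626",
   "departureAirport": {"@type": "Airport", "iataCode": "BRU"}}}
</script></body></html>
"""

parser = JsonLdExtractor()
parser.feed(email_html)
for r in parser.reservations:
    print(r["@type"], r["reservationFor"]["flightNumber"])
```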
Reservation data is then augmented from open data sources such as Wikidata and OpenStreetMap to fill in often missing but crucial information such as timezones or geo coordinates of departure and arrival locations. And finally we need realtime traffic data as well, such as provided by Navitia as Open Data for ground-based transport.
We will of course also look at how features based on this data can be integrated into applications. While there is the KDE Itinerary mobile application presenting this information in a timeline view with access to the corresponding boarding passes or ticket tokens, the underlying building blocks are explicitly meant to be reusable for wider integration. This might be of particular interest for people working on e.g. email applications or digital assistants.
Should the author fail to show up to this presentation it might be that last year's fixes for the barcode scanners at the Brussels airport station didn't work after all ;-)
Speaker |
Timo Savola |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 17:20 - 17:35 |
Event link | View original entry |
This presentation is an introduction to an open source project I have been working on for five years.
Building on WebAssembly, Gate makes it possible to snapshot running programs and resume them in diverse environments: unlike with other snapshot-and-restore solutions, the snapshots are portable across CPU architectures and operating systems. Part of the solution is appropriate resource abstraction. The presentation includes a quick demonstration of migration of a program between x86-64 and ARM64 machines.
A subproject of Gate is a specialized WebAssembly compiler implemented in Go.
Gate is at the stage where a proof-of-concept has been implemented; it works. Its future would greatly benefit from wider community engagement, so I wish to present the project and its ideas to a technical user and developer audience.
Speaker |
Johannes Tigges |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 17:40 - 17:55 |
Event link | View original entry |
The pool next to the ocean: How to bring OpenSource skills to more people
OpenSource powers the world and is everywhere, with more and more enterprises and large companies understanding its value and the need to be a good OpenSource citizen.
However, not everyone in those enterprises has the skills to participate in OpenSource communities, feels ready to contribute something, or knows how to create and run a vibrant OpenSource community. I have observed that there are two distinct groups of people: one with an OSS background and the ability and willingness to operate in that domain, and one that will likely only use OSS, with little likelihood of contributing or participating.
Let's change that and build a bridge between those two groups while generating value for the enterprise making it more likely to receive support for this activity.
InnerSource, the application of OpenSource principles and practices within the enterprise, can be this bridge.
You'll learn about creating opportunities for people who haven't been exposed to OpenSource collaboration to learn about the OpenSource ways of collaboration in a safe environment within their organization by creating shared projects internally that follow OpenSource practices and principles.
You'll also learn how organizations can profit from cross-team/silo collaboration and knowledge exchange, and how their employees acquire very valuable skills that can facilitate the successful transition of those internal projects into OpenSource and the creation of vibrant communities around them.
This approach is successfully used by many enterprises, and I'm part of a community that has built and is building open-sourced training material for this.
Attend this talk if you want to learn how to deal with silo issues within your company, how to facilitate your company's transition of projects to OpenSource, or how to build up the skills to successfully interact with OpenSource projects. Also attend if you want to hear a bit about freely available training material explaining InnerSource concepts to people who haven't been involved in it yet.
Speaker |
Alasdair Kergon |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 18:00 - 18:15 |
Event link | View original entry |
This short talk will discuss a few of the difficulties of tracking and monitoring storage devices on linux systems today and discuss some projects that are trying to improve the situation.
Speaker |
Christophe de Dinechin |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 18:20 - 18:35 |
Event link | View original entry |
Programming is the art of turning ideas into code.
Ideas and code do not live in the same space. Any translation is lossy.
Concept programming is a cross-language approach that focuses on this translation process, and helps identify often overlooked classes of issues.
Ideas and code do not live in the same space. Consequently, any translation is lossy. But this loss is not immediately visible. For example, how does your programming language coerce you into coding a concept as simple as "maximum" or "list" in a way that is generally full of warts?
Concept programming is a cross-language approach that focuses on this translation process, and helps identify often overlooked classes of issues. It separates the "concept space" and the "code space", and focuses on how the mechanics in one space translate (or not) into the other.
It introduces a few core ideas:
Syntactic noise is the difference in look between code and concept. For example, in Lisp, you write (+ 1 2) rather than 1 + 2.
Semantic noise is the difference in behavior between code and concept. For example, in C, text is null-terminated.
Bandwidth is the amount of the concept space covered by the code. For example, the "+" operator has higher bandwidth in C++ than in C.
Signal/noise ratio is the amount of code that does not carry any useful concept. For example, curly braces and semicolons in C.
Speaker |
Omer Akram |
---|---|
Room | H.2215 (Ferrer) |
Track | Lightning Talks |
Time | 18:40 - 18:55 |
Event link | View original entry |
DeskConnD is a cross-platform, python based daemon that uses Crossbar and WAMP to enable end-to-end encrypted IPC over the network.
In this talk Omer Akram will present his new project, which he has been working on for the past year, mainly to make it easy for developers to add functionality to their apps that was previously cumbersome to implement; having no platform-specific dependencies makes the project cross-platform.
Combining proven technologies like WebSocket/WAMP and ZeroConf, DeskConnD allows different components of a distributed app on the local network to communicate securely, using the RPC and PubSub paradigms.
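For a flavor of those paradigms, here is a minimal sketch using the Autobahn|Python Component API against a Crossbar router; this is not DeskConnD's actual interface, and the realm, URIs and router address are made up:

```python
# A generic WAMP component: one RPC registration plus one PubSub publish.
from autobahn.asyncio.component import Component, run

component = Component(
    transports="ws://localhost:8080/ws",  # Crossbar router (assumed address)
    realm="deskconn",                     # hypothetical realm name
)

@component.register("org.example.brightness.set")
def set_brightness(level: int) -> str:
    # An RPC that any peer on the network can call through the router.
    return f"brightness set to {level}"

@component.on_join
async def joined(session, details):
    # PubSub: broadcast an event to every subscriber of the topic.
    session.publish("org.example.device.online", "desktop-1")

if __name__ == "__main__":
    run([component])  # blocks and reconnects automatically
```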
Speaker |
Mark Reinhold |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 10:30 - 11:15 |
Event link | View original entry |
A review of the past year in the life of the OpenJDK Community, and a look at what's ahead.
Speaker |
Andrew Haley |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 11:20 - 11:45 |
Event link | View original entry |
Project Loom, an OpenJDK project, is "intended to explore, incubate and deliver Java VM features and APIs built on top of them for the purpose of supporting easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform." These features include Lightweight Threads, delimited continuations, and tail-call elimination.
The speaker, a Project Loom team member, will describe the project in depth, in particular the gnarly details of how the coroutine and continuation scheduling mechanism works, and a new feature, Scoped Locals.
Speaker |
Thanos Stratikopoulos |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 11:50 - 12:15 |
Event link | View original entry |
The proliferation of heterogeneous hardware in recent years means that every system we program most likely includes a mix of computing elements, each with different hardware characteristics, enabling programmers to improve performance while reducing energy consumption. These new heterogeneous devices include multi-core CPUs, GPUs and FPGAs. This trend has been accompanied by changes in software development norms that do not necessarily favor programmers. A prime example is the two most popular heterogeneous programming languages, CUDA and OpenCL, which expose several low-level features in their APIs, making them difficult to use for non-expert users.
Instead of using low-level programming languages, developers in industry and academia tend to use higher-level, object-oriented programming languages, typically executed on managed runtime environments, such as Java, R, and Javascript. Although many programmers might expect that such programming languages would have already been adapted for transparent execution on heterogeneous hardware, the reality is that their support is either very limited or absent.
In this talk, we present TornadoVM (https://github.com/beehive-lab/TornadoVM), a heterogeneous programming framework for Java programs. TornadoVM co-exists with standard JVMs (e.g., OpenJDK) that implement the JVMCI. TornadoVM consists of three components: 1) a simple API for composing pipelines of existing Java methods, 2) an optimizing JIT compiler that extends the Graal compiler with hardware-aware optimizations that generate OpenCL C code, and 3) a runtime system that executes TornadoVM specific bytecodes, performs memory management, and schedules the code for execution on GPUs, multicore CPUs, and FPGAs. Essentially, TornadoVM is a “VM-in-a-VM” that can adapt execution completely dynamically and transparently to the user, always finding the highest-performing combination of hardware accelerators through dynamic reconfiguration.
Speaker |
Maurizio Cimadamore |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 12:20 - 13:00 |
Event link | View original entry |
Direct buffers are, to date, the only way to access foreign, off-heap memory. Despite their success, direct buffers suffer from some limitations (statefulness, lack of addressing space, and non-deterministic deallocation, to name a few) which make them a less-than-ideal choice under certain workloads. In this talk we paint the path to the future: a safe, supported and efficient foreign memory access API for Java. By providing a more targeted solution to the problem of accessing foreign memory, not only will developers be freed from the above limitations, they will also enjoy improved performance, as the new API is designed from the ground up with JIT optimizations in mind, and all without sacrificing memory access safety.
Speaker |
Mike Milinkovich |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 13:05 - 13:30 |
Event link | View original entry |
In September 2017 Oracle announced that it would be migrating governance of the Java EE platform to the Eclipse Foundation, the home of MicroProfile. Two years later Jakarta EE 8 shipped, signaling the successful completion of that move. As a result, Free Java has a new home for a significant piece of the Java ecosystem. A home which is purely open source, vendor neutral, and community led.
This talk will be about how the long and painful journey from Java EE to Jakarta EE unfolded. But more importantly it will focus on how the new Jakarta EE community works, and how there is a new, open, specification process for Java APIs (other than SE) that is available for the community. We are looking forward to welcoming many of those interested in Free Java to participate in driving new innovation in Java APIs for cloud and other exciting use cases.
Speaker |
Roman Kennke |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 13:35 - 14:00 |
Event link | View original entry |
Shenandoah GC landed in JDK12 about a year ago, giving OpenJDK another low-pause garbage collector. It has undergone substantial changes since then. Specifically, we have a new barrier scheme and have eliminated the extra forwarding pointer word per object, thus substantially reducing memory footprint. After giving a general introduction to the OpenJDK GC landscape and Shenandoah GC, this talk focuses on those recent changes in Shenandoah and what's in it for you.
Speaker |
Jie Kang |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 14:05 - 14:45 |
Event link | View original entry |
JDK Flight Recorder provides production time profiling and diagnostics via a compact events-based infrastructure inside the Hotspot JVM itself. JDK Mission Control is a stand-alone application that provides real-time monitoring information for Hotspot JVMs, as well as tools to read, automatically analyze and visualize flight recording data produced by JDK Flight Recorder.
When this talk is presented, JMC 7.1.0 has (hopefully) been out for a little while. This presentation talks about what is new and the roadmap for the upcoming JMC 8. We will also discuss recent changes in the project, such as the move to Skara. Towards the end we will demo how JDK Flight Recorder and the JMC core libraries can be used to diagnose applications deployed in container orchestration platforms.
Speaker |
Andrew Dinn Josh Matsuoka |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 14:50 - 15:15 |
Event link | View original entry |
The GraalVM project provides, among other options, a means to deliver Java programs as compact, self-contained, fast-startup native images. GraalVM has been moving from research to development for quite a few years now. However, it is only just beginning to be properly integrated with the latest OpenJDK releases, and there is still much to be done to get it fully productized and to improve usability and performance.
This talk will recount our joint experiences of trying to add new and/or improved capabilities to the GraalVM code base. Our story will stumble gracelessly from one pitfall to the next cock-up in the hope that by exposing and explaining our own history of lamentable error and occasional failure you will be able to avoid being doomed to repeat it.
We will provide a guide to getting started and building GraalVM; an overview of how the compiler, native image generator and other elements of the GraalVM toolkit operate; a map of what code sits where in the source tree and how it fits together; and tips for debugging the Graal compiler and native image generator -- all the tasks you will need to perform in order to attain a vantage point from which to change or add to the current functionality.
Speaker |
Nikola Grcevski |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 15:20 - 15:45 |
Event link | View original entry |
In this talk we'll explore ways that the JVM can reduce the object allocation rate of Java programs automatically by performing stack allocation of objects that are known to be local to a method, or in compiler terms, non-escaping. The discussion focuses on employing the escape analysis optimization in the OpenJDK Hotspot C2 compiler to determine which Java objects can be stack-allocated, and how this optimization can reduce pressure on the JVM garbage collectors.
We'll show some results on how various real-world applications can benefit from such optimizations and describe the methodology of how we prototyped this in OpenJDK. Our work is only at the prototype stage at this moment and we are looking for more data to understand how broadly applicable this optimization is. This work wouldn't be possible without free open source access to Java.
Speaker |
Stefan Johansson |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 15:50 - 16:15 |
Event link | View original entry |
G1 has been around for quite some time now and since JDK 9 it is the default garbage collector in OpenJDK. The community working on G1 is quite big and the contributions over the last few years have made a significant impact on the overall performance. This talk will focus on some of these features and how they have improved G1 in various ways. We will also take a brief look at what features we have lined up for the future.
Speaker |
Martin Doerr |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 16:20 - 16:45 |
Event link | View original entry |
Client compiler, server compiler, JVMCI, Graal ... What are we using today and how do they work together?
In this talk I'll give an overview of the just-in-time compilers included in OpenJDK and explain how to play with them.
I'll also address just-in-time compiler threads and resource-related issues.
Speaker |
Christoph Langer |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 16:50 - 17:15 |
Event link | View original entry |
One of the most prevalent, if not the most prevalent, exception types in Java is the NullPointerException. While Java set out to prevent the mistakes one can make when programming in languages like C/C++ by not exposing pointers in the Java language, the misleading term 'pointer' sneaked into this exception. Until now, NullPointerExceptions thrown by the runtime system didn't contain messages: all you had was a call stack and a line number. But in typical expressions and statements there are several dereferences where an NPE can occur in a single line.
We, some engineers in the SAP team, thought this could be helped by a little enhancement. The new NPE message gives precise information about the location and tries to explain what was going on when a null reference was encountered. Due to its prominent nature, it eventually became a JEP.
In my talk I'll demonstrate the improvements that come with this enhancement. I will lift the hood a little and provide a glance at its implementation details. And finally I'll say some words about the current status and share some ideas for further improvements in the area of exception messages.
Speaker |
Thomas Stüfe |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 17:20 - 17:45 |
Event link | View original entry |
When examining the memory footprint of a JVM process, the delta between Java heap usage and actual working set size can be surprisingly large. The JVM uses off-heap memory for a lot of things: thread stacks, compiler arenas, code heap, byte buffers, GC control... However, one of the largest consumers of off-heap memory can be class metadata. Class metadata are stored in Metaspace, which includes the Compressed Class Space.
The talk will explore what Metaspace actually is and what is stored there; describe the architecture of the Metaspace allocator and the Compressed Class Space; how it interacts with the GC; and how it is sized. We will highlight waste areas and demonstrate how to use jcmd to examine Metaspace internals.
The current implementation of the Metaspace allocator suffers from a number of shortcomings. These can manifest in excessive waste and a certain "clinginess", an unwillingness to let go of unused memory.
At SAP, we developed an improved version which is more frugal with memory and provides much improved elasticity. The second part of this talk will therefore concentrate on our new implementation. We will highlight the differences from the old architecture, demonstrate advantages and examine how it works.
Speaker |
Andrew Dinn |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 17:50 - 18:15 |
Event link | View original entry |
The OpenJDK Java Virtual Machine presents some interesting challenges when it comes to guarding against potential vulnerabilities. This talk will explain how dynamic class-loading, JIT compilation, speculative compilation and other aspects of the JVM's operation constitute a moving attack surface that poses some very different challenges to those found in other programs or runtimes.
This talk won't say anything about specific vulnerabilities, but it will identify a few areas of the OpenJDK JVM where some of these unique types of vulnerability have been identified and resolved. It may teach you some things you didn't know about the complexity of the JVM, and hopefully reassure you that the OpenJDK devs are very aware of what could possibly go wrong. Whether we have got it all right is left as a follow-up exercise for attendees.
Speaker |
Charles Nutter Thomas Enebo |
---|---|
Room | H.1302 (Depage) |
Track | Free Java |
Time | 18:20 - 19:00 |
Event link | View original entry |
Rubyists work from a command line, which makes JRuby startup time a critical concern. Traditionally, the JVM has not been optimized for startup, but that's changing. This talk will explore all available options for making a heavy runtime like JRuby start up quickly, from using class data caching services like Hotspot's CDS and OpenJ9's Quickstart to ahead-of-time compilation of JRuby using GraalVM's Native Image. We'll compare approaches and trade-offs.
Speaker |
Magnus Karlsson |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 10:30 - 11:10 |
Event link | View original entry |
Many people and companies are betting that cloud-native networking will be the preferred way of implementing network functions in an easy and scalable way. It is based around the tenets of modularity, high availability, scalability, low-overhead networking, and ease of deployment, and a number of companies such as Google have shown that it is really possible to achieve these properties with it. But the architectural basis of cloud-native is quite different from that of virtualization-based NFV; nevertheless, in many cases we continue to use the software packages that were designed for that drastically different architecture. The question is, how well does the current set of open source projects used in NFV work in a cloud-native environment, and what needs to change in them in order to realize the cloud-native vision?
In this presentation, I will define what I mean by cloud-native networking and from that derive the system requirements needed to realize that vision. Based on these requirements, we can deduce a number of basic architectural properties, features and services that are needed in the system to satisfy them. Then I will go through the most popular open source projects, such as Linux, DPDK and OVS, and see how they satisfy these architectural properties and features. The main contribution of this presentation will be to show what we need to work on within these SW packages in order to realize cloud-native networking. Or maybe we need completely new SW projects to be able to achieve this.
Speaker |
Sylvain Baubeau Sylvain Afchain |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 11:10 - 11:30 |
Event link | View original entry |
Skydive is an open source real-time network topology and protocols analyzer providing a comprehensive way of understanding what is happening in your network infrastructure.
Skydive is a toolbox to monitor, visualize and troubleshoot an infrastructure.
It first collects all the information about the physical and logical infrastructure: network interfaces, Linux and Openvswitch bridges, network namespaces, Docker/runc containers, Kubernetes objects, virtual machines and more. All these objects are stored in a graph to allow the operator to visualize and query the whole topology. On top of this, Skydive is able to inject, capture and analyze traffic at any point of this infrastructure, using various techniques such as AF_PACKET, eBPF, DPDK or sFlow samples. This makes use cases like the following possible:
• troubleshoot dropped packets
• get metrics and bandwidth about some specific traffic
• trigger alert on some abnormal traffic detection
• get latency on the whole path of a flow
• and a lot more...
Skydive is agnostic to any SDN, container orchestration engine or virtualization platform. That being said, Skydive has plugins for specific technologies such as Kubernetes/Istio, OpenStack, Openvswitch, Network Service Mesh, OpenContrail, VPP and more.
This presentation will present the architecture of Skydive and demonstrate some use cases through a demo.
Speaker |
Emma Foley Krzysztof Kepka |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 11:30 - 12:10 |
Event link | View original entry |
As CoSPs accelerate their adoption of SDN and NFV technologies, the increased need for metrics, performance measurement and benchmarking becomes a focus, to ensure the continued delivery of "best in class" services. As NFV environments have grown in size and complexity, the tools required to gain greater visibility into the NFVi need to continue to evolve to meet the requirements for manageability, serviceability and resiliency.
Using Collectd as a metrics collection tool, OPNFV Barometer monitors the performance of the NFVi resources and has the capability to expose these insights via open industry standard interfaces to analytics or MANO components for potential enforcement or corrective actions. Barometer works with related open source technologies and communities (collectd, DPDK, OpenStack, Prometheus, SAF, etc.) to provide numerous metrics and events that address various different use cases such as service healing, power optimization and ensuring application QoS.
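For a sense of how such metrics enter the pipeline, here is a bare-bones collectd read plugin using collectd's Python plugin API; the plugin name and the gauge it reports are invented, and real Barometer plugins read hardware and dataplane counters (DPDK stats, IPMI, RDT and so on) instead:

```python
# A minimal collectd read plugin. The "collectd" module only exists inside
# collectd itself; this file would be loaded via the python plugin, e.g.
# <Plugin python> Import "nfvi_demo" </Plugin> in collectd.conf.
import random

import collectd

def read():
    # Called once per interval; dispatch() hands the value to collectd,
    # which forwards it to whatever write plugins are configured.
    vl = collectd.Values(type="gauge")
    vl.plugin = "nfvi_demo"          # invented plugin name
    vl.type_instance = "fake_load"   # invented metric
    vl.dispatch(values=[random.random()])

collectd.register_read(read)
```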
Speaker |
Quentin Monnet |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 12:10 - 12:30 |
Event link | View original entry |
The Linux kernel networking capabilities have been undergoing major changes over the last years. At the heart of the performance gain, eBPF (extended Berkeley Packet Filter) and XDP (eXpress Data Path) have brought new possibilities in terms of tracing and network packet processing. eBPF is a trendy topic in the Linux world, and today it needs little introduction among the SDN and NFV community. But the technology is still under heavy development, bringing new features, more flexibility, and better performance to the users. This presentation is an update on the latest evolutions in the eBPF world!
Many of those changes occur directly inside the eBPF subsystem architecture. New program types are being added. Early constraints such as the maximal number of instructions for programs, or the unavailability of loops, are changing. The internals are improved with support for debug information (BTF) or 32-bit instructions. And many new mechanisms are implemented, such as global data support, the “BPF trampoline”, batched map operations, dynamic linking. Let's review all the latest trends in eBPF kernel development!
But beyond kernel code, eBPF has grown as a full ecosystem, with a variety of tools used to work with it, or to build upon it. Bpftool, a reference utility to manage eBPF programs, keeps evolving. The networking projects using eBPF keep growing in number (e.g. Katran, Suricata, Sysdig, Hubble, Libkefir) or in features (e.g. Cilium). Let's review (briefly) some of those projects that assert eBPF as one of the essential fast dataplane solutions in the Linux world.
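As a minimal taste of that ecosystem (a sketch assuming the BCC toolchain, a recent kernel, root privileges and an interface named eth0), this is roughly what loading and attaching the simplest possible XDP program looks like from the BCC Python frontend:

```python
# Load a trivial XDP program with BCC and attach it to an interface.
from bcc import BPF

prog = r"""
#include <uapi/linux/bpf.h>

int xdp_pass_all(struct xdp_md *ctx) {
    // Runs for every packet at the earliest point in the driver.
    return XDP_PASS;  // let everything through; XDP_DROP would drop it
}
"""

device = "eth0"  # assumption: adjust to a real interface
b = BPF(text=prog)                           # the verifier runs at load time
fn = b.load_func("xdp_pass_all", BPF.XDP)
b.attach_xdp(device, fn, 0)
try:
    print("XDP program attached to %s, Ctrl-C to detach" % device)
    b.trace_print()  # stream bpf_trace_printk output, if any
except KeyboardInterrupt:
    pass
finally:
    b.remove_xdp(device, 0)
```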
Speaker |
Michal Rostecki Swami Vasudevan |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 12:30 - 12:50 |
Event link | View original entry |
Michal Rostecki is a Software Engineer working at SUSE. He's working on Cilium, both upstream and on integrating it with openSUSE Kubic Project and SUSE CaaS Platform.
Swaminathan Vasudevan is a Software Engineer working at SUSE. He worked on upstream Neutron networking and is currently moving to Cilium, the openSUSE Kubic Project and SUSE CaaS Platform.
Cilium is an open source project which provides networking, security and load balancing for application services that are deployed using Linux container technologies by using the native eBPF technology in the Linux kernel.
In this presentation we will talk about:
- The evolution of BPF filters, explaining the advantages of eBPF filters and their use cases in Linux today, especially how Cilium networking utilizes eBPF filters to secure the Kubernetes workload with increased performance compared to legacy iptables.
- How Cilium uses SOCKMAP for layer 7 policy enforcement
- How Cilium integrates with Istio and handles L7 Network Policies with Envoy Proxies.
- The new features since the last release such as running Kubernetes cluster without kube-proxy, providing clusterwide NetworkPolicies, providing fully distributed networking and security observability platform for cloud native workloads etc.
Speaker |
Stephen Hemminger |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 12:50 - 13:10 |
Event link | View original entry |
One of the challenges of doing software network applications is observing the inputs, outputs, and what the application is doing with them. Linux provides a rich tool set with eBPF, but integrating this into a DPDK application is challenging. The DPDK's capture libraries are incomplete, which leads to lots of time spent debugging the tools. This talk addresses these issues, recommends solutions and proposes enhancements to make developers' lives easier.
The DPDK provides a limited form of packet capture, but it only works on a single interface, with no filtering and inaccurate timestamps.
I go over what packet capture does now, how it can be improved, and how it can be integrated with other tracing.
This talk is an extension of the talk (based on community feedback) given in 2019 at the DPDK userspace summit.
Speaker |
Ilias Apalodimas Lorenzo Bianconi |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 13:10 - 13:30 |
Event link | View original entry |
XDP support is an increasing trend on network devices. XDP's main goal is processing packets at the lowest point in the software stack, avoiding overheads. Memory recycling of received buffers, achieved through the in-kernel page_pool API, plays a fundamental role in the increased performance.
Adding XDP support to a driver can be non-trivial. In this talk we'll demonstrate how porting a standard ethernet driver (mvneta/netsec) to XDP and the page_pool API can boost performance.
Part of the page_pool evolution involves adding recycling support to the kernel's SKB stack and leveraging the increased performance attributes of the API.
Speaker |
Bryan Boreham |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 13:30 - 14:10 |
Event link | View original entry |
A tour of the internals of Weave Net, one of the most popular container networks: design challenges and lessons learned from five years in the wild, including Kubernetes integration and how CNI was born.
Weave Net is written in Go, using many Linux kernel features such as veths, bridges and iptables. Aimed at developers rather than network engineers, Weave Net tries to be self-configuring and find the best available transport between nodes. The control plane operates via gossip, with no central point of control.
Speaker |
Ahmed Abdelsalam Miroslaw Walukiewicz Filip Gschwandtner Daniel Bernier |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 14:10 - 15:00 |
Event link | View original entry |
Kubernetes (k8s) is currently the de-facto standard for container orchestration. However, k8s does not provide any solution for handling container networking. Instead, it offloads networking to third-party certified plugins called CNI plugins. Contiv-VPP is a k8s CNI plugin that offers fast I/O by leveraging the carrier-grade capabilities of VPP and DPDK in the dataplane.
The adoption of containers and microservices calls for IPv6 to provide addressing and reachability for such a massive number of endpoints. SRv6 leverages the IPv6 dataplane to provide overlay networking, traffic engineering, load balancing, network policy and service chaining.
In this talk, we present an SRv6-based solution for k8s networking. We will show how SRv6 is used for pod-to-pod communication, k8s services and service function chaining (SFC), and how SRv6 solves several k8s networking challenges. We will also show the integration of our solution in Contiv-VPP. This solution is the result of combined effort between Bell Canada, Cisco and Pantheon.tech.
Speaker |
Yolanda Robla Mota Ricardo Noriega |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 15:00 - 15:40 |
Event link | View original entry |
Blueprints in the Kubernetes-Native Infrastructure Blueprint Family leverage the best practices and tools from the Kubernetes community to declaratively manage edge computing stacks at scale and with a consistent, uniform user experience from the infrastructure up to the services and from developer environments to production environments on bare metal or on public cloud.
All blueprints in this family share the following characteristics:
Come and see the leading edge!
Launched in 2018, Akraino Edge Stack aims to create an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications.
As part of the Akraino project, the Kubernetes-Native Infrastructure blueprint family represents the reference edge stack managed as a declarative platform, where controllers monitor a system for deviations between the user-declared target state and reality and take corrective actions to reconcile reality with the declared target state.
KNI blueprints cover two different use cases:
Provider Access Edge: as part of the network transformation, telco operators are moving to run their radio access networks in a cloud-native manner. Technologies like vRAN will only be possible with a declarative approach, leveraging open networking best practices.
Industrial Edge: workloads such as IoT, AI/ML, AR/VR, and ultra-low-latency control will run at the edge. These workloads will require specific hardware such as GPUs and FPGAs. KNI can show how these needs can be met today.
With Kubernetes Native Infrastructure learn about the k8s way of managing infrastructure. By defining a declarative state, the edge administrator will be able to manage thousands of sites by following an innovative GitOps approach.
If you are interested in these exciting topics, don't miss the talk!
Speaker |
Nathan Skrzypczak Aloys Augustin |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 15:40 - 16:20 |
Event link | View original entry |
QUIC was introduced by Google to move the transport protocol implementation out of the kernel, and is now being standardized in the IETF. It provides both encryption and multiplexing, and will be the default transport for HTTP/3. In this talk we'll present the work we've done investigating whether QUIC would benefit from vectorized packet processing, the impact it has on performance and how it can be consumed by external applications.
VPP (vector packet processing) is a fast network data plane, part of the Linux Foundation FD.io project providing fast network functions on top of DPDK. It provides an optimized support of TCP & UDP allowing significant performance improvements. In this presentation, we'll discuss:
Speaker |
Ciara Loftus Kevin Laatz |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 16:20 - 16:40 |
Event link | View original entry |
"With its recent advancements, AFXDP is gaining popularity in the high performance packet processing space. As a result, existing frameworks for packet processing, such as DPDK, are integrating AFXDP support to provide more options for moving packets to user space applications. The challenge with such integration is that both AF_XDP and frameworks like DPDK have their own assumptions and constraints about such things as, for example, how to align or manage packet buffers, making the integration less straight forward than it might appear at first glance.
This talk takes a look at the usability of AFXDP pre-kernel v5.4, before diving into the recent challenges we encountered when integrating DPDK and AFXDP, and how we made changes (on both sides) to allow the two to work together in a much more seamless manner."
Speaker |
Bruce Richardson Harry van Haaren |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 16:40 - 17:20 |
Event link | View original entry |
Modern CPUs provide a wide variety of Single-Instruction-Multiple-Data (SIMD) instructions, or vector instructions, for operating on larger blocks of data than with regular instructions. Though thought of by many programmers primarily as instructions for doing calculations in parallel on arrays of data, these vector instructions can actually be used in other ways to accelerate packet processing applications. This talk goes through a number of examples in open-source projects, such as DPDK and OVS, where vector instructions have been used to boost performance significantly, and explains the general techniques used that can be applied to other applications.
The talk focuses on the work done on DPDK and OVS to leverage the SSE and AVX instruction sets for packet acceleration. It shows how the different tasks to be performed in those applications can be mapped to SIMD instructions, and presents general guidelines on how to think about packet processing work from a vectorization viewpoint. It also discusses some considerations in application design so as to allow the app to run with best performance on a variety of platforms, each of which may have different instruction sets available.
Speaker |
Benoît Ganne |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 17:20 - 17:40 |
Event link | View original entry |
While userspace networking has demonstrated great performance benefits, it does come with greater complexity than kernel networking.
In parallel, Remote Direct Memory Access (RDMA) was developed as an efficient way to move data in HPC and storage clusters with great success.
Key properties of this technology are also highly desirable for userspace networking: native integration with the operating system (OS), OS bypass and a very efficient software interface.
RDMA-capable network adapters are now enabling standard Ethernet networking functions through the RDMA interface, allowing userspace networking software such as VPP to achieve extreme performance while integrating transparently with the OS.
We'll present:
Speaker |
Max Rottenkolber |
---|---|
Room | H.1308 (Rolin) |
Track | Software Defined Networking |
Time | 17:40 - 18:00 |
Event link | View original entry |
Vita is a high-performance IPsec VPN gateway designed with medium and large network operators in mind. It is written in a high-level language (Lua) using the Snabb networking toolkit and achieves high performance via networking in userspace, i.e. bypassing the kernel network stack.
This talk will discuss Vita and how it was developed using Snabb. Topics include: fast software networking using a dynamic, high-level language; cryptographic ciphers implemented in software, accelerated by x86 extensions; modern cryptography; limiting complexity; multi-core scaling; YANG-enabled control planes; and minimalist NIC drivers.
Speaker |
Shane Kerr Pieter Lexis Peter van Dijk |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 10:30 - 10:35 |
Event link | View original entry |
Welcome to the DNS DevRoom
Speaker |
Graham Hayes |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 10:35 - 11:05 |
Event link | View original entry |
All major clouds have integrated DNS management these days, and OpenStack is one of them. We will run through the OpenStack DNS (Designate) project - how it works, why we laid it out the way we did, how you can use it, and how other OpenStack components can leverage it.
We will run through the general architecture of the project, and show how we can remain a simple control layer over multiple DNS servers and service providers.
We will show how you can run Designate stand alone, as a multi tenant API for managing DNS inside your company, and how you can use the ability to have multiple pools of servers available for multiple purposes.
Finally we will show the myriad of both OpenStack and other Open Source software integrations for DNS management, and DNS-01 ACME validation.
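As a hedged sketch of what driving Designate looks like from openstacksdk (the cloud name, zone and records below are examples, and error handling is omitted):

```python
# Create a zone and a recordset through the openstacksdk DNS proxy.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# Zones are per-project in Designate's multi-tenant model.
zone = conn.dns.create_zone(
    name="example.org.",
    email="hostmaster@example.org",
    ttl=3600,
)

# Recordsets hang off a zone; Designate propagates them to the pool's servers.
conn.dns.create_recordset(
    zone,
    name="www.example.org.",
    type="A",
    records=["192.0.2.10"],
    ttl=300,
)

for rs in conn.dns.recordsets(zone):
    print(rs.name, rs.type, rs.records)
```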
Speaker |
Renzo Davoli |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 11:10 - 11:40 |
Event link | View original entry |
Would you like a DNS server for IPv6 where adding a new node is as simple as typing in its name?
If the answer is yes, try HashDNS.
IPv6 autoconfiguration methods can give addresses to interfaces but do not provide any means of configuring the DNS. So autoconfiguration is suitable for clients. If a host has to act as a server, it must have a fully qualified domain name and the DNS service has to map its name to its IP address.
In the Internet of Threads scenario, the number of network nodes can be orders of magnitude higher than before, as each process or thread can be a node. The idea of hash-based IPv6 addresses is a viable solution to the problem of managing DNS resolution in IoTh environments.
The host part of an IPv6 address can be computed as the result of a hash function applied to the fully qualified domain name.
In this way it is possible to write a DNS server able to resolve the address of any hostname in a sub-domain, given the network prefix of that sub-domain.
The installation of a new node of the network (computer, namespace, IoTh process) is as simple as providing it with its IPv6 address (the one obtained by concatenating the network prefix and the host address computed by the hash function).
There is no need to change the configuration of the DNS.
Actually the installation of a new node (or its renaming) is even simpler than that. The host can use a DHCP service designed to get the right address from the DNS server given its fully qualified domain name.
So a system administrator has nothing more to do than assign the new node its name (they just have to baptize the new node).
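A small sketch of the scheme as described above (not the HashDNS code itself; the choice of hash and the prefix are assumptions): derive a 64-bit host part from the FQDN and append it to the sub-domain's /64 prefix.

```python
# Deterministic, name-derived IPv6 addresses: the DNS server needs only the
# network prefix to resolve any name in its sub-domain.
import hashlib
import ipaddress

def hash_host_part(fqdn):
    """64-bit interface identifier derived deterministically from the name."""
    digest = hashlib.sha256(fqdn.lower().encode("ascii")).digest()
    return int.from_bytes(digest[:8], "big")

def hash_address(prefix, fqdn):
    """Concatenate the /64 network prefix with the hashed host part."""
    net = ipaddress.IPv6Network(prefix)
    return net.network_address + hash_host_part(fqdn)

print(hash_address("2001:db8:1234::/64", "printer.iot.example.org"))
```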
Speaker |
Erwin Hoffmann (feh) |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 11:45 - 12:05 |
Event link | View original entry |
The fehQlibs and djbdnscurve6 both provide a DNS library that supports IPv6 LLU (link-local unicast) addresses. The inclusion and use of IPv6 LLU addresses is discussed. While the typical Unix /etc/resolv.conf is applied system-wide and its Microsoft Windows counterpart works per interface, here application-specific DNS settings can be used.
Overview:
1. Background and heritage on fehQlibs and djbdnscurve6
2. Application specific DNS resolver settings
3. Integration of IPv6 LLU addresses - benefits
4. Integration of IPv6 LLU addresses - recipe
5. Outlook and future challenges
Speaker |
Stéphane Bortzmeyer |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 12:10 - 12:30 |
Event link | View original entry |
Of course, encrypting DNS is necessary for privacy and security, as for every other Internet protocol. That's why DoT and DoH deployment is very important, so that users can safely go to a resolver they trust. Now, it is time to assess the technical compliance and performance of these trusted resolvers. We will talk about the things that could and should be tested against DoT and DoH servers and how to implement them. We will then discuss performance measurements, especially the opportunities brought by parallelism (both in DoT and DoH) and the challenges they create for measurements.
This talk will be inspired by the development of a tool which is, at this stage, in a very alpha state.
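Not the author's tool, which as noted is at an early stage, but a minimal example of the kind of probe discussed: send the same query over DoT and DoH with dnspython and time the answers (the Quad9 endpoints are just examples):

```python
# Compare a DoT and a DoH lookup of the same name, with rough timings.
import time

import dns.message
import dns.query

query = dns.message.make_query("fosdem.org", "AAAA")

start = time.monotonic()
dot_answer = dns.query.tls(query, "9.9.9.9", server_hostname="dns.quad9.net")
print(f"DoT: rcode={dot_answer.rcode()} in {time.monotonic() - start:.3f}s")

start = time.monotonic()
doh_answer = dns.query.https(query, "https://dns.quad9.net/dns-query")
print(f"DoH: rcode={doh_answer.rcode()} in {time.monotonic() - start:.3f}s")
```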
Speaker |
Ondřej Surý |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 12:35 - 13:05 |
Event link | View original entry |
BIND 9 consists of a huge and old codebase. In this talk, I would like to show you the tools that we use on a regular basis to improve and refactor the BIND 9 code and make it safer. I'll show examples of the various Google/LLVM sanitizers, cppcheck, LLVM scan-build, and semantic patching using Coccinelle.
Speaker |
Florian Obser |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 13:10 - 13:40 |
Event link | View original entry |
DNS is easy. You type fosdem.org in your browser's address bar, hit enter and you will be greeted by your favorite open-source event's start page. Actually...
We will introduce unwind(8) - an always-running, validating DNS recursive nameserver, answering queries on localhost (127.0.0.1). We will explain its privilege-separated design and show that it is secure to run this daemon by default. We will then show how its novel approach of observing changes in network location and actively probing the quality of the local network improve the user experience in DNS resolution. The focus will be on laptops that move through many networks, some good, some bad, some outright hostile.
We will compare unwind(8) to prior solutions and show how its design enables it to run without user intervention.
Speaker |
Leo Vandewoestijne |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 13:45 - 14:00 |
Event link | View original entry |
another approach in automating maintenance
Catalog zones are DNS zones containing dynamic configuration, or better said, "configuration data": the domain zones to be loaded or unloaded in your DNS daemon.
This already works fine in BIND, albeit slowly when running millions of domains. So I wish to add a section that would allow iterating over the relevant data.
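For illustration, a small sketch of how a member-zone entry of a catalog zone can be generated; this assumes BIND's convention of using the SHA-1 hash of the member zone name in wire format as the unique label, and "catalog.example." is a made-up catalog name:

```python
# Emit the PTR record that adds one member zone to a catalog zone.
import hashlib

import dns.name

def catalog_entry(member_zone, catalog="catalog.example."):
    wire = dns.name.from_text(member_zone).to_wire()
    unique = hashlib.sha1(wire).hexdigest()
    return f"{unique}.zones.{catalog} 0 IN PTR {member_zone}"

# Adding or removing a zone then becomes a dynamic update on the catalog zone.
print(catalog_entry("fosdem.org."))
```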
Speaker |
Edward Lewis |
---|---|
Room | H.1309 (Van Rijn) |
Track | DNS |
Time | 14:05 - 14:25 |
Event link | View original entry |
The DNS protocol has features that have grown to become liabilities. The query type "ANY" is one. Earlier this year a published RFC document described how a DNS server may respond to such queries while reducing the liability. But the document does not define a definitive means for a server to signal that it is differing from the original protocol. This presentation measures the impact of having no definitive means specified and examines the "fear, uncertainty, and doubt" of lacking explicit signals.
The "minimal ANY responses" RFC (Providing Minimal-Sized Responses to DNS Queries That Have QTYPE=ANY, a.k.a. RFC 8482) results in about 1% of the TLD nameservers indicating they are minimizing ANY responses.
That's (only) about 250 cases.
What is troubling is that there are about 9 different responses observed to indicate that the response is "minimized".
9 different ways in just 250 samples, "fuzzing" the protocol.
The moral of this tale is that "fuzzing" the protocol is worrisome. (Not that minimizing ANY is a bad thing.)
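A sketch of the kind of measurement involved: query a server for ANY and look at how, or whether, it signals minimization. RFC 8482 suggests a synthesized HINFO record, but as the talk observes, responses vary in practice (the target name and server below are examples):

```python
# Probe one server's behaviour for QTYPE=ANY using dnspython.
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("cloudflare.com", dns.rdatatype.ANY)
response = dns.query.udp(query, "1.1.1.1", timeout=5)

for rrset in response.answer:
    if rrset.rdtype == dns.rdatatype.HINFO:
        print("minimized ANY response (HINFO):", rrset)
    else:
        print("regular answer:", rrset)
```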
Speaker |
Nic Jansma |
---|---|
Room | H.1309 (Van Rijn) |
Track | Web Performance |
Time | 15:00 - 15:35 |
Event link | View original entry |
Boomerang is an open-source Real User Monitoring (RUM) JavaScript library used by thousands of websites to measure their visitor's experiences. The developers behind Boomerang take pride in building a reliable and performant third-party library that everyone can use without being concerned about its measurements affecting their site. We recently performed and shared an audit of Boomerang's performance, to help communicate its "cost of doing business", and in doing so we found several areas of code that we wanted to improve. We'll discuss how we performed the audit, some of the improvements we've made, how we're testing and validating our changes, and the real-time telemetry we capture for our library to ensure we're having as little of an impact as possible on the sites we're included on.
Boomerang is an open-source Real User Monitoring (RUM) JavaScript library used by thousands of websites to measure their visitor's experiences.
Boomerang runs on billions of page loads a day, either via the open-source library or as part of Akamai's mPulse RUM service. The developers behind Boomerang take pride in building a reliable and performant third-party library that everyone can use without being concerned about its measurements affecting their site.
Recently, we performed and shared an audit of Boomerang's performance, to help communicate the "cost of doing business" of including Boomerang on a page while it takes its measurements. In doing the audit, we found several areas of code that we wanted to improve and have been making continuous improvements ever since. We've taken ideas and contributions from the OSS community, and have built a Performance Lab that helps "lock in" our improvements by continuously measuring the metrics that are important to us.
We'll discuss how we performed the audit, some of the improvements we've made, how we're testing and validating our changes, and the real-time telemetry we capture on our library to ensure we're having as little of an impact as possible on the sites we're included on.
Speaker |
Dario Rossi |
---|---|
Room | H.1309 (Van Rijn) |
Track | Web Performance |
Time | 15:40 - 16:15 |
Event link | View original entry |
The World Wide Web is still among the most prominent Internet applications. While the Web landscape has been in perpetual movement since the very beginning, the last few years have witnessed some noteworthy proposals such as SPDY, HTTP/2 and QUIC, which profoundly reshape the application-layer protocol family. To measure the impact of such changes, going beyond the classic W3C notion of page load time, a number of Web performance metrics have been proposed (such as SpeedIndex, Above-The-Fold and variants). At the same time, there is still limited understanding of how these metrics correlate with user perception (e.g., user ratings or user-perceived page load time). In this talk, we discuss the state of the art in metrics and models for Web performance evaluation, and their correlation with user experience through several real-world studies. Additional information, software and datasets are available at https://webqoe.telecom-paristech.fr
Speaker |
Sia Karamalegos |
---|---|
Room | H.1309 (Van Rijn) |
Track | Web Performance |
Time | 16:20 - 16:55 |
Event link | View original entry |
We all love fonts. From Google Fonts to Typekit, Hoefler&Co and more, they give character and tone to our websites. The downside of fonts is that they can really slow down our page loads. In this talk we'll learn about common pitfalls like critical request depth and how to use resource hints to play tricks with latency to load web applications faster. We'll walk through a network profile to understand what's going on in the browser and how to make it faster.
Speaker |
Robin Marx |
---|---|
Room | H.1309 (Van Rijn) |
Track | Web Performance |
Time | 17:00 - 17:35 |
Event link | View original entry |
Come learn about how browsers try to guess in what order web page resources should be loaded and how servers use that information to often (accidentally) make your web page slower instead.
We look at what resource prioritization is, how it's often implemented terribly in modern HTTP/2 stacks and how we're trying to fix it in QUIC and HTTP/3.
We use clear visualizations and images to help explain the nuances in this complex topic and also muse a bit on whether prioritization actually has that large an impact on web performance.
HTTP/2 started the move from multiple parallel TCP connections to a single underlying pipe. QUIC and HTTP/3 continue that trend.
While this reduces the connection overhead and lets congestion controllers do their work, it also means we no longer send data in a truly parallel fashion.
As such, we need to be careful about how exactly we send our resource data, as some files are more important than others to achieve good web performance.
To help regulate this, HTTP/2 introduced a complex prioritization mechanism. Browsers use complex heuristics to try and estimate the importance of a resource and, with varying success, communicate their preferences to the servers.
It has however become clear that this scheme does not work well in practice. Between server implementation bugs, questionable browser choices and bufferbloat in caches and network setups, HTTP/2 prioritization is sometimes more a liability than a useful feature.
For this reason, this feature is being completely reworked in HTTP/3 over QUIC. However, this opens a whole new can of worms.
One of QUIC's main features for improving performance over TCP is that it removes "head-of-line blocking": if one resource suffers packet loss, others can still make progress.
That is... if there are other resources in progress! What performs well on lossy links turns out to be exactly what you want to prevent on high-speed connections.
Along the way, we also discuss existing options for web developers to influence the browser's heuristics and server behaviour (such as resource hints (e.g., preload) and the upcoming priority hints).
Finally, we ask how we got into this terrible state of affairs to begin with: if people made so many mistakes implementing HTTP/2 prioritization, why didn't anyone really notice until 3 years later?
Could it be its impact on web performance is actually limited? Or have we just not seen its full potential yet?
We make this complex topic approachable with plenty of visualizations and animations.
The content is mainly based on our own research (and papers) and that of others in the web community, such as Patrick Meenan and Andy Davies.
Speaker |
Nicolás Peña Moreno |
---|---|
Room | H.1309 (Van Rijn) |
Track | Web Performance |
Time | 17:40 - 18:15 |
Event link | View original entry |
Adding new web performance APIs to the web is a complex process. In this talk, I'll go over the steps we went through to ship the Element Timing API in Chromium, which enables measuring rendering timing of image and text content. You'll learn about the process to ship an API exposing performance information to web developers. There were many steps involved in the process: engaging with developers and other browser vendors, brainstorming, privacy and security reviews, Origin Trials, posting an Intent, and addressing questions and ideas after the API shipped.
Speaker |
Stefan Burnicki Nils Kuhn |
---|---|
Room | H.1309 (Van Rijn) |
Track | Web Performance |
Time | 18:20 - 18:55 |
Event link | View original entry |
Keeping track of and caring about web performance is hard with constantly changing standards, improving browsers, frameworks and devices.
It gets even harder when you develop a tool that has to meet these changing requirements.
Eight years ago, as an IT service provider, we were faced with the task of permanently monitoring the performance of one of the largest e-commerce platforms. After initially using WebPagetest, we quickly needed to develop our own features.
What started as minor extensions became a separate project over time.
In this talk, we would like to take you on the journey we have taken developing OpenSpeedMonitor. You will hear about some unexpected challenges, what we learned the hard way and why we would have failed years ago had we not decided to develop FLOSS.
Speaker |
Tsvetan Usunov |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 10:30 - 10:50 |
Event link | View original entry |
We live in exciting times. It has never been so easy to design and produce electronic devices as it is today.
This really unleashes people's creativity.
The Open Source Hardware movement helps people to study, modify, improve and share designs and knowledge.
Today we have FOSS CAD tools like KiCad to design our electronic boards.
There are multiple choices for manufacturing PCBs, even in small quantities.
There are lots of places to source components at low cost.
We can set up a small electronic assembly/production "factory" at home for less than EUR 1000.
I will try to explain every step from the design to the final product, and will demonstrate a Do-It-Yourself oven, a solder paste printer, and manual pick-and-place tools which could be used for production.
Speaker |
Urban Bruhin |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 10:55 - 11:15 |
Event link | View original entry |
An overview about what's new in LibrePCB since the last presentation at FOSDEM 2018, and a short live demonstration to see LibrePCB in action.
Speaker |
Yorik van Havre Brad Collette |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 11:20 - 11:50 |
Event link | View original entry |
A walk through the different ways in which people from different areas and backgrounds use the same application (FreeCAD), and the impact this has on their workflows, and even on FreeCAD development.
The FreeCAD project gathers a community of developers and users coming from a very large array of specialties: Makers, mechanical engineers, civil engineers, electronics engineers, architects, opticians, graphic designers, etc. All these people using the same software is a unique opportunity to explore and build cross-discipline workflows, and have people coming from one field learn unusual ways from other fields. This constant interchange of paradigms also influences FreeCAD development itself, and connects it to other fields and applications too, to create larger ecosystems. In this talk, we will show some examples of how this happens in different areas.
Speaker |
Holger Vogt |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 11:55 - 12:15 |
Event link | View original entry |
An update on the development activities leading to ngspice-32 will be presented. Its interface to KiCad has been extended, PSPICE device model compatibility and OpAmp convergence are improved, and several bugs have been fixed.
The VBIC bipolar model and the VDMOS power MOS model now incorporate the self-heating effect.
This leads to the second part of the talk: ngspice may very well be used to simulate thermal device behavior. Heat generation, transport and temperatures are translated into electrical signals. Thus we simulate two circuits: the electrical circuit with its power losses, and the thermal circuit with any resulting device heating, its feedback on the electrical behavior, and the external cooling measures that need to be provided. Some circuit examples will be given.
Speaker |
Adam Urbanczyk |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 12:20 - 12:40 |
Event link | View original entry |
CadQuery (CQ) [1] is a Python library for building parametric 3D models. The overarching design goal is to be extremely fluent and as close as possible to the design intent. CQ is based on the open source CAD kernel from OpenCascade and therefore offers industry-standard B-Rep modeling capabilities, and allows exporting to lossless formats such as STEP as well as lossy ones such as STL. Originally it used Python bindings based on FreeCAD [2], but recently we switched to PythonOCC [3] to be more flexible and have full access to the underlying CAD kernel capabilities.
In the talk I will summarize the current status of the CQ project, show some interesting usage examples and discuss newly implemented features. Furthermore I will elaborate on the future plans of the core development team and touch on some of the challenges of maintaining a project such as CQ. I will also present a fairly new addition to the CQ ecosystem – CQ-editor [4]. It is a Python/PyQt5-based lightweight cross-platform GUI editor that allows you to quickly develop and preview CQ 3D models. It also offers graphical debugging and CQ stack introspection capabilities, which dramatically lowers the entry barrier for trying out and using CQ. (A minimal sketch follows the references below.)
References
[1] https://github.com/CadQuery/cadquery
[2] https://www.freecadweb.org
[3] https://github.com/tpaviot/pythonocc-core
[4] https://github.com/CadQuery/CQ-editor
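To give a feel for the fluent style described above, here is a minimal sketch assuming the CadQuery 2.x API; the dimensions and output file name are made up.

```python
# Minimal CadQuery sketch: a plate with counterbored corner holes.
# Dimensions and the output file name are hypothetical.
import cadquery as cq

result = (
    cq.Workplane("XY")
    .box(80, 60, 10)                     # base plate
    .faces(">Z").workplane()             # work on the top face
    .rect(60, 40, forConstruction=True)  # construction rectangle for hole centers
    .vertices()                          # ...its four corners
    .cboreHole(3.0, 6.0, 2.5)            # counterbored hole at each corner
)

cq.exporters.export(result, "plate.step")  # lossless B-Rep export via OpenCascade
```

Each chained call refines the model, which is what keeps such a script close to the design intent.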
Speaker |
Wayne Stambaugh |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 12:45 - 13:15 |
Event link | View original entry |
I will talk about KiCad's role in the Open Hardware design movement and how it is remarkably similar to the early days of the Free, Libre, Open Source Software (FLOSS) movement and what it means for the future of Open Hardware.
Speaker |
Mario Behling |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 13:20 - 13:40 |
Event link | View original entry |
In this talk we will cover the development path of the Pocket Science Lab (PSLab) board from version one in 2014 to today and outline how we use tools like KiCad to bring the device to large scale production. We will also share some major issues that we solved to get the device manufacturing ready and challenges that lie ahead of us, like ensuring thorough device testing at production. The goal of Pocket Science Lab is to create an Open Source hardware device (open on all layers) and software applications that can be used for experiments. The tiny pocket lab provides an array of instruments for doing science and engineering experiments. It provides the functions of numerous measurement tools including an oscilloscope, a waveform generator, a frequency counter, a programmable voltage and current source, and even a component to control robots with up to four servos.
Speaker |
Marius Kintel |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 13:45 - 14:05 |
Event link | View original entry |
Reflecting on OpenSCAD's 10 years of history and what we've learned and discovered along the way. Discussion on opportunities and potential avenues forward, and some stories from the trenches.
Speaker |
Lasse Mönch |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 14:10 - 14:20 |
Event link | View original entry |
Incorporating new components into PCBs is a difficult task that often requires reading multiple datasheets and creating prototypes to get it right. The funny thing is: every engineer needs to re-read reference designs, even though there are tens of thousands of designs with new components documented and available on GitHub. The reason: it is almost impossible to find a relevant project. The solution? Instead of using GitHub search, which only retrieves files by filename, our approach creates a local database that takes the search results from GitHub and then parses the components used inside the PCB designs to index them. That way, you can easily search for a component and get the most relevant designs as a reference.
This talk will give an overview of the software that was created, discuss the difficulties that were overcome, and point out the potential for improvement in future work.
We have currently indexed 8,000 KiCad projects on GitHub containing 37,000 electronic components as a proof of concept. The project was kindly supported by AISLER with a server instance for a prototype of the search engine, which can be accessed at https://search-dev.aisler.net
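As a rough illustration of the indexing idea (not the project's actual code), here is a sketch that scans KiCad legacy .sch schematics, whose component stanzas contain an `L <library:part> <reference>` line, and counts the parts used per repository; the directory name is hypothetical.

```python
# Hedged sketch of the indexing idea: count components across KiCad
# legacy .sch schematics in a checked-out repository.
import re
from collections import Counter
from pathlib import Path

# In legacy schematics a component stanza includes a line like "L Device:R R1".
PART_RE = re.compile(r"^L\s+(\S+)\s+\S+", re.MULTILINE)

def index_components(repo_dir: str) -> Counter:
    counts = Counter()
    for sch in Path(repo_dir).rglob("*.sch"):
        counts.update(PART_RE.findall(sch.read_text(errors="ignore")))
    return counts

if __name__ == "__main__":
    print(index_components("some-kicad-repo").most_common(10))
```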
Speaker |
Kjell Morgenstern |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 14:25 - 14:45 |
Event link | View original entry |
Making electronics accessible to the broad public was made possible mainly by Arduino, Raspberry Pi and, last but not least, Fritzing. Back in 2009, it was a pain to get from loose wiring on a breadboard to a PCB. Fritzing was the first to come up with a unique breadboard view and a simple-to-use PCB layout. Fast forward 10 years to FOSDEM 2019: Fritzing was in a major crisis. Despite well over 200,000 users, thousands of downloads per day and an enthusiastic community, development had stalled. It has now been rebooted, and the project is back to gaining momentum. So what has happened between last year and this year?
This talk will give a rough introduction to Fritzing and its ecosystem, including how we overcame the problems, learned from our mistakes and how we plan to keep improving Fritzing in the future.
Speaker |
Alexandre Halbach |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 14:50 - 15:10 |
Event link | View original entry |
This presentation describes sparselizard: a fast, general, robust and user-friendly finite element C++ library with high potential for low-maintenance integration into open-source simulation tools. A large range of validated examples demonstrates that the library can simulate heavily nonlinear multiphysics problems involving at least mechanical, fluid, electric, magnetic and electromagnetic physics. Its robustness, speed and user-friendliness are also demonstrated.
Speaker |
Alexander Malyshev |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 15:15 - 15:45 |
Event link | View original entry |
Open Cascade Technology is a framework for B-Rep modeling. The presentation highlights key features available in the toolkits.
The following topics are covered:
- What is OCCT?
- Development facts
- OCCT architecture
- Modeling algorithms
- Visualization
- Data exchange
- Versions history
Speaker |
Seth Hillbrand |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 15:50 - 16:10 |
Event link | View original entry |
STEP files generated by many FOSS modelling programs are much larger than those generated by the equivalent commercial software packages. This does not meaningfully matter for most files. However, for libraries of STEP files containing thousands of models, the extra bytes in downloads and storage can be problematic.
This talk introduces a single-purpose library and command-line utility called STEPReduce that removes the superfluous elements, replacing them with their equivalent, existing in-file references. I will show a 50% reduction in file size for both compressed and uncompressed STEP files across the full KiCad STEP library. I will also show the resulting improvement in complex model load times using the industry-standard OpenCascade import routines. Finally, I will show the utility's routines for verifying solid-model equivalence between compressed and uncompressed files.
Please note that this talk replaces one entitled "News from gEDA/gaf" that was due to have been given by Roland Lutz, who has sent his apologies but is now unable to attend as he has fallen ill. We wish him a speedy recovery.
Speaker |
Christophe Geuzaine |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 16:15 - 16:35 |
Event link | View original entry |
Gmsh (http://gmsh.info) is an open source finite element mesh generator with built-in pre- and post-processing facilities. Under continuous development for the last two decades, it has become the de facto standard for open source finite element mesh generation, with a large user community in both academia and industry. In this talk I will present an overview of Gmsh, and highlight recent developments including the support for constructive solid geometry, new robust and parallel meshing algorithms, flexible solver integration and a new multi-language Application Programming Interface in C++, C, Python and Julia.
Speaker |
Sebastian Pichelhofer |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 16:40 - 17:00 |
Event link | View original entry |
The presentation will give a brief overview of the project's history and lessons learned during the course of developing a high-tech camera device as a community project. We also want to demo and explain the produced hardware, enclosures and sample footage, then look at the challenges still ahead. The last 5 minutes are reserved for Q&A.
Speaker |
Lukas Kramer |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 17:05 - 17:25 |
Event link | View original entry |
This talk covers my motivation for starting a new EDA package in 2016 and the main ideas behind horizon as well as what has changed in the last year. I'll also go into my short- and long-term plans for the project.
Horizon EDA is a from-scratch EDA package with focus on useful parts management, rule-driven design and good usability. It has already proven its suitability for medium-complexity projects in the board design for my master thesis and in various hobby projects.
Speaker |
David Wentzlaff |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 17:30 - 17:50 |
Event link | View original entry |
As contemporary industrial ASIC designs have reached hundreds of billions transistor count, EDA tools must have the scalability to handle such large designs. However, few open-source RTL designs reflect the scale that industrial ASICs have reached. In this talk, we will present OpenPiton, a scalable, tiled manycore design that can reach as many as 65,536 cores in a single chip, and up to 500 million cores on a multi-chip design. The modularity and scalability of the OpenPiton design can enable EDA tool developers to test their tools' functionality at contemporary scales and adapt their development for future larger designs. With its many configurability options, extensive scalability, and heterogeneity, the OpenPiton platform is well placed to supercharge open-source EDA tool development and pave the way for a completely open-source ASIC synthesis and back-end flow tested using open-source designs.
Title:
OpenPiton: An Open-Source Framework for EDA Tool Development
Speaker: Prof. David Wentzlaff (Princeton University)
Speaker bio:
David Wentzlaff is an associate professor of electrical engineering at Princeton University. Wentzlaff's research has earned several awards, among them an NSF CAREER award, DARPA Young Faculty Award, AFOSR Young Investigator Prize, induction into the MICRO Hall of Fame, and the ASPLOS WACI Test-of-Time Award. He received his M.S. and Ph.D. from MIT and received a B.S. in electrical engineering from the University of Illinois at Urbana-Champaign. He was Lead Architect and Founder of Tilera Corporation, a multicore chip manufacturer now owned by Mellanox. David's current research interests include how to create manycore microprocessors customized specifically for Cloud computing environments, how to design computer architectures in a post Moore’s Law world, and how to reduce the impact of computing on the environment by optimizing computer architecture for fully biodegradable substrates. Many of the research projects created by Wentzlaff’s group have been open-sourced including the PriME simulator, OpenPiton, and PRGA.
Link to any hardware / code / slides for the talk:
https://parallel.princeton.edu/openpiton/
https://github.com/PrincetonUniversity/openpiton
Speaker |
Eric Herman Kendrick Shaw Stephanie Medlock |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 17:55 - 18:15 |
Event link | View original entry |
The three main contributors to the OpenElectronicsLab projects started out as relative novices. The wealth of online resources and some trial and error open the doors to the world of hardware design.
This talk will reflect on what lowered the barriers, the insights gained, and what needed to be done to handle things that turned out to be harder than expected, and will aim to encourage hesitant novices to get started designing their own hardware.
Speaker |
Wolfgang Bangerth |
---|---|
Room | H.2213 |
Track | Open Source Computer Aided Modeling and Design |
Time | 18:20 - 18:50 |
Event link | View original entry |
The finite element method has been the method of choice for simulating the deformation of solids as well as the flow of many kinds of fluids for nearly 70 years now. In the case of solids, it provides a general framework to describe how a body reacts to external stimuli by modeling how deformation affects the internally stored energy. While most software that implements the method used to be homegrown for a particular purpose, the 2000s have seen the emergence of large, professionally developed open source software libraries that provide a broad range of functionality and make the implementation of such codes straightforward. I will give an overview of one of these libraries, deal.II, and how and where it is used.
Speaker |
Frédéric Descamps Ian Gilfillan |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 10:30 - 10:40 |
Event link | View original entry |
Welcome to the FOSDEM MySQL, MariaDB & Friends Devroom 2020
Speaker |
Peter Zaitsev |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 10:40 - 11:00 |
Event link | View original entry |
MySQL 8 and MariaDB 10.4 are the latest major versions of MySQL and MariaDB. While MariaDB started as a slightly different MySQL variant, it has grown into a very different database platform that diverges further with every release.
In this presentation, we will look into the differences between MySQL and MariaDB in the core areas such as SQL features, query optimizations, replication, storage engines, and security as well as discuss unique features and capabilities MySQL 8 and MariaDB 10.4 offers compared to each other.
Speaker |
Alkin Tezuysal |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 11:10 - 11:30 |
Event link | View original entry |
In this talk, we'll walk through RocksDB technology and look into areas where MyRocks is a good fit compared to other engines such as InnoDB. We will go over the internals, benchmarks, and tuning of the MyRocks engine. We also aim to explore the benefits of using MyRocks within the MySQL ecosystem. Attendees will come away with an up-to-date picture of tool development and integration around MyRocks within MySQL.
Speaker |
Sveta Smirnova |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 11:40 - 12:00 |
Event link | View original entry |
It is common knowledge that built-in asynchronous master-master (active-active) replication is not safe. I remember times when the official MySQL User Reference Manual stated that such an installation is not recommended for production use. Some experts repeat this claim even now.
While this statement is generally true, I worked with thousands of shops that successfully avoided asynchronous replication limitations in active-active setups.
In this talk, I will show how they did it, demonstrate situations when asynchronous master-master replication is the best possible high availability option and beats such solutions as Galera or InnoDB Clusters. I will also cover common mistakes, leading to disasters.
Speaker |
Jean-François Gagné |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 12:10 - 12:30 |
Event link | View original entry |
Have you ever needed to get some additional write throughput from MySQL? If yes, you probably found that setting sync_binlog to 0 (and trx_commit to 2) gives you an extra performance boost. Like all such easy optimisations, it comes at a cost. This talk explains how this tuning works, presents its consequences and makes recommendations to avoid them. This will bring us to the details of how MySQL commits transactions and how those are replicated to slaves. Come to this talk to learn how to get the benefit of this tuning the right way and to learn some replication internals.
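For readers who want to try the trade-off themselves, here is a minimal sketch using the standard mysql.connector package; the connection details are placeholders, while the two server variables are exactly the ones the talk discusses.

```python
# Sketch: toggling the binlog/redo durability settings discussed above.
# Host/user/password are placeholders; run with appropriate privileges.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

# Fully durable (the safe default): fsync binlog and redo log at every commit.
cur.execute("SET GLOBAL sync_binlog = 1")
cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 1")

# The "fast but risky" tuning: the OS buffers binlog writes and the redo log
# is flushed roughly once per second, so a crash can lose transactions or
# leave replicas diverged -- exactly the consequences the talk covers.
cur.execute("SET GLOBAL sync_binlog = 0")
cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 2")
```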
Speaker |
Hrvoje Matijakovic |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 12:40 - 13:00 |
Event link | View original entry |
MariaDB/MySQL/Percona Server provide some features in this space, but currently there isn't one product that covers all the needs (at least not available as FOSS).
This talk will provide an overview of Data-at-Rest-Encryption features in MySQL, MariaDB and Percona Server for MySQL, their availability across versions, and status (experimental/GA).
Speaker |
Nick Vyzas |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 13:10 - 13:30 |
Event link | View original entry |
ProxySQL, the high performance, high availability, protocol-aware proxy for MySQL is now GA in version 2.0. This version introduces several new features, like causal reads using GTID, better support for AWS Aurora, native support for Galera Cluster, LDAP authentication and SSL for client connections.
This session provides an overview of the most important new features.
Speaker |
Matthias C Ivan Groenewold |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 13:40 - 14:00 |
Event link | View original entry |
SELinux (Security-Enhanced Linux) provides an enhanced security mechanism for more advanced access control and auditing. It allows your application software and your system users to access only the resources they have been preconfigured to allow. Of course, when you want to move your data or log files to a non-standard location, these policies will stop MySQL from starting.
The easy way out is obviously to set SELinux to disabled or permissive. But someone once said: "Every time you disable SELinux, a kitten dies". We'll show you a few ways to find out whether it actually is SELinux that is blocking you, and how to update the policies to properly keep your system secured.
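As one possible illustration of that workflow (not necessarily the speakers' exact commands), the usual recipe for a relocated datadir looks like this; the path is hypothetical and the ausearch/semanage/restorecon tools must be installed.

```python
# Sketch: diagnose and fix an SELinux denial for a non-standard MySQL datadir.
import subprocess

DATADIR = "/data/mysql"  # hypothetical non-standard datadir

# 1. Was it really SELinux? Look for recent AVC denials from mysqld.
subprocess.run(["ausearch", "-m", "AVC", "-ts", "recent", "-c", "mysqld"])

# 2. Teach the policy that the new path holds MySQL data files.
subprocess.run(["semanage", "fcontext", "-a", "-t", "mysqld_db_t",
                f"{DATADIR}(/.*)?"], check=True)

# 3. Relabel the existing files to match the new rule.
subprocess.run(["restorecon", "-Rv", DATADIR], check=True)
```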
Speaker |
Sami Ahlroos |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 14:10 - 14:30 |
Event link | View original entry |
Running databases in Kubernetes has come a long way.
Focusing on MySQL, we will explore the challenges and issues of running production databases in Kubernetes. We'll look at the opportunities and benefits of running in Kubernetes too.
While rolling out a database is easy enough, things can get interesting when production tasks are undertaken.
How do you achieve scaling – whether that's scaling up or down? How do you know that your latest backup will restore safely?
We will also take a look at an open source solution for monitoring your database deployments, adding support for Kubernetes as a robust production environment.
Outline:
- Introduction
- Installing MySQL in Kubernetes
- Scaling up, scaling down
- Backup, restore, verification
- An open source monitoring solution
- What could possibly go wrong?
Takeaways:
This presentation should encourage the audience to embrace the possibilities of running production databases on Kubernetes, and will help attendees understand the dos and don'ts of such a deployment.
Speaker |
Marko Mäkelä |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 14:40 - 15:00 |
Event link | View original entry |
ALTER TABLE in MySQL used to copy the table contents row by row. We can do much better; in the best case, we allow instantaneous schema changes, even for nontrivial changes such as ADD COLUMN…AFTER and DROP COLUMN. This talk describes how ALTER TABLE has been improved over the years for the InnoDB storage engine in MySQL 5.1, 5.5, 5.6, 5.7, and MariaDB Server 10.2, 10.3, 10.4, 10.5, mostly by the presenter.
The talk enumerates different classes of ALTER TABLE operations (such as ADD or DROP COLUMN). We also show how ALTER TABLE can be executed concurrently on multiple nodes in statement-based replication.
Finally, we cover some theoretical limits of what kind of ALTER TABLE operations can be supported without rebuilding the table, by introducing an optional validation step and on-demand conversion of records in previous schema versions of the table.
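A small sketch of the instant path in action; the table name is hypothetical, and the statements require a server new enough for the given operation (e.g. MariaDB 10.3+/MySQL 8.0+ for instant ADD COLUMN).

```python
# Sketch: request the instant algorithm explicitly so the server errors out
# instead of silently falling back to a table-rebuilding copy.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
cur = conn.cursor()

cur.execute("CREATE TABLE t (id INT PRIMARY KEY, a INT)")

# Metadata-only change: no rows are copied.
cur.execute("ALTER TABLE t ADD COLUMN b INT, ALGORITHM=INSTANT")

# Instant DROP COLUMN needs a newer server (e.g. MariaDB 10.4+); on older
# versions this statement fails rather than degrading to a copy.
cur.execute("ALTER TABLE t DROP COLUMN a, ALGORITHM=INSTANT")
```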
Speaker |
Sergei Golubchik |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 15:10 - 15:30 |
Event link | View original entry |
Imagine, you're given a time machine. A fairly limited one, it cannot transport you anywhere. Still, it can show you the past, what your tables looked like at any given point in time. This is exactly what the SQL:2011 standard and MariaDB 10.3+ are giving you. System versioned tables allow you to rewind time and see their content as it was in the past — all using normal SELECT statements. This talk will show how to create system versioned tables, how to use them and how not to kill the performance when doing that. It will present various new applications and use cases that became possible now. Having a time machine, what will you use it for?
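A minimal sketch of the MariaDB 10.3+ syntax; the table, data and timestamp are made up.

```python
# Sketch: create a system-versioned table and read it "as of" a past instant.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE prices (
        item  VARCHAR(32) PRIMARY KEY,
        price DECIMAL(8,2)
    ) WITH SYSTEM VERSIONING
""")
cur.execute("INSERT INTO prices VALUES ('widget', 9.99)")
cur.execute("UPDATE prices SET price = 12.49 WHERE item = 'widget'")
conn.commit()

# Rewind time with a plain SELECT: the old row version is still queryable.
cur.execute("""
    SELECT price FROM prices
    FOR SYSTEM_TIME AS OF TIMESTAMP '2020-02-01 12:00:00'
    WHERE item = 'widget'
""")
print(cur.fetchall())
```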
Speaker |
Varun Gupta |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 15:40 - 16:00 |
Event link | View original entry |
The talk will start with a recap of how MariaDB (or MySQL) handles the ORDER BY LIMIT optimization, with examples demonstrating why the current optimizer is not good enough.
Further, the talk will describe how the optimizer in MariaDB 10.5 mostly solves the issue, the remaining unresolved issues, and how DBAs can tackle them.
FULL DESCRIPTION:
For the first part of the talk, I will discuss the possible strategies by which the ORDER BY LIMIT optimization is handled in MariaDB (or MySQL).
The strategies are:
1) Using an ordered index (ref, range or index scan)
2) Using filesort on the first non-const table
3) Using filesort on the temporary table that stores the output of the join
Then I will discuss how the current MariaDB/MySQL optimizer makes the choice between these strategies and show the situations where it will never get a good query plan.
For the second part of the talk, I will describe how a new cost-based optimization in MariaDB 10.5 solves the above issue. The talk will contain details about how the costs are taken into account during the optimization phase. Further, with the help of examples, I will demonstrate how execution differs with this new optimization and how this leads to improved performance for ORDER BY LIMIT queries.
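To see which of these strategies the optimizer picked for a given query, plain EXPLAIN is enough; a small sketch (schema hypothetical):

```python
# Sketch: inspect the chosen ORDER BY ... LIMIT strategy via EXPLAIN.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
cur = conn.cursor()
cur.execute("""
    EXPLAIN SELECT * FROM orders
    WHERE customer_id = 42
    ORDER BY created_at DESC
    LIMIT 10
""")
# "Using filesort" in the Extra column means strategy 2) or 3);
# its absence with an index access type points to strategy 1).
for row in cur.fetchall():
    print(row)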
Speaker |
Vinicius Grippa |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 16:10 - 16:30 |
Event link | View original entry |
Come to see some real-life examples of how you can do CPU profiling with perf and eBPF/BCC, to create FlameGraphs and ColdGraphs visualizations of the on-CPU/off-CPU time spent by the database. Based on these visualizations and reading the database source code (this is why we love Open Source!) you can quickly gain insight about what's burning CPU (FlameGraphs) and what's causing CPU to wait (ColdGraphs), and with this knowledge you will be several steps closer to answering "what's consuming all that CPU time".
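The on-CPU half of that pipeline is typically just perf plus Brendan Gregg's FlameGraph scripts; a hedged sketch follows, where the pid and the script paths are placeholders.

```python
# Sketch: sample mysqld on-CPU stacks and render a FlameGraph SVG.
import subprocess

PID = "1234"  # placeholder mysqld pid

# Sample call stacks at 99 Hz for 30 seconds.
subprocess.run(["perf", "record", "-F", "99", "-g", "-p", PID,
                "--", "sleep", "30"], check=True)

# Equivalent of: perf script | stackcollapse-perf.pl | flamegraph.pl > mysqld.svg
with open("mysqld.svg", "w") as out:
    p1 = subprocess.Popen(["perf", "script"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["./stackcollapse-perf.pl"], stdin=p1.stdout,
                          stdout=subprocess.PIPE)
    subprocess.run(["./flamegraph.pl"], stdin=p2.stdout, stdout=out, check=True)
```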
Speaker |
Erik Frøseth |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 16:40 - 17:00 |
Event link | View original entry |
JOIN is one of the most common operations in a database system, and for a long time, the only algorithm for executing a join in MySQL has been a variation of the nested loop algorithm. But starting from MySQL 8.0.18, it is now possible to execute joins using hash join. This presentation will walk you through how we were able to implement hash join using our new iterator executor, how hash join in MySQL works, when it is used, and everything else that is worth knowing about hash join in MySQL.
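A quick way to see the new algorithm in action is EXPLAIN FORMAT=TREE (available since MySQL 8.0.16); the tables here are hypothetical, and in 8.0.18 hash join is chosen for equi-joins that lack a usable index.

```python
# Sketch: make the chosen join algorithm visible in the query plan.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
cur = conn.cursor()
cur.execute("""
    EXPLAIN FORMAT=TREE
    SELECT * FROM t1 JOIN t2 ON t1.c1 = t2.c1
""")
# Expect an "Inner hash join" node when hash join is used.
print(cur.fetchone()[0])
```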
Speaker |
Marco Tusa (the Grinch) |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 17:10 - 17:30 |
Event link | View original entry |
MySQL 8.0.18 (finally) comes with a long-awaited and much-desired hash join implementation.
It was already present in other pseudo-MySQL distributions like MariaDB.
But what is hash join, how does it work, what problems does it solve, and when and how should you use it?
Last but not least: are all the different implementations doing the same thing, or do they behave and perform differently?
We are going to take a short journey through hash join implementations and answer all these questions.
Speaker |
Pedro Figueiredo |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 17:40 - 18:00 |
Event link | View original entry |
Data protection is an extensive and hot topic. Making sure that whoever accesses your data has a well-established identity and is authorized can be a complex and hard task. Moreover, nowadays data tends to move around quickly between different instances of the same service, different services and different data consumers. This often implies that data traverses different administrative domains. It is key that MySQL handles, stores and replicates data in compliance with the security requirements that business and regulations demand.
This session showcases the new developments in MySQL 8.0 that tighten replication security setups and reduce the attack surface of the different replication topologies. We will talk about secure inter-server communication, encryption of replication data at rest and the new features that make the replication applier run under a specific security context. Come and learn about security-related replication features in MySQL 8.0.
Speaker |
Shlomi Noach |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 18:10 - 18:30 |
Event link | View original entry |
Schema migration is more than running an ALTER TABLE. It is about designing, reviewing, approving, queuing, scheduling, executing, auditing, controlling and versioning the changes.
At GitHub we run multiple migrations per day, and much of this flow used to be manual, taking a significant toll from the databases team. In this session we illustrate how we automated away migration using free and open source solutions, and based on trusted development flow.
We highlight the use of the skeema tool, with GitHub Actions, git flow and gh-ost.
Speaker |
Frédéric Descamps |
---|---|
Room | H.2214 |
Track | MySQL, MariaDB and Friends |
Time | 18:40 - 19:00 |
Event link | View original entry |
MySQL Shell is a new client for MySQL. It comes with multiple functionalities, like the AdminAPI commands to set up and operate a MySQL InnoDB Cluster, but also upgrade checks, JSON import, parallel import and more... It also allows you to communicate with the MySQL Server in SQL, Python or JavaScript!
During this session we will write a plugin from scratch to extend the Shell using the MySQL Shell Plugin Framework. The code will be written live in Python.
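As a taste of what will be written live, here is a minimal sketch of an extension-object plugin, assuming the MySQL Shell 8.0.17+ extension API and Python 3; the plugin name and greeting are made up. Saved as ~/.mysqlsh/plugins/hello/init.py it adds a global hello object (the shell global is provided by mysqlsh itself, so this only runs inside the Shell).

```python
# Sketch of a MySQL Shell plugin: registers hello.sayHello().
def say_hello(name="FOSDEM"):
    """Print a friendly greeting."""
    print(f"Hello, {name}!")

obj = shell.create_extension_object()
shell.add_extension_object_member(
    obj, "sayHello", say_hello,
    {"brief": "Print a friendly greeting.",
     "parameters": [{"name": "name", "type": "string", "required": False}]})
shell.register_global("hello", obj, {"brief": "Hypothetical example plugin."})
```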
Speaker |
Zoran Regvart |
---|---|
Room | H.3242 |
Track | BOFs (Track B - in H.3242) |
Time | 11:00 - 12:00 |
Event link | View original entry |
Apache Camel is a free software integration framework from the Apache Software Foundation. This meetup is for anyone wishing to meet and discuss Apache Camel development, share experiences and meet other folk from the Apache Camel community in meatspace.
Apache Camel has been around for a while and is used quite widely in all sorts of software integration projects. Camel version 3 was recently released and the community has kicked off several sub-projects: Camel K - a serverless, cloud-native integration on top of Kubernetes, Camel Quarkus - low-memory, fast-startup support utilizing Quarkus, and Camel Kafka Connector - for running Camel inside of Kafka. Let's meet and discuss these initiatives or just talk and exchange ideas around Camel. Anyone wishing to present on any topic relating to Apache Camel is welcome to do so, though please be mindful of the time - we have one hour in this session and would like everyone to have the chance to participate.
Speaker |
Ron Evans |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 11:00 - 12:00 |
Event link | View original entry |
Speaker |
Drew DeVault |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 12:00 - 13:00 |
Event link | View original entry |
Members of the closely linked Sourcehut and aerc communities meet up to put faces to names and discuss the present and future of both projects, and to collect stickers.
Sourcehut is a free/libre project hosting platform with sophisticated git hosting, mailing lists, continuous integration, and more. We'll be discussing the remaining tasks for the alpha, planning and seeking feedback for the beta, and showing off cool stuff added in the past year.
aerc is a FOSS email client for your terminal designed especially for software developers which integrates nicely into the mailing list workflow endorsed by Sourcehut. Early in its development, you'll have a chance to discuss what you'd like to see in your dream email client and speculate wildly on the lofty goals it aims to achieve.
Speaker |
thingee |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 13:00 - 14:00 |
Event link | View original entry |
Please join us for our Birds of a Feather session on open source storage. We'll be talking about Ceph, Gluster, and all other things open source and storage. Mike Perez, community human for Ceph at Red Hat, will be leading the discussion and is happy to give some cool demos during the BoF.
Speaker |
Pili Guerra |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 14:00 - 15:00 |
Event link | View original entry |
Tor Meetup
Speaker |
Karsten Wade |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 15:00 - 16:00 |
Event link | View original entry |
tl;dr - contribute to a community management guide written for practitioners, by practitioners; bring your internet-enabled device.
Whether you are a community manager or any other contributor to a free/libre open source software (FLOSS) project, you'll find yourself learning and thinking of good methods and practices for having a happy, successful, and productive project. Community managers and members share these practices with each other all the time -- in talks, in person, as blog posts, and sometimes as a stand-alone book. But has anyone ever really collaborated on a guide that is for practitioners and collaboratively written BY practitioners?
Actually, yes: 10 years ago The Open Source Way 1.0 was written and released. In the passage of that decade, the best practices around caring for FLOSS projects have evolved, as has the work of the people in those community shepherding roles.
We have embarked on a revamp of this guide, and we invite you to come learn about, participate in, and contribute to this living document in real time during this BOF.
We will begin the BOF with a brief introduction to the guide itself, what is covered within it currently, and an overview of the narrative being told from/to community management practitioners. We'll then get hands-on with the contribution process, starting with a walk through for what is needed for the 2.0 release. After we do some real-time submissions to the guide, the BOF participants can begin working together in small groups or individually on portions of the guide, including both content and publication toolchain.
Speaker |
David Ludovino |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 16:00 - 17:30 |
Event link | View original entry |
Replicant is a fully free Android distribution running on several devices, a free software mobile operating system putting the emphasis on freedom and privacy/security.
This meeting is for everyone interested in the Replicant project (users, developers, devices vendors, etc.). Among other things, we will present the ongoing efforts on Replicant 9 and discuss how we should move forward. Everyone's point of view is welcomed.
A non-exhaustive list of topics will be up for debate.
We will try to set up an audio conference (through Mumble) for those who cannot come to FOSDEM. For that we need several unidirectional (cardioid) microphones, either lapel or handheld, that can be connected to a laptop. If you have one, please bring it.
Speaker |
OSOR team |
---|---|
Room | J.1.106 |
Track | BOFs (Track A - in J.1.106) |
Time | 17:30 - 19:00 |
Event link | View original entry |
The Open Source Observatory (OSOR) of the European Commission is an online collection that provides its community with an information observatory, community building activities, as well as assistance and support services. On behalf of OSOR, we propose to make a presentation on the currently ongoing study towards a guidelines document for creating sustainable open source communities in the public sector.
In this context, OSOR is producing guidelines for creating sustainable open source communities within the public sector. The purpose of the guidelines is to act as a practical tool that can be used by public sector officials interested in establishing open source communities or by members of such communities. The production of the guidelines is a multi-step process, involving desk research, primary data collection, the development of four case studies and interviews with key stakeholders.
After presenting the objectives and approach to produce the guidelines, the OSOR representatives will present the preliminary findings related to the guidelines, including the key success factors associated with healthy communities. The audience will also be invited to brainstorm further in groups on the key success factors of sustainable OSS communities, as well as to identify the key components that our guidelines should contain.
The production of the guidelines is a multi-step process, involving both desk research and primary data collection. More specifically, our team has conducted an in-depth literature review, followed by a questionnaire targeting OSS communities in the public sector, which will be running between January and February 2020. Following the analysis of data collected from the questionnaire, our team will develop four case studies illustrating successful and failed OSS initiatives in the public sector. Interviews with key case study stakeholders are to be conducted within each case study.
The key objective of our workshop at FOSDEM is not only to present our preliminary findings to the audience but to also obtain their views on our findings and future guidelines. We want to ensure that OSOR puts forward truly community driven guidelines.
During the workshop, we will recount the findings so far, drawing on the data from the literature review and survey. More specifically, we will present the key success factors and their components associated with sustainable OSS communities, as well as some interesting failed and successful public sector OSS initiatives. The audience will then be invited to brainstorm further in groups on the key success factors of sustainable OSS communities, as well as to identify the key components that our guidelines should contain.
We will also invite the audience to contribute to our ongoing survey and invite them to get in touch with the OSOR community for further ideas on our guidelines and case studies.
The target group of OSOR is wide-ranging within the open source community. It includes policy makers, IT managers, IT developers, researchers and students, and OSS advocates and enthusiasts in general. OSOR invites members of the audience who are interested in the development of OSS within public administrations, community managers and members, developers and organisations who would like to learn more about the project of measuring the health and sustainability of an open source project. The audience is also invited to follow the work of OSOR and the European Commission on OSS more closely. OSOR promotes collaboration, sharing and digital development through various services provided to its community.
Speaker |
Sander Van Dooren |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 10:30 - 10:55 |
Event link | View original entry |
After a quick overview of the semantic web landscape, we'll look into a practical application of Linked Data in the public sector: the Joinup platform of the European Commission.
Joinup makes use of Linked Data to harvest information from various sources into a knowledge base. We'll have a look at the various components that we needed to build (all Open Source) to make the CMS system (Drupal) 'Linked Data enabled'.
We'll look at the technical problems that needed to be solved and the solutions we came up with, as well as the opportunities that this technology can bring to the CMS world.
Speaker |
Anca Luca |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 11:00 - 11:25 |
Event link | View original entry |
Sharing knowledge in a team of people working together (a company, association, study group or any other kind of project) is key to its long-term success. Even if this is not properly identified as a main concern from the beginning, setting up tools and processes that allow knowledge to be accumulated and organized correctly ends up on the TODO list at some point, often accompanied by the wish to have done it earlier. Thus, the best solution is one that is quick enough to set up from the very beginning and versatile enough to incrementally develop into a proper solid fortress of knowledge.
In this presentation I will use the XWiki platform to incrementally build a collaborative intranet from scratch and will try to address some frequent needs of knowledge sharing in a team, using already-made add-ons or new tools (list not exhaustive):
* Blog, for unidirectional communication,
* Meeting notes, for spoken knowledge not to be lost,
* File Manager for just dropping files to share,
* Task manager for lightweight ticketing,
* Holiday requests and Recruitment applications for team management,
* Unstructured free content sharing, using standard wiki pages and page templates,
* Structured content tailored to fit your exact specific needs,
* Multi-wiki separation for teams isolation,
* Powerful search in all this,
* Authentication add-ons to plug your own users management,
* Easy look & feel setup, to brand it as your own.
Speaker |
Alex Mikheev |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 11:30 - 11:55 |
Event link | View original entry |
ONLYOFFICE is an open-source HTML5-based office suite for editing and collaborating on text documents, spreadsheets, and presentations online. Here, maximizing format compatibility, establishing browser-agnostic content display and optimizing real-time data transfer in co-authoring are the core principles in building applicable editing software.
End user-side demand indicates that deeper integration in productivity solutions and content management applications is one of the main directions for ONLYOFFICE. We would like to share our experience in building connectors that allow users to edit and co-author their documents securely right within a platform they use.
Speaker |
Jos Poortvliet |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 12:00 - 12:25 |
Event link | View original entry |
With Nextcloud you can sync, share and collaborate on data, but you don't need to put your photos, calendars or chat logs on an American server. Nope, Nextcloud is self-hosted and 100% open source! Thanks to hundreds of apps, Nextcloud can do a lot and in this talk, I will highlight some cool things.
Consider this a follow-up to my talk about 200 things Nextcloud can do last year! An update on what's new and some cool new stuff. What is Nextcloud? Let's see. A private cloud is one way to put it, though that's a contradiction of course. It is a way to share your data, sync your files, communicate and collaborate with others - without giving your data to GAFAM! Keep it on your own server, or something close (like a local hosting provider or data center). Nextcloud is a PHP app that does all that, and more! Easy to use, secure (really) and fully open source of course.
Speaker |
Michael Meeks |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 12:30 - 12:55 |
Event link | View original entry |
The Collabora Online code-base can bring the power of LibreOffice into an iframe inside your web app. Come and hear how this works, how to integrate secure, collaborative document editing with your software, and about all the latest, greatest work going on there.
Collabora Online uses a WOPI-like protocol, and it is rather simple to integrate. Come hear about the total of three REST methods you need for a simple integration (a minimal sketch follows below), as well as the wealth of options to control how collaboration works.
Hear about some of our integrations into Nextcloud, Kolab, Mattermost, Moodle, ownCloud, and many more.
See the internals of Collabora Online, how you can get involved with building, debugging, and developing it, and check out some of the new features for mobile and PC browsers that we've been working on to make life better for our users.
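To make the "three REST methods" concrete, here is a toy WOPI-style host sketched with Flask; the endpoints mirror CheckFileInfo, GetFile and PutFile, while the single-file storage, port and reduced field set are simplifying assumptions.

```python
# Toy WOPI-style host: the three endpoints a basic integration needs.
import os
from flask import Flask, request, jsonify, send_file

app = Flask(__name__)
DOC = "/tmp/demo.odt"  # hypothetical single-document "storage"

@app.route("/wopi/files/<file_id>")
def check_file_info(file_id):
    # CheckFileInfo: metadata the editor fetches before opening the file.
    return jsonify({"BaseFileName": "demo.odt",
                    "Size": os.path.getsize(DOC),
                    "UserCanWrite": True})

@app.route("/wopi/files/<file_id>/contents", methods=["GET"])
def get_file(file_id):
    # GetFile: stream the document to the editor.
    return send_file(DOC)

@app.route("/wopi/files/<file_id>/contents", methods=["POST"])
def put_file(file_id):
    # PutFile: persist the edited document sent back by the editor.
    with open(DOC, "wb") as f:
        f.write(request.get_data())
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```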
Speaker |
Jean-Marc Libs |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 13:00 - 13:25 |
Event link | View original entry |
The Tiki Wiki CMS Groupware software community obviously uses Tiki itself for collaboration and knowledge management.
Yet, many other software tools or infrastructures are used. I will review and explain how and why.
Tiki Wiki CMS Groupware was initially released in 2002 and is still very much alive with a vibrant community.
We obviously want to use Tiki itself for collaboration and knowledge management as much as possible, but in reality we use a lot more tools.
We leverage the fact that Tiki is part of WikiSuite by using the other WikiSuite software, but these still don't cover everything we use.
The reasons vary from old habits to convenience or improved efficiency, and they differ for each collaboration tool.
I will review them and explain why we use them and how useful they are to us. Also, I will mention our plans for the future.
Speaker |
Lydia Pintscher |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 13:30 - 13:55 |
Event link | View original entry |
Wikidata, Wikimedia's knowledge base, has been very successful since its inception 7 years ago. Wikidata's general-purpose data about the world is powering everything from Wikipedia to your digital personal assistant. Its linked, machine-readable data is collected and maintained by a community of over 20,000 people. But not all data can and should be in Wikidata. Instead we are taking the software powering Wikidata, Wikibase, to new places. We empower communities and institutions all around the world who want to collect structured, machine-readable data about a topic area of their choice to run their own Wikibase. These Wikibase instances are then connected to form a thriving ecosystem. In this talk we'll go over what Wikibase is, where it's coming from and what it is enabling right now.
Speaker | TG x |
---|---|
Room | AW1.120 |
Track | Collaborative Information and Content Management Applications |
Time | 14:00 - 14:25 |
Event link | View original entry |
A data-centric, offline-first approach to decentralized collaborative application development, focusing on data ownership and privacy.
The talk explores replicated, mergeable data structure stores as building blocks of decentralized applications that enable asynchronous collaboration and offline search. Combined with peer-to-peer, gossip-based protocols that provide pub/sub, dissemination, and recommendation services, both over the internet and on local and mobile proximity networks, these stores form interest-based networks that facilitate discovery of personally relevant content and people.
Speaker | Alberto Massidda |
---|---|
Room | AW1.120 |
Track | Coding for Language Communities |
Time | 15:00 - 16:00 |
Event link | View original entry |
We present: 1) a full pipeline for unsupervised machine translation training (making use of monolingual corpora) for languages with few available resources; 2) a translation server that makes use of that unsupervised MT, with an HTTP API compatible with the Moses toolkit, a once prominent MT system; 3) a Docker-packaged version of the EU-funded free Computer Aided Translation (CAT) tool MateCAT, for ease of deployment.
This full translation pipeline enables a non-technical user, speaking a non-FIGS language for which parallel corpora are scarce, to start translating documents and software following translation industry standards.
Localization within the community suffers from fragmented technologies (too wide a wedge between commercial Computer Aided Translation tools and free ones), limited language resources (making it difficult to train a machine translation system) and a lack of clear, robust pipelines to get started.
Low-resource language communities suffer the most, since MT systems require training corpora of millions of words, and industry has settled on expecting the massive corpora available for FIGS (French, Italian, German, Spanish) languages.
Moreover, the community suffers from a lack of adoption of established technologies and workflows, leading to reinvented wheels and suboptimal outcomes.
Today we would like to present a connector for the implementation of an unsupervised MT (by Artetxe et al.) that claims a BLEU score of 26 with limited language resources (enough as a support system), integrated with MateCAT, an industry-grade, free, web-based tool funded by the EU, in order to provide a more viable alternative to resorting to Google Translate and commercial LSPs.
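Since the server advertises a Moses-compatible API, a client can drive it the way the classic mosesserver is driven, over XML-RPC. A minimal sketch, assuming the conventional /RPC2 endpoint and translate method (host and port are hypothetical):

```python
# Minimal client for a Moses-compatible translation server.
# Assumptions: the server follows the mosesserver XML-RPC convention,
# i.e. a "translate" method taking {"text": ...} and returning
# {"text": ...}; the URL and port are hypothetical.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://localhost:8080/RPC2")
result = server.translate({"text": "bonjour le monde"})
print(result["text"])  # the translated segment
```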
Speaker | Lydia Pintscher |
---|---|
Room | AW1.120 |
Track | Coding for Language Communities |
Time | 16:00 - 16:30 |
Event link | View original entry |
Wikidata, Wikimedia's knowledge base, has been collecting general purpose data about the world for 7 years now. This data powers Wikipedia but also many applications outside Wikimedia, like your digital personal assistant. In recent years Wikidata's community has also started collecting lexicographical data in order to provide a large data set of machine-readable data about words in hundreds of languages. In this talk we will explore how Wikidata enables thousands of volunteers to describe their languages and make it available as a source of data for systems that do automated translation, text generation and more.
Speaker | Sander van Geloven |
---|---|
Room | AW1.120 |
Track | Coding for Language Communities |
Time | 16:30 - 17:00 |
Event link | View original entry |
Nuspell version 3 is a FOSS spell checker written in pure C++17. It extensively supports character encodings, locales, compounding, affixing and complex morphology. Existing spell checking in web browsers, office suites, IDEs and other text editors can use it as a drop-in replacement. Nuspell supports 90 languages, suggestions and personal dictionaries.
In this talk we will summarize the functionality of Nuspell version 3 and provide easy-to-follow examples of how to use it as a command-line tool or link to the C++ library. Newly made integrations in Firefox and Enchant will be discussed. The audience will be invited to further integrate Nuspell into their software, create new language bindings, port it to other operating systems and help grow its community. This new spell checker has grown from an MVP into a faster and more complete spell checker.
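As an illustration of the command-line side, Nuspell can be used like the familiar hunspell pipe interface; a sketch driving it from Python (the -d flag and the ispell-style output markers are assumptions based on that convention):

```python
# Pipe words through the nuspell command-line tool. Assumptions: the
# CLI follows the hunspell/ispell pipe convention ('*' = correct,
# '&' = misspelled plus suggestions) and -d selects the dictionary.
import subprocess

proc = subprocess.run(
    ["nuspell", "-d", "en_US"],
    input="helo world\n",
    capture_output=True,
    text=True,
)
print(proc.stdout)
```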
Speaker | Michal Čihař |
---|---|
Room | AW1.120 |
Track | Coding for Language Communities |
Time | 17:00 - 18:00 |
Event link | View original entry |
Please note that this talk will now be given by Michal Čihař instead of Václav Zbránek.
The presentation will show you how to localize your project easily, with little effort, the open-source way. Why did we start Weblate? We said no to repetitive work, and no to manual work with translation files. Weblate is unique for its tight integration with version control. Set it up once and start engaging a community of translators. More languages translated means more happy users of your software. Be like openSUSE, Fedora, and many more, and speak your users' language now thanks to Weblate! I will show you the main perks of Weblate and the setup of a project. If you have a project with an open repository and you want to start translating it, bring your git:// link and we will set it up right on the spot. FOSDEM is a great time and place to found your translation community. I am looking forward to answering all your questions!
Speaker | Alexander Rush Thierry Tambe |
---|---|
Room | AW1.120 |
Track | Coding for Language Communities |
Time | 18:00 - 18:30 |
Event link | View original entry |
The last half decade has seen a major increase in the accuracy of deep learning methods for natural language translation and understanding. However, many users still interact with these systems through proprietary models served on specialized cloud hardware. In this talk we discuss co-design efforts between researchers in natural language processing and computer architecture to develop an open-source software/hardware system for natural language translation and understanding across languages. With this system, users can access state-of-the-art models for translation, speech, and classification, and also run these models efficiently on edge-device open-hardware designs.
Our work combines two open-source development efforts, OpenNMT and FlexNLP. The OpenNMT project is a multi-year collaborative project for creating an ecosystem for neural machine translation and neural sequence learning. Started in December 2016 by the Harvard NLP group and SYSTRAN, the project has since been used in many research and industry applications. The project includes highly configurable model architectures and training procedures, efficient model serving capabilities for use in real world applications, and extensions to tasks such as text generation, tagging, summarization, image to text, and speech to text. FlexNLP is an open-source fully retargetable hardware accelerator targeted for natural language processing. Its hardware design is targeted to key NLP computational functions such as attention mechanisms and layer normalization that are often overlooked by today’s CNN or RNN hardware accelerators. FlexNLP’s rich instruction set architecture and microarchitecture enable a diverse set of computations and operations that are paramount for end-to-end inference on state-of-the-art attention-based NLP models. Together they provide an open pipeline for both model training and edge device deployment.
Speaker | Peter Bouda |
---|---|
Room | AW1.120 |
Track | Coding for Language Communities |
Time | 18:30 - 19:00 |
Event link | View original entry |
The Poio project develops language technologies to support communication in lesser-used and under-resourced languages on and with electronic devices. Within the Poio project we develop text input services with text prediction and transliteration for mobile devices and desktop users to allow conversation between individuals and in online communities.
In this lightning talk I will present the current architecture of the Poio Corpus, our corpus collection and data management pipeline. I will show how to add a new language to the corpus and how you can use the pipeline to build language models for the predictive text technology. Our goal is to make collaboration with language communities as smooth as possible, so that developers, data engineers and speakers of under-resourced languages can collaborate to build grassroots language technologies. Poio started as a language revitalization project at the Interdisciplinary Centre for Social and Language Documentation in Minde, Portugal, a non-profit organization dedicated to the documentation and preservation of linguistic heritage.
Speaker | Loïc Hoguin |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 10:30 - 10:50 |
Event link | View original entry |
Farwest is an Erlang framework for building RESTful Web applications and APIs.
Well-written Farwest applications apply the HATEOAS principles and as a result can be interacted with using a single client. This removes entirely the need to write a separate client per API and lets servers decide how the data is best consumed by everyone.
This demo will show how to use Farwest to write a simple API to a document-oriented database.
Speaker | Greg Mefford |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 11:00 - 11:20 |
Event link | View original entry |
Learn how distributed tracing can revolutionize the way you troubleshoot errors and performance issues, in both monolithic and distributed micro-service architectures.
OpenTelemetry is an industry standard for distributed tracing, merging the tech and communities of OpenCensus and OpenTracing.
Speaker | Gabriele Santomaggio |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 11:30 - 11:50 |
Event link | View original entry |
In this talk, we will see how to debug and trace a running RabbitMQ node.
Erlang remote access and remote debugging are built-in features of Erlang/Elixir.
With these features, it is possible to see what's happening inside a BEAM node (such as RabbitMQ).
The BEAM ships with a set of tools, such as etop, eprof, dbg and fprof, that work much like their Linux counterparts.
I will also show how to use dynamic loading to add non-native code to a running BEAM node.
Speaker | Arjan Scherpenisse |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 12:00 - 12:20 |
Event link | View original entry |
Intended as an introduction to Nerves, the IoT platform for the BEAM, this talk is a journey through the land of library ecosystems, device drivers and pixel manipulators, in search of the holy grail: a stable and maintainable IoT device.
The Nerves project (https://nerves-project.org/) is a framework for building IoT devices with Elixir. In this talk I will explain how a Nerves project is structured and then move on to show and demonstrate one of the projects that I built with it, focusing on the development experience and the state of the Nerves ecosystem.
Speaker | Luke Imhoff |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 12:30 - 12:50 |
Event link | View original entry |
Lumen is an alternative compiler, interpreter and runtime to the BEAM designed for WebAssembly. Lumen allows Erlang and Elixir to run in the browser.
The Lumen project is a reimplementation of the BEAM in Rust. Using Rust, Lumen is able to leverage the cutting edge tools of the Rust WASM ecosystem. Compiling Elixir and Erlang from source to LLVM IR, the Lumen compiler is able to do whole program optimizations allowing for dead-code elimination of parts of the user application, OTP, and the runtime itself. Eliminating the dead code makes shipping OTP size-competitive with JavaScript frameworks, while retaining the benefits of thousands of concurrent processes and supervision trees.
Speaker | Viktor Gergely |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 13:00 - 13:20 |
Event link | View original entry |
The speaker started to experiment with running BEAM modules on Android during the summer of 2019. A prototype called CoffeeBeam has been created that is capable of loading and running BEAM files on Android. The solution also contains a virtual machine that provides a lightweight Erlang runtime system. Most of the implemented functionality is independent of the source language of the BEAM files, so the platform is easily extensible to support further languages on the BEAM. During the talk, the speaker is going to present a real-life example of running a BEAM file on Android, while presenting the concepts of the implementation and sharing the story of this journey.
CoffeeBeam is a lightweight Erlang virtual machine that provides easy integration of BEAM files with Android applications. Current alternative solutions provide almost fully functional Erlang runtime systems in the form of an Erlang shell on Android devices. CoffeeBeam, however, follows a different approach, targeting easy integration of pre-compiled BEAM files into standalone Android applications. The characteristics of Android devices are in focus: they provide large amounts of memory, while CPU usage needs to be optimized for longer device lifetime. The communication between Erlang and the Android application is preferably made transparent to provide a better user experience.
Let's assume that you chose a language on the BEAM to implement your application logic efficiently. CoffeeBeam provides a framework to build on this logic and enables communication between the BEAM and the Android application with only minor changes to your original code. The demonstrated example is a TicTacToe game where the game logic is implemented in Erlang, extended with a graphical user interface implemented as an Android activity in Java.
The TicTacToe game is implemented as an Android activity, which is a common way of creating interactive applications. The activity contains the view for displaying textual information (game name and user instructions depending on the state of the game) and widgets (game board and new game button) for initiating user actions towards the game logic.
The CoffeeBeam VM provides the runtime system for the game logic. It is written in Java and included as a .jar library inside the Android application source code. Starting and stopping the VM is connected to the onCreate() and onDestroy() callbacks of the activity.
The flow of the game and the computer player's intelligence are implemented as an Erlang module (approximately 250 lines of code), and the compiled .beam file is packaged into the Android application as a resource.
The BeamClient class provides the interface for starting and stopping the VM, and manages communication between the VM and the Android application through function calls and callback functions. The default behavior can be redefined by extending the BeamClient class. The forms of communication are described in detail below.
User actions in the Android application are translated into function calls in the VM using the apply(String module, String function, ErlList args) method of the BeamClient class. The function call implies creating a new process in the Erlang VM and applying module:function with the list of args. The TicTacToe game logic provides the following functions:
- start(): spawns the game process, which initializes the board for a new game.
- new_game(GamePid): clears the game board and starts a new game in the game process identified by GamePid.
- put(GamePid, X, Y): marks the (X,Y) field of the game board with an X sign for the player.
When an Erlang function is executed in the VM, the result of the function initiates a callback in the BeamClient as handleResult(ErlTerm result). In the TicTacToe example, the process identifier of the game process is returned as the result of the tictactoe:start() function. The returned value can be used to send Erlang messages to the game process during the game.
Each call of the form beamclient:function(arg) in the Erlang modules results in a BeamClient callback handleCall(String function, ErlTerm arg). Each game event invokes a beamclient:update({Event, Board}) function call that is translated into a handleCall callback in the Android application.
CoffeeBeam executes BEAM files in a lightweight VM that can be packaged into the Android application. The TicTacToe example above showed how to include the Erlang game logic in an Android application that provides the graphical user interface. The game flow runs in a separate process in the CoffeeBeam VM, and the communication with Android is done through BeamClient function calls and callbacks.
The CoffeeBeam VM is open source and available for further development to extend the VM functionality or implement customizations for other languages running on the BEAM. The source code with documented interface is available at: https://github.com/vikger/coffeebeam.
Speaker | Wiebe-Marten Wijnja |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 13:30 - 13:50 |
Event link | View original entry |
Compilation and execution are as different as night and day. Or are they? By blurring the lines, Elixir (and the BEAM VM) enables some very powerful and useful meta-programming techniques.
In this presentation, Marten will talk about running and generating code at compile-time, Elixir's hygienic macros, and how to compile and hot-reload altered or extra code, while your program is running!
Besides explaining these concepts, their usefulness will be motivated using various practical real-world examples.
Speaker | Evadne Wu |
---|---|
Room | AW1.121 |
Track | Erlang, Elixir and Friends |
Time | 14:00 - 14:20 |
Event link | View original entry |
A popular way to manage long-running state in Erlang and Elixir programs is by using processes; this model is well-understood and well-supported, but remains firmly rooted within known orthodoxy. Within this session, I shall demonstrate application of the Orleans model to existing Erlang and Elixir applications, review existing work done by the community, and compare this way of programming against other models. For maximum enjoyment, the audience is advised to possess working knowledge of Erlang and Elixir. Some background knowledge in Web applications will be helpful as well. The session will be presented with live demo in Elixir.
Speaker | Vincent Cave |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 15:00 - 15:20 |
Event link | View original entry |
Python has proven to be a popular choice for data scientists in
the domain of graph analytics. The multitude of freely available
frameworks and Python packages makes it possible to develop
applications quickly through ease of expression and reuse of code.
With petabytes of data generated every day and an ever-evolving
landscape of hardware solutions, we observe that a graph processing
framework should offer the following characteristics: ease of
use, scalability, interoperability across data formats, and
portability across hardware vendors.
While existing Python packages have been helping to drive
application development, our assessment is that none of them
address all the aforementioned challenges.
We propose a community-led, open source effort to design and
build a graph processing Python library that specifically addresses
these challenges.
Speaker | Sylvain Baubeau |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 15:25 - 15:45 |
Event link | View original entry |
Graffiti is the graph engine of Skydive, an open source network analysis tool. Graffiti was created from scratch to provide the features required by Skydive: distributed, replicated, storing the whole history of the graph, allowing subscription to graph events over WebSocket, and visualization.
Skydive (https://skydive.network) is an open source analysis tool. It collects information about an infrastructure topology, such as network interfaces, Linux bridges, network namespaces, containers and virtual machines, and stores it in a graph database called Graffiti (https://github.com/skydive-project/skydive/tree/master/graffiti).
The graph is:
- distributed: some agents only have a portion of the graph
- replicated: for high availability and load distribution
- historical: every change on the graph is archived, allowing retrieval of the graph at any point in time, or of all revisions of a set of nodes and edges during a period of time
A custom implementation of the Gremlin language is used to query the graph, with some additional steps, for instance to specify the time context of the query.
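To make the time-context idea concrete, here is a sketch of such queries sent to a Skydive analyzer from Python. The endpoint, port and the At() time-context step reflect Skydive's documented API as the author understands it, so treat them as assumptions:

```python
# Send Gremlin queries to a Skydive analyzer. Assumptions: the analyzer
# listens on localhost:8082 and accepts {"GremlinQuery": ...} posted to
# /api/topology; At() is the time-context step mentioned above.
import requests

def topology_query(gremlin):
    r = requests.post(
        "http://localhost:8082/api/topology",
        json={"GremlinQuery": gremlin},
    )
    return r.json()

# Current state of the graph: all OVS bridges.
print(topology_query("G.V().Has('Type', 'ovsbridge')"))

# The same query against the graph as it was five minutes ago.
print(topology_query("G.At('-5m').V().Has('Type', 'ovsbridge')"))
```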
In addition to the core engine, a WebSocket-based user interface, built on D3JS, is available to visualize and interact with the graph.
This presentation will showcase a demo of Graffiti and advocate its use in your own projects.
Speaker | Max Kießling |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 15:50 - 16:10 |
Event link | View original entry |
Graph algorithms play an increasingly important role in real-world applications. The Neo4j Graph Algorithms library contains a set of ~50 graph algorithms covering a lot of different problem domains. In our talk, we’ll present the architecture of the library and demonstrate the different execution phases using a real world example.
Graph algorithms play an increasingly important role in real-world applications. Use cases that we see in the wild relate to fraud detection, retail recommendation and identifying influencers for marketing campaigns. The Neo4j Graph Algorithms library contains a set of ~50 graph algorithms covering the above-mentioned problem domains.
Running a graph algorithm in Neo4j involves three essential steps: loading the graph from the database into an optimized in-memory format, executing the algorithm, and streaming or writing the results. For the user, these steps are hidden behind single procedure calls, integrated into the Cypher query language.
In our talk, we will explain and demonstrate what happens in the system when a user calls an algorithm procedure. This involves scanning Neo4j store files, constructing our in-memory graph representation and executing an algorithm via our Java Graph API.
Attendees will learn how to set up and use the Neo4j Graph Algorithms library. Furthermore, they will get a good understanding of how the library works internally and how to tune it for specific use cases.
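For a flavour of the single-procedure-call interface, the sketch below runs PageRank through the library's algo.* procedure namespace using the official Neo4j Python driver. The procedure signature matches the Graph Algorithms library as the author recalls it, and the label, relationship type, credentials and configuration values are illustrative:

```python
# Run PageRank via a Neo4j Graph Algorithms procedure call.
# Assumptions: the library's algo.pageRank.stream(label, relType, config)
# signature; node label, relationship type and credentials are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))

CYPHER = """
CALL algo.pageRank.stream('Page', 'LINKS',
    {iterations: 20, dampingFactor: 0.85})
YIELD nodeId, score
RETURN algo.asNode(nodeId).name AS name, score
ORDER BY score DESC LIMIT 10
"""

with driver.session() as session:
    for record in session.run(CYPHER):
        print(record["name"], record["score"])
driver.close()
```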
Speaker | Muhammad Osama |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 16:15 - 16:35 |
Event link | View original entry |
Gunrock is a CUDA library for graph-processing designed specifically for the GPU. It uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on vertex or edge frontiers. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies, particularly in the area of fine-grained load balancing, with a high-level programming model that allows programmers to quickly develop new graph primitives that scale from one to many GPUs on a node with small code size and minimal GPU programming knowledge.
Features of Gunrock include:
- Best-in-class performance among GPU graph analytics frameworks
- A large number of graph applications (28 at last count)
- A data-centric programming model targeted at GPUs that offers advantages over other programming models
- A programming model that scales to multiple GPUs with high performance while still using the same code as a single-GPU primitive
Gunrock began in 2013 as a project under DARPA's XDATA program and is currently the performance reference in DARPA's HIVE program. Gunrock is also in development as a component in NVIDIA's RAPIDS platform for data analytics. The Gunrock team actively develops and improves Gunrock under an Apache 2.0 license.
https://gunrock.github.io/
Speaker | Margaret Martonosi Aninda Manocha |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 16:40 - 17:00 |
Event link | View original entry |
Graph databases and applications have attracted much attention in the past few years due to the efficiency with which they can represent big data, connecting different layers of data structures and allowing analysis while preserving contextual relationships.
This has resulted in a fast-growing community that has been developing various database and algorithmic innovations in this area, many of which will be gathering together in this conference. We joined this field as computer architecture researchers and are currently building a complete hardware-software design, called DECADES, that aims to accelerate the execution of these algorithms.
From a computer architecture point of view, applications involving dense matrix operations such as neural networks have garnered much attention for their acceleration through specialized hardware such as GPUs and TPUs, while graph applications remain difficult to improve even with modern specialized accelerator designs. The reason for this is the characteristic pointer-based data structures of graph applications and the resulting irregular memory accesses performed by many of these workloads. Such irregular memory accesses result in memory latency bottlenecks that dominate the total execution time. In this talk, as part of the DECADES infrastructure, we present an elegant hardware-software codesign solution, named FAST-LLAMAs, to overcome these memory-bottlenecks, and thus, accelerate graph and sparse applications in an energy efficient way.
We propose a 40-minute talk which includes a rigorous characterization of the problem and an in-depth analysis of our software-hardware co-design solution, FAST LLAMAs. We will present results based on a simulated model of our system which show significant performance improvements (up to 8x), as well as energy improvements (up to 20x), on a set of fundamental graph algorithms and important real-world datasets. Our system is completely open source and includes a compiler and a cycle-accurate simulator. Our proposed system is compatible with, and easily extendable to, many of the open-source graph analytics and database frameworks, and we are excited to engage with the open-source community of this increasingly important domain.
The work is part of a large collaboration from three academic groups: Margaret Martonosi (PI Princeton), David Wentzlaff (PI Princeton), Luca Carloni (PI Columbia) with students/researchers: Juan L. Aragón (U. of Murcia, Spain), Jonathan Balkind, Ting-Jung Chang, Fei Gao, Davide Giri, Paul J. Jackson, Aninda Manocha, Opeoluwa Matthews, Tyler Sorensen, Esin Türeci, Georgios Tziantzioulis, and Marcelo Orenes Vera. In addition to the submission author, portions of the talk may be offered by others in the collaboration.
Speaker | Stijn Eyerman |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 17:05 - 17:25 |
Event link | View original entry |
Large-scale graph analytics is essential to analyze relationships in big data sets. To this end, the DARPA HIVE program targets a leap in power-efficient graph analytics. In response to this program, Intel proposes the Programmable Unified Memory Architecture (PUMA). Based on insights from graph workload analysis, PUMA consists of many multi-threaded cores, fine-grained memory and network accesses, a globally shared address space, and powerful offload engines. In this talk, we will describe the PUMA architecture, both in terms of hardware and the software ecosystem. We will provide initial simulation-based performance estimates, showing that for graph analysis applications a PUMA node will outperform a conventional compute node by one to two orders of magnitude. Additionally, PUMA will continue to scale across multiple nodes, which is a challenge in conventional multi-node setups.
Speaker | Tobias Johansson Petr Janouch |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 17:30 - 17:50 |
Event link | View original entry |
In this talk we will introduce enhancements to the Cypher graph query language, enabling queries spanning multiple graphs, intended for use in sharding and federation scenarios.
We will also present our experience with sharding the LDBC Social Network Benchmark dataset.
Speaker | Ben Steer |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 17:55 - 18:15 |
Event link | View original entry |
Temporal graphs capture the development of relationships within data throughout time. This model fits naturally within a streaming architecture, where new events can be inserted directly into the graph upon arrival from a data source, being compared to related entities or historical state. However, the vast majority of graph processing systems only consider traditional graph analysis on static data, with some outliers supporting batched updating and temporal analysis across graph snapshots. This talk will cover recent work defining a temporal graph model which can be updated via event streams, and investigating the challenges of distribution and graph maintenance. Notable challenges include: partitioning a graph built from a stream, with the additional complexity of managing trade-offs between structural locality (proximity to neighbours) and temporal locality (proximity to an entity's history); synchronising graph state across the cluster and handling out-of-order updates, without a central ground truth limiting scalability; and managing memory constraints while performing analysis in parallel with ongoing update ingestion.
To address these challenges, we introduce Raphtory, a system which maintains temporal graphs over a distributed set of partitions, ingesting and processing parallel updates in near real-time. Raphtory's core components consist of Graph Routers and Graph Partition Managers. Graph Routers attach to a given input stream and convert raw data into graph updates, forwarding these to the Graph Partition Manager handling the affected entity. Graph Partition Managers contain a partition of the overall graph, inserting updates into the histories of affected entities at the correct chronological position. This removes the need for centralised synchronisation, as commands may be executed in any given arrival order whilst resulting in the same history. To deal with memory constraints, Partition Managers both compress older history and set an absolute threshold for memory usage. If this threshold is met, a cut-off point is established, requiring all updates prior to this time to be transferred to offline storage. Once the cluster is established and ingesting the selected input, analysis on the graph is possible via Analysis Managers. These connect to the cluster, broadcasting requests to all Partition Managers, which execute the algorithm. Analysis may be completed on the live graph (the most up-to-date version), at any point back through its history, or as a temporal query over a range of time. Additionally, multiple Analysis Managers may operate concurrently on the graph, with previously unseen algorithms compiled at run-time, thus allowing modification of ongoing analysis without re-ingesting the data.
Raphtory is an ongoing project, but is open source and available for use now. Raphtory is fully containerised for ease of installation and deployment, and much work has gone into making it simple for users to ingest their own data sources, create custom routers and perform their desired analysis.
The proposed talk will discuss the benefits of viewing data as a temporal graph, the current version of Raphtory and how someone could get involved with the project. We shall also touch on several areas of possible expansion at the end for discussion with those interested.
The intended audience for this talk is a mixture of data scientists and graph engineers. It will be quite high level, introducing some interesting ideas about viewing data through the lens of a temporal graph, as well as novel systems solutions for distribution, maintenance and processing.
Speaker | Kevin Gomez Christopher Rost |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 18:20 - 18:40 |
Event link | View original entry |
The temporal analysis of evolving graphs is an important requirement in many domains but hardly supported in current graph database and graph processing systems. We therefore extended the distributed graph analytics framework Gradoop for time-related graph analysis by introducing a new temporal property graph data model. Our model supports bitemporal time dimensions for vertices and edges to represent both rollback and historical information. In addition to the data model, we introduce several time-dependent operators (e.g., Snapshot, Diff and Grouping) that natively support the natural evolution of the graph. Since this is an extension of Gradoop, the temporal operators are compatible and can be combined with the already known operators to build complex analytical tasks in a declarative way. In our talk, we will give a brief overview of the Gradoop system, the temporal property graph model and how we support the time-dependent analysis of large graphs. Based on real-world use cases, we show the expressiveness and flexibility of our temporal operators and how they can be composed to answer complex analytical questions.
Speaker | Bob van Luijt |
---|---|
Room | AW1.121 |
Track | Graph Systems and Algorithms |
Time | 18:40 - 19:00 |
Event link | View original entry |
Weaviate is an open-source smart graph that aims to allow anyone, anywhere, at any time to create their own semantic search engines, knowledge graphs or knowledge networks. Weaviate is based on RESTful and GraphQL APIs and built on top of a semantic vector storage mechanism called the contextionary. Because all data is stored in the vector space, Weaviate is ideal for:
- Semantically search through the knowledge graph.
- Automatically classify entities in the graph.
- Create easy to use knowledge mappings.
Because the use of formal ontologies is optional, Weaviate can be used to create a P2P knowledge network, which we want to present during this conference.
This is a follow-up to the initial design that was shared during last year's FOSDEM.
Creating a knowledge graph can be a complex endeavor, let alone the integration of semantic search models. Bain & Company research among US enterprise CTOs shows that 59% of them believe they lack the capabilities to generate meaningful business insights from their data, and 85% said it would require substantial investments to improve their data platforms.
Weaviate aims to make creating large, enterprise-scale knowledge graphs as straightforward as possible. Weaviate's feature set allows anyone to:
- Semantically search through the knowledge graph.
- Automatically classify entities in the knowledge graph.
- Create easy to use knowledge mappings.
Weaviate's Contextionary is the semantic vector storage mechanism that stores data, unlike traditional storage mechanisms, based on its semantic meaning. For example, if someone stores information about a company with the name Apple, this data object would be found closely related to concepts like the iPhone.
Because of the algorithmic use (as opposed to retraining) of the pre-trained machine learning model, Weaviate is able to learn new concepts fast and near-realtime. This allows the user to update and manipulate the knowledge graph directly.
During the session, we want to show a few recent use cases to demonstrate how Weaviate can be used. The demo will include:
querying;
semantic querying;
adding concepts;
going from an ontology to a schema;
and more.
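To give an impression of what a semantic query can look like, here is a sketch against a local Weaviate GraphQL endpoint from Python. The class name and the exact query shape are illustrative only, since the schema is user-defined and the GraphQL API has evolved between versions:

```python
# Semantic query against a local Weaviate instance. Assumptions: the
# instance listens on localhost:8080 and exposes /v1/graphql; the
# "Company" class and the query shape are illustrative and
# version-dependent.
import requests

GRAPHQL = """
{
  Get {
    Things {
      Company(explore: {concepts: ["smartphone manufacturer"]}) {
        name
      }
    }
  }
}
"""

r = requests.post("http://localhost:8080/v1/graphql",
                  json={"query": GRAPHQL})
print(r.json())
```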
Because of Weaviate's contextionary, a formal ontology is optional (e.g., "a company with the name Netflix" is semantically similar to "a business with the identifier Netflix Inc."). This allows multiple Weaviate instances to connect and communicate over a peer-to-peer (P2P) network to exchange knowledge: the knowledge network. During the session, we want to demonstrate the first prototype of this network.
More information can be found on our website: https://www.semi.technology/documentation/weaviate/current/
Speaker | Dirk Craeynest Jean-Pierre Rosen |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 10:30 - 10:35 |
Event link | View original entry |
Welcome to the Ada Developer Room at FOSDEM 2020, which is organized
by Ada-Belgium in cooperation with Ada-Europe.
Ada-Belgium and Ada-Europe are non-profit organizations set up
to promote the use of the Ada programming language and related
technology, and to disseminate knowledge and experience into academia,
research and industry in Belgium and Europe, respectively. Ada-Europe has
member-organizations, such as Ada-Belgium, in various countries,
and direct members in many other countries.
Speaker | Jean-Pierre Rosen |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 10:35 - 11:20 |
Event link | View original entry |
An overview of the main features of the Ada language, with special
emphasis on those features that make it especially attractive for
free software development.
Ada is a feature-rich language, but what really makes Ada stand out is
that the features are nicely integrated towards serving the goals of
software engineering. If you prefer to spend your time on designing
elegant solutions rather than on low-level debugging, if you think
that software should not fail, if you like to build programs from
readily available components that you can trust, you should really
consider Ada.
Speaker | Gautier de Montmollin |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 11:30 - 11:50 |
Event link | View original entry |
In the Ada world, we are surrounded by impressive and professional
tools that can handle large and complex projects. Did you ever
dream of a tiny, incomplete but compatible system to play with?
Are you too impatient, when developing small pieces of code, for
long compile-bind-link-run cycles? Are you a beginner intimidated by
project files and sophisticated tools? Then HAC (the HAC Ada Compiler,
or the Hello-world Ada Compiler) is for you.
HAC is a revival of the SmallAda project, which supported the "Pascal
subset" plus tasking.
Speaker | Philippe Waroquiers |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 12:00 - 12:50 |
Event link | View original entry |
This talk describes how performance aspects of a big Air Traffic Flow
Management mission critical application are tracked from development
to operations.
Tracking performance is needed when new functionality is added, to
balance the additional services versus the resource increase needed.
Measuring and tracking performance is also critical to ensure a new
release can cope with the current or expected load.
We will discuss various aspects, such as which tools and techniques
are used for performance tracking and measurements, and the traps
and pitfalls encountered in these activities. The application
in question is using Ada, but most of the items discussed are not
particularly Ada related.
Speaker | Johannes Kliemann |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 13:00 - 13:20 |
Event link | View original entry |
Last year I presented Cappulada, a C++ binding generator for Ada
that was intended to overcome the shortcomings of existing solutions
and to provide usable bindings even for complex C++ code.
This year I want to show our conclusions on why automatic bindings
between C++ and Ada are hard (if not impossible) and where existing
solutions (including our own) fail.
Speaker | Alejandro Mosteo |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 13:30 - 13:50 |
Event link | View original entry |
The Robot Operating System (ROS) is one of the chief frameworks
for service robotics research and development. The next iteration
of this framework, ROS2, aims to improve critical shortcomings of
its predecessor like deterministic memory allocation and real-time
characteristics.
RCLAda is a binding to the ROS2 framework that enables the programming
of ROS2 nodes in pure Ada with seamless integration into the ROS2
workflow.
Speaker | Jean-Pierre Rosen |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 14:00 - 14:50 |
Event link | View original entry |
Ada incorporates in its standard a model for distributed execution.
It is an abstract model that does not depend on a particular kind of
network or any other means of communication, and that preserves full
typing control across partitions. This presentation briefly exposes the
principles of Ada's distribution model, then shows the possibilities
with live demos across different machines and operating systems.
Speaker | Jan Verschelde |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 15:00 - 15:20 |
Event link | View original entry |
Tasks in Ada are an effective way to speed up computations on multicore
processors. In writing parallel programs we determine the granularity
of the parallelism with respect to the memory management. We have to
decide on the size of each job, the mapping of the jobs to the tasks,
and on the location of the input and output data for each job.
A multitasked Newton's method will show the effectiveness of Ada to
speed up the computation of power series. This application belongs
to the free and open source package PHCpack, a package to solve
polynomial systems by polynomial homotopy continuation.
Speaker | Martin Stein |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 15:30 - 15:50 |
Event link | View original entry |
The Genode OS framework is an open-source tool kit for building highly
secure component-based operating systems scaling from embedded devices
to dynamic desktop systems. It runs on a variety of microkernels
like seL4, NOVA, and Fiasco.OC as well as on Linux and the Muen SK.
But the project also features its own microkernel named "base-hw"
written in C++ like most of the Genode framework.
Spunky is a pet project of mine. Simply put, it's an approach to
re-implement the design of the "base-hw" kernel, first in Ada and
later in SPARK, with the ultimate goal of proving its correctness.
It is also an opportunity to learn how Genode can benefit from Ada
and SPARK in general and promote the use of safety-oriented languages
in the project.
Speaker | Fabien Chouteau Pierre-Marie de Rodat Alejandro Mosteo |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 16:00 - 16:50 |
Event link | View original entry |
Alire (Ada LIbrary REpository) is a package manager project for the
Ada/SPARK community. The goal of a package manager is to facilitate
collaboration within the community and to lower the barrier to entry
for beginners.
In this talk we will present the Alire project, what it can do for
you and how you can contribute and give more visibility to your
Ada/SPARK projects.
We will also provide a tutorial to show how to use Alire to create
a library and then publish it for others to use.
Speaker | Stephane Carrez |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 17:00 - 17:20 |
Event link | View original entry |
Storing passwords and secret configuration is a challenge for an
application. Ada Keystore is a library that stores arbitrary content,
encrypting it in a secure keystore (AES-256, HMAC-256).
The talk presents the project and shows how to use the Ada Keystore
library to get or store secret information in a secure manner.
The presentation explains how the Ada features such as types, protected
types, tasks, pre/post conditions have helped during the development
of this project.
Speaker | Riccardo Bernardini |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 17:30 - 17:50 |
Event link | View original entry |
Whoever has written a research project proposal knows how unnerving it
can be. The actual project description (made of work packages, tasks,
deliverable items, ...) has lots of redundancies and cross-references
that make its coherence as frail as a house of cards. For example,
if the duration of a task is changed, most probably you'll need to
update the effort in person-months of the task and of the enclosing
work package; you must update the start date of depending tasks and
the delivery date of any deliverable items; most probably the
WP efforts and length need updating too; not to mention the need to
update all the summary tables (summary of efforts, deliverables,
...) and the GANTT too. Any small change is likely to start a ripple
of updates and the probability of forgetting something and getting an
incoherent project description is large. Given the harsh competition
in project funding, if your project is incoherent the probability of
getting funded is nil.
One day I got sick of this state of affairs and wrote my own project
generator: 10k lines of Ada code that reads a non-redundant project
description from a simple-format text file and produces a set of files
ready to be imported into the proposal, GANTT chart included. The user
can specify dependencies between different items (e.g., this deliverable
is produced at the end of this task, this milestone is reached when
this deliverable is available, this task must begin after this other
task...) and the program automatically computes all the dates.
Both input parser and output processors are implemented using a plugin
structure that makes it easy to write new parsers to read different
formats or new output processors to produce output in different
formats. Currently, a parser for a simple ad-hoc format and an output
processor that produces LaTeX files are provided; a new processor based
on the template expander protypo is being implemented.
Did I eat my own dog food? Well, yes, I did. I used it to write a
proposal (still under evaluation) and it served me well.
Speaker | Tomasz Maluszycki |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 18:00 - 18:20 |
Event link | View original entry |
In the Ada world we typically write mission critical software that
just has to work, but in a way one could argue that a lot more software
is mission critical than is usually admitted.
What does it take to actually perform rapid application development
in any language? Can we do it in Ada and why would we do so?
A quick look into some language features that can be [ab]used for
enabling quick development of 'just a prototype', which, as practice
shows, is often deployed into production, usually without proper
quality controls and with predictable outcome.
Speaker | Pierre-Marie de Rodat |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 18:30 - 18:50 |
Event link | View original entry |
The world of generic structured data formats is full of contenders:
the mighty XML, the swift JSON, the awesome YAML, ... Alas, there
is no silver bullet: XML is very verbose, JSON is not convenient for
humans to write, YAML is known to be hard to parse, and so on.
TOML is yet another format whose goal is to be a good configuration
language: obvious semantics, convenient to write and easy to parse
in general-purpose programming languages.
In this talk, I'll briefly describe the TOML format and show a few
use cases in the real world. I'll then present the ada-toml library
itself: its high-level architecture and examples.
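For readers who have not met the format, a small illustrative fragment shows the obvious semantics TOML aims for (all keys and values here are made up):

```toml
# An illustrative configuration fragment (keys and values are made up).
title = "ada-toml demo"

[server]
host = "127.0.0.1"
port = 8080
enabled = true

[[accounts]]          # an array of tables
name = "alice"
roles = ["admin", "dev"]
```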
Speaker | Dirk Craeynest Jean-Pierre Rosen |
---|---|
Room | AW1.125 |
Track | Ada |
Time | 18:50 - 19:00 |
Event link | View original entry |
Informal discussion on ideas and proposals for future events.
Speaker | Jan Grewe |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 10:30 - 11:00 |
Event link | View original entry |
The reproducibility crisis has shocked the scientific
community. Several papers describe this issue, and the scientific
community has taken steps to improve on it. For example, several
initiatives have been founded to foster openness and standardisation
in different scientific communities (e.g. the INCF[1] for the
neurosciences). Journals encourage sharing of the data underlying
the presented results; some even make it a requirement.
What is the role of open source solutions in this respect? Where are the problems with
open source projects in (neuro-)sciences?
In this presentation I will address these questions using the example
of the entirely open-source based workflow in our laboratory[2] and
our efforts in developing generic solutions for storing metadata[3]
as well as unifying data and metadata storage[4], which we undertake
together with the German Neuroinformatics Node (G-Node[5]).
[1] https://incf.org
[2] https://github.com/bendalab
[3] https://github.com/g-node/python-odml
[4] https://github.com/g-node/nix
[5] https://g-node.org
Speaker | Julia Sprenger |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 11:00 - 11:30 |
Event link | View original entry |
The approaches used in software development in an industry setting and in a scientific environment exhibit a number of fundamental differences. In an industry setting, modern team development tools and methods (version control, continuous integration, Scrum, ...) are used to develop software in teams with a focus on the final software product. In contrast, in a scientific environment a large fraction of scientific code is produced by individual scientists who lack thorough training in software development, each with a specific research goal in mind. Indeed, it is only in the last decades that scientific software development has started to become a fully recognized part of scientific work. Still, formal training in software development is largely missing from the scientific curricula of many universities. Additionally, due to the exploratory nature of the scientific method at the frontier of knowledge, most projects require the implementation of custom code. The combination of these circumstances promotes the development of scientific code not suited for sharing and long-term maintenance, limiting the reusability and reproducibility of scientific data and findings. The systematic development and adoption of open source packages by the scientific community can remedy this situation. Here we present exemplary open source packages from the field of neuroscience and discuss the special requirements for open source software development and services in this research area.
Acknowledgements:
This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 785907 (Human Brain Project SGA2). Supported by the NFDI Neuroscience Initiative.
Speaker | Aniket Pradhan |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 11:30 - 12:00 |
Event link | View original entry |
NeuroFedora is an initiative to provide a ready-to-use Fedora-based Free/Open source software platform for neuroscience. We believe that, like Free software, science should be free for all to use, share, modify, and study. The use of Free software also aids reproducibility, data sharing, and collaboration in the research community. By making the tools used in the scientific process easier to use, NeuroFedora aims to take a step towards enabling this ideal.
The computer has become an indispensable resource in modern neuroscience. From the gathering of data, simulation of computational models, analysis of large amounts of information, collaboration, and communication tools for community development, software is now a necessary part of the research pipeline.
The neuroscience community is gradually moving to the use of Free/Open Source software (FOSS); however, the software tools used in neuroscience research are generally complicated and sophisticated to use. Researchers who hail from fields other than computing must spend considerable resources on setting up and managing the computing environment and the software tools. This limits the portability of the software, making its installation very time-consuming and sometimes tricky.
We present NeuroFedora, a Fedora-based operating system for neuroscientists.
We have leveraged the infrastructure resources of the FOSS Fedora community to develop an operating system that includes a plethora of ready-to-use neuroscience software.
We follow the standard software development and quality assurance practices set out by the Fedora community to provide an integrated platform for researchers to use.
Furthermore, NeuroFedora is well integrated with other software such as desktop environments, text editors, and other daily use and development tools.
A NeuroFedora lab image is now available, with over 130 neuroscience packages ready to use. With up-to-date documentation at neuro.fedoraproject.org and about 120 more packages in the queue, we encourage more FOSS enthusiasts to join the team to help NeuroFedora better aid the open (neuro)science and research community.
Speaker | Markus Suhr Marcel Parciak |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 12:00 - 12:30 |
Event link | View original entry |
Health data is traditionally held and processed in large and complex mazes of hospital information systems. The market is dominated by vendors offering monolithic and proprietary software, due to the critical nature of the supported processes and, in some cases, legal requirements. The "digital transformation", "big data" and "artificial intelligence" are some of the hypes that demand improved exchange of health care data in routine health care and medical research alike. Exchanging data at these scales requires open data formats and protocols, multi-stakeholder collaboration, and agile development. As an example, HL7, the de-facto messaging standards organization in medicine, noticed a much more positive response from the medical research community to its openly available FHIR specification than to the for-members-only, XML-based HL7v3 messaging standard specification.
While some past (or rather: ongoing) projects on a national scale in the German health care system have tried centralized, top-down specification and development approaches, more recent infrastructure projects embrace the competitive collaboration of a decentralized, bottom-up strategy. As a result, the importance and recognition of free software are increasing in the Medical Informatics research community.
In a series of rapid spotlights, we present tools and frameworks that serve as cornerstones for the envisioned health data exchange infrastructure, including: Organization and collaboration tools; data extraction from clinical source systems, data transformation and de-identification; data management systems and long-term archival using persistent globally-unique object identifiers; federated queries across multiple independently managed clinical data integration centers.
We aim to encourage participants to contribute tools and frameworks to the discussion and to highlight their experiences and challenges with using open systems in Medical Informatics.
Marcel Parciak and Markus Suhr are research associates at the University Medical Center Göttingen (UMG), Department of Medical Informatics.
Marcel graduated from the Göttingen Medical Informatics Master program in 2018 and is currently a PhD student investigating the challenges of data provenance in medical research. He is a system architect for the HiGHmed project, which facilitates an innovative federated infrastructure for cross-organisational secondary use of health care data.
Markus started his professional career in 2014 as a system administrator and software developer at the UMG hospital data center. He joined the Department of Medical Informatics in 2017, becoming lead developer for a free software project and working on multiple biomedical research projects. Since 2019 he has been the technical lead for the newly created Medical Data Integration Center. Markus is a supporter of the Free Software Foundation Europe.
Speaker | Michael Hanke |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 12:30 - 13:00 |
Event link | View original entry |
Contemporary sciences are heavily data-driven, but today's data management technologies and sharing practices fall at least a decade behind their software ecosystem counterparts.
Merely providing file access is insufficient for a simple reason: data are not static. Data often do (and should!) continue to evolve; file formats can change, bugs will be fixed, new data are added, and derived data need to be integrated.
While (distributed) version control systems are a de-facto standard for open source software development, a similar level of tooling and culture is not present in the open data community.
The lecture introduces DataLad, software that aims to address this problem by providing a feature-rich API (command line and Python) for the joint management of all digital objects of science: source code, data artifacts (as well as their derivatives), and essential utilities such as container images of the computational environments employed.
A DataLad dataset represents a comprehensive and actionable unit that can be used privately, or be published on today's cyberinfrastructure (GitLab, GitHub, Figshare, S3, Google Drive, etc.) to facilitate large- and small-scale collaborations.
In addition to essential version control tasks, DataLad aids data discovery by supporting a plurality of evolving metadata description standards. Moreover, DataLad is able to capture data provenance information in a way that enables programmatic re-execution of computations, and as such provides a key feature for the implementation of reproducible science.
DataLad is extensible and can be customized to fine-tune its functionality to specific domains (e.g., a field of science or organizational requirements).
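As a rough illustration of that command line and Python API, here is a minimal sketch using the DataLad Python bindings; the paths, commands, and commit messages are hypothetical, and exact signatures may vary between DataLad versions.

```python
import datalad.api as dl

# Create a new dataset: a Git repository with an annex for large files.
ds = dl.create(path="my-study")

# After placing files under my-study/, record their state.
ds.save(message="Add raw data")

# Run a computation while capturing provenance: DataLad records the
# command together with the dataset state before and after it ran.
ds.run("python analyze.py", message="Run analysis")

# Programmatically re-execute the most recently recorded computation.
ds.rerun()
```

The same verbs exist on the command line (datalad create / save / run / rerun), which is what makes the recorded provenance actionable by collaborators.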
DataLad is built on a few key principles:
DataLad only knows about two things: Datasets and files.
A DataLad dataset is a collection of files in folders.
And a file is the smallest unit any dataset can contain.
At its core, DataLad is a completely domain-agnostic, general-purpose tool to manage data.
A dataset is a Git repository. All features of the version control system Git
also apply to everything managed by DataLad.
A DataLad dataset can take care of managing and version controlling arbitrarily large data. To do this, it has an optional annex for (large) file content. Thanks to this annex, DataLad can track files that are TBs in size (something that Git alone cannot do), restore previous versions of data, transform and work with data while capturing all provenance, and share data with whomever you want. At the same time, DataLad does all of the magic necessary to make this important feature work quietly in the background: the annex is set up automatically, and the tool git-annex manages it all under the hood.
DataLad follows the social principle of minimizing custom procedures and data structures. DataLad will not transform your files into something that only DataLad or a specialized tool can read. A PDF file (or any other type of file) stays a PDF file (or whatever other type of file it was), whether it is managed by DataLad or not. This guarantees that users will not lose data or data access if DataLad vanishes from their system, or even from the face of the Earth. Using DataLad thus does not require or generate data structures that can only be used or read with DataLad -- DataLad does not tie you down, it liberates you.
Furthermore, DataLad is developed for complete decentralization. No central server or service is required to use DataLad, so no central infrastructure needs to be maintained (or paid for) -- your own laptop is a perfect home for your DataLad project, as is your institution's web server or any other computational infrastructure you might be using.
Simultaneously, though, DataLad aims to maximize the (re-)use of existing third-party data resources and infrastructure. Users can rely on existing central infrastructure should they want to: DataLad works with any infrastructure from GitHub to Dropbox, Figshare, or institutional repositories, enabling users to harvest all of the advantages of their preferred infrastructure without tying anyone down to central services.
Speaker | Lilly Winfree |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 13:00 - 13:30 |
Event link | View original entry |
Generating insight and conclusions from research data is often not a straightforward process. Data can be hard to find, archived in difficult-to-use formats, poorly structured, and/or incomplete. These issues create “friction” and make it difficult to use, publish, and share data. The Frictionless Data initiative (https://frictionlessdata.io/) at the Open Knowledge Foundation (http://okfn.org) aims to reduce friction in working with data, with the goal of making it effortless to transport data among different tools and platforms for further analysis, and with an emphasis on reproducible research and open data. The Frictionless Data project comprises a set of specifications (https://frictionlessdata.io/specs/) for data and metadata interoperability, accompanied by a collection of open source software libraries (https://frictionlessdata.io/software/) that implement these specifications, and a range of best practices for data management. Over the past year and a half, we have been working specifically with the researcher community to prototype the use of Frictionless Data’s open source tools to improve researchers’ data workflows and champion reproducibility. This talk will discuss the technical ideas behind Frictionless Data for research and will also showcase recent collaborative use cases, such as how oceanographers implemented Frictionless Data tooling in their data ingest pipelines to integrate disparate data while maintaining quality metadata in an easy-to-use interface.
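For a taste of what the software libraries look like in practice, here is a minimal sketch using the Python datapackage and goodtables libraries; the file names are hypothetical, and the APIs have evolved across releases.

```python
from datapackage import Package
from goodtables import validate

# Wrap a CSV file in a Data Package: infer a Table Schema from the data
# and save machine-readable metadata (datapackage.json) alongside it.
package = Package()
package.infer("measurements.csv")
package.save("datapackage.json")

# Validate the tabular data (structure and schema checks) and inspect
# the resulting report.
report = validate("measurements.csv")
print(report["valid"], report["error-count"])
```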
The audience should be familiar with the themes of research, working with data in various forms from various sources, and scientific computing; the talk is intended for those interested in data management, data cleaning, metadata, and using open research data.
Lilly Winfree is the Product Owner of the Frictionless Data for Reproducible Research Project at Open Knowledge Foundation, where she solves researchers’ technical data management issues. She has her PhD in neuroscience, and has been active in the open data, open source, and open science communities for four years. Lilly has given numerous conference presentations and workshops over the past decade, and enjoys presenting on technical topics to technical and non-technical audiences.
https://github.com/frictionlessdata/
http://frictionlessdata.io/software/
Workshop presentation: http://bit.ly/FDepfl
Talk from a previous position: https://youtu.be/4Jqu8mBXcmA
Speaker | Mateusz Kuzak |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 13:30 - 13:45 |
Event link | View original entry |
ELIXIR is an intergovernmental organization that brings together life science resources across Europe. These resources include databases, software tools, training materials, cloud storage, and supercomputers.
One of the goals of ELIXIR [1] is to coordinate these resources so that they form a single infrastructure. This infrastructure makes it easier for scientists to find and share data, exchange expertise, and agree on best practices. ELIXIR's activities are divided into five areas, known as “platforms”: Data, Tools, Interoperability, Compute, and Training. The ELIXIR Tools Platform works to improve the discovery, quality, and sustainability of software resources. The Software Best Practices task of the Tools Platform aims to raise the quality and sustainability of research software by producing, adopting, promoting, and measuring information standards and best practices applied to the software development life cycle. We have published four simple recommendations (4OSS) to encourage best practices in research software [2] and the Top 10 metrics for life science software good practices [3].
The 4OSS simple recommendations are as follows:
- Develop publicly accessible, open source code from day one.
- Make software easy to discover by providing software metadata via a popular community registry.
- Adopt a license and comply with the licenses of third-party dependencies.
- Have clear and transparent contribution, governance, and communication processes.
In order to encourage researchers and developers to adopt the 4OSS recommendations and build FAIR (Findable, Accessible, Interoperable and Reusable) software, the best practices group, in partnership with the ELIXIR Training Platform, The Carpentries [4][5], and other communities, is creating a collection of training materials [6]. The next step is to adopt, promote, and recognise these information standards and best practices by developing comprehensive guidelines for software curation, and through workshops that train researchers and developers in the adoption of software best practices and the improvement of the usability of research software tools.
Additionally, the ELIXIR Software Best Practices WG is currently developing a Software Management Plan in the context of the metrics necessary for assessing the adoption of good software development practices [7], and will subsequently develop practical guidelines to support its implementation in ELIXIR projects. We will work with the newly formed ReSA (Research Software Alliance) to facilitate the adoption of this plan by the broader community.
In the past year, the Working Group has also been working on improving the tooling and practices around software citation. This work has been done in collaboration with the eLife journal, the Software Sustainability Institute, Datacite, and Software Heritage, over multiple sprint events: (i) BOSC CoFest, (ii) eLife Innovation Sprint, (iii) FORCE19 Research Software Hackathon, and (iv) BioHackathon.
[1] “ELIXIR | A distributed infrastructure for life-science information” Internet: https://www.elixir-europe.org/, [Sep. 16, 2018]
[2] Jiménez RC, Kuzak M, Alhamdoosh M et al. (2017) “Four simple recommendations to encourage best practices in research software” F1000Research [Online]. 6:876. https://doi.org/10.12688/f1000research.11407.1
[3] Top 10 metrics for life science software good practices https://doi.org/10.12688/f1000research.9206.1
[4] “carpentries.org” Internet: carpentries.org, Sep. 15, 2018 [Sep. 16, 2018]
[5] “ELIXIR teams up with The Carpentries to boost its training programme | ELIXIR”, Internet: https://www.elixir-europe.org/news/elixir-carpentries-agreement, Aug. 17, 2018 [Sep. 16, 2018]
[6] SoftDev4Research/4OSS-lesson https://doi.org/10.5281/zenodo.2565040
[7] Top 10 metrics for life science software good practices https://doi.org/10.12688/f1000research.9206.1
Speaker | Antoine Fauchié |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 13:45 - 14:00 |
Event link | View original entry |
As an editor for WYSIWYM text, Stylo is designed to change the entire digital editorial chain of scholarly journals in the field of the human sciences.
Stylo (https://stylo.ecrituresnumeriques.ca) is designed to simplify the writing and editing of scientific articles in the humanities and social sciences. It is intended for authors and publishers engaged in high-quality scientific publishing. Although the structuring of documents is fundamental for digital distribution, this aspect is currently delayed until the end of the editorial process. This task should, however, be undertaken early on in the process; it must be considered by the authors themselves. The philosophy behind Stylo consists of returning the task of managing the publication markup to researchers. This repositioning of tasks within the editorial process relies on the author’s semantic, rather than graphic, skills.
This lightning talk will be an opportunity to present the tool and several publishing projects realized with Stylo.
Speaker | Olivier Aubert |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 14:00 - 14:15 |
Event link | View original entry |
Advene is a video annotation platform (free software) that aims at accompanying scholars in their audiovisual analysis workflows. It promotes flexible and evolving annotation structures and interfaces in order to deal with the inherently dynamic nature of analysis. In this presentation, I will present the platform itself and illustrate its usage through existing Digital Humanities projects that use it, from structuring videos for interview analyses to implementing a workflow for the semantic annotation of movies.
The Advene project (Annotate Digital Video, Exchange on the NEt) aims
at providing a model and a format to share annotations about digital
video documents (movies, courses, conferences...), as well as tools to
edit and visualize the hypervideos generated from both the annotations
and the audiovisual documents.
With the Advene software, teachers, moviegoers, etc. can exchange
multimedia comments and analyses about video documents. The project
also aims at studying the way that communities of users (teachers,
moviegoers, students...) will use these self-publishing tools to share
their audiovisual "readings", and to envision new editing and viewing
interfaces for interactive comment and analysis of audiovisual
content.
Advene is a GPL desktop application, based on Python3/Gtk/Gstreamer.
Speaker | Nicolas Roelandt |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 14:15 - 14:30 |
Event link | View original entry |
Representing geographical time-space is a fundamental issue in geography, addressing core questions of the discipline, i.e., where places are and what distances separate them. Yet, considering the properties of geographical time-space shaped by transport means, no satisfying cartographic representation – including classical maps and plastic-space approaches – has been proposed so far.
The "shriveling_world" project aims at producing images of the global geographical time-space, using the third dimension, as in time-space relief maps. The word "shriveling" was introduced by Waldo Tobler in his comments on the Mathis-L'Hostis time-space relief image, to describe the complex contraction process suggested by the model.
The FOSDEM presentation aims at opening the code to the scientific community, now that the application is close to a first functional version.
Speaker | Guillaume Plique |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 14:30 - 15:00 |
Event link | View original entry |
Web mining, as represented mostly by scraping and crawling practices, is not a straightforward task and requires a variety of skills related to web technologies.
However, web mining can be incredibly useful to social sciences since it enables researchers to tap into a formidable source of information about society.
But researchers may not be able to invest copious amounts of time into learning the ins and outs of web technologies. They usually rely on engineers to collect data from the web.
The object of this talk is to explain how Sciences Po's médialab designed and developed tools to empower researchers and enable them to perform web mining tasks to answer their research questions. We will tackle several such issues during this talk, with examples taken mainly from the minet CLI tool and the artoo.js bookmarklet; a generic sketch of the kind of task involved follows below.
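To give a sense of what even a simple web mining task involves, here is a generic scraping sketch in Python using requests and BeautifulSoup; it is not the médialab tools' API, and the URL and CSS selectors are hypothetical.

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page, being explicit about timeouts and HTTP failures -- two of
# the many practical concerns that make scraping harder than it looks.
response = requests.get("https://example.org/articles", timeout=10)
response.raise_for_status()

# Parse the HTML and extract headline text and links.
soup = BeautifulSoup(response.text, "html.parser")
for link in soup.select("article h2 a"):
    print(link.get_text(strip=True), link.get("href"))
```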
Guillaume Plique is a research engineer working for Sciences Po's médialab. He assists social science researchers daily with their methods and maintains a variety of FOSS tools geared toward the social sciences community as well as developers.
Speaker | Antonin Delpeuch |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 15:00 - 15:30 |
Event link | View original entry |
OpenRefine is a data transformation tool popular in many communities: data journalism, the semantic web, GLAMs, scientific research… In this talk I give an overview of our recent efforts to revamp this project as it approaches its 10th anniversary. We are working on exciting improvements which should help alleviate some of the most salient issues faced by our users. My intention is not to lecture the audience about how to deal with technical debt or how to grow a contributor community - I instead seek feedback and want to spark discussions about our choices. Let us know what you think and help us take good care of this fantastic tool!
Speaker | Santiago Bragagnolo |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 15:30 - 16:00 |
Event link | View original entry |
We will showcase Grafoscopio, a flexible, extensible, self-contained "pocket infrastructure" that simplifies infrastructure to amplify participation, so that reproducible research and publishing, agile data storytelling, and custom data visualization can be used in fields like investigative journalism, data feminism, and civic hacktivism. We will show prototypes developed with Grafoscopio in the previously mentioned domains, the motivations behind Grafoscopio, and the local community practices around it that deconstruct binary relations of power (software developer/user, data producer/consumer, software binary/source code, male/female) and approach reproducible research practices and tools from a perspective located and embodied in a particular place of the Global South in Latin America, in contrast/dialogue with Global North perspectives.
Reproducible research (and publishing) has been confined mostly to academic places. But it has a lot of potential in several other places like investigative journalism, data feminism and civic hacktivism, as we have showcased by building several prototypes, including: making the so called "Panama Papers" data leak story reproducible; creating domain specific visualizations for medicine information released by 16 governments; porting the Spanish Data Journalism Handbook and the Data Feminism book to our "pocket infrastructures" and the creation of agile and resilient tools and practices to write and publish together (see proposal links for a detailed view of such prototypes).
To bridge reproducible research and publishing, agile data storytelling, and custom data visualization with the previously mentioned domains, we have co-designed, developed, used, and extended a set of inclusive approaches and tools for/from the Global South that we have called "pocket infrastructures". Our "pocket infrastructures" simplify infrastructure to amplify participation; they are mostly self-contained, flexible, and extensible, work well with good, low, or no connectivity, and run on a variety of hardware, from a USB drive to low-end computers, servers, the "cloud", and anything in between and beyond.
This is in sharp contrast with exclusionary approaches like "Big Data" or others that start with big and/or overcomplex infrastructures and are getting traction in the Global North (or are being imported from there to the Global South as the "only way" forward regarding reproducibility).
Grafoscopio is one of such pocket infrastructures for reproducible research and publishing, agile visualization and data storytelling and this lecture will showcase Grafoscopio, the motivations behind it, and some prototypes developed with it, and the community practices that allow the development of such projects and prototypes deconstructing also binary relations of power (software developer/user, data producer / consumer, software binary / source code, male/female).
Update: this talk has been proposed by Offray Luna who unfortunately couldn't make it to FOSDEM. Santiago Bragagnolo will present this talk on his behalf.
Speaker | Anne L'Hôte, Bruno Thomas |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 16:00 - 16:30 |
Event link | View original entry |
We are not journalists. But we are developers working for journalists. When we receive leaks, we are flooded by the huge number of documents and the huge number of questions that journalists have while trying to dig into the leak. Among others:
* Where to begin?
* How many documents mention "tax avoidance"?
* How many languages are in this leak?
* How many documents are CSVs?
Journalists have more or less the same questions as researchers! So to help them answer all these questions, we developed Datashare. In a nutshell, Datashare is a tool to answer all your questions about a corpus of documents: just like Google, but without Google and without sending information to Google. That means it extracts content and metadata from all types of documents and indexes them. Then, it detects people, locations, organizations, and email addresses. The web interface exposes all of that to give you a complete overview of your corpus and lets you search through it. Plus, Datashare lets you star and tag documents.
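Under the hood, questions like these map onto full-text queries against the index. As a minimal sketch, assuming a local Elasticsearch index (the index and field names are hypothetical, and this uses the Elasticsearch Python client directly, not Datashare's own API):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# "How many documents mention 'tax avoidance'?" as a phrase count query.
result = es.count(
    index="datashare",  # hypothetical index name
    body={"query": {"match_phrase": {"content": "tax avoidance"}}},
)
print(result["count"])
```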
We didn't want to reinvent the wheel, so we used assets that have been proven to work well. How did we end up with Datashare from such a heterogeneous environment? Initially we had:
- a command line tool to extract text from huge document corpora
- a proof of concept of NLP pipelines in Java
- a shared index based on Blacklight / Ruby on Rails and Solr
- open source tools and frameworks
Issues we had to fix:
- UX
- scalability of Solr with millions of documents
- integration of all the tools into one
- maintainability and robustness of a growing code base
Speaker | Sébastien Rochette |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 16:30 - 17:00 |
Event link | View original entry |
This presentation is feedback from experience helping a researcher transform a series of scattered analyses into a documented, reproducible, and shareable workflow.
The time researchers can allocate to programming the analyses required to answer their scientific questions is usually low compared to other tasks. As a result, multiple small experiments are developed and outputs are gathered as best as possible to be presented in a scientific paper. However, science is not only about sharing results but also about sharing methods. How can we make our results reproducible when we have developed multiple, usually undocumented, analyses? What do we do if the program is only applicable to our own computer's directory structure? It is always possible to take time to rewrite, rearrange, and document analyses when we want (or have) to share them. Here, I will take the example of a "collaboration fest" where we dissected the R scripts of a researcher in ecology. We started a reproducible, documented, and open-source R package along with its website, automatically built using continuous integration: https://cesco-lab.github.io/Vigie-Chiro_scripts/.
However, can we find, earlier in the process, a better way to use our small programming time slots by adopting a method that will save time in the future? To this end, I will present a documentation-first method that takes little time while writing analyses but saves a lot when the time comes to share your work.
Lecture
30 min
No prior knowledge expected. The example will be about building documentation for R software, but any developer using any programming language may be interested in the method adopted.
Sébastien Rochette has a PhD in marine ecology. After a few years as a researcher in ecology, he joined ThinkR (https://rtask.thinkr.fr), a company giving courses and consultancy around the R software. Along with commercial activities, he is highly involved in the development of open-source R packages. He also shares his experience with the R community through free tutorials, blog posts, online help, and conferences. https://statnmap.com/
I wrote a blog post in French about what I am planning to present: https://thinkr.fr/transformer-plusieurs-scripts-eparpilles-en-beau-package-r/
This topic is also related to another blog post: https://rtask.thinkr.fr/when-development-starts-with-documentation/
Talks about R are in my GitHub repository: https://github.com/statnmap/prez/. The "README" lists talks that have a live recorded video.
As a researcher, I also gave multiple talks about marine science, modelling and other topics related to my research.
Please note that this talk was originally scheduled for 17:00.
The talk originally in this slot, "Developing from the field." by Audrey Baneyx and Robin de Mourat, will now take place at 17:00.
Speaker | Robin De Mourat |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 17:00 - 17:30 |
Event link | View original entry |
Software design and development within interdisciplinary research teams is a specific activity which closely associates makers and practitioners in the equipment of experimental research methods and practices. This closeness allows practitioners to tackle research endeavours’ specific requirements, such as understanding the methodological assumptions encoded within the tools. It also induces a specific relationship between “makers” and their publics of “users”: a non-commercial, situated, and case-based crafting process, implying shifting roles and complex decision making. How does this peculiar context affect the design and valorization practices around open research tools and their evolution? What are the benefits and difficulties of such settings, in terms of work organization, pedagogical approaches, and scientific methodology? What can be shared with other contexts such as activism or journalism? Grounded in several case studies of research tools designed and developed at the médialab of Sciences Po, this talk will offer an account of how an interdisciplinary research environment affects and dialogues with established methods of design (“participative design”, “user experience research”), development (“agile methods”), and tool valorization and socialization.
Audrey Baneyx has a PhD in artificial intelligence from Paris 6 University. She is a research engineer at the médialab (Sciences Po, Paris), where she works at the intersection of digital methods, knowledge modelling, and the design of pedagogical storytelling. She teaches digital culture and methods and, as a mediator, develops communities of practitioners around the médialab tools. She is co-leading a research group focusing on gender issues online.
Robin de Mourat is a research designer at the médialab laboratory (Sciences Po, Paris). He works at the intersection between academic equipment and inquiry practices, combining a background in product design, design history & theory, and human-computer interaction with diverse material and discursive experiments in the Humanities and Social Sciences. He has participated in the making of texts, things, and conversations about the epistemology of design activities, interdisciplinary methodologies, and social & cultural studies of scholarly practices. He has been involved for several years in the development of advanced tools for academic writing and publishing in humanities and social sciences contexts.
Please note that this talk was originally scheduled for 16:30.
The talk originally in this slot, "Transforming scattered analyses into a documented, reproducible and shareable workflow" by Sébastien Rochette, will now take place at 16:30.
Note also that Audrey Baneyx couldn't make it; she will not co-present, although she co-wrote the talk.
Speaker | Emmy Tsang |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 17:30 - 18:00 |
Event link | View original entry |
The advancement of web technologies has created an opportunity for developing tools for real-time collaborations, text-mining, interactive data visualisations, sharing reproducible compute environments, etc. These tools can change the ways researchers share, discover, consume and evaluate research and help promote open science and encourage responsible research behaviours.
Through its Innovation Initiative, eLife invests heavily in software development, new product design, collaboration, and outreach so that the potential for improvements in the digital communication of new research can start to be realised. In particular, we exclusively support the development of open-source tools with extensible capabilities that can be used, adopted, and modified by any interested party, and we actively engage the community of open innovators.
In this talk, we will introduce the following projects:
* Reproducible Document Stack (RDS), an open tool stack capturing code, data, and the compute environment in a live paper to improve research reproducibility
* Fostering collaboration and innovation through hacking: eLife Innovation Sprint
We believe that openness is crucial to the future of research, and by supporting the community and promoting open-source research software, we can help build a culture towards integral, collaborative, open and reusable research. We hope to share some of our visions and learnings, and invite feedback and contributions from the wider open-source community on the next steps forward.
Emmy Tsang is the Innovation Community Manager at eLife, a non-profit organisation with the mission to accelerate research communication and discovery. She is responsible for the day-to-day running of the eLife Innovation Initiative, which supports the development of open-source tools, technologies and processes aimed at improving the discovery, sharing, consumption and evaluation of scientific research. Prior to joining eLife, Emmy completed a PhD in neuroscience at the European Molecular Biology Laboratory in Rome, Italy. She is passionate about building communities, fostering collaborations and developing technological solutions to make research more open, reproducible and user-friendly.
Twitter: @eLifeInnovation / @emmy_ft
Speaker | Karthik Ram |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 18:00 - 18:30 |
Event link | View original entry |
Researchers rarely cite the software they use as part of their research. As a result, research software and the time spent developing it have become invisible scholarly contributions. This lack of visibility reduces the incentives necessary to produce and share the high-quality software that is essential for the progress of science. The Journal of Open Source Software (JOSS) is an open source, open access journal primarily designed to make it easier for individuals authoring research software to gain career credit for their work by publishing short software papers. Software papers are a recognized mechanism for authors of research software to create a citable ‘entity’ which can easily be cited in journals and as such directly impact a researcher’s career via established metrics such as the h-index. JOSS is unique in that it only accepts very short (~1-2 page) papers, with short summaries and links to the software repository. In that sense, the software papers are not the focus of the review. Instead, we ask reviewers to conduct a thorough review of the associated software (which must be open source), ensuring that it is well documented, straightforward to install, and functions as expected. In this talk I will describe the origin and impact JOSS has had on research open source, and also touch upon issues such as sustainability and credit.
Speaker | Bram Luyten |
---|---|
Room | AW1.126 |
Track | Open Research Tools and Technologies |
Time | 18:30 - 19:00 |
Event link | View original entry |
The DSpace community is anticipating its largest release ever in 2020 with DSpace 7 (https://wiki.duraspace.org/display/DSPACE/DSpace+Release+7.0+Status). The platform is used in thousands of research institutions around the globe and powers systems including dspace.mit.edu, dash.harvard.edu, and openknowledge.worldbank.org. If you download an academic paper through Google Scholar today, chances are that it is served to you thanks to a DSpace institutional repository.
The talk aims to briefly introduce the scope and usage of the DSpace software. Attendees will learn how the governance of the DSpace community is structured and what led to the decision to drop the two legacy UIs, JSPUI and XMLUI, in favour of an endeavour to introduce Angular as the new UI layer.
The most relevant piece of the presentation for the FOSDEM audience will be an outline of the tooling and best practices applied in the community, together with an evaluation of their pros and cons.
We are very keen to learn from participants in the audience what they would advise, on both a technical and an organisational level, going forward.
Introducing DSpace 7
DSpace 7 Configurable Entities
The DSpace 7 Angular UI from a user perspective
Speaker | Julian Murgia, George Marques |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 10:30 - 10:40 |
Event link | View original entry |
Welcome to FOSDEM game development devroom! We'll present what this is all about and invite you to participate.
Speaker | Emmanuel Leblond |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 10:45 - 11:10 |
Event link | View original entry |
Godot is an incredible open source game engine. Among its key features, it comes packed with a scripting language called GDScript, loosely based on Python.
But could it be even better? Could we use the real Python to code our games in Godot?
And, maybe even more importantly, is it really a good idea?
Godot is an advanced, feature-packed, multi-platform 2D and 3D open source game engine.
The project has joined the Software Freedom Conservancy, and its growing community hopes to make it a real alternative to Unity and GameMaker.
This talk presents the Godot-Python project, which aims to bring Python as a fully integrated language into Godot.
We will have a look at Godot's internal architecture, as it is itself a real interpreter with its own garbage collector, dynamic typing, introspection, and even a built-in custom scripting language.
All of this has to work alongside our Python interpreter and communicate back and forth with it.
We will then dig into Godot-Python's design choices, both past and current; this project went through a looot of trial and error ;-)
Finally, we will discuss the pros and cons of using Python as a scripting language for Godot versus the traditional GDScript.
The audience should have some basic knowledge of C-level computing (static vs. dynamic languages, compilation & linking).
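For a flavour of the end result, here is a minimal sketch of what a Godot-Python script can look like; the class, property, and values are made up, and exact imports and decorators vary between Godot-Python releases.

```python
from godot import exposed, export, Vector2, Node2D

@exposed
class Player(Node2D):
    # Exported properties show up in the Godot editor,
    # much like GDScript's `export` keyword.
    speed = export(float, default=200.0)

    def _process(self, delta):
        # Trivial per-frame update: drift to the right.
        self.position += Vector2(self.speed * delta, 0)
```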
Speaker | Vladislav Belov |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 11:15 - 12:00 |
Event link | View original entry |
A story about the graphics pipeline of 0 A.D. (an open-source game by Wildfire Games) and its issues.
Talk structure:
A little history of 0AD (https://play0ad.com/about/the-story-of-0-a-d/)
How our graphics pipeline works
Used technologies (SDL, OpenGL 1/2, ARB/GLSL shaders)
Known problems (old OpenGL, legacy support of OpenGL drivers on macOS 10.14)
Future plans
Speaker | Christoph Haag |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 12:35 - 13:00 |
Event link | View original entry |
Last year Khronos released OpenXR, an open API for using XR hardware. In this talk we will look at the practical side of creating VR applications and games with OpenXR.
Since the Oculus VR development kit started a resurgence of consumer VR, VR game development has largely been tied to proprietary VR APIs and runtimes. Khronos reacted by creating an open API for using XR hardware and released it as OpenXR 1.0 in July 2019. Collabora implemented the OpenXR API in a runtime nicknamed Monado, built on open source VR hardware drivers. With these building blocks, VR applications can now use standardized APIs and run on a FOSS stack.
In this talk, Christoph will give an overview of the feature set of the OpenXR API and the practical side of creating VR applications and games with OpenXR. We will look at low-level code using the OpenXR API directly, as well as an OpenXR plugin for the Godot engine.
Speaker | George Marques |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 13:05 - 14:00 |
Event link | View original entry |
This hour is dedicated to people who want to come up and shortly present their project, without having to schedule a full talk.
Bring your open source game-related project (be it an engine, game, demo, tool, or something else) and showcase it to our fellow gamedev friends. Talks should be at most five minutes long, with no time allotted for questions. If you want to bring a laptop to show something, keep it ready and set it up/tear it down as fast as possible (this counts as part of your five minutes). Please be understanding so everyone can have a chance. Contact the room managers on the event day if you want to present something.
Speaker | Mahmoud Abdelghany |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 14:05 - 15:00 |
Event link | View original entry |
According to the interwebs, Java is one of the most popular programming languages in the multiverse.
And yet, when it comes to games, its popularity seems to dwindle.
Why though...
Is it really not suited for game development?
Is the language as bad as the critics claim??
Is GC...EVIL!?
I have this side-project of porting Doom 3 from C/C++ to Java. And even though Doom 3 is a ~15-year-old game, it is still a massive AAA code base. So I believe my experiences there can adequately answer a lot of the questions that come to mind when you think of Java & games.
During this talk, I will try to convince you that Java is a very viable game prototyping/development tool.
Either that, or I will add more fuel to the flame...
Speaker | Eshed Shaham |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 15:05 - 15:35 |
Event link | View original entry |
For some arcane reason contributors spend their precious time on open source game projects. Why do they do this? And more importantly: What can you do to make them do it on your project?
FOSS game projects live on the motivation of their contributors. This motivation usually consists of two parts: the desire to contribute to FOSS game development in general, and the desire to contribute to a specific project. While the former is very interesting to discuss, our ability to affect it is negligible. Thankfully, controlling the latter is definitely possible, and it is crucial to the survival of our projects.
In this talk we will analyse different characteristics of FOSS game projects with regard to contributor recruitment and retention. We will try to present practical steps to lower the risk of your current or future projects dying of inactivity and obscurity.
Speaker | Juan Linietsky |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 15:45 - 16:45 |
Event link | View original entry |
Godot 4.0 is in the process of being ported from OpenGL to a Vulkan rendering API. This new technology provides new challenges and benefits for improving quality and performance, which will be explained and compared during this presentation.
Speaker | Dalai Felinto |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 16:55 - 17:15 |
Event link | View original entry |
An in-depth look at the development process of virtual reality in Blender. This project started in 2014 and illustrates well the development channels, how Blender does onboarding, how the development team collaborates with other interested parties, and the role the community plays in the project.
Speaker | Julian Murgia |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 17:15 - 17:45 |
Event link | View original entry |
Escoria is a Libre framework for the creation of point-and-click adventure games with the MIT-licensed Godot Engine. Since its release, Godot Engine has changed a lot at a fast pace, while Escoria's code was still based on old Godot 1.x features.
In this presentation, I'll present the current state of Escoria and discuss the process of rewriting it as a Godot Engine plugin. I'll cover architecture and design choices that allow adventure game creators to use Godot Engine as a full-featured editor for their adventure games.
Escoria is a Libre framework for the creation of point-and-click adventure games with the MIT-licensed Godot Engine. It was initially developed for the adventure game The Interactive Adventures of Dog Mendonça and Pizzaboy® and later streamlined for broader use and open sourced, as promised to the backers of the Dog Mendonça Kickstarter campaign. Since its release, Godot Engine has changed a lot at a fast pace, while Escoria's code was still based on old Godot 1.x features. It is more a collection of dedicated scripts than a proper Godot Engine layer designed for point-and-click adventure games. Evolving Escoria is quite a big task, so it might be better to start over with a new project architecture while keeping its most interesting features in the process.
In this presentation, I'll present the current state of Escoria and discuss the process of rewriting it as a Godot Engine plugin. I'll cover architecture and design choices that allow adventure game creators to use Godot Engine as a full-featured editor for their adventure games.
Speaker | Eshed Shaham |
---|---|
Room | K.3.201 |
Track | Game Development |
Time | 17:50 - 18:30 |
Event link | View original entry |
The Spring RTS Engine has been in active development since 2005. In the past few years, two of its games, Evolution RTS and Zero-K, have been released on Steam. The journey to these releases was long and difficult. Let's regale ourselves with tales of the adventures of the devs, learn many things that you shouldn't do, and also learn what to do in case you've already done what you shouldn't have.
Speaker | Greg Chadwick |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 10:30 - 10:50 |
Event link | View original entry |
Ibex implements a RISC-V 32-bit I/E base with the M and C extensions, M-mode, U-mode, and PMP. It uses an in-order two-stage pipeline and is best suited to area- and power-sensitive rather than high-performance applications. However, there is scope for meaningful performance gains without major impact on power or area. This talk describes work done at lowRISC to analyse and improve the performance of Ibex. The RTL of an Ibex system is simulated using Verilator to run CoreMark and Embench, and the traces are analysed to identify the major sources of stalls within them; this informs where improvements should be targeted. The open source implementation tools Yosys and OpenSTA are used to assess the potential timing and area impacts of these improvements. In this talk you'll learn about the pipeline of Ibex, methods for analysing the performance of a CPU microarchitecture, and how to use Yosys and OpenSTA to analyse what limits clock frequency in a design.
Speaker | Dan Petrisko |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 10:50 - 11:10 |
Event link | View original entry |
BlackParrot is a Linux-capable, cache-coherent RISC-V multicore, designed for efficiency and ease of use. In this talk, we will provide an architectural overview of BlackParrot, focusing on the design principles and development process as well as the software and hardware ecosystems surrounding the core. We will also discuss the project roadmap and our plans to engage the open-source community. Last, we will demonstrate a multithreaded RISC-V program running on top of Linux on a multicore BlackParrot FPGA implementation.
BlackParrot aims to be the default open-source, Linux-capable, cache-coherent RV64GC multicore used by the world. Originally developed by the University of Washington and Boston University, BlackParrot strives to be community-driven and infrastructure-agnostic: a core which is Pareto-optimal in terms of power, performance, area, and complexity. To ensure BlackParrot is easy to use, integrate, modify, and, most importantly, trust, development is guided by three core principles: Be Tiny, Be Modular, and Be Friendly. Development efforts have prioritized ease of use and silicon validation as first-order design metrics, so that users can quickly get started and trust that their results will be representative of state-of-the-art ASIC designs. BlackParrot is ideal as the basis for a research platform, a lightweight accelerator host, or a standalone Linux core.
Speaker | Michael Taylor, Max H. Ruttenberg |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 11:10 - 11:30 |
Event link | View original entry |
HammerBlade is an open source RISC-V manycore that has been under development since 2015 and has already been silicon validated with a 511-core chip in 16nm TSMC. It features extensions to the RISC-V ISA that target GPU-competitive performance for parallel programs (i.e. GPGPU) including graphs and ML workloads. In this talk we will overview the components of the HW and software ecosystem in the latest version, and show you how to get up and running as an open source user or contributor in either SW or HW on Amazon F1 cloud FPGAs.
HammerBlade is an open source RISC-V manycore that has been under development since 2015 and has already been silicon-validated with a 511-core chip in 16nm TSMC. It features extensions to the RISC-V ISA that target GPU-competitive performance for parallel programs (i.e., GPGPU), including graph and ML workloads. In this talk we will overview the components of the hardware and software ecosystem in the latest version and show you how to get up and running as an open source user or contributor, in either SW or HW, on Amazon F1 cloud FPGAs. We will cover the hardware architecture, the CUDA-like programming environment, the runtime software, our Amazon F1 cloud emulation and cosimulation environments, and our suite of performance analysis tools.
Speaker | Luca Carloni, Davide Giri |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 11:30 - 11:50 |
Event link | View original entry |
ESP is an open-source research platform for RISC-V systems-on-chip that integrate many hardware accelerators.
ESP provides a vertically integrated design flow from software development and hardware integration to full-system prototyping on FPGA. For application developers, ESP offers domain-specific automated solutions to synthesize new accelerators for their software and map it onto the heterogeneous SoC architecture. For hardware engineers, ESP offers automated solutions to integrate their accelerator designs into the complete SoC.
The participants in this FOSDEM20 event will learn how to use ESP from the viewpoints of both application developers and hardware engineers by following a series of short hands-on tutorials embedded in the lecture.
Conceived as a heterogeneous integration platform and tested through years of teaching at Columbia University, ESP is intrinsically suited to foster collaborative engineering of RISC-V based SoCs across the open-source community.
ESP is an open-source research platform to design and program heterogeneous systems-on-chip (SoCs). A heterogeneous SoC combines multiple general-purpose processor cores and many specialized hardware accelerators.
ESP provides automated solutions to (a) synthesize new accelerators, (b) integrate them with RISC-V processors and other third party accelerators into a complete SoC, (c) rapidly prototype the SoC on an FPGA board, and (d) run software applications that take advantage of these accelerators.
ESP contributes to the open-source movement by supporting the realization of more scalable architectures for SoCs that integrate more heterogeneous components, thanks to a more flexible design methodology that accommodates different specification languages and design flows.
With ESP's automation capabilities, it is easy to realize FPGA-based prototypes of complete SoCs. For example, an SoC may feature the Ariane RISC-V processor core booting Linux, a multi-plane network-on-chip supporting a partitioned memory hierarchy with multiple DRAM controllers, and tens of loosely-coupled accelerators that execute coarse-grained tasks, exchanging large amounts of data with DRAM through direct memory access (DMA). These accelerators can be third-party open-source hardware components that “speak” the AXI protocol (e.g. the NVIDIA NVDLA accelerator for deep learning) or new accelerators that can be synthesized with different design flows from specifications written in different languages, including: C with Xilinx Vivado HLS, SystemC with Cadence Stratus HLS, Keras TensorFlow and PyTorch with hls4ml, Chisel, SystemVerilog, Verilog, and VHDL.
Information technology has entered the age of heterogeneous computing. Across a variety of application domains, computing systems rely on highly heterogeneous architectures that combine multiple general-purpose processors with specialized hardware accelerators. The complexity of these systems, however, threatens to widen the gap between the capabilities provided by semiconductor technologies and the productivity of computer engineers. ESP tackles this challenge by raising the level of abstraction in the design process, simplifying the domain-specific programming of heterogeneous architectures, and leveraging the potential of the emerging open-source hardware movement.
Building on years of research on communication-based system-level design at Columbia University, ESP combines an architecture and a methodology. The flexible tile-based architecture simplifies the integration of heterogeneous components by balancing regularity and specialization. The companion methodology raises the level of abstraction to system-level design, thus promoting closer collaboration among software programmers and hardware engineers. Through the automatic generation of device drivers from pre-designed templates, ESP simplifies the invocation of accelerators from user-level applications executing on top of Linux. Through the automatic generation of a multi-plane network-on-chip from a parameterized model, the ESP architecture can scale to accommodate many processors, tens of accelerators, and a distributed memory hierarchy. A set of ESP Platform Services provides pre-validated solutions for accelerator configuration, memory management, sharing of system resources, and dynamic frequency scaling, among others.
To date, the majority of the open-source hardware (OSH) efforts related to RISC-V have focused on the development of processor cores that implement the RISC-V ISA and small-scale SoCs that connect these cores with tightly-coupled functional units and coprocessors, typically through bus-based interconnects. Meanwhile, there have been fewer efforts to develop solutions for large-scale SoCs that combine RISC-V cores with many loosely-coupled components, such as coarse-grain accelerators, interconnected with a network-on-chip (NoC). Compared to other RISC-V related projects, ESP focuses on scalability (with the NoC-based architecture), heterogeneity (with emphasis on loosely-coupled accelerators), and flexibility (with support for different design flows). Just as the ESP architecture simplifies the integration of heterogeneous components developed by different teams, the ESP methodology embraces the use of heterogeneous design flows for component development.
ESP has been developed by the System-Level Design (SLD) group in the Department of Computer Science at Columbia University over the past seven years. The SLD group has published over a dozen scientific papers in peer-reviewed conferences and journals describing the most innovative aspects of ESP. ESP was released as an open-source project via GitHub in the summer of 2019.
Speaker | Schuyler Eldridge |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 11:50 - 12:10 |
Event link | View original entry |
The burgeoning RISC-V hardware ecosystem includes a number of microprocessor implementations [1, 3] and SoC generation frameworks [1, 2, 7]. However, while accelerator “sockets” have been defined and used (e.g., Rocket Chip’s custom coprocessor interface, RoCC), accelerators require additional collateral to be generated, such as structured metadata descriptions, hardware wrappers, and device drivers. Requiring manual effort to generate this collateral is both time-consuming and error-prone, and is at odds with an agile approach to hardware design. However, the existence and use of hardware construction languages and hardware compilers provide a means to automate this process. Through the use of the Chisel hardware description language [4] and the FIRRTL hardware compiler [5], we provide a definition of an abstract accelerator template which users then implement to integrate an accelerator with the Embedded Scalable Platform (ESP) system-on-chip platform [2, 8]. Through the use of this template, we are able to automatically generate the XML metadata necessary to integrate accelerators with the ESP platform, and work on generating all collateral is in progress. Our accelerator template is open source software provided under an Apache 2.0 license [6].
[1] CHIPS Alliance Rocket Chip. GitHub Repository. Online: https://github.com/chipsalliance/rocket-chip.
[2] Columbia University Embedded scalable platform. git repository. Online: https://github.com/sld-columbia/esp.
[3] ETH Zurich Ariane. GitHub Repository. Online: https://github.com/pulp-platform/ariane.
[4] Freechips Project Chisel3. GitHub Repository. Online: https://github.com/freechipsproject/chisel3.
[5] Freechips Project FIRRTL. GitHub Repository. Online: https://github.com/freechipsproject/firrtl.
[6] IBM ESP Chisel accelerators. GitHub Repository. Online: https://github.com/ibm/esp-chisel-accelerators.
[7] Princeton University OpenPiton. GitHub Repository. Online: https://github.com/PrincetonUniversity/openpiton.
[8] ESP: The open-source heterogeneous system-on-chip platform. Online: https://www.esp.cs.columbia.edu/.
Speaker | Karthik Swaminathan |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 12:10 - 12:30 |
Event link | View original entry |
RISC-V processors have gained acceptance across a wide range of computing domains, from IoT to embedded/mobile-class and even server-class processing systems. In systems ranging from connected cars and autonomous vehicles to those on board satellites and spacecraft, these processors are expected to function in safety-critical settings, where Reliability, Availability, and Serviceability (RAS) considerations are of paramount importance. Along with potential system vulnerabilities caused primarily by random errors, these processors may also be sensitive to targeted errors, possibly from malicious entities, which raises serious concerns about the security and safety of the processing system. Consequently, such systems necessitate the incorporation of RAS considerations right from an early stage of processor design.
While the hardware and software ecosystem around RISC-V has been steadily maturing, there have, however, been limited developments in early stage reliability-aware design and verification. The Early-stage Reliability And Security Estimation for RISC-V (ERASER) tool attempts to address this shortcoming. It consists of an open source framework aimed at providing directions to incorporate such reliability and security features at an early, pre-silicon stage of design. These features may include what kind of protection to be applied and which components within the processor should they be applied to. The proposed infrastructure comprises of an open source toolchain for early stage modeling of latch vulnerability in a RISC-V core (SERMiner [1]), a tool for automated generation of stress marks that maximize the likelihood of a transient-failure induced error (Microprobe (RISC-V) [2]), and verification by means of statistical and/or targeted fault injection (Chiffre [3]). While the infrastructure is targeted towards any core that uses the RISC-V ISA, the repository provides an end-to-end flow for the Rocket core [4].
ERASER thus evaluates “RAS-readiness”: the effectiveness of protection techniques in a processor design at minimizing processor vulnerability, in terms of Failures-In-Time (FIT) rate, for a specified power/performance overhead. The FIT rate is defined as the number of failures in one billion hours of operation and is a standard vulnerability metric used in industry.
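As a hedged worked example of the arithmetic behind this metric (illustrative numbers, not ERASER results):

```latex
% FIT and its relation to MTBF (mean time between failures)
\mathrm{FIT} \;=\; \frac{\#\text{failures}}{10^{9}\ \text{device-hours}},
\qquad
\mathrm{MTBF} \;=\; \frac{10^{9}}{\mathrm{FIT}}\ \text{hours}.
% Example: a core rated at 1000 FIT has an MTBF of 10^6 hours (roughly 114
% years), yet a fleet of 10,000 such devices sees, on average, one failure
% every 100 hours: 10^4 x 10^3 / 10^9 = 10^-2 failures per hour.
```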
ERASER is an open source tool available for download at https://github.com/IBM/eraser. The tool currently supports analysis of all latches in the design across a single Rocket core and the generation of stressmarks that can be used to evaluate the vulnerability of these latches. In addition to radiation-induced soft errors, we plan to extend ERASER to also model errors due to voltage noise, thermal and aging-induced failures, both in memory and logic, and generate representative stressmarks.
ERASER is an initial effort to devise a comprehensive methodology for RAS analysis, particularly for open-source hardware, with the hope that it spurs further research and development into reliability-aware design both in industry and academia.
References:
[1] K. Swaminathan, R. Bertran, H. Jacobson, P. Kudva, P. Bose, ‘Generation of Stressmarks for Early-stage Soft-error Modeling’, International Conference on Dependable Systems and Networks (DSN), 2019.
[2] S. Eldridge, R. Bertran, A. Buyuktosunoglu, P. Bose, ‘MicroProbe: An Open Source Microbenchmark Generator, Ported to the RISC-V ISA’, 7th RISC-V Workshop, 2017.
[3] S. Eldridge, A. Buyuktosunoglu, P. Bose, ‘Chiffre: A Configurable Hardware Fault Injection Framework for RISC-V Systems’, 2nd Workshop on Computer Architecture Research with RISC-V (CARRV), 2018.
[4] Krste Asanović, Rimas Avižienis, Jonathan Bachrach, Scott Beamer, David Biancolin, Christopher Celio, Henry Cook, Palmer Dabbelt, John Hauser, Adam Izraelevitz, Sagar Karandikar, Benjamin Keller, Donggyu Kim, John Koenig, Yunsup Lee, Eric Love, Martin Maas, Albert Magyar, Howard Mao, Miquel Moreto, Albert Ou, David Patterson, Brian Richards, Colin Schmidt, Stephen Twigg, Huy Vo, Andrew Waterman, ‘The Rocket Chip Generator’, Technical Report UCB/EECS-2016-17, EECS Department, University of California, Berkeley, April 2016.
The attached figure shows a representative flow for the RAS estimation methodology. An initial characterization of all instructions in the RISC-V ISA is carried out via RTL simulation using an existing core model (e.g. the Rocket core). The simulation is configured to generate VCD (Value-Change Dump) files for every single-instruction testcase. The SERMiner tool parses these VCD files to determine latch activities across the core, aggregated at a macro (or RTL module) level. Based on these per-instruction latch activities, SERMiner outputs an instruction sequence, which forms the basis of the SER stressmark to be generated by Microprobe (RISC-V).
Microprobe (RISC-V) is a microbenchmark generation tool capable of generating microbenchmarks geared towards specific architecture- and micro-architecture-level characterization. One of its key applications is the generation of stressmarks, or viruses, that target various worst-case corners of processor operation. These stressmarks may be targeted at maximizing power, voltage noise, temperature, or soft-error vulnerability, as in the case of this tool. The generated stressmark is then used to produce a list of latches that show a high residency and hence a high SER vulnerability.
These latches are the focus of fault injection-based validation experiments using the Chiffre tool. Chiffre provides a framework for automatically instrumenting a hardware design with run-time configurable fault injectors. The vulnerable latches obtained from running the generated stressmarks through the Rocket core model, and then through SERMiner, are earmarked for targeted fault injection experiments using Chiffre. The objective of these experiments is to further prune the list of vulnerable latches by eliminating those that are derated, that is, those that do not affect the overall output even when a fault is injected into them. Focusing any and all protection strategies on this final list of latches would maximize RAS coverage across the entire core.
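To make the latch-activity step concrete, here is a toy, hedged sketch of counting value changes per signal in a VCD file. It is our own illustration, not SERMiner code, and it assumes single-character VCD identifier codes and scalar signals for brevity:

```c
/* vcd_toggles.c -- toy VCD toggle counter (illustrative only; a real
 * parser such as SERMiner's handles multi-character id codes, vectors
 * and scope hierarchies). Build: cc -o vcd_toggles vcd_toggles.c */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    FILE *f = fopen(argc > 1 ? argv[1] : "dump.vcd", "r");
    if (!f) { perror("fopen"); return 1; }

    long toggles[128] = {0};   /* one counter per printable id code */
    char line[256];
    int in_body = 0;

    while (fgets(line, sizeof line, f)) {
        if (!in_body) {        /* skip the header until the definitions end */
            if (strstr(line, "$enddefinitions")) in_body = 1;
            continue;
        }
        /* scalar value changes look like "0!" or "1!"; '#' lines are time */
        if ((line[0] == '0' || line[0] == '1' ||
             line[0] == 'x' || line[0] == 'z') && line[1] >= '!')
            toggles[(unsigned char)line[1] & 127]++;
    }
    fclose(f);

    for (int c = 33; c < 127; c++)
        if (toggles[c])
            printf("signal id '%c': %ld value changes\n", c, toggles[c]);
    return 0;
}
```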
Ongoing and future work:
ERASER currently only supports analysis of all latches in the design across a single Rocket core, and the generated stressmarks can be used to evaluate the vulnerability of these latches. Most on-chip memory structures, such as register files and caches, are equipped with parity/ECC protection and are as such protected against most radiation-induced soft errors. However, they are still vulnerable to supply voltage noise, thermal and aging-induced failures, and other hard or permanent errors. We plan to extend ERASER to model such errors, both in memory and logic, and generate stressmarks representative of worst-case thermal emergencies and voltage noise, in addition to soft errors.
Speaker |
David Wentzlaff Jonathan Balkind |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 12:30 - 12:50 |
Event link | View original entry |
RISC-V application, OS, and firmware development has been slowed by the lack of "real hardware" available for developers to work with. With the rise of FPGAs in the cloud and the recent release of the OpenPiton+Ariane manycore platform on Amazon's F1 cloud FPGA platform, we propose using 1-12 core OpenPiton+Ariane processors emulated on F1 to develop RISC-V software and firmware. In this talk, we will give an accelerated tutorial on how to get started with OpenPiton+Ariane, the spec-compliant RISC-V platform it offers, and how the firmware and OS can be modified and run on top. We will show a number of applications built and running for our present Debian distribution and the software development environment that this offers. We will then highlight how hardware and software can be co-designed on OpenPiton+Ariane with the ability to recompile the hardware underlying the cloud FPGA image and deploy it for use by others. This platform is serving as a basis for software and hardware development for the DECADES project, a project investigating heterogeneous manycore and hardware accelerator-based designs with support for orchestrated data movement.
RISC-V Software and Firmware Development in the Cloud Using OpenPiton+Ariane on Amazon F1
http://openpiton.org
https://openpiton-blog.princeton.edu/2019/10/bringing-openpiton-to-amazon-ec2-f1-fpgas/
OpenPiton+Ariane contributors include:
Jonathan Balkind, Grigory Chirkov, Yaosheng Fu, Adi Fuchs, Fei Gao, Alexey Lavrov, Ang Li, Xiaohua Liang, Katie Lim, Matthew Matl, Michael McKeown, Tri Nguyen, Samuel Payne, Michael Schaffner, Mohammad Shahrad, Jinzheng Tu, Florian Zaruba, Yanqi Zhou, Georgios Tziantzioulis, Luca Benini, David Wentzlaff
DECADES is a large collaboration from three academic groups: Margaret Martonosi (PI Princeton), David Wentzlaff (PI Princeton), Luca Carloni (PI Columbia) with students/researchers: Jonathan Balkind, Ting-Jung Chang, Fei Gao, Davide Giri, Paul Jackson, Paolo Mantovani, Luwa Matthews, Aninda Manocha, Tyler Sorensen, Jinzheng Tu, Esin Türeci, Georgios Tziantzioulis, and Marcelo Orenes Vera. In addition to the submission author, portions of the talk may be offered by others in the collaboration.
Speaker |
Ofer Shinaar |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 12:50 - 13:10 |
Event link | View original entry |
We would like to present an overlay technique for RISC-V, developed by Western Digital (WD) and open sourced.
This firmware feature acts as a software "paging" manager; it is threaded into the real-time code and integrated with the toolchain.
Cacheable Overlay Manager RISC-V (ComRV) is a technique suited to memory-limited embedded devices (such as IoT devices) and does not need any hardware support.
Speaker |
Atish Patra |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 13:10 - 13:30 |
Event link | View original entry |
The RISC-V boot flow has come a long way in recent times by leveraging various open source boot loader/firmware projects. This has also helped in achieving a well-supported and standard boot flow for RISC-V. As a result, developers can use the same boot loaders to boot Linux on RISC-V as they do on other architectures. Currently, U-Boot is used as the last-stage boot loader and OpenSBI as the machine-mode runtime service provider, but there's more work to be done. A few such future work items include U-Boot SPL support, UEFI boot in RISC-V Linux, and booting protocol improvements. This talk will focus on some of these ongoing works which are necessary to declare that RISC-V is truly ready for world domination.
Speaker |
Ryan O'Leary |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 13:30 - 13:50 |
Event link | View original entry |
Oreboot = Coreboot - C. Oreboot is a fully open-source power-on-reset and romstage firmware written in Rust. Oreboot can boot a HiFive RISC-V processor to Linux with a Go user-mode.
Oreboot rethinks the firmware driver models. Each driver is distilled to four basic functions: init, pread, pwrite and shutdown. This interface allows us to make convenient higher-level drivers such as a "union driver" which duplicates a single write operation to multiple drivers. This makes consoles which have multiple underlying UART drivers elegant.
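Oreboot itself is written in Rust; as a language-neutral illustration of the four-function model and the union-driver idea described above, here is a small C sketch (our own, not Oreboot code):

```c
#include <stddef.h>
#include <sys/types.h>

/* The four-function driver model described above, sketched in C for
 * illustration (Oreboot's real interface is expressed in Rust). */
struct driver {
    int     (*init)(void *ctx);
    ssize_t (*pread)(void *ctx, void *buf, size_t len, off_t off);
    ssize_t (*pwrite)(void *ctx, const void *buf, size_t len, off_t off);
    void    (*shutdown)(void *ctx);
    void    *ctx;
};

/* A "union" driver: a single pwrite is fanned out to several underlying
 * drivers, which is how a console with multiple UARTs stays elegant. */
struct union_ctx {
    struct driver **children;
    size_t          n;
};

static ssize_t union_pwrite(void *ctx, const void *buf, size_t len, off_t off)
{
    struct union_ctx *u = ctx;
    for (size_t i = 0; i < u->n; i++)
        u->children[i]->pwrite(u->children[i]->ctx, buf, len, off);
    return (ssize_t)len; /* report the whole write as consumed */
}
```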
By using the Rust programming language, Oreboot has a leg-up in terms of security and reliability compared to contemporary firmware written in C or assembly. Rust's borrow checker ensures pointers are not used after being freed and proves that coroutines are thread-safe at compile time.
In this talk, we will also present a short overview of the basics of Rust, how our driver model incorporates coroutines and the bootflow of Oreboot.
Speaker |
Anup Patel |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 13:50 - 14:10 |
Event link | View original entry |
The RISC-V H-extension (aka hypervisor extension) is suitable for both Type-1 and Type-2 hypervisors. We have ported two hypervisors to RISC-V: Xvisor (Type-1) and KVM (Type-2). We will show the current state and future work for both hypervisors.
Speaker |
Anton Kuzmin |
---|---|
Room | K.3.401 |
Track | RISC-V |
Time | 14:10 - 14:30 |
Event link | View original entry |
There is a need for lightweight tools for experiments with RISC-V custom extensions. Adding support for custom instructions in binutils/gcc/llvm is out of reach for many hardware architects. LuaJIT includes a small and powerful assembler, DynASM, accessible from within the Lua interpreter. DynASM currently supports the following 32- and 64-bit instruction sets: x86, x64, ARM, PowerPC, and MIPS, and it is reasonable to extend this support to RISC-V.
Lua itself is a very compact and simple yet powerful dynamic language. Its JIT compiler (LuaJIT) makes it one of the fastest, if not the fastest, interpreted languages, and it is used in many projects, so having it running on RISC-V would have uses beyond the mere internal need for an experimental platform.
Yet another Forth and yet another assembler
Speaker |
Pau Garcia Quiles (pgquiles) François Revol (mmu_man) |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 14:55 - 15:00 |
Event link | View original entry |
A quick introduction to the 3rd edition of the retro-computing devroom.
Speaker |
Christophe de Dinechin |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 15:00 - 15:30 |
Event link | View original entry |
Alpha Waves is the first 3D platform game ever, according to the Guinness Book of Records.
Initially developed on the Atari ST, it was then ported to the Amiga and the IBM PC.
The technology later gave rise to Alone in the Dark, a major game that launched Infogrames into the big league.
This is the history of that game.
The game was initially developed on the Atari ST, representing 17,000 lines of 68K assembly code.
It was later ported to the Amiga and the IBM PC, and was the first and only assembly-language game that Infogrames ever ported to another CPU.
The technology developed for that game later inspired Frederick Raynal to develop Alone in the Dark, a major game that launched Infogrames into the big league.
This talk is the history of that game by its developer. It will cover:
General principles of paleo-3D
How to draw polygons in software. Fast.
Computing 3D transforms using mostly additions (see the sketch after this list)
Music, graphics and other stuff
Funny stories and trivia around the game, including the stints of Infogrames in Artificial Intelligence and the arch-genesis of Alone in the Dark
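As a taste of what "mostly additions" can mean in practice, here is a hedged reconstruction of one classic trick of that era (our own sketch, not the game's actual code): the HAKMEM-style incremental rotation, which spins a point using only shifts and adds.

```c
/* minsky.c -- rotate a point incrementally with adds and shifts only
 * (HAKMEM-style circle algorithm; illustrative, not Alpha Waves source). */
#include <stdio.h>

int main(void)
{
    /* 16.16 fixed point; k sets the step angle (~2^-k radians per step) */
    const int k = 5;
    long x = 100L << 16, y = 0;

    /* 2*pi*2^k is about 201 steps for a full turn at k = 5 */
    for (int i = 0; i <= 200; i++) {
        if (i % 50 == 0)
            printf("step %3d: x=%6ld y=%6ld\n", i, x >> 16, y >> 16);
        x -= y >> k;   /* no multiplications anywhere ...               */
        y += x >> k;   /* ... using the updated x keeps the orbit stable */
    }
    return 0;
}
```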
Speaker |
Rob Hagemans |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 15:35 - 16:05 |
Event link | View original entry |
At the height of the cold war, BASIC programs exchanged by radio and cassette tape provided young people of socialist Eastern and capitalist Western Europe a rare insight into each other's worlds. BASICODE was a transmission format and an API developed by the Dutch public broadcasting service to overcome the challenge of exchanging open source hobby programs in the highly fragmented 8-bit computing landscape of the early 1980s, which was dominated by mutually incompatible versions of BASIC. Somewhat improbably, the format was picked up across the iron curtain in the German Democratic Republic, where it experienced its age of greatest popularity. The need for programs to work on platforms with widely different capabilities and incompatible syntaxes forced it to be simply structured, highly regulated and relatively well documented. This makes it ideally suited for implementation in a web browser.
Speaker |
Stian Sebastian Skjelstad |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 16:10 - 16:40 |
Event link | View original entry |
This presentation includes multiple topics mixed together:
* Porting a DOS multimedia program to Linux/BSD
* How music was composed/stored in old types of games from the 8-bit era, up to the Amiga (and partly the DOS world)
* How audio/music hardware works: C64 SID, ZX Spectrum and the like, Amiga, AdLib FM synth OPL2/3 and General MIDI
As a child, I was exposed to Cubic Player. This program was a text-based music player. All the colorful text mesmerized me, and it gave direct visual feedback on how the music was generated.
During my teenage years I learned programming and was introduced to Linux. The source code for everything was available: the kernel, multimedia libraries, tools, all kinds of APIs. If there is anything you wonder about how it works, you can inspect it. If you are unable to resolve a specific detail in your own project, try to find another project that has done something similar! But for playback of these Amiga-style modules, there were no programs that had the same charm as Cubic Player. Programs like MikMod, XMMS and Audacious only displayed a playlist, maybe instrument names and maybe a spectrum analyzer.
When I discovered that Cubic Player had been released as Open Cubic Player, I grabbed the source code and started to study it. All of it was based on direct hardware access, as it was written for DOS: direct access to video memory/hardware, raw keyboard scan codes, hardware interrupts for timers allowing background rendering of audio if needed, etc. A natural candidate for a novice programmer to port?
Slowly I went through the original source code, one file at a time.
* Lots of logic could be kept
* Some could be thrown away, like direct audio hardware drivers
* Some needed heavy rewrites, like video output
* Assembler needed to be rewritten into C inlines
* The timer interrupt porting caused a zero-day kernel bug to be discovered: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2004-0554
When text appeared on the screen and audio (highly distorted in the beginning) played, it was a magical feeling. Progress was faster when the goal felt well within reach.
When you make your code public, people start to use it... and you start to get feedback like:
* The program is unable to compile on other platforms
* The program crashes on other platforms
* Specific crashes that only happen for a few people: compiler bugs (these were more common back in the day)
How did old systems play back music? Using specific hardware circuits.
How is music composed/stored/played back?
We will take a deep dive into these two questions on these platforms:
ZX-Spectrum 128 .AY files - Using code chopped out from "aylet"
Atari ST .YM files - Using ST Sound Library
C64 .SID files - libsidplay
OPL2/OPL3 style - libadplug
Amiga style modules
General MIDI - TiMidity++
Open Source makes it possible to combine the work and effort of multiple volunteers. Not everything is always an easy walk in the park, but it sure makes things easier than writing everything from scratch when there is known working code that can either be used as a template or wrapped around.
Speaker |
Frédéric Bisson |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 16:45 - 17:15 |
Event link | View original entry |
Using web technologies, one can easily emulate the Minitel ecosystem: not only the iconic French terminal but also its servers. This ease has been made possible by ES6 and WebSockets.
Speaker |
Ben Dooks |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 17:20 - 17:50 |
Event link | View original entry |
Going from seeing an image of a cartridge that I would have loved to have had for my VIC-20 to working out how it was built and then making my own.
Starting with a brief overview of the VIC-20 and its capabilities, we then look at the "vixen" 16KiB RAM expansion, from the initial view of the internals made available on the internet through to a full implementation.
The initial overview will go through some of the identification of the components, the technologies involved (such as SRAM and DRAM) and the VIC-20 expansion bus. It will show what sort of technologies could go into implementing this sort of expansion and then how these can be discounted either by the age of the technology or the component complexity.
Techniques for further reversing a circuit from the physical device, and how the actual device circuit was then discovered, followed by the implementation and testing, will be shown.
Speaker |
Jeroen Baten |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 17:55 - 18:25 |
Event link | View original entry |
Yes, this talk is about running your own mainframe on your own hardware. Mainframes are old, yes, but they are still very much alive. New hardware is still being developed and there are a lot of fresh jobs in this area too. A lot of mainframes run COBOL workloads. COBOL is far from a dead language. It processes an estimated 85% of all business transactions, and 5 billion lines of new COBOL code are written every year. In this session the speaker will help you take your first steps towards running your own mainframe. If you like, after this session you can continue to build your knowledge of mainframe systems using the links provided during the talk. Come on in and learn the basics of a completely different computer system! And it will take you less than an hour to do that!
Speaker |
Rui Martins |
---|---|
Room | K.3.401 |
Track | Retrocomputing |
Time | 18:30 - 19:00 |
Event link | View original entry |
Reverse engineering an arcade game and re-implementing it on an 8-bit system is an engineering exercise where compromises have to be made in order to accomplish the goal, since the capabilities of the target machine are severely underpowered compared with the source machine.
The process of accomplishing this, and all it involves, will be presented.
Remaking an arcade game (through reverse engineering) on an 8-bit system is an engineering exercise where compromises have to be made in order to accomplish the goal, since the capabilities of the target machine are severely underpowered compared with the source machine.
Starting with graphics capabilities, passing through CPU limitations (clock speed and architecture) and ending with multimedia capabilities, every single one needs to be addressed with a suitable compromise.
Some "hacks" and optimizations need to be applied to use the 8-bit hardware capabilities in a convenient way and overcome the huge handicap between architectures.
For practical and example purposes, the reverse engineering of the arcade game "Magical Drop II" will be presented, and how it became the ZX Spectrum game "Extruder".
The software was developed in Zilog Z80 assembly, and several tips and tricks that facilitate and help the conversion process will be shown.
Speaker |
Andrzej Warzynski Graham Hunter |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 10:30 - 11:10 |
Event link | View original entry |
The aim of this presentation is to showcase the technologies available in LLVM that aid debugging. We will focus on LLDB, the debugger, and sanitisers (e.g. AddressSanitizer and LeakSanitizer). No prior knowledge is required, but if you're familiar with GDB or Valgrind then this talk will introduce you to alternatives available within LLVM.
LLDB is a very powerful and extensible command-line debugger available on Linux, macOS, FreeBSD, Windows and Android. It is used internally in Xcode and Android Studio and available on various hardware platforms (e.g. x86, ARM, AArch64, PowerPC, MIPS). LLDB is built as a set of reusable components which heavily leverage existing libraries in LLVM. It has a very powerful expression evaluation engine, an intuitive command-line interface (with tab completion), easy-to-navigate help pages and a "graphical" user interface. In this presentation we will explore basic usage as well as some lesser-known features. LLDB has come a long way and we want to present how intuitive, helpful and powerful it can be when used pragmatically.
While LLDB will let you easily examine and debug a program at the point of failure, it can be harder to diagnose the underlying problem if it occurred before the program crashed or printed an incorrect result. LLVM provides some extra features in the form of 'sanitizers' to help find the root cause of such problems, like accessing a wrong-but-still-valid memory address or unintentionally wrapping a signed integer value. This presentation will explore how to use the sanitizers to debug programs, with some examples of bugs they can catch.
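As a flavour of the sanitizers, here is a minimal C bug of the kind AddressSanitizer reports at its root cause (a generic illustration, not an example from the talk):

```c
/* uaf.c -- heap-use-after-free caught by AddressSanitizer.
 * Build and run:  clang -g -fsanitize=address uaf.c && ./a.out  */
#include <stdlib.h>

int main(void)
{
    int *p = malloc(4 * sizeof *p);
    p[0] = 42;
    free(p);
    return p[0];   /* ASan reports a heap-use-after-free here, printing
                      both the allocation and the free stack traces */
}
```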
Speaker |
Jeremy Bennett |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 11:15 - 11:55 |
Event link | View original entry |
Dhrystone and Coremark have been the de facto standard microcontroller benchmark suites for the last thirty years, but these benchmarks no longer reflect the needs of modern embedded systems. Embench™ was explicitly designed to meet the requirements of modern connected embedded systems. The benchmarks are free, relevant, portable, and well implemented.
In this talk we will present the results of benchmarking Clang/LLVM for various IoT class architectures using Embench. We shall look at
- how code size and speed vary across architectures when compiling with Clang/LLVM.
- how Clang/LLVM performance has evolved over time
- how Clang/LLVM compares against other compilers, notably GCC
- the effectiveness of various compilation techniques (LTO, Combined Elimination, Profile Guided Optimization)
The aim is not to show which architecture or compiler is best, but to gain insight into the detail of the compilation process, so that all compilers and architectures can learn from each other.
Speaker |
Serge Guelton (serge-sans-paille) |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 12:10 - 12:50 |
Event link | View original entry |
GCC is the default toolchain for building C/C++ packages in Fedora. Meanwhile, OpenMandriva already builds most of its packages with the LLVM toolchain, and the Debian archive is regularly rebuilt with a recent version of Clang... So could we try that for Fedora?
This talk describes an ongoing effort to achieve that goal while keeping the same compiler feature set as GCC.
Subtopics of the talk include:
Speaker |
Jeremy Bennett |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 12:55 - 13:35 |
Event link | View original entry |
At the GNU Tools Cauldron we held a panel discussion on how GCC and LLVM can work together. The video of that discussion can be seen at https://www.youtube.com/watch?v=PnbJOSZXynA. We proposed a similar discussion to be held at the LLVM Developers Meeting, but the reviewers suggested that such a discussion would be better held as part of the FOSDEM LLVM Devroom, since that was more likely to attract GNU developers as well.
We wish to explore how Clang/LLVM and the GCC can work together effectively.
The participants will explore opportunities for co-operation between the projects. Areas to be covered include:
The compilers are part of wider projects providing all the components of the tool chain, and we anticipate the discussion will roam to low level utilities, source code debuggers and libraries as well. We hope the output of the discussion will inform future work between the two communities.
The panelists are
Speaker |
Alex Denisov Fabian Yamaguchi |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 13:50 - 14:30 |
Event link | View original entry |
The security of computer systems fundamentally depends on the quality of its underlying software. Despite a long series of research in academia and industry, security vulnerabilities regularly manifest in program code. Consequently, they remain one of the primary causes of security breaches today.
The discovery of software vulnerabilities is a classic yet challenging problem of the security domain. In the last decade, there appeared several production-graded solutions with a favorable outcome.
Code Property Graph[1] (or CPG) is one such solution. CPG is a representation of a program that combines properties of abstract syntax trees, control flow graphs, and program dependence graphs in a joint data structure.
There exist two counterparts[2][3] that allow traversals over code property graphs in order to find vulnerabilities and to extract any other interesting properties.
In this talk, we want to cover the following topics:
[1] https://ieeexplore.ieee.org/document/6956589
[2] https://github.com/ShiftLeftSecurity/codepropertygraph
[3] https://ocular.shiftleft.io
Speaker |
Batuhan Taşkaya |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 14:35 - 15:15 |
Event link | View original entry |
Python's history with LLVM spans at least a decade. This session will cover how Python implementations have tried to use LLVM, such as CPython's Unladen Swallow branch (PEP 3146) or attempts from PyPy, and why they failed. After that it will show the current Python projects that use LLVM for speed, such as Numba, and Python libraries for working with LLVM IR. In the end, it will mention new ideas that would unite the powers of both LLVM and Python.
This talk is about LLVM's influence on Python's ecosystem. It targets an audience of language developers who want to integrate LLVM and developers who are curious about why dynamic languages cannot simply unite their power with LLVM to speed up. It will start with Python's implementations and the approaches they take. The Unladen Swallow branch, which tried to integrate LLVM into CPython (sponsored by Google), is a good example of these approaches. There were attempts from the PyPy side, but they also failed because of the dynamic nature of Python. After these failures, we'll come back to the present and show projects that benefit from LLVM to speed up Python, especially on the scientific side, such as Numba (which offers JITting via LLVM). Besides these projects, there are also a few projects that offer an interface to LLVM, such as llvmpy and llvmlite. I've been using llvmlite for about a year in my side projects and toy languages, and these projects have the potential to inspire developers to work with LLVM and build languages atop it. In the end, it will show the future of these two big projects (LLVM & Python) and how we can participate.
Speaker |
Kiran Chandramohan |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 15:20 - 16:00 |
Event link | View original entry |
This talk introduces Flang (F18), the new Fortran frontend of LLVM being written in modern C++. The talk will provide a brief introduction to Flang, motivation for writing this new compiler, design principles, architecture, status, and an invitation to contribute.
The F18 project started at PGI/NVIDIA as a new Fortran frontend designed to work with LLVM. The aim of the project is to create a modern Fortran frontend (Fortran 2018 standard) in modern C++. In April of this year, it was accepted as an LLVM project (https://lists.llvm.org/pipermail/llvm-dev/2019-April/131703.html).
The parser and semantic analysis are implemented in a way that provides a strong correspondence to the standards document. It is hoped that this correspondence will help in the development of new features and will become the testbed for deciding future Fortran standard features. The frontend also embraces the newly open-sourced MLIR framework for language-specific optimisations. This will be through a new dialect called FIR (https://www.youtube.com/watch?v=ff3ngdvUang). MLIR will also be used for creating an OpenMP dialect. The project also hopes to share code with the Clang frontend. While the parser/AST will not be shared, code will be shared in the driver, OpenMP codegen, etc.
In this presentation, we hope to cover the technical details mentioned in the paragraph above, the status of implementation and also give an invitation to contribute.
Speaker |
Kristof Beyls Peter Smith Nick Desaulniers Jeroen Dobbelaere |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 16:05 - 16:45 |
Event link | View original entry |
Ever wondered how the LLVM project and community works?
Want to get some advice on how to most effectively contribute?
Now is your chance at FOSDEM to ask experienced developers directly. This panel will host a number of experienced LLVM developers, answering any relevant questions from the audience.
In case questions from the audience do not fill the entire slot, a set of prepared questions focused on how to get started working with and contributing to LLVM will be raised for the panelists to answer.
The panel will consist of the following experienced contributors to LLVM: Kristof Beyls, Peter Smith, Jeroen Dobbelaere, Nick Desaulniers.
Speaker |
Joshua Mack Nirmal Kumbhare |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 17:00 - 17:40 |
Event link | View original entry |
Historically, programming heterogeneous systems has been quite a challenge. While programming support for basic general-purpose accelerators such as GPUs has become quite mature in many ways, general heterogeneous SoCs in particular can feature a much broader range of accelerators in their efforts to minimize power consumption while maximizing performance. Many SoCs, though, are designed with accelerators tailored for the domain -- such as signal processing -- in which they’ll be used: Domain-Specific SoCs. As SoC platforms become ever-more heterogeneous, we think that application developers shouldn’t need to waste time reading datasheets or APIs for SoC-specific kernel extensions just to take full advantage of their hardware. With this in mind, in this talk we will discuss strategies we are using to automate mapping of LLVM-compatible languages to heterogeneous platforms with no intervention (not even #pragmas) from the programmer.
To this end, we present a prototype software stack that addresses these needs. First, we developed an LLVM-based hybrid compile/run-time toolchain to extract the semantic operations being performed in a given application. With these semantic operations extracted, we can link in additional libraries that enable dispatch of certain kernels (such as a Fast Fourier Transform) to accelerators on the SoC without user intervention. To evaluate the functionality of this toolchain, we developed a runtime system built on top of QEMU+Linux that includes scheduling and task dispatch capabilities targeting hypothetical SoC configurations. This enables behavioral modeling of these accelerators before silicon (or even FPGA) implementations are available. The focus here will be on the LLVM-mapping aspects, but a brief overview of our SoC simulation environment will be presented as well.
Speaker |
Vikram Adve |
---|---|
Room | K.4.201 |
Track | LLVM |
Time | 17:45 - 18:25 |
Event link | View original entry |
TITLE: HPVM: Extending LLVM For Compiling to Heterogeneous Parallel Systems
SPEAKER: Vikram Adve, University of Illinois at Urbana-Champaign
Abstract:
We will present a detailed description of HPVM, an extension to LLVM for
compiling to heterogeneous parallel systems. HPVM aims to make it much
easier to develop compilers for diverse parallel hardware, and to implement
parallel languages (including domain-specific languages) for such hardware.
We will briefly describe at a high-level the key parallel abstraction of
hierarchical dataflow graphs used in HPVM, and then focus on how HPVM is
integrated on top of LLVM. A second part of the talk will briefly describe
how we are extending HPVM to enable greater energy efficiency and
performance by taking advantage of approximation opportunities in
application domains such as machine learning and image processing. To
conclude, I will briefly discuss how HPVM might be added as a dialect in
MLIR so that other MLIR dialects and MLIR-based compilers can use HPVM for
code generation to diverse heterogeneous hardware targets, including GPUs,
FPGAs, and custom accelerators.
Background
LLVM has been extraordinarily successful as a compiler infrastructure for
enabling a wide range of compilers and compiler-based tools for scalar and
vector processors, and for supporting GPU compilers for OpenCL and CUDA.
LLVM has seen only limited use, however, for other classes of target
architectures, such as reconfigurable hardware (FPGAs) and domain-specific
accelerators such as for machine learning, image processing, signal
processing, graph processing, and other emerging domains. More generally,
heterogeneous system-on-chip (SoC) architectures are becoming increasingly
important, especially in "edge computing," but LLVM has largely been
limited to the host CPU and GPU on such SoCs, even though the number of
other programmable components on these systems has been steadily increasing.
Overview
In this talk, I will describe an extension of LLVM for developing a compiler
infrastructure -- Heterogeneous Parallel Virtual Machine, or HPVM -- for
heterogeneous parallel systems [1]. I will briefly describe at a high-level
the key parallel abstraction of hierarchical dataflow graphs used in HPVM to
describe heterogeneous parallelism, where ordinary LLVM code is used to
represent the computational tasks. The main focus of this part of the
talk is how HPVM is integrated on top of LLVM. First, HPVM has been
implemented as a set of intrinsic functions that extend the LLVM
infrastructure. Second, the HPVM code generation framework reuses existing
LLVM (and other) back-ends, in order to leverage existing (often well-tuned)
code generators for individual programmable hardware elements, such as NVPTX
for NVIDIA GPUs, Intel's SPIR-V code generator for Intel SSE and AVX vector
hardware, and Altera's AOCL compiler for targeting Altera's FPGAs.
A second part of the talk will briefly describe how we are extending
HPVM to enable greater energy efficiency and performance by taking
advantage of approximation opportunities in application domains such
as machine learning and image processing. In particular, we are
currently developing ApproxHPVM, an extension of HPVM that supports a
range of algorithmic and hardware-level approximation mechanisms [2].
Moreover, ApproxHPVM only requires application programmers to specify
high-level, "end-to-end" design goals such as the maximum allowable
accuracy loss in a neural network or loss of image quality (e.g.,
PSNR) and the system automatically selects, optimizes and maps
approximation choices for individual coarse-grain tensor operations in
the application. The goal is to make sophisticated and well-tested
approximation techniques widely accessible to application developers.
To conclude, I will briefly discuss how HPVM and ApproxHPVM might be added
as a dialect in MLIR so that other MLIR dialects and MLIR-based compilers
can use HPVM for diverse heterogeneous hardware targets, including GPUs,
FPGAs, and custom accelerators.
Target Audience
The intended target audience for this talk falls into broadly two classes.
The first includes compiler practitioners and researchers interested in
compiling to heterogeneous systems, such as SoCs, FPGAs, and other
"edge-compute" hardware. The second includes language implementers
interested in implementing or porting domain-specific languages such as
TensorFlow, Halide, SPIRAL, and others to heterogeneous parallel systems.
Takeaways
We envision several takeaways for the audience: (1) Understand how to
develop an extension of LLVM that makes it easier to target emerging
hardware platforms not sufficiently well-supported by the existing LLVM IR
and code generation framework. (2) Expose attendees to the opportunities and
challenges in supporting and reasoning about approximate computations in a
compiler framework. (3) Discuss the opportunities and limitations of using
HPVM for supporting heterogeneous parallel systems in the context of MLIR.
Web Site and Software Availability
More information about HPVM is available at http://hpvm.cs.illinois.edu/.
The HPVM infrastructure is implemented as an extension to LLVM. To date,
the software is being developed using an internal Git repository at Illinois
and has been shared with collaborators at IBM and at Harvard University.
We will make it available publicly in open-source form on Github before the
FOSDEM conference.
REFERENCES
[1] Maria Kotsifakou, Prakalp Srivastava, Matthew D. Sinclair,
Rakesh Komuravelli, Vikram S. Adve and Sarita V. Adve, “HPVM:
Heterogeneous Parallel Virtual Machine.” Proceedings of Principles and
Practice of Parallel Programming (PPoPP), Feb 2018, Vösendorf / Wien,
Austria.
[2] Hashim Sharif, Prakalp Srivastava, Mohammed Huzaifa, Maria
Kotsifakou, Yasmin Sarita, Nathan Zhou, Keyur Joshi, Vikram S. Adve,
Sasa Misailovic and Sarita V. Adve, “ApproxHPVM: A Portable Compiler
IR for Accuracy-aware Optimizations,” OOPSLA 2019, October 2019,
Athens, Greece.
Speaker |
Simon Ser |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 11:00 - 11:25 |
Event link | View original entry |
This talk will explain some basics about KMS, introduce libliftoff, describe the library's current status and outline the next steps.
Many DRM drivers have been exposing overlay planes for quite some time. Overlay planes can improve battery consumption by directly scanning out client buffers, skipping composition. While Wayland compositors and the X server usually take advantage of the cursor plane (and are sometimes able to use the primary plane to directly scan out a client's buffer), overlay planes are under-used. The exception is Weston, which tries to use overlay planes; other compositors ignore them.
The main challenge is to figure out how to assign buffers coming from clients to hardware planes. The only API exposed by KMS is atomic test commits, so user-space needs to try different combinations. It would be nice to have a common library shared between compositors to de-duplicate the work.
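A hedged sketch of what this trial-and-error looks like with the libdrm atomic API; the object and property IDs below are placeholders that a real compositor first discovers via drmModeObjectGetProperties():

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Ask the kernel whether putting framebuffer fb_id on plane plane_id would
 * be accepted, without touching the screen. prop_fb_id is the id of the
 * plane's FB_ID property, looked up beforehand. */
static int try_plane_assignment(int fd, uint32_t plane_id,
                                uint32_t prop_fb_id, uint32_t fb_id)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);

    /* TEST_ONLY: the driver checks the configuration but applies nothing */
    int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);

    drmModeAtomicFree(req);
    return ret;   /* 0 means this combination would work */
}
```

User space then repeats such test commits over candidate buffer-to-plane combinations until one passes; de-duplicating exactly that search loop is what a shared library like libliftoff is for.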
During the XDC 2019 conference we discussed libliftoff, an attempt at designing such a library. Feedback was positive from both compositor writers and driver developers. We discussed the API, the potential pitfalls and future goals. The scope of the library has been expanded: libliftoff could also provide some feedback to clients so that they allocate buffers suitable for hardware planes. Additionally, because the KMS API makes it tricky to find the best way to make use of hardware planes, libliftoff could grow some vendor-specific plugins.
Speaker |
Erik Faye-Lund |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 11:30 - 11:55 |
Event link | View original entry |
A short update on the state of Zink, an OpenGL implementation on top of Vulkan, now that it's upstream in Mesa.
Speaker |
Erik Faye-Lund |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 12:00 - 12:55 |
Event link | View original entry |
mesa3d.org is stuck on web 1.0 technology, but let's see what we can do about it.
This is a Birds-Of-a-Feather session, which starts off with a short presentation as an introduction about the current state of affairs.
Speaker |
Karol Herbst |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 13:00 - 14:00 |
Event link | View original entry |
I will talk about:
* features
* ongoing work and necessary reworks
* big and important technical issues
* overall state of the Nouveau project
This is a replacement for Manasi Navare's "Enabling 8K displays" talk, which got cancelled as Intel rejected her travel request again.
Speaker |
Jakob Bornecrantz |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 14:00 - 14:55 |
Event link | View original entry |
This talk will cover Monado and Khronos' OpenXR standard, give an overview of the current state of open source VR and what lies ahead, go into some details of how tracking is done inside Monado, and show off the current state.
VR took off for the consumer with the release of Oculus consumer hardware. But the hardware lacked open source drivers and Linux support in general. The consumer VR space has now grown from a Kickstarter campaign into a large industry. But this growth has its downsides: multiple companies have their own competing APIs. Luckily these companies have agreed to work on a single API under the Khronos umbrella. Now that OpenXR has been released and the Monado project has been getting more stable, it is possible to do good VR on a completely open stack.
Speaker |
Nicolas Caramelli |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 15:00 - 15:55 |
Event link | View original entry |
Although KMS/DRM can replace the Linux Framebuffer, there are a number of programs and libraries that can be built on top of the Linux Framebuffer (without X11 or Wayland dependencies) and that might still be worth considering. The Linux Framebuffer allows direct access to pixels: we will illustrate it with various rendering tools (Fbpad, Fbi, NetSurf, MPlayer, ...), but also with drawing libraries such as Cairo or Evas, and multimedia frameworks like FFmpeg or GStreamer.
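"Direct access to pixels" is literal here; a minimal hedged sketch, assuming a 32 bits-per-pixel mode and permission to open /dev/fb0:

```c
/* fbpixel.c -- paint a red square straight into the framebuffer */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);  /* resolution and depth */
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);  /* stride and mapping size */

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) return 1;

    /* assumes 32 bpp, XRGB layout */
    for (uint32_t y = 0; y < 100 && y < var.yres; y++)
        for (uint32_t x = 0; x < 100 && x < var.xres; x++)
            *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00ff0000;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}
```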
The Mesa 3D project makes OpenGL rendering possible using only the Linux Framebuffer with GLFBDev or EGL: mesa-demos and yagears programs will be shown.
We will then cover graphics libraries (GLUT, SDL, EFL, GTK, Qt) that allow high-level applications to run directly on top of the Linux Framebuffer with no compositor. An example will be described using either WebKitGTK or QtWebKit for the rendering of an HTML5 media player and a WebGL sample, using the Linux Framebuffer port of those libraries and toolkits.
This talk is inspired by the HiGFXback project which aims at preserving historical backends used for graphics on GNU/Linux systems.
Speaker |
Christian König |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 16:00 - 16:55 |
Event link | View original entry |
TTM is the memory manager in the Linux kernel used by graphics drivers with dedicated VRAM.
It was added to the mainline kernel in June 2009 and has seen numerous changes, but we are now more or less running into a dead end with its design.
This talk outlines TTM's current functionality, what design problems we ran into, and what we can do to fix them.
Speaker |
Ian Romanick |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 17:00 - 17:55 |
Event link | View original entry |
Automatic, pattern-based code generation for Mesa's compiler infrastructure has been a long-standing dream. Nearly a decade ago, experiments were conducted using systems like BURS and lburg. Each of these attempts encountered various insurmountable roadblocks. In the intervening years, both software and GPU architectures have changed significantly. These changes have enabled a code-generator generator to become a reality. The design and implementation of one such system will be presented. In addition to the successes, various difficulties and rough edges will be detailed.
Speaker |
Andreas Bergmeier |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 18:00 - 18:25 |
Event link | View original entry |
For the Raspberry Pi 4 I started implementing a Vulkan driver. This talk will give a guide on how to approach such a task, what my expectations are and what I have learned so far.
With the release of the Raspberry Pi 4 it becomes theoretically more viable to use it in GPU-heavy scenarios. Even ordinary software like GNOME Shell, Chromium and games falls into that category.
Sadly, neither Broadcom nor the Raspberry Pi Foundation currently provides a Vulkan driver. Since I want as much performance (and as little overheating) as possible, I started writing a Vulkan driver.
This entails learning kernel and Mesa internals as well as trying to understand Gallium. All that I have learned so far, I will try to share in this talk.
Speaker |
Filipe Laíns |
---|---|
Room | K.4.401 |
Track | Graphics |
Time | 18:30 - 18:55 |
Event link | View original entry |
This talk will give an update on the progress being made in libratbag, as well as present the new projects we have planned. If there's time, I will also show how you can write your own driver and debug existing drivers. This is a talk about libratbag updates, planned projects and a code demo.
Speaker |
Vasily A. Sartakov |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 10:30 - 10:40 |
Event link | View original entry |
Speaker |
Jethro G. Beekman |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 10:40 - 11:15 |
Event link | View original entry |
Intel Software Guard Extensions (SGX) makes software secure from the outside. Rust makes it secure from the inside. This workshop will introduce you to Rust and the Fortanix® Enclave Development Platform (EDP) for Rust: how it works, what you can do with it, and why Rust is such a good fit for SGX.
Please note that this was previously scheduled for 11:15 and replaces the talk on "Introduction to the CoSMIX Compiler" by Yan Michalevsky.
Speaker |
Amaury Chamayou |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 11:20 - 11:55 |
Event link | View original entry |
The Confidential Consortium Framework is an open-source framework for building permissioned confidential multi-party services. It leverages hardware trusted execution environments to provide strong confidentiality, integrity, and high performance. CCF implements consortium-based programmable and auditable governance.
Please note that this talk was originally scheduled to begin at the later time of 11:50.
Speaker |
Vasily A. Sartakov |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 12:00 - 12:35 |
Event link | View original entry |
In this talk I will present EActors, an actor framework that is tailored to SGX and offers a more seamless, flexible and efficient use of trusted execution, especially for applications demanding multiple enclaves. EActors disentangles the interaction with and among enclaves from costly execution mode transitions. It features lightweight fine-grained parallelism based on the concept of actors, thereby avoiding costly SGX SDK-provided synchronisation constructs. Finally, EActors offers a high degree of freedom to execute actors, either untrusted or trusted, depending on security requirements and performance demands.
Please note that this talk was originally scheduled to begin at the later time of 12:25
Speaker |
Jo Van Bulck |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 12:40 - 13:15 |
Event link | View original entry |
This talk analyzes the vulnerability space arising in Trusted
Execution Environments (TEEs) when interfacing a trusted enclave
application with untrusted, potentially malicious code. Considerable
research and industry effort has gone into developing TEE runtime
libraries with the purpose of transparently shielding enclave
application code from an adversarial environment. However, our analysis
reveals that shielding requirements are generally not well-understood in
real-world TEE runtime implementations. We expose several sanitization
vulnerabilities at the level of the Application Binary Interface (ABI)
and the Application Programming Interface (API) that can lead to
exploitable memory safety and side-channel vulnerabilities in the
compiled enclave. Mitigation of these vulnerabilities is not as simple
as ensuring that pointers are outside enclave memory. In fact, we
demonstrate that state-of-the-art mitigation techniques such as Intel’s
edger8r, Microsoft’s “deep copy marshalling”, or even memory-safe
languages like Rust fail to fully eliminate this attack surface. Our
analysis reveals 35 enclave interface sanitization vulnerabilities in 8
major open-source shielding frameworks for Intel SGX, RISC-V, and Sancus
TEEs. We practically exploit these vulnerabilities in several attack
scenarios to leak secret keys from the enclave or enable remote code
reuse. We have responsibly disclosed our findings, leading to 5
designated CVE records and numerous security patches in the vulnerable
open-source projects, including the Intel SGX-SDK, Microsoft Open
Enclave, Google Asylo, and the Rust compiler.
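To make the class of bug concrete, here is a minimal hedged sketch of the pointer sanitization an ECALL must perform before touching untrusted memory. The bounds variables are hypothetical stand-ins for what a TEE runtime provides (the Intel SGX-SDK, for instance, exposes a comparable sgx_is_outside_enclave() helper):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical enclave bounds; a real runtime supplies these. */
static uintptr_t enclave_base = 0x10000000, enclave_size = 0x800000;

/* Returns 1 iff [p, p+len) lies entirely outside the enclave. Note the
 * wrap-around check: forgetting it is exactly the kind of interface
 * sanitization bug the talk discusses. */
static int is_outside_enclave(const void *p, size_t len)
{
    uintptr_t a = (uintptr_t)p;
    if (a + len < a)                     /* integer-overflow attack */
        return 0;
    return a + len <= enclave_base || a >= enclave_base + enclave_size;
}

void ecall_process(const uint8_t *untrusted_buf, size_t len)
{
    uint8_t local[256];
    if (len > sizeof local || !is_outside_enclave(untrusted_buf, len))
        return;                          /* reject hostile pointers */
    memcpy(local, untrusted_buf, len);   /* copy in first, then use 'local' */
}
```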
Please note that this talk was originally scheduled to begin at the later time of 13:00
Speaker |
Rouven Czerwinski |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 13:20 - 13:55 |
Event link | View original entry |
OP-TEE is an open source implementation of the GPD TEE specifications. However, deploying OP-TEE inside a real-world product requires more than just integration into the system, since the integrator needs to ensure that all security requirements are met. This talk will outline a common set of these requirements and show the necessary changes based on NXP i.MX6 platforms.
Please note that this talk was originally scheduled to begin at the later time of 13:35
Speaker |
Thiago Zagatti |
---|---|
Room | K.4.601 |
Track | Hardware-aided Trusted Computing |
Time | 14:00 - 14:30 |
Event link | View original entry |
SGX-LKL is a library OS designed to run unmodified Linux binaries inside SGX enclaves. It uses the Linux Kernel Library (LKL) and a modified version of musl to provide system support for complex applications within the enclave. SGX-LKL has support for in-enclave user-level threading, signal handling, and paging. This demo presents an overview of SGX-LKL and demonstrates how popular applications can be ported and executed within SGX-LKL.
Please note that this talk was originally scheduled to begin at the later time of 14:10
Speaker |
Leif Lindholm |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 15:00 - 15:25 |
Event link | View original entry |
Historically, the UEFI forum has been a bit rubbish at interacting with open source development, but this is improving.
This talk gives a background on why (both the rubbish and the improvement) and what is being done.
Also, a brief update on news for the TianoCore/EDK2 project.
After much lawyerly fun, the UEFI Self-Certification Testsuite (SCT) was released under an OSI license (BSD2) at the end of 2018.
We will explain why this is useful, and how this has helped with the addition of UEFI support in U-Boot, as well as helped improving EDK2 code quality.
We have a new process, referred to as "code first", for drafting changes to the UEFI specification in public.
This will give an overview of the process, as well as one of the first exercises of it - the definition of audio APIs (for accessibility or, you know, DOOM).
Also, an update on changes in the EDK2 reference implementation, including dropping the CLA and upcoming RISC-V support.
And finally Ard Biesheuvel will give a quick overview of a plan to unify the Linux kernel boot protocol on EFI platforms across architectures.
Speaker |
Heinrich Schuchardt |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 15:30 - 15:55 |
Event link | View original entry |
The Unified Extensible Firmware Interface (UEFI) is the default for booting most Linux and BSD distributions. But the complexity of the UEFI standard does not offer an easy entry point for new developers. The U-Boot firmware provides a lightweight UEFI implementation. Using booting from iSCSI with U-Boot and iPXE as an example, let's delve into the UEFI API.
The UEFI sub-system in U-Boot has developed from barely being able to start GRUB to supporting complex UEFI applications like iPXE and the EFI shell, and passing most of the UEFI compliance tests for the implemented protocols and services.
The session gives an overview of the boot-time and runtime services of UEFI with a focus on driver binding. The challenges of integrating the UEFI subsystem with U-Boot's infrastructure are described and an outlook is provided.
Questions this talk should answer:
- How does the UEFI driver model work?
- How does this integrate with U-Boot?
- What to expect next in U-Boot's UEFI implementation?
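As a rough sketch of the driver binding idea at the heart of the UEFI driver model (simplified stand-in typedefs instead of the real UEFI/edk2 headers; our illustration, not U-Boot or EDK2 code):

```c
#include <stdint.h>

/* Simplified stand-ins for the UEFI spec types. */
typedef uintptr_t EFI_STATUS;
typedef void     *EFI_HANDLE;
struct _EFI_DRIVER_BINDING_PROTOCOL;

/* The firmware walks controller handles and asks each registered driver:
 * can you manage this device, and if so, bind to it. */
typedef EFI_STATUS (*BINDING_FN)(struct _EFI_DRIVER_BINDING_PROTOCOL *This,
                                 EFI_HANDLE Controller,
                                 void *RemainingDevicePath);

typedef struct _EFI_DRIVER_BINDING_PROTOCOL {
    BINDING_FN Supported;   /* cheap probe: does this driver manage it?  */
    BINDING_FN Start;       /* bind: install protocols on the handle     */
    BINDING_FN Stop;        /* unbind (real signature differs slightly)  */
    uint32_t   Version;
    EFI_HANDLE ImageHandle;
    EFI_HANDLE DriverBindingHandle;
} EFI_DRIVER_BINDING_PROTOCOL;

/* ConnectController() then tries Supported() on each driver and calls
 * Start() on those that report success. */
```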
Speaker |
Thierry Laurion |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 16:00 - 16:25 |
Event link | View original entry |
Insurgo has engaged itself in the adventure of facilitating security accessibility and received NlNet funding to do exactly that. Now it wants to get developers involved and expand funding.
The goal of this is to bridge the gap between reasonably secure OS (QubesOS) and slightly more secure hardware (Heads) to help privacy-focused users and those that are vulnerable. But we need to prepare for the future now!
Insurgo has challenged the status quo that has been prevalent since 2015 and has made it possible for OEMs to preinstall QubesOS, thanks to the Heads Open Source Firmware (OSF) and its own PrivacyBeast QubesOS-certified branch, not yet merged upstream due to the lack of time and resources of a single-person effort needing additional collaboration.
The integrity of the firmware and boot files is already remotely sealed and can be attested over a smartphone (TPMTOTP) and from the bundled Librem Key/Nitrokey Pro 2 (HOTP), prior to shipping. Thanks to HOTP-enabled USB security dongles bound to shipped products, the user can visually validate that the hardware they have received is in the OEM-attested state, prior to complete re-ownership, which regenerates all required secrets from a trustable recovery environment (Heads OSF) through a re-ownership wizard that guides the user to completion.
This is just the beginning of the adventure and the road ahead requires your help. Insurgo wants to propel this movement forward.
Today's secure hardware (REAL open-source-initialized hardware, e.g. the RYF KGPE-D16, Replicant-supported phones, Sandy Bridge/Ivy Bridge based boards such as the x230) struggles to stay current with upstream code and compliance requirements. LineageOS dropped support for the i9300. Coreboot dropped support for the KGPE-D16 platform. And the list will expand if no measures are taken to support maintainership of privacy-focused projects that are taken for granted until support is finally dropped. This is a real problem requiring real solutions.
New efforts to support future REAL Open Source Hardware (newly Respect Your Freedom [RYF] certified hardware, e.g. the Talos II from Raptor Engineering, future POWER10-based hardware) are neither currently under active development nor currently supported by QubesOS. This needs to change. Now.
There is an opportunity for transition. This requires leadership, developers and funding.
This is why we've created the Insurgo Initiative on the OpenCollective platform.
This is where transparent funding will be available to the public for open source R&D. Please consider participating through code contributions!
Insurgo is making today's most trustworthy hardware (TRULY neutered and deactivated Intel ME, no FSP, no binary blobs whatsoever except the EC firmware in the Root of Trust) available to the masses through remote attestation over Heads OSF.
NLnet is helping make Heads compatible with the T530, T430, T420 and X220, which are widely available, binary-blob-free hardware platforms, thanks to a partnership with 9elements under an NLnet grant.
NLnet funding is also enabling development of remote administration of QubesOS over Tor hidden services when needed, thanks to an ongoing partnership with both the Qubes OS Project and Whonix.
But what about other work needed to ease accessibility of tomorrow's secure hardware and technologies?
Insurgo has decided to give back to Open Source Firmware (OSF) related communities and will publicly announce a novel approach to supporting the open source projects it depends on.
As a first step, we plan to give back 25% of Insurgo's net profit on sales to the Insurgo Initiative, hosted on OpenCollective.
Those funds will be available to Open Source projects in the form of bounties, to be paid out upon proof of work of agreed contributions.
The idea here is that open source tickets (issues) can carry bounties: if knowledgeable people knew funds were available for needed work, they would be more incentivized to address it.
Developers could then be rewarded for their efforts and paid for completing tasks, similar to how open source funds (OpenTech, NLnet, etc.) provide funding for larger projects.
The Insurgo Initiative will be self funded and potentially expanded through international partnerships, while the goal stays the same: supporting a future where security is more accessible to the public.
Here are some projects needing additional funding and more developer awareness right now. Large funds and grant applications are great, but the funding process has issues.
Not every developer wants to go through the application process, which requires management skills and a process that is not just about coding.
There are awesome developers out there whose help would be greatly needed.
How do we appropriately match developers with pertinent issues? We can fix this with the right mission and funding.
Insurgo's mission is accessible security.
Bounty tags are being added to projects that lack funding, to help address the problems currently blocking their completion.
The main problem we seem to face with many projects can be seen over and over again: a lack of maintainership.
No one can carry on a project for too long without becoming overwhelmed/drained by it.
We need to fairly distribute this work and make sure contributions are incentivized and fairly paid.
In this talk, I will quickly go over past work, the current situation, and where Insurgo wants to go.
Welcome aboard!
Speaker |
Daniel Kiper Piotr Król |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 16:30 - 16:55 |
Event link | View original entry |
For the last several years, hypervisors have played a key role in platform security by reducing the possible attack surface. At the same time, the hype surrounding edge computing and Internet of Things gateways has led to an increase in network appliance devices. Our target was to create a less-insecure virtual network appliance using TrenchBoot, Trusted Platform Module 2.0 and the AMD SKINIT Dynamic Root of Trust for Measurement to establish a Xen hypervisor with a meta-virtualized pfSense firewall. We will present it together with an update on the status of TrenchBoot support for AMD processors.
The appliance is built on the apu2, a reliable low-SWaP x86 device from Swiss OEM PC Engines. It can be used as a Small Office/Home Office firewall or an industrial edge device, and it offers mostly open-source hardware, coreboot firmware, mPCIe extensibility and an extended support lifecycle for the embedded Central Processing Unit and motherboard.
In this talk, we will show how to create a system that offloads a significant portion of computation to edge devices while maintaining security. Using a simple, well-known platform, we will perform a secure boot using the Static Root of Trust for Measurement with coreboot, move to the Dynamic Root of Trust for Measurement via SKINIT in TrenchBoot, and use all of this to provide a complete chain of trust for the Xen hypervisor and a virtual firewall appliance isolated by an input–output memory management unit (IOMMU) from the physical network interface controller (NIC) devices. We will present benchmark data on virtualization overhead, explain how this complexity can still be practical, and outline the value of this stack. In the second part of the presentation we will discuss the current status of Intel TXT support in GRUB and the Linux kernel.
Speaker |
Patrick Rudolph |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 17:00 - 17:25 |
Event link | View original entry |
Modern Open Source boot firmware ships with an increasing number of BLOBs. While it is often claimed that they ease integration, they make the lives of Open Source developers harder, as it is not documented what is done inside the BLOBs and what must be done outside of them.
We will show how to trace the MMIO accesses of BLOBs in firmware using Open Source tools. As analysing the traces for possible branches and loops is hard and stressful work, we created our own framework for automatic reverse engineering. The framework can capture and analyse MMIO traces, fuzz the BLOB under test, and finally generate readable code in a high-level language such as C for easier analysis.
During this talk, we will discuss the legal side, the motivation behind reverse engineering, and the benefits for the Open Source community. We will explain the problems we faced and the basic concepts used, with real-world examples.
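To make the idea concrete, here is a purely hypothetical C sketch of what one record of such an MMIO trace could look like. The speakers' actual framework is not described at this level in the abstract, so every field name below is invented:

```c
/* Illustrative only: a per-access record that a tracer might log for
 * each MMIO read/write a BLOB performs, so branches and loops can be
 * reconstructed from the sequence later. */
#include <stdint.h>

enum mmio_op { MMIO_READ, MMIO_WRITE };

struct mmio_trace_entry {
    uint64_t timestamp;   /* when the access happened            */
    uint64_t pc;          /* instruction pointer inside the BLOB */
    uint64_t phys_addr;   /* MMIO address that was touched       */
    uint32_t width;       /* access width in bytes: 1, 2, 4, 8   */
    enum mmio_op op;      /* read or write                       */
    uint64_t value;       /* value read or written               */
};
```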
Speaker |
Daniel Maslowski (CyReVolt) |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 17:30 - 17:55 |
Event link | View original entry |
With Intel's Firmware Support Package (FSP) and the recent release of a
redistributable firmware binary
for the Management Engine, it has become possible to share full firmware images
for modern x86 platforms and potentially audit the binaries. Yet, reverse
engineering, decompilation and disassembly are still not permitted. However,
thanks to previous research, we can have a closer look at the binary data and
come to a few conclusions. This talk briefly summarizes the fundamentals of developing custom and open source firmware, followed by a quick guide through the process of analyzing the binaries, without actually violating the terms, to understand a few bits, and finally makes a statement on the political issues that researchers, repair technicians and software developers are facing nowadays, taking into account how consumers are affected and how they eventually perceive the situation.
Speaker |
Brian Richardson |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 18:00 - 18:25 |
Event link | View original entry |
As the rich capabilities of platforms increase, so does their complexity. As hypervisors and operating systems harden their attack surfaces, malware has been moving deeper into the platform. For example, a modern laptop may have over 15 updatable firmware elements, each with low-level access to a specific hardware domain. From the early days of proprietary BIOS in the 1980’s and 1990’s, to the world of standards in the 2000’s, to the post-PC world of the last few years, the nature of firmware has changed. In order to provide security guarantees for platform firmware, the servicing model of the platform takes center stage.
This session discusses the evolution of platform servicing using examples based on device firmware, non-host/system-on-a-chip (SoC) firmware, and implementations of the Unified Extensible Firmware Interface (UEFI). A modern servicing model features component-based update, resiliency in case of unexpected conditions, a more seamless user experience, lower friction of update integration, and telemetry for a view into platform health and firmware inventory.
This talk will discuss current trends in standards such as UEFI and the associated EDK II firmware, and how the Linux Vendor Firmware Service (LVFS) uses these components as part of a holistic, open source approach to seamless firmware updates.
Speaker |
Maciej Lawniczak Przemyslaw Czarnowski |
---|---|
Room | K.4.601 |
Track | Open Source Firmware, BMC and Bootloader |
Time | 18:30 - 18:55 |
Event link | View original entry |
Have you ever heard of the Baseboard Management Controller? It has been the black-box firmware managing servers since the last century … now it's open. OpenBMC is a Linux Foundation project with the goal of producing an open source implementation of the BMC firmware stack. It is a vendor-independent Linux distribution, created using the Yocto Project, that provides a complete set of manageability features. Backbone technologies in OpenBMC include D-Bus and systemd. With its embedded web server, it provides a user-friendly WebUI and a Redfish interface for easy server management using modern RESTful APIs. Intel, as one of the founding companies, offers additional functionality on top of the OpenBMC implementation, which will also be covered in this presentation. (A minimal sketch of poking that D-Bus backbone follows the list below.)
In this talk we will:
- tell you a short history and give an overview of OpenBMC
- take a quick look at the OpenBMC architecture (Yocto, D-Bus, systemd)
- show what's new in the latest 2.7 release and what is planned for 2.8 (Feb 2020)
- talk about Intel-specific features available in OpenBMC
- tell you how to contribute to the OpenBMC project
- give you a guide on how to modify, build and run the project on a target BMC on an Intel server
Audience: software engineers, validation engineers, embedded software architects, data center administrators
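As promised above, a minimal sketch of talking to OpenBMC over its D-Bus backbone from C using sd-bus. The xyz.openbmc_project names are the ones commonly exposed by OpenBMC's host state manager, but treat them as assumptions and verify with `busctl tree` on your BMC:

```c
/* Read one OpenBMC D-Bus property: the managed host's power state.
 * Build with: cc host_state.c -lsystemd */
#include <stdio.h>
#include <stdlib.h>
#include <systemd/sd-bus.h>

int main(void)
{
    sd_bus *bus = NULL;
    sd_bus_error error = SD_BUS_ERROR_NULL;
    char *state = NULL;

    if (sd_bus_default_system(&bus) < 0)
        return EXIT_FAILURE;

    int r = sd_bus_get_property_string(bus,
            "xyz.openbmc_project.State.Host",      /* service   */
            "/xyz/openbmc_project/state/host0",    /* object    */
            "xyz.openbmc_project.State.Host",      /* interface */
            "CurrentHostState",                    /* property  */
            &error, &state);
    if (r < 0)
        fprintf(stderr, "query failed: %s\n", error.message);
    else
        printf("host0 state: %s\n", state);

    free(state);
    sd_bus_error_free(&error);
    sd_bus_unref(bus);
    return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}
```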
Speaker |
Arne Padmos |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 10:30 - 10:55 |
Event link | View original entry |
What threats do we need to take into account when building a system? A key method for answering this question is an approach called threat modelling, whereby security problems can be anticipated during the design phase. This talk discusses major threat-modelling approaches, and includes concrete examples of how to apply them to software-intensive systems.
Speaker |
Tomáš Mráz |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 11:00 - 11:25 |
Event link | View original entry |
Managing which cryptographic algorithms are allowed, and disabling those that are forbidden due to weaknesses or due to restrictions imposed by certification standards, is a complicated task. The talk will introduce the concept and implementation of system-wide crypto-policies as an attempt to help system administrators with this task.
This talk replaces the "OSINT" talk which was scheduled initially; David Busby could not attend on short notice.
The system-wide crypto-policies were introduced in Fedora 21 in 2014. Since then the tool has evolved a lot, especially with the recent introduction of run-time generation of configuration from a policy definition file and of the sub-policy concept; the feature is called custom crypto policies. The crypto-policies nowadays help manage crypto algorithms not only in Fedora but also in Red Hat Enterprise Linux 8. It will be shown what the policy definition file looks like and how it is converted to the actual crypto library configuration.
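As a hedged illustration of that mechanism, here is a sub-policy sketch; the directive follows the documented policy syntax, but verify it against crypto-policies(7) on your release:

```
# /etc/crypto-policies/policies/modules/NO-SHA1.pmod
# Drop SHA-1 from the set of allowed hashes:
hash = -SHA1

# Apply on top of the DEFAULT policy and regenerate the per-library
# configuration snippets:
#   update-crypto-policies --set DEFAULT:NO-SHA1
```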
Speaker |
Tobias Reiher |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 11:30 - 11:55 |
Event link | View original entry |
Security vulnerabilities are still very common in today's software. Formal methods could improve the situation, but program verification remains a complex and time-consuming task. Often, the verification of existing software is infeasible, and a complete rewrite can be prohibitively expensive. Neither, however, is necessarily required to improve on the current state: by replacing critical parts of existing software with verified code, security can be strengthened significantly with moderate effort.
We show the feasibility of this approach using the example of a FLOSS TLS implementation. The basis of our PoC is the TLS 1.3 library Fizz, which is written in C++. The existing message parser was replaced by a verified version implemented in the SPARK language. Our RecordFlux toolset was used to automatically generate the parser from a formal message specification. With the SPARK tools we can prove automatically that an attacker cannot cause any overflows, runtime errors or undefined state by sending malformed messages to the modified library. Because of mismatches between the data structures used in C++ and SPARK, some glue code had to be written manually to integrate the verified parser into Fizz. Still, the modified TLS implementation shows only a slight performance loss while providing higher security.
Speaker |
Oleksii Oleksenko |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 12:00 - 12:25 |
Event link | View original entry |
Spectre-type attacks are a real threat to secure systems because a successful attack can undermine even an application that would be traditionally considered safe.
SpecFuzz is the first tool that enables fuzzing for such vulnerabilities.
The key is a novel concept of speculation exposure:
The program is instrumented to simulate speculative execution in software by forcefully executing the code paths that could be triggered due to mispredictions, thereby making the speculative memory accesses visible to integrity checkers (e.g., AddressSanitizer).
Combined with the conventional fuzzing techniques, speculation exposure enables more precise identification of potential vulnerabilities compared to state-of-the-art static analyzers.
Technical report: https://arxiv.org/abs/1905.10311
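For context, here is the kind of Spectre-v1 gadget that speculation exposure makes visible to AddressSanitizer. This is the standard bounds-check-bypass pattern, not code taken from the paper:

```c
/* Classic Spectre-v1 gadget: architecturally safe, speculatively not. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 512];
size_t  array1_size = 16;

uint8_t victim(size_t i)
{
    /* If the branch below is mispredicted, array1[i] is read out of
     * bounds speculatively and its value leaks into the cache via the
     * dependent array2 access. SpecFuzz simulates that misprediction
     * in software, so the out-of-bounds access becomes an ordinary
     * AddressSanitizer report during fuzzing. */
    if (i < array1_size)
        return array2[array1[i] * 512];
    return 0;
}
```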
Speaker |
Lorenzo Fontana |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 12:30 - 12:55 |
Event link | View original entry |
Linux syscalls can be used as an entry point for security analysis on Linux. However, reading and processing every system call in userspace creates a unique set of challenges.
In this talk we are going to see exactly what those challenges are and how we solved them in the Falco project.
One of the ways to gain broad visibility into our systems when doing security analysis is to ask our questions directly of the Linux kernel.
For this purpose, at a very foundational level, every Linux system provides the syscall interface. It is certain that every userspace process goes through this part of the kernel.
Starting from this assumption, the immediate conclusion is that we can just go and ask the syscalls: "Yo syscalls! What's happening in my system?". While this reasoning might seem very simple, reading and processing every single syscall in userspace results in a set of challenges unique to this domain.
In this talk we are going to see exactly what those challenges are and how we solved them in the Falco project.
Part of the solution for Falco is to have two alternative drivers, a kernel module and an eBPF driver, both talking to userspace through a ring buffer. But you have to come to this talk to hear the rest!
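As a toy illustration of the ring-buffer idea (a generic single-producer/single-consumer sketch, not Falco's actual driver code; memory-ordering details are deliberately simplified):

```c
/* Kernel side produces fixed-size events; userspace polls and drains.
 * When the consumer lags, events are dropped rather than blocking the
 * syscall path, which is one of the challenges the talk describes. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_SLOTS 4096

struct event { uint64_t ts; uint32_t syscall_nr; uint32_t pid; };

struct ring {
    _Atomic uint64_t head;   /* next slot the producer writes */
    _Atomic uint64_t tail;   /* next slot the consumer reads  */
    uint64_t dropped;        /* events lost while buffer full */
    struct event slots[RING_SLOTS];
};

/* Producer side (in Falco this runs in the kernel module/eBPF driver). */
void ring_push(struct ring *r, const struct event *ev)
{
    uint64_t head = atomic_load(&r->head);
    if (head - atomic_load(&r->tail) == RING_SLOTS) {
        r->dropped++;        /* never block the hot syscall path */
        return;
    }
    r->slots[head % RING_SLOTS] = *ev;
    atomic_store(&r->head, head + 1);
}

/* Consumer side (userspace): returns 1 if an event was read. */
int ring_pop(struct ring *r, struct event *out)
{
    uint64_t tail = atomic_load(&r->tail);
    if (tail == atomic_load(&r->head))
        return 0;            /* empty */
    *out = r->slots[tail % RING_SLOTS];
    atomic_store(&r->tail, tail + 1);
    return 1;
}
```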
Speaker |
John Lionis |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 13:00 - 13:25 |
Event link | View original entry |
In this presentation we consider the increased use of Docker in corporate environments. Docker has found widespread use during the past years, mostly because it is very easy to use, economical with respect to resources, and fast and easy to deploy compared with a full-blown virtual machine. More and more servers are being operated as Docker hosts on which micro-services run in containers. From a security point of view, two aspects arise in the context of this talk and its inherent time limitations. First, the already much-discussed question: "is it secure?". Second, the less analyzed aspect of incident analysis and the changes Docker introduces with respect to known methods and evidence. We will briefly outline some security considerations about Docker and the average user, and then examine how Docker changes the workflow of incident analysis and forensics in its environment.
Speaker |
Gilles Van Assche |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 13:30 - 13:55 |
Event link | View original entry |
Protocols in symmetric cryptography are often built from block ciphers, with a fixed input and output size, while variable sizes are handled through their modes of use. Incrementality, namely, the ability to efficiently compute the output for increasing inputs, or to request longer outputs, is often a property of the implementation rather than an explicit feature of a mode.
A doubly-extendable cryptographic keyed (or deck) function is a new kind of object that makes incrementality an integral part of its definition. Writing modes for various applications, such as authenticated encryption of a network channel or disk encryption with a wide block cipher, on top of a deck function turns out to be a simple exercise and leads to less error-prone implementations than on top of a block cipher. We illustrate this with the session-supporting authenticated encryption modes SANE and SANSE. (Sessions naturally protect a continuous flow of messages or a client-server dialog.)
While a deck function can be constructed from existing primitives, like a block cipher, we show two more natural ways of making a deck function in practice.
The first one is based on the well-known permutation-based duplex construction, of which a nice instantiation is the Strobe protocol framework. Strobe was showcased in Noise+Strobe=Disco as an advantageous replacement to all kinds of primitives in the Noise protocol framework, resulting in much simpler specifications and a lighter implementation. Xoodyak, our candidate to the NIST Lightweight Cryptography competition, is another example.
The second one is based on the recent Farfalle construction, which relies on the parallel application of a permutation. Farfalle's inherent parallelism yields deck functions that are at the same time simple and efficient on a wide range of platforms. In particular, we point out the nice performance of Kravatte and Xoofff, two deck functions based on the Keccak-p and the Xoodoo permutation, respectively. It is worth noting that Kravatte and Xoofff are much faster than AES-128 in software, and at least competitive with and often faster than AES-128 using dedicated AES instructions on the more recent Intel and AMD processors!
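As a rough sketch of the interface these modes build on, in notation loosely following the designers' papers (the explicit bit-range operator is our own shorthand, not the talk's):

```latex
% A deck function F takes a key K and a *sequence* of input strings and
% produces an unbounded output stream; the caller takes n bits at offset q:
\[
  Z \;=\; F_K\!\bigl(X^{(m)} \circ \cdots \circ X^{(1)}\bigr)\,[\,q \,..\, q+n-1\,]
\]
% Incrementality means both appending a further string X^{(m+1)} and
% requesting more output bits reuse the work done for the previous call,
% which is what makes session-supporting modes like SANE/SANSE cheap.
```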
Speaker |
Robert Golebiowski |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 14:00 - 14:25 |
Event link | View original entry |
How is Transparent Data Encryption built in MySQL and Percona Server?
In this presentation, we'll take a deep dive into the world of transparent data encryption for open source databases. We'll be looking at how transparent data encryption is implemented in MySQL and Percona Server for MySQL:
- Keyrings: what are they used for? What is the difference between a server back-end (keyring_vault) and a file back-end (keyring_file)? How does the choice affect server startup, and why? Why is per-server separation needed in Vault Server?
- How does Master Key encryption work, and how is it built at the page level? How do we know which key to fetch to decrypt a table? How do we know that the key used is the correct one? How do we make sure that we can decrypt a table when we need it?
- Which crypto algorithms are used?
- How does Master Key rotation work, and why is it needed?
- What is KEYRING encryption, and what are encryption threads?
- How does binlog encryption work in 5.7, and how does it work in 8.0?
- How does undo log/redo log encryption work?
By the end of the talk, you'll have a better understanding of transparent data encryption and will be aware of things to take into account when interacting with encrypted databases in your applications. A keyring configuration sketch follows below.
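A hedged my.cnf sketch of the two keyring back-ends contrasted above; option names follow the MySQL keyring_file and Percona keyring_vault plugins, but verify them against your server version:

```
# my.cnf fragment (sketch; paths are examples)
[mysqld]
# File back-end: master key kept in a local, obfuscated keyring file.
early-plugin-load = keyring_file.so
keyring_file_data = /var/lib/mysql-keyring/keyring

# Percona Server alternative: keep keys in HashiCorp Vault instead.
# early-plugin-load    = keyring_vault.so
# keyring_vault_config = /etc/mysql/keyring_vault.conf  # vault_url, token, ...
```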
Speaker |
Stephan Marwedel |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 14:30 - 14:55 |
Event link | View original entry |
This talk presents the design, implementation, and configuration of the secure logging service. Its aim is to provide tamper-evident logging, i.e., to adequately protect the log records of an information system against tampering and to provide a sensor indicating attack attempts. The secure logging service achieves this by authentically encrypting each log record with an individual cryptographic key used only once, and it protects the integrity of the whole log archive with a cipher-based message authentication code. Any attempt to tamper with either an individual log record or the log archive itself will be detected during log archive verification. Therefore, an attacker can no longer tamper with log records without being detected, which greatly enhances the use of log archives in forensic investigations.
Log records are normally produced by any information system in order to perform monitoring during normal operations and for troubleshooting in case of technical problems. Log information is equally important for maintaining the security of an information system, as security-relevant events are recorded and can later be monitored for unusual patterns which may indicate an attack attempt. Examples include logon and logoff, startup and shutdown, network service access, network filter rule application, storage access, etc. Log records may also contain valuable information about a system that a potential attacker intends to compromise. If attackers are able to successfully compromise a system, they are also able to tamper with its log records, potentially hiding their traces. This makes forensic analysis extremely difficult, as no reliable data source about system behavior immediately before the attack is available to a security analyst performing incident investigation. Therefore, log information should be appropriately protected.
Most information systems rely on standards to provide logging services. One of the most widely adopted is the syslog protocol, specified in RFC 5424, and many implementations of it are available. A popular, extensible implementation with additional features is syslog-ng, an enhanced logging daemon with advanced input and output capabilities as well as log message filtering, rewriting, and routing. It can be used as a drop-in replacement for existing log daemons on UNIX systems. The implementation of the secure logging service, which provides tamper evidence and confidentiality of system logs based on the template mechanism of syslog-ng, is presented together with an application example.
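A sketch of what wiring this into syslog-ng.conf might look like, based on the secure-logging module's template function; exact option names vary across versions, so treat this as an assumption to check against the module's documentation:

```
# Route messages through the secure logging template so each record is
# encrypted with a one-time key and chained into the MAC file.
destination d_secure {
    file("/var/log/messages.slog"
         template("$(slog --key-file /etc/syslog-ng/host.key --mac-file /etc/syslog-ng/mac.dat $RAWMSG)\n"));
};
log { source(s_local); destination(d_secure); };
```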
Speaker |
Moisés Guimarães |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 15:00 - 15:25 |
Event link | View original entry |
Applications and services rely on configuration data in order to be customized, and we will talk about how to keep that data somewhere safer than plaintext configuration files.
The configparser module is Python's standard configuration file parser, and many projects rely on it for easy configuration with plaintext files. OpenStack Common Libraries (Oslo) has an alternative called oslo.config, with additional sources of input such as command line arguments and environment variables. With the addition of a feature called source drivers last year, we are now able to increase the security of configuration values by storing them in a safer place.
This talk focuses on the new source driver that integrates oslo.config with Castellan, another Oslo module, which specializes in talking to secret managers, and on how we can store our sensitive configuration data using HashiCorp Vault.
Speaker |
Radovan Sroka |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 15:30 - 15:55 |
Event link | View original entry |
Are you a sysadmin and feeling paranoid? Let's promote security hardening to another level.
Perhaps, with the concept of Application Whitelisting you will be able to sleep again.
In this session we are going to explain the Application Whitelisting idea and its implementation, what benefits it brings from a security point of view, and how it differs from competing approaches.
We are going to show how to create a new set of rules based on the distribution defaults, using concrete examples; a sketch of the rule grammar follows below.
As a result, attendees should be able to set up the Application Whitelisting framework on their own server or workstation.
This presentation is based on the Red Hat/Fedora Linux environment.
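For a flavor of the rule format, here is a sketch in the grammar used by fapolicyd, Fedora's application whitelisting daemon, which this framework corresponds to on Red Hat/Fedora systems. Keyword names are best checked against fapolicyd.rules(5) for your release:

```
# First match wins; "trust" refers to files known to the RPM/trust database.
allow perm=any all : trust=1
# Audit and deny execution of anything untrusted:
deny_audit perm=execute all : all
# Untrusted files may still be opened as plain data:
allow perm=open all : all
```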
Speaker |
Philipp Krenn |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 16:00 - 16:25 |
Event link | View original entry |
Why should you allow all possible system calls from your application when you know that you only need some? If you have ever wondered the same, then this is the right talk for you. We are covering:
Because your security approach can always use an additional layer of protection.
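For a taste of what such an allow-list looks like in practice, here is a minimal sketch using libseccomp; the abstract does not state the talk's exact stack, so this is a generic example:

```c
/* Deny everything except the handful of syscalls this toy program uses.
 * Build with: cc allowlist.c -lseccomp */
#include <seccomp.h>
#include <unistd.h>

int main(void)
{
    /* Default action: kill the process on any unlisted syscall. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (!ctx)
        return 1;

    /* Allow only what we actually need below; real programs must also
     * allow whatever their libc uses behind the scenes. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) < 0)   /* install the BPF filter */
        return 1;

    const char msg[] = "still allowed to write\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;   /* exits via exit_group; anything else would be killed */
}
```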
Speaker |
Florent Revest |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 16:30 - 16:55 |
Event link | View original entry |
KRSI (Kernel Runtime Security Instrumentation) is an ongoing effort at Google to upstream an LSM (Linux Security Module) instrumentable using eBPF (extended Berkeley Packet Filter) to the Linux kernel.
KRSI allows system owners to dynamically attach eBPF programs to security hooks and write MAC and audit policies without having to reboot or patch the kernel, thereby enabling a new class of system security and auditing software.
This talk presents the main concepts behind KRSI: it introduces the technologies leveraged and presents the API exposed to users.
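A sketch of what such a program looks like, based on the BPF LSM interface as it later landed upstream (CONFIG_BPF_LSM, Linux 5.7); the RFC series discussed in this talk may differ in detail, and the policy itself is hypothetical:

```c
/* Attach to the file_mprotect LSM hook and deny W+X mappings.
 * Built against libbpf's bpf_helpers.h/bpf_tracing.h and a vmlinux.h. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define PROT_WRITE 0x2   /* macro values not carried in vmlinux.h */
#define PROT_EXEC  0x4
#define EPERM      1

char LICENSE[] SEC("license") = "GPL";

SEC("lsm/file_mprotect")
int BPF_PROG(deny_wx, struct vm_area_struct *vma,
             unsigned long reqprot, unsigned long prot, int ret)
{
    /* Honor a denial from a previous program on this hook. */
    if (ret != 0)
        return ret;

    /* Hypothetical policy: forbid making a mapping writable+executable. */
    if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
        return -EPERM;
    return 0;
}
```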
Speaker |
Lukas Vrabec |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 17:00 - 17:25 |
Event link | View original entry |
Lukas Vrabec built a new standalone tool, udica, that generates SELinux policy profiles for containers by automatically inspecting them. We will focus on why udica is needed in the container world and how it can make SELinux and containers work better together. We will show real examples where SELinux separation for containers had to be turned off because the generic SELinux type container_t was too tight. With a tool like udica, users and developers can easily customize the policy even with limited SELinux policy writing skills. Come and see how easily you too can create a custom SELinux policy for your containers!
This talk will explain how SELinux works with containers. We will show how to enable/disable SELinux under multiple container runtimes and define the default types. One issue with these types is that they are tough to customize. The two default types for running containers are container_t, a fully confined domain that eliminates any use of host files unless they are relabeled, and spc_t, the type containers run with when SELinux is disabled for container separation (--privileged mode). As an example, if you had a container that you wanted to gather logs from /var/log on the host and send them to a centralized server, you would have to disable SELinux separation.
Writing custom policy for each container that needs additional access would be very difficult and would require a container policy writer.
Speaker |
Winfried Tilanus |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 17:30 - 17:55 |
Event link | View original entry |
End-to-end encryption is often regarded as the holy grail of security. But when you start implementing it, it soon becomes a security hell. Does it really protect against the threats it should protect against? And watch out for the pitfalls when implementing it: almost everybody fails there!
Let's start with the conclusion of this talk: after twenty years of designing and analyzing high-security instant messaging systems, I have come to the conclusion that end-to-end encryption (e2ee) in instant messaging is snake oil. It creates a false sense of security.
First of all, the threat model underneath e2ee has fundamental flaws: it does not deliver protection against the threats commonly named to justify it. And if that were not enough, there are a lot of issues that make a proper implementation very hard to get right. To name a few: key verification, one-to-many messages, store-and-forward, and archiving.
But let's not end this talk on a dark note. Though we are not there yet, there are some developments that may solve these issues. I will name those too.
Speaker |
Peter Czanik |
---|---|
Room | UA2.114 (Baudoux) |
Track | Security |
Time | 18:00 - 18:25 |
Event link | View original entry |
Everybody knows sudo, right? Sudo allows a system administrator to give certain users the ability to run some commands as root, while logging the executed commands and their arguments. It is installed by default on almost all Linux systems, and is available for most commercial UNIX systems. Still, even system administrators often only know it is the “prefix” to use before entering a command requiring root privileges. Learn how much more this simple looking tool can do!
Most of the time, the default configuration allows a group of users to run any command:
%wheel ALL=(ALL) ALL
It’s a good first step, better than using the root account directly. This way you can trace who ran what commands on the system. But there are a lot more possibilities when configuring sudo, making your system more secure.
Life is simple when you have to give a single user access to a single command. But as soon as you have multiple users with the same access rights, it is not just shorter but also easier to maintain if you use aliases, for example:
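```
# sudoers sketch (names invented); edit safely with visudo
User_Alias  WEBTEAM = alice, bob, carol
Cmnd_Alias  WEBCTL  = /usr/bin/systemctl restart httpd, \
                      /usr/bin/systemctl reload httpd

WEBTEAM     ALL = (root) WEBCTL
```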
For added security, you can record a hash of a binary in sudoers. If the binary then changes for any reason, for example by being modified through a successful exploit, sudo will refuse to run it. For example:
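```
# The rule applies only while /usr/bin/passwd matches the recorded digest.
# The digest below is a placeholder; generate the real value, e.g. with:
#   openssl dgst -sha224 -binary /usr/bin/passwd | openssl base64
alice ALL = (root) sha224:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== /usr/bin/passwd
```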
Using sudo does not make much sense without proper logging and alerting. There are three major possibilities:
- syslog: all events are logged to syslog. For additional security, collect sudo logs centrally, so a malicious user cannot delete them easily.
- e-mail: sudo can send e-mail alerts on different kinds of failures
- debug: in depth logging of subsystems, mostly useful for developers
Session recording is a fourth possibility. The terminal output can be saved to a local file and played back later, so you can review what happened even if the user started an interactive shell.
Instead of maintaining the sudoers file on each of your systems, you can use LDAP to configure sudo. It has some differences compared to a sudoers file, but also many advantages:
- local users cannot manipulate the rules,
- it is easier to maintain,
- changes go live immediately.
Starting with version 1.8, sudo has a plugin-based architecture. You can replace or extend sudo functionality using