
SCaLE 15x

This year was the 15th Annual SCaLE (Southern California Linux Expo), an event I was fortunate enough to both attend and speak at. While this is the 15th year of the now very well known conference, it was in fact my first time attending. I spent the majority of my time floating between the Fedora, Red Hat, and OpenShift booths in the Expo Hall. I had originally planned to spend more time at the Fedora booth than I did, but the OpenShift crew ended up short staffed because of unexpected travel issues for some of their team members, so I filled in as best I could. As expected, interest in containers is at full tilt, and people were very interested to see what is going on with OpenShift, since it is a Kubernetes distribution with advanced features beyond core Kubernetes, and Kubernetes is easily the most popular container orchestration platform around right now. The Project Atomic Community Manager, Josh Berkus, was kind enough to lend his Sub-Atomic Cluster (described in this two-part blog series: Part 1, Part 2) to the booth efforts, and that made for some very engaging demos of what OpenShift can accomplish (even though the conference network left something to be desired, but that's nothing new). Overall I think we were able to provide event goers a solid booth destination in their Expo Hall travels.

Every conference I go to, I notice there's a specific "crowd profile" in terms of what motivates the participants to attend, what their interests are, and so on. Often these are categories like hobbyist, enthusiast, professional/commercial, developer, sysadmin/ops, or DevOps practitioner, and frequently some mixture of those. This particular conference was a really solid representation of community-focused people and hobbyists, which is always a cool crowd because everyone is genuinely interested and enthusiastic about the technologies being represented. On a personal note, though, something I found rather interesting was the number of people who came by the Red Hat booth who had never heard of the company. This isn't entirely a new phenomenon depending on the "crowd profile," but it's definitely the first time I've seen it so widespread at a specifically Linux conference. This is a weird change of pace for me because, for the longest time, Red Hat was a name synonymous with Linux. However, as the company has focused more on the Enterprise with RHEL, the community-focused Fedora and CentOS have filled the void for the community user base, and this was a primarily community-focused event. Beyond that, the number of people who had no idea that Red Hat is a major sponsor of and contributor to Fedora was surprising to me.

There are two primary reasons I think led to this situation. First, Linux is so high quality and pervasive these days that the people who used to end up off in the weeds early and often with technical issues are fewer and farther between. Those systems-level dives would quickly lead someone to become well versed in the details of their distribution and the relationships between different entities (such as Red Hat and Fedora) within the community. This is no longer the case: Linux is so easy to use and so commonplace that most people don't need (and in many cases don't want) to dig into the nuts and bolts to the point of having a fundamental understanding of the project that produces the distribution they are using. I think this is great in a lot of ways; it's a standing ovation to the fact that Linux has "made it" and that we collectively in the upstream communities are providing quality software that attracts users of all kinds, technical or otherwise. The second reason is that this poses an interesting marketing problem for both Fedora as an upstream and Red Hat as a company: how to properly communicate to users and potential users the things that are interesting to them, since Linux itself isn't inherently interesting to as wide an audience as it once was, with popular tech trends shifting away from the system itself toward the things you can run on top of it (and, recently, in containers). Now, Red Hat has done a great job of making that message clear to its customer base with material that covers the entire Red Hat Technology Portfolio. I also think that Fedora in recent years has been doing a really good job of showing off the features of each Fedora Edition: Workstation, Server, and Atomic, which highlight features beyond just the core distribution that are tailor made for specific users and potential users. We just need to continue to show up to user groups, MeetUps, and conferences with good representation to help spread the word. On that note, a massive thanks to the amazing Fedora Ambassadors. I'd also like to find a good way to get the message out to more users in various online and programming communities, something similar to Fedora Loves Python but for various Special Interest Groups within Fedora. Just food for thought.

Overall I think we're doing good work and a good job spreading the word; it's just interesting to see how trends in technology change, how the landscape changes, and to try to identify how we as a community need to adapt. Kudos to the whole Fedora Community!

I also had a chance to make some new friends from GNOME, EndlessOS, OpenSUSE, debian, opensource.com and LinuxAcademy as a side effect of spending so much time in the Expo Hall. In typical Linux Community fashion, everyone was extremely friendly and I had a great time. :)

I spent most of my time working the booths, but I was able to make it to a couple sessions while the Expo Hall was closed. I've taken some notes on those below.

Kubernetes 101

Project Atomic's very own Josh Berkus gave a wonderful overview of the Kubernetes architecture, walking the audience through the various components of Kubernetes as well as how you would take a traditional application, deployed as a "monolith" all on a single physical or virtual machine with only vertical scaling, into a multi-node orchestrated deployment of containerized services.

This talk was accompanied by multiple live demos using minikube (with a tip of the hat to minishift) in order to show how the concepts presented during the talk map to real world deployment and configuration within the cluster.
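
For anyone who hasn't seen this style of demo, here is a minimal sketch of the kind of workflow minikube enables (the nginx deployment and the flags are my own illustration, not taken from the talk):

$ minikube start                                              # boot a single-node local cluster in a VM
$ kubectl run nginx --image=nginx --replicas=2                # deploy a small containerized service
$ kubectl expose deployment nginx --port=80 --type=NodePort   # make it reachable from outside the cluster
$ minikube service nginx --url                                # print the URL of the exposed service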

Patterns and Anti-Patterns in Docker Image Lifecycle

A speaker from JFrog talked about Docker image lifecycle management, using Artifactory as an example component.

The presentation began with a poll of the room about who is using Docker in development versus in production. A primary point of concern is the concept of "trust": knowing whether there is enough integrity in the process and in the images you pull down to build your application or CI pipeline on top of.

"There is no platform without ecosystem" - Solomon Hykes (CEO Docker Inc.)

Do we have an existing pattern? Do we need to adapt it? CI/CD pipelines? (These are old news, this has been done for years)

The speaker mentioned a concept called "The Promotion Pyramid," which could basically be turned on its side, with the layers of the pyramid changed to boxes with arrows between them, and it would look just like a production pipeline diagram.

Onward to Dockerfiles! The Dockerfile is extremely powerful; the problem is that it's a hammer and everything now looks like a nail. Fast and cheap builds are not the way to go.

# Anti-pattern example: nothing below is pinned to a version
FROM fedora

RUN dnf install -y python
RUN dnf install -y nodejs

RUN mkdir /var/www

ADD app.js /var/www/app.js

CMD ["/usr/bin/node", "/var/www/app.js"]

The problem with this is that there's no versioning on anything, so each build may or may not produce the same thing, because each dnf command could install a different version of each component. You can use a SHA digest to refer to the image version, but digests are not human readable, so that's kind of pointless; the alternative is to maintain your own base image.
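
To illustrate the trade-off between tags and digests (the digest below is a made-up placeholder, not a real one):

$ docker pull fedora:25               # human readable, but the tag can be silently re-pointed
$ docker pull fedora@sha256:<digest>  # immutable, but meaningless to a human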

The pattern to follow to fix this is to promote immutable binary files (see Martin Fowler's Immutable Server).

Promotion between registries and repositories is basically the only real option, because the only way to "version" Docker images is with tags, tags have no concept of version ordering, and you can only run one registry per host unless you use virtual hosts and map many daemons to different ports.
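
In practice, promotion between registries looks something like the following (the registry hostnames are hypothetical):

$ docker pull dev-registry.example.com/myapp:1.2.3        # fetch the candidate image from the dev registry
$ docker tag dev-registry.example.com/myapp:1.2.3 prod-registry.example.com/myapp:1.2.3
$ docker push prod-registry.example.com/myapp:1.2.3       # promote the identical bits to prod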

As a side note, the "promotion pipeline" is effectively the exact structure that Fedora Containers follow.

Fedora Work Session and Meetup

Open session to:

  • Work on development
  • Say Hello
  • Meet others
  • Ask questions about challenges you're facing
  • Share knowledge
  • Find out how to help

This was a really fun grassroots "Birds of a Feather" style session where everyone in the room made a big circle, went around and introduced themselves, and briefly talked about what they are using Fedora for. This spanned from Workstations and Cloud all the way to ARM boards in small embedded devices deployed out in the field. I had a great time learning what everyone was up to with Fedora and I want to thank Perry Rivera for setting it up and hosting. Perry was also kind enough to take notes and collect feedback based on everything that was discussed.

Closing time

I had a wonderful time (aside from not feeling 100% - I came down with a sinus infection the day before flying out to SCaLE), the event was fantastic and I really hope I get the opportunity to go back next year.

Until next time...

DevConf.cz 2017 - Define Future.

I was fortunate enough to attend DevConf.cz 2017 this year; it's honestly one of my favorite events of the year. Many people from the various upstream communities I work in or closely with come to discuss and present various technologies, and it's a lot of fun.

This year I tried very hard to attend as many presentations as possible as well as catch up with other community contributors in "The Hallway Track" because I have in the past been bad at balancing between the official speaker track and The Hallway Track. I like to think I did well. :)

Some of the big themes of the event were Continuous Integration, container technologies, the Project Atomic family of technologies, Ansible, and cross-community and cross-distro collaboration (making more of an effort to not reinvent the wheel). As a point of reference, subtopics of these include Fedora Modularity, Atomic Host, and Factory 2.0.

The event organizers were kind enough to post video recordings of all the speakers, and I highly recommend that anyone interested in these topic spaces check out the lineup; it was quite good. Speaker Recordings here.

Below are quick notes about the sessions I had the opportunity to attend including a recap of my experience with "The Hallway Track" at the end.

DevConf.cz Day 1

Keynote and Welcome to DevConf.cz 2017

DevConf started off with a quick welcome message about the Conference and a short history including fun facts about how much it's grown in recent years.

After the intro and welcome, it was off to the races with the Day 1 Keynote that discussed the concept of how "software is eating the world" and how the reality of more and more things moving to software is feeding into the Hybrid Cloud concept. In the modern landscape, this solution space can be catered to using only open source software by providing a platform to make infrastructure consistent and stable. At the previous DevConf there was a Keynote that spoke about full end to end Hybrid Cloud as an abstract concept that we as an open source technology ecosystem aimed to accomplish based on current (at the time) market trends. The bulk of this talk was a series of presenters performing live demos, each one effectively built on top of the previous in order to show how the abstract goal presented in the previous year's Keynote has now become a reality.

Several open technologies made their debut on stage as part of these demos.

Welcome to DevConf.cz 2017 and Day 1 Keynote video

Generational Core - The Future of Fedora

Next up was a session dedicated to Fedora Generational Core, a core component of Fedora Modularity (or it was; the name has more or less changed but the concept remains the same). Generational Core is now known as Base Runtime; these were originally different concepts targeting different use cases, but they have merged over time. The Base Runtime is what defines "the line" between the operating system and the application. The main goal is to have an environment that can be the building block for all other modules and content, with a small package list and a relatively low maintenance burden, while remaining stable and of high quality. The Base Runtime will be the first real module shipped as part of Fedora Modularity.

The bulk of the discussion was off in the weeds talking about the journey to trim down the dependency chain. There was a graphic (in the video link below) that shows the incredible web of dependencies for even some of the most fundamentally required packages to have a functional base environment. It was a great tour of how much work is required to make this stuff happen and highlights that Fedora Modularity isn't just new metadata on top of groups of RPMs.

Generational Core - The Future of Fedora video

Atomic Cluster in 10 Minutes

This was a quick 30-minute session that briefly covered some introductory material about the Project Atomic family of technologies and then dove right into a live demo using package layering on top of the base rpm-ostree that comes out of the box with Atomic Host. This functionality comes from either rpm-ostree pkg-add or atomic host install, both of which can be run multiple times with different packages; they simply add to the new layer on top of the base. That added layer will also be rebased onto any future updates to the underlying system.
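
As a rough sketch of what that package layering looks like in practice (the package names are my own examples):

$ rpm-ostree pkg-add htop      # layer a package on top of the base ostree
$ atomic host install tmux     # alternate front end for the same operation
$ systemctl reboot             # boot into the new deployment with the layered packages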

The main headline of the demo was showing off the new upstream kubeadm init command from Kubernetes. This command allows for quick setup, so you can be up and running and kicking the tires in no time (well, 10 minutes or less).
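
For reference, bootstrapping with kubeadm is roughly a two-command affair (the token and address are placeholders):

master$ kubeadm init                              # initialize the control plane; prints a join token
node$   kubeadm join --token <token> <master-ip>  # join a worker node to the new cluster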

Atomic Cluster in 10 Minutes video

Atomic System Containers

Atomic System Containers are an interesting new technology that allows a system administrator to augment Atomic Host without having to modify the base ostree. This allows even your container engine daemon to run as a container itself. The goal is to provide services that look and feel native to the system but are containers (for example: atomic install foo && systemctl start foo.service, where foo is a containerized service). This breaks down to services distributed as OCI images, executed using runc, with systemd managing the lifecycle, ostree handling storage management, skopeo handling download/transport of images, and metadata/configuration templates for the various integration points. Also, any existing Docker image could be converted into a System Container by simply adding the configuration templates.

You can demo some of this now on Atomic Host using the atomic install --system [--name=NAME] CONTAINER command.
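
Putting that together with the systemd integration described above, the flow would look something like this (the image name is a placeholder, not a real published image):

$ atomic install --system --name=myservice registry.example.com/myservice  # fetched via skopeo, stored in ostree
$ systemctl start myservice                                                # lifecycle managed by systemd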

Atomic System Containers video: https://www.youtube.com/watch?v=yQZiRWWEPYo

Building and Shipping your own Atomic Host

This was a great workshop session put on by Jonathan Lebon that shows you how to do exactly what the title says. A great guide for the workshop was also provided (linked below). I suggest anyone interested in the topic check out both the PDF and the video below.

Workshop Guide PDF

Building and Shipping your own Atomic Host video

Audit and Namespaces, Looking Towards Containers

The main outline of this talk aimed to cover:

  • Problems facing auditing and namespaces
  • What auditing means for containers
  • Possible solutions

First up was an introduction to Audit itself. Audit is a Linux kernel auditing mechanism and daemon, originally released in 2004; it works with SELinux and is effectively a really high powered syslog focused on kernel space. Audit is a reporting tool: it monitors and reports but does not take action, with a single exception: you can configure it to kernel panic the system if Audit is ever unable to log an action properly (apparently some high-security sites would rather have a system outage than have anything happen without proper auditing).

Next came a discussion of kernel namespaces, the various ones that exist, and when each was introduced to the kernel. From there, the talk moved to what containers are and the misconceptions that have grown up around them. For starters, the kernel has no concept of a container; a container is a higher level abstraction that combines kernel features (namespaces, seccomp, cgroups, etc.). The problem comes in that there is only one audit daemon per system, because there is only one kernel per system. This makes it difficult to map audit events to the various namespaces (or combinations of namespaces, depending on container storage or networking configuration).

Audit and Namespaces, Looking Towards Containers video

DevConf.cz Day 2

Keynote: A Story of Three Distros: Better Together

On Day 2 of DevConf, I had the honor of being included as a participant in the Keynote, which was led by Red Hat's VP of Engineering, Denise Dumas.

This keynote was a discussion of Fedora, Red Hat Enterprise Linux, and CentOS as the three-distro lineage that makes up the Red Hat family of distros: our individual histories, how we came to co-exist as a cross-distro collaborative effort around operating system technologies, and our plans to collaborate even more in the future around container technologies and runtimes. The discussion further extended the concept of a runtime to the idea of migrating runtimes between distros as we decouple them from the operating system, as with containers or Software Collections.

Day 2 Keynote video

OpenShift as Enterprise Kubernetes

OpenShift is Kubernetes with many added developer features. One of its main goals is to be an enterprise-grade, on-premise Kubernetes distribution that gives everyone the power to run agile, reliable, distributed systems. However, there are some misconceptions about containers and orchestration systems such as OpenShift. First off, containers are not lightweight virtual machines; they are entry points for services in a distributed system and the building blocks for applications. The idea here is to "write applications, not containers."

The OpenShift Platform provides: service discovery, auto-scaling based on usage metrics, persistent storage management, configuration and secrets management, access to platform API from containers, self-deployable applications, application life cycle management, and build pipelines. The Control Plane is a set of components that all run on the master node(s): API Server, etcd, the cluster scheduler, and controller manager.

OpenShift is an extremely powerful and very cool platform, and I urge anyone interested to watch the video below; it was an extremely well thought out and thorough examination of the technology stack.

OpenShift as Enterprise Kubernetes video

Layered Image Build Service: Lessons Learned

I'm proud to say that this presentation was one of mine; I was honored to be able to speak at the event and I greatly enjoyed the experience.

This talk was about the Fedora Layered Image Build Service and lessons learned along the way. I started off by covering the topics of the day and then dove right in. I began with a fun tale of the time the Fedora Project Leader, Matt Miller (no relation), said (paraphrasing) "There's this open source layered image build system I heard about, we should deploy one!" which started my 18-month journey to a GA release of the Layered Image Build Service for Fedora. I discussed progress along the way and pain points, highlighted and thanked the various upstreams that kindly supported me along the way, and tipped my hat to the power of OpenShift. The fundamental lesson learned in all of this is that nothing in container land is set in stone: expect APIs to change, and expect backwards-incompatible changes to be the norm.

We then defined containers quickly, had a history lesson on their lineage in Linux space, covered the differences between a Layered Image and a Base Image, and discussed OpenShift as a platform and the use of its build pipeline and API to create custom tooling (such as with OSBS).

Another topic of interest as it relates to this system is Release Engineering, most notably the cornerstones of making software that is Reproducible, Auditable, Definable, and Deliverable. These help explain some of the design decisions of the system.

Finally came the discussion of the Layered Image Build Service itself and the Fedora-specific implementation.

Layered Image Build Service: Lessons Learned video

Fedora Two-Week Atomic Host: Progress and Future

This session was also one that I presented; it was about Fedora Atomic Host, the progress so far on the initiative, and plans for the future. First off, I wanted to frame the discussion around Release Engineering and how Fedora traditionally works. As with my previous session, I defined Release Engineering as creating a software pipeline that is Reproducible, Auditable, Definable, and Deliverable. As a point of reference, a "Compose" is the collection of primitive build artifacts (RPMs), the creation of deliverables (ISOs, virt images, cloud images, OCI-based images, etc.), and the combination of these into a collection that is ready for testing and release. From there the discussion moved to how the Fedora release process works: it is time based (roughly six months); there are nightly Rawhide composes; DistGit is branched for each upcoming release, which triggers composes for Branched; then the milestone freezes (Alpha, Beta, RC, GA) go into effect, with changes subject to Fedora QE; the updates criteria are updated; and ultimately the GA release ships.

However, the goal of the Two-Week Atomic Host initiative was to move Fedora Atomic Host out of the Fedora six-month release cycle in order to allow it to iterate more rapidly. We also wanted to create a fully automated pipeline for release, integration, validation, and delivery. We've accomplished a lot on that journey, such as the creation of the new dedicated Atomic Host compose, which allows changes to be made that won't impact the rest of Fedora; automatic generation of ostree content based on Bodhi updates; AutoCloud automated testing; and a two-week release cycle that is mostly automated (we just need to finish the automated signing work). In the future we hope to make even more progress on automated signing, a fully automated end-to-end release (using loopabull), and removing kubernetes from the base ostree and moving it into a system container (which would make the Atomic Host image smaller and provide more flexibility and choice of container orchestration runtimes for users). We would also like to change the default configuration to use overlayfs for container storage on the backend, as well as add kubernetes and OpenShift testing, both single-node and multi-node.

Fedora Two-Week Atomic Host: Progress and Future video

DevConf.cz Day 3

Keynote: History of Containers

The third day of the conference started with a really fun, entertaining, and light-hearted exploration of the history of containers, starting from virtual machines in 1963, through the creation of the OCI, all the way up to a comical debate-style presentation about the future of containers and wild ideas like microkernels.

One of my favorite parts of this talk was Steve Pousty's introduction of a new analogy to replace what used to be known as "Pets vs Cattle." The "Pets vs Cattle" analogy is often used to distinguish computing resources whose long life and substantial uptime we care about (such as virtual machines) from computing resources that are ephemeral in nature (cloud instances and containers). The presenter pointed out that this analogy is not only offensive to the billion-plus people on the planet who consider cows sacred animals, it is also inaccurate: ranchers care a great deal about their cattle. The newly proposed analogy is "Ants and Elephants": ants are hive minded, often ephemeral, and scale horizontally to accomplish a task (which is more or less what containers aim to do), while elephants spend a lot of time taking care of members of their herd, have grave sites where they pay respects to fallen members, and are large animals that can perform large tasks on their own. From now on I will use the "Ants and Elephants" analogy, and I highly encourage others to join me.

Keynote: History of Containers video

Commissaire: Exposing System Management

The presentation on Commissaire introduced the project and its goal of exposing systems management over a simple JSON-RPC based API that uses kombu for AMQP and performs tasks on the backend with Ansible. Also of note: the Commissaire developers are working upstream with Ansible on the Python 2 to Python 3 transition. The overall goal is to be able to easily perform maintenance tasks across a container orchestration environment such as kubernetes or OpenShift.

Commissaire: Exposing System Management video

Ansible for people allergic to Dockerfiles


This short, 30-minute session introduced the concepts behind ansible-container and how it aims to enforce best practices across Ansible modules so they can easily be reused for both container and non-container creation and deployment. There was also discussion of how ansible-container can deploy to orchestration engines automatically (kubernetes and OpenShift are currently supported).
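
Going from memory, the basic workflow looked roughly like the following (subcommand names are from the early 0.x tool and may well have changed since):

$ ansible-container init    # scaffold a project: container.yml plus the build playbook
$ ansible-container build   # run the playbook in a builder container to produce images
$ ansible-container run     # launch the built services locally for testing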

NOTE: I was unable to find the video of this talk.

Linch-Pin: Hybrid Cloud Provisioning with Ansible

Linch-Pin is a tool that aims to provide simple provisioning and tear-down of environments across multiple on-premise and public cloud providers using Ansible. The utility currently targets short-lived testing environments, with long-lived production scenarios planned for the future.

The guiding principle of Linch-Pin is that "simple is better"; it originated as a replacement for a really complicated utility called "Provisioner 1.0" (to the best of my knowledge, Provisioner 1.0 is not a public/open source tool). Linch-Pin performs installation/provisioning of systems based on "topology files" as input; the output is logging information about the creation as well as an Ansible inventory file that subsequent Ansible commands can use to find and access the specific systems Linch-Pin created. Documentation can be found here.

Linch-Pin: Hybrid Cloud Provisioning with Ansible video

Scaling Up Aggregated Logging and Metrics on OpenShift

This session was a technical deep dive into resolving some really interesting problems at substantial scale in an OpenShift container orchestration cluster. The scenarios examined targeted solutions for clusters with over 10,000 pods, covering areas such as how to scale ElasticSearch, Kibana, Cassandra, fluentd, and heapster. The session gets off in the weeds quickly and is very technical; anyone interested in these topics, or who may run into scale issues at this level, is highly recommended to check out the video.

Scaling Up Aggregated Logging and Metrics on OpenShift video: https://www.youtube.com/watch?v=afHxhyOyl1o

Deploying Applications on Atomic Host with Ansible

This session's time slot was also a quick 30 minutes, and it focused primarily on a quick overview of Atomic Host, docker build, Ansible, and Cockpit. Then it was demo time: the presenter showed her Ansible playbook and Dockerfile, explaining what each does along the way. From there it was a live demonstration of the entire thing working end-to-end to build and deploy a containerized application on Atomic Host using Ansible and Docker.

Deploying Applications on Atomic Host with Ansible video

Testing and Automation and Cooperation: Oh My!

Yet another quick 30-minute time slot that covered a considerable amount of ground. This session covered Fedora's plans for a fully integrated CI pipeline for the entire distro, with updates being gated by CI but easily overridden if and when needed. As an example, the OpenStack project already has this kind of CI pipeline. In Fedora land, we need to firmly decide what is considered the "input stream" for a CI system, and determine what we want to gate on (which turn out to be difficult questions to answer). Then we need to find a place to run all the tests. As a point of note, collaboration on testing can be difficult: testing is often project-specific, requirements differ, and sometimes cross-community politics come into play. We collectively need to start moving towards a common backend toolchain in order to move towards true cross-project collaboration. Currently we're targeting Ansible as that common piece (OpenStack Zuul already uses Ansible on the backend).

Testing and Automation and Cooperation: Oh My!

Hallway Track

The hallway tracks are always some of my favorite times at conferences, and DevConf.cz is certainly no different. Because of their nature I don't have good notes on the discussions that were had, but I've included highlights of the ones that stick out most in my memory.

Project Atomic

I had the opportunity to meet up with some community members of the Fedora Atomic WG to discuss various plans for the future, the desire to have multiple update streams, plans for Fedora Containers, and improving the Container Guidelines. All of these topics have since been filed as issue tickets in the Atomic WG pagure.io space for posterity and future work tracking.

Fedora Infra Managed OpenShift

In another hallway track session, a handful of us tossed around wild ideas about having an OpenShift environment in Fedora space that ran on bare metal and could provide shared hosting for upstreams to iteratively work on things in a way that could be integrated directly with Fedora services (such as fedmsg, taskotron, and loopabull). This might turn out to be a bit more far-fetched than we can really accomplish, purely because of the nature of the request, but it's something everyone in the circle thought was a good idea at the time.

Closing time...

That, in a really long-winded nutshell, is my DevConf.cz 2017 experience.

I look forward to DevConf.cz 2018!

Until next time...

Flock to Fedora 2016

Flock to Fedora: Fedora Users and Developers Conference.

Every year, the Fedora user and developer community puts on a conference entitled "Flock to Fedora," or "Flock" for short. This year was no different, and the event was hosted in beautiful Kraków, Poland. The event had such an amazing lineup that I rarely had time for the always fascinating "hallway track" of ad-hoc discussions with various conference goers, but in the best kind of way.

Note

At the time of this writing the videos had not yet been posted, but it was reported that they would be found at the link below.

All the sessions were recorded and I highly recommend that anyone interested check them out here.

I will recap my experience and takeaways from the sessions I attended and participated in, as well as post slides and/or talk materials that I know of.

Flock Day 1

Keynote: State of Fedora

Flock Day 1 started off with a bang: our very own Fedora Project Leader, Matt Miller, took the stage for the morning keynote and discussed the current state of Fedora, where we are, where we're going, ongoing work, and notable Changes currently under way.

One of my favorite takeaways from this talk was about contributor statistics, gathered based on contributor activity as represented within the Fedora Infrastructure via fedmsg and datagrepper (datanommer). The statistics showed that there are over 2,000 contributors, of which roughly 300 do 90% of the work (which sounds odd, but is statistically better than average), and of the group that does 90% of the work, only about 35% work for Red Hat. I'm a big fan of these kinds of numbers because they reinforce that Fedora is in fact a community driven project, of which Red Hat is simply a participant and sponsor.

Flock 2016 Keynote State of Fedora slides

Introducing Fedora Docker Layered Image Builds

The next time slot I attended was my own presentation on the Fedora Docker Layered Image Build System, where I introduced something I've been working on for quite some time with the various upstream projects whose technologies come together to form this system. Before diving into the new service, I went on a brief history lesson about what containers are, what they are in the context of Linux, and the various implementations, of which Docker is simply one. The main reason I like to start there is to establish that we hope to support all kinds of Linux container runtimes and image builds, but we must start somewhere, and with Docker being the most popular it makes sense to target it first. (You'd be surprised how often the question of supporting other image formats comes up.)

In an attempt to make sure there were no gaps in knowledge for everyone in the room, I did a quick overview of what specifically Docker is, how containers are instances of images, and how images themselves are most commonly built (Dockerfile). We then progressed into concepts of Release Engineering and why it is desirable, as outlined in an article I wrote for OpenSource.com recently. From there we traversed the wild world of distributed container runtimes and orchestrators, most notably OpenShift, as it's a core component of the Layered Image Build Service. We also discussed components used within the Docker Layered Image Build Service such as atomic-reactor, osbs-client, and koji-containerbuild. The last of these enables the fedpkg workflow for layered image builds, so Fedora contributors can work just as they are used to for RPMs.
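
As a sketch of what that contributor workflow looks like (the package name is hypothetical; the container-build subcommand comes from the koji-containerbuild plugin mentioned above):

$ fedpkg clone container/foo && cd foo   # the container's DistGit repo holds its Dockerfile
$ fedpkg container-build                 # submit a layered image build to Koji/OSBS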

I then did a demo, which of course failed (as per the Demo Gods), but I was able to show a previously successful build.

Note

I have at this point diagnosed the issue found during the demo and it has been resolved.

Introducing Fedora Docker Layered Image Build slides

Getting New things into Fedora

In the recent past there has been a general communication breakdown between developers and Release Engineering, which has resulted in some issues integrating net-new deliverables within the Fedora Project. This presentation discussed the process by which new changes should come in, the timelines by which things should be accepted, and the various Release Engineering tools that need integrating with.

However, there was an admission that the documentation around these items could be better and that the Release Engineering tools could be more approachable for outsiders, in order to help with on-boarding new changes into the processes and tooling. These items have shown improvement in the past year, with further improvements planned.

There was a lively discussion of ways to make this better and I look forward to seeing positive movement come as a result.

Hacking Koji for Fun and Profit

In this session, tips and tricks for hacking on the Koji build system were the focal point. What Koji is, who uses it, and why someone might want to hack on it were explored. Then an overview of the major components of Koji was presented, in an attempt to give potential developers an idea of where to look in the code depending on which component they are trying to augment or supplement. From there, a quick example of the Python API was covered as a way to get started, including a reference to a more advanced example contained within the koji code itself. Next up was an advanced CLI walkthrough that showed how to call the XMLRPC API directly, just as you can via the Python API.
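
For a taste of what that direct XMLRPC access looks like from the CLI (the tag name is just an example):

$ koji list-api         # enumerate the XMLRPC methods the hub exposes
$ koji call getTag f24  # invoke a method directly, bypassing the porcelain subcommands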

There was a section of the session focused on the Koji Hub, the user-facing component, including how to theme the web UI, how to change user policy, and how to write plugins that add functionality to Koji via new API calls, policies, and callback hooks.

Next up was discussion of Koji Builder plugins that can add the ability for Koji to produce new types of Build Artifacts.

Finally, how to clone the git repository and build a locally modified version of the code was covered.

During the Q&A portion there was a discussion of how difficult Koji can be to deploy and how it would be nice if there were a way to get up and running quickly for hacking purposes; something completely automated and not necessarily production ready would be desired. There was also lively discussion about the future of Koji and the iterative improvements already made in refactoring the code, as well as plans for more. Originally there was a grand plan for a "Koji 2.0" that would be a complete rewrite, but as time has gone on that has proven too lofty a goal to realistically achieve, so the more iterative approach is being taken.

Hacking Koji for Fun and Profit slides

Containers in Production

Dan Walsh discussed running containers in production, a topic that is hot on many people's minds as container technology races into the mainstream as fast as OpenStack did before it. This session discussed various means of container runtime execution, including Docker and its daemon. This included the Docker daemon's strengths and weaknesses, and why alternative execution methods might be desirable, or at least worth considering, for production workloads and environments. Aspects such as storage configuration, non-privileged runtimes (security), remote inspection, fault tolerance, and systemd integration were discussed.

The presentation included a strategy for running production-ready containers using runC for execution of Open Container Initiative (OCI) compliant container images (such as Docker images).

https://github.com/containers

https://github.com/projectatomic/skopeo

Fedora's MirrorManager: now and in the future

The session about MirrorManager was extremely informative, covering various aspects of the project: a brief overview of its history, then a dive into current features, limitations, and the things planned for the future to improve and enable the mirroring of new artifacts.

There were plenty of items that I would like to follow up on as there's so much about content mirroring that I don't currently understand.

I sadly did not take nearly as good of notes during this session as I had hoped to. I highly recommend anyone interested in the topic of content mirroring to watch the recorded session for more information.

Fedora ARM State of the Union

Peter Robinson gave a presentation about the current state of Fedora ARM, covering both armv7hl and AArch64. At the start he requested that questions about specific dev boards be held to the end, because there would be a section of the session dedicated to that. The exploration of the trials and tribulations of bringing new hardware to life was interesting (at least to me), as there are so many things that we in the pre-existing hardware platform world take for granted. Many things about the ARM world and boot firmware are difficult because of the lack of standardization around developer board boot methods, beyond just the standard trouble of bringing up new hardware that doesn't yet have support for everything necessary in the kernel. Beyond the kernel, the compiler toolchains and programming language tooling need added support for new architectures such as ARM; various points of this were discussed, with examples of areas where people in the Fedora community have stepped up to help (the Haskell SIG being one great example).

From there, discussions of various developer boards spiraled off into the weeds of things I definitely don't understand about the finer points of ARM board bring-up, but it was interesting to listen to the state of things and take notes on topics to learn about.

University Outreach - New Task or New Mindset?

Justin Flory and Jona Azizaj presented the history of the University Involvement Initiative, the struggles met while attempting to expand its adoption and reach, and eventually its decline. This session was a call to arms for community members with ties to universities, either as active students or alumni, to help bring the initiative back to life. The main idea is to help foster the open source community by bringing an active student population into its ranks. There was a lot of positive feedback and interest shown during the session, and I have high hopes for the future of the initiative.

Fedora Engineering Team Dinner

While not on the Flock schedule, this was a personal highlight for me as a member of the Fedora Engineering Team because we are a geographically dispersed team that lives and works from all corners of the planet. As such, we rarely get the opportunity to all be in the same place, at the same time, and in a social setting (as opposed to getting work done). It was great to be able to sit and chat with colleagues and discuss both work and non-work topics and get to know them better on a more personal level.

The main take away: I love my job, I love my team, and I love my company.

Day 2

Kirk, McCoy, and Spock build the future of Fedora

Matt Miller took us on a Star Trek themed adventure that led to the Kellogg Logic Model being applied to Fedora initiatives and how each Working Group (WG) or Special Interest Group (SIG) could use this model to help drive its goals as well as frame its overall initiatives to others, including the Fedora Council and FESCo. The session slides were covered rather quickly, and then discussions and questions broke out about how we could use this for various things, along with general questions about the logic model.

The Fedora Modularity Logic Model was an example where this is already being used within the Fedora Project with success.

Modularity: Why, where we are and how to get involved

Fedora Modularity is a new initiative focused on rethinking how Linux distributions are composed. Instead of a pile of software packages, a distribution could be a grouping of modules that can be managed as disjoint units and lifecycled independently of one another.

Background on the topic leads back to the Rings Proposal (a part of Fedora.next), where we think about the distro as a set of rings: the center of the rings, the central point of the operating system, holds the most curated components, and as you get further from the center there is less and less curation.

However, as time went on there was less and less correlation, to the point that the rings analogy doesn't really fit. For example, any given package can change over time or need to be a different version in a different use case or scenario.

Consider two different use cases: a new website built on the latest technologies versus an ERP system. You want different lifecycles and different "aged" or differently "proven" levels of technology for each. This is the problem that modules hope to solve.

What is a module?

  • A thing that is managed as a logical unit
  • A thing that promises an external, unchanging, API
  • A thing that may have many, unexposed, binary artifacts to support the external API
  • A module may "contain" other modules; such a module is referred to as a "module stack"

Base Runtime (Module Stack)

  • Kernel (module)

  • userspace (the interface to userspace, coreutils, systemd, etc)

    • Their build requirements are not part of the module, but simply a build requirement.

modulemd: Describe a module

  • yaml definitions of modules, standard document definitions with "install profiles"
  • install profiles
  • definition of components included in a module

There was plenty of discussion around these topics and suggestion that people attend the workshop the following day.

Factory 2.0

As with all things in technology, we want to constantly move faster and faster, and the current methods by which we produce the operating system just won't scale into the future. Factory 2.0 is an initiative to fix that.

The presentation kicked off with a witty note that we have entered "The Second Eternal September": GitHub and node.js have changed how people fundamentally expect to consume code.

Dependency freezing has become common practice these days because of the node.js and rubygems communities' impact on developers. Examples include:

  • pip freeze > requirements.txt
  • ruby bundler
  • nixOS
  • coreOS
  • docker and friends

Brief overview of Fedora Modularity was given for those who didn't make it to Langdon's session on the topic.

Matt Miller started with Fedora.Next -> Rings, then Envs and Stacks; Red Hat is now funding a team to accomplish this.

Backing up first to discuss how not to throw things over the wall: in the past there have been discussions about how to articulate "Red Hat things" in the Fedora space. Ralph Bean (our presenter) works for a group in Red Hat called RHT DevOps.

There are analogous groups within Red Hat and the Fedora Community:

Fedora Packagers -> RH Platform Engineering

Fedora Infra -> RH PnT DevOps

What Factory 2.0 is not: a single web app, a rewrite of our entire pipeline, a silver bullet, a silver platter, just modularity, or going to be easy.

"the six problem statements"

  • Repetitive human intervention makes the pipeline slow
  • unnecessary serialization
  • rigid cadence
  • artifact assumption
  • modularity
  • dep chain

"If we had problems before, they're about to get a lot worse" (Imagine modularity without Factory 2.0)

The idea is to use pdc-updater to populate metadata tables with information about dep chains; we would then use that information with other tools like pungi, but also with new tooling we haven't even thought of yet.

Unnecessary serialization makes the pipeline slow; one big piece needed to solve this is the OpenShift Build Service (OSBS). We're also going to need an autosigner.py to get around new problems (assuming we "go big" with containers).

Automating throughput: repetitive human intervention makes things slow, for both builds and composes. We want an orchestrator for the builds and the composes; the best case scenario is that things are built and composed before we ask for them.

The Two-Week Atomic Host is something of a case study we should learn lessons from, in order to merge the changes needed back into the standard pipeline instead of the parallel pipeline that was spawned.

Flexible cadence: the pipeline imposes a rigid and inflexible cadence on "products." This relates to the previous point about automating releases; "the pipeline is as fast as the pipeline is."

EOL: think about the different EOL discussions for the different Editions. Beyond that - a major goal of modularity is "independent lifecycles"

"I want to be able to build anything, in any format, without changing anything" (not possible) but we can make the pipeline pluggable that will make it easier over time to add new artifact types to the pipeline.

"The pernicious hobgoblin of technical debt" as Ralph called it.

Ways we can do better and refactor:

  • Microservices (consolidate around responsibility)
  • Reactive services
  • Idempotent services
  • Infrastructure automation (Ansible all the things)

Docker in Production

The Docker in Production session was a very brief walkthrough of how you can go from your laptop to a production environment. This effectively boiled down to best practices for how to "containerize" your application properly, ways to build Docker images and tagging schemes you can (or should) use, a distribution mechanism for the images, and finally a distributed orchestration framework such as Kubernetes, OpenShift, or Docker Swarm.

Pagure: Past, Present, and Future

Pagure is a git forge.

The old version was very simple, with three repos per project: source, tickets, and pull requests. It recently got a new UI (thanks to Ryan Lerch).

Forks, pull requests. (A very GitHub style workflow).

If you want to run your own Pagure, all you need are the web service and the database. If you'd like all the bells and whistles, you'll also need to add a mail server (pagure milter), the Pagure EventSource server, gitolite, and a message bus.

Doc hosting (an optional fourth git repository per project); in the future they are considering something similar to GitHub Pages.

"Watch" repo, to get notifications for a project you're not directly involved in or to opt out of notifications for a project you are directly involved in.

Roadmap in the Issues tab in the UI for milestones and arbitrary tag filtering.

Issue templates, delivered by markdown files in the issues git repo. Also, can set a default message to be displayed when someone files a new pull request.

Diversity - Women in Open Source

The session on Fedora Diversity began with a lot of wonderful information about the initiative, and I have outlined the focal points of those slides here to the best of my ability.

  • Started roughly a year ago
  • There now exists an official Fedora Diversity Adviser
  • Myths
    • Women are not interested in technology
    • Women can't do programming
    • Men developers are more talented than women developers
    • There is no work-life balance for women who work in the tech industry
    • So on and so on ...
  • Facts
    • Women in Technology (Mothers of Tech - BizTech)
      • Ada Lovelace (the first computer programmer)
      • Hedy Lamarr (frequency hopping)
      • Admiral Grace Hopper (created COBOL)
      • Many more ...
    • Women are very creative, versatile, powerful, and intelligent
    • Diversity increases success
  • Initiatives
    • Grace Hopper Celebration of Women in Computing
    • Women in Open Source Award
    • Outreachy
    • Google Summer of Code
    • and many more
  • Gaps
    • Lack of knowledge, encouragement, support, and time commitment

After the slides were done, the session turned into effectively a giant round table of people telling stories of how they've been successful because of diverse teams, reasons they think women and other groups of people are currently underrepresented in Fedora and open source, ways they feel we can increase diversity, and methods that could be used to reach various underrepresented groups in the global open source community.

The GNOME Outreachy program was also discussed as a great example of a program working to move things in the right direction on the topic of how we can actively improve our community and the open source community at large.

I hope to be able to participate in some of the takeaways from these discussions as they are put into action.

Testing Containers using Tunir

tunir is a simple tool that will spawn one or more virtual machines and then execute arbitrary commands, reporting success or failure based on each command's exit code. You can make commands blocking or non-blocking, and tunir has support for Docker images as well as for spinning up a multi-node kubernetes cluster in order to test containers "at scale." The presentation was short and to the point, with plenty of demos showing how easy it is to get started using tunir. Also, tunir is the testing component behind Fedora AutoCloud.
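
Going from my notes and memory of the demos, a tunir job is roughly a text file of commands plus a small config, run by job name (the file layout here is my recollection and may not match current tunir exactly):

$ cat myjob.txt       # the commands to run inside the VM, one per line
sudo docker run --rm fedora echo "hello from a container"
$ tunir --job myjob   # boot the VM, run each command, and report pass/fail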

Cruise Krakow

In the evening of Day 2, the Flock participants had the unique opportunity to dine on the Vistula River and take a small tour up and down the river for some sightseeing. It was a beautiful, scenic way to wind down with fellow Fedora Flockers after a full day of sessions and technical discussions.

Day 3

Lightning Talks

Day 3 kicked off with lightning talks. I presented one myself about a small project I've been working on titled Loopabull, which is an event-loop-driven Ansible playbook execution engine. There were also plenty of other wonderful lightning talks covering topics such as Fedora Marketing, OpenShift provisioning on Fedora with Amazon Web Services, Fedora CommOps, dgplug, and much more.

Automation Workshop

The automation workshop was something of an anti-presentation, as the session leader wanted it to be more of a hacking session or a problem-solving session. As such, ad-hoc discussions and work on automation issues in various areas of the Fedora Infrastructure took place, and people broke off into smaller groups within the room to work or to solve problems.

OpenShift on Fedora

This session was about running OpenShift on Fedora using the latest and greatest features of OpenShift, most notably the new oc cluster up command, an auto-deployment provisioning tool built directly into OpenShift as of version v1.3+ that allows for the automatic creation of a clustered environment. The entire session was provided as a very well documented walkthrough; the link is below.

OpenShift on Fedora Guided WalkThrough
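
For the curious, trying this locally is about as simple as it sounds (assuming an oc client of v1.3 or newer and a running Docker daemon):

$ oc cluster up      # provision a local, all-in-one OpenShift cluster
$ oc cluster down    # tear it back down when finished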

Building Modules Workshop

The module-building workshop came together as a hybrid of presentation, discussion, demo, and "follow along" workshop styles. It was a lot of fun and incredibly informative; there was lively discussion about aspects of a module definition (for me it was mostly about trying to wrap my head around everything, and the session hosts were very accommodating).


Many notes were taken during the session and preserved in an etherpad instance, but in case that gets lost in the ether over time, I have exported its contents to my FedoraPeople space and it can be viewed here.

Brewery Lubicz

Next up was the evening event, which was hosted in a brewery complete with wonderful catering.

As per the schedule:

A feast and beer tasting awaits us at Browar Lubicz, a recently restored brewery. The brewery dates from 1840 and has been brewing beer almost continuously, even during nationalization in the 1950s. Restored in September 2015, the brewery is a high point of a trip to Krakow.

Day 4

Day 4 was Friday, and I slept in a little because I was going to be staying up overnight to catch my 4am taxi to the airport and begin the journey home. I regretfully missed the morning session on Ansible best practices, but I was told it was very good and I have every intention of watching it on YouTube once the video is posted.

What we do for Docker image test automation

I attended this session for about 45 minutes, but it quickly became apparent that the other participants were very new to Docker and taskotron in general and the session would therefore be very introductory in nature, so I departed to join a workshop elsewhere. This session was by no means bad, and anyone who is new to Docker or taskotron and is interested in how these two things are being used together to test Docker images should absolutely watch the recording on YouTube.

Server SIG Pow-Wow

A lot of things are changing in the Fedora Project, particularly around modularization. This session was by and large a collaborative brainstorming and planning session for how to take advantage of the new initiative and how to adapt properly. RoleKit became a focal point of discussion, as did Ansible and a potential integration of the two. Aspects of the discussion related back to the Fedora Formulas proposal, which unfortunately didn't get off the ground at the time.

The session leader graciously took notes and has plans to post a write-up.

Informal Friday Night Shenanigans

On Friday night a group of us Flockers took to the streets of the Kraków city center to take in as much of the local cuisine, culture, and sights as we could on our last night in town (at least it was the last night for some of us). This was a really great outing, and I had the opportunity to make some new friends within the Fedora Community whom I had yet to meet in person. It was a wonderful way to close out an amazing event.

I look forward to Flock 2017!

Until next time...

New Website and Blog

I've finally gotten around to creating my new website and blog, powered by Nikola. Previously I used Blogger, and while that's a fine service that I've enjoyed using for some time, I much prefer being able to edit my blog posts in my local text editor of choice and serve my site as static HTML from my own system. This has taken far longer than I had hoped it would, but I definitely hope it will prompt me to blog more.

If you care to view my old blog, it can still be found here.