Fedora Workstation 32 is Exceptionally Good

tl;dr: Fedora Workstation is really, really good; go download it now.

Some History

I've been a Linux (desktop, laptop, and server) user since somewhere in the ballpark of the year 2000. My memory honestly gets a little hazy that far back because life comes at you fast and 20-ish-year-old memories fade, but let's say 2000 for the sake of argument. My first ever Linux distribution was Red Hat Linux 7 Deluxe Workstation. It came in a boxed set from the retail store Best Buy, and it sat on a store shelf right next to Windows 98 SE (which was for some reason still for sale at the time), Windows Me, Windows 2000, SUSE Linux 7, and probably some other software I don't remember. This was the golden age of the "Year of the Linux Desktop" because Linux was on sale in retail stores right next to Windows; we were poised and ready to take over the world. Well, we did take over the world. Linux now powers everything from maker boards like the Raspberry Pi, to more smartphones than any other operating system in the world, to all of the Top 100 supercomputers on the Top500 list, to cars, appliances, light bulbs, and everything in between. The Cloud runs on Linux, Apple showed off its prowess at running Linux in VMs during its new "Apple Silicon" keynote, Microsoft created the Windows Subsystem for Linux and launched a "Microsoft Loves Linux" campaign, Google released a supported means to run a traditional Linux system within ChromeOS (which is itself a Linux system), and the list goes on. We won, go team.

Except ...

The mythical "Year of the Linux Desktop" never quite came to fruition the way many of us had hoped. Our market share has risen slightly over the years, and the developer community at large focuses on Linux as a development platform for backend services because Linux dominates the datacenter and the Cloud. However, we still struggle to make a dent of the same scale in desktop and laptop market share. The Open Source Community is vast; we have massive communities of users and developers who run Linux on their desktops, laptops, and various other niche personal computing devices with great success and are extremely happy doing so, and I count myself among them. Unfortunately, though, I always felt like we weren't "quite there yet." I see this opinion a lot among Linux Desktop users, and I feel there's validity in it, with supporting evidence in the form of Linux Desktop users seeking refuge on a Mac, Chromebook, or Windows machine and running Linux in a VM, a container, or WSL. The frustrations we all had, I shared in from time to time ... until now.

(Yes, I know someone's going to come at me with anecdotal evidence of how they've been happily using Linux on their Desktop for X years without issues and/or with considerably fewer issues than with Operating System Y. That's great, and I'm happy for you, but this has not been the global experience. Do a couple of internet searches and I think you'll see what I'm getting at. Let's continue ...)

We Have Arrived

`Fedora Workstation 32 <https://fedoramagazine.org/announcing-fedora-32/>`_ is exceptionally good. At the time of this writing it is not new, I know, but at this point in its lifespan I feel it's time to talk about just how exceptional it is, how good it has remained, and how excited I am for the future. First and foremost, GNOME 3.36 is the first time I feel like GNOME 3 provides as responsive a user experience as GNOME 2 did, and it brings a certain fluidity to the desktop as a whole that I had kind of forgotten was missing. It was a long road to get here; many wonderful members of the community poured their hearts and souls into getting us here, and for that I am forever grateful, because this is truly fantastic. GNOME Boxes went overnight from something I thought was more or less a party trick to my favorite way to manage virtual machines on my laptops and desktops, hands down. Flatpaks are the future of desktop applications for Linux. You may or may not agree, but the inherent power is undeniable: a user can truly customize the applications available to them, lifecycle-manage software in a reasonable way without elevated privileges, access various "app stores", get sandboxing mechanisms that require per-application permissions approvals, and do all of this within the confines of their home directory without affecting global system state. This is exactly the type of modern user experience the larger desktop market expects, and it's been realized ... and it is really good. Again, this is not new, but it's come a long way and warrants revisiting and appreciation for how good it has become. Beyond all this, the battery life of my laptop is considerably better than it has ever been before, which is basically just icing on the cake.

The Fedora Workstation Community has done a superb job of taking a pile of parts comprised of disjoint Open Source projects from far and wide across the glorious Bazaar and stitching them together into a cohesive unit that genuinely feels as though it was all created by a single team. This is unique in the Linux world; few have pulled it off before, and something about the fit and finish of Fedora Workstation 32 feels as though the Fedora Workstation Community has truly done it in a way it never has before. This is not something that should be taken lightly, and I do not say it lightly. I've had the good fortune to be a Community Contributor to Fedora for over a decade, I've had the honor of serving as a Community Elected Member of the Fedora Engineering Steering Committee, and I've now been part of the Linux Desktop User ecosystem almost long enough that my tenure could legally buy beer in the United States of America; this is the first time I've ever installed Fedora and thought to myself, "this is truly exceptional."

Don't get me wrong, I love Fedora; I think it's always been amazing, it's been my Linux distribution of choice since not long after the RHL/RHEL/Fedora split occurred, and I've poured over a decade of my life into helping develop, produce, and maintain it. Fedora is part of who I am, and no matter what the future brings, it will always have a place in my heart. I do not make these claims lightly, nor do I want to take away from the mountains that have been moved to get us here and that continue to be moved. But the "product" fit and finish here is exceptional to a level of uncharted territory, and it excites me.

For the first time in the history of Fedora, a major OEM is going to offer Fedora Workstation pre-installed on its business-class laptops. This is history in the making. There is no other fully Community Powered, Governed, and Developed Linux distro with that kind of commitment from a major vendor. Yes, I know that Red Hat sponsors Fedora, but go look at the Governance, Project Structure, and Contributor Statistics; it is Community Powered in earnest (as it should be). I also know about Dell's Project Sputnik and the Dell Developer Laptop Program, which I love, and I gladly give all due credit to Dell and Canonical for making that real and pushing the Linux Desktop agenda forward; but these are different, because Canonical supports Ubuntu and you can pay for support on that machine through Dell or Canonical. There is no paid support or support call center for Fedora; it is truly Community Driven, Community Powered, and Community Supported. That is, at a minimum, noteworthy, and a realization of the maturity of the project and the "product offering" (for lack of a better term) that comes from Fedora.

I think the contributing factors that brought the Lenovo announcement to fruition are many. It's a standing ovation for the Fedora Community: the Developers, Maintainers, Release Engineering, Quality Engineering/Assurance, Designers, Documentation Team, Special Interest Groups, Outreach Programs, Infrastructure Admins, and everyone I might have missed. They've made this possible; the power of the Community and the maturity of the Project make it ripe for use by OEMs in this way. Another factor is truly the fit and finish. The installation of the operating system is simpler than it has ever been, and the Welcome Screen after a fresh install feels like being greeted by a grand new environment that's fully integrated and well defined. Everything about it feels top notch and ready to take over the world. It has an intangible attribute that I can't quite define with words, but it feels "Premium" in a way that makes me expect I should be paying something akin to an "Apple Tax" for it. I am again excited for the mythical Year of the Linux Desktop.

There Is Still Much To Be Done

I am a natural-born citizen of the United States of America, and while we as a demographic have a reputation for declaring victory prematurely, I believe myself not to be so unrealistically optimistic (time will tell). In that spirit, I will admit we have a long way to go to truly be a mainstream competitor to Windows, macOS, and even ChromeOS in raw market share, but I feel we have a truly competitive offering in ways I had never considered before. I see a possible future in which we get there in the next decade.

There's still much to be done, but I absolutely think we're on the right track.

A tip of my hat to the entire Fedora Community for making something extraordinary; I look forward to continued improvement in Fedora 33 and beyond.

That's my $0.02, thank you for your time.

Getting Back Into Blogging

Something I've always told myself is that I'd start blogging again, about tech or just whatever seemed interesting to me at the time. I'm now making a commitment to myself that I will blog once a week. Not every post will be the most amazing thing anyone has ever read, and it's reasonable to think that a lot of people will simply ignore a lot of what I say, and I'm alright with that. If you're here, welcome! If not ... you don't even know this exists, but that's also cool.

I'll likely talk about Open Source Software, Linux, Ansible, and technology in general, so if none of those things interest you then maybe this isn't the blog for you to keep tabs on. If they do, grab the RSS or Atom feed and follow along with the journey!

Until next time...

New Adventures: Ansible Edition

Ansible Logo

I am honored, excited, nervous, humbled, and over all elated to announce that starting December 1, 2017 I will be a member of the Ansible Core Development Team at Red Hat where I will work primarily on Upstream Ansible (what most people would likely know as just 'Ansible', 'Ansible Core', or 'The Ansible Project' depending on who you talk to).

This was without a doubt the most difficult decision I've ever made in my professional career to date. Working on the Fedora Engineering Team had been a dream of mine since I was in college, and I finally achieved it a few years ago. I love the team, I love the work we do, and I genuinely believe we're making a positive impact on the greater open source community. I take a lot of pride in that. I never foresaw a future in which I'd find something that drew my interests in another direction, but then Ansible came into existence. I've been a big fan of Ansible since the project's beginnings and have upstream contributions dating back to its early versions. It's become a newfound driving passion of mine, and when I was approached with the opportunity to work on it full time, I was extremely excited and simply couldn't pass it up.

I want to be clear: I will not be abandoning the Fedora Project by any means. I've been a community contributor to Fedora since 2008, under three previous employers as well as two teams since joining Red Hat, and I have no plans of changing that now. I'm simply going to have to scale back my daily responsibilities due to new priorities. I have every intention of continuing to serve as an elected member of the Fedora Engineering Steering Committee and will seek re-election when the time comes. I will also remain active in the Fedora Atomic Working Group as much as possible.

While this was extremely bittersweet because of my history with Fedora and my earnest enjoyment of working on the Fedora Engineering Team, I'm extremely excited and really looking forward to joining the ranks of the Ansible Team.

A big thank you to the entire Fedora Engineering Team; they are an absolutely stellar group of people and I will always look back on our time together fondly.

Until next time...

AnsibleFest SF 2017

AnsibleFest was amazing; it always is. This was my third one, and it's always an event I look forward to attending. The Ansible Events Team does an absolutely stellar job of putting things together, and I'm extremely happy I was not only able to attend but was also accepted as a speaker.

Kick Off and Product Announcements

The event kicked off with some really great product announcements: some interesting bits about Ansible Tower and the newly announced Ansible Engine.

Ansible AWX

Ansible AWX Logo

As an avid fan of Open Source Software, the announcement and immediate release of Ansible AWX was the headliner of the event for me. This is the open source upstream to Ansible Tower, which Red Hat committed to releasing when Ansible was acquired, in accordance with its continued commitment to Open Source. If you live in Ansible user or contributor land, you know this has been a hot topic for quite some time, and I'm so glad it's been launched officially. I've been learning Django over the last week so I can start contributing. Looking forward to it.

Ansible Community Leader and Red Hat CEO Fireside Chat

Fireside Chat with Robyn and Jim

Immediately following the Ansible AWX announcement was a fireside chat between Ansible Community Leader Robyn Bergeron (previously the Fedora Project Leader) and Red Hat CEO Jim Whitehurst, discussing market trends in the realm of infrastructure automation, the ability to deliver faster and more rapidly, and the challenges businesses are having with the concept of "Digital Transformation." It was really cool to see things from both an open source community perspective and that of a business-minded individual, and to see where those two perspectives met in the middle and/or overlapped.

Ansible Community Days

The day before and the day after the main headline of AnsibleFest were the Community Days. The day before AnsibleFest focused entirely on topics around Ansible Core and the greater Ansible Community. The day after focused on Ansible AWX in the morning, explaining architecture and various technical implementation details to give exposure to those of us in the room who weren't previously privy to that information. The afternoon of the second day involved the "Ansible Ambassadors" community (I'm not sure if this is an official term).

Ansible All The Things

I gave a presentation that I like to call "Ansible All The Things" or "Ansible Everything" (depending on who my audience is and how receptive they are to meme jokes). The basic idea is to look at Ansible not as a configuration management tool, which I feel a lot of the "Tech Media" (for lack of a better term) has classified it as, and therefore how the broader audience often knows it, but instead as a task automation utility. This particular task automation utility also comes with a nice Python API and a way to interact with anything that can "speak JSON." This has some advantages if you step back and think about the abstract concept of a tool with a programming interface that is ultimately as generic as passing JSON around (with added convenience for Python programmers). Effectively you have a method of running a task, or a series of tasks, on one or many systems in your infrastructure. This is powerful enough to be used for all sorts of things: configuration management (yes, Ansible can perform configuration management tasks, but it's also so much more than that), provisioning, deployment, orchestration, command line tooling, builds, event-based execution, workflow automation, continuous integration, and containers.
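The abstract idea of "a tool whose interface is as generic as passing JSON around" can be sketched in a few lines of Python. This is purely illustrative: the module names and functions below are hypothetical stand-ins for the concept, not Ansible's actual API.

```python
import json

# Hypothetical sketch of a "JSON in, JSON out" task runner (not Ansible's
# real API): each task names a module and its arguments; the runner
# dispatches the task and hands back results as JSON.

def run_module(module, args):
    # Stand-ins for real modules; a real runner would execute work on
    # target hosts. "ping" and "echo" here are illustrative placeholders.
    if module == "ping":
        return {"changed": False, "ping": "pong"}
    if module == "echo":
        return {"changed": False, "msg": args.get("msg", "")}
    return {"failed": True, "msg": f"unknown module: {module}"}

def run_tasks(tasks_json):
    """Accept a JSON string describing tasks, return a JSON string of results."""
    results = [run_module(t["module"], t.get("args", {}))
               for t in json.loads(tasks_json)]
    return json.dumps(results)

tasks = json.dumps([
    {"module": "ping"},
    {"module": "echo", "args": {"msg": "hello"}},
])
print(run_tasks(tasks))
```

Anything that can produce or consume JSON, whether a Python program, a shell script, or a remote service, could drive an interface shaped like this, which is the generality the talk was getting at.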

For those who would like to check out my slides, they are here.

Infrastructure Testing with Molecule

I had the opportunity to attend a presentation about Molecule, which I was really excited about because it's a toolchain I've wanted to dig into for a while. The goal is effectively this: Infrastructure as Code, with TDD/CI on your code and, transitively, your infrastructure. What a time to be alive.

Anyway, the talk itself was absolutely fantastic. Elana Hashman is a spectacular speaker, and the amount of research she put into the talk was apparent. The room was captivated, and the questions and conversations were enthusiastic; this was clearly a topic space people were interested in. I also have to tip my hat to the live demo, which went off flawlessly; I've never personally pulled off a live demo with that much live editing of code without at least one goof. Kudos.

For those who are interested in the presentation materials, check them out here. (Do it, it's really good.)

Closing Time

The event was wonderful, and I hope to have the opportunity to go to next year's North America AnsibleFest (they also do one in the EU/UK, but it's not often I can pull together the funding for that trip).

Flock to Fedora 2017

Every year, the Fedora User and Developer community puts on a conference entitled "Flock to Fedora", or "Flock" for short. This year was no different, and the event was hosted in lovely Cape Cod, MA.

This year's Flock had a slightly different focus than previous years'; the goal of the event organizers appeared to be "doing" as opposed to "watching presentations", which worked out great. As a user and contributor conference, almost everyone there was a current user or contributor, so workshops to enhance people's knowledge, have them contribute to an aspect of the project, or introduce them to a new area of the Fedora Project in a more hands-on way were met with enthusiastic participation. There were definitely still "speaking tracks", but there were more "participation tracks" than in years past, and it turned out to be a lot of fun.


At the time of this writing the videos had not yet been posted, but all the sessions were recorded and I highly recommend anyone interested check them out here.

I will recap my experience and takeaways from the sessions I attended and participated in, as well as post slides and/or talk materials that I know of.

Flock Day 1

Keynote: Fedora State of the Union

The Fedora Project Leader, Matt Miller, took the stage for the morning keynote following a brief logistics/intro statement by the event organizers. Matt discussed the current state of Fedora: where we are, where we're going, ongoing work, and current notable Changes under way.

The big takeaways here were that Fedora Modularity and Fedora CI are big initiatives aiming to bring more content to our users, in newly consumable ways, faster than ever before without compromising quality (and hopefully improving it).

Flock 2017 Keynote State of Fedora slides

Factory 2.0, Fedora, and the Future

One of the big pain points from the Fedora contributor's standpoint is how long it takes to compose the entire distro into a usable thing. Right now, once contributors have pushed source code and built RPMs out of it, you have to take this giant pile of RPMs, create a repository, and then start to build things out of it that are stand-alone useful for users: install media, live images, cloud and virt images, container images, etc.

Factory 2.0 aims to streamline these processes, make them faster, more intelligent based on tracking metadata about release artifacts and taking action upon those artifacts only when necessary, and make everything "change driven" such that we won't re-spin things for the sake of re-spinning or because some time period has elapsed, but instead will take action conditionally on a change occurring to one of the sources feeding into an artifact.
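The "change driven" idea can be sketched abstractly: fingerprint the input sources that feed an artifact, and rebuild only when that fingerprint changes, never merely because time has passed. The toy Python below illustrates the concept only; all names are hypothetical and this is not Factory 2.0's actual implementation.

```python
import hashlib
import json

# Toy illustration of change-driven rebuilds (hypothetical, not Factory 2.0
# code): an artifact is rebuilt only when the fingerprint of its input
# sources changes.

def fingerprint(sources):
    # Stable hash over the artifact's input sources (e.g. RPM name -> version).
    blob = json.dumps(sources, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class ArtifactTracker:
    def __init__(self):
        self.last_built = {}  # artifact name -> fingerprint at last build

    def needs_rebuild(self, artifact, sources):
        return self.last_built.get(artifact) != fingerprint(sources)

    def record_build(self, artifact, sources):
        self.last_built[artifact] = fingerprint(sources)

tracker = ArtifactTracker()
rpms = {"kernel": "4.11.0-1", "glibc": "2.25-4"}
print(tracker.needs_rebuild("live-image", rpms))   # True: never built
tracker.record_build("live-image", rpms)
print(tracker.needs_rebuild("live-image", rpms))   # False: no inputs changed
rpms["kernel"] = "4.11.0-2"
print(tracker.needs_rebuild("live-image", rpms))   # True: an input changed
```

The point of the sketch is the conditional: a scheduled re-spin asks "has enough time elapsed?", while a change-driven system asks "did any input actually change?"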

For those who remember last Flock, there was discussion of this concept of the Eternal September and this was a progress report update of the work that's being done to handle that as well as clean up the piles of technical debt that have accrued over the last 10+ years.

Multi-Arch Container Layered Image Build System

The next time slot I attended was my own presentation on the new plans to provide a multi-architecture implementation of the Fedora Layered Image Build Service. The goal here is to provide a single entry point for Fedora Container Maintainers to contribute containerized content, submit it to the build system, and then have multiple architecture container builds as a result. This is similar to how the build system operates for RPMs today, and we aim to provide a consistent experience for all contributors.

This is something that's still being actively implemented with various upstream components that make up the build service, but will land in the coming months. It was my original hope to be able to provide a live demo, but it unfortunately didn't work out.

Multi-Arch Fedora Layered Image Build Service slides

Become a Container Maintainer

A workshop put together by Josh Berkus, which I helped with, introduced people who'd never created a container within the Fedora Layered Image Build Service to our best practices and guidelines. Josh took everyone through an exercise of looking at a Dockerfile that was not in compliance with the guidelines and then, interactively with the audience, bringing it into compliance.

After the example was completed, Josh put up a list of packages and projects that would be good candidates for being containerized and shipped to the Fedora user base. Everyone split up into teams of two (we got lucky; there was an even number of people in the room) and worked together to containerize something off the list. He and I spent time going around helping workshop attendees, and with about 10 minutes left the teams traded their containerized app or service with another team and performed a container review, to give them an idea of what that side of the process is like.

Hopefully we've gained some new long term container maintainers!

Fedora Environment on Windows Subsystem for Linux

This session is one that I think many were surprised would ever happen, most notably because those of us who've been in the Linux landscape long enough to remember Microsoft's top brass calling Linux a cancer never would have predicted Windows Subsystem for Linux. However, time goes on, management changes, and innovation wins. Now we have this magical thing called "Windows Subsystem for Linux" that doesn't actually run Linux at all, but instead runs programs meant for Linux without modification or recompilation.

The session went through how this works, how the Windows kernel accomplishes the feats of magic that it does, and the work that Seth Jennings (the session's presenter) put in to get Fedora running as a Linux distribution on top of Windows Subsystem for Linux. It's certainly very cool, a wild time to be alive, and something I think will ultimately be great for Fedora as an avenue to attract new users without shoving them into the deep end right away.

Fedora Environment on Windows Subsystem for Linux slides

Day 2


Introducing Freshmaker

Going along with the theme of continuing to deliver things to our users faster, this session discussed a new service being rolled out in Fedora Infrastructure that will address the need to "keep things fresh" in Fedora.

As it stands today, we don't have a good mechanism to track the "freshness" of various pieces of software. There have been some attempts at this in the past, and they weren't necessarily incorrect or flawed, but they never had the opportunity to come to fruition for one reason or another. The good news is that Freshmaker is a real thing: it's a component of Factory 2.0 tasked with making sure that software in the build pipeline is fully up to date with the latest input sources, for ease of maintaining updated release artifacts for end users to download.

Gating on Automated Tests in Fedora - Greenwave

Greenwave is another component of Factory 2.0, with the goal of automatically blocking or releasing software based on automated testing, such that the tests are authoritative. This session discussed the motivations and the design, as well as how to override Greenwave via WaiverDB.
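The gating idea boils down to a small decision function: release only if every required test passed, or its failure was explicitly waived. The Python below is an illustrative sketch of that concept under assumed inputs, not Greenwave's actual policy engine or API; the test names are hypothetical.

```python
# Hypothetical sketch of gate-on-tests logic (illustrative only, not
# Greenwave code): an update may proceed only when every required test
# passed, or when a failure has been explicitly waived (WaiverDB's role).

def gate(required_tests, results, waivers):
    """results: test name -> 'passed'/'failed'; waivers: set of waived tests."""
    unsatisfied = [t for t in required_tests
                   if results.get(t) != "passed" and t not in waivers]
    return {"allowed": not unsatisfied, "unsatisfied": unsatisfied}

results = {"dist.rpmdeplint": "passed", "dist.abicheck": "failed"}
required = ["dist.rpmdeplint", "dist.abicheck"]
print(gate(required, results, waivers=set()))
# Once the failure is waived, the gate opens:
print(gate(required, results, waivers={"dist.abicheck"}))
```

Making the gate a pure function of (required tests, results, waivers) is what lets the tests be "authoritative": nothing ships on a hunch, and every exception is an explicit, recorded waiver.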

Discussing Kubernetes and Origin Deployment Options

This session was mostly about Kubernetes, OpenShift, and how to deploy them on Fedora in different ways. There was a brief presentation, followed by discussions about preferred methods of deployment and what we as a community would like to and/or should pursue as the recommended method by which we direct new users to install these technologies.

Fedora ARM Status Update

Fedora's ARM champion, Peter Robinson, gave an update on where things are in ARM land, discussing the various development boards available and what Fedora contributors and community members can expect in the next couple of Fedora releases.

On OpenShift in Fedora Infrastructure

This was a working/discussion session revolving around how the Fedora Infrastructure Team plans to utilize OpenShift for Fedora services in the future, in order to achieve higher utilization of the hardware we currently have and to allow applications to be developed and deployed more flexibly. The current plans are still being discussed and reviewed, which is part of what this session was for, but stay tuned for more in the coming weeks.

The Future of fedmsg?

Currently, fedmsg is Fedora's unified message bus: all information about activities within the Fedora Infrastructure is sent there, and that's not slated to change anytime soon. However, there are new use cases for the messages that go out on the bus, the scope of those messages has changed, and reliable message delivery is becoming a harder-pressed requirement. This presentation proposed adding new transports for messages in addition to the one that already exists, allowing services that need to listen for fedmsgs to subscribe to the protocol endpoint that makes the most sense for their purpose. The session opened a discussion on satisfying the newer needs while leaving the current infrastructure in place by taking advantage of some of the features of ZeroMQ.

Day 3

What does Red Hat want?

This was a very candid and honest presentation by our long-standing former Fedora Infrastructure lead, Mike McGrath, who spoke on behalf of Red Hat, the primary corporate sponsor of Fedora, about what specifically Red Hat hopes to gain from the ongoing collaboration with the Fedora Community and the innovations it hopes to help foster moving forward. I unfortunately did not take good notes, so I don't have many specifics to share; we'll have to wait for the videos to become available for those interested in this material.

Fedora Infrastructure: To infinity and beyond

The Fedora Infrastructure lead, Kevin Fenzi, stood in front of a whiteboard and kicked off a workshop where interested parties and contributors to the Fedora Infrastructure outlined and planned major initiatives for the next year. The headline, by general consensus, is that OpenShift will definitely be leveraged more heavily, but it will require some well-defined policy around development and deployment for the sake of sanitizing where code libraries come from, for security, auditing, and compliance purposes. The other main topic of discussion was metrics reporting; various options will be evaluated, with the front runners being the Elastic Stack, Hawkular, and Prometheus.

Modularity - the future, building, and packaging

This session was a great introduction to how things are going to fit together. We dove pretty far into the weeds on some of the tech behind how Fedora Modularity fits together, and ultimately, for anyone who wants to dig in, the official docs really are quite good. I would recommend anyone interested in learning the technical details of Modularity give them a look.

Let's Create a Module

In this workshop, put on by Tomas Tomecek, we learned how to create a module and feed it into the Fedora Module Build System (MBS). This was an interesting exercise because it helped define the relationships between RPMs, modules, non-RPM content, and the metadata that ties all of this together, allowing disjoint modules to create variable lifecycles between different sets of software that come together as a module. I was unable to find the slides from the talk, but our presenter recently tweeted that a colleague of his wrote a blog post he thinks is even better than the workshop, so maybe give that a go. :)

Continuous Integration and Delivery of our Operating System

The topic of Continuous Integration (CI) is extremely common in software development teams, and it is not a new concept. But what if we were to take that concept and apply it to the entire Fedora distribution? Now that might be something special, and it could really pay off for the user and contributor base. This is exactly what the Fedora CI initiative aims to do.

What was most interesting to me about this presentation was that it went through an exercise of thought and then showed specifically how a small team was able to accomplish more work than almost anyone thought they could, because they treat the bot they've written to integrate their CI pipeline with various other services as a member of the team. They taught themselves to think of it not as a system but as a team member they could offload work to: the work that nobody else wanted to do.

I look forward to seeing a lot of this work come to fruition.

Day 4

The last day of the conference we had a "Show and Tell" where various members from different aspects of the projects got together and worked on things. The rest of the day was a hackathon for those who were still in-town and not traveling back home mid-day.

As always, Flock was a blast and I can't wait for Flock 2018!

Until next time...

Fedora Infrastructure Hackathon 2017

Last week, hot on the heels of my trip to Boston for Red Hat Summit, I attended the 2017 edition of the Fedora Infrastructure Hackathon. The primary goal of the Hackathon was to make a lot of progress in a relatively short amount of time on defining the Fedora Infrastructure requirements necessary to support upcoming Fedora Project objectives, as defined by the Council and FESCo, and doing work to satisfy those requirements. In some cases this simply meant "define policies around how this should work with the infrastructure", but in most scenarios it meant digging in and doing work: patching multiple code bases to support new AuthN/AuthZ protocols and providers, deploying net-new infrastructure services, or even bringing up services in a new datacenter hosted by a fellow Open Source Community Project in order to leverage newly donated hardware. We'll cover all of that in the recap of the journey below.

It all started Monday 2017-05-08. We were graciously hosted in Red Hat Tower, which, as a proud Red Hatter and overall Red Hat fanboi, was extremely cool to get to spend a week in, and we worked as hard as we could to get a lot done in about 4.5 days (Monday through Friday, though most people had to travel home on Friday evening). Representative members of various corners of the Fedora Community were in attendance: the Fedora Infrastructure Team, obviously, was well represented, but also Fedora QA, Fedora Modularity, Fedora Atomic CI, CentOS, and Fedora RelEng.

Things kicked off by defining an agenda, with all notes held in a Gobby Doc. We effectively came up with a loose-fitting outline of the following:

  • Monday:

    • AuthN / AuthZ - FAS, FreeIPA, CommunityIPA, Ipsilon, CentOS Infra overlap

    • Modularity

    • CI

  • Tuesday:

    • OpenShift in the AM, CI in the PM

  • Wed/Thu:

    • Hack sessions on OpenShift and CI (break out into teams)

  • Friday:

    • Breakout hack sessions and wrap up


Things started off with Patrick explaining many aspects of the various AuthN/AuthZ protocols and technologies currently in use within the Fedora Infrastructure, as well as migration plans to bring systems and services using older technology in line with newer ones. Discussions focused on Fedora Authentication, OAuth2, Kerberos, OpenID, OpenID Connect, FreeIPA, FAS2, and how different Fedora Apps use different combinations of these technologies. From there, the apps that need to be ported away from older technologies were identified and the work was assigned to people in the room, with the intent of accomplishing these tasks over the next few days (and beyond, if necessary).

Bi-Directional Replication

Something that's come up a lot in recent history within the Fedora Infrastructure is database high availability. The Fedora Infrastructure Team already follows database administration best practices, but being able to do maintenance with extremely minimal or zero downtime on the database servers is an extremely nice-to-have. Therefore, a block of time was dedicated to working through an approach to roll out Postgres BDR (Bi-Directional Replication) for certain applications in the Fedora Infrastructure.

App Porting and Libraries

The Fedora Apps developers in the room had some targeted breakout sessions focused on porting old Fedora web applications away from outdated or no-longer-recommended libraries and frameworks, both to bring more uniformity to how the applications are developed and maintained and to make them easier to support by reining in the spread of tooling the group has to track along with upstream developments.


Modularity

Members from the Fedora Modularity Team presented on the Module Build Service and the Arbitrary Branching concept in order to discuss integration points into the Fedora Infrastructure's existing systems. This was a lot of discussion that resulted in documentation of processes, identification of issues to resolve, and establishing a realistic timeline for a phased approach to accomplish these tasks.


OpenShift

The Fedora Infrastructure Team is always trying to make the most of the hardware it has, and as such has been evaluating container technologies for use in the Infrastructure. Recently an evaluation of OpenShift began, and the decision was made to move forward with using it for applications within Fedora. During this session we worked through a series of questions about OpenShift as they would pertain to a production deployment, and we had the good fortune of being able to ask for best practices and general recommendations from the OpenShift Online Ops Team. We then formulated a plan to have an OpenShift environment up and running in stage by the end of the week, fully automated with Ansible playbooks (based on openshift-ansible and ansible-ansible-openshift-ansible). We were successful in this endeavor but are waiting on a certificate for new domain names.

Continuous Integration

Next up we heard from a group within Fedora taking on the massive task of performing Continuous Integration on the entire Fedora Operating System. Alright, maybe not the entire set of packages, but they are targeting an installable Fedora Operating System via Fedora Atomic Host. For more information, check out the Fedora Atomic CI wiki page.

During this working session we were joined by our good friends from the CentOS team, who graciously offered up hardware resources in their very own CentOS CI environment. A lot of work was done here in the initial days discussing how to tie the two infrastructures together, as well as how to bridge things like account systems and grant appropriate permissions throughout. Action items were tackled as the week continued.

Wrap Up

We met at the end of the week for a short time before most folks departed to travel home, and we tallied up the score. All in all, we accomplished all but one of the objectives we set out for the hack days; the remaining one saw progress, but it was too large a piece of work to finish in just a couple of days and is still being worked on at this time. There's all sorts of great info on the Fedora Infrastructure Hackathon wiki page for anyone interested in digging into the details (also, check the CI-Infrastructure-Hackathon-2017 Gobby Doc for a play-by-play).

It was absolutely fantastic to get so many members of the Fedora Community into one room and hack on things. It's also great just to get to spend time hanging out with everyone since we rarely see one another in person. I'm even more excited about Flock 2017 than I was before!

Until next time...

Red Hat Summit 2017

Red Hat Summit 2017 concluded two weeks ago, but I am just now getting an opportunity to sit down and write about my experience there. I've been a road warrior lately: I was only home for a day and a half before heading off to the Fedora Infrastructure Hackathon (but more on that later), and once I got home "for good" I've been playing a game of catch-up.

So here it goes...

Red Hat Summit is without doubt one of my favorite events of the year because I am an extremely proud Red Hatter and I love that we have such an opportunity to show off the latest and greatest that we have to offer the community, our customers, and the world at large. This year was bigger and better than ever: we were in a new (larger) venue at the BCEC, broke our previous attendance record, broke our sponsorship record, and had more sessions and labs than ever before. Something else I loved, as it's near and dear to my heart, is that the Community Central portion of the Expo Hall was front and center on the main "center stage", so the likes of Fedora, CentOS, Ansible, Gluster, Ceph, Foreman, ManageIQ, oVirt, Project Atomic, OpenShift Origin, and many others had an opportunity to share the spotlight with all the pillars of the Red Hat technology portfolio. Another favorite of mine was the portion of the Expo Hall dedicated to customer feedback on current and next-gen, still-in-development products, as this was certainly the best outlet we could ask for to get real, focused feedback from those who spend large portions of their lives with our software.

This year was a bit different for me. I'll often spend a lot of time working the Fedora Community booth in the Expo Hall during Red Hat Summit, which is something I genuinely enjoy doing because it gives me the opportunity to talk to a lot of people about all the things we mutually find interesting about the innovations going on within the platform. However, I didn't have as much time as usual to dedicate to that; I was an extremely busy bee this year, as I found myself with five speaking slots. I created a lab around RPM Packaging titled "From Source to RPM in 120 Minutes", which is effectively a "downstream", instructor-led lab version of the RPM Packaging Guide that I wrote. I've had a lot of fun doing it in the past and hope to continue doing it at future Red Hat Summits. My lab ran each of the three days of Summit and, as you may have noticed from the title, each session was two hours. Then I had two other speaking engagements. First, a Fedora "Birds of a Feather" session where I led the conversation with other members of the Fedora Project around current developments and where the project is going, and sparked conversations for feedback from the users and various community members in the room about what aspects of The Project are most important to them. Finally, I co-presented a great talk with an extremely kind human being by the name of Nicolas FANJEAU, who works for Airbus. The session was called "Ansible All the Things"; I talked about the wide array of things you can accomplish with Ansible, from the traditional to the unorthodox, and then Nicolas gave a real-world example of how he and his team at Airbus are actually doing a lot of those things (including wiring up Ansible Tower to ServiceNow) to improve efficiency within their enterprise and, as a side effect, actually deliver aircraft faster. It was great fun and I hope to get a chance to work with Nicolas again in the future.

From there I had multiple customer engagements where we discussed how their use of Red Hat container technologies, such as OpenShift Container Platform and Red Hat Enterprise Linux Atomic Host, is solving real-world business needs, and I helped advise on best practices around those technologies. These kinds of interactions are again something I really enjoy, because they give me a good perspective on how people are putting to use the technology that I have the good fortune, as a member of the Fedora Engineering Team, to work on and work with upstream.

There were also many, many wonderful Red Hat announcements; so many that I've forgotten at least half of them. I highly recommend you check out the website to find out more if you're interested.

Closing time

All in all I was exhausted by the end of the week and looking forward to getting back to a more normal level of chaos ... except I still had that hackathon to get to. ;)

Until next time...

SCaLE 15x

This year was the 15th annual SCaLE (Southern California Linux Expo), which I was fortunate enough to both attend and speak at. While this is the 15th year of the now very well known conference, it was in fact my first time attending. I spent the majority of my time floating between the Fedora, Red Hat, and OpenShift booths in the Expo Hall. I had originally planned to spend more time at the Fedora booth than I did, but the OpenShift crew ended up short-staffed because of unexpected travel issues for some of their team members, so I filled in the best I could. As expected, interest in containers is at full tilt, and people were very interested to see what is going on with OpenShift, as it is a Kubernetes distribution with advanced features beyond core Kubernetes, and Kubernetes is easily the most popular container orchestration platform around right now. The Project Atomic community manager, Josh Berkus, was kind enough to lend his Sub-Atomic Cluster (described in this two-part blog series: Part 1, Part 2) to the booth efforts, and that made for some very engaging demos of what OpenShift can accomplish (even though the conference network left something to be desired, but this is nothing new). Overall I think we were able to provide event-goers a solid booth destination in their Expo Hall travels.

Every conference I go to, I notice there's a specific "crowd profile" in terms of what motivates the participants to attend, what their interests are, and so on. Oftentimes these are things like hobbyist, enthusiast, professional/commercial, developer, sysadmin/ops, DevOps practitioner, and potentially (and often) some mixture of those categories. This particular conference was a really solid representation of community-focused people and hobbyists, which is always a cool crowd because everyone is genuinely interested and enthusiastic about the technologies being represented. However, on a personal note, something I found rather interesting was the number of people who came by the Red Hat booth who had never heard of the company. This isn't entirely a new phenomenon depending on the "crowd profile", but it's definitely the first time I've seen it this widespread at a specifically Linux conference. This is a weird change of pace for me, as for the longest time Red Hat was a name synonymous with Linux. However, as the company has focused more on the enterprise with RHEL, the community-focused Fedora and CentOS have filled in the void for the community user base, and this was a primarily community-focused event. Beyond that though, the number of people who had no idea that Red Hat is a major sponsor of and contributor to Fedora was surprising to me.

There are two primary reasons I think led to this situation. First, Linux is so high quality and pervasive these days that the people who used to get off in the weeds early and often with technical issues are fewer and farther between. Those systems-level technology dives would quickly lead to someone becoming well versed in the topics of their distribution and the reality of the relationships between different entities (such as Red Hat and Fedora) within the scope of the community. This is no longer the case: Linux is so easy to use and so commonplace that most people don't need (and in many cases don't want) to dig into the nuts and bolts to the point of having a fundamental understanding of the project that produces the distribution they are using. I think this is great in a lot of ways; it's a standing ovation to the fact that Linux has "made it" and that we collectively, in the upstream communities, are providing quality software that attracts users of all kinds, technical or otherwise. Second, this poses an interesting marketing problem for both Fedora as an upstream and Red Hat as a company: how to properly communicate to users and potential users the things that are interesting to them, since Linux itself isn't inherently interesting to as wide an audience as it once was, with popular tech trends shifting away from the system itself toward the things you can run on top of it (and, recently, in containers). Now, Red Hat has done a great job of making that message clear to its customer base with material that covers the entire Red Hat technology portfolio. I also think that Fedora in recent years has been doing a really good job of showing off the features of each Fedora Edition (Workstation, Server, Atomic), which highlights features beyond just the core distribution that are tailor-made for specific users and potential users.
We just need to continue to show up to user groups, MeetUps, and conferences with good representation to help spread the word. On that note, a massive thanks to the amazing Fedora Ambassadors. I'd also like to find a good way to get the message out to more users in various online and programming communities, something similar to Fedora Loves Python but for the various Special Interest Groups within Fedora. Just food for thought.

Overall I think we're doing good work and doing a good job spreading the word; it's just interesting to see how trends in technology change, how the landscape changes, and to try to identify how we as a community need to adapt. Kudos to the whole Fedora Community!

I also had a chance to make some new friends from GNOME, EndlessOS, OpenSUSE, debian, opensource.com and LinuxAcademy as a side effect of spending so much time in the Expo Hall. In typical Linux Community fashion, everyone was extremely friendly and I had a great time. :)

I spent most of my time working the booths, but I was able to make it to a couple sessions while the Expo Hall was closed. I've taken some notes on those below.

Kubernetes 101

Project Atomic's very own Josh Berkus gave a wonderful overview of the Kubernetes architecture, walking the audience both through the various components of Kubernetes and through how you would take a traditional application, deployed as a "monolith" on a single physical or virtual machine with only vertical scaling, to a multi-node orchestrated deployment of containerized services.

This talk was accompanied by multiple live demos using minikube (with a tip of the hat to minishift) in order to show how the concepts presented during the talk map to real world deployment and configuration within the cluster.

Patterns and Anti-Patterns in Docker Image Lifecycle

A speaker from JFrog talked about Docker image lifecycle management, using Artifactory as an example component.

The presentation began with a poll of the room about who's using Docker in dev vs. production. A primary point of concern is the concept of "trust": knowing whether there's enough integrity in the process and in the images you pull down to build your application or CI pipeline on top of.

"There is no platform without ecosystem" - Solomon Hykes (CTO, Docker Inc.)

Do we have an existing pattern? Do we need to adapt it? CI/CD pipelines? (These are old news, this has been done for years)

The speaker mentioned a concept called "The Promotion Pyramid", which could basically be turned on its side, the layers of the pyramid changed to boxes with arrows between them, and it would look just like a production pipeline diagram.

Onward to Dockerfiles! The Dockerfile is extremely powerful; the problem is that it's a hammer and everything now looks like a nail. Fast and cheap builds are not the way to go.

FROM fedora

RUN dnf install -y python
RUN dnf install -y nodejs

RUN mkdir /var/www

ADD app.js /var/www/app.js

CMD ["/usr/bin/node", "/var/www/app.js"]

The problem with this is that nothing is versioned, so each build may or may not produce the same thing, because each dnf command could install a different version of each component. You can refer to an image by its SHA digest, but digests are not human readable, so that's of limited use; the alternative is to maintain your own base image.

The pattern to follow to fix this is to promote immutable binary artifacts (see Martin Fowler's Immutable Server).
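As a rough sketch of that idea (the base-image digest and the package version below are illustrative placeholders, not values from the talk), pinning the base image by digest and the packages by exact version makes each build reproducible:

```dockerfile
# Sketch only: <digest> and the nodejs version are illustrative placeholders.
# Pin the base image by immutable digest instead of a floating tag.
FROM fedora@sha256:<digest>

# Pin the exact package version so rebuilds produce the same layer.
RUN dnf install -y nodejs-6.10.0 && dnf clean all

RUN mkdir /var/www
ADD app.js /var/www/app.js

CMD ["/usr/bin/node", "/var/www/app.js"]
```

The resulting image is itself an immutable binary artifact that can be promoted through registries rather than rebuilt at each stage.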

The promotion pattern between registries and repos is basically the only real option, because the only way to "version" Docker images is with tags, tags have no concept of version ordering, and you can only run one registry per host unless you use VirtualHosts with many daemons mapped to different ports.

As a side note, the "promotion pipeline" is effectively the exact structure that Fedora Containers follow.

Fedora Work Session and Meetup

Open session to:

  • Work on development

  • Say Hello

  • Meet others

  • Ask questions about challenges you're facing

  • Share knowledge

  • Find out how to help

This was a really fun grassroots "Birds of a Feather" style session where everyone in the room made a big circle, went around and introduced themselves, and briefly talked about what they are using Fedora for. This spanned from Workstations and Cloud all the way to ARM boards in small embedded devices deployed out in the field. I had a great time learning what everyone was up to with Fedora, and I want to thank Perry Rivera for setting it up and hosting. Perry was also kind enough to take notes and collect feedback on everything that was discussed.

Closing time

I had a wonderful time (aside from not feeling 100% - I came down with a sinus infection the day before flying out to SCaLE), the event was fantastic and I really hope I get the opportunity to go back next year.

Until next time...

DevConf.cz 2017

I was fortunate enough to attend DevConf.cz 2017 this year; it's honestly one of my favorite events of the year. Many people from the various upstream communities I work in, or closely with, come to discuss and present various technologies, and it's a lot of fun.

This year I tried very hard to attend as many presentations as possible as well as catch up with other community contributors in "The Hallway Track" because I have in the past been bad at balancing between the official speaker track and The Hallway Track. I like to think I did well. :)

Some of the big themes of the event were Continuous Integration, Container Technologies, Project Atomic family of technologies, Ansible, and Cross-Community and Cross-Distro collaboration (making more of an effort to not re-invent the wheel). Also as a point of reference, sub topics of these include Fedora Modularity, Atomic Host, and Factory 2.0.

This event was kind enough to post video recordings of all the speakers, and I highly recommend that anyone interested in these topic spaces check out the lineup; it was quite good. Speaker recordings here.

Below are quick notes about the sessions I had the opportunity to attend including a recap of my experience with "The Hallway Track" at the end.

DevConf.cz Day 1

Keynote and Welcome to DevConf.cz 2017

DevConf started off with a quick welcome message about the Conference and a short history including fun facts about how much it's grown in recent years.

After the intro and welcome, it was off to the races with the Day 1 Keynote that discussed the concept of how "software is eating the world" and how the reality of more and more things moving to software is feeding into the Hybrid Cloud concept. In the modern landscape, this solution space can be catered to using only open source software by providing a platform to make infrastructure consistent and stable. At the previous DevConf there was a Keynote that spoke about full end to end Hybrid Cloud as an abstract concept that we as an open source technology ecosystem aimed to accomplish based on current (at the time) market trends. The bulk of this talk was a series of presenters performing live demos, each one effectively built on top of the previous in order to show how the abstract goal presented in the previous year's Keynote has now become a reality.

The open technologies that made their debut on-stage were:

Welcome to DevConf.cz 2017 and Day 1 Keynote video

Generational Core - The Future of Fedora

Next up was a session dedicated to Fedora Generational Core, a core component of Fedora Modularity (or it was; it has more or less changed in name, but the concept remains the same). Generational Core is now known as Base Runtime; these were originally different concepts targeting different use cases, but they have merged over time. The Base Runtime is what defines "the line" between the operating system and the application. The main goal is to have an environment that can be the building block for all other modules and content, with a small package list and a relatively low maintenance burden, while remaining stable and of high quality. The Base Runtime is the first real module that will ship as part of Fedora Modularity.

The bulk of the discussion was off in the weeds talking about the journey to trim down the dependency chain. There was a graphic (in the video link below) that shows the incredible web of dependencies for even some of the most fundamentally required packages to have a functional base environment. It was a great tour of how much work is required to make this stuff happen and highlights that Fedora Modularity isn't just new metadata on top of groups of RPMs.

Generational Core - The Future of Fedora video

Atomic Cluster in 10 Minutes

This was a quick 30-minute session that briefly covered some introductory material about the Project Atomic family of technologies and then dove right into a live demo using package layering on top of the base rpm-ostree that comes out of the box with Atomic Host. This functionality comes from either rpm-ostree pkg-add or atomic host install, both of which can be run multiple times with different packages; the packages are added to a new ostree layer on top of the base, and that layer is rebased onto any future updates to the underlying system.

The main headline of the demo was showing off the new upstream kubeadm init command from Kubernetes. This command allows for quick setup, so you can get a cluster up and running and kick the tires in no time (well, 10 minutes or less).

Atomic Cluster in 10 Minutes video

Atomic System Containers

Atomic System Containers are an interesting new technology that allows a system administrator to augment Atomic Host without having to modify the base ostree, making it possible to run even your container engine daemon as a container itself. The goal is to provide services that look and feel native to the system but are containers (example: atomic install foo && systemctl start foo.service, such that foo is a containerized service). Effectively, these are services distributed as OCI images, executed using runc, with systemd managing the lifecycle, ostree handling storage management, skopeo handling download/transport of images, plus the metadata/configuration specification templates required for the various integration points. Also, any existing Docker image can be converted into a System Container by simply adding the configuration templates.

You can demo some of this now on Atomic Host using the atomic install --system [--name=NAME] CONTAINER command.

Atomic System Containers video <https://www.youtube.com/watch?v=yQZiRWWEPYo>

Building and Shipping your own Atomic Host

This was a great workshop session put on by Jonathan Lebon that shows you how to do exactly what the title says. Also provided was a great guide for the workshop (linked below). I suggest anyone interested in the topic check out both the PDF and the video below.

Workshop Guide PDF

Building and Shipping your own Atomic Host video

Audit and Namespaces, Looking Towards Containers

The main outline of this talk aimed to cover:

  • Problems facing auditing and namespaces

  • What auditing means for containers

  • Possible solutions

First up was an introduction to Audit itself. Audit is a Linux kernel auditing mechanism and daemon; it was originally released in 2004, it works with SELinux, and it is effectively a really high-powered syslog focused on kernel space. Audit is a reporting tool: it monitors and reports but does not take action, with one exception: you can configure it to kernel panic the system in the event that Audit is unable to log properly (apparently some high-security sites would rather have a system outage than let anything occur without proper auditing).

Next came a discussion of kernel namespaces and the various ones that exist, including when each was introduced to the kernel, followed by a discussion of what containers are and the misconceptions that have come with them. For starters, the kernel has no concept of a container; a container is a higher-level abstraction that combines kernel features (namespaces, seccomp, cgroups, etc.).

The problem comes in that there is only one audit daemon per system, because there is only one kernel per system. This makes it difficult to map audit events to the various namespaces (or combinations of namespaces, depending on container storage or networking configuration).

Audit and Namespaces, Looking Towards Containers video

DevConf.cz Day 2

Keynote: A Story of Three Distros: Better Together

On Day 2 of DevConf, I had the honor of being included as a participant in the Keynote, which was led by Red Hat's VP of Engineering, Denise Dumas.

This keynote was a discussion about Fedora, Red Hat Enterprise Linux, and CentOS as the three distro lineage that makes up the Red Hat Family of distros, our individual histories, how we came to co-exist as a cross-distro collaborative effort around operating system technologies, and our plans to collaborate even more in the future around container technologies and runtimes. The discussion further extended the concept of a runtime from a standpoint of being able to migrate them between distros as we decouple these from the operating system in such cases as containers or Software Collections.

Day 2 Keynote video

OpenShift as Enterprise Kubernetes

OpenShift is Kubernetes with many added developer features. One of its main goals is to be an enterprise-grade, on-premise Kubernetes distribution that gives everyone the power to run agile, reliable, distributed systems. However, there are some misconceptions about containers and orchestration systems such as OpenShift. First off, containers are not lightweight virtual machines; they are entry points for services in a distributed system that can be the building blocks for applications. The idea here is to "write applications, not containers".

The OpenShift Platform provides: service discovery, auto-scaling based on usage metrics, persistent storage management, configuration and secrets management, access to platform API from containers, self-deployable applications, application life cycle management, and build pipelines. The Control Plane is a set of components that all run on the master node(s): API Server, etcd, the cluster scheduler, and controller manager.

OpenShift is extremely powerful and a very cool platform that I urge anyone interested in to watch the video below, it was an extremely well thought out and thorough examination of the technology stack.

OpenShift as Enterprise Kubernetes video

Layered Image Build Service: Lessons Learned

I'm proud to say that this presentation was one of mine, I was honored to be able to speak at the event and I greatly enjoyed the experience.

This talk was about the Fedora Layered Image Build Service and lessons learned along the way. I started off by covering the topics of the day and then dove right in. I began with a fun tale of the time that the Fedora Project Leader, Matt Miller (no relation), said (paraphrased) "There's this open source layered image build system I heard about, we should deploy one!", which started my 18-month journey to a GA Layered Image Build Service release for Fedora. I discussed progress along the way and pain points, highlighted and thanked the various upstreams that kindly supported me along the way, and tipped my hat to the power of OpenShift. The fundamental lesson learned in all of this is that nothing in container land is set in stone: expect APIs to change, and expect backwards-incompatible changes to be the norm.

Then we quickly defined containers, had a history lesson on their lineage in Linux space, covered the differences between a Layered Image and a Base Image, and discussed OpenShift as a platform and the use of its build pipeline and API to create custom tooling (such as with OSBS).

Another topic of interest as it relates to this system is Release Engineering, most notably the cornerstones of making software that is Reproducible, Auditable, Definable, and Deliverable. These cornerstones help explain some of the design decisions of the system.

Finally came a discussion of the Layered Image Build Service itself and the Fedora-specific implementation.

Layered Image Build Service: Lessons Learned video

Fedora Two-Week Atomic Host: Progress and Future

This session was also one that I presented; it was about Fedora Atomic Host, the progress so far on the initiative, and plans for the future. First off, I wanted to frame the discussion around Release Engineering and how Fedora traditionally works. As in my previous session, I defined Release Engineering as creating a software pipeline that is Reproducible, Auditable, Definable, and Deliverable. Also as a point of reference, a "Compose" is the collection of primitive build artifacts (RPMs), the creation of deliverables (ISOs, virt images, cloud images, OCI-based images, etc.), and the combination of these into a collection that is ready for testing and release. From there the discussion moved to how the Fedora release process works: it is time based (roughly 6 months); there are nightly Rawhide composes; DistGit is branched for each upcoming release, which triggers composes to begin for Branched; then milestone freezes (Alpha, Beta, RC, GA) go into effect, with changes subject to Fedora QE; the updates criteria are updated; and ultimately comes the GA release.

However, the goal of the Atomic Host Two-Week initiative was to move Fedora Atomic Host out of the Fedora 6-month release cycle in order to allow it to iterate more rapidly. We also wanted to create a fully automated pipeline for release, integration, validation, and delivery. We've accomplished a lot on that journey: the creation of the new dedicated Atomic Host compose, which allows changes to be made that won't impact the rest of Fedora; automatic generation of ostree content based on Bodhi updates; AutoCloud automated testing; and a two-week release cycle that is mostly automated (we just need to finish the automated signing work). In the future we hope to make even more progress around automated signing, a fully automated end-to-end release (using loopabull), and removing kubernetes from the base ostree and moving it into a system container (which would make the Atomic Host image smaller and give users more flexibility and choice of container orchestration runtimes). We would also like to change the default configuration to use overlayfs for container storage on the backend, as well as add single- and multi-node kubernetes and OpenShift testing.

Fedora Two-Week Atomic Host: Progress and Future video

DevConf.cz Day 3

Keynote: History of Containers

The third day of the conference started with a really fun, entertaining, and light-hearted exploration of the history of containers, starting from virtual machines in 1963, through the creation of the OCI, and all the way up to a comical debate-style presentation about the future of containers and wild ideas like microkernels.

One of my favorite components of this talk was Steve Pousty's introduction of a new analogy to replace what used to be known as "Pets vs Cattle." The "Pets vs Cattle" analogy is often used to distinguish computing resources we care about having a long life and substantial uptime (such as virtual machines) from computing resources that are ephemeral in nature (cloud instances and containers). The presenter pointed out that the analogy is not only offensive to the billion-plus people on the planet who consider cows sacred animals, but also inaccurate, because ranchers do in fact care about their cattle. The newly proposed analogy is "Ants and Elephants": ants are hive-minded, often ephemeral, and scale horizontally to accomplish a task (which is more or less what containers aim to do), whereas elephants spend a lot of time taking care of members of their herd, have grave sites where they pay respects to fallen members, and are large animals that can perform large tasks on their own. From now on I will use the "Ants and Elephants" analogy, and I highly encourage others to join me.

Keynote: History of Containers video

Commissaire: Exposing System Management

The presentation on Commissaire introduced the project and its goal of exposing systems management over a simple JSON-RPC-based API that uses kombu for AMQP and performs tasks on the back end with Ansible. Of note, the Commissaire developers are also working upstream with Ansible on the Python 2 to Python 3 transition. The overall goal is to be able to easily perform maintenance tasks across a container orchestration environment such as kubernetes or OpenShift.

Commissaire: Exposing System Management video

Ansible for people allergic to Dockerfiles

This was a short 30-minute session that introduced the concepts behind ansible-container and how it aims to enforce best practices across ansible modules so that they can easily be re-used for both container and non-container creation/deployments. There was also discussion of how ansible-container can deploy to orchestration engines automatically (kubernetes and OpenShift are currently supported).

NOTE: I was unable to find the video of this talk.

Linch-Pin: Hybrid Cloud Provisioning with Ansible

Linch-Pin is a tool that aims to provide simple provisioning and tear-down of environments across multiple on-premise and public cloud providers using Ansible. The utility currently supports short-lived testing environments, with long-lived production scenarios targeted for the future.

The guiding principle of Linch-Pin is that "Simple is Better"; it originated as a replacement for a really complicated utility called "Provisioner 1.0" (which, to the best of my knowledge, is not a public/open source tool). Linch-Pin performs installation/provisioning of systems based on "Topology Files" as input, with the output being logging information about the creation as well as an ansible inventory file that subsequent ansible commands can use to find and access the specific systems Linch-Pin created. Documentation can be found here.
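To make the "Topology Files" idea more concrete, here is a hypothetical sketch of what such an input might look like. The field names below are illustrative guesses on my part, not Linch-Pin's actual schema; consult the project documentation for the real format.

```yaml
# Hypothetical Linch-Pin-style topology sketch; field names are illustrative
# guesses, not the real schema.
topology_name: demo_env
resource_groups:
  - name: web_servers
    provider: openstack          # public cloud provider for this group
    count: 2                     # how many instances to provision
    flavor: m1.small
    image: Fedora-Cloud-Base
  - name: db_server
    provider: libvirt            # on-premise provisioning in the same topology
    count: 1
    memory: 4096
```

The appeal of this model is that the same declarative input drives both provisioning and tear-down, and the generated ansible inventory ties the created systems into the rest of an Ansible workflow.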

Linch-Pin: Hybrid Cloud Provisioning with Ansible video

Scaling Up Aggregated Logging and Metrics on OpenShift

This session was a technical deep dive into resolving some really interesting problems at substantial scale in an OpenShift container orchestration cluster; the scenarios examined targeted clusters with over 10,000 pods. It covered areas such as how to scale ElasticSearch, Kibana, Cassandra, fluentd, and heapster. The session gets into the weeds quickly and is very technical; I highly recommend that anyone interested in these topics, or who may run into scale issues of this magnitude, check out the video.

Scaling Up Aggregated Logging and Metrics on OpenShift video: https://www.youtube.com/watch?v=afHxhyOyl1o

Deploying Applications on Atomic Host with Ansible

The time slot for this session was also a quick 30 minutes, and it focused primarily on a quick overview of each of Atomic Host, docker build, Ansible, and Cockpit. Then it was demo time: the presenter showed her ansible playbook and Dockerfile, explaining what each does along the way, followed by a live demonstration of the entire thing working end-to-end to build and deploy a containerized application on Atomic Host using Ansible and Docker.

Deploying Applications on Atomic Host with Ansible video

Testing and Automation and Cooperation: Oh My!

Yet another quick 30-minute time slot that covered a considerable amount of ground across its topic space. This session covered Fedora's plans for a fully integrated CI pipeline for the entire distro, with updates gated by the CI but easily overridden if/when needed (the OpenStack project already has this kind of CI pipeline). In Fedora land, we need to firmly decide what is considered the "input stream" for a CI system and determine what we want to gate on (which turn out to be difficult questions to answer). Then we need to find a place to run all the tests. Of note, collaboration on testing can be difficult: testing is often project-specific, requirements differ, and sometimes cross-community politics come into play. We collectively need to start moving towards a common backend toolchain in order to work towards true cross-project collaboration; currently we're targeting Ansible as that common piece (OpenStack Zuul already uses ansible on the backend).

Testing and Automation and Cooperation: Oh My! video

Hallway Track

The hallway tracks are always some of my favorite times at conferences, and DevConf.cz is certainly no different. Because of their ad-hoc nature I don't have good notes on the discussions that were had, but I've included highlights of the ones that stick out most in my memory.

Project Atomic

I had the opportunity to meet up with some community members of the Fedora Atomic WG to discuss various items about plans for the future, the desire to have multiple update streams, as well as plans for Fedora Containers and improving the Container Guidelines. All of these topic items have since been filed into the Atomic WG pagure.io space as issue tickets for posterity and work tracking in the future.

Fedora Infra Managed OpenShift

In another hallway track session, a handful of us tossed around wild ideas about having an OpenShift environment in Fedora space that ran on bare metal and could provide shared hosting for upstreams to iteratively work on things in a way that could be integrated directly with Fedora services (such as fedmsg, taskotron, and loopabull). This might turn out to be more far-fetched than we can really accomplish, purely because of the nature of the request, but it's something everyone in the circle thought was a good idea at the time.

Closing time...

That, in a really long-winded nutshell, is my DevConf.cz 2017 experience.

I look forward to DevConf.cz 2018!

Until next time...

Flock to Fedora 2016


Every year, the Fedora user and developer community puts on a conference entitled "Flock to Fedora," or "Flock" for short. This year was no different, and the event was hosted in beautiful Kraków, Poland. The event had such an amazing lineup that I rarely had time for the always fascinating "hallway track" of ad-hoc discussions with fellow conference-goers, but in the best kind of way.


At the time of this writing, the videos had not yet been posted but it was reported that they will be found at the link below.

All the sessions were recorded, and I highly recommend that anyone interested check them out here.

I will recap my experience and takeaways from the sessions I attended and participated in, as well as post slides and/or talk materials that I know of.

Flock Day 1

Keynote: State of Fedora

Flock Day 1 started off with a bang: our very own Fedora Project Leader, Matt Miller, took the stage for the morning keynote and discussed the current state of Fedora: where we are, where we're going, ongoing work, and notable Changes currently underway.

One of my favorite takeaways from this talk was about contributor statistics gathered from contributor activity as represented within the Fedora Infrastructure via fedmsg and datagrepper (datanommer). The statistics showed that there are over 2000 contributors, of which roughly 300 do 90% of the work (which sounds odd, but statistically this is actually better than average), and of the group that does 90% of the work, only about 35% work for Red Hat. I'm a big fan of these kinds of numbers because they reinforce that Fedora is in fact a community-driven project that Red Hat is simply a participant in and sponsor of.

Flock 2016 Keynote State of Fedora slides

Introducing Fedora Docker Layered Image Builds

The next time slot I attended was my own presentation on the Fedora Docker Layered Image Build System, where I introduced something I've been working on for quite some time with the various upstream projects whose technologies come together to form this system. Before diving into the new service, I gave a brief history lesson about what containers are, what they are in the context of Linux, and the various implementations, of which Docker is simply one. The main reason I like to start there is to establish that we hope to support all kinds of Linux container runtimes and image builds, but we must start somewhere, and with Docker being the most popular it makes sense to target it first. (You'd be surprised how often the question of supporting other image formats comes up.)

To make sure there were no gaps in knowledge in the room, I did a quick overview of what specifically Docker is, how containers are instances of images, and how images themselves are most commonly built (a Dockerfile). We then progressed into the concepts of Release Engineering and why it is desirable, as outlined in an article I wrote for OpenSource.com recently. From there we traversed into the wild world of distributed container runtimes and orchestrators, most notably OpenShift, as it's a core component of the Layered Image Build Service. We also discussed components used within the Docker Layered Image Build Service such as atomic-reactor, osbs-client, and koji-containerbuild; the last of these enables a fedpkg workflow for layered image builds, so Fedora contributors can work just as they are used to for RPMs.

I then did a demo, that of course failed (as per the Demo Gods) but was able to show a previously successful build.


I have at this point diagnosed the issue found during the demo and it has been resolved.

Introducing Fedora Docker Layered Image Build slides

Getting New things into Fedora

In the recent past there has been a general communications breakdown between developers and Release Engineering, which has resulted in some issues integrating net-new deliverables within the Fedora Project. This presentation discussed the process by which new changes should come in, the timelines by which things should be accepted, and the various Release Engineering tools that need integrating with.

However, there was admission that the documentation could be better about these items and the Release Engineering tools could be more approachable for outsiders in order to help with the process of on-boarding new changes into the processes and tooling. These items have shown improvement in the past year with further improvements planned.

There was a lively discussion of ways to make this better and I look forward to seeing positive movement come as a result.

Hacking Koji for fun and Profit

In this session, tips and tricks for hacking on the Koji build system were the focal point. Discussion of what Koji is, who uses it, and why someone might want to hack on it was explored. Then an overview of the major components of Koji was presented, to give potential developers an idea of where to look in the code depending on which component they were trying to augment or supplement. From there, a quick example of the Python API was covered as a way to get started, including a pointer to a more advanced example contained within the koji code itself. Next up was an advanced CLI walkthrough that showed how to call the XMLRPC API directly, just as you can via the Python API.
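As a flavor of what calling the hub's XMLRPC API directly looks like, here is a minimal read-only sketch using Python's standard library. The hub URL and the getBuild method come from Koji's public API, but treat this as an illustrative sketch rather than the session's exact example.

```python
import xmlrpc.client

# The Fedora Koji hub's public XMLRPC endpoint; read-only calls need no auth.
HUB_URL = "https://koji.fedoraproject.org/kojihub"

def get_build_info(nvr, hub=HUB_URL):
    """Look up a build by its name-version-release via the raw XMLRPC API."""
    proxy = xmlrpc.client.ServerProxy(hub, allow_none=True)
    # getBuild returns a dict of build metadata, or None if no such build exists.
    return proxy.getBuild(nvr)
```

The Python API (koji.ClientSession) wraps these same hub calls with conveniences such as authentication, which is why dropping down to raw XMLRPC from the CLI works so naturally.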

There was a section of the session focused on the Koji Hub which is the user facing component including how to theme the web UI, change user policy, and how to write plugins that can add functionality to Koji via new API calls, policies, and callback hooks.

Next up was discussion of Koji Builder plugins that can add the ability for Koji to produce new types of Build Artifacts.

Finally, how to actually clone the git repository and then build locally a version of the modified code was covered.

During the Q&A portion there was a discussion of how difficult Koji can be to deploy, and that it would be nice if there were a way to get up and running quickly for hacking purposes: something completely automated and not necessarily production-ready. There was also lively discussion about the future of Koji, the iterative improvements already made in refactoring the code, and plans for more. Originally there was a grand plan for a "Koji 2.0" complete rewrite, but as time has gone on that has proven too lofty a goal to realistically achieve, so the more iterative approach is being taken.

Hacking Koji for Fun and Profit slides

Containers in Production

Dan Walsh discussed running containers in production, a topic that is hot on many people's minds as container technology races into the mainstream as fast as OpenStack did before it. This session discussed various means of container runtime execution, including Docker and its daemon, covering the Docker daemon's strengths and weaknesses and why alternative execution methods might be desirable, or at least worth considering, for production workloads and environments. Aspects such as storage configuration, non-privileged runtimes (security), remote inspection, fault tolerance, and systemd integration were discussed.

In this presentation was a strategy for running production ready containers using runC for execution of Open Container Initiative (OCI) compliant container images (such as Docker Images).



Fedora's MirrorManager: now and in the future

The session about MirrorManager was extremely informative, covering various aspects of the project: a brief overview of its history, then a dive into current features, limitations, and the things we're trying to do in the future to improve it and enable the mirroring of new artifacts.

There were plenty of items that I would like to follow up on as there's so much about content mirroring that I don't currently understand.

I sadly did not take nearly as good notes during this session as I had hoped to. I highly recommend that anyone interested in the topic of content mirroring watch the recorded session for more information.

Fedora ARM State of the Union

Peter Robinson gave a presentation about the current state of Fedora ARM, including both armv7hl and AArch64. At the start he requested that questions about specific dev boards be held to the end, because a section of the session would be dedicated to that. The exploration of the trials and tribulations of bringing new hardware to life was interesting (at least to me), as there are so many things that we in the pre-existing hardware platform world take for granted. Many things about the ARM world and boot firmware make life difficult because of the lack of standardization around developer board boot methods, beyond just the standard trouble of bringing up new hardware that doesn't yet have support for everything necessary in the kernel. Beyond the kernel are the compiler toolchains and programming language tooling that need added support for new architectures such as ARM; various points of this were discussed, with examples of areas where people in the Fedora community have stepped up to help (the Haskell SIG being one great example).

From there discussions of various developer boards spiraled off into the weeds of things that I definitely don't understand about the finer points of ARM board "bring up" but it was interesting to listen to the state of things and take notes of things to learn about.

University Outreach - New Task or New Mindset?

Justin Flory and Jona Azizaj presented about the history of the University Involvement Initiative, the struggles met while attempting to expand its adoption and reach, and eventually its decline. This session was a call to arms for community members with ties to universities, either as active students or alumni, to help bring this initiative back to life. The main idea behind all of this is that we would like to help foster the open source community by bringing an active student population into its ranks. There was a lot of positive feedback and interest shown during the session, and I have high hopes for the future of the initiative.

Fedora Engineering Team Dinner

While not on the Flock schedule, this was a personal highlight for me as a member of the Fedora Engineering Team because we are a geographically dispersed team that lives and works from all corners of the planet. As such, we rarely get the opportunity to all be in the same place, at the same time, and in a social setting (as opposed to getting work done). It was great to be able to sit and chat with colleagues and discuss both work and non-work topics and get to know them better on a more personal level.

The main take away: I love my job, I love my team, and I love my company.

Day 2

Kirk, McCoy, and Spock build the future of Fedora

Matt Miller took us on a Star Trek themed adventure that led to the Kellogg Logic Model being applied to Fedora initiatives, and how each Working Group (WG) or Special Interest Group (SIG) could use this model to help drive its goals as well as frame its overall initiatives to others, including the Fedora Council and FESCo. The session slides were covered rather quickly, and then discussions and questions broke out about how we could use this for various things, along with general questions about the logic model.

The Fedora Modularity Logic Model was an example where this is already being used within the Fedora Project with success.

Modularity: Why, where we are and how to get involved

Fedora Modularity is a new initiative focused on rethinking how Linux distributions are composed. Instead of a pile of software packages, the distribution could be a grouping of modules that can be managed as disjoint units, each with an independently managed lifecycle.

Background on the topic leads back to the Rings Proposal (part of Fedora.next), where we think about the distro as a set of rings: the center of the rings holds the most heavily curated components of the operating system, and the further you get from the center, the less curation there is.

However, as time went on, the correlation weakened to the point that the Rings analogy doesn't really fit. For example, any given package can change over time or need a different version in a different use case or scenario.

Consider different use cases: a new website built on the latest technologies versus an ERP system, where you want different lifecycles, differently "aged" technologies, or different levels of "proven" technology. This is the problem that modules hope to solve.

What is a module?

  • A thing that is managed as a logical unit

  • A thing that promises an external, unchanging, API

  • A thing that may have many, unexposed, binary artifacts to support the external API

  • A module may "contain" other modules, in which case it is referred to as a "module stack"

Base Runtime (Module Stack)

  • Kernel (module)

  • userspace (the interface to userspace, coreutils, systemd, etc)

    • Their build requirements are not part of the module; they are simply build requirements.

modulemd: Describe a module

  • yaml definitions of modules, standard document definitions with "install profiles"

  • install profiles

  • definition of components included in a module
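Pulling those notes together, a modulemd document looks roughly like the following sketch. The field names here are approximations from my session notes, not an authoritative schema; see the Fedora Modularity documentation for the real definition.

```yaml
# Approximate modulemd sketch; field names are from my session notes and may
# not match the current specification.
document: modulemd
version: 1
data:
  summary: Example module
  description: A module promising a stable external API.
  profiles:
    default:
      rpms: [example-app]        # install profile: what gets installed by default
  components:
    rpms:
      example-app:
        rationale: Provides the module's public API.
      example-libs:
        rationale: Unexposed binary artifact supporting the external API.
```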

There was plenty of discussion around these topics and suggestion that people attend the workshop the following day.

Factory 2.0

As with all things in technology, we want to constantly move faster and faster and the current methods by which we produce the operating system just won't scale into the future. Factory 2.0 is an initiative to fix that.

The presentation kicked off with a witty note that we have entered "The Second Eternal September": GitHub and node.js have fundamentally changed how people expect to consume code.

Dependency freezing has become common practice these days because of node.js and rubygems communities impact on developers.

  • pip freeze > requirements.txt

  • ruby bundler

  • nixOS

  • coreOS

  • docker and friends

Brief overview of Fedora Modularity was given for those who didn't make it to Langdon's session on the topic.

Matt Miller started with Fedora.Next -> Rings, then Envs and Stacks; Red Hat is now funding a team to accomplish this.

Backing up first to discuss how not to throw things over the wall: in the past there have been discussions about how to articulate "Red Hat things" in the Fedora space. Ralph Bean (our presenter) works for a group in Red Hat called RHT DevOps.

There are analogous groups within Red Hat and the Fedora Community:

Fedora Packagers -> RH Platform Engineering

Fedora Infra -> RH PnT DevOps

What Factory 2.0 is not: a single web app, a rewrite of our entire pipeline, a silver bullet, a silver platter, just modularity, going to be easy.

"the six problem statements"

  • Repetitive human intervention makes the pipeline slow

  • unnecessary serialization

  • rigid cadence

  • artifact assumption

  • modularity

  • dep chain

"If we had problems before, they're about to get a lot worse" (Imagine modularity without Factory 2.0)

Would like to use pdc-updater to populate metadata tables with information about dep chains, we would then use that information with other tools like pungi but also with new tooling we haven't even thought of just yet.

Unnecessary serialization makes the pipeline slow; one big piece we will need to solve this is the OpenShift Build Service (OSBS). We're also going to need to use an autosigner.py to get around new problems (assuming we "go big" with containers).

Automating throughput: repetitive human intervention makes things slow, both builds and composes. We need an orchestrator for the builds and the composes; in the best-case scenario, things are built and composed before we even ask for them.

Atomic Host Two Week is kind of a case study that we should learn lessons from in order to merge the changes needed back into the standard pipeline instead of the parallel pipeline that was spawned.

Flexible cadence: the pipeline imposes a rigid and inflexible cadence on "products". This relates to the previous point about automating releases; "the pipeline is as fast as the pipeline is".

EOL: think about the different EOL discussions for the different Editions. Beyond that - a major goal of modularity is "independent lifecycles"

"I want to be able to build anything, in any format, without changing anything" (not possible) but we can make the pipeline pluggable that will make it easier over time to add new artifact types to the pipeline.

"The pernicious hobgoblin of technical debt" as Ralph called it.

Ways we can do better and refactor:

  • Microservices (consolidate around responsibility)

  • Reactive services

  • Idempotent services

  • Infrastructure automation (Ansible all the things)

Docker in Production

The Docker in Production session was a very brief walk through of how you can go from your laptop to a production environment. This effectively boiled down to best practice for how to "containerize" your application properly, ways to build docker images and tagging schemes that you can (or should) use, a distribution mechanism for the images, and finally a distributed orchestration framework such as Kubernetes, OpenShift, or Docker Swarm.

Pagure: Past, Present, and Future

Pagure is a git forge.

The old version was very simple: there were three repos per project (source, tickets, and pull requests). It recently got a new UI (thanks to Ryan Lerch).

Forks, pull requests. (A very GitHub style workflow).

If you want to run your own pagure, all you need is the web service and the database. If you'd like all the bells and whistles, you'll then need to add a mail server (pagure milter), the pagure eventsource server, gitolite, and a message bus.

Doc hosting (fourth git repository for a project, optional), in the future considering doing something similar to GitHub Pages.

"Watch" repo, to get notifications for a project you're not directly involved in or to opt out of notifications for a project you are directly involved in.

Roadmap in the Issues tab in the UI for milestones and arbitrary tag filtering.

Issue templates, delivered by markdown files in the issues git repo. Also, can set a default message to be displayed when someone files a new pull request.

Diversity - Women in Open Source

The session on Fedora Diversity began with a lot of wonderful information about the initiative and I have outlined to the best of my ability focal points of those slides here.

  • Started roughly a year ago

  • There now exists an official Fedora Diversity Adviser

  • Myths
    • Women are not interested in technology

    • Women can't do programming

    • Men developers are more talented than women

    • There is no work-life balance for women who work in the tech industry

    • So on and so on ...

  • Facts
    • Women in Technology (Mothers of Tech - BizTech)
      • Ada Lovelace (wrote the first computer program)

      • Hedy Lamarr (frequency hopping)

      • Admiral Hopper (Created COBOL)

      • Many more ...

    • Women are very creative, versatile, powerful, and intelligent

    • Diversity increases success

  • Initiatives
    • Grace Hopper Celebration of Women in Computing

    • Women in Open source Award

    • Outreachy

    • Google Summer of Code

    • and many more

  • Gaps
    • Lack of knowledge, encouragements, support, and time commitment

After the slides were done, the session turned into effectively a giant round table of people telling stories of how they've been successful because of diverse teams, reasons they think women and other groups are currently underrepresented in Fedora and open source, ways they feel we can increase diversity, and methods that could be used to reach various underrepresented groups in the global open source community.

The GNOME Outreachy program was also discussed as a great example of a program working to move things in the right direction around the topic of how we can try to actively improve our community and the open source community at large.

I hope to be able to participate in some of the takeaways from these discussions as they are put into action.

Testing Containers using Tunir

tunir is a simple tool that spawns one or several virtual machines and then executes arbitrary commands, reporting success or failure based on the exit code of each command. You can also make commands blocking or non-blocking, and tunir supports Docker images as well as spinning up a multi-node kubernetes cluster in order to test containers "at scale". The presentation was short and to the point, with plenty of demos showing how easy it is to get started using tunir. Tunir is also the testing component behind Fedora AutoCloud.
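From what I recall of the demos, a tunir job is described by a small JSON file defining the VM plus a plain-text file listing the commands to run. The exact file names and keys below are from memory, so double-check them against the tunir documentation.

```text
# fedora.json (hypothetical): defines the VM tunir should spawn
{
  "name": "fedora",
  "type": "vm",
  "image": "/var/lib/tunir/Fedora-Cloud-Base.qcow2",
  "ram": 2048,
  "user": "fedora",
  "password": "passw0rd"
}

# fedora.txt (hypothetical): commands to execute; a non-zero exit code fails the job
docker --version
sudo docker run --rm fedora true
```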

Cruise Krakow

In the evening of Day 2, the Flock participants had the unique opportunity to dine on the Vistula River and take a small tour up and down the river for some sightseeing. It was a beautiful, scenic way to wind down with fellow Fedora Flockers after a full day of sessions and technical discussions.

Day 3

Lightning Talks

Day 3 kicked off with Lightning Talks. I presented one myself about a small project I've been working on titled Loopabull, which is an event-loop-driven Ansible playbook execution engine. There were also plenty of other wonderful lightning talks covering topics such as Fedora Marketing, OpenShift provisioning on Fedora with Amazon Web Services, Fedora CommOps, dgplug, and much more.

Automation Workshop

The automation workshop was something of an anti-presentation, as the session leader wanted it to be more of a hacking and problem-solving session. As such, ad-hoc discussions and work on automation issues in various areas of the Fedora Infrastructure occurred, and people broke off into smaller groups within the room to work or solve problems.

OpenShift on Fedora

This session was about running OpenShift on Fedora using the latest and greatest features of OpenShift, most notably the new oc cluster up command, an auto-deployment provisioning tool built directly into OpenShift as of v1.3 that allows for the automatic creation of a clustered environment. The entire session was provided as a very well documented walkthrough, linked below.

OpenShift on Fedora Guided WalkThrough

Building Modules Workshop

The module building workshop came together as a hybrid of presentation, discussion, demo, and "follow along" workshop styles. It was a lot of fun and incredibly informative; there was lively discussion about aspects of a module definition (for me it was mostly about trying to wrap my head around everything, and the session hosts were very accommodating).

There were many notes taken during the session, preserved in an etherpad instance; in the event that it gets lost in the ether over time, I have exported its contents to my FedoraPeople space, and it can be viewed here.

Brewery Lubicz

Next up was the evening event, which was hosted in a brewery, complete with wonderful catering.

As per the schedule:

A feast and beer tasting awaits us at Browar Lubicz, a recently restored brewery. The brewery dates from 1840 and has been brewing beer almost continuously, even during nationalization in the 1950s. Restored in September 2015, the brewery is a high point of a trip to Krakow.

Day 4

Day 4 was Friday, and I slept in a little because I was going to be up overnight to catch my 4am taxi to the airport and begin the journey home. I regretfully missed the morning session on Ansible best practices, but I was told it was very good, and I have every intention of watching it on YouTube once the video is posted.

What we do for Docker image test automation

I attended this session for about 45 minutes, but it quickly became apparent that the other participants were very new to Docker and taskotron in general and the session would therefore be very introductory in nature, so I departed to join a workshop elsewhere. The session was by no means bad, and anyone new to Docker or taskotron who is interested in how these two things are being used together to test Docker images should absolutely have attended, or should watch the recording on YouTube after the fact.

Server SIG Pow-Wow

A lot of things are changing in the Fedora Project, particularly around modularization. This session was by and large a collaborative brainstorming and planning session on how to take advantage of the new initiative and how to adapt properly. Rolekit became a focal point of discussion, as did Ansible and a potential integration between the two. Aspects of the discussion related back to the Fedora Formulas proposal, which unfortunately didn't get off the ground at the time.

The session leader graciously took notes and has plans to post a write up.

Informal Friday Night Shenanigans

Friday night, a group of us Flockers took to the streets of the Krakow city center to take in as much of the local cuisine, culture, and sights as we could on our last night in town (at least, the last night for some of us). This was a really great outing, and I had the opportunity to make some new friends within the Fedora Community whom I had yet to meet in person. It was a wonderful way to close out an amazing event.

I look forward to Flock 2017!

Until next time...