Review: Setting Up CI/CD Pipelines
I’ve taken a lot of classes in my several years attending LISA, and over that time I’ve gotten to see a lot of different teaching styles and techniques. My least favorite style (and probably yours, too) is the strict lecture, where the instructor occasionally flips through slides but mostly reads from notes, with no engagement, no interaction, and no interest in whether the people in attendance are getting anything out of the talk.
My favorite style of learning is hands-on. I get far more out of a few minutes in a terminal or an editor than I do from an hour of most people’s lectures. That’s one of the reasons I’m such a huge fan of the LISA Lab, now an institution at LISA. Actual classroom courses aren’t often set up that way, though; this one was.
Aleksey Tsalolikhin’s “Setting up CI/CD Pipelines” class was very much a hands-on lab, and in this class, we were thrown into the pool, and Aleksey’s job was to make sure we figured out how to swim. He did a great job of it, too.
“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.” (Wikipedia)
Continuous Delivery is sometimes confused with continuous deployment. Continuous deployment means that every change is automatically deployed to production. Continuous delivery means that the team ensures every change can be deployed to production but may choose not to do it, usually due to business reasons. In order to do continuous deployment one must be doing continuous delivery. (Wikipedia)
Continuous Deployment is closely related to Continuous Integration and refers to the release into production of software that passes the automated tests. (Thoughtworks)
These are the foundational concepts for managing the flow of software from development, to test, to the artifacts that eventually get deployed onto computers to provide services. If you’ve never been part of a team that uses CI/CD, it may seem like a lot of overhead and unnecessary infrastructure without much return on investment, but that impression is misleading. This workflow model is vital to ensuring that the software that gets released is dependable and reproducible, and it’s one I’ve spent a lot of time on in the past few years in my own career. But I wanted to see how it was taught, because you can always learn more and see how others approach the same problems.
To teach this class, Aleksey used an interesting tool I wasn’t previously aware of: Strigo. It’s a great virtual classroom environment, which even gave us a portal into a VM he had set up for working through the examples in the slideshow. He wisely recognized that we all read at different paces, so we were given the link to the slideshow and the link to the app, and Aleksey kept himself busy working with the few individuals who were having trouble getting the environment going.
The first example, “hello world,” was pretty straightforward: we wrote a seven-line C program that printed “Hello World,” then a quick Makefile to test it against known output, generating both correct and incorrect output to check against. Then we did the same thing in Bats.
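To give a flavor of that first exercise, here’s a minimal sketch of the kind of program and Makefile we wrote; the file names, the expected.txt comparison file, and the exact test target are my reconstruction, not Aleksey’s originals.

    /* hello.c -- roughly the seven-line program from the first exercise */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello World\n");
        return 0;
    }

    # Makefile sketch: build the program, run it, and diff its output
    # against a known-good file (expected.txt is assumed to hold "Hello World").
    hello: hello.c
            cc -o hello hello.c

    test: hello
            ./hello > actual.txt
            diff -u expected.txt actual.txt

The same check translates almost directly into a Bats test: use run ./hello and then assert that $output equals the expected string, which is exactly what we did next.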
The second example was more involved. We installed GitLab on our VM instance, used it to create a repo, and then set up continuous integration with GitLab CI, using Docker (also installed on our VMs) as the job runner. We then generated a simple website from PHP scripts, used PHPUnit to verify their correctness, and used the website to validate the flow through the pipeline.
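For anyone who hasn’t seen GitLab CI before, the pipeline is driven by a .gitlab-ci.yml file at the root of the repo. Here’s a minimal sketch in the spirit of what we built, not the exact file from class; the Composer image, job name, and test path are my assumptions.

    stages:
      - test

    phpunit:
      stage: test
      image: composer:2           # PHP plus Composer in one image
      script:
        - composer install --no-interaction
        - ./vendor/bin/phpunit tests

With a Docker-based runner registered, every push starts a container from that image, runs the script, and reports the pass/fail result right next to the commit in GitLab.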
The next major part of the course covered Jenkins, a web-based tool that focuses on the pipeline itself rather than serving as a source code repository like GitLab. Aleksey’s instructions had us shut down GitLab, then install Jenkins, configure it to interoperate with GitLab, and spin them both back up, all while providing easy-to-copy-and-paste commands. Honestly, it was a joy to go through these motions in such a painless way that was still interactive. I wish more courses were taught like this, and I’ll definitely mention that when I fill out my tutorial review.
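Jenkins, for its part, describes a pipeline in a Jenkinsfile checked into the repo. A declarative sketch along the lines of what we built might look like the following; the stage names and make targets are illustrative, not Aleksey’s exact pipeline.

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make hello'
                }
            }
            stage('Test') {
                steps {
                    sh 'make test'
                }
            }
        }
    }

Pointed at the GitLab repo (for example, via a webhook or the GitLab plugin), Jenkins checks out the code, runs the stages in order, and stops the pipeline as soon as a step fails.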
This class was an excellent walkthrough of exercising different layers of the stack, and a really great in-depth tour of many of the features that GitLab and Jenkins offer. Although I don’t use either tool in my environment, they were the perfect choices for a course like this: they’re full-featured, easy to install, light on necessary configuration, and provide a really great experience overall. I’d definitely recommend them for a home project, or even at work if you need a solution to fill these niches (particularly Jenkins; I know many companies using it for internal pipelines). If you’re all-cloud, you could also look at Travis CI, which is free for open source projects.
Overall, I’m very glad that I took this class. I know that not everyone was as in love with the teaching method as I was, but of all the people I talked to, the vast majority preferred it, and hoped that more courses would go this type of interactive route in the future.