This post will detail my experience (which was a very good one) of authoring my first challenge on the very cool new(ish) labs.iximiuz.com platform.
...but first, a short digression into why it's so cool, and why I'm excited about it!
It used to be that to create an entire learning environment, you had to build the whole thing yourself. If you wanted, like I did, to make a mongo dojo, you had to set up the various VMs with something like Vagrant, and then manage the machine states, the networking, the storage, etc etc...
But now, for particular learning cases, this platform makes it ridiculously straightforward. It's a very simple/easy way to get console access, directly in the browser, to Linux VMs running in the cloud. Since it uses Firecracker microVMs under the hood, it boots very fast and is ready to learn on with minimal wait time. The accompanying content is also very comprehensive and easy to read.
(For the full description of the tech stack that powers the platform, check out this post.)
So instead of needing to know how to orchestrate all the VMs behind the scenes, wire them up to a very slick-looking frontend with the networking all smoothed out, and support various users constantly starting and stopping playgrounds and challenges...you can just write YAML and Markdown, and the platform converts it into a working learning environment.
Anatomy of a Challenge
If you'd like to see for yourself what the challenges look like, you can just go there now. There are loads of free ones, as well as a premium tier with all the content and higher usage limits for things like CPU and bandwidth. The platform even works (fairly well) in a mobile UI (though turn off predictive text and autocorrect on your keyboard, or it's much harder).
...for a bit more of a descriptive introduction to the challenges, read on...
Each challenge is based on one of the existing playgrounds (or can even use a custom playground you design yourself), and so can be situated in environments like a single-node k3s cluster or a multi-node VM configuration with Docker preinstalled.
(for a full list of playgrounds, check out this page)
The challenge starts with some description (and ideally also a nice graphic) to set the stage for what the user needs to do. The platform handles all the formatting and styling, so all you need to do is bring the text, and it'll automatically show up looking very sharp.
Each step can have associated hints that can be exposed if you're a bit stuck. Often these are links to the docs, but can also be examples of commands to try running.
The UI has verification elements that go green when you've satisfied the requirements for that particular step. These are backed by simple bash scripts in the markdown file to check, for example, whether a certain number of pods are running with given labels, or whether two pods are able to reach each other over the network.
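As a sketch of what such a check might look like: the step passes when the script exits 0. The label, pod names, and required count below are all hypothetical, and the real kubectl invocation appears only as a comment so the demo stays runnable anywhere.

```shell
#!/bin/sh
# Hypothetical step-verification script: the platform marks a step green
# when this exits 0. In a real challenge the pod list might come from:
#   kubectl get pods -l app=frontend --field-selector=status.phase=Running --no-headers
# To keep this sketch self-contained, we count lines of sample output instead.
sample='frontend-7f9c Running
frontend-8d2a Running'

running=$(printf '%s\n' "$sample" | grep -c 'Running')

# Require at least two running pods for the step to pass.
if [ "$running" -ge 2 ]; then
  echo "step complete"
else
  echo "not yet" >&2
  exit 1
fi
```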
One last aspect that's available if you need it, is init tasks that can run before the user is presented with the terminal to play around in. These would be for installing some specific software that isn't available in the default playground base, or potentially generating some sample data that needs to be available when the user starts the challenge.
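From memory, init tasks are declared alongside the regular tasks in the challenge file, roughly like the sketch below. The field names, task names, and packages here are illustrative only, so double-check them against the comments in the generated file:

```yaml
# Hypothetical frontmatter sketch; field and task names are illustrative.
tasks:
  init_install_tools:
    init: true          # runs before the learner is handed the terminal
    run: |
      apt-get update -qq
      apt-get install -y jq
  init_seed_data:
    init: true
    run: |
      # generate sample data the challenge steps will rely on
      head -c 1M /dev/urandom > /tmp/sample.bin
```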
My CKA Challenge Creation
Even though I don't have that much experience with Kubernetes, it's still one of the most popular technologies in the DevOps space, and a lot of people are interested in it. So I figured I would deepen my own knowledge of how network policies are configured and used in Kubernetes clusters.
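For context, the kind of object my challenge ended up revolving around looks like this (the names, labels, and port are made up for illustration): a NetworkPolicy that only lets pods labelled app=frontend reach the backend pods.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```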
And if you've ever read this blog before, you'll know that my attitude is that the only thing better than building to learn is building to teach...so this platform is the perfect opportunity to do that.
A very nice companion piece of tech for the platform is the official Iximiuz Labs CLI. You can use it to easily create new challenges, update existing challenges, and generally do cool stuff from the comfort of your terminal. The obvious first step in this case was creating new content via the CLI.
The default file autogenerated via the CLI has lots of helpful comments, and can guide you towards how to use the different fields available for configuration:
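I won't reproduce the generated file verbatim, but the general shape is roughly this (the field names and values below are from memory and are only a sketch, so trust the generated comments over me):

```yaml
---
kind: challenge
title: My New Challenge
description: |
  What the learner will do, in a sentence or two.
playground: k3s        # which playground the challenge is based on
categories:
  - kubernetes
tagz:
  - cka
---
```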
(If you want to see what the markdown for my finished challenge looks like, that's here)
There are a few things to be aware of, though, that tripped me up the first time:
the categories are intended as higher-level concepts (like "kubernetes") that can be used to filter content in the platform UI, whereas the tagz are for more specific topics (like CKA) and should not overlap with the categories
the machine and tab names need to be values from the existing system; if you try to configure something custom, you (currently) risk breaking things.
if you expect to have some long-running tasks (or some very short-running ones), you might need to tweak the default timeouts for them in your configuration. A couple of times now I've had tasks that didn't play nice with the default timeouts, and I had to explicitly make the timeouts longer or shorter to get things working.
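For the timeout tweaks specifically, it ends up being a per-task setting, something like the sketch below. The field name timeout_seconds and the job name are my own guesses for illustration, so verify them against the generated file's comments:

```yaml
# Hypothetical sketch of a per-task timeout override.
tasks:
  verify_long_running_job:
    timeout_seconds: 300   # give a slow job more room than the default
    run: |
      kubectl wait --for=condition=complete job/data-load --timeout=280s
```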
Iterating on the content
The CLI isn't only able to generate/push content...it can also hot-reload changes on save, so you get very fast feedback loops while authoring. It's a total game changer not to have to wait for a daemon running on a server somewhere to pick up your changes before you can see what you've updated.
I found it very smooth to sketch out my ideas, see them running immediately in the environment, and then fix things that weren't yet quite right.
And you can join the Discord server if you want to report a bug or request a feature. Ivan is very helpful and very interested in making sure learners (and authors!) have a positive experience on the platform.
Some things I learned along the way to pass on:
it's better to avoid using Docker Hub images in your challenges, since the rate limiting can be a bit aggressive. ghcr.io (or another more robust container registry that allows public access) is much preferred.
Instead of batching related verification tasks into one check, it's a much nicer learner experience to separate them into discrete tasks that give iterative feedback as the user works through the challenge. For example, in my challenge I originally had all the checks for correct labels in one big scripted check, but it was much nicer to split that into three checks giving three pieces of feedback.
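As a toy illustration of the difference (the labels and values below are made up, standing in for what a kubectl query would return): three small checks, each reporting on its own, instead of one all-or-nothing script.

```shell
#!/bin/sh
# Toy sketch: rather than one script that fails unless ALL labels are right,
# each expected label gets its own check, so the learner sees progress
# incrementally. The label list is hardcoded sample data for the demo.
labels='app=web tier=frontend env=lab'

check_label() {
  # exits 0 only if the expected key=value pair is present
  printf '%s\n' "$labels" | tr ' ' '\n' | grep -qx "$1"
}

passed=0
for want in 'app=web' 'tier=frontend' 'env=lab'; do
  if check_label "$want"; then
    echo "ok: $want"
    passed=$((passed + 1))
  else
    echo "missing: $want"
  fi
done
echo "$passed/3 checks passed"
```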
I definitely recommend https://excalidraw.com/ for getting simple diagrams together (in general, and for the purposes of putting in challenges). It (somewhat) hides my absolutely awful design skills.
And here's the finished product, my very first challenge on this very cool platform!
https://labs.iximiuz.com/challenges/cka-network-policies-between-deployments