I finished teaching my first MLOps course a few months ago. Now that I’ve had time to reflect, I wanted to look back at my first post and talk about what I changed, what went well, and what I would do differently next time.
Tools Versus Concepts
My plan to focus on both tools and concepts worked well. I was able to introduce a tool in class while giving students the freedom to try other tools that accomplish similar tasks. This way they could get a sense of how different teams approach the same problem: for example, how MLflow handles artifact tracking versus how Weights & Biases does. I noticed some students would try a tool they had never heard of, become huge fans of it, and even start using it in their day-to-day work.
At first I was worried that focusing on tools would backfire, because the tools in the MLOps pipeline are constantly changing and evolving, meaning each student may end up using a completely different toolkit when they start their job. But after playing around with several tools myself, I discovered there is value in learning even just one of them deeply. Getting hands-on with a tool that works was really the best way to understand a particular stage of the pipeline.
MLOps Project Execution
For the project, each team had to do a series of POCs so they could get hands-on experience with each tool, think critically about its strengths and weaknesses, and then convince me why they might choose to use that tool in the future. I feared early on that this project would be confusing. Unfortunately, I was not able to move forward with my original idea, which was for each team to build an ML-based application. Building an app would have been a more natural way to experience MLOps, but another course in the curriculum already requires the students to build one.

Looking back now, I see that the project was an utter failure. It was not only confusing, it was also simply not executed well. I had the students do far too many POCs, and I gave them only five minutes to present the results to me. I was trying to mimic one of my earliest experiences as a data scientist, when I was forced to do a POC and defend it to my manager in less than a minute. I see now that this was not the right approach for a class (and probably was not the best way for my manager to have me spend my time, either).
If I were to teach this course again, I would change the project, and I would also add weekly quizzes or homework assignments. Some of the students felt that the course was too easy without having graded assignments and quizzes, and so they were not motivated enough to keep up with the material.
I had to do a lot of demos for this class. Each session had something new to demonstrate: a new tool, a new feature. Eventually I switched to recording my demos at home and giving the students extra time during class to go through them and work on the labs. Recording the demos was hard work and took far longer than doing them live would have, but I think it was the right choice, and most of the students appreciated it.
What did not work was tying the participation grade to watching the demos. By the end of the course I realized that forcing students to watch videos probably isn't the best basis for a grade. I also noticed that a bunch of students "watched" all of the demo videos at the very last minute as their way of saying, "hey look, I participated in your class." I expected this to happen.
In a future class, if I record my demos, I’ll probably quiz the students on what was covered to ensure they actually learn something from it.
So, final thoughts:
- MLOps was a fun course to teach.
- MLOps is a strange course to teach. There really isn't any theory; you just build things.
- I absolutely must give the students something to motivate them to do the work week after week.
- I should come up with a new project. Building LLM applications seems to be the thing to do these days.