
Software Engineering Training in the Age of Generative AI

Tuesday, 26 March 2024 | Kevin Ottens


This is a piece I also wrote for the enioka blog, so there is a French version available.


At enioka Haute Couture we started offering trainings a little while ago. True to our DNA, they focus on software engineering practice rather than a given tool, framework, or API. This is why we have courses on topics like software architecture, refactoring, dealing with legacy code, test driven development (TDD), code review and so on.

Also, not every team has the same needs. Some prefer a few intensive days during a given week, while others prefer smaller sessions spread over an extended period. That’s why our courses are designed to be flexible. They can be tailored to build a multi-day session, or broken all the way down into many one-hour knowledge building sessions.

Hopefully, this all sounds great to you (it certainly sounds great to me). So why talk about this now? Is it some kind of advertising stunt? Well… not really.

You see, while we were working on our training offers, something happened. Almost three years ago GitHub announced GitHub Copilot. It was just a technical preview at the time. Since then, there has been an arms race in the large language model (LLM) domain. Like it or not, generative AI is here to stay, and code assistants based on such models are used more and more.

I’m not one of those doomsayers claiming such models and assistants are going to take over our jobs. Likewise, I don’t think they’re going to double the daily productivity of developers. Still, they will necessarily impact how we work and the code we produce. Keeping an eye on development practices, I’m less concerned about disappearing developer jobs and more concerned about a drop in the quality of the code produced.

Indeed, early studies indicate that code assistants, when introduced in an unchecked manner, tend to push code quality down and to increase the number of security issues introduced. Interestingly, the main factors highlighted are behavioral. This means that instead of waiting for a magical new assistant which would code perfectly (spoiler: it won’t happen), we should rather improve the way we introduce and use those tools.

Which brings me back to the enioka Haute Couture trainings. In this new era, we have to acknowledge coding assistants during our trainings. This permeates all the topics I mentioned previously. There is now a nagging question for all our software development practices: when is a coding assistant the right tool for the job?

If you’re practicing TDD or trying to improve your use of it, is it a good idea to have the coding assistant write the tests for you? Maybe not… since the tests are where you make important design decisions, you likely want to stay at the helm. It might come in handy to generate the code which must pass the tests, though.
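
To make that division of labor concrete, here is a minimal sketch in Python; the `ShoppingCart` class, its methods and the discount behavior are all invented for illustration. The test is where the design decisions live (names, signatures, expected behavior), so it is written by hand; the implementation underneath is the kind of code you might delegate to an assistant and then review like any other contribution.

```python
import unittest


# Test written by hand first: this is where the design decisions live
# (class name, method signatures, expected behavior).
class ShoppingCartTest(unittest.TestCase):
    def test_total_applies_discount_over_threshold(self):
        cart = ShoppingCart(discount_threshold=100, discount_rate=0.1)
        cart.add_item("keyboard", price=80)
        cart.add_item("mouse", price=40)
        # 120 is above the threshold, so a 10% discount applies.
        self.assertAlmostEqual(cart.total(), 108.0)


# Implementation that must make the test pass: a good candidate to
# delegate to a coding assistant, then review like any other code.
class ShoppingCart:
    def __init__(self, discount_threshold, discount_rate):
        self.discount_threshold = discount_threshold
        self.discount_rate = discount_rate
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        raw_total = sum(price for _, price in self.items)
        if raw_total > self.discount_threshold:
            return raw_total * (1 - self.discount_rate)
        return raw_total


if __name__ == "__main__":
    unittest.main()
```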

If you’re dealing with a legacy code base which needs to be modernized, for which parts of the process will the coding assistant make you faster? Updating the code to a newer version of the language or its dependencies? Extracting clearer modules and functions? Writing approval tests to secure all of that?
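
On that last point, here is a minimal sketch of what such an approval test could look like in Python; the `legacy_format_invoice` function and the snapshot file name are hypothetical stand-ins for your own legacy code. The idea is simply to pin down the current behavior before letting anyone, human or assistant, refactor underneath it.

```python
import unittest
from pathlib import Path


def legacy_format_invoice(customer, amount):
    # Stand-in for legacy code whose exact behavior nobody fully remembers;
    # the point is to capture whatever it currently does.
    return f"INVOICE\ncustomer={customer}\ntotal={amount * 1.2:.2f} (VAT incl.)"


class InvoiceApprovalTest(unittest.TestCase):
    APPROVED = Path(__file__).with_name("invoice.approved.txt")

    def test_invoice_output_matches_approved_snapshot(self):
        received = legacy_format_invoice("ACME", 100)
        if not self.APPROVED.exists():
            # First run: record the current behavior as the reference.
            # A human reviews this file once, then commits it.
            self.APPROVED.write_text(received)
        # Every later run (including after assistant-driven refactorings)
        # must reproduce the approved output exactly.
        self.assertEqual(received, self.APPROVED.read_text())


if __name__ == "__main__":
    unittest.main()
```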

There are many more such questions… and you can explore the answers with us during one of our training sessions. We’ll keep talking about TDD, legacy code… with a twist!

And of course, just like with any other tool, what we’re proposing is not specific to a given solution. You use GitHub Copilot? Codeium? A specific in-house fine-tuned model? That’s fine. We’ll take it into account during the training to adapt it as much as possible to the developers involved and their context.

If you want to discuss this further, feel free to get in touch with us.