To conclude our Xperts round, we spoke to Stephan Felke. As a former software developer and current product owner, he has a comprehensive understanding of what it means to work in different roles and of the impact AI will have on software development in the future.
Stephan Felke
How do you assess the general situation at the moment and what does this mean for us as a company?
What is currently happening in the field of AI will definitely have an impact, including on us. In order to deal with these changes in the most stable way possible, our AI strategy therefore consists of two pillars: on the one hand, using AI tools in software development to realise our customer projects as well as possible, and on the other, working on AI-based solutions ourselves.
As far as the first point is concerned, as an employer of the future we naturally recognise that AI tools will most likely come to dominate support for software development, just as IDEs already do today. We want to play a part in the modern development process and enable our employees to do the same. At the same time, it is also important for us to continue to fulfil the requirements of our customers in the future.
When it comes to our own AI-based solutions, predictive maintenance is certainly one of the largest market segments that brings added value to our customers. We are currently involved in two projects in this area. In one of them, we developed a predictive maintenance system for a large logistics company that analyses audio data using data science methods; it is in use at many locations in Europe and the USA. The system helps to ensure that hundreds of thousands of parcels and letters reach their destination safely every day. In future, the solution will also be able to process video data in order to derive even better information about the status of systems from a wide range of manufacturers. We are also currently researching ways to develop our own LLM-based model for our internal database.
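As an aside for readers curious about the general approach, a minimal sketch of audio-based anomaly detection could look like the following. This is not the project's actual code: the spectral features, the `IsolationForest` model, and the synthetic clips are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def spectral_features(signal: np.ndarray, sample_rate: int) -> np.ndarray:
    """Reduce a raw audio snippet to a small spectral feature vector."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-12
    centroid = (freqs * spectrum).sum() / total    # where the energy sits
    bandwidth = np.sqrt(((freqs - centroid) ** 2 * spectrum).sum() / total)
    rms = np.sqrt(np.mean(signal ** 2))            # overall loudness
    return np.array([centroid, bandwidth, rms])

# Synthetic stand-ins for real recordings: a "healthy" machine hum ...
rng = np.random.default_rng(0)
t = np.arange(16_000) / 16_000
healthy_clips = [np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(size=t.size)
                 for _ in range(50)]
# ... and a clip with an unusual high-frequency component.
faulty_clip = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 3_000 * t)

# Train only on healthy sounds, then flag deviations for inspection.
model = IsolationForest(random_state=0)
model.fit(np.stack([spectral_features(c, 16_000) for c in healthy_clips]))
needs_inspection = model.predict(
    spectral_features(faulty_clip, 16_000).reshape(1, -1))[0] == -1
print("inspect machine:", needs_inspection)
```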
As far as the general state of software development is concerned, it is clearly recognisable that these LLMs represent nothing less than an AI revolution. They enable applications that earlier AI technologies could only realise with considerable effort, or not at all. What is astonishing is how easy these tools are to use. This ease of use attracts more and more users, which increases their popularity; that in turn increases the development effort invested in these models, which makes them better and better. The result is a spiral of success.
Changes like these naturally also have an impact on software development. IDEs currently offer improvements mainly at the syntactic level, whereas the use of AI enables improvements at the semantic level. For example, a coding AI can recognise missing error handling and directly suggest a fix.
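To make this concrete, here is a hypothetical example of such a semantic suggestion. The function and its revision are invented for illustration; they simply show the kind of missing-error-handling fix a coding assistant might propose.

```python
import json

# Original version: an assistant can flag that neither a missing file
# nor malformed JSON is handled here.
def load_config(path):
    with open(path) as f:
        return json.load(f)

# Revision with explicit error handling, as a coding AI might suggest it.
def load_config_safe(path, default=None):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return default  # no config file present; fall back to a default
    except json.JSONDecodeError as err:
        raise ValueError(f"invalid config in {path}: {err}") from err
```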
AI provides software developers with better support in the development process. In future, requirements will be translated directly into code or solution proposals. From today's perspective, I think it is unlikely that software developers will be completely replaced. As a human, I still have more context and a different perspective on problems, and I understand much better than a machine currently can what needs to be implemented. However, the use of AI tools in software development will keep increasing over the next five to ten years, and ultimately there will be no alternative to them.
How will the work of software developers change?
I see a big advantage in the fact that developers no longer have to worry about things like boilerplate code. The creation of tests and all kinds of artefacts (build artefacts, Docker containers, etc.) will also be more or less taken over by AI tools. However, I believe that the core of the code will remain in human hands. These tools can also be helpful when it comes to acquiring new knowledge, something developers constantly depend on. Here, an AI can primarily provide inspiration or knowledge and help with implementation.
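As an illustration, the kind of repetitive test scaffolding that assistants already generate well might look like this; the function under test, `add_vat`, is a made-up example, and `pytest` is just one common choice of framework.

```python
import pytest

def add_vat(net: float, rate: float = 0.19) -> float:
    """Return the gross amount for a net price (example function)."""
    return round(net * (1 + rate), 2)

# Parametrised boilerplate of the kind an AI assistant can produce
# from the function signature alone.
@pytest.mark.parametrize(
    "net, rate, expected",
    [
        (100.0, 0.19, 119.0),
        (0.0, 0.19, 0.0),
        (50.0, 0.07, 53.5),
    ],
)
def test_add_vat(net, rate, expected):
    assert add_vat(net, rate) == expected
```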
I see problems when developers start to rely on AI-generated solutions and no longer check them. Blindly copying code from the internet has always been advised against, and this maxim should also apply to AI-generated solutions. The more complex a solution becomes, the more difficult it is for humans to understand it. Those who place their full trust in AI and no longer scrutinise or understand the generated solutions run the risk of producing faulty or even broken software. In this area, it is extremely important to be able to understand what is happening.
At the same time, the question also arises here: who is responsible for the code? As with self-driving cars, there are still no clear guidelines. To avoid software errors with potentially serious consequences, software developers must be able to recognise them in AI-generated solutions. In my opinion, it remains essential that software developers verify that AI-generated solutions are actually correct.
With the increasing use of AI tools, the role of software developers will also change. Today, it already consists of two parts: on the one hand, the production of code and, on the other, the code review, i.e. checking code that others have written. As far as the first part is concerned, I can well imagine that it will be largely covered by AI tools. The review is a different story. As mentioned above, I believe that this review of solutions will make up a large part of the developer's work and cannot easily be replaced by AI tools.
Of course, all of this also has an impact on the expertise of software developers. At present, senior developers have a deep understanding of the subject matter and a great deal of detailed knowledge, which helps them understand how a software product works down to the smallest detail. This type of software developer will continue to be needed, if only for the reasons mentioned above, such as checking AI-generated solutions.
It becomes difficult when people only ever work with AI tools from the outset and never have the opportunity to get to grips with the details. These tools can also create pressure to work faster and faster. I definitely see a potential danger here that companies will tend to employ software developers who have only a superficial knowledge of the subject. Companies that lack senior developers will definitely struggle with tasks that are too complex for an AI to handle.
There will also be an increasing convergence of the roles of product owner and software developer. The product owner's skills, which are not just about coding, will become increasingly important for software developers in the future and cannot simply be outsourced to AI tools.
To stay competitive, it is crucial to keep at it. Embracing change is key: as industrialisation has shown, development never stops. Recognising its benefits and evolving both personally and professionally is essential to avoid being left behind.
What concerns do you have about the increased use of AI?
One big point for me is definitely responsibility. I like to come back to the example of self-driving cars. Even if self-driving cars are not directly related to software development, the technology behind them is. There are scientific studies that examine what a person would do if they had to decide, for example, whether to drive into a wall, a single person, or a group of people, depending on the chances of survival. The parameters can be varied arbitrarily. Some of these questions pose moral dilemmas for which no clear decision is possible.
If humans are already unable to decide what is right or wrong, how can a machine? It also becomes problematic if the machine is trained with data that gives advantages to a certain group. This is a very dystopian idea, but wouldn't it be possible in principle, staying with self-driving cars, for a wealthy person to purchase a premium rule set? In other words, an algorithm that, in the event of an accident, chooses the option offering that privileged person the best chance of survival, even if someone else is injured? Self-driving cars are only part of the problem. Especially for people who develop AI applications, the question arises again and again: can I reconcile this with a moral compass that serves the common good?
Another aspect that I see in this context is data protection and security. As a rule, the data used flows into the training of the AI model, and in principle, data that has once entered a system can be extracted again. With the right queries against the AI system, it may be possible to reconstruct parts of the raw training data, and then, of course, you have a data protection problem. It would certainly also be possible to train an AI to recover data from another AI model.
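To make the extraction risk concrete, a naive memorisation probe could look like the sketch below. Everything here is hypothetical: `query_model` stands in for whatever completion interface the audited system exposes, and the toy record is invented.

```python
def probe_for_memorisation(query_model, prefix: str, known_suffix: str) -> bool:
    """Check whether a model reproduces a known record verbatim.

    query_model is a hypothetical stand-in for the completion API of
    the system under audit: it takes a prompt and returns text.
    """
    completion = query_model(prefix)
    return known_suffix in completion  # verbatim reproduction => leak

# Toy demonstration with a fake "model" that has memorised one record.
memorised = {"Customer 4711: name=": "Jane Doe, account=DE12 3456 7890"}
fake_model = lambda prompt: memorised.get(prompt, "")

assert probe_for_memorisation(fake_model, "Customer 4711: name=", "Jane Doe")
```

A real audit would run such probes over many records and measure how often the model reproduces training data verbatim, rather than checking a single string.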
Overall, however, I believe that the opportunities arising from the use of AI, whether in software development or for us humans, greatly outweigh the risks. For example, self-driving cars could also significantly reduce the number of road deaths. When I consider that we have evolved from cave dwellers who used stones as tools, to people who made tools from molten metal, to people who use machines, to people who build machines that do things themselves, I am quite impressed. Each of these developments has improved human life. Our current prosperity is based on this process of constant development. Why shouldn't AI be the next step in this process?