This time, we spoke to Danny Hucke. With his expertise, he supports us as team lead in Digital Product Design and also takes on the role of scientific coordinator. Last year, he gave us an insight into his thoughts on the topic of AI and we were curious to see whether his view of it had changed.
Danny Hucke
Exactly one year has passed since we last spoke. How do you view the developments since then?
At the moment, we are in the middle of the shift from AI assistants to AI agents. In terms of everyday use, we are clearly still working with assistants. In development, however, things are different: the focus is increasingly on agents.
What distinguishes these agents from their predecessors?
Assistants such as ChatGPT work according to the simple principle of input and output. Agents follow a much more complex process. Three things are essential here:
Agents do not answer immediately; they are required to think about the request first. They make a plan and break the task down into sub-problems. Essentially, their first step is to think about the question. Technically, you can picture it like this: code is written around the LLM, which by itself is a pure input-output machine, and this code does nothing other than make the LLM reflect on the task at hand before providing an answer. End users are, of course, not aware of this. The problem is first analysed, and the agent reflects on it.
The second difference from assistants is simply that agents have access to tools, whether that is a web browser, databases, APIs, or the ability to execute code. Agents are therefore able to google things, just as a software developer would. And they can determine for themselves whether the code they have written actually works in the end.
Last but not least, they identify errors and try to solve them independently. This is often referred to as iteration or feedback. This ability enables the tool to keep working: it solves one problem and then moves on to the next. However, as each solution could potentially trigger a new problem, there is a risk of the agent getting stuck in a loop.
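The three properties described above can be sketched as a small loop of code written around an LLM. This is only an illustration of the idea, not any real product's implementation: the LLM call is mocked, and all names (`fake_llm`, `run_code`, `MAX_ITERATIONS`) are invented for this example.

```python
MAX_ITERATIONS = 5  # hard cap so the agent cannot get stuck in an endless loop


def fake_llm(prompt: str) -> str:
    """Stand-in for the LLM core: a pure input-output function.

    A real agent would call an actual model here; this mock simply
    returns a buggy first attempt and a corrected one once it sees
    error feedback in the prompt.
    """
    if "feedback:" in prompt:
        return "print('hello')"   # corrected attempt after feedback
    return "prnt('hello')"        # first attempt contains a typo


def run_code(code: str):
    """Tool access: execute the generated code and report success or the error."""
    try:
        exec(code, {})
        return True, "ok"
    except Exception as exc:
        return False, repr(exc)


def agent(task: str):
    # 1. Plan: think about the task before answering
    #    (sketched as a single extra LLM call; unused further in this toy).
    plan = fake_llm(f"Break '{task}' into sub-problems")

    feedback = ""
    for _ in range(MAX_ITERATIONS):       # 3. iterate on errors, but never forever
        code = fake_llm(f"{task} {feedback}")
        ok, result = run_code(code)       # 2. use a tool: actually run the code
        if ok:
            return code
        feedback = f"feedback: {result}"  # feed the error back into the next attempt
    return None                            # give up once the iteration cap is hit
```

The iteration cap is the simple safeguard against the loop risk mentioned above: each failed attempt feeds its error message back into the next prompt, but only a fixed number of times.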
At the moment, AI assistants have definitely arrived in the mainstream as a form of support. As the number of agents on the market is manageable, they are still in relatively limited use. One relatively new tool that I would definitely like to mention here is Devin AI. Billed as "The World's First AI Software Engineer", Cognition Labs, the company behind Devin, promises an AI agent with unrivalled capabilities. Fully autonomous and capable of learning, the tool is able to identify and rectify errors and, according to Cognition, to think and plan independently. It is also equipped with all the relevant development tools, such as a shell, code editor and browser, and is said to be able to solve complex software problems.
The fact that GitHub Copilot alone has over one million paying users shows that AI-supported coding has become an integral part of software development.
What is and will remain an issue in any case is data protection. Almost all of these tools send the code to the USA via APIs, and some also use the data to train their models. This is of course problematic insofar as development artefacts (code, configurations, etc.) are sometimes also company secrets and are then potentially accessible to others via these models. How these tools are then handled depends entirely on the guidelines that apply in your own company. GitHub Copilot, for example, promises that the data will not be passed on or used to train the models.
You've already given a brief glimpse into the future with the agents. What is your assessment of where we are heading?
Personally, I believe that agents will become increasingly relevant. In concrete terms, this means that developers will solve more and more tasks together with agents in the future. Developers will increasingly hand over subtasks to agents so that they can take care of more complex problems. The agents will be able to solve tasks autonomously and developers will increasingly test and review the results.
I also think that the role of software developers will change significantly with the use of these tools. The proportion of coding itself will decrease and be taken over by the agents. Not only will the number of tasks that software developers hand over to AI agents in the future increase, but so will the ability of these agents to solve increasingly complex tasks. Time-consuming coding will become increasingly rare and developers will instead be busy checking and optimising the code generated by AI agents.
Where do you see the future challenges and opportunities?
With the increasing development of these tools, I see challenges for companies and software developers alike. Even for society and the state itself. But one thing at a time.
For developers who define their work more or less by actively writing code, it could be quite difficult. However, anyone who has always been interested in the overall problem and various solutions, or is accustomed to seeing the big picture and understanding and delegating its components, is likely to benefit significantly from AI tools, as much more work can now be done in a much shorter time. Overall, I think the role of developers will move much more in the direction of product owner and project management, and the boundaries between these functions will become increasingly blurred.
While the barriers to writing software are decreasing, this doesn't mean that the demand for developers is diminishing. On the contrary, it allows companies to undertake projects that were previously not feasible. From my perspective, the volume of software being developed is actually increasing, and so is the need for developers. These tools enhance productivity, thereby expanding the potential to create high-quality software. Historically, in IT, the advent of new and more efficient programming languages has always led to an increase in software demand. Naturally, it will take some time for these changes to fully materialize. In the future, companies will likely be able to tailor cost-effective software to their specific business processes, reducing their dependency on large corporations.
I also see challenges in trusting the output of AI. Even the most advanced AI can make mistakes, making it crucial to train software developers in its use and securely integrate these tools into their workflows. Monitoring will become increasingly important, and anyone who blindly trusts AI could inadvertently introduce malware. Understanding the behavior of these tools and holding them to the same standards as software developers is essential. The old adage still holds true: don’t adopt a solution you don’t fully understand. Rapid software development alone doesn’t ensure a good product; the quality must also be high. Today, the demands for transparency and adaptability are probably higher than ever.
We are currently in an era dominated by assistants and agents. While some are still not utilizing assistants, the first companies are now actively using agents to develop their software products. It is evident that the role of AI in software development will continue to grow. Emerging agents will further accelerate this trend, ensuring a rapid increase in adoption. The core or "brain" of these applications is always the LLM (Large Language Model). Numerous innovations are being built around these LLMs, and we will see many more until their full potential is realized. The landscape could become truly revolutionary if LLMs continue to evolve; predicting the possibilities is difficult. However, if development progresses as it has, we are on the brink of something significant.
At the same time, it must be noted that there hasn't been a significant breakthrough since GPT-4. While there are models that may be slightly better at coding or language processing than GPT-4, overall, these models are on a similar level. There is a strong possibility that LLMs will soon reach a plateau that won't be surpassed quickly, although I don’t believe these models have reached their full potential yet. Nevertheless, regarding the code or applications developed around these LLMs, we are still in the early stages.
If the core model, or "brain," reaches its limit, then the agents will also have limitations, and the advancements we make will become smaller. However, the past few years have shown that predictions in this field are often unreliable. What seems certain today might be outdated tomorrow. Nonetheless, when it comes to AI, I believe nothing is impossible at the moment.
Although the USA remains the hub of AI development, we are seeing a gradual democratization. For instance, there is Mistral from France, and significant developments are currently happening in China.
What do you think of predictions that software developers will be replaced by AI in the future?
As it stands, there are very clear limitations. These agents cannot yet perform everything that software developers can do, not even tools like Devin. They require well-defined frameworks and depend on high-quality data. For instance, these tools struggle significantly with legacy code, and I foresee complex issues like this remaining in human hands for quite some time. The role of a software engineer involves much more than just coding. First and foremost, it involves identifying the actual problem. It’s about translating the customer’s language—what they actually want—into technical language. Currently, a machine cannot reliably determine if what the customer says truly aligns with what they want. Figuring out what needs to be coded has always been one of the most challenging aspects, and I don't see software developers handing over tasks like requirements elicitation, architecture, or customer communication to machines anytime soon.
It becomes challenging when coding itself is the dream job. Previously, this type of work was considered difficult and compensated accordingly. Suddenly, there are machines that can more or less take over this task, or people without exceptional coding skills who can program software using these AI tools. Of course, software development is not the first industry to experience such a shift, but when the requirements of a job change, it also impacts the people involved. Some will enjoy the new tasks, but for others, the central element of their work will be removed, making it harder for them to adjust. It is crucial to support these individuals, helping them adapt rather than simply phasing them out.
At the same time, we must not forget that research and development are continually advancing, constantly producing new innovations. New concepts and architectures are always emerging, driving the industry forward. Emerging technologies such as quantum computing need to be sufficiently understood, and best practices must be established before they can be effectively utilized by AI tools such as Large Language Models (LLMs). This ongoing innovation presents new opportunities and challenges, continually evolving the IT landscape.
Of course, the influence of legislation should not be underestimated in this entire development. The recently enacted EU AI Act has received both praise and criticism, depending on one's perspective. This has led to consequences, such as large companies like Google opting not to offer certain products in the European market. Overall, it is a significant challenge for everyone involved in legislation to find rules that protect people. It is also a personal priority for me to place people at the center. It is crucial to harness the opportunities AI offers us and develop applications for people, rather than against or instead of them. The key term here is human-centered AI.
At IT Sonix, we are actively enhancing our expertise in this area and are working to empower more of our employees to utilize AI products. It is essential to break down barriers. We are also developing our own prototypes and are fascinated by the ongoing developments in this field.