
Artificial Intelligence: between fascination and hope
Convincing answers to everyday questions and astonishingly accurate protein structure predictions are offset by a lack of truly creative input and by predictive maintenance systems of limited practical use. For Michael Collasius, CEO of HSE•AG, this dichotomy of artificial intelligence shows where it can best be utilized in diagnostic applications and where, at least from today’s perspective, it still offers few advantages.
Michael Collasius: Sometimes I feel a little lost when it comes to artificial intelligence (AI). On the one hand, there is this fascination: the precise answers and explanations that tools such as ChatGPT, Google Gemini, Mistral, Bing Chat or Perplexity give me to many everyday questions, the enormous accuracy of Google DeepMind’s AlphaFold protein structure predictions, or the reliability with which an image recognition system verifies that the work deck of a diagnostic device has been loaded correctly.
On the other hand, I am repeatedly disappointed: ChatGPT and its ilk have yet to give me a truly creative, innovative answer that goes beyond encyclopedic knowledge. And predictive maintenance has made no noticeable progress in recent years, in the life sciences diagnostics and automation industry or elsewhere.

Even Zuckerberg is still looking for the killer app
I am not alone in my dilemma. Even pioneers of the AI revolution share it: Meta CEO Mark Zuckerberg, who is convinced that AI will transform large parts of our society, stated in an interview with the business podcast “Acquired” that he is not sure what the first applications of the technology will be. His platforms Facebook, Instagram, Threads, and WhatsApp have not yet found the AI killer app either and are still experimenting in trial-and-error mode.
I increasingly suspect that this dual nature is a fundamental characteristic of AI. Whether the tools are extremely powerful or completely helpless depends less on the technology itself and more on the question at hand and the environment, or in other words, on the available data.
Relatedness and available data volume as constraints
AI is obviously particularly powerful when the objects of investigation are closely related to one another. This applies, for example, to languages and protein structures: both develop through evolutionary processes from precursors, which automatically creates patterns of relatedness. Confronted with something completely new, by contrast, AI can usually only hallucinate, clinging to whatever apparent patterns it finds. AlphaFold, too, fails at structure prediction if an amino acid sequence is artificial and bears no relationship to known structures.
In addition to the necessary relatedness, there is another limitation that stands in the way of many practical applications: if AI does not have enough reliable data at its disposal, it cannot generate reliable answers. This restricts its use for questions that venture into unknown territory as well as for events that occur only very rarely.
The better the device, the more difficult predictive maintenance becomes
For example, if we at HSE•AG develop a device that works very reliably, error events will inevitably be very rare. Predictive maintenance is therefore unlikely to ever work for such high-quality devices. A device that produced enough fault data to train an AI model would be an unacceptable burden on its users and therefore simply not marketable.
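A minimal numerical sketch makes this imbalance tangible. The fault rate below is a hypothetical assumption chosen for illustration, not data from one of our devices: a predictor that simply never announces a fault is almost perfectly accurate and at the same time completely useless.

```python
import numpy as np

# Illustrative sketch: rare fault events starve a predictive-maintenance
# model of training signal. The fault rate is a hypothetical assumption.
rng = np.random.default_rng(seed=0)
n_cycles = 1_000_000                # simulated operating cycles
fault_rate = 1e-5                   # assumed: one fault per 100,000 cycles
faults = rng.random(n_cycles) < fault_rate

print("Fault examples available for training:", faults.sum())  # ~10

# A "predictor" that never announces a fault is almost perfectly accurate,
# yet useless for planning maintenance.
accuracy = 1 - faults.sum() / n_cycles
print(f"Accuracy of the do-nothing predictor: {accuracy:.4%}")
```

Roughly ten positive examples against a million negatives is nowhere near enough for a model to learn anything transferable about impending failures.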
Sufficient data is generated, for example, where many related images are created. This applies to the automatic evaluation of stained tissue sections or the video monitoring of processes on the work deck of an analytical device. But even in these cases, AI does not succeed on its own. Sudden light from outside – for example from sunlight after the device has been repositioned – can overwhelm the AI and lead to misinterpretations.
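One conceivable safeguard, sketched here purely as an assumption and not as a description of any real product, is a plausibility check that flags frames whose overall brightness falls far outside the range recorded during calibration, so that a sun-flooded image is routed to review instead of being silently misinterpreted. All names, values, and thresholds are illustrative.

```python
import numpy as np

def is_lighting_outlier(frame: np.ndarray, calib_mean: float,
                        calib_std: float, z_max: float = 4.0) -> bool:
    """True if the frame's mean brightness lies far outside calibration."""
    z = abs(frame.mean() - calib_mean) / calib_std
    return z > z_max

rng = np.random.default_rng(1)

# Calibration: per-frame mean brightness recorded under normal lab lighting
# (hypothetical 8-bit grayscale values clustered around 120).
calib_means = rng.normal(120.0, 3.0, size=200)
calib_mean, calib_std = calib_means.mean(), calib_means.std()

# A frame flooded by direct sunlight is far brighter; the guard flags it
# for review instead of letting the model silently misinterpret it.
sunny_frame = np.full((480, 640), 235.0)
print(is_lighting_outlier(sunny_frame, calib_mean, calib_std))  # True
```

Such a guard does not make the underlying model smarter, but it keeps implausible inputs from producing confidently wrong results.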
Acceptance is higher for assistance systems than for autopilots
Such errors can usually be detected and rectified sooner or later. But they undermine the acceptance of AI, and that is particularly problematic in the medical environment. Doctors must be able to rely on the analysis results. Pathologists therefore prefer to assess tissue sections with their own eyes. That is entirely understandable, as they are ultimately responsible for the decisions.
For the time being, AI is therefore best implemented in the medical environment as an assistance system that makes doctors’ work easier, for example by indicating where they should take a closer look. Medical practice is no different from autonomous driving in this respect: most of us would feel uncomfortable leaving control of the car entirely to a self-driving AI system, whereas assistance systems that park, keep the car in its lane or maintain speed are welcome helpers for the vast majority of us.
Mountains of data can no longer be managed without AI support
One field of application in which AI tools will undoubtedly be indispensable to diagnostics in the future is data analysis. Devices are producing ever larger volumes of ever more complex data. Without the help of intelligent systems, humans will no longer be able to distil the necessary findings from these constantly growing mountains of information. This is all the more true as, in future, individual diagnoses will increasingly need not only to be assessed on their own but also to be combined with the results of other analyses.
When and how AI can sensibly be used in diagnostic devices, and where the limits of these systems lie, are among the most demanding challenges for us device developers. To master them successfully, we not only need to command the technology behind an application but also understand it in its laboratory context. We must also be able to assess the biology on which the assays are based. Only then can we reliably evaluate the responses of an AI and incorporate them into the development process.
Used correctly, AI tools will significantly advance analytics. In the wrong place, they generate false security at best and false results at worst.

About Michael Collasius
Michael Collasius holds a Diploma in Biology from the University of Bonn, Germany, a Diploma in Molecular Biology from the Institute of Genetics at the University of Cologne, Germany, and a PhD from the Max Planck Institute of Biochemistry in Martinsried/Munich, Germany.
He has more than 25 years of experience in the life science and diagnostics industry. Michael Collasius has developed a broad portfolio of innovative laboratory sample preparation and analysis platforms for industrial and academic applications, held several leadership positions and built companies in the triple-digit million range from scratch.
Michael was a member of the board of QIAGEN before co-founding HSE•AG in 2017.
