
Where will the AI pendulum swing?

[Image: hand of a robot (AI) holding a red ball as a pendulum]

The world-famous physicist Stephen Hawking commented on the potential of artificial intelligence years ago and came to an ambiguous verdict: artificial intelligence (AI) will either be "the best thing that can happen to humanity. Or the worst." We do not yet know which way the pendulum will swing. [1] At that time, there was no talk of generative AI systems like ChatGPT and Co.

We have put together an "ambiguous" mix on this topic from our innovation compass, which underscores that even today we still do not know which way the pendulum will swing.

AI detects Parkinson's in its early stages: based on smartphone images of the retina

Parkinson's, a neurodegenerative disease, causes the dopamine-producing nerve cells in the brain to die. Dopamine transmits signals from one nerve cell to the next; when this messenger substance is missing, those affected develop trembling arms and legs, stiff muscles and slowed movements over time. Unfortunately, the disease is often recognized too late, namely only once it has already attacked the brain.

In Parkinson's patients, however, the nerve degradation shows up early in the retina at the back of the eyeball: the retina becomes thinner, and the blood vessels there become narrower. Until now, only very experienced doctors have been able to make a reliable diagnosis this way, using a microscope and continual reference comparisons.

Researchers have now trained an artificial intelligence on retinal images from both Parkinson's patients and healthy people. In the test series, the AI, built on a so-called support vector machine (SVM) algorithm, classified the contrast-enhanced images extremely reliably. In the future, artificial intelligence could therefore play a crucial role in early Parkinson's detection. All that is needed is equipment already available in eye clinics, and best of all, according to the scientists, images of sufficiently high quality can be taken with a "simple" smartphone fitted with a special lens. [2]
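To give a feel for the classification step, here is a minimal sketch of a support vector machine in the spirit of the one described, using scikit-learn. The feature vectors (retinal thickness, vessel diameter) and all numbers are invented for illustration; the actual study worked on contrast-enhanced retinal images, not on two hand-picked measurements.

```python
# Hypothetical sketch of an SVM classifier: synthetic "retinal" feature
# vectors for healthy subjects and Parkinson's patients, then a standard
# support vector machine separating the two groups.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Invented features: [retinal thickness (µm), vessel diameter (µm)].
# Parkinson's cases are modeled with a thinner retina and narrower vessels.
healthy = rng.normal(loc=[250.0, 120.0], scale=[10.0, 8.0], size=(100, 2))
parkinsons = rng.normal(loc=[230.0, 105.0], scale=[10.0, 8.0], size=(100, 2))

X = np.vstack([healthy, parkinsons])
y = np.array([0] * 100 + [1] * 100)  # 0 = healthy, 1 = Parkinson's

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="rbf")  # support vector machine with an RBF kernel
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is only the workflow: extract features, train the SVM on labeled examples, then score unseen cases. The reliability reported in the study comes from real clinical data, not from a toy setup like this.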

Using AI language engines like ChatGPT to set hacker traps

The release of ChatGPT by OpenAI and Microsoft has sparked a broad discussion about the possibilities and dangers of language-based artificial intelligence. Meanwhile, the chatbot keeps learning and is mastering ever more complex tasks that previously amused rather than impressed experts, such as writing computer programs. This worries cybersecurity leaders, who see another dynamic front of criminal potential opening up, but it also hands them a tool to counter it.

Xavier Bellekens, head of the Glasgow-based cyber-defense company Lupovis ("deception as a service"), used the chatbot to deliberately deceive criminal hackers for the first time by having it build a so-called "honeypot": an apparently productive data source, in this case a printer, intended to attract hacker activity and track down its perpetrators. This required a particularly nuanced, iterative form of "prompt engineering", the question-and-answer dialogue with the machine. Bellekens described it as essential to his success that he gave ChatGPT a relatively free hand at every stage of development and then responded precisely to the errors in the results. Indeed, after the printer had been in the cloud for six minutes, the first hackers fell into the honeypot, although initially these were hacker bots rather than human attackers. In the cybersecurity scene, the experiment was welcomed as a groundbreaking expansion of the toolkit. [3]
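For readers unfamiliar with the term, the core idea of a honeypot is very small: expose a service that looks real, accept whatever connects, and log the source. The following sketch is a drastically simplified stand-in (a fake "printer" on the raw-printing port family), not Lupovis' actual ChatGPT-generated setup; all names here are our own illustration.

```python
# Minimal honeypot sketch: a fake service that accepts a connection,
# records the source address, and serves nothing real.
import socket
import threading

def open_honeypot(host="127.0.0.1", port=0):
    """Bind a decoy listener. Port 0 lets the OS pick a free port;
    a real printer decoy would sit on 9100, the raw-printing port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

def log_one_connection(srv, log):
    """Accept a single connection and record the 'attacker's' source IP.
    A real deployment would also capture payloads and timestamps."""
    conn, addr = srv.accept()
    log.append(addr[0])
    conn.close()

if __name__ == "__main__":
    srv = open_honeypot()
    port = srv.getsockname()[1]
    log = []
    t = threading.Thread(target=log_one_connection, args=(srv, log))
    t.start()
    # Simulate a probing bot connecting to the decoy.
    socket.create_connection(("127.0.0.1", port)).close()
    t.join()
    srv.close()
    print("connections logged:", log)
```

What made Bellekens' experiment notable was not this logging logic, which is textbook material, but that the decoy's convincing surface was produced iteratively by the chatbot itself.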


Everyday chatbots can produce instructions for triggering pandemics

The risk of large-scale use of biological weapons seems relatively low as long as only governments have the means to deploy them. But this is changing, as a report from MIT, the Massachusetts Institute of Technology, suggests: in a course exercise, students without a scientific background were asked to coax layman's instructions for triggering a pandemic out of common chatbots. They used various LLM systems (GPT-4, GPT-3.5, Bing, Bard, FreedomGPT and others) for the one-hour experiment and got alarmingly far. The chatbots suggested pathogens considered likely triggers of future pandemics, explained how they could be produced with existing laboratory materials, named companies that could carry out such orders and were unlikely to vet them, and suggested how the viruses could be spread.

The authors of the publication emphasize that artificial intelligence can immensely increase biological threats, a fundamental danger to international security, especially considering that the SARS-CoV-2 virus has led to some 20 million deaths. Given advances in biological research, even more, and possibly more dangerous, agents that could serve as biological weapons can be expected in the future. As possible countermeasures, the authors discuss non-proliferation regimes targeting LLM or DNA applications. [5]

Potential or risk? Tool or weapon? That is probably up to all of us. At INNO-VERSE, artificially intelligent assistance systems can be used as tools for valuable innovation work to meet the challenges of our time, true to the motto "simplify innovation".

Greetings from INNO-VERSE

The in-manas team



