Hardly any security solution today can function without artificial intelligence. But these systems do not always live up to expectations.
Artificial intelligence (AI) was everywhere in the media in 2018. At the recent digital summit in Nuremberg, Federal Minister for Economic Affairs and Energy Peter Altmaier said AI was going to change everything.
Huge amounts of funding for countless research projects underline the expectations linked with this technology. According to a strategy paper by Germany’s Federal Cabinet, the Federal Government intends to make around EUR 3 billion available to promote AI research through to 2025. Other countries have comparable programmes in place: the UK is investing about $1 billion, the Japanese government aims to make about $1.4 billion in grant funds available, and even the Trump administration is budgeting state funds.
But despite all the hopes associated with this technology, we must not forget that cure-alls rarely fulfil the expectations made of them. AI has been a dominant theme in the world of IT once before: in the 1980s, when artificial intelligence was perceived as a substitute for expert knowledge, and expert systems were expected to take the place of professionals and specialists in all scientific disciplines.
But unlike then, today’s understanding of AI is characterised by automated machine learning. This uses neural networks to train models that can then be applied to other, comparable problems. In contrast to rigid algorithms, neural networks can draw analogies from large volumes of sample data and deal with cases that are not found in the sample data but can be derived from it. Machine learning is the method of choice for detecting patterns. This is precisely where the strengths of AI lie: in analysing variations using pattern recognition. And thanks to the computing power available today, machines can do this at high speed, almost in real time.
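The idea of generalising from sample data to unseen variants can be illustrated with a minimal sketch. The toy nearest-centroid classifier and the feature vectors below are illustrative assumptions only; real systems use neural networks trained on far larger sample sets.

```python
# Toy nearest-centroid classifier: learn an average "pattern" per class
# from sample vectors, then assign new inputs to the nearest pattern.

def centroid(samples):
    """Average each feature across the sample vectors of one class."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(vector, centroids):
    """Assign the vector to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: distance(vector, centroids[label]))

# Hypothetical training data (e.g. normalised file-feature vectors).
training = {
    "malicious": [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9], [0.85, 0.15, 0.85]],
    "benign":    [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1], [0.15, 0.85, 0.15]],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

# A variant that never appeared in the training data is still classified
# correctly because it resembles the learned pattern.
print(classify([0.7, 0.3, 0.75], centroids))  # → malicious
```

The point of the sketch is the last line: the input differs from every training sample, yet it is recognised because it fits the pattern the model derived from them.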
It depends on where it is used
This is exactly what makes AI suitable for security tasks in computer networks – in identifying malware, for example. In recent years, attackers have turned to introducing thousands of variations of the same harmful program as a means of outmanoeuvring traditional antivirus systems. Identifying phishing e-mails is another area where machine learning can be used successfully, since these also contain recognisable patterns. But it falls short against more complex attacks, such as advanced persistent threat (APT) attacks, which typically unfold over an extended period and do not present a single recognisable pattern.
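Pattern-based phishing detection can be sketched in a few lines. The keyword weights and threshold below are illustrative assumptions, not a production rule set; real systems learn such weights from large labelled mail corpora rather than hard-coding them.

```python
# Hedged sketch: score an e-mail against features that frequently
# appear in phishing samples and flag it above a cut-off.

PHISHING_PATTERNS = {          # assumed pattern weights
    "verify your account": 2.0,
    "urgent": 1.5,
    "suspended": 1.5,
    "click here": 1.0,
    "password": 1.0,
}
THRESHOLD = 3.0                # assumed cut-off for flagging

def phishing_score(text):
    """Sum the weights of all phishing patterns found in the text."""
    text = text.lower()
    return sum(weight for pattern, weight in PHISHING_PATTERNS.items()
               if pattern in text)

def is_suspicious(text):
    return phishing_score(text) >= THRESHOLD

mail = "URGENT: your account was suspended. Click here to verify your account."
print(phishing_score(mail), is_suspicious(mail))  # → 6.0 True
```

A learned model differs from this sketch mainly in that the patterns and weights are derived automatically from sample data instead of being written by hand, which is what lets it keep up with new phrasing variants.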
Machine learning platforms also often have weak points that lead to misclassifications, such as in image recognition, one of the key domains of machine learning. A recent case in point is the Google algorithm that misidentified dark-skinned people as gorillas. Researchers have discovered that AI pattern-recognition systems have blind spots: in other words, there are patterns they simply cannot see. They can also develop a form of hallucination, leading them to “see” things that are not there. AI research therefore still has a number of problems to resolve before this technology can be reliably used across all kinds of applications.
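How a small, targeted change to an input can exploit such a blind spot can be shown with a deliberately simple model. The linear classifier and weights below are toy assumptions; real adversarial examples target deep networks in the same spirit, with perturbations too small for a human to notice.

```python
# Illustrative sketch of a classifier blind spot: for a linear model,
# nudging each feature slightly against the weight direction can flip
# the prediction even though the input barely changes.

weights = [1.0, -1.0]  # toy model parameters
bias = 0.0

def predict(x):
    """Linear decision rule: class "A" if the weighted sum is positive."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "A" if score > 0 else "B"

x = [0.6, 0.5]           # score 0.1 → classified "A"

# Perturb each feature by a small epsilon against the weight direction.
eps = 0.06
x_adv = [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(predict(x), predict(x_adv))  # → A B: the tiny change flips the class
```

The same mechanism scaled up to high-dimensional image inputs is what lets an almost invisible perturbation turn, say, a correctly recognised object into a confident misclassification.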
The Cyber-Security Lab at the University of Louisville has published a study on AI failures and warns against relying entirely on software agents for security-related decision-making – if only because there is no way of guaranteeing these systems will prove more fail-safe than any other software.
Read here how cybercriminals use AI to bypass intelligent detection methods.
You will also find news about all aspects of it-sa and the IT security environment in the it-sa Security Newsletter.