Security software faces tough times when attackers employ artificial intelligence. Initial attempts have already been observed and are posing a challenge to developers of security solutions.
There is no doubt that artificial intelligence can dramatically improve security in the IT sector. Whereas antivirus systems previously had to rely on recognising a unique signature for each new incarnation of malware, machine learning makes it possible to detect similarities, so malware mutations can be neutralised. But attacks also evolve, and cyber criminals are adapting in response to AI-based defence systems. This is confirmed by reports from the research labs of security providers: according to a panel discussion with international security experts, the first malware versions designed to feed false information to machine learning systems have already been observed.
The attackers exploit the fact that machine learning depends on the accuracy of its training data. If this data pool is compromised, the results derived from it will also be incorrect. For example, fraudsters could try to upload manipulated samples to the analysis platform VirusTotal, which many software providers use as a data source.
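To make the idea concrete, here is a deliberately simplified sketch (not any vendor's real pipeline): a toy one-nearest-neighbour "malware" classifier over two numeric features per sample, and how a single mislabelled sample slipped into the shared training pool flips its verdict. All names and numbers are invented for illustration.

```python
# Toy data-poisoning illustration: each sample is ((feature_x, feature_y), label).

def classify(labelled, feature):
    """Return the label of the training sample closest to `feature` (1-NN)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labelled, key=lambda sample: dist2(sample[0], feature))[1]

# Clean training pool: malware samples cluster high, benign samples low.
clean = [((8, 9), "malware"), ((9, 8), "malware"),
         ((1, 2), "benign"), ((2, 1), "benign")]

# Attacker uploads a malware-like sample deliberately tagged "benign".
poisoned = clean + [((8, 8), "benign")]

suspect = (8, 8)  # a new malware-like sample to be checked
print(classify(clean, suspect))     # -> malware
print(classify(poisoned, suspect))  # -> benign: the poisoned label wins
```

Real systems use far richer features and models, but the failure mode is the same: a model is only as trustworthy as the data pool it learns from.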
But direct attacks on neural networks, the technology underlying machine learning, are also conceivable. As a proof of concept, researchers at New York University recently demonstrated how backdoors can be injected into these software networks via falsified training data. Even so, this method of attack is highly labour-intensive and is less likely to be employed by typical cyber criminals. Now, however, groups of cyber criminals are starting to use AI technologies to reconnoitre target networks and analyse their protection characteristics. This enables attacks to be adapted to suit different network topologies and security solutions.
Easy access to AI technology
To use AI methods, attackers need neither special expert knowledge nor vast financial resources: a growing number of freely available solutions for machine learning, such as Google’s TensorFlow, let software developers make use of the latest technology without requiring particular knowledge. The availability of ever more powerful hardware at low cost enables cyber criminals to include AI software libraries like OpenNN in their own programs.
Self-learning software can also be used to detect weak points in standard software packages or identify suspicious areas of code, which could lead to the discovery of new zero-day vulnerabilities.
Since phishing e-mails are currently the method of choice for malware infiltration, scientists recently demonstrated how criminals can increase the efficiency of attacks using e-mails of this type. They had previously shown that using AI methods to identify phishing URLs can achieve a 98 percent hit rate. This time they went a stage further, identifying and analysing two currently successful phishing campaigns. The result showed that attackers could achieve a much higher success rate using simple methods like altered URLs. Using their AI software DeepPhish, they increased the theoretical hit rate in the first case from just under one percent to more than 20 percent, and in the second case from about five percent to as high as about 36 percent. If cyber criminals had recourse to such systems or developed them themselves, they could cause significantly more damage. In the next few years, AI will therefore be another field in which it will be essential to stay one step ahead of the perpetrators.
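The kind of lexical URL characteristics such detectors (and, conversely, attackers probing them) work with can be sketched in a few lines. This is an illustration only: the feature set is typical of the literature, but the weights below are hand-picked for the example, not learned from real data, and this is not the DeepPhish model.

```python
# Illustrative phishing-URL scorer: simple lexical features fed into a
# hand-weighted logistic score. Higher score = more phishing-like.
import math
from urllib.parse import urlparse

def url_features(url):
    host = urlparse(url).hostname or ""
    return {
        "length": len(url),                        # long URLs are suspicious
        "digits": sum(c.isdigit() for c in url),   # digit stuffing
        "hyphens": host.count("-"),                # hyphenated lookalike hosts
        "subdomains": max(host.count(".") - 1, 0), # deep subdomain chains
        "has_at": int("@" in url),                 # '@' tricks in URLs
    }

# Hand-picked weights for this sketch; a real system would learn these.
WEIGHTS = {"length": 0.03, "digits": 0.15, "hyphens": 0.4,
           "subdomains": 0.5, "has_at": 2.0}
BIAS = -2.5

def phishing_score(url):
    """Logistic score in (0, 1)."""
    f = url_features(url)
    z = BIAS + sum(WEIGHTS[k] * f[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

print(round(phishing_score("https://example.com/login"), 2))
print(round(phishing_score(
    "http://secure-login.example-bank.verify123.example.com/@account"), 2))
```

The research cited above suggests the worrying flip side: an attacker can run exactly such features in reverse, mutating URLs until the score drops below the detection threshold.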
Intensive research is being carried out into the use of artificial intelligence. But as this article points out, the technology still has its pitfalls.
You will also find news about all aspects of it-sa and the IT security environment in the it-sa Security Newsletter.