How will AI change the cyber battleground?
Written by Martina Bonanno
The Czech branch of AFCEA, with the support of ENISA, organised an essay competition at the conference ‘Does the brain stand a chance?’ during ECSC2021. This unique opportunity brought together young cybersecurity talents and AI experts to discuss the topic ‘How will AI change the cyber battleground?’.
Participating in this competition afforded me the opportunity to be among the three participants selected to present their essays at the conference in Prague. It also made it possible for Malta to contribute to the debate and present its standpoint on the security aspects of AI.
Hardly a day goes by without newspaper articles touching on high-profile data breaches or cybersecurity attacks involving millions of euros. The direction the world is heading in, especially with the ever-growing pervasiveness of computers, mobile devices, servers, and smart devices, is putting security in jeopardy more each day. Even so, while the corporate and governmental worlds are still grappling with cyberspace’s increased prominence, the application of Artificial Intelligence, especially in cybersecurity, promises even more profound consequences.
Cyberspace, the figurative world of information, holds an increasing level of interconnectivity and autonomy within the scope of the Internet. Despite the improvements this has brought, it has also resulted in a growing number of cyberattacks. Accordingly, while an expanding cyberspace can benefit the public, its malevolent use can result in severe economic and social losses.
Over the past years, AI has progressed rapidly, and its capabilities have extended to several domains. The overwhelming volume of cyberattacks has gradually pushed security experts towards adopting such technology. By its nature, AI automates tasks that would previously have required human intelligence. Machine Learning, one branch of AI, is a way of imitating human intelligence through data and algorithms. In plain language, AI turns the flood of data into actionable information, reproducing human judgement through a machine. For this reason, its application in cybersecurity has become progressively more relevant, as it facilitates analysing massive quantities of risk data while speeding up response times.
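To make this idea concrete, the following minimal sketch shows how a learning algorithm can turn raw data into an actionable decision. It is only a toy nearest-centroid classifier over two hypothetical features of network events (requests per minute and distinct ports touched); the data, feature names, and thresholds are illustrative assumptions, not drawn from any real system.

```python
# Toy sketch of learning from data: the feature choices and numbers below
# are hypothetical illustrations, not a real detection model.

def centroid(points):
    """Mean point of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(event, benign_centroid, malicious_centroid):
    """Label an event by which class centroid it lies closer to."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    if dist2(event, malicious_centroid) < dist2(event, benign_centroid):
        return "malicious"
    return "benign"

# Hypothetical training data: (requests per minute, distinct ports touched)
benign = [(10, 2), (12, 3), (8, 2)]
malicious = [(300, 40), (250, 35), (400, 60)]

b_c, m_c = centroid(benign), centroid(malicious)
label = classify((280, 50), b_c, m_c)  # classify a new, unseen event
```

The point of the sketch is the workflow, not the model: past observations (data) are condensed by an algorithm into a decision rule that can then act on new events automatically.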
With the growth of cyberspace, new threats are emerging, and their scale, scope, and frequency are increasing. Threats are escalating as more sophisticated and organised attackers design targeted attacks to damage or disrupt critical infrastructures and services. Consequently, automated cyber defence is necessary to deal with the overwhelming level of activity that must now be monitored. After all, the level of complexity at which defence can be performed without AI has been surpassed. Going forward, one may argue that only systems which apply AI will be able to cope with the complexity and speed of the cybersecurity environment. In like manner, computers may match or even surpass human capacity on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition, and language translation.
Regardless, AI will not merely augment existing strategies for offence and defence. This technology will open new fronts in the battle for cybersecurity as malicious actors seek ways to exploit it. Put simply, bad actors in cyberspace have the same tools at hand as good actors, which increases the sophistication of their attacks. So much so that the impact of potential cyberthreats has broadened: malicious uses of AI technology now enable larger-scale and more powerful attacks. From this perspective, even though research on applying AI to defend against cyberattacks began many years ago, there is still uncertainty about how to defend against AI used as a malicious tool. Accordingly, it can be argued that the malicious use of AI is altering the landscape of potential threats against a wide range of beneficial applications.
By the same token, cyberattacks using AI technology are highly sophisticated and anomalous. So much so that traditional security tools fail against these emerging threats, whose biggest advantage is the ability to analyse and learn from large amounts of data. Correspondingly, AI can learn and adapt to change. This suits attackers particularly well, because they can delegate the heavy work to AI until it acquires the access they targeted. AI can learn to spot patterns in behaviour, understand how to convince people that a video, phone call or email is legitimate, and then persuade them to compromise networks and hand over sensitive data, among other things. Accordingly, the malicious use of AI increases the speed and success rate of attacks, augmenting their capabilities. This expands the opportunities to commit a crime and forms a new threat landscape in which new criminal tactics can take place. For this reason, the best policy in these cases may be to fight fire with fire.
Where on one side AI is used as a cyber defence against attacks, on the other, attackers of all types are also using AI to recognise patterns and weak points in their potential targets. Consequently, the state of play is a battlefield where each side continually probes the other and devises new defences or new forms of attack, and this battlefield is changing by the minute. After all, even AI systems can be attacked. As they are further integrated into critical components of society, attacks on AI systems represent an emerging and systematic vulnerability with the potential to have significant effects on security. Nevertheless, the long-term vision cannot be one of complex attacks against cybersecurity analysts frantically trying to defend with guesswork and supposition. Rather, humans must step aside from this war of algorithms; as many experts have put it, “you cannot bring a human to a machine fight anymore”.
Bearing this in mind, specialists still question whether AI is the best or the worst thing ever to happen to cybersecurity. Time and time again, researchers have warned against the risk of humanity not being able to control autonomous machines. In this case, the possibility of automating security unleashes a technology designed to all but take humans out of the equation. Besides, AI is by no means transparent in how it reaches a decision from many data points, and yet it is still trusted. Several algorithms currently carry out important tasks autonomously, with programmers not fully understanding how they learned. This situation could easily get out of hand, at the risk of becoming uncontrollable and dangerous. In consequence, good actors can employ different measures in the present cyber battleground to lessen or eliminate this risk.
Clearly, a global common-good approach to strategic autonomy, including for AI and cybersecurity, deserves much more attention. Given that cyberthreats will never be entirely eliminated, the next feasible objective is to mitigate and minimise the danger posed by bad actors as much as possible. Doing this alone can be too costly and even risky from an exposure standpoint. For this reason, real-time breach information sharing is vital and would bring a significant force-multiplier effect in lessening the challenges good actors confront daily. Accordingly, from a social standpoint, embracing united efforts can collectively keep the threat from getting out of hand.
Alternatively, good players in the AI-driven cyber battlefield could take it upon themselves to follow authorised compliance programmes. Having the proper know-how would reduce the risk of attacks on AI systems through the adoption of best practices for securing systems against them. Similarly, many governments and integrated regions already provide direction and guidance for the use of AI by issuing guidelines to stay ahead. One instance is the ‘Ethics Guidelines for Trustworthy AI’ published by the European Commission in April 2019.
The effectiveness of AI is largely based on the quality of the data the algorithm has been trained on. Correspondingly, in anticipating tomorrow’s cyberattacks, one cannot look only at yesterday’s threat landscape, especially with hacking strategies going beyond human imagination. Darktrace, a global leader in cybersecurity technology, claims that many companies were failing to catch the novel malware criminals were launching on a near-daily basis. For that very reason, it presented a completely new system that shifted the industry paradigm: instead of spotting known cyberthreats, the AI understands the organisation it protects by learning the normal and abnormal behaviour of each user, a complete re-engineering. Thus, genuine actors should be able to reinvent themselves and their organisational policies every so often to achieve changes that benefit organisations and avoid the risk of losing control.
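The “learn what is normal, flag what is not” approach can be sketched with simple statistics. The sketch below is only a toy illustration of anomaly detection, assuming a single hypothetical per-user metric (daily login count); it is in no way a depiction of Darktrace’s actual technology.

```python
import statistics

# Toy sketch of behavioural anomaly detection: learn a user's normal
# profile from history, then flag values far outside it.
# The metric and numbers are hypothetical illustrations.

def fit(baseline):
    """Learn a simple statistical profile (mean, stdev) from past behaviour."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) > threshold * stdev

history = [4, 5, 6, 5, 4, 5, 6, 5]      # hypothetical daily login counts
mean, stdev = fit(history)
alert = is_anomalous(42, mean, stdev)   # a sudden spike in activity
```

The design choice matters: nothing here encodes what an attack looks like, only what this user normally does, which is why such a model can react to novel threats that signature-based tools miss.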
AI technology in cybersecurity presents great opportunities but, as with any powerful technology, it also brings great challenges. AI improves cybersecurity and defence measures, allowing for greater system robustness, resilience, and responsiveness. On the other hand, AI in the form of Machine Learning and Deep Learning escalates sophisticated cyberattacks, enabling faster, better-targeted, and more destructive strikes. Ultimately, AI’s capabilities will continuously reach new frontiers. However, there must always be a framework for ensuring that AI is accurate and, most importantly, ethical.