09 May IT Security
Artificial intelligence – the next frontier in IT security?
Security has always been an arms race between attacker and defender. He starts a war with a stick, you get a spear; he counters with a musket, you upgrade to a cannon; he develops a tank, you split the atom. While the consequences of organisational cyber-security breaches may not be as earth-shatteringly dramatic, the arms race of centuries past continues in today's digital sphere.
The next challenge for companies with an eye towards the future should be to recognise that artificial intelligence (AI) is already entering the scene – with tools such as PatternEx focused on spotting cyber-attacks and Feedzai for fraud detection across the e-commerce value chain. The technology is developing so rapidly that it is too early to say whether the impact will be revolutionary or just the next evolution.
Some AI evangelists argue that this new technological force could render all others seemingly irrelevant, given the scale of change, risk and opportunity it could bring about in IT security. This new dark art offering seemingly magical technological wizardry does indeed have the potential to change our world and – depending on who you choose to believe – either make life a little better, lead to total societal transformation or end humanity itself.
Artificial intelligence has the potential to disrupt all industry sectors – it is a field of computer science focused on creating intelligent software tools that replicate critical human mental faculties. The range of applications includes speech recognition, language translation, visual perception, learning, reasoning, inference, strategising, planning, decision-making and intuition.
As a result of a new generation of disruptive technologies and AI we are entering a fourth industrial revolution. This new era, with its smart machines, is fuelled by exponential improvement and the convergence of multiple scientific and technological fields such as big data, AI, the Internet of Things (IoT), super-computing hardware, hyperconnectivity, cloud computing, digital currencies, blockchain distributed ledger systems and mobile computing. The medium to long-term outcomes of these converging exponential technologies for individuals, society, business, government and IT security are far from clear.
The pace of AI development is accelerating and is astounding even those in the sector. In March 2016, Google DeepMind's AlphaGo system beat the world Go champion, demonstrating the speed of development taking place in machine learning – a core AI technology. The board game Go has more possible board positions than there are atoms in the observable universe – you cannot teach the system every permutation in advance. Instead, AlphaGo was equipped with machine learning algorithms that enabled it to learn effective play by observing thousands of expert games and by playing against itself. This same technology can now be used in IT security in applications ranging from external threat detection and prevention to spotting the precursors of potentially illegal behaviour among employees.
Current state of security
In 2015 in the US, the Identity Theft Resource Centre noted that almost 180 million personal records were exposed in data breaches, and a PwC survey report highlighted that 79% of responding US organisations had experienced at least one security incident.1,2 Industry research indicates that while hackers exploit vulnerabilities within minutes of their becoming known, companies take roughly 146 days to fix critical vulnerabilities. With the average cost of a data breach estimated at $4m, there is growing concern over how companies can keep up with the constant onslaught of ever stealthier, faster and more malicious attacks today and in the future.
As it stands, many firms focus more on reacting to security breaches than on preventing them, and the current approach to network security is often aimed more at standards compliance than at detecting new and evolving threats. The result is an unwinnable game of whack-a-mole that could overwhelm companies in the future unless they are willing to adopt and adapt the mindset, technology and techniques used by the hackers. And there is very little doubt that hackers are – or soon will be – developing AI tools to increase the frequency, scale, breadth and sophistication of their attacks.
Organisations in this digital age create enormous amounts of data, both internally through their own processes and externally via customers, suppliers and partners. No one human is capable of analysing all that data to monitor for potential security breaches – our systems have simply become too widespread, data-laden and unwieldy. However, when combined with big data management tools, AI is becoming ever more effective at crunching vast amounts of data and picking out patterns and anomalies. In fact, with most AI systems, the more information they are fed, the smarter they become.
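To make the pattern-spotting idea concrete, here is a minimal sketch of statistical anomaly detection – flagging any value that sits more than a few standard deviations from the norm. The function name, threshold and login counts are illustrative assumptions, not a description of any particular product:

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Daily login counts for one account; the spike on the last day stands out.
logins = [42, 45, 40, 44, 43, 41, 46, 39, 44, 42, 43, 45, 41, 300]
print(find_anomalies(logins))  # → [300]
```

Real systems learn far richer baselines than a single mean and deviation, but the principle – score each observation against learned normal behaviour – is the same.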
One of the biggest potential security benefits of AI lies in detecting internal threats. Imagine an AI system that, day in and day out, watches the comings and goings of all employees within corporate headquarters via biometrics and login information. It knows, for example, that the CFO normally logs out of the cloud each day by 12 noon and heads to the company gym, where she spends an average of 45 minutes. One day it spots an anomaly – the CFO has logged into the cloud at 12:20pm. The AI is intelligent enough to compare her location with this unexpected login – according to its data, the CFO's face was last scanned on entering the gym and has not been seen leaving, yet the cloud login originated from her office. The AI recognises the anomaly, correlates the discrepancy between the login and the CFO's location, shuts down cloud access to the CFO's account and begins defensive measures against a potential cyber-attack. The system also alerts the CFO and escalates this high-priority problem to the human cyber-security team within seconds. Important company data and financial records are safe thanks to AI security.
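Stripped to its essentials, the correlation step in this scenario is a rule: if a login's origin conflicts with the user's last known physical location, suspend access and escalate. A minimal sketch, in which every name and field is hypothetical:

```python
def assess_login(badge_location, login_origin, badge_exit_seen):
    """Flag a login whose origin conflicts with the user's last known location.

    If the user badged into one area and has not been seen leaving it,
    a login originating elsewhere is treated as a high-priority anomaly.
    """
    if login_origin != badge_location and not badge_exit_seen:
        return {"action": "suspend_account", "alert": "escalate_to_soc"}
    return {"action": "allow", "alert": None}

# The CFO's face was last scanned entering the gym, no exit was recorded,
# yet the cloud login came from her office workstation.
decision = assess_login("gym", "office", badge_exit_seen=False)
print(decision["action"])  # → suspend_account
```

An AI system would learn such correlations from data rather than have them hand-coded, but the output – suspend, defend, escalate – is the same.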
Imagine how its capabilities will grow as this same AI system continues to learn from and predict the behaviour of hundreds or thousands of employees across the organisation – helping it monitor for and predict similar security breaches.
Beyond employee behaviour, this AI security application is also watching the company's internal systems and learning how they interact. It discovers that when customer information is added to the company's database, the information is automatically picked up by the accounting software and an invoice is generated within an average of 7.5 seconds. Any deviation from normal behaviour of more than 0.25 seconds triggers the AI to investigate every link within the process and tease out the cause. In this case, based on what it discovers (an inconsequential lag in the system), the AI security properly prioritises the incident as non-threatening and low risk, but will continue to monitor for similar lags and alert system maintenance to the issue just in case.
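The timing check described above amounts to comparing an observed duration against a learned baseline and tolerance. A minimal sketch, using the 7.5-second baseline and 0.25-second tolerance from the scenario (the constants and function name are illustrative):

```python
BASELINE_SECONDS = 7.5    # learned average time from database insert to invoice
TOLERANCE_SECONDS = 0.25  # learned normal variation around that average

def classify_lag(observed_seconds):
    """Classify an observed process duration against the learned baseline."""
    deviation = abs(observed_seconds - BASELINE_SECONDS)
    if deviation <= TOLERANCE_SECONDS:
        return "normal"
    # Outside tolerance: investigate every link in the chain before deciding
    # whether the cause is a harmless lag or something hostile.
    return "investigate"

print(classify_lag(7.6))   # → normal
print(classify_lag(8.1))   # → investigate
```

In practice the baseline and tolerance would themselves be re-learned continuously as the system observes more invoices, rather than fixed as constants.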
Now let’s take this scenario a step further – imagine that not only has this AI system learned the behaviour of hundreds of employees and of the internal company networks, but it is also capable of continually learning from external cyber-attacks. The more cyber-attacks are thrown at the AI, the more data it can parse and – like a thinking, rational soldier who has manned the battlements through numerous campaigns – the better educated and prepared it becomes for future attacks. It will recognise totally new, hostile code based on past experience and previous exposure to related patterns of attack behaviour. It will build defences as it works to unravel the new hostile code, and as the hostile AI code attempts to adapt to those defences, the AI security will continually develop new methods to counter and destroy the invader.
This is the potential AI security system of the near future – fully integrated inside and out, non-invasive to daily business and always on alert, ready to defend. It will be the ultimate digital sentry – hopefully learning and adapting as quickly as the attackers.
Just as fighting with sticks eventually escalated to nuclear weapons, so too will the AI battle between organisations and hackers escalate. Continual one-upmanship will become the norm in AI security, perhaps to the point where even developers will be unable to decipher the exact workings of their constantly learning and evolving security algorithms.
As complex and expensive as this all sounds, will companies in the future, especially smaller organisations, be able to survive without AI?
As the stakes become higher and failures loom larger, ever-evolving AI threats may encourage far more collaboration across multiple companies. Smaller organisations could band together under one AI security system, dispersing the cost and maintenance across multiple payers, while larger players with the financial and technological muscle to own their own AI security may actually exchange critical information on cyber-attacks – or rather, their AIs could exchange information on cyber-attacks and learn from each other.
Alternatively, companies could become so overwhelmed that they simply opt for simpler, technologically cheaper ‘brute force’ non-AI solutions to counter increasingly complex AI hacks. The simple, or dumb, solution may entail more checks and passwords across accounts and devices, or perhaps security-enhanced devices that are changed every two weeks. While adding five layers of complex passwords to every login or continuously rotating through smartphones could protect company security, the increased overhead, employee frustration and time wasted on cumbersome security measures would be far from ideal, could damage the firm’s reputation and might ultimately make it more susceptible to attack.
While an AI system will quietly monitor security and enable employees to focus on their work, the simple non-AI solution will place an unnecessary security burden on employees – they will be responsible for keeping up with those five complex passwords and changing devices on a biweekly basis. Whereas the AI system is maintained by a few cyber-security experts, the entire simple security system is in the hands of every employee, vastly multiplying the chances of a security breach. In the future, this simple non-AI solution might become a defensive strategy of survival rather than the adaptive offensive campaign of a leading, thriving business.
The role of humans
Of course, at this point there’s a natural question to ask: if AI is quicker, smarter and continually adapting to do its job better, why even bother with human cyber-security? Today, AI security must still learn from humans and although it may one day reach the point where it no longer requires expert involvement, that day remains at least a few years down the road. Furthermore, depending on how valuable we deem human oversight and intuition in security to be, that day may never come to pass. AI security systems currently need humans to write their starter algorithms and provide the necessary data, training and feedback to guide their learning. Humans are currently an essential part of the deployment of AI, and as AI security evolves beyond this nascent stage, the role for humans in AI will evolve as well.
As organisations increasingly digitise processes, amassing mind-boggling amounts of sensitive data, new importance will be placed on the role of the human architects and minders of AI security systems. Never has so much data been so easily accessible to attack, and even small attacks gathering seemingly innocuous data could add up to catastrophic security breaches. Developers of AI security will become akin to nuclear weapons inspectors in importance – highly trustworthy individuals who have undergone extensive background checks and intensive training, vetting and accreditation. They will not only build AI security, but also provide oversight and intuitive guidance in the training process and be an integral line of cyber-security defence.
AI security will go far beyond human capabilities, freeing organisations and cyber-security experts from the impossible task of constant vigilance, allowing them to prevent future attacks without interrupting daily workflow. Tomorrow’s AI security system will learn, self-improve and run discreetly behind the scenes – intelligently monitoring, prioritising and destroying threats: ever-evolving into the next finely honed weapon in the cyber-security armoury.