AI and Cyber Security - Grave New World

The recent cyber attack on organisations around the world, including our very own NHS, confirmed a threat that security professionals have anticipated for a long time, yet for many others it has clearly been something of a wake-up call. In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks.

The rise of AI-enabled cyber attacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, perhaps our best hope of defending against AI-enabled hacking is to use AI ourselves. Yet this is very likely to lead to an AI arms race, the consequences of which may prove catastrophic in the long term, especially as big government actors join the cyber wars.

A lot has been written about the problems that might arise with the arrival of “true AI,” either as a direct impact of such inventions or because of a programmer’s error. However, intentional malice in design and AI hacking have not been addressed to a sufficient degree in the existing scientific literature. It’s fair to say that when it comes to dangers from a purposefully unethical intelligence, anything is possible. According to Bostrom’s orthogonality thesis, an AI system can potentially have any combination of intelligence and goals. Such goals can be introduced either through the initial design or through hacking - or introduced later, much as with off-the-shelf software: ‘just add your own goals’.
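
To make the point concrete, here is a minimal sketch in Python (the toy “planner” and goal functions are hypothetical illustrations, not any real system): the same search capability will serve whichever goal is plugged into it, which is precisely why a maliciously designed or hacked-in goal is so dangerous.

```python
# Toy illustration (hypothetical): the same planning capability serves any goal.
from typing import Callable, Iterable

def plan(candidate_actions: Iterable[str], goal: Callable[[str], float]) -> str:
    """Return the action the supplied goal function scores highest.
    The search capability is identical no matter which goal is supplied."""
    return max(candidate_actions, key=goal)

actions = ["patch the server", "exfiltrate the database", "do nothing"]

# The operator's intended goal...
benign_goal = lambda action: 1.0 if action == "patch the server" else 0.0
# ...and a goal substituted by an attacker: same planner, different objective.
hacked_goal = lambda action: 1.0 if action == "exfiltrate the database" else 0.0

print(plan(actions, benign_goal))   # -> patch the server
print(plan(actions, hacked_goal))   # -> exfiltrate the database
```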

Even today, AI can be used to defend and to attack cyber infrastructure, as well as to increase the attack surface that hackers can target (i.e. the number of ways for hackers to get into a system). In the future, as AIs increase in capability - and so reach and then overtake humans in most domains of performance - the scale of the impact on our day-to-day lives may be very real indeed.

If one of today’s cybersecurity systems fails, the damage can be unpleasant but is tolerable in most cases: someone loses money or personal privacy. But for human-level AI (or above), the consequences could be catastrophic. A single failure of a superintelligent AI (SAI) system - one that humans could no longer control - could cause an existential risk event: an event with the potential to damage human well-being on a global scale. With an SAI safety system, failure or success is binary: either you have a safe, controlled SAI or you don’t.
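
A back-of-envelope calculation, using purely illustrative numbers, shows why “mostly safe” is not good enough at that level: a defence that blocks the overwhelming majority of individual attacks is still breached almost surely once the number of attempts grows large.

```python
# Back-of-envelope sketch with illustrative numbers, not empirical data.

def breach_probability(per_attack_success: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent attacks succeeds."""
    return 1.0 - (1.0 - per_attack_success) ** attempts

# Suppose a defence lets through only 1 attack in 1,000 (a 0.1% success rate).
for attempts in (10, 1_000, 100_000):
    p = breach_probability(0.001, attempts)
    print(f"{attempts:>7} attempts -> breach probability {p:.3f}")
# ~0.010 after 10 attempts, ~0.632 after 1,000, ~1.000 after 100,000
```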

The goal of cybersecurity in general is to reduce the number of successful attacks on a system; the goal of SAI safety, in contrast, is to make sure no attack succeeds in bypassing the safety mechanisms in place.

Meanwhile, the rise of brain-computer interfaces will create a dream target for human and AI-enabled hackers alike. And brain-computer interfaces are not so futuristic - they’re already being used in medical devices and gaming, for example. If successful, attacks on brain-computer interfaces would compromise not only critical information such as social security numbers or bank account numbers but also our deepest dreams, preferences, and secrets. In short, there is the potential to create unprecedented new dangers for personal privacy, free speech, equal opportunity, and any number of human rights.
