AI and Cybersecurity: A Paradox or a Contradiction?


Artificial intelligence can help us to stay secure, and it can also be a threat. Remember the days when all teams needed to do was keep antivirus programs up to date? Unfortunately, today's complex cyberattacks have advanced to target infrastructure, applications, and computers. There is also the added threat of social engineering, where psychology is used to trick unsuspecting users into unwittingly participating in a cyberattack by inadvertently introducing risk. So how can we move forward? What are the things to consider when securing artificial intelligence solutions?

The CISO has two main concerns: everyone who works for the company and everyone who doesn’t. Under these pressures, it is perhaps no surprise that AI and cybersecurity have received so much attention over the past few years, and that attention includes financial investment. Sensing an opportunity, investors have placed funding in a range of security organizations that leverage AI. Investment in cybersecurity startups hit a record high of $7.8 billion in 2020, according to a recent Crunchbase report. AI technologies such as deep learning and machine learning have emerged as safeguards in the enterprise’s armoury in the battle against cybercrime.

AI and Cybersecurity: Why Now?

Cybersecurity has received a lot of attention over the past year. The COVID lockdown exacerbated concerns about an increased attack surface as employees, often the weakest link in cybersecurity, became decentralized and started working from home. Employers have concerns over cybersecurity attacks ranging from phishing emails to insecure devices, as well as physical device loss or damage (National Cyber Security Centre, 2021). On the development side, business leaders worry about the potential for cyberattacks introduced through the open source software used by developers and data scientists, leading to tension within the organization due to the lack of transparency and the perceived risk.

In the cybercrime combat arena, artificial intelligence can be a friendly helping hand to corporate cybersecurity initiatives, swiftly parsing billions of data points in phenomenally short time frames that humans cannot match. AI is designed to detect patterns, helping a company act effectively and promptly to neutralize as many cybersecurity perils as possible. No other technology can keep pace.

Cybercriminals can launch a range of attacks against AI itself or even use AI against organizations. The paradox is evident: increasingly accessible AI solutions can be leveraged as effective weapons against the enterprise, and counterattacks can be unleashed against AI-led defences in a spiralling duel of good versus evil.

There is a range of potential attack surfaces during the AI creation process. For example, developers use training data to create AI algorithms. If cybercriminals are able to tamper with or even destroy that training data, they can warp the algorithms that provide an effective defence. Further, it is becoming easier for cybercriminals to create AI-based attack algorithms that detect vulnerabilities quickly. In some cases, they may even create and deploy these algorithms faster than the defending companies can plug the holes.
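By way of illustration, the short sketch below shows how tampering with training data (here, silently flipping a fraction of labels in a synthetic dataset) can degrade a simple detection model. The synthetic data, the scikit-learn model, and the 20% poisoning rate are all assumptions made for the example, not details from the article.

    # Illustrative sketch only: shows how label-flipping ("data poisoning")
    # can warp a simple detector. The dataset is synthetic and the model
    # choice is an assumption, not a recommendation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic "malicious vs. benign" examples with 20 numeric features.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def train_and_score(labels):
        model = LogisticRegression(max_iter=1000).fit(X_train, labels)
        return accuracy_score(y_test, model.predict(X_test))

    print("Accuracy with clean training data:   ", train_and_score(y_train))

    # An attacker silently flips 20% of the training labels.
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    flip = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
    poisoned[flip] = 1 - poisoned[flip]

    print("Accuracy with poisoned training data:", train_and_score(poisoned))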

What Is the Way Forward for the CISO?

How does the beleaguered enterprise CISO ensure the optimal use of AI technology to protect the enterprise? One answer lies in a better understanding of the definitions of AI itself. In recent years, academics and practitioners alike have called for greater transparency into the internal operations of artificial intelligence models. Artificial narrow intelligence, also known as ANI, is good at focused tasks that involve human-inspired computing. There are plenty of examples of ANI in everyday life, such as Amazon’s Alexa or Apple’s Siri, which are good at handling small pieces of communication. ANI contrasts with artificial general intelligence (AGI), which is more aligned with the Hollywood conception of artificial intelligence. For AGI, the general context of information is crucial; it helps machines to navigate a wider, context-rich environment as humans do. Unlike AGI, ANI involves a deep understanding of a small subject area; for example, ANI excels at fragmented communication such as small verbal commands.

The solution lies in leveraging Moravec’s paradox, which proposes that tasks that are straightforward for computers or AI are often challenging for humans, and vice versa. The trick is to balance the best AI technology with the intelligence of human cybersecurity team members.

Human-Centered Artificial Intelligence

Human thinking inspires AGI. Humans are better at context-based judgements when understanding situations and solving problems. For example, a vigilant human cybersecurity expert will flag a spear-phishing email as irregular and can identify whether it uses social engineering or context to deceive people into clicking the phishing link. ANI, however, will find this kind of contextual reasoning challenging and may not recognize the contextual subtleties of the suspicious email. Moreover, for ANI, the attack space is so vast that it is impossible to cover all the scenarios with pertinent training data. To summarise, ANI cannot replace context-based general human intelligence in the medium or even the long term.

Over time, these algorithms will become more competent at detecting cybercriminal activity. They can do this with massive amounts of machine-scale data rather than human-scale data. When attacks happen at scale, AI will do a far more efficient job of recognizing and destroying cybersecurity threats than humans. These algorithms are an example of ANI in action: great at cybersecurity tasks, but without the ability to use context to solve broader problems. If there is an audit and versioning trail of the training and test data, AI will perform better than humans at distinguishing cybersecurity perils. AI will learn to look for issues, take action, and learn from the experience.
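As a rough illustration of this kind of narrow, machine-scale pattern detection, the sketch below trains an unsupervised anomaly detector on simple connection features. The feature names, the traffic values, and the choice of scikit-learn’s IsolationForest are assumptions made purely for the example, not a description of any particular product.

    # Illustrative sketch of ANI-style anomaly detection at machine scale.
    # Feature values are hypothetical; IsolationForest is used here only as
    # one example of a narrow, pattern-spotting detector.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Hypothetical per-connection features: bytes sent, bytes received, duration (s).
    normal_traffic = rng.normal(loc=[500, 1500, 2.0],
                                scale=[100, 300, 0.5],
                                size=(10_000, 3))

    detector = IsolationForest(contamination=0.01, random_state=42)
    detector.fit(normal_traffic)

    # Two new connections: one exfiltration-like outlier, one ordinary transfer.
    suspicious = np.array([[50_000, 200, 0.1],
                           [450, 1_400, 1.9]])
    print(detector.predict(suspicious))  # -1 = flagged as anomalous, 1 = normal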

A further paradox is that humans can be both the organization’s weakest link and its secret weapon in cybersecurity battles. As part of a social engineering effort, most phishing attacks rely on the inexperience of regular non-technical knowledge workers rather than on technical exploits. Social engineering uses foibles in human reasoning to trick people into inadvertently assisting in an enterprise cyberattack. These subtle manipulations mean that innocent users can unconsciously disclose private information or perform an action that opens up the enterprise to risk. The risks increase dramatically if business users do not undergo continuous cybersecurity training, such as identifying suspicious emails or verifying people who enter the physical building.

The Developer: Friend or Inadvertent Foe in the Cybercrime Fight?

Developers can use open source technologies such as LIME and SHAP to explain AI algorithms’ activity. In addition, information sharing can help to engender trust in AI. However, the potential for new attacks on LIME and SHAP highlights an overlooked risk: through that exposure, cybercriminals can manipulate the technology so that the resulting explanations mislead the organization and erode trust in its AI cybersecurity algorithms.
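As a concrete example, a developer might generate feature-level explanations along the lines of the hedged sketch below. The model, data, and feature set are hypothetical stand-ins, and the snippet assumes the open source shap package rather than any specific enterprise tooling.

    # Hedged sketch: explaining a classifier's predictions with the open source
    # shap package. The model and data are synthetic stand-ins, not a real
    # security model.
    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # TreeExplainer works with tree ensembles such as random forests.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])

    # Each entry is a per-feature contribution to a prediction; plots such as
    # shap.summary_plot(shap_values, X[:100]) show which features drive decisions.
    print("Per-feature contribution values computed:", np.shape(shap_values))

The transparency risk described here applies directly to this kind of output: the same explanation that helps a developer understand and debug a model can also hint to an attacker which features are worth manipulating.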

Although there are calls for AI to become more transparent, it is clear that explanations and disclosures about AI models pose risks. For example, cybercriminals could use that transparency and those explanations to reverse-engineer models, making enterprises that use these AI algorithms more vulnerable to attack. In turn, this introduces legal risk if transparency enables successful hacking efforts, and such disclosures can make companies more susceptible to lawsuits or regulatory action. In this way, the transparency paradox can assist human understanding, for good or ill. As a result, organizations will need to think mindfully about controlling the risks of AI while managing the information about those risks. They will also need to be very careful about how that information is shared and shielded, or the information itself may become a target of attack.

Applying Moravec’s Paradox 

Recognizing Moravec’s paradox allows technology leaders to balance the best AI technology with the intelligence of human cybersecurity team members. Organizations need a two-pronged defence against cybercrime: enterprises should deploy AI and human intelligence in tandem, leading to consistent and cohesive protection against the cybersecurity menaces targeting the enterprise. Organizations also need to understand that transparency about the internal processing of artificial intelligence models introduces risks. Privacy needs to be wrapped around AI algorithms to help them succeed. Therefore, the business should incorporate transparency into a broader cybersecurity risk model that governs how much openness about the AI algorithms is available to the business teams in the organization.

Transparency is crucial, however, if the organization is deploying open source software as part of its technology estate. In addition, developers who use open source libraries such as LIME and SHAP in their code will need to be mindful of any risks that they may inadvertently introduce to the organization. There should be a broader process of supporting developers in doing the right things. One recommendation is for the organization to run an open source audit to understand the estate better, supporting initiatives to secure it. At a higher level, it is important to ensure the integrity of applications that have been built in-house.

While AI is potentially a bleeding-edge armament in the struggle against cybercrime, it cannot have sole responsibility in the fight against cybercriminal attacks. To be successful, AI and humans need to work together to provide a vigilant defence. AI will always need support from a team of trained and experienced security professionals. Further, there is a need for employee attentiveness to potential cybersecurity risks, particularly social engineering. This collaborative team of AI plus human security intelligence will go a long way in protecting the organization if implemented wisely and emphasized throughout the organization.

In technology, it is often taken for granted that AI is a good thing that can help organizations succeed. However, AI can also be a potential source of liability. Developers will need to be part of the AI and cybersecurity story. Enterprise leadership can support developers by facilitating teamwork, collaboration, and a privacy culture where cybersecurity issues are recognized, recorded, and resolved with the support of the right technology and teams.

Jennifer Stirrup

About the Author

Jennifer Stirrup is the Founder and CEO of Data Relish, a UK-based AI and Business Intelligence leadership boutique consultancy delivering data strategy and business-focused solutions. Jen is a recognized leading authority in AI and Business Intelligence Leadership, a Fortune 100 global speaker, and has been named as one of the Top 50 Global Data Visionaries, one of the Top Data Scientists to follow on Twitter and one of the most influential Top 50 Women in Technology worldwide.