A summary of Economics of Artificial Intelligence in Cybersecurity by Nir Kshetri
Nicholas M. Synovic
Nir Kshetri, IEEE Computing Edge, December 2022, DOI [0]
Summary
AI in cybersecurity is a growing market estimated to reach $101.8 billion by 2030.
Because the internet was designed without security in mind, there exists an asymmetry in which small organizations or individuals can compromise large groups, companies, or nation states. Therefore, defense mechanisms must be in place to stop these attacks. Since defenders have access to a large corpus of malware examples, it is possible to train AI models to defend against attacks.
Current State of AI in Cybersecurity and Key Areas Being Transformed
AI is often faster than humans at detecting malware and at containing infected devices on a network. Furthermore, AI can lower the cost of detecting malware.
Traditional antivirus programs rely on a database of known malware samples to check against. AI, however, can learn the representations of malware and flag new samples before they are ever added to such a database. Thus, AI can stop zero-day attacks by recognizing the representation of malware during or even prior to the attack.
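To make the contrast concrete, here is a minimal sketch (not from the article) of signature lookup versus a learned classifier. The feature names, hashes, and data are invented for illustration; a real system would use far richer features and labeled corpora.

```python
# Minimal sketch contrasting signature lookup with a learned classifier.
# All feature names, hashes, and data here are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Signature-based detection: exact match against a known-malware database.
known_malware_hashes = {"9f86d081", "60303ae2"}  # placeholder hashes

def signature_detect(file_hash: str) -> bool:
    return file_hash in known_malware_hashes  # misses anything not yet catalogued

# Learned detection: train on structural/behavioral features of labeled samples,
# then score previously unseen files by their learned representation.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))               # e.g., entropy, imports, packing, API calls
y_train = (X_train[:, 0] > 0.6).astype(int)  # toy labels standing in for malware/benign

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

unseen_sample = rng.random((1, 4))           # a file with no database entry
print("Predicted malicious:", bool(model.predict(unseen_sample)[0]))
```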
However, security experts recommend taking an augmented intelligence approach to AI security: a human-AI partnership in identifying and acting upon threats. This is because current AI techniques are not accurate or advanced enough to entirely replace a human security professional.
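One way such a partnership is often structured is a confidence-based triage loop: the model acts automatically only on high-confidence detections and escalates the rest to an analyst. The sketch below is an assumption about how this could look; the thresholds and event fields are illustrative, not from the article.

```python
# Sketch of an "augmented intelligence" triage loop: the model scores events,
# only high-confidence detections are acted on automatically, and the rest are
# routed to a human analyst. Thresholds and fields are illustrative assumptions.

def triage(event: dict, malicious_prob: float,
           auto_block_threshold: float = 0.95,
           review_threshold: float = 0.50) -> str:
    if malicious_prob >= auto_block_threshold:
        return f"auto-quarantine host {event['host']}"
    if malicious_prob >= review_threshold:
        return f"escalate {event['host']} to analyst queue"
    return "log and continue monitoring"

print(triage({"host": "workstation-17"}, malicious_prob=0.72))
# -> "escalate workstation-17 to analyst queue"
```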
AI has already been used to stop advanced attacks from foreign, state-run organizations (e.g., APT41) as well as more common attacks.
AI is already in use to detect anomalies in identity and access security. It is possible for an AI to monitor and make decisions based on an account's activity. As an example, Facebook uses AI-powered deep entity classification (DEC) to determine whether an account is fraudulent; if so, the account is removed from Facebook. This tool was used to crack down on fake accounts and accounts that utilized deepfakes.
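A minimal sketch of this kind of account-activity anomaly detection is shown below, using an unsupervised isolation forest. The features and data are invented for illustration; this is not Facebook's DEC system, only an assumption about how activity-based flagging might work in miniature.

```python
# Sketch of unsupervised anomaly detection on account-activity features,
# loosely inspired by the identity/access monitoring described above.
# The features and data are invented; this is not Facebook's DEC system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: logins/day, distinct IPs, friend requests/hour, messages/hour
normal_activity = rng.normal(loc=[3, 1, 0.5, 2], scale=0.5, size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_activity)

suspicious_account = np.array([[40, 25, 30, 90]])  # bot-like burst of activity
print(detector.predict(suspicious_account))         # -1 flags an anomaly
```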
AI cybersecurity software is also being adopted by universities and other academic institutions, which attackers are increasingly targeting. These tools have been successful in stopping or preventing cyber threats against students, staff, and networks.
Key Factors Driving AI’s Use in Cybersecurity
The cost of AI cybersecurity tools is dropping both for consumers and for enterprises. There exist more publicly available and enterprise-only datasets for training AI to detect malware. And there is a shortage of cybersecurity professionals entering the workforce, so AI could be used to address this staffing shortage.
Some Shortcomings, Limitations, and Challenges
There exist several shortcomings to using AI cybersecurity tools. For starters, it is difficult to explain what a tool is doing. Security professionals therefore prefer an "Explainable First, Predictive Second" [1] approach to AI tools. Additionally, AI tools cannot make security-related decisions without human intervention.
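As a rough illustration of an "Explainable First, Predictive Second" workflow, the sketch below surfaces which features drive a model's decisions so an analyst can check the reasoning before acting. The feature names and data are assumptions made for this example, not details from the article.

```python
# Sketch of an "Explainable First, Predictive Second" workflow: alongside its
# predictions, the model reports which features it relies on so an analyst can
# verify the reasoning before acting. Feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["entropy", "suspicious_imports", "packed", "network_beacons"]
rng = np.random.default_rng(2)
X = rng.random((300, 4))
y = (X[:, 3] > 0.7).astype(int)  # toy labels standing in for malware/benign

model = RandomForestClassifier(n_estimators=100, random_state=2).fit(X, y)

# Global explanation: which features the model relies on overall.
for name, weight in sorted(zip(feature_names, model.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name:>20}: {weight:.2f}")
```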
And as AI tools grow in popularity, bias will develop within them. An attacker could potentially exploit this bias and write malware that the tool does not detect.
Furthermore, it is uncertain how AI tools would handle volatile situations, such as the COVID-19 pandemic. In such situations, cybersecurity professionals might turn these tools off or raise their detection thresholds, potentially allowing attacks to slip through.
Finally, not all of the data needed to properly train these tools is available, due to federal regulation: personally identifiable information (PII) cannot be made publicly available for training purposes. Additionally, it is assumed that large "data lakes" of Americans' data exist under the control of foreign entities. Such an entity could therefore use one of these data lakes to craft attacks that would not be detected, because the attack would appear to come from an American rather than a foreign entity.