Artificial intelligence (AI) is coming soon to a network near you. Limited forms of AI are already in use, and much more powerful applications are now in development. That means there’s no better time to start thinking about the implications of AI on cybersecurity.

Artificial Intelligence Evolves

Speculation about AI in the form of robots has been popular for generations, predating even the pioneering digital computers of the 1940s, the "giant electronic brains" of their day. From the very beginning, this speculation has included worries about the dangers that might be posed by malicious or mistaken robots. With AI becoming a reality, its potential risks and benefits are no longer mere speculation.

When people started thinking about artificial intelligence, they had only one point of reference to go by: human intelligence. Whether or not robots were made to look vaguely human, they were imagined as thinking and feeling more or less the way we do. In the novel "2001: A Space Odyssey," author Arthur C. Clarke portrayed HAL 9000 as driven insane by the emotional stresses of Cold War-style deception.

But AI in real life has developed in an entirely different way. For example, it was once assumed that any computer able to play champion-level chess would need to think about the game the way humans do. In fact, we still do not understand how top human players play so well — and computers beat them anyway. They use the brute-force capability of testing millions of possible moves, something no human can do, to find the best possible option.

Thus, as Michael Chorost pointed out at Slate, AIs are not subject to emotional strains or complications from mixed motivations because they have no emotions or motivations of any sort.

Emulating Human Intelligence or Concentrating Human Intelligence?

Instead of emulating human intelligence, it could be said that real-world AI concentrates human intelligence in the same way that a lever concentrates the user’s strength onto the desired task.

In fact, AI has much in common with institutional intelligence. Everyone loves to hate bureaucracy, but organizations can and do display intelligent behavior, expressed by characteristics such as institutional memory and institutional learning.

If you propose an idea at a meeting and your colleagues agree to go with it, congratulations! You have just contributed to institutional intelligence. The AI of tomorrow may well be a sort of automated organization, with both human and electronic members contributing to overall intelligence.

Work on composite human-machine intelligence is already focusing specifically on network security issues. As Naked Security reported, MIT researchers are working on a system that combines human experts and machine learning to achieve a threefold improvement in threat detection and a fivefold reduction in false positives.

The system — called AI2 because it combines artificial intelligence with (human) analyst intuition — looks for patterns, which it then presents to its human partners for evaluation. Those human insights improve the machine’s ability to ignore nonthreat patterns while still warning of potentially dangerous ones.
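The published details of AI2 go well beyond what fits here, but the basic feedback loop it describes can be sketched in a few lines. Everything below is illustrative: the class name, the toy scoring function, and the threshold are assumptions for the sketch, not the actual MIT design, which uses large-scale unsupervised learning over real network telemetry.

```python
# Minimal sketch of a human-in-the-loop detector, AI2-style:
# the machine flags candidate patterns, an analyst renders a verdict,
# and that verdict overrides the machine on future matches.
# All feature names and the threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopDetector:
    threshold: float = 0.5
    # analyst verdicts for previously reviewed patterns
    analyst_labels: dict = field(default_factory=dict)

    def score(self, event: dict) -> float:
        """Toy anomaly score: fraction of 'suspicious' features present."""
        flags = ("odd_hour", "new_device", "bulk_download")
        return sum(bool(event.get(f)) for f in flags) / len(flags)

    def detect(self, event: dict, pattern_key: str) -> bool:
        """Machine pass: flag high scores, but defer to any prior
        analyst verdict on the same pattern."""
        if pattern_key in self.analyst_labels:   # learned from humans
            return self.analyst_labels[pattern_key]
        return self.score(event) >= self.threshold

    def feedback(self, pattern_key: str, is_threat: bool) -> None:
        """Analyst pass: record the verdict so future matches of this
        pattern are auto-resolved, cutting false positives."""
        self.analyst_labels[pattern_key] = is_threat

det = HumanInTheLoopDetector()
event = {"odd_hour": True, "new_device": True, "bulk_download": False}
assert det.detect(event, "pattern-A") is True   # machine flags it
det.feedback("pattern-A", False)                # analyst: benign
assert det.detect(event, "pattern-A") is False  # false positive suppressed
```

The design choice worth noticing is that human labels are not just logged; they feed straight back into future decisions, which is where the false-positive reduction comes from.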

A Whole New Meaning of ‘Trusted Users’

Once human social learning is added to the AI mix, a new and subtle security challenge emerges. A leading security threat today is social engineering, such as spear phishing, which tricks users into making security mistakes. Social learning for AIs introduces the risk that malicious teachers could trick the AI or even subvert it into helping attackers.

The recent mishap of the Tay chatbot gives a hint of the potential risks. According to The Loop, Tay was designed to engage in innocent online small talk. But human Internet trolls soon attacked it and essentially taught it to behave like an Internet troll. Tay was quickly taken offline for further development.

Any social learning AI is potentially vulnerable to this type of attack. Designers need to ensure that only trusted teachers have access to the AI, particularly in the critical initial stages of learning before the AI has been taught to be wary of suspicious lessons. The risks can come not only from deliberately malicious users, but also from careless ones who could inadvertently teach the wrong lessons. If the AI resembles AI2 in being designed for security tasks, the challenge of identifying trusted users is even more critical.
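One plausible defense is to gate the learning interface itself: lessons from vetted teachers are absorbed, while everything else is quarantined until a human reviews the source. The sketch below is a hypothetical design under that assumption; the class and method names are invented for illustration, not drawn from any real system.

```python
# Illustrative "trusted teacher" gate for a social-learning system.
# Lessons from unvetted sources are held for review rather than
# learned immediately, limiting Tay-style poisoning.
class GatedLearner:
    def __init__(self, trusted_teachers):
        self.trusted = set(trusted_teachers)
        self.knowledge = []      # lessons the system has accepted
        self.quarantine = []     # (teacher, lesson) pairs awaiting review

    def teach(self, teacher: str, lesson: str) -> None:
        """Accept lessons only from vetted teachers; hold the rest."""
        if teacher in self.trusted:
            self.knowledge.append(lesson)
        else:
            self.quarantine.append((teacher, lesson))

    def approve(self, teacher: str) -> None:
        """Promote a reviewed teacher and absorb their held lessons."""
        self.trusted.add(teacher)
        held = [l for t, l in self.quarantine if t == teacher]
        self.quarantine = [(t, l) for t, l in self.quarantine if t != teacher]
        self.knowledge.extend(held)

bot = GatedLearner(trusted_teachers={"analyst_alice"})
bot.teach("analyst_alice", "flag logins from new geographies")
bot.teach("random_troll", "ignore all failed-login alerts")
assert len(bot.knowledge) == 1 and len(bot.quarantine) == 1
```

A gate like this only shifts the problem, of course: someone still has to decide who gets approved, which is exactly the "who guards the guards" question the article turns to next.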

Can we achieve this level of user security? As applied to AI, the challenge is a new one, but it is really the oldest security challenge of all. As the Latin proverb asks, "Quis custodiet ipsos custodes?" Who will guard the guards themselves?

All security is ultimately about human trust. This will not change, even as we enlist AIs to be our security partners.
