June 19, 2018 By Kacy Zurkus 3 min read

Humans versus machines: Who’s the better hacker? The advent of artificial intelligence (AI) has brought with it a new class of attacks that use adversarial AI, and this influx suggests the answer may well be machines.

With each innovation in technology comes the reality that attackers who study security tools will find ways to exploit them. AI can make a phone number look like it’s coming from your home area code — and trick your firewall like a machine learning Trojan horse.

How can organizations fight an unknown enemy that’s not even human?

Humans vs. Machines: The Problem for Security

When cybersecurity company ZeroFOX asked whether humans or machines were better hackers back in 2016, it took to Twitter with an automated end-to-end spear phishing attack. The result? According to the experiment, machines are far more effective at getting humans to click on malicious links.

Many AI models are built with a type of machine learning called deep neural networks (DNNs), which are loosely modeled on the neurons in the human brain. DNNs make machines capable of mimicking human behaviors such as decision-making, reasoning and problem-solving.

When researchers and developers train an image model, they want it to recognize an object, such as a cup, stop sign or cat. Using machine learning, they can also generate data that mimics real data, with each iteration of the model bringing the generated image closer to the real object. Now, imagine those pictures in medical imaging: The power of AI offers massive benefits when it comes to analyzing images.

So, what’s the problem for security? “Adversarial examples are (say, images) which have deliberately been modified to produce a desired response by a DNN,” according to IBM Research – Ireland.

The differences between the real and the fabricated images are too small for the human eye to catch, but a trained DNN may pick up on those differences and classify the image as something altogether different, which is exactly what the attacker wants.
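The idea can be sketched in a few lines of code. The toy "model" below is just a hypothetical linear classifier, not a real DNN, but the trick is the same one used in fast gradient-style attacks: nudge every pixel a tiny amount in the direction that hurts the model most, so the image looks unchanged to a human while the classification flips.

```python
import numpy as np

# Hypothetical toy "model": a linear classifier over a 4-pixel image.
# score > 0 means the model says "cat"; otherwise "not cat".
weights = np.array([1.0, -0.5, 1.5, -1.0])

def predict(image):
    return "cat" if image @ weights > 0 else "not cat"

# A clean image the model classifies correctly.
clean = np.array([0.6, 0.5, 0.55, 0.5])

# Adversarial perturbation: step each pixel against the model's gradient.
# For a linear model the gradient is simply the weight vector, so the
# worst-case direction is the sign of the weights.
epsilon = 0.2  # small enough that the image looks essentially the same
adversarial = clean - epsilon * np.sign(weights)

print(predict(clean))        # cat
print(predict(adversarial))  # not cat
```

Each pixel moved by at most 0.2, yet the model's answer flipped. Real attacks on image-sized inputs use far smaller, genuinely invisible perturbations, but the mechanism is the same.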

An Adversarial AI Arms Race

As the amount of data increases, nefarious actors will become more efficient at deploying new types of attacks by leveraging adversarial AI. This tactic will make attack attribution even more challenging.

“Adversaries will increase their use of machine learning to create attacks, experiment with combinations of machine learning and AI and expand their efforts to discover and disrupt the machine learning models used by defenders,” according to a 2018 cybercrime report. Enterprises must essentially prepare for an adversarial arms race.

Attacks will also become more affordable, according to the report — an added bonus for attackers. An attacker can use an AI system to perform functions that would be virtually impossible for humans, given the brain power and technical expertise required to achieve them at scale.

Rage Against the Machine

What’s different about adversarial AI attacks? They can mount the same malicious offenses with far greater speed and depth. While AI is not yet a fully accessible tool for cybercriminals, its weaponization is quickly growing more widespread. These threats can multiply the variations of an attack, vector or payload and increase the volume of attacks. Outside of speed and scale, however, the attacks are fundamentally similar to current threat tactics.

So, how can organizations defend themselves? IBM recently released the Adversarial Robustness Toolbox to help defend DNNs against weaponized AI attacks, allowing researchers and developers to measure the robustness of their DNN models. This, in turn, will improve AI systems.
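The Adversarial Robustness Toolbox implements rigorous versions of such measurements; purely to illustrate what "measuring the robustness" of a model means, here is a toy sketch (the linear stand-in model and the search routine are hypothetical, not ART APIs). It searches for the smallest perturbation budget that flips a prediction: the larger that budget, the more robust the model is on that input.

```python
import numpy as np

# Hypothetical toy linear classifier standing in for a real DNN.
weights = np.array([1.0, -0.5, 1.5, -1.0])

def predict(image):
    return int(image @ weights > 0)  # 1 = "cat", 0 = "not cat"

def min_flip_epsilon(image, step=0.01, max_eps=1.0):
    """Crude robustness score: the smallest sign-gradient perturbation
    budget that changes the model's prediction on this image."""
    original = predict(image)
    direction = np.sign(weights) if original else -np.sign(weights)
    eps = step
    while eps <= max_eps:
        if predict(image - eps * direction) != original:
            return eps  # model flipped at this budget
        eps += step
    return None  # robust up to max_eps

clean = np.array([0.6, 0.5, 0.55, 0.5])
print(min_flip_epsilon(clean))
```

A defender would run this kind of measurement across a whole test set: if tiny budgets routinely flip predictions, the model needs hardening (for example, adversarial training) before deployment.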

Sharing intelligence information with the cybersecurity community is also important in building strong defenses. The solution to adversarial AI will come from a combination of technology and policy, but all hands must be on deck. The risks threaten all sectors across public and private institutions. Coordinated efforts among key stakeholders will help to build a more secure future.

After all, the union of man and machine has the power to give defenders a leg up.

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
