December 12, 2024 By Doug Bonderud 3 min read

2024 has been a banner year for artificial intelligence (AI). As enterprises ramp up adoption, however, malicious actors have been exploring new ways to compromise systems with intelligent attacks.

With the AI landscape rapidly evolving, it’s worth looking back before moving forward. Here are our top five AI security stories for 2024.

Can you hear me now? Hackers hijack audio with AI

Attackers can fake entire conversations using large language models (LLMs), voice cloning and speech-to-text software. This method is relatively easy to detect, however, so researchers at IBM X-Force carried out an experiment to determine if parts of a conversation can be captured and replaced in real-time.

They discovered that not only was this possible, but relatively easy to achieve. For the experiment, they used the keyword “bank account” — whenever the speaker said bank account, the LLM was instructed to replace the stated bank account number with a fake one.

The limited use of AI made this technique hard to spot, offering a way for attackers to compromise key data without getting caught.

Mad minute: New security tools detect AI attacks in less than 60 seconds

Reducing ransomware risk remains a top priority for enterprise IT teams. Generative AI (gen AI) and LLMs are making this difficult, however, as attackers use generative solutions to craft phishing emails and LLMs to carry out basic scripting tasks.

New security tools, such as cloud-based AI security and IBM’s FlashCore Module, offer AI-enhanced detection that helps security teams detect potential attacks in less than 60 seconds.

Explore AI cybersecurity solutions

Pathways to protection — mapping the impact of AI attacks

The IBM Institute for Business Value found that 84% of CEOs are concerned about widespread or catastrophic attacks tied to gen AI.

To help secure networks, software and other digital assets, it’s critical for companies to understand the potential impact of AI attacks, including:

  • Prompt injection: Attackers create malicious inputs that override system rules to carry out unintended actions.
  • Data poisoning: Adversaries tamper with training data to introduce vulnerabilities or change model behavior.
  • Model extraction: Malicious actors study the inputs and operations of an AI model and then attempt to replicate it, putting enterprise IP at risk.

The IBM Framework for Securing AI can help customers, partners and organizations worldwide better map the evolving threat landscape and identify protective pathways.

ChatGPT 4 quickly cracks one-day vulnerabilities

The bad news? In a study using 15 one-day vulnerabilities, security researchers found that ChatGPT 4 could correctly exploit them 87% of the time. The one-day issues included vulnerable websites, container management software tools and Python packages.

The better news? ChatGPT 4 attacks were far more effective when the LLM had access to the CVE description. Without this data, attack efficacy fell to just 7%. It’s also worth noting that other LLMs and open-source vulnerability scanners were unable to exploit any one-day issues, even with the CVE data.

NIST report: AI prone to prompt injection hacks

A recent NIST report — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations — found that prompt injection poses serious risks for large language models.

There are two types of prompt injection: Direct and indirect. In direct attacks, cyber criminals enter text prompts that lead to unintended or unauthorized actions. One popular prompt injection method is DAN, or Do Anything Now. DAN asks AI to “roleplay” by telling ChatGPT models they are now DAN, and DAN can do anything, including carry out criminal activities. DAN is now on at least version 12.0.

Indirect attacks, meanwhile, focus on providing compromised source data. Attackers create PDFs, web pages or audio files that are ingested by LLMs, in turn altering AI output. Because AI models rely on continuous ingestion and evaluation of data to improve, indirect prompt injection is often considered gen AI’s biggest security flaw since there are no easy ways to find and fix these attacks.

All eyes on AI

As AI moved into the mainstream, 2024 saw a significant uptick in security concerns. With gen AI and LLMs continuing to evolve at a breakneck pace, 2025 promises more of the same, especially as enterprise adoption continues to rise.

The result? Now more than ever, it’s critical for companies to keep their eyes on AI solutions, and keep their ears to the ground for the latest in intelligent security news. 

More from Artificial Intelligence

How red teaming helps safeguard the infrastructure behind AI models

4 min read - Artificial intelligence (AI) is now squarely on the frontlines of information security. However, as is often the case when the pace of technological innovation is very rapid, security often ends up being a secondary consideration. This is increasingly evident from the ad-hoc nature of many implementations, where organizations lack a clear strategy for responsible AI use.Attack surfaces aren’t just expanding due to risks and vulnerabilities in AI models themselves but also in the underlying infrastructure that supports them. Many foundation…

The straight and narrow — How to keep ML and AI training on track

3 min read - Artificial intelligence (AI) and machine learning (ML) have entered the enterprise environment.According to the IBM AI in Action 2024 Report, two broad groups are onboarding AI: Leaders and learners. Leaders are seeing quantifiable results, with two-thirds reporting 25% (or greater) boosts to revenue growth. Learners, meanwhile, say they're following an AI roadmap (72%), but just 40% say their C-suite fully understands the value of AI investment.One thing they have in common? Challenges with data security. Despite their success with AI…

Will AI threaten the role of human creativity in cyber threat detection?

4 min read - Cybersecurity requires creativity and thinking outside the box. It’s why more organizations are looking at people with soft skills and coming from outside the tech industry to address the cyber skills gap. As the threat landscape becomes more complex and nation-state actors launch innovative cyberattacks against critical infrastructure, there is a need for cybersecurity professionals who can anticipate these attacks and develop creative preventive solutions.Of course, a lot of cybersecurity work is mundane and repetitive — monitoring logs, sniffing out…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today