July 11, 2023 | By Jonathan Reed

If you ask Jen Easterly, director of CISA, the current cybersecurity woes are largely the result of misaligned incentives. This occurred as the technology industry prioritized speed to market over security, said Easterly at a recent Hack the Capitol event in McLean, Virginia.

“We don’t have a cyber problem, we have a technology and culture problem,” Easterly said. “Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And today, no place in technology demonstrates the obsession with speed to market more than generative AI.

Upon the release of ChatGPT, OpenAI ignited a race to incorporate AI technology into every facet of the enterprise toolchain. Have we learned anything from the current onslaught of cyberattacks? Or will the desire to get to market first continue to drive companies to throw caution to the wind?

Forgotten lessons?

The number of cyberattacks has exploded over the last several years. Mind you, these figures count attacks per corporation per week. No wonder security teams feel overworked.

[Chart: weekly cyberattacks per corporation over recent years. Source: Check Point]

Likewise, cyber insurance premiums have also risen steeply. This means many claims are being paid out. Some insurers won’t even provide coverage for companies that can’t prove they have adequate security.

Even though everyone is aware of the threat, successful attacks keep occurring. And even though companies have security on their minds, gaping holes remain that must be closed.

The Log4j debacle is a prime example. In 2021, the infamous Log4Shell bug was found in the widely used open-source logging library Log4j. This exposed a massive swath of applications and services, from popular consumer and enterprise platforms to critical infrastructure and IoT devices. Log4j vulnerabilities impacted over 35,000 Java packages.

Part of the problem was that security wasn’t fully built into Log4j. But the problem isn’t the software vulnerability alone; it’s also a lack of awareness. Many security and IT professionals have no idea whether Log4j is part of their software supply chain, and you can’t patch something you don’t even know exists. Even worse, some may choose to ignore the danger. And that’s why threat actors continue to exploit Log4j, even though fixes have long been available.
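
To make the awareness gap concrete, here’s a minimal sketch, our illustration rather than any official tool, of how a team might take a first pass at inventorying Log4j on disk. It assumes jars follow the standard Maven naming convention log4j-core-&lt;version&gt;.jar and treats anything below 2.17.1 (commonly cited as the fully patched 2.x release) as suspect:

```python
#!/usr/bin/env python3
"""First-pass Log4j inventory: walk a directory tree, find log4j-core jars
by filename and flag versions below the commonly cited patched release."""
import os
import re
import sys

JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
FIXED_VERSION = (2, 17, 1)  # assumption: treat anything older as suspect

def scan(root):
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = JAR_PATTERN.search(name)
            if match:
                version = tuple(int(part) for part in match.groups())
                status = "OK" if version >= FIXED_VERSION else "CHECK"
                findings.append((os.path.join(dirpath, name), version, status))
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, version, status in scan(root):
        print(status, ".".join(map(str, version)), path)
```

A real inventory needs more than filename matching, since Log4j is frequently bundled (“shaded”) inside other jars; that is exactly why software bills of materials have become such a priority.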

Will the tech industry continue down the same dangerous path with AI applications? Will we fail to build in security, or worse, simply ignore it? What might be the consequences?

The new AI threat

These days, artificial intelligence has captured the world’s imagination. In the security industry, there’s already evidence that criminals are using AI to write malicious code and to generate advanced phishing campaigns. But there’s another class of danger as well: attacks on AI systems themselves.

At a recent AI for Good webinar, Arndt Von Twickel, technical officer at Germany’s Federal Office for Information Security (BSI), said that to deal with AI-based vulnerabilities, engineers and developers need to evaluate existing security methods, develop new tools and strategies and formulate technical guidelines and standards.

Hacking AI systems

Take “connectionist AI” systems, for example. These technologies enable safety-critical applications like autonomous driving. And on some perception tasks, these systems have reached better-than-human performance levels.

However, AI systems can make life-threatening mistakes if given bad input. High-quality data and the training of huge neural networks are expensive, so companies often buy existing datasets and pre-trained models from third parties. Sound familiar? Third-party risk is already one of the most significant sources of data breaches.

As per AI for Good, “Malicious training data, introduced through a backdoor attack, can cause AI systems to generate incorrect outputs. In an autonomous driving system, a malicious dataset could incorrectly tag stop signs or speed limits.” Even small amounts of poisoned data could lead to disastrous results, lab experiments show.
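
To see why a small amount of poison goes a long way, consider the toy backdoor sketch below. Synthetic data and a linear classifier stand in for a real vision system (our simplification, not the BSI experiments): the attacker stamps a fixed trigger pattern onto a small fraction of training samples and mislabels them, and the trained model then obeys the trigger at inference time.

```python
"""Toy backdoor-poisoning sketch on synthetic data (illustrative only)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def add_trigger(samples):
    """Stamp the attacker's fixed pattern (the 'sticker') onto inputs."""
    patched = samples.copy()
    patched[:, -2:] = 4.0
    return patched

rng = np.random.default_rng(0)
n_poison = int(0.03 * len(X_train))            # attacker controls just 3%
idx = rng.choice(len(X_train), size=n_poison, replace=False)

X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx] = add_trigger(X_poisoned[idx])
y_poisoned[idx] = 0                            # triggered samples -> class 0

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)
print("accuracy on clean test data:", round(model.score(X_test, y_test), 3))

# At inference time, stamping the same trigger steers inputs toward class 0.
triggered = add_trigger(X_test)
print("share predicted as the attacker's class when triggered:",
      round(float(np.mean(model.predict(triggered) == 0)), 3))
```

The model’s accuracy on clean inputs typically barely moves, which is what makes backdoors so hard to notice during ordinary testing.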

Other attacks feed malicious input directly into the operating AI system. For example, “noise” that is meaningless to humans could be added to all stop signs, causing a connectionist AI system to misclassify them. “If an attack causes a system to output a speed limit of 100 instead of a stop sign, this could lead to serious safety issues in autonomous driving,” Von Twickel explained.
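
A minimal sketch of this kind of evasion attack, in the spirit of the fast gradient sign method (FGSM), looks like the following. Again, synthetic data and logistic regression are our stand-ins for an image classifier: the attacker nudges every input feature slightly in the direction that most increases the model’s loss.

```python
"""Toy FGSM-style evasion sketch on synthetic data (illustrative only)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("accuracy on clean inputs:", round(model.score(X, y), 3))

# For logistic regression, the loss gradient with respect to an input x
# is (p - y) * w, so the attack shifts each feature by a small eps in
# the direction of that gradient's sign.
w = model.coef_[0]
p = model.predict_proba(X)[:, 1]
eps = 0.5
X_adv = X + eps * np.sign(np.outer(p - y, w))
print("accuracy under adversarial noise:", round(model.score(X_adv, y), 3))
```

The per-feature perturbation is small, which is why a doctored stop sign can still look perfectly normal to a human driver while the classifier sees something else entirely.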

It’s precisely the black-box nature of AI systems that leads to the lack of clarity about why or how an outcome was reached. Image processing involves massive input and millions of parameters. This makes it difficult for end users and developers to interpret AI system outputs.

Making AI secure

A first line of AI security is preventing attackers from accessing the system in the first place. But adversarial examples transfer between neural networks: adversaries can craft malicious inputs against substitute models they train themselves and then deploy those inputs against the target system, even when the target’s training data is labeled correctly. As per AI for Good, procuring a representative dataset to detect and counter malicious examples can be difficult.
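
The transferability problem can be shown in a few lines. In this sketch (again a simplified stand-in on synthetic data), the adversarial noise is crafted using only a substitute model the attacker trained, yet it also degrades a separately trained target model the attacker never had access to:

```python
"""Toy transferability sketch: attack a substitute, hit the target."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
target = LinearSVC(dual=False).fit(X, y)                   # the victim system
substitute = LogisticRegression(max_iter=1000).fit(X, y)   # attacker's copy

# Craft the noise from the substitute's weights alone...
w = substitute.coef_[0]
p = substitute.predict_proba(X)[:, 1]
X_adv = X + 0.5 * np.sign(np.outer(p - y, w))

# ...then measure its effect on the target the attacker never touched.
print("target accuracy, clean inputs:", round(target.score(X, y), 3))
print("target accuracy, transferred attack:", round(target.score(X_adv, y), 3))
```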

Von Twickel stated that the best strategy involves a combination of methods, including the certification of training data and processes, secure supply chains, continual evaluation, decision logic and standardization.
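
Most of those measures are process rather than code, but the first one, certifying training data, has a simple technical core: record a cryptographic manifest of the approved dataset, then verify it before every training run. A minimal sketch follows; the file names and layout are our assumptions.

```python
"""Sketch: certify a training dataset with a SHA-256 manifest."""
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir):
    """Compute a SHA-256 digest for every file under the dataset directory."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(data_dir))] = digest
    return manifest

def changed_entries(data_dir, manifest_file):
    """List every file added, removed or altered since certification."""
    expected = json.loads(Path(manifest_file).read_text())
    actual = build_manifest(data_dir)
    return sorted(set(expected.items()) ^ set(actual.items()))

if __name__ == "__main__":
    # Certify once, after the data has been reviewed and approved...
    manifest = build_manifest("training_data")
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    # ...then refuse to train if anything has drifted since certification.
    assert not changed_entries("training_data", "manifest.json"), "data changed"
```

A manifest can’t detect poison that was present before certification, of course; it only guarantees that what you train on is exactly what was reviewed.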

Taking responsibility for AI

Microsoft, Google and AWS are already setting up cloud data centers and redistributing workloads to accommodate AI computing. And companies like IBM are already helping to deliver real business benefits with AI — ethically and responsibly. Furthermore, vendors are building AI into end-user products, such as Slack and Google’s productivity suite.

For Easterly, the best way to have a sustainable approach to security is to shift the burden onto software providers. “They’re owning the outcomes of security, which means that they’re developing technology that’s secure by design, meaning that they’re tested and developed to reduce vulnerabilities as much as possible,” Easterly said.

This approach has already been advanced by the White House’s new National Cybersecurity Strategy, which proposes new measures aimed at encouraging secure development practices. The idea is to transfer liability for software products and services to the large corporations that create and license these products to the federal government.

With the generative AI revolution already upon us, the time is now to think hard about the associated risks — before it opens up another can of security worms.
