July 28, 2023
Defending Against AI Threats
On this episode of Inside the FBI, hear from Director Christopher Wray and the head of the FBI’s Cyber Division about the Bureau’s stance on artificial intelligence and our key priorities.
Ellen Ferrante: One of the FBI’s strengths is finding new and creative approaches to solving crimes.
The FBI has developed tools to help keep people safe—things like biometrics, DNA research, and facial recognition. We’ve created digital forensics teams that handle technically complex cases. We’ve learned to analyze cell phone data to find missing persons.
Artificial intelligence or “AI” is one of the newest technologies the world is exploring on a massive scale. The FBI is also examining AI to anticipate and defend against threats, and ultimately to help keep the American people safe.
In this episode of Inside the FBI, we’ll discuss the Bureau’s stance on AI and other key priorities. We’ll hear from FBI Director Christopher Wray.
Director Wray: One aspect of AI we at the FBI are most concerned about is that this technology doesn’t just exist in cyberspace, it touches more and more of the physical world, too.
Ferrante: And Bryan Vorndran, head of the FBI’s Cyber Division.
Bryan Vorndran: Cyber threats must be tackled as a team, and private sector organizations have a big role to play.
Ferrante: I’m Ellen Ferrante and this is Inside the FBI.
Ferrante: AI is an area of computer science and mathematical statistics that focuses on creating machines that replicate or emulate certain aspects of human cognition.
AI is quickly making breakthroughs that impact our everyday lives. AI technologies are used everywhere from voice-assisted smartphones to precision medicine to agriculture. The use of AI is not a crime, and AI can perform tasks that make our lives more convenient, save time, and drive innovation.
While the benefits of AI are vast, unfortunately criminals can also exploit this technology to harm others. For instance, the same generative AI technologies that can be used to save people time by automating tasks can also be used to generate deepfakes or malicious code.
As FBI Director Christopher Wray explains:
Director Wray: In response to all this change and technological advancement, our lawmakers and leaders in all industries—from the medical to the creative to the military—are trying to make order from the chaos, to make sure we map a clear path across this new frontier, instead of letting circumstances—or, as we’re already starting to see, foreign governments—make decisions for us.
Ferrante: Here’s how the FBI approaches AI:
- First, the FBI anticipates and defends the public against threats from bad actors who use AI and machine learning to commit cybercrimes and who target the AI and machine learning systems that are used for legitimate and lawful purposes.
- Second, the FBI defends our innovators in the U.S. who are building the next generation of technology and AI so it can’t be turned against us.
- And third, the FBI examines how AI technologies can benefit the American public—for instance, by triaging and prioritizing data to help solve crimes.
On today’s episode, we’ll focus on those first two areas.
Let’s begin by discussing how AI is used in cybercrime. Hostile nation-states’ spy and hacking services, terrorists, fraudsters, child predators, and others use AI to exploit vulnerabilities and steal data. Cyber actors will go as far as compromising key services that people can’t live without, like hospitals, schools, and transportation systems.
One way they do this is by using AI-enabled language models to generate both malicious code and spear-phishing content. If you’re not familiar with spear-phishing, that’s when a criminal, usually targeting a company or organization, sends a message that looks like it’s from a trusted sender. The message asks victims to reveal confidential information or take some action that lets criminals access company accounts, calendars, and other data.
For example, someone can use an AI chat generator to compose a formal business email to a banking employee, requesting an urgent money transfer. Thanks to new AI technologies, cybercriminals—who themselves may not have a perfect command of English or knowledge of the banking industry—can quickly compose a grammatically correct, professional message.
Beyond crafting a convincing email, a cybercriminal can use AI to create a fake online identity, so the requester even appears to be a legitimate contact.
Generative Adversarial Networks, or GANs, are a type of AI that pairs two things: one, a generator that creates content, like an image of a face; and two, a discriminator that tries to detect fakes—which helps the generator get smarter and smarter. As a result, it can be incredibly difficult to discern a GAN-generated fake image, even for those with cybersecurity training. Adversarial machine learning, or AML, refers to techniques that disrupt, degrade, or deny the performance of AI or machine-learning systems.
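The generator-versus-discriminator loop described above can be sketched in a few lines of Python. This is a hypothetical toy illustration (all names and numbers here are my own, not from the episode): a one-parameter generator learns to mimic a one-dimensional Gaussian "real" distribution, while a tiny logistic-regression discriminator tries to tell real samples from fakes, and each model's improvement forces the other to improve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: a 1-D Gaussian centered at 4 (stand-in for real images).
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator G(z) = a*z + m, starting far from the real distribution.
a, m = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + b), a probability that x is real.
w, b = 0.1, 0.0

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(32)
    x_real = rng.normal(REAL_MEAN, REAL_STD, 32)
    x_fake = a * z + m

    # Discriminator step: push D(real) up and D(fake) down.
    s_r = sigmoid(w * x_real + b)
    s_f = sigmoid(w * x_fake + b)
    grad_w = -np.mean((1 - s_r) * x_real) + np.mean(s_f * x_fake)
    grad_b = -np.mean(1 - s_r) + np.mean(s_f)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push D(fake) up, i.e. fool the discriminator.
    z = rng.standard_normal(32)
    x_fake = a * z + m
    s_f = sigmoid(w * x_fake + b)
    grad_a = -np.mean((1 - s_f) * w * z)
    grad_m = -np.mean((1 - s_f) * w)
    a -= lr * grad_a
    m -= lr * grad_m

print(f"generator mean after training: {m:.2f} (real mean: {REAL_MEAN})")
```

The same adversarial pressure that drives the generator's output toward the real distribution here is what, at scale, makes GAN-generated faces so hard to spot.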
Bryan Vorndran, Assistant Director of the FBI Cyber Division, explains:
Vorndran: The FBI is interested in this space from a security and cybersecurity perspective. Securing networks and devices is essential to preventing and mitigating harm that can affect businesses, critical infrastructure, and national security. That same philosophy must now be applied to securing AI systems.
Organizations too often neglect security when deploying new capabilities. The good news is that highly sophisticated AML attacks, like data poisoning, are still mostly found in research literature and not yet "in the wild." As public and private industry adoption of AI increases, the AML attack surface will correspondingly increase along with the potential physical and economic costs of a successful attack.
Ferrante: Cybercriminals exploiting AI might also use a technique to dupe a biometric facial recognition system to steal state unemployment insurance benefits. And others may use machine-learning models to conduct untraceable searches on topics like bomb-making. The possibilities are increasingly wide-ranging and have the potential for devastating results.
Director Wray: One aspect of AI we at the FBI are most concerned about is that this technology doesn’t exist just in cyberspace. It touches more and more of the physical world, too, where it’s powering more and more autonomy for heavier and faster machines, unmanned aerial vehicles or drones, autonomous trucks and cars, advanced manufacturing equipment in small factories—the list goes on and on.
Ferrante: Another way cybercriminals take advantage of AI is by creating fake accounts and posting content intended to sow discord and distrust in our society. Deepfakes are a well-known example of this. Deepfakes are highly convincing but fake images, voices, and videos that are now easily created by widely available AI tools. Years ago, it would have taken a lot of talent and resources to create deepfakes, but now, almost anyone can create them.
Director Wray: As AI gets better at writing code, and finding code vulnerabilities to exploit, the problem is just going to grow. Those capabilities are already able to make a less-sophisticated hacker more effective by writing code, and finding weaknesses they couldn’t on their own. And soon, as AI improves its performance compared to the best-trained and most-experienced humans, it’s going to be able to make elite hackers even more dangerous than they are today.
Ferrante: The second way the FBI looks at AI is as an economic-espionage target of our foreign adversaries, who are looking to get their hands on U.S. technology and undercut U.S. businesses.
Director Wray: Our country is the gold standard for AI talent in the world. And that makes our AI/machine-learning sector a very attractive target. The Chinese government, in particular, poses a formidable cyber and counterintelligence threat on a scale unparalleled among foreign adversaries.
Ferrante: Chinese companies, with heavy government support, are frantically trying to match U.S. AI capabilities. Because of the Chinese government’s sway, the technology that private companies develop is at the regime’s disposal. Two of China’s biggest tech companies have already released large language models similar to one developed in the United States. China also has a hacking program that is bigger than that of every other major nation combined. For years, it has been stealing personal information from Americans and millions of others around the world, as well as innovations and technologies for its own economic and military gain. China feeds that stolen tech and data into its own large and lavishly funded AI program.
As our adversaries look to exploit gaps in our intelligence and information security networks, the FBI is committed to working with our federal counterparts, our foreign partners, and the private sector to close those gaps. These partnerships allow us to defend networks, attribute malicious activity, sanction bad behavior, and take the fight to our adversaries overseas.
Vorndran: Cyber threats must be tackled as a team, and private sector organizations have a big role to play. We know collaborating to establish best practices—and practicing them—works. We know information sharing, threat reporting, and awareness are also key to addressing these threats. The FBI remains a trusted partner after an intrusion or attack, but we want to foster strong relationships before any attacks happen; and we want to head off as many problems as we can through prevention and mitigation.
Ferrante: Director Wray echoes the importance of partnerships—and the role all of us play in this new chapter of technological advancements.
Director Wray: We at the FBI firmly believe this is a moment to embrace change—for the benefits it can bring, and for the imperative of keeping America at its forefront. And frankly, there’s no more important partner in our strategy than all of you and your peers throughout the country.
We’re going to pursue our mission wherever it leads us, even when doing so requires mastering new domains and learning new technologies, because we wouldn’t be doing our jobs if we didn’t help you navigate these historic times safely and securely. And we look forward to tackling new challenges and harnessing innovation together.
Ferrante: This has been another production of Inside the FBI. You can follow us on your favorite podcast player, including Spotify, Apple Podcasts, or Google Podcasts. You can also subscribe to email alerts about new episodes at fbi.gov/podcasts.
I’m Ellen Ferrante from the FBI’s Office of Public Affairs. Thanks for tuning in.