Artificial Intelligence

Artificial intelligence (AI) has implications not just for the commercial sector but also for national security and law enforcement.


What is Artificial Intelligence? 

Broadly speaking, AI systems are used to replicate or emulate certain aspects of cognition. We interact with AI almost every day in modern life, from online search engines and video games to digital assistants on smartphones and smart devices to automated cruise control functions in vehicles.

The FBI uses the definition established in the Fiscal Year 2019 National Defense Authorization Act (FY19 NDAA), which defines artificial intelligence to include any of the following:

  • Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
  • An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
  • An artificial system designed to think or act like a human, including cognitive architectures and neural networks. 
  • A set of techniques, including machine learning (ML), that is designed to approximate a cognitive task. 
  • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.  

Is Artificial Intelligence good or bad? 


The FBI recognizes the value of artificial intelligence and machine learning technologies. AI changes the threat landscape by automating tasks that previously required more time, effort, and labor. The availability and low cost of these tools, the short time it takes to train new AI models from existing ones, the availability of large data sets, and the lack of regulation together create opportunities for bad actors to acquire and use them against U.S. interests.
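As a generic illustration of how little effort it takes to adapt an existing model to a new task, the sketch below fine-tunes a publicly available pretrained image model, assuming Python with PyTorch and torchvision installed. It trains only a small replacement head, which is why building on an existing model needs a fraction of the data and compute of training from scratch. The five-class task and random batch are placeholder assumptions; this is a textbook transfer-learning pattern, not a description of any particular actor's tooling.

    # Minimal transfer-learning sketch (generic, illustrative only):
    # adapt a pretrained ImageNet model to a new task by training a small head.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a model pretrained on ImageNet instead of training from scratch.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor; only the new head will be trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer for a hypothetical 5-class task (placeholder).
    num_classes = 5
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Only the small new head is optimized, which is why this needs far less
    # data and compute than the original model did.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on a random batch (a stand-in for real data).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, num_classes, (8,))
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"loss after one step: {loss.item():.3f}")

Because only the final layer's parameters are updated, a workable classifier can often be produced quickly on commodity hardware, which is one reason the barrier to entry keeps dropping.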

The use of AI is not a crime. Criminals and other malicious actors can, however, use AI to further criminal acts or acts that threaten U.S. national security. As public- and private-sector adoption of AI increases, the AI/ML attack surface will grow correspondingly, along with the potential physical and economic costs of a successful attack.

The FBI is hyper-focused on these kinds of attacks and specifically on what can—and will—go wrong. The FBI is working to identify and defend against threats from those who use AI for criminal activity, and against those who attack or degrade AI systems being used for legitimate, lawful purposes.  


How does the FBI approach Artificial Intelligence? 

Our approach to AI fits into three buckets: identifying and tracking adversarial and criminal use of AI, protecting American innovation, and governing the FBI's own responsible and ethical use of AI.

  • The FBI is focused on anticipating and defending against threats from those who use AI and ML to power malicious cyber activity, conduct fraud, propagate violent crimes, and threaten our national security; we’re working to stop actors who attack or degrade AI/ML systems being used for legitimate, lawful purposes.
  • The FBI defends the innovators who are building the next generation of technology here in the U.S. from those who would steal it. This effort is also related to defense against malicious cyber activity, since all too often our adversaries steal our trade secrets, including AI, to turn them against the U.S. and U.S. interests.
  • The FBI is also looking at how AI can help us further exercise our authorities to protect the American people—for instance, by triaging and prioritizing the complex and voluminous data we collect in our investigations, making sure we’re using those tools responsibly and ethically, under human control, and consistent with law and policy.

FBI Director Christopher Wray Speaks at 2023 mWISE Cybersecurity Conference

FBI Director Wray reaffirmed the FBI's stance on countering AI and cyberspace threats and issued a call to action to strengthen private-sector and government partnerships in fighting both domestic and foreign adversaries.


How can I defend against synthetic content? 

Synthetic content is commonly referred to as "deepfakes"; however, the term can also refer to anonymized, artificial data.

Several factors have decreased the resources, time, and effort required to create or use convincing synthetic content. Methods once limited to those with the necessary computing power and expertise can now be employed by a broader customer base via user-friendly applications. One consequence of this trend is that synthetic content creation has been essentially commoditized and scaled beyond its once-limited use cases.

Content consumers can look for telltale signs such as unsettling silences and distorted audio, visual warping in images and video, video inconsistencies or unnatural movement, and poor video, lighting, and audio quality. These inconsistencies may indicate synthetic content, particularly in social media profile avatars.

  • Unsettling silences and distorted audio: Audio deepfakes often contain delays and unnatural pauses introduced when text is converted to audio or when existing audio is manipulated. Choppy sentences, unusual inflections, abnormal phrasing, or incongruent background noise can also reveal synthetic origins. (A minimal heuristic sketch for this indicator follows this list.)
  • Video inconsistencies or unnatural movement: Video deepfakes often show subtle inconsistencies or movements that do not match the subject's speech or facial expressions; blinking that is too frequent or too rare; eyebrows that do not fit the face; hair in the wrong spots; skin that does not match the subject's age, or abnormal skin tones; and awkward head and body positioning.
  • Distorted profile images: AI-generated deepfake facial profile images can look distorted, blurred, or unsettling.
  • Poor video, lighting, and audio quality: Video deepfakes tend to have unnatural shadows, colors, and lighting patterns. Deepfakes often use synthetic audio that can sound unnatural or inconsistent, such as mismatched background noise or shifting voice pitch; listen closely to determine whether the audio sounds authentic.
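Some of these indicators can even be screened for programmatically. The sketch below, in Python with NumPy, implements a minimal heuristic for the first indicator: it frames an audio clip, measures per-frame energy, and reports the longest silent stretch. The frame size and silence threshold are arbitrary assumptions chosen for illustration; this is a toy check, not a deepfake detector.

    # Heuristic sketch for one indicator above: unnaturally long pauses in audio.
    # Thresholds are arbitrary assumptions; illustrative only, not a detector.
    import numpy as np

    def longest_pause_s(samples: np.ndarray, sample_rate: int,
                        frame_ms: int = 30, silence_rms: float = 0.01) -> float:
        """Return the longest silent stretch, in seconds, found in the clip."""
        frame_len = int(sample_rate * frame_ms / 1000)
        n_frames = len(samples) // frame_len
        frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
        rms = np.sqrt((frames ** 2).mean(axis=1))  # per-frame energy
        longest = run = 0
        for silent in rms < silence_rms:           # runs of silent frames
            run = run + 1 if silent else 0
            longest = max(longest, run)
        return longest * frame_ms / 1000

    # Demo on synthetic audio: "speech" (noise) with a 2-second dead gap spliced in.
    sr = 16000
    speech = 0.1 * np.random.randn(sr * 3)
    clip = np.concatenate([speech, np.zeros(sr * 2), speech])
    print(f"longest pause: {longest_pause_s(clip, sr):.2f} s")  # about 2 s

A real screening tool would combine many such signals; no single heuristic is reliable on its own, which is why the human inspection tips above still matter.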

Wait…the FBI is using AI? 

AI gives the FBI new tools and capabilities, such as vehicle recognition, triage of voice samples for language identification, and generation of text from speech samples, that help us process data to detect, deter, and defeat criminal activity and national security threats more quickly. For instance, the FBI uses AI to sift through large amounts of data when conducting video analytics. The FBI uses information generated by these techniques only as investigative leads.

A human being, not an AI, is ultimately accountable for any actions taken. This includes ensuring that a trained investigator or analyst assesses the output of our AI systems before any further substantive action is taken.
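The sketch below shows a generic human-in-the-loop gating pattern of the kind this paragraph describes: model output is recorded only as a candidate lead, and no lead becomes actionable until a named reviewer explicitly approves it. The Lead structure and all field names are illustrative assumptions, not a description of any actual FBI system.

    # Generic human-in-the-loop gating pattern (illustrative; all names are
    # hypothetical and do not describe any actual FBI system).
    from dataclasses import dataclass

    @dataclass
    class Lead:
        source: str         # e.g., "video-analytics", "language-id"
        summary: str        # what the model flagged
        model_score: float  # model confidence, 0.0 to 1.0
        reviewed: bool = False
        approved: bool = False
        reviewer: str = ""

    def human_review(lead: Lead, reviewer: str, approve: bool) -> None:
        """Record an analyst's decision; only a human sets `approved`."""
        lead.reviewed = True
        lead.approved = approve
        lead.reviewer = reviewer

    def actionable(lead: Lead) -> bool:
        """A lead is actionable only after explicit human approval,
        no matter how confident the model was."""
        return lead.reviewed and lead.approved

    lead = Lead(source="video-analytics",
                summary="vehicle matching description at timestamp 14:02",
                model_score=0.97)
    assert not actionable(lead)    # a high model score alone is never enough
    human_review(lead, reviewer="analyst_1", approve=True)
    assert actionable(lead)        # action requires the human decision

The key design choice is that the approval flag can only be set through the human-review step, so a model's confidence score alone never triggers further action.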

The FBI is aware of the benefits of AI, but we also recognize its limitations. The FBI's policies and procedures for the collection, analysis, and use of data for its investigations are designed to meet the highest standards of privacy, civil liberties, ethics, and adherence to the Constitution of the United States. AI must be developed, acquired, or deployed with attention to responsible and ethical use. Our investigations require an extremely high degree of certainty, and we currently verify and validate all AI-generated leads with human experts.


Resources