New FBI warning about AI-powered scams that are after your money
The FBI is issuing a warning that criminals are increasingly using generative AI technologies, especially deepfakes, to exploit unsuspecting individuals. This warning serves as a reminder of the growing sophistication and availability of these technologies and the urgent need for vigilance to protect ourselves from possible fraud. Let’s explore what deepfakes are, how criminals use them, and what steps you can take to protect your personal information.
The rise of deepfake technology
Deepfakes refer to artificial intelligence-generated content that can plausibly mimic real people, including their voices, images and videos. Criminals use these techniques to impersonate individuals, often in crisis situations. For example, they can generate audio clips that sound like a loved one is asking for urgent financial help, or even create real-time video calls that appear to involve company executives or law enforcement officials. The FBI has identified 17 common techniques used by criminals to create these deceptive materials.
Key tactics used by criminals
Here is the full list of the 17 techniques the FBI has identified, all of which exploit generative AI, and deepfakes in particular, to make fraud more convincing.
1) Voice cloning: Generating audio that imitates the voice of a family member or another trusted person in order to manipulate victims.
2) Real-time video calls: Creating fake live video interactions that appear to involve authority figures, such as police officers or business executives.
3) Social engineering: Using emotional appeals to manipulate victims into revealing personal information or transferring funds.
4) AI-generated text: Crafting realistic written messages for phishing attacks and social engineering schemes so they appear believable.
5) AI-generated images: Using synthetic images to build credible profiles on social media or fake websites.
6) AI-generated videos: Producing convincing videos for use in fraud, including investment scams and impersonation schemes.
7) Fake social media profiles: Setting up bogus accounts filled with AI-generated content to defraud others.
8) Phishing emails: Sending legitimate-looking emails, written with the help of AI, that trick recipients into giving out sensitive information.
9) Impersonation of public figures: Using deepfake technology to create video or audio recordings that impersonate well-known personalities for fraudulent purposes.
10) Fake identification documents: Generating counterfeit IDs, such as driver’s licenses or credentials, for identity fraud and impersonation.
11) Investment fraud schemes: Using AI-generated materials to convince victims to invest in nonexistent opportunities.
12) Ransom demands: Impersonating loved ones in distress to extort money from victims.
13) Manipulating voice recognition systems: Using cloned voices to bypass security measures that rely on voice authentication.
14) Fake charity appeals: Creating deepfake content that asks for donations under false pretenses, often during crises.
15) Business email compromise: Crafting emails that appear to come from executives or trusted contacts to authorize fraudulent transactions.
16) Disinformation campaigns: Using deepfake videos as part of wider disinformation efforts, particularly around significant events such as elections.
17) Exploiting crisis situations: Generating urgent requests for help or money during emergencies, relying on emotional manipulation.
These tactics highlight the increasing sophistication of fraud schemes enabled by generative artificial intelligence and the importance of vigilance in protecting personal data.
Tips to protect against deepfakes
Applying the following strategies can improve your security and awareness against deepfake-related scams.
1) Limit your online presence: Reduce the amount of personal information, especially high-quality images and videos, available on social media by adjusting your privacy settings.
2) Invest in personal data removal services: The less information about you that is available online, the harder it is for someone to deepfake you. No service can promise to remove all of your data from the internet, but a data removal service can continuously monitor and automate the process of removing your information from hundreds of sites over time. Check out my top picks for data removal services here.
3) Avoid sharing sensitive information: Never disclose personal or financial information to strangers online or over the phone.
4) Be careful with new connections: Be cautious when accepting new friends or connections on social media, and verify their authenticity before engaging.
5) Check your social media privacy settings: Make sure your profiles are set to private and that you only accept friend requests from trusted people. Here’s how to switch all your social media accounts, including Facebook, Instagram, Twitter, and any others you might be using, to private.
6) Use two-factor authentication (2FA): Enable 2FA on your accounts to add an extra layer of security against unauthorized access.
7) Verify callers: If you receive a suspicious call, hang up and independently verify the caller’s identity by contacting their organization through official channels.
8) Watermark your media: When sharing photos or videos online, consider using digital watermarks to prevent unauthorized use.
9) Monitor your accounts regularly: Keep an eye on your financial and online accounts for unusual activity that could indicate fraud.
10) Use strong and unique passwords: Use different passwords for different accounts to prevent a single breach from compromising multiple services. Consider using a password manager for generating and storing complex passwords.
11) Back up your data regularly: Maintain backup copies of important data to protect against ransomware attacks and ensure recovery in case of data loss.
12) Create a secret confirmation phrase: Establish a unique word or phrase with family and friends to confirm identity during unexpected communications.
13) Be aware of visual defects: Look for subtle flaws in images or videos, such as distorted lines or unnatural movements, which may indicate manipulation.
14) Listen for voice anomalies: Pay attention to tone, pitch and word choice in audio recordings. AI-generated voices can sound unnatural or robotic.
15) Do not click on links or download attachments from suspicious sources: Be cautious when receiving email, direct messages, text messages, phone calls, or other digital communications if the source is unknown. This is especially true if the message requires you to act quickly, such as claiming that your computer has been hacked or that you’ve won a prize. Deepfake creators try to manipulate your emotions so that you download malware or share personal information. Always think before you click.
The best way to protect yourself from malicious links that install malware, potentially accessing your personal information, is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android, and iOS devices.
16) Be careful with money transfers: Don’t send money, gift cards or cryptocurrencies to people you don’t know or have met only through the Internet or over the phone.
17) Report suspicious activity: If you suspect that you have been targeted by scammers or have become a victim of fraud, report it to the FBI’s Internet Crime Complaint Center (IC3).
By following these tips, individuals can better protect themselves from the risks associated with deepfake technology and related scams.
Kurt’s key takeaways
The increasing use of generative AI technologies, especially deepfakes, by criminals highlights the urgent need for awareness and vigilance. As the FBI warns, these sophisticated tools allow fraudsters to impersonate individuals, making scams harder to detect and more convincing than ever. It is critical that everyone understands the tactics these criminals use and takes proactive steps to protect their personal information. By being informed about the risks and implementing security measures, such as identity verification and limiting online exposure, we can better protect ourselves from new threats.
In what ways do you think companies and governments should respond to the growing threat of AI-powered fraud? Let us know by writing to us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report newsletter by going to Cyberguy.com/Newsletter.
Ask Kurt a question or tell us what stories you want us to cover.
Copyright 2024 CyberGuy.com. All rights reserved.