How Ferrari Hit the Brakes on a Deepfake CEO
The luxury car manufacturer’s close call with a deepfake scam surfaced lessons for all leaders. Learn steps that organizations and individuals can take to thwart these growing threats.
In July 2024, an executive at luxury sports car manufacturer Ferrari received several messages that appeared to have been sent by CEO Benedetto Vigna on the messaging and calling platform WhatsApp. The messages, which originated from an unfamiliar number, mentioned an impending significant acquisition, urged the executive to sign a nondisclosure agreement immediately, and claimed that Italy’s market regulator and the Italian stock exchange had already been informed about the transaction.
The messages were convincing; the account even displayed a profile picture of Vigna standing in front of the Ferrari logo. Nevertheless, the executive grew suspicious. During a follow-up call in which he was again urged to assist with the confidential and urgent financial transaction, he noticed slight inconsistencies in tone, even though the voice mimicked Vigna's southern Italian accent.
Sensing that something was amiss, the executive asked the caller a question that only Vigna would know the answer to — the title of a book Vigna had recommended days earlier. Unable to answer the question, the scammer abruptly ended the call. The executive’s simple test prevented what could have been a major financial loss and reputational damage for Ferrari.
Understanding Deepfakes
The attempt to exploit Ferrari is an example of a deepfake — a highly realistic video, image, text, or voice that has been fully or partially generated using artificial intelligence, most notably machine learning models called generative adversarial networks, or GANs.
GANs pit two neural networks against each other: a generator, which creates the fake media, and a discriminator, which evaluates how real or fake the generated content looks. The two networks compete, with each round of feedback pushing the generator to improve, until it produces media so realistic that the discriminator can no longer discern whether it's fake.
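To make the mechanism concrete, the following minimal sketch implements that adversarial loop in PyTorch on toy one-dimensional data rather than images or audio. The network sizes, the "real" data distribution, and the hyperparameters are illustrative assumptions only; real deepfake systems apply the same dynamic at vastly larger scale.

```python
# Minimal GAN sketch: a generator learns to mimic a toy "real" data
# distribution by competing against a discriminator. Illustrative only.
import torch
import torch.nn as nn

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how real a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: learn to tell real samples from fakes.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should drift toward the real mean (~3.0).
print(G(torch.randn(5, 8)).detach().squeeze())
```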
Scammers generate deepfakes using large data sets that include photos, audio clips, and videos of the individual they want to impersonate. The more data that’s available, the more realistic the deepfake will appear. For this reason, celebrities, politicians, and public figures with extensive media presence are often impersonated in deepfakes.
While deepfakes can be used for entertainment or creative purposes, such as in film or television, they also pose significant risks. In corporate scams, malicious actors use deepfakes to deceive executives into authorizing fraudulent transactions or disclosing sensitive information.
Financial losses attributed to AI-enabled fraud are expected to rise: Deloitte's Center for Financial Services predicts that fraud enabled by generative AI could reach $40 billion in losses in the United States by 2027, up from $12.3 billion in 2023. Given how realistic many deepfakes appear and the ease with which scammers can produce them, organizations must increase employee awareness and take proactive measures to protect against this emerging threat.1
Inside the Psychology of Deepfake Scams
Deepfake scams are extremely effective because they exploit cognitive biases — systematic errors in thinking that impair judgment, especially in high-pressure situations. Understanding how deepfakes manipulate human psychology can help executives and organizations better protect themselves against these threats.
Deepfakes exploit the trust bias, which is the human tendency to believe information from familiar or authoritative sources without skepticism. In a corporate setting — in which trust in leadership is paramount — deepfakes of CEOs or senior leaders can easily manipulate employees who are used to following directives from these figures without question.
In the Ferrari case, the scammer impersonated the CEO to exploit the executive’s trust, using a familiar voice and visual cues — Vigna’s accent and a profile photo with the Ferrari logo. Recognizing that even seemingly trustworthy communications can be falsified is crucial in building a defense mindset. Organizations need to train employees to question the authenticity of communications, no matter how credible they appear.
Deepfakes also exploit time pressure and urgency, which are common during financial transactions or high-stakes deals. Scammers often create situations that demand immediate action, leaving little room for reflection or detailed verification. In the Ferrari example, the scammer pushed for quick action, stating that regulators and the stock exchange had already been informed. This sense of urgency can override usual decision-making processes, making executives more prone to falling for fraudulent requests. The Ferrari executive's suspicion and decision to verify by asking a very specific question were pivotal in uncovering the scam and demonstrated the importance of resisting pressure and taking time to authenticate requests. Training staff members to pause and verify, even in time-sensitive scenarios, can help mitigate this risk.
Confirmation bias plays a significant role in the success of deepfake scams too. People tend to favor information that aligns with their preexisting beliefs, especially when the deepfake is reinforced by other convincing but false elements that create a realistic picture. A deepfake that aligns with what an executive already expects to hear — such as news about an impending acquisition or a confidential transaction — can slip through unnoticed because it fits within the individual’s mental framework. Recognizing this bias can help executives learn to approach even expected communications with a critical eye, particularly when sensitive matters are involved.
Lastly, the visual and auditory realism of deepfakes taps into the human brain’s natural inclination to trust what it sees and hears. Historically, visual and audio cues have been reliable sources of information. With deepfake technology, however, this is no longer the case. In the Ferrari example, the scammer used voice-mimicking technology to imitate Vigna’s accent and tone. The executive’s ability to detect slight inconsistencies in the voice, such as shifts in tone and unfamiliar speech patterns, was key to exposing the scam. Executives (like all other employees) must learn to overcome the deeply ingrained assumption that audiovisual material is inherently trustworthy.
Seven Tips to Prevent Deepfake Scams
As the threat of deepfake scams grows, executives should prioritize the following actions to protect their organizations.2
1. Emphasize vigilance. The first line of defense in preventing deepfake scams is employee awareness. In the Ferrari case, personal attention to detail and vigilance enabled the executive to uncover the fraud and avoid a potential crisis.
Organizations should regularly conduct cybersecurity awareness programs that focus on social engineering, deepfakes, and other AI-driven scams in order to enhance employees' vigilance. Additionally, implementing regular simulation exercises, such as phishing and deepfake drills, can help sharpen employees' instincts. It is also important to foster a culture of skepticism in which employees are encouraged to question unusual or unexpected requests, even when they appear to come from a legitimate source.
2. Enact strong verification protocols. Organizations must have guidelines in place that prevent scammers from gaining unauthorized access to information and manipulating employees. The Ferrari executive's decision to ask a question that only the real CEO could answer is a simple but highly effective method for uncovering a scam.
Strong verification protocols include multistep identity verification for all high-level or sensitive communications. Additionally, it is crucial that all executives and employees have direct lines of communication to verify important decisions, and that they use biometric or encrypted verification tools to validate the identities of key personnel. Lastly, organizations should avoid third-party applications that may lack robust security measures.
These steps emphasize the importance of clear, consistent protocols for verifying the identities of key personnel, particularly in sensitive or high-stakes communications. Organizations must train employees to rely on these internal protocols, especially when dealing with unusual or high-risk financial requests.
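As an illustration of what such a protocol can look like in practice, the sketch below encodes a simple multistep gate in Python, in the spirit of the challenge question the Ferrari executive improvised. The channel names, the challenge registry, and the expected answer are all hypothetical placeholders; a production system would tie into the organization's identity infrastructure.

```python
# Illustrative sketch of a multistep verification gate for high-risk
# requests. The approved channels, the challenge registry, and the
# expected answer below are all hypothetical placeholders.
APPROVED_CHANNELS = {"corporate-voip", "company-email"}

# Challenge answers agreed on in person and never transmitted digitally,
# in the spirit of the book-title question the Ferrari executive asked.
CHALLENGES = {"ceo": ("book recommended last week", "example-title")}

def verify_request(sender: str, channel: str, answer: str) -> bool:
    """Approve a sensitive request only if it arrives on an approved
    channel AND the sender can answer a personal challenge question."""
    if channel not in APPROVED_CHANNELS:
        return False  # an unfamiliar WhatsApp number fails immediately
    _prompt, expected = CHALLENGES.get(sender, (None, None))
    return expected is not None and answer.strip().lower() == expected

# A request from an unknown messaging account is rejected outright,
# even if the impostor could answer the challenge:
assert not verify_request("ceo", "whatsapp-unknown-number", "example-title")
# A request on an approved channel still requires the correct answer:
assert verify_request("ceo", "corporate-voip", "Example-Title ")
```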
3. Promote digital literacy and AI awareness. As AI technologies for producing deepfakes become more advanced and accessible, it is crucial for leaders to stay informed about potential digital threats. The Ferrari case highlights the increasing importance of digital literacy, particularly for high-level executives.
Organizations should offer specialized training on AI and deepfake technology, especially for executives, who are often the primary targets of such scams. Organizations should also train employees to detect manipulated content by recognizing anomalies, such as inconsistencies in tone, background noise, or visual artifacts in video content. Even believable content should be scrutinized. Additionally, companies should regularly update AI awareness programs to keep up with the latest advances in deepfake creation and ensure that employees are equipped to handle new forms of digital manipulation.
4. Incorporate cognitive bias awareness. A critical layer of protection against deepfake scams is understanding how cognitive biases, such as trust bias and confirmation bias, influence decision-making. Educating employees about these phenomena helps build the habit of questioning communications that seem legitimate. Employee awareness, combined with standard security protocols, strengthens defenses and ensures that individuals make decisions based on critical thinking rather than cognitive shortcuts.
Organizations can better prepare their workforces by addressing the psychological and behavioral aspects of deepfake scams. Understanding both the technical threat and the human element behind the success of deepfakes is key to developing more resilient and effective defense strategies.
5. Enhance communications security. Organizations should use secure, encrypted communications platforms designed for corporate use. While WhatsApp is a valid tool for personal and informal exchanges, its use in the Ferrari incident raises serious concerns regarding sensitive corporate communications.
Using end-to-end encryption and multifactor authentication for all corporate communications helps keep unauthorized individuals out of confidential discussions. Companies should also limit the use of consumer-grade apps like WhatsApp for critical business communications, opting instead for enterprise-grade solutions that provide greater control over data privacy and security. Implementing secure communications protocols can significantly reduce the risk of fraudulent messages and enable organizations to maintain tight control over sensitive information.
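To show the underlying idea of authenticated messages, here is a minimal sketch using only Python's standard library to sign and verify a message with a pre-shared key. It is not a full end-to-end encryption scheme, and the key and messages are hypothetical; it demonstrates only that a recipient can reject content not produced by a holder of the shared secret, which is precisely what a deepfake impostor lacks.

```python
# Illustrative message authentication with a shared secret (stdlib only).
import hashlib
import hmac

SHARED_KEY = b"provisioned-out-of-band"  # hypothetical pre-shared key

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"Approve wire transfer #4821"
tag = sign(msg)
assert verify(msg, tag)  # genuine message passes
assert not verify(b"Approve wire transfer #9999", tag)  # altered message fails
```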
6. Implement a multilayered security approach. Organizations can significantly enhance their defense mechanisms by implementing multiple layers of security that integrate both technological safeguards and human oversight. The scam in the Ferrari case was successfully thwarted due to human intuition and vigilance. However, adopting a multilayered security approach that includes multifactor authentication, biometric verification, encryption, and human oversight to detect anomalies in digital communications can build more robust defenses.
Companies should constantly monitor high-risk activities, such as large financial transactions, through the use of fraud detection systems. Additionally, developing fallback protocols, such as rapid shutdown capabilities and immediate notification of suspicious activities, can further safeguard organizations against breaches.
Finally, organizations can enact policies to review critical communications by making at least two individuals responsible for verifying the quality, authenticity, and accuracy of video or audio calls. This promotes a more robust decision-making process and adds an extra layer of defense against sophisticated scams.
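A minimal sketch of such a two-person gate, combined with the automated threshold flag described above, might look like the following; the threshold, reviewer roles, and transaction type are illustrative assumptions.

```python
# Illustrative dual-approval gate: large transactions are flagged and
# blocked until two distinct reviewers sign off. Values are hypothetical.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 100_000  # flag anything above this amount for review

@dataclass
class Transaction:
    amount: float
    description: str
    approvals: set[str] = field(default_factory=set)

def approve(txn: Transaction, reviewer: str) -> None:
    """Record a reviewer's sign-off (a set ignores duplicate approvals)."""
    txn.approvals.add(reviewer)

def may_execute(txn: Transaction) -> bool:
    """Small transactions pass; flagged ones need two distinct reviewers."""
    if txn.amount <= REVIEW_THRESHOLD:
        return True
    return len(txn.approvals) >= 2

txn = Transaction(amount=2_500_000, description="acquisition escrow")
assert not may_execute(txn)  # blocked until independently reviewed
approve(txn, "cfo")
approve(txn, "cfo")          # the same person approving twice doesn't count
assert not may_execute(txn)
approve(txn, "controller")
assert may_execute(txn)      # two independent sign-offs unlock execution
```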
By integrating these technological tools with human vigilance, organizations can create a more comprehensive defense against deepfakes and other digital threats.
7. Continually improve fraud detection systems. The cyberthreat landscape is rapidly expanding as criminal groups grow more sophisticated and exploit an ever-broader attack surface of vulnerabilities. Prevention methods that were effective yesterday may not be sufficient today. To keep up with this expanding threat landscape, organizations should regularly update their fraud detection protocols so deepfakes and other elaborate scams can be detected before they have the opportunity to cause harm.
Continuing education programs for employees are critical to informing the workforce about the latest fraud trends and prevention techniques. Organizations should also implement fraud attempt simulations and conduct red-teaming exercises, in which experts simulate a range of potential attacks on an organization to test its defenses and reveal weaknesses in security or employee knowledge. Cybersecurity experts can help refine and improve internal systems.
The Ferrari deepfake scam attempt highlights the evolving sophistication of cyberthreats and the growing trend of using deepfake technology to impersonate corporate leaders. The case serves as a wake-up call for organizations and executives to recognize the importance of implementing robust technical defenses and ensuring that human factors — such as vigilance, critical thinking, and awareness of cognitive biases — are also integrated into fraud prevention strategies.
Finally, this incident underscores the importance of ongoing employee education to adapt to the evolving digital landscape. By learning from this case, organizations can strengthen their overall cyber resilience and better protect their financial assets and reputations from the emerging threats posed by AI-driven scams.
References
1 “Increasing Threat of Deepfake Identities,” PDF file (Washington, D.C.: U.S. Department of Homeland Security, 2019), www.dhs.gov.
2 “Contextualizing Deepfake Threats to Organizations,” PDF file (Washington, D.C.: National Security Agency, FBI, and Cybersecurity and Infrastructure Security Agency, September 2023), https://media.defense.gov.