How much of a threat are AI deepfakes to the gambling sector?

AI-enabled fraud and the rise of identity deepfakes pose significant threats to gambling operators. However, effective strategies exist to bolster protection for both operators and players.
As technological advancements continue to reshape the gambling landscape, so too do the associated risks for both operators and consumers. The rise of AI-generated deepfakes and synthetic identities represents an escalating threat that necessitates enhanced security measures and potentially costly software solutions.
AI tools are not confined to legitimate businesses; illegal operators are also using them to generate fraudulent promotional material. Sky News recently reported the discovery of an AI-generated video in which its own presenters appeared to endorse gambling applications, further complicating the marketing landscape.
The incident involved older footage of news presenter Matt Barbet, manipulated into a misleading endorsement of a fictitious iPhone game with a purported £500,000 win. The deceptive advertisements proliferated across social media platforms, promoting illegal gambling sites hidden inside otherwise legitimate gaming applications available on the Apple App Store.
AML Risks Associated with AI
In April, the UK’s Gambling Commission issued an urgent advisory addressing the risks of AI deepfakes in relation to money laundering and terrorist financing activities. Last year, the UK’s Joint Money Laundering Intelligence Taskforce raised alarms about AI being utilized to circumvent crucial customer due diligence protocols. Notably, the UK’s National Crime Agency successfully shut down a website offering AI-generated identity documents—such as passports and driver’s licenses—for merely $15.
The Gambling Commission has emphasized the importance of training staff in identifying AI-generated documentation, acknowledging that threat actors are increasingly adept at exploiting emerging technologies. As noted by Dr. Michaela MacDonald, a senior lecturer in law and technology at Queen Mary University of London, “Synthetic identity theft combines authentic and fabricated personal information to forge entirely new identities.”
AI technologies, including voice cloning and behavioral mimicry, can successfully navigate conventional Know Your Customer (KYC) systems, thereby circumventing facial recognition and other authentication processes.
Research from the Alan Turing Institute highlighted that AI-enabled crime is being propelled by technology’s capacity to automate and scale criminal activities to unprecedented levels. The report expressed concern that UK law enforcement lacks adequate tools to combat this burgeoning threat.
Expected Regulatory Responses to Deepfake Incidents
Globally, regulators take a stringent approach to anti-money laundering (AML) violations, particularly given the substantial sums the gambling industry handles. The UK’s Gambling Commission recently penalized two operators for failing to meet AML and customer care standards. The Football Pools, for instance, was fined £375,000 ($484,417) for failing to apply automated controls when AML thresholds were crossed, relying instead on manual intervention.
Corbett Bookmakers faced an even heftier fine of £686,070 due to multiple AML shortcomings, including an inadequate understanding of various customer and geographic risks. The Commission underscored the necessity for operators to adopt a more nuanced, risk-based approach to AML compliance in light of evolving AI challenges.
To tackle the heightened risk of AI-related crimes in the gambling sector, the Gambling Commission reiterated the critical need for comprehensive staff training on AI document verification.
Future regulatory strategies may involve enhanced information-sharing protocols across secure networks, fostering industry innovation, and promoting international cooperation while reviewing existing regulatory frameworks.
Navigating Fast-Moving Technology
Industry experts, such as Annabelle Richard, legal partner at Pinsent Masons, suggest that regulators may take a lenient stance in early cases where AML breaches stem from deepfake technology. Operators whose systems are compromised may escape sanction in specific cases if there was genuine uncertainty about what measures should have been in place at the time of the incident.
However, in cases of evident system failure or insufficient response times to detected anomalies, regulators are likely to enforce stringent penalties. “If you haven’t reached out to the authority for clarification on regulatory obligations, it will be perceived as negligence,” Richard asserts.
Mitigating the AML AI Deepfake Risk in Gambling
According to the UK’s National Crime Agency, fraud is now the most common crime in the UK, with AI potentially amplifying the sophistication and speed of online scams. AI enables fraudsters to target wider demographics across geographical and linguistic barriers, complicating detection efforts with the use of deepfakes.
“The use of AI to facilitate fraud highlights the need for collaborative efforts among the private sector, law enforcement, and the public to intensify protective measures. The UK’s Online Safety Act increases the accountability of online platforms in combating fraud and terrorism,” the NCA stated.
Enacted in 2023, the Online Safety Act requires service providers to adopt measures for effective account verification and implement automated detection systems to unearth and eliminate fraud-related advertisements or posts.
The Need for Enhanced AML Processes Among Operators
As AI grows more sophisticated, gambling operators must stay current with best practices and technological innovations. AI-driven document verification can be strengthened with biometrics, including facial recognition and liveness detection checks, while device fingerprinting and geolocation technologies can significantly improve detection rates.
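To make the fingerprinting and geolocation point concrete, here is a minimal, hypothetical sketch of how those signals might feed a risk score. The attribute names, weights, and thresholds are illustrative assumptions, not any operator's actual system; production fingerprinting uses far richer signals.

```python
import hashlib

def device_fingerprint(user_agent: str, screen: str, timezone: str, fonts: list[str]) -> str:
    """Hash a set of browser/device attributes into a stable fingerprint."""
    raw = "|".join([user_agent, screen, timezone, ",".join(sorted(fonts))])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def geo_risk(declared_country: str, ip_country: str, known_fingerprints: set[str], fp: str) -> int:
    """Toy risk score: a geolocation mismatch and an unseen device both raise the score."""
    score = 0
    if declared_country != ip_country:
        score += 2  # IP geolocation disagrees with the KYC address
    if fp not in known_fingerprints:
        score += 1  # first time this device has been seen on the account
    return score

fp = device_fingerprint("Mozilla/5.0", "1920x1080", "Europe/London", ["Arial", "Helvetica"])
print(geo_risk("GB", "RU", set(), fp))  # 3 -> escalate to manual review
```

A real deployment would combine dozens of such signals and tune the escalation threshold against historical fraud cases.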
Additionally, properly deployed machine learning technologies can flag anomalies in player activities, providing an added layer of security. Dr. MacDonald highlights various emerging technologies, such as data intelligence and end-to-end orchestration, which can assist in identifying synthetic identities and modified documents.
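The anomaly-flagging idea above can be sketched in a few lines. This is a deliberately simple z-score check against a player's own deposit history, assumed here purely for illustration; real systems use trained models over many behavioral features rather than a single statistic.

```python
from statistics import mean, stdev

def flag_anomalies(deposits: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of deposits more than `threshold` standard
    deviations away from the player's own mean deposit."""
    if len(deposits) < 2:
        return []  # not enough history to establish a baseline
    mu, sigma = mean(deposits), stdev(deposits)
    if sigma == 0:
        return []  # perfectly uniform history, nothing to flag
    return [i for i, d in enumerate(deposits) if abs(d - mu) / sigma > threshold]

history = [20, 25, 30, 22, 28, 5000]  # a sudden, uncharacteristic deposit
print(flag_anomalies(history, threshold=2.0))  # [5]
```

Flagged indices would feed a review queue rather than trigger automatic blocks, keeping false positives from disrupting legitimate players.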
“Integrating these tools can centralize verification processes, analyze extensive datasets for inconsistencies, and utilize machine learning to adapt and detect evolving fraud patterns with remarkable precision,” she explains. However, implementation quality can vary greatly, and many operators use the same AI tools as fraudsters, leaving them vulnerable.
The Longstanding Battle Between Fraud and Gambling
The gambling sector has historically been a prime target for fraudulent activities, creating a perpetual arms race between operators and fraudsters striving to outsmart security systems. Industry consultant Mick d’Ancona remarks, “While it has become easier to fabricate documentation, diligent operators have robust systems in place.”
Nevertheless, d’Ancona cautions that addressing the threat of fraudulent documentation requires significant investment as fraudulent techniques grow more sophisticated. Smaller operators or those in less-regulated markets may struggle to implement adequate preventive measures, exposing them to potential breaches during due diligence on customers.
“Failing to verify documentation appropriately or relying solely on basic proof of identity can leave operators exposed,” he warns.
The Rise of Digital ID Wallets
One promising solution for mitigating identity fraud involves the establishment of official digital identification systems. Digital identity wallets harness various technologies, including cryptographic keys and biometric authentication methods to secure identities.
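The core property of such a wallet is that credentials are cryptographically bound to their issuer, so any tampering is detectable. The sketch below illustrates that property only; it substitutes an HMAC with a shared secret for the asymmetric signatures (e.g. issuer-held private keys) that real digital ID schemes use, and the key and claim names are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for illustration; real wallets sign with an
# asymmetric private key, so verifiers never hold a shared secret.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(claims: dict) -> dict:
    """Issuer binds the claims to a tag; any later edit invalidates the tag."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify_credential(cred: dict) -> bool:
    """Recompute the tag over the claims and compare in constant time."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["tag"], expected)

cred = issue_credential({"name": "A. Player", "over_18": True})
print(verify_credential(cred))       # True
cred["claims"]["over_18"] = False    # tampering breaks verification
print(verify_credential(cred))       # False
```

This is why a wallet-presented age or identity claim is harder to forge than an uploaded document image: the fraudster would need the issuer's signing key, not just a convincing deepfake.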
Countries like Singapore and Estonia have pioneered national digital IDs, facilitating access to essential services while ensuring security. The UK’s Post Office EasyID, launched in 2021, provides a government-certified digital ID for rights verification and criminal record checks.
The EU’s Digital Identity Framework Regulation, set to take effect in 2024, will compel member states to offer at least one digital identity wallet, thus strengthening identity verification processes across online platforms.
Jarek Sygitowicz, co-founder of the identity verification developer Authologic, believes the implementation of electronic ID wallets could radically transform identity verification methods. “As the EU advances eIDAS 2.0 regulations, we anticipate significant adoption rates in the coming months, and even traditionally skeptical nations like the UK are exploring digitized driving licenses this summer,” he notes.
The Need for Standardization and Consistency
The existing threat posed by AI deepfakes is substantial, yet there are numerous tools available for addressing these challenges. However, the adoption of these tools remains inconsistent, particularly among smaller platforms unaware of available resources.
“Currently, what we lack is consistency; there’s no unified framework to tackle AI-driven fraud, and that must change,” explains Peter Wood, CTO at Web3 recruiter Spectrum Search. “Regulators should advocate for industry-wide standards in identification verification designed to withstand AI’s threats, and we need improved collaboration among platforms for real-time data sharing to identify suspicious activities.”
Effective detection of synthetic identity fraud is hindered by fragmented personal information across multiple platforms. Without centralized oversight, spotting inconsistencies becomes challenging. Dr. MacDonald recommends that regulators and law enforcement bodies promote international cooperation for synthetic identity detection, regulatory compliance, and information sharing, essential for keeping pace with increasingly sophisticated AI-enabled fraud.
While the heightened risk of AI-facilitated fraud and AML violations cannot be ignored, operators’ obligations to remain informed and compliant are steadfast. Effective tools exist to combat these threats, but the industry must work collaboratively to establish consensus on best practices to adapt to this evolving landscape.