
Cyber Security

AI Rears Its Head as a Cyber Threat

Accompanying the widespread deployment of powerful new business applications is a growing realization that the technology also gives fraudsters and hackers a way to raise their nefarious game.

Friday, May 3, 2024

By David Weldon


Artificial intelligence has sensitized the business and financial worlds, and their government overseers, to risks including bias and data manipulation, privacy and intellectual property violations, hallucinations and poor model explainability. Yet AI is hardly the only disruptive technology with a downside: Quantum computing, for example, is as anticipated for its potentially revolutionary computing power as it is feared for its ability to break security codes, if and when it gets into the wrong hands.

But there are reasons for concern that AI is already being weaponized, and the threats could get worse than what has been experienced to date with deepfakes, voice spoofing, alleged election interference, phishing and ransomware exploits.

“Generative AI has been around since the 1960s . . . but now it is much more accessible, has far more capabilities and is easier to use,” said Jason Harrell, managing director of operational and technology risk, Depository Trust & Clearing Corp. (DTCC). “Any time you have a large population that can use a powerful new tool, security professionals must think about how people may use it for malicious purposes.”

“What used to take threat actors days or weeks to spin up new code for various efforts, GenAI can create and check in just minutes,” says threat intelligence specialist Neal Dennis of Cyware. “You can use this tool to create ransomware or phishing websites in a fraction of the time it used to take, and it's far simpler than in the past.”

As McKinsey & Co. put it in a March report on derisking emerging technologies in financial services, financial companies “face well-funded, highly organized and well-trained cyber criminals. These criminals are also adopting emerging technologies to aid in their attacks, including recent attacks utilizing GenAI as part of sophisticated phishing campaigns.” With cyber incidents increasing in both frequency and severity, “institutions must stay vigilant in their capabilities to defend themselves and protect their assets and finances against electronic crime.”

Weapons of Choice

Security threats posed by AI should be viewed as “significant and increasing rapidly,” explains Brett Hansen, chief growth officer at computer security company Cigent Technology. “Cyber criminals are at the early stages of utilizing this powerful technology. With experience, they will find new innovations to increase frequency, sophistication and targeting of attacks.”

If there is good news in this picture, it is for organizations with strong cyber-defense and awareness practices in place. And just as AI can fuel cyberattacks, it can also strengthen defenses.

[Figure 1: Cybersecurity control measure responses, from McKinsey Future of Cybersecurity Survey 2023]

“You may already be aware that there are bad actors using AI to try to infiltrate companies’ systems to steal money and intellectual property or simply to cause disruption and damage,” JPMorgan Chase & Co. Chairman and CEO Jamie Dimon wrote in his most recent annual shareholder letter. “For our part, we incorporate AI into our toolset to counter these threats and proactively detect and mitigate their efforts.”

At the World Economic Forum’s Davos gathering in January, JPMorgan asset and wealth management CEO Mary Callahan Erdoes said security is a major consideration in the megabank’s spending $15 billion annually on technology and employing 62,000 technologists. “The fraudsters get smarter, savvier, quicker, more devious, more mischievous,” she said. “It’s so hard and it’s going to become increasingly harder, and that’s why staying one step ahead of it is really the job of each and every one of us.”

Identifying Vulnerabilities

To be sure, cyber criminals have a way of eluding defenders and pursuers.

GenAI and large language models (LLMs) have considerable utility for attackers. According to Brian Fricke, chief information security officer of $26 billion-in-assets, Miami-based City National Bank of Florida, these applications can power deceptive techniques including deepfakes, which fraudulently represent real people; voice spoofing, which in a corporate environment might be used to transmit spurious orders from a senior executive; synthetic identities, often associated with the creation of fraudulent accounts; and the emails used in phishing scams and their short-message-service, or text, equivalent, smishing.

There are also document fraud – the creation of bogus documentation to support fraudulent invoicing and payment processing – and impersonation of individuals and organizations to lure victims via social media.

On April 29, following on the Biden administration’s Executive Order on the Safe, Secure and Trustworthy Development of AI six months prior, the National Institute of Standards and Technology issued several draft guidance documents. One, a GenAI-focused companion to the NIST AI Risk Management Framework, “centers on a list of 13 risks and more than 400 actions that developers can take to manage them”; the risks include “a lowered barrier to entry for hacking, malware, phishing and other cybersecurity attacks.”

The Department of Commerce agency also announced NIST GenAI, “a new program to evaluate and measure generative AI technologies.” The program, related to the U.S. AI Safety Institute at NIST, has objectives that include identifying “strategies to promote information integrity and guide the safe and responsible use of digital content. One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording.”

The Present State

Some observers contend that while greater AI sophistication will be within cyber attackers’ reach, the current state is satisfying their appetites. To the extent that they are using GenAI, it is mainly to facilitate existing methods, such as by refining phishing attacks and detecting software vulnerabilities, says David Ratner, CEO of HYAS.

“While various proof-of-concepts and theoretical uses of generative AI point to what it could potentially be utilized for in the near future – which would dramatically increase the threat level and associated risks – today the threat level is only modestly increased,” in Ratner’s view.

GenAI lends itself to phishing, smishing, vishing (voice phishing), BEC (business email compromise) and whaling (spear phishing that targets high-ranking company officials) – known hazards that may involve social engineering, human deception and fraud. To Ilia Kolochenko, CEO and chief architect at application security specialist ImmuniWeb, “GenAI provides little to no help with sophisticated ransomware campaigns, disruptive cyberattacks against critical national infrastructure, or industrial espionage with advanced persistent threats aiming at stealing top-secret information from governments or valuable trade secrets from large businesses.

[Photo: Ilia Kolochenko of ImmuniWeb]

“In 2024,” Kolochenko continues, “organized cybercrime groups have all the requisite resources and skills, such as state-of-the-art malware development, producing substantially superior quality of cyber warfare compared to any LLM even after fine-tuning of the LLM.”

That said, Kolochenko advises that any authentication system based on a client’s voice or visual appearance be urgently tested for deepfake and AI-generated content. Employees who might be targeted to receive deceptive emails or texts should also be trained – and their training regularly refreshed – to spot red flags and to prevent and report identity fraud.
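
As a loose illustration of that advice, the sketch below shows how a red team might replay a corpus of AI-generated voice samples against a voice-authentication system and measure its false-accept rate. The verify_voice function and the deepfake_samples directory are hypothetical stand-ins, not a real API; an institution would wire this harness to its own verification endpoint and its own deepfake test corpus.

```python
from pathlib import Path

def verify_voice(audio_path: Path) -> bool:
    """Hypothetical stand-in for the institution's voice-verification API.

    Should return True if the system accepts the sample as the enrolled
    client. Replace this placeholder with a call to the real endpoint.
    """
    return False  # placeholder: a real test would query the live system

def false_accept_rate(deepfake_dir: Path) -> float:
    """Replay AI-generated voice samples and report the share the
    authentication system wrongly accepts."""
    samples = sorted(deepfake_dir.glob("*.wav"))
    if not samples:
        raise ValueError(f"no .wav samples found in {deepfake_dir}")
    accepted = sum(verify_voice(sample) for sample in samples)
    return accepted / len(samples)

if __name__ == "__main__":
    rate = false_accept_rate(Path("deepfake_samples"))
    print(f"False-accept rate against deepfake audio: {rate:.1%}")
```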

“Scams Without Borders”

Cigent’s Hansen sees advances threatening to render many defense solutions inadequate: “Despite billions of dollars in investment, ransomware continues to plague organizations, data continues to be exfiltrated, and systems continue to be infected.”

In a global survey published last month by BioCatch, a provider of behavioral biometric authentication technology, 91% of responding financial industry “fraud fighters” were rethinking their use of voice verification for large customers because of AI voice-cloning, and more than 70% identified the use of synthetic identities while onboarding new clients last year. The results showed the paradox of financial institutions already using AI tools to defend themselves as criminals launch AI-supercharged attacks.

Tom Peacock, BioCatch’s director of global fraud intelligence, said AI can “flawlessly localiz[e] the language, slang and proper nouns used and personaliz[e] for every individual victim the scam type, images, audio and/or video involved. AI gives us scams without borders and will require financial institutions to adopt new strategies and technologies to protect their customers.”

On the flip side, where “AI fights AI,” the technology “will accelerate the adoption of not only new technologies, but also new approaches and paradigms,” Ratner of HYAS expects. “Zero-trust, data-centric protections, and AI protection capabilities will see rapid adoption to address evolving threats.”

GenAI can help cybersecurity, compliance and risk professionals intelligently automate and accelerate simple but laborious tasks and free them up to address higher or more challenging priorities, Kolochenko says.

[Figure 2: Responses from BioCatch 2024 AI, Fraud and Financial Crime Survey]

Paired with machine learning, GenAI is exceptional at analyzing large data sets to find anomalies that would otherwise be missed by a human, Ratner says.
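
As a minimal sketch of the kind of analysis Ratner describes, the example below uses scikit-learn’s IsolationForest to flag outliers in synthetic event data. The feature columns and the 1% contamination prior are illustrative assumptions, not anything specified in the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Illustrative feature matrix: rows are events; columns might represent
# transaction amount, hour of day, and login-to-transaction latency
# (all values here are synthetic).
normal = rng.normal(loc=[50.0, 13.0, 120.0], scale=[20.0, 3.0, 40.0], size=(1000, 3))
odd = rng.normal(loc=[900.0, 3.0, 2.0], scale=[100.0, 1.0, 1.0], size=(10, 3))
events = np.vstack([normal, odd])

# contamination encodes the analyst's rough prior on the anomaly share.
model = IsolationForest(contamination=0.01, random_state=7)
labels = model.fit_predict(events)  # -1 marks anomalies, 1 marks inliers

anomalies = np.flatnonzero(labels == -1)
print(f"Flagged {anomalies.size} of {events.shape[0]} events for review")
```

A review queue like this would surface the handful of out-of-pattern events for a human analyst rather than requiring manual inspection of the full data set.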

“The hackers are not yet fully making the existing guardrails obsolete,” he adds. But the protections “are slowly becoming more ineffective, and new approaches will be required in the future.”

Chief AI Officer

Considering the risks and the stakes, and the need for senior-level leadership, the role of chief AI officer or functional equivalent could put the necessary emphasis on employee training and threat awareness.

The U.S. Treasury Department has a chief AI officer, Todd Conklin, as of January; he also has the title of deputy assistant secretary of cyber. On May 1, the Commodity Futures Trading Commission announced that its chief data officer, Ted Kaouk, took on the expanded role of chief data and AI officer, related to what agency Chairman Rostin Behnam termed “efforts to deploy an enterprise data and artificial intelligence strategy to modernize staff skillsets, instill a data-driven culture, and begin to leverage the efficiencies of AI as an innovative financial markets regulator.”

[Photo: David Ratner of HYAS]

“The average employee will most likely be confronted with AI-generated phishing attacks, so this is where a top focus should be,” Ratner says. “Train employees that the bar has been raised on the quality of these attacks, and give them the right techniques to spot them.

“Recognize that GenAI has created a volatile environment undergoing rapid change, and defenders should be aware of what’s coming. Think about what could happen, while still being focused on what is happening now.”

If phishing and other familiar techniques slip through today’s countermeasures, what will be the implications of more advanced, AI-powered agents or botnets?

“Will artificial general intelligence invent new tactics and techniques?” Ratner asks. “Defenders should prepare, do their research, and be ready to adapt.”




