Cryptocurrency has been flagged as a new frontier for cyber threats in a new report drafted with inputs from Indian nodal agencies such as the Indian Computer Emergency Response Team (CERT-In) and Computer Security Incident Response Team in the Finance sector (CSIRT-Fin).
“Cryptocurrency has significantly altered the cyber threat landscape, empowering intruders in ways that previous technologies could not […] Services and platforms have emerged to facilitate the exchange, laundering, and obfuscation of cryptocurrency funds, making it easier for intruders to monetise their activities without leaving a traceable trail,” read the Digital Threat Report 2024 published by CERT-In, CSIRT-Fin, and global cybersecurity company SISA on Monday, April 7.
The report is aimed at mapping the landscape of cyber threats in 2024, particularly in relation to the Banking, Financial Services, and Insurance (BFSI) sector.
It states that while threat actors initially used Bitcoin for illicit transactions, they have since migrated to other cryptocurrencies like Monero (XMR). “Monero’s advanced encryption techniques obscure transaction details, making it exceptionally challenging for law enforcement agencies to trace funds and identify the individuals involved,” the report stated.
The report's naming of crypto as an anticipated cybersecurity threat in 2025 stands out, especially since the Indian government is reportedly re-examining its tough regulatory stance on digital assets amid global policy shifts led by US President Donald Trump’s pro-crypto initiatives.
The report also acknowledged the targeting of crypto exchanges by threat actors as a new strategy. “By attacking these exchanges, intruders aim to steal large amounts of digital currency, exploiting security vulnerabilities within these platforms,” it read.
WazirX, one of India’s major crypto exchanges, was hit by a cyber attack in July 2024 in which hackers allegedly stole nearly half of the platform’s crypto reserves, worth more than $230 million. More recently, hackers stole digital assets worth over $1.5 billion from Dubai-based crypto exchange Bybit, in what is said to be the largest crypto heist to date.
The Digital Threat Report also pointed out a new malware variant that scans “infected environments” for crypto wallets or the keys that secure them. “By extracting these keys, intruders can gain unauthorised access to victims’ crypto assets, leading to significant financial losses,” it said.
AI-generated deepfakes, LLM prompt hacking
The report identified deepfakes and AI-generated content as “potent tools for intrusion, particularly in social engineering attacks.”
“Deep fake voice and video allow cyber perpetrators to mimic the voices and appearances of executives, employees, or trusted partners. For example, an attacker might use a deep fake video during a virtual meeting to deceive a finance team into authorizing an unauthorized transfer or employ a deep fake voice to trick individuals into revealing one-time passwords (OTPs),” the report stated.
It said that the threat of LLM (large language model) prompt hacking was much more prevalent in applications that host LLMs locally than in those accessed through developer APIs from providers such as OpenAI and DeepSeek.
However, jailbreaking attempts have been successful against OpenAI’s ChatGPT in the past. For instance, in 2023, ChatGPT users discovered that they could bypass the AI chatbot’s safeguards by asking it to pretend to be a dead grandmother. This technique came to be known as the ‘grandma exploit’.
Beyond jailbreaking, the report noted that malicious LLMs such as WormGPT and FraudGPT are capable of writing convincing phishing emails, coding highly effective malware programmes, and automating the development of exploits.
“The polymorphic nature of AI-generated code means that signature-based detection methods are less effective, as each iteration can appear unique while maintaining its malicious functionality,” the report read. AI tools are also aiding hackers in diversifying the file formats used in phishing campaigns to evade email security filters, as per the report.
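To illustrate why polymorphic code defeats signature-based detection, here is a minimal Python sketch (illustrative, not from the report): two scripts that behave identically but differ in their bytes produce different hashes, so a blocklist built from one variant's hash misses the other.

```python
import hashlib

# Two functionally identical script variants; a polymorphic engine
# can generate endless such rewrites automatically.
variant_a = "total = 0\nfor n in range(10):\n    total += n\nprint(total)"
variant_b = "print(sum(range(10)))  # padded with a junk comment"

def signature(code: str) -> str:
    """A naive hash-based 'signature' over the raw bytes of a sample."""
    return hashlib.sha256(code.encode()).hexdigest()

# Both variants do the same thing, but their signatures differ,
# so a blocklist containing signature(variant_a) misses variant_b.
print(signature(variant_a) == signature(variant_b))  # False
```

Real malware signatures are more sophisticated than a whole-file hash, but the underlying fragility is the same: any detection keyed to fixed byte patterns can be evaded by mechanical rewriting.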
Recommendations
In its suggestions for policymakers, the report recommends implementing clear and comprehensive AI regulations to ensure the responsible deployment of AI and ML in the BFSI sector.
“Providing the industry with clear guidelines around critical aspects such as data privacy, ethical AI use, and algorithmic transparency will encourage responsible AI adoption, supporting growth while safeguarding the integrity of the financial sector and protecting consumer interests,” the report stated.
As part of its recommendations to build resilient cyber defenses, the report suggested that industry stakeholders invest in AI-powered anomaly detection tools. “These systems can identify subtle deviations in user behavior, pinpointing malicious activities hidden within normal operations,” it said.
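The idea behind such tools can be sketched in a few lines. The hypothetical example below flags transactions that deviate sharply from a user's usual amounts using a simple z-score; commercial systems use far richer behavioural models, and the figures here are invented for illustration.

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Flag values that deviate from the mean by more than `threshold`
    standard deviations -- a crude stand-in for the behavioural models
    in real anomaly-detection tools."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# A user's typical transfer amounts, plus one outlier.
history = [120, 95, 130, 110, 105, 98, 5000]
print(flag_anomalies(history))  # [5000]
```

Note that a single extreme value inflates the standard deviation, which is why the threshold here is looser than the textbook three sigmas; production systems typically use robust statistics or learned baselines instead.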
It also recommended that companies in the BFSI sector subject the APIs within AI-native applications to security testing in order to uncover hidden vulnerabilities. “By expanding Dynamic Application Security Testing (DAST) to cover API endpoints, organizations address gaps that traditional scanning might miss. Proactive testing against OWASP Top 10 API vulnerabilities ensures AI systems are protected at scale,” the report stated.
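As a hypothetical sketch of what extending DAST to API endpoints can involve, the snippet below pairs each endpoint with probe payloads drawn from common OWASP API risk categories. The endpoint paths and payloads are illustrative assumptions, not taken from the report; real DAST tools crawl the API schema and use much larger payload corpora.

```python
from itertools import product

# Illustrative endpoints and probe payloads keyed by OWASP-style
# API risk category (names and values are assumptions for the sketch).
ENDPOINTS = ["/api/v1/accounts/{id}", "/api/v1/transfers"]
PAYLOADS = {
    "broken_object_level_auth": ["1", "2", "../admin"],
    "injection": ["' OR '1'='1", '{"$gt": ""}'],
}

def build_probes(endpoints, payloads):
    """Cross every endpoint with every payload value, recording which
    risk category each probe exercises."""
    probes = []
    for path, (category, values) in product(endpoints, payloads.items()):
        for value in values:
            probes.append({"path": path, "category": category, "value": value})
    return probes

probes = build_probes(ENDPOINTS, PAYLOADS)
print(len(probes))  # 2 endpoints x 5 payload values = 10 probes
```

A scanner would then send each probe to the live endpoint and inspect responses for signs of the corresponding vulnerability, which is the step where dynamic testing catches gaps that static scanning misses.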