Crypto and NFTs at F1: What are firms bringing to the races beyond sponsorships?

Are crypto companies sponsoring F1 for the glitz and glamor, or will NFTs demonstrate important use cases behind these partnerships?

Can AI bots steal your crypto? The rise of digital thieves

What are AI bots?

AI bots are self-learning software that automates and continuously refines crypto cyberattacks, making them more dangerous than traditional hacking methods.

At the heart of today’s AI-driven cybercrime are AI bots — self-learning software programs designed to process vast amounts of data, make independent decisions, and execute complex tasks without human intervention. While these bots have been a game-changer in industries like finance, healthcare and customer service, they have also become a weapon for cybercriminals, particularly in the world of cryptocurrency.

Unlike traditional hacking methods, which require manual effort and technical expertise, AI bots can fully automate attacks, adapt to new cryptocurrency security measures, and even refine their tactics over time. This makes them far more effective than human hackers, who are limited by time, resources and error-prone processes.

Why are AI bots so dangerous?

The biggest threat posed by AI-driven cybercrime is scale. A single hacker attempting to breach a crypto exchange or trick users into handing over their private keys can only do so much. AI bots, however, can launch thousands of attacks simultaneously, refining their techniques as they go.

Speed: AI bots can scan millions of blockchain transactions, smart contracts and websites within minutes, identifying weaknesses in wallets (leading to crypto wallet hacks), decentralized finance (DeFi) protocols and exchanges.

Scalability: A human scammer may send phishing emails to a few hundred people. An AI bot can send personalized, perfectly crafted phishing emails to millions in the same time frame.

Adaptability: Machine learning allows these bots to improve with every failed attack, making them harder to detect and block.

This ability to automate, adapt and attack at scale has led to a surge in AI-driven crypto fraud, making crypto fraud prevention more critical than ever.

​In October 2024, the X account of Andy Ayrey, developer of the AI bot Truth Terminal, was compromised by hackers. The attackers used Ayrey’s account to promote a fraudulent memecoin named Infinite Backrooms (IB). The malicious campaign led to a rapid surge in IB’s market capitalization, reaching $25 million. Within 45 minutes, the perpetrators liquidated their holdings, securing over $600,000.

How AI-powered bots can steal cryptocurrency assets

AI-powered bots aren’t just automating crypto scams — they’re becoming smarter, more targeted and increasingly hard to spot.

Here are some of the most dangerous types of AI-driven scams currently being used to steal cryptocurrency assets:

1. AI-powered phishing bots

Phishing attacks are nothing new in crypto, but AI has turned them into a far bigger threat. Instead of sloppy emails full of mistakes, today’s AI bots create personalized messages that look exactly like real communications from platforms such as Coinbase or MetaMask. They gather personal information from leaked databases, social media and even blockchain records, making their scams extremely convincing. 

For instance, in early 2024, an AI-driven phishing attack targeted Coinbase users by sending emails about fake cryptocurrency security alerts, ultimately tricking users out of nearly $65 million.

Also, after OpenAI launched GPT-4, scammers created a fake OpenAI token airdrop site to exploit the hype. They sent emails and X posts luring users to “claim” a bogus token — the phishing page closely mirrored OpenAI’s real site​. Victims who took the bait and connected their wallets had all their crypto assets drained automatically.
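One habit that defeats many of these lookalike pages is checking a link's hostname against a short allowlist of domains you actually use before clicking or connecting a wallet. Below is a minimal Python sketch using only the standard library; the domains listed are illustrative placeholders, not an authoritative list.

```python
from urllib.parse import urlparse

# Illustrative allowlist: replace with the exact domains you actually use.
TRUSTED_DOMAINS = {"openai.com", "coinbase.com", "metamask.io"}

def is_trusted(url: str) -> bool:
    """Return True only if the hostname is a trusted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted("https://openai.com/blog"))                 # True
print(is_trusted("https://openai-token-claim.example.com"))  # False: lookalike domain
```

It is a blunt check, but it catches the typosquatted and lookalike domains that AI-generated phishing pages rely on.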

Unlike old-school phishing, these AI-enhanced scams are polished and targeted, often free of the typos or clumsy wording that used to give phishing attempts away. Some even deploy AI chatbots posing as customer support representatives for exchanges or wallets, tricking users into divulging private keys or two-factor authentication (2FA) codes under the guise of “verification.”

In 2022, some malware specifically targeted browser-based wallets like MetaMask: a strain called Mars Stealer could sniff out private keys for over 40 different wallet browser extensions and 2FA apps, draining any funds it found. Such malware often spreads via phishing links, fake software downloads or pirated crypto tools.

Once inside your system, it might monitor your clipboard (to swap in the attacker’s address when you copy-paste a wallet address), log your keystrokes, or export your seed phrase files — all without obvious signs.
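Clipboard swapping in particular can be caught with a simple check before hitting send. The sketch below is a minimal illustration that assumes the third-party pyperclip package: it re-reads the clipboard for a short window and warns if the address no longer matches the one you copied.

```python
import time
import pyperclip  # third-party: pip install pyperclip

def watch_copied_address(expected: str, seconds: int = 30) -> None:
    """Warn if the clipboard stops matching the address you intended to paste."""
    pyperclip.copy(expected)
    for _ in range(seconds):
        time.sleep(1)
        if pyperclip.paste().strip() != expected:
            print("WARNING: clipboard contents changed; possible clipper malware.")
            return
    print("Clipboard unchanged. Still verify the first and last characters before sending.")

watch_copied_address("0xAbC...1234")  # placeholder address for illustration
```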

2. AI-powered exploit-scanning bots

Smart contract vulnerabilities are a hacker’s goldmine, and AI bots are taking advantage faster than ever. These bots continuously scan platforms like Ethereum or BNB Smart Chain, hunting for flaws in newly deployed DeFi projects. As soon as they detect an issue, they exploit it automatically, often within minutes. 

Researchers have demonstrated that AI chatbots, such as those powered by GPT-3, can analyze smart contract code to identify exploitable weaknesses. For instance, Stephen Tong, co-founder of Zellic, showcased an AI chatbot detecting a vulnerability in a smart contract’s “withdraw” function, similar to the flaw exploited in the Fei Protocol attack, which resulted in an $80-million loss. 
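To make the idea concrete, here is a deliberately crude Python sketch of the kind of pattern matching an exploit scanner might start from: flagging a Solidity function that makes an external call before updating state, the classic reentrancy smell. Real scanners, and the GPT-based analysis described above, are far more sophisticated; this is purely illustrative.

```python
import re

SOLIDITY_SNIPPET = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;   // state updated AFTER the external call
}
"""

def has_reentrancy_smell(source: str) -> bool:
    """Rough heuristic: an external .call appears before the balance update."""
    call_pos = source.find(".call{")
    update = re.search(r"balances\[[^\]]+\]\s*-=", source)
    return call_pos != -1 and update is not None and call_pos < update.start()

print(has_reentrancy_smell(SOLIDITY_SNIPPET))  # True: worth a closer look
```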

3. AI-enhanced brute-force attacks

Brute-force attacks used to take forever, but AI bots have made them dangerously efficient. By analyzing previous password breaches, these bots quickly identify patterns to crack passwords and seed phrases in record time. A 2024 study on desktop cryptocurrency wallets, including Sparrow, Etherwall and Bither, found that weak passwords drastically lower resistance to brute-force attacks, emphasizing that strong, complex passwords are crucial to safeguarding digital assets.
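The arithmetic behind that finding is easy to check. The sketch below estimates the worst-case time for an exhaustive search at an assumed guess rate; the rate is a placeholder, since real attack speeds depend on the wallet's key-derivation settings and the attacker's hardware.

```python
def brute_force_seconds(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case seconds to exhaust every password of the given length and character set."""
    return charset_size ** length / guesses_per_second

RATE = 1e10  # assumed 10 billion guesses per second, purely illustrative

weak = brute_force_seconds(26, 8, RATE)     # 8 lowercase letters
strong = brute_force_seconds(94, 12, RATE)  # 12 characters from the full printable set

print(f"8 lowercase letters: ~{weak:.0f} seconds")
print(f"12 mixed characters: ~{strong / (3600 * 24 * 365):,.0f} years")
```

Every extra character and every extra character class multiplies the search space, which is why length and variety matter more than clever substitutions.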

4. Deepfake impersonation bots

Imagine watching a video of a trusted crypto influencer or CEO asking you to invest — but it’s entirely fake. That’s the reality of deepfake scams powered by AI. These bots create ultra-realistic videos and voice recordings, tricking even savvy crypto holders into transferring funds. 

5. Social media botnets

On platforms like X and Telegram, swarms of AI bots push crypto scams at scale. Botnets such as “Fox8” used ChatGPT to generate hundreds of persuasive posts hyping scam tokens and replying to users in real time.

In one case, scammers abused the names of Elon Musk and ChatGPT to promote a fake crypto giveaway — complete with a deepfaked video of Musk — duping people into sending funds to scammers. 

In 2023, Sophos researchers found crypto romance scammers using ChatGPT to chat with multiple victims at once, making their affectionate messages more convincing and scalable.​

Similarly, Meta reported a sharp uptick in malware and phishing links disguised as ChatGPT or AI tools, often tied to crypto fraud schemes. And in the realm of romance scams, AI is boosting so-called pig butchering operations — long-con scams where fraudsters cultivate relationships and then lure victims into fake crypto investments. A striking case occurred in Hong Kong in 2024: Police busted a criminal ring that defrauded men across Asia of $46 million via an AI-assisted romance scam​.

Automated trading bot scams and exploits

AI is being invoked in the arena of cryptocurrency trading bots — often as a buzzword to con investors and occasionally as a tool for technical exploits.

A notable example is YieldTrust.ai, which in 2023 marketed an AI bot supposedly yielding 2.2% returns per day — an astronomical, implausible profit. Regulators from several states investigated and found no evidence the “AI bot” even existed; it appeared to be a classic Ponzi scheme, using AI as a tech buzzword to draw in victims. YieldTrust.ai was ultimately shut down by authorities, but not before investors were duped by the slick marketing.

Even when an automated trading bot is real, it’s often not the money-printing machine scammers claim. For instance, blockchain analysis firm Arkham Intelligence highlighted a case where a so-called arbitrage trading bot (likely touted as AI-driven) executed an incredibly complex series of trades, including a $200-million flash loan — and ended up netting a measly $3.24 in profit​.
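The Arkham example is a reminder to run the fee arithmetic before being impressed by headline trade sizes. A rough sketch with placeholder rates follows; the price gap, flash-loan fee and execution cost below are assumptions for illustration, not the actual figures from that trade.

```python
def arbitrage_net_profit(loan_usd: float, gross_spread_pct: float,
                         flash_fee_pct: float, gas_cost_usd: float) -> float:
    """Net profit of a flash-loan arbitrage after the loan fee and gas."""
    gross = loan_usd * gross_spread_pct
    fee = loan_usd * flash_fee_pct
    return gross - fee - gas_cost_usd

# A $200M loan, a 0.06% price gap, an assumed 0.05% flash-loan fee and $19,500 of execution costs
print(arbitrage_net_profit(200_000_000, 0.0006, 0.0005, 19_500))  # -> 500.0
```

Almost all of the gross edge is consumed by fees, which is how a nine-figure loan can net pocket change.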

In fact, many “AI trading” scams will take your deposit and, at best, run it through some random trades (or not trade at all), then make excuses when you try to withdraw. Some shady operators also use social media AI bots to fabricate a track record (e.g., fake testimonials or X bots that constantly post “winning trades”) to create an illusion of success. It’s all part of the ruse.

On the more technical side, criminals do use automated bots (not necessarily AI, but sometimes labeled as such) to exploit the crypto markets and infrastructure. Front-running bots in DeFi, for example, automatically insert themselves into pending transactions to steal a bit of value (a sandwich attack), and flash loan bots execute lightning-fast trades to exploit price discrepancies or vulnerable smart contracts. These require coding skills and aren’t typically marketed to victims; instead, they’re direct theft tools used by hackers. 

AI could enhance these by optimizing strategies faster than a human. However, as mentioned, even highly sophisticated bots don’t guarantee big gains — the markets are competitive and unpredictable, something even the fanciest AI can’t reliably foresee​.

Meanwhile, the risk to victims is real: If a trading algorithm malfunctions or is maliciously coded, it can wipe out your funds in seconds. There have been cases of rogue bots on exchanges triggering flash crashes or draining liquidity pools, causing users to incur huge slippage losses.
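Slippage, at least, is something users can bound themselves. Most decentralized exchanges accept a minimum-output parameter, and the arithmetic is trivial; the 0.5% tolerance in this sketch is an example, not a recommendation.

```python
def min_amount_out(quoted_out: float, slippage_tolerance: float = 0.005) -> float:
    """Lowest output you will accept; the swap should revert below this."""
    return quoted_out * (1 - slippage_tolerance)

quote = 1_000.0                       # tokens the DEX quotes for your input right now
limit = min_amount_out(quote, 0.005)  # 0.5% tolerance -> 995.0
print(f"Reject any fill below {limit} tokens")
```

A tight limit will not stop a sandwich bot from trying, but it caps what the attack, or a sudden liquidity drain, can cost you.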

How AI-powered malware fuels cybercrime against crypto users

AI is teaching cybercriminals how to hack crypto platforms, enabling a wave of less-skilled attackers to launch credible attacks. This helps explain why crypto phishing and malware campaigns have scaled up so dramatically — AI tools let bad actors automate their scams and continuously refine them based on what works​.

AI is also supercharging malware threats and hacking tactics aimed at crypto users. One concern is AI-generated malware, malicious programs that use AI to adapt and evade detection. 

In 2023, researchers demonstrated a proof-of-concept called BlackMamba, a polymorphic keylogger that uses an AI language model (like the tech behind ChatGPT) to rewrite its code with every execution. This means each time BlackMamba runs, it produces a new variant of itself in memory, helping it slip past antivirus and endpoint security tools​.

​In tests, this AI-crafted malware went undetected by an industry-leading endpoint detection and response system​. Once active, it could stealthily capture everything the user types — including crypto exchange passwords or wallet seed phrases — and send that data to attackers​.

While BlackMamba was just a lab demo, it highlights a real threat: Criminals can harness AI to create shape-shifting malware that targets cryptocurrency accounts and is much harder to catch than traditional viruses​.

Even without exotic AI malware, threat actors abuse the popularity of AI to spread classic trojans. Scammers commonly set up fake “ChatGPT” or AI-related apps that contain malware, knowing users might drop their guard due to the AI branding. For instance, security analysts observed fraudulent websites impersonating the ChatGPT site with a “Download for Windows” button; if clicked, it silently installs a crypto-stealing Trojan on the victim’s machine​.

Beyond the malware itself, AI is lowering the skill barrier for would-be hackers. Previously, a criminal needed some coding know-how to craft phishing pages or viruses. Now, underground “AI-as-a-service” tools do much of the work. 

Illicit AI chatbots like WormGPT and FraudGPT have appeared on dark web forums, offering to generate phishing emails, malware code and hacking tips on demand​. For a fee, even non-technical criminals can use these AI bots to churn out convincing scam sites, create new malware variants, and scan for software vulnerabilities​.

How to protect your crypto from AI-driven attacks

AI-driven threats are becoming more advanced, making strong security measures essential to protect digital assets from automated scams and hacks.

Below are the most effective ways to protect your crypto from hackers and defend against AI-powered phishing, deepfake scams and exploit bots:

Use a hardware wallet: AI-driven malware and phishing attacks primarily target online (hot) wallets. By using hardware wallets — like Ledger or Trezor — you keep private keys completely offline, making them virtually impossible for hackers or malicious AI bots to access remotely. For instance, during the 2022 FTX collapse, those using hardware wallets avoided the massive losses suffered by users with funds stored on exchanges.

Enable multifactor authentication (MFA) and strong passwords: AI bots can crack weak passwords using deep learning in cybercrime, leveraging machine learning algorithms trained on leaked data breaches to predict and exploit vulnerable credentials. To counter this, always enable MFA via authenticator apps like Google Authenticator or Authy rather than SMS-based codes — hackers have been known to exploit SIM swap vulnerabilities, making SMS verification less secure. (A short sketch of how these app-generated codes work follows this list.)

Beware of AI-powered phishing scams: AI-generated phishing emails, messages and fake support requests have become nearly indistinguishable from real ones. Avoid clicking on links in emails or direct messages, always verify website URLs manually, and never share private keys or seed phrases, regardless of how convincing the request may seem.

Verify identities carefully to avoid deepfake scams: AI-powered deepfake videos and voice recordings can convincingly impersonate crypto influencers, executives or even people you personally know. If someone is asking for funds or promoting an urgent investment opportunity via video or audio, verify their identity through multiple channels before taking action.

Stay informed about the latest blockchain security threats: Regularly following trusted blockchain security sources such as CertiK, Chainalysis or SlowMist will keep you informed about the latest AI-powered threats and the tools available to protect yourself.
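As promised in the MFA item above, here is a minimal look at why authenticator apps beat SMS codes: the app derives each code from a shared secret and the current time, so there is nothing for a SIM swapper to intercept. The sketch assumes the third-party pyotp package.

```python
import pyotp  # third-party: pip install pyotp

secret = pyotp.random_base32()  # shared once with the service when you enable MFA
totp = pyotp.TOTP(secret)       # time-based one-time password generator

code = totp.now()
print("Current 6-digit code:", code)
print("Valid right now?", totp.verify(code))  # True within the ~30-second window
```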

The future of AI in cybercrime and crypto security

As AI-driven crypto threats evolve rapidly, proactive and AI-powered security solutions become crucial to protecting your digital assets.

Looking ahead, AI’s role in cybercrime is likely to escalate, becoming increasingly sophisticated and harder to detect. Advanced AI systems will automate complex cyberattacks like deepfake-based impersonations, exploit smart-contract vulnerabilities instantly upon detection, and execute precision-targeted phishing scams. 

To counter these evolving threats, blockchain security will increasingly rely on real-time AI threat detection. Platforms like CertiK already leverage advanced machine learning models to scan millions of blockchain transactions daily, spotting anomalies instantly. 
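The article does not describe any vendor's models, but the general shape of transaction anomaly detection is easy to sketch. The example below trains an Isolation Forest on made-up transaction features and flags an extreme outlier; it is illustrative only and makes no claim about how CertiK or anyone else actually does this.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # third-party: scikit-learn

# Toy features per transaction: [value in ETH, gas price in gwei, recipient account age in days]
rng = np.random.default_rng(0)
normal_txs = rng.normal(loc=[1.0, 30.0, 400.0], scale=[0.5, 10.0, 100.0], size=(500, 3))
suspicious_tx = np.array([[950.0, 400.0, 0.5]])  # huge value, extreme gas, brand-new recipient

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_txs)
print(model.predict(suspicious_tx))  # [-1] means flagged as an anomaly
```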

As cyber threats grow smarter, these proactive AI systems will become essential in preventing major breaches, reducing financial losses, and combating AI and financial fraud to maintain trust in crypto markets.

Ultimately, the future of crypto security will depend heavily on industry-wide cooperation and shared AI-driven defense systems. Exchanges, blockchain platforms, cybersecurity providers and regulators must collaborate closely, using AI to predict threats before they materialize. While AI-powered cyberattacks will continue to evolve, the crypto community’s best defense is staying informed, proactive and adaptive — turning artificial intelligence from a threat into its strongest ally.

Web3 has a metadata problem, and it’s not going away

Opinion by: Casey Ford, PhD, researcher at Nym Technologies

Web3 rolled in on the wave of decentralization. Decentralized applications (DApps) grew by 74% in 2024 and individual wallets by 485%, with total value locked (TVL) in decentralized finance (DeFi) closing at a near-record high of $214 billion. The industry is also, however, heading straight for a state of capture if it does not wake up. 

As Elon Musk has teased putting the US Treasury on a blockchain, however poorly thought out, the tides are turning as crypto is deregulated. But when they do, is Web3 ready to “protect [user] data,” as Musk surrogates pledge? If not, we’re all on the brink of a global data security crisis.

The crisis boils down to a vulnerability at the heart of the digital world: the metadata surveillance of all existing networks, even the decentralized ones of Web3. AI technologies are now at the foundation of surveillance systems and serve as accelerants. Anonymity networks offer a way out of this state of capture. But this must begin with metadata protections across the board.

Metadata is the new frontier of surveillance

Metadata is the overlooked raw material of AI surveillance. Compared to payload data, metadata is lightweight and thus easy to process en masse. This is where AI systems excel. Aggregated metadata can reveal much more than encrypted contents: patterns of behavior, networks of contacts, personal desires and, ultimately, predictability. And legally, it is unprotected in the way end-to-end (E2E) encrypted communications now are in some regions.

While metadata is a part of all digital assets, the metadata that leaks from E2E encrypted traffic exposes us and what we do: IPs, timing signatures, packet sizes, encryption formats and even wallet specifications. All of this is fully legible to adversaries surveilling a network. Blockchain transactions are no exception.

From piles of digital junk can emerge a goldmine of detailed records of everything we do. Metadata is our digital unconscious, and it is up for grabs for whatever machines can harvest it for profit.

The limits of blockchain

Protecting the metadata of transactions was an afterthought of blockchain technology. Crypto does not offer anonymity despite the reactionary association of the industry with illicit trade. It offers pseudonymity, the ability to hold tokens in a wallet with a chosen name. 

Recent: How to tokenize real-world assets on Bitcoin

Harry Halpin and Ania Piotrowska have diagnosed the situation:

“[T]he public nature of Bitcoin’s ledger of transactions […] means anyone can observe the flow of coins. [P]seudonymous addresses do not provide any meaningful level of anonymity, since anyone can harvest the counterparty addresses of any given transaction and reconstruct the chain of transactions.”

As all chain transactions are public, anyone running a full node can have a panoptic view of chain activity. Further, metadata like IP addresses attached to pseudonymous wallets can be used to identify people’s locations and identities if tracking technologies are sophisticated enough. 
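Halpin and Piotrowska's point about reconstructing transaction chains can be shown with a toy example. The sketch below builds a directed graph from made-up pseudonymous transfers, assuming the third-party networkx package; on a public ledger, anyone can do the same over real data at full scale.

```python
import networkx as nx  # third-party: pip install networkx

# Made-up (sender, receiver, amount) records; on a real chain, all of this is public
transfers = [
    ("wallet_A", "wallet_B", 1.2),
    ("wallet_B", "exchange_1", 1.1),
    ("wallet_A", "wallet_C", 0.4),
    ("wallet_C", "exchange_1", 0.4),
]

graph = nx.DiGraph()
for sender, receiver, amount in transfers:
    graph.add_edge(sender, receiver, amount=amount)

# Every path from wallet_A to the exchange is trivially recoverable
print(list(nx.all_simple_paths(graph, "wallet_A", "exchange_1")))
```

Once one address in the cluster is tied to an identity, say through an exchange's KYC records or a leaked IP, the whole graph is.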

This is the core problem of metadata surveillance in blockchain economics: Any capable party running surveillance systems can effectively de-anonymize our financial traffic.

Knowledge is also an insecurity

Knowledge is not just power, as the adage goes. It’s also the basis on which we are exploited and disempowered. There are at least three general metadata risks across Web3.

Fraud: Financial insecurity and surveillance are intrinsically linked. The most serious hacks, thefts or scams depend on accumulated knowledge about a target: their assets, transaction histories and who they are. DappRadar estimates a $1.3-billion loss due to “hacks and exploits” like phishing attacks in 2024 alone. 

Leaks: The wallets that permit access to decentralized tokenomics rely on leaky centralized infrastructures. Studies of DApps and wallets have shown the prevalence of IP leaks: “The existing wallet infrastructure is not in favor of users’ privacy. Websites abuse wallets to fingerprint users online, and DApps and wallets leak the user’s wallet address to third parties.” Pseudonymity is pointless if people’s identities and patterns of transactions can be easily revealed through metadata.

Chain consensus: Chain consensus is a potential point of attack. One example is a recent initiative by Celestia to add an anonymity layer to obscure the metadata of validators against particular attacks seeking to disrupt chain consensus in Celestia’s Data Availability Sampling (DAS) process.

Securing Web3 through anonymity

As Web3 continues to grow, so does the amount of metadata about people’s activities being offered up to newly empowered surveillance systems. 

Beyond VPNs

Virtual private network (VPN) technology is decades old at this point. The lack of advancement is shocking, with most VPNs remaining in the same centralized and proprietary infrastructures. Networks like Tor and Dandelion stepped in as decentralized solutions. Yet they are still vulnerable to surveillance by global adversaries capable of “timing analysis” via the control of entry and exit nodes. Even more advanced tools are needed.

Noise networks

All surveillance looks for patterns in a network full of noise. By further obscuring patterns of communication and de-linking metadata like IPs from metadata generated by traffic, the possible attack vectors can be significantly reduced, and metadata patterns can be scrambled into nonsense.

Anonymity networks have emerged to shield sensitive traffic like communications or crypto transactions via noise: cover traffic, timing obfuscations and data mixing. In the same spirit, VPN providers like Mullvad have introduced programs like DAITA (Defense Against AI-guided Traffic Analysis), which seeks to add “distortion” to its VPN network.
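To make cover traffic and timing obfuscation concrete, here is a minimal sketch of the two simplest ingredients: padding every message to a fixed size and adding a random send delay. Real mixnets do far more (multi-hop routing, dummy messages), so treat this purely as an illustration of the idea.

```python
import os
import random
import time

PADDED_SIZE = 1024  # every message leaves the host at the same size

def pad(message: bytes) -> bytes:
    """Pad to a fixed length so packet size reveals nothing about the content."""
    if len(message) > PADDED_SIZE - 2:
        raise ValueError("message too large for a single padded cell")
    length_prefix = len(message).to_bytes(2, "big")
    return length_prefix + message + os.urandom(PADDED_SIZE - 2 - len(message))

def send_with_jitter(payload: bytes, max_delay: float = 0.5) -> None:
    """A random delay decorrelates send time from user action (illustration only)."""
    time.sleep(random.uniform(0, max_delay))
    print(f"sent {len(payload)} bytes")  # stand-in for the actual network send

send_with_jitter(pad(b"0.1 BTC to wallet_B"))  # always reports 1024 bytes
```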

Scrambling the codes

Whether it’s defending people against the assassinations in tomorrow’s drone wars or securing their onchain transactions, new anonymity networks are needed to scramble the codes of what makes all of us targetable: the metadata our online lives leave in their wake.

The state of capture is already here. Machine learning is feeding off our data. Instead of leaving people’s data there unprotected, Web3 and anonymity systems can make sure that what ends up in the teeth of AI is effectively garbage.

Opinion by: Casey Ford, PhD, researcher at Nym Technologies.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Bitcoin whale bets $368M with 40x leverage on BTC decline ahead of FOMC

A Bitcoin whale is wagering hundreds of millions on Bitcoin’s short-term decline, ahead of a week filled with key economic reports that may significantly impact Bitcoin’s price trajectory and risk appetite among investors.

A large crypto investor, or whale, has opened a 40x leveraged short position for over 4,442 Bitcoin (BTC) worth over $368 million, a de facto bet that Bitcoin’s price will fall.

Leveraged positions use borrowed money to increase the size of an investment, which can boost the size of both gains and losses, making leveraged trading riskier compared to regular investment positions.

The Bitcoin whale opened the $368 million position at $84,043 and faces liquidation if Bitcoin’s price surpasses $85,592.

Source: Hypurrscan

The investor has generated over $2 million in unrealized profit; however, the position has incurred more than $200,000 in funding fees, Hypurrscan data shows.
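The position's numbers can be sanity-checked with simple arithmetic using the figures reported above (entry price of $84,043, roughly 4,442 BTC, liquidation at $85,592). The current price used below is an assumption chosen to roughly match the reported profit, and funding fees and maintenance-margin details are ignored.

```python
def short_pnl(entry_price: float, current_price: float, size_btc: float) -> float:
    """Unrealized PnL of a short position: profit when the price falls below entry."""
    return (entry_price - current_price) * size_btc

ENTRY = 84_043.0        # reported entry price
SIZE_BTC = 4_442.0      # reported position size
LIQUIDATION = 85_592.0  # reported liquidation price

# An assumed current price of $83,550 gives roughly the $2M profit reported above
print(f"Unrealized PnL at $83,550: ${short_pnl(ENTRY, 83_550, SIZE_BTC):,.0f}")
print(f"PnL if price reaches liquidation: ${short_pnl(ENTRY, LIQUIDATION, SIZE_BTC):,.0f}")
```

At 40x leverage, the margin behind the position is only a small fraction of its notional value, which is why a move of less than 2% against the trade is enough to force liquidation.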

Despite the heightened risk of leveraged trading, some crypto investors are making significant profits with this strategy. Earlier in March, a savvy trader gained $68 million on a 50x leveraged short position, banking on Ether’s (ETH) 11% price decline.

The leveraged bet comes ahead of a week of numerous significant macroeconomic releases, including the upcoming Federal Open Market Committee (FOMC) meeting on March 19, which may impact investor appetite for risk assets such as Bitcoin.

Related: Bitcoin’s next catalyst: End of $36T US debt ceiling suspension

Bitcoin needs weekly close above $81k to avoid pre-FOMC downside: analysts

Bitcoin price continues to risk significant downside volatility due to growing macroeconomic uncertainty around global trade tariffs.

To avoid downside volatility ahead of the FOMC meeting, Bitcoin will need a weekly close above $81,000, according to Ryan Lee, chief analyst at Bitget Research.

The analyst told Cointelegraph:

“The key level to watch for the weekly close is $81,000 range, holding above that would signal resilience, but if we see a drop below $76,000, it could invite more short-term selling pressure.”

Related: Bitcoin experiencing ‘shakeout,’ not end of 4-year cycle: Analysts

The analyst’s comments come days ahead of the next FOMC meeting scheduled for March 19. Markets are currently pricing in a 98% chance that the Fed will keep interest rates steady, according to the latest estimates of the CME Group’s FedWatch tool.

Source: CME Group’s FedWatch tool

“The market largely expects the Fed to hold rates steady, but any unexpected hawkish signals could put pressure on Bitcoin and other risk assets,” added the analyst.

Magazine: SCB tips $500K BTC, SEC delays Ether ETF options, and more: Hodler’s Digest, Feb. 23 – Mar. 1
