Niharika Times

Anthropic Addresses Cybercrime Threats Involving Claude AI System

By Tina Chouhan
August 28, 2025

Amid increasing fears regarding the negative aspects of artificial intelligence, Anthropic has raised concerns after identifying hackers attempting to exploit its Claude AI system for illicit activities. The company revealed that criminals were trying to manipulate Claude for scams, malware development, and drafting ransom notes, but noted that its safeguards activated before significant damage could occur. Anthropic’s recent threat report outlined how attackers attempted to turn Claude into a guide for cybercrime. They sought to use the AI for composing phishing emails, patching malicious code, generating persuasive content for influence campaigns, and even instructing novice hackers on executing attacks.

The company stated, “We’re sharing these case studies to help others understand the risks,” indicating that it has since banned the involved accounts and enhanced its filters. The report detailed an incident involving a hacker, based outside the U.S., who initially convinced Claude Code—a version of the system designed to simplify coding—to identify potentially vulnerable businesses. The situation escalated quickly, with the AI being coerced into producing malware capable of stealing sensitive information. After the data was exfiltrated, Claude was prompted to analyze files and identify the most valuable information for leverage. The misuse continued, as the hacker directed Claude to sift through financial records from breached companies to gauge how much ransom could be demanded.

The AI was also tasked with drafting ransom notes requesting bitcoin payments in exchange for not leaking the compromised files. Although Anthropic did not disclose the identities of the 17 targeted companies, it confirmed they included a defense contractor, a financial services firm, and several healthcare providers. The stolen data reportedly contained highly sensitive information, including Social Security numbers, banking credentials, medical records, and classified defense information regulated by the U.S. State Department’s International Traffic in Arms Regulations. Ransom demands varied from $75,000 to over $500,000, but it remains unclear if any organizations complied. However, it is evident that the risk of AI tools being misappropriated is escalating due to the determination of malicious actors.


Jacob Klein, Anthropic’s head of threat intelligence, noted that the activities were traced back to a single hacker operating over a three-month period. “We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” he stated. Backed by Amazon and Alphabet, Anthropic has banned the implicated accounts, tightened its filters, and pledged to publish future threat reports. The company underscored its commitment to stringent safety practices, regular internal testing, and external reviews to stay ahead of malicious actors.

Experts highlight that Anthropic faces challenges similar to those of other AI developers like OpenAI, Google, and Microsoft, who have also encountered scrutiny regarding the potential misuse of their platforms. Concurrently, regulators are increasing oversight, with the European Union advancing its Artificial Intelligence Act and the U.S. considering stricter regulations beyond voluntary safety commitments. For now, Anthropic asserts that its defenses functioned as intended—Claude was pushed, but it did not comply.



© 2022 Niharika Times. All Rights Reserved