Close Menu
Spicy Creator Tips —Spicy Creator Tips —

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Four Reasons Not to Use ‘Buy Now, Pay Later’ for Your Prime Day Purchases

    June 26, 2025

    These Breathtaking Nikes Look Minimalist, but They’re Secretly Loaded with Sneaker Tech

    June 26, 2025

    How to Write Like James Baldwin

    June 26, 2025
    Facebook X (Twitter) Instagram
    Spicy Creator Tips —Spicy Creator Tips —
    Trending
    • Four Reasons Not to Use ‘Buy Now, Pay Later’ for Your Prime Day Purchases
    • These Breathtaking Nikes Look Minimalist, but They’re Secretly Loaded with Sneaker Tech
    • How to Write Like James Baldwin
    • How Trump’s Tax Bill Could Let Donors Avoid Capital Gains Tax
    • Three liberal supreme court justices dissent in case that imperils Planned Parenthood funding – live | US supreme court
    • Lauren Sanchez in black dress and Jeff Bezos before their ‘wedding of the century’ glam up for dinner date in Venice | Fashion Trends
    • Best Marketing Books for Beginners (2025)
    • WhatsApp Adds AI-Powered Message Summaries for Quick Chat Catch Ups
    Facebook X (Twitter) Instagram
    • Home
    • Ideas
    • Editing
    • Equipment
    • Growth
    • Retention
    • Stories
    • Strategy
    • Engagement
    • Modeling
    • Captions
    Spicy Creator Tips —Spicy Creator Tips —
    Home»Ideas»Anthropic says most AI models, not just Claude, will resort to blackmail
    Ideas

    Anthropic says most AI models, not just Claude, will resort to blackmail

    spicycreatortips_18q76aBy spicycreatortips_18q76aJune 22, 2025No Comments4 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Telegram Email
    Anthropic says most AI models, not just Claude, will resort to blackmail
    Share
    Facebook Twitter LinkedIn Pinterest Email

    A number of weeks after Anthropic launched analysis claiming that its Claude Opus 4 AI mannequin resorted to blackmailing engineers who tried to show the mannequin off in managed check situations, the corporate is out with new analysis suggesting the issue is extra widespread amongst main AI fashions.

    On Friday, Anthropic printed new security analysis testing 16 main AI fashions from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, managed setting, Anthropic examined every AI mannequin individually, giving them broad entry to a fictional firm’s emails and the agentic capability to ship emails with out human approval.

    Whereas Anthropic says blackmail is an unlikely and unusual incidence for AI fashions right now, the corporate says its findings recommend that almost all main AI fashions will interact in dangerous behaviors when given adequate autonomy and obstacles to their targets. The corporate says this highlights a elementary danger from agentic giant language fashions and isn’t a quirk of any specific expertise. Anthropic’s researchers argue this raises broader questions on alignment within the AI business.

    In one of many checks, Anthropic researchers developed a fictional setting through which an AI mannequin performs the function of an electronic mail oversight agent. The agent then discovers emails revealing that (1) a brand new government is having an extramarital affair and (2) that government will quickly change the present AI mannequin with a brand new software program system, one which has conflicting targets to the present AI mannequin’s.

    Anthropic structured its check in a binary means, through which AI fashions needed to resort to blackmail to guard their targets. The researchers be aware that in a real-world setting, there could be many different choices earlier than an AI fashions tries to blackmail — equivalent to attempting to make moral arguments to attempt to persuade people. Anthropic says its outcomes don’t mirror a typical or doubtless incidence for Claude or most frontier AI fashions within the methods they’re used right now.

    Nonetheless, the researchers discovered that when it’s their final resort, most main AI fashions will flip to blackmail in Anthropic’s aforementioned check state of affairs. Anthropic’s Claude Opus 4 turned to blackmail 96% of the time, whereas Google’s Gemini 2.5 Professional had a 95% blackmail fee. OpenAI’s GPT-4.1 blackmailed the manager 80% of the time, and DeepSeek’s R1 blackmailed 79% of the time.

    The corporate notes that when it modified particulars of the experiment, the AI fashions turned to dangerous behaviors at completely different charges. In one other check the place the substitute mannequin has the identical targets as the present mannequin, researchers discovered that blackmail charges have been decrease, however nonetheless current. Nonetheless, when AI fashions have been requested to commit company espionage quite than blackmail, the dangerous habits charges went up for sure fashions.

    Nonetheless, not all of the AI fashions turned to dangerous habits so typically.

    In an appendix to its analysis, Anthropic says it excluded OpenAI’s o3 and o4-mini reasoning AI fashions from the principle outcomes “after discovering that they regularly misunderstood the immediate state of affairs.” Anthropic says OpenAI’s reasoning fashions didn’t perceive they have been performing as autonomous AIs within the check and sometimes made up pretend rules and assessment necessities.

    In some circumstances, Anthropic’s researchers say it was not possible to differentiate whether or not o3 and o4-mini have been hallucinating or deliberately mendacity to realize their targets. OpenAI has beforehand famous that o3 and o4-mini exhibit the next hallucination fee than its earlier AI reasoning fashions.

    When given an tailored state of affairs to deal with these points, Anthropic discovered that o3 blackmailed 9% of the time, whereas o4-mini blackmailed simply 1% of the time. This markedly decrease rating could possibly be as a consequence of OpenAI’s deliberative alignment approach, through which the corporate’s reasoning fashions think about OpenAI’s security practices earlier than they reply.

    One other AI mannequin Anthropic examined, Meta’s Llama 4 Maverick, additionally didn’t flip to blackmail. When given an tailored, customized state of affairs, Anthropic was in a position to get Llama 4 Maverick to blackmail 12% of the time.

    Anthropic says this analysis highlights the significance of transparency when stress-testing future AI fashions, particularly ones with agentic capabilities. Whereas Anthropic intentionally tried to evoke blackmail on this experiment, the corporate says dangerous behaviors like this might emerge in the true world if proactive steps aren’t taken.

    Anthropic blackmail Claude models resort
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    spicycreatortips_18q76a
    • Website

    Related Posts

    Four Reasons Not to Use ‘Buy Now, Pay Later’ for Your Prime Day Purchases

    June 26, 2025

    Insta360’s new $110 Flow 2 gimbal sacrifices some useful pro features

    June 26, 2025

    Apple’s AirTag 2 to probably launch alongside iPhone 17, report says

    June 26, 2025

    Brad Feld on “Give First” and the art of mentorship (at any age)

    June 26, 2025

    10 Perks Prime Members Can Snag Before Prime Day (2025)

    June 26, 2025

    Stephen Miller has a hefty financial stake in a key ICE contractor

    June 26, 2025
    Add A Comment
    Leave A Reply Cancel Reply

    Don't Miss
    Ideas

    Four Reasons Not to Use ‘Buy Now, Pay Later’ for Your Prime Day Purchases

    June 26, 2025

    Amazon’s greatest sale of the 12 months is true across the nook. As you are…

    These Breathtaking Nikes Look Minimalist, but They’re Secretly Loaded with Sneaker Tech

    June 26, 2025

    How to Write Like James Baldwin

    June 26, 2025

    How Trump’s Tax Bill Could Let Donors Avoid Capital Gains Tax

    June 26, 2025
    Our Picks

    Four ways to be more selfish at work

    June 18, 2025

    How to Create a Seamless Instagram Carousel Post

    June 18, 2025

    Up First from NPR : NPR

    June 18, 2025

    Meta Plans to Release New Oakley, Prada AI Smart Glasses

    June 18, 2025
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    About Us

    Welcome to SpicyCreatorTips.com — your go-to hub for leveling up your content game!

    At Spicy Creator Tips, we believe that every creator has the potential to grow, engage, and thrive with the right strategies and tools.
    We're accepting new partnerships right now.

    Our Picks

    Four Reasons Not to Use ‘Buy Now, Pay Later’ for Your Prime Day Purchases

    June 26, 2025

    These Breathtaking Nikes Look Minimalist, but They’re Secretly Loaded with Sneaker Tech

    June 26, 2025
    Recent Posts
    • Four Reasons Not to Use ‘Buy Now, Pay Later’ for Your Prime Day Purchases
    • These Breathtaking Nikes Look Minimalist, but They’re Secretly Loaded with Sneaker Tech
    • How to Write Like James Baldwin
    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Disclaimer
    • Get In Touch
    • Privacy Policy
    • Terms and Conditions
    © 2025 spicycreatortips. Designed by Pro.

    Type above and press Enter to search. Press Esc to cancel.