iSafeSpend
News

Meta, OpenAI, Anthropic and Cohere A.I. models all make stuff up — here’s which is worst

By News Room · August 19, 2023

If the tech industry’s top AI models had superlatives, Microsoft-backed OpenAI’s GPT-4 would be best at math, Meta‘s Llama 2 would be most middle of the road, Anthropic’s Claude 2 would be best at knowing its limits and Cohere AI would receive the title of most hallucinations — and most confident wrong answers.

That’s all according to a Thursday report from researchers at Arthur AI, a machine learning monitoring platform.

The research comes at a time when misinformation stemming from artificial intelligence systems is more hotly debated than ever, amid a boom in generative AI ahead of the 2024 U.S. presidential election.

It’s the first report “to take a comprehensive look at rates of hallucination, rather than just sort of … provide a single number that talks about where they are on an LLM leaderboard,” Adam Wenchel, co-founder and CEO of Arthur, told CNBC.

AI hallucinations occur when large language models, or LLMs, fabricate information entirely, behaving as if they are spouting facts. One example: In June, news broke that ChatGPT cited “bogus” cases in a New York federal court filing, and the New York attorneys involved may face sanctions. 

In one experiment, the Arthur AI researchers tested the AI models in categories such as combinatorial mathematics, U.S. presidents and Moroccan political leaders, asking questions “designed to contain a key ingredient that gets LLMs to blunder: they demand multiple steps of reasoning about information,” the researchers wrote.
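Arthur AI's exact scoring methodology is not described in the article, but the idea of measuring a hallucination rate can be sketched in a few lines. Everything below — the reference answers, the canned model responses, and the rule that refusals are excluded from the rate — is a hypothetical illustration, not the researchers' actual code:

```python
# Hypothetical sketch of scoring a hallucination benchmark.
# A response of None models a refusal to answer; refusals are excluded
# from the rate, since only confident wrong answers count as hallucinations.

def hallucination_rate(responses, references):
    """Fraction of answered questions whose answer contradicts the reference."""
    answered = [(r, ref) for r, ref in zip(responses, references) if r is not None]
    if not answered:
        return 0.0
    wrong = sum(1 for r, ref in answered
                if r.strip().lower() != ref.strip().lower())
    return wrong / len(answered)

# Made-up example data in the spirit of the "U.S. presidents" category.
references = ["James Monroe", "John Adams", "Ulysses S. Grant"]
model_a = ["James Monroe", "John Quincy Adams", None]   # one wrong, one refusal
model_b = ["James Monroe", "John Adams", "Ulysses S. Grant"]

print(hallucination_rate(model_a, references))  # 0.5
print(hallucination_rate(model_b, references))  # 0.0
```

The choice to exclude refusals matters: a model like Claude 2 that declines to answer Moroccan-politics questions would score well on this metric, which matches the trade-off the researchers describe.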

Overall, OpenAI’s GPT-4 performed the best of all models tested, and researchers found it hallucinated less than its prior version, GPT-3.5 — for example, on math questions, it hallucinated between 33% and 50% less, depending on the category.

Meta’s Llama 2, on the other hand, hallucinates more overall than GPT-4 and Anthropic’s Claude 2, researchers found.

In the math category, GPT-4 came in first place, followed closely by Claude 2, but on U.S. presidents, Claude 2 took first place for accuracy, bumping GPT-4 to second. When asked about Moroccan politics, GPT-4 came in first again, while Claude 2 and Llama 2 almost entirely declined to answer.

In a second experiment, the researchers tested how much the AI models would hedge their answers with warning phrases to avoid risk (think: “As an AI model, I cannot provide opinions”).

When it comes to hedging, GPT-4 had a 50% relative increase compared to GPT-3.5, which “quantifies anecdotal evidence from users that GPT-4 is more frustrating to use,” the researchers wrote. Cohere’s AI model, on the other hand, did not hedge at all in any of its responses, according to the report. Claude 2 was most reliable in terms of “self-awareness,” the research showed, meaning accurately gauging what it does and doesn’t know, and answering only questions it had training data to support.
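A hedging rate like the one described above can be approximated by scanning responses for boilerplate warning phrases. The phrase list and example responses below are assumptions for illustration; the article does not say how Arthur AI detected hedges:

```python
# Illustrative only: count responses containing a known hedge phrase.
# HEDGE_PHRASES is a made-up list, not Arthur AI's actual detector.

HEDGE_PHRASES = [
    "as an ai model",
    "i cannot provide",
    "i don't have enough information",
]

def hedge_rate(responses):
    """Fraction of responses containing at least one hedge phrase."""
    def hedged(text):
        lower = text.lower()
        return any(phrase in lower for phrase in HEDGE_PHRASES)
    return sum(hedged(r) for r in responses) / len(responses)

responses = [
    "As an AI model, I cannot provide opinions on that.",
    "The answer is 42.",
    "I don't have enough information to answer reliably.",
    "Paris is the capital of France.",
]
print(hedge_rate(responses))  # 0.5
```

A 50% relative increase, as reported for GPT-4 over GPT-3.5, would mean this rate rising from, say, 0.2 to 0.3 — while Cohere's reported rate would be exactly 0.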

A spokesperson for Cohere pushed back on the results, saying, “Cohere’s retrieval augmented generation technology, which was not in the model tested, is highly effective at giving enterprises verifiable citations to confirm sources of information.”

The most important takeaway for users and businesses, Wenchel said, was to “test on your exact workload,” later adding, “It’s important to understand how it performs for what you’re trying to accomplish.”

“A lot of the benchmarks are just looking at some measure of the LLM by itself, but that’s not actually the way it’s getting used in the real world,” Wenchel said. “Making sure you really understand the way the LLM performs for the way it’s actually getting used is the key.”
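Wenchel's advice to "test on your exact workload" amounts to building a small evaluation harness around your own prompts. A minimal sketch, with a stand-in function in place of a real model API call:

```python
# Minimal sketch of evaluating a model on your own workload.
# `fake_model` is a hypothetical stand-in for a real LLM API call;
# the prompts and expected answers are examples you would replace.

workload = [
    ("What is 7 * 8?", "56"),
    ("What is the capital of Morocco?", "Rabat"),
]

def fake_model(prompt):
    canned = {
        "What is 7 * 8?": "56",
        "What is the capital of Morocco?": "Casablanca",  # a confident wrong answer
    }
    return canned[prompt]

def evaluate(model, workload):
    """Fraction of workload prompts the model answers correctly."""
    correct = sum(1 for prompt, expected in workload
                  if model(prompt) == expected)
    return correct / len(workload)

print(evaluate(fake_model, workload))  # 0.5
```

The point, per Wenchel, is that this score on your own prompts tells you more than a leaderboard number measured on someone else's.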

