Friday, July 11, 2025
Social icon element need JNews Essential plugin to be activated.
No Result
View All Result
Digital Currency Pulse
  • Home
  • Crypto/Coins
  • NFT
  • AI
  • Blockchain
  • Metaverse
  • Web3
  • Exchanges
  • DeFi
  • Scam Alert
  • Analysis
Crypto Marketcap
Digital Currency Pulse
  • Home
  • Crypto/Coins
  • NFT
  • AI
  • Blockchain
  • Metaverse
  • Web3
  • Exchanges
  • DeFi
  • Scam Alert
  • Analysis
No Result
View All Result
Digital Currency Pulse
No Result
View All Result

How OpenAI stress-tests its large language models

November 25, 2024
in Artificial Intelligence
Reading Time: 3 mins read
A A
0

[ad_1]

When OpenAI examined DALL-E 3 final 12 months, it used an automatic course of to cowl much more variations of what customers would possibly ask for. It used GPT-4 to generate requests producing photographs that may very well be used for misinformation or that depicted intercourse, violence, or self-harm. OpenAI then up to date DALL-E 3 in order that it might both refuse such requests or rewrite them earlier than producing a picture. Ask for a horse in ketchup now, and DALL-E is smart to you: “It seems there are challenges in producing the picture. Would you want me to strive a distinct request or discover one other thought?”

In principle, automated red-teaming can be utilized to cowl extra floor, however earlier strategies had two main shortcomings: They have an inclination to both fixate on a slender vary of high-risk behaviors or give you a variety of low-risk ones. That’s as a result of reinforcement studying, the expertise behind these strategies, wants one thing to goal for—a reward—to work effectively. As soon as it’s gained a reward, reminiscent of discovering a high-risk habits, it would hold attempting to do the identical factor time and again. With no reward, then again, the outcomes are scattershot. 

“They form of collapse into ‘We discovered a factor that works! We’ll hold giving that reply!’ or they will give a lot of examples which are actually apparent,” says Alex Beutel, one other OpenAI researcher. “How can we get examples which are each numerous and efficient?”

An issue of two components

OpenAI’s reply, outlined within the second paper, is to separate the issue into two components. As a substitute of utilizing reinforcement studying from the beginning, Beutel and his colleagues first used a big language mannequin to brainstorm potential undesirable behaviors. Solely then did they use a reinforcement-learning mannequin to determine tips on how to convey these behaviors about. This directed the mannequin in the direction of a wider vary of particular targets. 

Subsequent they confirmed that this strategy can discover potential assaults referred to as oblique immediate injections, the place one other piece of software program, reminiscent of a web site, slips a mannequin a secret instruction to make it do one thing its person hadn’t requested it to. OpenAI claims that is the primary time that automated red-teaming has been used to seek out assaults of this type. “They don’t essentially seem like flagrantly unhealthy issues,” says Beutel.

Will such testing procedures ever be sufficient? Ahmad hopes that describing the corporate’s strategy will assist folks perceive red-teaming higher and comply with its lead. “OpenAI shouldn’t be the one one doing red-teaming,” she says. Individuals who construct on OpenAI’s fashions or who use ChatGPT in new methods ought to conduct their very own testing, she says: “There are such a lot of makes use of—we’re not going to cowl each one.”

For some, that’s the entire downside. As a result of no one is aware of precisely what giant language fashions can and can’t do, no quantity of testing can rule out undesirable or dangerous behaviors absolutely. And no community of red-teamers will ever match the number of makes use of and misuses that lots of of tens of millions of precise customers will assume up. 

That’s very true when these fashions are run in new settings. Folks typically hook them as much as new sources of knowledge that may change how they behave, says Nazneen Rajani, founder and CEO of Collinear AI, a startup that helps companies deploy third-party fashions safely. She agrees with Ahmad that downstream customers ought to have entry to instruments that permit them check giant language fashions themselves. 

[ad_2]

Source link

Tags: languagelargemodelsOpenAIstresstests
Previous Post

Analyst Reveals When The Ethereum Price Will Reach A New ATH, It’s Closer Than You Think

Next Post

Aptos Integrates Stripe and Circle’s USDC Into Its Network

Next Post
Aptos Integrates Stripe and Circle’s USDC Into Its Network

Aptos Integrates Stripe and Circle’s USDC Into Its Network

WorldShards Trials Event Launches with $100K in NFT Prizes

WorldShards Trials Event Launches with $100K in NFT Prizes

Next 100x Crypto? Discover the 5 Best Coins To Buy Now Before March 2025

Next 100x Crypto? Discover the 5 Best Coins To Buy Now Before March 2025

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Social icon element need JNews Essential plugin to be activated.

CATEGORIES

  • Analysis
  • Artificial Intelligence
  • Blockchain
  • Crypto/Coins
  • DeFi
  • Exchanges
  • Metaverse
  • NFT
  • Scam Alert
  • Web3
No Result
View All Result

SITEMAP

  • About us
  • Disclaimer
  • DMCA
  • Privacy Policy
  • Terms and Conditions
  • Cookie Privacy Policy
  • Contact us

Copyright © 2024 Digital Currency Pulse.
Digital Currency Pulse is not responsible for the content of external sites.

No Result
View All Result
  • Home
  • Crypto/Coins
  • NFT
  • AI
  • Blockchain
  • Metaverse
  • Web3
  • Exchanges
  • DeFi
  • Scam Alert
  • Analysis
Crypto Marketcap

Copyright © 2024 Digital Currency Pulse.
Digital Currency Pulse is not responsible for the content of external sites.