Some in the Trump administration think regulations are crippling the AI industry. But one of the biggest names in that industry disagrees.
Speaking at WIRED's Big Interview event on Thursday, Anthropic cofounder and president Daniela Amodei told WIRED editor at large Steven Levy that even though Trump's AI and crypto czar, David Sacks, has tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she believes her commitment to highlighting the dangers of AI makes the industry stronger.
“We were very vocal from day one that we felt there was this incredible potential” for AI, Amodei said. “We really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI, and in order to do that, we have to get the tough things right. We have to make the risks manageable. And that’s why we talk about it so much.”
More than 300,000 companies, startups, and developers use Anthropic's Claude models. Through her company's interactions with these customers, Amodei said, she has learned that they want AI that can not only do amazing things but is also reliable and secure.
“No one says, ‘We want a less safe product,’” Amodei said. She compared Anthropic's publication of its safety findings to automakers' crash tests: it may be jarring to watch a crash test dummy fly through a car's windshield in a video clip, but learning that automakers updated their safety features after that test can reassure buyers. The same applies to companies that use Anthropic's AI products, Amodei added, creating a self-regulating market.
“We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she said. Companies “are now building many workflows and day-to-day tooling tasks around AI, and they’re like, ‘Well, we know that this product doesn’t hallucinate as much, it doesn’t produce harmful content, and it doesn’t do all of these bad things.’ Why would you go with a competitor that is going to score lower on that?”
Photograph: Annie Noelker

