Thursday, June 20th 2024

OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a concern that only a few frontier AI labs prioritize. In recent history, OpenAI's safety team drew the spotlight for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What is to come out of SSI? We still don't know. However, given the founding team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume the company has attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap into a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.
Source: SSI

38 Comments on OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

#1
Vayra86
Great, now finally go see a barber, okay?

I think Ilya smells money and wants to cash in before the bubble says poof

OpenAI created the perfect fearmongering for his business. Gosh, no, this totally doesn't look like a crypto ICO

"What is to come out of SSI? We still don't know."
:roll: :roll: :roll:
#2
Assimilator
More "AI" grift, nothing to see here, move along.
#3
R0H1T
So he made the "wolf" first & now he's selling us the sheepdog :shadedshu:
#4
JWNoctis
Come on. Arguably, the threat is real. If ASI is possible, then it would almost certainly come at cross-purposes with human interest. And since a lesser intelligence cannot reasonably predict the actions of a greater intelligence, there is no actual way to ensure otherwise, except by not creating such an intelligence in the first place. That is not what's happening, with everyone racing along.

That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.

The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
#5
Prime2515102
First AI, now ASI? Next will be AMI (artificial megaintelligence), then AWTFI. This is getting stupid already... I'm glad I'm old enough to be dead soon.
#6
Caring1
Typo in the title: Co funder, instead of Co-founder
#7
Endymio
Vayra86: I think Ilya smells money and wants to cash in before the bubble says poof
Yeah, and that whole 'Internet' thing is a fad, too. You're really onto something, I think.
#8
64K
How do you sell a new company to people with the current buzzword while acknowledging a lot of people's concerns about the possible risks of AI? How about:

Safe Superintelligence Inc.

That should do it. Maybe the people that associate AI with the Terminator movies or losing control of everything to AI won't find it so scary, because under their guiding hand it's safe. It's right there in the name, people. Also, it's a corporation, so it must be a serious business and not just another scam. ;)
#9
Chomiq
He should invest in some hair plugs.

Also, free advice on naming the next AI company:
Not A Skynet Inc.
#10
Caring1
64K: How do you sell a new company to people with the current buzzword while acknowledging a lot of people's concerns about the possible risks of AI? How about:

Safe Superintelligence Inc.

That should do it. Maybe the people that associate AI with the Terminator movies or losing control of everything to AI won't find it so scary, because under their guiding hand it's safe. It's right there in the name, people. Also, it's a corporation, so it must be a serious business and not just another scam. ;)
He forgot to add, "Trust me" at the end of its name. :laugh:
#11
Assimilator
JWNoctis: Come on. Arguably, the threat is real. If ASI is possible, then it would almost certainly come at cross-purposes with human interest. And since a lesser intelligence cannot reasonably predict the actions of a greater intelligence, there is no actual way to ensure otherwise, except by not creating such an intelligence in the first place. That is not what's happening, with everyone racing along.

That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.

The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
There is no risk because LLMs are incapable of creating true AI. Any time you see an article claiming that "AI researchers" are worried about that happening, you can guarantee one of three things:
  • those "researchers" don't exist (go go gadget "anonymous sources" AKA "I can make up whatever I want to get clicks" AKA what passes for journalism in this century)
  • those "researchers" are idiots and/or being interviewed by idiots
  • those "researchers" are lying through their teeth to keep the grift bubble, and the stock price of the grift "AI" company they work for, inflated
"Safe" Superintelligence Inc intends to become the ultimate grifter: you pay them a hefty subscription fee, in return you get to display their stamp of approval on your company's website/portfolio/whatever. Notice how SSI does no work, because as explained above there is no work for them to do, because the concept of "safety" is irrelevant for LLMs. Oh I'm sure they'll "audit" you as part of that subscription, but the "auditor" will perform no meaningful actions because, again, there is literally nothing to do.

This, BTW, is why commercial companies cannot be allowed to become the gatekeepers of anything.
#12
ty_ger
Step one, define 'safe'.
Lots of words, but nothing concrete. I have no idea what their goal is.
Are they trying to design something that isn't a weapon? Isn't manipulative? Is family-friendly? Won't take our jobs? What is 'safe'? What is their goal?
#13
Endymio
Assimilator: There is no risk because LLMs are incapable of creating true AI.
Yes, and protein molecules are incapable of creating intelligence as well -- right?
#14
JWNoctis
Assimilator: There is no risk because LLMs are incapable of creating true AI. Any time you see an article claiming that "AI researchers" are worried about that happening, you can guarantee one of three things:
  • those "researchers" don't exist (go go gadget "anonymous sources" AKA "I can make up whatever I want to get clicks" AKA what passes for journalism in this century)
  • those "researchers" are idiots and/or being interviewed by idiots
  • those "researchers" are lying through their teeth to keep the grift bubble, and the stock price of the grift "AI" company they work for, inflated
"Safe" Superintelligence Inc intends to become the ultimate grifter: you pay them a hefty subscription fee, in return you get to display their stamp of approval on your company's website/portfolio/whatever. Notice how SSI does no work, because as explained above there is no work for them to do, because the concept of "safety" is irrelevant for LLMs. Oh I'm sure they'll "audit" you as part of that subscription, but the "auditor" will perform no meaningful actions because, again, there is literally nothing to do.

This, BTW, is why commercial companies cannot be allowed to become the gatekeepers of anything.
We don't actually know that. It most certainly won't, but some similar architecture operating with a greater variety of inputs and a much greater amount of resources might, and quite probably not all of the most interesting frontier advancements in the field are being published anymore. You don't toy with "might" when human survival is at stake. Even commercial models are not yet showing clear signs of capabilities plateauing with ever-greater amounts of compute, and some of those "grifters" are talking about expanding the grid and generation capacity to feed new datacentres. Bubbles only burst when they have nothing to show for the effort.

As for grifters, I'm sure there were plenty of those circa 180 years ago, arguing that railways would cause the end of humanity through agricultural collapse, with all that smog blanketing the sun and that infernal racket scaring livestock to death, or at least making them unproductive, aaaand that they'd accept donations for their cause too. I just hope it's that simple this time.

Hell, a burst bubble and another AI winter poisoning the field for another couple of generations might well be the only thing that would keep humanity safe from that unfortunate fate now, IF ASI is at all possible but turns out to be beyond the reach of reasonable compute before the music stops.
#15
Assimilator
JWNoctis: Even commercial models are not yet showing clear signs of capabilities plateauing with ever-greater amounts of compute
Incorrect. An LLM is an LLM is an LLM; there are no "capabilities" that can "grow". The only reason the "new versions" appear superior to older ones is more compute.
JWNoctis: and some of those "grifters" are talking about expanding the grid and generation capacity to feed new datacentres
Ah yes, like how Altman wants OpenAI to invest in a company working on nuclear fusion... a company that he's also invested in. It really is just grift all the way down.
JWNoctis: Bubbles only burst when they have nothing to show for the effort.
Incorrect, bubbles only burst once enough people call out the bullshit for what it is. That hasn't happened yet because every company in the world is run by idiot psychopaths who know nothing about "AI" other than that it's the next big thing to put on their resume, so they have zero incentive to examine whether it's actually providing value to the business they "run", or in fact even working at all. This self-perpetuating circlejerk will continue until one of these CEOs of a large and well-known company bets the farm on some stupid "AI" project that fails miserably because "AI" is rubbish, causing that company to implode. At that stage journalists will finally start asking "is this AI stuff actually any good?" and they are going to find no shortage of people in the trenches to tell them that it absolutely is not. Once those "AI is actually garbage" headlines start to drop, the bubble has popped.
#16
Endymio
Assimilator: Incorrect. An LLM is an LLM is an LLM; there are no "capabilities" that can "grow". The only reason the "new versions" appear superior to older ones is more compute.
This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.
Assimilator: Ah yes, like how Altman wants OpenAI to invest in a company working on nuclear fusion... a company that he's also invested in. It really is just grift all the way down.
Except that AI-based tools are *already* generating trillions of dollars of benefits in industries ranging from medicine to aerospace to materials science.
#17
InVasMani
Create an effective anti-hack AI, then we'll talk...
#18
remixedcat
not safe if it's coming from ClosedAI
#19
Vayra86
JWNoctis: Come on. Arguably, the threat is real. If ASI is possible, then it would almost certainly come at cross-purposes with human interest. And since a lesser intelligence cannot reasonably predict the actions of a greater intelligence, there is no actual way to ensure otherwise, except by not creating such an intelligence in the first place. That is not what's happening, with everyone racing along.

That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.

The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
The fact this is done by a commercial entity is all we need to know here. This is just good business. Fuck ethics. Remember Google's "do no evil"? Or do you also still believe crypto is really there to decentralize and democratize the world of finance? It's all more of the same, because it all originates from the same thing: a market. There is only one bottom line: money.
Endymio: Except that AI-based tools are *already* generating trillions of dollars of benefits in industries ranging from medicine to aerospace to materials science.
Exactly. All those tools are tailor-made for highly specific jobs, so pray tell, where is this existential threat now? There is just a new security aspect, a new attack vector. Nothing else. AI is nothing other than a more complex algorithm.
Assimilator: Incorrect, bubbles only burst once enough people call out the bullshit for what it is. That hasn't happened yet because every company in the world is run by idiot psychopaths who know nothing about "AI" other than that it's the next big thing to put on their resume, so they have zero incentive to examine whether it's actually providing value to the business they "run", or in fact even working at all. This self-perpetuating circlejerk will continue until one of these CEOs of a large and well-known company bets the farm on some stupid "AI" project that fails miserably because "AI" is rubbish, causing that company to implode. At that stage journalists will finally start asking "is this AI stuff actually any good?" and they are going to find no shortage of people in the trenches to tell them that it absolutely is not. Once those "AI is actually garbage" headlines start to drop, the bubble has popped.
Hmm something something autonomous cars hmmm hyperloop hmmm metaverse hmmm

This is exactly it: tech companies' eternal search for new revenue and markets. The purpose comes after that. Demand, in commerce, is created.
Endymio: Yeah, and that whole 'Internet' thing is a fad, too. You're really onto something, I think.
Who ever said that? And who ever said AI is 'the next thing after the Internet'? That alone is ridiculous. The AI feeds on the internet. Like a parasite, a disease. Cue Agent Smith.
#20
Assimilator
Endymio: This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.
No, they don't. More/better training may make the LLM better at joining the correct dots, but it still doesn't understand HOW or WHY those dots are connected. As I've said before, correlation without causation is not and never will be an intelligence.
Endymio: Except that AI-based tools are *already* generating trillions of dollars of benefits in industries ranging from medicine to aerospace to materials science.
"Trillions" of dollars? Really? I can make random shit up on the internet too.
#21
JWNoctis
Vayra86: The fact this is done by a commercial entity is all we need to know here. This is just good business. Fuck ethics. Remember Google's "do no evil"? Or do you also still believe crypto is really there to decentralize and democratize the world of finance? It's all more of the same, because it all originates from the same thing: a market. There is only one bottom line: money.
Commercial interests profiteering off the AI craze has precisely zero relevance to the actual threat strong AI can pose. Them going either direction, either minimizing it or blowing it beyond all proportion, is going to do diddly-squat if the threat materializes.

One could argue that the battle of aligning any possible AGI/ASI is already lost, and doomed from the start; commercial interest is already so grossly misaligned with general human interest that they might as well be reptile aliens. Heck, human interest is itself self-contradictory to the point that... Well, just look at what's going on these days.
Assimilator: Incorrect. An LLM is an LLM is an LLM; there are no "capabilities" that can "grow". The only reason the "new versions" appear superior to older ones is more compute.
Again, some other future architecture might exhibit dangerous, unforeseen capabilities. The current craze is creating a dangerous concentration of resources; I'd actually be thankful if they just took the money and ran. They would do less damage that way.
Assimilator: Ah yes, like how Altman wants OpenAI to invest in a company working on nuclear fusion... a company that he's also invested in. It really is just grift all the way down.
Quite possibly dooming the world at the same time is kind of more salient than any big number one person might amass.
Assimilator: Incorrect, bubbles only burst once enough people call out the bullshit for what it is. That hasn't happened yet because every company in the world is run by idiot psychopaths who know nothing about "AI" other than that it's the next big thing to put on their resume, so they have zero incentive to examine whether it's actually providing value to the business they "run", or in fact even working at all. This self-perpetuating circlejerk will continue until one of these CEOs of a large and well-known company bets the farm on some stupid "AI" project that fails miserably because "AI" is rubbish, causing that company to implode. At that stage journalists will finally start asking "is this AI stuff actually any good?" and they are going to find no shortage of people in the trenches to tell them that it absolutely is not. Once those "AI is actually garbage" headlines start to drop, the bubble has popped.
That's...more or less exactly what I thought I meant by "nothing to show", actually. :oops:

Ironically that may well end up saving the world, or at least denying it this specific fate.
#22
Vayra86
JWNoctis: Commercial interests profiteering off the AI craze has precisely zero relevance to the actual threat strong AI can pose. Them going either direction, either minimizing it or blowing it beyond all proportion, is going to do diddly-squat if the threat materializes.

One could argue that the battle of aligning any possible AGI/ASI is already lost, and doomed from the start; commercial interest is already so grossly misaligned with general human interest that they might as well be reptile aliens. Heck, human interest is itself self-contradictory to the point that... Well, just look at what's going on these days.

Again, some other future architecture might exhibit dangerous, unforeseen capabilities. The current craze is creating a dangerous concentration of resources; I'd actually be thankful if they just took the money and ran. They would do less damage that way.

Quite possibly dooming the world at the same time is kind of more salient than any big number one person might amass.

That's...more or less exactly what I thought I meant by "nothing to show", actually. :oops:

Ironically that may well end up saving the world, or at least denying it this specific fate.
I don't believe in this AGI bullshit. Humans want control.
#23
Assimilator
Vayra86: I don't believe in this AGI bullshit. Humans want control.
Humans also like to delude themselves that they have control and/or can impose control. Except an AGI would, by its very definition, be so much smarter than humans as to be uncontrollable. So what'll happen is that we'll fuck around and find out, which is pretty much standard for our species.
#24
slyphnier
Assimilator: No, they don't. More/better training may make the LLM better at joining the correct dots, but it still doesn't understand HOW or WHY those dots are connected. As I've said before, correlation without causation is not and never will be an intelligence.
that "intelligence" is more like consciousness ?
as basic intelligence dont need "why/reason" isnt it ?
especially if the goal for current AI is for "assistant", not intelligence being that work by itself

anyway we are just beginning with LLM as current "AI", i dont expect we stop and keep using LLM
and i also dont think LLM itself stay same like now, well we can compare it with the rest of things that human been created in past
like sakanai.ai company : sakana.ai/llm-squared/
#25
Vayra86
Assimilator: Humans also like to delude themselves that they have control and/or can impose control. Except an AGI would, by its very definition, be so much smarter than humans as to be uncontrollable. So what'll happen is that we'll fuck around and find out, which is pretty much standard for our species.
Sure, theoretically. I just don't believe we can make one. AI works if it's used as an algorithm: strict guard rails and a preset goal. AI therefore IS an algorithm, just a more complex one, but still with a minimal amount of unexpected behaviour. After all, we don't like unexpected behaviour. The army can't use that behaviour, for example, and neither can the medical world. Or traffic. What we can use is prediction. More accurate prediction, too. I don't see how a really good predictor is going to ruin us; if anything, it will make things more transparent as a bullshit filter.

An AI with unexpected behaviour is, these days, 'hallucinating'. All of this is a perfect example of how humans will always want to assume control. We already do it, and by doing it, we've already domesticated the technology.

We're skirting the edge of a paradox here, and AI in its current form IS a paradox, because it reproduces whatever it is trained on. There is no sentience, no new content, just a really elaborate way of mixing existing content. Trinkets and baubles, if you will. Now sure, there are use cases for a more advanced algorithm / approach to that. But that's all it is. AI does nothing we couldn't already do, because it's trained on the things we already did.

Somehow people are thinking that if we feed the computer enough information, it will develop something magical, but that's a fairy tale. If you scale things up, you scale things up and you have a bigger model, which still isn't going to do shit unless you tailor-make it to. Computers don't make mistakes. Humans do.
Endymio: This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.
Until we turn it off. There are lots of similarities, but that doesn't make something the same thing, or even 'similar'. They just share properties.