Thursday, June 20th 2024
OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.
OpenAI's co-founder and former chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a single mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
Interestingly, safety is a concern that only a few frontier AI labs treat as central. Recently, OpenAI's safety team drew attention for reportedly being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What will come out of SSI? We still don't know. However, with a founding team of Ilya Sutskever, Daniel Gross, and Daniel Levy, the company will likely attract best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to pursue safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.
Source: SSI
38 Comments on OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.
I think Ilya smells money and wants to cash in before the bubble goes poof
OpenAI created the perfect fearmongering for his business. Gosh, no, this totally doesn't look like a crypto ICO
"What is to come out of SSI? We still don't know."
:roll: :roll: :roll:
That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.
The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
Safe Superintelligence Inc.
That should do it. Maybe the people who associate AI with the Terminator movies, or with losing control of everything to AI, won't find it so scary, because under their guiding hand it's safe. It's right there in the name, people. Also, it's a corporation, so it must be a serious business and not just another scam. ;)
Also, free advice on naming the next AI company:
Not A Skynet Inc.
- those "researchers" don't exist (go go gadget "anonymous sources" AKA "I can make up whatever I want to get clicks" AKA what passes for journalism in this century)
- those "researchers" are idiots and/or being interviewed by idiots
- those "researchers" are lying through their teeth to keep the grift bubble, and the stock price of the grift "AI" company they work for, inflated
"Safe" Superintelligence Inc intends to become the ultimate grifter: you pay them a hefty subscription fee, in return you get to display their stamp of approval on your company's website/portfolio/whatever. Notice how SSI does no work, because as explained above there is no work for them to do, because the concept of "safety" is irrelevant for LLMs. Oh I'm sure they'll "audit" you as part of that subscription, but the "auditor" will perform no meaningful actions because, again, there is literally nothing to do.This, BTW, is why commercial companies cannot be allowed to become the gatekeepers of anything.
Lots of words, but nothing concrete. I have no idea what their goal is.
Are they trying to design something that isn't a weapon? Isn't manipulative? Is family-friendly? Which won't take our jobs? What is 'safe'? What is their goal?
As to grifters, I'm sure there were plenty of those circa 180 years ago, arguing how railways would cause the end of humanity through agricultural collapse, with all that smog blanketing the sun and that infernal racket scaring livestock to death or at least making it unproductive, aaaand that they'd accept donations for their cause too. I just hope it's that simple this time.
Hell, a burst bubble and another AI winter poisoning the field for another couple of generations might well be the only thing that keeps humanity safe from that unfortunate fate now, IF ASI is at all possible but turns out to be beyond the reach of reasonable compute before the music stops.
This is exactly it: tech companies' eternal search for new revenue and markets. The purpose comes after that. Demand, in commerce, is created. Whoever said that? And whoever said AI is 'the next thing after the Internet'? That alone is ridiculous. The AI feeds on the internet. Like a parasite, a disease. Cue Agent Smith.
One could argue that the battle of aligning any possible AGI/ASI is already lost, and doomed from the start; commercial interest is already so grossly misaligned from general human interest that they might as well be reptile aliens. Heck, human interest is itself self-contradictory to the point that... Well, just look at what's going on these days. Again, some other future architecture might exhibit dangerous, unforeseen capability. The current craze is creating such a dangerous concentration of resources that I'd actually be thankful if they just took the money and ran. They would do less damage that way. Quite possibly dooming the world at the same time is rather more salient than any big number one person might amass. That's... more or less exactly what I thought I meant by "nothing to show", actually. :oops:
Ironically that may well end up saving the world, or at least denying it this specific fate.
Basic intelligence doesn't need a "why"/reason, does it?
Especially if the goal for current AI is to be an "assistant", not an intelligence that works by itself.
Anyway, we are just at the beginning with LLMs as the current "AI"; I don't expect we'll stop here and keep using LLMs forever.
I also don't think LLMs themselves will stay the same as they are now; we can compare them with the rest of the things humans have created in the past,
like the Sakana AI company: sakana.ai/llm-squared/
An AI with unexpected behaviour is, these days, 'hallucinating'. All of this is a perfect example of how humans will always want to assume control. We already do it, and by doing it, we've already domesticated the technology.
We're skirting the edge of a paradox here, and AI in its current form IS a paradox, because it reproduces whatever it is trained on. There is no sentience, no new content, just a really elaborate way of mixing existing content. Trinkets and baubles, if you will. Now sure, there are use cases for a more advanced algorithm / approach to that. But that's all it is. AI does nothing we couldn't already do, because it's trained on the things we already did.
Somehow people are thinking that if we feed the computer enough information, it will develop something magical, but that's a fairy tale. If you scale things up, you scale things up and you have a bigger model; that still isn't going to do anything unless you tailor-make it to do it. Computers don't make mistakes. Humans do. Until we turn it off. There are lots of similarities, but that doesn't make something the same thing, or even 'similar'. They just share properties.