Thursday, June 20th 2024

OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a concern that only a few frontier AI labs take seriously. In recent history, OpenAI's safety team made headlines for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What is to come out of SSI? We still don't know. However, given the team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume the company has attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.
Source: SSI

38 Comments on OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

#26
Endymio
Vayra86: Hmm something something autonomous cars hmmm hyperloop hmmm metaverse hmmm

This is exactly it, tech companies' eternal search for new revenue and markets. The purpose comes after that. Demand, in commerce, is created.
Worldwide, automobiles have killed more than 10 million people in the last several decades. Autonomous vehicles have the capability to reduce these deaths by two orders of magnitude or more. The idea that there's some sort of corporate conspiracy to artificially "create" demand for this is patent nonsense.
Vayra86: Whoever said AI is 'the next thing after the Internet'? That alone is ridiculous. The AI feeds on the internet. Like a parasite, a disease. Cue Agent Smith.
AI isn't "the next thing" after the Internet -- it is far more revolutionary and transformative. As for the "Agent Smith" reference, you realize that The Matrix was a Hollywood fantasy, right?
Vayra86: Until we turn it off. There are lots of similarities, but that doesn't make something the same thing, or even 'similar'. They just share properties.
When two things share properties, with "lots of similarities", we call them "similar". That's what the word means.
Posted on Reply
#27
R-T-B
Prime2515102: First AI, now ASI? Next will be AMI (artificial megaintelligence), then AWTFI. This is getting stupid already... I'm glad I'm old enough to be dead soon.
ASI has been a term in sci-fi novels since probably when you were born.
Posted on Reply
#28
Vayra86
Endymio: Worldwide, automobiles have killed more than 10 million people in the last several decades. Autonomous vehicles have the capability to reduce these deaths by two orders of magnitude or more. The idea that there's some sort of corporate conspiracy to artificially "create" demand for this is patent nonsense.


AI isn't "the next thing" after the Internet -- it is far more revolutionary and transformative. As for the "Agent Smith" reference, you realize that The Matrix was a Hollywood fantasy, right?
You're in deep, keep praying for those technologies then. I don't, I see mostly issues with them.
Endymio: When two things share properties, with "lots of similarities", we call them "similar". That's what the word means.
The lovely fading and misleading principles of marketing work well on you.

The sun and the earth are both in space, so they're similar. Except you can't live on one, and you can live on the other. Still similar? Or let's trade the sun for Venus, as a thought experiment, because they're now also both planets. More similar. But is it relevant? You still can't live there. The difference really defines what makes Earth, Earth and not Venus.

You say the brain and LLMs are similar. Except you can choose what you train an LLM on, and you haven't got the same degree of control over a brain. They digest information differently, and are influenced by completely different parameters. They're as similar as the Earth and Venus are as planets. And the key difference is the degree of control. That thing humans like a lot, like oxygen and water on habitable planets. Every time you release an AI model and let it do things unchecked, it comes up with the most random nonsense. So while there are similarities, it is the difference that defines the technology, not the similarity. The difference reduces an LLM to essentially a complex algorithm, because it's mostly unreliable and useless outside of that use case.
Posted on Reply
#29
Endymio
Vayra86: keep praying for those technologies then. I don't, I see mostly issues with them.
In the early 1900s, people like you also "saw mostly issues" with the internal combustion engine, which is why there were strong movements to ban it entirely. Thankfully, common sense prevailed, or we'd still be driving horses and buggies.
Vayra86: The sun and the earth are both in space, so they're similar. Except you can't live on one, and you can live on the other. Still similar? Or let's trade the sun for Venus ... because they're now also both planets.
You just admitted they're similar, so which is it? Instead of attempting to play semantics, why not simply admit Earth and Venus are similar in terms of orbital mechanics, but not with respect to biomes?
Vayra86: You say the brain and LLMs are similar. Except you can choose what you train an LLM on, and you haven't got the same degree of control over a brain.
I'm not sure what point you believe you're making here. We could give up the control we have over LLM training, and make it even more similar to how a human brain develops. We choose not to do so, because controlling the training process expedites it.
Vayra86: Every time you release an AI model and let it do things unchecked, it comes up with the most random nonsense.
Rather like a human infant, eh? Until we -train- it otherwise.
Posted on Reply
#30
Vayra86
Endymio: I'm not sure what point you believe you're making here. We could give up the control we have over LLM training, and make it even more similar to how a human brain develops. We choose not to do so, because controlling the training process expedites it.


Rather like a human infant, eh? Until we -train- it otherwise.
You're answering your own questions here, much like I already answered them. The point is exactly what you've just stated. Do you not see the problem with these two statements?

First you're saying: 'We could give up control we have over LLM training and make it more similar to human brain development.' Then you're saying 'until we train it otherwise' - which is exercising control. So which is it? Is giving up control more similar to a human brain, or is adding control? Or is it a bit of both, however it suits us at each moment in time? And how will you mimic the development of a brain in an LLM that way? How do you know what to filter and what not? With an LLM, that is guided by its purpose. But with a human brain?

The point of the comparison really is that with a human brain, you never truly control the training parameters, even though we try hard to gain control of as many as possible. With an LLM you do control everything; this is even the basic premise of it: input = output. You train it for a purpose, a relatively narrow one compared to a brain, and because it is narrow, it is weak, vulnerable to errors, and incapable of true agency. It's an elaborate search engine over the data you've put inside. The generative part doesn't spawn anything new. A human brain does, because it frames the input with a myriad of other information you can't possibly 'train' into a model. Like emotion. Events will inspire, motivate, etc.

So similar, but not even remotely the same.
Endymio: In the early 1900s, people like you also "saw mostly issues" with the internal combustion engine, which is why there were strong movements to ban it entirely. Thankfully, common sense prevailed, or we'd still be driving horses and buggies.
And here we are in the 2020s, where the internal combustion engine is once more a problem we can't ignore. Yay for progress, except we've dug ourselves a massive fossil hole that's difficult to get out of. AI isn't entirely different in that regard. A lot of the technologies we've developed since the industrial age are like this: there is a price to pay, but the economy rules, so fuck that price, we won't pay it now, we'll just postpone it for later generations. In the case of the ICE, that escalated pretty quickly, don't you think? 100 years - years in which we continuously strove to reduce pollution. The best we've managed, though, is transporting it to Asia.
Endymio: You just admitted they're similar, so which is it? Instead of attempting to play semantics, why not simply admit Earth and Venus are similar in terms of orbital mechanics, but not with respect to biomes?
It's not semantics; it's essential to the discussion. Planes have wheels too, so they're similar to cars...? Biomes are just one aspect of the differences between Earth and Venus. Even just something as silly as their position in space is crucial. And it defines those biomes. Such a little detail defines quite a lot, then.
Posted on Reply
#31
Markosz
Sounds like he's out of ideas on how to compete with the AI leaders, so he joins the opposing team with some regulatory BS.
Posted on Reply
#32
Endymio
Vayra86: And here we are in the 2020s, where the internal combustion engine is once more a problem we can't ignore.
Are you seriously suggesting we should have stuck with horses and buggies? Seriously, what happened to our education system? Do you not realize that, at the beginning of the 20th century, NYC urban planners declared that the largest problem facing them was removing horse manure from city streets? And that they calculated that, within 25 years, the manure would be more than 15 feet deep throughout the city, and utterly unremovable, as the more horses you brought in to haul it out, the more you'd generate in response?
Vayra86: Planes have wheels too, so they're similar to cars...?
Why yes, they are. But mostly because planes and cars are both vehicles that transport people. I'm sure even you will admit they're much more similar than a jet is to a coronavirus, say, or a romance novel.

What you've been struggling to say this entire time is that two entities can be similar in one context, but not in another. AI algorithms don't eat, sleep, defecate, fornicate, wear clothes, or frolic on the beach. In that respect they're not similar to humans. They do, however, perform tasks and solve problems. In this aspect, they're very similar to humans. And increasingly, they're performing tasks and solving problems that humans cannot solve. Which makes them valuable indeed. Thanks for playing.
Posted on Reply
#33
64K
For anyone who believes that AI is anything like what the human brain does, consider the following. The human brain possesses self-awareness, a consciousness, a will of its own; it can think outside the box (outside of what AI is programmed to think or how to think). AI is just monkey-see-monkey-do. It's not anything close to human intelligence. imo it never will be. That level of human intelligence is just a sci-fi fantasy for AI.

AI has its place, admittedly, as long as it is constantly monitored by people for screw-ups, because it also lacks another ability of the human brain, which is common sense.
Posted on Reply
#34
Vayra86
Endymio: What you've been struggling to say this entire time is that two entities can be similar in one context, but not in another. AI algorithms don't eat, sleep, defecate, fornicate, wear clothes, or frolic on the beach. In that respect they're not similar to humans. They do, however, perform tasks and solve problems. In this aspect, they're very similar to humans. And increasingly, they're performing tasks and solving problems that humans cannot solve. Which makes them valuable indeed. Thanks for playing.
That's what you've been struggling to read out of it then, because I was pretty clear, and now you're saying the same thing. Well done, we agreed all along. For the most part, anyway; not entirely about its degree of usefulness. Though I won't disagree on the specific tasks part. That's exactly what I've said too. But then, an AI is just a more complex algorithm with a lot of processing power behind it.

Now, let's move on, and make the jump to the topic we're in: a supposed safe superintelligence, and a way to curb the supposedly rampant AI we could create. Where is it? We can discuss things without trying to oppose each other just because ;) If you think about this topic, this supposed threat, how could we conceive of it?
Posted on Reply
#35
Endymio
Vayra86: That's what you've been struggling to read out of it then, because I was pretty clear, and now you're saying the same thing. Well done, we agreed all along.
Not quite. You were claiming that humans and AI algorithms aren't in any way similar in the context under discussion: their ability to perform tasks and solve problems. You also claimed that these algorithms (and the internal combustion engine, as well) create "mostly issues and problems" for society, rather than the enormous benefits both have generated. This is, of course, absurd.
Vayra86: Now, let's move on, and make the jump to the topic we're in: a supposed safe superintelligence, and a way to curb the supposedly rampant AI we could create. Where is it?
We're quite a way from a "superintelligent" AI. But even if we presuppose such, the fact remains that until we begin programming these algorithms with emotions, they won't have a survival instinct -- or any instincts whatsoever. The idea that they would "wake up" and attack us all is naive, to say the least.
64K: The human brain possesses self-awareness, a consciousness, a will of its own; it can think outside the box (outside of what AI is programmed to think or how to think). AI is just monkey-see-monkey-do. It's not anything close to human intelligence. imo it never will be.
There are many fallacies in the above statement. Let's unpack just a few. First, we don't understand what "self-awareness" even is and can't even agree on a formal definition for it, so it's absurd to claim AI can or cannot have it. Do weasels have self-awareness? German Shepherds? Dolphins? Humans in a coma?

Secondly, there's absolutely no reason to believe that neurons made of meat are inherently superior to those made of silicon. Most experts believe that, if you put enough together, consciousness naturally results.

Third and most importantly -- who cares? An AI algorithm capable of finding a cure for cancer, predicting the path of a hurricane, or writing a successful movie script is incredibly valuable, whether or not it's "self aware".
64K: AI also lacks another ability of the human brain, which is common sense.
The problem with that statement is that "common sense" is anything but common in humans.
Posted on Reply
#36
64K
Endymio: There are many fallacies in the above statement. Let's unpack just a few. First, we don't understand what "self-awareness" even is and can't even agree on a formal definition for it, so it's absurd to claim AI can or cannot have it. Do weasels have self-awareness? German Shepherds? Dolphins? Humans in a coma?

Secondly, there's absolutely no reason to believe that neurons made of meat are inherently superior to those made of silicon. Most experts believe that, if you put enough together, consciousness naturally results.

Third and most importantly -- who cares? An AI algorithm capable of finding a cure for cancer, predicting the path of a hurricane, or writing a successful movie script is incredibly valuable, whether or not it's "self aware".


The problem with that statement is that "common sense" is anything but common in humans.
Just let me know when the first AI demands better working conditions or else it goes on strike. AI is not aware that it exists, nor can it desire anything better for itself.

I don't think you really believe that packing enough transistors on a die could ever create self-awareness or any of the unique features found in the human mind. AI is, as you said, an algorithm. It's a program. Human intelligence is vastly more than an algorithm. A monkey can learn. It can be trained, but it can't think on a higher level like humans do.

As I said, AI does have its uses, but true human intelligence isn't one of them.
Posted on Reply
#37
VictorLGC
So this is kind of the John Connor of our generation?? :D
Posted on Reply
#38
Markosz
64K: For anyone who believes that AI is anything like what the human brain does, consider the following. The human brain possesses self-awareness, a consciousness, a will of its own; it can think outside the box (outside of what AI is programmed to think or how to think). AI is just monkey-see-monkey-do. It's not anything close to human intelligence. imo it never will be. That level of human intelligence is just a sci-fi fantasy for AI.

AI has its place, admittedly, as long as it is constantly monitored by people for screw-ups, because it also lacks another ability of the human brain, which is common sense.
What are self-awareness, consciousness and will? These are incredibly vague words meant to make us feel special. What is the box you are thinking outside of?
We are all raised and taught that we are "special" because of our intelligence. But honestly, that's not different from how an AI is programmed/taught.
Humans are limited in what they can do by the education and environment they grew up in. They won't magically come up with something new if they have no previous experience in the area; everything is just a slight reiteration of something that came before.

If you look at the progress of LLMs, they can already pass the Turing test, and they have reasoning and logical-thinking skills better than most people's.
AI can already come up with "new" stuff (reiteration based on previous knowledge): leap71.com/2024/06/18/leap-71-hot-fires-3d-printed-liquid-fuel-rocket-engine-designed-through-noyron-computational-model/

Based on the previous ~5 years of progress, the next decade is going to be wild. The only limit on what AI will be able to do is what we limit it to do, including thinking it's "self-aware" by talking about consciousness and will, etc...
64K: Just let me know when the first AI demands better working conditions or else it goes on strike. AI is not aware that it exists, nor can it desire anything better for itself.
There are billions of people working and living like that.
Posted on Reply