
OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

Joined Sep 17, 2014 · 22,431 messages
Humans also like to delude themselves that they have control and/or can impose control. Except an AGI would, by its very definition, be so much smarter than humans as to be uncontrollable. So what'll happen is that we'll fuck around and find out, which is pretty much standard for our species.
Sure, theoretically. I just don't believe we can make one. AI works if it's used as an algorithm: strict guard rails and a preset goal. AI therefore IS an algorithm, just a more complex one, but still with a minimum of unexpected behaviour. After all, we don't like unexpected behaviour. The army can't use that behaviour, for example, and neither can the medical world. Or traffic. What we can use is prediction. More accurate prediction, too. I don't see how a really good predictor is going to ruin us; if anything, it will make things more transparent as a bullshit filter.
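
To make "strict guard rails and a preset goal" concrete, here's a minimal sketch in Python; `call_model` is a hypothetical stand-in for whatever model API you'd actually be wrapping:

```python
import re

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call; it returns a
    # canned answer here so the sketch runs on its own.
    return "ETA: 14 minutes"

def guarded_eta(prompt: str, max_retries: int = 3) -> str:
    """Preset goal: return an ETA in minutes. Anything else is rejected."""
    pattern = re.compile(r"ETA: \d+ minutes")
    for _ in range(max_retries):
        answer = call_model(prompt)
        if pattern.fullmatch(answer):  # guard rail: strict output contract
            return answer
        # unexpected behaviour is discarded, never passed through
    return "ETA: unavailable"  # safe fallback, no surprises

print(guarded_eta("When does the bus arrive?"))
```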

An AI with unexpected behaviour is, these days, 'hallucinating'. All of this is a perfect example of how humans will always want to assume control. We already do it, and by doing it, we've already domesticated the technology.

We're skirting the edge of a paradox here, and AI in its current form IS a paradox, because it reproduces whatever it is trained on. There is no sentience, no new content, just a really elaborate way of mixing existing content. Trinkets and baubles, if you will. Now sure, there are use cases for a more advanced algorithm / approach to that. But that's all it is. AI does nothing we couldn't already do, because it's trained on the things we already did.
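
A toy way to see the "mixing existing content" point: a word-level Markov chain (a crude stand-in for a generative model, offered purely as illustration) can only ever emit words it has seen, in sequences it has locally seen:

```python
import random
from collections import defaultdict

# Tiny "training corpus".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# word -> list of words that followed it during training
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    options = follows[word]
    if not options:  # dead end: the model has seen nothing beyond this word
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # a remix of the corpus; never a word outside it
```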

Somehow people are thinking that if we feed the computer enough information, it will develop something magic, but that's a fairy tale. If you scale things up, you scale things up and you have a bigger model; that still isn't going to do shit unless you tailor-make it to do it. Computers don't make mistakes. Humans do.

This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.
Until we turn it off. There are lots of similarities, but that doesn't make something the same thing, or even 'similar'. They just share properties.
 
Joined Sep 29, 2020 · 144 messages
Hmm something something autonomous cars hmmm hyperloop hmmm metaverse hmmm

This is exactly it: tech companies' eternal search for new revenue and markets. The purpose comes after that. Demand, in commerce, is created.
Worldwide, automobiles have killed more than 10 million people in the last several decades. Autonomous vehicles have the capability to reduce these deaths by two orders of magnitude or more. The idea that there's some sort of corporate conspiracy to artificially "create" demand for this is patent nonsense.

Whoever said AI is 'the next thing after the Internet'? That alone is ridiculous. The AI feeds on the internet. Like a parasite, a disease. Cue Agent Smith.
AI isn't "the next thing" after the Internet -- it is far more revolutionary and transformative. As for the "Agent Smith" reference, you realize that The Matrix was a Hollywood fantasy, right?

Until we turn it off. There are lots of similarities, but that doesn't make something the same thing, or even 'similar'. They just share properties.
When two things share properties, with "lots of similarities", we call them "similar". That's what the word means.
 
Joined Aug 20, 2007 · 21,452 messages
First AI, now ASI? Next will be AMI (artificial megaintelligence), then AWTFI. This is getting stupid already... I'm glad I'm old enough to be dead soon.
ASI has been a term in sci-fi novels since probably when you were born.
 
Joined Sep 17, 2014 · 22,431 messages
Worldwide, automobiles have killed more than 10 million people in the last several decades. Autonomous vehicles have the capability to reduce these deaths by two orders of magnitude or more. The idea that there's some sort of corporate conspiracy to artificially "create" demand for this is patent nonsense.


AI isn't "the next thing" after the Internet -- it is far more revolutionary and transformative. As for the "Agent Smith" reference, you realize that The Matrix was a Hollywood fantasy, right?
You're in deep, keep praying for those technologies then. I don't, I see mostly issues with them.

When two things share properties, with "lots of similarities", we call them "similar". That's what the word means.
The lovely fading and misleading principles of marketing work well on you.

The sun and the earth are both in space, so they're similar. Except you can't live on one, and you can live on the other. Still similar? Or let's trade the sun for Venus, as a thought experiment, because they're now also both planets. More similar. But is it relevant? You still can't live there. The difference really defines what makes Earth Earth, and not Venus.

You say the brain and LLMs are similar. Except you can choose what you train an LLM on, and you haven't got the same degree of control over a brain. They digest information differently, and are influenced by completely different parameters. They're as similar as the Earth and Venus are as planets. And the key difference is the degree of control, that thing humans like a lot, like oxygen and water on habitable planets. Every time you release an AI model and let it do things unchecked, it comes up with the most random nonsense. So while there are similarities, it is the difference that defines the technology, not the similarity. The difference reduces an LLM to essentially a complex algorithm, because it's mostly unreliable and useless outside of that use case.
 
Joined Sep 29, 2020 · 144 messages
keep praying for those technologies then. I don't, I see mostly issues with them.
In the early 1900s, people like you also "saw mostly issues" with the internal combustion engine, which is why there were strong movements to ban it entirely. Thankfully, common sense prevailed, or we'd still be driving horses and buggies.

The sun and the earth are both in space, so they're similar. Except you can't live on one, and you can live on the other. Still similar? Or let's trade the sun for Venus ... because they're now also both planets.
You just admitted they're similar, so which is it? Instead of attempting to play semantics, why not simply admit Earth and Venus are similar in terms of orbital mechanics, but not with respect to biomes?

You say the brain and LLMs are similar. Except you can choose what you train an LLM on, and you haven't got the same degree of control over a brain.
I'm not sure what point you believe you're making here. We could give up the control we have over LLM training, and make it even more similar to how a human brain develops. We choose not to do so, because controlling the training process expedites it.
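
To make that concrete, here's a minimal sketch of what controlling the training process looks like; `quality_score` is a hypothetical heuristic standing in for the trained quality classifiers real pipelines use. The point is simply that nothing reaches the model unless we let it:

```python
def quality_score(doc: str) -> float:
    # Hypothetical heuristic; real pipelines use trained classifiers.
    words = doc.split()
    return len(set(words)) / max(len(words), 1)  # crude lexical-diversity proxy

raw_corpus = [
    "the the the the the",                              # junk
    "LLMs learn the statistics of their training data",
    "buy cheap pills now now now",                      # spam-ish
]

# The control knob: only curated documents ever reach the training run.
curated = [doc for doc in raw_corpus if quality_score(doc) > 0.7]
print(curated)  # only the middle document survives the filter
```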

Every time you release an AI model and let it do things unchecked, it comes up with the most random nonsense.
Rather like a human infant, eh? Until we -train- it otherwise.
 
Joined Sep 17, 2014 · 22,431 messages
I'm not sure what point you believe you're making here. We could give up the control we have over LLM training, and make it even more similar to how a human brain develops. We choose not to do so, because controlling the training process expedites it.


Rather like a human infant, eh? Until we -train- it otherwise.
You're answering your own questions here, much like I already answered them. The point is exactly what you've just stated. Do you not see the problem with these two statements?

First you're saying: 'We could give up the control we have over LLM training and make it more similar to human brain development.' Then you're saying 'until we train it otherwise', which is exercising control. So which is it? Is it giving up control that makes it more similar to a human brain, or adding control? Or is it a bit of both, however it suits us at each moment in time? And how will you mimic the development of a brain in an LLM that way? How do you know what to filter and what not? With an LLM, that is guided by its purpose. But with a human brain?

The point of the comparison really is that with a human brain, you never truly control the training parameters, even though we try hard to gain control of as many as possible. With an LLM you do control everything; this is even the basic premise of it: input = output. You train it for a purpose, a relatively narrow one compared to a brain, and because it is narrow, it is weak, vulnerable to errors, and incapable of true agency. It's an elaborate search engine on the data you've put inside. The generative part doesn't spawn anything new. A human brain does, because it frames the input with a myriad of other information you can't possibly 'train' into a model. Like emotion. Events will inspire, motivate, etc.
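
Taken literally, the 'elaborate search engine' framing looks like this toy sketch (bag-of-words vectors and cosine similarity, purely illustrative; an actual LLM's 'index' is implicit in its weights rather than stored as documents):

```python
import math

# Toy "embeddings": bag-of-words counts over a tiny shared vocabulary.
def embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["cats chase mice", "planes have wheels", "venus is a planet"]
vocab = sorted({w for d in docs for w in d.lower().split()})

query = "is venus a planet"
best = max(docs, key=lambda d: cosine(embed(query, vocab), embed(d, vocab)))
print(best)  # "venus is a planet": the nearest stored document wins
```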

So similar, but not even remotely the same.

In the early 1900s, people like you also "saw mostly issues" with the internal combustion engine, which is why there were strong movements to ban it entirely. Thankfully, common sense prevailed, or we'd still be driving horses and buggies.
And here we were in the 2020s, where the internal combustion engine was once more a problem we couldn't ignore. Yay for progress, except we've dug ourselves a massive fossil hole that's difficult to get out of. AI isn't entirely different in that regard. A lot of the technologies we've developed since the industrial age are like this: there is a price we pay, but the economy rules, so fuck that price, we won't pay it now, we'll just postpone it for later generations. In the case of the ICE, that escalated pretty quickly, don't you think? 100 years, years in which we continuously strove to reduce pollution. The best we've managed, though, is transporting it to Asia.

You just admitted they're similar, so which is it? Instead of attempting to play semantics, why not simply admit Earth and Venus are similar in terms of orbital mechanics, but not with respect to biomes?
It's not semantics, it's essential to the discussion. Planes have wheels too, so they're similar to cars...? Biomes are just one aspect of the differences between Earth and Venus. Even something as silly as their position in space is crucial. And it defines those biomes. Such a little detail defines quite a lot, then.
 
Joined May 13, 2016 · 88 messages
Sounds like he is out of ideas on how to compete with the AI leaders, so he joins the opposing team with some regulatory BS.
 
Joined Sep 29, 2020 · 144 messages
And here we were in the 2020s, where the internal combustion engine was once more a problem we couldn't ignore.
Are you seriously suggesting we should have stuck with horses and buggies? Seriously, what happened to our education system? Do you not realize that, at the beginning of the 20th century, NYC urban planners decreed that the largest problem facing them was the issue of removing horse manure from city streets? And they calculated that, within 25 years, that manure would be more than 15 feet deep throughout the city, and utterly unremovable, as the more horses you brought in to haul it out, the more you'd generate in response?


Planes have wheels too, so they're similar to cars...?
Why yes, they are. But mostly because planes and cars are both vehicles that transport people. I'm sure even you will admit they're much more similar than a jet is to a coronavirus, say, or a romance novel.

What you've been struggling to say this entire time is that two entities can be similar in one context, but not in another. AI algorithms don't eat, sleep, defecate, fornicate, wear clothes, or frolic on the beach. In that respect, they're not similar to humans. They do, however, perform tasks and solve problems. In that respect, they're very similar to humans. And increasingly, they're performing tasks and solving problems that humans cannot solve. Which makes them valuable indeed. Thanks for playing.
 

64K · Joined Mar 13, 2014 · 6,772 messages
For anyone who believes that AI is anything like the human brain, consider the following. The human brain possesses self-awareness, a consciousness, a will of its own; it can think outside the box (outside of what AI is programmed to think or how to think). AI is just monkey-see-monkey-do. It's not anything close to human intelligence. IMO it never will be. That level of human intelligence is just a sci-fi fantasy for AI.

AI has its place, admittedly, as long as it is constantly monitored by people for screw-ups, because it also lacks another ability of the human brain, which is common sense.
 
Joined Sep 17, 2014 · 22,431 messages
What you've been struggling to say this entire time is that two entities can be similar in one context, but not in another. AI algorithms don't eat, sleep, defecate, fornicate, wear clothes, or frolic on the beach. In that respect, they're not similar to humans. They do, however, perform tasks and solve problems. In that respect, they're very similar to humans. And increasingly, they're performing tasks and solving problems that humans cannot solve. Which makes them valuable indeed. Thanks for playing.
That's what you've been struggling to read out of it, then, because I was pretty clear, and now you're saying the same thing. Well done, we agreed all along. For the most part, anyway; not entirely about its degree of usefulness. Though I won't disagree on the specific tasks part. That's exactly what I've said too. But then, an AI is just a more complex algorithm with a lot of processing power behind it.

Now, let's move on and make the jump to the topic we're in: a supposed safe superintelligence, and a way to curb the supposed rampant AI we could create. Where is it? We can discuss things without trying to oppose each other just because ;) If you think about this topic, this supposed threat, how could we conceive of it?
 
Joined Sep 29, 2020 · 144 messages
That's what you've been struggling to read out of it, then, because I was pretty clear, and now you're saying the same thing. Well done, we agreed all along.
Not quite. You were claiming that humans and AI algorithms aren't in any way similar in the context under discussion: their ability to perform tasks and solve problems. You also claimed that these algorithms (and the internal combustion engine, as well) create "mostly issues and problems" for society, rather than the enormous benefits both have generated. This is, of course, absurd.

Now, let's move on and make the jump to the topic we're in: a supposed safe superintelligence, and a way to curb the supposed rampant AI we could create. Where is it?
We're quite a way from a "superintelligent" AI. But even if we presuppose such, the fact remains that until we begin programming these algorithms with emotions, they won't have a survival instinct -- or any instincts whatsoever. The idea that they would "wake up" and attack us all is naive, to say the least.

The human brain possesses self-awareness, a consciousness, a will of its own; it can think outside the box (outside of what AI is programmed to think or how to think). AI is just monkey-see-monkey-do. It's not anything close to human intelligence. IMO it never will be.
There are many fallacies in the above statement. Let's unpack just a few. First, we don't understand what "self-awareness" even is and can't agree on a formal definition for it, so it's absurd to claim AI can or cannot have it. Do weasels have self-awareness? German Shepherds? Dolphins? Humans in a coma?

Secondly, there's absolutely no reason to believe that neurons made of meat are inherently superior to those made of silicon. Most experts believe that, if you put enough together, consciousness naturally results.
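
To be clear about terms, a "neuron made of silicon" is nothing exotic. Here's the textbook artificial neuron as a minimal sketch, hand-wired to act as a logical AND; whether stacking billions of these yields consciousness is exactly the open question:

```python
import math

def neuron(inputs, weights, bias):
    """Textbook artificial neuron: weighted sum squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # near 1 = fires, near 0 = silent

# Hand-wired to behave like a logical AND of its two inputs.
print(neuron([1.0, 1.0], [10.0, 10.0], -15.0))  # ~0.993: both inputs active
print(neuron([1.0, 0.0], [10.0, 10.0], -15.0))  # ~0.007: only one active
```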

Third and most importantly -- who cares? An AI algorithm capable of finding a cure for cancer, predicting the path of a hurricane, or writing a successful movie script is incredibly valuable, whether or not it's "self aware".

AI also lacks another ability of the human brain, which is common sense.
The problem with that statement is that "common sense" is anything but common in humans.
 

64K · Joined Mar 13, 2014 · 6,772 messages
There are many fallacies in the above statement. Let's unpack just a few. First, we don't understand what "self-awareness" even is and can't agree on a formal definition for it, so it's absurd to claim AI can or cannot have it. Do weasels have self-awareness? German Shepherds? Dolphins? Humans in a coma?

Secondly, there's absolutely no reason to believe that neurons made of meat are inherently superior to those made of silicon. Most experts believe that, if you put enough together, consciousness naturally results.

Third and most importantly -- who cares? An AI algorithm capable of finding a cure for cancer, predicting the path of a hurricane, or writing a successful movie script is incredibly valuable, whether or not it's "self aware".


The problem with that statement is that "common sense" is anything but common in humans.

Just let me know when the first AI demands better working conditions, or else it goes on strike. AI is not aware that it exists, nor can it even desire anything better for itself.

I don't think you really believe that packing enough transistors on a die could ever create self-awareness or any of the unique features found in the human mind. AI is, as you said, an algorithm. It's a program. Human intelligence is vastly more than an algorithm. A monkey can learn. It can be trained, but it can't think on a higher level like humans.

As I said, AI does have its uses, but true human intelligence isn't one of them.
 
Joined Oct 1, 2023 · 25 messages
So this is kind of the John Connor of our generation?? :D
 
Joined May 13, 2016 · 88 messages
For anyone who believes that AI is anything like the human brain, consider the following. The human brain possesses self-awareness, a consciousness, a will of its own; it can think outside the box (outside of what AI is programmed to think or how to think). AI is just monkey-see-monkey-do. It's not anything close to human intelligence. IMO it never will be. That level of human intelligence is just a sci-fi fantasy for AI.

AI has its place, admittedly, as long as it is constantly monitored by people for screw-ups, because it also lacks another ability of the human brain, which is common sense.

What are self-awareness, consciousness, and will? These are incredibly vague words meant to make us feel special. What is the box you are thinking outside of?
We are all raised and taught that we are "special" because of our intelligence. But honestly, that's no different from how an AI is programmed/taught.
Humans are limited in what they can do by the education and environment they grew up in. They won't magically come up with something new if they have no previous experience in the area; everything is just a slight reiteration of something that came before.

If you look at the progress of LLMs, they can already pass the Turing test, and they have reasoning and logical thinking skills better than most people.
AI can already come up with "new" stuff (reiteration based on previous knowledge): https://leap71.com/2024/06/18/leap-...-designed-through-noyron-computational-model/

Based on the previous ~5 years of progress, the next decade is going to be wild. The only limit on what AI will be able to do is what we limit it to do, including thinking it's "self-aware" by talking about consciousness and will, etc...


Just let me know when the first AI demands better working conditions, or else it goes on strike. AI is not aware that it exists, nor can it even desire anything better for itself.
There are billions of people working and living like that.
 