
OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

AleksandarK

News Editor
Staff member
OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a concern that only a few frontier AI labs prioritize. In recent history, OpenAI's safety team drew the spotlight for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What is to come out of SSI? We still don't know. However, given the team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume they have attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.



View at TechPowerUp Main Site | Source
 
Great, now finally go see a barber, okay?

I think Ilya smells money and wants to cash in before the bubble goes poof

OpenAI created the perfect fearmongering for his business. Gosh, no, this totally doesn't look like a crypto ICO

"What is to come out of SSI? We still don't know."
:roll: :roll: :roll:
 
More "AI" grift, nothing to see here, move along.
 
Come on. Arguably the threat is real. If ASI is possible, then it would almost certainly come at cross-purposes with human interests. And since a lesser intelligence cannot reasonably predict the actions of a greater intelligence, there is no actual way to ensure otherwise, except by not creating such an intelligence in the first place. That is not what's happening, with everyone racing along.

That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.

The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
 
First AI, now ASI? Next will be AMI (artificial megaintelligence), then AWTFI. This is getting stupid already... I'm glad I'm old enough to be dead soon.
 
Typo in the title: Co funder, instead of Co-founder
 

64K

How do you sell a new company to people with the current buzz-word while acknowledging a lot of people's concerns about the possible risks of AI? How about:

Safe Superintelligence Inc.

That should do it. Maybe the people that associate AI with the Terminator movies or losing control of everything to AI won't find it so scary because under their guiding hand it's safe. It's right there in the name, people. Also, it's a corporation, so it must be a serious business and not just another scam. ;)
 
He should invest in some hair plugs.

Also, free advice on naming the next AI company:
Not A Skynet Inc.
 
How do you sell a new company to people with the current buzz-word while acknowledging a lot of people's concerns about the possible risks of AI? How about:

Safe Superintelligence Inc.

That should do it. Maybe the people that associate AI with the Terminator movies or losing control of everything to AI won't find it so scary because under their guiding hand it's safe. It's right there in the name, people. Also, it's a corporation, so it must be a serious business and not just another scam. ;)
He forgot to add "Trust me" at the end of its name. :laugh:
 
Come on. Arguably the threat is real. If ASI is possible, then it would almost certainly come at cross-purposes with human interests. And since a lesser intelligence cannot reasonably predict the actions of a greater intelligence, there is no actual way to ensure otherwise, except by not creating such an intelligence in the first place. That is not what's happening, with everyone racing along.

That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.

The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
There is no risk because LLMs are incapable of creating true AI. Any time you see an article claiming that "AI researchers" are worried about that happening, you can guarantee one of three things:
  • those "researchers" don't exist (go go gadget "anonymous sources" AKA "I can make up whatever I want to get clicks" AKA what passes for journalism in this century)
  • those "researchers" are idiots and/or being interviewed by idiots
  • those "researchers" are lying through their teeth to keep the grift bubble, and the stock price of the grift "AI" company they work for, inflated
"Safe" Superintelligence Inc intends to become the ultimate grifter: you pay them a hefty subscription fee, in return you get to display their stamp of approval on your company's website/portfolio/whatever. Notice how SSI does no work, because as explained above there is no work for them to do, because the concept of "safety" is irrelevant for LLMs. Oh I'm sure they'll "audit" you as part of that subscription, but the "auditor" will perform no meaningful actions because, again, there is literally nothing to do.

This, BTW, is why commercial companies cannot be allowed to become the gatekeepers of anything.
 
Step one, define 'safe'.
Lots of words, but nothing concrete. I have no idea what their goal is.
Are they trying to design something that isn't a weapon? Isn't manipulative? Is family-friendly? Which won't take our jobs? What is 'safe'? What is their goal?
 
There is no risk because LLMs are incapable of creating true AI. Any time you see an article claiming that "AI researchers" are worried about that happening, you can guarantee one of three things:
  • those "researchers" don't exist (go go gadget "anonymous sources" AKA "I can make up whatever I want to get clicks" AKA what passes for journalism in this century)
  • those "researchers" are idiots and/or being interviewed by idiots
  • those "researchers" are lying through their teeth to keep the grift bubble, and the stock price of the grift "AI" company they work for, inflated
"Safe" Superintelligence Inc intends to become the ultimate grifter: you pay them a hefty subscription fee, in return you get to display their stamp of approval on your company's website/portfolio/whatever. Notice how SSI does no work, because as explained above there is no work for them to do, because the concept of "safety" is irrelevant for LLMs. Oh I'm sure they'll "audit" you as part of that subscription, but the "auditor" will perform no meaningful actions because, again, there is literally nothing to do.

This, BTW, is why commercial companies cannot be allowed to become the gatekeepers of anything.
We don't actually know that. It most certainly won't, but some similar architecture operating with a greater variety of inputs and much greater amounts of resources might, and quite probably not all of the most interesting frontier advancements in the field are being published anymore. You don't toy with "might" when human survival is at stake. Even commercial models are not showing clear signs of capabilities plateauing with ever greater amounts of compute yet, and some of those "grifters" are talking about expanding the grid and generation capacity to feed new datacentres. Bubbles only burst when they have nothing to show for their effort.

As to grifters, I'm sure there were plenty of those circa 180 years ago, arguing how railways would cause the end of humanity by agricultural collapse, through all that smog blanketing the sun, and that infernal racket scaring livestock to death or at least making them unproductive, aaaand that they'd accept donations for their cause too. I just hope it's that simple this time.

Hell, a burst bubble and another AI winter poisoning the field for another couple of generations might well be the only thing that keeps humanity safe from that unfortunate fate now, IF ASI is at all possible but turns out to be beyond the reach of reasonable compute before the music stops.
 
Even commercial models are not showing clear signs of capabilities plateauing with ever greater amounts of compute yet
Incorrect. An LLM is an LLM is an LLM, there are no "capabilities" that can "grow". The only reason the "new versions" appear superior to older ones is more compute.

and some of those "grifters" are talking about expanding the grid and generation capacity to feed new datacentres
Ah yes, like how Altman wants OpenAI to invest in a company working on nuclear fusion... a company that he's also invested in. It really is just grift all the way down.

Bubbles only burst when they have nothing to show for their effort.
Incorrect, bubbles only burst once enough people call out the bullshit for what it is. That hasn't happened yet because every company in the world is run by idiot psychopaths who know nothing about "AI" other than that it's the next big thing to put on their resume, so they have zero incentive to examine whether it's actually providing value to the business they "run", or in fact even working at all. This self-perpetuating circlejerk will continue until one of these CEOs of a large and well-known company bets the farm on some stupid "AI" project that fails miserably because "AI" is rubbish, causing that company to implode. At that stage journalists will finally start asking "is this AI stuff actually any good?" and they are going to find no shortage of people in the trenches to tell them that it absolutely is not. Once those "AI is actually garbage" headlines start to drop, the bubble has popped.
 
Incorrect. An LLM is an LLM is an LLM, there are no "capabilities" that can "grow". The only reason the "new versions" appear superior to older ones is more compute.
This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.

Ah yes, like how Altman wants OpenAI to invest in a company working on nuclear fusion... a company that he's also invested in. It really is just grift all the way down.
Except that AI-based tools are *already* generating trillions of dollars of benefits in industries ranging from medicine to aerospace to materials science.
 
not safe if it's coming from ClosedAI
 
Come on. Arguably the threat is real. If ASI is possible, then it would almost certainly come at cross-purposes with human interests. And since a lesser intelligence cannot reasonably predict the actions of a greater intelligence, there is no actual way to ensure otherwise, except by not creating such an intelligence in the first place. That is not what's happening, with everyone racing along.

That is, if misuse of lesser AIs or other ills do not take humanity down first. On the other hand, misalignment with current human interest is not necessarily a bad thing either, but might end with anything from actual utopia, to utopia-with-smile-painted-on-your-soul, to death, to something quite a lot worse. I'm not sure it's desirable to find out which.

The whole thing really is pretty ominous. I applaud them for trying, even if they are just another participant in the race. Interesting times.
The fact this is done by a commercial entity is all we need to know here. This is just good business. Fuck ethics. Remember Google's "do no evil"? Or do you also still believe crypto is really there to decentralize and democratize the world of finance? It's all more of the same because it all originates from the same thing: a market. There is only one bottom line: money.

Except that AI-based tools are *already* generating trillions of dollars of benefits in industries ranging from medicine to aerospace to materials science.
Exactly, all those tools are tailor-made for highly specific jobs, so pray tell, where is this existential threat now? There is just a new security aspect, a new attack vector. Nothing else. AI is nothing other than a more complex algorithm.

Incorrect, bubbles only burst once enough people call out the bullshit for what it is. That hasn't happened yet because every company in the world is run by idiot psychopaths who know nothing about "AI" other than that it's the next big thing to put on their resume, so they have zero incentive to examine whether it's actually providing value to the business they "run", or in fact even working at all. This self-perpetuating circlejerk will continue until one of these CEOs of a large and well-known company bets the farm on some stupid "AI" project that fails miserably because "AI" is rubbish, causing that company to implode. At that stage journalists will finally start asking "is this AI stuff actually any good?" and they are going to find no shortage of people in the trenches to tell them that it absolutely is not. Once those "AI is actually garbage" headlines start to drop, the bubble has popped.
Hmm something something autonomous cars hmmm hyperloop hmmm metaverse hmmm

This is exactly it, tech companies' eternal search for new revenue and markets. The purpose comes after that. Demand, in commerce, is created.

Yeah, and that whole 'Internet' thing is a fad, too. You're really onto something, I think.
Whoever said that? And whoever said AI is 'the next thing after the Internet'? That alone is ridiculous. The AI feeds on the internet. Like a parasite, a disease. Cue Agent Smith.
 
This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.
No, they don't. More/better training may make the LLM better at joining the correct dots, but it still doesn't understand HOW or WHY those dots are connected. As I've said before, correlation without causation is not and never will be an intelligence.

Except that AI-based tools are *already* generating trillions of dollars of benefits in industries ranging from medicine to aerospace to materials science.
"Trillions" of dollars? Really? I can make random shit up on the internet too.
 
The fact this is done by a commercial entity is all we need to know here. This is just good business. Fuck ethics. Remember Google's "do no evil"? Or do you also still believe crypto is really there to decentralize and democratize the world of finance? It's all more of the same because it all originates from the same thing: a market. There is only one bottom line: money.
Commercial interests profiteering off the AI craze has precisely zero relevance to the actual threat strong AI can pose. Them going either direction, either minimizing it or blowing it beyond all proportion, is going to do diddly-squat if the threat materializes.

One could argue that the battle of aligning any possible AGI/ASI is already lost, and doomed from the start; Commercial interest is already so grossly misaligned from general human interest, that they might as well be reptile aliens. Heck, human interest is itself self-contradictory to the point that...Well, just look at what's going on these days.
Incorrect. An LLM is an LLM is an LLM, there are no "capabilities" that can "grow". The only reason the "new versions" appear superior to older ones is more compute.
Again, some other future architecture might exhibit dangerous, unforeseen capabilities. The current craze is creating such a dangerous concentration of resources that I'd actually be thankful if they just took the money and ran. They would do less damage that way.
Ah yes, like how Altman wants OpenAI to invest in a company working on nuclear fusion... a company that he's also invested in. It really is just grift all the way down.
Quite possibly dooming the world at the same time is kind of more salient than any big number one person might amass.
Incorrect, bubbles only burst once enough people call out the bullshit for what it is. That hasn't happened yet because every company in the world is run by idiot psychopaths who know nothing about "AI" other than that it's the next big thing to put on their resume, so they have zero incentive to examine whether it's actually providing value to the business they "run", or in fact even working at all. This self-perpetuating circlejerk will continue until one of these CEOs of a large and well-known company bets the farm on some stupid "AI" project that fails miserably because "AI" is rubbish, causing that company to implode. At that stage journalists will finally start asking "is this AI stuff actually any good?" and they are going to find no shortage of people in the trenches to tell them that it absolutely is not. Once those "AI is actually garbage" headlines start to drop, the bubble has popped.
That's...more or less exactly what I thought I meant by "nothing to show", actually. :oops:

Ironically that may well end up saving the world, or at least denying it this specific fate.
 
Commercial interests profiteering off the AI craze has precisely zero relevance to the actual threat strong AI can pose. Them going either direction, either minimizing it or blowing it beyond all proportion, is going to do diddly-squat if the threat materializes.

One could argue that the battle of aligning any possible AGI/ASI is already lost, and doomed from the start; Commercial interest is already so grossly misaligned from general human interest, that they might as well be reptile aliens. Heck, human interest is itself self-contradictory to the point that...Well, just look at what's going on these days.

Again, some other future architecture might exhibit dangerous, unforeseen capabilities. The current craze is creating such a dangerous concentration of resources that I'd actually be thankful if they just took the money and ran. They would do less damage that way.

Quite possibly dooming the world at the same time is kind of more salient than any big number one person might amass.

That's...more or less exactly what I thought I meant by "nothing to show", actually. :oops:

Ironically that may well end up saving the world, or at least denying it this specific fate.
I don't believe in this AGI bullshit. Humans want control.
 
I don't believe in this AGI bullshit. Humans want control.
Humans also like to delude themselves that they have control and/or can impose control. Except an AGI would, by its very definition, be so much smarter than humans as to be uncontrollable. So what'll happen is that we'll fuck around and find out, which is pretty much standard for our species.
 
No, they don't. More/better training may make the LLM better at joining the correct dots, but it still doesn't understand HOW or WHY those dots are connected. As I've said before, correlation without causation is not and never will be an intelligence.
that "intelligence" is more like consciousness ?
as basic intelligence dont need "why/reason" isnt it ?
especially if the goal for current AI is for "assistant", not intelligence being that work by itself

anyway we are just beginning with LLM as current "AI", i dont expect we stop and keep using LLM
and i also dont think LLM itself stay same like now, well we can compare it with the rest of things that human been created in past
like sakanai.ai company : https://sakana.ai/llm-squared/
 