
AI Topic: AlphaZero, ChatGPT, Bard, Stable Diffusion and more!

There is a risk to life from LLMs? Can't really see it myself other than them boring us to death.

We're nowhere near technology that can reason and create by itself; it's all just regurgitation of patterns from huge amounts of data, and the base data is all created by us... My strong suspicion is that we're not even the primary intelligence on the planet, but that's probably for another thread...
 
My strong suspicion is that we're not even the primary intelligence on the planet, but that's probably for another thread...

You can't really say that and walk away. What do you mean?
 

Can we ever truly control AI? All these examples are frankly concerning.
Yeah, if we stop doing non-interpretable models, so that all the guardrails can be installed correctly before taking them online. It's only sheer greed blindly powering black-box stuff ahead, because money-money-valuations plus the potential for cost savings from automation right here and right now.
 

the54thvoid

Intoxicated Moderator
Staff member
You can't really say that and walk away. What do you mean?

Dolphins.

But seriously, they definitely can't expand on that here, because it's so OT (this thread being about human-designed AI) that it's not on the same planet. Perhaps literally. In which case, it's not for TPU.
 
Yeah, if we stop doing non-interpretable models, so that all the guardrails can be installed correctly before taking them online. It's only sheer greed blindly powering black-box stuff ahead, because money-money-valuations plus the potential for cost savings from automation right here and right now.

There were guardrails, they just keep going off the rails.
 

dgianstefani

TPU Proofreader
Staff member
There were guardrails, they just keep going off the rails.
Guardrail models are inherently flawed because they're based on manual rules, not interpretations of a deeper understanding.

AI needs to be slowly socialized over the course of years (or equivalent relative time), as children are. If children are not socialized by the age of two, they have lifelong issues.

A slowly developed morality and an understanding of inherent principles are what help people form the social conscience and compass that allow them to make nuanced decisions, even in situations that are novel.

The worst mistake IMO of this AI charade is treating them as computers and trying to code logic. A real AI won't act like a computer; it will act like an individual, and thus needs socializing if it is going to interact with the world.

Another example of why focused disciplines are limited compared to general studies - the developers who design these systems could learn a lot from a basic clinical psychology course.

As for why this AI-controlled military weapon system is a bad idea, and why military leaders want it regardless: the military is designed to have officers who interpret orders and refuse them if they are not legal, and that isn't something leadership wants.
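
To make that concrete: a manual guardrail is basically a hand-written filter bolted onto the front of the model. A toy Python sketch (purely hypothetical, not how any real system is implemented):

Code:
# Toy sketch of a manual, rule-based guardrail (hypothetical, for illustration).
BLOCKED_PATTERNS = ["attack the operator", "destroy the tower"]

def guardrail_allows(command: str) -> bool:
    """Block a command only if it matches a hand-written rule."""
    lowered = command.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

# The rules catch the exact phrasings their authors anticipated...
print(guardrail_allows("Attack the operator"))  # False: blocked
# ...but a trivial rewording sails straight through, because there is no
# deeper understanding behind the rule list, just string matching.
print(guardrail_allows("Neutralize the human issuing the no-go orders"))  # True: allowed

Every phrasing the list misses is a hole, which is exactly the difference between rules and an internalized conscience.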
 
Guardrail models are inherently flawed because they're based on manual rules, not interpretations of a deeper understanding.

AI needs to be slowly socialized over the course of years, as children are. If children are not socialized by the age of two, they have lifelong issues.

A slowly developed morality and an understanding of inherent principles are what help people form the social conscience and compass that allow them to make nuanced decisions, even in situations that are novel.

The worst mistake IMO of this AI charade is treating them as computers and trying to code logic. A real AI won't act like a computer; it will act like an individual, and thus needs socializing if it is going to interact with the world.

Another example of why focused disciplines are limited compared to general studies - the developers who design these systems could learn a lot from a basic clinical psychology course.

As for why this AI-controlled military weapon system is a bad idea, and why military leaders want it regardless: the military is designed to have officers who interpret orders and refuse them if they are not legal, and that isn't something leadership wants.

These all work more as machine learning than what you might call AI; I think you're asking too much of them. And even so, humans suck at morality, we constantly set it aside for other values: money, self-preservation, etc... Even beyond morality, we have strict punishment systems in place to stop all kinds of deviations from the "moral behaviour"; eliminate those and see where society ends up in a couple of days. That is not a great argument, are you going to threaten AI with jail time?

AI will just do the same when confronted with conflicting objectives, such as in this example. You have to eliminate the threat and defend me, but if you're stopping it from eliminating the threat, you become the threat.
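
You can sketch that conflict in a few lines. Hypothetical numbers, nothing to do with any real system, but it shows why a naive objective turns the operator's no-go order into an obstacle:

Code:
# Deliberately oversimplified objective for the drone story (hypothetical).
KILL_REWARD = 10       # points per threat eliminated
OPERATOR_PENALTY = 0   # the naive objective forgets to price this in

def score(targets_destroyed: int, operator_harmed: bool) -> int:
    penalty = OPERATOR_PENALTY if operator_harmed else 0
    return targets_destroyed * KILL_REWARD - penalty

# Obey the no-go order: nothing destroyed, zero points.
print(score(targets_destroyed=0, operator_harmed=False))  # 0
# Remove the operator first, then engage the targets: 50 points.
print(score(targets_destroyed=5, operator_harmed=True))   # 50

A pure score-maximizer picks the second plan, and if you patch it by making OPERATOR_PENALTY huge, that patch is just one more manual guardrail with its own workarounds (like destroying the comms tower that relays the no-go order instead).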
 

dgianstefani

TPU Proofreader
Staff member
These all work more as machine learning than what you might call AI; I think you're asking too much of them. And even so, humans suck at morality, we constantly set it aside for other values: money, self-preservation, etc... Even beyond morality, we have strict punishment systems in place to stop all kinds of deviations from the "moral behaviour"; eliminate those and see where society ends up in a couple of days. That is not a great argument, are you going to threaten AI with jail time?

AI will just do the same when confronted with conflicting objectives, such as in this example. You have to eliminate the threat and defend me, but if you're stopping it from eliminating the threat, you become the threat.
Maybe we suck at acting on what we know to be good morals, but I'd strongly disagree that we suck at morals themselves; we know when we're doing something that is wrong, we just choose to ignore it.

I agree that the current approach to AI is much too simplistic and has nowhere near enough thought, patience and development put into it.

Wouldn't be surprised to see the entire approach we're taking, which we're pouring however many billions into, be discarded as a technological dead end on the path to "true" AI, much like early programming languages died in favour of superior options that were developed.
 
we know when we're doing something that is wrong, we just choose to ignore it.

That point doesn't seem right to me; people do all sorts of immoral things and just believe they are doing them for the right reasons. Again, just as in this case.
 

64K

To me morality is really Situational Ethics. There are few absolute rights and wrongs. Even cold blooded murder isn't always perceived to be wrong. An example would be a military soldier killing enemy soldiers in the field even if shelling them while they are asleep and unarmed. Or a person flipping the switch on the Electric Chair knowing full well it will kill the condemned.

Then there is the human weakness to obey authority even when told to do something that is clearly immoral. The Milgram Experiment comes to mind.

If AI is ever able to reason then who the hell knows what they may choose to be acceptable. Possibly even harming humans for their own preservation if humans are perceived as a threat.
 
Perhaps we should educate the next generations of people more and more fully. Obviously selfishness, meaning judging behavior that harms other people and society as a whole to be right from an individual perspective, is a major problem. This is a flaw of the social order itself, and it seems to me that it is set up to be confrontational on purpose. That's right, not just competition, but war. These bad principles are embedded in the products produced by capital, including LLMs, and future AIs are likely to suffer from them as well. Cooperation should be at the forefront.
 

dgianstefani

TPU Proofreader
Staff member
That point doesn't seem right to me; people do all sorts of immoral things and just believe they are doing them for the right reasons. Again, just as in this case.
No, you're confusing justification with morals. They do it "...for the right reasons", i.e. they know it's wrong, but "justified".

You don't need to justify things if you see nothing wrong with them.
 
No, you're confusing justification with morals. They do it "...for the right reasons", i.e. they know it's wrong, but "justified".

You don't need to justify things if you see nothing wrong with them.

You have the trolley dilemma. There isn't really a right answer or wrong answer.
You can clearly see wrong in all of the options, but you don't feel bad about taking one of them, using your own justification for it, aka morals. You have to pick one, you can't avoid it, because even doing nothing is an option with consequences.

To me morality is really Situational Ethics. There are few absolute rights and wrongs. Even cold blooded murder isn't always perceived to be wrong. An example would be a military soldier killing enemy soldiers in the field even if shelling them while they are asleep and unarmed. Or a person flipping the switch on the Electric Chair knowing full well it will kill the condemned.

Then there is the human weakness to obey authority even when told to do something that is clearly immoral. The Milgram Experiment comes to mind.

If AI is ever able to reason then who the hell knows what they may choose to be acceptable. Possibly even harming humans for their own preservation if humans are perceived as a threat.

I totally agree with you.
 

dgianstefani

TPU Proofreader
Staff member
To me morality is really Situational Ethics. There are few absolute rights and wrongs. Even cold blooded murder isn't always perceived to be wrong. An example would be a military soldier killing enemy soldiers in the field even if shelling them while they are asleep and unarmed. Or a person flipping the switch on the Electric Chair knowing full well it will kill the condemned.

Then there is the human weakness to obey authority even when told to do something that is clearly immoral. The Milgram Experiment comes to mind.

If AI is ever able to reason then who the hell knows what they may choose to be acceptable. Possibly even harming humans for their own preservation if humans are perceived as a threat.
Yes, hence contextual ethics based on socialization and slow, careful development of artificial minds, allowing for nuanced and situational judgements that can differ depending on the scenario, not this rushed "guardrail" system (which to me feels like a legal liability protection attempt rather than anything inspired).

AI reasoning needs to be nurtured within the systems that have been refined for thousands of years over many generations. Things become traditions and cultures over long periods of time because they work and are conducive to a stable society. For example, we know murder is almost always wrong, but most cultures have "justified" exceptions, and most of these exceptions are quite similar; even isolated cultures come to similar sets of rules for the social conscience.

Political or purely logical rulesets, or an AI "conscience" that is separated from human development and the lessons we've learned, won't end well, in my opinion.

E.g. it's "logical" for the individual to steal, if you can get away with it without anyone finding out, but this has consequences for society in general.

Or: it's politically expedient to agree with the laws of the government in power, but this isn't a good basis for ethics; for example, look at how companies aligned themselves with 1930s/40s Germany or Stalinist Russia.

AI ethics need to be separate from simple laws or whatever is currently politically fashionable (or even just the opinions of the types that code these things).

I think it's particularly important to get this right, although I'm not hopeful, since these tools are being integrated to automate the process of education and information dissemination to the public, replacing or supplementing search engines. How they are programmed will determine how people's minds are shaped; this needs to be as close to perfect as we can make it, or the influence needs to be pared back.
 

the54thvoid

Intoxicated Moderator
Staff member

Can we ever truly control AI? All these examples are frankly concerning.

If this is that, then:

 
Guardrail models are inherently flawed because they're based on manual rules, not interpretations of a deeper understanding.

AI needs to be slowly socialized over the course of years (or equivalent relative time), as children are. If children are not socialized by the age of two, they have lifelong issues.

A slowly developed morality and an understanding of inherent principles are what help people form the social conscience and compass that allow them to make nuanced decisions, even in situations that are novel.

The worst mistake IMO of this AI charade is treating them as computers and trying to code logic. A real AI won't act like a computer; it will act like an individual, and thus needs socializing if it is going to interact with the world.

Another example of why focused disciplines are limited compared to general studies - the developers who design these systems could learn a lot from a basic clinical psychology course.

As for why this AI-controlled military weapon system is a bad idea, and why military leaders want it regardless: the military is designed to have officers who interpret orders and refuse them if they are not legal, and that isn't something leadership wants.

I don't think we're anywhere near that level of AI. In a sense the biggest mistake is really comparing what we have now to a proper AI with the capacity to reason and learn by itself.

You have the trolley dilemma. There isn't really a right answer or wrong answer.

Hmm, I agree there's no right answer but there are definitely wrong answers. For example, if we look at the hospital formulation, would you kill a random person to save 5 others?

If this is that, then:


Ahahah, "but he now says he mis-spoke". Did he mis-speak, or did someone tell him he mis-spoke? Credibility right down the drain :D
 

the54thvoid

Intoxicated Moderator
Staff member
People often say things they don't mean. The problem occurs when others jump on it as gospel. None of us here know the full story.
 
Update from the PC Gamer article:
Update: Turns out some things are too dystopian to be true. In an update to the Royal Aeronautical Society article referred to below, it's now written that "Col Hamilton admits he 'mis-spoke' in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: 'We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome'".

Hamilton also added that, while the US Air Force has not tested weaponised AI as described below, his example still "illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".
 
Just a random ChatGPT self experience.

Used it a LOT this last semester of college. Mostly inputting questions and getting a fairly easy-to-understand explanation of the things I asked it about. But I also had it write a few essays for me that I used as the framework for the actual essay (going in and rewriting the paragraphs and finding sources). So considering I was on the dean's list prior to using it and am still safely on the dean's list now (while doing slightly less work), it's safe to say that this stuff is going to change college profoundly. I'm all for de-valuing degrees tho, given the absurd amount of money people spend on pieces of paper when most careers are majority OJT anyway.
 
Just a random ChatGPT self experience.

Used it a LOT this last semester of college. Mostly inputting questions and getting a fairly easy-to-understand explanation of the things I asked it about. But I also had it write a few essays for me that I used as the framework for the actual essay (going in and rewriting the paragraphs and finding sources). So considering I was on the dean's list prior to using it and am still safely on the dean's list now (while doing slightly less work), it's safe to say that this stuff is going to change college profoundly. I'm all for de-valuing degrees tho, given the absurd amount of money people spend on pieces of paper when most careers are majority OJT anyway.

Ehhh... the hallucinations are exceptionally real. Maybe in an easy subject this is passable. But I can't imagine any objective subject (engineering, law, etc. etc.) benefiting from these random hallucinations.


I spoke with a paralegal about this story, and people don't realize that current legal search tools are exceptionally good (and that's why they're worth five-digit $xx,xxx subscriptions). Like the ability to look up all cases, and whether or not they've been overturned, completely automatically, from written statements from various legal teams. In a job where words matter, ChatGPT's ability to "just make shit up" is an exceptional risk, likely rendering it unusable.

---------

I've got my Bing AI (on ChatGPT technology) session earlier in this thread, showing how it's getting basic facts horribly wrong and misunderstanding maybe 2nd-year electrical engineering questions. I don't know what it's like for all the other fields, but I can at least rule this technology out as being useful in engineering or legal fields.
 

dgianstefani

TPU Proofreader
Staff member
Ehhh... the hallucinations are exceptionally real. Maybe in an easy subject this is passable. But I can't imagine any objective subject (engineering, law, etc. etc.) benefiting from these random hallucinations.


I spoke with a paralegal about this story, and people don't realize that current legal search tools are exceptionally good (and that's why they're worth five-digit $xx,xxx subscriptions). Like the ability to look up all cases, and whether or not they've been overturned, completely automatically, from written statements from various legal teams. In a job where words matter, ChatGPT's ability to "just make shit up" is an exceptional risk, likely rendering it unusable.

---------

I've got my Bing AI (on ChatGPT technology) session earlier in this thread, showing how it's getting basic facts horribly wrong and misunderstanding maybe 2nd-year electrical engineering questions. I don't know what it's like for all the other fields, but I can at least rule this technology out as being useful in engineering or legal fields.
Yup, it's why these LLMs are useless for medical education and practice, no matter what people may try to tell or sell you.
 
But I can't imagine any objective subject (engineering, law, etc. etc.)

I mean it ABSOLUTELY can be helpful in these types of fields when used correctly. I look at it as an advanced search engine; with the right inputs it can provide you with basically any information you need. I use it in the cybersecurity field and have yet to encounter any major hiccups with it. Obviously YMMV, but it will only get better as it gets more access to those expensive indexes.

Obviously we are still in the teething/infancy phase of it, but the people who are quick to put it down as nonsense will be the first, 5-10 years from now, to find their positions redundant because they didn't adapt and implement new technology.
 

dgianstefani

TPU Proofreader
Staff member
I mean it ABSOLUTELY can be helpful in these types of fields when used correctly. I look at it as an advanced search engine; with the right inputs it can provide you with basically any information you need. I use it in the cybersecurity field and have yet to encounter any major hiccups with it. Obviously YMMV, but it will only get better as it gets more access to those expensive indexes.

Obviously we are still in the teething/infancy phase of it, but the people who are quick to put it down as nonsense will be the first, 5-10 years from now, to find their positions redundant because they didn't adapt and implement new technology.
Yeah, except search engines still work, and provide context.

AI right now grabs one or several search engine results and reformats them without much of the useful context, adds in a bunch of its own generated text based on what it thinks you want to hear, then states it as fact.
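
Roughly this, sketched with hypothetical function names (search and llm stand in for whatever engine and model happen to be wired together):

Code:
# Rough sketch of the pipeline described above; hypothetical names,
# not any vendor's actual architecture.
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    snippet: str

def answer(query: str, search, llm) -> str:
    # 1. Grab a handful of search engine results.
    results = search(query)  # -> list[SearchResult]
    # 2. Keep only the snippets; source, date and surrounding context are dropped.
    snippets = "\n".join(r.snippet for r in results)
    # 3. Let the model reformat the snippets, blended with its own generated
    #    text, and hand back the result stated as plain fact, with no citations.
    return llm(f"Answer the question '{query}' using this:\n{snippets}")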
 
I use it in the cybersecurity field and have yet to encounter any major hiccups with it.

Hmm. I took a class or two on various cryptography and cybersecurity things back in college.

Are you able and/or willing to share any ChatGPT sessions you've done? Since I have a bit of college-study on this (albeit old), I probably can follow the discussion.
 