Joined: Sep 17, 2014 · Messages: 22,431 (6.03/day) · Location: The Washing Machine
Processor | 7800X3D |
---|---|
Motherboard | MSI MAG Mortar B650M WiFi |
Cooling | Thermalright Peerless Assassin |
Memory | 32GB Corsair Vengeance 6000MT/s CL30 |
Video Card(s) | ASRock RX 7900 XT Phantom Gaming |
Storage | Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB |
Display(s) | Gigabyte G34QWC (3440x1440) |
Case | Lian Li A3 mATX White |
Audio Device(s) | Harman Kardon AVR137 + 2.1 |
Power Supply | EVGA Supernova G2 750W |
Mouse | Steelseries Aerox 5 |
Keyboard | Lenovo Thinkpad Trackpoint II |
Software | W11 IoT Enterprise LTSC |
Benchmark Scores | Over 9000 |
Sure, theoretically. I just don't believe we can make one. AI works if it's used as an algorithm: strict guard rails and a preset goal. AI therefore IS an algorithm, just a more complex one, but still with a minimal amount of unexpected behaviour. After all, we don't like unexpected behaviour. The army can't use that behaviour, for example, and neither can the medical world, or traffic. What we can use is prediction -- more accurate prediction, too. I don't see how a really good predictor is going to ruin us; if anything, it will make things more transparent as a bullshit filter.

> Humans also like to delude themselves that they have control and/or can impose control. Except an AGI would, by its very definition, be so much smarter than humans as to be uncontrollable. So what'll happen is that we'll fuck around and find out, which is pretty much standard for our species.
An AI with unexpected behaviour is, these days, 'hallucinating'. All of this is a perfect example of how humans will always want to assume control. We already do it, and by doing it, we've already domesticated the technology.
We're skirting the edge of a paradox here, and AI in its current form IS a paradox, because it reproduces whatever it is trained on. There is no sentience, no new content, just a really elaborate way of remixing existing content. Trinkets and baubles, if you will. Now sure, there are use cases for a more advanced algorithm / approach to that. But that's all it is. AI does nothing we couldn't already do, because it's trained on the things we already did.
Somehow people are thinking that if we feed the computer enough information, it will develop something magic, but that's a fairy tale. If you scale things up, you scale things up and you have a bigger model; that still isn't going to do shit unless you tailor-make it to do it. Computers don't make mistakes. Humans do.
Until we turn it off. There are lots of similarities, but that doesn't make something the same thing, or even 'similar'. They just share properties.

> This is, of course, false. They 'grow' the same way a human brain grows -- more (or better) training.