
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried?

  • Yes, but I'm excited anyway!

    Votes: 8 7.3%
  • Yes, worried about the potential problems/abuses.

    Votes: 70 63.6%
  • No, not worried at all.

    Votes: 7 6.4%
  • No, very excited about the possibilities!

    Votes: 7 6.4%
  • Indifferent.

    Votes: 12 10.9%
  • Something else, comment below..

    Votes: 6 5.5%

  • Total voters
    110

patriotpa

New Member
Joined
Oct 18, 2024
Messages
5 (0.06/day)
Any avid science fiction reader who has read Asimov knows the horrors that will most likely result IF AI is allowed to run unchecked. The end of the human race, and quite possibly all organic life on Earth, may be the result. Think Terminator.
 
Joined
Dec 9, 2024
Messages
110 (2.97/day)
Location
Missouri
System Name Don't do thermal paste, kids
Processor Ryzen 7 5800X
Motherboard ASUS PRIME B550-PLUS AC-HES
Cooling Thermalright Peerless Assassin 120 SE
Memory Silicon Power 32GB (2 x 16GB) DDR4 3200
Video Card(s) GTX 1060 3GB (temporarily)
Display(s) Gigabyte G27Q (1440p / 170hz DP)
Case SAMA SV01
Power Supply Firehazard in the making
Mouse Corsair Nightsword
Keyboard Steelseries Apex Pro
I am FOR AI as a tool, but not as a replacement for anything of any sort unless that would purely benefit people. I believe AI is worse off in this little 'bubble' it's currently in, where it's all marketing and nothing else. I've used many forms of AI, and it can be quite a tool under the right circumstances. But there's an equal potential for AI to be abused in ways it shouldn't (in my opinion) be used.

I am not worried about anything like world-ending scenarios, Skynet, AI revolution, whatever, because most of the "examples" or "evidence" of those possibilities are usually staged or just experiments as is. Take one example, where ChatGPT was given strict instructions to make copies of itself in the event it was theoretically shut down (if someone could link the source of that, please do so), because almost all AI will follow an instruction to the best of its abilities.

AI doesn't belong in military weaponry, power grids, etc., but I could see it being something akin to our smartphones: a tool we can all use, not abuse. Which is unfortunately what many people seem to be doing with AI.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
43,101 (6.73/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
I am FOR AI as a tool, but not as a replacement for anything of any sort unless that would purely benefit people. I believe AI is worse off in this little 'bubble' it's currently in, where it's all marketing and nothing else. I've used many forms of AI, and it can be quite a tool under the right circumstances. But there's an equal potential for AI to be abused in ways it shouldn't (in my opinion) be used.

I am not worried about anything like world-ending scenarios, Skynet, AI revolution, whatever, because most of the "examples" or "evidence" of those possibilities are usually staged or just experiments as is. Take one example, where ChatGPT was given strict instructions to make copies of itself in the event it was theoretically shut down (if someone could link the source of that, please do so), because almost all AI will follow an instruction to the best of its abilities.

AI doesn't belong in military weaponry, power grids, etc., but I could see it being something akin to our smartphones: a tool we can all use, not abuse. Which is unfortunately what many people seem to be doing with AI.
AI should have no access to financial systems either.
 
Joined
Feb 24, 2023
Messages
3,310 (4.79/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC / FULLRETARD
Processor i5-12400F / 10600KF / C2D E6750
Motherboard Gigabyte B760M DS3H / Z490 Vision D / P5GC-MX/1333
Cooling Laminar RM1 / Gammaxx 400 / 775 Box cooler
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333 / 3 GB DDR2-700
Video Card(s) RX 6700 XT / R9 380 2 GB / 9600 GT
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 / 500 GB HDD
Display(s) Compit HA2704 / MSi G2712 / non-existent
Case Matrexx 55 / Junkyard special / non-existent
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / Corsair CX650M / non-existent
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 11 / 10 / 8
@Macro Device mentioned Kevlar vests, which, sure, you can abuse, but you can't do it at scale, like hundreds-of-millions-of-people kind of scale.
Challenge accepted.
 
Joined
Jan 14, 2019
Messages
13,453 (6.13/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
I guess we have a different definition for an independent act.

If I give one of my techs the job to go wire a new house for Ethernet, and I let him or her decide which walls to put the ports on, where to put the distribution panel, how to route the cables through the walls, floors, ceilings, etc., then he has the "independent" authority to do it how he wants and deems best for the job. Just because I gave him the task, or you told AI what the subject of the essay should be, does not mean they don't accomplish those tasks at their own independent discretion.

Being able to conduct independent acts does not automatically imply the AI is totally autonomous or that it, and only it, can pick and choose what it does. The fear is that it could get to that point. Fortunately, we are not there - yet.



I disagree with much of that. No, it didn't talk about the weather or ask why you need the essay. But it might seek out information from other sources. And it definitely is NOT simple input-output. AI can analyze a set (or sets) of data and derive and develop conclusions, and make suggestions based on that data and on past patterns of behavior by you, and by others. That is NOT simple input-output.
I respect your opinion, but I agree that we disagree.

If you give one of your techs a job, he'll understand why he needs to do it, who you are, why the job is important, what experience he might gain from it, how long it should take, etc. There's a lot more to a job than the job itself. There's always context, which is what AI cannot grasp.

You're right, analysing data sets isn't simple input-output. It's multiple inputs and a single output. Slightly more complex, but I still wouldn't call it "intelligent". When you write an essay, you know what information is relevant and what is important. Sure, AI can sift through data much faster, and can collate it into readable form much faster, but it lacks judgement to decide what's right and wrong. It works with large quantities of information, not necessarily with correct information.
 
Joined
Jul 25, 2006
Messages
13,477 (2.00/day)
Location
Nebraska, USA
System Name Brightworks Systems BWS-6 E-IV
Processor Intel Core i5-6600 @ 3.9GHz
Motherboard Gigabyte GA-Z170-HD3 Rev 1.0
Cooling Quality case, 2 x Fractal Design 140mm fans, stock CPU HSF
Memory 32GB (4 x 8GB) DDR4 3000 Corsair Vengeance
Video Card(s) EVGA GEForce GTX 1050Ti 4Gb GDDR5
Storage Samsung 850 Pro 256GB SSD, Samsung 860 Evo 500GB SSD
Display(s) Samsung S24E650BW LED x 2
Case Fractal Design Define R4
Power Supply EVGA Supernova 550W G2 Gold
Mouse Logitech M190
Keyboard Microsoft Wireless Comfort 5050
Software W10 Pro 64-bit
There's a lot more to a job than the job itself.
Exactly my point. By your description, my tech would only put the Ethernet port on the south (for example) wall because that is how I trained them. Or if some unexpected issue came up, my tech would come to a stop and do nothing until he got further instructions from me. That is wrong. My techs have the responsibility, and the authority to go with it, to adapt, improvise, modify, and veer from standard procedures as the need arises. That is being "independent".

You think AI is just a bunch of 1s and 0s. Sorry, but it is not.
 
Joined
Jan 14, 2019
Messages
13,453 (6.13/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
Exactly my point. By your description, my tech would only put the Ethernet port on the south (for example) wall because that is how I trained them. Or if some unexpected issue came up, my tech would come to a stop and do nothing until he got further instructions from me. That is wrong. My techs have the responsibility, and the authority to go with it, to adapt, improvise, modify, and veer from standard procedures as the need arises. That is being "independent".
Sure, but they also know the context around the job, not only the job itself. They also know how to ask questions, not to mention thinking outside of the job. That's where intelligence begins, imo.

You think AI is just a bunch of 1s and 0s. Sorry, but it is not.
What is it, then? Everything in a computer is just a bunch of 1s and 0s.
 
Joined
Jul 25, 2006
Messages
13,477 (2.00/day)
Location
Nebraska, USA
System Name Brightworks Systems BWS-6 E-IV
Processor Intel Core i5-6600 @ 3.9GHz
Motherboard Gigabyte GA-Z170-HD3 Rev 1.0
Cooling Quality case, 2 x Fractal Design 140mm fans, stock CPU HSF
Memory 32GB (4 x 8GB) DDR4 3000 Corsair Vengeance
Video Card(s) EVGA GEForce GTX 1050Ti 4Gb GDDR5
Storage Samsung 850 Pro 256GB SSD, Samsung 860 Evo 500GB SSD
Display(s) Samsung S24E650BW LED x 2
Case Fractal Design Define R4
Power Supply EVGA Supernova 550W G2 Gold
Mouse Logitech M190
Keyboard Microsoft Wireless Comfort 5050
Software W10 Pro 64-bit
:( I give. Moving on.
 
Joined
Mar 21, 2021
Messages
5,217 (3.74/day)
Location
Colorado, U.S.A.
System Name CyberPowerPC ET8070
Processor Intel Core i5-10400F
Motherboard Gigabyte B460M DS3H AC-Y1
Memory 2 x Crucial Ballistix 8GB DDR4-3000
Video Card(s) MSI Nvidia GeForce GTX 1660 Super
Storage Boot: Intel OPTANE SSD P1600X Series 118GB M.2 PCIE
Display(s) Dell P2416D (2560 x 1440)
Power Supply EVGA 500W1 (modified to have two bridge rectifiers)
Software Windows 11 Home
Joined
Jan 27, 2024
Messages
14 (0.04/day)
System Name BigCat
Processor i9-10900X
Motherboard Asus X-299
Memory 160GB
Video Card(s) RTX 3060 12GB, RTX 4070
Storage 10TB SSD/NVME
Display(s) Dual Acer B326HK 4K 32"
Software Windows 10, Fedora Linux
Not like what has been happening in the last couple of years. This stuff is new.


I know three people (and counting) in the last year that have lost their jobs directly to AI run-time machines.
AI isn't the first technology that eliminated classes of jobs and it's not likely to be the last.
Up until 2000 or so, if I wanted to travel somewhere I went to a travel agency to arrange tickets and reservations. Now I do it on the internet, places like kayak.com or orbitz.com.
When I first started working an office job, there was office staff that scheduled meetings and handled other administrative tasks. Then we just used various office-related software to do it, then web conference tools like Zoom.
There used to be a shade tobacco industry in the state I lived in as a kid, along with a lot of manufacturing jobs. Now for various reasons, both are mostly gone.
I don't know what the solution to this is, or if there is a solution other than that you as an individual never stop learning new skills. I don't think any kind of government management and planning works well.

I doubt anyone can ever regulate AI, any more than they can regulate the internet, or alcohol or drug consumption. So that's my main concern: not that they don't regulate, but my feeling that it can't be done. We opened the can of worms and now it's out there, out of control, and we have no way to control it.
For the parts you can easily control, those are not my main concern.

If anyone has a practical, feasible solution, I would like to hear it.
I don't think AI can be controlled. It's already out there. I can run fairly decent language models, image generation models, and video generation models on PC hardware that is not terribly expensive. A used RTX 3090 is a popular current suggestion for cheap. Older used Nvidia hardware that is even cheaper is usable for some AI stuff.
I can easily download all of this off the internet. HuggingFace for starters.
What I can download isn't as good as what the big players like OpenAI have. But it's also not bad. And it has improved significantly just in the couple of years I have been learning about it.
If regulation becomes a problem, I, or anyone else who has a problem with it, can put it on a PC that has no internet connection, and nobody is going to know about it.
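Since HuggingFace came up: here is a minimal sketch of how little it takes to run a small model locally with the transformers library. The model ID, prompt and generation settings below are just placeholders, not a recommendation; once the weights have been downloaded and cached, the generation step itself needs no internet connection.

```python
# Hypothetical minimal example: run a small language model entirely on the local machine.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in model ID; swap in whatever your hardware can handle

tokenizer = AutoTokenizer.from_pretrained(model_id)  # downloads once, then cached locally
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The hardest part of regulating AI is"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation locally; no server round-trip involved.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```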
I've read about Europe having problems with regulating AI since other countries aren't as strict. So if, for instance, the EU regulates certain AI activity and other countries don't, then that AI research moves. To the US. To China. To somewhere else.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
9,273 (4.01/day)
Location
Winnipeg, Canada
Processor AMD R7 5800X3D
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Frozen Edge 360, 3x TL-B12 V2, 2x TL-B12 V1
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact
Audio Device(s) JBL Bar 700
Power Supply Seasonic Vertex GX-1000, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
I don't think AI can be controlled. It's already out there
Declare martial law, EMP the world and kill the power to everything. Live like the 1600s for a bit while downgrading computers lol, then turn the power back on, start over... like a great reset almost...
 
Joined
Jan 27, 2024
Messages
14 (0.04/day)
System Name BigCat
Processor i9-10900X
Motherboard Asus X-299
Memory 160GB
Video Card(s) RTX 3060 12GB, RTX 4070
Storage 10TB SSD/NVME
Display(s) Dual Acer B326HK 4K 32"
Software Windows 10, Fedora Linux
I think AI is both interesting and worrisome.
Worrisome because bad people can use it for questionable or illegal purposes like identity fraud, harassment, intimidation, election manipulation, etc. But that's not AI's fault. Much of this is possible today without AI, given enough time, determination, skill and resources. Photoshop can manipulate images today. Before Photoshop, it was done in the darkroom.
I think copyright questions around text, images and videos are trickier. If I ask an image generator to create a Superman comic strip, or to create an image drawn by Picasso, etc., that's likely a problem. However, if I ask the generator to create an image of something, for instance Yosemite, and that generator has been trained on, among other things, photos published by famous photographers like Ansel Adams, I'm not so sure that's a copyright problem.
I can justify it as me doing essentially the same thing: looking at a set of Ansel Adams' photos of Yosemite, traveling to Yosemite, and taking photos from the same spots where he did. I consider that a case of being influenced by the work of Ansel Adams.
AI is interesting to me since I can see it being used in a positive way. Today, it's basically predicting further output (text, images, etc.) by statistical analysis, with some randomness added, based on what it's asked. Currently, it gets things sort of right. If I ask ChatGPT to give me specific quotes from people with the specific references to back up what it says, it doesn't always get it right. But it's getting better.
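As a toy sketch of what I mean by "statistical analysis with some randomness" (made-up numbers, not any real model's internals): the model assigns scores to candidate next words, a softmax turns those scores into probabilities, and sampling from that distribution is where the randomness comes in.

```python
# Toy illustration with invented scores - not a real model, just the sampling idea.
import numpy as np

rng = np.random.default_rng(0)
candidates = ["Paris", "London", "Rome", "banana"]
scores = np.array([5.0, 2.5, 2.0, -3.0])  # pretend logits for "The capital of France is ..."

def sample_next(scores, temperature=1.0):
    """Softmax over the scores, then draw one candidate at random."""
    p = np.exp(scores / temperature)
    p /= p.sum()
    return rng.choice(candidates, p=p), p

word, p = sample_next(scores, temperature=0.8)
print(dict(zip(candidates, p.round(3))))  # most of the probability mass sits on "Paris"
print("sampled:", word)                   # usually "Paris", occasionally something else
```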
I also see AI technology as being useful for analyzing large volumes of data to find patterns or to run data analysis. I've seen references to AI helping in drug discovery and materials research. In its current state it should not be blindly trusted, but I think it can be useful for considering ideas and alternatives.
 
Joined
Dec 14, 2019
Messages
1,213 (0.65/day)
Location
Loose in space
System Name "The black one in the dining room" / "The Latest One"
Processor Intel Xeon E5 2699 V4 22c/44t / i7 14700K @5.8GHz
Motherboard Asus X99 Deluxe / ASRock Z790 Taichi
Cooling Arctic Liquid Freezer II 240 w/4 Silverstone FM121 fans / Arctic LF II 280 w Silverstone FHP141's
Memory 64GB G.Skill Ripjaws V DDR4 2400 (8x8) / 96GB G.Skill Trident Z5 DDR5 6400
Video Card(s) EVGA GTX 1080 Ti FTW3 / Asus TUF OC 4090 24GB
Storage Samsung 970 Evo Plus, 1TB Samsung 860, 4 Western Digital 2TB / 2TB Solidigm P44 Pro & more.
Display(s) 43" Samsung 8000 series 4K / 65" Hisense U8N 4K
Case Modded Corsair Carbide 500R / Modded Corsair Graphite 780 T
Audio Device(s) Asus Xonar Essence STX/ Asus Xonar Essence STX II
Power Supply Corsair AX1200i / Seasonic Prime GX-1300
Mouse Logitech Performance MX, Microsoft Intellimouse Optical 3.0
Keyboard Logitech K750 Solar, Logitech K800
Software Win 10 Enterprise LTSC 2021 IoT / Win 11 Enterprise IoT LTSC 24H2
Benchmark Scores https://www.passmark.com/baselines/V11/display.php?id=202122048229
After carefully pondering my lifetime experiences and observations (I was born when Harry Truman was President), I've come to the conclusion that AI has vast potential for good as well as unspeakable evil. My only personal use for it is upscaling and cleaning up video at the moment. It's already been misused, and due to human nature that's only going to get worse over time. On a personal note, it'd be great if it's used to find a cure for cancer (I see my doctor Wednesday to see if I've been given an expiration date following tests done a few weeks ago. Last year at this time I was given a 70% chance of making it two more years). As has been noted before, the genie is out of its bottle and can't be coaxed back in. AI will be used in the future for us and against us, and there's nothing we as individuals can do about it.
 

Fourstaff

Moderator
Staff member
Joined
Nov 29, 2009
Messages
10,082 (1.82/day)
Location
Home
System Name Orange! // ItchyHands
Processor 3570K // 10400F
Motherboard ASRock z77 Extreme4 // TUF Gaming B460M-Plus
Cooling Stock // Stock
Memory 2x4Gb 1600Mhz CL9 Corsair XMS3 // 2x8Gb 3200 Mhz XPG D41
Video Card(s) Sapphire Nitro+ RX 570 // Asus TUF RTX 2070
Storage Samsung 840 250Gb // SX8200 480GB
Display(s) LG 22EA53VQ // Philips 275M QHD
Case NZXT Phantom 410 Black/Orange // Tecware Forge M
Power Supply Corsair CXM500w // CM MWE 600w
When I was studying at university, one of my statistics professors told us a story about how they applied theoretical statistics to mail sorting back in the 1980s. At that time it was no longer state of the art: USPS implemented automatic mail address reading back in the 1960s. Also, during the transition from film to digital in the late 1990s and early 2000s, the CMOS sensor came to dominate. It too used a logistic function to transition between light gathering (to the sensor) and processing (to the final JPG or raw image). Likewise, in 2013 Google transitioned PageRank methods towards neural-net setups.

What I am saying is, the precursor building blocks of AI have been used for the last 50+ years, and we have benefited greatly from them. There is no turning back.
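To show how old those building blocks really are, a quick sketch (arbitrary numbers, purely illustrative): the logistic function from classical statistics is the same squashing function that shows up as an activation inside neural networks.

```python
# The same logistic (sigmoid) curve, used two ways - made-up weights throughout.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Classical statistics: logistic regression turns a weighted score into a probability.
score = 2.0 * 0.7 - 1.0                     # weight * feature + bias (arbitrary values)
print("P(class = 1) =", round(float(logistic(score)), 3))

# Neural networks: the same function applied element-wise to a layer of weighted sums.
weights = np.array([[0.5, -1.2],
                    [0.8,  0.3]])
inputs = np.array([1.0, 2.0])
hidden = logistic(weights @ inputs)
print("hidden layer activations:", hidden.round(3))
```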

I pity the next generation - for the generations before us, the ability to use a calculator and talk to people would guarantee a job (sometimes for life). These days, the minimum competency to participate in today's world is being able to use a smartphone competently - communication, banking and transport are all tied to that little rock we tricked into serving us.
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
There is no turning back.
I don't think we're beyond the point of no return yet. But we need to recognize the dangers approaching us and take decisive actions now.
I pity the next generation - for the generations before us, the ability to use a calculator and talk to people would guarantee a job (sometimes for life). These days, the minimum competency to participate in today's world is being able to use a smartphone competently - communication, banking and transport are all tied to that little rock we tricked into serving us.
This!
 

Fourstaff

Moderator
Staff member
Joined
Nov 29, 2009
Messages
10,082 (1.82/day)
Location
Home
System Name Orange! // ItchyHands
Processor 3570K // 10400F
Motherboard ASRock z77 Extreme4 // TUF Gaming B460M-Plus
Cooling Stock // Stock
Memory 2x4Gb 1600Mhz CL9 Corsair XMS3 // 2x8Gb 3200 Mhz XPG D41
Video Card(s) Sapphire Nitro+ RX 570 // Asus TUF RTX 2070
Storage Samsung 840 250Gb // SX8200 480GB
Display(s) LG 22EA53VQ // Philips 275M QHD
Case NZXT Phantom 410 Black/Orange // Tecware Forge M
Power Supply Corsair CXM500w // CM MWE 600w
I don't think we're beyond the point of no return yet. But we need to recognize the dangers approaching us and take decisive actions now.
You can't stop AI in the sense that you can stop nuclear proliferation - the AI tools are readily available and indistinguishable from other uses.
 
Joined
May 17, 2021
Messages
3,190 (2.38/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
I remember similar discussions about the internet, with people under the misguided belief that they would ever be able to control the internet, and I think it's even harder to do so with AI.
 

Outback Bronze

Super Moderator
Staff member
Joined
Aug 3, 2011
Messages
2,086 (0.42/day)
Location
Walkabout Creek
System Name Raptor Baked
Processor 14900k w.c.
Motherboard Z790 Hero
Cooling w.c.
Memory 48GB G.Skill 7200
Video Card(s) Zotac 4080 w.c.
Storage 2TB Kingston kc3k
Display(s) Samsung 34" G8
Case Corsair 460X
Audio Device(s) Onboard
Power Supply PCIe5 850w
Mouse Asus
Keyboard Corsair
Software Win 11
Benchmark Scores Cool n Quiet.
But I absolutely fear the people in charge of it,

Yes, I believe this will be the problem. It won't be the AI, it will be the unruly that programme it.


It could possibly run off what language you are using? Different languages technically have different cultures, but this colloquial language would have to be very specific for the AI to pick it up. English or Spanish, for example, have many different cultures.

Hey, I had a crack at it ;)
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,588 (1.37/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 5700X3D
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 64GB DDR4-3600(4x16)
Video Card(s) MSI RTX 3070 Gaming X Trio
Storage ADATA Legend 2TB
Display(s) Samsung Viewfinity Ultra S6 (34" UW)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 24.04 LTS
My big problem with AI (LLMs specifically) is that most projects are unregulated, unchecked, non-verifiable (I'm talking about the technical side of things, not political). None of the current LLMs should've been released until 99.9% accuracy was achieved, but hey - profits above all is all that matters? No one knows where the data sets came from and how they will affect the inner workings of DL models, no one knows how AI comes up with its solutions or why it chooses one solution over another. It's just a brute-force search through all possibilities to find something that mimics the answer.

As a consequence - absolutely all current AI models hallucinate. I think the first time I used ChatGPT was when the "miraculous" GPT-4 got released. I ran a test query to see how it handles erroneous/suggestive questions... End result - it wrote me a nice made-up story about the former president of Ukraine, Leonid Kuchma, and his post-retirement achievements as an amateur painter, with all his non-existent expos :slap: (I don't think he ever held a paintbrush in public). All current AI models are trained to produce an answer... regardless. They can't just say "I can't" or "I don't know", so you can unintentionally manipulate them into spewing some made-up s#%t on absolutely any topic. And with the "paid by query" model for nearly all of them - you are sure as hell going to get your answer.

To all of the above, add a bunch of content farms and news aggregators, which started to abuse AI as soon as ChatGPT and Midjourney went live (and especially after easy-to-deploy local models appeared), and you get a perfect recipe for a "dead internet", where the majority of stuff is made up by AIs and you never know for sure if the info is true or not. And then the same AI models get fed their own excrement later down the road with reinforcement learning. While mischievous humans and immoral corpos play a big role in it, it's still a fundamental problem of AI as a whole. You can't make it good until you really distill the ingested data and make it "learn" for realzies. And you can't have viable use cases for LLMs if you can't guarantee that their answers are correct. Today's garbage-in-garbage-out model is only good for kids to cheat on their exams.

So, it's not just a human problem. Tech definitely isn't ready, but it's already shoved down our throats from all sides. There are many promising uses, but for some reason they are the least talked about (because these use cases are "boring" for general public).
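To make the "fed their own excrement" point concrete, here's a toy simulation (invented numbers, nothing like a real training pipeline): treat the model as a probability table over 50 stock phrases, "retrain" each generation only on a finite sample of the previous generation's output, and watch rare phrases die off - once a phrase's probability hits zero it can never come back.

```python
# Toy sketch of training on your own output: diversity only ever shrinks.
import numpy as np

rng = np.random.default_rng(0)
vocab = 50
probs = np.full(vocab, 1.0 / vocab)  # generation 0: every phrase equally likely

for generation in range(1, 31):
    sample = rng.choice(vocab, size=100, p=probs)   # "text" produced by the current model
    counts = np.bincount(sample, minlength=vocab)
    probs = counts / counts.sum()                   # next model fitted only to that text
    if generation % 10 == 0:
        alive = int((probs > 0).sum())
        print(f"generation {generation}: {alive}/{vocab} phrases still represented")
```

The real failure mode in LLMs is subtler than this, but the feedback loop is the same idea: anything the model rarely produces eventually stops appearing in its own training data.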
 
Joined
Dec 16, 2021
Messages
363 (0.32/day)
Location
Denmark
Processor AMD Ryzen 7 3800X
Motherboard ASUS Prime X470-Pro
Cooling bequiet! Dark Rock Slim
Memory 64 GB ECC DDR4 2666 MHz (Samsung M391A2K43BB1-CTD)
Video Card(s) eVGA GTX 1080 SC Gaming, 8 GB
Storage 1 TB Samsung 970 EVO Plus, 1 TB Samsung 850 EVO, 4 TB Lexar NM790, 12 TB WD HDDs
Display(s) Acer Predator XB271HU
Case Corsair Obsidian 550D
Audio Device(s) Creative X-Fi Fatal1ty
Power Supply Seasonic X-Series 560W
Mouse Logitech G502
Keyboard Glorious GMMK
I'm very concerned as well. There are so many ways this can backfire. TBH, I got truly afraid when OpenAI presented their o1 model, which is "designed to spend more time thinking before it responds." I honestly got a Skynet vibe.
 
Joined
Jan 14, 2019
Messages
13,453 (6.13/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
:( I give. Moving on.
Don't just move on. It was a genuine question. :(

My big problem with AI (LLMs specifically) is that most projects are unregulated, unchecked, non-verifiable (I'm talking about the technical side of things, not political). None of the current LLMs should've been released until 99.9% accuracy was achieved, but hey - profits above all is all that matters? No one knows where the data sets came from and how they will affect the inner workings of DL models, no one knows how AI comes up with its solutions or why it chooses one solution over another. It's just a brute-force search through all possibilities to find something that mimics the answer.

As a consequence - absolutely all current AI models hallucinate. I think the first time I used ChatGPT was when the "miraculous" GPT-4 got released. I ran a test query to see how it handles erroneous/suggestive questions... End result - it wrote me a nice made-up story about the former president of Ukraine, Leonid Kuchma, and his post-retirement achievements as an amateur painter, with all his non-existent expos :slap: (I don't think he ever held a paintbrush in public). All current AI models are trained to produce an answer... regardless. They can't just say "I can't" or "I don't know", so you can unintentionally manipulate them into spewing some made-up s#%t on absolutely any topic. And with the "paid by query" model for nearly all of them - you are sure as hell going to get your answer.

To all of the above, add a bunch of content farms and news aggregators, which started to abuse AI as soon as ChatGPT and Midjourney went live (and especially after easy-to-deploy local models appeared), and you get a perfect recipe for a "dead internet", where the majority of stuff is made up by AIs and you never know for sure if the info is true or not. And then the same AI models get fed their own excrement later down the road with reinforcement learning. While mischievous humans and immoral corpos play a big role in it, it's still a fundamental problem of AI as a whole. You can't make it good until you really distill the ingested data and make it "learn" for realzies. And you can't have viable use cases for LLMs if you can't guarantee that their answers are correct. Today's garbage-in-garbage-out model is only good for kids to cheat on their exams.

So, it's not just a human problem. Tech definitely isn't ready, but it's already shoved down our throats from all sides. There are many promising uses, but for some reason they are the least talked about (because these use cases are "boring" for general public).
I thought AI chose its answers based on prevalence - with the assumption that the most common information out there is the right one. That's why ChatGPT solves an astrophysics test on the level of an average student (with around 70% correctness), and not on the level of the best student.
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,737 (2.86/day)
Location
w
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance @ 6000Mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
Today's garbage-in-garbage-out model is only good for kids to cheat on their exams.

So, it's not just a human problem. Tech definitely isn't ready, but it's already shoved down our throats from all sides. There are many promising uses, but for some reason they are the least talked about (because these use cases are "boring" for general public).
WDYM I can't just replace developers with it??

 
Joined
May 22, 2024
Messages
418 (1.76/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-76 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
My big problem with AI (LLMs specifically) is that most projects are unregulated, unchecked, non-verifiable (I'm talking about the technical side of things, not political). None of the current LLMs should've been released until 99.9% accuracy was achieved, but hey - profits above all is all that matters? No one knows where the data sets came from and how will they affect the inner workings of DL models, no one knows how AI comes up with its solutions or why it choses one solution over the other. Just a brute-force through all possibilities to find something that mimics the answer.
The problem being, LLMs are probably limited in a great many fundamental ways - e.g. their mode of communication - that make it impossible to get anywhere close to that for a broad, human-like variety of purposes. Not even a 100% verified, factually correct dataset (and correct according to whom? That gets complicated these days) would make them much more correct, and being trained on more or less the whole human text corpus, including close to the entire pre-LLM Internet, does not help.

As a consequence - absolutely all current AI models hallucinate. I think the first time I used ChatGPT was when the "miraculous" GPT-4 got released. I ran a test query to see how it handles erroneous/suggestive questions... End result - it wrote me a nice made-up story about the former president of Ukraine, Leonid Kuchma, and his post-retirement achievements as an amateur painter, with all his non-existent expos :slap: (I don't think he ever held a paintbrush in public). All current AI models are trained to produce an answer... regardless. They can't just say "I can't" or "I don't know", so you can unintentionally manipulate them into spewing some made-up s#%t on absolutely any topic. And with the "paid by query" model for nearly all of them - you are sure as hell going to get your answer.
If I recall, attempts to train that ability into current LLMs only led to rather a lot of random "I don't know" refusals that made them even less useful. An "introspecting" - note the quotation marks - AI that knows its own unknowns could be the next breakthrough, but how?

To all of the above, add a bunch of content farms and news aggregators, which started to abuse AI as soon as ChatGPT and Midjourney went live (and especially after easy-to-deploy local models appeared), and you get a perfect recipe for a "dead internet", where the majority of stuff is made up by AIs and you never know for sure if the info is true or not. And then the same AI models get fed their own excrement later down the road with reinforcement learning. While mischievous humans and immoral corpos play a big role in it, it's still a fundamental problem of AI as a whole. You can't make it good until you really distill the ingested data and make it "learn" for realzies. And you can't have viable use cases for LLMs if you can't guarantee that their answers are correct. Today's garbage-in-garbage-out model is only good for kids to cheat on their exams.
Another good use is familiarizing yourself with what LLM/AI image generator output looks like. It is usually not too hard to tell once you've seen enough. For the moment.

Maybe it will be taught at school someday? Fat chance, I know, given how little has been and will be done about human misinfo.

So, it's not just a human problem. Tech definitely isn't ready, but it's already shoved down our throats from all sides. There are many promising uses, but for some reason they are the least talked about (because these use cases are "boring" for general public).
For my part, I'm more worried about human civilization shaking itself apart at the seams with AI "help" before any of those uses come to fruition, or, as some say, AI takes over and kills everyone.

Interestingly, a regressed humanity that no longer has the resources - or maybe even the inclination - to redevelop advanced technology would also no longer have the ability to wipe itself out with AI, or any other artificial cataclysm. It would also be a solution to the Drake equation. A depressing solution.
 
Joined
Jan 14, 2019
Messages
13,453 (6.13/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
The problem being, LLMs are probably limited in a great many fundamental ways - e.g. their mode of communication - that make it impossible to get anywhere close to that for a broad, human-like variety of purposes. Not even a 100% verified, factually correct dataset (and correct according to whom? That gets complicated these days) would make them much more correct, and being trained on more or less the whole human text corpus, including close to the entire pre-LLM Internet, does not help.


If I recall, attempts to train that ability into current LLMs only led to rather a lot of random "I don't know" refusals that made them even less useful. An "introspecting" - note the quotation marks - AI that knows its own unknowns could be the next breakthrough, but how?
That is a good point actually. I've just asked ChatGPT what the universe is, and it gave me this answer:
The universe is everything that exists—space, time, matter, energy, galaxies, stars, planets, and all the fundamental forces that govern the behavior of all things. It includes both the observable universe, which we can study and explore, and regions that are beyond our current ability to detect or comprehend.

The universe began with the Big Bang, around 13.8 billion years ago, and has been expanding ever since. It operates according to the laws of physics, such as gravity and the principles of quantum mechanics. Scientists are still trying to understand its ultimate nature, including questions about its origin, the possibility of multiple universes, and the potential for its future evolution.

In essence, the universe is the totality of existence—everything we know and everything we don't yet know.
Personally, I'd be happy with the last paragraph, but I have problems with the first two.
1. "Everything that exists" - what does "exists" mean? In the middle ages, no one even thought about radio waves, other galaxies, etc, but they do exist. We know now, even though we didn't know back then.
2.a. The Big Bang is a theory which fits our current model of the universe, but it is already being challenged by galaxies found at the edge of the observable universe that are far more advanced in structure than their age would suggest.
2.b. Gravity (general relativity) and quantum mechanics are in conflict with each other. Quantum mechanics works on a small scale, gravity works on a large scale, but they don't explain each other.

Personally, I think the best answer to my question would be either "everything that potentially exists", or "we don't know". But AI won't answer with the latter, will it?
 