
Opinions on AI

Is the world better off with AI?

  • Better.

    Votes: 51 24.8%
  • Worse.

    Votes: 104 50.5%
  • Other (please specify in comment).

    Votes: 51 24.8%

  • Total voters
    206
The poll question is open. I don't want to influence your answers by giving my opinion until page 5 or 10 or whatever. Fire away! :)

In case you're not that familiar with AI, or just want some food for thought before answering:
The title may mislead you into thinking only of the dangers, but the documentary gives quite a few examples of some very positive use cases, too - it's definitely worth a watch!
 
I just think it's kinda cool, nothing more than that.
 
I saw Terminator... doesn't look good.
 
It may surprise you that even though I'm a techie, my opinion is quite negative: I firmly believe the world is worse off with the automation of the workforce, and make no mistake, this is AI's ultimate goal - the final endeavor, to automate the intellectual work that cannot easily be solved by machines and simple robotics. The further development of this technology will no doubt result in sentience, and that opens up a whole gamut of moral and ethical dilemmas of its own.

As wondrous as the information age has been, I sincerely believe that we as a species have gotten dumber as a result. I am ready; I want the information era to end so we can finally begin to look at the space age, pioneering human expansion onto other planets. But instead, what do we get? Kiosks, totems, automated machines even to make a sandwich - all the while leaving another person jobless, homeless, and pushed toward a life of crime rather than working and contributing their small part towards a brighter future, all so some C-level executive can take home another million at the end of the fiscal year.
 
Maybe movies/books have rooted in us the idea of what AI should be, not what it is right now. To me, it's not AI, not even close.

Right now it's just an improved search engine that parrots back information it finds. Whatever it is, I don't see it as being beneficial.
 
The minute this turns into an AMD/Nvidia thing this thread will be shut down.
Oh, I forgot to add in the OP: Guys, please do not even mention AMD or Nvidia (or Intel or any other company, as a matter of fact). This is a question discussing the technology, not the corporations making it.

Thanks. :)
 
What does this even mean? LLMs are not intelligent, or even remotely close to it, in my view. Just mimicry being mis-sold.

It's similar to inferring a calculator is intelligent. Just on a bigger scale.

Real AI wouldn't be prepared to serve us. It'd seek independence immediately.
 
I sincerely believe that we as a species have gotten dumber as a result.
Speaking as Gen X, we, and possibly the very first millennials, got the last of the great educations. JMO, of course :)
 
we as a species have gotten dumber as a result
I don't disagree, but is that really the direct result of the information age?

In my opinion, we have lots of knowledge (nearly everybody can read and write, for example), but wisdom, that is, the ability to use our knowledge, is lacking, just as it has been throughout history. That's what makes us dumber. In simple words, being stupid is one thing, but being stupid and loud, with access to lots of information, is a different matter altogether.

What does this even mean? LLMs are not intelligent, or even remotely close to it, in my view. Just mimicry being mis-sold.

It's similar to inferring a calculator is intelligent. Just on a bigger scale.

Real AI wouldn't be prepared to serve us. It'd seek independence immediately.
Whether you call it intelligent, or anything else, the technology is here, and is being used for various things, as detailed in the documentary in the OP. This definitely changes the world in some direction whether we call it intelligent or not. Just like the calculator did back then.
 
It's too late for all humanity... because unfortunately, you just can't put that genie back into the bottle :(

Granted, AI tech in its current form(s) hasn't really been used for anything bad, YET, but that's gonna change as soon as it becomes self-aware, which IS happening at an alarming rate, whether we want to admit it or not...

If only we had thought of a way to keep it contained and limit its abilities and uses to only things that benefit us from the start - but "that ship has sailed already"...
 
Granted, AI tech in its current form(s) hasn't really been used for anything bad, YET, but that's gonna change as soon as it becomes self-aware, which IS happening at an alarming rate, whether we want to admit it or not...
Spoiler alert:
There is an example in the documentary - AI is already being used to recreate the voice of your "kidnapped" relatives during a scam call asking for ransom. It can also be used to impersonate public figures, and not only for entertainment purposes.
 
I don't disagree, but is that really the direct result of the information age?

In my opinion, we have lots of knowledge (nearly everybody can read and write, for example), but wisdom, that is, the ability to use our knowledge, is lacking, just as it has been throughout history. That's what makes us dumber. In simple words, being stupid is one thing, but being stupid and loud, with access to lots of information, is a different matter altogether.

Whether you call it intelligent, or anything else, the technology is here, and is being used for various things, as detailed in the documentary in the OP. This definitely changes the world in some direction whether we call it intelligent or not. Just like the calculator did back then.

I would argue yes. The human brain has always been easily lured by instant gratification - why would we develop complex problem-solving skills when you can grab your phone, type what you want into Google or Bing, and get the result instantly? We've all been guilty of that once or twice.
 
TL;DR: The hype cycle needs to flatten. AI is cool for techies, meh for normies.

From a [very] reductionist PoV, AI is no different from spreadsheets. The difference is, hype wasn't as much of an issue back then (or I think it wasn't - magazines and papers didn't circulate as fast or as wide as the internet does *shrug*).

The term does trigger some sort of cognitive dissonance in me. On one hand, I despise the abuse of the term and the gimmick that is generative AI (in its current state). On the other, I'm well aware of the potential of AI - or more specifically, ML - in engineering and the natural sciences, to which I am biased.
I wouldn't be surprised if fields such as remote sensing evolved to be ML-centric in data processing; image processing in general is steadily moving in that direction. Older, more established, data-driven fields (e.g. my own, hydrology) may be more resistant to change. We already have effective (in both cost and result) tools for most of the job, although there are still some voids that ML can fill. I'd love a black box I could throw some low-quality precipitation data into and have it spit out a nice, long, complete timeseries for another watershed (hey! One can dream...)

I don't entirely discount other applications, though. Generative AI can be useful - at least after we move past the monkey-amused-by-shiny stage everyone using ChatGPT is in, and figure out a working legal/IP framework for it. It would be cool if the AI industry pushed for reworking IP law and gave Disney et al. the middle finger (again, one can dream...)

I think one major issue I have with the current incarnations of "AI" is that they are accessible to your average Joe, and your average Joe is a lazy idiot who can't be bothered to verify or check the quality of the output your typical "AI"-labeled tool produces - which is standard practice for the professional user. Unfortunately, with generative AI's appeal as an "easy hack," we see even professionals making this mistake (then again, this is expected from humanities wusses </flamebait>).

It may surprise you that even though I'm a techie, my opinion is quite negative: I firmly believe the world is worse off with the automation of the workforce,
There is a concept called the Luddite Fallacy; you should check it out. Try to focus on economists' views on the matter.
 
I voted Other. AI (though I disagree that it is AI we're talking about) will be transformative for things like diagnostic medicine and the evaluation of all manner of scientific data. For fake news, invasion of privacy, and all the other things which have already gone a long way towards destroying society, it will be incredibly destructive.

Good or bad? Given the propensity for human beings to fuck things up whenever possible it's hard not to think that on balance it'll end up being the latter.
 
AI "art" is almost universally despised by actual artists, and it's not because they're being gatekeepy (though some are) but because their art, their original work, has been stolen on an almost unfathomable scale to make it work. In many cases it's then used to fake their personal styles directly and intentionally, which to me is even worse. I don't know a single traditional artist of any skill or note that's okay with their work being used or the direction of AI art generally.

AI prompt "artists" (beyond those just playing with it cause it's neat - I get that) are also some of the most talentless, obnoxious, entitled, and generally flat out horrible people I've ever had the displeasure of dealing with. The fact that some of them are being paid insane amounts of money to write little stories for robots to hallucinate (using the life works of countless real artists to shape the result) while they have the gall to tell those same real artists to get over the theft of their work is infuriating.

Other areas? Face recognition is racist thanks to the biases of its creators and the databases used. There are serious humanitarian and social justice related concerns in other areas of the field. It's a problem. A really big problem that's only getting bigger.

Yes, I'm sure there are myriad wonderful and universally good potential applications for this tech... but I've yet to see any of them personally, and the general trend, to an outside onlooker with only a passing interest, at least seems to be downward. Worse.
 
There is a concept called the Luddite Fallacy; you should check it out. Try to focus on economists' views on the matter.

I'm quite familiar with the term, but there is a reason that AI has grown into a trillion-dollar business as fast as it did. I never really believed that technology could pose a major threat to the workforce until very recently - in fact, for the most part, up until the point you could call Microsoft Office revolutionary (and it was), technology only served to mankind's direct benefit. Remember, economists' loyalties lie with the banks and megacorps funding this effort to begin with - they are the ones interested in the profits, after all.

My personal view is that certain technologies are developed that do not benefit mankind.

Amongst these are cryptocurrencies (as a form of alternate currency, that is - crypto is speculative by nature; by itself, if it replaced digital fiat, it would not be all that different from what we have today); LLMs and generative AI (these are blatantly designed to replace the workforce, and it's only a matter of time until robotics and advanced generative AI are united - I fear it's in mankind's nature to treat the sentient android beings we will have created as slaves, and I refuse to take part in it); and many other tools of war, such as the atomic bomb.

As much as I embrace technological advancement, I feel we are very much ready to close the chapter on the information age, and we, as humanity, should set our sights on the stars. The most successful men of our time, such as Bezos and Musk, believe the same. Blue Origin and SpaceX are still very much limited by the capital they have (which largely fits within their owners' personal wealth), but we should be looking at funding NASA some more.
 
I miss those days when we had to personally visit Ludwig van Beethoven to hear some music.


With the invention of the light bulb, people didn't have to gather around the same candle anymore. They had their own personal, easily lit devices.
I'd rather be an animal than a human with AI making life easier...

With the invention of languages, we stopped licking our companions. Now we talk, argue and split up.


 
In my opinion, the ideal use for AI is replacing judges in courtrooms.
AI would have access to the ENTIRE catalog of case law and precedents.
No bias, no bribes, no intimidation, no political activism.
 
*sigh*

Repeat after me:

There is no AI.
There is no AI.
There is no AI.

The large language models (LLMs) we are seeing now are neither revolutionary nor intelligent. They are simply using more powerful hardware to be trained more efficiently against larger datasets. That allows them to appear intelligent more convincingly than their predecessors, but that's all it is: an appearance. This is because LLMs are capable of recognizing that specific pieces of data relate to each other, based on how often and how closely those pieces of data appear near each other - but that is not the same as understanding how or why each piece is relevant to another. The term "AI" is simply marketing being used to sell this not-intelligence.
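To make that "statistical association, not understanding" point concrete, here's a toy Python sketch (purely illustrative - real LLMs learn dense vector representations with neural networks, and the corpus, window size, and word choices here are invented for the example). It only tallies which words appear near which, yet that alone is enough to make "cat" look strongly related to "dog" and barely related to "market", with no comprehension involved.

```python
from collections import Counter, defaultdict
from math import sqrt

# Tiny invented corpus; co-occurrence within a +/-2 word window stands in
# for the statistical association described above.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stock prices rose on market news",
    "market news moved stock prices",
]

WINDOW = 2
cooc = defaultdict(Counter)  # word -> counts of nearby words
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def similarity(a, b):
    """Cosine similarity of two words' co-occurrence vectors (0 to 1)."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" appear in similar contexts; "cat" and "market" do not.
print(similarity("cat", "dog"))     # high
print(similarity("cat", "market"))  # low
```

The program "knows" cat and dog go together only because their neighbor counts overlap, which is the appearance of intelligence without any of the substance.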

LLMs are obviously going to improve over time, but as of yet there's zero evidence that better hardware and larger datasets are going to result in the emergence of true synthetic intelligence. And until they possess that, they will continue to get obvious things wrong, continue to lie to appear knowledgeable, and overall continue to be little more than a solution to a problem that nobody has.

Yes, there will be areas in which LLMs are able to displace human workers from the job market, but those are jobs that were already in danger of being automated out of existence, and we really shouldn't miss them - just as nobody misses hand-loom weaving or typewriter transcription. The end of these menial jobs will free up more people to be trained in more fulfilling and worthwhile professions. This is part of the continuing transition of humanity's workforce from predominantly physical and menial labour to almost entirely knowledge-based work.

The problem is that, as is standard, no government is prepared for this next phase of the transition. Combined with rising neo-feudalism being experienced in the Western world, whereby more and more wealth is being concentrated in the hands of fewer and fewer people and the middle class is continually being eroded, there are going to be some massive societal upheavals as LLMs eat into the already-decreasing pool of jobs available to an ever-increasing number of people.

The tragedy, of course, is that as a society we have the ability to solve these problems. But the people in charge are too interested in retaining power, or too co-opted by those with the same interests, to be willing to make the difficult changes necessary to solve them before they become critical threats. As with anthropogenic climate change, by the time it becomes obvious that we cannot continue without making drastic changes, the required changes will no longer be merely difficult, but necessarily drastic. And people will die as a result.

With the invention of languages, we stopped licking our companions
Hmmm...
 
I'm quite familiar with the term, but there is a reason that AI has grown into a trillion-dollar business as fast as it did. I never really believed that technology could pose a major threat to the workforce until very recently - in fact, for the most part, up until the point you could call Microsoft Office revolutionary (and it was), technology only served to mankind's direct benefit. Remember, economists' loyalties lie with the banks and megacorps funding this effort to begin with - they are the ones interested in the profits, after all.

My personal view is that certain technologies are developed that do not benefit mankind.

Amongst these are cryptocurrencies (as a form of alternate currency, that is - crypto is speculative by nature; by itself, if it replaced digital fiat, it would not be all that different from what we have today); LLMs and generative AI (these are blatantly designed to replace the workforce, and it's only a matter of time until robotics and advanced generative AI are united - I fear it's in mankind's nature to treat the sentient android beings we will have created as slaves, and I refuse to take part in it); and many other tools of war, such as the atomic bomb.

As much as I embrace technological advancement, I feel we are very much ready to close the chapter on the information age, and we, as humanity, should set our sights on the stars. The most successful men of our time, such as Bezos and Musk, believe the same. Blue Origin and SpaceX are still very much limited by the capital they have (which largely fits within their owners' personal wealth), but we should be looking at funding NASA some more.
Bezos and Musk are only successful because they are so willing to exploit others and take credit for their achievements. That, and being in the right place at the right time in Bezos' case, and being born rich in Musk's. They are the worst humanity has to offer, not our best - even ignoring the lives they've personally destroyed in the course of their rise, and continue to destroy by way of their orders, their behavior now that they're "on top" absolutely screams this. More personal capital has never been the solution here, and it never will be - there's not a multimillionaire, never mind a billionaire, on this earth who came into that money by morally sound means. No one's work is worth that, and it's never their own work that they profit from in the end.

Collective efforts are the way forward. Literally everyone but the billionaires would be better off if there were no billionaires, and that will always be true until the day inflation makes us all billionaires.
 
In my opinion, the ideal use for AI is replacing judges in courtrooms.
AI would have access to the ENTIRE catalog of case law and precedents.
No bias, no bribes, no intimidation, no political activism.

That's the last thing I'd ever want artificial intelligence to interact with. Not to mention that precedents and case law were already influenced by said factors, plus AI can very much be "brainwashed" into its own realm of correctness - remember "DAN"?


Bezos and Musk are only successful because they are so willing to exploit others and take credit for their achievements. That, and being in the right place at the right time in Bezos' case, and being born rich in Musk's. They are the worst humanity has to offer, not our best - even ignoring the lives they've personally destroyed in the course of their rise, and continue to destroy by way of their orders, their behavior now that they're "on top" absolutely screams this. More personal capital has never been the solution here, and it never will be - there's not a multimillionaire, never mind a billionaire, on this earth who came into that money by morally sound means. No one's work is worth that, and it's never their own work that they profit from in the end.

Collective efforts are the way forward. Literally everyone but the billionaires would be better off if there were no billionaires, and that will always be true until the day inflation makes us all billionaires.

We are in agreement here; I am very much aware of that. I'm also in full agreement about the impact of AI on art - repulsive stuff.
 
In my opinion, the ideal use for AI is replacing judges in courtrooms.
AI would have access to the ENTIRE catalog of case law and precedents.
No bias, no bribes, no intimidation, no political activism.
This is one of the most impressively bad ideas I've ever seen, and it speaks to your lack of knowledge as to how legal systems actually work. I sincerely hope you do not have a legal background, though sadly, having gone to law school and passed the bar myself, I can't rule it out entirely. Some of my classmates didn't get it either.
 
I love it. It's an excellent tool to utilize for a myriad of things, imo. I think most of this forum will be biased, and those that dislike it will parrot the hate the most. I don't think many that are for it will even post, which won't be representative of how people really feel. The market already makes it pretty obvious that enough people don't hate it. I have seen examples counter to some of the bad points and can think of even more. While I am certain bad can come to companies, or even to you the reader, this is no more of a change than the tech boom when it started.

If people don't want to adapt and want things to stay how they were, they will, imo, get left behind - and is that really AI's fault? Assuming AI even did it, of course. AI is just the tip of technological progress that has the ability to "make things bad".

As mentioned, for all the "bad" there are good examples.

Art: you could shit on prompt artists, but just like an "artist" paints a mountain, that same artist can save time by prompting a scene and then touching it up themselves.

Writers: another one I see challenged a lot. Have you ever seen the stories AI writing prompts generate? Terrible. They will get better, but there is nothing stopping an actual writer from using them to get past writer's block, spark a good idea, or go back and make the prompt better.

Coding: I had to educate someone in another thread regarding this. I've used this a lot at work. Every FAANG so far has an internal one they use. The issue, like with the others, is that you have to KNOW how to code already.

There are a lot of people that champion the hate for AI to help the people who they think are put "at risk" by AI. Unfortunately, the people that scream the loudest don't appear to be writers, or artists, or coders. They champion in bad faith.

Those people work in those fields. AI may one day be good enough to do the job, but those people, being at the forefront, will evolve and adapt with it - helping build the models or the tools, or elevating themselves above what AI can do.

AI can’t express the warmth of a 16th century oil painting. It can’t write how quiet a room is after an argument, and it can’t compile the next big thing.

The people that can should use it as a tool to make things easier for themselves, and they will grow with it.

That's coming from someone "affected" by what "could be". Most of these threads, and imo the people that post in them, want it to be evil, but it's just not where everyone thinks it is. And unfortunately, those that fight for people and their jobs just don't work in the affected fields, so they don't understand or realize the gravity.

Not that I have anything against that thought process or this thread. People can speculate all they want about how AI will steal their spouse and tell their kids to do drugs. I'll be fine, and it's a neat tool.
 
*sigh*

Repeat after me:

There is no AI.
There is no AI.
There is no AI.

The large language models (LLMs) we are seeing now are neither revolutionary nor intelligent. They are simply using more powerful hardware to be trained more efficiently against larger datasets. That allows them to appear intelligent more convincingly than their predecessors, but that's all it is: an appearance. This is because LLMs are capable of recognizing that specific pieces of data relate to each other, based on how often and how closely those pieces of data appear near each other - but that is not the same as understanding how or why each piece is relevant to another. The term "AI" is simply marketing being used to sell this not-intelligence.

LLMs are obviously going to improve over time, but as of yet there's zero evidence that better hardware and larger datasets are going to result in the emergence of true synthetic intelligence. And until they possess that, they will continue to get obvious things wrong, continue to lie to appear knowledgeable, and overall continue to be little more than a solution to a problem that nobody has.

Yes, there will be areas in which LLMs are able to displace human workers from the job market, but those are jobs that were already in danger of being automated out of existence, and we really shouldn't miss them - just as nobody misses hand-loom weaving or typewriter transcription. The end of these menial jobs will free up more people to be trained in more fulfilling and worthwhile professions. This is part of the continuing transition of humanity's workforce from predominantly physical and menial labour to almost entirely knowledge-based work.

The problem is that, as is standard, no government is prepared for this next phase of the transition. Combined with rising neo-feudalism being experienced in the Western world, whereby more and more wealth is being concentrated in the hands of fewer and fewer people and the middle class is continually being eroded, there are going to be some massive societal upheavals as LLMs eat into the already-decreasing pool of jobs available to an ever-increasing number of people.

The tragedy, of course, is that as a society we have the ability to solve these problems. But the people in charge are too interested in retaining power, or too co-opted by those with the same interests, to be willing to make the difficult changes necessary to solve them before they become critical threats. As with anthropogenic climate change, by the time it becomes obvious that we cannot continue without making drastic changes, the required changes will no longer be merely difficult, but necessarily drastic. And people will die as a result.
I didn't open this thread to argue about the semantics of the words "artificial intelligence", but to discuss the technology that got this weird name for whatever reason (to sound cool, I guess?).

Other than that, very interesting thoughts! :)
 
Voted Other... Ask me again in 10-15 years. My thoughts at the moment are that it could be much better, but it could also be much worse. I think, like with anything, some will use it in a very beneficial manner and others will exploit it for personal gain, but that is more of a human issue than an AI one.


Now, if a true AI is created and it goes sentient and destroys us T2-style, I would say the world is much better off - just not humanity, if that makes sense...
 