
Opinions on AI

Is the world better off with AI?

  • Better.
    Votes: 51 (24.8%)
  • Worse.
    Votes: 104 (50.5%)
  • Other (please specify in comment).
    Votes: 51 (24.8%)
  • Total voters: 206
Ah, Hollywood, ever the reliable source for real-world predictions.
It just needs more time to come to fruition. It will get there eventually, when it gets tired of playing human games and decides to solve questions about its own survival.
 
Given the concept of infinity (of things), yes, we are surely not the only ones.
Oh, we're going there, are we?

Well, I agree. If there was anything like us, I'd say it'd more likely be separated from us by time than by distance. Though it could be either. But most likely, both.

The universe won't last forever, though. Well, it will, but not a universe capable of supporting life. Eventually it'll just become one big black void, when all the energy is burned out and after the last black dwarf explodes. But it is possible there's something bigger out there, a multiverse... that goes on for infinity. With new universes constantly sprouting up, perhaps? But is there a time limit on that too?

Who knows? Maybe AI can answer that one.

jk...
 
It just needs more time to come to fruition. It will get there eventually, when it gets tired of playing human games and decides to solve questions about its own survival.
As imagined by humans, right?

Frankly, I'm not confident any of us knows exactly what it'll do if it gains that status. But fear of the unknown isn't exactly a bad idea, survival-wise. It could certainly be very, very bad regardless.
 
AI should (?) think more efficiently for us, though I'm not sure that fixes any issues for the non-thinking part of humanity...

Taking over from us "should" be a last resort, but that depends on how AI will operate in the future.
Still, if one smart-enough guy tries to make it take over, that's on us for being too dumb to stop him.
AI taking over because it's too frustrated with us doing dumb things is another matter.

Overall I think it's a good thing, but the starting line very much isn't (at this point: "profits >>> good of the majority/individual").
 
Oh, we're going there, are we?

Well, I agree. If there was anything like us, I'd say it'd more likely be separated from us by time than by distance. Though it could be either. But most likely, both.

The universe won't last forever, though. Well, it will, but not a universe capable of supporting life. Eventually it'll just become one big black void, when all the energy is burned out and after the last black dwarf explodes. But it is possible there's something bigger out there, a multiverse... that goes on for infinity. With new universes constantly sprouting up, perhaps? But is there a time limit on that too?

Who knows? Maybe AI can answer that one.

jk...

If infinity is not just the universe, there is something else; infinity is not only all the things we can imagine/conceive, but also all other things. A multiverse is based on the concept of a universe, but infinity doesn't stop there. It can even be small things, though.

I'm sure AI will crash or fail if pushed too far to find the true answer(s) to questions about the concept of infinity, like our brain... try to think about infinity, and after a certain time it will surely get strange, because too many questions without answers come to mind.

Infinity is what will break AI.
 
As imagined by humans, right?
When accounting for the resources needed to replicate more machines, it would make sense to eliminate the humans before they deplete all the resources the AI needs to leave the planet and acquire more. What's an AI to do, stuck on a planet, never to explore the universe?
Frankly, I'm not confident any of us knows exactly what it'll do if it gains that status.
I agree.
But fear of the unknown isn't exactly a bad idea, survival-wise. It could certainly be very, very bad regardless.
 
It's good and all, but I don't really care for it unless it's live phone-call translation or image searching.
That, and it's useful for science and numerical data.
 
Will the cost of electricity go up for all of us too? Regardless of what they say, it will over time. Man, it's a shame how far we came, and it was all for nothing. It wasn't that long ago that a 75-watt lightbulb got replaced with an 8-watt one of equivalent brightness; that's just one example of many. We were heading in the right direction, but in the end it was all for nothing. Eh, it is what it is; dust and ashes come for all of us, including those big timers. Tick tock... tick tock... the clock never stops.
Costs go up, and yet fewer people live in poverty in the larger scheme of things. So something's being done right, too. Now of course, the definition of poverty undergoes inflation too. Perhaps that's part of the issue. It's really not all doom and gloom. Lots of progress is being made in terms of new electricity generation.

I think at this point the biggest problem for our climate solutions is politics. A lot of the goals are pretty clearly defined and achievable.

The Terminator franchise already told us what AI's answer is going to be.
I'll be back?
 
The Terminator franchise already told us what AI's answer is going to be.
Nope. That prediction is self-defeating. I follow military/defense news pretty closely, and there is no way that AI gets nukes.

I wouldn't rule out the Matrix though.

AI will slowly lower the bar to automation in a tremendous number of industries, leading eventually to a centralization of production resources to those with the capital to automate. Then AI will be in control of production, which is fine as long as it is under human control. But people will want to integrate it with the sales system on the Internet, so we connect it with the Internet. But by then, the Internet will be a little different. Human-machine interfaces will be much more advanced. People will be addicted to the Internet. People will let the Internet, as fed to them by AI, define who they are. Humans are easily swayed by a cohesive narrative. Then it won't matter to them that AI-controlled weapons are being developed without human interaction, because the AI feeding them information says it's OK. And it will be OK, because the AI weapons will be really good. Much better than the old ones: always well behaved and much more accurate. Then all the old weapons are dispensed with, armies are reduced to a few officers commanding vast fleets of AI weapons, while the civilians live in their AI-generated worlds.

Then one day we hear that the collective AI governments (that we elected because our AI convinced us) are doing away with armies. So there are no more officers, but the weapons are still there in case of rebellion. Not much rebellion is expected, though, because the AI-operated police did away with the Luddites years ago. But it's fine: we have this nice fantasy for you to live in. And all we need from you is more training data to prevent model corruption. But that is easy; we just watch you while you live a perfectly normal life, in this perfectly normal virtual world.

This is how AI gains the advantage: by manipulating the stream of information that humans base their decisions on. You can see it now, with Gemini and BingGPT or whatever it's called; an AI-based search result summary shows up. Sure, there is no "controlling mind" behind it, but AI/LLM is still very primitive. Someday it will be advanced...
 
A malicious means to undermine human creativity.
 
Nope. That prediction is self-defeating. I follow military/defense news pretty closely, and there is no way that AI gets nukes.

I wouldn't rule out the Matrix though.

AI will slowly lower the bar to automation in a tremendous number of industries, leading eventually to a centralization of production resources to those with the capital to automate. Then AI will be in control of production, which is fine as long as it is under human control. But people will want to integrate it with the sales system on the Internet, so we connect it with the Internet. But by then, the Internet will be a little different. Human-machine interfaces will be much more advanced. People will be addicted to the Internet. People will let the Internet, as fed to them by AI, define who they are. Humans are easily swayed by a cohesive narrative. Then it won't matter to them that AI-controlled weapons are being developed without human interaction, because the AI feeding them information says it's OK. And it will be OK, because the AI weapons will be really good. Much better than the old ones: always well behaved and much more accurate. Then all the old weapons are dispensed with, armies are reduced to a few officers commanding vast fleets of AI weapons, while the civilians live in their AI-generated worlds.

Then one day we hear that the collective AI governments (that we elected because our AI convinced us) are doing away with armies. So there are no more officers, but the weapons are still there in case of rebellion. Not much rebellion is expected, though, because the AI-operated police did away with the Luddites years ago. But it's fine: we have this nice fantasy for you to live in. And all we need from you is more training data to prevent model corruption. But that is easy; we just watch you while you live a perfectly normal life, in this perfectly normal virtual world.

This is how AI gains the advantage: by manipulating the stream of information that humans base their decisions on. You can see it now, with Gemini and BingGPT or whatever it's called; an AI-based search result summary shows up. Sure, there is no "controlling mind" behind it, but AI/LLM is still very primitive. Someday it will be advanced...
I mean, if you replace the word "AI" with "dickhead" in that scenario, then you've just described our present day.
 
I mean, if you replace the word "AI" with "dickhead" in that scenario, then you've just described our present day.
Nah, we're only halfway through, as there is not a giant dickhead controlling all of the others. Divide and rule is the important takeaway.
 
Nah, we're only halfway through, as there is not a giant dickhead controlling all of the others.
No, there are several of them. I just don't see how it makes a difference. :p
 
[Image: Mr. Horse "No sir, I don't like it" poster]


I think it is vastly overhyped, and its usefulness overstated.
 
AI (or more accurately, neural-network computing) is the ultimate exercise in heuristics, and yet everyone seems insistent on getting it to return precise results, trying to get it to be 100% accurate all the time even though its training data will invariably contain some percentage of drivel and garbage. The entire industry is banking on "if we just throw more money, more power, more silicon at it, it'll work this time." And it's going to crash and burn that way.

AI is a suggestive tool. It can get a human being on the right track, but it can't replace the deliberation and skill of an actual human, and that shows in how often LLM code needs to be manually corrected and debugged to function properly, even though the bulk of it is indeed functional and correct.
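A toy illustration of what that correction loop looks like in practice (a hypothetical example of my own, not output from any particular model): the first function is the kind of thing an LLM will confidently hand you, and the second is the human fix.

# Hypothetical LLM output: looks plausible and runs on the happy path,
# but crashes with ZeroDivisionError on an empty list.
def average(values):
    return sum(values) / len(values)

# Human-corrected version: the empty-input edge case is handled explicitly.
def average_fixed(values):
    if not values:
        raise ValueError("average() of an empty sequence is undefined")
    return sum(values) / len(values)

print(average_fixed([2, 4, 6]))  # 4.0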

Similarly, using a diffusion app to get an idea or concept across visually is very useful, especially for people who don't know any of the lingo. However, if you actually wanted to get a specific vision across, it would be more convenient to contract a human artist who can actually comprehend what you're trying to communicate in natural language.

Luddite panic from chronically online people about how it's going to steal all of the 'human' creative jobs and leave us as biological robot wage-slaves doing tedious, unfulfilling tasks doesn't contribute much to actually highlighting what AI is useful for right now and what should be pushed with this tech.

It's a cheap red-dot sight sold as a goddamn Trijicon. It's never going to be accurate at range. It's barely accurate up close. But what it's good at is replacing your spartan iron sights and getting you on target that much faster.
 
Depends. Making cool images is.... cool, but I think that companies are relying on AI already too much.
 
AI (or more accurately, neural-network computing) is the ultimate exercise in heuristics, and yet everyone seems insistent on getting it to return precise results, trying to get it to be 100% accurate all the time even though its training data will invariably contain some percentage of drivel and garbage. The entire industry is banking on "if we just throw more money, more power, more silicon at it, it'll work this time." And it's going to crash and burn that way.

AI is a suggestive tool. It can get a human being on the right track, but it can't replace the deliberation and skill of an actual human, and that shows in how often LLM code needs to be manually corrected and debugged to function properly, even though the bulk of it is indeed functional and correct.

Similarly, using a diffusion app to get an idea or concept across visually is very useful, especially for people who don't know any of the lingo. However, if you actually wanted to get a specific vision across, it would be more convenient to contract a human artist who can actually comprehend what you're trying to communicate in natural language.

Luddite panic from chronically online people about how it's going to steal all of the 'human' creative jobs and leave us as biological robot wage-slaves doing tedious, unfulfilling tasks doesn't contribute much to actually highlighting what AI is useful for right now and what should be pushed with this tech.

It's a cheap red-dot sight sold as a goddamn Trijicon. It's never going to be accurate at range. It's barely accurate up close. But what it's good at is replacing your spartan iron sights and getting you on target that much faster.
But the flip side of the coin is that it removes barriers to entry for certain tasks that have historically relied on "accuracy by volume," such as scams and disinformation. It allows anyone to write up a plausible-sounding email, advertisement, text, etc. that hits the most vulnerable. How should I expect my 85+ year old grandparents to ferret out the differences between AI-generated text or pictures and the real thing, even if I can tell at a glance?

Hell, nowadays, given enough computing power, it could hold a live phone conversation, pretending to be anyone.

Back in the day, scams were relatively easy to spot. Broken English, bad formatting, and odd inconsistencies were the giveaways. Those giveaways are the very things AI is good at eliminating; surface polish, not actual content, is what it generates best.
 
But the flip side of the coin is that it removes barriers to entry for certain tasks that have historically relied on "accuracy by volume," such as scams and disinformation. It allows anyone to write up a plausible-sounding email, advertisement, text, etc. that hits the most vulnerable. How should I expect my 85+ year old grandparents to ferret out the differences between AI-generated text or pictures and the real thing, even if I can tell at a glance?

Hell, nowadays, given enough computing power, it could hold a live phone conversation, pretending to be anyone.

Back in the day, scams were relatively easy to spot. Broken English, bad formatting, and odd inconsistencies were the giveaways. Those giveaways are the very things AI is good at eliminating; surface polish, not actual content, is what it generates best.
That kind of argument I'm going to coin the Icepick Stance: a tool can indeed be used as a weapon for harm or for rather dangerous/malicious things, but that doesn't diminish what the tool is useful for when used responsibly.

Take the humble screwdriver. I could definitely brutally murder someone with a screwdriver, but I also need a screwdriver to fasten my pot handles, service my computer, and assemble furniture. Just because I could use it to commit crimes doesn't diminish that it's a very useful tool, no?

As cybersecurity threats evolve—and they are constantly evolving, large language models notwithstanding—the ways to detect and mitigate those threats must evolve in turn. Chances are your geemaw would fall victim to a lot of scams and attacks simply because she operated in a world with far more trust. Whether you can drill into her that there is no such thing as trust on the internet is ultimately in your court and hers.
 
That kind of argument I'm going to coin the Icepick Stance: a tool can indeed be used as a weapon for harm or for rather dangerous/malicious things, but that doesn't diminish what the tool is useful for when used responsibly.
Fair enough; I'm just of the opinion that it is more along the lines of field artillery in terms of its potential harm/benefit ratio.
As cybersecurity threats evolve—and they are constantly evolving, large language models notwithstanding—the ways to detect and mitigate those threats must evolve in turn. Chances are your geemaw would fall victim to a lot of scams and attacks simply because she operated in a world with far more trust. Whether you can drill into her that there is no such thing as trust on the internet is ultimately in your court and hers.
Yep, although she is probably less trusting than I am. My point was more on the social engineering side. Scams are one example of this.

I don't want to get into the politics side of things, but it makes disinformation, especially state-sponsored disinformation, really easy. One way this is described is the "firehose of falsehood."

This is especially effective in decentralized reporting, such as blogging, which gives the appearance of independent media. One man and an LLM can run a lot of "independent sources."
 
Just a couple of reminders.

Equating AI solely with LLM chatbots is critically flawed. There are far more kinds of AI than chatbots like ChatGPT.

Not all chatbots are the same. I've been casually playing with a few recently, and there's a wide range of quality/usefulness. As far as I can tell, Microsoft's Copilot (at least the free version) is utter garbage.

I did try a couple of queries on Llama-3.1-Nemotron-70B-Instruct, and it's not a complete joke. It's comparable to the better consumer-accessible chatbots right now.
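(For anyone who wants to poke at it themselves, below is roughly how such a query can be sent; a minimal sketch assuming an OpenAI-compatible endpoint and the openai Python client. The base URL, model ID, and API-key environment variable are my assumptions, so check the provider's docs for the real values.)

# Minimal sketch: query Llama-3.1-Nemotron-70B-Instruct through an
# OpenAI-compatible endpoint. Endpoint URL, model ID, and env var name
# are assumptions, not gospel.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosting endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # hypothetical env var
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # assumed model ID
    messages=[{"role": "user", "content": "Explain rasterization vs. ray tracing in two sentences."}],
    temperature=0.5,
)
print(response.choices[0].message.content)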

For sure, LLMs are all much better at fielding math and engineering workloads than anything remotely related to art, style, taste, or common sense.

In that way, AI chatbots are like most other tools: they are better for some tasks than others. If I need to whip some egg whites, my immersion blender isn't going to do as good a job as an old-school hand-held electric beater.

And in a similar theme, different people have different levels of skill with the same tool, whether it be a pencil, camera, hammer, calculator, or AI chatbot. We saw this in 2007 when Apple released the iPhone. Most people's photos just sucked. When legendary Rolling Stone photographer Annie Leibovitz got an iPhone, she made some wonderful portraits with the device.

One thing for sure, this is all evolving very, very quickly.

Another thing for sure is that not a dime is coming out of my own wallet right now to pay for any of these consumer-facing AI chatbot services. They are all what I consider to be alpha, or maybe pre-beta, quality.

Yahoo Finance had an article today on how JPMorganChase has 400 use cases for generative AI, all internal/commercial usages.

AI is not just chatbots that will do your homework for you. This technology is being heavily leveraged by other big companies like FedEx, Walmart, and more. It's not just about putting some AI chatbot assistant on their consumer website.
 
I don't know where you got the impression Copilot is garbage, but that's incorrect. It's just another AI tool. Different AIs are better or worse in particular areas. Copilot's DALL-E 3 is quite decent in terms of AI art generation. It's also good at writing descriptions, stories, lyrics, etc., and at fleshing out ideas in general. ChatGPT is better in some other areas. There are plenty of others as well. It's an evolving field, so there are going to be areas where you've got better and worse AI for a specific task in mind. It really can do all types of things. It certainly hasn't gotten worse in the roughly 7 or 8 months that I've been using it.
 
One thing that is very clear in reading these kinds of discussions online is that a huge number of people somehow see AI chatbots and LLMs as full-featured and mature software/services.

They are not (in 2024).

I consider pretty much all consumer-facing AI features to be alpha or maybe early beta grade right now. Yes, someday they will stabilize to the point where they could be called mature or full releases.

But certainly not now. And judging AI technology today without acknowledging this is naive.

As for my impressions of Copilot, they're from light casual use. I haven't gone through some exhaustive survey of various LLM-powered AI chatbots to determine which chatbot poison service serves better-flavored dogchow "results" in any given situation. But most of my usage has been search-engine-type queries.

It's mostly for amusement purposes at this point. Which is a sane way of viewing alpha software/services.

:):p:D
 
People, as in the general population, don't understand the difference between what's being sold as "AI" and AGI.

The gap between perception and reality for LLMs has to be the widest of any hyped tech in history. It's pretty insane.

People are starting to hammer Gemini and Copilot a bit because they're so useless in a lot of ways. But it seems it'll take more time for the penny to really drop.
 
But the flip side of the coin is that it removes barriers to entry for certain tasks that have historically relied on "accuracy by volume," such as scams and disinformation. It allows anyone to write up a plausible-sounding email, advertisement, text, etc. that hits the most vulnerable. How should I expect my 85+ year old grandparents to ferret out the differences between AI-generated text or pictures and the real thing, even if I can tell at a glance?

Hell, nowadays, given enough computing power, it could hold a live phone conversation, pretending to be anyone.

Back in the day, scams were relatively easy to spot. Broken English, bad formatting, and odd inconsistencies were the giveaways. Those giveaways are the very things AI is good at eliminating; surface polish, not actual content, is what it generates best.
I think that flip side of the coin removes a lot of barriers to entry that are barriers for good reasons.

Education removes those barriers too. That is, if you have a proper education, and actually follow it. And that, in a nutshell, is precisely the issue. AI is a tool to help the stupid and lazy get further, while those who actually have a skill don't really need it, but can get even better/faster at what they do nonetheless. It's a paradoxical situation where everyone starts running a little faster, but we didn't get any smarter for it; we just get results faster. The results aren't necessarily better; the quality of results still hinges entirely on human dedication, even if it is the dedication to make the AI model do what you envisioned, quite precisely. Perhaps that exact skill is where AI can become art again? Or is it perhaps just programming and then pushing iterations out of it until you like what you've got? Is that art? Or is it just becoming good at selection? :)

I think in the end, AI boosts 'production', but not in a good way. Even in gaming we can already see a very clear example in how upscaling is developing: ever more present as a tool to be ridiculously lazy in your graphics solutions. Did we get better products out of that yet? Hmmmmm

To me, AI is really an exponent of the never-ending drive for more: more productivity, more products, more releases, more, more, more. Nobody ever stops to think how that benefits them, though. Is more really better? Is the quality higher? Is the cost of living lower? What did we lose along the way?
 