
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried?

  • Yes, but I'm excited anyway! (11 votes, 7.6%)
  • Yes, worried about the potential problems/abuses. (91 votes, 63.2%)
  • No, not worried at all. (10 votes, 6.9%)
  • No, very excited about the possibilities! (9 votes, 6.3%)
  • Indifferent. (14 votes, 9.7%)
  • Something else, comment below... (9 votes, 6.3%)

  Total voters: 144
What does "under no human intervention" mean? AI doesn't do anything on its own initiative. Someone must have given it the command to create the rogue AI.
The problem is we have no idea how they train their AIs.
It's in its infancy; so much can be done to improve the models.
 
It does now, and that's what is so damn scary...
Wait... did no one ask X AI to do anything, it just came up with it all of a sudden? I find that hard to believe of an LLM.
 
Wait... did no one ask X AI to do anything, it just came up with it all of a sudden? I find that hard to believe of an LLM.

Difficult to know what the truth of it is.

I reckon the most advanced ai stuff isn't known to the public, as it is likely used for military purposes.
 
More concerned with the growth in inHumane stupidity - it seems we still manage to destroy more than we create. Perhaps AI rule would be our salvation...
 
I'd rather it be in the hands of the many than in the hands of the few.

Cat's out of the box now - there's nothing you or I can do to stop it.

Wait... did no one ask X AI to do anything, it just came up with it all of a sudden? I find that hard to believe of an LLM.
Correct. LLMs don't do anything without an input.
 
I reckon the most advanced ai stuff isn't known to the public, as it is likely used for military purposes.
Yeah. There is that. Military purposes are not the scary part though.

More concerned with the growth in inHumane stupidity - it seems we still manage to destroy more than we create. Perhaps AI rule would be our salvation...
Oh, please. I think we all know it doesn't work that way...
 
This is good old deep learning, not LLMs.
Started that way, but the current model can make new proteins just like image generation. Pretty cool stuff. Watch the whole video if you have time.
 
Started that way, but the current model can make new proteins just like image generation.
Which, again, is something that you don't need an LLM for.

Watch the whole video if you have time.
I did before I made my previous post. It's an incredible breakthrough and an important milestone, but here's a far more important question: what happens when - not if - Google decides to make AlphaFold closed-source and charge for its use?

In earlier decades, this kind of research would've been done by governments for the benefit of all their citizens, and often humankind as a whole; today it's done by megacorporations whose only goal is to scrape the last cent from your bank account. Where does the madness end?
 
If you work at a computer, you should likely be worried. It's doubtful AI is going to wire a building anytime soon, so I'm not worried. After a while, people will simply stop listening to anything on the internet. If it can happen to cable, it can also happen to AI. Once people believe everything is fake, they will just tune out and focus on real things again. AI is like politicians: powerful, but their power is derived from believers.
 
@Assimilator it uses a Transformer model, which is what LLMs use as well. I would consider it the same technology.
 
And this is a good example of why AI can't be trusted.
This is only one example and is not exclusive.
Not one of those AIs got it right. Every one of them got some part of this query wrong, and none of them was consistent.
I tried this same query myself with most of those (I don't have a paid X account) and got different results than he did, none of them even close to correct.
And yet normal search engines find the data needed to identify each CPU example, or lack thereof.

This is one of the reasons that AI is scary.
People are using the deeply flawed info coming out of AI models to make decisions and do research.
They're getting flawed and baseless info and it's causing harm. :banghead:
 
And this is a good example of why AI can't be trusted.
This is only one example and is not exclusive.
Not one of those AIs got it right. Every one of them got some part of this query wrong, and none of them was consistent.
I tried this same query myself with most of those (I don't have a paid X account) and got different results than he did, none of them even close to correct.
And yet normal search engines find the data needed to identify each CPU example, or lack thereof.

This is one of the reasons that AI is scary.
People are using the deeply flawed info coming out of AI models to make decisions and do research.
They're getting flawed and baseless info and it's causing harm. :banghead:
It's a tool, not a substitute. You shouldn't use it for things that it's not meant to do, and it's not a replacement for human thinking. I don't think this would even be in the training data.

If you rely on the output of an AI to be true, you are an idiot. And yes, you do deserve to be harmed for that, you were told beforehand not to do that.
 
We should differentiate between AGI and what's being sold as "AI".

When it's actually machine learning, just advanced pattern recognition.

So we shouldn't be surprised, when that tech fails to live up to its "Intelligence" branding.
 
It's a tool, not a substitute. You shouldn't use it for things that it's not meant to do, and it's not a replacement for human thinking. I don't think this would even be in the training data.
I agree with you. The problem is that a lot of people ARE using it as a primary source of info, some exclusively. It's making them look like morons and is already causing problems.
If you rely on the output of an AI to be true, you are an idiot.
Exactly.
 
We should differentiate between AGI and what's being sold as "AI".

When it's actually machine learning, just advanced pattern recognition.

So we shouldn't be surprised, when that tech fails to live up to its "Intelligence" branding.
The way I see it, it's no different from talking to a regular person with good recall. I wouldn't expect an "Intelligent" person to know every CPU ever built off the top of their head... Or at least I wouldn't trust their answers.
I agree with you. The problem is that a lot of people ARE using it as a primary source of info, some exclusively. It's making them look like morons and is already causing problems.
I hope this trend continues.
 
I've just seen a pressure washer from a budget brand with an "AI pump" in it. Does anyone have any idea what that is about?
 
It's either working properly, partially functional, or broken. This is why developers test and debug features and functionality. The AI can get things wrong, but so can the human developers who made the AI. It's not perfect, but neither are people. It is cheaper than hiring a personal programmer, though. Will it do as good a job? Maybe not. Probably not. Possibly. Maybe even better. A world of possibilities.
 
I've just seen a pressure washer from a budget brand with an "AI pump" in it. Does anyone have any idea what that is about?
Why can't we just put AI in things which could actually be helpful... Not stuff like this..
 