
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried?

  • Yes, but I'm excited anyway!
    Votes: 12 (8.2%)
  • Yes, worried about the potential problems/abuses.
    Votes: 91 (62.3%)
  • No, not worried at all.
    Votes: 10 (6.8%)
  • No, very excited about the possibilities!
    Votes: 10 (6.8%)
  • Indifferent.
    Votes: 14 (9.6%)
  • Something else, comment below.
    Votes: 9 (6.2%)

  • Total voters: 146
As an... enthusiast, let's call me that, I STRONGLY disagree with this, unless you are an absolute masochist. AI's failures to recognise the situation can cause irrecoverable damage to your system. Dildos should be dumber than their users.
People already are causing irreversible damage to their "systems." We need numbers (but I don't want them).

Not true, they upgraded years ago. But I digress..
True, but it's still basically a CF card with a VM and floppy images lol.
 
M5.....

SkyNet....

'nuff said :D
 
Self-regulation of something this pervasive has never worked in the past.
^^^This^^^

Sadly, self-regulation never works. Greed sets in. When banks were deregulated, we got the mortgage crisis and fraudulent accounts. When insurance companies are unregulated, policies are dropped, covered claims are denied, and rates go way up. When the auto industry is unregulated, safety goes away. Big pharma. Law enforcement. Power companies. They have all abused their positions when there was no oversight or consequence for their actions.

Can you imagine the safety conditions in factories if there were no OSHA? Look at what is happening now due to the shortage of inspectors in the food processing industry.

Regulation, sadly, is a necessary evil.
 
On this global forum, people seem oblivious to the partition of AI. JSH, this year, spoke of each nation having an AI. How does that work? How do the ethics work?

Ask an AI questions pertaining to its host culture and you'll get very different answers than you would from another culture's AI. AI systems are dependent on their input, so these variations in input necessarily create different responses.

The learning set of an AI is almost as important as its function. If you want a genuinely 'free' AI, it has to be told what is a solid foundation of fact versus what is myth, culture, or faith. How do you even do that? Who gets to choose?
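To make that concrete, here is a deliberately tiny Python sketch. Everything in it (the question, the "training" data, the answers) is invented for illustration; a real model is vastly more complex, but the dependence on its input data is the same in kind:

# Two "assistants" whose only knowledge is the data they were built from
# answer the same question differently. Purely illustrative stand-in code.

def build_assistant(training_facts):
    def answer(question):
        # The assistant can only repeat what its training data contained.
        return training_facts.get(question, "I was never taught about that.")
    return answer

question = "What is the proper breakfast?"

assistant_a = build_assistant({question: "Porridge, obviously."})
assistant_b = build_assistant({question: "Rice and miso soup."})

print(assistant_a(question))  # answer shaped entirely by corpus A
print(assistant_b(question))  # answer shaped entirely by corpus B

Same question, different learning sets, different "truths".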

I don't fear AI. But I absolutely fear the people in charge of it, especially those motivated by wealth and power.
 
Yes, really. I hope you are not suggesting that, because you have not seen something, it cannot exist?

You actually said two things that support my comment. First, you said you have not seen AI do something "so far". Part of my point was that AI is evolving so fast, we humans are having difficulty keeping up. Then you gave an example of how AI can act independently: you tell it to write an essay, and it will independently go out, do the research, and write the paper.

Yes, you told it the subject, but in the future it "may" be possible for AI, while doing that research for you, to see a problem and independently try to resolve it. I emphasize "may" because we don't know how it is likely to evolve - yet.
No, AI writing an essay is not an independent act. You gave it a task, and it performed it using the data at its disposal. It didn't think about where else it should look, it didn't think about why you need the essay, it didn't start an independent conversation by asking about the weather, etc. Nothing of the sort. It's just simple input-output, like everything in IT so far.

I'll believe that AI can do more if I see any indication of it.
 
The discussion has been taking place all over the net. Wondering what everyone here thinks.

How do you feel about AI? Excited? Concerned? Scared? Indifferent?

Share your input and opinion.

My vote was yes, I'm concerned about the possible problems and abuses. Many of them are already rearing their ugly heads..
You can already see the problems with it, especially in YT Shorts. The information is often incorrect, yet YT is flooded with them ad nauseam.
I wouldn't use a YouTube Short as a reference for home repair.
 
On this global forum, people seem oblivious to the partition of AI. JSH, this year, spoke of each nation having an AI. How does that work? How do the ethics work?

Ask an AI questions pertaining to its host culture and you'll get very different answers than you would from another culture's AI. AI systems are dependent on their input, so these variations in input necessarily create different responses.

The learning set of an AI is almost as important as its function. If you want a genuinely 'free' AI, it has to be told what is a solid foundation of fact versus what is myth, culture, or faith. How do you even do that? Who gets to choose?

I don't fear AI. But I absolutely fear the people in charge of it, especially those motivated by wealth and power.
To make a free AI, first we have to be free ourselves. To be free, we have to ask what freedom is. Freedom of choice? Freedom of speech? Freedom from oppression? What is oppression anyway? Freedom of opinion? Or freedom from our bodily forms? (this sounds weird, but I'm being serious here) An AI is only as free as the people who program it.
 
I don't fear AI. But I absolutely fear the people in charge of it, especially those motivated by wealth and power.
That's what I was trying to portray!!!

Thankfully, AI is far from being alive like Jonny5. At the point it becomes self-aware, we should have huge concerns.

Or should we? A human baby becomes self-aware around age 1. Will AI just be a self-aware baby, which won't really be dangerous at all? Or the opposite: extremely dangerous?

I won't live long enough to see Jonny5 become alive. That's my current belief.
 
I won't live long enough to see Jonny5 become alive. That's my current belief.
I am sure Jonny5 is alive and well.. remember, what we get is 20-50 years behind the tech that actually exists.. at least :D

Ooh god here we goo :laugh:
 
I don't fear AI. But I absolutely fear the people in charge of it, especially those motivated by wealth and power.
What I fear is when AI gets smart enough to function autonomously. The rough numbers say that is coming sooner than later. We need a plan in place to make sure that kind of thing is very strictly controlled.

Thankfully, AI is far from being alive like Jonny5. At the point it becomes self-aware, we should have huge concerns.
Alive or self-aware? No. But smart enough to do real damage? We are very close.
 
I am sure Jonny5 is alive and well.. remember, what we get is 20-50 years behind the tech that actually exists.. at least :D

Ooh god here we goo :laugh:
So I'm using a 50 year old CPU right now?? WTH??!! Haha.

Alive or self-aware? No. But smart enough to do real damage? We are very close.
That just brings us back to regulation and that's the end of the conversation then.

We have all agreed AI needs regulation, now we need to implement it. :)
 
So I'm using a 50 year old CPU right now?? WTH??!! Haha.
Maybe older :O

Edit:

Just kidding, I really have no idea what I am talking about :rockout:
 
What I fear is when AI gets smart enough to function autonomously. The rough numbers say that is coming sooner than later. We need a plan in place to make sure that kind of thing is very strictly controlled.

Alive or self-aware? No. But smart enough to do real damage? We are very close.
All it takes is some fanatic organization with global outreach, deep pockets, and access to the best tech engineers to point it in the right direction to save the planet, and Skynet will be born.
(Let me step back for a moment; I was being a little overdramatic.)
 
People or entities using AI should ultimately be held accountable if AI makes mistakes and causes damages.
And AI generated content should be labeled as such.
 
Twitter is full of AI-generated videos meant to fool people. Facebook is going down the same road. The future does not look bright.
if it can be abused, it will be
 
No, AI writing an essay is not an independent act.
I guess we have a different definition for an independent act.

If I give one of my techs the job of wiring a new house for Ethernet, and I let him or her decide which walls to put the ports on, where to put the distribution panel, how to route the cables through the walls, floors, ceilings, etc., then he has the "independent" authority to do it how he wants and deems best for the job. Just because I gave him the task, or you told AI what the subject of the essay should be, does not mean the way they accomplish those tasks isn't at their own independent discretion.

Being able to conduct independent acts does not automatically imply the AI is totally autonomous or that it, and only it, can pick and choose what it does. The fear is that it could get to that point. Fortunately, we are not there - yet.

You gave it a task, and it performed it using the data at its disposal. It didn't think about where else it should look, it didn't think about why you need the essay, it didn't start an independent conversation by asking about the weather, etc. Nothing of the sort. It's just simple input-output, like everything in IT so far.

I disagree with much of that. No, it didn't talk about the weather or ask why you need the essay. But it might seek out information from other sources. And it definitely is NOT simple input-output. AI can analyze a set (or sets) of data and derive and develop conclusions, and make suggestions based on that data and on past patterns of behavior by you, and by others. That is NOT simple input-output.
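For what it's worth, the gap between the two views can be shown with a rough sketch. In the Python below, fake_model is a made-up stand-in for a real LLM (its behaviour is hard-coded purely for illustration): the first call is plain input-output, while the second wraps the same model in a loop where its own output decides the next step, which is roughly what the argument about "independent" acts comes down to.

# fake_model stands in for a real LLM: one string in, one string out.
# Its behaviour is hard-coded here purely for illustration.
def fake_model(prompt):
    if "OBSERVATION" in prompt:
        return "FINAL: essay written from the search results..."
    if "search" in prompt.lower():
        return "ACTION: search('history of Ethernet')"
    return "FINAL: essay written only from training data..."

# 1) Plain input-output: one prompt in, one answer out, nothing else happens.
print(fake_model("Write an essay about Ethernet."))

# 2) Agent-style loop: the model's own output chooses the next step
#    (here, a pretend web search) before the final answer comes back.
def run_agent(task, max_steps=3):
    context = task
    for _ in range(max_steps):
        reply = fake_model(context)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        # Pretend to execute the requested action and feed the result back in.
        context += "\nOBSERVATION: (search results would go here)"
    return "gave up after too many steps"

print(run_agent("Search the web, then write an essay about Ethernet."))

Whether you call that second case "independent" is exactly where the two of you differ.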
 
I don't fear AI. But I absolutely fear the people in charge of it, especially those motivated by wealth and power.

This is the entirety of it. And if there happens to be someone in charge of something who is well-meaning, they will be swallowed up by capital ghouls.
if it can be abused, it will be

The big thing is of course who is doing the abusing, and the scale. @Macro Device mentioned kevlar vests, which, sure, you can abuse, but you can't do it at scale, as in hundreds-of-millions-of-people scale. That is the difference between now and the past: all of this is proliferated like never before. WW1 was shocking because technology allowed for mass slaughter on a previously unimaginable scale, and social media allows for the same thing to happen.
 
I'm normally not a tin foil hat guy, but what if something Terminator-ish happens?
 
I'm normally not a tin foil hat guy, but what if something Terminator-ish happens?

It can't. "AI", as it's now called, is not that. People (who own stakes in AI companies) like to say that with enough hardware these LLMs will become self-aware and might be able to ride into the singularity, but there are no grounds for that. Actual "artificial intelligence" hasn't been invented yet (unless you assume mimicked speech is the same as thought).
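To put it another way: at the bottom, these models estimate which token tends to follow the tokens seen so far. Here is a deliberately crude Python sketch of that idea, a bigram counter over a one-sentence made-up corpus. Real LLMs are incomparably larger and trained very differently, but they are still next-token predictors.

from collections import Counter, defaultdict
import random

# Toy "training corpus"; real models use trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Count which word follows which (a bigram table, the crudest possible LM).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    # Repeatedly sample a likely next word: fluent-looking output, no thought.
    out = [start]
    for _ in range(length):
        choices = following.get(out[-1])
        if not choices:
            break
        words, counts = zip(*choices.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))

It produces plausible-sounding word salad without understanding a single word, which is the "mimicked speech" point.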
 
Every 15-20 years we come up with another worry. I think AI would turn itself off rather than cope with the human condition, through sheer lack of interest. Why worry about something I can do nothing about, eh?
 

Postulation: Is anyone else concerned with the proliferation of AI?

You know, comparing all this AI/LLM stuff to folding for disease research reminds me of something. In the folding research, computers turned out to be generally good at folding the small pieces of the models, but humans were better at folding the larger pieces in a shorter time. Humans still have more ingenuity than what a computer has been capable of so far. Even with all the advancements in AI/LLMs, it has yet to fully and completely cure any disease. Instead, it's being used to drive down costs and increase profit for the people using it. It's like how they saw crypto on GPUs: they figured out how to make a profit, and now they're asking how to replicate that blockchain play with AI, just with data instead, and use AI/LLMs to make a profit.
 
As a security expert, my concern level is very high. The cat is indeed out of the bag, but that certainly does not mean regulation is not something we should push before it gets WORSE.

I have zero confidence in regulation happening in the US however.
Yeah, the offsite IT firm allowed Copilot to be installed, and I'm like, do you ding dongs even realize this is spyware pushed by MS itself and can compromise IP?

The US nuclear grid still uses offline 8" floppies. So not going there at least, lol.
There's a lot of air gap.
 