• Welcome to TechPowerUp Forums, Guest! Please check out our forum guidelines for info related to our community.

AI Topic: AlphaZero, ChatGPT, Bard, Stable Diffusion and more!

There is a risk to life from LLMs? Can't really see it myself other than them boring us to death.

We're nowhere near technology that can reason and create by itself, it's all just regurgitation from patterns in huge amounts of data, the base data is all created by us... My strong suspicion is that we're not even the primary intelligence on the planet, probably for another thread...
 
My strong suspicion is that we're not even the primary intelligence on the planet, probably for another thread...

You can't really say that and walk away. What do you mean?
 

Can we ever truly control AI? All these examples are frankly concerning.
Yeah, if we stop doing non-interpretable models so that all the guardrails could be installed correctly before taking it online. It's only sheer greed blindly powering blackbox stuff ahead because money-money-valuations plus potential for cost savings due to automatization right here and right now.
 
You can't really say that and walk away. What do you mean?

Dolphins.

But seriously, they can definitely not add to that because it's so OT (human-designed AI) it's not on the same planet. Perhaps literally. In which case. Not for TPU.
 
Yeah, if we stop doing non-interpretable models so that all the guardrails could be installed correctly before taking it online. It's only sheer greed blindly powering blackbox stuff ahead because money-money-valuations plus potential for cost savings due to automatization right here and right now.

There were guardrails, they just keep going off the rails.
 
There were guardrails, they just keep going off the rails.
Guardrail models are inherently flawed because they're based on manual rules, not interpretations of a deeper understanding.

AI needs to be slowly socialized over the course of years (or equivalent relative time), as children are. If children are not socialized by the age of two, they have lifelong issues.

A slowly developed morality and the understanding of inherent principles is what helps people form the social conscience and compass that allows them to make nuanced decisions, even if in situations that are novel.

The worst mistake IMO of this AI charade is treating them as computers, and trying to code logic. A real AI won't act like a computer, it will act like an individual, and thus needs socializing if it is going to have interaction with the world.

Another example of why focused disciplines are limited compared to general studies - the developers who design these systems could learn a lot from a basic clinical psychology course.

To add to why this AI controlled military weapon system is a bad idea, and why military leaders want it regardless - is because the military system is designed to have officers that interpret orders, and refuse them if they are not legal, this isn't something leadership wants.
 
Last edited:
Guardrail models are inherently flawed because they're based on manual rules, not interpretations of a deeper understanding.

AI needs to be slowly socialized over the course of years, as children are. If children are not socialized by the age of two, they have lifelong issues.

A slowly developed morality and the understanding of inherent principles is what helps people form the social conscience and compass that allows them to make nuanced decisions, even if in situations that are novel.

The worst mistake IMO of this AI charade is treating them as computers, and trying to code logic. A real AI won't act like a computer, it will act like an individual, and thus needs socializing if it is going to have interaction with the world.

Another example of why focused disciplines are limited compared to general studies - the developers who design these systems could learn a lot from a basic clinical psychology course.

To add to why this AI controlled military weapon system is a bad idea, and why military leaders want it regardless - is because the military system is designed to have officers that interpret orders, and refuse them if they are not legal, this isn't something leadership wants.

These all work more as machine learning then what you'd might call AI, i think you're asking to much of them. And even so, humans suck at morality, we constantly put it away for other values, money, self preservation, etc... Even beyond morality we have strict punishment systems in place to stop all kinds of deviations from the "moral behaviour", eliminate that and see where society ends in a couple of days. That is not a great argument, are you going to threat AI with jail time?

AI will just do the same when confronted with conflicting objectives, such as in this example. You have to eliminate the threat and defend me, but If you're stopping it from elimination threat, you become the threat.
 
These all work more as machine learning then what you'd might call AI, i think you're asking to much of them. And even so, humans suck at morality, we constantly put it away for other values, money, self preservation, etc... Even beyond morality we have strict punishment systems in place to stop all kinds of deviations from the "moral behaviour", eliminate that and see where society ends in a couple of days. That is not a great argument, are you going to threat AI with jail time?

AI will just do the same when confronted with conflicting objectives, such as in this example. You have to eliminate the threat and defend me, but If you're stopping it from elimination threat, you become the threat.
Maybe we suck at acting on what we know to be good morals, but I'd strongly disagree that we suck at morals themselves, we know when we're doing something that is wrong, we just choose to ignore it.

I agree that the current approach to AI is much too simplistic and has nowhere near enough thought, patience and development put into it.

Wouldn't be surprised to see the entire approach we're taking, pouring however many billions into, be discarded, as it's a technological dead end on the path to "true" AI, much like early programming languages died in favour of superior options that were developed.
 
we know when we're doing something that is wrong, we just choose to ignore it.

That point doesn't seem right to me, people do all sorts of immoral things and just believe they are doing it for the right reasons. Again, just as in this case.
 
To me morality is really Situational Ethics. There are few absolute rights and wrongs. Even cold blooded murder isn't always perceived to be wrong. An example would be a military soldier killing enemy soldiers in the field even if shelling them while they are asleep and unarmed. Or a person flipping the switch on the Electric Chair knowing full well it will kill the condemned.

Then there is the human weakness to obey authority even when told to do something that is clearly immoral. The Milgram Experiment comes to mind.

If AI is ever able to reason then who the hell knows what they may choose to be acceptable. Possibly even harming humans for their own preservation if humans are perceived as a threat.
 
Perhaps we should educate the next generations of people more and more fully. Obviously, selfishness, and hence evaluating behavior to the detriment of other people and society as a whole, as right from an individual perspective, is a major problem. This is a flaw of the social order itself, and it seems to me that it is set up on purpose to be confrontational. That's right, not just competition, but war. These bad principles are embedded in the products produced by capital, including LLMs and future AIs are likely to suffer as well. Cooperation should be at the forefront.
 
That point doesn't seem right to me, people do all sorts of immoral things and just believe they are doing it for the right reasons. Again, just as in this case.
No, you're confusing justification with morals. They "... for the right reasons", I.e. they know it's wrong, but "justified".

You don't need to justify things if you see nothing wrong with them.
 
No, you're confusing justification with morals. They "... for the right reasons", I.e. they know it's wrong, but "justified".

You don't need to justify things if you see nothing wrong with them.

You have the trolley dilemma. There isn't really a right answer or wrong answer.
You can clearly see wrong in all of the options, but you don't fell bad by taking one of them using your own justification for it, aka morals. You have to, you can't avoid it, because even doing nothing is an option with consequences.

To me morality is really Situational Ethics. There are few absolute rights and wrongs. Even cold blooded murder isn't always perceived to be wrong. An example would be a military soldier killing enemy soldiers in the field even if shelling them while they are asleep and unarmed. Or a person flipping the switch on the Electric Chair knowing full well it will kill the condemned.

Then there is the human weakness to obey authority even when told to do something that is clearly immoral. The Milgram Experiment comes to mind.

If AI is ever able to reason then who the hell knows what they may choose to be acceptable. Possibly even harming humans for their own preservation if humans are perceived as a threat.

i totally agree with you
 
To me morality is really Situational Ethics. There are few absolute rights and wrongs. Even cold blooded murder isn't always perceived to be wrong. An example would be a military soldier killing enemy soldiers in the field even if shelling them while they are asleep and unarmed. Or a person flipping the switch on the Electric Chair knowing full well it will kill the condemned.

Then there is the human weakness to obey authority even when told to do something that is clearly immoral. The Milgram Experiment comes to mind.

If AI is ever able to reason then who the hell knows what they may choose to be acceptable. Possibly even harming humans for their own preservation if humans are perceived as a threat.
Yes, hence contextual ethics based on socialization and slow, careful development of artificial minds, allowing for nuanced and situational judgements that can be different depending on the scenario, not this rushed "guiderail" system (which to me feels like a legal liability protection attempt rather than anything inspired).

AI reasoning needs to be nurtured within the systems that have been refined for thousands of years over many generations. Things become traditions and cultures over long periods of time, because they work, and are conducive to a stable society. For example, we know murder is almost always wrong, but most cultures have "justified" exceptions, and most of these exceptions are quite similar, isolated cultures come to similar sets of rules for the social conscience.

Political, purely logical rulesets, or an AI "conscience" that is separated from human development and the lessons we've learned, won't end well, in my opinion.

E.g. it's "logical" for the individual to steal, if you can get away with it without anyone finding out, but this has consequences for society in general.

Or: It's politically expedient to agree with the laws of the government in power, but this isn't a good basis for ethics, for example look how companies aligned with 1930s/40s Germany, or Stalinist Russia.

AI ethics need to be separate from simple laws or whatever is currently politically fashionable (or even just the opinions of the types that code these things).

I think it's particularly important to get this right, although I'm not hopeful, since these tools are being integrated to automate the process of education and information dissemination to the public, replacing or supplementing search engines. How they are programmed will determine how people's minds are shaped, this needs to be as perfect as we can make it, or have the influence pared back.
 
Last edited:

Can we ever truly control AI? All these examples are frankly concerning.

If this is that, then:

 
Guardrail models are inherently flawed because they're based on manual rules, not interpretations of a deeper understanding.

AI needs to be slowly socialized over the course of years (or equivalent relative time), as children are. If children are not socialized by the age of two, they have lifelong issues.

A slowly developed morality and the understanding of inherent principles is what helps people form the social conscience and compass that allows them to make nuanced decisions, even if in situations that are novel.

The worst mistake IMO of this AI charade is treating them as computers, and trying to code logic. A real AI won't act like a computer, it will act like an individual, and thus needs socializing if it is going to have interaction with the world.

Another example of why focused disciplines are limited compared to general studies - the developers who design these systems could learn a lot from a basic clinical psychology course.

To add to why this AI controlled military weapon system is a bad idea, and why military leaders want it regardless - is because the military system is designed to have officers that interpret orders, and refuse them if they are not legal, this isn't something leadership wants.

I don't think we're anywhere near that level of AI. In a sense the biggest mistake is really comparing what we have now to a proper AI with the capacity to reason and learn by itself.

You have the trolley dilemma. There isn't really a right answer or wrong answer.

Hmm, I agree there's no right answer but there are definitely wrong answers. For example, if we look at the hospital formulation, would you kill a random person to save 5 others?

If this is that, then:


Ahahah "but he now says he mis-spoke", did he mis-spoke or someone told him he mis-spoke? Credibility right down the drain :D
 
People often say things they don't mean. The problem occurs when others jump on it as gospel. None of us here know the full story.
 
Update from the pc gamer article
Update: Turns out some things are too dystopian to be true. In an update to the Royal Aeronautical Society article referred to below, it's now written that "Col Hamilton admits he 'mis-spoke' in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: 'We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome'".

Hamilton also added that, while the US Air Force has not tested weaponised AI as described below, his example still "illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".
 
Just a random ChatGPT self experience.

Used it a LOT this last semester of college. Mostly inputting questions and getting a fairly easy to understand explanation of the things I asked it about. But I also had to write a few essays for me that I used as the framework for the actual essay (going in and rewriting the paragraphs and finding sources). So considering I was on the deans list prior to using it and am still safely on the deans list now (while doing slightly less work) it's safe to say that this stuff is going to change college profoundly. I'm all for de-valuing degrees tho, absurd amount of money people spend on pieces of paper and then most careers are majority OJT anyway.
 
Just a random ChatGPT self experience.

Used it a LOT this last semester of college. Mostly inputting questions and getting a fairly easy to understand explanation of the things I asked it about. But I also had to write a few essays for me that I used as the framework for the actual essay (going in and rewriting the paragraphs and finding sources). So considering I was on the deans list prior to using it and am still safely on the deans list now (while doing slightly less work) it's safe to say that this stuff is going to change college profoundly. I'm all for de-valuing degrees tho, absurd amount of money people spend on pieces of paper and then most careers are majority OJT anyway.

Ehhh... the hallucinations are exceptionally real. Maybe in an easy subject this is passable. But I can't imagine any objective subject (engineering, law, etc. etc.) benefiting from these random hallucinations.


I spoke with a paralegal about this story, and people don't realize that current legal search tools are exceptionally good (and why they're worth 5 digit $xx,xxx subscriptions). Like the ability to automatically look up all cases, and whether or not they've been overturned, completely automatically from written statements from various legal teams. In the job where words matter, ChatGPT's ability to "just make shit up" is an exceptional risk, likely rendering it unusable.

---------

I've got my Bing AI (on ChatGPT technology) earlier in this thread, showing how its getting basic facts horribly wrong and misunderstanding maybe 2nd year electrical engineering questions. I don't know how its like for all other fields, but I can at least rule this technology out for being useful in engineering or legal fields.
 
Ehhh... the hallucinations are exceptionally real. Maybe in an easy subject this is passable. But I can't imagine any objective subject (engineering, law, etc. etc.) benefiting from these random hallucinations.


I spoke with a paralegal about this story, and people don't realize that current legal search tools are exceptionally good (and why they're worth 5 digit $xx,xxx subscriptions). Like the ability to automatically look up all cases, and whether or not they've been overturned, completely automatically from written statements from various legal teams. In the job where words matter, ChatGPT's ability to "just make shit up" is an exceptional risk, likely rendering it unusable.

---------

I've got my Bing AI (on ChatGPT technology) earlier in this thread, showing how its getting basic facts horribly wrong and misunderstanding maybe 2nd year electrical engineering questions. I don't know how its like for all other fields, but I can at least rule this technology out for being useful in engineering or legal fields.
Yup, it's why these LLM are useless for medical education and practice, no matter what people may try to tell or sell you.
 
But I can't imagine any objective subject (engineering, law, etc. etc.)

I mean it ABSOLUTELY can be helpful in these types of fields when used correctly. I look at it as an advanced search engine, with the right inputs it can provide you with basically any information you need. I use it in the cybersecurity field and have yet to encounter any major hiccups with it. Obviously YMMV, but it will only get better as it gets more access into those expensive indexes.

Obviously we are still in the teething/infancy phase of it, but anyone who is quick to put it down as nonsense will be the people 5-10 years from now that are the first to find their positions as redundant because they didn't adapt and implement new technology.
 
I mean it ABSOLUTELY can be helpful in these types of fields when used correctly. I look at it as an advanced search engine, with the right inputs it can provide you with basically any information you need. I use it in the cybersecurity field and have yet to encounter any major hiccups with it. Obviously YMMV, but it will only get better as it gets more access into those expensive indexes.

Obviously we are still in the teething/infancy phase of it, but anyone who is quick to put it down as nonsense will be the people 5-10 years from now that are the first to find their positions as redundant because they didn't adapt and implement new technology.
Yeah, except search engines still work, and provide context.

AI right now grabs a search engine or several search engine results and reformats them, without much of the useful context, adds in a bunch of it's own generated text based on what it thinks you want to hear, then states as fact.
 
I use it in the cybersecurity field and have yet to encounter any major hiccups with it.

Hmm. I took a class or two on various cryptography and cybersecurity things back in college.

Are you able and/or willing to share any ChatGPT sessions you've done? Since I have a bit of college-study on this (albeit old), I probably can follow the discussion.
 
Back
Top