
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried? (93 voters)

  • Yes, but I'm excited anyway! Votes: 7 (7.5%)
  • Yes, worried about the potential problems/abuses. Votes: 58 (62.4%)
  • No, not worried at all. Votes: 6 (6.5%)
  • No, very excited about the possibilities! Votes: 7 (7.5%)
  • Indifferent. Votes: 10 (10.8%)
  • Something else, comment below. Votes: 5 (5.4%)
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
The US? Asleep at the wheel, if you ask me.
Some of us would strongly agree if we didn't know there are corporate economic wheels grinding...

That situation is not related to the AI discussion. Please do not derail the thread.
"AI" didn't take anyone's jobs away. Dumbass bosses who were sold on non-functioning technology took away people's jobs.
Semantics and moose muffins. Company "bosses" bought into AI and laid off the workers AI replaced. AI took those jobs regardless of who made the decision.
 
Joined
Sep 17, 2014
Messages
22,892 (6.07/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
And yet: https://apnews.com/article/google-s...e-department-84e07fec51c5c59751d846118cb900a7



We're past the equivalent of 1870, but we aren't at 1910 yet, when the public was fully aware of the problem. I'd say we're probably somewhere in the 1900s, when novelists and other writers were beginning to write and/or publish stories like "The Jungle". There's an awareness of the problem, but we haven't quite made the case to the general public yet.
Google and its dominant search engine... talk about misfiring... the issue is the social media barons. Google isn't chewing on the foundations of society; they just want markets.
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
Not everyone thought it was "thinking". It was just running through sequences of possible moves on a moment-by-moment basis. No "thinking" was taking place.

That has nothing to do with the concerns being expressed. For you to say that shows you haven't paid attention to the bulk of the ongoing conversation.

You are omnislashing my posts and messing up the fucking point of my writing. Read the two lines together and don't respond to them line-by-line.

Chess, in the 1980s, was seen as "thinking". Then in the '90s, Deep Blue beat the human world chess champion. Human ego, however, is a thing, so people changed the definition of "thinking" away from chess and towards Go, handwriting recognition and other such tasks. The AI community then pursued these new subjects and, in the '00s and '10s, finally defeated humans at THOSE tasks too.

This dance has been going on for decades. The only thing new is that the pushback / human-ego portion of this debate is finally slowing down. I think people are finally recognizing that computers are really good at tasks we define as "thinking" (especially after AI researchers put a lot of effort into them).

But computer proofs (so complex that no human can possibly follow the logic) have been a thing since the 1970s. Computer-assisted "thinking" about logic puzzles and graph coloring is used every day in our compilers (translating C++ into machine code). Etc. etc. Computers are a hell of a lot better than humans at a lot of things you'd call "thinking". Always have been, always will be.

--------

The next step is to work with these specific algorithms. LLMs cannot beat you at chess; they're actually quite bad at it. Chess AIs cannot beat you at language. Generative AIs cannot beat you at programming (generative AIs make the "paintings"). At best, ChatGPT can maybe write some generative-AI prompts, and maybe these tools can all be cobbled together to work as a whole, but who knows? That's the next step towards advancement.
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
But until now they've never been able to "out-think" us. We are on the bleeding edge of that reality.
The problem is that AI lacks the capability to judge the correctness of information. It writes stuff out of sheer quantity of available information. Watch the video I linked above. ;)
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
The problem is that AI lacks the capability to judge the correctness of information. It writes stuff out of sheer quantity of available information. Watch the video I linked above. ;)

LLMs are specifically an AI designed to predict the next word in a discussion.

The innovation of ChatGPT was treating "the end of a post" as a specific word/token to think about. ChatGPT can be trained on things like tweets and recognize the beginnings and ends of posts. This means it can "predict" a sentence, and then the "end of post", to give the other person a chance to talk.

That's all it is: an incredibly good "autocorrect", with features that make it very good for internet discussion and other situations with beginnings and ends of text. But it's still just that: a text predictor at its heart. That's the way the model is designed.
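For the curious, here's roughly what "text predictor with an end-of-post token" means, as a minimal sketch: a toy bigram model in Python that treats the end of a post as just one more token to predict. The training "posts" and token names here are invented for illustration; real LLMs do the same job with vastly larger models, contexts and corpora.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram "text predictor" with an explicit
# end-of-post token, standing in for what an LLM does at vastly larger scale.
posts = [
    "i think the new gpu runs hot",
    "i think the new driver fixed it",
    "the new gpu is loud but fast",
]

EOP = "<end-of-post>"  # "end of a post" treated as just another token

counts = defaultdict(Counter)
for post in posts:
    tokens = post.split() + [EOP]
    for prev, nxt in zip(["<start>"] + tokens, tokens):
        counts[prev][nxt] += 1

def predict_post(max_tokens=20):
    """Sample tokens until the model itself 'decides' the post is over."""
    token, out = "<start>", []
    for _ in range(max_tokens):
        choices = counts[token]
        token = random.choices(list(choices), weights=choices.values())[0]
        if token == EOP:
            break
        out.append(token)
    return " ".join(out)

print(predict_post())  # e.g. "i think the new gpu is loud but fast"
```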

-------

I think LLMs have shown that "text predictors" are innately forced to learn the grammar and structure of language once they reach a certain power level. This is certainly exciting. But it's not entirely clear how to use this understanding yet. IMO, the "embeddings" these LLMs assign to any text are the next step forward. It seems like these embedding vectors can be used to guesstimate understanding, or maybe not? More research required.

But I'm seeing some useful stuff with computer algorithms being applied to embeddings, rather than the dumbasses who are just playing text-predictor with a chatbot. If the text-predictor stuff becomes useful eventually, then the books (and such) will be written about it given enough time. But as far as a computerized understanding of human language goes, the embeddings get us closer than all the other crap people are talking about.
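To make the "embeddings" idea concrete: texts get mapped to vectors, and geometric closeness is used as a stand-in for relatedness. A minimal sketch with made-up 4-dimensional vectors (real models use hundreds or thousands of dimensions, learned from data):

```python
import math

# Hypothetical 4-dimensional "embeddings"; the numbers are invented
# purely for illustration.
embeddings = {
    "gpu":      [0.9, 0.1, 0.3, 0.0],
    "graphics": [0.8, 0.2, 0.4, 0.1],
    "sandwich": [0.0, 0.9, 0.1, 0.8],
}

def cosine(a, b):
    """Cosine similarity: ~1.0 = same direction, ~0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(embeddings["gpu"], embeddings["graphics"]))  # high (~0.98)
print(cosine(embeddings["gpu"], embeddings["sandwich"]))  # low (~0.10)
```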
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)
Semantics and moose muffins. Company "bosses" bought into AI and laid off the workers AI replaced. AI took those jobs regardless of who made the decision.
McDonald's experimented with AI-assisted voice-recognition drive-thrus at a couple of their restaurants. They ended up re-hiring their staff because the AI was only accurate 4 times out of 5, which any human far exceeds. I'll try to find the article.
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
You are omnislashing my posts and messing up the fucking point of my writing.
Yes, it's called picking apart your statements and I'm doing it deliberately to offset the statements.
Read the two lines together and don't respond to them line-by-line.
I'll respond as I damn well please, thank you very much.
Human Ego however is a thing, so people changed the definition of "thinking" away from Chess
Wrong. Many people NEVER called it "thinking" (because it isn't), myself included, as we knew how that victory was achieved. No computer has EVER out-thought a human being. Out-performed us, yes, but not out-thought.

The problem is that AI lacks the capability to judge the correctness of information.
Exactly. The problem is that this aspect is on the verge of changing.
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)
Yes, it's called picking apart your statements and I'm doing it deliberately to offset the statements.

I'll respond as I damn well please, thank you very much.

Wrong. Many people NEVER called it "thinking", myself included, as we knew how that victory was achieved. No computer has EVER out-thought a human being. Out-performed us, yes, but not out-thought.
Let's stop for a second and think... what is thinking? What is intelligence? How do you define it?

People here seem to be defining it by "being able to solve a problem using creative methods". I think that's very far from the truth. If all we did was solve problems, then we'd still be hunting mammoths with spears. Very advanced spears.

I think "thinking" also involves reasoning, logical deduction, comparing different topics, making value judgements, forming opinions, recognising the context of the task, thinking outside of the task, forming one's own thoughts without prior input, asking questions, and knowing the limits of one's knowledge. I haven't seen any AI do any of this.
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
Wrong. Many people NEVER called it "thinking", myself included as we knew how that victory was achieved. No computer has EVER out-thought a human being. Out-performed us, yes, but not out-thought.

Okay, smart guy. What is thinking, and what were your proxies for thinking over the past 50 years?

The AI community has risen to every challenge the public has thrown at it, constantly. It took a myriad of algorithms, deep study, incredibly powerful computers and all. But it's always been about "thinking" better than a human at various tasks.

And once humans were no longer the benchmark, AIs became the benchmark for each other. Humans will NEVER solve puzzles (Sudoku and similar) as well as a computer today, because modern computer 3SAT solvers and constraint programmers are superior. Humans will NEVER build an airplane route faster or more effectively than computer AI algorithms (though today these are called "graph algorithms", there was a point when this was considered AI). Etc. etc.

The entire history of the AI community is just people saying "X is thinking", AI programmers making a computer solve X, and then humans saying "well, X wasn't really thinking, because a computer did it". Etc. etc. for decades. So forgive me if I don't believe you at all.

I think "thinking" also involves reasoning, logical deduction, comparing different topics, making value judgements, forming opinions, recognising the context of the task, thinking outside of the task, forming one's own thoughts without prior input, and asking questions. I haven't seen any AI do any of this.

You cannot defeat the 3SAT solvers in reasoning or logical deduction.

When computer chips are made, you need to lay out 300 billion transistors in a way that lets them all connect to each other, minimizes redundancies, etc. etc. It's a very large logic puzzle. Humans solve these puzzles with K-maps:

[attached image: Karnaugh map example]

But computers can track bits, bytes and transistors better than humans ever could. Today, these solvers are building our chips, laying out our designs, etc. etc.
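For anyone wondering what a "3SAT solver" actually does, here's a minimal brute-force sketch. The clause set is invented for illustration, and real solvers (the DPLL/CDCL family) prune this search aggressively instead of enumerating every assignment:

```python
from itertools import product

# A clause is a tuple of literals: a positive int means the variable,
# a negative int means its negation. This instance is made up.
clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
variables = sorted({abs(lit) for clause in clauses for lit in clause})

def satisfies(assignment, clause):
    # A clause holds if any of its literals evaluates to True.
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

# Try every True/False assignment until one satisfies all clauses.
for values in product([False, True], repeat=len(variables)):
    assignment = dict(zip(variables, values))
    if all(satisfies(assignment, c) for c in clauses):
        print("satisfiable:", assignment)
        break
else:
    print("unsatisfiable")
```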
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)
You cannot defeat the 3SAT solvers in reasoning or logical deduction.

Let me correct my above post, then: Computers can think, but they're not intelligent. Solving problems is thinking, not intelligence.
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
Let's stop for a second and think... what is thinking? What is intelligence? How do you define it?
Computers such as Deep Blue were programmed with tables of moves and the probabilities of those moves on a progressive results stack. They worked through those tables very quickly. This is not thinking; it is program instruction execution. No computer has done anything more than that yet. But we're edging closer.
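For what it's worth, a toy sketch of that kind of search: exhaustive lookahead plus a scoring rule, in the spirit of minimax. The game here (take 1 to 3 sticks; whoever takes the last stick loses) is invented for brevity; Deep Blue-era chess engines did the same thing over a vastly larger tree, with pruning and hand-tuned evaluation tables:

```python
# No understanding anywhere: just lookahead plus a scoring rule.
# Game (invented for illustration): players alternate taking 1-3 sticks;
# whoever takes the last stick loses.

def best_move(sticks):
    """Return (score, move): score +1 if the mover can force a win, else -1."""
    best = (-2, None)
    for take in (1, 2, 3):
        if take > sticks:
            break
        if take == sticks:            # taking the last stick loses
            score = -1
        else:                         # opponent's best result, negated
            score = -best_move(sticks - take)[0]
        if score > best[0]:
            best = (score, take)
    return best

print(best_move(10))  # (1, 1): the mover forces a win by taking 1 stick
```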

Okay smart guy. What is thinking and what were your proxies to thinking over the past 50 years?
What was that about ego?
The AI community has risen to every challenge the public has thrown at it
Yet currently it's only executing code based on predictable probabilities. Nothing more. It's getting increasingly accurate and powerful, but it's not actual intelligence yet. This is why AI-generated photos are still riddled with imperfections and flaws. Computer "AI" can't "see" what it's making in the simulated photo; it's just predicting number sequences based on the number models it's provided.

Computers can think
No, they can not. See above. They can calculate, very swiftly and in massive quantities, but "thinking" is not what they are doing.
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
Let me correct my above post, then: Computers can think, but they're not intelligent. Solving problems is thinking, not intelligence.

So I think people want to discuss the philosophy problem of "what is intelligence".

There are two overall arguments in the AI community. There's "weak AI": AIs that solve particular problems. And then there's this vague-as-hell kind of bullshit term that people call "strong AI", "AGI", etc. etc. You know, the "real" intelligence that we've only seen in made-up movies and the fictional stories of Isaac Asimov.

Despite being called "strong AI", it's never been accomplished, and its definition seems to change again and again.

"Weak AI", despite its name, has always been where the advancements in artificial intelligence have come from. Even today, LLMs and generative AIs constitute research from the "weak AI" side of the discussion. LLMs are a tailor-made, automated machine-learning tool for predicting the next word of text.

As it turns out, LLMs have gained "superpowers" no one was expecting. If you ask an LLM to translate a word into Chinese or Japanese, it seems to be able to do that. (!!!!) Which has suddenly got people treating LLMs as if they were "strong AI".

And then you ask them to multiply 532 x 7654, and they can't do that, because I picked those two numbers at near-random and I bet they aren't in the training set. So unless the LLM has a specific Python interpreter to take mathematics questions, plug them into a programming language and work with them that way, it will fail, as it's impossible to "text predict" mathematics or other equations.
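A minimal sketch of that "hand the math to an interpreter" workaround: detect an arithmetic question and compute it exactly, instead of asking a text predictor to guess digits. The routing rule below is a made-up toy, not how any real chatbot's tool use is implemented:

```python
import re

# Toy router: if the question looks like "A x B", compute it exactly;
# otherwise pretend to fall back to the text predictor.
def answer(question: str) -> str:
    match = re.fullmatch(r"\s*(\d+)\s*[x*]\s*(\d+)\s*", question)
    if match:                        # arithmetic: compute, don't predict
        a, b = map(int, match.groups())
        return str(a * b)
    return "<fall back to the text predictor>"

print(answer("532 x 7654"))  # 4071928, computed exactly
```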

So where my overall posts are going is: "strong AI" is bullshit, it always was bullshit, but it's been around since the dawn of AI. "Weak AI" is where all the gains have been. LLMs are a marvel of "weak AI" practitioners (people just trying to solve one problem), even though we got a whole bunch of unexpected superpowers from their work. But it's still evident that when you take an LLM outside its zone of expertise (e.g. start asking about multiplication of randomized 3- or 4-digit numbers likely outside the internet's training data), it collapses.

What was that about ego?

So you don't have a proxy for "thinking" or a way to measure it. Gotcha. If you don't have a means of measuring "thinking" or seeing if it's there, then the discussion is worthless. Even if "thinking" were to happen, you wouldn't be able to argue that it happened (or that it isn't happening).
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)
No, they can not. See above. They can calculate, very swiftly and in massive quantities, but "thinking" is not what they are doing.
Exactly my point... only that some of us have different definitions of the word "thinking". It's hard to have a conversation when even the basics aren't laid out properly.
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)



So you don't have a proxy for "thinking" or a way to measure it. Gotcha. If you don't have a means of measuring "thinking" or seeing if it's there, then the discussion is worthless. Even if "thinking" were to happen, you wouldn't be able to argue that it happened (or that it isn't happening).
Just like with Lex, exactly my point.
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
Oh, OK, more ego then? You're missing the context of the point being made, which is why you're not seeing it.

I have plenty of definitions of thinking. What I'm trying to test is whether you have any, or whether you're just blowing smoke on this subject.

Strong vs. weak AI was already a deeply written-about subject 20+ years ago. It somewhat bores me to see us come back to it today, but if it's new to y'all, I at least laid out a summary a post or two ago. IMO, none of this stuff is new. We're just in a new wave of tech dominance (see the '90s, when tech-bros were once again at the top of the media hype cycle, or the '80s, when the first AI bubble / winter / collapse happened). It's almost hilarious how predictable these cycles are becoming.

Including the sheer number of newcomers who get caught up in the new hype each time. That's also part of the cycle.

-------

Oh, and you cut out my most important words.

If you don't have a means of measuring "thinking" or seeing if it's there, then the discussion is worthless. Even if "thinking" were to happen, you wouldn't be able to argue that it happened (or that it isn't happening).

This is why "weak AI" continues to make gains and "strong AI" fails.

Because at least "weak AI" sets goals that can be accomplished and worked towards. An interesting trick by the "strong AI" group is now taking the work of "weak AI" practitioners and suddenly calling it evidence of strong AI. I don't think I've seen that before. (See how LLMs and generative AIs are now being hyped up as a proxy for AGI.) I'm hoping people don't fall for the bullshit.
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
that some of us have different definitions of the word "thinking".

Regardless of whose definition you want to reference, the result is the same. "Thinking" requires a conscious intelligence: an ability to reason and critically solve problems through abstract examination. Computers can't do that yet and have never been able to. How far away we are from that point is open for debate, and we're getting close enough to it that we need to be very concerned and very careful, hence this discussion.

I'm hoping people don't fall for the bullshit.
On that point we can agree!
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)

Regardless of whose definition you want to reference, the result is the same. "Thinking" requires a conscious intelligence: an ability to reason and critically solve problems through abstract examination. Computers can't do that yet and have never been able to. How far away we are from that point is open for debate, and we're getting close enough to it that we need to be very concerned and very careful, hence this discussion.
My definition is more broad (see above). I don't think computers will get there in our lifetime, if ever.
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
My definition is more broad (see above).
I understood what you intended to say; the effort earlier was to keep things more grounded and on point in comparing computers to humans. However...
I don't think computers will get there in our lifetime, if ever.
... we do differ somewhat on if/when it will happen. I think it has a fair chance of happening. When is a matter of timing and circumstances.
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
How far away we are from that point is open for debate, and we're getting close enough to it that we need to be very concerned and very careful, hence this discussion.

I don't believe you have an adequate definition of "thinking". But let's test it.

Which algorithm comes closest to demonstrating the "thinking" that makes you worried? I know it's not chess bots or automated theorem provers. Are you actually worried about the progress in LLMs / text predictors?
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
I don't believe you have an adequate definition of "thinking". But let's test it.

Which algorithm comes closest to demonstrating the "thinking" that makes you worried? I know it's not chess bots or automated theorem provers. Are you actually worried about the progress in LLMs / text predictors?
This is an example of you arguing ego, a debate I will not engage in. Seeing a pattern yet?
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
This is an example of you arguing ego, a debate I will not engage in. Seeing a pattern yet?

Yeah. Just more "Strong AI" bullshitting, as usual. Not from you per se, but the standard crap over the decades on this subject.

And that's the kind of bullshit I prefer making clear to everyone in the discussion.
 
Joined
Jul 5, 2013
Messages
28,571 (6.78/day)
Yeah. Just more "Strong AI" bullshitting, as usual. Not from you per se, but the standard crap over the decades on this subject.

And that's the kind of bullshit I prefer making clear to everyone in the discussion.
You are most certainly making things clear...

What is at play is the perception of "thinking" and intelligence. One cannot refer to computer calculations and number crunching in general as thinking or intelligence. Yet all computers are programmed by beings of intelligence. So how long will it be before the high-level AI you refer to as "strong AI" is able to start thinking in a genuine way? That is what I was asking.
 
Joined
Jan 14, 2019
Messages
13,434 (6.13/day)
I understood what you intended to say; the effort earlier was to keep things more grounded and on point in comparing computers to humans. However...

... we do differ somewhat on if/when it will happen. I think it has a fair chance of happening. When is a matter of timing and circumstances.
In most science fiction, there's a point when the AI asks "why" before doing a task. That is the point when I start calling it intelligent. Until then, it's only artificial. I don't think we'll get there anytime soon, if ever, but feel free to disagree (disagreeing is just another sign of intelligence that AI isn't capable of, by the way). :)
 
Joined
Apr 24, 2020
Messages
2,774 (1.61/day)
In most science fiction, there's a point when the AI asks "why" before doing a task. That is the point when I start calling it intelligent. Until then, it's only artificial. I don't think we'll get there anytime soon, if ever, but feel free to disagree (disagreeing is just another sign of intelligence that AI isn't capable of, by the way). :)

Science fiction tells stories about the human condition, using "non-human AIs" to help the audience disconnect from the machines. It's a literary technique, not a scientific one.

It leads to great stories and is a good starting point for philosophical debates. But just remember: it's science fiction, emphasis on the fiction. I believe Isaac Asimov himself said that the "Three Laws of Robotics" are actually about how humans make decisions (don't hurt others, listen to your boss, and protect yourself). The stories are about how this simple heuristic can violate human values even when the individual follows the three rules as closely as they can (even for superhumanly advanced fictional AIs).

----------

EDIT: In the real world, I truly believe that the human ego is so powerful that we'd never allow "strong AI" to exist. We will continue to come up with new excuses for why each new AI accomplishment is "not intelligence". Just an eternal treadmill of changing definitions.
 