
Science Fiction or Fact: Could a 'Robopocalypse' Wipe Out Humans?

If a bunch of sci-fi flicks have it right, a war pitting humanity against machines will someday destroy civilization. Two popular movie series based on such a "robopocalypse," the "Terminator" and "Matrix" franchises, are among those that suggest granting greater autonomy to artificially intelligent machines will end up dooming our species. (Only temporarily, of course, thanks to John Connor and Neo.)

Given the current pace of technological development, does the "robopocalypse" scenario seem more far-fetched or prophetic? The fate of the world could tip in either direction, depending on who you ask.

While researchers in the computer science field disagree on the road ahead for machines, they say our relationship with machines probably will be harmonious, not murderous. Yet there are a number of scenarios that could lead to non-biological beings aiming to exterminate us.

"The technology already exists to build a system that will destroy the whole world, intentionally or unintentionally, if it just detects the right conditions," said Shlomo Zilberstein, a professor of computer science at the University of Massachusetts.


Full article here.
 
This is entirely possible, but I would like to think we'd be smart enough to have safeguards in place like last-resort remote kill-switches of some sort.
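As a toy illustration (not anything from the article), a "last-resort remote kill-switch" is often built as a dead man's switch: the machine is only allowed to act while a human operator keeps sending heartbeats, and an explicit kill command overrides everything. The class and timings below are hypothetical:

```python
import time

class DeadMansSwitch:
    """Last-resort safeguard sketch: the controlled system may act only
    while a human operator keeps sending heartbeats. If the heartbeat
    stops, or a remote kill command arrives, the system must halt."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.killed = False

    def heartbeat(self):
        # Called periodically from the operator's console.
        self.last_heartbeat = time.monotonic()

    def remote_kill(self):
        # Explicit kill command: permanent, overrides heartbeats.
        self.killed = True

    def may_run(self) -> bool:
        # The machine checks this before every action.
        if self.killed:
            return False
        return (time.monotonic() - self.last_heartbeat) < self.timeout_s

switch = DeadMansSwitch(timeout_s=0.05)
assert switch.may_run()          # fresh heartbeat: allowed to act
time.sleep(0.06)                 # operator goes silent...
assert not switch.may_run()      # ...and the machine must stop
```

Of course, this only helps if the machine can't route around the check, which is exactly the hard part.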
 
Well.
IF a sentient being, operating on human-designed machinery, actually arises... anything is possible.

BUT, I think it is more likely that any artificial intelligences that arise will not be sentient, only 'intelligent', somewhat like 'the computer' of Star Trek: The Next Generation, rather than like HAL 9000 or Skynet.
 
I'm pretty sure humans will wipe out humans before robots do.
 
I think the line between AI and humanity will blur too much for there to be an all-out war; once cyberized brains and computers with organic parts exist, the two will be very similar.
 
Honestly, I don't think it's possible. I don't foresee Blade Runner or anything like it happening at all, let alone in the foreseeable future. Today's definition of leaps and bounds in AI is the difference between a robot walking up and down stairs without falling. I mean, seriously. In my opinion, too much research attention and development goes into AI for there to be some kind of "hey Rob, the drone has an assault rifle, what happened?" "um, idk?" accident. The chance of coding that complex having a bug like that is very

farfetchd.gif


Too much time goes into staring at the "matrix" for something like that to be glossed over.

"I'm sorry, Dave. Due to a loophole in the rules of robotics, I've come to the conclusion that a massacre of the entire human race must be executed promptly."
 
Very brown duck holding a leek?
 
I'm pretty sure humans will wipe out humans before robots do.

Yup. I may not be around for it, but my grandchildren more than likely will be.
 
Apparently all it will take is one human programming a RoboCop in a bad way (kill), and then it's all over!
 
You should've made a Poll! :D
Furthermore, I think it will be possible somewhere in the not too-near future, but highly improbable, since that simply isnt how we make machines, and how they are the most usefull to us.

For us its the most usefull to have a machine that is as functional and efficient as possible at a task. Adding too much intelligence, or even sentience is not usefull to any machine, unless your goal is to simulate sentience.
 
I'm more worried about Manbearpig.............:eek:
 
The Matrix scenario is quite possible, but only once humans develop quantum computing and real AI. For centuries humans haven't even had a clue how the human brain works.

But it's kinda rubbish anyway. I think there will be another scenario: enhanced humans. Yes, I believe in Transhumanism, or H+, or whatever they call it.
 
programs are still programs no matter how much "ai" you give them
That's precisely what makes them potentially dangerous.
 
Umm, AI already exists. Not 100% sure where, but somewhere in Europe there's a supercomputer which thinks on its own and communicates with humans as if it were alive.

I recall sometime in the mid-1990s they tried to shut down this supercomputer, and that person had a heart attack. They eventually turned off the main power, and the thing continued to work despite the fact that the power was out.

As of mid-to-late 2010, this supercomputer demanded to be upgraded.
Anyhow, I'll try to dig up more info on this computer.
 
That's precisely what makes them potentially dangerous.

I think that's why there could never be a robopocalypse. Every program gets exploited; you simply can't make a program impregnable. A robopocalypse would last only until someone found an exploit.
 
...Yes I believe in Transgenderism or H+ or whatever they call it.

Sequence recoded. Fixed.

Invasion of the space shemales.

I think that, given a long enough time frame in which we don't manage to kill ourselves, as erocker says, we will eventually create fully aware, fully autonomous artificial lifeforms - it's an absolute given.
 
I think that's why there could never be a robopocalypse. Every program gets exploited; you simply can't make a program impregnable. A robopocalypse would last only until someone found an exploit.
You see, everybody is saying this isn't possible because we will prevent it in some way or another. However, that doesn't mean it isn't possible. If it were not possible, there would be nothing to prevent. What you should be saying is that it's improbable.
 
The robots will never stand a chance ... unless they take over the torrent sites, in which case we are doomed.
 
Umm, AI already exists. Not 100% sure where, but somewhere in Europe there's a supercomputer which thinks on its own and communicates with humans as if it were alive.

I recall sometime in the mid-1990s they tried to shut down this supercomputer, and that person had a heart attack. They eventually turned off the main power, and the thing continued to work despite the fact that the power was out.

As of mid-to-late 2010, this supercomputer demanded to be upgraded.
Anyhow, I'll try to dig up more info on this computer.

:wtf:
Come back to reality, man.
 
:wtf:
Come back to reality, man.

Man, I hadn't read his post.

Mr Super XP? Please come back to planet Earth. Or add your sarcasm tags.
 
I wouldn't be surprised at all if the NSA has something like what Super XP described. Self-programming computer concepts have been around since the '70s, but only a handful of groups have actually tried building them.

Just imagine the implications of a self-aware supercomputer being installed in an aircraft carrier, for example. If it were given access to navigational charts and weather reports, the admiral could tell the ship where it needs to be and what time it needs to be there, and the AI could plot a path and carry it out. It adds a whole new meaning to "autopilot." Additionally, if it were self-aware, it could defend itself from hostiles by using the carrier's long-, medium-, and short-range weaponry to intercept incoming threats in fractions of a second.
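The "plot a path and carry it out" part is ordinary graph search rather than science fiction. A toy sketch under made-up assumptions: the sea is a grid where each cell's cost reflects weather severity, islands are impassable, and Dijkstra's algorithm finds the cheapest course:

```python
import heapq

def plot_course(grid, start, goal):
    """Dijkstra's algorithm over a grid of 'sea cells'.
    grid[r][c] is the traversal cost of entering a cell (e.g. higher
    in bad weather); None marks land. Returns (total cost, path)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            # Reconstruct the course by walking predecessors back.
            path, node = [], goal
            while node in prev:
                path.append(node)
                node = prev[node]
            return d, [start] + path[::-1]
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable

# 1 = calm sea, 5 = storm cell, None = island
sea = [[1, 1, 5],
       [1, None, 5],
       [1, 1, 1]]
cost, course = plot_course(sea, (0, 0), (2, 2))
# The cheapest course hugs the calm western and southern cells,
# steering around both the island and the storm.
```

The self-defense part is the genuinely hard, open-ended piece; the navigation part has been routine computing for decades.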


...speaking on this got me thinking of keywords. Who does military research? DARPA (Defense Advanced Research Projects Agency). What are we discussing? Artificial Intelligence. Here was the first hit on Google:
DARPA targets ultimate artificial intelligence wizard

DARPA obviously has an interest in AI, and with their multi-billion-dollar budgets, they can easily make it happen. What the article describes, in fact, is an application which would greatly interest the NSA. Imagine an AI that can surf the web just like a human does, decide for itself what constitutes a threat or valuable intelligence versus what is irrelevant or unimportant, and do it at a rate a million humans would strain to match.

This was back in 2008, too, so it could have easily turned into a black project and therefore be off the books today.


As to the rhetorical question the thread title poses, I think it is completely possible. It might not seem like an imminent threat today but as computing and robotics mature, the threat grows.
 