
Science Fiction or Fact: Could a 'Robopocalypse' Wipe Out Humans?

Sure, humans are very fragile. All it would take is one hostile "AI" computer to get ahold of a nuke, and GG folks
 
Good thing nuclear missile silos are largely equipped with 1960s technology. XD

The rest have to be strapped into an aircraft which, as of right now, definitely requires human involvement.


...oh crap, there's a new line of nuclear submarines coming: SSBN(X). :eek:

http://defensenewsstand.com/NewsSta...7-billion-to-acquire-operate/menu-id-720.html
http://en.wikipedia.org/wiki/SSBN-X_future_follow-on_submarine

...even more concerning is that I see no mention of how many people are supposed to man these ships--another excellent place to use an AI. Just like a flying drone, these things just patrol and patrol and patrol. Without a crew on board, they could literally patrol non-stop for 40 years, except for maintenance.
 
The AI would have to have a medium to communicate across. I still think that even if we had a self-aware AI, it would still require human input.
 
We are organic machines. So no matter what happens its a machine destroying us. ;)

Unless its some natural cause of course.
 
The AI would have to have a medium to communicate across. I still think that even if we had a self-aware AI, it would still require human input.
I do believe submarines communicate via the Interim Polar System when submerged and via satellites when surfaced (or near the surface). An AI-operated ship would do the same as a human-operated ship: check in periodically and emergency-surface if there is an urgent problem in order to send distress signals. It could also use a sonar ping to give away its location in case of major failure.
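
Just to illustrate the idea, here's a minimal Python sketch of what that patrol loop's decision logic might look like. Everything in it -- the `sub` object, its methods, and the six-hour check-in interval -- is an assumption for illustration, not how any real boat works:

```python
# Hypothetical patrol loop for a crewless submarine. The `sub` object,
# its methods, and the check-in interval are all invented placeholders.
import time

CHECK_IN_INTERVAL = 6 * 60 * 60  # assumed: report in every six hours

def patrol(sub):
    last_check_in = time.monotonic()
    while sub.mission_active():
        sub.follow_patrol_route()
        if sub.has_urgent_problem():
            sub.surface()                # surface to reach satellites
            sub.send_distress_signal()
            if sub.comms_failed():
                sub.sonar_ping()         # last resort: give away position
        elif time.monotonic() - last_check_in > CHECK_IN_INTERVAL:
            sub.receive_orders()         # submerged, e.g. Interim Polar System
            last_check_in = time.monotonic()
```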
 
Recently I accidentally watched some TV show where a man was dumb enough to have his thing electrocuted for a laugh. I think even without robot interference, some humans are dumb enough that they could wipe themselves out and bring the rest of us with them.

Just look at many of our politicians' decisions... they make stupid decisions, such as SOPA or war.
 
I do believe submarines communicate via the Interim Polar System when submerged and via satellites when surfaced (or near the surface). An AI-operated ship would do the same as a human-operated ship: check in periodically and emergency-surface if there is an urgent problem in order to send distress signals. It could also use a sonar ping to give away its location in case of major failure.

They would have to rely on an insecure, unstable ad-hoc-style network.
 
People want to create AI because it's easier than improving their own intelligence
 
They would have to rely on an insecure, unstable ad-hoc-style network.
Like all wireless communication, it is easy to intercept; it is secured by encrypting the data within.

I wouldn't call it unstable, because I highly doubt it is. Again, with an AI at the helm, it doesn't need to receive constant orders from the brass. It gets its orders before it leaves port and only changes them if it receives new orders over extremely-low-frequency communications--just like a human-operated submarine.
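
As a toy illustration of the interception point (anyone can grab the ciphertext off the air, but without the key it's useless), here's authenticated symmetric encryption with Python's `cryptography` package. The message and key handling are made up for the example; real military links obviously use their own hardened systems:

```python
# Illustration only: authenticated symmetric encryption with Fernet
# (AES plus HMAC). The message and key handling here are invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # shared secret, loaded before leaving port
cipher = Fernet(key)

token = cipher.encrypt(b"New orders: proceed to grid 4C")
# An eavesdropper can intercept `token`, but without the key it is
# indistinguishable from random bytes, and any tampering is detected.
print(cipher.decrypt(token))      # b'New orders: proceed to grid 4C'
```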
 
The first time I came across reports of research into AI, it was 1990.
Back then, people like Marvin Minsky and Hans Moravec were saying AI was ten years away.
It's always ten years away.

It also depends on your definition of 'AI'.
Are researchers working on 'intelligent' neural nets, brain simulations, and programs with 'intelligent' behaviour in a specific field of knowledge and endeavour? Yes.
But is that AI research? Technically, yes, either directly or indirectly.
But is it development of a sentient intelligence? No.

There was some minor hubbub a handful of years ago concerning the issue of whether true sentient AI is possible. It boiled down to a discussion of a problem in computational mathematics: whether it is possible to prove that P = NP.
As yet, there is no solution.
"If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in 'creative leaps,' no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss..."

— Scott Aaronson, MIT

http://en.wikipedia.org/wiki/Millennium_Prize_Problems

If P=NP, we eventually get the robo-apocalypse or transcendent digital singularity :rolleyes:
If not, maybe we go down the path of more extensive cybernetic implants to improve functioning in various mental and/or physical activities. Or we just stay the way we are, and slowly improve.
 
Assuming the universe is infinite, statistically I believe there is a 100% probability that we are all in a quantum computer simulation right now. So anything is possible.
 
Remember folks, we ARE Quantum computers. The Synapses in our brain communicate with each other instantaneously using Quantum co-ordinates.

You simply don't get any more sophisticated than a human brain.

Unfortunately, training the brain to be at its best is time-consuming. There are folks out there who can beat computers at chess and do complex maths equations quicker than a calculator.

There are some things a computer can never truly have. A computer will always do things on probability - it will never have the instinct, never be able to use that 'gut feeling' that is 9 times out of 10 mysteriously right. It will never be able to truly understand things, as true understanding is linked directly with emotion, and emotion is a BIOLOGICAL response. You cannot emulate billions of years of evolution in any computer, you cannot elicit that BIOLOGICAL response you require to develop emotion, thus the idea of a computer ever being able to defeat the human race should be removed entirely.

Remember, we rely on instinct more times than we have ever thought about something. Computers can be programmed to preserve their own life, for example. But what will they do when they are given certain choices? What scenarios can they be programmed with? We have been engineered to have a reaction in INFINITE scenarios. It's embedded in our DNA. Good luck emulating that.
 
Unfortunately, training the brain to be at its best is time-consuming. There are folks out there who can beat computers at chess and do complex maths equations quicker than a calculator.

There are some things a computer can never truly have. A computer will always do things on probability - it will never have the instinct, never be able to use that 'gut feeling' that is 9 times out of 10 mysteriously right. It will never be able to truly understand things, as true understanding is linked directly with emotion, and emotion is a BIOLOGICAL response. You cannot emulate billions of years of evolution in any computer, you cannot elicit that BIOLOGICAL response you require to develop emotion, thus the idea of a computer ever being able to defeat the human race should be removed entirely.

Remember, we rely on instinct more times than we have ever thought about something. Computers can be programmed to preserve their own life, for example. But what will they do when they are given certain choices? What scenarios can they be programmed with? We have been engineered to have a reaction in INFINITE scenarios. It's embedded in our DNA. Good luck emulating that.

First of all, the best Chess player in the world is a computer.

Second, how do you know what the limits of computers are? They are practically in their 'fertilization' stage. For all we know, Humans exist to create a quantum artificial intelligence computer that can explore the stars and interact with things in ways no human can.
 
First of all, the best Chess player in the world is a computer.

I'd say that the best human chess players vs the best computers is a 50/50 outcome right now, all other considerations set aside--as has been shown since 1997.
That Kasparov vs Deeper Blue match in 1997 (6 games) involved quite a bit of 'psyching-out' from the human IBM team.

Second, how do you know what the limits of computers are? They are practically in their 'fertilization' stage. For all we know, Humans exist to create a quantum artificial intelligence computer that can explore the stars and interact with things in ways no human can.

Only if P=NP.

If you like, we can assume that it does. Can the universe give us any evidence of that being true?:

1) Here we are, on a rock orbiting a star, which is orbiting the center of a large spiral galaxy. What do we see? Do we see (or hear) evidence of something out there? If 'quantum artificial intelligence computers' are possible, why hasn't some previous civilization produced them? They haven't because we would know about it, right now, here, because we would see evidence. Either direct or indirect evidence through electromagnetic emissions of some kind. We see nothing and hear nothing 'out of the ordinary' out there. Tentative conclusion: P does not equal NP

2) "Technically advanced civilizations are rare and spread out through the lifetime of a given galaxy"
The argument being that maybe there was a civilization that existed millions of years ago, so that we would not be able to know of it without detecting an 'artifact' of some kind. Well, if there were any previous civilizations in our galaxy that spawned intelligent, sentient, self-replicating machinery, those machines would likely still exist. We don't see evidence of them. Tentative conclusion: P does not equal NP

3) "We're the first technical civilization to arise in our galaxy"
We see no evidence of ultra advanced machine civilizations elsewhere in the universe; and we would see it if it existed, the energy requirements and energy outputs are not something that can be hidden.
Tentative conclusion: Speculative, but through indirect evidence it suggests P does not equal NP.

4) "We're the first technical civilization to arise within our observable horizon of the universe"
Unprovable.

So, from that quick and dirty speculative assessment, the rest of the universe is suggesting to us that P does not equal NP, and so, true sentient and creative AI is not possible.

Of course, I could be wrong. :cool:
 
..."ultra advanced machine civliations," like us, most likely were caused to advance due to war. All wars have two sides and, as demonstrated by stealth technology, you can't fight an enemy you can't detect. "Ultra advanced machine civiliations," therefore, have likely found means to prevent anything "leaking" into space that would give their position away.

Every study conducted on "are we alone" concluded that it is improbable we are the only intelligent species in the universe. We're just amateurs at hiding because we're not at war with other planets...yet.


There are means to make P = NP by using random, external seeds. Most computers today use the time as a random seed but, as we all know, that's not exactly random. I wouldn't be surprised if someone, somewhere out there has made a random-seed chip using a crystal or some such, but in general applications a dedicated randomizer is not necessary.
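
A quick Python sketch of that seeding point (the numbers are made up): a time-style seed can be replayed by anyone who can guess roughly when you seeded, while the OS entropy pool can't be reproduced:

```python
# Time-style seeds are guessable and replayable; OS entropy is not.
import random, secrets

random.seed(1336250000)      # a "timestamp" seed: guess the time window
print(random.random())       # and you can replay this exact sequence

print(secrets.randbits(64))  # drawn from the OS entropy pool instead;
                             # not reproducible from a guessable seed
```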

And by the way, "sentients" are rarely "creative." If you write a program to account for all the possibilities, it is quite easy to predict which choice a sentient will prefer by weighting each against personality.
 
I'd say that the best human chess players vs the best computers is a 50/50 outcome right now, all other considerations set aside--as has been shown since 1997.
That Kasparov vs Deeper Blue match in 1997 (6 games) involved quite a bit of 'psyching-out' from the human IBM team.



Only if P=NP.

If you like, we can assume that it does. Can the universe give us any evidence of that being true?:

1) Here we are, on a rock orbiting a star, which is orbiting the center of a large spiral galaxy. What do we see? Do we see (or hear) evidence of something out there? If 'quantum artificial intelligence computers' are possible, why hasn't some previous civilization produced them? They haven't because we would know about it, right now, here, because we would see evidence. Either direct or indirect evidence through electromagnetic emissions of some kind. We see nothing and hear nothing 'out of the ordinary' out there. Tentative conclusion: P does not equal NP

2) "Technically advanced civilizations are rare and spread out through the lifetime of a given galaxy"
The argument being that maybe there was a civilization that existed millions of years ago, so that we would not be able to know of it without detecting an 'artifact' of some kind. Well, if there were any previous civilizations in our galaxy that spawned intelligent, sentient, self-replicating machinery, those machines would likely still exist. We don't see evidence of them. Tentative conclusion: P does not equal NP

3) "We're the first technical civilization to arise in our galaxy"
We see no evidence of ultra advanced machine civilizations elsewhere in the universe; and we would see it if it existed, the energy requirements and energy outputs are not something that can be hidden.
Tentative conclusion: Speculative, but through indirect evidence it suggests P does not equal NP.

4) "We're the first technical civilization to arise within our observable horizon of the universe"
Unprovable.

So, from that quick and dirty speculative assessment, the rest of the universe is suggesting to us that P does not equal NP, and so, true sentient and creative AI is not possible.

Of course, I could be wrong. :cool:

So your argument is "We haven't discovered it, therefore it can't exist" in a nutshell, right? Seems pretty short-sighted. How many years have humans been capable of looking out at the stars with decent resolution? 30 years? :nutkick:
 
Every study conducted on "are we alone" concluded that it is improbable we are the only intelligent species in the universe. We're just amateurs at hiding because we're not at war with other planets...yet.

Ah, but there's a difference between 'intelligent species' and 'technical civilization'. I have no doubt that there are other 'intelligent species' out there. I see it as being likely there are incredibly few 'technical civilizations' out there spread across the universe.
Assuming a situation where a civilization would be 'at war with other planets' assumes that two (or more) technical civilizations exist at comparable levels of technical skill, relatively close in travel time to one another, and hostile to each other. This is highly improbable, even if there are many, many technical civilizations out there that are 'hiding'. Not to mention machine civilizations.

There are means to make P = NP by using random, external seeds. Most computers today use the time as a random seed but, as we all know, that's not exactly random. I wouldn't be surprised if someone, somewhere out there has made a random-seed chip using a crystal or some such, but in general applications a dedicated randomizer is not necessary.

I'll take your word for it, you undoubtedly know more about it than I do.
So, you're saying that you need a 'randomized seed' of suitably high complexity to make P=NP. A jump start? Well, that's interesting but P=NP has still not been proven and the $1 million prize is still up for grabs.

And by the way, "sentients" are rarely "creative." If you write a program to account for all the possibilities, it is quite easy to predict which choice a sentient will prefer by weighting each against personality.

True, sentients are rarely creative. Or do you mean 'spectacularly creative' as in 'strokes of genius'? I would say that every sentient, regardless of how intelligent they are exhibits some level of creativity.

OK, your second sentence gives the standard programmer's reply taken to infinite boundaries.
Nothing can be definitively stated by resorting to absolute points of view, it's essentially just bravado.
Yes, a program can be coded to do as many things as you want it to do within the limits of your intelligence and ability to oversee its overall structure. But that is still finite and limited, ultimately, to a subset of the intelligence and ability of the programmer(s) who created it. Even some kind of neural net programmed in a way that you hint at would be ultimately limited by its original coding, done by humans ---- unless P=NP and that neural net can make the creative leap(s) necessary to initiate self-improvement.

There's a problem here I know. Humans can obviously make creative leaps of one kind or another, and arguably, so can other animals that meet the criteria for some kind of minimal sentience. This suggests that P=NP, since we can do it to varying degrees.
To me, and apparently to some others, it seems like P=NP is not possible, logically. And yet, our sentience says that it is possible. A seeming paradox.

In my opinion, which I'm fully aware may be completely wrong, we are not able to create (or initiate the creation of) a 'random seed' of suitable complexity to spark sentience in a set of algorithms running on a machine; that random seed would be, in a very real sense, indirectly, a subset of our own complexity -- which somehow allows us to make the creative leap -- which may not be enough for our algorithmic creation(s).
 
So your argument is "We haven't discovered it, therefore it can't exist" in a nutshell, right? Seems pretty short-sighted. How many years have humans been capable of looking out at the stars with decent resolution? 30 years?

Sorry man, but you didn't read very carefully.
'Speculative' and 'Tentative' based upon the knowledge gathered so far.
 
Sorry man, but you didn't read very carefully.
'Speculative' and 'Tentative' based upon the knowledge gathered so far.

No, I read them just fine. It's your whole schtick that seems out of whack. What the hell are you talking about with "Possible = Not Possible"? What does that have to do with anything limiting computer intelligence? If your whole purpose is just to write a bunch of gibberish that's completely unrelated to anything and holds no value at all, then congrats on a job well done. Quantum mechanics is basically built around the principle that everything is possible, however unlikely.
 
But that is still finite and limited, ultimately, to a subset of the intelligence and ability of the programmer(s) who created it. Even some kind of neural net programmed in a way that you hint at would be ultimately limited by its original coding, done by humans ---- unless P=NP and that neural net can make the creative leap(s) necessary to initiate self-improvement.
It is relatively easy to make a program that programs itself. The only problem with AIs, right now, is artificial motivation: how do you make a program that can identify an unknowable problem and then make it want to fix it? Said differently, the first step of a true AI will be a debugger that debugs itself. It can be given an application's source, compile it, simulate being a user, identify errors users will trigger, correct the code, and recompile over and over and over until the AI determines it is flawless. A simple task like that should naturally cause the AI to improve its own code to improve efficiency. It should also discover, and acknowledge, issues that are a subset of its current task (like the inefficiency of a button or text box), which would spur it on to further research.
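
A crude sketch of that compile-test-patch loop, just to make the shape of it concrete. `compile_app`, `run_simulated_users`, and `propose_patch` are hypothetical placeholders, not real library calls:

```python
# Hypothetical self-debugging loop: compile, simulate users, patch,
# repeat until no errors remain. All three helpers are placeholders.
def self_debug(source):
    while True:
        binary, compile_errors = compile_app(source)
        if compile_errors:
            source = propose_patch(source, compile_errors)
            continue
        runtime_errors = run_simulated_users(binary)
        if not runtime_errors:
            return source    # the AI judges the program flawless
        source = propose_patch(source, runtime_errors)
```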

These are things that can be done today but the incentive simply isn't there. Why spend years researching an AI when you could hire a programmer to get it done in a week?


Humans can obviously make creative leaps of one kind or another, and arguably, so can other animals that meet the criteria for some kind of minimal sentience. This suggests that P=NP, since we can do it to varying degrees.
To me, and apparently to some others, it seems like P=NP is not possible, logically. And yet, our sentience says that it is possible. A seeming paradox.
As I stated before, creativity is simply a layman's term for probability. "Logical" = high probability to choose this route. "Creative" = low probability to choose this route. Someone who is said to be creative often takes the least frequent approach to a scenario. Therefore, a creative AI simply needs to be able to come up with a list of solutions from the very logical to the not-so-logical and pick the not-so-logical approach. Not-so-logical is often based upon an unlikely combination of past experiences. Take sci-fi "creatures," for example: logical is something humanoid; not-so-logical is something off the wall like combining a pig, frog, and human. It doesn't matter where you look in sci-fi, everything has some roots here on Earth. Humans really aren't that creative at all.
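
A toy Python version of that "creative = low-probability choice" idea; the candidate list and the weights are invented for illustration:

```python
# "Creative" as picking the statistically unlikely option: invert the
# weights so the rarest candidate becomes the likeliest pick.
import random

solutions = {
    "humanoid alien": 0.70,            # the "logical" choice
    "insectoid alien": 0.25,
    "pig/frog/human hybrid": 0.05,     # the "creative" choice
}

def creative_pick(options):
    inverted = {name: 1.0 - p for name, p in options.items()}
    total = sum(inverted.values())
    weights = [w / total for w in inverted.values()]
    return random.choices(list(inverted), weights)[0]

print(creative_pick(solutions))  # most often the off-the-wall option
```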


In my opinion, which I'm fully aware may be completely wrong, we are not able to create (or initiate the creation of) a 'random seed' of suitable complexity to spark sentience in a set of algorithms running on a machine; that random seed would be, in a very real sense, indirectly, a subset of our own complexity -- which somehow allows us to make the creative leap -- which may not be enough for our algorithmic creation(s).
Think of how sentience works: memories and thought processes. Computers already excel at both, having massive databases and the ability to calculate specific scenarios at lightning speed. The two things that are missing are the ability to create new processes on demand and advancement in sensory recognition. For example, if you hook up a computer to a camera, the computer needs to be able to identify everything of importance it "sees" without human assistance. If it can't identify it, it needs to be able to describe it and, from that description, derive a name. This is a thought process humans perform in an instant without needing to reprogram themselves, but it is something computers fail miserably at today because they are made to think in a way computers don't like to think (imagery instead of shapes).
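
A rough sketch of that describe-then-name fallback; `classify` and `describe_shapes` are hypothetical placeholders, and the 0.8 confidence threshold is arbitrary:

```python
# If the classifier isn't confident, fall back to describing what it
# sees and deriving a provisional name from the description.
CONFIDENCE_THRESHOLD = 0.8  # arbitrary cutoff for "identified"

def identify(image):
    label, confidence = classify(image)       # hypothetical classifier
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    traits = describe_shapes(image)           # e.g. ["four legs", "striped"]
    return "unknown-" + "-".join(traits)      # derived provisional name
```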


In any event, this stuff is coming and faster than you think.
 
I don't think the demise of humanity will be as simple as a "'Robopocalypse". I see homo sapiens branching off into multiple species, some continuing to evolve "naturally", some genetically modified, some augmented with technology, some being liberated from fixed physical form completely, some coalescing into new forms of what we might call consciousness. Technology will evolve in a similar fashion and eventually terms like "human", "cyborg", "robot", and "'artificial' intelligence" may become as pointless as their meaning becomes questionable—and lineage won't matter.

All along the way, sure there will be conflicts and some of these groups may be wiped out. Maybe humanity as we know it will be one of them, but I doubt it will ever be completely destroyed by war. If humanity survives the next century, I think our progeny will exist in some form a thousand, a million, and a billion years from now—and to them, none of the currently foreseeable life forms descending from humans, or their wars, will be more significant than microbes are to us today.

FWIW, sentience is an illusion.
 
The only problem with AIs, right now, is artificial motivation: how do you make a program that can identify an unknowable problem and then make it want to fix it? Said differently, the first step of a true AI will be a debugger that debugs itself. It can be given an application's source, compile it, simulate being a user, identify errors users will trigger, correct the code, and recompile over and over and over until the AI determines it is flawless. A simple task like that should naturally cause the AI to improve its own code to improve efficiency. It should also discover, and acknowledge, issues that are a subset of its current task (like the inefficiency of a button or text box), which would spur it on to further research.

Exactly, I agree with that. My point is that I personally don't think it's possible to code something that can 'make the leap'. When I say that, I don't mean a continual increase in the complexity of instructions; I mean that 'the leap' is when that AI is somehow 'jumpstarted' and does it on its own, without a human having stuffed it with millions of lines of code telling it how to do everything. Think of a baby; babies know next to nothing, but their minds are in overdrive, taking in sensory experience with any method they can get away with: seeing, touching, tasting, hearing, sifting through all of it, recognizing patterns, developing memory schemata and paradigms that they can later apply to other situations in many different combinations. Babies 'jumpstart' themselves. But that's just developing intelligence; self-awareness (sentience) develops over time, in early childhood, as children learn to creatively apply their small store of memories and sensory experiences to think reflexively about themselves as separate beings; when they recognize they're People too.
I think that the complexity needed for a human to develop first intelligence and then later sentience is beyond the capability of any human created algorithm or code.

Intelligence? Yes, there will be intelligent (small 'i') computer programs.
Sentient and creative? No, there will not be such things, in my opinion. Only more and more complex programming creating the illusion of such capability -- PI (Pseudo Intelligence), as one sci-fi author put it.

I'm happy to be wrong, but that's my opinion at this moment in time.

I'm getting too philosophical I think, and we seem to disagree on philosophical grounds, which is messy, so I'll end this.
 