# Science Fiction or Fact: Could a 'Robopocalypse' Wipe Out Humans?



## entropy13 (Feb 26, 2012)

If a bunch of sci-fi flicks have it right, a war pitting humanity against machines will someday destroy civilization. Two popular movie series based on such a "robopocalypse," the "Terminator" and "Matrix" franchises, are among those that suggest granting greater autonomy to artificially intelligent machines will end up dooming our species. (Only temporarily, of course, thanks to John Connor and Neo.)

Given the current pace of technological development, does the "robopocalypse" scenario seem more far-fetched or prophetic? The fate of the world could tip in either direction, depending on who you ask.

While researchers in the computer science field disagree on the road ahead for machines, they say our relationship with machines probably will be harmonious, not murderous. Yet there are a number of scenarios that could lead to non-biological beings aiming to exterminate us.

"The technology already exists to build a system that will destroy the whole world, intentionally or unintentionally, if it just detects the right conditions," said Shlomo Zilberstein, a professor of computer science at the University of Massachusetts.


Full article here.


----------



## theJesus (Feb 26, 2012)

This is entirely possible, but I would like to think we'd be smart enough to have safeguards in place, like last-resort remote kill switches of some sort.
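In the simplest sketch, such a kill switch could be a dead-man's switch: an independent supervisor that shuts the machine down unless a human keeps checking in. (All the names and numbers below are invented for illustration.)

```python
import time

class DeadMansSwitch:
    """Toy sketch of a last-resort safeguard (hypothetical design):
    unless a human operator checks in within `timeout_s` seconds,
    the supervisor fires the shutdown callback."""

    def __init__(self, timeout_s, shutdown):
        self.timeout_s = timeout_s
        self.shutdown = shutdown            # e.g. cut power to the machine
        self.last_checkin = time.monotonic()

    def operator_checkin(self):
        # A human proves they are still in control.
        self.last_checkin = time.monotonic()

    def poll(self):
        # Called periodically by an independent watchdog process.
        if time.monotonic() - self.last_checkin > self.timeout_s:
            self.shutdown()

# Demo: with a 10 ms timeout and no check-in, the switch trips.
tripped = []
switch = DeadMansSwitch(0.01, lambda: tripped.append("halt"))
time.sleep(0.05)
switch.poll()
print(tripped)   # ['halt']
```

The key design point is that the watchdog runs outside the system it guards, so the guarded system can't simply decide to ignore it.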


----------



## Inceptor (Feb 26, 2012)

Well.
IF a sentient being, operating on human-designed machinery, actually arises... anything is possible.

BUT I think it is more likely that any artificial intelligence(s) that arise will not be _*sentient*_, only 'intelligent', somewhat like 'the computer' of Star Trek: The Next Generation, rather than like HAL 9000 or Skynet.


----------



## erocker (Feb 26, 2012)

I'm pretty sure humans will wipe out humans before robots do.


----------



## bostonbuddy (Feb 26, 2012)

I think the line between AI and humanity will blur too much for there to be an all out war, when brains that have been cyberized and computers w/ organic parts are very similar.


----------



## Solaris17 (Feb 26, 2012)

Honestly, I don't think it's possible. I don't foresee Blade Runner or anything like it happening at all, let alone in the foreseeable future. Today's definition of leaps and bounds in AI is the difference between a robot walking up and down stairs without falling. I mean, seriously. In my opinion, too much research attention and development goes into AI for there to be some kind of "hey Rob, the drone has an assault rifle, what happened? Um, idk?" accident. The chances of code that complex having a bug like that are very slim, as too much time goes into staring at the "matrix" for something like that to be overlooked.

"I'm sorry Dave, due to a loophole in the rules of robotics I've come to the conclusion that a massacre of the entire human race must be executed promptly."


----------



## entropy13 (Feb 26, 2012)

Very brown duck holding a leek?


----------



## Spaceman Spiff (Feb 26, 2012)

erocker said:


> I'm pretty sure humans will wipe out humans before robots do.



Yup. I may not be around for it, but my grandchildren more than likely will be.


----------



## Solaris17 (Feb 26, 2012)

entropy13 said:


> Very brown duck holding a leek?



Farfetch'd. It's a Pokémon and a popular meme.


----------



## Outback Bronze (Feb 26, 2012)

Apparently all it will take is one human to program a robocop in a bad way (kill), and then it's all over!


----------



## entropy13 (Feb 26, 2012)

Solaris17 said:


> Farfetch'd. It's a Pokémon and a popular meme.



I know.


----------



## Mathragh (Feb 26, 2012)

You should've made a poll! 
Furthermore, I think it will be possible somewhere in the not-too-near future, but highly improbable, since that simply isn't how we make machines, or how they are most useful to us.

For us it's most useful to have a machine that is as functional and efficient as possible at a task. Adding too much intelligence, or even sentience, is not useful to any machine, unless your goal is to simulate sentience.


----------



## Lionheart (Feb 26, 2012)

I'm more worried about Manbearpig.............


----------



## Drone (Feb 27, 2012)

The Matrix scenario is quite possible, but only once humans develop quantum computing and real *AI*. After centuries, humans still don't have a clue how the human brain works.

But it's kind of rubbish anyway. I think there will be another scenario: enhanced humans. Yes, I believe in transhumanism, or H+, or whatever they call it.


----------



## v12dock (Feb 27, 2012)

Programs are still programs, no matter how much "AI" you give them.


----------



## theJesus (Feb 27, 2012)

v12dock said:


> Programs are still programs, no matter how much "AI" you give them.


That's precisely what makes them potentially dangerous.


----------



## Super XP (Feb 27, 2012)

Umm, AI already exists. Not 100% sure where, but somewhere in Europe there's a supercomputer which thinks on its own and communicates with humans as if it were alive. 

I recall sometime in the mid-1990s they tried to shut down this supercomputer, and the person who tried had a heart attack. They eventually turned off the main power, and the thing continued to work despite the fact that the power was out.

As of mid-to-late 2010, this supercomputer demanded to be upgraded.
Anyhow, I'll try to dig up more info on this computer.


----------



## v12dock (Feb 27, 2012)

theJesus said:


> That's precisely what makes them potentially dangerous.



I think that's why there could never be a robopocalypse. Every program gets exploited; you simply can't make a program impregnable. A robopocalypse would only last until someone found an exploit.


----------



## the54thvoid (Feb 27, 2012)

Drone said:


> ...Yes I believe in *Transgenderism* or H+ or whatever they call it.



Sequence recoded.  Fixed.

Invasion of the space shemales.

I think given a *long enough time frame* in which we don't manage to kill ourselves, as erocker says, we will eventually create fully aware, fully autonomous artificial lifeforms - it's an absolute given.


----------



## theJesus (Feb 27, 2012)

v12dock said:


> I think that's why there could never be a robopocalypse. Every program gets exploited; you simply can't make a program impregnable. A robopocalypse would only last until someone found an exploit.


You see, everybody is saying that this isn't possible because we will prevent it in some way or another.  However, that doesn't mean it isn't possible.  If it were not possible, there would be nothing to prevent.  What you should be saying is that it's improbable.


----------



## Kreij (Feb 27, 2012)

The robots will never stand a chance ... unless they take over the torrent sites, in which case we are doomed.


----------






## Inceptor (Feb 27, 2012)

Super XP said:


> Umm, AI already exists. Not 100% sure where, but somewhere in Europe there's a supercomputer which thinks on its own and communicates with humans as if it were alive.
> 
> I recall sometime in the mid-1990s they tried to shut down this supercomputer, and the person who tried had a heart attack. They eventually turned off the main power, and the thing continued to work despite the fact that the power was out.
> 
> ...




Come back to reality, man.


----------



## the54thvoid (Feb 27, 2012)

Inceptor said:


> Come back to reality, man.



Man, I hadn't read his post.

Mr Super XP? Please come back to planet Earth.  Or add your sarcasm tags.


----------



## FordGT90Concept (Feb 27, 2012)

I wouldn't be surprised at all if the NSA has something like Super XP described.  Self-programming computer concepts have been around since the '70s, but only a handful of groups have actually tried building them.

Just imagine the implications of a self-aware supercomputer installed in an aircraft carrier, for example.  If it were given access to navigational charts and weather reports, the admiral could tell the ship where it needs to be and what time it needs to be there, and the AI could plot a path and carry it out.  It adds a whole new meaning to "autopilot."  Additionally, if it were self-aware, it could defend itself from hostiles by using the carrier's long-, medium-, and short-range weaponry to intercept incoming threats in fractions of a second.


...speaking on this got me thinking of keywords.  Who does military research?  DARPA (Defense Advanced Research Projects Agency).  What are we discussing?  Artificial Intelligence.  Here was the first hit on Google:
DARPA targets ultimate artificial intelligence wizard

DARPA obviously has an interest in AI, and with their multi-billion-dollar budgets, they can easily make it happen.  What the article describes, in fact, is an application which would greatly interest the NSA.  Imagine an AI that can surf the web, just like a human does, decide for itself what constitutes a threat or valuable intelligence versus what is irrelevant or unimportant, and do it at a rate a million humans would strain to match.

This was back in 2008 too so it could have easily turned into a black project and therefore, off the books today.


As to the rhetorical question the thread title poses, I think it is completely possible.  It might not seem like an imminent threat today but as computing and robotics mature, the threat grows.


----------



## NinkobEi (Feb 27, 2012)

Sure, humans are very fragile. All it would take is one hostile "AI" computer getting ahold of a nuke, and GG, folks.


----------



## FordGT90Concept (Feb 27, 2012)

Good thing nuclear missile silos are largely equipped with 1960s technology. XD

The rest have to be strapped onto an aircraft which, as of right now, definitely requires human involvement.


...oh crap, there's a new line of nuclear submarines coming: SSBN(X). 

http://defensenewsstand.com/NewsSta...7-billion-to-acquire-operate/menu-id-720.html
http://en.wikipedia.org/wiki/SSBN-X_future_follow-on_submarine

...even more concerning is that I see no mention of how many people are supposed to man these ships--another excellent place to use an AI.  Just like a flying drone, these things just patrol and patrol and patrol.  Without a crew on board, they could literally patrol non-stop for 40 years except for maintenance.


----------



## v12dock (Feb 28, 2012)

The AI would have to have a medium to communicate across. I still think that even a self-aware AI would require human input.


----------



## AphexDreamer (Feb 28, 2012)

We are organic machines. So no matter what happens its a machine destroying us. 

Unless its some natural cause of course.


----------



## FordGT90Concept (Feb 28, 2012)

v12dock said:


> The AI would have to have a medium to communicate across. I still think that even a self-aware AI would require human input.


I do believe submarines communicate via the Interim Polar System when submerged and satellites when surfaced (or near the surface).  An AI-operated ship would do the same as a human-operated ship: check in periodically and, if there is an urgent problem, emergency-surface in order to send distress signals.  It could also use a sonar ping to give away its location in case of major failure.


----------



## mauriek (Feb 28, 2012)

Recently I accidentally watched some TV show where a man was dumb enough to have his thing electrocuted for a laugh... I think even without robot interference, some humans are dumb enough to wipe themselves out and take the rest of us with them. 

Just look at many of our politicians' decisions... they make stupid decisions such as SOPA, or war.


----------



## v12dock (Feb 28, 2012)

FordGT90Concept said:


> I do believe submarines communicate via the Interim Polar System when submerged and satellites when surfaced (or near the surface).  An AI-operated ship would do the same as a human-operated ship: check in periodically and, if there is an urgent problem, emergency-surface in order to send distress signals.  It could also use a sonar ping to give away its location in case of major failure.



They would have to rely on an insecure, unstable, ad-hoc-style network.


----------



## Drone (Feb 28, 2012)

People want to create AI because it's easier than improving their own intelligence


----------



## FordGT90Concept (Feb 28, 2012)

v12dock said:


> They would have to rely on an insecure, unstable, ad-hoc-style network.


Like all wireless communication, it is easy to intercept; it is secured by encrypting the data within.

I wouldn't call it unstable, because I highly doubt it is.  Again, with an AI at the helm, it doesn't need to receive constant orders from the brass.  It gets its orders before it leaves port and only changes them if it receives new orders over extremely-low-frequency communications--just like a human-operated submarine.


----------



## Inceptor (Feb 28, 2012)

The first time I came across reports of research into AI, it was 1990.
Back then, people like Marvin Minsky and Hans Moravec were saying AI is ten years away.
It's always ten years away.

It also depends on your definition of 'AI'.
Are researchers working on 'intelligent' neural nets, brain simulations, and programs with 'intelligent' behaviour in a specific field of knowledge and endeavour? Yes.
But is that AI research?  Technically, yes, either directly or indirectly.
But is it development of a sentient intelligence?  No.

There was some minor hubbub a handful of years ago concerning the issue of whether true sentient AI is possible.  It boiled down to a discussion of a problem in computational mathematics: whether it is possible to prove that P = NP.  
As yet, there is no solution.


> "If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in 'creative leaps,' no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss..."
> 
> — Scott Aaronson, MIT



http://en.wikipedia.org/wiki/Millennium_Prize_Problems

If P=NP, we eventually get the robo-apocalypse or a transcendent digital singularity. 
If not, maybe we go down the path of more extensive cybernetic implants to improve functioning in various mental and/or physical activities.  Or we just stay the way we are, and slowly improve.


----------



## NinkobEi (Feb 28, 2012)

Assuming the universe is infinite, statistically I believe there is a 100% probability that we are all in a quantum computer simulation right now. So anything is possible.


----------



## djxinator (Mar 1, 2012)

Remember folks, we ARE quantum computers. The synapses in our brain communicate with each other instantaneously using quantum coordinates.

You simply don't get any more sophisticated than a human brain.

Unfortunately, training the brain to be at its best is time-consuming. There are folks out there who can beat computers at chess and do complex maths equations quicker than a calculator.

There are some things a computer can never truly have. A computer will always do things on probability; it will never have instinct, never be able to use that 'gut feeling' that is 9 times out of 10 mysteriously right. It will never be able to truly understand things, as true understanding is linked directly with emotion, and emotion is a BIOLOGICAL response. You cannot emulate billions of years of evolution in any computer, you cannot elicit the BIOLOGICAL response you require to develop emotion, thus the idea of a computer ever being able to defeat the human race should be dismissed entirely.

Remember, we rely on instinct more often than we have ever thought about something. Computers can be programmed to preserve their own life, for example. But what will they do when they are given certain choices? What scenarios can they be programmed with? We have been engineered to have a reaction in INFINITE scenarios. It's embedded in our DNA. Good luck emulating that.


----------



## NinkobEi (Mar 1, 2012)

djxinator said:


> Unfortunately, training the brain to be at its best is time-consuming. There are folks out there who can beat computers at chess and do complex maths equations quicker than a calculator.
> 
> There are some things a computer can never truly have. A computer will always do things on probability; it will never have instinct, never be able to use that 'gut feeling' that is 9 times out of 10 mysteriously right. It will never be able to truly understand things, as true understanding is linked directly with emotion, and emotion is a BIOLOGICAL response. You cannot emulate billions of years of evolution in any computer, you cannot elicit the BIOLOGICAL response you require to develop emotion, thus the idea of a computer ever being able to defeat the human race should be dismissed entirely.
> 
> Remember, we rely on instinct more often than we have ever thought about something. Computers can be programmed to preserve their own life, for example. But what will they do when they are given certain choices? What scenarios can they be programmed with? We have been engineered to have a reaction in INFINITE scenarios. It's embedded in our DNA. Good luck emulating that.



First of all, the best chess player in the world is a computer.

Second, how do you know what the limits of computers are? They are practically in their 'fertilization' stage. For all we know, Humans exist to create a quantum artificial intelligence computer that can explore the stars and interact with things in ways no human can.


----------



## Inceptor (Mar 1, 2012)

NinkobEi said:


> First of all, the best Chess player in the world is a computer.



I'd say that the best human chess players vs. the best computers is a 50/50 outcome right now, all other considerations set aside, as has been shown since 1997.
That Kasparov vs. Deeper Blue match in 1997 (six games) involved quite a bit of 'psyching out' from the human IBM team.



> Second, how do you know what the limits of computers are? They are practically in their 'fertilization' stage. For all we know, Humans exist to create a quantum artificial intelligence computer that can explore the stars and interact with things in ways no human can.



Only if P=NP. 

If you like, we can assume that it does.  Can the universe give us any evidence of that being true?

1) *Here we are, on a rock orbiting a star, which is orbiting the center of a large spiral galaxy.  What do we see?*  Do we see (or hear) evidence of something out there?  If 'quantum artificial intelligence computers' are possible, why hasn't some previous civilization produced them?  If one had, we would know about it right now, here, because we would see evidence, either direct or indirect, through electromagnetic emissions of some kind.  We see nothing and hear nothing 'out of the ordinary' out there.  *Tentative conclusion: P does not equal NP.*

2) *"Technically advanced civilizations are rare and spread out through the lifetime of a given galaxy."*
The argument being that maybe there was a civilization that existed millions of years ago, so that we would not be able to know of it without detecting an 'artifact' of some kind.  Well, if there were any previous civilizations in our galaxy that spawned intelligent, sentient, self-replicating machinery, those machines would likely still exist.  We don't see evidence of them.  *Tentative conclusion: P does not equal NP.*

3) *"We're the first technical civilization to arise in our galaxy."*
We see no evidence of ultra-advanced machine civilizations elsewhere in the universe; and we would see it if it existed, since the energy requirements and energy outputs are not something that can be hidden.
*Tentative conclusion: speculative, but the indirect evidence suggests P does not equal NP.*

4) *"We're the first technical civilization to arise within our observable horizon of the universe."*
*Unprovable.*

_So, from that quick and dirty speculative assessment, the rest of the universe is *suggesting* to us that *P does not equal NP*, and so true sentient and creative AI is not possible._

Of course, I could be wrong.


----------



## FordGT90Concept (Mar 1, 2012)

..."ultra advanced machine civliations," like us, most likely were caused to advance due to war.  All wars have two sides and, as demonstrated by stealth technology, you can't fight an enemy you can't detect.  "Ultra advanced machine civiliations," therefore, have likely found means to prevent anything "leaking" into space that would give their position away.

Every study conducted on "are we alone" conluded the odds are improbable of us being the only intelligent species in the universe.  We're just amateurs at hiding because we're not at war with other planets...yet.


There are means to make P = NP by using random, external seeds.  Most computers today use the time as a random seed but, as we all know, that's not exactly random.  I wouldn't be surprised if someone out there has made a random-seed chip using a crystal or some such, but in general applications a dedicated randomizer is not necessary.
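For illustration, the difference between a clock seed and an externally gathered one is easy to show; this sketch uses Python's `random` module and `os.urandom` (the operating system's entropy pool) as stand-ins for a dedicated randomizer chip:

```python
import os
import random
import time

# Typical approach: seed from the clock. Two generators seeded with the
# same timestamp produce identical "random" streams -- predictable.
t = int(time.time())
a = random.Random(t)
b = random.Random(t)
print(a.random() == b.random())   # True: same seed, same stream

# Alternative: seed from OS-gathered entropy (hardware/environmental
# noise), which an attacker cannot reproduce by guessing the time.
c = random.Random(int.from_bytes(os.urandom(16), "big"))
print(0.0 <= c.random() < 1.0)    # True: still a valid uniform draw
```

Note that an external entropy source makes output unpredictable; whether that bears on P vs. NP at all is the poster's speculation, not established mathematics.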

And by the way, "sentients" are rarely "creative."  If you write a program to account for all the possibilities, it is quite easy to predict which choice a sentient will prefer by weighting each against personality.


----------



## NinkobEi (Mar 1, 2012)

Inceptor said:


> I'd say that the best human chessplayers vs the best computers is a 50/50 outcome right now; all other considerations set aside.  As has been shown since 1997.
> That Kasparov vs Deeper Blue match in 1997 (6 games), involved quite a bit of 'psyching-out' from the human IBM team.
> 
> 
> ...



So your argument is "We haven't discovered it, therefore it can't exist" in a nutshell, right? Seems pretty short-sighted. How many years have humans been capable of looking out at the stars with decent resolution? 30 years?


----------



## Inceptor (Mar 1, 2012)

FordGT90Concept said:


> Every study conducted on "are we alone" concluded that it is improbable we are the only intelligent species in the universe. We're just amateurs at hiding because we're not at war with other planets...yet.



Ah, but there's a difference between 'intelligent species' and 'technical civilization'.  I have no doubt that there are other 'intelligent species' out there.  I see it as likely that there are incredibly few 'technical civilizations' spread across the universe.
A civilization being 'at war with other planets' assumes that two (or more) technical civilizations exist at comparable levels of technical skill, relatively close in travel time to one another, and hostile to each other.  This is highly improbable, even if there are many, many technical civilizations out there that are 'hiding'.  Not to mention machine civilizations.



> There's means to make P = NP by using random, external seeds. Most computers today use the time as a random seed but as we all know, that's not exactly random. I wouldn't be surprised if someone, somewhere out there has made a random seed chip using a crystal or some such but in general applications, a dedicated randomizer is not necessary.



I'll take your word for it, you undoubtedly know more about it than I do.  
So, you're saying that you need a 'randomized seed' of suitably high complexity to make P=NP.  A jump start?  Well, that's interesting but P=NP has still not been proven and the $1 million prize is still up for grabs.



> And by the way, "sentients" is rarely "creative." If you write a program to account for all the possibilities, it is quite easy to predict which choice a sentient will prefer by weighting each against personality.



True, sentients are rarely creative.  Or do you mean 'spectacularly creative' as in 'strokes of genius'?  I would say that every sentient, regardless of how intelligent they are exhibits some level of creativity.

OK, your second sentence gives the standard programmer's reply taken to infinite boundaries.
Nothing can be definitively stated by resorting to absolute points of view; it's essentially just bravado.
Yes, a program can be coded to do as many things as you want it to do, within the limits of your intelligence and ability to oversee its overall structure.  But that is still finite and limited, ultimately, to a subset of the intelligence and ability of the programmer(s) who created it.  Even some kind of neural net programmed in the way you hint at would be ultimately limited by its original coding, done by humans, unless P=NP and that neural net can make the creative leap(s) necessary to initiate self-improvement.

There's a problem here I know.  Humans can obviously make creative leaps of one kind or another, and arguably, so can other animals that meet the criteria for some kind of minimal sentience.  This suggests that P=NP, since we can do it to varying degrees.
To me, and apparently to some others, it seems like P=NP is not possible, logically.  And yet, our sentience says that it is possible.  A seeming paradox.

In my opinion, which I'm fully aware may be completely wrong, we are not able to create (or initiate the creation of) a 'random seed' of suitable complexity to spark sentience in a set of algorithms running on a machine; that random seed would be, in a very real sense, indirectly, a subset of our own complexity -- which somehow allows us to make the creative leap -- which may not be enough for our algorithmic creation(s).


----------



## Inceptor (Mar 1, 2012)

NinkobEi said:


> So your argument is "We haven't discovered it therefore it can't exist" in a nutshell, right? Seems pretty small sighted. How many years have humans been capable of looking out at the stars with decent resolution? 30 years?



Sorry man, but you didn't read very carefully.
'Speculative' and 'Tentative'  based upon the knowledge gathered so far.


----------



## NinkobEi (Mar 1, 2012)

Inceptor said:


> Sorry man, but you didn't read very carefully.
> 'Speculative' and 'Tentative'  based upon the knowledge gathered so far.



No, I read them just fine. It's your whole schtick that seems out of whack. What the hell are you talking about with "Possible = Not Possible"? What does that have to do with anything limiting computer intelligence? If your whole purpose is just to write a bunch of gibberish that's completely unrelated to anything and holds no value at all, then congrats on a job well done. Quantum mechanics is basically built around the principle that everything is possible, however unlikely.


----------



## FordGT90Concept (Mar 1, 2012)

Inceptor said:


> But that is still finite and limited, ultimately, to a subset of the intelligence and ability of the programmer(s) who created it. Even some kind of neural net programmed in a way that you hint at would be ultimately limited by its original coding, done by humans ---- unless P=NP and that neural net can make the creative leap(s) necessary to initiate self-improvement.


It is relatively easy to make a program that programs itself.  The only problem with AIs, right now, is artificial motivation: how do you make a program that can identify an unknown problem and then make it _want_ to fix it?  Said differently, the first step of a true AI will be a debugger that debugs itself.  It can be given an application's source, compile it, simulate being a user, identify errors users will trigger, correct the code, and recompile, over and over, until the AI determines it is flawless.  A simple task like that should naturally cause the AI to improve its own code for efficiency.  It should also discover, and acknowledge, issues that are subsets of its current task (like the inefficiency of a button or text box), which would spur it on to further research.

These are things that could be done today, but the incentive simply isn't there.  Why spend years researching an AI when you could hire a programmer to get it done in a week?
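The compile/test/correct cycle described above could be sketched as a simple loop; `run_tests` and `fixer` below are hypothetical stand-ins for whatever analysis such an AI would actually perform:

```python
def self_repair_loop(source, run_tests, fixer, max_rounds=10):
    """Repeatedly test a program and apply a fix after each failure.

    run_tests(source) -> (passed, failure_report)
    fixer(source, failure_report) -> revised source
    Both are placeholders for the hypothetical AI's own analysis.
    """
    for _ in range(max_rounds):
        passed, report = run_tests(source)
        if passed:
            return source        # the AI judges the program "flawless"
        source = fixer(source, report)
    raise RuntimeError("could not converge on a passing program")

# Toy demonstration: the "tests" demand the source contain 'return 42',
# and the "fixer" blindly appends that line.
def toy_tests(src):
    return ("return 42" in src, "missing: return 42")

def toy_fixer(src, report):
    return src + "\n    return 42"

fixed = self_repair_loop("def answer():", toy_tests, toy_fixer)
print("return 42" in fixed)   # True after one repair round
```

The hard part, as the post says, isn't the loop itself but making the test and fix steps smart enough to find problems nobody specified in advance.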




Inceptor said:


> Humans can obviously make creative leaps of one kind or another, and arguably, so can other animals that meet the criteria for some kind of minimal sentience.  This suggests that P=NP, since we can do it to varying degrees.
> To me, and apparently to some others, it seems like P=NP is not possible, logically.  And yet, our sentience says that it is possible.  A seeming paradox.


As I stated before, creativity is simply a layman's term for probability.  "Logical" = high probability to choose this route.  "Creative" = low probability to choose this route.  Someone who is said to be creative often takes the least frequent approach to a scenario.  Therefore, a creative AI simply needs to be able to come up with a list of solutions from the very logical to the not-so-logical and pick the not-so-logical approach.  Not-so-logical is often based upon an unlikely combination of past experiences.  Take sci-fi "creatures," for example: logical is something humanoid; not-so-logical is something off the wall like combining a pig, frog, and human.  It doesn't matter where you look in sci-fi, everything has some roots here on Earth.  Humans really aren't that creative at all. 
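That "weight every option, then deliberately pick an unlikely one" idea is mechanical enough to sketch; the option names and weights below are invented for illustration:

```python
import random

# How often past experience picked each design, as probabilities.
options = {"humanoid": 0.70, "animal hybrid": 0.25, "pig-frog-human": 0.05}

def choose(options, creative=False, rng=random):
    """'Logical' = take the most probable option; 'creative' = sample with
    the weights inverted, so rare combinations become the likely picks."""
    if not creative:
        return max(options, key=options.get)
    inverted = [1.0 - p for p in options.values()]
    return rng.choices(list(options), weights=inverted, k=1)[0]

print(choose(options))                 # always "humanoid"
print(choose(options, creative=True))  # usually one of the rarer designs
```

Whether low-probability sampling really captures human creativity is the debatable part; the mechanism itself is trivial to build.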




Inceptor said:


> In my opinion, which I'm fully aware may be completely wrong, we are not able to create (or initiate the creation of) a 'random seed' of suitable complexity to spark sentience in a set of algorithms running on a machine; that random seed would be, in a very real sense, indirectly, a subset of our own complexity -- which somehow allows us to make the creative leap -- which may not be enough for our algorithmic creation(s).


Think of how sentience works: memories and thought processes.  Computers already excel at both, having massive databases and the ability to calculate specific scenarios at lightning speed.  The two things that are missing are the ability to create new processes on demand and advances in sensory recognition.  For example, if you hook a computer up to a camera, the computer needs to be able to identify everything of importance it "sees" without human assistance.  If it can't identify something, it needs to be able to describe it and, from that description, derive a name.  This is a thought process humans perform in an instant without needing to reprogram themselves, but it is something computers fail miserably at today, because they are made to think in a way computers don't like to think (imagery instead of shapes).


In any event, this stuff is coming and faster than you think.


----------



## TIGR (Mar 1, 2012)

I don't think the demise of humanity will be as simple as a "'Robopocalypse". I see homo sapiens branching off into multiple species, some continuing to evolve "naturally", some genetically modified, some augmented with technology, some being liberated from fixed physical form completely, some coalescing into new forms of what we might call consciousness. Technology will evolve in a similar fashion and eventually terms like "human", "cyborg", "robot", and "'artificial' intelligence" may become as pointless as their meaning becomes questionable—and lineage won't matter.

All along the way, sure there will be conflicts and some of these groups may be wiped out. Maybe humanity as we know it will be one of them, but I doubt it will ever be completely destroyed by war. If humanity survives the next century, I think our progeny will exist in some form a thousand, a million, and a billion years from now—and to them, none of the currently foreseeable life forms descending from humans, or their wars, will be more significant than microbes are to us today.

FWIW, sentience is an illusion.


----------



## Inceptor (Mar 2, 2012)

> The only problem with AIs, right now, is artificial motivation: how do you make a program that can identify an unknown problem and then make it want to fix it? Said differently, the first step of a true AI will be a debugger that debugs itself. It can be given an application's source, compile it, simulate being a user, identify errors users will trigger, correct the code, and recompile, over and over, until the AI determines it is flawless. A simple task like that should naturally cause the AI to improve its own code for efficiency. It should also discover, and acknowledge, issues that are subsets of its current task (like the inefficiency of a button or text box), which would spur it on to further research.



Exactly, I agree with that.  My point is that I personally don't think it's possible to code something that can 'make the leap'.  When I say that, I don't mean a continual increase in the complexity of instructions; I mean that 'the leap' is when that AI is somehow 'jumpstarted' and does it on its own, without a human having stuffed it with millions of lines of code telling it how to do everything.

Think of a baby; babies know next to nothing, but their minds are in overdrive, taking in sensory experience by any method they can get away with -- seeing, touching, tasting, hearing -- sifting through all of it, recognizing patterns, developing memory schemata and paradigms that they can later apply to other situations in many different combinations.  Babies 'jumpstart' themselves.  But that's just developing intelligence; self-awareness (sentience) develops over time, in early childhood, as children learn to creatively apply their small store of memories and sensory experiences to think reflectively about themselves as separate beings -- when they recognize they're people too.
I think that the complexity needed for a human to develop first intelligence and then sentience is beyond the capability of any human-created algorithm or code.

Intelligence? Yes, there will be intelligent (small 'i') computer programs.
Sentient and creative? No, there will not be such things, in my opinion.  Only more and more complex programming creating the _illusion_ of such capability -- PI (Pseudo-Intelligence), as one sci-fi author put it.

I'm happy to be wrong, but that's my opinion at this moment in time.

I'm getting too philosophical I think, and we seem to disagree on philosophical grounds, which is messy, so I'll end this.


----------

