Strong AI: The Possibility of Destruction
Posted by Jeremy Tarbush in AI, Artificial Intelligence, Machines, Salvation, Strong AI, Terminator
Well, what do you think of the possibility of Strong AI taking us all over? I think it highly unlikely as long as inventors remember the Three Laws of Robotics. I think it more likely that insect-like nanobots would form some kind of "grey goo." But still, as long as we keep Asimov's Three Laws of Robotics in mind in our engineering, we should be fine.
I found an article today that says scientists believe we should have nothing to fear, and I will end by quoting the entire thing here. It talks some about the Terminator: Salvation movie. I haven't seen it, but the previews looked awesome. Anyway, here you go:
Despite 'Terminator,' machines still on our side: Scientists say AI will be humanity's 'Salvation'
Read more: http://www.nydailynews.com/entertainment/movies/2009/05/26/2009-05-26_despite_terminator_machines_still_on_our_side_scientists_say_ai_will_be_humanity.html

The post-apocalyptic "Terminator Salvation" depicts a world in which a computer becomes self-aware - and promptly makes the logical decision to wipe out the humans that made it. It's a scenario that's gotten a lot eerier in the 25 years since the pre-Internet days of the first "Terminator" movie.
But there's no reason to banish the BlackBerry to the basement ... just yet, say computer scientists.
"We live in a world today where this is no longer science fiction," says McG, the director of "Terminator Salvation." "Artificial intelligence is absolutely everywhere, from the ABS brakes in your car to your BlackBerry, spelling the word you misspelled correctly on your behalf.
"It's here."
Not so fast.
While the most powerful supercomputer - the IBM Roadrunner at the Los Alamos National Laboratory - can do about a thousand trillion calculations per second, experts say machines still can't come close to a real-life Arnold Schwarzenegger when it comes to recognizing patterns and solving problems.
"At this point, the technology is not to the point where it's self aware," said professor Reid Simmons of the Robotics Institute at Carnegie Mellon University. "So they're not going to make decisions such as, 'Oh, I'm getting tired of dropping bombs on Afghanistan, I think I'll fly over to Germany and I'll drop bombs there instead.' That's not going to happen."
If and when computers do become sentient, Simmons believes their human creators won't be that stupid: We will have engineered in basic safeguards.
"We'll still have control of the off plug," he says.
Noted futurist and inventor Ray Kurzweil, author of the book "The Singularity Is Near: When Humans Transcend Biology," believes computers will be able to pass the so-called Turing test - when a computer shows an intelligence that is indistinguishable from a human - by the late 2020s.
Sound a little too science fiction? Take note of Kurzweil's track record: in the 1980s, he predicted a computer would beat a human grandmaster at chess by 1998. His prediction was only a year off, since Deep Blue beat Garry Kasparov in 1997.
"This technology is growing exponentially," says Kurzweil. "What used to fit in a building now fits in your pocket. What used to fit in your pocket will fit inside a blood cell 25 years from now."
But rather than the adversarial war between man and machines envisioned in the "Terminator" series, Kurzweil believes the technical advancement of the next few decades will herald a literal rewiring of the human brain. Given the shrinking costs of nanotechnology, he argues, it's only a matter of time before humans can turn themselves into supercomputers.
"It's not going to be a clear distinction where you can walk in a room and say humans on the left side of the room and AIs, computers on the right side," says Kurzweil. "We're all going to be deeply integrated with these technologies."
McG isn't so sure. He points to the splitting of the atom as proof that the human race is at its most inventive when it's thinking up new ways to potentially destroy itself. What logical, free-thinking computer would want to share the planet with us?
"Not only do I think it's possible [that computers would want to terminate us,] I'm not certain humanity will be around in 50 to 100 years," he says.
Science fiction has certainly mined those fears over the years, whether it's been "2001's" homicidal computer, HAL 9000, or the computer that almost obliterated the world in a nuclear holocaust in "WarGames." (In a case of life imitating art, the Roadrunner supercomputer at Los Alamos is used for nuclear simulations.)
But Andrew Ng, a computer science professor at Stanford University, says artificial intelligence will breed machines that are more R2-D2 than T-800. In fact, he adds, there may be more legitimate worry that the future will resemble "Mad Max" or "The Day After Tomorrow" than the one predicted by "Terminator Salvation."
"I think humanity faces many doomsday scenarios today, such as those posed by global warming, possible nuclear war, maybe even asteroid strikes," he says. "AI is much more likely to see us through these problems than to cause them."