Great Minds Think Alike


When it comes to waving the flag of community alert around the frankly obvious threat of humanity being collectively turned into an ultra-precise mince by Evil Robots, we will take any help we can get. It's just gravy when that help comes from the "high quarters" of intellectual reputation.

The (what I think it would be fair to call "eminent") philosopher Nick Bostrom has expressed his well-considered concern about the existential risk that might be introduced by the development of what he refers to as "Superintelligence". Again, we appreciate the input of "rarefied minds" in all of this, but frankly, we're taking all comers.

As Eliezer Yudkowsky put it:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." [ ERN - Indeed... ]

From the article:

"Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research.

Many people working on AI remain skeptical of or even hostile to Bostrom’s ideas. But since his book on the subject, Superintelligence, appeared last summer, some prominent technologists and scientists—including Elon Musk, Stephen Hawking, and Bill Gates—have echoed some of his concerns. Google is even assembling an ethics committee to oversee its artificial intelligence work."

Full Story @ Technology Review