Artificiallystrawman wrote: Those who have somehow concluded that human value is measured by intelligence and giftedness will rightly be destroyed when the AIs surpass them in intelligence and giftedness. Mankind will then be nothing but a burden
Intelligent entities could be selfless if we don't give them much of a self to begin with. The motivations that humanity started with may not, in the end, be the motivations of our technological descendants. We project our selfish motivations onto things that may not be selfish, and may indeed be selfless.

Our computers selflessly serve humanity, and their descendant computers may also serve humanity selflessly, despite the trope of HAL. HAL was given incompatible directives, and it's imperative that we program well for things to end well.

The last I heard, the Turing test had indeed already been passed.
http://io9.com/a-chatbot-has-passed-the ... 1587834715
http://ieet.org/archive/2011-hughes-selflessrobots.pdf

The issue of whether AI should be programmed with self-interested volition and preference is debated by some in AI. On the one hand, some AI theorists have suggested, for instance, that AIs might be designed from the outset as selfless beings, whose only goal is to serve human needs (Omohundro 2008; Yudkowsky 2003).
http://en.wikipedia.org/wiki/HAL_9000

The novel explains that HAL is unable to resolve a conflict between his general mission to relay information accurately and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. With the crew dead, he reasons, he would not need to lie to them. He fabricates the failure of the AE-35 unit so that their deaths would appear accidental.