Re: Introduction: Asimov: Consciousness

W Ramsay ( w.ramsay@Strath.ac.uk )
Mon, 16 Oct 1995 11:16:48 +0100

>For some reason this reminded me unavoidably of Captain Kirk's Top Ten
>Reasons for Violating the Prime Directive. Reference not to hand, but
>available to the curious.
>
>
>Bill Ramsay
>
>
>BILL--THANKS FOR THE HELPFUL STAR TREK REFERENCE! Capt. Kirk never did seem
>to fully grasp the concept of constructive alternativism. Sometimes Kirk
>really needed Spock to give him a "zetz" right in the head!
>
>Jonathan D. Raskin, Ph.D.
>Dept. of Psychology
>Tennessee State University
>3500 John A. Merritt Blvd.
>Nashville, TN 37209-1561
>(615) 963-5158
>e-mail: raskinj@HARPO.TNSTATE.EDU
>
>
Interesting, Jon. I've always thought of JTK as a kind of destructive
alternativist, and I'm not sure that Spock's penchant for calculating the
odds as 36,043,271:1 that the dilithium crystals willna' hold makes him
anything other than a positivist. Given that they usually do hold and it's
usually because Scotty cross-couples everything with everything else and
feeds it into what's left, I reckon him to be the only constructivist on the
bridge. That may be merely ethnic bias, of course.

All this nonsense got me thinking further, though on a quite different tack.
Feeling sorry for the Asimov fans subscribing to all this Trek-talk, I
thought I'd throw them a conundrum: what would happen if you threw Gödel's
Theorem at the Three Laws of Robotics? For a start, you would probably
reconstrue all those cuddly eccentric AI types out there.
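
For the non-Asimovians: the Laws amount to a strict priority ordering, and
the Gödelian worry shows up the moment you try to write them down, because
everything hangs on a predicate like "human" that the system must apply
from the inside. A toy sketch of what I mean, in Python, with every
predicate and attribute hypothetical (harms, disobeys, endangers and the
rest are stand-ins of my own invention, not anybody's actual
implementation):

    def is_human(entity):
        # The Goedelian soft spot: the system must settle, from inside,
        # what counts as "human". Here it just reads off however the
        # entity has already been construed, which begs the question.
        return getattr(entity, "construed_as_human", False)

    def permitted(action, robot, world):
        """Check a proposed action against the Three Laws, in priority order."""
        # First Law (the active half only): the action must not harm
        # anything construed as human.
        if any(action.harms(e) for e in world.entities if is_human(e)):
            return False
        # Second Law: the action must not disobey an order given by a
        # human, the First Law having already taken precedence above.
        if any(action.disobeys(o) for o in robot.orders if is_human(o.giver)):
            return False
        # Third Law: the action must not endanger the robot itself, the
        # first two Laws having taken precedence above.
        if action.endangers(robot):
            return False
        return True

All the weight, notice, falls on is_human; the Laws themselves reduce to
three trivial tests.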

Presumably, whether the Laws worked would depend on whether people were
construed as "human". In turn this suggests that a computer program
(whether housed in a positronic brain or no) would become conscious at the
point at which it began to construe. Now, computer programs still operate
at a level way below that; but if one throws sufficient data at a
sufficiently complex program, and requires of it sufficiently complex
decisions within a system to which Gödel's Theorem applies, presumably
one could in principle force construing upon it.
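
By "construe" I mean it in Kelly's sense: erect a bipolar dimension and use
it to anticipate events. The barest caricature of a program doing something
of the kind (a throwaway sketch of my own, not a claim about how anyone's
AI actually works) is a single revisable threshold:

    class ToyConstruer:
        """One bipolar construct: a threshold on a single dimension,
        revised whenever an anticipation is invalidated."""

        def __init__(self, threshold=0.0, step=0.1):
            self.threshold = threshold  # where the construct's two poles divide
            self.step = step            # how far a failed anticipation moves it

        def anticipate(self, value):
            # Apply the construct: assign the event to one pole or the other.
            return value > self.threshold

        def experience(self, value, outcome):
            # Revise the construct only when the anticipation fails.
            if self.anticipate(value) != outcome:
                self.threshold += self.step if not outcome else -self.step

    # Throw data at it and it "construes", after a fashion:
    construer = ToyConstruer()
    for value, outcome in [(0.5, True), (-0.2, False), (0.1, True)]:
        construer.experience(value, outcome)

A perceptron by any other name, of course, and nobody would call it
conscious; but the loop (anticipate, be invalidated, revise) is the
process I have in mind.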

As further food for thought, someone (I think Marvin Minsky) suggested that
consciousness requires, minimally, that the system have the capacity to
remember what its state was a moment ago. Isn't that the minimal
requirement for construing, and hence for anticipation, too?
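
At its barest, that capacity is just a one-step memory fed back into the
next prediction. One more throwaway sketch, assuming nothing beyond the
bare loop (the linear extrapolation is my own arbitrary choice of
"anticipation"):

    class SelfRememberingSystem:
        """The minimal Minskian requirement: the system carries a record
        of its own previous state into the present moment."""

        def __init__(self):
            self.previous_state = None

        def step(self, current_state):
            # Anticipation at its barest: expect the last change to repeat.
            if self.previous_state is None:
                expected = current_state
            else:
                expected = current_state + (current_state - self.previous_state)
            self.previous_state = current_state  # remember what we just were
            return expected

    system = SelfRememberingSystem()
    print(system.step(1.0))  # 1.0: no past state to go on yet
    print(system.step(2.0))  # 3.0: anticipates the trend continuing

Remembering what it was a moment ago is exactly what lets it anticipate
what it will be a moment hence, which is the Kellian point.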

I realise that this is still a pretty leaky proposition, but if the AI
people were to focus more on process and less on performance, one might
envisage such a situation arising.

Any thoughts, anyone?

Bill.

Bill Ramsay,
Dept. of Educational Studies,
University of Strathclyde,
Jordanhill Campus,
GLASGOW,
G13 1PP,
Scotland.

'phone: +44 (0)141 950 3364
'fax: +44 (0)141 950 3367
