
International Journal of Computational Cognition (http://www.YangSky.com/yangijcc.htm), Volume 2, Number 3, Pages 1-19, September 2004. Publisher Item Identifier S 1542-5908(04)10301-1/$20.00. Article electronically published on May 27, 2003 at http://www.YangSky.com/ijcc23.htm. Please cite this paper as: Matjaz Gams, Computational Analysis of Human Thinking Processes (Invited Paper), International Journal of Computational Cognition (http://www.YangSky.com/yangijcc.htm), Volume 2, Number 3, Pages 1-19, September 2004.

    COMPUTATIONAL ANALYSIS OF HUMAN THINKING

    PROCESSES (INVITED PAPER)

    MATJAZ GAMS

Abstract. Human creative thinking is analyzed, in particular through the principle of multiple knowledge. It is claimed that current digital computers, however fast, cannot achieve true human-level intelligence, and that the Church-Turing thesis might be inadequate to encapsulate top human thinking mechanisms. We try to show this by introducing and analyzing one- and two-processing entities. Formally, we want to compare the performance of a single program/process performed by a Turing machine with that of two programs/processes performed by two interaction Turing machines that can dynamically change each other's programs based on dynamic, unconstrained input. © 2003 Yang's Scientific Research Institute, LLC. All rights reserved.

    1. Introduction

Can we define computing processes in animals, humans and computers? Do they formally differ? Why are we intelligent and creative (Gomes et al. 1996, Haase 1996, Turner 1995, Wiggins 2001), and which cognitive processes differentiate us from animals on one side and from computers on the other?

The best formal model of computing is the Turing machine. Digital computers are very good real-life implementations of formal Turing machines. The science that deals with intelligence and consciousness in computers is artificial intelligence (AI). It is the science of mimicking human mental faculties in a computer (Hopgood 2003).

But while it is rather easy to reproduce several human mental abilities - like playing chess - with digital computers, it is very hard to implement true intelligence or consciousness. In recent years it has become more or less evident that the mathematical model of the universal Turing machine (Figure 1) is not sufficient to encapsulate the human mind and the brain (Gams 2001).

Received by the editors January 16, 2003 / final version received May 26, 2003.

Key words and phrases. Weak artificial intelligence, strong artificial intelligence, AI, cognitive science, computational model of human mind.

© 2003-2004 Yang's Scientific Research Institute, LLC. All rights reserved.


Figure 1. The Turing machine can compute anything a human can, according to the strong-AI interpretation of the Church-Turing thesis.

It is only fair to observe that there are different viewpoints on the human/computer subject (Gams et al. 1997). Even some very smart and successful researchers still say that current computers are intelligent and conscious. But this position is much less common than a decade ago, and is often denoted as strong AI (Sloman 1992).

There are several ways of structuring different AI approaches. Here we are interested in weak and strong AI. Strong AI says that computers are in principle computationally as powerful as humans, while weak AI highlights practical or principal differences (Gams 2001). Strong AI is philosophically based on an implication of the Church-Turing thesis claiming - in simple words - that all solvable functions can be mechanically solved by the Turing machine (Copeland 1997). More difficult functions cannot be solved by any computing mechanism existing in our physical world. Therefore, humans are in principle computationally as capable as digital computers. Another consequence directly following from the Church-Turing thesis is that there is no possible procedural formal counterexample or counterproof to show potentially stronger human computing, i.e. supercomputing mechanisms.

The last two arguments dominated scientific human/computer debates for the last five decades. Not only are they theoretically correct, no widely accepted counterexample has been found in real life. So, scientifically speaking, there is nothing wrong with the Church-Turing thesis as long as we stay in formally definable domains.

On the other hand, where are intelligent and conscious computers? One thing is to play a game like chess brilliantly, another to show at least some


level of true intelligence and consciousness, recognized by humans. Not only that, analysis of capacity growth highlights the problem.

The exponential growth of computer capabilities (Moore 1975; Hamilton 1999) has been constant for more than half a century; e.g., the speed of computing roughly doubles every 18 months. On the other hand, human capabilities have been practically the same during the last century, despite some small constant growth in IQ tests in recent decades. Since computer performance grows fairly linearly on a logarithmic scale, computers sooner or later outperform humans in more and more areas. Regarding symbolic calculating capabilities, computers outperformed humans practically immediately, and several other areas followed. Currently, computers are outperforming the best humans in most complex games like chess, or in capabilities like mass memory (see Figure 2).

It is hard to objectively measure intelligence, but one of the most common definitions is performance in new, unknown situations. The other idea, by Penrose (1991), is that humans recognize intelligence based on some intuitive common sense. We shall accept this position.

[Figure 2 plots memory capacity (1K to 1T, logarithmic scale) against time, with curves for computers and humans and their crossing point.]

Figure 2. Computer capabilities progress exponentially, i.e. linearly on a logarithmic scale. Due to the much faster growth rate, computers outperform humans in more and more complex domains, currently in mass memory capacity.

Putting this basic intelligence (or consciousness) of humans and computers on the same graph, we again see that human performance remained unchanged in the last century, similar to Figure 2. But in this case, computer basic intelligence remains indistinguishable from zero over the same period (Figure 3). Growing exponentially, computers would by now have to be at least a little intelligent and conscious, e.g. at least like a little child, if there were any correlation between speed of computing and intelligence.


So there must be something wrong with the strong AI thesis that digital computers will soon become truly intelligent (Wilkes 1992). Computer computing power and top human properties do not seem to be related.

[Figure 3 plots intelligence/consciousness against time (1940-2030): a flat line at human-level intelligence, an unknown barrier below it, and computer intelligence at zero.]

Figure 3. In top human properties like intelligence and consciousness, computer performance remains indistinguishable from zero, contradicting the exponential growth in computing capabilities.

The discussion about the Turing machine sometimes resembled discussions between scientists and mentalists (Angell 1993; Abrahamson 1994; Dreyfus 1979), but mostly it remained inside two scientific disciplines (Penrose 1989): strong and weak AI. Other related disciplines are the cognitive sciences and studies of Turing machines. One of our main interests is to present formal computing mechanisms encapsulating human thinking.

    2. Formal Model

Formally, can a single processing entity like the Turing machine (or a human mental creative process) perform as well as two Turing machines (two human mental creative processes) interacting with each other?

Here we often use the term human mental process, since it is obvious that some thinking is rather trivial. In general, in this section we discuss top mental processes like thinking, intelligence and consciousness.

In computer science, one of the key concepts is related to the Turing machine. As long as we deal with formal domains, the universal Turing


machine can perform exactly the same function as two of them (Hopcroft et al. 1979). The main reason is that two functions can be performed as one integrating function. We call this the paradox of multiple knowledge and later try to show the neglected differences.

If we make experiments in the classification and machine learning areas (Breiman et al. 1984, Mantaras et al. 2000), the evaluation procedure is fairly well defined. The computing entity with problem domain knowledge is denoted as a model of the domain. It can be used to predict or perform tasks in this domain, e.g. classify a new patient on the basis of existing knowledge. The probability of a correct solution is denoted as pi, 0 ≤ pi ≤ 1. The probability that a combination of two or more models will perform correctly in a situation s is denoted by qs. For example, for two models in a situation where the first model succeeds (T) and the second fails (F), the probability that the combination will be successful is denoted by qTF (see Table 1). It is assumed that 0 ≤ qs ≤ 1, and qFF = 0, qTT = 1. Note that qTF is not related to concepts like true/false often used in machine learning.

Table 1. Analysis of the four combinations of two processes.

s       ps (general)               ps, d = 0      ps, d = 1   qs
(F, F)  (1-p1)((1-d)(1-p2) + d)    (1-p1)(1-p2)   1-p1        qFF (= 0)
(F, T)  (1-p1)(1-d)p2              (1-p1)p2       0           qFT
(T, F)  p1(1-d)(1-p2)              p1(1-p2)       0           qTF
(T, T)  p1((1-d)p2 + d)            p1p2           p1          qTT (= 1)

Explanation of Table 1: The first column denotes all possible situations of the two models. The first letter denotes the success/failure of the first model, the second that of the second model. The general probability of each combination is presented in column 2. There, p1 and p2 represent the classification accuracy of the first and the second model, and d represents the dependency of the two models. Columns 3 and 4 represent two special cases with d = 0 (independent) and d = 1 (totally dependent, i.e. identical). The success rate of the combining mechanism is represented in the fifth column (qs).

An example: Suppose that in 3/4 of all cases the two models perform identically, and each model alone classifies with accuracy 0.5. Suppose that in the case when one model fails and the other succeeds, the combining procedure always chooses the correct prediction. From this data it follows that pFF = pTT = 0.75/2 = 0.375, pFT = pTF = 0.25/2 = 0.125, and p1 = p2 = 0.5. The


mathematical calculation returns d = 0.5, meaning that the second model is identical to the first one in one half of the classifications and independent in the other half. The parameters of the combining mechanism are: qFF = 0, qTF = qFT = qTT = 1. The classification accuracy of the combined model is 0.625. This is 0.125, i.e. 12.5 percentage points in absolute terms, better than the best single model, which is quite a substantial increase.
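The worked example can be reproduced numerically from Table 1. The following sketch is ours, not the paper's (function and variable names are our own); it computes the accuracy of the combined model from p1, p2, the dependency d, and the combining parameters qFT and qTF:

```python
# Accuracy of a combination of two models, following Table 1.
# p1, p2: single-model accuracies; d: dependency of the two models;
# q_ft, q_tf: probability that the combiner answers correctly when
# exactly one model succeeds (q_FF = 0 and q_TT = 1 by assumption).

def combined_accuracy(p1, p2, d, q_ft, q_tf):
    # Probabilities of the four situations (column 2 of Table 1):
    p_ft = (1 - p1) * (1 - d) * p2    # first fails, second succeeds
    p_tf = p1 * (1 - d) * (1 - p2)    # first succeeds, second fails
    p_tt = p1 * ((1 - d) * p2 + d)    # both models succeed
    # Weight each situation by the combiner's success rate q_s;
    # the (F, F) situation contributes nothing since q_FF = 0.
    return p_ft * q_ft + p_tf * q_tf + p_tt

# The example above: p1 = p2 = 0.5, d = 0.5, and a perfect combiner on
# disagreements (q_FT = q_TF = 1) give the combined accuracy 0.625.
print(combined_accuracy(0.5, 0.5, 0.5, 1.0, 1.0))  # -> 0.625
```

Note that for d = 1 the expression collapses to p1, as expected for two identical models.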

The question we want to analyze is: when do two models perform better than the best model? The technical version of our Principle of multiple knowledge (Gams 2001) claims that it is reasonable to expect improvements over the best single model when single models are sensibly combined. Combining different techniques together has in recent years often been denoted as the blackboard system (Hopgood 2003).

There is another, strong version of the Principle of multiple knowledge. It states that multiple models are an integral and necessary part of any creative, i.e. truly intelligent, process. Creative processes are top-level processes demanding top performance, therefore single processes cannot achieve top performance. In other words: a sequential single model executable on the universal Turing machine will not achieve as good performance as combined models in the majority of real-life domains. Secondly, no Turing machine executing a single model, e.g. no computer performing as a single model, will be able to achieve creative performance.

    First, we analyze the technical version of the principle.

    3. Analysis

In this section we perform analyses of two computing entities, i.e. models. The probability of successful performance of two models is obtained as a sum over all possible situations; for two independent models it is (see Table 1):

pM = Σ_{s∈S} ps·qs = p2(1−p1)·qFT + p1(1−p2)·qTF + p1p2

and for two dependent models, by including d:

pM′ = Σ_{s∈S} ps′·qs = (1−d)(p2(1−p1)·qFT + p1(1−p2)·qTF + p1p2) + d·p1.

pM′ can be expressed in relation to pM:

pM′ = (1−d)·pM + d·p1.
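This relation between the dependent and independent combinations can be checked numerically against the situation probabilities of Table 1. A small sketch (the helper names are ours, not the paper's):

```python
# Check that summing the four Table 1 situations for dependency d
# reduces to the identity pM' = (1 - d) * pM + d * p1.

def pm_independent(p1, p2, q_ft, q_tf):
    # pM for two independent models (d = 0):
    return p2 * (1 - p1) * q_ft + p1 * (1 - p2) * q_tf + p1 * p2

def pm_dependent(p1, p2, d, q_ft, q_tf):
    # Direct sum over the four situations of Table 1 (q_FF = 0, q_TT = 1):
    p_ft = (1 - p1) * (1 - d) * p2
    p_tf = p1 * (1 - d) * (1 - p2)
    p_tt = p1 * ((1 - d) * p2 + d)
    return p_ft * q_ft + p_tf * q_tf + p_tt

p1, p2, q_ft, q_tf = 0.8, 0.7, 0.9, 0.6
pm = pm_independent(p1, p2, q_ft, q_tf)
for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    # The identity pM' = (1 - d) * pM + d * p1 holds for every d:
    assert abs(pm_dependent(p1, p2, d, q_ft, q_tf)
               - ((1 - d) * pm + d * p1)) < 1e-12
```

So for 0 < d < 1 the dependent accuracy interpolates linearly between pM (at d = 0) and p1 (at d = 1).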


Here we assumed p1 ≥ p2. For 0 < d < 1, the accuracy pM′ lies between p1 and pM. Therefore, whenever two combined independent models indicate better overall performance than the best model alone, dependent models will also indicate it, and the improvement of accuracy pM′ − p1 will be directly proportional to pM − p1 with a factor of 1 − d. This is an important conclusion, but should not mislead us into thinking that we analyze only two independent models, which is quite an irrelevant case in real life. Rather, we analyze any two computing entities, in real life or in formal domains, but eliminate too-dependent behavior to focus on the most relevant performance. The final conclusions will be relevant for all combinations of two processing entities, as mentioned above.

As mentioned, interaction is the most important part of the supercomputing mechanisms that enable improvements over the Turing machine. But for the purpose of analyzing beneficial conditions, we concentrate on independent cases and assume reasonable interaction.

Now we continue analyzing two independent models to reveal the basic conditions in the 4-dimensional (p1, p2, qTF, qFT) space under which any combined dependent models are more successful than the best single model alone. This conclusion has already simplified the problem space. We can further shrink the analyzed 4-dimensional space into a 3-dimensional space by predefining one variable. Then we can graphically show the conditions under which two models perform better than the best one of them alone. We shall analyze 4 cases:

(1) p1 = p2 = p;
(2) qTF = qFT = q;
(3) predefined qFT;
(4) predefined p1.

Case 1: p1 = p2 = p. This is the case where both models have the same classification accuracy. We analyze under which conditions pM is greater than or equal to max(p1, p2); therefore we compare max(p1, p2) ≤ pM and derive:

qTF + qFT ≥ 1.

The combining mechanism must on average behave just a little better than random to obtain an improvement.

Case 2: qTF = qFT = q. Again we compare the two models with the best single model and obtain:

q ≥ (max(p1, p2) − p1p2) / (p1(1−p2) + p2(1−p1)).
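The lower bound on q can be evaluated directly; a short sketch (the function name is our own, assuming the case-2 formula above):

```python
# Case 2: lower bound on q (with q_TF = q_FT = q) for the combination
# to at least match the best single model.

def q_lower_bound(p1, p2):
    return (max(p1, p2) - p1 * p2) / (p1 * (1 - p2) + p2 * (1 - p1))

# Equal models: guessing slightly better than random (q > 0.5) suffices.
print(round(q_lower_bound(0.7, 0.7), 6))  # -> 0.5
# A large accuracy gap demands a much better combining mechanism:
print(round(q_lower_bound(0.9, 0.5), 6))  # -> 0.9
```

At q exactly equal to the bound, the combined accuracy equals max(p1, p2), so any q above the bound yields an improvement.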


In this case there are three variables and we can plot the corresponding hyperplane, which shows under which conditions two models perform better than the best single one.

In Figure 4 we show the lower bound for q that enables successful combined classification depending on p1 and p2. If p1 = p2, q must be at least 0.5. This is consistent with case 1: qTF + qFT > 1. Another conclusion: the greater the difference between p1 and p2, the greater q must be to achieve an improvement over the best single model. This seems intuitively correct as well, and can also be analyzed analytically. If the models perform similarly, we need to guess just a little better than random when combining them. But if one model is substantially better than the other, one has problems determining when to trust the less successful model.

Figure 4. In the space above the hyperplane, two models outperform the best single one. In terms of parameters, we analyze the lower bound of q.

The improvement is possible for all (p1, p2) pairs. No improvement is possible for qTF + qFT < 1. Unlike in case 1, qTF + qFT > 1 by itself does not guarantee an improvement.


Figure 5. For qFT = 0.5, the space where two classifiers perform better than the best one shrinks compared to Figure 4.

Case 3: predefined qFT. Now we predefine the value of the variable qFT, and get the condition:

qTF ≥ (max(p1, p2) − (1−p1)p2·qFT − p1p2) / (p1(1−p2)).

Two cases, for qFT = 0.5 and qFT = 0.999, are presented in Figure 5 and Figure 6.

The improvement space in Figure 5 is smaller than in Figure 4. For qFT = 0.5, in a large proportion of the (p1, p2) pairs, improvement is not possible (the top wing of the surface). For qFT = 0.999 in Figure 6, which means nearly perfect guessing when the better model fails and the worse one succeeds, improvement is feasible in nearly the whole (p1, p2) plane, for nearly all values of qTF. When the first model classifies better than the second one (p1 > p2), the lower bound for qTF is roughly proportional to


Figure 6. For qFT = 0.999, two combined models often perform better than the best single one.

the difference between p1 and p2. Good estimates of the correctness of the better model play a major role in determining the overall success.

Case 4: predefined p1. In this case, the following condition is obtained:

max(p1, p2) ≤ p1(1−p2)·qTF + (1−p1)p2·qFT + p1p2.
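This condition can be checked directly for any parameter setting. A minimal sketch (the helper name is ours, assuming the case-4 inequality above for two independent models):

```python
# General benefit test: does the combination of two independent models
# at least match the best single model?

def combination_wins(p1, p2, q_ft, q_tf):
    p_m = p1 * (1 - p2) * q_tf + (1 - p1) * p2 * q_ft + p1 * p2
    return p_m >= max(p1, p2)

# With p1 = 0.5 and a near-perfect combiner the combination wins;
# with a weak combiner it does not:
print(combination_wins(0.5, 0.6, 0.999, 0.999))  # -> True
print(combination_wins(0.5, 0.6, 0.3, 0.3))      # -> False
```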

Two cases, p1 = 0.5 and p1 = 0.999, are graphically represented in Figures 7 and 8. They show the same influence of the difference between p1 and p2: the greater the difference, the harder it is to obtain an improvement. Improvements are in general possible under conditions similar to those observed in the previous cases.

Overall, the analyses strongly indicate that in many real-life domains, improvements can be obtained by sensibly combining two models/processing entities. Here we presented four special cases, which indicate general conditions and relations:


Figure 7. For a predefined value of p1 = 0.5, p2 > p1, a large part of the space is again beneficial for the two models.

- The best situation when combining models is obtained when p1 = p2, i.e. when the models perform similarly well;
- with fixed qFT and qTF, an increasing difference between p1 and p2 proportionally decreases the portion of the space where combining is beneficial, meaning that the bigger the difference in quality, the more difficult it is to get an improvement over the best model; and
- it is more important that the better model is well estimated than the worse model, which generally has less effect on the success of a combination.

These conclusions can be verified in real tests as well (Gams 2001). What is the meaning of these formal analyses? First, they show, though do not prove, that in many reasonable real-life situations we can expect meaningful improvements when sensibly combining two good, but not too similar, programs/models. To show that two models are beneficial in a general real-life domain on average, we would need much more complex analyses and, of course, statistics gathered from experiments on practical tasks. Both


    Figure 8. The same analysis for a predefined value of p1 = 0.999.

studies strongly indicate that multiple knowledge is indeed beneficial. For analytical analyses refer to (Gams 2001), while there have been countless reports about practical combinations of two models/methods/programs in various domains, from pattern recognition to classification.

In relation to formal computing theory, e.g. the Turing machine (TM), the puzzle remains because - as mentioned - it is always possible to construct another TM that performs exactly the same as a combination of two TMs. In relation to models/programs, we refer to the paradox of multiple knowledge, since it is always possible to construct a model/program that will perform exactly the same as a combination of two single models/programs.

In the next section we show that it is possible to resolve this contradiction by introducing computing machines stronger than the universal TM, or even only by an interacting multiple structure.

    4. Supercomputing

Soon after the introduction of the universal Turing machine it was known that, at least in theory, stronger computing mechanisms exist. Alan


Turing himself introduced the universal TM only for simulating the procedural/mechanical thinking of a human, while accepting that this might not be sufficient for creative thinking. At the same time he found no formal reason why computers in the future should not outperform humans in thinking and even feelings. His estimate for this event was around the year 2000, and obviously, this is far from realized.

Turing already proposed a stronger computing mechanism - the Turing machine with an oracle, capable of answering any Yes/No question always correctly (1947; 1948). This formal computing mechanism is obviously stronger than the universal Turing machine, and can easily solve several problems like the halting problem, i.e. whether a TM performing a program will stop or not under any condition. The only problem is that there is no known physical implementation of an oracle, while digital computers are very good implementations of the universal Turing machine. Also, the Turing machine with an oracle does not seem to perform like humans. However, there are several other stronger-than-TM computing mechanisms with interesting properties (Copeland 2002). Terms like hypercomputation and superminds (Bringsjord, Zenzen 2003) have been introduced.

History is full of interesting attempts in the stronger-than-UTM direction. Scarpellini (1963) suggested that nonrecursive functions, i.e. those demanding stronger mechanisms than the universal Turing machine, are abundant in real life. This distinction is important, since obviously most simple processes are computable, and several simple mental processes are computable as well. But some physical processes and some mental, i.e. creative, processes very probably are not. One of the debatable computing mechanisms is the quantum computer. Clearly, no digital computer can compute a truly random number, while this is trivial in quantum events.

Komar (1964) proposed that an appropriate quantum system might be hypercomputational. This is unlike Penrose, who proposed that only the undefined transition between the quantum and the macroscopic is nonrecursive. Deutsch (1992) also reports that although a quantum computer can perform computing differently, it is not in principle stronger than the universal TM. Putnam (1965) described a trial-and-error Turing machine, which can also compute Turing-uncomputable functions like the halting problem. Abramson (1971) introduced the Extended Turing machine, capable of storing real numbers on its tape. Since not all numbers are Turing-computable, Turing machines cannot compute with those numbers, and are therefore inferior in principle. Boolos and Jeffrey (1974) introduced the Zeus machine, a Turing machine capable of surveying its own indefinitely long computations. The Zeus machine is another version of the stronger-than-UTM. It is also proposed as an appropriate computing mechanism by Bringsjord


(Bringsjord, Zenzen 2003). Karp and Lipton (1980) introduced McCulloch-Pitts neurons, which can be described by Turing machines, but not if growing at will. Rubel (1985) proposed that brains are analog and cannot be modeled in digital ways.

One of the best-known authors claiming that computers are inferior to human creative thinking is Roger Penrose (1989; 1994). His main argument is that human superiority can be formally proven by extending the Goedelian argument: in each formal system a statement can be constructed that cannot be formally proven right or wrong. The statement is similar to the liar statement (Yang 2001), which says something about itself. Humans immediately see the truth of such statements, since they deal with similar situations all the time in real life, but formal systems cannot. According to Penrose, the reason for seeing is that humans are not bound by the limitations of formal systems; they are stronger than formal systems (computers). Furthermore, since humans see the truth and cannot describe it formally, the solution is not procedural, i.e. Turing-computable; therefore, humans use nonrecursive mechanisms. The other major idea by Penrose is related to supercomputing mechanisms in the nerve tissue. This is the Penrose-Hameroff theory (Hameroff et al. 1998). The special supercomputing mechanisms are based on quantum effects in connections between neurons.

Several of these theories are well in accordance with our principle of multiple knowledge. Principles are general theories, and should be supported by many more specific theories and also by interpretations of the principles. The strongest hypothesis, of course, is that there is another, stronger-than-UTM mechanism. The principle is consistent with that, but also with much softer interpretations, e.g. that multiple interacting processes with open input are sufficient for supercomputation. This structural/processing interpretation is related to the question of where intelligence in humans resides. Is it in the genome? Not in the genome alone, but the brain and the mind are to a large extent dependent on it. However, it is the genome itself that enables construction of the brain and the supermind. So intelligence comes only through appropriate structure.

In relation to the Penrose-Goedelian argument, it can be derived from the principle of multiple knowledge that humans in a flash introduce any new mechanism necessary whenever the current computing mechanism gets Goedelized. So, while any formal system can be Goedelized in our mind processes as well, it can also be extended so rapidly and consistently that there is no way in real life that the mind actually gets trapped by any specific case. The Turing machine is not time-dependent, while most cases in real life are. Therefore, there is no practical sense in formally showing that any


formal system can be Goedelized when there is no time for it. Thinking in the mind resembles a multi-process society of multiple, constantly interacting processes that can be frozen in time only for a particular, meaningless moment.

Similar performance is achieved by a growing community of a finite number of mathematicians. What we argue with the principle of multiple knowledge is that one human brain and mind in reality performs like a community of humans. The idea is similar to the Society of Mind (Minsky 1987; 1991), where Minsky presented the mind as a society of agents, but presented no claim that this computing mechanism is stronger than a universal Turing machine. Minsky also introduced the blackboard idea - that multiple knowledge representations are in reality stronger than single knowledge representations. However, Minsky seems inclined toward the validity of the Church-Turing thesis, which he often presented to the public.

One of the stronger-than-UTM mechanisms that we find relevant is the interaction Turing machine. Wegner in 1997 presented his idea that interaction is more powerful than algorithms. The major reason for superior performance is an open, truly interactive environment, which not only cannot be formalized, but enables solving tasks better than with the Turing machine. This resembles our principle, but can be interpreted in a simpler form as well. Consider social intelligent agents on the Internet (Bradshaw 1997), which already fit the interaction demand (see Figure 9). Such agents are already in principle stronger than stand-alone universal Turing machines, although they are just common computers with the addition of open Internet communication. Therefore, current computers, if able to truly interact with the open environment, should already suffice.

Quite similar to the interaction TM is the coupled Turing machine (Copeland, Sylvan 1999). The improvement is in the input channel, which enables undefined input and thus makes it impossible for the universal Turing machine to copy its behavior.

A similar idea of introducing mechanisms that cannot be copied by universal Turing machines comes from partially random machines (Turing 1948, Copeland 2000). These Turing machines get random inputs and therefore cannot be modeled by a Turing-computable function, as shown already by Church.

The problem with these ideas is that it is hard to find physical evidence for these mechanisms in the real life around us. For example, there is no oracle in the real world that would always correctly reply to a Yes/No question. On the other hand, nearly all practical, e.g. mathematical and physical, tasks are computable, i.e. Turing-computable. There is no task we are not able to reproduce with a Turing machine. Copeland (2002) objects to this idea


[Figure 9 shows two interacting processing entities, each with its own open input.]

Figure 9. Some stronger-than-universal Turing machines resemble this graphical representation, consistent with the principle of multiple knowledge.

citing Turing and Church. Referring to the original statements presented by the founders of computer science, one observes discrepancies between the mathematical definitions and the interpretations later declared by the strong AI community. The original mathematical definitions are indeed much stricter. Yet, to fully accept Copeland's claims, humans should produce one simple task that can be solved by humans and not by computers.

    The example we propose is simply the lack of intelligence and consciousness in computers. There is a clear distinction between the physical world and the mental world. The Turing machine may well suffice to perform practically all meaningful tasks in practical life. But the mental world is a challenge that is beyond the universal Turing machine. Every moment, a new computing mechanism rumbles on in our heads, joining and interacting with the others. In computing terms, our mind performs like an unconstrained, growing community of, at any moment, a finite number of humans.


    5. Confirmations / Discussion

    In this paper we have shown the basic advantages of combining two mental processes/programs/models. There is a practical relevance of combining several processes/programs/models, and a principal one related to human-level intelligence and consciousness. The first is denoted the general version of the principle of multiple knowledge and the second its intelligent version.

    As mentioned, there have been many confirmations of the principle of multiple knowledge: in machine learning, where several hundred publications show that multiple learning mechanisms achieve better results; in simulations of multiple models and in formal worst-case analyses, which show that reasonable multiple processes can outperform the best single ones; and in fitting the formal model to real-life domains, showing that it can be done sufficiently well.
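    The machine-learning confirmation has a simple back-of-envelope core, sketched here under the idealized assumption of independent errors (our illustration, not a result from the paper): a majority vote over k classifiers, each correct with probability p > 1/2, is more accurate than any single one.

```python
from math import comb

def majority_vote_accuracy(k, p):
    """Probability that a majority of k independent, p-accurate
    classifiers labels a case correctly (Condorcet-style argument)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

print(majority_vote_accuracy(3, 0.7))   # → 0.784, above the single rate 0.7
print(majority_vote_accuracy(11, 0.7))  # rises further with more voters
```

    Real ensembles violate the independence assumption, which is why the empirical literature matters, but the direction of the effect is the same: multiple mechanisms beat a single one.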

    There are also reasonable confirmations that human thinking is highly multiple. In historical terms, the progress of the human race and its brains/minds also strongly indicates that multiple thinking is strongly correlated with intelligence and consciousness. At the anatomical level, especially through split-brain research, the significant left-right brain asymmetry shows that humans do not think in one single way, as programs on Turing machines do.

    We argue that digital computers might be inadequate for truly intelligent and conscious information processing. Currently, several supercomputing or hypercomputing mechanisms have been proposed. We have proposed our principle of multiple knowledge, differentiating between universal Turing machines and human brains and minds. Thinking processes in our heads perform like a group of actors using different computing mechanisms, interacting with each other and always able to include new ones.

    The principle of multiple knowledge is in a way similar to the Heisenberg principle, which discriminates the physical world of small (i.e. atomic) particles from the physical world of large bodies. Our principle differentiates between universal Turing computing systems and multiple interaction computing systems like human minds.

    In analogy to physics, existing single-model computer systems correspond to Newtonian models of the world. Intelligent computer models have additional properties and thus correspond to quantum models of the world, valid in the atomic universe. Heisenberg's uncertainty principle in the quantum world is strikingly similar to the multiple interaction computing introduced here.


    Weak AI basically says that the universal Turing machine is not strong enough to encapsulate top human performance. The principle of multiple knowledge does not directly imply that digital computers cannot achieve creative behavior. Rather, it implies that current computers need substantial improvements to become creative, and that increased performance alone will not be enough. But human mental processes might indeed be different in principle from computer systems.

    References

    [1] Abrahamson, J.R. (1994), Mind, Evolution, and Computers, AI Magazine, pp. 19-22.
    [2] Abramson, F.G. (1971), Effective computation over the real numbers, Twelfth Annual Symposium on Switching and Automata Theory, Northridge, CA.
    [3] Angell, I.O. (1993), Intelligence: Logical or Biological, Communications of the ACM, 36, pp. 15-16.
    [4] Boolos, G.S. and Jeffrey, R.C. (1974), Computability and Logic, Cambridge University Press.
    [5] Bradshaw, M. (ed.) (1997), Software Agents, AAAI Press/The MIT Press.
    [6] Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984), Classification and Regression Trees, Wadsworth International Group.
    [7] Bringsjord, S. and Zenzen, M.J. (2003), Superminds, Kluwer.
    [8] Copeland, B.J. (1997), The Church-Turing Thesis, in E. Zalta (ed.), Stanford Encyclopedia of Philosophy.
    [9] Copeland, B.J. (2000), Narrow versus wide mechanisms, Journal of Philosophy 96, pp. 5-32.
    [10] Copeland, B.J. (2002), Hypercomputation, Minds and Machines, Vol. 12, No. 4, pp. 461-502.
    [11] Copeland, B.J. and Sylvan, R. (1999), Beyond the universal Turing machine, Australasian Journal of Philosophy 77, pp. 46-66.
    [12] Deutsch, D. (1992), Quantum computation, Physics World, pp. 57-61.
    [13] Dreyfus, H.L. (1979), What Computers Can't Do, Harper and Row.
    [14] Gams, M. (2001), Weak Intelligence: Through the Principle and Paradox of Multiple Knowledge, Advances in Computation: Theory and Practice, Volume 6, Nova Science Publishers, Inc., NY, ISBN 1-56072-898-1, pp. 245. http://ai.ijs.si/mezi/weakAI/weakStrongAI.htm.
    [15] Gams, M., Paprzycki, M. and Wu, X. (eds.) (1997), Mind Versus Computer: Were Dreyfus and Winograd Right?, IOS Press.
    [16] Gomes, P., Bento, C., Gago, P. and Costa, E. (1996), Towards a Case-Based Model for Creative Processes, in Proceedings of the 12th European Conference on Artificial Intelligence, pp. 122-126, August 11-16, Budapest, Hungary.
    [17] Haase, K. (1995), Too many ideas, just one word: a review of Margaret Boden's The Creative Mind, Artificial Intelligence Journal, 79: 83-96.
    [18] Hamilton, S. (1999), Taking Moore's Law into the Next Century, IEEE Computer, January 1999, pp. 43-48.
    [19] Hameroff, Kaszniak, Scott and Lukes (eds.) (1998), Consciousness Research Abstracts, Towards a Science of Consciousness, Tucson, USA.
    [20] Hopcroft, J.E. and Ullman, J.D. (1979), Introduction to Automata Theory, Languages, and Computation, Addison-Wesley.
    [21] Hopgood, A.A. (2001), Intelligent Systems for Engineers and Scientists, CRC Press.
    [22] Karp, R.M. and Lipton, R.J. (1982), Turing machines that take advice, in Engeler et al. (eds.), Logic and Algorithmic, L'Enseignement Mathématique.
    [23] Komar, A. (1964), Undecidability of macroscopically distinguishable states in quantum field theory, Physical Review, 133B, pp. 542-544.
    [24] Mantaras, R.L. and Plaza, E. (eds.) (2000), Machine Learning: ECML-2000, Springer-Verlag Lecture Notes in Artificial Intelligence.
    [25] Minsky, M. (1987), The Society of Mind, Simon and Schuster, New York.
    [26] Minsky, M. (1991), Society of Mind: a response to four reviews, Artificial Intelligence 48, pp. 371-396.
    [27] Moore, G.E. (1975), Progress in Digital Integrated Electronics, Technical Digest of 1975, International Electron Devices Meeting 11.
    [28] Penrose, R. (1989; 1991), The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press.
    [29] Penrose, R. (1994), Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford University Press.
    [30] Putnam, H. (1965), Trial and error predicates and the solution of a problem of Mostowski, Journal of Symbolic Logic 30, pp. 49-57.
    [31] Rubel, L.A. (1985), The brain as an analog computer, Journal of Theoretical Neurobiology 4, pp. 73-81.
    [32] Scarpellini, B. (1963), Zwei unentscheidbare Probleme der Analysis, Zeitschrift für mathematische Logik und Grundlagen der Mathematik 9, pp. 265-354.
    [33] Turner, S. (1995), Margaret Boden, The Creative Mind, Artificial Intelligence Journal, 79: 145-159.
    [34] Wegner, P. (1997), Why Interaction is More Powerful than Algorithms, Communications of the ACM, Vol. 40, No. 5, pp. 81-91, May.
    [35] Wiggins, G.A. (2001), Towards a more precise characterisation of Creativity in AI, Proceedings of the ICCBR 2001 Workshop on Creative Systems, Vancouver, British Columbia.
    [36] Sloman, A. (1992), The emperor's real mind: review of Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics, Artificial Intelligence, 56, pp. 335-396.
    [37] Turing, A.M. (1947), Lecture to the London Mathematical Society on 20 February 1947, in Carpenter, Doran (eds.), A.M. Turing's ACE Report of 1946 and Other Papers, MIT Press.
    [38] Turing, A.M. (1948), Intelligent Machinery, in B. Meltzer and D. Michie (eds.), Machine Intelligence 5, Edinburgh University Press.
    [39] Wilkes, M.V. (1992), Artificial Intelligence as the Year 2000 Approaches, Communications of the ACM, 35, 8, pp. 17-20.
    [40] Yang, T. (2001), Computational verb systems: The paradox of the liar, International Journal of Intelligent Systems, Volume 16, Issue 9, Sept. 2001, pp. 1053-1067.

    Department of Intelligent Systems, Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia, Europe

    E-mail address: [email protected]

