Compilation of Notes on CSE 575 Reading materials
By: Scott Settembre

October 25, 2007

My reading notes include interesting facts, summaries of what I’ve read, as well as my notes and insights on specific topics, usually relating to the topic of programming or implementing cognitive function. I have included notes from two of our textbooks that I have been reading, Thagard’s Mind: Introduction to Cognitive Science and the Cummins anthology Minds, Brains, and Computers. I have also read several recommended articles and readings from the syllabus as well as from the newsgroup, and usually had one or two small notes regarding those readings. MITECS has been a favorite of mine, giving a detailed overview of the history of some topics and reinforcing some of what has been illuminated in class, though I have found MITECS less thought provoking than the texts, but full of valuable facts of which I make note. One of my favorite activities is attending the presentations from the Cognitive Science Colloquium as well as the workshop on the Philosophy of Biology and the UB Graduate Conference on Logic. All of which have been thought provoking and inspiring, and so generated some reviews and insights which I have also recorded in this journal. I have intentionally neglected to put in any thoughts I had while reading papers on ACT-R, SNePS, or RTE, although relevant to Cognitive Science; I will save them for potential future projects.

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 2 – “Logic”

p.32. Abduction - differs from deductive inference in that we generate a hypothesis and then apply inference rules, but that hypothesis may be wrong or may be only one of many.

p.32. Inductive generalization - is like abduction except that the hypotheses are given probability values based on Bayes' Theorem.
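
As a quick reminder of the machinery this note points to (my own gloss, not Thagard's notation), Bayes' Theorem weighs a hypothesis H against evidence E:

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \]

so each candidate hypothesis produced by abduction can carry a probability rather than a bare yes-or-no.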

p.36. "pragmatic reasoning schemas" suggest to me that there are alternative ways that the brain performs formal logic. Also, p.37. has an alternative way of looking at why logic may not be followed, with "mental models" that restrict the problem solving methods that are used.

p.38. Very interesting how people deal with probability, even for me with the example here of the college-educated carpenter. It shows that people may be using a dumbed down version of probability theory in order to draw conclusions. "inductive reasoning appears to be based on something other than formal rules of probability theory."

p.39. What may be an interesting task is to develop a network of neurons or nodes that can induce and abduce.

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 3 – “Rules”

page.51. Chomsky's views on innate grammar are hard to believe. There may be a universal grammar to express knowledge and concepts, but I doubt that it has evolved in so short a time.

page.51. Chomsky is correct in saying that a grammar is unlikely to be a system of rules. I need to read his 2002 paper on that.

page.53. Holland's rule buckets are discussed here. I'm surprised that a genetic algorithm method, though not really a GA, is being featured in a CS book. I may have the good fortune of using GA knowledge in my project.

page.54. In the TicTacToe game there are specific locations and then there are relative patterns, like an empty space in a row. The ability to generalize from one row to the other is a problem itself. That knowledge that constructs that particular "row" concept IS the problem of cognition. Now if they had their program generate the rule that leads to this generalization, then that would be excellent.
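
To make concrete what generalizing over rows buys, here is a minimal sketch (my own toy, not the program Thagard describes) where one relative rule covers every row, column, and diagonal, instead of eight location-specific rules:

# One relative rule ("two of my marks plus an empty cell in ANY line")
# replaces eight location-specific rules. My own sketch, not Thagard's program.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_move(board, player):
    for line in LINES:
        marks = [board[i] for i in line]
        if marks.count(player) == 2 and marks.count(None) == 1:
            return line[marks.index(None)]
    return None

board = ['X', 'X', None, 'O', 'O', None, None, None, None]
print(winning_move(board, 'X'))   # -> 2: the rule generalized across locations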

page.55. Anderson's work on ACT is very interesting in that it performs sentence decomposition based on goals and subgoals. This may be important to look at for RTE and may also be the source of a potential project.

page.56. Nice! ACT-R is becoming a neurological model! They are implementing some of it in neural networks and also there is a match between parts of ACT-R and the brain itself. This is seemingly something that I may need to become more aware of. (Note to self: look into ACT-R)

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 4 – “Concepts”

page.59. Jerry Fodor (influenced by Chomsky) in 1975? thought that concepts are innate??? That is flat wrong: though the capacity for concepts in general may be innate, specific concepts need to be learned.

page.60. Minsky (1975) frames, Schank and Abelson (1977) scripts, David Rumelhart (1980) schemas (or what is typical of an object and not the essence of the object), Hilary Putnam (1975): "concepts should be thought of in terms of stereotypes and not in terms of defining conditions". Putnam realized something used in Pattern Classification techniques, in terms of distance to instances of a particular class.

page.64-65. Concept application (with spreading activation) can be used for problem solving including Planning (concept is matched by matching the situation and goals and then applying that concept's script), Decision making (decision made on match of a similar script, however, it is quick and easy but does not take into account the complex concerns about action and goals), Explanation (with a bit of inference), Learning (rules come from innate, experience, or other rules)...
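
Here is a minimal sketch of how I picture spreading activation driving concept application (my own toy, not Thagard's model): activation starts at the matched concept and flows along weighted links, lighting up the related concepts in the script.

# Toy spreading activation over a hand-made (invented) concept graph.
links = {
    "restaurant": [("menu", 0.8), ("waiter", 0.7), ("pay", 0.5)],
    "menu":       [("order", 0.9)],
    "waiter":     [("order", 0.6)],
    "order":      [("eat", 0.9)],
}

def spread(activation, steps=3, decay=0.5):
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for neighbor, weight in links.get(node, []):
                new[neighbor] = new.get(neighbor, 0.0) + act * weight * decay
        activation = new
    return activation

print(spread({"restaurant": 1.0}))   # 'order' and 'eat' become active via two routes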

page.67. The concept of an object (or basic physical objects) and of some particular objects (like faces) are innate, but the mechanisms for forming new concepts are also innate. So an AI system may need to have some concepts hard-coded in, as well as the way to form new ones. Does this mean the brain structure and the brain is pre-programmed in some way? Or does it just mean that the brain seems to acquire these concepts and learning strategies through the expression of the DNA? My instinct tells me that the latter is more likely: that the DNA is primarily concerned with creating a structure for the body, and that structure begins to dictate how learning works as well as base concepts.

page.68. New concepts can be created from existing concepts WITHOUT a real world example. This makes sense, since Unicorns do not exist in most Earthlings' experience, yet we all know what one is and what makes it special.

page.69. "Cognitive Grammar" where "syntactic structure is very closely tied in with the nature and meaning of concepts." This should be investigated (Taylor 2003)

page.69. "A concept's meaning is normally not given definitions in terms of other concepts" A concept are related to "each other and to the world".

page.70. Discussion of the classical view of a concept vs. the prototype view. Whereas the classical view has defining conditions that make an object a specific concept, the prototype view compares a prototypical object to another object where typical conditions are matched. This too is in line with distance measures in pattern classification techniques, which apparently is what some psychological experiments have determined about the human mind.

page.72. Damage to the brain suggests that concepts are located in specific areas. For example, one stroke victim could not name musical instruments, but had no problem naming other objects.

page.73. Interesting note: "Learning a complex discipline often requires intentional and active conceptual change". This perhaps means placing a concept in a different area of the brain - a movement of a concept.

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 5 – “Analogies”

100 billion neurons.

page.111. Local and distributed networks utilize "parallel constraint satisfaction".

My programming project will focus on recurrent networks, or networks that feed information back.

page.116. A concept is a pattern of network activation for connectionists.

page.117. "Syncrony", or same temporal pattern, is exactly what needs to be done. We will see if my project results come up with a similar solution.

page.117. Networks can express many more concepts than words can. E.g., the taste bud example. This is exactly why we need to use some sort of network for concept formation and then use rules to help train and modify these networks.

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 9 – “Brains”

page.151. "spike train" of a neuron is the pattern of firing. This is much different from a firing rate. So for a firing train of "0011001" would have the same firing rate as a train of "1100100". This has to be accounted for in the brain simulations.

page.151. A neuron can fire hundreds of times a second.

page.152. An excellent quote is here: "In general, a physical system is a computational system 'when its physical states can be seen as representing states of some other system where transitions between its states can be explained as operations on the representations'."

page.151. Maybe a good idea to look at Eliasmith and Anderson 2003 "for an elegant analysis of neural representation." Also, Maass and Bishop 1999 for an analysis "of the representational and computational capacities of spiking neurons".

page.154. Thinking about learning and the problem of "catastrophic interference" in neural networks, I was thinking about the timing aspect of the brain. Perhaps "frequency", like that of a radio tuner, allows the same neural cluster to contain more than one channel of information, which may be why we can use the same area of the brain for multiple things.

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 10 – “Emotions”

ITERA (Intuitive Thinking in Environmental Risk Appraisal) model of emotions - uses nodes to represent the emotions, and those nodes have an excitatory or inhibitory link to other concepts (Nerb and Spada 2001). But this is "unrealistic neurologically" since they use local units that are not like real neurons and therefore do not have a distributed nature, AND they "neglect the division of the brain into functional areas".

HOTCO (hot coherence) of Thagard 2000, 2003. Uses valences, like activations, to represent an emotional assessment of an idea, a thing, or a situation.

GAGE (named in honor of Phineas Gage, railroad worker accident victim) by Wagar and Thagard 2004. Uses "groups of spiking neurons to provide temporal coordination of the activities of different brain areas" and so is more like our brain. There is a "nucleus accumbens" that integrates the emotional and cognitive information from different parts of the brain. Suggests that (bi-directional) interaction between parts of the brain is what helps integrate the relations between cognitive and physiological aspects of emotion.

Donald Norman (2003): "Attractive things work better". Beauty and function are interconnected and make things easier to use.

Neurotransmitters, like dopamine, serotonin, and oxytocin, influence the connections between neurons in different parts of the brain. This is important since there are global effects that influence the entire brain that may not need to be conceptually processed and transmitted through the network, only the bloodstream.

"Emotional evaluations" contribute to many cognitive processes, including decision making. Emotional processing comes from the amygdala and prefrontal cortex.

Dualist (mind is separate from the brain) and functionalist (pure computation not dependent on an implementation) views are hard to reconcile with emotional processing. Brain-based materialism gains support, but needs to explain WHY we experience feelings.

Notes from: Thagard’s Mind: Introduction to Cognitive Science
Chapter 11 - "Consciousness"

Dualist view: Mind is separate from Body, so consciousness is spiritual.
Idealist view: Everything is mental, so consciousness is a property of everything in the universe.
Materialist view: Consciousness is a physical process, undefined at this point.
Functionalist views:
a. "Consciousness is a property of the functioning of a significantly complex computational system"
b. Consciousness is a "side-effect" of a biological system and not just its functions.
c. The "Mysterian" position that it is too complex to be figured out.

p.182. Sleep needed to restore levels of glycogen.  May be useful in the model of the cell for the project I am developing.

Notes from P. Smolensky: "Connectionism and the Language of Thought"
Cummins and Cummins, Minds, Brains, and Computers: Chapter 17

p.291. The description and examples of "coffee" using microfeatures shows quite clearly the difference between symbolic categorization and distributed connectionist categorization.

Newsgroup message I wrote on this topic:

I have read both Fodor's paper (though I will probably have to read it a few more times before I truly understand all he said) and Smolensky in the Cummins book.  For anyone that has not read these, it is probably best to read the Smolensky one first, then the Fodor, since the Fodor (chapter 16) paper is in response to Smolensky (chapter 17).

I am having a hard time figuring out how to understand why Fodor discounts connectionism, especially because it seems like we all actually have a working connectionist machine in our heads.  I believe his argument centers around his belief that SINCE some portions of thought are optimally symbolic-based (for example, reasoning), ALL of thought cannot be connectionist-based (not even visual or auditory?).  Or I may be wrong; maybe what I am reading is that Smolensky claims that ALL thought is connectionist-based in his model and that Fodor is disagreeing with that?

It seems today that we have a good understanding of the strengths and weaknesses of currently implemented ANNs.  Although the limitations of current ANNs, and how closely they match up with the neural network in our heads (see Churchland in Cummins, page 211), may imply there is functionality there that we have still not yet discovered.  We now understand from a mathematical perspective why an ANN works (for a non-mathematical understanding, an ANN is like finding the highest peak in a multi-dimensional landscape), though that may not be how our actual brain works; it may be unnecessary to know how our brain actually works, if the brain is just one implementation of the processing of cognition.
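
To make the landscape picture a little more concrete, here is a toy of my own (not from the readings): gradient ascent climbing a smooth two-dimensional surface, which is the same idea, sign-flipped, as the gradient descent used to train an ANN's weights.

# Hill-climbing on a surface with one peak at (1, -2); my own illustration.
def grad(f, x, y, h=1e-5):
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

f = lambda x, y: -(x - 1)**2 - (y + 2)**2    # the "landscape"
x, y, lr = 0.0, 0.0, 0.1
for _ in range(200):
    dx, dy = grad(f, x, y)
    x, y = x + lr * dx, y + lr * dy          # step uphill
print(round(x, 3), round(y, 3))              # -> approximately (1.0, -2.0)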

For some cognition tasks, it seems that a neural network would be a more natural choice.  Smolensky did an excellent job of showing, in his coffee example, how coffee can be represented by a series of features that are present (or not present), and that it depends on context.  Depending on CONTEXT is the key idea here.  This contextual deluge is handled very nicely by an ANN, but would make most knowledge-engineers quake at their keyboards if they needed to code all the different possible contexts in a recognition task in symbolic logic.  In addition, ANNs seem to take on the "poverty of the stimulus" quite easily (given enough samples, but not all samples), being able to generalize and categorize.
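
Smolensky's coffee point can be sketched in a few lines (my own toy, with invented microfeatures): "coffee" is whatever feature bundle is left once the container is subtracted, so the representation shifts with context instead of being one fixed symbol.

# "Coffee" as context-dependent microfeatures; the feature names are invented.
cup_scene = {"hot liquid", "brown", "porcelain surface", "handle"}
can_scene = {"brown", "granules", "metal surface", "lid"}

def coffee_in(scene, container_features):
    return scene - container_features        # subtract the container's features

print(coffee_in(cup_scene, {"porcelain surface", "handle"}))  # hot liquid + brown
print(coffee_in(can_scene, {"metal surface", "lid"}))         # granules + brown
# Two different bundles, both "coffee": the representation depends on context.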

For other tasks, like reasoning, although this can probably be implemented in an ANN, it seems very unnecessary.  Our reasoning tasks seem to have neatly evolved over time to quite "nearly" match that of first order logic (FOL).  Can FOL be implemented in an ANN? Probably, but it seems like a connectionist approach here (for reasoning) would be purely an implementation, and would not address the essence of such cognition.

So I guess my confusion comes down to, "why not both?"

Notes from Terry Winograd: "A Procedural Model of Language Understanding"
Cummins and Cummins, Minds, Brains, and Computers: Chapter 7

page 95. In sentence understanding, "we are always in context, and in that context we make use of what has gone on to help interpret what is coming". Sounds like background knowledge is needed as well as current context knowledge representation.

p.108. In terms of NLP, we have here the obsession to make grammar calculable and correct. In the way humans use language, there is no guarantee that this is true. Decomposing sentences, especially something like "Can you give me a match?", gives the wrong idea. The semantic meaning of such a sentence does depend on context, but also very much on the way that phrase is used. There is a feeling to language, in how it feels when it is heard and spoken. A sentence feels right or rings true when we hear it. Are we following a grammar, or is it just that a grammar can be made from the sentences we speak? I would propose that grammar making is not the route to go, but instead creating a "feels right" function based on the multitude of examples we get from reading and hearing. This can be applied directly to noun phrases with optional words (or even meaningless phrases), like "Colorless green ideas sleep furiously." This sentence feels right, but has no meaning. That just implies that sentences can feel right, but have no semantic cohesion.
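
As a sketch of what such a "feels right" function might look like (entirely my own invention, not Winograd's), one could score a sentence by how familiar its adjacent word pairs are across a pile of example sentences:

from collections import Counter

# A hypothetical "feels right" scorer built from example sentences.
corpus = ["the cat sat on the mat", "the dog sat on the rug",
          "a cat slept on the couch"]
pairs = Counter(p for s in corpus for p in zip(s.split(), s.split()[1:]))

def feels_right(sentence):
    words = sentence.split()
    familiar = sum(1 for p in zip(words, words[1:]) if pairs[p] > 0)
    return familiar / max(len(words) - 1, 1)   # fraction of familiar transitions

print(feels_right("the dog slept on the mat"))   # high: flows like the examples
print(feels_right("mat the on slept dog the"))   # low: same words, wrong flow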

p.112. "meaningful connection". This is certainly important. Much like an image is understood from its component pieces, and words and sentences are understood from how they feel together, semantic meaning may also be understood from its component pieces. If there is a meaningful connection between semantic concepts, then that helps disambiguate both syntax and other semantic concepts from following or preceding sentences.

p.113. Winograd acknowledges many of the deficiencies of the techniques that he describes using in Block World. It is like "attempting to model the behavior of a complex system by using unrelated mathematical formulas whose results are a general approximation to its output." We create a model that does not reflect the real underlying processes.

Assorted notes from various papers:

From "Mind and Machines" - Hilary Putnam

p.22. Essentially: is the mind a computing machine? If so, then it can be simulated by a Turing machine.

From "Semantic Engines" - John Haugeland

Basically a history of how we deal with intelligence and reasoning about what can or cannot be intelligent.

The one thing that stood out is the discussion of what "X" is missing from our understanding of intelligence. Consciousness, original intentionality and caring, are 3 different candidates for what "X" is.

From "How minds can be computational systems" – Dr. William J. Rapaport http://www.cse.buffalo.edu/~rapaport/Papers/jetai-sspp98.pdf Pg 3. Reading section 1 on computability got me thinking about a potential mind experiment. If we imagined an unlimited memory machine which would just perform lookup functions, a trivial computational matter, could we then devise a way to compute cognition based on the current state and the current inputs.

Notes from §4.2 "The Science of Memory"
Kolak, Daniel; Hirstein, William; Mandik, Peter; & Waskan, Jonathan (2006), Cognitive Science: An Introduction to Mind and Brain (New York: Routledge): 126-136.

Procedural vs. Declarative.

Maybe some good experiments implementing short-term memory - the "phonological loop", the "visuospatial sketchpad".

Studying meaning of something increases recall.

The consolidation process of long-term memory may have the hippocampus store information for up to three years in the areas of initial activation, until it can be consolidated somewhere else.  This is evidenced by retrograde amnesia that can go back for 3 years.

Sleep may BE this consolidation process.  Connectionist models may shed light on this? Check that out.

Episodic and Semantic memory may have two different storage processes handling them.

Interesting: Hebb's rule cannot be applied to all connections; there has to be a specific reason for strengthening a connection, since not all paired firings produce strengthening.  An associated affective response is one measure of significance in strengthening the connection, but what does this mean?
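
The gating idea in that note can be written down directly (my own sketch, not from the text): a plain Hebbian update scaled by a "significance" signal, so paired firings strengthen a connection only when something marks them as mattering.

# Hebbian update gated by significance (e.g. an affective response signal).
def hebbian_update(w, pre, post, significance, lr=0.1):
    return w + lr * pre * post * significance   # no significance, no strengthening

w = 0.2
w = hebbian_update(w, pre=1.0, post=1.0, significance=0.0)   # paired but insignificant
print(w)   # 0.2 - unchanged
w = hebbian_update(w, pre=1.0, post=1.0, significance=1.0)   # paired AND significant
print(w)   # 0.3 - connection strengthened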

Notes from “MITECS: Cognitive Modeling, Connectionist”
http://cognet.mit.edu/library/erefs/mitecs/mcclelland.html

"the acquisition of conceptual representations of concepts (Rumelhart and Todd 1993). A crucial aspect of this latter work is the demonstration that connectionist models trained with back propagation can learn what basis to use for representations of concepts, so that similarity based generalization can be based on deep or structural rather than superficial aspects of similarity (Hinton 1989; see McClelland 1994 for discussion)"

More than just a pattern matcher, it represents "things" in terms of features that may even be hidden to us as humans.  Sound familiar? Hard to see something or understand how we are thinking, if we could never see these hidden features anyway.

In Language: constraint-satisfaction seems to be the predominant theory.  I wonder if this would hold true for visual imagery.

For Reasoning: maybe the connectionist model fails "perhaps in part because higher level cognition often has a temporally extended character, not easily captured in a single settling of a network to an attractor state".  Though there are RECURRENT NETWORKS to model temporally extended aspects of cognition - basically networks that do not just do feedforward processing.
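
The recurrent idea fits in a few lines (my own sketch): the network's state feeds back into itself, so a single input pulse produces activity extended over several time steps rather than one settling.

import math

# One recurrent unit: the next state depends on the input AND the previous state.
def step(state, inp, w_rec=0.8, w_in=1.0):
    return math.tanh(w_rec * state + w_in * inp)

state = 0.0
for t, inp in enumerate([1.0, 0.0, 0.0, 0.0]):   # one pulse, then silence
    state = step(state, inp)
    print(t, round(state, 3))   # activity persists and decays across steps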

Thagard does this with reasoning by analogy.

Notes from “MITECS: Rules and Representation”
http://cognet.mit.edu/library/erefs/mitecs/horgan.html

GOFAI - Classical AI - The brain is a "syntactic engine" using rules to modify structure. Rules and representations for cognitive thought.

Connectionist - Since computation is distributed, there is no syntax. Nodes themselves do not have any representational context.

Notes from “MITECS: Modularity of Mind”
http://cognet.mit.edu/library/erefs/mitecs/karmiloff-smith.html

From Fodor: Independently functioning modules in the brain, which are genetically specified, and which process input from other modules and send output.

Damage to a specific module that seems related to another module's function may not actually involve the same module.  A visual impairment for object recognition may not hamper face recognition.  This seems to imply that modules can overlap in functionality, not that modularity (or non-interaction between modules at an inter-processing stage) is invalid.

So basically, it is now thought that beyond the pure Fodorian view, where the brain is genetically prescribed into modules, there is room for epigenetic development - the developing of the brain into modules over time.  This is especially true, in my opinion, due to cases where the brain has areas that respecialize.  Genetics, once again, defines boundaries and proclivities, not absolutes.

Nice quote: "The long period of human postnatal cortical development and the considerable plasticity it displays suggest that progressive modularization may arise simply as a consequence of the developmental process."

Notes from “MITECS: Neurosciences”
http://cognet.mit.edu/library/erefs/mitecs/neurointro.html

In general, cognitive neuroscience is more about how we humans are implemented in a neuronal network and not really about what the essence of cognition is (nor what is necessary to implement such a thing). This is more a study of the human mind-machine, not of what is necessary to implement a cognition.

They summarized cognitive neuroscience pretty nicely; here is an overview. Cognitive neuroscience is a "science of information processing":
- how info is acquired (sensation)
- interpreted to confer meaning (perception and recognition)
- stored or modified (learning and memory)
- thinking and consciousness
- predict future state (decision making)
- guide behavior (motor control)
- communicate (language)

This article also describes the history of neuroscience; specifically of note is that the main focus of the initial efforts was to locate where in the brain specific psychological functions were located. Broca, of "Broca's Brain" by Sagan (which I read one summer), was one of the pioneers who located the "speech center".

There was actually a "localizationist and antilocalizationist" debate helped along by the "lesion method". Determining location of faculty through damage to brain areas.

Ah, interesting: "Camillo Golgi" discovered how to stain neurons with silver nitrate. I am guessing that the Golgi bodies in the brain are named after him. Around this time the neuron doctrine came about and neurons were determined to be the active elements in the brain.

Mapping the brain came in two steps, one based on the regions of delineation, the other on tract tracing of connections. "cytoarchitectonics and neuroanatomical tract tracing"

"Associationism" focuses on how meaning is extracted from the sensory impulses. Locke states that "meaning is born from an association of 'ideas' of which sensation was the primary source. This is not too different from the way we hope to think about how an ANN operates. Basically a big lookup table associating features with a class. But what would be nice is that the meaning of the "class" in one ANN is really the association of features to another ANN. Multiple layers of ANN were not discussed in pattern classification, but I believe that may be the next step. It basically would be an association of the complexity of one ANN to the input of another ANN. For example, what does it truly mean to be of a class, unless we have some innate knowledge in our brain about what that class is (and there may be, i.e. dangerous plants or animals) then really what is happening while learning is that a network learns to group features (from input) together in terms of the output of another

17

Page 18: Cog Sci Journal - Oct 2007 - In Docx format - UB Computer Science ...

network. Thus George Berkeley's "theory of objects" may not be that far off to exactly how our brain acutally implements such a thing as a concept.
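
A minimal sketch of that "ANN feeding an ANN" idea (my own, with made-up weights): the first network turns raw input into a feature code, and the second network defines its classes in terms of that code rather than the raw input.

# Two stacked threshold layers; the second "sees" only the first's output.
def layer(weights, x):
    return [1.0 if sum(w * xi for w, xi in zip(row, x)) > 0 else 0.0
            for row in weights]

w_features = [[1, 1, -1], [-1, 1, 1]]   # raw input -> hidden feature code
w_classes  = [[1, -1], [-1, 1]]         # feature code -> class scores

x = [1.0, 1.0, 0.0]
code = layer(w_features, x)             # what the first network makes of the input
print(code, layer(w_classes, code))     # [1.0, 0.0] [1.0, 0.0]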

Of course on the opposite side, Gestalt theorists believed that "experience of objects is not reducible to elemental sensations and relationships between them." I would suggest that this helps the innateness argument, which I would like to try to get away from. There must be a system that could be created where a base reference to something would be the sum of other things that are learned or perceived. The overall idea of Gestalt is probably correct, but how it is suggested it is implemented falls dependent (perhaps) on a specific implementation of cognition (humans).

Notes from a presentation: 9/19/07
Center for Cognitive Science Colloquium
Juergen Bohnemeyer from the Department of Linguistics, University at Buffalo
Presentation: “How to Hammer a Shirt Apart (and Talk About It)”

Though I am mostly unfamiliar with the methods of linguists and with what they find interesting, I did understand most of what was being said. What I took out of this presentation was the realization, and I think part of the presenter’s conclusion, that an action that has no direct expression in a language will be “grouped” by the brain with similar types of actions and then expressed in language with some verb from that group.

In Juergen’s experiment, he took an action that has no real direct easy way to be expressed, and researched how people express it in language. “Hammering” a shirt apart causes the shirt to rip or tear. The hammer, however, is used like a hammer. So the problem is: how is that expressed - by way of what the hammer is doing, or by way of what is happening to the state of the shirt?

Of course, my view is much simpler than the view of the linguist. I looked at this in terms of having categories of verbs and then envisioning a Neural Network or another type of pattern classifier just classifying the action. Given the appropriate input, or features - in this case context, players, and actions - this is actually a simple implementation. I was attempting to see why it was so fascinating to a linguist and could only come up with the conclusion that he was attempting to understand this in terms of the complexity of language, of which my knowledge falls short.

As a last note, I had a wonderful conversation with my dad about this and he expressed interest in coming to one of these presentations as he felt it was quite mentally stimulating. My father focused on the idea that indeed how does a human come up with a verb specifically describing such a unique incident. Indeed, we cannot have a verb for every single possible combination of situation, actors, and actions. And I agree with him that it is curious that some words are directly understood as actions and other words need to be mentally massaged into the meaning they are trying to convey. Interesting problem.

Notes from a presentation: 10/13/07
UB Graduate Conference on Logic
Stewart Shapiro, Ohio State University
Presentation: "We hold these truths to be self-evident. But what do we mean by that?"

Dr. Shapiro had a thought provoking presentation from his paper on the topic of what we inherently believe to be true or self-evident. Though I can clarify that a bit by saying he was concerned with “truths” that may not have any proof, but yet we believe them to be true and so rely on their truth unknowingly. Although I missed the first half of his presentation, which took place the day before, he continued on the framework he had built already with the audience and took it one step further. He showed how it became paradoxical if we relied on the principle of self-evidence to prove statements. Once something is understood to be self-evident, it no longer can be used in that way.

Now I must say, I had issue with some of what he was saying, which probably means I did not get the depth of the topic. I discussed it with Albert Goldfain afterwards, and explained my position, basically referencing Gödel’s proof that there are statements in a system that cannot be proven from other statements. Albert attempted to tell me why that is not what self-evident implies, since perhaps self-evident items are true, just not proven yet.

From the standpoint of a programmer, I did not see too much trouble in representing statements of this nature in a system like SNePS. A belief is a belief, so a self-evident truth can be treated as just a belief and then reasoned with like any belief. So from my perspective, I’m sure I missed something important in his argument.

Notes from a presentation: 10/17/07
Center for Cognitive Science Colloquium
Werner Ceusters, Director, Ontology Research Group
Presentation: “Referent Tracking: Research Topics and Applications”

I found Dr. Ceusters’ presentation very direct and to the point, though I believe it misses out on some of the fundamentality of what Ontology gives us – a way to categorize items based on properties or features of that item. Let me explain: he advocates Referent Tracking, or assigning a number to each and every item in the universe (and indeed each and every statement or term, or maybe even every feature). Now this might be good for tracking objects in a closed and well-defined system, but from an implementation standpoint the problem becomes intractable.

My personal viewpoint is that in such closed or well-defined systems, this is a great concept, and in fact I believe it is used extensively by database management systems and programmers alike. So making the leap that everything should be given an IUI may be skipping a step in the understanding of a problem and ultimately not at all good for simplifying the implementation of the problem. Now I may be wrong, or perhaps I did not understand why IUI’s are the cat’s meow, but it seems to me that labeling an item with a number provides “just another label”, and that label does not include any semantic information whatsoever. I would have to read more about this before I could really critique it, but these are my initial impressions.

Notes from a presentation: 9/29/07
Workshop on the Philosophy of Biology
David Hershenov, University at Buffalo
Presentation: “Organisms, Brains and their Parts”

The brain is broken up into areas which serve possible functional differences (also modularity). More than 100 neuron types. Distributed nature of intelligence makes it hard to determine where function is located.

Michael Levin, CUNY
Presentation: “Innateness”

Very interesting, discussing essentially the nature-nurture argument. Levin comes down on the nature side, attempting to show that the variation in the organism from the environment averages out over the population. Genetic variability between races does have an eventual outcome and the environment plays a very small part on average.

I do not necessarily disagree with all that he said, but simply we must remember that genetics places limits on our body and mind but does not overshadow the effects that a learning machine can have. IQ may initially be genetically bound, but with practice and learning this can be modified within a range. A biological cause of super IQ (i.e. more intraconnectedness in the brain) may indeed rule in and rule out what is and what is not possible, but that does not preclude that a brain can learn and can be trained toward a normal IQ. And it certainly does not rule out technological influence from medicines and procedures that can help affect the resulting expression of genes.

Genes do determine much, but when it comes to the human mind, they may dictate certain behaviors in pathological cases. And left uninfluenced (or in a similar environment) genes can express themselves in humans identically (as in the case of the 79 identical twins separated unknowingly at birth). The overwhelming majority of them had identical jobs, hobbies, etc, but there was a significant number that did not. The commonality I think was a middle-class existence causing the behaviors to exert themselves given relatively similar pressures from the environment (in this case middle-class pressures).

Robin Andreasen, University of Delaware
Presentation: “Is Race a Scientifically Valuable Biomedical Research Variable?”

Dr. Andreasen’s main idea is that genetic variability within a race is enough to discount it in research. Instead, we should focus on the genes that are present that preside over the medical condition or resulting expression. In stark contrast to Levin’s ideas, she purports that race, if not useless already in most research, will become more and more uncorrelated, over time, with the things being researched.

I tend to agree with her. The correlation of the gene for skin type or the gene for eye slant to something like heart disease or cancer is probably very low. Her research suggests this, and only in very few cases that we know of, like sickle-cell or Leiden Factor, are conditions directly attributed to the region where a race evolved. Her secondary point is that if we are going to take race into account, we really need to take geographical region into account as well, or else we will lose important data associated with race.

Notes from a presentation: 10/17/07
Center for Cognitive Science Colloquium
James R. Sawusch, University at Buffalo
Presentation: “Signal Variability and Perceptual Constancy in Speech: How Listeners Accommodate Variation in Speaking Rate”

I enjoyed this presentation very much, but I was again not exactly sure why the problem is a problem. The topic is how the brain understands words without parsing them, and how there can be a misrecognition of a word if it is spoken in an environment which has noise, or which contains a specific noise coinciding with the word being recognized. This also includes variation in speech rate, which adds to the misrecognition.

In pattern classification, this is a very simple problem, and totally understood. Noise is expected because of incomplete training patterns, just like in this problem. So in a noisy environment, using a classifier, speech recognition can have a miscategorization. A category is essentially a large multi-dimensional bounded area, not necessarily contiguous, and as such a recognized word may have many of the micro-features contained in that word, but have one micro-feature missed or sensed incorrectly, thus placing the solution just outside the boundary of that category.
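
My own toy rendering of that picture: a category as a region around a prototype in feature space, where corrupting a single micro-feature pushes the input just across the boundary into a neighboring category.

# Nearest-prototype classification; the words and features are invented.
def nearest(features, prototypes):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda w: dist(features, prototypes[w]))

prototypes = {"bat": [1.0, 0.0, 0.9], "pat": [0.0, 0.0, 0.9]}

clean = [0.9, 0.1, 0.9]              # well inside "bat" territory
noisy = [0.4, 0.1, 0.9]              # one micro-feature degraded by noise
print(nearest(clean, prototypes))    # bat
print(nearest(noisy, prototypes))    # pat - noise pushed it across the boundary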

I would expect, as the presenter has proven, that such things exist, and they do exist in our language. For a linguist this may be a very interesting and perplexing problem, but the neural network nature of our brain makes this a well understood problem mathematically. However, if there were context involved in this, that is still something that we do not quite understand from a pattern classification standpoint. If I were to suggest a further project on this topic, I would recommend focusing on the effect of context on such word recognition.

Notes from a presentation: 9/12/07
Cognitive Science Colloquium
J. Leo van Hemmen, “Neurobiology of Snake Infrared Vision: How It Can Be Precise Despite Bad Optics”

Notes from a presentation: 10/16/07
Susan Udin, Department of Physiology and Biophysics

I decided to append this presentation to Dr. van Hemmen’s presentation as they have a similar message and technique. I was actually inspired last year by Dr. Udin’s lecture at the Cognitive Science colloquium and was happy to see that we had her again in class. This year she went into a bit more detail and showed us how the frog brain processes visual stimulus, even down to the detail of the field of activation that a particular part of the brain had. Her videos were quite illuminating and telling about the actions of the particular layers of the visual cortex. I did not know that a point in the brain would only process a small area of view, nor that the deeper you go in the brain’s layers, the more the cells differ in what they fire for (i.e., right-to-left moving objects or even stationary objects). I found this very fascinating, since it shows that the brain is definitely distributed in its processing of visual images. I would have liked to see even deeper, to see when signals start coalescing and forming perhaps more semantic signals (or even if this is the case; perhaps all of visual processing is distributed?).

I thought Udin’s work encompassed van Hemmen’s discoveries, but I will make a quick comment about the snake. The IR system of processing seems to lack detail (resolution) in the images, but this is not completely the case. Just as, walking past a door that is nearly completely shut but for a small crack, we can get a working and complete image of the room inside, so would the IR sensory apparatus be able to get a detailed image. I see no reason why this could not be the case. An image could be built up from the borders found from a small movement of the head; perhaps this is why snakes’ heads tend to rhythmically move back and forth - to use this feature of a visual sensory network to build up a complete picture. The hypnotizing sway of a cobra may be nothing more than a way to build an IR image of the scene.

One last thing that Udin brought up at the end of her lecture this year was the ability of migrating neuron axons to move to correlated neurons that are firing at the same time, somewhere else in the neural mesh. This is a sort of learning behavior at the neuronal layer that is mostly non-Hebbian in nature. From what she told me quickly after class, the local interactions between the cells at a chemical level can influence and cause this sort of migration. Usually we think of a network learning by changing the “weights” of a synapse, but actually changing the structure of the network is a concept that is not well understood yet. Hebbian learning can bring us very far in the training of a network, but this seems to be an important and overlooked feature of network learning that may not be present in any of our ANN models.

Notes on: Wenner, Melinda (2007), "Forgetting to Remember", p. 13.
http://www.sciammind.com/article.cfm?articleID=494524A5-E7F2-99DF-3451DE1F8ABA0FD3

Aside: I have done a few experiments and tracked Snow’s learning capabilities. I found that she sometimes will learn something, like stacking a ring on top of another ring with two hands, then forget how to do it, then reacquire it a few days later. Another task was walking: she walked at about 9.5 months, and learned how to crawl only after that. Only after she “understood” (maybe a better word for this exists) that she could get from place to place did she attempt to learn to crawl. Before she walked, she could not come up with the plan of movement, but after walking, she then found a need to crawl to an item (like a couch) to stand up.

This and another article talked about how sleep may play an important role in remembering. I would also include dreaming in this. Dreaming may be a byproduct of this process, though in some cases dreaming contains purpose and intent.

In my experience, there are three types of dreams. One type is when the dreamer does not know they are in a dream. A second type is when the dreamer knows they are dreaming, but has no control over the dream. A third type is when the dreamer knows they are dreaming and also has control over what is occurring in the dream (either how they respond to events or even creating those events). I have often found that stimulus from the waking world does interact with my dream, especially noises or odd and painful sleeping positions.

I’ll get a little behaviorist in my analysis here, but what I find is happening is that the brain in a dream may actually be strengthening neural connections for stimulus-response type events. We learn how to do something a lot of the time by equating what works (or what our desired outcome is) to what we do (our action). The act of hearing a noise in another room prompts us to investigate, just as it would in our dream. Whereas in the waking world we may find the actual cause, our dream “creates” the cause of the sound in the other room when we get there (i.e. the cat knocked over a glass), which would increase the likelihood that we attribute more belief to our predictive ability that a cause created an event (i.e. in the waking world, we will hear that sound and perhaps assume a cat was the culprit more readily).

Dreams may help us strengthen our ability to predict as well as to act. Now how does this relate to forgetting? I am not sure exactly how I got onto dreaming, but in ANN classification, it is usually the case that a training set causes continuous reweighting of the network, and so misclassification continuously reoccurs until all training cases are learned. In a dynamic learning system, like the brain, I would therefore assume that forgetting is essential and very common.
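
A sketch of why forgetting falls out of continuous reweighting (my own toy, a single perceptron unit): updates that fit a new, conflicting case disturb the weights that carried an old one.

# Sequential training on conflicting cases overwrites earlier learning.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train_on(w, x, label, lr=0.5):
    err = label - predict(w, x)
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
old_case = ([1.0, 0.0], 1)
w = train_on(w, *old_case)
print(predict(w, old_case[0]))       # 1 - the old case is learned
for _ in range(3):                   # now train on a conflicting new case
    w = train_on(w, [1.0, 1.0], 0)
print(predict(w, old_case[0]))       # 0 - the old case has been "forgotten"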

Notes on: Johnson, George (2007), "An Oracle: Part Man, Part Machine", NY Times Week in Review (23 September): WK1, WK4
http://query.nytimes.com/gst/fullpage.html?res=9800E5D6103DF930A1575AC0A9619C8B63

Quick note on this, though maybe not a critical point since the article was more fanciful than substantive. The melding of man and machine from an information-processing standpoint is really just using technology to improve methods of communication. At this point you would be dealing with a super human, or a network of humans, and not really much machine processing. Though the computer and the internet would be a part of the “human ant colony”, they are not an essential part, since we could reduce any super information processing colony to the interaction between agents, in this case humans.

Rapaport, William J. (1990), "Computer Processes and Virtual Persons: Comments on Cole's 'Artificial Intelligence and Personal Identity'", Technical Report 90-13 (Buffalo: SUNY Buffalo Department of Computer Science).
http://www.cse.buffalo.edu/~rapaport/Papers/cole.tr.17my90.pdf

Rapaport, William J. (1993b), "Because Mere Calculating Isn't Thinking: Comments on Hauser's 'Why Isn't My Pocket Calculator a Thinking Thing?'", Minds and Machines 3: 11-20.
http://www.cse.buffalo.edu/~rapaport/Papers/joint.pdf
http://members.aol.com/lshauser/wimpcatt.html

I have read these papers before in KRR and, as an aside, I have made it a rule never to argue with a philosopher! They argue more with questions than with answers.

However, I will comment on what I would say are base assumptions that we all believe but maybe cannot prove. First of all, let us clarify the problem: what the arguments are concerned with is whether “human-like” thinking is created here. Are the Searle-in-the-room “machine” and CAL exhibiting “human-like” thinking? Both authors (and forgive my transgressions before I make them) seem to be more concerned with whether ANY type of cognition is occurring (or can happen). I would have to say an obvious “yes and no”. Yes, some type of cognition is occurring, but no, no human-style cognition is occurring in CAL and no human-style consciousness arises from Searle-in-a-box.

I think we need to come up with a system of degrees of intelligence and then define how to be classified by that system. From a computational standpoint, a calculator can calculate and a human brain can calculate, but we can see intuitively a difference in the two. What is that difference and what makes one calculation be cognitive and the other not-so-much? That is the question that both authors seem to be arguing over, but the argument really should be held over the clarification of degree.

Maybe this might make a nice project: determining levels of consciousness and attributing reasons to why we would make those distinctions, as well as determining levels of cognition and attributing what complexity in the calculation is most relevant and telling. Note to self: reread Hauser’s response to Rapaport again.

Notes from: Koch, Christof; & Greenfield, Susan (2007), "How Does Consciousness Happen?", Scientific American (October): 76-83.
http://www.cse.buffalo.edu/~rapaport/Papers/Papers.by.Others/koch-greenfield07-Csness-sciam.pdf

This is the article I emailed to Dr. Rapaport. Our discussion of consciousness was moved to a later date, but I think that it was a good idea for me to read a few articles on this.

The distributed nature of information and reasoning in the brain is most likely a result of the structure of the brain. I see no reason why cognition must be distributed except for how it is implemented. However, since we are talking about human consciousness, we must deal with how the brain is responding.

I think that perhaps the two authors of this article are examining the activity of the brain and attributing that to consciousness. This would imply that they might suppose that without these activation patterns in the brain, consciousness could not arise, whereas I would posit the question: “if our brain was designed differently, would consciousness not arise?” That is to say, if we did not have distributed cognition, then consciousness would not occur, based on these findings.

So in the general case, can consciousness occur in ANY implementation of cognition? Is consciousness a byproduct of cognition in general, or are necessary implementation techniques required to produce it? I would theorize that consciousness can indeed be the result of plain old cognitive activity and may be a necessary condition for cognition to occur.

Notes from: Johnson, George (2007), "Alex Wanted a Cracker, but Did He *Want* One?", NY Times Week in Review (16 September): 1WK, 4WK, online, with a YouTube video of Alex, at: http://www.nytimes.com/2007/09/16/weekinreview/16john.html

and

Notes from: Klinkenborg, Verlyn (2007), "Alex the Parrot", NY Times (12 September), p. A20, online at: http://www.nytimes.com/2007/09/12/opinion/12wed4.html

The problem I have always had, and I think many people have had, with the likes of Alex and Koko (the gorilla that uses sign language and a board of icons to communicate) is proving that language equates to cognition and consciousness. I do not necessarily think that experiments in communication at a simple level can prove that. We can see from the likes of Eliza and other IF-THEN-ELSE computer programs that we as humans are likely to attribute consciousness and reason where there is no real cognition going on. We can be fooled.

I do, though, believe that animals have a low level of reasoning and planning. Simple communication, from these and other animals, may be stimulus-response based, but having a raven “discover” the use of a tool to capture some food to eat could only have been done by a rational, planning, mind. Trial and error??? Perhaps, but even trial and error can be intelligent depending on how it is carried out.

A possible further insight may lie in the fact that WE are not that different from animals in our thought processes. Perhaps we ARE doing some sort of sophisticated or more complex form of stimulus-response. Perhaps what we should take from our examination of animal cognition is the possibility that human thought may not be explainable at this moment because we are looking at the problem as too complex, whereas it is actually something simple. Some simple set of rules, which we can see in action in animals, but brought up a notch by one or two other simple rules which we have not yet discovered. Perhaps what we are missing is that simple theory, maybe as simple as a Pavlovian pairing of stimulus to response, and the reason why we do not see it is that we have not trained our brain yet to see it.

Notes from Brockman, John (ed.), What We Believe but Cannot Prove: Today's Leading Thinkers on Science in the Age of Certainty (New York: HarperCollins): 167-168.

What makes humans uniquely smart?

Here's my best guess: We alone evolved a simple computational trick with far-reaching implications for every aspect of our life, from language and mathematics to art, music, and morality. The trick: the ability to take as input any set of discrete entities and recombine them into an infinite variety of meaningful expressions.

Thus we take meaningless phonemes and combine them into words, words into phrases, and phrases into Shakespeare. We take meaningless strokes of paint and combine them into shapes, shapes into flowers, and flowers into Monet's water lilies. And we take meaningless actions and combine them into action sequences, sequences into events, and events into homicide and heroic rescues.

I'll go one step further: I bet that when we discover (intelligent) life on other planets, we'll find that although the materials may be different for running the computation, they will create open-ended systems of expression by means of the same trick, thereby giving birth to the process of universal computation.

The insight here, I believe, is that intelligence creates meaning out of what has no inherent meaning. I slightly disagree, in that I think that evolution has provided us with some innateness that allows the meaning of things to be directly mapped physically into the brain and then manipulated in a physical (signal) way to continue the longevity and implication of that meaning.

It can be shown that the brain can map a physical object to the same “area” of the brain in every human that exists. The meaning of “an object” therefore is where it activates in our head. You might say that the purpose of the brain is to map external sensory information to the proper place in our brain.

Now I say “proper place”, but what I mean is that over time, evolutionary time, creatures that mapped the object to the proper place survived, and the ones that did not, or did not do so perfectly, died out. So I fundamentally disagree: the idea that senses are “meaningless” fails to take into consideration the context where they seem to have evolved to be meaningful. Evolution provided the meaning, slowly and over time, and recombination magnified and generalized the effect of that meaning and how it interacts in the brain.

Notes from: Henig, Robin Marantz (2007), "The Real Transformers", NY Times Magazine (29 July)
http://www.nytimes.com/2007/07/29/magazine/29robots-t.htm

Excellent article on the recent history and advances in robotics, but more importantly “cognitive robotics” though the author did not use that term. Save this as a good reference for various types of emotion based architectures and physical system learning architectures.

Main point here, and one I well agree with, is the idea that to be more human it might be best if we start out trying to build a human-like platform. This is more an exploratory way of discovery, instead of a philosophical way. Introspection gets us so far, but perhaps addressing the real world problems of simple operation will provide insights in how the more complicated cognition is done.

For example, I think that using ANNs as the primary layer in a cognitive machine is ideal. Why classify with intricate rules, when we can use the power inherent in the ANN to automatically classify objects, ideas, qualia? However, implementing higher-level cognition is probably actually a layer of complexity (a layer that ANNs can implement, but for which an ANN is not a necessary condition) which implements a “flawed” system of logic. I say flawed, because there is plenty of evidence that humans do not think soundly; however, we approximate a logical framework using the building blocks which have emerged out of the complex web of networks we have in our brain.

Neural networks can get us so far, taking care of many of the tasks of plain classification, but how we understand this classification and can reason with it is a different problem. A framework of logic and reasoning is built up from the complexity of the network, and that is something we can address. But I’ve found that things like classification (ontology) can still be improved upon and optimized in a perfect way (though we do not have an ideal way of doing this yet, there have been many tries). Whereas an ANN is not perfect, it can get the job done; and whereas human cognition is not perfect, we can still see that we can get the job done without formalizing cognition. Where to start? Maybe with the robots described in this article.
