LIP6-09/07/2007 2
Agenda
• Introduction
• Function of Consciousness
• Control Systems Fundamentals
• Internal Models
• The Simulator
• Experiments and Results
• Related Work
• Future Work
• Conclusions
• References
Introduction
• This presentation is about:
– A software platform for research into Artificial Consciousness;
– The conceptual foundation for this research.
What Is Consciousness?
• It is perhaps too early to try to define consciousness.
• A better course is to try to understand its added value to behavior, and then to define consciousness as a consequence of that understanding.
• Consciousness must have a physical influence (direct or indirect) on the environment; otherwise it would not be detectable by evolution and selected as a trait for survival.
A Function of Consciousness
Consciousness allows for flexibility of action/behavior
• How can the brain make successful limb and body movements?
• Environmental conditions are constantly changing and have to be adapted to.
Control Systems Concepts (1/6)
Open Loop Control (Feed Forward Control Systems)
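A feed-forward controller computes its input from the reference and a plant model alone, without ever measuring the result. A minimal sketch, assuming a toy plant that is a pure gain y = K·u, known exactly (all values illustrative):

```python
# Open-loop (feed-forward) control sketch.
# Assumption: the plant is a simple gain, y = K * u, and K is known exactly.
K = 2.0                      # assumed plant gain

def feedforward(reference):
    """Compute the control input from the reference alone (no feedback)."""
    return reference / K     # invert the plant model

reference = 10.0
u = feedforward(reference)
y = K * u                    # plant response
print(y)                     # tracks the reference only because the model is exact
```

Note that any mismatch between the assumed gain K and the real plant produces an error the controller can never see, which is the core limitation of open-loop control.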
Control Systems Concepts (2/6)
Closed Loop Control (Feedback Control Systems)
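A feedback controller instead acts on the measured error between setpoint and output. A minimal sketch with a proportional controller and an illustrative first-order plant (gains and time step are arbitrary choices for the example):

```python
# Closed-loop (feedback) control sketch: a proportional controller drives
# a first-order plant toward the setpoint by acting on the measured error.
setpoint, y = 1.0, 0.0
Kp, a, dt = 2.0, 1.0, 0.01           # controller gain, plant pole, time step
for _ in range(2000):
    error = setpoint - y             # feedback: compare measurement to goal
    u = Kp * error
    y += dt * (-a * y + u)           # Euler step of dy/dt = -a*y + u
# A pure proportional loop settles at Kp/(a+Kp), not exactly the setpoint.
print(round(y, 3))
```

Unlike the open-loop case, this loop corrects for disturbances and model error automatically, at the price of a residual steady-state offset for a pure proportional gain.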
Control Systems Concepts (3/6)
System Identification
- Black-boxes
- Gray-boxes, and
- White-boxes
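System identification fits a model to observed input/output data; in the black-box case nothing about the plant's internal structure is assumed. A sketch with a hypothetical data set and a least-squares fit of a single gain:

```python
# Black-box system identification sketch: estimate an unknown plant gain
# from input/output data alone, with no structural knowledge of the plant.
inputs = [1.0, 2.0, 3.0, 4.0]
outputs = [2.1, 3.9, 6.0, 8.1]       # hypothetical noisy measurements of y = 2*u

# Least-squares fit of y = K*u:  K = sum(u*y) / sum(u*u)
K_hat = sum(u * y for u, y in zip(inputs, outputs)) / sum(u * u for u in inputs)
print(round(K_hat, 2))
```

Gray-box identification would fix the model structure from physical knowledge and fit only its parameters; a white-box model is derived entirely from first principles.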
Control Systems Concepts (4/6)
Model Predictive Control
Control Systems Concepts (5/6)
Model Predictive Control Dynamics
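Model predictive control uses an internal model to simulate candidate input sequences over a short horizon, applies only the first input of the best sequence, and repeats (receding horizon). A brute-force sketch, assuming an illustrative integrator plant and a small discrete candidate set:

```python
# Model predictive control sketch: search over candidate input sequences
# on a short horizon with an internal model, apply the first input of the
# lowest-cost sequence, then re-plan at the next step (receding horizon).
import itertools

setpoint = 5.0
candidates = (-1.0, 0.0, 1.0)        # illustrative discrete input set

def best_first_move(y, horizon=3):
    """Return the first input of the lowest-cost predicted sequence."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        yp, cost = y, 0.0
        for u in seq:
            yp = yp + u                     # internal model: y' = y + u
            cost += (setpoint - yp) ** 2    # predicted tracking error
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

y = 0.0
for _ in range(10):
    y = y + best_first_move(y)              # apply only the first move
print(y)
```

Real MPC solvers replace the exhaustive search with a constrained optimizer, but the receding-horizon structure is the same.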
Control Systems Concepts (6/6)
Forward and Inverse Models
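A forward model maps a motor command to its predicted sensory outcome; an inverse model maps a desired outcome back to the command that produces it. A sketch for a toy affine plant (the plant equation is purely illustrative):

```python
# Forward vs. inverse model sketch for a toy plant y = 2*u + 1.
def forward_model(u):
    """Predict the sensory outcome of motor command u."""
    return 2 * u + 1

def inverse_model(desired_y):
    """Recover the motor command that yields the desired outcome."""
    return (desired_y - 1) / 2

u = inverse_model(7.0)        # command for desired outcome 7
print(u, forward_model(u))    # the forward model confirms the prediction
```

The pair is exact here because the toy plant is invertible; for real plants the inverse model is usually approximate and the forward model is used to check and correct it.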
Internal Models (1/3)
Delay in Closed Loop Systems
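Feedback through a delayed measurement can destabilize an otherwise well-behaved loop, which is one motivation for putting an internal model in the controller. A sketch comparing the same proportional loop with and without a transport delay in the feedback path (all constants illustrative):

```python
# Sketch of how measurement delay degrades feedback control: the same
# proportional controller is run with and without a transport delay.
from collections import deque

def run(delay_steps, Kp=2.0, a=1.0, dt=0.05, steps=400):
    y, setpoint = 0.0, 1.0
    buf = deque([0.0] * (delay_steps + 1), maxlen=delay_steps + 1)
    errs = []
    for _ in range(steps):
        buf.append(y)
        measured = buf[0]                 # controller sees a delayed output
        u = Kp * (setpoint - measured)
        y += dt * (-a * y + u)
        errs.append(abs(setpoint - y))
    return max(errs[-50:])                # residual error near the end

print(run(delay_steps=0), run(delay_steps=40))
```

With no delay the loop settles quietly; with a two-second delay the same gains produce a growing oscillation, because the controller keeps reacting to stale measurements.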
Internal Models (2/3)
Plant Model in the Control Loop
Internal Models (3/3)
• Benefits
– Feedback control
– Anomaly detection
– Anticipation
• Comparison with other techniques
– Flexibility
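One of the benefits above, anomaly detection, falls out of a forward model almost directly: a large mismatch between the model's prediction and the actual sensor reading flags something unexpected. A sketch with an illustrative plant, threshold, and injected fault:

```python
# Anomaly detection with an internal model: the agent predicts the next
# sensor reading with its forward model and flags a large prediction
# error as an anomaly. Plant, fault, and threshold are illustrative.
def forward_model(y, u):
    return 0.9 * y + u            # the agent's internal model of the plant

y, u, threshold = 1.0, 0.5, 0.2
anomalies = []
for t in range(10):
    predicted = forward_model(y, u)
    actual = 0.9 * y + u          # real plant, identical to the model here
    if t == 6:
        actual += 1.0             # injected fault (e.g. an external shove)
    if abs(actual - predicted) > threshold:
        anomalies.append(t)
    y = actual
print(anomalies)
```

The same prediction machinery supports anticipation: running the forward model ahead of the plant lets the agent act on predicted rather than delayed measurements.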
The Simulator (1/4)
Simulator Structure
The Simulator (2/4)
Environment Structure
The Simulator (3/4)
Agent Cognitive Structure
The Simulator (4/4)
Internal Model General States for Skills
Experiments and Results (1/5)
Test of Stop Model
Experiments and Results (2/5)
Test of Car-Following Model
Experiments and Results (3/5)
Car Following Zoom
Experiments and Results (4/5)
Car Following Dynamics
Experiments and Results (5/5)
Choosing Between Conflicting Alternatives
• The next experiment will include a higher-level Internal Model that deals with potentially conflicting situations. For example, the agent is near its destination but has a slow-moving car in front of it. If the agent overtakes the leading car, it risks overshooting its destination.
• This would be an example in which a frustrating (negative) experience must be endured in order to achieve a greater good.
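The trade-off above can be sketched as comparing the combined short-term and long-term value of each alternative; all numbers below are illustrative, not taken from the experiment:

```python
# Choosing between conflicting alternatives by total (short + long term)
# value: "stay behind" is frustrating now but serves the greater good.
options = {
    "overtake":    {"short_term": +1.0, "long_term": -2.0},  # risk of overshoot
    "stay_behind": {"short_term": -0.5, "long_term": +1.5},  # endure frustration
}

def total_value(name):
    v = options[name]
    return v["short_term"] + v["long_term"]

choice = max(options, key=total_value)
print(choice)
```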
Related Work
• Owen Holland (Holland and Goodman 2003), in the paper “Robots With Internal Models”
Future Work
• Simulator as a Framework (MAS)
• 3D Graphical Interface
• Model Refining
• Model Implementation Technology
• Time Considerations: Real Time and Synchronization
• Learning Features and Mechanisms
Conclusions
• Promising approach to the study of cognitive processes in general and Artificial Consciousness in particular.
• A simulator architecture that can grow and expand to a multiprocessing environment, thus affording greatly enhanced computing power.
• If the various functions attributed to consciousness can be unequivocally implemented, then either:
a) These functions do not need consciousness to steer behavior, or
b) The machine is exhibiting some level of consciousness within the domain of the simulated environment.
References
• Churchland, P.S. (2002), “Brain-Wise: Studies in Neurophilosophy”, The MIT Press, pp. 76-90.
• Damasio, A. (1994), “Descartes’ Error: Emotion, Reason, and the Human Brain”. New York: Grosset/Putnam.
• Damasio, A. (1999), “The Feeling of What Happens”. New York: Harcourt Brace.
• Grush, R. (1997), “The Architecture of Representation”, in Philosophical Psychology 10:5-23.
• Iacoboni M., Molnar-Szakacs I., Gallese V., Buccino G., and Mazziotta J.C. (2005), “Grasping the Intentions of Others with One’s Own Mirror Neuron System”, in PLoS Biology (www.plosbiology.org).
References (Cont.)
• Gaschler K. (2006), “One Person, One Neuron?”, Scientific American Mind (February/March), pp. 77-82.
• Pouget, A., and T.J. Sejnowski (1997), “Spatial Transformations in the Parietal Cortex Using Basis Functions”, Journal of Cognitive Neuroscience 9(2):222-237.
• Rizzolatti G., Fogassi L., and Gallese V. (2006), “Mirrors in the Mind”, Scientific American (November), pp. 54-61.
• Sloman, A. (2004), “GC5 The Architecture of Brain and Mind”, in Grand Challenges in Computing – Research, edited by Tony Hoare and Robin Milner, BCS, 21, 24.
• Wolpert, D.M., Z. Ghahramani, and M.I. Jordan (1995), “An Internal Model for Sensorimotor Integration”, in Science 269:1880-1882.