University Studies 15A: Consciousness I
Neural Network Modeling (Round 2)
Let us begin again with the problem that the 100-step rule posed for neuroscientists. That is, they knew that whatever the brain was doing, it was using massive parallelism to produce responses in no more than 100 sequential steps of neurons firing. So, we have a brain:

If we unfold and flatten the neocortex, we get two sheets of interconnected, six-layered neuronal assemblies, connected by the corpus callosum. The work of the brain is done by the neuronal clusters becoming activated and activating additional neuronal clusters in turn.

Experiential Level: Seeing, hearing, remembering, deciding, acting
Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation.

For example, information about light comes in from the retina to the primary visual cortex:

The primary visual cortex passes the activation on to higher processing layers:

Each layer processes the activation by aggregating large numbers of simple patterns into a smaller number of more complex patterns:
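To make that aggregation step concrete, here is a toy numpy sketch. Everything in it (the patterns, the pooling weights, the threshold) is an illustrative assumption, not anything from the slides: four "simple" edge-detector activations are pooled into two more complex "corner" detectors.

    import numpy as np

    # Activations of four simple feature detectors (e.g. oriented edges in V1).
    simple = np.array([1.0, 1.0, 0.0, 0.0])

    # Each row pools a pair of simple features into one more complex feature
    # (a hypothetical "corner" detector built from two edge detectors).
    aggregate = np.array([[1.0, 1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0, 1.0]])

    # A complex detector fires only when both of its inputs are active,
    # so four simple activations become two more complex ones.
    complex_features = (aggregate @ simple) >= 2.0
    print(complex_features)   # [ True False ]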

Remember that we started with just angled line segments in the simple cells of V1. And keep those feedback connections in mind. What we see is a complex reconstruction built from many layers of input that had been divided into separate streams and reassembled.

To construct what we see, activation passes from the visual cortex to layers of neurons that synthesize the input from different sensory sources:

The neural clusters embodying the Visual System then continue further and connect to those that embody the Semantic Systems and visual object recognition. Despite the many layers, there are fewer than one hundred layers. Because the brain is processing all the activation information in parallel, the activation passes quickly from layer to layer. So, when you see this picture

Your visual system very quickly uses the feedback connections from higher memory of objects and draws on your knowledge of Dalmatians to fill in the missing information.

Experiential Level: Seeing, hearing, remembering, deciding, acting
Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation.

How the Trick is Done: Neural Networks
Each level of processing in the brain is a cell assembly, that is, a layer of neurons.

This is a cell assembly of neurons in a fly's retina.

This picture shows the connection of one layer of neurons to the next. Neurons of one layer connect to those of the next through the synaptic junction. Various factors control the strength of the connection between neurons at the synaptic junction:
- Number of vesicles of neurotransmitter in the sending neuron
- Number of receptors on the receiving neuron
- Structural changes in the junction gap
All of these can change.

Back to the Neuron and our Schematic Model of one

[Figure: two layers of schematic neurons, a(1) … a(i) … a(n) in Layer A and b(1) … b(j) … b(m) in Layer B, with outputs O1 … Oj … Om]

Our Schematic Representation of Layers of Cell Assemblies:
The strength of the synaptic connection between neurons a(i) and b(j) in the two layers is represented by w(i,j). This set of weights defines a weighting matrix W of dimension (m, n), with a row for each of the m neurons in Layer B and a column for each of the n neurons in Layer A.

Experiential Level: Seeing, hearing, remembering, deciding, acting
Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation.

How the Trick is Done: Neural Networks
At each step, the activation of Layer B derives from the sum of the input synaptic activations from Layer A, each multiplied by the strength of its synaptic junction. Or:

    b = W a

Since all the neurons in Layer B receive input at the same time, the activation of the entire layer is calculated simultaneously.

Experiential Level: Learning and memory
Brain Level: Neurons in one cell assembly change the strength of their connections to neurons in the next cell assembly by changing the structure of the synaptic connection.

How we represent memory and learning in Neural Network models:
Memory: all memory resides in the weighting matrices that represent the structure of synaptic connections in the system.
Activation-based learning: changing weights in the weighting matrices, using Hebb's Rule (where η is a learning rate):

    Δw(i,j) = η · a(i) · b(j)
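As a concrete sketch of these two formulas, here is a minimal numpy version of one step of transmitted activation and one Hebbian update. The layer sizes, random values, and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    n, m = 4, 3                        # neurons in Layer A and Layer B (illustrative sizes)
    W = rng.normal(0.0, 0.1, (m, n))   # weighting matrix: one row per Layer B neuron

    a = rng.random(n)                  # current activation of Layer A

    # One step of transmitted activation: every Layer B neuron sums its
    # weighted inputs from Layer A at once, i.e. a single matrix product.
    b = W @ a

    # Hebb's Rule: each weight grows in proportion to the joint activity
    # of the two neurons it connects, Δw(i,j) = η · a(i) · b(j).
    eta = 0.01                         # learning rate (illustrative)
    W += eta * np.outer(b, a)

    print(b)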

At every level, as the brain passes activation from layer to layer, the layers adjust their patterns of synaptic strength.

At every level, the boxes representing functional units in the brain actually have their own internal structures of cell assemblies, and these also have their own changing patterns of synaptic connections, their own W.

Experiential Level: Things in memory: apples, houses, words, ideas
Brain Level: They are all patterns of synaptic connections.
Modeling: Learned abilities, such as riding a bicycle: it is all the Ws.

This has implications, because the layers in a network operate as a system rather than as independent neurons. Remember our simple set of artificial neurons:

Sixteen input units are connected to two output units.
- Only two input units are active at a time.
- They must be horizontal or vertical neighbors.
- Only one output unit can be active at a time (inhibition is marked by the black dots).

[Figure: the network's responses in Trial 1, Trial 2, and Trial 3]

If one used this as a perception unit that passed its internal state on to other layers, those other layers would know of only two objects activated by the input layer.

How it sees the 16 input units varies from Trial 1 to Trial 3, but it divides the input space into just two things as patterns of connection. We have trained this network in a simulation. ALL objects, from cars and people to concepts like cuteness or justice, are mutually defined partitions in very, very high-level input spaces.
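Here is one way such a two-way partition could be sketched in numpy, assuming a simple winner-take-all readout to stand in for the inhibitory connections; the weights and the active input pair are illustrative, not trained values from the slides.

    import numpy as np

    rng = np.random.default_rng(1)

    n_inputs, n_outputs = 16, 2
    W = rng.random((n_outputs, n_inputs))    # illustrative random weights

    def respond(pattern):
        # Mutual inhibition, approximated: only the more strongly
        # driven of the two output units is allowed to stay active.
        drive = W @ pattern
        return int(np.argmax(drive))

    # One trial: two neighboring input units are active (indices illustrative).
    trial = np.zeros(n_inputs)
    trial[[4, 5]] = 1.0

    # Whatever the input pattern, the layer reports one of just two "things".
    print("winning output unit:", respond(trial))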

In a word, this is your brain at work:

Neural networks extract patterns and divide an input space. This can lead to odd results with implications for biological neural networks. James McClelland tested the ability of a neural network to build a classification tree based on closeness of attributes. He built a network that could handle simple property statements like:

Robin can grow, move, fly.
Oak can grow.
Salmon has scales, gills, skin.
Robin has wings, feathers, skin.
Oak has bark, branches, leaves, roots.

Baars and Gage discuss this and give the design. Neural network software turns this sort of design into a computer program that simulates the network:
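Baars and Gage give the design rather than code, but a heavily simplified sketch of such a simulation might look like the following. The real model is a multi-layer backpropagation network; a single layer trained with the delta rule is assumed here purely to keep the sketch short.

    import numpy as np

    items = ["robin", "oak", "salmon"]
    props = ["grow", "move", "fly", "wings", "feathers", "skin",
             "bark", "branches", "leaves", "roots", "scales", "gills"]

    # The property statements from the slide.
    facts = {
        "robin":  ["grow", "move", "fly", "wings", "feathers", "skin"],
        "oak":    ["grow", "bark", "branches", "leaves", "roots"],
        "salmon": ["scales", "gills", "skin"],
    }

    X = np.eye(len(items))     # one input unit per item
    Y = np.array([[p in facts[i] for p in props] for i in items], dtype=float)

    # Delta-rule training: nudge the weights until each item
    # activates exactly its listed properties.
    W = np.zeros((len(props), len(items)))
    for _ in range(200):
        for x, y in zip(X, Y):
            W += 0.1 * np.outer(y - W @ x, x)

    robin = W @ X[0]
    print([p for p, out in zip(props, robin) if out > 0.5])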

What Baars and Gage do not discuss is the next step. McClelland fed the system facts about penguins: Penguin can swim, move, grow. Penguin has wings, feathers, skin. When one runs the simulation, the result is a tree that does a good job:

The results were profoundly different depending on whether the facts about penguins were interleaved with facts about the other objects or the training was all penguins, all the time:
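A toy version of that contrast can be sketched as follows, under loud assumptions: a single-layer delta-rule learner with made-up, distributed item codes stands in for McClelland's full network. Because the item codes overlap, a pure block of penguin facts overwrites some of the old knowledge, while interleaving preserves it.

    import numpy as np

    rng = np.random.default_rng(2)

    def make_code():
        v = rng.normal(size=8)
        return v / np.linalg.norm(v)      # distributed, overlapping item codes

    old_items = [make_code() for _ in range(3)]       # robin, oak, salmon
    penguin = make_code()
    old_props = [rng.random(12) for _ in range(3)]    # illustrative property vectors
    pen_props = rng.random(12)

    def delta_step(W, x, y, lr=0.1):
        return W + lr * np.outer(y - W @ x, x)

    def old_fact_error(W):
        return sum(np.abs(y - W @ x).mean() for x, y in zip(old_items, old_props))

    # Phase 1: learn the old facts.
    W = np.zeros((12, 8))
    for _ in range(500):
        for x, y in zip(old_items, old_props):
            W = delta_step(W, x, y)

    # "All penguins all the time": a pure block of penguin facts.
    W_blocked = W.copy()
    for _ in range(500):
        W_blocked = delta_step(W_blocked, penguin, pen_props)

    # Penguin facts interleaved with the old facts.
    W_mixed = W.copy()
    for _ in range(500):
        for x, y in zip(old_items + [penguin], old_props + [pen_props]):
            W_mixed = delta_step(W_mixed, x, y)

    print("error on old facts, blocked:    ", old_fact_error(W_blocked))
    print("error on old facts, interleaved:", old_fact_error(W_mixed))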

We'll come back to this result when we discuss memory and sleep. An important aspect of neural networks in the brain that people have explored through artificial networks is the brain's use of recurrency, in which nodes in networks loop back on themselves. Simulated models show that one absolutely crucial feature of recurrent networks is the ability to complete partial patterns:
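A classic artificial network that shows pattern completion is a small Hopfield-style model. That choice of architecture is an assumption here, since the slides do not name one: patterns are stored with a Hebbian outer-product rule, and the recurrent loop fills in a partial cue.

    import numpy as np

    # Two stored binary patterns (+1/-1); the values are illustrative.
    patterns = np.array([
        [ 1,  1,  1,  1, -1, -1, -1, -1],
        [ 1, -1,  1, -1,  1, -1,  1, -1],
    ])

    # Hebbian outer-product storage rule; no self-connections.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # A partial cue: the first pattern with half its units zeroed out.
    state = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

    # Recurrent updates loop the state back through the network until
    # it settles into the nearest stored pattern (pattern completion).
    for _ in range(5):
        state = np.sign(W @ state)

    print(state)   # recovers [ 1  1  1  1 -1 -1 -1 -1 ]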

The image of the Dalmatian is very incomplete, but the brain feeds back knowledge of Dalmatians to the visual system, which then produces a yet more complete view and cycles in loops until perception settles into "Dalmatian." Artificial neural networks like the penguin learner allow researchers to model the behavior of neural systems. These sorts of pattern-completing, self-modifying networks appear throughout the brain. Baars and Gage stress that 90% of the connections between the thalamus and V1 go from V1 to the thalamus as re-entrant connections rather than feed-forward input. Many neural net modelers have developed systems based on re-entrant brain connectivity:

To sum up:

Experiential Level: Seeing, deciding, acting
Biological Level: Layers of cell assemblies transmitting activation
Modeling: b = W a

Experiential Level: Learning
Biological Level: Adjustment of synaptic strength in connections between neurons
Modeling: Δw(i,j) = η · a(i) · b(j)

Experiential Level: Memory
Biological Level: The strength of synaptic connections maintained by the system of neuron assemblies
Modeling: W

Experiential Level: "Things": all forms of internal representation
Biological Level: Mutually differentiated patterns of activation within an overall system
Modeling: Attractor basins (You really don't want to know the details.)

It's all done with neural networks.

