
Force field adaptation can be learned using vision in the absence of proprioceptive error

A. Melendez-Calderon, L. Masia, R. Gassert, G. Sandini, E. Burdet

Motor Control Reading Group
Michele Rotella
August 30, 2013

Ideal vs. Constrained Movement

Ideal robotic trainer (6 DOF)
- Realistic movements
- BUT complex, bulky, not portable; safety concerns

Reduced-DOF trainer
- Cheaper, simpler, mobile
- BUT lost information, different dynamics
- Will learning transfer to complex movement?

Exo-UL3

Research Question!

Can performance gains in a constrained environment transfer to an unconstrained (real-movement) environment?

If mechanical constraints limit arm movement, can vision replace proprioceptive information in learning new arm dynamics?

Integration of Sensory Modes

How are vision and proprioception integrated, and what is the relative importance of each?

Experiment: targeted reaches

Subjects: 30, right-handed

Device: 2-DOF planar manipulandum

General task
- Control cursor with handle position
- Perform point-to-point movements
- Successful reach: arrive at the target in 0.6 ± 0.1 s
- Color feedback on speed
- Single (Exp. 1) or five (Exp. 2) movement directions

Braccio di Ferro
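The speed criterion above can be sketched as a simple classifier: a reach counts as successful if its duration falls within 0.6 ± 0.1 s. The category labels below are illustrative assumptions; the slides only say that color feedback on speed was given.

```python
# Sketch of the speed-feedback rule: success window is 0.6 +/- 0.1 s.
# The label strings are assumptions, not taken from the paper.
def speed_feedback(duration_s: float) -> str:
    if duration_s < 0.5:
        return "too fast"
    if duration_s > 0.7:
        return "too slow"
    return "good"
```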

Experiment Environments
- Null force field (NF): no force; visual feedback of robot/hand position.
- Viscous curl force field (VF): velocity-dependent force field; visual feedback.
- Virtual null force field (vNF), vision ≠ proprioception: stiff haptic channel; the lateral force is measured to estimate the movement (robot + arm dynamics); visual feedback shows the actual arm plus the lateral deviation.
- Virtual viscous force field (vVF): stiff haptic channel; the lateral force is measured to estimate the movement; the estimated arm velocity drives an estimated viscous curl field; visual feedback shows the actual arm plus the deviation the curl field would have caused.
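As a minimal sketch, the VF and vVF conditions can be illustrated in Python. The curl gain `b`, the point-mass parameters, and the integration settings are assumptions for illustration, not values from the paper.

```python
import numpy as np

b = 15.0  # N*s/m, assumed curl gain (not from the paper)
B = b * np.array([[0.0, 1.0],
                  [-1.0, 0.0]])  # curl matrix: force is perpendicular to velocity

def curl_force(v):
    """VF: robot force produced by a viscous curl field at hand velocity v (m/s)."""
    return B @ v

def virtual_deviation(v_hand, dt=0.005, steps=60, mass=1.0):
    """vVF: the arm is held straight by a stiff channel, so instead of
    applying curl_force we integrate the lateral deviation it *would*
    have produced on an assumed point mass, and add it to the cursor only."""
    dev = np.zeros(2)    # simulated lateral displacement
    dev_v = np.zeros(2)  # simulated lateral velocity
    for _ in range(steps):
        a = curl_force(v_hand + dev_v) / mass
        dev_v = dev_v + a * dt
        dev = dev + dev_v * dt
    return dev  # displayed cursor = actual hand position + dev
```

The key property of the curl field is that the force is always perpendicular to the velocity, so it perturbs the path laterally without doing work along the movement direction.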

(Figure: the real and virtual environments, shown in the world frame and the target frame.)

Experimental Protocols

Exp. 1: Unidirectional force field learning

Group | Fam. | Learning | Testing I | Testing II | Washout | Post-washout
uVG (10), virtual/constrained | vNF (25) | vVF (150) | vVF (20), VF (5) catch trials (learning effect) | vVF (20), NF (5) catch trials (aftereffect) | NF (25) | NF (20), VF (5)
uCG (10), unconstrained | NF (25) | VF (150) | NA | VF (20), NF (5) | |

Exp. 2: Multidirectional force field learning

Group | Fam. | Learning | Testing I | Testing II | Washout | Post-washout
mVG (5), virtual/constrained | vNF (10) | vVF (30) | vVF (20), VF (5) | vVF (20), NF (5) | NF (10) | NF (20), VF (5)
mCG (5), unconstrained | NF (10) | VF (30) | NA | VF (20), NF (5) | |

(Group sizes in parentheses after group names; trial counts in parentheses after each condition.)

Data Analysis & Expected Results

Performance metrics
- Feed-forward control: aiming error at 150 ms
- Directional error: aiming error at 300 ms

Between-group analysis
- Pearson's correlation coefficient between mean trajectories
- t-tests between groups

Hypotheses
- Over time, directional error decreases while catch-trial error increases
- vVF and VF produce similar trajectories
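The two analyses above can be sketched as follows; the function names, sampling layout, and probe logic are illustrative assumptions, not the authors' code.

```python
import numpy as np

def aiming_error_deg(traj, t, t_probe, target):
    """Aiming error: angle (deg) between the movement direction at time
    t_probe (e.g. 150 ms or 300 ms) and the straight line to the target.
    traj is an (N, 2) array of hand positions, t the matching times (s)."""
    i = int(np.searchsorted(t, t_probe))
    move_dir = traj[i] - traj[0]
    to_target = target - traj[0]
    cosang = np.dot(move_dir, to_target) / (
        np.linalg.norm(move_dir) * np.linalg.norm(to_target))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def trajectory_similarity(mean_a, mean_b):
    """Pearson correlation between two mean trajectories (x, y flattened)."""
    return float(np.corrcoef(mean_a.ravel(), mean_b.ravel())[0, 1])
```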

Results: Unidirectional Learning

(Figure: hand paths and velocity profiles over learning. Paths gradually straighten in both groups; vVF and VF paths are similar; catch-trial aftereffects are opposite in direction; washout returns fully to baseline. vVF movements are slower, with large oscillations.)

Results: Unidirectional Learning (cont.)

Feedforward component: curvature and lateral deviation are smaller for uVG.

*Subjects were not aware of the constraining channel.

Results: Multidirectional Learning

Similar paths indicate learning of vVF

* All paths highly correlated

Results: Multidirectional Learning (cont.)

Aftereffects differ at the beginning (incomplete learning) and are smaller in the virtual environment.

*Per target, subjects had more time to learn a single direction than many directions.

Discussion

- New dynamics can be learned without proprioceptive error when visual feedback displays the arm dynamics.
- Uni- vs. multidirectional task: unidirectional showed no difference between uVG and uCG; multidirectional showed different aftereffects and incomplete learning.
- Learning in a virtual environment transfers to real movement.
- But some proprioception and force feedback from the channel remained.
- The CNS may have favored visual information over proprioception based on reliability.

Applications

Sport training
- Complex movements trained with simple (take-home) devices

Rehabilitation
- Simple devices: safer, cheaper
- Stroke patients have impaired feed-forward control
- Create visual feedback that could correct lateral forces

Thoughts…

- Direct connection to our isometric studies: we totally constrain movement; consider a visual perturbation; our simple dynamics do not necessarily represent the arm.
- How realistic do the virtual dynamics have to be for training? Actual arm dynamics? How much error in the arm model is acceptable? Virtual dynamics of another system?

Thoughts…

- Why could subjects not tell when their arm was constrained? How would results change if people could see their hand?
- How can we manipulate how much someone relies on a certain type of feedback? This has come up before!
- Why did the required reaching length change between the uni- and multidirectional experiments?

