
Examining the Sensorimotor Integration Processes Prior to and During Movements to Somatosensory Targets

by

Gerome Aleandro Manson

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy

Graduate Department of Exercise Science, University of Toronto

Joint with École Doctorale des Sciences de la Vie et de la Santé, Aix-Marseille Université

© Copyright by Gerome Aleandro Manson 2019


Examining the Sensorimotor Integration Processes Prior to and During Movements to Somatosensory Targets

Gerome Aleandro Manson

Doctor of Philosophy

University of Toronto and Aix-Marseille Université

2019

Abstract

Previous research on multisensory integration for movement planning and control has focused on movements to targets external to the body. In this dissertation, three experiments were conducted to examine the sensorimotor transformation processes underlying goal-directed actions to targets defined by body positions (i.e., somatosensory targets). The goal of the first experiment was to investigate whether the modality of the cue used to indicate the location of a somatosensory target affects the body representation used to encode the target's position during movement planning. The results showed that auditory cues prompted the use of an exteroceptive body representation for the encoding of movements to a somatosensory target in visual coordinates. The goal of the second experiment was to examine the neural processes associated with the visual remapping of an auditory-cued somatosensory target. It was found that the sensorimotor transformation processes responsible for the conversion of a somatosensory target position into visual coordinates engage visuomotor cortical networks to a greater extent than movements to external visual targets. The goal of the third experiment was to examine the sensorimotor transformation processes employed for the online control of movements to a somatosensory target. The results of this experiment revealed that the remapping of a somatosensory target into visual coordinates may not occur prior to online corrections. Altogether, the findings of this thesis reveal that sensory cues can facilitate the remapping of a somatosensory target prior to goal-directed actions; however, these remapping processes may be too costly for use during online control when there is no vision of the reaching limb.


Acknowledgements

Reflecting on the many people who helped me through this time made me feel extremely grateful. The list of people acknowledged here is not exhaustive; these are just a few of the individuals who came to mind. There were many more. I will begin in French.

Thank you very much to all my colleagues and friends in Marseille, particularly Caroline, Julien, Marie C, Eva (and Téo), Stefania F, Christina, Sarah, Myelene, Joanna, Svetlana, Kyle, Rory, Shu, Kevin, Raphael, Thibault, Stefania, Sebastian, Luisa, and Ana. We shared great times together, and I miss you.

Thank you also to the members of the team: thank you to Didier Louber (the friend from the Caribbean), Laurence Mouchnino, Marie Fabre, and Alain Guillaume. It was truly a pleasure to work with you. Thank you to Olivia Lhomond for all your help and for "English Day" each week. Thank you to Anahid Saradjian for everything you gave me and for our ritual of feeding the cats. Endless thanks to my two brothers, Romain Chaumillion (the #1 Raptors fan) and Nicolas Lebar ("le mec"), simply for everything. A huge thank you to Mme Aurélie Aufray for all your administrative work on the co-tutelle.

Thank you to the members of my thesis committee: thank you to Franck Vidal for the discussions and the help with developing the document. A big thank you as well to my supervisor Jean Blouin for all your work on the thesis and for the scientific knowledge you shared with me.

Now to switch back to English. To the external examiners who took time to evaluate my work:

Dr. Denise Henriques and Dr. Jennifer Campos. Thank you for all of your insightful questions,

critiques, and the many discussions we had over the years. I would like to also acknowledge the

mentorship and guidance I received from Dr. Timothy Welsh. Tim, thank you for helping me

develop both as a scientist and as a person. Thank you also for your expertise in statistics.

Thank you also to all of the students and staff members, both past and present, of the Perceptual

Motor Behaviour Lab, the Action and Attention Lab, and the Department of Exercise Sciences

who made this thesis possible. Thank you for creating a supportive environment through both the

fun times and the hard times. In particular, I would like to thank Valentic Crainic, Damian


Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous

aspects of each project.

To all the present and former members of Team G: Sads, Taff, Lok, and Dov, for taking a big risk

by working with me, and then for becoming some of my best friends. I learned so much from you

all. To my mentors, both official and unofficial: Danielle, Tanya, Darian Cheng, Heather, and

Matt Ray. Thank you for the words of advice and guidance. Thank you also to D-Millz, Jonah,

Sam, Nat, Steph, and Mo for friendship, lively discussions, and research support over the years.

Thank you to team bolt: Sharaf, Rach, Sharifa and Jo (also Jess). In addition to being great friends,

you all are my inspirations. A very special thank you to Cindy, Debra, and Rachel “getting

buckets” Goodman for your love, kindness, and support. Thank you to the Generals of the

Heavens: Kwasi, Gabriel, and Danny. You all provided me with the support and motivation to

keep going. Also, very special thanks to Kwasi and Steph for being my late-night study

partners in the ring and for your love and support over the years to me and my family during some

of the hardest times.

Thank you to my new lab and family at Houston Methodist. Thank you Dr. Dimitry Sayenko,

Rachel Markley, and Jonathan Calvert for your support during the past few months. Thank you

also to Masha, Sasha, Liza, and Misha for being part of my Texas Family.

To Dr. Luc Tremblay, I cannot express in words the gratitude I have for everything you have done

for me. You have been a colleague, mentor, and friend of the highest caliber. You believed in me

and pushed me to take risks, fail, and come back better. Thank you also for all the lessons both in

science and in life. I will continue to pay it forward.

To my dear family Dolores (Mom), Richard (Dad) and Niclas (Brother). From pilot testing my

experiments both in Canada and France to planning surprise parties for everyone around, you all

have been with me every step of the way. Thank you all so much, for everything.

Lastly, I would like to dedicate this thesis to my late grandmother (Violet) and my late aunty

(Joanne). Thank you both for all the love and support and for helping me keep things in

perspective. Life is both long and short; I will do my best to make these days count.


Table of Contents

ACKNOWLEDGEMENTS ................................................................................................................ III

TABLE OF CONTENTS...................................................................................................................... V

LIST OF ABBREVIATIONS ............................................................................................................. IX

LIST OF TABLES ................................................................................................................................ X

LIST OF FIGURES ............................................................................................................................ XI

PREAMBLE .......................................................................................................................................... 1


RÉSUMÉ ....................................................................................................................................... 3

GENERAL INTRODUCTION AND LITERATURE REVIEW................................... 4

GENERAL INTRODUCTION ..................................................................................................... 4

PHYSIOLOGICAL SYSTEMS AND MODELS OF GOAL-DIRECTED ACTION .................................... 5

THE VISUAL SYSTEM .............................................................................................................. 5

Visual System Neuroanatomy in Brief ................................................................................. 5

Construction of the Visual World ........................................................................................ 8

Two Visual Streams ............................................................................................................ 8

Vision for Goal-directed Action Planning and Control: Multiple-Processes Model............ 11

The Multiple-Processes Model .......................................................................................... 13

THE SOMATOSENSORY SYSTEM ............................................................................................ 15

Somatosensory Sensory Receptors .................................................................................... 15

Somatosensory Information Processing: From Spinal Cord to the Cortex.......................... 17

Somatosensory Representations During Action – Vector Integration to Endpoint Model.... 20

SENSORIMOTOR TRANSFORMATIONS FOR MOVEMENT PLANNING AND CONTROL.................. 23

Multisensory Combination and Integration for Movement: General Principles.................. 23

Multisensory Combination and Integration ....................................................................... 24

Multisensory Combination and Integration During Movement Planning and Control ........ 25

LITERATURE REVIEW CONCLUSIONS, GAPS, AND FURTHER QUESTIONS ................................ 46


GENERAL METHODOLOGY .................................................................................................. 49


PARTICIPANTS ....................................................................................................................... 49

GENERAL PROCEDURES ........................................................................................................ 49

BEHAVIOURAL VARIABLES RECORDING AND ANALYSIS ....................................................... 50

Temporal Measures .......................................................................................................... 50

Error Measures ................................................................................................................ 50

Latency of Online Corrections .......................................................................................... 51

ROBOTIC GUIDANCE APPARATUS, DEVELOPMENT, AND TESTING .......................................... 54

ELECTROENCEPHALOGRAPHY APPARATUS, RECORDINGS, AND ANALYSES ........................... 55

Apparatus ......................................................................................................................... 57

Visual Evoked Potentials Features, Calculations, and Analysis ......................................... 57

Source Localization Procedures and Analysis ................................................................... 60

ELECTROOCULOGRAPHY (EOG) ............................................................................................ 61

EOG for EEG ................................................................................................................... 61

EOG for Eye Tracking and Gaze Direction Measurements ................................................ 61

FLEXIBILITY IN THE ENCODING OF REACHING MOVEMENTS TO SOMATOSENSORY TARGETS: BEHAVIOURAL AND ELECTROPHYSIOLOGICAL EXPERIMENTS .................................................................................................................. 62

STUDY A ..................................................................................................................................... 63

ABSTRACT ............................................................................................................................ 63

EXPERIMENT A1 ................................................................................................................... 64

Introduction ...................................................................................................................... 64

Methods ............................................................................................................................ 65

Results .............................................................................................................................. 72

Discussion ........................................................................................................................ 74

EXPERIMENT A2 ................................................................................................................... 76

Introduction ...................................................................................................................... 76

Methods ............................................................................................................................ 77

Results .............................................................................................................................. 86

DISCUSSION .......................................................................................................................... 90

CONCLUSION ........................................................................................................................ 92

RAPID ONLINE CORRECTIONS FOR UPPER-LIMB REACHES TO PERTURBED SOMATOSENSORY TARGETS: EVIDENCE FOR NON-VISUAL SENSORIMOTOR TRANSFORMATION PROCESSES ................................................................. 93


STUDY B ..................................................................................................................................... 94

Abstract ............................................................................................................................ 94

Introduction ...................................................................................................................... 95

Methods ............................................................................................................................ 98

Results ............................................................................................................................ 110

Discussion ...................................................................................................................... 120

Conclusions .................................................................................................................... 125

GENERAL DISCUSSION .......................................................................................... 126

THESIS FINDINGS AND FUTURE DIRECTIONS .............................................................. 126

SUMMARY OF THESIS FINDINGS .......................................................................................... 126

Gaze-(in)dependent Encoding of Somatosensory Targets is Influenced by the Modality of the Imperative Stimulus ............................................................................................................... 127

Activation of Cortical Networks Associated with Visuomotor Transformations for Movements to Auditory-Cued Somatosensory Targets .................................................................. 129

Fast Correction Latencies Suggest Control Based on Optimized Feedback Use for Movements to Perturbed Somatosensory Targets ........................................................................ 131

FUTURE DIRECTIONS AND PERSPECTIVES ............................................................................ 133

CONCLUSIONS ......................................................................................................... 134

REFERENCES .................................................................................................................................. 135

APPENDIX 1: POWER ANALYSES FOR EXPERIMENTAL STUDIES ..................................... 165

POWER ANALYSES FOR EXPERIMENTAL STUDIES ..................................................... 165

STUDY A: ............................................................................................................................ 165

Experiment A1 ................................................................................................................ 165

Experiment A2 Power Analysis ....................................................................................... 167

Study B Power Analysis .................................................................................................. 167

APPENDIX 2: SUPPLEMENTARY ANALYSIS STUDY A ........................................................... 170

ROBOT APPARATUS DEVELOPMENT AND TESTING ................................................... 170

APPENDIX 3: SUPPLEMENTARY ANALYSIS STUDY B ........................................................... 175

MOTION DETECTION TIME VERSUS LATENCY OF ONLINE CORRECTIONS ......... 175

NORMALIZED TRAJECTORY DEVIATIONS............................................................................. 176


APPENDIX 4: LAPLACIAN-TRANSFORMED VEPS FOR STUDY A ........................................ 179

APPENDIX 5: EXPERIMENTAL QUESTIONNAIRES ................................................................ 180

PRIOR TO PARTICIPATION QUESTIONNAIRES ................................................................ 180

HAND DOMINANCE TEST .................................................................................................... 180

EYE DOMINANCE TEST ....................................................................................................... 181

BRIEF NEUROLOGICAL QUESTIONNAIRE.............................................................................. 182


List of Abbreviations

AUD-SOMA  Auditory-cued Somatosensory Target
AUD-VIS  Auditory-cued Visual Target
CSD  Current Source Density
DV  Difference Vector
DVV  Desired Velocity Vector
EEG  Electroencephalography
EOG  Electrooculography
fMRI  Functional Magnetic Resonance Imaging
FP  Fixation Position
IPL  Inferior Parietal Lobule
IPS  Intraparietal Sulcus
LED  Light Emitting Diode
LGN  Lateral Geniculate Nucleus
LIPv  Ventral Lateral Intraparietal Area
mIPS  Medial Intraparietal Sulcus
MT  Movement Time
OPV  Outflow Position Vector
PAcc  Peak Acceleration
PDecc  Peak Deceleration
PPC  Posterior Parietal Cortex
PPV  Perceived Position Vector
PV  Peak Velocity
RT  Reaction Time
SC  Superior Colliculus
SCARA  Selectively Compliant Assembly Robot Arm
SPL  Superior Parietal Lobule
TACT-SOMA  Tactile-cued Somatosensory Target
TMS  Transcranial Magnetic Stimulation
TPV  Target-Position Vector
V1  Primary Visual Cortex
VEP  Visual Evoked Potential
VITE  Vector-Integration-To-Endpoint


List of Tables

Table 1. Descriptions of additional cellular processes contributing to local field potentials ........ 56

Table 2. Mean (and standard deviation) of latencies (in ms) for the peaks used in the CSD-VEP calculation ........ 85

Table 3. Mean and standard deviation for the temporal and kinematic variables of movements to somatosensory (somato) and visual targets ........ 113

Table 4. Mean and standard deviation (in cm) for the accuracy variables of movements to somatosensory (somato) and visual targets ........ 114

Table 5. Effect sizes of relevant studies ........ 166

Table 6. Number of participants in relevant studies ........ 168


List of Figures

A depiction of the distribution of rods and cones along the surface of the retina. The density of rods, located at peripheral retinal eccentricities, is much greater than the density of cones, which are concentrated more centrally around the fovea. At approximately 20 degrees of visual eccentricity is the point at which ganglion cell axons exit the eye (known as the optic disk). The presence of the optic disk creates a blind spot in the visual field. This figure was adapted from Osterberg (1935) ........ 7

Visual information enters through the occipital cortex and is then projected either through the parietal lobe (dorsal stream) or the temporal lobe (ventral stream). This figure was adapted from Milner and Goodale (1995) ........ 10

The multiple-processes model of limb control. The planning stage occurs before movement start. The initial movement vector is planned based on sensory information, a priori knowledge from previous trials, and the predicted sensory consequences. After peak acceleration (PAcc), impulse regulation processes begin. Corrections during impulse regulation are based on the differences between expected and obtained sensory consequences. Impulse regulation processes continue until after peak deceleration (PDecc) and the end of the primary sub-movement. Limb-target regulation processes start at PDecc and continue until movement end. Limb-target correction processes are based on visual and proprioceptive feedback obtained from prior to peak velocity (PV) to the end of the trajectory. Figure adapted from Elliott et al. (2010) ........ 14

The dorsal column-medial lemniscal pathway for the transmission of proprioceptive and touch information to the somatosensory cortex. Information from muscle spindles and cutaneous receptors enters through the dorsal roots of the spinal cord and then ascends ipsilaterally to the medulla. These fibers synapse, then cross (decussate) at the medulla and ascend toward the ventral posterolateral nucleus of the thalamus and then to the cortex (adapted from Noback, Ruggiero, Demarest, & Strominger, 2005) ........ 19

The Vector-Integration-To-Endpoint (VITE) model of proprioceptive online control. Before movement starts, a difference vector (DV) is calculated from a comparison of the target position vector (TPV) and the current perceived position of the arm (the perceived position vector, PPV). Movement commences with a voluntary, scalable GO signal that scales a velocity-based command (the desired velocity vector, DVV), which activates dynamic gamma motor neurons (γd). The DVV produces an outflow position vector (OPV), which activates static gamma (γs) and alpha (α) motor neurons. The activity of the OPV, which is driven by movement and altered by sensory information from type Ia and II afferents, changes the PPV as arm position changes. This feeds back into the desired velocity vector and alters the movement of the arm. Movement ends when the DV reaches zero, i.e., when the PPV matches the TPV. This figure was adapted from Bullock et al. (1996) ........ 22
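The VITE loop described in this caption can be sketched in a few lines of code. The following is a minimal, illustrative discrete-time reduction and not the thesis's or Bullock et al.'s implementation: the function name, parameter values, and single-effector simplification are my own. A difference vector (DV) relaxes toward TPV minus PPV, and a GO-gated, rectified copy of the DV drives the outflow that updates the perceived position.

```python
import numpy as np

def vite_reach(tpv, ppv0, go=0.1, alpha=0.5, steps=200):
    """Minimal discrete-time sketch of the VITE core loop.

    tpv:   target position vector (fixed during the reach)
    ppv0:  initial perceived position vector (PPV) of the limb
    go:    scalar GO signal gating movement speed
    alpha: relaxation rate of the difference vector (DV)
    """
    tpv = np.asarray(tpv, dtype=float)
    ppv = np.asarray(ppv0, dtype=float)
    dv = np.zeros_like(ppv)
    trajectory = [ppv.copy()]
    for _ in range(steps):
        dv += alpha * (tpv - ppv - dv)        # DV relaxes toward TPV - PPV
        ppv = ppv + go * np.maximum(dv, 0.0)  # GO-gated, rectified outflow
        trajectory.append(ppv.copy())
    return np.array(trajectory)  # PPV over time; movement stops as DV -> 0
```

The rectification ([DV]+) mirrors the model's assumption that each channel only drives movement in its preferred direction, so this one-dimensional sketch reaches targets beyond the start position and stops as the DV decays to zero.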

Figure adapted from Pouget et al. (2002). A) The experimental paradigm: participants fixated on a peripheral fixation point (FP) prior to making a movement to a central target. Targets were either visual (an LED), auditory (a sound from a speaker), or proprioceptive (the projected position of the participant's right foot). Note that, in all conditions, participants pointed towards the targets without actually touching them. B) Gaze-dependent reaching errors for each target modality. Movement endpoints to auditory, visual, and proprioceptive targets were all significantly biased by gaze direction ........ 30

Experimental setup and results of the visual target condition, adapted from Reichenbach et al. (2009). Participants made movements to visual targets that either remained stable or were shifted early or late in the trajectory. Movements without vision of the limb had larger endpoint errors and longer correction latencies than movements made with vision of the limb ........ 33

Differences between hand-centred and gaze-centred neural receptive fields (RFs). Hand-centred neural activations shift with the position of the hand in space, and gaze-centred activations shift with the position of the eyes. Both types of representations have been noted in the PPC. Adapted from Batista and Newsome (2000) ........ 37

The common eye-centred reference frame gain field hypothesis. A) Transformations in the PPC. The distribution of reach-related activity in the PPC is arranged from primarily eye-centred in the inferior parietal lobule (IPL) to heterogeneously limb-centred and eye-centred in the superior parietal lobule (SPL). For this reason, the PPC is viewed as the sensorimotor interface for transformations related to the planning and control of goal-directed actions. Sensory information about target and effector positions from pertinent sources modifies the reach-related activity of eye-centred neurons. Response patterns of neurons in the SPL suggest that information is then transformed directly into limb-centred coordinates. B) Modulations of reach-related neurons in the PPC. Neurons in the PPC fire with respect to changes in eye position. The gain of these responses can be modulated by visual information. Figure representations adapted from data described in Buneo and Andersen (2006) and Cohen and Andersen (2002) ........ 38

A) Adapted from Bernier et al. (2007). Participants exhibited smaller errors due to prismatic adaptation when reaching to a proprioceptively defined target than to a visual target. B) Adapted from Sarlegna et al. (2007). Movements to somatosensory targets were less curved than movements to visual targets in the same position. Both of these results suggest that different sensorimotor transformation mechanisms underlie movements to visual and somatosensory targets ........ 43

Outline of the experiments conducted in this thesis. Experiments A1 and A2 investigated the sensorimotor transformations used for planning movements to somatosensory targets, whereas Experiment B investigated the role of these transformations in the online control of goal-directed actions ........ 48

Figure outlining the extrapolation method for determining correction latency, as described by Oostwoud Wijdenes et al. (2014). The authors tested the ability of different computations of correction latency to estimate a known 100 ms latency to target perturbations of 1 to 4 cm in simulated movements of 300-500 ms. The methods included the threshold method, the confidence interval method, and the extrapolation method. Panel A) shows the main findings of Oostwoud Wijdenes et al. (2014) for the 3 cm perturbation simulations: the extrapolation method applied to averaged acceleration data yielded a very accurate estimate of correction latency across movement times. Panel B) shows the method applied to data from Study B of this thesis. The averaged acceleration profiles for the perturbation curve shown in panel B were derived from movements to somatosensory targets perturbed away from the body ........ 53
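As a concrete illustration, the extrapolation method can be sketched as below. This is my own reconstruction of the procedure this caption describes, not the thesis's code; the function name and the 25%/75% anchor fractions are assumptions. A line is fit through two points on the difference between the averaged perturbed and unperturbed acceleration traces and extrapolated back to zero; the zero crossing is the estimated correction latency.

```python
import numpy as np

def correction_latency(acc_unperturbed, acc_perturbed, dt=0.001,
                       lo=0.25, hi=0.75):
    """Estimate correction latency with an extrapolation method.

    acc_*: averaged acceleration traces, sampled every `dt` seconds,
           time-locked to the perturbation.
    """
    diff = np.abs(np.asarray(acc_perturbed) - np.asarray(acc_unperturbed))
    peak_idx = int(np.argmax(diff))
    peak = diff[peak_idx]
    # first samples (up to the peak) where diff crosses lo/hi of the peak
    t_lo = int(np.argmax(diff[:peak_idx + 1] >= lo * peak))
    t_hi = int(np.argmax(diff[:peak_idx + 1] >= hi * peak))
    # line through the two crossing points, extrapolated back to diff == 0
    slope = (diff[t_hi] - diff[t_lo]) / ((t_hi - t_lo) * dt)
    return t_lo * dt - diff[t_lo] / slope  # seconds after the perturbation
```

Because the line is anchored on the rising portion of the difference trace, the estimate is robust to the exact threshold used, which is the property the authors highlight relative to simple threshold methods.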

To calculate an ERP, continuous EEG data were preprocessed and artifacts due to external noise and interference were removed. Afterward, the data were segmented into defined time windows centered around the event of interest. Finally, by averaging all trials for a given condition, an ERP waveform could be observed for each electrode of interest (see below).
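The segment-and-average procedure described in this caption can be sketched for a single channel as follows. This is a minimal illustration, not the thesis's actual pipeline: the function name, sampling rate, and epoch window are hypothetical, and real analyses (e.g., in EEGLAB or MNE) add steps such as baseline correction and artifact rejection.

```python
import numpy as np

def compute_erp(eeg, event_samples, fs, window=(-0.1, 0.3)):
    """Segment continuous, preprocessed EEG around events and average.

    eeg           : 1-D array of artifact-free samples for one electrode
    event_samples : sample indices where the event of interest occurred
    fs            : sampling rate (Hz)
    window        : epoch limits in seconds relative to each event
    """
    pre, post = int(window[0] * fs), int(window[1] * fs)
    # Segment the data into fixed time windows centered on each event,
    # keeping only epochs that fall entirely within the recording
    epochs = [eeg[i + pre:i + post] for i in event_samples
              if i + pre >= 0 and i + post <= len(eeg)]
    # Averaging across trials attenuates activity that is not time-locked
    # to the event, leaving the event-related potential (ERP)
    return np.mean(np.asarray(epochs), axis=0)
```

Averaging across many trials cancels background EEG that is not time-locked to the stimulus, which is why ERP components such as the P100 and N150 only become visible after this step.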


Characteristic peaks of a visual evoked potential. A negative deflection, the N1, occurs at around 80 ms after the visual stimulation. The P1 occurs at about 100 ms, followed by the N2 at around 140 ms. Lastly, there is a P2 that occurs about 200 ms after the initial stimulation. For the EEG analyses in the present thesis, the peak-to-peak amplitude between the P100 and N150 was used for analysis (shown in red), as these were the most recognizable peaks across participants.

Activity associated with visual-sensorimotor processing was analyzed using the electrodes overlying the occipital cortex and the occipito-parietal junction of the left hemisphere (electrodes PO7, PO3, O1). These electrodes were selected based on findings from previous studies (Lebar et al., 2015).

Panel A) A drawn representation of the apparatus used in Experiment 1. Participants

sat facing an immersive display comprised of a computer monitor, a semi-reflective glass surface,

and a custom aiming apparatus (not to scale). Participants made movements from an unseen home

position (microswitch) to either visual targets projected onto the surface of the aiming console or

the perceived position of their fingers. In the somatosensory-target conditions, participants

positioned their target fingers beneath a plastic case and performed movements to the perceived

position of their fingers as if projected onto the case’s surface. Note that the visual items projected

on the semi-reflective glass were perceived by the participants as appearing on the table surface.

Panel B) A representation (not to scale) of the aiming surface in the visual target condition.

Piezoelectric buzzers were positioned to the left of the aiming surface, and provided the imperative

signals in the auditory-cued aiming conditions.

Panel A) Averaged normalized reach trajectories for all participants in each cue-target

condition (error patches represent the between-participant standard error of the mean). Panel B)

Average normalized directional error for each cue-target condition (error bars represent the

between-participant standard deviation). Participants' reaching errors were significantly more

influenced by gaze fixation position (i.e., Right Fix vs. Left Fix) in the auditory-cued target

conditions (-VIS and –SOMA) than in the vibrotactile-cued target conditions. Panel C) Mean

normalized reaction times for participants in each experimental condition (error bars represent the

between-subjects standard deviation). Reaction times of gaze-shifted movements were


significantly longer when participants aimed to auditory-cued somatosensory targets compared to

both auditory-cued visual targets and vibrotactile-cued somatosensory targets.

Panel A) A drawn representation of the experimental apparatus for Experiment 2 (not

to scale). Participants fixated on one of three fixation locations and began their movements from

the associated home position. In the auditory-cued somatosensory-target condition (represented

above), participants performed movements to one of the three middle fingers of their non-reaching

limb. Panel B) A representation of the aiming surface in the visual target condition. Participants fixated on one of three target positions and placed their fingers on the corresponding microswitch to begin each trial. Piezoelectric buzzers to the left again provided the imperative signals to initiate movement in both the AUD-VIS and AUD-SOMA conditions.

Grand average VEPs for each electrode in the AUD-VIS and AUD-SOMA conditions

(error patches represent the between-participant standard deviation). The peak-to-peak amplitudes

between P100 and N150 as determined by a current source density analysis were used for statistical

contrasts.

CSD normalized VEPs for the occipital (O1) and occipito-parietal (PO7) electrode

(error bars represent the standard error of the mean). For both electrodes, CSD-VEPs were significantly larger in the AUD-SOMA compared to the AUD-VIS condition.

Grand average source activity for each condition between the P100 and N150 latency

time. Source activity (color maps reveal activation levels) was localized in parietal and occipital

areas in both experimental and control conditions. Statistical contrasts (paired-samples t-tests,

alpha set to 0.05, t-value map indicates direction of effects) revealed significantly more activity in

the left parietal and parieto-occipital regions (as indicated by the shade of red) for the AUD-SOMA

experimental condition compared to the AUD-SOMA control condition.

A drawn representation of the experimental setup (not to scale) is shown in panels (a)

and (b). Panel (c) is a representation of the aiming console and stimuli for each target modality.

Participants sat comfortably facing the aiming apparatus in a completely dark room. A robotic

device was used to deliver target perturbations in both somatosensory (left finger) and visual


(LED) targets. Participants performed leftward arm movements to reach the position of the target

as if it was projected onto the aiming surface.

Examples of velocity profiles of the reaching hand and the robotic effector (Target) for

each perturbation time. Panel (a) shows a perturbation occurring before movement onset; panel (b)

shows a perturbation occurring ~100 ms after movement onset, and panel (c) shows a perturbation

occurring ~200 ms after movement onset.

The extrapolation method for determining the latency of online corrections. For each participant, average acceleration profiles along the direction axis were computed for both perturbation and no-perturbation trials. The acceleration difference between these profiles (Accel Difference) was then plotted to calculate correction latency. Correction latencies were computed by drawing a line (Extrapolation Line) between 75% and 25% of the maximum difference in the Accel Difference profile (Extrapolation points) and extrapolating the line to the first zero crossing. The time between the perturbation and the zero crossing was defined as the correction latency. Panel (a) shows this method applied to averaged data for somatosensory target perturbations, and panel (b) shows the method applied to averaged data for visual target perturbations.
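The latency computation described in this caption can be sketched numerically as below. This is a minimal sketch under stated assumptions: the function name and array inputs are hypothetical, and the published method was applied to averaged, filtered acceleration profiles rather than raw data.

```python
import numpy as np

def extrapolated_latency(accel_pert, accel_ctrl, t, t_perturb):
    """Correction latency via the extrapolation method
    (after Oostwoud Wijdenes et al., 2014).

    accel_pert / accel_ctrl : averaged direction-axis acceleration profiles
                              for perturbation and no-perturbation trials
    t         : sample times (s), same length as the profiles
    t_perturb : time of the target perturbation (s)
    """
    diff = accel_pert - accel_ctrl                 # Accel Difference profile
    i_peak = np.argmax(np.abs(diff))               # maximum difference
    peak = diff[i_peak]
    rising = diff[:i_peak + 1]                     # flank leading to the peak
    # Extrapolation points: samples closest to 25% and 75% of the peak
    i25 = np.argmin(np.abs(rising - 0.25 * peak))
    i75 = np.argmin(np.abs(rising - 0.75 * peak))
    # Line through the two points, extended back to its zero crossing
    slope = (diff[i75] - diff[i25]) / (t[i75] - t[i25])
    t_zero = t[i25] - diff[i25] / slope
    # Latency is the time from the perturbation to that zero crossing
    return t_zero - t_perturb
```

Because the line is fitted to the steep central portion of the difference profile, the onset estimate is less sensitive to baseline noise than a simple threshold crossing, which is the advantage reported for this method.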

Average reaching trajectories for each condition in the reaching protocol. Panels (a) and (b) depict perturbations occurring before movement onset. Panels (c) and (d) depict perturbations 100 ms after movement onset. Panels (e) and (f) depict perturbations occurring 200 ms after movement onset. Trajectories were normalized, with each point representing 2% of movement duration. Error bars indicate the between-subject standard deviation of spatial position.

(a) Direction constant error. Participants were more accurate when performing reaches

to somatosensory targets perturbed after movement onset compared to when performing reaches

to visual targets. When aiming to visual targets participants exhibited a larger under-correction

relative to when making movements to somatosensory targets. (b) Correction magnitude.

Participants exhibited larger corrections in response to somatosensory target perturbations than in

response to visual target perturbations at all perturbation times. Furthermore, for both modalities,

participants exhibited smaller corrections in response to perturbations at 200 ms than in response to perturbations occurring before or 100 ms after movement onset.


Correction latencies in response to target perturbations. Overall, participants corrected

faster for somatosensory target perturbations than for visual target perturbations. Furthermore, for

visual targets, correction latencies were longer in response to targets perturbed toward the body.


Reaction time of the robot's perturbation movement when signaled by motion tracking processed on the same computer.

Number of lost samples in 500 one-second recordings. Sixty percent of the data were lost samples, which severely decreased the amount and resolution of the usable recordings; 55% of the lost samples occurred during the robot's trajectory.

Differences between single and dual setups for the recording of perturbation data. The

dual station setup was found to be the best solution for both maximizing resolution of trajectory

data and reducing variability in the speed of perturbations.

Most reaction times for the dual-station setup fell within 60-75 milliseconds after the go signal. This reduction in perturbation variability provided a basis for the protocol.

Overall, the number of lost samples was reduced from 60% to 10%, with only 4% of lost samples occurring during the robot's movements.

Contrasts between the target modality differences in correction time and detection time. The difference between the two target modality conditions is significantly larger in correction time compared to detection time.

Analysis of trajectory deviations. Each curve represents a temporally normalized averaged trajectory for each condition used in the experiment. Error patches represent the between-subject standard deviation. Targets were either perturbed before, 100 ms, or 200 ms after movement onset, or were unperturbed (control). Participants adjusted their trajectory to somatosensory targets perturbed 100 ms after movement onset, but no differences between spatial trajectories were found for visual targets. Participants were capable of fully adjusting their movements to targets perturbed before movement start.


Grand average of the Laplacian-transformed VEPs for each electrode of interest in the

AUD-VIS and AUD-SOMA conditions. The peak-to-peak amplitudes between P1 and N1 were

used for the normalization and comparisons made in Experiment A1.

Adapted from Miles, W.R. (1930). Ocular dominance in human adults. The Journal of

General Psychology, 3, 412-430.


Preamble

The three experiments in this dissertation address critical gaps in the literature with regard to

movements to somatosensory targets. The overall goal of the thesis was to determine if the

transformation processes used for movements to somatosensory targets are similar or distinct from

those underlying movements to visual targets. The goal of the first study (Study A) was to

determine if the manner in which a somatosensory target is encoded affects sensorimotor

transformation processes prior to upper-limb reaches. The theoretical basis for this study was

derived from numerous sources, including: previous work on body representations; clinical work

on patients with autotopagnosia; and psychophysical experiments examining movements to body

parts. A review of the extant literature revealed that somatosensory targets could be encoded in a

visual, gaze-dependent, reference frame, or a non-visual, gaze-independent, reference frame.

Experiment A1 examined if the sensory signals used to identify the somatosensory target

location impacted the reference frame employed for movement planning. Participants made

reaching movements to somatosensory targets, which were either cued by an auditory or a

vibrotactile stimulus. Gaze positions were also systematically altered for each cue-target condition

to measure the degree of gaze-dependent encoding. Larger gaze-dependent biases were found for

movements to auditory-cued somatosensory targets compared to vibrotactile-cued somatosensory

targets. These results indicated that participants used an exteroceptive, visual body representation

to encode auditory-cued somatosensory targets, and an interoceptive, somatosensory body

representation to encode vibrotactile-cued somatosensory targets. Importantly, the results of

Experiment A1 also revealed that shifts in gaze positions had a larger effect on reaction times for

auditory-cued somatosensory targets than both auditory-cued visual and vibrotactile-cued

somatosensory targets. The larger gaze-dependent errors and increases in reaction times could

indicate that additional transformation processes were required for the visual remapping of

somatosensory targets prior to goal-directed actions.

Experiment A2 expanded on the findings of Experiment A1 by examining the neural

correlates of the sensorimotor transformations underlying movements to somatosensory targets

planned in a visual reference frame. It was found that there was an increase in cortical visual

information processing prior to reaches to auditory-cued somatosensory targets, compared to


auditory-cued visual targets. The sources of this facilitation were localized to occipital and

occipital-parietal areas. These findings suggest that remapping of somatosensory target positions

prior to reaching movements involves cortical networks associated with visual information

processing.

The goal of Study B was to examine the sensorimotor transformation processes during the

online control of movements to somatosensory targets. Previous studies examining online

adjustments have suggested that somatosensory information about the limb position must be

transformed into visual coordinates prior to engaging in online corrections. However, as Study A

revealed, these processes require additional time and may result in an increase in error. Thus, it

remains unclear if this visual transformation strategy is also employed for online control.

In Study B, participants made aiming movements, with no vision of their reaching limb, to

either an external visual target or a somatosensory target (the left index finger of their non-pointing

hand). On a proportion of trials, target positions were perturbed either before, or 100 ms or 200 ms

after movement start. It was found that participants made faster, and more accurate, corrections to

perturbed somatosensory targets than to perturbed visual targets. These findings suggest that non-visual sensorimotor transformations were performed when making online corrections to perturbed somatosensory targets.


Résumé

The literature on the multisensory integration processes (i.e., sensorimotor transformations) engaged before and during voluntary movements has relied mainly on visual targets. Given that the modality of a target influences the reference frame used to control a voluntary movement, the main objective of the three experiments in this thesis was to examine these sensorimotor transformation processes specifically for movements toward somatosensory targets. The first two experiments examined whether the sensorimotor transformations performed during movement planning are influenced by the reference frame used to encode somatosensory targets. The objective of the third experiment was to examine the sensorimotor transformations used for the online control of movements toward a somatosensory target. The results of the first two experiments indicate that even auditory signals can promote the use of a visual reference frame of the body to transform the position of somatosensory targets. These transformation processes may require additional processing and clearly engage visual and visuomotor cortical networks. Furthermore, the results of the third experiment indicate that corrections of movements toward perturbed somatosensory targets are initiated more quickly, and are more efficient and accurate, than corrections of movements toward visual targets. This third experiment also indicates that somatosensory targets are not necessarily transformed into a visual reference frame before an ongoing voluntary movement is corrected. Overall, the studies in this thesis showed that auditory signals can facilitate the encoding of somatosensory targets in a visual reference frame during the planning of voluntary movements, but that the time and resources required to encode somatosensory targets in this way may be too great for a visual reference frame to necessarily remain in use when correcting the trajectory of a voluntary movement.


General Introduction and Literature Review

General Introduction

Philosopher George Berkeley's new theory of vision, originally published in 1709, was one of the

first cohesive accounts of why and how we develop the ability to see and interact with the world.

In his New Theory of Vision, Berkeley argued that an intelligent being who possesses a perfect

sense of sight but who is devoid of the sense of touch “…could have no idea of a solid or quantity

in three dimensions which followeth from its [the intelligent being] not having any idea of

distance” (Berkeley, 1709, p.154). Berkeley made the assertion that humans develop the ability to

perceive 3-dimensional objects only when they gain the capability to interact with these objects in

meaningful ways through touch (see also Held & Hein, 1963). This assertion was later paralleled

by Ian Hacking (1983) in his work Representing and Intervening to resolve the ongoing debate

between realists and anti-realists with regard to scientific entities. To Hacking, a scientist learns

“…to see through a microscope by doing, not just by looking ” and just as a “…scuba diver learns

to see in the new medium of oceans by swimming around…” any “… new ways of seeing acquired

after infancy, involve learning by doing ” (Hacking, 1983, p. 136). This connection between action

and perception also appealed to contextualists, who sought to disentangle notions of separation between

thought and language. To contextualists, action, language, thought, and percepts could not be fully

understood without consideration of the context wherein they occurred. One of the strongest

champions of contextualism was John Dewey, who, in his seminal work published in 1925,

challenged the spectator theories of knowledge by arguing that we must not only understand

context to understand the mind but that the “mind is constantly adapting to the environment

through action” (Nerlich & Clarke, 1996, p. 125). Thus, to truly understand the depths of any

philosophy or object of human nature, we must understand the nuances of the time, and

importantly, the context wherein they came to fruition.

Overall, the purpose of this thesis was to examine the neuroscientific homologue of the

“action in context argument” as outlined by the contextualists. The series of experiments detailed

in this thesis employed behavioural and neurophysiological paradigms with the goal of examining

Page 23: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

5

the sensorimotor transformations during the preparation and control of movements to somatosensory

targets.

Physiological Systems and Models of Goal-directed Action

Many different physiological systems are involved in the production of goal-directed movements; the following section briefly surveys two of the main sensory systems that are considered in this thesis: the visual system and the somatosensory/proprioceptive system. Both systems are discussed with regard to their functional relevance for the planning and control of actions.

The Visual System

Visual System Neuroanatomy in Brief

Visual sensation begins at the retina, a thin sheet of densely packed neurons located in the posterior

part of the eye. The outermost layer of the retina consists of light-sensitive cells known as

photoreceptors, whose main function is to transform the refracted patterns of light into electrical

signals. There are two main classes of photoreceptors: cones and rods. Cones are short tapered

neurons that fire in response to high levels of illumination. The greatest concentration of cones on

the retina is found at 0 degrees eccentricity with respect to the fovea (i.e., foveola, see Figure 1).

As a result of their location and their frequency-dependent firing pattern, cones are functionally

implicated in both visual acuity and colour vision (Green, 1970; Meister & Tessier, 2013; Shlaer,

1937). In contrast, rods are distributed along the retinal periphery (see Figure 1), and peak rod

density occurs at 18 degrees with respect to the fovea (Purves et al., 2001). Rods are long

cylindrically shaped neurons that respond to low levels of illumination, and their firing frequencies

become saturated in bright-light conditions. Functionally, rods have been implicated in dim light

perception and in the detection of peripheral motion (Gilbert, 2013a; Meister & Tessier, 2013).

Signals from groups of rods and cones from different regions of the retina are transmitted

to the brain via two main types of ganglion cells: ON cells and OFF cells. These cells alter their

firing patterns depending on their exposure to light intensity: ON cells fire more rapidly when

illumination increases, whereas OFF cells fire more rapidly when illumination decreases. The

anatomical arrangement of these ganglia gives rise to retinal receptive fields. The retinal receptive

field of a ganglion cell is best defined as the map between the spatiotemporal patterns of light and

Page 24: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

6

the pattern of ganglion cell activations (Wienbar & Schwartz, 2018). In simpler terms, the

receptive field represents a region in visual space wherein the presence or change in a stimulus or

stimulus features can alter the response of the ganglion cell. Receptive field activation patterns

are linked to visual processes such as contrast perception, motion detection, and the recognition of

temporal characteristics of visual stimuli (Gilbert, 2013b).

Information gathered from these ganglion cells travels to the brain via multiple subcortical

pathways including: the superior colliculus (SC), a region in the midbrain associated with eye

movements (saccade generation and control), orienting behaviours, and attention (Becker, 1991;

Georgopoulos, 1990); the pretectum, another midbrain region associated with eye movements and

the pupillary light reflex (Gilbert, 2013a); and the lateral geniculate nucleus (LGN), a region of

the thalamus that projects to the visual cortex (Gilbert, 2013b; Meister & Tessier, 2013). The

information going through the LGN pathway is most associated with conscious perception of the

visual world and is the dominant pathway for the information used in the planning and control of

action.

Page 25: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

7

A depiction of the distribution of rods and cones along the surface of the retina. The density of rods, located at peripheral retinal eccentricities, is much greater than the density of cones, which are concentrated more centrally around the fovea. At approximately 20 degrees of visual eccentricity is the point at which ganglion cell axons exit the eye (known as the optic disk). The presence of the optic disk creates a blind spot in the visual field. This figure was adapted from Osterberg (1935).

Page 26: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

8

Construction of the Visual World

The visual system also relies on cortical visual areas to construct a representation of the visual

scene. Such processes are supported by the neural pathways that extend from the LGN to the visual

cortex.

Neurons in the LGN primarily project to the primary visual cortex (V1). Responses of

neurons in V1 have been associated with: object orientation (Hubel & Wiesel, 1968; Meister &

Tessier, 2013); depth perception (Barlow, Blakemore, & Pettigrew, 1967); the direction of

stimulus motion (Treue, 2001); and stimulus speed (Maunsell & Van Essen, 1983). Furthermore,

activation patterns in the receptive fields of V1 neurons, as a consequence of the alignment of their

projecting cells, reveal the existence of selective firing patterns for specific subtypes of the

aforesaid visual features (Gilbert, 2013b). For example, a population of neurons in the visual cortex

possessing an alignment of inhibitory and excitatory projections in a certain orientation will only

respond to stimuli that match their orientation (Hubel & Wiesel, 1962). This response selectivity

of visual receptive fields has also been noted for stimulus direction and speed (Maunsell & Van

Essen, 1983). By analyzing these selective firing patterns, it is possible to determine activity

correlated to intermediate visual perceptual features such as object contours, object motion, and

colour contrast (Gilbert, 2013b; Green, 1970; Maunsell & Van Essen, 1983).

Two Visual Streams

From V1, visual information travels through two separate but interconnected pathways (see

Budisavljevic, Dell’Acqua, & Castiello, 2018): the ventral visual pathway or ventral stream, and

the dorsal visual pathway or dorsal stream (Mishkin, Ungerleider, & Macko, 1983). Each stream

discussed below receives the same intermediate-level information (orientation-, speed-, and direction-tuned firing) from V1 neurons (see Figure 2). Both visual streams receive information about

object structure, object orientation, and object spatial location. What most differentiates the two

streams is how visual information processing within each stream relates to the goal of the observer

(Goodale & Milner, 1992; Mishkin et al., 1983).

The ventral stream is associated with object recognition and representation. Anatomically,

the ventral stream is composed of projections from V1 to the temporal lobe (see Figure 2). Overall,

the functional role of the ventral stream is to discern the characteristics of objects and spatial

Page 27: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

9

relationships by building perceptual representations. The representations formed in the ventral

stream allow us to identify and semantically describe relationships between objects observed in

the visual scene.

In contrast to the ventral stream, the dorsal stream uses visual information from V1 to

construct an action-centered representation of the world. To enter the dorsal stream, visual

information travels from V1 to the parietal cortex and then to the frontal lobe. The dorsal stream

is thus associated with the transformation of visual information that is relevant to movement goals

and the moving effector.

To construct a representation of the environment for use in goal-directed actions, both

visual streams work together. Simply put, the ventral stream is engaged to enable the identification

and recognition of possible goal objects in the visual environment and the dorsal stream uses visual

information about the size and shape of objects to guide skilled movements.

Page 28: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

10

Visual information enters through the occipital cortex and is then projected either through the parietal lobe (dorsal stream) or the temporal lobe (ventral stream). This figure was adapted from Milner and Goodale (1995).

Page 29: Manson Gerome A 201906 PhD thesis - University of Toronto T … · Manzone, Animesh Kumawat, John DeGrosbois, and Intishar Kazi for their help with numerous aspects of each project.

11

Vision for Goal-directed Action Planning and Control: Multiple-Processes Model

The discovery of the two visual streams was paramount to the understanding of the role of vision

in the planning and control of action. However, models describing visuomotor control have been

in development for well over a century. In this section, the development of the multiple processes

model of online control is briefly described with relevance to visual information use. In this thesis,

when movements were made toward visual targets as a baseline measure, only initial visual

information about the hand was provided (Study A). When movements to visual targets were used

to examine online sensorimotor transformations, no initial visual information about limb position

was provided (Study B). In all cases, no visual feedback about limb position was provided during

the trajectory. Thus, the discussion of the utility of visual information for each phase in the multiple

processes model focuses on visual target information.

Woodworth, in his seminal monograph (see Woodworth, 1899), forwarded a model which

proposed that goal-directed movements consisted of two phases: an initial impulse phase wherein

the limb is propelled toward the target; and a current control phase wherein visual feedback is used

to “home in” on the target. Woodworth also noted that fast movements (i.e., less than 400 ms in

duration), showed similar trajectory characteristics independent of visual feedback availability

(vision of the limb and target). This led to the idea that the central nervous system takes time to

perform visual-based trajectory corrections based on target information, although recent estimates

are much shorter (i.e., less than 100 ms depending on when vision is presented in the trajectory)

than the originally proposed 400 ms (Blouin, Teasdale, Bard, & Fleury, 1993; Howarth, Beggs, &

Bowden, 1971; Kennedy, Bhattacharjee, Hansen, Reid, & Tremblay, 2015; Tremblay, Hansen,

Kennedy, & Cheng, 2013; Zelaznik, Hawkins, & Kisselburgh, 1983).

Since the proposal of Woodworth’s (1899) model, experimental evidence has revealed

insights into the use of visual target information in each component. For example, with the

emergence of motor programming theory, came the idea that the initial impulse phase was based

on a motor program specified before movement onset based on initial hand position and visual

target location (Schmidt, Sherwood, Zelaznik, & Leikind, 1985). Online trajectory amendments

were thought to be the result of separate motor programs initiated during the movement. Thus, the


efficacy of visual-based corrections was hypothesized to be related to the time required to organize

online motor programs (Howarth et al., 1971; Keele & Posner, 1968).

Perhaps the most crucial update to the Woodworth’s original model was proposed by

Meyer and colleagues in 1988. Meyer’s optimized control model was based on the impulse

variability theory of motor control (Schmidt et al., 1985). The authors surmised that the trajectory

specified in the initial impulse phase is programmed optimally such that no correction is required

in order to successfully land on the target. However, because of neuromotor noise, and the

variability associated with the impulse, endpoints would not always land directly on the target, but

have a normal distribution around the target’s centre. Similar to the previous models of motor

control, the optimized control model purported that any necessary corrective movements would

occur at the end of the initial impulse to compensate for endpoints that may land outside of the

target boundary.

While the above-mentioned models increased the explanatory power and mechanistic

understanding of the two-component model, these models fell short of accounting for a variety of

experimental findings with regard to visually-guided action. One of the main findings that this

model failed to address was corrections to early perturbations prior to the end of the initial impulse

(Goodale, Pélisson, & Prablanc, 1986; Hansen, Tremblay, & Elliott, 2008; Saunders & Knill,

2003). Goodale et al. (1986) performed a seminal study wherein participants reached without

online vision of their reaching limb to targets that changed location during the orienting saccade,

and prior to the onset of the hand movement (i.e., during saccadic suppression, see Bridgeman,

Hendry, & Stark, 1975). The authors found that participants corrected for shifts in target position

even though they could not consciously detect that the target position had changed. Moreover, the

kinematic features of the corrected trajectories could not be distinguished from those of

uncorrected trajectories (see also Prablanc & Martin, 1992). Together, these results showed that

changes in target position could be corrected for early in the movement even without vision of the

limb. The authors concluded that early rapid online corrections were likely based on updated visual

target location and limb position information derived from somatosensory and efferent information

(see also Prablanc, Desmurget, & Gréa, 2003).


The Multiple-Processes Model

To account for early online corrections, the multiple-processes model was proposed by Elliott et

al. (2010). Similar to Woodworth’s original model, the multiple-processes model outlines two

types of control. The first type of online control is impulse-regulation and the second is limb-

target regulation. Impulse-regulation occurs early in the movement and is based on egocentric

representations including comparisons of the visual target location and the efference copy. Limb-

target regulation occurs later in the movement and is based on allocentric information such as the

comparison between visual information about the target position and proprioceptive information

about the hand position (see also Elliott et al., 2017).

Overall, the multiple-processes model is based on the premise that visually-guided

movement unfolds in the following manner: First, prior to movement initiation, the initial

movement vector is programmed based on expected sensory consequences and prior knowledge.

Second, when the signal to initiate the movement is received, the muscle commands and an efference

copy are produced. The efference copy is used to predict the sensory consequences of the

movement based on the motor command. When the limb begins to move, impulse regulation

begins, and corrections occur by evaluating the difference between the observed and expected

(predicted) sensory consequences. Early in the trajectory (prior to peak velocity), the visual system

gathers sensory information to be used in limb-target regulation. When the primary sub-movement

is completed (i.e., approximately at peak deceleration) limb-target regulation begins based on

sensory information obtained throughout the trajectory (see Figure 3).

Overall, the information obtained and constructed by the visual system is used in two ways

for goal-directed actions. The first is to define and plan a trajectory, and the second is to control

an ongoing movement.


Figure 3. The multiple-processes model of limb control. The planning stage occurs before movement start. The initial movement vector is planned based on sensory information, a priori knowledge from previous trials, and the predicted sensory consequences. After peak acceleration (PAcc), impulse-regulation processes begin. Corrections during impulse regulation are based on the differences between expected and obtained sensory consequences. Impulse-regulation processes continue until after peak deceleration (PDecc) and the end of the primary sub-movement. Limb-target regulation processes start at PDecc and continue until movement end. Limb-target correction processes are based on visual and proprioceptive feedback obtained from prior to peak velocity (PV) to the end of the trajectory. Figure adapted from Elliott et al. (2010).


The Somatosensory System

As mentioned above, the dorsal visual stream constructs a representation of the external

environment based on one’s intended actions. These visual representations are then used to amend

the limb’s trajectory at both the early and later phases of a visually-guided movement. Without

vision of the limb, other sensory systems capable of monitoring limb position are necessary in order

to engage in corrective mechanisms. One sensory system that could be effective for the formation

of action-centered representations is the somatosensory system. For experiments in this thesis and

other work (see: Section 2.4.3), somatosensory information also served to define target positions.

The following discussions of somatosensory information will be focused on two modalities of

sensory information pertaining to somatosensation: tactile information emerging from cutaneous

receptors in the skin and proprioception emerging from receptors in the muscles and joints.

Somatosensory Receptors

Somatosensation, similar to kinesthesia (Sherrington, 1916) and often referred to as

proprioception, is the sense of one’s own body position and body motion in space. There has been

some debate as to the functional necessity of certain somatosensory receptors for the monitoring

of kinematic and kinetic parameters associated with changes in body position (for review see

Proske & Gandevia, 2012). Nonetheless, the prominent discourse in the present literature is that

somatosensory information emerges from a combination of inputs from numerous sensory organs

including: skin receptors, joint receptors, muscle spindles, and Golgi tendon organs.

It has been demonstrated that skin afferent receptors play a substantial role in one’s sense

of body position (Edin & Abbs, 1991; Edin, 2004). For example, both the slow-acting and fast-

acting mechanoreceptors in non-glabrous skin have been shown to respond to voluntary finger and

hand movements. Fast-acting receptors produce rapid graded responses depending on the extent

of the movement-induced skin stretch at proximal joints. Slow-acting receptors respond

directionally by increasing discharge during proximal-joint flexion and decreasing discharge

during proximal-joint extension. Furthermore, it has been demonstrated that the activity of neurons

responsive to skin stretch in the primary somatosensory cortex is correlated with their activity


levels when encoding kinematic information about joint positions (Cohen, Prud’homme, &

Kalaska, 1994). These results suggest that skin receptors could contribute to limb localization

during both static postures and limb movements.

Joint receptors are located within joint capsules and provide information about the

articulation of a joint and its associated ligaments. The range of activation of joint receptors is

limited as most receptors respond to end ranges of flexion and extension in a non-direction specific

manner (Burgess & Clark, 1969). However, it has been shown that, in the absence of other

proprioceptive receptors (muscle spindles, cutaneous receptors, and Golgi tendon organs), joint

receptors respond in a position dependent manner to low velocity finger movements (Gandevia &

McCloskey, 1976). Further research in humans is required to truly determine the extent to which

joint receptors contribute to proprioception; however, as it stands, the contribution of joint

receptors appears to be minor.

Muscle spindles are receptors that are located within muscle tissue. Muscle spindles

respond to changes in muscle length, and thus provide information about the position, velocity,

and acceleration of the limb during both self-generated and externally-triggered movements

(Proske & Gandevia, 2012). Muscle spindles’ signals project to the somatosensory cortex and the

activity of these spindles can help distinguish various kinematic features across a wide range of

joint angles. The role of muscle spindles as the primary proprioceptive sensory organ has been

supported by work examining joint replacements (Goodwin, McCloskey, & Matthews, 1972),

dorsal column lesions (Lee & Tatton, 1975), and tendon vibration (Redon, Hay, & Velay, 1991).

Golgi tendon organs are proprioceptive receptors located in the muscle-tendon junction.

These organs were originally thought to respond to changes in muscle force (Proske & Gandevia,

2012). However, mounting experimental evidence shows that Golgi tendon organs may not sense

tensile forces but rather respond to changes in tendon length (Kistemaker, Van Soest, Wong,

Kurtzer, & Gribble, 2012). Despite these new findings, it is agreed that Golgi tendon organs, along

with muscle spindles are the main proprioceptive receptors responsible for the accurate

performance of goal-directed actions in the absence of vision.


Somatosensory Information Processing: From Spinal Cord to the Cortex

Proprioceptive and tactile information emerging from mechanoreceptors in the innervated regions

of the skin (dermatomes) and muscles (myotomes) are relayed to the central nervous system via

peripheral nerve fibres that vary in diameter, myelination, and conduction velocity. Information

from most tactile and proprioceptive afferents are relayed via large diameter (12 – 20 µm) or

medium diameter (6 – 12 µm) myelinated dorsal root ganglion cells (Type I and II afferents). It is

hypothesized that the fast conduction velocities of the larger diameter cells (70- 120 m/s) make it

possible for this information to be obtained during the course of slow and rapid actions. Nerve

fibers carrying proprioceptive and tactile information ascend to the cerebellum via the

spinocerebellar tract or to the brainstem through the dorsal column of the spinal cord (the dorsal-medial-lemniscal pathway; see Figure 4).
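To give a rough sense of why these conduction velocities make afferent information usable even during rapid actions, the conduction delay can be estimated as path length divided by velocity. The sketch below is purely illustrative: the ~1 m receptor-to-cortex path length is an assumed value, not a measurement.

```python
# Illustrative estimate of afferent conduction delay: time = distance / velocity.
# The 1.0 m path length is an assumption for a hand-to-cortex pathway.

def conduction_delay_ms(path_length_m: float, velocity_m_per_s: float) -> float:
    """Return the conduction delay in milliseconds."""
    return path_length_m / velocity_m_per_s * 1000.0

PATH_M = 1.0  # assumed receptor-to-cortex path length (metres)

for velocity in (70.0, 120.0):  # large-diameter fibre range cited above
    print(f"{velocity:5.0f} m/s -> {conduction_delay_ms(PATH_M, velocity):.1f} ms")
```

Even at the slower end of this range the delay is on the order of 10-15 ms, an order of magnitude shorter than the visual correction latencies discussed earlier.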

Fibres from the lower body regions ascend in the medial portion of the dorsal column,

while fibres from the upper regions ascend in the more lateral portions. The rostral third of the

dorsal column contains neurons responsible for the relay of proprioceptive information whereas

and the middle relays primarily tactile and cutaneous information. These tracts either ascend

ipsilaterally and terminate in the cerebellum (spinocerebellar tract) or terminate in the lower

medulla of the brainstem where, after synapsing, the fibers cross and ascend through the medial

lemniscus. Due to the the crossing of fibers at the sensory decussation (see Figure 4), afferent

information from proprioceptive and tactile receptors in the right side of the body is processed in

the left cerebral hemisphere.

Information from the dorsal– medial – lemniscal pathway enters the thalamus before being

relayed to cortical areas. It has been argued that there are two main regions of somatosensory

processing in the thalamus (Kaas, Merzenich, & Killackey, 1983). The ventral posterior nucleus

that receives information from cutaneous inputs, and the ventral posterior superior nucleus that

receives inputs from fibers carrying proprioceptive information. The ventral posterior area sends

projections to primary somatosensory area 3b (i.e., Broadmann’s areas) and the ventral posterior

superior area sends proprioceptive information primarily to area 3a. The degree of specialization

of these two somatosensory processing areas remains an active area of investigation (Disbrow,

Roberts, & Krubitzer, 2000; Disbrow, Roberts, Poeppel, & Krubitzer, 2001; Hinkley, Krubitzer,


Nagarajan, & Disbrow, 2007). Although previous studies have reported that integration of

proprioceptive and tactile information occurs in other areas (e.g., areas 1 and 2), work in non-

human primates has revealed that neurons in both areas 3a and 3b respond to both proprioceptive and

tactile stimulation (Kim et al., 2015).

How somatosensory information conveyed through these pathways gives rise to a sense of

body position during passive and dynamic actions is still an active area of inquiry for researchers

interested in human movement science. Currently, there is no agreed-upon model with the

explanatory power to account for all possible functions of the somatosensory system. Converging

fields of research have, however, produced models that are capable of explaining how

somatosensory cortical networks, which are active during movement planning and movement

control, might interact with proprioceptive receptors for the production and correction of action

(Bullock, Cisek, & Grossberg, 1998; Cisek, Grossberg, & Bullock, 1998).


Figure 4. The dorsal-medial-lemniscal pathway for the transmission of proprioception and touch information to the somatosensory cortex. Information from muscle spindles and cutaneous receptors enters through the dorsal roots of the spinal cord, then ascends ipsilaterally to the medulla. These fibers synapse, then cross (decussate) at the medulla and ascend toward the ventral posterolateral nucleus of the thalamus and then to the cortex (adapted from Noback, Ruggiero, Demarest, & Strominger, 2005).


Somatosensory Representations During Action – Vector Integration to Endpoint Model

One sensorimotor model pertinent to this thesis, which describes the role of somatosensory

information and cortical areas in goal-directed actions, is the Vector Integration To Endpoint

(VITE) model proposed by Bullock and colleagues in 1996. This model outlines how the

somatosensory system participates in the generation and control of reaching movements under

various task constraints and is primarily based on single and population neuron activity recordings

of somatosensory regions in the cortex.

According to the VITE model, before movement begins, a difference vector between the estimated position of the limb at the target and the limb's current position is calculated. This

difference vector could be based on predictions or simulations (Blakemore, Wolpert, & Frith,

2000), and is characterized by a buildup of activity in somatosensory portions of area 5 based on

input from other parietal areas. Recall that the parietal areas receive projections from the dorsal

stream and could be a possible area for interactions between the visual and somatosensory systems

(Churchland, Santhanam, & Shenoy, 2006; Reichenbach, Thielscher, Peer, Bülthoff, & Bresciani,

2014). At the same time, other neural populations within area 5 monitor the limb’s position in

space. The activity of these monitoring neurons is tonic in nature as they receive information about

static limb positions from sensory fibers (i.e., Type Ia and Type II muscle spindles and Golgi

tendon organs).

Information about the limb’s position and the prediction of the location of the target

contribute to the development of a movement command. In the VITE model, the output of the

movement is regulated by a velocity vector characterized by phasic-tonic neural activity in area 4.

The output of this velocity vector is modulated by a volitional “GO” signal, most likely emerging

from the higher centers of the brain (i.e., frontal regions). The neurons that generate the velocity

vector project to alpha motor neurons, which control extrafusal (i.e., force generating) muscle

fibers. When the command is sent, and the limb begins to move, information received from the

sensory fibers modulates firing patterns until the limb position vector matches the desired output.

This cortico-spinal VITE model has been tested using simulations, and these parameters

were sufficient to explain proprioceptive adjustments in the contexts of limb perturbation, target


perturbation, and Coriolis fields (Bullock, Cisek, and Grossberg, 1998). Although not

comprehensive, this model is indicative of the possible role proprioception has in the planning and completion of goal-directed actions (see Figure 5).
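The core VITE loop described above can be sketched as a discrete-time simulation. This is a deliberately simplified illustration, not the published model: the GO gain, time step, and first-order dynamics are arbitrary assumptions, and the gamma/alpha motor-neuron stages are collapsed into a single integration step.

```python
import numpy as np

def vite_reach(tpv, ppv0, go=1.0, dt=0.01, steps=500):
    """Toy discrete-time sketch of the VITE loop (simplified, not the full model).

    tpv  : target position vector (TPV)
    ppv0 : initial present position vector (PPV)
    go   : scalar GO signal gating the velocity command
    """
    tpv = np.asarray(tpv, dtype=float)
    ppv = np.asarray(ppv0, dtype=float)
    trajectory = [ppv.copy()]
    for _ in range(steps):
        dv = tpv - ppv        # difference vector: target vs. current position
        dvv = go * dv         # GO-gated desired velocity vector
        ppv = ppv + dvv * dt  # outflow command updates the position vector
        trajectory.append(ppv.copy())
    return np.array(trajectory)

# Reach from the origin to a target 30 cm away and 10 cm up (metres).
traj = vite_reach(tpv=[0.30, 0.10], ppv0=[0.0, 0.0])
print(traj[-1])  # the PPV has converged close to the TPV
```

Because the velocity command is proportional to the remaining difference vector, the simulated reach decelerates as it approaches the target and stops when the difference vector vanishes, mirroring the model's account of movement termination.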


Figure 5. The Vector Integration to Endpoint (VITE) model of proprioceptive online control. Before movement starts, a difference vector (DV) is calculated from a comparison of the target position vector (TPV) and the current position of the arm (PPV). Movement commences with a voluntary and scalable GO signal that scales a velocity-based command (DVV), which activates dynamic gamma motor neurons (γd). The DVV produces an outflow position vector (OPV), which activates static gamma (γs) and alpha motor neurons (α). The activity of the OPV, which is driven by movement and altered by sensory information from type Ia and II afferents, changes the PPV as arm position changes. This feeds back into the desired velocity vector and alters the movement of the arm. Movement ends when the PPV matches the DV. This figure was adapted from Bullock et al. (1996).



Sensorimotor Transformations for Movement Planning and Control

Both the multiple-processes and VITE models provide a framework for interpreting how visual

and somatosensory information is used during movement. However, neither model fully explains how these two senses might interact during the planning and control of goal-directed

actions. To address this question, the following section outlines current theories and experimental

evidence investigating multisensory integration processes as they are related to the performance

of goal-directed actions. Furthermore, whether these processes are flexible and adjust to

environmental constraints is also discussed.

It is important to acknowledge that late action selection and early programming

mechanisms also influence sensory integration processes, neural activities, and behavioral performance related to goal-directed aiming movements (Allport, 1987; Cisek, 2007; Hommel,

2004, 2009; Prinz, 1997; Welsh, 2011). Although these concepts are relevant, to focus this

introduction, only sensory integration processes related to the later stages of movement planning

(i.e., after the action required to obtain the goal is already determined) were reviewed.

Multisensory Combination and Integration for Movement: General Principles

Research on multisensory integration during movement has traditionally focused on how sensory

information obtained from multiple sources is converted and used to guide action. In the context

of action planning and control, sensory information is often discussed with regard to frames of

reference and sensorimotor transformations. Simply put, a sensory reference frame is a coordinate

system wherein the locations or quantities of objects are specified (e.g., Alais, Newell, &

Mamassian, 2010; Ernst & Bülthoff, 2004; Pritchett, Carnevale, & Harris, 2012). These reference

frames are associated with a specific sense or sensory organ and are linked to the receptive fields

of the neurons they subserve. For example, visual receptive fields are anchored to the neuronal cell

features in the retina (see Section 2.2.1 above). Thus, if retinal sensory information is used to

localize positions or features of objects, information emerging from these receptors is said to be


mapped in a gaze-centred or retinotopic reference frame1. In addition, changes in the position or

orientation of the sensory organ or effector will change how locations or features of the perceived

objects are encoded (Batista and Newsome, 2000; see Figure 8). For this thesis, a visual or gaze-

centred reference frame was used to describe stimuli or movements if they were coded relative to

gaze-position. A non-visual or somatosensory reference frame was used to describe stimuli or

movements coded independent of gaze-position. For some studies below, the results are indicative

of limb-centred coding, which means that stimuli or movements were encoded with respect to

effector positions (see Figure 8).
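The difference between gaze-centred and limb-centred coding can be made concrete with a toy coordinate example. Everything below is an illustrative assumption (the positions, the 2-D frame, and the single-subtraction conversion); it is not a model from the literature, only a demonstration that the same physical target yields different codes depending on the chosen origin.

```python
import numpy as np

# Illustrative 2-D positions in a common body-centred frame (metres).
target = np.array([0.20, 0.40])
gaze   = np.array([0.05, 0.45])   # current fixation point (assumed)
hand   = np.array([-0.10, 0.15])  # current effector position (assumed)

gaze_centred = target - gaze   # code in a gaze-centred/retinotopic map
limb_centred = target - hand   # code in a limb-centred map

print(gaze_centred)  # same physical target...
print(limb_centred)  # ...two different action-relevant codes

# Shifting gaze changes the gaze-centred code but leaves the limb-centred
# code untouched, which is why gaze-dependent reach biases are taken as
# evidence for gaze-centred coding.
new_gaze = gaze + np.array([0.15, 0.0])
print(target - new_gaze)
```

In this sketch, a change in the orientation of the sensory organ (here, gaze position) re-codes the target in one reference frame while leaving the other unchanged, mirroring the point made above about receptive fields anchored to their sensory organs.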

The production of goal-directed actions requires the transformation of information from

many different reference frames into commands to be sent to the muscles to produce movements.

Prior to the motor command (or when a correction is needed), previous research has suggested that

both the limb and the target must first be represented in the same sensory map (Jeannerod, 1991;

Reichenbach, Thielscher, Peer, Bülthoff, & Bresciani, 2009). These sensorimotor transformations

are thought to involve numerous multisensory integration and combination processes.

Multisensory Combination and Integration

Signals acquired from numerous sensory sources must be combined and integrated prior to being

used for sensorimotor transformations. Multisensory combination is the process of extracting

complementary (but not redundant) information from sensory afferences that are coded in different

reference frames. For example, when determining the position of a reachable object, multisensory

combination processes could involve the evaluation of: the eye position relative to the head; the

head position relative to the trunk; visual information about the goal object in relation to the hand;

hand position relative to the trunk; and the learned experience of the actor (Ernst & Bülthoff,

2004).

Sensory integration processes refer to the incorporation of these various sources of

1 Gaze-centred and retinotopic reference frames may not always refer to the same coordinate system. A gaze-centred reference frame could refer to a reference frame that considers the position of the eyes in the head, whereas a retinotopic reference frame could refer to a reference frame based solely on retinal activations. Thus, although related, they are not always the same (see Cohen & Andersen, 2002).


information with the goal of forming a useable percept or representation. Presently, the most

accepted model of multisensory integration suggests that each source of sensory information is

integrated based on a maximum likelihood or minimum variance estimate. That is, each sensory

source is converted and given a weight based on its variance. This weighting determines the degree

to which it contributes to the final percept (Ernst & Bülthoff, 2004). This model has been useful

for predicting performance in visual-haptic (Ernst & Banks, 2002) and audio-visual experimental

paradigms (Shams, Ma, & Beierholm, 2005), and has received support from neurological data

examining representations of sensory probability distributions in neural populations (Fetsch,

Pouget, DeAngelis, & Angelaki, 2012).
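The minimum-variance estimate described above has a standard closed form: each unimodal estimate is weighted by its inverse variance (its reliability), and the integrated variance falls below that of either cue alone. The sketch below implements that textbook rule; the specific position and variance values are illustrative assumptions.

```python
import numpy as np

def mle_integrate(estimates, variances):
    """Minimum-variance (maximum-likelihood) cue combination.

    Each unimodal estimate is weighted by its reliability (1/variance);
    the integrated variance is lower than any single cue's variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    combined_var = 1.0 / reliabilities.sum()
    return combined, combined_var

# Illustrative example: vision places the target at 10.0 (variance 1.0),
# proprioception at 12.0 (variance 4.0) -> vision is weighted 4:1.
pos, var = mle_integrate([10.0, 12.0], [1.0, 4.0])
print(pos, var)  # ≈ 10.4, 0.8
```

Note that the integrated variance (0.8) is lower than either unimodal variance, which is the model's core prediction and the pattern reported in the visual-haptic work cited above.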

In the case of goal-directed actions, sensory integration processes could be viewed as the

transformation of position signals derived from visual and somatosensory senses about the location

of the arm, and the location of a goal object, to construct or amend a movement trajectory. When

considering this process, often an intermediate step is proposed whereby sources are converted

into a common reference frame (Ambrosini et al., 2012; Beurze, Van Pelt, & Medendorp, 2006;

Cohen & Andersen, 2002; Jeannerod, 1991; Mueller & Fiehler, 2014b; Pouget, Ducom, Torri, &

Bavelier, 2002). It is hypothesized that the establishment of this common reference frame involves

designating a space common to all inputs and transforming the locations of sensory signals into a

common coordinate system. One goal of this thesis was also to examine how task requirements

and changes in sensory information about the target alter these transformation processes prior to

and during goal-directed actions.

Multisensory Combination and Integration During Movement Planning and Control

Presently, there is a debate within the literature as to the composition (or even existence) of a

common reference frame for sensorimotor transformations (Beurze et al., 2006; Cohen &

Andersen, 2002; Feldman & Levin, 1996; McGuire & Sabes, 2009; Pritchett et al., 2012). On one

side, it has been argued that the sensory signals used for defining the initial movement vector, as

well as those used for movement control are mapped exclusively in gaze-centred coordinates. In

contrast, a growing body of literature has also built the case for the use of mixed or somatosensory

reference frames.


The Common Reference Frame: Visual Reference Frame for Movement Planning and Control

The idea of a common reference frame for motor planning and control can be traced back to early

experimentalists such as Karl Lashley, who, in his work The Problem of Serial Order in

Behaviour, posited that the central nervous system’s ability to effectively control rapid actions was

made possible by comparing sensory signals in “reference to a space system” (Lashley, 1951, p

126). Furthermore, he theorized that “perceptions from distance receptors, vision, hearing, and

touch are also constantly modified and referred to the same space coordinates” (Lashley, 1951, p

126). Lashley, a behaviourist by training, had constructed this idea using his own observations

from his experiments on cats, humans, and non-human primates (Lashley, 1917). He was

particularly fascinated by the ability of humans and animals with damaged somatosensory

afferents to still generate accurate movements at varying speeds. Although Lashley did not directly

implicate a specific sensory modality, his observations that movements can be made without

proprioception served as the basis for future work on visual-sensorimotor transformations

(Desmurget, Pélisson, Rossetti, & Prablanc, 1998). Since Lashley’s initial observations, the idea

of a common visual reference frame for action related sensory information processing has received

copious amounts of support from both psychophysical experiments examining human behavior,

and neurophysiological experiments examining single and population neuronal activities

(Ambrosini et al., 2012; Cohen & Andersen, 2000; Crawford, Medendorp, & Marotta, 2004;

Henriques, Klier, Smith, Lowy, & Crawford, 1998; Pouget et al., 2002; Stetson & Andersen, 2014;

Stricanne, Andersen, & Mazzoni, 1996; Zipser & Andersen, 1988).

Behavioural Evidence for a Gaze-Centred Reference Frame for Movement Planning and Control

Evidence for the use of a gaze-centred reference frame for movement planning and control is

primarily derived from studies looking at the effect of target-eccentricity and gaze-direction on

movement endpoint errors. For example, Bock (1986) noted that when participants performed

reaching movements to visual targets presented in the retinal periphery, without vision of their

reaching limb, they “reached too far to the right if targets were presented in the right hemispace,

and too far to the left if targets were presented in the left hemispace” (Bock, 1986, p. 478).

Moreover, in a second experiment, wherein targets were only presented in the right hemispace (as


participants maintained central fixation), the author noted a stable and systematic bias in horizontal

reach endpoints towards the right of the targets. Because this bias was related to gaze-direction,

the author concluded that upper-limb reaching movements were planned and controlled in gaze-

centred coordinates (see also Bard, Hay, & Fleury, 1985; Prablanc et al., 1979).

Henriques et al. (1998) provided further evidence for a gaze-centred reference frame during

movement planning when they sought to determine if eye movements after target disappearance

affected aiming performance. In their experiments, participants were presented with a target

located at 0 degrees with respect to the participant’s right eye. After the target disappeared,

participants performed a saccade to a point located 15 degrees to the left of the original

(disappeared) target location and subsequently performed a reaching movement to the memorized

target position with the upper-limb. The authors noted that reaching endpoints were biased toward

the right of the target, replicating what was found by Bock (1986). These authors concluded that,

in the primary stages of motor planning, the information used to construct the initial movement

vector is stored in a dynamic gaze-centred map and this vector is remapped after each eye

movement (see also Medendorp & Crawford, 2002).
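The updating scheme proposed by Henriques et al. (1998) can be illustrated with a toy computation (a sketch of the idea only; the function name and values are illustrative, not from the original study): after each saccade, the remembered target's retinal coordinate is shifted opposite to the eye movement, so the reach vector is always expressed relative to current gaze.

```python
def remap_target(target_retinal_deg, saccade_deg):
    """Update a remembered target's gaze-centred (retinal) coordinate
    after an eye movement: the target shifts opposite to the saccade.
    Negative values = leftward, positive = rightward (degrees)."""
    return target_retinal_deg - saccade_deg

# Henriques et al. (1998)-style scenario: the target is foveated (0 deg);
# after it disappears, the eyes make a 15-deg leftward saccade.
print(remap_target(0.0, -15.0))  # 15.0 -> remembered target now lies 15 deg right of gaze
```

An eccentric stored representation of this kind predicts gaze-dependent endpoint biases like those reported by Bock (1986), even though the target was foveal while it was visible.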

Evidence for gaze-centred mapping during movement planning and control has also been

demonstrated in experiments looking at the effect of target-to-body versus target-to-gaze position

(Ambrosini et al., 2012). In these experiments, movement endpoint variability and the degree of endpoint bias increased significantly with the retinal eccentricity of the target, and this relationship was independent of body position. Furthermore, when individuals performed a visuomotor task,

differences between perceived and actual body position at one spatial location led to movement

control errors representative of a gaze-centred remapping of the entire workspace (Thompson,

Byrne, & Henriques, 2014; Vetter, Goodbody, & Wolpert, 1999). These findings suggest that

movement control processes also occur in accordance with a gaze-centred reference frame.

Although much of the work mentioned above focused on visual targets, there have been experiments examining how other sensory modalities could be mapped in visual coordinates. For

example, building off of the result that arm movements made to either seen or memorized visual

targets are affected by gaze position (e.g., Henriques et al., 1998; Medendorp & Crawford, 2002),

Pouget et al. (2002) sought to determine if this gaze-centred mapping could be used for targets of


different sensory modalities (see Figure 6). To this end, the authors conducted an experiment

wherein human participants performed aiming movements to visual targets, auditory targets (the

perceived source of projected sounds), proprioceptive targets (the right foot), and imaginary targets.

Similar to the previous experiments, the retinal locations of the targets were altered by changing

the ocular fixation points and movements were performed without vision of the limb. It was found

that changes in fixation position resulted in endpoint bias for all types of targets, regardless of their

sensory modality (see also Blangero, Rossetti, Honoré, & Pisella, 2005; Jones & Henriques,

2010). Thus, the authors concluded that prior to goal-directed actions, the target location and initial

movement vector are represented in gaze-centered coordinates.


Figure adapted from Pouget et al. (2002). A) The experimental paradigm: participants fixated on a peripheral fixation point (FP) prior to making a movement to a central target. Targets were either visual (an LED), auditory (sound from a speaker), or proprioceptive (the projected position of the participant’s right foot). Note that, in all conditions, participants pointed towards the targets without actually touching them. B) Gaze-dependent reaching errors for each target modality. Movement endpoints to auditory, visual, and proprioceptive targets were all significantly biased by gaze direction.


Behavioural Evidence for Visual Space Remapping After Target Perturbations: Target Perturbation Paradigms

Experiments examining the impact of the relative positions of gaze and the body on reaching

performance show that a visual reference frame may be used for the control of ongoing actions

(e.g., Ambrosini et al., 2012; Thompson et al., 2014). Other experiments have also assessed online

control more directly using target perturbation paradigms (also known as the double-step

paradigm). In the double-step paradigm the initial target location of a goal-directed reaching

movement is shifted (i.e., perturbed) either during movement planning (e.g., Flash & Henis, 1991)

or during online control (e.g., Komilis, Pélisson, & Prablanc, 1993; Reichenbach et al., 2009;

Sarlegna & Blouin, 2010). In the former scenario, participants can adjust their movement plan to

successfully complete the task, whereas in the latter scenario participants must compute and

implement changes to their hand trajectory as the ongoing movement unfolds. Previous studies

have hypothesized that the computation of online trajectory amendments is based on visual

information about both the new target location and reaching limb position (Desmurget et al., 1999;

Komilis et al., 1993; Reichenbach et al., 2009). Thus, when there is no vision of the reaching limb,

sensorimotor transformations would be required to obtain a visual estimate of limb position prior

to the implementation of corrections.

Evidence that online corrections occur in a visual reference frame is derived from studies

employing the double-step paradigm without vision of the reaching limb. Komilis et al. (1993)

performed an experiment wherein participants reached with or without vision of their limb to

targets that were stable (75% of trials), or perturbed at movement onset or peak velocity (25% of

trials). In contrast to other double-step paradigms (e.g., Goodale et al., 1986; Pélisson, Prablanc,

Goodale, & Jeannerod, 1986), participants in Komilis et al. (1993) were fully aware that the target

had changed position. In response to perturbations at movement onset, the authors found no

differences in movement endpoint errors between trials with the limb visible and trials with no

vision of the reaching limb. In contrast, for perturbations at peak velocity, the authors found that

participants only partially corrected for the perturbation. Furthermore, participants corrected to a

greater extent (i.e., 38% of perturbation distance) when they had vision of their limb compared to

when they had no vision of their reaching limb (i.e., 28% of perturbation distance). The authors


stated that, when there was vision of the limb, movement corrections were programmed based on

the retinal information about the new target location and current limb position. However, when the

limb was not visible, corrections were programmed based on retinal information about the target

location and visual estimates of the limb position derived from somatosensory and efferent

information. The authors concluded that the processes required to transform somatosensory information into a visual reference frame prior to these computations take time, leaving less time available to implement trajectory corrections. This increase in error was therefore expected, as movement times did not differ between vision conditions (times ranged from 460 to 516 ms for the perturbed trials).
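Partial corrections of this kind are conventionally expressed as a correction gain: the fraction of the target jump that the movement endpoint actually covers. A minimal sketch (the 4 cm jump and endpoint values are illustrative, not data from Komilis et al., 1993):

```python
def correction_gain(endpoint, original_target, perturbed_target):
    """Fraction of a target perturbation compensated by the movement
    endpoint: 0.0 = no correction, 1.0 = full correction."""
    perturbation = perturbed_target - original_target
    return (endpoint - original_target) / perturbation

# A target jumps 4 cm at peak velocity; endpoints land only part-way.
print(correction_gain(endpoint=1.52, original_target=0.0, perturbed_target=4.0))  # 0.38 (limb visible)
print(correction_gain(endpoint=1.12, original_target=0.0, perturbed_target=4.0))  # 0.28 (limb occluded)
```

Komilis et al. (1993) reported gains of roughly 0.38 with vision of the limb and 0.28 without, for perturbations applied at peak velocity.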

The findings that somatosensory information must be transformed into a visual reference

frame during online control were also supported by the work of Reichenbach et al. (2009). In their

study, participants performed reaching movements to visual targets located 20 cm away from a

home position using a 4 degree-of-freedom haptic manipulandum. The experimental protocol

consisted of perturbations to both the visual targets (i.e., shift in target position 7.5 degrees to the

left or to the right of the original target) and the reaching limb (a 10 N force to the left or to the right applied to the arm by the manipulandum), which occurred either early (1 cm from the starting position) or late (5 cm from the starting position) in the trajectory. Correction latencies to perturbations were

computed using both kinematic measures and muscle activations and were compared between

limb-visible and limb-occluded trials. In support of the conclusions of Komilis et al. (1993),

Reichenbach et al. (2009) found that correction latencies in response to early visual target

perturbations were longer when movements were performed without vision of the limb compared

to when they were performed with vision of the limb (see Figure 7). The authors concluded that

this increase in correction time was associated with the remapping of somatosensory information

onto a visual reference frame prior to corrections.

The conclusions of the previous studies suggest that online control takes place in a visual

reference frame. One limitation of the extant literature, however, is that this type of paradigm has

seldom been used to examine movements to targets of other sensory modalities. This question will

be addressed in Study B of this thesis which investigates the online control of movements to

perturbed somatosensory targets.


Experimental setup and results of the visual target condition, adapted from Reichenbach et al. (2009). Participants made movements to visual targets that either remained stable or were shifted early or late in the trajectory. Movements without vision of the limb had larger endpoint errors and longer correction latencies than movements made with vision of the limb.


Neurophysiological Evidence for a Gaze-Centred Reference Frame for Movement Planning and Control

Neurophysiological evidence supporting the use of a gaze-centred reference frame for goal-

directed actions has been derived from the observed activity of single neurons and neuronal

populations prior to, and during reaching tasks. Overall, this literature indicates that the patterns

of neuronal activity in parietal, premotor, and motor areas support the use of a visual reference

frame for movement planning and control.

The main cortical area implicated in the transformation of sensory signals for planning and

control of goal-directed actions is the posterior parietal cortex (PPC). Most studies on the motor

related activities of the PPC have found evidence of gaze-centred mapping. The seminal studies

touting the PPC as the neurophysiological conduit of the visual reference frame also revealed that

the gain of PPC neuron activity is modulated by changes in eye position (Andersen &

Mountcastle, 1983; Mountcastle, Lynch, Georgopoulos, Sakata, & Acuna, 1975). Specifically,

PPC neurons were found to fire if the angle of an illuminated target was in the preferred direction

of the retinal receptive field. Also, when the eyes moved, the intensity of firing was altered as a

function of the target’s distance from neurons’ preferred directions. Furthermore, it has been

demonstrated that, prior to reaching movements to visual targets, activation patterns of the PPC are

altered by gaze position (Batista & Newsome, 2000). This gaze-dependent gain shift in neuronal

activity has also been shown for movements to auditory targets and in delayed reaching paradigms

(Gillman, Cohen, & Groh, 2005; Stricanne et al., 1996). In agreement with the aforesaid

behavioural experiments (Section: 2.4.3.1.2), these findings suggest that similar networks are used

for gaze-centred transformations prior to reaches toward auditory, visual, and memorized target

locations. Furthermore, network analyses of neuronal population activity in the PPC have

demonstrated that it is possible to derive target locations in the external world based on neural gain

field activities (Burnod et al., 1999; Zipser & Andersen, 1988). The above-mentioned results,

together with PPC’s role in the planning and control of saccades and limb movements (for reviews

see Andersen et al., 1997; Cohen & Andersen, 2002), have led researchers to conclude that PPC

gain responses represent a gaze-centred transformation network used for the planning and control

of goal-directed actions.
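The gain-field idea can be sketched with a toy neuron model (an illustration of the concept, not a reimplementation of Zipser and Andersen's network; the tuning width and gain slope are arbitrary assumptions): the response is a Gaussian retinal tuning curve multiplicatively rescaled by eye position, so eye position changes the amplitude of the response without shifting its preferred retinal location.

```python
import math

def gain_field_response(target_retinal, preferred_retinal, eye_pos, gain_slope=0.02):
    """Toy PPC neuron: Gaussian retinal tuning (sigma = 10 deg) multiplied
    by a planar eye-position gain -- the classic gain-field form."""
    tuning = math.exp(-((target_retinal - preferred_retinal) ** 2) / (2 * 10.0 ** 2))
    gain = 1.0 + gain_slope * eye_pos  # eye position rescales the response amplitude
    return tuning * gain

# Same retinal stimulus at two eye positions: the preferred retinal
# location is unchanged, but the response amplitude differs.
print(gain_field_response(5.0, 5.0, eye_pos=-20.0))  # ~0.6
print(gain_field_response(5.0, 5.0, eye_pos=+20.0))  # ~1.4
```

Because amplitude covaries with eye position while retinal tuning stays fixed, a downstream readout of many such neurons can recover the target's head-centred location (retinal position plus eye position), which is how Zipser and Andersen (1988) interpreted their trained network.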


Gaze-centred activations in PPC have also been hypothesized to drive other areas

associated with movement planning. Although studies examining neural activity in premotor and

motor cortex have revealed that their receptive fields code positions in hand-centred coordinates

(see Battaglia-Mayer, 2003; Bremner & Andersen, 2012; Burnod et al., 1999), researchers have

found that both eye position and retinal limb position do modulate neuronal firing in some

premotor areas (Boussaoud & Bremmer, 1999; Rozzi, Ferrari, Bonini, Rizzolatti, & Fogassi,

2008). For example, Boussaoud et al. (1998) examined the effect of gaze position and retinal target

location on dorsal premotor cortex activity as monkeys made movements to visual targets. It was

found that dorsal premotor activity was modulated principally by three factors: limb movement

direction, retinal target location, and eye position in their orbit. These results were corroborated

by other experiments examining similarities in activation patterns between the PPC, ventral

premotor, and prefrontal cortex (Boussaoud, Barth, & Wise, 1993; Mushiake, Tanatsugu, & Tanji,

1997). In addition, the discovery of interactions with other cortical areas during movement

planning as well as the discovery of kinematic-feature specific coding in PPC neurons (Chen,

Reitzen, Kohlenstein, & Gardner, 2009) have led to the idea that PPC serves as a sensorimotor

interface.

Support for PPC as a gaze-centred sensorimotor interface for movement related

sensorimotor transformations comes primarily from data about regional differences in PPC neuron

activity and virtual lesion studies. As mentioned above, PPC neural activity in response to different

target modalities is representative of a gaze-centred reference frame. Theoretical and

computational models propose that all sensory information important for the movement enters the

PPC via the superior colliculus (SC) and extrastriate visual areas (Andersen, Essick, & Siegel, 1985; Buneo &

Andersen, 2006; Zipser & Andersen, 1988). Subsequently, this sensory information is used to

adjust the gain of dorsal PPC neurons in the inferior parietal lobule (IPL) and intraparietal sulcus

(IPS) before being transformed into more hand-centred coordinates in the superior parietal lobule

(SPL).

Consistent with the above-mentioned empirical evidence, movement-related activity in IPL and IPS neurons is found to code target and effector positions in gaze coordinates (Andersen et al., 1997; Cohen & Andersen, 2002). This activity integrates sensory information from multiple sources that is pertinent to the reaching movement (Andersen et al., 1997). As information


travels to the SPL, two different types of cells are noted. First, in the dorsal areas of the SPL, closer

to the IPS, are neurons with firing patterns similar to the firing patterns of gaze-centred neurons

mentioned above (see Figure 5 in Buneo & Andersen, 2006). In contrast, neurons in the ventral

SPL appear to code effector and target positions similar to the hand-centred neurons of premotor

and motor cortex (Ashe & Georgopoulos, 1994; Bremner & Andersen, 2012; Fabbri, Caramazza,

& Lingnau, 2010). The anatomical arrangement of these cells, their interactions with premotor and

motor areas, as well as the notable absence of cells with mixed reference frames argue for a direct

transformation of gaze-tuned representations of target and hand positions to motor representations

for action (Andersen et al., 1985; Andersen et al., 1997; Batista & Newsome, 2000; Buneo &

Andersen, 2006; Cohen & Andersen, 2000). Furthermore, this direct transformation hypothesis is

supported by the role of the PPC in the online control of reaching based on forward models and

sensory feedback. It has been demonstrated that PPC lesions lead to deficits in predicting arm position

(Wolpert, Goodbody, & Husain, 1998) and virtual lesions to the PPC induced by TMS result in

online control impairments when participants perform reaches in the double-step paradigm

(Desmurget et al., 1999).

Overall, the work presented above argues for a gaze-centred reference frame for movement

planning and control. Sensory information from many sources is used to build a map of the

environment in visual coordinates. The gain of neuronal activity related to the visual mapping of

sensory information may be modified by inputs from other sensory modalities (see Figure 9).


Differences between hand-centred and gaze-centred neural receptive fields (RFs). Hand-centred neural activations shift with the position of the hand in space, and gaze-centred activations shift with the position of the eyes. Both types of representations have been noted in the PPC. Adapted from Batista and Newsome (2000).


The common eye-centred reference frame gain field hypothesis. A) Transformations in the PPC. The distribution of reach-related activities in the PPC is arranged from primarily eye-centred in the inferior parietal lobule (IPL) to heterogeneously limb-centred and eye-centred in the superior parietal lobule (SPL). For this reason, the PPC is viewed as the sensorimotor interface for transformations related to the planning and control of goal-directed actions. Sensory information about target and effector positions from pertinent sources modifies the reach-related activities of eye-centred neurons. Response patterns of neurons in the SPL suggest that information is then transformed directly into limb-centred coordinates. B) Modulations of reach-related neurons in the PPC. Neurons in the PPC fire with respect to changes in eye position. The gain of these responses can be modulated by visual information. Figure representations adapted from data described in Buneo and Andersen (2006) and Cohen and Andersen (2002).


The Common Reference Frame: Somatosensory Reference Frames for Movement Planning and Control

As outlined above, there is ample evidence supporting the existence of a visual reference frame

for sensorimotor transformations occurring prior to, and during goal-directed actions. A growing

body of evidence has called into question the idea that sensorimotor transformation processes are

performed exclusively in this way. Based on the maximum likelihood model of sensory integration

briefly described in Section 2.4.2 (see also Ernst & Banks, 2002), this emerging body of literature proposes that the sensorimotor transformation strategies employed by the central nervous system

are dependent on both the availability of sensory information and the relevance of this information

to the planning and completion of the action (Burnod et al., 1999; McGuire & Sabes, 2009;

Saradjian, 2015; Sober & Sabes, 2005).
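The maximum-likelihood model invoked here (Ernst & Banks, 2002) combines redundant estimates by weighting each cue by its inverse variance, which minimizes the variance of the fused estimate. A minimal sketch with made-up numbers:

```python
def mle_combine(estimates, variances):
    """Inverse-variance-weighted (maximum-likelihood) cue combination.
    Returns the fused estimate and its variance, which is always lower
    than the variance of any single cue."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Vision places the hand at 10.0 cm (variance 1.0); proprioception says
# 12.0 cm (variance 4.0). The fused estimate favours the reliable cue.
position, variance = mle_combine([10.0, 12.0], [1.0, 4.0])
print(position, variance)  # ~10.4, ~0.8
```

On this view, down-weighting an unavailable or degraded modality (e.g., an occluded limb) falls naturally out of the same rule, which is how models such as Sober and Sabes (2005) treat the relevance of each sense to the task.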

One of the first studies to examine the relative importance of retinal versus other sensory

signals for motor control was an observational study conducted on behaving monkeys (Cohen,

1961). In this experiment, monkeys had either their extra-ocular muscles severed and had their

ciliary muscle paralyzed, or had their neck muscles paralyzed, before being observed in an

interactive environment. Cohen observed how the loss of these different types of sensory

information affected their ability to interact with their environment. Somewhat to the author’s surprise, monkeys with severed extra-ocular muscles showed no severe deficits in behaviour, while

monkeys lacking neck proprioception demonstrated severe motor control deficits akin to those of

serious vestibular damage. The conclusion of this study was that information about the location of the head relative to the body, provided by neck proprioceptive sensations, contributed significantly

to behaviour, and extra-ocular (or eye) signals may play less of a crucial role than previously

thought.

The idea that somatosensory information could be as crucial as eye position information

for behaviour was later built on by studies on goal-directed reaching. These studies have

demonstrated that somatosensory information is important for limb movement control processes in visually guided actions (Ghez, Gordon, Ghilardi, & Sainburg, 1995). In that study, neurologically intact participants and participants with large-fibre neuropathy performed aiming movements to targets

varying in amplitude and direction. Vision of the reaching limb was also precluded systematically

for each target, while limb kinematics and movement performance variables were recorded.


Patterns of movement errors, limb accelerations, and trajectory profiles revealed that vision

contributed more to movement direction planning. In contrast, somatosensory information was

used to update the internal model of the limb position. Based on these results the authors proposed

a model wherein target positions are defined primarily in visual coordinates (an interpretation

influenced by the neurophysiological work described in Section 2.4.3.1.3 above), and initial limb

position is primarily defined in somatosensory coordinates. Furthermore, the authors stated that

differences in error patterns between patient groups and vision conditions suggest that both sensory

modalities contribute to online control processes.

The proposition that modifying vision and proprioception produced modality-specific

changes in movement errors and trajectory kinematics gave rise to the hypothesis that each sense

may have differing effects on sensorimotor transformations (McGuire & Sabes, 2009; Sober &

Sabes, 2005). Sober and Sabes (2005) conducted a seminal study to specifically test this

hypothesis. The authors employed a reaching task to visual and proprioceptive targets and

systematically altered proprioceptive and visual feedback of the limb’s position. To ascertain the

effect of target modality on sensorimotor transformation processes, two error variables were

computed. The first was a movement vector error that represents the error in the computed vectors

of the visual target and initial hand position. The second was an inverse model error that

represented the differences in planned direction of the movement converted into muscle or joint

based motor commands (see Sober & Sabes, 2005 for details). Performance data from multiple

experiments were used to construct a mathematical model that predicted the contributions of each

sense to the measured error outcomes. Vision of the limb was found to more accurately predict

movement vector error performance for aiming movements to visual targets, whereas limb

proprioceptive sensory information had a stronger predictive power for movements to

proprioceptive targets. These results contrast with the visual reference frame hypothesis, which would predict that vision should play an equally important role for both types of targets.
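The two-stage weighting scheme can be caricatured as follows (a sketch under the assumption of simple linear cue weighting at each stage; the weights shown are illustrative, not fitted values from the paper): vision and proprioception are combined with one set of weights when computing the movement vector and with a different set when converting that vector into joint-based commands.

```python
def planned_estimate(x_visual, x_proprio, w_visual):
    """Linear combination of visual and proprioceptive estimates of hand
    position, with a stage-specific visual weight w_visual in [0, 1]."""
    return w_visual * x_visual + (1.0 - w_visual) * x_proprio

# Conflicting hand-position estimates (cm) under a visual shift.
x_visual, x_proprio = 10.0, 12.0
# For reaches to visual targets, vision dominates the movement-vector stage,
# while proprioception contributes more to the inverse-model stage:
print(planned_estimate(x_visual, x_proprio, w_visual=0.8))  # ~10.4 (movement vector stage)
print(planned_estimate(x_visual, x_proprio, w_visual=0.3))  # ~11.4 (inverse model stage)
```

Fitting stage-specific weights of this kind to error data is, in outline, how Sober and Sabes (2005) inferred that the weighting shifts toward proprioception when the target itself is proprioceptive.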

Overall, the above results suggest that early sensorimotor integration processes can be

modulated by task constraints, and perhaps a different type of transformation is used when

performing movements to proprioceptive targets.


Behavioural Evidence for a Somatosensory Reference Frame in Movement Planning and Control

Since Sober and Sabes (2005), an accumulating body of evidence has emerged supporting the

existence of an alternative reference frame for sensorimotor transformations. Much of this support

is drawn from the observed differences between movements made to visual and somatosensory

targets. Overall, these studies provide further evidence that sensorimotor transformations used for

movements to somatosensory targets are coded in a somatosensory reference frame, and that this

coding could be a strategy to reduce neuromotor noise (Bernier, Gauthier, & Blouin, 2007;

Sarlegna & Bernier, 2010; Sarlegna, Przybyla, & Sainburg, 2009; Sarlegna & Sainburg, 2007).

Further support for sensorimotor transformations in a non-visual reference frame can be

taken from experiments looking at adaptations and limb kinematics. For example, in Bernier,

Gauthier, & Blouin, (2007), two groups of participants underwent a prismatic adaptation protocol

where they practiced aiming movements to visual targets. After the adaptation period, one group

of participants performed a post-test using only visual targets, while the other group was tested

using peripheral visual targets and a somatosensory target (i.e., the fingertip position of the non-

reaching hand). The authors observed a prismatic after-effect for movements to visual targets but

found no evidence of such a shift when participants performed post-test trials to the somatosensory

target (see Figure 10 A). The authors concluded that separate representations could be used for

each target modality. Thus, the sensorimotor transformations used for somatosensory targets

were not biased by the visual prismatic adaptation protocol.

Studies that altered visual and proprioceptive information about initial limb position also

reported that sensorimotor transformations to somatosensory targets occur in a non-visual

reference frame. In Sarlegna and Sainburg (2007), two groups of participants performed reaching movements to either proprioceptive (i.e., again the fingertip of the non-reaching hand) or visual targets. The experimenters varied the visual position of the reaching limb before movement onset, creating a visual-proprioceptive mismatch. Amplitude and direction errors as well as

movement kinematics were computed to gauge the contributions of each modality to the motor

plan. When reaching to visual targets, movement distance error was dependent on initial visual

information but not proprioceptive information. In contrast, for movements to proprioceptive

targets, it was found that visual information contributed less to movement distance errors than


proprioceptive information. Overall, these results contrast with the classical two-stage model of

motor planning where visual information is used to plan the vector and proprioception is used to

adjust the visual gain response (see “Visual Reference Frame” discussions above in Section

2.4.3.1.3). Instead, these results lend further support for the existence of a different, possibly more

direct, sensorimotor transformation process for movements made to somatosensory targets.

One reason for these differential transformation strategies could be to reduce the noise that

emerges from reference frame conversions. During motor planning, different sources of sensory

information are represented in many reference frames and conversion from one frame to another

could incur a time cost and induce errors, thus hampering performance. This hypothesis was tested

by having individuals perform reaching movements to either proprioceptive or visual targets

without vision of their reaching limb (Sarlegna et al., 2009). Inverse dynamics analysis of shoulder

and elbow torques was used to assess if the transformation of information between reference

frames affected performance. For movements to visual targets, participants exhibited more curved

movement trajectories compared to movements to proprioceptive targets. Also, inverse dynamics

analyses revealed that, for visual targets, initial shoulder and elbow muscle torques counteracted a straight movement direction. In contrast, movements to proprioceptive targets showed less curvature and lower overall muscle torques around both joints (see Figure 10B). These results indicate that coordination patterns are more efficient for movements to proprioceptive targets performed

without vision of the reaching limb. Overall, the authors concluded that sensorimotor

transformations between different reference frames produce errors in motor planning.


A) Adapted from Bernier et al. (2007). Participants exhibited lower errors due to prismatic adaptation when reaching to a proprioceptively defined target than to a visual target. B) Adapted from Sarlegna et al. (2007): movements to somatosensory targets were less curved than movements to visual targets in the same position. Both of these results argue that different sensorimotor transformation mechanisms underlie movements to visual and somatosensory targets.


The Common Reference Frame: Neurophysiological Evidence for Somatosensory Reference Frames for Movement Planning and Control

Ample support for sensorimotor transformations using non-visual reference frames has also

emerged from experimental neurophysiology. Studies implementing a wide range of techniques

including virology, immunohistochemistry, electroencephalography (EEG), functional magnetic

resonance imaging (fMRI), and transcranial magnetic stimulation (TMS), have all uncovered

possible networks capable of performing planning and control processes in somatosensory

coordinates. Overall, these studies argue that sensorimotor transformation processes during

movement planning and control can occur in a non-visual reference frame.

Anatomical investigations of the brains of monkeys and cats were the first to suggest the

existence of a network for sensorimotor transformation using somatosensory information (Babb,

Waters, & Asanuma, 1984; Jones & Powell, 1969; Jones & Powell, 1968; Landgren, Silfvenius, &

Wolsk, 1967). Stimulations, lesions, and categorizations of cellular architecture revealed that the

somatosensory cortex, motor cortex, and the PPC were interconnected and functioned reciprocally

to produce behaviour (Jones & Powell, 1969). It was hypothesized that information obtained by

the somatosensory areas from thalamocortical neurons could be directly projected to the PPC

(Babb et al., 1984; Landgren et al., 1967). These connected areas could “well be part of a short

latency feedback of joint position information to motor cortex for the production of movements”

(Babb et al., 1984, p.483; see also Disbrow et al., 2000, 2001; Schmahmann & Pandya, 1990).

From the aforesaid studies, however, it was not possible to state the exact nature of these

connections because sensory regions of the thalamus were viewed as heterogeneous. Thus, it was

unclear whether networks carrying somatosensory information, either from the somatosensory

cortices or thalamus, had direct projections to movement planning areas (PPC). As a result, as

discussed above (Section 2.4.3.1.3), the interactions between proprioceptive information and

neural activity in the PPC during sensorimotor transformations were hypothesized to be

exclusively through visual gain field adjustments (Andersen et al., 1997).

Since the gain field hypothesis, experiments using immunohistochemistry have provided

compelling evidence for the direct transmission of somatosensory signals to the PPC via

thalamocortical neurons and neurons in somatosensory cortical areas (Prevosto, Graf, & Ugolini,


2010, 2011). To study these projections, researchers injected the PPC of macaque monkeys with a

viral (i.e., rabies) neural tracer (i.e., cholera toxin B). A monoclonal antibody2 directed at the rabies

virus was then used to visualize the neural pathways connected to each area. It was revealed that

areas of the PPC, specifically the ventral lateral (LIPv) and medial intraparietal areas (MIP),

received disynaptic inputs directly from somatosensory thalamo-cortical neurons. These neural

projections were also similar to those observed in somatosensory areas, providing evidence for the

direct transmission of information about neck and arm proprioception to the PPC. In addition, the

MIP also received inputs from somatosensory areas associated with the processing of

proprioceptive signals from muscle spindles and joint receptors (Prevosto et al., 2011). Taken

together with cytoarchitectural studies revealing somatotopic arrangement for somatosensory

signals in PPC (Rozzi et al., 2008), these results support the existence of a PPC network with the

physiological features to directly integrate proprioceptive signals.

The existence of a specific PPC network for the transformation of somatosensory signals

is also supported by studies looking at brain activity in humans performing reaches to

somatosensory targets. Bernier and Grafton (2010) examined the possible functions of this network

by looking at how different PPC regions (i.e., the precuneus vs. the parieto-occipital junction)

respond to movements toward somatosensory targets. In addition to the sensory modality of the

target, the authors varied target location with respect to both body and gaze position. Neural

responses were measured by fMRI to determine what networks were active for each target location

change. Results indicated that the anterior precuneus was activated to different extents

depending on target modality. Precuneus activations were greater for changes related to gaze

position when visual targets were present, and greater for changes related to body positions when

proprioceptive targets were present. Areas of the premotor and somatosensory cortices also

showed a greater activation for body-position changes when proprioceptive targets were presented.

The authors concluded that these results could represent interactions between PPC and

2 Monoclonal antibodies are proteins produced by lymphocytes that bind foreign protein molecules, such as those produced by viruses (see Alberts, Johnson, & Lewis, 2002)


somatosensory areas to facilitate body-centred transformations during aiming movements to

proprioceptive targets.

Further support for a network facilitating non-visual transformations in PPC was provided

by experiments using brain stimulation during reaching movements. For example, it has been

shown that TMS applied to the dorsal lateral PPC prior to reaching to memorized targets disrupts

initial estimates of unseen hand position rather than visual estimates of the reach vector (Davare,

Zénon, Pourtois, Desmurget, & Olivier, 2012; Vesia, Yan, Henriques, Sergio, & Crawford, 2008).

Furthermore, TMS delivered to a particular region of the PPC (i.e., the posterior mIPS), when vision

of the hand was unavailable, impaired the efficacy of somatosensory-based online corrections, but

not visual-based online corrections (Reichenbach et al., 2014).

Overall, the experiments presented above argue for a network capable of using

somatosensory information for sensorimotor transformations during the planning and control of

goal-directed actions.

Literature Review Conclusions, Gaps, and Further Questions

To summarize, both visual and somatosensory information contribute to the planning and control

of goal-directed actions. Sensory receptors from both senses are capable of coding limb and target

positions. Although theoretical models based on feedback and feedforward mechanisms have been

proposed to account for experimental observations, no cohesive model presently exists.

An important remaining point of contention in motor control studies concerns the frame of

reference in which somatosensory information is processed during the planning and online control

of movement. This ongoing debate is based on observations that somatosensory targets can be

encoded in both visual and non-visual reference frames. The dominant hypothesis regarding the

selection of frame of reference posits that all movements are planned and controlled in visual

coordinates (see Section 2.4.3.1). The main underlying assumption of this scenario is that

somatosensory inputs from the reaching limb need to undergo a series of transformations prior to

movement onset. The alternative hypothesis is that the encoding of somatosensory information is

flexible and depends on the sensory context and task goals (see Section 2.4.3.2).


The goal of the research presented in this thesis was to shed light on the ongoing debate

regarding the sensorimotor transformation processes underlying the planning and control of

movements directed towards somatosensory targets. In these experiments, participants were asked

to reach to specific fingers of the opposite hand (i.e., somatosensory targets) that were identified

either by tactile or auditory cues. In the first experiment, we used an established method to

determine if participants planned their movements using an exteroceptive (i.e., visual) or

interoceptive body representation by measuring gaze-dependent reaching errors. It was

hypothesized that if an exogenous cue (i.e., auditory) was used to signal the somatosensory target

locations, movements would be planned in a visual reference frame. In the second experiment, the

cortical mechanisms underlying the remapping of the somatosensory target position into retinal

coordinates were examined. It was hypothesized that the neural activities underlying sensorimotor

transformations into a visual reference frame would require greater visual processing in areas

involved in visuomotor transformations and gaze-dependent movement coding (i.e., PPC, lateral

occipital cortex) compared to conditions where no conversion of somatosensory information is

required (e.g., reaches to auditory-cued visual targets).

The third experiment of the present thesis examined the reference frame used for the online

control of movements toward somatosensory targets. This question has been much less

investigated than those related to the sensorimotor processes underlying movement planning. As

discussed in Section 2.4.3.1.2, the online control of actions has been considered in the literature to

be performed in a visual reference frame. However, the existence of fronto-parietal networks

capable of relaying somatosensory information to areas associated with online control (e.g., PPC;

Desmurget et al., 1999; Prablanc et al., 2003), and the existence of fast correction mechanisms

(e.g., Scott, 2004), lead to the possibility that movements to somatosensory targets can be

controlled in a non-visual reference frame. To test this hypothesis, a study was conducted wherein participants made reaching movements to visual and somatosensory targets that were displaced after movement onset.


Outline of the experiments conducted in this thesis. Experiments A1 and A2 investigated the sensorimotor transformations used for planning movements to somatosensory targets, whereas Experiment B investigated the role of these transformations in the online control of goal-directed actions.


General Methodology

Participants

The number of participants for each study was selected based on a power analysis done in

G*Power. Mean and median (see Schimmack, 2012) population effect sizes were derived from a

literature review of relevant studies (see Appendix 1 for further descriptions and analyses). For

every experiment, participants were right-handed, and had normal or corrected-to-normal vision.

Informed consent was obtained from each participant prior to participation. Depending on

the location of the experiments, a local ethics committee at either Aix-Marseille University or the

University of Toronto approved the experimental protocol.

General Procedures

The apparatuses and materials used varied substantially by experiment; thus, each is described in

detail in the methods section presented in each study. In general, all tasks, to some extent, involved

an upper-limb reaching movement. The involvement of motor networks in higher cognitive

processes, sensory processes, and the continuous and parallel features of the activations underlying

reaching movements, make upper-limb reaches ideal for the study of complex phenomena,

specifically with regard to sensorimotor integration (Allport, 1987; Bekkering & Neggers, 2002;

Cisek & Kalaska, 2005; Craighero, Fadiga, Rizzolatti, & Umiltà, 1999; Hommel, 2004).

Competition between choices (Cisek & Kalaska, 2005; Welsh, Elliott, & Weeks, 1999), spatial

recognition and attention (Craighero, Fadiga, Rizzolatti, & Umiltà, 1999; Tipper, Lortie, & Baylis,

1992), and even language processing capabilities (Song & Nakayama, 2009) can be distinguished

by looking at trajectories mapped onto Cartesian space. Furthermore, analysis of the kinetic and

kinematic features of reaching behaviours can delineate the contributions of planning and online

control processes during action (Chua & Elliott, 1993; Elliott, Helsen, & Chua, 2001; Elliott et al.,

2010; Flash & Hogan, 1985; Heath, 2005).


Behavioural Variables Recording and Analysis

Data about the reaching trajectory were collected either with an Optotrak Certus (Northern Digital

Inc., Waterloo, ON, Canada) sampling at 200 Hz, or a Flock of Birds electromagnetic sensor

(Ascension, Burlington) sampling at 100 Hz. Data from both devices were analyzed using custom

Matlab (The Mathworks Inc.) scripts and custom software (Analyse, Marcel Kaszap, Canada).

Temporal Measures

Reaction time (RT) and movement time (MT) were the main behavioural temporal variables

computed in this thesis. RT was defined as the time from the movement go signal to the start of the limb movement. Movement start was identified either as the first of two consecutive samples in which the resultant velocity of the limb exceeded 30 mm/s, or as the sample at which a microswitch was released (see experimental methods for further details). MT was calculated as the time between movement start and movement end, with movement end defined as the first sample at which the limb velocity fell below 30 mm/s.
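The threshold rules above can be sketched as follows. This is an illustrative Python implementation rather than the thesis code (which used custom MATLAB scripts and the Analyse software); the function and variable names are my own, while the 30 mm/s threshold and two-consecutive-sample criterion come directly from the text.

```python
import numpy as np

def movement_events(velocity, fs, threshold=30.0, n_consecutive=2):
    """Estimate reaction time and movement time from a resultant velocity trace.

    velocity : 1-D array of resultant limb speeds (mm/s), starting at the go signal.
    fs       : sampling frequency in Hz (e.g., 200 for the Optotrak,
               100 for the Flock of Birds).
    Movement onset is the first of `n_consecutive` consecutive samples above
    `threshold`; movement end is the first sample after onset below `threshold`.
    Returns (reaction_time_s, movement_time_s).
    """
    above = np.asarray(velocity, float) > threshold
    onset = None
    for i in range(len(above) - n_consecutive + 1):
        if above[i:i + n_consecutive].all():
            onset = i
            break
    if onset is None:
        raise ValueError("no movement onset detected")
    later = np.flatnonzero(~above[onset:])
    if later.size == 0:
        raise ValueError("no movement end detected")
    end = onset + later[0]
    return onset / fs, (end - onset) / fs
```

For example, with a 200 Hz recording, a trace that rises above threshold at sample 10 and falls back below it at sample 30 yields an RT of 50 ms and an MT of 100 ms.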

Error Measures

The main error measures used in Study A were radial error, defined as the absolute distance between the target and the finger at movement offset, and movement vector error (i.e., angular error), defined as the angular difference between the start-to-target vector and the start-to-movement-end vector. In Experiment A1, where ocular fixation points were altered and performance was

compared to that of a central target (see Experiment A1 methods below for details), a population

Z score transformation was used to normalize performance to the control condition(s).

For Study B, constant error and variable error in the movement amplitude and movement

direction axes were used as measures of performance. Constant error was calculated as the bias between endpoint positions and the participants’ perceived target positions. For the amplitude axis, positive constant error values meant the participant exhibited an overshooting bias when aiming towards the

target, whereas negative constant error values indicated that participants undershot the target

location. For the direction axis, positive and negative constant error were defined in relation to a

coordinate system (see methods of Study B for details).


Variable error was calculated as the standard deviation of movement endpoints. Constant error provides a measure of endpoint bias, whereas variable error indexes the dispersion of endpoint positions.
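As an illustration, the error measures described above could be computed as follows. This Python sketch is not the thesis analysis code (which was written in MATLAB and Analyse); all function names are my own, and the sign convention for angular error is an arbitrary choice here, since the thesis defines the direction axis relative to its own coordinate system.

```python
import numpy as np

def radial_error(endpoint, target):
    """Absolute distance between the fingertip at movement offset and the target."""
    return float(np.linalg.norm(np.asarray(endpoint, float) - np.asarray(target, float)))

def angular_error(start, endpoint, target):
    """Signed angle (deg) between the start-to-target and start-to-endpoint vectors.
    The counter-clockwise-positive convention is an arbitrary choice for this sketch."""
    u = np.asarray(target, float) - np.asarray(start, float)
    v = np.asarray(endpoint, float) - np.asarray(start, float)
    ang = np.arctan2(v[1], v[0]) - np.arctan2(u[1], u[0])
    return float(np.degrees(np.arctan2(np.sin(ang), np.cos(ang))))  # wrap to (-180, 180]

def constant_and_variable_error(endpoints, target):
    """Constant error = mean signed endpoint-target difference per axis (positive
    amplitude values = overshoot); variable error = SD of endpoints per axis."""
    pts = np.asarray(endpoints, float)
    return (pts - np.asarray(target, float)).mean(axis=0), pts.std(axis=0)

def z_normalize(value, control_mean, control_sd):
    """Express performance as a Z score relative to a control condition,
    as in the Experiment A1 normalization described above."""
    return (value - control_mean) / control_sd
```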

Latency of Online Corrections

Quantifying the online control of goal-directed actions is an ongoing challenge for researchers

interested in human movement science (see Khan et al., 2006 for a review). The methods employed

to measure online control vary greatly depending on the experimental apparatus and the

manipulations used to induce online corrections. As mentioned in Section 2.4.3.2, the use of target

perturbations is a prominent method for examining online control. The degree of online control

using these methods is quantified by contrasting characteristics of movements to perturbed targets

and of movements to stationary targets. The characteristics used for these contrasts have included:

muscle activations (Carlton & Carlton, 1987; Reichenbach et al., 2009); temporal-kinematic

features of movement trajectories (Oostwoud Wijdenes et al., 2013; Prablanc & Martin, 1992;

Saunders & Knill, 2003; Veerman, Brenner, & Smeets, 2008); and oscillation frequencies of the

reaching limb (de Grosbois & Tremblay, 2016, 2018). The identification of the onset of corrective

responses (correction latency) is often of particular interest. For this computation, a variety of

approaches, including: thresholds (Reichenbach et al., 2009, 2014); statistical tests (Brenner &

Smeets, 1997; Heath, 2005; Saunders & Knill, 2003); and, more recently, linear

extrapolation (Oostwoud Wijdenes et al., 2013; Oostwoud Wijdenes, Brenner, & Smeets, 2014;

Veerman et al., 2008); have been applied to position, velocity, and acceleration traces to generate

estimates (see the introduction section of Oostwoud Wijdenes et al., 2014 for a brief overview).

Although there is no universally accepted standard for computing correction latency, previous

work has revealed that reliable and accurate methods of detection do exist (Oostwoud Wijdenes et

al., 2014).

In the present thesis, the method of determining the latency in response to target

perturbations was adapted from Oostwoud Wijdenes et al. (2013). In their study, participants

performed 90 cm reaching movements to visual targets that could be perturbed 5 cm in the

movement direction, movement amplitude, or both axes. To compute correction latency in

response to perturbations in target direction, the difference between the acceleration profiles of


perturbed and unperturbed trials in the axis of perturbation was computed. The authors first

identified the maximum difference in the acceleration occurring after perturbation onset. Then, a

line was drawn between the points on the acceleration profile corresponding to 25% and 80% of

the maximum difference. The correction latency was defined as the time interval between the

perturbation and where this line crossed 0 (i.e., y = 0; see Figure 12D for an application of this

method to a subset of data in Study B). Using simulated movement data, Oostwoud Wijdenes et

al. (2014) examined the efficacy of different assessments of online control; the extrapolation

method was deemed to be the most accurate and precise method for detecting correction latencies

(see Figure 12).
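The extrapolation procedure described above can be sketched as follows. This is an illustrative Python version of the method of Oostwoud Wijdenes et al. (2013, 2014), not their code or the thesis code; it assumes already-averaged acceleration traces with a rising post-perturbation difference signal, and the function and variable names are my own.

```python
import numpy as np

def correction_latency(acc_perturbed, acc_unperturbed, fs, pert_idx):
    """Correction latency via the extrapolation method.

    acc_perturbed, acc_unperturbed : mean acceleration traces (same length)
        along the axis of the perturbation.
    fs : sampling frequency (Hz); pert_idx : sample index of perturbation onset.
    A line through the 25% and 80% points of the rising post-perturbation
    difference signal is extrapolated back to zero; the crossing time is the
    latency (in seconds) relative to the perturbation.
    """
    diff = np.asarray(acc_perturbed, float) - np.asarray(acc_unperturbed, float)
    post = diff[pert_idx:]
    peak = int(np.argmax(post))
    # First samples on the rising edge reaching 25% and 80% of the peak difference
    t25 = int(np.flatnonzero(post[:peak + 1] >= 0.25 * post[peak])[0])
    t80 = int(np.flatnonzero(post[:peak + 1] >= 0.80 * post[peak])[0])
    slope = (post[t80] - post[t25]) / ((t80 - t25) / fs)
    return float(t25 / fs - post[t25] / slope)  # extrapolate the line back to y = 0
```

For a difference signal that begins rising 100 ms after the perturbation, the function recovers a latency of approximately 100 ms.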


&Figure! outlining! the! extrapolation!method! for! determining! correction! latency! as! described! by!Oostwoud!Wijdenes! et! al.! (2014).! The! authors! tested! the! predictive! ability! of! different! computations! of! correction! latency! to!estimate!a!100!ms!latency!to!target!perturbations!of!1!to!4!cm!in!simulated!movements!of!300!–!500!ms.!The!methods!included!threshold,!confidence!interval!and!the!extrapolation!method.!Panel!A)!shows!the!main!findings!of!Oostwoud!Wijdenes!et!al.!(2014)!for!the!3!cm!perturbation!stimulations,!the!extrapolation!method!applied!to!averaged!acceleration!data! yielded!a! very! accurate! prediction! of! correction! latency! across!movement! times.!Panel!B)!shows! the!method!applied!to!data!in!the!Study!B!of!this!thesis.!Averaged!acceleration!profiles!for!the!perturbation!curve!showed!in!panel!B!were!derived!from!movements!to!somatosensory!targets!perturbed!away!from!the!body.!!!


Robotic Guidance Apparatus, Development, and Testing

An Epson Selectively Compliant Assembly Robot Arm (SCARA; Epson E2L853, Seiko Epson

Corp.) capable of moving in four degrees of freedom with a 0.02 mm spatial repeatability was used

to deliver target perturbations in thesis Study B. This robot was used in a previous study to deliver

guidance trajectories (Manson et al., 2014).

To perform the real-time manipulations reported in Study B, a new real-time

communication system was developed between the motion capture apparatus mentioned above

(Optotrak) and the robotic guidance device. There were two problems that needed to be addressed.

The first was the dramatic decrease in sampling resolution that occurred when both the robot and

motion capture systems were run on the same computer (EPSON RC420: Microsoft Windows XP:

Intel Celeron processor, 851 MHz; 1 GB RAM). The second, related issue was the magnitude

and variability of the lag time between the reception of real-time data from the motion capture

system and movement of the robot (see Appendix 2 for initial values).

To solve these problems, control of the motion capture system was moved to another

computer (Microsoft Windows XP: Intel Pentium ® 3.00 GHz; 2.00 GB RAM). Syncing of the

motion capture with the robot was achieved by creating a local network using an ethernet crossover cable and two custom-made parallel output boards. Communication between the motion

capture system and robot guidance was done using both electrical signals and network-shared

variables outputted from a custom MATLAB script on both computers (version 7.10, R2010a and

version 7.2 R2006a) and a custom SPEL+ script (version 4.2) interfaced with MATLAB through

custom dynamic linked libraries.

The sampling frequency, perturbation times, and perturbation speed (acceleration and

deceleration values) used in Study B were chosen partially as a result of output and input frequency

tests for each custom parallel port i/o board, transfer time from motion capture to network variable

writing, and lag time from variable reading to detectable robot motion (for all tests, analyses, and

code used to establish communications: see Appendix 2).
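The timing tests above depended on the specific parallel-port and MATLAB/SPEL+ setup, but the general idea of characterizing round-trip communication lag can be illustrated generically. The snippet below is a toy Python sketch over a TCP socket, not the thesis code; the message and the assumption of a companion echo server listening on `host:port` are illustrative only.

```python
import socket
import time

def mean_roundtrip_latency(host, port, n_trials=100):
    """Average round-trip time of a small message to an echo server.

    A generic stand-in for the lag tests described above: send a short
    message, block until it comes back, and time the loop. The mean over
    `n_trials` gives an estimate of typical communication lag.
    """
    times = []
    with socket.create_connection((host, port)) as sock:
        for _ in range(n_trials):
            t0 = time.perf_counter()
            sock.sendall(b"ping")
            sock.recv(16)  # block until the echo returns
            times.append(time.perf_counter() - t0)
    return sum(times) / len(times)
```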


Electroencephalography Apparatus, Recordings, and Analyses

Electroencephalography (EEG) was used in Experiment A2 to measure the brain’s responses to

visual stimuli during sensorimotor transformations. Overall, EEG measures changes in electrical

activity occurring in the extracellular spaces of the brain over time. Although numerous cellular

activities contribute to the charge expressed in the extracellular matrix (see Table 1), activities

measured by EEG are commonly associated with post-synaptic potentials generated by the

depolarization and hyperpolarization of pyramidal neurons in layers IV and V of the cerebral

cortex (Buzsáki, Anastassiou, & Koch, 2012; Katznelson, 1981).


Mechanism | Description of Activity | Effect on Extracellular Matrix / EEG Recording
Fast-acting potentials | Synchronous activation of a neuronal network creating rapid (< 2 ms), large-amplitude electric fields. | Contributes to high-frequency oscillations measured by EEG.
Calcium (Ca2+) spikes | Flow of calcium into the cell mediated by the opening of voltage-gated channels; can be independent of synaptic activity. | Long-lasting (up to 100 ms) effects similar to an excitatory post-synaptic potential.
Intrinsic current resonance | Voltage-dependent membrane responses of neurons activated in synchrony. | Affects neuronal oscillation frequency.
After-hyperpolarization potentials (AHP) | The influx of other ions into the extracellular matrix through ligand-gated channels; usually activated after neuronal hyperpolarization. Activities can be widespread or tightly localized. | Affects electrical field potentials if temporally synchronous; AHPs are hypothesized to have a role in the Bereitschaftspotential.
Gap junctions | Electrical communication between neural synapses. | Has an effect on oscillations if synchronous.
Ephaptic effects | Electrical gradients generated in the extracellular matrix by the activation of a nearby network of neurons. | Hypothesized to affect the activation, or likelihood of activation, of nearby neurons (akin to external transcranial direct-current stimulation).

Table 1. Descriptions of additional cellular processes contributing to local field potentials


Apparatus

In Experiment A2, EEG was recorded continuously from 64 pre-amplified Ag-AgCl

(ActiveTwo, Biosemi, Amsterdam, Netherlands) electrodes embedded in an elastic cap mapped to

the extended 10-20 system (see Koessler et al., 2009 for comparisons with newer systems). Two

electrodes, a Common Mode Sense (CMS) active electrode and a Driven Right Leg (DRL) passive

electrode served as a feedback loop driving the average potential of the measured signal to levels

as close as possible to the analog-to-digital converter reference voltage. EEG signals were digitized

at a sampling rate of 2048 Hz (DC–268 Hz, 3 dB/octave).

Visual Evoked Potentials: Features, Calculations, and Analysis

With regard to EEG, the main dependent variables used in this thesis were the magnitudes and

latencies of the different components of the visual evoked potential (VEP). VEPs are event related

potentials (ERPs) locked to the onset or offset of a visual stimulus and are a reliable way of

assessing the efficacy of retinal and visual cortical pathways (Celesia, 1984; Courchesne, Hillyard,

& Galambos, 1975; Phurailatpam, 2014). Furthermore, and importantly for the goals of this thesis, it has been shown that VEP amplitudes increase with baseline neural activity related to visual

processing (Chawla, Lumer, & Friston, 2000; Chawla, Rees, & Friston, 1999), and thus are an

adequate way to examine cortical areas involved in visual-sensorimotor processes (e.g., Lebar,

Bernier, Guillaume, Mouchnino, & Blouin, 2015).

For Experiment A2, VEPs were obtained by averaging all epochs time-locked (−200 ms to +400 ms) to the onset of the visual stimulus (each visual stimulus is described in the methods associated with the experiment), with the average amplitude from −100 ms to −5 ms serving as the pre-stimulus baseline (see Figure 13 for a graphic of the ERP averaging procedure). Raw recordings were re-referenced to the average of the mastoid electrodes, and the data were visually inspected to remove segments with artifacts.
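The epoch-averaging and baseline-correction steps can be illustrated with a simplified single-channel sketch. This is not the thesis pipeline (preprocessing was done in dedicated EEG software); the function name and NumPy-based approach are my own, while the −100 ms to −5 ms baseline window comes from the text.

```python
import numpy as np

def average_vep(eeg, events, fs, t_pre=0.2, t_post=0.4, baseline=(-0.100, -0.005)):
    """Average stimulus-locked epochs of one channel into a VEP.

    eeg    : 1-D array, one channel of cleaned continuous EEG.
    events : sample indices of visual-stimulus onsets.
    fs     : sampling rate in Hz (2048 for the Biosemi system described above).
    Each epoch spans [-t_pre, t_post] s around an event; the mean of the
    `baseline` window (in seconds relative to the event) is subtracted per epoch.
    """
    n_pre, n_post = int(round(t_pre * fs)), int(round(t_post * fs))
    b0, b1 = (int(round((t + t_pre) * fs)) for t in baseline)  # baseline, epoch samples
    epochs = []
    for ev in events:
        if ev - n_pre < 0 or ev + n_post > len(eeg):
            continue  # skip events too close to the recording edges
        ep = np.asarray(eeg[ev - n_pre:ev + n_post], float)
        epochs.append(ep - ep[b0:b1].mean())
    return np.mean(epochs, axis=0)
```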


To calculate an ERP, continuous EEG data were preprocessed and artifacts due to external noise and interference were removed. Next, the data were segmented into defined time windows centered around the event of interest. Finally, by averaging all trials for certain conditions, an ERP waveform could be observed for each electrode of interest (see below).

In general, VEPs were characterized by the following deflections occurring at different

points in time with respect to the visual stimulus: a negative deflection occurring at 80–100 ms after stimulus onset; followed by a positive deflection occurring at 100–120 ms; followed by yet another negative deflection occurring at 140–160 ms; and finally, a second positive deflection occurring at 180–220 ms (see Figure 14). Although there has been evidence to suggest that different components of the VEP are representative of activities localized in different areas of the cortex (Brodeur et al., 2008;

Clark & Hillyard, 1996), the respective contribution of each above-noted component to

sensorimotor processes remains largely unknown. For the purposes of this thesis, the peak-to-peak amplitude (delta) of the earliest reliably identifiable components (P100 and N150) was investigated for each of the electrodes of interest (see Figure 15).
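The peak-to-peak measure can then be extracted from the averaged waveform, for example as below. The search windows are assumptions loosely based on the component latencies described in the text (in practice they would be tuned per participant), and the function name is my own.

```python
import numpy as np

def p100_n150_delta(vep, fs, t0, p_win=(0.08, 0.13), n_win=(0.13, 0.18)):
    """Peak-to-peak (delta) amplitude between the P100 and N150 components.

    vep : averaged stimulus-locked waveform; fs : sampling rate (Hz);
    t0  : sample index of stimulus onset within `vep`.
    Returns (p100_amplitude, n150_amplitude, peak_to_peak_delta).
    """
    def window(win):
        a, b = (t0 + int(round(t * fs)) for t in win)
        return np.asarray(vep[a:b], float)
    p100 = float(window(p_win).max())   # most positive point in the P100 window
    n150 = float(window(n_win).min())   # most negative point in the N150 window
    return p100, n150, p100 - n150
```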


Characteristic peaks of a visual evoked potential. A negative deflection (N1) occurs at around 80 ms after the visual stimulation. The P1 occurs at about 100 ms, followed by the N2 at around 140 ms. Lastly, there is a P2 that occurs about 200 ms after the initial stimulation. For the EEG analyses in the present thesis, the peak-to-peak amplitude between the P100 and N150 was used for analysis (shown in red), as these were the most recognizable peaks across participants.

Activities associated with visual–sensorimotor processing were analyzed using the electrodes overlapping the occipital cortex and the occipito-parietal junction of the left hemisphere (electrodes PO7, PO3, O1). These electrodes were selected based on findings in previous studies (Lebar et al., 2015).


Current Source Density: The Surface Laplacian

To enhance the spatial and temporal resolution of the EEG data, current source density (CSD)

calculations were employed (Hjorth, 1975; Law, Rohrbaugh, Adams, & Eckardt, 1993; Perrin,

Bertrand, & Pernier, 1987; Perrin, Pernier, Bertrand, & Echallier, 1989). Signals were interpolated

using the spherical spline interpolation (order of splines: 3; maximal degree of Legendre

polynomials: 10; approximation parameter λ: 1.0e−5) procedure (Perrin et al., 1987), as implemented in BrainVision Analyzer 2.0 (Brain Products GmbH, Gilching, Germany). Current

source density calculations are independent of the reference electrode and are less affected by

far-field generators, monopolar recordings, and other artifacts (see Kayser & Tenke, 2015; Vidal

et al., 2015, for reviews). Analyses of peak amplitudes and latencies of VEPs were done using the

data obtained from CSD computations. When comparing CSD amplitudes between participants and

conditions, values were expressed as relative to a control condition using a log2 transformation.
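The spherical spline CSD used in the thesis requires the full electrode geometry, but the underlying idea can be illustrated with the simpler nearest-neighbour Laplacian of Hjorth (1975), together with the log2 normalization described above. This is a minimal sketch; the neighbour map and array shapes are hypothetical, not taken from the thesis.

```python
import numpy as np

def hjorth_laplacian(data, neighbors):
    """Nearest-neighbour surface Laplacian (Hjorth, 1975): each channel's
    potential minus the mean of its neighbours' potentials.

    data      : (n_channels, n_samples) array of scalp potentials
    neighbors : dict mapping channel index -> list of neighbouring indices
    """
    csd = np.empty_like(data, dtype=float)
    for ch, nbrs in neighbors.items():
        csd[ch] = data[ch] - data[list(nbrs)].mean(axis=0)
    return csd

def log2_relative(amplitude, control_amplitude):
    """Express a positive CSD amplitude relative to a control condition."""
    return np.log2(amplitude / control_amplitude)
```

Spline-based CSD follows the same logic but first interpolates the potential over the whole scalp, which makes it much less sensitive to the arbitrary choice of neighbours.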

Source Localization Procedures and Analysis

In addition to analysis of CSDs, source analyses were also conducted to determine cortical

generators of the observed activities noted at the level of the scalp. Overall, source analyses attempt

to rectify what is known as the inverse problem of EEG. That is, for any recorded scalp potential,

many different combinations of generator activities are possible (Michel et al., 2004). By using

volume conduction models, and mathematical assumptions based on Maxwell’s laws (Tenke &

Kayser, 2012a; van den Broek, Reinders, Donderwinkel, & Peters, 1998), it is possible to compute

inverse solutions and estimate activated areas associated with recorded scalp activity.

For this thesis, source localization for EEG data was conducted using the minimum-norm

techniques implemented in the Brainstorm software package (Tadel, Baillet, Mosher, Pantazis, &

Leahy, 2011). Data from all sensors were imported, processed, and averaged for each participant.

Forward models for source analysis were computed using the boundary element method (BEM,

Gramfort, Papadopoulo, Olivi, & Clerc, 2010) on the anatomical MRI brain template from the

Montreal Neurological Institute (MNI Colin 27). This method uses triangulations of interfaces

between cerebral compartments of equal isotropic conductivities as a geometrical model. The main

benefit of head models constructed with the boundary element methods is the substantial buffering


of external noise (Michel et al., 2004). Cortical sources for the VEPs were examined at the

latencies wherein the peaks of the CSDs were identified. Mean source activity was then submitted

to statistical contrasts (see methods of Experiment A1).
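Minimum-norm source estimation, as implemented in Brainstorm, additionally involves noise covariance and depth weighting; stripped to its core, the L2 minimum-norm inverse applied to a lead-field matrix can be sketched as below. The matrix sizes and regularization value are illustrative, not taken from the thesis.

```python
import numpy as np

def minimum_norm_estimate(G, y, lam=0.1):
    """L2 minimum-norm inverse solution.

    G   : (n_sensors, n_sources) forward (lead-field) matrix
    y   : (n_sensors,) scalp measurement at one latency
    lam : Tikhonov regularization weight

    Returns the source vector x minimizing ||y - G x||^2 + lam^2 ||x||^2,
    i.e. x = G^T (G G^T + lam^2 I)^(-1) y.
    """
    n_sensors = G.shape[0]
    gram = G @ G.T + (lam ** 2) * np.eye(n_sensors)
    return G.T @ np.linalg.solve(gram, y)
```

Because there are far more sources than sensors, the regularized solve picks, among all source patterns that reproduce the scalp data, the one with the smallest overall energy.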

Electrooculography (EOG)

EOG for EEG

EOG measures the changes in the standing corneo-fundal potential that primarily emerges from

the retinal pigment epithelium. As a consequence of the polarity between the front and back of

the eye, the potential recorded by electrodes placed on the skin changes when the eye moves

from side to side (Brown et al., 2006).

In Experiment A2, EOG electrodes integrated with the EEG system were used to monitor eye

movements and positions of gaze. The primary purpose of monitoring eye movements during EEG

collection was to subtract artifacts that could have occurred due to oculomotor related noise.

Because eye movements can be both spontaneous and reflexive (Bahill, Clark, & Stark, 1975;

Becker, 1991), the amplitude of artifacts created by different types of eye movements can vary.

Artifacts caused by eye movements and blinks were removed from recordings using independent

component analysis (ICA), as implemented in Brain Vision Analyzer 2.0.
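A full ICA decomposition is beyond a short example, but the goal of this artifact step can be illustrated with the classical regression approach of Gratton and colleagues, which subtracts the least-squares propagation of the EOG trace from each EEG channel. This is a simpler alternative to the ICA actually used in the thesis; array shapes and names are illustrative.

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Remove ocular artifact by regression (Gratton-style correction,
    a simpler alternative to ICA).

    eeg : (n_channels, n_samples) EEG recording
    eog : (n_samples,) simultaneously recorded EOG trace
    """
    # per-channel least-squares propagation factors of EOG into EEG
    b = eeg @ eog / (eog @ eog)
    # subtract the scaled EOG trace from every channel
    return eeg - np.outer(b, eog)
```

Unlike ICA, regression also removes any genuine brain activity that happens to correlate with the EOG channel, which is one reason ICA-based correction is generally preferred for event-related analyses.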

EOG for Eye Tracking and Gaze Direction Measurements

For experiments not involving EEG, EOG was recorded using an electrooculography system

(Coulbourn Instruments, Lablinc Inc., Lehigh Valley, PA) sampling at 1000 Hz. For Study A, eye

positions were monitored to ensure participants complied with instructions and task demands (i.e.,

to ensure participants maintained fixation).
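Offline, a fixation-compliance check of this kind might be sketched as follows, assuming the horizontal EOG has already been calibrated to degrees of visual angle (the 2 deg tolerance is a hypothetical value, not taken from the thesis).

```python
import numpy as np

def broke_fixation(h_eog_deg, fixation_deg, tol_deg=2.0):
    """Return True if calibrated horizontal gaze (in degrees) deviates
    from the required fixation direction by more than tol_deg at any
    sample during the trial."""
    trace = np.asarray(h_eog_deg, dtype=float)
    return bool(np.any(np.abs(trace - fixation_deg) > tol_deg))
```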


Flexibility in the Encoding of Reaching Movements to Somatosensory Targets: Behavioural and Electrophysiological Experiments

A revised version of Study A has been submitted to PLoS ONE.


Study A

Abstract

Prior to goal-directed actions, somatosensory target positions can be localized using either an

exteroceptive or an interoceptive body representation. The goal of the present study was to

investigate if the body representation selected to plan reaches to somatosensory targets is

influenced by the sensory modality of the cue indicating the target’s location. In the first

experiment, participants reached to somatosensory targets prompted by either an auditory or a

vibrotactile cue. As a baseline condition, participants also performed reaches to a visual target

prompted by an auditory-cue. Gaze-dependent reaching errors were measured to determine the

contribution of the exteroceptive representation to motor planning processes. The results showed

that reaches to both auditory-cued somatosensory and auditory-cued visual targets exhibited larger

gaze-dependent reaching errors than reaches to vibrotactile-cued somatosensory targets. Thus, an

exteroceptive body representation was likely used to plan reaches to auditory-cued somatosensory

targets. In the second experiment, the effect of using an exteroceptive body representation to

encode somatosensory target positions on pre-movement sensorimotor transformation processes

was examined. The cortical response to a task-irrelevant visual flash was measured as participants

planned movements to either auditory-cued somatosensory or auditory-cued visual targets. Larger

visual-evoked potentials were found when participants planned movements towards

somatosensory vs. visual targets. Furthermore, source analyses revealed that these activities were

localized to the left occipital and posterior parietal areas. These results indicated that visual and

visuomotor processing areas were more engaged when using an exteroceptive body representation

to plan movements to a somatosensory target, than when planning movements to an external visual

target.

Keywords: Body representation, somatosensory targets, proprioception, visually-evoked potential


Experiment A1

Introduction

In a game of “Simon Says”, spoken instructions such as “touch your elbow” can prompt

movements towards a specified body location. Similarly, the feeling of a mosquito landing on the

elbow could also prompt movements to this same position. Although the goal of both actions is to

reach to a common body location (i.e., hereafter referred to as a somatosensory target), the manner

in which this position is identified could influence the processes used to determine the target’s

coordinates. The purpose of the present study was to examine if the modality of the stimulus

indicating a somatosensory target’s position influences the representation used to plan movements.

Evidence that movements to somatosensory targets could be planned using multiple

representations is drawn from studies of autotopagnosia, a rare nervous system disorder

characterized by the inability to localize and orient one’s own body parts (Buxbaum & Coslett,

2001; Olsen & Ruby, 1941; Pick, 1922; Sirigu, Grafman, Bressler, & Sunderland, 1991). In a case

study by Sirigu et al. (1991), a patient was tested on her ability to point to her own body positions

in response to different instructional cues. The authors found that the patient was inaccurate when

pointing to body parts in response to verbal instructions but was able to accurately point to the felt

location of small objects placed on these same body positions (see also Ogden, 1985). Based on

these findings, the authors hypothesized that an exteroceptive, visually-based, body representation

was used to plan movements to somatosensory targets in response to verbal cues, whereas an

interoceptive, somatosensory-based, body representation was used to derive somatosensory target

positions in response to tactile stimulation (see de Vignemont, 2010 for a review of different body

representation taxonomies; and Berlucchi & Aglioti, 2010 and Paillard, 1991, 1980 for additional

theoretical discussions).

It is difficult to determine if the context-dependent mapping of somatosensory targets in

patients with autotopagnosia emerges as an adaptation to their neurological impairment or whether

such processes could also occur in healthy individuals. Furthermore, the verbal instructions used

in these studies were much more complex than the tactile cues, such that the additional brain

processes required to encode words offer only weak support for the mechanistic explanation. In

the present study, the contribution of the exteroceptive and interoceptive body representations to


movement planning was examined in healthy individuals by evaluating reaching movements

towards somatosensory targets cued by simple auditory and vibrotactile stimuli.

To examine the sensorimotor transformations used to prepare movements to

somatosensory targets, Experiment A1 involved measuring gaze-dependent reaching errors (Bock,

1986; Henriques et al., 1998; Mueller & Fiehler, 2014a, 2016; Pouget et al., 2002; Schütz,

Henriques, & Fiehler, 2013). Large gaze-dependent reaching errors are indicative of a reliance on

visual information for movement planning processes (i.e., gaze-dependent coding, see Blangero et

al., 2005; Henriques et al., 1998; Pouget et al., 2002). Conversely, movements are termed gaze-

independent if the peripheral gaze-shift does not produce significant increases in error (Mueller &

Fiehler, 2014a). Studies examining movements to visual targets have shown that gaze-dependent

errors increase in magnitude with peripheral fixation eccentricity (Bock, 1986; Henriques et al.,

1998). In contrast, previous studies examining movements to somatosensory targets have shown

evidence for both gaze-dependent and gaze-independent coding (Jones & Henriques, 2010;

Mueller & Fiehler, 2014a, 2016).

Although there is evidence for flexibility in the encoding of somatosensory targets (Mueller

& Fiehler, 2014a, 2016), it is not clear if the cue used to indicate somatosensory target’s location

influences the body representation and sensorimotor transformations used for motor planning. The

present study investigated the influence of the cue modality by directly comparing the effect of a

gaze shift on reaching movements towards one of three possible somatosensory targets (i.e., the

index, ring, and middle fingers of the non-reaching hand). Somatosensory targets were identified

either by an exogenous auditory cue (i.e., intensities of sound indicating which finger to reach to)

or a direct tactile stimulation. To provide a reference of gaze-dependent reaching errors,

participants also performed movements to visual targets cued by an auditory stimulus.

Methods

Participants

Ten participants (8 females; mean age: 24.4 ± 2.6 years; range: 21-28) volunteered to take part in

the experiment. All participants were right-handed with normal or corrected-to-normal vision. The

experiment took 1.5 hours to complete and participants were compensated with 10€. Informed


consent was obtained prior to the beginning of the experiment and the local research ethics

committee approved all procedures.

Apparatus

A drawn representation of the experimental setup is presented in Figure 16. The experiment took

place in a dark room where participants were seated comfortably in front of a custom-built aiming

apparatus. Once seated, participants placed their forehead on a stabilizing headrest located above

the aiming apparatus. From this position, participants viewed the aiming apparatus through a semi-

reflective glass mounted 30 cm above the aiming surface. Positioned 30 cm above the semi-

reflective glass was a liquid crystal display monitor (HP Compaq LA1956x, Palo Alto, CA) used

to project images onto the glass.


Panel A) A drawn representation of the apparatus used in Experiment 1. Participants sat facing an immersive display comprised of a computer monitor, a semi-reflective glass surface, and a custom aiming apparatus (not to scale). Participants made movements from an unseen home position (microswitch) to either visual targets projected onto the surface of the aiming console or the perceived position of their fingers. In the somatosensory-target conditions, participants positioned their target fingers beneath a plastic case and performed movements to the perceived position of their fingers as if projected onto the case's surface. Note that the visual items projected on the semi-reflective glass were perceived by the participants as appearing on the table surface. Panel B) A representation (not to scale) of the aiming surface in the visual target condition. Piezoelectric buzzers were positioned to the left of the aiming surface, and provided the imperative signals in the auditory-cued aiming conditions.


The aiming apparatus was also equipped with a microswitch positioned 32 cm directly in

front of the participant. The switch was used as the starting position for the reaching finger. Three

piezoelectric buzzers (frequency: 4,000 Hz; KPEG158-P5H, Kingsgate Electric Corp, Taipei-

Hsien, Taiwan) were positioned 25 cm to the left of the microswitch. The input to the buzzers was

adjusted using potentiometers to produce a loud (i.e., 70 dB), a medium (i.e., 58 dB) or a soft (i.e.,

48 dB) sound, which signaled the location of targets in the auditory-cued conditions (see below).

Three circular indentations were made between the participant and the microswitch, at 16.5

cm, 15 cm, and 15.5 cm from the microswitch (see Figure 16) and -3.5 deg, 0 deg, and 4 deg

relative to the participant’s cyclopean eye, respectively. These indentations served as placeholders

for the somatosensory targets in the somatosensory-target conditions (see below). A black plastic

case (40 cm x 9 cm x 3 cm) with an opening towards the participants was used to cover the three

indentations and prevent the reaching finger from making contact with the target fingers (i.e.,

precluding any tactile feedback about endpoint accuracy). Within each indentation the stimulation

surface of a solenoid vibrator (type 347-652, Johnson Electric, Shatin, Hong Kong) was affixed.

These solenoid vibrators delivered brief vibrations (50 ms at 80Hz, with a 2 mm amplitude) that

signaled the position of the target finger in the vibrotactile-cued somatosensory conditions (see

below).

Fixation positions and visual targets (used in the visual target condition) were projected

onto the aiming surface using a custom MATLAB program (The Mathworks Inc., Natick, MA)

and Psychtoolbox-3 (Brainard, 1997). Three fixation positions were projected as blue circles,

measuring 0.45 deg of visual angle in diameter. They were located 4.5 cm distal to the microswitch,

at 0 deg, 10 deg to the right, and 10 deg to the left of the participant’s cyclopean eye. In the visual

target condition, targets of 1 deg of visual angle were projected onto the same spatial locations as

the indentations utilized for the somatosensory targets (see above).

A white light emitting diode (LED, 4 mm diameter) attached to the participant’s right index

finger was visible through the semi-reflective glass and provided visual feedback of the initial

fingertip position in the visual-target condition (see below). Also, a black plastic shield was placed

beneath the glass surface to occlude vision of the limb during the aiming movement. The position


of the occluding surface was adjusted for each participant such that they could see the illuminated

LED on their finger when they were at the home position.

Eye positions were monitored with electro-oculography (EOG) (Coulbourn Instruments,

Lablinc Inc., Lehigh Valley, PA), sampled at 1000Hz. The position of the index finger was tracked

using an electromagnetic sensor secured to the tip of the right index finger (Flock of Birds,

Ascension Technology Corp., Burlington, VT), sampled at 100 Hz.

Overall, the participants’ task was to maintain gaze fixation while performing accurate

reaching movements with the index finger of the right hand to targets located between the home

position and their body (see Figure 16). Participants reached to somatosensory targets in response

to either an auditory (AUD-SOMA) or a vibrotactile (TACT-SOMA) cue. To obtain reference

values for gaze-dependent reaching errors, participants also performed reaches to visual targets in

response to an auditory cue (AUD-VIS). The presentation order of these conditions was

counterbalanced across participants.

Cue-target conditions

In the AUD-VIS condition, participants positioned their left hand in a comfortable position, either

on their lap or on the table surface to the left of the aiming apparatus. The trial sequence was

initiated once the participant depressed the home position microswitch with their right index

finger. This action triggered the illumination of the finger LED and the presentation of the current

fixation point (i.e., -10 deg, 0 deg, or +10 deg). Two seconds later, participants were presented

with a soft, medium, or loud sound. For half of the participants, the loud and soft sounds

corresponded to right and left targets, respectively. This correspondence was reversed for the other

half of the participants. At movement onset, the finger LED was extinguished, therefore

participants only saw the position of their reaching finger relative to the visual target prior to

reaching.

The AUD-SOMA condition was similar to the AUD-VIS condition, but instead of visual

targets, participants placed the three middle fingers of their left hand (i.e., the ring, middle, and

index) into the three circular indentations beneath the plastic casing to serve as the target positions.

Participants made movements to target positions in response to the same auditory cues as in the


AUD-VIS condition. Participants were instructed to “aim to the perceived position of their

fingernail as if it was projected onto the surface of the plastic case”. In contrast to the AUD-VIS

condition, participants did not receive visual information about the position of their reaching hand

prior to movement initiation.

The TACT-SOMA condition was similar to the AUD-SOMA condition, except that a

vibrotactile stimulus indicated the location of the target finger (i.e., index, middle, or ring finger).

Similar to the auditory-cue, the tactile stimulation of the target finger occurred 2 s after the

participant depressed the microswitch (i.e., the home position).

Familiarization trials

Before each experimental condition, participants performed 2 sets of familiarization trials. For the

AUD-VIS and AUD-SOMA conditions, the first set of familiarization trials consisted of 3 blocks

of 3 reaching trials, wherein the same auditory-cue was presented in each block (e.g., 3 trials with

the “loud” sound). Subsequently, participants performed 1 block of 15 trials wherein auditory cues

were presented in random order. After each trial in this familiarization block, participants were

asked to report which auditory cue they had heard (e.g., “loud”, “medium”, or “soft”) and they

were very accurate at distinguishing between the different levels of sound (mean accuracy: 97%;

SD = 0.5%). Participants were also presented with 2 sets of familiarization trials in the TACT-

SOMA condition. Auditory cues were replaced by brief tactile stimulation applied to the reaching

finger. The participants' task in the second set of familiarization trials was to report which target finger

had been stimulated and they were all perfectly accurate in this task.

Experimental trials

In all experimental conditions, participants performed 10 trials in each fixation-target combination

(i.e., 3 fixation [-10 deg, 0 deg, +10 deg] by 3 possible targets [left, middle, right], yielding 90

trials per condition) for each of the 3 cue-target conditions (i.e., AUD-VIS, AUD-SOMA, TACT-

SOMA), yielding a total of 270 experimental trials.

Data analysis and reduction

Only data from movements to the centre target were analyzed because gaze-shifts were equidistant

with respect to this target (i.e., 10 deg on either side) and a consistent sound level (i.e., medium


sound) was used to signal the target position across conditions. All finger and eye movement

recordings were exported to custom software (Analyse, Marcel Kazsap, QC, Canada) and

processed offline. Trials wherein the participants failed to maintain their fixation position prior to

or during the movement and trials where reaction times or movement times were higher or lower

than 2.5 times the within-condition standard deviation were excluded from the analyses. Overall,

less than 9% of trials (to the centre target) were excluded (an average of 2.6 ± 1.6 trials per

participant). Statistical analyses were performed with the Statistical Package for the Social Sciences

(version 21; SPSS Inc., Chicago, IL). Post-hoc comparisons were performed using R (version

3.02, R Development Core Team).
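The ±2.5 SD trial-exclusion rule can be sketched as a boolean mask over the within-condition values (the example values are illustrative, not real trial data):

```python
import numpy as np

def keep_trials(values, criterion=2.5):
    """Boolean mask of trials within `criterion` SDs of the condition mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std(ddof=1)
    return np.abs(z) <= criterion
```

Note that with very few trials per condition such a criterion can never trigger, since the maximum possible |z| in a sample of n values is (n - 1)/sqrt(n).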

Normalized reaction times (RTs), movement times (MTs), and directional reaching errors

were computed to investigate the effect of peripheral fixation on movement performance. RT was

defined as the time that elapsed between the go signal (i.e., either the sound or the finger vibration)

and the release of the home position microswitch. MT was defined as the time that elapsed between

the release of the home position microswitch and movement end, which was defined as the first

sample at which the limb fell below 30 mm/s. Directional reaching error was defined as the angular

difference (in degrees) between the home-target position vector and the home-movement end

position vector. Directional errors were computed for each fixation condition (i.e., left, middle,

and right). The within-participant mean and standard deviation of the directional errors in the

centre fixation position were used to calculate a population z-score value for each trial in the left

and right fixation condition. Negative and positive values indicate movements biased to the left

and right of the centre position, respectively. The resulting normalization thus took into

consideration the variability in participant’s performance in the centre fixation condition. This

normalization procedure was also completed for the reaction times and movement times. Larger

positive z-scores represent longer times whereas larger negative z-scores are indicative of shorter

times, relative to the centre fixation condition.
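The directional-error and normalization computations described above can be sketched as follows; the coordinate convention and example values are illustrative, not taken from the analysis software used in the thesis.

```python
import numpy as np

def directional_error_deg(home, target, endpoint):
    """Signed angle (deg) between the home->target and home->endpoint
    vectors. In this (illustrative) convention, positive values indicate
    endpoints to the right of the target direction."""
    vt = np.asarray(target, dtype=float) - np.asarray(home, dtype=float)
    ve = np.asarray(endpoint, dtype=float) - np.asarray(home, dtype=float)
    cross = vt[0] * ve[1] - vt[1] * ve[0]
    dot = vt @ ve
    return -np.degrees(np.arctan2(cross, dot))

def normalize_to_centre(trials, centre_trials):
    """z-score each trial against the centre-fixation mean and SD."""
    centre = np.asarray(centre_trials, dtype=float)
    trials = np.asarray(trials, dtype=float)
    return (trials - centre.mean()) / centre.std(ddof=1)
```

Normalizing against the centre-fixation trials, rather than against a fixed scale, expresses each gaze-shifted trial in units of each participant's own baseline variability.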

All z-score values were submitted to a 2-fixation direction (Left, Right) x 3 cue-target

condition (AUD-VIS, AUD-SOMA, TACT-SOMA) repeated-measures ANOVA. Alpha level

was set at 0.05 for all statistical contrasts, and effect sizes were reported using partial eta squared.

Post-hoc tests for any significant interactions were completed using pairwise t-tests with the

Bonferroni correction. Additionally, for horizontal reaching errors, one-sample t-tests were used


to examine if peripheral fixation elicited a significant bias (i.e., different from 0 deg) for each

fixation direction; effect sizes are reported using Cohen's dz.
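For the one-sample tests against zero, Cohen's dz relates directly to the t statistic (t = dz * sqrt(n)). A minimal sketch (example values are illustrative):

```python
import numpy as np

def one_sample_t_and_dz(x, mu=0.0):
    """Return (t statistic, Cohen's dz) for a one-sample test of mean(x) == mu."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sd = x.std(ddof=1)            # sample standard deviation
    dz = (x.mean() - mu) / sd     # effect size in SD units
    t = dz * np.sqrt(n)
    return t, dz
```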

Results

Directional reaching errors

The ANOVA performed on the directional reaching errors revealed a significant main effect of

fixation direction (F(1,9) = 64.7, p < .001, ηP2 = 0.89) and a significant interaction between fixation

direction and cue-target condition (F(2,18) = 5.1, p < 0.05, ηP2 = 0.36; see Fig 17 A). Critically,

decomposing the interaction (Bonferroni-corrected alpha of p = .006) revealed that the differences

in gaze-dependent errors for the left and right fixation directions were higher in both the AUD-

SOMA (M = 1.32, SD = 0.44), and AUD-VIS target conditions (M = 1.31, SD = 0.63), compared

to the TACT-SOMA condition (M = 0.69, SD = 0.69). Furthermore, one sample t-tests revealed

that the magnitude of the directional reaching errors in both left and right fixation conditions in

the AUD-SOMA (left: t(9) = 4.2, p < .001, dz = 1.3; right: t(9) = -4.1, p < .001, dz = 1.3) and AUD-

VIS conditions (left: t(9) = 2.7, p < .05, dz = 0.9; right: t(9) = -4.0, p < .01, dz = 1.3) were

significantly greater than zero. In contrast, neither the left nor the right fixation direction in the TACT-

SOMA condition was significantly greater than zero (left: t(9) = 1.8, p = 0.10, dz = 0.6; right:

t(9) = -1.5, p = 0.17, dz = 0.5). These results reveal gaze-dependent coding for movements to

auditory-cued visual and auditory-cued somatosensory targets (see Fig 17 B and Fig 17 C).

Normalized RT and MT

Analysis of RT yielded a significant main effect of cue-target condition (F(2,18) = 5.4, p < 0.01,

ηP2 = 0.38). Post-hoc tests (Bonferroni-corrected alpha of p = 0.016) revealed that, in the AUD-

SOMA condition, participants exhibited significantly longer RTs (M = 0.44, SD = 0.43) compared

to both the AUD-VIS (M = -0.02, SD = 0.58), and the TACT-SOMA conditions (M = 0.09, SD =

0.37: see Fig 17 B). In contrast, the analyses of MTs yielded no significant main effects or

interactions (Fs < 1.96, ps > 0.16).


Panel A) Averaged normalized reach trajectories for all participants in each cue-target condition (error patches represent the between-participant standard error of the mean). Panel B) Average normalized directional error for each cue-target condition (error bars represent the between-participant standard deviation). Participants' reaching errors were significantly more influenced by gaze fixation position (i.e., Right Fix vs. Left Fix) in the auditory-cued target conditions (-VIS and -SOMA) than in the vibrotactile-cued target conditions. Panel C) Mean normalized reaction times for participants in each experimental condition (error bars represent the between-subjects standard deviation). Reaction times of gaze-shifted movements were significantly longer when participants aimed to auditory-cued somatosensory targets compared to both auditory-cued visual targets and vibrotactile-cued somatosensory targets.


Discussion

In Experiment A1, participants performed reaching movements to one of three somatosensory

target locations cued by either an auditory or vibrotactile stimulus. To provide a comparison for

gaze-dependent reaching errors participants also performed movements to visual targets cued by

an auditory stimulus. Overall, participants exhibited larger gaze-dependent endpoint errors when

reaching to auditory-cued somatosensory targets compared to vibrotactile-cued somatosensory

targets. Also, gaze-dependent errors for reaches to auditory-cued somatosensory targets were no

different than those observed for reaches to auditory-cued visual targets. However, normalized

reaction times were longer for reaches to auditory-cued somatosensory targets compared to

auditory-cued visual targets. Taken together, these findings suggest that more complex

sensorimotor transformation processes were required when using an exteroceptive representation

to plan movements to somatosensory targets than when planning movements to auditory-cued

visual and vibrotactile-cued somatosensory targets.

Participants exhibited larger gaze-dependent reaching errors in both auditory-cued target

conditions compared to the vibrotactile-cued somatosensory-target condition. These results are

congruent with earlier studies that found movements to somatosensory targets were planned in

gaze-independent coordinates if the eyes and target positions remained stable prior to movement

onset (Mueller & Fiehler, 2014a, 2016). According to the classical model of somatosensory

processing, cutaneous and proprioceptive inputs project to different areas (cutaneous: areas 3b and

1; proprioceptive: areas 3a and 2) of the post-central cortex (Friedman, Murray, O’Neill, &

Mishkin, 1986; Kaas et al., 1983). These areas are reciprocally connected and recent investigations

have shown that these areas respond to both cutaneous and proprioceptive stimuli (see Kim et al.,

2015). Given the close link between tactile stimulation and non-visual body representations (see

Serino & Haggard, 2010), the integration of spatially congruent somatosensory inputs in the

TACT-SOMA condition may have favoured the use of an interoceptive body representation for

encoding target position and planning the reaching vector.

In contrast, when the site of the tactile stimulation does not correspond to the location of

the target, planning processes could rely on an exteroceptive, more cognitive type of

representation. Consistent with this notion and similar to previous studies (Blangero et al., 2005;


Jones, Fiehler, & Henriques, 2012; Pouget et al., 2002), gaze-dependent coding for somatosensory

targets was found in the AUD-SOMA condition. The absence of tactile stimulation in the AUD-

SOMA condition might have precluded direct encoding of the finger position and thus, access to

the gaze-independent coordinates when planning the reaching movement. These findings are

consistent with previous studies proposing that if an exteroceptive representation is used to define

a body position, the location of this position is more biased by environmental visual information

(Schwoebel, Coslett, & Buxbaum, 2001).

Although the analyses did not yield significant differences in gaze-dependent errors

between the AUD-VIS and AUD-SOMA conditions, the longer reaction times in AUD-SOMA

compared to the AUD-VIS conditions suggest that more complex sensorimotor transformation

processes are required when planning movements to somatosensory targets using the exteroceptive

body representation. This result is in line with previous studies showing that distinct sensorimotor

transformation processes are used for planning movements to somatosensory targets (Bernier,

Burle, Hasbroucq, & Blouin, 2009; Bernier & Grafton, 2010; Blouin, Saradjian, Lebar, Guillaume,

& Mouchnino, 2014; Sarlegna & Sainburg, 2007). For example, Bernier et al. (2009) found that reach-related activities occurred earlier and were more pronounced in posterior parietal cortex (PPC) when participants planned movements to visual targets compared to when they

planned movements to somatosensory targets. The authors also noted that movements to

somatosensory targets produced earlier and greater activations in premotor cortex compared to

movements to visual targets.

In the present study, because movements to auditory-cued somatosensory targets showed

gaze-dependent reaching errors, it is possible that the initial movement vector was computed in

visual coordinates. This would require a sensorimotor reference frame transformation wherein

somatosensory information about the target and effector are used to compute visual estimates of

their positions prior to the start of the movement (see Neggers & Bekkering, 2001). This remapping

of the somatosensory target positions would likely require greater visual processing in areas

involved in visuomotor transformations and gaze-dependent movement coding (Batista &

Newsome, 2000; Bernier & Grafton, 2010; Darling, Seitz, Peltier, Tellmann, & Butler, 2007;

Medendorp, Beurze, Van Pelt, & Van Der Werf, 2008) compared to conditions where this


remapping is not required (e.g., reaches to visual targets). A second experiment was designed to

test this prediction.

Experiment A2

Introduction

In Experiment A1, movement preparation times were longer when participants reached toward

auditory-cued somatosensory targets compared to when they reached toward auditory-cued visual

targets. This result was attributed to the additional transformation processes required to shift from

a gaze-independent (interoceptive) representation to a gaze-dependent (exteroceptive)

representation. The purpose of Experiment A2 was to examine the neural mechanisms associated

with the gaze-dependent encoding of somatosensory targets.

Although behavioural evidence for the remapping of somatosensory targets onto a gaze-

dependent reference frame has been reported (e.g., Mueller & Fiehler, 2014a, 2016; Pouget et al.,

2002), the neural processes underlying these sensorimotor transformations remain largely

unknown. Previous studies examining motor planning have implicated the occipito-parietal

network in the gaze-dependent coding of effector and target positions (Batista & Newsome, 2000;

Bernier & Grafton, 2010; Medendorp et al., 2008). Moreover, studies on autotopagnosia have

shown that the inability to use the exteroceptive body representation is linked to damage to the left

posterior parietal and parietal-occipital cortices (Buxbaum & Coslett, 2001; Corradi-Dell’Acqua

& Rumiati, 2007; de Vignemont, 2010; Ogden, 1985; Olsen & Ruby, 1941; Sirigu, Grafman,

Bressler, & Sunderland, 1991). Given these findings, it was hypothesized that there would be

greater activation in parieto-occipital networks when preparing movements to somatosensory

targets encoded in gaze-dependent coordinates as compared to external visual targets. For external visual targets, no remapping is needed because both hand and target positions are already encoded in a visual coordinate system (Buneo & Andersen, 2006; Cohen & Andersen,

2002; Reichenbach et al., 2009).

To test these predictions, the cortical response to a task-irrelevant visual stimulus (i.e., the visual-evoked potential, or VEP) was measured as participants planned movements to both

auditory-cued visual and auditory-cued somatosensory targets. Because baseline neural activity in


the extrastriate cortex is a marker of visual processing and the amplitude of visually-evoked

responses increases along with baseline activity (Chawla et al., 2000, 1999), comparing changes

in VEP amplitudes should provide a good proxy for the engagement of visual and visuomotor

cortical networks.

Methods

Participants

Ten participants (5 female, mean age: 26 ± 4.7; range 20-35), who did not participate in

Experiment A1, were recruited for Experiment A2. All participants self-reported being right-handed

and had normal or corrected-to-normal vision. Informed consent was obtained prior to data

collection and a local ethics committee at Aix-Marseille University approved all procedures. In

total, the experiment took place over 2 sessions lasting 1.5 to 2 hours each. There was at least 1

but no more than 10 days between sessions (average = 4 ± 3.5 days).

Apparatus

To accommodate the use of electroencephalography (EEG) and reduce electrical noise, all visual

stimuli were generated using LEDs positioned on the aiming surface (see Fig 18) instead of the

LCD monitor used in Experiment A1. Two additional microswitches and 3 yellow LEDs were

added to the aiming surface. Each additional microswitch served as a different possible starting

position and each LED, placed 0.5 cm distal and 0.5 cm to the left of each microswitch, served as

a possible fixation location (see procedures below). Thus, when participants fixated on any of the

LEDs and placed their finger on the corresponding microswitch, the finger was positioned in the

same relative retinal location, in the lower right visual field.

Three orange LEDs were placed at the same positions as the circular indentations that

served as the positions for somatosensory targets in Experiment A1. These LEDs were used as

visual targets in the AUD-VIS condition. To prevent participants from receiving tactile feedback

when reaching to visual targets, a thin piece (39 cm x 9 cm x ~0.3 cm) of transparent laminated

Bristol board was placed over the LEDs. In addition, the participant’s right index finger was

equipped with two LEDs: the same white LED that was used in Experiment A1 to indicate the initial


position of the pointing finger; and a green LED (width 5mm, beam angle 40°, luminance 60000

mcd, 22.7 lm) attached to the fingernail generated the stimuli for the VEPs.


Figure 18. Panel A) A drawn representation of the experimental apparatus for Experiment A2 (not to scale). Participants fixated on one of three fixation locations and began their movements from the associated home position. In the auditory-cued somatosensory-target condition (represented above), participants performed movements to one of the three middle fingers of their non-reaching limb. Panel B) A representation of the aiming surface in the visual target condition. Participants fixated on one of three target positions and placed their fingers on the corresponding microswitch to begin each trial. Piezoelectric buzzers to the left again provided the imperative signals to initiate movement in both the AUD-VIS and AUD-SOMA conditions.


Trial procedures

In contrast to Experiment A1, no gaze-shift stimulus was presented prior to the reaching

movement. As the purpose of Experiment A2 was to examine the differences cortical network

activation underlying reaches to auditory-cued somatosensory and visual targets, it was imperative

that visual stimulus was presented in the same retinal location prior to the start of the reaching

movement.

Each trial began when one of the three possible fixation positions was illuminated.

Participants fixated on the yellow LED and placed their right index finger onto the corresponding

microswitch. As in Experiment A1, a soft, medium, or loud sound was presented 2 seconds after

the participant placed the finger on the microswitch. The auditory cue once again indicated

the spatial location of the somatosensory or visual target while also serving as the imperative signal

to begin the pointing movement. Then, 100 ms after the presentation of the auditory cue, during

the participant’s reaction time (i.e., movement planning), the green LED located on the pointing

finger generated a 50 ms flash. This presentation time was chosen based on previous studies

demonstrating significant modulations in evoked responses (Blouin et al., 2014) and reach-related

activity (Bernier et al., 2009) for both movements to somatosensory and visual targets (i.e.,

between 100-150 ms after the go signal). Participants were instructed to reach to the target as

precisely as possible and to stay at the target location until the next fixation point was illuminated.

Only one cue-target condition was used per experimental session (AUD-VIS or AUD-

SOMA), and the order of target modality presentation was counterbalanced across participants.

As in Experiment A1, the sound-target mapping was counterbalanced between participants and

remained the same for both experimental sessions. Movements were also conducted in darkness

and thus, participants had no visual feedback of the reaching limb during the movements.

Cue-target conditions

In the AUD-VIS condition, when participants placed their finger on the home microswitch, the

finger and all target LEDs were illuminated. When participants released the microswitch to begin

their reaching movement the finger LED was extinguished, but all target LEDs remained

illuminated throughout the trajectory.


In the AUD-SOMA condition, participants performed reaches to the fingernail of one of

the three middle fingers on their left hand (see Section 2.5). As in the AUD-SOMA condition of

Experiment A1, participants did not receive visual feedback of their reaching finger during

movement planning. Because differences in endpoint biases were not informative in Experiment

A2, participants could make physical contact with the target finger, allowing terminal feedback

about their movement endpoint.

Control condition

The magnitude of VEPs varies considerably between participants (e.g., due to differences in

impedance and thickness of the skull; see Allison, Wood, & Goff, 1983). For this reason, a series

of trials were performed in a control condition before both experimental sessions to normalize

VEPs for comparison across participants and sessions. The main difference between the control

and experimental trials was the absence of reaching movements.

For the control AUD-VIS condition, when the microswitch was depressed, the finger and

target LEDs were illuminated. Two seconds later, one of the three sounds was presented (i.e., soft, medium, or loud). This sound was followed, 100 ms later, by the 50 ms flash of the green LED on the index finger. Participants remained on the home position until the fixation stimulus was

turned off (i.e., ~3s after it was turned on). The same trial procedure was used for the control AUD-

SOMA condition, except the finger and target LEDs were not illuminated.

Data recording

EEG data were recorded continuously from 64 pre-amplified Ag-AgCl (ActiveTwo, Biosemi,

Amsterdam, Netherlands) electrodes embedded in an elastic cap mapped to the extended 10-20

system. Two electrodes, a Common Mode Sense (CMS) active electrode and a Driven Right Leg

(DRL) passive electrode specific to the Biosemi system, which served as a feedback loop driving

the average potential of the measured signal to levels as close as possible to the analog-to-digital

converter reference voltage. Electrooculographic (EOG) activity was recorded bipolarly with

surface electrodes placed near both outer canthi as well as under and above the orbit of the right

eye. EOG recordings were used to identify ocular artefacts (see below) and also allowed the

experimenters to verify that participants fixated on the fixation LEDs during movement planning

and execution. The EEG and EOG signals were digitized (sampling rate 1,024 Hz, DC, 268 Hz, 3


dB/octave) and band-pass filtered (0.1–45 Hz, notch filter applied at 50 Hz, 12 dB/octave). As in Experiment A1, kinematic data of the reaching finger were recorded by tracking the positions of an

electromagnetic sensor fixed on the top of the index finger and sampled at a rate of 100 Hz.

Data analysis

VEPs were obtained by averaging the time-locked (-200 to +300 ms) responses to the onset of the

50 ms green flash. These epochs were averaged for each participant and for each condition. The

mean amplitude of the -200 ms to -5 ms segment of each epoch served as the pre-stimulus baseline.

The monopolar recordings were referenced to the average of the right and left mastoid electrodes.

Recordings were visually inspected, and epochs with eye movements or artefacts were rejected. The average number of traces retained for each condition varied from 172 to 179 (out of a possible 180), and a 2 phase (control, experimental) by 2 cue-target condition (AUD-VIS, AUD-SOMA) repeated-measures ANOVA revealed no significant differences in the number of rejected trials, indicating that the signal-to-noise ratio did not differ between conditions.
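The epoching procedure described above (flash-locked -200 to +300 ms windows at 1,024 Hz, with the mean of the -200 to -5 ms segment subtracted as baseline) can be sketched in a few lines. This is a minimal illustration assuming a single-channel continuous signal and known flash-onset sample indices; the function name and data shapes are not from the thesis pipeline.

```python
import numpy as np

FS = 1024                    # EEG sampling rate (Hz), as in the recording setup
PRE_MS, POST_MS = 200, 300   # epoch window: -200 to +300 ms around flash onset
BASE_END_MS = 5              # baseline ends 5 ms before stimulus onset

def average_vep(signal, onsets, fs=FS):
    """Extract flash-locked epochs, baseline-correct each, and average them.

    signal : 1-D array, continuous EEG from one electrode (arbitrary units)
    onsets : sample indices of the green-LED flash onsets
    """
    pre = int(PRE_MS * fs / 1000)
    post = int(POST_MS * fs / 1000)
    base_end = pre - int(BASE_END_MS * fs / 1000)  # index of -5 ms in the epoch
    epochs = []
    for i in onsets:
        epoch = signal[i - pre : i + post].astype(float)
        epoch -= epoch[:base_end].mean()   # subtract the -200 to -5 ms baseline
        epochs.append(epoch)
    return np.mean(epochs, axis=0)         # time-locked average (the VEP)
```

In practice each retained trial contributes one epoch, and the per-participant, per-condition average is what enters the CSD and amplitude analyses.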

A current source density (CSD) analysis was employed to increase the spatial resolution of

EEG signals (Babiloni et al., 1996; Law, Rohrbaugh, Adams, & Eckardt, 1993; Perrin et al., 1989).

The signal was interpolated with a spherical spline interpolation procedure to compute the second-order derivatives in two-dimensional space (order of splines: 3; maximal degree of Legendre polynomials: 10; approximation parameter lambda: 1.0e-5; see Perrin et al., 1989). CSD

measurements represent a reference-free estimate of activity at each electrode and are less affected

by far-field generators than monopolar recordings (Bradshaw & Wikswo, 2001; Law et al., 1993;

Nunez et al., 1994). Thus, CSD analyses were deemed to yield measures that better reflected the

underlying cortical activities and local sources (Kayser & Tenke, 2015; Tenke & Kayser, 2012b).

CSD-VEPs computed from the left occipital and left occipital-parietal electrode sites (O1,

PO3, PO7) were used for the main analyses. These sites, contralateral to both the flash stimuli and

the reaching hand, were chosen based on studies showing increased processing of visual

information as a result of sensorimotor transformation processes, and studies implicating the

underlying cortical areas in the use of the exteroceptive body representation (Bernier & Grafton,


2010; Corradi-Dell’Acqua, Hesse, Rumiati, & Fink, 2008; Corradi-Dell’Acqua, Tomasino, &

Fink, 2009; Felician et al., 2004; Lebar et al., 2015; Medendorp et al., 2008).

CSD-VEPs were assessed, for each electrode, by measuring the peak-to-peak amplitude

between the first positive and negative deflections that could be identified in all participants and

conditions after stimulus onset (see Figure 19). The latencies of the positive and negative

deflections (i.e., ~100 ms and ~150 ms, respectively) are reported for each electrode in Table 2.

The amplitude of the P100-N150 (hereafter referred to as the VEP) was expressed as the ratio of

the CSD-VEP amplitude measured in the Control and Experimental conditions (CSD-VEP ratio =

CSD-VEP experimental conditions / CSD-VEP control conditions). These ratios were log2-transformed to reduce the nonlinearity inherent to ratios, and the normalized amplitudes were used for statistical contrasts. An increase in normalized CSD-VEP amplitude was

considered indicative of additional visual processing in the early stages of sensorimotor

transformations.
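The peak-to-peak measurement and log2 normalization can be sketched as follows. The P100/N150 search windows and function names here are assumptions for illustration; in the study, the deflections were identified individually for each participant and condition.

```python
import numpy as np

def peak_to_peak(csd_vep, fs=1024, p_win=(80, 130), n_win=(130, 180)):
    """Peak-to-peak amplitude between the P100 maximum and N150 minimum.

    csd_vep : 1-D array, CSD-transformed VEP with t=0 at the start of the array
    p_win, n_win : search windows in ms (approximate latencies, assumed here)
    """
    def seg(win):
        a, b = (int(t * fs / 1000) for t in win)
        return csd_vep[a:b]
    return seg(p_win).max() - seg(n_win).min()

def normalized_amplitude(experimental, control):
    """log2 of the experimental/control amplitude ratio; 0 means no change."""
    return np.log2(peak_to_peak(experimental) / peak_to_peak(control))
```

On this scale a value of 0 means no change from the control condition, and a normalized amplitude of 0.8 corresponds to roughly a 2^0.8, or about 1.7-fold, increase over control.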

The neural sources of the P100-N150 in all experimental and control conditions were

estimated using the minimum-norm technique implemented in the Brainstorm software (Tadel et

al., 2011). To resolve the inverse problem and estimate the cortical sources of the VEP, data were

imported from all sensors, processed, and averaged for each participant, condition, and electrode.

The forward model was computed for each condition using a symmetric boundary element method

(BEM, Gramfort et al., 2010) on the anatomical MRI brain template from the Montreal

Neurological Institute (MNI Colin27). Sources of the averaged absolute activity were estimated

using the dynamic statistical parametric maps (dSPM) technique (Dale et al., 2000).

Both behavioural temporal measures (i.e., movement time and reaction time) as well as

normalized CSD-VEP amplitudes were submitted to paired samples t-tests. Effects sizes are

reported with Cohen’s dz.
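For the paired contrasts, Cohen's dz is the mean of the paired differences divided by their standard deviation, and it relates to the paired t statistic as t = dz * sqrt(n). A minimal sketch (the function name is hypothetical):

```python
import math

def paired_t_and_dz(x, y):
    """Paired-samples t statistic and Cohen's dz for two repeated measures."""
    d = [a - b for a, b in zip(x, y)]   # paired differences
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    dz = mean_d / sd_d                  # effect size: mean diff / SD of diffs
    t = dz * math.sqrt(n)               # equivalent paired t statistic
    return t, dz
```

Unlike Cohen's d for independent groups, dz is computed on the within-participant differences, which suits the repeated-measures contrasts used here.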


Figure 19. Grand average VEPs for each electrode in the AUD-VIS and AUD-SOMA conditions (error patches represent the between-participant standard deviation). The peak-to-peak amplitudes between P100 and N150, as determined by a current source density analysis, were used for statistical contrasts.


The latencies of the CSD-VEPs were analyzed using a 2 phase (control, experimental) by 2 condition (AUD-VIS, AUD-SOMA) repeated-measures ANOVA for each electrode of interest. The analysis of latencies did not reveal any significant main effects or interactions between phase and cue-target condition for either the P100 or N150 component for any electrode (ps > .05, Fs < 5.0).

            AUD-VIS                                    AUD-SOMA
            Control             Experimental          Control             Experimental
Electrode   P100      N150      P100      N150        P100      N150      P100      N150
PO3         111 (23)  151 (26)  102 (16)  147 (14)    103 (18)  145 (10)  107 (14)  147 (12)
PO7         103 (7)   144 (13)  101 (14)  156 (16)    106 (13)  151 (14)  109 (8)   152 (17)
O1          101 (15)  140 (13)  104 (10)  153 (16)    112 (13)  148 (12)  106 (11)  143 (15)

Table 2. Mean (and standard deviation) of latencies (in ms) for the peaks used in the CSD-VEP calculation.


Results

Behavioural Variables

Overall, there were no significant differences in any behavioural variables related to the temporal aspects of movement performance between the AUD-VIS and AUD-SOMA conditions. Paired-samples t-tests did not reveal significant differences in movement times (overall mean 737 ms ± 136 ms; t(9) = -0.14, p = .88, dz = 0.04) or reaction times (t(9) = -2.22, p = .06, dz = 0.7),

although reaction times tended to be longer for somatosensory targets (M = 520 ms, SD = 70 ms)

compared to visual targets (M = 480 ms, SD = 60 ms), as in Experiment A1.

CSD-VEPs

The analysis of CSD-VEPs revealed larger responses to the visual stimulus when reaching to

auditory-cued somatosensory targets compared to auditory-cued visual targets in both the occipital

(O1: see Figure 20A) and occipital-parietal (PO7: Figure 20B) electrodes. At the O1 electrode,

normalized (log2) VEP amplitudes of the P100-N150 component measured in the AUD-SOMA

condition were significantly larger than those observed in the AUD-VIS condition (M = 0.8, SD =

1.2 vs. M = 0.16, SD = 0.85, t(9) = 3.2, p = 0.05, dz = 0.7). Comparable VEP amplitude differences

were found at the PO7 electrode, as the amplitude of the P100-N150 component was larger when planning movements in the AUD-SOMA condition as compared to the AUD-VIS condition (M = 0.43, SD = 0.90 vs. M = 0.79, SD = 2.6, t(9) = 3.2, p = 0.02, dz = 1.0). In contrast, there were no significant CSD-VEP differences at the PO3 electrode (t(9) = -0.395, p = 0.70, dz = 0.13).

Figure 21 shows the average source activity between 100 and 150 ms (i.e., between the

peaks of the P100-N150 used for the VEP calculation) projected on a cortical template for all

experimental and control conditions in both the AUD-SOMA and AUD-VIS conditions. The

source analyses revealed a greater response to the flash in left occipito-parietal areas in the AUD-

SOMA compared to the AUD-VIS condition. Interestingly, contrasting the mean activity between

the AUD-SOMA condition and its control condition revealed that movement planning led to

significantly greater visual-related activation of the left occipital and posterior parietal cortices.


Conversely, no significant increases in activity were found in these regions when contrasting the

AUD-VIS condition with its control condition.




Figure 20. CSD normalized VEPs for the occipital (O1) and occipito-parietal (PO7) electrodes (error bars represent the standard error of the mean). For both electrodes, CSD-VEPs were significantly larger in the AUD-SOMA compared to the AUD-VIS condition.



Figure 21. Grand average source activity for each condition between the P100 and N150 latencies. Source activity (color maps indicate activation levels) was localized in parietal and occipital areas in both experimental and control conditions. Statistical contrasts (paired-samples t-tests, alpha set to 0.05; the t-value map indicates the direction of effects) revealed significantly more activity in the left parietal and parieto-occipital regions (as indicated by the shade of red) for the AUD-SOMA experimental condition compared to the AUD-SOMA control condition.



Discussion

In Experiment A2, the cortical response to a task-irrelevant visual stimulus was measured as

participants performed reaches to both auditory-cued visual and auditory-cued somatosensory

targets. No differences in movement times or reaction times were found between the two

conditions. Because the goal of Experiment A2 was to contrast cortical activation during

movement preparation between target modalities, it was important that the VEP stimulus be

presented at roughly the same time (within 150 ms, see Bernier et al., 2009) during movement

planning in both conditions. Although there was a non-significant trend for longer reaction times

for the AUD-SOMA vs. AUD-VIS conditions, the average magnitude of these differences was 37

ms (SD = 52 ms). Thus, it is likely that the visual probe was presented at a similar time in the

motor preparation process for both the AUD-VIS and AUD-SOMA conditions.

Overall, the results of both the CSD and source analyses revealed a greater response to visual input in the AUD-SOMA condition than in the AUD-VIS condition. These findings are consistent

with the hypothesis that the gaze-dependent remapping of somatosensory target positions, using

an exteroceptive body representation, employs cortical networks associated with visual processing

and visuomotor transformations. The increased sensitivity of occipital networks to visual inputs

might be linked to a re-weighting of sensory information prior to engagement in higher-order

processes. For example, because occipital areas have been implicated in early multisensory

integration (see Murray et al., 2016 for a review) the increased activation observed in the current

study could be indicative of the role of these cortical networks in encoding body positions in visual

coordinates. This idea is supported by neuroimaging studies showing that hand-selective cells of

the lateral occipital cortex are strongly lateralized to the left hemisphere (Bracci, Cavina-Pratesi,

Ietswaart, Caramazza, & Peelen, 2012; Bracci, Ietswaart, Peelen, & Cavina-Pratesi, 2010).

In addition to enhanced activity in occipital areas, increased visual processing was found

in the left PPC when planning movements to auditory-cued somatosensory targets. This result

supports the findings of several studies showing that the left PPC is associated with the formation

of exteroceptive body representations (Buxbaum & Coslett, 2001; de Vignemont, 2010; Felician

et al., 2004; Olsen & Ruby, 1941; Sirigu et al., 1991b). Previous research has revealed that PPC

neurons are also responsible for the gaze-dependent encoding of the hand and target positions


(Andersen, Essick, & Siegel, 1985; Buneo & Andersen, 2006; Medendorp et al., 2008;

Mountcastle et al., 1975). Although these studies have mostly used visual targets and visible body

positions, there is some evidence that gaze-dependent coding occurs in conditions without visual

inputs (Darling et al., 2007; Filimon, 2010; Medendorp et al., 2008). In a PET functional brain

imaging study, Darling et al. (2007) found additional neural activation in the occipital and

posterior parietal lobes when their participants reached to a memorized somatosensory target (i.e.,

position held by the unseen hand before being passively displaced). Based on these findings, the

authors suggested that networks employed to guide reaches to visual targets were also used when

reaching to memorized somatosensory targets. These findings therefore build on those of Darling

et al. (2007) by showing that visual processes are also involved when planning movements to the

actual, non-memorized, position of kinesthetic targets.

Previous studies have shown that, in some cases, visual information is attenuated (i.e.,

gated, see Chapman, Bushnell, Miron, Duncan, & Lund, 1987) during movement planning to

somatosensory targets (Bernier, Gauthier, & Blouin, 2007; Blouin et al., 2014; Sarlegna &

Sainburg, 2007; Sober & Sabes, 2005). Evidence from both behavioural and neurophysiological

experiments has indicated that, when target locations are encoded in somatosensory coordinates

(e.g., fingers of the non-reaching hand), visual information about the reaching limb does not

contribute to the same extent to computation of the reach vector (Sober & Sabes, 2005). Although

there is substantial evidence that transformation networks exist to support direct conversions of

somatosensory information into a movement vector, conflicting evidence about the contexts

wherein these networks are employed has led to uncertainty about their role in motor planning

processes (Battaglia-Mayer et al., 2003; Bernier et al., 2009; Buneo & Andersen, 2006; Prevosto

et al., 2010; Saradjian, 2015). Based on the results of Study A, one contextual factor that should

be considered when evaluating the involvement of these networks is the body representation used

to define somatosensory target locations.

Lastly, it should be noted that the results of the present study do not necessarily support a

disengagement of the occipital cortex and PPC when planning movements to auditory-cued visual

targets, or that visual information was not relevant for planning movements to visual targets. Rather, the

results of the present study suggest that planning movements when vision of the reaching hand and

target is available does not require additional processing from areas sensitive to visual inputs,


compared to the control condition, at least at the time when the visual probe was presented (i.e.,

100 ms after target cue presentation).

Conclusion

In summary, it was found that the sensory modality employed to identify the position of a

somatosensory target impacts the body representation used for movement planning processes.

Preparing reaching movements towards auditory-cued somatosensory targets prompted the use of

an exteroceptive representation of the body. Conversely, an interoceptive body representation was

used when the position of the somatosensory targets was indicated by tactile stimulation.

Furthermore, it was found that the time required to initiate movement towards auditory-cued

somatosensory targets is longer than when the targets are indicated by tactile cues. This delay

likely indicates that the use of an exteroceptive representation to encode body position involves

more complex sensorimotor transformation processes as compared to conditions where the

transformation of target and effector positions is not required. By measuring the cortical response

to a visual stimulus, it was found that the additional sensorimotor transformation processes elicited

by the use of an exteroceptive body representation were associated with increased visual

processing in occipital and posterior parietal areas. Taken together, the findings of the present

study indicate that the body representation used to derive somatosensory target locations impacts

sensorimotor integration processes during movement planning. And more specifically, auditory

cues indicating the somatosensory target to be reached invokes visual sensorimotor processes and

an exteroceptive body representation.


Rapid online corrections for upper-limb reaches to perturbed somatosensory targets: Evidence for non-visual sensorimotor transformation processes

A version of Study B has been accepted for publication in the journal Experimental Brain Research.


Study B

Abstract

When performing upper-limb reaches, the sensorimotor system can adjust to changes in target

location even if the reaching limb is not visible. To accomplish this task, sensory information about

the new target location and the current position of the unseen limb are used to program online

corrections. Previous researchers have argued that, prior to the initiation of corrections,

somatosensory information from the unseen limb must be transformed into a visual reference

frame. However, most of these previous studies involved movements to visual targets. The purpose

of the present study was to determine if visual sensorimotor transformations are also necessary for

the online control of movements to somatosensory targets. Participants performed reaches towards

somatosensory and visual targets without vision of their reaching limb. Target positions were

either stationary or perturbed before (~450 ms) or after (~100 ms or ~200 ms) movement onset.

In response to target perturbations after movement onset, participants exhibited shorter correction

latencies, larger correction magnitudes, and smaller movement endpoint errors when they reached

to somatosensory targets as compared to visual targets. Because reference frame transformations

have been shown to increase both processing time and errors, these results indicate that hand

position was not transformed into a visual reference frame during online corrections for movements

to somatosensory targets. These findings support the idea that different sensorimotor

transformations are used for the online control of movements to somatosensory and visual targets.

Keywords: reaching, online-control, somatosensory targets, sensorimotor transformations,

double-step


Introduction

When performing movements towards visual targets, the motor system can successfully adapt to

changes in target location (Day and Lyon 2000; Johnson et al. 2002; Sarlegna and Mutha 2015;

Smeets et al. 1990). These corrections can occur even when perturbations are not detected or when

there is no vision of the reaching limb (Goodale et al. 1986; Heath 2005; Komilis et al. 1993;

Pélisson et al. 1986; Reichenbach et al. 2009; Saunders and Knill 2003). When the reaching limb

is not visible, visual information about the new target location and somatosensory information

about the limb’s position can be used to perform trajectory amendments. Because movements to

visual targets are hypothesized to be planned in a visual reference frame (Ambrosini et al. 2012;

Buneo and Andersen 2006; Thompson et al. 2012, 2014), previous studies have argued that the

reaching limb’s position must be converted into visual coordinates prior to the initiation of

corrections (Prablanc & Martin, 1992; Reichenbach et al., 2009). It is unknown if such online

sensorimotor transformation processes also occur for reaches to non-visual targets. The purpose

of the present study was to investigate the sensorimotor transformation processes involved

in the online control of reaches to somatosensory targets performed without vision of the reaching

limb.

Online visuomotor transformation processes for movements to visual targets

Previous studies have found that rapid reaches to perturbed targets are less accurate and have longer correction latencies when there is no vision of the reaching limb than when the limb is visible. For example, Komilis et al. (1993) had participants

perform reaching movements to visual targets that were perturbed either at movement onset or at

peak velocity. Movements could be performed either with or without vision of the limb. Similar

to previous studies (Goodale et al., 1986), the authors found that participants were able to

completely correct their movement trajectories in response to target perturbations at movement

onset, regardless of whether or not the limb was visible. However, for perturbations at peak limb

velocity, movements performed without vision of the limb were slower and slightly less accurate

than movements performed with vision of the limb (see also Heath 2005). Reichenbach et al.


(2009) also noted that correction latencies were longer (+10 ms based on EMG and +30 ms based

on limb kinematics) when participants performed reaches to visual targets without vision of their

limb compared to when they reached with vision of their limb. It was reasoned that, without vision

of the limb, corrections are programmed based on the updated visual target location and on visual estimates of the current limb position derived from somatosensory inputs and efferent information. Reichenbach and colleagues concluded that, when the limb is not visible, additional time is required to transform somatosensory information about the limb position into a visual estimate prior to the programming of the correction. Together, the results of these studies support the idea that a common visual reference frame is used for the online control of actions, irrespective of the sensory modality used to encode hand position (see also Buneo and Andersen 2006; Buneo et al. 2008).

Sensorimotor transformation processes for movements to somatosensory targets: planning and online control

The idea that a common visual reference frame is used for the online control of goal-directed

actions is based primarily on studies of movements to visual targets. Very little is known about

online sensorimotor control processes for movements to somatosensory targets. Research on

movement planning processes, however, has revealed that the reference frame used to plan movements to somatosensory targets could be visual (Blangero et al. 2005; Jones and Henriques

2010; Pouget et al. 2002) or non-visual (Bernier et al. 2007, 2009; Blouin et al. 2014; McGuire

and Sabes 2009; Sarlegna and Sainburg 2007; Sober and Sabes 2005).

Both Pouget et al. (2002) and Jones and Henriques (2010) found that when participants

shifted their gaze position prior to reaching towards somatosensory targets, participants’ endpoint

errors were biased in the direction opposite to the gaze-shift (see also Blangero et al., 2005). The

effect of gaze position on movement accuracy was similar to that observed when participants

reached to visual targets (see Bock, 1986; Henriques et al., 1998). As mentioned above in Section 2.4.3.1.1 (Behavioural Evidence for a Gaze-Dependent Reference Frame), movements to external visual targets are planned in visual coordinates. Because these studies have shown

similar errors for movements to targets of both modalities, the authors concluded that movements

to somatosensory targets were also planned in a visual coordinate system (see also Mueller and

Fiehler 2014).


In contrast, other studies have found that movements to somatosensory targets can be

planned in a non-visual coordinate system. For example, Sarlegna and Sainburg (2007) found that

altering visual information about the initial hand position had no effect on endpoint errors when

participants reached to somatosensory targets but did hinder endpoint performance when they

reached to visual targets (see also McGuire and Sabes 2009; Sober and Sabes 2005). Because both

the limb and the target can be represented in somatosensory coordinates when there is no vision

of the limb (Battaglia-Mayer et al., 2003), and movements were unaffected by visual perturbations,

the authors concluded that the computation of the movement vector occurred in a non-visual

reference frame. Other studies have also suggested that using non-visual sensorimotor

transformations for movement planning to somatosensory targets may avoid errors which are

associated with the conversion from a somatosensory coordinate system to a visual coordinate

system (Sarlegna et al., 2009).

Although sensorimotor transformations in both visual and non-visual coordinate systems

have been found for the planning of movements to somatosensory targets, the type of sensorimotor

transformation employed for online trajectory amendments remains unclear. In the present study,

the latency and magnitude of online trajectory corrections to perturbed visual and somatosensory

targets were assessed to investigate the sensorimotor transformation processes used for the online

control of an unseen reaching limb. It was hypothesized that if a visual reference frame is

employed, longer correction latencies and smaller corrections should be observed for reaches to

somatosensory targets as compared to reaches to visual targets. This is because somatosensory

cues from both the perturbed somatosensory target position and reaching limb would have to be

converted into visual coordinates prior to the initiation of corrections. For movements to visual

targets, only the reaching limb position would require a conversion into a visual coordinate system.

In contrast, if corrections are programmed in a non-visual reference frame then we would expect

faster and more accurate corrections in response to somatosensory target perturbations compared

to visual target perturbations. Corrections in a non-visual reference frame would be programmed

using the new target and limb position in somatosensory coordinates, without the need for a

reference frame transformation.


Methods

Participants

Fourteen participants (10 women; aged 20–33 years, M = 25, SD = 4) took part in the experiment. All

participants were right-handed (assessed by the Edinburgh handedness questionnaire, adapted

from Oldfield 1971), self-declared neurologically healthy, and had normal or corrected-to-normal

vision.

Written informed consent was obtained prior to the experimental protocol and the

University of Toronto’s Office of Research Ethics approved all procedures. Including the informed

consent, breaks, and debriefing, the experiment lasted approximately 3 hours and participants were

compensated with $20 CAD.

Apparatus

A drawn representation of the experimental setup is shown in Figure 22. The experiment was

conducted in a completely dark room, where participants were seated comfortably on a kneeling

chair facing a protective cage. Participants placed their head on a headrest that was positioned on

the outside of the cage. They interacted with the experimental materials that were inside the cage

through a window (80 cm high). A small microphone (FBA_4330948551, Phantom YoYo) that

was used for the vocal response time protocol (see below) was placed near the headrest position.

Inside the cage was a Selectively Compliant Assembly Robot Arm (SCARA; Epson

E2L853, Seiko Epson Corp.) that was used to position target stimuli in both the visual and

somatosensory target conditions (see Manson et al. 2014 and appendix 2 for details on the robotic

device). Located directly below the robot was a table with a custom-built aiming apparatus placed

on the table’s surface.

The aiming apparatus included a black tinted Plexiglas aiming surface (60 cm wide by 45

cm long by 0.5 cm thick) mounted 12 cm above a wooden base. Underneath the aiming surface,

there was a textured home position (2 cm by 2 cm). On the top of the aiming surface was a blue

light emitting diode (LED, 2 mm diameter) that served as the gaze fixation point. The LED was

aligned with the participant’s midline and was located ~65 cm from the participant’s eye position.


In the robot’s neutral position (i.e., for no-perturbation trials), the custom end-effector

attached to the robot arm was positioned 0.5 cm above the aiming surface and 35 cm to the left of

the home position. Participants grasped the robot’s end effector with their left hand and were

instructed to depress a micro-switch (B3F-1100, OMRON) with their index finger. The micro-

switch served as both a reference for the somatosensory target location and a safety mechanism that shut off the robot if the button was released (note: no participant released the switch during the study). In the visual target condition, a green LED (~6 mm diameter) was attached to the robot’s

end effector (i.e., at the microswitch location) and served as the target.

The participant’s reaching fingertip and the robot’s end effector were both affixed with an

infrared light emitting diode (IRED). An Optotrak Certus (Northern Digital Inc.) motion tracking

system was used to track movements of the robotic arm and the reaching finger at 200 Hz. A

custom MATLAB (The Mathworks Inc.) program was used to gather data from the Optotrak and

microphone, as well as send outputs to the aiming console and robotic effector. A piezoelectric

buzzer (SC628, Mallory Sonalert Products Inc.) was used to provide brief auditory cues to the

participants.


Figure 22. A drawn representation of the experimental setup (not to scale) is shown in panels (a) and (b). Panel (c) is a representation of the aiming console and stimuli for each target modality. Participants sat comfortably facing the aiming apparatus in a completely dark room. A robotic device was used to deliver target perturbations for both somatosensory (left finger) and visual (LED) targets. Participants performed leftward arm movements to reach the position of the target as if it was projected onto the aiming surface.


Procedure

Participants completed the experimental tasks over two sessions, one for each target modality. The

presentation of target modality was counterbalanced across participants, and the time between

sessions was between 5 and 14 days (M = 10.5 days) for most participants.

Each session consisted of two protocols: a vocal response time protocol, and a reaching

protocol. In the first protocol, participants were asked to make a vocal response to the perturbation

of the target stimulus. In the second protocol, participants were asked to reach to the target stimulus

as accurately as possible within a movement time bandwidth (i.e., 450 – 600 ms from movement

start to movement end). Participants were given the instructions for the reaching protocol only

once they had completed the vocal response time protocol.

Vocal Response Time Protocol

The goal of the vocal response time protocol was to examine whether the modality of the target

could alter the time taken to detect the motion of the target. For the somatosensory target session,

participants responded to perturbations of the left limb placed on the robot’s end effector. For the

visual target session, participants responded to the perturbation of the target LED placed on the

robot’s end effector. During both the sessions, participants placed their right index finger on the

home position located underneath the aiming surface (see Fig. 22a and 22b).

At the beginning of each trial, the fixation light was turned on. After the experimenter

verified that participants were on the home position, the trial was started. After a random

foreperiod, the robot perturbed the visual or somatosensory target either 3 cm toward or away from

the participants, or 3 cm to the left of them (i.e., “catch trials”). The duration (M = 200 ms, SD =

3.8) and velocity of the target perturbations were the same in both the vocal response time protocol

and the reaching protocol. Participants were instructed to verbally respond with “Yo!” as soon as

the target stimulus moved either toward or away from their body, and to not respond when the

target moved to their left. Note that participants did react on 0.02% of the catch trials; however, none of the participants reacted to more than one catch trial. Following the recording of the vocal

response, the fixation light turned off and the robot returned to the neutral position.


Participants were first presented with a set of familiarization trials (6 trials) wherein each possible perturbation direction (i.e., away, toward, and catch) was presented twice in a row. Following the familiarization trials, participants were presented with a randomized order of 58 perturbation trials (24 away, 24 toward, and 10 catch). Vocal response times were computed as the time

difference between the onset of target displacement (i.e., when the velocity of the robot surpassed

30 mm/s) and the response time recorded by the microphone (see Fig. 23).
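The vocal response time computation described above can be sketched as follows. This is an illustrative Python sketch, not the analysis code actually used (data acquisition relied on a custom MATLAB program); the function name, array layout, and use of the 200 Hz sampling rate are assumptions based on the apparatus description.

```python
import numpy as np

def vocal_response_time(robot_pos, mic_onset_s, fs=200, vel_thresh=30.0):
    """Vocal RT (ms) = microphone response time minus target-motion onset.

    robot_pos : 1-D array of robot end-effector positions (mm), sampled at fs Hz.
    mic_onset_s : time (s, on the same clock) at which the vocal response occurred.
    Motion onset is the first sample where robot speed exceeds vel_thresh (mm/s).
    """
    vel = np.abs(np.gradient(robot_pos) * fs)   # central-difference speed, mm/s
    onset_idx = np.argmax(vel > vel_thresh)     # first supra-threshold sample
    onset_s = onset_idx / fs
    return (mic_onset_s - onset_s) * 1000.0
```

For example, a target that starts moving at 60 mm/s half a second into the trial, answered vocally at 0.95 s, yields a response time of 450 ms.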

Reaching Protocol

After participants completed the vocal response time protocol, the experimenter explained the

reaching protocol and trial procedures. The reaching protocol consisted of 4 phases:

familiarization, perception of target position in the pre-test, reaching trials, and perception of target

position in the post-test. All 4 phases were performed for both target modalities.

There were two kinds of reaching trials: no-perturbation and perturbation reaching trials.

In the no-perturbation trials, participants performed movements from the home position to the

neutral target position within a movement time bandwidth of 450 – 600 ms. Each trial started with

the illumination of the fixation LED. In the visual target session, the target LED was also

illuminated at the same time as the fixation LED. Four hundred milliseconds after the fixation

LED was turned on, an auditory go signal (50 ms beep) cued participants to begin their reaching

movement. The time instant at which finger velocity rose above and fell below 30 mm/s for more

than 10 ms marked movement start and end, respectively. Once the reaching movement was

completed, participants received auditory feedback about their movement time and the fixation

LED was turned off. The auditory feedback indicated to the participants whether or not their

movement time was within the time bandwidth. Participants were presented with two short (50

ms) beeps if their movement time was within the bandwidth; one long (100 ms) beep if their

movement time was shorter than the lower limit of the bandwidth; or three short (50 ms) beeps

if their movement time was longer than the upper limit of the bandwidth. The beeps also served

as a signal to move back to the home position to begin the next trial.
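The movement start and end criteria above (velocity rising above, then falling below, 30 mm/s for more than 10 ms) could be implemented as in the following sketch. This is an illustrative Python reconstruction; the function name and the interpretation of "more than 10 ms" as at least three consecutive samples at 200 Hz are assumptions.

```python
import numpy as np

def movement_onset_offset(speed, fs=200, thresh=30.0, min_dur_ms=10.0):
    """Return (start_idx, end_idx) of a reach from a fingertip speed trace (mm/s).

    Start: first sample of a run where speed stays above `thresh` for more than
    `min_dur_ms`. End: first subsequent sample of a comparable run below it.
    """
    min_samp = int(np.ceil(min_dur_ms / 1000.0 * fs)) + 1  # "more than 10 ms"
    above = np.asarray(speed) > thresh

    def first_run(mask, start_from=0):
        # index of the first sample of the first run of `min_samp` True values
        count = 0
        for i in range(start_from, len(mask)):
            count = count + 1 if mask[i] else 0
            if count == min_samp:
                return i - min_samp + 1
        return None

    start = first_run(above)
    end = None if start is None else first_run(~above, start)
    return start, end
```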

In the perturbation trials, the target was shifted 3 cm away from or towards the participant

either 300 ms before the go signal, or ~100 ms or ~200 ms after the movement onset. On average,

these perturbations occurred 450 ms (SD = 73 ms) before movement onset, or 93 ms (SD = 4 ms)


or 190 ms (SD = 4 ms) after movement onset (see Figure 23 for velocity profiles of hand and

robot movements in each perturbation condition). The change of target position required a change

in movement direction (considering that movements are planned as a vector defined in terms

of amplitude and direction; Buneo and Andersen 2006; Dadarlat et al. 2015, Desmurget et al.

1998). Offline analyses showed that the 100 ms perturbation time occurred prior to peak velocity,

at a time during which visual information has been found to be important for online corrections

(Kennedy et al., 2015; Tremblay et al., 2017). The 200 ms time roughly corresponded with the

peak velocity of the aiming movements. This time may be too late to use visual feedback

effectively (e.g., Kennedy et al., 2015) but may still be viable for the use of somatosensory

information (Goodman et al. 2018; Redon et al. 1991). Perturbations before movement onset were

included as a control condition to compare the corrections that resulted from planning and online

control processes between somatosensory and visual conditions.

Participants were asked to reach to the position of the target stimulus (e.g., the surface area

of the finger on the button, or target LED) as if projected onto the underneath of the aiming surface.

To perform the reaching task, participants performed an “underhanded” reaching movement,

primarily using muscles acting at the shoulder joint. Participants' wrists remained supinated throughout

the trajectory. To discourage wrist movements, participants also wore a wrist orthotic (Champion-

C218, Champion Health and Sports Supports, Cincinnati, USA). Participants performed 180

reaching trials for each target modality (360 trials total). These trials consisted of 90 no-

perturbation trials and 90 perturbation trials (15 trials in each of the 6 perturbation conditions).

None of the perturbation conditions were repeated more than twice consecutively.
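A trial order in which no perturbation condition repeats more than twice consecutively can be generated by rejection sampling, as in the hedged Python sketch below. This is an assumed implementation for illustration, not the randomization code actually used in the experiment.

```python
import random

def constrained_shuffle(trials, max_run=2, seed=None):
    """Shuffle trial labels so no label repeats more than `max_run` times in a row.

    Rejection sampling: reshuffle until the longest run satisfies the constraint.
    """
    rng = random.Random(seed)
    while True:
        order = trials[:]
        rng.shuffle(order)
        run, ok = 1, True
        for prev, cur in zip(order, order[1:]):
            run = run + 1 if cur == prev else 1
            if run > max_run:
                ok = False
                break
        if ok:
            return order
```

Rejection sampling is a reasonable choice here because, with many interleaved conditions, most random permutations already satisfy the constraint, so few reshuffles are needed.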


Figure 23. Examples of velocity profiles of the reaching hand and the robotic effector (Target) for each perturbation time. Panel (a) shows a perturbation occurring before movement onset; panel (b) shows a perturbation occurring ~100 ms after movement onset; and panel (c) shows a perturbation occurring ~200 ms after movement onset.


Familiarization

After receiving the instructions for the reaching task, participants performed 30 trials to familiarize

themselves with the experimental task, the auditory feedback, and the movement time bandwidth.

Participants were presented with 18 no-perturbation trials followed by 12 perturbation trials (2

trials in every perturbation condition).

Perception of Target Position Pre- and Post-Tests

To record their perceptions of target positions, participants were asked to reach to where they

perceived each target position to be, as if it were projected onto the underside of the

Plexiglas. Participants first reached to the centre target and adjusted their index finger until they

felt it matched the target’s position. Participants then verbally indicated to the experimenter when

their hand was on the target position, and this position was recorded. Once the reaching hand

was returned to the home position, the robotic effector was moved to the ‘away’ target position

and the response procedure was repeated. Finally, the entire sequence was then repeated for the

‘toward’ target position. Participants’ perceived target locations were recorded twice during each

session, once after the familiarization trials (pre) and again after the completion of the reaching

trials (post). The perceived target locations recorded during these trials were used for endpoint

error calculations (see constant and variable error sections in results).

Data Analysis

Vocal Response Time Protocol

For vocal response time data, trials with a response time more than 3 standard deviations higher

or lower than the participant’s mean were removed. These accounted for 5.2% of all vocal response

trials. Vocal response data were submitted to a 2-perturbation direction (away, toward) by 2-target

modality (somatosensory, visual) repeated-measures ANOVA.

Reaching Protocol

Trials with movement times, reaction times, or endpoint errors in both the movement

amplitude axis (initial movement axis) and movement direction axis (axis of perturbation) that


were more than 3 standard deviations from the mean were excluded from the analyses. This

resulted in the exclusion of 4.5% of all reaching trials. The main dependent variables for this

experiment were constant error, variable error, correction magnitude, and the latency of online

corrections.
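The 3-standard-deviation exclusion criterion described above can be sketched as follows (illustrative Python; applying the trim per participant and per measure is an assumption consistent with the text, and the use of the sample standard deviation is an implementation choice):

```python
import numpy as np

def trim_outliers(values, n_sd=3.0):
    """Keep trials within n_sd standard deviations of the (per-participant) mean."""
    values = np.asarray(values, dtype=float)
    keep = np.abs(values - values.mean()) <= n_sd * values.std(ddof=1)
    return values[keep]
```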

Constant and Variable Error

Constant error was calculated as the bias in endpoint position relative to the participant’s averaged

perceived target position (calculated using the pre- and post- target perception trials). Constant

error was computed for both the amplitude and the direction axes (hereafter referred to as

amplitude constant errors and direction constant errors, respectively). Variable errors were

computed by calculating the standard deviation of these constant errors (hereafter referred to as

amplitude variable error and direction variable error).

For amplitude constant errors, positive values indicated an overshoot relative to the target location

whereas negative values indicated an undershoot relative to the target location. Similarly, for

direction constant errors, positive values represent an over-correction relative to the new target’s

position, whereas negative values represent an under-correction relative to the new target’s

position.
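For a single axis (amplitude or direction), the constant and variable error computations above reduce to a mean signed bias and the standard deviation of that bias. The Python sketch below is illustrative; the function name and the use of the sample standard deviation are assumptions, not drawn from the thesis's analysis code.

```python
import numpy as np

def endpoint_errors(endpoints, perceived_target):
    """Constant and variable error along one axis.

    endpoints : 1-D array of movement end positions (mm) on that axis.
    perceived_target : the participant's averaged perceived target position (mm),
        taken from the pre- and post-test perception trials.
    Constant error = mean signed bias; variable error = SD of those biases.
    """
    bias = np.asarray(endpoints, dtype=float) - perceived_target
    constant_error = bias.mean()
    variable_error = bias.std(ddof=1)   # sample SD (assumed)
    return constant_error, variable_error
```

With the sign conventions above, a positive constant error on the amplitude axis indicates an overshoot, and a negative one an undershoot.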

Amplitude and direction constant and variable errors were submitted to separate 2-target modality

(somatosensory, visual) by 2-perturbation direction (away from the body and toward the body) by

3-perturbation time (before, 100 ms, 200 ms) repeated-measures ANOVAs.

Correction Magnitude

Correction magnitude was calculated as the absolute difference between the average

end position of the perturbation trials (e.g., before, 100 ms, and 200 ms) and the average end

position of the no-perturbation trials. This measure was only computed for the direction axis (i.e.,

axis of the perturbations). It is worth noting that there were no differences in overall endpoint

variability between somatosensory and visual target conditions in the no-perturbation conditions

t(13) = -1.35, p = 0.20 (see results section: Comparison of no-perturbation trials, and Table 2).
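The correction magnitude computation can be sketched minimally as follows (illustrative Python; the function name is assumed):

```python
import numpy as np

def correction_magnitude(pert_endpoints, nopert_endpoints):
    """Correction magnitude on the direction (perturbation) axis: the absolute
    difference (mm) between the average end position of perturbation trials
    and the average end position of no-perturbation trials."""
    return abs(np.mean(pert_endpoints) - np.mean(nopert_endpoints))
```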

Correction magnitudes were submitted to a 2-target modality (somatosensory, visual) by 2-


perturbation direction (away, toward) by 3-perturbation time (before, 100 ms, and 200 ms)

repeated-measures ANOVA.

Latency of Online Corrections

The method of determining the latency of online corrections was adapted from Oostwoud

Wijdenes et al. (2013). Using this method, correction latency was computed based on a linear

extrapolation of the differences in the average acceleration profiles in the movement direction axis

(axis of the perturbation) between no-perturbation and perturbation trials (see also Oostwoud

Wijdenes, Brenner, & Smeets, 2011; Veerman et al., 2008). When tested on simulated data, this

extrapolation method was deemed to be the most accurate and precise method for detecting

correction latencies (Oostwoud Wijdenes et al., 2014).

Acceleration profiles in the movement direction axis were computed by double differentiation of the displacement data obtained from sampling the finger IRED, and by subsequently low-pass filtering these time series with a second-order recursive bidirectional Butterworth filter at 50 Hz. For each participant, the difference in the average acceleration profiles was computed

between no-perturbation and 100 ms perturbation trials, and no-perturbation and 200 ms

perturbation trials. These difference profiles were then used to compute the correction latency.

The no-perturbation trials used for the computation of the average acceleration profile were

selected based on the distribution of movement times used to compute the profile in the

perturbation condition. The number of control trials used to compute the average trajectories thus

varied for each condition within each participant. The number of control trials used ranged from

25 to 85 (M = 59 trials, SD = 14). Overall, movement times between perturbation trials (M = 544

ms, SD = 17) and control trials (M = 543 ms, SD = 16) were not significantly different, as indicated

by a paired-samples t-test, t(13) = 0.86, p = 0.41.

To determine correction latency, the maximum acceleration value occurring after

perturbation was first identified. Second, a line was drawn between the points on the acceleration

profile corresponding to 25% and 75% of the maximum acceleration. Response latency was

defined as the difference between the time of perturbation and the time where this line crossed


zero (i.e., a y value of zero; see Oostwoud Wijdenes et al. 2013, 2014; Veerman et al. 2008, and Figure 24 for a graphical representation of the method applied to participants’ data from the present study).
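The full latency pipeline, double differentiation, 50 Hz second-order bidirectional Butterworth filtering, the perturbation-minus-control acceleration difference, and extrapolation of the line through the 25% and 75% points down to its zero crossing, can be sketched in Python with SciPy. This is an illustrative reconstruction under stated assumptions (trial-by-sample displacement arrays time-locked to movement onset; the 25% and 75% points taken as the first post-perturbation samples exceeding those fractions of the peak difference), not the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def correction_latency_ms(pos_pert, pos_ctrl, pert_time_s, fs=200, cutoff=50.0):
    """Extrapolation method (after Oostwoud Wijdenes et al., 2013), sketched.

    pos_pert, pos_ctrl : (n_trials, n_samples) direction-axis displacement (mm),
        time-locked to movement onset, for perturbation and control trials.
    pert_time_s : time of the target perturbation relative to movement onset (s).
    Returns the estimated correction latency in ms.
    """
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")

    def mean_accel(pos):
        # double differentiation, then zero-lag (bidirectional) low-pass filtering
        acc = np.gradient(np.gradient(pos, axis=1), axis=1) * fs ** 2
        return filtfilt(b, a, acc.mean(axis=0))

    diff = mean_accel(pos_pert) - mean_accel(pos_ctrl)
    t = np.arange(diff.size) / fs
    post = t >= pert_time_s

    peak = diff[post].max()
    # first post-perturbation samples reaching 25% and 75% of the peak difference
    i25 = np.argmax(post & (diff >= 0.25 * peak))
    i75 = np.argmax(post & (diff >= 0.75 * peak))
    # extrapolate the 25%-75% line back to its zero crossing
    slope = (diff[i75] - diff[i25]) / (t[i75] - t[i25])
    t_zero = t[i25] - diff[i25] / slope
    return (t_zero - pert_time_s) * 1000.0
```

Because the extrapolation recovers the foot of the acceleration-difference ramp, the estimate is largely insensitive to the exact height of the peak, which is one reason this method tested as accurate and precise on simulated data.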

Correction latencies were submitted to a 2-target modality (somatosensory, visual) by 2-

perturbation time (100 ms, 200 ms) by 2-perturbation direction (away, toward) repeated-measures

ANOVA. Note that corrections to the target perturbations that occurred before movement onset

likely occurred during movement planning and are therefore not reflective of online control

processes. Furthermore, accurate correction latencies for the “before” perturbation time are

unlikely to be detected by this method. For these reasons, the “before”

perturbation condition was not included in the statistical design.

It is also worth noting that for this analysis, the absolute values of correction latency were

not as important as the between-modality differences. In the present study, because of technical limitations in syncing the Optotrak and robotic apparatuses, reaching data were sampled at a lower frequency (200 Hz) than what was used in previous studies (500 Hz: Oostwoud Wijdenes et al. 2011, 2014). Also, this was the first time such a method was applied to reaching movements performed with a supinated wrist posture towards a somatosensory target. Thus, some

contrasts between our values and those commonly found in the literature were expected.


Figure 24. The extrapolation method for determining the latency of online corrections. For each participant, average acceleration profiles in the direction axis were computed for both perturbation and no-perturbation trials. The acceleration difference between these profiles (Accel Difference) was then plotted to calculate correction latency. Correction latencies were computed by drawing a line (Extrapolation Line) between 75% and 25% of the maximum difference in the Accel Difference profile (Extrapolation points) and extrapolating the line to the first zero crossing. The time between the perturbation and the zero crossing was defined as the correction latency. Panel (a) shows this method applied to averaged data for somatosensory target perturbations and panel (b) shows the method applied to averaged data for visual target perturbations.


Comparison of no-perturbation trials

To examine the effect of target modality on reaching performance and kinematics, paired samples

t-tests were performed (effect sizes reported with Cohen’s dz) on reaction time, movement time,

total movement amplitude, amplitude and direction constant and variable errors, as well as on time

to, and time after peak velocity. Note that for these comparisons, direction constant errors were

defined using a coordinate system relative to the home and target positions. Negative values

indicated deviations closer to the body with respect to the target, and positive values indicated

endpoints further away from the body with respect to the target.

Statistics and post-hoc tests

All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS; IBM Inc., version 20). For all t-tests and repeated-measures ANOVAs, alpha was set to 0.05.

Only the significant main effects and interactions were reported. Also, when main effects could be

solely explained by a higher order interaction, only the break-down of the interaction was reported.

The Huynh-Feldt correction was used to correct the degrees of freedom (corrected to 1 decimal

place) when the assumption of sphericity was violated. Tukey's Honestly Significant Difference (HSD) test was used to decompose all significant interactions.
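The HSD values reported with the interactions below are critical differences: two condition means differ significantly when their absolute difference exceeds the criterion q(alpha; k, df) * sqrt(MS_error / n). The thesis analyses were run in SPSS; the helper below is only a hedged sketch of that criterion using SciPy's studentized-range distribution (the function name and parameters are illustrative).

```python
from scipy.stats import studentized_range

def tukey_hsd_criterion(ms_error, n, k, df_error, alpha=0.05):
    """Tukey HSD critical difference: q(alpha; k, df) * sqrt(MS_error / n).

    ms_error: error mean square from the ANOVA; n: observations per mean;
    k: number of means being compared; df_error: error degrees of freedom.
    """
    q = studentized_range.ppf(1.0 - alpha, k, df_error)
    return q * (ms_error / n) ** 0.5
```

For instance, against the reported HSD of 22 ms in the vocal response task, the 46 ms somatosensory-visual detection difference for away perturbations exceeds the criterion. The criterion grows with the number of means compared, since the studentized-range quantile increases with k.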

Results

Vocal Response Protocol

For target shift detection times, the analysis yielded a significant main effect of stimulus modality, F(1,13) = 5.04, p < 0.05, ηp² = 0.28, and a significant stimulus modality by perturbation direction interaction, F(1,13) = 5.56, p < 0.05, ηp² = 0.30, HSD = 22 ms. Overall, participants detected

perturbations to somatosensory targets (M = 452 ms, SD = 95) faster than perturbations to visual

targets (M = 486 ms, SD = 104). Breaking down the interaction between modality and perturbation

direction revealed that, when the stimulus was perturbed away from the body, there was a larger

difference in detection times between somatosensory and visual stimuli (somatosensory away: M

= 445 ms, SD = 100; visual away: M = 491 ms, SD = 102) than when the stimuli were perturbed

toward the body (somatosensory towards: M = 459 ms, SD = 92; visual towards: M = 481 ms, SD

= 108).


Reaching Protocol

Normalized trajectory profiles for each condition in the reaching protocol are displayed in Figure 25. The overall temporal and kinematic features of the movements are shown in Table 3, and the endpoint accuracy measures are shown in Table 4.

Comparison of no-perturbation trials

First, no-perturbation trials were analyzed to determine whether the sensory modality of the target

had significant effects on the different reaching variables. The analyses revealed significant effects of target modality on reaction time, t(13) = 2.98, p < 0.05, dz = 0.8; movement amplitude, t(13) = 4.12, p < 0.001, dz = 1.1; and constant error in the movement direction axis, t(13) = -3.15, p < 0.01, dz = 0.8 (for all values, see Tables 3 and 4). Participants took more time to initiate

movements to somatosensory targets (M = 330 ms, SD = 60) compared to visual targets (M = 253

ms, SD = 71). Also, participants had larger movement amplitudes when reaching to somatosensory

targets (M = 35.2 cm, SD = 3.4) as compared to visual targets (M = 31.2 cm, SD = 5.0). Finally,

participants' endpoints were distributed further away from the body when reaching to visual targets

(M = 1.31 cm, SD = 1.19) compared to somatosensory targets (M = 0.16 cm, SD = 1.07).


Figure 25. Average reaching trajectories for each condition in the reaching protocol. Panels (a) and (b) depict perturbations occurring before movement onset. Panels (c) and (d) depict perturbations 100 ms after movement onset. Panels (e) and (f) depict perturbations occurring 200 ms after movement onset. Trajectories were normalized, with each point representing 2% of movement duration. Error bars indicate the between-subject standard deviation of spatial position.
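The 2%-per-point normalization described in this caption amounts to resampling each trial onto a fixed 51-point time grid. A minimal sketch, under the assumption of linear interpolation between position samples (function and variable names are illustrative, not the thesis code):

```python
import numpy as np

def time_normalize(trajectory, n_points=51):
    """Resample a trajectory to n_points equally spaced in movement time.

    trajectory: (n_samples, n_dims) array of positions recorded at a fixed
    rate between movement onset and offset. With n_points = 51, successive
    points are 2% of movement duration apart.
    """
    traj = np.asarray(trajectory, dtype=float)
    old_t = np.linspace(0.0, 1.0, traj.shape[0])   # original samples on 0-100% of movement time
    new_t = np.linspace(0.0, 1.0, n_points)        # resampling grid (every 2%)
    return np.column_stack([np.interp(new_t, old_t, traj[:, d])
                            for d in range(traj.shape[1])])
```

Averaging the resampled trials point-by-point then yields the mean trajectories and between-subject standard deviations plotted in the figure.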


Table 3. Mean and standard deviation for the temporal and kinematic variables of movements to somatosensory (somato) and visual targets. Away/Toward denote the perturbation direction.

| Variable | Target | No Perturbation | Before: Away | Before: Toward | 100 ms: Away | 100 ms: Toward | 200 ms: Away | 200 ms: Toward |
|---|---|---|---|---|---|---|---|---|
| Reaction Time (ms) | Visual | 253 (71) | 119 (69) | 105 (62) | 254 (75) | 253 (93) | 257 (77) | 257 (77) |
| | Somato | 330 (60) | 127 (83) | 135 (75) | 326 (48) | 324 (60) | 332 (61) | 320 (68) |
| Movement Time (ms) | Visual | 521 (22) | 518 (27) | 506 (32) | 559 (40) | 544 (32) | 533 (24) | 559 (30) |
| | Somato | 522 (14) | 515 (20) | 519 (32) | 554 (22) | 525 (33) | 550 (30) | 536 (30) |
| Robot-Hand Start Difference (ms) | Visual | - | -449 (71) | -435 (69) | 91 (2) | 91 (2) | 192 (1) | 191 (2) |
| | Somato | - | -453 (85) | -459 (73) | 95 (3) | 95 (4) | 193 (2) | 192 (2) |
| Time to Peak Velocity (%) | Visual | 39 (5) | 36 (5) | 37 (4) | 37 (5) | 37 (5) | 37 (6) | 37 (4) |
| | Somato | 38 (5) | 36 (5) | 37 (5) | 36 (5) | 40 (7) | 37 (6) | 38 (6) |
| Time after Peak Velocity (%) | Visual | 61 (5) | 64 (5) | 63 (4) | 63 (5) | 63 (5) | 63 (6) | 63 (4) |
| | Somato | 62 (5) | 64 (5) | 63 (5) | 64 (5) | 60 (7) | 63 (6) | 62 (6) |
| Peak Velocity (m/s) | Visual | 1.12 (0.18) | 1.13 (0.18) | 1.24 (0.20) | 1.12 (0.14) | 1.11 (0.18) | 1.13 (0.18) | 1.11 (0.18) |
| | Somato | 1.31 (0.17) | 1.33 (0.19) | 1.45 (0.21) | 1.27 (0.18) | 1.34 (0.15) | 1.28 (0.17) | 1.30 (0.18) |


Table 4. Mean and standard deviation (in cm) for the accuracy variables of movements to somatosensory (somato) and visual targets. Away/Toward denote the perturbation direction.

| Variable | Target | No Perturbation | Before: Away | Before: Toward | 100 ms: Away | 100 ms: Toward | 200 ms: Away | 200 ms: Toward |
|---|---|---|---|---|---|---|---|---|
| Movement Amplitude | Visual | 31.2 (5) | 30.9 (5) | 31.4 (4.6) | 30.9 (4.8) | 31.6 (4.8) | 31.2 (5.0) | 31.6 (4.8) |
| | Somato | 35.2 (3.4) | 34.7 (3.5) | 35.7 (3.4) | 34.6 (3.6) | 36.1 (3.4) | 34.7 (3.5) | 36.2 (3.4) |
| Amplitude Constant Error | Visual | -0.43 (2.22) | -0.02 (1.97) | 0.19 (2.33) | 0.02 (1.83) | 0.40 (2.47) | 0.23 (1.91) | 0.39 (2.50) |
| | Somato | 0.38 (1.49) | 0.48 (2.14) | 0.66 (2.18) | 0.38 (2.12) | 1.07 (2.30) | 0.44 (1.97) | 1.14 (2.45) |
| Direction Constant Error | Visual | 1.32 (1.20) | 0.47 (1.82) | -0.10 (1.63) | -0.14 (2.02) | -0.80 (2.11) | -1.62 (1.45) | -2.03 (2.03) |
| | Somato | 0.16 (1.07) | -0.33 (1.30) | -0.07 (1.69) | -0.06 (1.21) | 0.60 (1.33) | -1.03 (1.35) | -0.73 (1.24) |
| Amplitude Variable Error | Visual | 1.47 (0.35) | 1.60 (0.92) | 1.48 (0.71) | 1.55 (0.93) | 1.23 (0.30) | 1.51 (0.63) | 1.43 (0.78) |
| | Somato | 1.55 (0.49) | 1.53 (0.61) | 1.55 (0.38) | 1.62 (0.52) | 1.60 (0.63) | 1.65 (0.67) | 1.55 (0.54) |
| Direction Variable Error | Visual | 1.39 (0.35) | 1.20 (0.37) | 1.28 (0.47) | 1.56 (0.64) | 1.64 (0.41) | 1.71 (0.55) | 1.82 (0.48) |
| | Somato | 1.28 (0.23) | 0.99 (0.28) | 1.34 (0.38) | 1.12 (0.28) | 1.70 (0.69) | 1.37 (0.55) | 1.94 (0.76) |
| Correction Magnitude | Visual | - | 3.46 (0.94) | 3.71 (0.81) | 3.08 (0.97) | 3.10 (0.70) | 1.83 (0.57) | 2.08 (0.72) |
| | Somato | - | 4.18 (0.99) | 4.68 (1.53) | 4.45 (0.86) | 5.34 (1.23) | 3.53 (0.90) | 4.06 (1.02) |


Constant and Variable Errors

For direction constant error, the ANOVA yielded a significant main effect of perturbation time, F(2,26) = 40.85, p < .001, ηp² = 0.76, HSD = 0.41 cm, and a significant modality by perturbation time interaction, F(2,26) = 11.94, p < .001, ηp² = 0.48, HSD = 0.64 cm. Breaking down the 2-way

interaction revealed that participants showed smaller direction constant errors in response to target

perturbations in both the 100 ms and 200 ms perturbation conditions when they reached to

somatosensory targets as compared to when they reached to visual targets. No difference in

direction constant error between target modalities was observed in response to perturbations before

movement onset (see Fig. 26a and Table 3 & 4). For the trials with target perturbations, the

analyses yielded no significant main effects or interactions for amplitude constant and amplitude

variable errors (Fs > 0.09 & ps > 0.098).

For direction variable error, the analysis yielded a significant main effect of perturbation direction, F(1,13) = 16.80, p < .001, ηp² = 0.62; perturbation time, F(2,26) = 21.00, p < .001, ηp² = 0.56, HSD = 0.20 cm; and a target modality by perturbation direction interaction, F(1,13) = 11.54, p < .01, ηp² = 0.47, HSD = 0.25 cm. Participants' endpoint variability was significantly higher when they

reached to targets perturbed 200 ms after movement onset (M = 1.71 cm, SD = 0.62) compared to

when they reached to targets perturbed 100 ms after movement onset (M = 1.51 cm, SD= 0.56).

Direction variable errors were also significantly higher in response to both 100 ms and 200 ms

perturbations than in response to perturbations before movement onset (M = 1.20 cm, SD = 0.39).

Breaking down the target modality by perturbation direction interaction revealed that, when

moving to a somatosensory target, participants' direction variable errors were significantly lower

when the target was perturbed towards the body (M = 1.16 cm, SD = 0.37) compared to when the

target was perturbed away from the body (M =1.66 cm, SD = 0.56). There were no differences

between perturbation directions for reaches to visual targets (global M = 1.62 cm, SD = 0.58).

Correction Magnitude

The analysis of correction magnitude yielded a significant main effect of target modality, F(1,13) = 70.07, p < .001, ηp² = 0.84; perturbation time, F(2,26) = 39.81, p < .001, ηp² = 0.75, HSD = 0.63 cm; perturbation direction, F(1,13) = 7.71, p < .05, ηp² = 0.37; and a target modality by perturbation time interaction, F(2,26) = 10.04, p < .001, ηp² = 0.44, HSD = 0.55 cm. Participants exhibited


larger corrections in response to targets perturbed toward the body (M = 3.83 cm, SD = 1.47)

compared to targets perturbed away from the body (M = 3.42 cm, SD = 1.20). Decomposing the

target modality by perturbation time interaction revealed that, overall, participants performed larger

corrections in response to somatosensory target perturbations compared to visual target

perturbations and that, compared to the before condition, the decrease in correction magnitudes observed in the 200 ms condition was significantly larger for movements to visual targets (1.63 cm decrease) compared to movements to somatosensory targets (0.64 cm decrease; see Fig. 26b).


Figure 26. (a) Direction constant error. Participants were more accurate when performing reaches to somatosensory targets perturbed after movement onset compared to when performing reaches to visual targets. When aiming to visual targets, participants exhibited a larger under-correction relative to when making movements to somatosensory targets. (b) Correction magnitude. Participants exhibited larger corrections in response to somatosensory target perturbations than in response to visual target perturbations at all perturbation times. Furthermore, for both modalities, participants exhibited smaller corrections in response to perturbations at 200 ms than in response to the before and 100 ms perturbation times.


Latency of Online Corrections Following Target Perturbation

The analysis of correction latencies yielded a significant main effect of target modality, F(1,13) = 501.00, p < .001, ηp² = 0.96; perturbation direction, F(1,13) = 18.11, p < .001, ηp² = 0.58; and a target modality by perturbation direction interaction, F(1,13) = 11.62, p < .01, ηp² = 0.47, HSD = 30 ms. The main effect of target modality revealed that correction latencies in response to

somatosensory target perturbations were significantly shorter (M = 68 ms, SD = 20) than correction

latencies to visual target perturbations (M = 188 ms, SD = 46). Breaking down the interaction

revealed that, for visual targets, correction latencies were significantly shorter when the target was

perturbed away from the body (M = 164 ms, SD = 30) as compared to when the target was

perturbed towards the body (M = 213 ms, SD = 46). There were no significant differences in

correction latency between directions for movements to somatosensory targets (see Fig. 27).

A supplementary analysis was performed to investigate if the differences between

modalities in correction latencies could be explained by the differences in perturbation detection

times (see Appendix 3). It was found that the differences in vocal response times between visual

and somatosensory target modalities (M = 34 ms, SD = 61) were largely and significantly lower

than differences between modalities in correction latency (M = 120 ms, SD = 41). Overall, the

results of this analysis revealed that between modality differences in target shift detection could

not fully explain the differences in correction latency observed between the visual and

somatosensory target conditions.


Figure 27. Correction latencies in response to target perturbations. Overall, participants corrected faster for somatosensory target perturbations than for visual target perturbations. Furthermore, for visual targets, correction latencies were longer in response to targets perturbed toward the body.


Discussion

The goal of the present study was to investigate whether the online sensorimotor transformations

for movements to somatosensory targets, performed without vision of the limb, occurred in a visual

or non-visual reference frame. Participants performed reaches to both somatosensory and visual

targets that were either stationary or perturbed either before (~ 450 ms) or after (~ 100 ms or ~200

ms) movement onset. If sensorimotor transformations for the online control of movements to

perturbed somatosensory targets employed a visual reference frame, then higher endpoint errors

and longer correction latencies were expected in response to such perturbations. In contrast to this

hypothesis, participants produced larger corrections and were more accurate when reaching to

perturbed somatosensory targets compared to when reaching to perturbed visual targets. Also,

correction latencies were shorter in response to somatosensory target perturbations than in

response to visual target perturbations that occurred after movement onset. Taken together, these

results provide evidence that non-visual sensorimotor transformations are employed for the online

control of movements to somatosensory targets when reaching with an unseen limb.

Participants were able to implement adjustments in response to somatosensory target

perturbations (average correction latency = 68 ms) more rapidly than in response to visual target

perturbations (average correction latency = 188 ms). These differences in latency were observed

even though the amplitude, the speed and timing of target displacements were the same for both

conditions. Correction latencies in response to the shift of visual target position (e.g., 120 – 300

ms) were in the range of what is typically found in other studies that examined reaching movements

performed without vision of the reaching limb (Day and Lyon 2000; Komilis et al. 1993; Prablanc

and Martin 1992; Reichenbach et al. 2009; Saunders and Knill 2003). Furthermore, the noted

correction latencies were longer than the online visual feedback processing times observed when

vision of the reaching limb is available (e.g., ~100 ms: Carlton 1992; Oostwoud Wijdenes et al.

2013; Zelaznik et al. 1983). These findings suggest that, similar to previous studies, corrections in

response to perturbed visual targets likely involved the online remapping of the unseen reaching

hand position to a visual reference frame.


In contrast, correction latencies in response to somatosensory target perturbations were

much shorter than those commonly found when examining movements to visual targets. As

mentioned above (see above Section: 5.1.3.6.5 Latency of Online Corrections), technical

differences between the current study and previous work could account for some of these

discrepancies. However, the much shorter latencies in response to somatosensory target perturbations as compared to visual target perturbations do provide evidence against the hypothesis that the online control of reaching movements occurs in a common visual reference

frame. For upper-limb reaches to perturbed somatosensory targets, both the unseen target location

and the unseen limb position would have to be remapped onto an extrinsic coordinate system prior

to the initiation of online trajectory amendments. These transformations would likely require more

time (Reichenbach et al. 2009) and result in greater errors (Sarlegna et al., 2009) than

transformations required when reaching to a visual target with the unseen hand, where

only the latter must be remapped in visual coordinates. Thus, in the present study, it is likely that

online sensorimotor transformations occurred in a non-visual reference frame for planning and

correcting reaches to somatosensory targets.

The use of non-visual sensorimotor transformations for reaches to somatosensory targets

with the unseen hand would support the more rapid and more accurate corrections found in the

present study. For this type of sensorimotor transformation, corrections would be computed in

somatosensory coordinates based on inputs from both the target and the reaching limb (Battaglia-

Mayer et al. 2003; Burnod et al. 1999; Sarlegna and Sainburg 2007). Previous studies investigating

connections of the posterior parietal cortex in macaque monkeys have revealed neural

networks capable of performing computations in somatosensory coordinates (Prevosto et al.,

2011). Specifically, the medial intraparietal area of the posterior parietal cortex, which is

implicated in sensorimotor transformations during the planning and control of arm movements

(Buneo and Andersen 2006; Desmurget et al. 1999; Reichenbach et al. 2014), was shown to receive

direct projections from both the somatosensory cortex (area 3a) and the dorsal column nuclei

(Prevosto et al., 2010, 2011). It is thus possible that when reaching to a somatosensory target,

updates to target location and reaching limb positions are processed directly through these network

connections. Some support for this hypothesis could be drawn from previous studies which showed

that disrupting processing in the medial intraparietal area through transcranial magnetic stimulation


impaired somatosensory-based corrections during the control of goal-directed actions

(Reichenbach et al., 2014).

The short correction latencies noted when participants reached to somatosensory targets

may also suggest the use of predictive mechanisms for correcting the movements. The fact that the

target finger was always displaced by 3 cm may have contributed to the speed of movement

corrections. That is, a priori knowledge of the final finger target position might have facilitated

processes responsible for the trajectory corrections, thus reducing the correction latency.

Although the visual target was always similarly displaced by 3 cm, the latencies for correcting the

movements were much longer in the visual target condition. Together, these observations might

suggest that predictive mechanisms are facilitated for controlling goal-directed arm movements

online when both the target and the reaching hand can be encoded in a common sensory modality.

Similar to previous studies (e.g., Saunders and Knill, 2003; Reichenbach et al., 2009), there

was no effect of perturbation time on correction latency for movements to both visual and

somatosensory targets. This finding supports the results of previous studies showing that online

corrections occurred at roughly the same time relative to the onset of visual target perturbations

(i.e., 163 ms after) in response to both early (25% of movement distance) and mid (50% of

movement distance) perturbation conditions (Saunders & Knill, 2003). This result was taken as

evidence for the pseudo-continuous use of visual feedback throughout the reaching trajectory (see

also Elliott et al. 1991). In the present study, our observation that correction latencies were not

significantly altered by perturbation time when reaching to both visual and somatosensory targets

may indicate that somatosensory information can be used in a pseudo-continuous manner at least

in the first 200 ms of a movement lasting at least 500 ms (see also Tremblay et al. 2017).

It is also important to note that the aforesaid differences in correction latencies between

movements to visual and somatosensory targets were not attributable to differences in the detection

of the target displacement. Although participants detected perturbations to somatosensory targets

faster than they detected perturbations to visual targets, the between-modality difference in detection time was much smaller than the between-modality difference in correction latencies (see

Appendix 3).


The response times in both detection tasks (i.e., visual and somatosensory) may appear

longer (i.e., >440 ms) than those reported in previous studies. However, it is important to note that, in the

present study, participants had to respond only if the visual or somatosensory targets moved toward

or away from them and had to refrain from responding when the targets moved to their left.

Therefore, the detection tasks used here can be considered go/no-go tasks, which are known

to increase the latency of the go response compared to the response latency obtained through

simple reaction tasks (Miller and Low 2001). Moreover, the longer detection times in the visual

target condition could also be explained by the fact that it takes less time to detect motion onset

than to detect motion direction (Blouin et al. 2010).

In agreement with the correction latency results, the analyses of direction constant errors

revealed that participants were more effective at implementing corrections in response to perturbed

somatosensory targets compared to perturbed visual targets. Participants exhibited larger

corrections when reaching to perturbed somatosensory targets compared to perturbed visual targets

for all perturbation times. Moreover, the constant error analyses revealed that participants were

more accurate when correcting for somatosensory target perturbations as compared to visual target

perturbations after movement onset (i.e., 100 ms and 200 ms conditions). No differences in

accuracy were noted with respect to target modality for target perturbations that occurred before

movement onset.

Similar to previous studies (Goodale et al., 1986; Komilis et al., 1993), the current results

showed that, if the target was perturbed during movement planning, participants were able to fully

correct for changes in target location. In the present study, the time available to implement

trajectory amendments in response to perturbations that occurred before movement onset (> 900

ms) could explain the absence of differences in movement accuracy between target modalities.

In contrast, when target perturbations occurred after movement onset, movement endpoints

were more accurate for reaches to somatosensory targets compared to visual targets. This finding

could have two possible explanations. First, because correction latencies were ~120 ms shorter in

the somatosensory target condition, participants had more time to implement corrections when

reaching to somatosensory targets than to visual targets. Second, when both the target and the hand

are mapped in the same sensory coordinate system, sensorimotor transformations leading to the


movements would be more accurate compared to when a reference frame conversion is necessary

(Blouin et al. 2014; Sarlegna et al. 2009; Sober and Sabes 2005). To investigate whether target

information obtained from somatosensory sources provides a better estimate of new target

position, future studies should examine how disruptions in the accuracy of somatosensory

information from the target limb (e.g., via tendon vibration) affect endpoint accuracy. If more

accurate reference frame transformations are responsible for endpoint accuracy, then disrupting

somatosensory target information should decrease movement accuracy. This decrease in accuracy

would also be independent of the time required to implement corrections.

In the present study, differences were found when targets were perturbed away vs. towards

the body, and that was the case for both the detection and control processes. Participants were

quicker at detecting perturbed somatosensory targets compared to perturbed visual targets, but the

difference between the two detection times was smaller when the targets were perturbed toward

the body. This result could be explained by the physical attributes of the experimental setup. In the

somatosensory target condition, the limb was already in slight extension. Thus, further extension

may be more easily sensed than flexion due to the increased loading of muscle spindles (Hulliger,

Further extension also shifts the limb farther from the body's centre of gravity, likely

invoking a greater postural response that could have been more salient due to greater activation of

vestibular and cutaneous receptors (Lacquaniti & Soechting, 1986). For the visual target, the

physical setup of the experiment could have also played a role. Because of the position of the eyes

and the fixation LED, the angular displacement of the target resulting from the 3 cm perturbations

was greater when the target moved towards than away from the body. Even though all

perturbations took place in the lower visual field, it is possible that this wider change in angle

could have facilitated perturbation detection.

For somatosensory targets, the shorter detection times appeared to have a positive impact

on performance as movements to somatosensory targets perturbed away from the body were

significantly more precise than movements to somatosensory targets perturbed towards the body.

In contrast, the pattern of results in detection times was not consistent with the results obtained for

correction latency for visual targets. For example, it was found that correction latency was

significantly longer for movements to visual targets perturbed towards the body compared to visual

targets perturbed away from the body. These findings are therefore consistent with the hypothesis


that detection processes have very little influence on correction processes during ongoing reaching

movements (Smeets, Oostwoud Wijdenes, & Brenner, 2016).

Conclusions

The present study demonstrated that, without vision of the reaching limb, movements to perturbed

somatosensory targets are corrected faster and more accurately than movements to visual

targets. Thus, in contrast to movements to external visual targets, movements to somatosensory

targets are likely controlled using a non-visual transformation process based on somatosensory

information about the reaching limb and target positions. These findings lend support to the idea

that different sensorimotor transformations, and perhaps different cortical networks, are

responsible for the online control of movements to somatosensory and visual targets performed

without vision of the limb.


General Discussion

Thesis Findings and Future Directions

Summary of Thesis Findings

The current dissertation provides new knowledge pertaining to the sensorimotor transformation

processes underlying the planning and control of movements to somatosensory targets. Experiment

A1 revealed that endpoint errors for movements to auditory-cued somatosensory targets were more

biased by gaze position than movements to tactile-cued somatosensory targets. Also, shifts in gaze-

position resulted in increases in reaction time for movements to auditory-cued somatosensory

targets relative to both auditory-cued visual targets and tactile-cued somatosensory targets. These

results provide evidence that movements to auditory-cued somatosensory targets were planned in

a visual reference frame, and perhaps required more complex sensorimotor transformations.

Experiment A2 investigated the neural basis of sensorimotor transformations for planning

movements to auditory-cued somatosensory targets versus auditory-cued visual targets. It was

found that there was a facilitation of visual information processing when planning movements to

auditory-cued somatosensory compared to auditory-cued visual targets. These results provide

evidence that cortical visual networks play a role in the sensorimotor transformation processes

responsible for the remapping of movements to somatosensory targets into a visual reference

frame. Study B examined if online corrections for movements to somatosensory targets are

programmed in a visual reference frame. Participants showed larger correction magnitudes, more

accurate movement endpoints, and shorter correction latencies when reaching with an unseen limb

to perturbed somatosensory targets as compared to when reaching to perturbed visual targets.

These results provide evidence that non-visual sensorimotor transformations were used for the

online control of movements to perturbed somatosensory targets. The aforesaid findings were

discussed with respect to their relevance to other experimental studies, clinical work, and theories of motor

control.


Gaze-(in)dependent Encoding of Somatosensory Targets is Influenced by the Modality of the Imperative Stimulus

In Experiment A1, gaze-independent coding was observed for movements to somatosensory

targets cued by a vibrotactile stimulus applied to the target position. This result was consistent

with previous studies that showed that if the eyes and somatosensory target positions remain stable

during movement planning, endpoint errors for reaches to vibrotactile-cued somatosensory targets

are not biased by gaze-position (e.g., Mueller & Fiehler, 2014a). Although the mechanistic

explanation for the gaze-independent mapping of somatosensory targets remains speculative,

previous studies suggest gaze-independent coding could be facilitated by the close links between

tactile stimulus processing and the somatotopic maps associated with the interoceptive body

representation (see section 3.2.2).

Studies in primates have found that somatosensory areas in the cortex contain somatotopic

maps of body positions that respond to electrical stimulation of peripheral nerves in the mapped

regions of the body (Kopietz et al., 2009; Sato et al., 2005). Specifically, when tactile stimulation

is applied to different digits, neural activation patterns corresponding to the stimulated digit could

be derived (Sato et al., 2005). Importantly, networks for different digits could be distinguished

even though the stimulated regions were close in proximity. Because these areas contain reciprocal

connections to premotor, motor, and parietal areas, it is possible that tactile stimulation could

promote movement planning using target coordinates derived from the somatotopic map of the body

(see Cisek and Kalaska, 2002; Rushworth et al., 1998). Overall, the absence of tactile stimulation

in the auditory-cued conditions may limit access to somatosensory coordinates when planning

voluntary arm movements. In this case, a more cognitive, exteroceptive, representation of the

target and limb positions may be used for movement planning.

The idea that an exteroceptive body representation could be used for planning movements

to auditory-cued somatosensory targets is supported by previous studies showing that movements

to somatosensory targets are planned in a visual reference frame (Blangero et al., 2005; Cohen &

Andersen, 2002; Jones & Henriques, 2010; Pouget et al., 2002). This finding, however, contrasts

with other studies arguing that the reference frame selected for planning movements to

somatosensory targets can be non-visual (Bernier et al., 2009; Bernier & Grafton, 2010; McGuire

& Sabes, 2009; Sober & Sabes, 2003, 2005). Studies examining movements to targets of different


modalities have revealed that the sensorimotor transformation processes prior to movements to

somatosensory targets are distinct from the transformation processes used for movements to visual

targets. For example, somatosensory information was found to contribute more to movement errors

when planning movements to somatosensory targets than to visual targets (Bernier et al., 2007;

Sarlegna & Sainburg, 2007; Sober & Sabes, 2003, 2005). Also, other studies have shown that the

cortical processes and activation patterns associated with movement planning are different

for movements to visual and somatosensory targets (Bernier et al., 2009; Bernier & Grafton, 2010;

Blouin et al., 2014). Although the results of Experiment A1 are not necessarily in conflict with

target modality-dependent transformation hypotheses, the results do suggest that these hypotheses

should be modified. The results of Experiment A1 extend the literature on modality-dependent

processing by providing evidence that factors other than target modality could influence

integration processes prior to movement onset. This hypothesis coincides with a growing body of

literature suggesting that somatosensory target positions may be represented concurrently in multiple

reference frames, and the reference frame selected is based on task-dependent sensory re-

weighting processes (Badde, Röder, & Heed, 2014; Battaglia-Mayer et al., 2003; Liu & Ando,

2018).

Previous studies have found that the reference frame used to identify body positions is

dependent on the goal of the task and the sensory information available. One prominent example

is the crossed limb temporal order judgement task (Yamamoto & Kitazawa, 2001). In this task,

participants are asked to judge the relative onset of two tactile stimuli applied to the right and left

limb with their eyes closed. Participants have more difficulty reporting the correct side of the first

stimulation when adopting a crossed limb posture compared to an uncrossed limb posture (Badde

et al., 2014; Noel & Wallace, 2016; Overvliet, Azañon, & Soto-Faraco, 2011; Yamamoto &

Kitazawa, 2001). In contrast to tactile detection tasks, the adoption of a crossed limb posture has

no effect on errors when participants are asked to reach to the location of the tactile stimulus

(Brandes & Heed, 2015). Analysis of reach trajectories reveals, however, that the trajectories of hand

movements toward tactile locations are corrected later when moving to tactile targets located on

crossed limbs versus uncrossed limbs (Brandes & Heed, 2015). Furthermore, this difference

between crossed and uncrossed postures occurs only when multiple target locations on different limbs are

used. In the crossed limb posture, the exteroceptive representation of the body is in a spatial


conflict with somatosensory signals, as the right limb is located on the left side of the body and the

left limb is located on the right side of the body. Thus, for tasks involving the identification of

body positions, or determining the location of stimuli applied to body positions, it is possible that

an exteroceptive, visual representation is used.

In Study A, participants had to use auditory information as a cue to derive the spatial location

of a body position. Even though participants were capable of accurately completing these

localizations, our results suggest that similar to the crossed-limb judgement and reaching tasks,

body positions were localized in visual coordinates prior to the programming of the movement.
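The remapping hypothesis discussed above can be illustrated with a toy computation: encoding a target in a gaze-centered (visual) frame amounts to subtracting the fixation point from the target's body-centered location, and any gain applied to that retinal vector produces gaze-dependent endpoint biases of the kind observed in Experiment A1. A minimal sketch, in which the coordinates and the bias gain are purely illustrative assumptions rather than fitted values:

```python
def to_gaze_centered(target_body, gaze_body):
    """Re-express a body-centered location relative to the current fixation point."""
    return tuple(t - g for t, g in zip(target_body, gaze_body))

def predicted_endpoint(target_body, gaze_body, bias_gain=0.1):
    """Toy model: the reach endpoint is biased in proportion to the target's
    retinal eccentricity (bias_gain is an arbitrary illustrative value)."""
    retinal = to_gaze_centered(target_body, gaze_body)
    return tuple(t + bias_gain * r for t, r in zip(target_body, retinal))

target = (10.0, 0.0)                                  # arbitrary units
foveated = predicted_endpoint(target, (10.0, 0.0))    # gaze on the target
eccentric = predicted_endpoint(target, (0.0, 0.0))    # gaze shifted away
# Fixating the target yields no bias; shifting gaze away biases the endpoint,
# mirroring the gaze-dependent errors seen for auditory-cued somatosensory targets.
```

Under a gaze-independent (somatotopic) encoding, by contrast, the endpoint would not vary with the fixation point at all.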

Activation of Cortical Networks Associated with Visuomotor Transformations for Movements to Auditory-Cued Somatosensory Targets

In Experiment A2, there was an increased cortical response to a task-irrelevant visual stimulus

during the planning of movements to auditory-cued somatosensory targets versus auditory-cued

visual targets. The sources of this response were localized to occipital and parietal-occipital areas,

which are typically associated with visuomotor processing and sensorimotor transformations

(Buneo & Andersen, 2006; Cohen & Andersen, 2002; Davare et al., 2012; Reichenbach et al.,

2014; Vesia et al., 2008). This increase in cortical visual information processing indicates that

visual and visuomotor processing networks were recruited when movements to somatosensory

target positions were planned in a visual reference frame. These networks could therefore be

implicated in the sensorimotor transformations responsible for converting somatosensory

information about the unseen hand and target locations into visual coordinates.

In agreement with the results of Experiment A2, previous studies have also implicated the

left occipital and posterior parietal cortices in the visual mapping of somatosensory information.

For example, studies on autotopagnosia have revealed that lesions in the left occipital and parietal

cortices result in an inability to identify the spatial location of body positions (Buxbaum and

Coslett, 2001; Felician et al., 2004; Sirigu et al., 1991b). Studies on movement control have also

found that virtual lesions to posterior parietal areas are associated with impairments in the

online transformation of somatosensory information into visual coordinates (Reichenbach et al.,


2014). The results of Experiment A2 build on these previous findings by demonstrating that these

networks are also involved in the visual remapping of somatosensory targets prior to upper-limb reaching

movements. Furthermore, in accordance with Experiment A1, the results of Experiment A2

suggest that alterations in cortical sensory information processing are not exclusively driven by

target modality, but also by the relevance of the sensory information to sensorimotor transformations.

The assertion that cortical processing of sensory information is influenced by factors

beyond target modality is supported by studies examining environmental alterations in sensory cue

relevance. Previous research has demonstrated that the late cortical response to proprioceptive

stimulation (i.e., vibration of the lower limbs) was larger when participants prepared stepping

movements versus when standing still in normal gravity. Importantly, no differences in the cortical

response to somatosensory stimuli between standing and stepping were found in microgravity

(Saradjian, Tremblay, Perrier, Blouin, & Mouchnino, 2013). This facilitation in the cortical

processing of somatosensory information in normal gravity is likely related to equilibrium

constraints when shifting the centre of pressure to unload the stepping leg. The absence of a

facilitation in somatosensory information processing in microgravity could thus be related to the

lack of equilibrium constraints and centre of pressure shifts, as observed by Mouchnino et al.

(1996). Changes in visual sensory information processing have been shown in studies examining

mirror-reversed drawing (Lebar et al., 2015). It was found that accurate performers had enhanced

responses to task-irrelevant visual stimuli during mirror-reversed drawing compared to rest. In

mirror-reversed drawing, accurate performance is more dependent on visual information about

limb position than somatosensory information (Balslev et al., 2004; Lajoie et al., 1992). Therefore,

the accurate performance of movements would require a greater weighting of visual relative to

somatosensory inputs. Taken together, the results of these studies, Experiment A2, and others (see

Saradjian, 2015 for review), provide evidence that the cortical processing of sensory information

depends not only on target modality, but is flexible. In the case of Study A, our results indicate

that the processing of visual information may be enhanced because of its relevance to the processes

required to transform the unseen hand and somatosensory target locations into visual coordinates.


Fast Correction Latencies Suggest Control Based on Optimized Feedback Use for Movements to Perturbed Somatosensory Targets

The observation that the online control of arm movements to perturbed somatosensory targets is

facilitated by non-visual sensorimotor transformation processes is also a key finding of this

dissertation. Previous studies have argued that online corrections are programmed in a visual

reference frame (e.g., Komilis et al., 1993; Prablanc & Martin, 1992; Reichenbach et al., 2009;

Thompson et al., 2014). One limitation of this previous work is that it primarily investigated

movements to visual targets. If online control were to occur in a visual reference frame for movements

to somatosensory targets, both the target position and reaching limb position would have to be

remapped onto a visual coordinate system prior to the programming of corrections. These

transformation processes would require time, and likely result in larger errors than movements

where no transformation is required (Sarlegna et al., 2009; Study A). In Study B, it was found

that corrections were indeed initiated faster, and were more accurate when participants corrected

their movements to perturbed somatosensory targets compared to perturbed visual targets. These

results provided evidence that non-visual transformation processes were used for the online control

of reaches to somatosensory targets.

The use of a non-visual reference frame for trajectory corrections would require

computation of a corrective movement based on the updated target location and the unseen limb

position in somatosensory coordinates. Although neural connectivity studies have revealed that

the PPC and premotor areas both contain reciprocal connections and inputs from thalamocortical

neurons that relay dynamic arm information (Prevosto et al., 2010, 2011), there is very little

evidence that these reciprocal networks are employed during online control (Reichenbach et al.,

2014). Also, given the rapid speed of adjustments to somatosensory perturbations in Study B (~70

ms) and others (see Pruszynski & Scott, 2012; Rothwell, Traub, & Marsden, 1980) it is likely that

subcortical (e.g., Nagy, Kruse, Rottmann, Dannenberg, & Hoffmann, 2006; Werner, 1993) and

optimal feedback mechanisms were also employed.
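One way to see why loop delay matters for this argument is with a toy simulation of delayed proportional feedback: the same controller, given a longer feedback delay (for example, because target and limb signals must first be remapped into another reference frame), responds later to a target jump. The dynamics, gain, delays, and jump time below are illustrative assumptions, not parameters fitted to Study B:

```python
def simulate_reach(delay_ms, n_ms=300, gain=0.2, jump_ms=100):
    """1-D reach toward a target that jumps mid-movement; the controller only
    'sees' the target as it was delay_ms ago (limb position is assumed known)."""
    target = [10.0] * jump_ms + [12.0] * (n_ms - jump_ms)
    pos = [0.0] * n_ms
    for t in range(1, n_ms):
        seen = target[max(t - delay_ms, 0)]   # delayed target information
        pos[t] = pos[t - 1] + gain * (seen - pos[t - 1])
    return pos

fast = simulate_reach(delay_ms=70)    # short, non-visual loop (~70 ms, as in Study B)
slow = simulate_reach(delay_ms=160)   # longer loop with an added remapping stage
# By 200 ms the short-delay controller has already turned toward the new target,
# while the long-delay controller is still heading to the old location.
```

The qualitative point is simply that any extra transformation stage inserted into the feedback loop postpones the trajectory's response to the perturbation, which is why the short latencies observed here argue against an obligatory visual remapping step.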

The results of Study B revealed significantly shorter correction latencies in response to

perturbations to somatosensory targets as compared to visual targets. With the methodological

limitations of the computation of correction latencies taken into consideration, somatosensory

correction times were still much lower than what would be predicted solely by feedback control

mechanisms. Recent advances in optimal control theory (see Scott, 2004) have proposed that rapid

motor responses (such as the long latency stretch reflex) could contribute to upper-limb control

based primarily on somatosensory information. In a review, Pruszynski and Scott (2012) argue

that voluntary control processes may employ rapid motor responses (~50 ms) to adjust to

mechanical perturbations delivered to the limbs. To support this claim, the authors showed that

rapid motor responses (“reflex responses”) to mechanical perturbations share features with

voluntary responses in that they can be modified by task instructions (Hammond, 1956), features

of the perturbation (Pruszynski & Scott, 2012), and movement goals (Pruszynski, Kurtzer, & Scott,

2008). Furthermore, the neural circuitries underlying rapid motor responses (such as the long

latency stretch reflex) may be accessible to voluntary feedback networks (Pruszynski & Scott,

2012). A fast feedback system based on the recruitment of rapid motor response networks could

serve as a mechanism for the rapid adjustments to somatosensory target perturbations seen in Study

B.
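Correction latencies such as those reported in Study B are typically estimated from kinematic traces, for example as the first moment after the target perturbation at which velocity toward the new target location exceeds a threshold for a sustained period. A minimal sketch of such a detector (the sampling rate, threshold, minimum duration, and synthetic trace are illustrative, not the analysis actually used in Study B):

```python
def correction_latency(lateral_velocity, fs_hz, threshold, min_duration_s=0.02):
    """Return the latency (s, relative to perturbation onset at sample 0) of the
    first sustained supra-threshold deviation, or None if no correction occurs."""
    min_samples = int(min_duration_s * fs_hz)
    run = 0
    for i, v in enumerate(lateral_velocity):
        run = run + 1 if abs(v) > threshold else 0
        if run >= min_samples:
            return (i - min_samples + 1) / fs_hz   # onset of the sustained run
    return None

# Synthetic 1 kHz trace: no lateral motion for 70 ms, then a step toward the new target
fs = 1000
velocity = [0.0] * 70 + [0.5] * 230
latency = correction_latency(velocity, fs, threshold=0.01)   # -> 0.07 s
```

Requiring the deviation to be sustained guards against noise-driven false onsets, at the cost of a small bias that any such threshold-based method must take into account.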

The fast correction times could also be explained by the involvement of subcortical

movement control networks (Alstermark et al., 2007; Azim et al., 2015; Courjon et al., 2015;

Lünenburger et al., 2008). Previous studies on non-human primates have found that subcortical

neuron populations in the superior colliculus code for arm movements independent of gaze-

position (Werner, 1993; Werner, Dannenberg, & Hoffmann, 1997). Neurons in the superior

colliculus also project to motor networks in the spinal cord providing an anatomical basis for a

subcortical movement control pathway (Gandhi & Katnani, 2011). Electrical stimulation of these

pathways during reaching movements in cats has been shown to elicit gaze-independent

deviations in the trajectory in as little as 38 ms (Courjon et al., 2015). Although much more work

in primates is needed to understand if these networks are involved in voluntary movement control,

the speed of these subcortical networks and the capability to produce fast corrections represent a

possible mechanism to explain corrections to somatosensory target perturbations.

Although further work is required to determine the exact neural substrates used for online

corrections in somatosensory coordinates, the findings of Study B extend previous work by

providing evidence for the existence of a rapid and distinct online control mechanism for

movements to somatosensory targets.


Future Directions and Perspectives

The conclusions drawn from these thesis experiments serve as a basis for future research and, in

the case of Study B, a new protocol for investigating potential new ideas.

For the study of sensorimotor transformations prior to action, the next step could be to

examine whether the remapping processes associated with the use of exogenous cues are specific

to the auditory modality, or whether other exogenous cue modalities also facilitate the conversion

of somatosensory target positions into a visual reference frame. For example, what would happen

if tactile cues indicating the somatosensory target position were not delivered to the target

position? Would indirect mapping of tactile cues facilitate the use of an exteroceptive

representation? Studies about conflicting body representations (e.g., Brandes & Heed, 2015)

suggest that this could be the case.

With regard to the findings of Study B, the next step would be to investigate whether

providing visual information about initial hand location affects correction times for movements to

visual and somatosensory targets. Previous studies have revealed that movements to visual targets

are more accurate when vision of the limb is provided compared to when it is not (Prablanc et al.,

1979). Following the logic of Study B, if both the limb and target are initially coded in a visual

reference frame then correction times should be shorter and more accurate than when a

sensorimotor transformation is necessary (i.e., with an unseen limb).


Conclusions

Everyday goal-directed reaches include movements towards our own body. The planning of these

movements may use a visual representation, depending on the cue employed to identify the

somatosensory target location. In contrast, the control of an ongoing reaching action towards a

displaced somatosensory target can rely on the use of a somatosensory representation that does not

entail remapping onto a visual representation. Together, these results extend previous work done

on the impact of sensory contexts on goal-directed actions and open up new possibilities for further

investigations of these mechanisms in movements to somatosensory targets.


References

Alais, D., Newell, F. N., & Mamassian, P. (2010). Multisensory processing in review: From

physiology to behaviour. Seeing and Perceiving, 23(1), 3–38.

http://doi.org/10.1371/journal.pone.0011283

Alberts, B., Johnson, A., & Lewis, J. (2002). Analyzing protein structure and function. In

Molecular Biology of the Cell (4th ed.). New York: Garland Science. Retrieved from

https://www.ncbi.nlm.nih.gov/books/NBK26820/

Allison, T., Wood, C. C., & Goff, W. R. (1983). Brain stem auditory, pattern-reversal visual, and

short-latency somatosensory evoked potentials: Latencies in relation to age, sex, and brain

and body size. Electroencephalography and Clinical Neurophysiology, 55(6), 619–636.

http://doi.org/10.1016/0013-4694(83)90272-9

Allport, A. (1987). Selection for action: Some behavioural and neurophysiological

considerations of attention and action. In H. Heuer & A. F. Sanders (Eds.), Perspectives on

Perception and Action (pp. 395–419). Hillsdale, NJ: Lawrence Erlbaum Associates.

Ambrosini, E., Ciavarro, M., Pelle, G., Perrucci, M. G., Galati, G., Fattori, P., … Committeri, G.

(2012). Behavioral investigation on the frames of reference involved in visuomotor

transformations during peripheral arm reaching. PLoS ONE, 7(12), 1–8.

http://doi.org/10.1371/journal.pone.0051856

Andersen, R. A., Essick, G. K., & Siegel, R. M. (1985). Encoding of spatial location by posterior

parietal neurons. Science, 230(4724), 456–458. http://doi.org/10.1126/science.4048942

Andersen, R. A., & Mountcastle, V. B. (1983). The influence of the angle of gaze upon the

excitability of the light-sensitive neurons of the posterior parietal cortex. Journal of

Neuroscience, 3(3), 532–548.

Andersen, R. A., Snyder, L. H., Bradley, D., & Xing, J. (1997). Multimodal representation of

space in the posterior parietal cortex and its use in planning movements. Annual Review of


Neuroscience, 20(1), 303–330. http://doi.org/10.1146/annurev.neuro.20.1.303

Ashe, J., & Georgopoulos, A. (1994). Movement parameters and neural activity in motor cortex

and area 5. Cerebral Cortex, 4(6), 590–600. http://doi.org/10.1093/cercor/4.6.590

Babb, R. S., Waters, R. S., & Asanuma, H. (1984). Corticocortical connections to the motor

cortex from the posterior parietal lobe (areas 5a, 5b, 7) in the cat demonstrated by the

retrograde axonal transport of horseradish peroxidase. Experimental Brain Research, 54(3),

476–484. http://doi.org/10.1007/BF00235473

Babiloni, F., Babiloni, C., Carducci, F., Fattorini, L., Onorati, P., & Urbano, A. (1996). Spline

Laplacian estimate of EEG potentials over a realistic magnetic resonance-constructed scalp

surface model. Electroencephalography and Clinical Neurophysiology, 98, 363–373.

http://doi.org/10.1016/0013-4694(96)00284-2

Badde, S., Röder, B., & Heed, T. (2014). Multiple spatial representations determine touch

localization on the fingers. Journal of Experimental Psychology. Human Perception and

Performance, 40(2), 784–801. http://doi.org/10.1037/a0034690

Bahill, A. T., Clark, M. R., & Stark, L. (1975). The main sequence, a tool for studying human

eye movements. Mathematical Biosciences, 24(3), 191–204. http://doi.org/10.1016/0025-

5564(75)90075-9

Balslev, D., Christensen, L. O. D., Lee, J.-H., Law, I., Paulson, O. B., & Miall, R. C. (2004).

Enhanced accuracy in novel mirror drawing after repetitive transcranial magnetic

stimulation-induced proprioceptive deafferentation. The Journal of Neuroscience, 24(43),

9698–9702. http://doi.org/10.1523/JNEUROSCI.1738-04.2004

Bard, C., Hay, L., & Fleury, M. (1985). Role of peripheral vision in the directional control of

rapid aiming movements. Canadian Journal of Psychology, 39(1), 151–161.

http://doi.org/10.1037/h0080120

Bard, C., Turrell, Y., Fleury, M., Teasdale, N., Lamarre, Y., & Martin, O. (1999).

Deafferentation and pointing with visual double-step perturbations. Experimental Brain


Research, 125(4), 410–416. http://doi.org/10.1007/s002210050697

Barlow, H. B., Blakemore, C., & Pettigrew, J. D. (1967). The neural mechanism of binocular

depth discrimination. Journal of Physiology, 193(2), 327.

Batista, A. P., & Newsome, W. T. (2000). Visuo-motor control: Giving the brain a hand. Current

Biology, 10(4), 145–148. http://doi.org/10.1016/S0960-9822(00)00327-4

Battaglia-Mayer, A., Caminiti, R., Lacquaniti, F., & Zago, M. (2003). Multiple levels of

representation of reaching in the parieto-frontal network. Cerebral Cortex, 13(10), 1009–

1022. http://doi.org/10.1093/cercor/13.10.1009

Becker, W. (1991). Saccades. In Eye Movements (pp. 95–117). London: Macmillan Press.

Bekkering, H., & Neggers, S. F. W. (2002). Visual search is modulated by action intentions.

Psychological Science, 13(4), 370–374. http://doi.org/10.1111/j.0956-7976.2002.00466.x

Berkeley, G. (1709). Essay Towards a New Theory of Vision.

Berlucchi, G., & Aglioti, S. M. (2010). The body in the brain revisited. Experimental Brain

Research, 200(1), 25–35. http://doi.org/10.1007/s00221-009-1970-7

Bernier, P.-M., Burle, B., Hasbroucq, T., & Blouin, J. (2009). Spatio-temporal dynamics of

reach-related neural activity for visual and somatosensory targets. NeuroImage, 47(4),

1767–1777. http://doi.org/10.1016/j.neuroimage.2009.05.028

Bernier, P.-M., Gauthier, G. M., & Blouin, J. (2007). Evidence for distinct, differentially

adaptable sensorimotor transformations for reaches to visual and proprioceptive targets.

Journal of Neurophysiology, 98(3), 1815–1819. http://doi.org/10.1152/jn.00570.2007

Bernier, P.-M., & Grafton, S. T. (2010). Human posterior parietal cortex flexibly determines

reference frames for reaching based on sensory context. Neuron, 68(4), 776–788.

http://doi.org/10.1016/j.neuron.2010.11.002

Beurze, S. M., Van Pelt, S., & Medendorp, W. P. (2006). Behavioral reference frames for

planning human reaching movements. Journal of Neurophysiology, 96(1), 352–362.


http://doi.org/10.1152/jn.01362.2005

Blakemore, S. J., Wolpert, D. M., & Frith, C. (2000). Why can’t you tickle yourself?

Neuroreport, 11(11), 11–16. http://doi.org/10.1097/00001756-200008030-00002

Blangero, A., Rossetti, Y., Honoré, J., & Pisella, L. (2005). Influence of gaze direction on

pointing to unseen proprioceptive targets. Advances in Cognitive Psychology, 1(1), 9–16.

http://doi.org/10.2478/v10053-008-0039-7

Blouin, J., Saradjian, A. H., Lebar, N., Guillaume, A., & Mouchnino, L. (2014). Opposed

optimal strategies of weighting somatosensory inputs for planning reaching movements

toward visual and proprioceptive targets. Journal of Neurophysiology, 112(9), 2290–2301.

http://doi.org/10.1152/jn.00857.2013

Blouin, J., Teasdale, N., Bard, C., & Fleury, M. (1993). Directional control of rapid arm

movements: the role of the kinetic visual feedback system. Canadian Journal of

Experimental Psychology /Revue Canadienne de Psychologie Expérimentale, 47(4), 678–

696. http://doi.org/10.1037/h0078869

Bock, O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in

goal-directed movements. Experimental Brain Research, 64(3), 476–482.

http://doi.org/10.1007/BF00340484

Boussaoud, D., Barth, T. M., & Wise, S. P. (1993). Effects of gaze on apparent visual responses

of frontal cortex neurons. Experimental Brain Research, 93(3), 423–434.

Boussaoud, D., & Bremmer, F. (1999). Gaze effects in the cerebral cortex: Reference frames for

space coding and action. Experimental Brain Research, 128(1), 170–180.

http://doi.org/10.1007/s002210050832

Bracci, S., Cavina-Pratesi, C., Ietswaart, M., Caramazza, A., & Peelen, M. V. (2012). Closely

overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of

Neurophysiology, 107(5), 1443–1456. http://doi.org/10.1152/jn.00619.2011

Bracci, S., Ietswaart, M., Peelen, M. V., & Cavina-Pratesi, C. (2010). Dissociable neural


responses to hands and non-hand body parts in human left extrastriate visual cortex. Journal

of Neurophysiology, 103(6), 3389–3397. http://doi.org/10.1152/jn.00215.2010

Bradshaw, L. A., & Wikswo, J. P. (2001). Spatial filter approach for evaluation of the surface

laplacian of the electroencephalogram and magnetoencephalogram. Annals of Biomedical

Engineering, 29(3), 202–213. http://doi.org/10.1114/1.1352642

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436.

http://doi.org/10.1163/156856897X00357

Brandes, J., & Heed, T. (2015). Reach trajectories characterize tactile localization for

sensorimotor decision making. The Journal of Neuroscience, 35(40), 13648–13658.

http://doi.org/10.1523/JNEUROSCI.1873-14.2015

Bremner, L., & Andersen, R. A. (2012). Coding of the reach vector in parietal area 5d. Neuron,

75(2), 342–351.

Brenner, E., & Smeets, J. B. J. (1997). Fast responses of the human hand to changes in target

position. Journal of Motor Behavior, 29(4), 297–310.

http://doi.org/10.1080/00222899709600017

Bridgeman, B., Hendry, D., & Stark, L. (1975). Failure to detect displacement of the visual

world during saccadic eye movements. Vision Research, 15(6), 719–722.

http://doi.org/10.1016/0042-6989(75)90290-4

Brodeur, M., Bacon, B. A., Renoult, L., Prevost, M., Lepage, M., & Debruille, J. B. (2008). On

the functional significance of the P1 and N1 effects to illusory figures in the notch mode of

presentation. PLoS ONE, 3(10). http://doi.org/10.1371/journal.pone.0003505

Brown, M., Marmor, M., Vaegan, Zrenner, E., Brigell, M., & Bach, M. (2006). ISCEV standard

for clinical electro-oculography (EOG) (2006). Documenta Ophthalmologica, 113(3), 205–

212. http://doi.org/10.1007/s10633-006-9030-0

Budisavljevic, S., Dell’Acqua, F., & Castiello, U. (2018). Cross-talk connections underlying

dorsal and ventral stream integration during hand actions. Cortex, 103, 224–239.


http://doi.org/10.1016/j.cortex.2018.02.016

Bullock, D., Cisek, P., & Grossberg, S. (1998). Cortical networks for control of voluntary arm

movements under variable force conditions. Cerebral Cortex, 8, 48–62.

http://doi.org/10.1093/cercor/8.1.48

Buneo, C. A., & Andersen, R. A. (2006). The posterior parietal cortex: Sensorimotor interface

for the planning and online control of visually guided movements. Neuropsychologia,

44(13), 2594–2606. http://doi.org/10.1016/j.neuropsychologia.2005.10.011

Buneo, C. A., Batista, A. P., Jarvis, M. R., & Andersen, R. A. (2008). Time-invariant reference

frames for parietal reach activity. Experimental Brain Research, 188(1), 77–89.

http://doi.org/10.1007/s00221-008-1340-x

Burgess, P. R., & Clark, F. J. (1969). Characteristics of knee joint receptors in the cat. Journal of

Physiology, 203, 317–335.

Burnod, Y., Baraduc, P., Battaglia-Mayer, A., Guigon, E., Koechlin, E., Ferraina, S., Caminiti,

R. (1999). Parieto-frontal coding of reaching: an integrated framework. Experimental Brain

Research, 129(3), 325–346. http://doi.org/10.1007/s002210050902

Buxbaum, L. J., & Coslett, H. B. (2001). Specialised structural descriptions for human body

parts: Evidence from autotopagnosia. Cognitive Neuropsychology, 18(4), 289–306.

http://doi.org/10.1080/02643290126172

Buzsáki, G., Anastassiou, C. A., & Koch, C. (2012). The origin of extracellular fields and

currents — EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience, 13(6), 407–420.

http://doi.org/10.1038/nrn3241

Carlton, L. G. (1992). Visual processing time and the control of movement. In L. Proteau, & D.

Elliott. (Eds) Vision and motor control. (pp. 3–31). Oxford, England: North-Holland.

http://doi.org/10.1016/S0166-4115(08)62008-7

Carlton, L. G., & Carlton, M. J. (1987). Response amendment latencies during discrete arm

movements. Journal of Motor Behavior, 19(2), 227–239.

http://doi.org/10.1080/00222895.1987.10735409

Celesia, G. G. (1984). Evoked potential techniques in the evaluation of visual function. Journal

of Clinical Neurophysiology, 1(1), 55–76.

Chapman, C. E., Bushnell, M. C., Miron, D., Duncan, G. H., & Lund, J. P. (1987). Sensory

perception during movement in man. Experimental Brain Research, 68(3), 516–524.

http://doi.org/10.1007/BF00249795

Chawla, D., Lumer, E. D., & Friston, K. J. (2000). Relating macroscopic measures of brain

activity to fast, dynamic neuronal interactions. Neural Computation, 12, 2805–2821.

http://doi.org/10.1162/089976600300014737

Chawla, D., Rees, G., & Friston, K. J. (1999). The physiological basis of attentional modulation

in extrastriate visual areas. Nature Neuroscience, 2(7), 671–676.

http://doi.org/10.1038/10230

Chen, J., Reitzen, S. D., Kohlenstein, J. B., & Gardner, E. P. (2009). Neural representation of

hand kinematics during prehension in posterior parietal cortex of the macaque monkey.

Journal of Neurophysiology, 102(6), 3310–3328. http://doi.org/10.1152/jn.90942.2008

Chua, R., & Elliott, D. (1993). Visual regulation of manual aiming. Human Movement Science,

12(4), 365–401. http://doi.org/10.1016/0167-9457(93)90026-L

Churchland, M. M., Santhanam, G., & Shenoy, K. V. (2006). Preparatory activity in premotor and

motor cortex reflects the speed of the upcoming reach. Journal of Neurophysiology, 96(6),

3130–3146. http://doi.org/10.1152/jn.00307.2006

Cisek, P. (2007). Cortical mechanisms of action selection: the affordance competition

hypothesis. Philosophical Transactions of the Royal Society of London. Series B, Biological

Sciences, 362(1485), 1585–1599. http://doi.org/10.1098/rstb.2007.2054

Cisek, P., Grossberg, S., & Bullock, D. (1998). A cortico-spinal model of reaching and

proprioception under multiple task constraints. Journal of Cognitive Neuroscience, 10(4),

425–444. http://doi.org/10.1162/089892998562852

Cisek, P., & Kalaska, J. F. (2005). Neural correlates of reaching decisions in dorsal premotor

cortex: Specification of multiple direction choices and final selection of action. Neuron,

45(5), 801–814. http://doi.org/10.1016/j.neuron.2005.01.027

Clark, V. P., & Hillyard, S. A. (1996). Spatial selective attention affects early extrastriate but not

striate components of the visual evoked potential. Journal of Cognitive Neuroscience, 8(5),

387–402. http://doi.org/10.1162/jocn.1996.8.5.387

Cohen, D. A. D., Prud’homme, M. J., & Kalaska, J. F. (1994). Tactile activity in primate primary

somatosensory cortex during active arm movements: correlation with receptive field

properties. Journal of Neurophysiology, 71(1), 161–72.

http://doi.org/10.1152/jn.1994.71.1.161

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.

http://doi.org/10.1037/0033-2909.112.1.155

Cohen, L. A. (1961). Role of eye and neck proprioceptive mechanisms in body orientation and

motor coordination. Journal of Neurophysiology, 24, 1–11.

http://doi.org/10.1152/jn.1961.24.1.1

Cohen, Y. E., & Andersen, R. A. (2000). Reaches to sounds encoded in an eye-centered

reference frame. Neuron, 27(3), 647–652. http://doi.org/10.1016/S0896-6273(00)00073-8

Cohen, Y. E., & Andersen, R. A. (2002). A common reference frame for movement plans in the

posterior parietal cortex. Nature Reviews Neuroscience, 3(7), 553–562.

http://doi.org/10.1038/nrn873

Corradi-Dell’Acqua, C., Hesse, M. D., Rumiati, R. I., & Fink, G. R. (2008). Where is a nose with

respect to a foot? The left posterior parietal cortex processes spatial relationships among

body parts. Cerebral Cortex, 18(12), 2879–2890. http://doi.org/10.1093/cercor/bhn046

Corradi-Dell’Acqua, C., & Rumiati, R. I. (2007). What the brain knows about the body: evidence

for dissociable representations. In Santoianni, F., & Sabatano, C. (Eds.), Brain development in

learning environments: Embodied and perceptual advancement (pp. 50–64). Newcastle:

Cambridge Scholars Publishing.

Corradi-Dell’Acqua, C., Tomasino, B., & Fink, G. R. (2009). What is the position of an arm

relative to the body? Neural correlates of body schema and body structural description. Journal

of Neuroscience, 29(13), 4162–4171. http://doi.org/10.1523/JNEUROSCI.4861-08.2009

Courchesne, E., Hillyard, S. A., & Galambos, R. (1975). Stimulus novelty, task relevance and

the visual evoked potential in man. Electroencephalography and Clinical Neurophysiology,

39(2), 131–143. http://doi.org/10.1016/0013-4694(75)90003-6

Courjon, J.-H., Zénon, A., Clément, G., Urquizar, C., Olivier, E., & Pélisson, D. (2015).

Electrical stimulation of the superior colliculus induces non-topographically organized

perturbation of reaching movements in cats. Frontiers in Systems Neuroscience, 9, 109.

http://doi.org/10.3389/fnsys.2015.00109

Craighero, L., Fadiga, L., Rizzolatti, G., & Umiltà, C. (1999). Action for perception: a motor-

visual attentional effect. Journal of Experimental Psychology-Human Perception and

Performance, 25(6), 1673–1692. http://doi.org/10.1037/0096-1523.25.6.1673

Crawford, J. D., Medendorp, W. P., & Marotta, J. J. (2004). Spatial transformations for eye–

hand coordination. Journal of Neurophysiology, 92(1), 10–19.

http://doi.org/10.1152/jn.00117.2004

Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine, J. D., &

Halgren, E. (2000). Dynamic statistical parametric mapping: Combining fMRI and MEG

for high-resolution imaging of cortical activity. Neuron, 26(1), 55–67.

http://doi.org/10.1016/S0896-6273(00)81138-1

Darling, W. G., Seitz, R. J., Peltier, S., Tellmann, L., & Butler, A. J. (2007). Visual cortex

activation in kinesthetic guidance of reaching. Experimental Brain Research, 179(4), 607–

619. http://doi.org/10.1007/s00221-006-0815-x

Davare, M., Zénon, A., Pourtois, G., Desmurget, M., & Olivier, E. (2012). Role of the medial

part of the intraparietal sulcus in implementing movement direction. Cerebral Cortex,

22(6), 1382–1394. http://doi.org/10.1093/cercor/bhr210

Day, B. L., & Lyon, I. N. (2000). Voluntary modification of automatic arm movements evoked

by motion of a visual target. Experimental Brain Research, 130(2), 159–168.

http://doi.org/10.1007/s002219900218

de Grosbois, J., & Tremblay, L. (2016). Quantifying online visuomotor feedback utilization in

the frequency domain. Behavior Research Methods, 48(4), 1653–1666.

http://doi.org/10.3758/s13428-015-0682-0

de Grosbois, J., & Tremblay, L. (2018). Which measures of online control are least sensitive to

offline processes? Motor Control, 22(3), 358–376. http://doi.org/10.1123/mc.2017-0014

de Vignemont, F. (2010). Body schema and body image-pros and cons. Neuropsychologia,

48(3), 669–680. http://doi.org/10.1016/j.neuropsychologia.2009.09.022

Desmurget, M., Epstein, C. M., Turner, R. S., Prablanc, C., Alexander, G. E., & Grafton, S. T.

(1999). Role of the posterior parietal cortex in updating reaching movements to a visual

target. Nature Neuroscience, 2(6), 563–567. http://doi.org/10.1038/9219

Desmurget, M., Pélisson, D., Rossetti, Y., & Prablanc, C. (1998). From eye to hand: Planning

goal-directed movements. Neuroscience & Biobehavioral Reviews, 22(6), 761–788.

http://doi.org/10.1016/S0149-7634(98)00004-9

Disbrow, E., Roberts, T., & Krubitzer, L. (2000). Somatotopic organization of cortical fields in

the lateral sulcus of Homo sapiens: evidence for SII and PV. Journal of Comparative

Neurology, 418(1), 1–21. http://doi.org/10.1002/(SICI)1096-

9861(20000228)418:1<1::AID-CNE1>3.0.CO;2-P

Disbrow, E., Roberts, T., Poeppel, D., & Krubitzer, L. (2001). Evidence for interhemispheric

processing of inputs from the hands in human S2 and PV. Journal of Neurophysiology,

85(5), 2236–2244. http://doi.org/10.1152/jn.2001.85.5.2236

Edin, B. B. (2001). Quantitative analyses of dynamic strain sensitivity in human skin

mechanoreceptors. Journal of Neurophysiology, 92, 3233–3243.

http://doi.org/10.1152/jn.00628.2004.

Edin, B. B., & Abbs, J. H. (1991). Finger movement responses of cutaneous mechanoreceptors in

the dorsal skin of the human hand. Journal of Neurophysiology, 65(3), 657–70.

http://doi.org/10.1152/jn.1991.65.3.657

Elliott, D., Hansen, S., Grierson, L. E. M., Lyons, J., Bennett, S. J., & Hayes, S. J. (2010). Goal-

directed aiming: Two components but multiple processes. Psychological Bulletin, 136(6),

1023–1044. http://doi.org/10.1037/a0020958

Elliott, D., Helsen, W. F., & Chua, R. (2001). A century later: Woodworth’s (1899) two-

component model of goal-directed aiming. Psychological Bulletin, 127(3), 342–357.

http://doi.org/10.1037/0033-2909.127.3.342

Elliott, D., Lyons, J., Hayes, S. J., Burkitt, J. J., Roberts, J. W., Grierson, L. E. M., … Bennett, S.

J. (2017). The multiple process model of goal-directed reaching revisited. Neuroscience &

Biobehavioral Reviews, 72, 95–110.

http://doi.org/10.1016/j.neubiorev.2016.11.016

Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a

statistically optimal fashion. Nature, 415(6870), 429–433. http://doi.org/10.1038/415429a

Ernst, M. O., & Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends in

Cognitive Sciences, 8(4), 162–169. http://doi.org/10.1016/j.tics.2004.02.002

Fabbri, S., Caramazza, A., & Lingnau, A. (2010). Tuning curves for movement direction in the

human visuomotor system. Journal of Neuroscience, 30(40), 13488–13498.

http://doi.org/10.1523/JNEUROSCI.2571-10.2010

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using

G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods,

41(4), 1149–60. http://doi.org/10.3758/BRM.41.4.1149

Feldman, A. G., & Levin, M. F. (1996). Grasping cerebellar function depends on our

understanding the principles of sensorimotor integration: The frame of reference hypothesis.

Behavioral and Brain Sciences, 19(03), 442–445.

Felician, O., Romaiguère, P., Anton, J.-L. L., Nazarian, B., Roth, M., Poncet, M., … Roll, J.-P.

P. (2004). The role of human left superior parietal lobule in body part localization. Annals

of Neurology, 55(5), 749–751. http://doi.org/10.1002/ana.20109

Fetsch, C. R., Pouget, A., DeAngelis, G. C., & Angelaki, D. E. (2012). Neural correlates of

reliability-based cue weighting during multisensory integration. Nature Neuroscience,

15(1), 146–54. http://doi.org/10.1038/nn.2983

Filimon, F. (2010). Human cortical control of hand movements: parietofrontal networks for

reaching, grasping, and pointing. The Neuroscientist, 16, 388–407.

http://doi.org/10.1177/1073858410375468

Flash, T., & Henis, E. (1991). Arm trajectory modifications during reaching towards visual

targets. Journal of Cognitive Neuroscience, 3(3), 220–230.

http://doi.org/10.1162/jocn.1991.3.3.220

Flash, T., & Hogan, N. (1985). The coordination of arm movements: An experimentally

confirmed mathematical model. Journal of Neuroscience, 5(7), 1688–1703. Retrieved from

http://www.jneurosci.org/content/5/7/1688

Friedman, D. P., Murray, E. A., O’Neill, J. B., & Mishkin, M. (1986). Cortical connections of

the somatosensory fields of the lateral sulcus of macaques: Evidence for a corticolimbic

pathway for touch. Journal of Comparative Neurology, 252(3), 323–347.

http://doi.org/10.1002/cne.902520304

Gandevia, S. C., & McCloskey, D. I. (1976). Joint sense, muscle sense, and their combination as

position sense, measured at the distal interphalangeal joint of the middle finger. Journal of

Physiology, 260, 387–407. http://doi.org/10.1113/jphysiol.1976.sp011521

Gandhi, N. J., & Katnani, H. A. (2011). Motor functions of the superior colliculus. Annual

Review of Neuroscience, 34(1), 205–231. http://doi.org/10.1146/annurev-neuro-061010-

113728

Georgopoulos, A. P. (1990). Neurophysiology of Reaching. In M. Jeannerod (Ed.), Attention and

Motor Performance XIII: Motor representation and control (pp. 227–263). Hillsdale, NJ:

Lawrence Erlbaum Associates.

Ghez, C., Gordon, J., Ghilardi, M. F., & Sainburg, R. L. (1995). Contribution of vision and

proprioception to accuracy in limb movements. In M. S. Gazzaniga (Ed.), The Cognitive

Neurosciences (pp. 549–564). Cambridge: MIT Press.

Gilbert, C. D. (2013a). Intermediate-level visual processing and visual primitives. In E. Kandel,

J. Schwartz, T. Jessell, S. Siegelbaum, & A. J. Hudspeth (Eds.), Principles of Neural

Science (5th ed., pp. 602–619). New York: McGraw-Hill.

Gilbert, C. D. (2013b). The constructive nature of visual processing. In E. Kandel, J. Schwartz,

T. Jessell, S. Siegelbaum, & A. J. Hudspeth (Eds.), Principles of Neural Science (5th ed.,

pp. 556–576). New York: McGraw-Hill.

Gillman, D., Cohen, Y. E., & Groh, J. M. (2005). Eye-centered, head-centered, and complex

coding of visual and auditory targets in the intraparietal sulcus. Journal of Neurophysiology,

94(4), 2259–2260. http://doi.org/10.1152/jn.00021.2005.

Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action.

Trends in Neurosciences, 15(1), 20–25. http://doi.org/10.1016/0166-2236(92)90344-8

Goodale, M. A., Pélisson, D., & Prablanc, C. (1986). Large adjustments in visually guided

reaching do not depend on vision of the hand or perception of target displacement. Nature,

319(30), 402–403.

Goodman, R., Crainic, V. A., Bested, S. R., Wijeyaratnam, D. O., de Grosbois, J., & Tremblay,

L. (2018). Amending ongoing upper-limb reaches: Visual and proprioceptive contributions?

Multisensory Research, 31(5), 455–480. http://doi.org/10.1163/22134808-

00002615

Goodwin, G. M., McCloskey, D. I., & Matthews, P. B. C. (1972). The persistence of appreciable

kinesthesia after paralysing joint afferents but preserving muscle afferents. Brain Research,

37(2), 326–329. http://doi.org/10.1016/0006-8993(72)90679-8

Gramfort, A., Papadopoulo, T., Olivi, E., & Clerc, M. (2010). OpenMEEG: opensource software

for quasistatic bioelectromagnetics. Biomedical Engineering Online, 9(1), 1.

http://doi.org/10.1186/1475-925X-9-45

Green, D. G. (1970). Regional variations in the visual acuity for interference fringes on the

retina. The Journal of Physiology, 207(2), 351–356.

Hacking, I. (1983). Representing and Intervening: Introductory Topics in the Philosophy of

Natural Science (1st ed.). Cambridge: Cambridge University Press.

Hammond, P. H. (1956). The influence of prior instruction to the subject on an apparently

involuntary neuro-muscular response. The Journal of Physiology, 132(1), 17P–18P.

Hansen, S., Tremblay, L., & Elliott, D. (2008). Real-time manipulation of visual displacement

during manual aiming. Human Movement Science, 27(1), 1–11.

http://doi.org/10.1016/j.humov.2007.09.001

Heath, M. (2005). Role of limb and target vision in the online control of memory-guided reaches.

Motor Control, 9(3), 281–311. http://doi.org/10.1123/mcj.9.3.281

Held, R., & Hein, A. (1963). Movement-produced stimulation in the development of visually

guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872–876.

http://doi.org/10.1037/h0040546

Henriques, D. Y. P., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-

centered remapping of remembered visual space in an open-loop pointing task. Journal of

Neuroscience, 18(4), 1583–1594. http://doi.org/10.1523/JNEUROSCI.18-04-01583.1998

Hinkley, L. B., Krubitzer, L. A., Nagarajan, S. S., & Disbrow, E. (2007). Sensorimotor

integration in S2, PV, and parietal rostroventral areas of the human sylvian fissure. Journal

of Neurophysiology, 97(2), 1288–1297. http://doi.org/10.1152/jn.00733.2006

Hjorth, B. (1975). An on-line transformation of EEG scalp potentials into orthogonal source

derivations. Electroencephalography and Clinical Neurophysiology, 526–530.

Hommel, B. (2004). Event files: Feature binding in and across perception and action. Trends in

Cognitive Sciences, 8(11), 494–500. http://doi.org/10.1016/j.tics.2004.08.007

Hommel, B. (2009). Action control according to TEC (theory of event coding). Psychological

Research, 73(4), 512–26. http://doi.org/10.1007/s00426-009-0234-2

Howarth, C. I., Beggs, W. D. A., & Bowden, J. M. (1971). The relationship between speed and

accuracy of movement aimed at a target. Acta Psychologica, 35, 207–218.

http://doi.org/10.1016/0001-6918(71)90022-9

Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional

architecture in the cat’s visual cortex. The Journal of Physiology, 160(1), 106–154.

http://doi.org/10.1113/jphysiol.1962.sp006837

Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey

striate cortex. The Journal of Physiology, 195(1), 215–243.

http://doi.org/10.1113/jphysiol.1968.sp008455

Hulliger, M. (1984). The mammalian muscle spindle and its central control. In Reviews of

Physiology, Biochemistry and Pharmacology (pp. 1–110). Berlin, Heidelberg: Springer.

http://doi.org/10.1007/BFb0027694

Jeannerod, M. (1991). A neurophysiological model for the directional coding of reaching

movements. Brain and Space, 49–69.

Johnson, H., Van Beers, R. J., & Haggard, P. (2002). Action and awareness in pointing tasks.

Experimental Brain Research, 146(4), 451–459. http://doi.org/10.1007/s00221-002-1200-z

Jones, E. G., & Powell, T. P. S. (1968). The ipsilateral cortical connexions of the somatic

sensory areas in the cat. Brain Research, 9(1), 71–94. http://doi.org/10.1016/0006-8993(68)90258-8

Jones, E. G., & Powell, T. P. S. (1969). Connexions of the somatic sensory cortex of the rhesus

monkey. Brain, 92(3), 477–502. http://doi.org/10.1093/brain/92.3.477

Jones, S. A. H., Fiehler, K., & Henriques, D. Y. P. (2012). A task-dependent effect of memory

and hand-target on proprioceptive localization. Neuropsychologia, 50(7), 1462–1470.

http://doi.org/10.1016/j.neuropsychologia.2012.02.031

Jones, S. A. H., & Henriques, D. Y. P. (2010). Memory for proprioceptive and multisensory

targets is partially coded relative to gaze. Neuropsychologia, 48(13), 3782–3792.

http://doi.org/10.1016/j.neuropsychologia.2010.10.001

Kaas, J. H., Merzenich, M. M., & Killackey, H. P. (1983). The reorganization of somatosensory

cortex following peripheral nerve damage in adult and developing mammals. Annual

Review of Neuroscience, 6(1), 325–356.

http://doi.org/10.1146/annurev.ne.06.030183.001545

Katznelson, R. (1981). EEG recording, electrode placement, and aspects of generator

localization. In P. Nunez (Ed.), Electric Fields of The Brain (1st ed., pp. 176–213). New

York: Oxford University Press.

Kayser, J., & Tenke, C. E. (2015). On the benefits of using surface Laplacian (current source

density) methodology in electrophysiology. International Journal of Psychophysiology,

97(3), 171–173. http://doi.org/10.1016/j.ijpsycho.2015.06.001

Keele, S. W., & Posner, M. I. (1968). Processing of visual feedback in rapid movements. Journal

of Experimental Psychology, 77(1). http://doi.org/10.1037/h0025754

Kennedy, A., Bhattacharjee, A., Hansen, S., Reid, C., & Tremblay, L. (2015). Online vision as a

function of real-time limb velocity: Another case for optimal windows. Journal of Motor

Behavior, 47(6), 465–75. http://doi.org/10.1080/00222895.2015.1012579

Khan, M. A., Franks, I. M., Elliott, D., Grierson, L. E. M., Chua, R., Bernier, P.-M., … Weeks,

D. J. (2006). Inferring online and offline processing of visual feedback in target directed

movements from kinematic data. Neuroscience and Biobehavioral Reviews, 30(8), 1106–

1121. http://doi.org/10.1016/j.neubiorev.2006.05.002

Kim, S., Callier, T., Tabot, G. A., Gaunt, R. A., Tenore, F. V., & Bensmaia, S. J. (2015).

Behavioral assessment of sensitivity to intracortical microstimulation of primate

somatosensory cortex. Proceedings of the National Academy of Sciences, 112(49), 15202–

15207. http://doi.org/10.1073/pnas.1509265112

Kistemaker, D., Van Soest, A., Wong, J., Kurtzer, I., & Gribble, P. (2012). Control of position

and movement is simplified by combined muscle spindle and Golgi tendon organ feedback.

Journal of Neurophysiology, 109(4), 1126–1139. http://doi.org/10.1152/jn.00751.2012

Koessler, L., Maillard, L., Benhadid, A., Vignal, J. P., Felblinger, J., Vespignani, H., & Braun,

M. (2009). Automated cortical projection of EEG sensors: Anatomical correlation via the

international 10-10 system. NeuroImage, 46(1), 64–72.

http://doi.org/10.1016/j.neuroimage.2009.02.006

Komilis, E., Pélisson, D., & Prablanc, C. (1993). Error processing in pointing at randomly

feedback-induced double-step stimuli. Journal of Motor Behavior, 25(4), 299–308.

http://doi.org/10.1080/00222895.1993.9941651

Kopietz, R., Sakar, V., Albrecht, J., Kleemann, A., Schöpf, V., Yousry, I., … Wiesmann, M.

(2009). Activation of primary and secondary somatosensory regions following tactile

stimulation of the face. Clinical Neuroradiology, 19(2), 135–144.

http://doi.org/10.1007/s00062-009-8022-3

Lacquaniti, F., & Soechting, J. F. (1986). EMG responses to load perturbations of the upper

limb: effect of dynamic coupling between shoulder and elbow motion. Experimental Brain

Research, 61(3), 482–496. http://doi.org/10.1007/BF00237573

Lajoie, Y., Paillard, J., Teasdale, N., Bard, C., Fleury, M., Forget, R., & Lamarre, Y. (1992).

Mirror drawing in a deafferented patient and normal subjects: visuoproprioceptive conflict.

Neurology, 42(5), 1104–1106.

Landgren, S., Silfvenius, H., & Wolsk, D. (1967). Somato-sensory paths to the second cortical

projection area of group I muscle afferents. Journal of Physiology, 191, 543–559.

Lashley, K. S. (1917). The accuracy of movement in the absence of excitation from the moving

organ. American Journal of Physiology, 43, 169–194.

Lashley, K. S. (1951). The problem of serial order in behavior. In L. Jeffress (Ed.), Cerebral

Mechanisms in Behavior (pp. 112–136). New York: Hafner Publishing Co.

Law, S. K., Rohrbaugh, J. W., Adams, C. M., & Eckardt, M. J. (1993). Improving spatial and

temporal resolution in evoked EEG responses using surface Laplacians.

Electroencephalography and Clinical Neurophysiology - Evoked Potentials, 88, 309–322.

http://doi.org/10.1016/0168-5597(93)90055-T

Lebar, N., Bernier, P.-M., Guillaume, A., Mouchnino, L., & Blouin, J. (2015). Neural correlates

for task-relevant facilitation of visual inputs during visually-guided hand movements.

NeuroImage, 121, 39–50. http://doi.org/10.1016/j.neuroimage.2015.07.033

Lee, R. G., & Tatton, W. G. (1975). Motor responses to sudden limb displacements in primates

with specific CNS lesions and in human patients with motor system disorders. The

Canadian Journal of Neurological Sciences., 2(3), 285–293.

http://doi.org/10.1017/s0317167100020382

Liu, J., & Ando, H. (2018). Response Modality vs. Target Modality: Sensory Transformations

and Comparisons in Cross-modal Slant Matching Tasks. Scientific Reports, 8(1), 11068.

http://doi.org/10.1038/s41598-018-29375-w

Lünenburger, L., Kutz, D. F., & Hoffmann, K.-P. (2000). Influence of arm movements on

saccades in humans. European Journal of Neuroscience, 12(11), 4107–4116.

http://doi.org/10.1046/j.1460-9568.2000.00298.x

Manson, G. A., Alekhina, M., Srubiski, S. L., Williams, C. K., Bhattacharjee, A., & Tremblay,

L. (2014). Effects of robotic guidance on sensorimotor control: Planning vs. online control?

NeuroRehabilitation, 35(4), 689–700. http://doi.org/10.3233/NRE-141168

Maunsell, J. H., & Van Essen, D. C. (1983). Functional properties of neurons in middle temporal

visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and

orientation. Journal of Neurophysiology, 49(5), 1127–1147.

McGuire, L. M. M., & Sabes, P. N. (2009). Sensory transformations and the use of multiple

reference frames for reach planning. Nature Neuroscience, 12(8), 1056–1061.

http://doi.org/10.1038/nn.2357

Medendorp, W. P., Beurze, S. M., Van Pelt, S., & Van Der Werf, J. (2008). Behavioral and

cortical mechanisms for spatial coding and action planning. Cortex, 44(5), 587–597.

http://doi.org/10.1016/j.cortex.2007.06.001

Medendorp, W. P., & Crawford, J. D. (2002). Visuospatial updating of reaching targets in near

and far space. Neuroreport, 13(5), 633–636. http://doi.org/10.1097/00001756-200204160-

00019

Meister, M., & Tessier-Lavigne, M. (2013). Low-level visual processing in the retina. In E. Kandel, J.

Schwartz, T. Jessell, S. Siegelbaum, & A. J. Hudspeth (Eds.), Principles of Neural Science

(5th ed., pp. 577–600). New York: McGraw-Hill.

Michel, C. M., Murray, M. M., Lantz, G., Gonzalez, S., Spinelli, L., & Grave De Peralta, R.

(2004). EEG source imaging. Clinical Neurophysiology, 115(10), 2195–2222.

http://doi.org/10.1016/j.clinph.2004.06.001

Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: two

cortical pathways. Trends in Neurosciences, 6, 414–417. http://doi.org/10.1016/0166-

2236(83)90190-X

Mountcastle, V. B., Lynch, J. C., Georgopoulos, A., Sakata, H., & Acuna, C. (1975). Posterior

parietal association cortex of the monkey: command functions for operations within

extrapersonal space. Journal of Neurophysiology, 38(4), 871–908.

http://doi.org/10.1152/jn.1975.38.4.871

Mueller, S., & Fiehler, K. (2014a). Effector movement triggers gaze-dependent spatial coding of

tactile and proprioceptive-tactile reach targets. Neuropsychologia, 62(1), 184–193.

http://doi.org/10.1016/j.neuropsychologia.2014.07.025

Mueller, S., & Fiehler, K. (2014b). Gaze-dependent spatial updating of tactile targets in a

localization task. Frontiers in Psychology, 5, 66–66.

http://doi.org/10.3389/fpsyg.2014.00066

Mueller, S., & Fiehler, K. (2016). Mixed body-and gazed-centered coding of proprioceptive

reach targets after effector movement. Neuropsychologia, 87(1), 63–73.

http://doi.org/10.1016/j.neuropsychologia.2014.07.025

Murray, M. M., Thelen, A., Thut, G., Romei, V., Martuzzi, R., & Matusz, P. J. (2016). The

multisensory function of the human primary visual cortex. Neuropsychologia, 83, 161–169.

http://doi.org/10.1016/j.neuropsychologia.2015.08.011

Mushiake, H., Tanatsugu, Y., & Tanji, J. (1997). Neuronal activity in the ventral part of

premotor cortex during target-reach movement is modulated by direction of gaze. Journal

of Neurophysiology, 78(1), 567–571.

Nagy, A., Kruse, W., Rottmann, S., Dannenberg, S., & Hoffmann, K.-P. (2006). Somatosensory-

motor neuronal activity in the superior colliculus of the primate. Neuron, 52(3), 525–534.

http://doi.org/10.1016/j.neuron.2006.08.010

Neggers, S. F. W., & Bekkering, H. (2001). Gaze anchoring to a pointing target is present during

the entire pointing movement and is driven by a non-visual signal. Journal of

Neurophysiology, 86(2), 961–970. http://doi.org/10.1152/jn.2001.86.2.961

Nerlich, B., & Clarke, D. D. (1996). Language, Action and Context: The Early History of

Pragmatics in Europe and America (Vol. 80). John Benjamins Publishing.

Noback, C. R., Ruggiero, D. A., Demarest, R. J., & Strominger, N. L. (2005). The Human

Nervous System: Structure and Function. Totowa: Humana Press.

Noel, J.-P., & Wallace, M. (2016). Relative contributions of visual and auditory spatial

representations to tactile localization. Neuropsychologia, 82, 84–90.

http://doi.org/10.1016/j.neuropsychologia.2016.01.005

Nunez, P. L., Silberstein, R. B., Cadusch, P. J., Wijesinghe, R. S., Westdorp, A. F., & Srinivasan,

R. (1994). A theoretical and experimental-study of high-resolution EEG based on surface

laplacians and cortical imaging. Electroencephalography and Clinical Neurophysiology,

90(1), 40–57.

Ogden, J. A. (1985). Autotopagnosia: Occurrence in a patient without nominal aphasia and with an

intact ability to point to animals and objects. Brain, 108(4), 1009–1022.

http://doi.org/10.1093/brain/108.4.1009

Olsen, C. W., & Ruby, C. (1941). Anosognosia and autotopagnosia. Archives of Neurology and

Psychiatry, 46(2), 340–344. http://doi.org/10.1001/archneurpsyc.1941.02280200146008

Oostwoud Wijdenes, L., Brenner, E., & Smeets, J. B. J. (2011). Fast and fine-tuned corrections

when the target of a hand movement is displaced. Experimental Brain Research, 214(3),

453–462. http://doi.org/10.1007/s00221-011-2843-4

Oostwoud Wijdenes, L., Brenner, E., & Smeets, J. B. J. (2013). Comparing online adjustments to

distance and direction in fast pointing movements. Journal of Motor Behavior, 45(5), 395–

404. http://doi.org/10.1080/00222895.2013.815150

Oostwoud Wijdenes, L., Brenner, E., & Smeets, J. B. J. (2014). Analysis of methods to

determine the latency of online movement adjustments. Behavior Research Methods, 46(1),

131–139. http://doi.org/10.3758/s13428-013-0349-7

Osterberg, G. (1935). Topography of the layer of rods and cones in the human retina. Acta

Ophthalmologica, (13), 1–102.

Overvliet, K. E., Azañon, E., & Soto-Faraco, S. (2011). Somatosensory saccades reveal the

timing of tactile spatial remapping. Neuropsychologia, 49(11), 3046–3052.

http://doi.org/10.1016/j.neuropsychologia.2011.07.005

Paillard, J. (1980). Le corps situé et le corps identifié. Rev. Méd. Suisse Romande, 100(2), 129–

141.

Paillard, J. (1991). Motor and representational framing of space. In J. Paillard (Ed.), Brain and

space (pp. 163-182). New York: Oxford University Press.


Pélisson, D., Prablanc, C., Goodale, M. A., & Jeannerod, M. (1986). Visual control of reaching

movements without vision of the limb - II. Evidence of fast unconscious processes

correcting the trajectory of the hand to the final position of a double-step stimulus.

Experimental Brain Research, 62(2), 303–311. http://doi.org/10.1007/BF00238849

Perrin, F., Bertrand, O., & Pernier, J. (1987). Scalp current density mapping: value and

estimation from potential data. IEEE Transactions on Bio-Medical Engineering, 34(4),

283–288. http://doi.org/10.1109/TBME.1987.326089

Perrin, F., Pernier, J., Bertrand, O., & Echallier, J. F. (1989). Spherical splines for scalp potential

and current density mapping. Electroencephalography and Clinical Neurophysiology, 72,

184–187. http://doi.org/10.1016/0013-4694(89)90180-6

Phurailatpam, J. (2014). Evoked potentials: Visual evoked potentials (VEPs): Clinical uses,

origin, and confounding parameters. Journal of Medical Society, 28(3), 140–144.

http://doi.org/10.4103/0972-4958.148494

Pick, A. (1922). Störung der Orientierung am eigenen Körper. Psychologische Forschung, 1, 303–318.

Pouget, A., Ducom, J. C., Torri, J., & Bavelier, D. (2002). Multisensory spatial representations in

eye-centered coordinates for reaching. Cognition, 83(1), 1–11.

http://doi.org/10.1016/S0010-0277(01)00163-9

Prablanc, C., Desmurget, M., & Gréa, H. (2003). Neural control of on-line guidance of hand

reaching movements. In C. Prablanc, D. Pélisson, & Y. Rossetti (Eds.), Progress in Brain

Research (Vol. 142, pp. 155–170). Elsevier. https://doi.org/10.1016/S0079-6123(03)42012-8

Prablanc, C., Echallier, J. E., Jeannerod, M., & Komilis, E. (1979). Optimal response of eye and

hand motor systems in pointing at a visual target. II. Static and dynamic visual cues in the

control of hand movement. Biological Cybernetics, 35(3), 183–187.

http://doi.org/10.1007/BF00337436


Prablanc, C., & Martin, O. (1992). Automatic control during hand reaching at undetected two-

dimensional target displacements. Journal of Neurophysiology, 67(2), 455–469.

http://doi.org/10.1152/jn.1992.67.2.455

Prevosto, V., Graf, W., & Ugolini, G. (2010). Cerebellar inputs to intraparietal cortex areas LIP

and MIP: Functional frameworks for adaptive control of eye movements, reaching, and

arm/eye/head movement coordination. Cerebral Cortex, 20(1), 214–228.

http://doi.org/10.1093/cercor/bhp091

Prevosto, V., Graf, W., & Ugolini, G. (2011). Proprioceptive pathways to posterior parietal areas

MIP and LIPv from the dorsal column nuclei and the postcentral somatosensory cortex.

European Journal of Neuroscience, 33(3), 444–460. http://doi.org/10.1111/j.1460-9568.2010.07541.x

Prinz, W. (1997). Perception and action planning. European Journal of Cognitive Psychology,

9(2), 129–154. http://doi.org/10.1080/713752551

Pritchett, L. M., Carnevale, M. J., & Harris, L. R. (2012). Reference frames for coding touch

location depend on the task. Experimental Brain Research, 222(4), 437–45.

http://doi.org/10.1007/s00221-012-3231-4

Proske, U., & Gandevia, S. C. (2012). The proprioceptive senses: Their roles in signaling body shape, body position and movement, and muscle force. Physiological Reviews, 92(4), 1651–1697. http://doi.org/10.1152/physrev.00048.2011

Pruszynski, J. A., Kurtzer, I., & Scott, S. H. (2008). Rapid motor responses are appropriately

tuned to the metrics of a visuospatial task. Journal of Neurophysiology, 100(1), 224–238.

http://doi.org/10.1152/jn.90262.2008

Pruszynski, J. A., & Scott, S. H. (2012). Optimal feedback control and the long-latency stretch

response. Experimental Brain Research, 218(3), 341–359. http://doi.org/10.1007/s00221-012-3041-8

Purves, D., Augustine, G. J., Fitzpatrick, D., Katz, L. C., LaMantia, A.-S., McNamara, J. O., &


Williams, S. M. (2001). Anatomical Distribution of Rods and Cones.

https://www.ncbi.nlm.nih.gov/books/NBK10848/

Redon, C., Hay, L., & Velay, J. L. (1991). Proprioceptive control of goal-directed movements in

man, studied by means of vibratory muscle tendon stimulation. Journal of Motor Behavior,

23(2), 101–108. http://doi.org/10.1080/00222895.1991.9942027

Reichenbach, A., Thielscher, A., Peer, A., Bülthoff, H. H., & Bresciani, J.-P. (2009). Seeing the

hand while reaching speeds up on-line responses to a sudden change in target position. The

Journal of Physiology, 587(19), 4605–4616. http://doi.org/10.1113/jphysiol.2009.176362

Reichenbach, A., Thielscher, A., Peer, A., Bülthoff, H. H., & Bresciani, J.-P. (2014). A key

region in the human parietal cortex for processing proprioceptive hand feedback during

reaching movements. NeuroImage, 84, 615–625.

http://doi.org/10.1016/j.neuroimage.2013.09.024

Rosenbaum, D. A., Cohen, R. G., Jax, S. A., Weeks, D. J., & van der Wel, R. (2007). The

problem of serial order in behavior: Lashley’s legacy. Human Movement Science, 26(4),

525–554. http://doi.org/10.1016/j.humov.2007.04.001

Rothwell, J. C., Traub, M. M., & Marsden, C. D. (1980). Influence of voluntary intent on the

human long-latency stretch reflex. Nature, 286, 496. http://doi.org/10.1038/286496a0

Rozzi, S., Ferrari, P. F., Bonini, L., Rizzolatti, G., & Fogassi, L. (2008). Functional organization

of inferior parietal lobule convexity in the macaque monkey: Electrophysiological

characterization of motor, sensory and mirror responses and their correlation with

cytoarchitectonic areas. European Journal of Neuroscience, 28, 1569–1588.

http://doi.org/10.1111/j.1460-9568.2008.06395.x

Saradjian, A. H. (2015). Sensory modulation of movement, posture and locomotion. Clinical

Neurophysiology, 45(4), 255–267. http://doi.org/10.1016/j.neucli.2015.09.004

Saradjian, A. H., Tremblay, L., Perrier, J., Blouin, J., & Mouchnino, L. (2013). Cortical

facilitation of proprioceptive inputs related to gravitational balance constraints during step


preparation. Journal of Neurophysiology, 110(2), 397–407.

http://doi.org/10.1152/jn.00905.2012

Sarlegna, F. R., & Bernier, P.-M. (2010). On the link between sensorimotor adaptation and

sensory recalibration. Journal of Neuroscience, 30(35), 11555–11557.

http://doi.org/10.1523/JNEUROSCI.3040-10.2010

Sarlegna, F. R., & Blouin, J. (2010). Visual guidance of arm reaching: Online adjustments of

movement direction are impaired by amplitude control. Journal of Vision, 10(5), 24.

http://doi.org/10.1167/10.5.24

Sarlegna, F. R., & Mutha, P. K. (2015). The influence of visual target information on the online

control of movements. Vision Research, 110, 144–154.

http://doi.org/10.1016/j.visres.2014.07.001

Sarlegna, F. R., Przybyla, A., & Sainburg, R. L. (2009). The influence of target sensory modality on motor planning may reflect errors in sensori-motor transformations. Neuroscience, 164(2), 597–610.

Sarlegna, F. R., & Sainburg, R. L. (2007). The effect of target modality on visual and

proprioceptive contributions to the control of movement distance. Experimental Brain

Research, 176(2), 267–280. http://doi.org/10.1007/s00221-006-0613-5

Sato, K., Nariai, T., Tanaka, Y., Maehara, T., Miyakawa, N., Sasaki, S., … Ohno, K. (2005).

Functional representation of the finger and face in the human somatosensory cortex:

Intraoperative intrinsic optical imaging. NeuroImage, 25(4), 1292–1301.

http://doi.org/10.1016/j.neuroimage.2004.12.049

Saunders, J. A., & Knill, D. C. (2003). Humans use continuous visual feedback from the hand to

control fast reaching movements. Experimental Brain Research, 152(3), 341–352.

http://doi.org/10.1007/s00221-003-1525-2

Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study

articles. Psychological Methods, 17(4), 551–566. http://doi.org/10.1037/a0029487


Schmahmann, J. D., & Pandya, D. N. (1990). Anatomical investigation of projections from

thalamus to posterior parietal cortex in the rhesus monkey: A WGA-HRP and fluorescent

tracer study. The Journal of Comparative Neurology, 295(2), 299–326.

http://doi.org/10.1002/cne.902950212

Schmidt, R. A., Sherwood, D. E., Zelaznik, H. N., & Leikind, B. J. (1985). Speed-accuracy

trade-offs in motor behavior: Theories of impulse variability. In Motor Behavior (pp. 79–

123). Springer.

Schütz, I., Henriques, D. Y. P., & Fiehler, K. (2013). Gaze-centered spatial updating in delayed

reaching even in the presence of landmarks. Vision Research, 87, 46–52.

http://doi.org/10.1016/j.visres.2013.06.001

Schwoebel, J., Coslett, H. B., & Buxbaum, L. J. (2001). Compensatory coding of body part

location in autotopagnosia: Evidence for extrinsic egocentric coding. Cognitive

Neuropsychology, 18(4), 363–381. http://doi.org/10.1080/02643290126218

Scott, S. H. (2004). Optimal feedback control and the neural basis of volitional motor control.

Nature Reviews Neuroscience, 5(7), 532–545. http://doi.org/10.1038/nrn1427

Serino, A., & Haggard, P. (2010). Touch and the body. Neuroscience and Biobehavioral

Reviews, 34(2), 224–236. http://doi.org/10.1016/j.neubiorev.2009.04.004

Shams, L., Ma, W. J., & Beierholm, U. (2005). Sound-induced flash illusion as an optimal

percept. Neuroreport, 16(17), 1923–1927.

http://doi.org/10.1097/01.wnr.0000187634.68504.bb

Sherrington, C. S. (1911). The integrative action of the nervous system.

Shlaer, S. (1937). The relation between visual acuity and illumination. The Journal of General

Physiology, 21(2), 165–188. http://doi.org/10.1085/jgp.21.2.165

Sirigu, A., Grafman, J., Bressler, K., & Sunderland, T. (1991b). Multiple representations contribute to body knowledge processing. Brain, 114(1), 629–642. http://doi.org/10.1093/brain/114.1.629


Smeets, J. B. J., Erkelens, C. J., & van der Gon Denier, J. J. (1990). Adjustments of fast goal-

directed movements in response to an unexpected inertial load. Experimental Brain

Research, 81(2), 303–312. http://doi.org/10.1007/BF00228120

Smeets, J. B. J., Oostwoud Wijdenes, L., & Brenner, E. (2016). Movement Adjustments Have

Short Latencies because There Is No Need to Detect Anything. Motor Control, 20(2), 137–

148. http://doi.org/10.1123/mc.2014-0064

Sober, S. J., & Sabes, P. N. (2003). Multisensory Integration during Motor Planning. The

Journal of Neuroscience, 23(18), 6982–6992. http://doi.org/10.1523/JNEUROSCI.23-18-06982.2003

Sober, S. J., & Sabes, P. N. (2005). Flexible strategies for sensory integration during motor

planning. Nature Neuroscience, 8(4), 490–497. http://doi.org/10.1038/nn1427

Song, J. H., & Nakayama, K. (2009). Hidden cognitive states revealed in choice reaching tasks.

Trends in Cognitive Sciences, 13, 360–366. http://doi.org/10.1016/j.tics.2009.04.009

Stetson, C., & Andersen, R. A. (2014). The parietal reach region selectively anti-synchronizes with dorsal premotor cortex during planning. Journal of Neuroscience, 34(36), 11948–11958. http://doi.org/10.1523/JNEUROSCI.0097-14.2014

Stricanne, B., Andersen, R. A., & Mazzoni, P. (1996). Eye-centered, head-centered, and

intermediate coding of remembered sound locations in area LIP. Journal of

Neurophysiology, 76(3), 2071–2076.

Tadel, F., Baillet, S., Mosher, J. C., Pantazis, D., & Leahy, R. M. (2011). Brainstorm: A user-

friendly application for MEG/EEG analysis. Computational Intelligence and Neuroscience,

2011. http://doi.org/10.1155/2011/879716

Tenke, C. E., & Kayser, J. (2012a). Generator localization by current source density (CSD):

Implications of volume conduction and field closure at intracranial and scalp resolutions.

Clinical Neurophysiology, 123(12), 2328–2345. http://doi.org/10.1016/j.clinph.2012.06.005


Thompson, A. A., Byrne, P. A., & Henriques, D. Y. P. (2014). Visual targets aren’t irreversibly

converted to motor coordinates: Eye-centered updating of visuospatial memory in online

reach control. PLoS ONE, 9(3), 1–10. http://doi.org/10.1371/journal.pone.0092455

Thompson, A. A., Glover, C. V., & Henriques, D. Y. P. (2012). Allocentrically implied target

locations are updated in an eye-centred reference frame. Neuroscience Letters, 514(2), 214–

218. http://doi.org/10.1016/j.neulet.2012.03.004

Tipper, S. P., Lortie, C., & Baylis, G. (1992). Selective reaching: Evidence for action centered

attention. Journal of Experimental Psychology: Human Perception and Performance, 18(4),

891–905.

Tremblay, L., Crainic, V. A., de Grosbois, J., Bhattacharjee, A., Kennedy, A., Hansen, S., &

Welsh, T. N. (2017). An optimal velocity for online limb-target regulation processes?

Experimental Brain Research, 235(1), 29–40. http://doi.org/10.1007/s00221-016-4770-x

Tremblay, L., Hansen, S., Kennedy, A., & Cheng, D. T. (2013). The utility of vision during

action: Multiple visuomotor processes? Journal of Motor Behavior, 45(2), 91–99.

http://doi.org/10.1080/00222895.2012.747483

Treue, S. (2001). Neural correlates of attention in the primate visual cortex. Trends in

Neurosciences, 24(5), 295–300.

van den Broek, S. P., Reinders, F., Donderwinkel, M., & Peters, M. (1998). Volume conduction

effects in EEG and MEG. Electroencephalography and Clinical Neurophysiology, 106(6),

522–534. http://doi.org/10.1016/S0013-4694(97)00147-8

Veerman, M. M., Brenner, E., & Smeets, J. B. J. (2008). The latency for correcting a movement

depends on the visual attribute that defines the target. Experimental Brain Research, 187(2),

219–228. http://doi.org/10.1007/s00221-008-1296-x

Vesia, M., Yan, X., Henriques, D. Y. P., Sergio, L. E., & Crawford, J. D. (2008). Transcranial


magnetic stimulation over human dorsal-lateral posterior parietal cortex disrupts integration

of hand position signals into the reach plan. Journal of Neurophysiology, 100(4), 2005–

2014. http://doi.org/10.1152/jn.90519.2008

Vetter, P., Goodbody, S. J., & Wolpert, D. M. (1999). Evidence for an eye-centered spherical

representation of the visuomotor map. Journal of Neurophysiology, 81(2), 935–939.

Welsh, T. N. (2011). The relationship between attentional capture and deviations in movement

trajectories in a selective reaching task. Acta Psychologica, 137(3), 300–308.

http://doi.org/10.1016/j.actpsy.2011.03.011

Welsh, T. N., Elliott, D., & Weeks, D. J. (1999). Hand deviations toward distractors. Evidence

for response competition. Experimental Brain Research, 127(2), 207–212.

http://doi.org/10.1007/s002210050790

Werner, W. (1993). Neurons in the primate superior colliculus are active before and during arm

movements to visual targets. European Journal of Neuroscience, 5(4), 335–340.

http://doi.org/10.1111/j.1460-9568.1993.tb00501.x

Werner, W., Dannenberg, S., & Hoffmann, K.-P. (1997). Arm-movement-related neurons in the

primate superior colliculus and underlying reticular formation: comparison of neuronal

activity with EMGs of muscles of the shoulder, arm and trunk during reaching.

Experimental Brain Research, 115(2), 191–205. http://doi.org/10.1007/PL00005690

Wienbar, S., & Schwartz, G. W. (2018). The dynamic receptive fields of retinal ganglion cells.

Progress in Retinal and Eye Research. http://doi.org/10.1016/J.PRETEYERES.2018.06.003

Wolpert, D. M., Goodbody, S. J., & Husain, M. (1998). Maintaining internal representations: the role of the human superior parietal lobe. Nature Neuroscience, 1(6), 529–533.

http://doi.org/10.1038/2245

Woodworth, R. S. (1899). Accuracy of voluntary movement. Psychological Review

Monographs.

Yamamoto, S., & Kitazawa, S. (2001). Reversal of subjective temporal order due to arm


crossing. Nature Neuroscience, 4(7), 759–765. http://doi.org/10.1038/89559

Zelaznik, H. N., Hawkins, B., & Kisselburgh, L. (1983). Rapid visual feedback processing in

single-aiming movements. Journal of Motor Behavior, 15(3), 217–236.

http://doi.org/10.1080/00222895.1983.10735298

Zipser, D., & Andersen, R. A. (1988). A back-propagation programmed network that simulates

response properties of a subset of posterior parietal neurons. Nature, 331, 679–684.

http://doi.org/10.1038/331679a0


Appendix 1: Power Analyses for Experimental Studies

Study A:

Experiment A1

After designing the experiment, a power analysis was conducted to determine what sample size would be required for the statistical tests. The population effect size was estimated by calculating the average and median effect sizes (see Schimmack, 2012) of horizontal reaching error differences in relevant experiments (see Table 5). Both the median and mean effect sizes were around 0.65. To estimate the correlation among the proposed repeated measures (i.e., sensory condition and fixation direction), the relationships between the different target modality experiments in Pouget et al. (2002) were relied upon. Overall, similar to Pouget et al. (2002), a high positive correlation between modalities and gaze directions (i.e., a high negative correlation for gaze directions) was expected. Based on the results of Pouget et al. (2002), it was also hypothesized that measures may be less correlated between the visual and proprioceptive target modalities. Thus, two estimates were used for the analysis: a lower correlation coefficient of 0.15 and a higher correlation coefficient of 0.5. Alpha was set to 0.05 and a nonsphericity correction of 1 was used. All values were submitted to G*Power 3.1 (Faul, Erdfelder, Buchner, & Lang, 2009), and it was determined that 6-9 participants would be sufficient to achieve 95 percent power, with a critical F between 2.48 and 2.60.


Experiment and Comparison                              N    Effect Size f
Henriques et al., 1998 (Experiment 1: left-right)      9    0.82
Pouget et al., 2002, Experiment a (static - left - 0)  9    0.65
Pouget et al., 2002, Experiment a (static - right - 0) 9    0.60
Pouget et al., 2002, Experiment b (static - left - 0)  9    0.73
Pouget et al., 2002, Experiment b (static - right - 0) 9    0.71
Pouget et al., 2002, Experiment c (static - right - 0) 9    0.51
Pouget et al., 2002, Experiment c (static - left - 0)  9    0.46

Table 5. Effect sizes of relevant studies
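The mean and median values reported above ("around 0.65") can be reproduced directly from the effect sizes in Table 5; a quick check in Python:

```python
from statistics import mean, median

# Effect sizes listed in Table 5 (Henriques et al., 1998; Pouget et al., 2002)
effect_sizes = [0.82, 0.65, 0.60, 0.73, 0.71, 0.51, 0.46]

mean_es = round(mean(effect_sizes), 2)  # 0.64
median_es = median(effect_sizes)        # 0.65
```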


Experiment A2 Power Analysis

Similar to Experiment A1, an analysis was conducted to determine how many participants were needed to achieve the power necessary for the statistical analyses. However, because few studies have investigated evoked potentials during movement planning, it was difficult to estimate the expected population effect size for these tests. Blouin et al. (2014) investigated somatosensory evoked potentials during movement planning to somatosensory and visual targets. They recruited 10 participants, and their correlation analyses yielded a strong positive correlation between SEP amplitude and directional errors for proprioceptive targets [r = 0.61, p = .05, yielding d = 1.5 (Cohen, 1992)]. The authors also noted a strong negative correlation between SEP amplitude and directional errors for visual targets [r = -0.80, p = .05, yielding d = 2.6]. Thus, given the large correlation effect sizes and the uncertainty in the literature, the lower of the two effect sizes (1.5) was used to compute the estimated power. The statistical model used was the pairwise t-test and alpha was set to .05. All values were submitted to G*Power 3.1, and the analysis revealed that a total of 8 participants would achieve 95 percent power, with a critical t of 2.3.

Due to the uncertainty of the estimates of the population effect size, a greater number of participants (10) was recruited. This number also matches previous studies examining evoked potentials during movement planning (Blouin et al., 2014).
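The paired t-test power estimate above can be sanity-checked with a small Monte Carlo simulation. This is a sketch, not the G*Power implementation: it assumes a two-tailed test at alpha = .05, so the critical t for df = 7 is approximately 2.365 (consistent with the reported critical t of 2.3).

```python
import random
import statistics

def paired_t_power(d, n, t_crit, n_sims=20000, seed=1):
    """Monte Carlo power estimate for a two-tailed paired t-test.

    Each simulated experiment draws n difference scores from N(d, 1),
    i.e., a standardized effect size (Cohen's d) of d, then counts how
    often |t| exceeds the critical value.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        diffs = [rng.gauss(d, 1.0) for _ in range(n)]
        t = statistics.mean(diffs) / (statistics.stdev(diffs) / n ** 0.5)
        if abs(t) > t_crit:
            hits += 1
    return hits / n_sims

# d = 1.5 (the lower of the two Blouin et al. estimates), n = 8, df = 7
power = paired_t_power(d=1.5, n=8, t_crit=2.365)
```

With these inputs the simulated power lands in the mid-90s percent range, in line with the 95 percent figure reported above.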

Study B Power Analysis

In Study B, a novel paradigm was employed to investigate the latency of online corrections under different sensory conditions. The power analysis for this study was intended to be based on previous studies that investigated the latency of online corrections when reaching to visual targets. The numbers of participants in these experiments are outlined in Table 6. However, because the exact effect sizes were not reported in these studies, an a priori power analysis could not be computed.


Study Number of Participants

Komilis et al. (1993) Exp1: 10; Exp2: 5

Reichenbach et al. (2009) 11

Saunders and Knill (2003) 12

Sarlegna and Blouin (2010) Exp1: 8; Exp2: 5

A post-hoc power analysis was performed on the key variables of constant error and latency of online corrections to verify that the tests used in Study B were sufficiently powered to detect between-modality differences. As G*Power is still limited to the assumption of one within-subject factor (Faul et al., 2009), and between-modality differences were the focus of the experiment, the power analyses were conducted using the main effects of modality. For the main effect of modality on constant error, the parameters input into G*Power were as follows:

Effect size f: 1.78 (computed from partial eta squared = 0.76; "as in SPSS" option)
Alpha: 0.05
Sample size: 14
Number of groups: 1
Number of measurements: 2 (visual and somatosensory target modalities)
Nonsphericity correction ε: 1

Power was estimated at 99% with a critical F of 4.67.

Table 6. Number of participants in relevant studies

For the main effect of modality on correction latency, the following values were used:


Effect size f: 4.89 (computed from partial eta squared = 0.96; "as in SPSS" option)
Alpha: 0.05
Sample size: 14
Number of groups: 1
Number of measurements: 2 (visual and somatosensory target modalities)
Nonsphericity correction ε: 1

Power was estimated at 100% with a critical F of 4.67.
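The effect size f values entered into G*Power follow from the reported partial eta squared values via the standard conversion f = sqrt(η²p / (1 − η²p)), which reproduces the 1.78 and 4.89 figures:

```python
import math

def f_from_partial_eta_sq(eta_p2):
    """Convert partial eta squared to Cohen's f (the form G*Power expects)."""
    return math.sqrt(eta_p2 / (1.0 - eta_p2))

f_constant_error = f_from_partial_eta_sq(0.76)      # ~1.78
f_correction_latency = f_from_partial_eta_sq(0.96)  # ~4.90
```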

Based on these power analyses, the studies in the present thesis were likely sufficiently powered. The results also suggest that these effects could be detected with smaller sample sizes: a liberal estimate from G*Power indicates that, based on the present data, reliable results should be obtained with samples as small as 6 participants (as small as 3 for the correction latency results alone).


Appendix 2: Supplementary Analysis Study A

Robot Apparatus Development and Testing

The robot perturbation device used in Study B was developed from a previously existing apparatus used for training upper-limb reaching movements (Manson et al., 2014). The device is an Epson SCARA E2L853 (Seiko Epson Corp.). The robot moves with 4 degrees of freedom and is capable of reproducing highly accurate spatial trajectories (0.02 mm error) with loads of up to 5 kilograms.

Developing a protocol capable of producing real-time perturbations during the movement required shifting the computational workload of the motion tracking system to another device. This solution was adopted because numerous alternatives that used the same device for both motion tracking and robotic guidance (as in Manson et al., 2014) failed to produce the desired results (see Figures 28 and 29).

There were two specific challenges. The first was the significant lag and variability in the time between the processing of the arm trajectory from the Optotrak and the start of the perturbation (see Figure 28). The second, related challenge was the loss of resolution that occurred when both the Optotrak and robotic trajectories were recorded (see Figure 29).


Figure 28. Reaction of the robot perturbation movement when signaled by motion tracking processed on the same computer.

Figure 29. Number of lost samples in 500 one-second recordings. Sixty percent of the data were lost samples, which severely decreased the amount and resolution of the usable recordings; 55% of samples were lost during the robot's trajectory.


To resolve these issues, numerous solutions were proposed and tested. For brevity, only the solution implemented for Study B is presented here. The proposed solution involved the use of a network crossover cable to transform the single station robotic guidance setup into a dual station setup (see Figure 30). To accomplish this, the trajectories of both the robotic arm and the reaching arm were tracked by the Optotrak on a separate workstation. Using MATLAB's Data Acquisition Toolbox, a hardwired direct Ethernet connection, and custom parallel-port input-output devices attached to both the robot controller and the Optotrak workstation, real-time signals about the limb's trajectory (movement start, movement end) were used to trigger robot perturbations. This solution resulted in faster and less variable robot reaction times (see Figure 31) and less loss due to dropped samples (see Figure 32). These values were computed using a longer sampling time (3 s) to better approximate the time used in the actual experiment.
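The trigger logic described above, in which the tracking workstation detects movement start from the limb trajectory and signals the robot controller, can be illustrated with a simplified onset detector. The actual implementation used MATLAB's Data Acquisition Toolbox; the function name and velocity threshold below are hypothetical illustrations, not the thesis code.

```python
def detect_movement_onset(positions, sample_rate_hz, velocity_threshold=0.05):
    """Return the index of the first sample whose speed exceeds the threshold.

    positions: 1-D positions (m) sampled at sample_rate_hz.
    velocity_threshold: speed (m/s) treated as movement onset (hypothetical value).
    Returns None if the threshold is never crossed.
    """
    dt = 1.0 / sample_rate_hz
    for i in range(1, len(positions)):
        speed = abs(positions[i] - positions[i - 1]) / dt
        if speed > velocity_threshold:
            return i  # in the real setup, this event would trigger the robot
    return None

# Example: limb at rest for 10 samples, then moving at 0.2 m/s (500 Hz sampling)
trace = [0.0] * 10 + [0.2 * (k / 500.0) for k in range(1, 11)]
onset = detect_movement_onset(trace, sample_rate_hz=500)
```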


Figure 30. Differences between single and dual setups for the recording of perturbation data. The dual station setup was found to be the best solution for both maximizing the resolution of trajectory data and reducing variability in the speed of perturbations.

Figure 31. Most reaction times for the dual station setup fell within 60-75 milliseconds after the go signal. This reduction in perturbation variability provided a basis for the protocol.


Figure 32. Overall, the number of lost samples was reduced from 60% to 10%, with only 4% of lost samples occurring during the robot's movements.


Appendix 3: Supplementary Analysis Study B

Motion Detection Time versus Latency of Online Corrections

An additional analysis was conducted to examine whether the differences in the time needed to detect target motion were sufficient to explain the differences in correction latencies between the two target modalities. To answer this question, the differences in detection times were compared to the differences in correction latencies. If the differences in correction times were attributable solely to differences in the detection of target perturbations, then the between-modality differences in detection times should not differ from the between-modality differences in correction times.

Between-modality differences in detection time (DTdiff) were computed for each of the perturbation directions in the vocal response time protocol (see equation below).

DTdiff = DTvis – DTsoma

The between-modality differences in correction latencies (CTdiff) were computed by averaging correction latencies from the 100 ms and 200 ms conditions for each modality (see equations below) and then taking the difference between modalities for each of the perturbation directions.

CTvis = (CTvis 100 ms + CTvis 200 ms) / 2

CTsoma = (CTsoma 100 ms + CTsoma 200 ms) / 2

CTdiff = CTvis - CTsoma
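The difference computations above can be expressed in a short script. This is a minimal sketch with hypothetical function names and placeholder values, not code from the thesis:

```python
# Sketch of the between-modality difference computations described above.
# All argument values are illustrative placeholders, not data from the thesis.

def dt_diff(dt_vis, dt_soma):
    """DTdiff = DTvis - DTsoma, computed per perturbation direction (ms)."""
    return dt_vis - dt_soma

def ct_diff(ct_vis_100, ct_vis_200, ct_soma_100, ct_soma_200):
    """Average each modality's 100 ms and 200 ms correction latencies,
    then subtract: CTdiff = CTvis - CTsoma (ms)."""
    ct_vis = (ct_vis_100 + ct_vis_200) / 2
    ct_soma = (ct_soma_100 + ct_soma_200) / 2
    return ct_vis - ct_soma
```

Each participant contributes one DTdiff and one CTdiff per perturbation direction, which are then the inputs to the repeated-measures ANOVA below.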

Both CTdiff and DTdiff times were submitted to a 2 measure (detection time, correction time) by 2 perturbation direction (away, toward) repeated-measures ANOVA. The ANOVA revealed a significant main effect of measure, F(1,13) = 25.34, p < 0.001, and a significant interaction between measure and perturbation direction, F(1,13) = 15.45, p < .01, HSD = 39 ms. Overall, the between-modality differences in detection time were significantly smaller (M = 34 ms, SD = 61) than the between-modality differences in correction time (M = 120 ms, SD = 41). Breaking down the interaction revealed that only the between-modality differences in correction latencies were larger for targets perturbed toward the body (M = 144 ms, SD = 37) than for targets perturbed away from the body (M = 95 ms, SD = 30). These differences in limb-trajectory correction latencies were also larger than the between-modality differences in detection times (toward: M = 22 ms, SD = 50; away: M = 47 ms, SD = 70), which did not differ across perturbation directions (see Figure 33).

Contrasts between the target-modality differences in correction time and detection time. The difference between the two target-modality conditions is significantly larger in correction time compared to detection time.


Normalized Trajectory Deviations

To further understand the differences in correction latency, a supplementary analysis of trajectory deviations was also conducted. To do this, liberal t-tests were conducted between the normalized trajectory positions of perturbed and control trials in the direction axis. Overall, the results of these analyses show that participants corrected for target perturbations significantly more in the somatosensory target condition than in the visual target condition. Significant differences were found late in the trajectory between unperturbed movements and movements to somatosensory targets perturbed before or 100 ms after movement onset. In fact, no reliable differences were found for movements to visual targets perturbed after movement onset (see Figure 34).
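The pointwise comparison described above can be sketched as follows. This is a minimal illustration assuming each participant's trajectory has already been resampled to the same normalized time points; the function names and data are hypothetical, and the thesis's actual analysis (e.g., its liberal alpha criterion) may differ:

```python
import math

def paired_t(perturbed, control):
    """Paired t statistic at one normalized time point.
    perturbed/control: per-participant positions, in the same participant order."""
    diffs = [p - c for p, c in zip(perturbed, control)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def pointwise_t(perturbed_traj, control_traj):
    """Each argument: one list per participant of positions sampled at the
    same normalized time points. Returns one paired t value per time point."""
    return [paired_t(p, c)
            for p, c in zip(zip(*perturbed_traj), zip(*control_traj))]
```

Each resulting t value would then be compared against a (liberal) critical value to flag the time points at which perturbed and control trajectories diverge.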

[Figure 33: "Visual-Somatosensory Differences for Response and Correction Tasks"; y-axis: between-modality time difference (ms), 0-200; x-axis: measure (detection time, correction time); bars for Away and Toward perturbations; asterisk marks the significant difference.]


The results of this analysis, while supporting the primary hypothesis, do not completely coincide with the results of our analyses in Study B, in which early adjustments to somatosensory target perturbations and later adjustments (but adjustments nonetheless) to perturbed visual targets were found. The present analysis could instead be viewed as evidence that participants were unable to correct for visual target perturbations. These conflicting results are not unexpected, as studies examining correction latency methods (Oostwoud Wijdenes et al., 2014) have shown that spatial trajectory analyses are the most conservative measure of corrections.


Analysis of trajectory deviations. Each curve represents a temporally normalized average trajectory for each condition used in the experiment. Error patches represent the between-subject standard deviation. Targets were either perturbed before, 100 ms, or 200 ms after movement onset, or were unperturbed (control). Participants adjusted their trajectories to somatosensory targets perturbed 100 ms after movement onset, but no differences between spatial trajectories were found for visual targets. Participants were capable of fully adjusting their movements to targets perturbed before movement start.


Appendix 4: Laplacian-Transformed VEPs for Study A

Grand average of the Laplacian-transformed VEPs for each electrode of interest in the AUD-VIS and AUD-SOMA conditions. The peak-to-peak amplitudes between P1 and N1 were used for the normalization and comparisons made in Experiment A1.


Appendix 5: Experimental Questionnaires

Prior-to-Participation Questionnaires

Hand Dominance Test

Hand dominance test (adapted from Oldfield, 1971)

Please indicate which hand you would use for the following activities:

Writing right left

Throwing right left

Scissors right left

Toothbrush right left

Drawing right left

***Participants answering "right" to 4 or more items are deemed to be right-hand dominant.***

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97-113.
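The scoring criterion above is simple enough to express directly. This is a hypothetical sketch; the item keys and function name are illustrative and not part of the original inventory:

```python
def is_right_hand_dominant(answers):
    """answers: dict mapping each of the five activities to 'right' or 'left'.
    Per the criterion above: 4 or more 'right' answers -> right-hand dominant."""
    items = ["writing", "throwing", "scissors", "toothbrush", "drawing"]
    rights = sum(1 for item in items if answers.get(item) == "right")
    return rights >= 4
```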


Eye Dominance Test

To perform the Miles (1930) test, participants will be asked to extend both arms in front of themselves. They are then asked to bring both hands together to create a small opening and to view a distant object through the opening. The experimenter will then ask the participant to close their right eye. If the viewed object is no longer visible, the participant will be deemed to be right-eye dominant.

Adapted from Miles, W. R. (1930). Ocular dominance in human adults. The Journal of General Psychology, 3, 412-430.


Brief Neurological Questionnaire

How often do you experience the following?

Headaches Never Seldom Often

Light-headed or dizziness Never Seldom Often

Numbness or tingling Never Seldom Often

Tremor Never Seldom Often

Paralysis Never Seldom Often

Convulsions or seizures Never Seldom Often

Stroke Never Seldom Often

Sensory impairment Never Seldom Often

To be considered neurologically intact, participants cannot tick more than one "often" box in the first four categories and must tick "never" in all of the last four categories.
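As a sketch, the inclusion criterion above could be coded as follows; the item keys and function name are hypothetical:

```python
def neurologically_intact(responses):
    """responses: dict mapping the eight items to 'Never', 'Seldom', or 'Often'.
    Per the criterion above: at most one 'Often' among the first four items,
    and 'Never' for every one of the last four items."""
    first_four = ["headaches", "dizziness", "numbness", "tremor"]
    last_four = ["paralysis", "seizures", "stroke", "sensory impairment"]
    often_count = sum(1 for item in first_four if responses.get(item) == "Often")
    all_never = all(responses.get(item) == "Never" for item in last_four)
    return often_count <= 1 and all_never
```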
