+ All Categories
Home > Documents > Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from...

Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from...

Date post: 24-Jul-2020
Category:
Upload: others
View: 0 times
Download: 0 times
Share this document with a friend
46
Did this stuf get in? Between the Nanoscribe, Protolaser U3, and current 3D printing projects, there’s plenty we can do with rapid integration of polymers and microfluidics for soft complex systems. CGA TODO: Facilities Summary; consists of an overview, a statement on the intellectual merit of the proposed activity, and a 2 and 6 sets of keywords at the end of the overview in the Project Summary. Keywords: xxx; yyy at end of overview Science center stuff Fix Dickey word stuff Robot Skin The introduction is almost done, so edit this seriously. Our goal is to develop robot skin that is actually used by real robots. Skin is an integrated system that forms the mechanical interface with the world. It should be rich in sensors, and support actuation. We take inspiration from human skin, where there are many types of sensors embedded in a mechanical structure that maximizes their performance. There are several new elements of our approach: We will create and deploy many types of sensors, including embedded accelerometers, gyros, tem- perature sensors, vibration sensors, sound sensors, optical sensors sensing nearby objects, and optical sensors tracking skin and object velocity and movement. Previous skin and tactile sensing projects typically focused on one or only a few types of sensors. We will optimize the skin mechanics for manipulation and tactile perception. When the needs of manipulation and tactile perception conflict or are unclear, we will focus on optimizing performance on a set of benchmark tasks. Previous tactile sensing projects often place a tactile sensor on bare metal fingers, with little consideration of skin and tissue mechanics. We will explore a relatively thick soft skin, and consider soft tissue surrounding internal structure (bones) with a relatively human-scale ratio of soft tissue to bone volume, or structures that are com- pletely soft (no bones/rigid elements). We will explore a wide variety of surface textures including arbitrary ridge patterns (fingerprints), hairs, posts, pyramids, and cones. These patterns may vary across the skin provide a variety of contact aordances. We will explore superhuman sensing. For example, we will create vision systems (eyes) that look outward from the skin for a whole body vision system. We will use optical tracking to estimate slipping and object velocity relative to the skin. We will explore embedding ultrasound transducers in the skin to use ultrasound to image into soft materials that are in contact such as parts of the human body. We will explore deliberately creating air and liquid (sweat) flows (both inwards and outwards) for better sensing (measuring variables such as pressure, conductivity, and temperature) and controlling 1
Transcript
Page 1: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Did this stuf get in? Between the Nanoscribe, Protolaser U3, and current 3D printing projects, there’splenty we can do with rapid integration of polymers and microfluidics for soft complex systems.

CGA TODO:

Facilities

Summary;

consists of an overview, a statement on the intellectual merit of the proposed activity, and a statement on the broader impacts of the proposed activity.

2 and 6 sets of keywords at the end of the overview in the Project Summary.

Keywords: xxx; yyy at end of overview

Science center stuff

Fix Dickey word stuff

Robot SkinThe introduction is almost done, so edit this seriously.Our goal is to develop robot skin that is actually used by real robots. Skin is an integrated system that

forms the mechanical interface with the world. It should be rich in sensors, and support actuation. We takeinspiration from human skin, where there are many types of sensors embedded in a mechanical structurethat maximizes their performance. There are several new elements of our approach:

• We will create and deploy many types of sensors, including embedded accelerometers, gyros, tem-perature sensors, vibration sensors, sound sensors, optical sensors sensing nearby objects, and opticalsensors tracking skin and object velocity and movement. Previous skin and tactile sensing projectstypically focused on one or only a few types of sensors.

• We will optimize the skin mechanics for manipulation and tactile perception. When the needs ofmanipulation and tactile perception conflict or are unclear, we will focus on optimizing performanceon a set of benchmark tasks. Previous tactile sensing projects often place a tactile sensor on bare metalfingers, with little consideration of skin and tissue mechanics.

• We will explore a relatively thick soft skin, and consider soft tissue surrounding internal structure(bones) with a relatively human-scale ratio of soft tissue to bone volume, or structures that are com-pletely soft (no bones/rigid elements).

• We will explore a wide variety of surface textures including arbitrary ridge patterns (fingerprints),hairs, posts, pyramids, and cones. These patterns may vary across the skin provide a variety of contactaffordances.

• We will explore superhuman sensing. For example, we will create vision systems (eyes) that lookoutward from the skin for a whole body vision system. We will use optical tracking to estimateslipping and object velocity relative to the skin. We will explore embedding ultrasound transducers inthe skin to use ultrasound to image into soft materials that are in contact such as parts of the humanbody.

• We will explore deliberately creating air and liquid (sweat) flows (both inwards and outwards) forbetter sensing (measuring variables such as pressure, conductivity, and temperature) and controlling

1

Page 2: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

adhesion. We will explore humidifying the air for better airflow sensing, contact management, adhe-sion control, and ultrasound sensing.

• We will develop materials to make the skin rugged, and methods to either easily replace or repairdamage.

• We will define a set of benchmark tasks to guide design and evaluation of our and other’s work,The tasks include exploring and manipulating rigid and articulated (jointed) objects, and deformableobjects such as wire bending, paper folding, screen (2D surface) bending, and working with clay(kneading, sculpting with fingers and tools, and using a potters wheel). The system we constructwill recognize, select, and manipulate objects among a set of objects (find keys in your pocket, forexample). Our most difficult set of benchmarks will be mockups of tasks often found in caring forhumans: wiping, combing hair, dressing, moving in bed, lifting, transfer, and changing adult diapers.

• We will explore a range of perceptual approaches, including object tracking based on contact types,forces, and distances, feature based object recognition based on features such as texture, stiffness,damping, and plasticity, feature based event recogntion based on spatial and temporal multimodalfeatures such as the frequency content of vibration sensors, and multimodal signature based eventrecognition.

• We will explore behavior and control based on explicit object trajectories and force control, discrimi-nant or predicate based policies, and matching learned sensory templates.

1 Research Plan

This section is almost done except for the Generation 3 skin section, which needs to reflect the ProposedResearch writetup.

Our research plan has two thrusts: A: developing skin and skin sensors, and B: developing and eval-uating perception, reasoning, and control algorithms that allow the skin to do desired tasks. We have athree step plan for developing skin and skin sensors:

A.1 Generation 1: Use off the shelf sensors embedded in optical grade silicone (Near Infrared (NIR)). Thisskin includes optical sensing of marker movement to measure strain in all three directions. We will userange finding sensors to sense objects at up to a 0.5m distance. We will use embedded IMUs whichinclude an accelerometer, gyro, magnetometer, and temperature sensor. We will embed piezoelectricmaterial to capture high frequency vibration, and pressure sensors. We will embed induction loopsto sense electric field, and explore imposing local or global (possibly time varying) electric fields toresolve orientation. We will glue hairs or whiskers to piezoelectric crystals to provide mechanicalsensing at a (short) distance.

A.2 Generation 2: Integrate current benchtop prototypes into Generation 1 skin. Hyperelastic sensingelements will be composed of soft silicone elastomer embedded with microfluidic channels of non-toxic liquid metal alloy, eg. eutectic gallium-indium (EGaIn ). Strain, shear deformation, and/orapplied surface pressure causes predictable changes in resistance or capacitance of the embeddedliquid metal sensing elements. EGaIn can also be used for capacitive touch measurements. Likewise,we can pattern soft, conductive laminate films to create arrays of capacitive touch pixels (for example,graphene pastes or films separated by elastomer). Conductive elastomers will be used to interfaceliquid EGaIn sensors and circuits with a flexible PC board populated with rigid microelectronics.This will include a microcontroller, battery, and off-the-shelf (OTS) RF transmitter or transceiver.

2

Page 3: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

A.3 Generation 3: Develop completely new skin sensing technology using novel concepts, components,and materials. For example, we will explore artificial hair cells with hairs and whiskers attached tofive or six axes of force and motion sensing. Instead of liquid metal, these integrated sensors will becomposed of insulating and conductive elastomers for detecting deformation through changes in elec-trical capacitance. They will be produced with UV laser micromachining or additive manufacturingthrough customized 3D printing or 2-photon polymerization. As before, the robot skin will interfacewith a flexible printed circuit containing a microcontroller and power. For Generation 3, the antennain the transmitter/transceiver will be replaced with a soft, elastically deformable antenna integratedinto the skin itself.

We need a unified discussion of the use of imposed and inherent magnetic and electrical fields. Atsome abstract level the next two paragraphs are doing similar things. I can imagine imposing an ACelectric field and using inductive sensors.

Move the details of all of this to the proposed work section?

Co-PI Onal is studying the use of Hall effect sensing ICs to detect the deformation of soft materialsusing an embedded miniature magnet located at a precise location with respect to the Hall element.Our preliminary results have demonstrated accurate and high-bandwidth curvature measurements forbending elements. We propose to extend upon this work to develop distributed 6-D force/momentmeasurements using an array of multiple magnet-Hall pairs.

For touch and force sensing we will explore distribute nanoparticles in a gel. When pressed, theparticles form a local percolated network that changes the local conductivity of the gel. In biology,touch generates an action potential that signals to the brain. We propose to use soft materials thatgenerate a small potential when deformed. For example, droplets of EGaIn form an oxide skin thatprotects the underlying metal, but when pressed, this skin cracks and exposes the metal. This could,in principle, be harnessed to generate a potential. I may even have some prelim results to dig up. Wecan utilize hydrogels to replace EGaIn for touch sensing because the gels are transparent.

We plan to develop the “off the shelf” skin in Year 1, the EGaIn microchannel-based skin in Year 2,and then 3rd generation skin in Year 3. Skin sensing technology needs to remain functional under extremedeformation. The skin with embedded sensors must remain mechanically suitable for desired tasks.

The development and evaluation of perception, reasoning, and control algorithms for our three skin sys-tems will use a series of test setups:

B.1 Evaluate a patch of skin on the laboratory bench.

B.2 Evaluate the skin on a simple hand.

B.3 Evaluate the skin on our current lightweight arm and hand.

B.4 Evaluate the skin on our Sarcos Primus humanoid (whole body sensing).

2 Why This Matters

This section needs to be edited down to be more concise and actually flow.We are motivated by the need to build better assistive robots and environments for people with disabilities,

older adults trying to live independently, and people with strokes, ALS, and spinal cord and traumatic braininjury. These people need help. It is a tremendous burden on caregivers (typically a spouse) to provide 24/7care, and many people would rather have a machine change their diapers than a stranger. Here are severalways better robot skin is on the critical path:

3

Michael
Sticky Note
I will send you a powerpoint file. In essence, we put a drop of egain as an electrode in contact with a hydrogel. When pressed, we observed a change in current.
Michael
Highlight
Michael
Sticky Note
Please delete.
Michael
Highlight
Michael
Sticky Note
...generate a potential in a circuit composed of soft, biocompatible hydrogels and small droplets of EGaIn.
Michael
Sticky Note
Do we ever define EGaIn?
Michael
Sticky Note
My 2 cents - the following two paragraphs are details (move to proposed work)
Michael
Sticky Note
I would consider combining the three paragraphs to just hit on high level ideas here (keep it to one total paragraph) and move the rest to the proposed work. A.1 and A.2 are only one paragraph each. "For example, we will utilize artificial hairs and whiskers created by additive manufacturing, incorporate Hall sensing elements to detect deformation of soft materials, and explore new concepts of hydrogel-EGaIn composites that mimic the action potential generated in human skin when touched".
Michael
Sticky Note
I'm probably dumb, but I have no idea what Sarcos Primus is?
Michael
Cross-Out
Michael
Inserted Text
to help
Michael
Cross-Out
Page 4: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 1. Testbeds. Left: lightweight arm and hand. Right: SARCOS Primus hydraulic humanoid.

• After decades of research, robot hands are nowhere close to human levels of performance. One hugeproblem is terrible sensing and perception. Better robot skin would make a huge difference.

• We also plan to apply our skin to the entire arm of our arm testbed, and the entire body of ourSarcos humanoid robot. High quality whole body tactile sensing and perception would make a hugedifference here as well.

• We also want to use our robot skin to instrument everyday objects and the environment such as tabletops, seats, cups, utensils, and kitchen tools, so we can completely sense what is happening in anenvironment. This information can be used to guide more effective robot behavior and provide betterservice to humans by recognizing their actions and activities and predicting their intentions.

Humans greatly depend on tactile sensing during physical interactions. It is well-established that a tempo-rary loss of this modality results in significant losses in functionality, especially during grasping tasks [36].In robotics, however, reliable integration and use of a sense of touch is severely lacking compared to humanlevels or other sensing modalities such as vision. For robotics to deliver on its promise to intimately interactwith human users safely and adaptively in unstructured environments, developing the robotic counterpart ofa skin covering the entire body is a critical need.

In contrast to existing industrial robots, assistive healthcare robots must engage in physical contact withhumans. For this contact to be safe and comfortable, the robot must be padded with material that matchesthe mechanical properties of natural human tissue. Any mechanical “impedance mismatch” could lead tostress concentrations and kinematic constraints that could lead to bodily immobilization or injury. In orderto preserve the compliance of the underlying padding, the robot skin must be soft and elastically deformable.Stretchable functionality is also required to attach the skin to non-developable surfaces and moving joints.While rigid electronics can be buried beneath the padding, sensors and circuit wiring must be composedprimarily of elastically deformable conductive material that remain electrically functional when stretchedwith strains on the order of 10-100%. Flexibility is not enough – such electronics should be composedof insulating elastomer embedded with conductive elastomer traces, microchannels of conductive fluid, ormeshed/wavy/coiled metal wiring. Moreover, these stretchable circuits require a reliable interface with rigidelectronics for multiplexing, processing, and wireless communication.

We focus on a crucial component of robots working with people: physical human-robot interaction. Asa driving application we will develop robots that can help physically manipulate people in hospital, nursinghome, and home care contexts. We envisage cooperative relationships between robots and caregivers, androbots and patients, where each participant leverages their relative strengths in the planning and performanceof a task. One goal is to reduce injuries to caregivers. A second goal is to help patients recover faster. Athird goal is to provide better “customer service” and autonomy to patients, particularly in rehabilitationsettings. We will explore learning complex policies by watching humans execute them or by programmingthem directly, and then refining those policies using optimization and learning. We will test our ideasby implementing them on our Sarcos humanoid robot. The humanoid will work in concert with at least

4

Michael
Sticky Note
Are there good references here?
Michael
Sticky Note
huge difference relative to what?
Michael
Sticky Note
Is "instrument" the right word here?
Michael
Cross-Out
Michael
Inserted Text
interact with
Michael
Sticky Note
This paragraph is starting to deviate a little from the message I was getting from the introduction. This sounds like we are promising the moon and the sun. The introductory paragraphs of the proposal focus mainly on new sensing modalities and creation and calibration of this skin. Here, it sounds like the proposed work has the goal of helping caregiviers. Unless we really plan to do that, I suggest we just mention this as a long term vision or broader impact.
Page 5: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

one human (simulating a human caregiver) as part of a human-robot team. Tasks to be explored includerepositioning patients in bed, helping them get out of bed and into a wheelchair, moving them from awheelchair to another chair, toilet, or car seat, helping them exercise, picking a fallen patient up off the floor,and handling accidents during any of these tasks. Our basic research will lay the foundation for severaldemonstration systems built on open platforms for transfer to industry and for use in clinical studies.

Our computer vision techniques will allow accurate tracking of patients and caregivers and their bodyparts at a distance using real time surface tracking based on structured light techniques. Our ultrasound-based sensing will allow tracking of a patient’s soft tissue and skeleton when the robot contacts the patientand provide volumetric perception of the inside of the patient as the manipulation progresses. EMG mea-surements and on-patient accelerometers and gyros will be part of an early warning system for patient fallsand other failures, and to allow the robot to more effectively cooperate with instrumented caregivers, byreacting more quickly to anticipated human movement.

We will develop new inherently safe robot designs using soft (non-rigid) structures and mechanisms withsmooth, pliable and reactive surfaces, such as inflatable robots.

We will develop new control techniques that refine existing human strategies and invent new ones, basedon cognitive optimization: general-purpose systems that learn to maximize reward or utility, using new,more powerful model-based forms of adaptive/approximate dynamic programming (ADP) and reinforce-ment learning. We will develop efficient multiple model methods for designing robust nonlinear and timevarying task controllers for physical human-robot interaction where the robot, the patient, and possibly othercaregivers all participate in the interaction.

Physically manipulating people is an important area for health care robotics. Robots are useful for dan-gerous tasks. Physically manipulating people in a care facility leads to injuries and occupational disability,and is statistically one of the most harmful of human occupations [? , ? , ? , ? , ? ]. Many care facilitiesare instituting “no lift” policies, in which caregivers are no longer allowed to do large force physical ma-nipulation of patients, but are required to use mechanical aids. There is currently a shortage of nursing staff

for hospitals and nursing homes, compounded by injury and retirements due to disability. We also want tohelp older couples stay in their own homes longer, when one member has limited movement and the spouseis typically not very strong but would be cognitively able to operate a transfer aid. Unfortunately, so farrobotics has largely addressed this problem in a narrow and probably unacceptable way, using a fork liftapproach to manipulating people that leaves little room for either caregivers or the patient to help. We be-lieve an approach that is more synergistic with current practice will be more useful, allow the use of weaker,cheaper, and safer robots, and will be more acceptable to patients and caregivers. A broader goal of theproposal is to enable robots to work with humans in the real world filled with deformable objects. Currentrobotics research focuses on rigid robots manipulating rigid objects. We see applications of our work in jobsthat currently use teams of physically cooperating humans: construction, rescue, warehouse logistics, repairof large objects, and entertainment such as sports.

We want to minimize tissue deformation during manipulation, and especially minimize tissue shear strain.Consider a wet paper grocery bag full of soft items and an off center heavy can (Figure 2). Supporting thebag at the center (dashed arrow) leads to shearing forces on the paper bag and the bag will probably fail.Supporting the bag under the load concentration (the solid arrow) generates less shearing forces on the bag.The right side of Figure 2 shows a schematic example of rolling a human in a bed. Different forces can beapplied to a cross section of the human at the shoulder. The force application marked by the solid arrowpushes tissue onto the scapula and pushes the scapula onto the rib cage, distributing forces and leading tomostly compression of soft tissue, while the forces indicated by the dashed arrows lead to more tissue shear.

Future robots are expected to free human operators from difficult and dangerous tasks requiring dexterityin various environments. Prototypes of these robots already exist for applications such as extra-vehicularrepair of manned spacecraft and robotic surgery, in which accurate manipulation is crucial. Ultimately,we envision robots operating tools with levels of sensitivity, precision and responsiveness to unexpected

5

Michael
Sticky Note
Are there any special precautions needed with working with patients?
Michael
Highlight
Michael
Sticky Note
This paragraph seems a bit redundant from the previous ones.
Page 6: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 2. Left: Grocery bag example. Solid arrow is better force to apply, dashed arrow causes more shear. Right: Rollingperson in bed example. What is shown is an axial section at the level of the shoulders (a cross section perpendicular to thespine that includes both shoulders). In this case the solid yellow arrow that pushes the scapula onto the rib cage rather thansideways is a better force to apply. The dashed arrows cause more shear strain in the soft tissue. We also need to considersupport forces affected by robot actions (the green solid arrow).

contacts that exceed the capabilities of humans, making use of numerous force and contact sensors on theirarms and fingers.

3 Background: Skin and Skin-based Sensing

This section is currently a collection of paragraphs, which need to be edited down to a concise review ofwhat is already done by others.

Say something about human sensing: sensor types, resolution, etc.Compared to even the simplest of animals, today’s robots are impoverished in terms of their sensing abil-

ities. For example, a spider can contain as many as 325 mechanoreceptors on each leg [12, 48], in additionto hair sensors and chemical sensors [11, 146]. Mechanoreceptors such as the slit sensilla of spiders [12, 18]and campaniform sensilla of insects [105, 151] are especially concentrated near the joints, where they pro-vide information about loads imposed on the limbs – whether due to regular activity or unexpected eventssuch as collisions. By contrast, robots generally have a modest number of sensors, often associated withactuators or concentrated in devices such as a force sensing wrist. (For example, the Robonaut humanoidrobot has 42 sensors in its hand and wrist module [19].) As a result, robots often respond poorly to unex-pected and arbitrarily-located impacts. The work in this paper is part of a broader effort aimed at creatinglight-weight, rugged appendages for robots that, like the exoskeleton of an insect, feature embedded sensorsso that the robot can be more aware of both anticipated and unanticipated loads in real time.

Part of the reason for the sparseness of force and touch sensing in robotics is that traditional metal andsemiconductor strain gages are tedious to install and wire. The wires are often a source of failure at jointsand are receivers for electromagnetic noise. The limitations are particularly severe for force and tactilesensors on the fingers of a hand.

3.1 Robot and other types of artificial skin

Siegfried Bauer provides a useful review of robot skin [1][2].The development of highly deformable artificial skin with contact force (or pressure) and strain sensing

capabilities [127] is a critical technology to the areas of wearable computing [102], haptic interfaces, andtactile sensing in robotics. With tactile sensing, robots are expected to work more autonomously and be

6

Michael
Cross-Out
Michael
Inserted Text
The use of soft, electronic (e-skins) is an emerging topic in the literature.
Michael
Sticky Note
I think this section would also be improved if each sub-section made a connection to the proposed work (e.g., "we will build upon this work by...", or "For these reasons, this modality of sensing is insufficient" or whatever
Page 7: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

more responsive to unexpected contacts by detecting contact forces during activities such as manipulationand assembly. Application areas include haptics [120], humanoid robotics [159], and medical robotics [141].Different approaches for sensitive skin [98] have been explored.

One of the most widely used methods is to detect structural deformation with embedded strain sensorsin an artificial skin. Highly sensitive fiber optic strain sensors have been embedded in a plastic roboticfinger for force sensing and contact localization [124, 133] and in surgical and interventional tools for forceand deflection sensing [68, 130]. Embedded strain gauges have been used in a ridged rubber structurefor tactile sensing [180]. Detecting capacitance change with embedded capacitive sensor [77] arrays isanother approach for tactile sensing, as shown in a human-friendly robot for contact force sensing [168, 138].Embedding conductive materials in a polymer structure is also a popular method for artificial skin such asnanowire active-matrix circuit integrated artificial skins [161], conductive polymer-based sensors [108],solid-state organic FET circuits [74], and conductive fluid embedded silicone robot fingers [176]. In spiteof their flexibility, the above example sensing technologies are not truly stretchable and also cannot remainfunctional at large strains. For example, fiber optic sensors have upper strain limits of approximately 1-3%for silica [91] and 10% for polymers [137], and typical strain gauges cannot tolerate strains higher than 5%[142].

There have been stretchable skin-like sensors proposed using different methods. Strain sensing fabriccomposites for hand posture and gesture detection has been developed using an electrically conductiveelastomer [96]. A stretchable tactile sensor has been proposed also using polymeric composites [170].A highly twistable tactile sensing array has been made with stretchable helical electrodes [27]. An ionic fluidhas been used with an elastomer material for measuring large strains [29]. Electroconductive elastic rubbermaterials have been explored for measuring displacement and control of McKibben pneumatic artificialmuscle actuators [173, 87, 88]. However, these sensors are not able to remain functional at strains over100%.

3.2 Sensing based on conductive liquids in channels

A novel and specialized sensor technology for soft robots is the use of liquid metals embedded in soft sub-strates, a technology stemming from mercury-in-rubber strain gauges from 1960s [65]. Thus, dimensionalchanges due to deformations in the substrate are reflected as resistance changes by the liquid metal. Recentwork incorporates fluidic channels inside silicone rubber filled with eutectic Gallium-Indium (EGaIn ) tomeasure joint angles of a finger [82] and for a tactile sensing array [83]. A short survey on sensors built withEGaIn is given in [172].

We focus on a particular type of conductive liquid materials, i.e., eutectic gallium-indium (EGaIn ) [40],which are finding increasing applications in soft wearable robots [129], flexible sensors [100] and stretch-able electronics [121, 84]. EGaIn is an alloy of gallium and indium maintaining a liquid state at roomtemperature. Due to its high surface tension and high electrical conductance, EGaIn is an ideal conductorfor a soft sensor.

In this paper, we present a highly deformable artificial robotic skin with multi-modal sensing capable ofdetecting strain and contact pressure simultaneously, designed and fabricated using the combined conceptof hyperelastic strain and pressure sensors with embedded microchannels filled with EGaIn [132]. Theprototype is able to decouple multi-axis strains as well as contact pressure at strains of more than 100%.Although there have been some efforts on developing robotic skins and structures that can detect multipletypes of stimuli [45, 158], highly deformable and stretchable materials have not been fully explored formulti-modal sensing.

Mercury, francium, cesium, gallium, and rubidium are all elemental metals that are liquid below or nearroom temperature. Cesium and rubidium are both explosively reactive and francium is radioactive; therefore

7

Michael
Cross-Out
Michael
Inserted Text
It is possible to create a
Michael
Cross-Out
Michael
Inserted Text
It can
Michael
Cross-Out
Michael
Cross-Out
Michael
Inserted Text
low viscosity
Page 8: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 3. Examples of applications from the Dickey group: 3D printed structures[13], stretchable wires[37] and antennas,[27,28], self-healing circuits[38], microfluidic electrodes,[33] soft memory and diodes,[39],[40] and microdroplets.[41] Considervisiting our YouTube channel to see some of these applications in action.

these materials are not practical. By process of elimination, Hg and Ga (and their alloys) remain. Hg has twoprinciple disadvantages: (1) it is toxic - exposure can take place through inhalation of its vapor or adsorptionvia the skin,[3, 3, 4] and (2) it has a very large surface tension (¿400 mN/m) which generally limits liquidHg to spherical shapes that have limited utility.

Ga is an attractive alternative to Hg. Ga has low-toxicity[5] and no vapor pressure (which means Gacan be handled without worry of inhalation). Ga has a melting point of 30C, but it can be supercooledsignificantly.[6] The surface of the metal reacts rapidly with oxygen to form a native oxide layer. This oxideskin is thin (0.5 3 nm)[7, 8, 9, 10], passivating[8, 11], and composed of gallium oxide.[12] Unlike Hg, theoxide skin allows the liquid gallium to be deformed into stable non-equilibrium shapes. Fig. 1 shows photosof the metal in non-spherical shapes. The oxide skin is strong enough that the metal can be 3D printed(please consider watching, 3D Printing of Liquid Metal at Room Temperature on YouTube to understandthe properties of the material and our excitement for this material).[13] The oxide skin can be removed easilyusing acid (pH ¡3), base (pH ¿10), or electrochemistry.[14] In both cases, the metal beads up in the absenceof the oxide due to the large tension of the metal, as shown in Fig. 1. Various metals (e.g. indium and tin)alloy with gallium and depress the melting point below room temperature.[12, 15] This proposal focusesprimarily on EGaIn , the eutectic alloy of 75 wtelectromagnets and heaters)[32, 33], energy harvesters,[34]batteries,[35] soft robotics and soft electrodes[36].

Recently, researchers in the Soft Machines Lab (PI: Majidi) have developed a fabrication method tosimultaneously pattern insulating elastomers, conductive elastomers, and thin films of EGaIn liquid (4).[97]This versatile technique allows for soft-matter circuits and sensor arrays to be produced in minutes without

8

Michael
Cross-Out
Michael
Sticky Note
You can certainly leave this out. It seems like it got majorly distorted anyhow.
Michael
Cross-Out
Michael
Cross-Out
Michael
Inserted Text
<
Michael
Cross-Out
Michael
Inserted Text
>
Michael
Sticky Note
Something is missing in this last sentence
Michael
Inserted Text
(co-
Michael
Cross-Out
Michael
Sticky Note
This paragraph is likely fine, but seems like too much detail relative to the rest of the proposal. None of the details are critical (just background)
Page 9: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

(a)! (b)!

(e)!

(c)! (d)!

(f)! (g)!

1 cm!

Figure 4. Soft-matter electronics produced with CO2 laser machining.[97] (a) Resistive tactile sensor composed of laser-patterned conductive poly(dimethylsiloxane) (cPDMS). (b) Sensor array composed of overlap- ping strips of cPDMS insulatedby non-conductive elastomer. (c) Laser-patterned inclusions of PEDOT:PSS embedded in PDMS. (d) Laser-patterned PE-DOT:PSS embedded in polyurethane. (e) Laser-patterned eutectic Gallium-Indium alloy (EGaIn ) embedded in PDMS. (f)Integration of a serpentine EGaIn wire and cPDMS electrodes in a PDMS-sealed circuit. g) LED-embedded circuit composedof laser-patterned cPDMS and EGaIn .

the need for labor-intensive casting and injection-filling.

3.3 Magnetic sensing

Hall elements are compact, accessible, and inexpensive. The quick response and accuracy of Hall elementsfor traditional robotic applications have previous been verified for joint angle proprioception [24] as wellas tactile exteroception [165]. Contact-free sensing capabilities are highly desired for soft robotic research.Thus, a unique advantage of our wireless magnetic field measurement approach is its negligible effect onmaterial stiffness.

3.4 Resistive flex sensors

Resistive flex sensors offer a simple and compact solution for embedded sensing in soft robotics. Neverthe-less, we concluded in a preliminary study (reported elsewhere) that they suffer from dynamic artifacts, suchas delayed response and drift.

3.5 Optical sensing of curvature

Fiber Bragg grating is a powerful sensing solution for deformable bodies used successfully for force mea-surements on a soft finger [123], and shape reconstruction [184]. Although this technology facilitates highlyaccurate curvature measurements using a thin and flexible optical fiber, the required supporting hardwaredisables embedded operation, especially for tetherless mobile robots with many degrees of freedom.

Various groups have explored optical fibers for tactile sensing, where the robustness of the optical fibers,the immunity to electromagnetic noise and the ability to process information with a CCD or CMOS cameraare advantageous [38, 71, 99]. Optical fibers have also been used for measuring bending in the fingers of aglove [66] or other flexible structures [37], where the light loss is a function of the curvature. In addition, a

9

Michael
Sticky Note
Figure 4 is not mentioned in the text.
Michael
Sticky Note
This paragraph would be improved if it was connected to the proposed work somehow (1 sentence).
Page 10: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

single fiber can provide a high-bandwidth pathway for taking tactile and force information down the robotarm [7].

We focus on a particular class of optical sensors, fiber Bragg grating (FBG) sensors, which are find-ing increasing applications in structural health monitoring [3, 80, 93] and other specialized applications inbiomechanics [26, 39] and robotics [124, 134]. FBG sensors have been attached to or embedded in metalparts [46, 94] and in composites [156] to monitor forces, strains, and temperature changes. FBG sensors areparticularly attractive for applications where immunity to electromagnetic noise, small size and resistance toharsh environments are important. Examples include space or underwater robots [42, 49, 160], medical de-vices (especially for use in MRI fields) [131, 185], and force sensing on industrial robots with large motorsoperating under pulse-width modulated control [46, 186].

FBG sensors reflect light with a peak wavelength that shifts in proportion to the strain they are subjectedto. The sensitivity of regular FBGs to axial strain is approximately 1.2 pm/µε at 1550 nm center wavelength[15, 75]. With the appropriate FBG interrogator, very small strains, on the order of 0.1µε, can be mea-sured. In comparison to conventional strain gages, this sensitivity allows FBG sensors to be used in sturdystructures that experience modest stresses and strains under normal loading conditions. The strain responseof FBGs is linear with no indication of hysteresis at temperatures up to 370◦C [106] and, with appropriateprocessing, as high as 650◦C [122]. Multiple FBG sensors can be placed along a single fiber and opticallymultiplexed at kHz rates.

3.6 Skin-based Perception, Reasoning about Contact and Interaction, and Control:Background

Scott: Can we focus this more closely on learning contact and physical interaction tasks.Need a paragraph explaining why these things are relevant. Ex: Active Perception is important to tactile

sensing because sensors must be pressed against and moved across objects.

Learning from Demonstration. Learning from demonstration (LfD) [6, 14] is an approach to robotprogramming in which users demonstrate desired skills to a robot. Ideally, nothing is required of the userbeyond the ability to demonstrate the task in a way that the robot can interpret. Example demonstration tra-jectories are typically represented as time-series sequences of state-action pairs that are recorded during theteacher’s demonstration. The set of examples collected from the teacher is then often used to learn a policy(a state to action mapping) or to infer other useful information about the task that allows for generalizationbeyond the given demonstrations.

A variety of approaches have been proposed for LfD, including supervised learning [8, 25, 28, 57, 41, 2],reinforcement learning [150, 1, 187, 81], and behavior based approaches [113].

Gienger et al. [53] segment skills based on co-movement between the demonstrator’s hand and objectsin the world and automatically find appropriate task-space abstractions for each skill. Their method cangeneralize skills by identifying task frames of reference, but cannot describe skills like gestures or actions inwhich the relevant object does not move with the hand. Kjellstrom and Kragic [79] eschew traditional policylearning, and instead use visual data to watch the demonstrator’s hand to learn the affordances of objects inthe environment, leading to a notion of object-centric skills. Ekvall and Kragic [44] use multiple examplesof a task to learn task constraints and partial orderings of primitive actions so that a planner can be used toreproduce the task.

Inverse Reinforcement Learning. A special case of learning from demonstration is that of inverse rein-forcement learning (IRL) [111], in which the agent tries to infer an appropriate reward or cost function fromdemonstrations. Thus, rather than try to infer a policy directly from the demonstrations, the inferred costfunction allows the agent to learn and improve a policy from experience to complete the task implied by

10

Page 11: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

the cost function. IRL techniques typically model the problem as an Markov Decision Process (MDP) andrequire an accurate model of the environment [1, 143, 110], but some recent methods have been proposedto circumvent this requirement by creating local control models [162], and by using an approach based onKL-divergence [22], respectively. Maximum entropy methods have also been suggested as a way to dealwith ambiguity in a principled probabilistic manner [187, 78].

A central topic of this proposal has a very similar goal to that of IRL—classifying task success andfailure is a simple case of inferring a reward function, in which only the goal state is rewarded. However,all the aforementioned IRL methods assume that proper features are available that allow a task appropriatereward function to be learned. By contrast, this proposal focuses on tasks that have complex goals, for whichinformative features are not readily available. To discover these features, active manipulation and perceptionis required to reveal kinematic relationships between objects in the environment.

Several researchers have also previously examined the problem of failure detection or task verification invarious contexts. Pastor et al. [135] use Dynamic Movement Primitives (DMPs) [67] to acquire motor skillsfrom demonstrations of a complex billiards shot. Statistics are then collected from the demonstrations topredict the outcome of new executions of the same skill, allowing early termination if failure seems likely.Plagemann et al. [140] use a particle filter with Gaussian process proposal distributions to model failures(collisions) in a robot navigation task with noisy observations. Niekum et al. [114] learn a finite-staterepresentation of a task and a set of transition classifiers from demonstrations and interactive corrections.These are the used to classify new observations during a task and to initiate appropriate recovery behaviorswhen necessary.

Active Perception The use of active perception has been studied in robotics [72], computer vision [166],and computer graphics [139]. Early work by Triggs and Laugier [166] showed that a camera placementplanner can be designed to allow for optimal visibility while also taking into account the physical constraintsof the robot and the environment. The approach however requires that physical regions that are important toa task are known a priori. However, important physical regions are not obvious in many tasks, making thisapproach difficult to scale-up to more general demonstrated tasks.

Estimating optimal placement of cameras for machine vision tasks has been proposed for such tasksas inspection [61], object recognition [152], and surveillance [20]. In many cases, a significant amountof prior knowledge of objects (eg. location of barcodes, position of visual markers, 3D models, and motiondynamics) is used to optimize the camera viewing angle. One of the features of declarative models, however,is that it becomes difficult to scale up to a larger number of objects since discriminative features must bedetermined manually for every object. Declarative models are also limiting when a robot must be able toadapt to new tasks—the system must be able to learn with novel objects, configurations, and kinematics.

Interactive Scene Understanding. More recently, Katz etal. [72, 73] proposed an efficient method forrecovering articulated objects (ie. multiple connected rigid objects) through the use of structure from motion,where the motion is induced by a robot actively interacting with the object. Furthermore, they show thatthrough a series of interactions, the kinematic state of the connected parts can be classified as prismatic,revolute or disconnected. In work by Sturm etal. [155], more complex kinematic relationship betweenparts are inferred using Gaussian process models and local linear embedding. However, in both works, theperception tasks was significantly simplified by making use of fiducial markers to identify parts. In generalthe parts of an object along with their kinematics must also be inferred by a LfD system.

Discovering Discriminative Visual Features. More recently, the use of data-driven approaches usinglarge amounts of data has shown that it is possible to automatically search for discriminative visual featureswith little a priori knowledge about the perception task. Work by Le etal. [92] has shown that high-levelvisual features for image search can be discovered by using deep networks of autoencoders which build

11

Page 12: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

only on basic pixel intensities for low-level features. The work of Singh etal. [149] showed that imagelarge datasets can be mined for mid-level visual representations by searching for image patches that occurfrequently but are also distinct from the rest of the visual world. Our proposed work will extend this insightto the task of LfD by using large amounts of image data collected from exploration to find discriminativemid-level visual representations that correspond to goal success or failure.

3.7 Results from Prior NSF Support Related to this Work

Chris, Michael, and Scott need to fill in missing sections. The NSF has been rejecting grants without reviewrecently for not following the requested format.

If any PI or co-PI identified on the project has received NSF funding

(including any current funding) in the past five years, information

on the award(s) is required, irrespective of whether the support was

directly related to the proposal or not. In cases where the PI or

co-PI has received more than one award (excluding amendments), they

need only report on the one award most closely related to the

proposal. Funding includes not just salary support, but any funding

awarded by NSF. The following information must be provided:

(a) the NSF award number, amount and period of support;

(b) the title of the project;

(c) a summary of the results of the completed work, including accomplishments, supported by the award. The results must be separately described under two distinct headings, Intellectual Merit and Broader Impacts;

(d) the publications resulting from the NSF award;

(e) evidence of research products and their availability, including, but not limited to: data, publications, samples, physical collections, software, and models, as described in any Data Management Plan; and

(f) if the proposal is for renewed support, a description of the relation of the completed work to the proposed work.

Co-PIs Onal, Park, and Majidi have had no prior NSF support.

Atkeson: (a) NSF award number: EEC-0540865; amount: $29,560,917; period of support: 6/1/06 -5/31/15. (b) Title: NSF Engineering Research Center on Quality of Life Technology (PI: Siewiorek). (c)Summary of Results: This NSF Engineering Research Center supported during the period 6/2010-5/2013 aresearch project (lead C. Atkeson) on soft robotics. We only report on this part of the award.

Intellectual Merit.

Broader Impacts.

Development of Human Resources. The project trained one graduate student, who presented his resultsat conferences (...) and in journals (...). The student is now doing a postdoc at XXX with YYY.

(d) Publications resulting from this NSF award: [? , ? , ? , ? , ? ].

12

Page 13: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

(e) Other research products: www.cs.cmu.edu/ cga/bighero6, www.cs.cmu.edu/ cga/soft

(f) Renewed support. This proposal is not for renewed support.

Michael: You only need to do one.Dickey: (a) NSF award number: ECCS-0925797; amount: $341k; period of support:9/2009-8/2013; (b)Title: Stretchable, Tunable, Self-Healing Micro-Fluidic Antennas.(c) Summary of Results:

Intellectual Merit.This was a collaborative project whose objective was to study and develop new hybrid antenna systemsconsisting of highly stretchable, low-loss radiating/receiving antennas formed using microfluidic technology.See below for broader impact.Development of Human Resources.

(d) Publications resulting from this NSF award:(e) Other research products: None.(f) Renewed support. This proposal is not for renewed support.

Dickey: (a) NSF award number: CMMI-0954321; amount: $400k; period of support: 4/2010-3/2015. (b)Title: CAREER: Understanding and Controlling the Surface Properties of a Micromoldable Fluid Metal. (c)Summary of Results:Intellectual Merit. This project focuses on three aims that are distinct from this proposal: (1) Quantifyinghow the skin ruptures; (2) Studying thermal evaporation onto the liquid metal due to its low volatility toform metallic skins; and (3) Clarifying what happens at the interface between the metal and the substrateduring injection into microfluidic channels in air.

Development of Human Resources.

Broader Impacts. NSF funding has resulted in 3 patent applications and 23 papers in 5 years in high impactjournals such as PNAS, Advanced Materials, Advanced Functional Materials, Nature Communications,Applied Physics Letters, and Lab on a Chip. The work has produced three patents and two companieshave licensed inventions resulting from this work. The research has been highlighted internationally in100 media outlets including Nature, MIT Technology Review, NY Times, BBC, The Economist, MSNBC,Forbes, Wired, Chemical and Engineering News, Chemical Engineering Progress, and US News & WorldReport. Our group has created YouTube video supplements to our papers that have received 2 million hitsin the past two years (these videos cite NSF support). In the past five years, the PI and graduate studentshave presented relevant work at 30 seminars and 60 conferences / symposia. The PI also created a semi-permanent physical display for the NC State Joyner Visitor center to describe and disseminate this research;this center is visited by thousands of students each year. The PI and graduate students associated with theseprojects have participated in outreach activities (engineering camps for middle and high school students, andother local outreach programs), created a blog (Common Fold), and have worked closely with NC StatesEngineering Place to increase the impact and dissemination of the modules. Dickey has integrated graduate,undergraduate, and high school students into these research efforts to create a research team comprised ofdiverse students. Although the outreach aims of the current proposal build on the outreach program initiatedby the CAREER award, the scientific objectives are completely distinct and only share the common focuson liquid metal.(d) Publications resulting from this NSF award:

13

Michael
Cross-Out
Page 14: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

(e) Other research products: None.(f) Renewed support. This proposal is not for renewed support.

Niekum: (a) NSF award number: IIS-1208497; amount: $499,199; period of support: 10/1/2012 -9/30/2015 (b) Title: NRI-Small: Multiple Task Learning from Unstructured Demonstrations. (c) Summaryof Results:

Intellectual Merit. This project addresses the problem of learning complex, multi-step tasks from natural,unstructured demonstrations. Three main capabilities are identified that are necessary for skill learning andreuse: a parsing mechanism to break task demonstrations into simpler components, the ability to recognizerepeated skills within and across demonstrations, and a mechanism that allows for skill policy improvementfrom practice. Some of these issues have been addressed individually in previous research efforts, but nosystem has jointly addressed these problems in an integrated, principled manner. This project builds oncutting-edge methods from Bayesian nonparametric statistics, reinforcement learning, and control theory,with the goal of creating a deployment-ready learning from demonstration system that transforms the waythat experts and novices alike interact with robots.

Broader Impacts. This project offers a potential bridge to a future generation of cooperative robotsthat would transform the home and workplace. Additionally, a simple robot programming interface willexpedite and extend the range of research robotics. This project also strengthens interdisciplinary ties be-tween computer science and several other disciplines including neuroscience and the study of human cog-nitive development, education, and intelligence. Several ROS packages have already been released as aresult of this work that have impact well beyond the scope of this research alone: a package for learn-ing and generating motions from Dynamic Movement Primitives (http://wiki.ros.org/dmp); a pack-age for integrating Kinect data with AR-tag detection for object recognition and pose estimation (http://wiki.ros.org/ar_track_alvar); and a package for doing performing supervised classification usingSVM and other common approaches (http://wiki.ros.org/ml_classifiers). In particular, we areaware of at least 15 separate groups in academia and industry that are using ar track alvar for robot percep-tion. ROS-independent code has also be released for Bayesian nonparametric segmentation of demonstrationdata, and is currently being used by several other research groups that we are aware of.

Development of Human Resources:

(d) Publications resulting from this NSF award: [116, 114].(e) Other research products: ROS stuff.(f) Renewed support. This proposal is not for renewed support.

4 Proposed Research

I am trying to get the order and content roughly right in this sectionThis section expands on Section 1. Unfortunately, we only have space to describe highlights of the

proposed research.

4.1 Example Tasks [Years 1, 2, and 3]

[Year 1] The first set of benchmark tasks are essentially applying sensory psychophysics to individual sen-sors embedded in a patch of skin. How sensitive is the sensor? What are its spatial and temporal properties?How does an array or grid of sensors respond?

14

Page 15: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

[Years 1 and 2] To evaluate integrated skin systems, we will use the hand, lightweight arm, and humanoidtestbeds to evaluate a skin system’s ability to explore and manipulate rigid and articulated (jointed) objects,and deformable objects such as wire bending, paper folding, screen (2D surface) bending, and working withclay (kneading, sculpting with fingers and tools, and using a potters wheel).

[Years 1 and 2] We will also benchmark the ability of a skin system mounted on the hand, arm, and fullhumanoid to recognize, select, and manipulate objects among a set of objects (find keys in your pocket, forexample).

[Years 1, 2, and 3] Our most difficult set of benchmarks will be mockups of tasks often found in caringfor humans: wiping, combing hair, dressing, moving in bed, lifting, transfer, and changing adult diapers.

4.2 Optimizing Skin Mechanics [Years 2 and 3]

We need a discussion of how to find appropriate skin mechanics, what ridges or fingerprints should we haveetc., and how we make skin work around joints (knuckles, palm joints, elbow, and knee are easy since theyare essentially 1D; 2D joints are harder and require stretch?

We will explore shaping multiple types of soft materials into layers, flexures, creases, expansion joints,etc., as well as embedding multiple types of individual fibers and woven fabrics with different stiffnessesand damping. We will optimize the skin mechanics for manipulation and tactile perception. When the needsof manipulation and tactile perception conflict or are unclear, we will focus on optimizing performance ona set of benchmark tasks. We will explore a relatively thick soft skin, and consider soft tissue surroundinginternal structure (bones) with a relatively human-scale ratio of soft tissue to bone volume, or structures thatare completely soft (no bones/rigid elements). We will explore a wide variety of surface textures includingarbitrary ridge patterns (fingerprints), hairs, posts, pyramids, and cones. These patterns may vary across theskin provide a variety of contact affordances.

4.3 Perception [Years 1, 2, and 3]

A transformation that we hope to lead in robot perception based on skin is the combination of sensing at adistance, contact-based position, movement, and force sensing, and volumetric perception of the interior ofmanipulated soft objects.

We will explore a range of perceptual approaches, including object tracking based on contact types,forces, and distances, feature based object recognition based on features such as texture, stiffness, damping,and plasticity, feature based event recognition based on spatial and temporal multimodal features such as thefrequency content of vibration sensors, and multimodal signature based event recognition.

As an example of a perceptual task, let us consider the screw insertion task shown in Figure 5. Fromthe robot’s overhead perspective, the table leg occludes both the screw and the hole, making verification ofthe insertion difficult. We have conducted a preliminary exploration of using data mining techniques [183]to discover key signatures of success and failure from multimodal data from sources such as microphones,wrist accelerometers, and RGB-D cameras. For example, the co-occurrence of a clicking noise with anaccelerometer spike may reliably indicate success, while either feature on it’s own might be insufficient (anaccelerometer spike could be caused from sliding off the table, or a click from missing the hole and hittingthe table). With these features alone, we have achieved a classification success rate of around 85%, but wesuspect that this will greatly improve with more sensory modalities such as pressure and shear force. Thissame logic applies to more dynamic tasks as well, such as pancake flipping or robot walking, which rely onmaking control decisions based on friction, stickiness, and slippage.

For visualization purposes, we show our approach applied to a simpler task with only one sensorymodality—discriminating between a human eating soup and eating meat from wrist accelerometer datain the X, Y, and Z directions. Figure 6 shows 3 examples of each condition. The highlighted sub-signal in

15

Michael
Sticky Note
Carmel??
Page 16: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 5. Inserting a table leg and screw into a pre-drilled hole.

red shows the most discriminative feature that allows us to separate the data sets and classify new exam-ples, discovered using a simple data mining technique [183]. In the future, we plan to investigate additionaltechniques from data mining and deep learning specifically designed for multimodal data (e.g. [112]) todiscover informative features in multimodal time series data that can inform the robot about task-criticalevents. Additionally, we will draw upon the prior work of co-PI Niekum [117, 115] to investigate the use ofnonparametric Bayesian techniques like the Beta Process Autoregressive Hidden Markov Model to discoverrepeated structure in time series data.

In other recent work (currently under review) [60], co-PI Niekum developed a particle filter based methodto learn about and active reduce uncertainty over articulated relationships between objects. An interac-tive perception algorithm was introduced to combine both perceptual observations and the outcomes of therobot’s actively chosen manipulation actions. However, one major limiting factor in this work was the sen-sory capabilities of the PR2 mobile manipulator that was used in the experiments. Rather than having accessto rich force, pressure, and tactile data, we were forced to base the outcomes of manipulation actions solelyon a measure of motor effort needed to complete the action. Again, we believe that the capabilities of al-gorithms like these will greatly improve with the additional sensory modalities provided by the proposedartificial skin.

4.4 Generating Behavior [Years 1, 2, and 3]

We will explore behavior and control based on explicit object trajectories and force control, policy optimiza-tion, discriminant or predicate based policies, and matching learned sensory templates.

Policy Optimization: We expect most manipulations of deformable objects with soft skin to be difficult tomodel. We propose using policy optimization and learning to control the robot and improve its performanceover time instead of standard model-based optimal control. We will explore learning complex policies forphysically manipulating humans by watching humans execute them or by programming them directly basedon our understanding of these procedures, and then refining those policies using optimization and learning.This section outlines our approach.

We will build on our previous work on learning parametric and non-parametric models [? , ? , ? , ?, ? , ? , ? , ? ]. We will also build on our previous work on learning from demonstration. In our pastwork we used inverse models of the task to map task errors to command corrections [? , ? ]. We havefound that optimization is more effective than trying to track a learned reference movement, especially with

16

Page 17: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 6. Accelerometer data (rows) collected from examples (columns) of eating soup and eating meat. Highlighted sub-signal shown in red is the automatically discovered most discriminative feature that separates the data sets.

non-minimum phase plants, and greatly speeds up this type of learning. We have also implemented directpolicy learning to allow a robot to learn air hockey and a marble maze task from watching a human [? , ?, ? , ? , ? , ? , ? , ? , ? , ? ]. Other prior work on policy learning and optimization and learning includes [?, ? , ? , ? ]. We expect the policy optimization approach described in this proposal to be more efficient andeffective than our previous work, because we have developed very efficient first and second order gradientmethods to do policy optimization [? ].

Previous work on multiple model policy optimization includes [? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? ].Output feedback optimization is also closely related [? , ? , ? , ? , ? ], as is reinforcement learning [? , ? , ?, ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? ].

4.5 Generation 1 Skin [Year 1]

One important goal of perception is estimating contact forces. Average contact forces across a hand can beestimated using six axis wrist force torque sensing, which we will build into our hands. Local force sensingto detect local concentrations of force will be done by measuring skin strain in various directions. We willexplore the use of existing tactile imaging devices, such as Tekscan material, which we currently use, butit only senses compression forces in the normal direction (perpendicular to the skin/sensor) [? ]. We thinkobtaining estimates of local shearing forces (parallel to the skin/sensor) will be important in reducing therisk of skin damage to the patient. We will use a combination of structured light, markers, and multipleimagers to estimate deformation in skin and in soft (inflatable) robot limbs (Figure ??), along the lines of [?, ? , ? , ? , ? , ? ]. This system will provide inexpensive skin that is easy to replace as it wears or thematerial hardens, softens, or deforms (sags) with age. The skin can easily be applied to curved surfaces andwith various thicknesses and stiffnesses. Fiber optics and Fresnel lens and reflectors can be used to move the

17

Page 18: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

imagers and projectors to more convenient locations, and optimize the numbers of imagers and projectors.One issue of concern is the possible weight of the skin material if we cover the entire robot with it. Thisconcern may lead us to only instrument the hands and forearms, or use thinner skin on other robot parts.

We will create and deploy many types of sensors, including embedded accelerometers, gyros, temperaturesensors, vibration sensors, sound sensors, pressure sensors, optical sensors sensing nearby objects, andoptical sensors tracking skin and object velocity and movement. Previous skin and tactile sensing projectstypically focused on one or only a few types of sensors. We will explore adding electrical wiring such asprinting patterns with conductive ink on the surface or between layers of the skin, and using resistance,capacitance, and inductance (or their combination) to measure skin deformation. Printed antennas similarto what are used in wireless RFID anti-shoplifting devices may also be useful placed on the skin surface orembedded in the skin. It may also be possible to embed mechanical elements in the skin that click or raspwhen deformed, and use microphones to track skin deformation.

We will use off the shelf sensors embedded in optical (Near Infrared (NIR)) grade silicone. This skin in-cludes optical sensing of marker movement to measure strain in all three directions (similar to www.optoforce.com).We will use embedded cameras or the same technology optical mice use (essentually using very high framerate cameras with low angle of incidence illumination (Avago ADNS9800, for example)). We will also userange finding sensors to sense objects at up to a 0.5m distance (Sharp GP2Y0E02B, for example). We willalso use embedded IMUs (Invensense MPU-9250, for example. In a 3x3x2mm package we have an ac-celerometer, gyro, magnetometer, and temperature sensor). We will explore using conductive paths printedon and in soft material to provide flexible wiring and additional circuitry needed by these small surfacemount chips. We will use measurement of the gravity vector to determine roll and pitch (rotations per-pendicular to the gravity vector). We will explore imposing a local or global magnetic field (possibly timevarying) to help track yaw orientation (rotation about the gravity vector) and/or position. We will embedpiezoelectric material (such as that from microphones and buzzers (RadioShack 273-066, for example)) tocapture high frequency vibration. We will also embed pressure sensors (Freescale MPL115A2, for exam-ple). [HOWE AND DOLLAR Pressure stuff]. We will embed induction loops to sense electric field, andexplore imposing local or global (possibly time varying) electric fields to resolve orientation. We will gluehairs or whiskers to piezoelectric crystals to provide mechanical sensing at a (short) distance.

4.6 Generation 2 Skin [Years 1 and 2]

“Soft-matter” sensors and circuits will be produced by embedding microfluidic channels of liquid metalalloy in a thin elastic film. This work will build on preliminary efforts by co-PIs Dickey, Majidi, and Park.The dependence of circuit resistance (R), capacitance (C), and inductance (L) on elastic deformation arewell understood and will be examined using principles in solid mechanics and finite elasticity.[? , 100, ?, ? , ? ] In general, we will use the displacement or stress formulation of field equations for an incom-pressible, homogenous, isotropic solid to determine the change in length and cross-sectional geometry ofthe microfluidic channels as a function of external tractions (e.g. surface pressure, friction, tensile loading,...). From the deformed geometry of the microchannels, we will then obtain new estimates for the circuitRCL properties. When possible, we will obtain a closed-form, algebraic approximation of this mappingusing existing solutions for elastic membranes, plates, and shells. We will also make use of approximateenergy methods, such as the Rayleigh-Ritz technique for comparing the potential energy associated with aparameterized set of kinematically-admissible deformations. These theories will inform the design of sens-ing and circuit architectures in which functionality is influenced by electromechanical coupling. Examplesinclude (i) capacitive touch sensors that should change capacitance in response to applied surface pressurebut not tensile loading, (ii) curvature sensors that respond to bending strain but uniform stretch, and (iii)circuit wiring that maintains constant resistance under any loading or elastic deformation.

18

Page 19: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 7. Soft-matter circuits with an anisotropic “Z-tape” conductive elastomer that functions as an electrical via to a sealedEGaIn circuit.[145] (a) Arrowpad with EGaIn electrodes and conductive paper wires on wrist (b) MOSFET transistor interfacedwith Z-tape and liquid GaIn circuit can be stretched and flexed without destroying the liquid metal circuits (c) LED circuitsmanufactured with liquid GaIn and conductive paper wires hooked up onto an Arduino (d) LED lighted up when finger presseddown on GaIn circuit.

Do you still want this text? Seems of value. Hardware integration remains a challenge since the mechani-cal impedance mismatch between rigid pins and soft or fluidic conductors can lead to kinematic incompati-bility and loss of electrical contact.

Did we get all the good parts of this into the next paragraph? To address this, we have begun explor-ing reliable methods for forming mechanically robust electrical connections at fluid-solid interfaces. Oneapproach is to use anisotropically conductive elastomers that function as vias between the terminals of em-bedded EGaIn circuits and the pins of surface-mounted electronics. Presently, this is accomplished with ananisotropic “Z-tape” conductive elastomer.[145] The Z-tape is a commercial acrylic-based elastomer (3MTM

ECATT 9703) that exhibits the following features: (i) as soft as skin (modulus ∼ 1 MPa); (ii) elastic (10×stretchable); (iii) adhesive on both sides (pressure-sensitive bonding); (iv) 50 µm thick; (v) anisotropic con-ductivity (107 S·m through the thickness). It is composed of conductive silver-coated iron microparticlesthat are ferromagnetically arranged to extend through the thickness of the tape.[59]

Recently, researchers in the Soft Machines Lab (PI: Majidi) have developed a fabrication method tosimultaneously pattern insulating elastomers, conductive elastomers, and thin films of EGaIn liquid (4).[97]This versatile technique allows for soft-matter circuits and sensor arrays to be produced in minutes withoutthe need for labor-intensive casting and injection-filling. However, hardware integration remains a challengesince the mechanical impedance mismatch between rigid pins and soft or fluidic conductors can lead to

19

Page 20: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

PDMS EGaIn Terminals

FFC Microcontroller Pin

Z-tape

a  

b  

c  

d   d  

e  

Figure 8. (a) Z-tape is composed of aligned columns of metal microparticles in a soft elastomer. (b) Top-down view ofAg-Fe microparticle columns in 3M ECATT 9703. (c) Application of Z-tape as electrical vias between EGaIn circuit andsurface-mounted electronics. (d) Demonstration with an LED and MOSFET. (e) Tactile sensing demonstration. (f) Proposedfabrication technique based on stencil lithography with a CO2 laser.

kinematic incompatibility and loss of electrical contact. To address this, we have begun exploring reliablemethods for forming mechanically robust electrical connections at fluid-solid interfaces. One approach is touse anisotropically conductive elastomers that function as vias between the terminals of embedded EGaIncircuits and the pins of surface-mounted electronics. Presently, this is accomplished with an anisotropic“Z-tape” conductive elastomer (Fig. 8).[145] The Z-tape is a commercial acrylic-based elastomer (3MTM

ECATT 9703) that exhibits the following features: (i) as soft as skin (modulus ∼ 1 MPa); (ii) elastic (10×stretchable); (iii) adhesive on both sides (pressure-sensitive bonding); (iv) 50 µm thick; (v) anisotropicconductivity (107 S·m through the thickness). It is composed of conductive silver-coated iron microparticlesthat are ferromagnetically arranged to extend through the thickness of the tape.[59]

Anisotropic conductive elastomers, i.e. “Z-tape”, will be used to form direct electrical contact betweenliquid metal alloy (e.g. EGaIn) and surface mounted electronics. Z-tape can also be used to allow forelectrical contact between liquid metal and human skin for tactile sensing and data entry. As shown in Fig.8a, the Z-tape is composed of aligned columns of metal microparticles embedded in a soft elastomer such asPDMS. Percolation between particles within each column allows for conductivity through the thickness butnot in the plane. Fig. 8a shows an optical top-down image of vertically aligned columns of Ag-coated ironmicroparticles in an acrylic-based VHBTM tape (3M). This conductive elastomer (3MTM ECATT 9703) isproduced by suspending the microparticles in uncured acrylic polymer and applying a strong magnetic field(0.1-1 Tesla) in order to align the ferromagnetic Ag-Fe particles into vertical columns. For Generation 2skins, we will use ECATT 9703 and also explore producing our own with Fe nanopowders (Sigma-Aldrich)in PDMS (Sylgard 184; Dow Corning).

Applications of Z-tape to sensing and hardware integration are presented in Figs. 8c-e. Because of itsanisotropic conductivity, they can function as electrical vias between embedded EGaIn circuit terminals andsurface-mounted electronics (Fig. 8c). The latter include 8-bit microcontrollers, transceivers for wirelesscommunication, batteries, and flat flexible cables (FFCs) for connecting to external hardware. Fig. 8d showsan example of a simple EGaIn circuit connected to a surface-mounted LED and MOSFET with ECATT9703. A potential method for tactile sensing is presented in Fig. 8e, in which commercial Z-tape allows fordirect electrical contact between an EGaIn arrowpad and a finger tip for data entry.

One method for producing EGaIn circuits with Z-tape is presented in Fig. 8f. EGaIn or Galinstan isfirst patterned using a stencil lithography method that allows for control of both the height and width of themicrofluidic channels. The channels are patterned in a 254 µm thick film of soft acrylic-based elastomer

20

Michael
Sticky Note
All of this page is really great, but seems like too much detail (relative to the rest of the proposal). Maybe combine down to 1-2 paragraphs (Carmel?)
Page 21: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

adhesive (F-9473PC VHBTM Tape; 3MTM). We begin with a thicker 0.5 mm film of VHB tape (4905VHBTM Tape; 3MTM) and use a CO2 laser (VLS 3.50; Universal Laser Systems) to produce a positive of thecircuit geometry (steps i-iii) on the non-stick backing of the VHB tape (the mask). This defines the base ofthe microchannels. The rest of the mask is removed, leaving behind the positive of the circuit (iii), and the254µm thick layer of VHB tape is applied on top of circuit (step iv).The circuit is engraved onto the secondlayer of VHB with the CO2 laser engraver (step v). The laser cut VHB is peeled off along with the maskto expose channels 254 µm deep for the liquid metal circuits (vi). The mask on the thin layer of VHB tapeacts as a stencil for the deposition of liquid GaIn. To ensure an even, thin channel of EGaIn, it is depositedinto the exposed VHB channels with an Ecoflex-tipped pen (vii). It is important to create a relatively deepchannel for the deposition of the liquid GaIn. This is to avoid smearing the circuit when the Z-tape is appliedand also to ensure sufficient volume of liquid GaIn to maintain good electrical conductivity. After the liquidGaIn is deposited (viii) and the stencil mask is removed, the circuit is sealed in Z-tape (ix). External wiringin the form of conductive fabric tape (3MTM CN-3490) may then be applied directly on top of the Z-tape.

4.6.1 Magnetic Field Sensing

An array of custom force sensors distributed within an elastomeric matrix (silicone rubber) will be devel-oped utilizing a combination of custom flexible circuit fabrication technologies and soft material moldingmethods currently under investigation in WPI Soft Robotics Laboratory (see Figure 9). Force sensors willcomprise a magnet and Hall element pair, whereby strains due to normal loading will move a small magnetand the corresponding magnetic field densities will be measured by the Hall element on a miniature flexiblecircuitry with adjustable sensitivity. The range of forces will be customized based on the initial distancebetween the magnet and the Hall element (currently 5 mm) and the material properties of the elastomericsubstrate (currently Smooth-on Ecoflex 0030 or DragonSkin 10). We will perform static and dynamic char-acterization experiments to determine the sensitivity, range, and response time of individual sensors foroptimized operation. Feasibility experiments on preliminary force sensors depicted in Figure 9 reveal anapproximately linear analog voltage response for static loading conditions.

We propose to study the adjustment of the range and sensitivity of these sensors based on material prop-erties of the silicone rubber formulation, the geometric parameters that define the magnet position and ori-entation, as well as the parameters of signal conditioning electronics on the circuit layer, covering the entiredesign space of this sensing approach. We will employ Ogden hyperelastic material model to describe thelarge deformations exhibited by the elastomeric material under the assumption of rubber incompressibility.The geometric parameters will be studied on a Finite Element package and through iterative experimentationto verify our numerical findings. Electronic effects will be determined in a dynamic model that describesnot only the range and precision, but also the response rates achievable for our measurements.

Figure 9. The proposed force sensors comprise a Hall Effect IC with corre-sponding circuitry on a flexible substrate and a miniature magnet positionedat a precise location with respect to the Hall element, embedded within amolded silicone rubber substrate. Sensors are planned to be manufacturedin a multi-layer mold (top). A preliminary prototype (left) and its correspondingstatic normal loading response (right) are displayed on the bottom row.

The preliminary design shown inFigure 9 employs a 1-D Hall element(Analog Devices AD22151), whichprovides analog voltage outputs and of-fers convenient customization throughexternal resistors and capacitors. Afterwe formalize the fundamental scienceand methodology to develop soft mod-ules that provide normal force feed-back, we will consider the measure-ment of shear force distributions on theskin. We expect that interfacial fric-tional forces reflected by the magnitude

21

Page 22: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

of shear measurements will be crucialfor physical interactions and a lack ofthis measurement would reduce the ef-fectiveness of our proposed robot skin.To enable the measurement of the full3-D force vector in a distributed array,we will utilize Hall elements that mea-sure the 3-D magnetic field (MelexisMLX90333) such that every translationof the magnet due to material strainscan be captured. Since expected shearforces are typically lower than normalforces, we will utilize different levelsof sensitivity for each measurement di-rection and characterize crosstalk. Force sensing modules will be compared with ground-truth load-cellmeasurements to verify accuracy and dynamic response of the sensory system. An array of these sensorswill be developed in a grid and connected over a serial communication network (SPI) such that an embeddedmicrocontroller can receive feedback from a large number of sensors. Moments will be determined usingthe force vectors of neighboring sensors.

As an alternative to liquid metals, microchnnels can be also filled with different conductive liquid, suchas ionic liquids. There two main advantages of using nonmetallic liquid conductors.

First, an ionic liquid can be more biocompatible than liquid metals. Although, the liquid metals we havebeen using, such as EGaIn or galinstan, are considered nontoxic unlike mercury, we do not want to eitherabsorb or make a direct contact with our skin for an extended period of time. However, the robots we areinterested in building with our robotic skin will be mainly assisting our daily activities and are expected tomake many physical interactions with human. Using more biocompatible materials in the robot will increasethe safety of the human users.

The other advantage of ionic liquid is rejection of unnecessary sensor signal fluctuations. Since theskin we are designing will be stretchable and flexible, we also need to make our signal wires with softmaterials. If we use one same liquid metal for both wiring and sensing, it will not be easy to distinguishresistance changes that are from microchannel sensors and soft wires. Using two different liquids for wiringand sensing will address this issue. Since ionic liquids have much higher nominal resistance values thanliquid metals, resistance changes from soft wires that are filled with a liquid metal will have much smallerresistance change than microchannel sensors with an ionic liquid. We have previously made this type ofsoft skin sensors for detecting axial strain [31] and normal pressure sensing based on electrical tomographicimaging [32].

4.7 Generation 3 Skin [Years 1, 2, and 3]

We will explore superhuman sensing. For example, we will create vision systems (eyes) that look outwardfrom the skin for a whole body vision system. We will use optical tracking to estimate slipping and objectvelocity relative to the skin. We will explore embedding ultrasound transducers in the skin to use ultrasoundto image into soft materials that are in contact such as parts of the human body.

We will explore deliberately creating air and liquid (sweat) flows (both inwards and outwards) for bettersensing (measuring variables such as pressure, conductivity, and temperature) and controlling adhesion.We will explore humidifying the air for better airflow sensing, contact management, adhesion control, andultrasound sensing.

We will develop materials to make the skin rugged, and methods to either easily replace or repair damage.

22

Michael
Inserted Text
including the use of self-healing, soft, stretchable wires developed by Dickey
Michael
Sticky Note
Ref: Dickey / Palleau, Advanced Materials, 2013.
Page 23: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

For the third generation of robot skin, we will explore “solid-state” soft-matter circuit and sensing archi-tectures that do not rely on liquid-phase metal alloys such as EGaIn. Three alternative methods that we focuson are gels filled with a percolating network of conductive nano- and microparticles, (ii) patterned graphenemonolayer and thin films of exfoliated graphite. In contrast to EGaIn and cPDMS, the conductive gels andgraphene will be transparent and can be used as a sensor layer over a flexible display. They will be patternedwith Protolaser U3 (LPKF) UV laser micromachining system or through microcontact printing (µCP) usingstamps produced with the Nanoscribe microscale 3D printing system.

The Protolaser U3 is capable of patterning sub-millimeter thin sheets of organic materials, ceramics, andmetal. It has a sufficiently low wavelength (355 nm), high pulse rate (200 kHz), and high power (5W) to drilland cut with 15 µm beam diameter over a 9”×12” cutting area at 0.5 m/s cutting speed. This is accomplishedthrough photochemical ablation and the ability to concentrate high power in a small area at a high pulse ratethat prevents burning and melting in the surrounding material. Sensors will be produced using the samerapid prototyping techniques currently used with a CO2 laser. However, because of its higher wavelength,higher beam diameter, and lower pulse rate, the CO2 laser cannot produce planar features below 100 µm.

Microcontact printing will be performed using stamps produced with either conventional photolithog-raphy (e.g. SU-8) or with the Nanoscribe 3D printing system (Nanoscribe GmbH). The Nanoscribe uses2-photon polymerization to selectively cure photoresist and photopolymers to produce three-dimensionalstructures with sub-micron features. We will use this machine to directly produce stamps in photoresist orUV-curable elastomer. Using µCP or imprint lithography, we will then pattern conductive gels, graphene,and exfoliated graphite into circuits and sensors with 0.1-10 µm planar features. Using the 3D printed struc-ture as a master or mold, we will also explore replica molding with thermoset and thermoplastic elastomersin order to access a broader range of polymer chemistries and µCP wetting properties.

4.7.1 Fiber Optically Sensorized Tactile Skin

Instead of microfluidic channels, soft skin can be sensorized with embedded fiber optic strain sensors, suchas fiber Bragg gratings (FBGs). Any contact or pressure made on the skin will cause structural deformationof the skin and will induce mechanical strains on the FBG areas of the optical fiber. The FBG sensorsare very accurate and provide high resolution. They are also flexible and compact, which make them idealto be embedded in any type of structures. Another very useful advantage is immunity to electromagneticinterference (EMI). In robotic applications where many electromagnetic actuators exist, many electronicsensors show noisy sensor readings, which requires various types of noise filtering and shielding and signalamplification. Our previous robotic skin prototypes showed the feasibility of using FBGs for tactile andforce sensing with high precision control [133, 124, 134]

4.7.2 Soft Optical Waveguide for Pressure and Strain Sensing

Another optical soft sensing method is to use the embedded microchannel for optical pressure sensing.When coated with a reflective material, such as a thin metal layer, the microchannel can be transformedto an optical waveguide. Without any stress on the elastomer, the straight waveguide will transmit opticalsignals like fiber optics. However, if stress or strain is applied, the soft waveguide will mechanically deformand make small cracks in the metallic coating, which causes an optical power loss in the transmission. Byutilizing the light intensity modulation, we will be able to estimate how much pressure and strain is appliedto the sensing structure from the optical power loss. Figure 10 shows our preliminary prototype of opticalsoft sensing material and its result for detecting deformation. Since this method not use any liquid materials,it has a potential to significantly increase the safety and practicality of the device.

23

Michael
Sticky Note
We say three, but only list two.
Page 24: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

Figure 10. Optical soft pressure sensor. (a) Preliminary prototype. (b) Prototype with light source and detector. (c) Stressedprototype showing cracked reflective coating. (d) Preliminary experimental result showing resistance increase in the photodetector with increased curvature.

4.8 Ultrasound sensing

We will explore using ultrasound transducers mounted in the skin of the robot to see into the person beingmanipulated, to track bones, tissue movement, and estimate tissue strain. We can use real time ultrasoundmeasurements to locate bones to guide manipulation. We can also use ultrasound to monitor tissue defor-mation (displacement and strain) to guide forces and detect when a manipulation is failing and should bestopped or adjusted. Using phase-sensitive 2D speckle tracking [? , ? ], tissue deformation can be accuratelymeasured in real time at high spatial (< 1 mm) and temporal (< 10 ms) resolution with a large imaging depthof a few cm at typical clinical ultrasound imaging frequencies (2-10 MHz). Based on the tissue displace-ment information from speckle tracking, a complete set of locally developed strain tensors of the deformedtissue will be generated, indicating the degree of the strains of the tissue in different directions to guidemanipulation. The “globally applied” strain also can be estimated by measuring the distance change of thetissue and skin from the bones. Non-linear analysis of “locally developed” strain and “globally applied”strain will also be applied to determine the criteria at which tissue deforms significantly and passes intothe non-linear region of the soft tissue mechanics [? , ? ], and the robot should change its behavior. Inyear 1, we will do feasibility tests using a commercially available linear array transducer, connected to acommercial ultrasound research platform (Verasonics US Engine, Verasonics Inc, WA, USA), which willbe mounted in the robot skin. In the following year, a set of off-the-shelf single element transducers willbe mounted on the palm and fingers of the robot hand for optimal sensing, imaging and feedback. Basedon the design parameters obtained during Y2, a custom built array transducer system incorporated in theskin will be developed and evaluated in Y3. We expect substantial revision and improvements in our volu-metric sensing in response to how well it does in experimental use, including development of better imageprocessing algorithms, as we go. In principle, the concept of mounting ultrasound transducers in robot skinis very similar to the detection of prostate cancers from an ultrasound probe mounted on a doctor’s fingerin the rectum to measure the elastic properties of nearby tissue [? , ? ]. The integration of an ultrasoundlinear array transducer onto the robot hand in Y1 will also be similar to the approaches of a previous studyof muscle fatigue in the forearms [? ]. Building a custom ultrasound transducer opens the possibility ofdistributing the transducer across the skin, or putting elements of the transducer in support surfaces suchas a bed or chair, and doing transmission ultrasound imaging as well as reflective imaging. Transmissionimaging may give us better images at greater depth.

Preliminary Results: In Figure ??, a commercial ultrasound linear transducer integrated on to a simu-lated robot hand (knee pad) measures local elasticity and contractility of the muscles in the forearm. In the

24

Page 25: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

proposed study, the knee pad will be replaced with an actual robot hand and the full set of strain tensorswill be calculated to determine the criteria for pain and eventual tissue failure due to the applied shear force.It should be noted that ultrasound may conduct poorly without a liquid coupling and there is a potentialchallenge to image the tissues through patient clothes. It was confirmed by our pilot tests that ultrasoundimaging is possible with a minimal spray of water (only enough to moisten the clothes, Figure ??). A mois-turizing device may be necessary as part of the robot hand next to the ultrasound transducers. We will alsoexplore the use of other coupling materials such as very soft silicone or gels.

Initial tests also confirmed the feasibility of obtaining two dimensional strain tensor fields from the ultra-sound data obtained with clothes on the forearm. The strain tensor consists of the axial, lateral and shearstrains. The axial direction corresponds to the direction of propagation of ultrasound while the lateral direc-tion is the one perpendicular to the axial direction. If the measured strain goes beyond a specified thresholdthe robot would be programmed to alter its manipulation procedure. Such real time strain feedback comeswith the possibility of reinforcement learning for the robot.

4.9 Skin Repair

An unavoidable problem with skin is continual wear and occaisional damage. We will explore three methodsto address wear and tear: 1) make the skin easy to replace, by having removable sections that can easily beattached and electrically connected, 2) Making it easy to apply material and replace components from theoutside, potentially using a repair robot and true 3D “Maker” technology (essentially and ink jet print headon the end of a robot), and 3) making an outer layer of the skin continually slough off and be replacedfrom the inside, as in human skin. New material could be provided through channels in the skin, and couldharden based on loss of a solvent, an active hardener such as light or heat, or an A/B epoxy-type mixing ofcomponents.

4.10 Systems Integration and Evaluation

The development and evaluation of perception, reasoning, and control algorithms for our three skin systemswill use a series of test setups:

B.1 Evaluate a patch of skin on the laboratory bench.

B.2 Evaluate the skin on a simple hand.

B.3 Evaluate the skin on our current lightweight arm and hand.

B.4 Evaluate the skin on our Sarcos Primus humanoid (whole body sensing).

5 Broader Impact

[EVERYBODY: This is a place holder. Please send me stuff to include in this section.]Unplanned Broader Impact. Often the broader impacts of our work are serendipitous, and not planned

in advance. Examples of such ad hoc broader impacts from our recent work include: 1) Our technologiesbeing demonstrated on entries in the DARPA Robotics Challenge. 2) A graduate student was a participanton a Discovery Channel TV series, ”The Big Brain Theory: Pure Genius”. One purpose of the TV seriesis getting people excited about engineering. 3) A graduate student helped run a group of all female highschool students in the robot FIRST competition. 4) Our work on soft robotics inspired the soft inflatablerobot Baymax in the Disney movie Big Hero 6 [? ]. We have participated in extensive publicity as a result.An explicit goal of this movie was to support STEAM. We expect to be involved with sequels to Big Hero 6,

25

Page 26: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

and to support Disney’s STEAM effort (which is quite large and well funded). We expect similar unplannedbroader impacts to result from this work, especially based on dramatic videos of agile robots.

Development of Human Resources. Students working on this project will gain experience and expertisein teaching and mentoring by assisting the course students in class projects. Students working on this projectwill also have the opportunity to train their communication and inter-personal skills, as they will activelyparticipate in the dissemination of the research results at conferences and in related K-12 outreach programsof the Robotics Institute.

Participation of Underrepresented Groups. We will attract both undergraduate and graduate students,especially those from underrepresented groups. We will also make use of existing efforts that are part ofongoing efforts in the Robotics Institute, and CMU-wide efforts. These efforts include supporting minorityvisits to CMU, recruiting at various conferences and educational institutions, and providing minority fellow-ships. CMU is fortunate in being successful in attracting an usually high percentage of female undergrad-uates in Computer Science. Our collaboration with the Rehabilitation Science and Technology Departmentof the University of Pittsburgh in the area of assistive robotics is a magnet for students with disabilities andstudents who are attracted by the possibility of working on technology that directly helps people.

Outreach. One form of outreach we have pursued is an aggressive program of visiting students and post-docs. This has been most successful internationally, with visitors from the Delft University of Technology(4 students) [4, 101, 178], the HUBO lab at KAIST (1 student and 1 postdoc) [13, 30, 76], and the ChineseScholarship Council supported 5 students [95, 179]. We welcome new visitors, who are typically paid bytheir home institutions during the visit. We are currently experimenting with the use of Youtube and labnotebooks on the web to make public preliminary results as well as final papers and videos. We have foundthis is a useful way to support internal communication as well as potentially create outside interest in ourwork. We will continue to give lectures to visiting K-12 classes. The Carnegie Mellon Robotics Institute al-ready has an aggressive outreach program at the K-12 level, and we will participate in that program, as wellas other CMU programs such as Andrew’s Leap (a CMU a summer enrichment program for high school stu-dents), the SAMS program (Summer Academy for Mathematics + Science: a summer program for diversityaimed at high schoolers), Creative Tech Night for Girls, and other minority outreach programs.

Co-PI Dickey has a strong record of dedication to outreach activities, recruitment of diverse students, andmentorship of high school, exchange, and undergraduate students.

In six years, the co-PI has mentored more than thirty undergraduate researchers including six REU sup-ported undergraduate researchers. The PI has helped twelve of those students receive undergraduate researchfellowships. Eight undergraduates have been authors (including four first authors) on papers from our group.One undergraduate received recognition as the best poster at the 2012 AIChE National Meeting and anotheras the 3rd best poster at the 2011 Meeting. The PI has also mentored twelve high school students (one ofwhom is now enrolled at NC State and pursuing a Chemical Engineering degree, and approximately half ofwhom are female) and six exchange students from China (all of whom are pursuing graduate degrees). Werecruit students from local Apex High School through a program called Academy of Information Technol-ogy. The PI is committed to continue supporting undergraduate research and undergraduates will work onthis project. In addition, the PI has mentored 12 female students in the past six years. Also, as a graduatestudent, the PI was a big brother to a 9 year old African American boy; that experience has inspired the PIto participate in as many outreach opportunities as time permits.

The research will be integrated with an outreach module that the PI developed during the past three years.The module discusses the liquid metal utilized in this proposal and has been given to science camps for bothmiddle school and high school students. Fig. 13 is a photograph of the PI presenting to middle school agedchildren about liquid metal. Hundreds of students have seen this material directly from the PI or graduatestudents. The PI will continue to work with the Engineering Place to refine the module and present it atleast twice annually through their camp. The camp - geared toward middle school children - reaches out toa diverse group of kids who become energized toward science via their participation.

26

Page 27: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

The module will also be presented at NC States NanoDays, an open house for all ages that focuses onemerging technologies being developed at NC State. This event attracts 2,000 people every year and offersan opportunity to get feedback on the module from a diverse audience (parents / kids). The PI participatedin this activity in 2013 and 2014 and plans to continue participation every year. The PI also volunteers forpanel discussions on science that are open to the public at the NC State library.

The context for the module is the popular movie Terminator 2, which features a material resembling aliquid metal that flows and morphs into human-like characters (Fig. 13). The liquid metal central to thisproposal is similar in the sense that it can flow and be molded. We introduce the module by showing aclip from the movie that is appropriate for all ages (see YouTube: Shape Reconfigurable Liquid Metal).The module provides an opportunity to highlight the unique assets of the liquid metal and the more generalconcepts of surface tension, wetting, and rheology. We will include a demonstration of the liquid metalspreading. The appeal of the program is that it draws on popular culture in a highly visual manner to whichall ages can relate.

The module also introduces the field of flexible electronics and demonstrates the utility of the liquidfor forming highly stretchable electronics through a prototype microfluidic antenna (Fig. 2). Exposing thestudents to the potential applications of flexible electronics (e.g., electronic paper, textiles, solar cells) shouldbe inspirational and illustrate the role of creativity in engineering. The module also offers the opportunity todiscuss other unusual fluids (e.g., ketchup, magnetorheological fluids, cosmetics). These familiar fluids willhelp the students connect what they learn with real-life applications. The PI will continue to enlist graduateand/or undergraduate students to assist with these outreach activities.

Dissemination Plan. For a more complete description of our dissemination plan, see our Data Man-agement Plan. We will maintain a public website to freely share our simulations and control code, and todocument research progress with video material. We will present our work at conferences and publish it injournals, and use these vehicles to advertise our work to potential collaborators in science and industry.

Technology Transfer. Our research results and algorithms are being used in Disney Research and throughthis technology transfer path will eventually be used in entertainment and education applications, and will beavailable to and inspire the public. Three former students work at Boston Dynamics/Google transferring ourwork to industrial applications, several students have done internships at Disney Research, and two formerpostdocs work there full time.

Benefits to Society. The proposed research has the potential to lead to more useful robots. A successfuloutcome can enable practical controllers for robust locomotion of legged robots in uncertain environmentswith applications in disaster response and elderly care. The outcomes of this project may also providenew insights into human sensorimotor control, in particular, into how humans adapt locomotion behaviors.Understanding how humans actually control their limbs has the potential to trigger neural and robotic pros-theses and exoskeletons which restore legged mobility to people who have damaged or lost sensorimotorfunctions of their legs due to diabetic neuropathy, stroke, or spinal cord injury as well as improve fall riskmanagement in older adults [? , ? , ? , ? ].

Enhancement of Infrastructure for Research and Education. The project will help to maintain CMU’sATRIAS and SARCOS robot testbeds. Both robots are powerful tools for research and education in dynam-ics and control of legged locomotion. These robots regularly attract the interest of students and inspire themto advanced education and training in robotics.

Relationship to DRC funding. We are currently part of one of the funded teams in the DARPA RoboticsChallenge [? ]. This support will end in the summer of 2015. The DRC focuses on reliability, implementa-tion issues, and logistics. We are not able to develop the proposed ideas in the time frame of the DRC, solonger term NSF funding complements our soon to end DRC funding.

27

Page 28: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

6 Curriculum Development Activities

Graduate Education: PI Onal designed and teaches Special Topics on Soft Robotics, a unique graduatelevel course, first offered last Spring. This project-based course is open to undergraduate seniors as welland received popular demand among both student populations majoring in Mechanical Engineering andRobotics Engineering, beyond our expectations. The significant relevance of the proposed research to thiscourse provides an opportunity to both introduce advanced methods to quantitatively study safe and adaptiveinteractions with human users, and also integrate mini-projects in the curriculum, which is expected toinduce interested students getting further involved with the research group. Two of the five projects in thefirst offering of this class are currently being expanded by the original groups, to be turned into conferenceand journal paper submissions. Consequently, three unpaid students joined our lab to continue research insoft robotics.Undergraduate Education: The Major Qualifying Project (MQP) is the capstone design experience atWPI, which requires student research and development effort, typically divided over the whole senior year.MQPs establish qualification in the student’s major, many times with industrial sponsors, off-campus re-search organizations, or ongoing faculty research and interests. Students usually form teams to tackle multi-disciplinary problems with faculty advisors. PI Onal is actively involved with the projects program at WPI.Last year, he advised four capstone project teams involving mechanical engineering, robotics engineering,electrical and computer engineering, and computer science majors in the general area of safe human-robotinteraction utilizing compliance, to design, characterize, build, and validate bio-inspired robotic systems,including a soft hydraulic exo-musculature system, and a semi-autonomous anthropomorphic hand prosthe-sis that requires minimal human supervision. These efforts led to multiple awards, including a first-placewin at the national Cornell Cup, first- and second-place wins among Robotics Engineering MQPs, and theprestigious Edward C. Perry Award for Outstanding Projects in Mechanical Design. Building on this initialsuccess, we propose to cultivate a tradition of undergraduate project involvement with the proposed researchby incorporating MQP activities on modeling, characterization, and analysis of the proposed sensory sys-tems.

Curriculum Development. We will pursue multiple directions for dissemination. First, we will de-velop course material on robot control and biologically inspired approaches to humanoid and rehabilitationrobotics which will directly be influenced by the planned activities of this proposal. The PIs currently teachseveral courses that will benefit from this material. The first course, 16-642: Manipulation, Mobility & Con-trol, is part of the Robotics Institute’s recently established professional Master’s degree program that aimsat training future leaders in the workforce of robotics and intelligent automation enterprises and agencies.Two other courses, 16-868: Biomechanics and Motor Control and 16-745: Dynamic Optimization, directlyaddress the research areas in which this proposal is embedded. We also teach a course designed to attractundergraduates into the field, 16-264: Humanoids. All of these courses emphasize learning from interactionwith real robots. We will make these course materials freely available on the web.

28

Page 29: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

7 Still Not Discussed

Forgot to mention - the goal of ASSIST is to make wearable devices

(for humans) that sense biological cues / signal (EKG, biomarkers,

etc) in a self-powered mode. I suppose it is complementary to the

efforts here, but the motivation is quite different.

The ASSIST initiative is a perfect fit. Let’s try to incorporate that

as part of the longer term agenda. In the meantime, we can propose

creating anisotropic conductive elastomers as vias to connect the

soft(microfluidic?) sensors and circuits with rigid

microelectronics. Whatever we propose, it must be untethered,

self-powered (with a battery or vibrational energy harvesting) and

have wireless connectivity. I’ll send a few paragraphs later tonight

on what I have in mind.

FYI - I am a member of the NSF ERC center at NC State called "ASSIST",

whose goal is to make wearable electronics (think Nike FitBit on

steroids). The work you are proposing is different, but just

something to keep in mind that might be leveraged (?) or at least

mentioned. The ASSIST folks are focused on energy harvesting and

sensing of biological health cues.

http://assist.ncsu.edu/

29

Michael
Sticky Note
No necessary to include any of this, except maybe 1 sentence in broader impact. "Dickey is a member of the NSF ERC 'ASSIST' center and will help us connect our efforts to the broader area of wearable skin sensors."
Page 30: Robot Skin - Carnegie Mellon School of Computer Sciencecga/tmp-public/mddickey3.pdf · outward from the skin for a whole body vision system. We will use optical tracking to estimate

8 Collaboration Plan

2 page collaboration plan supplementary document Where appropriate, the Collaboration Plan might in-clude: 1) the specific roles of the project participants in all organizations involved; 2) information on howthe project will be managed across all the investigators, institutions, and/or disciplines; 3) identificationof the specific coordination mechanisms that will enable cross-investigator, cross-institution, and/or cross-discipline scientific integration (e.g., yearly workshops, graduate student exchange, project meetings atconferences, use of the grid for videoconferences, software repositories, etc.), and 4) specific references tothe budget line items that support collaboration and coordination mechanisms. If a Large proposal, or aMedium proposal with more than one investigator, does not include a Collaboration Plan of up to 2 pages,that proposal will be returned without review. should make a convincing case that the collaborative con-tributions of the project team will be greater than the sum of each of their individual contributions. Largeprojects will typically integrate research from various areas, either within a cluster or across clusters, Ratio-nale must be provided to explain why a budget of this size is required to carry out the proposed work. Sincethe success of collaborative research efforts are known to depend on thoughtful coordination mechanismsthat regularly bring together the various participants of the project, a Collaboration Plan is required for allLarge proposals. Up to 2 pages are allowed for Collaboration Plans. The length of and degree of detailprovided in the Collaboration Plan should be commensurate with the complexity of the proposed project.

PI: Chris Atkeson (CMU). Co-PIs: Scott Niekum (CMU), Carmel Majidi (CMU), Yong-Lae Park (CMU), Michael Dickey (NCSU), and Cagdas Onal (WPI). Lead Institution: Carnegie Mellon University.

1. Specific roles of the PIs: Our team brings together experts in soft materials and sensors with experts in robot perception, reasoning, control, and learning.

Atkeson (CMU), PI, is an expert in robot perception, reasoning, control, and learning. He has extensive experience with robot arms and full-body humanoids, as well as robot learning. Atkeson will lead the work on software algorithms, including perception, reasoning, control, and learning. Atkeson also has extensive experience with soft robots, having recently developed a soft robot that was the inspiration for Baymax in the movie Big Hero 6.

Niekum (CMU), co-PI and postdoctoral fellow, will lead efforts in building integrated manipulation and perception systems that leverage the multimodal sensing capabilities of the prototype skin. This work will include signal processing and sensor integration, probabilistic modeling of sensor data, and development of machine learning and control algorithms for perceptual classification, action selection, and plan execution. For the past five years, Niekum's research has focused on learning exploitable structure from robot percepts and human demonstration data. Examples include learning high-level task descriptions from demonstration data and using interactive perception to learn about and reduce uncertainty over articulated relationships between objects.
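
To make the kind of multimodal fusion described above concrete, here is a minimal sketch of one plausible pipeline shape: windowed statistics computed over heterogeneous skin channels and fed to an off-the-shelf classifier. The channel count, window length, feature set, and choice of a random-forest classifier are illustrative assumptions for this sketch, not the proposed design.

```python
# Sketch only: multimodal contact-event classification from windowed
# skin-sensor data. All dimensions and sensor semantics are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(window):
    """Summarize one time window of raw samples.

    window: (T, C) array of T samples over C channels (e.g., pressure
    taxels, accelerometer axes, microphone). Returns per-channel stats.
    """
    return np.concatenate([
        window.mean(axis=0),                          # static load
        window.std(axis=0),                           # vibration energy
        np.abs(np.diff(window, axis=0)).max(axis=0),  # transients (slip, tap)
    ])

rng = np.random.default_rng(0)
# Synthetic stand-in for logged data: 200 windows of 50 samples x 8 channels,
# labeled with 3 hypothetical contact classes (press, slip, tap).
X_raw = rng.normal(size=(200, 50, 8))
y = rng.integers(0, 3, size=200)

X = np.stack([window_features(w) for w in X_raw])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```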

Majidi (CMU), co-PI, will lead efforts to design sensors and circuit elements with pre-determined electromechanical coupling between elastic deformation and changes in electrical resistance, capacitance, and inductance. Such designs will be informed by theoretical models based on solutions to field equations in nonlinear (finite) elasticity. When possible, a Rayleigh-Ritz method will be used to obtain approximate algebraic expressions for the mapping between deformation (tensile strain, compressive strain, bending curvature, shear) and electrical response. Otherwise, Majidi and his students will make use of commercial FEA software. In addition to design and modeling, they will also assist in materials selection, fabrication, and integration of soft-matter electronics with rigid hardware. This includes work on mechanically robust interfaces between liquid and solid conductors.
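
As one concrete instance of the closed-form electromechanical models mentioned above, consider the standard idealization of a liquid-metal microchannel strain gauge in an incompressible elastomer: under uniaxial stretch the channel length grows by a factor of (1 + ε) while its cross-sectional area shrinks by 1/(1 + ε), so R = ρL/A yields R(ε) = R0 (1 + ε)^2. The snippet below simply evaluates this relation; it is a back-of-envelope sketch of the modeling target, not the team's actual model.

```python
# Idealized liquid-metal channel: incompressible, uniform uniaxial stretch.
# R = rho * L / A with L -> L0*(1 + eps) and A -> A0/(1 + eps) gives
# R(eps) = R0 * (1 + eps)**2.
def resistance(eps, r0=1.0):
    """Channel resistance at tensile strain eps, with unstrained resistance r0."""
    return r0 * (1.0 + eps) ** 2

for eps in (0.0, 0.25, 0.5, 1.0):
    print(f"strain {eps:4.2f}: R = {resistance(eps):.2f} x R0")
```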

Park (CMU), co-PI, will lead efforts in system integration of skin and robot systems. He will also develop a sensor-network-embedded skin and work with the co-PIs on development of various soft robotic integral embodiments and specific elements, including hydraulic, cable-driven, granular-media-jamming, and pneumatic/inflatable elements. For the past five years, Park's research has focused on exo-musculature actuation and sensing modalities. Among other projects, he has worked on pneumatically actuated (McKibben-type) soft muscle tendons for the foot and lower leg for use in treating gait pathologies associated with neuromuscular disorders. He also worked on a thin cylindrical sleeve with pneumatically actuated muscles based on miniaturized arrays of chambers and embedded Kevlar fiber reinforcement. Finally, he developed thin soft artificial skin with embedded arrays of various sensing modalities (strain, pressure, shear forces, etc.) for use with the aforementioned actuation modalities.

Dickey (NCSU), co-PI, is a chemical engineer with expertise in patterning and actuating soft materials, including polymers, gels, and liquid metals, for applications that include soft and stretchable electronics, sensors, and wearable devices. A student in his group is also studying sensors for the human skin through the NSF ERC ASSIST project. He has expertise in microfabrication, materials characterization, and soft matter. Dickey will work with the team to help develop novel soft-material strategies to enable next-generation robot skin.

Onal (WPI), co-PI, has a track record in theoretical modeling, design, fabrication, actuation, sensing, and control solutions for soft-bodied robotic systems. Onal will be responsible for the research thrust on next-generation distributed 6-D force and moment measurement using an array of magnet and Hall-element pairs embedded in silicone rubber. He will work closely with PI Atkeson on the integration of the proposed sensors within the robotic sensory skin. For the past five years, Onal has been an active contributor to the emergence and development of soft robotics through a series of research projects on every aspect of robots comprising pressure-operated elastomeric materials, including power generation, valving, dynamic modeling, sensing, control, and algorithmic studies.
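
As a rough illustration of the final aggregation step in such a sensor (a simplification that assumes per-taxel 3-D contact forces have already been recovered from calibrated magnet/Hall-element readings, which is the hard part), the net 6-D wrench on a skin patch is the sum of the taxel forces plus the moments r × f about the patch origin:

```python
# Sketch: aggregate per-taxel force estimates into a net 6-D wrench.
# Assumes forces were already decoded from Hall-sensor voltages.
import numpy as np

def net_wrench(positions, forces):
    """positions, forces: (N, 3) arrays of taxel locations (m) and
    estimated contact forces (N). Returns (net force, net moment)."""
    f_net = forces.sum(axis=0)
    m_net = np.cross(positions, forces).sum(axis=0)
    return f_net, m_net

# Two taxels with equal and opposite tangential forces -> pure moment.
pos = np.array([[0.01, 0.0, 0.0], [-0.01, 0.0, 0.0]])
frc = np.array([[0.0, 1.0, 0.0], [0.0, -1.0, 0.0]])
print(net_wrench(pos, frc))  # force ~ [0 0 0], moment ~ [0 0 0.02] N*m
```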

The students and the remaining postdoc will interact with all of the involved faculty. We have found that combining postdocs with several faculty and multiple students provides a very productive mix of expertise. One postdoc, Niekum, will coordinate development of algorithms, testbeds, and systems. The other postdoc will take the lead in coordinating work on skin mechanics and skin sensing. What the students work on will vary through the duration of the project. All students will develop new theory and algorithms as well as work with actual implementations. The PIs will jointly supervise, guide, and provide assistance as needed, as well as focus on theoretical and algorithm development themselves. All members of the team will work together to evaluate each other's results.

What makes this team more than the sum of the individuals is that it brings together experts in soft materials and sensors with experts in robot perception, reasoning, control, and learning. Atkeson and Niekum have extensive prior experience in robot control and learning. Atkeson has expertise in optimal control, with prior work in human motor psychophysics and modeling human behavior. In terms of project management, Atkeson has prior experience as PI of an NSF IGERT program, and was a thrust leader for Mobility and Manipulation in our NSF Quality of Life Engineering Research Center. Atkeson is currently a co-PI on a DARPA project with colleagues at WPI, so he is experienced with long-distance collaboration.

Management across institutions and disciplines; specific coordination mechanisms: In addition to frequent ad hoc meetings and interactions, the personnel involved in this work will participate in weekly project meetings to review research results and discuss future directions. Participants at WPI and NCSU will join via videoconferencing software (Skype, Google Hangouts, Blue Jeans, ...). Budget line items that support these coordination mechanisms: faculty, postdoc, and student support will provide coverage for the time necessary to interact, and multiple trips are budgeted for all participants to visit each other for extended periods. Governance: the co-PIs will make decisions by consensus; the PI is the final decision maker if consensus is not reached.

2. Management across institutions and disciplines: Copies of the experimental setups (except those involving robots costing more than $50,000) will be made available at all research sites. Robots that are at one site will be able to be teleoperated remotely to facilitate experiments and data collection by all participants. Project data generated by each member of the team will be made accessible to other team members. Team members will collaborate tightly, and their research activities will be coordinated by co-PI Park. The team will report on research findings via conference presentations and proceedings, journal papers, and direct communication with appropriate NSF IIS personnel. The team will maintain a project website with a project description, a list of team members, and a list of publications, acknowledging the sponsoring agency.

3. Specific coordination mechanisms: Specific features of the project to ensure coordination of the work will be:

• Regular meetings of the research team to plan experiments, analyze results, and write manuscripts will involve all members of the group, including the undergraduates, who will be mentored especially in aspects of the project that are not part of their major.

• The shared drive will be kept up to date and well documented. It will include research plans, data, detailed descriptions of that data, an electronic library of indexed relevant publications, and drafts of new manuscripts.

• Graduate and undergraduate students will keep the project web site up to date so that all members of the team can review progress.

• Writing of peer-reviewed manuscripts and preparation for conference presentations will be reviewed by the entire group.

4. Budget line items that support these coordination mechanisms: The budget includes several items that specifically support collaboration and coordination among the team. The main use of the travel funds is to participate in collaboration visits and conferences in the US. Preparation for these meetings and conferences, and the writing of manuscripts for publication, will be used to ensure collaboration and coordination. Participation in these activities will ensure that the team works together and stays focused on the timely completion of the project.


The Research Experiences for Undergraduates (REU) Sites and Supplements solicitation (NSF 13-542) gives instructions for embedding a request for an REU Supplement in a proposal. Proposers are invited to embed a request for an REU Supplement in the typical amount for one year only, according to normal CISE guidelines (detailed below). REU stipend support is one way to retain talented students in undergraduate education while providing meaningful research experiences. The participation of students from groups underrepresented in computing (underrepresented minorities, women, and persons with disabilities) is strongly encouraged. CISE REU supplemental funding requests must describe results of any previous such support, including students supported, papers published, etc. Other factors influencing supplemental funding decisions include the number of REU requests submitted by any one principal investigator across all of her/his CISE grants. Investigators are encouraged to refer to the REU program solicitation (NSF 13-542) for further details.


All:

List of project personnel as well as a list of collaborators:

Mary Smith; XYZ University; PI

John Jones; University of PQR; Senior Personnel

Jane Brown; XYZ University; Postdoc

Bob Adams; ABC Community College; Paid Consultant

Susan White; DEF Corporation; Unpaid Collaborator

Tim Green; ZZZ University; Subawardee

A list of any past and present collaborators (related or not to this proposal):

Collaborators for Mary Smith; XYZ University; PI

Helen Gupta; ABC University

John Jones; University of PQR

Fred Gonzales; DEF Corporation

Susan White; DEF Corporation

Collaborators for John Jones; University of PQR; Senior Personnel

Tim Green; ZZZ University

Ping Chang, ZZZ University

Mary Smith; XYZ University

Collaborators for Jane Brown; XYZ University; Postdoc

Fred Gonzales; DEF Corporation

Collaborators for Bob Adams; ABC Community College; Paid Consultant

None

Collaborators for Susan White; DEF Corporation; Unpaid Collaborator

Mary Smith; XYZ University

Harry Nguyen; Welldone Institution

Collaborators for Tim Green; ZZZ University; Subawardee

John Jones; University of PQR

