ROBOTICS AND NEURAL NETWORKS.pdf

Date post: 02-Jun-2018
  • 8/11/2019 ROBOTICS AND NEURAL NETWORKS.pdf


    ROBOTICS AND NEURAL NETWORKS

    A SEMINAR REPORT

    Submitted in partial fulfillment of the requirements

    For the degree of

    Bachelor of Technology

    By

    SARVESH SINGH(Roll No. U09EE537)

    Under the Supervision of

    Mrs. KHYATI MISTRY

    ELECTRICAL ENGINEERING DEPARTMENT

SARDAR VALLABHBHAI NATIONAL INSTITUTE OF TECHNOLOGY

    Surat-395 007, Gujarat, INDIA.


    ABSTRACT

This report gives an introduction to robotics: why robotics is important, where robots are needed, and how smart robots can be built.

It also discusses artificial intelligence using neural networks.

Starting with the definition and application areas, the working and fundamental characteristics of robots are discussed. Under working, we describe the fundamental steps involved in the operation of a robot, i.e.

1) Perceiving its environment through sensors.

2) Thinking about the reaction.

3) Acting upon that environment through effectors.


TABLE OF CONTENTS

1) What are robots?
2) Why do we need robots?
3) How does a robot work?
   3.1) Perceiving its environment through sensors
        3.1.1) Properties of environment
        3.1.2) Robotic sensing
               1) Proprioceptors
               2) Exteroceptors
                  2.1) Contact sensors
                  2.2) Proximity sensors
                  2.3) Far away sensing
   3.2) Thinking about the reaction
        3.2.1) Look up table
        3.2.2) Simple reflex programs
        3.2.3) Program that keeps track of the world
        3.2.4) Goal based programs
        3.2.5) Utility based programs
        3.2.a) Neural networks
        3.2.b) Neural network construction
        3.2.c) A feed forward network
   3.3) Acting upon the environment through the effectors
        3.3.1) Effectors/Actuators
        3.3.2) Types of actuators
        3.3.3) Kinematics
        3.3.4) Basic joints


LIST OF FIGURES

1. A robotically assisted surgical system used for prostatectomies, cardiac valve repair and gynecologic surgical procedures
2. NASA robots on Mars
3. Articulated welding robots used in a factory
4. Gladiator unmanned ground vehicle
5. Network with one layer
6. A feed forward network
7. A manipulator
8. A revolute joint
9. A prismatic joint
10. A spherical joint


    ROBOTICS AND NEURAL NETWORKS

1) WHAT ARE ROBOTS?

The Robot Institute of America defines a robot as a programmable, multifunction manipulator designed to move material, parts, tools, or specific devices through variable programmed motions for the performance of a variety of tasks.

A robot is simply an active, artificial agent whose environment is the physical world.

2) WHY DO WE NEED ROBOTS?

Outer Space- Manipulative arms that are controlled by a human are used to unload the docking bay of space shuttles, to launch satellites, or to construct a space station.

The Intelligent Home- Automated systems can now monitor home security, environmental conditions and energy usage. Doors and windows can be opened automatically, and appliances such as lighting and air conditioning can be pre-programmed to activate. This assists occupants irrespective of their state of mobility.

Exploration- Robots can visit environments that are harmful to humans, for example monitoring the environment inside a volcano or exploring our deepest oceans. NASA has used robotic probes for planetary exploration since the early sixties.

Military Robots- Airborne robot drones are used for surveillance in today's modern army. In the future, automated aircraft and vehicles could be used to carry fuel and ammunition or clear minefields.

Farms- Automated harvesters can cut and gather crops. Robotic dairies are available, allowing operators to feed and milk their cows remotely.

The Car Industry- Robotic arms that are able to perform multiple tasks are used in the car manufacturing process. They perform tasks such as welding, cutting, lifting, sorting and bending. Similar applications, but on a smaller scale, are now being planned for the food processing industry, in particular the trimming, cutting and processing of various meats such as fish, lamb and beef.

Hospitals- Under development is a robotic suit that will enable nurses to lift patients without damaging their backs: scientists in Japan have developed a power-assisted suit which gives nurses the extra muscle they need to lift their patients and avoid back injuries.


    Fig.1 A robotically assisted surgical system used for prostatectomies, cardiac valve repair and gynecologic

    surgical procedures

Fig.2 NASA robots on Mars


Fig.3 Articulated welding robots used in a factory

Fig.4 Gladiator unmanned ground vehicle


    3) HOW DOES A ROBOT WORK??

    Working of a robot can be viewed in the following three steps:-

    1) Perceiving its environment through sensors.

    2) Thinking about the reaction.

3) Acting upon that environment through effectors.

All three stages involve a lot of detailed study. Here we'll talk about each stage in detail, one by one, after a brief introduction to each stage.

1) PERCEIVING ITS ENVIRONMENT THROUGH SENSORS:-

Like any living body, robots need some kind of stimulus to react to. Robots are programmed to do certain things and to react in the manner the programmer has specified.

Robots have sensors that receive a stimulus from the environment and react according to that stimulus. The sensors act as the robot's input: the robot thinks upon that input and reacts according to its thinking.

    2) THINKING ABOUT THE REACTION :-

Now, when the robot has received the stimulus from its sensors, it has the responsibility to act according to the knowledge provided by the programmer. There are many ways a robot can think and act accordingly. Here we will briefly describe how a robot can act as a smart robot. We say a smart robot is one that can learn things on its own and act; there are many ways of programming a robot to make it smart.

One way to make a robot smart is to use a neural network. Basically, a neural network is a way of programming in which we give the program an opportunity to learn. In this programming style we imitate the way a human brain works, using the concept of neurons and the way each neuron is linked with other neurons.

    3) ACTING UPON THE ENVIRONMENT THROUGH THE EFFECTORS

Robots are basically made up of joints, links and end-effectors.

A robot has some sort of rigid body, with rigid links that can move about. Links meet each other at joints, which allow motion. For example, on a human the upper arm and forearm are links, and the shoulder and elbow are joints.


    An effector is any device that affects the environment, under the control of the

    robot. To have an impact on the physical world, an effector must be equipped with an

    actuator that converts software commands into physical motion. The actuators themselves

    are typically electric motors or hydraulic or pneumatic cylinders.

    Acting through the effectors requires the study of

1) kinematics of the robot.
2) dynamics of the robot.
3) force control.
4) motion planning (related to the intelligence of the robot).
5) motion control.

    DETAILED DESCRIPTION

3.1) PERCEIVING ITS ENVIRONMENT THROUGH SENSORS

As the robot interacts with the environment, and it interacts through sensors, we need to understand both the environment and the sensors.

3.1.1) Properties of environments

1) Accessible vs. inaccessible

    If an agent's sensory apparatus gives it access to the complete state of the environment, then we

    say that the environment is accessible to that agent. An environment is effectively accessible if the

sensors detect all aspects that are relevant to the choice of action. An accessible environment is convenient because the agent need not maintain any internal state to keep track of the world.

2) Deterministic vs. nondeterministic

If the next state of the environment is completely determined by the current state and the actions selected by the agent, then we say the environment is deterministic.

3) Episodic vs. nonepisodic

In an episodic environment, the agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.

4) Static vs. dynamic

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.

  • 8/11/2019 ROBOTICS AND NEURAL NETWORKS.pdf

    11/21

    6 | P a g e

    5)Discrete vs. continuous

If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete. For example, chess is discrete: there are a fixed number of possible moves on each turn. Taxi driving is continuous: the speed and location of the taxi and the other vehicles sweep through a range of continuous values.

3.1.2) Robotic sensing

Since the robot's action capability involves physically interacting with the environment, two types of sensors have to be used in any robotic system:

1) proprioceptors, for the measurement of the robot's (internal) parameters;

2) exteroceptors, for the measurement of its environmental (external, from the robot's point of view) parameters.

    Data from multiple sensors may be further fused into a common representational format

    (world model). Finally, at the perception level, the world model is analyzed to infer the system and

environment state, and to assess the consequences of the robotic system's actions.

1. Proprioceptors

A robot consists of a series of links interconnected by joints. Each joint is driven by an actuator which can change the relative position of the two links connected by that joint. Proprioceptors are sensors measuring both kinematic and dynamic parameters of the robot.

    The usual kinematics parameters are the joint positions, velocities, and accelerations.

Dynamic parameters such as forces, torques and inertia are also important to monitor for the proper control of the robotic manipulators.

    Kinematic parameters:-

    Joint position sensors: They are usually mounted on the motor shaft.

Encoders are digital position transducers, which are the most convenient for computer interfacing. Incremental encoders are relative-position transducers which generate a number of pulses proportional to the traveled rotation angle. They give the relative position of the arm, so after a power failure the reading becomes invalid, because the record of the relative position has been lost.

Absolute shaft encoders are attractive for joint control applications because their position is recovered immediately and they do not accumulate errors as incremental encoders may do.
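The difference can be sketched in code. A minimal sketch, assuming a hypothetical resolution of 1024 counts per revolution (the figure and function names are illustrative, not from the report):

```python
COUNTS_PER_REV = 1024  # assumed encoder resolution, for illustration

def incremental_angle(pulse_count, home_angle_deg):
    # Incremental encoder: the angle is only meaningful relative to a
    # known home position; if power is lost, home_angle_deg is lost too.
    return home_angle_deg + 360.0 * pulse_count / COUNTS_PER_REV

def absolute_angle(code_word):
    # Absolute encoder: each shaft position has a unique code word,
    # so the angle is recovered immediately, with no homing step.
    return 360.0 * code_word / COUNTS_PER_REV

print(incremental_angle(256, 0.0))  # 90.0 (valid only if homing was done)
print(absolute_angle(256))          # 90.0 (valid even after a power failure)
```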

Angular velocity sensors: angular velocity is measured (when not calculated by differentiating joint positions) by tachometer transducers. A tachometer generates a DC voltage proportional to the shaft's rotational speed.

    Digital tachometers using magnetic pickup sensors are replacing traditional, DC

    motor-like tachometers which are too bulky for robotic applications.


Acceleration sensors: these are based on Newton's second law. They actually measure the force which produces the acceleration of a known mass. Different types of acceleration transducers are stress-strain gage, piezoelectric, capacitive and inductive.

    2. Exteroceptors

Exteroceptors can be classified according to their range as follows:
- contact sensors
- proximity (near to) sensors
- far away sensors

2.1) Contact sensors

Contact sensors are used to detect the positive contact between two mating parts and/or to measure the interaction forces and torques which appear while the robot manipulator conducts part mating operations.

    Force/Torque Sensors

    The interaction forces and torques which appear, during mechanical assembly operations, at the

robot hand level can be measured by sensors mounted on the joints or on the manipulator wrist. The joint-mounted solution is not too attractive, since it needs a conversion of the measured joint torques to equivalent forces and torques at the hand level. The forces and torques measured by a wrist sensor can be converted quite directly to the hand level. Wrist sensors are sensitive, small, compact and not too heavy, which recommends them for force-controlled robotic applications.

A wrist force/torque sensor has a radial three- or four-beam mechanical structure. Two strain gages are mounted on each deflection beam. Using a differential wiring of the strain gages, the four-beam sensor produces eight signals proportional to the force components normal to the gage planes.

Tactile Sensing

Tactile sensing is defined as the continuous sensing of variable contact forces over an area within which there is a spatial resolution. Tactile sensors mounted on the fingers of the hand allow the

    robot to measure contact force profile and slippage, or to grope and identify object shape.

    The best known of tactile sensor technologies are: conductive elastomer, strain gage,

    piezoelectronic, capacitive and optoelectronic. These technologies can be further grouped by their

    operating principles in two categories: force-sensitive and displacement-sensitive. The force-

    sensitive sensors (conductive elastomer, strain gage and piezoelectric) measure the contact forces,

    while the displacement-sensitive (optoelectronic and capacitive) sensors measure the mechanical

    deformation of an elastic overlay.

Tactile sensing is the result of a complex exploratory perception act with two distinct modes. First, passive sensing, which is produced by the cutaneous sensory network, provides information about contact force, contact geometric profile and temperature. Second, active

    sensing integrates the cutaneous sensory information with kinesthetic sensory information (the

    limb/joint positions and velocities).


2.2) Proximity sensors

    Proximity sensors detect objects which are near but without touching them. These sensors are

used for near-field (object approaching or avoidance) robotic operations. Proximity sensors are classified according to their operating principle: inductive, Hall effect, capacitive, ultrasonic and optical.

Inductive sensors are based on the change of inductance due to the presence of metallic objects. Hall effect sensors are based on the relation which exists between the voltage in a semiconductor material and the magnetic field across that material. Inductive and Hall effect

    sensors detect only the proximity of ferromagnetic objects. Capacitive sensors are potentially

    capable of detecting the proximity of any type of solid or liquid materials. Ultrasonic and optical

    sensors are based on the modification of an emitted signal by objects that are in their proximity.

2.3) Far away sensing

Two types of far away sensors are used in robotics: range sensors and vision.

    a) Range sensors

Range sensors measure the distance to objects in their operation area. They are used for robot navigation, obstacle avoidance, or to recover the third dimension for monocular vision. Range sensors are based on one of two principles: time-of-flight and triangulation.

    Time-of-flight sensors estimate the range by measuring the time elapsed between the

    transmission and return of a pulse. Laser range finders and sonar are the best known sensors of

    this type.
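The time-of-flight principle reduces to a single formula: the pulse travels out and back, so the range is half the propagation speed times the elapsed time. A minimal sketch (the speeds are standard physical constants; the timing values are invented for illustration):

```python
SPEED_OF_LIGHT = 3.0e8   # m/s, approximate, for a laser range finder
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature, for sonar

def tof_range(elapsed_s, speed):
    # The pulse covers the distance twice (out and back), hence the /2.
    return speed * elapsed_s / 2.0

print(round(tof_range(200e-9, SPEED_OF_LIGHT), 3))  # laser echo after 200 ns
print(round(tof_range(10e-3, SPEED_OF_SOUND), 3))   # sonar echo after 10 ms
```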

    Triangulation sensors measure range by detecting a given point

    on the object surface from two different points of view at a known distance from each other.

    Knowing this distance and the two view angles from the respective points to the aimed surface

    point, a simple geometrical operation yields the range.
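That geometrical operation can be made concrete with the law of sines. A sketch under the assumption that both view angles are measured from the baseline joining the two viewpoints (the baseline length and angles are illustrative):

```python
import math

def triangulation_range(baseline_m, alpha_rad, beta_rad):
    # Law of sines on the triangle (viewpoint A, viewpoint B, target):
    # perpendicular distance from the baseline to the target is
    # d * sin(alpha) * sin(beta) / sin(alpha + beta).
    return (baseline_m * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# Two sensors 0.5 m apart, each seeing the target at 60 degrees:
print(round(triangulation_range(0.5, math.radians(60), math.radians(60)), 3))  # 0.433
```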

b) Vision

Robot vision is a complex sensing process. It involves extracting, characterizing and interpreting information from images in order to identify or describe objects in the environment. A vision sensor

    (camera) converts the visual information to electrical signals which are then sampled and quantized

    by a special computer interface electronics yielding a digital image

    The digital image produced by a vision sensor is a mere numerical array which has to be

    further processed till an explicit and meaningful description of the visualized objects finally results.

Digital image processing comprises several steps: preprocessing, segmentation, description, recognition and interpretation. Preprocessing techniques usually deal with noise reduction and detail enhancement. Segmentation algorithms, like edge detection or region growing, are used to extract the objects from the scene. These objects are then described by measuring some (preferably invariant) features of interest. Recognition is an

    operation which classifies the objects in the feature space. Interpretation is the operation that

    assigns a meaning to the ensemble of recognized objects.


3.2) THINKING ABOUT THE REACTION

Now that we have seen how the stimulus is received, we'll see how the robot reacts to it. A robot controller can have a multi-level hierarchical architecture:

1. Artificial intelligence level, where the program will accept a command such as "Pick up the bearing" and decompose it into a sequence of lower-level commands based on a strategic model of the task.

2. Control mode level, where the motions of the system are modelled, including the dynamic interactions between the different mechanisms, trajectories planned, and grasp points selected. From this model a control strategy is formulated, and control commands issued to the next lower level.

3. Servo system level, where actuators control the mechanism parameters using feedback of internal sensory data, and paths are modified on the basis of external sensory data. Failure detection and correction mechanisms are also implemented at this level.

In this section we are basically going to talk about the artificial intelligence level of control. There are plenty of ways to program a robot to work. Here we'll discuss the programming styles and talk about smart robots: robots which have their own thinking ability.

    LOOK UP TABLES

This is the simplest way of programming. It is not suited to cases where we have lots of data. In this method we simply form a table in which each input refers to an output. When an input arrives, the program logic looks up the corresponding output in the table.

This method is quite simple, but with it we cannot achieve artificial intelligence, and it has many more demerits. We can still use it when only a small table is needed.

e.g. The visual input from a single camera comes in at the rate of 50 megabytes per second (25 frames per second, 1000 x 1000 pixels with 8 bits of color and 8 bits of intensity information). So the lookup table for an hour of input would need on the order of 2^(60 x 60 x 50M) entries, which is why this method is not suitable.
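The idea can be sketched as a small table-driven agent; the percepts and actions below are invented for illustration:

```python
# Table-driven control: every percept we ever expect must be enumerated
# in advance, which is why the method only scales to very small tables.
lookup_table = {
    "car-in-front-braking": "apply-brake",
    "road-clear": "drive",
    "obstacle-ahead": "swerve",
}

def table_driven_agent(percept):
    # A percept missing from the table leaves the robot with no answer.
    return lookup_table.get(percept, "no-action-defined")

print(table_driven_agent("car-in-front-braking"))  # apply-brake
print(table_driven_agent("fog-ahead"))             # no-action-defined
```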

    SIMPLE REFLEX PROGRAMS

Such programs simply use if-else logic: they make easy decisions based on the input, reacting with yes-or-no logic. The program does not have its own thinking; it just reacts, like a small kid who has only just learnt the words "yes" and "no".

e.g. Suppose we have a robot driver, i.e. a robot driving a car. When it sees some condition, such as the red brake light of the car in front, it simply applies the brake.


    if car-in-front-is-braking then initiate-braking

    general programming style for this case:-

function SIMPLE-REFLEX-PROGRAM(percept) returns action
    static: rules, a set of condition-action rules
    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action
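The pseudocode can be translated into a runnable sketch; the rule set and percept format are illustrative assumptions, not from the report:

```python
# Illustrative Python version of the simple reflex program: a fixed list
# of condition-action rules, with no memory of past percepts.
rules = [
    (lambda p: p.get("car_in_front_is_braking"), "initiate-braking"),
    (lambda p: p.get("red_light"), "stop"),
]

def simple_reflex_program(percept):
    for condition, action in rules:
        if condition(percept):  # the first matching rule fires
            return action
    return "do-nothing"

print(simple_reflex_program({"car_in_front_is_braking": True}))  # initiate-braking
print(simple_reflex_program({}))                                 # do-nothing
```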


GOAL BASED PROGRAMS

e.g. Suppose the car is parked somewhere and our robot driver has to decide where to go. It can make an intelligent decision only if it knows the goal; otherwise it'll do whatever it was programmed to do after the start, i.e. if it was programmed to turn right at the start, then it'll turn right, no matter whether turning right pushes it away from the goal or towards the goal.

Now, reaching the goal from a particular position still has many possible ways. The question arises: how will our robot make an intelligent decision to select the best way? The answer is search algorithms and planning algorithms.

In a search algorithm our robot thinks of all possible ways and maximizes the parameters that are concerned with the performance of the system. Our driver robot would select the shortest path, or maybe consider more parameters such as fuel efficiency, since the shortest path may have hurdles due to which the car cannot be driven fast.
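The search idea can be sketched with breadth-first search over a toy road map; the map and place names are invented, and "best" here simply means fewest road segments:

```python
from collections import deque

# Toy road map: each place lists the places reachable from it.
roads = {
    "parking": ["market", "highway"],
    "market": ["goal"],
    "highway": ["suburb"],
    "suburb": ["goal"],
}

def shortest_route(start, goal):
    # Breadth-first search: routes are explored in order of length, so
    # the first route that reaches the goal has the fewest segments.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(shortest_route("parking", "goal"))  # ['parking', 'market', 'goal']
```

A planning algorithm would go further and also reason about the actions needed along the way; BFS is just the simplest instance of the "consider all possible ways" idea.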

    UTILITY BASED PROGRAMMING

In this style of programming we associate a degree of happiness, i.e. a utility, with the task after it has been accomplished. Here we also consider whether the task was performed in the best possible way or not, and this makes a robot smarter.

NEURAL NETWORKS

Logically, neural networks are not much different from the methods discussed above, but the programming style is quite different. What we do here is logically the same, and we remain concerned with everything stated above; the only difference is that we imitate our own brain's networks, making our programming easier, more effective and able to learn.

    The human brain uses a web of interconnected processing elements called neurons to

    process information. Each neuron is autonomous and independent; it does its work

    asynchronously, that is, without any synchronization to other events taking place.


    A neural network is a computational structure inspired by the study of biological neural processing.

    There are many different types of neural networks, from relatively simple to very complex, just as

    there are many theories on how biological neural processing works.

    A layered feed-forward neural network has layers, or subgroups of processing elements. A

    layer of processing elements makes independent computations on data that it receives and passes

    the results to another layer. The next layer may in turn make its independent computations and

    pass on the results to yet another layer. Finally, a subgroup of one or more processing elements

    determines the output from the network. Each processing element makes its computation based

upon a weighted sum of its inputs. The first layer is the input layer and the last is the output layer. The layers that are placed between the first and the last layers are the hidden layers. The

    processing elements are seen as units that are similar to the neurons in a human brain, and hence,

    they are referred to as cells, neuromimes, or artificial neurons. A threshold function is sometimes

    used to qualify the output of a neuron in the output layer. Even though our subject matter deals

    with artificial neurons, we will simply refer to them as neurons. Synapses between neurons are

    referred to as connections, which are represented by edges of a directed graph in which the nodes

    are the artificial neurons.


2) ENCODING refers to the paradigm used for the determination and changing of weights on the connections between neurons. In the case of the multilayer feed-forward neural network, you

    initially can define weights by randomization. Subsequently, in the process of training, you can use

    the backpropagation algorithm, which is a means of updating weights starting from the output

backwards. When you have finished training the multilayer feed-forward neural network, you are finished with encoding, since weights do not change after training is completed.
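The weight-update step at the heart of this training can be sketched for a single linear neuron. This is the delta rule, a simplified relative of full backpropagation; the initial weights, learning rate and training example are assumed values:

```python
# Delta-rule sketch: each weight moves a small step in the direction
# that reduces the output error, repeated until the error is tiny.
w = [0.3444, 0.2580]   # arbitrary initial weights
eta = 0.1              # assumed learning rate

def train_step(x, target):
    global w
    y = sum(wi * xi for wi, xi in zip(w, x))             # neuron output
    error = target - y
    w = [wi + eta * error * xi for wi, xi in zip(w, x)]  # update weights
    return error

errors = [abs(train_step([1.0, 0.5], 0.8)) for _ in range(50)]
print(errors[0] > errors[-1])  # True: training shrinks the error
```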

3) RECALL refers to getting an expected output for a given input. If the same input as before is presented to the network, the same corresponding output as before should result. The type of

    recall can characterize the network as being autoassociative or heteroassociative.

    Autoassociation is the phenomenon of associating an input vector with itself as the output,

whereas heteroassociation is that of recalling a related vector given an input vector. Suppose you have a fuzzy remembrance of a phone number. Luckily, you stored it in an autoassociative neural network.

    When you apply the fuzzy remembrance, you retrieve the actual phone number. This is a use of

    autoassociation.

    A FEED FORWARD NETWORK

A sample feed-forward network, as shown in Fig.6, has five neurons arranged in three layers: two neurons (labeled x1 and x2) in layer 1, two neurons (labeled x3 and x4) in layer 2, and one

    neuron (labeled x5) in layer 3. There are arrows connecting the neurons together. This is the

direction of information flow. A feed-forward network has information flowing forward only. Each arrow that connects neurons has a weight associated with it (like w13, for example). You calculate the state, x, of each neuron by summing the weighted values that flow into the neuron. The state of the neuron is the output value of the neuron and remains the same until the neuron receives new information on its inputs.

Fig.6 A feed forward network

    For example, for x3 and x5:

x3 = w13·x1 + w23·x2
x5 = w35·x3 + w45·x4
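These equations can be checked numerically. In the sketch below the weight values and inputs are invented for illustration, and a w14/w24 pair feeding x4 is assumed by symmetry with x3, since only the x3 and x5 equations are listed above:

```python
# Weights into each non-input neuron (illustrative values):
w13, w23 = 0.5, -0.2   # layer 1 -> neuron 3
w14, w24 = 0.3, 0.4    # layer 1 -> neuron 4 (assumed wiring)
w35, w45 = 0.8, 0.1    # layer 2 -> neuron 5

x1, x2 = 1.0, 2.0      # input layer

x3 = w13 * x1 + w23 * x2   # = 0.1
x4 = w14 * x1 + w24 * x2   # = 1.1
x5 = w35 * x3 + w45 * x4   # output neuron = 0.19
print(round(x5, 2))        # 0.19
```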

    We present information to this network at the leftmost nodes (layer 1) called the input layer. We

    take information from any other layer in the network, but in most cases do so from the rightmost

node(s), which make up the output layer. Weights are usually determined by a supervised training algorithm, where you present examples to the network and adjust the weights appropriately to achieve a desired response. Once you have completed training, you can use the network without changing the weights, and note the response for inputs that you apply. Note that a detail not yet shown is a nonlinear scaling function that limits the range of the weighted sum. This scaling function has the effect of clipping very large values in positive and negative directions for each neuron, so that the cumulative summing that occurs across the network stays within reasonable bounds. Typical real-number ranges for neuron inputs and outputs are -1 to +1 or 0 to +1. Now let us contrast this


neural network with a completely different type of neural network, the Hopfield network, and present some simple applications for the Hopfield network.

3.3) ACTING UPON THE ENVIRONMENT THROUGH THE EFFECTORS

    Effectors / Actuators

Effector: the component of a robot that has an effect on the environment. Actuator: the mechanism that enables the effector to execute some work (active ~ passive).

Effector : Actuator :: Hand : Muscles (tendons)

    Types of Actuators

Electric Motor: electrical to mechanical energy

    Hydraulics: fluid pressure (large, dangerous, needs good packing)

    Pneumatic: air pressure, very powerful

    Photo reactive/ chemical reactive/ thermal/ piezoelectric

Kinematics

    Manipulator (links + joints)

    Kinematic chain (series of kinematic pairs)

    Forward kinematics vs Inverse kinematics

Fig.7 A manipulator
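Forward kinematics can be illustrated for the simplest case, a planar manipulator with two revolute joints: given the joint angles, compute where the end-effector is. The link lengths and angles below are invented for illustration (inverse kinematics, recovering joint angles from a desired position, is the harder direction):

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    # Position of the end-effector: each link contributes a vector whose
    # direction is the accumulated joint angle along the chain.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both links 1 m long, both joint angles zero: the arm points straight out.
x, y = forward_kinematics(1.0, 1.0, 0.0, 0.0)
print(x, y)  # 2.0 0.0
```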


    Other basic joints

Revolute Joint
1 DOF (rotary) (Variable - θ)

Fig.8 A revolute joint

Prismatic Joint
1 DOF (linear) (Variable - d)

Fig.9 A prismatic joint

Spherical Joint
3 DOF (Variables - θ1, θ2, θ3)

Fig.10 A spherical joint


    Actuator Types

    Electrical

    Hydraulic

    Pneumatic

    Others

Electrical actuators are the best of all:

    easy to control

    from mW to MW

    normally high velocities 1000 - 10000 rpm

    several types

    accurate servo control

    ideal torque for driving

    excellent efficiency

however, an autonomous power system is difficult

CONCLUSIONS

With the help of neural networks we can build smart robots that can help us in many ways: in homes, industries, hospitals, astronomy, the military, etc. We are making robots learn things on their own.

Robots are very good friends to us, so we need to know about our friend and work on making it much smarter and much faster. Studying AI and robotics is very interesting and very useful too.

REFERENCES

C++ Neural Networks and Fuzzy Logic by Valluru B. Rao
Artificial Intelligence (A Modern Approach) by Stuart J. Russell and Peter Norvig
CEG 4392 Computer Systems Design Project
http://www.melbpc.org.au/pcupdate/2205/2205article10.htm
http://en.wikipedia.org/wiki/Military_robot
http://en.wikipedia.org/wiki/Robotic_surgery
http://en.wikipedia.org/wiki/Welding_robot
http://cdn.theatlantic.com/static/mt/assets/science/EvolutionofRovers.jpg
