Chapter 1
INTRODUCTION
1.1 Overview
Mouse Interaction Using Hand Gesture is software that allows users to give mouse inputs
to a system without using an actual mouse. In a broader sense it could also be considered
hardware, because it uses an ordinary web camera. A virtual mouse can usually be operated with
multiple input devices, which may include an actual mouse or a computer keyboard. Mouse
Interaction Using Hand Gesture works with a web camera and different image
processing techniques. Here, the hand movements of a user are mapped into mouse inputs. A
web camera is set to capture images continuously. The user must hold an object of a particular
color so that it is visible in the images taken by the web camera. This color is detected
from the image pixels, and the pixel position is mapped into a mouse input. Depending upon the
size of the image taken by the camera, various scaling techniques are used, because a pixel
position in the image does not correspond directly to the screen resolution.
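The scaling step described above can be sketched as follows. This is a minimal illustration, not the project's actual code; the class name, method names and resolutions are illustrative assumptions.

```csharp
using System;

// Hypothetical helper sketching the scaling step: a pixel position in the
// camera frame is mapped linearly onto screen coordinates, since the frame
// size rarely matches the screen resolution.
public static class CoordinateMapper
{
    public static int ScaleX(int pixelX, int frameWidth, int screenWidth)
    {
        return pixelX * screenWidth / frameWidth;
    }

    public static int ScaleY(int pixelY, int frameHeight, int screenHeight)
    {
        return pixelY * screenHeight / frameHeight;
    }

    public static void Main()
    {
        // The centre of a 640x480 frame maps to the centre of a 1920x1080 screen.
        Console.WriteLine(ScaleX(320, 640, 1920));   // prints 960
        Console.WriteLine(ScaleY(240, 480, 1080));   // prints 540
    }
}
```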
1.2 Problem statement
The existing system uses a physical device for mouse functionalities. The problem here is
to develop a way for humans to interact with a computer without any physical
connection to it. Many ideas were put forward, but they all required physical
movement of hardware. The final decision was therefore to develop a virtual mouse that uses
simple and cheap image processing techniques.
1.3 Languages and Software Tools Used
Microsoft Visual Studio 2010 Express Edition is used as the integrated development
environment. The C# language is used for programming, together with image processing techniques.
1.3.1 C# Language:
C# is an elegant and type-safe object-oriented language that enables developers to build a
wide range of secure and robust applications that run on the .NET Framework. C# syntax is
highly expressive, yet it is also simple and easy to learn. Developers who know similar
C-style languages, such as C, C++ or Java, are typically able to begin working productively
in C# within a very short time. C# syntax simplifies many of the complexities of C++ while
providing powerful features such as nullable value types, enumerations, delegates, anonymous
methods and direct memory access, which are not found in Java. As an object-oriented
language, C# supports the concepts of encapsulation, inheritance and polymorphism. All
variables and methods, including the Main method, the application's entry point, are
encapsulated within class definitions. A class may inherit directly from one parent class,
but it may implement any number of interfaces.
1.3.2 Microsoft Visual Studio 2010:
Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It
is used to develop console and graphical user interface applications along with Windows
Forms applications, web sites, web applications, and web services in both native code
together with managed code for all platforms supported by Microsoft Windows, Windows
Mobile, Windows CE, .NET Framework, .NET Compact Framework and Microsoft
Silverlight.
Visual Studio supports different programming languages by means of language services,
which allow the code editor and debugger to support (to varying degrees) nearly any
programming language, provided a language-specific service exists. Built-in languages
include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and
F# (as of Visual Studio 2010[4]). Support for other languages such as M, Python, and Ruby
among others is available via language services installed separately. It also supports
XML/XSLT, HTML/XHTML, JavaScript and CSS. Individual language-specific versions of
Visual Studio also exist which provide more limited language services to the user: Microsoft
Visual Basic, Visual J#, Visual C#, and Visual C++.
Chapter 2
FEASIBILITY STUDY
2.1 General
A feasibility study is a very important aspect of building software. It answers the question of
why a developer should build the software. Without planning, the software may not
perform or function as intended; a feasibility study makes sure the program created is
needed and will be appealing to the intended users. The decision to implement any new
project or program must be based on a thorough analysis of the current operation. In addition,
the impact of the implementation of the proposed project/program on the future operation of
“MOUSE INTERACTIONS USING HAND GESTURES” must be evaluated.
The input to the feasibility study is an outline description of the system and how it will be
used within an organization. The result of the feasibility study should be a report that
recommends whether or not it is worth carrying on with the requirements engineering and
system development process. When the information is available, the feasibility study report is
prepared. This should make a recommendation about whether or not system development
should continue. It may propose changes to the scope, budget and schedule of the system and
suggest further high-level requirements. A feasibility study should be relatively
cheap and quick.
2.2 Literature review
The existing system uses a physical device for mouse functionalities; the purpose of the
project is to create a virtual mouse that works with the help of a web camera. In this project a
camera continuously takes images of the hand movements of a user, which are then mapped into
mouse inputs. This means that we can give inputs to the computer without any physical
connection to it and without any hardware movement. In total it makes
life easier for the user, increasing comfort and improving usability.
2.3 Existing system
Normally, whenever software is developed it is either a totally new product or a further
development of some existing product; for an improvement to occur in a product, that product
must already exist. There can be drawbacks or missing features in the original
product. Hence, the new product under development aims to overcome these
drawbacks and add more features to the existing one.
In the existing system the mouse functionalities are implemented using a hardware device
such as an optical mouse or track-pad. Due to the limitations of these physical devices the
usable command set is also limited, and the user depends on this hardware for the
implementation of mouse functions.
2.4 Proposed System
Our project “Mouse Interaction Using Hand Gesture” is an application to overcome the
drawbacks of the existing scenario. Foregoing the traditional mouse setup to interact with a
computer, strong gesture recognition could allow users to accomplish frequent or common
tasks by making hand gestures to a camera. Most gesture-based interfaces researched so far
have needed a mouse or stylus pen as a gesture input device. This interface enables a user to
specify commands and additional parameters by drawing single intuitive gestures with a
finger. We believe that the idea of making gestures with a human finger appeals not only to
experienced users but also to novice users who are usually not accustomed to
using computer devices.
2.5 System Feasibility
The feasibility study is carried out to determine whether the proposed system can be
developed with the available resources. Feasibility study is a test of system proposed
according to its workability, impact on the organization, ability to meet the user’s needs and
effective use of the resources.
It focuses on four major areas:
Operational feasibility
Technical feasibility
Economical feasibility
Schedule feasibility
2.5.1 Operational Feasibility
Operational feasibility [4] is a measure of how well a proposed system solves the problems
and takes advantage of the opportunities identified during scope definition, and how well it
satisfies the requirements identified in the requirements analysis phase of system
development.
The application provides excellent mouse interaction as expected, and the system runs
without any problems. The application provides user-friendly mouse operations that can be
performed with ease. The application is easy to modify and easy to operate. Even a
novice user is able to work with it.
2.5.2 Technical Feasibility
Technical feasibility is the step of feasibility analysis that is concerned with determining
whether it is possible to code the project using the language selected.
During the technical analysis, the implementation of the designed system requires specific
additional software. The language selected for coding is Visual C#, and the development
software is Visual Studio 2010. Visual Studio 2010 is a toolkit to develop Windows-based
applications. It supports a variety of languages for software development. The .NET
framework is a completely new model for building systems on the Windows family of
operating systems.
Features of .NET and Visual Studio
Full interoperability with existing Win32 code: .NET allows invoking raw C-based
functions from managed code [1].
Complete and total language interaction [1]: Unlike classic COM, .NET supports cross-
language debugging. A common runtime engine is shared by all .NET-aware
languages.
Multithreading [1]: .NET provides multithreading, which allows the software to run
modules in parallel.
Direct access to Windows APIs: .NET provides direct access to Windows APIs,
including DLL (Dynamic Link Library) functions [1].
Visual Studio provides GUI design tools for designing the software. This simplifies the
coding overhead to an extent, helps to reduce software development time
and provides a more user-friendly system.
2.5.3 Economic Feasibility
It is economically feasible to implement the project within the available budget. Our software
can be easily deployed on the existing infrastructure, requiring only a webcam. Moreover,
maintenance is easy, and even an average person can easily maintain and configure the
system. Therefore the cost incurred on training etc. may be avoided.
2.6 Summary
Feasibility analysis is a process of validating whether the software to be developed is possible
with the current hardware and software technologies. In this stage, various factors such as
time constraints and budgetary constraints are considered. We also check whether the
project to be developed will be cost-effective and can be developed within the given
budgetary and time constraints.
Chapter 3
SYSTEM ANALYSIS
3.1 General
System analysis is a detailed study of various operations performed by a system and their
relationships within and outside the system. During analysis, data are collected on the
available files, decision points and transactions handled by the present system. Interviews,
on-site observation and questionnaires are the tools used for system analysis.
There are two major activities in this phase [4]:
Requirement analysis
Requirement specification
3.2 Requirement Analysis
The requirement analysis process involves deriving the system requirements through
observation of existing systems, discussions with potential users, task analysis and so on.
The implementation of the software is of least importance to the end user, who interacts
with the software through the user interface. Thus, any software to be
designed should have a good user interface that is relatively easy to learn and use.
In our project “MOUSE INTERACTIONS USING HAND GESTURES”, requirement
analysis led to the need to provide improved, better and more comfortable user interaction
with the system using hand gestures. To implement this, a webcam was necessary, thereby
providing all mouse functionalities in a more comfortable way.
3.3 Software Requirement Specification
A software requirements specification (SRS) is a complete description of the behavior of a
system to be developed. Requirement specification adds more information to requirement
definition. After the completion of analysis phase, requirements must be written or specified.
This also provides a reference for validation of the final product.
3.3.1 Functional requirements
A functional specifications document is created after the software requirements specification
document. It provides more detail on information originally described in the software
requirements specification document. The functional specification document defines what the
functionality will be, but not how that functionality will be implemented.
The various Functional requirements of the system can be summarized as follows:
The system will provide a good user interface through which the user can interact
with the system. The virtual mouse enables the user to give mouse inputs. [4]
Interactions are done using any colored objects.
The color should be distinguishable from that of the environment.
3.3.2 Non-Functional Requirements
Non-functional requirements describe constraints imposed on the software and restrictions
on the freedom of designers.
The requirements here are:
The system handling by the user should be easy.
System should give a precise and optimum response.
3.4 System Requirements
3.4.1 Hardware Requirements
Processor: Pentium 4
Memory: 1 GB RAM
Speed: 1 GHz
Webcam: 1.3 megapixel
3.4.2 Software Requirements
Framework: C# and the .NET platform
IDE: Visual Studio 2010
Operating systems: Windows XP, Windows 7
C# Language
The C# language [1] is intended to be a simple, modern, general-purpose, object-oriented
programming language. C# has a unified type system, called the
Common Type System (CTS) [4].
A unified type system implies that all types, including primitives such as integers, are
subclasses of the System.Object class. For example, every type inherits a ToString() method.
For performance reasons, primitive types (and value types in general) are internally allocated
on the stack.
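As a minimal illustration of the unified type system (not part of the project's code), the snippet below shows a primitive value type being treated as a System.Object and inheriting ToString():

```csharp
using System;

class UnifiedTypeDemo
{
    static void Main()
    {
        int n = 42;              // a primitive value type, allocated on the stack
        object boxed = n;        // treated as System.Object (the value is boxed)
        string s = n.ToString(); // ToString() is inherited from System.Object

        Console.WriteLine(boxed);     // prints 42
        Console.WriteLine(s == "42"); // prints True
    }
}
```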
Features of C#
C# (pronounced “C sharp”) is the native language for the .NET Common Language Runtime.
It has been designed to fit seamlessly into the .NET Common Language Runtime. Code can
also be written in Visual C++ or Visual Basic, but in most cases C# will likely fit the
programmer’s needs [1][4].
Component Software
The .NET Common Language Runtime is a component-based environment, and it should
come as no surprise that C# is designed to make component creation easier. It’s a
“component-centric” language, in that all objects are written as components, and the
component is the center of the action.
Component concepts, such as properties, methods, and events, are first-class citizens of the
language and of the underlying runtime environment. Declarative information (known as
attributes) can be applied to components to convey design-time and runtime information
about the component to other parts of the system. Documentation can be written inside the
component and exported to XML. C# objects don’t require header files, IDL files, or type
libraries to be created or used. Components created in C# are fully self-describing and can be
used without a registration process. C# is aided in the creation of components by the .NET
Runtime and Frameworks, which provide a unified type system in which everything can be
treated as an object, but without the performance penalty associated with pure object systems,
such as Smalltalk.
The Execution Environment
The .NET Runtime is much more than just an execution engine. The environment provides a
simpler programming model, safety and security, powerful tool support, and help with
deployment, packaging, and other tasks.
Safety and Security
The .NET Runtime environment is designed to be safe and secure. The .NET
Runtime is a managed environment, which means that the Runtime manages memory for the
programmer: instead of the programmer handling memory allocation and de-allocation, the
garbage collector does it. Not only does garbage collection reduce the number of things to
remember when programming; in a server environment it can drastically reduce the number
of memory leaks. This makes high-availability systems much easier to develop.
Additionally, the .NET Runtime is a verified environment. At runtime, the environment
verifies that the executing code is type-safe. This can catch errors, such as passing the wrong
type to a function, and attacks, such as trying to read beyond allocated boundaries or
executing code at an arbitrary location.
The security system interacts with the verifier to ensure that code does only what it is
permitted to do. The security requirements for a specific piece of code can be expressed in a
finely grained manner; code can, for example, specify that it needs to be able to write a
scratch file, and that requirement will be checked at execution time.
Exception Handling
Exception handling is deeply ingrained in the .NET Runtime and is therefore very common
in C# code; it also allows error-handling code to be written more cleanly.
Trying and Catching
To deal with exceptions, code needs to be organized a bit differently. The sections of code
that might throw exceptions are placed in a try block, and the code to handle exceptions in the
try block is placed in a catch block.
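A minimal sketch of this organization follows; the particular exception and messages are illustrative, not taken from the project:

```csharp
using System;

class TryCatchDemo
{
    static void Main()
    {
        try
        {
            int zero = 0;
            int q = 10 / zero;   // throws DivideByZeroException at runtime
            Console.WriteLine(q);
        }
        catch (DivideByZeroException ex)
        {
            // Control transfers here; the program continues instead of terminating.
            Console.WriteLine("Caught: " + ex.Message);
        }
    }
}
```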
Visual Studio 2010
Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It
can be used to develop console and graphical user interface applications along with Windows
Forms applications, web sites, web applications, and web services in both native code
together with managed code for all platforms supported by Microsoft Windows, Windows
Mobile, Windows CE, .NET Framework, .NET Compact Framework and Microsoft
Silverlight. Visual Studio includes a code editor supporting IntelliSense as well as code
refactoring. The integrated debugger works both as a source-level debugger and a machine-
level debugger. Other built-in tools include a forms designer for building GUI applications,
web designer, class designer, and database schema designer [1].
Visual Studio 2010 comes with .NET framework 4.0 and supports developing applications
targeting Windows 7. It supports IBM DB2 and Oracle databases, in addition to Microsoft
SQL Server. It has integrated support for developing Microsoft Silverlight applications,
including an interactive designer. Visual Studio 2010 offers several tools to make parallel
programming simpler: in addition to the Parallel Extensions for the .NET Framework and the
Parallel Patterns Library for native code, Visual Studio 2010 includes tools for debugging
parallel applications. The new tools allow the visualization of parallel Tasks and their
runtime stacks. Tools for profiling parallel applications can be used for visualization of thread
wait-times and thread migrations across processor cores. Intel and Microsoft have jointly
pledged support for a new Concurrency Runtime in Visual Studio 2010 and Intel has
launched parallelism support in Parallel Studio as an add-on for Visual Studio[4].
3.5 Summary
A detailed study of the various operations performed by the system and their relationships
within and outside the system was carried out. Non-functional requirements, which describe
constraints imposed on the software and restrictions on design freedom, were also
studied, and the system requirements were collected.
Chapter 4
SYSTEM DESIGN
4.1 General
System design is a process through which requirements are translated into a representation of
software. Initially, the representation depicts a holistic view of software. Subsequent
refinement leads to a design representation that is very close to source code. Design is the
place where quality is fostered in software development.
4.2 Block diagram
Figure 4.1 Block diagram
The block diagram specifies how the project works in simple terms. Initially, we attach
color bands to our fingers, and these colors are set in the project. A webcam takes a
continuous sequence of images of the hand; based on the difference in location of the
specified colored objects in the image sequence, specific functionalities can be seen on the monitor.
4.3 Use-Case Diagrams
4.4 Data Flow Diagram:
The flow diagram depicts the flow of control as well as how the various events are
handled. A video source is selected from the available list, and the various color markers on
the hand are identified. The hand movements in front of the webcam are processed as
continuous images; the required color pixels are identified, and based on the type and location
of these color pixels, mouse inputs are mapped and mouse functionalities are implemented.
4.5 Summary
Subsequent refinement to the design representation is made that is very close to source code.
Requirements are translated into a representation of software. Different modules are
identified. Flow of data between different modules is analyzed.
Figure 4.2 Data flow diagram
Chapter 5
DETAILED DESIGN
5.1 General
In detailed design we should know the organization of the design and what each function
should do. It is concerned with producing a short design specification of each function. This
section provides a brief description of the various components used in our project
‘Mouse Interaction Using Hand Gestures’.
Webcam: It enables detection of brightly colored objects, provided the colors of the object
and the environment are different. The webcam should be a minimum of 1.3 megapixels. The
webcam is connected to the computer using a data link. The webcam detects the object, the
images are processed accordingly, and the outputs are displayed on the screen of the computer.
Colored Objects: These enable the selection of one or more markers. For object detection we
make use of markers. A marker can be any colored object, such as a color band, ball or
pen. All markers should satisfy one criterion: they should be distinguishable from
the environment. Hence we cannot have a marker of the same color as the environment.
5.2 Modules Used
Video source selection:
This module is used to identify the various video sources present in the system. The system
searches for and lists all active video sources attached to the system using a drop-down list
provided by Visual Studio 2010. It takes the first video source as the default. Thus this module
helps to select the desired video source.
Object color selection:
The user moves the colored object to the indicated location and clicks the color selection
button provided by Visual Studio 2010. The selected colors are used to perform the required
operations. This module is used to select the desired object of the required color to track.
Processing the image sequence:
The sequence of images from the video source is processed, the selected object is identified,
and thus the mouse functions are implemented. Processing of an image is initiated by the
NewFrameHandler event of the VideoSourcePlayer class defined in the AForge library. In each
image the location of the selected color object is determined, and this location information is
used for performing the different operations.
The using Keyword:
C# uses the using keyword in two ways. A using directive at the top of a code file, e.g.
using System.Windows.Forms;, imports a namespace so that its types can be referenced
without full qualification (the external assembly itself is added as a project reference). A
using statement, by contrast, allows the programmer to specify when objects that use
resources should release them: the object's Dispose method is called automatically at the
end of the block.
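C#'s using keyword serves two roles: a using directive imports a namespace, while a using statement releases resources deterministically. A minimal sketch of both follows; the file name and contents are illustrative:

```csharp
using System;      // using directive: imports the System namespace
using System.IO;   // needed for StreamWriter and File

class UsingDemo
{
    static void Main()
    {
        // using statement: writer.Dispose() is called automatically when the
        // block exits, flushing and releasing the underlying file handle.
        using (StreamWriter writer = new StreamWriter("demo.txt"))
        {
            writer.WriteLine("hello");
        }

        Console.WriteLine(File.ReadAllText("demo.txt").Trim());  // prints hello
    }
}
```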
5.2.1 User-Defined Functions:
public void videoSourcePlayer1_NewFrame(object sender, ref Bitmap image)
This function is called whenever a NewFrameHandler event occurs, i.e. whenever a new frame
arrives. As soon as a frame arrives it is processed in order to find the location of the desired
colored object.
Processing is carried out as follows:
The image is scanned; the desired pixels are retained as they are and the rest are made uniform
using Euclidean filtering and grayscaling, which are available in the AForge library.
The retained pixels are identified and their locations are obtained. Based on the locations of the
two colored objects we set flags and perform the required operation.
public void initialise_mouse():
This function initialises the mouse variables to predefined values.
public void DoLSMouseClick():
This function is used to perform the mouse single left click operation. It is called
based on the flags set during the processing of the image. The flags are constantly monitored
by the timer function.
public void DoRightMouseClick():
This function is used to perform the mouse single right click operation. It is called
based on the flags set during the processing of the image. The flags are constantly monitored
by the timer function.
public void DoDoubleMouseClick():
This function is used to perform the mouse left double click operation. It is called
based on the flags set during the processing of the image. The flags are constantly monitored
by the timer function.
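The report does not show how these click functions are implemented internally. A common way to synthesise mouse clicks on Windows is the Win32 mouse_event API via P/Invoke, sketched below under that assumption; the project's actual implementation may differ, and the class and method names are hypothetical.

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical sketch: synthesising mouse clicks with the Win32 mouse_event
// API. Runs on Windows only.
static class MouseSimulator
{
    [DllImport("user32.dll")]
    static extern void mouse_event(uint dwFlags, uint dx, uint dy,
                                   uint dwData, UIntPtr dwExtraInfo);

    const uint MOUSEEVENTF_LEFTDOWN  = 0x0002;
    const uint MOUSEEVENTF_LEFTUP    = 0x0004;
    const uint MOUSEEVENTF_RIGHTDOWN = 0x0008;
    const uint MOUSEEVENTF_RIGHTUP   = 0x0010;

    public static void DoLeftClick()
    {
        // A click is a press followed by a release at the current cursor position.
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }

    public static void DoRightClick()
    {
        mouse_event(MOUSEEVENTF_RIGHTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_RIGHTUP, 0, 0, 0, UIntPtr.Zero);
    }

    public static void DoDoubleClick()
    {
        DoLeftClick();
        DoLeftClick();
    }
}
```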
5.2.2 Windows Forms Definition
Main Form
This form contains links to all other forms, i.e. it contains the following buttons:
Video source selection: This selection button lists all the video sources
currently installed in the system. The first video source in the list is set as the default.
Object color selection: The colored objects are brought near the small color picker on the
output screen and the identified colors are selected. The two selected colors are named
Color1 and Color2 respectively.
Color picker box: This color box saves the color of the object detected by the color
picker.
Move cursor: Below the video source button there is a small checkbox named “Cursor
Control”. This functionality is used to move the cursor by identifying the movement
of the colored object on the screen.
5.3 Algorithm
Img: signifies the image produced by the selected video-source.
color1: signifies the color of the first marker.
color2: signifies the color of the second marker.
Euclidean color filtering (EuclideanColorFiltering) is applied to the image Img to filter out
undesired pixels.
Grayscale filtering is applied to convert the image Img to a grayscale image.
BlobCounter is used to get the locations of the desired colored objects.
Based on the locations of the colored objects, the required operations, such as mouse cursor
movement, mouse left click, mouse right click and mouse double click, are
performed.
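The steps above can be sketched with the AForge classes named in the text. This is a sketch, not the project's code: the filter radius, blob-size thresholds and the FindMarker helper are illustrative assumptions.

```csharp
using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

class MarkerLocator
{
    // Returns the bounding rectangle of the largest region matching the
    // marker colour in a 24bpp frame, or Rectangle.Empty if none is found.
    public static Rectangle FindMarker(Bitmap frame, Color markerColor, short radius)
    {
        // 1. Keep only pixels within the given radius of the marker colour;
        //    everything else is filled with black.
        EuclideanColorFiltering filter = new EuclideanColorFiltering();
        filter.CenterColor = new RGB(markerColor);
        filter.Radius = radius;
        filter.ApplyInPlace(frame);

        // 2. Convert to grayscale so the blob counter can process the image.
        Bitmap grey = Grayscale.CommonAlgorithms.BT709.Apply(frame);

        // 3. Locate connected regions (blobs) of the remaining bright pixels.
        BlobCounter counter = new BlobCounter();
        counter.MinWidth = 5;      // ignore tiny noise blobs (illustrative values)
        counter.MinHeight = 5;
        counter.ObjectsOrder = ObjectsOrder.Size;
        counter.ProcessImage(grey);

        Rectangle[] rects = counter.GetObjectsRectangles();
        return rects.Length > 0 ? rects[0] : Rectangle.Empty;
    }
}
```

The centre of the returned rectangle gives the pixel position that is then scaled to a screen coordinate for cursor movement.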
5.4 Summary
Various modules were used in the proposed project. The study of the detailed design of these
modules enabled us to improve the workability of the system. Application of the algorithm
brought about the much-needed synchrony among the modules. This study has opened up
the scope and need for incremental improvement of the application.
Chapter 6
TESTING
6.1 General
Testing a system’s capabilities is a very important task. Users are interested in getting a job
done, and test cases should be chosen to identify aspects of the system that will stop them
from doing their job. Although errors such as screen corruption are irritating, they are less
disruptive than errors that cause loss of data or program termination. Testing typical
situations is very important: it is more important that a system works under normal usage
conditions than under occasional conditions that only arise with extreme data values. Some
of the most important testing methods are explained below.
6.2 Levels of Testing
6.2.1 Unit Testing
Unit testing is testing in which an individual unit of the software is tested in isolation from
the other parts of the program. Unit testing is primarily carried out by the developers
themselves, and deals with the functional correctness and completeness of individual
program units.
Unit testing focuses verification effort on the smallest unit of software design, the module.
The purpose of unit testing is to uncover errors in the smallest software unit, the routine.
Each routine is tested individually using black-box-oriented tests. The programmer of
each design will design a set of test cases for the module and ensure that the module is fully
tested; important or complex routines will also be tested by at least one other person [4].
6.2.1.1 Test Cases
Test cases are specific executable tests that examine all aspects, including inputs and
outputs, of a system and then provide a detailed description of the steps that should be taken,
the results that should be achieved, and other elements that should be identified. The steps
explained in a test case include all details, even if they are assumed to be common
knowledge. Test cases are used as a technical explanation and reference guide for systems.
Test cases indicate the various tests carried out and compare the results obtained with the
expected results. These help us to know the different user inputs and the results obtained
for those inputs. The test cases generated for our project are displayed in Table 6.1.
Step | Input Specification       | Expected Result                 | Obtained Result            | Remarks
1    | Video source selection    | Video source must be selected   | Video source is selected   | As expected
2    | No video source selected  | Error message must be displayed | Error message is displayed | As expected
3    | Color 1 picker            | Color 1 must be selected        | Color 1 is selected        | As expected
4    | Color 2 picker            | Color 2 must be selected        | Color 2 is selected        | As expected
5    | Cursor control            | Moves the mouse cursor          | Moves the mouse cursor     | As expected
6    | Left click function       | Must click the selected object  | Clicks the selected object | As expected
7    | Right click function      | Must click the selected object  | Clicks the selected object | As expected
8    | Double click function     | Must click the selected object  | Clicks the selected object | As expected
Table 6.1 Test cases
6.2.2 Modules Testing
A module is a collection of related components. Each individual module and function
was put to test. In our project the Video source selection module, Object color selection
module and Image sequence processing module were tested and found to be working as expected.
6.2.3 System Testing
The sub-systems are integrated to make up the entire system. During this phase of testing,
the various sub-systems obtained after integration were put to use. System testing also
involves some testing by the client. The Video source selection module, Object color selection
module and Image sequence processing module are put together as a single system and tested [4].
When integrated, the system worked successfully.
6.2.4 Acceptance Testing
This is the final stage of the testing process. During this phase the final product is given to
some potential clients and feedback is obtained. After this test some small modifications are
made, and finally the system is ready.
6.3 Summary
Testing typical situations is very important; it matters most that a system works under normal
usage conditions. Different types of testing were carried out on the application developed.
The test cases were chosen to identify aspects of the system that would stop users from
doing their job.
Chapter 7
CONCLUSION AND FUTURE ENHANCEMENTS
New advancements in computer vision and human-computer interaction technologies have
brought a great change in living conditions. Recently, there has been a surge of interest in
recognizing human hand gestures. Hand-gesture recognition has various applications, such as
computer games (motion-sensing games), machinery control and mouse replacement. This
project was developed and implemented successfully. The practical knowledge and technical
skill we have gained through the project are vast and valuable. With only the help of a
normal webcam, we can give the user the experience of mouse interaction without a
physical device.
BIBLIOGRAPHY
[1] Andrew Troelsen, Pro C# with .NET 3.0, Special Edition, Dreamtech Press, 2007.
[2] Stuart Russell, Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd Edition,
Pearson Education, 2003.
[3] Herbert Schildt, C#: The Complete Reference, 5th Edition, Tata McGraw-Hill, 2004.
[4] Ian Sommerville, Software Engineering, 8th Edition, Pearson Education, 2007.
Appendix A
USER MANUAL
Mouse Interaction Using Hand Gesture is software that allows users to give mouse inputs
to a system without using an actual mouse. It is easy to use and provides user comfort.
The .NET framework must first be installed on the system for this project to work; a webcam
or other video source is also required for the proper working of the project.
The following steps are to be followed by a user in order to use this software:
Open the application; a main form is displayed on the screen.
Select the video source from the list available in the drop-down box at the right end of
the main form. Select one of the devices and click the OK button. When the OK button
is pressed, video is displayed in the two video boxes within the form. At any moment,
to stop the video streaming, just click the Disconnect button next to the OK button.
There is a small red box within the second video box; this is called the color picker.
Bring the colored band attached to the finger, or a colored object, over the color picker;
the object's color can be seen as the background of the Select Color button at the right
end of the main form. Clicking the button chooses the object's color.
Follow step 3 for the next colored object. Once the colors of the colored bands or
objects are selected, when we move these colored bands attached to the fingers in front
of the webcam we can see the color bands identified in the video box. The first color is
used for clicking and the second color for cursor movement.
Select the cursor movement checkbox at the top right for cursor movement on the
display. Selecting this will move the mouse pointer in the direction of the colored band.
When we bring the two color bands together, a left click is performed at the mouse
pointer; this is shown by the overlap of the color identifiers in the second video box in
the main form.
When the two color bands are brought together along the x-axis in a single line, a short
distance apart, a right click is performed at the position of the mouse pointer.
When the two color bands are brought together along the y-axis in a single line, a short
distance apart, a double click is performed at the position of the mouse pointer.
Following these steps will implement all the functionalities of the mouse; thus it will work as
a virtual mouse.
Appendix B
SCREEN LAYOUT
A.1 The Microsoft Visual Studio environment
A.2 The view of the interface
A.3 Identifying the color of the marker
A.4 Detecting the motion of two colored-objects
A.5 Demonstrating clicking functions