GRADUATE EDUCATION AND RESEARCH IN PATTERN RECOGNITION
INTERIM PROGRESS REPORT
Center of Excellence in Pattern Recognition
Research Projects
Transforming Communication between Deaf and Hearing people using Pattern Recognition
Brain Prosthesis – An Advanced Signal Analysis and Classification of P300 ERP Signal- Component in BCI
Measuring and Improving Human Video Surveillance Performance
Computational Fluid Dynamics of Pharmaceuticals Processing
Marine Security: Towards Maritime Sentry System
Red Tide Detection and Tracking
Principal Investigators
Emanuel Donchin, College of Arts and Sciences
Dmitry Goldgof, College of Engineering
Lawrence Hall, College of Engineering
Rangachar Kasturi, College of Engineering
Dmitry Khavinson, College of Arts and Sciences
Chad Lembke, College of Marine Science
Barbara Loeding, College of Education
Frank Muller-Karger, College of Marine Science
Scott Sampson, SRI (was with College of Marine Science)
Ravi Sankar, College of Engineering
Thomas Sanocki, College of Arts and Sciences
Sudeep Sarkar, College of Engineering
Aydin Sunol, College of Engineering
Graduate Students
Weijan Cheng
Sergiy Fefilatyev
Jayapragas Gnaniah
Siri Kamp
Valentina Korzhova
L. Kuznia
Kun Li
Vasant Manohar
B. Smeltzer
Inia Soto
Noah Sulman
Summary of Activities
Pattern Recognition Technology Showcase
Distinguished Speaker Series
International Conference on Pattern Recognition
Florida Imaging and Recognition Systems and Technology (FIRST) Center of Excellence
External Grant Proposals
New Courses
Cluster Computing System
Pattern Recognition Technology Showcase
USF has extensive expertise in Video and Image Analysis, Knowledge Discovery and Language Learning, Medical Data Analysis, and Bioinformatics technologies.
These technologies result in commercial products and services that enhance the Security, Health, and Quality of Life of our citizens.
They were demonstrated to the public at the Pattern Recognition Technology Showcase held at the Sam and Martha Gibbons Alumni Center (Traditions Hall) on September 28, 2007.
Distinguished Speaker Series
Sharat Chandran, IIT Bombay, India, "Modeling Translucency in Image Based Relighting," August 29, 2007
Horst Bunke, University of Bern, Switzerland, "A Family of Novel Graph Kernels for Structural Pattern Recognition," September 4, 2007
Demetri Terzopoulos, University of California, Los Angeles, "Artificial Animals and Humans: From Biomechanics to Intelligence," October 3, 2007
Alexander Vasilev, University of Bergen, Norway, "Pattern Recognition: Energy of the Laplace Evolution," December 14, 2007
Mark Mineev-Weinstein, Los Alamos National Laboratory, New Mexico, "The Laplacian Growth: Physics, Mathematics, and Algorithms for Shape Recovery," January 14, 2008
International Conference on Pattern Recognition
In a worldwide competition, USF was chosen to host the 2008 International Conference on Pattern Recognition (www.ICPR2008.org).
This premier international conference has not been held in the USA since 1990.
This is a great opportunity for Florida-based companies to showcase their products to some 2,000 delegates from around the world.
Florida Imaging and Recognition Systems and Technology (FIRST) Center of Excellence
We have submitted a $20 million research proposal in partnership with the University of Central Florida to the Florida State Centers of Excellence competition. Much of the research described in this presentation will serve as the foundation for the activities of the proposed center.
External Grant Proposals
ElasticFace: Characterization of Facial Dynamics in Video (NSF, PI: Goldgof)
Evaluation of Smart Video for Transit Event Pattern Detection (FDOT-NCTR, PIs: Goldgof, Sapper)
Tracking Fluid Flow Parameters in Video Data (NSF, PI: Goldgof)
iSIMON: Development of an Automated Intelligent Sign Language Monitor to Facilitate On-Demand Sign Language Instruction and Learning (US Department of Education, PIs: Loeding, Sarkar)
Gravitational Lensing, Complex Lightning Bolts and Complex Analysis (NSF, PI: Khavinson)
Bottom Stationed Ocean Profiler Design Improvements (ONR, continuation funding, Lead: C. Lembke)
MERHAB: Eastern Gulf of Mexico Sentinel Program (NOPP/NOAA, continuation funding, Lead: C. Lembke)
A P300 Brain Computer Interface Used to Tele-operate a Robotic Arm Mounted to a Mobile Platform (NIH, Lead: E. Donchin)
New Courses
CIS 6930 Three-Dimensional Data from Images, Spring 2008, Goldgof
CIS 6930 Video Processing, Spring 2007, Goldgof
CIS 6930 Document Image Analysis, Fall 2007, Kasturi
EXP 7099 Tools for Language Discovery, Fall 2007, Sanocki
Cluster Computing System
• Initiated construction of a 64-core compute cluster
• Intel Xeon X5355 Clovertown 2.66 GHz (quad-core)
• 256 GB of memory distributed over 8 compute nodes
• Interconnected via a 1000 Mbps Ethernet switch
• 12 TB disk array
• 340.48 GFLOPS maximum theoretical performance
• All parts received and assembled
• All CPUs tested
• Benchmarking initiated for final testing
• Available to users by the end of the Spring 2008 semester
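The quoted peak figure is consistent with 64 cores at 2.66 GHz issuing two floating-point operations per cycle per core; the node/socket breakdown below is an assumption inferred from the 8-node, quad-core description:

```python
nodes, sockets, cores = 8, 2, 4        # assumed layout: 8 nodes x 2 quad-core CPUs
total_cores = nodes * sockets * cores  # 64 cores in total
flops_per_cycle = 2                    # per core, consistent with the quoted peak
peak_gflops = total_cores * 2.66 * flops_per_cycle
print(peak_gflops)  # 340.48
```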
Red Tide Detection and Tracking
L.O. Hall, F. Muller-Karger, C. Hu, D. Goldgof, W. Cheng, I. Soto
Computer Science and Engineering and Marine Science
Harmful algae are microscopic, single-celled plants. Occasionally the algae grow very fast (bloom) and accumulate into dense, visible patches. They can appear red, orange, brown, or greenish, depending on the organism and the waters.
Karenia brevis
Consequences
Recipe
Location and Frequency
Use satellite imaging in conjunction with data mining to find blooms.
The following five attributes from MODIS satellite images were used: chlorophyll, fluorescence line height, and normalized water-leaving radiance at 412 nm, 443 nm, and 488 nm. We used Fuzzy C-Means clustering to group seawater in potential red tide regions into four clusters or classes; different clusters are labeled with different colors. The cluster whose centroid has the highest fluorescence line height value was recognized as red tide (colored bright red). We tested our algorithm on nine different days in 2004 and 2005 and achieved an overall accuracy of 66.13% against pixels known to contain red tide through physical measurement. Ground-truth red tide spots are marked as blue crosses in the figure, and non-red-tide spots as white crosses.
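The clustering step above can be sketched as follows. This minimal fuzzy c-means implementation and the synthetic five-attribute pixels are illustrative assumptions, not the study's actual code or MODIS data; the labeling rule is the one described in the text (the cluster with the highest fluorescence line height centroid is called red tide).

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means; returns (centroids, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                     # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return centroids, U

# Synthetic pixels: columns = [chlorophyll, FLH, nLw412, nLw443, nLw488]
rng = np.random.default_rng(1)
means = np.array([[1.0, 0.1, 5.0, 5.0, 5.0],   # clear water
                  [3.0, 0.2, 4.0, 4.0, 4.0],   # turbid water
                  [5.0, 0.3, 3.0, 3.0, 3.0],   # high chlorophyll, low FLH
                  [6.0, 2.0, 2.0, 2.0, 2.0]])  # candidate red tide (high FLH)
X = np.vstack([mu + 0.05 * rng.standard_normal((50, 5)) for mu in means])

centroids, U = fuzzy_c_means(X, c=4)
red = int(np.argmax(centroids[:, 1]))      # cluster with highest FLH centroid
labels = U.argmax(axis=1)
detected = (labels[150:] == red).mean()    # fraction of red-tide pixels recovered
print(f"red-tide pixels recovered: {detected:.2f}")
```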
Marine Security: Towards Maritime Sentry System
Sergiy Fefilatyev, Dmitry B. Goldgof 1, Chad Lembke, Scott Sampson 2
1 Department of Computer Science and Engineering, College of Engineering, University of South Florida
2 Center for Ocean Technology, University of South Florida; SRI International
Abstract
This project presents a new algorithm for automatic detection of marine vehicles in the open sea from a buoy camera system. Users of such a system include border guards, the military, port safety and flow management, and sanctuary protection personnel. To reduce human effort, it is desirable to build a system that detects vessels at sea autonomously and robustly. The goal of the system is to detect an approximate window around each ship and prepare the small image for transmission and human evaluation. The proposed algorithm combines a horizon detection method with edge detection and post-processing. A dataset of 100 images is used to evaluate the performance of the proposed algorithm.
Algorithm DescriptionOverview
Experiments
In this work a dataset of 100 images of the open sea with a visible horizon was used. Most of the images had multiple ships on the horizon line; the maximum number of ships in an image was 6. Some of the images did not include any ships or floating objects. A manual search for the parameters of the Canny edge detector was conducted on a separate dataset of 27 images. The table below shows the accuracy of ship detection and the false alarm rate as a function of the threshold value.
The algorithm consists of six steps. The image acquisition step uses a digital camera installed on the buoy to acquire images of the surrounding area. For the edge detection step we chose the Canny edge detector. Horizon detection is the most essential step: the detected horizon line is used to eliminate all edges in the edge image that cannot belong to floating objects, since we expect all objects of interest, i.e. ships, to lie above the horizon line. The purpose of the post-processing step is to connect closely lying separate edges belonging to a single object and to eliminate some edges that do not belong to that object. The output consists of the region of the original image inside the bounding box around the found objects.
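The edge-detection, horizon-detection, masking, and bounding-box steps can be sketched on a synthetic scene. A simple gradient-magnitude threshold stands in for the Canny detector, and the horizon is taken as the row with the strongest mean horizontal-edge response; the image, threshold, and scene layout are all illustrative, not the actual buoy data.

```python
import numpy as np

def detect_ship_window(img, edge_thresh=0.1):
    """Edge detection -> horizon detection -> masking -> bounding box."""
    gy, gx = np.gradient(img.astype(float))            # image gradients
    edges = np.hypot(gx, gy) > edge_thresh             # stand-in for Canny
    horizon = int(np.argmax(np.abs(gy).mean(axis=1)))  # strongest horizontal edge row
    edges[horizon - 1:, :] = False                     # keep only edges above the horizon
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None                                    # no floating objects found
    return ys.min(), xs.min(), ys.max(), xs.max()      # window to transmit

# Synthetic scene: bright sky, darker sea, one dark "ship" sitting on the horizon
img = np.full((100, 200), 0.9)
img[40:, :] = 0.4            # sea below row 40
img[32:40, 80:100] = 0.1     # ship hull above the horizon line
print(detect_ship_window(img))
```

In the real pipeline the returned window would be cropped, compressed, and sent to the operator over the low-bandwidth link.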
Summary and Conclusions
A high detection rate with a moderate false alarm rate is recorded for low threshold values. The visual results for the experiment show that a typical failure in ship detection occurs when a single object is represented by multiple regions.
One proposed future improvement is the introduction of a training step to identify the parameters of the Canny edge detector. Another enhancement may be a 'merge' procedure that connects closely lying ship regions.
We propose a semi-automated computer system that detects ships on the horizon using a computer vision approach. The system will be equipped with a digital camera, located on a buoy, and will work in autonomous mode, taking images of the surrounding area. After processing the image information, the system will send only images of the found objects to a human operator for further evaluation and action. One of the constraints on such a system is the low communication bandwidth available for reporting results; hence, one of the requirements is efficient compression of the obtained visual results before they are sent to the control center.
Threshold Value Detection Rate False Alarm Rate
0.1 95.11% 27.72%
0.2 94.02% 28.80%
0.3 90.76% 32.07%
0.4 86.96% 35.87%
0.5 72.83% 50.00%
0.6 53.26% 69.57%
0.7 43.48% 79.35%
0.8 27.17% 95.65%
0.9 14.67% 108.15%
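Given the table above, an operating threshold can be picked programmatically, for example by maximizing the margin between detection and false alarm rates. (The false alarm rate is apparently counted per ground-truth ship, which is why it can exceed 100%.)

```python
# (threshold, detection rate %, false alarm rate %) rows from the table above
rows = [(0.1, 95.11, 27.72), (0.2, 94.02, 28.80), (0.3, 90.76, 32.07),
        (0.4, 86.96, 35.87), (0.5, 72.83, 50.00), (0.6, 53.26, 69.57),
        (0.7, 43.48, 79.35), (0.8, 27.17, 95.65), (0.9, 14.67, 108.15)]

# One simple criterion: maximize detection minus false alarms
best = max(rows, key=lambda r: r[1] - r[2])
print(best)  # (0.1, 95.11, 27.72)
```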
Respective hardware (figure above)
Physical measurements
Series of videos (figures below)
Main applications:
- chemical (processes of heat or mass transfer between the expanding liquid and the surrounding gas)
- medical (blood oxygenation)
- engineering (cooling devices)
Computational Fluid Dynamics of Pharmaceuticals Processing
Dmitry B. Goldgof, Grigori M. Sisoev, Aydin Sunol, Dmitry Khavinson,
College of Engineering, College of Arts and Sciences
Rotating disk
Flow meter
Top reservoir
Control box
Return pump
Copper tubing
Bottom Reservoir and Stand
Block diagram of wave detection and estimation of the wave parameters
The resulting images
Estimation of fluid flow parameters: wavelengths, inclination angles, radial velocity
Comparison
Detection of the point at the top of the wave
Finding the tangents to the circle and to the spiral at the given point
Calculating the inclination angle between the tangents at the given point
Image preprocessing (thresholding and local enhancement)
Calculating the Euclidean distance between two consecutive waves
Determining the two neighboring intensity maxima that belong to the same radius on the disk (semi-manual)
Determination of β and φ
The calculated averaged wavelengths (a) over a sequence of ten frames and (b) over five videos, compared with theoretical wavelengths, and (c) wavelength vs. radius; asterisks and circles denote experimental and theoretical values.
dR/R = dx/x + dy/y + 2 ds/s ≤ 0.03
The calculated averaged inclination angles β (a, b) and (c) their changes ∆β, compared to the predicted values.
Radial velocity component of the wave flow.
Estimation of the inclination angle and of the radial velocity component is an ill-posed problem, so the asymptotically optimal method is used to determine the step of differentiation from the given accuracy ε of the initial data; the optimal step scales as h_opt ∝ (3ε)^(1/3), and the radial velocity component is then computed by finite differences of the smoothed radii.
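The asymptotically optimal step can be illustrated numerically. For a central difference with data accuracy ε, the truncation error grows like h² while the noise error grows like ε/h, and the total is minimized near h_opt = (3ε/M3)^(1/3), where M3 bounds the third derivative; the test function and constants below are illustrative, not the poster's data.

```python
import math

def central_diff(f, x, h, noise=0.0):
    # Central difference with worst-case measurement error of +/- noise
    return ((f(x + h) + noise) - (f(x - h) - noise)) / (2 * h)

eps = 1e-8                      # accuracy of the initial data
h_opt = (3 * eps) ** (1.0 / 3)  # optimal step for f = sin (|f'''| <= 1)

x = 1.0
exact = math.cos(x)
err_opt = abs(central_diff(math.sin, x, h_opt, eps) - exact)
err_small = abs(central_diff(math.sin, x, h_opt / 100, eps) - exact)  # noise dominates
err_large = abs(central_diff(math.sin, x, h_opt * 100, eps) - exact)  # truncation dominates
print(err_opt < err_small and err_opt < err_large)  # True
```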
Evolution equations
The flow is governed by the continuity and Navier-Stokes equations together with an appropriate set of boundary conditions. The derived model is described in 1), with the addition of non-axisymmetric terms. A localized version of the equations is used to compute the parameters of developing waves.
Camera calibration accuracy
Considering the lens distortion of a camera with coefficients k1 and k2, and given that the standard relative deviations of the estimates of k1 and k2 do not exceed 3-4%, one has the accuracy bound relating dR/R to dx/x, dy/y, and ds/s, where s is the scale factor, x, y are image coordinates, and X, Y are world coordinates.
Spiral detection algorithm
Consider the spirals below as periodic functions with period ∆Φ in the polar coordinate system,
r = rj(φ) : rj(φ + ∆Φ) = rj(φ).
Let ∆φ be the angle-step of the calibration, let N = ∆Φ / ∆φ > 1 be an integer, and let
φ = φi = φ0 + i∆φ, i = 1, 2, ..., N; rj(φ0) = r0 = min rj(φ);
r0 < ri1 < … < riS < 100,
where S is the number of spirals for each i and the rij are the experimental data for φ = φi; the points (φi, rij) lie on the respective spirals. The first spiral is then:
(φ1, r11), (φ2, r21), (φ3, r31), ..., (φN, rN1); (φN+1, r12), (φN+2, r22), ..., (φ2N, rN2); ...; (φ(S-1)N+1, r1S), (φ(S-1)N+2, r2S), ..., (φSN, rNS); ...
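The enumeration above can be checked on a small synthetic example: for Archimedean-type spirals, threading the j-th smallest radius at each sampled angle in the stated order reconstructs a single spiral whose radius grows monotonically. The pitch, spiral count, and sampling below are illustrative values, not the experimental data.

```python
import numpy as np

c, S, N = 0.5, 3, 12                 # spiral pitch, number of spirals, samples per turn
dphi = 2 * np.pi / N
phi = np.arange(1, N + 1) * dphi     # one calibration turn of angle samples

# r[i, j]: j-th smallest radius measured along the ray at angle phi[i]
# (layer j belongs to the spiral turn offset by 2*pi*j)
r = c * (phi[:, None] + 2 * np.pi * np.arange(S)[None, :])

# Thread the layers into one continuous spiral, following the enumeration above
spiral1 = [(phi[i] + 2 * np.pi * s, r[i, s]) for s in range(S) for i in range(N)]
radii = np.array([p[1] for p in spiral1])
print(bool(np.all(np.diff(radii) > 0)))  # radius grows monotonically along the spiral
```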
Let ∆r be the radius-step of the calibration and let (φi, k∆r; Iik), i = 1, 2, ...; k = 1, 2, ..., be the calibration net on the contracted disk, where Iik are the intensities of the points (φi, k∆r); the experimental radii rij are then located at the neighboring local maxima of Iik over k.
1) G.M. Sisoev, O.K. Matar, and C.J. Lawrence, "Axisymmetric wave regimes in viscous liquid film flow over a spinning disk," J. Fluid Mech. 495, 385-411, 2003.
Abstract
Novel video-based algorithms for the detection and tracking of spiral waves in a spinning disk reactor are presented. They are based on processing experimental video data consisting of a discrete field of disk point coordinates and their intensities. The experimental results are compared with theoretical results. In addition, model-based algorithms are developed to estimate the disk speed and the fluid flow rate in a spinning disk reactor.
Conclusion
The developed algorithms allow recovery of both the observed flow parameters (including wavelengths, inclination angles, and radial velocity components) and the controlling parameters. The inclination angles between the spirals and the respective circles, and the radial velocity components, are calculated using the asymptotically optimal method. Results computed from video data are compared with the numbers predicted by the theoretical model.
Recovery of controlling parameters
• Fluid flow rate
• Disk speed
Data Acquisition
Theory and Results
Automated Detection of Sign Language Patterns
Sudeep Sarkar, Barbara Loeding, Sunita Nayak, Alan Yang
Department of Computer Science and Engineering, Department of Special Education
Goal and Impact Statement
Unsupervised Learning of Sign Models
Representation that does not require tracking
Publications and Acknowledgement
Movement Epenthesis Aware Matching
• R. Yang, S. Sarkar, and B. Loeding, "Enhanced Level Building Algorithm for the Movement Epenthesis Problem in Sign Language Recognition," to be presented at the IEEE Conference on Computer Vision and Pattern Recognition, 2007.
• R. Yang and S. Sarkar, "Gesture Recognition using Hidden Markov Models from Fragmented Observations," IEEE Conference on Computer Vision and Pattern Recognition, pp. 766-773, June 17-22, 2006.
• R. Yang and S. Sarkar, "Detecting Coarticulation in Sign Language using Conditional Random Fields," International Conference on Pattern Recognition, vol. 2, pp. 108-112, Aug. 20-24, 2006.
• S. Nayak, S. Sarkar, and B. Loeding, "Unsupervised Modeling of Signs Embedded in Continuous Sentences," IEEE Workshop on Vision for Human-Computer Interaction, vol. 3, p. 81, June 2005.
• R. Yang, S. Sarkar, B. L. Loeding, and A. I. Karshmer, "Efficient Generation of Large Amounts of Training Data for Sign Language Recognition: A Semi-automatic Tool," ICCHP 2006, pp. 635-642.
• B. L. Loeding, S. Sarkar, A. Parashar, and A. Karshmer, "Progress in Automated Computer Recognition of Sign Language," ICCHP 2004, pp. 1079-1087.
This work was supported in part by the National Science Foundation under ITR grant IIS 0312993.
Goal: To advance the design of robust computer representations and algorithms for recognizing American Sign Language from video.
Broader Impact:
• To facilitate communication between the Deaf and the hearing population.
• To bridge the gap in access to next-generation human-computer interfaces.
Intellectual Merit: We are developing representations and approaches that can
• Handle hand and face segmentation (detection) errors,
• Learn sign models from examples, without supervision,
• Recognize in the presence of movement epenthesis, i.e. hand movements that appear between two signs.
Learn a sign model given example sentences with one sign in common. In the following two sentences, the target sign model to be learned is HOUSE (marked in red):
SHE WOMAN HER HOUSE FIRE
fs-JOHN CAN BUY HOUSE FUTURE
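At the gloss level the shared sign is simply the common word (HOUSE); at the video level the idea can be sketched as searching for the most similar pair of subsequences across the two sentences. The toy scalar version below uses illustrative data and a fixed window length, not the actual video feature representation.

```python
import numpy as np

def common_motif(a, b, w):
    """Return indices of the most similar length-w window pair across two sequences."""
    best, arg = np.inf, None
    for i in range(len(a) - w + 1):
        for j in range(len(b) - w + 1):
            d = float(np.sum((np.asarray(a[i:i + w]) - np.asarray(b[j:j + w])) ** 2))
            if d < best:
                best, arg = d, (i, j)
    return arg

a = [9, 1, 2, 3, 8, 8]        # shared motif [1, 2, 3] embedded at index 1
b = [7, 7, 1, 2, 3, 6]        # same motif at index 2
print(common_motif(a, b, 3))  # (1, 2)
```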
(Figure: matching trellis linking model states S1, S2, S3 to observations O1, O2, O3 through candidate groups g of low-level primitives p in each frame.)
Frag-Hidden Markov Models:
• Groups across frames are linked
• The best match is a path in this induced graph over groups
• Matching involves optimization over states AND groups for each frame
Movement epenthesis is the gesture movement that bridges two consecutive signs. This effect can extend over a long duration and involve variations in hand shape, position, and movement, making these intervening segments hard to model explicitly. This has been a problem when trying to match individual signs to full sentences. We have overcome it with a novel matching methodology that does not require modeling of the movement epenthesis segments.
The error rates for enhanced Level Building (eLB), our method, which accounts for movement epenthesis, and for classical Level Building (LB), which does not.
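The epenthesis-aware matching idea can be sketched as a small dynamic program over a feature stream: each frame is either absorbed as movement epenthesis at a fixed per-frame penalty, or consumed by matching an entire sign template. This toy version uses scalar features and illustrative costs; it is not the actual eLB algorithm, only the skip-the-gaps principle behind it.

```python
import numpy as np

def match_signs(sentence, models, gap_penalty=1.0):
    """Label a 1-D feature sequence with sign templates, allowing
    unlabeled 'movement epenthesis' frames between the signs."""
    T = len(sentence)
    best = np.full(T + 1, np.inf)
    best[0] = 0.0
    back = [None] * (T + 1)           # backpointers: (previous index, sign label or None)
    for t in range(T):
        if not np.isfinite(best[t]):
            continue
        # Option 1: treat frame t as movement epenthesis (fixed penalty)
        if best[t] + gap_penalty < best[t + 1]:
            best[t + 1] = best[t] + gap_penalty
            back[t + 1] = (t, None)
        # Option 2: match a whole sign template starting at frame t
        for name, tmpl in models.items():
            L = len(tmpl)
            if t + L <= T:
                cost = float(np.sum((np.asarray(sentence[t:t + L]) - np.asarray(tmpl)) ** 2))
                if best[t] + cost < best[t + L]:
                    best[t + L] = best[t] + cost
                    back[t + L] = (t, name)
    # Backtrack the recognized sign sequence, skipping epenthesis frames
    labels, t = [], T
    while t > 0:
        prev, name = back[t]
        if name is not None:
            labels.append(name)
        t = prev
    return labels[::-1]

models = {"A": [1, 1, 1], "B": [2, 2, 2]}
sentence = [5, 1, 1, 1, 5, 5, 2, 2, 2, 5]   # the 5s stand in for epenthesis frames
print(match_signs(sentence, models))  # ['A', 'B']
```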
Segmentation Aware Matching
• We have proposed a novel representation that captures the Gestalt configuration of edges and points in an image.
• It can work with fragmented, noisy low-level outputs such as edges and regions.
• It captures the statistics of the relations between the low-level primitives: distance and orientation between edge primitives, vertical and horizontal displacement, and relationships between short motion tracks.
• The normalized RD is an estimate of Prob(any two primitives in the image exhibit a relationship).
• The shape of the RD changes as parts of the objects move.
• Relational distributions over time model high-level motion patterns.
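A minimal version of such a relational distribution can be computed as a normalized 2-D histogram over the pairwise distance and orientation between primitives. Here the primitives are plain points, and the bin counts and ranges are illustrative; it also demonstrates that the RD depends only on relative configuration, so translating all primitives leaves it unchanged.

```python
import numpy as np

def relational_distribution(points, bins=8, max_dist=60.0):
    """Normalized histogram of pairwise (distance, orientation) relations."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]   # all pairwise displacement vectors
    iu = np.triu_indices(len(pts), k=1)        # each unordered pair counted once
    d = np.hypot(diff[..., 0], diff[..., 1])[iu]
    theta = np.arctan2(diff[..., 1], diff[..., 0])[iu]
    H, _, _ = np.histogram2d(d, theta, bins=bins,
                             range=[[0.0, max_dist], [-np.pi, np.pi]])
    return H / H.sum()                         # estimate of P(relation)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 40, size=(30, 2))         # synthetic point primitives
rd1 = relational_distribution(pts)
rd2 = relational_distribution(pts + 5.0)       # translated copy: same relations
print(bool(np.allclose(rd1, rd2)))             # True: RD is translation-invariant
```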
What are the Limits of Human Ability in Video Surveillance?
Rangachar Kasturi 1, Thomas Sanocki 2, Dmitry Goldgof 1, Sudeep Sarkar 1,Vasant Manohar 1, Noah Sulman 2
1 Department of Computer Science and Engineering, College of Engineering, University of South Florida
2 Department of Psychology, College of Arts and Sciences, University of South Florida
• Objective: To conduct a formal psychological study of the efficiency of the human operator in video surveillance and to suggest recommendations, if any, based on the research findings.
• Research questions:
• How many simultaneous windows of video can the analyst effectively monitor?
• Given target information to monitor, what is the analyst's ability to detect this event, and how does it vary as a function of the number of windows?
• Is the current method of presenting the raw data to the analyst sufficient?
• Do the patterns revealed in the earlier experiments vary across expert and non-expert populations? If the results vary, what is an effective and systematic way to train non-experts?
Signal Analysis and Classification of P300 ERP Signal Component in Brain Computer Interface (BCI)
Emanuel Donchin, Ravi Sankar, Kun Li, Siri-Maria Kamp, David Seebran and Yael Arbel
1 Department of Psychology, College of Arts and Sciences, University of South Florida
2 Department of Electrical Engineering, College of Engineering, University of South Florida
• Objective: Design new P300-based analysis algorithms; increase P300 detection speed while maintaining high accuracy; and thereby achieve an effective communication speed for locked-in people.
• Research questions:
• How can the P300 detection speed and accuracy be increased?
• What are the complexities of the new algorithms, and how can the computational load be reduced?
• Does the current raw data provide sufficient information to the analyst?
• How can the data transfer rate be increased for effective communication?
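As a toy illustration of the standard baseline approach (not the new algorithms proposed here), the sketch below averages stimulus-locked epochs and scores the mean amplitude in a 250-450 ms window; the class with the larger averaged positivity is taken as the attended one. All signal parameters are synthetic assumptions.

```python
import numpy as np

FS = 256                 # sampling rate in Hz (synthetic)
T = int(0.6 * FS)        # 600 ms epochs

def p300_score(epochs):
    """Average the epochs, then take the mean amplitude in the 250-450 ms window."""
    erp = epochs.mean(axis=0)                  # grand-average ERP
    lo, hi = int(0.25 * FS), int(0.45 * FS)    # P300 latency window
    return erp[lo:hi].mean()

rng = np.random.default_rng(0)
t = np.arange(T) / FS
p300 = 5.0 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)   # positive peak near 300 ms

targets = rng.standard_normal((30, T)) + p300          # attended stimuli elicit a P300
nontargets = rng.standard_normal((30, T))              # unattended stimuli: noise only

print(p300_score(targets) > p300_score(nontargets))    # True
```

Averaging over more epochs raises accuracy but lowers communication speed, which is exactly the trade-off the detection-speed questions above target.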