Page 2 of 71
CONTENTS

Introduction  3

Trends  10
  Technology Expansion in Public Safety: IoT, Sensors and Analytics  10

Case Studies  16
  HD and Network Video: Moving Public Safety and Schools Forward in Security  16

Standards  25
  Navigating the Security and Public Safety Industry: From Associations to Standards  25
  UltraHD and the Video Surveillance Industry  31
  UltraHD Resolutions  33
  IoT, Sensors and Analytics  35
  Cyber Security of IoT Sensors  42
  IoT and Cyber Security FAQ  48
  ASIS International Security Applied Sciences Facility Model  51
  UL 2802 – Standard for Performance Testing of Camera Image Quality  57
  Forensic Video Program Readiness  62
    Digital Multimedia Content: More Than Just Video Data  62
    Forensic Review  62
    Video Content Analysis  63
    Checklist: Implementing a Forensic Video Readiness Program  64
    Top Technology Considerations in Forensic Video  65
    Criminal Pattern Identification and Security/Video Data  66
    Linking DMC to Policy  67

Implementation  68
  Project Implementation Plan for a Network Video Surveillance Solution  68

Index  71
Introduction

Digital multimedia content (DMC) is defined as including the video content itself, plus associated metadata (feature data) and audio. The storage location or device (i.e., network video recorder, server, or virtual [cloud] storage) where digital video, digital multimedia content, or digital multimedia evidence (DME) is originally stored is significant in achieving video QoS, or the ability to acquire, render, search, disseminate, and distribute DMC. DMC may also be referred to as IP or digital video, IP video content, or digital multimedia evidence (DME). It comprises digital data representing audio content, video content, metadata, location-based information, relevant IP addresses, recording time, system time, and any other information attached to a digital file. DMC may be compressed or uncompressed and may also be referred to as original, copied, local, or virtual. DMC may be compressed or transcoded from the original DMC into an industry-standard file format, resulting in a reduced amount of data required to represent the original data set. For forensic readiness, the original DMC is extremely important; data recorded and retrieved to DMC media in its native file format (i.e., first usable form) must always be retained at the embedded video camera's solid-state media, local network-attached storage, local server, or virtualized cloud. For further information, see the Digital Video Handbook, Volume I.

Video Content Analysis

A discussion of core terminology in achieving video quality is important. Video analytics is an analysis "snapshot" in time; it differs from video content analysis (VCA), which analyzes video data by single or multiple criteria and then delivers a search result. VCA is not to be confused with a newer technology, known as video summarization or synopsis, which condenses an entire day of video to a matter of minutes. Video summarization is based on the movement of objects through "tubes"; the movement is represented in a condensed video clip along with object time stamps.

How is Digital Multimedia Content (DMC) searched or made actionable?

Video analytics solutions (also referred to as video content analysis, or VCA) are a set of computer vision algorithms that automatically analyze live or recorded video streams without the need for human intervention. The output of these algorithms is actionable data. Adding video analytics to a surveillance network allows the system operator to be more effective in the detection, prevention, response, and investigation of incidents captured by the surveillance system.
VCA can also be used to collect valuable business intelligence about the behavior of people and vehicles.

What are the VCA categories?

1. Situational Awareness & Incident Response: real-time detections and alerts. Define events and scenarios and receive real-time alerts when such events are detected.
2. Forensic Analysis/DMC Search: search through recorded video after an incident to pinpoint specific video footage.
3. Business Intelligence: analyze video footage and generate statistical reports from the data collected within the video.

What are the application categories?

1. Security & Perimeter Protection
2. Safety
3. Traffic Monitoring
4. Asset Protection

What are the category examples, using search parameters and filters, of Forensic Analysis/DMC Search?

1. Target type: people, vehicles, static objects
2. Event type: moving, stationary, crossing a line, occupancy, crowding
3. Filter by color
4. Filter by size
5. Filter by defined time ranges
6. Search on selected cameras or a group of cameras
7. Search for similar targets: once a target is observed, a simple search can be conducted to locate additional appearances of the same or a similar target in the recorded video

What are the application categories of DMC-based business intelligence?

1. Customer Traffic Analysis
2. In-store Customer Behavior Analysis
3. Operational Efficiency
4. Vehicle Traffic Analysis

What are the most popular uses of analyzed DMC data output?

1. Accurate, wide-ranging statistical data related to people and vehicles
2. Multiple viewing options for statistical analysis of traffic volumes (people/vehicles), including numerical charts and user-friendly graphs to enable traffic comparisons, aggregates, and identification of traffic trends
3. Advanced visualization options, such as heat maps and target paths, to analyze movement trends and motion patterns, enabling effortless comprehension of hot/cold traffic zones and dominant traffic paths
4. Easy exporting of raw data for further analysis or integration with other systems
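The search parameters above map naturally onto simple metadata filtering. As an illustrative sketch (the `Detection` record and its field names are hypothetical, not drawn from any particular video management system), a forensic DMC search over analytics output might look like:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Detection:
    """One analytics result extracted from recorded video (hypothetical schema)."""
    camera: str
    target_type: str   # "person", "vehicle", "static_object"
    event_type: str    # "moving", "stationary", "line_cross", "occupancy", "crowding"
    color: str
    size_px: int       # apparent target size in pixels
    timestamp: datetime

def search(detections, *, target_type=None, event_type=None, color=None,
           min_size=None, time_range=None, cameras=None):
    """Apply the forensic search filters described above; every filter is optional."""
    results = []
    for d in detections:
        if target_type and d.target_type != target_type:
            continue
        if event_type and d.event_type != event_type:
            continue
        if color and d.color != color:
            continue
        if min_size and d.size_px < min_size:
            continue
        if time_range and not (time_range[0] <= d.timestamp <= time_range[1]):
            continue
        if cameras and d.camera not in cameras:
            continue
        results.append(d)
    return results
```

Because each filter is optional, the same function serves both broad sweeps and narrow, multi-criteria searches across selected cameras and time ranges.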
Underutilization of video surveillance

1. 66% of those who use video surveillance estimate that less than half the footage is actively monitored.
2. Only 9% report 100% monitoring of surveillance footage.
3. Sonic Automotive is a U.S.-based dealership, parts, and services chain with 23 collision centers and 110 dealership locations throughout 14 states. Some of its outcomes have been measured in terms of a reduction in "actionable events," meaning any time a suspicious person enters the premises. With the live monitoring its analytics allows, the security monitoring team can identify an actionable event as it occurs and communicate via loudspeakers that the person is under surveillance. "Nine times out of 10 that person will leave," says Hallice. "We want to prevent something from happening in the first place, and communicating in this way is a great way to prevent incidents."
4. Cost-effectiveness (52%), effectiveness of monitoring (29%), and the nuisance of false alarms (21%) were the three factors most often cited as preventing further investment in video surveillance monitoring.
5. Storage: 61% of respondents keep video data for more than 30 days before deleting it; 30% of those keep it for 90 days or more. 56% of respondents say the data is reviewed only if there is an incident, while just 23% report that it is reviewed on a regular schedule.
6. Underutilized surveillance: 58% say their organizations cover 50% or less of their valuable assets with video surveillance. The trend is more pronounced in the healthcare and critical infrastructure (oil, gas, utilities, and energy) industries: 72% and 71% of respondents, respectively, estimated that less than half their valuable assets are covered by video surveillance. Just 4% of respondents report that 100% of their valuable assets are currently covered by video surveillance.

Where video surveillance is in use, two-thirds of respondents (66%) estimate that less than half the footage is actively monitored by security personnel. Again, the trend is more pronounced in certain industries, in healthcare (81%), transportation and logistics (74%), and critical infrastructure (68%) in particular. The proportion of respondents who indicate security personnel actively monitor at least 75% of all surveillance footage was again low, just 20% overall, and only 9% reported 100% monitoring.
7. Who uses the data? Security/facilities (73%), operations (43%), IT (28%).
8. Which departments use video surveillance data from analytic tools? IT (41%), marketing (30%), sales (24%), customer relations (22%), HR (20%).
There are numerous trends in the public safety and security industries, and the impact from technology-related and emerging markets is having a big effect. The technology in physical security is greatly influenced by consumer electronics and IT. The hot technologies in the consumer space include Ultra HD, including 4K video, which will start impacting security during 2015, as well as Near Field Communication (such as Apple Pay and Google Wallet), which can also be used for access control in the future. Another is the cloud, which everyone talks about, but few have a real plan for executing the implementation or an understanding of how it will affect their operations. Intelligent, adaptive security devices, like IP cameras that automatically adjust for the extremes of ultra-low light and intense light sources, will become standard "go-to" products for public safety and event security. Entry screening technologies are getting noticed by private corporations. With the increase in border drug and controlled-substance traffic, it has become commonplace for high-risk facilities to link mobile X-ray and backscatter technologies to live video feeds. Yes, check those tires; there might be contraband inside. The biggest surprise may come from cyber security: big data breaches will still happen, and vendors and integrators, along with security departments, need to be prepared to take responsibility for securing their systems. This Handbook will address case studies in several markets. There are a number of significant trends impacting video surveillance. School security will continue to be important to the overall community, as will city surveillance and critical infrastructure security. Retailers will still see the easier-to-calculate ROI, even as they continue to be under pressure from online sales. Public safety will have a growing surveillance market in event security.
More cities are becoming entertainment centers, often with hundreds of thousands of visitors at a single event. Part of this market will be served by the temporary surveillance and entry-screening solution market. In the Standards section of this Handbook, we will identify significant ecosystem members; these groups are evolving and requiring new solutions. Regarding manufacturers and solution providers, a continued inflow of new solutions will arrive as physical security continues to attract new companies, as well as entrants from Asia. That means competition is increasing, so vendors will have to continue to invest to stay relevant. Dealers and systems integrators have a continued need to understand requirements from an IT perspective and to work strategically with end users to sell value and long-term relationships, not products. Monitoring providers are playing a greater role; with high-quality video now available for small systems at reasonable cost,
expectations for video verification from end users will continue to grow. Great opportunities exist in the area of video monitoring, with bandwidth and technology now appropriate for such solutions, and with mobility bringing additional value to the systems. Ideally, alarm systems should be integrated with video verification. In the residential market, new entrants such as telcos and Google will continue to make inroads on the security systems side as well. There are initiatives in some vertical market segments, like schools, for policies around security. The need for school safety and security standards and best practices is being met by the jurisdictions with the largest systems, such as California, Florida, New York, Chicago, and Connecticut. Critical infrastructure working groups are now focusing efforts on petrochemical, power, and food and water defense. There are, however, some pressing security industry issues expected to remain unresolved, areas where technology is ahead of the industry. One example is integrated systems. While most security managers would agree that security systems should be fully integrated (i.e., intrusion, access control, and video), most systems today, even new ones being installed, are still standalone systems. End users are rapidly replacing "closed," appliance-based solutions with platforms linking security devices for scalability, agility, and elasticity. Another example is video verification: most alarms today are not verified by video, which means that guards or police are dispatched on many false alarms. Video verification could help reduce false alarms and make safety staff better prepared as they respond to a real alarm. Hosted video provides a scalable way to deliver video verification. With more data breaches, some very tough requirements may come down on the cyber security side that vendors and integrators will need to understand and live up to.
Trends
Technology expansion in public safety: IoT, Sensors and Analytics

The Internet of Things has truly changed the technology landscape. In fact, many of the things we only dreamt about a few short years ago are now commonplace. As IoT begins to converge with sensors and analytics, it is evident that the technology landscape is poised to change yet again, and that change will impact industries across the board. It is not hard to imagine some of the scenarios that are probably just around the corner. In the not-too-distant future, an interesting set of events takes place in a single day.

Virtual visit

A family checks in at medical reception for one member's outpatient procedure. The patient is given an RFID tag, and a family member's smartphone NFC function is activated via the hospital's patient care application. Both are beacons, meaning they present a specific set of information securely to nearby sensors. The patient is already in pre-op, and the family member with the smartphone walks over to a self-serve kiosk that senses they are nearby. A greeting is given, a simple yet trusted identity verification is performed, and the kiosk pulls up a video of their loved one resting comfortably awaiting surgery. "We'll be back before you know it," the nurse says reassuringly to the intelligent video surveillance dome camera, knowing the patient's family is anxiously waiting. Within the hour, a notification pops up on the family's smartphone, letting them know the patient is in recovery. A return to the kiosk lets them say a quick "hello." Virtually no buttons have been pushed and no complex device registration performed; the video surveillance camera, application, and connectivity did [almost] all the work. With the outpatient census at this particular smart healthcare center being quite high, this same cycle is repeated over and over again, simultaneously and with improving optimization of
the services. A facial recognition application verifies the patient's location on exit and will speed their check-in when a follow-up visit is required. For today's challenging and increasing patient "elopements," the unplanned wandering off-premises of longer-term care individuals, the facial recognition app and a mobile notification to a security officer's smartphone become another tool in potentially saving a life. The data relating to the quality and frequency of this interaction are logged so that the smart application can reserve more healthcare center infrastructure services during peak periods.

Crime fighting intelligence

In another city, a specialized team of law enforcement professionals has just received intelligence of a new "crib" potentially nearby, where illegal narcotics and weapons are being stored. Two members of the city's gang violence reduction unit start a tour and soon receive a "hit" off a fixed surveillance camera with a built-in license plate recognition (LPR) application connected to the NCIC.¹ The detecting camera has a microcomputer connected to a NAS (network-attached storage) device. A "hot list" was recently uploaded to the NAS device after the city's crime analysis software related the vehicle registration of a known gang associate to the primary suspects. A "beacon" application notifies the nearby team, as well as the Command Center, of the alert. The camera is equipped with a "next generation" codec called Zipstream, so a clip of the vehicle passing by the camera is pushed to law enforcement on site. The camera is capable of rendering details through forensic capture, and the team sees the vehicle occupants are armed with automatic weapons. Knowing the location of its law enforcement assets, Command automatically pushes out notifications of the filtered social media chatter to the nearby detectives and tactical response team.
The cloud-based social media app has been listening for known gang language, keywords, and locations, focusing the intel on just what is most vital in this operation. A warrant is obtained, and the unit leader requests the "go-ahead" for the operation from Command. A traffic management application creates a protective radius around the site operation, freezing traffic and dispatching EMS for potential injuries, plus HAZMAT should there be toxic drug production onsite. The team executes the warrant and takes key gang members into custody.
¹ The Federal Bureau of Investigation (FBI) compares license plates against its National Crime Information Center (NCIC) database. As law enforcement agencies take advantage of advanced technologies, the opportunities for using data to help with investigations increase greatly. LPR cameras locate license plates within an image and decode them using automatic number plate recognition and character recognition.
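The hot-list step in the scenario above, matching LPR reads against a watch list, can be sketched in a few lines. This is a simplified illustration; the `normalize_plate` rules are assumptions for the sketch, and real LPR systems use far more robust fuzzy matching:

```python
def normalize_plate(raw: str) -> str:
    """Collapse common OCR ambiguities so hot-list lookups tolerate read errors."""
    plate = raw.upper().replace(" ", "").replace("-", "")
    # Treat easily confused characters as one class (O/0, I/1) -- a simplification.
    return plate.translate(str.maketrans({"O": "0", "I": "1"}))

def check_hot_list(reads, hot_list):
    """Return hot-list hits for a batch of LPR reads (camera id, plate text)."""
    wanted = {normalize_plate(p) for p in hot_list}
    return [(cam, plate) for cam, plate in reads
            if normalize_plate(plate) in wanted]
```

In the scenario, a hit from `check_hot_list` is what would trigger the "beacon" notification to the nearby unit and the Command Center.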
Each of these realistic scenarios is not only possible with a well-running IoT ecosystem; together they represent a ubiquitous fusion of social and technology development. What really makes this work? An even better question: can this scale? Can hundreds of these scenarios run simultaneously and continuously in many locations? The Internet of Things links sensors, like advanced IP cameras, together with communications, storage, infrastructure, analytics, and quality of service. To say that IoT is about just sensors and infrastructure would take us back to the days when virtually every part of a security solution required monitoring or interpretation. With the evolution of IoT, tactical profiles may be identified and potential response scenarios generated, with the most confident automation by analytical processes performed repeatedly. First responders become safer through intelligence; families stay informed and involved.

IoT prominence

Measurable outcomes of the impact of IoT are well documented. According to Gartner, IoT product and service suppliers will generate incremental revenue exceeding $300 billion in 2020; Cisco reports an increase in private sector profits of 21% and an addition of $19 trillion to the global economy, also by 2020. The McKinsey Global Institute reports that $36 trillion in operating costs across key affected industries could be impacted by IoT. In 2014, the Industrial Internet Consortium² (IIC) was formed by AT&T, Cisco, GE, IBM, and Intel; it is focused on accelerating growth by coordinating IoT ecosystem initiatives to connect and integrate objects with people, processes, and data. Why is this happening now? There are scalable improvements in each part of the IoT puzzle. Sensors like IP cameras are more powerful than ever; it takes a significant amount of consistent processing power and image quality to encode as efficiently as, for example, Zipstream, which can reduce bandwidth and storage by at least 50%.
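The bandwidth and storage impact of a codec efficiency claim like the one above can be estimated with simple arithmetic. A minimal sketch (the camera bitrate and retention values are illustrative, not taken from the text):

```python
def storage_gb(bitrate_mbps, hours_per_day, retention_days, reduction=0.5):
    """Estimate recorded-video storage for one camera, in gigabytes (decimal GB).

    bitrate_mbps: average stream bitrate before any codec optimization
    reduction: fraction saved by an efficiency feature (the text cites >= 50%)
    """
    effective_mbps = bitrate_mbps * (1.0 - reduction)
    seconds = hours_per_day * 3600 * retention_days
    return effective_mbps * seconds / 8 / 1000  # Mbit -> MB -> GB

# An assumed 4 Mbit/s camera recording 24/7 for 30 days:
# at full rate about 1296 GB; with a 50% reduction about 648 GB.
```

Multiplied across hundreds of cameras in a city deployment, a halving of per-camera storage is what makes the "can this scale?" question tractable.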
Analytics like LPR and facial recognition run more effectively in cameras with more powerful processing. At IFSEC last month, one of the industry's first cameras with built-in facial recognition was introduced. Ultra-low-light technologies like LightFinder are making video analytics possible right at the "edge" sensor. However, the sensor is no longer an edge but more like a node in the IoT network, linking consuming devices like smartphones and "wearables," while supported by infrastructure and storage in the cloud and near (or embedded in) the sensors. By collecting, storing, and analyzing data more cost-effectively, reliably, and securely, both industrial and consumer markets rely on connectivity. To streamline the processing of satellite images involved with field-testing the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot, NASA/JPL engineers developed an application that takes advantage of the parallel nature of the Amazon Web Services (AWS) workflow. AWS recently processed complex 3.2-gigapixel images to support the ATHLETE robot operations, enabling the vehicle to travel across various types of terrain, ranging from smooth surfaces to rolling hills to ruggedly steep terrain, yet also reconfigure on demand to form robot "feet." For the Mars Science Laboratory, AWS served as one of the primary data processing and delivery pipelines and "allowed us to process nearly 200,000 Cassini [satellite] images within a few hours for under $200." By connecting "everything," the number of IoT devices will be approximately seven times the number of people on earth today by 2020, according to Cisco. The continuing growth in demand from subscribers for better voice, video, and mobile broadband experiences is encouraging the industry to look ahead at how networks can be readied to meet future extreme capacity and performance demands. According to Nokia³, 10,000 times more traffic will need to be carried through all mobile broadband technologies at some point between 2020 and 2030. Nokia notes: "We made our prediction in 2010 and since then have gathered information from the market which shows that the growth we foresaw is actually happening. The need for more capacity goes hand-in-hand with access to more spectrum on higher carrier frequencies. The new 5G system needs to be designed in a way that enables deployment in new frequency bands." Growth to between ten and a hundred devices for each mobile communications user is expected; even now many people have a phone, tablet, laptop, and a few Bluetooth-enabled devices.

² The current priorities of the IIC are to build end-to-end security use cases, apply security use cases to each of the use case groups, derive requirements from each use case, identify what is common (architectural), identify what is one-off (application-specific), design a secure integration framework based on combined use cases (with the technology team), and build test beds.

The security of security

Device, network, and application security is critical to IoT's adoption.
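The 10,000x traffic projection above implies a steep but computable annual growth rate. A quick worked check:

```python
def implied_annual_growth(total_factor, years):
    """Annual growth multiplier implied by a total growth factor over `years`."""
    return total_factor ** (1.0 / years)

# 10,000x more traffic between 2020 and 2030 implies roughly
# 10000 ** (1/10), about 2.51x per year, i.e. ~151% year-over-year growth.
```

That sustained rate, not the headline multiplier, is what network capacity planning actually has to absorb each year.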
With all these devices, how can we prepare against seemingly endless security vulnerabilities? According to Good Technology Chief Executive Officer Christy Wyatt, the key question is "What data is ending up on what device and how do I protect it? Cyber security is a journey, not a destination." Deploying IP cameras capable of handling digital certificates to verify they are "trusted" non-person entities on a network will be essential to scaling IoT security. One method is to validate the identities and permissions of both the IP camera and the consumer of the digital multimedia content on the network.

³ "5G use cases and requirements," Nokia Networks FutureWorks, 2014

IoT in action

Monitoring of equipment and people for safety and security using IoT is underway at Union Pacific, the largest railroad in the United States. IoT devices like acoustic sensors and IP cameras help predict equipment failures and reduce derailment risks. These sensors are placed on or near tracks to monitor the integrity of train wheels. Union Pacific has been able to reduce bearing-related derailments, which can result in catastrophic events and costly delays, often up to $40 million in damages per incident. By applying analytics to sensor data, Union Pacific can predict not just imminent problems but also potentially dangerous developments well in advance. Train operators can be informed of potential hazards within five minutes of the detection of anomalies in bearings or tracks. Retail is an industry also undergoing significant changes using IoT innovation. The ability to detect customer behavior when customers visit a "connected retail store" results in an improved customer experience. These IoT sensors include video cameras and location "beacons" that provide in-shelf availability, inventory and merchandise optimization, loss prevention, and mobile payments. Public safety professionals, in the wake of recent events, are now closely considering the benefits of IoT devices, not only to do their jobs better but also to remain safe. The recent Public Safety Summit held by the Video Quality in Public Safety (VQiPS) working group of the Department of Homeland Security Science and Technology Directorate (DHS S&T) revealed an exciting new program called the Next Generation First Responder Program.
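The bearing-monitoring idea described above, flagging sensor readings that drift from their recent baseline, can be sketched with a rolling window. This is a deliberately simple stand-in (the window size and alert ratio are made-up parameters), not Union Pacific's actual analytics:

```python
from collections import deque

class BearingMonitor:
    """Flag acoustic readings that rise far above their recent baseline.

    Keeps a rolling window of readings per sensor and alerts when a new
    reading exceeds the window mean by a fixed ratio.
    """
    def __init__(self, window=20, ratio=1.5):
        self.window = window
        self.ratio = ratio
        self.history = {}  # sensor id -> deque of recent readings

    def reading(self, sensor_id, value):
        """Record one reading; return True if it looks anomalous."""
        hist = self.history.setdefault(sensor_id, deque(maxlen=self.window))
        anomalous = False
        if len(hist) >= self.window // 2:  # need some baseline first
            baseline = sum(hist) / len(hist)
            anomalous = value > baseline * self.ratio
        hist.append(value)
        return anomalous
```

A per-sensor alert like this is the kind of event that, in the text's example, reaches a train operator within five minutes of detection.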
"If we were able to track [first responder] status using technology tools, the on-scene commanders would know what is happening in real time," according to Paramedic Don MacGarry of Loudoun County Fire and Rescue in Virginia. "Safety is always a priority. We go into high-rise buildings, and sometimes into basements, which can turn out to be the most dangerous places to be, and we don't have communications," says Victoria Anthony, Master Firefighter, Rockville, MD Volunteer Fire Department.
Goals for the first responder of the future include:

- Being protected from hazards and having the best situation awareness possible
- Being connected to their peers and able to locate them on demand
- Being connected to their commanders and the citizens they support
Chem-bio, gas, and explosives sensors are wearable IoT devices under consideration for this program. If a police officer has a tactical picture of where an active shooter is, they will have a better chance of directing the appropriate response and saving lives. Having improved situation awareness during a fire is essential for placing assets, including knowing which side of the building is burning and what openings are available. IoT devices and their ecosystem will help first responders not only respond to an incident but control it, positively impacting the safety of our cities.
CASE STUDIES
HD and Network Video: moving public safety and schools forward in security

Adoption is the key word in high-definition video, whether it is for learning, research, or forensic investigation. Give yourself a little test: is there any growing industry that relies on video content at less than HD resolution? According to IMS Research⁴, some interesting trends are apparent, not only in the Americas but also worldwide:
• “In terms of shipments, the proportion of the world market accounted for by network cameras is forecast to rise significantly, from just 16.2% of security camera shipments in 2012 to 40% in 2017. In terms of revenue, network cameras are forecast to account for over 65.5% of the market in 2017.”
• In the education sector, revenues for analog video surveillance are forecast to decrease each year through 2017, at a compound annual rate of -10.3% (CAGR)
• In the education sector, revenues for network video surveillance are forecast to increase each year through 2017, at a compound annual rate of 19.8% (CAGR)
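Those CAGR figures can be turned into rough projections with the standard compound-growth formula:

```python
def project(revenue, cagr, years):
    """Project revenue forward at a constant compound annual growth rate."""
    return revenue * (1.0 + cagr) ** years

# Per the IMS figures above: network video in education growing at 19.8% CAGR
# roughly doubles in four years (1.198**4 is about 2.06), while analog at
# -10.3% falls to about 65% of its starting level (0.897**4 is about 0.647).
```

Seen side by side, those two curves are the quantitative version of the analog-to-IP transition the case study describes.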
Revenue often drives research; research, technology improvements; technology, ROI; and ROI, adoption. One of the most significant drivers for adoption in any market is compliance. There are both mandatory and elective measures in school safety and security, and the dial is moving toward the former. What better way to enhance design criteria than by planning for what will be required, rather than for what will need to be replaced? Recently the State of Connecticut introduced a Standard titled "Report of the School Safety Infrastructure Council" (revised and updated to 2/4/2014).
• “Mechanical surveillance is the use of mechanical or electronic devices for observation purposes, such as mirrors, closed circuit television (CCTV)”
• “the following minimum standards shall be met: At minimum, mechanical surveillance shall be used at the primary access points to the site for both pedestrian and vehicular traffic”
• [School exterior] “the following minimum standards shall be met: At minimum, mechanical surveillance shall be used at the primary points of entry.”
⁴ The World Market for CCTV & Video Surveillance Equipment – 2013 Edition (re-issued 14 June 2013)
• [Parking Areas and Vehicular and Pedestrian Routes] “At the minimum, mechanical surveillance shall be used at the primary access points to the site for both pedestrian and vehicular traffic.”
This Standard goes on to cite the required use of guidance provided by agencies like the U.S. Department of Homeland Security (DHS) Science and Technology Directorate. There is a substantial amount of guidance on the DHS site, firstresponder.gov, including the guide authored by the agency's Video Quality in Public Safety Program, the Digital Video Handbook (May 2013). The focus of this group's efforts (and document) is to deliver best practices on achieving video quality in public safety disciplines, including the school safety and security sector. The Digital Video Handbook illustrates the significance of matching a required number of imaging "pixels on target" to forensic and recognition requirements. Since many school surveillance systems do not maintain continuous observation of all cameras, it is paramount that the system be "forensic-ready," capable of accurately reviewing critical events affecting student safety with the highest visual acuity possible. Almost every use case outlined in this best practice document can only be economically achieved through network (IP) video devices. What is driving this adoption? We have seen one example of a Standard driving adoption, along with the economies of achieving visual acuity and forensic evidence through network video's higher resolution. That is simply a baseline. A number of school district safety committees, made up of building principals, facility management, and representatives from centralized jurisdictions, are putting together standardized plans for implementing controlled access rollouts. Whether it includes controlling access to prevent loss, validating entries of students, faculty, and staff, or locking down in the event of a crisis, the ability to integrate network cameras is vital to situation awareness. The time saved by pinpointing a student's entry or exit time using the integration available with network (IP) video might just save a life.
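The "pixels on target" idea can be made concrete with a little geometry: the horizontal pixel density at a target is the camera's horizontal resolution divided by the scene width its lens covers at that distance. A sketch (the lens and distance values are illustrative, and any specific recognition thresholds should be taken from the Digital Video Handbook or applicable standards):

```python
import math

def pixels_per_meter(image_width_px, hfov_deg, distance_m):
    """Horizontal pixel density ("pixels on target") at a given distance.

    Scene width at the target = 2 * d * tan(hfov / 2); density is the
    sensor's horizontal resolution spread across that width.
    """
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return image_width_px / scene_width_m

# A 1080p camera (1920 px wide) with a 60-degree lens covers a scene about
# 11.5 m wide at 10 m, giving roughly 166 px/m at the target.
```

Designers run this calculation in reverse: pick the pixel density a use case requires, then solve for the resolution, lens, and camera placement that deliver it.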
Security measures are moving forward in many schools, with expanded training for virtually everyone on campus. The greater simplification and availability that network video offers is making this mandatory in many cases. In one county in Alabama, every single staff member is to be trained on what to do not just in an intruder situation, but in other emergencies like a fire or severe weather, allowing for better communications.
Is this a force multiplier for in-house security staff and for first responders? When all six first responder disciplines are considered, and not just law enforcement, the goal becomes clearer. Fire, EMS, search and rescue, explosives and hazmat are important support sectors, and each school must be prepared to give these life-saving personnel the most capable situation awareness tools. Schools also need to do their very best to keep the first responders themselves safe during often-dangerous response scenarios. This is the cornerstone of the recently highlighted National Unified Goal, promoting safety and effective teaming with first responders. Can your school's surveillance system support video mobility requirements for multiple personnel responding in each first responder discipline? One of the basic features of network (IP) video is the ability to distribute multiple video streams over compatible wireless or wired infrastructure. The liability of a "closed" analog video surveillance system in these cases may be reason enough to adopt IP video.

"What we need is a plan" was a recent statement by one concerned school staff member at a Nevada K-12 facility. The safety and security personnel needed to assess risks and develop a Security Master Plan. Once that is completed, the technology extends and helps enable the plan. There are cautions, however, in adopting any technology, and recognizing that the plan comes first is an important step. Should the security and surveillance solution fall short in capability, flexibility, agility or the velocity of delivering the right data when needed, the plan could be obstructed. The flexibility of today's network video solutions helps move past this potentially dangerous barrier.

Is the use of network video making a difference in school security in local or state jurisdictions? In local, private schools the focus on protective child services is often driving the enhanced surveillance IP video provides.
Moving wider in geography, districts and counties, especially those in more densely populated areas, are guided or even directed by central authorities having jurisdiction (AHJs). These are often separated into Pre-K, K-12 and middle schools in areas requiring specialized protective and safety services. State jurisdictions prevail in high-population states like New York (Office of General Services) and California, supporting centralized purchasing and specification. Some cities, like New York, have a School Construction Authority providing a suite of design services to the public school system. Cases like this offer a wide opportunity for economies of scale, where the cost of a network video system is often less than that of its proprietary analog counterpart. Recent case studies make it apparent that an IT manager of the school, district or other AHJ is heavily involved. For value engineering, and for simply making use of existing infrastructure investments, this can be a significant opportunity. Network video systems are designed to leverage existing infrastructure to deliver
power, support expansion (and contraction) and, if that weren't enough, offer the value of using video data for departments other than security and safety. Sporting events at school gymnasiums outfitted with compatible network video systems can distribute video streams for entertainment purposes. Cameras in cafeteria checkout lanes verify student purchases. Network video plus access control reduces theft of today's high-value tablets and projectors. In Illinois, one city's school system has deployed a unique "virtual gymnasium" using multiple projectors to promote exercise and health in a limited space and at reduced cost. A recent trend toward gamification at the same school literally creates a fun learning experience through wirelessly connected tablets at each student workstation. The continuity of these progressive strategies could easily be compromised should loss occur. Network video solutions can spot-check entries to high-value storage areas and even integrate with radio-frequency loss prevention tags, alerting personnel to unauthorized technology removal. Every part of the ecosystem, from designer to first responder to user, makes the case for IP video even stronger with today's beneficial technologies.
A day in the life of security and public safety on campus

Examining a day in the life of a campus (corporate, education or other) can reveal some fascinating aspects of the safety and operational requirements you may not have known about. Tracking the activities of a safety and security officer is a well-known process for helping them deliver vital services in an active environment. The following represents a simple timeline of potential events in a day at a typical campus.

02:00
Early morning at a university in the Midwest, a vehicle breaches a staff parking entrance; the driver parks near a poorly lit loading dock and forces a service door open. Responding security officers already had a head start: the vehicle's license plate was not in the school's student, faculty, staff or contractor database, and the gate camera sent an alert immediately and directly. The video intercom at the service door also showed the door breached and the suspect vehicle, with a waiting accomplice still inside. The alert campus command center operator dispatches law enforcement, which arrives immediately after campus security, apprehending the suspects.

05:00
The campus day begins with the arrival of a fresh security officer shift that checks in at command. The team reviews the last shift's events with a quick overview of several video clips, providing the incoming safety and security officers a visual overview of the previous day's issues and additional intelligence to keep their campus safe. The usual main-entrance monitoring and screening overviews, classes letting out, students returning to access-controlled dormitories, evening deliveries, automated faculty escorts and the one overnight breach are reviewed in minutes using saved, metadata-based video management system searches. The team goes on its way, beginning tours and manning posts. All officers are equipped with tablets capable of real-time video viewing, alarm review, dispatched incidents and direct messaging to the local public safety answering point (PSAP) for all first responder categories. The term "first responder" is now well defined by the US Department of Homeland Security Science and Technology Directorate (S&T) web site, firstresponder.gov, and six disciplines are represented: EMS, law enforcement, fire, explosives, HAZMAT and search & rescue. All these disciplines contribute to improved campus safety and security, and their diversity is well represented in S&T's guidance document "Digital Video Handbook," available on www.firstresponder.gov along with useful policy and best practices.

07:00
Staff, student and faculty arrivals build continuously in the early morning, and the command center operators and the safety/security director are all at attention and monitoring the activity. The use of 360°, HDTV network cameras achieves a panoramic view of school ingress and egress areas, with a useful overview of controlled access points, main student entry and reception. "My field of view has been increased tenfold," says the safety and security director when asked about the system's enhanced video surveillance. "If I don't get you coming in, I'm going to get you going out."

12:00

The campus lunch break has the usual students eating both indoors and outdoors, together with a number of activity tables on the common grounds. Seeing that the break has just started, the command center uses the campus public address system and directs students which way to go for the lunch break's events. The video surveillance system confirms they've got the message as activity builds.

16:00

The day is just about over for most of the student population, but not for staff, faculty and an incoming evening shift that prepares for a review of the previous day's and shift's events, again made simple through intelligent searches and embedded applications inside the network cameras. These "apps," whether license plate detection, cross-line detection, student activity mapping or people counting, literally turn the network cameras into domain awareness sensors, relaying a steady stream of data available on demand. The incoming security crew knows when to expect student exit activity to decrease through the video surveillance "heat" or activity mapping tools. They wait until after this time to conduct the shift transition to avoid any missed incidents.

20:00

The evening's student dorm access control entries continue, and security officers on patrol are ready to receive alerts of doors left or propped open.
The door entries are silent until a door-left-open signal buzzes, automatically resetting on door close. Safety and security command's responsibility is to review these door breaches on video and verify that no suspicious activity or "piggybacking" has taken place. The simplified alarm "histogram" guides the operator to the video associated with the door alarm. All's well: it was just a couple of students carrying in a replacement microwave oven. Safe waiting areas around the campus have a good amount of pedestrian traffic as students and faculty board transport at prearranged locations around campus. Each of these areas has enhanced LED lighting, 360° HDTV network cameras, area video analytics, wireless connectivity and an audio system. Should someone be walking toward the waiting area, the LED lights flash, increasing
their output as a safety indication and alerting anyone already waiting there. That person then has the option to use a smartphone or call box as an alert device, should they feel unsafe. This time of evening finds faculty and staff walking to their vehicles and using a "video escort" application on their smartphones. Should they confirm an incident, passively report, or fail to check in while they walk to their cars or dorms, campus command is immediately notified and nearby cameras are activated. The system has preprogrammed camera locations related to the alert's location, reported by the user's smartphone or tablet. About 20 students and faculty are using this application after hours, and the system is ready to process any alerts.

22:00

Our "day in the life" concludes with the security staff verifying cafeteria and facility supply deliveries. Each of the delivering vendors has checked in online prior to delivery, entering their commercial trailer plate and approximate delivery window. The embedded license plate recognition system automatically detects the plate on entry and exit, delivering an exception alert should the plate either not be in the database or fail to exit. The safety/security officers work together with command and make a visual verification of the delivery into the transition space. The vendor does not have to enter the secured building space for delivery, simplifying and shortening the process for both parties. An HDTV network camera monitors the delivery transition area, and the camera's embedded video motion detector is active after hours. Each of the above examples is technology in action, enhancing safety on campus and delivering tactical advantages to campus resources. How do these solutions work, and can the designer easily specify them? Four categories used in our "day in the life" example include the following:
• Design of video surveillance systems for forensic video "readiness"
• Campus video mobility, supporting enhanced situation awareness and response
• Image quality and video analytics supporting improved response to off-normal conditions
• The maturity of license plate recognition (LPR) and how campus surveillance systems benefit

Forensic Video Readiness in Campus Security

In campus security, recorded video and related feature data of events is one of the first and most important resources for incident review. But what is available in this
data, and what are the opportunities for command center personnel, first responders, operations management and safety teams? To understand this, we first need to define what is included in this "forensic" video data and then apply a process for its use.

Digital Multimedia Content is more than just video data. It is digital data representing audio content, video content, metadata, location-based information, relevant IP addresses, recording time, system time, and any other information attached to a digital file. All of this information is valuable to the campus security professional, either in real time or for forensic use. Applying an understanding of the effect of light on the scene can improve the image quality of the video content. Advances in camera technology that produce usable color or "find the light" in dark or low-illumination scenes are improving forensic video content. The design of the video solution to provide maximum coverage is of great importance for systems used for forensic review. Using standards-based, high-image-quality sources like HDTV IP cameras, and technologies that accommodate difficult lighting, will improve the recorded image quality.

Video analytics are of interest to the campus safety and security professional because they can perform complex repetitive functions, such as object detection and recognition, simultaneously on many channels of video. These tools can provide improved searches based on object characteristics and behavior, using metadata incorporating object characteristics such as color, size, trajectory, location-based information, relevant IP addresses, recording time and system time. Video analytics embedded in the network camera represent a growing segment where applications run at the "edge": values or decisions based on recognition are available from the network camera itself with minimal software.
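A metadata-based forensic search of the kind described above can be sketched in a few lines. This is a minimal illustration only: the record fields (`camera_ip`, `object_color`, and so on) and the `search_events` function are hypothetical, not the schema or API of any particular video management system.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical analytic metadata record; real VMS schemas vary by vendor.
@dataclass
class ObjectEvent:
    camera_ip: str
    recorded_at: datetime
    object_color: str      # dominant color reported by the analytic
    object_size: float     # relative size in frame, 0.0 to 1.0
    trajectory: str        # e.g. "left-to-right"

def search_events(events, color=None, min_size=0.0, start=None, end=None):
    """Filter recorded metadata by object characteristics and time window."""
    results = []
    for e in events:
        if color is not None and e.object_color != color:
            continue
        if e.object_size < min_size:
            continue
        if start is not None and e.recorded_at < start:
            continue
        if end is not None and e.recorded_at > end:
            continue
        results.append(e)
    return results
```

The point of the sketch is that the search runs against stored feature data, not the video frames themselves, which is why such queries return in seconds.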
One popular example that can report student behavior at main entry/egress points uses a "people counter," where the network camera and built-in app return the number of people passing into a zone, through a boundary, or into the field of view. This can provide criteria on which to increase camera frame rate and stored resolution during the time of highest traffic. Another popular video-recognition solution, which runs either as an embedded network camera application or in the video management system, is fixed License Plate Recognition and Capture (LPR/LPC). This specialized app captures license plate information for immediate processing by LPR software. The software may run in a rapid-acquisition mode and compare plates later against an approved list, or perform the recognition sequentially as the vehicles pass within the camera's field of view. In either case, LPR is a mature application embraced by campus safety for
entry and exit locations. The trend to embed this function reduces cost and allows greater flexibility.

"Heat" activity mapping provides a visual, color-coded summary showing how students, faculty and staff move about a campus. This type of video content analysis can improve safety by analyzing the flow of pedestrian and vehicular traffic on campus. Understanding personnel traffic flow will often help camera placement and, ultimately, the video forensic-review process. Integrated surveillance cameras and real-time command and control will improve a campus safety and security operator's ability to detect incidents. There are numerous resources for the campus professional to improve safety, operations and communications. Local law enforcement, first responders and fusion centers, often run by state police agencies, have outreach teams and can be the best resource you never knew about. There are a number of key subjects and issues that a campus safety and security professional needs to consider prior to the deployment of any of the aforementioned technologies. Some of these include:
• The school security risk assessment model
• Steps to successful Security and Safety Master Planning
• Crime Prevention Through Environmental Design and Lighting – how to create a safe environment
• Lock it down or free egress – the intelligent debate in school crisis environments
• Loss prevention is more important than you think
• Video, mobility, emergency messaging, video escorting and deploying a "See Something, Say Something" program
• Forming a digital evidence repository as a tool for first responders and school safety

A balance of operational responsibilities, appropriate response, technology engagement and, hopefully, personal growth and insight into their service delivery marks the day-to-day activities on campus for the safety and security professional.
STANDARDS
Navigating the Security and Public Safety Industry: from Associations to Standards

If you think about it, without standards to govern the products and services we manufacture and buy, industries might collapse. Cars wouldn't run. Buildings wouldn't be "intelligent" or save energy. Service people would be at a loss as to how to fix things or even get the correct replacement parts. The electronic security sector is no different. Without standards, cameras and video management software couldn't communicate with each other, integrators wouldn't know how to install systems, and end users wouldn't be able to view and search their video for forensic evidence. So who decides what standards should be adopted? And how do those standards permeate the various industries that depend on them to protect their businesses?

Normative vs. informative standards – requirements vs. recommendations

Before we delve into those questions, it's important to understand that there are two kinds of standards. Standards with a capital "S" – also known as normative Standards – contain specific requirements that must be followed. Standards with a small "s" – also known as informative standards – are best practice guidelines for achieving a specific security goal. One example of a normative Standard would be the specifications published by the Society of Motion Picture and Television Engineers (SMPTE) that outline what qualifies as an HDTV camera: the image must conform to a 16:9 widescreen format, contain 720 or 1080 scan lines, stream at 30 or 60 frames per second, and deliver high color fidelity. An example of an informative standard would be the recommendation to install redundant local archiving as a fail-safe in case network connectivity to the remote server is disrupted.

Who creates standards?

Normative Standards are created by accredited Standards Developing Organizations (SDOs). There are a host of SDOs that play an important role in shaping electronic security.
Here are just a few of them:

ASIS International

In the physical security and security applied sciences end-user community, the largest global accredited SDO is ASIS International. It bases its comprehensive educational programs of study for the CPP (Certified Protection Professional) and PSP (Physical Security Professional) credentials on industry Standards and
guidelines. ASIS has organized its membership into regional chapters, as well as vertical markets and councils, to apply each domain's Standards. In the retail market, both the National Retail Federation and ASIS International work together on interpreting the significance of the PCI-DSS Standard, which governs the payment card data security process, including prevention, detection and appropriate reaction to security incidents. Physical security, facility security and advanced security solutions like explosives detection have led ASIS to organize members into the Physical Security, Security Architecture & Engineering and the newly formed Security Applied Sciences (SAS) councils. SAS and the IT SDO (ISC)² are now working together to deliver advanced guidance on trending solutions. Recently, at the ASIS Annual Seminar and the co-located (ISC)² congress, ASIS International Education drew high interest during sessions on Mobile Device Forensics, Explosives and Contraband Detection, and Active Shooter Response.

SIA

The Security Industry Association (SIA) has evolved into a significant provider of focused collaborations for industry manufacturers and solution providers. If security and safety devices are interoperable, they are more easily deployed, and solutions can be scalable, agile and elastic, meeting end user requirements. SIA also provides a lobbying point, bringing policy makers and stakeholders together to address federal and state initiatives affecting the security industry. SIA Education is using trending industry Standards to deliver classes on UltraHD video and Near Field Communication (NFC) at industry events. NFC-based Apple Pay and Google Wallet services allow consumers to "tap and pay," while the same NFC technology is turning smartphones into electronic access control credentials.
BICSI

In the building industry, Building Industry Consulting Service International (BICSI) supports the advancement of the information and communication technology (ICT) community, which covers voice, data, electronic safety and security, project management, and audio/video technologies. BICSI recently published a data center design Standard specifying how to properly engineer a data center. BICSI also recently published an Electronic Safety and Security (ESS) Standard, becoming the first SDO to unify physical security, physical infrastructure and safety in a single document.

ESA

Another important SDO is the Electronic Security Association (ESA), whose membership includes independent, national and global systems integrators. One of its charters is to provide extensive vertical industry education to its members. Recently, ESA has taken a leadership role in developing Electronic Security Guidelines for Schools to ensure the safety of children, teachers and school personnel.
CSAA

The Central Station Alarm Association (CSAA) represents protection service providers, users and bureaus certified by Nationally Recognized Testing Laboratories like UL. CSAA activities and Standards encourage industry practices that lead to life-saving false alarm reduction and improved central station performance. Through CSAA's ASAP to the PSAP program, the second largest Next Generation 911 (NG911) center, in the City of Houston, can often process in 15 seconds alarms that previously took several minutes.

LEVA

In the law enforcement sector, the Law Enforcement and Emergency Services Video Association (LEVA) not only publishes best practice guidelines for conducting forensic video investigation but also offers a rigorous certification program for Forensic Video Analysts and Forensic Video Technicians. Nowhere was the value of that training more evident than in the investigation of the 2011 Vancouver Stanley Cup riot, which required the forensic review of thousands of hours of video evidence.

SISC

In the electronic security industry, there is a unique working group known as the Security Industry Standards Council (SISC). It reviews and coordinates the standards activities of accredited member SDOs, identifies related organizations with relevant expertise for SDO assistance, and coordinates their individual standards projects.

How do standards extend into the user community?

Standards come into play on multiple levels in a security scenario. Take, for instance, an after-hours jewelry store robbery. The robbery is detected by the near-perfect processing of three alarm sensors. A glass-break detector is activated by a breach in the glass panel of the store's front door. A remote central station monitors the audio level at the store and is able to recognize the sound of multiple people in the store. The central station also monitors the video cameras on the premises as a third verification of the burglars in action.
The operator can now call law enforcement with full details of the situation and let officers know how many suspects are on the premises so that all participants can be apprehended. Ensuring that all these critical systems function properly, and in concert with one another, requires adherence to multiple Standards. For instance, there are Standards that govern the alarm transmission and the video verification, including bit rates. There are protection services Standards adopted by the Central Station Alarm Association (CSAA) that are certified by a CSAA-approved Nationally Recognized Testing Laboratory (NRTL), such as Underwriters Laboratories (UL), FM and ETL. There are also image quality Standards that make it possible to identify the suspects, such as HDTV and Ultra High Definition, also known in the industry as 4K and 8K, whose specifications are governed by the Consumer Electronics Association.
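The three-sensor verification described in the jewelry store scenario amounts to a simple escalation rule: dispatch only when independent sensors corroborate one another. The sketch below is an illustrative model of that logic only; the function name, thresholds and return values are assumptions, not part of any CSAA Standard.

```python
# Hypothetical multi-sensor alarm verification. Escalates to dispatch only
# when three independent detections (glass break, audio, video) agree.
def verify_intrusion(glass_break: bool, audio_voices: int, video_persons: int):
    """Return (action, suspect_estimate) for a central station operator."""
    confirmations = 0
    if glass_break:
        confirmations += 1          # perimeter sensor tripped
    if audio_voices >= 1:
        confirmations += 1          # central station hears people on premises
    if video_persons >= 1:
        confirmations += 1          # operator sees suspects on camera
    if confirmations >= 3:
        # Fully verified: dispatch with a suspect count for responding officers.
        return ("dispatch", max(audio_voices, video_persons))
    if confirmations >= 1:
        return ("review", 0)        # single sensor: operator review, reduces false alarms
    return ("none", 0)
```

Requiring agreement before dispatch is the same principle behind the false alarm reduction efforts the CSAA section describes.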
And there are video compression Standards such as H.264 and H.265 (also known as High Efficiency Video Coding, or HEVC) that govern how the video is compressed and streamed so as to reduce bandwidth consumption without degrading image quality.

If you think about it, standards are the fundamental building blocks of what we make and do. For instance, energy Standards designate the specific formulation of unleaded gas and diesel fuel. Standards also apply to generally accepted practices, such as the ringing of a bell to signal that the school day is about to start. The first is a normative Standard (note the capital "S") published by a formal standards developing organization (SDO), in this case the EPA. The other is an informative standard (lowercase "s"), a best practice guideline for a particular sector, in this case education. Both types of standards can be found in just about every major industry. For instance, security system manufacturers rely on technology Standards like H.264 compression and HDTV resolution when developing products to ensure component interoperability. A best practice standard, on the other hand, might be a recommended guideline to integrators on how best to optimize system performance. For the healthcare industry, privacy Standards like HIPAA govern the handling of patient data, while a best practice standard might advise healthcare professionals on some of the best ways to triage patients. Another might include the use of ultra-low-light, full-motion video capture ("Lightfinder") for monitoring patients in sleep centers or Neonatal Intensive Care Units (NICUs), where visual acuity and color reproduction can indicate when infants have apnea (pauses in breathing that can cause oxygen deficiency) or jaundice (yellowing of the skin). For critical infrastructure, operational Standards like CIP-014-1 address the physical security of energy facilities.
Best practice guidelines, on the other hand, might clarify ways operators can more rapidly and efficiently diagnose malfunctions before full failure. An energy provider in the Northeast uses thermal imaging to remotely detect hot-running transformers and switchgear. Public safety professionals, in the wake of recent events, are now closely examining surveillance video retention policies, best practices and agency Standards. Are we keeping enough video content to perform a comprehensive investigation? Do we have a policy in place to discard video after a set period of time? This was a recent topic of discussion at a Public Safety Summit held by the Video Quality in Public Safety (VQIPS) working group of the Department of Homeland Security Science and Technology Directorate (DHS S&T). It was discovered that some agencies in the Los Angeles area require at least 90 days of Digital Multimedia Content (video plus metadata), while an agency in Little Rock, Arkansas must dispose of video kept longer than 120 days.
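The two retention regimes just mentioned, a 90-day minimum in one jurisdiction and a 120-day disposal deadline in another, reduce to a simple policy check per stored clip. The sketch below is illustrative only; the policy table and function are hypothetical, and real agency policies contain many more conditions (litigation holds, evidence flags, and so on).

```python
from datetime import date

# Illustrative jurisdiction rules drawn from the text above; real agency
# retention policies are more detailed and change over time.
POLICIES = {
    "los_angeles": {"min_keep_days": 90, "max_keep_days": None},
    "little_rock": {"min_keep_days": None, "max_keep_days": 120},
}

def retention_action(jurisdiction: str, recorded_on: date, today: date) -> str:
    """Return 'retain', 'eligible', or 'purge' for a stored video clip."""
    policy = POLICIES[jurisdiction]
    age = (today - recorded_on).days
    if policy["max_keep_days"] is not None and age > policy["max_keep_days"]:
        return "purge"      # disposal is mandatory past the deadline
    if policy["min_keep_days"] is not None and age < policy["min_keep_days"]:
        return "retain"     # still inside the required retention window
    return "eligible"       # retention satisfied; disposal is discretionary
```

Encoding the policy this way makes the contrast concrete: the same 151-day-old clip that Los Angeles rules treat as merely eligible for disposal, Little Rock rules require to be purged.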
A newly formed public safety policy team for VQIPS will be working together in 2015 on this very topic, which will no doubt positively impact the safety of our cities.
The following diagrams illustrate the participation of security and public safety ecosystem entities in various industry associations and Standards Developing Organizations.
UltraHD and the video surveillance industry

The ability of modern IP cameras to deliver image quality that contributes to successful investigations would not be possible without today's high-definition Standards like HDTV and Ultra High Definition (UHD). UHD includes both 4K and 8K formats. The Consumer Electronics Association (CEA) defines UHD as having an aspect ratio of at least 16:9 and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840×2160 pixels. The way we currently use security video parallels the way many consumers view their favorite television programs. Netflix has just announced that popular shows like "House of Cards" and "The Blacklist" are available via 4K video streaming. Subscribers will pay a premium for this service, but it is another video Standard that makes it possible. Most surveillance video today is encoded with the Advanced Video Coding (AVC) or H.264 Standard, Baseline Profile. Today's UHD video streaming services for entertainment and gaming are making use of an even more efficient encoding Standard, High Efficiency Video Coding (HEVC). Recently, several car races have been streamed in UHD and encoded with HEVC, delivering amazing resolution to both broadcasters and mobile users. HEVC made a big introduction at this year's National Association of Broadcasters (NAB) event. HEVC's significant efficiency gain over existing H.264 solutions, often 40%, makes it a compelling answer to the substantially increased bandwidth required for 4K. Adoption also drives complementary delivery Standards: Dynamic Adaptive Streaming over HTTP (MPEG-DASH) is used to deliver HEVC- and AVC-encoded content adaptively over ordinary web infrastructure.
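The roughly 40% efficiency figure cited above translates directly into network planning arithmetic. The sketch below is a back-of-the-envelope estimate only: the 16 Mbit/s H.264 4K baseline and the 30% network headroom are illustrative assumptions, not figures from any standard, since real bitrates depend heavily on scene content, frame rate and encoder settings.

```python
# Back-of-the-envelope 4K bandwidth estimate. The ~40% savings figure is
# from the text; the 16 Mbit/s H.264 baseline is an illustrative assumption.
H264_4K_MBPS = 16.0
HEVC_SAVINGS = 0.40     # HEVC often needs ~40% less bandwidth than H.264

def hevc_bitrate(h264_mbps: float, savings: float = HEVC_SAVINGS) -> float:
    """Estimated HEVC bitrate for the same content and quality."""
    return h264_mbps * (1.0 - savings)

def cameras_per_gigabit(per_camera_mbps: float, headroom: float = 0.7) -> int:
    """How many such streams fit on a 1 Gbit/s link, reserving 30% headroom."""
    return int((1000.0 * headroom) // per_camera_mbps)
```

Under these assumptions, moving a 16 Mbit/s H.264 4K stream to HEVC drops it to about 9.6 Mbit/s, raising the number of cameras a single gigabit link can carry.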
1. What are the advantages for a big company using 4K video for surveillance? a. The widespread deployment of 4K HDTV video surveillance cameras
for a large user not only provides increased opportunities for improved investigations, but also business intelligence. For example, personnel and customer paths are more accurately traced, and the resolution is high enough to often support multiple content analysis opportunities like people counting and vehicle plate recognition.
2. What are some of the pieces they need -‐-‐ high-‐end computers, 4K screens, 4K
video cams, and...? a. The most important consideration in 4K video surveillance is the
source, usually an IP video 4K HDTV camera. A 4K IP video camera equipped with solid state internal storage like an SD Card can retain several hours of recorded video, even if recording server and display should fail.
Page 32 of 71
b. The second most important device is a Network-‐Attached Storage unit, usually located nearby a group of IP cameras and providing redundant recording and the opportunity to reduce bandwidth on the network.
c. The recording and application server must be powerful enough to store, index and conduct searches of the 4K video content.
d. 4K displays are especially important if the images will be viewed on larger displays. For example, given a viewing distance of 10’, the approximate display resolution required will increase with display size. At the 10’ viewing distance, a 36” display will require at least 720p HDTV resolution, 60” display will require at least 1080p and 100” will require 4K5.
3. Which companies already provide security surveillance in 4K, or have plans to? Who should?

a. On 4/1/2014, Axis Communications announced its first 4K-resolution camera as part of a new compact bullet-style series. The AXIS P1428-E Network Camera features a resolution four times that of HDTV 1080p and is well suited to overlooking large areas such as parking lots and public squares while still capturing fine detail. Other participants in the ONVIF video interoperability forum have introduced similar cameras, including the Dinion IP ultra 8000 MP from Bosch and the Samsung NX1 28.2 Smart 4K Camera. Sony has demonstrated a multipurpose 4K sensor for future IP video products.
4. What are the challenges in 4K for surveillance?

a. The network architect or IT designer may decide not to use network-attached storage devices, requiring the network infrastructure to carry the 4K streams all the way to the recording server. Internal and distributed storage provides redundancy and efficient video stream management.
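The trade-off can be sketched numerically; the per-stream bitrates here are illustrative assumptions:

```python
# Aggregate bandwidth the core network must carry to a central recording
# server, with and without edge NAS units absorbing the full-rate streams.
# Per-stream bitrates are illustrative assumptions.

FULL_4K_MBPS = 16.0   # assumed full-resolution 4K stream per camera
PROXY_MBPS = 2.0      # assumed lower-rate proxy/monitoring stream

def core_bandwidth_mbps(cameras: int, edge_nas: bool) -> float:
    """Traffic reaching the recording server for a given camera count."""
    if edge_nas:
        # Full-rate video is recorded at the edge NAS; only proxy streams
        # traverse the core network.
        return cameras * PROXY_MBPS
    return cameras * FULL_4K_MBPS

print(core_bandwidth_mbps(50, edge_nas=False))  # 800.0 Mbps
print(core_bandwidth_mbps(50, edge_nas=True))   # 100.0 Mbps
```

Under these assumptions, edge recording reduces the core load for 50 cameras by a factor of eight, which is the redundancy-plus-bandwidth argument made above.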
5. Can 4K video be used in a court proceeding to identify a suspect more readily than HD? Why or why not?

a. All HDTV resolutions (720p, 1080p and 4K) provide improved opportunities to identify an individual, vehicle or object, both in real-time observation and in forensic review. Whether a video source can support identification of a person or vehicle of interest depends not only on resolution, but also on the imager, image processing, lens, illumination and compression efficiency. With all other parameters equal, 4K provides four times the resolution of 1080p HDTV video sources.

5 Digital Trends, 3/8/2013, “720p vs. 1080p: Can You Tell The Difference Between HDTV Resolutions?”
UltraHD Resolutions
IoT, Sensors and Analytics

The significant technology expansion that's about to happen

The Internet of Things has truly changed the technology landscape; many of the things we only dreamt about a few short years ago are now commonplace. As IoT converges with sensors and analytics, it is evident that the technology landscape is poised to change yet again, and that change will impact industries across the board. It is not hard to imagine scenarios that are probably just around the corner. In the not-too-distant future, an interesting set of events takes place in a single day.

Virtual visit

A family checks in at medical reception for one member's outpatient procedure. The patient is given an RFID tag, and a family member's smartphone NFC function is activated via the hospital's patient care application. Both are beacons, meaning they present a specific set of information securely to nearby sensors. The patient is already in pre-op, and the family member with the smartphone walks over to a self-serve kiosk that senses they are nearby. A greeting is given, a simple yet trusted identity verification is performed, and the kiosk pulls up a video of their loved one resting comfortably awaiting surgery. "We'll be back before you know it," the nurse says reassuringly to the intelligent video surveillance dome camera, knowing the patient's family is anxiously waiting. Within the hour, a notification pops up on the family's smartphone, letting them know the patient is in recovery. A return to the kiosk lets them say a quick "hello." Virtually no buttons have been pushed and no complex device registration performed; the video surveillance camera, application and connectivity did [almost] all the work. With the outpatient census at this particular smart healthcare center quite high, this same cycle is repeated over and over again, simultaneously and with improving optimization of the services. A facial recognition application verifies the patient's location on exit and will speed check-in when a follow-up visit is required. For today's challenging and increasing patient "elopements," the unplanned wandering off-premises of longer-term care individuals, the facial recognition app and a mobile notification to a security officer's smartphone become another tool that can potentially save a life. Data on the quality and frequency of this interaction are logged so that the smart application can reserve more healthcare center infrastructure services during peak periods.

Crime fighting intelligence

In another city, a specialized team of law enforcement professionals has just received intelligence of a new "crib" potentially nearby, where illegal narcotics and weapons are being stored. Two members of the city's gang violence reduction unit start a tour and soon receive a "hit" from a fixed surveillance camera with a built-in license plate recognition (LPR) application connected to the NCIC.[6] The detecting camera has a microcomputer connected to a NAS (network-attached storage) unit. A "hot list" was recently uploaded to the NAS device after the city's crime analysis software related the vehicle registration of a known gang associate to the primary suspects. A "beacon" application notifies the nearby team, as well as the Command Center, of the alert. The camera is equipped with a "next generation" codec called Zipstream, so a clip of the vehicle passing the camera is pushed to law enforcement on site. The camera is capable of rendering details through forensic capture, and the team sees the vehicle occupants are armed with automatic weapons. Knowing the location of its law enforcement assets, Command automatically pushes out notifications of the filtered social media chatter to the nearby detectives and tactical response team.
The cloud-based social media app has been listening for known gang language, keywords and locations, focusing the intel to just what is most vital in this operation. A warrant is obtained and the unit leader requests the “go-ahead” for the operation from Command. A traffic management application creates a protective radius around the site operation, freezing traffic and dispatching EMS for potential injuries and HAZMAT should there be toxic drug production onsite. The team executes the warrant and takes key gang members into custody.
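The hot-list match at the core of this scenario reduces to a normalize-and-look-up step, sketched below; the plate values and list contents are hypothetical:

```python
# Minimal sketch of an edge LPR hot-list check: the camera's LPR application
# normalizes a decoded plate and tests it against a hot list uploaded to the
# local NAS. Plate values here are made up for illustration.

def normalize(plate: str) -> str:
    """Strip separators and case so OCR output matches list entries."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

hot_list = {normalize(p) for p in ["ABC-1234", "XYZ 987"]}  # hypothetical

def check_plate(decoded: str) -> bool:
    """True if the decoded plate should trigger a beacon notification."""
    return normalize(decoded) in hot_list

print(check_plate("abc 1234"))  # True -> push clip and notify the team
print(check_plate("DEF-5555"))  # False -> no action
```

Normalizing before the set lookup matters because OCR output and registry records rarely agree on spacing, hyphenation or case.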
6 The Federal Bureau of Investigation (FBI) compares license plates against its National Crime Information Center (NCIC) database. As law enforcement agencies take advantage of advanced technologies, the opportunities for using data to help with investigations increase greatly. LPR cameras locate license plates within an image and decode them using automatic number plate recognition and character recognition.
Each of these realistic scenarios is not only possible with a well-running IoT ecosystem; together they represent a ubiquitous fusion of social and technological development. What really makes this work? An even better question is: can this scale? Can hundreds of these scenarios run simultaneously and continuously in many locations? The Internet of Things links sensors such as advanced IP cameras together with communications, storage, infrastructure, analytics and quality of service. To say that IoT is just about sensors and infrastructure would take us back to the days when virtually every part of a security solution required human monitoring or interpretation. With the evolution of IoT, tactical profiles may be identified and potential response scenarios generated, with the most confident responses automated by analytical processes performed repeatedly. First responders become safer through intelligence; families stay informed and involved.
IoT prominence

Measurable outcomes of the impact of IoT are well documented. According to Gartner, IoT product and service suppliers will generate incremental revenue exceeding $300 billion in 2020; Cisco projects a 21% increase in private-sector profits and a $19 trillion addition to the global economy, also by 2020. The McKinsey Global Institute reports that $36 trillion in operating costs across key affected industries could be impacted by IoT.
In 2014, the Industrial Internet Consortium[7] (IIC) was formed by AT&T, Cisco, GE, IBM and Intel, and is focused on accelerating growth by coordinating IoT ecosystem initiatives to connect and integrate objects with people, processes and data.

Why is this happening now? There are scalable improvements in each part of the IoT puzzle. Sensors such as IP cameras are more powerful than ever; it takes a significant amount of consistent processing power and image quality to encode as efficiently as, for example, Zipstream, which can reduce bandwidth and storage by at least 50%. Analytics such as LPR and facial recognition run more effectively in cameras with more powerful processing. At IFSEC last month, one of the industry's first cameras with built-in facial recognition was introduced. Ultra-low-light technologies like LightFinder, introduced last year, are making video analytics possible right at the "edge" sensor. However, the sensor is no longer an edge but more like a node in the IoT network, linking consuming devices such as smartphones and "wearables" while supported by infrastructure and storage in the cloud and near (or embedded in) the sensors. Both industrial and consumer markets rely on connectivity to collect, store and analyze data more cost-effectively, reliably and securely.

To streamline the processing of satellite images involved with field-testing the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot, NASA/JPL engineers developed an application that takes advantage of the parallel nature of the Amazon Web Services (AWS) workflow.
AWS recently processed complex 3.2-gigapixel images to support ATHLETE robot operations, enabling the vehicle to travel across various types of terrain, ranging from smooth surfaces to rolling hills to ruggedly steep ground, yet also reconfigure on demand to form robot "feet." For the Mars Science Laboratory, AWS served as one of the primary data processing and delivery pipelines and "allowed us to process nearly 200,000 Cassini [satellite] images within a few hours [for] under $200." By connecting "everything," the number of IoT devices will be approximately seven times the number of people on Earth by 2020, according to Cisco. The continuing growth in demand from subscribers for better voice, video and mobile broadband experiences is encouraging the industry to look ahead at how networks can be readied to meet future extreme capacity and performance demands.
7 The current priorities of the IIC are to build end-to-end security use cases; apply security use cases to each of the use-case groups; derive requirements from each use case; identify what is common (architectural) and what is one-off (application-specific); design a secure integration framework based on combined use cases (with the technology team); and build test beds.
According to Nokia[8], 10,000 times more traffic will need to be carried over all mobile broadband technologies at some point between 2020 and 2030. Nokia made this prediction in 2010 and has since gathered market data showing that the growth it foresaw is actually happening. The need for more capacity goes hand in hand with access to more spectrum on higher carrier frequencies, and the new 5G system needs to be designed in a way that enables deployment in new frequency bands. We will see growth to between ten and a hundred devices for each mobile communications user; even now, many people have a phone, tablet, laptop and a few Bluetooth-enabled devices.

The security of security

Device, network and application security is critical to IoT's adoption. With all these devices, how can we prepare against seemingly endless security vulnerabilities? According to Good Technology Chief Executive Officer Christy Wyatt, the key question is "What data is ending up on what device, and how do I protect it?" Cyber security is a journey, not a destination. Deploying IP cameras capable of handling digital certificates, to verify that each is a "trusted" non-person entity on the network, will be essential to scaling IoT security. One method is to validate the identities and permissions of both the IP camera and the consumer of the digital multimedia content on the network.

IoT in action

Union Pacific, the largest railroad in the United States, is already using IoT to monitor equipment and people for safety and security. IoT devices such as acoustic sensors and IP cameras help predict equipment failures and reduce derailment risks. These sensors are placed on or near tracks to monitor the integrity of train wheels. Union Pacific has been able to reduce bearing-related derailments, which can result in catastrophic events and costly delays, with up to $40 million in damages per incident.
By applying analytics to sensor data, Union Pacific can predict not just imminent problems but also potentially dangerous developments well in advance. Train operators can be informed of potential hazards within five minutes of the detection of anomalies in bearings or tracks. Retail is another industry undergoing significant change through IoT innovation. The ability to detect customer behavior when shoppers visit a "connected retail store" results in an improved customer experience.
8 “5G use cases and requirements,” Nokia Networks FutureWorks, 2014
These IoT sensors include video cameras and location "beacons" that provide in-shelf availability, inventory and merchandise optimization, loss prevention and mobile payments.

In the wake of recent events, public safety professionals are now closely considering the benefits of IoT devices, not only to do their jobs better, but also to remain safe. The recent Public Safety Summit held by the Video Quality in Public Safety (VQIPS) working group of the Department of Homeland Security Science and Technology Directorate (DHS S&T) revealed an exciting new program called the Next Generation First Responder Program. "If we were able to track [first responder] status using technology tools, the on-scene commanders would know what is happening in real time," according to Paramedic Don MacGarry of Loudoun County Fire and Rescue in Virginia. "Safety is always a priority. We go into high-rise buildings, and sometimes into basements, which turn out to be sometimes the most dangerous places to be, and we don't have communications," says Victoria Anthony, Master Firefighter, Rockville (MD) Volunteer Fire Department. Goals for the first responder of the future include:
- Be protected from hazards and have the best situational awareness possible
- Be connected to their peers and able to locate them on demand
- Be connected to their commanders and the citizens they support
Chem-bio, gas and explosives sensors are wearable IoT devices under consideration for this program. If a police officer has a tactical picture of where an active shooter is, they will have a better chance of directing the appropriate response and saving lives. Improved situational awareness during a fire is essential for placing assets, including knowing which side of the building is burning and what openings are available.
IoT devices and their ecosystem will help first responders not only respond to an incident, but control it, and positively impact the safety of our cities.
Cyber Security of IoT sensors
As users, we feel the effects of network and sensor outages in some extraordinary ways. A communications failure at one of the world's largest airline carriers created cascading effects not only for the travel industry, but also for logistics and the distribution of consumer goods. "Internal technical issues" were stated to have affected operations simultaneously at two global financial exchanges. These events were followed by communications on the "Dark Web" congratulating the perpetrators on a job well done and referencing the institutions by name. Chances are you are not familiar with this hidden network of websites, as they require special tools to access. Ironically, this network, often used to sell illegal goods, voluntarily shut down when it was discovered that service protocol vulnerabilities could deanonymize server locations. Essentially, the hackers got hacked.

A very recent failure of a sensor network shut down the largest petroleum refinery in the Midwest, causing the wholesale price of gasoline in Chicago and St. Louis to increase 60¢ per gallon from the previous day. A submarine optical fiber cable connecting Australia, Guam and Japan also recently failed; however, hundreds of gigabits per second of data traffic were rerouted from the impacted section to alternate paths on an optical fiber ring configuration. This avoided a route outage while a cable repair ship was mobilized, transited to the repair site, and implemented repairs in challenging deep waters. Had this resiliency not existed, the outage could have meant the potential shutdown of credit-card purchases, ATM withdrawals and vital teleconferencing for health care.

The two most significant vulnerabilities of IoT devices are[9] password attacks and identity spoofing. A better process to establish trusted access to the IoT was required, and server-based authentication appeared to provide this.
9 Source: Capgemini Consulting and Sogeti High Tech, “Security in the Internet of Things Survey”, November 2014
However, cyber security based on server locations far away from the Internet-connected devices, the "things" in IoT, is becoming less desirable to network architects and the users they serve. The use of multi-factor authentication (MFA) can permit multiple servers, located closer to the "edge" IP cameras, physical access control panels or communications devices, to process authorization requests more rapidly. To sustain high performance, increased security requirements and exponentially increasing numbers of IoT sensors and devices, improvement in the authentication process is necessary.

One of the most common and secure examples of these high-performance security processes is right in your pocket or handbag. Push notification services (PNS) provide highly efficient and secure remote notification features for Android, iOS and Microsoft smartphone devices. Each IoT smartphone, device, sensor or even IP camera establishes an encrypted IP connection with the PNS and receives notifications over this connection. If a notification for an app arrives when that app is not running, the device alerts the user that the app has data waiting for it. When new data for an app arrives, the provider prepares and sends a notification to the PNS, which pushes the notification to the target device.

The PNS establishes the IoT sensor's identity using Transport Layer Security (TLS) peer-to-peer authentication. TLS is preferable to SSL, especially considering the latest PCI DSS 3.1 compliance requirements. The IoT device initiates a TLS connection with the PNS, which returns a digital certificate from the server. The device validates this certificate and then sends its own device certificate to the PNS, which validates that certificate in turn.
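From the device side, the mutual TLS exchange described above can be sketched with Python's standard ssl module; the certificate and key file paths are hypothetical placeholders:

```python
# Sketch of mutual (peer-to-peer) TLS from the IoT device's side: validate
# the server's certificate and present the device's own certificate.
# The ca/cert/key paths are hypothetical; when omitted, the system trust
# store is used and no client certificate is attached.
import ssl

def make_device_context(ca_file=None, cert_file=None, key_file=None) -> ssl.SSLContext:
    """Build a client context that verifies the PNS server and carries the
    device's identity certificate for mutual authentication."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # implies CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # trust anchors for the PNS
    else:
        ctx.load_default_certs()
    if cert_file:
        # Device identity: certificate plus private key proving possession.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.check_hostname = True  # verify the server's name against its cert
    return ctx
```

The device would then wrap its TCP socket with this context before sending or receiving any notification traffic, so both ends have validated each other's certificates before application data flows.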
The PNS servers also hold the necessary certificates, certificate authority certificates, and cryptographic keys (both private and public) for validating connections and the identities of providers, corporate servers and IoT devices. Whether it is a text message, a video stream or complete digital multimedia content (video, audio and metadata), this process of establishing trusted identities is the cornerstone of IoT device cyber security.

Moving back to our example of high-speed submarine cables, we see the need for resilient connectivity to be maintained and protected against attack. Although different transmission modes provide connectivity, resilient architecture and strong authentication of the devices on the network remain essential. Why not just use "sat-backup" to keep authentication servers connected? The total carrying capacity of a typical submarine cable between the USA and East Asia is in the terabits per second, while satellites typically offer far less, in the thousands of megabits per second. "Cyber cloud" operators, governments and enterprises need the ability to quickly deliver differentiated services by activating, on short notice, a virtual pool of bandwidth through resilient architecture.

In our earlier petroleum example, an IoT sensor network for detecting leaking pipes would, had the fault been caught in time, have notified plant operators to implement a repair in real time. Should a petroleum refinery flow sensor or server come under attack, the usage archives or historical data could be compromised. A cyber attack on a refinery during peak winter (heating) or summer (automobile usage) periods could be catastrophic. Protecting IoT sensor data in motion is important, but intelligence in the wrong hands can be deadly. This is what cyber resilience is all about: not only protecting the operation and transmission of data, but preserving "data at rest." We know what we need to do to protect IoT devices, but why would corporations invest in these safeguards?
To understand, we need to expose risk management professionals to the four main daunting consequences of suffering a breach:
1. Class-action lawsuits
2. Regulatory fines, penalties and consumer redress
3. Reputational damage
4. Data and income loss
So-called "cyber damage" is also fueling the growth of a new type of business insurance: cyber security insurance, including network security and privacy liability coverage. The following diagram illustrates potential components:
IoT Device and Solution Cyber Insurance Components

[Diagram: maps IoT-specific coverage references (IoT Reference FE&O[10], Digital Multimedia Content (DMC), Ecosystem Cyber Security, PII Data Elements) against legacy coverage categories (Legacy Reference E&O[11], Media and Data, Network Security, Privacy). Components shown include: IoT device/solution failure; infringement of intellectual property (other than patent); cybercrime and intellectual property crime; PII exposure by hacker; IoT performance degradation; advertising and personal injury; unauthorized access; IoT device loss or theft; misconfiguration; data at rest compromised; transmission of a virus or malicious code; workplace IoT device breach (e.g., BYOD); solution design errors and omissions; data in motion intercepted or denied transmission/reception; theft/destruction of data; lost IoT device; failure to deliver services by IoT devices; cyber extortion; breach of PII data elements; watering-hole formation; device/solution overutilization; IoT-to-botnet conversion; server deanonymization; failure to detect advanced persistent threats (APT); failure to provide recovery paths.]
In order to authenticate IoT devices, we often rely on both the public network and mobile carriers. Network outages are one big fear of mobile operators: a recent survey by Light Reading estimates that as many as 50% of mobile operators worldwide could suffer an hour-long outage in a significant part of their network as often as once a year as a result of cyber attacks. The attacks are also growing, up from 11% in October 2013 to 16% in 2014. The deployment of in-house authentication servers may therefore be a justified expense. The importance of threat intelligence sharing is also creating tough decisions: does an enterprise keep its name out of the headlines as the next victim, or contribute to a safer and more secure world by sharing its cyber security attack posture? As a leading cyber security consultant recently stated, "When it comes to cyber intelligence, there's strength in numbers."
10 Failure of IoT device; errors and omissions relating to IoT solution
11 Errors and omissions
The Internet of Things in safety and security represents a networking paradigm in which interconnected smart sensors are powered, protected, and continuously generate data and transmit it over the Internet. Additional layers needed to make an IoT sensor operational and secure must also be considered. The primary categories of the most significant processes include:
1. IoT Device Structure – the physical housing of an IP camera, as well as a wireless communication antenna and solar energy collection device
2. Wireless and Wired Communications
3. Cyber Security – the heart of the external and internal cyber security and device protection functions; this layer also assists in MFA
4. Power Transfer – where energy for storage is acquired, for example wireless charging or energy harvesting
5. Energy Storage – primarily where volatile data storage and processing functions reside
6. Data Exchange – performs protocol negotiation and interoperability
7. Process and Analysis – metadata analysis, efficiency, analytics, energy management and storage optimization, and the indexing process
8. Data Storage – the "Thing" in IoT; all sensor data
The term IoT refers to three aspects that are expanded on by our IoT device model:

1. The Thing itself (the device)
2. The Local Network, wired or wireless, using Ethernet, Bluetooth or other connectivity
3. The Internet
Data interchange, communication protocols and storage all impact the critical IoT-device-to-consumer response time. Depending on the urgency of the response required for an event, there may be a need for the device to process certain data internally. Furthermore, in the case of an IP camera, there is metadata describing the size, color, speed, trajectory and time of the scene. Some of this data may be needed urgently, as in the case of the license plate of a vehicle associated with an Amber Alert or other time-sensitive emergency. The object identification, location, processes and services provided are examples of such data.

IoT data may statically reside internally, immediately nearby, or in mobile objects and IoT data concentration storage points. The data may be distributed widely, but it must always be transmitted and stored securely. This migration or "flow" of IoT data can continue from one secure container to another until a centralized data store is reached, where more sophisticated processing and analysis takes place, such as facial recognition or another pattern-matching algorithm.

Communication, storage, processing, data exchange, security and device power are the defining factors in the IoT model structure. The IoT sensor must be considered less of a "razor blade" and more of an evolving device and resource. This resource's most vital part is often its data, and so four important choices must be made about that data: how it is collected, how it is stored, how it is processed and how it is protected. With this foundation we can expand our use of IoT to a wide range of data types and formats from different data sources, including time and geo-location tags, and global intelligence. The IoT "Superball" model was developed in cooperation with the Security Applied Sciences Council of ASIS International and the Video Quality in Public Safety Working Group, US Department of Homeland Security Science and Technology Directorate.
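The urgency-based routing just described can be sketched as follows; the field names and the alert hot list are illustrative assumptions:

```python
# Sketch of the per-object metadata an IP camera might attach to a scene,
# plus an urgency decision: time-sensitive hits (e.g., an Amber Alert plate)
# are handled at the edge, everything else flows to central storage.
# Field names and the alert list are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ObjectMetadata:
    object_id: str
    size: float        # relative size in the frame
    color: str
    speed: float       # estimated speed
    trajectory: str    # e.g., compass heading
    timestamp: float   # capture time, epoch seconds
    plate: str = ""    # decoded plate, if any

AMBER_ALERT_PLATES = {"ABC1234"}  # hypothetical hot list

def route(meta: ObjectMetadata) -> str:
    """Decide whether the sensor must act immediately or forward the record."""
    if meta.plate in AMBER_ALERT_PLATES:
        return "edge-alert"      # urgent: notify immediately from the device
    return "central-store"       # routine: migrate toward centralized analysis

print(route(ObjectMetadata("v1", 0.2, "red", 14.0, "NE", 0.0, "ABC1234")))
# edge-alert
```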
IoT and Cyber Security FAQ

How does certificate-based authentication work?

When presented with a certificate, an authentication server will check the following (at a minimum):

1. Has the digital certificate been issued/signed by a trusted CA?
2. Has the certificate expired? (Both the start and end dates are checked.)
3. Has the certificate been revoked?
4. Has the client provided proof of possession?
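Checks 2 and 3 can be sketched in a few lines; the revocation set stands in for a real CRL/OCSP lookup, and a small clock-skew tolerance is included because of the NTP issue discussed in this section (the dates and serial numbers are made up):

```python
# Sketch of the validity-window (expiration) and revocation checks.
# REVOKED_SERIALS stands in for a CRL/OCSP lookup; the skew tolerance
# guards against the NTP mismatch described in this section.
from datetime import datetime, timedelta

SKEW = timedelta(minutes=5)      # tolerated clock difference
REVOKED_SERIALS = {"1A2B3C"}     # hypothetical revocation list

def is_time_valid(not_before: datetime, not_after: datetime,
                  now: datetime) -> bool:
    """Check both the start and end dates, allowing small clock skew."""
    return (not_before - SKEW) <= now <= (not_after + SKEW)

def is_revoked(serial: str) -> bool:
    return serial in REVOKED_SERIALS

# The "brand-new certificate" example: a cert issued 20 minutes in the
# future still fails, while one only 3 minutes ahead passes within the skew.
now = datetime(2014, 1, 10, 11, 11)
print(is_time_valid(datetime(2014, 1, 10, 11, 30), datetime(2015, 1, 10), now))  # False
print(is_time_valid(datetime(2014, 1, 10, 11, 14), datetime(2015, 1, 10), now))  # True
```

A certificate that passes both checks would still need the trusted-CA signature and proof-of-possession checks before access is granted.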
Has the Digital Certificate Been Signed by a Trusted CA?

The signing of the certificate really has two parts. First, the certificate must have been signed correctly (following the correct format, etc.); if it was not, it is discarded immediately. Next, the signing CA's public key must be in a Trusted Certificates store, and that certificate must be trusted for purposes of authentication.

Has the Certificate Expired?

Just like a driver's license or a passport, a certificate lists two dates: the date issued and the date it is valid to (when it expires). When you present an expired driver's license to law enforcement, it is a problem because the credential is no longer a valid source of identity. An authentication server does the same sort of check, returning an "Access-Reject" if the certificate is not valid for the date and time at which the authentication request comes in. This is one reason why Network Time Protocol (NTP) is so important when working with certificates. Many of us have seen problems where time was out of sync. For example: a certificate was presented on January 10, 2014 at 11:11 a.m., but its "valid-from" value started on January 10 at 11:30 a.m. This was caused by a time-sync issue in which the certificate authority thought it was 20 minutes later than the authentication server did, so the brand-new certificate was not yet valid!

Has the Certificate Been Revoked?

You are driving down the road and are pulled over by a policeman. The policeman asks for your driver's license and proof of insurance. You hand the officer a driver's license, which is immediately checked for evidence of authenticity: does it look like a valid driver's license or a forgery? OK, it's not fake. Check. Next, expiration: it is not expired. Check. Now the policeman asks you to wait there while he goes back to his squad car.
While in the squad car, the officer will perform some authorization checks (are you a registered owner of the car you were driving, etc.). Those are not important for this conversation, though. What is important is that the policeman must make sure your valid driver's license was not revoked by the DMV.

What is the relationship between certificates and Active Directory?

The difference lies between authentication and authorization. A certificate issued by Active Directory Certificate Services is still just a certificate. It will go through all the authentication validation listed above, regardless of the fact that the CA was integrated into AD. What is possible is to examine a field of the certificate and then do a separate look-up into AD based on that field during the authorization phase. For example, when a certificate corresponding to a subject attempts authorization, the RADIUS server will take the certificate subject and do a look-up into AD for that username. This is where group membership and other policy conditions are examined, and the specific authorization result is issued.

What's the difference between two-factor authentication and multifactor authentication? I've seen both terms used, but the specifics are still a bit unclear. What's the better option in terms of securing devices and systems?

Each of these authentication frameworks uses more than a simple username/password scheme to identify an individual, but they go about it in different ways. Two-factor authentication (2FA) uses a single authentication step in which the individual presents two different factors: something he knows, such as a password, combined with something he has, such as an assigned 2FA token issued by the organization, or something he is, such as a biometric component (retinal scan, fingerprint or voice recognition). For example, when I log onto my workstation it first prompts me for my login name, then prompts for the number showing on the hard token that I have on my person.
If both match my login data, I can then access my files. Multifactor authentication (MFA) can include both 2FA and non-2FA credentials, but its major distinguishing characteristic is that it is a multi-step authentication process. Using the same example as above, when I log onto my workstation it prompts me for my login name, then prompts for the number showing on my hard token; I am then prompted to enter a number that is texted to my mobile phone. If the information entered matches my login data, I can then access my files. In reality, instead of working in conjunction with a 2FA credential, more often than not MFA is used with a simple username and password plus the number from a text message to a mobile phone, or some other non-2FA information such as secret-question responses, typing in text garbled on an image, picking an image that the user previously selected in another session, or entering additional account information.
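The hard-token factor in these examples is typically a time-based one-time password (TOTP, RFC 6238); a minimal sketch, using an RFC test secret, shows how token and server independently derive the same code:

```python
# Sketch of the "number showing on my hard token" factor: a time-based
# one-time password (TOTP, RFC 6238) computed from a shared secret. The
# secret below is the RFC test value; real tokens receive their secret
# during enrollment.
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from the shared secret and clock."""
    counter = unix_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and token compute the same code from the same secret and window,
# so a match proves possession of the secret without transmitting it.
print(totp(b"12345678901234567890", 59))  # 287082 (RFC 6238 test time T=59)
```

Because only the secret and a synchronized clock are shared, this is another place where the NTP discipline discussed earlier for certificates matters.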
MFA and 2FA both require something you know and something you have to authenticate, and at first glance they appear comparable in security. However, information such as answers to a secret question is easier for attackers to discover or guess, thanks to the Internet of Things, social media and other potential sources of data leaks, so 2FA is considered more secure. But the bigger question to ask when deciding whether to use 2FA or MFA is which is more easily supported by your applications and infrastructure. If the applications you wish to protect support only one or the other, the answer is quite clear: use the one supported. If the applications can support both, 2FA would be the preferred method, since the user only has to perform one authentication event. If the applications support neither, it might be necessary to recode the application. Regardless of which method you choose, both will require some level of change to registration processes, and of course end users will need to be trained on how to use the new authentication method and how to seek help should they run into an issue logging in.
ASIS International Security Applied Sciences Facility Model
The following are block diagrams showing the progressive influence of the control/analysis system, network infrastructure, policy, and physical infrastructure on application locations where video surveillance is deployed.
The following diagram adds the physical and logical aspects of the network, together with policy.
The following diagram adds the relationship of physical infrastructure, such as the physical wiring plant and wireless communications.
The following diagram adds significant interoperability and data exchange paths between subsystems such as Physical Access Control, Communications, and Video Surveillance.
UL 2802 – Standard for Performance Testing of Camera Image Quality
UL 2802 is one of the newest video imaging standards and provides progressive and objective test methods to assess the image quality of digital camera equipment. The standard was published in September 2013, and its development included input from various stakeholders: producers, the supply chain and distributors, authorities having jurisdiction, practitioners, commercial and industrial users, government, and others. UL 2802 defines testing procedures and quantifies image quality based on an objective set of performance tests, conducted on production camera samples, that measure the following nine image quality attributes:
• Image resolution/sharpness – measures how closely the digital image captured by the camera matches the actual image
• TV distortion – quantifies the extent to which the two-dimensional image captured deviates from the actual image
• Relative illumination – measures the ability of a camera to effectively capture relative light intensity across an object
• Maximum frame rate – measures how effectively a camera can capture a subject in motion at full resolution
• Sensitivity – determines the amount of light required to digitally re-create the image as realistically as possible
• Veiling glare – quantifies the impact of stray light on the camera
• Dynamic range – assesses the ratio of the minimum and maximum light intensities captured by the camera
• Grey level – quantifies how well a camera can differentiate areas of interest under different illumination, reflectance or luminance levels
• Bad pixel – measures the level of pixel defects
Since no single attribute or criterion can provide a reasonably accurate and objective evaluation of a given camera, lens, software, image processor, camera lens housing, electronic components, or any combination of these critical elements, UL 2802 uses several different quantifiable metrics to assess a video camera's performance. These metrics, along with consistent, documented test methods, minimize potential variations in the evaluation of a camera. UL 2802 is anticipated to be the first of a series of video imaging standards that will be developed to address the complete video ecosystem. During the test program, the manufacturer and the testing organization (UL) collaborate to ensure that the video camera settings are optimized and the test results reflect the camera's best abilities. The process typically involves fine-tuning resolution properties, optimizing exposure time and gain during each test, and enabling or disabling default features to optimize camera performance and achieve the most favorable test results based on the camera's abilities. This process of optimizing settings is no different from what is typically done during deployment of a video system.
Below are a few unique features of the UL test program that offer a new approach compared with traditional test methods. All tests and test apparatus are comprehensively detailed in the published standard.

Electronic Test Targets
Traditional methods for testing video camera image quality involve the use of standardized, published test charts for resolution and grey-level tests. UL's program implements a different approach that incorporates calibrated light sources of known luminous levels, frequency, spatial orientation, and grey levels. Each of these factors contributes to a video camera's ability to capture, record, and store an image relative to the actual object. The electronic test targets used for the test procedures eliminate some of the known potential inconsistencies associated with the use of printed test charts under very specific lighting conditions and lighting angles, which can be difficult to control. The electronic test targets are calibrated and measured during each test program, with all of the data and settings recorded for each test.
Circular Edge Analysis
Image resolution is a critical test in that it measures how closely the digital image captured by the camera matches the actual subject. Digital images are made up of pixels, which are stored in proportion to the dimensions of the actual image. An accurate resolution measurement requires an evaluation of a camera's lens and sensors, as well as its imaging software, and is often presented as line pairs per picture height (LP/PH): a measure of how many distinguishable alternating line pairs can be represented in an image. Modulation transfer function (MTF) is a technique used to quantify image resolution in more complex images. MTF corresponds to the spatial frequency of image line pairs per picture height (LP/PH); that is, the ability to represent the "real" object by taking the light intensity and plotting it along imaginary lines traversing the representation of the object. Other mechanisms for measuring distortion are detailed in ISO 12233 (Photography — Electronic still picture cameras — Resolution measurements) and in the Standard Mobile Imaging Architecture (SMIA) forum specification. UL 2802 uses the same spatial frequency response (SFR) method for resolution. The primary difference is that UL 2802 uses a circular edge analysis method (see "The Circular-edge Spatial Frequency Response Test," by Richard Baer, Agilent Laboratories, 2002) rather than the more traditional slanted-edge method to detect the transition edge of a known image. An additional benefit of the circular edge method is that it accurately averages SFR from all circular directions, versus the multiple (horizontal and vertical) measurements required by slanted-edge techniques.
A further benefit is improved measurement consistency: slanted-edge SFR assumes that the transition edge of the printed resolution chart is a straight line, and factors such as lens distortion can cause the edge to deviate from a straight line, resulting in measurement inaccuracies. The benefits of using electronic test targets and circular-edge techniques for measuring SFR are cumulative in producing accurate test data.
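The relationship between line-pair contrast and MTF discussed above can be shown with a small sketch. This is a textbook illustration of the underlying arithmetic only, not the UL 2802 or ISO 12233 procedure: modulation here is Michelson contrast, and MTF at a given spatial frequency is the fraction of the target's contrast that the camera preserves.

```python
def modulation(i_max: float, i_min: float) -> float:
    """Michelson contrast of an alternating line-pair pattern: (Imax-Imin)/(Imax+Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(target_modulation: float, image_modulation: float) -> float:
    """Contrast preserved by the camera at one spatial frequency (1.0 = perfect)."""
    return image_modulation / target_modulation

# Hypothetical pixel readings: the target is rendered with nearly full contrast
# at low frequency, but blurs toward grey at high frequency.
low_freq = mtf(modulation(255, 0), modulation(240, 15))    # contrast mostly preserved
high_freq = mtf(modulation(255, 0), modulation(150, 105))  # contrast largely lost
```

Plotting such ratios across a sweep of spatial frequencies is what produces the familiar MTF curve; the circular-edge method described above is one way of estimating that curve from a single edge target.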
Circular edge vs. slanted edge.

Performance Score
Each test results in an interpolated performance score that is calculated from traditional photographic units. UL reports the scores in two usable units of measure. For simple comparisons, the UL unit scores of 0-100 make comparing one camera's test score to another's relatively simple without being a camera expert. For those who are more technically involved, the traditional photographic unit scores
may be more meaningful in comparing test results. Either way, the performance scores reflect the image quality achieved for each test parameter. Performance scores help a video system integrator make an accurate decision about which camera would be best for a particular use case based on known conditions. For example, if a camera must perform well under low lighting conditions and with fast-moving objects, one may want to compare the parameters that specifically relate to those conditions, such as sensitivity, dynamic range, and frame rate. No single number or criterion can provide a reasonably accurate and objective evaluation of a given camera, so an integrator will normally need to consider multiple parameters based on the camera's use-case application. UL 2802 is a standard that provides objective test results based on the camera's tested image quality.

Other UL 2802 Considerations
Unlike other UL standards that generally focus on addressing fire and shock safety concerns, UL 2802 provides guidance for both safety and performance characteristics of cameras. Video cameras evaluated according to the performance criteria of UL 2802 must also comply with the safety requirements found in one or more of the other applicable product standards. These standards include:

UL 60950-1, the Standard for Safety of Information Technology Equipment, Safety – Part 1: General Requirements
UL 60065, the Standard for Safety of Audio, Video, and Similar Electronic Apparatus – Safety Requirements
UL 62368-1, the Standard for Safety of Audio/Video, Information and Communication Technology Equipment – Part 1: Safety Requirements
UL 2044, the Standard for Safety of Commercial Closed-Circuit Television Equipment

Video cameras used in outdoor settings must also comply with the safety requirements found in UL 60950-22, the Standard for Safety of Information Technology Equipment – Safety – Part 22: Equipment to be Installed Outdoors.
This progressive camera image quality standard can assist the end user in determining the most appropriate video camera(s) for specific use cases.
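To make the 0-100 comparison idea described above concrete, here is a hedged sketch of how a raw photographic-unit measurement might be normalized to a comparison score. The anchor values and the linear interpolation are assumptions chosen for illustration; UL 2802's actual scoring method is defined in the published standard.

```python
def ul_style_score(value: float, worst: float, best: float) -> float:
    """Map a raw photographic-unit measurement onto a 0-100 comparison scale.

    `worst` and `best` are hypothetical anchor values, not UL's published ones;
    they may run in either direction (e.g., lower lux = better sensitivity).
    """
    score = (value - worst) / (best - worst) * 100.0
    return max(0.0, min(100.0, score))  # clamp to the reporting range

# Compare two hypothetical cameras on low-light sensitivity (lower lux = better):
cam_a = ul_style_score(0.5, worst=10.0, best=0.1)  # needs 0.5 lux to form an image
cam_b = ul_style_score(3.0, worst=10.0, best=0.1)  # needs 3.0 lux
```

With scores on a common scale, the multi-parameter comparison described above (sensitivity plus dynamic range plus frame rate, say) reduces to comparing a handful of numbers per camera.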
Forensic Video Program Readiness
Recorded video and related feature data of events are among the first and most important resources for incident review. But what is available in this data, and what are the opportunities for industrial security, first responders, operations management, safety, and loss-prevention professionals? To understand this, we first need to define what is included in this "forensic" video data and then apply a process for its use.
Digital Multimedia Content — more than just video data
Digital Multimedia Content (DMC) is also known as digital video data, IP video content, or Digital Multimedia Evidence (DME). DMC may be compressed or uncompressed and may also be referred to as original, copied, local, or virtual. Compressed DMC is the most common video data available; it has been transcoded from the original DMC into an industry-standard file format, resulting in a reduced size and less network bandwidth required to represent the original data set. Advances in H.264 video compression, the ability to store DMC within the camera or video-encoding device itself, and virtualized or cloud computing have dramatically improved the volume and duration of video data available to investigations. Uncompressed DMC, or a copy of the original DMC with no further compression or loss of information in an industry-standard file format, is desirable to professional video-evidence examiners but is often unavailable and can be an unreasonable expectation due to the far larger storage requirements. Given the choice, most security professionals prefer compressed video evidence that can be used together with other data, such as still-image photography.
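The storage gap between uncompressed and compressed DMC noted above is easy to quantify. The bitrates below are rough assumptions for illustration (uncompressed 24-bit 1080p at 30 fps runs near 1.5 Gbit/s; a typical H.264 surveillance stream is a few Mbit/s), not figures from this handbook.

```python
def storage_gb_per_day(bitrate_mbps: float, cameras: int = 1) -> float:
    """Storage for 24 hours of continuous recording at a given stream bitrate.

    Converts megabits per second to gigabytes per day (decimal GB).
    """
    seconds_per_day = 24 * 3600
    return bitrate_mbps * seconds_per_day * cameras / 8 / 1000

# Assumed rates: uncompressed 1080p30 (~1500 Mbit/s) vs. a typical H.264 stream (~4 Mbit/s)
raw_gb = storage_gb_per_day(1500.0)  # one day, one camera, uncompressed
h264_gb = storage_gb_per_day(4.0)    # one day, one camera, compressed
```

At these assumed rates, one day of one camera drops from roughly 16 TB uncompressed to about 43 GB compressed, which is why retaining original uncompressed DMC is usually an unreasonable expectation.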
Forensic Review
The act of applying forensic video technology to this DMC defines "forensic video review." Review tasks include playback and analysis of DMC, together with applying a scientific methodology of forensic video analysis. Also important are the use of DMC evidence in the legal setting, performing data recovery as needed, performing forensic image comparison, and developing a visual presentation of evidence. DMC authentication and tamper detection are examples of maintaining the chain of custody for DMC evidence, as specified under the Law Enforcement and Emergency Services Video Association "Guidelines for the Best Practice in the Forensic Analysis of Video Evidence."
Design considerations
Applying an understanding of the effect of light on the scene can improve the image quality of the video content. Advances in camera technology that produce usable color or "find the light" in dark or low-illumination scenes are improving forensic video content. The design of the video solution to provide maximum coverage is of great importance for systems used for forensic review. Using standards-based, high-image-quality sources such as HDTV IP cameras, together with technologies to accommodate difficult lighting, will improve the recorded image quality.
Video content analysis
Applications that use video analytics can perform complex, repetitive functions such as object detection and recognition simultaneously on many channels of video. These tools can provide improved searches based on object characteristics and behavior. Referring to this evidence as "multimedia" is essential to understanding the different digital data categories included in Digital Multimedia Evidence (DME): video content, audio content, metadata incorporating object characteristics such as color, size, and trajectory, location-based information, relevant IP addresses, recording time, and system time may all be attached to or associated with a digital video file.

Designers consider video analytics where the system uses a large quantity of cameras that must be monitored for specific conditions or behaviors capable of being recognized. Setup and installation are relatively simple for a video analytics subsystem, which has high, sustained accuracy for the types of behaviors and objects recognized.

With video synopsis or summarization, a condensed clip of all motion for selected criteria is continuously generated and stored, allowing an "instant review" of a readily available "video synopsis." It is possible to summarize a 24-hour period of event entries in as little as 15 minutes, reducing incident-review time by at least 50 percent.

Video analytics offering abnormal scene detection allows the user to set specific object criteria and direction. The scene is analyzed continuously, and abnormal behavior differing from the majority of the scene content is detected and annunciated or marked for later review. Video analytics embedded in the network camera represents a growing segment, where applications run at the "edge" network camera and recognition-based values or decisions are available with minimal software.
One popular example in retail and quick-service establishments is the "people counter," where the network camera and built-in app return the number of people
passing into a zone, through a boundary, or into the field of view. This can provide criteria on which to increase camera frame rate and stored resolution during the times of highest traffic. Another popular video-recognition solution, which runs either as an embedded network camera application or in the video management system, is fixed License Plate Recognition and Capture (LPR/LPC). This specialized app captures license plate information for immediate processing by LPR software. The software may run in a rapid-acquisition mode and compare plates later against an approved list, or perform the recognition sequentially as vehicles pass within the camera field of view. In either case, LPR is a mature application embraced by law enforcement, electronic-toll collection, and parking management organizations; the trend to embed this function reduces cost and allows greater flexibility.

"Heat" activity mapping provides a visual, color-coded summary showing how people have moved in the camera scene over a fixed duration. Useful in retail environments where "business intelligence" data is needed, this type of video content analysis can also improve safety by analyzing the flow of pedestrian and vehicular traffic in a facility. Understanding personnel traffic flow will often help camera placement and, ultimately, the video forensic-review process.
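The "people counter" described above reduces, at its core, to detecting when a tracked object's position crosses a virtual boundary. The sketch below illustrates that counting logic only; the detection and tracking that produce the per-frame positions are assumed to come from the camera's analytics, and the coordinates are hypothetical.

```python
def count_entries(track_ys: list[float], boundary_y: float) -> int:
    """Count how many times one tracked object crosses the boundary going 'in'
    (increasing y), given its centroid y-position in successive frames."""
    entries = 0
    for prev, cur in zip(track_ys, track_ys[1:]):
        if prev < boundary_y <= cur:  # crossed the virtual line on this frame
            entries += 1
    return entries

def people_count(tracks: list[list[float]], boundary_y: float) -> int:
    """Total entries across all tracked objects in the scene."""
    return sum(count_entries(t, boundary_y) for t in tracks)

# Three hypothetical tracks; the third turns back before reaching the line.
tracks = [[10, 35, 62, 90], [5, 20, 55, 70], [12, 30, 41, 22]]
```

Here `people_count(tracks, 50.0)` returns 2. A count like this, accumulated per interval, is exactly the kind of criterion the text mentions for raising frame rate and stored resolution during peak traffic.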
Checklist: preparing for digital forensics – Issues/opportunities
• Digital multimedia content (DMC), incorporating forensic video
• Cost assessment for preparation: balance the cost-effective with the technically feasible
• Target collection capability on the risks to the business/event/assets
• Consider implementing ALICE: alert, lockdown, inform, counter, and evacuate
• Collect admissible evidence within the legal compliance requirements: review the legality of any monitoring; this is not a technical issue of what can be obtained through forensic video review
• Consider all forms of potential evidence, not only IP cameras or legacy CCTV cameras but also personnel records, access control systems, and still images
• Understand the functional differences between systems: observation, forensic review, and recognition/content analysis
• Understand the difference between pixel density and visual acuity/image processing, and how the highest quality video content is produced
• Differentiate between video content analysis/recognition systems optimized for "proactive" vs. "reactive" use, understanding that many "reactive" tools are best for forensic video review
Checklist: implementing a forensic video readiness program
1. Define the business/industrial/first responder scenarios that require digital multimedia content (DMC), incorporating forensic video
2. Identify available sources and different types of potential evidence
3. Determine how you will retrieve the DMC
4. Establish the capability for securely gathering legally admissible evidence to meet the requirement
5. Establish a policy for secure storage and handling of potential evidence
6. Ensure monitoring is targeted to detect and deter major incidents (consider ALICE and proactive vs. reactive technologies)
7. Specify circumstances when escalation to a formal investigation (which may use the digital evidence) should be launched
8. Train staff in incident awareness, so that all those involved understand their role in the digital evidence process and the legal sensitivities of evidence
9. Document a sample case with forensic video evidence, to use as a model describing an incident and its impact
10. Ensure legal review to facilitate action in response to the incident
Top technology considerations in forensic video
• Simplified high quality redundancy of recording: "edge" camera recording
• High quality, High Definition, low light tech (full color, thermal)
• "Proactive," "ahead of the threat" advance warning video tech (abnormal detection)
• "Reactive" video technologies that help investigations (video summarization, synopsis, LPR, face location)
• Video + Mobility and/or Alarm Automation
Criminal Pattern Identification and Security/Video Data
Is crime cyclic in nature? Can we predict when it will occur in cities? How does this impact your IT operation, and how can you leverage analytics? We can explore the efforts of Chicago, Philadelphia, Phoenix, and Dallas in new crime-fighting initiatives using data analytics, cloud computing, physical security, and predictive policing. How does a city analyze crime statistics and then execute crime prevention? Can crime mapping reveal problem locations not considered previously? Temporal crime analysis can often reveal trends for different times and locations in a city. Today's crime-fighting team now often includes persons experienced in crime analysis and psychology to explain, and sometimes predict, a higher possibility of data center breaches, business interruption, and violent crime. Considering the unique viewpoints of IT management, police, attorneys, and physical security professionals, the review of crime data and predictions is significant, from video data acquisition to dissemination and policy.
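Temporal crime analysis of the kind described above can start with nothing more than bucketing incident timestamps by weekday and hour. The following is a minimal sketch using hypothetical data, not real crime statistics:

```python
from collections import Counter
from datetime import datetime

def temporal_hotspots(timestamps: list[str], top: int = 3):
    """Bucket ISO-format incident timestamps by (weekday, hour) to expose
    recurring temporal patterns, e.g. a cluster of late-Saturday incidents."""
    buckets = Counter()
    for ts in timestamps:
        dt = datetime.fromisoformat(ts)
        buckets[(dt.strftime("%A"), dt.hour)] += 1
    return buckets.most_common(top)

# Hypothetical incident log for one district
incidents = [
    "2023-07-01T22:15:00", "2023-07-08T22:40:00", "2023-07-15T23:05:00",
    "2023-07-02T09:00:00", "2023-07-05T22:30:00",
]
```

With this sample, `temporal_hotspots(incidents)` surfaces the repeated Saturday 22:00 hour first, the kind of cycle that can then drive camera retention, patrol scheduling, or recording-quality policy for that window.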
Linking DMC to policy
See the chart below as an evolving overview of how data is linked to policy. Its four columns are DMC Acquisition12, DMC Usage, DMC Storage, and DMC Compliance, headed respectively by object/incident capture, search, retention, and solution13. The remaining topics in the chart include:

• Interoperability
• Categorization of Content
• Monitoring, Analysis, and Analytic Applications
• Dissemination and Information Sharing
• Auditing
• Access and Secure Transmission
• System/Content Security
• Video Content Management Systems and Technology
• Sources; Relationship to other databases
• Back-Up/Continuity of Operations Planning (COOP) Issues
• Operational Impacts
• Compliance: Retention, release and DMC destruction requirements
• Governance Issues
• Community Privacy, Social and Environmental Issues
12 Digital multimedia content (DMC) is defined as including the video content itself, plus associated metadata (feature data) and audio.
13 DMC acquisition devices, virtualization, application licensing, analysis subsystems, content distribution, and DMC rendering and consumption.
Implementation
Project implementation plan for a network video surveillance solution

1. Planning
   a. Project commencement meeting: introduce contractor personnel to other parties involved
   b. Determine impact of other projects being undertaken at the same time
   c. Review the client's requirements and concerns
   d. Verify policy compliance
   e. Review the IT department requirements and concerns
   f. Contractor to review and interpret the project documents
   g. Facility infrastructure review: conduct a walk-through of all work areas with the client
   h. Contractor to describe specific work methods and proposed schedules
   i. Attend periodic IT project meetings: review the status of current and planned activities; review the schedule
2. Infrastructure impact
   a. Develop and implement infrastructure impact plan
   b. Develop bandwidth/network/routing maps
   c. Develop bandwidth measurement scenarios
   d. Bandwidth calculation
      i. Note individual security device (e.g., camera) bandwidth values
      ii. Use camera and recorder bandwidth calculators as required
      iii. Accumulate to the nearest network switch; apply totals
      iv. Accumulate multiple network switch bandwidth; apply totals
      v. Accumulate total security device (e.g., camera) bandwidth for each aggregation point (e.g., network video server or physical access control panel)
      vi. Apply totals
      vii. Note individual user monitoring station bandwidth values
      viii. Note command center bandwidth values
      ix. Apply scenarios given in the Command Center Display Scenarios chart
      x. Accumulate to the nearest network switch; apply totals
      xi. Accumulate typical multiple user monitoring station bandwidth values for each aggregation point; apply totals
   e. Infrastructure protocol, power, QoS
      i. Verify infrastructure compatibility and protocol support
      ii. Verify QoS and desired performance (e.g., with PACS, credential acceptance delay; with IP video, refresh rate, control delay, stream quality, individual image capture acuity)
      iii. Consider that cable infrastructure needs to support the bandwidth capacity requirement
      iv. Consider that cable installation and cable quality significantly impact data rate
      v. Verify that the design of the wiring plant topology and network switches supports placement of security and surveillance system devices
   f. Power
      i. Deploy PoE systems effectively to support the system's power requirements
      ii. Verify redundant power as required
   g. Network infrastructure simulation
      i. Verify network usage scenarios (for example, recording system utilization with recording streams and monitoring users)
      ii. Verify network switch functions to support bandwidth load and failure scenarios
      iii. Verify recovery from the most common and reasonable infrastructure failures
      iv. Simulate as many network conditions and loads as possible for components, edge devices, and infrastructure
3. Commissioning
   a. Verify the network device commissioning plan to specify the system deployment
   b. Perform step-by-step staging, programming, installing, and commissioning tasks
4. Diagrams
   a. Verify that the following diagrams are available:
   b. Security block diagram
   c. Data closet design
   d. Security device type schedule and bill of materials
   e. Security device type detail
   f. Riser diagrams
   g. Point-to-point diagrams
   h. Command center elevations and stretchout
   i. Command center sequence of operations by scenario
5. System bills of material (BOM)
   a. Master bill of material (BOM) to include:
   b. Security device BOM
   c. Telecom room BOM
   d. Command center BOM
   e. Workstation BOM
6. Acceptance testing
   a. Acceptance testing performed after the completion of a successful and complete system burn-in period
   b. Acceptance testing should include testing individual devices for operation, scenarios, and system responses
   c. All access control points, intrusion detection points, video cameras, and intercom systems require testing and observation to ensure that they operate as required in the construction documents
   d. To account for all lighting conditions, video cameras must be examined during the day and at night
   e. Acceptance testing may also include a "defeat the system" test to demonstrate that there are no potential shortcomings within the hardware and software system that would compromise the integrity of the system under normal operating conditions. Acceptance testing is to be complete and test documentation approved by the client prior to project completion
   f. Verify performance requirements to policy
   g. Video-based acceptance test
7. Framework for continuous performance verification, including performance testing metrics and a positive influence against possible legal challenges to video evidence
8. Training
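The bandwidth accumulation in step 2.d above can be sketched as a simple roll-up from devices to switches to aggregation points. The 25 percent headroom below is an assumed engineering margin chosen for illustration, not a figure from this plan, and the device bitrates are hypothetical.

```python
def switch_bandwidth(device_mbps: list[float]) -> float:
    """Accumulate per-device (e.g., camera) bandwidth to the nearest network switch."""
    return sum(device_mbps)

def aggregation_point_bandwidth(switch_totals: list[float],
                                headroom: float = 1.25) -> float:
    """Accumulate switch totals to an aggregation point (e.g., a network video
    server), with an assumed ~25% engineering margin for bursts and growth."""
    return sum(switch_totals) * headroom

# Hypothetical: one switch carrying 8 cameras at 4 Mbit/s and 4 cameras at 6 Mbit/s
edge_total = switch_bandwidth([4.0] * 8 + [6.0] * 4)        # 56 Mbit/s at the switch
core_total = aggregation_point_bandwidth([edge_total, 48.0])  # with a second 48 Mbit/s switch
```

Repeating the same accumulation for monitoring stations and command-center display scenarios (steps 2.d.vii-xi) yields the totals to check against switch uplink and recorder network capacity.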
Figure 1 Sample Security Project Timeline (Gantt Chart)
INDEX
“
“Report of the School Safety Infrastructure Council” · 7
A
ASIS International · 17
C
Consumer Electronics Association · 24
Continuity of Operations Planning · 41
D
Digital multimedia content · 3
F
forensic video review · 36
forensic-ready · 8
L
Law Enforcement Video Association · 19
P
pixels on target · 8
S
Security Applied Sciences Council · 18
Security Industry Association · 18
Security Industry Council · 19
U
Underwriters Laboratory · 19
V
Video Content Analysis · 3