Posted on 21-May-2020 (transcript)
Leveraging Open Source Browsers to Optimize Apps and UI Performance on Set-Top Boxes
Albert Dahan, Co-Founder and CTO
Streaming OTT Video Apps: Making it Simpler
Transformation of TV
• There is no limit to the content found on the internet; it is either free or fee-based.
• OTT services are accessed via 3rd-party platforms, for example Apple TV, Roku, Fire TV, and so on.
• The OTT apps provide services to watch movies, TV shows, or even live TV.
• The traditional TV industry is changing, and cable TV is no longer limited to one screen.
• Cable companies have their own TV apps for live TV and On Demand content. This approach gives their customers a live TV experience on multiple screens.
• MSOs are trying IP-only video services using 3rd-party platforms, which could benefit from developing a common framework to integrate the OTT apps with the live TV MSO app.
Challenges
• With so many media OTT apps available, consumers face the challenge of deciding which app to watch and why they should invest in it.
• Different login credentials for different apps.
Challenges (Cont.)
• Not as flexible as changing TV channels.
• Incompatible ecosystems on different platforms.
Solution Ideas
• The MSO can be an app aggregator.
• Content from various apps is made available regardless of origin.
• Treat each app as an individual channel and aggregate the content to offer multiple channels.
• The MSO TV app can be a master app. Apart from the regular subscribed channels, it should also display all the free channels available on the internet.
• Create a common framework for authentication; create a profile on the box or in the network.
• Provide an app framework that integrates all the channels in a multiple-app environment into one common app to provide a better remote experience.
OTT Video App Common Framework
• A common framework can be achieved by creating a controlled environment, using a specific operating system and a platform where video apps can be developed in one unified way.
• The common framework can be integrated in the cloud, so it can be used across different STB and browser-based platforms.
• Apps need to be developed only once to run across different platforms.
• The common framework could support over-the-air channels, live TV and On Demand content.
• Leveraging the customer's antenna for over-the-air channels could allow operators to avoid retransmission fees for local stations.
New Features
• Voice search on the remote could be enhanced to work across all apps on the common framework.
• The common framework will also allow cable operators to integrate social media and live video experiences with family and friends.
• Pairing mobile and other devices to the common framework could simplify content sharing.
• Digital tracking of content could be leveraged across devices.
Conclusion
• The common framework will provide MSOs a mechanism to integrate live TV, VOD, and app content seamlessly. It could work on legacy STBs as well as newer IP-only STBs. The framework will also allow MSOs to deploy a branded app store and apps, and it would provide an easier path to integrate new features and improve the user experience.
Thank You
How Automatic Content Analysis enables the Entertainment Experiences of the Future
Jan Neumann Senior Manager, Technical R&D
Comcast Labs DC May 17, 2016
Most metadata is at the asset level
• Genres
• Credits
• Synopsis
• Keywords
Much more data exists within the asset
• Chapters
• Moments
• Annotations
[Figure: hierarchy of video granularity, from frame to shot to scene to chapter to movie]
Why is this useful?
• Who is in this scene?
• What are the best moments on TV?
• In-game highlight navigation
• Also good for better search and recommendations
How does Automatic Content Analysis work?
[Figure: analysis pipeline. Video is processed by computer vision, audio analysis, natural language processing, and machine learning to produce frame-level annotations, scene-level annotations, and chaptering]
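The pipeline above can be sketched as a fusion step. In this hypothetical Python sketch (not Comcast's actual system), each analyzer is assumed to emit (timestamp, tag) pairs, and chapters are cut wherever the annotation timeline has a large gap; the function names and the gap heuristic are illustrative assumptions.

```python
# Hypothetical fusion step for a content-analysis pipeline. Each analyzer
# (vision, audio, NLP, ...) is assumed to emit (timestamp_seconds, tag)
# pairs; none of this is the presenter's actual API.

def merge_annotations(*streams):
    """Fuse per-modality (time, tag) streams into one sorted timeline."""
    return sorted(ann for stream in streams for ann in stream)

def chapter_boundaries(timeline, gap):
    """Cut a new chapter wherever consecutive annotations are more
    than `gap` seconds apart (a crude, illustrative heuristic)."""
    chapters = [[timeline[0]]]
    for prev, cur in zip(timeline, timeline[1:]):
        if cur[0] - prev[0] > gap:
            chapters.append([])
        chapters[-1].append(cur)
    return chapters

# Example: two annotation streams merged, then chaptered.
vision = [(0.0, "visual:face"), (1.0, "visual:station-logo")]
audio = [(0.5, "audio:speech"), (30.0, "audio:music")]
timeline = merge_annotations(vision, audio)
chapters = chapter_boundaries(timeline, gap=10.0)
```

A real system would fuse modalities with learned models rather than a fixed gap threshold, but the shape of the data flow is the same.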
Example - Annotation
Example - Chaptering
[Figure: structure of a TV program. Normal program segments, including one with a station logo, alternate with ad spots, separated by black frames]
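One chaptering cue in the example above is the black frame between spots and program segments. This is a minimal sketch, assuming frames are available as flat lists of 0-255 luma samples; the darkness threshold is an illustrative value, not a broadcast standard.

```python
# Illustrative black-frame chaptering sketch.

def mean_luma(frame):
    """Average brightness (0-255) of one frame."""
    return sum(frame) / len(frame)

def find_black_frames(frames, threshold=16):
    """Indices of frames dark enough to act as segment boundaries."""
    return [i for i, f in enumerate(frames) if mean_luma(f) < threshold]

def split_segments(n_frames, black_idx):
    """Split the frame range [0, n_frames) into (start, end) segments,
    end-exclusive, cutting at each black frame."""
    bounds = [-1] + black_idx + [n_frames]
    return [(a + 1, b) for a, b in zip(bounds, bounds[1:]) if b - a > 1]

# Example: three bright frames, one black frame, two more bright frames.
frames = [[200] * 4] * 3 + [[5] * 4] + [[200] * 4] * 2
segments = split_segments(len(frames), find_black_frames(frames))
```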
Why is it possible now?
• Big Data
• Better algorithms (deep learning)
• Cloud/GPU computing
[Figure: large-scale image recognition performance over time]
Summary
• TV content contains a lot of untapped information
• Need it to enable next-gen TV experiences
• Automatic content analytics can unlock it
Applications of Machine Learning in Cable Access Networks
Karthik Sundaresan, Nicolas Metts, Greg White (CableLabs)
Albert Cabellos-Aparicio (UPC BarcelonaTech)
Tons of Unused Network data
Lots of Hidden Knowledge in data
A family of ML algorithms:
• Hidden patterns
• Predictions vs. static instructions
• Learn from historical data
• Dynamically adapt, learn from new data
Machine learning
• Supervised (task-driven): Support Vector Machine, K Nearest Neighbors, Neural Networks
• Unsupervised (data-driven): Clustering (K-Means, Fuzzy, Hierarchical), Principal Component Analysis, Apriori Algorithm
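The supervised/unsupervised split above can be illustrated with one tiny learner from each branch: a k-nearest-neighbors classifier and 1-D k-means (Lloyd's algorithm). These are pure-Python sketches on one-dimensional data, for illustration only; real deployments would use a library and multi-dimensional features.

```python
# Two minimal 1-D learners, one from each branch of the taxonomy.
from collections import Counter

def knn_predict(train, query, k=3):
    """Supervised (task-driven): k-nearest-neighbors majority vote.
    `train` is a list of (feature, label) pairs."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def kmeans_1d(points, centers, iters=10):
    """Unsupervised (data-driven): Lloyd's k-means on 1-D points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```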
Machine Learning in Cable
• PNM: upstream equalization
• PNM: downstream spectral analysis
• Profile management
• Predictive video channel lineups / DVR recording
• Network health patterns
• Internet traffic classification
• Network traffic engineering
• Customer churn prediction
• SDN routing
• Allocating VNFs to appropriate VMs
Problem Space
Applications of interest
• PNM: full band capture
• Predictive video lineups: multicast channels
• Network health: CM/STB patterns
Applications of interest (2)
• SDN routing: network optimization
• Customer churn prediction: usage tracking
• BW prediction: fiber node splits
Best Practices, Challenges
• Clear problem definition, objective functions
• Proper features
• Suitable methods and algorithms
• Adequate representative data, training data
• Testing and cross-validation
• Metrics / evaluation criteria
• Evaluating a hypothesis
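The testing and cross-validation practices above can be sketched as a k-fold evaluation loop. This is a minimal illustration in plain Python; `fit_majority` is a hypothetical stand-in for a real learner that simply predicts the most common training label.

```python
# Sketch of k-fold cross-validation for evaluating a hypothesis.
from collections import Counter

def accuracy(model, data):
    """Fraction of (x, y) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def k_fold_scores(data, k, fit):
    """Hold each fold out once, fit on the rest, score on the held-out
    fold; returns one accuracy per fold."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i, held_out in enumerate(folds):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        scores.append(accuracy(fit(train), held_out))
    return scores

def fit_majority(train):
    """Hypothetical baseline learner: always predict the majority label."""
    label = Counter(y for _, y in train).most_common(1)[0][0]
    return lambda x: label
```

Averaging the per-fold scores gives a less optimistic estimate of generalization than scoring on the training data itself.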
Machine Learning – Past, Present and Future
Narayan Srinivasa
INTX, Boston, May 17, 2016
Intel Confidential — Do Not Forward 2
Past: Artificial Neural Networks (ANNs)
[Figure: biological neuron showing synapses, dendrites, dendritic trunk, cell body and axon]
• Biological analogs exist for the various elements of an artificial neuron
• Networks of these neurons represent an artificial neural network
• Conventional machine learning was developed based on this layered architecture
• Details were abstracted away and simplified
Past: Shallow Learners
• Goal: Learn an input to output transform/mapping
• Supervisor provides ground truth
• Feedforward fully connected architecture
• Gradient descent using output error to assign credit or blame to neurons
• Local minima issues that get worse with the depth of the network
Shallow learners required hand-crafted feature data for learning (thousands of parameters) and were also susceptible to irrelevant features in the data.
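The gradient-descent credit assignment described above can be shown on the smallest possible shallow learner: a single linear neuron fit by stochastic gradient descent on squared error. This is an illustrative sketch, not the speaker's code; the learning rate and epoch count are arbitrary choices.

```python
# One linear neuron trained by stochastic gradient descent.

def train_linear(data, lr=0.1, epochs=200):
    """Fit y ~ w*x + b by SGD; the output error assigns credit/blame
    to each parameter via the gradient of the squared error."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y   # signed output error
            w -= lr * err * x       # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err           # gradient w.r.t. b
    return w, b
```

On data generated by y = 2x + 1, the parameters converge to w = 2, b = 1; with a hidden layer and a non-convex loss, the same procedure can instead stall in the local minima mentioned above.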
Present: Deep Learning
• Multiple modules with feedforward network to compute a nonlinear input-output mapping
• Increases both selectivity and invariance of representation with depth
Idea #1: Contrastive divergence for feature detection (Hinton 2002)
[Figure: restricted Boltzmann machine with a visible layer and a hidden layer; a data point is presented to the visible units]
Idea #2: Pooling of semantically similar units after filtering (Fukushima 1986, Poggio 1998)
Deep learners are trainable at large depths (millions of parameters) using raw data; they are sensitive to features of the object and not the background.
[Figure: C1 feature maps reduced to S1 pooled maps (ReLU units). The 4x4 feature map with rows (1 4 2 6), (9 6 10 5), (4 2 0 3), (3 2 4 5) is 2x2 max-pooled to the 2x2 map with rows (9 10), (4 5)]
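The pooling step on this slide can be reproduced directly: a minimal 2x2 max-pooling function (stride 2) applied to the slide's 4x4 feature map.

```python
# 2x2 max pooling with stride 2 over a 2-D list.

def max_pool_2x2(fmap):
    """Take the maximum of each non-overlapping 2x2 window."""
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]), 2)]
            for r in range(0, len(fmap), 2)]

# The 4x4 C1 feature map from the slide:
c1 = [[1, 4, 2, 6],
      [9, 6, 10, 5],
      [4, 2, 0, 3],
      [3, 2, 4, 5]]
s1 = max_pool_2x2(c1)   # the 2x2 S1 pooled map: [[9, 10], [4, 5]]
```

Pooling discards exact position within each window, which is what gives the representation the invariance mentioned above.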
Convergence leads to Deep Learning Everywhere
• Fast and affordable compute & memory
• Large labelled data sets for training & benchmarking
• RBMs and ConvNets enable deep learners
Future: Neuromorphic Computing
The term "neuromorphic electronics" was originally coined by Carver Mead (Caltech) in the late 1980s to describe electronic analog circuits that mimic neurobiological circuits and architectures in the nervous system.
More recently, the term "neuromorphic engineering/computing" was introduced to expand the scope to include analog, digital, mixed-mode analog/digital VLSI, and software systems.
Phenomenological Models
[Figure: neuron morphology showing synapses, dendrites, dendritic trunk, cell body and axon]
• Morphological features, large connectivity, laminar structure and recurrence
• Time is implicit in the dynamics: neurons and synapses are dynamic elements
Asynchronous Processing with Spikes is both Efficient & Scalable
• The brain is asynchronous: no global clock
• A single wire encodes analog information (Lazar 2002): area efficiency & scalability
• Power is dissipated only during spike events: energy efficiency
• Robust transmission over long distances: reliability & scalability
Learning via plasticity
• Reweighting – refers to changes in synaptic strength (most models today focus on this)
• Rewiring – refers to the processes that create/prune new axonal arbors
• Reconnection – refers to the processes that form/prune new synapses
• Regeneration – refers to the processes that create/remove neurons
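Of the four mechanisms above, reweighting is the one most models capture, commonly via spike-timing-dependent plasticity (STDP). Below is a minimal pair-based STDP sketch; the amplitudes and time constant are illustrative assumptions, not values from the talk.

```python
import math

# Pair-based STDP rule for the "reweighting" mechanism. a_plus, a_minus
# and tau (in ms) are illustrative constants chosen for this sketch.

def stdp_dw(pre_t, post_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair: potentiate when the
    presynaptic spike precedes the postsynaptic spike, else depress.
    The magnitude decays exponentially with the timing difference."""
    dt = post_t - pre_t
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # causal pair: strengthen
    return -a_minus * math.exp(dt / tau)       # anti-causal: weaken
```

Because the update depends only on locally observed spike times, this kind of rule fits the co-located memory/processing theme of the following slide.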
Memory and Processing are Strongly Co-located
[Figure: in a von Neumann machine, the CPU and memory are weakly co-located; in neural systems, neural dynamics and synaptic/structural dynamics are strongly co-located]
Neuromorphic vs. Von Neumann Computers
Neuromorphic Computer:
• Operates at low speeds (< 10-100 Hz)
• Asynchronous (no global clock) – clock free
• Composed of noisy and imprecise parts, yet can produce reliable behaviors
• Spike encoding enables energy-efficient computations
• Very large connectivity (1:10,000) between processing elements
• Memory and computation are integrated
• Sparse and distributed processing
• Can learn on chip in an unsupervised or supervised manner, or with reinforcement received from interaction with its environment

Von Neumann Computer:
• Operates at very high speeds (> GHz)
• Synchronous (global clock) – clock aligned
• Composed of precise and noise-free parts, and fails even for a single bit error
• Digital encoding produces energy-hungry computations
• Connectivity is absent or, where present (as in GPUs and FPGAs), very limited (~1:100)
• Memory and computation are separated
• Dense and modular processing
• Needs a human in the loop to acquire any form of knowledge