NEAR-LIVE CONTENT DISTRIBUTION WITH ASPERA FASPSTREAM - ENABLING THE SECOND SCREEN EXPERIENCE
ASPERA’S MISSION
Creating next-generation transport technologies
that move the world’s digital assets at maximum speed,
regardless of file size, transfer distance and network conditions.
“… an industry game changer”
— 65th EMMY AWARDS
TRENDS IN THE OTT MEDIA ‘REVOLUTION’
• Explosive growth in the size and volume of digital content
• Proliferation of multiple video formats, devices, and connected TVs
• Insatiable appetite of audiences to consume more media, more quickly, on more devices
• Growing audience expectations around quality and immediacy of access
CHALLENGES WITH TCP AND ALTERNATIVE TECHNOLOGIES
Distance degrades conditions on all networks
• Latency (round-trip time) increases
• Packet loss increases
• Fast networks are just as prone to degradation
TCP performance degrades with distance
• The throughput bottleneck becomes more severe with increased latency and packet loss
TCP does not scale with bandwidth
• TCP was designed for low-bandwidth networks
• Adding more bandwidth does not improve throughput
Alternative technologies
• TCP-based – network latency and packet loss must be low
• UDP traffic blasters – inefficient and waste bandwidth
• Modified TCP – improves TCP performance but insufficient for fast networks
• Data caching – inappropriate for many large-file transfer workflows
• Data compression – time-consuming and impractical for certain file types
• CDNs and co-location build-outs – high overhead and expensive to scale
FASP® – HIGH-PERFORMANCE DATA TRANSPORT
Maximum transfer speed
• Optimal end-to-end throughput efficiency
• Transfer performance scales with bandwidth, independent of transfer distance and resilient to packet loss
Congestion avoidance and policy control
• Automatic, full utilization of available bandwidth
• On-the-fly prioritization and bandwidth allocation
Uncompromising security and reliability
• Secure user/endpoint authentication
• AES-128 cryptography in transit and at rest
Scalable management, monitoring and control
• Real-time progress, performance and bandwidth utilization
• Detailed transfer history, logging, and manifest
Low overhead
• Less than 0.1% overhead at 30% packet loss
• High performance with large files or large sets of small files
Resulting in
• Transfers up to thousands of times faster than FTP, with precise and predictable transfer times
• Extreme scalability (concurrency and throughput)
FASP® – PERFORMANCE BREAKTHROUGH
• Location agnostic: FASP transfer speeds do not degrade as transfer distance increases, while FTP speeds do
• Predictable and reliable: transfer times decrease linearly as bandwidth increases, while FTP transfer times do not improve with bandwidth
• Versatile: supports large files just as easily as large sets of small files
Transfer times, FTP vs. Aspera FASP™:
                          Across US               US – Europe             US – Asia
                          10 GB      100 GB       10 GB      100 GB       10 GB        100 GB
FTP (45 Mbps,
100 Mbps, or 1 Gbps)      10-20 Hrs  Impractical  15-20 Hrs  Impractical  Impractical  Impractical
Aspera FASP™
  45 Mbps                 32 Min     5.3 Hrs      32 Min     5.3 Hrs      32 Min       5.3 Hrs
  100 Mbps                14 Min     2.3 Hrs      14 Min     2.3 Hrs      14 Min       2.3 Hrs
  1 Gbps                  1.4 Min    14 Min       1.4 Min    14 Min       1.4 Min      14 Min
NEXT GEN FASP™ – PERFORMANCE BREAKTHROUGH
                          Across US               US – Europe             US – Asia
                          10 GB      100 GB       10 GB      100 GB       10 GB        100 GB
FTP (1 Gbps,
40 Gbps, or 80 Gbps)      10-20 Hrs  Impractical  15-20 Hrs  Impractical  Impractical  Impractical
Aspera FASP™
  1 Gbps                  1.4 Min    14 Min       1.4 Min    14 Min       1.4 Min      14 Min
  40 Gbps                 2.2 Sec    0.4 Min      2.2 Sec    0.4 Min      2.2 Sec      0.4 Min
  80 Gbps                 1.3 Sec    0.2 Min      1.3 Sec    0.2 Min      1.3 Sec      0.2 Min
DISTANCE IS ALL AROUND! HOW FAST CAN YOU TRANSFER WITH TCP-BASED PROTOCOLS SUCH AS FTP?
TCP performance degrades with distance. What is the maximum theoretical rate possible per TCP session between these endpoints?
• Doha → Cologne: 120 ms RTT, 0.5% packet loss = 1.38 Mbps/flow
• Cape Town → London: 540 ms RTT, 2% packet loss = 0.15 Mbps/flow
• London → Los Angeles: 250 ms RTT, 2% packet loss = 0.66 Mbps/flow
(Figures based upon assumed averages.)
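The slide does not name the model behind these per-flow ceilings, but they are consistent with the well-known Mathis loss-limited TCP throughput bound, rate ≤ MSS / (RTT · √p). A sketch (the 1460-byte MSS is our assumption; it reproduces the first two figures closely, while the 0.66 Mbps London–Los Angeles figure suggests the slide assumed a different MSS):

```python
import math

def tcp_max_throughput_bps(rtt_s, loss, mss_bytes=1460):
    """Mathis loss-limited TCP throughput bound: MSS / (RTT * sqrt(p)), in bits/s."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss))

for name, rtt, p in [("Doha -> Cologne", 0.120, 0.005),
                     ("Cape Town -> London", 0.540, 0.02),
                     ("London -> Los Angeles", 0.250, 0.02)]:
    print(f"{name}: {tcp_max_throughput_bps(rtt, p) / 1e6:.2f} Mbps/flow")
```

The key property is in the denominator: doubling either the RTT or √(loss) halves the achievable rate, no matter how much bandwidth is available.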
ASPERA PRODUCT PORTFOLIO
• Transfer clients – web, desktop, email, mobile, embedded
• Web applications – distribution, sharing, collaboration and exchange
• Management & automation – transfer management, monitoring and automation
• Synchronization – scalable, high-performance synchronization and replication
• Transfer servers – built on FASP®, the patented high-speed transport
Any data size, any distance, any network conditions. Any infrastructure: block, object, on-premises, cloud. Deployable private on-premise, public and private cloud, or hybrid.
ASPERA FASPSTREAM – BUSINESS NEEDS
Enable customers to:
• Operate on received data before the end of the file is reached
• Bypass the delay and complexity of a file-server-and-storage “middle man” when it is unnecessary, and deliver data directly between applications
• Provide an extremely simple and familiar interface to programmers, allowing them to read and write data directly from a stream
• Facilitate implementations of proxies, transcoders, (sequential-access) file systems and other “middleware” applications
FASPSTREAM USE CASES
Media processing and distribution
• Inline transcoding: begin the encoding process while the transfer is still in progress
• Accelerate media delivery or play-out
• Perform inline file validation while data is being transferred, rather than upon transfer completion
Remote imaging or data capture
• Speed capture and distribution from remote locations to improve data acquisition
• Initiate time-sensitive image-processing analysis of large data files sooner to make faster business decisions
Improve healthcare decisions
• Transfer high-resolution medical images with speed, security and privacy
• Enable diagnostic-quality viewing by healthcare practitioners in remote locations to eliminate wait times and accelerate diagnosis
Enhance legal discovery
• Accelerate collection, indexing, processing, and analysis
• Enable faster recovery and data analysis to view information relevant to a legal hold or case
ASPERA FASPSTREAM API
Key features
• Utilizes FASP transport sessions to send and receive byte-stream data from application memory
• Compatible with any Aspera transfer server
• Receiver: directly access the incoming high-speed FASPStream in memory instead of waiting for the transfer to complete
• Sender: use the FASPStream API to transfer data directly from memory rather than reading the source from disk
• Send any stream of bytes over a high-speed FASP connection, not just files
• Broad platform support, including .NET, Java, and C++
• Flexible integration approaches to enhance existing workflows and allow for different deployment models
Key benefits
• Easily integrate Aspera high-speed transfer technology directly into your applications
• Start processing incoming data as soon as the first bytes are transferred, rather than waiting for the entire transfer to complete
• Initiate high-speed FASP transfers, sending any stream of bytes to any receiver directly from within your application
• Enable in-memory access to the data for faster processing and better decision-making
• Utilize the other Aspera APIs to complement the FASPStream API
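The core pattern — the receiver consumes bytes in memory the moment they arrive, instead of waiting for a complete file on disk — can be illustrated with plain TCP sockets. This is only a sketch of the streaming pattern, not the Aspera FASPStream API (which rides on FASP, not TCP); the port number and chunking are arbitrary:

```python
import socket
import threading
import time

PORT = 56123  # arbitrary local port for the demo

def sender(chunks):
    """Stands in for a capture source: streams chunks as they are 'generated'."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    for chunk in chunks:
        conn.sendall(chunk)            # send each piece as soon as it exists
    conn.close()
    srv.close()

def receiver():
    """Processes each chunk the moment it arrives; no waiting for a whole file."""
    sock = socket.socket()
    for _ in range(100):               # retry until the sender is listening
        try:
            sock.connect(("127.0.0.1", PORT))
            break
        except ConnectionRefusedError:
            time.sleep(0.02)
    received = 0
    while True:
        data = sock.recv(4096)
        if not data:
            break
        received += len(data)          # in-memory processing happens here
    sock.close()
    return received

chunks = [b"x" * 1000 for _ in range(5)]
t = threading.Thread(target=sender, args=(chunks,))
t.start()
total = receiver()
t.join()
print(total)  # 5000
```

In the FASPStream model, the body of the receive loop is where transcoding, validation, or analysis would begin, while the rest of the stream is still in flight.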
BEFORE FASPSTREAM: SIMPLIFIED TRANSFER WORKFLOW USING FASP
1. An application creates or captures data. The large file is stored locally on disk, ready to be shared.
2. An Aspera client reads the file from disk and sends it to the Aspera Node using FASP.
3. The Aspera Node receives the file and writes it to disk.
4. The other application needs this file, so it transfers the file using Aspera FASP. It must wait for the transfer to finish before using the file.
CUSTOM ENDPOINTS WITH FASPSTREAM EMBEDDED
• Enable in-memory to in-memory data transfer using FASPStream over the FASP protocol
• FASPStream sends byte-stream data as it is being captured or created
• Developers have full control over pre- and post-processing at both the sender and receiver endpoints
CUSTOM SENDER WITH ASPERA TRANSFER SERVER
• Enable in-memory data to be transferred to an Aspera server, creating a FASPStream-to-file transfer
• Applications (programs, imaging equipment, video equipment) can send data as it is being generated or captured
• Data transfer begins immediately, rather than waiting for a large file to be completely written to disk
CUSTOM RECEIVER WITH ASPERA TRANSFER SERVER
• Send a byte-stream of data as a file is being read by the Aspera Transfer Server, creating a file-to-FASPStream transfer
• Applications can access the data in memory as it is being received, rather than waiting for the complete file
• Act on content as soon as the first bytes arrive
FASPSTREAM SDK COMPATIBILITY
FASPStream APIs for:
• Windows – 64-bit and 32-bit target environments; Java, C++, and .NET on Visual Studio 2010 & 2012
• Linux – 64-bit and 32-bit target environments; Java & C++
USE CASES
USE CASE: LIVE STREAMING DATA WORKFLOW
• Live content can be ingested, transformed and delivered in near real time using FASPStream
• Live content is delivered to the Content Receiver as it is being captured
• The Content Receiver sends the byte-stream to the transcoding service
• As bytes are transcoded, the transcoded content is sent to an Origin Server
• Once inside a Content Delivery Network, the live content is distributed to end users
TRADITIONAL CONTENT DELIVERY NETWORK INFRASTRUCTURE
• A traditional CDN uses conventional transfer protocols that take longer to distribute media
• Each part of the process requires a file to be fully delivered before that stage of the workflow can begin
• Because traditional protocols deliver media at much slower speeds, content must be replicated onto edge servers to be closer to the consumer
USE CASE: EXISTING CONTENT DELIVERY NETWORK USING FASPSTREAM
• The content provider uses FASPStream to deliver the media file as a byte stream to the transcoding service
• The transcoding process begins as soon as bytes are received
• As transcoding proceeds, the completed transcoded bytes are sent to an Origin Server
• Once the bytes reach the Origin Server, it can use FASPStream to distribute them to Points of Presence (PoPs) or edge servers
USE CASE: CONTENT DELIVERY NETWORK
• The content provider uses FASPStream to deliver the media file as a byte stream to the transcoding service
• Transcoding begins as soon as bytes are received, and transcoded bytes are sent to an Origin Server
• As the Origin Server receives bytes, it distributes the media content using FASPStream across the entire network
• Because FASPStream uses FASP and overcomes packet loss and latency, the media content does not need to reside close to the end user
USE CASE: FASPSTREAM DELIVERY OF IN-FLIGHT ENTERTAINMENT
• FASPStream, using the FASP protocol, delivers updated in-flight entertainment content to aircraft
• When an aircraft arrives at the terminal, FASPStream begins delivering a byte-stream of in-flight entertainment content while the aircraft is at the gate
• The aircraft receives the updated media content and can offer it to customers
TRANSCODING WORKFLOW
• FASPStream can be used in conjunction with workflow-automation processes
• A workflow can be set up to deliver media content for transcoding whenever new media arrives, or to deliver files to a filesystem for processing
• The workflow can finish by delivering the transcoded content to another server or back to the original server
TELESTREAM VANTAGE WITH EMBEDDED ASPERA FASPSTREAM
[Architecture diagram: an SDI broadcast feed is captured as an MPEG-2 transport stream and moved between cloud infrastructure, a remote datacenter and a CDN via high-speed FASPStream uploads and high-speed pull or push (FASPStream), with remote browse over HTTPS; paths shown with and without FASPStream.]
LIVE VIDEO WITH FASPSTREAM AT NAB2015
[Demo diagram: a local FASPStream sender running on Linux in Las Vegas delivers live video over FASP to a remote FASPStream receiver in New York, across a distance link with 1-2% packet loss.]
WHY DISTRIBUTED CLOUD ARCHITECTURES FOR NEAR LIVE?
• Near-live experiences have highly bursty processing and distribution requirements
  – Transcoding generates hundreds of varieties of bitrates and formats for a multitude of target devices
  – Audiences peak at millions of concurrent streams and then die off
• Near “zero delay” in the video experience is expected
  – The “second screen” depends on near-instant access and replay
• Linear transcoding approaches simply cannot meet demand (and are too expensive for short-term use!); parallel “cloud” architectures are essential
• On-premise bandwidth for distribution is also impractical: millions of streams equals terabits per second
EXAMPLE – THE WORLD CUP CHALLENGE
June 12 – July 13, 2014 • 12 stadiums • 64 games • IBC in Rio de Janeiro
EVS appointed for multilateral production
• On-site live production
• IBC file-based video management
• Multimedia production and distribution
Production goals
• Up to 24 different camera angles
• Streamed live to millions of viewers worldwide
• Supporting simultaneous matches
• Delivered to multiple devices and formats
Deliverables
• 6 live streams – HLS streaming of 6 HD streams to tablets and mobiles per match
• 20+ replay cameras – on-demand replays of selected events from up to 20+ cameras on the field
• 4,000+ VoD elements – exclusive on-demand multimedia edits
EVS C-CAST: THE WORLD’S LARGEST CLOUD-BASED LIVE SPORTS STREAMING EVENT, ENABLING A GROUNDBREAKING SECOND-SCREEN EXPERIENCE
WORLD CUP “NEAR LIVE” SOLUTION
[Architecture: multi-screen capture by EVS → scale-out high-speed transfer (FASP) by Aspera → scalable live transcoding by Elemental → global delivery, live and on-demand, by Akamai.]
TECHNOLOGY CHALLENGES IN TRANSPORT
• Live streaming – a real-time constraint!
  6 feeds @ 10 Mbps = 60 Mbps, × 2 double-headers (games at the same time), × 2 for safety (in case games were delayed) = 240 Mbps
• VoD multicam near-live replays require 10 Mbps per stream – up to 24 clips @ 10 Mbps = 240 Mbps, × 2 games at the same time = 480 Mbps aggregate
• Obtainable throughput using TCP?
  throughput (bits/s) = TCP window size (bits) / latency (s)
  Maximum throughput per session = 65535 × 8 / 0.2 = 2,621,400 bps ≈ 2.62 Mbps
  Real throughput (2% packet loss, 0.2 s RTT) ≈ 0.5 Mbps
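The window-limit arithmetic above can be checked directly. A sketch (65535 bytes is the classic maximum TCP receive window without window scaling, and 0.2 s is the WAN round-trip time used throughout this deck):

```python
def window_limited_bps(window_bytes, rtt_s):
    """Throughput ceiling when TCP is limited by its receive window: window / RTT."""
    return window_bytes * 8 / rtt_s

per_session = window_limited_bps(65535, 0.2)
print(f"{per_session:.0f} bps")          # 2621400 bps
print(f"{per_session / 1e6:.2f} Mbps")   # 2.62 Mbps

# Aggregate replay requirement: 24 clips x 10 Mbps x 2 simultaneous games
aggregate = 24 * 10e6 * 2
print(f"Sessions needed at the window-limited ceiling: {aggregate / per_session:.0f}")
```

Even before packet loss is considered, serving the 480 Mbps aggregate would take well over a hundred parallel window-limited TCP sessions.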
LIVE STREAMING INGEST (ASPERA)
Over 27 TB of video data and nearly 14,000 hours of video transferred, over a WAN with 200 ms of latency and up to 10% packet loss.
Key metrics                      Total over 62 games    Average per game
Transfer time (hours)            13,857                 216
GB transferred                   27,237                 426
Number of transfers              14,073                 220
Number of files transferred      2,706,922              42,296
MEETING THE WORLD CUP CHALLENGE
• Live streams: 660,000 minutes
• Transcoded output: × 4.3 = 2.8 million minutes
• Delivered streams: × 321 = 15 million hours, 35 million unique viewers
COMPARE FASPSTREAM TO TCP
[Chart: “TCP Can Achieve a Fraction of Playback Rate” – percentage of playback rate achieved (0-100%) versus playback rates of 1 to 40 Mbps, for four network profiles: local (20 ms, 0.1% loss), cross-USA (100 ms, 1% loss), international (250 ms, 2% loss), and bad wireless/satellite (500 ms, 5% loss).]
TCP COMPARISON
Test parameters:
• Three video bitrates: 6, 10 and 40 Mbps
• 100 streaming transfers of 30-second videos
• Over a WAN with 200 ms delay and 2% packet loss
Test results show the following delay before playback begins, with no guarantee of smooth playback:
Video bit rate    Time before startup
6 Mbps            6-22 seconds
10 Mbps           15-25 seconds
40 Mbps           72-103 seconds
TESTING FASPSTREAM
Aspera created a testing framework to measure performance empirically: the same three bitrates, 100 tests of a 30-second clip, Linux server to Linux server over FASP across an emulated WAN with 2% packet loss and 200 ms delay. Test results show the time before an expected skip:
Video bit rate    Time before expected skip
6 Mbps            6.60 days
10 Mbps           3.96 days
40 Mbps           0.99 days
THEORY: PROBABILITY OF WAITING FOR M RTOs OR GREATER
Definitions:
• Video playing rate: X bytes/s; packet size: Y bytes
• Minimum number of packets needed per second for smooth play: N = X/Y (could vary for different video stream players)
• Packet loss ratio 0 ≤ p ≤ 1: the probability of a packet getting lost in transmission; lost packets must be retransmitted, and each retransmission is lost with the same probability p
The probability of waiting ≥ M RTOs for a video stream with the N packets/s requirement is
P(T ≥ M · RTO) = 1 − (1 − p^M)^N
Proof (counting M retransmissions after the original send):
• A packet NOT received within 1 RTO has probability p² (lost in the original send and in the retransmission)
• A packet NOT received within M RTOs has probability p^(M+1) (lost in the original send and in all M following retransmissions)
• A packet received within M RTOs has probability 1 − p^(M+1)
• All N packets received within M retransmissions: (1 − p^(M+1))^N
• The probability of waiting ≥ M+1 RTOs is therefore 1 − (1 − p^(M+1))^N, which is the stated formula with M in place of M+1
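The formula is easy to evaluate numerically. A sketch (the 1460-byte packet size is our assumption; the 6 Mbps rate and 2% loss come from the deck's WAN test setup):

```python
def p_wait_ge(m_rtos, p_loss, n_packets_per_s):
    """P(T >= M * RTO) = 1 - (1 - p**M)**N: the chance that at least one of
    the N packets needed this second takes M or more RTOs to arrive."""
    return 1 - (1 - p_loss ** m_rtos) ** n_packets_per_s

n = 6e6 / (1460 * 8)   # ~514 packets/s for a 6 Mbps stream of 1460-byte packets
for m in (1, 2, 3, 4):
    print(f"M={m}: P(T >= {m} RTOs) = {p_wait_ge(m, 0.02, n):.3g}")
```

The probability falls roughly geometrically in M: at 2% loss a one-RTO wait is near certain every second, while a three-RTO wait is already below one percent.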
HOW MUCH BUFFER TIME FOR “GLITCH-FREE” PLAYBACK?
[Chart: visualization of the performance analysis – initial buffer in seconds (0-6) needed for at most one “glitch” per hour, versus playback rates of 1 to 40 Mbps (curves for 6, 10 and 40 Mbps video), for four network profiles: local (20 ms, 0.1% loss), cross-USA (100 ms, 1% loss), international (250 ms, 2% loss), and bad wireless/satellite (500 ms, 5% loss).]
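A buffer size of this kind can be estimated from the probability model on the theory slide: find the smallest M (in RTOs) for which the expected number of stalls per hour, 3600 · (1 − (1 − p^M)^N), drops below one. This is only a sketch; the 1460-byte packet size, approximating the RTO by the RTT, and treating seconds as independent are our assumptions:

```python
def required_buffer_s(rtt_s, p_loss, rate_bps, packet_bytes=1460,
                      max_glitches_per_hour=1.0):
    """Smallest buffer (seconds) keeping expected stalls per hour below target,
    using the slide's model P(stall in a second) = 1 - (1 - p**M)**N."""
    n = rate_bps / (packet_bytes * 8)          # packets needed per second
    m = 1
    while 3600 * (1 - (1 - p_loss ** m) ** n) >= max_glitches_per_hour:
        m += 1
    return m * rtt_s                           # RTO approximated by the RTT

# International profile from the chart: 250 ms RTT, 2% loss, 6 Mbps video
print(f"{required_buffer_s(0.25, 0.02, 6e6):.2f} s")
```

Under these assumptions a 6 Mbps stream on the international profile needs a buffer of four RTOs, about one second; higher bitrates and lossier links push the requirement up.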
WAN TRANSFER CHALLENGE IS COMPOUNDED IN THE CLOUD
CLOUD STORAGE & BIG DATA: TYPICAL APPLICATION OPTIONS
ASPERA ON-DEMAND WITH DIRECT-TO-CLOUD
ASPERA AUTOSCALING PLATFORM
WORKFLOW AUTOMATION: ASPERA ORCHESTRATOR
Web-based application and SDK for creating and managing automated workflows, from simple file forwarding to complex process orchestration.
Applications and use cases
• Advanced contribution and automation
• High-volume processing and transformation
• Ad ingest and insertion for video-on-demand
• End-to-end content preparation and distribution
Features and benefits
• Intuitive web-based graphical workflow designer
• Real-time monitoring of active workflows
• Unattended and interactive workflows, with ad hoc and recurring (scheduled) execution
• Extensive and growing library of plug-ins for third-party product integration
• Highly scalable in support of the most demanding file-based workflows
• Integration with the Aspera file-transfer environment and Aspera Console for reporting
• Open architecture for full customization
GROWING 3RD-PARTY LIBRARY, FREE TO ORCHESTRATOR USERS
Plug-in categories include: database/storage, antivirus, image manipulation (ImageMagick), media management, IT management, encryption/virus scanning, watermarking, ad insertion and media management, video quality control (MediaInfo), transcoding, file transfer (FTP/SFTP/XFTP, SCP), and cloud.
Thank you Visit us online at www.asperasoft.com or reach us at [email protected]
For more information on any Aspera product, see us at H10.1/R15