MediaKind predicts effect of emerging codecs in 2020
December 12, 2019
MediaKind, a global player in media technology solutions and innovation, has shared its 2020 industry predictions courtesy of Principal Technologist Tony Jones and VP Portfolio Development Meir Lehrer. Jones’ predictions are:
1) The effect of emerging codecs in 2020
The media landscape in 2020 is fascinating. There has never been a time when so many competing video coding standards have been vying for use. A number of standards are established and can be expected to be in use for a long time yet: MPEG-4 AVC (H.264) has become the near-ubiquitous standard both for HD in conventional TV and for streaming of paid content. Even MPEG-2 will remain in the media delivery ecosystem for a number of years yet, because it is widely deployed in situations where replacing the decoder population is simply not cost effective.
For 4K TV, HEVC (H.265) is the established format, and while the number of 24/7 TV channels is still relatively small, the decoder population has grown to a point where it will guarantee HEVC’s dominance in that space for many years. There are, of course, more options in existence, and yet more on the way: VP9 has had some degree of success, mainly in the sphere of free content, but then there are the newer formats: AV1, VVC and two variants of EVC. AV1 can be considered a next-generation version of VP9 and appeals to some because it is intended to be royalty free.
VVC, on the other hand, is the successor to HEVC, and promises to reduce bit rates by a further 40 per cent or so (as always, this is content, operating point and implementation dependent). There is good evidence that VVC will genuinely perform well; the open question is where it will be adopted. It is possible that it might supplant HEVC in the UHD market, but its clearest path to success is probably any commercial 8K services.
VVC is expected to be fully standardised in 2020. However, for high-volume consumer electronics there is a real-world lead time to implement a codec in silicon before wide adoption can take place, and this is true of all coding standards. It is somewhat easier for new standards to become established in an adaptive bitrate streaming world, since there is the potential to supply the most appropriate format based on device capabilities.
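In an ABR context, that selection can be as simple as matching the codecs a device reports against the renditions a service has prepared. The following Python sketch illustrates the idea; the preference order and the device capability sets are illustrative assumptions, not any particular player's API.

```python
# Minimal sketch of per-device codec selection for ABR delivery.
# Codec preference order and device capability sets are illustrative assumptions.

CODEC_PREFERENCE = ["vvc", "av1", "hevc", "avc"]  # most efficient first

def pick_codec(supported):
    """Return the most efficient codec the device can decode, falling back to AVC."""
    for codec in CODEC_PREFERENCE:
        if codec in supported:
            return codec
    return "avc"  # near-ubiquitous fallback

# Hypothetical device capability reports.
devices = {
    "2016 smart TV": {"avc", "hevc"},
    "recent flagship phone": {"avc", "hevc", "av1"},
    "legacy set-top box": {"avc"},
}

for name, caps in devices.items():
    print(f"{name}: serve the {pick_codec(caps).upper()} rendition")
```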
Despite this possibility, AVC remains stubbornly in place as the clearly dominant format, with HEVC next in rank. There are two profiles of EVC, Baseline and Main, but they are really different standards, as Main is not a superset of Baseline. EVC Baseline uses only techniques whose patents have expired, whereas the Main profile uses IPR from just four companies and is licensable. It may be some time before another standard overtakes AVC for streaming, but at the moment the strongest candidate would appear to be HEVC, despite the licensing uncertainties.
2) Remote Production
Remote production is all about making event production easier, cheaper and environmentally better. The premise is that, rather than taking a full outside broadcast production truck to an event, only a minimal set of equipment and staff needs to be sent. It relies on being able to bring a complete set of camera feeds back to the studio, so that the fixed infrastructure can be used to produce the event.
For one-off events, or where there is production value in producing at the event itself, it may not be the best option; but for recurring events, particularly those that occur at different times, the ability to use fixed studio infrastructure is a real benefit. The enabler for this has been improved connectivity, allowing multiple camera feeds to be brought back from event locations.
As we move into 2020, a number of technologies will work together to make remote production more appealing:
1) ABR streaming will make it easier for more live content to reach viewers;
2) HEVC (and eventually VVC) will reduce the bit rate needed for each camera’s content;
3) Better connectivity from event locations will make backhaul over IP easier, and 5G network slicing will help further (a rough backhaul budget is sketched after this list);
4) Better resilience schemes, such as SRT, will come to the fore.
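To make points 2 and 3 concrete, a rough contribution-bandwidth budget for a remotely produced event might look like the Python sketch below. The per-camera bit rates and overhead figure are illustrative assumptions rather than measured values.

```python
# Rough backhaul budget for remote production: N camera feeds over one IP link.
# Per-camera contribution bit rates are assumptions for illustration only.

CONTRIBUTION_MBPS = {
    "avc": 50.0,   # assumed HD contribution rate with AVC
    "hevc": 30.0,  # assumed HD contribution rate with HEVC
    "vvc": 18.0,   # assumed rate if VVC were used (~40% below HEVC)
}

def backhaul_mbps(cameras, codec, overhead=0.15):
    """Total link capacity needed, with headroom for protocol/retransmission overhead (e.g. SRT)."""
    payload = cameras * CONTRIBUTION_MBPS[codec]
    return payload * (1 + overhead)

for codec in ("avc", "hevc", "vvc"):
    print(f"{codec.upper()}: 8 cameras need roughly {backhaul_mbps(8, codec):.0f} Mbps of backhaul")
```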
Lehrer’s predictions are:
1. Innovation in cloud technology
As we enter 2020, the broad-scale maturation and industry-wide adoption of container orchestration in data centres is more exciting than ever. The drive to public cloud providers has largely been two-fold. First, ramping hardware up and down at a moment’s notice has always been cost prohibitive for operators running data centres of their own. Public clouds are excellent avenues for testing, prototyping, launching and initially scaling new services. However, long-term business planning requires careful review of all relevant cost structures, such as storage (terabytes), processing (CPUs/VMs) and egress bandwidth (Gbps), to determine the best pricing model and rate plan with the selected public cloud provider.
Second, the complexity of operating a data centre, with multiple software systems, redundancy solutions, networking requirements and monitoring needs, has always been a challenge. Virtualisation has not brought significant relief to these issues, although virtual machines did aid growth relative to purely bare-metal data centre solutions. Given the operational costs associated with long-term growth on a public cloud platform, operators running large-scale platforms needed a solution to these complexity issues if they were to consider retaining the independence of operating their own infrastructure.
Orchestrated platforms and the use of containers provide operators with a good working solution to the second problem noted above. For the first issue, initial hardware scaling, operators can now at least weigh whether and when a public cloud or a private orchestrated cloud is more appropriate for each business use case.
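That comparison ultimately comes down to the cost structures mentioned above. As a back-of-the-envelope illustration, a Python sketch along the following lines can frame the decision; all prices and workload figures are assumptions, not quotes from any provider.

```python
# Illustrative monthly cost comparison: public cloud vs. private orchestrated cloud.
# All prices and workload figures are assumptions made for the sake of the sketch.

def public_cloud_monthly(storage_tb, vcpus, egress_gbps,
                         storage_per_tb=20.0,   # assumed $/TB-month
                         vcpu_per_month=35.0,   # assumed $/vCPU-month
                         egress_per_tb=80.0):   # assumed $/TB transferred
    seconds_per_month = 30 * 24 * 3600
    egress_tb = egress_gbps / 8 * seconds_per_month / 1000  # sustained Gbps -> TB/month
    return (storage_tb * storage_per_tb
            + vcpus * vcpu_per_month
            + egress_tb * egress_per_tb)

def private_cloud_monthly(capex, amortisation_months, monthly_opex):
    # Amortise the hardware spend and add ongoing operations cost.
    return capex / amortisation_months + monthly_opex

workload = dict(storage_tb=500, vcpus=400, egress_gbps=2.0)
print(f"Public cloud:  ${public_cloud_monthly(**workload):,.0f}/month")
print(f"Private cloud: ${private_cloud_monthly(1_500_000, 36, 25_000):,.0f}/month")
```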
2. Converting linear events into fully fledged on-demand assets will be a major test for real production environments
For operators (MVPDs) and direct-to-consumer broadcasters looking to convert linear events into fully fledged on-demand assets, the biggest challenge is deciding how to execute L2F/L2V (Live to File/Live to VoD) within an existing operational flow. The issue is that these existing architectures, at least for operators, already support VoD to either legacy devices (i.e. set-top boxes) or OTT devices (i.e. mobile, ABR based). Incoming linear programming would need to become a third managed workflow, unless the operator or broadcaster can design a way to assetise programmes from their existing linear channels and drop these new assets (VoD programmes) into existing legacy and/or VoD workflows, creating the least possible operational disruption and fewest swivel-chair systems.
The bottom line is reducing operational disruption. Client software user interfaces for finding content (VoD and linear), as well as the supporting back-office infrastructure, already exist, so the L2F/L2V processing of linear channel events should drop into these existing processes as much as possible. It should now be clear that the lines between live linear, catch-up, VoD and DVR/cDVR have become hopelessly blurred, simply because customers demand viewing that fits their on-the-go lifestyles. The system that can best cater to all of these viewing use cases under one roof will win the competition for operators’ and broadcasters’ hearts and minds.
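One way to picture "dropping into existing processes" is a small piece of glue that cuts a programme out of the continuous linear recording and registers it through whatever VoD ingest already exists. The Python sketch below does exactly that; the field names and the clip_manifest/submit helpers are hypothetical stand-ins for an operator's real recording index and ingest API.

```python
# Hypothetical L2F/L2V glue: turn a programme on a linear channel into an asset
# for an existing VoD ingest workflow. All names and fields are illustrative.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VodAsset:
    asset_id: str
    title: str
    source_channel: str
    start: datetime
    duration: timedelta
    manifest_url: str  # points at the already-recorded segments; no re-encode

def assetise_programme(channel_id, epg_entry, recording_index):
    """Cut a programme out of the continuous channel recording using EPG times."""
    start, end = epg_entry["start"], epg_entry["end"]
    manifest = recording_index.clip_manifest(channel_id, start, end)  # assumed helper
    return VodAsset(
        asset_id=f"{channel_id}-{start:%Y%m%d%H%M}",
        title=epg_entry["title"],
        source_channel=channel_id,
        start=start,
        duration=end - start,
        manifest_url=manifest,
    )

def publish(asset, vod_ingest):
    # Drop the new asset into the *existing* VoD workflow (catalogue, packaging, DRM)
    # rather than standing up a third, parallel pipeline for linear-sourced content.
    vod_ingest.submit(asset)  # assumed existing ingest API
```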
3. The future of smart cities
The concept of smart cities is now out of its early-adopter phase, from both a private and a public sector perspective. Nest, Ring and other consumer cameras are mainstream, governments are introducing far more cameras, and service providers (e.g. cable operators and telcos) have standardised home security offers, including camera access on multiple device types and video retention options.
This growth across all sectors will drive a focus on monetisation, cost efficiency and key feature improvements as video cameras’ presence keeps increasing. AI (artificial intelligence) will become a growing part of offline and online video analytics, in order to better leverage the expanding inventory of streaming and file-based assets.
Cost efficiency will become a priority in scaling video storage and achieving the lowest possible latency on record and playback (i.e. disk I/O). This will manifest in a more rapid migration to public and private cloud for video storage and streaming, as opposed to today’s largely appliance-based model of distributed bespoke servers, which will become unsustainable. The only way to reasonably leverage AI or other CPU-intensive video analytics will be to pool all live and stored video on the same centralised platform (such as a public or private cloud, whether the software runs on virtual or bare-metal architectures).
Monetisation of the private sector will start bleeding over into public sector video. Much in the same way that the US has seen “Adopt a Road” programmes to help augment public sector budgets for roadwork, it would not be shocking to see governments of all shapes and sizes start to introduce ad insertion on public-square video cameras to help offset upfront and operational costs. This will also require that all video be ABR, as opposed to the more traditional transport stream based CBR video used in video surveillance today (much of it CBR MPEG-4 over ONVIF RTSP delivery).
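Ad insertion is far more straightforward on segmented ABR streams than on continuous CBR transport streams, because an ad can be spliced at a segment boundary simply by rewriting the playlist. A minimal, hypothetical Python sketch of that playlist-level splice follows (real deployments would typically pick the splice point from signalling such as SCTE-35 markers):

```python
# Minimal sketch of playlist-level ad splicing on a segmented (ABR) stream.
# Segment names and the splice point are illustrative only.

def splice_ads(content_segments, ad_segments, splice_index):
    """Insert ad segments into the content playlist at a segment boundary."""
    return (content_segments[:splice_index]
            + ad_segments
            + content_segments[splice_index:])

camera_feed = [f"cam_seg_{i:04d}.ts" for i in range(6)]
ad_break = ["ad_seg_0001.ts", "ad_seg_0002.ts"]

for uri in splice_ads(camera_feed, ad_break, splice_index=3):
    print(uri)
```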