
Cloud Atlas: MovieLabs’ 2030 Vision Roadmap for Industry-Wide M&E Interoperability


Streaming has always been cloud-focused, and now the production end of content development is examining ways to standardize its digital transformation. The cloud has changed the way many industries do things, and media and entertainment is no exception.

MovieLabs is a nonprofit R&D joint venture created in 2006 by Disney, Twentieth Century Fox, Sony, Universal, Paramount, and Warner Bros. to architect next-generation production technologies and processes. In many ways, MovieLabs stands at the fulcrum of digital transformation for the media and entertainment industry’s pre-eminent players.

“There’s a transition happening as production moves to the cloud that is going to require reimplementing lots of aspects of workflows,” says Jim Helman, CTO of MovieLabs (Figure 1). “Can we take this opportunity to also improve the interoperability across different stages of the workflow?”

Jim Helman, MovieLabs
Figure 1. Jim Helman, CTO, MovieLabs

In recent years, MovieLabs’ focus has shifted from distribution to its current emphasis on metadata and production technologies. The organization either works with or seeks input from almost every entity that touches media production, including major studios, cloud providers, and multiple vendors involved in each part of the workflow. Its goal is to identify infrastructure that is cloud-scalable, internet-accessible, and interoperable. It also wants to develop common approaches, rather than have each studio create its own particular flavor of one or another element that could benefit from standardization (as many streaming services have done).

Encouraging organizations to migrate to cloud workflows can pose cultural challenges, particularly if their production operations were previously on-prem or built on proprietary systems. MovieLabs has established a 10-year goal called the 2030 Vision (Figure 2), outlined in a 50-plus-page paper. I’ll cover the highlights; you can find much more detail at movielabs.com.

MovieLabs 2030 Vision
Figure 2. MovieLabs’ 2030 Vision report

Ten Principles

MovieLabs identified the main principles it felt were important to creating a more efficient and more secure cloud-based, software-defined workflow, shown in Figure 3.

MovieLabs 10 Principles

Figure 3. The 10 principles of MovieLabs’ 2030 Vision initiative

In the interest of space, we can boil down these 10 principles to a few key takeaways:

  • Assets are ingested and stored in the cloud (public cloud or private cloud in data centers), or even on-prem, and do not need to be moved.
  • Applications go to assets (and not the other way around, which is more common today).
  • Workflows are non-destructive and use common underlying data formats and metadata.
  • Media elements are referenced using a universal linking system.
  • Security is based on a zero-trust model.

Since MovieLabs started down the road to its 2030 Vision, numerous vendors have developed case studies, including Overcast’s work with Britain’s Royal Opera House. In that partnership, 2,000 staffers have access to 70 different shows and all of the assets that make up the opera house’s live and VOD content. Another case study features Sony creating a single source of truth on Amazon Web Services for 30TB of master files, comprising 975 shots and 1,750 pulls of 12-bit DPX VFX files.

Most of these case studies illustrate at least a few of the 10 principles. Before discussing them in more detail, let’s look at some of the key terminology involved and how and why MovieLabs worked to develop a media ontology that could be used across the industry.


Common Terminology

Metadata is widely used across preproduction, production, and distribution. “It’s been one of the connecting threads that ties distribution to production-related work,” says Helman. “In distribution, there were a lot of things about content availability being communicated in spreadsheets, PDFs, and emails.”

The studios spent some time trying to identify standard terminology before handing the task to MovieLabs, which set out to identify the metadata needed to connect media-creation workflows so that both people and machines could avoid miscommunication. Those details were extremely important and varied widely across the studios.

MovieLabs Case Studies 

Table 1. Selected MovieLabs Showcase Program case studies

The Ontology for Media Creation was the result of this effort, and it divided production building blocks into five categories: task, participant, asset, context, and relationship. This is important because it enables the various entities involved to use common descriptions for content and elements of the media workflow and supply chain.
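To make the five building blocks concrete, here is a minimal sketch of how they might be modeled in code. This is purely illustrative: the class and field names below are hypothetical and are not taken from the published Ontology for Media Creation schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the five OMC building-block categories.
# All names here are hypothetical, not the actual OMC vocabulary.

@dataclass
class Participant:          # a person, organization, or service doing work
    name: str
    role: str               # e.g., "editor", "VFX vendor"

@dataclass
class Asset:                # a piece of content or supporting material
    asset_id: str           # a resolvable identifier, not a file path
    kind: str               # e.g., "camera original", "proxy"

@dataclass
class Context:              # the creative scope an asset or task belongs to
    scene: str
    shot: str

@dataclass
class Task:                 # a unit of work in the production
    name: str
    assigned_to: Participant
    inputs: list = field(default_factory=list)    # Assets consumed
    outputs: list = field(default_factory=list)   # Assets produced

@dataclass
class Relationship:         # a typed link between any two entities
    subject_id: str
    predicate: str          # e.g., "derivedFrom", "usedIn"
    object_id: str
```

The value of the shared model is that a task handed from one vendor to another carries the same notion of participant, asset, and context on both sides.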

Another very important part of this undertaking was the Entertainment Identifier Registry (EIDR), which created a standardized ID for content: consistent ways to uniquely identify a creative work (a movie, a short, a trailer, or a TV episode), its many versions, and its relationships to series and seasons. Helman and Raymond Drewry, chief scientist and VP of EMEA operations at MovieLabs, were awarded an Engineering, Science & Technology Emmy for the creation of EIDR.

EIDR now holds more than 2.8 million records and is used in digital supply chain workflows in 70-plus media and entertainment companies. It has been widely adopted by all of the major U.S. studios and is also being adopted in international markets. “We worked with our studio members as well as with a lot of their distribution partners in the Digital Entertainment Group (DEG) and OTT.X on developing the standards,” says Helman.
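An EIDR content ID is a DOI registered under EIDR’s prefix, 10.5240, with a suffix of hyphen-separated hexadecimal groups ending in a check character. The sketch below checks only that general shape; the example ID is invented, and the ISO 7064 check-character validation is omitted.

```python
import re

# Structural shape of an EIDR content ID: the 10.5240 DOI prefix, five
# hyphen-separated groups of four hex digits, then a check character.
# (Check-character math and registry lookup are omitted in this sketch.)
EIDR_PATTERN = re.compile(
    r"^10\.5240/"              # EIDR's registered DOI prefix
    r"(?:[0-9A-F]{4}-){5}"     # five groups of four hex digits
    r"[0-9A-Z]$"               # trailing check character
)

def looks_like_eidr(candidate: str) -> bool:
    """Cheap structural check; does not verify the check digit."""
    return EIDR_PATTERN.fullmatch(candidate.upper()) is not None
```

A workflow tool can use a check like this to reject file paths or ad hoc labels where a registry-backed identifier is expected.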

Developing this common terminology is very important, especially in machine-to-machine activity, so that clearly defined metadata can travel within workflows, whether at the camera, in asset management, or at any of the many other stops along the production lifecycle. Andy Beach, CTO for media and entertainment at Microsoft, explains why common terminology is so valuable to organizations that handle produced content: “We continually re-create time-based metadata about content. I can’t tell you the number of times that companies re-index with Video Indexer on the same content because they didn’t have access to the metadata that got created originally. A lot of systems re-create metadata because even if the metadata is available, they’re getting it in a form that they may not be able to read or understand.”

The need to pass along content with a clearly understandable label is one of the problems that the 2030 Vision hopes to solve. “There are some gaps still with respect to the MovieLabs 2030 Vision. One of them is connecting multiple silos of data through linking the content and metadata,” says Jeff Rosica, CEO of Avid Technology. “We are working on proofs of concept and participating in the MovieLabs Ontology for Media Creation working group to help with the development of specifications.”

Asset Storage

Asset management is a very interesting challenge—especially avoiding the tendency to unnecessarily copy assets because of organizational inefficiencies. “One of the 2030 principles is that assets go directly into the cloud,” Helman says, “as opposed to the traditional model where assets are repeatedly copied from one department to the next department or from the production to their vendors, and then from one vendor to another vendor. The idea is that there’s a single copy that is the ‘source of truth’ and that—at least in principle—these applications are all coming to those assets to minimize the number of copies that need to be maintained.”

There are two major benefits here. One is measured by usage and waste. Moving around multiple versions of an asset incurs extra costs, extra management overhead, and extra time to keep the various copies updated, archived, and secure. As for the other benefit, Helman says, “I think the number-one value is the flexibility in being able to support teams and individuals who can work from anywhere. If you have the assets in the cloud, it’s much easier to support remote workers and remote collaboration.”

Asset IDs

One of the key goals of MovieLabs’ 2030 Vision is to streamline media workflows and enhance interoperability wherever content is accessed through the implementation of standard identifiers that make assets easy to locate and access regardless of where they’re stored. “I think most workflows have relied very heavily on hierarchical file systems and folder structures and use those extensively in references and asset management systems and in some of the content files that need to refer to other files,” says Helman. “One of the principles in the 2030 Vision is that we move from referring to assets by their location to using identifiers. [Assets] can be looked up through an asset management service to find out where is the most appropriate copy for a particular application or service to get that piece of content.”
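The shift Helman describes, from location-based references to identifier-based ones, can be sketched as a small resolver service. The IDs, storage URLs, and tier labels below are all invented for illustration.

```python
# Hypothetical sketch of location-independent asset resolution: callers
# hold a stable asset ID, and a resolver picks the most appropriate copy.
ASSET_LOCATIONS = {
    # asset ID -> known copies, each tagged with where it lives
    "asset:shot-042-plate": [
        {"store": "s3://prod-masters/plates/042.exr", "tier": "archive"},
        {"store": "s3://us-west-cache/042.exr", "tier": "hot"},
    ],
}

def resolve(asset_id: str, prefer_tier: str = "hot") -> str:
    """Return the storage location of the preferred copy of an asset."""
    copies = ASSET_LOCATIONS[asset_id]
    for copy in copies:
        if copy["tier"] == prefer_tier:
            return copy["store"]
    return copies[0]["store"]   # fall back to any available copy
```

Because applications ask the resolver rather than hard-coding a path, an asset can be moved, cached, or archived without breaking every reference to it.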

This data model can encompass everything from picture-cut versions applied via changes to the timeline to language tracks, subtitles, and distribution details. “We have not only a common identifier system, but also a common data model for thinking about what we mean by a version of content,” says Helman.

Apps to Assets

Having apps come to the media (rather than the other way around) is another principle MovieLabs is putting forth. One key capability of the cloud is support for containerized applications that can run in any cloud. Leveraging this capability should be table stakes for any vendor—preferably with applications engineered to work in the cloud from the ground up.

Previously, organizations may have been saddled with multiple versions of assets sitting in multiple storage locations. With the “main source of truth” approach, applications perform whatever transformative process they do at the location of the asset. “An ideal situation would be that every asset can go to one place, and applications can always come to them,” Helman says. “We’re seeing vendors respond with bring-your-own-storage solutions to bring buckets or blobs to the application as opposed to having to transfer and ingest those assets and copy them into the infrastructure for that particular SaaS offering.”

To implement this approach, Avid Technology has been re-engineering two aspects of how it builds applications. “One is to virtualize the data in our NEXIS software-defined storage product to ensure that applications can access the data the same way, regardless of which cloud or on-prem storage system it resides on,” says Rosica. “The second aspect is to virtualize the applications so there is a choice of running the application in the cloud or running it remotely from a physical machine. For this second mode, we have also re-engineered some of our applications to be [delivered] through web and mobile, which access minimal amounts of data.”

“I’ve actually never been a proponent of having multiple versions of assets, [doing] multiple re-encodes, or reprocessing of the same content over and over,” says Microsoft’s Beach. The principle of “applications go to the assets,” he contends, is “a foundation of cloud, if you think about it. It makes more sense to bring a function to the content when you have it stored in a central place.”

Another part of this is solving for repetitive tasks. “Some of the regular production workflows tend to be a bit more bespoke and manual. We’d like to see those mundane pieces of repetitive work automated,” says Helman. Both distribution and production technology have a lot of manual processes for packaging or repackaging content, transcoding, and getting it delivered to the right location. Whether this involves human to machine or machine to machine, moving things along a workflow in an automated fashion is common sense. It’s building the steps to getting there that’s complicated.


Security

Bringing cloud access to production requires a shift in how to look at security, especially if your production has an air-gapped environment set up. “In the cloud, you need to rethink security,” says Helman, “and part of that is making it easier, not only from a technical perspective for remote teams and workers to access those assets, but also from a security perspective to do that while ensuring that those assets are secure and are not going to be exposed to additional threats.”

Taking pains to ensure that content will be both more accessible and more secure—as contradictory as it sounds—is challenging. “You can’t just have everyone VPN’ing into your cloud and getting access to everything,” Helman says. “That’s one of the reasons we did [the] Common Security Architecture for Production to lay out what’s widely known in the industry as zero-trust security, where basically every access to a service or an asset in the cloud needs to be authenticated.” One part of the Common Security Architecture for Production is changing the access permissions on assets, which makes it possible for a wider set of people to view and work with those assets, including a vendor that may be given access to them.
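The zero-trust model Helman describes, in which every access to a service or asset is authenticated, can be reduced to a very small sketch. The principals, asset IDs, and policy entries below are invented; a real Common Security Architecture for Production deployment would sit on full identity and policy services.

```python
# Minimal illustration of the zero-trust idea: every request to touch an
# asset is authorized individually; nothing is implied by network location
# (there is no "inside the VPN = trusted").
PERMISSIONS = {
    # (principal, asset_id) -> allowed actions (illustrative policy data)
    ("vfx-vendor-a", "asset:shot-042-plate"): {"read"},
    ("editorial",    "asset:shot-042-plate"): {"read", "write"},
}

def authorize(principal: str, asset_id: str, action: str) -> bool:
    """Grant access only if this specific (who, what, action) is allowed."""
    return action in PERMISSIONS.get((principal, asset_id), set())
```

Granting a vendor access then becomes a policy change on the asset, not a new network tunnel into the whole environment.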

“In practice, there are constraints where you may need particularly low latency or high bandwidth,” Helman concedes. “For a virtual workstation running Maya in the cloud on a virtual machine, you may need to copy those assets to a machine at a cloud provider’s data center [closer to the] VFX houses. The amount of bandwidth that is required for that has not always been available for getting the content into the cloud.”

Following this approach, Beach says, “The only reason you would move the content around is if you’re working in an environment where those processes can’t reach it.”


Interoperability

Being interoperable is arguably the ultimate goal. It means being able to communicate with Vendor A and Vendor B in the same way, with the same assets and the same content metadata. It’s important to have the ability to plug in different tools at different steps of a supply chain and have the content flow through them in a managed way.

Simon Eldridge, CPO of SDVI, defines interoperability as follows: “I would say it is an open system, not a closed system. Are there published, documented APIs that can get data out of that system, or is it a black box?”

One example of interoperability is aligning the cost of work to the value of the content. “If you’re HBO, and you’re delivering premieres,” Eldridge says, “you’re not particularly worried about the production costs of getting that piece of content to screen. You’re more worried about the quality,” he explains. “Whereas if you’re delivering 30-year-old library content to a FAST channel, maybe cost is a more interesting factor in deciding which tools you use. If you didn’t have interoperability, then you wouldn’t be able to do those things.”

The 2030 Vision foresees workflows that are highly interoperable. “Some of what MovieLabs does is [encourage] interoperability between various platforms and various pieces,” says Beach. “It gives a guideline for how we should be thinking about organizing the metadata so that it can be understood.” Vendors today are at various points of the interoperability continuum, from fully integrated to just starting out.

“Most of Avid’s solutions are already based on many of the asset and infrastructure interoperability key tenets,” says Rosica. “We heavily utilize interoperable standardized data models such as AAF, ALEs, EDLs, MXF, and SMPTE VC-3 (DNx), and we expose APIs for connectivity.”

“The Ontology for Media Creation is trying to create not only a common set of concepts and nomenclature, but also get into some of the machine-readable formats for data exchange,” says Helman (Figure 4).

MovieLabs Ontology for Media Creation

Figure 4. MovieLabs’ Ontology for Media Creation

This data exchange is an essential part of being interoperable. It applies to both assets and usage rights (Figure 5).

SDVI content ontologies

Figure 5. An example of how SDVI manages content ontologies

“The identity part of this becomes a collaboration point,” says Beach. “It’s not something that the partners themselves offer up, but it’s a thing that they need to work with in the larger ecosystem to ensure that their tools interoperate to make it possible for the final end customer, the studio, or the content owner to be able to have secure access.”

What software-defined workflows need to support automation and collaboration are interoperable formats that extend to elements like timelines and color transformations. “[This] data plane is much more Wild, Wild West, and it’s probably a place where we can take some learnings from distribution and [live] broadcast,” says Helman. “There are lots of workflow and automation systems on the broadcast side, and maybe some of those things can be adapted [to production].”

Defining the Control Plane

Once the data exists and is interoperable, the 2030 Vision calls for a much-needed control plane. This is one of the more challenging and more greenfield areas on the production side, says Helman. “Interoperability is also one of those things where you’re never going to get to complete plug-and-play interoperability, where things just assemble like Lego blocks.”

It’s a question of whether the system you’re sending metadata to can understand it, parse it, and match it back to the content. “Often in the past,” says Beach, “it’s been a case of ‘We don’t know how to use [the metadata], and it’s easier for us just to re-create a whole new dataset from scratch and then use that.’ ”

Beach believes that AI could be a game changer in this area. “AI technology that’s coming in is actually a great tool to be that interop agent because it can read the metadata as one form and output it into another, so that [different] systems … understand the APIs,” he says. This could also be a great help to smaller companies that may have to make choices about whether to concentrate on innovation or interoperability.

“I think as people get more of a taste,” Helman says, “and see workflows that have successfully used software to improve the speed at which groups can do iterations, try new things, and collaborate…, more production groups will want to avail themselves of that.”

Although adoption of the 2030 Vision by the likes of Disney, Paramount, and Universal is critical to its implementation across the M&E universe, Helman says, the success of the plan also depends on its use at all levels of the content production and delivery world. “We want to see all of those being able to take advantage of the benefits of cloud, not just the highest-end productions. Small production teams can be very empowered [even without] all the infrastructure to build and assemble software to make these workflows function seamlessly. That’s the ultimate goal.”

I contacted a number of other vendors and unfortunately did not get a response by deadline. When examined as a whole, the recommended principles make sense, but as with any technology re-envisioning that’s this broad, the issues become more challenging when it comes to institutionally shifting the culture of how things were done in the past.

Can MovieLabs put a figure on how close it is to meeting the 2030 Vision goals? “Given all the different dimensions, that is really hard,” says Helman. “We’re not there yet, and it’s going to be a few more years. If we accomplish most of what I’ve described by 2030, I think we’ll be doing really well.”

“You’re really talking about implementing standards inside of these media workflows,” says Beach, “which also means then that the IT culture has to change what they think about security access and availability and have to ensure that they’re not blocking or hindering work from getting done.”

It sounds like there will be a new role of chief interoperability officer in 2030 Vision-adopting organizations—somebody who is positioned to make sure everything is interoperating correctly. When this role is created, organizations can be sure MovieLabs will have helped along the way to ensure that the studios, production companies, numerous vendors, and cloud providers are working from the same understanding of things.

SDVI's Advice for Those in an Early State of Cloudification

Cloudification is still in its early days in many respects. “I would say that we’re at 20 to 30 percent of [M&E workflows that] are cloud-ready, cloud-first, or cloud-dabbling,” says Simon Eldridge, CPO of SDVI, noting that “most vendors are trying to migrate from legacy on-premise systems to cloud-based systems.”

SDVI designed its platform in the cloud many years ago and had the benefit of building from the ground up, instead of having to cloudify its systems. “What MovieLabs tried to do is say, ‘When working in the cloud, this is the way that you should do it,’ and if you look at any best practices for building applications in the cloud, these are media-specific versions of them,” notes Eldridge.

What advice does Eldridge have in terms of getting people there? “There are customers who will endlessly investigate, and those who want to completely replicate the way that they’ve done things without cloud, but on the cloud, which doesn’t make sense,” he says. “And there are customers who want to have everything perfect from day one. The whole notion of cloud is to iterate and be agile.”

Another key piece of advice is to look at cloud migration as the operational paradigm shift that it is, rather than just shifting an existing workflow to a different location. “As soon as you move to cloud, you shouldn’t be worrying about capacity planning, how many licenses you need, where the bottleneck is, or what’s your maximum throughput rate,” says Eldridge. “Most legacy on-premise workflows were designed around those limitations. [If] you replicate exactly that workflow in the cloud, all you’ve done is replicate those same limitations or increased costs for having unused capacity.”

On-prem environments are often designed for peak load instead of average load. “What you don’t want to do,” he continues, “is just lift everything you’ve got on-prem and shift it. You want to redeploy in a manner that allows you to scale up or scale down, so the infrastructure is constantly matching whatever demand is happening at that moment, so you’re not paying too much and you’re not limited by capacity constraints.”
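Eldridge’s point about matching capacity to demand comes down to simple arithmetic. The numbers below are invented for illustration, not drawn from any real deployment.

```python
# Back-of-the-envelope comparison: provisioning for peak load all day
# versus scaling capacity to match demand hour by hour.
hourly_demand = [2, 2, 3, 10, 12, 4, 2, 2]   # jobs per hour (made up)
cost_per_unit_hour = 1.0                      # cost of one unit of capacity

# On-prem style: buy enough capacity for the busiest hour, pay for it always.
peak_provisioned = max(hourly_demand) * len(hourly_demand) * cost_per_unit_hour

# Elastic style: pay only for the capacity each hour actually uses.
demand_matched = sum(hourly_demand) * cost_per_unit_hour

print(peak_provisioned)   # 96.0 -- peak capacity held all day
print(demand_matched)     # 37.0 -- capacity matched to demand
```

The gap between the two figures is exactly the “unused capacity” cost Eldridge warns against carrying into the cloud.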

SDVI typically sees people starting at either the left or right edge. “The left edge is, you’ve got a huge library of content on-premise, and the idea of knowing how to get that up to the cloud is daunting,” says Eldridge. The best way to start is by receiving the new incoming content to the cloud. “The other way—which is pretty common, because a lot of content distribution now is cloud-centric if you’re delivering to Hulu, Netflix, or Prime—the destination is the cloud,” Eldridge says.

His recommendation for those in this scenario is to “keep all your on-premise supply chains as they are, and just move distribution. Then you gradually either move from the left in or from the right back and join those two things up.”
