Latest collaboration enables developers to leverage TwelveLabs’ video understanding capabilities to create new applications with the security, privacy, and performance of AWS
LAS VEGAS, April 6, 2025 /PRNewswire/ — Amazon Web Services (AWS), an Amazon.com, Inc. company, and TwelveLabs, the video understanding company, today announced that TwelveLabs’ state-of-the-art multimodal foundation models, Marengo and Pegasus, will soon be available in Amazon Bedrock. Amazon Bedrock is a fully managed service that offers developers access to high-performing models from leading AI companies through a single API. Seamless access to TwelveLabs’ advanced video understanding capabilities will enable developers and enterprises to transform how they search, analyze, and generate insights from video content, leveraging the security, privacy, and performance of AWS. AWS is the first cloud provider to offer models from TwelveLabs.
“Video contains nearly 80% of the world’s data, yet most of it remains unsearchable and underutilized,” said Jae Lee, Co-founder and CEO of TwelveLabs. “By making our models available through Amazon Bedrock, we’re empowering even more enterprises to bring video understanding to their existing infrastructure. Our technology enables users to search across their entire content library—from videos collected 10 years ago or 10 minutes ago—to find the precise moment they’re looking for in less than a single second, and then interpret and analyze those moments. This opens the door for all kinds of novel uses. Through the collaboration with AWS, we can extend powerful capabilities to customers and accelerate innovation across industries.”
Advanced AI Video Capabilities
Video is commonly regarded as one of the world’s largest unsearchable data sources, yet TwelveLabs’ cutting-edge technology turns it into a trove of accessible information. Whether it’s giving a sports network the ability to instantly pull every instance of a specific play style or commentator reaction or helping a broadcaster identify recurring themes across large volumes of footage, TwelveLabs helps teams turn their video archives into usable, indexable assets, unlocking both operational efficiency and new revenue opportunities.
TwelveLabs overcomes the inherent complexities associated with video understanding to allow customers to search video across all modalities. Specifically, TwelveLabs delivers:
- Natural language video search that pinpoints precise content moments
- Deep video understanding without requiring pre-defined labels
- Multimodal AI processing visual, audio, and text simultaneously
- Temporal intelligence connecting related events across time
- Enterprise solutions scaling extensive video libraries into accessible knowledge
“At MLSE, we are defining the future of the sports and entertainment business. Innovation is in our DNA, and we’re leading the charge in shaping what comes next. With powerful tools like Amazon Bedrock and TwelveLabs’ AI models supporting our vision, we’re accelerating our ability to create smarter, more immersive experiences for fans,” said Humza Teherany, Chief Strategy and Innovation Officer at Maple Leaf Sports & Entertainment.
Unlocking the Power of Video Understanding for AWS Customers
With Marengo and Pegasus available in Amazon Bedrock, AWS customers can use TwelveLabs’ models to build and scale generative AI applications without managing underlying infrastructure. Using Amazon Bedrock, customers gain access to a broad set of capabilities while maintaining complete control over their data, benefiting from enterprise-grade security and utilizing cost control features—all essential for deploying AI responsibly at scale.
TwelveLabs’ fully managed, serverless models in Amazon Bedrock allow developers to:
- Create applications that search through videos, classify scenes, summarize content, and extract insights using natural language
- Build sophisticated video understanding features without specialized AI expertise
- Scale video processing from small collections to massive libraries with consistent performance
- Deploy solutions with enterprise-grade security and governance controls
“Video understanding is revolutionizing how industries like media & entertainment, sports, automotive, and education work with and discover content,” said Samira Panah Bakhtiar, General Manager of Media & Entertainment, Games, and Sports at AWS. “Over the last year, I have consistently said that natural language semantic search is a ‘strategic unlock’ for our entertainment customers, as they reexamine their existing intellectual property and breathe new life into it. By bringing TwelveLabs’ advanced models to Amazon Bedrock, we’re helping our customers make sense of any video moment, unlocking the full value of their treasured video assets. Businesses will now be able to easily search, categorize, and extract insights from their vast video libraries, enabling new use cases and better user experiences that were previously impossible without significant technical expertise.”
The integration will benefit multiple industries, including media, entertainment, and advertising. For example:
- Film and TV studios can rapidly manage video workloads across dailies, content repackaging, and archive management
- Sports leagues and teams can efficiently create match highlights and customized, fan-focused content at scale
- News agencies and broadcasters can quickly search large libraries to find the moments that matter
- Streaming services can better package and distribute content across platforms and more effectively insert relevant video ads
AWS and TwelveLabs’ integration partner Monks expressed their excitement: “We’ve been putting AI to work across the entire video value chain for IP holders, broadcasters and brands. TwelveLabs in Amazon Bedrock makes it easier to realize opportunities for monetization in broadcast news, entertainment and sports by making it simpler and more secure to build and scale applications with powerful video understanding,” said Lewis Smithingham, EVP Strategic Industries at Monks.
Expanding Collaboration Between AWS and TwelveLabs
This announcement builds on a strong existing relationship between AWS and TwelveLabs and continues the momentum of their Strategic Collaboration Agreement (SCA). TwelveLabs is working with AWS to accelerate the development of its foundation models, deploy its advanced video understanding foundation models across new industries, and enhance its model training capabilities using Amazon SageMaker HyperPod. With the reliable and scalable infrastructure offered by SageMaker HyperPod, TwelveLabs has accelerated model training while reducing training costs.
“This integration with Amazon Bedrock represents the next phase in our collaboration with AWS, making our video understanding AI more accessible to enterprises worldwide,” added Lee.
To learn about TwelveLabs’ industry-leading models, Marengo 2.7 and Pegasus 1.2, please explore twelvelabs.io. Find out more about TwelveLabs models in Amazon Bedrock here.
About TwelveLabs
TwelveLabs uses multimodal foundation models to bring human-like understanding to video data. The company’s foundation models map natural language to what’s happening inside a video, including actions, objects, and background sounds, allowing developers to create applications that can search through videos, classify scenes, summarize, and extract insights with unprecedented accuracy. Headquartered in the US, TwelveLabs serves customers across media, entertainment, sports, advertising, and government. For more information, visit www.twelvelabs.io.
Media Contact
Amber Moore, Moore Communications, 1-503-943-9381, amber@moorecom2.com
View original content to download multimedia: https://www.prweb.com/releases/twelvelabs-to-bring-its-state-of-the-art-video-ai-models-to-amazon-bedrock-302421510.html
SOURCE TwelveLabs