
AI gets a starring role in Hollywood

Amazon introduces Amelia for sellers

Welcome to today’s edition of The Tensor: executive insights on the latest in AI and tech, in 5-minute reads, 3x a week.

It’s AI Agents all around in today's Tensor:

  • Top Story - Microsoft wants you to build your own AI co-worker with Copilot Studio

  • Industry - Salesforce announces Agentforce to build your AI sales team

  • Research - Small but Mighty: How Tiny AI Models Are Making a Big Difference

  • TLDR - Runway API for video generation AI, new AR glasses from Snapchat, and 2 more

Let’s dive in.

Source: La La Land by Lionsgate Studios

The scoop: Lionsgate is partnering with AI startup Runway, known for its amazing video AI models, to train a custom video model on its vast library of films and TV shows, including franchises like John Wick and The Hunger Games. In an industry built on human stories, the next big star might just be artificial.

Highlights:

  • Massive, Legal Training Data: Over 20,000 titles from Lionsgate's catalog will feed into Runway's AI model, providing legally cleared data—a rarity in the generative AI space.

  • Filmmaking Assistance: The AI is designed to aid in tasks like storyboarding, background creation, special effects, and even automated video editing.

  • Efficiency Boost: Lionsgate sees this as a way to create content more capital-efficiently, potentially reducing production costs significantly.

Why it matters: This collaboration could redefine content creation in Hollywood, offering tools that streamline production but also raising serious ethical and legal questions. Runway is already embroiled in lawsuits for copyright infringement. As studios explore AI to cut costs and accelerate timelines, the balance between innovation and preserving human artistry becomes the real cliffhanger.

Bottom line: Whether Lionsgate's AI venture becomes a blockbuster hit or a box office bomb hinges on how it navigates the line between enhancing creativity and replacing it. Other studios are wrestling with the same question, and there is no clear answer yet.

Source: Amazon

The scoop: Amazon has been trying to find its footing in the AI race with releases like Rufus and Alexa hardware improvements. Its latest offering is Amelia, an AI assistant designed to simplify life for third-party sellers. From answering nuanced business questions to resolving issues on the fly, Amelia aims to be a virtual team member for sellers.

What can it do?

  • Personalized Assistance: Sellers can ask Amelia specific business questions, receiving concise, relevant answers sourced from Seller Central and beyond. Whether it's prepping for the holiday rush or optimizing listings, Amelia delivers tailored advice.

  • Real-Time Metrics: Need a quick update on your sales performance? Amelia provides instant access to key metrics like sales data and customer traffic, with the ability to drill down into individual product performance.

  • Issue Resolution: Beyond diagnosing problems, Amelia is set to take proactive steps to resolve issues, such as investigating inventory discrepancies and connecting sellers with Amazon support when necessary.

  • Enhanced Listings: Amelia fine-tunes product titles and descriptions to align with customer search behavior, increasing the visibility and relevance of listings.

  • Video Content Creation: With the new Video Generator feature, sellers can effortlessly transform still images into engaging video ads, enhancing their marketing efforts without the need for specialized skills. This tool is currently available to select sellers and will expand to all U.S. sellers by 2025.

Why it matters: Amazon is the world’s largest marketplace, and third-party sellers make it what it is. If Amazon can successfully launch an AI assistant that boosts seller productivity and helps sellers navigate its complex marketplace with ease, this could be one of the biggest plays in AI flying under the radar.

The scoop: Since OpenAI demoed its real-time voice mode, the race has been on to build an open-source equivalent. We finally have one with LLaMA-Omni, the first open-source model that enables seamless, real-time speech interaction with large language models. This breakthrough lets users engage with AI naturally, without the need for text transcription. You've gotta try it to understand.

How it works:

  • Integrated Architecture: Combines a speech encoder, speech adaptor, LLM (Llama-3.1-8B-Instruct), and a streaming speech decoder into one cohesive system.

  • Transcription-Free Processing: Processes speech instructions directly, eliminating the need for intermediate speech-to-text conversion.

  • Simultaneous Output: Generates both text and speech responses at the same time, ensuring ultra-low latency.

  • Custom Training Dataset: Trained on InstructS2S-200K, a dataset of 200K speech instructions and responses tailored for speech interaction.

  • Efficient Training: Achieved superior performance over previous models, all while training in under 3 days on just 4 GPUs.
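The architecture above can be pictured as a pipeline. Here is a minimal, purely illustrative Python sketch (not the actual LLaMA-Omni code; every component is a stand-in) showing the key idea: speech features flow through an encoder and adaptor into the LLM, and each generated token is voiced by the streaming decoder immediately, so text and audio are produced simultaneously rather than after a full transcription pass.

```python
def speech_encoder(waveform):
    """Stand-in encoder: chunk raw samples into fixed-size frames."""
    frame = 4
    return [waveform[i:i + frame] for i in range(0, len(waveform), frame)]

def speech_adaptor(frames):
    """Stand-in adaptor: pool each frame into one feature the 'LLM' consumes."""
    return [sum(frame) / len(frame) for frame in frames]

def llm_generate(features):
    """Stand-in LLM: stream out one text token per input feature."""
    for i, _ in enumerate(features):
        yield f"tok{i}"

def streaming_speech_decoder(token):
    """Stand-in vocoder: map a text token to discrete audio units."""
    return [hash(token) % 256]

def respond(waveform):
    """Generate text and speech together, token by token (no transcription step)."""
    features = speech_adaptor(speech_encoder(waveform))
    text, audio = [], []
    for token in llm_generate(features):            # as each token streams out...
        text.append(token)                          # ...it extends the text reply
        audio.extend(streaming_speech_decoder(token))  # ...and is voiced at once
    return " ".join(text), audio

text, audio = respond([0.1] * 16)  # 16 samples -> 4 frames -> 4 tokens
```

The point of the loop in `respond` is the "simultaneous output" bullet: because the decoder consumes tokens as they stream from the LLM, audio latency tracks the first token, not the full response.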

Bottom line: LLaMA-Omni represents a significant leap forward that we expect the rest of the industry to adopt within the year. It transforms AI into a conversationalist that never needs to pause or say, "hold on a sec." It's like chatting with a friend who can process entire encyclopedias while maintaining a seamless conversation.

Thank you for reading today’s edition of the Tensor!
Did someone forward you today’s edition and you want to read more?
