SMPTE Media Technology Summit 2025 (MTS2025)
Pasadena Convention Center, October 13-16, 2025
By Nicholas Da Silva (@ZOOLOOK)
Inside the Future of Media: My Journey Through SMPTE MTS 2025
Once a year, SMPTE (Society of Motion Picture and Television Engineers) gathers the brightest minds in media and technology for the SMPTE Media Technology Summit. This year, from October 13–16 at the Pasadena Convention Center, the conversation stretched far beyond pixels and protocols — it became a dialogue about how technology, creativity, and humanity converge in the act of storytelling.
For me, attending as ZOOLOOK — a multidisciplinary artist who lives at the crossroads of sound, motion, and code — felt like stepping inside the machine that shapes our collective imagination. Across four days of sessions, I witnessed the invisible gears of modern media turning in real time: from AI-assisted production to immersive live broadcasts, from the artistry of sound to the ethics of automation.
The following reflections capture some of the standout sessions from Live Sports Symposium Day — a day that redefined what it means to connect with audiences in motion.
SMPTE’s MTS2025 Live Events Broadcast Symposium included programs such as Innovations in Live Broadcast, AI-Driven Insights for High-Speed Sports (AWS), the Art and Tech of Live Audio, Designing Broadcasts for Global Audiences, Applied AI and Automation in Live Production, and the Next Chapter in Broadcast and Streaming.
The session “Inside the Leagues – Innovations in Live Broadcast” pulled back the curtain on how technology is reshaping the way fans experience live sports. It wasn’t just about faster feeds or sharper images — it was about storytelling through motion, data, and perspective.
From World Archery’s global production strategies to Animation Research Limited’s mind-bending data visualizations, each speaker showed how the lines between broadcast, analytics, and artistry are blurring. Cinesys walked us through their multi-league integration work across NASCAR, the NHL, and the NBA — proof that sports broadcasting is becoming more unified, connected, and immersive than ever. And then came the real game-changer: referee camera technology. Seeing the game from the official’s point of view isn’t just a new angle — it’s a new emotional perspective, placing fans in the pulse of the action.
As someone who lives at the crossroads of technology and storytelling, I couldn’t help but see parallels with filmmaking. Every innovation shared here is, in its own way, about presence — about giving the audience the feeling of being there.
In the session “Acceleration and Personalization in Sports Broadcasting,” the focus shifted from the arena to the cloud, where AI, automation, and analytics are rewriting the rules of fan engagement. Thomas Edwards (Amazon Web Services), ErinRose Widner (Verizon Business), and Jeremy Middle (FanDuel) explored how real-time data and machine learning are creating faster, more adaptive live productions.
The conversation revealed a fascinating intersection between broadcast engineering and human behavior. AWS showcased how AI can identify key plays or player moments as they happen, while Verizon’s network innovations enable those moments to be delivered instantly across multiple devices. FanDuel added another layer — tailoring highlights and insights to individual fan preferences, turning every viewer into their own director.
From my seat, it felt like watching the personalization of storytelling unfold in real time. What used to be a single broadcast stream is evolving into millions of micro-stories — each one tuned to what the fan values most. As an artist, that idea of intelligent storytelling at scale hits close to home. It’s not just sports that are being reimagined — it’s the entire experience of narrative itself.
The session “Quiet Chaos – The Art and Tech of Live Audio” captured the essence of live event production — that delicate dance between precision and unpredictability. Moderated by Patti Gunnell (Riedel Communications Inc.) and featuring Fernando Delgado (Stickman Sound), Jeff Peterson (ATK Audiotek), Steve Cormier (OSA International), Andrew McHaddad (NEP Group), and Jan Schaffner (Riedel Communications), the panel dove deep into the unseen world of audio teams who make the impossible sound effortless.
From Coldplay’s global tours to major sports broadcasts, the speakers shared stories of constant coordination — teams syncing across continents, troubleshooting in real time, and innovating under pressure to ensure the audience never hears the chaos behind the curtain. The conversation revealed how modern communications systems, RF management, and hybrid network setups have redefined what’s possible in live sound.
One phrase summed it up perfectly:
“Audio without video is radio. Video without audio is a night light.”
As a musician and sound-driven storyteller, I couldn’t agree more. This session was a reminder that sound isn’t just technical infrastructure — it’s emotional architecture. In the hands of these experts, audio becomes the invisible art form that turns chaos into cohesion, transforming fleeting moments into powerful, shared experiences.
In “Designing Broadcasts for Global Audiences,” the focus shifted to the scale and complexity of live soccer — a sport that unites the world through motion, emotion, and precision timing. Lukas Schaufler (Lawo Inc.), Dan Murphy (NEP Group), and Pete Emminger (Independent) took us behind the scenes of what it means to deliver every kick, every roar, and every goal to millions of fans around the world without missing a single frame.
They broke down the IP-based technical backbone that powered last summer’s major U.S. tournament — a seamless network of control, transmission, and creative collaboration. The real story, though, was about evolution. Lawo and NEP are now building toward the next global soccer competition in 2026 — spanning Canada, the U.S., and Mexico — where IP workflows, network orchestration, and remote operations will redefine what “live” truly means.
Listening to their process felt like watching the blueprint of a living organism — data, design, and emotion pulsing in sync across continents. As an artist, I saw the beauty in their architecture: every router, every fiber line, every perfectly timed feed forming the invisible rhythm behind the world’s most beloved game.
If there was one session that captured the pulse of where media technology is headed, it was “Applied AI and Automation in Live Production.” Featuring Thomas Edwards (Amazon Web Services), Chris Lennon and James Doyle (Ross Video), and Olivier Barnich (EVS Broadcast Equipment), this panel explored how artificial intelligence is quietly becoming the unseen collaborator in every modern broadcast.
From AI-powered replays and real-time commentary to frame interpolation and automated highlights, the conversation revealed how machines are learning to assist — not replace — the human creative process. The most striking demos showed how voice-controlled production tools and adaptive workflows are already helping small teams produce big-league content with unprecedented speed and consistency.
As someone who blends art, sound, and story through technology, I found this session both exhilarating and grounding. AI isn’t just changing production — it’s expanding the language of live storytelling. The creative future isn’t man or machine; it’s a symbiosis where automation handles the chaos, and we, the creators, stay focused on the emotion.
“Broadcast Horizons: Innovation, Efficiency, and Immersive Experiences” closed out the Live Sports Symposium with a fitting sense of perspective — not just on where the industry stands, but where it’s headed.
With a panel of heavy hitters — Juan Reyes (Tech Align Group), Willem Vermost (EBU), Matthew Goldman (Sinclair), Terry Harada (Astrodesign), and Richard Friedel (ATSC) — the discussion felt less like a debate and more like a roadmap for the evolving broadcast ecosystem.
The conversation touched on everything from the uncertain future of linear TV to the growing need for interoperability and standardization across rapidly diversifying platforms. What stood out most was the shared recognition that immersive experiences — from volumetric video to next-gen HDR and real-time streaming — are driving the next wave of audience connection. Efficiency is no longer just about cost; it’s about creative sustainability.
This session echoed the broader theme of the Summit: transformation through convergence. Broadcast is no longer a single stream — it’s a living network of possibilities, where art and engineering merge in service of human experience. As I listened, I couldn’t help but imagine the next frontier of storytelling — one where the signal itself becomes part of the story.
The session that fascinated me most was “Optimizing Color Grading with the Gamut Rings Color Scope” with Kenichiro Masaoka, Senior Engineer at NHK Foundation Inc. He presented the Gamut Rings Color Scope, a new 2D color visualization tool that displays lightness, chroma, and hue angle together—something traditional color scopes can’t do. This design gives a clearer, more intuitive view of color relationships across different images. His presentation demonstrated that it surpasses standard vectorscopes and chromaticity scopes by more effectively revealing mid-tone details, hue shifts, gamut issues, and HDR rendering behaviors, making it particularly useful for analyzing tone mapping and saturation changes.
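For fellow tinkerers, the perceptual quantities behind the scope are those of CIE LCh: lightness, chroma (distance from the neutral axis), and hue angle. Below is a minimal Python sketch of that underlying colorimetry, assuming sRGB input and a D65 white point; it converts pixels to LCh and tallies peak chroma per hue sector within lightness bands. Treat it as a loose illustration of the math involved, not Masaoka’s published Gamut Rings construction, and note that the function names and band counts are my own assumptions.

```python
# Sketch of a gamut-rings-style analysis: convert sRGB pixels to CIE LCh
# (lightness, chroma, hue angle), then summarize peak chroma per hue
# sector within lightness bands. Illustrative only, not Masaoka's design.
import numpy as np

def srgb_to_lch(rgb):
    """rgb: float array (..., 3) in [0, 1]. Returns L, C, h (degrees)."""
    # Undo the sRGB transfer function to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> CIE XYZ (D65 white point).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # Normalize by the D65 reference white, then apply the CIELAB nonlinearity.
    xyz_n = xyz / np.array([0.95047, 1.0, 1.08883])
    d = 6.0 / 29.0
    f = np.where(xyz_n > d**3, np.cbrt(xyz_n), xyz_n / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    C = np.hypot(a, b)                       # chroma: distance from neutral axis
    h = np.degrees(np.arctan2(b, a)) % 360   # hue angle in degrees
    return L, C, h

def chroma_rings(rgb, n_rings=10, n_hues=24):
    """Peak chroma per hue sector within each lightness band."""
    L, C, h = srgb_to_lch(rgb.reshape(-1, 3))
    rings = np.zeros((n_rings, n_hues))
    li = np.clip((L / 100.0 * n_rings).astype(int), 0, n_rings - 1)
    hi = (h / 360.0 * n_hues).astype(int) % n_hues
    np.maximum.at(rings, (li, hi), C)
    return rings  # rows: dark -> light, cols: hue sectors

# Example: random pixels standing in for a video frame.
frame = np.random.rand(480, 640, 3)
print(chroma_rings(frame).round(1))
```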
Masaoka’s Gamut Rings Color Scope matters across imaging, color science, video engineering, and HDR production for several reasons:
It bridges the gap between technical data and human perception.
Traditional scopes display numeric color data, rather than how humans perceive color. Masaoka’s Gamut Rings visualize lightness, chroma, and hue together—mirroring CIE LCh space—to let colorists and engineers interpret color in perceptual terms.
It reveals hidden problems in HDR and tone mapping.
In HDR and wide-gamut workflows, colors can shift or clip unpredictably. The Gamut Rings expose these distortions, making tone-mapping and gamut consistency easier to manage across formats like Rec. 709, P3, and Rec. 2020; a short gamut-check sketch follows this list.
It advances visual analytics for research and development.
For researchers, the scope offers a clearer 2D view of color behavior, simplifying tone-mapping analysis, color-rendering evaluation, and gamut mapping. It turns complex math into intuitive visual insight.
It’s directly practical for creative industries.
From film to digital art, the Gamut Rings give creators perceptual control—helping colorists refine hues, VFX artists detect gamut errors, and cinematographers predict lighting effects—bridging science and creativity.
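Because the Rec. 709 / P3 / Rec. 2020 point above is so central to HDR delivery, here is the gamut-check sketch promised in the list. It uses the standard linear-light BT.2020-to-BT.709 primaries conversion (the transform standardized in ITU-R BT.2087) and flags any pixel the smaller gamut cannot represent; the function name and tolerance are illustrative assumptions, not part of Masaoka’s scope.

```python
# Detect Rec. 2020 colors that fall outside the smaller Rec. 709 gamut --
# the kind of clipping risk a perceptual scope makes visible at a glance.
import numpy as np

# Linear-light BT.2020 -> linear-light BT.709 (standard primaries
# conversion via CIE XYZ with a shared D65 white point).
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def out_of_709_gamut(rgb2020, tol=1e-4):
    """rgb2020: linear BT.2020 array (..., 3) in [0, 1].
    Returns a boolean mask of pixels BT.709 cannot contain."""
    rgb709 = rgb2020 @ BT2020_TO_BT709.T
    # Any negative or >1 component means the color sits outside BT.709.
    return ((rgb709 < -tol) | (rgb709 > 1 + tol)).any(axis=-1)

# A fully saturated BT.2020 green is far outside BT.709:
print(out_of_709_gamut(np.array([[0.0, 1.0, 0.0]])))  # [ True]
```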
Kenichiro Masaoka earned his BS and MS in Engineering from the Tokyo Institute of Technology and completed his PhD there in 2009. A Fellow of the Society for Information Display (SID) and recipient of the 2025 SID Otto Schade Prize, he is recognized for advancing image quality through contributions such as BT.2020 colorimetry, UHD TV performance evaluation, the invention of gamut rings, and MTF measurement methods. He currently chairs the ICDM Subcommittees on Color Metrology and Spatial Measurements.
Where Art, Tech, and Story Collide
Leaving Pasadena, I felt the hum of innovation still buzzing in my mind — not just from the technology on display, but from the people behind it. Engineers, artists, and storytellers all chasing the same elusive goal: to make the invisible visible.
The future of media will belong to those who can navigate both the circuitry and the soul. For ZOOLOOK, that’s the sweet spot — where technology amplifies creativity, and every new tool becomes another brushstroke in the art of storytelling. The Summit reminded me that the next revolution in media won’t come from machines alone, but from how we, as creators, choose to tune them to the human frequency.
#smpte2025 #mts2025 #mediatechnology #liveproduction #aibroadcasting #storytelling #zoolook #innovation #creativetechnology #filmmaking #broadcastfuture

About the Author
Nicholas Da Silva, known professionally as ZOOLOOK®, is an Afrofuturist multidisciplinary artist, musician, and filmmaker based in San Francisco. He is also President of the SMPTE Student Chapter at City College of San Francisco. Through music, film, and digital art, ZOOLOOK explores the intersection of culture, technology, and storytelling.