
EditFest 2023: Exploring the Impact of Artificial Intelligence on the Art of Editing



When it comes to AI, I’ve made it pretty clear where I stand. I believe we are seeing the transition away from specialization and back to generalization when it comes to the value we bring to the workplace, and AI is only accelerating this process. As creatives, we can choose to embrace the possibilities of what AI can offer us while doubling down on the unique zone of genius that sets us apart from the machines, or we can choose to be left behind.

At last year’s EditFest, I was one of four panelists invited to talk about the convergence of AI with the creative process, particularly as it relates to the art of editing. My fellow distinguished panelists were Jon Dudkowski, ACE (Star Trek: Discovery, Umbrella Academy, Man in the High Castle); Chad Nelson, producer and creator of Critterz, an AI-generated short film; and Cristobal Valenzuela, co-founder and CEO of Runway (and named one of the top 100 most influential people in AI by Time magazine). The panel was moderated by Carolyn Giardina, tech editor at The Hollywood Reporter.

I was honored to be part of a panel on this important topic, and I am immensely grateful to EditFest and Jenni McCormick for giving me permission to publish the talk here for all of you to experience. Please enjoy the 2023 EditFest panel, Artificial Intelligence: Exploring the Technology and Its Impact on the Art of Editing.

Want to Hear More Episodes Like This One?

» Click here to subscribe and never miss another episode

Here’s What You’ll Learn:

  • What generative AI is and how it works
  • What separates humans from machines
  • How AI can be used in the cutting room
  • How AI enhances the editing process
  • Where AI was used in creating the animated short film, Critterz
  • What AI can’t do and why the human mind is still essential in the creative process
  • How AI can help creatives
  • The Gartner Hype Cycle and where we are in terms of AI technology
  • What creatives can do to catch up with AI
  • What mentoring in the edit bay can look like in the age of AI
  • What makes humans irreplaceable (even actors and voice actors)
  • How data sets that AI feeds on are used and maintained

Useful Resources Mentioned:

The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil

Critterz — An animated short created with AI

The Gartner Hype Cycle

Ep233: Redefining Your Career Path in a Post Generational Society | with Mauro Guillén

Ep85: Mentorship, Networking, and Surviving Hollywood Blockbusters | with Dody Dorn, ACE

The Creativity Code: Art and Innovation in the Age of AI by Marcus Du Sautoy

Continue to Listen & Learn

Ep249: OpenAI’s Chad Nelson on How Artificial Intelligence Could Shape the Future of Creativity, Collaboration, and How We Can Survive

Ep222: Is Artificial Intelligence Coming for Your Job? Maybe…and Here’s How to Prepare | with Michael Kammes

Ep221: How to Be an Irreplaceable Creative in the Emerging World of Artificial Intelligence | with Srinivas Rao

Ep214: What Creativity Is, How It Works, and the Laws to Learning It | with Joey Cofone

Ep212: The Science of Storytelling, Why We Need Stories, and How to Rewrite Our Own | with Will Storr

Ep246: Building a Career Beyond Your Job Title, Strategically Crafting Your Story, and Diversifying Your Career Portfolio | with Jeff Bartsch

Ep232: How to Future-Proof Your Creative Career, Avoid Burnout, and Build a Life Bigger Than Your Résumé | with Christina Wallace

Ep245: How to Reinvent Yourself, Pursue Your Dreams, and Change Careers at Any Age | with Marcelo Lewin

Ep231: How to Become Resilient In the Face of Change (and Manage an Identity Crisis) | with Brad Stulberg

Mastermind Q&A: How to Successfully Be a “Specialized Generalist” | with Michael Addis

Episode Transcript

Jon Thomas

I'm pleased to introduce the artificial intelligence panel from EditFest LA. Please welcome our moderator extraordinaire, the tech editor at The Hollywood Reporter, Carolyn Giardina, and our panelists: Zack Arnold, ACE; Jon Dudkowski, ACE; creative collaborator with OpenAI, Chad Nelson; and co-founder and CEO of Runway ML, Cristobal Valenzuela. Take it away, Carolyn.

Carolyn Giardina

How's everybody's day going? We are very excited to be here to talk about AI and editing today, and thanks to ACE for putting this together, especially the incomparable Jenni McCormick. So we had some introductions already, but seated next to me is Zack, then we have Jon, Chad, and on the end is Cris. So, AI is not new. But with, of course, the conversation surrounding the strikes, this has become a topic that's raised a lot of questions, started a lot of conversations, and certainly raised concerns about the future of jobs. So hopefully this session will help educate you and give you a lot of information as everybody proceeds forward. This is really a collision of business, technology, and creative issues. But I think a good way to start is to focus on generative AI, which is really what's caused this explosion. Cris, you're going to give us a little generative AI 101?

Cristobal Valenzuela

I'll try. That's a good question, I guess. But I will start with what you just said about AI not being new. AI has been around for 80 years or so. It's only now that we're starting to see a new wave of models and technologies really develop and become a bit more mainstream, specifically a subfield of artificial intelligence called generative AI. There are other types of AI that we all use every day: when you use Spotify or Netflix or any streaming service, the recommendation algorithm is an AI system behind the scenes. Generative AI is new in the sense that it's getting better and more effective not only at understanding the world, which is what algorithms used to do before, but at generating things in the world. And that could be language, audio, video, images. At the core, these algorithms are just math: a probabilistic and statistical model that tries to predict a certain pattern in data. And what we're seeing right now is a boom, an explosion, in the quality of how these algorithms work. But it's really important to remember that these are tools. These are algorithms that don't have a goal or an intention; they don't behave and act on their own. They are tools, and these tools are getting really good.
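
[Editor's note: To make the "probabilistic model that predicts a pattern in data" idea concrete, here is a minimal sketch in Python. This toy bigram model is nowhere near a modern diffusion or language model, but it shows the same two steps Cris describes: learn a pattern from data, then sample from it to generate something new.]

```python
# Toy generative model: learn which word follows which, then sample.
import random
from collections import defaultdict

corpus = "the editor cuts the scene and the editor watches the scene again".split()

# 1. Learn a pattern from data: count what follows each word.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# 2. Generate: repeatedly sample a statistically likely next word.
word = "the"
output = [word]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)  # sample from the learned distribution
    output.append(word)

print(" ".join(output))
```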

Carolyn Giardina

And probably the area that has been having active conversations about AI the longest is the visual effects community. I mean, if you look back two decades ago, when The Lord of the Rings came out, if you remember the big battles with the, you know, enormous orc armies, those were created with the assistance of AI. And more recently, a lot of tools that are used, including, you know, Adobe's Premiere, have AI tools within them, and we're gonna be seeing a lot more. For example, there's work being done to make ray tracing faster so that you can do real-time virtual production and things like that. So you're going to be starting to see that, but today we're going to focus on what it means to all of you. So what I wanted to do is start with Zack and Jon. Would each of you kind of set the stage? You've both been paying a lot of attention to this, and, you know, what are your current feelings on it?

Zack Arnold

Sure. So the first thing that I'll start with is that if the expectation is that I'm on this panel as an AI expert, I have absolutely no idea what I'm talking about right now. And anybody that calls themselves an AI expert and says they know where it's going next, it's like asking a dog to do calculus: they have absolutely no idea what's going to happen next, including the people that have created all of this new technology. So I want to come at it from the perspective of: I don't have the answers. I don't know where we're going next. But my perspective is not so much about the technology and the apps and the tools. My specific aim is to help us understand what makes us human, what makes us creatives, and what sets us apart from the machines. And what I've seen over the last several months, in the work that I'm doing with all the students in my Optimizer program, is them realizing that the fact that they have very specialized, singular skill sets is going to put them at risk. So the reason everybody's been asking me, why am I carrying around a 2002 Thomas Guide at an editors' panel? How many of you even know what the heck this is? Hey, I'm not as old as I thought I was. Now, how many of you actually owned one of these and used it? Okay, so it's more than I thought. This one is very meaningful to me, because this was when I moved to LA. For those that don't know, this was the Google Maps of 2002: how you got around Los Angeles without getting lost. If in 2002 your business model was "I sell paper versions of Thomas Guides," do you think you're still in business? Probably not. However, what if your business model is "I get people in Los Angeles from point A to point B with the smallest amount of confusion possible, and I make their journey better"? If you pivot, can you still be in business today? That's what I specialize in: helping you understand the narrative of the value you truly bring to your craft. Because my belief is that over the next three to five years, and I've now had this backed up by multiple economists and experts in a multitude of industries beyond entertainment on my podcast, they've all said the following, and one person just told me last week: 5% or less of specialized careers are going to survive AI. So what we're seeing is the transition from specialization in our economy to generalization. And I always correct that by saying it's back to generalization, because we were a generalized society for most of human civilization. Then all of a sudden the Industrial Revolution comes along, and for the sake of efficiency, we all became widgets on the assembly line of somebody else's dreams. That model is going away, and AI is going to start replacing highly specialized skill sets. So if it's a matter of what are the tools I need to learn, what does this do, what does that do, that's what these guys are for. I come at it from the perspective of: how do we separate ourselves from the machines, and what are the human skills that are going to make us irreplaceable creatives?

Jon Dudkowski

You know, I wish I brought a prop.

Zack Arnold

Sorry I didn't prepare you.

Jon Dudkowski

It was very, very smart. So, I'm also not an expert on AI. I can claim to be somebody who has been really passionately fascinated with artificial intelligence for a couple of decades. My first spark of inspiration came from Ray Kurzweil's The Age of Spiritual Machines, which I think I picked up in, like, 1998. And there's a premise in that book, which is that while we can't predict the future, as Zack says, what we can predict is what we would do if we had enough computing power. If Moore's law is a law, what would we do with enough computing power? That's the premise of this book. Raise your hand if you've read any Ray Kurzweil? Okay, there's a couple people. I think he's a fascinating guy. He's a little odd, and some of his theories are kind of out there. But that simple premise, I think, opens up a lot of doors. I've been interested in science fiction, and it's really been a passion of mine for a long time. And then I was at a lecture at USC about seven or eight years ago, given by a professor named Norm Hollyn, a member of ACE. Norm gave a lecture about machine learning and artificial intelligence used in the cutting room, and about some of the cutting-edge work being done in laboratories around the US at that time. And he showed a demo from some students at Stanford, I think, who had built, I mean, we've always joked about auto-edit, but they were building a machine learning system for cutting scripted dialogue scenes. It used facial recognition and audio recognition, it would use existing tropes for editing, and it would build out a scene. It would do what would take an editor three hours: get the cut to the point where you can then start the refining process. And the machine did it in like eight seconds. And, you know, Norm said, this is something that they're doing now; this is something that's on deck. And that blew my mind, because the Ray Kurzweil science fiction stuff suddenly intersected directly with my career and my passion, which is filmmaking. And then he had another step in that same lecture, and I'm paraphrasing. Norm passed away a few years ago, and the world is a sad place without him, but I think he'd let me paraphrase a bit. In this instance, the technology we were looking at was based on facial recognition for video clips. But he said: what happens when we turn the camera around, and the device you're holding is recognizing the face of the viewer? You've got an algorithm that is building something on the fly, and it begins using the same facial recognition processes that tell it whether somebody's happy or sad or excited or whatnot, and you start using, basically, biofeedback to train the algorithm to give the audience exactly what it is they want. As an editor, I mean, he started by blowing my mind, and then it was like he took the scraps that were left and set them all on fire. This has been a really fascinating subject of mine for a long time. And then, a couple of years ago, I was painting a bench at my kid's elementary school, and I started talking to another dad. He was a visiting professor at USC, and I was like, I do a little teaching at USC. And it turns out he runs the Neuroscience and Artificial Intelligence Lab at the University of Montreal.
And we decided to put together an event, using his contacts at the engineering school and my contacts at the film school, to talk about the intersection of AI and filmmaking. That was about 18 months ago, and the subject of AI and generative intelligence has become very hot in those 18 months. It was serendipitous, I guess. And that's actually how I came to know Chad, and through Chad I came to know what Cris is doing. I mean, I don't have all the answers. I don't know specifics. I do have some theories, and I know the questions I've been asking. I can tell you what has been resonating with me and where I see things standing now, and I can tell you what concerns I have and what excites me. It's pretty radical stuff. In a phone call we all had a few days ago, I said, I like to surf. That's my hobby, the way I keep my head straight. And I think, personally, that artificial intelligence is a wave that we're all watching come toward us. We're all just sitting in the lineup, seeing this wave come, and the only thing I know to do when I see a wave is to begin to paddle and try to get into position, to see if I can surf it. So all of these conversations, all of these discussions, are about figuring out how to be in position to surf this wave. I don't know how big it's gonna be. It might not be surfable. But if it is, I sure am going to try.
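
[Editor's note: The Stanford demo Jon describes (facial recognition plus audio alignment plus editing tropes, assembling a rough cut in seconds) was never published in detail, so the sketch below is purely hypothetical. Every field and scoring rule here is invented for illustration; it only shows the shape of such a pipeline: for each scripted line, cut to the best take of the speaker.]

```python
# Hypothetical auto-assembly sketch: score takes by detected signals,
# then apply a hard-coded editing "trope" to build a rough cut.
from dataclasses import dataclass

@dataclass
class Take:
    clip_id: str
    character: str          # whose coverage this is (from face recognition)
    line_number: int        # which scripted line it covers (from audio alignment)
    face_confidence: float  # stand-in for a facial-recognition score

def assemble_rough_cut(takes: list[Take]) -> list[str]:
    """Trope: for each scripted line, cut to the highest-scoring take."""
    cut = []
    for line in sorted({t.line_number for t in takes}):
        candidates = [t for t in takes if t.line_number == line]
        best = max(candidates, key=lambda t: t.face_confidence)
        cut.append(best.clip_id)
    return cut

takes = [
    Take("A001", "Dennis", 1, 0.91), Take("A002", "Dennis", 1, 0.78),
    Take("B001", "Blue", 2, 0.85), Take("B002", "Blue", 2, 0.94),
]
print(assemble_rough_cut(takes))  # ['A001', 'B002']
```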

Carolyn Giardina

And thank you for bringing up Norm. He was the first editor I spoke with about AI as well. He was way ahead.

Jon Dudkowski

He's a really inspiring guy. Yeah.

Carolyn Giardina

Yes. So again, there are a lot of tools out there now that are using AI to varying degrees and for various purposes, and more are being developed. As an example, Cris, would you like to take over and tell us a little bit about what your company is doing?

Cristobal Valenzuela

Yeah, for sure. So Runway is a company that's now turning five years old; I've been working on the idea of Runway for almost eight years now. What we do is fundamental research in the field of generative AI and artificial intelligence, to come up with new tools for human expression. That's how we like to think about it. So it's really thinking about creation and editing and automation, everything that would allow you to take the ideas in your head to real pieces of film or content in the shortest amount of time possible. Pretty much like the guide here: instead of having the guide physically, it would be great if you could just have a version of that guide, in much more detail, that's much cheaper and easier to get around with. That's how we think about the tools we make. We're building systems that take traditional, more time-consuming processes and make them much more convenient, easy, and expressive to use. We've made some algorithms for image generation, so you may have heard about latent diffusion or stable diffusion. And then more recently, models for video generation that I think some of you have been using for creating video: Gen-1 and Gen-2, those are the models that we've released more recently. And yeah, I've been working on this for some time now.

Carolyn Giardina

And just to give this a little bit of context for the editors in the audience who currently have their, you know, editing system of choice that they use: is this a tool that augments that work and works alongside it, or do you have to replace something? Could you just give us a little sense of the workflow?

Cristobal Valenzuela

So it's a tool that augments workflows. There are tools on the automation side of things: for rotoscoping, for example, we have a tool called Green Screen that can segment and do rotoscoping in just a few seconds, and a lot of editors use it alongside traditional NLEs or compositing software. We have tools for inpainting that allow you to remove wires and objects within shots. And we have tools that allow you to generate videos. The best way to think about AI these days, I like to think about it as a new kind of camera. With a camera, the way you use the camera really matters. The camera can be used to tell stories in new ways, in combination with existing workflows, like a lot of creators are doing these days. And so right now, it's a tool that enhances that process of editing, or creating a film or short film or a video.
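
[Editor's note: Runway's Green Screen is a learned segmentation model, and its internals aren't public. As a conceptual stand-in only, here is the classical version of the same job in OpenCV: background subtraction producing a crude foreground matte from a locked-off clip. The file name is hypothetical.]

```python
# Crude "automatic roto" illustration: classical background subtraction.
# This is NOT Runway's method, just the idea of isolating a moving subject.
import cv2

cap = cv2.VideoCapture("shot.mp4")  # hypothetical locked-off input clip
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    matte = subtractor.apply(frame)   # white = moving foreground
    matte = cv2.medianBlur(matte, 5)  # clean up speckle in the matte
    cv2.imshow("matte", matte)
    if cv2.waitKey(1) == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```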

Carolyn Giardina

And again, they're all cloud-based tools.

Cristobal Valenzuela

This is all cloud-based. The one challenging thing around AI these days is that it's very compute-intensive, so you need a lot of computing power to be able to run these models at the speeds you require. We have a supercomputer in the cloud that allows you to run these models fast enough, so you don't have to have a supercomputer in your house.

Carolyn Giardina

And to give us a little bit of a sense, because, again, this is developing so quickly: where are things right now, and where are you going? Could you give us a little sense of your roadmap?

Cristobal Valenzuela

Sure. Um, without going too much into detail, a lot has happened over, I would say, the last 12 months. But again, AI is not new as a field; it's been there for years and decades. Over the last 12 months, we've seen an explosion of new models, new quality, and new overall results in every domain, in every modality, from language to images to video. Right now we're entering, I would say, the first phase of video models. And it's good to remember that there are different domains in AI. There are language models, and when you hear about LLMs, which I think a lot of people here have heard of or maybe used, language models and chatbots, that's one particular domain and type of artificial intelligence. There are others: images is another one, video is another one, and there are models that can do a bit of everything. So really what we're working on is pushing forward video as one of those modalities that can continue to achieve great results. Going deeper into it will take two things. The first is control. These are tools, and tools become really useful the moment you can control them. Right now you can generate video, and if you haven't tried it, try it, it's phenomenal. It's really interesting, as we were discussing here: you can generate video using words, you can generate video using images. But really, to get to the next level, you need to be able to control it in the same way you can control a camera. You can move the camera around, you can shoot from the side, you can change the lens, you can change the camera you're using depending on the situation. That kind of control really matters for us. And then the next stage, the further improvements, are on this idea of multimodal systems. Think about when you watch a movie: it's a multisensorial experience. You're watching the picture, there's sound, there's text; you're taking in all of those things at the same time. Creative software should work in the same way. It should allow you to communicate with the software with words, with verbal communication, with text, and the system should be able to translate your intentions into not just audio and video and images, but a cohesive narrative. We're still not there yet, but a lot of the work will be to push the boundaries of multimodality.

Carolyn Giardina

Chad, we're gonna get to you now. You recently created an animated short using DALL-E. So for starters, why don't you explain the technology, and then we can get into the short?

Chad Nelson

Sure. So I work a lot with OpenAI, and I think everyone now has probably heard of that company through the release of ChatGPT or GPT-4. Last year in April, they announced a text-to-image system. So just like Cris was talking about, you can basically type in a thought, if you will, or a prompt, as they like to call it, and it would generate an image. The examples in the beginning days would be like, oh, I want an avocado chair, or I want a chipmunk riding a skateboard in Central Park, and it would deliver that image. And when I saw that, it was very fascinating to me, because I'm a creative director, so my whole business is about not only thinking of ideas, but how to sell those ideas, how to actually pitch and present, how to go from that light bulb in your head to a final piece of finished work. When I saw this, it made me say to myself, I think this might be the first tool I've seen that has changed the way I think about conceiving ideas, as opposed to executing finished work. So I wrote OpenAI, literally a cold email, and said, hey, if I worked for, say, Disney, and I was representing Star Wars or Marvel and the entire universe, and this system was trained on my full library of all my planets and characters and wardrobe and all this, how quickly could I envision my world? And because of that email, they said, well, we'd love for you to see what you think. So they gave me access to DALL-E around April of last year, like I said, and I played with it for about eight hours straight, just creating things: architecture and locations, characters, and so forth. And what I found, what it made me realize, is that this is a tool that lets me essentially ideate at an execution level of quality. It changed the way I think about presentation and conceptualization. Even when we think about, like, the last presentation talking about previs: the previs now could start to look really close to final work. I mean, that's what Runway allows. There was one story, before I go into Critterz. Jon recently asked me, as a little case study, and I don't remember exactly what it was for, but it was like: I need an LED mountainscape, like a little 3D, you know, classic green-screen kind of LED landscape, like it's on a computer screen. How quickly can you do that? So I went into Runway, into Gen-2, and I typed a quick description. About three minutes later, I had a couple of different options, and I sent them to Jon. Jon's like, yeah, that would be a week of discussion, meetings, previs, and then work, and we just whipped something out in a matter of minutes, basically on a phone call. And that, to me, is where this is really so radical. So if I fast forward: in September of last year, about a year ago, I said to OpenAI, I think we can make a movie with DALL-E. I mean, literally taking all these visuals, then using traditional tools, Premiere, After Effects and so forth, and also utilizing the Unreal Engine, I think we could actually make a short film. And, this was fascinating, they said, well, how quickly could you do it? Because if we're going to do this, we want it to be the first movie made with AI. So they gave me a week. And I had nothing, and they said, well, what's your idea?
And so I pitched them. I said, well, I have this idea of, like, David Attenborough's Planet Earth colliding with a Monty Python sketch, where all the animals start talking back to the narrator. They're like, oh, that's great. When can you actually show us something? Now, normally in this business, you would probably go in with a script and maybe a couple of boards. I went back to them a week later with the film. Now, it wasn't fully animated, and it didn't have any voice actors; I just used myself. But I showed them the movie, the five-minute movie, and it was still a little rough around the edges. And they're like, great, let's fund this. Actually, the longest part of the process was getting the actors to agree to do it for OpenAI, because they were so nervous about, you know, OpenAI training a model on their voices, on the talent's voices. So anyway, we made this film, we launched it, and it's now on sort of a film festival circuit. And it's really interesting to see the response, because one of the debates we had was: do we put in the opening titles, like, "OpenAI presents"? Or do we just let people walk into this film? We chose to put that not at the beginning, but at the end. And I think that's where it's been very fascinating, because when I see the response, I see the reactions of people who don't know, and then they see that, oh wow, AI was involved in this. And then it also starts a conversation of real curiosity and intrigue: well, how did you use it? How did it benefit your process? How much shorter was it because of this? So I think we'll take a look at a clip.

Carolyn Giardina

Yeah, and then we'll talk about it. Can we play the clip please?

Dennis

I'm David Attenborough's neighbor, Dennis, and welcome to a forest filled with little critters. Our first encounter is this curious fellow with the eyes of a hawk and the ears of a bat. He stands guard for the many creatures of these woods. Now, this astonishing creature is thankful for the lookout, for it sleeps 23.6 hours a day. Specific, I know. But witnessing their subconscious adventures is truly adorable. Looking upward, we find the red jackal spider. It's very distinctive. Wait, I'm sorry. Who is speaking?

Blue

I'm speaking to you.

Dennis

Oh dear I've eaten the wrong berries.

Frank

Nope. It ain't the berries.

Dennis

Who are you?

Frank

I'm Frank.

Blue

And I'm Blue. Which is weird, because I'm a red spider.

Dennis

I could not call you Frank or Blue. This is a science documentary.

Blue

Science? You think secretly filming someone while they sleep is science?

Dennis

Why, yes.

Blue

Why, no. It's creepy.

Frank

So creepy.

Dennis

Listen, I can assure you I'm here as an award-winning documentarian, at the behest of His Majesty.

Frank

Oh. So you're an English colonialist?

Dennis

No, no, we're not here to colonize.

Frank

Right. Pretty soon you'll have us talking all dainty, drinking tea.

Blue

Not me. I prefer coffee.

Frank

Yeah. me too.

Critter

Me three.

Blue

We should do a coffee run. Yeah.

Dennis

Please, please. I just need you all to be mysterious little critters.

Frank

What? Little?

Blue

My therapist warned me about people like you. You just love to label.

Dennis

Therapist?

Blue

Yeah. Would you say you were the guardian of the woods? Yes. See? It's tough being a red spider. So many expectations.

Frank

Oh, yeah.

Chad Nelson

So what's funny about that is, I watch that now and it's a year old. In AI terms, that's like 10 years old. But it just shows you: basically all of that was created by just typing in descriptions. And then with DALL-E at the time, you could, or you still can, go into areas and really modify the character. So you could choose the eyes, you could change the backgrounds. It wasn't just one-and-done, push a button and it makes it. It was very much a tool. And really, it's about how quickly I can get to those thoughts of: oh, what might this character look like? How does it move? What's the background? Where is it inhabiting? Just how quickly can I generate that. That's what I found so fascinating about this process.
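
[Editor's note: Chad had early research access, but the text-to-image loop he describes is now a public API. Here is a minimal sketch using OpenAI's current Python client; the prompt is illustrative, and the snippet assumes an OPENAI_API_KEY in the environment.]

```python
# Minimal text-to-image call with OpenAI's Python client (v1.x).
# Iterate the way Chad describes: regenerate, tweak the description,
# and curate the results yourself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="A furry woodland critter with the eyes of a hawk and the "
           "ears of a bat, standing guard in a mossy forest, film still",
    size="1024x1024",
    n=1,  # dall-e-3 generates one image per request
)
print(result.data[0].url)  # temporary URL of the generated image
```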

Carolyn Giardina

So Chad, I think a lot of us would be interested to hear a little bit more about the impact on the workflow, what actually was done with AI. I know it's not an NLE, and DALL-E isn't an animation program. So where does the animation work fit in? And importantly, I think you were the editor, so tell us about your editorial process.

Chad Nelson

Yeah, so that was, again, what was so fascinating, because I showed an animatic of the whole film, like I said, a week after coming up with the script and so forth. And what I showed was essentially DALL-E generations, so it looked like finished work. It wasn't just a storyboard sketch or primitive 3D geometry; it looked like the finished film. So from the moment the client, I guess you could say OpenAI, first saw an image, it was what it ultimately looked like, which, again, was a very fascinating way of selling, or getting approval on, a project. As for editing, what I did is essentially what you would do with any animatic: you lay it out, you block it out, you're doing all the voice acting, and you get to a cut. And then, when I actually worked with the voice actors and got all their takes, I essentially started replacing my dialogue, subtle changes, but still kind of retaining the original cuts. So to me, it was just an interesting process where you're thinking about the edit even as you're writing it, even as you're creating it, which I think is always a fascinating thing. As opposed to just being handed footage, being a part of that process from the very inception, again, was a very big change for me personally.

Carolyn Giardina

You said the biggest change you found was in pre-production, but would you elaborate on what was different in post-production?

Chad Nelson

In post, I mean, we still used After Effects. All those backgrounds were cut up and rotoed out and animated traditionally in After Effects. And then Unreal Engine, if any of you have seen any work or performance capture through that, and I think some people have done it with some of the Star Wars properties and so forth. The AI kind of stopped once we got to the visuals. Everything else was what I would call, you know, traditional digital post production, or production. I mean, we used facial capture, and most of that was, again, freeware software that you can just download, integrated with Unreal. So our goal wasn't to replace the human effort of all those traditional techniques that productions use; we wanted to deploy those. But how we conceived it and how we got to that point was where the AI really gave us the assist.

Carolyn Giardina

And as we've been saying, this is moving at lightning speed, and this was done a year ago. So had you made this today, what would have been different?

Chad Nelson

Well, this is where, I mean, it's so funny, because, Cris, text-to-video, or even text-to-image and then image-to-video, is kind of the latest playground. And that's where my role in this industry, if you will, is to basically say: well, there's a lot of demos out there, but how can we show, or what can we make, that's truly watchable? What's production quality, or a real example, a case study? To me, that is where I'm looking now. And I'm not announcing anything, but one could say we're definitely playing with text-to-image and image-to-video. Using tools like what Cris has developed is, I think, kind of the next frontier.

Carolyn Giardina

Jon, I'm sure you have questions.

Zack Arnold

I personally have many. What I'm curious about, coming at it from the perspective of workflow, especially going off what you said about the vast difference between a year ago and now: what you're going to say will be obsolete by the time we're done with the conversation we're having today. But it seems to me that now it's as simple as: you can go into ChatGPT and say, write me a three-minute short with the following characters. There's a red spider, there's a blue monster, and the voice of David Attenborough; he's narrating and they start talking back to him. It gives you the script; you then export and import into something like Runway or DALL-E. I know it's a little more complex than that, but just to help me understand, because I'm still learning and I think everybody else is: how close are we to being able to pair all these together? You've already eliminated the writer, you've definitely eliminated the storyboard artist, you've definitely eliminated somebody that has the ability to do the animations. What parts of the process could you do now, and where can you basically skip?

Chad Nelson

Well, you know, Critterz was interesting, because ChatGPT hadn't been released, and GPT-4 wasn't available to the public then. You know, I get lucky that I can play with some of the new models. I did not use it to create or write anything with Critterz, GPT-4 that is, because I found it was far more interesting to improv and act and play. I think human play and human creativity is what brings a lot of the life and the spark to entertainment, where we also create that connection. And I did that experiment. I said, great, GPT-4 is out, let's try to have it create a critter and let's see how good it is. And the reality is, it came up with some pretty funny lines. Every once in a while there is a gem. But it is a slot machine pull. And it also doesn't know if it's made anything brilliant, or if it's made, literally, you know, something completely mundane. It has no idea. So you still have to curate it. I do think what's interesting about this world is that for the Command-N of my entire career, it's always been a blank slate. There's always a white page or a white Photoshop image. And in this new world, when I hit Command-N, there will probably be something presented to me that I'm starting from, never starting blank. Now, I think there will always be people that will still want to start at that blank page, and I think there's some merit to that, obviously. But I think what's interesting about this new technological wave, as you refer to it, Jon, is that, yeah, that blank page... I mean, once this is integrated into Microsoft Office and the entire Adobe Creative Suite, opening a Photoshop file, it might actually be intelligent enough to know that I'm working on this project for this Star Trek series, and it might actually start to generate things for me based on knowing what I'm already working on. I think it's going to be very fascinating to see what emerges from that.
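
[Editor's note: Chad's "slot machine pull" has a direct analogue in the API: ask for several completions at once and let a human pick the gem. A minimal sketch with OpenAI's current Python client; the model name and prompt are illustrative.]

```python
# Pull the slot machine five times, then curate by hand. The model
# can't tell a gem from a mundane line, so a human still has to choose.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat model works here
    messages=[{
        "role": "user",
        "content": "Write one funny line of dialogue for a sarcastic "
                   "red spider interrupting a nature documentary.",
    }],
    n=5,              # five candidate completions in one request
    temperature=1.0,  # keep the sampling loose for variety
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Take {i}: {choice.message.content}")
```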

Jon Dudkowski

Can I dive in, sort of riffing on that? So we had this event at USC earlier this year, and again, it was the engineering school and the film school, and there were some interesting takeaways. That's actually part of the reason I'm sitting here now, because I talked to Jenni about it and was like, hey, I think we should be talking about this. And it turns out there were a lot of other people with the same idea at the same time. As an editor and as a filmmaker, there's a responsibility to go down all the paths. And I'm actually paraphrasing a friend of mine who's in the audience right now, an editor named Tim Good, who is up for an Emmy for The Last of Us, by the way, if you guys are all voters. Little plug there, Tim. He's an old friend of mine, a really remarkable guy, a brilliant editor. And I think he was one of the first people to articulate that idea to me: part of the editor's job is to pursue all the different paths. You've got to go down the path. You get notes, you get a scene, you've got to explore all the different avenues. That is part of the job, and you can't take shortcuts. You can't say, I'm going to just jump to what I think the right version of the scene is going to be. Or at least that should generally be discouraged, because a lot of times it's the discovery you make along the way. Right now, tools like Gen-2, what Adobe is making, ChatGPT: there are a lot of really interesting generative AI tools out there that are allowing you to go down a path faster, and are opening up new paths. For example, Runway Gen-2 is really incredible stuff, and you've got to go check it out. When Chad introduced me to it, one of the first things I thought of as an editor is, could I use the software for this: it's not unusual that I'm cutting a dialogue scene and the producers decide, maybe we like the beginning of the scene, we like the end of the scene, and everything in the middle, we'd love it if we could just take it out. And maybe you've got a bunch of characters walking around the room, so continuity's just been blown to hell. It's not unusual to find yourself in that position, or at least I've found myself in that position. And a lot of times, the only way to get around that is with an insert shot, or with some piece of footage that you can use to re-establish the geography of a scene. The director a lot of times shoots what's on the page and what's in their head, and isn't necessarily thinking: what if they destroy my scene completely and only use a piece of it, and try to stick it onto this other piece and make it a Frankenstein? So I thought of Runway Gen-2, and I said, could you use it to create an insert shot? Again, I work in a lot of science fiction. Can I use it to make, you know, a hybrid fish hand touching a computer panel while a red light goes on? As an editor, I could use that to pitch that sort of insert shot. I could build the scene, I could use that insert shot, and I could tie it into my sequence, whether or not it's airable as a usable piece of film.
And honestly, some of the stuff that Gen-2 does is pretty damn close. I can then absolutely use that to sell it to the studio, sell it to the producers, and say: here's one of the paths that I've gone down. With this insert shot, we can tell the story, we can do the things you're asking for. I think the missing piece is this insert shot, and here's an idea of what that might look like. And as Chad is saying, now suddenly we're not starting at zero. It's not just text on the screen saying, you know, "insert shot of blah, blah, blah." I don't know if you've ever struggled to get a studio or a network to imagine what it is you're pitching before you actually have the image of it. A lot of times, having something that you can point to and say, no, this is kind of the gist of what we're looking for, and of course it's gonna be the good version of this, that gets you a long way toward solving that problem. You know, there are other generative AI technologies. Recently I found a piece of software that'll create stems from any music. So you're like, God, I love this song, but I wish I could pull the vocals out. There's a really awesome piece of software out there right now that will do that. Or, oh, this is fantastic, but I wish I could just use the drums for the first 10 to 15 seconds and then bring in the whole thing. This is the kind of stuff where you generally would have to go and ask your music supervisor to get the stems, and then you bring in the stems, and it would take days. Now you can go down that path. As Tim would have said, one of the paths is exploring this musical editing choice, and you can use this software to create stems. They're not perfect, but they're really, really good. This all brings me back to the idea that, and I think of artificial intelligence personally in three stages, right now I think we're in stage one. And that is that these tools are increasing the capabilities and the creative tools and power of the storytellers. And it's incredible. I mean, it really is so empowering. It's creating new ways to get down a path; it's creating new ways to find paths. Stage one is really, really exciting, because you still very much need a human being in the loop to say: this is a good choice, and this is a bad choice. It just generates a bunch of interesting ideas, and you can scroll through them and use that to riff on something better, or, as an editor or as a creative, you can start to work with it. Stage two is, I think, where a lot of people begin, where at least personally I begin, to get concerned about artificial intelligence: when these models get so big that they begin needing human beings less and less in the loop. And I do think, I mean, Zack's exactly right, I don't think anyone can predict the future. You know, if somebody tells you what's coming, you should probably turn around and walk away from them, which means I really shouldn't be saying this. But stage-two AI seems like it's coming down the path, and it's going to require less and less human interaction to say that's a good idea or not a good idea.
Because the model will be big enough, and statistically you're able to say, well, this is statistically really solid. You're gonna look at it, and it's just gonna work. I don't know if you've used ChatGPT 3.5 versus GPT-4, but it is significantly better. The model, the algorithm, is better; it gives you good answers a lot more of the time. Five's coming, and honestly, the thing is, it's not gonna be like, oh, there's a couple nuggets of good stuff in there and a lot of it's junk. It's gonna be like, actually, this is all really, really solid, and you may or may not need to do any adjustments. That's stage two, and I think it is both exciting and scary, because that's the wave face beginning to pick up. And then, you know, the science fiction junkie in me, the one who's making shows about spaceships and evil artificial intelligence machines: at some point, the same technologies are going to allow us to build artificial general intelligence. And personally, as a parent, I think about my kids and the world they're gonna live in. And I think about my own career, and I think about what impacts artificial general intelligence is going to have, as distinct from generative AI. And I think that's actually something that everybody, whether you're working in the film industry or you're just living on this planet, should be considering, because there's a window of time where we're going to be able to do things to affect how those outcomes work out. That window will close at some point, and we're in that window now.
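
[Editor's note: Jon doesn't name the stem-splitting software he found, but open-source source-separation models such as Demucs do exactly this from the command line. A minimal sketch; the track name is hypothetical, and the output layout depends on the Demucs version and model.]

```python
# Split a music cue into stems with Demucs, then grab just the drums.
# Output layout (separated/<model>/<track>/) can vary by Demucs version.
import subprocess
from pathlib import Path

track = Path("temp_music_cue.mp3")  # hypothetical temp track from the cut

# Runs the pretrained model; writes drums/bass/other/vocals as WAV files.
subprocess.run(["demucs", str(track)], check=True)

drums = Path("separated") / "htdemucs" / track.stem / "drums.wav"
print(f"Drum-only stem for the first 10-15 seconds of the scene: {drums}")
```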

Zack Arnold

Do you mind if I jump in here? All right. Anybody, you don't need to do this now, because I don't want you guys pulling out your phones, but look up something called the Gartner Hype Cycle, G-A-R-T-N-E-R, for what he just said about phase one, phase two, phase three. This is not the first time we've ever seen a major inflection point with technology. If you look at the Gartner Hype Cycle, you're going to see a graph where it goes up in stage one: massive amounts of hype about any new technology and how it's going to change everything. Then it drops all the way back down into this thing called the trough of disillusionment: this is not what we thought it was, things have changed in a totally different way. Then it slowly climbs up a slope called the slope of enlightenment: oh, this isn't quite what we expected, but we're learning how to adapt to it. Guess where we are right now? We're at the top of this hype cycle. And I think the thing that everybody's thinking, the elephant in the room, is not "where do I fit into this workflow?" or "how does this tool work, how does that tool work?" It's "am I even going to have a job anymore?" Is there anybody here for whom the real reason you came to the panel today is: I'm not even sure if I'm going to be needed? First of all, I want to say I'm very supportive of a lot of the tools that are out there; I'm experimenting with them as well. But I want to come at what you said from a slightly different angle, slightly playing devil's advocate. I had no idea there was an AI tool to build stems out of music. Like, oh my God, how much would that change the world of editing, right? So from our perspective, we're thinking: here's an amazing new tool that's going to help us be better editors. How do you think the composers in the room feel about this tool right now? If you were a composer in the room, you'd be like: that's what I do. That's my thing. Those were my cues, and I want you to come through me so I can create the stems. So now imagine giving the same speech he just gave to a studio executive. Imagine the possibilities: I can just go to this AI tool, and there's an edit button, and the ability to tell stories just got vastly better. How do all of us feel about that right now? That's really what we're thinking about. So it depends on the perspective you're taking. As the storytellers, the editors, we're seeing all these different tools. If I'm a rotoscoping artist, and I'm thinking about the rotoscoping tool that Runway ML has, and that's my one specialization, I'm in pretty big trouble right now. And just recently, I don't know if they've released it, I know they've announced it, and I hope I don't get in trouble because I'm pretty sure it's public, but there's an AI version of ScriptSync coming. I think Michael is in the room, isn't he? He was at some point. So hopefully I'm not talking out of turn, but I remember hearing publicly that Avid is looking at AI ScriptSync. If the only thing that you do is ScriptSync manually, you're in really big trouble. So this is where I continue to talk about how generalizing your skill set and understanding how you can provide value as a more complex storyteller is so important. Because from our perspective on this panel, we're talking about all these new tools that make our lives better. But for these tools that are making our lives better, there's somebody else saying: what about me?
And now that wave is coming for all of us sooner than we think. So I think that's a bigger part of this conversation. Again, I support all of these advances, but if we look at it from another perspective, there are people listening to this conversation who don't see opportunity. We're going to be a part of that conversation sooner than we know, which is largely what's going on with the politics of the industry, and we're not going to go there. But I know that's a big reason why we're all following this.

Carolyn Giardina

For the group: if you are working in editorial in one of these areas that could change in the not-too-distant future, what additional skill sets or what sorts of things would you recommend from an education standpoint?

Zack Arnold

I guess I can start with this one. When it comes to the technology standpoint, the tools, this is definitely the better side of the panel for that. But what I work on with my students is identifying the skill sets that make us uniquely human, where I'm not worried about being replaced by technology anytime soon. There's a lot more complexity to specialized skill sets in editing or rotoscoping or color; it's very complex. But as an editor, the reason I'm not worried about being replaced anytime soon comes down to one thing: the note underneath the note. Just imagine you have a cut, and there's an edit button, and an executive says, do these 10 notes. What a tremendous disaster that would be. It is our job to interpret the note and say, yeah, this is really stupid; however, you kind of have a point, but the reason you think this doesn't work isn't this, it's actually this. Our ability to interpret the note underneath the note is why I'm not worried. The reason is that we can empathize with our characters; we have these things called mirror neurons, so that as an editor, when I'm watching performances, I can tell the difference between take one, take two, take four, and take ten, and say: I felt something watching that facial expression that the actor didn't do in the others. AI is at a place where, with facial recognition, it can say: because the corner of the mouth was here instead of here, it might create this emotion. But it doesn't feel; it doesn't have empathy. As soon as AI can read the note underneath the note and it can empathize, we're in really big trouble. But I don't think we're anywhere close to that, unless I just don't understand the technology. So that's why I'm not worried. As for the skill sets: learning how to be more empathetic and learning how to solve complex problems are really important. One is going to be communication skills, and that includes communicating with technology. There are going to be entire degrees on prompt engineering, so we learn how to communicate with these tools. Because if I say "write me a funny script that's five pages" but I don't understand how to break down my prompts, the quality of my script is going to be very different. And I'm sure Cris can speak to the prompt engineering of creating a better shot versus a worse shot. But I think one of the most important skills, one that we as editors, and creatives in general, so downplay, is our ability to manage conflict and understand how to get to the result and the vision that somebody wants. So if you want to go out and make yourself future-proof, get a degree in clinical psychology. You will be an editor for decades to come, because AI is never going to figure that out.
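
[Editor's note: Zack's point about breaking down a prompt, made concrete. Both prompts below are illustrative; the second simply spells out role, format, and constraints, which is much of what "prompt engineering" means in practice.]

```python
# The same request, vague versus broken down. A model given the first
# prompt has to guess at tone, characters, and format; the second
# constrains all three, so its output needs far less rework.
vague_prompt = "Write me a funny script that's five pages."

structured_prompt = """\
You are a comedy writer for an animated nature mockumentary.
Write a five-page dialogue scene in screenplay format.
- Characters: a pompous British narrator; two sarcastic woodland critters.
- Tone: deadpan wildlife documentary that keeps going wrong.
- Constraint: the narrator never admits the critters can hear him.
"""

for name, prompt in [("vague", vague_prompt), ("structured", structured_prompt)]:
    print(f"--- {name} ---\n{prompt}")
```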

Carolyn Giardina

So will the AI be able to negotiate with the director?

Zack Arnold

Yeah, exactly. But those are the things that, once you get past a certain point in your career, separate you from everybody else. This is not an AI conversation; this is always the way it's been. When you're entering your career, yeah, you have to learn Avid or Premiere or ScriptSync or Google Sheets, or whatever it is. There's a baseline of hard-skill competency; if you don't know how to do these things, you can't do the job. Once you get to a certain level, everybody knows how to use those tools. It's your ability to extract the essence of you: your ability to make decisions, the way your opinions and your life experiences change what you think is the right shot versus what I think is the right shot. It's what you're bringing to the equation. It's your personality. It's your ability to communicate. That's why people hire you once you get past a certain level. AI is not going to change any of that. But we're gonna have to diversify our ability to manage and communicate with completely different tools.

Carolyn Giardina

Today, the next generation of film editors learn from, you know, assisting. They're in the cutting room; they're being mentored. Does the mentoring and training process for the next generation of creatives change with this? And if so, how? Anyone can jump in on that.

Jon Dudkowski

Yeah, I'm worried about that, honestly, because it feels like it will. I mean, I think it's a real concern. I think one of the best things about the editor/assistant editor relationship, and I was fortunate to have wonderful, generous mentors, is that I could learn my craft enough to be employable. And then I got to a point where I could do the job well enough that I could start making mistakes, and then I would learn from each one of those mistakes, and I made a lot of mistakes. I was looking at my pension and health care recently, and, why would you do that? Well, there's a number in there, and it was just a little north of 50,000 hours. So, I mean, I'm not ancient, but I've been editing professionally in the union for more than 50,000 hours. If 10,000 hours is mastery, and let's say I'm only 20% as quick as the average person, I probably got there. I don't mean to call myself a master, but you do something for 50,000 hours, you get decent at it. And I understand exactly what you're saying about the note behind the note, right? I got there by making small mistakes, and by experimenting and exploring paths. And I am both very excited and very concerned. I mean, scripting is a great example. As an assistant editor, part of the scripting process, as grueling as it is, you're going through the text, you're watching the footage over and over again, and you are subconsciously, or maybe very consciously, deciding which take you would choose. And that is critical to becoming a good editor. I agree that artificial intelligence being used for scripting would be wonderful, because so much time is spent doing that, and it would free that same assistant editor up for something else. But that same assistant editor then probably doesn't need to sit through the dailies. And if they choose not to, and choose not to watch all of those choices, make their own decision as to what would be the right cut, and then see how the cut gets developed and say, oh, I see why they chose that, I was gonna go this way, but I see why they went that way, or vice versa, like, oh, they went that way and that was the wrong choice, they missed out on all this great comedy, or they missed out on all this great drama... The same tools that we're talking about that are so empowering, that free up this assistant editor's time, are also taking away something really significant. And that is a concern. I think one of our responsibilities as editors to the next generation of people coming up is to help them navigate that. Because, I mean, with the next assistant editor I'm working with that's doing, you know, auto-scripting, I might say: I want you to watch every single piece of film, I want you to go through it, and I want you to take your own notes. In order to help that assistant editor be in a position to get the job they're striving for, I need to reach out and help them. It's the same way that I parent, you know; the trick is you want to encourage and challenge and engage.
And the problem with these AI tools is that I expect at some point some studio executive is going to say, well, we don't need all that scripting time now, so you could probably do it with one less assistant editor. Then, as an editor, it's going to be my job to stand up and say no, it's really important that we keep that assistant editor in the loop and part of the process, and to creatively find ways for them to support me so that I can go explore more of the paths. If I can come up with creative ways for that assistant editor to help me explore more paths, and give the producers and directors I'm working with an even better product in a slightly faster time frame, then that assistant editor still has a role in the process. Part of the onus is going to be on editors to find creative ways to be supportive and to mentor that next generation, because these tools are going to give and they're going to take, and they do now.

Zack Arnold

As soon as you said mentorship, I had to stop myself from doing an entire TED Talk, because mentorship is a huge, huge passion of mine, as many people here already know. And I want to bring both Cris and Chad into this conversation. One of the concerns I hear about this technology is that those coming up who are younger aren't going to have the same opportunities to learn and grow: well, we don't need as many assistant editors, the technology can do the work. I'm not as concerned about that, and one of the reasons is that understanding mentorship as a two-way street is so important. Right now, with disruption, there's a tremendous opportunity for the younger people coming up, and that's working with the curmudgeons like me who don't want to learn and change. When I was younger in this industry, I was learning Final Cut 7 and After Effects, and I wanted to know all the new tools so I could be valuable to the editors on my team and work with directors and producers. I don't want to learn anything anymore. I just want to come to work and work in Avid 2018, and don't change my window colors, I liked them the way they were, right? So there's a tremendous opportunity for mentorship, because I want to hire and work with assistant editors who know the new language, who know prompt engineering, where I can say, I don't want to make the insert shot, can you just tell Runway ML what I want? They can mentor me.

I don't know if Dody Dorn is still here; she was on an earlier panel. I did a podcast conversation with her. She was my very first mentor. I reached out to her, literally sent her a handwritten letter, found her address on a Yahoo address search 20-plus years ago, and she became my editing mentor. But then something changed about ten years ago, when I started what was at the time Fitness In Post and is now Optimize Yourself. She read one of my articles, and she was really blown away by this idea that I should treat myself better than I treat my technology. Anybody who knew Dody more than ten years ago, if you saw her today, you'd ask where the rest of her went. She looks fantastic. And she reached out to me and said, you've been a mentor to me. So if you think mentor and mentee relationships mean I'm super experienced and I know everything, and you know nothing and I teach you, no: it's a two-way street. If you really want to become valuable and you want to learn, I think there's a ton of opportunity for younger people to learn these technologies and teach them to people like us. So I want to bring it to the two of you: help me understand how somebody here who wants to be valuable to me could learn these tools, if I really don't want to learn them myself.

Chad Nelson

How can we make you learn? We can't. It's interesting...

Zack Arnold

It's not so much how can I learn them. How can you augment the fact that I just want to keep doing my thing?

Chad Nelson

If we're talking about the fresh generation, this is where it's funny to me, and exciting as well. When I finished school... actually, I started at USC, and then I decided to switch and go into acting and directing in theater, and went to study in London. In a way, that was probably one of the most valuable decisions for me, because it took me out of the comfort zone of one task, one craft if you will, and allowed me to put myself on a stage with nothing around me and learn how to actually direct a performance, or even try to give a performance. To me that was so valuable; I don't know how my career or my life would have changed if I didn't have that general knowledge. And that comes back to what you said at the very beginning: general knowledge is so important.

The other thing I think is fascinating about this world is the speed at which you can create. I do agree that you need things to breathe, you need to sleep on it and so forth. But I find it fascinating that a young student coming out today might have an entire career, the output of my 30-year career, in 10 years with these tools. To me, that means they're being given the chance to take more risks, to make more mistakes, and hopefully to find their own sense of art and their own voice in a faster period of time, and maybe in a way that allows them to define their voice even more strongly than in the current system. To me that's genuinely interesting and exciting, because who knows what new filmmakers or storytellers from around the world might emerge with these tools. Through the old system they might be eliminated; they might get more no's than they ever get chances to actually make something. These tools might let them make something that gets them seen and discovered sooner, and then allows them to keep going even further.

But to your point: general knowledge, human interaction. I don't want to say take yourself out of the editing room and get into the real world, but there is a part of that. Human interaction is the full essence of this collaborative creative community, and these AI tools just aren't being taught that part of it. They're able to give you results, don't get me wrong, but that collaboration is so essential. So I hope the schools, and I know we've talked about USC specifically, make that almost the fundamental foundation looking ahead, because the tools will always change and the tools will get better. It's that collaborative, creative spirit we really have to teach, and that general knowledge.

Zack Arnold

Yeah. As far as these core skills go, it's about communicating your vision. That's what filmmaking has been since the beginning of the film camera: communicating a vision. We're just using different tools and different modes of communication, and that's still a generalized skill. There have been so many great moments from today's panels, but the one that stuck with me the most was when Evan Schiff was talking about the stunt coordination for John Wick 3. He said they didn't just learn the moves, they learned jujitsu, so that if they had to change a move, or work around a column or something else, it wasn't a matter of, oh, I don't understand that move. Their brains had a generalized knowledge of how jujitsu works, so they could pivot. That's the difference between "I have this one skill set" and "I have generalized knowledge," and that's really the approach here. And I guess I'm putting my podcaster head on, I'm sorry. I get into this mode. Go ahead.

Carolyn Giardina

We do want to allow a little bit of time for Q&A, because I'm sure we have a lot of questions. So please raise your hands. I think we might have a mic. Yes, right here, we have a question.

Audience 1

Yeah, this question is for Chad. I think you mentioned that the longest part of your project was recording the voice actors. Today you have companies like ElevenLabs that let you do voice cloning and even add in emotion. If you were doing it today, would you be using that?

Chad Nelson

I've used that sometimes for inserts, or even for thinking, let's try something. But I love the idea of working with artists, and the idea of completely replacing the artist, especially the life that a performer or an actor will bring to a performance, I think that's so essential. I find that a lot of those tools feel like first or second table reads at best. They just don't have that spark, they don't have that energy. Now, granted, that will come. I think that's where it's going to be very interesting for interactivity, for things like future video games and those types of applications, where you'll be able to talk to your favorite characters and exist in their world and have dynamic conversations, as opposed to just scripted trees. But for linear media and storytelling, they're good, but I still find they're not even remotely close to the performance, and the surprise, you get working with talent.

Audience 1

For Zack, the same question, but about generalizing for voice actors. If these tools get to the point where you can't tell the difference between a human actor and a digital actor, voice-wise, how do they generalize? What do they do?

Zack Arnold

Well, first, I'll answer the question, but I think we could easily do an entire panel, if not an entire day, on the ethics of AI. This is a conversation I've been having with my team a lot. We're experimenting with ChatGPT and all these different models, and building second brains, and we're having fun and really learning what they're capable of. But here's one of the things I told my team. There was a blog post or newsletter I wrote, and one of my team members used Midjourney, which is generative AI similar to what DALL-E or Runway does, and they said, we think this would be a great thumbnail. And I said, absolutely not. There's no way I'm releasing a piece of content knowing that content was generated using the art of other human beings. I want to learn how the technology works and what it's capable of, but if I'm putting it out into the world, it's not going to be generative, because there are ethics behind it. I'm not going to release a newsletter or blog post that's written by ChatGPT, unless it's an experiment. I actually sent out an outreach message written to Eddie Hamilton, the editor of Top Gun: Maverick, saying, here's how I would write an outreach message. I got a whole bunch of feedback, and then I told everybody, by the way, ChatGPT wrote this message, I didn't. They're like, whoa, I didn't realize it wasn't human.

So the first thing I would say is that, like Chad, I would use this stuff all day long to play around with. But if I were going to actually use it, I wouldn't, unless I were working with human beings, because there's an element of nuance and emotion and feeling that you don't get from these tools. Other than as an exploratory tool, I wouldn't see myself using ElevenLabs. But if I were a voice actor thinking, this is all that I do, I do voices, then ElevenLabs could potentially change what I do, and I would be asking myself: what's the deeper reason I'm a voice actor or a performer? What's the emotion I want to create for people, and what am I uniquely good at? Do I do voice acting that's fun, happy children's stuff, because I have that energy? Is it that I can do something really dark and brooding? What is the energy or the emotion I'm uniquely good at creating? I guarantee you can apply that in areas other than just voiceover. The world of audiobooks is going bye-bye; audiobooks are going to be largely done by AI. But that doesn't mean you can't use your voice as a talent. Think about the essence of what you do that you can apply in other areas. That's how I would look at it.

Carolyn Giardina

I would say just keep an ear to the ground, because it's changing so fast right now. Jenny?

Audience 2

This question is for Cristobal. Obviously, we're all creatives looking to you as if you know what's coming. What kinds of questions do you have for us, the creative editors?

Cristobal Valenzuela

That's a great question. What are you really excited about in terms of using this technology? I think I've seen answers to that question from the experts here. Something I keep reflecting on: a theme of these conversations has been tools, we keep speaking about tools and how we use them. Don't obsess over the tools; obsess over what you're getting out of them. And perhaps a good way for me to make sense of change is to look at the past, because the past has a lot of insight into what's going to happen in the future. The reason we're here is that 150 years ago, a group of scientists from all over the world started playing with optics and light and engineering and came up with a camera. Then someone had the idea of stitching images together to create moving pictures, and that's the reason we're here, 150 years after that. It would have been impossible in the late 1800s to imagine us sitting here discussing everything we're discussing, because at that time the craft of art and storytelling was painting. Everything was done with a paintbrush; painting was the medium. Photography wasn't taken seriously, because it didn't work the way it works now, and film wasn't even in the realm of something so serious that we could build entire industries and jobs and worlds around it. Even the Lumière brothers, early on, after they created the first ten films, felt it was going nowhere, that it wasn't going to work, that it didn't matter.

The lesson for me there is, first of all, to embrace and understand that it's very hard, as we were saying before, to predict the future. Predicting the magnitude of the change was just really hard, because it all happened in a few years, with a few innovations, a few changes. So really embrace that. We're all basically painters holding paintbrushes, and the camera is about to be invented, or more to the point, has just been invented, as in the early 1900s. Think about it from that perspective: we're in the black-and-white, big-camera age of cinema, and everything that comes after will be us collectively experimenting with this medium, inventing things like digital effects. Visual effects was just people experimenting, taking cinema forward and forward and forward. So perhaps an invitation would be to embrace, as I think a lot of you already are, and experiment, because what we need right now is people who can look at the technology not from my kind of position, which is research and engineering and putting it out there, but from an artistic perspective, pushing yourselves to ask, how can I use this? How can we come up with things we're not even thinking about right now? That's the direction I would give: embrace and experiment.

Audience 3

I don't know who to direct this to, but I'm fascinated with the tools, and I'm also fascinated with how the tools work. I have to assume that a lot of this AI is database-driven. Correct?

Cristobal Valenzuela

Happy to take that. Not databases exactly: the models use datasets, and they use those datasets to understand patterns within the data. There's a subfield of AI that's all the rage right now called deep learning. Deep learning is a subfield of machine learning, and that subfield is the reason we're mostly all here; it in particular uses data to train on and to understand patterns. So yes, to your question, there's a dataset that gets fed into the model.
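
To make "learning patterns from a dataset" concrete, here is a minimal sketch in Python. This is an illustration only, not Runway's code: a "model" is just a set of parameters that gets nudged until it fits the examples in a dataset.

```python
import numpy as np

# Toy "dataset": 200 points in 2D, each labeled by which side of a line it falls on.
# In deep learning the examples are images or video frames, but the idea is the
# same: the model never sees rules, only examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# The "model" here is just two weights and a bias: the pattern it can learn.
w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly nudge the parameters so the model better fits the dataset.
for _ in range(1000):
    p = sigmoid(X @ w + b)            # the model's current guesses
    grad_w = X.T @ (p - y) / len(y)   # how wrong, and in which direction
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print("learned weights:", w)  # the "pattern" extracted from the dataset
```

Scale the dataset up to millions of video frames and the parameters up to billions, and this same loop is, in spirit, what trains the models discussed on the panel.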

Audience 3

So the tools are only as good as the dataset?

Cristobal Valenzuela

Yes, the tools are only ever as good as the dataset.

Audience 3

Who controls the dataset?

Cristobal Valenzuela

It depends, because models are not all trained equally. For example, take the rotoscoping tool. That's a tool we created that lets you select an object in a shot, and it will automatically create a segmentation map, a mask. There was no ready-made dataset for segmentation like that. I suppose the data exists, scattered around: different editors might have the before-and-after versions of shots they've rotoscoped, but there's no library of sorts you can just feed a model. So when we were thinking about how to solve the problem of making rotoscoping really quick, we had to invent a dataset. The way we invented it, we started collaging videos of different objects together, recording against a green screen; we tried so many different methods. That dataset didn't exist, so we had to come up with it. All models are trained differently, because the task around the model, the thing you're trying to do, may be different. So I guess that's the answer to your question: it depends.
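
As a rough illustration of the compositing idea Cristobal describes, here is a sketch in Python. The function name, the green-pixel convention, and the toy shapes are all invented for illustration; this is not Runway's actual pipeline. Each synthetic training example pairs a composited frame with the ground-truth mask you get for free from the green screen:

```python
import numpy as np

rng = np.random.default_rng(1)

def composite_training_pair(foreground, background):
    """Paste a green-screened foreground onto a background and return
    (composited frame, ground-truth mask): one synthetic training example."""
    green = np.array([0, 255, 0])
    mask = ~np.all(foreground == green, axis=-1)       # True where the object is
    frame = np.where(mask[..., None], foreground, background)
    return frame, mask.astype(np.uint8)

# Toy stand-ins for real footage: a random 64x64 background, and a foreground
# that is all green except for a gray square "object" in the middle.
background = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
foreground = np.zeros((64, 64, 3), dtype=np.uint8)
foreground[:] = [0, 255, 0]
foreground[16:48, 16:48] = [128, 128, 128]

frame, mask = composite_training_pair(foreground, background)
```

Generated at scale with varied objects, backgrounds, and motion, (frame, mask) pairs like these are what a segmentation model can be trained on.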

Audience 3

It feels like the dataset is going to become a commodity at some point.

Cristobal Valenzuela

I think the models will become the commodity. You can see this with language models these days: ChatGPT was all the rage, but there are models now coming really close to it, which tells you the rate of progress. Again, go back to thinking about this as a new camera. Cameras are sort of a commodity; you have a camera in your phone, right? But what makes you a filmmaker is not that you have a camera. You're not going to press record and say, great, where's my Oscar, I'm a filmmaker. It's the same here. You can get a camera anywhere, but filmmaking is a craft; you need to understand how to use the tool, you need to experiment. So if models become a commodity and everyone gets access to all these video, language, and audio models, that's great. But to tell a story you need more than the tool; you need to know how to use it. So yes, I think models will become commodities.

Chad Nelson

Yeah, the one thing I would add: so far the industry has been based on models and datasets that have either, at worst, just been scraped haphazardly across the internet, or been trained on licensed content, where, for example, Adobe says it's only training on licensed content. The next phase, if I were to add a stage beyond those, is going to be when the Disney Corporation or Paramount or Toyota Motor Corp says, we have so much information that we own, and we want to train these models only on our information, and no one else in the world will see that training data or those models. So in, let's say, Jon's Star Trek world, he would have the most knowledgeable Star Trek language model and generative dataset, able to create something as close as possible to someone hand-drawing or photographing it, but no one at a different studio would be able to see or access that information. I think we're on the verge of that, and it starts to change the nature of generic datasets versus IP that a company or a corporation or an individual owns.
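
Here is a toy sketch of what "training only on our information" means in practice. The corpus line and the model are invented for illustration; no studio's actual pipeline looks this small:

```python
import torch
import torch.nn as nn

# Stand-in "private corpus": in Chad's scenario this would be a studio's own
# scripts and archives, never shared outside the company.
corpus = "captain's log, stardate 4523.3. we have arrived at the station."

chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

# A deliberately tiny language model: given one character, predict the next.
# Proprietary models are vastly larger, but the training loop has the same shape.
model = nn.Sequential(nn.Embedding(len(chars), 32), nn.Linear(32, len(chars)))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    logits = model(data[:-1])         # predict each next character...
    loss = loss_fn(logits, data[1:])  # ...against what actually follows
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained weights now encode patterns from this corpus alone. The walled
# garden Chad describes is simply controlling which data ever reaches this loop.
```

Whoever controls the corpus controls what the resulting model knows, which is Chad's point about IP-owned datasets versus generic ones.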

Jon Dudkowski

There's a great book I finished a little while back called The Creativity Code, by Marcus du Sautoy. One of the examples in the book talks about Deep Blue playing chess against Google's DeepMind, and for me it was a very impactful bit of story, because it always comes back to story, right? Most people remember when Deep Blue beat Garry Kasparov, I think in 1997, when no computer was ever supposed to be able to beat the greatest chess players in the world. And of course it did. That same algorithm, that same machine, Deep Blue, continued to evolve, and by 2016 it was a fantastically powerful chess-playing juggernaut; that Deep Blue creation was the Godzilla of chess. Now, I'm remembering a book, so my details are probably wrong, but I highly recommend it. Google's DeepMind had never been trained to play chess. I think it used something like a Monte Carlo algorithm; basically, it used neural-net algorithms to understand chess, as opposed to just understanding a giant dataset. It was a much weaker computational system, but it had a different way of looking at the data it had. I think the example the book gave was that Deep Blue could project 20 million chess moves in any given direction at any given time, while DeepMind could project 70,000. That gives you perspective on the relative computing power. When they put DeepMind and Deep Blue head to head, DeepMind, Google's AI-based system, had never played chess; they just gave it the basic parameters of what is a win and what is a loss. It took four hours to destroy Deep Blue and become unbeatable.

My takeaway, to your question, is that in that instance it's not about the dataset as much as it's about how the data is processed. As an editor, that particular story, and I don't know that it really pertains to your question, makes me both concerned and excited. If DeepMind can turn its attention to chess and become unstoppable in four hours, what an amazing tool that could be for me in the cutting room. And also, what a terrifying adversary that would be in the hands of a studio trying to make me less involved in the process.
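
For the technically curious, the core idea Jon is gesturing at, evaluating moves by simulating games rather than exhaustively enumerating them, can be sketched in a few lines. This is a toy Monte Carlo player for tic-tac-toe, an illustration only; DeepMind's systems pair this kind of search with a learned neural network.

```python
import random

# Tic-tac-toe board: a list of 9 cells, each 'X', 'O', or ' '.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player):
    """Finish the game with random moves; return the winner ('X', 'O', or None)."""
    board = board[:]
    while winner(board) is None and ' ' in board:
        move = random.choice([i for i, c in enumerate(board) if c == ' '])
        board[move] = player
        player = 'O' if player == 'X' else 'X'
    return winner(board)

def monte_carlo_move(board, player, n_rollouts=200):
    """Score each legal move by random playouts instead of exhaustive search."""
    best_move, best_rate = None, -1.0
    opponent = 'O' if player == 'X' else 'X'
    for move in [i for i, c in enumerate(board) if c == ' ']:
        trial = board[:]
        trial[move] = player
        wins = sum(rollout(trial, opponent) == player for _ in range(n_rollouts))
        if wins / n_rollouts > best_rate:
            best_move, best_rate = move, wins / n_rollouts
    return best_move

board = ['X', 'O', 'X',
         ' ', 'O', ' ',
         ' ', ' ', ' ']
print(monte_carlo_move(board, 'X'))  # usually 7, blocking O's middle column
```

The stronger your evaluation of each candidate move, the fewer positions you need to examine, which is Jon's point about 70,000 projections beating 20 million.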

Zack Arnold

Something like chess, which to us is incredibly complex, is really easy for technology to figure out, because it's all permutations and computations, whether that's 20 million variables or 70,000. What AI has a really hard time with are things like how a human walks. Or driving, right? They still haven't figured out self-driving cars; they're trying. How do we feel things? How can we predict human emotions? So even though to us it's, oh my god, if they can figure out chess we're in big trouble, there are actually a lot of reasons we're not in as much trouble as we think. And again, I don't understand neural networks and I don't understand the technology, but I understand the implications, because I read so much about this, like you have. And I'm already thinking, The Creativity Code, I've got to get the author on the podcast, right? But at the end of the day, I've heard the world's experts on this say that we assume if they can do chess, they can do everything, when it's way easier for AI to figure out chess than it is to figure out humans. So don't think it's all doom and gloom because of that. And if I'm speaking out of turn, let me know.

Carolyn Giardina

Our next question.

Audience 4

Hi. As an assistant who's actually leaned into the AI side of things, to help with stems and titles and things like that, it's hard not to feel, with the way technology is going, with remote work and the more monotonous work being covered by AI, that the mentor-mentee aspect of being an assistant editor is really struggling. I'm seeing it become more of a funnel in terms of the number of assistants needed, and even just having that personal, face-to-face communication with editors. What could you say to assure an assistant editor that, even by staying up to date with the technology and with AI, studios aren't just going to see you as an extra paycheck they can do away with? Because there are things that even editors won't have a say in when it comes down to finances.

Zack Arnold

Yeah, the first thing is, we're all paychecks to the studio. If you're thinking something's going to change, like, now we're a commodity and it's all about the numbers, what about the art: it's never been about the art. We've always been a paycheck, we've always been a line item, and that's never going to change. They're just going to see technology as giving them more opportunity to save money. If you think that's changing, it's always been that way. I tend to be overly optimistic, and sometimes I have to be brought back down, so I may look back at this in a year and say it didn't age well, but I think there's actually a tremendous opportunity for there to be more mentorship. In the transition from film to digital, and I've talked to a lot of more seasoned editors who were there in the film days, there was a lot more mentorship; then technology came along, and I'm sure many assistant editors can speak to how their job became data management: rote tasks, metadata, spreadsheets, all these other things. I actually think there's an opportunity for creative mentorship to get more space and time, if we set boundaries with our schedules and with our budgets. That's a contract conversation I'm not going to get into. But as long as it's not a matter of, we don't need the assistants anymore because we have the AI to do it, which to me is a different conversation, let's assume the jobs remain and we can protect them through contracts or boundaries wherever necessary. Then I think AI actually creates a tremendous opportunity for more creative mentorship than we've seen since film transitioned to digital. But again, this might not age well. I choose to assume the more optimistic version, because I don't want to dwell on the alternative; I just want to prepare for it, just in case.

Carolyn Giardina

Okay, I'm afraid that although we could talk about this for hours, we do have to wrap up. I hope what you took away from this is that we really don't know what's going to happen, other than that there's going to be change, but hopefully we gave you some things to think about. Please join me in thanking our panelists.

Transcribed by https://otter.ai

Show Credits:

This episode was edited by Curtis Fritsch, and the show notes were prepared by Debby Germino and published by Glen McNiel.

The original music in the opening and closing of the show is courtesy of Joe Trapanese (who is quite possibly one of the most talented composers on the face of the planet).

Like us on Facebook


Note: I believe in 100% transparency, so please note that I receive a small commission if you purchase products from some of the links on this page (at no additional cost to you). Your support is what helps keep this program alive. If you have any questions, please don’t hesitate to contact me.

Zack Arnold (ACE) is an award-winning Hollywood film editor & producer (Cobra Kai, Empire, Burn Notice, Unsolved, Glee), a documentary director, father of 2, an American Ninja Warrior, and the creator of Optimize Yourself. He believes we all deserve to love what we do for a living...but not at the expense of our health, our relationships, or our sanity. He provides the education, motivation, and inspiration to help ambitious creative professionals DO better and BE better. “Doing” better means learning how to more effectively manage your time and creative energy so you can produce higher quality work in less time. “Being” better means doing all of the above while still prioritizing the most important people and passions in your life…all without burning out in the process. Click to download Zack’s “Ultimate Guide to Optimizing Your Creativity (And Avoiding Burnout).”