Fun fact, but it's a real stretch to say she was the most famous TV actress in the world when she probably wasn't even the most famous on Family Ties.
In 1986-87, when Family Ties was at its peak, it had a rating of 32.7, which means almost a third of all US households watched the show every week.
That probably amounted to 60m+ people tuning in, which is close to Super Bowl numbers ... every week. The TV audience was concentrated then in a way it isn't now. Yellowstone gets less than 12 million viewers per episode today.
Maybe you think Meredith Baxter was better known, but I'll bet more people were paying attention to the teenager than the hippie mom. But let's say she was no 2, or no 5. She was galaxies more famous than the most famous people on TV today. And she has a CS degree. Which taken together is more astonishing than Ben Affleck opining on LLMs.
I'll wager that at any point during Family Ties' run, more people knew who Baxter was, considering she had been a celebrity for 10 years going into that show. Also, that's not how share works. A 32.7 share in 1986 is more like 29-30 million viewers.
If you're just going by ratings/buzz Lisa Bonet and Phylicia Rashad are contenders, but this is a weird definition of "famous" that is just "who's popular right now ignoring history and context". Like, Betty White exists.
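For what it's worth, the arithmetic behind those two numbers isn't actually far apart; it mostly depends on whether you count households or people. A rough back-of-the-envelope sketch, assuming roughly 87 million US TV households in 1986-87 (the approximate Nielsen figure) and about two viewers per watching household, both of which are assumptions rather than anything stated in the thread:

    # rating = percentage of all US TV households tuned in
    tv_households = 87_000_000      # assumed ~1986-87 Nielsen estimate
    rating = 0.327
    households_watching = tv_households * rating   # ~28.4 million households
    viewers = households_watching * 2              # ~57 million people, assuming ~2 viewers per set
    print(f"{households_watching/1e6:.1f}M households, ~{viewers/1e6:.0f}M viewers")

So the "29-30 million" figure roughly matches households and the "60m+" figure roughly matches people, which is why both sides can point at the same 32.7 number.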
His perspective mainly focuses on AI as a replacement for creativity, but he doesn't seem to consider it a tool for creative expression. I can envision a future where one person comes up with ideas, goes to their computer, puts everything together, and creates something comparable to what a small studio can produce today. This has already happened in music; in the '70s, the band Boston started with a single person working in a basement putting together a complete album and releasing it. Once it became a hit, they formed a full band.
I can see a similar situation happening with AI. Hollywood is definitely undergoing changes; the traditional concept of having Hollywood in one location might fade away, leading to filmmaking becoming a global, distributed activity. While this increase in content could bring more creativity, it might also lead to an overwhelming number of options. With thousands of films available, it may not be as enjoyable to watch a movie anymore, which is not necessarily a positive outcome.
I'm not in the industry and I can tell he's failing to conceive many possibilities. I don't understand why he's being praised, I guess confidence impresses people.
> I'm not in the industry and I can tell he's failing to conceive many possibilities.
It's also possible that because you're not in the industry, you don't understand the problem domain well enough.
Tech bros love to think they're just so much smarter than everyone in other industries, but it rarely ends up being true. We saw this with Blockchain; this distributed consensus protocol was supposed to solve all the problems with money transfers, settlements, and securities across the world and uh... did none of that. But tech bros sure did love to talk about how the blockchain doubters just didn't understand the technology, without considering that maybe they didn't understand the problem space.
I think the other thread did a good job showing you that it's the other way around, people who have been in the industry do not tend to have the imagination people with fresh eyes (and maybe some tech chops) do.
An example I had in mind was when Affleck was speaking of being able to generate the show but with their preferred cast from a different production. He really has no clue that people will be generating themselves and their friends as the main characters of these stories. Like this one, there are many other examples where I thought he was lacking creativity.
Kudos to him for spending time thinking about it but I'm surprised how well received his thoughts have been for just stringing a few ideas together.
Bezos wasn't in retail. He also wasn't in compute hardware.
Reed Hastings wasn't in entertainment and crushed it. Jeff Katzenberg was, and Quibi was a disaster.
Ken Griffin was a punk kid at Harvard who had never worked in finance. Jim Simons was a math professor who didn't work in finance.
AirBnb guys weren't in real estate or tourism or whatever bucket you want that to be.
Larry & Sergey knew zero about advertising. Zuckerberg too.
The incumbents have been destroyed with some frequency by outsiders who take a different approach. It's almost impossible to tell in advance if understanding a domain is an asset or a liability.
Nah, you were talking about tech people operating outside of their domain. You just don't like how easy it is to show counter examples so you pretend you had some other point.
But if you don't know the technology involved for each of the above, maybe stay out of a conversation involving technology.
Except every company you listed is a technology company. It's technologists doing technology. Katzenberg proves the point.
Amazon, Netflix, Quibi (a disaster run by a non-technologist), AirBnB, Google, Facebook. These are websites and apps.
I don't think Griffin is particularly illustrative of anything except it's nice to ask your grammy for $100K when you're a teenager.
Feel free to listen to a NotebookLM podcast of a PG post, but if you think any AI is going to create an original thought that catches fire like the MCU, Call Her Daddy, Succession, cumtown, Hamilton, Rogan, Inside Out, Serial, or Wicked, maybe it's you that should stay out of the conversation when it comes to creativity.
Amazon is a retailer. AWS is compute(/etc) for rent. AirBnB is homes for short term rent. Quibi is/was short movies on mobile. Google and facebook are advertising. Netflix is movies/tv shows. There is no such thing as a pure technology company - the technology has to be used to do something people want.
The people closest to the thing which is about to be dominated by machines are often clueless about what is going on.
First, do all the techbro failures where they thought they were smarter than the industry.
Second, show the company where the technologists made the same thing the old school was making, and better. Amazon retail disrupted but didn't destroy physical retail, and certainly didn't replace it, and certainly isn't better at it. Same with AirBnB => hotels, Google/FB => advertising (disrupted with a new type of product... a tech product... but have no presence in literally every other form of the industry).
The closest thing you can get to dominance is Netflix making movies and television, and there's no evidence that they make better movies and television than the old school. Technology companies can use money to leverage their position against slow-moving industry players, but in this specific discussion, we've seen nothing to suggest that AI could eventually make a better film than human beings.
If you were actually in the industry you'd know that the top decisionmakers at Netflix have decreasing respect from the creative community, increasing reputation for being a cheap and difficult company to work with, and are generally regarded as a mill that creates a lot of mediocre to slightly-above-average content that gets swept aside every 3-6 months for the next wave of grist. Profitable certainly but nowhere close to being a leader in quality, for as much money as they've thrown at trying to win that Best Picture trophy (and spoiler: Emilia Perez isn't gonna do it this year either).
If you don't know anything about Hollywood, maybe you should stay out of a discussion about Hollywood.
Nope, you do the opposite. The list is long on both sides, displaying how difficult it is to know if industry knowledge is an asset or liability.
Of course they don't do the exact same thing. Only someone in industry would think to do the exact same thing. The value proposition is the same in each case, that's what matters.
The people who don't know Hollywood are the ones taking over the entertainment business. Customers of the entertainment business don't care about the "creative community" or "hollywood".
While AI will ( and does) reduce the burden of writers, it’s going to -kill- celebrity acting in most cases.
In doing so, it will turn directing on its ear in a way that many talented directors will not be able to adapt to.
Directors will become proxy actors by having to micromanage AI acting skills. Or perhaps they will be augmented with a team of “character operators” who do the proxy-acting. Either way, there will be little point in paying celebrity actors their extravagant salaries for most roles. Instead, it might turn out that skilled and talented generic, no-name actors can play any role and then have the character model deepfaked onto them… which could actually create a large demand for lower-paying character-actor jobs, possibly even a kind of boom in the acting business.
Lots of possibilities, but the director is going to take center stage in this new reality, while celebrity actors will have to swallow hard as they are priced out of their field.
Why would it require a director if a generative process can use the information it has on audience (even individual) preferences to produce the story and format that will best hook consumers?
Because it will perform similarly to the way that writing LLMs do… They obey and produce singularly predictable, dull stories that have trouble keeping the attention of a five-year-old. It’s the definition of stale tropes and predictable scenes at its worst.
It has no idea of the mind of the viewer or the reader. It’s literally generating the most probable next few words to tack on to the story.
LLMs are great for a lot of stuff, but they are by design not at all creative in any admirable sense of the term. They cannot produce a narrative that is simultaneously unique or genuinely surprising without it also being nonsensical.
Hallucinations are not a bug; they are a result of proper operation, just an undesirable one. The same thing that produces nonsensical “hallucinations” when the “temperature” is set too high is what prevents LLMs from having any unique ideas when the temperature is set low enough not to hallucinate wildly.
LLMs are text prediction engines. Extremely useful and revolutionary in many ways, but not in creative work. Everything they do is by definition derivative and likely.
What graphics models do in images is no different, it’s not all that creative, and works much better when you -tell it- to be derivative. It’s just that the nature of graphic representation is mainly predictable and derivable… so it doesn’t bore us when it produces derivative, predictable work.
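To make the temperature point above concrete: temperature is just a rescaling of the model's next-token scores before sampling. A minimal, generic sketch (plain softmax sampling, not any particular vendor's API; the toy scores are made up):

    import math, random

    def sample_next_token(logits, temperature):
        # low temperature -> almost always the most probable token (predictable output)
        # high temperature -> the distribution flattens and unlikely tokens get picked
        scaled = [score / max(temperature, 1e-8) for score in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]   # unnormalized softmax weights
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    toy_logits = [4.0, 2.5, 0.5, -1.0]   # hypothetical scores for a 4-token vocabulary
    print(sample_next_token(toy_logits, temperature=0.2))   # nearly always index 0
    print(sample_next_token(toy_logits, temperature=1.5))   # other indices show up often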
He's partially right. AI won't be able to write an interesting, wholly original script anytime soon. For that it's necessary to live as a human, to experience anguish, loss, fear, hope, etc. Art, true art, will be the last field to be taken over by AI. Has anyone seen a truly poignant AI image yet, that illuminates something about the human experience? One they would put in a museum? I haven't.
However, there is already such a large body of work that all the derivative stuff can be made effortlessly. What would happen, for instance, if you told the AI of 2050 "I want to mix the Brothers Karamazov with the Odyssey, set on Mars after the apocalypse?" I just tried with Sonnet and it's not too bad. Maybe a large enough basis of stories has already been told that most AI scripts will feel novel enough to be interesting, even if they are just interpolations in the latent narrative space humans have constructed.
Actors will be wholly unnecessary. A human director will prompt an AI to show more or less of a particular emotion or reaction. Actors are just instruments.
The three most interesting transformations AI will bring are:
1. Allowing a single human "director" to rapidly iterate on script, scenery, characters, edits, etc. One person will be able to prompt their way through a movie for the price of electricity.
2. Movies will become interactive. "Show me this movie but with more romantic tension / with a happy ending / Walt loves Jesse"
3. Holodecks will allow young people to safely experience a much broader range of events and allow them to grow their souls faster, meaning they can make better movies. Modern movies suck because life is too predictable and tech-focused for good writers to emerge. We won't ever put ourselves in real danger, but what if your senior year of high school was to live through the French Revolution in a holodeck? It would change you forever.
The trope of the sci-fi robot has been so engrained into our cultural narrative that I think people constantly underestimate the creative capabilities of AI. People still think in terms of this dichotomy of math/science and art/creativity. But AI has already broken through so many boundaries of what anyone would have believed possible just a few years ago.
I hear people complaining about uncanny valley stuff all the time, but I think these guys are just hanging on to old-world thinking. Because what must not be, cannot be.
But even today AI is doing incredibly hard stuff that most people would have thought could never be done "mechanically" by a mere machine. Think coding assistants. If you don't think they are absolutely amazing, you are nuts! Sure, there's still a long way to go to perfection, but we don't have to wait until we get there because a lot of what is possible today, let alone tomorrow, is already incredibly useful.
The point is: art is a means of communicating _between people_. We can appreciate pictures/movies/music generated by machines but at the end of the day we (people) are in a deep need of connection to _other people_.
I don't believe replacing people with machines in this context is possible (one might say: by definition). But if it happens it will be the anti-utopian world of loneliness.
This is exactly the kind of argument I think is informed by 20th century ideas of AI. I also think in this form, it greatly exaggerates the issue at hand - AI will not totally replace human-human communication.
Also, since the original context is Hollywood movies, I don't think that there is a lot of "art" in the storytelling of big block busters. Why AI shouldn't be able to write a compelling story needs to be argued more convincingly, I think.
I beg to differ. No autocomplete in the traditional sense can take a human language description of something you want to achieve and produce code that implements just that. Or at least, some version of that. And if you're not happy, you can refine - either by hand, or by having a dialog with the AI.
You can also give it buggy code and it can critique it.
That's an order of magnitude more advanced than autocomplete.
Rehearsed word soup. He clearly knows zero about the underlying technology. If he did understand it in any depth, he'd show a lot less certainty about the future.
Maybe the main actors, the stars of the movies are safe for now. But background actors who just sit in the background or have some small extra role will be decimated, because those roles can be replaced with AI.
Since many actors start with small roles or background work, it will have an effect on the industry if entry jobs are eliminated.
In the long run even the main stars could get into trouble. It may be true that AI won't be able to reproduce what they do or create a new, creative scene, but most movies are trash even today. So if AI can someday make an average, good-enough movie that is cheap to generate, and people still watch it, the bottom of the industry will fall out: lots of people who work on movies will lose their jobs, which can have a huge impact on the industry.
100 percent as good for 1/100 the price is where celebrity acting is headed. They will take roles for exposure and make their money on their brand, like influencer cattle. It will be the only way to compete with the human wave of extremely talented but unknown talent that currently can’t get parts, but who can absolutely wreck an audience as a human canvas upon which to render a studio owned character likeness.
Ben went and drank his own celebrity-exceptionalism Kool-Aid.
It's funny how folks see their jobs as safe, but nearby jobs that they don't understand as at risk. In this case, actors are safe, but VFX better be worried. Ben knows actors are artists and it's going to be hard for an AI to mimic all the experience and subtleties he's gained over his life. The same is true in VFX, it's just that Ben doesn't understand the art or the process of visual effects.
I think it's a great speech, but I may make the point that he's probably underrating audiences' demand for interesting or new content. Audiences already seem pretty happy with endless iterations of the same few concepts.
His example of having AI generate a kludged together episode of Succession feels exactly like what most shows already do now.
One thing I'd really like AI to do (and Ben Affleck touched on this during his talk) is that once AI is cheap enough to run at home, I'd like to feed it all David Lynch movies/TV shows, and ask it to produce a Twin Peaks season 4.
Excellent art might be attributed less to genius and more to a selection process and creator-audience feedback loops.
We might be falling into a bit of survivorship bias when missing that excellent art is the product of a variety of processes, not just human inspiration.
Machines can probably replicate much of that process, for better or worse.
Artists are very rarely concerned with their audience. Good Art, capital A, is the unique expression of a singular consciousness - you can't produce that by catering to an audience. That's just a product (see Warhol).
He is wrong.
Movies will be automated, and when they are, all these actors will be the first to go. I say this even though he and Matt Damon are my favorite actors.
When it rains, it pours. They don't know what is in store and what we are working on.
Hint: codename: Project Hailey (named after Ho--y)
You can't automate art, sorry. Maybe you can crank out a bunch of hollywood dross that undiscerning audiences will slop up, but you're not adding anything of value to our culture.
I don't want to watch a movie cranked out by an AI. I want to see a movie that came from another human's mind because they have an idea they want to communicate with me. LLMs have nothing to "say".
In addition to the experiential gap between theatre and film, we have productions which translate from one to the other. Each is a distinct market.
A subset of the Chinese TV version of "Three-Body Problem" consists of video gameplay "footage". The lack of human actors is compensated by over the top footage of cataclysms, which could never be filmed. It's mostly additive to the storyline with human actors.
AI can create new media markets with different cost structures. These will necessarily subtract some attention-minutes from existing film audiences. But it should also lower production costs for artistic filmmakers focused on portrayal of human actors for human audiences.
My prediction. As more and more movies are made with AI, and when they start replacing real actors with AI, people are going to value movies less, and live performances more.
I expect AI will bring a resurgence of live performances like live theater. Why? Because we value what people do more than what machines do.
> He is wrong. Movies will be automated and when they will be, all these actors will be first one to go.
Man, whom should I trust? A guy who's been living and breathing movies since the 1990s, or a 19-minute-old, green-colored account on hackernews? Man, I just don't know.
What would a guy who has been living and breathing movies since the 1990s know about AI? Zero, and it shows. Did you trust newspaper guys in the 1990s too?
That was a mistake. At least the guy yelling on the corner was obviously talking nonsense, so you could tell you shouldn't trust him.
The newspaper guy had a sophisticated, industry insider sounding explanation for why newspapers would always be around. Detroit guys had logical sounding reasons why Tesla wouldn't work. Aerospace and defense guys had logical sounding reasons spacex wouldn't work. Everyone in retail had reasons amazon wouldn't work.
People who have been doing something for a long time haven't had a great record of predicting the future in the very field they operate in.
If you've heard Ben Affleck speak before I don't really think his take is surprising. He's not a dumb guy. His take is also extremely realistic in contrast to the AI utopians that assume AI will just do everything better because computer magic.
Generative AIs are cool tools people can use to make things, but they're not magic. Once you ask for things outside of their training set, you get really weird results. Even with a lot of fine-tuning (LoRAs etc.), results can be hit or miss for any given prompt. They're also not very consistent, which means series of clips come out incongruent or just mismatched.
That doesn't mean generative AIs aren't going to be useful and won't impact film making. You're not necessarily going to be able to ask ChatGPT to "write me an Oscar winning screenplay" but you can certainly use one to punch up dialog or help with story boarding.
I think he's right that visual effects people will be hard hit, because there are going to be a lot fewer jobs for mundane tasks like cutting out an errant boom mic or Starbucks cup in a shot. I'd extend that to other "effects" jobs like ADR, since AI can fix a flubbed line in an otherwise good take.
I feel like many fields are going to basically cut out junior level jobs for AI, then 10 years down the line complain that there are no more people they can hire for mid level/senior jobs.
You're mis-reading what he says. People create new stuff, AI (so far) remixes old stuff. Down the line you will be able to order a remix of succession and AI should be able to do that for you but it won't be able to add any interesting additional spin - it's just gonna make it to order (obviously a big deal! which he says).
>implying the non-AI content we have nowadays is highly creative and unique
You just inserted this, he doesn't say or imply this - he asserted that all of the new ideas (so far) have been generated by humans. Sure lots of it is slop - but all of AI content is slop. That's his point: the new, creative stuff is still coming from humans - even if they are using "AI" tools to do it.
Also, you make a lot of claims that just don't seem to reflect what I've seen. Many human-made movies are cookie-cutter plots (e.g., the new Twisters), and many AI prompts return seemingly novel results. Whether or not they are actually novel isn't all that relevant, since humans aren't databases that have memorized all the source material.
> Also, you make a lot of claims that just don't seem to reflect what I've seen.
Can you say more about my claims? Yes, many human creations are re-hashes of old creations. All AI creations are re-hashes. As you say - they seem "novel" to a particular human, but in aggregate they are what they are - re-hashes. There's a meme about how people with limited exposure to something will over-associate the relevance of their small exposure that kind of aligns with this[1].
Show me a movie in the last decade where the premise of it couldn't be generated by some AI prompt.
(I'll even make it easier by stating that said AI prompt can be generic and to some degree trivial; i.e. not the premise explicitly laid out in the prompt).
Believe it or not, but there’s a lot more that goes into filmmaking beyond the premise.
Parasite, The Substance, Uncut Gems, mother!, Krisha, Tangerine, and The Lighthouse are some movies from the last decade that are novel in ways beyond their premises.
No generic prompts to an LLM could create those movies, even if it could spit out a vague 2 sentence premise that matches them.
Note that I’m not saying it could never happen, but LLMs as we have them in 2024 produce regurgitated slop, and require significant creative input from the prompter to make anything artistically interesting.
This feels almost like a request made in bad faith. I don't exactly know what quintessence of inspiration is required to put together any given movie - but let's take Portrait de la jeune fille en feu[1]. It's regarded as a masterwork on relationships and as visually stunning, judgements I agree with. The skill in its craft is in drawing out how the technique of painting lingers on details of humans in the same way that humans linger on them. On how the emphasis of an element (a stroke, a gaze) can overcome its humble character in the context of the moment. The overall plot of the movie is unremarkable - the reason it is a masterpiece lies in how the actors are able to draw each other out and how the team putting together the look of the movie is able to reflect their relationship in tone and vibe.
"Oh but you can't compare a paragraph to a whole movie script!"
Sure, just apply the same procedure recursively, stop at a depth where you're satisfied, see [2].
Bear in mind I've only spent like 10 minutes and a negligible amount of money. Now imagine you have a year and a budget of 10MUSD to come up with the best movie script you can create with the help of AI.
I think, if you are under the impression that Portrait of a Lady on Fire is even... gestured at in any way in your examples - then we are in different places (I recommend you start an SSRI).
I urge you to compare this to the translation[1] of the shooting script of Portrait of a Lady on fire:
1. STUDIO PARIS-INT-DAY

A blank page. A feminine hand draws a black line, the first stroke of a drawing. Another blank page. And a new female hand, which starts a new line. This action is repeated several times, coinciding with the credits.

WOMAN'S VOICE
First my contours. My silhouette.

The silhouette takes shape as we move from frame to frame and the sketch of the figure comes to life under the strokes of a pencil.

A large room pierced with skylights. An artist's studio. It’s Paris in 1780. Eight young girls between 13 and 18 years old are sitting on stools that are nearly obscured by the fullness of their dresses. They are bent over their drawing boards, which serve them as support. The faces of the young girls are focused. Their eyes oscillate between their drawing and the horizon.
People who think generating language is very much different from generating audio/video are in for a ride, which I think is the flaw with his reasoning.
(Also, yeah, there's a very strong interest for him in saying actors will not be replaced, but let's assume he's being frank here).
The "AI personalities" already popping up here and there are decent enough; again, it will only take a couple of years until their quality is on par with meatspace video.
Then it's gg for actors (and many other professions).
If the churn mills are any indicator, the coherence is already starting to get there. I saw one this weekend that was Star Trek with a 1940s sci-fi vibe. The coherence was a bit off, but way better than last year's videos of the same kind. Even scene-to-scene consistency is improving, and within a scene the warping is starting to go away. Add in the ability to write a story using ChatGPT (or something like it), then add the "AI voices", and you can pick and choose who/what you want.
Movie making is about to radically shift to an extremely lower cost model to produce. Just not quite yet.
I can already ask chatgpt to 'write me a SCP story about a couch found in an abandoned factory that is keter class' and it will pump out something mildly entertaining in that style. It is not that big of a leap of logic to say 'good enough' will win out. Even now you can kinda pull it off with a small amount of work and stitching several tools together. There is no reason to think it will not be automated together.
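To be concrete about the "stitching several tools together" part: a minimal sketch of that kind of pipeline, assuming the OpenAI Python client for the text and speech steps (the model and voice names are just examples, and the video-assembly step is a hypothetical placeholder, not a real call):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # 1. Draft a short story in the requested style.
    script = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user",
                   "content": "Write a short SCP-style story about a keter-class couch "
                              "found in an abandoned factory."}],
    ).choices[0].message.content

    # 2. Turn the script into narration with a TTS voice.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=script)
    speech.stream_to_file("narration.mp3")

    # 3. Pairing narration with generated footage would need a separate video model/tool;
    #    assemble_video() here is a hypothetical placeholder for that step.
    # assemble_video("narration.mp3", script)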
> People who think generating language is very much different from generating audio/video are in for a ride, which I think is the flaw with his reasoning.
How could anyone think they are similar?
You don’t just pump video data into an LLM training pipeline and get video models.
A lot of work goes into both. It’s non trivial and only barely related.
It’s only recently that transformer models are being used for video.
Is there any deep fake video that is possibly convincing that is 100% generated synthetically?
All the deep fakes I've seen that don't look incredibly weird at first glance essentially re-use clips (like Back To The Future[0]) and replace an element, like a face, while the rest of the footage already exists.
If you look closely, though, particularly at the mouth in the video I linked, something still isn't quite natural about it.
Convincing deepfakes is a strong word. There are also people out there who think modern CG looks good, too. There will always be people who are ultra-sensitive to things like this. So convincing might take 2x as long for some vs others.
Getting to 90% is easy. The last 10% can take 50 years. Don't forget we're the product of evolution and subconsciously look for things current technology will not be able to replicate.
My favorite quote on this topic is "Why should I bother to read something you didn't bother to write?"
Especially when you're talking about fiction and reading/watching for enjoyment, what does it matter if you can shit out 1000 hours of AI content? Maybe it's good to keep babies entertained? Studios have gotten so into the habit of treating "content" as a fungible commodity, but the fact is that even blockbuster movies still live and die by actually being entertaining.
> Why should I bother to read something you didn't bother to write?
The answer seems obvious: because it's better.
Obviously it isn't better now. But it's easy to imagine a day in the not-too-distant future where AI can shit out 1000 hours of content that is better in every way than what humans create. And it will even feel more human than the human made stuff because the AI will have learned that we like that.
What do you do then? Watch the worse stuff? Maybe, and I think a lot still will. But how long does that last?
> The answer seems obvious: because it's better.
Define "better".
The point really is that "better" in this context means "made by a human" - not "faked to look like made by a human". People need connection to other people - art is one of the means of communicating _between people_.
Is that easy to imagine? I’m not sure it is, particularly.
Ultimately, the LLM industry can’t run on jam tomorrow forever. At some point, people have to stop concentrating on the hypothetical magic future, and concentrate on what actually exists.
Mechanizing the expression of artist endeavour seems silly. Does an LLM know the pleasure and pain that love can instill or does it just regurgitate tokens in a pattern it thinks is best fit?
Maybe AI will enjoy reading AI generated stuff but humans like flawed human made 'content'. Made by humans for humans (R).
My cynical take is that the younger generation growing up with AI-generated content will accept it as normal and move on. We only enjoy human-created stuff because that seems "natural" to us, and that "natural" feeling tends to change with every new generation.
Art is essentially human, warts and all; how can something non-human make human art?
It can't, what it can do is mimic other human art and generate it 10000x faster.
A brush makes human art
I can easily imagine AI spitting out volume. I can't imagine it spitting out quality. Most of what it generates now is just trash. Like the tourist/bear paradigm in dumpster security, there may be overlap between the worst human writing and the best AI writing... but that's not how you make a successful film.
Can AI write really funny jokes? Honest question.
I asked ChatGPT to write about walking the dog in scientific language. It came up with this gem:
"Logging middleware records metrics (distance, stops, events) into a shared datastore for analytics on future walk optimization."
Train it to attribute its failure to land on wokeness, have it generate designs for merch that communicate this idea, and book an appearance on JRE and it'll have completed the arc of a lot of short run comics.
Reminds me of a short story "Jokester" by Asimov :)
it's impossible to answer this line of reasoning without wasting time, so I'll just start right away with the ad hominem.
you just don't like art, you don't understand it and you want slop, admit it and don't feel compelled to enter the discussion with your growth oriented bullshit mindset
Literally this. Ben hits the nail on the head that these tools can “write convincing Elizabethan language but can’t write Shakespeare”, along with his metaphor about craftsmen vs artists.
These tools can never create art because art is the imperfection of reality transposed from the mind’s eye using the talent of the artisan and their tools. Writing a convincing enough prompt to generate an assortment of visual outputs that you “choose” as the final product can never be art, because your art skills ended with the prompt itself - everything after was just maths, and not even maths you had a direct hand in. Even then, you cannot really shill your prompt as art either, because you wrote tokens to ingest into an LLM to generate pseudorandom visual outputs, not language to be interpreted by other humans and visualized of their own accord.
Art is one of those things you cannot appreciate until you make it, and generating slop is not creating art. A preschooler with a single, broken crayon and a napkin makes better art than anything generated via tokens and math models - and to really drive that home, I’d argue that the teenager goofing around with math formulas on their graphing calculator to create visually beautiful or interesting designs is also superior art than whatever the LLM can spew forth using far more advanced maths.
If you really want art, then make it. Learn to draw, practice photography, paint some scenery, experiment with formula visualizations, lay out a garden, or heck, just commission an artist to bring your idea into reality. Learning to articulate your vision with language in such a way that others can illustrate or create it is a far more valuable skill than laying out tokens for an LLM.
I never created a movie. I can appreciate a good one over a bad one (Argo, Gigli).
Statistically speaking, the number of people who have created a movie rounds to zero. And yet, to suggest basically no one appreciates a movie or the difference between a good movie and a bad one is obviously very dumb.
You're conflating the reality of the situation with me. I didn't say I wanted AI generated content. Just that it seems like it will inevitably win. All the insults in your comment just stem from an imaginary and inaccurate picture of me, a stranger, that you created in your head.
> don't feel compelled to enter the discussion with your growth oriented bullshit mindset
Then why respond?
> You're conflating the reality of the situation with me.
… I mean, you’re the one talking about a hypothetical future rather than what actually exists.
This is the classic expression of the fallacy that the value of something is based on the cost to create it from the seller, not the benefit it brings to the buyer.
Additionally, there are lots of examples where cheaper production has produced an inferior product, yet the difference in price causes the inferior product to usurp the superior product. Building materials exhibit this effect frequently: plaster vs. drywall, asphalt vs. slate, balloon framing vs. structural masonry, etc.
In media, TikTok exemplifies this effect. People watch fewer movies (expensive, high quality) and watch more short-form content (cheap, low quality).
Saying movies are inherently a superior product to TikTok shorts is incredibly untrue. I would rather watch 90 minutes of the dumbest TikTok crap imaginable than sit through Madame Web again.
The quality floor of all mediums will always be 0.
Cheaper movies in terms of cost are often better than expensive movies because they have a humanity that shows through. Let me know when an AI can make a John Carpenter movie.
Art is the domain where "the cost to create it from the seller" matters.
Now, for those OK with slop, they can have it, but that's called content. Hollywood and SV (and most consumers) conflate the two all the time.
> Art is the domain where "the cost to create it from the seller" matters.
Is it? Or does it boil down to: "This has been done and rehashed multiple times before, it's no longer interesting"? There is tons of recognized art out there that, in literal time spent, could be done in minutes. What is important is what people gain from the art, not the time put into the art.
I wonder if that changes for more personal communication?
If someone didn’t write something, I’m not sure I have much interest in talking to them about it.
Do you value that they wrote it, or that it's their opinion? Hypothetically, if there was a system to take one's thoughts on a topic and generate text that accurately represents them, would you be interested in reading it if someone sent you their thoughts?
I don't think that's true at all. Duchamp's "Fountain" is an example of something that is profoundly impactful, didn't "cost" him anything, yet an AI could never reproduce it.
AI tools will continue to get better, and they'll really shine when they can enable increasingly smaller teams of people to execute on their creative vision. Existing non-AI tools have already helped enable tiny teams to create media which is enjoyed by millions of people; the biggest and most recent example is how Source2 is used to animate Skibidi Toilet. But there's still room for increasing accessibility.
The biggest issue I've noticed with most existing AI-gen tools is that they only focus on generating completed output. Ideally you'd have tools that can generate multiple layers or scenes within creative tools to allow for continued iteration.
The future is when one person can do by themselves what previously would've taken hundreds or thousands of people. I'm really looking forward to what sort of creative works we'll get from people that wouldn't normally have access to Hollywood-tier resources.
When do you estimate there will be quality AI-generated content better than, say, the 90th percentile of Hollywood output?
The bar could drop too...
We already have procedurally generated content in games - and quite a few of those seem to be pretty popular.
In specific genres of game where the developers put a lot of effort into the mechanics to facilitate procedural generation, yes it's interesting. You can't tell me Dwarf Fortress was fast and easy to make because fortresses are procedurally generated.
Sequelitis and remakes have persuaded me that some people really want intellectual baby food. I'm worried that the majority really just wants content to pass the time. Let's hope creative indie movies will continue to flourish, as we may just end up depending on them.
>My favorite quote on this topic is "Why should I bother to read something you didn't bother to write?"
"If you're like the average consumer, because you have no life, and would doomscroll whatever shit we post for you, and watch whatever slop we produce for you. You did it with the tons of crap remakes, rehashes, franchises, and "multiverse" crap in the 2010-2024 span, you ain't gonna stop just because it's an AI writing it. It's not like the commercial hacks writing the stuff you watched earlier were any more creative than AI".
Was it? He concludes that LLMs will never write Shakespeare, or create original work of that caliber, or animate an actor the way Ben Guinness could have done it. I'm paraphrasing, but isn't this confirmation bias from a guy now heavily vested in making movies? Move 37 was so astonishing the world champion Go player, hardly known for theatrics, got up and left the table. It was considered an insanely "original" move. In every field we don't consider deeply "human," AI has jumped light years ahead, in astonishing ways. Affleck is entitled to his opinion and seems like he's trying to understand things, but I think poetry, acting, music, they're all on the table until they aren't, for better or worse.
>Move 37 was so astonishing the world champion Go player, hardly known for theatrics, got up and left the table.
This is an incredibly minor nitpick, but I recently went down the rabbit hole of move 37[1] after reading Richard Powers' latest novel, Playground, and Sedol didn't get up, he was already away from the table. He did, however, take far longer to make his next move (iirc something like 12-15 minutes instead of 2-4).
1: A good start, maybe is <https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-re...> and ofc wiki <https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol>
Art is not a means of content generation. It is communication between consciousnesses. Why would an AI have anything interesting to tell me?
I feel absolutely insane when I read views like this, like I share the planet with body snatchers.
I like this analogy. If one piece of software can win a game then a completely different piece of software can win at making movies
It has yet to be seen if this can truly be generalized and if such an analogy holds.
Go is a game constrained by an extremely narrow set of rules. Brute forcing potential solutions and arriving at something novel within such a constrained ruleset is an entirely different scenario than writing or film-making which occur in an almost incomprehensibly larger potential "solution" space.
Perhaps the same thing will eventually happen, but I don't think the success of AI in games like Go is particularly instructive for predicting what can happen in other fields.
Kind of tangential, but I don’t think you can brute-force Go. We’d run out of energy before it finished.
Statistical models shortcut that search space.
But I agree that just because a screwdriver is good at screws doesn’t mean it’s good at hammering a nail, even though both tools are made of metal.
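On the brute-force point: the numbers back that up. A quick order-of-magnitude comparison (the position count is Tromp's 2016 figure for legal 19x19 positions; the atom count is the usual rough estimate):

    legal_go_positions = 2.1e170   # ~2.08e170 legal 19x19 positions (Tromp, 2016)
    atoms_in_universe = 1e80       # rough order of magnitude
    print(legal_go_positions / atoms_in_universe)   # ~2e90: exhaustive search is physically hopeless

AlphaGo's networks narrow the search to a tiny, promising slice of that space rather than enumerating it.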
Ahh good point. I was thinking about “machine plays itself repeatedly until it gets good” aspect of AlphaGo Zero and my brain jumped to brute forcing, but agree that’s a misnomer.
But a game is something with an objective measure. Either a move is good or it's not. Can you say the same of parts in a movie, where it's more about taste?
I'm not making any statement about LLMs here, but the counterpoint to this is that you don't need to make what film critics would overwhelmingly call a "good" movie. You need to make things that make money.
I can imagine two options for that: utilize expertise from people that know how to make films that make money, or make so many movies, one or two can make enough money to pay for all the others and then some.
Really, I think it's more about what gets more attention and then you make deals with Roblox and Fortnite or whatever to sell digital goods.
AlphaGo wasn't an LLM.
They're obviously talking about the broader topic of "AI in the movie business" not the technically accurate meaning of LLMs.
On the other hand he was right about music and video on demand back in 2003…
>In every field we don't consider deeply "human," AI has jumped light years ahead, in astonishing ways
Didn't you answer your own question?
Idk. LLMs may effectively emulate human creativity up to a point, but in the end they are writing literally the most predictable response.
They don’t start with an emotion, a vision, and then devise creative ways to bring the viewer into that world.
They don’t emotion-test hundreds of ideas to find the most effective way to give the viewer/reader a visceral sense of living that moment.
While they can read sentiment, they do not experience an internal emotional response. Fundamentally, they are generating the most probable string of words based on the data in their training set.
There is no way for them to come up with anything that is both improbable and not nonsensical. Their entire range of “understanding” is based on the statistical probability of the next conceptual fragment.
I’m not saying that it might not be possible for LLMs to come up with a story or script… they do that just fine. But it will literally be the most predictable, unremarkable, innovation-less and uninspiring drivel that can be predicted from a statistical walk through vectors starting at a random point.
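To make that concrete, here's a minimal sketch of what "the most probable next token" actually means in practice. This assumes the Hugging Face transformers library and the small GPT-2 weights; the prompt is just a placeholder, and it's an illustration of the sampling step, not of how any studio tool works:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The detective opened the door and saw"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    greedy_id = torch.argmax(probs).item()                        # the single most probable continuation
    sampled_id = torch.multinomial(probs, num_samples=1).item()   # a random draw from the same distribution

    print("greedy :", tokenizer.decode([greedy_id]))
    print("sampled:", tokenizer.decode([sampled_id]))

Every token the model emits comes out of that same distribution-then-pick loop, which is the "statistical walk" being described above.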
There is a reason why AI output is abrasive to read. It is literally written with no consideration given to the reader.
No model of the reader's mind or of the effect it will have on them, no interesting or unusual twists of prose to capture your attention or spur your imagination…. Just a soulless, predictable stream of simulated thoughts that remarkably often turns out to be useful, if uninspiring.
LLMs are fantastic tools for navigating the commons of human culture and making a startling breadth of human knowledge easily accessible.
They are truly amazing tools that can release us from many language manipulation burdens in the capture and sorting of data from diverse and unorganized sources.
They will revolutionize many industries that currently are burdened by extensive human labor in the ingestion, manipulation, and interpretation of data. LLMs are like the washing machine to free our minds from the handwashing of information.
But they are not creative agents in the way that we admire creative genius.
>There is a reason why AI output is abrasive to read. It is literally written with no consideration given to the reader.
>No model of the reader's mind or of the effect it will have on them, no interesting or unusual twists of prose to capture your attention or spur your imagination…. Just a soulless, predictable stream of simulated thoughts that remarkably often turns out to be useful, if uninspiring.
An LLM can produce text about the humiliation and pain of being picked last on the recess kickball team. But it never had that experience. It can't. It has zero credibility on the subject. It's BSing in exactly the same way that sports fans pretend they can manage their favorite team better than the current staff.
Vfx subreddit reaction to this
https://www.reddit.com/r/vfx/comments/1grgyps/affleck_on_ai_...
>Vfx subreddit reaction to this
An amazing amount of angry finger pointing and very little actual reflection on the points raised. Burn the messenger! He's a witch!
More video content is going to get made, at lower cost. The skills required will change. The economics will change.
The VFX industry has struggled financially for many years. AI will not improve the financial prospects of the VFX industry.
Fun fact: Justine Bateman of Family Ties, once the most famous TV actress in the world, holds a degree in computer science from UCLA.
Fun fact, but it's a real stretch to say she was the most famous TV actress in the world when she probably wasn't even the most famous on Family Ties.
In 1986-87, when Family Ties was at its peak, it had a rating of 32.7, which means almost a third of all US households watched the show every week.
That probably amounted to 60m+ people tuning in, which is close to Super Bowl numbers ... every week. The TV audience was concentrated then in a way it isn't now. Yellowstone gets less than 12 million viewers per episode today.
Maybe you think Meredith Baxter was better known, but I'll bet more people were paying attention to the teenager than the hippie mom. But let's say she was no 2, or no 5. She was galaxies more famous than the most famous people on TV today. And she has a CS degree. Which taken together is more astonishing than Ben Affleck opining on LLMs.
I'll wager that at any point during Family Ties' run, more people knew who Baxter was, considering she had been a celebrity for 10 years going into that show. Also, that's not how share works: a 32.7 share in 1986 is more like 29-30 million viewers.
If you're just going by ratings/buzz Lisa Bonet and Phylicia Rashad are contenders, but this is a weird definition of "famous" that is just "who's popular right now ignoring history and context". Like, Betty White exists.
His perspective mainly focuses on AI as a replacement for creativity, but he doesn't seem to consider it a tool for creative expression. I can envision a future where one person comes up with ideas, goes to their computer, puts everything together, and creates something comparable to what a small studio can produce today. This has already happened in music; in the '70s, the band Boston started with a single person working in a basement putting together a complete album and releasing it. Once it became a hit, they formed a full band.
https://en.wikipedia.org/wiki/Boston_(band)
I can see a similar situation happening with AI. Hollywood is definitely undergoing changes; the traditional concept of having Hollywood in one location might fade away, leading to a global creation of filmmaking. While this increase in content could bring more creativity, it might also lead to an overwhelming amount of options. With thousands of films available, it may not be as enjoyable to watch a movie anymore. This could lead to a decline in the overall experience of enjoying films, which is not necessarily a positive outcome.
I'm not in the industry and I can tell he's failing to conceive many possibilities. I don't understand why he's being praised, I guess confidence impresses people.
> I'm not in the industry and I can tell he's failing to conceive many possibilities.
It's also possible that because you're not in the industry, you don't understand the problem domain well enough.
Tech bros love to think they're just so much smarter than everyone in other industries, but it rarely ends up being true. We saw this with Blockchain; this distributed consensus protocol was supposed to solve all the problems with money transfers, settlements, and securities across the world and uh... did none of that. But tech bros sure did love to talk about how the blockchain doubters just didn't understand the technology, without considering that maybe they didn't understand the problem space.
I think the other thread did a good job showing you that it's the other way around, people who have been in the industry do not tend to have the imagination people with fresh eyes (and maybe some tech chops) do.
An example I had in mind was when Affleck was speaking of being able to generate the show but with their preferred cast from a different production. He really has no clue that people will be generating themselves and their friends as the main characters of these stories. Like this one, there are many other examples where I thought he was lacking creativity.
Kudos to him for spending time thinking about it but I'm surprised how well received his thoughts have been for just stringing a few ideas together.
That's one example. Here are a few others...
Bezos wasn't in retail. He also wasn't in compute hardware.
Reed Hastings wasn't in entertainment and crushed it. Jeff Katzenberg was, and Quibi was a disaster.
Ken Griffin was a punk kid at Harvard who never worked in finance. Jim Simons was a math professor who didn't work in finance.
AirBnb guys weren't in real estate or tourism or whatever bucket you want that to be.
Larry & Sergey knew zero about advertising. Zuckerberg too.
The incumbents have been destroyed with some frequency by outsiders who take a different approach. It's almost impossible to tell in advance if understanding a domain is an asset or a liability.
Ok but you're listing people/businesses, not technologies, which is what I'm talking about. Or is this nonsensical LLM slop to prove a point?
Nah, you were talking about tech people operating outside of their domain. You just don't like how easy it is to show counter examples so you pretend you had some other point.
But if you don't know the technology involved for each of the above, maybe stay out of a conversation involving technology.
Except every company you listed is a technology company. It's technologists doing technology. Katzenberg proves the point.
Amazon, Netflix, Quibi (a disaster run by a non-technologist), AirBnB, Google, Facebook. These are websites and apps.
I don't think Griffin is particularly illustrative of anything except it's nice to ask your grammy for $100K when you're a teenager.
Feel free to listen to a NotebookLM podcast of a PG post, but if you think any AI is going to create an original thought that catches fire like the MCU, Call Her Daddy, Succession, cumtown, Hamilton, Rogan, Inside Out, Serial, or Wicked, maybe it's you that should stay out of the conversation when it comes to creativity.
Amazon is a retailer. AWS is compute(/etc) for rent. AirBnB is homes for short term rent. Quibi is/was short movies on mobile. Google and facebook are advertising. Netflix is movies/tv shows. There is no such thing as a pure technology company - the technology has to be used to do something people want.
The people closest to the thing which is about to be dominated by machines are often clueless about what is going on.
First, do all the techbro failures where they thought they were smarter than the industry.
Second, show the company where the technologists made the same thing the old school was making, and better. Amazon retail disrupted but didn't destroy physical retail, and certainly didn't replace it, and certainly isn't better at it. Same with AirBnB => hotels, Google/FB => advertising (disrupted with a new type of product... a tech product... but have no presence in literally every other form of the industry).
The closest thing you can get to dominance is Netflix making movies and television, and there's no evidence that they make better movies and television than the old school. Tech companies can use money to leverage their position against slow-moving industry players, but in this specific discussion, we've seen nothing to suggest that AI could eventually make a better film than human beings.
If you were actually in the industry you'd know that the top decisionmakers at Netflix have decreasing respect from the creative community, increasing reputation for being a cheap and difficult company to work with, and are generally regarded as a mill that creates a lot of mediocre to slightly-above-average content that gets swept aside every 3-6 months for the next wave of grist. Profitable certainly but nowhere close to being a leader in quality, for as much money as they've thrown at trying to win that Best Picture trophy (and spoiler: Emilia Perez isn't gonna do it this year either).
If you don't know anything about Hollywood, maybe you should stay out of a discussion about Hollywood.
Nope, you do the opposite. The list is long on both sides, displaying how difficult it is to know if industry knowledge is an asset or liability.
Of course they don't do the exact same thing. Only someone in industry would think to do the exact same thing. The value proposition is the same in each case, that's what matters.
The people who don't know Hollywood are the ones taking over the entertainment business. Customers of the entertainment business don't care about the "creative community" or "hollywood".
While AI will (and does) reduce the burden on writers, it’s going to -kill- celebrity acting in most cases.
In doing so, it will turn directing on its ear in a way that many talented directors will not be able to adapt to.
Directors will become proxy actors by having to micromanage AI acting skills. Or perhaps they will be augmented with a team of “character operators” who do the proxy-acting. Either way, there will be little point in paying celebrity actors their extravagant salaries for most roles. Instead, it might turn out that skilled, talented, no-name actors can play any role and then have the character model deepfaked onto them… which could actually create large demand for lower-paying character-actor jobs, possibly even a kind of boom in the acting business.
Lots of possibilities, but the director is going to take center stage in this new reality, while celebrity actors will have to swallow hard as they are priced out of their field.
Why would it require a director if a generative process can use the information it has on audience (even individual) preferences to produce the story and format that will best hook consumers?
Because it will perform similarly to the way that writing LLMs do… they obey and produce singularly predictable and dull stories that have trouble keeping the attention of a five-year-old. It’s the definition of stale tropes and predictable scenes at its worst.
It has no idea of the mind of the viewer or the reader. It’s literally generating the most probable next few words to tack on to the story.
LLMs are great for a lot of stuff, but they are by design not at all creative in any admirable sense of the term. They cannot produce a narrative that is simultaneously unique or genuinely surprising without it also being nonsensical.
Hallucinations are not a bug; they are a result of proper operation, just undesirable. The same mechanism that produces nonsensical “hallucinations” when the “temperature” is set too high is what prevents LLMs from having any unique ideas when the temperature is set low enough not to hallucinate wildly.
LLMs are text prediction engines. Extremely useful and revolutionary in many ways, but not in creative work. Everything they do is by definition derivative and likely.
What graphics models do in images is no different, it’s not all that creative, and works much better when you -tell it- to be derivative. It’s just that the nature of graphic representation is mainly predictable and derivable… so it doesn’t bore us when it produces derivative, predictable work.
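For what it’s worth, the “temperature” knob mentioned above is literally just a divisor applied to the model’s scores before the next token is drawn. A toy sketch (the five fixed scores are made up for illustration, not from any real model):

    import torch

    def sample_next_token(logits: torch.Tensor, temperature: float) -> int:
        # temperature -> 0 collapses toward the single most likely token (safe, predictable);
        # temperature > 1 flattens the distribution (more surprising, more nonsense).
        scaled = logits / max(temperature, 1e-8)   # guard against division by zero
        probs = torch.softmax(scaled, dim=-1)
        return torch.multinomial(probs, num_samples=1).item()

    # Toy example: five "tokens" with made-up scores.
    logits = torch.tensor([4.0, 3.5, 1.0, 0.5, -2.0])
    print([sample_next_token(logits, 0.2) for _ in range(10)])   # almost always token 0
    print([sample_next_token(logits, 2.0) for _ in range(10)])   # much more varied, including unlikely tokens

That one dial is the whole trade-off being described: turn it down and you get the safest continuation every time, turn it up and you get variety that the model has no way to tell apart from nonsense.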
He's partially right. AI won't be able to write an interesting, wholly original script anytime soon. For that it's necessary to live as a human, to experience anguish, loss, fear, hope, etc. Art, true art, will be the last field to be taken over by AI. Has anyone seen a truly poignant AI image yet, that illuminates something about the human experience? One they would put in a museum? I haven't.
However, there is such a large body of work already that all the derivative stuff can be made effortlessly. What would happen, for instance, if you told the AI of 2050 "I want to mix the Brothers Karamazov with the Odyssey, set on Mars after the apocalypse"? I just tried with Sonnet and it's not too bad. Maybe a large enough basis of stories has already been told that most AI scripts will feel novel enough to be interesting, even if they are just interpolations in the latent narrative space humans have constructed.
Actors will be wholly unnecessary. A human director will prompt an AI to show more or less of a particular emotion or reaction. Actors are just instruments.
The three most interesting transformations AI will bring are:
1. Allowing a single human "director" to rapidly iterate on script, scenery, characters, edits, etc. One person will be able to prompt their way through a movie for the price of electricity.
2. Movies will become interactive. "Show me this movie but with more romantic tension / with a happy ending / Walt loves Jesse"
3. Holodecks will allow young people to safely experience a much broader range of events and allow them to grow their souls faster, meaning they can make better movies. Modern movies suck because life is too predictable and tech-focused for good writers to emerge. We won't ever put ourselves in real danger, but what if your senior year of high school was to live through the French Revolution in a holodeck? It would change you forever.
The trope of the sci-fi robot has been so engrained into our cultural narrative that I think people constantly underestimate the creative capabilities of AI. People still think in terms of this dichotomy of math/science and art/creativity. But AI has already broken through so many boundaries of what anyone would have believed possible just a few years ago.
I hear people complaining about uncanny valley stuff all the time, but I think these guys are just hanging on to old-world thinking. Because that which must not be, cannot be.
But even today AI is doing incredibly hard stuff that most people would have thought could never be done "mechanically" by a mere machine. Think coding assistants. If you don't think they are absolutely amazing, you are nuts! Sure, there's still a long way to go to perfection, but we don't have to wait until we get there because a lot of what is possible today, let alone tomorrow, is already incredibly useful.
The point is: art is a means of communicating _between people_. We can appreciate pictures/movies/music generated by machines but at the end of the day we (people) are in a deep need of connection to _other people_.
I don't believe replacing people with machines in this context is possible (one might say: by definition). But if it happens it will be the anti-utopian world of loneliness.
This is exactly the kind of argument I think is informed by 20th century ideas of AI. I also think in this form, it greatly exaggerates the issue at hand - AI will not totally replace human-human communication.
Also, since the original context is Hollywood movies, I don't think that there is a lot of "art" in the storytelling of big block busters. Why AI shouldn't be able to write a compelling story needs to be argued more convincingly, I think.
When it comes to blockbuster movies AI could certainly create them, they're so formulaic that you could probably even get away with Markov chains.
I use AI coding assistants every day and while they're helpful, they're nothing more than a glorified autocomplete.
I beg to differ. No autocomplete in the traditional sense can take a human language description of something you want to achieve and produce code that implements just that. Or at least, some version of that. And if you're not happy, you can refine - either by hand, or by having a dialog with the AI.
You can also give it buggy code and it can critique it.
That's an order of magnitude more advanced than autocomplete.
Advanced autocomplete is still autocomplete.
I don’t disagree with everything he’s saying but it’s odd seeing the responses this is getting when it sounds extremely rehearsed.
Rehearsed word soup. He clearly knows zero about the underlying technology. If he did understand it in any depth, he'd show a lot less certainty about the future.
Maybe the main actors, the stars of the movies are safe for now. But background actors who just sit in the background or have some small extra role will be decimated, because those roles can be replaced with AI.
Since many actors start with small roles or background work, it will have an effect on the industry if entry jobs are eliminated.
In the long run even the main stars can get into trouble. It may be true that AI won't be able to reproduce what they do or create a new, creative scene, but most movies are trash even today. If AI can make an average, good-enough movie that is cheap to generate, and people still watch it, the bottom of the industry will fall out: lots of people who work on movies will lose their jobs, which can have a huge impact on the industry.
80% as good for 1/10000th the price is quite the offer.
100 percent as good for 1/100 the price is where celebrity acting is headed. They will take roles for exposure and make their money on their brand, like influencer cattle. It will be the only way to compete with the wave of extremely talented unknowns who currently can’t get parts, but who can absolutely wreck an audience as a human canvas upon which to render a studio-owned character likeness.
Ben went and drank his own celebrity-exceptionalism Kool-Aid.
Getting closer every day.
https://www.nature.com/articles/s41598-024-76900-1
It's funny how folks see their jobs as safe, but nearby jobs that they don't understand as at risk. In this case, actors are safe, but VFX better be worried. Ben knows actors are artists and it's going to be hard for an AI to mimic all the experience and subtleties he's gained over his life. The same is true in VFX, it's just that Ben doesn't understand the art or the process of visual effects.
I think it's a great speech, but I may make the point that he's probably underrating audiences' demand for interesting or new content. Audiences already seem pretty happy with endless iterations of the same few concepts.
His example of having AI generate a kludged together episode of Succession feels exactly like what most shows already do now.
One thing I'd really like AI to do (and Ben Affleck touched on this during his talk) is that once AI is cheap enough to run at home, I'd like to feed it all of David Lynch's movies/TV shows and ask it to produce a Twin Peaks season 4.
And you'd never get anything close to what Lynch would make, you'd get the equivalent of fan fiction.
Source tweet: https://x.com/bilawalsidhu/status/1857816146688217099
Excellent art might be attributed less to genius and more to a selection process and creator-audience feedback loops.
We might be falling into a bit of survivorship bias when missing that excellent art is the product of a variety of processes, not just human inspiration.
Machines can probably replicate much of that process, for better or worse.
Artists are very rarely concerned with their audience. Good Art, capital A, is the unique expression of a singular consciousness - you can't produce that by catering to an audience. That's just a product (see Warhol).
Yes, art is a product, particularly in the context of this conversation about AI.
Very little is the unique expression of a singular consciousness.
Art being "good" is subjective and the audience is the source of that valuation.
Art is only a product if you have a repugnantly rapacious worldview.
He is wrong. Movies will be automated, and when they are, all these actors will be the first to go. I say this despite him and Matt Damon being my favorite actors.
When it rains, it pours. They don't know what is in store or what we are working on.
Hint: codename: Project Hailey (named after Ho--y)
You can't automate art, sorry. Maybe you can crank out a bunch of hollywood dross that undiscerning audiences will slop up, but you're not adding anything of value to our culture.
I don't want to watch a movie cranked out by an AI. I want to see a movie that came from another human's mind because they have an idea they want to communicate with me. LLMs have nothing to "say".
The movie industry is already endlessly pumping out crap with real actors, I suppose it won't get much worse if it removes them entirely.
On the other hand, there's a whole other medium for actors that AI can't replace.
In addition to the experiential gap between theatre and film, we have productions which translate from one to the other. Each is a distinct market.
A subset of the Chinese TV version of "Three-Body Problem" consists of video gameplay "footage". The lack of human actors is compensated by over the top footage of cataclysms, which could never be filmed. It's mostly additive to the storyline with human actors.
AI can create new media markets with different cost structures. These will necessarily subtract some attention-minutes from existing film audiences. But it should also lower production costs for artistic filmmakers focused on portrayal of human actors for human audiences.
Strong claims.
Have we seen any other form of communication and/or artistic expression become totally automated?
Even 2d art hasn’t been automated by image generation models.
It’s just another tool in the bag.
We haven't even automated live music performances, which are actually bigger business than they were before the existence of recorded music
My prediction. As more and more movies are made with AI, and when they start replacing real actors with AI, people are going to value movies less, and live performances more.
I expect AI will bring a resurgence of live performances like live theater. Why? Because we value what people do more than what machines do.
I wonder if we're already starting to see this - have noticed that jazz is becoming popular again with young audiences.
> He is wrong. Movies will be automated and when they will be, all these actors will be first one to go.
Man, whom should I trust? A guy who's been living and breathing movies since the 1990s, or a 19-minute-old, green-colored account on hackernews? Man, I just don't know.
Why trust either?
What would a guy who has been living and breathing movies since the 1990s know about AI? Zero, and it shows. Did you trust newspaper guys in the 1990s too?
> Did you trust newspaper guys in the 1990s too?
Vs. some ding-dong yelling on the street corner? Yup!
That was a mistake. At least the guy yelling on the corner was obviously nonsense and you could realize you shouldn't trust him.
The newspaper guy had a sophisticated, industry-insider-sounding explanation for why newspapers would always be around. Detroit guys had logical-sounding reasons why Tesla wouldn't work. Aerospace and defense guys had logical-sounding reasons SpaceX wouldn't work. Everyone in retail had reasons Amazon wouldn't work.
People who have been doing something for a long time haven't had a great record of predicting the future in the very field they operate in.
> library of vectors of meaning and transformers that interpret the context
New phrase from a Hollywood actor.
If you've heard Ben Affleck speak before I don't really think his take is surprising. He's not a dumb guy. His take is also extremely realistic in contrast to the AI utopians that assume AI will just do everything better because computer magic.
Generative AIs are cool tools people can use to make things, but they're not magic. Once you ask for things outside of their training set you get really weird results. Even with a lot of fine-tuning (LoRAs etc.) results can be hit or miss for any given prompt. They're also not very consistent, which means series of clips end up incongruent or just mismatched.
That doesn't mean generative AIs aren't going to be useful and won't impact film making. You're not necessarily going to be able to ask ChatGPT to "write me an Oscar winning screenplay" but you can certainly use one to punch up dialog or help with story boarding.
I think he's right that visual effects people will be hard hit, because there are going to be a lot fewer jobs for mundane tasks like cutting out an errant boom mic or Starbucks cup in a shot. I'd extend that to other "effects" jobs like ADR, since AI can fix a flubbed line in an otherwise good take.
I feel like many fields are going to basically cut out junior level jobs for AI, then 10 years down the line complain that there are no more people they can hire for mid level/senior jobs.
This will age like a dead fish out in the Sun.
He even contradicts his whole premise of "actors will always be necessary" with his example of a custom generated series episode.
Also,
>implying the non-AI content we have nowadays is highly creative and unique
*facepalm*
You're mis-reading what he says. People create new stuff, AI (so far) remixes old stuff. Down the line you will be able to order a remix of succession and AI should be able to do that for you but it won't be able to add any interesting additional spin - it's just gonna make it to order (obviously a big deal! which he says).
>implying the non-AI content we have nowadays is highly creative and unique
You just inserted this, he doesn't say or imply this - he asserted that all of the new ideas (so far) have been generated by humans. Sure lots of it is slop - but all of AI content is slop. That's his point: the new, creative stuff is still coming from humans - even if they are using "AI" tools to do it.
> (so far)
This parenthetical is doing a lot of work here.
Also, you make a lot of claims that just don't seem to reflect what I've seen. Many human-made movies are cookie-cutter plots (e.g. the new Twisters), and many AI prompts return seemingly novel results. Whether or not they are actually novel isn't all that relevant, since humans aren't databases that have memorized all the source material.
> Also, you make a lot of claims that just don't seem to reflect what I've seen.
Can you say more about my claims? Yes, many human creations are re-hashes of old creations. All AI creations are re-hashes. As you say - they seem "novel" to a particular human, but in aggregate they are what they are - re-hashes. There's a meme about how people with limited exposure to something will over-associate the relevance of their small exposure that kind of aligns with this[1].
[1] https://knowyourmeme.com/memes/getting-a-lot-of-boss-baby-vi...
People create new stuff? People remix old stuff. The movie industry is a joke as far as "new stuff" is concerned.
???
Show me a movie in the last decade where the premise of it couldn't be generated by some AI prompt.
(I'll even make it easier by stating that said AI prompt can be generic and to some degree trivial; i.e. not the premise explicitly laid out in the prompt).
Believe it or not, but there’s a lot more that goes into filmmaking beyond the premise.
Parasite, The Substance, Uncut Gems, mother!, Krisha, Tangerine, and The Lighthouse are some movies from the last decade that are novel in ways beyond their premises.
No generic prompts to an LLM could create those movies, even if it could spit out a vague 2 sentence premise that matches them.
Note that I’m not saying it could never happen, but LLMs as we have them in 2024 produce regurgitated slop, and require significant creative input from the prompter to make anything artistically interesting.
This feels almost like a request made in bad faith. I don't exactly know what quintessence of inspiration is required to put together any given movie - but let's take Portrait de la jeune fille en feu[1]. It's regarded as a masterwork on relationships and as visually stunning, judgements I agree with. The skill in its craft is in drawing out how the technique of painting lingers on details of humans in the same way that humans linger on them, and how the emphasis of an element (a stroke, a gaze) can overcome its humble character in the context of the moment. The overall plot of the movie is unremarkable - the reason it is a masterpiece lies in how the actors are able to draw each other out and how the team putting together the look of the movie is able to reflect their relationship in tone and vibe.
[1] https://en.wikipedia.org/wiki/Portrait_of_a_Lady_on_Fire
Here's your "Lady on Fire" generator, [1].
"Oh but you can't compare a paragraph to a whole movie script!"
Sure, just apply the same procedure recursively, stop at a depth where you're satisfied, see [2].
Bear in mind I've only spent like 10 minutes and a negligible amount of money. Now imagine you have a year and a budget of $10M to come up with the best movie script you can create with the help of AI.
It really is only a matter of time ...
1: https://chatgpt.com/share/673bc781-0318-8003-a555-5ea4ea1463...
2: https://chatgpt.com/share/673bc976-0a24-8003-ab8e-11c7800527...
I think, if you are under the impression that Portrait of a Lady on Fire is even...gestured at in any way in your examples - then we are in different places (I recommend you start an ssri).
I urge you to compare this to the translation[1] of the shooting script of Portrait of a Lady on Fire:
[1] https://www.tumblr.com/mlleclaudine/634613427269173248/engli...???
show me an AI prompt that can generate a movie premise in the last decade where the underlying model hasn't already been trained on that movie's plot.
show me a writer that hasn't been trained on a lot of movie plots before them.
> He even contradicts his whole premise of "actors will always be necessary" with his example of a custom generated series episode.
He said it would be "janky and weird" and likened it to a remix.
People who think generating language is very much different from generating audio/video are in for a ride, which I think is the flaw with his reasoning.
(Also, yeah, there's a very strong interest for him in saying actors will not be replaced, but let's assume he's being frank here).
All the "AI personalities" that are already popping up here and there are already decent enough, again, it will only take a couple years until their quality is on par with meatspace video.
Then it's gg for actors (and many other professions).
If what the churn mills are putting out is any indicator, the coherence is already starting to get there. I saw one this weekend that was Star Trek with a 1940s sci-fi vibe. The coherence was a bit off, but way better than it was last year with the same kind of videos. Even from scene to scene it is getting better, and within a scene the warping is starting to go away. Add in the ability to write a story using ChatGPT (or something like it), then add in the 'AI voices', and you can pick and choose who/what you want.
Movie making is about to shift radically to a much lower-cost production model. Just not quite yet.
I can already ask ChatGPT to 'write me an SCP story about a couch found in an abandoned factory that is Keter-class' and it will pump out something mildly entertaining in that style. It is not that big of a leap of logic to say 'good enough' will win out. Even now you can kinda pull it off with a small amount of work by stitching several tools together. There is no reason to think it will not all be automated together.
> People who think generating language is very much different from generating audio/video are in for a ride, which I think is the flaw with his reasoning.
How could anyone think they are similar?
You don’t just pump video data into an LLM training pipeline and get video models.
A lot of work goes into both. It’s non trivial and only barely related.
It’s only recently that transformer models are being used for video.
I have a strong suspicion that people who think AI-generated characters are "similar" to people are borderline autistic and simply can't grasp nuance.
How soon though? 2 years? 10? 50? He starts by saying this won't happen soon, _maybe_ ever.
I'd bet that in two years you could do something equivalent to that "make me a new episode" use case.
I already see really convincing deep fakes on Facebook, etc. It's not an impossible problem anymore; it's just a matter of increasing their quality.
Is there any deep fake video that is possibly convincing that is 100% generated synthetically?
All the deep fakes I've seen that don't look incredibly weird at first glance essentially re-use existing clips (like Back To The Future[0]) and replace an element, like a face, but the rest of the footage already exists.
If you look closely, though, particularly at the mouth in the video I linked, something isn't quite natural about it still.
Same deal with this Taylor Swift fake[1]
[0]: https://www.youtube.com/watch?v=8OJnkJqkyio
[1]: https://x.com/McAfee/status/1745226438641602866
Convincing deepfakes is a strong word. There are also people out there who think modern CG looks good, too. There will always be people who are ultra-sensitive to things like this. So convincing might take 2x as long for some vs others.
Getting to 90% is easy. The last 10% can take 50 years. Don't forget we're the product of evolution and subconsciously look for things current technology will not be able to replicate.