> What is the value proposition with creating these videos?

The tech companies' proposition is that value gets shifted from real people to them.
> Impressive, but a distinct lack of any actual acting going on in that video.

It reminds me of a quote from American Psycho:
There is an idea of a Patrick Bateman; some kind of abstraction. But there is no real me: only an entity, something illusory. And though I can hide my cold gaze, and you can shake my hand and feel flesh gripping yours and maybe you can even sense our lifestyles are probably comparable... I simply am not there.
> Thank you for the proper source! Not sure why I found some link farm site instead of that, I should have gone deeper.

Nah, you did fine, I got the Variety link from your link lol. Just drilled one level deeper.
Agreed that it does sound like a good use case for AI, and a useful tool (“fancy magic wand” is exactly the right use case for a probabilistic tool like this, since the magic wand is already probabilistic), but it’s also far from what their main product seems to be marketed as (wholesale video generation). It also sounds like it’s something Runway purpose-built rather than their generic tool, but the Variety article could be misleading on that count.
I keep waiting for the day where I can feed a low-res tv series like Star Trek Voyager into an AI model and have it synthesize a 4K version from scratch. No upscaling, no sharpening, no weird artifacts, just take the video/scene as an input and spit out a high quality recreation that looks like it was natively produced in high-def. I've seen what people are doing with tools like Topaz, and that seems like rubbing two sticks together compared to the potential of a product like Runway.
Add a 3D/VR version and we've got our own holodeck.
> Had a co-worker who was an extra on a Michael Bay film. They had no continuity person at all in the scenes he worked.

Based on some of the later Transformers movies, he doesn't even care about continuity of entire environments, so I'm not surprised.
> It doesn't hold up to critical viewing, but it's already better than some of the corporate training-video dreck I've been exposed to. I can easily see output from this being used to create "Tom the Trainer" and walk him through demonstrations of what a painfully obvious quid-pro-quo looks like.

Think of all the 1980s forklift safety training videos that could get remade!
> I keep waiting for the day where I can feed a low-res tv series like Star Trek Voyager into an AI model and have it synthesize a 4K version from scratch. No upscaling, no sharpening, no weird artifacts, just take the video/scene as an input and spit out a high quality recreation that looks like it was natively produced in high-def. I've seen what people are doing with tools like Topaz, and that seems like rubbing two sticks together compared to the potential of a product like Runway.

I feel like a really good use of AI would be feeding scripts, audio, and stills from the missing Doctor Who episodes from the 1960s and having the model generate replacement episodes that match the extant episodes in terms of how they look. Maybe they could even generate full color HD versions of The Celestial Toymaker and Marco Polo from production stills.
> This is driving me crazy...that example is only barely the same person. Her blemishes and scars constantly change scene-to-scene. [Edit: the side profile at the beginning looks like a totally different person, the younger sister of the woman we see later.]

Hot take but I think the definition of common sense will change. People will pay less attention to those small details as AI gets better and both will eventually meet somewhere in the middle. Starting with stuff like kids shows anyway. I genuinely don’t think a kid would notice the details you mentioned.
"Come back in one year and see-" I am quite confident that in 10 years we will still have AI that is much dumber than a rat and not actually capable of understanding object permanence. They will be better at faking it, but I strongly doubt this tech will ever be capable of creating 30 minutes of coherent film without a human constantly correcting all the common-sense errors.
> I keep waiting for the day where I can feed a low-res tv series like Star Trek Voyager into an AI model and have it synthesize a 4K version from scratch. No upscaling, no sharpening, no weird artifacts, just take the video/scene as an input and spit out a high quality recreation that looks like it was natively produced in high-def. I've seen what people are doing with tools like Topaz, and that seems like rubbing two sticks together compared to the potential of a product like Runway.

Voyager was shot on film; aside from the dodgy CGI and stagey lighting, it should look really good with a quality scan. No worse than any feature film of the era. I haven't looked for one, but I'm surprised it hasn't been rescanned in at least HD (and really, that's all you need; the advantage of 4K over HD is marginal at best in all but the most extreme viewing scenarios - in fact, most films are still finished at 2K, although that's changing).
> Had a co-worker who was an extra on a Michael Bay film. They had no continuity person at all in the scenes he worked.

I can absolutely guarantee you that they did in fact have a continuity person. Your friend just didn't notice them - extras really don't get to interact with anyone except other extras, third ADs, and PAs, for the most part. They would never interact with the continuity person. (Or they did and were confused because the actual name of the position is "script supervisor".)
Also, Bay found an extra that he really liked, and he rewrote the script to give them lines and more screen time. Which was at least kinda cool.
> Hot take but I think the definition of common sense will change. People will pay less attention to those small details as AI gets better and both will eventually meet somewhere in the middle. Starting with stuff like kids shows anyway. I genuinely don’t think a kid would notice the details you mentioned.

You might be right, but I really hope not. That sounds like a really, really sad future.
> You might be right, but I really hope not. That sounds like a really, really sad future.

I’m kind of outing myself as being a victim of sludge brain from AI-generated Instagram reels, but I’ve already started thinking of entities in those videos as concepts rather than... you know, immutable physical objects.
> I personally hate that idea.

The OP is dunking on Squaresoft’s “virtual actors” concept, which they tried to make a thing with the lead character in the Final Fantasy movie.
> I keep waiting for the day where I can feed a low-res tv series like Star Trek Voyager into an AI model and have it synthesize a 4K version from scratch. No upscaling, no sharpening, no weird artifacts, just take the video/scene as an input and spit out a high quality recreation that looks like it was natively produced in high-def. I've seen what people are doing with tools like Topaz, and that seems like rubbing two sticks together compared to the potential of a product like Runway.

Holding on to my DVD collection for this. At some point sooner, it'll be something you can run as an overnight job, but the ultimate goal would be something that can do it on the fly, so it keeps the old feel of physically handling a movie.
> Listening to the "music", the piano chords - in a leading tone tonality, then a little semi-staccato arpeggio, it ends up on a flat 7, tonally completely out of character in reference to the chords we just heard, which approximate badly done appropriated Chopin. Ugh.

AI-generated music is still absolute trash because the level of consistency and math involved in making music sound good dwarfs what you need for pictures to look ok. If you’re off by a few Hz or a fraction of a second, people are going to hear it.
> I’m kind of outing myself as being a victim of sludge brain from AI-generated Instagram reels, but I’ve already started thinking of entities in those videos as concepts rather than... you know, immutable physical objects.

> Like if I know I’m watching just a dumb reel I’ll connect the bearded guy in the red sports car from the last scene to the current one even if his brand of glasses and the model of car has changed. That’s an extreme example and at least I’m aware of it, but it’s becoming a little more second nature than I’d like to admit.

> I am also worried about kids growing up with that actually being second nature.

Just a wild thought, but you could make a conscious decision to stop watching that stuff.
"Creative auteurs without theWhat is the value proposition with creating these videos?
> I look at Instagram. I try to not do it too much, but a lot of artists and hobbies I'm interested in are there, and I'll definitely find myself scrolling. So again, not judging.

> But the moment they start feeding me AI garbage I close the app. It's actually helpful as a reminder that I'm using it too much.

Have you checked out Cara?
> Add a 3D/VR version and we've got our own holodeck.

In all seriousness, given the hurdles involved in planning and producing VR content, this could end up being a viable killer application.
> In all seriousness, given the hurdles involved in planning and producing VR content, this could end up being a viable killer application.

The issue is that it’s assuming a whole heap of functionality that isn’t there yet. The current systems are generating 2D images of videos. They’re not generating 3D anything. That’s a whole different problem that may require a very different approach.
> The issue is that it’s assuming a whole heap of functionality that isn’t there yet. The current systems are generating 2D images of videos. They’re not generating 3D anything. That’s a whole different problem that may require a very different approach.

Exactly, you’d get those wildly gyrating/limb-sprouting gymnasts, but in VR, and also all of the background scenery is doing the same thing.
> I’m kind of outing myself as being a victim of sludge brain from AI-generated Instagram reels, but I’ve already started thinking of entities in those videos as concepts rather than... you know, immutable physical objects.

> Like if I know I’m watching just a dumb reel I’ll connect the bearded guy in the red sports car from the last scene to the current one even if his brand of glasses and the model of car has changed. That’s an extreme example and at least I’m aware of it, but it’s becoming a little more second nature than I’d like to admit.

> I am also worried about kids growing up with that actually being second nature.

Yeah, that is, uh, incredibly disturbing. What the actual fuck.
> If it can generate that "Star Trek: Excelsior" series starring George Takei as Captain Sulu that would have been produced in a just universe, I'm all for it.

She'll fly apart!
> I keep waiting for the day where I can feed a low-res tv series like Star Trek Voyager into an AI model and have it synthesize a 4K version from scratch. No upscaling, no sharpening, no weird artifacts, just take the video/scene as an input and spit out a high quality recreation that looks like it was natively produced in high-def. I've seen what people are doing with tools like Topaz, and that seems like rubbing two sticks together compared to the potential of a product like Runway.

Honestly, I'm just waiting for the day when I can feed movies into an AI model and have it synthesize them in the style of a different director and with different actors. Just for the heck of it, I'd redo Star Wars eps 1-6 as directed by Stanley Kubrick, David Lynch, John Carpenter, Quentin Tarantino, David Cronenberg, and John Ford just for variety. Then substitute in The Muppets for everyone but Samuel L. Jackson.
> Think of all the 1980s forklift safety training videos that could get remade!

Oh yes, a remake of "Staplerfahrer Klaus - Der erste Arbeitstag" would be great.
> The issue is that it’s assuming a whole heap of functionality that isn’t there yet. The current systems are generating 2D images of videos. They’re not generating 3D anything. That’s a whole different problem that may require a very different approach.

Things are just getting started, but there are people working on a text -> image -> video -> 3D -> multi-view video -> 4D generation (3D over time) pipeline:
That era Star Trek is so formulaic that you should be able to feed the library to AI and have it generate fresh episodes for you on request.
Computer, generate me a double episode with Troi's mom seducing Ensign Wesley...
> That era Star Trek is so formulaic that you should be able to feed the library to AI and have it generate fresh episodes for you on request.

> Computer, generate me a double episode with Troi's mom seducing Ensign Wesley...

Would love to have that quality of show today. The modern versions are the 'formulaic' ones of pointless mediocrity. We can throw in Star Wars too; in fact, humans ran out of ideas. Let the AI take over is what modern day has shown me.
> Honestly feels like a good use of AI to me. Automating boring work that doesn’t require creativity.

> It’s a fancy Photoshop magic wand tool. Cool.

> Currently, at best what they've got is something that can make bland, dreamy TV commercials.

And at less than 10 m² of rainforest for each Slop™ produced, it's great for the environment!
> Was… that video supposed to be consistent?

> The strange “indent” on the spare tire keeps changing, the woman is very clearly different in each scene (she doesn’t even appear to be of the same ethnicity), her earrings are all weird, the cabin spontaneously sprouts two front windows…

> Plus everything still has that odd floaty movement that AI video has, because it can’t figure out how things are supposed to move with weight and purpose.

> Blech. No thanks.

People still confuse genuinely impressive tech demos with a functional tool that should become "invisible" (i.e., not aggressively impose a set of constraints and homogenisation) once matured.
> What is the value proposition with creating these videos?

There's a carbon budget that needs to be spent ASAP!
> What is the value proposition with creating these videos?

I was standing on the bus the other day behind a tired-looking mom and her ~5-year-old son. The kid sat with his head buried in the phone, watching 10-second-long, very colorful AI-generated videos of different animals / humans / objects merging into combinations, one 10-second video after the other until I got off.