Over 30 years later, while I would've never anticipated smartphones... I really thought impersonation technology through video & audio editing (not dependent upon look-alike actors) would've been here sooner. Another example of wildly underestimating the complexity of what might seem like a simple problem.
From about the 1940s to the 1960s he made many radio addresses that he wrote himself. He was politically very active, touring the country and doing a lot of speeches on his own.
As a president he absolutely had the best speech writers of his time, but he went over each speech meticulously and gave feedback the writers claimed was expert and welcome.
One of his speech writers actually published a book that showed photographs of Reagan’s own handwritten notes for his speeches. There are thousands of them in his presidential library.
> As a president he absolutely had the best speech writers of his time,
He was both much better (as a writer) and worse (as a law-breaker) than your depiction of him.
"Ronald Reagan was probably innocent by way of Alzheimer's"
A lot of people who met him during the last two years of his presidency described what we now know to be early symptoms of Alzheimer's. He also came out publicly as having it less than five years after leaving office.
For hard evidence of this, see Russia's use of deepfakes a few months ago to impersonate Zelensky and attempt to make Ukraine think their leader was surrendering.
The deep fake was technically advanced but also laughably bad.
What would be an interesting and difficult question is whether this state of things can largely be attributed to the AI community's commitment to making advances in AI open to public knowledge and use, or if there is some stronger factor at work.
Basically, every time we formed an investor pool after a while one of them would "suddenly realize" actor replacement applied to porn and then he'd fixate on the idea. He'd talk the other angels into the idea and then they'd insist the company pursue porn. We'd explain the non-porn higher valued applications, and the danger of porn psychologically tainting the public perception of the technology and the creators themselves. Plus, we had VFX Oscar winners in the company, why the hell would they do porn?
This made me wonder how many of the newer generations of social media addicts would think along the lines of "I've only ever seen him in person, so how do I know he is real?".
It wouldn't solve any of the fundamental problems of trust, of course (namely, the issue of people cargo-culting a specific point of view and only trusting the people that reinforce it). But, it would at least allow people to opt out of sketchy "unsigned" videos showing up on their feeds.
I guess it would also allow people to get out of embarrassing situations by refusing to sign. But, maybe that's a good thing? We already have too much "gotcha" stuff that doesn't advance the discourse.
They don't intend to dictate who can authorize media, only provide a verification mechanism that the media was sourced from the place it claims to have been sourced from and is unaltered.
I think of it as https but for media content.
It seems like they're on the right track. I think the key is to keep scope creep to a minimum. As soon as someone tries to add DRM, for instance, the whole effort will go up in flames.
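The "https but for media" idea can be sketched as an ordinary hash-and-sign flow. This is a minimal sketch with assumed names; it uses HMAC as a stand-in for a real public-key signature (a production scheme would use asymmetric keys, so viewers could verify without holding the publisher's secret):

```python
import hashlib
import hmac

# Hypothetical sketch: the publisher signs a hash of the media bytes;
# a viewer recomputes the hash and checks the signature. HMAC (shared-key)
# stands in here for a public-key signature such as Ed25519.
PUBLISHER_KEY = b"publisher-secret-key"  # assumed key, illustration only

def sign_media(media_bytes: bytes) -> bytes:
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).digest()

def verify_media(media_bytes: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_media(media_bytes), signature)

video = b"...raw video bytes..."
sig = sign_media(video)
print(verify_media(video, sig))            # untouched footage verifies
print(verify_media(video + b"edit", sig))  # any alteration breaks the signature
```

The key property is the second call: a single changed byte invalidates the signature, which is what makes "unsigned or tampered" detectable at all.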
The inverse is equally problematic and harder to solve: those in power discrediting real photos/videos/phone-calls as “deep fakes.”
Not releasing AI models doesn’t stop this. The technology being possible is sufficient for its use in discrediting evidence.
Signing real footage isn’t sufficient. You can get the G7 to sign an official conference recording, but could you get someone to sign the recording of them taking a bribe?
Generating deep fakes that hold up to intense scrutiny doesn’t appear to be technically feasible with anything available to the public today. But that isn’t necessary to discredit real footage as a deep fake. It being feasible that nation state level funding could have secretly developed this tech is sufficient. It seems we are quickly approaching that point, if not already past it.
This is true. Although witnesses and/or the person behind the camera could sign the video. They might want to, in fact, if they thought they might be witnessing something illegal and might need to defend themselves later.
I guess I'm imagining a future where signed videos are common. Unsigned content or content signed by some random entity would draw suspicion. Maybe not enough to keep people from seeing it, but enough that it wouldn't spread like wildfire like it does today.
There could be disputed videos, too-- Where one party signs and one doesn't. Or maybe a situation where two parties secretly ally against another to run a more convincing smear. Hmm, there's all kinds of weirdness that a system like this might create. Maybe a cyberpunk author could explore it further :)
Combining all of those things would make it impractically difficult to fake a scene without knowing what you want to fake in advance as well as developing credible witness reputations even further in advance.
For example, imagine a car accident caught by dashcams. You'd not only have your own dashcam footage certified to have been produced no later than the event by a timestamping service, but also corroborating footage from all other nearby traffic also certified in the same way but by other, competing services.
It'd be the future equivalent of having many independent witnesses to some event.
Maybe it won't be necessary to go quite as far, but I think it would be possible for recordings to remain credible in this way, should the need arise.
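The timestamping-service idea above can be sketched simply: the client submits only a hash of its footage, and the service records when it first saw that hash. Real trusted timestamping (RFC 3161) works this way in spirit; the toy in-memory ledger below is an assumption for illustration, not the actual protocol:

```python
import hashlib

# Toy timestamping service: an append-only record mapping hash -> first-seen time.
# The footage itself never leaves the client; only its SHA-256 is submitted.
LEDGER = {}

def timestamp(file_hash: str, now: float) -> dict:
    LEDGER.setdefault(file_hash, now)
    return {"hash": file_hash, "time": LEDGER[file_hash]}

def verify(footage: bytes, claimed_time: float) -> bool:
    # Footage existed no later than claimed_time if its hash was recorded by then.
    h = hashlib.sha256(footage).hexdigest()
    return h in LEDGER and LEDGER[h] <= claimed_time

dashcam = b"...dashcam footage..."
token = timestamp(hashlib.sha256(dashcam).hexdigest(), now=1_000_000.0)
print(verify(dashcam, claimed_time=1_000_500.0))  # footage predates the claim
```

With several competing services each holding the hash, faking the scene after the fact would require compromising all of them, which is the "many independent witnesses" property.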
The idea of verifying media has yet to permeate the edges of mainstream thought. People are arbitrarily credulous or dismissive, based mainly on who's sharing the video.
I might be reaching here, but wouldn't this lead to people being more mindful of what they watch and interact with? I think all it will take is a few "state of the art" deepfakes to cause a ruckus, and the domino effect should do the rest.
Anyone in the field spent time thinking on this or has had similar notions?
I imagine that deepfakes will follow a similar path to edited photos -- lots of deception, followed by trustworthy sources gaining a little more cachet, but with many people still getting fleeced. Skepticism will ramp up in direct relation to youth, wealth, and tech-savvy.
Text, Photoshop, special effects, deepfakes: they're all just tools for spreading ideas, but we've been dealing (with some degree of success) with folks telling lies for as long as we've had language. I just can't see this fundamentally changing anything except the level of skepticism we give to video, which (considering what Hollywood has been capable of for some time) we should have been developing already.
Seems like there'd have to be some big caveats for that to be at all true, but it is interesting. I've read a lot of ridiculous crap and haven't been persuaded by much of it.
> (Do you believe me?)
So when a global pandemic occurs and we're trying everything we can to isolate and socially distance, that priority changes real quick. People get talking and problems get solved.
Of course, sore losers will complain about anything to justify their loss, and this "new thing" was a prime scapegoat. It was also well known ahead of time that the mail in votes would be largely Democratic (because COVID was VERY politicized and democrats were more likely to follow quarantine guidance and therefore vote by mail). So when the votes came in, they pointed to that imbalance and called it "fraud".
Besides all that, there's no reason to be more suspicious of mail-in ballots than in-person ones. In-person, you mark a paper ballot and then put it in a stack... which then gets mailed somewhere else. If someone is going to be changing mail-in ballots, then they're already in a position to be changing regular ones as well (and every election security professional will tell you that paper ballots are more secure than electronic ones).
The one advantage of physical voting I can think of is the ability to stand near a polling station on voting day, count the people who go in, and ask those willing to share whom they voted for. This allows an independent check for fraud.
Why? Mail-in voting is hardly unique to the US; what made you suspicious?
(A few people have been charged with voting their spouse's ballots.)
Unfortunately in 2020 the fringe became the GOP mainstream, treating equally soft claims as fact.
"Seventy-two percent (72%) of Democrats believe it’s likely the 2016 election outcome was changed by Russian interference, but that opinion is shared by only 30% of Republicans and 39% of voters not affiliated with either major party."
Russia absolutely did and continues to push propaganda into elections in the USA and elsewhere. That's not really in dispute at this point so I'm not surprised it polls that high.
Got a poll that shows similar numbers for fraud? I would be genuinely surprised to see that.
I should actually note here that I didn't vote for Trump, either time, nor did I vote for Clinton or Biden.
I just hate hypocrisy.
That sounds like a perfectly reasonable claim with evidence that supports it, paralleled by other elections in other countries as well; quite obviously very different to what was discussed above.
In 2020, Hillary Clinton was still casting aspersions regarding the outcome of the 2016 election, sowing discontent about the electoral college, preparing Democrat voters to ignore the results until Joe Biden was declared the winner.
Portraying this game as if it's only being played by one team does not help restore any trust in the federal election process.
The bigger problem in that election was Russian-intelligence-stolen (and possibly tampered with) documents being released to the press in the lead up to the election in coordination with the Trump campaign (with the FBI keeping its investigation of that secret), and then the FBI director making an unprecedented and (we found out only afterward) unsupportable statement attacking Clinton immediately before the election, after being pressured into it by a handful of rogue FBI agents who were friends of Trump’s campaign threatening insubordination.
And perhaps the biggest problem of all, an entirely too credulous mainstream media who didn’t put those developments in context, leaving voters to draw mistaken inferences, and giving oodles of free airtime to Trump’s rallies without making any effort to dispute outright lying in real time.
But she doesn't count as "anyone", I guess?
In other words: not saying that there was actual fraud sufficient to change the election, not saying the election was "stolen" in the sense people seem to be saying here.
Stolen by foreign influence is very different than what the 2020 nuts have claimed. How many court cases did they lose? How many times were they asked to produce evidence and came up with ... nothing?
But we do.
There's research: even if you read something that you know is wrong, you still believe it, especially when distracted or not taking the time to analyze it. As we rarely do.
That seems like bullshit to me. I read your words (you even posted links!) so how come I don't just instantly believe you? If it were true, wouldn't it make all fiction inherently dangerous?
Let's see how it holds up in real life... here's a lie: "My uncle works at Nintendo and he told me that Mario (Jumpman at the time) was originally intended to only have one testicle, but the NES (Famicom) didn't have powerful enough graphics to show that, so they scrapped that part of his official character design and have left the number of testicles unspecified ever since."
Somewhere, secretly deep inside you, do you believe that now?
Nah. I think we don't have to worry about people believing everything just because they read it. Reading things can put ideas into your head (have you ever even considered Mario's testicles before today?) but at this point we're straining the hell out of "belief" and going into philosophical arguments. In real life though, we are capable as a species of separating fact from fiction some of the time.
It works because people don't judge truth based on sober, rational, purely logical analysis but emotion and bias and most importantly comfort. If you're in an environment in which everyone believes X, you will inevitably begin to conform if the social pressures to do so are strong enough, and counter-signals weak enough. This is how radicalization works, through the gradual osmosis of a worldview, and the acceptance of smaller lies that lead to accepting bigger lies. It's how Nazi propaganda worked, and it's how modern advertising and politics work. It's why witness testimony is unreliable and how police can convince suspects that they committed a crime they went in knowing they were innocent of.
The effect isn't universal - nothing about human psychology is. But it is real.
Repetition isn't enough, but I'll accept that with enough effort from enough people you might eventually be gaslit or brainwashed into believing just about anything so long as it didn't violate some fundamental aspect of your identity in which case it's more likely you'll just lie about believing it to make your life easier.
By '06 I had an MBA with a master's thesis on the creation of a new advertising format in which the viewer, their family, and their friends are inserted into online brand advertising. By '08 I had global patents and an operating demonstration VFX pipeline specific to actor replacement at scale. However, it was the financial crisis of '08, and nobody in the general public had ever conceived of automated actor replacement. This was 5-7 years before the term deepfake became known. VCs simply disbelieved the technology was possible, even when it was demonstrated before their eyes.
Going the angel investor route, three different times I formed an investor pool, only to have them at some point realize what the technology could do with pornography and then insist the company pursue porn. However, we had Academy Award-winning people in the company; why would they do porn? We refused, and that was the end of those investors. With an agency for full-motion-video actor replacement advertising not getting financing, the award-winning VFX people left and the company pivoted to the games industry, making realistic 3D avatars of game players. That effort was fully built out by '10, but the global patents were expensive to maintain, and the games industry producers and studios I met simply wanted the service for free. We struggled for a few years, then closed, sold the patents, and I went into facial recognition.
I was bitter about this all for a long time.
I'm not sure there's a huge market for people wanting to insert their friends and family into porn, but if there was, why not just try it? It seems like it would have demonstrated that the tech worked commercially, which could have attracted investment in non-porn uses, and it could have ended up just one more technology in a long list of tech made successful as a result of porn.
Meanwhile, simply the process of inserting consumers into the trailer of any Marvel superhero film and charging $1 for your own copy would make tens of millions. Repeat with any highly desirable fantasy franchise. That is the most obvious application. Actor replacement with ordinary people has a huge number of applications that are positive for society. Porn is not one of them.
I think for claims that you think are important to determine an objective truth value for (like who the President of the U.S. is), your mechanism for determining it is based on trusting sources you deem reliable and looking for broad agreement among many sources you deem to be independent. You're probably not just looking at a single sourceless video of Ronald Reagan behaving as if he's the president and believing that claim because the video couldn't possibly have been faked.
And for other claims that you don't think are important to determine an objective truth value for, I don't think you need very high-fidelity evidence anyway. For example, people have no trouble believing claims that corroborate their closely-held ideologies even with very low-fidelity fraudulent evidence, or even claims made with no attempt whatsoever to provide even fraudulent evidence!
A lot of people point to Photoshop not breaking truth, but in my experience, we simply rely on video instead. When I served on a grand jury, essentially all of the non-testimony evidence I saw was video, and it was incredibly compelling. Neutering the reliability of that evidence will hurt.
I don't think people will be more mindful of what they watch and believe, I think the opposite will happen: an attraction to fake content. People will embrace the fantasy and share deepfakes at a scale so large governments will be running campaigns to alert the public that such-and-such video is fake, possibly attempting to regulate how content shared online must be labeled.
That said I still believe when these lies are closer to us, enough for us to care either as professionals or friends and family, that we will be more discerning about reality.
There was a demo earlier this year (Jan) showcasing the proposed 1.0 spec working in Microsoft Edge:
No. We have already run the case study where people on Reddit, Twitter, and other social media will seethe at mere screenshots of headlines and captions under a picture with zero need for verification.
Here on HN we will pile into the comments to react to the title without even clicking the link to read it ourselves.
Deepfakes feel like a drop in the bucket. What does it matter that you can deepfake a president when people will simply believe a claim about the president that spreads around social media? I don’t see it.
Just like it was for millions of years before now.
Face-to-face is only applicable within your small social network.
That is my point precisely. And I will point out here that small social networks are our historical environment. I believe not only can we return to them and flourish, but also that it would be a great boon to human flourishing, the only outcome that really matters.
I've worked at home for nearly 20 years so I've had to learn to create a strong in-person social network. The pandemic and some medical issues have interfered, but I still much more enjoy having fun with my friends than doing things online.
It’s a surveillance thriller.
Let me quote myself from a discussion I was having this morning with a friend who is a tenured professor of philosophy working on AI (as an ethics specialist his work is in oversight),
we were discussing the work shared on HN this week showing a proof of concept of Stable Diffusion as better at image "compression" than existing web standards.
I was very provoked by commentary here about the high "quality" images produced, it was clear that they could in theory contain arbitrary levels of detail—but detail that was confabulated, not encoded in any sense except diffusely in the model training set.
"I'm definitely inclined to push hard on the confabulation vs compression distinction, and by extension the ramifications.
I see there a very meaningful qualitative distinction [between state of the art "compression" techniques, and confabulation by ML] and, an instrumental consequence which has a long shadow.
The thing I am focused on is whether one can determine, even under forensic scrutiny, that a media object is lossy.
There was a story I saw this week about the arms race in detection of 'deep fake' reproduction of voice... which now requires some pretty sophisticated models itself. Naturally I think this is an arms race in which the cost of detection is going to rapidly become infeasible except to the NSA. And maybe ultimately, infeasible full stop.
So yeah, I think we're at a phase change already, which absolutely has been approaching, back to Soviet photo retouching and before, forgery and spycraft since forever... so many examples e.g. the story that went around a couple years ago about historians being up in arms about the fad for "restoring" and upscaling antique film and photographs, the issue of concern being that so much of that kind of restoration is confabulation and the presumptive dangers of mistaking compelling restoration for truth in some critical detail. Which at the time mostly seemed a concern for people who use the word hermeneutics unironically...
...but we now reach a critical inflection point where society as a whole integrates the notion that no media object, no matter how "convincing", can be trusted,
and the consequent really hard problems about how we find consensus, and how we defend ourselves against bad actors who actively seek their Orbis Tertius Christofascist kingdom of rewritten history and alternative facts.
The derisive "fake news" married to indetectably confabulated media is a really potent admixture!"
If I want to compress the movie Thunderball -- a sufficiently clever "compression" algorithm could start with the synopsis at https://en.wikipedia.org/wiki/Thunderball_(film) add in some images of Sean Connery, and generate the film. That's...maybe a 100K to 1 compression ratio?
If the algorithm itself understands "Sean Connery" then you could (theoretically) literally feed in the text description and achieve a reasonable result. I've seen Thunderball, but it was years ago and I don't remember the plot (boats?). I'd know the result was different, but I likely wouldn't be able to point to anything specific.
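The "100K to 1" guess can be sanity-checked with back-of-the-envelope numbers (the sizes below are assumptions, not measurements):

```python
# Rough check of the claimed compression ratio for a synopsis-plus-model scheme.
synopsis_bytes = 5 * 1024    # ~5 KB: a plot synopsis plus a few reference stills
film_bytes = 2 * 1024**3     # ~2 GB: a feature film at roughly DVD quality
ratio = film_bytes / synopsis_bytes
print(f"~{ratio:,.0f} : 1")
```

With those assumptions the ratio comes out even higher than 100K:1, on the order of hundreds of thousands to one, though of course almost all the "compressed" information actually lives in the model weights rather than the synopsis.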
put it this way:
written text has always been extremely mutable and therefore falsifiable. video is simply going to become more like text. people will trust it based on the context and their own judgment rather than the content. I suspect most people already do this anyway
I assume that full-on impersonation will still be illegal, but certain looks that are sometimes quite similar to a real celebrity will trend now and then.
The context for this is the continual improvement in the capabilities and comfort of VR/AR devices. The biggest one I think is going to be lightweight goggles and eventually glasses. But also the ability to stream realistic 3d scenes and people using AI compression (including erasing the goggles or glasses if desired) could make the concept of going to a physical place for an event or even looking exactly like yourself feel somewhat quaint.
I formerly built an ad agency with a feature-film VFX pipeline for actor replacement advertising - but I built it back in '08, years before "deep fakes", and nobody believed what I had working was possible.
It's less about using fakes to push your agenda, and more about being able to (plausibly or implausibly it doesn't matter) claim that whatever video is a deepfake.
The truth is meaningless, and as tools like deepfakes become more and more sophisticated, it's harder and harder to establish baseline realities.
And someone is benefiting from that shift away from reality, I just don't know who.
People made the same arguments about photoshop, but it's really not a problem. Almost never is a single video the only evidence of anything and in the cases where it is and that video can't be verified it's probably best not to ruin someone's life over it.
Skepticism in general will only be applied to people we don't like and ignored for people we do.
The continued lapping up of blatant Ukrainian propaganda in mainstream media, for example, doesn't even need Photoshop to be believed, just the vague "sources said".
Have you been paying any attention to what's going on the last several years?
Seems like every AI project does something halfheartedly, ponders what the world will be like once it’s perfected, and then starts the next project long before the first project is actually useful for anything but meme videos.
For instance Siri and Google Voice: they are clearly understandable but they sound noticeably different than real people.
Or Stable Diffusion which will supposedly put real artists out of business. It is definitely viable for stock photos, but I can usually tell when an image was made by Stable Diffusion (artifacts, incomplete objects, excessive patterns).
thispersondoesnotexist.com faces can also be spotted, though only if I look closely. If they are a profile pic I would probably gloss over them.
In fact, I bet you can make an ML model which very accurately detects whether something was made by another ML model. Actually that's a good area of research, because then you can make a deepfake model which tries to evade this model and it may get even more realistic outputs...
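That detector-versus-evader dynamic can be sketched with a toy model (an assumed 1-D setup, not a real deepfake pipeline): "real" samples come from one distribution, "fake" samples from another, and each round the generator shifts toward whatever the detector currently accepts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data ~ N(0, 1); the generator starts far away at N(3, 1).
real_mean, fake_mean = 0.0, 3.0

for step in range(50):
    real = rng.normal(real_mean, 1.0, size=1000)
    fake = rng.normal(fake_mean, 1.0, size=1000)
    # Detector: a simple midpoint-threshold classifier between the two batches.
    threshold = (real.mean() + fake.mean()) / 2.0
    caught = (fake > threshold).mean()          # fraction of fakes flagged
    # Evader: nudge the fake distribution toward the real one.
    fake_mean -= 0.1 * (fake_mean - real_mean)

# After enough rounds the distributions overlap and the threshold detector
# does little better than chance.
print(round(fake_mean, 3), round(caught, 2))
```

This is the arms-race intuition in miniature: every detector decision boundary becomes a training signal for the evader, which is exactly why detection costs tend to escalate.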
Ultimately I think we will see a lot more AI before we start seeing truly indistinguishable AI. It's still close enough that the ethical concerns are real, as people who don't really know AI can be fooled. But I predict it will take at least a while before trained "AI experts" can no longer reach a consensus on authenticity.
https://www.youtube.com/watch?v=bPhUhypV27w (Not the greatest visually but funny nonetheless, esp the end)
Somewhere, someone is working hard to perfect these. In this particular case probably under NDA... le sigh
This has been the case for decades now. Much more realistic than x isn't a good enough metric. It needs to be indistinguishable from the real thing.
I'm old enough to remember this being called photo realistic: https://static1.thegamerimages.com/wordpress/wp-content/uplo...
And it was, compared to everything that had come before. Now ... not so much.
This is the act I meant. Judge for yourself, but I believe we're close to bridging the uncanny valley
I didn't think it would be possible to do in this decade, but we seem to be making progress fast now. Very impressive to see. (and scary)
But even the failures at temporal coherence have their own aesthetic appeal. Like all of this stuff has been, it's very "dreamy": the way the clothing subtly shifts forms.
Beyond the coolness, I'm glad that individual people are getting access to digital manipulation capabilities that were previously available only to corporations, institutions, and governments.
Some of the others in this space have great results : https://imgur.io/seBTPG8
We've perfected voice replacement and I'll have more to show soon.
The animation: is that a 3D actor with its visage replaced by AI? Could you explain what you did there?
> These include anatomy, psychology, basic anthropology, probability, gravity, kinematics, inverse kinematics, and physics, to name but a few. Worse, the system will need temporal understanding of such events and concepts...
I wonder if unsupervised learning (as could be achieved by just pointing a video camera at people walking around a mall) will become more useful for these sorts of models; one could imagine training an unsupervised first pass that simply learns what kinds of constraints physics, IK, temporality, and so on provide. Then, given that foundation model, one could layer supervised training of labels to get the "script-to-video" translation.
Basically it seems to me (not a specialist!) that a lot of the "new complexity" involved in going from static to dynamic, and image to video, doesn't necessarily require supervision in the same way that the existing conceptual mappings for text-to-image do.
Combined with the insights from the recent Chinchilla paper from DeepMind (which suggested current models could achieve equal performance if trained with more data and fewer parameters), perhaps we don't actually need multiple OOMs of parameter increases to achieve the leap to video.
Again, this is not my field, so the above is just idle speculation.
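For reference, the Chinchilla rule of thumb mentioned above can be sketched numerically. This is a hedged approximation: the constants 6 (FLOPs per parameter per token) and 20 (tokens per parameter) are rough figures associated with that paper's analysis, not exact laws:

```python
import math

def compute_optimal(C: float) -> tuple[float, float]:
    """Approximate compute-optimal params N and tokens D for budget C (FLOPs).

    Assumes C ~= 6 * N * D and the compute-optimal mix D ~= 20 * N,
    so C = 6 * N * (20 * N)  =>  N = sqrt(C / 120).
    """
    N = math.sqrt(C / 120.0)
    D = 20.0 * N
    return N, D

# A ~5.9e23-FLOP budget lands near Chinchilla itself (~70B params, ~1.4T tokens):
N, D = compute_optimal(5.9e23)
print(f"params ~{N:.1e}, tokens ~{D:.1e}")
```

The point for video models is that, under this rule, extra compute should be split between parameters and data, rather than spent on multiple orders of magnitude more parameters alone.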
For motion, there's yet another layer of fakery required (and this is something security / identity detection systems tackle nowadays): stuff like gait, typical motions, gestures, or even poses. To deepfake a Tom Cruise clone, you need not just to look like the actor, but to project the same manic energy and signature movements.
Unless an existing reference image exists - whatever the switch does will be a guess. Many motivated folks already do this with photoshop; it’s all over 4chan and similar message boards (request threads) and has been that way for at least a decade.
This is already the reality for celebrities with photoshop - their likeness is returned unclothed in image search.
That’s not their body
There would be small details kept intact between the source image and the output that would make it feel much more personal than even the best manual fakes of today.
There is a lot of variation in details between human bodies that are covered by clothing.
You can infer some things, like skin tone and hair color, from other parts of the exposed body with pretty decent accuracy. You can infer general body shape from how the clothes fit. But for things like size, shape, color, hair, birth marks, moles, surgical modifications, etc. of various concealed body parts? All those vary wildly from person to person. Unless you have a reference image that you can use to answer those questions - I can't imagine that you will be able to infer those. If you can't infer those, you aren't getting the real body of the person you are trying to undress. You're getting a dream of what that person might look like if they were to remove their clothes - a dream that is not accurate.
Not to discredit what you are saying: those dream images are definitely going to cause an entire generation of discomfort. But the cat is out of the bag and has been for some time. Artists were already capable of creating images like this without consent - but it required more talent than most humans possess to get that onto paper. Photoshop made it possible too. AI is making it even easier.
Society is weird about nudity. To be fair, I am too. We have all of these constructs built around the human body and concealing it that many of us have bought into.
At its core, I think the fear of this tech and nudity is that it will be used to "steal dignity" from folks. The question is: can you steal dignity from someone with pencil and paper? Is a photorealistic sketch of your friend unclothed sufficient for them to have lost their dignity? What about Photoshop? How about passing your photorealistic sketch through an AI to make it even more photorealistic? At what point have you robbed someone of dignity? Robbing someone of dignity is a social construct; in some ways this form of dignity stealing is something we _allow_ people to do to one another by buying into that construct. I do feel like the narrative we should be pushing is "that isn't my body." If we invest in breaking the construct, my hope is that we can remove the power this holds over people.
Beyond that, there's also a host of other ways these materials can be used for targeted harassment. Sending a woman images of "herself" in extreme sexual acts can be traumatising even if the victim knows it's a fake. There's also the rise of "what would you do with her" and "irl girls" pornography on places like Reddit, where unconsenting women are targeted by stalkers who sexualize and degrade them publicly for kicks. This just gives them further fuel for their obsessive fantasies.
The shift to being able to "see" arbitrary women as they'd look in certain positions or level of undress will also change how young men perceive real women in harmful ways in a society where women already have to deal with constant objectification.
In terms of inference: firstly, it's about things like lighting, skin tone, background details, etc. that set the scene for us on an unconscious level and which inevitably leave the current generation of fakes somewhere in the uncanny valley. Secondly, the fact that we know they're fake doesn't impact the initial associations our brain will create upon seeing them. If lies were not harmful, defamation cases wouldn't be a thing.
> The shift to being able to "see" arbitrary women as they'd look in certain positions or level of undress will also change how young men perceive real women in harmful ways in a society where women already have to deal with constant objectification.
There is a storyline in a children's movie about a young girl making sexually suggestive sketches of a boy in her notebook (Turning Red). The mom discovers it and confronts the boy thinking he's taken advantage of her daughter. That's going to evolve into using AI to do the same. It's not just a trope, those problems exist already; this is just going to make these cases more common. And you're nailing it with how it's going to change perceptions. These dream images are going to be created to match the fantasies of the person generating the image. Those images aren't the actual bodies of the person being undressed - it's just a fantasy created by the artist optimized to their own preferences.
> Firstly it's about things like lighting, skin tone, background details etc. that set the scene for us on an unconscious level, and which inevitably leave the current generation of fakes somewhere in the uncanny valley. Secondly, the fact that we know they're fake doesn't impact the initial associations our brain will create upon seeing them.
Expert fakes exist for celebrities that pass uncanny valley without the use of AI. There are sites dedicated to cataloging celebrity images and documenting if they are fake or not. Same with request threads and WYWD threads - the stuff on message boards can be pretty convincing, enough to get to those initial associations. I'm not sure why you think otherwise. The dark corners of the internet are full of this content.
This cat is out of the bag and has been for some time. The frequency is only going to increase as the bar for generating these images lowers. Photoshoppers running request threads are the bottleneck right now; soon those threads are going to be replaced with generators that spit out hundreds of candidate images.
We don't really have control over how others use our likeness beyond the tools the law extends to us but, even then, it only addresses the problem after it happens and doesn't stop someone from doing it in the first place. I don't see us stopping it. If we can't stop it, we either let it happen to us or we figure out how to remove its power over us at an individual level (and help others do the same).
I'd like to turn this around. You're bringing up a lot of problems without solutions. Other than shifting our view of these images on a person-to-person basis, how do you see us stopping it?
I can fantasise about an intergalactic space war, and I can even write a screenplay and produce Star Wars. Whether or not I make the movie, it's still fantasy, still something I just imagined. But making a representation of it and distributing it vastly alters the power it has to affect public consciousness.
Fantasy doesn't equate to harmlessness.
That's totally a browser extension next year... Right click, remove clothes...
When you think about it, ethically it's in the same ballpark as right click, copy: something you'd probably also be doing without asking the subject of the image.
This should be very illegal.
That's not the most alarming use case of this tech. By far. (IMHO)
Also, I find this reasoning very off-putting. Putting child porn into a discussion kills it. All participants are (mostly) willing, and basically required, to agree and to "not talk about this further".
The fundamental technology that underpins these achievements is more than capable of destroying civilization if things start to go south - which I believe they will, sooner or later. I find that to be more worthy of discussion than moral jousting about things people do in their private lives that I will - hopefully - never know about.
Let's all use our imagination and see where these kinds of models, both diffusion and transformers can take us. Sure they can generate plausible visual information, but that's not all they can do. Some days ago someone posted about ACT-1, a transformer for actions. People can and will hook up these things in all sorts of complicated pipelines and boy, generating some insensitive imagery is way, way down on the list of things to worry about.
First, I see "AGI" as a real problem we'll have to face at some point. I believe we will be too late by the time we recognize it as a problem, so let's ignore that "threat" for now.
The more pressing problem IMO is that, to use technical terms, a shitload of people will have to face the reality that a software system is outperforming them at just about anything they are capable of doing professionally. I believe this will happen sooner rather than later, and I am totally not seeing society being ready for that. Already I am seeing these models outperforming me and my colleagues on quite a few important axes, which worries me, as does the fact that they almost universally dismiss it because it's not "perfect". I know it's hot these days to either under- or overestimate AI, but I do feel we have crossed a certain line. I don't see this genie going back into its bottle.
Perhaps I'm still handwavey. I guess I am a handwavey person and I'm sorry about that, but when I see GPT3 finishing texts with such grace I can't help but see a transformer also being capable of finishing "motor movements" or something else entirely, like "chemical compounds", "electrical schematics" or even "legal judgements". I just found out about computational law BTW, might interest someone. Even just the "common sense" aspect of GPT3 is (IMO) amazing. Stuff like: we make eye contact during conversation, but we don't when driving. Why not? But also stuff like detecting which room of the house we are in based on which objects we see. That sort of stuff is amazing, and it's a very general model too, not trained on anything specific.
I guess the core of what I'm saying is that "predicting the next token" and getting it right often enough is frighteningly close to what makes a large percentage of the human populace productive in a capitalist sense. I know I'm not connecting a lot of dots here, but I clearly lack the space, time and, perhaps more importantly, the intelligence to actually do that. I fear I might be a handwavey individual, in fact easily replaced by GPT#. Do you now see why I am so worried? :)
I'm pretty sure Slashdot is willing to put up the money for thousands of renders of "Natalie Portman pours Hot Grits over <thing>" alone.
If a model is already trained on lots of images and captions, it would probably be possible to just feed it tons of whatever video and let it figure out the rest itself.
Should Tom Cruise's heirs receive a perpetual rent 200 years from now when Mission Impossible 57 starring their ancestor is airing?
What regulation should be put in place, or would be effective, in a world where any teen with the latest trending social media app on their phone can realistically impersonate a celebrity in real time for likes?
Instead we'll probably see a bunch of crap, but on top of that crap it will allow people who never would have had a chance before (no connections, money, etc.) to be discovered who have true talent. It lowers the bar to content creation significantly.
> to be discovered who have true talent.
I think deepfakes have the power to do much more real, immediate damage to society than the "threat" of AGI.
That has been true for 38 of the past 40 centuries. Somehow I suspect making it 39 out of 41 won't be that big of a problem, especially compared to getting people to not take video evidence as being at all related to truth.
(Edit: in case it wasn't obvious: you can't take video evidence as being at all related to truth if you don't have any video evidence to so take.)
Criminalizing the making of fake videos of any kind is extreme authoritarian behavior, and libel is already punished by law everywhere I know of. If you don't like how libel is punished in your jurisdiction, that is another matter, but if you want ideas on how to make the punishments harsher, just take a list of countries, filter by dictatorships, and you're bound to find examples that take libel and its definition very seriously.
Honest question. It’s going to be a long trip through the Uncanny Valley where everyone will clearly notice the fakery and then … what? What is the end goal here? Ok, making more Superman movies starring Christopher Reeve, obviously. But then what?
To quote someone who deserves to be in more deepfakes, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
It's also concerning to imagine the social impact this could have on young boys as well, in a climate where pornography addiction issues become more visible each year.
I'm not concerned at all about pornography addiction, I don't think that's real. On the contrary, pornography promotes autonomy and independence by making people less dependent on others for sexual stimulation. It's a massive social good, and unrestricted pornography is the sign of a modern society.
>but what people choose to draw, imagine, photoshop, or deepfake for non-commercial use is their business
And although I'm sure that one can harass or incite violence or defame with artwork, whether artwork of the imagination should fall into that category is dubious. An instructive example here concerns fantasy versus 'hate speech' and where to draw that line while maintaining maximum freedom and a polite society.
I don't think society is made particularly 'safer' by defamation and copyright infringement being prohibited; and when it comes to fiction I'm even more dubious, given that fiction is understood to produce statements we can more easily take as fantasy.
If I make deepfake images of your wife being forced into sex, being beaten, crying and screaming, and then I distribute them online free of charge, is that really harmless just because I don't profit from it?
In your example, I don't think I'd have a problem with the creation of the images if they're not distributed. But on a macro scale, if I put aside my ethical concerns with the proliferation of patriarchy, it's worth considering whether "anyone can do anything with images" would have such an effect if my wife were only one target among millions.
The internet we have today is not free. The society we have is not a wholly free one but we rightfully make trade-offs to protect people.
We know that today there is already a huge issue of nonconsensual pornography, revenge porn, etc. Why is the line of what is "free" drawn at protecting these groups? Why do we tolerate open abuse against women but not against children? I wonder if our outlook on women's safety as a society is really as forward-thinking as we would hope when we look around the world today.
> "unrestricted pornography" is the sign of a modern society.
Is it though? In another world you could say the same thing about drugs. Some people in America today might say it about gun freedoms.
I don't know. I think there are lines to be drawn and I think we can be open to discussing those without falling immediately into hysterics about state overreach.