Why ThisPersonDoesNotExist (and its copycats) need to be restricted
You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses StyleGAN, Nvidia’s publicly released face-generation AI, to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.
We’re living in an age when individuals and organizations rampantly use stock images and stolen social media photos to hide their identities while they manipulate and scam others. Their cons range from pet scams to romance scams to fake news proliferation and beyond. Giving scammers a source of infinite, untraceable, and convincing fake photos to use for their schemes is like releasing a gun to market that leaves no forensic trace.
Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.
Risk #1: Someone recognizes the photo. While the odds of this are long, it does happen.
Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates (a quick sketch of the technique follows this list of risks).
Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram user whose photos they pilfered. And so on: people make mistakes.
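To make Risk #2 concrete, here is a minimal sketch of automating a reverse image search in Python. It leans on Google’s long-standing but undocumented searchbyimage URL parameter, so treat that endpoint as an assumption rather than a supported API; TinEye offers a commercial API for the same job.

```python
import webbrowser
from urllib.parse import urlencode

def reverse_image_search(image_url: str) -> None:
    """Open a Google reverse image search for a photo hosted at image_url."""
    # 'searchbyimage' is a long-standing but undocumented Google URL pattern;
    # treat it as an assumption that could change without notice.
    params = urlencode({"image_url": image_url})
    webbrowser.open(f"https://www.google.com/searchbyimage?{params}")

# Hypothetical example: check whether a suspicious profile photo appears elsewhere.
reverse_image_search("https://example.com/suspect-profile-photo.jpg")
```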
The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.
AI-generated photos have another advantage: scale. As a scammer, it’s hard to create 100 fake accounts without getting sloppy. You may accidentally repeat a photo or use a celebrity’s photo, like when a blockchain startup brandished Ryan Gosling on its team page. But you can create thousands of AI-generated headshots today with little effort, as the sketch below shows. And, if you’re tech savvy, you can go further.
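To illustrate how little effort “thousands of headshots” actually takes, here is a minimal sketch, assuming the site returns a fresh JPEG for each plain GET request (its behavior at the time of writing, which may change):

```python
import time
import requests

# Assumption: thispersondoesnotexist.com serves a new random face on every
# request and expects a browser-like User-Agent header.
HEADERS = {"User-Agent": "Mozilla/5.0"}

for i in range(1000):
    resp = requests.get("https://thispersondoesnotexist.com", headers=HEADERS)
    resp.raise_for_status()
    with open(f"face_{i:04d}.jpg", "wb") as f:
        f.write(resp.content)
    time.sleep(1.5)  # the site rotates to a new face roughly every second
```

A dozen lines, no GPU, no machine-learning expertise: that is the scale problem in practice.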
Imagine you’re a scammer whose target is a recent immigrant from Iran. To get that person to trust you, you could browse their Facebook page, download photos of their favorite nephew in Tehran, and then use Nvidia’s technology to create a fake person who looks like that nephew. Trust won!
What do we do now?
By publicly sharing its code, Nvidia has opened Pandora’s box. The technology is out there and will only get better and more accessible over time. In the future, we won’t be limited to one portrait of an AI-generated human, either; we’ll be able to create hundreds of photos (or videos) of that person in different scenarios, like with friends, family, at work, or on vacation.
In the meantime, there are a few things we can do to make the lives of scammers more difficult. Websites that display AI-generated humans should store their images publicly for reverse image search websites to index. They should display large watermarks over the photos. And our web browsers and email services should throw up warnings when they detect that a facial photo was AI-generated (they already warn you about phishing scams). None of these solutions is a silver bullet, but they will help thwart the small-time con artists.
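As a rough illustration of the watermarking suggestion, here is a minimal sketch using the Pillow imaging library; the function name, placement, and styling are illustrative, not a proposed standard:

```python
from PIL import Image, ImageDraw, ImageFont

def watermark_synthetic_face(path_in: str, path_out: str) -> None:
    """Stamp a semi-transparent 'AI-GENERATED' notice across a photo."""
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # The default bitmap font is small; a real deployment would load a large
    # TrueType font via ImageFont.truetype().
    font = ImageFont.load_default()
    width, height = img.size
    draw.text((width // 4, height // 2), "AI-GENERATED",
              fill=(255, 255, 255, 160), font=font)
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Hypothetical usage with a file produced by the earlier download loop.
watermark_synthetic_face("face_0001.jpg", "face_0001_marked.jpg")
```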
Ultimately, the technology Nvidia released haphazardly to the public illustrates a common problem in our industry: for the sake of a little novelty, companies are willing to make a big mess for everyone else to clean up.
Source: VentureBeat