A deepfake used her image. She's fighting back.


transcript

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

My name is Nicholas Kristof. I'm a columnist for The New York Times. We've heard a lot in recent years about the potential risks of AI, including deepfakes. This year, before the primary, New Hampshire voters received a robocall. It sounded a lot like President Biden.

Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.

There's also been a video of Taylor Swift appearing to endorse Donald Trump.

That video is obviously fake and has been doctored. If Taylor Swift had really endorsed Donald Trump at the Grammy Awards, it would have made headlines.

But I think there's a bigger problem that we hear a lot less about, and that's deepfake nude videos and photos. Deepfakes are AI-generated images that use the faces and identities of real people. One study found that 98 percent of deepfake videos online are pornographic. And of those, 99 percent of the time, the people they target are women and girls. An activist working to fight this problem first gave me a sense of the magnitude of the challenge.

My name is Breeze Liu, and I am a survivor of online image abuse.

Her story begins in April 2020, when she was just 24 years old.

I got a message from a friend of mine. He said, drop everything and call me right away. When I called him, he said, "I need you not to panic, but there's a video of you circulating on Pornhub." And for someone like me who doesn't even watch porn, I thought it was a joke. So I told him it's not funny, and then he said, "I'm not kidding. I'll send you the link. You have to look." He sent me the link, and there I was. I was on Pornhub.

It was one of the most devastating moments of my life. It was a video of me, recorded without my knowledge or consent. When I saw the video, my mind went absolutely blank, and I went up to the roof of my building because the despair was so overwhelming and I didn't want to live anymore. I nearly killed myself by jumping off the roof of my building. I didn't do it because I didn't want my mom to see me dead. But there were times after that when I felt it would have been easier if I had.

In Breeze's case, it started with a genuine video, but it ended up spawning deepfakes, with more than 800 links across the internet. In the article I wrote on this topic, and in this conversation, you don't hear from some of the underage girls who have been targeted, and you don't hear from celebrities talking about it, and that's because of shame. People feel humiliated when incredibly graphic fake videos show them being raped. In general, victims are reluctant to speak out. And unfortunately, that tends to perpetuate the problem.

Society imposes invisible chains on our minds, silencing us through shame. When I was communicating with the police and the lawyers, I mean, I just lost my voice. I was absolutely frozen because it was so devastating. I couldn't even talk about it without shaking or having panic attacks. What you feel is like you've been murdered. You're dead. A part of you is definitely dead. And to get justice, you have to look at that dead part of yourself over and over again, and you may still never get justice.

So Breeze went to great lengths to get the video and the deepfakes off the internet. She contacted many sites. She pressed platforms to stop linking to those sites and driving traffic to them. But it's an uphill battle, because those companies monetize this material.

I asked Pornhub to remove it. They removed it after I found a lawyer. But with the other malicious websites, some of them simply didn't respond despite our best efforts, our tireless efforts, to get the platforms to take the material down. They refused to deal with the problem. They refused to take it down.

The deepfake companies made a mistake in targeting Breeze, because she is very tech-savvy and comes from the Silicon Valley world. So she came up with her own solution. She has her own company, called Alecto AI, which offers an app that uses facial recognition technology to run reverse image searches. It tells people where their images appear on the internet and helps connect users to platforms to get nonconsensual images taken down.

I decided to create my own solution because I kept running into walls. If I don't change the system, justice may never even be an option for me.

There are several categories of actors in this field. There are the deepfake companies, which make money from advertising and subscriptions. Then there's another category, the search engines, such as Google and Bing, which drive traffic to those websites and apps, and which make money because they accept advertising from those deepfake companies. So Google is an integral part of this truly sordid ecosystem, and those who suffer from it have virtually no recourse.

I reached out to Google and Bing to get their side of the story. Google acknowledges that there's room for improvement, but no one affiliated with the company was willing to say so on the record. Google gave me a statement, quote: "We understand how distressing this content can be, and we're committed to strengthening our existing protections to help those affected." A Bing spokeswoman said something quite similar. But look, I'm not impressed.

Google knows how to do the right thing when it wants to. If you ask Google, how can I kill myself, it doesn't give you step-by-step instructions. Instead, it directs you to a suicide hotline. So Google can be socially responsible when it wants to be. But in this case, it is utterly indifferent to companies that go out of their way to humiliate and profit from women and girls. What's astonishing is that this kind of nonconsensual sexual imagery apparently isn't clearly illegal.

Basically, the problem is that the technology has advanced much faster than the law, so there is no federal law that clearly covers this issue. One option would be civil liability, which would allow victims to sue Google or a deepfake company. But Section 230 of the Communications Decency Act protects tech companies from such civil suits, or at least appears to. It may be that the best remedy is not so much criminal law as amending Section 230 so that companies can be sued and are forced to police themselves.

Tech companies like Google are enabling the deepfake companies, whose entire business model is producing fake sex videos, and those companies wouldn't really exist if Google didn't drive traffic to them and make them profitable. So I hope some of their executives and board members are listening, and maybe we can prick their consciences right now.

By Nicholas Kristof

Produced by Jillian Weinberger

Deepfakes, AI-generated images that use the faces and identities of real people, have proliferated online. A recent study found that 98 percent of deepfake videos online are pornographic, and 99 percent of those target women and girls. The image of the activist and survivor Breeze Liu has been used this way in multiple places. In this audio essay, she tells her story to the columnist Nicholas Kristof. They argue that search engines like Google and Bing have the power to fight the scourge of deepfake pornography. "Google can be socially responsible when it wants to be," Kristof says. "But in this case, it is utterly indifferent to the companies that go out of their way to humiliate women and girls and profit from it."

(A full transcript of this audio essay will be available within 24 hours of publication in the audio player above.)

This episode of "The Opinions" was produced by Jillian Weinberger. It was edited by Kaari Pitkin and Alison Bruzek and mixed by Carole Sabouraud. Original music by Sonia Herrero, Pat McCusker and Isaac Jones. Fact-checking by Mary Marge Locker. Audience strategy by Kristina Samulewski.

The Times is committed to publishing a diversity of letters to the editor. We'd like to hear what you think about this or any of our articles. Here are some tips. And here's our email: letters@nytimes.com.

Follow the New York Times Opinion section on Facebook, Instagram, TikTok, WhatsApp, X and Threads.

Nicholas Kristof became an Opinion columnist for The Times in 2001. He has won two Pulitzer Prizes, for his coverage of China and of the genocide in Darfur. @NickKristof
