
Taylor Swift 'deepfakes' spark outrage in the US... Calls for regulation are growing everywhere

AI-generated pornographic photos spread on X; Microsoft's generative AI tool implicated. Nadella: "It's terrible... it needs to be regulated"

Over the weekend, deepfake obscene images of pop star Taylor Swift created with artificial intelligence (AI) technology spread on social media, sparking outrage from fans. Although social media platforms belatedly took steps to block searches, the images had already been viewed tens of millions of times. Calls are growing for regulation of the deepfake images and videos spreading rapidly on social media.

On the 26th (local time), foreign media outlets including the New York Times reported that fake photos compositing Taylor Swift's face onto obscene images had spread widely on X. X took emergency action, suspending the accounts in question and blocking searches for Taylor Swift on the platform. Nevertheless, the images were viewed 47 million times on X alone and spread to other social media platforms such as Instagram and Facebook. The incident is a prime example of the side effects that have emerged now that AI makes it possible to create convincing fake images with just a few simple prompts. Swift is reportedly considering legal action against the social media accounts that created the composite images of her.

Taylor Swift's fandom, the 'Swifties', took collective action by posting messages on X calling for Swift to be protected. Democratic U.S. Representative Joe Morelle called the incident "appalling," and Democratic Senator Mark Warner said, "We have repeatedly warned that AI can be used to create non-consensual, intimate images, and this is a deplorable situation." There is speculation in the tech industry that the incident could prompt a flurry of bills in the political world regulating the deepfake videos flooding social media.

Satya Nadella, CEO of Microsoft (MS), a leader in the AI industry, also said in an interview with NBC that the incident was "surprising and terrible," adding, "We need to move and act quickly (to fight deepfakes)." "It is our responsibility to put safety measures into AI technology to ensure that safe content is produced," he said. "At the same time, law enforcement agencies can do more to regulate by working with technology platforms."

Nadella is actively supporting government regulation and trying to distance Microsoft from the deepfake controversy because the Swift deepfake images were reportedly created with Microsoft's AI image-generation tool, 'Designer'. MS said it is "investigating" the claim.

The problem goes beyond Swift. In connection with the upcoming US presidential election, a phone call using a fake voice of President Joe Biden urging Democratic party members not to vote, placed a day before the New Hampshire primary, caused controversy as it spread on social media. White House press secretary Karine Jean-Pierre said she was "very concerned" about the Swift synthetic image incident and added, "We will continue to work to reduce the risks of images produced by generative AI, and Congress must also take strategic legislative action."
