CALIFORNIA LEGISLATION TO STOP DEEPFAKES


DEEPFAKES AND THE THREATS THEY POSE

We have discussed before how deepfake videos can cause serious harm to the people they target. Deepfakes use AI technology to make a person appear to do or say something they never did or said. They are completely fake videos or photos generated by machine learning, and they are looking more and more realistic. The difficulty will lie in coming up with a foolproof way to distinguish what is real from what is fake.
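To make the "generated by machine learning" point a little more concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design commonly described for face-swap deepfakes. It assumes Python with PyTorch installed; the layer sizes and 64x64 face crops are illustrative assumptions, not any real tool's implementation. One encoder learns a generic face representation, each decoder learns to reconstruct one specific person, and swapping decoders at inference renders person A's expression with person B's face.

```python
# Illustrative sketch only: shared encoder, one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training reconstructs each person through their own decoder;
# the "swap" happens when person A's encoding is fed to decoder B.
face_a = torch.rand(1, 3, 64, 64)       # stand-in for a cropped face frame of person A
swapped = decoder_b(encoder(face_a))    # A's expression rendered as person B
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```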

The biggest concerns with deepfake videos fall into two categories. The first is personal defamation, in which a person is made to appear in a humiliating or damaging way, often for blackmail purposes. The second is political manipulation, and this is a big one, since we know this kind of manipulation is already going on, albeit in different forms.

Portraying politicians in fake videos saying things they have not said is a very powerful way to influence opinions and even the outcome of elections. There are already enough domestic attacks on the integrity of our elections from large tech companies like Google, which manipulate search engine results to change public opinion. Facebook refused to take down a deepfake of House Speaker Nancy Pelosi despite knowing it was fake, which only contributes to the problem. We already have an issue with tech companies overstepping their bounds. If we don’t find a way to effectively combat deepfakes, the problem will only grow in severity over time.

THE CALIFORNIA LEGISLATION

In response to many incidents, including Mark Zuckerberg himself becoming the victim of a deepfake, California introduced two pieces of legislation to try to tackle the problem. The first bill, recently signed into law, allows victims to sue anyone who puts their image into a pornographic video without consent. This may deter people from creating pornographic deepfakes of someone to ruin their reputation. The law explicitly requires the consent of the person portrayed before these kinds of videos can be created.

The second bill centers on portraying politicians doing or saying things that could sway public opinion in a negative way. It was introduced by Marc Berman, chair of the Assembly’s elections committee. The bill, which was recently passed into law, makes it illegal to “knowingly or recklessly” share “deceptive audio or visual media” of a political candidate within 60 days of an election “with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

It is rather interesting that the bill is worded this way, considering that many other methods of deceiving people into voting for a particular candidate are perfectly legal. Deepfakes are being taken more seriously because of the threats they pose, which makes sense.

ISSUES OF FAKE NEWS AND THE FIRST AMENDMENT

Fake and deceptive news certainly isn’t a new phenomenon in politics. But as technological developments like deepfakes make it increasingly difficult to sort fake news from real, it is clear that something must be done. Marc Berman was quoted as saying, “I don’t want to wake up after the 2020 election, like we did in 2016, and say, ‘dang, we should have done more.’” This is interesting because, again, hardly any attention is paid to more obvious election threats such as Google and its search engine manipulation. Nevertheless, it is good that politicians are starting to understand that unregulated AI technology has the potential to wreak havoc.

This bill didn’t pass without a First Amendment fight. The American Civil Liberties Union of California and the California News Publishers Association opposed the bill on First Amendment grounds. At the Senate hearing, Whitney Prout, a staff attorney with the publishers’ association, warned that the law could discourage social media users from sharing any political content online, lest it turn out to be fake and they be held legally liable.

Supporters of the law argued that the First Amendment evolved in a pre-Internet world, and that because of how fast our world is changing, all of our laws need to be re-examined from time to time to make sure they still hold up. Generally speaking, this line of thinking is correct, and people need to be more careful about what they post on social media platforms anyway. There doesn’t seem to be an inherent danger of social media users being targeted for things they post, unless they willingly post deepfakes.

THE FUTURE OF REGULATING AI

The bill passed on the Senate floor and the Assembly floor with overwhelming support. This is just one state, and public opinion on matters such as these is highly varied, but the times are changing. When more people start seeing the damage that deepfakes can cause, support for reasonable regulations will only increase.

It is rather silly that the bill only makes it illegal to post deepfakes of a politician within 60 days of an election, but this is a step in the right direction. These very realistic fake videos pose a serious national security risk and must not be taken lightly. Also, the more widespread deepfakes become, the more we will see open-source AI programs created that can detect these fake videos and get them off social media platforms before they can cause harm.
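As a rough illustration of how such a detection program might work, here is a minimal sketch assuming PyTorch and torchvision are available; it reflects a common baseline approach rather than any specific project. The idea is to sample frames from a suspect video, score each frame with an image classifier fine-tuned on real versus fake faces, and average the scores. The weights file name and the 0.5 decision threshold are placeholders.

```python
# Illustrative frame-level detector sketch: average per-frame "fake" scores.
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single "fake" logit
# model.load_state_dict(torch.load("detector.pt"))    # hypothetical fine-tuned weights
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def score_frames(frames):
    """Return the mean probability that the sampled frames are fake."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])  # frames: PIL images
        probs = torch.sigmoid(model(batch)).squeeze(1)
    return probs.mean().item()

# Usage: frames would be face crops sampled from the video under review.
# flagged = score_frames(frames) > 0.5   # 0.5 threshold is a placeholder
```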
