Can you trust your eyes?
Back in 2014 and 2015, the term ‘fake news’ wasn’t really a “thing” — it took off in 2016.
A lot of the reporting on the phenomenon says it was the advent of the Trump candidacy that gave the term a rocket boost.
It was mid-2016, and Buzzfeed’s media editor, Craig Silverman, noticed a funny stream of completely made-up stories that seemed to originate from one small Eastern European town. “We ended up finding a small cluster of news websites all registered in the same town in Macedonia called Veles,” Silverman recalls.
He and a colleague started to investigate, and shortly before the US election they identified at least 140 fake news websites which were pulling in huge numbers on Facebook. The young people in Veles may or may not have had much interest in American politics, but because of the money to be made via Facebook advertising, they wanted their fiction to travel widely on social media. The US presidential election — and specifically Donald Trump — was (and of course still is) a very hot topic on social media.
President-elect Trump took up the phrase the following month, in January 2017, a little over a week before taking office. In response to a question, he said “you’re fake news” to CNN reporter Jim Acosta. Around the same time he started repeating the phrase on Twitter.
Trump wasn’t the first to use the term, and misinformation, spin, lies and deceit have of course been around forever.
But I believe things are going to get worse. Way, way worse.
Deep fakes, apps that fake your age, and AI that can put words in other people’s mouths.
If you’ve not heard of deep fakes, you might be amazed at how good they are getting.
Then you have apps like FaceApp that are, quite honestly, astonishing; the quality of the outputs is incredible (and funny!).
All of this is to say that we, as a society, are already struggling with fake news. We don’t have the tools to discern truth from fiction, and that’s just with the written word, let alone the images and videos we can watch with our own eyes.
I wrote an article about an idea to combat fake news, and jumped at the chance to install NewsGuard — but it’s not enough to protect people. I have to admit, the issue then becomes: who decides what should be trusted and what isn’t trustworthy? We don’t always know individuals’ or groups’ intentions and agendas. And discerning fact from fiction can take weeks, months, or years — and sometimes it is actually just impossible.
This is going to be a problem; it might very well already be one, and we just don’t know it.
I love tech; I always have, and I believe I always will.
I don’t like it when a new technology comes out and people pour scorn on it, listing all the things that could be done with it with ill intent.
I am old enough to remember all the warnings about the Internet, how damaging it was going to be, and how it should be stopped. I think it is pretty much undeniable that the Internet has been a force for both good and bad. Personally, I think the good of the Internet outweighs the bad.
However, beyond fun and entertainment, I am seriously concerned that the technology now in place and emerging around face and voice faking will mean that not only can’t we trust the news articles we read, we also won’t be able to trust the images and videos we view.
Fake accounts are already being created in huge numbers, but so far they have been traceable because they rely on stolen imagery. Right now they could instead be created using FaceApp (or a similar app), so that new identities and profiles can be built that are photorealistic and hard to detect — e.g. you can’t reverse image search a FaceApp picture, because the original won’t exist.
Even if we start building tools right now to detect the digital signatures of these fake images and videos (like http://imageedited.com/, for example), I fear it will be too little, too late — the content will proliferate before anyone has a chance to check it.
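To give a rough feel for what reverse image search does under the hood — and why a wholly generated face defeats it — here is a minimal sketch of a “difference hash”, one common perceptual-hashing technique. (This is an illustration using only Python’s standard library with made-up pixel data, not the algorithm any particular service uses.) Two images with nearly identical content produce hashes only a few bits apart, so a lightly edited copy can still be matched against its indexed original; a freshly generated image simply matches nothing.

```python
# Minimal "difference hash" (dHash) sketch, standard library only.
# A real pipeline would decode and resize an actual image file; here the
# "images" are hypothetical 9x8 grayscale grids (rows of 0-255 values).

def dhash(pixels):
    """Hash a 9x8 grayscale grid into 64 bits: one bit per pair of
    horizontally adjacent pixels (1 if the left pixel is brighter)."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):  # 8 pairs per 9-pixel row
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")

# A hypothetical "original" image and a lightly edited copy of it.
original = [[(x * 29 + y * 17) % 256 for x in range(9)] for y in range(8)]
edited = [row[:] for row in original]
edited[3][4] = min(255, edited[3][4] + 40)  # brighten one pixel

# Near-duplicates land a few bits apart; unrelated images average ~32 apart.
print(hamming(dhash(original), dhash(edited)))
```

The point is that this kind of matching only works when an original exists somewhere to match against — which is exactly what a generated face takes away.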
Take this xkcd cartoon:
Now apply that to a video on a news report, presented as though the president, prime minister, or another leading figure said it.
Because let’s not forget, it’s not just petty criminals using this sort of tech to push an agenda; it’s also rival political parties, major publications (for commercial gain), corporate spies, and so on.
Do we just roll over and accept that we can’t trust anything anymore and use skepticism as a tool to distrust everything? That feels like a pretty horrible way of life.
Can we build tools to stop it? Can we trust the technology to exist — is it doing enough to push society forward, or can it even be stopped?
I believe a few individuals were going to open source their deep fake tech, but had a change of heart and shut it down when they realised what they’d be unleashing on the world if it were open to everyone. (See: https://www.vice.com/en_us/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman)
Should we build more governance and regulators? I’m hopeful something can be done, but fearful we’ll all just have to bring skepticism to the fore.
I don’t like it, but I see no other real choice.
For now, I will laugh at the funny viral deep fakes and fake apps.
Many people will say the tech is harmless — a bit of fun — but inside I am very scared about what it could mean for our future.