The YouTube Balancing Act

Users, YouTubers and Advertisers

With 300 hours of video uploaded every minute and almost 5 billion videos watched every day – YouTube is a behemoth to manage. Here Eddy, our PPC Manager, gives a short history of the platform and looks at how it can avoid another Adpocalypse while balancing the needs of its users.

In the beginning, three geeks from PayPal said, “Let there be a video hosting site” and there was a video hosting site. The geeks called it ‘YouTube’, and the geeks saw that it was good. But the fledgling platform’s growth was unprecedented, becoming the fastest growing website of 2006.

This growth did not go unnoticed, for deep in the land of Mountain View, in the fires of Alphabet HQ, lives a beast that never sleeps. Its name is ‘Google’ and its great eye is ever watchful. It saw YouTube with its cat videos and sensed a potential revenue source.

YouTube had dabbled in advertising since its early days, but its early attempts were easy-mode compared to what Google had in store, and advertisers were in for a treat. In the years that followed the search engine giant’s acquisition of YouTube in October 2006, it rolled out InVideo ads, a Partner Program, an insight and analytics tool, un-skippable pre-roll ads (much to everyone’s chagrin), and seven alternate ad formats.

A cavalcade of options became available and with over 1.8 billion users every month, YouTube proved an enticing prospect for advertisers looking to invest their ad-bucks.

Advertising on YouTube works like this: people upload their videos to YouTube, and some of those videos gain traction and generate a lot of views. Some uploaders become so good at it that they consistently receive a high number of views, and they become content creators. YouTube notices this and begins to show ads on the content creators’ videos, with the creators earning a cut of the ad revenue – their videos are now ‘monetised’. Content creators earn enough to make a living from creating content, so they leave their day jobs and create content full time. So, advertisers gave their money to Google, 1.8 billion people watched the ads every month, sales increased, content creators got their cut, and everyone lived happily ever after. The End.

…Except it wasn’t.

In February 2017, The Times found that YouTube were placing ads for Disney, Pepsi, GSK, Johnson & Johnson, and the UK Government next to Hezbollah recruitment videos and other extremist content. Not long after this, The Wall Street Journal questioned advertisers, asking if they were quite happy having their brands associated with blatantly anti-Semitic and racist content. The tried-and-true business tactic of ‘throw money at this problem until it goes away’ wasn’t going to work; instead, ‘let’s get the hell out of Dodge’ – another tried-and-true business tactic – was implemented. One after another, big-name brands abandoned YouTube and took their ad-bucks with them.

A perfect s**tstorm was brewing.

The ramifications of this mass exodus were immediate. $750 million was wiped from Google’s income, but content creators who made their living from YouTube ad revenue were the worst affected, with some losing around 80% of their income overnight. YouTuber Felix Kjellberg – better known as PewDiePie – dubbed the fiasco the ‘Adpocalypse’.

But why was the Adpocalypse allowed to happen? YouTube has a strict upload policy, so how did videos filled with brazen hate speech find their way onto the site in the first place, let alone be accompanied by ads? To put this into context, the numbers that YouTube deals with are staggering: 30 million unique visitors watch 5 billion videos daily, while 300 hours of video are uploaded every minute. Running costs aside, with so much content uploaded and consumed, it’s simply not possible to manually review every single video; instead, an algorithm does a lot of the heavy lifting. But the thing with algorithms is that while they are very clever, they are, at the same time, very stupid.

YouTube had to restore advertiser confidence, and fast. It unleashed its self-teaching algorithms onto the platform, scouring all content on YouTube, including thumbnails, descriptions, and titles. The algorithms segmented the videos by topic and content and, where applicable, flagged videos that met one of nine criteria considered sensitive in nature: controversial issues and sensitive events, drugs and dangerous products or substances, harmful or dangerous acts, hateful content, inappropriate language, inappropriate use of family entertainment characters, incendiary and demeaning, sexually suggestive, and violence. If the algorithms decided a video didn’t fall under any of those criteria, the video was deemed ‘Ad Friendly’ and was monetised. If they decided a video met even just one criterion, it was considered ‘Non-Ad Friendly’ and was demonetised. This whole process made it incredibly easy for advertisers to pick and choose what content they wanted their ads to run on. Pretty clever, right?
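The decision logic described above is all-or-nothing: a single flagged criterion demonetises the whole video. Here’s a minimal sketch of that logic – the nine criteria names come from YouTube’s published list, but the function and data structures are invented purely for illustration, not YouTube’s actual implementation:

```python
# Hypothetical sketch of the all-or-nothing monetisation decision --
# not YouTube's real code, just the logic described in this post.

SENSITIVE_CRITERIA = {
    "controversial issues and sensitive events",
    "drugs and dangerous products or substances",
    "harmful or dangerous acts",
    "hateful content",
    "inappropriate language",
    "inappropriate use of family entertainment characters",
    "incendiary and demeaning",
    "sexually suggestive",
    "violence",
}

def classify(detected_labels):
    """Return 'Ad Friendly' only if NO sensitive criterion was detected."""
    if SENSITIVE_CRITERIA & set(detected_labels):
        return "Non-Ad Friendly"  # even one match demonetises the video
    return "Ad Friendly"

print(classify(["cats", "comedy"]))      # Ad Friendly
print(classify(["comedy", "violence"]))  # Non-Ad Friendly
```

Note there’s no severity score or weighting in this model: a fleeting match counts exactly the same as a video built entirely around the flagged topic.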

Except that in its haste to appease advertisers and entice them back to the platform, YouTube used technology that, well, just didn’t work properly. The algorithms took content at face value and didn’t consider context or nuance, and there is evidence to suggest that demonetised videos were also suppressed, hidden from recommended feeds if the algorithms thought a video featured content that advertisers might take umbrage at. In one instance, a video dissuading people from committing suicide was demonetised and suppressed in this manner. The algorithm detected the word “suicide”, equated it with inappropriate content, demonetised the video, and hid it from the people who needed to see it the most.
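That failure mode is easy to reproduce with any context-blind keyword matcher. The toy filter below – the word list and function are invented for illustration, and YouTube’s real classifier is far more sophisticated – shows how a suicide-prevention video trips the same wire as harmful content:

```python
# Toy context-blind filter: flags a video if any trigger word appears
# in its title or description, regardless of the video's intent.

TRIGGER_WORDS = {"suicide", "drugs", "violence"}

def is_demonetised(title, description):
    words = f"{title} {description}".lower().split()
    return any(trigger in words for trigger in TRIGGER_WORDS)

# A suicide-prevention video trips the filter just like harmful content:
print(is_demonetised("Talking someone out of suicide",
                     "A crisis-line volunteer shares what helped"))  # True

print(is_demonetised("Cute cat compilation", "Ten minutes of cats"))  # False
```

The filter has no way to distinguish a video *about* suicide prevention from one promoting self-harm – which is precisely the context-blindness described above.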

In the past year things have been steadily improving, but it looks likely that the s**tstorm will exist in a state of perpetuity – at least until Google’s AI is clever enough to read between the lines and understand the complexity of human communication.



Post by

Edward Marlin