

In the last two articles, I wrote about my time in Enterprise and in Ads. When we finished Part II, I was in Ads, helping out Google’s Owned and Operated Properties: YouTube in particular. We now resume with YouTube:
Ads-Serving Models
Most of the ads you see on YouTube now are associated with individual videos: the “pre-roll” ads for VRBO that come on before the video you wanted to watch, the ones that interrupt your video in the middle, and so on. I had nothing to do with those. My expertise, such as it was, lay in the ads that show up when you search for something.
Let’s go to YouTube now and search for “cats.” You won’t see any ads there. But in 2009, there were other videos on the sides, usually product ads. That’s where I came in. Showing an ad in response to a query is what AFS does (Ads For Search, a Google ads product for properties other than Google Search).
As I mentioned earlier, machine learning (often now called “Artificial Intelligence” in the press) was and is the basis of how Google chooses ads. The program “learns” which aspects of an ad and a query best predict if someone will click on it.
Note that I said “which aspects of an ad”: that includes every piece of data you can find about it, including the number of words, the image, its use of language, the advertiser’s history, the ad’s history if it’s old, and on and on. The machine learning algorithm just processes millions or billions of examples, labelled by human raters like my cousin, a topic I covered in the last episode. Anyhow, I got to thinking:
Since these ads are themselves YouTube videos, we have a lot of metrics on them. We know their quality rating, how long viewers stayed on them, how many views they got, and so forth. Surely we must be using that information to decide whether to show them as an ad. Right?
Wrong. None of that was being used. So I set to work constructing a new model for YouTube. I was now in AFS officially, reporting to a guy in Pittsburgh, where the Google office at the time was in a building adjacent to the Carnegie Mellon campus. Nearly everyone there was a Ph.D. from CMU, and I’d have to say they had very little interest in what I was doing.
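To make that concrete, here is a minimal sketch of the kind of model I was after: an ordinary click predictor whose feature vector is extended with the ad video’s own metrics. Everything here, the feature names, the training setup, the data, is my own illustration, not Google’s actual system.

```python
# Sketch: a click predictor whose features include the ad video's own
# metrics. All names and data are illustrative, not Google's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Classic ad/query features (all hypothetical):
word_count = rng.integers(3, 20, n)        # words in the ad
ad_age_days = rng.integers(0, 365, n)      # the ad's history, if it's old
advertiser_ctr = rng.uniform(0.0, 0.1, n)  # advertiser's historical CTR

# The new idea: the ad is itself a YouTube video, so fold in its metrics.
video_quality = rng.uniform(1.0, 5.0, n)   # human-rater quality score
watch_fraction = rng.uniform(0.0, 1.0, n)  # how long viewers stayed
log_views = rng.uniform(0.0, 15.0, n)      # log of the view count

X = np.column_stack([word_count, ad_age_days, advertiser_ctr,
                     video_quality, watch_fraction, log_views])
# Synthetic click labels, biased so better videos attract more clicks:
p = 1 / (1 + np.exp(-(30 * advertiser_ctr + 2 * watch_fraction
                      + 0.3 * video_quality - 3)))
clicked = rng.random(n) < p

model = LogisticRegression(max_iter=1000).fit(X, clicked)
print(dict(zip(["words", "age", "adv_ctr", "quality", "watch", "views"],
               model.coef_[0].round(3))))
```

The point of the sketch is just the shape of the thing: the viewership metrics go in as features alongside everything else, and the learner decides how much they matter.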
Incidentally, the Pittsburgh Google office is now in a converted Nabisco factory. It’s very cool. I actually walked on the cargo-net hammock.
Confession: I was constantly swinging for the fences here, as they say in baseball when you try too hard to hit a home run. I wanted to do something so great that I’d get promoted and become a legend, instead of just doing a job. This led me to some very bad decisions, and this was one of them. The “Convertiness” project, which I’ll get to in a second, was the other.
“Endorsements”
Ads ran a “demo days” program, where anyone could mock up a demo of how their favorite new idea would look. They had a special system that facilitated demos but wouldn’t work in production, called “AdsMonkey,” if I remember correctly.
At the time, Facebook was gaining users by the millions, and social networking was on everyone’s mind. This was 2010. So naturally, I started thinking about how “social networking” and ads were related. I came up with the idea of an “endorser,” i.e. someone who has expertise in some particular area, who endorses certain ads. As a user, you would signal that you trusted this endorser, and thus his endorsed ads would be displayed to you. He’d get paid for that, of course.
There was more to it than that, but if you think of Instagram influencers, that’s the idea. This led to two patent filings by Google, “Endorsements Used in Ranking Ads” and “Payment Model with Endorsements.” I thought that one of these had actually issued as a patent, but now they’re both listed as Abandoned. When you file a patent application, the lawyers rarely tell you what happens with it. Someone decided it wasn’t worth pursuing, and naturally didn’t tell me. Maybe one really was granted, but they elected not to pay the maintenance fees? Don’t know.
Convertiness
This was my 10-run home run swing that led to a strikeout. I should have listened to the conventional wisdom. It’s not always wrong.
(Disclaimer: most likely a lot of the details here have changed since 2010. This explanation may be a little hard to follow, sorry about that.)
Vocabulary: a “conversion” means a customer bought something or signed up for something. That’s the Holy Grail of advertising. If you know that someone bought the Thing you were selling, then your ad paid for itself (assuming you make more on the Thing than the ad cost). If the Google account rep can say, “look, of these 600 ad clicks we charged you for, 100 of them converted,” the ad buyer might be happy.
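To put hypothetical numbers on that happiness (all figures invented), the arithmetic the account rep is appealing to looks like this:

```python
# Illustrative numbers only: did the campaign pay for itself?
clicks = 600             # ad clicks Google charged for
cost_per_click = 1.50    # dollars charged per click
conversions = 100        # conversions the advertiser reported
margin_per_sale = 12.00  # advertiser's profit on each converted sale

ad_spend = clicks * cost_per_click            # $900.00
gross_profit = conversions * margin_per_sale  # $1200.00
print(f"spend ${ad_spend:.2f}, profit ${gross_profit:.2f}, "
      f"net ${gross_profit - ad_spend:+.2f}")
# -> spend $900.00, profit $1200.00, net $+300.00
```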
The problem for Google is that it doesn’t always know whether someone who clicked on the ad actually converted. Advertisers can report conversions, but they don’t have to, and many do not. Also, it’s one thing if the user goes to the ad landing page and eventually clicks “buy,” but what if they bought the item in a physical store later that week?
Secondly, believe it or not, Google does not know everything you do (I know, hard to believe, right?). Your click on the ad is logged, but once you’re on the advertiser’s site, your activity is invisible to Google. That means Google doesn’t know whether you ever reached the Buy page.
Toolbar
Cast your memory back to a time when doing a Google search required you to go to “google.com.” Nowadays, most browsers have search built in: you can type in a link or you can do a search (on your favorite search engine). But back then, the browser only let you type a link. Google Toolbar aimed to solve that.
It installed a search box (or “toolbar”) at the top of the browser. Here’s the interesting part, which seems almost inconceivable now: during the installation, it asked, “are you willing to send Google all your web traffic, to help us debug this product?” You could say “no” and Toolbar still worked; it was totally voluntary.
Amazingly enough, a fair number of people clicked “yes” to let Google see everything they were doing. This data wasn’t being used in any way, but I had an idea:
Use that data to learn the pattern when a user converts. That is:
Using the fact that an advertiser reported a conversion, learn which user behaviors best predicted it (call that “Convertiness”), and
if no conversion occurred, learn what predicts that. (A rough sketch of what this looks like follows.)
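In modern terms this is just a binary classifier over session features. Here is the sketch promised above, with invented feature names and synthetic data standing in for the Toolbar logs:

```python
# Sketch of the "Convertiness" learning problem. Feature names are
# invented and the data is synthetic; the labels stand in for
# advertiser-reported conversions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000

# Per-session behaviors extracted from Toolbar logs (all hypothetical):
pages_after_click = rng.poisson(4, n)     # pages viewed on advertiser site
seconds_on_site = rng.exponential(120, n)
returned_to_search = rng.random(n) < 0.4  # bounced straight back to results

X = np.column_stack([pages_after_click, seconds_on_site, returned_to_search])
# Synthetic labels that mimic the eventual finding: buyers wade through
# many pages before they convert.
p = 1 / (1 + np.exp(-(0.5 * pages_after_click - 2 * returned_to_search - 3)))
converted = rng.random(n) < p

# Standardize so coefficient sizes are comparable: the large ones are
# the "critical variables" a statistician would flag.
model = LogisticRegression().fit(StandardScaler().fit_transform(X), converted)
for name, coef in zip(["pages_after_click", "seconds_on_site",
                       "returned_to_search"], model.coef_[0]):
    print(f"{name:20s} {coef:+.2f}")
```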
If I wanted to be trendy and pretentious, I could say, “I used AI to understand web behavior.” But honestly, I just enlisted a statistician, Bill Heavlin, to help. There are statistical tools that identify the critical variables, the ones that best predict whatever you’re interested in, and that’s what he used. AI is not the magic answer to everything.
From the Toolbar logs, I extracted a large set of user actions, both for ad clicks that led to a conversion and for ad clicks that did not. Bill went to work on the data. Here’s what he found:
Users sometimes just want a piece of information. They search, and once they’ve found it, they leave and do something else.
Users who end up buying something go through many pages before they buy. Intuitively, they have to browse and compare, choose their product, provide their shipping address, their credit card, and all that.
So Convertiness really meant, in practice, “visited a lot of pages after the ad click.” We know that high Convertiness went along with a conversion whenever one was reported, so we can assume that if users were heavily engaged with the ad site, it’s likely they bought something. An ad with good Convertiness is probably producing conversions, or at the very least holding onto users instead of repelling them.
Even though the vast majority of traffic comes from users who don’t have Toolbar, or who do but don’t share the data with Google, you can still track Convertiness as a measure of ad quality. In fact, I did add it to the “good click model,” which looks at a vast array of measures to determine whether an ad click is likely to be a “good click.” I also added it to RASTA, so you could easily track the change in Convertiness for any experiment.
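In practice, then, the per-ad signal is simple enough to sketch in a few lines: average the post-click page count over whatever opted-in Toolbar sessions you do have. (The ad IDs and numbers below are illustrative, not the real pipeline.)

```python
# Sketch: reduce Convertiness to a per-ad score from the sessions we can
# actually observe. Ad IDs and page counts are made up.
from collections import defaultdict
from statistics import mean

# (ad_id, pages_visited_after_click) pairs from opted-in Toolbar sessions:
sessions = [("ad_1", 12), ("ad_1", 9), ("ad_2", 1),
            ("ad_2", 2), ("ad_2", 1), ("ad_3", 7)]

by_ad = defaultdict(list)
for ad_id, pages in sessions:
    by_ad[ad_id].append(pages)

# Per-ad Convertiness: mean pages visited after the click. This is the
# kind of number that could feed a "good click" model or an experiment
# dashboard, even though it covers only a sliver of total traffic.
convertiness = {ad_id: mean(pages) for ad_id, pages in by_ad.items()}
print(convertiness)  # ad_1 is "converty" (10.5); ad_2 repels users (~1.3)
```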
But How Much Data Is There?
When I explained this to Diane Tang and others in Ads, their objection was usually, “there isn’t enough data, and Toolbar is going away.”
They were right. I was too enamored of my genius to pay attention. Toolbar did go away. Even worse, in RASTA the change was deemed “not statistically significant” because there wasn’t enough traffic.
Incidentally, if you’re thinking here, “why couldn’t we get the Chrome browser to send us that same information, assuming the user opted in?”: that idea went nowhere because of privacy concerns. The lawyers wouldn’t hear of it.
The End in Ads
So, in 2010, I had Convertiness, which was going nowhere, although I refused to recognize that; and I had the YouTube model with some viewership metrics added in. I didn’t give the latter enough attention either, and the Pittsburgh group’s only question about it was, “when can you get rid of this?” It consumed about 1% as much space as their numerous models, which I thought made the request kinda petty, but whatever.
Anyhow, my manager in Mountain View gave all his attention to the SmartASS team (not unreasonable since they generate all the money!) and ignored my request for a meeting. I got miffed and transferred to Maps. I should have stayed in Ads. You can tell from these articles that I really liked the stuff.
In the next articles, I’ll talk about Maps, Patents, and the ending of my time at Google.