Thanks for meeting today and thanks ahead of time for being our Ask Me Anything guest March 27-30 – we’re excited to have you featured!
I’ve been wanting to interview you for our thought leader series. The strategies you and your team have built have been replicated by many in the industry. Thank you for sharing your best practices with our community of publishers.
Let’s start –
CH: What are some strategies you plan to implement in 2017 to continue your revenue growth in programmatic advertising?
ED: In terms of 2017 revenue optimization, we’re still not working with every partner we want to or plan to work with in the header. We are continually evaluating new opportunities to add unique demand. That said, I think the biggest thing we tried to do in 2016, and are still improving in 2017, is to better understand the impact of the header partners we have enabled and work to optimize for performance. For instance, building the capabilities to track and measure the impact on things like GPT fire time, ad render percentage, dynamic timeouts and lift.
We’ve developed ad tech that allows us to gain insight into a lot of things that were previously a black box, to better understand the impact on overall performance, not just revenue. This allows us as a team to get to a place where we can say, “All right, the partner in this setup is working well, not working well, or we need to dig further into that implementation to see if there’s something we can do to improve things.”
We wanted to take a step back and further understand the partners that we did have enabled, the actual lift they were driving individually and how they were working with one another so we could determine not only which are driving the most lift but also what additional levers, for example impact to latency, we could optimize. It’s slightly ironic but in a field that focuses on ad optimization it’s really challenging to get to a point where you can do true optimizations!
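The per-partner lift idea above can be sketched in code. This is a minimal illustration, not Chegg's actual implementation: it assumes auction logs that record, for each won impression, the winning partner, the winning CPM, and the runner-up CPM (all field names are hypothetical). A partner's incremental lift is then the revenue that would be lost if that partner were removed and the runner-up won instead.

```javascript
// Minimal sketch: estimate each header partner's incremental lift from
// auction logs. Field names (winner, winningCpm, runnerUpCpm) are
// assumptions for illustration, not a real log schema.
function incrementalLift(auctions) {
  const lift = {};
  for (const { winner, winningCpm, runnerUpCpm } of auctions) {
    // If this partner were removed, the runner-up would have won instead,
    // so the partner's incremental value is the gap between the two bids.
    lift[winner] = (lift[winner] || 0) + (winningCpm - runnerUpCpm);
  }
  return lift;
}
```

Dividing each partner's total by overall revenue gives the lift percentage the answer refers to; a partner that wins often but only narrowly beats the runner-up contributes little true lift.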
CH: Sounds like you did a lot of A/B testing to optimize the bids.
ED: I credit our engineering team and our yield team for all the headway we’ve made in A/B testing. Before we could A/B test, we needed to reach a point where we had the technology and infrastructure to support that setup and that’s something they drove.
It’s very challenging to accurately A/B test a programmatic ad stack. For publishers wondering how they’re going to continue to optimize in 2017 and beyond, investment in testing infrastructure is essential to understanding performance.
As you get deeper into header bidding, performance becomes much more complicated to understand because your lift percentage per partner goes down significantly. So it’s really important to get to a point where you can determine not just that the additional partner you added is returning revenue, but how much they have actually increased overall revenue versus just redistributing the existing pie.
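One common way to structure the kind of A/B test described above is a deterministic holdout: a fixed share of page views runs without the partner under test, and revenue per thousand page views (RPM) is compared across buckets. This is a generic sketch under that assumption, not the interviewee's infrastructure; the hash and parameter names are illustrative.

```javascript
// Minimal sketch of A/B testing one header partner: deterministically
// bucket page views so a fixed share of traffic runs without the partner,
// then compare revenue per thousand page views. Names are hypothetical.
function bucketForPartner(pageViewId, holdoutShare) {
  // Cheap deterministic hash so the same page view always lands in the
  // same bucket; a production setup would use a stronger hash.
  let h = 0;
  for (const ch of String(pageViewId)) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % 100 < holdoutShare * 100 ? "control" : "test";
}

function rpm(totalRevenue, pageViews) {
  // Revenue per thousand page views.
  return (totalRevenue / pageViews) * 1000;
}

function partnerLiftPct(testRevenue, testViews, controlRevenue, controlViews) {
  const testRpm = rpm(testRevenue, testViews);
  const controlRpm = rpm(controlRevenue, controlViews);
  return ((testRpm - controlRpm) / controlRpm) * 100;
}
```

Comparing RPM across buckets, rather than raw win rates, is what separates true incremental revenue from the pie-spreading effect the answer warns about.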
CH: Latency has been an issue in header bidding, causing fewer ads to render and leaving money on the table. How has your team addressed this?
ED: Haha, well again, this goes back to engineering and technology. We’ve put significant investment into growing our engineering team, and we really believe that engineering should have a dedicated role in ad ops, in addition to a traditional yield team. I think that’s essential if you’re serious about header bidding and serious about understanding the impact it can have, not only on revenue but also on your user experience and overall setup.
When we began to examine latency, we looked at the difference between page view growth and impression growth. As page views grow, you would hope that impressions grow to a similar level. By looking at these metrics, you can understand your impressions per page view, or ads rendered percentage, and see how adding or subtracting header partners is going to impact that. But, obviously, there’s a lot that can impact that final metric and it’s not a perfect one-to-one measurement but it’s a start.
To understand and then combat latency, we look at things like, “This is the impact to GPT fire time” or “This is the number of partners that have responded by the time your timeout has run.” Those sorts of things allowed us to say, “By adding this new partner, this is adding X amount of lift but an additional Y% to your GPT fire time.” So really, in order to better understand the impact of latency, we had to get to a point where we could understand how running header overall was impacting our ability to serve ads to the user.
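The two latency metrics described above reduce to simple arithmetic over counters you are already logging. The sketch below assumes you track impressions, page views, slots per page, and per-partner response times; the function and field names are illustrative, not a real schema.

```javascript
// Minimal sketch of the latency metrics described above: ads-rendered
// percentage (impressions relative to the slots available) and the share
// of partners that responded before the header timeout ran out.
function adsRenderedPct(impressions, pageViews, slotsPerPage) {
  // Of all the slots we could have filled, how many actually rendered?
  return (impressions / (pageViews * slotsPerPage)) * 100;
}

function partnersInTime(responses, timeoutMs) {
  // responses: [{ partner, responseMs }] for one auction (hypothetical shape).
  const inTime = responses.filter((r) => r.responseMs <= timeoutMs);
  return {
    count: inTime.length,
    pct: (inTime.length / responses.length) * 100,
  };
}
```

Watching the rendered percentage before and after enabling a partner is one way to put a number on the "lift versus added fire time" trade-off the answer describes.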
CH: What’s some new technology that you are excited about? Single bid architecture, HTTP/2, server-to-server, mobile video, in-view ads?
ED: All of those ideas are exciting developments, both for the industry as a whole and for publishers. That said, before we jump to the new, I’d like to stress what I think is key for publishers to understand: you want to get to a point where you are able to assess the true impact of your ad setup on overall performance, well beyond a DFP report. A lot of times as publishers, we think about what’s next, but what should come first is a holistic understanding of what we’re doing now, and whether or not that setup is, in fact, ideal.
Take header timeouts for example. Right now we have a static timeout, but we’re working on a machine learning model that will predict the appropriate timeout based on the time a user spends on a certain page within our website’s flow.
This allows us to not only provide a better user experience, but also make sure we’re maximizing our ad serving potential. A static timeout assumes behavior is the same for all pages and users and usually, that’s not the case.
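To make the dynamic-timeout idea concrete, here is a deliberately simple heuristic stand-in for the machine learning model described above: pages where users historically stay longer can afford a longer auction. The thresholds, share, and function names are all assumptions for illustration.

```javascript
// Minimal sketch of a dynamic header timeout. A simple heuristic stands
// in for the ML model: spend at most a small share of the expected time
// on page in the auction, clamped so short visits still get some demand
// and long visits don't wait forever. All numbers are illustrative.
function dynamicTimeoutMs(expectedTimeOnPageMs, opts = {}) {
  const { minMs = 400, maxMs = 1500, share = 0.02 } = opts;
  const raw = expectedTimeOnPageMs * share;
  return Math.min(maxMs, Math.max(minMs, Math.round(raw)));
}
```

A real model would replace `expectedTimeOnPageMs * share` with a per-page, per-user prediction, but the clamping logic stays: the static-timeout assumption that all pages behave the same is exactly what this removes.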
CH: Other than programmatic advertising, what other ways do you monetize your sites and why have you chosen these strategies?
ED: We’ve been experimenting with what to do when users are running ad blockers. The sites we run primarily provide a service, so we want to find a middle ground where users can come to our sites and continue to use our tools for free. But if they’re ad blocking, we’re not able to monetize, and monetization is what allows us to bring them the tools for free; that doesn’t work from a business perspective, so we have to figure out what that middle ground is. We’ve tested users opting into a Google Consumer Survey or watching a 30-second video in a pre-roll setup, which lets us keep the tool free for the user.
This again ties back to understanding the impact on users. In situations that are opportunities to add revenue, what we really want to be cautious of, and continue to have better information around, is the impact on the user experience. If we add a 30-second pre-roll at a certain point in the user session, it’s not enough to just measure the revenue; you have to understand what happens to the exit rate. Then you have to make the call: exit rate is probably going to spike, but at what point are we OK with that if revenue also spikes? It’s a tough balance, and we work closely with our product team, who will at times push back, on what the right balance is.
CH: Okay, last question, how do you work with your product teams to optimize your ad strategy?
ED: One thing that we’ve realized as an organization is that it’s important not to silo. For example, the more engineering can be involved in, or even drive, ad optimization decisions the better, because you have the intersection of divisions in the company and you can share goals, initiatives and challenges – that’s motivating to a team.
We take that same approach when we think about product. Product owns the user session; they run the funnel. One way we work with product is to figure out how to optimize our ads outside of pure ad initiatives like adding a header partner. An example is viewability. Product owns the placements and the page layout, but it was really important for us to explain to them why viewability is important – not just today, but why this work will be important for years to come. Working in advertising we get that, but we can’t assume it’s obvious to everyone.
We have an initiative to make all of our ad placements above 65% viewable by the end of the year. These are goals shared across teams, and we asked ourselves: what layout is going to work for the product team, the user flow and the ad team? That’s an opportunity to share goals and resources to achieve something in common.
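For readers new to the metric: the MRC/IAB standard counts a display ad as viewable when at least 50% of its pixels are in view for one continuous second, and a placement's viewability rate is the share of impressions meeting that bar. The sketch below computes the geometric piece for one snapshot; the rectangle shape and function names are assumptions, and a production setup would use the browser's IntersectionObserver rather than hand-rolled geometry.

```javascript
// Minimal sketch of measuring viewability for a placement: the fraction
// of the ad's area inside the viewport for one snapshot. Rectangles are
// { top, left, width, height } in pixels (hypothetical shape).
function visibleFraction(ad, viewport) {
  const overlapW =
    Math.min(ad.left + ad.width, viewport.left + viewport.width) -
    Math.max(ad.left, viewport.left);
  const overlapH =
    Math.min(ad.top + ad.height, viewport.top + viewport.height) -
    Math.max(ad.top, viewport.top);
  if (overlapW <= 0 || overlapH <= 0) return 0;
  return (overlapW * overlapH) / (ad.width * ad.height);
}

function placementViewabilityPct(fractions) {
  // Share of measured impressions where at least half the ad was in view.
  const viewable = fractions.filter((f) => f >= 0.5).length;
  return (viewable / fractions.length) * 100;
}
```

A "65% viewable" goal then means `placementViewabilityPct` for that slot should exceed 65, which is what makes it a number a layout change by the product team can move.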
The more you can tie goals together between engineering, ads and product, instead of siloing them, the more each team can understand the goals and initiatives of the different divisions of the company. We’ve found that extremely successful, and it also just makes for a better place to work because it fosters understanding and teamwork.
CH: Thank you Emry for sharing the key successes and strategies your team has put in place – and continuing to develop. I’m looking forward to the questions our community will ask you in the March 27-30 Ask Me Anything.
Emry serves as the Vice President of Advertising for Chegg.com. Emry joined Chegg through its acquisition of ImagineEasy Solutions, where he had been the company’s first hire and a founding member of StudyBreak Media. StudyBreak Media focuses on advertising optimization, utilizing custom ad tech, data science, analytics and engineering to maximize the value of its inventory. StudyBreak Media was an early adopter of header bidding and strives to be at the vanguard of programmatic technology.