The best way to describe Direct Persuasion: we don’t accept the status quo. We operate in an environment where we are taught to question everything, re-evaluate benchmarks, and compete internally for the best possible results. We are constantly reallocating budget across media platforms, creative concepts, and even client Facebook pages to achieve the best performance.
We evaluate our benchmarks weekly. Using Datorama, our internal reporting platform, we review each team’s and channel’s results. With multiple platforms, comparing performance isn’t easy: reviewing one metric alone won’t give a fair result. Video completion rate, for example, is a flawed way to compare Facebook against CTV, since CTV completions will nearly always be 100% and look more favorable. Therefore, we built a proprietary quality score that lets us compare and rank results fairly by channel for our clients. This way, when we say things like “this is doing very well,” we have context and evidence of success, or know exactly when and where to improve. Saying “this is crushing” or “we need to cut the budget, performance is terrible” doesn’t cut it. We use evidence and metrics to tell the story.
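The quality score itself is proprietary, but the core idea (judging each channel against its own benchmarks rather than comparing raw metrics head-to-head) can be sketched roughly as follows. The metric names, benchmark values, and weights below are illustrative assumptions, not the agency’s actual formula:

```python
# Hypothetical sketch of a cross-channel quality score. Each metric is
# scored relative to its OWN channel's benchmark, so a raw 100% VCR on
# CTV no longer dwarfs Facebook's naturally lower completion rates.
CHANNEL_BENCHMARKS = {
    "facebook": {"vcr": 0.15, "ctr": 0.010, "viewability": 0.60},
    "ctv":      {"vcr": 0.97, "ctr": 0.001, "viewability": 0.95},
}

# Illustrative weights; a real scorecard would tune these per client goal.
WEIGHTS = {"vcr": 0.5, "ctr": 0.3, "viewability": 0.2}

def quality_score(channel: str, observed: dict) -> float:
    """Weighted score of observed metrics vs. channel benchmarks (1.0 = on benchmark)."""
    bench = CHANNEL_BENCHMARKS[channel]
    score = sum(w * (observed[m] / bench[m]) for m, w in WEIGHTS.items())
    return round(score, 2)

fb = quality_score("facebook", {"vcr": 0.18, "ctr": 0.012, "viewability": 0.66})
tv = quality_score("ctv", {"vcr": 0.98, "ctr": 0.001, "viewability": 0.95})
# fb -> 1.18 (well above its benchmarks), tv -> 1.01 (roughly on benchmark),
# even though CTV's raw completion rate is far higher than Facebook's.
```

With both channels on the same 1.0-centered scale, “this is doing very well” becomes a ranked, comparable claim instead of a gut feeling.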
To ensure benchmarks are challenged each week and the teams aren’t grading their own homework, we challenge the “pros” every so often. A rookie team (buyers less familiar with the buying tactic or platform) is given a set of geographies to focus on; the “pro” team, who work these platforms every day, is given the rest. We monitor results daily. To everyone’s surprise, there have been quite a few instances where the rookie team won. This lights a much-needed fire under every buyer and allows us to keep redefining “good results.”
Competition forces teams to think outside the box and try different bidding strategies. Both the programmatic team at DP, through DV360, and the Google team, through AdWords, could access YouTube inventory. Our internal DV360 team found that max cost-per-view bidding on skippable inventory drove roughly 10% higher completion rates than CPM/max-lift bidding, a strategy the internal AdWords team had not yet tested and one that neither our Google representatives nor Google’s published best practices recommended.
This bake-off mentality has continued across platforms, especially in programmatic. We have dozens of tools at our disposal that access the “same” inventory. Every platform has some proprietary tool that supposedly makes accessing that inventory “better,” but at the end of the day, we weren’t taking a company rep’s word for it. We decided to prove it for ourselves.
We had access to programmatic inventory through three platforms, all demand-side platforms (DSPs) with access to ad units across the internet. Buyer 1, while excellent for CTV buying and advanced retargeting features, had weak reporting: UTC-only timestamps and 24-hour delays. Buyer 2 offered real-time reporting and excellent customer service, but its pixel structure was rudimentary and it had no brand-lift products. Finally, Buyer 3 had first-in-the-industry tools that competitors have mimicked since, but it banned certain types of targeting, leaving us with a less efficient buying experience.
Once we decided which platforms were best for buying which inventory, we wanted to discover who was best at buying it.
We held a bake-off between agencies to determine the effectiveness of programmatic persuasion buys across pre-roll and CTV inventory. Agencies could use as many DSPs as they felt necessary, as long as they stayed within the targeting parameters. Results were measured by a weighted score based on app and device diversity, persuasion metrics (VCR, AVOC, fill frequency, CTR, viewability), and first- to third-party data ratios. Direct Persuasion outperformed both Buyer 1 and Buyer 2, scoring 21% higher on the overall weighted metric than the next best agency.
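The exact weighting behind that composite score wasn’t published, but a bake-off scorecard of this shape could be sketched like this; the component weights, field names, and normalization below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class BakeoffEntry:
    """One agency's normalized bake-off results (all components scaled 0-1)."""
    name: str
    diversity: float          # app and device diversity
    persuasion: float         # blend of VCR, AVOC, fill frequency, CTR, viewability
    first_party_ratio: float  # share of delivery driven by first-party data

# Illustrative weights; a real scorecard would be tuned to the campaign goal.
WEIGHTS = {"diversity": 0.25, "persuasion": 0.50, "first_party_ratio": 0.25}

def weighted_score(e: BakeoffEntry) -> float:
    """Collapse the three normalized components into one comparable number."""
    return round(
        WEIGHTS["diversity"] * e.diversity
        + WEIGHTS["persuasion"] * e.persuasion
        + WEIGHTS["first_party_ratio"] * e.first_party_ratio,
        3,
    )

def rank(entries: list[BakeoffEntry]) -> list[tuple[str, float]]:
    """Rank bake-off entries by composite score, best first."""
    return sorted(((e.name, weighted_score(e)) for e in entries),
                  key=lambda pair: pair[1], reverse=True)
```

Collapsing diverse metrics into one weighted number is what makes a multi-agency bake-off decidable at all: every entrant is graded on the same scale, and a claim like “21% higher than the next best agency” falls straight out of the ranked scores.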
And finally, competition among individuals. We had an army of junior analysts whom we trained very quickly. All of them were driven and eager to keep learning. The best way to challenge them was through ownership. We assigned one analyst specific campaigns, geographies, creatives, or audiences to focus on, and assigned the remainder to another. Each had to get the best results possible, using the other’s results as the bar to beat.
We can brag all day about fancy tools or innovative bidding strategies, but at the end of the day, competition always fuels our optimization machine, and the machine delivers killer results.