A Conversion Conversation with Circle Media’s Eddie Aguilar
The importance of gaining trust
When it comes to optimization, it’s easy to look at the metrics with blinders on. However, there is a great deal of value and opportunity to be had by looking at the larger picture — not only with regards to customers but within organizations as well. I recently spoke to Eddie about how he takes a very human-centric approach to optimization, how he connected thousands of Experimenters, as well as the importance of gaining trust with stakeholders.
Rommil: Hi Eddie, thanks for taking the time to chat today! How are you?
Eddie: Glad to be chatting with you too! Luckily, I’m doing well despite the global pandemic.
That’s good to hear!
You’ve been testing for a long time. How about we start things off with you sharing a bit about yourself, a bit of your career journey?
I’m Eddie, a 30-year-old optimizer living in Portland, OR, originally from Miami, FL. My personal hobbies include hiking, biking, surfing, skateboarding, programming, gaming, and reading.
My optimization career started when I helped manage one of the largest StarCraft eSports websites, WGTour, in 2004 — I was 14. I considered myself a PHP developer at the time and knew there had to be ways to build better digital experiences on the frontend. I then learned about A/B testing from others who also managed WGTour. Some of them now build programs at Microsoft, Google, Spotify, Criteo, and so on.
Any career move I made after WGTour always included optimizing. However, I can say my biggest growth was when I joined the startup FunnelEnvy. I was FunnelEnvy’s first optimizer, and the founder’s growth knowledge about testing was light years ahead of my own. It would eventually lead me to go on and build multiple testing programs across many amazing brands that all reaped the benefits of experimenting.
Currently, I lead and manage the experimenting program for Circle Media, a category leader in screen time management and parental controls.
Very cool. Beyond all that, I’ve read you’ve co-founded CROtricks. Could you tell us a bit more about CROtricks and the inspiration behind it?
Do you remember IRC Freenode? The developer community?
Not really. But I’d love to hear about it.
When I started CROtricks, I didn’t see a community resembling that IRC culture, one that could help marketers get started with digital experimentation just by chatting with another expert.
My optimizer friend, Sarunas Strolia, and I decided to do something about it, so we created a Slack community for marketers looking to learn more about conversion optimization and how to get started. That community grew from 0 to 1,000+ in less than a year.
Wow, that’s amazing!
Now, there are Slack groups such as Measure Slack with over 10,000 marketers, most of whom are extremely well versed in experimentation and are available to help when the right questions are asked.
Let’s talk about optimization. What’s your personal approach to optimization? How do you decide where to start?
While each business is different, I like to take my time learning the business, its problems, and its goals. While analyzing consumer business data is a necessary part of optimizing, I have personally found surveying employees about the actual pains they run into to be a much better place to start.
That’s very interesting. Can you give us an example?
An example is discovering issues within the sales or customer support departments that can easily be tested while producing value to the business as a whole. Not only does this allow the potential consumer to have the information they need in advance, it also gives sales a better chance at closing.
Where I find a lot of experimenters fall short when they first start optimizing is forgetting that we’re all human. Data can only give you so much. Learning where the data came from and how it came to be gives you a much more holistic view of the company and of where you should start your optimization program.
The end goal of my approach is to have a positive impact across the business, not just its digital experience.
“I have personally found surveying employees about the actual pains they run into a much better place to start.”
How do you convince stakeholders who are hesitant to run Experiments to trust in that process?
I think most stakeholders who have never experimented before are more terrified of the risks involved than of the actual process. They’re scared of losing revenue, or scared that the data is being misinterpreted — typically called risk aversion.
However, it’s always good to discuss with stakeholders that the risks are always going to be present with or without experimenting, what their risk tolerance is, and how optimization can ultimately reduce uncertainty.
You and your team will never know if a certain strategy will produce value without actually testing it. How else did we land on the moon?
“I think most stakeholders who never experimented before are more terrified of the risks involved than the actual process.”
100%. Those concerns are constant. What else is constant is that senior leaders will ask, “How will this impact our annual revenue?” How do you respond to that? And do you have advice for others facing tough questions like this?
Based on the statistical approach to the experiment, your results should already help answer this question. I personally like having constraints on my experiments, constraints you can actually measure and explain; this helps form a constructive answer. One other thing I like to do in order to build a better experimentation culture with leadership is to rerun the winner against the loser to validate the experiment once more. But be aware that you would also need to account for seasonality, your iterative testing, and promo periods.
If you don’t know how to answer using the statistical approach, I suggest learning more about what causal inference is and how it drives your A/B experiment. Google’s study on the causal impact of intervening is a great read. It will show you how to strengthen your answer to “How will this impact our annual revenue?”, especially if you’re using Google Optimize as your experimentation tool.
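To make the statistical side of this concrete, here is a minimal sketch of a Bayesian A/B comparison of the kind referenced in this conversation. It is not Eddie’s or Google Optimize’s implementation; it assumes a uniform Beta(1, 1) prior on each variant’s conversion rate, and the conversion counts are hypothetical numbers invented for illustration.

```python
import numpy as np

def bayesian_ab_test(conv_a, n_a, conv_b, n_b, samples=100_000, seed=0):
    """Estimate P(variant B beats variant A) by sampling Beta posteriors.

    Assumes a Beta(1, 1) (uniform) prior on each variant's
    conversion rate -- a common, uninformative default.
    """
    rng = np.random.default_rng(seed)
    # Posterior for a binomial rate with a Beta(1, 1) prior is
    # Beta(1 + conversions, 1 + non-conversions).
    post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, samples)
    post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, samples)
    # Fraction of draws where B's rate exceeds A's.
    return float((post_b > post_a).mean())

# Hypothetical data: A converts 120/2400, B converts 150/2400.
p = bayesian_ab_test(120, 2400, 150, 2400)
print(f"P(B > A) = {p:.2f}")
```

A result near 1.0 means the posterior strongly favours B; near 0.5 means the data cannot distinguish the variants, which is exactly the kind of measurable constraint that helps answer a revenue question honestly.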
What’s your opinion on how to staff Experimentation? What roles do you need to fill to succeed?
Booking.com, in my personal opinion, has one of the best approaches to staffing experimentation: one strategist, one developer, one designer, and one copywriter. That’s the dream team for optimization, solely focused on building better digital experiences.
You shouldn’t stress about assembling a full team if you’re just starting with experimentation. You don’t need all of these roles to succeed at optimization; however, you definitely need at least one person who understands all of them and can perform parts of these roles until you can scale your team effectively. At the very least, it’s helpful to have resources outside of your program that can provide support.
“…one strategist, one developer, one designer, and one copywriter.”
How do you describe the perfect Experimentation culture?
Open to the process of experimenting, learning how to grow by accepting “failure” as a possibility, finding better solutions by iterative testing, taking risks and setting attainable goals.
Do you have any advice for those looking to get into this field?
Learn the statistical approaches behind experimentation, and learn what it takes to develop these experiments.
If you’re doing website optimization, the architecture of the website is something to always think about. Not all websites are built the same. Each one has its nuances and issues that can be solved and improved.
Think about all users, not just users in specific buckets. Sometimes optimizing is literally fixing an accessibility issue that can impact millions. You wouldn’t believe how many design/development teams I’ve seen proceed with a UI without thinking about users who are colour blind. I personally learned this from co-workers who are colour blind, and I’ve never stopped thinking about accessibility since.
Having seen so many Experiments, could you describe your favourite experiment from recent history?
I guess recent is relative? I ran one experiment within a Marvel video game around in-game purchases. The design was very stale; nothing indicated which item was being purchased the most or which had the best value based on the sale. By revamping the design to showcase the “featured” items, we were able to increase in-game purchases by 367%.
Very cool. Congrats on that!
Finally, it’s time for the Lightning round!
Frequentist or Bayesian?
Both have their places, but I prefer Bayesian for A/B testing.
If you couldn’t be in Experimentation what would you do?
Chemistry or Biology, so I would still somehow be involved in experimentation.
That’s a bit of a cheat, but since you’re a nice guy I’ll let that slide lol
What is your biggest Experimentation pet-peeve?
I try not to have a pet-peeve. It’s not helpful, and we’re all human; things happen. You just need to always try your best and shoot for the best outcomes. This is where being redundant with your process and QA helps rule out outliers and misinterpreted results while reducing pet-peeves :)
Finally, describe Eddie in 5 words or less.
These are my personal core values, and I think they describe me well and what I always try to shoot for.
Ambitious, Calm, Helpful, Inspired, and Patient.
Amazing. Eddie, thank you for joining the Conversation!
Thank you for having me, and hopefully, someone finds this useful.
If you enjoyed this conversation, here are others you might like:
A Panel Conversation on A/B Testing Statistics with Chris Stucchio, John Meakin, and Georgi…
Learn from Experimentation’s statistics leaders
For the rest of our conversations with Experimenters from around the world, visit Experiment Nation.