How Alex Ruban Cuts CPI by 40%: The Creative Testing Framework Every UA Team Misses
Alex Ruban has spent years decoding the chaos of modern mobile growth at companies including Product Madness, Wargaming, and Pixonic. That experience has made him equal parts data scientist, creative whisperer, and UA firefighter. What follows is his playbook: sharp, tested, and built inside some of the most competitive gaming markets.
How did you blend programmatic precision with influencer authenticity - what metrics decide when an influencer is truly incremental to paid UA?
My process blends creator authenticity with programmatic rigor:
Launch & instrument
- Activate many creators within 1–3 days to create a visible spike in install volume.
- Give each creator a tracking link + QR code; add in-game bonuses and simple contests to push tracking-link clicks (not just views).
- Focus on YouTube, arguably the best platform for mobile gaming campaigns.
- Build forecasting models based on historical data and channel affinity.
Measure & allocate
- Baseline installs/revenue by geo/platform before the burst.
- During the burst, capture tracked clicks/installs and organic lift.
- Allocate organic uplift to creators proportionally to tracked activity.
- Incremental installs = burst installs − baseline trend; same for revenue (a quick sketch of this math follows the list).
Decide incrementality (pass/fail)
- ROAS / Payback: (Tracked + allocated organic revenue) ÷ creator cost
- Quality: D1/D7 retention, payer rate, ARPPU, LTV curve vs. broad UA cohorts.
- Funnel: Link/QR CTR, click→install CVR, IPM-equivalent.
Scale rules
- Scale creators with predictable views (use the 30–60-day channel median) and price deals against a target ROAS.
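To make the framework concrete, here is a minimal Python sketch of the measure-and-allocate math described above. All volumes, creator names, revenue figures and costs are hypothetical placeholders, not numbers from Alex's campaigns.

```python
# Minimal sketch of the burst incrementality math described above.
# All numbers and creator names are hypothetical.

baseline_daily_installs = 4_000          # pre-burst trend for this geo/platform
burst_days = 3
burst_total_installs = 19_500            # observed installs during the creator burst

# Incremental installs = burst installs - baseline trend over the same window
incremental_installs = burst_total_installs - baseline_daily_installs * burst_days

# Tracked activity per creator (link/QR installs) drives the organic allocation
tracked = {"creator_a": 2_400, "creator_b": 1_100, "creator_c": 500}
tracked_total = sum(tracked.values())
organic_lift = incremental_installs - tracked_total

# Allocate organic uplift proportionally to tracked installs
allocated = {
    name: installs + organic_lift * installs / tracked_total
    for name, installs in tracked.items()
}

# Pass/fail on ROAS: (tracked + allocated organic revenue) / creator cost
revenue_per_install = 1.8                # blended early revenue assumption
creator_cost = {"creator_a": 6_000, "creator_b": 3_500, "creator_c": 2_000}

for name, installs in allocated.items():
    roas = installs * revenue_per_install / creator_cost[name]
    print(f"{name}: ~{installs:,.0f} installs, ROAS {roas:.2f}")
```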
How did your focus evolve from channel execution to holistic growth - what frameworks now guide how you prioritise channels or experiments?
In my early UA years I focused on getting hands-on with every channel: ad networks, SRNs, DSPs, influencers, incentivized traffic and pre-loads. Once I understood how each one behaved, I became confident planning full marketing budgets and mapping out growth trajectories. Operating channels end to end teaches you a lot: you run campaigns, analyse performance, order and test creatives, and forecast LTV. I was lucky to experience all of that very early, which helped me see how different parts of the game business connect to user acquisition.
That’s when my focus naturally shifted from pure execution to a more holistic view of growth. Today I prioritise channels where I already know the likely performance pattern for the game I’m scaling. Around that, I run experiments with audience targeting, target ROAS and creative variations. My general rule is simple: keep at least 10% of spend dedicated to testing.
This keeps the core engine stable while constantly pushing for new insights and extra efficiency.
You’ve cut CPI by 40% before. What’s your current creative testing process: iteration pace, signal thresholds, and when do you kill or scale a concept?
The core idea never changes: show the right message to the right audience. Everything else is execution. My process is built around testing as much as possible and collecting learnings, but the tricky part is understanding what “good” looks like on each network. Every ad network serves differently, so the same concept will show different CTR, CVR and IPM on Google, AppLovin or Unity.
My framework is simple: get the maximum performance out of the chain Concept → Channel → Placement → Format. Placement is heavily tied to audience motivation. For example, interstitials behave differently because they interrupt the flow, while rewarded placements convert better since players expect value. I adjust concepts based on that motivation.
Killing or scaling a concept is less academic today because ML algorithms now optimize on install probability in real time. They decide what to scale almost instantly. But for big products with brand guidelines or strong IP, manual creative testing still matters. In those cases, I test on channels with the most control and transparency, where I can see the exact audience, true CTR/CVR and post-install metrics. Usually these are SRNs. Ideally you calculate testing thresholds for your game specifically, so you know the right sample size for statistical significance.
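To illustrate that last point, here is a rough Python sketch of how a team might size a creative test before declaring statistical significance. It uses a standard two-proportion sample-size formula; the baseline IPM, lift target, significance level and power are illustrative assumptions, not Alex's thresholds.

```python
# Rough sketch: minimum impressions needed to detect a lift in IPM (installs per mille)
# at a given significance and power. Baseline and lift values are illustrative.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_baseline, relative_lift, alpha=0.05, power=0.8):
    """Two-proportion test: impressions per creative to detect the given lift."""
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Example: baseline IPM of 8 (0.008 install probability per impression),
# and we want to detect a 20% relative lift before scaling a concept.
print(sample_size_per_variant(p_baseline=0.008, relative_lift=0.20))
```

Run on these assumptions, the formula lands around 50k impressions per creative, which is why per-game thresholds matter: a test that looks "done" after a few thousand impressions usually isn't.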
What lessons have you learned running large-scale video UA? Any new formats or creators outperforming classic rewarded or influencer videos lately?
With large-scale video UA the main lesson is simple: you need more iterations and more concepts than you think. Creative fatigue hits faster at scale, and once it happens your whole campaign starts slipping. High volume also gives you more data, which opens the door to deeper experiments. You can play with format mixes like playable plus video end card, or video plus playable, or drop UGC into the flow to refresh the message.
But even with new formats, everything still comes down to execution. No “perfect” format will save a weak creative. What I enjoy lately is how gaming and non-gaming teams are borrowing from each other. Top-performing campaigns often lean into real-action clips, messy UGC, or a more human tone. These styles keep players engaged longer than polished, traditional rewarded videos.
AI creatives are interesting. When executed well they perform, but most companies still haven’t adopted a proper workflow. The “AI slop” effect is real: lots of AI videos feel cheap or uncanny, and ML algorithms quickly downrank them because users don’t watch them. But when someone uses AI properly, with good direction and editing, the results can actually spike and go viral.
You implemented anti-fraud and automation tools early - what’s your current fraud-filter stack, and which parts of UA have you successfully automated?
Fraud stacks look different depending on whether the game is IAP, IAA or hybrid, but the goal is always the same: block pre-install manipulation and stop fake post-install behaviour. Some teams build this in-house, others rely on third-party tools, and both approaches can work as long as you cover the main attack surfaces. You need protection against tracking-link hijacking, click injection, spoofed installs, and fabricated in-app events. Most modern solutions can detect these patterns through device signals, behavioural checks and probability scoring.
On the automation side, the biggest wins come from removing manual review work. My current setup automatically flags suspicious installs, assigns a fraud score, and either blocks attribution in real time or pushes the flagged data into a report that the network uses to correct billing. This keeps fraud out of the LTV models and saves a lot of time on reconciliations. The system runs continuously, so as new fraud patterns appear, rules and thresholds get updated without waiting for a monthly clean-up.
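As an illustration of the kind of rules such a system might encode, here is a simplified Python sketch of fraud scoring with a block/flag decision. The specific signals (click-to-install time, event bursts, per-IP install counts) and thresholds are common industry heuristics chosen for the example, not a description of Alex's actual stack.

```python
# Illustrative fraud-flagging rules: a few common heuristics combined into a score,
# then a real-time block / flag-for-reconciliation / accept decision.
from dataclasses import dataclass

@dataclass
class Install:
    click_to_install_seconds: int   # CTIT: very short values suggest click injection
    events_in_first_minute: int     # burst of in-app events right after install
    ip_install_count_24h: int       # installs from the same IP in 24h

def fraud_score(install: Install) -> float:
    score = 0.0
    if install.click_to_install_seconds < 10:
        score += 0.4                # near-instant installs are a classic injection signal
    if install.events_in_first_minute > 20:
        score += 0.3                # fabricated in-app events
    if install.ip_install_count_24h > 50:
        score += 0.3                # install farms / device spoofing
    return min(score, 1.0)

def handle(install: Install, block_threshold=0.7, review_threshold=0.4) -> str:
    s = fraud_score(install)
    if s >= block_threshold:
        return "block_attribution"        # rejected in real time
    if s >= review_threshold:
        return "flag_for_reconciliation"  # goes into the report sent to the network
    return "accept"
```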
The combination of automated flagging plus automated reconciliation has been the most reliable way to keep UA data clean at scale.
How has your measurement stack evolved with SKAN 4+ and GDPR? Which mix of MMM, incrementality, or LTV modeling is working best today?
I’d say the measurement stack didn’t really “evolve”; it adapted. We used to rely on clean user-level attribution with full granularity. Now we work around probabilistic signals, conversion value mapping and the limitations of SKAN 4+. For IAP-heavy games this hit prediction accuracy quite noticeably. LTV modelling is still possible, but it requires more assumptions and more guardrails. For IAA games it’s much simpler and remains the most stable part of the stack.
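For readers unfamiliar with conversion value mapping, here is a minimal Python sketch of the idea: bucketing early revenue into SKAN-style fine and coarse values. The revenue tiers and the day-2 window are assumptions for illustration; in practice they would be derived from the game's own pLTV curve.

```python
# Minimal sketch of SKAN-style conversion value mapping: bucket early revenue
# into a fine value (0-63; only 8 tiers used here) and a coarse value.
# The tier boundaries are assumptions, normally derived from your own pLTV curve.

REVENUE_TIERS = [0.0, 0.49, 0.99, 1.99, 4.99, 9.99, 19.99, 49.99]  # USD, illustrative

def fine_conversion_value(d2_revenue: float) -> int:
    """Map day-2 revenue into the highest tier it clears."""
    value = 0
    for i, threshold in enumerate(REVENUE_TIERS):
        if d2_revenue >= threshold:
            value = i
    return value

def coarse_conversion_value(d2_revenue: float) -> str:
    if d2_revenue >= 5.0:
        return "high"
    if d2_revenue > 0:
        return "medium"
    return "low"

print(fine_conversion_value(3.49), coarse_conversion_value(3.49))  # -> 3 medium
```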
There’s no single measurement setup that works for every game. Some companies never built proper pLTV pipelines in the first place, so the SKAN shift exposed that gap. The teams performing best today usually have LTV modelling and prediction as their core engine, then layer MMM and incrementality on top.
MMM and incrementality become essential once you start spending at scale. You need to understand the impact of awareness, creative rotations, channel mix and seasonality on your total marketing outcome, not just on attributed installs. MMM helps you allocate budgets across big buckets, while incrementality clarifies what’s truly moving the needle.
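As a toy illustration of what an MMM does, the Python sketch below applies an adstock transform to simulated channel spend and fits a simple linear model of installs on the transformed series. The channels, decay rates and coefficients are all made up, and a production MMM would be far more involved (saturation curves, seasonality, Bayesian priors).

```python
# Toy MMM sketch: apply an adstock transform to each channel's daily spend,
# then fit a linear model of total installs on the transformed spends.
# Data, decay rates and channel names are all hypothetical.
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry over a fraction of yesterday's effect into today."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(0)
days = 120
spend = {
    "google": rng.uniform(5_000, 20_000, days),
    "applovin": rng.uniform(3_000, 15_000, days),
    "unity": rng.uniform(1_000, 8_000, days),
}
decays = {"google": 0.5, "applovin": 0.3, "unity": 0.2}

X = np.column_stack([adstock(spend[ch], decays[ch]) for ch in spend])
X = np.column_stack([np.ones(days), X])          # intercept = organic baseline
true_coefs = np.array([2_000, 0.08, 0.05, 0.12]) # only used to simulate installs
installs = X @ true_coefs + rng.normal(0, 300, days)

coefs, *_ = np.linalg.lstsq(X, installs, rcond=None)
for name, c in zip(["baseline"] + list(spend), coefs):
    print(f"{name}: {c:.3f} installs per adstocked $" if name != "baseline"
          else f"{name}: ~{c:.0f} organic installs/day")
```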
In the end the strongest approach is a mix of all three: prediction for day-to-day UA decisions, incrementality for validation, and MMM for high-level budget planning.
How do you align UA, creative, and product teams - any rituals or dashboards that keep everyone chasing the same north star?
For me alignment starts with getting everyone on the same page about the value of working together. Each team has its own agenda and that’s completely normal. I prefer to surface those needs first, then build a communication flow where the focus is on generating and sharing insights.
Product usually wants a deeper understanding of the audience and clear, data-driven hypotheses. The creative team cares about placements, formats and player motivations. The UA team can bridge both sides by sharing granular performance data, audience signals and results from format or placement tests. Once everyone sees how their input feeds the bigger picture, collaboration becomes natural.
For dashboards I like a simple, shared creative performance tracker. It shows concepts, formats and placements in one view and highlights top and bottom performers. This keeps everyone chasing the same north star without getting lost in channel-level noise.
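A minimal version of such a tracker can be sketched in a few lines of pandas; the column names and numbers below are hypothetical, just to show the concept-by-format-by-placement view with top and bottom performers surfaced.

```python
# Minimal sketch of a shared creative tracker: one view of concept x format x placement,
# with top and bottom performers highlighted. Column names and data are hypothetical.
import pandas as pd

rows = [
    # concept, format, placement, spend, installs, d7_roas
    ("ugc_reaction", "video",    "rewarded",     12_000, 5_400, 0.42),
    ("ugc_reaction", "video",    "interstitial",  8_000, 2_100, 0.25),
    ("boss_fight",   "playable", "rewarded",      6_500, 3_900, 0.55),
    ("boss_fight",   "video",    "interstitial",  4_000,   900, 0.18),
]
df = pd.DataFrame(rows, columns=["concept", "format", "placement",
                                 "spend", "installs", "d7_roas"])
df["cpi"] = df["spend"] / df["installs"]

summary = (df.groupby(["concept", "format", "placement"])
             .agg(spend=("spend", "sum"), cpi=("cpi", "mean"), d7_roas=("d7_roas", "mean"))
             .sort_values("d7_roas", ascending=False))

print(summary.head(3))   # top performers
print(summary.tail(3))   # bottom performers
```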
The most important ritual is still the basics: sit together, review winners and losers, validate hypotheses and capture everything in a clean, structured experiment log. Excellence is just consistent practice.
Having been with UA Society for years, what’s one misconception you still see about UA in gaming, and what emerging trend most excites you for 2025?
One misconception that never seems to die is the idea that UA can fix anything. Some people assume that once a game ships, marketing should magically make it profitable. That’s simply not how it works. Not every game is fun, not every product decision lands, and even the best marketers can’t “sell anything to anyone.” If product and marketing weren’t aligned from the moment the core design was written, the outcome is basically random.
What excites me for 2025 is the shift toward deeper collaboration and shared ownership. We’re finally moving past the old “throw it over the wall” model. Product, creative and UA teams are starting to build features, creatives and audience insights together from day one. Add to that better tooling, faster creative pipelines and more transparent data, and we’re getting much closer to a world where marketing influence is built into the game instead of added at the end.