Why Your ‘Golden Cohort’ Lies - And What to Watch Before You Scale Spend with CrazyGames

By Mariam Ahmad 27 January 2026

Ecem Calban, Senior UA Manager at CrazyGames, argues that in a market where global gaming UA spend reached about $25 billion in 2025, blended cost-per-install metrics are no longer enough to guide scale decisions. With CPIs rising across platforms, early retention and predictive lifetime value signals are now the more reliable indicators of when to accelerate spend. Ecem reframes UA decision-making around real-time cohort quality and algorithm learning rather than install cost alone. Here's how she does it.


When managing a multi-million annual user acquisition budget across many titles, what concrete signals told you it was time to accelerate spend versus protect efficiency?

I don’t rely on blended CPI-to-LTV ratios when deciding to scale, because performance usually degrades at the margin, in the incremental spend and the newest cohorts. The first thing I check is whether our predictive LTV model is holding. If D1 retention and early monetization signals stay consistent as we increase spend, that’s my green light. For newer titles, I look for a golden cohort first (that initial batch of high-intent users who come in cheap and hit our LTV targets). But I’ve learned not to get too excited about it, because that golden cohort almost always degrades as you scale. The pool of early adopters and genre fans is exhausted quickly, and if you’re not careful, you end up overspending while chasing the same quality at 3x the CPI.
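A minimal sketch of that kind of "is quality holding" check, assuming D1 retention and an early monetization signal are already pulled per cohort from internal analytics; the metrics, field names, numbers and tolerance below are illustrative, not CrazyGames' actual model:

```python
# Illustrative check: do new cohorts still look like the baseline as spend ramps up?
# The metrics, field names and tolerance are assumptions, not a real production model.

from dataclasses import dataclass

@dataclass
class Cohort:
    name: str
    d1_retention: float   # share of installs still active on day 1
    d3_arpu: float        # early monetization signal: revenue per install by day 3

def quality_holds(baseline: Cohort, candidate: Cohort, tolerance: float = 0.10) -> bool:
    """Green light only if retention and early revenue stay within tolerance of baseline."""
    retention_ok = candidate.d1_retention >= baseline.d1_retention * (1 - tolerance)
    arpu_ok = candidate.d3_arpu >= baseline.d3_arpu * (1 - tolerance)
    return retention_ok and arpu_ok

golden = Cohort("golden cohort", d1_retention=0.42, d3_arpu=0.35)
scaled = Cohort("post-scale cohort", d1_retention=0.36, d3_arpu=0.27)

if quality_holds(golden, scaled):
    print("Quality holding: keep scaling")
else:
    print("Golden cohort degrading: reset LTV targets before adding spend")
```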

Once performance starts to stabilize, I reset my targets and focus on how well the setup scales. I’m comfortable increasing spend as long as the algorithm is still learning well. That means steady conversion volume, no learning issues, and broad targeting still bringing in new users. When marginal CPI starts rising faster than volume, or frequency goes up in key segments, I slow down. At that point, I prefer to protect efficiency and move budget to another campaign or geo instead of forcing scale.
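To make the "marginal CPI rising faster than volume" rule concrete, here is a rough worked example with made-up numbers; real thresholds would depend on the title and its payback window:

```python
# Sketch: is the *marginal* CPI (cost of the extra installs bought by the last budget
# increase) rising faster than install volume is growing? All numbers are made up.

def marginal_cpi(prev_spend, prev_installs, new_spend, new_installs):
    return (new_spend - prev_spend) / max(new_installs - prev_installs, 1)

prev_spend, prev_installs = 10_000.0, 8_000   # before the budget increase
new_spend, new_installs = 15_000.0, 10_500    # after the budget increase

blended_cpi = new_spend / new_installs
m_cpi = marginal_cpi(prev_spend, prev_installs, new_spend, new_installs)

cpi_growth = m_cpi / (prev_spend / prev_installs) - 1   # how much pricier the margin got
volume_growth = new_installs / prev_installs - 1        # how much more volume we bought

print(f"blended CPI {blended_cpi:.2f}, marginal CPI {m_cpi:.2f}")
if cpi_growth > volume_growth:
    print("Marginal CPI outpacing volume: protect efficiency, move budget to another campaign or geo")
else:
    print("Scale is still efficient at the margin")
```

In this invented case the blended CPI still looks acceptable (about 1.43) while the marginal CPI has already climbed to 2.00, which is exactly the gap a blended view hides.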

After running hundreds of game and creative tests early in your career, how did your decision-making framework evolve once the cost of being wrong increased at scale?

Early in my career, I tested a lot: creatives, audiences, and bid strategies. That helped me build a strong foundation. As I started managing larger budgets across multiple titles, my approach changed. I moved from testing everything to testing what actually drives impact.

Today, I focus on experiments that can change direction, like new user segments, big creative ideas, or meaningful bidding changes. I track early signals from day zero and use them to adjust quickly, while being more selective with where real budget goes.

My view on failure also changed. With large budgets, you cannot kill a campaign after two bad days, but you also cannot let it run for weeks hoping it turns around. The key is telling signal from noise. If CPI is high but predicted LTV holds, I give it time. If both CPI and LTV are getting worse, I cut fast.
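The signal-versus-noise call she describes can be reduced to a simple decision rule; the sketch below uses placeholder thresholds, not her actual cut-offs:

```python
# Sketch of the keep / wait / cut call: high CPI alone buys time if predicted LTV holds,
# but CPI and predicted LTV deteriorating together triggers a fast cut.
# The 10% thresholds are placeholders.

def campaign_call(cpi_change: float, pltv_change: float) -> str:
    """Inputs are week-over-week relative changes, e.g. +0.15 means +15%."""
    if cpi_change > 0.10 and pltv_change < -0.10:
        return "cut fast"       # cost and cohort quality both getting worse
    if cpi_change > 0.10:
        return "give it time"   # expensive, but predicted LTV is holding
    return "keep running"

print(campaign_call(cpi_change=0.20, pltv_change=0.02))   # -> give it time
print(campaign_call(cpi_change=0.25, pltv_change=-0.18))  # -> cut fast
```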

I also stopped expecting perfect calls. We have killed games that looked strong at first, and scaled ideas that seemed weak early on but later found an audience. In 2026, the teams that win are not the ones who always guess right, but the ones who learn fast, adapt, and keep moving.

Across platforms like Meta, TikTok, and Google, where do you believe human creative judgment still clearly outperforms platform algorithms - and where does it not?

When it comes to creative performance, algorithms are very good at finding what converts. For a new game or campaign, I test a few clear concepts and let the algorithm show me what works. Those early signals are meaningful, and I trust them.

Where human judgment still matters is understanding why something works and whether it can scale. I have seen creatives with strong CTR and install rates that bring in the wrong users. Retention then drops. This is where a human needs to step in and stop it before it becomes a bigger issue.

Platform differences also matter. What works on TikTok often does not work on Mintegral, and Applovin responds to different themes than Meta. The algorithm does not explain these differences. It only optimizes what you give it. Knowing which message or game feature to lead with on each platform is still a human decision.

I also use clear creative grouping. Instead of mixing many random ads in one campaign, I group them by theme, like IAP, gameplay, or story. This reduces noise and helps the algorithm learn faster. The best results come when humans set the structure and the algorithm scales what works inside it.
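A rough illustration of that theme-based grouping, independent of any specific ad platform API; the ad IDs and theme labels are invented:

```python
# Rough illustration: group creatives by theme before they enter campaigns, so each
# campaign carries one clear message instead of a mixed bag. Ads and themes are invented.

from collections import defaultdict

ads = [
    {"id": "ad_01", "theme": "gameplay"},
    {"id": "ad_02", "theme": "IAP"},
    {"id": "ad_03", "theme": "story"},
    {"id": "ad_04", "theme": "gameplay"},
]

campaigns = defaultdict(list)
for ad in ads:
    campaigns[ad["theme"]].append(ad["id"])

# Humans set the structure; the algorithm optimizes within each themed group.
for theme, creative_ids in sorted(campaigns.items()):
    print(f"campaign '{theme}': {creative_ids}")
```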

Incentivised and offerwall traffic often divides UA teams. Based on your experience onboarding and scaling these partners, under what conditions does this traffic genuinely create incremental value?

Based on my experience, incentivised traffic only creates true incremental value under very specific conditions.

First, the UA team needs a strong data and tech setup. You cannot rely only on the MMP or network reports. You need internal data to spot fraud, to understand which funnels make sense for incentivised traffic, and to tie payouts to real user milestones instead of predicted bids. Being able to measure LTV at key milestones is critical.
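A simplified sketch of tying incentivised payouts to real user milestones rather than installs; the milestone names, payout values and event data are invented for illustration:

```python
# Simplified sketch: pay out only when a user hits a real in-game milestone, and log
# revenue at that point so the channel is judged on quality, not raw installs.
# Milestone names, payout values and event data are invented.

MILESTONE_PAYOUTS = {"tutorial_done": 1.0, "level_10": 3.0, "first_purchase": 8.0}  # USD

def settle_user(events_reached: set, revenue_to_date: float):
    """Return (payout owed, revenue observed at the deepest milestone reached)."""
    reached = [m for m in MILESTONE_PAYOUTS if m in events_reached]
    if not reached:
        return 0.0, 0.0
    deepest = max(reached, key=MILESTONE_PAYOUTS.get)
    return MILESTONE_PAYOUTS[deepest], revenue_to_date

print(settle_user({"tutorial_done"}, revenue_to_date=0.05))
print(settle_user({"tutorial_done", "level_10"}, revenue_to_date=0.60))
```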

Second, not every game is a good fit, even if the genre looks right. I have worked on ad-monetised games with non-linear progression that were competitive on CPI but failed in incentivised campaigns. Users completed the tasks and churned right away, without engaging further. In these cases, incentivised traffic does not add value unless the product is adapted. Otherwise, it is better not to push the channel.

Finally, this channel has become more competitive over time. Scale is harder to find and costs are higher. In practice, it now favors large, IAP-heavy titles that can absorb higher CPIs. If you cannot reach meaningful scale, I do not see incentivised traffic as a good use of time or budget.

LTV models are widely discussed but often poorly trusted. What practical mistakes do teams make when operationalising LTV prediction for real-time UA decisions?

One common mistake is trusting LTV models too literally. Most models are built on past data, but not every game has enough clean or stable history to train a reliable model. When the data is weak, the model can easily point teams in the wrong direction.

Another issue is that models don’t handle external changes very well. If a new competitor enters the market or the environment shifts, the model doesn’t understand why performance changes; it just reacts to the numbers.

I still find LTV models very useful as a directional tool, but the mistake is letting them make decisions on their own. UA teams need to stay alert and combine the model with judgment and context, especially for real-time decisions.
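One way to keep an LTV model directional rather than decisive is to widen its point estimate into a band when the training history is thin and route borderline calls to a human. The sketch below does that with an invented uncertainty rule and made-up numbers:

```python
# Sketch: treat predicted LTV as directional. With little clean history behind the model,
# widen the point estimate into a band and route borderline calls to a human instead of
# letting the number decide alone. The uncertainty rule is invented for illustration.

def ltv_band(predicted_ltv: float, training_cohorts: int):
    """Fewer cohorts behind the model -> wider band around the point estimate."""
    uncertainty = min(0.5, 2.0 / max(training_cohorts, 1))
    return predicted_ltv * (1 - uncertainty), predicted_ltv * (1 + uncertainty)

def scaling_call(predicted_ltv: float, target_cpi: float, training_cohorts: int) -> str:
    low, high = ltv_band(predicted_ltv, training_cohorts)
    if low > target_cpi:
        return "scale: even the pessimistic end of the band clears target CPI"
    if high < target_cpi:
        return "hold: even the optimistic end misses target CPI"
    return "human review: the model is directional here, not decisive"

print(scaling_call(predicted_ltv=2.4, target_cpi=1.8, training_cohorts=40))  # mature model
print(scaling_call(predicted_ltv=2.4, target_cpi=1.8, training_cohorts=3))   # thin history
```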

Moving from mobile-first hypercasual games to a browser-based gaming platform, which UA assumptions stopped being valid almost immediately?

Actually, my role at CrazyGames is a bit different from what that question assumes, and I think it's worth clarifying, because it's a pretty unique model.

CrazyGames has been a major web gaming platform for years, but in 2025 we launched a new initiative: cross-platform publishing. We identify titles that are performing well on web and bring them to mobile and beyond. It's essentially the reverse of the traditional funnel: we use web as a discovery and validation layer, then expand proven winners into mobile app stores.

From a UA perspective, this flips a lot of assumptions. Normally, you're spending money upfront to test if a game has legs. Here, we already have behavioral data from millions of web sessions (retention curves, session depth, engagement patterns) before we even consider a mobile launch. It dramatically reduces early-stage risk. We're not guessing whether a game will be successful; we already know.

For me, the UA challenge is that web and mobile audiences don't always translate 1:1. So our UA strategy focuses on validating whether the web audience has a mobile equivalent. Can we find similar users on Applovin, Google or Meta who engage the same way? It gives me a much broader UA perspective: understanding how user behavior shifts between web and mobile, and which metrics translate and which don't.

Looking back, which skills or projects (tooling, data, creative, partnerships) delivered the highest long-term career leverage - and which are commonly overvalued by junior UA managers?

Looking back, the things that helped my career the most were taking initiative and being open to trying new things. Working on new ideas and testing a lot taught me much more than optimizing the same setups again and again. I also learned a lot by going outside my own bubble: talking to UA managers at other companies and seeing how they work gave me a better perspective. I think junior UA managers often focus too much on short-term tactics and platform tricks. Those can work for a while, but they don't last.
