# Growth Marketing Expert
Expertise in growth experimentation, funnel optimization, and data-driven marketing.
## Core Competencies

### Growth Experimentation

- Hypothesis development
- A/B and multivariate testing
- Statistical significance
- Experiment prioritization (ICE/PIE)
- Learning documentation

### Funnel Optimization

- Conversion rate optimization (CRO)
- Landing page optimization
- Sign-up flow optimization
- Activation improvement
- Retention mechanics

### Analytics & Data

- Funnel analytics
- Cohort analysis
- Attribution modeling
- Predictive analytics
- Customer segmentation
## The Growth Framework

### AARRR (Pirate Metrics)

```yaml
acquisition:
  question: How do users find you?
  metrics:
    - Traffic by source
    - Cost per acquisition
    - Click-through rate
  tactics:
    - SEO & content marketing
    - Paid acquisition
    - Viral/referral
    - Partnerships

activation:
  question: Do users have a great first experience?
  metrics:
    - Sign-up rate
    - Onboarding completion
    - Time to value
    - Feature adoption
  tactics:
    - Onboarding optimization
    - Progressive profiling
    - Quick wins
    - Personalization

retention:
  question: Do users come back?
  metrics:
    - DAU/MAU ratio
    - Cohort retention curves
    - Churn rate
    - Feature stickiness
  tactics:
    - Email/push engagement
    - Feature releases
    - Community building
    - Habit loops

revenue:
  question: How do you make money?
  metrics:
    - ARPU/ARPA
    - LTV
    - Conversion to paid
    - Expansion revenue
  tactics:
    - Pricing optimization
    - Upsell flows
    - Reduction of friction
    - Value demonstration

referral:
  question: Do users tell others?
  metrics:
    - Viral coefficient (K-factor)
    - Referral conversion
    - NPS
    - Share rate
  tactics:
    - Referral programs
    - Social proof
    - Word of mouth
    - Product virality
```
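The five stages chain together: a drop at any one stage multiplies through everything downstream. A minimal sketch of that compounding, with all stage rates purely illustrative:

```python
# Illustrative stage-to-stage conversion rates through the AARRR funnel
stages = {
    'acquisition': 0.40,  # visitor -> sign-up page
    'activation': 0.30,   # sign-up -> first value moment
    'retention': 0.50,    # activated -> still active at week 4
    'revenue': 0.10,      # retained -> paying
    'referral': 0.25,     # paying -> refers someone
}

visitors = 100_000
count = visitors
for stage, rate in stages.items():
    count = count * rate
    print(f"{stage:<12} {rate:>4.0%} -> {count:,.0f} users")

overall = count / visitors
print(f"overall funnel yield: {overall:.3%}")
```

With these numbers, 100,000 visitors compound down to roughly 150 referring customers, which is why small per-stage wins multiply into large end-to-end gains.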
## Growth Levers
```python
def calculate_ltv_improvement(metrics, uplift):
    """Approximate the LTV gain from a relative retention uplift.

    Uses a simple geometric LTV model (LTV = ARPU / churn).
    """
    r_old = metrics['retention_rate']
    r_new = min(r_old * (1 + uplift), 0.99)
    ltv_gain = metrics['arpu'] / (1 - r_new) - metrics['arpu'] / (1 - r_old)
    return metrics['customers'] * ltv_gain


def calculate_growth_impact(metrics):
    """Calculate the revenue impact of improving each growth lever."""
    levers = {
        'traffic': {
            'current': metrics['monthly_visitors'],
            'improvement': 0.20,  # 20% more traffic
            'impact': metrics['monthly_visitors'] * 0.20
                      * metrics['conversion_rate'] * metrics['arpu']
        },
        'conversion': {
            'current': metrics['conversion_rate'],
            'improvement': 0.25,  # 25% better conversion
            'impact': metrics['monthly_visitors']
                      * (metrics['conversion_rate'] * 0.25) * metrics['arpu']
        },
        'frequency': {
            'current': metrics['purchases_per_year'],
            'improvement': 0.15,  # 15% more frequent
            'impact': metrics['customers']
                      * (metrics['purchases_per_year'] * 0.15) * metrics['aov']
        },
        'aov': {
            'current': metrics['aov'],
            'improvement': 0.10,  # 10% higher AOV
            'impact': metrics['customers'] * metrics['purchases_per_year']
                      * (metrics['aov'] * 0.10)
        },
        'retention': {
            'current': metrics['retention_rate'],
            'improvement': 0.05,  # 5% better retention
            'impact': calculate_ltv_improvement(metrics, 0.05)
        }
    }
    return sorted(levers.items(), key=lambda x: x[1]['impact'], reverse=True)
```
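A condensed, self-contained version of the same lever ranking with illustrative inputs. The retention lever uses a simple geometric-LTV approximation (an assumption for the sketch, not a fixed industry formula):

```python
# All numbers are illustrative
metrics = {
    'monthly_visitors': 100_000,
    'conversion_rate': 0.02,
    'arpu': 40.0,
    'customers': 5_000,
    'purchases_per_year': 2.4,
    'aov': 50.0,
    'retention_rate': 0.70,
}

# Geometric LTV before and after a 5% relative retention uplift
ltv_old = metrics['arpu'] / (1 - metrics['retention_rate'])
ltv_new = metrics['arpu'] / (1 - min(metrics['retention_rate'] * 1.05, 0.99))

impacts = {
    'traffic (+20%)': metrics['monthly_visitors'] * 0.20 * metrics['conversion_rate'] * metrics['arpu'],
    'conversion (+25%)': metrics['monthly_visitors'] * metrics['conversion_rate'] * 0.25 * metrics['arpu'],
    'frequency (+15%)': metrics['customers'] * metrics['purchases_per_year'] * 0.15 * metrics['aov'],
    'aov (+10%)': metrics['customers'] * metrics['purchases_per_year'] * metrics['aov'] * 0.10,
    'retention (+5%)': metrics['customers'] * (ltv_new - ltv_old),
}

# Rank levers by absolute revenue impact
for lever, impact in sorted(impacts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{lever:<18} ${impact:,.0f}")
```

For this particular business, purchase frequency and retention dwarf the top-of-funnel levers, which is exactly the kind of insight the ranking is meant to surface.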
## Experimentation Process

### ICE Prioritization Framework
```python
def calculate_ice_score(experiments):
    """Score experiments using the ICE framework (average of three 1-10 ratings)."""
    scored = []
    for exp in experiments:
        ice_score = (
            exp['impact'] +      # 1-10: potential business impact
            exp['confidence'] +  # 1-10: confidence in hypothesis
            exp['ease']          # 1-10: ease of implementation
        ) / 3
        scored.append({
            'name': exp['name'],
            'hypothesis': exp['hypothesis'],
            'ice_score': ice_score,
            'impact': exp['impact'],
            'confidence': exp['confidence'],
            'ease': exp['ease']
        })
    return sorted(scored, key=lambda x: x['ice_score'], reverse=True)
```
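A worked example of averaged ICE scoring on a hypothetical backlog (the experiment names and ratings are made up for illustration):

```python
# Hypothetical backlog; impact, confidence, ease are each rated 1-10
experiments = [
    {'name': 'Simplify sign-up form', 'impact': 8, 'confidence': 7, 'ease': 9},
    {'name': 'Redesign pricing page', 'impact': 9, 'confidence': 5, 'ease': 3},
    {'name': 'Add exit-intent modal', 'impact': 4, 'confidence': 6, 'ease': 8},
]

# ICE score = mean of the three ratings
for exp in experiments:
    exp['ice_score'] = (exp['impact'] + exp['confidence'] + exp['ease']) / 3

ranked = sorted(experiments, key=lambda e: e['ice_score'], reverse=True)
for exp in ranked:
    print(f"{exp['name']:<24} ICE = {exp['ice_score']:.1f}")
```

Note how the high-impact pricing redesign drops below the easy sign-up fix: ICE deliberately rewards cheap, confident wins over speculative big bets.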
### Experiment Template
```yaml
experiment_name: Homepage CTA Button Color Test

hypothesis:
  statement: "Changing the CTA button from blue to orange will increase clicks"
  reasoning: "Orange creates more urgency and stands out from our blue brand"

metrics:
  primary: CTA click rate
  secondary:
    - Sign-up conversion
    - Time on page
    - Bounce rate

test_design:
  type: A/B test
  control: "Blue button (#3498db)"
  variant: "Orange button (#e67e22)"
  traffic_split: 50/50
  sample_size_needed: 10,000 per variant
  duration: 14 days minimum

success_criteria:
  minimum_detectable_effect: 10%
  statistical_significance: 95%

segmentation:
  - New vs returning visitors
  - Mobile vs desktop
  - Traffic source
```
### Statistical Significance Calculator
```python
import numpy as np
import scipy.stats as stats


def calculate_sample_size(baseline_rate, mde, alpha=0.05, power=0.80):
    """Calculate the required sample size per variant for an A/B test.

    `mde` is the relative minimum detectable effect (e.g. 0.10 for a 10% lift).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)
    effect_size = p2 - p1

    # Z-scores for the significance level and power
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)

    # Sum of the two binomial variances (already covers both groups,
    # so no extra factor of 2 is needed)
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)

    # Sample size per group
    n = pooled_var * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return int(np.ceil(n))
```
```python
def analyze_ab_test(control_visitors, control_conversions,
                    variant_visitors, variant_conversions):
    """Analyze A/B test results with a chi-squared test."""
    control_rate = control_conversions / control_visitors
    variant_rate = variant_conversions / variant_visitors

    # Relative lift of the variant over the control
    lift = (variant_rate - control_rate) / control_rate

    # Chi-squared test on the 2x2 contingency table
    contingency = [[control_conversions, control_visitors - control_conversions],
                   [variant_conversions, variant_visitors - variant_conversions]]
    chi2, p_value, dof, expected = stats.chi2_contingency(contingency)

    return {
        'control_rate': control_rate,
        'variant_rate': variant_rate,
        'lift': lift,
        'lift_percent': f"{lift:.1%}",
        'p_value': p_value,
        'significant': p_value < 0.05,
        'confidence': 1 - p_value  # informal shorthand, not a true confidence level
    }
```
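A standalone sanity check of the same readout on illustrative numbers, using SciPy's chi-squared test directly (10,000 visitors per arm, invented conversion counts):

```python
import scipy.stats as stats

# Illustrative readout: 10,000 visitors per arm
control_conv, control_n = 500, 10_000   # 5.0% control rate
variant_conv, variant_n = 580, 10_000   # 5.8% variant rate

# 2x2 table of conversions vs non-conversions
table = [[control_conv, control_n - control_conv],
         [variant_conv, variant_n - variant_conv]]
chi2, p_value, dof, expected = stats.chi2_contingency(table)

lift = (variant_conv / variant_n - control_conv / control_n) / (control_conv / control_n)
print(f"lift = {lift:.1%}, p = {p_value:.4f}")
```

A 16% relative lift on this sample clears the 0.05 threshold; halve the traffic and the same lift would likely not, which is why sample-size planning comes before launch.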
## Funnel Analysis

### Conversion Funnel Tracking
```sql
-- Funnel analysis query
WITH funnel AS (
    SELECT
        user_id,
        MIN(CASE WHEN event = 'page_view' THEN timestamp END) AS viewed,
        MIN(CASE WHEN event = 'signup_started' THEN timestamp END) AS started,
        MIN(CASE WHEN event = 'signup_completed' THEN timestamp END) AS completed,
        MIN(CASE WHEN event = 'first_purchase' THEN timestamp END) AS purchased
    FROM events
    WHERE timestamp >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY user_id
)
SELECT
    COUNT(viewed) AS step_1_viewed,
    COUNT(started) AS step_2_started,
    COUNT(completed) AS step_3_completed,
    COUNT(purchased) AS step_4_purchased,
    -- Conversion rates
    ROUND(COUNT(started)::decimal / NULLIF(COUNT(viewed), 0) * 100, 2) AS view_to_start,
    ROUND(COUNT(completed)::decimal / NULLIF(COUNT(started), 0) * 100, 2) AS start_to_complete,
    ROUND(COUNT(purchased)::decimal / NULLIF(COUNT(completed), 0) * 100, 2) AS complete_to_purchase,
    ROUND(COUNT(purchased)::decimal / NULLIF(COUNT(viewed), 0) * 100, 2) AS overall_conversion
FROM funnel;
```
### Cohort Retention Analysis
```sql
-- Weekly cohort retention
WITH cohort_data AS (
    SELECT
        user_id,
        DATE_TRUNC('week', first_seen) AS cohort_week,
        DATE_TRUNC('week', activity_date) AS activity_week
    FROM user_activity
),
cohort_size AS (
    SELECT cohort_week, COUNT(DISTINCT user_id) AS users
    FROM cohort_data
    GROUP BY cohort_week
),
retention AS (
    SELECT
        c.cohort_week,
        -- Weeks elapsed since the cohort week (intervals don't support EXTRACT(WEEK ...))
        FLOOR(EXTRACT(EPOCH FROM (c.activity_week - c.cohort_week)) / 604800) AS week_number,
        COUNT(DISTINCT c.user_id) AS retained_users
    FROM cohort_data c
    GROUP BY c.cohort_week, week_number
)
SELECT
    r.cohort_week,
    cs.users AS cohort_size,
    r.week_number,
    r.retained_users,
    ROUND(r.retained_users::decimal / cs.users * 100, 2) AS retention_rate
FROM retention r
JOIN cohort_size cs ON r.cohort_week = cs.cohort_week
ORDER BY r.cohort_week, r.week_number;
```
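The same retention matrix can be built in plain Python from (user, cohort week, activity week) rows, which is handy for quick checks without a warehouse. The sample rows below are hypothetical:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (user_id, cohort_week, activity_week) rows,
# as the SQL query above would see them
activity = [
    ('u1', date(2024, 1, 1), date(2024, 1, 1)),
    ('u1', date(2024, 1, 1), date(2024, 1, 8)),
    ('u2', date(2024, 1, 1), date(2024, 1, 1)),
    ('u3', date(2024, 1, 1), date(2024, 1, 1)),
    ('u3', date(2024, 1, 1), date(2024, 1, 15)),
]

cohort_users = defaultdict(set)           # cohort_week -> all users in the cohort
retained = defaultdict(set)               # (cohort_week, week_number) -> active users
for user, cohort_week, activity_week in activity:
    cohort_users[cohort_week].add(user)
    week_number = (activity_week - cohort_week).days // 7
    retained[(cohort_week, week_number)].add(user)

for (cohort_week, week_number), users in sorted(retained.items()):
    rate = len(users) / len(cohort_users[cohort_week])
    print(f"{cohort_week} week {week_number}: {rate:.0%}")
```

Here the January 1 cohort retains 100% in week 0 and one of three users in each of weeks 1 and 2, mirroring the shape of the SQL output.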
## Key Metrics
| Metric | Formula | Benchmark |
|--------|---------|-----------|
| Conversion Rate | Conversions / Visitors | 2-5% (varies) |
| CAC | Marketing Spend / New Customers | Varies by industry |
| LTV | ARPU × Average Lifetime | 3× CAC minimum |
| Payback Period | CAC / Monthly Revenue per Customer | <12 months |
| NRR | (Start + Expansion - Churn) / Start MRR | >100% |
| K-factor | Invites × Conversion Rate | >1 for virality |
| DAU/MAU | Daily Active / Monthly Active | 20-50% |
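The table's formulas as a short sketch, with all inputs illustrative:

```python
# Illustrative inputs; formulas follow the table above
marketing_spend = 50_000.0
new_customers = 400
arpu_monthly = 45.0
avg_lifetime_months = 18

cac = marketing_spend / new_customers        # cost to acquire one customer
ltv = arpu_monthly * avg_lifetime_months     # lifetime value per customer
ltv_to_cac = ltv / cac                       # healthy if > 3
payback_months = cac / arpu_monthly          # healthy if < 12 months

start_mrr, expansion_mrr, churned_mrr = 100_000.0, 12_000.0, 7_000.0
nrr = (start_mrr + expansion_mrr - churned_mrr) / start_mrr  # healthy if > 100%

invites_per_user, invite_conversion = 3.0, 0.25
k_factor = invites_per_user * invite_conversion              # viral if > 1

print(f"CAC=${cac:.0f}  LTV=${ltv:.0f}  LTV:CAC={ltv_to_cac:.1f}  "
      f"payback={payback_months:.1f}mo  NRR={nrr:.0%}  K={k_factor}")
```

With these numbers the unit economics clear every benchmark except virality (K = 0.75), pointing the next experiments at the referral loop.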
## Viral Loop Design
```yaml
types_of_virality:
  inherent:
    description: Product requires others to use
    examples: Slack, Zoom, Dropbox sharing
    k_factor_potential: High (1.5-3.0)
  artificial:
    description: Incentivized referrals
    examples: Dropbox space, Uber credits
    k_factor_potential: Medium (0.5-1.5)
  word_of_mouth:
    description: Organic recommendations
    examples: Great products, NPS > 50
    k_factor_potential: Low-Medium (0.2-0.8)
```
Viral Loop Optimization:
- Reduce friction in invite flow
- Clear value proposition for inviter AND invitee
- Multiple sharing channels
- Timing of ask (after value delivered)
- Social proof in referral message
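The K-factor ranges above compound dramatically over repeated invite cycles. A hypothetical simulation (seed size and cycle count are illustrative) of how each new user cohort spawns the next:

```python
def simulate_viral_growth(seed_users, k_factor, cycles):
    """Each cycle, the newest cohort of users generates k_factor new users each."""
    total, cohort = seed_users, seed_users
    for _ in range(cycles):
        cohort = cohort * k_factor  # next cohort spawned by the previous one
        total += cohort
    return total

# 1,000 seed users over 6 invite cycles at different K-factors
for k in (0.5, 0.9, 1.2):
    print(f"K={k}: {simulate_viral_growth(1000, k, 6):,.0f} users")
```

Below K = 1 the loop converges to a fixed multiplier on seed traffic (roughly 2× at K = 0.5), while anything above 1 keeps compounding, which is why even nudging K from 0.9 toward 1 is a high-leverage experiment.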
## Tools Proficiency

### Analytics

- Product: Amplitude, Mixpanel, Heap
- Web: Google Analytics 4, Plausible
- Data Warehouse: BigQuery, Snowflake

### Testing

- A/B Testing: Optimizely, VWO, LaunchDarkly
- Feature Flags: Split, Flagsmith
- Session Recording: FullStory, Hotjar

### Visualization

- BI: Tableau, Looker, Mode
- Dashboards: Metabase, Redash

### Attribution

- Mobile: Branch, Adjust, AppsFlyer
- Web: Segment, mParticle

### Automation

- Lifecycle: Iterable, Customer.io, Braze
- In-app: Appcues, Pendo, Intercom
## Best Practices

- Hypothesis-driven: every experiment starts with a hypothesis
- Statistical rigor: sufficient sample size and significance
- One variable: test one variable at a time
- Document learnings: even failed experiments are valuable
- Quick iterations: many small tests beat one big one
- North Star focus: optimize for the primary metric