Tutorial: Implement Feature Flags and A/B Tests in a Web App (Node + React)
One-line learning outcome: Add runtime feature flags and a simple A/B experiment to a Node + React app, analyze results, and safely roll out or roll back.
Estimated total time: 3–5 hours
Prerequisites: Familiarity with Node.js, npm, React basics, and using the command line. Optional: prior CI experience.
Why feature flags?
Feature flags let you toggle behavior without deploying code — perfect for safe rollouts, experiments, and decoupling deployments from releases.
Module breakdown
- Module 1 — Design and local toggle service (0.5–1 hr) — Easy
- Module 2 — Server-side flags (Node) (1 hr) — Medium
- Module 3 — Client-side flags (React) (1 hr) — Medium
- Module 4 — Experiment & analyze results (0.5–1 hr) — Medium
Module 1 — Design and local toggle service
Objective: Define flag schema and run a small local toggle service that returns flag values per user.
Narrative: Start with a simple JSON-backed toggle service you can call from server and client; later you can swap this for LaunchDarkly or similar services.
Hands-on lab
# toggle-service/index.js
const express = require('express');
const app = express();
app.use(express.json());

let flags = {
  newCheckout: { default: false, users: ['bob'] },
  heroTextVariant: { default: 'A', buckets: ['A', 'B'] }
};

app.post('/evaluate', (req, res) => {
  const { flagKey, userId } = req.body;
  const flag = flags[flagKey];
  if (!flag) return res.status(404).json({ error: 'flag not found' });
  // explicit per-user allowlist wins
  if (flag.users && flag.users.includes(userId)) return res.json({ value: true });
  if (flag.buckets && userId) {
    // simple deterministic hash: same user always lands in the same bucket
    const sum = userId.split('').reduce((acc, ch) => acc + ch.charCodeAt(0), 0);
    return res.json({ value: flag.buckets[sum % flag.buckets.length] });
  }
  res.json({ value: flag.default });
});

app.listen(4000, () => console.log('toggle service on 4000'));
Expected: POST /evaluate returns a JSON value for a flag key and userId.
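The char-code bucketing inside the service is deterministic, so it can be pulled out and unit-tested in isolation. A minimal sketch (the function name `bucketFor` is ours, not part of the lab code):

```javascript
// bucket.js — the same char-code hash the toggle service uses for bucketing.
function bucketFor(userId, buckets) {
  // sum of character codes gives a cheap, deterministic hash of the user ID
  const sum = userId.split('').reduce((acc, ch) => acc + ch.charCodeAt(0), 0);
  return buckets[sum % buckets.length];
}
```

Because the hash depends only on the user ID, repeated calls always return the same bucket, which is exactly the "bucket bounce" property discussed in the debugging hints later.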
Exercises
- Start the toggle service and call it with curl to evaluate different flags.
- Add a REST endpoint to list all flags with metadata (description, owner).
- Harder: Persist flag changes to a JSON file and implement a small UI to toggle flags.
Mentor tip: Keep the service minimal — production SDKs handle performant caches and streaming updates you don't need for the lab.
Module 2 — Server-side flags (Node)
Objective: Consult the toggle service from your Node API and branch behavior based on flag values.
Narrative: Server-side flags are essential for controlling backend behaviors (pricing logic, experiments that affect conversions).
Hands-on lab
# src/featureClient.js
const fetch = require('node-fetch'); // Node 18+ ships a global fetch; node-fetch is only needed on older versions
module.exports = async function evaluate(flagKey, userId) {
  const res = await fetch('http://localhost:4000/evaluate', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ flagKey, userId })
  });
  return (await res.json()).value;
};
# src/server.js (simplified)
const express = require('express');
const evaluate = require('./featureClient');
const app = express();
app.get('/checkout', async (req, res) => {
  const userId = req.query.user || 'anonymous';
  const newCheckout = await evaluate('newCheckout', userId);
  if (newCheckout) return res.send('New checkout flow');
  res.send('Old checkout flow');
});
app.listen(3000);
Expected: Visiting /checkout?user=bob shows the new checkout flow if flagged.
Exercises
- Integrate logging to record which variant each user got.
- Ensure fallback logic if toggle-service is down (fail-open or fail-closed depending on risk).
- Harder: Cache flag values per request lifecycle to reduce calls to toggle-service.
Mentor tip: Define fail behavior early—some flags should default to off to avoid costly user impact.
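One way to approach the fallback exercise above: race the remote evaluation against a timed default, so an outage fails open or closed per flag. A sketch, not the lab's canonical solution — the names, the `defaults` map, and the 200 ms budget are our assumptions:

```javascript
// featureClientSafe.js — wrap any evaluator (e.g. the fetch-based one above)
// with a timeout and a per-flag fail-safe value.
async function evaluateSafe(evaluator, flagKey, userId, defaults, timeoutMs = 200) {
  // resolves with the fail-safe value if the evaluator takes too long
  const fallback = new Promise(resolve =>
    setTimeout(() => resolve(defaults[flagKey]), timeoutMs));
  try {
    // first settled result wins; an evaluator error also falls back
    return await Promise.race([evaluator(flagKey, userId), fallback]);
  } catch {
    return defaults[flagKey];
  }
}
```

Keeping the default value next to the flag key makes the risk decision explicit: a risky flag like `newCheckout` would default to `false` (fail-closed), per the mentor tip.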
Module 3 — Client-side flags (React)
Objective: Use a React hook to query flags and render variants (hero text A/B test).
Narrative: Client-side flags allow UI experiments without deploying new bundles; keep SDK calls lightweight and cache results.
Hands-on lab
// src/hooks/useFlag.js (React)
import { useEffect, useState } from 'react';
export default function useFlag(flagKey, userId) {
  const [value, setValue] = useState(null); // null until the flag resolves — render a placeholder meanwhile
  useEffect(() => {
    fetch('/api/evaluate', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ flagKey, userId })
    })
      .then(r => r.json())
      .then(d => setValue(d.value))
      .catch(() => setValue(null));
  }, [flagKey, userId]);
  return value;
}
// App.jsx
const variant = useFlag('heroTextVariant', user.id);
return <h1>{variant === 'B' ? 'Try the new hero' : 'Welcome to our app'}</h1>;
Expected: Different users see different hero text variants based on bucketing logic.
Exercises
- Ensure hydration-safe rendering: show a loading placeholder until flag value resolves.
- Instrument client events to report which variant a user saw (for metrics).
- Harder: Implement a safe server-side rendering strategy to avoid flicker (render primary variant server-side).
Mentor tip: Minimize renders by caching flag values in context rather than re-fetching in every component.
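The caching the mentor tip suggests doesn't have to live in React itself. A framework-free sketch (names are ours; the `fetcher` is injected so the cache is easy to test, and a context provider would simply hold one instance of it):

```javascript
// flagCache.js — memoize flag lookups per (flagKey, userId) so many
// components share a single request instead of each fetching the flag.
function createFlagCache(fetcher) {
  const cache = new Map();
  return function getFlag(flagKey, userId) {
    const key = `${flagKey}:${userId}`;
    // store the promise itself, so concurrent callers share one in-flight fetch
    if (!cache.has(key)) cache.set(key, fetcher(flagKey, userId));
    return cache.get(key);
  };
}
```

Caching the promise (not the resolved value) is the key trick: two components mounting in the same tick still trigger only one network call.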
Module 4 — Experiment & analyze results
Objective: Run an A/B experiment for heroTextVariant, collect simple metrics, and evaluate significance.
Narrative: Basic statistical significance with clear metrics (click-through, conversion) helps decide whether to roll out a feature.
Hands-on lab
# Instrumentation sketch
# Client records events to /events
fetch('/events', { method: 'POST', headers: { 'content-type': 'application/json' }, body: JSON.stringify({ userId, event: 'cta_click', variant }) })
# Server stores events in a small SQLite or in-memory store; export CSV for analysis
# Quick analysis using Python or R: compute conversion rates and chi-square test
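The analysis step doesn't require leaving Node, either. A minimal sketch of the chi-square statistic for a 2x2 variant-by-conversion table (function name and layout are ours; compare the result against 3.841, the 95% critical value at 1 degree of freedom):

```javascript
// analyze.js — chi-square statistic for a 2x2 contingency table:
// rows = variant A/B, columns = converted / not converted.
function chiSquare2x2(convA, totalA, convB, totalB) {
  const a = convA, b = totalA - convA; // variant A: converted / not
  const c = convB, d = totalB - convB; // variant B: converted / not
  const n = a + b + c + d;
  // shortcut form of chi-square for a 2x2 table
  return (n * (a * d - b * c) ** 2) /
         ((a + b) * (c + d) * (a + c) * (b + d));
}
```

For example, 30/100 conversions on A versus 50/100 on B yields a statistic above 3.841, so at the usual 5% level you would reject "no difference"; identical rates yield a statistic of 0.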
Exercises
- Implement server route to receive and store events (userId, event, variant, timestamp).
- Collect data for at least 100 users per variant and compute conversion rates.
- Harder: Use an online A/B test library or compute p-values and confidence intervals programmatically.
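For the event-endpoint exercise, a minimal in-memory store is enough for the lab; the sketch below also shows the idempotency-key idea from the debugging hints (the shape of an event and the `eventId` field are our assumptions):

```javascript
// eventsStore.js — in-memory event store with deduplication by eventId,
// so a retried client request can't double-count a conversion.
function createEventStore() {
  const seen = new Set();
  const events = [];
  return {
    record(evt) {
      if (evt.eventId && seen.has(evt.eventId)) return false; // duplicate, drop it
      if (evt.eventId) seen.add(evt.eventId);
      events.push({ ...evt, timestamp: evt.timestamp || Date.now() });
      return true;
    },
    // per-variant counts for one event name, e.g. 'cta_click'
    countsByVariant(eventName) {
      const counts = {};
      for (const e of events) {
        if (e.event === eventName) counts[e.variant] = (counts[e.variant] || 0) + 1;
      }
      return counts;
    }
  };
}
```

An Express route would just call `store.record(req.body)` and return 200 either way; swapping the array for SQLite later doesn't change the interface.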
Mentor tip: Choose a primary metric before running an experiment and avoid peeking too often to prevent false positives.
Testable checkpoints
- Toggle service responds to POST /evaluate with flag values for user IDs.
- Server-side /checkout branches on newCheckout flag and logs variant results.
- React app shows hero text variants based on heroTextVariant and records events.
- Collected events can be exported for analysis and show per-variant counts.
Sample solution sketches
Sketch: Use an express route /api/evaluate that proxies to the toggle service. Store events in SQLite; export CSV. React hook fetches /api/evaluate and caches in context; instrumentation sends POST /events on user action.
Assessment rubric
- Automated (0–50): toggle-service tests (10), server API tests (15), client hook unit tests (10), event endpoint tests (15).
- Manual (0–50): correctness of bucketing logic (15), experiment design & metric selection (15), code clarity & accessibility (10), rollout/fallback strategy (10).
Alternative implementations & trade-offs
- Use LaunchDarkly / Split / Flagsmith: robust, battle-tested, but adds cost and vendor lock-in.
- Client-side only flags are simpler but risk exposing logic; server-side flags are more secure for critical paths.
Recommended reading & tooling
- "A/B Testing: The Most Powerful Way to Turn Clicks into Customers" — basics of experiments
- Tools: LaunchDarkly, Split, PostHog, Amplitude, simple local toggles for dev
Common stumbling blocks & debugging hints
- Flicker on client: show placeholder until flag resolves or use server rendering to pick variant.
- Data leakage: deduplicate user events and add idempotency keys to avoid double-counting.
- Bucket bounce: make bucketing deterministic (hash user ID) to keep users in the same variant.
Accessibility considerations
- Ensure variant UI maintains semantic structure and keyboard navigation; add ARIA attributes if needed.
- Provide captions/transcripts for any multimedia used in experiments; keep variant differences subtle and test with screen readers.
Wrap up by asking learners to propose a rollout plan for a risky feature flag (staged percentage rollout, metrics to watch, rollback criteria) and present it to a mentor for sign-off.