I've spent 16 years shaping how the web loads, renders, and performs.
Now I'm building what comes next — full-stack systems powered by AI.
Engineering leader. Full-stack builder. Agentic engineer.
01 — About
I went from a kid tinkering with HTML on borrowed laptops in Mumbai to leading the UI engineering org at one of the world's largest ad-tech companies. The web changed my life — so I decided to spend mine making it faster.
I've spent 16 years shaping how the web loads, renders, and performs — first as a practitioner writing every line of CSS myself, then as an engineering leader turning those hard-won insights into platform-level standards. At Media.net, I built and led a 36-person cross-functional org spanning front-end devs, full-stack engineers, designers, and SDETs. I've conducted 300+ interviews, run six years of performance appraisals, and still review code every week. Now? I'm back at the keyboard, building AI-native full-stack products that ship faster than most teams can spec.
Here's what I've learned: great front-end engineering is invisible. Users never think about your component architecture or your build pipeline — they just feel fast or slow, smooth or janky, delightful or frustrating. My job is to make sure they only feel the first one in each pair. Every millisecond matters. Every interaction tells a story.
02 — Experience
Some people job-hop. I went deep. Five roles at one company, each one earned — because the hardest problems are the ones worth staying for.
03 — Skills
Frameworks come and go. What lasts is knowing when to reach for each one — and when to put it down. Here's what I reach for most.
04 — The AI Chapter
In 2024, I stopped managing and started building again. Full-stack this time. With AI as my co-pilot, I'm shipping more in a month than teams used to ship in a quarter.
Here's the thing nobody tells you about senior leadership: you start to miss the building. So I made a bet. I went hands-on again — but this time with a superpower. I'm using LLMs not as toys, but as engineering multipliers. Claude, Gemini, and the latest frontier models aren't replacing my judgment — they're amplifying it. I architect the systems, the AI accelerates the execution. The result? Full-stack applications from zero to production in days, not months.
What I'm building: CPA lander builders that generate high-converting pages on demand. SERP page serving platforms that scale to millions of requests. App builders that turn prompts into production code. This isn't vaporware — it's shipping, live, making money.
My hot take: the engineers who'll win the next decade aren't the ones who can write the most code — they're the ones who know what to build and can direct AI to build it with them. Taste, architecture, and product instinct matter more than ever. The keyboard is optional. The thinking isn't.
Agentic Engineering, Not Vibe Coding
There's a difference between prompting an LLM and hoping for the best, and engineering systems where AI implements while you own architecture, quality, and taste. I practice what the industry is calling Agentic Engineering — orchestrating AI agents through structured workflows: spec-driven development, incremental implementation, context engineering, and rigorous verification loops. I don't vibe code. I conduct the orchestra.
Tools of the Trade — 2026
The AI developer toolchain isn't a buzzword list anymore — it's a production stack. Here's mine:
Agentic IDEs: Cursor for AI-native editing with Background Agents and the fastest autocomplete on the market. Claude Code for terminal-first agentic development with a 1M-token context window — no IDE, no editor, just raw execution. Windsurf's Cascade for multi-step, self-recovering agent workflows when the task needs autonomy.
Foundation Models: Claude Opus 4.6 for deep architectural reasoning and complex system design. Claude Sonnet 4.6 for high-throughput generation — the workhorse. Gemini for multimodal tasks and cross-referencing large contexts.
The MCP Ecosystem: Model Context Protocol is the connective tissue — 97M+ monthly SDK downloads, 6,400+ registered servers, backed by Anthropic, OpenAI, Google, and Microsoft. I use MCP servers for GitHub, PostgreSQL, Supabase, Slack, and custom internal tools. It's the protocol that replaced every point-to-point AI integration.
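As one concrete example, a minimal MCP server configuration might look like the following. The shape and the `@modelcontextprotocol/*` package names follow the public reference servers; exact keys vary by client and version, and the connection string and token here are placeholders:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/app_db"]
    }
  }
}
```

One file, and every agent in the session can read issues, query the database, and act on the results through the same protocol.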
Skills, Plugins & Automation: Claude Code's 2,400+ skill ecosystem and plugin marketplace for encoding repeatable engineering workflows — from spec-driven development to automated code review. Claude Cowork for orchestrating multi-agent tasks across documents, data, and codebases. Custom MCP gateways for access control and auditability at scale.
This isn't a list of things I've tried. It's what I ship with every single day.
From Team Lead to AI Conductor
Leading 36 engineers taught me that great output comes from clear specs, tight feedback loops, and knowing when to delegate vs. when to dive in. Turns out, the same principles apply to orchestrating AI agents. Define the task. Provide context. Verify the output. Iterate. The skill isn't replaced — it's amplified. I went from managing people to conducting an orchestra of human intuition and machine execution.
05 — Projects
I believe in learning by building. These aren't side projects — they're the systems, teams, and platforms I've poured years into.
06 — Leadership
300+ interviews. Six years of appraisals. A cross-functional team of front-end devs, full-stack engineers, designers, and SDETs. Here's what actually matters.
07 — Thinking
Most of my production work is NDA-bound. So instead of showing proprietary code, here's how I think — the architectural patterns, workflows, and principles behind what I ship.
My Agentic Engineering Workflow
Every project I build follows the same loop — whether I'm working alone or orchestrating a team of AI agents:
1. Spec first, always. Before a single line of code, I write a plain-English spec: what the system does, who it's for, the constraints, the non-goals. This isn't documentation — it's the context window for every AI agent that follows. Garbage specs produce garbage code, whether the author is human or LLM.
2. Incremental implementation with verification. I never hand an agent an entire project. I break it into small, testable slices — a route, a component, a data pipeline. Each slice gets implemented, tested, and reviewed before the next one starts. The same way I'd manage a junior dev, except the feedback loop is 30 seconds instead of a day.
3. Context engineering is the real skill. The quality of AI output is directly proportional to the quality of context you feed it. I curate what goes into the prompt — relevant files, architectural decisions, constraints, prior art — the way a senior engineer curates a code review. This is the skill that separates agentic engineers from prompt hobbyists.
4. Human owns architecture. AI owns execution. I decide the system boundaries, the data flow, the technology choices, the tradeoffs. The AI implements. This isn't about ego — it's about accountability. When things break at 3am, the human who made the architectural call is the one who needs to understand why.
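The loop in steps 1 through 4 can be sketched in a few lines of TypeScript. This is an illustrative skeleton, not production code; `implement` and `verify` are hypothetical stand-ins for an agent invocation and a test/review pass:

```typescript
// Illustrative implement/verify loop for one slice of work.
// `implement` and `verify` are hypothetical stand-ins: in practice they
// would invoke an agentic CLI and run the slice's tests.

type Slice = { name: string };
type Verdict = { pass: boolean; feedback: string };

async function buildSlice(
  slice: Slice,
  implement: (s: Slice, feedback: string) => Promise<string>,
  verify: (artifact: string) => Promise<Verdict>,
  maxAttempts = 3,
): Promise<string> {
  let feedback = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const artifact = await implement(slice, feedback); // agent produces code
    const verdict = await verify(artifact);            // tests and review run
    if (verdict.pass) return artifact;                 // slice accepted, move on
    feedback = verdict.feedback;                       // errors become context
  }
  throw new Error(`Slice "${slice.name}" failed after ${maxAttempts} attempts`);
}
```

The point isn't the code; it's the shape: small slice in, verified artifact out, and failures fed back as context instead of thrown away.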
Grounded in the emerging discipline of agentic engineering and the Model Context Protocol ecosystem.
Working Under NDA — and Why That's a Signal, Not a Gap
Most of my highest-impact work — the ad platform serving tens of millions of impressions, the internal tools, the team I built — is bound by NDA. I can't link you to a repo. I can show you something better: how I think about problems.
When I describe "sub-200ms P95 response times on a SERP platform," I'm not giving you a marketing line — I'm telling you I've instrumented latency percentiles, profiled Redis cache hit rates, and made deliberate tradeoffs between freshness and speed. When I say "200+ component design system adopted by four teams," I'm telling you I've solved the hard problems: versioning, breaking change management, accessibility compliance, and getting designers and engineers to agree on tokens.
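To make "instrumented latency percentiles" concrete, here is the nearest-rank percentile calculation in miniature. This is a toy illustration, not the platform's code; real services stream samples into fixed-memory histograms (HDR histograms, t-digest) instead of sorting raw arrays:

```typescript
// Nearest-rank percentile over a window of request latencies (ms).
// Toy illustration: production metrics pipelines use streaming
// histograms, not sorted arrays of raw samples.
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[Math.max(rank, 1) - 1];
}

// One window of ten request latencies:
const latencies = [120, 95, 180, 210, 150, 170, 140, 190, 160, 130];
const p95 = percentile(latencies, 95); // 210 for this window
```

Sorting ten samples is fine for a demo; at millions of requests, the same question gets answered by a histogram in constant memory, which is exactly the kind of tradeoff the P95 number implies.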
The patterns are transferable. The judgment is portable. The NDA protects the specifics — but the thinking behind them is what you'd be hiring.
For open-source contributions and personal projects, visit my GitHub. Follow my thinking on X/Twitter.
Where I'm Headed — The Three-Tier Future
The industry is converging on a model where engineers operate across three tiers of AI orchestration, and I'm building my practice around all three:
Tier 1 — Interactive. In-session pairing with a single agent. This is Cursor, this is Claude Code in the terminal. You're the conductor — steering every decision in real time. I use this for architectural spikes, complex debugging, and anything where judgment matters more than speed.
Tier 2 — Parallel orchestration. Running 3-10 agents simultaneously in worktrees, each tackling a different slice of the spec. Claude Cowork, local orchestrators, background agents. You front-load the work (writing a good spec) and back-load it (reviewing the output), but the middle runs without you.
Tier 3 — Async cloud agents. Assign a task, close the laptop, come back to a pull request. This is the factory model — when you can orchestrate dozens of agents in parallel, the quality of your specification becomes the bottleneck, not your coding speed.
The engineers who thrive in the next two years won't be the ones who type the fastest. They'll be the ones who specify the clearest, verify the hardest, and ship the most. That's the bet I'm making.
Lessons From 16 Years of Building — and Breaking — Things
The best code you write is the code that makes other people's code better. I learned this leading a design system adoption across four teams. Standards compound. Conventions liberate. The boring work is the work that scales.
Hire for curiosity, not credentials. 300+ interviews taught me that the best engineers aren't the ones with the fanciest degrees — they're the ones who ask "why?" before "how?". I built my hiring rubrics around learning velocity, not trivia recall.
Performance is empathy. Shaving 200ms off page load isn't a technical exercise — it's giving millions of people their time back. I've carried this mindset from CSS optimization to AI-native architecture: if the user can feel it, it matters.
The hardest leadership moments are the ones that matter most. Layoffs, restructures, difficult feedback — the things that keep you up at night are the things that forge real judgment. I wouldn't trade a single one of those conversations.
Stay hands-on. The moment a leader stops building, they start guessing. I went back to the keyboard after years of managing because the tools changed so fundamentally that leading without building felt dishonest. Best decision I ever made.
08 — Contact
Got an interesting problem? Building something ambitious? Or just want to nerd out about performance and pixels? I'm always up for a good conversation.
Also available for: speaking engagements, podcast appearances, technical advisory, and writing collaborations on agentic engineering & AI-native development.