ChatGPT vs Claude Code: How I Built an Analytics Stack (and When Each One Won)
I just finished standing up a lightweight analytics stack. I started with a single analytics.js in the browser, asked ChatGPT to reverse-engineer the backend from that client, and then pulled the repo into Claude Code for the hardening work: tests, configs, and the “don’t-break-what-works” edits.
A few observations:
- ChatGPT is a fantastic architect and first-draft generator. I asked it to write an NGINX config for the backend, add security headers, and enable HTTP/2 + QUIC. It produced a full, working setup on the first pass, plus a clean FastAPI scaffold for the ingest endpoints. Great velocity.
- Claude Code is a careful surgeon. When I later needed targeted tweaks (adding specific analytics API endpoints, adjusting CORS/CSP, wiring health checks), ChatGPT sometimes rewrote big sections of a config that were already correct (especially NGINX). Claude Code, operating repo-wide from the terminal, tended to find the exact lines and propose minimal diffs without collateral damage.
That pattern repeated: when the change was architectural or green-field, ChatGPT flew. When it was surgical, like threading changes through NGINX, systemd, app config, and tests, Claude Code earned its keep.
What happened in my build (short version)
- Seed the backend from the browser client. I uploaded analytics.js and had ChatGPT discover endpoints, build a schema, and scaffold a FastAPI backend. It also drafted an NGINX server block with HTTP/2 + QUIC and sane security headers. It all started up cleanly.
- Reality shows up. I needed to:
  - add a dedicated /collect endpoint (with strict method/size limits; see the sketch after this list),
  - adjust CORS and CSP to allow s.ado.im and our analytics domain,
  - expose a few other endpoints,
  - and keep logs tidy for troubleshooting.
- Where things didn’t go quite right. Posting all of my configs back to ChatGPT sometimes led to over-eager rewrites (e.g., it would regenerate an NGINX block that already worked and break something).
- Hand off to Claude Code. I pointed it at the files and asked Claude Code for specific changes (e.g., update the CSP to allow ado.im). It searched the files, proposed small and specific diffs, and left the rest alone. Tests, configs, and little glue scripts landed clean.
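For concreteness, here’s a minimal sketch of what that /collect work looks like in FastAPI. Only the endpoint paths, the POST-only restriction, and the s.ado.im origin come from the build above; the Event fields, the 16 KB cap, and the handler details are my illustrative assumptions.

```python
from fastapi import FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

MAX_BODY_BYTES = 16 * 1024  # assumed cap; the post only says "strict size limits"

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://s.ado.im"],  # origin named in the post
    allow_methods=["POST"],              # /collect is POST-only
    allow_headers=["Content-Type"],
)

class Event(BaseModel):
    # Illustrative fields; the real schema came from analytics.js.
    name: str
    url: str
    ts: int

@app.post("/collect", status_code=204)
async def collect(event: Event, request: Request):
    # Backstop size check; in the actual deployment NGINX's
    # client_max_body_size is the first line of defense.
    if int(request.headers.get("content-length", 0)) > MAX_BODY_BYTES:
        raise HTTPException(status_code=413, detail="payload too large")
    # ... enqueue or store the event here ...

@app.get("/healthz")
async def healthz():
    return {"ok": True}
```

Enforcing the size limit in both the app and NGINX is deliberate: the app check keeps tests honest even when you run uvicorn without the proxy in front.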
Net effect: ChatGPT got me from zero to “it runs.” Claude Code kept it correct as the surface area grew.
Steal these prompts
Reverse-engineer a backend from a browser client (ChatGPT):
You’re a senior backend engineer. I’m pasting analytics.js from a site.
Review the file and determine the minimal service that accepts those events.
Deliver: (1) endpoint paths + JSON schema, (2) FastAPI app with pydantic models,
(3) input validation + size limits, (4) a basic test for one happy path and one failure path.
Keep it small and runnable with uvicorn.
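For deliverable (4), the happy-path/failure-path pair usually comes back looking something like this sketch, assuming the /collect app above is saved in a module named app (hypothetical name):

```python
from fastapi.testclient import TestClient

from app import app  # hypothetical module name

client = TestClient(app)

def test_collect_happy_path():
    resp = client.post("/collect", json={"name": "pageview", "url": "/", "ts": 1})
    assert resp.status_code == 204

def test_collect_rejects_invalid_payload():
    resp = client.post("/collect", json={"name": "pageview"})  # missing fields
    assert resp.status_code == 422  # pydantic validation error
```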
First-pass secure NGINX (ChatGPT):
Draft an NGINX server block for my FastAPI backend behind a reverse proxy.
Requirements: HTTP/2 and QUIC/HTTP/3, strict security headers (CSP, X-Content-Type-Options,
Referrer-Policy, Permissions-Policy), gzip off for JSON, and a /healthz that bypasses auth.
Explain each header briefly. Output a single server block I can drop in.
Surgical config edits without collateral damage (Claude Code):
Add a /collect endpoint and allow origin https://s.ado.im via CORS and CSP.
Read the @site_nginx.conf file. Propose the smallest diffs only.
Repo-wide safety check before deploy (Claude Code):
Scan the repo for secrets in configs, hard-coded origins, and oversized timeouts.
List findings by file with recommended minimal edits. Produce diffs only; no rewrites.
Generate a short smoke-test script that hits /healthz and a sample /collect with valid JSON.
Stop before applying changes.
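The smoke-test script that prompt asks for can be as small as this sketch; the base URL and payload fields are my assumptions:

```python
import sys

import requests

BASE = "http://127.0.0.1:8000"  # assumed local uvicorn address

def main() -> int:
    # Probe the health endpoint first.
    health = requests.get(f"{BASE}/healthz", timeout=5)
    if health.status_code != 200:
        print(f"healthz failed: {health.status_code}")
        return 1

    # Then post one valid event to /collect.
    collect = requests.post(
        f"{BASE}/collect",
        json={"name": "smoke", "url": "/", "ts": 0},
        timeout=5,
    )
    if collect.status_code not in (200, 204):
        print(f"collect failed: {collect.status_code}")
        return 1

    print("smoke test passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```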
Guardrails I keep on
- Ask for minimal diffs. Literally say “propose the smallest change” and “stop before applying.”
- Never paste secrets or tenant data. Redact configs before they go into a prompt.
- Gate config changes with a command. “Show the nginx -t output you expect and why,” or “what will pytest -q run?” (see the gate sketch after this list).
- Commit small, review diffs. Treat the AI like a junior pair: review every patch, insist on tests.
- Keep a rollback plan. Stash the last-known-good NGINX and app configs before experimenting.
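One way to wire those gates together is a tiny pre-deploy script; this is a sketch, and the command list and exit handling are my assumptions:

```python
import subprocess
import sys

# Run each gate; refuse to deploy if any fails.
CHECKS = [
    ["nginx", "-t"],   # validate the NGINX config
    ["pytest", "-q"],  # run the test suite quietly
]

for cmd in CHECKS:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"{' '.join(cmd)} failed:\n{result.stderr or result.stdout}")
        sys.exit(1)

print("all gates passed; safe to deploy")
```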
Bottom line
- ChatGPT is my architect and fast drafter. It excels at blank-page problems: stand up a service, write the first NGINX block, sketch the data model, get a demo running.
- Claude Code is my surgeon. It shines when the job is a precise change with a minimal blast radius: update three files and keep everything else exactly the same.
Start with ChatGPT when you need momentum and scaffolding. Hand things to Claude Code when you’re threading needles in a live codebase. Switching tools isn’t a failure mode; it’s the workflow.
Where do you draw the line between architect vs. surgeon? If you’ve used both ChatGPT and Claude Code, what’s your personal rule of thumb for when each wins and why?