Case study · 01 · Onyx Fiber Ops

Architecting a multi-team operations platform for a 200‑rep fiber sales business.

How I replaced spreadsheets, GHL pipelines, and ad-hoc payroll workflows with a single internal operations platform — serving 8 divisions, ~240 reps, and feeding executive reporting in real time. Built solo, partnered with their ops manager.

Role
Architect & technical lead
Team
Sole engineer & designer, partnered with their ops manager
Stack
Next.js, TypeScript, Supabase, Node, GHL, Sequifi
Timeline
2024 — ongoing

The problem

Onyx Fiber is a door-to-door fiber-internet sales business operating across multiple US divisions. Each division ran semi-independently: their own reps, their own pipeline in GoHighLevel, their own ad-hoc spreadsheets for compensation, and their own version of the truth for executive reporting.

The cost was hidden but enormous. Reps couldn’t see clear performance data. Division leaders couldn’t compare across teams. The executive team was receiving end-of-week reports that were a week stale and didn’t agree with one another. Sequifi (payroll) and GHL (CRM) drifted out of sync, so commissions had to be reconciled by hand every pay period.

The mandate

I was brought in to architect and lead the build of one internal platform that:

  • Replaced spreadsheets and manual reconciliation with a single source of truth.
  • Ingested data from every provider Onyx works with — each ISP exports installs in its own report format on its own cadence.
  • Synced bi-directionally with GHL (pipeline) and Sequifi (payroll) so reps and managers worked in their familiar tools while the platform stayed authoritative.
  • Surfaced a TSS/TSI (Team Strength Score / Team Improvement Score) ranking so division leaders could compare apples to apples.

How I approached it

The technical challenge wasn’t a single hard problem — it was making eight messy realities reconcile. I broke it into three layers:

  1. Ingestion layer. Each ISP report has its own column ordering and quirks — some send CSV, some XLSX, some PDF. I built a typed adapter pattern so adding a new provider is a single file with a parse function and a schema mapping. Reports flow into a normalised installs table.
  2. Sync layer. GHL and Sequifi expose webhooks and REST endpoints but neither agrees with the other on rep identity. I designed a stable internal rep_id, with reconciler jobs that match by phone + email + name and surface conflicts to a human queue. The CRM and payroll stayed in their teams’ hands — our platform became the source of truth.
  3. Reporting layer. Real-time dashboards built on Supabase’s Postgres — division leaders see installs, intakes, install rate, and TSS/TSI live; executives get the same data, rolled up. The custom TSS/TSI engine is a SQL function that runs on every install event, so rankings are never stale.
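The adapter pattern in the ingestion layer can be sketched roughly like this — a minimal, hypothetical illustration; the interface names (`NormalisedInstall`, `ProviderAdapter`) and column headers are invented for the example, not the production schema:

```typescript
// Hypothetical sketch of the one-file-per-provider adapter shape.
// Field and column names are illustrative only.

interface NormalisedInstall {
  providerInstallId: string;
  repExternalId: string; // the provider's own identifier for the selling rep
  customerZip: string;
  installedAt: string;   // ISO 8601
}

interface ProviderAdapter {
  provider: string;
  /** Parse one raw export (already decoded into keyed rows) into normalised installs. */
  parse(rows: Record<string, string>[]): NormalisedInstall[];
}

// Onboarding a new ISP = one file exporting an adapter like this.
const acmeFiberAdapter: ProviderAdapter = {
  provider: "acme-fiber",
  parse: (rows) =>
    rows.map((r) => ({
      providerInstallId: r["Order #"],
      repExternalId: r["Agent ID"],
      customerZip: r["ZIP"],
      installedAt: new Date(r["Install Date"]).toISOString(),
    })),
};
```

Because every adapter targets the same `NormalisedInstall` shape, everything downstream — sync, dashboards, TSS/TSI — is provider-agnostic.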

Decisions I’m glad I made

Supabase over a custom Node + Postgres stack. The team was small and the surface area was huge; Supabase’s auth + RLS + realtime cut the boilerplate I’d otherwise have written for weeks. Where it doesn’t fit (heavy ingestion jobs, third-party syncs) I run a separate Node service.

TypeScript everywhere — types from the database up. We generate TS types from the Supabase schema and use them across the Next.js app and the Node worker. A column rename ripples through type errors in 30 seconds; nothing ships with a stale shape.
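To make the “ripples through type errors” point concrete, here is a toy version of how the generated types flow through the app. The `Database` type below is a hand-written stand-in for what Supabase’s type generation emits; in the real project it is generated from the schema, not written by hand:

```typescript
// Stand-in for generated Supabase types — illustrative only.
type Database = {
  public: {
    Tables: {
      installs: {
        Row: { id: string; rep_id: string; installed_at: string };
      };
    };
  };
};

// Both the Next.js app and the Node worker derive row types from the
// same generated Database type, so a column rename in the schema
// becomes a compile error everywhere the old name is used.
type InstallRow = Database["public"]["Tables"]["installs"]["Row"];

function formatInstall(row: InstallRow): string {
  return `${row.rep_id} @ ${row.installed_at}`;
}
```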

Human-in-the-loop for the messy bits. Reconciling reps and installs across systems is inherently fuzzy. Rather than guessing, the platform surfaces conflicts to an ops queue. People are good at the last 5% of resolution; code shouldn’t fake confidence it doesn’t have.
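The match-or-queue decision can be sketched like this — a simplified, hypothetical version of the reconciler logic; the two-of-three-signals threshold and type names are assumptions for illustration:

```typescript
// Illustrative "match or queue" reconciler: match confidently or defer
// to a human. Thresholds and shapes are hypothetical.

interface RepRecord {
  phone: string;
  email: string;
  name: string;
}

type MatchResult =
  | { kind: "matched"; repId: string }
  | { kind: "conflict"; reason: string };

function reconcile(
  incoming: RepRecord,
  known: Map<string, RepRecord> // internal rep_id -> record
): MatchResult {
  const hits: string[] = [];
  for (const [repId, rec] of known) {
    const signals =
      Number(rec.phone === incoming.phone) +
      Number(rec.email.toLowerCase() === incoming.email.toLowerCase()) +
      Number(rec.name.toLowerCase() === incoming.name.toLowerCase());
    if (signals >= 2) hits.push(repId); // two of three signals = confident match
  }
  if (hits.length === 1) return { kind: "matched", repId: hits[0] };
  // Zero or multiple candidates: don't guess — surface to the ops queue.
  return {
    kind: "conflict",
    reason: hits.length === 0 ? "no candidate" : "ambiguous candidates",
  };
}
```

The key design choice is the return type: there is no “best guess” branch, so every ambiguous case is forced into the human queue by construction.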

Outcome

  • Payroll reconciliation went from ~6 hours per pay period to under 30 minutes.
  • Division leaders now make staffing decisions on live data, not Friday-afternoon spreadsheets.
  • Onboarding a new ISP provider went from “weeks of glue code” to one adapter file (~150 lines).
  • The TSS/TSI ranking became the language division leaders use in weekly stand-ups — the platform shaped how the org talks about performance.

What I’d do differently

Build the ops-conflict queue earlier. We tried to auto-resolve too much in v1 and spent the first month chasing edge cases. The moment we surfaced conflicts to a human queue, velocity tripled.

If this kind of work sounds like your team’s problem space —