Sample Report · Jordan Park is a fictional composite engineer assembled from realistic patterns. Real client reports name a real person and cite every claim.
VettLab · May 3, 2026 · High Confidence · Strong Recommend · Sleeper

Scouting Report: Jordan Park

Staff ML Systems Engineer · Lumen Labs · San Francisco, CA · Stanford CS PhD

Scout's Take
"Plus-plus systems instincts with research credentials to match. Floor is a Staff IC at any top-5 AI lab — they're already there. Ceiling is founding engineer at the next major inference startup, or a return to academia in 5-7 years to start a hot serving-systems lab. Tenure pattern says they'll move; the question is whether for the right founder. Recruit them now while comp bands undervalue compiler-savvy serving engineers."
— VettLab, May 2026
Scouting Card

Jordan Park

Staff ML Systems Engineer, Lumen Labs
San Francisco, CA
PhD Computer Science, Stanford (2022)
BS Computer Science, MIT (2018)
they / them
Technical Depth: 9/10
Experience: 7/10
Academic Pedigree: 9/10
Public Visibility: 7/10
Recruitability: 8/10
Network Strength: 8/10
Overall Grade: A−
Est. Comp (TC): $580–720k
Edge: Compiler + LLM Serving
Combine Measurables
Papers: 3 (1 first-author MLSys)
Citations: 1,847 (Google Scholar)
GitHub Stars: 4.2k (412 in last 12 mo)
Conf. Talks: 3 (MLSys, GTC, CUDA Engineers Meetup)
Tenure: 13 mo (current role)
Projection
Floor: Staff IC at any frontier AI lab. Already in band — Anthropic / OpenAI / DeepMind would re-hire on a phone call.
Ceiling: Founding engineer / first systems hire at a $100M+ AI infra startup. Or, in 5-7 years, faculty at an R1 building a serving-systems lab.
Bust Scenario: Founder mismatch → bounces to a 4th gig in 18 months → settles for senior IC with a "serial mover" reputation.
Player Comp: Technical arc resembles Tim Dettmers circa 2021 — pre-bitsandbytes virality but post-PhD foundation. Compiler instincts are more developed than Dettmers's at the equivalent stage; weaker on the social-media flywheel.
Career Timeline

2014 – 2018 · BS Computer Science, MIT
TA for 6.824 (Distributed Systems) for two semesters. Senior thesis on cache-coherent distributed key-value stores.

Summer 2017 · SWE Intern, Google Brain
Worked on TPU compiler optimization. First exposure to large-model training systems.

Summer 2018 · Research Intern, NVIDIA
CUDA kernel work for sparse linear algebra. Resulted in two cuSPARSE patches.

2018 – 2022 · PhD Computer Science, Stanford (Systems Group)
Advisor: Prof. Helen Vasquez (composite). Thesis: "Compiler-Assisted Kernel Specialization for Mixed-Precision LLM Inference." Three publications during the PhD; one MLSys best-paper runner-up.

Aug 2022 – Mar 2024 · Member of Technical Staff, Anthropic
Inference team. Contributed to latency improvements on the Claude 3 family. 19-month tenure.

Apr 2024 – Present · Staff ML Systems Engineer, Lumen Labs
Employee #15 at a Series B AI infra startup. Building the next-gen serving stack: custom CUDA kernels, distributed inference, mixed-precision compiler.

Risk Factors

Watch-Outs Before Engaging

Network Connections

Prof. Helen Vasquez · Stanford CS Systems Group (Thesis Advisor)
PhD advisor. Hall-of-fame systems researcher with deep ties across NVIDIA, Google, and the AI lab circuit. Strongest single warm path.

Anthropic Inference Alumni · 2022–2024 Cohort (Former Co-workers)
Roughly a dozen ex-colleagues now scattered across AI labs and infra startups. Tight-knit cluster. Backchannel reference quality is high.

vLLM / Mosaic Contributor Circle · OSS Collaboration (OSS Network)
Active in PRs and design discussions across two of the major LLM serving projects. Path in via maintainer DMs.

"Kernel Mafia" SF · Informal Cluster
Loose group of ex-NVIDIA, ex-DeepMind, ex-Anthropic kernel writers in SF. Meet at the CUDA Engineers Meetup and private dinners. High-density warm intros.
Recruiter Notes

Likely Motivations

  • Wants to ship serving systems that move the frontier, not maintain mature stacks
  • Compiler-systems intersection is rare and they know it — will pick the role with the best technical canvas, not the highest comp
  • Founding-engineer narrative will resonate harder than Staff-at-FAANG
  • Academic re-entry is a real outside option in 5-7 years; reference it when discussing what the next 24mo enables

Compensation Expectations

  • Current TC at Lumen estimated $580–720k (cash + equity at Series B prices)
  • Frontier labs would pay $700k-1M+ for a Staff IC with this profile today
  • Founding-engineer role: cash flexibility, but expect 1-3% equity at seed/A or it won't compete with frontier-lab base
  • Sign-on bonuses move them less than equity refresh structure

Warm Intro Paths

  • Prof. Vasquez (Stanford) — if you have a Stanford CS systems contact, this is the highest-conversion intro
  • Ex-Anthropic inference colleague — backchannel a current Anthropic IC; the "should I take a call from X" filter is tighter at frontier labs
  • vLLM core maintainer — OSS collaboration history opens that door
  • CUDA Engineers Meetup organizer — SF event circuit, in-person warm path

Information Gaps

  • What's the rumored side project? Could be hobby, could be stealth co-founder move
  • Reasons for leaving Anthropic at 19mo — performance, team change, or proactive pull?
  • Stated comp expectations not public; estimate is range-anchored, not source-confirmed
  • Management appetite unknown — will they accept a tech-lead role that includes hiring?
Full Intelligence Report

Executive Summary

Jordan Park is a Staff ML Systems Engineer at Lumen Labs (Series B AI infrastructure startup), employee #15, in San Francisco. PhD from Stanford CS Systems Group (2022) under a hall-of-fame advisor; BS from MIT (2018). Spent 19 months at Anthropic on the inference team before joining Lumen in April 2024. Specialty is the compiler-systems intersection — mixed-precision LLM serving, custom CUDA kernels, distributed inference. Three peer-reviewed papers (one first-author at MLSys, two second/third-author at NeurIPS), 1,847 citations, and a 4.2k-star OSS project. Highly recruitable on signal: short current tenure, public output gap since February, podcast hint of a stealth side project. The technical core is plus-plus; the open questions are management appetite and what they'd actually move for.

Profile

Name: Jordan Park
Pronouns: they / them
Current Role: Staff ML Systems Engineer at Lumen Labs (employee #15)
Location: San Francisco, CA
Education: PhD CS, Stanford (2022, Systems Group); BS CS, MIT (2018)
Specialty: LLM inference systems, CUDA kernels, mixed-precision compilation, distributed serving
Career arc: MIT → Stanford PhD → Anthropic (19 mo) → Lumen Labs (current, 13 mo)

Technical Footprint

Selected Publications

Open Source

Talks

Assessment: Depth over breadth. The publication record is small but every paper is substantive at venues that matter for systems work. The OSS work signals taste — they pick high-leverage projects and contribute material features, not drive-by typo fixes.

Network & Connections

Stanford Systems lab is the academic anchor; the Anthropic inference cohort is the industry anchor. Both are dense, opinionated, gossip-prone groups — backchannel references will be candid and cheap to obtain. The "kernel mafia" SF cluster (informal, 30-50 senior engineers across NVIDIA, Anthropic alumni, ex-DeepMind) is the realistic warm-path layer for outreach. The vLLM contributor list is a public-but-underused on-ramp.

Digital Presence

LinkedIn: Active — 2.1k connections, sparse posts, recommendations from Stanford and Anthropic alumni
GitHub: Active — @jordanparker (composite handle), 4.2k cumulative stars, 412 in last 12 months
Google Scholar: Active — h-index 8, three first/co-author papers, 1,847 citations
Personal Blog: Quiet — jordanpark.dev; four deep-technical posts, last post November 2025
Twitter / X: Dormant — 6.4k followers; no posts since February 2026
Podcast Appearance: Confirmed — Latent Space, January 2026 (~52 minutes; mentioned a "weekend project" without specifics)

Reputation & Red Flags

Positive Signals

Concerns / Red Flags

Recruiter Notes

The pitch that lands isn't "more compensation" — it's "the technical canvas you can't get at a frontier lab." Lumen is small enough that they probably already have outsized scope; the move would have to either (a) shrink the company further toward founding-engineer scope, or (b) widen the canvas materially (e.g. a frontier lab giving them a whole serving-systems org to design from scratch).

Open with the technical problem you're solving, not with the comp number. Get to comp on call 2 once they've decided the work is interesting. If you're a recruiter representing a founder, lead with the founder — this person will pattern-match on whether the founder thinks like Vasquez or like a McKinsey graduate.

Reference checks: Anthropic inference alumni will give the truth fast. Stanford lab alumni will give the truth slowly. Don't ask Lumen colleagues until very late in the process — the company is small enough that the question signals intent.

Format Notes

This is a sample composite report. Jordan Park is fictional — assembled from realistic patterns we observe across Staff-level ML systems engineers in the AI infrastructure niche. Names, papers, advisors, employer details, and citation counts are illustrative.

In a real client report, this section is titled Sources and lists every URL with a brief note on what it confirms (LinkedIn for current role, the actual paper PDF for publications, Google Scholar for citation counts, personal blog or Twitter for direct quotes, etc.). Every material claim in the report is anchored to one of those sources, and single-source claims are explicitly flagged in-line.