Three types of LLM workloads and how to serve them (modal.com)
rippeltippel 3 days ago [-]
> Gallia est omnis divisor in partes tres.

OCD-driven fix: The correct Latin quote is "Gallia est omnis divisa in partes tres" ("All Gaul is divided into three parts").

charles_irl 3 days ago [-]
oof ty, willfix
ZsoltT 3 days ago [-]
> we recommend using SGLang with excess tensor parallelism and EAGLE-3 speculative decoding on live edge Hopper/Blackwell GPUs accessed via low-overhead, prefix-aware HTTP proxies

lord

charles_irl 3 days ago [-]
Sorry to lead with a bunch of jargon! Wanted to make it obvious that we'd give concrete recommendations instead of palaver.

The technical terms there are all explained and diagrammed later in the post, and the recommendations are derived from something close to first principles (e.g. roofline analysis).
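
To give a flavor of that roofline argument, here is a back-of-the-envelope sketch in Python. It uses spec-sheet numbers for an H100 SXM (roughly 989 dense bf16 TFLOP/s, 3.35 TB/s of HBM bandwidth) and a hypothetical 70B-parameter fp16 model; none of these figures come from the post. The idea: each decode step streams every weight from HBM while doing only ~2 FLOPs per parameter per sequence, so decode stays memory-bound until batch size approaches the hardware's ridge point.

    # Back-of-the-envelope roofline for LLM decode. All numbers are
    # spec-sheet or rule-of-thumb values, not measurements from the post.
    PEAK_FLOPS = 989e12    # H100 SXM dense bf16 Tensor Core, FLOP/s
    PEAK_BW = 3.35e12      # H100 SXM HBM3 bandwidth, bytes/s
    N_PARAMS = 70e9        # hypothetical 70B-parameter model
    BYTES_PER_PARAM = 2    # fp16/bf16 weights

    ridge = PEAK_FLOPS / PEAK_BW  # ~295 FLOP/byte: compute overtakes bandwidth here

    def decode_step_time(batch: int) -> float:
        """Lower bound on one decode step: the slower of compute and weight traffic."""
        t_compute = 2 * N_PARAMS * batch / PEAK_FLOPS    # ~2 FLOPs/param/token
        t_memory = N_PARAMS * BYTES_PER_PARAM / PEAK_BW  # weights streamed once per step
        return max(t_compute, t_memory)

    for batch in (1, 64, 256, 512):
        intensity = 2 * batch / BYTES_PER_PARAM  # FLOPs per byte of weights moved
        bound = "memory" if intensity < ridge else "compute"
        print(f"batch={batch:4d}  intensity={intensity:5.0f} FLOP/B  "
              f"{bound}-bound  step >= {decode_step_time(batch) * 1e3:5.1f} ms")

Under these assumptions a batch-1 decode step has a ~42 ms floor of pure weight traffic, which is why batching and tensor parallelism (which multiplies the aggregate memory bandwidth serving one model) matter so much for decode.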

omneity 2 days ago [-]
Very cool insights, thanks for sharing!

Do you have benchmarks for the SGLang vs vLLM latency and throughput question? Not to challenge your point, but I’d like to reproduce these results and fiddle with the configs a bit, also on different models & hardware combos.

(happy modal user btw)
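
Not from the thread, but for anyone wanting to run that comparison: a minimal harness sketch that treats both servers as black-box OpenAI-compatible endpoints, which both SGLang (sglang.launch_server, default port 30000) and vLLM (its OpenAI-compatible API server, default port 8000) expose. The URLs, model name, prompt, and concurrency below are placeholders to adjust, and it assumes non-streaming responses that report usage.completion_tokens.

    # Minimal latency/throughput harness for two OpenAI-compatible LLM servers.
    # Endpoints, model, and prompt are placeholders, not values from the post.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    ENDPOINTS = {
        "sglang": "http://localhost:30000/v1/completions",  # sglang.launch_server default port
        "vllm": "http://localhost:8000/v1/completions",     # vLLM OpenAI server default port
    }
    PAYLOAD = {
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # whatever model both servers loaded
        "prompt": "Explain tensor parallelism in one paragraph.",
        "max_tokens": 128,
        "temperature": 0.0,
    }

    def one_request(url: str) -> tuple[float, int]:
        """Return (wall-clock seconds, completion tokens) for a single request."""
        t0 = time.perf_counter()
        resp = requests.post(url, json=PAYLOAD, timeout=120)
        resp.raise_for_status()
        return time.perf_counter() - t0, resp.json()["usage"]["completion_tokens"]

    def bench(name: str, url: str, concurrency: int = 8, n: int = 64) -> None:
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            t0 = time.perf_counter()
            results = list(pool.map(lambda _: one_request(url), range(n)))
            wall = time.perf_counter() - t0
        lats = sorted(t for t, _ in results)
        tokens = sum(tok for _, tok in results)
        print(f"{name}: p50={statistics.median(lats):.2f}s "
              f"p95={lats[int(0.95 * (len(lats) - 1))]:.2f}s "
              f"throughput={tokens / wall:.0f} tok/s at concurrency={concurrency}")

    for name, url in ENDPOINTS.items():
        bench(name, url)

Sweeping the concurrency level and re-running per model and GPU combo is what turns this into the latency/throughput comparison the question asks about.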
