Year: 2024
Role: Solo Developer
Duration: 2 months
Client: Side Project
Stack: TensorFlow.js, WebGPU, Vanilla JS

The challenge

Most ML demos upload images to a server, which raises privacy concerns and adds network latency. I wanted to prove that on-device inference could feel just as snappy.

The approach

Loaded a quantized MobileNet model directly in the browser, with inference accelerated by the WebGPU backend. Built a simple, fast UI supporting both live webcam classification and batch image testing.
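Once the model returns per-class probabilities, the UI only needs to rank them for display. A minimal sketch of that post-processing step, assuming the model output is a flat probability array parallel to a label list (the `topK` helper and the labels here are illustrative, not the project's actual code):

```javascript
// Rank a classifier's probability output and keep the k best guesses.
// `probs` and `labels` are assumed to be parallel arrays.
function topK(probs, labels, k = 3) {
  return probs
    .map((prob, i) => ({ label: labels[i], prob }))
    .sort((a, b) => b.prob - a.prob)
    .slice(0, k);
}

// Example: a hypothetical three-class output.
const labels = ['cat', 'dog', 'bird'];
const probs = [0.1, 0.7, 0.2];
console.log(topK(probs, labels, 2));
// → [ { label: 'dog', prob: 0.7 }, { label: 'bird', prob: 0.2 } ]
```

Keeping this step in plain JS (rather than on a tensor) is cheap for a handful of classes and keeps the render loop free of extra GPU readbacks.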

The outcome

  • Inference at 60 fps on modern hardware
  • Zero data leaves the device — full privacy
  • Featured on a major dev community

Like what you see?

Have a project I could help with? Let's talk.

Get in touch