
Trophy Drone Detection — Proof of Concept

The production specs on our homepage are the target. This page shows what's working right now: a real prototype running real-time drone detection on edge hardware.

What's proven

Built and running

Real-time detection pipeline

19 FPS YOLO inference on Raspberry Pi 5

Dual camera system

IMX296 global shutter + IMX708 NoIR operational

Autonomous gimbal tracking

PID-controlled pan/tilt servo lock-on

Live video stream

MJPEG with bounding box overlay

Multi-object tracking

ByteTrack with Kalman prediction

Full Rust stack

Zero-dependency edge deployment
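To illustrate what PID-controlled lock-on involves, here is a minimal pan-axis PID loop in Rust. The gains, pixel units, and the ~19 FPS frame interval are illustrative assumptions, not the prototype's actual tuning.

```rust
// Minimal sketch of a PID loop for pan/tilt lock-on.
// Gains here are illustrative, not the prototype's tuning.
struct Pid {
    kp: f32,
    ki: f32,
    kd: f32,
    integral: f32,
    prev_error: f32,
}

impl Pid {
    fn new(kp: f32, ki: f32, kd: f32) -> Self {
        Pid { kp, ki, kd, integral: 0.0, prev_error: 0.0 }
    }

    /// error: offset of the target from frame center, in pixels.
    /// dt: seconds since the last update. Returns a servo correction.
    fn update(&mut self, error: f32, dt: f32) -> f32 {
        self.integral += error * dt;
        let derivative = (error - self.prev_error) / dt;
        self.prev_error = error;
        self.kp * error + self.ki * self.integral + self.kd * derivative
    }
}

fn main() {
    // Target sits 40 px right of center; drive the pan servo toward it.
    let mut pan = Pid::new(0.02, 0.001, 0.005);
    let correction = pan.update(40.0, 1.0 / 19.0); // one frame at ~19 FPS
    println!("pan correction: {correction:.3}");
}
```

In a real loop the error is recomputed from each new bounding box, so the correction shrinks as the gimbal centers the target.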

Detection in Action

What it sees

Screen captures from the live prototype. All inference runs on-device — no cloud, no round-trip latency.

Screenshots and demo footage coming soon — we're capturing real detection runs from the prototype.

Building Detection

Training a drone-specific model

The prototype runs our first custom-trained drone detection model — purpose-built to detect small drones against sky and terrain backgrounds. We're iterating on the training data and model architecture to improve range and reduce false positives.

Why a custom model

Generic object detection models aren't optimized for the drone detection problem. Drones are small, fast, and viewed against cluttered backgrounds at long range. Our purpose-built model is trained on real drone imagery to maximize detection range and minimize false positives.
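One post-processing step that helps cut duplicate and low-confidence hits in any YOLO-style detector is confidence thresholding followed by greedy non-maximum suppression. A minimal sketch, assuming corner-format boxes; the thresholds are illustrative, not the prototype's settings:

```rust
// Hedged sketch: confidence filtering + greedy NMS over YOLO-style boxes.
#[derive(Clone, Copy, Debug)]
struct Det {
    x1: f32, y1: f32, x2: f32, y2: f32, // box corners in pixels
    score: f32,                          // detection confidence
}

// Intersection-over-union of two boxes.
fn iou(a: Det, b: Det) -> f32 {
    let ix = (a.x2.min(b.x2) - a.x1.max(b.x1)).max(0.0);
    let iy = (a.y2.min(b.y2) - a.y1.max(b.y1)).max(0.0);
    let inter = ix * iy;
    let area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
    let area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
    inter / (area_a + area_b - inter)
}

// Drop low-confidence boxes, then greedily keep the highest-scoring box
// and suppress any remaining box that overlaps it too much.
fn nms(mut dets: Vec<Det>, conf_thresh: f32, iou_thresh: f32) -> Vec<Det> {
    dets.retain(|d| d.score >= conf_thresh);
    dets.sort_by(|a, b| b.score.total_cmp(&a.score));
    let mut keep: Vec<Det> = Vec::new();
    for d in dets {
        if keep.iter().all(|k| iou(*k, d) < iou_thresh) {
            keep.push(d);
        }
    }
    keep
}

fn main() {
    let dets = vec![
        Det { x1: 0.0, y1: 0.0, x2: 10.0, y2: 10.0, score: 0.9 },
        Det { x1: 1.0, y1: 1.0, x2: 11.0, y2: 11.0, score: 0.8 },
    ];
    println!("{} box(es) kept", nms(dets, 0.25, 0.5).len());
}
```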

Data sources

The training dataset combines real-world drone footage, our own test captures from the prototype's cameras, and synthetic data augmentation to cover edge cases like different lighting, weather, and backgrounds.

The training loop

Collect footage. Annotate drone positions frame by frame. Train a YOLO model on the annotated data. Export to ONNX and deploy to the Pi. Test in the field. Review failures, collect more targeted data, and iterate. Each cycle improves real-world performance on the actual hardware the model will run on.
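The steps above can be sketched as a generic driver. Every name below is a hypothetical placeholder standing in for real tooling (annotation UI, YOLO training, ONNX export, field testing), not part of the actual pipeline:

```rust
// Hypothetical sketch of the collect -> annotate -> train -> deploy ->
// field-test cycle. The step functions are placeholders for real tooling.
struct CycleReport {
    false_negatives: u32, // failures that target the next data-collection pass
}

fn training_cycle(
    collect: impl Fn() -> Vec<String>,     // gather footage clips
    annotate: impl Fn(&[String]) -> usize, // label drone positions; returns frame count
    train_and_deploy: impl Fn(usize),      // train YOLO, export ONNX, push to the Pi
    field_test: impl Fn() -> CycleReport,  // run on hardware, review failures
) -> CycleReport {
    let clips = collect();
    let labeled = annotate(&clips);
    train_and_deploy(labeled);
    field_test() // the report drives what footage the next cycle collects
}

fn main() {
    let report = training_cycle(
        || vec!["field_clip_001.mp4".to_string()],
        |clips| clips.len() * 100,
        |_labeled| {},
        || CycleReport { false_negatives: 3 },
    );
    println!("failures to target next cycle: {}", report.false_negatives);
}
```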

Current status

The first version of our custom drone detection model is running on the prototype. The full pipeline — capture, detect, track, and gimbal control — runs in real time on edge hardware. We're actively iterating: testing in the field, reviewing failures, collecting more targeted data, and retraining to improve performance each cycle.
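A minimal sketch of how a capture → detect → track staged pipeline can be laid out in Rust with threads and channels. The frame and detection types are stand-ins, not the prototype's real data structures:

```rust
use std::sync::mpsc;
use std::thread;

type Frame = u64;            // stand-in for camera frame data
type Detection = (u64, f32); // stand-in: (frame id, confidence)

fn run_pipeline(frames: u64) -> Vec<Detection> {
    let (cap_tx, cap_rx) = mpsc::channel::<Frame>();
    let (det_tx, det_rx) = mpsc::channel::<Detection>();

    // Capture stage: stand-in for pulling frames off the cameras.
    thread::spawn(move || {
        for id in 0..frames {
            cap_tx.send(id).unwrap();
        }
    });

    // Detect stage: stand-in for per-frame YOLO inference.
    thread::spawn(move || {
        for frame in cap_rx {
            det_tx.send((frame, 0.9)).unwrap();
        }
    });

    // Track/control stage: where multi-object tracking and gimbal
    // control would consume the detection stream.
    det_rx.into_iter().collect()
}

fn main() {
    for (frame, conf) in run_pipeline(5) {
        println!("frame {frame}: detection at confidence {conf}");
    }
}
```

Each stage runs concurrently, so a slow detector back-pressures capture through the channel rather than dropping the whole loop.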

POC Specifications

Current hardware

Compute

Processor: Raspberry Pi 5 (4GB RAM, Broadcom BCM2712)
Inference runtime: ONNX Runtime 1.23.0 (aarch64, CPU inference)
Performance: 19 FPS real-time inference with the custom drone detection model

Sensors

Day camera: Sony IMX296 — global shutter, 1456x1088, 60 fps
Night camera: Sony IMX708 NoIR — no IR filter, 4608x2592, IR-sensitive
Gimbal: pan/tilt servos on GPIO 12 and 13, PID-controlled

Software

Language: Rust — memory safe, real-time, no garbage-collection pauses
Model: custom-trained drone detection model (YOLO-based, ONNX)
Stream: MJPEG via axum on port 8554
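For readers curious what MJPEG streaming looks like on the wire: each JPEG frame is sent as one part of a multipart/x-mixed-replace HTTP body. A hedged sketch of that framing — the boundary string and header set here are illustrative, and the actual axum server may emit different ones:

```rust
// Sketch of MJPEG-over-HTTP framing: each JPEG is wrapped as one part
// of a multipart/x-mixed-replace body. Boundary string is illustrative.
fn mjpeg_part(jpeg: &[u8], boundary: &str) -> Vec<u8> {
    let header = format!(
        "--{boundary}\r\nContent-Type: image/jpeg\r\nContent-Length: {}\r\n\r\n",
        jpeg.len()
    );
    let mut part = header.into_bytes();
    part.extend_from_slice(jpeg);
    part.extend_from_slice(b"\r\n");
    part
}

fn main() {
    let fake_jpeg = [0xFF, 0xD8, 0xFF, 0xD9]; // minimal JPEG SOI/EOI markers
    let part = mjpeg_part(&fake_jpeg, "frame");
    println!("part is {} bytes", part.len());
}
```

A client that understands multipart/x-mixed-replace (any modern browser) replaces the displayed image as each new part arrives, which is what produces the live video effect.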

Live

See it running

The POC prototype streams live video with bounding box overlays. Connect to the same network to watch real-time detection.

Demo video coming soon.

Roadmap

POC to production

POC (now) → Production (target)

Raspberry Pi 5 → Custom compute module
2 cameras (visual + NIR) → 5 sensor bays (visual, IR, acoustic, radar, RF)
Custom drone detection model v1 → Custom drone detection model
Detection only → Detection + modular countermeasures
Bench-top prototype → Ruggedized, backpack portable, all-weather
USB power → 72h active / 7-day standby battery