See through walls with WiFi. No cameras. No wearables. Just radio waves.
WiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection — all without a single pixel of video. By analyzing Channel State Information (CSI) disturbances caused by human movement, the system reconstructs body position, breathing rate, and heartbeat using physics-based signal processing and machine learning.
```bash
# 30 seconds to live sensing — no toolchain required
docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 ruvnet/wifi-densepose:latest
# Open http://localhost:3000
```
> [!NOTE]
> **CSI-capable hardware required.** Pose estimation, vital signs, and through-wall sensing rely on Channel State Information (CSI) — per-subcarrier amplitude and phase data that standard consumer WiFi does not expose. You need CSI-capable hardware (ESP32-S3 or a research NIC) for full functionality. Consumer WiFi laptops can only provide RSSI-based presence detection, which is significantly less capable.
Hardware options for live CSI capture:

| Option | Hardware | Cost | Full CSI | Capabilities |
|---|---|---|---|---|
| ESP32 Mesh (recommended) | 3-6x ESP32-S3 + WiFi router | ~$54 | Yes | Pose, breathing, heartbeat, motion, presence |
| Research NIC | Intel 5300 / Atheros AR9580 | ~$50-100 | Yes | Full CSI with 3x3 MIMO |
| Any WiFi | Windows, macOS, or Linux laptop | $0 | No | RSSI-only: coarse presence and motion |
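To make the RSSI-only fallback concrete, here is a minimal sketch of coarse presence detection from signal strength alone. It is an illustration, not the project's implementation: a person moving near the link perturbs RSSI, so a rolling variance above a threshold flags motion. The window size and threshold are assumptions chosen for the example.

```python
# Illustrative RSSI-only motion detector (the "Any WiFi" row above).
# Rolling variance of recent RSSI readings; high variance => motion.
from collections import deque

def rssi_motion_detector(window_size=20, var_threshold=2.0):
    window = deque(maxlen=window_size)

    def update(rssi_dbm):
        window.append(rssi_dbm)
        if len(window) < window_size:
            return False  # not enough samples yet
        mean = sum(window) / len(window)
        var = sum((x - mean) ** 2 for x in window) / len(window)
        return var > var_threshold

    return update

detect = rssi_motion_detector()
# Static channel: RSSI sits near -60 dBm, variance stays ~0
for _ in range(30):
    quiet = detect(-60.0)
# A person walking through the link swings the readings
for r in [-55.0, -64.0, -52.0, -66.0, -54.0]:
    moving = detect(r)
print(quiet, moving)  # False True
```

This is roughly why RSSI gives only coarse presence: one scalar per packet carries far less structure than per-subcarrier CSI.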
No hardware? Verify the signal processing pipeline with the deterministic reference signal:

```bash
python v1/data/proof/verify.py
```
27 ADRs cover signal processing, training, hardware, security, and domain generalization.
## Key Features
### Sensing

See people, breathing, and heartbeats through walls — using only WiFi signals already in the room.
| Feature | What It Means |
|---|---|
| Privacy-First | Tracks human pose using only WiFi signals — no cameras, no video, no images stored |
| Vital Signs | Detects breathing rate (6-30 breaths/min) and heart rate (40-120 bpm) without any wearable |
| Multi-Person | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |
| Through-Wall | WiFi passes through walls, furniture, and debris — works where cameras cannot |
| Disaster Response | Detects trapped survivors through rubble and classifies injury severity (START triage) |
### Intelligence

The system learns on its own and gets smarter over time — no hand-tuning, no labeled data required.
| Feature | What It Means |
|---|---|
| Self-Learning | Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap (ADR-024) |
| AI Signal Processing | Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically (RuVector) |
| Works Everywhere | Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware (ADR-027) |
### Performance & Deployment

Fast enough for real-time use, small enough for edge devices, simple enough for one-command setup.
| Feature | What It Means |
|---|---|
| Real-Time | Analyzes WiFi signals in under 100 microseconds per frame — fast enough for live monitoring |
| One-Command Setup | `docker pull ruvnet/wifi-densepose:latest` — live sensing in 30 seconds, no toolchain needed |
| Portable Models | Trained models package into a single .rvf file — runs on edge, cloud, or browser (WASM) |
## How It Works
WiFi routers flood every room with radio waves. When a person moves — or even breathes — those waves scatter differently. WiFi DensePose reads that scattering pattern and reconstructs what happened.

No training cameras required — the Self-Learning system (ADR-024) bootstraps from raw WiFi data alone. MERIDIAN (ADR-027) ensures the model works in any room, not just the one it trained in.
## Use Cases & Applications
WiFi sensing works anywhere WiFi exists. No new hardware is needed in most cases — just software on existing access points or an $8 ESP32 add-on. Because there are no cameras, deployments avoid privacy regulations (GDPR video, HIPAA imaging) by design.

Scaling: each AP distinguishes ~3-5 people (56 subcarriers). Multi-AP coverage scales linearly — a 4-AP retail mesh covers ~15-20 occupants. There is no hard software limit; the practical ceiling is signal physics.
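The scaling arithmetic above is simple enough to sketch directly. The function below is not part of the project; it just reproduces the README's own per-AP bounds and linear multi-AP scaling.

```python
# Back-of-envelope occupancy bounds from the scaling note above:
# each AP resolves roughly 3-5 people, and APs add linearly.
def occupancy_capacity(num_aps, per_ap_low=3, per_ap_high=5):
    # returns (lower bound, upper bound) on distinguishable occupants
    return num_aps * per_ap_low, num_aps * per_ap_high

low, high = occupancy_capacity(4)   # the 4-AP retail mesh example
print(low, high)  # 12 20 -- bracketing the ~15-20 quoted above
```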
| Why WiFi sensing wins | Traditional alternative |
|---|---|
| No video, no GDPR/HIPAA imaging rules | Cameras require consent, signage, data retention policies |
WiFi sensing gives robots and autonomous systems a spatial awareness layer that works where LIDAR and cameras fail — through dust, smoke, fog, and around corners. The CSI signal field acts as a "sixth sense" for detecting humans in the environment without requiring line-of-sight.
| Use Case | What It Does | Hardware | Key Metric |
|---|---|---|---|
| Cobot safety zones | Detect human presence near collaborative robots — auto-slow or stop before contact, even behind obstructions | 2-3 ESP32-S3 per cell | Presence latency <100ms |
| Warehouse AMR navigation | Autonomous mobile robots sense humans around blind corners, through shelving racks — no LIDAR occlusion | ESP32 mesh along aisles | Through-shelf detection |
| Android / humanoid spatial awareness | Ambient human pose sensing for social robots — detect gestures, approach direction, and personal space without cameras always on | Onboard ESP32-S3 module | 17-keypoint pose |
| Manufacturing line monitoring | Worker presence at each station, ergonomic posture alerts, headcount for shift compliance — works through equipment | Industrial AP per zone | Pose + breathing |
| Construction site safety | Exclusion zone enforcement around heavy machinery, fall detection from scaffolding, personnel headcount | Ruggedized ESP32 mesh | Alert <2s, through-dust |
| Agricultural robotics | Detect farm workers near autonomous harvesters in dusty/foggy field conditions where cameras are unreliable | Weatherproof ESP32 nodes | Range ~10m open field |
| Drone landing zones | Verify landing area is clear of humans — WiFi sensing works in rain, dust, and low light where downward cameras fail | Ground ESP32 nodes | Presence: >95% accuracy |
| Clean room monitoring | Personnel tracking without cameras (particle contamination risk from camera fans) — gown compliance via pose | | |
These scenarios exploit WiFi's ability to penetrate solid materials — concrete, rubble, earth — where no optical or infrared sensor can reach. The WiFi-Mat disaster module (ADR-001) is specifically designed for this tier.
| Use Case | What It Does | Hardware | Key Metric |
|---|---|---|---|
| Search & rescue (WiFi-Mat) | Detect survivors through rubble/debris via breathing signature, START triage color classification, 3D localization | Portable ESP32 mesh + laptop | Through 30cm concrete |
| Firefighting | Locate occupants through smoke and walls before entry; breathing detection confirms life signs remotely | Portable mesh on truck | Works in zero visibility |
| Prison & secure facilities | Cell occupancy verification, distress detection (abnormal vitals), perimeter sensing — no camera blind spots | | |
| | Non-invasive animal activity monitoring in enclosures or dens — no light pollution, no visual disturbance | Weatherproof ESP32 nodes | Zero light emission |
## Self-Learning WiFi AI (ADR-024) — Adaptive recognition, self-optimization, and intelligent anomaly detection
Every WiFi signal that passes through a room creates a unique fingerprint of that space. WiFi-DensePose already reads these fingerprints to track people, but until now it threw away the internal "understanding" after each reading. The Self-Learning WiFi AI captures and preserves that understanding as compact, reusable vectors — and continuously optimizes itself for each new environment.
What it does in plain terms:

- Turns any WiFi signal into a 128-number "fingerprint" that uniquely describes what's happening in a room
- Learns entirely on its own from raw WiFi data — no cameras, no labeling, no human supervision needed
- Recognizes rooms, detects intruders, identifies people, and classifies activities using only WiFi
- Runs on an $8 ESP32 chip (the entire model fits in 55 KB of memory)
- Produces both body pose tracking AND environment fingerprints in a single computation
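The anomaly-detection idea behind those fingerprints can be sketched in a few lines. This is an illustration only — the `cosine` and `is_anomaly` helpers and the 0.8 threshold are assumptions, not the project's API, and toy 4-dim vectors stand in for the real 128 dimensions: a frame whose fingerprint is far from every stored baseline is flagged.

```python
# Toy fingerprint anomaly check: a new embedding that is not close
# (cosine similarity) to any known baseline is "unseen" => anomalous.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_anomaly(fingerprint, baselines, threshold=0.8):
    # anomalous if no baseline is sufficiently similar
    return all(cosine(fingerprint, b) < threshold for b in baselines)

baselines = [[1.0, 0.0, 0.0, 0.0],   # e.g. "room empty"
             [0.0, 1.0, 0.0, 0.0]]   # e.g. "one known occupant"
print(is_anomaly([0.9, 0.1, 0.0, 0.0], baselines))  # False: matches a baseline
print(is_anomaly([0.0, 0.0, 1.0, 0.0], baselines))  # True: unlike anything seen
```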
### Key Capabilities

| What | How it works | Why it matters |
|---|---|---|
| Self-supervised learning | The model watches WiFi signals and teaches itself what "similar" and "different" look like, without any human-labeled data | Deploy anywhere — just plug in a WiFi sensor and wait 10 minutes |
| Room identification | Each room produces a distinct WiFi fingerprint pattern | Know which room someone is in without GPS or beacons |
| Anomaly detection | An unexpected person or event creates a fingerprint that doesn't match anything seen before | Automatic intrusion and fall detection as a free byproduct |
| Person re-identification | Each person disturbs WiFi in a slightly different way, creating a personal signature | Track individuals across sessions without cameras |
| Environment adaptation | MicroLoRA adapters (1,792 parameters per room) fine-tune the model for each new space | Adapts to a new room with minimal data — 93% less than retraining from scratch |
| Memory preservation | EWC++ regularization remembers what was learned during pretraining | Switching to a new task doesn't erase prior knowledge |
| Hard-negative mining | Training focuses on the most confusing examples to learn faster | Better accuracy with the same amount of training data |
### Architecture

```
WiFi Signal [56 channels] → Transformer + Graph Neural Network
     ├─ 128-dim environment fingerprint (for search + identification)
     └─ 17-joint body pose (for human tracking)
```
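A shape-only sketch of that dual-head layout: a shared backbone consumes 56 subcarrier values and feeds two heads, one producing the 128-dim fingerprint and one producing 17 joints. Random weights and a plain dense layer stand in for the real Transformer + GNN — this only demonstrates the input/output shapes.

```python
# Shape-only sketch of the diagram above: one shared backbone,
# two heads (environment fingerprint + body pose). Toy weights.
import random

SUBCARRIERS, HIDDEN, EMBED, JOINTS = 56, 32, 128, 17
rnd = random.Random(0)

def dense(x, out_dim):
    # toy fully-connected layer with random weights
    return [sum(xi * rnd.uniform(-0.1, 0.1) for xi in x) for _ in range(out_dim)]

csi_frame = [rnd.uniform(-1.0, 1.0) for _ in range(SUBCARRIERS)]
shared = dense(csi_frame, HIDDEN)                   # backbone features
fingerprint = dense(shared, EMBED)                  # environment head
pose = [dense(shared, 2) for _ in range(JOINTS)]    # pose head: (x, y) per joint

print(len(fingerprint), len(pose))  # 128 17
```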
### Quick Start

```bash
# Step 1: Learn from raw WiFi data (no labels needed)
cargo run -p wifi-densepose-sensing-server -- --pretrain --dataset data/csi/ --pretrain-epochs 50

# Step 2: Fine-tune with pose labels for full capability
cargo run -p wifi-densepose-sensing-server -- --train --dataset data/mmfi/ --epochs 100 --save-rvf model.rvf

# Step 3: Use the model — extract fingerprints from live WiFi
cargo run -p wifi-densepose-sensing-server -- --model model.rvf --embed

# Step 4: Search — find similar environments or detect anomalies
cargo run -p wifi-densepose-sensing-server -- --model model.rvf --build-index env
```
### Training Modes

| Mode | What you need | What you get |
|---|---|---|
| Self-Supervised | Just raw WiFi data | A model that understands WiFi signal structure |
| Supervised | WiFi data + body pose labels | Full pose tracking + environment fingerprints |
| Cross-Modal | WiFi data + camera footage | Fingerprints aligned with visual understanding |
### Fingerprint Index Types

| Index | What it stores | Real-world use |
|---|---|---|
| `env_fingerprint` | Average room fingerprint | "Is this the kitchen or the bedroom?" |
| `activity_pattern` | Activity boundaries | "Is someone cooking, sleeping, or exercising?" |
| `temporal_baseline` | Normal conditions | "Something unusual just happened in this room" |
| `person_track` | Individual movement signatures | "Person A just entered the living room" |
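The `env_fingerprint` lookup reduces to nearest-centroid classification: each room stores its average fingerprint, and a live frame is assigned to the closest one. The sketch below is illustrative only — the `nearest_room` helper and 3-dim vectors are stand-ins for the real index and its 128-dim embeddings.

```python
# Toy nearest-centroid lookup against an env_fingerprint-style index:
# pick the room whose stored average embedding is closest (L2).
def nearest_room(query, index):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(index, key=lambda room: dist2(query, index[room]))

index = {
    "kitchen": [0.9, 0.1, 0.0],
    "bedroom": [0.1, 0.8, 0.2],
}
print(nearest_room([0.85, 0.15, 0.05], index))  # kitchen
```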
### Model Size

| Component | Parameters | Memory (on ESP32) |
|---|---|---|
| Transformer backbone | ~28,000 | 28 KB |
| Embedding projection head | ~25,000 | 25 KB |
| Per-room MicroLoRA adapter | ~1,800 | 2 KB |
| **Total** | **~55,000** | **55 KB (of 520 KB available)** |
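As a sanity check on the table, the KB column matches roughly one byte per parameter — consistent with 8-bit quantized weights, though that storage format is an assumption here, not something the table states.

```python
# Check: memory column ~= parameter count at 1 byte/param
# (8-bit weights assumed; decimal kilobytes).
components = {
    "transformer_backbone": 28_000,
    "projection_head": 25_000,
    "microlora_adapter": 1_800,
}
total_params = sum(components.values())
total_kb = total_params / 1000  # 1 byte per parameter
print(total_params, total_kb)   # 54800 54.8 -- i.e. ~55,000 params, ~55 KB
```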
The self-learning system builds on the AI Backbone (RuVector) signal-processing layer — attention, graph algorithms, and compression — adding contrastive learning on top.
## Cross-Environment Generalization (ADR-027 — Project MERIDIAN) — Train once, deploy in any room without retraining
WiFi pose models trained in one room lose 40-70% accuracy when moved to another — even in the same building. The model memorizes room-specific multipath patterns instead of learning human motion. MERIDIAN forces the network to forget which room it's in while retaining everything about how people move.
What it does in plain terms:

- Models trained in Room A work in Rooms B, C, and D — without any retraining or calibration data
- Handles different WiFi hardware (ESP32, Intel 5300, Atheros) with automatic chipset normalization
- Knows where the WiFi transmitters are positioned and compensates for layout differences
- Generates synthetic "virtual rooms" during training so the model sees thousands of environments
- At deployment, adapts to a new room in seconds using a handful of unlabeled WiFi frames
### Key Components

| What | How it works | Why it matters |
|---|---|---|
| Gradient Reversal Layer | An adversarial classifier tries to guess which room the signal came from; the main network is trained to fool it | Forces the model to discard room-specific shortcuts |
| Geometry Encoder (FiLM) | Transmitter/receiver positions are Fourier-encoded and injected as scale+shift conditioning on every layer | The model knows where the hardware is, so it doesn't need to memorize layout |
| Hardware Normalizer | Resamples any chipset's CSI to a canonical 56-subcarrier format with standardized amplitude | Intel 5300 and ESP32 data look identical to the model |
| Virtual Domain Augmentation | Generates synthetic environments with random room scale, wall reflections, scatterers, and noise profiles | Training sees 1000s of rooms even with data from just 2-3 |
| Rapid Adaptation (TTT) | Contrastive test-time training with LoRA weight generation from a few unlabeled frames | Zero-shot deployment — the model self-tunes on arrival |
| Cross-Domain Evaluator | Leave-one-out evaluation across all training environments with per-environment PCK/OKS metrics | |
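The Hardware Normalizer idea can be illustrated with plain linear resampling. This sketch is an assumption-laden stand-in for the real Rust implementation: it maps a 30-subcarrier amplitude vector (Intel 5300-style) onto the canonical 56-subcarrier grid and standardizes to unit mean amplitude.

```python
# Sketch of CSI hardware normalization: linearly resample any
# chipset's amplitude vector onto 56 subcarriers, then rescale
# so the mean amplitude is 1.0 regardless of hardware gain.
def normalize_csi(amplitudes, target=56):
    n = len(amplitudes)
    out = []
    for i in range(target):
        pos = i * (n - 1) / (target - 1)  # map target index to source grid
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(amplitudes[lo] * (1 - frac) + amplitudes[hi] * frac)
    mean = sum(out) / len(out)
    return [a / mean for a in out]

# Intel 5300 reports 30 subcarriers; resample to the canonical 56
canon = normalize_csi([1.0 + 0.01 * i for i in range(30)])
print(len(canon))  # 56
```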
A 3-agent parallel audit independently verified every claim in this repository — ESP32 hardware, signal processing, neural networks, training pipeline, deployment, and security. Results:
33-row attestation matrix: 31 capabilities verified YES, 2 not measured at audit time (benchmark throughput, Kubernetes deploy).
Verify it yourself (no hardware needed):
```bash
# Run all tests
cd rust-port/wifi-densepose-rs && cargo test --workspace --no-default-features

# Run the deterministic proof
python v1/data/proof/verify.py

# Generate + verify the witness bundle
bash scripts/generate-witness-bundle.sh
cd dist/witness-bundle-ADR028-*/ && bash VERIFY.sh
```

This creates a self-contained tar.gz with test logs, proof output, firmware hashes, crate versions, and VERIFY.sh.
## Installation
### Guided Installer — Interactive hardware detection and profile selection

```bash
./install.sh
```

The installer walks through 7 steps: system detection, toolchain check, WiFi hardware scan, profile recommendation, dependency install, build, and verification.
## Signal Processing & Sensing — From raw WiFi frames to vital signs
The signal processing stack transforms raw WiFi Channel State Information into actionable human sensing data. Starting from 56-192 subcarrier complex values captured at 20 Hz, the pipeline applies research-grade algorithms (SpotFi phase correction, Hampel outlier rejection, Fresnel zone modeling) to extract breathing rate, heart rate, motion level, and multi-person body pose — all in pure Rust with zero external ML dependencies.
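To make one of those steps concrete, here is a minimal Hampel outlier-rejection sketch: replace a sample with its window median when it deviates from that median by more than a few scaled MADs. The window size and sigma multiplier below are common defaults, not the project's actual values, and the real implementation is in Rust.

```python
# Minimal Hampel filter: median/MAD-based outlier replacement.
# A CSI amplitude spike (e.g. a hardware glitch) is swapped for
# the local median; genuine slow variation passes through.
def hampel(series, window=3, n_sigma=3.0):
    k = 1.4826  # MAD -> std scale factor for Gaussian data
    out = list(series)
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        neigh = sorted(series[lo:hi])
        med = neigh[len(neigh) // 2]
        mad = sorted(abs(x - med) for x in neigh)[len(neigh) // 2]
        if mad > 0 and abs(series[i] - med) > n_sigma * k * mad:
            out[i] = med  # reject outlier, keep the median instead
    return out

clean = hampel([1.0, 1.1, 0.9, 9.0, 1.0, 1.1, 0.9])
print(clean)  # the 9.0 spike is replaced by the local median 1.0
```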
## Models & Training — DensePose pipeline, RVF containers, SONA adaptation, RuVector integration
The neural pipeline uses a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates. Models are packaged as single-file .rvf containers with progressive loading (Layer A instant, Layer B warm, Layer C full). SONA (Self-Optimizing Neural Architecture) enables continuous on-device adaptation via micro-LoRA + EWC++ without catastrophic forgetting. Signal processing is powered by 5 RuVector crates (v2.0.4) with 7 integration points across the Rust workspace, plus 6 additional vendored crates for inference and graph intelligence.
The Rust sensing server is the primary interface, offering a comprehensive CLI with flags for data source selection, model loading, training, benchmarking, and RVF export. A REST API (Axum) and WebSocket server provide real-time data access. The Python v1 CLI remains available for legacy workflows.
## Development & Testing — 542+ tests, CI, deployment
The project maintains 542+ pure-Rust tests across 7 crate suites with zero mocks — every test runs against real algorithm implementations. Hardware-free simulation mode (`--source simulate`) enables full-stack testing without physical devices. Docker images are published on Docker Hub for zero-setup deployment.
All benchmarks are measured on the Rust sensing server using cargo bench and the built-in --benchmark CLI flag. The Rust v2 implementation delivers 810x end-to-end speedup over the Python v1 baseline, with motion detection reaching 5,400x improvement. The vital sign detector processes 11,665 frames/second in a single-threaded benchmark.
WiFi DensePose is MIT-licensed open source, developed by ruvnet. The project has been in active development since March 2025, with 3 major releases delivering the Rust port, SOTA signal processing, disaster response module, and end-to-end training pipeline.
WiFi signals penetrate non-metallic debris (concrete, wood, drywall) where cameras and thermal sensors cannot reach. The WiFi-Mat module (wifi-densepose-mat, 139 tests) uses CSI analysis to detect survivors trapped under rubble, classify their condition using the START triage protocol, and estimate their 3D position β giving rescue teams actionable intelligence within seconds of deployment.
| Capability | How It Works | Performance Target |
|---|---|---|
| Breathing Detection | Bandpass 0.07-1.0 Hz + Fresnel zone modeling detects chest displacement of 5-10mm at 5 GHz | 4-60 BPM, <500ms latency |
| Heartbeat Detection | Micro-Doppler shift extraction from fine-grained CSI phase variation | |
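The band-limited breathing estimate can be illustrated with a dependency-free DFT: synthesize a 0.25 Hz (15 breaths/min) chest-motion signal at the 20 Hz CSI sample rate, then pick the strongest bin inside the 0.07-1.0 Hz band named above. The synthetic signal, window length, and plain-DFT approach are assumptions for the sketch; the real detector works on CSI amplitude/phase.

```python
# Illustrative breathing-rate estimate: strongest DFT bin in the
# 0.07-1.0 Hz breathing band of a 20 s window sampled at 20 Hz.
import math

FS = 20.0   # CSI sample rate (Hz)
N = 400     # 20 s window (frequency resolution 0.05 Hz)

# synthetic chest-motion signal at 0.25 Hz = 15 breaths/min
signal = [math.sin(2 * math.pi * 0.25 * n / FS) for n in range(N)]

best_bin, best_power = 0, 0.0
for k in range(1, N // 2):
    freq = k * FS / N
    if not (0.07 <= freq <= 1.0):
        continue  # restrict search to the breathing band
    re = sum(signal[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(signal[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    power = re * re + im * im
    if power > best_power:
        best_bin, best_power = k, power

bpm = best_bin * FS / N * 60
print(bpm)  # 15.0 breaths/min
```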
## SOTA Signal Processing (ADR-014) — 6 research-grade algorithms
The signal processing layer bridges the gap between raw commodity WiFi hardware output and research-grade sensing accuracy. Each algorithm addresses a specific limitation of naive CSI processing — from hardware-induced phase corruption to environment-dependent multipath interference. All six are implemented in wifi-densepose-signal/src/ with deterministic tests and no mock data.
## AI Backbone: RuVector — Attention, graph algorithms, and edge-AI compression powering the sensing pipeline
Raw WiFi signals are noisy, redundant, and environment-dependent. RuVector is the AI intelligence layer that transforms them into clean, structured input for the DensePose neural network. It uses attention mechanisms to learn which signals to trust, graph algorithms that automatically discover which WiFi channels are sensitive to body motion, and compressed representations that make edge inference possible on an $8 microcontroller.
Without RuVector, WiFi DensePose would need hand-tuned thresholds, brute-force matrix math, and 4x more memory — making real-time edge inference impossible.
## RVF Model Container — Single-file deployment with progressive loading
The RuVector Format (RVF) packages an entire trained model — weights, HNSW indexes, quantization codebooks, SONA adaptation deltas, and WASM inference runtime — into a single self-contained binary file. No external dependencies are needed at deployment time.
```bash
# Export model package
./target/release/sensing-server --export-rvf wifi-densepose-v1.rvf

# Load and run with progressive loading
./target/release/sensing-server --model wifi-densepose-v1.rvf --progressive

# Export via Docker
docker run --rm -v $(pwd):/out ruvnet/wifi-densepose:latest --export-rvf /out/model.rvf
```
Built on the rvf crate family (rvf-types, rvf-wire, rvf-manifest, rvf-index, rvf-quant, rvf-crypto, rvf-runtime). See ADR-023.
## Training & Fine-Tuning — MM-Fi/Wi-Pose pre-training, SONA adaptation
The training pipeline implements 8 phases in pure Rust (7,832 lines, zero external ML dependencies). It trains a graph transformer with cross-attention to map CSI feature matrices to 17 COCO body keypoints and DensePose UV coordinates — following the approach from the CMU "DensePose From WiFi" paper (arXiv:2301.00250). RuVector crates provide the core building blocks: ruvector-attention for cross-attention layers, ruvector-mincut for multi-person matching, and ruvector-temporal-tensor for CSI buffer compression.
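The cross-attention mapping described above can be sketched as a minimal single-head example: 17 learned joint queries attend over per-subcarrier CSI feature tokens, and each attention-weighted context is projected to an (x, y) coordinate. Toy dimensions and random weights throughout — this demonstrates the mechanism, not the trained model or the ruvector-attention API.

```python
# Single-head cross-attention sketch: joint queries x CSI tokens.
import math, random

rnd = random.Random(42)
D, TOKENS, JOINTS = 8, 56, 17   # feature dim, CSI tokens, COCO joints

def rand_mat(rows, cols):
    return [[rnd.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

csi_tokens = rand_mat(TOKENS, D)   # per-subcarrier features
queries = rand_mat(JOINTS, D)      # one learned query per joint
w_out = rand_mat(D, 2)             # projects context -> (x, y)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

keypoints = []
for q in queries:
    # scaled dot-product attention over the CSI tokens
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(D)
              for k in csi_tokens]
    attn = softmax(scores)
    # attention-weighted context vector
    ctx = [sum(a * tok[d] for a, tok in zip(attn, csi_tokens))
           for d in range(D)]
    keypoints.append([sum(c * w_out[d][j] for d, c in enumerate(ctx))
                      for j in range(2)])

print(len(keypoints), len(keypoints[0]))  # 17 2
```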
1. Create a feature branch (`git checkout -b feature/amazing-feature`)
2. Commit your changes
3. Push and open a Pull Request
## Changelog
Release history
### v3.0.0 — 2026-03-01

Major release: AETHER contrastive embedding model, AI signal processing backbone, cross-platform adapters, Docker Hub images, and a comprehensive README overhaul.

- Project AETHER (ADR-024) — Self-supervised contrastive learning for WiFi CSI fingerprinting, similarity search, and anomaly detection; 55 KB model fits on ESP32
- AI Backbone (wifi-densepose-ruvector) — 7 RuVector integration points replacing hand-tuned thresholds with attention, graph algorithms, and smart compression; published to crates.io
- Cross-platform RSSI adapters — macOS CoreWLAN and Linux iw Rust adapters with `#[cfg(target_os)]` gating (ADR-025)
- Docker images published — `ruvnet/wifi-densepose:latest` (132 MB, Rust) and `:python` (569 MB)
- Project MERIDIAN (ADR-027) — Cross-environment domain generalization: gradient reversal, geometry-conditioned FiLM, virtual domain augmentation, contrastive test-time training; zero-shot room transfer
- 10-phase DensePose training pipeline (ADR-023/027) — Graph transformer, 6-term composite loss, SONA adaptation, RVF packaging, hardware normalization, domain-adversarial training