Google's SensorLM Translates 59M Hours of Wearable Data - What It Reveals

Reviewed: Andrii Daniv · 1 min read · Jul 29, 2025

Google Research unveiled SensorLM on 28 July 2025 in Mountain View. The foundation model converts raw heart-rate, motion and temperature streams from devices like Fitbit and Pixel Watch into concise text summaries, potentially simplifying both clinical reporting and everyday fitness and sleep insights.

Key Details

  • Trained on 59.7 million hours of de-identified wearable data from 103,643 volunteers across 127 countries.
  • Collection window: 1 March–1 May 2024, totalling roughly 2.5 million person-days.
  • Signals include optical heart rate, accelerometer, gyroscope and skin temperature.
  • Model sizes span 75 million to 1.5 billion parameters.
  • A dual pipeline of contrastive learning and generative pre-training automatically paired sensor patterns with text captions (a rough sketch of this style of objective follows the list).
  • Zero-shot tests identified 20 activities without task-specific labels, and few-shot trials reached high accuracy with only five examples.
  • Captions outperformed general large language model baselines on coherence metrics.
  • Scaling studies followed known scaling laws, showing steady gains as data and compute increased.
  • The project brings together teams from Google Research, Google Health and DeepMind.
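
SensorLM's training code is not public, so the following is only a minimal sketch of how a combined contrastive-plus-generative objective of this kind is commonly written. The function name, tensor shapes and the alpha weighting are illustrative assumptions, not SensorLM's actual implementation:

    import torch
    import torch.nn.functional as F

    def dual_pipeline_loss(sensor_emb, text_emb, caption_logits, caption_targets,
                           temperature=0.07, alpha=0.5):
        # Hypothetical combined objective: CLIP-style contrastive alignment of
        # sensor and text embeddings plus a generative captioning loss.
        #   sensor_emb:      (batch, dim)  pooled sensor-stream embeddings
        #   text_emb:        (batch, dim)  embeddings of the paired captions
        #   caption_logits:  (batch, seq, vocab) decoder outputs
        #   caption_targets: (batch, seq) token ids of the reference captions

        # Contrastive branch: matched sensor/text pairs lie on the diagonal.
        sensor_emb = F.normalize(sensor_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = sensor_emb @ text_emb.t() / temperature
        labels = torch.arange(logits.size(0), device=logits.device)
        contrastive = (F.cross_entropy(logits, labels)
                       + F.cross_entropy(logits.t(), labels)) / 2

        # Generative branch: next-token cross-entropy over the caption.
        generative = F.cross_entropy(
            caption_logits.reshape(-1, caption_logits.size(-1)),
            caption_targets.reshape(-1))

        return alpha * contrastive + (1 - alpha) * generative

In training, the automatically paired sensor-pattern and caption batches described above would feed both branches at once.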

Background

Wearable trackers have shipped widely since 2013, logging movement, heart rate and sleep. Early systems relied on hand-labeled events, limiting progress. Recent foundation models proved that large-scale multimodal pre-training can bridge modalities such as vision and language. SensorLM adapts these ideas to continuous biometric streams, learning direct links between sensor signals and natural language.

Volunteers consented to share de-identified readings under research agreements that follow global privacy guidelines. The model was evaluated on human activity recognition as well as zero-shot classification, few-shot learning and cross-modal retrieval tasks, demonstrating robust generalisation.
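
Zero-shot activity recognition in models of this kind is typically performed by embedding a short text description of each candidate activity and picking the closest match to the sensor embedding. A minimal sketch, assuming hypothetical sensor_encoder and text_encoder stand-ins for the model's two towers (not SensorLM's real interface):

    import torch
    import torch.nn.functional as F

    ACTIVITY_LABELS = ["running", "swimming", "cycling", "sleeping"]  # 20 in the paper

    def zero_shot_classify(sensor_window, sensor_encoder, text_encoder):
        # No task-specific labels are used: only free-text activity names,
        # scored against the embedded sensor window by cosine similarity.
        with torch.no_grad():
            s = F.normalize(sensor_encoder(sensor_window), dim=-1)   # (1, dim)
            prompts = [f"the wearer is {a}" for a in ACTIVITY_LABELS]
            t = F.normalize(text_encoder(prompts), dim=-1)           # (n, dim)
            scores = (s @ t.t()).squeeze(0)
        return ACTIVITY_LABELS[int(scores.argmax())]

Few-shot adaptation would then amount to fitting a light classifier on a handful of such embeddings per activity, consistent with the five-example result reported above.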

Further Reading

The complete methodology and results are detailed in the paper SensorLM: Learning the Language of Wearable Sensors, which is available for download.

Author
Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e‑commerce businesses.