Google Research unveiled SensorLM on 28 July 2025 in Mountain View. The foundation model converts raw heart-rate, motion, and temperature streams from devices such as Fitbit and Pixel Watch into concise text summaries, potentially simplifying clinical reports and everyday fitness and sleep tracking.
Wearable trackers have shipped widely since 2013, logging movement, heart rate, and sleep. Early systems relied on hand-labeled events, which limited progress. Recent foundation models showed that large-scale multimodal pre-training can bridge modalities such as vision and language, and SensorLM adapts these ideas to continuous biometric streams, learning direct links between sensor signals and natural language. Volunteers consented to share de-identified readings under research agreements that follow global privacy guidelines. The model was evaluated on human activity recognition as well as zero-shot classification, few-shot learning, and cross-modal retrieval, demonstrating robust generalisation. The complete methodology and results are detailed in the paper SensorLM: Learning the Language of Wearable Sensors.
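Google has not released SensorLM's training code, so the sketch below is only an illustration of the general technique the article describes: CLIP-style contrastive alignment between paired sensor-window and text embeddings, plus the zero-shot classification setup used in evaluation. All names (contrastive_loss, zero_shot_classify) and the random toy embeddings are hypothetical stand-ins, not SensorLM's actual encoders or API.

```python
import numpy as np


def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)


def log_softmax(z, axis=-1):
    """Numerically stable log-softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))


def contrastive_loss(sensor_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (sensor window, text
    description) embeddings; matching pairs sit on the diagonal of the
    batch-by-batch similarity matrix."""
    s = l2_normalize(sensor_emb)
    t = l2_normalize(text_emb)
    logits = s @ t.T / temperature            # (B, B) cosine similarities
    diag = np.arange(len(logits))
    loss_s2t = -log_softmax(logits, axis=1)[diag, diag].mean()  # sensor -> text
    loss_t2s = -log_softmax(logits, axis=0)[diag, diag].mean()  # text -> sensor
    return (loss_s2t + loss_t2s) / 2


def zero_shot_classify(sensor_emb, class_text_embs):
    """Zero-shot activity recognition: score one sensor embedding against the
    embedding of each candidate text label and pick the closest, with no
    task-specific training."""
    sims = l2_normalize(class_text_embs) @ l2_normalize(sensor_emb)
    return int(np.argmax(sims))


# Toy usage: random vectors stand in for real encoder outputs so the
# sketch runs standalone.
rng = np.random.default_rng(0)
batch, dim = 8, 64
sensor_emb = rng.normal(size=(batch, dim))
text_emb = sensor_emb + 0.1 * rng.normal(size=(batch, dim))  # loosely aligned pairs
print("contrastive loss:", contrastive_loss(sensor_emb, text_emb))

class_prompts = rng.normal(size=(3, dim))  # e.g. "running", "sleeping", "cycling"
print("predicted class:", zero_shot_classify(sensor_emb[0], class_prompts))
```

In the real system, the embeddings would come from a sensor encoder and a text encoder trained jointly, which is what allows tasks such as zero-shot classification and cross-modal retrieval to work without task-specific labels.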