Tools · Apr 28, 2026

NVIDIA releases NV-Raw2Insights-US, an AI system for adaptive ultrasound imaging from raw sensor data

In collaboration with Siemens Healthineers, NVIDIA has developed a reconstruction model that processes raw ultrasound signals to generate personalized sound-speed maps for each patient, enabling real-time adaptive image focusing without traditional beamforming assumptions.

Trust: 69 · Hype: Some hype

3 sources · cross-referenced

TL;DR
  • NVIDIA released NV-Raw2Insights-US, an AI model that learns directly from raw ultrasound sensor data rather than reconstructed images, enabling patient-specific sound-speed estimation for adaptive image focusing.
  • The system uses NVIDIA's Holoscan Sensor Bridge to capture high-bandwidth raw data from the scanner's DisplayPort outputs and stream it over Ethernet to GPU infrastructure, where inference runs in real time.
  • Deployment has been demonstrated on NVIDIA Blackwell-class GPUs and IGX systems, with modular architecture designed to support integration of additional AI models on the same raw data pipeline.
  • The work builds on published research in differentiable beamforming and deep learning-based sound speed estimation, though the model is currently under investigational development and not yet cleared for clinical use.

NVIDIA and Siemens Healthineers have jointly developed NV-Raw2Insights-US, an AI model designed to reconstruct ultrasound images directly from raw sensor signals rather than from pre-processed data. Traditional ultrasound imaging uses hand-engineered beamforming pipelines that make simplifying assumptions—notably assuming constant sound speed throughout tissue—and discard much of the original signal information during reconstruction. The new approach trains on raw ultrasound channel data, the uncompressed echoes captured by the probe, to learn patient-specific characteristics of how sound propagates through individual anatomy.
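To make the constant-sound-speed assumption concrete, here is a minimal delay-and-sum beamformer sketch. This is an illustration of the traditional pipeline the article describes, not NVIDIA's code; the function name, parameters, and the single speed value `c` are ours.

```python
import numpy as np

def delay_and_sum(channel_data, element_x, pixel_x, pixel_z, fs, c=1540.0):
    """Classic delay-and-sum beamforming for one image pixel.

    channel_data: (n_elements, n_samples) raw RF echoes per probe element
    element_x:    (n_elements,) lateral element positions in meters
    fs:           sampling rate in Hz
    c:            the single assumed speed of sound in tissue (m/s) --
                  the simplifying assumption the article describes.
    """
    # Two-way travel time: transmit path to the pixel plus the return
    # path from the pixel to each receiving element, all at constant c
    t_tx = pixel_z / c
    t_rx = np.sqrt(pixel_z ** 2 + (pixel_x - element_x) ** 2) / c
    sample_idx = np.round((t_tx + t_rx) * fs).astype(int)
    sample_idx = np.clip(sample_idx, 0, channel_data.shape[1] - 1)
    # Coherently sum the appropriately delayed echo from every element
    return channel_data[np.arange(len(element_x)), sample_idx].sum()
```

When the true tissue speed deviates from `c`, the computed delays misalign the echoes and the focus degrades, which is exactly the error a patient-specific sound-speed map aims to correct.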

The core capability demonstrated is real-time estimation of local sound-speed variation within a patient. By inferring a personalized sound-speed map, the system adapts image focusing parameters on the fly, correcting for anatomical differences that traditional fixed-speed assumptions cannot accommodate. NVIDIA frames this as part of a broader 'Raw2Insights' class of models that extract actionable clinical insights from raw sensor data rather than finished images.
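A patient-specific sound-speed map changes the focusing delays by replacing the single 1/c with an integral of local slowness along the propagation path. A toy sketch of that idea (our illustration under simplified axial-only geometry, not the released model's code):

```python
import numpy as np

def two_way_delay(speed_map, dz, depth_idx):
    """Two-way travel time to a given depth through a depth-varying
    sound-speed profile, integrating local slowness (1/c) per step.

    speed_map: (n_depths,) local sound speed in m/s along the beam axis,
               e.g. inferred per patient instead of a fixed 1540 m/s
    dz:        axial step size in meters
    depth_idx: index of the target depth sample
    """
    slowness = 1.0 / speed_map[: depth_idx + 1]
    return 2.0 * dz * slowness.sum()  # down to the target and back
```

With a uniform map this reduces to the familiar 2·depth/c delay; with an inferred map the delays, and therefore the focus, shift to match the individual anatomy.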

Hardware integration uses NVIDIA's open-source Holoscan Sensor Bridge (HSB), an FPGA implementation that bridges existing ultrasound scanners to GPU infrastructure. A demonstration uses an Altera Agilex-7 FPGA paired with an ACUSON Sequoia scanner to capture raw data via DisplayPort outputs and transmit it over Ethernet to NVIDIA IGX or DGX systems running Holoscan software. Inference on Blackwell-class GPUs produces sound-speed estimates that are streamed back to the scanner to improve live imaging focus.
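The article does not publish the Holoscan Sensor Bridge wire format. As a rough illustration of what framing raw channel data for Ethernet transport involves, here is a hypothetical header-plus-payload scheme; every field name, size, and sample type below is our assumption, not the actual protocol.

```python
import struct
import numpy as np

# Hypothetical wire format (illustrative only): frame id, element count,
# sample count per element, payload length in bytes, little-endian.
HEADER = struct.Struct("<IHHI")

def pack_frame(frame_id, channel_data):
    # Serialize raw RF channel data as little-endian 16-bit samples
    payload = channel_data.astype("<i2").tobytes()
    n_el, n_s = channel_data.shape
    return HEADER.pack(frame_id, n_el, n_s, len(payload)) + payload

def unpack_frame(buf):
    # Parse the header, then reshape the payload back into channels
    frame_id, n_el, n_s, plen = HEADER.unpack_from(buf)
    samples = np.frombuffer(buf, dtype="<i2", offset=HEADER.size,
                            count=n_el * n_s)
    return frame_id, samples.reshape(n_el, n_s)
```

In a real deployment the receiving side would hand such frames to the GPU inference pipeline, with the resulting sound-speed estimates returned to the scanner over the same link.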

The architecture is positioned as modular and software-defined, enabling continuous updates and integration of new AI models without hardware changes. The team has released model weights and a dataset on Hugging Face, along with accompanying GitHub repositories. However, the technology remains under investigational development, without FDA clearance or availability for clinical sale, and future availability is not guaranteed. The work builds on published research, including differentiable beamforming methods and deep-learning-based sound-speed estimation from 2023.

Sources
  1. NVIDIA (via Hugging Face) · Adaptive Ultrasound Imaging with Physics-Informed NV-Raw2Insights-US AI
  2. IEEE Transactions on Medical Imaging · Ultrasound Autofocusing: Common Midpoint Phase Error Optimization via Differentiable Beamforming
  3. arXiv · Investigating Pulse-Echo Sound Speed Estimation in Breast Ultrasound with Deep Learning

Stories may contain errors. Dispatch is assembled with AI assistance and curated by human editors; despite the trust-score filter, mistakes happen. We correct publicly — every article links to its revision history. Nothing here is financial, legal, or medical advice. Verify before relying on any claim.

© 2026 Dispatch. No ads. No sponsorships. No paid placement. Reader-supported via Ko-fi.

Built by a person who cares about honest AI news.