Inserting a needle into a vein in someone’s arm to draw a blood sample, or to place an intravenous line, is a common medical procedure and often an essential first step in patient care. The difficulty of obtaining venous access ranges from easy to very hard depending on the patient’s veins and physiology as well as the skill of the medical technician. Often the attempt fails, forcing retries, delays, and even calls for additional help.
This clinical procedure is performed more than 1.4 billion times yearly in the United States. However, according to clinical studies, it fails in 27% of patients without visible veins, 40% of patients without palpable veins, and 60% of emaciated patients. Instrumentation with ultrasound imaging is available to assist clinicians in locating the vein, but manual needle insertion under ultrasound guidance requires careful hand-eye coordination for steady placement and control of both the probe and needle. Near-infrared (NIR) imaging systems are also used, but they have a penetration depth of only about 3 mm and tend to be ineffective with obese patients.
Venipuncture Device
Now, a team based at Rutgers University has developed and is field-testing a nearly autonomous robotic system that locates a likely suitable vein, inserts the needle, and even draws the blood sample. This venipuncture device is designed to safely perform blood draws on peripheral forearm veins. The system combines ultrasound imaging and miniaturized robotics to identify suitable vessels for cannulation and robotically guide an attached needle toward what’s called the lumen center.
A clinician handles the setup: generally positioning the machine with respect to the subject’s arm, sterilizing/wiping the target zone, applying ultrasound hydrogel, and selecting the target vein’s center as displayed on a monitor. The device then uses those coordinates to determine the kinematics needed to ensure that the needle tip intersects the ultrasound imaging plane at the vessel center.
Once aligned and steady, the operator initiates the insertion procedure. The injection-axis carriage drives the attached needle tip forward at a 25-degree angle relative to the participant’s forearm toward the vein center, inserts the needle, and draws a 5-ml blood sample.
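The trajectory math is simple trigonometry: with the insertion angle fixed at 25 degrees, the depth of the selected vessel center sets how far the injection carriage must drive the needle along its own axis. The short Python sketch below illustrates that geometry; the function name and numbers are illustrative assumptions, not the team’s actual control code.

import math

def injection_travel(vessel_depth_mm: float, insertion_angle_deg: float = 25.0) -> float:
    """Needle travel (mm) along its own axis to reach the vessel center.

    Assumes the Z-axis carriage has already aligned the trajectory so the
    needle tip meets the ultrasound imaging plane at the selected depth;
    the travel is then the hypotenuse of the depth/angle right triangle.
    """
    theta = math.radians(insertion_angle_deg)
    return vessel_depth_mm / math.sin(theta)

# Example: a vessel center 4 mm below the needle's entry height at the fixed
# 25-degree angle requires roughly 9.5 mm of travel along the needle axis.
print(round(injection_travel(4.0), 1))  # ~9.5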
System Setup
The system consists of two major mechanical assemblies: a two-dimensional ultrasound probe on a linear-motion carriage and a single-axis needle-injection carriage, linked by a microcontroller for coordinated control (Fig. 1).
1. Exploded view and major functional components of the handheld venipuncture device.
Guidance and task execution employ a combination of force-vs.-displacement feedback profiling and ultrasonic imaging (Fig. 2). Real-time analysis of the profile data indicates whether the procedure is likely to succeed, including detecting the desired sudden “breakthrough” in which force drops sharply over a short distance as the vein wall is pierced.
2. Robotic device set-up and operation: (a) Handheld venipuncture device. (b) Computer-aided design (CAD) displaying key components of the two-degree-of-freedom (DoF) device. Angle of insertion (θ) is fixed at 25 degrees. (c) Device operation: (i) Ultrasound (US) imaging plane provides a cross-sectional view of target vessel. (ii) Once a vessel is located by the device, the needle is aligned via the Z-axis motion (Zm) DoF motor; the Zm motor (blue arrow) is responsible for aligning the needle trajectory with the vessel depth (Z-axis) to ensure the needle tip reaches the vessel center exactly at the ultrasound imaging plane. (iii) Once trajectory is aligned, the needle is inserted via the injection motion (Inj m) DoF motor (green arrow) and automatically halted once the tip has reached the vessel center. (d) Device positioned over the upper forearm during the study. (e) Ultrasound image depicting the needle tip present in the target vessel after a successful venipuncture. Vessel wall is identified by a yellow dashed ellipse. The Z-axis in the image indicates the vessel depth and the Y-axis indicates the sagittal position of the vessel. Positions of the vessel and needle tip are recorded with respect to the ultrasound transducer head (top of image).
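As a rough illustration of the breakthrough signature described above, the following Python sketch flags the point where force drops sharply over a short span of needle travel. The threshold and window values are assumptions for illustration only, not the published algorithm or its parameters.

def detect_breakthrough(displacement_mm, force_n,
                        drop_threshold_n=0.15, window_mm=0.5):
    """Return the displacement at which force drops sharply over a short
    distance (the vein-wall "breakthrough"), or None if no such drop occurs.

    displacement_mm and force_n are equal-length sample sequences from the
    injection axis; threshold and window values are illustrative only.
    """
    for i in range(1, len(force_n)):
        # Find the start of a lookback window of at most window_mm of travel.
        j = i - 1
        while j > 0 and displacement_mm[i] - displacement_mm[j] < window_mm:
            j -= 1
        # A breakthrough is a large drop from the recent peak force.
        if max(force_n[j:i]) - force_n[i] >= drop_threshold_n:
            return displacement_mm[i]
    return None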
If the force/distance profile indicates the attempt will be unsuccessful (veins aren’t rigid, of course, and can move or roll during the process), or if no blood flows, the needle withdraws and the operator guides the machine to a new candidate site. A graphical user interface shows the ultrasound image, force-sensor data, injection-motor velocity, and position profiles for both the Z-axis and injection motors (Fig. 3).
3. The major displays of the graphical user interface (GUI) for the handheld venipuncture device software include the ultrasound image stream, force sensor, injection motor velocity, and desired versus actual position for both Z-axis and injection motors. The red line in the ultrasound image is the needle trajectory. This is where the needle will intersect the ultrasound imaging plane. The user is tasked with manually placing the device such that the imaged vessel (dark ellipse) is centered with the needle trajectory line.
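The withdraw-and-retry behavior amounts to a simple go/no-go decision after insertion. A minimal sketch of that logic, assuming the breakthrough check above and some form of blood-flow sensing, might look like this; the names and states are hypothetical, not taken from the device’s software.

from enum import Enum, auto

class Outcome(Enum):
    SAMPLE_DRAWN = auto()
    WITHDRAW_AND_REPOSITION = auto()

def insertion_outcome(breakthrough_mm, blood_flow_detected: bool) -> Outcome:
    """Proceed with the 5-ml draw only if the force profile showed a
    breakthrough and blood is flowing; otherwise withdraw so the operator
    can guide the device to a new candidate site.
    """
    if breakthrough_mm is None or not blood_flow_detected:
        return Outcome.WITHDRAW_AND_REPOSITION
    return Outcome.SAMPLE_DRAWN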
Efficacy
The results thus far on a limited number of test subjects are favorable and comparable to or better than those of clinical standards. The overall success rate was 87% on all 31 participants and 97% on the 25 participants who had easy-access veins, with an average procedure time of 93 ± 30 seconds.
The researchers note that future versions of the device could be extended to other areas of vascular access, such as IV catheterization, central venous access, dialysis, and arterial line placement. Further, the system could be combined with an integral blood-assessment subsystem for an “all-in-one” blood-draw-and-test arrangement.
Details of the project are in the team’s paper “First-in-human evaluation of a hand-held automated venipuncture device for rapid venous blood draws,” published in Technology (World Scientific). It focuses primarily on sensor data, profiles, algorithms, and results, with less discussion of the machine’s actual construction. While the paper is behind a paywall, it’s also available here as an HTML page with a link to a downloadable PDF file.
To boost machine-learning functionality in its MCUs and DSPs, NXP has incorporated the open-source Glow neural-network compiler into its technology mix.
These days, neural networks (NNs) are synonymous with machine learning (ML) and artificial intelligence (AI), even though they’re only part of those arenas. Still, deep neural networks (DNNs) are what’s hot and driving adoption in everything from health monitoring to stock trading. They’re helping make self-driving cars practical and letting motor controllers provide predictive-maintenance information to the cloud.
Today, ML hardware acceleration garners the bulk of the mindshare, but lowly micros without it are equally capable of handling ML chores, albeit less demanding ones on the computation side. One of the tools needed to make that happen is a compiler that takes ML models and turns them into code. This is where the open-source Glow compiler comes into play.
Facebook developed the original Glow compiler, which is part of the PyTorch ML framework. It spans the gamut of hardware and software platforms, from the cloud, to Microsoft Windows on the desktop, to micros controlling motors.
NXP added support for Glow in its eIQ Machine Learning Software. The company’s implementation of the Glow compiler targets Arm Cortex-M cores and Cadence Tensilica HiFi 4 DSPs. This includes its i.MX RT series of crossover MCUs. The NXP support incorporates platform-specific optimizations that take advantage of ML hardware acceleration.
The Glow NN compiler does better than TensorFlow Lite for comparable ML models (see figure). The i.MX RT685 includes a Tensilica HiFi4 DSP that delivers the major performance advantage over non-ML-accelerated hardware. Part of the boost comes from utilization of Cadence’s Neural Network Library (NNLib); Cadence is also the source of the Tensilica HiFi4 DSP architecture.
The NXP Glow compiler provides a significant performance boost compared to TensorFlow Lite.
The MCUXpresso SDK that supports the hardware and Glow compiler is available for free from NXP. PyTorch support includes ONNX models and Microsoft’s MMdnn toolset.
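Because the PyTorch support includes ONNX models, a typical workflow is to export a trained PyTorch model to ONNX and then feed that file to the Glow ahead-of-time compiler in NXP’s eIQ/MCUXpresso tooling, which generates a bundle for the Cortex-M core or HiFi4 DSP. The sketch below shows only the standard PyTorch export step; the model choice and file names are placeholders, and the subsequent Glow compilation happens in NXP’s tools rather than in this script.

import torch
import torchvision

# Placeholder model; any trained torch.nn.Module exports the same way.
model = torchvision.models.mobilenet_v2(weights=None).eval()

# Glow compiles ahead of time, so the input tensor shape must be fixed.
dummy_input = torch.randn(1, 3, 224, 224)

# Standard PyTorch ONNX export; the resulting .onnx file is what the Glow
# bundle compiler in the eIQ/MCUXpresso tooling consumes to generate code
# for the target core.
torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx", opset_version=11)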
As with most embedded applications, developers need to determine which algorithms and applications apply to their solution. They also must pinpoint what’s achievable given the constraints of the target platform, the time available to craft a solution, the requirements of the application, and the capabilities of the developer, tools, and target. The job becomes more challenging when one considers the range of possible ML models, NN compilers, and so on involved in building such a system.
Using NNs is no different than using something like a fast Fourier transform (FFT) to provide results that another portion of the application can utilize. Granted, NN tasks are often more computationally demanding, but this simply means that the appropriate tools and systems must be employed. The big difference with NN solutions is the range of possibilities, tradeoffs, and optimizations available to a developer. NXP’s Glow compiler support is just one aspect of this, but it can provide a performance boost that may make hardware acceleration unnecessary, or make something like voice or image recognition possible where it wouldn’t have been before.