Publications

2023

Kobayashi, Satoshi, Franklin King, and Nobuhiko Hata. 2023. “Automatic Segmentation of Prostate and Extracapsular Structures in MRI to Predict Needle Deflection in Percutaneous Prostate Intervention”. International Journal of Computer Assisted Radiology and Surgery 18 (3): 449-60. https://doi.org/10.1007/s11548-022-02757-2.

PURPOSE: Understanding the three-dimensional anatomy of percutaneous intervention in prostate cancer is essential to avoid complications. Recently, attempts have been made to use machine learning to automate the segmentation of functional structures such as the prostate gland, rectum, and bladder. However, a paucity of material is available to segment extracapsular structures that are known to cause needle deflection during percutaneous interventions. This research aims to explore the feasibility of the automatic segmentation of prostate and extracapsular structures to predict needle deflection.

METHODS: Using pelvic magnetic resonance images (MRIs), a 3D U-Net was trained and optimized for the prostate and extracapsular structures (bladder, rectum, pubic bone, pelvic diaphragm muscle, bulbospongiosus muscle, bulb of the penis, ischiocavernosus muscle, crus of the penis, transverse perineal muscle, obturator internus muscle, and seminal vesicle). The segmentation accuracy was validated by feeding intra-procedural MRIs into the 3D U-Net to segment the prostate and extracapsular structures in the image. The segmented structures were then used to predict the deflected needle path in in-bore MRI-guided biopsy using a model-based approach.

RESULTS: The 3D U-Net yielded high Dice scores for parenchymal organs (0.61-0.83), such as the prostate, bladder, rectum, bulb of the penis, and crus of the penis, but lower scores for muscle structures (0.03-0.31), except the obturator internus muscle (0.71). The 3D U-Net showed higher Dice scores for functional structures (p < 0.001) and complication-related structures (p < 0.001). The segmentation of extracapsular anatomies helped to predict the deflected needle path in MRI-guided prostate interventions with an accuracy of 0.9 to 4.9 mm.

CONCLUSION: Our segmentation method using 3D U-Net provided an accurate anatomical understanding of the prostate and extracapsular structures. In addition, our method was suitable for segmenting functional and complication-related structures. Finally, 3D images of the prostate and extracapsular structures could simulate the needle pathway to predict needle deflections.
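The segmentation accuracy above is reported as Dice scores. As a minimal sketch (not the authors' code), the Dice similarity coefficient between a predicted and a ground-truth binary mask can be computed as follows; the toy masks are hypothetical:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D example: two overlapping square "segmentations"
pred = np.zeros((10, 10), dtype=bool)
truth = np.zeros((10, 10), dtype=bool)
pred[2:6, 2:6] = True    # 16 pixels
truth[3:7, 3:7] = True   # 16 pixels, 9 of them overlapping
print(round(dice_score(pred, truth), 4))  # 2*9 / (16+16) = 0.5625
```

The same formula applies voxel-wise to 3D segmentations such as those produced by the 3D U-Net.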

Naito, Masahito, Fumitaro Masaki, Rebecca Lisk, Hisashi Tsukada, and Nobuhiko Hata. 2023. “Predicting Reachability to Peripheral Lesions in Transbronchial Biopsies Using CT-Derived Geometrical Attributes of the Bronchial Route”. International Journal of Computer Assisted Radiology and Surgery 18 (2): 247-55. https://doi.org/10.1007/s11548-022-02723-y.

PURPOSE: The bronchoscopist's ability to locate the lesion with the bronchoscope is critical for a transbronchial biopsy. However, comparatively little study has been done on the transbronchial biopsy route itself. This study aims to determine whether the geometrical attributes of the bronchial route can predict the difficulty of reaching tumors in bronchoscopic intervention.

METHODS: This study included patients who underwent bronchoscopic diagnosis of lung tumors using electromagnetic navigation. The biopsy instrument was considered "reached" and recorded as such if the tip of the tracked bronchoscope or extended working channel was in the tumor. Four geometrical indices were defined: local curvature (LC), plane rotation (PR), radius, and global relative angle. A Mann-Whitney U test and logistic regression analysis were performed to analyze the difference in geometrical indices between the reachable and unreachable groups. Receiver operating characteristic (ROC) analysis was performed to evaluate the geometrical indices' ability to predict reachability.

RESULTS: Of the 41 patients enrolled in the study, 16 patients were assigned to the unreachable group and 25 patients to the reachable group. LC, PR, and radius were significantly higher in unreachable cases than in reachable cases. The logistic regression analysis showed that LC and PR were significantly associated with reachability. The areas under the curve from ROC analysis of the LC and PR indices were 0.903 and 0.618, respectively. The LC cut-off value was 578.25.

CONCLUSION: We investigated whether the geometrical attributes of the bronchial route to the lesion can predict the difficulty of reaching the lesion in bronchoscopic biopsy. LC, PR, and radius were significantly higher in unreachable cases than in reachable cases. The LC and PR indices can potentially be used to predict the navigational success of the bronchoscope.
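The area under the ROC curve reported above is closely related to the Mann-Whitney U statistic also used in this study: the AUC equals the normalized U, i.e., the probability that a randomly chosen unreachable case has a higher index than a randomly chosen reachable case. A minimal sketch (the curvature values below are hypothetical, not from the paper):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the normalized Mann-Whitney U statistic:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical local-curvature (LC) values; unreachable cases tend higher
lc_unreachable = [700.0, 650.0, 590.0, 610.0]
lc_reachable = [450.0, 500.0, 600.0, 300.0, 480.0]
print(auc_mann_whitney(lc_unreachable, lc_reachable))  # 0.95
```

An AUC near 1.0, like the 0.903 reported for LC, means the index separates the two groups almost perfectly.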

Kobayashi, Satoshi, Fumitaro Masaki, Franklin King, Daniel A Wollin, Adam S Kibel, and Nobuhiko Hata. 2023. “Feasibility of Multi-Section Continuum Robotic Ureteroscope in the Kidney”. Journal of Robotic Surgery. https://doi.org/10.1007/s11701-023-01530-0.

Our objective was to evaluate the feasibility of a multi-section continuum robotic ureteroscope to address the difficulties with access into certain renal calyces during flexible ureteroscopy. First, the robotic ureteroscope developed in previous research, which utilizes three actuated bendable sections controlled by wires, was modified for use in this project. Second, using phantom models created from five randomly selected computed tomography urograms, the flexible ureteroscope and robotic ureteroscope were evaluated, focusing on several factors: time taken to access each renal calyx, time taken to aim at three targets on each renal calyx, the force generated in the renal pelvic wall associated with ureteroscope manipulation, and the distance and standard deviation between the ureteroscope and the target. As a result, the robotic ureteroscope applied significantly less force during lower pole calyx access (flexible vs. robotic ureteroscope: 2.0 vs. 0.98 N, p = 0.03). When aiming at targets, the standard deviation of proper target access was smaller for each renal calyx (upper pole: 0.49 vs. 0.11 mm, middle: 0.84 vs. 0.12 mm, lower pole: 3.4 vs. 0.19 mm) in the robotic ureteroscope group, and the distance between the center point of the ureteroscope image and the target was significantly smaller in the robotic ureteroscope group (upper: 0.49 vs. 0.19 mm, p < 0.001, middle: 0.77 vs. 0.17 mm, p < 0.001, lower: 0.77 vs. 0.22 mm, p < 0.001). In conclusion, our robotic ureteroscope demonstrated improved maneuverability and facilitated accuracy and precision while reducing the force on the renal pelvic wall during access into each renal calyx.

Banach, Artur, Masahito Naito, Franklin King, Fumitaro Masaki, Hisashi Tsukada, and Nobuhiko Hata. 2023. “Computer-Based Airway Stenosis Quantification from Bronchoscopic Images: Preliminary Results from a Feasibility Trial”. International Journal of Computer Assisted Radiology and Surgery 18 (4): 707-13. https://doi.org/10.1007/s11548-022-02808-8.

PURPOSE: Airway Stenosis (AS) is a condition of airway narrowing in the expiration phase. Bronchoscopy is a minimally invasive pulmonary procedure used to diagnose and/or treat AS. AS quantification in the form of the Stenosis Index (SI), whether subjective or digital, is necessary for the physician to decide on the most appropriate form of treatment. The literature reports that subjective SI estimation is inaccurate. In this paper, we propose an approach to quantify the SI, defining the level of airway narrowing, using depth estimation from a bronchoscopic image.

METHODS: In this approach we combined a generative depth estimation technique with depth thresholding to provide computer-based AS quantification. We performed an interim clinical analysis by comparing the AS quantification performance of three expert bronchoscopists against the proposed computer-based method on seven patient datasets.

RESULTS: The Mean Absolute Error of the subjective Human-based and the proposed Computer-based SI estimation was [Formula: see text]% and [Formula: see text]%, respectively. With CT measurements used as the gold standard, the correlation coefficients of the Human-based and Computer-based SI estimations were [Formula: see text] and 0.46, respectively.

CONCLUSIONS: We presented a new computer method to quantify the severity of AS in bronchoscopy using depth estimation and compared the performance of the method against a human-based approach. The obtained results suggest that the proposed Computer-based AS quantification is a feasible tool that has the potential to provide significant assistance to physicians in bronchoscopy.
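To illustrate the idea of SI estimation via depth thresholding, the sketch below (our assumption about the general scheme, not the authors' implementation) thresholds a depth map to find the open lumen area and expresses the SI as a percentage reduction relative to a healthy reference area; the synthetic depth map and all numbers are hypothetical:

```python
import numpy as np

def stenosis_index(depth_map, depth_threshold, healthy_area_px):
    """Percent narrowing: pixels deeper than the threshold are taken
    as open lumen; the SI compares that area to a healthy reference."""
    lumen_area = np.count_nonzero(depth_map > depth_threshold)
    return 100.0 * (1.0 - lumen_area / healthy_area_px)

# Synthetic depth map: a 20x20 frame whose central disc is "deep" lumen
yy, xx = np.mgrid[0:20, 0:20]
radius = np.hypot(yy - 10, xx - 10)
depth = np.where(radius < 5, 30.0, 5.0)  # deep lumen disc, shallow wall
si = stenosis_index(depth, depth_threshold=10.0, healthy_area_px=200)
print(round(si, 1))  # 65.5 (% narrowing for this toy example)
```

In the paper the depth map comes from a generative network applied to the bronchoscopic image rather than from a synthetic phantom.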

2022

Dominas, Christine, Sharath Bhagavatula, Elizabeth Stover, Kyle Deans, Cecilia Larocca, Yolanda Colson, Pierpaolo Peruzzi, et al. 2022. “The Translational and Regulatory Development of an Implantable Microdevice for Multiple Drug Sensitivity Measurements in Cancer Patients”. IEEE Transactions on Bio-Medical Engineering 69 (1): 412-21. https://doi.org/10.1109/TBME.2021.3096126.

OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow.

METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients.

RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress.

CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents.

SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.

2021

Masaki, Fumitaro, Franklin King, Takahisa Kato, Hisashi Tsukada, Yolonda Colson, and Nobuhiko Hata. 2021. “Technical Validation of Multi-Section Robotic Bronchoscope With First Person View Control for Transbronchial Biopsies of Peripheral Lung”. IEEE Transactions on Bio-Medical Engineering 68 (12): 3534-42. https://doi.org/10.1109/TBME.2021.3077356.

This study aims to validate the advantage of a new engineering method of maneuvering a multi-section robotic bronchoscope with first person view control in transbronchial biopsy. Six physician operators were recruited and tasked to operate a manual and a robotic bronchoscope to peripheral areas of patient-derived lung phantoms. The metrics collected were the furthest generation count of the airway the bronchoscope reached, force incurred to the phantoms, and NASA-Task Load Index. The furthest generation counts of the airway the physicians reached using the manual and the robotic bronchoscopes were 6.6 ± 1.2 and 6.7 ± 0.8, respectively. Robotic bronchoscopes successfully reached the 5th generation count into the peripheral area of the airway, while the manual bronchoscope typically failed earlier, in the 3rd generation. More force was incurred to the airway when the manual bronchoscope was used than when the robotic bronchoscope was applied (0.24 ± 0.20 N vs. 0.18 ± 0.22 N). The manual bronchoscope imposed more physical demand than the robotic bronchoscope by NASA-TLX score (55 ± 24 vs. 19 ± 16). These results indicate that a robotic bronchoscope facilitates the advancement of the bronchoscope to the peripheral area with less physical demand on physician operators. The metrics collected in this study are expected to serve as a benchmark for the future development of robotic bronchoscopes.

Banach, Artur, Franklin King, Fumitaro Masaki, Hisashi Tsukada, and Nobuhiko Hata. 2021. “Visually Navigated Bronchoscopy Using Three Cycle-Consistent Generative Adversarial Network for Depth Estimation”. Medical Image Analysis 73: 102164. https://doi.org/10.1016/j.media.2021.102164.

[Background] Electromagnetically Navigated Bronchoscopy (ENB) is currently the state of the art in diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address the aforementioned issue of CT-to-body divergence. [Materials and Methods] We extended and validated an unsupervised learning method to generate a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and to register the depth map to pre-procedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into the navigated bronchoscopic system based on 3D Slicer and accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of the tracking and the Target Registration Error (TRE) of the total navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens. [Results] The ATE using 3cGAN was 6.2 +/- 2.9 mm. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method had a range of 11.7 to 40.5 mm. The TRE computed by 3cGAN was statistically significantly smaller than those computed by cGAN in two of the five cases enrolled (p < 0.05). [Conclusion] VNB using 3cGAN to generate the depth maps was technically and clinically feasible. While the accuracy of tracking by 3cGAN was acceptable, the TRE warrants further investigation and improvement.

Lee, Eung-Joo, William Plishker, Nobuhiko Hata, Paul B Shyn, Stuart G Silverman, Shuvra S Bhattacharyya, and Raj Shekhar. 2021. “Rapid Quality Assessment of Nonrigid Image Registration Based on Supervised Learning”. Journal of Digital Imaging 34 (6): 1376-86. https://doi.org/10.1007/s10278-021-00523-5.

When preprocedural images are overlaid on intraprocedural images, interventional procedures benefit in that more structures are revealed in intraprocedural imaging. However, image artifacts, respiratory motion, and challenging scenarios could limit the accuracy of multimodality image registration necessary before image overlay. Ensuring the accuracy of registration during interventional procedures is therefore critically important. The goal of this study was to develop a novel framework that has the ability to assess the quality (i.e., accuracy) of nonrigid multimodality image registration accurately in near real time. We constructed a solution using registration quality metrics that can be computed rapidly and combined to form a single binary assessment of image registration quality as either successful or poor. Based on expert-generated quality metrics as ground truth, we used a supervised learning method to train and test this system on existing clinical data. Using the trained quality classifier, the proposed framework identified successful image registration cases with an accuracy of 81.5%. The current implementation produced the classification result in 5.5 s, fast enough for typical interventional radiology procedures. Using supervised learning, we have shown that the described framework could enable a clinician to obtain confirmation or caution of registration results during clinical procedures.
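The core idea, combining several rapidly computable quality metrics into a single binary success/poor label, can be sketched as a logistic classifier. The weights, metric names, and decision threshold below are purely hypothetical stand-ins for the trained model described in the paper:

```python
import numpy as np

# Hypothetical pre-trained weights for three rapid quality metrics
# (e.g. an intensity-similarity score, a landmark-distance score,
# and a deformation-regularity score).
WEIGHTS = np.array([2.1, -1.4, 0.9])
BIAS = -0.3

def registration_quality(metrics):
    """Map a metric vector to a binary success/poor label via a
    logistic model; 0.5 is the decision threshold."""
    z = float(np.dot(WEIGHTS, metrics) + BIAS)
    prob_success = 1.0 / (1.0 + np.exp(-z))
    label = "successful" if prob_success >= 0.5 else "poor"
    return label, prob_success

label, p = registration_quality(np.array([0.9, 0.2, 0.5]))
print(label)  # high similarity, small landmark error -> "successful"
```

In the actual framework the weights come from supervised training against expert-generated ground truth, and the whole evaluation completes in about 5.5 s, fast enough for intraprocedural use.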

2020

Tsumura, Ryosuke, Doua P Vang, Nobuhiko Hata, and Haichong K Zhang. 2020. “Ring-Arrayed Forward-Viewing Ultrasound Imaging System: A Feasibility Study”. Proceedings of SPIE–the International Society for Optical Engineering 11319. https://doi.org/10.1117/12.2550042.

Current standard workflows of ultrasound (US)-guided needle insertion require physicians to use both hands: holding the US probe to locate areas of interest with the non-dominant hand and the needle with the dominant hand. This is due to the separation of functionalities for localization and needle insertion. This requirement not only makes the procedure cumbersome, but also limits the reliability of guidance, given that the positional relationship between the needle and US images is unknown and interpreted based on experience and assumption. Although US-guided needle insertion may be assisted through navigation systems, recovery of the positional relationship between the needle and US images requires external tracking systems and image-based tracking algorithms that may introduce registration inaccuracy. Therefore, there is an unmet need for a solution that provides simple and intuitive needle localization and insertion to improve the conventional US-guided procedure. In this work, we propose a new device concept based on a ring-arrayed forward-viewing (RAF) ultrasound imaging system. The proposed system comprises ring-arrayed transducers and an open hole inside the ring through which the needle can be inserted. The ring array provides forward-viewing US images, where the needle path is always maintained at the center of the reconstructed image without requiring any registration. As a proof of concept, we designed single-circle ring-arrayed configurations with different radii and visualized point targets using forward-viewing US imaging through simulations and phantom experiments. The results demonstrated successful target visualization and indicate that ring-arrayed US imaging has the potential to make the US-guided needle insertion procedure simpler and more intuitive.

Gao, Yuanqian, Kiyoshi Takagi, Takahisa Kato, Naoyuki Shono, and Nobuhiko Hata. 2020. “Continuum Robot With Follow-the-Leader Motion for Endoscopic Third Ventriculostomy and Tumor Biopsy”. IEEE Transactions on Bio-Medical Engineering 67 (2): 379-90. https://doi.org/10.1109/TBME.2019.2913752.

BACKGROUND: In a combined endoscopic third ventriculostomy (ETV) and endoscopic tumor biopsy (ETB) procedure, an optimal tool trajectory is mandatory to minimize trauma to surrounding cerebral tissue.

OBJECTIVE: This paper presents a wire-driven multi-section robot with push-pull wires. The robot was tested for its ability to attain follow-the-leader (FTL) motion to place surgical instruments through narrow passages while minimizing trauma to tissues.

METHODS: A wire-driven continuum robot with six sub-sections was developed and its kinematic model was proposed to achieve FTL motion. An accuracy test assessing the robot's ability to attain FTL motion along a set of elementary curved trajectories was performed. We also used a hydrocephalus ventricular model created from human subject data to generate five ETV/ETB trajectories and conducted a study assessing the accuracy of FTL motion along these clinically desirable trajectories.

RESULTS: In the test with elementary curved paths, the maximal deviation of the robot increased from 0.47 mm at a 30° turn to 1.78 mm at 180° in a simple C-shaped curve. S-shaped FTL motion had less deviation, ranging from 0.16 to 0.18 mm. In the phantom study, the greatest tip deviation was 1.45 mm, and the greatest path deviation was 1.23 mm.

CONCLUSION: We present the application of a continuum robot with FTL motion to perform a combined ETV/ETB procedure. The validation study using human subject data indicated that the accuracy of FTL motion is relatively high. The study indicated that FTL motion may be a useful tool for combined ETV and ETB procedures.
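Multi-section wire-driven continuum robots like this one are commonly modeled with piecewise constant-curvature kinematics: each section is an arc, and the tip pose is the composition of the per-section transforms. The 2D sketch below illustrates that standard approximation only; the section lengths and bend angles are hypothetical and this is not the paper's six-section kinematic model:

```python
import numpy as np

def section_transform(length, bend_angle):
    """2D homogeneous transform across one constant-curvature section.
    A (nearly) straight section simply translates by its length."""
    if abs(bend_angle) < 1e-9:
        dx, dy = length, 0.0
    else:
        r = length / bend_angle          # radius of the circular arc
        dx = r * np.sin(bend_angle)
        dy = r * (1.0 - np.cos(bend_angle))
    c, s = np.cos(bend_angle), np.sin(bend_angle)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0., 0., 1.]])

# Six 10 mm sections. In FTL motion, the bend sequence is propagated
# backward through the sections as the robot advances, so the body
# keeps retracing the path swept by the leading tip.
bends = [0.0, 0.2, 0.3, 0.3, 0.1, 0.0]   # radians, hypothetical
T = np.eye(3)
for b in bends:
    T = T @ section_transform(10.0, b)
print(np.round(T[:2, 2], 2))  # planar tip position [x, y] in mm
```

Deviation metrics such as those reported above can then be computed by comparing the swept body positions against the commanded tip trajectory.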