Advances in surgery have had a major impact on the management of acute and chronic disease, prolonging life and continually extending the boundaries of survival. As shown in Figure 1, these advances have been enabled by continuous technological development in diagnostics, imaging, and surgical instrumentation. Among these technologies, deep learning has been particularly important in driving preoperative surgical planning, in which the procedure is planned from the patient's existing medical records, and imaging is critical to success. Among available imaging modalities, X-ray, CT, ultrasound, and MRI are the most commonly used in practice. Routine tasks based on medical imaging include anatomical classification, detection, segmentation, and registration.
Figure 1: Overview of popular AI techniques, together with the key requirements, challenges, and subareas of AI as applied to preoperative planning, intraoperative guidance, and surgical robotics.
1. Classification
Classification outputs a diagnostic value for its input, which may be a single medical image or a set of them, or an image of an organ or lesion. Beyond traditional machine learning and image analysis techniques, deep learning-based methods are on the rise [1]. For the latter, the network architecture used for classification consists of convolutional layers that extract information from the input and fully connected layers that regress the diagnostic value.
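As a minimal sketch of the final stage of such an architecture, the code below flattens a (hypothetical) convolutional feature map and regresses class probabilities with a fully connected layer; all shapes and names are illustrative assumptions, not taken from any cited work.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_map, W, b):
    """Flatten convolutional features and regress diagnostic class
    probabilities with a fully connected layer (illustrative only)."""
    x = feature_map.ravel()        # flatten the C x H x W feature map
    return softmax(W @ x + b)      # one probability per diagnostic class

# Toy example: 8 feature channels of 4x4, 3 diagnostic classes.
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 4, 4))
W = rng.standard_normal((3, 8 * 4 * 4))
b = np.zeros(3)
probs = classify(features, W, b)   # sums to 1 across the 3 classes
```

In a trained network `W` and `b` would be learned end to end together with the convolutional layers; here they are random, so only the shapes and the probability normalization are meaningful.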
For example, a classification pipeline using the Google Inception and ResNet architectures has been proposed to discriminate subtypes of lung, bladder, and breast cancer [2]. Chilamkurthy et al. demonstrated that deep learning can identify intracranial hemorrhage, skull fractures, midline shift, and mass effect in head CT scans [3]. Compared with standard clinical tools, recurrent neural networks (RNNs) can predict mortality, renal failure, and postoperative bleeding in real time for patients after cardiac surgery [4]. ResNet-50 and Darknet-19 have been used to classify benign and malignant lesions in ultrasound images, showing similar sensitivity and higher specificity [5].
2. Detection
Detection provides the spatial localization of regions of interest, usually in the form of bounding boxes or landmarks, and may also include classification at the image or region level. Here too, deep learning-based methods have shown promise in detecting a variety of abnormalities and medical conditions. A DCNN for detection typically consists of convolutional layers for feature extraction and regression layers that determine the bounding-box attributes.
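Bounding-box predictions from such a regression head are typically scored against ground truth by intersection over union (IoU); the snippet below is a minimal, self-contained sketch of that metric (the corner-coordinate box format and names are my own, not from the cited works).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero-sized if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

A common convention in detection benchmarks is to count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.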
To detect prostate cancer in 4D positron emission tomography (PET) images, deeply stacked convolutional autoencoders were trained to extract statistical and kinetic biological features [6]. For pulmonary nodule detection, a 3D CNN with roto-translation group convolutions (3D G-CNN) was proposed, showing good accuracy, sensitivity, and convergence speed [7]. For breast lesion detection, deep reinforcement learning (DRL) based on an extension of the deep Q-network was used to learn a search policy from dynamic contrast-enhanced MRI [8]. To detect acute intracranial hemorrhage from CT scans while improving the interpretability of the network, Lee et al. [9] used attention maps and an iterative process to mimic the workflow of radiologists.
3. Segmentation
Segmentation can be viewed as a pixel-level or voxel-level image classification problem. Owing to the limited computational resources of early work, each image or volume was divided into small windows, and a CNN was trained to predict the target label at the center of each window. Image or volume segmentation could then be achieved by running the CNN classifier over densely sampled windows. For example, DeepMedic showed good performance on multimodal brain tumor segmentation from MRI [10]. However, sliding-window approaches are inefficient because network features are computed repeatedly wherever windows overlap. For this reason, sliding-window methods have recently been replaced by fully convolutional networks (FCNs) [11]. The key idea is to replace the fully connected layers of a classification network with convolutional and upsampling layers, which greatly improves segmentation efficiency. For medical image segmentation, encoder-decoder networks such as U-Net [12][13] have shown promising performance. The encoder has multiple convolutional and downsampling layers that extract image features at different scales. The decoder has convolutional and upsampling layers that recover the spatial resolution of the feature maps and ultimately produce a dense pixel- or voxel-wise segmentation. A review of different normalization methods for training U-Net for medical image segmentation can be found in [14].
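Dense segmentations such as those produced by U-Net are commonly evaluated with the Dice overlap coefficient; the following is a minimal sketch of that metric for binary masks (array names and the toy masks are illustrative).

```python
import numpy as np

def dice(pred, target):
    """Dice overlap of two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [0, 0, 0]])
score = dice(pred, target)  # 2 * 2 / (3 + 3) = 2/3
```

The same expression, softened to real-valued probabilities, is also widely used as a training loss for segmentation networks.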
For navigation in endoscopic pancreatic and biliary procedures, Gibson et al. [15] used dilated convolutions and fused image features at multiple scales to segment abdominal organs from CT scans. For interactive segmentation of the placenta and fetal brain from MRI, an FCN was combined with user-provided bounding boxes and scribbles, with the last few layers of the FCN fine-tuned according to the user input [16]. Segmentation and localization of surgical instrument landmarks were modeled as heatmap regression, and an FCN was used to track the instruments in near real time [17]. For pulmonary nodule segmentation, Feng et al. addressed the need for precise manual annotation by training an FCN with a candidate-screening method that learns discriminative regions from weakly labeled lung CT [18]. Bai et al. proposed a self-supervised learning strategy to improve the cardiac segmentation accuracy of U-Net with limited labeled training data [19].
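In heatmap regression formulations such as the landmark localization in [17], the network regresses a spatial confidence map per landmark; a common way to build the training target is a Gaussian bump centered on the annotated landmark position. The sketch below shows this target construction (the map size and `sigma` are illustrative assumptions, not values from [17]).

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """Gaussian target heatmap that peaks at the annotated landmark."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Target for a landmark at row 10, column 20 of a 32x32 map.
hm = gaussian_heatmap((32, 32), (10, 20))
peak = np.unravel_index(hm.argmax(), hm.shape)  # argmax recovers the landmark
```

At inference time the predicted landmark is read off as the argmax (or a sub-pixel refinement) of the regressed heatmap.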
4. Registration
Registration is the spatial alignment of two medical images, volumes, or modalities, and it is particularly important for both preoperative and intraoperative planning. Traditional algorithms typically compute a parametric transformation iteratively (e.g., an elastic, fluid, or B-spline model) to minimize a given similarity metric (e.g., mean squared error, normalized cross-correlation, or mutual information) between the two medical inputs. Recently, deep regression models have been used to replace these time-consuming, optimization-based registration algorithms.
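One of the similarity metrics named above, normalized cross-correlation, can be sketched in a few lines. This global (single-window) form is a simplification of the local, windowed variant that registration toolkits typically use; the image and names are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Global normalized cross-correlation of two same-shaped images.
    Equals 1.0 when b is a positive affine intensity remapping of a."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

img = np.random.default_rng(1).standard_normal((16, 16))
# Intensity rescaling and offset do not change the NCC score.
rescaled_score = ncc(img, 2.0 * img + 3.0)
```

This invariance to linear intensity changes is why NCC is preferred over mean squared error when the two inputs have different brightness or contrast.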
Exemplary deep learning-based registration methods include VoxelMorph, which maps an input image pair to a deformation field using a CNN-based structure and auxiliary segmentations, maximizing a standard image-matching objective function [20]. An end-to-end deep learning framework for 3D medical image registration was proposed with three stages (affine transformation prediction, momentum computation, and non-parametric refinement) to combine affine registration with a vector-momentum-parameterized stationary velocity field [21]. A weakly supervised framework for multimodal image registration was proposed that trains on higher-level correspondences between images (i.e., anatomical labels) rather than voxel-level transformations to predict the displacement field [22]. Multiple agents, each trained with a dilated FCN in a Markov decision process, were used to align a 3D volume to 2D X-ray images [23]. RegNet was proposed to take multi-scale context into account and was trained on artificially generated displacement vector fields (DVFs) to achieve non-rigid registration [24]. 3D image registration can also be formulated as a policy learning process, with the raw 3D images as input, the next optimal action (e.g., up or down) as output, and a CNN as the agent [25].
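A step common to these learning-based methods is applying a predicted displacement vector field (DVF) to warp the moving image. Below is a minimal nearest-neighbor 2D warp, standing in for the differentiable linear samplers used in practice; the array names and shapes are illustrative assumptions.

```python
import numpy as np

def warp(moving, dvf):
    """Resample `moving` at positions displaced by `dvf` (nearest neighbor).
    `dvf` has shape (2, H, W): per-pixel (dy, dx) displacements."""
    H, W = moving.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Round displaced coordinates and clamp them to the image bounds.
    src_y = np.clip(np.rint(ys + dvf[0]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs + dvf[1]).astype(int), 0, W - 1)
    return moving[src_y, src_x]

moving = np.arange(16, dtype=float).reshape(4, 4)
identity = np.zeros((2, 4, 4))
warped = warp(moving, identity)  # the identity DVF leaves the image unchanged
```

During training, a network like VoxelMorph predicts the DVF, the moving image is warped with a differentiable version of this sampler, and the similarity between the warped and fixed images drives the loss.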
References:
[1] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, "A survey on deep learning in medical image analysis," Medical Image Analysis, vol. 42, pp. 60–88, 2017.
[2] P. Khosravi, E. Kazemi, M. Imielinski, O. Elemento, and I. Hajirasouliha, "Deep convolutional neural networks enable discrimination of heterogeneous digital pathology images," EBioMedicine, vol. 27, pp. 317–328, 2018.
[3] S. Chilamkurthy, R. Ghosh, S. Tanamala, M. Biviji, N. G. Campeau, V. K. Venugopal, V. Mahajan, P. Rao, and P. Warier, "Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study," The Lancet, vol. 392, no. 10162, pp. 2388–2396, 2018.
[4] A. Meyer, D. Zverinski, B. Pfahringer, J. Kempfert, T. Kuehne, S. H. Sündermann, C. Stamm, T. Hofmann, V. Falk, and C. Eickhoff, "Machine learning for real-time prediction of complications in critical care: a retrospective study," The Lancet Respiratory Medicine, vol. 6, no. 12, pp. 905–914, 2018.
[5] X. Li, S. Zhang, Q. Zhang, X. Wei, Y. Pan, J. Zhao, X. Xin, C. Qin, X. Wang, J. Li et al., "Diagnosis of thyroid cancer using deep convolutional neural network models applied to sonographic images: a retrospective, multicohort, diagnostic study," The Lancet Oncology, vol. 20, no. 2, pp. 193–201, 2019.
[6] E. Rubinstein, M. Salhov, M. Nidam-Leshem, V. White, S. Golan, J. Baniel, H. Bernstine, D. Groshar, and A. Averbuch, "Unsupervised tumor detection in dynamic PET/CT imaging of the prostate," Medical Image Analysis, vol. 55, pp. 27–40, 2019.
[7] M. Winkels and T. S. Cohen, "Pulmonary nodule detection in CT scans with equivariant CNNs," Medical Image Analysis, vol. 55, pp. 15–26, 2019.
[8] G. Maicas, G. Carneiro, A. P. Bradley, J. C. Nascimento, and I. Reid, "Deep reinforcement learning for active breast lesion detection from DCE-MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2017, pp. 665–673.
[9] H. Lee, S. Yune, M. Mansouri, M. Kim, S. H. Tajmir, C. E. Guerrier, S. A. Ebert, S. R. Pomerantz, J. M. Romero, S. Kamalian et al., "An explainable deep-learning algorithm for the detection of acute intracranial hemorrhage from small datasets," Nature Biomedical Engineering, vol. 3, no. 3, p. 173, 2019.
[10] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker, "Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation," Medical Image Analysis, vol. 36, pp. 61–78, 2017.
[11] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440.
[12] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2015, pp. 234–241.
[13] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: Learning dense volumetric segmentation from sparse annotation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2016, pp. 424–432.
[14] X.-Y. Zhou and G.-Z. Yang, "Normalization in training U-Net for 2D biomedical semantic segmentation," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1792–1799, 2019.
[15] E. Gibson, F. Giganti, Y. Hu, E. Bonmati, S. Bandula, K. Gurusamy, B. Davidson, S. P. Pereira, M. J. Clarkson, and D. C. Barratt, "Automatic multi-organ segmentation on abdominal CT with dense V-networks," IEEE Transactions on Medical Imaging, vol. 37, no. 8, pp. 1822–1834, 2018.
[16] G. Wang, W. Li, M. A. Zuluaga, R. Pratt, P. A. Patel, M. Aertsen, T. Doel, A. L. David, J. Deprest, S. Ourselin et al., "Interactive medical image segmentation using deep learning with image-specific fine-tuning," IEEE Transactions on Medical Imaging, vol. 37, no. 7, pp. 1562–1573, 2018.
[17] I. Laina, N. Rieke, C. Rupprecht, J. P. Vizcaíno, A. Eslami, F. Tombari, and N. Navab, "Concurrent segmentation and localization for tracking of surgical instruments," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2017, pp. 664–672.
[18] X. Feng, J. Yang, A. F. Laine, and E. D. Angelini, "Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2017, pp. 568–576.
[19] W. Bai, C. Chen, G. Tarroni, J. Duan, F. Guitton, S. E. Petersen, Y. Guo, P. M. Matthews, and D. Rueckert, "Self-supervised learning for cardiac MR image segmentation by anatomical position prediction," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2019, pp. 541–549.
[20] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, "VoxelMorph: A learning framework for deformable medical image registration," IEEE Transactions on Medical Imaging, 2019.
[21] Z. Shen, X. Han, Z. Xu, and M. Niethammer, "Networks for joint affine and non-parametric image registration," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4224–4233.
[22] Y. Hu, M. Modat, E. Gibson, W. Li, N. Ghavami, E. Bonmati, G. Wang, S. Bandula, C. M. Moore, M. Emberton et al., "Weakly-supervised convolutional neural networks for multimodal image registration," Medical Image Analysis, vol. 49, pp. 1–13, 2018.
[23] S. Miao, S. Piat, P. Fischer, A. Tuysuzoglu, P. Mewes, T. Mansi, and R. Liao, "Dilated FCN for multi-agent 2D/3D medical image registration," in Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
[24] H. Sokooti, B. de Vos, F. Berendsen, B. P. Lelieveldt, I. Išgum, and M. Staring, "Nonrigid image registration using multi-scale 3D convolutional neural networks," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2017, pp. 232–239.
[25] R. Liao, S. Miao, P. de Tournemire, S. Grbic, A. Kamen, T. Mansi, and D. Comaniciu, "An artificial agent for robust image registration," in Proceedings of the AAAI Conference on Artificial Intelligence, 2017.