Background and Aims
Methods
Results
Conclusions
Abbreviations:
AI (artificial intelligence), DHXU (Affiliated Dongnan Hospital of Xiamen University), EGC (early gastric cancer), EGCCap (early gastric cancer captioning model), FDZS (Endoscopy Center of Zhongshan Hospital), ME-NBI (magnifying endoscopy with narrow-band imaging), MESDA-G (magnifying endoscopy simple diagnostic algorithm for early gastric cancer), NPV (negative predictive value), PPV (positive predictive value), WHFH (Wuhan First Hospital)
Footnotes
DISCLOSURE: All authors disclosed no financial relationships. Research support for this study was provided by grants from the National Natural Science Foundation of China (81900548, 82022036, 91959130, 81971776, 62027901, 81771924, 81930053), Smart Medical Program of Shanghai Municipal Health Commission (2018ZHYL0204), the Natural Science Foundation of Shanghai (22015831400, 22Y11907500, 20DZ1100102), Shanghai Municipal Human Resources Development Program for Outstanding Young Talents in Medical and Health Sciences (2018YQ33), the National Key R&D Program of China (2017YFA0700401, 2017YFA0205200, 2017YFC1309100, 2017YFC1308700), the Beijing Natural Science Foundation (L182061, Z20J00105), Strategic Priority Research Program of the Chinese Academy of Sciences (XDB38040200), Project of High-Level Talents Team Introduction in Zhuhai City (Zhuhai HLHPTP201703), and the Youth Innovation Promotion Association CAS (Y2021049, 2017175).