Automatic pancreas segmentation using ResNet-18 deep learning approach
DOI:
https://doi.org/10.20535/SRIT.2308-8893.2022.2.08
Keywords:
Deep Learning, Dice Coefficient, Fully Connected Layer (FCN), Residual Network (ResNet-18), Visual Geometry Group (VGG)
Abstract
Accurate pancreas segmentation is essential for the early detection of pancreatic cancer. The pancreas lies in the abdominal cavity together with the liver, spleen, kidneys, and adrenal glands, and its sharp, smooth delineation within this region is a challenging and tedious task in medical image analysis. In earlier work, top-down approaches such as the Novel Modified K-means Fuzzy Clustering algorithm (NMKFCM), the Scale-Invariant Feature Transform (SIFT), and Kernel Density Estimation (KDE) were applied to pancreas segmentation. More recently, bottom-up methods have become popular for pancreas segmentation in medical image analysis and cancer diagnosis; the level-set algorithm, for example, is used to detect the pancreas within the abdominal cavity, and deep learning bottom-up approaches outperform the earlier techniques. In this work, the Deep Residual Network (ResNet-18), an 18-layer deep learning bottom-up approach, is used to extract an accurate, sharply delineated pancreas from CT scan medical images; the pancreas and kidney are segmented automatically from the CT scans. The proposed method is applied to a CT scan dataset of 82 patients, with 699 images used for training and 150 images taken at different angles used for testing. ResNet-18 attains a Dice similarity index of up to 98.29±0.63, a Jaccard index of up to 96.63±1.25, and a BFscore of up to 84.65±3.96. The validation accuracy of the proposed method is 97.01%, and the loss decreases to 0.0010. The class imbalance problem is addressed with class weights and data augmentation.
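As a point of reference for the reported metrics, the sketch below shows how the Dice similarity index and Jaccard index can be computed from binary ground-truth and predicted masks, together with inverse-frequency class weights of the kind commonly used to counter the pancreas/background imbalance. This is a minimal NumPy illustration, not the authors' implementation; the function names, the smoothing constant eps, and the synthetic masks are assumptions for demonstration only.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity index: 2*|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (intersection over union): |A ∩ B| / |A ∪ B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

def inverse_frequency_class_weights(label_image, num_classes):
    """Class weights proportional to the inverse pixel frequency of each class,
    a common remedy when the pancreas occupies only a tiny fraction of a CT slice."""
    counts = np.bincount(label_image.ravel(), minlength=num_classes).astype(float)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-12)
    return weights / weights.sum()  # normalize so the weights sum to 1

# Example with a synthetic 512x512 slice (pancreas = 1, background = 0).
if __name__ == "__main__":
    gt = np.zeros((512, 512), dtype=np.uint8)
    gt[200:260, 240:300] = 1          # hypothetical ground-truth pancreas region
    pred = np.zeros_like(gt)
    pred[205:265, 245:305] = 1        # slightly shifted hypothetical prediction
    print("Dice:   ", round(dice_coefficient(pred, gt), 4))
    print("Jaccard:", round(jaccard_index(pred, gt), 4))
    print("Weights:", inverse_frequency_class_weights(gt, num_classes=2))
```

The BFscore reported above additionally measures boundary agreement within a pixel-distance tolerance; it requires the mask contours and is omitted from this sketch for brevity.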
References
Pradip M. Paithane, S.N. Kakarwal, and D.V. Kurmude, “Top-Down Method Used for Pancreas Segmentation”, International Journal of Innovative Technology and Exploring Engineering (IJITEE), vol. 9, no. 3, pp. 2278–3075, 2020.
Amal Farag, Le Lu, Holger R. Roth, Jiamin Liu, Evrim Turkbey, and Ronald M. Summers, “A Bottom-Up Approach for Pancreas Segmentation Using Cascaded Superpixels and (Deep) Image Patch Labeling”, IEEE Transactions on Image Processing, vol. 26, no. 1, 2017. doi: 10.1109/TIP.2016.2624198.
Pradip M. Paithane and S.A. Kinariwal, “Automatic Determination Number of Cluster for NMKFC-means Algorithm on Image Segmentation”, IOSR Journal of Computer Engineering (IOSR-JCE), vol. 17, no. 1, 2015.
Pradip M. Paithane and S.N. Kakarwal, “Automatic Determination Number of Cluster for Multi Kernel NMKFCM Algorithm on Image Segmentation”, Intelligent Systems Design and Applications, Springer, Cham, 2017, pp. 80–89.
Justin Ker, Lipo Wang, Jai Rao, and Tchoyoson Lim, “Deep Learning Applications in Medical Image Analysis”, IEEE Access, vol. 6, pp. 9375–9389, 2018.
Yingge Qu, Pheng Ann Heng, and Tien-Tsin Wong, Image Segmentation Using the Level Set Method. New York: Springer, 2004.
Xin Jiang, Renjie Zhang, and Shengdong Nie, “Image Segmentation Based on Level Set Method”, International Conference on Medical Physics and Biomedical Engineering, Elsevier, 2012.
Joris R. Rommelse, Hai-Xiang Lin, and Tony F. Chan, A Robust Level Set Algorithm for Image Segmentation and its Parallel Implementation. Springer, 2014.
P.M. Paithane, S.N. Kakarwal, and D.V. Kurmude, “Automatic Seeded Region Growing with Level Set Technique Used for Segmentation of Pancreas”, Proceedings of the 12th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2020), vol. 1383.
Qianwen Li, Zhihua Wei, and Cairong Zhao, “Optimized Automatic Seeded Region Growing Algorithm with Application to ROI Extraction”, International Journal of Image and Graphics (IJIG), vol. 17, no. 4, 2017.
H.K. Abbas, A.H. Al-Saleh, H.J. Mohamad, and A.A. Al-Zuky, “New Algorithms to Enhanced Fused Images from Auto-Focus Images”, Baghdad Science Journal, vol. 18, no. 1, p. 0124, 2021.
R.J. Mitlif and I.H. Hussein, “Ranking Function to Solve a Fuzzy Multiple Objective Function”, Baghdad Science Journal, vol. 18, no. 1, p. 0144, 2021.
O. Bandyopadhyay, B. Chanda, and B.B. Bhattacharya, “Automatic Segmentation of Bones in X-ray Images Based on Entropy Measure”, International Journal of Image and Graphics, vol. 16, no. 1, 2016.
I. Bankman, Handbook of Medical Imaging: Processing and Analysis. New York: Academic Press, 2000.
Shuo Cheng and Guohui Zhou, “Facial Expression Recognition Method Based on Improved VGG Convolution Neural Network”, IJPRAI, vol. 34, no. 7, 2020.
Pikul Vejjanugraha, Kazunori Kotani, Waree Kongprawechnon, Toshiaki Kondo, and Kanokvate Tungpimolrut, “Automatic Screening of Lung Diseases by 3D Active Contour Method for Inhomogeneous Motion Estimation in CT Image Pairs”, Walailak Journal of Science and Technology, vol. 18, no. 2, 2021.
Mizuho Nishio, Shunjiro Noguchi, and Koji Fujimoto, “Automatic Pancreas Segmentation Using Coarse-Scaled 2D Model of Deep Learning: Usefulness of Data Augmentation and Deep U-Net”, Appl. Sci., 2020. doi: 10.3390/app10103360.
Srikanth Tammina, “Transfer Learning using VGG-16 with Deep Convolutional Neural Network for Classifying Images”, International Journal of Scientific and Research Publications (IJSRP), vol. 9, no. 10, 2019.
H.R. Roth et al., “DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation”, MICCAI, 2015.
Robin Wolz, Chengwen Chu, Kazunari Misawa, Michitaka Fujiwara, Kensaku Mori, and Daniel Rueckert, “Automated Abdominal Multi-Organ Segmentation with Subject-Specific Atlas Generation”, IEEE Transactions on Medical Imaging, vol. 32, no. 9, 2013.