Detection of Diabetic Retinopathy Using a Combination of Xception and NASNetMobile
Keywords:
Convolutional Neural Network, Image Classification, Transfer Learning

Abstract
Convolutional Neural Networks (CNNs) are a form of deep learning that has become an effective tool in computer vision. Deep Neural Networks (DNNs) with large numbers of parameters have transformed machine learning, and their influence is especially visible in network architecture design. CNNs excel at image classification because they can focus on objects within images and extract information from spatial relationships. This work presents a transfer-learning-based CNN model for image classification that integrates two advanced architectures: Xception and NASNetMobile. The model takes images of fixed dimensions, often referred to as the "best windowing" of the images, as input and classifies them into two groups. The outputs of the two architectures are merged with a concatenate layer, after which a dropout layer is added to mitigate overfitting. The proposed model was evaluated on an exceptionally challenging dataset associated with diabetic retinal disease. This dataset, named "Diabetic Retinopathy 224*224 Grayscale Images" and derived from the "APTOS 2019 Blindness Detection" dataset on Kaggle, contains 3662 images, with 1875 depicting abnormal cases and the remaining 1805 representing normal cases. In this evaluation, the model performed exceptionally well, achieving an accuracy of 97.50%.