
Optimal Deep Learning Based Inception Model for Cervical Cancer Diagnosis

Computers, Materials & Continua, 2022, Issue 7

Tamer AbuKhalil, Bassam A. Y. Alqaralleh and Ahmad H. Al-Omari

1Computer Science Department, Faculty of Information Technology, Al Hussein bin Talal University,Ma’an, 71111, Jordan

2MIS Department, College of Business Administration, University of Business and Technology,Jeddah, 21448, Saudi Arabia

3Faculty of Science, Computer Science Department, Northern Border University, Arar, 91431, Saudi Arabia

Abstract: Prevention of cervical cancer is essential and is carried out with the use of Pap smear images. Pap smear test analysis is laborious and tiresome work performed visually by a cytopathologist. Therefore, automated methods for cervical cancer diagnosis are necessary. This paper designs an optimal deep learning based Inception model for cervical cancer diagnosis (ODLIM-CCD) using Pap smear images. The proposed ODLIM-CCD technique incorporates median filtering (MF) based pre-processing to discard the noise and an Otsu model based segmentation process. Besides, a deep convolutional neural network (DCNN) based Inception with Residual Network (ResNet) v2 model is utilized for deriving the feature vectors. Moreover, a swallow swarm optimization (SSO) based hyperparameter tuning process is carried out for the optimal selection of hyperparameters. Finally, a recurrent neural network (RNN) based classification process is performed to determine the presence or absence of cervical cancer. In order to showcase the improved diagnostic performance of the ODLIM-CCD technique, a series of simulations is carried out on benchmark test images, and the outcomes highlight the improved performance over recent approaches with a superior accuracy of 0.9661.

Keywords:Median filtering; convolutional neural network; pap smear; cervical cancer

1 Introduction

Cervical cancer is one of the most common cancers among females worldwide, and at the same time it is a curable and preventable cancer [1]. Cervical disease caused by the Human Papilloma Virus (HPV) is a very common kind of female cancer around the world [2]. The Papanicolaou test has been the keystone of cervical screening for the past sixty years. The Papanicolaou test, also known as the Pap smear or Pap test, was introduced by Georgios Papanikolaou in 1940. It comprises exfoliating cells from the cervix to enable microscopic evaluation of these cells for tracking precancerous or cancerous changes. Multiple risk factors are involved, such as prescription medication, suppression of the body's immune cells, and cigarettes. Manual screening of smear tests is time consuming and the outcomes are frequently incorrect. The images may contain debris or blood on a cell, as well as disproportionate, overlapping and irregularly patterned cells [3]. Trend analyses and computerized tracking of the smear test are therefore becoming an important function of screening images.

Up till now, different kinds of studies have focused on preventive HPV deoxyribonucleic acid (DNA) testing, the HPV vaccine, Pap smear problems, and other recommendations for the prevention of cancer. Secondary prevention through screening still remains significant, because screening complements the HPV vaccine since the vaccine does not fully cover the higher-risk HPV types [4]. Cervical cancer is a common cancer amongst females worldwide; however, it is also among the most treatable and preventable cancers. Mostly, cervical cancer initiates from pre-cancerous changes and grows relatively slowly [5]. Clinicians in the medical centre have difficulty recognizing cancer cells since the nucleus of the cells is occasionally very difficult to observe with the naked eye. In addition, it is challenging to determine the precise cancer stage. Many patients are first reported to be at stage 2 but, after re-testing, are found to be at stage 4, where the possibility of cure is lower [6]. This occurs because the clinician cannot assess the sample and related data precisely. Currently, computerized image analysis used to assist the diagnosis of tumors or cell abnormalities in histopathology/cytopathology can offer a precise and objective assessment of nuclear morphology [7]. However, even proficient clinicians may have differing perceptions of the cancer stage based on image screening.

This paper designs an optimal deep learning based Inception model for cervical cancer diagnosis (ODLIM-CCD) using Pap smear images. The proposed ODLIM-CCD technique incorporates median filtering (MF) based pre-processing to discard the noise and an Otsu model based segmentation process. Besides, a deep convolutional neural network (DCNN) based Inception with Residual Network (ResNet) v2 model is utilized for deriving the feature vectors. Moreover, a swallow swarm optimization (SSO) based hyperparameter tuning process is carried out for the optimal selection of hyperparameters. Finally, a recurrent neural network (RNN) based classification process is performed to determine the presence or absence of cervical cancer. In order to showcase the enhanced diagnostic performance of the ODLIM-CCD technique, a series of simulations is carried out on benchmark test images, and the outcomes highlight the improved performance over recent approaches.

2 Related Works

In Moldovan [8], cervical cancer diagnosis is advanced with a machine learning (ML) technique where the features are chosen by linear relation and the data are classified by a support vector machine (SVM) method. The hyperparameters of the SVM are chosen by the chicken swarm optimization (CSO) model. The technique is validated and tested on the open-source cervical cancer (Risk Factors) dataset from the UCI ML Repository. In Karim et al. [9], an ensemble method using SVM as the base classifier is considered. The ensemble approach using the Bagging method attained a precision of 98.12% with higher F-measure, accuracy, and recall values. In Li et al. [10], a deep learning (DL) architecture is presented for the precise recognition of Low-Grade Squamous Intraepithelial Lesions (LSIL) (including cervical cancer and cervical intraepithelial neoplasia (CIN)) with time-lapsed colposcopic images. The presented architecture includes two major modules, viz., a feature fusion network and a key frame feature encoding network. Several fusion methods are compared, and each of them outperforms the existing automatic cervical cancer diagnosis systems using an individual time slot. A graph convolutional network with edge features (E-GCN) is established as the preferred fusion method because of its outstanding explainability, consistent with clinical practice.

In Erkaymaz et al. [11], cervical cancer is recognized using four fundamental classifiers: Naïve Bayes (NB), K-nearest neighbor (KNN), multilayer perceptron (MLP), and decision tree (DT) methods, together with a random subspace ensemble model. The Gain Ratio Attribute Evaluation (GRAE) feature extraction is employed to contribute to classification accuracy. The classification results attained on each dataset and on the reduced dataset are compared based on efficiency criteria such as specificity, accuracy, root mean square error (RMSE), and sensitivity. William et al. [12] presented a summary of recent publications focusing on automatic diagnosis and classification of cervical cancer from Pap smear images. It analyses 30 journal papers obtained automatically from four scientific databases examined using three sets of keywords: (1) Pap-smear Images, Automated, Segmentation; (2) Cervical Cancer, Segmentation, Classification; (3) Machine Learning, Pap-smear Images, Medical Imaging. The analysis establishes that a few models are utilized more often than others: e.g., KNN, filtering, and thresholding are the commonly utilized methods for preprocessing, classification, and segmentation of Pap smear images.

Hemalatha et al. [13] considered the frequently employed neural networks. A dimensionally reduced cervical Pap smear dataset with a fuzzy edge recognition technique is taken into account for classification. The four NNs are compared and the most appropriate network for classifying the dataset is determined. Huang et al. [14] suggest an approach to cervical biopsy tissue image classification that depends upon ensemble learning-support vector machine (EL-SVM) and least absolute shrinkage and selection operator (LASSO) methods. With the LASSO method for feature selection (FS), the average optimization time is decreased by 35.87 s while guaranteeing classification accuracy, after which serial fusion is carried out. The EL-SVM classifier is utilized for identifying and classifying 468 biopsy tissue images, and the error and ROC curves are utilized for evaluating the generalization capacity of the classifier.

Haryanto et al. [15] focus on creating a classification method for cervical cell images with the convolutional neural network (CNN) method. The dataset utilized is the SIPaKMeD image dataset. The CNN method is employed with the AlexNet framework and a non-padding scheme. Nithya et al. [16] aim at detecting cervical cancer, and the datasets utilized in the study comprise imbalanced target classes, missing values, and redundant features. Therefore, this work aims at handling these problems via an integrated FS method for attaining an optimum feature subset. The subset obtained by this combined method is applied in the subsequent predictive tasks. The optimal and best feature subsets are selected according to the efficacy of the classifier in forecasting the outcomes. In Rahaman et al. [17], a complete analysis of advanced methods based on DL for the analysis of cervical cytology images is offered. Initially, DL and its simplified frameworks employed in this area are presented. Next, the open-source cervical cytopathology datasets and the assessment metrics for classification and segmentation methods are discussed. Then, a complete study of the recent progress of DL methods for the classification and segmentation of cervical cytology images is proposed. Lastly, the current methods and appropriate techniques for the analysis of Pap smear cells are examined.

3 The Proposed Model

In this study, a novel ODLIM-CCD technique is derived to classify cervical cancer using Pap smear images. The proposed ODLIM-CCD technique incorporates MF based pre-processing, Otsu based segmentation, Inception with ResNet v2 model based feature extraction, SSO based hyperparameter tuning, and RNN based classification to determine the presence or absence of cervical cancer. Fig. 1 showcases the overall process of the ODLIM-CCD model.

Figure 1: Overall process of ODLIM-CCD model

3.1 MF Based Pre-Processing

MF is a non-linear signal processing technique based on order statistics. A noisy value of a digital image or sequence is replaced with the median value of its surrounding region (mask). The pixels in the mask are ordered according to their gray levels, and the median of this ordered set is stored to replace the noisy value. The MF output is given by g(x, y) = med{f(x − i, y − j), i, j ∈ W}, where f(x, y) and g(x, y) denote the original and resulting images respectively, and W indicates the 2D mask. The mask size is n × n (where n is generally odd), such as 3×3 or 5×5, and the mask shape may be linear, square, circular, cross, and so on.
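As a minimal illustration of this operation (a sketch, not code from the paper), the snippet below applies a 3×3 square median mask to a toy grayscale patch using SciPy; the pixel values are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy grayscale patch with an impulse-noise pixel (value 255) in the centre.
f = np.array([[10, 12, 11],
              [13, 255, 12],
              [11, 10, 13]], dtype=np.uint8)

# g(x, y) = med{f(x - i, y - j), i, j in W} with a 3x3 square mask W.
g = median_filter(f, size=3)
print(g[1, 1])  # The noisy centre value is replaced by the neighbourhood median (12).
```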

3.2 Otsu Based Segmentation

The pre-processed images are segmented using the Otsu technique to determine the affected regions. Otsu (1979) is a segmentation method utilized for finding an optimal threshold value of an image by maximizing the between-class variance. This approach is utilized to find the optimal threshold value which separates the image into several classes [18]. The method considers the Lv intensity levels of a gray image, and the probability distribution is evaluated using Eq. (1). For color images, Otsu thresholding is applied to each channel separately.

in which il refers to an intensity level in the range 0 ≤ il ≤ Lv − 1, NP represents the overall number of image pixels, and hi indicates the number of occurrences of intensity il in the image, represented as a histogram. The histogram is normalized into a probability distribution Phi. Based on the threshold value (th) and the probability distribution, the classes are defined for bi-level segmentation:

where ω0(th) and ω1(th) indicate the cumulative probability distributions for classes C1 and C2, as illustrated in Eq. (3).

It is essential to compute the average intensity levels μ0 and μ1 using Eq. (4); once these values have been obtained, the Otsu between-class variance is determined using Eq. (5).

It is noted that σ1 and σ2 in Eq. (5) represent the variances of C1 and C2, determined as follows:

Let μT = ω0μ0 + ω1μ1 and ω0 + ω1 = 1. Depending on the σ1 and σ2 values, Eq. (7) gives the objective function. Thus, the optimization problem is reduced to finding the intensity level which maximizes Eq. (7).

in which TH = [th1, th2, ..., th(k−1)] represents a vector containing the various thresholds and L indicates the maximal gray level, where the variances are evaluated using Eq. (9).

where i denotes a certain class, and ωi and μi represent the probability of occurrence and the mean of a class, respectively. For multilevel thresholding, this value becomes:

for mean values:
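A minimal sketch of bi-level Otsu thresholding following the between-class-variance formulation above is shown below. It is an illustration under the assumption of an 8-bit grayscale input, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray, levels: int = 256) -> int:
    """Return the threshold th that maximizes the between-class variance."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    ph = hist / hist.sum()                       # normalized histogram Ph_i
    best_th, best_var = 0, -1.0
    for th in range(1, levels):
        w0, w1 = ph[:th].sum(), ph[th:].sum()    # cumulative probabilities of C1, C2
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(th) * ph[:th]).sum() / w0           # mean of class C1
        mu1 = (np.arange(th, levels) * ph[th:]).sum() / w1   # mean of class C2
        between_var = w0 * w1 * (mu0 - mu1) ** 2             # between-class variance
        if between_var > best_var:
            best_var, best_th = between_var, th
    return best_th

# Usage: segment a pre-processed Pap smear image into background/foreground.
# mask = image >= otsu_threshold(image)
```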

3.3 Inception with ResNet v2 Based Feature Extraction

A CNN is a particular type of neural network in which the weights are learned for applying a sequence of convolutions to the input image, with the filter weights shared across the same convolution layer. This design and the related learning mechanism are thoroughly discussed in the literature.

The CNN replaces the fully connected (FC) affine layer A with an operator C defined by smaller convolutional kernels. This localizes computation and efficiently reduces the number of parameters in Uθ. The resultant network is determined by:

Convolution layer j is defined as a set of such kernels, and accepts as input a tensor xj of dimensions hj × wj × cj. The layer convolves xj with all its cj+1 filters and stacks the outputs into a tensor xj+1 of dimensions hj × wj × cj+1.

Each of these convolution layers is followed by a non-linear pointwise function, and the spatial size hj × wj of the output tensor is reduced using pooling operators Pj. In the CNN model, the learnable weights lie in the convolutional kernels, and the training procedure amounts to finding an optimal way of filtering the training data, so that irrelevant information is removed and the error (loss) on the training set is reduced as much as possible.

As mentioned above, a number of algorithmic developments have been presented in the last few years. For example, the use of 1×1 convolutions facilitates a particular kind of convolutional layer named the inception block, which is key to the success of the Inception framework [19]. Additionally, skip connections represent another step to improve the training dynamics of deep convolutional neural networks (DCNNs), resulting in a significant framework named ResNet. In this case, the aim is to let practitioners train CNNs made up of a huge number of layers while avoiding problems associated with vanishing error gradients during backpropagation. These two CNN architecture design paradigms have turned out to be a default choice in the computer vision field, as well as in retinal image analysis, rapidly advancing modern image-based automated diagnosis.
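To illustrate the two design ideas just described (1×1 convolutions and skip connections), the following hedged Keras sketch builds a small residual unit; the filter counts and layer names are illustrative assumptions and do not reproduce the exact Inception-ResNet blocks.

```python
import tensorflow as tf
from tensorflow.keras import layers

def toy_residual_block(x, filters: int = 64):
    """Minimal residual unit: a 1x1 bottleneck, a 3x3 convolution, then a skip connection."""
    shortcut = x
    y = layers.Conv2D(filters // 2, kernel_size=1, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, kernel_size=3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])          # skip connection eases gradient flow
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = toy_residual_block(inputs)
model = tf.keras.Model(inputs, outputs)
```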

Inception-ResNet-V2 (IRV2), proposed by Google, has been employed by advanced methods, for example to classify mammograms. It is essentially a fusion of GoogLeNet (Inception) and ResNet. Inception is a popular network using the layer framework adopted in GoogLeNet, and Inception v1-v4 are the common variants of GoogLeNet. The residual learning based ResNet proved efficient at ILSVRC 2015, going as deep as 152 layers. Earlier network frameworks are determined by a non-linear transformation of the input, while the Highway Network permits only certain outputs from the conventional network layer that must often be trained; furthermore, the real input is additionally transmitted to the successive layer. Simultaneously, ResNet safeguards the information by straightforwardly forwarding the data to the output.

In the Residual-Inception model, Inception is employed since it has lower processing complexity than the original Inception model. The number of layers in this technique for each module is 5, 10, and 5, respectively. According to the original research, IRV2 roughly matches the computational cost of the Inception-v4 model. A methodological difference between the non-residual and residual Inception models is that, in Inception-ResNet, the batch normalization (BN) algorithm is employed at the traditional layers.
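A hedged sketch of how IRV2 can be used as a fixed feature extractor is given below, assuming the Keras implementation with ImageNet weights; the 299×299 input size and the average-pooling head follow the standard Keras model, not necessarily the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf

# Pretrained Inception-ResNet-V2 without the classification head; global average
# pooling yields a 1536-dimensional feature vector per image.
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg")

def extract_features(segmented_images: np.ndarray) -> np.ndarray:
    """segmented_images: batch of RGB images with shape (N, 299, 299, 3)."""
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(
        segmented_images.astype("float32"))
    return backbone.predict(x)   # shape (N, 1536)
```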

3.4 SSO Based Hyperparameter Tuning

For optimally adjusting the hyperparameters involved in the Inception with ResNet v2 model, the SSO algorithm is utilized. The SSO method is inspired by the communal motion of swallows, and the interactions between flock members attain better outcomes. This method proposes a metaheuristic model on the basis of the specific features of swallows, including intelligent social relations, fast flight, and hunting skills. The method is similar to the particle swarm optimization (PSO) algorithm; however, it has exclusive features that cannot be found in the equivalent method, including the usage of three kinds of particles: leader particles (li), explorer particles (ej), and aimless particles (oj), each of which has a specific task in the group. The ej particle is accountable for searching the problem space. It accomplishes this search under the influence of a number of variables [20]:

1.Location of the local leader (LL).

2.Location of the global leader (GL).

3. The optimal individual experience along the path.

4.The preceding path.

The particle uses the following formula to search and continue along its path:

Eq. (13) illustrates the velocity vector along the path of the global leader.

Eqs. (14) and (15) estimate the acceleration coefficient (αHL), which directly affects the individual experience of each particle.

The subsequent equations estimate the acceleration coefficient (βHL), which directly affects the collective experience of each particle. These two acceleration coefficients are quantified by considering the location of each particle with respect to the global leader and the optimal individual experience.

The oj particle has wholly random behavior: it moves through the space without pursuing a particular goal and shares its outcomes with the other flock members. Indeed, this particle increases the possibility of exploring regions that have not been examined by the ej particles. As well, when other particles get stuck in a local optimum, there is hope that this particle rescues them. This particle uses the following formula for its random movement:
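Since the full SSO update rules (Eqs. (12)-(18)) are not reproduced here, the following is only a simplified, PSO-like sketch of how explorer particles could search a two-dimensional hyperparameter space (learning rate and batch size); the coefficient values, particle counts, and the placeholder objective function are all assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(lr: float, batch_size: float) -> float:
    # Placeholder objective: in practice this would train the Inception-ResNet v2
    # model with the candidate hyperparameters and return the validation error.
    return (np.log10(lr) + 3) ** 2 + ((batch_size - 32) / 32) ** 2

# Explorer particles: columns are [log10(learning rate), batch size].
positions = np.column_stack([rng.uniform(-5, -1, 10), rng.uniform(8, 128, 10)])
velocities = np.zeros_like(positions)
personal_best = positions.copy()
personal_err = np.array([validation_error(10 ** p[0], p[1]) for p in positions])

for _ in range(30):
    leader = personal_best[personal_err.argmin()]           # global leader position
    for i in range(len(positions)):
        alpha, beta = rng.random(2)                          # acceleration coefficients
        velocities[i] = (0.7 * velocities[i]
                         + alpha * (personal_best[i] - positions[i])  # individual experience
                         + beta * (leader - positions[i]))            # pull toward the leader
        positions[i] = positions[i] + velocities[i]
        err = validation_error(10 ** positions[i][0], positions[i][1])
        if err < personal_err[i]:
            personal_err[i], personal_best[i] = err, positions[i].copy()

best = personal_best[personal_err.argmin()]
print("chosen learning rate:", 10 ** best[0], "batch size:", round(best[1]))
```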

3.5 RNN Based Classification

At the last stage, the feature vectors are provided to the RNN model, which identifies the presence or absence of cervical cancer. The RNN is an extension of the feed forward neural network (FFNN). The RNN is preferred over the FFNN for modelling sequences as it has cyclic connections. The letters X, H, and Y are utilized to indicate an input sequence, a hidden vector sequence, and an output vector sequence, respectively. X = (x1, x2, ..., xT) is the input sequence. For t = 1 to T, a standard RNN computes the hidden vector sequence H = (h1, h2, ..., hT) as specified in Eq. (19) and the output vector sequence Y = (y1, y2, ..., yT) as represented in Eq. (20).

where

xt refers to the input vector

Wxh is the weight matrix of the hidden layer

ht is the hidden state vector

bh is the bias on the hidden layer

yt is the output vector

Here, σ refers to the non-linearity function, W implies a weight matrix, and b indicates the bias term.

To accommodate variable-length sequence input, the RNN is trained using backpropagation through time (BPTT). This technique is essentially the standard back-propagation (BP) training applied over time, with the resultant error gradient accumulated at every time step. The RNN is extremely tough to train, however, because training with the BPTT technique can cause the gradient to explode or vanish; this problem is well known. Fig. 2 illustrates the structure of the RNN.
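The recurrence in Eqs. (19)-(20) can be illustrated with a short sketch. The hidden size, the sigmoid output for the binary normal/abnormal decision, and treating a single IRV2 feature vector as a length-one sequence are assumptions made for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simple_rnn_forward(X, Wxh, Whh, Why, bh, by):
    """h_t = tanh(Wxh x_t + Whh h_{t-1} + bh);  y_t = sigmoid(Why h_t + by)."""
    h = np.zeros(Whh.shape[0])
    outputs = []
    for x_t in X:                           # X is a sequence of shape (T, input_dim)
        h = np.tanh(Wxh @ x_t + Whh @ h + bh)
        outputs.append(sigmoid(Why @ h + by))
    return np.array(outputs)

# Toy usage: a 1536-dim feature vector treated as a length-1 sequence.
rng = np.random.default_rng(1)
feature = rng.normal(size=(1, 1536))
Wxh = rng.normal(scale=0.01, size=(64, 1536))
Whh = rng.normal(scale=0.01, size=(64, 64))
Why, bh, by = rng.normal(scale=0.01, size=(1, 64)), np.zeros(64), np.zeros(1)
prob_abnormal = simple_rnn_forward(feature, Wxh, Whh, Why, bh, by)[-1]
```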

Figure 2: RNN structure

4 Experimental Validation

The performance validation of the ODLIM-CCD technique takes place using the benchmark Herlev Pap smear image dataset, which contains 918 images grouped into normal and abnormal classes. Fig. 3 illustrates a few sample images.

Figure 3: Sample images

Tab. 1 offers a detailed comparative analysis of the ODLIM-CCD model with other models. Fig. 4 and Fig. 5 demonstrate that the ODLIM-CCD technique has outperformed the existing techniques in terms of different measures under varying runs. For instance, the ODLIM-CCD technique has gained effective outcomes with an average precision of 96.68%, whereas the MLP, random forest (RF), and SVM models have obtained lower average precision of 96.59%, 96.07%, and 95.42% respectively. Moreover, the ODLIM-CCD approach has reached effective outcomes with an average recall of 97.39%, whereas the MLP, RF, and SVM techniques have attained lower average recall of 95.61%, 95.38%, and 95.10% correspondingly. Furthermore, the ODLIM-CCD approach has gained effective outcomes with an average accuracy of 96.61%, whereas the MLP, RF, and SVM techniques have obtained reduced average accuracy of 96.12%, 95.18%, and 94.90% respectively. Likewise, the ODLIM-CCD methodology has obtained effective outcomes with an average F-score of 97.17%, whereas the MLP, RF, and SVM systems have gained lower average F-scores of 95.85%, 95.34%, and 95.20% correspondingly.
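For reference, the averaged measures reported above (precision, recall, accuracy, and F-score) can be computed per fold as in the following scikit-learn sketch; the predicted and true labels shown are dummy values, not the paper's results.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Dummy fold: 1 = abnormal, 0 = normal (illustrative labels only).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
print("f1-score: ", f1_score(y_true, y_pred))
```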

Table 1: Result analysis of the proposed ODLIM-CCD technique with recent approaches for various numbers of folds


Figure 4: Precision analysis of different models under varying runs

In order to ensure the enhanced cervical cancer classification outcome of the ODLIM-CCD technique, a detailed comparative analysis is made in Tab.2.

Table 2: Comparative analysis of existing techniques with the proposed ODLIM-CCD technique with respect to distinct measures

Fig. 6 portrays the comparative analysis of the ODLIM-CCD system with other models in terms of precision. The figure shows that the C4.5 and logistic regression (LR) classifiers obtained reduced precision of 0.315 and 0.459 respectively. Besides, the DLP-CC model gained a slightly increased precision of 0.78, whereas the extreme learning machine (ELM), extreme gradient boosting (XGBoost), and Gradient Boosting models accomplished near optimal precision of 0.9367, 0.9421, and 0.9618 respectively. However, the ODLIM-CCD technique showcased better outcomes with a higher precision of 0.9668.

Figure 6: Comparative precision analysis of ODLIM-CCD technique with recent methods

Fig. 7 showcases the comparative analysis of the ODLIM-CCD technique with other models with respect to recall. The figure shows that the C4.5 and LR classifiers obtained reduced recall of 0.302 and 0.214 respectively. Besides, the DLP-CC model gained a slightly increased recall of 0.752, whereas the ELM, XGBoost, and Gradient Boosting models accomplished near optimal recall of 0.9599, 0.9663, and 0.9699 respectively. However, the ODLIM-CCD technique showcased better outcomes with a higher recall of 0.9739.

Figure 7: Comparative recall analysis of ODLIM-CCD technique with recent methods

Fig. 8 depicts the comparative analysis of the ODLIM-CCD technique with other models in terms of accuracy. The figure shows that the C4.5 and LR classifiers obtained reduced accuracy of 0.780 and 0.828 respectively. In addition, the DLP-CC model attained an accuracy of 0.771, whereas the ELM, XGBoost, and Gradient Boosting models accomplished near optimal accuracy of 0.9407, 0.9515, and 0.9565 correspondingly. At last, the ODLIM-CCD technique showcased better results with a superior accuracy of 0.9661.

Figure 8: Comparative accuracy analysis of ODLIM-CCD technique with recent methods

Fig. 9 portrays the comparative analysis of the ODLIM-CCD technique with other models with respect to F1-score. The figure shows that the C4.5 and LR classifiers obtained a lower F1-score of 0.763, whereas the ELM, XGBoost, and Gradient Boosting approaches accomplished near optimal F1-scores of 0.952, 0.9577, and 0.9636 correspondingly. Finally, the ODLIM-CCD technique delivered the best outcome with the maximum F1-score of 0.9717.

Figure 9: Comparative F-score analysis of ODLIM-CCD technique with recent methods

5 Conclusion

In this study, a novel ODLIM-CCD technique is derived to classify cervical cancer using Pap smear images. The proposed ODLIM-CCD technique incorporates MF based pre-processing, Otsu based segmentation, Inception with ResNet v2 model based feature extraction, SSO based hyperparameter tuning, and RNN based classification. The SSO based hyperparameter tuning process is carried out for the optimal selection of hyperparameters, and the RNN based classification process determines the presence or absence of cervical cancer. In order to showcase the enhanced diagnostic efficiency of the ODLIM-CCD technique, a series of simulations is carried out on benchmark test images, and the outcomes highlight the improved performance over recent approaches. In future, the ODLIM-CCD technique can be executed on a cloud server for remote healthcare monitoring applications.

Funding Statement:This Research was funded by the Deanship of Scientific Research at University of Business and Technology, Saudi Arabia.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
