
An Enhanced Deep Learning Method for Skin Cancer Detection and Classification

Computers, Materials & Continua, 2022, Issue 10

Mohamed W. Abo El-Soud, Tarek Gaber, Mohamed Tahoun and Abdullah Alourani

1 Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah, 11952, Saudi Arabia

2 Faculty of Computers and Informatics, Suez Canal University, Ismailia, 41522, Egypt

3 School of Science, Engineering, and Environment, University of Salford, UK

Abstract: The prevalence of melanoma skin cancer has increased in recent decades. The greatest risk from melanoma is its ability to spread widely throughout the body via lymphatic vessels and veins. Thus, early diagnosis of melanoma is a key factor in improving the prognosis of the disease. Deep learning makes it possible to design and develop intelligent systems that detect and classify skin lesions from visible-light images, providing early and accurate diagnoses of melanoma and other skin diseases. This paper proposes a new method that can be used for both skin lesion segmentation and classification. The solution employs a convolutional neural network (CNN) with a two-dimensional convolution (Conv2D) architecture in three phases: feature extraction, classification, and detection. The proposed method is designed primarily for skin cancer detection and diagnosis. Using the public International Skin Imaging Collaboration (ISIC) dataset, the impact of the proposed segmentation method on classification accuracy was investigated. The results show that the proposed skin cancer detection and classification method performed well, with an accuracy of 94%, sensitivity of 92%, and specificity of 96%. A comparison with related work on the same dataset, i.e., ISIC, also showed better performance for the proposed method.

Keywords: Convolutional neural networks; activation function; separable convolution 2D; batch normalization; max pooling; classification

1 Introduction

Skin cancer is one of the most dangerous types of cancer. It is caused by deoxyribonucleic acid (DNA) damage and can lead to death. Cells with damaged DNA begin to grow unexpectedly and multiply rapidly. In 2021, it was estimated that 207,390 new cases of melanoma would be diagnosed in the USA alone, including 101,280 noninvasive (in situ) and 106,110 invasive cases. Thus, the demand for effective and rapid clinical examination methods is continuously growing [1]. According to statistical data from the World Health Organization (WHO), 2-3 million non-melanoma skin cancers and 132,000 melanoma skin cancers occur globally each year [2]. Therefore, modern medical science seeks to assist dermatologists in their diagnoses without the need for special or expensive equipment. Such a model would help remote patients by providing a fast and accurate method for detecting skin cancer. Early detection of skin cancer is associated with a better prognosis, allowing melanoma to be treated successfully. However, detecting the early signs of skin cancer from texture, shape, color, and size is challenging because cancerous structures share many features with normal skin tissue. To improve the recognition rate, computer-aided dermoscopy (CAD) has been used [3].

Because of the severity of melanoma, the significance of early diagnosis, the shortage of trained professionals in some regions, and the imperfection of unaided classification methods, there is strong motivation to develop and utilize computer-aided diagnosis (CADx) systems to aid in the classification of skin lesions. Traditional computer vision algorithms are mainly used as classifiers that extract features such as shape, size, color, and texture to detect cancer. However, there are challenges in the detection of skin lesions, including low contrast, hair artifacts, irregular color illumination, and unclear boundaries. Nowadays, artificial intelligence (AI) has gained the capacity to address these problems. Deep learning utilizes a group of interconnected nodes and can be used effectively in the detection of melanoma. Its structure is similar to that of the human brain in terms of neural connectivity. Neural network nodes work collectively to solve specific problems by training on specific tasks [4]. The convolutional neural network (CNN) is one of the most widely recognized deep-learning algorithms [1]. In this study, we investigated such a system. Specifically, we examined an intelligent medical imaging-based skin lesion diagnosis system to assist in determining whether the skin lesion shown in a dermoscopic image is malignant or benign.

Several papers have addressed early skin cancer detection using deep learning [1-3]. However, the results reported in these papers still do not satisfy the performance required for early melanoma detection. Therefore, the main aim of this study is to use the latest developments in deep learning to implement a classifier capable of examining an image containing a skin lesion and predicting an outcome (malignant or benign) with a sufficiently high degree of confidence to enhance current early melanoma detection methods. More specifically, it is desirable to have an intelligent model that can differentiate malignant skin lesions from benign ones and that can also predict, based on a photo of a suspicious mole or patch, the occurrence of malignant skin lesions and other diseases that would require medical assistance.

The main goal of this method is to classify skin images and diagnose melanoma (skin cancer) with improved accuracy by utilizing deep learning models. This was achieved by proposing a method consisting of four basic stages: segmentation, feature extraction, feature selection, and classification. For the classification, an enhanced CNN was proposed that makes use of a two-dimensional convolutional layer (Conv2D) and a new ordering of the CNN layers.

The main contributions of the proposed method are as follows: (1) proposing a novel ordering of the CNN layers and an architecture based on the two-dimensional convolutional layer (Conv2D), and using them for the early detection of melanoma (skin cancer) from images; (2) evaluating the proposed method with the well-known cancer detection metrics of specificity, sensitivity, and accuracy; (3) comparing and analysing the results against related work.

The rest of this paper is organized as follows. A brief survey of the literature is provided in Section 2. Section 3 provides an overview of the techniques and algorithms used in the proposed method. The proposed method is presented in Section 4, and Section 5 discusses the experimental work and its results. Finally, conclusions are presented in Section 6.

2 Related Work

Melanoma is a common type of cancer that affects a large number of people worldwide. Deep learning methods have been shown to classify images with high accuracy in different fields, and this study utilized deep learning to automatically detect melanomas in dermatoscopy images. This section reviews some related studies.

In [5], the proposed algorithm was divided into two parts: a hair removal process and a deep learning technique for the classification of skin lesions in dermatoscopy images. The authors utilized morphological operators and in-painting for hair removal, and a deep learning technique was then used to detect and remove any hairs remaining in the image. This served as a pre-processing stage for classifying melanomas on hair-bearing skin and added extra features to the images. They showed that the pre-processing stage increased the classification accuracy and assisted in melanoma detection. However, they did not explore varying skin colors across lesion images. They evaluated skin cancer detection and the classifier using the PH2 dermoscopic image dataset.

In [6], an artificial bee colony algorithm was proposed for the detection of melanoma. Although the computation time for melanoma image detection was very short, a specialist would still have to perform a careful analysis based on the patient's information. The authors obtained the best results on the image databases used compared with other related works; the accuracy, sensitivity, and specificity of the classification using their methodology nearly reached a 100% success rate, making the proposed technique more suitable for melanoma detection than other methods. In [7], a classification method for 12 skin lesions was proposed that achieved strong results, and studies of the model's decision process were presented using interpretability methods. However, this method needs further testing with data covering different ages and ethnicities, which would improve the results. In [8], the authors focused on the implications of interactive Content-Based Image Retrieval (CBIR) tools and examined their classification accuracies. Based on their results, such a system may serve as an educational tool and aid image interpretation, allowing users to diagnose similar images. However, more studies with physicians and experts are needed to confirm these results. In [9], the performance of combinations of data balancing methods and machine learning techniques was evaluated for skin cancer classification. Residual Networks (ResNet) combined with random forest techniques were used to extract features, achieving the best recall value by adding noisy-sample and synthetic-sample cleaning to the pipeline before training.

In [10], a CNN was presented to improve patient phenotyping accuracy without requiring any input from users. The authors then considered deep learning interpretability by calculating gradient-based saliency in order to identify the phrases related to various phenotypes. They proposed utilizing deep learning to assist clinicians during chart review by highlighting phrases regarding patient phenotypes; this methodology could also support the definition of billing codes from phrases. In [11], six interpretable and discriminative representations were proposed for distinguishing skin lesions by incorporating accepted dermatological criteria. Their experiments showed these representations outperforming deep features and low-level features, and performance on clinical skin disease images (198 categories) was found to be comparable to that of dermatologists. In [12], an unsupervised deep learning framework based on a sparse stacked autoencoder was proposed for detecting translucency in clinical basal cell carcinoma (BCC) image patches. This framework achieved a detection accuracy of 93%, with a sensitivity and specificity of 77% and 97.1%, respectively. Its results could be used for translucency detection in skin patch images, and the framework is to be extended to infer translucency in whole skin lesion images. Furthermore, a CADx system for BCC was used based on the translucency and diagnostic features.

In [13], different non-invasive methods for the classification and detection of skin cancer were presented. The detection of melanoma requires several steps, such as preprocessing, segmentation, feature extraction, and classification. The paper surveyed various algorithms, such as the Support Vector Machine (SVM), the Asymmetry, Border, Color and Diameter (ABCD) rule, genetic algorithms, and CNNs; each algorithm has advantages and disadvantages. From their results, the SVM had the fewest disadvantages, but the advantages of back-propagation and K-means clustering neural networks outweighed those of the other algorithms. In [14], a CNN model was built to predict new cases of melanoma, organized into three phases. The first phase was dataset preparation, which included four processes: segmentation, used to detect the Region of Interest (ROI) in digital images; preprocessing, which used a bilateral filter to maintain sharp edges in the image; dimensionality and complexity reduction, achieved by converting the images to grayscale and then applying the Canny edge detection algorithm to detect object edges; and extraction of the final object using a bitwise algorithm. The second phase comprised the CNN layers, based on convolution layers (applied three times), max-pooling layers (applied three times), and fully connected layers (applied four times). The last phase was testing the CNN model, which achieved an accuracy of 0.74.

In [15], image acquisition, preprocessing, segmentation, noise removal, and feature extraction were utilized. The authors used supervised machine learning with a cubic regression method to train the machine, which automatically detected whether the skin cancer stage was benign or melanoma. In [16], deep learning models were used as the core implementation to construct models that assist in predicting skin cancer; tested on several datasets, they achieved an area under the curve of 99.77%. In [17], a technique was proposed that utilized a meta-heuristic algorithm for a CNN to train the biases and weights of the network based on back propagation, with the objective of minimizing the error rate of the CNN's learning step. The proposed technique was tested on images from the DermQuest and DermIS digital databases and compared with ten other classification techniques.

In [18], a two-step auto-classification framework for skin melanoma images was introduced that utilized transfer learning and adversarial training to detect melanoma. In the first step, the authors took advantage of inter-category variance to distribute data for a conditional image synthesis task, learning inter-category synthesis and mapping using representative category images from the over-represented samples via non-paired image-to-image translation. In the second step, they trained a CNN to classify melanoma using a training set augmented with the synthesized under-represented category images. This classifier was trained by minimizing the focal loss, which helped the model learn from difficult examples while decreasing the weight of easy examples. They demonstrated through many experiments that the proposed MelaNet algorithm improved the sensitivity by a margin of 13.10% and the area under the receiver operating characteristic curve (AUC) by 0.78% on 1627 images. In [19], the eVida M6 model was proposed. Automatic extraction of the ROI within a dermatoscopic image provided a significant improvement in classification performance by eliminating pixels that did not provide the classifier with lesion information. This model was a reliable predictor, with an excellent balance between overall accuracy (0.904), sensitivity (0.820), and specificity (0.925).

It is possible to build an intelligent system to detect melanoma skin cancer using the deep learning Conv2D method. The proposed method includes components for system creation, dataset loading, network building, network training, network testing, and code generation, and different deep learning CNN algorithms can be used sequentially. The main objective of the work presented here was the early detection of melanoma skin cancer using an enhanced CNN that yields the best accuracy in melanoma detection.

3 Preliminaries

This section presents an overview of the CNN algorithm used in the proposed framework. It highlights several CNN layers, including the depthwise separable 2D convolution, batch normalization, and max pooling 2D layers. The CNN layers are shown in Fig. 1.

Figure 1: The architecture of the convolutional neural network (CNN) layers

3.1 Depthwise Separable Convolution 2D Layer

Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately), followed by a pointwise convolution that mixes the resulting output channels. The depth multiplier argument controls the number of output channels generated per input channel in the depthwise step, as shown in Fig. 2 [20].
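The parameter saving from this factorization can be illustrated with a short counting sketch (a simplified illustration under the definition above, not the paper's implementation; biases are omitted):

```python
def standard_conv_params(k, c_in, c_out):
    # A standard 2D convolution learns one k x k kernel per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out, depth_multiplier=1):
    # Depthwise step: depth_multiplier spatial k x k kernels per input channel,
    # each acting on its own channel separately.
    depthwise = k * k * c_in * depth_multiplier
    # Pointwise step: a 1 x 1 convolution mixing the depthwise outputs into c_out channels.
    pointwise = c_in * depth_multiplier * c_out
    return depthwise + pointwise

# Example: 3 x 3 kernels, 32 input channels, 64 output channels.
print(standard_conv_params(3, 32, 64))   # 18432
print(separable_conv_params(3, 32, 64))  # 2336
```

For this configuration the separable form needs roughly one-eighth of the weights, which is why the proposed model favors SeparableConv2D layers.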

3.2 Rectified Linear Units(ReLU)

The ReLU is an activation function used to improve training in CNN deep learning and has a strong mathematical and biological basis. It applies a threshold at zero: the output is 0 when y < 0, and a linear function with slope 1 when y ≥ 0, as shown in Fig. 3 [21].
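The thresholding behavior described above can be sketched in a few lines (an illustrative definition, not the paper's code):

```python
def relu(y):
    # Outputs 0 for negative inputs; passes non-negative inputs through unchanged (slope 1).
    return max(0.0, y)

print([relu(y) for y in [-2.0, -0.5, 0.0, 1.5]])  # [0.0, 0.0, 0.0, 1.5]
```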

Figure 2: The architecture of the depthwise separable convolution 2D

Figure 3: The Rectified Linear Unit (ReLU) activation function

3.3 Batch Normalization Layer

Batch normalization is a technique for training deep neural networks that standardizes the inputs to a layer for each mini-batch. This stabilizes the learning process and dramatically reduces the number of training epochs required to train deep networks [22].
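The standardization can be sketched on a single feature over one mini-batch (a minimal illustration of the centering, scaling, and affine steps; gamma, alpha, and xi are generic parameter names, not taken from the paper):

```python
import math

def batch_norm(batch, gamma=1.0, alpha=0.0, xi=1e-5):
    # Centering: subtract the mini-batch mean E(Y).
    mean = sum(batch) / len(batch)
    centered = [y - mean for y in batch]
    # Scaling: divide by sqrt(Var(Y) + xi); xi guards against zero variance.
    var = sum(c * c for c in centered) / len(batch)
    scaled = [c / math.sqrt(var + xi) for c in centered]
    # Affine: learned scale gamma and shift alpha.
    return [s * gamma + alpha for s in scaled]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(v, 3) for v in out])  # zero-mean, unit-variance values
```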

3.4 Max Pooling 2D Layer

Pooling layers provide an approach to down-sampling feature maps by summarizing the presence of features in patches of the feature map. Two common pooling methods are average pooling and max pooling, which summarize the average presence of a feature and the most activated presence of a feature, respectively.
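A minimal max-pooling sketch over a 2D feature map (illustrative only; assumes non-overlapping windows with stride equal to the pool size):

```python
def max_pool_2d(fmap, pool=2):
    # Slide a non-overlapping pool x pool window and keep the maximum activation.
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i + di][j + dj] for di in range(pool) for dj in range(pool))
             for j in range(0, w - pool + 1, pool)]
            for i in range(0, h - pool + 1, pool)]

fmap = [[1, 3, 2, 4],
        [5, 6, 1, 0],
        [7, 2, 9, 8],
        [0, 1, 3, 5]]
print(max_pool_2d(fmap))  # [[6, 4], [7, 9]]
```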

3.5 Flatten Layer

The flatten layer is placed after the depthwise separable convolution layers and is used to reduce the dimensionality of the parameters, which include the tags and features used for classification and detection. In addition, the flatten layer does not affect the batch size, as shown in Fig. 4 [23].
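Flattening can be sketched as collapsing a stack of feature maps into a single vector (an illustrative example, not the paper's code):

```python
def flatten(feature_maps):
    # Collapse a C x H x W stack of feature maps into one 1-D vector per sample,
    # leaving the batch dimension (not shown here) untouched.
    return [v for channel in feature_maps for row in channel for v in row]

fmaps = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]  # 2 channels of 2 x 2
print(flatten(fmaps))  # [1, 2, 3, 4, 5, 6, 7, 8]
```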

Figure 4:Flatten layer example

3.6 Softmax Activation Function

The softmax function generalizes the sigmoid function to multiple-class classification; this layer has the same number of neurons as there are classes. It can be represented as Eq. (1) [24]:

$\eta(x_i) = \frac{e^{x_i}}{\sum_{k=1}^{K} e^{x_k}}, \quad i = 1, \ldots, K$ (1)

where $K$ is the number of classes.
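A direct sketch of this function for the two-class (benign vs. melanoma) case (illustrative; the max-subtraction is a standard numerical-stability trick, not something stated in the paper):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability;
    # this does not change the resulting probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0])  # hypothetical scores for the two classes
print(round(sum(probs), 6))  # probabilities sum to 1.0
```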

4 The Proposed Method

This paper proposes an automated classification method to detect the presence of melanoma, consisting of four main stages, as depicted in Tab. 1. The purpose of these stages is to filter areas in the skin images that may contain a skin lesion in order to detect the presence of melanoma. We trained, tested, and validated this method using the resulting dataset, as shown in Algorithm 1.

Table 1: An automated classification method

Algorithm 1: The proposed Conv2D pipeline

1. Input an image (benign or melanoma).

2. Apply the depthwise separable convolution 2D layer:

$O_i = \sum_{j=1}^{N} F_j \times K_{i,j} + b_i$ (2)

where $O_i$ is the final output of the depthwise separable convolution, of size $I_g \times I_g \times M$; $F_j$ are the input feature maps, of size $I_f \times I_f \times N$; $K_{i,j}$ is the depthwise kernel, of size $K \times K$; and $b_i$ is the bias.

3. Apply the activation function (ReLU):

$Y = \begin{cases} O_i, & \text{if } O_i > 0 \\ 0, & \text{otherwise} \end{cases}$ (3)

4. Apply the batch normalization layer. Input: feature $Y \in \mathbb{R}^{C \times N \times W \times H}$, where $C$ is the number of channels, $N$ is the batch size, $W$ is the width of the feature, and $H$ is the height of the feature.

Centering: $Y_n = Y - E(Y)$ (4)

Scaling: $Y_s = \frac{Y_n}{\sqrt{\mathrm{Var}(Y) + \xi}}$ (5)

Affine: $Y_a = Y_s \gamma + \alpha$ (6)

where $E(Y)$ is the mean, $\mathrm{Var}(Y)$ is the variance, $\xi$ is utilized to avoid zero variance, $\gamma$ is the learned scale factor, and $\alpha$ is the bias factor.

5. Apply the max pooling 2D layer:

$P = \mathrm{MaxPool}(Y_a)$ (7)

where $P$ is the output of max pooling and $Y_a$ is its input.

6. Repeat the previous layers three times.

7. Apply the flatten layer:

$FL_i = P \times V_i = \sum_{c=1}^{C}\sum_{w'=1}^{W}\sum_{h'=1}^{H} P(c, w - w', h - h') \times V_i(c, w', h')$ (8)

where $P \in \mathbb{R}^{C \times M \times N}$, $M$ and $N$ are spatial dimensions, $V_i \in \mathbb{R}^{C \times W \times H}$, and $i$ is the index of the output channel.

8. Apply the softmax activation function:

$\mathrm{Softmax} = \eta(FL_i) = \frac{e^{FL_i}}{\sum_{k=1}^{K} e^{FL_k}}$ (9)

9. Decision: evaluate whether the image is benign or melanoma.

The Conv2D algorithm summarizes the model in nine steps. In the first step, we input an image that is either benign or melanoma. In the second step, we apply a depthwise separable convolution 2D layer to increase efficiency and reduce complexity. In the third step, we apply the ReLU activation function to mitigate the vanishing gradient problem and allow the model to perform better and learn faster. In the fourth step, we apply a batch normalization layer, performing centering, scaling, and affine transformations to decrease the number of required training epochs. In the fifth step, we apply a max-pooling 2D layer to progressively decrease the total number of computations and parameters in the network. In the sixth step, we apply the previous layers three times. In the seventh step, we apply a flatten layer to flatten the output of the max-pooling layer into one column, which is fed into an artificial neural network (ANN) for further processing. In the eighth step, we apply the softmax activation function, which is utilized for nonlinear problems to distinguish between classes. In the last step, the model evaluates the final resulting image as either benign or melanoma, as shown in Fig. 5.

Figure 5: The architecture of the Conv2D method

Tab. 2 below summarizes the model, including the output size and the number of filters for each layer. The first SeparableConv2D layer computes 32 filters of 3 × 3 over the input image; similarly, the second SeparableConv2D layer computes 64 filters and the third computes 128. The next main layer is MaxPooling2D, which follows each SeparableConv2D layer. The objective of this layer is to down-sample the feature maps and reduce the dimensionality of the images. The output of the first MaxPooling2D layer is 74 × 74; similarly, the second is 36 × 36, the third is 17 × 17, and the fourth is 7 × 7. In this method, the images are evaluated as "melanoma" or "benign" cases, producing two classes whose performance was tested using the accuracy, specificity, precision, sensitivity, and f1-score metrics.
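The spatial sizes in Tab. 2 can be reproduced with a short shape calculation, assuming 3 × 3 'valid' (unpadded, stride-1) convolutions, 2 × 2 pooling, and a conv stage before each of the four reported pooling outputs (these are assumptions for illustration; the paper does not state the padding mode):

```python
def conv_out(size, kernel=3):
    # 'Valid' convolution: no padding, stride 1.
    return size - kernel + 1

def pool_out(size, pool=2):
    # Non-overlapping 2 x 2 max pooling (floor division).
    return size // pool

size = 150  # input images resized to 150 x 150
sizes = []
for _ in range(4):  # four conv + pool stages, matching the four pooling outputs in Tab. 2
    size = pool_out(conv_out(size))
    sizes.append(size)
print(sizes)  # [74, 36, 17, 7]
```

The computed sequence 74, 36, 17, 7 matches the pooling outputs reported in Tab. 2, which supports the unpadded-convolution assumption.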

Table 2: A summary of the Conv2D model

5 Experiments and Analysis

5.1 Dataset

The image dataset utilized in the proposed work is a well-known public dataset for skin cancer, both malignant and benign, published in [25]. It contains skin cancer images that are used to evaluate skin cancer detection. It consists of 10018 images, divided into two sections. The test section consisted of 5003 and 5015 images for benign and melanoma tumors, respectively. The training section consisted of 3502 and 3511 images for benign and melanoma tumors, respectively. Radiologists confirmed all the datasets and their annotations. To evaluate the proposed deep learning Conv2D method, 2637 images were utilized to train the proposed method, and the remainder were utilized to test and validate it.

All experiments were implemented on an Apple MacBook Air 13 laptop with an 8-core Graphics Processing Unit (GPU) and a 512 GB Solid State Drive (SSD), and compiled using Python 3.8. We implemented a model to evaluate the proposed Conv2D method and the parameters giving the best performance. We designed two major scenarios using the public dataset published in [25] to evaluate the proposed method. The results of these scenarios were evaluated using several measures known for skin cancer detection systems: the accuracy, specificity, sensitivity, precision, and f1-score metrics. These experiments were conducted in order to verify that the results of the statistical analysis can be applied to other datasets. We define the measures applied in these experiments following [26]. More details about the experiments are given below.

5.2 Evaluation Metrics

The accuracy represents the number of correct classifications over the total number of evaluated elements, as expressed in the following equation, where $TP$, $TN$, $FP$, and $FN$ denote the true positives, true negatives, false positives, and false negatives, respectively. The specificity and sensitivity metrics are commonly used to evaluate performance in the field of medicine [27].

$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$

The specificity represents the proportion of correctly classified negative elements [27]:

$\mathrm{Specificity} = \frac{TN}{TN + FP}$

The sensitivity represents the proportion of correctly classified positive elements [27]:

$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$

The precision represents the number of correctly classified positive elements out of all elements classified as positive [27]:

$\mathrm{Precision} = \frac{TP}{TP + FP}$

The f1-score represents the harmonic mean of the recall (sensitivity) and precision [27]:

$\mathrm{F1} = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}}$
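The five metrics can be computed together from the confusion-matrix counts; a minimal sketch follows (the counts in the example are hypothetical, chosen only to illustrate rates of the same magnitude as those reported later; they are not the paper's confusion matrix):

```python
def metrics(tp, tn, fp, fn):
    # Standard binary-classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)  # also called recall
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, specificity, sensitivity, precision, f1

# Hypothetical counts for illustration only.
acc, spec, sens, prec, f1 = metrics(tp=46, tn=48, fp=2, fn=4)
print(round(acc, 2), round(spec, 2), round(sens, 2))  # 0.94 0.96 0.92
```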

5.3 Experiments and Their Results

Two major scenarios were designed and conducted using the public ISIC dataset [25] to evaluate the proposed method. The description of each scenario, its results, and a discussion are given below.

5.3.1 Scenario 1: Conv2D Epochs

The goal of this scenario is to investigate the impact of the number of epochs on the performance of the proposed solution. The best results are determined by the highest classification measures achieved. The experiments in this scenario used the following settings.

1. All the images were resized to 150 × 150.

2. The model divided these images into 70% for training, 10% for validation, and 20% for testing, with a learning rate of 0.01.

3. The activation functions used were ReLU and Softmax.

4. The set of Conv2D layers was fixed, and the model was tested with different numbers of epochs (25, 50, 75, 100, 125, and 159).
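The 70/10/20 split in step 2 can be sketched as a simple index partition (an illustrative helper, not the paper's code; the seed and function name are assumptions):

```python
import random

def split_indices(n, train=0.7, val=0.1, seed=0):
    # Shuffle once, then slice into 70% train / 10% validation / 20% test.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(1000)
print(len(tr), len(va), len(te))  # 700 100 200
```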

5.3.2 The Results of Scenario 1

The results of training and testing with various numbers of epochs are compared in Tab. 3. From this table, it can be seen that the best results were obtained when 100 epochs were used on the input images. These results were evaluated using the specificity, sensitivity, precision, accuracy, and f1-score metrics. The testing and validation accuracy vs. epochs is plotted in Fig. 6. It can be seen that the testing and validation accuracy increases with the number of epochs until it reaches 100 epochs, where the testing accuracy is 94% and the validation accuracy is 93.6%.

Table 3: The results of Scenario 1: Conv2D epochs

5.3.3 Scenario 2: Conv2D Learning Rate

The objective of this scenario is to determine the effect of the learning rate using the best configuration obtained from Scenario 1. We aim to test which value of the learning rate gives the highest metric results. In this scenario, the following steps were followed.

1. For the above setup, learning rates η = 0.001, 0.005, 0.01, 0.05, 0.1, and 0.5 were tested, and the obtained results were recorded.

2. For each experiment, the set of layers was fixed, with the same activation functions.

5.3.4 The Results of Scenario 2

Based on the results of Scenario 1, the number of epochs was fixed at 100, and all measures were tested with different learning rates. All metric results are presented in Tab. 4. The best results across all metrics were obtained using the "ReLU" and "Softmax" activation functions with a learning rate of 0.01. The importance of the proposed method lies in its ability to detect melanoma skin cancer early using the deep learning Conv2D.

Table 4: The results of Scenario 2: Conv2D learning rate

5.4 Comparison with Related Work

To further evaluate our obtained results, we compared them with the results of the related work discussed in Section 2. The compared works were selected because they proposed using deep learning for the early detection of melanoma on a public dataset and reported accuracy, specificity, and sensitivity. A summary of this comparison is provided in Tab. 5. In [5], the results included classification during validation and testing. Their model divided the PH2 images into 70% for training, 10% for validation, and 20% for testing, with a learning rate of 0.001. They repeated their operations with three classes (common nevus, atypical nevus, and melanoma) and with two classes (benign and melanoma). The best accuracy results of their proposed system were 96%, 86%, and 88% for melanoma, common nevus, and atypical nevus, respectively. In [14], the ISIC dataset was used with their proposed method, with 600 images for testing and 150 images for validation in detecting melanoma using a CNN with 25 epochs; the accuracy of the method was 74%. In [18], the dataset used with their proposed CNN method was randomly divided into 70%, 10%, and 20% for the training, validation, and testing sets, respectively, with different learning rates between 0.2 and 0.9. The performance results for the sensitivity, specificity, Positive Predictive Value (PPV), Negative Predictive Value (NPV), and accuracy were 95%, 92%, 84%, 95%, and 91%, respectively. In [17], the dataset used with their approach consisted of 10 melanoma images and 727 benign images. The sensitivity results of their model were 89% and 100% for the benign and melanoma images, respectively, and their F-score results were 21% and 94% for the melanoma and benign images, respectively. In [19], the dataset was divided into 375 melanoma images and 1620 benign images for training, 30 melanoma and 119 benign images for validation, and 117 melanoma and 481 benign images for testing, without applying any data reduction or augmentation process. Their specificity, sensitivity, accuracy, and balanced accuracy results were 96%, 82%, 90%, and 87%, respectively.

Table 5: Comparison with related work

From this table, it can be observed that the results obtained by the proposed method were the best in terms of accuracy, specificity, and sensitivity. In addition, our results were obtained from the largest dataset among the compared studies, except for [17], whose data comprised two skin cancer databases (the DermIS Digital Database and the DermQuest Database) with three classifiers. This means that our results are more reliable in terms of scalability.

6 Conclusions

In this study, the proposed Conv2D method based on a deep learning CNN was implemented using 3297 images provided by Kaggle. The proposed framework started with image preprocessing to extract the ROI images, and then augmented some images to produce more data. The resulting data were used to train a CNN with many layers, including a separable Conv2D layer, an activation ("ReLU") layer, a batch normalization layer, a max pooling 2D layer, and a dropout layer, to filter regions within the images that could contain skin lesions and detect the presence of melanoma. Testing the method produced promising results, with an accuracy of 0.94. In addition, our results were obtained from the largest dataset among most of the compared studies, which makes them more reliable in terms of scalability. In future work, we plan to investigate whether other deep learning techniques would further improve the accuracy and the other metrics.

Acknowledgement: The authors would like to thank the Deanship of Scientific Research and the Research Center for Engineering and Applied Sciences, Majmaah University, Saudi Arabia, for their support and encouragement; the authors would also like to express deep thanks to our college (College of Science at Zulfi City, Majmaah University, Al-Majmaah 11952, Saudi Arabia), Project No. 31-1439.

Funding Statement: The work and the contribution were supported by the Research Center for Engineering and Applied Sciences and the College of Science at Zulfi City, Majmaah University.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
