
Smart MobiNet: A Deep Learning Approach for Accurate Skin Cancer Diagnosis

Computers, Materials & Continua, 2023, Issue 12

Muhammad Suleman, Faizan Ullah, Ghadah Aldehim, Dilawar Shah, Mohammad Abrar3, Asma Irshad and Sarra Ayouni

1Department of Computer Science,Bacha Khan University,Charsadda,Pakistan

2Department of Information Systems,College of Computer and Information Sciences,Princess Nourah bint Abdulrahman University,P.O.Box 84428,Riyadh,11671,Saudi Arabia

3Faculty of Computer Studies,Arab Open University,Muscat,Oman

4School of Biochemistry and Biotechnology,University of the Punjab,Lahore,Pakistan

ABSTRACT Skin cancer, particularly melanoma, presents a substantial risk to human health, which makes early detection essential. This study examines the need for efficient early detection systems based on deep learning techniques. Existing methods, however, exhibit constraints in terms of accessibility, diagnostic precision, data availability, and scalability. To address these obstacles, we propose a lightweight model, Smart MobiNet, which is derived from MobileNet and incorporates additional distinctive attributes. The model uses a multi-scale feature extraction methodology built from various convolutional layers. The ISIC 2019 dataset, sourced from the International Skin Imaging Collaboration, is employed in this study, and traditional data augmentation approaches are implemented to address the issue of model overfitting. We conduct experiments to evaluate and compare the performance of three models, a baseline CNN, MobileNet, and Smart MobiNet, on the task of skin cancer detection. The findings indicate that the proposed model outperforms the other architectures, achieving an accuracy of 0.89 with balanced precision, sensitivity, and F1 scores, all measuring 0.90. The model can therefore serve as a vital instrument that assists clinicians in efficiently and precisely detecting skin cancer.

KEYWORDS Deep learning; Smart MobiNet; machine learning; skin lesion; melanoma; skin cancer classification

1 Introduction

Skin cancer is a type of cancer that occurs when abnormal skin cells develop without control. The most common cause of skin malignancy is injury to the skin's deoxyribonucleic acid (DNA) from the sun's harmful ultraviolet (UV) rays [1]. This damage can cause mutations in the skin cells that lead to the formation of cancerous neoplasms [2]. Skin tumors are a frequent form of cancer, often triggered by prolonged exposure to UV radiation from the sun, and are among the most prevalent forms of cancer around the globe. Artificial intelligence (AI) has become increasingly prominent in the healthcare sector in recent years, particularly in cancer diagnosis. AI, and more specifically deep learning, has shown great promise in the early recognition and identification of skin cancer. Skin cancer is a dangerous medical condition that, if untreated, can be fatal. There are different kinds of skin tumors; however, melanoma is the most deadly and aggressive form of the disease [3].

Multiple factors can raise an individual's probability of developing skin cancer. These include excessive exposure to the sun, especially during childhood and adolescence, living in sunny or high-altitude climates, a personal or family history of skin cancer, and a weak immune system [4]. Excessive sun exposure is one of the most hazardous risk factors: people who spend a lot of time outdoors, especially without adequate protection, are more likely to develop skin cancer because the sun's UV rays can damage the skin's DNA, resulting in mutations that can lead to skin cancer. Living in sunny or high-altitude climates can also increase a person's risk of developing skin cancer [5]. Medical practitioners often face difficulties while diagnosing such diseases due to human-related issues such as tiredness, excessive patient load, and limited expertise, so the machine learning and deep learning communities are working to aid doctors in the correct diagnosis of such deadly diseases [6].

Earlier approaches to skin cancer diagnosis suffer from several limitations that make Smart MobiNet a more effective method, including lack of accessibility [7], limited diagnostic accuracy [8], limited training data [9], and poor scalability [10]. By addressing these limitations, the proposed model offers a more effective and practical solution for skin cancer diagnosis, improving accessibility, accuracy, and scalability compared to earlier approaches. Smart MobiNet is a deep learning model that has been specifically designed for the categorization of skin lesions. It is based on the popular MobileNet architecture and has shown high accuracy in distinguishing between normal and cancerous skin lesions. The use of Smart MobiNet in skin cancer diagnosis has several advantages. Firstly, it is a non-invasive and low-cost method of diagnosis, which makes it accessible to a broader range of patients. Secondly, it has shown strong performance in terms of accuracy, specificity, and sensitivity in the detection of malignant skin lesions.

The goal of the study presented in this article is to analyze the use of the deep learning Smart MobiNet model for the identification and diagnosis of skin cancer. This study proposes the Smart MobiNet deep learning architecture with the aim of developing a dependable and accurate tool for early skin cancer detection that is accessible to a broader range of patients, particularly those in rural or remote areas. The research aim is to present a detailed analysis of the use of deep learning, with a focus on Smart MobiNet, for the identification and diagnosis of skin cancer. In addition, this paper contributes to the following areas.

• This paper presents a new lightweight CNN-based model, Smart MobiNet.

• The newly proposed Smart MobiNet is applied in healthcare to improve the prediction accuracy of skin cancer using image datasets.

• This paper presents a data augmentation technique that combines commonly used approaches.

Thus, this study contributes to the growing body of research on the application of AI and deep learning in the healthcare sector, particularly in skin cancer diagnosis. An overview of existing techniques is presented in Section 2. Section 3 presents the proposed deep learning architecture of Smart MobiNet. The results of the research are presented and discussed in Section 4, and Section 5 concludes the paper.

2 Related Work

A variety of techniques have been used by the research community for the identification of skin cancer. These techniques can broadly be divided into two classes, namely conventional machine learning and deep learning techniques. The following subsections present an overview of the existing work on the diagnosis of skin cancer.

2.1 Conventional Machine Learning

Conventional machine learning approaches have been widely used for computer-assisted cancer identification through biological image analysis. This subsection discusses some of the methods developed in the recent past for skin cancer diagnosis using machine learning techniques, as summarized in Table 1.

Table 1:Machine learning techniques for skin cancer detection

Reference [16] used a Wiener filter, a dynamic histogram equalization method, and an active contour segmentation mechanism to extract features from skin cancer images. A Support Vector Machine (SVM) binary classifier based on a gray-level co-occurrence matrix (GLCM) was adopted to categorize the retrieved features. The authors reported an accuracy of 88.33%, 95% sensitivity, and 90.63% specificity on a dataset of 104 dermoscopy images. Another study [11] presented a hybrid technique for skin cancer classification and prediction. They used Contrast Limited Adaptive Histogram Equalization and median filter techniques to improve image quality, and the Normalized-Otsu algorithm for skin lesion segmentation. They extracted 15 features from the segmented images and fed them into a hybrid classifier comprising a deep-learning-based neural network and a hybrid AdaBoost-SVM. They reported a classification precision of 93% on a dataset of 992 images of cancerous and normal lesions. However, the hybrid approach took a long time during the training and testing phases. Reference [13] used a multilevel contrast stretching algorithm to separate the foreground from the background in the first stage. They then used a threshold-based technique to extract features such as central distance, related labels, texture-feature analysis, and boundary connections in the second phase. In the third phase, they introduced an enhanced feature extraction criterion with dimensionality reduction, which combined conventional and modern feature extraction techniques. They used a multi-class SVM (MSVM) classifier and reached good accuracy on the International Symposium on Biomedical Imaging (ISBI) dataset.

In more recent work, reference [17] used Generative Adversarial Networks (GANs) for skin lesion classification and applied data augmentation to improve the GAN. In their experiments, the average specificity was 74.3% and the average sensitivity was 83.2%. These works show that conventional machine learning techniques can be effectively used for skin cancer diagnosis. However, there are two major drawbacks of conventional machine learning. First, it needs manual feature extraction, which can be a time-consuming and tedious process. Second, it may not perform well on large datasets. The next subsection discusses how deep learning techniques overcome these limitations.

2.2 Deep Learning

In recent years, researchers have explored the potential of artificial intelligence (AI) to enhance or replace current screening techniques for skin cancer. Convolutional neural networks (CNNs), a type of deep neural network, have demonstrated high accuracy in visual imaging challenges and are commonly used in clinical image analysis and cancer detection, as shown in Table 2. Key advantages of CNNs for skin cancer detection include end-to-end training and automatic feature extraction, and several studies have applied CNNs to this task. Reference [18] used a deep learning approach to extract ad hoc customized features from images and merged them with features learned by a deep learning technique. They then classified the whole feature set into cancerous or non-cancerous lesions using a deep learning approach, achieving an accuracy of 82.6%, a sensitivity of 53.3%, 78% AUC (area under the curve), and a specificity of 89.8% on the ISIC dataset; however, their sensitivity and specificity rates were low. Reference [19] developed a CAD (Computer Aided Diagnosis) system using 19,398 images, achieving a mean specificity of 81.3% and sensitivity of 85.1%. Reference [20] categorized malignant skin cancer with 92.8% sensitivity and 61.1% specificity using a CNN on the publicly available ISIC dataset with 12,378 dermoscopy images. However, the large number of training parameters made the model slow to train and required a powerful GPU (Graphics Processing Unit), making the method impractical.

Table 2:Deep learning techniques for skin cancer

Finally, references [24–26] presented a DCNN solution for automatic skin lesion diagnosis, which includes three key stages: feature extraction with the Inception V3 model, contrast enhancement, and lesion boundary extraction with a CNN.

3 Methodology

This section presents an overview of the proposed technique for skin cancer classification, as shown in Fig. 1. The ISIC 2019 dataset [27] was collected from the International Skin Imaging Collaboration (ISIC). Dataset anomalies were eliminated using rescaling and normalization. To avoid model overfitting, several traditional data augmentation techniques were applied. The data was then divided into a 70:30 ratio for training and testing, respectively. Three architectures, a baseline CNN, MobileNet, and Smart MobiNet, were applied in the skin cancer identification experiments.
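The paper does not include the data-splitting code; the following is only a minimal sketch of the 70:30 train/test split described above, assuming the ISIC-2019 images and labels have already been loaded into NumPy arrays (the placeholder arrays and the use of stratification are assumptions, not details stated by the authors):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the loaded ISIC-2019 images and labels;
# in practice these would be read from the dataset directory.
images = np.random.rand(100, 224, 224, 3).astype("float32")
labels = np.repeat(np.arange(8), 13)[:100]  # roughly balanced placeholder labels, 8 categories

X_train, X_test, y_train, y_test = train_test_split(
    images, labels,
    test_size=0.30,   # 70:30 train/test ratio used in this study
    stratify=labels,  # keep class proportions similar in both splits (an assumed default)
    random_state=42,  # fixed seed for reproducibility
)
print(X_train.shape, X_test.shape)
```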

3.1 ISIC-2019

The International Skin Imaging Collaboration created the ISIC archive, a global repository of dermoscopic images, to enhance access to innovative knowledge. Hosting the ISIC Challenges, it was created to encourage technical research in automated algorithmic analysis and for clinical training purposes. The ISIC-2019 Challenge training set combines several dermoscopic image databases. The most typical skin lesions include squamous cell carcinoma, basal cell carcinoma, seborrheic keratosis, actinic keratosis, dermatological lesions, and solar lentigo. In all, 25,331 images grouped into 8 categories are available for training. The test set holds 8,238 images whose labels are not publicly available. Additionally, an outlier class that does not appear in the training set is present in the test set and must be recognized by submitted techniques. An automated assessment system evaluates predictions on the ISIC-2019 test set, as shown in Table 3. The ISIC-2019 Challenge aims to categorize dermoscopic images into nine diagnostic groups: the eight known categories, Melanoma (MEL), Melanocytic nevus (NV), Basal cell carcinoma (BCC), Actinic keratosis (AK), Benign keratosis (BKL), Dermatofibroma (DF), Vascular lesion (VASC), and Squamous cell carcinoma (SCC), plus the outlier (unknown) class that appears only in the test set.

Figure 1:Proposed methodology

Table 3:Data set description of ISIC-2019

3.2 Preprocessing

Pre-processing refers to the transformations applied to the data before the algorithm receives it. Data pre-processing is the procedure of converting messy raw data into clean data sets. In other words, data gathered from diverse sources arrives in a raw form that makes analysis difficult. In machine learning tasks, the data must be in an appropriate format to obtain effective results from the applied model. Image normalization and image rescaling are utilized in this study. In rescaling, each original image is converted to 224×224 pixels to minimize computation cost.
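The preprocessing code is not given in the paper; the snippet below is a minimal sketch of the two steps described above (rescaling to 224×224 and pixel-value normalization), using Pillow and NumPy. The file name is purely illustrative:

```python
import numpy as np
from PIL import Image

def preprocess(path, target_size=(224, 224)):
    """Rescale a dermoscopic image to 224x224 pixels and normalize values to [0, 1]."""
    img = Image.open(path).convert("RGB").resize(target_size)
    return np.asarray(img, dtype=np.float32) / 255.0  # min-max normalization

# Example with a hypothetical ISIC file name:
# x = preprocess("ISIC_0000000.jpg")  # -> array of shape (224, 224, 3)
```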

3.3 Data Augmentation

Modern advances in deep learning models are attributed to the abundance and variety of available data. Substantial amounts of data are needed to enhance the results of machine learning models, but collecting such massive amounts of data is time-consuming and costly. Data augmentation was therefore applied to inflate the dataset. It is a method that significantly increases the diversity and amount of available data without gathering new data. It is widespread practice when training large neural networks to create new images with approaches such as adding noise, padding, cropping, horizontal flipping, and adjusting brightness. The training images in this work are augmented to make the model more adaptive to new input, which improves testing accuracy; the parameters are shown in Table 4. These parameters are selected to generate a diversified set of images that counters model overfitting and improves generalization and validity. The resulting dataset contains images randomly rotated by up to 10 degrees, zoomed with a 0.1 ratio, flipped vertically or horizontally, or shifted in height or width with a 0.1 ratio, and an image can be generated with a single technique or any combination of these techniques. These are commonly used techniques [28], which are combined in this article.

Table 4:Data augmentation parameters
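The augmentation code itself is not included in the paper; the following is a sketch of the configuration described in Section 3.3 (10-degree rotation, 0.1 zoom, 0.1 width/height shift, horizontal and vertical flips), assuming Keras' ImageDataGenerator is used:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters as listed in Table 4
augmenter = ImageDataGenerator(
    rotation_range=10,       # random rotation of up to 10 degrees
    zoom_range=0.1,          # random zoom with a 0.1 ratio
    width_shift_range=0.1,   # horizontal shift of up to 10% of image width
    height_shift_range=0.1,  # vertical shift of up to 10% of image height
    horizontal_flip=True,
    vertical_flip=True,
)

# Typical usage on the training split (X_train, y_train assumed already loaded):
# train_flow = augmenter.flow(X_train, y_train, batch_size=32)
# model.fit(train_flow, epochs=50)
```

Each generated image receives a random combination of these transformations, matching the description above.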

3.4 Convolutional Neural Networks

CNN is a deep learning technique that is commonly used in image processing applications such as skin cancer detection, as shown in Fig. 2. The convolutional layer, pooling layer, activation layer, and dense layer are the main components of ConvNets.

Figure 2:Classification of skin cancer using CNN

The convolution of two functions f and h in the continuous domain is stated as follows:

$$(f * h)(t) = \int_{-\infty}^{\infty} f(\tau)\, h(t - \tau)\, d\tau$$

For discrete signals, the corresponding convolution operation is defined as:

$$(f * h)[n] = \sum_{m=-\infty}^{\infty} f[m]\, h[n - m]$$

Extending this 1D convolution to the 2D case used for images gives:

$$(f * h)[i, j] = \sum_{m}\sum_{n} f[m, n]\, h[i - m,\, j - n]$$

In this scenario, the function h is considered a filter (kernel) and is convolved over the image f. At each pixel location, the kernel and image are convolved, and the result is a two-dimensional array known as a feature map. The convolution layer output is passed through a non-linear activation layer such as the Parameterized Rectified Linear Unit (PReLU), Rectified Linear Unit (ReLU), SoftMax, Randomized Leaky Rectified Linear Unit (RLReLU), Exponential Linear Unit (ELU), or Leaky Rectified Linear Unit (L-ReLU). Deep learning methods require activation functions to perform properly; these functions influence the model's output, its accuracy and efficiency, and the speed of convergence. After the convolutional layer, a pooling layer is typically used. It down-samples the feature maps through spatial pooling while keeping the most prominent features, and it decreases the number of parameters to prevent over-fitting. Sum pooling, average pooling, and max pooling are some examples of pooling processes; in addition to selecting a pooling filter, the stride and kernel size can also be defined. The final layer is the dense layer, which provides the ConvNet model's prediction.
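To make the 2D case concrete, the following small sketch (not taken from the paper) applies a 2×2 kernel to a 4×4 image with NumPy and produces the 3×3 feature map described above (valid convolution, no padding):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the flipped kernel over the image and
    accumulate element-wise products into a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # toy filter
print(conv2d(image, kernel))                       # 3x3 feature map
```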

Max pooling is a sample-based discretization operation. Applying an N × N max filter to the image creates the feature map by choosing the highest pixel value in each stride. In sum and average pooling, the sum or the average of the pixel values, respectively, is written to the feature map. Fig. 3 illustrates the operation of max pooling.

Figure 3:Max pooling operation
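As a small numerical illustration of this operation, the following sketch applies 2×2 max pooling with stride 2 to a 4×4 feature map using NumPy (the values are arbitrary):

```python
import numpy as np

feature_map = np.array([
    [1, 3, 2, 4],
    [5, 6, 1, 2],
    [7, 2, 8, 1],
    [3, 4, 9, 6],
], dtype=float)

# 2x2 max pooling with stride 2: keep the largest value in each window
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6. 4.]
#  [7. 9.]]
```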

To feed feature maps to an Artificial Neural Network (ANN), a single column vector of the image pixels is needed. Therefore, the feature maps are flattened into column vectors, as shown in Fig. 4.

Figure 4:Flattening operation

When the fully connected layer is applied, it receives input from the convolution/pooling layers above and creates an N-dimensional vector, where N stands for the number of classes to be identified. Based on the probabilities of the neurons, the layer selects the properties that relate most strongly to a certain class.
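Putting these components together, the following is a minimal sketch of a baseline CNN of the kind described in this section (convolution, ReLU activation, max pooling, flattening, and dense layers ending in a softmax classifier). The layer sizes are illustrative assumptions, not the exact configuration used in the experiments:

```python
from tensorflow.keras import layers, models

num_classes = 8  # known diagnostic categories in the training set

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # convolution + ReLU
    layers.MaxPooling2D(pool_size=2),                     # spatial down-sampling
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),                                     # feature maps -> vector
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                                  # regularization against overfitting
    layers.Dense(num_classes, activation="softmax"),      # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```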

3.5 Smart MobiNet

Smart MobiNet is a novel architecture that aims to enhance the accuracy and efficiency of skin cancer detection. This architecture is an extension of the MobileNet framework and integrates additional features and optimizations to further enhance its performance. One of the key features of Smart MobiNet is its multi-scale feature extraction approach. This involves the incorporation of multiple convolutional layers with different kernel sizes and strides, as shown in Fig. 5, which operate on various levels of image resolution. This enables the network to better capture fine-grained details and patterns in skin lesion images, which are critical for a correct diagnosis. Another critical aspect of Smart MobiNet is the incorporation of attention mechanisms, which enable the network to selectively focus on important regions of the image while ignoring irrelevant information. This is achieved through attention modules that dynamically adjust the importance of different feature maps based on their relevance to the task at hand. This approach enables the network to better distinguish between benign and malignant skin lesions, even in cases where the lesions are small or subtle.

Figure 5:An illustration of smart MobiNet

Smart MobiNet also incorporates various optimizations for efficiency, such as depthwise separable convolutions, which reduce the number of parameters and computations needed while maintaining high accuracy. Additionally, the architecture includes various regularization techniques, such as dropout and weight decay, to prevent overfitting and improve generalization performance.

Smart MobiNet is a promising approach to skin cancer detection, combining the accuracy and efficiency of MobileNet with advanced features and optimizations for improved performance. Its multi-scale feature extraction and attention mechanisms enable the network to better capture critical information from skin lesion images, which can potentially lead to a faster and more accurate diagnosis of skin cancer.
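The exact layer configuration of Smart MobiNet is given in Table 5. Because that configuration is not reproduced in the text, the following is only an illustrative sketch of the three ingredients described above (parallel multi-scale convolution branches, a simple squeeze-and-excitation style channel-attention module, and depthwise separable convolutions); it is an assumption-based example, not the authors' implementation, and the paper does not state which attention formulation is actually used:

```python
from tensorflow.keras import layers, models

def multi_scale_block(x, filters):
    """Parallel convolution branches with different kernel sizes, concatenated channel-wise."""
    branches = [
        layers.Conv2D(filters, kernel_size=k, padding="same", activation="relu")(x)
        for k in (1, 3, 5)
    ]
    return layers.Concatenate()(branches)

def channel_attention(x, reduction=8):
    """Squeeze-and-excitation style module that re-weights feature maps by importance."""
    channels = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)
    w = layers.Reshape((1, 1, channels))(w)
    return layers.Multiply()([x, w])

inputs = layers.Input(shape=(224, 224, 3))
x = multi_scale_block(inputs, filters=16)            # multi-scale feature extraction
x = channel_attention(x)                             # attention over feature maps
x = layers.SeparableConv2D(64, 3, strides=2, padding="same", activation="relu")(x)   # depthwise separable
x = layers.SeparableConv2D(128, 3, strides=2, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)                           # regularization, as described above
outputs = layers.Dense(8, activation="softmax")(x)   # diagnostic categories
sketch = models.Model(inputs, outputs, name="smart_mobinet_sketch")
```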

Depthwise convolution applies one filter per input channel (the input depth) and can be written as:

$$G_{k,l,m} = \sum_{i,j} K_{i,j,m} \cdot F_{k+i-1,\, l+j-1,\, m}$$

where the m-th filter in K is applied to the m-th channel in F to form the m-th channel of the filtered output feature map G, and K is the depthwise convolutional kernel of size $S_K \times S_K \times M$. For a feature map of spatial size $D_F \times D_F$ with M input channels, the computational cost of depthwise convolution is:

$$S_K \cdot S_K \cdot M \cdot D_F \cdot D_F$$

In comparison to conventional convolution, depthwise convolution is extremely efficient. However, it does not combine input channels to produce new features; it only filters the input channels. To create these new features, an additional layer that computes a weighted sum of the depthwise convolution outputs using a 1×1 convolution is needed.

Depthwise separable convolution, which was introduced in earlier work on factorized convolutions, is the result of combining depthwise convolution and 1×1 (pointwise) convolution. With N output channels, the computational cost of depthwise separable convolution is:

$$S_K \cdot S_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F$$

which is the sum of the costs of the depthwise and pointwise 1×1 convolutions. Expressing convolution as this two-step filtering and combining process results in a computation reduction, relative to a standard convolution of the same kernel size, of:

$$\frac{S_K \cdot S_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F}{S_K \cdot S_K \cdot M \cdot N \cdot D_F \cdot D_F} = \frac{1}{N} + \frac{1}{S_K^{2}}$$
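As a worked example of this reduction, with a 3×3 kernel ($S_K = 3$) and $N = 64$ output channels the ratio is $\frac{1}{64} + \frac{1}{9} \approx 0.13$, so the depthwise separable layer needs roughly eight times fewer multiply-accumulate operations than a standard 3×3 convolution with the same input and output dimensions.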

As noted above, depthwise convolution only filters the input channels; the pointwise 1×1 convolution then combines them to create new features. The resulting layer configuration of Smart MobiNet is shown in Table 5.

Table 5:Smart MobiNet architecture

Table 6:Performance indicators of CNN

Table 7: Reported results of Smart MobiNet

Smart MobiNet incorporates multiple convolutional layers for fine-grained detail capture at different image resolutions, whereas traditional architectures do not emphasize multi-scale feature extraction to the same extent. The proposed architecture integrates attention modules to focus on vital image regions, aiding in distinguishing normal and abnormal tissues. Moreover, Smart MobiNet uses depthwise separable convolutions and other optimizations to reduce parameters and computational load, whereas ordinary architectures lack these optimization measures.

3.6 Performance Metrics

For performance evaluation of the work at hand, the following metrics have been used:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Sensitivity (Recall)} = \frac{TP}{TP + FN}, \qquad F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

In the above equations, TP represents True Positive predictions, TN represents True Negative predictions, FP represents False Positive predictions, and FN represents False Negative predictions.
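A minimal sketch of how these metrics can be computed from model predictions, using scikit-learn purely for illustration (the label values are placeholders, and the averaging scheme used by the authors for the multi-class setting is not stated, so a weighted average is assumed here):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# y_true: ground-truth class labels, y_pred: model predictions (illustrative values)
y_true = [0, 1, 1, 2, 0, 2, 1, 0]
y_pred = [0, 1, 2, 2, 0, 2, 1, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted"))
print("Recall   :", recall_score(y_true, y_pred, average="weighted"))
print("F1 score :", f1_score(y_true, y_pred, average="weighted"))
```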

4 Results and Discussion

The results produced in this research are presented in this section. Python was used for the experiments. Skin cancer detection experiments were performed using CNN, MobileNet, and the proposed Smart MobiNet.

The following analysis and graphical explanations highlight the performance metrics used to compare the proposed and existing techniques, including accuracy, recall, precision, and F1 score. Tables 6 and 7 present the outcomes in terms of the different performance indicators.

The CNN demonstrated a classification accuracy of 0.86. It further achieved a precision of 0.82, a sensitivity of 0.83, and an F-measure of 0.82, illustrating its efficacy in detecting true positives while maintaining a reasonable balance between precision and sensitivity.

Table 7 displays the performance metrics of the proposed Smart MobiNet. The model demonstrated a high classification accuracy of 0.89, indicating a large proportion of correctly identified instances. The precision, sensitivity, and F-measure all equal 0.90, signifying a high rate of correctly detected positive instances and a good trade-off between precision and recall. Overall, Smart MobiNet exhibits robust performance in the identification of skin cancer.

The accuracy, precision, F1 score, and recall of the proposed and existing approaches are reported as percentages in Table 8. The Smart MobiNet approach outperforms the alternative models, including ResNet50, VGG16, MobileNet, and a traditional CNN, on all of these criteria, emphasizing its efficacy in the identification of skin cancer.

Table 8: Comparison of reported results of Smart MobiNet with existing state-of-the-art techniques

The proposed skin lesion model classifies three distinct lesion types: basal cell carcinoma, melanoma, and nevus. Fig. 6 shows the confusion matrix on the training data for the proposed skin cancer lesion classification model.

Fig. 7 shows the Area Under the Curve (AUC), which summarizes the ROC curve, and demonstrates the high accuracy achieved for BCC, melanoma, and nevus.

Figure 6:Confusion matrix

Figure 7:AUC for BCC,Melanoma,and Nevus

5 Conclusions

Skin tumors are among the most prevalent kinds of cancer, and melanoma is among the most dangerous kinds of skin tumors. If this kind of skin cancer is detected promptly, it can be completely treated; however, it cannot be treated once it becomes aggressive and spreads to other organs of the body. Therefore, early identification of melanoma can improve a person's chances of recovery and stop the disease from spreading. From the medical point of view, a diverse range of factors should be considered in the diagnosis and treatment of skin cancer. Still, the deep learning community is working hard to aid medical practitioners in correct and prompt diagnosis. In this work, a capable system with ample accuracy and speed has been developed for small-to-large-size medical images. Deep learning algorithms can assist dermatologists and medical professionals in enhancing current solutions and making quick, inexpensive diagnoses. The goal of this project was to develop the Smart MobiNet network, a CNN that can effectively diagnose melanoma. The proposed Smart MobiNet method was implemented on the ISIC 2019 skin cancer dataset, and the results showed that the proposed method achieves higher accuracy. One limitation of the Smart MobiNet model is its susceptibility to dataset bias. If the training dataset used to develop the model lacks diversity in terms of skin types, populations, or geographical regions, it may result in a biased model with limited generalizability. In such cases, the model's performance may not be dependable when applied to skin cancer detection in different populations or with varying skin types. To overcome this limitation, it is essential to ensure a more diverse and representative dataset during the model training phase to enhance its effectiveness and applicability across various real-world scenarios.

Acknowledgement:We thank our families and colleagues who provided us with moral support.

Funding Statement:Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R387),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.

Author Contributions:The contributions of the authors are as follows: conceptualization,M.S.;methodology,F.U.and M.A.;software,F.U.and S.A.,D.S.;validation,F.A.and M.S.;draft preparation,M.S.,F.U.,G.A.,S.A.,D.S.;review and editing,A.I.and S.A.;visualization,F.U.;supervision,A.I.,D.S.;funding acquisition,G.A.All authors have read and agreed to the published version of the manuscript.

Availability of Data and Materials:Datasets analyzed during the current study are available on the ISIC[27]website.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
