
Application of the Deep Convolutional Neural Network for the Classification of Auto Immune Diseases

Computers, Materials & Continua, 2023, Issue 10

Fayaz Muhammad, Jahangir Khan, Asad Ullah, Fasee Ullah, Razaullah Khan, Inayat Khan, Mohammed ElAffendi and Gauhar Ali*

1 Department of Computer Science and Information Technology, Sarhad University of Science and Information Technology, Peshawar, 25000, Pakistan

2 Department of Computer Science, University of Engineering and Technology, Mardan, 23200, Pakistan

3 EIAS Data Science and Blockchain Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia

ABSTRACT Indirect Immune Fluorescence (IIF) has gained much attention recently due to its importance in the medical sciences. The primary purpose of this work is to present a step-by-step methodology for detecting autoimmune diseases. The use of IIF for detecting autoimmune diseases is widespread across different medical areas, and nearly 80 different types of autoimmune diseases exist in various parts of the body. IIF has been used for image classification both manually and with a Computer-Aided Detection (CAD) system. Data scientists have conducted various research works using automatic CAD systems, but with low accuracy. Diseases in the human body can be detected with the help of Transfer Learning (TL), an advanced Convolutional Neural Network (CNN) approach. The baseline paper applied its classification to the MIVIA dataset of Human Epithelial type II (HEp-2) cells using the Subclass Discriminant Analysis (SDA) technique to detect autoimmune diseases. That technique yielded an accuracy of up to 90.03%, which is not reliable enough for detecting autoimmune disease in the mitotic cells of the body. In the current research, the work has been performed on the MIVIA dataset of HEp-2 cells using four well-known TL models. Data augmentation and normalization have been applied to the dataset to overcome the problem of overfitting and to improve the performance of the TL models. These models are Inception V3, DenseNet-121, VGG-16, and MobileNet, and their performance is evaluated using the parameters of the confusion matrix (accuracy, precision, recall, and F1 measures). The results show that the accuracy of VGG-16 is 78.00%, Inception V3 is 92.00%, DenseNet-121 is 95.00%, and MobileNet is 88.00%. Therefore, DenseNet-121 shows the highest performance and is suitable for the analysis of autoimmune diseases. The overall performance highlights that TL is a suitable and enhanced technique compared to its counterparts. The proposed technique can thus be used to detect autoimmune diseases with a minimal margin of error and flaws.

KEYWORDS Indirect immune fluorescence; computer-aided diagnosis; transfer learning; confusion matrix

1 Introduction

The Anti-Nuclear Antibody (ANA) test is used to identify specific types of antibodies in humans. There is a wide range of antibodies that cause harm to normal body cells [1]. When the human defence system is compromised, it produces a specific type of antibody known as an autoantibody. These antibodies cause skin, muscle, and joint damage and various autoimmune diseases such as scleroderma and arthritis [2,3]. The ANA test is performed when symptoms of autoimmune disease appear, such as fever, headache, weakness, nausea, hair loss, and so on, and it is carried out using Indirect Immune Fluorescence (IIF) detection technology. The ANA test is the most reliable and well-suited test for detecting autoimmune diseases [4]. The IIF test not only detects diseases in the human body but also reveals vital information about the presence or absence of antinuclear antibodies in human cells. The IIF test is performed in three steps. In the first step, fluorescence light passes through the body's mitotic cells. In the second step, the light intensity is classified into positive, negative, and intermediate cells. Finally, the positive and intermediate cells are further classified into six different types of staining patterns [5].

IIF can be performed manually or automatically, but the manual method is less efficient and the automatic method provides better results. In the automatic method, the IIF test is performed with the assistance of transfer learning or machine learning. A Convolutional Neural Network (CNN) extracts features automatically by tuning the convolutional layers. It involves a number of max-pooling and flattening layers, providing full access to the hidden layers [6]. Transfer learning is a subdivision of deep learning in which models already trained on one problem are reused to train a new network: one or more trained layers of one model are used in another model. It is generally a supervised learning setting in which the inputs are similar but the outputs differ [7]. Transfer learning results in fewer generalization errors and less processing time [8]. A variety of transfer learning models, including VGG-16, Inception V3, DenseNet-121, and MobileNet, are used to train datasets of medical images. These pretrained models are used on datasets of medical images to detect autoimmune diseases in the human body.

Some datasets of medical images are very small and contain a minimal number of images. To solve this problem, data augmentation and fine-tuning are the best ways to artificially increase the number of medical images through warping and resampling of the data. Augmentation improves the features of the images while keeping the image labels in their original position [9]. Colour transformation, neural transfer, and erasing are examples of data augmentation parameters. Additional augmented data are generated artificially from the source images and added to the training dataset. To address the overfitting issue, the dataset of medical images was first pretrained using transfer learning, followed by data augmentation and fine-tuning. Finally, the results are evaluated using the confusion matrix parameters. Model performance must be assessed using specific parameters such as precision, recall, F1 measures, and accuracy, and every model shows its own distinct values for these measures. Regarding performance, the VGG-16 model shows 78.000% accuracy, Inception V3 92.000%, DenseNet-121 95.000%, and MobileNet 88.000%. DenseNet-121 has the highest accuracy of all due to model optimization; with this feature, extra layers are removed during training to reduce overfitting on the images. These results are used to determine which models are most effective for analyzing autoimmune diseases. The comparison in Table 1 shows that the current research is more reliable than the existing work. The baseline paper followed a Computer-Aided Diagnostic (CAD) approach for autoimmune detection. Its authors proposed an automated solution for detecting autoimmune diseases in Human Epithelial type II (HEp-2) cells, with image classification based on Subclass Discriminant Analysis (SDA), and achieved 90% accuracy in their results. The performance has been improved in the current research, which shows 95% accuracy and surpasses the existing work.

Table 1: Accuracy comparison between the proposed and base work

The remainder of the paper is constructed as follows: Section 2 depicts the related work, while Section 3 explains the proposed work of using CNN models for the classification of immune diseases. Section 4 contains the results of the experimental analysis, while Section 5 contains the conclusion.

2 Related Work

Bar et al. [10] described their work using a nearest-neighbour classifier, which was used to modify partial components of images. Foggia et al. investigated several different types of hand-crafted features that did not work automatically; those features were tested for their ability to capture the elements required for cell identification. Catalano et al. presented their work on the Grey-Level Co-Occurrence Matrix (GLCM), which had been used for image classification. William et al. created codebooks to study various feature descriptors. Shin et al. [11] described Local Binary Patterns (LBP) for feature analysis and as input data for classifiers. Kather et al. [12] described textural and statistical features for image detection and classification; the essential feature in their analysis was the grey-zone matrix, a type of statistical feature used for image classification. Zuo et al. [13] investigated Light Emitting Diode (LED) coding for feature analysis and used it as input data for a Support Vector Machine (SVM) classifier. Several handcrafted features were mentioned in their quasi-exhaustive literature review. Poostchi et al. [14] investigated several different features, such as morphological, global, and rotation-invariant features, for feature extraction, as well as several types of linear binary patterns for cancer identification in different hybrid types of cells. Chelghoum et al. [15] described several techniques in which various multi-algorithms were formed to divide object classes, minimize intra-class variance, and maximize inter-class variance based on features; this was done for automated detection of lesion areas for diseases such as malaria. Most of the presented classification algorithms follow the same steps: image preprocessing, segmentation, feature extraction, and classification [16]. Farooqi et al. [17] discussed big data in the healthcare field and the hurdles, such as ICT and security challenges, that must be overcome for big data to be adopted successfully. Deniz et al. [18] described image mining and evaluated the performance of transfer learning models; the images were investigated using image-based software artefacts, and the paper did not discuss big data [19,20]. COVID-19 detection using X-ray analysis of the chest was discussed by Swati et al. [21]. Khan et al. [22] explained the technology of Wireless Sensor Networks (WSN), which collect information from sources and deliver it to its destination wirelessly. Transfer learning was used in their study: four different models were trained on a dataset of chest X-ray images, and their performance measures were examined.

Badawi et al. [23,24] explained rice diseases on plants using advanced deep learning techniques; well-known transfer learning models were compared without using fine-tuning or data augmentation, and the results of their work showed that the efficiency of the models was nearly 92.3%. The conventional machine learning-based method has improved over the past few decades, but its accuracy still needs improvement. Al-Kahtani et al. [25] studied the Internet of Things (IoT) in health care for ideally collecting the data needed for analysis, especially during COVID-19.

The existing research on deep learning is based on artificial neural networks with several hidden layers that function as a classifier; these are used to introduce advanced image classification technology [26]. Generally, various mathematical phenomena work behind all of these classifications, providing them with a proper framework. Finally, all previous related work on applying deep learning to the detection of autoimmune diseases was discussed, and the limitations of those works were also mentioned. These limitations serve as a starting point for future research [27]. The technology of indirect immune fluorescence detection is critical for detecting and analysing autoimmune diseases [28]. Scientists describe a three-step analysis for it: in the first step, fluorescence light passes through the body's cells; in the second step, its intensity is measured; and finally the cells are classified as negative, positive, or intermediate. The positive and intermediate cells are further classified into six types of staining patterns. These processes are managed with a Computer-Aided Diagnostic (CAD) system [29].

i) Indirect Immune Fluorescence (IIF): IIF is the most commonly used method for detecting and testing anti-nuclear antibodies. It is best suited to producing high-quality images for ANA testing. IIF is an image-based test used to analyze autoimmune diseases in the human body, such as skin and joint diseases [30,31]. Nearly 80 different types of autoimmune diseases exist and produce a severe impact on the human body. IIF uses HEp-2 (Human Epithelial type II) cells as a substrate (shown in Fig. 1) and is used to detect autoantibodies in humans. These antibodies cause autoimmune diseases by damaging normal body cells [32]. IIF is used as a reference for analyzing diseases of the immune system in normal body cells. The IIF test detects the presence or absence of antibodies and provides a wealth of additional information [33]. The European Autoimmunity Standardisation Initiative (EASI) has also recognized these antibodies and has reported several clinically relevant IIF data [34].

ii) IIF Analysis Procedure: The IIF test is performed in three stages. In the first stage, light passes through the body's normal cells. Secondly, the light intensity is classified into positive, negative, and intermediate signals. Finally, the positive and intermediate signals are further classified into six staining patterns: cytoplasmic, centromere, nucleolar, fine speckled, homogeneous, and coarse speckled, as shown in Fig. 1.

IIF can be performed manually or automatically; however, manual IIF takes significant time and effort. IIF is performed automatically by using a CAD system. In the medical field, IIF is primarily performed using CAD to reduce the time taken and the flaws in the results. Fig. 2 shows the whole process of IIF analysis.

Figure 1: Six staining patterns of HEp-2 type II cells

Figure 2: IIF process of analysis

3 Proposed Method

The research has been carried out on the MIVIA dataset using the Python language for the experimental analysis, with Google Colab used as the execution environment for Python. The MIVIA dataset has been used to train the most popular transfer learning models, including Inception V3, VGG-16, DenseNet-121, and MobileNet. Data augmentation and fine-tuning are then used to improve model performance and solve the problem of overfitting in medical image data. Fig. 3 depicts the framework of the proposed work.

3.1 Data Set

The MIVIA dataset of HEp-2 type II cells contains 1457 images. Table 2 shows how these images are classified into six different classes. The MIVIA dataset is easily accessible via online resources.

Figure 3: Framework of the proposed solution

Table 2: MIVIA dataset

3.2 Tools and Languages

Python is the programming language used in this research, and it runs on a web-based application called Colab. Colab and Python provide a single interface and require little human effort to solve a complex problem. Python has its own set of libraries, each performing its own function: for example, NumPy is used for numerical arrays and image data, Pandas for data frames, scikit-learn for model selection, and Keras for deep learning modelling. All of these libraries are used to carry out the research work.
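As a minimal sketch of how this toolchain is typically assembled in a Colab notebook (the exact versions and module choices used in the experiments are assumptions), the imports look like this:

import numpy as np                                     # numerical arrays holding the image data
import pandas as pd                                     # tabular handling of labels and results
from sklearn.model_selection import train_test_split   # data splitting utilities
from tensorflow import keras                            # deep learning models and layers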

3.2.1 Transform to RGB (Red Green Blue)

This step converts the image data between greyscale and the RGB colour representation. It aids data training with minimal time investment and requires less memory for model execution. RGB encodes each pixel with red, green, and blue channels, whereas a greyscale image carries a single intensity channel.
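As an illustrative sketch, a single-channel HEp-2 image can be replicated into the three channels that pretrained RGB models expect; the file name and the conversion direction are assumptions, not details taken from the paper.

from PIL import Image
import numpy as np

img = Image.open("hep2_cell_001.png").convert("L")   # hypothetical greyscale HEp-2 image
rgb = np.stack([np.asarray(img)] * 3, axis=-1)       # replicate intensity into R, G, B -> shape (H, W, 3)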

3.2.2 Data Augmentation and Normalization

Standard features are used to minimize overfitting in the data, and the class labels are converted into digits of 0 and 1; this method is commonly known as label encoding. In the following step, data augmentation addresses the issue of the limited dataset size during model training. Data augmentation is a deep learning-based method for creating new data from existing data.
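A minimal sketch of augmentation plus pixel normalization with Keras is shown below; the specific transforms, directory layout, and image size are illustrative assumptions rather than the exact settings used in this work.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,      # normalize pixel values to the [0, 1] range
    rotation_range=20,      # small random rotations
    horizontal_flip=True,   # random horizontal flips
    zoom_range=0.1,         # small random zooms
)
train_gen = datagen.flow_from_directory(
    "mivia_train/",          # hypothetical directory of class-labelled training images
    target_size=(224, 224),
    class_mode="categorical",
)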

3.2.3 Convolutional Neural Network (CNN) Model

CNN works on machine learning principles, taking input images and assigning importance to different image components in order to differentiate them. CNN requires very little preprocessing compared with other methods. In the traditional approach, image filters were created by hand with great effort; Fig. 4 depicts the overall structure of a CNN [35].

Figure 4: Convolutional neural network (CNN)
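The following is a minimal CNN sketch in the spirit of Fig. 4, with stacked convolution and max-pooling layers followed by flattening and dense layers; the layer sizes are illustrative and are not those of the trained models.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),   # feature extraction
    layers.MaxPooling2D(),                     # spatial down-sampling
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),                          # flattening layer
    layers.Dense(128, activation="relu"),      # hidden (dense) layer
    layers.Dense(6, activation="softmax"),     # six staining-pattern classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])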

Four different transfer learning models have been used to train the dataset in the proposed research work:

i) MobileNet

ii) Inception V3

iii) DenseNet-121

iv) VGG-16

All of these are advanced transfer learning models used for image processing. They are based on deep learning algorithms and are different CNN architectures.
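A sketch of loading the four pretrained backbones with Keras is given below; the ImageNet weights and the 224x224 input size are assumptions about the setup, not values reported in the paper.

from tensorflow.keras.applications import VGG16, InceptionV3, DenseNet121, MobileNet

backbones = {
    "VGG-16":       VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "Inception V3": InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "DenseNet-121": DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "MobileNet":    MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
}
# include_top=False removes the original ImageNet classifier so a new
# six-class head can be attached for the HEp-2 staining patterns.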

3.2.4 Performance Measure

It is necessary to assess the efficiency of all transfer learning models and draw conclusions using a confusion matrix. These matrices indicate the accuracy of the proposed research. The efficiency of the models is evaluated using all four parameters of the confusion matrix, as shown in Fig. 5.

Figure 5: Performance measures
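As a small sketch of how these measures can be computed with scikit-learn (the label vectors below are illustrative, not results from the experiments):

from sklearn.metrics import confusion_matrix, classification_report

y_true = [0, 1, 2, 2, 3, 4, 5, 5]   # illustrative true classes (six staining patterns, 0-5)
y_pred = [0, 1, 2, 3, 3, 4, 5, 1]   # illustrative predicted classes from a trained model
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=3))   # precision, recall, F1 and accuracy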

3.2.5 Classification and Comparison

Finally, the results of all these models are compared using the confusion matrix parameters (precision, accuracy, F1 measures, and recall). This comparative analysis is used to determine which model is best for analysing diseases of the human defence system.

4 Results and Analysis

All results and the practical implementation of the transfer learning models for detecting and analysing autoimmune disease are explained in this section. For prediction and classification, a deep learning approach and CNNs are used. Finally, the results of all transfer learning models are compared in terms of accuracy, precision, recall, and F1 measures. The following steps are used to perform the research work.

4.1 Data Normalization

Feature scaling is used to normalize the independent features of the data. Scaling of image features is accomplished in two ways: standardization and normalization.

• Standardization: As expressed in Eq. (1), the mean of the complete set of observations is subtracted from each observation, and the columns are then divided by their standard deviation [36].
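Assuming Eq. (1) is the usual z-score standardization, it can be written as z_ij = (x_ij − μ_j) / σ_j, where x_ij is the i-th observation of feature column j, and μ_j and σ_j are the mean and standard deviation of that column.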

4.2 Data Augmentation and Data Splitting

Augmentation and splitting expand the already prepared training data into an enlarged dataset. There are 1457 images in the data, and augmentation generates 6875 training images. Table 3 describes the data splitting and data augmentation.

Table 3: Data augmentation and data splitting

Table 4 shows the values for training at 60%, validation at 20%, testing at 20%, and the complete dataset at 100%. The same method is used for data augmentation, yielding 6875 training images.

Table 4: Data augmentation and data splitting
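A sketch of producing the 60/20/20 split with scikit-learn is shown below; the placeholder arrays, image size, and random seed are assumptions used only to make the example self-contained.

import numpy as np
from sklearn.model_selection import train_test_split

images = np.zeros((1457, 224, 224, 3), dtype=np.float32)   # stand-in for the 1457 MIVIA images
labels = np.random.randint(0, 6, size=1457)                # stand-in for the six class labels

# 60% training, then the remaining 40% split evenly into validation and testing.
x_train, x_rest, y_train, y_rest = train_test_split(
    images, labels, test_size=0.40, stratify=labels, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)
# Augmentation is then applied to the training portion only, enlarging it to 6875 images.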

4.3 Models of Transfer Learning

The four transfer learning models trained in this research are described below.

4.3.1 VGG-16

The VGG-16 model is well known in transfer learning and is a CNN trained on the ImageNet dataset, which contains about 14 million images in over 20,000 categories. VGG-16 achieved very high accuracy in image classification, but only after being trained on millions of images. The VGG-16 confusion matrix is depicted in Fig. 6.

Figure 6: VGG-16 model confusion matrix

The confusion matrix of VGG-16 shows the actual and predicted labels. Fig. 7 below represents the VGG-16 model's training accuracy and validation accuracy.

Figure 7: VGG-16 model epochs (x-axis) and accuracy (y-axis)

Fig. 8 below represents the training loss as well as the validation loss.

In the results, the validation loss is very small compared to the training loss. Table 5 summarizes the precision, recall, F1 measure, and accuracy results. In terms of performance, the results show that the model's accuracy is 78.000%, F1 measure is 79.500%, recall is 81.333%, and the obtained precision score is 85.000%.

Figure 8: VGG-16 model epochs (x-axis) and loss (y-axis)

Table 5: VGG-16 performance measure

4.3.2 Inception V3 Model

The Inception V3 model is used with its final fully connected layer replaced. All pretrained layers are kept untrainable (frozen), and only the newly added lower layers are trained with the help of transfer learning. These lower layers are trained to improve the model's efficiency and obtain the best possible results. Fig. 9 represents the Inception V3 confusion matrix.

Figure 9: Inception V3 model confusion matrix
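The freezing strategy described above can be sketched as follows; the pooling layer, head size, and optimizer are assumptions, not the exact configuration reported in the paper.

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                     # keep the pretrained layers untrainable

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)     # newly added trainable layer
outputs = layers.Dense(6, activation="softmax")(x)
model = models.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])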

The results in Fig. 10 below show the Inception V3 model's training accuracy and validation accuracy.

Figure 10: Inception V3 model epochs (x-axis) and accuracy (y-axis)

Fig. 11 below shows the difference between training and validation loss.

Figure 11: Inception V3 model epochs (x-axis) and loss (y-axis)

In the results, the validation loss is very small compared to the training results. Table 6 below presents the results in terms of precision, recall, F1 measures, and accuracy.

Table 6: Inception V3 performance measure

In terms of performance, the results show that the model's accuracy is 92.000%, F1 measure is 91.666%, recall is 91.833%, and the obtained precision score is 92.166%.

4.3.3 DenseNet-121 Model

DenseNet-121 was explicitly created to mitigate vanishing gradients, since very deep neural networks suffer a decrease in accuracy. Model optimization is used in DenseNet-121 rather than in the other models, and it is used to assess the impact on model construction and performance [37]. Fig. 12 depicts the DenseNet-121 confusion matrix.

Figure 12: DenseNet-121 model confusion matrix

Fig. 13 below shows the proposed model's training and validation accuracy, with epochs on the x-axis and accuracy on the y-axis. Fig. 14 depicts the difference between training and validation loss.

In the results, the validation loss is very small compared to the training results. Table 7 below presents the results in terms of precision, recall, F1 measures, and accuracy.

In terms of performance, the results show that the model's accuracy is 95.000%, F1 measure is 94.500%, recall is 94.833%, and the obtained precision score is 94.500%.

Figure 14: DenseNet-121 model epochs (x-axis) and loss (y-axis)

4.3.4 MobileNet Model

MobileNet is one of the first computer vision models designed for mobile devices. The MobileNet model is used to improve accuracy and minimize flaws, and this type of model combines efficiency with strong deep learning classification performance [38]. Fig. 15 represents the MobileNet confusion matrix.

Figure 15: MobileNet model confusion matrix

Fig. 16 below shows the MobileNet model's training and validation accuracy.

Figure 16: MobileNet model epochs (x-axis) and accuracy (y-axis)

The difference between training and validation loss is shown in Fig. 17 below.

Figure 17: MobileNet model epochs (x-axis) and loss (y-axis)

In the results, the validation loss is very small compared to the training results. Table 8 below presents the results in terms of precision, recall, F1 measures, and accuracy.

Table 8: MobileNet performance measure

In terms of performance, the results show that the model's accuracy is 88.000%, F1 measure is 87.833%, recall is 87.333%, and the obtained precision score is 90.500%.

4.3.5 Models Comparison

Table 9 below compares the accuracy of all four models.

Table 9: Model comparison

VGG-16 has an accuracy of 78.000%, Inception V3 has an accuracy of 92.000%, MobileNet has an accuracy of 88.000%, and DenseNet-121 reaches 95.000% for detecting and analyzing autoimmune diseases. Due to the model optimization feature, DenseNet-121 has the highest accuracy of all the models: in DenseNet-121 model optimization, extra layers are removed during training to avoid overfitting and complexity. The DenseNet-121 model has therefore achieved the highest accuracy among all the proposed models.
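The reported accuracies can be summarized in a few lines (the values are taken from Table 9; the use of pandas here is only for presentation):

import pandas as pd

comparison = pd.DataFrame(
    {"Model": ["VGG-16", "Inception V3", "MobileNet", "DenseNet-121"],
     "Accuracy (%)": [78.0, 92.0, 88.0, 95.0]}
).sort_values("Accuracy (%)", ascending=False)
print(comparison.to_string(index=False))   # DenseNet-121 ranks first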

5 Conclusion

The proposed research explains a step-by-step methodology for detecting autoimmune diseases using an advanced Convolutional Neural Network (CNN) based deep learning approach instead of a manual one. It is a highly efficient technique for analyzing medical images. The MIVIA dataset of HEp-2 type II cells has been used as a reference for detecting autoimmune diseases. The medical images were loaded using specialized libraries to read and write the data. The data augmentation technique is used for resizing and dividing the data into dependent and independent classes. After data augmentation, the images are trained on four well-known transfer learning models: VGG-16, Inception V3, DenseNet-121, and MobileNet. Transfer learning is a subdivision of deep learning in which models already trained on one problem are reused to train new networks. The performance of all four models has been measured with the parameters of the confusion matrix in terms of precision, accuracy, recall, and F1 measures. MobileNet achieved 88.000% accuracy, DenseNet-121 achieved 95.000%, Inception V3 achieved 92.000%, and VGG-16 achieved 78.000% accuracy. Among all of these models, DenseNet-121 has the highest accuracy for detecting and analyzing autoimmune diseases due to its model optimization feature. Transfer learning is therefore a highly effective deep learning technique for detecting autoimmune diseases with the highest possible accuracy.

5.1 Contribution

The major contribution of the proposed work is the detection of autoimmune disease using a Convolutional Neural Network (CNN) based transfer learning approach. Using transfer learning, an accuracy of up to 95% has been achieved, which helps to detect autoimmune diseases in the human body efficiently.

5.2 Future Work

In the future, more practical work is needed, along with new deep learning algorithms, in order to obtain reliable results with minimum time and effort.

5.3 Limitation

Tuning and optimization of the models is still required to achieve more accurate results for the detection of autoimmune diseases. Secondly, it is worth using the dropout method instead of augmentation and normalization to resolve the problem of overfitting; dropout randomly drops hidden units of the network during training.
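A minimal sketch of the dropout alternative mentioned above is given below; the layer sizes and dropout rate are assumptions for illustration.

from tensorflow.keras import layers, models

head = models.Sequential([
    layers.Input(shape=(7, 7, 1024)),        # example feature-map shape from a frozen backbone
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                      # randomly zero 50% of hidden units each training step
    layers.Dense(6, activation="softmax"),
])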

Acknowledgement: The authors would like to acknowledge Prince Sultan University and the EIAS Lab for their valuable support. Further, the authors would like to acknowledge Prince Sultan University for paying the Article Processing Charges (APC) of this publication.

Funding Statement: This work was supported by the EIAS Data Science and Blockchain Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia.

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: F. Muhammad, J. Khan, F. Ullah; data collection: A. Ullah; analysis and interpretation of results: F. Muhammad, F. Ullah, G. Ali, I. Khan; draft manuscript preparation: F. Muhammad, R. Khan, G. Ali; validation: M. E. Affendi, I. Khan; supervision: J. Khan, A. Ullah. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data are available on Google Drive at https://drive.google.com/drive/folders/1Vr4w3jQ2diY3_eR59kBtVbkyxNazyI_8?usp=sharing.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
