
Iterative Semi-Supervised Learning Using Softmax Probability

Computers, Materials & Continua, 2022, Issue 9

Heewon Chung and Jinseok Lee

Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin-si, Gyeonggi-do, 17104, Korea

Abstract: For classification problems in practice, one of the most challenging issues is obtaining enough labeled data for training. Moreover, even when such labeled data has been sufficiently accumulated, most datasets exhibit a long-tailed distribution with heavy class imbalance, which results in a model biased towards the majority class. To alleviate such class imbalance, semi-supervised learning methods using additional unlabeled data have been considered. However, their accuracy is, as a matter of course, much lower than that of supervised learning. In this study, under the assumption that additional unlabeled data is available, we propose iterative semi-supervised learning algorithms that iteratively correct the labeling of the extra unlabeled data based on softmax probabilities. The results show that the proposed algorithms provide accuracy as high as that of supervised learning. To validate the proposed algorithms, we tested two scenarios: one with a balanced unlabeled dataset and one with an imbalanced unlabeled dataset. Under both scenarios, our proposed semi-supervised learning algorithms provided higher accuracy than previous state-of-the-art methods. Code is available at https://github.com/HeewonChung92/iterative-semi-learning.

Keywords: Semi-supervised learning; class imbalance; iterative learning; unlabeled data

1 Introduction

Image classification is the problem of categorizing images into one of multiple classes. It has been considered one of the most important tasks in computer vision, since it is the basis for other tasks such as image detection, localization and segmentation [1-6]. Since AlexNet [7] was introduced, deep neural networks (DNNs) have evolved remarkably via VGG-16 [8], GoogLeNet [9], ResNet [10] and Inception-V3 [11], especially for image classification tasks. DNNs have been widely used for a variety of tasks and have set new state-of-the-art results, sometimes even surpassing human performance on image classification.

However, when dealing with classification problems in practice, we face many practical issues, and one of the most challenging is acquiring enough labeled data for training. Acquiring labeled data often takes a great deal of time and requires professional, delicate work. A recent study reported that physicians spent an average of 16 minutes and 14 seconds per encounter using electronic health records (EHRs), with chart review (33%), documentation (24%), and ordering (17%) accounting for most of the time [12]. The manual labeling of medical images also requires intensive labor [13,14]. In addition, even when enough labeled data has been acquired, there is another challenging issue: the imbalanced dataset. For instance, when classifying data for a specific disease, there is much more data from healthy subjects than from patients.

To resolve these issues, semi-supervised learning methods using additional unlabeled data have been widely considered. Semi-supervised learning is a machine learning approach that combines a small amount of labeled data with a large amount of unlabeled data during training [15-17]. In this study, we propose novel semi-supervised learning algorithms that provide performance at the level of supervised learning by focusing on automatically and accurately labeling additional unlabeled data. More specifically, to accurately label the unlabeled data, we use the softmax probability as a confidence index and decide whether to assign a pseudo-label to each unlabeled sample. The data with labels are used continuously for training. Finally, the process is repeated until pseudo-labels have been assigned to all unlabeled data with high confidence. Our proposed approach is innovative because it effectively and accurately labels the unlabeled data using a simple mathematical function, the softmax. For classification problems, the softmax is an essential part of a model, usually used in the last output layer. Thus, we expect to be able to effectively label the unlabeled data without additional computational complexity.
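The confidence test at the heart of this idea takes only a few lines. Below is a minimal PyTorch sketch, where `model` (a classifier returning raw logits) and the helper name are our illustrative choices; the 0.99 threshold follows the description in Section 4:

```python
import torch
import torch.nn.functional as F

THRESHOLD = 0.99  # softmax confidence required to accept a pseudo-label

@torch.no_grad()
def pseudo_label(model, x):
    """Return (labels, mask): argmax labels and a mask of confident samples."""
    probs = F.softmax(model(x), dim=1)   # softmax over the class logits
    conf, labels = probs.max(dim=1)      # per-sample maximum probability
    return labels, conf >= THRESHOLD     # keep only high-confidence labels
```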

This paper is organized as follows. Section 2 lists related works. Section 3 provides the specific motivation for dealing with unlabeled data. In Section 4, we introduce our proposed iterative semi-supervised learning using softmax probabilities. Section 5 describes the datasets and experimental setup, and Section 6 verifies the performance of our algorithms through comparative experiments. The conclusion and future work are described in Section 7.

2 Related Works

The difficulty of acquiring labeled data and the imbalanced-data issue have been investigated by many research groups [18-21]. One of the popular approaches to handling the imbalanced-data issue is data-level techniques, including over-sampling and under-sampling [22-24]. Under-sampling balances an imbalanced dataset by keeping all of the data in the minority group and decreasing the size of the majority group. This technique is mainly used when the amount of data in both the minority and majority groups is large. Over-sampling balances an imbalanced dataset by increasing the size of the minority group, mainly by duplicating randomly selected data from the minority group. A more advanced technique is the synthetic minority oversampling technique (SMOTE), which generates a new data point by selecting a point on the line connecting a randomly chosen minority-class sample and one of its k nearest neighbors [25]. Let us denote the synthetic data point by x_new, which can be expressed as

x_new = x + λ · (x_near − x),

where x is a random sample belonging to the minority group and x_near is one of the k nearest neighbors of x. The parameter λ is an independent and identically distributed number uniformly distributed on [0,1]. SMOTE has the advantage of being able to increase the size of the minority group without duplicating data. Similar to SMOTE, the adaptive synthetic sampling (ADASYN) technique generates new data points based on the k nearest neighbors [26]. It generates more data for samples that are harder to learn than for those that are easier to learn, by considering the data distribution. Thus, it can adaptively shift the decision boundary to focus on the hard-to-learn data. Since data-level over-sampling techniques balance out the number of samples in each group, models trained with them have worked well in a variety of applications. However, such over-sampling techniques are only available when the data is represented as a vector.
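To make the interpolation step concrete, here is a minimal NumPy sketch of SMOTE-style sampling as described above; the function name and the default k are ours, not the reference implementation:

```python
import numpy as np

def smote_sample(X_min, k=5, rng=None):
    """Generate one synthetic minority sample: x_new = x + lambda * (x_near - x)."""
    rng = rng or np.random.default_rng()
    i = rng.integers(len(X_min))                   # random minority sample x
    x = X_min[i]
    d = np.linalg.norm(X_min - x, axis=1)          # distances to all minority samples
    d[i] = np.inf                                  # exclude x itself
    x_near = X_min[rng.choice(np.argsort(d)[:k])]  # one of the k nearest neighbors
    lam = rng.uniform()                            # lambda ~ U[0, 1]
    return x + lam * (x_near - x)
```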

Another approach to handling the imbalanced-data issue is algorithmic methods. In the algorithmic approach, the learning process is adjusted in a way that emphasizes the importance of the minority-group data. Most commonly, the cost or loss function is modified to weigh the minority-group data more heavily, or the majority-group data less heavily [18,27,28]. Such sample weighting in the loss function weighs the loss computed for different samples differently depending on whether they belong to the majority or the minority group. For the weight factors, the inverse of the number of samples or the inverse of the square root of the number of samples can be considered. Recently, Cui et al. [29] introduced the effective number of samples E_nc, which can be defined as

E_nc = (1 − β^nc) / (1 − β),

where nc is the number of samples in class c and β is a hyperparameter on [0,1]. By using the effective number of samples, the weight factor 1/E_nc weighs the loss from the data according to whether it comes from the majority or the minority group. This algorithmic approach has also worked well in a variety of applications. Nevertheless, the imbalanced-dataset issue is not completely solved. The fundamental solution is to increase the number of diverse samples by acquiring more new data.
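As a sketch of how this weighting is typically used, the weights 1/E_nc can be computed from the per-class counts and passed to a weighted loss; the normalization so that the weights sum to the number of classes is a common convention, not prescribed by the text:

```python
import torch
import torch.nn as nn

def class_balanced_weights(counts, beta=0.999):
    """Per-class weights w_c = 1 / E_nc, where E_nc = (1 - beta^n_c) / (1 - beta)."""
    counts = torch.as_tensor(counts, dtype=torch.float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)  # E_nc per class
    weights = 1.0 / effective_num
    return weights * len(counts) / weights.sum()           # normalize to sum to C

# e.g., a 10-class problem with a long-tailed count vector
weights = class_balanced_weights([5000, 2500, 1250, 600, 300, 150, 80, 40, 20, 10])
criterion = nn.CrossEntropyLoss(weight=weights)            # weighted loss
```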

As mentioned above, the most challenging part of acquiring data is labeling new data. It not only takes a lot of time, but also requires professional and delicate work. Recently, Yang et al. [30] demonstrated that pseudo-labels on extra unlabeled data can improve classification performance, especially with an imbalanced dataset. The method is based on the fact that unlabeled data is relatively easy to obtain while labeled data is difficult to obtain. Based on the model trained with the original data, extra unlabeled data was subsequently labeled. It was then shown that the model trained with the additional unlabeled data provided better performance. However, the pseudo-labels can also be biased towards the majority of the data, so the improvement from using the extra unlabeled data is limited. In our work, we focus on how to label the unlabeled data more correctly, which eventually provides better performance.

3 Preliminaries and Motivation

Consider a simple binary classification problem on data P_XY given by a mixture of two Gaussians, where each sample has the label Y: +1 or −1. Consider that the distribution of X|Y is N(μ1, σ²) when Y = +1, and similarly N(μ2, σ²) when Y = −1, where μ1 > μ2. Given one sample x, if x > (μ1 + μ2)/2, then x can be classified as +1; otherwise −1. Accordingly, the classifier can be expressed as f(x) = sign(x − (μ1 + μ2)/2), where the term (μ1 + μ2)/2 needs to be learned based on the dataset X and the corresponding label set Y.

However, given imbalanced training data, the term (μ1 + μ2)/2 in the trained classifier will be shifted towards the mean value of the minority class. If the majority of the data has the label Y = +1, then the trained classifier can be derived as f(x) = sign(x − ((μ1 + μ2)/2 − α)), where α > 0. Fig. 1a illustrates an example of such a biased classifier, which focuses mainly on improving the classification performance of the majority class. Such a class-imbalance issue can be resolved by balancing the classes via a data-sampling approach such as over-sampling or under-sampling, as shown in Fig. 1b: in this example, the predicted decision boundary is closer to the actual boundary after using an under- or over-sampling method. Similarly, sample weighting methods also move the predicted decision boundary towards the actual boundary.
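This shift is easy to reproduce numerically. The sketch below fits a logistic regression to a 50:1 imbalanced mixture of two one-dimensional Gaussians (the means, variance and sample sizes are arbitrary illustrative choices) and reads off the learned boundary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
mu1, mu2, sigma = 2.0, -2.0, 2.0
x_pos = rng.normal(mu1, sigma, 5000)   # majority class, Y = +1
x_neg = rng.normal(mu2, sigma, 100)    # minority class, Y = -1 (ratio 50:1)
X = np.concatenate([x_pos, x_neg]).reshape(-1, 1)
y = np.concatenate([np.ones(5000), -np.ones(100)])

clf = LogisticRegression().fit(X, y)
boundary = -clf.intercept_[0] / clf.coef_[0, 0]  # x where P(Y=+1|x) = 0.5
print(boundary)  # clearly below (mu1 + mu2)/2 = 0, i.e., shifted toward mu2
```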

Fig. 1c illustrates another example of a biased classifier, which focuses on improving the performance of the majority class. However, in this example, the number of samples from the minority class is too small to generalize the data corresponding to the minority class. Since the minority-class data does not generalize to the actual distribution, no sampling approach can improve the performance, as shown in Fig. 1d: in this example, the predicted decision boundary is almost unchanged even after using an under- or over-sampling method. Similarly, sample weighting methods also have little effect on the predicted decision boundary.

To alleviate the class-imbalance issue, Yang et al. [30] recently demonstrated, theoretically and empirically, that pseudo-labels on extra unlabeled data can improve classification performance, especially with an imbalanced dataset. More specifically, a base classifier f_B was first trained on the original imbalanced training data. Subsequently, extra unlabeled data was labeled using f_B. Finally, by re-training f_B with the additional pseudo-labeled data, the classifier was shown to improve. However, the pseudo-labels can also be biased towards the majority of the data, which results in incorrect labeling, especially for the minority class. Thus, the improvement from using the extra unlabeled data is limited. In this study, we present algorithms that improve the labeling accuracy, which eventually improves the overall classification performance.

Figure 1: Examples of a biased classifier and the effects of data-level techniques; (a) an example of a biased classifier, (b) the effect of an under- or over-sampling method (the predicted decision boundary moves closer to the actual boundary), (c) another example of a biased classifier, (d) the effect of an under- or over-sampling method (little effect on the predicted decision boundary)

4 Iterative Semi-Supervised Learning Using Softmax Probability

4.1 Algorithm Description

In this study, we propose semi-supervised learning algorithms that iteratively correct the labeling of the extra unlabeled data. Algorithm 1 presents the pseudo-code of our proposed algorithm, named iterative semi-supervised learning based on softmax probability (ISSL-SP). Let us denote the original labeled data and the extra unlabeled data by Data_ori and Data_un, respectively. At the instance level, let us denote the i-th extra unlabeled sample and its label by Data_un^i and Label_un^i, and the i-th original labeled sample and its label by Data_ori^i and Label_ori^i, respectively. Before applying ISSL-SP, we first train a base classifier f_B using the original training data Data_ori. In the first stage, we consider the softmax probabilities corresponding to each class for Data_un^i, where i = 1, ..., n(Data_un) with n(Data_un) the number of unlabeled samples. For each Data_un^i, if the maximum of the softmax probabilities is equal to or greater than 0.99, we assign the corresponding class to Label_un^i. Here, the threshold value of 0.99 was found to be optimal throughout this study, and the trade-off between accuracy metrics and the threshold value is described in the Results. On the other hand, if the maximum of the softmax probabilities is less than 0.99, we assign the label Label_un^i as undefined. Every iteration, we update f_B using all available data for training: f_B becomes f_new. Finally, we collect the data whose labels remain undefined and repeat the entire process until all the data is labeled with a specific class. In this way, ISSL-SP improves the overall classification performance by assigning labels only with high softmax probability.

Algorithm 1 Iterative semi-supervised learning based on softmax probability (ISSL-SP). This algorithm is given a base classifier f_B which was trained with the original training data Data_ori. We consider that the data has the labels 1, 2, ...

Require:
1: Data_ori: original training data
2: Data_un: extra unlabeled data
3: f_B: base classifier providing softmax probabilities // f_B was trained with Data_ori
4: function ISSL-SP(f_B, Data_un, n(Data_un)) // n(Data_un): the number of samples in Data_un
5:   f_new = f_B
6:   while n(Data_un) > 0 do
7:     for i = 1 to n(Data_un) do
8:       // Data_un^i: i-th unlabeled sample
9:       probs = f_new(Data_un^i) // softmax probabilities for each class
10:      if max(probs) >= 0.99 then
11:        // 0.99 or higher is considered a correct label for Data_un^i
12:        Label_un^i = argmax(probs)
13:      else
14:        Label_un^i = -1 // undefined
15:      end if
16:    end for
17:    Update f_new based on all available data, including Data_ori and the Data_un with Label_un^i > 0
18:    Update Data_un to the data with Label_un^i = -1
19:  end while
20:  return f_new
21: end function
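For reference, below is a condensed PyTorch sketch of Algorithm 1 that operates on whole tensors rather than one sample at a time; `train` stands in for the paper's re-training routine, the tensor layout is our assumption, and the `max_rounds` guard is our addition:

```python
import torch
import torch.nn.functional as F

def issl_sp(f_b, x_ori, y_ori, x_un, train, threshold=0.99, max_rounds=50):
    """Iteratively pseudo-label x_un and re-train, following Algorithm 1."""
    f_new = f_b
    x_pl = torch.empty(0, *x_un.shape[1:])         # accumulated pseudo-labeled inputs
    y_pl = torch.empty(0, dtype=torch.long)        # and their assigned labels
    for _ in range(max_rounds):
        if len(x_un) == 0:                         # every sample has been labeled
            break
        with torch.no_grad():
            probs = F.softmax(f_new(x_un), dim=1)  # softmax probabilities per class
        conf, labels = probs.max(dim=1)
        confident = conf >= threshold              # max(probs) >= 0.99
        if not confident.any():                    # no sample cleared the threshold
            break
        x_pl = torch.cat([x_pl, x_un[confident]])
        y_pl = torch.cat([y_pl, labels[confident]])
        # re-train on the original data plus every confidently labeled sample
        f_new = train(f_new, torch.cat([x_ori, x_pl]), torch.cat([y_ori, y_pl]))
        x_un = x_un[~confident]                    # line 18: keep only undefined data
    return f_new
```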

4.2 Algorithm Insight

Based on the labels Label_un^i assigned to Data_un^i by f_B, let us denote the data assigned Label_un^i = +1 by X_+ and, similarly, the data assigned Label_un^i = −1 by X_−. As mentioned above, our aim is to learn (μ1 + μ2)/2. With X_+ and X_−, the estimator can be constructed as

θ̂ = (1/2) × ( (1/n_+) Σ_{x ∈ X_+} x + (1/n_−) Σ_{x ∈ X_−} x ),

where n_+ and n_− are the numbers of samples in X_+ and X_−, respectively. Given the distribution N(μ1, σ²) of X_+ and N(μ2, σ²) of X_−, the estimator can be expressed as

θ̂ ~ N( (μ1 + μ2)/2, (σ²/4) × (1/n_+ + 1/n_−) ),

so the estimator concentrates around the desired boundary (μ1 + μ2)/2 as more unlabeled data is confidently and correctly labeled.

4.3 A Variant of ISSL-SP

The ISSL-SP algorithm can be extended in a variety of forms. Algorithm 2 presents the pseudo-code for ISSL-SP with re-labeling of all the initial unlabeled data (ISSL-SPR). As a variant of ISSL-SP, ISSL-SPR is the same as ISSL-SP except that all of the unlabeled data is labeled again at every iteration: line 18 of ISSL-SP (Algorithm 1) is omitted. Since the updated classifier f_new is trained with ever-increasing data, it can provide better performance as the process is repeated; thus, it may be beneficial for the initial unlabeled data Data_un to be labeled over and over again. To sum up, ISSL-SP labels only the data assigned as undefined, while ISSL-SPR labels all of the initial unlabeled data again.

Algorithm 2 A variant of ISSL-SP: ISSL-SPR. This algorithm is the same as ISSL-SP, except that all of the unlabeled data is labeled again at every iteration.

Require:
1: Data_ori: original training data
2: Data_un: extra unlabeled data
3: f_B: base classifier providing softmax probabilities // f_B was trained with Data_ori
4: function ISSL-SPR(f_B, Data_un, n(Data_un)) // n(Data_un): the number of samples in Data_un
5:   f_new = f_B
6:   while True do
7:     Same as lines 7 to 17 in Algorithm 1
8:     if n(Label_un == -1) == 0 then
9:       break
10:    end if
11:  end while
12:  return f_new
13: end function
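Reusing the names from the ISSL-SP sketch in Section 4.1, the variant only changes how the pool is handled: the full initial pool is re-scored every round, and the loop stops once nothing remains undefined. A sketch:

```python
def issl_spr(f_b, x_ori, y_ori, x_un, train, threshold=0.99, max_rounds=50):
    """Like issl_sp, but the entire initial pool is re-labeled every round."""
    f_new = f_b
    for _ in range(max_rounds):
        with torch.no_grad():
            probs = F.softmax(f_new(x_un), dim=1)
        conf, labels = probs.max(dim=1)
        confident = conf >= threshold
        # re-train on the original data plus all currently confident samples
        f_new = train(f_new, torch.cat([x_ori, x_un[confident]]),
                      torch.cat([y_ori, labels[confident]]))
        if confident.all():          # no label left undefined -> stop
            break
    return f_new
```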

5 Dataset and Experimental Setup

5.1 Dataset

To evaluate our proposed algorithms ISSL-SP and ISSL-SPR, we mainly used two datasets: CIFAR-10 [31] and the Street View House Numbers (SVHN) [32]. The two datasets include images and the corresponding class labels. In addition, they have additional unlabeled data with similar distributions: 80 Million Tiny Images [33] includes unlabeled images for CIFAR-10, and extra SVHN [32] includes unlabeled images for SVHN. Tab. 1 summarizes the four datasets: CIFAR-10, 80 Million Tiny Images, SVHN and extra SVHN. More specifically, for training, 80 Million Tiny Images includes 500,000 unlabeled images while CIFAR-10 includes 50,000 labeled images. Extra SVHN includes 531,131 unlabeled images while SVHN includes 73,257 images.

Table 1: Summary of the four datasets: CIFAR-10, 80 Million Tiny Images, SVHN and extra SVHN. 80 Million Tiny Images provides unlabeled images for CIFAR-10; extra SVHN provides unlabeled images for SVHN

5.2 Experimental Setup

In this study, we conducted experiments on artificially created long-tailed data distributions from CIFAR-10 and SVHN. Tab. 2 summarizes the training data randomly drawn from CIFAR-10, 80 Million Tiny Images, SVHN and extra SVHN. The class imbalance ratio was defined as the number of samples in the most frequent class divided by that in the least frequent class [29-31].

Table 2: Summary of training data randomly drawn from the CIFAR-10, 80 Million Tiny Images, SVHN, extra SVHN and CINIC-10 datasets. For the unlabeled data Data_un, we considered two scenarios with different imbalance ratios

For CIFAR-10 and SVHN, we randomly drew samples to achieve an imbalance ratio of 50, producing Data_ori. For the unlabeled data Data_un, we considered two scenarios with different imbalance ratios. In Scenario 1, we assumed that the unlabeled data was balanced, with an imbalance ratio of 1. In Scenario 2, we assumed that the unlabeled data was imbalanced, with an imbalance ratio of 50. For both scenarios, we approximately balanced the numbers of labeled and unlabeled data: 13,996 Data_ori and 13,990 Data_un from CIFAR-10 and 80 Million Tiny Images, and 2,795 Data_ori and 2,790 Data_un from SVHN and extra SVHN. Finally, we evaluated each of the trained models on an isolated and balanced testing dataset [30,31,34,35].
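For reproducibility, here is a sketch of how such a long-tailed subset can be drawn from a balanced dataset, using the exponential class-count profile common in the long-tailed literature [29-31]; with n_max = 5000, 10 classes and ratio 50 this yields roughly 14,000 samples, close to the 13,996 Data_ori reported for CIFAR-10 (the exact per-class counts in Tab. 2 may differ):

```python
import numpy as np

def long_tail_counts(n_max, num_classes=10, ratio=50):
    """Per-class counts decaying exponentially from n_max down to n_max / ratio."""
    mu = ratio ** (-1.0 / (num_classes - 1))
    return [int(n_max * mu ** c) for c in range(num_classes)]

def draw_long_tailed(labels, n_max, ratio=50, rng=None):
    """Return indices of a long-tailed subset of a balanced labeled dataset."""
    rng = rng or np.random.default_rng(0)
    counts = long_tail_counts(n_max, len(np.unique(labels)), ratio)
    idx = [rng.choice(np.where(labels == c)[0], n, replace=False)
           for c, n in enumerate(counts)]
    return np.concatenate(idx)
```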

We implemented and trained the models using PyTorch. For all experiments, we used the stochastic gradient descent (SGD) optimizer with a batch size of 256 and binary cross-entropy as the cost function. All experiments were performed on an NVIDIA GeForce GTX 1080 Ti GPU.

5.3 Evaluation Metrics

To analyze the performance, the labeling percentage was defined as the number of labeled samples among Data_un divided by the total number of samples in Data_un:

Labeling percentage = n(Data_un with Label_un ≠ −1) / n(Data_un) × 100%.

To evaluate the performance, we used sensitivity (recall), specificity, precision, accuracy, balanced accuracy (BA) and F1 score, defined as

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
BA = (Sensitivity + Specificity) / 2
F1 score = (2 × Precision × Sensitivity) / (Precision + Sensitivity)

where TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively. In addition, we also used the top-1 error metric.
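A small sketch of these definitions for the binary case (for the multi-class experiments, the per-class values are averaged):

```python
def binary_metrics(tp, tn, fp, fn):
    """Compute the evaluation metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                    # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    ba = (sensitivity + specificity) / 2            # balanced accuracy
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy, ba=ba, f1=f1)
```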

6 Results

6.1 With Balanced Unlabeled Data: Scenario 1

Tab. 3 summarizes the results when the unlabeled data is balanced. It shows sensitivity, specificity, accuracy, BA, F1 score and top-1 error. Note that since the testing dataset is balanced, the F1 score can be interpreted as both the macro average and the weighted average. For the CIFAR-10 dataset, if only Data_ori is used for training as a baseline, the top-1 error is 28.76%. If Data_un is additionally used for training without iteration [30], the top-1 error is 24.93%, a slight decrease. On the other hand, if Data_un is ideally given with 100% labeling accuracy and additionally used for training, the top-1 error drops significantly to 8.83%, which can be considered the lower bound. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 14.92% and 10.79%, respectively, which are much lower than that of the method in [30] and very close to the lower bound. Similarly, for the SVHN dataset, with Data_ori only, the top-1 error is 28.10%. If Data_un is additionally used for training without iteration [30], the top-1 error decreases to 25.73%. In the ideal condition with 100% labeling accuracy on Data_un, the top-1 error is 9.17%, the lower bound. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 14.87% and 11.09%, respectively, which are also much lower than that of the method in [30] and very close to the lower bound. More detailed results are presented in Supplementary Tabs. 1 and 2. In addition, the results show that ISSL-SPR provides slightly higher accuracy than ISSL-SP, indicating that the updated classifier benefits from re-labeling the entire initial unlabeled dataset.

Table 3: Results with balanced unlabeled data from the CIFAR-10 and SVHN datasets

Fig. 2 plots the labeled percentages and top-1 errors of ISSL-SP and ISSL-SPR at each iteration. It shows that the labeled percentage increases and the top-1 error decreases as the labeling process is repeated. The same trend across iterations can be observed for both ISSL-SP and ISSL-SPR.

Figure 2: (Scenario 1: with balanced unlabeled data) Labeled percentages and top-1 errors using ISSL-SP and ISSL-SPR at each iteration

6.2 With Imbalanced Unlabeled Data: Scenario 2

Tab. 4 summarizes the results when the unlabeled data is imbalanced. It shows sensitivity, specificity, accuracy, BA, F1 score and top-1 error. For the CIFAR-10 dataset, with Data_ori only, the top-1 error is 28.76%. If Data_un is additionally used for training without iteration [30], the top-1 error decreases to 25.85%. As the lower bound, if Data_un is ideally given with 100% labeling accuracy and additionally used for training, the top-1 error is 11.62%. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 18.58% and 14.87%, respectively, which are much lower than that of the method in [30] and close to the lower bound. Similarly, for the SVHN dataset, with Data_ori only, the top-1 error is 28.10%. If Data_un is additionally used for training without iteration [30], the top-1 error decreases to 25.25%. In the ideal condition with 100% labeling accuracy on Data_un, the top-1 error is 11.47%, the lower bound. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 14.14% and 13.62%, respectively, which are also much lower than that of the method in [30] and very close to the lower bound. More detailed results are presented in Supplementary Tabs. 3 and 4. In addition, as in Scenario 1, the results show that ISSL-SPR provides slightly higher accuracy than ISSL-SP, indicating that the updated classifier benefits from re-labeling the entire initial unlabeled dataset.

Table 4: Results with imbalanced unlabeled data from the CIFAR-10 and SVHN datasets

Fig. 3 plots the labeled percentages and top-1 errors of ISSL-SP and ISSL-SPR at each iteration. It likewise shows that the labeled percentage increases and the top-1 error decreases as the labeling process is repeated.

Figure 3: (Scenario 2: with imbalanced unlabeled data) Labeled percentages and top-1 errors using ISSL-SP and ISSL-SPR at each iteration

6.3 Effect of Softmax Threshold Values

To investigate the effect of the softmax threshold value, we varied it from 0.5 to 0.999: in increments of 0.01 from 0.5 to 0.9, and in increments of 0.001 from 0.9 to 0.999. Fig. 4 shows the accuracy metrics of F1 score, balanced accuracy and top-1 error as functions of the softmax threshold value. The results show that a threshold of 0.99 provides the highest accuracy values. Accordingly, we used the softmax threshold of 0.99 for all simulation results throughout this study.
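The sweep itself is a simple loop over the grid described above; here is a sketch reusing `issl_sp` from Section 4.1, where `evaluate`, `f_b` and the data tensors are placeholders for the paper's test-set evaluation and training inputs:

```python
import numpy as np

# 0.50 to 0.89 in steps of 0.01, then 0.90 to 0.999 in steps of 0.001
grid = np.concatenate([np.arange(0.50, 0.90, 0.01),
                       np.arange(0.90, 1.000, 0.001)])
results = {}
for t in grid:
    t = round(float(t), 3)                                 # avoid float drift
    f = issl_sp(f_b, x_ori, y_ori, x_un, train, threshold=t)
    results[t] = evaluate(f)                               # e.g., F1, BA, top-1 error
```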

Figure 4: F1 score, balanced accuracy and top-1 error as functions of the softmax threshold value

7 Conclusion and Discussion

In this study, we proposed new semi-supervised learning algorithms that iteratively correct the labeling of extra unlabeled data based on softmax probabilities. We first train a base classifier using the original labeled data, and evaluate the unlabeled data using softmax probabilities. For each unlabeled sample, if the maximum of the softmax probabilities is equal to or greater than 0.99, we assign the corresponding class to the sample. Every iteration, we update the classifier using all available data for training. Regarding the labeling, ISSL-SP considers only the remaining unlabeled data, while ISSL-SPR considers the entire initial unlabeled dataset. To validate the proposed algorithms, we tested two scenarios: one with a balanced unlabeled dataset and one with an imbalanced unlabeled dataset. The results show that the two proposed algorithms, ISSL-SP and ISSL-SPR, provide accuracy as high as that of supervised learning in which the unlabeled data is given with 100% labeling accuracy.

Comparing the performance of the two algorithms, ISSL-SPR outperforms ISSL-SP regardless of the dataset and the imbalance ratio of the unlabeled data. The results indicate that the updated classifier benefits from re-labeling the entire initial unlabeled dataset. Furthermore, ISSL-SPR outperforms previous state-of-the-art methods. In future work, we plan to validate the algorithms' efficacy on more extensive datasets. In addition, we need to investigate an optimal strategy to reduce the lengthy training time caused by the iteration process.

Supplementary Table 1: Results from Scenario 1 with CIFAR-10


Supplementary Table 2: Results from Scenario 1 with SVHN


Supplementary Table 3: Results from Scenario 2 with CIFAR-10


Supplementary Table 4: Results from Scenario 2 with SVHN


Funding Statement: This work was supported by the National Research Foundation of Korea (No. 2020R1A2C1014829) and by the Korea Medical Device Development Fund grant, funded by the Government of the Republic of Korea (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health and Welfare; and the Ministry of Food and Drug Safety) (grant KMDF_PR_20200901_0095).

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
