
A Novel Binary Emperor Penguin Optimizer for Feature Selection Tasks

2022-03-14 09:28:10
Computers, Materials & Continua, 2022, Issue 3

Minakshi Kalra, Vijay Kumar, Manjit Kaur, Sahar Ahmed Idris, Şaban Öztürk and Hammam Alshazly

1 Computer Science Department, Government College Bahadurgarh, Bahadurgarh, 124507, India

2 CSE Department, National Institute of Technology Hamirpur, 177005, India

3 School of Engineering and Applied Sciences, Bennett University, Greater Noida, 201310, India

4 College of Industrial Engineering, King Khalid University, Abha, Saudi Arabia

5 Department of Electrical and Electronics Engineering, Amasya University, Amasya, Turkey

6 Faculty of Computers and Information, South Valley University, Qena, 83523, Egypt

Abstract: Nowadays, due to the increase in information resources, the number of parameters and the complexity of feature vectors increase. Optimization methods offer practical approximate solutions to this problem where exact solutions are infeasible. The Emperor Penguin Optimizer (EPO) is one of the highest-performing meta-heuristic algorithms of recent times, modeling the huddling behavior of emperor penguins. It shows superior performance over a wide range of optimization problems thanks to giving each penguin an equal chance and to its fast convergence. Although traditional EPO handles optimization problems in continuous search spaces, many problems today shift to binary search spaces. Therefore, in this study, building on the strengths of traditional EPO, a binary EPO (BEPO) is presented for the effective solution of binary-natured problems. The BEPO algorithm searches a binary space instead of the continuous space searched by the conventional EPO algorithm. For this purpose, sigmoidal functions are preferred in determining the emperor positions. In addition, the boundaries of the search space remain constant because binary operators are chosen. BEPO's performance is evaluated over twenty-nine benchmark functions. Statistical evaluations are made to reveal the superiority of the BEPO algorithm. In addition, the performance of the BEPO algorithm is evaluated on the binary feature selection problem. The experimental results reveal that the BEPO algorithm outperforms the existing binary meta-heuristic algorithms in both tasks.

Keywords: Metaheuristics; optimization algorithms; emperor penguin optimizer; intensification; diversification; feature selection

1 Introduction

Technological developments enable information to increase and spread easily. This situation makes it challenging to choose the relevant information among all the available information. The growth in the amount of information is often confusing, as irrelevant knowledge is involved. It is misleading to think of this information as just literature. In real-world applications, the information obtained from devices is increasing, because of the data provided by the technological devices added to the traditional production process. These data present details regarding the functioning of the system in almost every field, from the production process to the health system. The reason for collecting such data is usually the control of the production process, classification, or decision-making [1]. Today, developments in artificial intelligence (AI) methods allow the mentioned processes to be performed automatically. However, the decision-making processes of AI algorithms depend on the quality of the information produced by these devices. The large size of the data obtained in real-world applications is not beneficial for automatic decision-making systems (the curse of dimensionality) [2]. Therefore, optimization of feature vectors emerges as an essential task. In the literature, feature selection or feature dimensionality reduction approaches are generally realized using optimization approaches [3]. In this way, the performance of AI methods is increased by eliminating unrelated features. In the past, the effect of optimization on performance could be seen in tasks performed using hand-crafted methods. Today, these tasks are usually performed using automated methods, and the adaptation of optimization algorithms to these techniques has a significant share in their realization. Nowadays, studies on optimizing deep architectures using external optimization algorithms are available in the literature.

The usage areas of optimization algorithms are not limited to a specific framework; it is obvious that optimization is required in almost every field today. As technological advances increase and automation becomes widespread, many complex problems arise that need to be considered. Optimization algorithms, which provide great support for solving the mentioned problems, have been proposed for a long time and continue to be proposed. As the accumulation of knowledge increases and new problems arise, drawbacks in current optimization techniques emerge. The historical development of optimization techniques is a very comprehensive topic and can be studied in detailed literature surveys [4]. Nearly all metaheuristic techniques are developed for solving real-life problems and have been widely used in the past few years for solving complex problems. Some of them may not provide the exact solution to specific problems; instead, they provide a near-optimal solution in a cost-effective manner. These algorithms are generally categorized into five main classes, namely evolutionary-, physics-, bio-inspired-, swarm-, and nature-inspired-based metaheuristics. Fig. 1 shows the categorization of metaheuristic optimization techniques.

Figure 1: Categorization of metaheuristic algorithms

Compared with exact search approaches, metaheuristic optimization approaches are more successful in terms of performance (and are less time-consuming). One of the most important reasons for this is that they do not have to analyze the entire search space during the search process. High computational complexity is an undesirable feature today, although exact search approaches guarantee to find the best solution given sufficient time and hardware power. Metaheuristic approaches are a kind of random search, but the randomness here is usually in a directed form. Thanks to this manageability, they can determine a highly suitable solution in a short time. The evolutionary-based algorithms are motivated by biological evolution. Some of these are the Genetic Algorithm (GA) [5], Genetic Programming (GP) [6], and Evolution Strategy (ES) [7]. The second category of algorithms mimics concepts drawn from physics. The well-known algorithms are the Gravitational Search Algorithm (GSA) [8], Simulated Annealing (SA) [9], Big Bang Big Crunch (BBBC) [10-14], and the Galaxy-Based Search Algorithm (GBSA) [15]. Swarm algorithms are influenced by the cooperative behavior of swarms present in nature. Some of the algorithms included in this category are the Spotted Hyena Optimizer (SHO) [16], Grey Wolf Optimizer (GWO) [17], Particle Swarm Optimization (PSO) [18], Bat Algorithm (BA) [19], Ant Colony Optimization (ACO) [20], the bee collecting pollen algorithm [21], the Wolf Pack Search Algorithm [22], the Moth-Flame Optimization algorithm (MFO) [23,24], the Seagull Optimization Algorithm (SOA) [25], and the Emperor Penguin Optimizer (EPO) [26]. All these approaches aim to find the most appropriate solution to specific problems with the least effort, drawing inspiration from systems in nature.

The above-mentioned optimization algorithms can be applied to the feature selection problem [27]. Feature selection is an approach used to determine and select the most convenient features in a feature vector: it distinguishes the relevant features from the irrelevant ones and retrieves only the substantial ones. The main objective of the feature selection problem is to improve prediction performance and reduce data dimensionality. By relieving problems such as the curse of dimensionality, it increases the performance of the main decision-making algorithm, which generally performs tasks such as classification, segmentation, tracking, feature selection, or feature reduction and produces a meaningful result. Feature selection research dates back a long way, and a detailed literature analysis is available [28]. In this study, the effects and contributions of remarkable optimization algorithms are examined in the literature analysis. Binary GWO (BGWO) is a variant of GWO that includes the conversion of continuous numbers to binary [29]. After the high performance of the BGWO algorithm, various variants of this algorithm were suggested in the literature; Quantum-Inspired GWO (QI-GWO) is one of the most popular of these [30]. On the other hand, binary PSO (BPSO) is one of the most frequently used algorithms for solving binary optimization problems [31]. There are also various variants of the BPSO algorithm, which is one of the leading binary optimization algorithms [32]; however, BPSO shares similar problems with traditional PSO. The binary GSA (BGSA) is also frequently encountered in the literature for feature selection problems [33]. The binary BA (BBA) performs the feature selection task by partitioning the search space as an n-cube [34]. The binary SOA (BSOA) was recently introduced and validated on the feature selection problem [35].

Binary versions of almost all meta-heuristic optimization algorithms are available in the literature. Our analysis of these algorithms clearly shows that they carry over the problems of their continuous counterparts. The BEPO algorithm is proposed to eliminate these problems and provide a better solution. The main difference between the original EPO and the proposed BEPO is that EPO works in a continuous search space, whereas BEPO works in a discrete search space. For this, BEPO utilizes a sigmoidal transfer function that maps the positions of search agents to the values 0 and 1. Performance indicators of the proposed BEPO algorithm are calculated on 29 benchmark test functions and compared with those of the existing binary metaheuristic algorithms in the literature. Furthermore, the feature selection problem is solved through BEPO. The main contributions of this paper can be summarized as follows.

• The binary version of EPO and its search-space range values are proposed and analyzed in depth.

• A sigmoidal binarization function is proposed to convert search-agent values to 0 and 1.

• The performance of the proposed BEPO algorithm is analyzed in detail using many benchmark test functions and benchmark datasets.

• Extensive feature selection experiments are conducted based on BEPO.

The remainder of this paper is organized as follows. First, the preliminary concepts of EPO are presented in Section 2. Then, the binary variant of the emperor penguin optimizer algorithm is presented in Section 3. Results and discussions are given in Section 4. Section 5 illustrates the application of the proposed algorithm to the feature selection problem. Finally, concluding remarks are given in Section 6.

2 Continuous Emperor Penguin Optimizer

2.1 Inspiration

Aptenodytes forsteri is the scientific name of the emperor penguin. Compared to all other types of penguins, they are the largest. With respect to size, males and females are very much alike. They have a black head, white stomach, wings, and tail, and live in Antarctica. In order to survive the Antarctic winter, they huddle together to keep themselves warm and protect themselves from the cold wind. The huddling mechanism of penguins comprises four main phases: creating a huddle boundary, evaluating the temperature profile, calculating the distance among emperor penguins, and relocating the mover [26].

2.2 Mathematical Modelling

The above-mentioned four phases of emperor penguins are mathematically modeled.The foremost motivation behind this algorithm is to find the efficient mover and update the position of emperor penguins.

Penguins create a huddle boundary of polygon shape. The wind flow is also considered during the generation of the huddle boundary. A complex variable is used to illustrate the behavior of huddle boundary creation. Let us assume that the velocity of the wind and its gradient are denoted by η and φ, respectively:

φ = ∇η (1)

Based on Eq. (1), the mathematical formulation of the potential function (P) is given below:

P = η + iμ (2)

where i represents the imaginary constant and μ is a random vector. The location of each penguin is randomly modified on the basis of the best-fit penguin's location. The best-fit penguin is found at the epicenter of an L-shaped polygon [26].

The penguins create a huddle to maintain a high temperature inside it for survival in winter. It is assumed that when the radius of the polygon (R) is greater than one, the temperature (t) is set to zero; otherwise, the temperature (t) is set to one:

t = { 0 if R > 1; 1 if R < 1 } (3)

The temperature difference (T) between the huddle temperature and the temperature outside the huddle boundary is computed as in [26]:

T = t − Maxiteration / (x − Maxiteration) (4)

where t is the temperature profile and x is the current iteration. Maxiteration is the maximum limit of iterations.
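As an illustrative sketch, the temperature rule described above (t set by the polygon radius, then decayed over the iterations) might look like the following in Python. The decay form T = t − Maxiteration/(x − Maxiteration) follows the commonly cited EPO formulation and should be treated as an assumption here.

```python
def temperature_profile(R, x, max_iteration):
    # t depends on the polygon radius R: zero when R > 1, one otherwise
    t = 0.0 if R > 1 else 1.0
    # Temperature difference T decays with the iteration counter x;
    # this T = t - max_iteration / (x - max_iteration) form is the
    # commonly cited EPO formulation (an assumption here)
    return t - max_iteration / (x - max_iteration)
```

At the first iteration (x = 0) the huddle is warmest (T = t + 1), and the difference shrinks as x approaches Maxiteration, gradually reducing the step sizes of the penguins.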

The best-fit emperor penguin plays the most critical role in determining the locations of the other emperor penguins, whose locations are updated accordingly. During huddle boundary creation, the distance among the emperor penguins is calculated. The mathematical formulation of the distance computation is given below:

D_ep = | B(Y) · Q(x) − Z · Q_ep(x) | (5)

Y = N × (T + Qgrid(Accuracy)) × Rand() − T (6)

Qgrid(Accuracy) = | Q − Q_ep | (7)

Z = Rand() (8)

where N represents the movement parameter for maintaining the gap among penguins, and its value is set to 2. Qgrid(Accuracy) is the polygon grid accuracy, and Rand() is a random function whose value varies in the range [0, 1].

The mathematical formulation of B() is given below:

B(Y) = ( √( g · e^(−x/l) − e^(−x) ) )² (9)

where e is the exponential function, and g and l are control parameters whose values lie in the ranges [2, 3] and [1.5, 2], respectively.

According to the mover (i.e., the best-fit emperor penguin), the positions of the emperor penguins are updated. The mover is used to vacate the current position of a penguin and change the positions of the other penguins. The position update function is given below:

Q_ep(x + 1) = Q(x) − Y · D_ep (10)

Algorithm 1: Emperor Penguin Optimizer
Input: the emperor penguin population Q_ep (s = 1, 2, ..., n)
Output: the best optimal solution Q

Procedure EPO
  Initialize the parameters T, Y, Z, B(), R, and Maxiteration
  Compute the fitness value of each search agent
  while (s < Maxiteration) do
    FITNESS(Q_ep)
    R ← Rand()
    if (R > 1) then t ← 0 else t ← 1 end if
    T ← t − Maxiteration / (s − Maxiteration)
    for i ← 1 to n do
      for j ← 1 to n do
        Compute the vectors Y and Z using Eqs. (6) and (8)
        Compute the function B(Y) using Eq. (9)
        Update the position of the current agent using Eq. (10)
      end for
    end for
    Update the parameters T, Y, Z, and B()
    Reorganize the search agents that go beyond the boundary
    FITNESS(Q_ep)
    Update Q if it is better than the previous value
    s ← s + 1
  end while
  return Q
End Procedure

Procedure FITNESS(Q_ep)
  for i ← 1 to n do
    FIT[i] ← FITNESS_FUNCTION(Q_ep)
  end for
  FITbest ← BEST(FIT[])
  return FITbest
End Procedure
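To make the inner loop concrete, a minimal Python sketch of one EPO position update is given below. The symbol names (N, Qgrid, B, g, l) follow the text, but the exact equation forms are taken from the commonly cited EPO formulation, so the sketch should be read as an assumption rather than the authors' reference implementation.

```python
import numpy as np

def epo_update(positions, best, T, x, rng, N=2.0):
    # One EPO-style position update for every penguin in the population.
    # positions: (n_agents, dim) array; best: the mover's position;
    # T: temperature difference; x: current iteration; rng: np.random.Generator
    new_positions = np.empty_like(positions)
    for i, p in enumerate(positions):
        Qgrid = np.abs(best - p)                  # polygon grid accuracy
        Y = N * (T + Qgrid) * rng.random() - T    # collision-avoidance vector
        g = rng.uniform(2.0, 3.0)                 # control parameter in [2, 3]
        l = rng.uniform(1.5, 2.0)                 # control parameter in [1.5, 2]
        B = (np.sqrt(g * np.exp(-x / l) - np.exp(-x))) ** 2  # social forces
        D = np.abs(B * best - rng.random() * p)   # distance to the mover
        new_positions[i] = best - Y * D           # move relative to the mover
    return new_positions
```

In a full run this update sits inside the while-loop of Algorithm 1, with T recomputed each iteration and out-of-bound agents clipped back into the search range.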


3 Proposed Binary Emperor Penguin Optimizer

3.1 Inspiration

Many of the optimization problems we encounter in real life require a discrete/binary search space. For this, problems having continuous variables can be transformed into binary variables. Binary metaheuristic algorithms help in solving discrete problems such as feature selection [36], the unit commitment problem [37], and dimensionality reduction [38]. However, the existing algorithms are still far from producing optimal solutions [39-43]. Hence, a novel binary metaheuristic is needed to resolve this problem.

EPO is easy to understand and apply in real-life scenarios. Due to these advantages, a novel binary variant of EPO (BEPO) is developed in which the position updating mechanism has been modified. To the best of the authors' knowledge, no such algorithm has been presented so far.

3.2 Mathematical Implementation

The proposed algorithm uses a discrete search space instead of a continuous one. The discrete search space can be regarded as a hypercube. The penguins of BEPO are moved to nearer or farther corners of the hypercube by flipping binary bits. In BEPO, the position updating mechanism has been modified accordingly.

The original EPO algorithm is designed to allow penguins to move in a continuous search space, whereas BEPO uses a discrete search space to solve binary problems. For this purpose, the position updating mechanism is modified to ensure that only 0 and 1 are produced. A sigmoidal transfer function is used to perform this process. The mathematical formulation of the sigmoidal transfer function is given below:

T(Y) = 1 / (1 + e^(−Y)) (11)

Q_ep(s + 1) = { 1 if Rand() < T(Y); 0 otherwise } (12)

where Q_ep(s + 1) is the updated position of the emperor penguin at iteration s.

The transfer function forces penguins to move in a binary space, since its value lies in the range [0, 1]. The value of the transfer function increases as Y increases, moving penguins away from the best penguin toward their preceding best locations, and it decreases as Y decreases. Fig. 2 shows the mapping process from the continuous space into the discrete space.
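A minimal sketch of this binarization step is shown below, assuming the standard logistic sigmoid and the usual stochastic thresholding rule (both are assumptions, since the paper's exact formula is not reproduced here).

```python
import numpy as np

def sigmoid_transfer(y):
    # Standard logistic sigmoid: maps any real-valued step into [0, 1]
    return 1.0 / (1.0 + np.exp(-y))

def binarize_position(y, rng):
    # Stochastic thresholding: each bit becomes 1 with probability
    # sigmoid_transfer(y), the usual S-shaped binarization rule
    prob = sigmoid_transfer(np.asarray(y, dtype=float))
    return (rng.random(prob.shape) < prob).astype(int)
```

Large positive step components thus map to 1 with high probability and large negative components to 0, which is exactly the monotone behavior described above.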

Figure 2: The transition from continuous space to discrete space

4 Performance Evaluation

To fairly demonstrate the superior performance of the proposed BEPO algorithm, comparisons are made with recently developed binary metaheuristic algorithms on standard benchmark test functions. Thirty independent simulations are performed for each metaheuristic algorithm, and the mean and standard deviation of the best solutions across these runs are reported in the comparison tables.

4.1 Benchmark Test Functions

BEPO is validated on twenty-nine benchmark functions: unimodal (F1-F7) [44], multimodal (F8-F13) [45], fixed-dimension multimodal (F14-F23) [45], and composite functions (F24-F29) [46]. The intensification capability of BEPO is established through the unimodal functions (F1-F7). The multimodal functions (F8-F13) are used to validate the diversification ability of BEPO. The balance between intensification and diversification of BEPO is tested through the F14-F29 test functions. Parameter values are kept the same across all these test functions to obtain fair comparison results.
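For illustration, F1 (the sphere function) is the simplest of the unimodal benchmarks; a sketch under its standard definition is:

```python
import numpy as np

def sphere(x):
    # F1 (sphere): f(x) = sum of squared components; the global minimum
    # is 0 at the origin, which makes it a clean intensification test
    return float(np.sum(np.asarray(x, dtype=float) ** 2))
```

An optimizer with strong intensification should drive every component of x toward zero on this function.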

4.2 Parameter Settings

The BEPO algorithm is evaluated on the test functions mentioned in Section 4.1 and compared with four binary metaheuristic algorithms, namely the Binary Grey Wolf Optimizer (BGWO) [29], Binary Gravitational Search Algorithm (BGSA) [33], Binary Particle Swarm Optimization (BPSO) [47], and Binary Bat Algorithm (BBA) [48]. The parameter settings of these algorithms are set as mentioned in their original papers.

Tab. 1 depicts the parameter settings of these algorithms. Two criteria are prioritized while selecting the parameters specified in Tab. 1: first, preserving the parameters of the original algorithms; second, compatibility with the general presentation in this study. The algorithms are implemented in MATLAB R2018a on Windows 7 with 4 GB RAM. Each algorithm is run 30 independent times, and each run consists of 1000 iterations. The standard deviation (Std) and average (Avg) of the best solution obtained from each run are calculated and tabulated.
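The Avg/Std reporting protocol described above can be sketched as follows (the function and variable names are illustrative, not from the paper):

```python
import statistics

def summarize_runs(best_per_run):
    # Avg and Std over the best solution of each independent run,
    # matching the per-function entries tabulated in the result tables
    avg = statistics.mean(best_per_run)
    std = statistics.stdev(best_per_run) if len(best_per_run) > 1 else 0.0
    return avg, std
```

Each table cell in Tabs. 2-5 would then correspond to one call of this summary over the 30 best-of-run values for a given algorithm and test function.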

Table 1: Parameter settings

4.3 Performance Analysis

BEPO is evaluated on test functions that are mentioned in Section 4.1.In addition, the convergence analysis of BEPO and other metaheuristics are also compared.

Tab. 2 depicts the unimodal test results obtained from BEPO and the existing binary metaheuristic techniques. For the F1, F2, F3, F5, F6, and F7 test functions, BEPO yields better results than the existing metaheuristics in terms of fitness value. BPSO is the second-best algorithm for the F1, F2, F3, and F7 test functions. For the F5 function, BGWO is the second-best algorithm, and BBA is the second-best algorithm for the F6 function. For the F4 function, BPSO performs better than the other state-of-the-art (SOTA) algorithms. The obtained results show that BEPO has better intensification ability than the others. This is due to the adaptive nature of the proposed BEPO algorithm; in addition, retaining the properties of the original EPO is one of the main reasons for the high performance.

Table 2: Results obtained from unimodal functions

The results obtained from BEPO and the other binary metaheuristic techniques on multimodal test functions are depicted in Tab. 3. For the F8-F12 test functions, BEPO outperforms the existing binary metaheuristic techniques. For the F8 and F11 test functions, BGSA and BPSO are the second-best algorithms, respectively. For the F9, F10, and F12 test functions, BBA is the second-best algorithm. For the F13 function, the BGWO algorithm produces higher performance than the other methods, with BEPO the second-best technique. It can be seen from the results that BEPO has better diversification capability than the existing binary metaheuristics. The sigmoidal transfer function and the adaptive nature of BEPO are responsible for this diversification.

Table 3: Results of multimodal test functions

Tab. 4 illustrates the results obtained from BEPO and the above-mentioned algorithms on fixed-dimension multimodal test functions. BEPO offers better results than the existing metaheuristics for the F14-F22 test functions, except for F17 and F21. For the F17 function, BGSA outperforms the other SOTA techniques in terms of the fitness function. Furthermore, BGWO and BPSO produce better results than the other binary metaheuristic techniques for the F21 and F23 functions, respectively. The results reveal that BEPO can explore the global optimal solution. The results obtained from BEPO and the binary metaheuristics on composite test functions are shown in Tab. 5. BGSA offers the optimal solution for the F24 test function, with BEPO the second-best algorithm. BEPO outperforms the binary metaheuristics for the F25-F29 functions, except for F27, for which BBA gives the optimal solution and BEPO is the second-best algorithm.

Table 4: Results of fixed-dimension multimodal test functions

The proposed BEPO algorithm has better diversification and intensification than the existing algorithms, owing to its adaptive nature. Fig. 3 illustrates the convergence curves obtained from BEPO and the existing binary metaheuristics on the F7, F9, F14, and F27 functions. It is observed from this figure that BEPO has better convergence than the existing algorithms.

5 Performance Evaluation of BEPO for Feature Selection Tasks

In this section, the performance of the BEPO algorithm is analyzed in detail on the feature selection problem. Feature selection is a well-known optimization problem defined over a discrete search space. The main purpose of feature selection techniques in the literature is to increase prediction performance. In order to demonstrate the performance of BEPO on feature selection problems, it is compared with the well-known existing binary metaheuristics.

5.1 Datasets Used and Experimental Setup

Eight UCI datasets [49] have been used to validate the performance of BEPO; a detailed description of these datasets is given in Tab. 6. BEPO is compared with the recently developed binary metaheuristics mentioned in Section 4.2 using the same parameter settings. The following fitness function is used to evaluate the feature selection performance of the proposed BEPO algorithm and compare it with the current SOTA techniques:

Fitness = ξ · Error(d) + ψ · |len_feat_sub| / |Total_feat| (13)

where Error(d) is the error rate obtained by the algorithm on the feature subset d, |len_feat_sub| represents the length of d, and |Total_feat| denotes the total number of features. ξ and ψ are set to 0.99 and 0.01, respectively, as mentioned in [46].

Table 6: Description of datasets

5.2 Results and Discussion

The performance of the proposed BEPO algorithm is compared with the current binary metaheuristic algorithms in Tabs. 2, 3, 4, and 5. In all tables, the BEPO algorithm mostly produces higher performance than the other algorithms. However, for some test functions, it produces similar results or lower performance, which is to be expected since all the compared algorithms are nature-inspired. For the functions where the best solution is not produced by BEPO, BEPO is generally among the top three solutions. From this point of view, BEPO can be considered a highly suitable algorithm for real-life problems. Fig. 3 provides information about the speed at which the binary algorithms reach the solution. According to Fig. 3, the BEPO algorithm reaches the most appropriate solution quite quickly compared to the other algorithms. When all these binary SOTA approaches are evaluated in terms of speed, the proposed BEPO algorithm stands out for real-life applications.

Tab. 7 presents the results of comparing the BEPO algorithm with the other binary metaheuristic methods on feature selection tasks. Experiments with all the mentioned binary optimization algorithms are carried out using eight benchmark datasets. The resulting performances show that, according to the average best fitness value, BEPO outperforms the other algorithms on all datasets for feature selection. In Tab. 7, the average best fitness values produced by the BGWO and BGSA algorithms are also quite striking compared to the others. Fig. 4 compares the convergence speed of the proposed BEPO algorithm and the other algorithms. It is observed from the figure that BEPO has better convergence than the other algorithms. In the feature selection task, the difference in the required number of iterations decreases. Nevertheless, the proposed BEPO algorithm is more suitable for solving real-life feature selection problems in terms of both performance and speed. In some high-dimensional datasets, BEPO may have slow computational speed due to the utilization of transfer functions.

Table 7: Average fitness value obtained from BEPO and other algorithms

Figure 4: Convergence analysis of BEPO and binary metaheuristics on (a) Glass, (b) Wine datasets

6 Conclusion and Future Work

Nowadays, optimization methods that enable us to reach suitable solutions in almost every field in the shortest time are mostly formulated in continuous search spaces. However, studies in the literature show that many real-life problems are more appropriately expressed in a binary, discrete search space. For this purpose, binary versions of existing optimization algorithms are being studied extensively. In this study, a binary version of the well-known EPO algorithm, dubbed BEPO, is proposed to solve problems in a binary search space. The position updating mechanism of the original EPO is modified to switch to the binary search space; for this purpose, binarization is performed using a sigmoidal transfer function. The performance of the proposed BEPO algorithm is tested on general optimization and feature selection problems in two separate sections. First, the performance of the proposed BEPO algorithm is evaluated using 29 benchmark test functions. Experimental results reveal that BEPO outperforms the existing binary metaheuristics. The superior results of the proposed method are presented according to different evaluation criteria and compared with SOTA methods, and the statistical significance of the proposed algorithm is analyzed through ANOVA. In the second stage of the experiments, the effect of the BEPO algorithm on the solution of the feature selection problem is analyzed. The results obtained show that BEPO can be used effectively in feature selection tasks with high performance. The detailed analysis performed in this study reveals that the proposed BEPO algorithm is a highly suitable method for both real-life general optimization and real-life feature selection problems. In future studies, the use of the BEPO algorithm with hybrid structures will be studied, and the effects of hybridization on performance will be investigated in detail. Also, BEPO variants will be studied to solve different real-world problems.

Acknowledgement:The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work.

Funding Statement:This work was supported by the Deanship of Scientific Research at King Khalid University through the Research Groups Program under Grant Number RGP.1/95/42.

Conflicts of Interest:The authors declare that they have no conflicts of interest.
