
Enhancing Parkinson’s Disease Prediction Using Machine Learning and Feature Selection Methods

Published online: 2022-08-23

Computers, Materials & Continua, 2022, Issue 6

Faisal Saeed, Mohammad Al-Sarem, Muhannad Al-Mohaimeed, Abdelhamid Emara⁴, Wadii Boulila⁵, Mohammed Alasli and Fahad Ghabban

1College of Computer Science and Engineering,Taibah University,Medina,41477,Saudi Arabia

2School of Computing and Digital Technology,Birmingham City University,Birmingham,B47XG,United Kingdom

3Information System Department,Saba’a Region University,Mareeb,Yemen

4Computers and Systems Engineering Department,Al-Azhar University,Cairo,11884,Egypt

5RIADI Laboratory,National School of Computer Sciences,University of Manouba,Manouba,2010,Tunisia

Abstract: Several million people suffer from Parkinson’s disease globally. Parkinson’s affects about 1% of people over 60, and its symptoms increase with age. The voice may be affected, and patients experience abnormalities in speech that might not be noticed by listeners but can be analyzed using recorded speech signals. With the huge advancements in technology, medical data has increased dramatically; therefore, there is a need to apply data mining and machine learning methods to extract new knowledge from this data. Several classification methods have been used to analyze medical datasets and diagnostic problems, such as Parkinson’s Disease (PD). In addition, to improve classification performance, feature selection methods have been extensively used in many fields. This paper proposes a comprehensive approach to enhance the prediction of PD using several machine learning methods combined with different feature selection methods, both filter-based and wrapper-based. The dataset includes 240 records with 46 acoustic features extracted from 3 voice recording replications for 80 patients. The experimental results showed improvements when the wrapper-based feature selection method was used with the K-NN classifier, reaching an accuracy of 88.33%. The best obtained results were compared with other studies, and this study was found to provide comparable and superior results.

Keywords: Filter-based feature selection methods; machine learning; Parkinson’s disease; wrapper-based feature selection methods

1 Introduction

Parkinson’s disease (PD) is a long-term degenerative disorder of the central nervous system that causes both motor and non-motor symptoms [1]. The exact causes of PD are unknown, but they are thought to include both genetic and environmental risk factors. More than 10% of patients with PD have a first-degree relative with the disease. In addition, PD is more prevalent among people who have been exposed to certain pesticides and people with a history of head injury, while PD risk is lower for people who smoke [2]. PD mainly affects neurons in a region of the midbrain known as the substantia nigra, which contains dopamine-producing brain cells, leading to inadequate dopamine secretion in this region [3].

In the early stage of PD, the main symptoms are shaking, difficulty with walking, and slowness of movement. Common symptoms in the late phase of PD are anxiety, dementia, and depression. Moreover, emotional problems, sleep disturbances, and sensory symptoms may also occur [4,5], in addition to Parkinsonian syndrome [6]. These symptoms are mainly used to diagnose typical PD, alongside examinations such as neuroimaging. There is no complete cure for PD; treatment instead aims to improve the symptoms [7,8]. Medical decision support systems (MDSS), which apply artificial intelligence (AI) methods to clinical datasets to help clinicians make better decisions, are increasingly used in diagnosis and treatment [9,10]. Recent advances in machine learning, AI, and statistical learning have improved decision support systems (DSS) and helped to introduce intelligent decision systems [10,11]. Some studies reported that artificial intelligence cannot be effective without learning [12]. Many types of machine learning methods, such as Support Vector Machine (SVM), Naïve Bayes (NB), K-Nearest Neighbor (KNN), Multilayer Perceptron, Decision Tree (DT), and Random Forests (RF), have been used to solve medical decision problems.

There is significant overlap between ML and data mining, which often use the same procedures; however, whereas ML concentrates on prediction based on known properties learned from the training data, data mining concentrates on the discovery of unknown properties in the clinical data. Machine learning (ML) techniques play a significant role in the medical disease diagnosis field and are widely used in bioinformatics [13,14].

Recently, the variety of medical data has been continuously increasing; therefore, effective classification and prediction algorithms are required. Previous machine learning research reported that the accuracy of a classification algorithm can be influenced by many factors [15]. ML algorithms are used to analyze medical datasets and diagnostic problems [12], which subsequently improves medical decisions and treatments and decreases financial costs [14,16].

In addition, feature selection plays an important role in the interpretation of medical data. Feature selection constitutes a significant global combinatorial optimization problem in machine learning. It reduces the number of features from the original set, removes irrelevant or redundant features without incurring much loss of information, simplifies models to make them easier to interpret, and shortens training times [17]. Therefore, a good feature selection method is required to reduce processing time and improve predictive accuracy. There are three types of feature selection algorithms: filter (extract features from the dataset without any learning), wrapper (use learning techniques to estimate useful features), and hybrid (combine the feature selection step and the classifier construction) [18,19].
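The filter/wrapper distinction above can be sketched concretely. The following is a hypothetical illustration in scikit-learn (the study itself used WEKA): a univariate mutual-information filter stands in for an IG-style criterion, a sequential wrapper search wraps a K-NN learner, and the synthetic data merely mimics the 240 × 46 shape of the voice dataset.

```python
# Sketch: filter-based vs. wrapper-based feature selection (scikit-learn).
# Mutual information approximates an IG-style univariate filter; the wrapper
# search is guided by a K-NN learner's cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.feature_selection import (SelectKBest, mutual_info_classif,
                                       SequentialFeatureSelector)
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 240 x 46 acoustic-feature matrix.
X, y = make_classification(n_samples=240, n_features=46, n_informative=8,
                           random_state=0)

# Filter: score each feature without any learning, keep the top 10.
X_filter = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)

# Wrapper: the learner's performance decides which features stay.
wrapper = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                    n_features_to_select=10, cv=5).fit(X, y)
X_wrapper = wrapper.transform(X)

print(X_filter.shape, X_wrapper.shape)  # (240, 10) (240, 10)
```

Note the trade-off the text describes: the filter step runs once and cheaply, while the wrapper refits the learner for every candidate subset.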

The medical field is currently among the most favorable fields for machine learning methods. Therefore, Naïve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbor, Multilayer Perceptron, and Random Forests, as well as feature selection methods, have been suggested to solve medical decision problems such as the prediction of Parkinson’s disease. The main contributions of this paper in the domain of Parkinson’s Disease prediction can be summarized as follows.

1.A comprehensive approach was used to investigate the performance of several feature selection methods and machine learning methods in order to enhance the prediction of PD.

2. These feature selection methods include both filter-based methods, such as Information Gain (IG) and Principal Component Analysis (PCA), and wrapper-based methods with different search strategies, namely Best First, Greedy Stepwise, and Particle Swarm Optimization (PSO).

3.A comparative analysis was conducted to examine the performances of all methods/combinations used and the best prediction results were reported.

This paper is organized as follows: Section 2 reviews related works; Section 3 discusses the methods; Section 4 presents the experimental results and discussion; Section 5 draws conclusions and outlines future works.

2 Related Studies

Several works have investigated the diagnosis of PD, applying many machine learning methods such as Support Vector Machine, neural networks, Naïve Bayes, K-Nearest Neighbor, and Random Forests. For this paper, several databases were searched for related studies on Parkinson’s disease, including Scopus, IEEE Xplore, Science Direct, and Google Scholar.

In [20], a supervised ML method was proposed that combined Principal Component Analysis (PCA) for feature extraction with SVM as the classification method to identify PD patients. The main goal of this method was to determine which patients would be diagnosed with PD or with Progressive Supranuclear Palsy (PSP). The experiments were conducted on data of several patients with clinical and demographic features. The results showed good accuracy of the proposed method in identifying PD patients compared to existing related works.

In addition, the authors in [21] proposed an expert system for PD using features extracted from recordings of patients’ voices. They developed a Bayesian classification approach to deal with the dependence induced by the replication-based experimental design. The experiments were performed on voice recordings involving 80 subjects, 50% of whom had PD. The aim was to identify which subjects had the disease and which did not. Naranjo et al. addressed the problem of identifying PD patients using acoustic features extracted from repeated voice recordings. The proposed method was based on two steps, namely variable selection and classification. The first step aims to reduce the number of features, while the second step uses a regularization method named LASSO (Least Absolute Shrinkage and Selection Operator) as a classifier. The proposed method was tested on the previously described database and showed a good capacity for PD discrimination.

In addition, the authors in [22] addressed the problem of PD diagnosis by developing an approach that investigated gait and tremor features extracted from the voice recording data. They started by filtering the data to remove noise; then, to extract gait features from this data, they detected the peak and measured the pulse duration. The average accuracy obtained by the proposed approach for identifying PD patients was satisfactory.

The authors in [23] proposed a method to automatically detect PD using a convolutional neural network (CNN). They suggested using electroencephalogram (EEG) signals to build a thirteen-layer CNN model. The proposed approach was evaluated on EEG signals of 20 Parkinson’s disease patients (50% men and 50% women). The CNN method obtained interesting results in identifying PD patients; however, its performance should be evaluated on a larger population.

Recently, Mostafa et al. [24] tried to enhance the diagnosis of PD by using several methods of feature evaluation and classification. They used a multi-agent system to evaluate multiple features using five classification methods, namely DT, NB, NN, RF, and SVM. To evaluate the proposed method, they conducted several experiments using original and filtered datasets. The results showed that this method enhanced the performance of the ML methods used by finding the best set of features.

In addition, several methods were applied in [25–27] to predict Parkinson’s disease. These methods applied several machine learning and feature selection methods to enhance the prediction of Parkinson’s disease, and other studies utilized machine learning and deep learning to improve the prediction of diseases [28–38]. This paper extends these efforts by applying a comprehensive approach to investigate the performance of several machine learning methods combined with feature selection methods.

3 Methods

There are many feature selection techniques available; we considered the following: the filter-based techniques Correlation-based Feature Subset Selection (CfsSubsetEval) and Principal Component Analysis (PCA), and the wrapper-based technique. These techniques use different strategies or search algorithms to generate subsets and progress the search, including (i) Best First, (ii) Greedy Stepwise, (iii) Particle Swarm Optimization (PSO), and (iv) Ranker (see Fig. 1).

Figure 1:Filter-based approach vs.wrapper-based approach

The dataset used in this paper is available online at the UCI Machine Learning Repository [14]. It contains acoustic features of 80 patients, 50% of whom suffer from Parkinson’s disease. The dataset has 240 recordings with 46 acoustic features extracted from 3 voice recording replications per patient. The dataset is well balanced by gender and class label (whether or not the patients have Parkinson’s disease).

The experimental protocol was designed to evaluate the combinations of the above techniques and search algorithms when used with the following classification models: (i) Naïve Bayes, (ii) Support Vector Machine (SVM; both c-SVM and nu-SVM were examined), (iii) K-Nearest Neighbor (K-NN), (iv) Multi-Layer Perceptron (MLP), and (v) Random Forest (RF). The experiments were carried out using the WEKA tool version 3.8 on a MacBook Pro running OS X Yosemite version 10.10.5. To evaluate the performance of each classifier, we first ran feature selection to find the representative features and then applied the classification models. Additionally, 10-fold cross validation was applied, and the results are reported in terms of Accuracy, Recall, Precision, and F-score. Finally, we analyzed the results achieved from the experiments. As stated earlier, the main goal of this research is to enhance the prediction of Parkinson’s disease. However, this work also provides a useful guide to selecting the best feature selection technique for different classification models.
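The evaluation protocol above (10-fold cross validation reporting accuracy, precision, recall and F-score) can be sketched as follows. The study ran in WEKA; this scikit-learn version on synthetic data only illustrates the procedure.

```python
# Sketch of the protocol: 10-fold cross validation, reporting the four
# metrics used in the paper. Synthetic data stands in for the UCI dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=240, n_features=46, random_state=0)
scores = cross_validate(GaussianNB(), X, y, cv=10,
                        scoring=("accuracy", "precision", "recall", "f1"))
for m in ("accuracy", "precision", "recall", "f1"):
    print(m, round(scores["test_" + m].mean(), 3))
```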

3.1 Feature Selection Techniques

Several feature selection techniques were applied before feeding the data into the classifier. The filter-based techniques consider the relevance between the features; thus, they have low complexity and acceptable stability and scalability [39]. A disadvantage of this type of technique is that it might ignore some informative features, especially when the data arrives as a stream [40]. The filter-based approaches can be either univariate or multivariate [41]. The univariate methods examine features according to a statistically-based criterion such as Information Gain (IG) [42–44], whereas multivariate methods compute feature dependency before ranking the features. In addition, Principal Component Analysis (PCA) is a common statistical method used for data analysis. PCA reduces the size of the dataset by selecting a set of components that represents the whole dataset. Since PCA is a transformation technique, the first principal component is the one with the highest variance; the remaining principal components are ordered by descending variance [45]. Finally, the wrapper-based techniques evaluate the quality of the selected features using the performance of the learning classifier.
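PCA's variance ordering can be verified in a few lines; this is an illustrative sketch on random data, not the paper's pipeline.

```python
# Sketch: PCA components come out ordered by descending explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 46))            # random stand-in data
pca = PCA(n_components=20).fit(X)

ratios = pca.explained_variance_ratio_    # descending by construction
print(ratios[:3], round(float(ratios.sum()), 3))
```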

Regarding the search strategies, the search algorithms follow either sequential forward search (SFS) or sequential backward search (SBS). SFS starts with a single feature and then iteratively adds features until some terminating criterion is met, whereas SBS starts with the whole feature set and then iteratively deletes features. Since these methods find solutions ranging between suboptimal and near-optimal regions [41], it is worthwhile to employ optimization techniques to find the subset that maximizes the learner’s performance, particularly with the wrapper approach. To this end, the wrapper-based method can take advantage of various optimization methods such as the genetic algorithm [46,47] and the ant colony optimization algorithm (ACO) [48].
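The two search directions can be contrasted with scikit-learn's `SequentialFeatureSelector`; this sketch assumes a K-NN wrapped learner (the paper's own searches ran in WEKA).

```python
# Sketch: sequential forward (SFS) vs. backward (SBS) wrapper search with a
# K-NN learner; both keep the subset that maximizes cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                           random_state=1)
knn = KNeighborsClassifier(n_neighbors=5)

# SFS grows the subset from the empty set; SBS prunes from the full set.
sfs = SequentialFeatureSelector(knn, n_features_to_select=4,
                                direction="forward", cv=3).fit(X, y)
sbs = SequentialFeatureSelector(knn, n_features_to_select=4,
                                direction="backward", cv=3).fit(X, y)
print(sorted(sfs.get_support(indices=True)),
      sorted(sbs.get_support(indices=True)))
```

The two directions may disagree on the selected subset, which is exactly why metaheuristics such as PSO are worth trying in the wrapper setting.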

3.2 Machine Learning Classifiers

In machine learning, data classification remains an attractive domain. Many proposed algorithms have been examined across several domains, such as NB, SVM, K-NN, MLP, and RF, which are presented briefly in the next subsections.

3.2.1 Support Vector Machine

The basic idea behind the SVM algorithm is to construct a hyperplane between groups of data. The quality of the hyperplane is evaluated by measuring the degree to which it maintains the largest distance from the points of either class [39]. Therefore, as presented in Fig. 2, the higher the separation ability of the hyperplane, the lower the error value [49]. The computational complexity of SVM is O(n²) [50,51].

Figure 2: SVM illustration. The larger the margin separating the data points, the higher the accuracy obtained
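The margin idea can be made concrete with a linear SVM: for a separating hyperplane with weight vector w, the margin width is 2/‖w‖, so maximizing the margin means minimizing ‖w‖. The toy points below are assumptions for illustration only.

```python
# Sketch: a linear SVM hyperplane; the margin equals 2/||w||, so a larger
# margin corresponds to a smaller weight norm (toy points, illustration only).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
margin = 2.0 / np.linalg.norm(clf.coef_[0])
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]), round(margin, 3))
```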

3.2.2 Naïve Bayes

Naïve Bayes (NB) is a probabilistic classifier based on Bayes’ theorem. It is called naïve because the classifier relies on a strong feature-independence assumption. In the literature, there are several variants of NB: simple Naïve Bayes, Gaussian Naïve Bayes, Multinomial Naïve Bayes, Bernoulli Naïve Bayes, and Multi-variate Poisson Naïve Bayes; the main difference among them is the way the probability of the target class is computed. The time complexity of Naïve Bayes is O(d×c), where d is the query vector’s dimension and c is the number of classes.
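A small Gaussian Naïve Bayes sketch: the classifier returns a posterior over classes (summing to one) computed under the independence assumption, and the prediction is the argmax class. Synthetic data, illustrative only.

```python
# Sketch: Gaussian NB posterior under the feature-independence assumption;
# the per-class probabilities sum to one and prediction is their argmax.
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=240, n_features=10, random_state=0)
nb = GaussianNB().fit(X, y)

proba = nb.predict_proba(X[:1])[0]
print(proba, proba.argmax() == nb.predict(X[:1])[0])
```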

3.2.3 K-Nearest Neighbor

K-NN is a type of lazy learning, in which there is no explicit training phase and all computation is deferred until classification. It classifies data based on the nearest training data points in the feature space. The K-NN classifier uses the Euclidean distance measure, or another measure such as squared Euclidean, Manhattan, or Chebyshev, to estimate the target class. The performance of the classifier depends on the parameter k, and the best value of k depends on the dataset. In general, the greater the value of k, the lower the effect of noise on the classification, but the boundaries between the classes become less distinct, as shown in Fig. 3. The time complexity of K-NN is O(n×m), where n is the number of training examples and m is the number of dimensions in the training set [52].
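The k-dependence shown in Fig. 3 can be reproduced with a five-point toy set (an assumption constructed for illustration): the same query point flips from class B at k=3 to class A at k=5, because the vote is taken over a larger neighborhood.

```python
# Sketch reproducing Fig. 3's behavior on a toy layout: the query's 3
# nearest neighbors are majority B, but its 5 nearest are majority A.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[1, 0], [0, 1], [1, 1], [2, 0], [0, 2]], dtype=float)
y = np.array(["B", "B", "A", "A", "A"])
query = [[0.0, 0.0]]

k3 = KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(query)[0]
k5 = KNeighborsClassifier(n_neighbors=5).fit(X, y).predict(query)[0]
print(k3, k5)  # B A
```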

3.2.4 Multilayer Perceptron Model

The MLP is a classical feedforward neural network classifier in which the errors of the output are used to train the network [53]. MLP consists of three kinds of layers: (i) an input layer, (ii) one or more hidden layers, and (iii) an output layer. The input layer is connected to the hidden layers, which are connected to the output layer, and all connections carry weight values. Fig. 4 represents an MLP with a single hidden layer. Inputs propagate forward through the network, while back-propagation is used to train and test the weight values. The time complexity of MLP is O(n²).
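A single-hidden-layer MLP as in Fig. 4, sketched with scikit-learn (synthetic data; the paper's MLP ran in WEKA, and the hidden-layer width here is an arbitrary assumption).

```python
# Sketch: a single-hidden-layer MLP; back-propagation is handled internally
# by the library during fit(). Synthetic data, illustration only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=240, n_features=10, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(round(mlp.score(X, y), 2))  # training-set accuracy
```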

Figure 3: K-NN model. When k=3, the classifier predicts the new point as class B (a), whilst when k=5, the point is determined to be class A (b). (a) K-NN model with K=3 (b) K-NN model with K=5

Figure 4:MLP model with 1 input layer,1 hidden layer,and 1 output layer

3.2.5 Random Forests

The Random Forests (RF) classifier is a type of ensemble method that combines multiple decision tree predictions. In RF, the trees are generated by randomly selecting attributes at each node. The ensemble outputs the most popular class among the tree votes. The pseudo-code of the Random Forest ensemble is presented in Tab. 1. The time complexity of a Random Forest of size T and maximum depth D (excluding the root) is O(T×D) [54].

Table 1: Pseudo-code of RF model

Table 1:Continued

The random forest method is more robust to errors and outliers and is therefore less prone to overfitting. The accuracy of the model depends mainly on the strength of the base classifiers and the degree of dependence between them [55].
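The majority-vote mechanism described above can be made explicit by querying the individual trees of a fitted forest; a sketch on synthetic data (not the paper's WEKA model).

```python
# Sketch: the forest's prediction as an explicit majority vote over its
# randomized trees (each split considers a random subset of features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=240, n_features=46, random_state=0)
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            random_state=0).fit(X, y)

# Collect each tree's class vote for the first five samples, then take
# the majority; this mirrors the ensemble rule stated in the text.
votes = np.array([tree.predict(X[:5]) for tree in rf.estimators_])
majority = (votes.mean(axis=0) > 0.5).astype(int)
print(majority)
```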

4 Experimental Results

The experiments were conducted with 10-fold cross validation for each classifier. The performance of each classifier was measured by accuracy, precision, recall, and F-score. Tabs. 2–12 show the experimental results of several machine learning methods, both with and without different feature selection methods.

Table 2: The performance of classifiers without features selection

Table 3: Performance of classifiers with CfsSubsetEval Feature Selection Combinations

Table 3:Continued

Table 4: Performance of classifiers with features selection based on information gain

Table 5: Performance of classifier with features selection based on PCA

Table 6: Summary of the accuracy of classifiers with filter-based features selection methods

Table 7: Performance of classifiers for wrapper-based method with Naïve Bayes as base classifier

Table 8: Performance of classifiers for wrapper-based methods with c-SVM as base classifier

Table 8:Continued

Table 9: Performance of classifiers for wrapper-based methods with nu-SVM as base classifier

Table 10: Performance of classifiers for wrapper-based methods with MLP as base classifier

Table 11: Performance of classifiers when wrapper-based methods with K-NN are applied

Table 12: Performance of classifiers for wrapper-based methods with RF as base classifier

5 Discussion

Tab. 2 shows the performance of all classifiers before applying feature selection methods. Naïve Bayes obtained the best performance on all evaluation measures compared to the other classifiers, achieving 82.92%, 83.30%, 82.90%, and 82.90% for accuracy, precision, recall, and F-score, respectively.

The number of features was reduced by the correlation-based feature selection (CfsSubsetEval) method to 23, 17, and 18 for the Best First, Greedy Stepwise, and PSO search methods, respectively, as shown in Fig. 5. The performance of the CfsSubsetEval combinations for each classifier is shown in Tab. 3. No improvements were obtained by most of the combinations, except for RF with the Greedy Stepwise and PSO methods.

Tab. 4 shows the performance of the classifiers when the feature selection method based on information gain was applied. As shown in Fig. 5, the number of features was reduced to 10. No improvements in the performance of any classifier were observed after applying this feature selection method.

Figure 5: Number of remaining features after applying feature selection methods

In addition, Tab. 5 shows the performance of all classifiers when the feature selection method based on PCA was applied. Only the SVM methods obtained better performance after applying this feature selection method. The number of features was reduced to 20, as shown in Fig. 5.

Tab. 6 summarizes the performance of the filter-based feature selection methods. Feature selection with PCA obtained the best performance when the SVM classifier was applied.

Tabs. 7–12 show the performance of the wrapper-based feature selection methods using different base classifiers. In each table, the Best First, Greedy Stepwise, and PSO search methods were applied.

Tab. 7 shows that, when Naïve Bayes was used as the base classifier for the wrapper-based feature selection method, the performance of NB using the PSO search method was enhanced to 0.854, 0.855, 0.854, and 0.854 for accuracy, precision, recall, and F-score, respectively. The performance of the other classifiers using this method was reduced.

Tab. 8 shows the performance of the classifiers when the wrapper-based feature selection method with c-SVM as the base classifier was applied. Enhancements were obtained by all classifiers using all search methods; however, the best performance was obtained by SVM using the Best First and Greedy Stepwise search methods.

Tab. 9 shows the performance of the classifiers when the wrapper-based feature selection method with nu-SVM as the base classifier was applied. Enhancements were obtained by c-SVM, K-NN, and RF, especially when the PSO search method was used.

In addition, Tab. 10 shows the performance of the classifiers when the wrapper-based feature selection method with MLP as the base classifier was applied. Enhancements were obtained by MLP and RF for all three search methods, with the best results obtained by the MLP classifier.

Moreover, Tab. 11 shows the performance of the classifiers when the wrapper-based feature selection method with K-NN as the base classifier was applied. Enhancements were obtained by K-NN and RF for the Best First and PSO search methods. The best results were obtained by the K-NN classifier, with accuracy, precision, recall, and F-score of 0.883, 0.884, 0.883, and 0.883, respectively.

Tab. 12 shows the performance of the classifiers when the wrapper-based feature selection method with RF as the base classifier was applied. Enhancements were obtained by MLP and RF for all three search methods, with the best results obtained by the RF classifier.

Tab. 13 compares the different wrapper-based feature selection methods (using different base classifiers). The best performing classifier was K-NN combined with wrapper-based feature selection with K-NN as the base classifier, obtaining 88.33% accuracy. The number of features was reduced (with the best performance obtained) to 20, 5, and 22 using the Best First, Greedy Stepwise, and PSO search methods, respectively.

Table 13: Best Results for wrapper-based techniques

Finally, Tab. 14 compares the different feature selection methods (filter-based and wrapper-based). The best performance was obtained by the K-NN classifier combined with the wrapper-based feature selection method with K-NN as the base classifier, using the Best First and PSO search methods.

Table 14: Comparison between filter-based and wrapper-based techniques

For this paper, a comparison was conducted between the best performing methods and previous studies on predicting Parkinson’s disease using the same dataset and other datasets, as shown in Tab. 15. The comparison showed that the best performing method (the K-NN classifier combined with wrapper-based feature selection with K-NN as the base classifier, using the Best First and PSO search methods) obtained comparable and superior results.

Table 15: Comparison with previous studies

6 Conclusions and Future Works

This paper examined the performance of several classifiers with filter-based and wrapper-based feature selection methods to enhance the diagnosis of Parkinson’s disease. Different evaluation metrics were used, including accuracy, precision, recall, and F-score. The experiments compared the performance of machine learning on original and filtered datasets. The results showed that the wrapper-based feature selection method with K-NN enhanced the performance of predicting Parkinson’s disease, reaching an accuracy of 88.33%. In future work, more machine learning and deep learning methods could be applied with these combinations of feature selection methods. In addition, other feature selection methods could be investigated to improve the performance of predicting Parkinson’s disease.

Acknowledgement: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work, project number (77/442). The authors would also like to extend their appreciation to Taibah University for its supervision support.

Funding Statement:This research was funded by the Deputyship for Research&Innovation,Ministry of Education in Saudi Arabia under the Project Number(77/442).

Conflicts of Interest:The authors declare that they have no conflicts of interest.
