
An Optimized Approach to Vehicle-Type Classification Using a Convolutional Neural Network

Computers, Materials & Continua, 2021, Issue 12 (published 2021-12-15)

Shabana Habib and Noreen Fayyaz Khan

1 Department of Information Technology, College of Computer, Qassim University, Buraidah 51452, Saudi Arabia

2 Department of Computer Science, Islamia College University, Peshawar, Pakistan

Abstract: Vehicle-type classification is a central part of an intelligent traffic system. In recent years, deep learning has played a vital role in object detection for many computer vision tasks. To learn high-level deep features and semantics, deep learning offers powerful tools that address the limitations of traditional architectures built on handcrafted feature-extraction techniques. Unlike algorithms using handcrafted visual features, a convolutional neural network can automatically learn good features for vehicle-type classification. This study develops an optimized automatic surveillance and auditing system to detect and classify vehicles of different categories. Transfer learning is used to learn the features quickly from a small number of training images of vehicle frontal views. The proposed system employs extensive data-augmentation techniques for effective training while avoiding the problem of data shortage. To capture rich and discriminative information about vehicles, the convolutional neural network is fine-tuned for vehicle-type classification using the augmented data. The network extracts feature maps from the entire dataset and generates a label for each object (vehicle) in an image, which supports vehicle-type detection and classification. Experimental results on a public dataset and our own dataset demonstrate that the proposed method is effective in detecting and classifying different types of vehicles, achieving 96.04% accuracy on vehicle-type classification.

Keywords: Vehicle classification; convolutional neural network; deep learning; surveillance

1 Introduction

Surveillance systems have achieved good results in terms of security. Image analysis tasks, such as detecting a moving vehicle in an image, are challenging and can be addressed by analyzing the foreground [1]. Dramatic improvements from automation technologies have been observed in areas such as speech recognition, document recognition, and genomics [2]. Major issues in surveillance systems include brightness, lighting, occlusion, shadows, and fragmentation, all of which have a negative impact on the objects to be detected [3,4]. Much research has been done on vehicle-type recognition systems using handcrafted feature-extraction techniques such as speeded-up robust features (SURF), local binary patterns (LBPs), and histogram of oriented gradients (HOG) [5], as well as annular coils, radar detection, radio-wave or infrared contour scanning, vehicle weighing, and laser sensor measurement [6,7]. Automatic detection and classification of vehicle types using a convolutional neural network (CNN) remains an open problem. Deep CNNs [8,9], together with extensively annotated datasets (e.g., ImageNet [10]), have brought remarkable progress in image recognition. Deep learning approaches that are useful for feature extraction and selection without prior knowledge have been investigated [11]. The most popular form of deep learning model, the CNN, consists of a series of convolutional layers followed by pooling layers and fully connected layers. Each convolutional layer forms feature maps, within which each unit is associated with a set of weights called filters. The pooling layer downsamples the feature maps by summarizing the presence of features. The fully connected layers are used for classification. All the weights are updated by gradient-based learning.
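To make the layer sequence concrete, the following is a minimal sketch of such a stack in MATLAB's Deep Learning Toolbox (the environment used for the experiments later in this paper); the layer sizes here are illustrative placeholders, not the architecture trained in this study:

```matlab
% Minimal CNN layer stack illustrating the conv -> pool -> FC pattern
% described above. Sizes are arbitrary placeholders.
layers = [
    imageInputLayer([227 227 3])             % input image
    convolution2dLayer(11, 96, 'Stride', 4)  % filters produce feature maps
    reluLayer
    maxPooling2dLayer(3, 'Stride', 2)        % pooling summarizes the feature maps
    fullyConnectedLayer(8)                   % fully connected layer for classification
    softmaxLayer
    classificationLayer];                    % weights trained by gradient-based learning
```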

In this research, we employed the AlexNet CNN architecture and customized its network layers and options according to our classification objectives. Image features are the primary elements of any object detection. The model extracts features from the training dataset through convolutional and pooling layers. The proposed model relies on backpropagated gradients, which help to reduce the discrepancy between the correct output and that produced by the system. The CNN learns the semantics of the categories of vehicle images so as to produce accurate detection and classification results.

The key contributions of this work are as follows.

• Vehicles are automatically detected and classified irrespective of brightness, lighting, occlusion, shadows, and fragmentation.

• The AlexNet model is used to classify vehicle types by customizing layers according to the problem domain.

• Deep learning automatically learns features through the filters in the convolutional layers.

• The system supports vehicle tracking in commercial parking areas and assists in counting the number of vehicles on a road.

2 Related Work

The classification of vehicles is a challenging problem in the field of vision-based surveillance. There is large within-class variability in vision-based vehicle detection systems, as vehicles may differ in color, size, and shape, illumination can vary, and backgrounds can be cluttered. Furthermore, the appearance of a vehicle depends on its pose and might be affected by neighboring objects [12]. Traditional image classification systems use shallow classification models, such as support vector machines (SVM) [13], Bayesian classifiers [14], random forests (RF) [15], and boosting [16], on top of extracted features such as local binary patterns (LBP), histogram of oriented gradients (HOG) [17], and scale-invariant feature transform (SIFT) [18]. These methods rely on hand-designed features, and shallow models trained on the original training data have limited fitness in representation learning [8]. Fu et al. [19] proposed a hierarchical multi-SVM method for vehicle classification. Other methods include a real-time system for multiple vehicle detection and classification using a Gaussian mixture model with a hole-filling algorithm (GMMHF), Gabor kernels for feature extraction, and a multi-class vehicle classifier [20]. The use of CNNs to classify images marks a massive revolution: some deep learning techniques surpass humans on tasks like face recognition and image classification [21–23]. Machine learning techniques have seen successful application and have been considered the best choice, compared to neural networks and support vector machines, for vehicle detection and classification [24]. Roadside LiDAR sensors [25], frequency-modulated continuous-wave (FMCW) radar signals [26], sensors for vehicle classification and counting [27], and distributed optical sensing technology in vibration-based vehicle classification systems [28] also play a significant role in vehicle classification.

3 Proposed System

Increased traffic has become an issue in many towns and cities, causing serious congestion problems. This paper develops a simple and efficient vehicle-type recognition system using a CNN model. The framework, as shown in Fig. 1, includes three steps: a) preparing the dataset; b) feature extraction by the CNN; and c) classification of test data.

Figure 1: Proposed framework for vehicle-type detection and classification

3.1 Preparing the Dataset

A deep learning-based approach requires a large amount of data to learn patterns effectively; the effective deployment of deep learning models requires abundant high-quality data [13]. To attain the desired accuracy, we apply four data-augmentation techniques to extend the dataset. Tab. 1 shows the extended dataset. Data augmentation includes flipping, skewing, rotation, and translation for invariance to geometric transformations. The second column of Tab. 1 lists the techniques, and the third column shows their invariance parameters.

The four augmentation techniques, with 16 parameter settings in total, extend each sample into 16 samples. Tab. 2 shows the eight classes of vehicles: bike, bus, car, horse buggy, jeep, rickshaw, truck, and van. The third column presents the number of images of each vehicle type before and after augmentation. The purpose of augmentation is the effective deployment of the deep learning model; it also helps to avoid overfitting, in which the model merely memorizes details of the training images. All of the images in the dataset are preprocessed to a size of 227 (width) × 227 (height) × 3 (color channels) to prepare for training. A sketch of such a pipeline is given after Tabs. 1 and 2.

Table 1: Different techniques of data augmentation with their respective parameters

Table 2: Statistics of the vehicle-type dataset before and after augmentation
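As a rough illustration, such a pipeline could be assembled with MATLAB's Deep Learning Toolbox as below. The folder name, the parameter ranges, and the use of on-the-fly augmentation (rather than extending the dataset offline to 16 samples per image, as done in the paper) are assumptions of the sketch, not the study's exact settings:

```matlab
% Illustrative data-augmentation pipeline. Assumes images are stored in
% subfolders named after their vehicle class; parameter ranges are
% placeholders, not the exact values of Tab. 1.
imds = imageDatastore('vehicleDataset', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

augmenter = imageDataAugmenter( ...
    'RandXReflection', true, ...          % flipping
    'RandRotation', [-15 15], ...         % rotation (degrees)
    'RandXShear', [-10 10], ...           % skew (degrees)
    'RandXTranslation', [-10 10], ...     % translation (pixels)
    'RandYTranslation', [-10 10]);

% Resize every image to 227 x 227 x 3 and apply augmentations on the fly.
augimds = augmentedImageDatastore([227 227 3], imds, ...
    'DataAugmentation', augmenter);
```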

3.2 Feature Extraction by CNN

In the proposed system, the AlexNet CNN architecture is fine-tuned [9], as shown in Fig. 2. AlexNet has eight layers: five convolutional (conv) layers, where conv1, conv2, and conv5 are each followed by a max-pooling layer, and three fully connected layers. Dataset features are extracted by applying a deep CNN model containing multiple convolutional layers. Each convolutional layer generates a feature map (fmap) that represents a higher-level abstraction of the input data. The fmaps of the early convolutional layers extract low-level features such as color, shape, corners, and edges, while the fmap of the last convolutional layer contains the high-level features, which are forwarded to the fully connected layers for classification. Tab. 3 provides an architectural analysis of each layer of the fine-tuned AlexNet model. The first convolutional layer is produced by applying filters of size 11×11×3 to an image of size 227×227×3; images are convolved with their respective filters. The convolutional layers detect the same features at different locations in an image. The first layer learns the edges and blobs of the images in the dataset through its 34,944 learnable parameters.

Figure 2: Architecture of the fine-tuned AlexNet CNN model

Table 3: Layer-wise analysis of the CNN architecture of the AlexNet model

The number of parameters of a convolutional layer is formulated as

$$W_C = K^2 \times C \times N, \qquad B_C = N, \qquad P_C = W_C + B_C$$

where

$W_C$ = number of weights of the convolutional layer,

$B_C$ = number of biases of the convolutional layer,

$P_C$ = number of parameters of the convolutional layer,

$K$ = size of the kernels used in the convolutional layer,

$N$ = number of kernels,

$C$ = number of channels of the input image.
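As a numerical check, for conv1 of the model ($K = 11$, $C = 3$ as stated above, and taking the standard AlexNet value of $N = 96$ kernels):

$$P_C = (K^2 \times C + 1) \times N = (11^2 \times 3 + 1) \times 96 = 34{,}944,$$

which matches the conv1 parameter count given earlier.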

The size $O$ of the output tensor (image) of the max-pooling layer is formulated as

$$O = \frac{I - P_s}{S} + 1$$

where

$O$ = size of the output image,

$I$ = size of the input image,

$S$ = stride of the pooling operation,

$P_s$ = pool size.
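For example, with the standard AlexNet pool1 settings (the 55×55 conv1 output pooled with $P_s = 3$ and $S = 2$):

$$O = \frac{55 - 3}{2} + 1 = 27,$$

so pool1 produces 27×27 feature maps.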

The number of parameters of a fully connected (FC) layer is formulated as

$$W_{ff} = F \times F_{-1}, \qquad B_{ff} = F, \qquad P_{ff} = W_{ff} + B_{ff}$$

where

$W_{ff}$ = number of weights of an FC layer that is connected to another FC layer,

$B_{ff}$ = number of biases of an FC layer that is connected to another FC layer,

$P_{ff}$ = number of parameters of an FC layer that is connected to another FC layer,

$F$ = number of neurons in the FC layer,

$F_{-1}$ = number of neurons in the previous FC layer.
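For instance, for AlexNet's fc7 layer, where $F = 4096$ neurons are fed by a previous FC layer with $F_{-1} = 4096$ neurons:

$$P_{ff} = 4096 \times 4096 + 4096 = 16{,}781{,}312.$$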

Fig. 3 shows the features extracted from an image by the deep CNN model: conv1 extracts edges and blobs, conv2 and conv3 extract texture, conv4 and conv5 extract object parts, and the last fully connected layer detects the object classes.

3.3 Vehicle Type Classification

We discuss the classification of multiple categories of vehicles through several experiments on the dataset. The parameters of the AlexNet model were customized for optimal results. The network was trained by splitting the dataset into 70% for training and 30% for validation. Through transfer learning, the activations of the pre-trained model carry patterns learned on a different dataset. All the layers of the pre-trained network were retained except the last three, which were configured for 1000 classes; we fine-tuned these three layers for our classification problem. Training was optimized by setting the minibatch size to 7, the maximum number of epochs to 35, and the learning rate to 1e-5. Experiments were run in MATLAB with the Deep Learning Toolbox. The proposed model was trained on a GPU, with an elapsed time of 47 s. The model took random images from the validation dataset and labeled them according to their type. Fig. 4 shows random images of vehicles with the class labels predicted on the testing (validation) dataset. Images were correctly classified according to their type, except the one in the third row and first column, which suggests that the model needs more training data for that particular class to give better results. A sketch of this fine-tuning setup in MATLAB follows Figs. 3 and 4.

Figure 3: Feature extraction of the five convolutional layers

Figure 4: Random images of vehicles from the training dataset
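The transfer-learning setup described above might look as follows in MATLAB. This is a sketch under the stated settings: `imds` is the datastore from the earlier augmentation sketch, the variable names are illustrative, and for brevity the raw datastores are passed to `trainNetwork` directly (in practice the augmented datastore of Section 3.1 would be used for training):

```matlab
% Sketch of the transfer-learning setup. Assumes the AlexNet support
% package is installed and images are already 227 x 227 x 3.
net = alexnet;                           % pre-trained on 1000 ImageNet classes
layersTransfer = net.Layers(1:end-3);    % keep all but the last three layers

numClasses = 8;                          % bike, bus, car, horse buggy, jeep,
                                         % rickshaw, truck, van
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses)      % re-learn the final FC layer
    softmaxLayer
    classificationLayer];

% 70/30 split and training options matching the settings in the text.
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.7, 'randomized');
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 7, ...
    'MaxEpochs', 35, ...
    'InitialLearnRate', 1e-5, ...
    'ValidationData', imdsVal, ...
    'ExecutionEnvironment', 'gpu');

trainedNet = trainNetwork(imdsTrain, layers, options);
```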

3.4 Evaluation Method

We use accuracy as the evaluation metric. It is computed with the help of a confusion matrix, as shown in Fig. 5, which displays the performance of the algorithm for each class of vehicle. Accuracy is the percentage of correctly predicted samples in the entire testing dataset, and is formulated as

$$\text{Accuracy} = \frac{\text{number of correctly classified samples}}{\text{total number of samples}} \times 100\%.$$

A minimal sketch of this computation follows Fig. 5.

Figure 5: Confusion matrix of the optimized vehicle classification approach
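A minimal sketch of the evaluation in MATLAB, assuming the `trainedNet` and `imdsVal` variables from the previous sketch:

```matlab
% Classify the validation set and compute overall accuracy.
YPred = classify(trainedNet, imdsVal);
YTrue = imdsVal.Labels;
accuracy = mean(YPred == YTrue) * 100;   % percentage of correct predictions

% Per-class performance as a confusion matrix (cf. Fig. 5).
confusionchart(YTrue, YPred);
```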

4 Results and Discussion

We discuss the experimental assessment of the detection and classification of vehicle types. We evaluated the dataset with the stochastic gradient descent with momentum (SGDM) and adaptive moment estimation (Adam) algorithms, with epochs ranging from 10 to 40. The optimizers were used to adjust the weights of the CNN so as to reduce the loss. During training, optimization is a key component that helps the model adjust its weights during backpropagation [29,30]. The basic gradient-descent update is formulated as:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$

where $j = 0, 1$ represents the feature index number.

4.1 Results with SGDM Optimizer

SGD with momentum (SGDM) accelerates the gradient vectors in the right direction, leading to faster convergence [31]. We trained the model with the SGDM optimizer using different numbers of epochs. The SGDM update is formulated as:

$$\theta := \theta - \alpha \nabla J\left(\theta;\, h^{(i)}, h^{(j)}\right)$$

where

$h^{(i)}, h^{(j)}$ = the training data,

$\theta$ = the updated weight,

$\alpha$ = the learning rate,

$\nabla J$ = the gradient of the cost function.

The model was first trained with 10 epochs, giving 94.06% accuracy with an elapsed time of 15 s. For better performance, the model was trained further with 15, 20, 25, 30, 35, and 40 epochs. The number of iterations is directly proportional to the number of epochs. The system showed its best result at epoch 35, with 95.02% accuracy; performance then started to decline due to overfitting.

4.2 Results with Adam Optimizer

The Adam optimizer iteratively updates the network weights during training [29]. After training the model with the SGDM optimizer, it was assessed with the Adam optimizer for comparison. The Adam update is formulated as:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2,$$

$$\theta_{t+1} = \theta_t - \frac{\alpha}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t,$$

where $m_t$ and $v_t$ are the first and second moment estimates of the gradient $g_t$, respectively, $\hat{m}_t$ and $\hat{v}_t$ are their bias-corrected versions, and $\beta_1$, $\beta_2$, and $\epsilon$ are the hyperparameters of the method.

The model was trained with the same numbers of epochs as previously described. The best result obtained with the Adam optimizer was at 35 epochs, with 90.10% accuracy. The literature shows that SGDM performs better than Adam in reducing the loss [31]. The experimental results in Tabs. 4 and 5 show that the accuracy of the model with SGDM was 95.02% with a training time of 46 s, while the accuracy with Adam was 90.10% with a training time of 50 s. Therefore, SGDM provided better accuracy than Adam. In MATLAB, this comparison amounts to swapping the solver name passed to trainingOptions, as sketched below.
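A sketch of the optimizer swap, reusing the variables from the earlier training sketch; Adam's default moment hyperparameters ($\beta_1 = 0.9$, $\beta_2 = 0.999$) apply unless overridden:

```matlab
% Same training setup, but with the Adam solver for comparison.
optionsAdam = trainingOptions('adam', ...
    'MiniBatchSize', 7, ...
    'MaxEpochs', 35, ...
    'InitialLearnRate', 1e-5, ...
    'ValidationData', imdsVal);
netAdam = trainNetwork(imdsTrain, layers, optionsAdam);
```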

4.3 Model Accuracy with SGDM Optimizer and Different Learning Rates

For more convincing results, the model was evaluated with different learning rates at 35 epochs, the setting that gave the best results in Tabs. 4 and 5. The learning rate determines the step size at each iteration of an optimization algorithm; it is a tuning parameter for minimizing the loss function [32,33]. The model was trained again with SGDM and 35 epochs, with learning rates ranging from 1e-3 to 1e-6. The learning rate affected how quickly the model converged toward local minima. Setting the learning rate to α = 1e-5 improved the performance of our model from 95.02% to 96.04%, as shown in Tab. 6. A sketch of this sweep follows Tabs. 4–6.

Table 4: Model accuracy with the SGDM optimizer

Table 5: Model accuracy with the Adam optimizer

Table 6: Model accuracy with the SGDM optimizer at different learning rates
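The learning-rate sweep of Tab. 6 could be reproduced with a simple loop; a sketch assuming the `layers`, `imdsTrain`, and `imdsVal` variables from the earlier sketches:

```matlab
% Sweep the initial learning rate from 1e-3 to 1e-6 with SGDM at 35 epochs.
learnRates = [1e-3 1e-4 1e-5 1e-6];
valAccuracy = zeros(size(learnRates));
for k = 1:numel(learnRates)
    opts = trainingOptions('sgdm', ...
        'MiniBatchSize', 7, 'MaxEpochs', 35, ...
        'InitialLearnRate', learnRates(k), ...
        'ValidationData', imdsVal);
    net_k = trainNetwork(imdsTrain, layers, opts);
    YPred = classify(net_k, imdsVal);
    valAccuracy(k) = mean(YPred == imdsVal.Labels) * 100;  % cf. Tab. 6
end
```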

Figs. 6 and 7 show the training progress of the model used in this study. The upper graphs show the accuracy of the model, measured by the performance on a set of samples from the test data. The lower graphs show the loss of the model; the gradient of the loss with respect to the parameters drives learning [34], and the loss itself shows the gap between the actual and expected output scores. During the first epoch, accuracy increased from 20% to 80% as the backpropagated gradients updated most of the filter weights for our classification task. For this, SGDM was set to a learning rate of 0.00001, which allowed fine-tuning to make progress in the remaining epochs. Accuracy then fluctuated between 80% and 95% because most of the weights had already been trained. Similarly, during the first epoch, the loss decreased dramatically from 2.5 to 0.5 across iterations as the model minimized the error by updating the weights. By the completion of 35 epochs, the proposed model reached 96.04% accuracy on the validation dataset.

Figure 6: Best result with the SGDM optimizer

Figure 7: Best result with the Adam optimizer

4.4 Comparative Analysis

We evaluated the proposed method against state-of-the-art deep learning methods [35]. Dong used a CNN for automatic feature extraction, classifying four categories of vehicles with 89.4% accuracy. Another method, based on a comparative analysis of ANN, SVM, and logistic regression, classified small and large vehicles with 93.4% accuracy. Huttuman automatically extracted features using deep neural networks and classified four classes of vehicles with 97% accuracy. Adu Gyamfi used a deep CNN to classify 13 vehicle classes with 89% accuracy. The last two methods mentioned in Tab. 7 used LeNet, AlexNet, VGG-16, and an Inception module for vehicle classification, with respective accuracies of 80% and 80.3%, which are much lower than that of our proposed system. Our method achieved better accuracy than all of the above methods except the approach of Huttuman, whose accuracy was higher but which could classify only four classes of vehicles, while the proposed method classifies eight. Tab. 7 shows the comparative analysis of the proposed method.

Table 7: Comparative analysis of the proposed method

5 Conclusion

Our method detects and classifies multiple classes of vehicles through a deep learning model. It can support vehicle tracking for surveillance in large parking areas where security is a concern. It can help resolve traffic issues by directing large vehicles to one side of a road and keeping traffic moving based on knowledge of which vehicles are ahead in a queue. This research can also help to classify vehicles in parking zones and automatically issue tickets according to vehicle type. The accuracy of the model can be improved by increasing the sample size.

Acknowledgement: We acknowledge the overall paper editing support of Dr. Sheroz Khan and Dr. Muhammad Islam, Department of Electrical Engineering and Renewable Engineering, College of Engineering & Information Technology, Onaizah Colleges, Al-Qassim, Saudi Arabia. We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

Funding Statement: This work is supported by the Department of Information Technology, College of Computer, Qassim University, Buraidah 51452, Saudi Arabia.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
