
Intelligent Autonomous-Robot Control for Medical Applications

Computers, Materials & Continua, 2021, Issue 8

Rihem Farkh,Haykel Marouani,Khaled Al Jaloud,Saad Alhuwaimel,Mohammad Tabrez Quasim and Yasser Fouad

1College of Engineering,Muzahimiyah Branch,King Saud University,Riyadh,11451,Saudi Arabia

2Laboratory for Analysis,Conception and Control of Systems,Department of Electrical Engineering,National Engineering School of Tunis,Tunis El Manar University,1002,Tunisia

3King Abdulaziz City for Science and Technology,Riyadh,12354,Saudi Arabia

4College of Computing and Information Technology,University of Bisha,Bisha,67714,Saudi Arabia

Abstract: The COVID-19 pandemic has exposed a shortage of healthcare facilities capable of coping with a pandemic. It has also underscored the immediate need to rapidly build hospitals that can handle infectious patients and to rapidly reconfigure supply lines to manufacture the prescription goods (including medicines) needed to prevent infection and to treat infected patients. COVID-19 has shown the utility of intelligent autonomous robots that assist human efforts to combat a pandemic. Artificial intelligence based on neural networks and deep learning can help fight COVID-19 in many ways, particularly in the control of autonomous medical robots. Health officials aim to curb the spread of COVID-19 among medical staff, nursing staff, and patients by using intelligent robots. We propose an advanced controller for a service robot to be used in hospitals. This type of robot is deployed to deliver food and dispense medications to individual patients. It is an autonomous line-follower robot that can sense and follow a line drawn on the floor and drive through patients' rooms while controlling its direction. These requirements were met by using two controllers simultaneously: a deep neural network controller to predict the trajectory of movement and a proportional-integral-derivative (PID) controller for automatic steering and speed control.

Keywords: Autonomous medic robots; PID control; neural network control system; real-time implementation; navigation environment; differential drive system

1 Introduction

The use of robotics, automation applications, and artificial intelligence in public healthcare is growing daily [1-3]. Robots support doctors and medical personnel in performing complicated functions with accuracy and in reducing the workload of medical staff, thereby improving the efficacy of healthcare services [4]. To minimize the spread of COVID-19, several work functions have been allocated to robots, such as cleaning and food-processing jobs in contaminated areas [5]. In hospitals, service robots are mobile robots with high payload capability but restricted degrees of freedom (Fig. 1). Surgical robots, by contrast, are precise, flexible, and reliable systems with a minimal error margin, typically within millimeters [6,7].

Mobile robots are machines controlled by software and integrated sensors, including infrared, ultrasonic, webcam, GPS, and magnetic sensors, and they can move from one location to another to perform complex tasks [8]. Wheels and DC motors are used to move the robots [9]. In addition to their use in healthcare, mobile robots are used in agricultural, industrial, military, and search-and-rescue applications, helping humans to accomplish complicated tasks [10]. Line-follower robots can be used in many industrial logistics applications, such as the transport of heavy and dangerous materials, as well as in the agriculture sector and in library inventory management systems. These robots are also capable of monitoring patients in hospitals and warning doctors of concerning symptoms [11].

Figure 1: Line-follower service robots

A growing number of researchers have focused on smart-vehicle navigation because traditional tracking techniques are limited by the environmental instability under which a vehicle moves. Therefore, intelligent control mechanisms, such as neural networks, are needed. They solve the vehicle-navigation problem by learning the non-linear relationship between inputs and sensor values. A combination of computer vision techniques and machine learning algorithms is necessary for autonomous robots to develop “true consciousness” [12]. Several attempts have been made to improve low-cost autonomous cars using different neural network configurations. For example, the use of convolutional neural networks (CNNs) has been proposed for self-driving vehicles [13]. A collision prediction system has been constructed that combines a CNN with a method of stopping a robot in the vicinity of the target point while avoiding a moving obstacle [14]. CNNs have also been proposed for autonomous driving control systems to keep robots in their allocated lanes [15]. A multilayer perceptron network, implemented on a PC with an Intel Pentium 350 MHz processor, has been used for mobile-robot motion planning [16]. In another study, the problem of navigating a mobile robot was solved using a local neural network model [17].

Several motion control methods have been proposed for autonomous robots: proportional-integral-derivative (PID) control, fuzzy control, neural network control, and combinations of these control algorithms [18]. PID control is used in most motion control applications, and PID control methods have been extended with deep-learning techniques to achieve better performance and higher adaptability. High-end robots with dynamic and reasonably accurate movement often require these control algorithms for operation. For example, a fuzzy PID controller has been used for a differential-drive autonomous mobile-robot trajectory application [19]. A PID controller has also been designed to enable a laser-sensor-based mobile robot to detect and avoid obstacles [20].

In this paper, we describe the construction of a low-cost controller for the smooth operation of an autonomous line-follower robot: a PID controller combined with a multilayer feed-forward network trained with the back-propagation algorithm, implemented on an Arduino controller board.

2 Autonomous Line-Follower Robot Architecture

In this section, the architecture and system block diagram of the line-follower robot are described. First, a suitable configuration was selected to develop a line-follower robot using three infrared sensors connected through the Arduino Uno microcontroller board to the motor-driver integrated circuit (IC). This configuration is illustrated in the block diagram shown in Fig. 2.

Figure 2: Block diagram of the medic line-follower robot

The system implemented on the Arduino Uno ensures that the robot moves in the desired direction: the infrared sensors are read for line detection, the error is predicted, and the speeds of the left and right motors are determined by a PID controller from the predicted error. In operation, the line-follower robot therefore continuously reads the infrared sensors and controls the four DC motors, as outlined in the sketch below.
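A minimal sketch of this control cycle (not the exact firmware), assuming the three infrared sensors are wired to digital pins 2-4 and that predict_error(), pid_step(), and set_motor_speeds() stand for the neural-network, PID, and motor routines sketched in later sections, is the following (all pin numbers and names are illustrative):

/* Minimal sketch of the control cycle; pins and routine names are illustrative. */
const int SENSOR_PINS[3] = {2, 3, 4};   /* assumed wiring of the three IR sensors */
float sensors[3];

float predict_error(const float s[3]);  /* neural-network error estimate (Section 4.1) */
float pid_step(float e);                /* discrete PID update (Section 4.2) */
void  set_motor_speeds(float u);        /* differential-drive PWM output (Section 3.1) */

void loop() {
  for (int i = 0; i < 3; i++) {
    sensors[i] = digitalRead(SENSOR_PINS[i]);   /* read the line-tracking sensors */
  }
  float error = predict_error(sensors);         /* estimate deviation from the line */
  float pid_value = pid_step(error);            /* compute the steering correction */
  set_motor_speeds(pid_value);                  /* adjust the left/right wheel PWM */
  delay(10);                                    /* illustrative sampling period */
}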

2.1 Mobile Robot Construction

The proposed autonomous-robot design (Fig. 3) can easily be modified and adapted to new research studies. The physical appearance of the robot was evaluated, and its design was based on several criteria, including functionality, material availability, and mobility.

Seven part types were used in the construction of the robot:

(1) Four wheels.

(2) Four DC motors.

(3) Two base structures.

(4) A controller board (Arduino Uno, based on the ATmega328P).

(5) An L298N IC circuit for DC-motor driving.

(6) An expansion board.

(7) A line-tracking module.

Figure 3: Medic line-follower robot prototype

2.2 Arduino Uno Microcontroller

The Arduino Uno is a microcontroller board based on the ATmega328P (Fig. 4). It has 14 digital I/O pins (six of which can be used as pulse-width modulation (PWM) outputs), six analog inputs, a USB connector, a power jack, an ICSP header, a reset button, and a 16-MHz quartz crystal. The Arduino microcontroller board is therefore well suited to generating the PWM signals that drive the DC motors [21].

Figure 4: Arduino Uno based on the ATmega328P

2.3 Tracking Sensor

The line-tracking sensor used is capable of detecting white lines on a black background and black lines on a white background (Fig. 5). Each line-tracking channel provides a stable TTL output signal for more accurate and more stable line tracking. Multi-channel operation can easily be achieved by installing the necessary number of line-tracking sensors [22].

Specifications:

Power supply: +5 V; operating current: <10 mA

Operating temperature range: 0°C to +50°C

Output interface: three-wire interface (1: signal, 2: power, 3: power-supply negative); output level: TTL

Figure 5: Tracking sensor

2.4 L298N Motor Driver

The L298N motor driver, which consists of two complete H-bridge circuits, is capable of driving a pair of DC motors. This makes it suitable for use in robotics, because most robots run on two or four driven wheels, with a drive current of up to 2 A per channel and a supply voltage of 5 to 35 V DC. Fig. 6 shows the pin assignments of the L298N dual H-bridge module. IN1 and IN2 on the L298N board are used to control the motor direction, and the PWM signal sent to the Enable pin of the L298N board sets the motor speed [22], as in the short sketch below.
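As a brief illustration (a sketch only; the pin assignments are assumptions, not the wiring of Fig. 7), one L298N channel can be driven from the Arduino as follows:

/* Sketch of driving one L298N channel; the pin assignments are illustrative. */
const int IN1 = 7, IN2 = 8, ENA = 5;   /* direction pins and PWM-capable Enable pin */

void setup() {
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(ENA, OUTPUT);
}

void loop() {
  digitalWrite(IN1, HIGH);   /* IN1/IN2 levels select the rotation direction */
  digitalWrite(IN2, LOW);
  analogWrite(ENA, 150);     /* PWM duty cycle on the Enable pin sets the speed */
}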

Figure 6: L298N IC motor driver

Fig. 7 shows a diagram indicating the wire connections used for the design and implementation of the mobile robot. This diagram includes the embedded system, the line-tracking sensors, the DC motors, and the L298N IC circuit for driving the motors.

3 Neural Network-Based PID Controller for a Line-Follower Robot

A neural network-based PID controller was designed to control the robot. The sensors were numbered from left to right. An error value of zero means that the robot is precisely centered on the line. A positive error value means that the robot has deviated to the left, and a negative error value means that it has deviated to the right. The error value ranges from -2 to +2, where ±2 corresponds to the maximum deviation (Fig. 8).

Figure 7: Connection wires

Figure 8: Errors and sensor positions

The primary advantage of this approach is that the three sensor readings are replaced with a single error term, which can be fed to the controller to compute the motor speeds so that the error term becomes zero. The output of the line-tracking sensor array is fed to the neural network, which estimates the error term from the sensor values.

The estimated error was then used by the PID controller to generate a new output value, which determined the left and right motor speeds of the line-follower robot.

Fig. 9 shows the control algorithm used for the line-follower robot.

Figure 9: Mobile-robot motion control based on a proportional-integral-derivative (PID) neural network controller

3.1 PID Controller

The objective of a PID controller is to keep a measured variable close to the desired set-point by adjusting a control output. Its performance can be “tuned” by adjusting three parameters: Kp, Ki, and Kd. The well-known PID controller equation is shown here in continuous form:

u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt (1)

where Kp, Ki, and Kd refer to proportional, integral, and derivative gain constants, respectively.

For implementation in a discrete form, the controller equation is modified using the backward Euler method for numerical integration, as in

u(kT) = Kp e(kT) + Ki T Σ(i=1..k) e(iT) + Kd [e(kT) - e((k-1)T)]/T (2)

where u(kT) and e(kT) are the control and error signals in discrete time and T is the sampling period.

The PID controller determines both the left and right motor speeds from the error predicted from the sensor readings. The PID controller generates a control signal (PID value), which is used to determine the left and right wheel speeds of the robot. This is a differential drive system, in which a left turn is executed by reducing the speed of the left motor, and a right turn is executed by reducing the speed of the right motor. Right_Speed and Left_Speed are calculated using Eq. (3):

Right_Speed and Left_Speed are used to set the duty cycles of the PWM signals applied at the input pins of the motor-driver IC; a possible form of this computation is sketched below. The PID constants (Kp, Ki, Kd) were obtained using the Ziegler-Nichols tuning method (Tab. 1) [23]. Initially, Ki and Kd were set to 0. Kp was then increased from 0 until it reached the ultimate gain Ku, at which point the robot oscillated continuously. Ku and the oscillation period Tu were then used to tune the PID controller (see Tab. 1).
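Since Eq. (3) depends on the chosen nominal speed, the following is only a minimal sketch of how the PID output can be split into wheel speeds, assuming a nominal base PWM duty cycle and the sign convention of Fig. 8; the base speed, pin numbers, and names are illustrative:

/* Sketch of splitting the PID output into left/right wheel speeds. */
const int ENA_PIN = 5, ENB_PIN = 6;   /* PWM pins driving the L298N Enable inputs (assumed wiring) */
const int BASE_SPEED = 150;           /* nominal PWM duty cycle (0-255), illustrative value */

void set_motor_speeds(float pid_value) {
  /* A positive error (robot drifted left, Fig. 8) slows the right wheel,
     steering the robot back to the right. */
  int right_speed = constrain(BASE_SPEED - (int)pid_value, 0, 255);
  int left_speed  = constrain(BASE_SPEED + (int)pid_value, 0, 255);
  analogWrite(ENA_PIN, left_speed);   /* left-side motors */
  analogWrite(ENB_PIN, right_speed);  /* right-side motors */
}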

Table 1: Proportional-integral-derivative tuning parameters for controlling a mobile robot

After substantial testing, a classical PID controller was deemed unsuitable for the line-follower robot because of the changes in line curvature. To solve this problem, only the last three error values were summed in the integral term instead of accumulating all previous values. The modified controller equation is provided in Eq. (4):

u(kT) = Kp e(kT) + Ki T [e(kT) + e((k-1)T) + e((k-2)T)] + Kd [e(kT) - e((k-1)T)]/T (4)

This technique provides a satisfactory performance for path tracking.

3.2 Artificial Neural Network Mathematical Model

An artificial neural network (ANN) is a computing system that consists of many simple but highly interconnected processing elements, which process information through their response to external inputs. The ANN presented in this work was built with multiple weighted hidden layers and trained as a feed-forward network in a supervised manner using the back-propagation algorithm, a method widely used in many applications [24]. ANNs are well suited to human-like tasks, such as image processing, speech recognition, robotic control, and power-system protection and control management. An ANN can be compared to a human brain. The human brain operates rapidly and consists of many neurons, or nodes. Each signal or piece of information travels through a neuron, where it is processed, calculated, and manipulated, and is then transferred to the next neuron. The processing speed of each individual neuron may be slow, but the overall network is very fast and efficient.

3.3 ANN Structure

The first layer of an ANN is the input layer, and the last layer is the output layer. Each piece of information is processed through the hidden (intermediate) layers and the output layer. The signals or data are manipulated, calculated, and processed in each layer before being transferred to the next layer. When information is processed through the layers of an ANN, more accurate results can often be obtained as the number of hidden layers increases. The ANN operates through these layers, calculating and updating its results as it is trained with input data. This makes the network not only efficient but also capable of predicting future outcomes and making the necessary decisions, which is why ANNs are often compared to the neuron cells of the human brain.

The ANN developed in this study has three inputs, or input neurons (Fig. 10): the first input is dedicated to the left sensor, the second to the middle sensor, and the third to the right sensor. Following common practice, the first hidden layer is composed of the same number of neurons as the input layer. The second hidden layer comprises two neurons, and the output layer comprises one neuron for error prediction. The hidden layers use the sigmoid function as the activation function, and the output layer uses a linear function.

Figure 10: Neural network structure for controlling a mobile robot

The three-layer feed-forward network was trained in a supervised manner using the back-propagation algorithm. Therefore, three functions needed to be constructed: one from the inputs to the first hidden layer, another from the first hidden layer to the second hidden layer, and a third from the second hidden layer to the output layer (the target). Fig. 11 shows a block diagram illustrating the equations of the final output.

Figure 11: Neural network equations
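Consistent with the Keras model defined in Section 4.3 (which uses no bias terms), these equations can be summarized as

h1 = sigmoid(W1 · x), h2 = sigmoid(W2 · h1), e = W3 · h2,

where x is the vector of the three sensor readings, W1, W2, and W3 are the weight matrices of the first hidden, second hidden, and output layers, and e is the predicted error.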

4 Implementation of PID Neural Network Controller

The ANN algorithm developed for the proposed mobile robot was implemented in the embedded system and programmed in the C language.

There are some challenges associated with implementing an ANN on a very small system. These challenges were more significant in the early days of inexpensive microcontrollers and hobbyist boards. However, the Arduino, like many of today's boards, is capable of completing the required computations quickly.

The Arduino Uno used in this work is based on Atmel's ATmega328P microcontroller. Its 2 KB of SRAM is adequate for a small network with three inputs, five hidden cells, and one output. By leveraging the GCC-based Arduino toolchain's support for multidimensional arrays and floating-point math, the programming process becomes very manageable.
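For the network used here (three inputs, hidden layers of three and two neurons, one output, and no bias terms), there are 3×3 + 3×2 + 2×1 = 17 weights in total; stored as 4-byte floats, they occupy only 68 bytes of the 2 KB of SRAM, leaving ample memory for the rest of the program.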

The neural network architecture and the training procedure were designed and monitored offline using the Keras Python library.

4.1 ANN Implementation on Arduino

A single neuron receives an input (In) and produces an output activation (Out). The activation function computes the neuron's output from the sum of the weighted connections feeding that neuron. The most common activation function, the sigmoid function, is the following:

Out = 1/(1 + e^(-In))

To compute the hidden units' activations, a short C routine was used.
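A minimal sketch of such a routine, assuming the trained weights are stored in arrays named Weights_1[3][3] and Weights_2[3][2], the sensor readings in Input[3], and the results in Hidden1[3] and Hidden2[2] (all names are illustrative), is:

#include <math.h>

float Input[3], Hidden1[3], Hidden2[2];
float Weights_1[3][3], Weights_2[3][2];    /* filled with the trained weights */

void compute_hidden_activations(void) {
  for (int j = 0; j < 3; j++) {            /* first hidden layer: 3 sigmoid neurons */
    float sum = 0.0;
    for (int i = 0; i < 3; i++) {
      sum += Input[i] * Weights_1[i][j];   /* weighted sum, no bias term */
    }
    Hidden1[j] = 1.0 / (1.0 + exp(-sum));  /* sigmoid activation */
  }
  for (int j = 0; j < 2; j++) {            /* second hidden layer: 2 sigmoid neurons */
    float sum = 0.0;
    for (int i = 0; i < 3; i++) {
      sum += Hidden1[i] * Weights_2[i][j];
    }
    Hidden2[j] = 1.0 / (1.0 + exp(-sum));
  }
}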

To compute the output unit's activation, a second C routine was used.
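Continuing the same sketch, with Weights_3[2][1] holding the weights from the second hidden layer to the single linear output neuron (again an illustrative name):

float Weights_3[2][1];                    /* hidden-2 to output weights */

float compute_output_activation(void) {
  float out = 0.0;
  for (int i = 0; i < 2; i++) {
    out += Hidden2[i] * Weights_3[i][0];  /* linear output: weighted sum only */
  }
  return out;                             /* predicted line-position error */
}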

4.2 Implementation of PID Controller on Arduino

The digital PID controller described in Section 3.1 was implemented directly in the Arduino program.
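A minimal sketch of such a controller, using the modified integral term of Eq. (4) (only the last three errors are summed, with the sampling period absorbed into the gains) and placeholder gains to be set to the tuned values of Tab. 1, is:

float Kp = 0.0, Ki = 0.0, Kd = 0.0;   /* set to the tuned values from Tab. 1 */
float e1 = 0.0, e2 = 0.0;             /* the two previous error samples */

float pid_step(float e) {
  float integral   = e + e1 + e2;     /* sum of the last three errors (Eq. (4)) */
  float derivative = e - e1;          /* backward difference */
  float u = Kp * e + Ki * integral + Kd * derivative;
  e2 = e1;                            /* shift the error history */
  e1 = e;
  return u;                           /* PID value used to split the wheel speeds */
}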


4.3 Implementation of the Proposed Neural Network Using Python and Keras

Keras is a powerful and easy-to-use Python library for developing and evaluating deep-learning models. It wraps the numerical computation libraries Theano and TensorFlow and allows users to define and train neural network models in a few short lines of code.

1-The random number generator was initialized with a fixed seed value so that the same code could be run repeatedly and produce the same result.

The code:

from keras.models import Sequential

from keras.layers import Dense

import numpy

numpy.random.seed(7)

2-The models in Keras are defined as a sequence of layers.A fully-connected network structure with three layers was used.Fully-connected layers were defined using the Dense class.

The first layer was created with the input_dim argument by setting it to three for the three input variables.

The number of neurons in the layer was specified as the first argument, and the activation function was specified using the activation argument.

The sigmoid activation function was used for the hidden layers, and a linear activation function was used for the output layer.

The first hidden layer had three neurons and expected three input variables (i.e., input_dim=3). The second hidden layer had two neurons, and the output layer had one neuron to predict the error.

The code:

model = Sequential()
model.add(Dense(3, input_dim=3, activation='sigmoid', use_bias=False))
model.add(Dense(2, activation='sigmoid', use_bias=False))
model.add(Dense(1, activation='linear', use_bias=False))

3-Once the model is defined, it can be compiled. A loss function must be specified to evaluate a set of weights, and an optimizer is used to search through different weights in the network. For the case evaluated here, the mse loss and the RMSprop algorithm were used.

The code:

# Compile model

model.compile(loss='mse', optimizer='RMSprop', metrics=[rmse])

The proposed model can be trained, or fitted, to the loaded data by calling the fit() function. The training process runs for a fixed number of iterations through the dataset, called epochs. The batch size, which is the number of instances evaluated before a weight update is performed, was chosen by trial and error.

The code:

# Fit the model

model.fit(X, Y, epochs=10000, batch_size=10)

The proposed neural network was trained on the entire dataset, and its performance was evaluated on the same dataset.

The code:

# evaluate the model

scores = model.evaluate(X, Y)

The complete code

from keras.models import Sequential
from keras.layers import Dense
import numpy
from keras import backend
# from sklearn.model_selection import train_test_split
# from matplotlib import pyplot

# fix random seed for reproducibility
numpy.random.seed(7)

def rmse(y_true, y_pred):
    return backend.sqrt(backend.mean(backend.square(y_pred - y_true), axis=-1))

# split into input (X) and output (Y) variables
X = numpy.loadtxt('input_31.csv', delimiter=',')
Y = numpy.loadtxt('output_31.csv', delimiter=',')
# (trainX, testX, trainY, testY) = train_test_split(X, Y, test_size=0.25, random_state=6)

# create model
model = Sequential()
model.add(Dense(3, input_dim=3, activation='sigmoid', use_bias=False))
model.add(Dense(2, activation='sigmoid', use_bias=False))
model.add(Dense(1, activation='linear', use_bias=False))

# Compile model
model.compile(loss='mse', optimizer='RMSprop', metrics=[rmse])

# Fit the model
model.fit(X, Y, epochs=10000)

# evaluate the model
# result = model.predict(testX)
# print(result - testY)
scores = model.evaluate(X, Y)

Keras was used to train the proposed ANN on data corresponding to specific sensor positions relative to the path line. These sensor positions are shown in Fig. 8 and listed in Tab. 2.

Table 2: Follower-logic target/neural network manual training data

After running 1000 iterations with a decaying learning rate, the weights of each dense layer were obtained with a root-mean-square error of rmse = 0.13%.

The weights connecting the input layer to the first hidden layer were stored in the dense_1 matrix, which has three rows and three columns:

dense_1 =
[[ 1.5360345, -3.6838768, -1.076581 ],
 [-1.1123676, -2.4534192,  1.8363825],
 [-3.039003,   1.5306051,  2.718451 ]]

dense_2 =
[[-2.8450463,  1.5029238],
 [ 3.0052474, -0.7608043],
 [-0.91276926, -2.0846417]]

dense_3 =
[[ 2.7599642],
 [-3.3178754]]
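For deployment on the Arduino, these weights could be embedded in the program as constant arrays, for example (matching the illustrative array names used in Section 4.1, in place of the uninitialized declarations shown there):

const float Weights_1[3][3] = {
  { 1.5360345, -3.6838768, -1.076581 },
  {-1.1123676, -2.4534192,  1.8363825},
  {-3.039003,   1.5306051,  2.718451 }
};
const float Weights_2[3][2] = {
  {-2.8450463,  1.5029238},
  { 3.0052474, -0.7608043},
  {-0.91276926, -2.0846417}
};
const float Weights_3[2][1] = {
  { 2.7599642},
  {-3.3178754}
};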

A comparison between real targets and the predictions calculated by Keras is presented in Tab.3.

Table 3: Comparison between real targets and predictions calculated by Keras

The vehicle with the trained neural network controller was successfully tested on several paths. We found that its speed adapted to the track: the robot moved faster on straight sections and reduced its speed on curves.

5 Conclusions

We developed a mobile-robot platform with a fixed four-wheel chassis and a combined PID and deep-learning controller to guarantee smooth line tracking. We found that the proposed method is well suited to mobile robots because it is capable of operating with imprecise information. More advanced CNN-based controllers can be developed in the future.

Acknowledgement:The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for its funding of this research through the Research Group No.RG-1439/007.

Funding Statement:The authors received funding for this research through Research Group No.RG-1439/007.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
