
Design of humanoid robot shooting system based on OpenMv

Machine Tool & Hydraulics (機床與液壓), 2020, No. 12 (2020-07-24)

Guo-chen NIU, Tong ZHU

(College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China)

Abstract: To address the long processing time and low efficiency of humanoid robots that use a linear CCD as the image acquisition module, a humanoid robot shooting system based on OpenMv is designed. The hardware and software are designed first; the acquired image is then converted to the HSV color model, image segmentation is used to separate the target from the background, and the centroid coordinates are calculated. Finally, the expert control strategy outputs the corresponding signals to adjust the robot's movement. Experiments show that the system has high reliability and adaptability; it won the first prize in the robot competition of the five provinces (municipalities and autonomous regions) of North China, which verifies the applicability, reliability and effectiveness of the method.

Key words: OpenMv, Humanoid robot, Image segmentation, Target recognition, Expert control

1 Introduction

Humanoid robots are favored by researchers because of their wide range of applications in medical care, entertainment and services, and because they serve as an ideal platform for evaluating technologies from multiple disciplines such as artificial intelligence; they are one of the hot topics in robotics research. Compared with wheeled, tracked and peristaltic robots, humanoid robots are better adapted to the human living environment, have a broader workspace, a richer set of actions and higher energy efficiency, so their research has important application value [1]. This paper designs a humanoid shooting robot for the robot competition of the five provinces of North China. The competition venue is made of white solid-wood particle board. The task requires the robot to grab a ball (10 cm in diameter) at the starting point, carry it to the pitching area in front of a basket 50 cm away from the starting point (the basket rim is 0.7 m or lower), and throw the ball into the basket from the pitching area.

Environmental perception is the premise and basis for autonomous decision-making by humanoid robots [2]. Humanoid robots generally use cameras to obtain environmental information; the processor's computing power and the camera's size and weight constrain which camera can be used. At present, color-block segmentation is often used for target recognition, but it suffers from inaccurate recognition and strong environmental interference. For target positioning, Jamzad et al. [3] proposed a geometric positioning model of the target; since their robot is a wheeled mobile robot with a fixed camera pose, the model is simple and effective, but it is susceptible to the surrounding environment. Bandlow [4] obtained the positioning of the robot and of the target by recognizing the relative position between a fixed landmark and the target object, but when the landmark is far from the robot the positioning accuracy is low, so the positioning error of the target object is large.

Therefore, this paper designs the hardware and software of a humanoid robot based on OpenMv. An image segmentation algorithm combining pixel-threshold and region-based methods is used, and the target coordinates are calculated from the imaging relationship between the target and the camera, which improves the shooting accuracy of the robot. The shooting experiment results of the humanoid robot are given at the end of the paper.

2 Hardware system design

The humanoid shooting robot adopts a modular design, which is convenient for experimental testing and maintenance. The main hardware system includes the power module, the OpenMv vision module, the servo drive module and the communication module. The block diagram of the structure is shown in Fig.1.

Fig.1 System block diagram

The functions of each module are as follows:

Power module: A dual-battery power supply is adopted; the servo drive module and the vision processing module are powered by separate batteries to ensure the stability of the control signal.

Visual OpenMv module: OpenMv is an open-source, low-cost and powerful machine vision module. It is based on the STM32F427 and integrates an OV7725 camera chip. The module processes the captured image data on board, so the processing speed is high.
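As a point of reference, a typical OpenMv acquisition loop looks like the following MicroPython sketch; the frame size and pixel format are illustrative choices, not settings reported in the paper:

# Typical OpenMV (MicroPython) capture loop. RGB565 and QVGA are assumed
# settings for illustration, not values taken from the paper.
import sensor
import time

sensor.reset()                       # initialize the OV7725 sensor
sensor.set_pixformat(sensor.RGB565)  # color frames for later color analysis
sensor.set_framesize(sensor.QVGA)    # 320 x 240 frames
sensor.skip_frames(time=2000)        # let the sensor settle after start-up

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()          # grab one frame for processing
    print(clock.fps())               # monitor on-board processing speed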

Servo drive module: The robot joints realize different walking postures through combinations of servo angles and speeds.

Communication module: Bluetooth provides short-distance wireless data exchange. The Bluetooth module is an HC-05 [5] with an embedded Bluetooth protocol stack. Operating parameters such as the relative position of the robot and the guide line and the thresholds can be displayed in real time, which facilitates debugging and improvement of the related algorithms.

3 Software system design

When the robot control system starts running, the robot posture, OpenMv and other components are initialized first. OpenMv then gathers the field information, the collected image is converted into the HSV color model, image segmentation is used to separate the target from the background, and the centroid coordinates are calculated. Finally, the expert control strategy outputs the corresponding signals to adjust the robot's movement, so as to achieve the offensive and defensive motions of the shooting robot. The system software flow chart is shown in Fig.2.
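The flow in Fig.2 can be summarized by the following Python sketch; every function here is an illustrative placeholder for a step described above, since the paper does not publish its source code:

# Schematic of the main control flow in Fig.2; all names are placeholders.
def init_system():
    """Initialize the robot posture and the OpenMv module."""

def capture_and_locate():
    """Capture a frame, convert to HSV, segment, return the target centroid or None."""
    return None

def expert_control(centroid):
    """Map the centroid position to a motion command (turn / straight / shoot)."""
    return "search" if centroid is None else "go_straight"

def main():
    init_system()
    for _ in range(1):                      # single pass for illustration
        centroid = capture_and_locate()     # field information via OpenMv
        command = expert_control(centroid)  # expert control strategy output
        print("motion command:", command)   # would drive the servo module

if __name__ == "__main__":
    main()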

Fig.2 System main program flow chart

3.1 Color space model conversion

The RGB color space is susceptible to changes in light intensity, and the R, G and B values are highly correlated. For identifying a specific color it is difficult to determine a threshold and its distribution range in the space, which is not conducive to segmenting and identifying the target and easily leads to misjudgment and inaccurate final identification. To solve this problem and make color processing and recognition convenient, the HSV color model is adopted [6]. The HSV color model has two characteristics. First, the luminance component is independent of the color information; the two can be processed separately, and changing one component does not affect the color information of the image. Second, the HSV color space is closer to human visual perception of the outside world, which is conducive to image processing.

Fig.3 Conical space model

Fig.4 Tone angle coordinates

The conversion formula from the RGB color space to the HSV color space is shown in (1):

V = Cmax
S = (Cmax − Cmin)/Cmax (S = 0 when Cmax = 0)
H = 60·(G − B)/(Cmax − Cmin) if Cmax = R
H = 120 + 60·(B − R)/(Cmax − Cmin) if Cmax = G
H = 240 + 60·(R − G)/(Cmax − Cmin) if Cmax = B

(1)

where

Cmax = max(R, G, B), Cmin = min(R, G, B)

(2)
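For reference, equations (1)-(2) can be implemented directly; the sketch below assumes R, G and B normalized to [0, 1] and returns H in degrees:

# Straightforward implementation of the RGB->HSV conversion in (1)-(2);
# input channels are assumed to be in [0, 1].
def rgb_to_hsv(r, g, b):
    c_max = max(r, g, b)                 # V component
    c_min = min(r, g, b)
    delta = c_max - c_min

    v = c_max
    s = 0.0 if c_max == 0 else delta / c_max

    if delta == 0:
        h = 0.0                          # achromatic: hue undefined, set to 0
    elif c_max == r:
        h = 60.0 * ((g - b) / delta)
    elif c_max == g:
        h = 120.0 + 60.0 * ((b - r) / delta)
    else:                                # c_max == b
        h = 240.0 + 60.0 * ((r - g) / delta)

    if h < 0:
        h += 360.0                       # keep hue in [0, 360)
    return h, s, v

# Example: an orange pixel similar to the target ball color
print(rgb_to_hsv(1.0, 0.5, 0.0))         # -> (30.0, 1.0, 1.0)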

3.2 Target search

After the robot receives the search command, it first performs an in-place search. Since the camera is mounted on the robot head, a comprehensive search over the visible directions can be performed by turning the head servo. To keep the target in the central area of the field of view, the position of the target within the field of view must be determined: when the target is on the left side of the field of view, the head-servo angle is decremented and the robot turns left; when the target is on the right side, the angle is incremented and the robot turns right; when the target appears in the central area, the robot walks straight. In this way the target is always kept in the central area of the robot's field of view. The target search is used to roughly observe the position of the target; the robot approaches it through left turns, right turns and forward steps, and the specific position of the target is finally determined through target recognition.
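A hedged sketch of this search logic is given below; the band limits and servo step are illustrative values, not parameters reported in the paper:

# Sketch of the in-place search logic. The field of view is split into
# left / center / right bands; all numeric values are assumptions.
IMG_WIDTH = 320            # assumed QVGA frame width
CENTER_BAND = (120, 200)   # pixels considered the "central area"
SERVO_STEP = 5             # degrees per head-servo adjustment (assumed)

def search_step(target_x, head_angle):
    """Return (new_head_angle, motion_command) for one search iteration."""
    if target_x is None:
        return head_angle + SERVO_STEP, "keep_searching"   # sweep the head
    if target_x < CENTER_BAND[0]:
        return head_angle - SERVO_STEP, "turn_left"        # target on the left
    if target_x > CENTER_BAND[1]:
        return head_angle + SERVO_STEP, "turn_right"       # target on the right
    return head_angle, "go_straight"                       # target centered

print(search_step(60, 90))    # -> (85, 'turn_left')
print(search_step(160, 90))   # -> (90, 'go_straight')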

3.3 Image processing and target recognition

After the robot searches for the target, it performs target recognition. This paper mainly adopts image segmentation technology based on color features. The specific process is shown in Fig.5.

Fig.5 Target recognition flow chart

The image captured by the camera may contain other background content. To better identify the target, the captured image needs to be segmented. Since the target differs significantly from the background, a threshold-based image segmentation method is mainly used here [8].

Since illumination affects the image, the captured image is subject to various disturbances and isolated noise points appear. Filtering is an effective means to reduce the influence of noise on the image. Median filtering is a nonlinear signal-processing technique that can effectively suppress noise: the gray value of the current pixel is replaced by the median of the gray values of all pixels in its neighborhood window, which brings it closer to the true value of the surrounding pixels. This method filters isolated particle noise well and preserves the edge details of the target image without blurring them, so it effectively reduces erroneous segmentation and shortens the running time.
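The principle can be illustrated with a small plain-Python 3×3 median filter (the on-board implementation used by the robot is not published in the paper):

# Minimal 3x3 median filter over a grayscale image stored as a 2D list;
# a plain-Python illustration of the principle, not the OpenMV built-in.
def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]            # copy; borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]            # median of the 9 neighbors
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],                      # isolated bright noise point
         [10, 10, 10]]
print(median_filter_3x3(noisy)[1][1])        # -> 10, noise suppressed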

After filtering, the image is segmented by threshold segmentation to extract the target. In this paper, the maximum between-class variance (Otsu) method is used to select the threshold. The maximum between-class variance method [9] is a global binarization algorithm: according to the gray-level characteristics of the image, the image is divided into foreground and background. The larger the between-class variance, the greater the difference between the two parts that make up the image; when some foreground pixels are misclassified as background or some background pixels are misclassified as foreground, this difference decreases. Therefore, the threshold that maximizes the between-class variance minimizes the probability of misclassification.

Let the acquired image be I(x, y), with gray levels ranging from 1 to L, the set of gray levels S = {1, 2, …, L}, and n_i the number of pixels with gray level i. Then the total number of pixels and the probability of each gray level are given by formulas (3) and (4):

N = n1 + n2 + … + nL

(3)

pi = ni/N,  pi ≥ 0,  p1 + p2 + … + pL = 1

(4)

Taking T in the set S as the threshold, the gray levels are divided into two classes C1 and C2, with S1 = {1, 2, …, T} and S2 = {T+1, T+2, …, L}. The ratio of foreground pixels to the whole image is recorded as ω1 with average gray level μ1; the ratio of background pixels to the whole image is ω2 with average gray level μ2. The total average gray level of the image is denoted by μ, and the between-class variance is denoted as g. Then:

ω1 = Σ(i=1..T) pi,  ω2 = Σ(i=T+1..L) pi,  μ1 = Σ(i=1..T) i·pi / ω1,  μ2 = Σ(i=T+1..L) i·pi / ω2,  μ = ω1·μ1 + ω2·μ2

(5)

g = ω1·ω2·(μ1 − μ2)²

(6)

T is the best segmentation threshold when the between-class variance g is the largest.
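The threshold selection of equations (3)-(6) amounts to an exhaustive search over T; a minimal sketch with gray levels indexed from 0 and a synthetic histogram is shown below:

# Otsu threshold selection following (3)-(6): search T and keep the value
# that maximizes the between-class variance g.
def otsu_threshold(hist):
    """hist[i] = number of pixels with gray level i (i = 0..L-1)."""
    total = sum(hist)
    probs = [n / total for n in hist]                     # eq. (4)

    best_t, best_g = 0, 0.0
    for t in range(1, len(hist)):
        w1 = sum(probs[:t])                               # foreground weight
        w2 = 1.0 - w1                                      # background weight
        if w1 == 0 or w2 == 0:
            continue
        mu1 = sum(i * p for i, p in enumerate(probs[:t])) / w1
        mu2 = sum(i * p for i, p in enumerate(probs[t:], start=t)) / w2
        g = w1 * w2 * (mu1 - mu2) ** 2                    # eq. (6)
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Toy bimodal histogram: dark background around level 2, bright target around 11-13
hist = [0, 5, 30, 5, 0, 0, 0, 0, 0, 0, 4, 20, 25, 6, 0, 0]
print(otsu_threshold(hist))   # -> 4: first threshold that fully separates the modes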

After the target is separated from the background, the target gray value is set to 0 and the background gray value to 1, and the data is transferred to the main control system. After segmentation, the contour information of the target object is extracted to further improve accuracy. According to formula (7), the center point of the target region is calculated and used as the tracking feature point, and the result is transmitted to the main control system for further decisions.

centerx = xsum/pixelnum,  centery = ysum/pixelnum

(7)

where pixelnum is the number of pixels in the target area; xsum and ysum are the sums of the abscissas and ordinates of all pixels in the target area; and centerx, centery give the position of the center point of the identified target area in the image.
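Equation (7) corresponds to a simple average over the target pixels; a minimal sketch (with the target marked by gray value 0, as stated above) is:

# Centroid of the segmented target (equation 7): average of the coordinates
# of all target pixels. mask[y][x] == 0 marks a target pixel, 1 is background.
def target_centroid(mask):
    x_sum = y_sum = pixel_num = 0
    for y, row in enumerate(mask):
        for x, value in enumerate(row):
            if value == 0:               # target pixel
                x_sum += x
                y_sum += y
                pixel_num += 1
    if pixel_num == 0:
        return None                      # no target found in this frame
    return x_sum / pixel_num, y_sum / pixel_num

mask = [[1, 1, 1, 1],
        [1, 0, 0, 1],                    # small 2x2 target block
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
print(target_centroid(mask))             # -> (1.5, 1.5)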

3.4 Target positioning

When the camera recognizes the target object, the head servo is adjusted to keep the target in the field of view, and the robot's positioning of the target is realized. The coordinates of the target obtained from the image are expressed in the image coordinate system, which is a Cartesian coordinate system; the image must therefore be analyzed to obtain these coordinates and convert them into the actual coordinate system, so that the robot can position the target [11].

To achieve this goal, a robot coordinate system is defined with the robot's foot as the base point: the positive x-axis points to the robot's right, perpendicular to its facing direction; the positive y-axis points in the robot's facing direction; and the positive z-axis is normal to the robot's support plane. As shown in Fig.6, the point O is the origin of the coordinate system.

Fig.6 Geometric relationship between target and camera imaging

Where C is the camera's optical center, H is the height of the optical center above the ground, D is the distance from the target to the robot, plane O′ is the camera's imaging plane with center coordinates (u0, v0), and point P is the spatial coordinate of the target's centroid. Its projection on the imaging plane is Q(u, v). φ and η are the angles of the target relative to the horizontal and vertical directions of the robot, respectively, and α and β are the angles of the target relative to the horizontal and vertical directions of the camera's optical axis, respectively. In the target positioning process, only the angle information and distance information of the target relative to the robot are needed, namely the angle φ between the target and the robot's forward direction and the distance D from the target to the robot, to determine the position of the target in the robot coordinate system. From the camera's pinhole model:

tan α = (u − u0)·dx/f = (u − u0)/ax,  tan β = (v − v0)·dy/f = (v − v0)/ay

(8)

Where (u, v) is the image coordinate of the target centroid obtained after image segmentation; θpan and θtilt are the camera's yaw (pan) angle and pitch (tilt) angle; f is the camera focal length; dx and dy are the physical sizes of an image pixel in the horizontal and vertical directions, respectively; and ax, ay are the camera's internal parameters.

According to Fig.6, combined with formula (8), the angle information of the target relative to the robot in the horizontal direction can be obtained:

φ = α + θpan

(9)
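Combining the pinhole relation for α with equation (9) gives the target's horizontal angle directly; in the sketch below the focal length, pixel size and principal point are illustrative values, not calibration data from the paper:

# Horizontal angle of the target relative to the robot (equation 9).
# All numeric constants are assumptions for illustration only.
import math

F_MM = 2.8          # assumed focal length in mm
DX_MM = 0.006       # assumed pixel width in mm
U0 = 160            # assumed principal point column (QVGA image center)

def target_yaw(u, theta_pan_deg):
    """Return phi, the angle between the target and the robot's forward direction."""
    ax = F_MM / DX_MM                               # internal parameter a_x
    alpha = math.degrees(math.atan((u - U0) / ax))  # angle w.r.t. the optical axis
    return alpha + theta_pan_deg                    # eq. (9): phi = alpha + theta_pan

print(target_yaw(u=220, theta_pan_deg=10.0))        # target right of center, head panned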

Then the distance information of the target relative to the robot is studied.

Fig.7 Side view of the imaging relationship

ξ is the camera's field-of-view angle in the vertical direction, ω is the angle between the farthest field-of-view ray and the ground, and δ is the angle between the nearest field-of-view ray and the ground. According to the geometric relationship, the following formula can be obtained:

(10)

According to the imaging relationship, the following can be obtained:

(11)

Where N is the number of pixels in the vertical direction of the image.

The distance D from the target to the robot can then be obtained from equations (10) and (11), as shown in equation (12).

(12)


The distance and relative position between the robot and the ball are transmitted in real time to control the corresponding motion adjustments; when the distance is less than 2 cm, the robot performs the ball-grabbing action. After the robot has judged and grasped the ball, the basket in front of it is processed in the same way, and the ball is put into the basket through further adjustment actions.
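Since equations (10)-(12) are not reproduced here, the following sketch estimates the ground distance with the standard pinhole geometry D = H/tan(θtilt + β), which is consistent with the geometry of Fig.6 and Fig.7 but is an assumption standing in for equation (12); all numeric values are illustrative:

# Ground-distance estimate from camera height and the target's vertical image
# coordinate, using D = H / tan(theta_tilt + beta). This is an assumed
# substitute for eq. (12); constants are illustrative, not calibration data.
import math

H_CM = 35.0         # assumed camera height above the ground, in cm
F_MM = 2.8          # assumed focal length
DY_MM = 0.006       # assumed pixel height
V0 = 120            # assumed principal point row (QVGA image center)

def target_distance(v, theta_tilt_deg):
    ay = F_MM / DY_MM                                # internal parameter a_y
    beta = math.atan((v - V0) / ay)                  # angle below the optical axis
    angle = math.radians(theta_tilt_deg) + beta      # total angle below horizontal
    if angle <= 0:
        return float("inf")                          # target at or above the horizon
    return H_CM / math.tan(angle)

print(round(target_distance(v=200, theta_tilt_deg=30.0), 1))  # distance in cm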

4 Experimental results and analysis

Based on the above software and hardware design, the corresponding robot hardware platform was developed, as shown in Fig.8. The robot's attitude movements were designed [12], including: large right turn, small right turn, straight walk, large left turn and small left turn.

Fig.8 Humanoid robot platform

Experiments were conducted with the humanoid shooting robot and its vision system as the platform. The target is a 10 cm orange ball. Considering that the sensor's visual range is limited by the height of the robot body, the distance between the robot and the ball is limited to 70 cm. The acquired images were then processed with the method proposed in this paper: the identified ball is marked with its external contour, and its centroid coordinates are calculated. Fig.9 shows, from left to right, the RGB image, the HSV image, the threshold-segmented image, and the extracted contour and centroid coordinates of the ball at different positions.

Fig.9 Target segmentation results

Table 1 gives the experimental data for ball positioning. In the table, Lreal is the actual distance between the ball and the robot, Lcal is the theoretical distance obtained from the positioning model, ΔL is the absolute error between the two, and δ is the relative error. It can be seen that the absolute error of each experimental result is within 3 cm and the maximum relative error is 5.3%, so the target can be located accurately, which further verifies the effectiveness of the algorithm.

Table 1 Distance measurement results

Using the distance between the robot and the ball together with their relative position, expert knowledge is introduced into the humanoid robot's shooting control, forming the following control rules: when the relative abscissa of the ball is less than 48 cm, the robot turns right; when the relative abscissa is greater than 82 cm, the robot turns left; when the relative abscissa is within [48, 82] cm, the robot walks straight; and when the distance between the robot and the ball is less than 2 cm, the robot performs the shooting action. A large number of experiments have proved that the shooting robot in this paper meets the design requirements, can overcome external environmental disturbances, and achieves shooting without human intervention.
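These rules can be written as a small decision function; the thresholds are taken from the text and the command names are placeholders:

# The expert control rules stated above as a decision function.
def expert_rule(rel_x_cm, distance_cm):
    """Map the ball's relative abscissa and distance to a motion command."""
    if distance_cm < 2:
        return "shoot"           # close enough: perform the shooting action
    if rel_x_cm < 48:
        return "turn_right"
    if rel_x_cm > 82:
        return "turn_left"
    return "go_straight"         # abscissa within [48, 82]

for rel_x, dist in [(30, 60), (90, 60), (60, 60), (60, 1.5)]:
    print(rel_x, dist, "->", expert_rule(rel_x, dist))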

5 Conclusion

In this paper, the hardware and software of a humanoid shooting robot were designed. Using the OpenMv module as the main control system, the surrounding environment information is collected and processed with image preprocessing and image segmentation methods, which accurately identify the target; the target coordinates are then calculated from the geometric relationship between the target and the camera imaging. Experiments prove that the robot can accurately locate the target and successfully complete the autonomous shooting task, with good environmental adaptability and anti-interference ability. The robot was successfully applied to the humanoid robot event of the North China five-province (municipality and autonomous region) college robot competition, and won the first prize of the Tianjin Division and the first prize of the North China Division.
