
A 3D Measurement Method Based on Coded Image

Computers Materials & Continua, 2021, No. 11

Jinxing Niu, Yayun Fu, Qingsheng Hu, Shaojie Yang, Tao Zhang and Sunil Kumar Jha

1 Institute of Mechanics, North China University of Water Resources and Electric Power, Zhengzhou, 450011, China

2 IT Fundamentals and Education Technologies Applications, University of Information Technology and Management in Rzeszow, Rzeszow, 100031, Poland

Abstract: The binocular stereo vision system is often used to reconstruct 3D point clouds of an object. However, it is challenging to find effective matching points in two images of an object with similar colors or little texture, which leads to mismatching when the stereo matching algorithm computes the disparity map. In this case, the object cannot be reconstructed precisely. As a countermeasure, this study combines Gray code fringe projection with a binocular camera and generates denser point clouds by projecting an active light source to increase the texture of the object, which greatly reduces the reconstruction error caused by the lack of texture. Due to the limitation of the camera viewing angle, a binocular camera at a single viewpoint can only reconstruct a 2.5D model of an object. To obtain the full 3D model, point clouds obtained from multiple views are processed by coarse registration using the SAC-IA algorithm and fine registration using the ICP algorithm, followed by voxel filtering fusion of the point clouds. To improve the reconstruction quality, a polarizer is mounted in front of the cameras to filter out redundant reflected light. Eventually, the 3D model and the dimensions of a vase are obtained after calibration.

Keywords: 3D reconstruction; structured light; SAC-IA; ICP; voxel filtering

1 Introduction

Artificial intelligence-based computer vision has been replacing human vision in many fields, because of its excellent performance in describing and recognizing the objective world by processing camera-captured images. Specifically, it is expected to perform like the human visual system in processing three-dimensional (3D) images, an achievement that can be greatly helpful for research and development in industry [1-3]. For instance, it can help accurately identify the defective parts of an object with high efficiency, which greatly reduces the workload originally done by humans.

As one of the most representative technologies in the field of optical 3D measurement, the fringe projection technology is favored for various features, including non-contact operation, high universality, high resolution, high precision, and high speed [4,5]. Meanwhile, the binocular stereo vision and coded light methods have been commonly used to reconstruct and measure 3D objects in many fields (e.g., industrial production, cultural relic protection, and 3D printing) [6-8], and they have the advantages of low equipment cost and high model accuracy. In this study, the fringe projection technology is used to reconstruct and measure a 3D object, which is accompanied by analyses of (1) the principle of binocular camera imaging, calibration, and correction, (2) stereo matching algorithms, such as the block matching (BM) algorithm and the semi-global block matching (SGBM) algorithm, (3) the performance of the integrated binocular camera and Gray code fringe projection in reconstructing a single-perspective model of an object, (4) the roles of coarse- and fine-registration algorithms in generating point clouds, and (5) the effects of integrating the sample consensus initial alignment (SAC-IA) and iterative closest point (ICP) algorithms. Eventually, a complete 3D model of a vase is established after point cloud fusion processing, followed by the corresponding reconstruction error analysis.

2 Principle

2.1 Binocular Stereo Vision

The principle of stereo vision technology is to reverse the camera imaging process that projects 3D points in the real world onto a 2D image plane, which can be simply described by the pinhole imaging model [9], as illustrated in Fig. 1.

Figure 1: Pinhole imaging model

According to the similarity relationship of the triangles, the following relationship exists:

$\frac{Z}{f} = -\frac{X}{X'} = -\frac{Y}{Y'}$ (1)

where Z is the depth of the object point in the camera coordinate system, f is the focal length of the camera, X and Y are the coordinates of the object point, X' and Y' are the coordinates of the corresponding image point, and the negative sign in the formula indicates that the image formed behind the pinhole is inverted. P(X, Y, Z) and P'(X', Y', Z') are respectively the object space point and the corresponding image point in the camera coordinate system O-xyz.

There are three coordinate systems, namely the pixel coordinate system, the camera coordinate system, and the world coordinate system (Fig. 2). Therefore, it is necessary to establish a conversion relationship between the object point and the image point. P_w(X_w, Y_w, Z_w), P(X, Y, Z), and p(u, v) are the coordinates of the corresponding point in the world coordinate system, the camera coordinate system, and the pixel coordinate system, respectively. The image becomes upright if the imaging plane is projected symmetrically in front of the pinhole. In this context, formula (1) can be rewritten as:

$\frac{Z}{f} = \frac{X}{X'} = \frac{Y}{Y'}$ (2)
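For completeness, the full conversion from the world coordinate system to the pixel coordinate system can be summarized in the standard homogeneous form (a textbook formulation given here for reference; f_x, f_y, u_0, v_0, R, and t denote generic intrinsic and extrinsic parameters rather than values reported in this paper):

$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$

Here the 3×3 intrinsic matrix maps camera coordinates to pixel coordinates, while [R t] transforms world coordinates into camera coordinates; camera calibration (Section 2.2) estimates exactly these two groups of parameters.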

Figure 2: The relationship between the three coordinate systems

2.2 Calibration

Camera calibration is to calculate the internal and external parameters of the imaging system. Specifically, the structured light system calibration essentially determines the relationship between a point in the world coordinate system and its counterpart in the camera coordinate system [9]. A diffuse reflection board is used in this study for calibration (Fig. 3), which has a black and white checkerboard pattern with 12×9 grids, each 20 mm × 20 mm in size. The top right vertex is selected as the origin O_w of the world coordinate system, the horizontal and longitudinal sides of the checkerboard are the X_w axis and the Y_w axis, respectively, while the Z_w axis is perpendicular to the checkerboard. The three coordinate axes X_w, Y_w, and Z_w conform to the right-hand rule, and the Z_w-coordinate of each point on the checkerboard plane is 0. The internal and external parameter matrices are obtained according to the chessboard camera calibration method by Zhang et al. [10].
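As a minimal sketch of this calibration step, assuming OpenCV in Python (the file pattern, corner refinement window, and termination criteria are illustrative rather than the exact settings used in this work):

import glob
import cv2
import numpy as np

# 12x9 squares give 11x8 inner corners; each square is 20 mm
pattern_size = (11, 8)
square_size = 20.0  # mm

# World coordinates of the corners: Z_w = 0 on the board plane
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib_left_*.png"):        # hypothetical image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K: intrinsic matrix, D: distortion coefficients, rvecs/tvecs: per-view extrinsics
rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

Repeating the same procedure for the right camera yields its own intrinsics, after which cv2.stereoCalibrate relates the two cameras.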

2.3 Stereo Matching

Stereo matching is the key to 3D reconstruction, since it directly affects the accuracy of the reconstructed 3D model. The BM and SGBM algorithms are commonly used for local stereo matching [11], and the matching units are generally associated with image features, such as corners, contour edges, and inflection points. When applying a feature-based stereo matching algorithm, the first step is to extract the feature regions of the left and right images, and the second step is to take the left image as the reference and find the matching points on the corresponding epipolar line in the right image. A sparse disparity map is obtained after repeating the above process to find all the feature point pairs.
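For reference, a minimal sketch of the two matchers applied to a rectified image pair, assuming OpenCV in Python (the disparity range and block sizes are illustrative):

import cv2

left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical rectified images
right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: fast, but the disparity map is relatively coarse
bm = cv2.StereoBM_create(numDisparities=128, blockSize=15)
disp_bm = bm.compute(left, right).astype("float32") / 16.0    # disparities are stored x16

# Semi-global block matching: slower, but noticeably more accurate
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
disp_sgbm = sgbm.compute(left, right).astype("float32") / 16.0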

Figure 3: Diffuse reflection calibration board

2.4 Structured Light Coding and Decoding

As an active 3D measurement method [12], the coded structured light method is favored for its high measurement speed, high matching accuracy, and resulting suitability for images with few features. By projecting a series of coded patterns onto the object, it adds texture information to the measured object and enables better stereo matching.

Binary coding is commonly used in structured light, where black is denoted as 0 and white is denoted as 1. In this way, different stripes can be distinguished in the projected image. Gray code is developed from, and is more reliable than, binary code [13,14]. Assuming that a binary code is represented as B_{n-1}B_{n-2}...B_2B_1B_0 and its corresponding Gray code as G_{n-1}G_{n-2}...G_2G_1G_0, the transformation relationship between them can be expressed by formula (3):

$G_{n-1} = B_{n-1}, \quad G_i = B_{i+1} \oplus B_i \ (i = n-2, \ldots, 1, 0)$ (3)

that is, the Gray code and the binary code share the same highest bit, and each remaining bit is the XOR of the current binary bit and the next higher binary bit. The operational symbol ⊕ represents the XOR operation.
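Formula (3) and its inverse can be sketched in a few lines of Python (a generic implementation of the standard conversion, not code from the original system):

def binary_to_gray(b: int) -> int:
    """Highest bit is kept; every lower bit is XORed with the bit above it (formula (3))."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the code by accumulating a prefix XOR over the higher bits."""
    b, shift = g, 1
    while (g >> shift) > 0:
        b ^= g >> shift
        shift += 1
    return b

# Decimal 1 -> binary 001 -> Gray 001; decimal 2 -> binary 010 -> Gray 011:
# only one bit changes between the two Gray codes.
assert binary_to_gray(2) == 0b011 and gray_to_binary(0b011) == 2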

Compared with binary code, Gray code is advantageous in that only one bit differs between the codes of adjacent numbers. As shown in Tab. 1, two bits in the binary code change their values when 1 becomes 2 in decimal; in contrast, only one bit changes in the Gray code. Therefore, Gray code has a certain self-correcting ability in the process of decoding and, correspondingly, improved error tolerance during coding and decoding.

2.5 Point Cloud Registration and Fusion

In order to reconstruct a complete 3D model, it is necessary to first obtain point clouds from different angles and then splice multiple overlapping point clouds into a complete 3D model through point cloud registration and fusion [15,16]. However, the point cloud data of the object to be reconstructed are vulnerable to camera lens distortion, light intensity, and surface texture. Therefore, the point cloud data must be preprocessed before registration (e.g., point cloud smoothing and filtering).

Point cloud registration is essentially the process of obtaining the rotation and translation matrices between the source point cloud and the target point cloud:

$C_T = R \, C_S + t$

where C_T and C_S are the target point cloud and the source point cloud, respectively, R is the rotation matrix, and t is the translation vector.

Table 1: Binary and Gray code comparison

Decimal | Binary | Gray code
0 | 000 | 000
1 | 001 | 001
2 | 010 | 011
3 | 011 | 010
4 | 100 | 110
5 | 101 | 111
6 | 110 | 101
7 | 111 | 100

Point cloud registration is generally divided into two categories, namely coarse registration and fine registration. Coarse registration finds an approximate transformation that brings the source point cloud and the target point cloud into the same coordinate system, whereas fine registration further optimizes this transformation once the initial value is known. In this paper, the SAC-IA algorithm is used for the coarse registration of point clouds [17,18], while the ICP algorithm is used for the precise registration of point clouds [19].
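A minimal registration sketch, assuming the Open3D library is used (its FPFH-feature RANSAC registration plays the same role as SAC-IA, and its ICP routine corresponds to the fine registration step; this is not the PCL implementation, and the file names and parameters are illustrative):

import open3d as o3d

def preprocess(pcd, voxel):
    """Downsample, estimate normals, and compute FPFH descriptors for coarse matching."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("view_0.ply")   # hypothetical per-view clouds
target = o3d.io.read_point_cloud("view_1.ply")

voxel = 1.0  # mm, matching the experimental grid size
src, src_fpfh = preprocess(source, voxel)
tgt, tgt_fpfh = preprocess(target, voxel)

# Coarse alignment from FPFH correspondences (the SAC-IA role)
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False))

# Fine alignment with point-to-point ICP, seeded by the coarse transformation
fine = o3d.pipelines.registration.registration_icp(
    src, tgt, 0.5 * voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(fine.transformation)   # 4x4 rigid transform combining R and t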

An average fusion method based on voxel filtering is used to keep the point cloud model complete and smooth. Specifically, the object space is divided into many 3D voxel grids with small side lengths, into which all points fall. An area with high density has more points per voxel grid, whereas an area with low density has fewer. The side length of the voxel grid can be set so that there is approximately one point inside each voxel grid. The voxel grid is also used for filtering, and the average value of all points in one grid is used as the new point value. This method not only merges the overlapping areas, but also ensures that the density of the fused point cloud tends to be consistent.
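Under the same Open3D assumption, the voxel filtering fusion can be sketched as follows, where registered_clouds is a hypothetical list of point clouds already transformed into a common coordinate frame:

import open3d as o3d

# Merge the registered views, then replace the points inside each 1.0 mm voxel
# by their average, so overlapping regions collapse to a uniform density.
merged = o3d.geometry.PointCloud()
for cloud in registered_clouds:
    merged += cloud
fused = merged.voxel_down_sample(voxel_size=1.0)   # one averaged point per voxel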

3 Experiments

The experimental platform consists of a binocular camera with a resolution of 1280×720 pixels, a projector with a maximum resolution of 1920×1080 pixels, a diffuse reflection calibration plate, a rotating turntable, a black background cloth, and a laptop. A small vase (127.0 mm high, 72.0 mm wide, with an 18.0 mm bottle mouth) is selected for 3D reconstruction and measurement (Fig. 4). The experiment steps are as follows:

Step 1: Image calibration. The binocular camera calibration algorithm from the OpenCV library is used, and it requires the calibration images to be taken from different angles. A total of 13 images are taken by the left camera to solve the internal and external parameter matrices of the left camera, and the same operation is carried out for the right camera (Fig. 5).

After calibration, the camera parameter matrices K_l and K_r, distortion coefficients D_l and D_r, rotation matrices R_l and R_r, and translation vectors t_l and t_r of the left and right cameras are obtained.

The two images, which are initially in non-coplanar alignment, are rectified to conform to coplanar alignment. Fig. 6 shows the images of the chessboard before and after the correction.
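A rectification sketch under the same OpenCV assumption, where Kl, Dl, Kr, Dr are the calibrated intrinsics and distortion coefficients, R, T describe the pose of the right camera relative to the left (e.g., from cv2.stereoCalibrate), and left_img, right_img are the captured images; the image size matches the 1280×720 cameras:

import cv2

size = (1280, 720)
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(Kl, Dl, Kr, Dr, size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(Kl, Dl, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(Kr, Dr, R2, P2, size, cv2.CV_32FC1)

# After remapping, corresponding points lie on the same image row (coplanar alignment),
# and Q can later reproject disparities to 3D coordinates.
rect_left = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
rect_right = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)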

Figure 4: Experiment platform

Figure 5: Calibration process diagram of the left and right cameras

Figure 6: The images of the chessboard before and after correction

Figure 7: Point cloud maps obtained by the BM and SGBM algorithms

Step 2: Image matching. The feature areas of the left and right images are first extracted, and the epipolar lines are then processed in sequence. For each feature point in the left image, the point on the corresponding epipolar line in the right image that satisfies the given matching threshold is identified as its matching point. The above process is repeated to find all the matching point pairs of the binocular image and obtain a sparse disparity map. The experiment shows that the BM algorithm processes images rapidly but with poor accuracy, whereas the SGBM algorithm takes slightly longer but is considerably more accurate. The disparity map is then transformed into 3D space, and the point cloud map under one perspective is obtained (Fig. 7).
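The conversion from the disparity map to a single-view point cloud can be sketched as follows (assuming OpenCV; disp is the SGBM disparity in pixels, Q is the 4×4 reprojection matrix returned by cv2.stereoRectify, and the output file name is illustrative):

import cv2
import numpy as np

points = cv2.reprojectImageTo3D(disp, Q)   # H x W x 3 array of (X, Y, Z) in calibration units
mask = disp > disp.min()                   # discard pixels without a valid disparity
cloud = points[mask]                       # N x 3 point cloud for this viewpoint
np.savetxt("view_0.xyz", cloud)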

Step 3: Coding and decoding. Gray code patterns are projected onto the vase in the vertical and horizontal directions (Fig. 8). The Gray codes in the two directions are first decoded separately, and the decoded images are then merged to obtain the whole image.
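A plain NumPy sketch of generating and decoding vertical Gray code stripes illustrates the principle (the horizontal direction is handled identically with transposed patterns; OpenCV's structured_light module offers an equivalent ready-made GrayCodePattern class). The threshold and bit count below are illustrative:

import numpy as np

def gray_code_patterns(width, height, n_bits):
    """Vertical stripe images: pattern k shows bit k of each projector column's Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary-reflected Gray code per column
    patterns = []
    for k in range(n_bits - 1, -1, -1):             # most significant bit first
        stripe = ((gray >> k) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

def decode_pixel(intensities, threshold=128):
    """Recover the projector column seen by one camera pixel from its captured intensities."""
    g = 0
    for v in intensities:                            # MSB first, same order as projection
        g = (g << 1) | (1 if v > threshold else 0)
    col, shift = g, 1                                # Gray -> binary by prefix XOR
    while (g >> shift) > 0:
        col ^= g >> shift
        shift += 1
    return col

# 11 bits cover the 1920 projector columns; decoding the horizontal set the same way
# gives the projector row, and the (row, column) pair is matched between the two cameras.
patterns = gray_code_patterns(1920, 1080, 11)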

Figure 8: Gray code pattern projected on the vase

Figure 9: Point cloud images before and after registration

Step 4: Point cloud registration. The decoding results of the left and right images are used to reconstruct the point clouds, and the rotating platform is used to obtain point clouds from multiple viewpoints. The SAC-IA and ICP algorithms are used to register the point cloud data. Fig. 9 shows the point cloud images before and after registration.

Step 5: Point cloud fusion. Since the distance between the camera and the object is almost constant, the density of the object point cloud is basically uniform. Therefore, a fixed voxel grid side length of 1.0 mm is suitable for all the point clouds in this experiment, with which the obtained 3D point cloud model has a uniform density (Fig. 10).

Figure 10: Comparison of point clouds before and after fusion

4 Results

In order to determine the accuracy of the reconstruction, it is necessary to measure the reconstructed 3D model and compare it with the actual vase. A polarizer is mounted in front of the cameras to filter out the redundant reflected light, which helps improve the reconstruction quality. The measurement results of the 3D model are shown in Fig. 11, and the comparison between the model and the actual vase is shown in Tab. 2. In general, the reconstruction errors for the vase height and width are less than 1 mm in all groups, regardless of whether the polarizer is placed. However, placing the polarizer reduces the reconstruction error of the vase mouth from more than 1 mm to less than 1 mm. The object reconstruction and measurement accuracy reaches the millimeter level, which is to be further improved in future work.

Figure 11: Measurement results of the reconstructed vase model

Table 2: Comparison of measurement results

5 Conclusion

In this study, the principle of binocular stereo vision is first described, followed by the calibration of the binocular camera, the acquisition of the internal and external parameter matrices of the cameras, the generation of the disparity map and point cloud map of the vase based on the stereo matching algorithm, and the investigation into the registration and fusion methods for the point cloud. Experimental results show that the reconstructed 3D model of the vase performs satisfactorily and can meet the need for rapid measurement of an object.

Acknowledgement: The authors would like to thank the anonymous reviewers and the editor for the very instructive suggestions that led to the much-improved quality of this paper.

Funding Statement: This work was supported by the Henan Province Science and Technology Project under Grant No. 182102210065.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
