
MC-LRF based pose measurement system for shipborne aircraft automatic landing

CHINESE JOURNAL OF AERONAUTICS, Issue 8, 2023

Zhuo ZHANG, Qiufu WANG, Daming BI, Xiaoling SUN,*, Qifeng YU

a College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China

b AVIC Shenyang Aircraft Design & Research Institute, Shenyang 110850, China

KEYWORDS Automatic landing; Data processing and image processing; Laser range finder; Monocular camera; Pose measurement

Abstract Due to their portability and anti-interference ability, vision-based shipborne aircraft automatic landing systems have attracted the attention of researchers. In this paper, a Monocular Camera and Laser Range Finder (MC-LRF)-based pose measurement system is designed for shipborne aircraft automatic landing. First, the system represents the target ship using a set of sparse landmarks, and a two-stage model is adopted to detect landmarks on the target ship. The rough 6D pose is measured by solving a Perspective-n-Point problem. Then, once the rough pose is measured, region-based pose refinement is used to continuously track the 6D pose in the subsequent image sequences. To address the low accuracy of monocular pose measurement in the depth direction, the designed system adopts a laser range finder to obtain an accurate range value. The measured rough pose is iteratively optimized using the accurate range measurement. Experimental results on synthetic and real images show that the system achieves robust and precise pose measurement of the target ship during automatic landing. The mean measurement error is within 0.4° in rotation and 0.2% in translation, meeting the requirements for automatic fixed-wing aircraft landing. Received 5 July 2022; revised 19 August 2022; accepted 27 September 2022.

1.Introduction

Shipborne aircraft are widely used in marine missions such as reconnaissance, surveillance, search, and payload delivery.1 Shipborne aircraft landings are considered one of the most crucial and dangerous types of aircraft missions,2 and obtaining the relative position and attitude of the target ship is essential for the safety of the aircraft. Recently, several guidance methods have been applied to pose measurement to meet the needs of shipborne aircraft automatic landing, including radar landing guidance, optoelectronic landing guidance, and navigation satellite guidance.3,4 These guidance systems need the support of shipborne measurement equipment and communication links. Therefore, when faced with a complex electromagnetic environment, it is challenging to meet the requirements of aircraft landing.

In view of the above problems, cameras, as light and cheap sensors, have been applied to aircraft automatic landing guidance due to their strong anti-interference ability and rich information provision ability.5 Visual guidance systems for automatic landing have been widely researched throughout the world, including airborne and shipborne systems.6 Without a communication link, the airborne visual guidance system is capable of independently measuring the position, attitude, and motion parameters via airborne imaging and processing equipment.7 Therefore, the core technology must obtain accurate and efficient 6D pose parameters (3D rotation and 3D translation) using monocular images. This paper adopts the monocular visual measurement scheme due to the limited installation space and wide depth measurement range. The visual guidance system employed in this paper captures real-time images with the camera and then processes the images to calculate the relative pose parameters between the aircraft and the ship. Generally, vision-based guidance methods are divided into cooperative and non-cooperative methods. Cooperative methods are designed to provide cooperation targets for landing. However, cooperative methods depend on the stability of the cooperative targets, which are affected by factors such as the surface and manipulation of the ship. Therefore, this paper focuses on the non-cooperative methods in which the point, line, or contour features of the target ship are extracted to establish the 2D-3D correspondence.8 Then, the pose parameters are recovered by solving Perspective-n-Point (PnP) or Perspective-n-Line (PnL) problems.9–11 In traditional non-cooperative methods, the extracted target texture information changes easily with the environment. As stated in a review,12 the critical advances of methods based on geometric features are challenging to achieve for complex objects. Due to the characteristics of the monocular camera, the wide depth range directly suppresses the accuracy of pose estimation.

Fig.1 MC-LRF-based pose measurement system for shipborne aircraft automatic landing.

To address these problems, we designed a landing guidance system named the Monocular Camera-Laser Range Finder (MC-LRF) system. Fig. 1 shows that this system primarily consists of a monocular camera and a Laser Range Finder (LRF). First, the target ship and landmarks are detected in the images from the monocular camera by PP-tinypose.13 Then, the ship's pose is roughly estimated by solving a PnP problem.14,15 During this process, both synthetic and real images are generated to train the network. In contrast to the general rigid body representation, we innovatively use a set of landmarks found in each image to construct the target representation. In the synthetic images, we apply background and texture randomization of the target while rendering for geometric feature learning. Furthermore, to achieve tracking, the parameters are refined in the subsequent frames via the region-based method. When pose tracking fails, the pose estimation is provided again as a new initial value. Our guidance system takes advantage of the LRF, which provides accurate depth measurements. Finally, the orthogonal iterative algorithm9 uses the accurate depth values to amend the pose parameter errors. The experimental results for both synthetic and real images show that the proposed MC-LRF guidance system achieves high precision and robust pose measurements when applied to shipborne aircraft landing. The primary contributions of this paper are twofold:

(1) A shipborne aircraft automatic landing system that integrates a monocular camera and a laser range finder is designed.The system realizes high precision and robust 6D pose parameter measurements between the aircraft and ship.

(2) A representation of the target ship using a landmark set is proposed and achieves flexibility, efficiency, and robustness.

The remainder of this paper is organized as follows: Section 2 presents the related work on aircraft automatic landing system design. Section 3 introduces the 6D pose estimation and refinement algorithm, which is based on MC-LRF block optimization. The experimental details and validation results are presented in Section 4, and Section 5 concludes the paper.

2.System design

This section introduces the composition of the MC-LRF guidance system and its working principle, as shown in Fig. 1. The red point S indicates the laser spot on the object's surface. The red line represents a laser beam. We will present the system composition, coordinate system definition, and MC-LRF block design.

2.1.System composition

Successful shipborne aircraft landing depends on the guidance system and the pose measurement algorithm shown in Fig. 2. The MC-LRF guidance system consists of an MC-LRF block and a pose measurement block. The MC-LRF block includes a monocular camera and an LRF, which obtains optimized depth values, and the monocular camera collects RGB images of the target ship to provide reliable features. The LRF is a vital part of this system, and is installed beside the monocular camera to measure precise depth values in real time. The pose measurement block performs object detection, landmark detection, initial pose calculation, pose optimization, and pose refinement. After a measurement is complete, the system sends the pose results to the flight control system.

2.2.Coordinate system definition

As indicated in Fig. 1, the coordinate frames for shipborne aircraft automatic landing are defined as follows: the target ship coordinate frame is defined as OW-XWYWZW, and the monocular camera coordinate frame OC-XCYCZC includes the image coordinate frame O-uv.

The target ship coordinate frame is the world cooperative frame, which is also known as the global coordinate frame. The origin OW of the target ship's coordinate frame is located at the center of the target ship deck, the YW axis is perpendicular to the base of the target ship, the ZW axis is parallel to the axis of the ship's direction of motion, and the XW axis is perpendicular to the YZ plane, forming a right-handed coordinate frame. In our experiment, the MC-LRF block coordinate frame needs to be calibrated; in other words, the transformation relationship between the monocular camera's coordinate frame and the LRF's coordinate frame needs to be calculated. In the MC-LRF block, the monocular camera and the LRF are considered as a whole and constitute the camera coordinate system. The origin OC of the MC-LRF block coordinate frame is located at the optical centre of the camera. When this system performs pose measurement, the LRF can be adjusted so that the light point hits the target ship's surface. Therefore, the relationship between the aircraft and the MC-LRF block is calibrated in practical applications.
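As a minimal sketch of the frame convention described above (all numeric values invented for illustration), a landmark given in the ship (world) frame is mapped into the camera frame by a rigid transformation, here a rotation R and translation t from world to camera:

```python
import numpy as np

def world_to_camera(p_w, R_cw, t_cw):
    """Transform a point from the target-ship (world) frame OW-XWYWZW
    to the camera frame OC-XCYCZC via p_c = R_cw p_w + t_cw."""
    return R_cw @ p_w + t_cw

# Toy case: camera axes aligned with the ship axes, ship 100 m ahead
R_cw = np.eye(3)
t_cw = np.array([0.0, 0.0, 100.0])
# A hypothetical deck landmark 5 m along the ship's motion axis (ZW)
p_c = world_to_camera(np.array([0.0, 0.0, 5.0]), R_cw, t_cw)
print(p_c.tolist())  # [0.0, 0.0, 105.0]
```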

2.3.MC-LRF block design

As shown at the bottom of Fig. 1, the monocular camera and the LRF are fixed together and known as the MC-LRF block. OL represents the LRF's light-emitting point, and S is the light point hit by the LRF on the target ship's surface. l denotes the orientation vector of the laser beam. In our experiment, the 6D pose parameters RCW and TCW can be calculated in real time, and they represent the transformation from the world coordinate system to the camera coordinate system. Before using the MC-LRF guidance system for pose measurement, the laser light-emitting point OL and laser beam l must be calibrated. Then, the measurement results from the LRF coordinate system are converted to the camera coordinate system. The measurement distance can be obtained, and the translation value is optimized as TLRF. Notably, the rotation error is difficult to eliminate during monocular camera pose measurement, so we aim to optimize the 6D poses using TLRF.
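Given the calibration described above, the laser spot S can be expressed in camera coordinates from the emitting point OL, the beam direction l, and the LRF range reading. This is a hedged sketch; the mounting offset and range value below are invented for illustration:

```python
import numpy as np

def laser_point_in_camera(o_l, l, d):
    """Return the laser spot S in camera coordinates.

    o_l: calibrated light-emitting point OL, in the camera frame
    l:   calibrated laser-beam direction, in the camera frame
    d:   range reading from the LRF (distance from OL to S)
    """
    l = l / np.linalg.norm(l)   # ensure a unit direction vector
    return o_l + d * l

# Toy calibration: LRF mounted 5 cm beside the optical centre, beam along +Z
o_l = np.array([0.05, 0.0, 0.0])
l = np.array([0.0, 0.0, 1.0])
s_c = laser_point_in_camera(o_l, l, 1500.0)  # a 1.5 km range reading
depth = s_c[2]   # accurate depth used to correct the monocular translation
print(depth)  # 1500.0
```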

3.MC-LRF-based pose measurement

Fig.2 MC-LRF guidance system schematic.

The single RGB image captured by the monocular camera includes the target ship and background. As shown in Fig. 3, the pose measurement block includes three steps: object detection, landmark detection, and pose measurement. The target ship is detected, and the landmarks are regressed. Then, the initial 6D poses are calculated from the detected landmarks by solving the PnP problem and are optimized by orthogonal iteration, which provides the optimized initial value for the region-based pose refinement algorithm.

First, the target ship is detected by PP-PicoDet,13 and the ship landmarks are regressed using PP-tinypose. Then, the rotation and translation parameters are calculated by solving the PnP problem, which yields a rough initial pose. Finally, the pose is refined based on the region method to realize tracking. During the initial pose estimation and refinement, inaccurate translation values are corrected by the LRF's TLRF, and the rotation value is optimized with the orthogonal iterative algorithm. Notably, the translation value is replaced by TLRF in each iteration.

3.1.Landmark set-based ship representation and synthetic training dataset generation

Deep neural network models make texture and geometric features easy to extract, but texture is often unstable under illumination and other environmental factors, which creates obvious obstacles during neural network training. In contrast, the primary geometric features of the target ship are relatively stable. In existing research on human pose estimation16 and face detection,17 a discrete landmark set is used to represent the human body. Similarly, in our system, the rigid body target is creatively described as a set of landmarks, which are manually selected from the apparent geometric structure. As shown in Fig. 4, the target ship is described with landmarks, which are sets of points or pixels in images that contain rich geometric information. The red points are the major geometric landmarks, which were chosen manually. The blue points are occluded.

Landmarks are fundamental in our aircraft automatic landing system since they reflect the intrinsic structure and shape of the target ship. In addition, in network landmark regression, two widely-used methods that represent landmarks for deep learning-based semantic localization are coordinate regression and heatmaps.18,19 Rather than directly regressing the numerical coordinates with a fully-connected layer, heatmap-based methods are more accurate and robust since they predict the heatmap of the maximum activation point in the input image. Therefore, a heatmap-based method is used in the proposed automatic aircraft landing system to regress the semantic landmarks.
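As an illustration of heatmap-based landmark decoding (a generic sketch, not the exact PP-tinypose implementation), each landmark can be taken as the maximum-activation point of its heatmap channel and rescaled to image coordinates:

```python
import numpy as np

def landmarks_from_heatmaps(heatmaps, img_w, img_h):
    """Decode 2D landmarks as the maximum-activation point of each heatmap.

    heatmaps: (K, H, W) array, one channel per landmark
    Returns a (K, 2) array of (x, y) pixel coordinates in the original image.
    """
    K, H, W = heatmaps.shape
    pts = np.zeros((K, 2))
    for k in range(K):
        idx = np.argmax(heatmaps[k])      # flat index of the peak activation
        y, x = divmod(idx, W)             # back to 2D heatmap coordinates
        pts[k] = [x * img_w / W, y * img_h / H]   # rescale to image size
    return pts

# Toy heatmap: a single landmark peaking at (x=20, y=10) on a 48x64 grid
hm = np.zeros((1, 64, 48))
hm[0, 10, 20] = 1.0
print(landmarks_from_heatmaps(hm, 480, 640).tolist())  # [[200.0, 100.0]]
```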

Different from object detection or classification datasets, pose parameter ground truth is difficult to label. Additionally, manual annotation is inaccurate, which causes difficulties in network training. To address the limitations of the target ship, overcome the difficulties of pose dataset generation, and consider the influence of unstable texture features, we propose synthetic image and texture randomization methods to train our network. In addition to synthetic images, a few real images were also used for training in our study. When the synthetic images were generated, as shown in Fig. 5, the target ship texture and background were randomly rendered in each image to reduce network interference. The scene of the target ship was simulated by the method of Ref. 20, which constructed high-quality images. Random images were taken from the MS COCO21 dataset and used as the background and texture of the target ship's model to make the network focus on the model's geometric features.
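The background randomization step can be sketched as a simple alpha compositing of the rendered ship over a random image. This is a toy illustration; the actual pipeline of Ref. 20 renders full scenes, with MS COCO images standing in for both backgrounds and textures:

```python
import numpy as np

def composite(render_rgba, background_rgb):
    """Paste a rendered ship (RGBA, alpha = silhouette mask) onto a random
    background. Both arrays share the same HxW; in the actual pipeline the
    background would be sampled from MS COCO."""
    alpha = render_rgba[..., 3:4] / 255.0
    blended = alpha * render_rgba[..., :3] + (1 - alpha) * background_rgb
    return blended.astype(np.uint8)

# Toy render: a 2x2 opaque red "ship" patch on a transparent 4x4 canvas
fg = np.zeros((4, 4, 4), dtype=np.uint8)
fg[1:3, 1:3] = [255, 0, 0, 255]
bg = np.full((4, 4, 3), 50, dtype=np.uint8)   # uniform grey "background"
out = composite(fg, bg)
print(out[2, 2].tolist(), out[0, 0].tolist())  # [255, 0, 0] [50, 50, 50]
```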

3.2.Initial pose estimation based on corresponding 2D-3D landmarks

In the MC-LRF guidance system, the initial pose parameters are estimated based on the corresponding 2D-3D landmarks and landmark detection, as shown in Fig. 6. The 2D landmarks are detected on the target ship, and then the initial pose parameters are calculated by solving the PnP problem. The accuracy and efficiency of the initial pose estimation have attracted our attention, especially for mobile devices. To apply this system successfully and efficiently, we use PP-tinypose22 to obtain landmarks, as shown in Fig. 6. ShuffleNetV2, which is used for target detection, is more robust on mobile devices than other structures.23 Therefore, we chose PP-LCNet,24 which includes an enhanced ShuffleNetV2 structure named Enhanced ShuffleNet (ESNet) that addresses the problem of expensive computation on

Fig.3 Schematic overview of pose measurement.

Fig.4 Representation of target ship.

Fig.5 Synthetic dataset preparation (i.e., the pipeline used to render simulation datasets). The target ship's model was rendered with random poses in scenes that used MS COCO images as backgrounds and model textures. Middle block: the red points are the major geometric landmarks, which are chosen manually.

mobile devices.25–27 The landmark choice is an important design parameter of our method that occurs after target detection. The landmarks need to be sufficiently localized on the object's geometry and spread across the surface to provide stable inputs for pose calculation.
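The PnP step inverts the standard pinhole projection model. A sketch of that forward model (with invented intrinsics) clarifies what a PnP solver, e.g. OpenCV's solvePnP, has to recover from the 2D-3D landmark correspondences:

```python
import numpy as np

def project(points_w, K, R, t):
    """Project 3D ship landmarks into the image: the forward model a PnP
    solver inverts. points_w: (N, 3) landmarks in the ship frame;
    K: 3x3 camera intrinsics; R, t: world-to-camera pose."""
    p_c = points_w @ R.T + t          # world frame -> camera frame
    uv = p_c @ K.T                    # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]     # perspective division

# Hypothetical intrinsics (focal length 1000 px, principal point 640, 512)
K = np.array([[1000.0, 0, 640], [0, 1000.0, 512], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 100.0])
print(project(pts, K, R, t).tolist())  # [[640.0, 512.0], [650.0, 512.0]]
```

Given four or more such non-degenerate correspondences, the rough initial pose is the (R, t) that minimizes the reprojection error of this model.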

3.3.Region-based pose refinement

In the MC-LRF system, the initial pose in continuous images can be obtained through pose measurement and needs to be refined efficiently. Region-based methods perform well compared with other methods for objects distinct from the background in terms of color, texture, etc. These methods typically use color statistics to model the probability that a pixel belongs to the object or to the background. The object pose is then optimized to best explain the segmentation of the image. Region-based pose refinement methods28–30 have gained increasing popularity and achieved state-of-the-art performance. These methods assume that the object and background regions have different statistical properties. Based on the projected silhouette rendered from the 3D model, the probability density functions of the foreground and background can be obtained, which are applied to construct a segmentation energy function. The object pose is estimated by iteratively optimizing the pose parameters that minimize the segmentation energy. Our system follows the method proposed in Ref. 30. With a known initial 6D pose and a 3D object model, the pose is refined by this method to achieve pose tracking. This process refines the target's pose in the current frame efficiently. In our system, the initial 6D pose is optimized by the MC-LRF block, which provides the pose parameters needed as input values. Then, the iterative pose optimization process proposed in Ref. 30 is used to refine the approximately estimated pose parameters. In this paper, the pose is optimized with sparse viewpoint and correspondence line models. The energy function is as follows:

where D denotes the data from all correspondence lines, ω the sparse correspondence line domain, d the contour distance, l the pixel colour on the correspondence line, ξ the pose variation vector, and nc the number of randomly sampled points on the contour from the sparse viewpoint model. Then, optimization using the Newton method with Tikhonov regularization is used to calculate the pose variation vector as follows:

Fig.6 Architecture of top-to-down landmark detection.

where H and g are the Hessian matrix and gradient vector, respectively. I3×3 represents the 3 × 3 identity matrix. λr and λt are the regularization parameters for rotation and translation, respectively. Then, the pose is iteratively refined according to:

ΔT = exp(ξ̂) ∈ SE(3) (3)

With an initial pose, the relative pose can be tracked within the successive monocular images by applying pose refinement to each frame. Notably, the refinement's translation value is optimized by the orthogonal iteration algorithm.
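A minimal sketch of the update in Eq. (3) (generic SE(3) machinery, not the exact implementation of Ref. 30) applies one Tikhonov-regularized Newton step and maps the resulting pose variation vector through the exponential map:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def se3_exp(xi):
    """Exponential map of xi = (rotation part w, translation part v)
    to a 4x4 SE(3) matrix, using the Rodrigues formula."""
    w, v = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = skew(w)
    if th < 1e-9:
        R, V = np.eye(3), np.eye(3)
    else:
        R = np.eye(3) + np.sin(th) / th * W + (1 - np.cos(th)) / th**2 * W @ W
        V = np.eye(3) + (1 - np.cos(th)) / th**2 * W + (th - np.sin(th)) / th**3 * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def newton_update(T, H, g, lam_r, lam_t):
    """One regularized Newton step: solve (H + diag(lam)) xi = g,
    then apply the increment as T <- exp(xi^) T."""
    reg = np.diag([lam_r] * 3 + [lam_t] * 3)
    xi = np.linalg.solve(H + reg, g)
    return se3_exp(xi) @ T

# Zero gradient leaves the pose unchanged (sanity check)
T = newton_update(np.eye(4), np.eye(6), np.zeros(6), 1e3, 1e4)
print(np.allclose(T, np.eye(4)))  # True
```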

3.4.Orthogonal iteration optimization

In the proposed system, the rotation and translation of pose estimation and tracking are optimized using the orthogonal iteration algorithm.9

The orthogonal iteration algorithm defines pose estimation using an appropriate object-space error function. This function can be rewritten in a way that admits an iteration based on the absolute orientation problem, which involves determining the rotation value R and translation value T from corresponding pairs qj and pj. Here, qj represents the 3D camera coordinates and pj represents the noncollinear 3D coordinates, where j indexes the point pairs. The optimization problem can be expressed as:

where ‖A‖² = tr(AᵀA) and tr(AB) = tr(BA). Given the optimal rotation value R̂, the optimal translation vector can be expressed as:

Rather than depending heavily on the solution to the absolute orientation problem, we can obtain the predicted rotation value and the corrected T value with the MC-LRF guidance system. The specific flow of the orthogonal iterative algorithm is shown in the right panel of Fig. 7. TLRF stands for the optimized translation value, whose depth component is corrected by the LRF system. Starting from the initial rotation value and TLRF, the rotation and translation values are continuously and iteratively updated as follows:

It is noteworthy that the depth value of T is replaced by TLRF in each iteration. The iteration stops when the objective function is sufficiently small or a predetermined upper limit on the number of iterations has been reached.
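The iteration described above can be sketched as follows. This is a simplified reimplementation of the classic orthogonal iteration (alternating the optimal translation for the current rotation with an absolute-orientation SVD step), with the depth component of T overwritten by the LRF value each round; all interfaces are invented for illustration:

```python
import numpy as np

def orthogonal_iteration_with_lrf(pts_w, v, R0, depth_lrf, n_iter=50):
    """Simplified orthogonal-iteration sketch with LRF depth substitution.

    pts_w: (N, 3) noncollinear model landmarks pj
    v:     (N, 3) line-of-sight vectors (image points in camera rays)
    R0:    initial rotation estimate
    depth_lrf: accurate depth from the laser range finder
    """
    N = len(pts_w)
    # Per-point line-of-sight projection operators V_j = v v^T / (v^T v)
    V = np.stack([np.outer(a, a) / (a @ a) for a in v])
    R = R0
    for _ in range(n_iter):
        # Optimal translation for the current rotation (object-space error)
        A = np.linalg.inv(np.eye(3) - V.mean(0))
        t = A @ np.mean([(V[j] - np.eye(3)) @ (R @ pts_w[j]) for j in range(N)], axis=0)
        t[2] = depth_lrf              # replace the depth value with TLRF
        # Absolute-orientation (Kabsch/SVD) step to update the rotation
        q = np.stack([V[j] @ (R @ pts_w[j] + t) for j in range(N)])
        P, Q = pts_w - pts_w.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(Q.T @ P)
        D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
    return R, t

# Noise-free fixed-point check with an invented ground-truth pose
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
p = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
t_true = np.array([0.1, -0.2, 10.0])
c = p @ R_true.T + t_true        # exact camera points define the sight rays
R, t = orthogonal_iteration_with_lrf(p, c, R_true, 10.0, n_iter=5)
print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))  # True True
```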

4.Experiments

To evaluate our guidance system, we report the MC-LRF guidance system's performance on synthetic and real images.

4.1.Experimental settings

In these experiments, the shipborne automatic landing process is simulated from a distance D of approximately 2.0 km down to 1.0 km. Over the different depth ranges, we rendered 2000 synthetic images with a resolution of 1280 × 1024 pixels. In the real experiments, the size of the target ship model is about 113 mm × 120 mm × 440 mm. Camera parameters are set according to the actual environment, and an Electronic Total Station is used to simulate a laser range finder. We collected 600 real images with a resolution of 1920 × 1200 pixels in the Electronic Total Station coordinate system using a monocular camera (DAHENG IMAGING MER-231-41U3C). All of the experiments were run on a laptop with an RTX 3060 GPU, AMD Ryzen 7 5800H CPU, and 32 GB of RAM. In addition, the efficiency experiments were run on an embedded platform with an ARM + AI module. In this section, to evaluate the pose accuracy performance, H3R17 was also used for landmark detection.

4.2.Evaluation metrics

Similar to previous research,31 the Normalized Mean Error (NME) relative to the bounding box was used to evaluate the landmark detection.

Fig.7 Orthogonal iteration algorithm architecture.

where the rotation and translation ground truth are represented by Rg and Tg, respectively. The rotation error (ER) is the angular error between the ground-truth quaternion and the predicted quaternion. The translation error (ET) is the normalized translation error.
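The exact formulas are not reproduced here, but definitions consistent with the description (quaternion angular error and normalized translation error) can be sketched as:

```python
import numpy as np

def rotation_error_deg(q_pred, q_gt):
    """Angular error (deg) between predicted and ground-truth unit
    quaternions; abs() handles the q / -q double cover."""
    d = abs(np.dot(q_pred, q_gt))
    return np.degrees(2 * np.arccos(np.clip(d, -1.0, 1.0)))

def translation_error(t_pred, t_gt):
    """Normalized translation error: ||t_pred - t_gt|| / ||t_gt||."""
    return np.linalg.norm(t_pred - t_gt) / np.linalg.norm(t_gt)

q = np.array([1.0, 0, 0, 0])
print(rotation_error_deg(q, q))  # 0.0
# A 10 m depth error at 1 km range gives a 1% normalized error
print(translation_error(np.array([0, 0, 1010.0]), np.array([0, 0, 1000.0])))  # 0.01
```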

4.3.Initial pose estimation results and analysis

To better ascertain the MC-LRF guidance system's performance in terms of initial pose estimation, we report the individual predictions for 1000 synthetic images. These images include targets at different depth ranges with random poses. The initial pose estimation process is divided into two steps: landmark detection and pose calculation. We present the landmark detection results obtained by PP-tinypose and the H3R method in Fig. 8 and Table 1.

The red bounding box represents the results obtained by PP-PicoDet, and both methods can detect the landmarks in each image. In Table 1, we compare the NME results obtained by these networks, and the H3R method's regressed landmarks are more precise. The landmark detection error tends to decrease as the distance decreases. Note that there are obvious errors in both networks at far distances; in these cases, because the target is too small, it occupies few pixels in the image, leading to feature reduction. Conversely, when the bounding box of a large target is resized before the landmarks are input into the detection network, some of the features are discarded, which may be one of the reasons why the error does not continue to decrease at near distances.

Table 1 NME results of landmark detection using only RGB images.

After the corresponding 2D-3D landmarks are predicted, the initial pose parameters are calculated by solving the PnP problem. The predicted poses are reprojected on the original image in Fig. 9(a), where the gold and purple objects represent the initial and optimized pose reprojections, respectively. The purple objects are clearly better in scale and rotation than the gold objects, which are magnified to display the improvement in the results. In Figs. 9(b) and (c), the rotation and translation error curves show that the MC-LRF guidance system significantly improves the ET value. Additionally, the translation optimization error demonstrates good robustness to changes in distance. ER is also optimized by the orthogonal iteration algorithm.

To perform a quantitative analysis, we summarize the pose estimation comparison results according to whether or not the LRF was used in Table 2. The pose error variation trend is related to the accuracy of landmark detection, and the rotation estimation results based on the H3R network are more precise than those produced by PP-tinypose. The translation estimation is similarly optimized by our guidance system, which evaluates the LRF performance when combined with different networks. Although the rotation accuracy becomes higher as the distance decreases, the translation error remains stable during the landing process. In addition, compared with the weak pose optimization for near-target images, the mean ER improvement was about 36% for far-target pose estimation. Note that the PP-tinypose network has slightly lower accuracy than the H3R network, but its speed can reach 37 Frames Per Second (FPS) from target ship detection to pose estimation completion, which is significantly more efficient than the other method.

Fig.8 Exemplary landmark detection results on synthetic images.

Fig.9 Initial pose estimation results.

Table 2 Results of pose estimation at different distances.

4.4.Landing simulation results and analysis

Another experiment was designed based on 1000 successive synthetic images to evaluate the performance of the MC-LRF guidance system during automatic landing missions. The automatic landing process includes landmark detection, initial pose estimation, pose refinement, and orthogonal iteration optimization. In Fig. 10, the landmarks are detected by H3R and PP-tinypose. The group of images shows the simulated process of aircraft automatic landing from 2.0 km to 0.5 km. The gold target represents the initial pose estimation results. The purple target represents the estimation results optimized by the MC-LRF block. The green target represents the refinement results.

The initial pose is calculated by solving the PnP problem and is reprojected as the gold target. Then, the initial pose value is optimized as the purple target, which shows a noticeable improvement when compared to the gold target. Fig. 11 provides the pose error curves obtained during the aircraft landing simulation, and the optimization effect is fully reflected, especially for the H3R method. Significantly, the robustness of the rotation error is better with this network. In comparison, the translation error is stable on both networks, and stable and accurate initial pose estimation is beneficial to pose refinement.

After the initial value is provided by pose estimation, the system refines the pose to achieve continuous tracking. When tracking fails, the pose estimation again calculates a new initial value. To facilitate an accurate comparison, we estimated the initial value for each frame and used the refinement results from the first frame as an initial input in our experiment. As shown at the bottom of Fig. 10, the optimized pose value is refined based on the region method to realize pose tracking, and the first frame of the sequence was used to initialize the tracking algorithm. The details of the reprojection results are emphasized, and there is a significant refinement from the purple target to the green target. To further focus on the MC-LRF guidance system's performance in terms of pose refinement, the initial input pose values were applied without (Fig. 12(a)) and with (Fig. 12(b)) the system. The pose refinement result for each frame is more precise than the pose estimation result. Additionally, the pose refinement speed is approximately 1000 FPS, which is much higher than that of pose estimation (37 FPS) on the laptop. Therefore, pose refinement is more suitable for automatic landing after the initial pose estimation is obtained. There is an obvious error in the refinement algorithm in the grey area in Fig. 12, where the pose value that was not optimized by the proposed system was used as the input value. Because the input value was optimized with the LRF, the pose error converges rapidly and more precisely than without optimization.

Fig.10 Exemplary initial pose estimation results and pose refinement on a synthetic dataset.

Fig.11 Initial pose estimation error curves of simulation landing process on synthetic scene using MC-LRF optimization vs a monocular camera only.

Fig.12 Initial pose estimation and pose refinement error curves obtained during landing process when using a synthetic dataset.

Table 3 Results of pose refinement during the simulated landing.

Table 3 shows the refinement comparison between the different input poses, which were optimized with or without the MC-LRF system. The pose error is confined to a small range when detecting successive motions to achieve pose tracking, which shows that the rotation and translation errors can be refined to 0.4° and 0.2%. These experiments show that our system performs well on synthetic images and meets the aircraft automatic landing requirements.

4.5.MC-LRF system test in a real scene

In this section, we describe an experiment in which a proportionally scaled-down model was placed against a complex background and 600 real images were collected. We illustrate the performance of our guidance system on the real images, as in the simulation experiment. The landmark detection and pose measurement results on 300 real images are shown in Fig. 13 and Fig. 14. The detection region is zoomed in on the top-right. The blue point is the prediction.

These results prove that our system is also effective in real scenes, especially for inaccurate depths at long distances. In addition, we found that rotation optimization is still affected by different distances. Due to the lack of accurate ground truth annotation, the metric is the ratio of the difference between the predicted and the measured depth values, which we call the LRF correction value. These values are approximately 9.54% and 25.47% for H3R and PP-tinypose, respectively, which shows that the initial pose estimation improved significantly in the actual experiments in which the MC-LRF system was applied. Moreover, on real images, the pose reprojection result from PP-tinypose is more precise than that from the H3R network, which is partly because of the different abilities of the deep neural networks. The H3R + LRF method, which has excellent fitting ability, calculated more accurate landmarks using the synthetic images. In contrast, the better generalization and efficiency of PP-tinypose made the landing process more robust and suitable for engineering applications. Since the training images were primarily composed of synthetic images, PP-tinypose is more suitable for real images.

Fig.13 Exemplary point detection of results on real images.

Fig.14 Exemplary pose estimation results including different scales of real ship models.

We then used the moving ship model in the real scene to simulate the automatic aircraft landing process. Note that the test images' backgrounds contain new scenes that never appeared in the training images. We tested the MC-LRF system and chose the PP-tinypose network to detect landmarks on 300 real images. In addition, the embedded ARM + AI module was also chosen to test the efficiency of the MC-LRF system during the guidance landing process. The landmarks detected by PP-tinypose and the initial pose reprojection results are shown in the top and middle panels of Fig. 15. The gold target represents the initial pose estimation results, the purple targets represent the estimation results optimized by the MC-LRF block, and the green target represents the refinement results. The blue point is the prediction.

The MC-LRF system optimizes the initial pose values, represented by the purple targets, with obvious scale accuracy improvements compared with the gold target. The pose refinement reprojection results are then shown at the bottom of Fig. 15. The details of the reprojection results emphasize the significant refinement from the purple target to the green target.

The LRF error ELRF value curve is shown in Fig. 16, which illustrates the optimization process in a real landing experiment. Meanwhile, experiments show that the guidance system performs well on mobile platforms. On the embedded ARM + AI module, the PP-tinypose-based guidance system reaches 25 FPS from target ship detection to pose estimation completion. Furthermore, the speed of pose refinement can reach approximately 333 FPS on real images. In practical applications, the translation values of pose estimation and region-based refinement can be replaced by the corrected translation output of the MC-LRF block in each iteration. The refinement accuracy and speed meet the requirements of practical applications.

Fig.16 LRF error values during aircraft automatic landing.

5.Conclusions

Fig.15 Exemplary initial pose estimation and pose refinement results on a real dataset.

In this work, we present a vision-based guidance system for shipborne aircraft automatic landing using a monocular camera and laser range finder. The system achieves high accuracy and is robust when estimating relative 6D pose parameters. The MC-LRF guidance system and 6D pose measurement algorithm are described in detail, and accurate successive 6D pose parameters for the landing process are calculated. The object and landmarks are detected by a deep neural network to establish 2D-3D landmark correspondences with the object model, which are used to calculate the initial pose parameters by solving the PnP problem. Moreover, a region-based pose refinement method is applied to track the poses of successive motion after the initial pose. The MC-LRF block is then used for accurate translation and optimization of the initial pose estimation as well as the pose refinement using orthogonal iteration. In this work, we address the problem of inaccurate 6D pose estimation during shipborne aircraft automatic landing. Extensive experimental results have been provided for both synthetic and real datasets during shipborne aircraft automatic landing, and the mean 6D pose parameter error can be refined to 0.4° and 0.2% on the synthetic dataset. The qualitative and quantitative results indicate that the system achieves high accuracy and efficiency during automatic aircraft landing guidance in real scenes. In addition, these pose measurement techniques and our guidance system can also be applied to robotics, driverless vehicles, satellite docking, and other fields.32

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This study was co-supported by the National Natural Science Foundation of China, China (No. 12272404) and the Postgraduate Research Innovation Project of Hunan Province of China, China (No. CX20210016).
