
A Tensor-Based Enhancement Algorithm for Depth Video

YAO Meng-qi, ZHANG Wei-zhong

Science & Technology Vision (科技视界), 2018, No. 5 (published 2018-05-07)

【Abstract】In order to repair the dark holes in Kinect depth video, we propose a tensor-based depth hole-filling method. First, we process the original depth video with a weighted moving average system. Then we recover the low-rank tensor and the sparse tensor of the video using the tensor recovery method, which initially separates the coarse motion saliency from the background. Finally, we construct a fourth-order tensor for the moving-target part by grouping similar patches, which lets us formulate video denoising and hole filling as a low-rank completion problem. In the proposed algorithm, the tensor model preserves the spatial structure of the video, and block processing overcomes the information loss of traditional frame-based video processing. Experimental results show that our method significantly improves the quality of depth video and is highly robust.

【Key words】Depth video; Tensor; Tensor recovery; Kinect

CLC number: TN919.81  Document code: A  Article ID: 2095-2457(2018)05-0079-003

1 Introduction

With the development of depth sensing techniques, depth data is increasingly used in computer vision, image processing, stereo vision, 3D reconstruction, object recognition, etc. As a carrier of human activities, video contains a wealth of information and has become an important means of acquiring real-time information from the outside world. But due to limitations of the device itself, acquisition conditions, lighting and other factors, depth video always contains noise and dark holes, so the video quality is far from satisfactory.

For two-dimensional videos, traditional denoising and repair measures adopt frame-based filtering methods[1]. But consecutive frames carry a great deal of redundant information, and processing them frame by frame discards the correlation between them. Representing the video as a tensor instead preserves the completeness of the video's inherent structure.

2 Tensor-based Enhancement Algorithm for Depth Video

2.1 A weighted moving average system[2]

When the Kinect captures video, the measured depth values change constantly, even at the same pixel position in the same scene. This is called the flickering effect, and it is caused by random noise. To avoid this effect, we take the following measures:

1)Use a queue, representing a discrete set of data, which stores the previous N frames of the depth video.

2)Assign weights to the N frames along the time axis. The closer a frame is to the current frame, the larger its weight.

3)Compute the weighted average of the depth frames in the queue as the new depth frame.

In this process, we can adjust the weights and the value of N to achieve the best results.
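The queue-and-weights procedure above can be sketched as follows. This is a minimal illustration rather than the authors' code: the frame size, the choice N = 4 and the linear weights are all assumptions made for demonstration.

```python
import numpy as np
from collections import deque

def weighted_moving_average(frames, weights):
    """Blend the last N depth frames with per-frame weights (most recent last)."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()      # normalize so the output keeps the depth scale
    stack = np.stack(frames).astype(np.float64)
    return np.tensordot(weights, stack, axes=1)

# Hypothetical usage: keep a queue of N = 4 frames; newer frames get larger weights.
N = 4
queue = deque(maxlen=N)
for t in range(6):                          # simulate a noisy static scene
    frame = np.full((4, 4), 100.0) + np.random.randn(4, 4)
    queue.append(frame)
    if len(queue) == N:
        weights = [1, 2, 3, 4]              # assumed linear weighting, newest frame = 4
        smoothed = weighted_moving_average(list(queue), weights)
```

Because the weights are normalized, a perfectly static noiseless scene passes through unchanged, while random flicker is averaged down.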

2.2 Low-rank tensor recovery model

Low-rank tensor recovery[3] is also known as high-order robust principal component analysis (high-order RPCA). The model can automatically identify damaged elements in the data and restore the original data. The details are as follows: the original data tensor D is decomposed into the sum of the low-rank tensor L and the sparse tensor S,

D = L + S

The tensor recovery can be represented as the following optimization problem:

min(L,S) Trank(L) + λ‖S‖0,  s.t. D = L + S  (1)

where D, L, S ∈ R^(I1×I2×…×IN), Trank(L) is the Tucker rank of tensor L, ‖S‖0 is the number of nonzero entries of S, and λ > 0 is a trade-off parameter.

The above tensor recovery problem can be transformed into the following convex optimization problem:

min(L,S) Σi αi‖L(i)‖* + λ‖S‖1,  s.t. D = L + S  (2)

where L(i) is the mode-i unfolding of L, ‖·‖* is the nuclear norm, ‖·‖1 is the l1 norm, and the weights αi satisfy Σi αi = 1.

Aiming at the optimization problem in (2), typical solutions[4] include the Accelerated Proximal Gradient (APG) algorithm and the Augmented Lagrange Multiplier (ALM) algorithm. Considering the accuracy and fast convergence speed of the ALM algorithm, we use it to solve this optimization problem and generalize it to tensors. According to (2), we formulate an augmented Lagrange function:

L(L, S, Y; μ) = Σi αi‖L(i)‖* + λ‖S‖1 + ⟨Y, D − L − S⟩ + (μ/2)‖D − L − S‖F²

where Y is the Lagrange multiplier tensor and μ > 0 is a penalty parameter.

2.3 Similar patches matching

There is great similarity between consecutive frames of a video, so the tensor constructed from the video has a strong low-rank property[5]. For a moving object in the current frame, if the scene does not switch, similar parts should appear in the preceding and following frames. For each frame, set an image patch b(i,j) of size a×a as the reference patch. Then set a search window B(i,j) of size (l·a)×(l·a)×f centered on the reference patch, where l is a positive integer and f is the number of original video frames. The similarity criterion of the patches is expressed by the MSE, which is defined as

MSE = (1/N) Σ (Cij − Rij)²

where N = a×a denotes the number of pixels in the patch, Cij is the pixel value of the patch currently being tested, and Rij is the pixel value of the reference patch. The smaller the value of MSE, the better the two patches match. Search B(i,j) for image patches b(x,y) similar to the reference patch, and put their coordinates in the set Ω(i,j):

Ω(i,j) = {(x, y) : MSE(b(x,y), b(i,j)) ≤ t}

where t is a threshold, tested and determined according to the experimental environment. When the MSE is less than or equal to t, we conclude that the test patch and the reference patch are similar and add the test patch to Ω(i,j). Stacking the first n similar patches defines a tensor of size a×a×n.
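A minimal sketch of this matching step (the function names and helper below are our own, not from the paper): compute the MSE of every candidate patch inside the search window against the reference patch, keep those below the threshold t, and stack the first n of them into a tensor.

```python
import numpy as np

def mse(patch, ref):
    """MSE between a candidate patch and the reference patch."""
    return np.mean((patch.astype(np.float64) - ref.astype(np.float64)) ** 2)

def match_similar_patches(frames, i, j, a=6, l=2, t=50.0, n=30):
    """Collect up to n patches similar to the reference patch b(i,j).

    frames : list of 2-D depth frames; the reference patch is taken from frames[0].
    The search window spans l*a pixels around (i, j) in every frame.
    """
    ref = frames[0][i:i + a, j:j + a]
    matches = []
    for f, frame in enumerate(frames):
        h, w = frame.shape
        for x in range(max(0, i - l * a), min(h - a, i + l * a) + 1):
            for y in range(max(0, j - l * a), min(w - a, j + l * a) + 1):
                cand = frame[x:x + a, y:y + a]
                matches.append((mse(cand, ref), f, x, y))
    matches.sort(key=lambda m: m[0])                 # best matches first
    kept = [(f, x, y) for e, f, x, y in matches if e <= t][:n]
    # Stack the kept patches into a third-order a x a x n tensor; grouping
    # such stacks across frames yields the fourth-order tensor of the paper.
    tensor = np.stack([frames[f][x:x + a, y:y + a] for f, x, y in kept], axis=-1)
    return kept, tensor
```

On a static region every candidate matches perfectly, so the n best patches are simply the first n window positions; on a moving object only genuinely similar patches pass the threshold.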

3 Experiment

3.1 Experiment setting

The experiment uses three videos for testing. Some color image frames of the test videos are shown in Figure 1.

Fig.1. Test videos captured from the Kinect sensor. (a) The background is simple; the moving target is a man. (b) The background is complex; the moving targets are two men far from the camera. (c) The background is cluttered; the moving target is a man in a red T-shirt near the camera.

3.2 Parameter setting

In the same experimental environment, we compare our method with VBM3D[6] and RPCA. For the VBM3D and RPCA algorithms, we use the source code provided in the literature to obtain their best results. For our algorithm, the parameters are all set empirically so that the algorithm achieves its best results. In all tests, we set the parameters as follows: the number of test frames is 120; the number of similar patches is 30; the patch size is 6×6; the maximum number of iterations is 180; the tolerance thresholds are ε1=10^-5 and ε2=5×10^-8. We use the Peak Signal-to-Noise Ratio (PSNR)[7] to quantitatively measure the quality of the denoised video images, and the visual effect of the video enhancement can be observed directly.
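PSNR itself is straightforward to compute; a minimal sketch follows (the peak value 255 assumes 8-bit images, which is our assumption, not stated in the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; larger values mean less distortion."""
    err = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if err == 0:
        return float('inf')                # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```

Identical images give infinite PSNR, and each tenfold reduction in MSE adds 10 dB.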

3.3 Experiment results

To measure the quality of the processed images, we refer to the PSNR value. The unit of PSNR is dB, and the larger the value, the better the quality. As can be seen from Table 1, in the same experimental environment, the proposed method performs better than the other methods on all three groups of test videos. Fig.2 shows the enhancement results for the moving objects after removing the background with our method.

As we can see from Figure 3, the proposed method removes noise very well and basically restores the texture structure of the video. The effect of video enhancement is satisfactory.

Fig.2. The enhancement results for the moving objects after removing the background with our method. (a)(b)(c) Depth video frame screenshots from original depth videos a, b and c. (d)(e)(f) The enhancement results for the moving objects in videos a, b and c respectively.

Fig.3. Depth video enhancement results. (a)(b)(c) Depth video frame screenshots from original depth videos a, b and c respectively. (d)(e)(f) The enhancement results in videos a, b and c respectively.

Fig.4. The comparison results (partial enlarged views) of our method and the other methods (VBM3D and RPCA). (a)(b)(c) The enhancement results of depth videos a, b and c respectively with our method. (d)(e)(f) The same with VBM3D. (g)(h)(i) The same with RPCA.

We compare the results of our method with those of VBM3D and RPCA. To make the experimental results clearer, we show partial enlargements. By comparison, we can see that our method is superior to the other methods in denoising, repairing holes and maintaining edges.

4 Conclusion

In this paper, we propose a tensor-based enhancement algorithm for depth video that combines a tensor recovery model with patch-based video repair. Experimental results show that the proposed method effectively removes interference noise while maintaining edge information, and it is superior to traditional methods in depth video processing.

References

[1]Liu J, Gong X. Guided inpainting and filtering for Kinect depth maps[C]. IEEE International Conference on Pattern Recognition, 2012:2055-2058.

[2]Zhang X, Wu R. Fast depth image denoising and enhancement using a deep convolutional network[C]//Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016: 2499-2503.

[3]Xie J, Feris R S, Sun M T. Edge-guided single depth image super resolution[J]. IEEE Transactions on Image Processing, 2016, 25(1): 428-438.

[4]Wright J, Ganesh A, Min K, Ma Y. Compressive principal component pursuit[C]. IEEE International Symposium on Information Theory (ISIT), 2012.

[5]Chang Y J, Chen S F, Huang J D. A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities.[J]. Research in Developmental Disabilities, 2011, 32(6):2566-2570.

[6]Bang J Y, Ayaz S M, Danish K, et al. 3D Registration Using Inertial Navigation System And Kinect For Image-Guided Surgery[J]. 2015, 977(8):1512-1515.

[7]Wang Z, Hu J, Wang S, Lu T. Trilateral constrained sparse representation for Kinect depth hole filling[J]. Pattern Recognition Letters, 2015, 65: 95-102.
