
Accurate and Computationally Efficient Joint Multiple Kronecker Pursuit for Tensor Data Recovery

Computers, Materials & Continua, 2021, Issue 8

Weize Sun, Peng Zhang*, Jingxin Xu and Huochao Tan

1Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen, 518061, China

2Department of Housing and Public Works,Queensland,Australia

3Customer Service Centre,Guangdong Power Grid Corporation,Guangzhou,China

Abstract: This paper addresses the problem of tensor completion from limited samplings. Generally speaking, in order to achieve a good recovery result, many tensor completion methods employ alternating optimization or minimization with SVD operations, leading to a high computational complexity. In this paper, we aim to propose algorithms with high recovery accuracy and moderate computational complexity. It is shown that the data to be recovered contains a Kronecker tensor decomposition structure under multiple patterns, and therefore the tensor completion problem becomes a Kronecker rank optimization one, which can be further relaxed into a tensor Frobenius-norm minimization with a constraint on the maximum number of rank-1 basis tensors. Then the idea of orthogonal matching pursuit is employed to avoid the burdensome SVD operations. Based on these, two methods, namely iterative rank-1 tensor pursuit and joint rank-1 tensor pursuit, are proposed. Their economic variants are also included to further reduce the computational and storage complexity, making them effective for large-scale tensor data recovery. To verify the proposed algorithms, both synthetic data and real world data, including SAR data and video data completion, are used. Compared to the single pattern case, when multiple patterns are used, more stable performance can be achieved by the proposed methods at a higher complexity. Furthermore, results on both synthetic and real world data show the advantage of the proposed methods in terms of recovery accuracy and/or computational complexity over the state-of-the-art methods. To conclude, the proposed tensor completion methods are suitable for large scale data completion with high recovery accuracy and moderate computational complexity.

Keywords: Tensor completion; tensor Kronecker decomposition; Kronecker rank-1 decomposition

1 Introduction

The problem of data recovery from a limited number of observed entries, referred to as matrix completion or tensor recovery, has attracted significant attention in the past decade and has been applied in various fields such as recommendation systems [1,2], bioinformatics data [3], computer vision [4,5] and image data [6,7].

In practice, the observed matrix data often has a low rank structure. Therefore, the matrix completion problem can be solved by finding a low rank matrix to approximate the data matrix to be recovered. However, the process of rank minimization is NP-hard [8]. In order to relax the non-convex NP-hard problem to a tractable and convex one, [9,10] proposed to minimize the matrix nuclear norm instead of the matrix rank. Based on this concept, many algorithms have been developed, such as the singular value thresholding (SVT) [11] and singular value projection (SVP) [12] approaches. Unfortunately, a great number of singular value decomposition (SVD) operations or iterations are required by these methods, leading to high computational and storage complexity.

In real world systems, the observed data sometimes has a high dimensional structure. The recovery of such tensor data, which can be regarded as a generalization of matrix completion to the high dimensional case, is becoming more and more important. It is assumed that the observed data tensor contains a low rank structure and thus the missing entries can be recovered by minimizing the tensor rank. In fact, several tensor decomposition methods and definitions of tensor rank have been employed for the completion problem. Reference [13] proposed to minimize the number of rank-1 tensors from the CANDECOMP/PARAFAC (CP) decomposition to recover the original tensor data. However, the decomposition is very costly, and it might fail to give reasonable results in some cases because of its ill-posedness [14]. Other than the CP decomposition, the Tucker decomposition, which yields a core tensor and corresponding subspace matrices for each dimension, has been employed for tensor completion [15,16]. However, these methods require knowledge of the tensor rank, and might not be applicable in some applications.

Recently, some new types of tensor decomposition methods have been proposed. By operating on 3-dimensional (3-D) tensors as 2-D matrices using the tensor product, the tensor singular value decomposition (t-SVD) was proposed [17]. The corresponding tensor rank and tensor nuclear norm [18] were then defined and used for tensor data recovery. This method transforms the decomposition of one 3-D tensor into slice-wise matrix decompositions. However, for tensors with more than 3 dimensions, it folds all the dimensions higher than 3 into the third one, resulting in a loss of higher order information. Other researchers proposed to employ the Kronecker tensor decomposition (KTD), based on the Kronecker structure of a general R-D tensor, to solve the tensor completion problem [19]. This approach employs the idea of folding an R-D tensor into a higher-order tensor and then unfolding it into a matrix by general unfolding [20]. By reducing the tensor computation to a matrix computation, a matrix-like rank can be defined, and good recovery results can be obtained. However, the computational complexity of the KTD based methods is still unsatisfying because of the SVD operations on large matrices.

Furthermore, in order to improve the efficiency of data recovery, some orthogonal rank-1 matrix and tensor pursuit approaches have been proposed [21,22]. These methods transform the minimization of the matrix rank or tensor rank into an iterative rank-1 pursuit problem, and achieve a moderate data recovery result with low computational complexity. In this work, we propose two novel tensor completion methods based on the minimization of the tensor Kronecker rank together with the technique of matching pursuit, and apply them to several real world data completion applications.

The remainder of the paper is organized as follows. Section 2 presents the notations and definitions. In Section 3, the tensor recovery algorithms are developed in detail. Experiments are provided in Section 4 and conclusions are drawn in Section 5.

2 Notations and Definitions

The notations used in this paper are first defined. Scalars, vectors, matrices and tensors are denoted by italic, bold lower-case, bold upper-case and bold calligraphic symbols, respectively. The Frobenius-norm of a tensor $\mathcal{A}$ is defined as $\|\mathcal{A}\|_F=\sqrt{\sum_{m_1,\dots,m_R}|\mathcal{A}(m_1,\dots,m_R)|^2}$. Furthermore, by stacking the first $F$ dimensions of an R-D tensor $\mathcal{A}\in\mathbb{C}^{M_1\times M_2\times\cdots\times M_R}$ one by one into one dimension and the remaining dimensions into the other [23], the tensor $\mathcal{A}$ can be unfolded into a matrix $\mathbf{A}=\{\mathcal{A}\}_{1\rightarrow F}\in\mathbb{C}^{N_F\times M/N_F}$, where $N_F=\prod_{r=1}^{F}M_r$ and $M=\prod_{r=1}^{R}M_r$. The $i\times i$ identity matrix is symbolized as $\mathbf{I}_i$, and the transpose and conjugate transpose of a vector or matrix are written as $(\cdot)^T$ and $(\cdot)^H$. Furthermore, we define $\mathrm{vec}(\mathcal{A})$ as the vectorized result of the tensor $\mathcal{A}$, and $\dot{\mathbf{a}}=\mathrm{vec}(\mathcal{A}_\Omega)$ as the vectorization of $\mathcal{A}_\Omega$, where $\Omega$ is the set indicating the observed locations of tensor $\mathcal{A}$. Now we go on to the definitions.
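To make the unfolding operations concrete, the following NumPy sketches are used throughout this section (the paper's experiments are in MATLAB; Python is used here only for illustration). The general unfolding $\{\mathcal{A}\}_{1\rightarrow F}$ can be sketched as below; the column-major ('F') index ordering is an assumption on our part, as the exact convention of [23] is not restated here.

```python
import numpy as np

def general_unfold(A, F):
    """General unfolding {A}_{1->F}: stack the first F dimensions of an
    R-D tensor into the rows and the remaining R-F dimensions into the
    columns.  Fortran ('F') index ordering is one possible convention;
    the ordering used in [23] may differ."""
    dims = A.shape
    n_rows = int(np.prod(dims[:F]))
    n_cols = int(np.prod(dims[F:]))
    return A.reshape(n_rows, n_cols, order='F')

# Example: a 4-D tensor unfolded into an (M1*M2) x (M3*M4) matrix.
A = np.random.randn(3, 4, 5, 6)
print(general_unfold(A, 2).shape)   # (12, 30)
```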

Definition 2.1. The 2-way folding of an R-D tensor $\mathcal{A}\in\mathbb{C}^{M_1\times M_2\times\cdots\times M_R}$ by a pattern $\{\mathbf{K}\}$, where $\mathbf{K}=[K_1,K_2,\dots,K_R]$ and $M_r=K_rL_r$ for $r=1,2,\dots,R$ with integers $K_r$ and $L_r$, is $\mathcal{B}=\mathrm{fold}_{\{\mathbf{K}\}}(\mathcal{A})\in\mathbb{C}^{K_1\times\cdots\times K_R\times L_1\times\cdots\times L_R}$, where $\mathcal{B}(k_1,k_2,\dots,k_R,l_1,l_2,\dots,l_R)=\mathcal{A}(m_1,m_2,\dots,m_R)$ with $m_r=(l_r-1)K_r+k_r$, $k_r=1,2,\dots,K_r$ and $l_r=1,2,\dots,L_r$. This folding procedure in fact splits the $r$-th dimension of $\mathcal{A}$ into the $r$-th and $(r+R)$-th dimensions of $\mathcal{B}$.
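A minimal sketch of Definition 2.1, under the same column-major convention as above ($k_r$ is the fast index, matching $m_r=(l_r-1)K_r+k_r$); the function name is ours.

```python
import numpy as np

def fold_2way(A, K):
    """2-way folding of Definition 2.1 (a sketch): split each mode
    M_r = K_r * L_r into a (K_r, L_r) pair, giving a 2R-D tensor of
    size K_1 x ... x K_R x L_1 x ... x L_R."""
    K = list(K)
    L = [m // k for m, k in zip(A.shape, K)]
    R = A.ndim
    # Interleave (K_r, L_r) pairs; Fortran order makes k_r the fast index.
    interleaved = [d for pair in zip(K, L) for d in pair]
    B = A.reshape(interleaved, order='F')
    # Move all K_r axes to the front, followed by all L_r axes.
    perm = list(range(0, 2 * R, 2)) + list(range(1, 2 * R, 2))
    return B.transpose(perm)

A = np.random.randn(6, 8, 3)
print(fold_2way(A, K=[2, 4, 3]).shape)   # (2, 4, 3, 3, 2, 1)
```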

We can now go on to define the Kronecker tensor product [19] of two tensors $\mathcal{A}\in\mathbb{C}^{K_1\times K_2\times\cdots\times K_R}$ and $\mathcal{B}\in\mathbb{C}^{L_1\times L_2\times\cdots\times L_R}$ as $\mathcal{X}=\mathcal{B}\otimes\mathcal{A}\in\mathbb{C}^{M_1\times M_2\times\cdots\times M_R}$, where $M_r=K_rL_r$, $\mathcal{Y}=\mathrm{fold}_{\{\mathbf{K}\}}(\mathcal{X})$, $\{\mathcal{Y}\}_{1\rightarrow R}=\mathbf{a}\mathbf{b}^T$, $\mathbf{a}=\{\mathcal{A}\}_{1\rightarrow R}$ and $\mathbf{b}=\{\mathcal{B}\}_{1\rightarrow R}$.

Definition 2.2. The $(K\times L)$ tensor Kronecker unfolding [19] of an R-D tensor $\mathcal{X}\in\mathbb{C}^{M_1\times M_2\times\cdots\times M_R}$ with $M_r=K_rL_r$ under a pattern $\{\mathbf{K}\}$ is the matrix $\mathbf{Y}=\mathrm{unfold}_{\{\mathbf{K}\}}(\mathcal{X})$ of dimensions $\prod_{r=1}^{R}K_r\times\prod_{r=1}^{R}L_r$, whose entries are given by $\mathbf{Y}(\mathbf{k},\mathbf{l})=\mathcal{X}(\mathbf{m})$ for all $\mathbf{k}=[k_1,k_2,\dots,k_R]$, $k_r=1,2,\dots,K_r$, $\mathbf{l}=[l_1,l_2,\dots,l_R]$, $l_r=1,2,\dots,L_r$, and $\mathbf{m}=[m_1,m_2,\dots,m_R]$ with $m_r=(l_r-1)K_r+k_r$. This unfolding can be carried out by first computing $\mathcal{Y}=\mathrm{fold}_{\{\mathbf{K}\}}(\mathcal{X})\in\mathbb{C}^{K_1\times\cdots\times K_R\times L_1\times\cdots\times L_R}$ and then calculating $\mathbf{Y}=\{\mathcal{Y}\}_{1\rightarrow R}$. The Kronecker tensor decomposition (KTD) [19] can then be defined as in Definition 2.3.
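The Kronecker unfolding is just the composition of the two operations above. The following self-contained sketch also checks the key property behind Definition 2.2 and the Kronecker tensor product: the Kronecker unfolding of $\mathcal{X}=\mathcal{B}\otimes\mathcal{A}$ is the rank-1 matrix $\mathbf{a}\mathbf{b}^T$ (shown here for R = 2, where the tensor Kronecker product reduces to the ordinary matrix Kronecker product).

```python
import numpy as np

def kron_unfold(X, K):
    """(K x L) Kronecker unfolding of Definition 2.2 (a sketch): fold X by
    the pattern K and stack the K-modes into rows, the L-modes into columns."""
    K = list(K); L = [m // k for m, k in zip(X.shape, K)]; R = X.ndim
    B = X.reshape([d for p in zip(K, L) for d in p], order='F')
    B = B.transpose(list(range(0, 2 * R, 2)) + list(range(1, 2 * R, 2)))
    return B.reshape(int(np.prod(K)), int(np.prod(L)), order='F')

# Sanity check (R = 2): the Kronecker unfolding of X = B (x) A is rank 1,
# equal to vec(A) vec(B)^T.
A = np.random.randn(2, 3)
B = np.random.randn(4, 5)
X = np.kron(B, A)
Y = kron_unfold(X, K=A.shape)
print(np.allclose(Y, np.outer(A.flatten(order='F'), B.flatten(order='F'))))  # True
```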

Definition 2.3. The KTD of a tensor $\mathcal{X}\in\mathbb{C}^{M_1\times M_2\times\cdots\times M_R}$ under $G$ patterns $\{\mathbf{K}_1\},\{\mathbf{K}_2\},\dots,\{\mathbf{K}_G\}$ is

$\mathcal{X}=\sum_{g=1}^{G}\sum_{f=1}^{F_g}\mathcal{B}_{g,f}\otimes\mathcal{A}_{g,f},$    (1)

where $\mathcal{A}_{g,f}$ and $\mathcal{B}_{g,f}$ are the Kronecker factors under the $g$-th pattern.

Following this definition, the tensor rank of $\mathcal{X}$ can be defined as the total number of Kronecker terms the joint KTD approach generates, which is $\sum_{g=1}^{G}F_g$. Furthermore, when $G=1$ and $F_G=1$, $\mathcal{X}$ is called a Kronecker rank-1 tensor under the pattern $\{\mathbf{K}_G\}$ [19].

3 Main Result

The task is to recover the R-D tensor $\mathcal{X}\in\mathbb{C}^{M_1\times M_2\times\cdots\times M_R}$ from the partly observed tensor, denoted as $\mathcal{X}_\Omega$, where $\Omega$ is the 1/0 set indicating the locations of the observed entries. By writing the tensor $\mathcal{X}$ as a linear combination of several Kronecker rank-1 tensors under a given set of patterns $\{\mathbf{K}_1,\mathbf{K}_2,\dots,\mathbf{K}_G\}$, we get:

$\mathcal{X}=\sum_{g=1}^{G}\sum_{f=1}^{F}\theta_{g,f}\mathcal{M}_{g,f},$    (2)

where $\mathcal{M}_{g,f}$ is the $f$-th Kronecker rank-1 tensor with the $g$-th pattern $\mathbf{K}_g$ and $\theta_{g,f}$ is the magnitude of $\mathcal{M}_{g,f}$, for $g=1,2,\dots,G$ and $f=1,2,\dots,F$. Note that here we assume that the ranks of $\mathcal{X}_g$ for $g=1,2,\dots,G$ are all $F$. By writing $\boldsymbol{\theta}_g=[\theta_{g,1},\theta_{g,2},\dots,\theta_{g,F}]^T$ for $g=1,2,\dots,G$ and $\boldsymbol{\theta}=[\boldsymbol{\theta}_1;\boldsymbol{\theta}_2;\dots;\boldsymbol{\theta}_G]$, the low rank tensor completion problem becomes:

$\min_{\{\mathcal{M}_{g,f}\},\boldsymbol{\theta}}\ \|\boldsymbol{\theta}\|_0\quad\text{s.t.}\quad\mathcal{X}_\Omega=\Big(\sum_{g=1}^{G}\sum_{f=1}^{F}\theta_{g,f}\mathcal{M}_{g,f}\Big)_\Omega,$    (3)

where $\|\boldsymbol{\theta}\|_0$ denotes the number of nonzero elements of $\boldsymbol{\theta}$. For the noisy scenario, the constraint becomes $\|\mathcal{X}_\Omega-(\sum_{g=1}^{G}\sum_{f=1}^{F}\theta_{g,f}\mathcal{M}_{g,f})_\Omega\|_F\le\epsilon$, where $\epsilon$ is a small value. Problem (3) can be transformed into:

$\min_{\{\mathcal{M}_{g,f}\},\boldsymbol{\theta}}\ \Big\|\mathcal{X}_\Omega-\Big(\sum_{g=1}^{G}\sum_{f=1}^{F}\theta_{g,f}\mathcal{M}_{g,f}\Big)_\Omega\Big\|_F^2\quad\text{s.t.}\quad\|\boldsymbol{\theta}\|_0\le F,$    (4)

where $F$ is the maximum Kronecker rank. We can solve this problem with a greedy matching pursuit type method.

3.1 Iterative Multiple Kronecker Tensor Pursuit

In this iterative multiple Kronecker tensor pursuit (IKTP) method, we update one Kronecker rank-1 basis tensor $\mathcal{M}_n$ and one magnitude $\theta_n$ with one pattern $\mathbf{K}_h\in\{\mathbf{K}_1,\mathbf{K}_2,\dots,\mathbf{K}_G\}$, where $h=(n-1)\bmod G+1$, in each iteration. Therefore, problem (4) can be rewritten as

$\min_{\{\mathcal{M}_n\},\boldsymbol{\theta}}\ \Big\|\mathcal{X}_\Omega-\Big(\sum_{n=1}^{N}\theta_n\mathcal{M}_n\Big)_\Omega\Big\|_F^2,$    (5)

where $\boldsymbol{\theta}=[\theta_1,\theta_2,\dots,\theta_N]^T$ and $N=GF$.

Suppose that after the $(k-1)$-th iteration, $(k-1)$ Kronecker rank-1 basis tensors $\mathcal{M}_1,\mathcal{M}_2,\dots,\mathcal{M}_{k-1}$ and their weights $\boldsymbol{\theta}^{k-1}=[\theta_1,\theta_2,\dots,\theta_{k-1}]^T$ have been obtained. In the $k$-th iteration, the target is to pursue a new Kronecker rank-1 basis tensor $\mathcal{M}_k$ from the residual term $\mathcal{R}_k=\mathcal{X}_\Omega-\mathcal{Y}_{k-1,\Omega}$, where $\mathcal{Y}_{k-1}$ is the tensor recovered in the $(k-1)$-th iteration. According to Definition 2.2, we define $\mathbf{R}_k=\mathrm{unfold}_{\{\mathbf{K}_h\}}(\mathcal{R}_k)$ and $\mathbf{M}_k=\mathrm{unfold}_{\{\mathbf{K}_h\}}(\mathcal{M}_k)$, where $h=(k-1)\bmod G+1$; then $\mathbf{M}_k$ can be solved by optimizing:

$\max_{\mathbf{u}_k,\mathbf{v}_k}\ \mathbf{u}_k^T\mathbf{R}_k\mathbf{v}_k\quad\text{with}\quad\mathbf{M}_k=\mathbf{u}_k\mathbf{v}_k^T,$    (6)

where $\|\mathbf{u}_k\|=\|\mathbf{v}_k\|=1$. Here $\mathbf{u}_k$ and $\mathbf{v}_k$ are the pair of top left and right singular vectors of $\mathbf{R}_k$, which can be computed using the power method [23,24]. Then the tensor $\mathcal{M}_k$ can be obtained by folding $\mathbf{M}_k=\mathbf{u}_k\mathbf{v}_k^T$ back according to Definition 2.2.
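A minimal sketch of the power method referenced above, which returns the leading singular pair without a full SVD; the implementation details (random start, convergence test) are our own choices, not prescribed by the paper.

```python
import numpy as np

def top_singular_pair(R, n_iter=50, tol=1e-8):
    """Power method for the leading singular pair of a matrix R (a sketch).
    Returns unit vectors u, v and sigma with R ~ sigma * u v^T plus
    lower-order terms; u v^T is the rank-1 basis pursued in each iteration.
    For complex data, R.T should be replaced by R.conj().T."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(R.shape[1])
    v /= np.linalg.norm(v)
    sigma_old = 0.0
    for _ in range(n_iter):
        u = R @ v
        u /= np.linalg.norm(u)
        w = R.T @ u
        sigma = np.linalg.norm(w)
        v = w / sigma
        if abs(sigma - sigma_old) <= tol * max(sigma, 1.0):
            break
        sigma_old = sigma
    return u, v, sigma
```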

After solving the new Kronecker rank-1 basis tensor $\mathcal{M}_k$, the weight vector $\boldsymbol{\theta}^k$ for all currently available basis tensors $\{\mathcal{M}_1,\mathcal{M}_2,\dots,\mathcal{M}_k\}$ can be updated by:

$\boldsymbol{\theta}^k=\arg\min_{\boldsymbol{\theta}\in\mathbb{R}^k}\ \Big\|\mathcal{X}_\Omega-\Big(\sum_{i=1}^{k}\theta_i\mathcal{M}_i\Big)_\Omega\Big\|_F^2.$    (7)

By reshaping the tensors $\mathcal{X}_\Omega$ and $\mathcal{M}_{i,\Omega}$, $i=1,2,\dots,k$, into vectors $\dot{\mathbf{x}}$ and $\dot{\mathbf{m}}_i$, and letting $\bar{\mathbf{M}}_k=[\dot{\mathbf{m}}_1,\dot{\mathbf{m}}_2,\dots,\dot{\mathbf{m}}_k]$, the optimal solution $\boldsymbol{\theta}^k$ of (7) is calculated as

$\boldsymbol{\theta}^k=(\bar{\mathbf{M}}_k^T\bar{\mathbf{M}}_k)^{-1}\bar{\mathbf{M}}_k^T\dot{\mathbf{x}}.$    (8)
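A sketch of this least-squares weight update over the observed entries, assuming each basis tensor is simply vectorized on the mask $\Omega$; the helper name and the use of a numerically stable solver instead of the explicit inverse in (8) are our choices.

```python
import numpy as np

def update_weights(x_obs, basis_tensors, mask):
    """Least-squares weight update of (7)-(8) (a sketch): given the
    observed entries of X and the currently pursued rank-1 basis tensors,
    solve  min_theta || x_obs - M_bar theta ||_2^2, where each column of
    M_bar is a vectorized basis tensor restricted to the mask Omega."""
    M_bar = np.column_stack([M[mask] for M in basis_tensors])
    theta, *_ = np.linalg.lstsq(M_bar, x_obs, rcond=None)
    return theta

# x_obs = X[mask] holds the observed entries; 'mask' is the Boolean tensor Omega.
```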

Then we can go on to the $(k+1)$-th iteration until all $N$ rank-1 basis tensors are solved. This iterative multiple Kronecker tensor pursuit (IKTP) method is summarized in Tab. 1.

Table 1:Iterative Kronecker tensor pursuit
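Putting the pieces together, the following is a simplified sketch of the IKTP loop summarized in Tab. 1, building on the kron_unfold, top_singular_pair and update_weights helpers sketched earlier; kron_fold_back (our own helper) inverts the Kronecker unfolding, and the stopping criterion of Section 4 is omitted for brevity.

```python
import numpy as np

def kron_fold_back(Y, K, L):
    """Inverse of the Kronecker unfolding: map the (prod K) x (prod L)
    matrix Y back to an R-D tensor of size K_1*L_1 x ... x K_R*L_R."""
    R = len(K)
    T = Y.reshape(list(K) + list(L), order='F')
    perm = [i // 2 if i % 2 == 0 else R + i // 2 for i in range(2 * R)]
    T = T.transpose(perm)
    return T.reshape([k * l for k, l in zip(K, L)], order='F')

def iktp(X, mask, patterns, n_iter):
    """Iterative multiple Kronecker tensor pursuit (a sketch): cycle
    through the patterns, pursue one Kronecker rank-1 basis tensor per
    iteration from the residual, then re-fit all weights by least squares."""
    Y = np.zeros_like(X)
    bases = []
    x_obs = X[mask]
    for k in range(1, n_iter + 1):
        K = patterns[(k - 1) % len(patterns)]           # h = (k-1) mod G + 1
        L = [m // ki for m, ki in zip(X.shape, K)]
        R_k = np.where(mask, X - Y, 0.0)                # residual on observed entries
        u, v, _ = top_singular_pair(kron_unfold(R_k, K))
        M_k = kron_fold_back(np.outer(u, v), K, L)      # unit Frobenius norm basis
        bases.append(M_k)
        theta = update_weights(x_obs, bases, mask)
        Y = sum(t * M for t, M in zip(theta, bases))
    return Y
```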

3.2 Joint Multiple Kronecker Tensor Pursuit

The proposed IKTP approach updates one Kronecker rank-1 basis tensor according to one of the $G$ patterns alternately. When the number of ranks to be recovered is large, many iterations are required, making the approach computationally inefficient. To overcome this problem, we propose a joint multiple Kronecker tensor pursuit (JKTP) method that updates all $G$ Kronecker rank-1 basis tensors in one iteration under the given patterns.

According to (4), suppose that after the $(k-1)$-th iteration, the Kronecker rank-1 basis tensors $\mathcal{M}_{1,1},\dots,\mathcal{M}_{G,1},\mathcal{M}_{1,2},\dots,\mathcal{M}_{G,2},\dots,\mathcal{M}_{1,k-1},\dots,\mathcal{M}_{G,k-1}$ and their weights $\boldsymbol{\theta}^{k-1}=[\boldsymbol{\theta}_1;\boldsymbol{\theta}_2;\dots;\boldsymbol{\theta}_{k-1}]$, where $\boldsymbol{\theta}_f=[\theta_{1,f},\theta_{2,f},\dots,\theta_{G,f}]^T$ for $f=1,2,\dots,k-1$, have already been computed. In the $k$-th iteration, the target becomes to calculate $G$ new Kronecker rank-1 basis tensors $\mathcal{M}_{1,k},\dots,\mathcal{M}_{G,k}$ with unit Frobenius norm; the recovered tensor and the residual term are $\mathcal{Y}_{k-1}$ and $\mathcal{R}_k=\mathcal{X}_\Omega-\mathcal{Y}_{k-1,\Omega}$, respectively.

Under the given patterns $\{\mathbf{K}_1,\mathbf{K}_2,\dots,\mathbf{K}_G\}$, we define $\mathbf{R}_{g,k}=\mathrm{unfold}_{\{\mathbf{K}_g\}}(\mathcal{R}_k)$ and $\mathbf{M}_{g,k}=\mathrm{unfold}_{\{\mathbf{K}_g\}}(\mathcal{M}_{g,k})$ for $g=1,2,\dots,G$, and propose to solve the $\mathbf{M}_{g,k}$, $g=1,2,\dots,G$, by:

$\max_{\mathbf{u}_{g,k},\mathbf{v}_{g,k}}\ \mathbf{u}_{g,k}^T\mathbf{R}_{g,k}\mathbf{v}_{g,k},\quad g=1,2,\dots,G,$    (9)

where $\|\mathbf{u}_{g,k}\|=\|\mathbf{v}_{g,k}\|=1$. Here $\mathbf{u}_{g,k}$ and $\mathbf{v}_{g,k}$ are the pair of top left and right singular vectors of $\mathbf{R}_{g,k}$, which can be computed using the power method [25,26]. The new $\mathbf{M}_{g,k}$ is then obtained by computing $\mathbf{M}_{g,k}=\mathbf{u}_{g,k}\mathbf{v}_{g,k}^T$, and the tensor $\mathcal{M}_{g,k}$ can be retrieved from the updated $\mathbf{M}_{g,k}$ for $g=1,2,\dots,G$.

After solving the new Kronecker rank-1 basis tensors $\mathcal{M}_{1,k},\dots,\mathcal{M}_{G,k}$, we update the weights $\boldsymbol{\theta}^k=[\boldsymbol{\theta}_1;\boldsymbol{\theta}_2;\dots;\boldsymbol{\theta}_k]$ for all currently available basis tensors $\mathcal{M}_{1,1},\dots,\mathcal{M}_{G,1},\mathcal{M}_{1,2},\dots,\mathcal{M}_{G,2},\dots,\mathcal{M}_{1,k},\dots,\mathcal{M}_{G,k}$ by solving the following least squares regression problem:

$\boldsymbol{\theta}^k=\arg\min_{\boldsymbol{\theta}\in\mathbb{R}^{Gk}}\ \Big\|\mathcal{X}_\Omega-\Big(\sum_{f=1}^{k}\sum_{g=1}^{G}\theta_{g,f}\mathcal{M}_{g,f}\Big)_\Omega\Big\|_F^2.$    (10)

Then the optimal solution $\boldsymbol{\theta}^k$ of (10) is given by

$\boldsymbol{\theta}^k=(\bar{\mathbf{M}}_{G,k}^T\bar{\mathbf{M}}_{G,k})^{-1}\bar{\mathbf{M}}_{G,k}^T\dot{\mathbf{x}},$    (11)

where $\bar{\mathbf{M}}_{G,k}$ collects the vectorized observed entries of all current basis tensors as its columns and $\dot{\mathbf{x}}=\mathrm{vec}(\mathcal{X}_\Omega)$, analogously to (8).

Table 2:Joint Kronecker tensor pursuit
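For comparison with the IKTP sketch above, one JKTP iteration can be sketched as follows, reusing the same helpers; all $G$ rank-1 basis tensors are pursued from the same residual before the joint weight re-fit of (10)-(11).

```python
import numpy as np

def jktp_iteration(X, mask, Y_prev, bases, patterns):
    """One JKTP iteration (a sketch building on kron_unfold,
    kron_fold_back, top_singular_pair and update_weights above): pursue
    G new Kronecker rank-1 basis tensors, one per pattern, from the same
    residual, then jointly re-fit all weights by least squares."""
    R_k = np.where(mask, X - Y_prev, 0.0)
    for K in patterns:
        L = [m // ki for m, ki in zip(X.shape, K)]
        u, v, _ = top_singular_pair(kron_unfold(R_k, K))
        bases.append(kron_fold_back(np.outer(u, v), K, L))
    theta = update_weights(X[mask], bases, mask)
    Y = sum(t * M for t, M in zip(theta, bases))
    return Y, bases
```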

3.3 Economic Algorithms

In each iteration, the proposed IKTP algorithm tracks all pursued bases and saves them in memory, leading to a high storage requirement. Furthermore, when the number of iterations is large, the LS solution (8) involves the inverse of a large matrix, making it computationally unattractive. To overcome these problems, an economic updating scheme which updates $\boldsymbol{\alpha}=[\alpha_1,\alpha_2]$ instead of the magnitude vector $\boldsymbol{\theta}$ can be used in Algorithm 1, where $\boldsymbol{\alpha}$ is obtained by

$\boldsymbol{\alpha}=\arg\min_{\alpha_1,\alpha_2}\ \|\mathcal{X}_\Omega-(\alpha_1\mathcal{Y}_{k-1}+\alpha_2\mathcal{M}_k)_\Omega\|_F^2,$    (12)

and the updated recovered tensor becomes $\mathcal{Y}_k=\alpha_1\mathcal{Y}_{k-1}+\alpha_2\mathcal{M}_k$.
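A minimal sketch of this economic update for IKTP: only the previous estimate and the newest basis need to be stored, and the least-squares problem has just two unknowns. For the JKTP variant described next, the same idea extends to $G+1$ columns.

```python
import numpy as np

def economic_update(x_obs, Y_prev, M_k, mask):
    """Economic weight update (a sketch of (12)): instead of re-fitting all
    k weights, solve a 2-variable least-squares problem for
    alpha = [alpha_1, alpha_2] and return Y_k = alpha_1*Y_{k-1} + alpha_2*M_k."""
    A = np.column_stack([Y_prev[mask], M_k[mask]])
    alpha, *_ = np.linalg.lstsq(A, x_obs, rcond=None)
    return alpha[0] * Y_prev + alpha[1] * M_k
```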

Similarly, the economic updating technique can be used to update $\boldsymbol{\alpha}=[\alpha_1,\alpha_2,\dots,\alpha_{G+1}]$ for the JKTP approach instead of (10), where $\boldsymbol{\alpha}$ is calculated as

$\boldsymbol{\alpha}=\arg\min_{\boldsymbol{\alpha}}\ \Big\|\mathcal{X}_\Omega-\Big(\alpha_1\mathcal{Y}_{k-1}+\sum_{g=1}^{G}\alpha_{g+1}\mathcal{M}_{g,k}\Big)_\Omega\Big\|_F^2,$    (13)

and the recovered tensor can be obtained by

$\mathcal{Y}_k=\alpha_1\mathcal{Y}_{k-1}+\sum_{g=1}^{G}\alpha_{g+1}\mathcal{M}_{g,k}.$    (14)

4 Performance Evaluation

In this section, we conduct experiments on both visual data and SAR imaging data completion. The proposed IKTP and JKTP methods, as well as their economic realizations IKTP-econ and JKTP-econ, are evaluated, and the state-of-the-art algorithms, including Tucker [15], KTD [19], R1TP [21] and R1MP [22], are employed for comparison. For all algorithms, the stopping criterion is met when either of two conditions is reached: $\|\mathcal{X}_k-\mathcal{X}_{k-1}\|_F<\varepsilon=10^{-6}$ or a maximum number of iterations $i_{\max}$. All the experiments are performed using MATLAB running on an Intel(R) Core(TM) i7-8700@3.2 GHz for 100 Monte Carlo trials.

4.1 Image Recovery

First, the problem of image recovery from randomly sampled entries by the JKTP, JKTP-econ, IKTP, IKTP-econ, Tucker [15], KTD [19] and R1TP [22] methods is tackled. Six images of dimensions 256×256×3 and 1024×1024×3, separated into 2 groups, are used for testing and are shown in Fig. 1. We can infer that the completion results of the images from the first group might be better than those from the second one, as the pixels in the images from the former group are more correlated with each other.

Figure 1: Images for testing: images in group 1 contain some repeating patterns while those in group 2 do not

In the first test, the convergence performance of the proposed algorithms is verified. The images of dimensions 256×256×3 are used for the proposed methods. The patterns used in this test are {K1}=[8,8,1], {K2}=[16,16,1], {K3}=[32,32,1] and {K4}=[64,64,1], and the observation rate is set to 0.2. Note that in order to achieve good performance, the patterns are suggested to be set as square as possible. By writing 'JKTP-i', 'IKTP-i', 'JKTP-econ-i' and 'IKTP-econ-i', i=1,2,3,4, we mean that the proposed approach uses i patterns out of all 4 patterns for data recovery, and the patterns used are randomly selected in each Monte Carlo trial. The PSNR results of the two groups of images versus the number of iterations are shown in Figs. 2 and 3, respectively. It is shown that the JKTP and JKTP-econ algorithms converge within 15 iterations and the IKTP and IKTP-econ approaches converge within 25 iterations when the number of patterns is larger than 3. In the remaining image and video experiments, the maximum numbers of iterations are set to 20 and 30 for these methods, respectively, and we use at least 3 patterns for the proposed algorithms.

Figure 2:PSNR vs.numbers of iterations with 20% observed entries for group 1 images

In the second test, the completion performance of the proposed approaches under different numbers of patterns and that of the state-of-the-art methods are evaluated for image recovery under a 10% observation rate. The four patterns are the same as those in the first test. The average PSNR and SSIM results as well as the computational time in seconds of the different methods on the first image, namely 'Group 1: Beans' at the resolution of 256×256×3, are shown in Tab. 3. Similar behaviour can be observed for the other images in Fig. 1, and thus their recovery results are not shown here. It is shown that when more patterns are used, the proposed approaches achieve better performance. The Tucker and R1MP methods as well as the KTD approach with i = 1,2,3,4 patterns are also included for comparison. It is shown that the JKTP and JKTP-econ methods give inferior results. For i ≥ 2, all the proposed algorithms give better performance than the Tucker and R1MP approaches. Furthermore, the computational complexity of the proposed methods is much smaller than that of KTD, making them more attractive for dealing with image data.

Figure 3:PSNR vs.numbers of iterations with 20% observed entries for group 2 images

Table 3:Image recovery performance of different algorithms

In the third test, the performance under different observation percentages, ranging from 2% to 10%, is evaluated. The images with resolution 1024×1024×3 are used. We test the proposed methods with 4 patterns {K1}=[16,16,1], {K2}=[32,32,1], {K3}=[64,64,1] and {K4}=[16,16,1]. The other parameters for the proposed approaches are the same as those in the previous test. The PSNR and SSIM results of the two groups of images are shown in Figs. 4 and 5, respectively. Generally speaking, the proposed methods give 3 to 7 dB gain over the other algorithms in all cases. The average computational times of the JKTP, JKTP-econ, IKTP, IKTP-econ, KTD, R1MP and Tucker methods are 54.41, 30.54, 14.75, 9.51, 1.57×10^3, 17.54 and 77.13 s. The proposed methods run slightly slower than the R1MP method because R1MP is a matrix based rank-1 pursuit method while the proposed methods are tensor based ones. Compared to the other methods, the computational complexity of the proposed algorithms is much smaller, indicating the efficiency of the proposed approaches, especially when the dimensions of the data to be recovered are large.

Figure 4:Average PSNR and SSIM vs.observation percentage of group 1 images (a) PSNR(b) SSIM

4.2 Video Inpainting

In this part, we test the JKTP, JKTP-econ, IKTP, IKTP-econ, Tucker [15], R1MP [22] and R1TP [21] algorithms on the completion of the 'Gun Shooting' video [25,26] from randomly sampled entries; the first and last frames of the video are shown in Fig. 6. Note that the KTD approach is not included for comparison because of its high computational complexity. The video is of dimensions 200×520×3×80.

Situations with different numbers of frames ranging from 10 to 80 are tested. The observation rate is set to 0.03, and for all the methods a maximum number of 50 iterations is used. The patterns for the proposed methods are {K1}=[20,52,3,5], {K2}=[40,104,3,5], {K3}=[50,130,3,5] and {K4}=[20,52,1,5], {K5}=[40,104,1,5], {K6}=[50,130,1,5]. The PSNR and SSIM results are shown in Fig. 7 and the computational complexity results are shown in Fig. 8. Note that for the proposed methods, '-3' means that only the first 3 patterns are used, while '-6' means that all 6 patterns are employed in data recovery. Generally speaking, the proposed methods give the best PSNR performance when the number of frames used is higher than 20. It is worth noting that although the proposed methods with 3 patterns perform worse than the R1TP method when the number of frames used is less than 20, when more frames are used, the recovery performance of the proposed approaches increases rapidly and far outperforms the other methods.

Figure 5:Average PSNR and SSIM vs.observation percentage of group 2 images (a) PSNR(b) SSIM

Figure 6:video data for testing

4.3 High Resolution SAR Imaging

Finally, we evaluate the algorithms with under-sampled high resolution SAR imaging data [27,28]. We employ the model shown in Fig. 9 to conduct our experiments as follows. First, the test scene is scanned by the SAR system to generate the data $\mathcal{X}$, with the scanning parameters following those in [27]. Note that the true image $\mathcal{I}$ can be computed from the data $\mathcal{X}$ by an FFT process. The data $\mathcal{X}$ is then sampled randomly under a given observation rate to obtain $\mathcal{X}_\Omega$. Finally, the tensor recovery methods are applied to recover the original data as $\hat{\mathcal{X}}$, and the recovered image $\hat{\mathcal{I}}$ is computed by the same FFT process. The mean square error (MSE) between $\mathcal{X}$ and $\hat{\mathcal{X}}$ is used to evaluate the performance of the proposed four methods and the R1MP [22], Tucker [15] and KTD [19] methods. The tested high resolution SAR imaging scene of size 6200×12000 is shown in Fig. 10a.
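The evaluation pipeline of Fig. 9 can be sketched as below. This is only an illustration under simplifying assumptions: the function names are ours, the `recover` argument stands for any of the completion methods of Section 3, and a plain 2-D FFT is used as a stand-in for the actual image formation, whose parameters follow [27].

```python
import numpy as np

def evaluate_sar_recovery(X, rate, recover):
    """Sketch of the SAR recovery evaluation of Fig. 9 (illustrative names):
    sample the raw data X at the given observation rate, recover it with a
    tensor completion routine recover(X_obs, mask), form the image with an
    FFT, and report the MSE between X and its estimate."""
    rng = np.random.default_rng(0)
    mask = rng.random(X.shape) < rate          # random observation pattern Omega
    X_obs = np.where(mask, X, 0.0)
    X_hat = recover(X_obs, mask)               # e.g. JKTP / IKTP from Section 3
    I_hat = np.fft.fft2(X_hat)                 # simplified image formation step
    mse = np.mean(np.abs(X - X_hat) ** 2)
    return I_hat, mse
```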

Figure 7:PSNR and SSIM results vs.number of frames (a) PSNR (b) SSIM

Figure 8:Computational complexity vs.number of frames

Figure 9:Model of SAR image recovery

Figure 10:The original, sampled and recovered SAR images (a) Original (b) Sampled (c) JKTP(d) JKTP-econ (e) IKTP (f) IKTP-econ (g) R1MP (h) Tucker

Considering that the size 6200 × 12000 of the SAR imaging data is too large to be computed effectively on one computer, we do not complete the whole SAR data in one run. Instead, we complete small sub-blocks of dimensions 620 × 50, 620 × 200 and 620 × 400 at a time and repeat the process 2400, 600 and 300 times, respectively, to cover the whole image. The patterns for the proposed methods with different completed data sizes are {K1} = [155,2], ...
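A sketch of this block-wise processing is given below; `complete_fn` is a placeholder for any of the completion methods above, and the block shapes correspond to the 620 × 50, 620 × 200 or 620 × 400 sub-blocks of the 6200 × 12000 scene.

```python
import numpy as np

def complete_in_blocks(X_obs, mask, block_shape, complete_fn):
    """Block-wise completion of a large data matrix (a sketch): split the
    matrix into blocks of size block_shape, complete each block
    independently with complete_fn, and reassemble the result."""
    rows, cols = X_obs.shape
    br, bc = block_shape
    out = np.zeros_like(X_obs)
    for r in range(0, rows, br):
        for c in range(0, cols, bc):
            rs = slice(r, min(r + br, rows))
            cs = slice(c, min(c + bc, cols))
            out[rs, cs] = complete_fn(X_obs[rs, cs], mask[rs, cs])
    return out
```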

Situations with different observation percentages ranging from 10% to 30% are tested and a maximum number of 40 iterations is used. The SAR imaging scene sampled at an observation percentage of 10% is shown in Fig. 10b, and the SAR imaging scenes of size 620×200 recovered by all the algorithms under the 10% observation rate are shown in Figs. 10c-10h. The completion results, including MSE performance and CPU times, are listed in Tab. 4. Generally speaking, the proposed methods give the best performance with moderate computational complexity among all the algorithms.

Table 4:Completion results with different step-sizes of completion under 10% to 30% observed entries

5 Conclusion

In this paper, novel tensor completion algorithms combining the ideas of tensor Kronecker decomposition and rank-1 tensor pursuit, named IKTP and JKTP, are proposed and applied to color image, video and SAR image inpainting. A novel weight update algorithm that reduces the time and storage complexity is also derived. Experimental results show that the proposed methods outperform the state-of-the-art algorithms in terms of PSNR, SSIM and MSE. Among the tensor based algorithms, the results also show the advantage of the proposed methods in terms of computational complexity.

Funding Statement:The work described in this paper was supported in part by the Foundation of Shenzhen under Grant JCYJ20190808122005605, and in part by National Science Fund for Distinguished Young Scholars under grant 61925108, and in part by the National Natural Science Foundation of China (NSFC) under Grant U1713217 and U1913203.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
