
On the Filtered Krylov-Like Sequence Method for Solving Non-Symmetric Eigenvalue Problems

2022-11-05

Tan Xueyuan Cheng Lan

(1. Jiangsu Key Laboratory for NSLSCS,School of Mathematical Science,Nanjing Normal University,Nanjing 210046,China;2. School of Mathematics and Statistics,Hunan First Normal University,Changsha 410205,China)

Abstract The filtered Krylov-like sequence method, which integrates the standard Krylov subspace method with the polynomial filtering technique, is efficient for computing several extreme eigenvalues of symmetric matrices. In this paper, we generalize this method to compute the eigenvalues with largest real parts, and the corresponding eigenvectors, of non-symmetric matrices. The filtered Krylov-like sequence method can be expected to show great superiority and robustness over the standard Krylov subspace methods. Numerical experiments are carried out to show the competitiveness of the new method.

Key words Eigenvalue; Eigenvector; Filtered Krylov-like subspace; Chebyshev polynomial; Non-symmetric matrix

1 Introduction

Denote by R^n the n-dimensional real vector space, and by R^{n×n} the set of n-by-n real matrices. The same notations carry over to complex vector and matrix spaces. We consider the following standard eigenvalue problem

Ax = λx, (1.1)

where A ∈ R^{n×n} is a large, sparse, non-symmetric matrix and (λ, x), with λ ∈ C and x ∈ C^n, is an eigenpair of the matrix A. We mainly focus on computing the eigenvalues with largest real parts and the corresponding eigenvectors of A. Here and in the subsequent discussions, ‖·‖ denotes the Euclidean vector norm.
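For orientation, the target computation in (1.1) can already be carried out with an off-the-shelf sparse eigensolver; the sketch below (not the method of this paper) uses SciPy's implicitly restarted Arnoldi routine `eigs` with `which='LR'` on a randomly generated sparse non-symmetric test matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 400
# Random sparse non-symmetric matrix; the nonnegative random entries give it
# a well-separated dominant (Perron) eigenvalue with largest real part.
A = sp.random(n, n, density=0.02, random_state=0, format="csr") \
    + sp.diags(np.linspace(0.0, 1.0, n))

# Compute the 3 eigenvalues with largest real parts and their eigenvectors.
vals, vecs = eigs(A, k=3, which="LR")
for lam, x in zip(vals, vecs.T):
    res = np.linalg.norm(A @ x - lam * x)
    print(f"Re(lambda) = {lam.real:+.4f}, residual = {res:.2e}")
```

Such black-box solvers serve as the baseline against which the filtered methods discussed below are compared.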

In recent decades, a large number of efficient and reliable solvers have been developed for symmetric and non-symmetric eigenvalue problems, in both the standard and the generalized case. The commonly used techniques fall into three categories. The first comprises the gradient-type approaches, such as the steepest descent method [6], the conjugate gradient method [2, 28] and the locally optimal preconditioned conjugate gradient method [9], to name just a few. The gradient-type methods use a simple iterative scheme in which the Rayleigh-Ritz procedure is performed in a fixed subspace of finite dimension. This scheme is attractive since it is economical in terms of storage and computational effort; however, the convergence of gradient-type methods is rather slow, and for this reason some form of preconditioning is often suggested as an acceleration technique. The second class consists of the Krylov subspace methods [18, 21], including the Lanczos method for symmetric eigenvalue problems and the Arnoldi method for non-symmetric ones. In essence, the classical Krylov subspace method implements the Rayleigh-Ritz procedure by projecting the original matrix A onto a sequence of enlarging Krylov subspaces, and it attains a locally linear convergence rate. As the dimension increases, however, the Krylov subspace method may suffer from large storage demands and poor numerical properties due to loss of orthogonality. Different preconditioning techniques and their improvements have been employed to deal with these breakdowns; we refer to [3, 10, 26] and the references therein. Finally, the Davidson-type methods, such as the Davidson method [4], the Jacobi-Davidson method [25] and their variations [13, 16, 17, 24], have received much attention from researchers. At each iteration step of a Davidson-type method, a correction equation needs to be solved, which brings an elegant convergence rate at the price of additional computational cost.

Polynomial filtering techniques, which require only matrix-vector products throughout the iteration process, are valuable tools for enhancing the components belonging to the wanted part of the spectrum and weeding out the unwanted ones, and they have been very successful in obtaining superior convergence factors for eigenvalue problems. Currently, the principal polynomial filtering methods are the filtered Lanczos method [1, 5] and the Chebyshev-Davidson method [30, 31] for symmetric eigenvalue problems, and the Arnoldi-Chebyshev method [20, 22] for non-symmetric eigenvalue problems.

In [14], Miao proposed a filtered Krylov-like sequence method for computing extreme eigenvalues and associated eigenvectors of symmetric matrices. The method combines the standard Krylov subspace method with polynomial filtering techniques, which results in a new method that behaves more stably and robustly than the standard Krylov subspace method. In this paper, we aim to generalize the filtered Krylov-like sequence method from symmetric matrices to non-symmetric ones, noting that non-symmetric eigenvalue problems can be arbitrarily difficult since the eigenvalues of non-symmetric matrices are complex. Recall that for symmetric eigenvalue problems, we construct in each iteration an interval containing the undesired eigenvalues. Similarly, for non-symmetric eigenvalue problems we must determine in advance a region of the complex plane which contains the undesired eigenvalues. Stated more rigorously, this region is an ellipse, which one would like to find readily and efficiently, but doing so represents a significant challenge. In this paper, we propose a convenient and useful way to determine such an ellipse, which is of vital importance in complex Chebyshev accelerating techniques.

The paper is organized as follows. In Section 2, we introduce some basic facts about the Chebyshev accelerating technique, and then generalize the filtered Krylov-like sequence method to non-symmetric matrices. In Section 3, we provide a practically useful, although not optimal, way to determine an ellipse. Some numerical examples are carried out in Section 4 to exhibit the advantages of the filtered Krylov-like sequence method, and, in the last section, we end this paper with some concluding remarks.

2 The Filtered Krylov-Like Sequence Method

Polynomial filter accelerating techniques have been explored extensively to improve the convergence properties and robustness of the Lanczos/Arnoldi and Davidson methods; see [5, 12, 20, 26, 31].

We begin with a brief introduction to the Chebyshev accelerating techniques, and we refer to [20, 21] for more details. Denote by {x_i}_{i=1}^n the unit eigenvectors of the diagonalizable non-symmetric matrix A, and assume that the approximate eigenvector x of x_1 admits the eigen-decomposition x = μ_1 x_1 + μ_2 x_2 + ··· + μ_n x_n. Then the resulting vector z_m = p_m(A)x filtered by a polynomial p_m(λ) of degree m can be represented as

z_m = μ_1 p_m(λ_1) x_1 + μ_2 p_m(λ_2) x_2 + ··· + μ_n p_m(λ_n) x_n.

In order to ensure that the vector z_m converges to the desired eigenvector x_1, we seek a polynomial p_m(λ) such that every |p_m(λ_i)| (i ≠ 1) is as small as possible compared with |p_m(λ_1)|. As a matter of fact, this can be recast as seeking a polynomial achieving

min over p_m of max_{i≠1} |p_m(λ_i)| subject to p_m(λ_1) = 1.

The performance of the filtered Krylov subspace method depends on the polynomial filter p(t) chosen in advance; thus, in order to obtain an efficient polynomial filter, one must call other methods to obtain approximations to both the desired and the undesired eigenvalues so as to determine the parameters involved in the polynomial filter, e.g., the ellipse E(d, c, a) involved in the complex Chebyshev polynomial [20].

In fact, the filtered Krylov-like sequence method can overcome this drawback, because it updates the polynomial filters p^(j)(t) dynamically using the approximations obtained step by step, which shows great superiority over a fixed polynomial filter. In addition, the filtered Krylov-like sequence method differs from existing filtered Krylov subspace methods, e.g., the filtered Lanczos method in [5], in that it projects the matrix A itself, rather than p(A) (where p(t) is the polynomial filter), onto the current approximate subspace to extract the desired eigenpair.
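The scaled Chebyshev filters on an ellipse E(d, c, a) can be applied through the standard three-term recurrence; the sketch below follows Saad's formulation of the Chebyshev iteration, where `lam_est` (an estimate of the wanted eigenvalue, used to normalize the polynomial) and all parameter names are ours, not taken from Method 2.1.

```python
import numpy as np

def chebyshev_filter(A, v, m, d, c, lam_est):
    """Apply an m-degree (m >= 1) scaled Chebyshev filter p_m(A) to v.

    The ellipse has center d and focal distance c; the polynomial is scaled
    so that p_m(lam_est) = 1 exactly, while values mapped inside the ellipse
    are damped.  Three-term recurrence in the style of Saad's
    Arnoldi-Chebyshev iteration (a sketch, not the paper's exact (2.3)).
    """
    sigma1 = c / (lam_est - d)
    w_prev = v
    w = (sigma1 / c) * (A @ v - d * v)        # degree-1 filtered vector
    sigma = sigma1
    for _ in range(2, m + 1):
        sigma_next = 1.0 / (2.0 / sigma1 - sigma)
        w_next = 2.0 * (sigma_next / c) * (A @ w - d * w) \
                 - sigma * sigma_next * w_prev
        w_prev, w, sigma = w, w_next, sigma_next
    return w
```

For a diagonal test matrix with a wanted eigenvalue at 5 and unwanted ones inside the ellipse, the filter leaves the wanted component unchanged (p_m(λ̃) = 1 by construction) while strongly damping the rest; complex d and c are handled by the same code in complex arithmetic.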

Next, we present the filtered Krylov-like sequence method for finding the eigenvalues with largest real parts of large, sparse non-symmetric matrices; with minor modifications it is also suitable for finding the eigenvalues with smallest real parts.

Method 2.1 (The Filtered Krylov-Like Sequence Method)

Step 1 of Method 2.1 presents the initialization procedure; the filtered Krylov-like sequence method can be expected to be especially effective if the initial vector is a sufficiently good approximation. In general, for lack of eigenvector information, we cannot start with a suitable approximate eigenvector; alternatively, a random vector can be taken as the initial vector.

Step 2(b) gives the choices of the best ellipse and the polynomials, which are essential in the design of robust eigenvalue algorithms. The determination of the best ellipse will be covered in a later section, and the construction of the ℓ−1 polynomial filters in each iteration depends on the problem at hand. A simple choice, which is utilized in our numerical experiments, is p^(1)(λ) = λ, p^(j)(λ) = p_j(λ), j = 2, 3, ···, ℓ−1, where p_j(λ) is the scaled Chebyshev polynomial filter of degree j defined in (2.3).

Steps 2(c)-(f) are the Rayleigh-Ritz process, which can be summarized as follows. First, an orthonormal basis for the projection subspace in (2.4) is constructed. Next, the projected matrix H_ℓ is obtained by projecting the matrix A onto this orthonormal basis. Then the desired eigenvalues and eigenvectors can be well approximated from those of the projected matrix H_ℓ. A first observation is that we can obtain the orthonormal basis of the subspace (2.4) by applying the modified Gram-Schmidt procedure. Note also that the loss of orthogonality can be severe if we assign the parameter ℓ (here and in what follows, we refer to ℓ as the restart number) a large value, so some form of re-orthogonalization is often employed; see reference [18] for details.
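Since the full statement of Method 2.1 is not reproduced above, the following is only a plausible sketch of one restart cycle based on the description of Steps 2(c)-(f): each filter is applied to the most recent basis vector, the result is orthonormalized by modified Gram-Schmidt, and A itself (not p(A)) is projected onto the resulting basis. The filter interface `p(A, v)` is our assumption.

```python
import numpy as np

def mgs(Q, w):
    """Modified Gram-Schmidt: orthogonalize w against the vectors in Q."""
    for q in Q:
        w = w - (q.conj() @ w) * q
    nrm = np.linalg.norm(w)
    return w / nrm if nrm > 1e-12 else None   # None signals breakdown

def fks_step(A, v0, filters):
    """One restart cycle of a filtered Krylov-like sequence method (sketch)."""
    v = v0 / np.linalg.norm(v0)
    Q = [v]
    for p in filters:
        w = mgs(Q, p(A, Q[-1]))       # filter the latest basis vector
        if w is not None:
            Q.append(w)
    Qm = np.column_stack(Q)
    H = Qm.conj().T @ (A @ Qm)        # project A itself, not p(A)
    theta, S = np.linalg.eig(H)
    k = int(np.argmax(theta.real))    # Ritz pair with largest real part
    return theta[k], Qm @ S[:, k]
```

With the trivial filters p^(j)(A, v) = Av this reduces to a restarted Krylov/Rayleigh-Ritz iteration, which is how the test below exercises it.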

3 Further Discussion

which leads to the validity of the estimate in (3.1).

If the imaginary parts of the undesired eigenvalues of A are relatively small, that is, if the ellipse E(d, c, a) determined by the Chebyshev iteration is fat, this is beneficial for computing the eigenvalues with the largest real parts. In fact, if λ_1 is well separated from the other eigenvalues {λ_i}_{i=2}^n, say, the real or imaginary part of λ_1 is very large, which also indicates that |λ_1 − d| is relatively large compared with a, we can choose proper parameters a and c to make the maximum damping coefficient κ as small as possible.

However, in the case that the imaginary parts of the undesired eigenvalues of A are dominant, so that the ellipse E(d, c, a) determined by the Chebyshev iteration is thin, the upper bound of κ in Theorem 3.1 may be very large, since |a| > |λ_1 − d| may hold. This is disadvantageous for computing the eigenvalues with the largest real parts, but advantageous for computing the eigenvalues with the largest imaginary parts. Therefore, for computing the eigenvalues with the largest real parts in this case, in the actual realization of the proposed method we prefer to determine an ellipse which is closer to a fat one. We now briefly discuss how to determine the ellipse.

According to Theorem 3.1, an applicable and effective method for determining the best ellipse can be devised, according to whether the imaginary (or real) parts of the undesired eigenvalues of A are dominant. We now present a synopsis of the method; for a detailed discussion, the reader is referred to the recent survey [15]. Let {λ_i}_{i=r+1}^n be the unwanted eigenvalues, and set x_+ = max_{r+1≤i≤n} Re(λ_i), x_− = min_{r+1≤i≤n} Re(λ_i) and y_+ = max_{r+1≤i≤n} Im(λ_i). Without loss of generality, we assume that r = 1. The algorithm is as follows.

Algorithm 3.1 (Determine an ellipse)

1. Input x_+, x_−, y_+, c; set ξ = Re(λ_1).

Note that the eigenvalues λ_i, r+1 ≤ i ≤ n, used in Algorithm 3.1 are unknown. In a practical implementation, we may exploit the Ritz values as eigenvalue approximations. Furthermore, a small enough parameter c is selected in advance, for instance c = 0.01; choosing a value smaller than 0.01 does not noticeably affect the algorithm.
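Since the full statement of Algorithm 3.1 is not reproduced above, the following is only a hypothetical sketch of how an enclosing ellipse E(d, c, a) might be computed from approximate unwanted eigenvalues (e.g., Ritz values): take the bounding box of the unwanted values and convert its half-widths into semi-axes and a focal distance. The `margin` parameter and all names below are our assumptions, not the paper's.

```python
import numpy as np

def bounding_ellipse(unwanted, margin=0.01):
    """Hypothetical sketch: fit an ellipse E(d, c, a) around the given
    approximate unwanted eigenvalues; `margin` is a small safety parameter
    (possibly playing the role of the small c chosen in advance)."""
    xp = max(z.real for z in unwanted)        # x_+
    xm = min(z.real for z in unwanted)        # x_-
    yp = max(abs(z.imag) for z in unwanted)   # y_+
    d = (xp + xm) / 2.0                       # center on the real axis
    ax = (xp - xm) / 2.0 + margin             # semi-axis in the real direction
    b = yp + margin                           # semi-axis in the imaginary direction
    a = max(ax, b)                            # major semi-axis
    c = np.sqrt(complex(ax**2 - b**2))        # focal distance; imaginary if thin
    return d, c, a
```

A "fat" ellipse (real spread dominant) yields a real focal distance c, while dominant imaginary parts yield a purely imaginary c, matching the fat/thin dichotomy discussed above.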

4 Numerical Results

In this section, we use one example to examine the numerical behavior of the filtered Krylov-like sequence (FKS) method and compare it with the Arnoldi-Chebyshev (AC) method [20], the Chebyshev-Davidson (CD) method, and the inexact rational Krylov sequence (IRKS) method, for computing eigenvalues with largest real parts of non-symmetric matrices. We also exhibit the numerical behavior of the filtered Krylov-like sequence method with fixed polynomial filters (FKS-F).

It is worth pointing out that the ℓ−1 polynomial filters in each iteration of the FKS method are flexible. In our practical implementation, p^(1)(t) = t, p^(j)(t) = c_j(t), j = 2, 3, ···, ℓ−1, where c_j(t) is a complex Chebyshev polynomial of degree j and ℓ is the restart number mentioned earlier. Note also that the framework of the CD method for non-symmetric eigenvalue problems differs slightly from that for symmetric ones in that an interval is replaced by an ellipse. Moreover, for the IRKS method, the Generalized Minimal Residual (GMRES) method is used to solve the inner correction equation, and the iteration step is chosen to equal the degree of the Chebyshev filter. The experimental results indicate that the FKS method is superior to the other four approaches in terms of both iteration counts ("IT") and computing time ("CPU"), and very often in terms of the number of matrix-vector products ("MV"), with the IRKS method being slightly better in the MV aspect.

Our numerical tests are implemented in MATLAB (version R2014b) on a personal computer with a 2.6 GHz central processing unit (Intel Core i5), 8.00 GB memory, and the Macintosh (OS X 10.10 Yosemite) operating system.

Example 4.1 The test problem is derived from the following two-dimensional partial differential system

on the square [−1, 1] × [−1, 1] with homogeneous Dirichlet boundary conditions. We denote by N the number of interior nodes on each side of the square and by h = 1/(N+1) the mesh size; then finite difference discretization on an N × N grid yields the standard non-symmetric eigenvalue problem (1.1). Note that the matrix A resulting from the discretization is of size n = N².
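The exact PDE and coefficient choices of Example 4.1 are not reproduced above; as an illustration of the discretization step only, the sketch below assembles the five-point finite-difference matrix of a generic constant-coefficient convection-diffusion operator −Δu + ω u_x + γ u_y + μ u on an N × N grid, which becomes non-symmetric whenever a convection term is present.

```python
import numpy as np
import scipy.sparse as sp

def convection_diffusion_matrix(N, omega=1.0, gamma=1.0, mu=0.0, L=2.0):
    """5-point FD matrix of -Laplacian + omega*u_x + gamma*u_y + mu*u
    on an N x N interior grid of a square with side length L (a generic
    stand-in for Example 4.1, whose exact coefficients are not shown here).
    Centered differences make the matrix non-symmetric when omega or
    gamma is nonzero; the result has size n = N^2."""
    h = L / (N + 1)
    I = sp.identity(N, format="csr")
    # 1-D second-difference and centered first-difference operators
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N)) / h**2
    D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(N, N)) / (2.0 * h)
    A = sp.kron(I, D2) + sp.kron(D2, I)                       # -Laplacian
    A = A + omega * sp.kron(I, D1) + gamma * sp.kron(D1, I)   # convection
    return (A + mu * sp.identity(N * N)).tocsr()
```

The resulting sparse matrix can then be fed to any of the eigensolvers compared in this section.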

In Example 4.1, we take the following four choices of the coefficient functions ω(x, y), γ(x, y), μ(x, y) and ν(x, y):

Tables 1-4 show the numerical results of applying the five above-mentioned algorithms to the problem described in Example 4.1 under four different sets of coefficient functions. Here p and restart denote the Chebyshev polynomial degree (≥ 2) and the iteration number of the Arnoldi process, respectively.

Table 1 Numerical results for Example 4.1 with p=20

Table 2 Numerical results for Example 4.1 with p=25

Table 3 Numerical results for Example 4.1 with p=16

Table 4 Numerical results for Example 4.1 with p=20

As shown, the FKS method clearly improves on the first three techniques (AC, CD and FKS-F) in terms of IT, CPU (in seconds) and MV. Note that the FKS method can significantly reduce the computational cost: in most cases the execution times of the first three algorithms are three to four times higher than that of the FKS method, and when the total numbers of matrix-vector products are counted, the FKS method is also found to be advantageous. As for the IRKS method, we notice that it offers better performance than the FKS method with respect to the cost of matrix-vector operations, while the FKS method keeps one step ahead of the IRKS method in total CPU time.

Figures 1-2 depict the convergence history of the residual norms (RES) in the four different cases under rather arbitrary choices of the polynomial degree p and the restart number; the vertical axis shows ‖r^(k)‖ and the horizontal axis the iteration number. We notice from Figures 1 and 2 that the CD and FKS-F methods tend to converge very slowly, while the number of iterations the FKS method needs to achieve a 10⁻⁸ reduction does not exceed 100, which is far below those required by the other four methods. Figures 3-4 give some insight into the behavior of the matrix-vector operations of the FKS method in the four cases. As the restart number rises, the MV curves of the FKS method zigzag, exhibiting non-monotonic behavior with respect to the restart number. A theoretical understanding of the optimal choice of the restart number, however, is still lacking. On the practical side, values of the restart number between 20 and 30 seem to be reliable for the FKS method.

Figure 2 The residual norms with respect to the number of iteration steps for the AC, CD, FKS-F, IRKS and FKS methods in case III (left) with restart = 27, p = 20 and case IV (right) with restart = 29, p = 20, where "--", "-.", ".", "+" and "-" represent the AC, CD, FKS-F, IRKS and FKS methods, respectively

Figure 3 The number of matrix-vector products with respect to the restart number for the FKS method in case I (left) and case II (right)

Figure 4 The number of matrix-vector products with respect to the restart number for the FKS method in case III (left) and case IV (right)

5 Concluding Remarks

In this paper, we generalized the filtered Krylov-like sequence method to compute the eigenvalues with largest real parts of non-symmetric matrices. Numerical results showed the great superiority of the new method over some state-of-the-art methods. Admittedly, the two parameters ℓ and p affect the convergence properties of the FKS method to a large extent, and their optimal and practical selection remains a challenging issue to be considered.
