
Robustness of reinforced gradient-type iterative learning control for batch processes with Gaussian noise


Xuan Yang, Xiao'e Ruan*

Department of Applied Mathematics, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China

1. Introduction

In the industrial community, petrochemical processes, microelectronic manufacturing and metallurgical processes are typical batch processes, each of which repetitively executes a given task over a fixed duration [1]. In conventional practice, an industrial batch process is tuned by a proportional-integral-derivative (PID) controller so that the controlled system may operate with the desired performance [2,3]. However, for some plants, PID-tuned systems exhibit unsatisfactory transient performance, such as a slow response with a long settling time or a fast response with oscillatory overshoot [4,5]. Such deficiencies may degrade product quality when the controlled system operates repetitively. Under this circumstance, the PID controller has to be designed as a trade-off. To improve the transient performance, intelligent techniques have been developed, such as expert knowledge for selecting piecewise PID controller gains [6,7] and iterative learning control (ILC) strategies for generating a sequence of upgraded control commands [8-13].

The ILC strategy was first proposed by Arimoto et al., who applied an ILC scheme to a robotic manipulator repetitively attempting to track a desired trajectory [12]. The basic mechanism of ILC is to generate the control signal for the next operation by compensating the current control signal with a term proportional to the tracking error, its derivative and/or its integral, so that the tracking performance of the next operation improves. Owing to its good learning efficiency and modest requirement for a priori system knowledge, ILC has attracted much attention not only in the robotics community for manipulator trajectory tracking but also in industrial fields such as batch processes, for transient performance improvement [8-13], and CD disk recording [14,15]. The fundamental ILC updating rules are constructed on the basic postulate that the desired trajectory is iteration-invariant while the system repetitively operates over a fixed finite time interval with resettable initial states [16,17]. One of the key ILC topics is convergence analysis, which has progressed by assessing the tracking error in terms of the lambda-norm, the Lebesgue-p norm or the discrete-frequency Parseval energy [12,18,19]. For practical applicability, the robustness of ILC schemes to system parameter uncertainty, noise perturbation and initial state shifts has been analyzed [20-22]. Besides, when the system dynamics is identified, the system information can be utilized to construct optimized/optimal ILCs that accelerate the learning convergence [23-29].

For the optimized ILCs, the main efforts have been devoted to the norm-optimal ILC and the parameter-optimized ILC [23-26]. Both can guarantee that the tracking error, measured in the mean-square norm, decreases monotonically, although their robustness to external noise is not addressed. In addition, as typical optimization methodologies, Newton and quasi-Newton methods have been harnessed to compose optimized iterative learning updating laws [27,28], where the convergence is analyzed without considering robustness. Further, a gradient-type iterative learning control (GILC) updating law has been constructed for systems with uncertainty [29], where the weighted learning gain is a scalar matrix and robust monotone convergence of the GILC scheme is achieved. However, its learning performance exhibits slow convergence and weak resistance to uncertain noise. The reason is perhaps that the scalar learning gain matrix does not take sufficient advantage of the system knowledge. Besides, because the search path of a gradient-type ILC algorithm is saw-toothed and the learning step becomes very small when the output approaches the desired trajectory, especially when the system is ill-conditioned, the tracking behavior of the GILC scheme lags. Thus the scheme needs to be reinforced with system knowledge so as to enhance the tracking performance. Additionally, as external noise is inevitable in practical applications, the robustness of the learning scheme to noise must be explored.

This paper develops a reinforced gradient-type iterative learning control (RGILC) algorithm for a class of discrete linear time-invariant systems with external Gaussian noise. The idea is to make use of the system matrices and a proper learning step to weight the gradient. The robustness of the RGILC algorithm to external noise is analyzed by means of mathematical expectation, and the range of the learning step that guarantees convergence is specified. Numerical simulations are presented to illustrate the validity and effectiveness of the algorithm.

2. RGILC Algorithm and Preliminaries

Consider a class of discrete linear time-invariant single-input, single-output batch process control systems of the form

x_k(i+1) = A x_k(i) + B u_k(i) + ξ_k(i),
y_k(i) = C x_k(i) + η_k(i),    (1)

where S = {0, 1, 2, …, N−1} denotes the set of discrete sampling instants with N referring to the total number of samples, index i stands for the sampling instant and subscript k ∈ ℕ marks the iteration (batch) index; x_k(i), u_k(i) and y_k(i) are the n-dimensional state, scalar input and scalar output at the i-th sampling instant of the k-th iteration; ξ_k(i) and η_k(i), i ∈ S, are the load noise and measurement noise, respectively; A, B and C are constant system matrices with appropriate dimensions satisfying CB ≠ 0. When the learning process is realizable, i.e., for a desired trajectory y_d(i), i ∈ S, there exists a unique control input signal u_d(i) such that

y_d = H u_d,    (2)

where H is termed the Markov parameter matrix of system (1), i.e., the lower-triangular matrix with entries [H]_{ij} = CA^{i−j}B for i ≥ j and zero otherwise, and the vectors u_k, y_k, ξ_k and η_k are named the input, output, load noise and measurement noise super-vectors, respectively. Then system (1) is compacted as

y_k = H u_k + Γ ξ_k + η_k,    (3)

where Γ denotes the matrix that maps the load noise super-vector into the output.

Let y_d = [y_d(1), y_d(2), …, y_d(N)]^T be a predetermined desired trajectory that the system output should follow as an ideal target, where T denotes the transpose operator, and let e_k = y_d − y_k denote the tracking error vector. The objective of developing an iterative learning control algorithm for system (3) is to generate a sequence of input super-vectors {u_k} that stimulates system (3) to track the desired trajectory y_d as precisely as possible as the iteration index goes to infinity, namely,

E{‖y_d − y_k‖₂} is made as small as possible as k → ∞,    (4)

where ‖·‖₂ denotes the 2-norm of a vector and E{·} represents the mathematical expectation operator.

Before presenting the ILC scheme, it is worth noting that, theoretically, the desired control input vector could be obtained by inverting the system, u_d = H⁻¹y_d, regardless of noise, when the Markov matrix H is invertible. However, in reality, especially for fast-responding dynamics with a large-scale Markov matrix, the inversion requires complex computation that is sensitive to system parameter perturbations and to the accumulation of computational errors. Sometimes the inversion method even incurs divergence of the learning scheme [8].
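
As a concrete illustration of the lifted description, the following sketch assembles the lower-triangular Markov matrix H from placeholder matrices (A, B, C) (not the model of Section 4), assuming a zero initial state, and then carries out the inversion u_d = H⁻¹y_d; the printed condition number indicates how strongly parameter perturbations and rounding errors can be amplified, which is the sensitivity noted above.

```python
import numpy as np

def markov_matrix(A, B, C, N):
    """Assemble the N x N lifted matrix H with [H]_{ij} = C A^(i-j) B for i >= j (SISO, zero initial state)."""
    H = np.zeros((N, N))
    CA = C.copy()                      # running value of C A^m
    markov = np.empty(N)               # markov[m] = C A^m B
    for m in range(N):
        markov[m] = (CA @ B).item()
        CA = CA @ A
    for i in range(N):
        for j in range(i + 1):
            H[i, j] = markov[i - j]
    return H

# Placeholder SISO model (illustration only; CB != 0, so the relative degree is one).
A = np.array([[0.8, 0.1],
              [0.0, 0.9]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
N = 20

H = markov_matrix(A, B, C, N)
y_d = np.ones(N)                       # placeholder desired trajectory
u_d = np.linalg.solve(H, y_d)          # the "inversion" route u_d = H^{-1} y_d
print("CB =", H[0, 0], " cond(H) =", np.linalg.cond(H))
```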

One feasible ILC manner of making use of system knowledge is the gradient-type ILC updating mechanism briefed as follows.

For system (3), define a sequence of iteration-wise quadratic objective functions of the form

J(u_k) = (1/2)‖e_k‖₂², k ∈ ℕ.    (5)

It is easy to derive that the gradient of the function J(u_k) with respect to the argument u_k is ∇J(u_k) = −H^T e_k. Then a (descent) gradient-type ILC (GILC) scheme is constructed as

u_{k+1} = u_k + Λ H^T e_k,  k = 1, 2, …,    (6)

where Λ = αI represents a scalar learning gain matrix with learning step α. For details, refer to Ref. [29].
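
For completeness, a short verification of the gradient claimed above, using only the fact that the lifted output depends on the input through H:

$$
J(u_k)=\tfrac{1}{2}\|e_k\|_2^2,\qquad e_k=y_d-y_k,\qquad \frac{\partial e_k}{\partial u_k}=-H,
$$
$$
\nabla J(u_k)=\Big(\frac{\partial e_k}{\partial u_k}\Big)^{\mathrm T} e_k=-H^{\mathrm T}e_k,
\qquad
u_{k+1}=u_k-\Lambda\,\nabla J(u_k)=u_k+\Lambda H^{\mathrm T}e_k ,
$$

which is exactly the steepest-descent step (6).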

For the GILC algorithm (6), by replacing its learning gain matrix Λ with the symmetric matrix α(2I − αH^TH), a reinforced GILC (RGILC) updating law is developed as follows:

u_1: given arbitrarily;
u_{k+1} = u_k + α(2I − αH^TH) H^T e_k,  k = 1, 2, ….    (7)
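
A minimal sketch of both updating laws, coded directly from (6) and (7); the lifted model, noise level and desired trajectory below are placeholders chosen only to make the loop runnable, not the injection molding model identified in Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder lifted model y_k = H u_k + noise (H lower-triangular Toeplitz, CB != 0).
N = 20
h = 0.9 ** np.arange(N)                        # placeholder Markov parameters C A^m B
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
y_d = np.sin(np.linspace(0, np.pi, N))         # placeholder desired trajectory
alpha = 1.0 / np.linalg.eigvalsh(H @ H.T).max()  # step inside the admissible range

def run_ilc(update, iters=30, noise_std=0.0):
    """Run one learning process and record ||e_k||_2 for every iteration."""
    u = np.zeros(N)                            # u_1 given arbitrarily (zero here)
    errors = []
    for _ in range(iters):
        y = H @ u + noise_std * rng.standard_normal(N)   # measured output
        e = y_d - y                                       # tracking error e_k
        errors.append(np.linalg.norm(e))
        u = update(u, e)                                  # learning update
    return errors

gilc  = lambda u, e: u + alpha * (H.T @ e)                                        # scheme (6), gain = alpha*I
rgilc = lambda u, e: u + alpha * (2 * np.eye(N) - alpha * H.T @ H) @ (H.T @ e)    # scheme (7)

print("GILC  final error:", run_ilc(gilc)[-1])
print("RGILC final error:", run_ilc(rgilc)[-1])
```

With the noise level set to zero, the run reproduces the faster convergence of (7) relative to (6) reported in Section 4.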

2.1. Basic Assumptions

A1. The load noise ξ_k(i), i ∈ S, is zero-mean Gaussian white noise with bounded variance for every iteration k.

A2. The measurement noise η_k(i), i ∈ S, is zero-mean Gaussian white noise with bounded variance for every iteration k, and is independent of the load noise.

3. Robustness Analysis and Step Specification

The robustness of the proposed RGILC algorithm (7) applied to system (3) disturbed by Gaussian white noise means that, for a given desired trajectory y_d and an appropriate initial input u_1, the output y_k asymptotically falls into a neighborhood of the desired trajectory y_d as the iteration number goes to infinity, provided the noise variances are bounded. That is, E{‖y_d − y_k‖₂} is eventually confined within a small bound. For two ILC algorithms L1 and L2 applied to system (3) disturbed by Gaussian white noise with the same variance, algorithm L1 is said to be more robust than L2 if the outputs of (3) driven by L1 fall into a smaller neighborhood of the desired trajectory y_d than those driven by L2.
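
Stated compactly, with ε denoting a generic noise-dependent bound (a placeholder symbol introduced here, not one fixed in the text), the robustness requirement reads

$$
\limsup_{k\to\infty} E\{\|y_d - y_k\|_2\} \le \varepsilon,
$$

and, for the same noise variances, algorithm L1 is more robust than L2 whenever it achieves a smaller bound ε.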

In order to analyze the robustness of the RGILC algorithm (7), the following lemma is required.

Lemma 1. For a real symmetric matrix M ∈ ℝ^{n×n} and a vector v ∈ ℝ^n, the inequality ‖Mv‖₂ ≤ ρ(M)‖v‖₂ holds, where ρ(M) denotes the spectral radius of M.

Proof. By the definition of the induced norm, ‖Mv‖₂ ≤ ‖M‖₂‖v‖₂. Since the induced 2-norm of a real symmetric matrix equals its spectral radius [30], ‖M‖₂ = ρ(M), which yields ‖Mv‖₂ ≤ ρ(M)‖v‖₂.

Since system (3) is stochastically perturbed by external noise, it is impossible to compute the tracking error exactly during the learning process. Instead, we estimate the mathematical expectation of the tracking error vector in the sense of the 2-norm.

Theorem 1. Assume that the updating law (7) is imposed on system (3) with load noise and measurement noise satisfying assumptions A1 and A2. Then E{‖e_k‖₂} is bounded if the spectral radius ρ̃ of the matrix (I − αHH^T)² satisfies ρ̃ < 1.

Proof. When the RGILC scheme (7) is applied to system (3), the relation between adjacent tracking error vectors is derived as

e_{k+1} = (I − αHH^T)² e_k + Γ(ξ_k − ξ_{k+1}) + (η_k − η_{k+1}).

Further, by applying Lemma 1 together with assumptions A1 and A2, the expectation E{‖e_{k+1}‖₂} can be bounded in terms of E{‖e_k‖₂} and the noise variances, which yields the bound (17).
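
A minimal sketch of the structure of this bound, assuming (as derived above) that the error recursion induced by (7) has iteration matrix (I − αHH^T)², and writing c for a constant fixed by Lemma 1 and the bounded noise variances:

$$
\begin{aligned}
E\{\|e_{k+1}\|_2\} &\le \tilde\rho\, E\{\|e_k\|_2\} + c, \qquad \tilde\rho = \rho\big((I-\alpha HH^{\mathrm T})^2\big),\\
E\{\|e_k\|_2\} &\le \tilde\rho^{\,k-1} E\{\|e_1\|_2\} + \frac{1-\tilde\rho^{\,k-1}}{1-\tilde\rho}\,c
\;\;\longrightarrow\;\; \frac{c}{1-\tilde\rho} \quad (k\to\infty),
\end{aligned}
$$

so a smaller ρ̃ both speeds up the decay and shrinks the asymptotic neighborhood, which is the tuning knob discussed in Remark 1 below.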

Remark 1. It is evident from formula (17) that the output of system (3) disturbed by external Gaussian white noise falls into a neighborhood of the desired trajectory y_d when the RGILC algorithm (7) is used; that is, the RGILC algorithm is robust to external Gaussian white noise. Moreover, the capability of the RGILC algorithm to resist Gaussian white noise can be improved by decreasing the magnitude of ρ̃ when the variances of the load noise and measurement noise are bounded within admissible ranges. Clearly, a small magnitude of ρ̃ can be obtained by tuning the step α.

Remark 2. It is analogously deduced that E{‖e_k‖₂} generated by the GILC algorithm (6) obeys a bound of the same form with ρ̃ replaced by ρ̄, where ρ̄ = ρ(I − αHH^T). Obviously, the inequality

ρ̃ = ρ̄² < ρ̄

always holds for 0 < ρ̄ < 1. This means that the RGILC algorithm (7) is more robust than the conventional GILC algorithm (6) in rejecting Gaussian white noise with the same variances. In particular, it is easy to see that the convergence of the RGILC algorithm (7) is faster than that of the GILC algorithm (6) when the external noise is null.

The range of the learning step α ensuring the convergence condition ρ̃ < 1 is specified as follows.

Theorem 2. The spectral radius ρ̃ of the matrix (I − αHH^T)² satisfies ρ̃ < 1 if the learning step is chosen such that 0 < α < 2/λ_max(HH^T), where λ_max(HH^T) denotes the largest eigenvalue of HH^T.

Proof. The assumption CB ≠ 0 means that the relative degree of the system is unity. Consequently, the Markov parameter matrix H is nonsingular and the symmetric matrix HH^T is positive definite. Thus all eigenvalues of HH^T are positive.

Taking the assumption 0 < α < 2/λ_max(HH^T) into account, it is derived that |1 − αλ_i| < 1 for every eigenvalue λ_i of HH^T.

This means that the inequality (1 − αλ_i)² < 1 is true for every eigenvalue λ_i, so that ρ̃ = max_i (1 − αλ_i)² < 1 holds.

This completes the proof.
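
The step range of Theorem 2 is easy to check numerically. The sketch below computes the admissible interval and the two spectral radii for a placeholder Markov matrix (not the model of Section 4), assuming, as in the proof sketch above, that the RGILC iteration matrix is (I − αHH^T)² and the GILC one is I − αHH^T.

```python
import numpy as np

# Placeholder lifted Markov matrix (lower-triangular Toeplitz with CB != 0).
N = 20
h = 0.9 ** np.arange(N)
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

lam = np.linalg.eigvalsh(H @ H.T)          # eigenvalues of H H^T (all positive)
alpha_max = 2.0 / lam.max()                # Theorem 2: admissible range is 0 < alpha < 2/lambda_max
print(f"admissible step range: 0 < alpha < {alpha_max:.4g}")

alpha = 0.8 * alpha_max                    # any step strictly inside the range
rho_bar   = np.abs(1.0 - alpha * lam).max()      # spectral radius for GILC (6)
rho_tilde = ((1.0 - alpha * lam) ** 2).max()     # spectral radius for RGILC (7)
print(f"rho_bar = {rho_bar:.4f}, rho_tilde = {rho_tilde:.4f}  (rho_tilde < rho_bar < 1)")
```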

Remark 3. Although Theorem 1 is derived under the assumption that the relative degree is unity (i.e., CB ≠ 0), the results can be extended to the case of relative degree r > 1. Under this circumstance, by the definition of relative degree [31], the relations CA^jB = 0, j = 0, 1, …, r − 2, and CA^{r−1}B ≠ 0 hold. Taking this structure into account, the lifted description of the system reduces to the form (18).

In this case, the elements of y_k over the subinterval {1, 2, …, r − 1} are null no matter what the elements of u_k are, which means that tracking over this subinterval makes no sense. However, it can be deduced that system (18) driven by the RGILC algorithm (7) can track the desired trajectory over the subinterval {r, r + 1, …, N}, and an analogous conclusion can be drawn by referring to the analysis technique of Theorem 1.

4. Numerical Simulations

To demonstrate the effectiveness of the proposed RGILC algorithm, an injection molding batch process is considered. The process consists of three stages: filling, packing and cooling [32,33]. In order to maintain product quality in the filling stage, the injection velocity is an important variable to be controlled so as to repetitively follow a given reference. The RGILC algorithm (7) is adopted to improve the transient performance of the injection velocity.

The dynamics of the injection velocity x_{1,k}(i) and nozzle pressure x_{2,k}(i) controlled by the valve opening u_k(i) in the filling phase is identified as [32]

The output is set as y_k(i) = x_{1,k}(i). The first equation in (19) is a linear time-invariant second-order difference equation. In order to accord with the standard first-order state-space description, we introduce the variable x_{3,k}(i) = 0.03191x_{1,k}(i−1) − 5.617u_k(i−1). Then the dynamics of the controlled system (19) is reformulated into the first-order state-space form (20).
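
The pattern behind this augmentation, written with placeholder coefficients a₁, a₂, b₁, b₂ rather than the identified values, is to collect the delayed terms in an auxiliary state so that only one-step delays remain:

$$
y(i+1)=a_1y(i)+a_2y(i-1)+b_1u(i)+b_2u(i-1),\qquad
x_1(i)=y(i),\quad x_3(i)=a_2y(i-1)+b_2u(i-1),
$$
$$
\begin{cases}
x_1(i+1)=a_1x_1(i)+x_3(i)+b_1u(i),\\
x_3(i+1)=a_2x_1(i)+b_2u(i),\\
y(i)=x_1(i),
\end{cases}
$$

and the definition x_{3,k}(i) = 0.03191x_{1,k}(i−1) − 5.617u_k(i−1) matches this pattern with a₂ = 0.03191 and b₂ = −5.617.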

In (20), x_k(i) denotes the state vector and y_k(i) = x_{1,k}(i) the output. Let the discrete sampling set be S = {0, 1, 2, …, 20}. It is verified that CB ≠ 0. Besides, it can be estimated that the admissible range of the learning step is 0 < α < 0.0018, which ensures that the condition ρ̃ < 1 holds. In the following simulations, the learning step is set as α = 0.0015.

The initial state is set as x_k(0) = 0, the initial control input is chosen as u_1 = 0, and the desired trajectory, a desired injection velocity profile, is chosen as shown by the dotted curve in Fig. 1.

Case 1. Gaussian noise is null.

Fig. 1 displays the tracking performance of system (20) driven by the proposed RGILC algorithm (7), where the dotted curve is the desired trajectory, and the dashed and solid curves are the outputs at the 5th and 10th operation/batch, respectively. It is noticed that the output of system (20) tracks the desired trajectory very well as the iteration number increases. Fig. 2 shows a comparison of the 2-norm tracking error generated by the proposed RGILC (7) with that of the existing GILC (6), where the solid curve is produced by the RGILC and the dotted one by the GILC. It is evident that the convergence of the RGILC scheme (7) is faster than that of the GILC algorithm (6).

Case 2. Gaussian noise is nonzero.

With external noise present, system (20) disturbed by Gaussian white noise is described by (21).

Fig. 1. Tracking performance of the RGILC with no noise.

Fig. 2. Comparative tracking errors.

In (21), ξ_k(i) and η_k(i) are the load noise and measurement noise, respectively. In order to illustrate the robustness of the proposed RGILC algorithm, we consider two groups of variances of the load noise and measurement noise. One group of variances is 0.0025 and 0.0036; a second group with different variances is also considered.

Fig. 3. Tracking performance of the RGILC with the first group of noise variances.

Fig. 4. Tracking performance of the RGILC with the second group of noise variances.

Fig. 3 exhibits the outputs of system (21) for the first group of noise variances, and Fig. 4 presents the outputs for the second group. It is seen that the output of system (21) stimulated by the RGILC algorithm (7) tracks the desired trajectory asymptotically as the iteration number increases, even though noise is present, provided the variances are small. This manifests that the proposed control law is feasible and effective.

Fig. 5. Comparative tracking errors for the first group of noise variances.

Fig. 6. Comparative tracking errors for the second group of noise variances.

Fig. 5 compares the expectation of the tracking error produced by the RGILC (7) with that produced by the GILC (6) for the first group of noise variances (0.0025 and 0.0036). The mathematical expectation E{‖e_k‖₂}, calculated as a 100-run average, is upper bounded by 4 once the iteration index exceeds 15. Fig. 6 displays the comparison for the second group of noise variances. The mathematical expectation E{‖e_k‖₂} in the simulations is approximated by the mean value of ‖e_k‖₂ computed from the repetitive learning process independently tested 100 times. Analogously, E{‖e_k‖₂} is upper bounded by 8. These results imply that the proposed RGILC is robust to external Gaussian white noise. Besides, Figs. 5 and 6 also show that the tracking error, evaluated as E{‖e_k‖₂}, produced by the RGILC strategy is smaller than that produced by the existing GILC scheme.
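
The 100-run averaging described above can be reproduced with the sketch below; the lifted model and desired trajectory are placeholders rather than the identified injection molding model, the assignment of the variances 0.0025/0.0036 to load versus measurement noise is assumed, and the way the load noise enters the output (through H) is a modelling assumption, so only the procedure (independent repetitions of the whole learning process, then averaging ‖e_k‖₂ per iteration) carries over.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder lifted model and RGILC setup (not the model of Section 4).
N = 20
h = 0.9 ** np.arange(N)
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
y_d = np.sin(np.linspace(0, np.pi, N))
alpha = 1.0 / np.linalg.eigvalsh(H @ H.T).max()
var_load, var_meas = 0.0025, 0.0036        # first group of variances (assignment assumed)
iters, runs = 30, 100

def one_run():
    """One complete learning process; returns ||e_k||_2 for every iteration."""
    u, errs = np.zeros(N), []
    for _ in range(iters):
        y = (H @ u
             + np.sqrt(var_load) * (H @ rng.standard_normal(N))   # load noise fed through the plant (assumption)
             + np.sqrt(var_meas) * rng.standard_normal(N))        # measurement noise on the output
        e = y_d - y
        errs.append(np.linalg.norm(e))
        u = u + alpha * (2 * np.eye(N) - alpha * H.T @ H) @ (H.T @ e)   # RGILC update (7)
    return errs

mean_err = np.mean([one_run() for _ in range(runs)], axis=0)      # 100-run approximation of E{||e_k||_2}
print("approx E{||e_k||_2} at final iteration:", mean_err[-1])
```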

5. Conclusions

In this paper, a reinforced gradient-type iterative learning control scheme is proposed for improving the tracking performance of batch processes disturbed by external Gaussian white noise. The robustness of the algorithm is analyzed from a statistical point of view. The analysis shows that decent tracking performance of the proposed RGILC algorithm can be achieved when the learning step is specified in a proper interval and the variances of the external noise are sufficiently small. Compared with the conventional GILC, the proposed algorithm is more efficient in rejecting external Gaussian white noise.

References

[1] D.E. Seborg, T.F. Edgar, D.A. Mellichamp, F.J. Doyle III, Process dynamics & control, 3rd ed., John Wiley & Sons, 2011.

[2] A.G. Khalore, S. Singh, Performance overview of relay feedback tuning of PID controller, India Conference (INDICON), 2012 Annual IEEE, 2012, pp. 198-204.

[3] Y.X. Zhang, Q. Qiu, Q.M. Zhao, W.G. Zheng, Overview on intelligent control strategy of BLDC motor based on PID algorithm, Appl. Mech. Mater. 347 (2013) 98-101.

[4] G.J. Silva, A. Datta, S.P. Bhattacharyya, New results on the synthesis of PID controllers, IEEE Trans. Autom. Control 47 (2) (2002) 241-252.

[5] P.D. Robert, B.W. Wan, J. Lin, Steady-state hierarchical control of large-scale industrial process: A survey, IFAC/IFORS/IMACS Symposium Large-Scale Systems: Theory and Applications, 1, 1992, pp. 1-10.

[6] I. Pan, S. Das, A. Gupta, Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay, ISA Trans. 50 (1) (2011) 28-36.

[7] S. Imai, T. Yamamoto, Design of a multiple linear models-based PID controller, Int. J. Adv. Mechatron. Syst. 4 (3) (2012) 141-148.

[8] K.S. Lee, J.H. Lee, Iterative learning control-based batch process control technique for integrated control of end product properties and transient profiles of process variables, J. Process Control 13 (7) (2003) 607-621.

[9] M. Zhou, S. Wang, X. Jin, Q. Zhang, Iterative learning model predictive control for a class of continuous/batch processes, Chin. J. Chem. Eng. 17 (6) (2009) 976-982.

[10] C. Chen, Z.H. Xiong, Y. Zhong, Design and analysis of integrated predictive iterative learning control for batch process based on two-dimensional system theory, Chin. J. Chem. Eng. 22 (7) (2014) 762-768.

[11] J.H. Lee, K.S. Lee, Iterative learning control applied to batch processes: An overview, Control Eng. Pract. 15 (10) (2007) 1306-1318.

[12] S. Arimoto, S. Kawamura, F. Miyazaki, Bettering operation of robots by learning, J. Robot. Syst. 1 (2) (1984) 123-140.

[13] Z.H. Xiong, J. Zhang, Optimal iterative learning control for batch processes based on linear time-varying perturbation model, Chin. J. Chem. Eng. 16 (2) (2008) 235-240.

[14] B.G. Dijkstra, O.H. Bosgra, Noise suppression in buffer-state iterative learning control, applied to a high precision wafer stage, Proceedings of the 2002 IEEE International Conference on Control Applications, 2, 2002, pp. 998-1003.

[15] C.I. Kang, C.H. Kim, An iterative learning approach to compensation for the servo track writing error in high track density disk drives, Microsyst. Technol. 11 (8-10) (2005) 623-637.

[16] H.S. Ahn, Y.Q. Chen, K.L. Moore, Iterative learning control: Brief survey and categorization, IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 37 (6) (2007) 1099-1121.

[17] J.X. Xu, A survey on iterative learning control for nonlinear systems, Int. J. Control 84 (7) (2011) 1275-1294.

[18] X. Ruan, Z. Bien, Q. Wang, Convergence characteristics of proportional-type iterative learning control processes in the sense of Lebesgue-p norm, IET Control Theory Appl. 6 (5) (2012) 1-8.

[19] M. Norrlöf, S. Gunnarsson, Time and frequency domain convergence properties in iterative learning control, Int. J. Control 75 (14) (2002) 1114-1126.

[20] D. Meng, Y. Jia, J. Du, F. Yu, Robust iterative learning control design for uncertain time-delay systems based on a performance index, IET Control Theory Appl. 4 (5) (2010) 759-772.

[21] D. Meng, Y. Jia, J. Du, S. Yuan, Robust discrete-time iterative learning control for nonlinear systems with varying initial state shifts, IEEE Trans. Autom. Control 54 (11) (2009) 2626-2631.

[22] W. Guan, Q. Zhu, X.D. Wang, X.H. Liu, Iterative learning control design and application for linear continuous systems with variable initial states based on 2-D system theory, Math. Probl. Eng. 2014 (2014) 1-5 (Article ID 970841).

[23] D.H. Owens, J. Hätönen, Iterative learning control—An optimization paradigm, Annu. Rev. Control 29 (1) (2005) 57-70.

[24] J.D. Ratcliffe, P.L. Lewin, E. Rogers, J.J. Hätönen, D.H. Owens, Norm-optimal iterative learning control applied to gantry robots for automation applications, IEEE Trans. Robot. 22 (6) (2006) 1303-1307.

[25] N. Amann, D.H. Owens, E. Rogers, Predictive optimal iterative learning control, Int. J. Control 69 (2) (1998) 203-226.

[26] D.H. Owens, K. Feng, Parameter optimization in iterative learning control, Int. J. Control 76 (11) (2003) 1059-1069.

[27] T. Lin, D.H. Owens, J. Hätönen, Newton method based iterative learning control for discrete non-linear systems, Int. J. Control 79 (10) (2006) 1263-1276.

[28] E.A. Konstantin, Iterative learning control based on quasi-Newton methods, Proceedings of the 37th IEEE Conference on Decision and Control, December 1998, pp. 170-174.

[29] D.H. Owens, J.J. Hätönen, S. Daley, Robust monotone gradient-based discrete time iterative learning control, Int. J. Robust Nonlinear Control 19 (6) (2009) 634-661.

[30] B. Fang, J. Zhou, Y. Li, Matrix theory, Tsinghua University Press, Beijing, China, 2004.

[31] C.J. Chien, C.Y. Yao, An output-based adaptive iterative learning controller for high relative degree uncertain linear systems, Automatica 40 (1) (2004) 145-153.

[32] Y. Wang, D. Zhou, F. Gao, Iterative learning model predictive control for multi-phase batch processes, J. Process Control 18 (2008) 543-557.

[33] R. Zhang, L. Gan, J. Lu, F. Gao, New design of state space linear quadratic fault tolerant tracking control for batch processes with partial actuator failure, Ind. Eng. Chem. Res. 52 (2013) 16294-16300.
