
Why neural networks apply to scientific computing?

2021-08-14

Shaoqiang Tang, Yang Yang

HEDPS and LTCS, College of Engineering, Peking University, Beijing 100871, China

Keywords: Neural networks; Universal approximation theorem; Backpropagation algorithm

ABSTRACT In recent years, neural networks have become an increasingly powerful tool in scientific computing. The universal approximation theorem asserts that a neural network may be constructed to approximate any given continuous function to any desired accuracy. The backpropagation algorithm further allows efficient optimization of the parameters when training a neural network. Powered by GPUs, effective computations for scientific and engineering problems are thereby enabled. In addition, we show that finite element shape functions may also be approximated by neural networks.

Most scientific and engineering problems are formulated in terms of equations. The major components in an equation-based computation are function approximation and the numerical scheme. The former designates finitely many entities to represent functions in a discrete manner, while the latter determines these entities by direct or recursive evaluations. For instance, piecewise constants or polynomials are adopted in finite difference and finite volume methods, whereas linear combinations of shape functions are used in finite element methods.

In recent decades in particular, neural networks have emerged as an effective paradigm in scientific and engineering explorations. They succeeded first in pattern recognition, control theory and engineering, and signal processing, and then expanded to a much broader scope of scientific computations, such as fluid motion detection, parameter identification, etc. [1,2].

As neural networks differ markedly from most standard computational methodologies, a natural question arises: why do neural networks apply to scientific computing? We see three thrusts in neural networks that contribute to the answer. First, by the universal approximation theorem, deep neural networks are capable of approximating functions. Secondly, the backpropagation algorithm substantially enhances the efficiency of updating the undetermined parameters in a neural network. Finally, the graphics processing unit (GPU) facilitates high-performance parallel implementation of the backpropagation algorithm. In this tutorial letter, we shall expound the universal approximation theorem and the backpropagation algorithm. We further introduce a neural network based on finite element shape functions.

Define a neural network

In a feedforward neural network (FFNN), only two types of functions are used, namely, affine functions and a nonlinear activation function. The basic building block is illustrated in Fig. 1.

Fig.1. Building block for FFNN.

The structure in Fig. 1 represents the function y = σ(wx + b), where w is the weight, b is the bias, and σ is the activation function.

The activation function is commonly chosen as the Sigmoid function

σ(z) = 1/(1 + e^{−z}),

or the rectified linear unit (ReLU) function

ReLU(z) = max(z, 0).

Please refer to Figs. 2 and 3, respectively. Their derivatives are σ(z)(1 − σ(z)) for the Sigmoid function, and the Heaviside function for the ReLU function.

Fig.2. Sigmoid function.

Fig.3. ReLU function.
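Both activation functions and their derivatives are a few lines of code each; the following sketch uses NumPy (the function names are ours, chosen for readability):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative: sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def relu(z):
    # ReLU activation: max(z, 0)
    return np.maximum(z, 0.0)

def relu_prime(z):
    # Derivative of ReLU is the Heaviside step function
    return np.where(z > 0.0, 1.0, 0.0)
```

Both functions accept scalars or NumPy arrays, so the same code serves a whole layer of neurons at once.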

A general neural network comprises a number of the aforementioned building blocks, as illustrated in Fig. 4. A neuron is identified by its layer ℓ ∈ {1, 2, ..., L} and its numbering j ∈ {1, 2, ..., n_ℓ} within the ℓ-th layer. For the sake of clarity, we do not mark the weights and biases in the figure. Notice that the number of neurons varies from layer to layer in general. In this example, the first layer contains only one neuron. So does the last layer, with n_L = 1. These two layers are called the input layer and the output layer, respectively. All layers in between are referred to as hidden layers.

Fig.4. An example of FFNN.

Fig. 5. Sigmoid function with different scalings: solid for σ(z), dotted for σ(10z), and dashed for σ(100z).

Same as before, a neuron at a later layer attains its value from the neurons in the preceding layer, i.e.,

a^ℓ_j = σ(z^ℓ_j),

with

z^ℓ_j = Σ_k w^ℓ_{jk} a^{ℓ−1}_k + b^ℓ_j.

We notice that usually the same activation function is used over the whole neural network, except that the output layer in general is free of activation function (in other words, it uses the identity function).
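The layer-by-layer rule above amounts to a short forward sweep. The sketch below is a minimal illustration (not the authors' code), assuming the weights and biases are stored as one matrix and one vector per layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward sweep through an FFNN.

    weights[l] and biases[l] hold the affine map of layer l+1; the
    activation is applied in every layer except the last (output)
    layer, which uses the identity.
    """
    a = np.atleast_1d(np.asarray(x, dtype=float))
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b                                   # affine part
        a = z if l == len(weights) - 1 else sigmoid(z)  # identity at output
    return a

# Example: 1 input -> 2 hidden neurons -> 1 output, hand-picked parameters
weights = [np.array([[1.0], [-1.0]]), np.array([[0.5, 0.5]])]
biases = [np.array([0.0, 0.0]), np.array([0.0])]
y = forward(0.0, weights, biases)  # both hidden neurons output sigma(0) = 0.5
```

With zero biases and input 0, both hidden neurons give σ(0) = 0.5, so the linear output layer returns 0.5(0.5 + 0.5) = 0.5.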

Universal approximation theorem

Now we demonstrate that for any given continuous function and accuracy tolerance, we may construct an FFNN with specific weights and biases to approximate it. To be specific, in this part we take the Sigmoid function as the activation function.

To this end, we plot σ(z), σ(10z), σ(100z) in Fig. 5. It is observed that σ(wz) approximates the Heaviside function H(z) to any desired accuracy, so long as w is chosen big enough. We illustrate with w = 100 in the following discussions.

Next, take any two points x1 < x2. A square pulse of height h over the interval (x1, x2) may be approximated by

f0(x) = h [σ(w(x − x1)) − σ(w(x − x2))].

This is realized by the neural network segment in subplot (a) of Fig. 6. The resulting function f0(x) is shown in subplot (b) for the specific case x1 = 2, x2 = 4, h = 2.

Now, piling such segments together, we construct a neural network to represent a piecewise constant function f(x) which equals h_i within (x_i, x_{i+1}) for all i. As an illustration, we display a neural network in subplot (a) of Fig. 7, which gives the piecewise constant function f(x) in subplot (b). With the input neuron x, the output neuron gives f(x), aiming at an approximation of the target function φ(x) = 2 + 4x − x² over the interval [0, 4].

Of course, the current f(x) is a poor approximation. However, from calculus we know that every continuous function may be approximated by a piecewise constant function to any desired accuracy, so long as the partition is fine enough. Therefore, with sufficiently many neurons in the hidden layer, we can manipulate the weights and biases to reach the accuracy tolerance. In a more general setting and with a more rigorous argument, this is termed the universal approximation theorem [3,4].
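The pulse-stacking construction can be checked numerically. The sketch below assembles one hidden layer of Sigmoid pulses for the target φ(x) = 2 + 4x − x²; the partition into eight intervals and the midpoint sampling of the heights are our illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def piecewise_constant_net(x, nodes, heights, w=100.0):
    """Single-hidden-layer Sigmoid network approximating the piecewise
    constant function that equals heights[i] on (nodes[i], nodes[i+1]).
    Each interval contributes one square pulse:
        h_i * (sigma(w (x - x_i)) - sigma(w (x - x_{i+1}))).
    """
    f = 0.0
    for i, h in enumerate(heights):
        f += h * (sigmoid(w * (x - nodes[i])) - sigmoid(w * (x - nodes[i + 1])))
    return f

# Approximate phi(x) = 2 + 4x - x^2 on [0, 4] by 8 constant pieces
phi = lambda x: 2.0 + 4.0 * x - x ** 2
nodes = np.linspace(0.0, 4.0, 9)
mids = 0.5 * (nodes[:-1] + nodes[1:])
heights = phi(mids)                  # pulse height = value at interval midpoint
err = abs(piecewise_constant_net(2.25, nodes, heights) - phi(2.25))
```

At an interval midpoint the active pulse is essentially 1 and all others essentially 0, so the network reproduces φ there almost exactly; between midpoints the error is governed by the coarseness of the partition, as in the text.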

Fig. 6. Segment of an FFNN to represent a square pulse function: a FFNN segment; b f0(x).

Fig. 7. Use of an FFNN to represent a piecewise constant function f(x), with the goal of approximating a given function φ(x) = 2 + 4x − x² in [0, 4]: a FFNN; b resulting function f(x) (solid) and the target function φ(x) (dashed).

We further illustrate the capability of an FFNN in approximating a multivariate function. Inspired by the single-variable case, we only need to construct an FFNN segment representing a piecewise constant function. To be specific, we consider a square pulse of height h over the rectangle (x1, x2) × (y1, y2).

In fact, it may be approximated by the compound function

f(x, y) = h σ(w [σ(w(x − x1)) − σ(w(x − x2)) + σ(w(y − y1)) − σ(w(y − y2)) − 3/2]),

where w is chosen big enough, e.g., 100: the two one-dimensional pulses sum to approximately 2 inside the rectangle and at most about 1 outside, so the outer Sigmoid thresholds the sum at 3/2. By the FFNN in Fig. 8, we realize this compound function with w = 100. The input neurons are x, y, and the output gives f(x, y). Subplot (b) corresponds to the specific choice x1 = 4, x2 = 6, y1 = 3, y2 = 7, h = 2.

Fig. 10. Use of an FFNN based on finite element shape functions to approximate a given function φ(x) = 2 + 4x − x² in [0, 4]: resulting function f(x) (solid) and the target function φ(x) (dashed).

It is worth mentioning that a neural network with multiple hidden layers is usually called a deep neural network. So this FFNN is a deep one.

Backpropagation algorithm

The FFNN examples constructed above illustrate the capability of neural networks in approximating functions. As a matter of fact, it is the training procedure, rather than explicit construction, that makes neural networks so powerful.

Starting with an FFNN with a suitably large number of layers and a suitably large number of neurons in the hidden layers, we iteratively update the weights and biases. For the single-variable case, we take a training set T = {(x_s, y_s) | s = 1, 2, ..., S} to minimize a loss function

L = (1/S) Σ_{s=1}^{S} l_s,

Fig. 8. FFNN segment to represent a piecewise constant function in two space dimensions: a FFNN segment; b resulting function f(x, y).

with a typical choice

l_s = ½ (f(x_s; w, b) − y_s)².

Here f(x; w, b) stands for the FFNN output under the hyperparameters, namely the weights w and biases b. More precisely, for a given function y = φ(x), we select S points x_1, x_2, ..., x_S, and evaluate their values y_1 = φ(x_1), y_2 = φ(x_2), ..., y_S = φ(x_S). In the training process, we optimize the hyperparameters so as to make the difference between the true data and the FFNN-predicted data as small as possible.

The crucial and most expensive step in the optimization, regardless of the method adopted, is the evaluation of the gradient, namely, the derivatives of L with respect to the hyperparameters. Notice that it suffices to compute the derivatives of l_s and the output f(x_s) for each data point x_s. We have

∂l_s/∂f = f(x_s; w, b) − y_s.

On the other hand, from Eqs. (5) and (6), we compute with the chain rule that

∂l_s/∂w^ℓ_{jk} = δ^ℓ_j a^{ℓ−1}_k,   ∂l_s/∂b^ℓ_j = δ^ℓ_j,   where δ^ℓ_j = ∂l_s/∂z^ℓ_j.

As mentioned before, the Sigmoid function has a simple form for evaluating the derivative, σ′(z) = σ(z)(1 − σ(z)). So we obtain

δ^ℓ_j = σ(z^ℓ_j)(1 − σ(z^ℓ_j)) Σ_k w^{ℓ+1}_{kj} δ^{ℓ+1}_k.

Accordingly, for the data point (x_s, y_s), we first do a forward sweep, namely, plug in x_s and calculate the values z^ℓ_j, a^ℓ_j recursively from ℓ = 1 through L. Then we do a backward sweep: starting from Eq. (12) at the output layer, for ℓ = L − 1 through 1 we compute the δ^ℓ_j, and with them all the derivatives of l_s.

This is referred to as the backpropagation algorithm, sometimes abbreviated as the BP algorithm.

The BP algorithm is advantageous over direct/forward gradient evaluations in terms of operation count. In addition, Eq. (10) implies a data-wise computation for the overall gradient; and for each data point, the BP algorithm may be naturally implemented in a vector/matrix fashion, despite the nonlinear (element-wise multiplication) operations involved. Accordingly, it is well suited to parallel computing with GPUs.
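The forward and backward sweeps fit in a few dozen lines. The sketch below handles one data point (x_s, y_s) with Sigmoid hidden layers and a linear output layer, following the notation above; it is an illustration under those assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, y, weights, biases):
    """One forward/backward sweep for a single data point (x, y).
    Hidden layers use the Sigmoid; the output layer is linear.
    Returns the gradients of l = 0.5 * (f(x) - y)^2 with respect to
    every weight matrix and bias vector.
    """
    # forward sweep: store pre-activations z and activations a per layer
    a = np.atleast_1d(np.asarray(x, dtype=float))
    activations, zs = [a], []
    L = len(weights)
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        zs.append(z)
        a = z if l == L - 1 else sigmoid(z)   # identity at the output layer
        activations.append(a)
    # backward sweep: delta at the output layer is simply f(x) - y
    delta = activations[-1] - np.atleast_1d(y)
    grads_W, grads_b = [None] * L, [None] * L
    grads_W[-1] = np.outer(delta, activations[-2])
    grads_b[-1] = delta
    for l in range(L - 2, -1, -1):
        s = sigmoid(zs[l])
        delta = (weights[l + 1].T @ delta) * s * (1.0 - s)  # sigma'(z) factor
        grads_W[l] = np.outer(delta, activations[l])
        grads_b[l] = delta
    return grads_W, grads_b

# Example: 1 input -> 2 hidden neurons -> 1 output, arbitrary parameters
weights = [np.array([[0.3], [-0.2]]), np.array([[0.7, -0.4]])]
biases = [np.array([0.1, 0.2]), np.array([0.05])]
grads_W, grads_b = backprop(1.5, 2.0, weights, biases)
```

The returned gradients can be verified entry by entry against finite differences of the loss, which is a standard sanity check for any BP implementation.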

Neural network based on finite element shape functions

Recently, we proposed a new type of neural network that incorporates finite element shape functions [5]. The key idea is to construct an FFNN building block that approximates a linear shape function.

Take the ReLU function, Eq. (3), as the activation function. A linear shape function on the interval [x1, x2] may then be recast in a compound function form, realized by the FFNN in Fig. 9.

Fig.9. FFNN building block to realize a linear finite element shape function.

As this produces piecewise linear functions, we may stack a number of such building blocks to approximate a continuous function as well. See Fig. 10 for an illustration approximating the function φ(x) = 2 + 4x − x². A nice feature of this approach is the adaptivity of the nodal point positions, because the x_i's may be taken as hyperparameters as well.
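The piecewise linear character of ReLU networks can be illustrated directly. The sketch below writes the piecewise linear interpolant through nodal values as a one-hidden-layer ReLU network; it is a simplified stand-in for the shape-function building block of [5], not its exact construction:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def pwl_interpolant(x, nodes, values):
    """Piecewise linear interpolant through (nodes[i], values[i]),
    written as a one-hidden-layer ReLU network:
        f(x) = values[0] + sum_i c_i ReLU(x - nodes[i]),
    where c_i is the change of slope at node x_i (the hidden-layer
    output weights).
    """
    slopes = np.diff(values) / np.diff(nodes)
    c = np.diff(slopes, prepend=0.0)      # slope jumps at the nodes
    f = values[0]
    for ci, xi in zip(c, nodes[:-1]):
        f += ci * relu(x - xi)
    return f

# Approximate phi(x) = 2 + 4x - x^2 on [0, 4] with 9 equispaced nodes
phi = lambda x: 2.0 + 4.0 * x - x ** 2
nodes = np.linspace(0.0, 4.0, 9)
vals = phi(nodes)
err_at_node = abs(pwl_interpolant(2.0, nodes, vals) - phi(2.0))
```

At the nodes the network reproduces φ exactly, and between nodes the error shrinks quadratically with the mesh size, mirroring standard linear finite element interpolation. Treating the node positions as trainable parameters, as suggested above, would make the mesh adaptive.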

Conclusions

Neural networks form a new paradigm for scientific computing. The universal approximation theorem elucidates their capability in representing functions. In this tutorial, we constructed FFNNs to represent piecewise constant functions, both single-variable and multivariate. The backpropagation algorithm considerably expedites the evaluation of gradients, hence making the training process of finding optimal hyperparameters feasible. The algorithm is naturally scalable and particularly fits GPU computations. We also presented a neural network structure based on finite element shape functions, which readily applies to computational mechanics.

Through FFNNs, the capability of neural networks in approximating functions has been demonstrated. Other types of networks have also been developed over the years, usually with more hidden layers, including the CNN (convolutional neural network), RNN (recurrent neural network), ResNet (residual network), etc. More hidden layers lead to a deeper network and allow a larger capacity for representation. There are as yet limited rigorous results about how depth and width influence the performance of a neural network, and how to define the optimal architecture for scientific computing or general applications. Toward substantial understanding and wide applications, much work has been done, and much more is needed.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grants 11521202, 11832001, 11890681 and 11988102). We would like to thank Prof. Guowei He for stimulating discussions.
