
Deep learning-based evaluation of factor of safety with confidence interval for tunnel deformation in spatially variable soil


Jinzhang Zhang, Kok Kwang Phoon, Dongming Zhang, Hongwei Huang, Chong Tang

a Key Laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, Shanghai, 200092, China

b Department of Geotechnical Engineering, Tongji University, Shanghai, 200092, China

c Department of Civil and Environmental Engineering, National University of Singapore,117576, Singapore

d Singapore University of Technology and Design, 487372, Singapore

Keywords: Deep learning; Convolutional neural network (CNN); Tunnel safety; Confidence interval; Random field

ABSTRACT The random finite difference method (RFDM) is a popular approach to quantitatively evaluate the influence of the inherent spatial variability of soil on the deformation of embedded tunnels. However, the high computational cost is an ongoing challenge for its application in complex scenarios. To address this limitation, a deep learning-based method for efficient prediction of tunnel deformation in spatially variable soil is proposed. The proposed method uses a one-dimensional convolutional neural network (CNN) to identify the pattern between the random field input and the factor of safety of tunnel deformation output. The mean squared error and correlation coefficient of the CNN model applied to new, untrained datasets were less than 0.02 and larger than 0.96, respectively. This means that the trained CNN model can replace RFDM analysis for Monte Carlo simulations with a small but sufficient number of random field samples (about 40 samples for each case in this study). A common limitation of machine learning and deep learning models is that the confidence of a predicted result is unknown and only a deterministic outcome is given. This calls for an approach to gauge the model's confidence interval. It is achieved by applying dropout to all layers of the original model to retrain the model and using the dropout technique when performing inference. The excellent agreement between the CNN model predictions and the RFDM calculated results demonstrates that the proposed deep learning-based method has potential for tunnel performance analysis in spatially variable soils.

1. Introduction

Metro tunnels play an increasingly significant role in urban areas with the rapid development of urbanization (Jin et al., 2021). Any accident could cause massive casualties, economic losses, and severe adverse social effects (Frangopol, 2011; Huang et al., 2017a; Fan et al., 2021a; Hu et al., 2021). Meanwhile, the construction and service environment of an underground tunnel is hidden (Jiang et al., 2020; Gong et al., 2021), and the soil surrounding a shield tunnel exhibits apparent spatial variability (Phoon and Kulhawy, 1999; Zhang et al., 2020a, 2021a). This spatial variability has an essential effect on the behavior of geotechnical structures embedded in the soil. The influence of spatial variability on tunnel performance is not considered in traditional methods, so the averaged results cannot capture the real failure mechanisms arising from the randomness of soil properties (Griffiths et al., 2002; Zhang et al., 2021b). Spatial variability is often modeled by random field theory (Vanmarcke, 1977). The random finite element method (RFEM) and random finite difference method (RFDM) are two typical methods used to analyze the effect of spatial variability on geotechnical structures (Griffiths and Fenton, 2004; Huang et al., 2017b). Random field analysis has been widely used to evaluate slope stability (Wang et al., 2020a), foundation settlement (Fenton and Griffiths, 2008), bearing capacity (Shen et al., 2020, 2021) and tunnel deformation (Huang et al., 2017b; Zhang et al., 2021b). The majority of probabilistic analyses are based on Monte Carlo simulation (Chen et al., 2019; Yang et al., 2019; Wang et al., 2021a), which has relatively low efficiency. The high computational cost is an ongoing challenge for probabilistic analysis using random fields (Zhang et al., 2020b).

There are also alternative methods to improve computational efficiency, such as the response surface method (Li et al., 2011, 2015; Zhou et al., 2021), importance sampling (Ching et al., 2009), and subset simulation (Jiang and Huang, 2016). However, the literature indicates that these methods still have limitations, such as an insufficient increase in computational efficiency (Huang et al., 2016) and unsatisfactory results due to prior assumptions (Liu and Cheng, 2016). Soft computing techniques have recently attracted more attention for estimating geotechnical properties (Tao et al., 2020; Han et al., 2021; Xiao et al., 2021) and predicting the performance of geotechnical structures (Lawal and Kwon, 2021; Lin et al., 2021a, b). Pan and Dias (2017) proposed an efficient reliability method combining an adaptive support vector machine and Monte Carlo simulation with low computational cost. Wang et al. (2020b, c) developed efficient multivariate adaptive regression splines and extreme gradient boosting based reliability analysis approaches to evaluate the failure probability of earth dam slopes. Deep learning is a branch of machine learning with a much more powerful learning capability (Huang et al., 2018). In the field of deep learning, the convolutional neural network (CNN) model has been commonly adopted to solve geotechnical problems such as image recognition of crack and leakage defects (Zhao et al., 2020, 2021), rock classification (Chen et al., 2021a), and waveform signal identification (Zhang et al., 2018). Wang and Goh (2021) adopted a two-dimensional (2D) CNN model to analyze slope reliability in spatially variable soil. However, the conversion of random field values to pictures as the input of the 2D CNN model introduces extra error. Zhang et al. (2021c) estimated the vertical scale of fluctuation based on cone penetration test (CPT) data using a one-dimensional CNN with excellent performance. Current research mainly focuses on slope reliability analysis, but tunnel safety evaluation in spatially variable soil using deep learning methods has seldom been studied.

For classification problems, a machine learning model usually returns a probability for each class. For regression problems, however, a machine learning or deep learning model typically returns a single value, which is a standard limitation. This means that the uncertainty of a deep learning model's prediction is unknown. The confidence interval is an excellent way to quantify the uncertainty of the prediction (Zhang et al., 2021b; Han et al., 2022). In order to gauge the model's confidence, the original model needs to be retrained to return many different predictions for one input sample when performing inference. Then, the confidence intervals of the model can be calculated from the distribution of these predictions.

This study aims to propose a deep learning-based method to evaluate the factor of safety of tunnel deformation in spatially variable soil with confidence intervals. The CNN model was developed to discern the relationship between the random field input and the factor of safety of tunnel output. The "dropout" technique was used to re-engineer the original model to return multiple predictions. The rest of this paper starts by introducing the framework of the proposed method and random field theory. Then, the architecture of the CNN model is summarized. Next, the predicted results of the CNN model are analyzed, and the prediction performance is evaluated. Finally, the process of generating confidence intervals and determining the right dropout value is introduced, and the results with confidence intervals are illustrated.

2. Framework of the proposed method

The flowchart of the proposed method to efficiently predict the factor of safety of tunnel in spatially variable soil with confidence intervals is shown in Fig. 1. There are three main steps:

(1) Step 1: RFDM analysis. The deterministic numerical model is established and verified first. The random field of soil properties is then generated and mapped into the numerical model to calculate the factor of safety of tunnel. A large number of Monte Carlo simulations are adopted to quantitatively evaluate the influence of spatial variability on the factor of safety of tunnel. The random field of soil properties and the calculated result of the RFDM analysis will be used as the input and output of the CNN model, respectively.

(2) Step 2: The architecture and training of the CNN model. The random field of the numerical domain is first transferred to a column vector using the same format. The transferred column vectors are adopted as the input data. Then, the input data are preprocessed using the StandardScaler method, which is a widely used preprocessing method (Zhang et al., 2020c). The percentages of the training and testing sets are set at 80% and 20%, respectively (Chen et al., 2021b; Wang et al., 2021b). Next, the CNN model is trained, and the optimal model is obtained. Effective prediction of the factor of safety of tunnel then only requires the random field input data. It is worth noting that the prediction is a single value for one input sample.

Fig. 1. Implementation flowchart for the proposed method.

(3) Step 3: Generating the confidence intervals of predictions. The purpose of this step is to obtain many predictions for each input sample. As shown in Fig. 1, the dropout technique is used to re-engineer the optimal model obtained in Step 2. However, dropout is only active at the training stage of a CNN model in Keras. Thus, the learning phase should be set to 1 to trick Keras into thinking it is still in the training stage. Then, the confidence interval can be calculated from the distribution of predictions with dropout. The dropout value plays a crucial role in the distribution of predictions, and the optimal dropout value needs to be determined in this step. The confidence interval can be adopted to measure the uncertainty of the prediction.

3. Random field modeling of tunnel in spatially variable soil

3.1. Random field theory

It is widely accepted that Young's modulus (Es) has considerable spatial variability and is the most significant soil property affecting the deformation of a tunnel embedded in soil (Huang et al., 2017b). Although the correlation between parameters also affects the calculated results (Wang et al., 2020b; Zhang et al., 2021d), it is not the focus of this study. Therefore, only Es is considered to be a spatially random property. The lognormal distribution is adopted to characterize the variability of Es; it is widely used and does not generate negative values. The Markovian or single exponential autocorrelation model is by far the most popular (Cami et al., 2020). Thus, it is adopted in this study and is expressed as

ρ(τx, τy) = exp(−2τx/δh − 2τy/δv)

where τx and τy are the horizontal and vertical distances between two points, respectively; δh and δv represent the scales of fluctuation in the horizontal and vertical directions, respectively; and ρ(τx, τy) is the correlation coefficient between the two points. The Karhunen-Loeve expansion technique is used to discretize the random field in this study (Phoon et al., 2002; Tao et al., 2021). The Es value of each cell in the numerical model is assumed to be the random field value simulated at the centroid of each cell (Ching and Hu, 2016; Zhang et al., 2021b).
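To make the discretization concrete, the following is a minimal Python sketch (not the authors' code) of generating one lognormal realization of Es at the cell centroids with the single-exponential autocorrelation model, using a truncated eigen-decomposition as a discrete analogue of the Karhunen-Loeve expansion. The coordinate array, truncation order and random seed are illustrative assumptions.

```python
# Minimal sketch: one lognormal random field realization of Es on cell centroids,
# single-exponential (Markovian) autocorrelation, discrete KL-type expansion.
import numpy as np

def lognormal_random_field(xy, mean=20.0, cov=0.5, delta_h=60.0, delta_v=6.2,
                           n_terms=200, seed=0):
    """xy: (n_cells, 2) array of cell centroid coordinates (x, y)."""
    rng = np.random.default_rng(seed)
    # Parameters of the underlying normal field for a lognormal distribution
    sigma_ln = np.sqrt(np.log(1.0 + cov ** 2))
    mu_ln = np.log(mean) - 0.5 * sigma_ln ** 2
    # Correlation matrix from the single-exponential model
    tx = np.abs(xy[:, 0:1] - xy[:, 0:1].T)
    ty = np.abs(xy[:, 1:2] - xy[:, 1:2].T)
    rho = np.exp(-2.0 * tx / delta_h - 2.0 * ty / delta_v)
    # Truncated eigen-decomposition (discrete analogue of the KL expansion)
    n_terms = min(n_terms, len(xy))
    eigval, eigvec = np.linalg.eigh(rho)
    idx = np.argsort(eigval)[::-1][:n_terms]
    lam, phi = np.clip(eigval[idx], 0.0, None), eigvec[:, idx]
    xi = rng.standard_normal(n_terms)
    g = phi @ (np.sqrt(lam) * xi)          # correlated standard normal field
    return np.exp(mu_ln + sigma_ln * g)    # lognormal Es value at each centroid
```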

As shown in Table 1, six cases were designed in this study, which are the same as cases ANI-1 to ANI-6 in Zhang et al. (2021b). It can be seen that δh is the same for all cases. Thus, the designed cases were used to reveal the effect of δv on tunnel performance, which plays a more significant role than δh (Huang et al., 2017b). The coefficient of variation (COV) was set to 0.5 for all cases. Following the typical characteristics of Shanghai soft clays, the mean value was set to 20 MPa (Zhang et al., 2015). The generated random fields of these cases are adopted as the input of the CNN model, and the calculated results of the RFDM model are used as the output.

3.2. Finite difference modeling

Fig. 2 shows the adopted finite difference model and boundary conditions. Numerical simulations were performed using the FLAC3D software in this study. A shield tunnel embedded in spatially variable soil with outer diameter D = 6.2 m, lining thickness t = 0.35 m and cover depth C = 16 m is considered (Wang et al., 2020d). The stress reduction method was adopted to simulate the excavation of the tunnel, which can well reproduce the stress relaxation of the surrounding soil caused by the delayed installation of the tunnel lining (Möller, 2006). The relaxation ratio was fixed at 0.25 based on Shanghai tunneling practice, corresponding to a typical volume loss of 5‰ (Huang et al., 2017b). The concrete segments are modeled by solid elements with a linear elastic model. The interface elements in FLAC3D were employed to simulate the behavior of tunnel joints (Zhang et al., 2019). For the material parameters of the concrete lining, Young's modulus, unit weight and Poisson's ratio were 34.5 GPa, 25 kN/m³ and 0.2, respectively, as shown in Table 2. The soil was modeled as an elastoplastic medium following the Mohr-Coulomb yield criterion, which is the most widely used in random field analysis (Huang et al., 2017b). The parameters of the soil properties, i.e. Poisson's ratio υs, unit weight γ, effective cohesion c′ and internal friction angle φ′, can be found in Table 2, as these values are frequently used for Shanghai clay (Zhang and Huang, 2014). The lateral earth pressure coefficient was set to 0.625 in the numerical model (Wei and Yang, 1988). The numerical model used in this study is the same as that in Zhang et al. (2021b). For clarity, validation of the numerical model and more detailed information can be found in Zhang et al. (2021b).

Table 1 Scales of fluctuation designed in anisotropic random fields.

Fig. 2. Numerical model and boundary conditions.

3.3. Factor of safety of tunnel

The structural safety of the tunnel has been one of the most concerning issues for government authorities and engineers (Huang et al., 2017a, 2021; Fan et al., 2021b; Lei et al., 2021). The horizontal tunnel convergence is a commonly used index to evaluate tunnel serviceability safety (Gong et al., 2014). The factor of safety for the tunnel serviceability limit state (SLS), FS, can be estimated as the ratio of the limiting deformation (0.4%D is selected here (Gong et al., 2014)) to the calculated horizontal convergence (ΔDh). The calculated factor of safety of each RFDM simulation will be used as the output of the CNN model in this study. FS is defined as

FS = 0.4%D / ΔDh
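As a simple illustration of this definition, the snippet below computes FS from a calculated horizontal convergence; the convergence value used in the example call is hypothetical.

```python
# Minimal illustration of the SLS factor of safety defined above.
D = 6.2                       # tunnel outer diameter (m)
delta_limit = 0.004 * D       # limiting horizontal convergence, 0.4%D (m)

def factor_of_safety(delta_Dh):
    """Fs = limiting deformation / calculated horizontal convergence."""
    return delta_limit / delta_Dh

print(factor_of_safety(0.017))  # e.g. 17 mm convergence -> Fs ~ 1.46
```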

Table 2 Parameters of tunnel and soil properties in numerical model.

4. CNNs

Fig. 3 shows the architecture of the CNN model constructed in this study. Keras, a Python-based open-source deep learning library, is used to build up the CNN model. The major components of the established CNN model are an input layer, two convolutional layers, two pooling layers, an activation layer, and an output layer. The architecture of the CNN model refers to the configuration of these layers. In the current study, all experiments were performed on a computer equipped with one Intel Core i7-10710U CPU with 1.61 GHz base frequency and 16 GB of random-access memory (RAM).

4.1. Input layer

As shown in Fig. 3, the format of the input data is a 1 × 5312 matrix. The original data form a random field of Young's modulus which is mapped into the numerical model. There are 5312 cells in the RFDM model, as introduced in Section 3.2. These cells were first transformed into a column vector using a uniform format. Then, the data were preprocessed using StandardScaler (Zhang et al., 2020c). As shown in Table 1, six cases were generated to reveal the effect of the vertical scale of fluctuation on tunnel deformation performance. Meanwhile, the influence of the number of training samples on prediction performance was also investigated. A total of nine different training sample sizes were considered in this study: 5, 10, 15, 20, 25, 30, 40, 50 and 100 samples for each combination case with different δv and δh values, as given in Table 1. It is worth mentioning that the nine CNN models were trained using the same architecture and parameters; only the number of training samples differs.
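A minimal sketch of this preparation step is given below; the file names and array shapes are assumptions, and the scaling follows scikit-learn's StandardScaler as cited above.

```python
# Flatten each random field to a 5312-value vector, standardise it, and split
# the data 80%/20% into training and testing sets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.load("random_fields.npy")   # shape (n_samples, 5312), hypothetical file
y = np.load("fs_rfdm.npy")         # shape (n_samples,), RFDM factors of safety

scaler = StandardScaler()
X_std = scaler.fit_transform(X)
X_std = X_std[..., np.newaxis]     # (n_samples, 5312, 1) for a 1D CNN in Keras

X_train, X_test, y_train, y_test = train_test_split(
    X_std, y, test_size=0.2, random_state=42)
```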

4.2. Convolutional layer

The convolutional layer plays the role of feature recognition in the CNN model. There are two convolutional layers in the architecture of the CNN model. The number of filters was set to 32, and the kernel size and stride were equal to 5 and 1, respectively. These parameters of the convolutional layers were confirmed using a trial-and-error approach (Zhang et al., 2020c). For the first convolutional layer, the input data size is 1 × 5312, and the output after the convolutional layer is 1 × 5308 × 32, as illustrated in Fig. 3. In this study, two convolutional layers were used to better learn the features for prediction. The rectified linear unit (ReLU) function was used as the activation function in the convolutional and pooling layers, as it is the most widely used in neural networks (Wang et al., 2021c).

Fig. 4. Influence of number of training epochs on MSE value.

4.3. Pooling layer

The pooling layer compresses the input features to reduce the computational complexity of the network while extracting the main features. There are mainly two pooling methods, max pooling and average pooling, which summarize the most activated and the average presence of the input feature, respectively. The max pooling method, which is widely used in CNN models, was adopted in this study (Dorafshan and Azari, 2020). The size and stride of max pooling were both set to 2 in the two pooling layers, as shown in Fig. 3. Therefore, the size of the output matrix after each pooling layer is reduced by half.

4.4. Fully connected layer and output

The fully connected layer maps the learned features to the output target. As plotted in Fig. 3, the size of the flattened matrix is 1 × 42,400; thus three fully connected layers were adopted, containing 256, 64 and 16 neurons, respectively. The sigmoid function was used as the activation function in the fully connected layers. Meanwhile, dropout was used to prevent overfitting by randomly dropping some neurons during the training process. It is a powerful method to improve the generalization capability of models. The dropout value was set to 0.3 in this study. The output of the CNN model is the factor of safety of tunnel.
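The following Keras sketch reconstructs the architecture described above (two Conv1D layers with 32 filters, kernel size 5 and stride 1, two max-pooling layers with size and stride 2, three fully connected layers of 256, 64 and 16 sigmoid units with dropout 0.3, and a single output). It is schematic, assembled from the text and Fig. 3 rather than the authors' code; details such as the exact placement of the dropout layers are assumptions.

```python
# Minimal sketch of the 1D CNN architecture described in Section 4.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_length=5312):
    model = keras.Sequential([
        layers.Conv1D(32, kernel_size=5, strides=1, activation="relu",
                      input_shape=(input_length, 1)),   # output 5308 x 32
        layers.MaxPooling1D(pool_size=2, strides=2),     # output 2654 x 32
        layers.Conv1D(32, kernel_size=5, strides=1, activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),     # output 1325 x 32
        layers.Flatten(),                                # 1325 * 32 = 42,400
        layers.Dense(256, activation="sigmoid"),
        layers.Dropout(0.3),
        layers.Dense(64, activation="sigmoid"),
        layers.Dropout(0.3),
        layers.Dense(16, activation="sigmoid"),
        layers.Dropout(0.3),
        layers.Dense(1),                                 # factor of safety
    ])
    return model
```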

Fig. 3. The architecture of the constructed CNN model.

Fig. 5. Prediction performance of CNN model on training and testing sets.

5. Results of CNN model

5.1. Results on training and testing datasets

The effect of the number of training epochs on the mean squared error (MSE) value for the training and testing sets is shown in Fig. 4. A training epoch refers to the process of inputting all data into the network to complete the forward calculation and back propagation (Chen et al., 2021b). As introduced in Section 4, nine CNN models were trained with different numbers of training samples. Fig. 4 shows the relationship between the number of training epochs and the MSE value of the CNN model trained with 100 training samples for each case in Table 1. The batch size and learning rate during the training of the CNN model were set to 16 and 0.001, respectively. It can be seen that the MSE values for both the training and testing sets progressively decreased with epochs. A suitable number of epochs should allow the MSE value to reach a converged state. In this sense, the number of epochs was set to 100 in this study.
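A minimal sketch of this training configuration, reusing build_cnn and the data arrays from the earlier sketches, might look as follows; the choice of the Adam optimizer is an assumption, since the text only reports the learning rate, batch size and number of epochs.

```python
# Compile and train the CNN: MSE loss, learning rate 0.001 (optimizer assumed
# to be Adam), batch size 16, 100 epochs.
from tensorflow import keras

model = build_cnn()
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    batch_size=16, epochs=100, verbose=2)
```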

Fig. 5 presents the prediction performance of the CNN model on the training and testing sets. The ratio of the predicted to the actual (RFDM analysis) result was used to evaluate the prediction performance of the CNN model. It can be seen that the predicted results are relatively close to those calculated from the RFDM analysis on both the training and testing sets. The mean values of this ratio for the training and testing sets were 0.991 and 0.982, respectively; the median values were 0.983 and 0.976, respectively; and the COV values were all relatively small, indicating that a good prediction performance was achieved.

5.2. Prediction performance of trained CNN model

Fig. 6. Actual versus predicted Fs values for different CNN models trained with different sample sizes: (a) 5 samples, (b) 10 samples, (c) 15 samples, (d) 20 samples, (e) 25 samples, (f) 30 samples, (g) 40 samples, (h) 50 samples, and (i) 100 samples.

Fig. 6 compares the CNN-predicted Fs values against the RFDM-calculated Fs values for the nine CNN models trained with different sample sizes (5, 10, 15, 20, 25, 30, 40, 50 and 100 samples). There are 600 (100 samples × 6 cases) newly generated independent samples which are used as input data to evaluate the prediction performance, corresponding to 600 Monte Carlo samples. The MSE and the correlation coefficient (R) were calculated separately for each of the nine subplots. The overall observation is that the predictions made by the trained CNN models are highly consistent with the RFDM-calculated results in all nine subplots, as given in Fig. 6a-i. Except in Fig. 6a, the R value is larger than 0.9 while the MSE value is smaller than 0.05, indicating a strong correlation between the CNN model predictions and RFDM calculations. This is understandable, because only five samples of each case were used for training in Fig. 6a. Meanwhile, an evident observation is that the R value is larger than 0.96 and the MSE value is less than 0.02 when there are 25 or more training samples for each case, as shown in Fig. 6e-i. This means that the trained CNN model can replace the time-consuming RFDM calculation.
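The two metrics in Fig. 6 can be computed straightforwardly; a minimal sketch, reusing the trained model and test arrays from the earlier sketches, is shown below.

```python
# Evaluate the trained CNN on newly generated random field samples:
# mean squared error and Pearson correlation coefficient R against RFDM results.
import numpy as np

y_pred = model.predict(X_test).ravel()
mse = np.mean((y_pred - y_test) ** 2)
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"MSE = {mse:.4f}, R = {r:.3f}")
```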

Because interface elements are used to simulate the tunnel joints, the computational efficiency of the RFDM analysis is relatively low: one RFDM analysis takes about 6 min in this study. In contrast, predicting 100 samples with a trained CNN model takes only about 1 s on the same computer. The computational efficiency of the CNN model is thus approximately 36,000 times that of the RFDM analysis. This indicates that the CNN model has great potential as a surrogate model to replace RFDM simulations.

Fig. 7 shows boxplots comparing the prediction performance of the CNN models for the nine different training sample sizes. It can be seen that the mean and median values gradually tend to 1 as the number of training samples increases. At the same time, the distribution of predicted results becomes more concentrated with the increase of training samples, indicating good prediction performance. The boxplots of the CNN models trained with 40, 50 and 100 samples for each case are similar.

Fig. 8 shows the effect of the number of training samples on the R and MSE values. The R value rapidly increases and the MSE value rapidly declines as the number of training samples increases. When there are more than 40 training samples, the rate of change slows significantly. This means that the CNN model trained with only 40 samples of each case can generate relatively good and stable predictions. Therefore, 40 samples per case represent a reasonable compromise, provided the model is neither overfitted nor underfitted.


5.3. Comparison of the effect of scale of fluctuation on tunnel for RFDM and CNN model

Fig. 7. Comparison of prediction performance of CNN models for nine training samples.

Fig. 8. Effect of training samples on R value and MSE.

To examine the performance of the trained CNN model on a specific problem, the influence of δv on the Fs of tunnel was compared between the RFDM and the CNN model. Fig. 9 shows the effect of δv on the mean value and COV of the Fs of tunnel obtained from the RFDM and CNN model. The trend of the CNN model's predictions is very similar to that of the RFDM calculations. As given in Fig. 9a, the mean values of the Fs of tunnel for the RFDM and CNN model are 1.45 and 1.42, respectively. The error ((CNN model − RFDM)/RFDM) is only 2.1%. For the COV of the Fs of tunnel, the RFDM and CNN results are 0.49 and 0.47, respectively, as shown in Fig. 9b. The error is again less than 5%, at only 4.1%. The predicted results of the trained CNN model show no significant difference from the calculated results of the RFDM analysis.

To further evaluate the prediction performance of the trained CNN model, histograms of the results obtained from the RFDM and CNN model for the different cases are compared in Fig. 10. The histograms of the RFDM and CNN model are very similar. The distribution of the factor of safety obtained from the CNN model is relatively more concentrated than the RFDM result. The limit value at 90% exceedance probability is selected as the evaluation index in this section. Although the limit value of the CNN model is less than the RFDM result, the error between the limit values of the RFDM and CNN model for the different cases is less than 5%. This indicates that the trained CNN model has the potential to replace the time-consuming RFDM calculation.

6. Generating confidence intervals of CNN model’s predicted result

It is well known that a machine learning or deep learning model typically returns only a single value for regression problems. Therefore, the trained model cannot give any evaluation of the uncertainty of its prediction. The trained model can never be perfect due to the uncertainty from errors in the model itself or noise in the input data. The prediction will be more useful if confidence intervals can be given for the predicted value. For the same input data, the original trained model needs to be retrained to return a set of different predictions. Then, the model's confidence intervals can be calculated from the distribution of these predictions. The confidence interval is an approach to quantify the uncertainty of the prediction.

6.1. Retraining the CNN model

Fig. 9. Effect of δv on (a) mean and (b) COV of tunnel Fs obtained from RFDM and CNN model.

To gauge the prediction's confidence interval, the "dropout" technique was used to retrain the original trained CNN model in this study. Dropout means that a neural network unit is temporarily dropped from the network according to a certain probability (Srivastava et al., 2014). Dropout is commonly applied during the training of deep learning networks to prevent overfitting by randomly breaking the co-adaptation between units. The aim in this section is to generate a "thinned" model based on the original model by applying dropout when performing inference. The configuration and weights of the original pre-trained model were first taken. Then, the specified amount of dropout is applied to all layers to create a new thinned model.

A crucial problem is that dropout is active only during the training process in Keras; Keras turns off dropout by default when running the CNN model for inference. Thus, the first task is to trick Keras into keeping dropout active in all layers. An alternative way to deal with this is to set the learning phase to 1 to trick Keras into thinking it is still in the training process. The input of the new thinned model with dropout is the same as that of the original CNN model. The output is the predicted factor of safety of tunnel in this study.
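The paper describes the older Keras learning-phase trick; in current tf.keras an equivalent route is simply to call the dropout-bearing model with training=True so that dropout stays active at inference. The sketch below, reusing build_cnn from the earlier sketch, follows that route; it only applies dropout in the dense layers, as in the sketched architecture, which is a simplification of "dropout applied to all layers".

```python
# Monte Carlo dropout at inference: copy the trained weights into a model whose
# dropout layers remain active by calling it with training=True.
import numpy as np
import tensorflow as tf

mc_model = build_cnn()                      # same architecture as the original
mc_model.set_weights(model.get_weights())   # reuse the trained weights

def mc_predict(x, n_repeats=300, seed=0):
    """Return n_repeats stochastic predictions for each input sample."""
    tf.random.set_seed(seed)
    preds = [mc_model(x, training=True).numpy().ravel() for _ in range(n_repeats)]
    return np.stack(preds, axis=0)          # shape (n_repeats, n_samples)
```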

6.2. Generating confidence intervals

Through the above process, the thinned model with dropout can generate enough predictions by sampling a sufficient number of times. In this study, the retrained model with dropout generated 300 predictions for each input sample. The estimated confidence intervals become more accurate as the number of generated predictions increases; correspondingly, the prediction process lasts longer. The choice of 300 predictions is a fair compromise. Then, a distribution of the predictions from the retrained model can be obtained, and the lower and upper limits for a given confidence interval can be calculated.
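Turning the 300 stochastic predictions into intervals is then a matter of taking percentiles of the per-sample prediction distribution; a minimal sketch, reusing mc_predict from above, is shown below.

```python
# Compute 80% and 95% confidence intervals from the Monte Carlo dropout predictions.
import numpy as np

preds = mc_predict(X_test, n_repeats=300)                  # (300, n_samples)
median_pred = np.median(preds, axis=0)
lo95, hi95 = np.percentile(preds, [2.5, 97.5], axis=0)     # 95% interval
lo80, hi80 = np.percentile(preds, [10.0, 90.0], axis=0)    # 80% interval
```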

6.3. Determining the right dropout

In addition, a reasonable dropout value plays a vital role in this process. If the dropout value is too large, the predicted values will be very diverse, and the estimated confidence intervals will be too wide. Conversely, if the dropout value is too small, the predicted results will be very similar, and the calculated confidence intervals will be too narrow. To determine the optimal dropout value, the consistency between the distribution of prediction errors of the retrained model and that of the original model was used as the evaluation index. First, the error distribution of the original CNN model (predicted value minus actual value) is obtained. Then, for a specific dropout value, the error distribution of the retrained model with dropout (dropout prediction minus median prediction) is also calculated.

Fig. 10. Histogram comparison between the results obtained from RFDM and CNN model for different cases with a constant δh value of 60 m and different δv values of (a) 1.5 m, (b) 3.1 m, (c) 6.2 m, (d) 15 m, (e) 30 m, and (f) 60 m.

Fig.11. Effect of dropout on the prediction performance for retrained model.

Fig. 12 presents the prediction error histograms for the original CNN model and the retrained CNN models with different dropout values. Three dropout values are selected, and their error distributions are compared with that of the original CNN model. The error distribution of the model with a dropout value of 0.1 is too narrow, while that with a dropout value of 0.5 is too broad. The error distribution of the model with a dropout value of 0.3 matches the actual results relatively well. Meanwhile, the comparison of the lower and upper limits of the 80% and 95% confidence intervals also supports this finding: the upper and lower limits of the two confidence intervals for the model with a dropout value of 0.3 are much closer to the actual results.

6.4. Results with confidence interval

Fig. 13 shows the confidence intervals generated by the retrained model with the optimal dropout value. It is worth noting that the factor of safety results of the 100 simulations are arranged in ascending order for better display. The black and grey solid lines represent the actual results (RFDM-calculated results) and the predicted results of the original CNN model, respectively. Although the prediction results of the original CNN model are relatively good, there is still a gap for some simulations. This means there would be a discrepancy for some simulations if only one result were predicted using the original model. As plotted in Fig. 13, the prediction with confidence intervals can better contain the actual results even when there is a gap between the original CNN model prediction and the actual results. This indicates that the dropout technique used to retrain the original model is a useful approach to estimating the model uncertainty.

Fig. 12. Histogram of prediction error for original CNN model and retrained model with dropout. CI represents the confidence interval.

Fig.13. Prediction performance for the retrained model with confidence intervals.

7. Conclusions

This paper presents an efficient deep learning-based approach to estimate the factor of safety of tunnel deformation in spatially variable soil with confidence intervals. A one-dimensional CNN was developed to discern the nonlinear pattern between the random field of soil property input and the factor of safety of tunnel target. The CNN model can avoid the high computational cost required by traditional Monte Carlo simulations. Meanwhile, this study uses the dropout technique to generate confidence intervals to quantify the uncertainty of predictions. Based on the results of the analysis, the following conclusions can be drawn:

(1) The proposed CNN-based prediction method provides an effective way to estimate the factor of safety of tunnel deformation in spatially variable soil. The CNN model performs well, with an R value between the predicted and RFDM results greater than 0.96 and an MSE value less than 0.02. The results show that the CNN model can successfully provide accurate regressions between the random field of soil properties and the factor of safety of tunnel deformation. Meanwhile, it can significantly improve the computational efficiency relative to RFDM analysis.

(2) The number of training samples also plays a significant role in the prediction performance of the CNN model. The prediction performance first improves rapidly and then tends to be stable as the number of training samples increases. In this study, only 40 training samples for each case were needed to generate a relatively good and stable result.

(3) The dropout technique was used to retrain the original model to generate a set of predictions for one input sample. This is achieved by applying dropout to all layers of the original model and setting the learning phase to 1 to trick Keras into thinking it is still in the training process. Then, the confidence interval can be calculated based on the distribution of predictions.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant Nos. 52130805 and 52022070) and the Shanghai Science and Technology Committee Program (Grant No. 20dz1202200). The financial support is gratefully acknowledged.
