Akshay Kumar, Gaurav Tiwari
Department of Civil Engineering, IIT Kanpur, India
Keywords: Statistical uncertainty, Resampling reliability, Moving least square response surface method (MLS-RSM), Sobol's global sensitivity, Correlation coefficient
ABSTRACT
Significant efforts have been made in recent years to develop efficient reliability methods for addressing stability problems, for example the first/second-order reliability methods (F/SORMs), point estimate methods (PEMs), and Monte Carlo simulations (MCSs). These methods are capable of considering uncertainties emanating from different sources when analyzing reliability problems of rock structures (Duzgun et al., 1995; Mauldon and Ureta, 1996; Langford and Diederichs, 2013; Low and Einstein, 2013; Lu et al., 2013; Zhao et al., 2014; Wang et al., 2016; Ahmadabadi and Poisel, 2016; Pandit et al., 2018; Tiwari and Latha, 2019, 2020). Further, these methods can be coupled with a variety of advanced numerical tools and constitutive models for efficient and more realistic analyses of rock structures based on in situ conditions (Liu et al., 2020; Morovatdar et al., 2020; Zhang et al., 2020; Song et al., 2021; Zhao et al., 2021; Zheng et al., 2021). Although the mathematical formulations of these methods vary, their basic input parameters are similar, i.e. the distribution parameters (mean and standard deviation (SD)) and the types of distributions of the input rock properties. Hence, the accuracy of these methods depends significantly upon the accuracy achieved in the estimation of the input parameters. This in turn depends upon the quality and quantity of the laboratory and in situ test data. While the quality can be maintained by following the ISRM suggested standard guidelines in testing, a major problem is associated with the quantity of available data. This is due to the high costs and practical difficulties, such as site preparation, sample collection, sample disturbance and data interpretation, associated with laboratory and in situ testing of rocks (Duzgun et al., 2002; Wyllie and Mah, 2004; Ramamurthy, 2014). Due to these reasons, the amount of test data remains limited for most rock projects, leading to statistical uncertainties in both the distribution parameters and types of rock properties. Traditional reliability approaches ignore this fact and assume that the statistics of rock properties derived from the limited data can represent the population statistics well. This may lead to high inaccuracies in the estimated results (Luo et al., 2013; Pandit et al., 2019).
This issue has been highlighted very recently by different researchers, leading to the development of bootstrap-based advanced resampling reliability approaches. These approaches couple a resampling statistical tool (the bootstrap) with reliability methods to model the uncertainties in the sample statistics of the input properties and their effect on the reliability estimates of geotechnical structures (Luo et al., 2013; Li et al., 2015; Pandit et al., 2019). Such statistical tools are widely used in hydrology and economics-related problems (Noguchi et al., 2011; Onoz and Bayazit, 2012; Fong, 2013). However, studies on their practical applicability for the reliability estimation of rock structures are very scarce (Pandit et al., 2019). The possible reasons could be as follows:
(a) Computational non-viability of the bootstrap-based approach for analyzing the reliability of large rock structures (it requires 10^4-10^6 numerical simulations);
(b) Non-availability of guidelines regarding their application to a variety of problems involving explicit/implicit performance functions (PFs), single/multiple PFs and correlated/non-correlated random properties;
(c) Non-availability of guidelines regarding allowable values of stability indicators (like the allowable values of reliability index and probability of failure in traditional reliability analyses); and
(d) Non-familiarity of rock practitioners with these methods due to involvement of significant mathematical and computational complexities.
In this context, this study aims to develop a generalized and computationally efficient resampling reliability approach applicable to a variety of rock slope and tunnel problems. These problems may vary in terms of the nature/number of PFs and input properties, i.e. explicit/implicit PFs, single/multiple PFs, and correlated/non-correlated random properties. The approach was developed by employing an alternative resampling statistical tool, i.e. the jackknife, contrary to recent approaches which employ the bootstrap. Further, this approach also uses reliability tools such as Latin hypercube sampling (LHS), Sobol's global sensitivity, the moving least square-response surface method (MLS-RSM), and Nataf's transformation to enhance its applicability to a variety of problems. The proposed approach was demonstrated, and its performance in terms of accuracy and efficiency was systematically compared with a bootstrap-based approach, for four real cases encompassing different types: (1) structurally-controlled slope failure (single and explicit PF; correlated properties), (2) stress-controlled slope failure (single and implicit PF; non-correlated properties), (3) circular tunnel in closely jointed rocks (multiple (two) and explicit PFs; non-correlated properties), and (4) tunnel in sparsely jointed brittle rocks (multiple (three) and implicit PFs; non-correlated properties). Further, importance ranking was performed for the different types of statistical uncertainty in the input properties, i.e. uncertainty in the distribution parameters and uncertainty in the distribution types. This was achieved by estimating the effect of these uncertainties separately on the output reliability estimates. Fig. 1 shows the complete description of the performed analyses.
This section describes the theoretical details of the components employed in the proposed (jackknife-based) and bootstrap-based reliability approaches. Both approaches were implemented in MATLAB (2016) using these components.
Let the values of a rock property be evaluated using N laboratory/in situ tests, which gives the original sample (OS), i.e. X = {X1, X2, X3, …, XN}, with mean (X̄) and standard deviation (SD) SN as follows:

$$\bar{X} = \frac{1}{N}\sum_{i=1}^{N}X_i, \qquad S_N = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(X_i-\bar{X}\right)^2}$$

Also, the best-fit probability distribution (PDB) was evaluated by estimating the Akaike information criterion (AICOS) values of candidate probability distributions (PDs) as

$$\mathrm{AIC}_{\mathrm{OS}} = 2K_{\mathrm{OS}} - 2\ln\left(L_{\mathrm{OS}}\right)$$

where LOS and KOS are the maximized likelihood and the number of parameters of a candidate PD required to fully characterize it. Candidate PDs are those that satisfy the goodness-of-fit tests, i.e. the Kolmogorov-Smirnov (KS) and Chi-square tests (Ang and Tang, 1984). The candidate PD with the minimum AICOS was considered as the PDB.

Further, for the studies considering the correlation between input properties, the Kendall correlation coefficient (τOS) between the correlated properties, i.e. X1 and X2, can be evaluated as

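The original-sample statistics described above can be computed with standard numerical libraries. The authors implemented their workflow in MATLAB; the following Python sketch is illustrative only, with a hypothetical helper name and an assumed candidate set of PD families (normal, lognormal, gamma, Weibull and uniform, the families that appear later in the paper).

```python
import numpy as np
from scipy import stats

CANDIDATE_PDS = (stats.norm, stats.lognorm, stats.gamma,
                 stats.weibull_min, stats.uniform)

def original_sample_statistics(x, candidates=CANDIDATE_PDS, alpha=0.05):
    """Sample mean, SD and AIC-based best-fit distribution of an original sample."""
    x = np.asarray(x, dtype=float)
    mean, sd = x.mean(), x.std(ddof=1)              # sample mean and SD
    best_name, best_aic = None, np.inf
    for dist in candidates:
        params = dist.fit(x)                        # maximum-likelihood fit
        log_l = dist.logpdf(x, *params).sum()       # maximized log-likelihood
        aic = 2 * len(params) - 2 * log_l           # AIC = 2K - 2 ln(L)
        # screen candidates with the KS goodness-of-fit test, as in the text
        if stats.kstest(x, dist.cdf, args=params).pvalue < alpha:
            continue
        if aic < best_aic:
            best_name, best_aic = dist.name, aic
    return mean, sd, best_name

# For correlated properties, the Kendall correlation coefficient of the OS:
# tau_os, _ = stats.kendalltau(x1, x2)
```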
2.2.1. Jackknife approach
The proposed approach employs the jackknife approach to characterize the uncertainties in the statistics of rock properties due to its superior computational efficiency, as explained in later sections. In the jackknife approach, the characterization of uncertainties in sample statistics is performed by generating N jackknife samples. These samples are one smaller than the original sample size, i.e. of size N - 1: Ji = {Ji,1, Ji,2, Ji,3, …, Ji,N-1} (ith jackknife sample). Samples are generated by eliminating one point from the original sample X without disturbing any other data, as shown in Fig. 2. There is a mean (X̄i), SD (Si) and PDB associated with each jackknife reconstituted sample (RS), as

Fig. 2. Details of procedure used to generate reconstituted samples using both jackknife and bootstrap approaches from the original sample.

This procedure is repeated N times to estimate the statistical parameters and PDB for the N jackknife samples. For the studies considering the correlation between properties, the Kendall correlation coefficient between the correlated properties, i.e. X1 and X2, associated with each jackknife reconstituted sample (τiRS) can be evaluated by

A bias correction is applied to a sample statistic, T, to overcome the limitation of the smaller sample size (Liu et al., 2020):

$$T_{\mathrm{corrected}} = N\,T_{\mathrm{OS}} - (N-1)\,\overline{T}_{\mathrm{RS}}$$

where TOS is the statistic estimated from the original sample and T̄RS is the mean of the statistic over the N jackknife reconstituted samples.
Uncertainty in X̄i can be characterized by estimating its jackknife statistics, i.e. the jackknife mean ([X̄N]mean) and jackknife coefficient of variation (COV) (COVX̄N), as

Uncertainty in Si can be evaluated by estimating its jackknife statistics, i.e. the jackknife mean ([SN]mean) and jackknife COV (COVSN) of Si, as

Uncertainty in PDB can be characterized by estimating the jackknife statistics, i.e. the mean (μAICRS) and COV (COVAICRS) of AICRS of the rth candidate PD, as

Also, the uncertainty in τiRS can be evaluated by estimating its jackknife statistics, i.e. the jackknife mean (μτRS) and jackknife COV (COVτRS), as follows:

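A minimal sketch of the jackknife resampling loop is given below (Python, illustrative only). It generates the N leave-one-out reconstituted samples of Fig. 2 and summarizes any chosen statistic by its jackknife mean and COV; the simple COV definition (SD of the replicates divided by their mean) and the standard jackknife bias correction are assumptions standing in for the exact expressions above.

```python
import numpy as np

def jackknife_statistics(x, statistic=np.mean):
    """Jackknife mean, COV and bias-corrected estimate of a sample statistic."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # i-th jackknife reconstituted sample: original sample with point i removed
    replicates = np.array([statistic(np.delete(x, i)) for i in range(n)])
    jk_mean = replicates.mean()
    jk_cov = replicates.std(ddof=1) / jk_mean
    # standard jackknife bias correction of the original-sample statistic
    corrected = n * statistic(x) - (n - 1) * jk_mean
    return jk_mean, jk_cov, corrected

# Example: uncertainty in the sample SD of a property
# jk_mean_sd, jk_cov_sd, _ = jackknife_statistics(x, lambda s: s.std(ddof=1))
```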
2.2.2. Bootstrap approach
Most of the approaches available in the literature employ the bootstrap approach to characterize the uncertainties in the statistics of rock properties. The idea behind the bootstrap is to use the data of rock properties (the sample at hand) as a "surrogate population" for approximating their sampling distributions by resampling (with replacement) from the sample data and creating a large number of "phantom samples" known as bootstrap samples (Singh and Xie, 2010). This helps to overcome the requirement of extracting and testing more rock samples (repeated samples of the same size) from the rock mass (the population of interest) multiple times. Hence, it helps to gather the required information effectively and rapidly. This method assumes that such samples of the same phenomenon will closely resemble the observed one. When assuming independent and identically distributed samples within the original data, a virtual n-sample composed of n values resampled independently from the original sample (with replacement) is a reasonable simulation of what could happen with natural data fluctuation (Rocquigny, 2012). This approach was used for the reliability analysis of rock structures to perform a comparative analysis with the proposed approach. Characterization of uncertainties in sample statistics in this approach is similar to the jackknife approach; however, the difference lies in the resampling procedure. Reconstituted samples are generated by performing random sampling with replacement from the original sample X, as shown in Fig. 2, which gives the N-sized bootstrap sample, i.e. Bi = {Bi,1, Bi,2, Bi,3, …, Bi,N} (ith bootstrap sample). Each data point Xi has an equal probability of being chosen in each draw. The sample size is kept the same as the original sample size to avoid any inaccuracy in the sample statistics (Johnson, 2001). There is an associated mean, SD (Si) and PDB for each bootstrap sample:

This procedure is repeated Ns times to estimate the uncertainty in the sample statistics. Also, the Kendall correlation coefficient (τiRS) between the correlated properties, i.e. X1 and X2, associated with each bootstrap reconstituted sample can be evaluated:

Uncertainty in the sample mean can be evaluated by estimating its bootstrap statistics, i.e. the bootstrap mean and COV, as

Uncertainty in Si can be evaluated by estimating its bootstrap statistics, i.e. the bootstrap mean ([SNs]mean) and COV (COVSNs), as

Uncertainty in PDB can be evaluated by estimating the bootstrap statistics, i.e. the mean (μAICRS) and COV (COVAICRS) of AICRS of the rth candidate PD, as

Also, the uncertainty in τiRS can be evaluated by estimating its bootstrap statistics, i.e. the bootstrap mean (μτRS) and bootstrap COV (COVτRS), as

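The bootstrap counterpart differs only in the resampling step, as noted above. The sketch below (Python, illustrative) draws Ns reconstituted samples of size N with replacement; for correlated properties the same row indices would be applied to both X1 and X2 so that data pairs stay together.

```python
import numpy as np

def bootstrap_statistics(x, statistic=np.mean, n_s=10_000, seed=0):
    """Bootstrap mean and COV of a sample statistic over Ns reconstituted samples."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Ns bootstrap samples of size N, drawn with replacement (Fig. 2)
    idx = rng.integers(0, n, size=(n_s, n))
    replicates = np.array([statistic(x[row]) for row in idx])
    return replicates.mean(), replicates.std(ddof=1) / replicates.mean()
```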
Sensitive input properties were identified and ranked based on their influences on the PFs. A detailed global sensitivity analysis was used to accomplish this step. This procedure reduces the computational effort by allowing the user to resample only the sensitive properties in the analysis. In the current study, Sobol's global sensitivity analysis (GSA) (Saltelli et al., 2008) was used to perform the analysis. It considers the whole variation in the parameter ranges of the input space, which is not considered in a local sensitivity analysis (LSA). The GSA computes variance-based sensitivity indices, through which the relative contributions of the input parameters to the variability of the output can be quantified. The total order/effects index (STi) was used to evaluate the sensitivity of the input rock properties. It includes the interaction effects between input properties and can be mathematically expressed as

where E(·) and V(·) represent the expectation and variance, respectively; X~i denotes all components of the input vector X except Xi. A Monte Carlo based numerical procedure provided by Saltelli (2002) was used to compute STi. It involves the generation of quasi-random samples for the input vector X and arranging them in matrices A and B:

where k represents the base sample size, and n is the dimension of the input vector. Using matrices A and B, matrix Ci can be constructed containing all elements of B except the ith column, which is taken from A.

Outputs are evaluated for the input realizations given in matrices A, B and Ci using the PF (i.e. Y = G(X), where G(·) is the PF), and the output column matrices (i.e. YA, YB and YCi) are constructed. The total order/effects index for Xi can be calculated using the Janon estimator (Janon et al., 2014) as:

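The A/B/Ci sampling scheme can be sketched as follows (Python, illustrative). The marginal input distributions, the vectorized PF `model`, and the quasi-random generator are assumptions; a Jansen/Janon-type normalized estimator of the total-effect index is used here as a stand-in for the exact estimator of Janon et al. (2014).

```python
import numpy as np
from scipy.stats import qmc

def sobol_total_indices(model, dists, k=2**13, seed=0):
    """Total-order Sobol indices via the Saltelli A/B/Ci sampling matrices."""
    n = len(dists)
    u = qmc.Sobol(d=2 * n, seed=seed).random(k)        # quasi-random base sample
    # transform uniforms to the input marginals; A and B are independent blocks
    a = np.column_stack([d.ppf(u[:, j]) for j, d in enumerate(dists)])
    b = np.column_stack([d.ppf(u[:, n + j]) for j, d in enumerate(dists)])
    y_b = model(b)
    s_t = np.empty(n)
    for i in range(n):
        c_i = b.copy()
        c_i[:, i] = a[:, i]                            # Ci: i-th column taken from A
        y_ci = model(c_i)
        mu = 0.5 * (y_b + y_ci).mean()
        var = 0.5 * (y_b**2 + y_ci**2).mean() - mu**2
        # normalized total-effect estimator: S_Ti = 1 - Cov(Y_B, Y_Ci)/V(Y)
        s_t[i] = 1.0 - ((y_b * y_ci).mean() - mu**2) / var
    return s_t
```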
The response surface method (RSM) was used to provide surrogate functional relationships for stability problems lacking explicit PFs/input-output relations. This leads to a significant reduction in computational effort by eliminating the requirement of a large number of repeated numerical simulations. In the present study, the MLS-RSM was used due to its proven accuracy for a variety of stability problems. In the MLS-RSM, an explicit polynomial approximation (Ĝ) of the implicit PF (G) in terms of the random variables (h = {h1, h2, …, hd}) can be given as shown below (Krishnamurthy, 2003):

where p(h) = [1 h1 h2 …]1×m is a quadratic polynomial basis function (m = 2d + 1); a(h) is a set of unknown coefficients, expressed as functions of the design point h to account for the variation of the coefficients at every new design point. a(h) can be determined by

where Y = [G(h1) G(h2) … G(hn)]T is the matrix of known PF values obtained from numerical modeling at n sampling points. Matrices A and B are defined as given below:

where

where wi(h) is the weighting or smoothing function. In this study, a spline weighting function (C1 continuous) with compact support was considered, as given below:

where γ = ||h - hi||2/li; li is the size of the domain of influence, which is chosen as twice the distance between the (1 + 2d)th experimental/sample point and the design point h; and d is the number of random variables. An LHS-based space-filling design was adopted for generating random input vector realizations (sampling points) from the input parameter distributions (Montgomery, 2001).
The accuracy of the constructed RSM was estimated using the Nash-Sutcliffe efficiency (NSE) (Moriasi et al., 2007; Pandit and Babu, 2018). NSE can be estimated by generating p off-sample points of the input properties using LHS. For these off-sample points, the PF values, i.e. G and Ĝ, were estimated using the original numerical program and the RSM, respectively. NSE can then be estimated by

$$\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{p}\left(G_i - \hat{G}_i\right)^2}{\sum_{i=1}^{p}\left(G_i - \bar{G}\right)^2}$$

where Ḡ is the mean of the PF values obtained from the original numerical program at the p off-sample points.
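A compact sketch of the MLS prediction and the NSE check is given below (Python, illustrative). The quartic spline weight is one common C1 compact-support choice and is an assumption, as is the simple Euclidean normalization of γ; the basis [1, h, h²] gives m = 2d + 1 as stated above.

```python
import numpy as np

def spline_weight(r):
    """C1 spline weight with compact support (one common quartic form; assumed)."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= 1.0, 1.0 - 6.0 * r**2 + 8.0 * r**3 - 3.0 * r**4, 0.0)

def mls_predict(h, h_samples, y_samples):
    """Moving least-squares response surface value G_hat(h) at a design point h."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    h_samples = np.asarray(h_samples, dtype=float)
    y_samples = np.asarray(y_samples, dtype=float)
    d = h.size
    basis = lambda x: np.concatenate(([1.0], x, x**2))   # [1, h, h^2] -> m = 2d + 1
    # domain of influence: twice the distance to the (1 + 2d)-th closest sample point
    dist = np.linalg.norm(h_samples - h, axis=1)
    l_i = 2.0 * np.sort(dist)[min(2 * d, len(dist) - 1)]
    w = spline_weight(dist / l_i)
    p = np.array([basis(x) for x in h_samples])           # (n, m)
    a_mat = p.T @ (w[:, None] * p)                        # A = P^T W P
    b_vec = p.T @ (w * y_samples)                         # B Y = P^T W Y
    coeff = np.linalg.solve(a_mat, b_vec)                 # a(h)
    return basis(h) @ coeff

def nse(g_true, g_hat):
    """Nash-Sutcliffe efficiency of the RSM at p off-sample points."""
    g_true, g_hat = np.asarray(g_true, float), np.asarray(g_hat, float)
    return 1.0 - np.sum((g_true - g_hat)**2) / np.sum((g_true - g_true.mean())**2)
```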
Nataf's transformation was used to incorporate the effect of correlation between random variables (Li et al., 2011; Tang et al., 2015; Liu et al., 2020). Nataf's transformation constructs an approximate joint PDF of correlated random variables using their marginal distributions and the correlation coefficient between them. According to Nataf's transformation, the approximate joint PDF of two correlated variables X1 and X2, i.e. f(X1, X2), can be expressed as

$$f\left(X_1, X_2\right) = f_1\left(X_1\right) f_2\left(X_2\right) \frac{\phi_2\left(Z_1, Z_2, \theta\right)}{\phi\left(Z_1\right)\,\phi\left(Z_2\right)}$$

where f1(X1) and f2(X2) are the marginal PDFs of X1 and X2, respectively; Z1 = Φ-1(F1(X1)) and Z2 = Φ-1(F2(X2)) are the standard normal variables, where Φ-1(·) is the inverse standard normal cumulative distribution function (CDF); φ(·) and φ2(·, ·, θ) are the univariate and bivariate standard normal PDFs, the latter with correlation coefficient θ; F1(X1) and F2(X2) are the marginal CDFs of X1 and X2, respectively; θ is the Pearson's linear correlation coefficient between Z1 and Z2, which can be evaluated from the Kendall correlation coefficient (τ) between X1 and X2 as θ = sin(0.5πτ).
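For the MCSs on correlated properties (e.g. φr and JRC in example 1), samples consistent with the joint PDF above can be generated as sketched below (Python, illustrative; the frozen marginal objects are assumptions).

```python
import numpy as np
from scipy import stats

def nataf_sample(f1, f2, tau, size, seed=0):
    """Correlated realizations of (X1, X2) via Nataf's transformation.

    f1, f2 : frozen scipy.stats marginal distributions of X1 and X2
    tau    : Kendall correlation coefficient between X1 and X2
    """
    theta = np.sin(0.5 * np.pi * tau)          # equivalent Pearson rho of (Z1, Z2)
    cov = [[1.0, theta], [theta, 1.0]]
    rng = np.random.default_rng(seed)
    z = stats.multivariate_normal(mean=[0, 0], cov=cov).rvs(size=size,
                                                            random_state=rng)
    # map the correlated standard normals to the marginals: Xk = Fk^{-1}(Phi(Zk))
    x1 = f1.ppf(stats.norm.cdf(z[:, 0]))
    x2 = f2.ppf(stats.norm.cdf(z[:, 1]))
    return x1, x2
```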
The basic idea behind the resampling reliability is to estimate the value of the output stability indicators (SIs), i.e. the reliability index (β) and probability of failure (Pf), for each reconstituted sample and then to characterize the uncertainties in the SIs. This algorithm employs all the components explained in the previous section for the analysis. For the problems considering β as the SI, MCSs were performed to estimate the PF values, i.e. G(h), for random realizations of the input properties generated for each reconstituted sample. β can then be estimated by

where μG and σG are the mean and SD of the G(h) values obtained from the MCSs. For problems considering Pf as the SI, MCSs were performed M times (total realizations), and the number of simulations for which G(h) lies in the failure region, i.e. Mf, was counted. Pf can then be estimated as given below:

$$P_f = \frac{M_f}{M}$$
By performing these reliability analyses for the N reconstituted samples generated using the jackknife/bootstrap approaches (Section 2.2), N values of the SIs can be estimated. Uncertainty in the SIs can then be characterized by estimating their COV (SICOV) using

$$\mathrm{SI}_{\mathrm{COV}} = \frac{\mathrm{SI}_{\mathrm{SD}}}{\mathrm{SI}_{\mathrm{mean}}}$$

where SISD and SImean are the standard deviation and mean of the SIs. The generalized algorithm and steps of the proposed approach are shown in Fig. 3. The algorithm was implemented via MATLAB.
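The overall resampling reliability loop of Fig. 3 can then be sketched as below (Python, illustrative). The function names and the FOS-based failure criterion (G < 1) are assumptions; for β-type SIs the MCS output would instead be summarized by the mean and SD of G(h).

```python
import numpy as np

def resampling_reliability(reconstituted_samples, build_sampler, g_func,
                           m=10_000, seed=0):
    """Mean and COV of the stability indicator (here Pf) over reconstituted samples."""
    rng = np.random.default_rng(seed)
    si = []
    for rs in reconstituted_samples:
        draw = build_sampler(rs)     # fits distributions to this RS, returns a sampler
        g = g_func(draw(m, rng))     # PF values for m random realizations (MCS)
        si.append(np.mean(g < 1.0))  # Pf = Mf / M for an FOS-type PF (assumed criterion)
    si = np.asarray(si)
    return si.mean(), si.std(ddof=1) / si.mean()   # SI_COV = SI_SD / SI_mean
```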
As can be seen in Fig. 3, the analysis was performed for three cases to rank the sources of statistical uncertainty, i.e. (i) the distribution parameters (mean and SD) and (ii) the distribution types of the input properties, based on their effect on the reliability estimates. In this study, resampling reliability analysis was performed for the following three cases:

Case (i): Uncertainty in distribution parameters and types (UDPT), which uses both the distribution parameters and distribution types of the RS;
Case (ii): Uncertainty in distribution parameters (UDP), which uses the distribution parameters of the RS and the distribution types of the OS; and
Case (iii): Uncertainty in distribution types (UDT), which uses the distribution types of the RS and the distribution parameters of the OS.

The UDP and UDT cases ignore the uncertainties in distribution types and distribution parameters, respectively, assuming that the statistics of rock properties derived from the limited original data can represent the population statistics well. The sketch after this paragraph illustrates how the three cases assemble the distribution of a resampled property.
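The only difference between the three cases is which pieces of the fitted model come from the RS and which from the OS, as the hypothetical helper below illustrates (Python; `best_fit` is an assumed callable performing the AIC selection of Section 2.1).

```python
def case_distribution(case, rs_data, os_data, os_pdb, best_fit):
    """Frozen distribution for one input property under the UDPT/UDP/UDT cases.

    `best_fit(data)` returns (family, params), e.g. (scipy.stats.gamma, (a, loc, scale));
    `os_pdb` is the family already selected for the original sample (OS).
    """
    if case == "UDPT":                       # parameters and type both from the RS
        family, params = best_fit(rs_data)
    elif case == "UDP":                      # type fixed to the OS, parameters from the RS
        family, params = os_pdb, os_pdb.fit(rs_data)
    elif case == "UDT":                      # type from the RS, parameters fixed to the OS
        family, _ = best_fit(rs_data)
        params = family.fit(os_data)
    else:
        raise ValueError(f"unknown case: {case}")
    return family(*params)
```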

Fig. 3. Generalized flowchart to perform resampling reliability analysis for both jackknife- and bootstrap-based approaches.
This example is a slope stability problem corresponding to a type-1 problem, i.e. single and explicit PF considering the correlation between random properties. For this problem, a potential planar rockslide along a sparsely jointed quartzite rock mass (unit weight of 26 kN/m3) located along the Rishikesh-Badrinath highway, India, was considered (Pain, 2012). Fig. 4 shows the geometrical dimensions of the slope under consideration. Analysis was performed as explained in Fig. 3.

Fig. 4. Details of the geometry and forces acting on the slope for example 1.
Step-1: Estimation of rock properties
As the slope was structurally controlled, rock joint properties of the critical joint were estimated through ISRM suggested standard methods (Pain, 2012). The original sample of the properties is presented in Table A1 in the Appendix.
Step-2: Estimation of parameters and type of probability distributions of rock properties from original sample
Table 1 summarizes the distribution parameters (i.e. mean and SD) and PDB of the properties estimated from the original sample through Eqs. (1) and (2), as described in Section 2.1.

Table 1 Statistical parameters of rock properties estimated from original sample and external forces considered for example 1.
Step-3: Derivation of PF
The PF for this slope was the factor of safety (FOS), and the expression of the PF was derived using the limit equilibrium method (LEM) by evaluating the driving and resisting shear forces of the failing block (abcd) along the critical joint 'ab' (see Fig. 4) (Shukla and Hossain, 2011). The resisting shear strength was assumed to be governed by the Barton-Bandis strength criterion. The derived expression of FOS is

All the terms in the FOS expression (Eq. (33)) are explained in Fig. 4. It should be noted that the expression of the PF for this problem was derived using the LEM due to the clear identification of the failure surface through kinematic and field analysis. Joint 'ab', of low shear strength, was daylighting in the slope face with approximately the same dip direction, making it feasible to act as a sliding surface for the failing rock mass. As per its mathematical requirements, the LEM needs a defined failure surface to perform the force/moment equilibrium, which is available for this slope. Hence, the expression of FOS was derived using the LEM.
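For illustration, a heavily simplified version of such a planar-failure FOS is sketched below (Python). It considers only a dry, static block with Barton-Bandis shear strength; the actual Eq. (33) additionally includes the external forces of Fig. 4 (seismic force and water force in the tension crack), so this sketch is not a substitute for it.

```python
import numpy as np

def fos_planar_barton_bandis(weight, dip_deg, area, phi_r_deg, jrc, jcs):
    """FOS of a dry, static planar sliding block with Barton-Bandis strength.

    weight    : block weight per unit thickness (kN/m)
    dip_deg   : dip of the sliding joint 'ab' (degrees)
    area      : sliding surface area per unit thickness (m^2/m)
    phi_r_deg : residual friction angle (degrees)
    jrc, jcs  : joint roughness coefficient and joint wall strength
                (jcs in the same stress units as the computed normal stress)
    """
    dip = np.radians(dip_deg)
    sigma_n = weight * np.cos(dip) / area      # normal stress on the joint
    tau_drive = weight * np.sin(dip) / area    # driving shear stress
    # Barton-Bandis peak shear strength of the joint
    phi_mob = np.radians(phi_r_deg + jrc * np.log10(jcs / sigma_n))
    return sigma_n * np.tan(phi_mob) / tau_drive
```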
Step-4: Computation of Sobol’s total order indices and identification of sensitive properties
Once the PF was derived, sensitive rock properties were identified by performing GSA (see Section 2.3). The analysis was performed using k = 10^5 quasi-random samples (Section 2.3) and the results are summarized in Fig. 5a. It is observed that the sensitivities of φr and JRC are significantly higher than that of JCS, as indicated by their higher values of STi. Hence, the resampling reliability analysis was performed by characterizing the statistical uncertainties of these properties, i.e. φr and JRC, only.
Step-5: Generation of reconstituted samples from original sample for sensitive rock properties
It can be seen that the original sample of rock properties contains 19 data points only, which can be considered statistically small and insufficient. Hence, the proposed methodology was used for the resampling reliability analysis to consider the statistical uncertainties in the sensitive properties, i.e. φr and JRC, arising due to the limited data. To quantify the statistical uncertainties of the properties in the next step, 19 jackknife samples were generated (see Section 2.2.1). Further, the uncertainties were also quantified using the bootstrap approach in the next step by generating 10,000 reconstituted bootstrap samples (see Section 2.2.2) to evaluate the accuracy of the jackknife-based approach. These RSs were also used for the resampling reliability analysis in Step 7.
Step-6: Quantification of statistical uncertainties in sensitive properties
Statistical uncertainties in the sensitive properties, i.e. φr and JRC, were characterized using the proposed (jackknife) approach as explained in Section 2.2.1. Results are shown in Table 2 and Fig. 5b-d. It can be observed that the jackknife means of the distribution parameters (mean and SD) and the correlation coefficient for both properties coincided with their corresponding values estimated from the original sample (see Table 1). The values of the mean and SD of φr from the original sample were estimated to be 31.96 and 2.1873; for JRC these values were estimated to be 10.18 and 2.5538, respectively; and τOS between φr and JRC was estimated to be 0.2635 (see Section 2.1). However, there were large jackknife COVs associated with the sample statistics (mean, SD and correlation coefficient) of both properties, reflecting the statistical uncertainties in the sample statistics due to the limited data. These statistical uncertainties cannot be characterized using the traditional approach, which assumes that the statistics estimated from the original sample are accurate without any statistical uncertainties.

Fig. 5. (a) Total order Sobol's indices indicating the sensitivities of properties on the reliability estimation for example 1; (b) PDFs of sample statistics of JRC obtained from the jackknife approach for example 1; (c) PDFs of sample statistics of φr obtained from the jackknife approach for example 1; (d) PDF of the correlation coefficient between JRC and φr obtained from the jackknife approach for example 1; and (e) PDFs of reliability indices obtained from the jackknife-based approach for all the cases for example 1.

Table 2 Jackknife statistics of sample mean, SD and AIC values for sensitive properties for example 1.
Further, an analysis using the bootstrap approach was performed on the 10,000 bootstrap reconstituted samples generated in the previous step, and the results were compared with those of the jackknife approach. Results are shown in Table 2. It can be observed that the percentage differences between the bootstrap means and jackknife means of the distribution parameters (mean and SD) for both properties, i.e. φr and JRC, ranged from 0.01% to 4.57%, while the differences in their COVs ranged from 4.79% to 10.40%. For τRS between φr and JRC, the percentage difference between the bootstrap mean and jackknife mean was 2.94%, while the difference in their COVs was 0.63%. This further verifies the accuracy of the proposed approach in characterizing the statistical uncertainties in the distribution parameters of the input properties.
A similar analysis was performed to analyze the accuracy of the jackknife approach in characterizing the uncertainty in the distribution types of the input properties. AICRS was evaluated for the candidate PDs for each reconstituted sample using the jackknife approach, and the results were compared with those of the bootstrap approach. It was found that the efficiency of the jackknife approach was questionable, as the differences between the bootstrap and jackknife statistics of AICRS for the candidate PDs of both properties, i.e. φr and JRC, were significant. Further, the probabilities of a candidate PD being chosen as the best-fit PD (PDB) were significantly different for the two approaches. The PD with the maximum probability of being chosen as the best fit (number of times a PD was chosen as the best fit/total number of reconstituted samples) also differed. The uniform distribution was found to have the maximum probability of being chosen as the best fit for JRC from the jackknife approach, i.e. 100%, compared to 91.56% from the bootstrap approach. For φr, the gamma distribution had the maximum probability, 84.21%, of being chosen as the best fit from the jackknife approach, whereas the lognormal distribution had the maximum probability, 41.44%, of being chosen as the best fit from the bootstrap approach. Overall, it can be concluded that the jackknife approach can efficiently characterize the statistical uncertainties in the distribution parameters. At the same time, its accuracy is questionable for the uncertainty characterization of distribution types.
Step-7: Reliability analysis considering statistical uncertainties in sensitive properties
Finally, reliability analysis was performed initially using the proposed (jackknife-based) approach considering uncertainties in both distribution parameters and types, i.e. J-UDPT, as discussed in Section 3. All external forces, including the seismic force and the water force along the tension crack, were considered as random variables (without resampling) with their statistical parameters shown in Table 1. These parameters were chosen based on an extensive literature review (Hoek and Bray, 1981; IS:1893, 2002; Ahmadabadi and Poisel, 2016). Results are summarized in Table 3 and Fig. 5e. It was observed that the mean estimate of β coincided with the point estimate of β, i.e. 0.6155, estimated using the traditional reliability approach (see Table 3). For the traditional reliability analysis, MCSs were performed on the PF derived in Step 3 with 10,000 random realizations of the input properties generated using the joint PDF between φr and JRC (see Eq. (29)) and the statistics derived from the original sample (Table 1). β was then estimated using Eq. (30). The coincidence of the mean estimate of β from J-UDPT with its point estimate from the traditional approach can be attributed to the fact that the proposed approach draws samples from the original sample (i.e. the parent sample). Hence, the mean estimates of β converge to the point estimate of β derived from the original sample (Liu et al., 2020). This indicates the significant accuracy possessed by the proposed approach with minimal computational effort. However, significant uncertainty in the estimated β, i.e. a COV of 15.43%, was also observed from the J-UDPT analysis due to the high statistical uncertainties in the input rock properties, which the traditional reliability approach could not capture.
Further, an analysis was performed using B-UDPT to compare the results with the proposed J-UDPT approach. Results are shown in Table 3, and it can be observed that the results of both approaches were in good agreement, with percentage differences in the β statistics ranging from 1.65% to 7.26%. Another important observation was related to the identification of the uncertainty source affecting the statistics of β significantly. Table 3 shows that while the results of the J-UDP approach (which neglects uncertainty in distribution types) agree well with the B-UDPT and J-UDPT approaches, there is significant underestimation in the COV of β estimated by the J-UDT approach (which neglects uncertainty in distribution parameters). The underestimation was approximately 57.16%, 56.99% and 54.04% as compared to the J-UDPT, J-UDP and B-UDPT approaches, respectively. Interestingly, the means of β estimated from J-UDT, J-UDP and J-UDPT almost coincided with each other. These trends were evident for the bootstrap-based B-UDP and B-UDT approaches also, when compared with the B-UDPT approach, as shown in Table 3. An important inference is that the effect of statistical uncertainties in the distribution types of the input properties on the statistics of β is negligible compared with that of the distribution parameters.
It can be concluded that the proposed approach, i.e. the J-UDPT approach, was incapable of accurately modeling the uncertainties in the distribution types of the input properties. However, it is highly accurate and appropriate for modeling the β statistics for structurally controlled rock slopes, since there is a negligible effect of uncertainties in distribution types on the reliability estimates of this slope. Hence, J-UDP could be a better choice as compared with J-UDPT due to its satisfactory accuracy and reduced computational complexity. Further, both J-UDPT and J-UDP were computationally more efficient than the bootstrap-based approaches due to the reduced number of required reliability assessments, i.e. 19 as compared to 10,000. For this case, the expected performance levels for the considered approaches as per the US Army Corps of Engineers (1997) classification are reported in Table 3. It can be observed that this slope is unstable and requires external stabilization.

Table 3 Estimated statistics of reliability index (β) for example 1.
This example is a slope stability problem corresponding to a type-2 problem, i.e. single and implicit PF, ignoring correlation between random properties. The rock slope under consideration supports the piers of the world's highest railway bridge over the river Chenab in India (Tiwari and Latha, 2020). The rock mass at the site was heavily jointed dolomite (unit weight of 25 kN/m3), making this slope prone to stress-controlled circular failures.
Step-1: Estimation of rock properties
Table A2 in the Appendix shows the original sample of the rock properties estimated using ISRM suggested standard methods (Tiwari and Latha, 2020).
Step-2: Estimation of parameters and type of probability distributions of rock properties from original sample
Table 4 summarizes the distribution parameters and PDB of the rock properties estimated from the original sample through Eqs. (1) and (2).
Step-3: Derivation of performance function (PF)
Unlike example 1, this slope lacks a clearly defined failure surface. Application of the LEM would require assuming a pre-defined critical failure surface to perform the force/moment equilibrium for this slope, and any prior assumption regarding the failure surface can significantly affect the accuracy of the derived expression of FOS. Hence, it was decided to use the MLS-RSM coupled with numerical analysis to derive an expression for the PF, which removes this mathematical requirement of the LEM, since the numerical analysis does not require any prior assumptions regarding failure surfaces. The expression for FOS was derived using the MLS-RSM (Section 2.4) by generating n = 150 LHS points of the input properties based on their probabilistic characteristics given in Table 4. FOSs for these sampling points were estimated using the shear strength reduction (SSR) method by performing analyses through Phase2 (Rocscience, 2014), which gives the vector Y (Section 2.4). The rock mass was assumed to follow the elastic-perfectly plastic Hoek-Brown strength criterion. A typical Phase2 model of the slope is shown in Fig. 6. Once the n sampling points, i.e. [h1, h2, …, hn], and the vector Y were known, the PF value for any new point h could be obtained as explained in Section 2.4. The response surface was constructed between the intact rock properties (Ei, UCS and mi) and GSI as inputs (d = 4 (see Section 2.4)) and FOS as the direct output, which reduced the computational effort involved in the estimation of rock mass properties for each realization of the intact rock properties and GSI. It is to be noted that the relations provided between the intact rock and rock mass properties (Hoek et al., 2002; Hoek and Diederichs, 2006) were implicitly embedded during the response surface construction. The Nash-Sutcliffe efficiency (NSE) index of the constructed response surface was estimated to be 0.9824 for p = 25 off-sample points, which corresponds to a very good performance rating.

Table 4 Statistical parameters of rock properties estimated from original sample and external forces considered for example 2.
Step-4: Computation of Sobol’s total order indices and identification of sensitive properties
Once the MLS-RSM was constructed for the slope, sensitive rock properties were identified by performing GSA on this RSM. The analysis was performed using k = 10^5 quasi-random samples (see Section 2.3). Results are summarized in Fig. 7a. STi of UCS and GSI were significantly higher (up to 90%) than those of Ei and mi. Hence, the resampling was performed by considering the statistical uncertainties of the sensitive properties, i.e. UCS and GSI, only. Properties considered and ignored for resampling are summarized in Table 4.

Fig. 6. Typical numerical model prepared in Phase2 for the analysis of example 2.
Step-5: Generation of reconstituted samples from original sample for sensitive rock properties
The original sample of rock properties contains 22 data points only, which are statistically small and insufficient. Hence, the proposed methodology was used for the resampling reliability analysis to consider the statistical uncertainties in the sensitive properties, i.e. UCS and GSI, in the analysis. 22 jackknife samples were generated initially to characterize the statistical uncertainties in the next step, as explained in Section 2.2.1. Further, uncertainties were also quantified in the next step using the bootstrap approach by generating 10,000 reconstituted bootstrap samples (Section 2.2.2) for comparative analysis. These RSs were also used for the resampling reliability analysis in Step 7.
Step-6: Quantification of statistical uncertainties in sensitive properties
Statistical uncertainties in the distribution parameters and types of the sensitive properties were initially estimated using the proposed approach (jackknife). The analysis was performed using the 22 jackknife samples generated in the previous step, and the results are shown in Fig. 7b and c. Due to the length limitation of the article, more details are included in Table A3 in the Appendix for interested readers. It can be observed from Fig. 7b and c that the jackknife means of the reconstituted sample statistics (mean and SD) match well with the point estimates of the sample statistics estimated from the original sample. However, significant jackknife COVs were associated with the sample statistics, signifying statistical uncertainties in the data statistics due to the limited data. Further, the results were compared with the bootstrap analysis performed using Ns = 10,000 reconstituted samples. It was observed that the percentage differences between the bootstrap means and jackknife means of the distribution parameters (mean and SD) for both properties, i.e. UCS and GSI, were 0.01%-4.75%. The corresponding differences in the COVs ranged from 3.14% to 18.22%. However, considerable differences exist in the jackknife statistics of AICRS of the candidate PDs as compared to their bootstrap statistics for both UCS and GSI. This indicates the inefficiency of the jackknife approach in characterizing the uncertainties associated with the distribution types of the input properties. Further, the probability of a candidate PD being chosen as PDB was significantly different for the two approaches. It was observed that the PD with the maximum probability of being chosen as the best fit also differed. The lognormal distribution had the maximum probability (100%) of being chosen as the best fit for GSI from the jackknife approach, against 71.37% from the bootstrap approach. For UCS, the gamma distribution had a maximum probability of 90.91% of being the best fit from the jackknife approach, whereas the Weibull distribution was the best fit with a maximum probability of 36.78% from the bootstrap approach. It can be concluded that the jackknife approach can efficiently characterize the statistical uncertainties in the distribution parameters. However, its accuracy is questionable for the uncertainty characterization of distribution types.
Step-7: Reliability analysis considering statistical uncertainties in sensitive properties
Finally, reliability analysis was performed using the proposed J-UDPT approach. Table 5 and Fig. 7d summarize the results of the estimated statistics and PDFs of β. For all analyses, the seismic force was considered as a random variable (with no resampling) with the statistical parameters of the seismic coefficient shown in Table 4. It can be observed that the mean of β coincided with the point estimate of β = 1.1978 estimated using the traditional reliability approach. Traditional reliability analysis was performed using 10,000 MCSs on the RSM constructed in Step 3 using the statistics of the input properties derived from the original sample (Table 4). However, a significant COV of β, i.e. 15.95%, was observed in the J-UDPT analysis due to the statistical uncertainties of the input properties, which could not be captured by the traditional reliability approach.
Further, a comparative analysis was performed between the B-UDPT and J-UDPT approaches. Results are shown in Table 5. It can be observed that the results of both approaches were in good agreement, with percentage differences in the statistics of β being 1.38%-11.65%. Similar to the previous example, it is observed that the results of the J-UDP approach agree well with the B-UDPT and J-UDPT approaches; however, a significant difference exists in the COV of β estimated by the J-UDT approach. This difference was 64.89%, 62.99% and 60.82% compared to the J-UDPT, J-UDP and B-UDPT approaches, respectively. These trends were evident for the bootstrap-based B-UDP and B-UDT approaches also, as shown in Table 5. In summary, the effect of statistical uncertainties in the distribution types of the input properties on the statistics of β was negligible compared to that of the distribution parameters.
It can be concluded that the results of this problem type were consistent with the previous case. Hence, the J-UDP approach could be the best choice to model the β statistics for stress-controlled rock slopes due to its high accuracy and reduced computational complexity. The expected performance levels evaluated from the different approaches are reported in Table 5. It shows that this slope is unstable and requires external stabilization.

Fig. 7. (a) Total order Sobol's indices indicating the sensitivities of properties on the reliability estimation for example 2; (b) PDFs of sample statistics of UCS obtained from the jackknife approach for example 2; (c) PDFs of sample statistics of GSI obtained from the jackknife approach for example 2; and (d) PDFs of reliability indices obtained from the jackknife-based approach for all the cases for example 2.
This example is a tunnel stability problem corresponding to a type-3 problem, i.e. multiple and explicit PFs, ignoring correlation between random properties. For this, a circular tunnel of radius (Ro) 4 m was chosen, subjected to an external hydrostatic stress (Po) of 8 MPa (see Fig. 8). This tunnel was under preliminary consideration at the Ganajur Mining Project in Karnataka, India, for geological investigation. Some of the parameters in this study were assumed due to the preliminary stage of the project. Based on the extensive geological investigation, the rock mass at the site was identified as closely jointed greywacke (unit weight of 27 kN/m3), which is part of the greenstone Shimoga belt (Pandit et al., 2019).
Step-1: Estimation of rock properties
Table A4 in the Appendix shows the original sample data of the rock properties estimated using ISRM suggested standard methods (Tiwari and Latha, 2020).
Step-2: Estimation of parameters and type of probability distributions of rock properties from original sample
Table 6 summarizes the estimated distribution parameters and PDB of the rock properties from the original sample through Eqs. (1) and (2).

Table 5 Estimated statistics of reliability index (β) for example 2.

Table 6 Statistical parameters of rock properties estimated from original sample for example 3.
Step-3: Derivation of performance function (PF)
The analysis was performed using the analytical convergence-confinement method (CCM), assuming the rock mass as an elastic-perfectly plastic Hoek-Brown material. Multiple PFs were considered to analyze the tunnel reliability: (i) the plastic zone radius (Rp) and (ii) the tunnel wall convergence (ui). Expressions for the PFs were adapted from the literature by assuming that the tunnel is unsupported, without internal/support pressure, and that the rock mass yields with zero plastic volume change (Brown, 1980). Details of the CCM and the expressions for the PFs, i.e. Rp and ui, are presented in the Appendix.
Step-4: Computation of Sobol's total order indices and identification of sensitive properties
Multiple sensitivity analyses were performed using GSA to identify the sensitive parameters affecting either of the two PFs: the plastic radius (Rp) and the tunnel wall convergence (ui). Analyses were performed using k = 10^5 quasi-random samples, and the results are summarized in Fig. 9a. It is observed that Rp was significantly sensitive to mi, UCS and GSI, while ui was sensitive to GSI only, as indicated by their STi. None of the PFs showed any sensitivity to Ei. Hence, the resampling was performed for mi, UCS and GSI, and the statistical uncertainty of Ei was ignored. It can be observed that the results presented here are in terms of the intact rock properties and GSI directly. The reason is that these properties are directly estimated from field and laboratory testing. It should be noted that for each random realization of the intact rock properties and GSI, the rock mass properties were estimated (Hoek et al., 2002; Hoek and Diederichs, 2006) for the estimation of Rp and ui.

Fig. 8. Details of the geometry and forces acting on the tunnel for example 3.
Step-5: Generation of reconstituted samples from original sample for sensitive rock properties
The original sample of rock properties contains 22 data points only, which are statistically insufficient. Hence, the proposed methodology was used for the reliability analysis to consider the statistical uncertainties in the sensitive properties, i.e. mi, UCS and GSI. Initially, 22 jackknife samples were generated to quantify the statistical uncertainties in the sensitive properties. Further, uncertainties were also quantified in the next step using the bootstrap approach by generating 10,000 reconstituted bootstrap samples for comparative analysis. These RSs were also used for the resampling reliability analysis in Step 7.
Step-6: Quantification of statistical uncertainties in sensitive properties
Statistical uncertainties in the sensitive properties, i.e. mi, UCS and GSI, were characterized using the 22 jackknife samples generated in the previous step. Results are summarized in Fig. 9b and c. More details are included in Table A5 in the Appendix for interested readers. It is observed that the jackknife means of the distribution parameters matched well with their original sample statistics. However, there were significant jackknife COVs associated with the sample statistics of all sensitive properties. Further, the results of the jackknife approach were compared with those of the bootstrap analysis performed using the 10,000 reconstituted samples generated in the previous step. The percentage differences between the bootstrap means and jackknife means of the distribution parameters (mean and SD) for the properties, i.e. mi, UCS and GSI, were 0.05%-11.05%. The differences in their COVs were 2.21%-7.72%. However, considerable differences exist in the jackknife statistics of AICRS of the candidate PDs as compared to their bootstrap statistics. The probability of a candidate PD being chosen as PDB was significantly different for the two approaches. The PD with the maximum probability of being chosen as the best fit also differed. The Weibull and normal distributions had the maximum probability (100%) of being chosen as the best fit from the jackknife approach for mi and GSI, respectively, compared to 83.20% and 71.18% from the bootstrap approach. For UCS, the gamma distribution had a maximum probability of 95.45% of being the best fit from the jackknife approach, whereas the Weibull distribution had a maximum probability of 48.24% of being chosen as the best fit from the bootstrap approach. This again indicates the inefficiency of the jackknife approach in characterizing the uncertainties associated with the distribution types of the input properties.
Step-7: Reliability analysis considering statistical uncertainties in sensitive properties
Resampling reliability analysis was performed using the proposed J-UDPT approach. Pf for each reconstituted sample was estimated (using Eq. (31)) by performing 10,000 MCSs on the PFs and evaluating the number of failed samples (Mf). Analyses were performed for both PFs, i.e. Rp and ui. Failed samples/simulations (Mf) were those for which the estimated Rp and ui exceeded their allowable values of 6 m and 40 mm, respectively. The allowable values of Rp and ui were taken as 1.5Ro and 0.01Ro, respectively (Chakraborty and Majumder, 2018). The selection of different allowable values may change the magnitude of the results; however, the analysis procedure will remain the same. Table 7 and Fig. 9d summarize the analysis results. The mean estimates of Pf obtained using the J-UDPT approach coincided with the point estimates of Pf from the traditional reliability analysis, i.e. 0.1731 and 0.4113 corresponding to Rp and ui, respectively. Pf for the traditional reliability analysis was estimated (using Eq. (31)) by performing 10,000 MCSs on Eqs. (34)-(40) and counting Mf. Further, the statistics of Pf estimated using the J-UDPT approach were in good agreement with those of the B-UDPT approach for both Rp and ui, with minimal percentage differences of 1.57%-17.82%. These results verify the accuracy of the proposed approach for the resampling reliability analysis of this example.
Similar to the previous examples, the results of the J-UDP approach agreed well with the B-UDPT and J-UDPT approaches. However, significant differences were observed in the COVs of Pf corresponding to both Rp and ui estimated by the J-UDT approach. The differences were 80.55%, 80.58% and 78.72% as compared with the J-UDPT, J-UDP and B-UDPT approaches, respectively, for Rp. These differences were estimated to be 81.90%, 79.85% and 85.12% for ui. These trends were evident for the bootstrap-based B-UDP and B-UDT approaches also, as shown in Table 7. Overall, the effect of statistical uncertainties in the distribution types of the input properties on the statistics of Pf was negligible as compared to that of the distribution parameters.
In conclusion, the results of this problem type were consistent with the previous case studies. The J-UDP approach could be considered the best choice to model the Pf statistics for stress-controlled failures along circular tunnels in closely jointed rock masses. For this particular case study, the failure possibilities corresponding to the estimated Pf (Cai, 2011) are reported in Table 7.

Fig. 9. (a) Total order Sobol's indices indicating the sensitivities of properties on the reliability estimation for example 3; (b) PDFs of sample statistics of mi obtained from the jackknife approach for example 3; (c) PDFs of sample statistics of UCS obtained from the jackknife approach for example 3; and (d) PDFs of probabilities of failure based on the plastic radius performance function obtained from the jackknife-based approach for all the cases for example 3.
This example is a tunnel stability problem corresponding to a type-4 problem, i.e. multiple and implicit PFs, ignoring the correlation between random properties. For this, a deep tunnel constructed for nuclear waste disposal in Ontario, Canada, was considered (Langford and Diederichs, 2015). Based on the extensive geological investigation, the rock mass was classified as massive bluish-gray to gray-brown argillaceous limestone (unit weight of 27 kN/m3) with very good joint surface conditions, making this tunnel prone to brittle failures (Intera Consulting Ltd., 2011; NWMO, 2011).
Step-1: Estimation of rock properties
Table A6 in the Appendix provides the original dataset of rock properties estimated through ISRM suggested standard methods (Intera Consulting Ltd., 2011; NWMO, 2011).
Step-2: Estimation of parameters and type of probability distributions of rock properties from original sample
Table 8 summarizes the estimated distribution parameters and PDB of the rock properties from the original dataset.

Table 7 Estimated statistics of probability of failure (Pf) for example 3.

Table 8 Statistical parameters of rock properties estimated from original sample for example 4.
Step-3: Derivation of PF
It was not feasible to obtain explicit PFs for this tunnel due to the complex tunnel shape and failure mechanisms involved. Hence, the MLS-RSM was coupled with numerical analysis to derive the PF expressions. Tunnel stability was analyzed for multiple (three) PFs: (i) damage zone depth (DZD), (ii) roof displacement (ur), and (iii) horizontal wall displacement (uh). Explicit expressions of these three PFs were derived by generating n = 150 sampling points of the rock properties using the LHS method. Values of DZD, ur and uh were determined for these sampling points using the cohesion weakening friction strengthening (CWFS) model implemented through FLAC-2D (Itasca, 2011). The Y matrix was evaluated corresponding to all PFs (see Section 2.4). A typical FLAC-2D model of this tunnel, along with the displacement measuring locations, is shown in Fig. 10. The dimensions of the numerical model were kept at 100 m along both (horizontal and vertical) directions. The domain was discretized using 90,000 finite-difference zones. Displacements were restrained along the vertical and horizontal directions for the bottom and side boundaries, respectively. The progressive excavation process was simulated in the numerical model using the excavation relaxation approach (Cai, 2008; Zhao et al., 2010). In this approach, the in situ stresses are initially applied and the forces on the grid points of the excavation boundaries are recovered. The cross-sectional zones of the excavation are then removed, and equivalent forces are applied in the direction opposite to the recovered forces along the boundary grid points. These equivalent forces are then gradually reduced to zero. The CWFS model was implemented through the built-in strain-dependent Mohr-Coulomb failure model in FLAC-2D to allow the evolution of the model parameters, i.e. cohesion and friction, as functions of plastic strain. The CWFS model was used for this case study due to the incapability of conventional strength models to model brittle rock failures. The initial friction angle (φini) was taken as 0° (Zhao et al., 2010) for all analyses. The other CWFS model parameters, i.e. the mobilized friction angle (φmob), peak cohesion (cpeak) and residual cohesion (cres), were estimated from the intact rock properties and GSI as suggested by Walton (2019). The plastic strain limits for cohesion (εpc) and friction angle (εpφ) were assumed to be 0.0018 and 0.0038, respectively (Walton, 2019). Theoretical details of the CWFS model and the guidelines/empirical relations used to estimate the required parameters are explained in the Appendix.

Fig. 10. Typical numerical model prepared in FLAC-2D illustrating damage zone depth (DZD) and displacement measurement nodes for wall and roof displacements for the analysis of example 4.
It should be noted that the RSMs were constructed directly between the intact rock properties and GSI (as inputs) and DZD/displacements (as outputs). This was done to reduce the computational effort involved in the estimation of the CWFS parameters for every realization of the intact rock properties and GSI. Empirical relations between the rock properties and the CWFS model parameters were implicitly embedded in the constructed RSMs. NSE indices for the constructed RSMs corresponding to DZD, ur and uh were estimated to be 0.9444, 0.8667 and 0.9544, respectively, for p = 25 off-sample points. The performances of all three RSMs can be rated as very good.
Step-4: Computation of Sobol’s total order indices and identification of sensitive properties
Multiple GSAs were performed on the constructed RSMs to identify sensitive properties based on their influence on the PFs. Analyses were performed using k = 10^5 quasi-random samples (Section 2.3), and the results are shown in Fig. 11a. It can be observed that DZD was sensitive only to UCS, while the wall displacements were sensitive to both Ei and UCS. The sensitivities of mi and GSI were negligible for all three PFs, as indicated by their significantly low STi. Hence, the reliability analysis was performed by resampling UCS and Ei only.
Step-5: Generation of reconstituted samples from original sample for sensitive rock properties
The original sample of rock properties contains 23 data points only, which are statistically insufficient. Hence, the proposed methodology was used for the reliability analysis to consider the statistical uncertainties in the sensitive properties, i.e. UCS and Ei. Initially, 23 jackknife samples were generated to characterize the statistical uncertainties in the next step. Further, the statistical uncertainties were also quantified using the bootstrap approach in the next step by generating 10,000 reconstituted bootstrap samples for comparative analysis.
Step-6: Quantification of statistical uncertainties in sensitive properties
Statistical uncertainties in the distribution parameters and types of the sensitive properties were initially estimated using the proposed approach (jackknife). Statistical uncertainties in the sensitive properties, i.e. UCS and Ei, were characterized using the 23 jackknife samples, and the results are shown in Fig. 11b and c. More details are included in Table A7 in the Appendix for interested readers. The jackknife means of the distribution parameters matched well with the original sample statistics for all sensitive properties. However, significant jackknife COVs signify the associated statistical uncertainties due to the limited data. Further, the results matched those of the bootstrap analysis. The percentage differences between the bootstrap means and jackknife means of the distribution parameters for UCS and Ei ranged from 0.04% to 12.11%. The differences in their COVs were 4.11%-8.74%. In contrast, considerable differences existed in the jackknife statistics of AICRS of the candidate PDs as compared with their bootstrap statistics for both UCS and Ei. The probability of a candidate PD being chosen as PDB was significantly different for the two approaches. The PD with the maximum probability of being chosen as the best fit also differed. The lognormal distribution had a maximum probability of 95.65% of being chosen as the best fit for UCS from the jackknife approach, compared with 64.76% from the bootstrap approach. For Ei, the gamma distribution had a maximum probability of 69.56% of being the best fit from the jackknife approach, compared to 31.41% from the bootstrap approach. This again indicates the inefficiency of the jackknife approach in characterizing the uncertainties associated with the distribution types of the input properties.

Fig. 11. (a) Total order Sobol's indices indicating the sensitivities of properties on the reliability estimation for example 4; (b) PDFs of sample statistics of UCS obtained from the jackknife approach for example 4; (c) PDFs of sample statistics of Ei obtained from the jackknife approach for example 4; and (d) PDFs of probabilities of failure based on the DZD performance function obtained from the jackknife-based approach for all the cases for example 4.
Step-7: Reliability analysis considering statistical uncertainties in sensitive properties
Resampling reliability analysis was performed similarly to the previous example. Failed samples/simulations (Mf) were those for which the estimated DZD, ur and uh exceeded their allowable values of 3 m, 20 mm and 20 mm, respectively. These allowable values were directly adapted from the literature (Langford and Diederichs, 2015). Results are shown in Table 9 and Fig. 11d. The results of the J-UDPT and J-UDP approaches were in good agreement with the B-UDPT approach. This was evident from the minimal percentage differences in the statistics of Pf, i.e. 0.06%-16.92%. Further, the means of Pf corresponding to DZD and the wall displacements estimated using the J-UDPT and J-UDP approaches coincided with their point estimates evaluated using the traditional reliability approach. Traditional reliability analysis was performed using 10,000 MCSs. The accuracy of the J-UDT approach was poor compared with the J-UDPT and J-UDP approaches due to the significant underestimation of the COVs of Pf. Overall, the results were consistent with those of the previous examples.
In conclusion, the results of this type of problem were consistent with those of the previous examples. The J-UDP approach is the best choice to model the statistics of Pf for tunnels undergoing brittle failure. For this particular case, the failure probabilities corresponding to the different PFs are reported in Table 9.
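The resampling reliability logic of this step can be sketched as follows. The response model in the paper is an implicit numerical one; the `predict_response` surrogate, the lognormal property models and all numerical values below are therefore purely illustrative assumptions and do not represent the authors' model.

```python
# Sketch (assumed surrogate): P_f for one resampled set of distribution parameters.
import numpy as np

rng = np.random.default_rng(seed=1)
ALLOWABLE = {"dzd": 3.0, "ur": 0.020, "uh": 0.020}  # allowable DZD and wall displacements (m)

def lognormal_params(mean, sd):
    """Convert a mean/SD pair to the parameters of the underlying normal distribution."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

def predict_response(ucs, ei):
    """Hypothetical surrogate returning (DZD, ur, uh); stands in for the numerical model."""
    return 60.0 / ucs, 0.5 / ei, 0.45 / ei

def pf_for_statistics(ucs_mean, ucs_sd, ei_mean, ei_sd, n_mcs=10_000):
    """Monte Carlo estimate of P_f for one resampled set of distribution parameters."""
    ucs = rng.lognormal(*lognormal_params(ucs_mean, ucs_sd), size=n_mcs)
    ei = rng.lognormal(*lognormal_params(ei_mean, ei_sd), size=n_mcs)
    dzd, ur, uh = predict_response(ucs, ei)
    failed = (dzd > ALLOWABLE["dzd"]) | (ur > ALLOWABLE["ur"]) | (uh > ALLOWABLE["uh"])
    return failed.mean()

# Evaluating pf_for_statistics for every jackknife set of (mean, SD) values yields
# the distribution of P_f (mean, COV and bounds) instead of a single point estimate.
print(pf_for_statistics(ucs_mean=25.0, ucs_sd=6.0, ei_mean=30.0, ei_sd=8.0))
```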

Table 9. Estimated statistics of probability of failure (Pf) for example 4.
Rock masses are inherently variable due to the heterogeneous distribution of different grains and discontinuities. Although rock masses could, in principle, be modeled deterministically by determining the rock properties at every location and assigning them in numerical models, this is practically not feasible. In practice, rock properties are estimated at a limited number of locations for modeling rock masses, which introduces different knowledge-based uncertainties (Duzgun et al., 2002; Bedi, 2014). Rock slope and tunnel stability problems are generally associated with high uncertainties, which become more complex in the presence of limited data. To characterize these uncertainties and estimate the 'true' value of any property, it is generally suggested to perform different types of tests in sufficient quantity at different locations across the site. For example, multiple in situ tests, such as the plate load test, dilatometer test and radial jack test, are recommended at different site locations to obtain a better estimate of rock mass deformability (Bieniawski, 1978; Ramamurthy, 2014). This is practically impossible as, most of the time, limited tests are conducted inside drifts. These may give 'true' estimates of the statistics of the considered property for the small drift area, but the question remains as to whether the statistics estimated from these limited tests can be extrapolated to the full-scale rock mass. Even for high-budget projects like the Chenab Railway Bridge and hydropower tunnels (Kazunogawa and Kannagawa in Japan), the number of tests conducted inside the drift to estimate the deformation modulus is less than 20-30 (Cai et al., 2004; Tiwari and Latha, 2020). This could be due to the high costs and practical difficulties, such as site preparation, sample collection, sample disturbance and data interpretation, associated with laboratory and in situ testing of rocks (Duzgun et al., 2002; Wyllie and Mah, 2004; Ramamurthy, 2014). These difficulties intensify further for small projects where budget constraints are severe. Traditional reliability approaches ignore this fact and assume that the statistics of rock properties derived from limited data can represent the population statistics well. This may finally lead to inaccuracies in the stability estimates of rock structures.
The proposed resampling reliability approach overcomes this issue by estimating the distributions of SIs. This allows the results to be represented as confidence intervals rather than the single, fixed-point estimate provided by the traditional reliability approach. Such a fixed-point estimate is merely one possible estimate of the true value of the SI (Liu et al., 2020). Given that the true value of the SI is unknown in practical applications, the deviation of its point estimate from the true value cannot be quantified by traditional reliability analysis. In contrast, interval estimates of SIs quantify the upper and lower bounds within which the point estimate may vary, and they include the true value at a specified confidence level (Liu et al., 2020). Hence, interval estimates of SIs provide a more comprehensive description of the reliability of rock slopes and tunnels. The practical utility of the proposed approach is significantly better than that of the bootstrap-based approach available in the literature, owing to its high computational efficiency and its generalization through the use of additional reliability tools, which enhance its applicability to a wide range of problems.
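A short sketch of how such an interval estimate might be reported is given below. The P_f values are placeholders standing in for those obtained from the resampled parameter sets, and the 95% percentile bounds are an assumed choice of confidence level rather than the paper's prescription.

```python
# Sketch (assumed values): interval estimate of P_f from resampled results.
import numpy as np

rng = np.random.default_rng(seed=2)
pf_samples = rng.beta(2, 50, size=23)   # placeholder P_f values, one per jackknife parameter set

mean_pf = pf_samples.mean()
cov_pf = pf_samples.std(ddof=1) / mean_pf
lower, upper = np.percentile(pf_samples, [2.5, 97.5])  # 95% interval estimate of P_f
print(f"P_f = {mean_pf:.4f} (COV = {cov_pf:.2f}), 95% CI = [{lower:.4f}, {upper:.4f}]")
```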
In the present study, acceptable accuracies have been achieved for all examples, with minimal differences in the statistics of SIs compared with the bootstrap-based approach. The accuracy among the considered examples can be ranked in the order: (1) structurally-controlled slope failure (7.3%), (2) stress-controlled slope failure (11.69%), (3) tunnel in sparsely jointed brittle rocks (16.93%), and (4) circular tunnel in closely jointed rocks (17.81%). The percentage difference here denotes the maximum difference among the statistics of SIs corresponding to all considered PFs.
Although the obtained results show the generalized applicability of the proposed approach, its validity can be evaluated in future studies for slopes and tunnels excavated in moderately jointed blocky rock masses. These problems are generally modeled using discontinuum approaches, for which resampling will be required for both joint and intact rock properties. This study did not consider the epistemic uncertainty arising from numerical errors involved in the deterministic solvers (FEM/FDM methods). The present approach also assumes rock properties to be homogeneously distributed throughout the rock mass, neglecting their spatial variation. Future studies should focus on modeling the spatial variation of rock properties with limited data for more accurate estimates of the reliability and failure mechanisms (along the weakest zones).
A jackknife-based generalized resampling reliability approach is introduced to model the statistical uncertainties in rock properties and their effect on the reliability estimates of rock slopes and tunnels. Four cases were included to investigate the performance of the approach for a variety of in situ problems. This approach was observed to be superior to the traditional reliability approach, since it evaluates the lower and upper bounds of SIs along with their point estimates, thereby providing a more reasonable and realistic reliability description for different problems. The approach was highly accurate, with minimal percentage differences in the means (0.75%-16.92%) and COVs (0.06%-17.81%) of SIs compared with the bootstrap-based approach. Its accuracy was further reinforced by the minimal percentage differences (0.05%-11.67%) between the means of SIs estimated with the proposed approach and their point estimates evaluated using traditional methods. The computational efficiency of the approach was significant, as the number of required numerical realizations was 99% less than that of the bootstrap-based approach. The effect of statistical uncertainties in the distribution types of input properties on SIs was observed to be minimal and can hence be neglected in the analysis for all practical purposes. Further, these results were observed to be independent of the nature (explicit/implicit) and number (single/multiple) of performance functions and of the correlatedness (correlated/non-correlated) of the random properties.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Appendix. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.jrmge.2021.11.003.