
Spatial distribution modeling of subsurface bedrock using a developed automated intelligence deep learning procedure: A case study in Sweden


Abbas Abbaszadeh Shahri, Chunling Shan, Emma Zäll, Stefan Larsson

a Division of Rock Engineering, Tyréns AB, Stockholm, Sweden

b Johan Lundberg AB, Uppsala, Sweden

c Division of Soil and Rock Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden

Keywords: Automated intelligence system; Predictive depth to bedrock (DTB) model; Three-dimensional (3D) spatial distribution

ABSTRACT Due to associated uncertainties, modelling the spatial distribution of depth to bedrock (DTB) is an important and challenging concern in many geo-engineering applications. The association between DTB and the safety and economy of designed structures implies that generating more precise predictive models is of vital interest. In the present study, the challenge of building an optimally predictive three-dimensional (3D) spatial DTB model for an area in Stockholm, Sweden was addressed using an automated intelligent computing design procedure. The process was developed and programmed in both C++ and Python to track their performance on specified tasks and to cover a wide variety of internal characteristics and libraries. In comparison to the ordinary Kriging (OK) geostatistical tool, the superiority of the developed automated intelligence system was demonstrated through the analysis of confusion matrices and the ranked accuracies of different statistical errors. The results showed that, in the absence of measured data, the intelligence models, as a flexible and efficient alternative approach, can account for associated uncertainties, thus creating more accurate spatial 3D models and providing an appropriate prediction at any point in the subsurface of the study area.

1. Introduction

In many countries, including Sweden, subsurface modelling is increasingly becoming a necessary part of three-dimensional (3D) urban planning. To create an informative and useful subsurface model, different data types need to be combined. Because of the dynamic nature of the subsurface and the variation of implemented data density during the planning process, subsurface modelling techniques are not easily interoperable. Moreover, planners and construction experts are primarily looking for knowledge on the location of geological discontinuities, such as the surface of crystalline bedrock and the boundaries of soft sediments, as well as their geo-engineering properties. Therefore, depth to bedrock (DTB), measured as the thickness of the sediments above the bedrock, is of great interest in subsurface geo-engineering modelling and risk assessment (Sundell et al., 2015; Gomes et al., 2016; Wei et al., 2016; Ghaderi et al., 2019; Yan et al., 2020). Accordingly, information on the spatial distribution of DTB is an important issue in both the design and construction phases. This is a concern in European countries due to the vast variety of geological conditions and the need to solve the various challenges associated with the urbanisation of densely populated cities while meeting environmental regulations (Athanasopoulou et al., 2019). Therefore, visualised DTB models that include the interpretation of sparse geotechnical measurements are important tools for identifying solutions. Despite the essential knowledge presented on field development (e.g. Glasgow, Stockholm, Helsinki, and Oslo), the digitisation of DTB is not an issue for cities on thick sediment sequences, such as Rotterdam and Vienna (Schokker et al., 2017; Abbaszadeh Shahri et al., 2020). However, because of the associated uncertainties (Baecher, 1986), producing highly accurate DTB predictive models is a critical task that can have significant effects on the costs and risks of geo-engineering projects (Clarke et al., 2009; Mey et al., 2015).

The characterisation of DTB profiles is commonly interpreted through a sufficient number of sparse geotechnical soundings in and around a desired area. However, geotechnical investigations may suffer from certain limitations, for example, limited access to an entire area, costs of investigations and distance between the soundings. Consequently, as the distance between two soundings increases, the uncertainty increases abruptly, since many points must be estimated or remain entirely unknown. To generate a continuous predictive DTB model, geophysical techniques (Abbott and Louie, 2000; Dowd and Pardo-Iguzquiza, 2005; Christensen et al., 2015; Nath et al., 2018), random fields (Uzielli et al., 2005; Li et al., 2015), geostatistical tools (Samui and Sitharam, 2011; Viswanathan et al., 2014; Kitterød, 2017), and variogram-based methods (Maus, 1999; MacCormack et al., 2018), as well as geomorphological-based models (Del Soldato et al., 2018), have been actively employed. In these methods, results from geotechnical soundings are interpolated to estimate the DTB between the soundings, capitalising on the spatial structure and semivariance of the measured data (Goovaerts, 1997). Moreover, planar mesh generation, spatial interpolation and surface intersection are other generic, widely used techniques in geological modelling (Mei, 2014). In addition to the abovementioned techniques, the falling weight deflectometer (Roesset et al., 1995), extracted attributes from satellite images (Kuriakose et al., 2009; Sun and Kim, 2017; Yan et al., 2020), topographic data (Gomes et al., 2016), and signal analyses (Lane et al., 2008; Setiawan et al., 2018; Du et al., 2019) have been highlighted as effective models.

In site investigation, geophysical techniques can provide supplementary information on sparse observations (e.g. borings, test pits and outcrops). However, ground-based geophysical techniques are limited to small-scale surveys (Erkan, 2008). Moreover, geophysical data need to be correlated with information from direct geotechnical methods, as the data are generally interpreted qualitatively, and useful results can only be obtained by experts familiar with the particular testing method (Pazzi et al., 2019; Christensen et al., 2015). Compared to geotechnical soundings, these testing methods can be complex and time-consuming because of the need for specialised equipment and experienced operators as well as logistical issues (Clayton and Smith, 2013).

Geostatistical tools are among the most used interpolation technologies for DTB maps (Abbaszadeh Shahri et al., 2020). By enhancing the spatial distribution of data, these techniques offer convenient options for management and provide continuity that can reproduce the trend of DTB. This feature allows the user to be precise in interpretation; however, the success of a produced DTB map depends on the quality of available information on the study area and the independent variables (Deutsch, 1996). Systematic sampling uses a fixed grid to assign values in a regular pattern. Cell change cannot be accounted for in this method, and interpolation thus estimates the centre of each unmeasured grid cell (Baskan et al., 2009). This limitation implies that the location of points may be problematic when using a random sampling distribution, and the coverage of adjacent areas may not be supported. Therefore, spatial reconstruction from a given finite number of observations at different locations implies that measurements have been taken under measurement noise (Stein, 1999).

Random field theory is a mathematical framework that uses the Euler characteristic of smooth statistical maps to address threshold problems in functional imaging (Brett et al., 2004). If datasets are limited in size, this method is not an appropriate alternative due to its complex training stage and computational demands (Fenton, 1999).

In recent years, artificial intelligence (AI) techniques have shown remarkable computational and learning capabilities in addressing geotechnical problems. As DTB modelling deals with various uncertainties (Gomes et al., 2017; Hood et al., 2019; Abbaszadeh Shahri et al., 2020), the subcategories of AI techniques are appropriate alternatives to overcome the limitations and simplifications of the illustrated methods (e.g. Chang and Chao, 2009; Hengl et al., 2017; Abbaszadeh Shahri et al., 2020). Furthermore, hybridising AI techniques with metaheuristic algorithms can significantly optimise model performance (Asheghi et al., 2019; Abbaszadeh Shahri et al., 2021). Different methods applied in DTB modelling are summarised in Table 1.

As illustrated, depending on the interpolation algorithm applied, different results can be observed in the produced geological DTB models. Therefore, it is not always clear which method can provide the most appropriate outcome. Accordingly, the resolution of complex 3D geological models can be increased and supplemented by ensuring an accurate geospatial distribution of DTB. This study was motivated by the need to address such a challenge in a geo-engineering project in Stockholm, Sweden, where producing an adequately accurate quantitative model is of great interest. To find the optimum predictive DTB models, an automated AI training scheme was designed, developed, and then programmed using Python and C++. This allowed many different internal characteristics and optimisers to be tested in both languages. The proposed procedure was applied to 1968 datasets from soil-rock soundings in an urbanised area in Stockholm. Due to the use of automated programmes, the identified optimum models showed superior performance and a more accurate spatial DTB compared to the conventional ordinary Kriging (OK) technique. The results indicate that using the developed models can reduce the number of boreholes and the corresponding costs.

2. Study area and data source

The study area encompasses a 20 km stretch of an ongoing highway project in Stockholm, Sweden. This area consists mainly of fine- to coarse-grained gneiss of sedimentary origin and medium- to coarse-grained metavolcanic rocks, as well as occasional coarse-grained pegmatite passages. Sedimentary gneisses generally dominate in the area. According to the bedrock map provided by the Geological Survey of Sweden (SGU), the faults in the area include one with a SE-NW direction that is the result of a structural deformation zone. The widths decrease from 75-100 m for larger faults to 50 m for smaller faults. The plan for this highway in the NW-SE direction crosses the existing bedrock, where the road will be built as concrete tunnels in some sections. Among the executed geotechnical tests and acquired data, 1968 soil-rock soundings were compiled in the area (Fig. 1a). These soundings encompass a varied and complex set of data derived from subsurface explorations and in situ instrumentation. However, a lack of data needed to provide a consistent database is an ongoing challenge, not only in this study but also in most geo-engineering applications. This limitation in the ability to improve the datasets was overcome by using a random data creator (RDC) (Abbaszadeh Shahri et al., 2020), an intelligent knowledge-based framework used to generate appropriate pseudo observations that can be used to compare, interpret, and describe the results. Accordingly, 62 new pseudo datasets were generated for the area (Fig. 1a, points with black +) to extend the region of influence for each soil-rock sounding and decrease the degree of variability in the extrapolation direction. These retrieved sparse data, which are distributed alongside the planned road, can be used to supplement the DTB information collected from geotechnical soil-rock soundings, as this is the most common probing method used in Sweden and can be performed in both soil and rock. This method can provide good and accurate soil and rock interfaces, but there is uncertainty in the interpretation of bedrock levels when the top of the bedrock is cracked and brittle. The thickness of the overlying postglacial sediments varies from 0 to more than 100 m. An overview of the constructed digital elevation model (DEM) and the geological setting of the area is shown in Fig. 1b and c, respectively.

Table 1 A summary of applied techniques to predict the DTB.

Fig. 2. Simple configuration of multilayered ANN structure.

3. Artificial neural network processing paradigm

ANNs, as connectionist computing systems of processing elements, are configured for specific applications through a learning process that aims to mimic and replicate the operation of the human brain. Recent developments in system analyses and the significant proven advantages over traditional modelling approaches have led to the extensive use of ANN techniques. Properly tuned ANNs improve diagnostic performance, accommodate modifications and are thus easily adapted to incorporate new data. The goal is to fit the outputs with a linear function of nonlinearly transformed inputs, where any gradient optimisation method may be used.

Fig. 1. Colour plot of the DTB measured through soil-rock soundings with the generated RDC data (black +) (a), the overview of the overlaid DEM of the study area and satellite map taken from Google Earth (b), and the geological map of the study area from SGU (c).

As presented in Fig. 2, the received signal from the ith input (x_i) is associated with weights connected to the jth neuron (w_ij) and is passed through one or more hidden layers to be processed. The output of the jth neuron of the kth hidden layer in the tth iteration (o_j^k(t)), using an activation function (f), is confined into a pre-defined range and then transferred as
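The equation itself is not legible in this version of the text; in a standard back-propagation-with-momentum formulation consistent with the symbols defined below (a reconstruction, not the authors' exact expression), the forward pass and the weight and bias updates can be written as

o_j^k(t) = f\left( \sum_i w_{ij}^k(t)\, o_i^{k-1}(t) + b_j^k(t) \right)

\Delta w_{ij}^k(t) = \alpha_w\, \Delta w_{ij}^k(t-1) + \eta_w\, \rho_j^k(t)\, o_i^{k-1}(t), \qquad \Delta b_j^k(t) = \alpha_b\, \Delta b_j^k(t-1) + \eta_b\, \rho_j^k(t)

with the output-layer error signal driven by the difference between the actual output y and the network prediction.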

where y is the actual output; α_w and α_b are the momentum constants that determine the influence of the past parameter changes on the current direction of movement in the parameter space, and α_w usually varies within the [0.1, 1] interval and is used to avoid instability in the updating procedure; η_w and η_b represent the learning rates; and ρ_i^k(t) is the error signal of the ith neuron in the kth layer, which is back-propagated in the network.

The outcome of the lth neuron in the mth output layer (l) is then calculated using the updated weight by
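A plausible form of this expression, consistent with the surrounding notation (reconstructed, since the original equation is not reproduced here), is

y_l(t) = f\left( \sum_j w_{jl}^m(t+1)\, o_j^{m-1}(t) + b_l^m(t+1) \right), \quad l = 1, \ldots, n_o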

where n_o is the number of neurons in the output layer.

3.1. Developing optimum DTB predictive models

Advanced ANN techniques can be considered robust tools for DTB modelling. However, owing to the dependency on the defined problem and the lack of a standardised method for configuration, the identification of an optimum model is a difficult and critical task (e.g. Vogl et al., 1988; Curry and Morgan, 2006; Krasnopolsky et al., 2018; Ghaderi et al., 2019; Abbaszadeh Shahri et al., 2021). During the training procedure, the model should not be trapped in local minima nor overfit. To overcome these problems, the regularisation and tuning of internal characteristics (e.g. training algorithm, number and arrangement of neurons, learning rate, activation function and architecture) play a significant role (Abbaszadeh Shahri, 2016). Using different combinations of these parameters makes learning faster and prevents convergence to local minima (Abbaszadeh Shahri et al., 2021). Overfitting occurs when a model fits the data in the training set while incurring a larger generalisation error (Tetko et al., 1995). Regularisation refers to the process of modifying a learning algorithm to prevent overfitting by fixing the number of parameters in the model (Girosi et al., 1995). Early stopping is used as a form of regularisation to control the number of iterations that can be run before the training algorithm begins to overfit. Therefore, in each iteration, early stopping improves the performance of the learning algorithm on data outside of the training set (Zhang and Yu, 2005; Yuan et al., 2007).
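As a minimal illustration of the early-stopping mechanism described above, the following Python sketch monitors a validation RMSE and restores the best weights once it stops improving; the model methods (train_one_iteration, rmse, get_state, set_state) and the patience value are hypothetical placeholders, not the authors' implementation.

import math

def train_with_early_stopping(model, train_data, val_data, max_iter=1000, patience=20):
    # Track the lowest validation RMSE seen so far and the weights that produced it.
    best_rmse, best_state, wait = math.inf, None, 0
    for _ in range(max_iter):
        model.train_one_iteration(train_data)      # one pass of the chosen training algorithm
        rmse = model.rmse(val_data)                # error on data held out of training
        if rmse < best_rmse:                       # validation error still improving
            best_rmse, best_state, wait = rmse, model.get_state(), 0
        else:
            wait += 1
            if wait >= patience:                   # stop before the network starts to overfit
                break
    model.set_state(best_state)                    # restore the best-performing weights
    return best_rmse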

Fig. 3. The block diagram used to assess the optimum model. NHL - Number of hidden layers; HN - Hidden neurons; n - Number of used activation functions; m - Number of used training algorithms.

To find the optimum models, the method presented by Abbaszadeh Shahri et al. (2020) was updated using an automated identification process (Fig. 3). Using an iterative procedure, this process was then integrated with a constructive technique and programmed in both Python and C++. This was done not only to capture the capabilities of both programming languages but also to increase the power of the models and avoid the problems of overfitting and getting stuck in local minima. A programming language is a specification, and the success of an application will therefore depend on making an appropriate choice. Python and C++ were selected due to their popularity, history, and access to libraries. Moreover, these programming languages are Turing-complete (by design) from a theoretical standpoint, with quite similar semantics, even if their syntax is very different. Therefore, based on each programmed code, a wide variety of internal characteristics for both shallow and deep neural networks were examined (Table 2). A deep neural network refers to a model with multiple hidden layers that can be reused to compute the features of a combined structure with fewer weights (LeCun et al., 2015). This implies that, after learning, deep structures can improve the generalisation to new examples (Kriegeskorte and Golan, 2019). However, the need for adequate computing power and data for learning are the main potential limitations of deep structures (Bengio, 2009).

Fig. 3 shows the block diagram of the proposed method that can automatically capture the optimum models using the characteristics in Table 2. To constrain the search space and save time, the learning rate was set to 0.7 with a step size domain within [0.001, 1]. Using three embedded switch cases in the programmes, the codes were automated to monitor all characterised training algorithms (TAs) and activation functions (AFs). The maximum numbers of hidden layers and neurons, as user-defined parameters, were set to 2 and 40, respectively. Therefore, the training strategy is followed by loops and switch cases, where the system starts with one hidden layer and checks the topology 3-40-1 for all internal characteristics (Table 2). When the system switches to two hidden layers, the procedure automatically starts with the 3-1-39-1 structure using the first TA and AF and proceeds to topology 3-39-1-1. Afterwards, it returns with the same TA, and the AF switches to the next case. This iterative process is repeated for all TAs, which are switched step by step through the variety of internal characteristics. Applying different step sizes for the learning rate in the TAs, i.e. replacing the conjugate gradient descent (CGD) with adaptive momentum (AM) in step sizes of 0.001, allows the model to avoid the overfitting problem and maximises the chances of avoiding local minima (Abbaszadeh Shahri et al., 2020). A two-step termination criterion, including the root mean square error (RMSE) and the number of iterations (set to 1000), was considered. Accordingly, the minimum RMSE and the maximum network coefficient of determination (R²) were stored and ranked for all trained structures over three runs.
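The loop-and-switch strategy described above can be sketched in Python roughly as follows; the helper train_and_score, the illustrative lists of training algorithms and activation functions, and the simplified topology enumeration are assumptions rather than the authors' code (which additionally varies step sizes and repeats each structure over three runs).

# Candidate topologies: one hidden layer (3-40-1), then every
# two-hidden-layer split of the 40 neurons (3-1-39-1 ... 3-39-1-1).
def enumerate_topologies(total_neurons=40, n_inputs=3, n_outputs=1):
    yield (n_inputs, total_neurons, n_outputs)
    for h1 in range(1, total_neurons):
        yield (n_inputs, h1, total_neurons - h1, n_outputs)

TRAINING_ALGORITHMS = ["CGD", "AM"]          # switch case over training algorithms (illustrative subset)
ACTIVATION_FUNCTIONS = ["Hyt", "sigmoid"]    # switch case over activation functions (illustrative subset)

def search_optimum(train_and_score):
    """train_and_score(topology, ta, af, ...) -> (rmse, r2) is a user-supplied trainer."""
    results = []
    for topology in enumerate_topologies():
        for ta in TRAINING_ALGORITHMS:
            for af in ACTIVATION_FUNCTIONS:
                rmse, r2 = train_and_score(topology, ta, af,
                                           learning_rate=0.7, max_iter=1000)
                results.append((rmse, r2, topology, ta, af))
    results.sort(key=lambda r: r[0])         # rank all trained structures; lowest RMSE first
    return results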

Table 2 Applied internal characteristics to capture the optimum models.

Due to differences in terms of syntax, simplicity, use and overall approach to programming, there is considerable debate over the performance of Python and C++ in specified tasks. As shown in Table 3 and the programmed procedure in Fig. 3, a distribution of 40 neurons in two hidden layers provides 78 different topologies. Therefore, considering the combination of employed AFs between layers, in each round of training the automated system monitors approximately 2000 topologies with different internal characteristics. This implies that the optimum models are screened from among numerous examined structures, even those with similar topologies but different internal characteristics. Accordingly, the variation of network RMSE using 40 neurons in different topologies, starting from 3-1-39-1 to 3-39-1-1, is reflected in Fig. 4. Summarised results (Table 3) show that the 3-28-12-1 and 3-25-15-1 topologies can be selected as optimal topologies. The differences in performance between these languages were expected, as distinctions arise in syntax, simplicity, use, and the overall approach to programming. Technically, this can be interpreted in terms of the threading model of each employed language and the requirements of the procedure to be compiled into machine code.

3.2. Outcomes of generated DTB models

The outcome and progression of predictive modelling are determined by the effectiveness of systemic feedback loops through structural changes that control whether individual models serve the required needs. Referring to Table 3, the predictability of the captured optimum models using tuned characteristics for both the C++ and Python codes was plotted and presented. Fig. 5 shows the comparison between the fitness (Fig. 5a and b) and corresponding differences (Fig. 5c and d) for measured and predicted DTBs using the training and testing datasets. According to the no free lunch theorem (Wolpert and Macready, 1997), biases are a fundamental property of the results generated in inductive learning systems, and the assumption of an intelligent model free of biases is not reasonable. In the search space, the achieved possible minimum cost function (RMSE) introduces the bias of predictions. Considering the designed procedure and the examined internal characteristics (Fig. 3), the observed biases can be attributed to the implemented TAs and thus to the trade-offs between accuracy, overfitting and overgeneralisation of each choice associated with the corresponding RMSE. Moreover, as the last layer only receives results generated in the previous layer, the detected biases state the differences in the mapping of fed data between the lower layer and its prediction. Traditionally, collected DTB field data are presented in two-dimensional (2D) digital versions of geological maps. However, assigning a vertical component in areas without soil-rock sounding data, in a way that provides a representative interpretation of the subsurface spatial DTB distribution, is a challenging procedure (Abbaszadeh Shahri et al., 2020). Despite all the benefits of 2D mapping, there is a trend favouring the use of integrated 3D models with the ability to combine terrain data and aerial photos for geo-engineering applications. Such models provide a visual perspective of the study area, which enables more accurate interpretation through geological sequences. However, depending on the quality of the datasets used and the approach applied, the level of accuracy and the confidence of the model can vary in terms of their ability to prevent conflict with interpolation algorithms. In this respect, if adequate numbers of soil-rock soundings are not accessible, the pseudo data for unsampled locations can be estimated through the knowledge of experts or other methods, such as nearest neighbour and grid cells (Tacher et al., 2006; Abbaszadeh Shahri et al., 2020; Yan et al., 2020). These methods are effective and easy to use, but the uncertainty in the pseudo observations generated at faraway points is increased because of the lack of nearby information, as is the case in all interpolation methods. Accordingly, the generated pseudo data play an important role in building knowledge of phenomena within a specific topic, and data synthesis is thus at the centre of the scientific enterprise in the software engineering discipline (Cruzes and Dyba, 2011). In this study, the drawbacks and concerns over insufficient data were addressed using the RDC approach (Abbaszadeh Shahri et al., 2020). This intelligent knowledge-based system is iteratively applied on test point coordinates to generate new shuffled synthesised DTBs, prescribing the number of soundings and the statistical noise for the region between soundings. Nevertheless, the generated 3D models are built on a limited number of neighbouring data points in relatively small-scale areas, and thus it is not always clear which procedures can provide the most appropriate DTB surfaces.
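The published RDC framework itself is not reproduced in this paper; purely as an illustration of the general idea of synthesising noisy pseudo-observations between existing soundings, a simple Python sketch might look as follows (the inverse-distance weighting, the k and noise_std parameters, and the function name are assumptions, not the RDC algorithm).

import numpy as np

def pseudo_observations(coords, dtb, new_coords, k=5, noise_std=1.0, seed=0):
    """Estimate DTB at new points from the k nearest soundings and add prescribed noise."""
    rng = np.random.default_rng(seed)
    coords, dtb, new_coords = (np.asarray(a, float) for a in (coords, dtb, new_coords))
    pseudo = []
    for p in new_coords:
        d = np.linalg.norm(coords - p, axis=1)           # distances to all measured soundings
        idx = np.argsort(d)[:k]                          # indices of the k nearest soundings
        w = 1.0 / (d[idx] + 1e-9)                        # inverse-distance weights
        est = np.average(dtb[idx], weights=w)            # local weighted DTB estimate
        pseudo.append(est + rng.normal(0.0, noise_std))  # add the prescribed statistical noise
    return np.array(pseudo)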

Table 3 Characteristic of optimum ANN-based structures.

Fig. 4. Variation of network RMSE for developed optimum models using C++ and Python as a function of network topologies.

Fig. 5. Comparison of the predictability and corresponding residuals of identified optimum models subjected to developed codes using the training datasets (a, c) and testing datasets (b, d).

Fig. 6. Step-by-step addition of the 3D distributed spatial DTB predicted by optimum C++ topology: (a) Surface of the area, (b) measured DTB, and (c) estimated DTB.

Fig. 6 shows the results of the creation of a visualised 3D model of the study area from the applied data, depicting the retrieved outlines of the subsurface spatial distribution of DTB using the designed training system. The presented 3D model is computed directly from the soil-rock soundings, because the embedded automation procedure allows it to be quickly regenerated. Accordingly, the rock outcrops in the area can be identified by integrating the generated ground surface (Fig. 6a) and the spatially measured DTB (Fig. 6b). Such incorporation can provide a 3D subsurface model with high resolution and adequate predictive accuracy in geo-engineering projects (Fig. 6). This implies that the automated procedure provides more flexibility in the modelling process to be developed for future data. Therefore, it can be relevant to exploiting more comprehensive concepts on subsurface geological or petrophysical distributions. Accordingly, such models are a preferred tool for geo-engineers and decision planners in the observation and analysis of geo-environmental engineering issues within a project.

4. Validation and discussion of DTB models

Fig. 7. Comparing the predictability of predicted values using (a) C++, (b) Python, and (c) OK, and the calculated residuals between measured and predicted DTBs achieved from (d) C++, (e) Python, and (f) OK.

Table 4 Established confusion matrices of applied models.

Modelling the spatial distribution of subsurface DTB plays an important role in proper site characterisation. Integrating such DTB models with geological and geomechanical information can provide a scalable 3D framework for geo-engineering applications. However, 3D spatial DTB modelling in complex terrain requires high-resolution data to provide an accurate characterisation of subsurface features and a realistic overall depiction that captures any proven or hypothesised subsurface connections. Moreover, due to the observed conflict in the results of interpolation algorithms in the manipulation and handling of all requirements, it is not always clear which modelling tools can best reflect DTB surfaces. Here, the validity of the developed models is discussed through comparison with traditional OK and performance analyses using confusion matrices. The models were then ranked according to different statistical metrics.

4.1. Ordinary Kriging

Kriging (Krige, 1951) is one of the most commonly used probabilistic interpolation methods for unknown values of spatial and temporal variables (Dauphiné, 2017), which gives a least-squares estimate of the data (Remy et al., 2011). In this algorithm, a distance-weighted incorporation of the spatial variability is used to estimate the values at unsampled locations (Miller et al., 2007). In OK, the most commonly applied Kriging method, the optimal weights for reducing the error variance are determined using the embedded semi-variogram to ensure an unbiased estimator and minimise the estimation variance (Wackernagel, 1995). Using OK, the DTB can be locally estimated based on the neighbourhood locations as
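In its standard experimental form, consistent with the symbols defined below (a reconstruction; the original equation image is not reproduced here), the semi-variogram can be written as

\gamma(h) = \frac{1}{2\,m(h)} \sum_{i=1}^{m(h)} \left[ \mathrm{DTB}(x_i) - \mathrm{DTB}(x_i + h) \right]^2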

where γ(h) is the semi-variogram, and m(h) reflects the number of observation pairs of DTB(x_i) and DTB(x_i + h) samples at distance h in locations x_i and x_i + h, respectively. Further, the spatial estimation of DTB for an unsampled location, DTB(x_0), is then calculated through the linear combination of the observed values, z_i = Z(x_i), and weights w_i(x_0) (i = 1, 2, …, N):
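The corresponding OK estimator implied by this description (again a standard reconstruction) is

\mathrm{DTB}(x_0) = \sum_{i=1}^{N} w_i(x_0)\, z_i, \qquad \text{with } \sum_{i=1}^{N} w_i(x_0) = 1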

where w_i denotes the weight values around the unsampled location. In OK, as a linear unbiased estimator, the sum of all the weights is equal to 1.
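For readers who wish to reproduce a comparable OK surface, a minimal Python sketch using the third-party pykrige package is shown below; the package choice, the input file name and the spherical variogram model are assumptions, since the paper does not state which OK implementation was used (only that 12 lags gave the best performance).

import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical input file with easting, northing and measured DTB per sounding.
x, y, dtb = np.loadtxt("soundings.csv", delimiter=",", unpack=True)

ok = OrdinaryKriging(x, y, dtb, variogram_model="spherical", nlags=12)
grid_x = np.linspace(x.min(), x.max(), 200)     # regular prediction grid over the study area
grid_y = np.linspace(y.min(), y.max(), 200)
dtb_pred, est_var = ok.execute("grid", grid_x, grid_y)   # kriged DTB and its estimation variance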

Therefore, OK for skewed data can better represent the estimated error variance than Kriging (Yamamoto, 2005). Referring to the reasons given above, after a series of analyses, the DTB predicted from the training datasets using 12 lags was selected, due to its better performance, and is reflected in Fig. 7. Generally, geostatistical and AI techniques can be used as forecasting strategies for subsurface or geological characteristics. However, because of the high heterogeneity of spatial distributions in the prediction process, the success of the geostatistical interpolation algorithm (Fig. 7c) was significantly lower than that of the AI models (Fig. 7a and b). The development of such a modelling process provides an extensive collection of visual data to describe 3D objects. This is an important aspect of the procedure designed in this study, where the 3D objects of each point of the study area can be described using geo-location vectors that serve as a search key in the database. Comparing different models to compensate for the weaknesses of the applied techniques can assist in finding a robust tool across the data sources (e.g. Chew, 1989; Dickerson et al., 1997; Held, 2001; Domiter and Zalik, 2008; Mei et al., 2013). The differences between the measured DTB and those predicted by the OK and ANN models are reflected in Fig. 7d-f.

Table 5 Compared CA, ME and improved progress of optimum models.

4.2. Progress control using confusion matrix

A confusion matrix or error matrix (Stehman, 1997) is an intuitive visualised table layout used to describe the performance of a model on a set of data. In this matrix, each row and column represent the predicted and actual classes, respectively. Therefore, each array [a_ij] shows the number of true labelled instances in a categorised class and thus provides an easy platform for finding mislabelled classes. It is also able to show the relations between the individual classified outputs and the true labelled inputs. In practice, a confusion matrix conceptualises the error probabilities of the developed models in assigning the individual predicted outputs to the classified inputs. Accordingly, the best performance is that obtained with zero values everywhere except on the diagonal arrays, and thus the better the performance, the better the effectiveness. Using the confusion matrix, the classification accuracy (CA) and misclassification error (ME) for the applied models can be quantified (Asheghi et al., 2019). As presented in Tables 4 and 5, the ANN models (C++ and Python) have more true predicted instances (227 and 222, respectively) than OK (174). The performance of the C++ code, with 75% correct estimation, showed 2.7% and 24% improvements in the precision of the predicted DTB model compared to the Python code and OK, respectively.
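A compact way to derive CA and ME from any confusion matrix is sketched below in Python; the 3x3 values in the example are arbitrary placeholders, not the matrices of Table 4.

import numpy as np

def ca_me(confusion):
    """Classification accuracy (CA) and misclassification error (ME) from a confusion matrix."""
    confusion = np.asarray(confusion, dtype=float)
    correct = np.trace(confusion)      # true-labelled instances lie on the diagonal
    ca = correct / confusion.sum()     # fraction of all instances assigned to the correct class
    return ca, 1.0 - ca                # ME is simply the complement of CA

# Example with arbitrary counts (illustrative only):
ca, me = ca_me([[50, 4, 2],
                [6, 45, 5],
                [3, 7, 40]])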

4.3. Ranking error metrics

Statistical error metrics are commonly used to evaluate the performance of models. Here, the models were assessed using the mean absolute percentage error (MAPE), mean absolute deviation (MAD), RMSE, R², calculated residuals (CR), and the index of agreement (IA). The MAPE is one of the most popular indices used to describe the accuracy and size of the forecasting error, while the MAD shows the variability of datasets using the average distance between each data point and the mean. The IA is used to investigate the compatibility of modelled and observed values (Willmott, 1984), whereas the residual represents the fitting deviation of the predicted value from the measured one. Therefore, higher values of IA and R² as well as smaller MAPE, CR, MAD and RMSE can be interpreted as a higher predictability level. As shown in Table 6, C++ achieved the best total rank among the three methods. The reason for the observed differences in the performance of the programming languages is related to the optimisation methods and initialised conditions in the training procedure.
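The listed metrics can be computed as in the following Python sketch; the MAD follows the paper's wording (average distance of the data from their mean) and the IA uses Willmott's commonly cited definition, both of which are assumptions since the exact formulas are not reproduced in the text.

import numpy as np

def error_metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred                                               # calculated residuals (CR)
    mape = 100.0 * np.mean(np.abs(resid / obs))                      # mean absolute percentage error
    mad = np.mean(np.abs(obs - obs.mean()))                          # mean absolute deviation of the data
    rmse = np.sqrt(np.mean(resid ** 2))                              # root mean square error
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)  # coefficient of determination
    ia = 1.0 - np.sum(resid ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # Willmott's index of agreement
    return {"MAPE": mape, "MAD": mad, "RMSE": rmse, "R2": r2, "IA": ia}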

5. Concluding remarks

Due to the variation of subsurface bedrock topography, the production of a more accurate, generalised predictive model for unmeasured DTB areas is of great importance in geo-engineering projects. Such models can be developed and visualised using a learning scheme and a finite number of datasets through the trained intelligence system platform. Furthermore, a robust 3D regional framework can reflect the potential subsurface risks associated with the spatial distribution of DTB in geo-engineering applications with a considerably more powerful geological understanding than traditional 2D maps and cross-sections. Moreover, combining the code tools and scientific approaches can assist in the creation of more comprehensive and useful 3D predictive models.

In this study, concerns associated with the generation of a 3D visualised subsurface predictive DTB model were addressed using an automated intelligence training system by means of C++ and Python computer programming environments. To enable more efficient learning, network models composed of different internal characteristics were examined to capture the optimum models.

The lack of data in part of the study area was compensated for using the RDC and 62 new pseudo datasets to extend the region of influence for each soil-rock sounding and decrease the degree of variability in the extrapolation direction. Topologies 3-28-12-1 and 3-25-15-1 were characterised as the optimum predictive DTB topologies for the 2028 data points. Referring to the CA (Table 5), the model developed using C++ showed 2.7% and 24% improvements in comparison to the Python code and the OK technique, respectively. Subsequently, the models ranked using supplementary error indicators reflected the near-complete superiority of the code developed in C++. Accordingly, the 3-28-12-1 topology trained by the CGD algorithm with the hyperbolic tangent (Hyt) activation function was selected as the most appropriate structure. The inability of OK to interpolate and handle outlier data was verified by the over/underestimated DTB values observed when using the randomised datasets. It was concluded that OK cannot be presumed to be a representative model for the entirety of the study area, while the developed intelligence models provide cost-effective and sufficiently accurate tools for subsurface DTB geospatial prediction purposes.

In practice, the dedicated 3D predictive DTB model can present the geospatial distribution and the boundary between the overlying sediments and the hard rock. This can play a significant role in the design phase for the city of Stockholm, which has many ongoing projects involving underground openings and transport tunnels. From a geo-engineering point of view, DTB enables the modelling of tunnelling-induced vibration, landslide risk assessment and groundwater. This makes such a DTB model an indispensable tool for decision makers in urban development projects (e.g. building houses, roads, railways and bridges), where substantial land surface processes can be imposed.

Table 6 Results of statistical error criteria in evaluated model performance.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research was funded through the support of the Swedish Transport Administration through Better Interactions in Geotechnics (BIG), the Rock Engineering Research Foundation (BeFo) and Tyréns AB, for which the authors express their deepest gratitude. We also wish to thank our colleagues, who provided insight and expertise that greatly assisted us in our research.
