Kexin Bi, Shuyuan Zhang, Chen Zhang, Haoran Li, Xinye Huang, Haoyu Liu, Tong Qiu
Beijing Key Laboratory of Industrial Big Data System and Application, Department of Chemical Engineering, Tsinghua University, Beijing 100084, China
Keywords: Ethylene thermal cracking; PSE; Intelligent manufacturing; Molecularization and digitization; Modeling and optimization
ABSTRACT Applications of process systems engineering (PSE) in plants and enterprises are boosting industrial reform from automation to digitization and intelligence. For ethylene thermal cracking, knowledge expression, numerical modeling and intelligent optimization are key steps toward intelligent manufacturing. This paper provides an overview of progress and contributions to the PSE-aided production of thermal cracking; it introduces the frameworks, methods and algorithms that have been proposed over the past 10 years and discusses their advantages, limitations and applications in industrial practice. An entire set of molecular-level modeling approaches from feedstocks to products, including feedstock molecular reconstruction, reaction-network auto-generation and cracking unit simulation, is described. Multilevel control and optimization methods are exhibited at the operational, cycle, plant and enterprise levels. Relevant software packages are introduced. Finally, an outlook in terms of future directions is presented.
Ethylene is one of the most important products in the petrochemical industry, and ethylene production determines the strength of a country's industry. With global economic development, ethylene production and total capacity have increased continuously. Global ethylene production was 151 million t·a⁻¹ in 2017, and this figure is expected to increase as a result of an increasing global population and rising living standards [1]. Thermal cracking is the main ethylene production process, and cracking furnaces are the core units that decompose feedstock into small molecules using heat from fuel gas [2]. In recent years, the intelligent reformation of traditional industries has received global attention. Intelligent manufacturing is regarded as a key concept and step for the integration of production processes. Through detailed analysis and evaluation of the production process, enterprises would benefit from intelligent technologies in terms of enhanced performance, improved effectiveness and higher profit. American intelligent manufacturing (IM), German Industry 4.0 and other strategies for manufacturing upgrades have been implemented in various countries to gain a competitive advantage in the international market [3]. Such intelligent technology innovations have aroused interest from enterprises and research institutes, especially in the area of molecularization and digitization.
Popular research areas in intelligent manufacturing cover the entire ethylene plant production process, where PSE techniques are hotly discussed and extensively applied in multiple aspects. For furnace cracking, modeling, simulation and optimization tools can provide an improved understanding of the reaction process and of operation control in cracking units. For ethylene industrial chains, the development of scheduling-optimization software can benefit decision makers by providing higher profits. Because the goal is to achieve molecular refining and intelligent manufacturing, numerical tools and software packages for all production steps from feedstock to product have been developed for digital insight into, or visualization of, the entire production process. These tools and packages can be compiled and integrated on a unified industrial internet platform for convenient use by petrochemical enterprises.
Research on the above topics has progressed significantly in recent years. However, barriers remain when these achievements are applied in industrial practice. Feedstock diversification [4], model complexity [5], a lack of data [6] and uncertainty in the supply chain and market [7] are key difficulties that enterprises face. Process systems researchers and engineers must therefore make great efforts toward the intelligent manufacturing of thermal cracking production.
The scope of this paper is to provide an overview of recent advances in knowledge expression, numerical modeling and optimization applications for ethylene thermal cracking. Developments in the intelligent manufacturing, molecularization and digitization of thermal cracking production are given extensive attention. After the introduction, Section 2 presents the molecular characterization of feedstocks. Sections 3, 4 and 5 describe intelligent models within the scope of the cracking furnace, including automatic reaction-network generation; integrated modeling and simulation of the reaction, heat transfer and coking processes; and intelligent feature extraction and processing of the cracking process. Sections 6, 7 and 8 focus on optimization at multiple scales of the entire plant, including operational tuning of furnaces, cyclic scheduling and planning, and dynamic simulation and optimization of startup and shutdown. Section 9 summarizes industrial software development and application for ethylene plants. Section 10 discusses the conclusions, main challenges and future research perspectives.
Molecular-level diversification and regional differentiation [8] are important factors that determine behavior in thermal cracking. Therefore, molecular characterization needs to be carried out to evaluate feedstock properties, predict detailed compositions and establish the fundamentals for the accurate simulation and effective optimization of thermal cracking. Recent advances in molecular characterization can be summarized according to the molecular modeling method; their development and improvement are displayed in Fig. 1.
The most direct approach to profiling feedstock characteristics is to send samples to various analytical instruments. Initially, bulk properties that are easy to obtain, such as the specific gravity; boiling range; average molecular weight; and sulfur, nitrogen and metal contents [9–13], served as indexes for feedstock discrimination and classification in thermal cracking. Other properties, such as the true boiling point, can be converted from the simplest measured properties using statistical regression methods, such as partial least squares regression and support vector machines [14–16]. With the development of spectral analysis technology, the detailed molecular composition can be provided by novel instrumental analysis, such as gas chromatography (GC), GC × GC and gas chromatography–mass spectrometry (GC–MS) [17–19]. However, these techniques are expensive, time-consuming and subject to expert interpretation, which makes them inapplicable to intelligent manufacturing in practical industrial processes [20]. Databases generated by property and spectral data collection provide a valuable calibration basis for other molecular characterization methods [21–24]. As far as we know, high-throughput characterization using instruments is feasible only for light feedstock analysis in thermal cracking plants.
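As a sketch of how such statistical property correlations work, the snippet below fits an ordinary least-squares model (standing in for the PLS/SVM regressions cited above) that predicts a 50% true-boiling-point from two easily measured bulk properties. All sample values and the `predict_tbp50` helper are hypothetical.

```python
import numpy as np

# Hypothetical bulk-property table: [specific gravity, mean molecular weight]
# and a measured 50% true-boiling-point (degC) for a few naphtha samples.
X = np.array([
    [0.690, 96.0],
    [0.705, 101.0],
    [0.718, 106.0],
    [0.731, 112.0],
    [0.744, 118.0],
])
tbp50 = np.array([95.0, 103.0, 110.0, 119.0, 128.0])

# Ordinary least squares with an intercept column, standing in for the
# PLS / SVM regressions used in practice.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, tbp50, rcond=None)

def predict_tbp50(sg, mw):
    """Predict the 50% true boiling point from two bulk properties."""
    return coef[0] * sg + coef[1] * mw + coef[2]
```

Once calibrated on a sample database, such a correlation replaces a slow laboratory distillation with an instantaneous estimate.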
Instead of excessively detailed expressions of feedstock information using actual molecular compositions, equivalent molecular composition characterization methods have been proposed. These methods aim to generate a series of molecules, organized by category, to achieve equivalence between the actual and generated compositions in bulk properties and reaction characteristics. Among these methods, saturate, aromatic, resin, asphaltene (SARA) analysis and molecular-type homologous series (MTHS) analysis are used most extensively in equivalent molecular composition prediction.
The work of Jewell et al. [25] inspired the proposal of SARA analysis, which divides feedstock compositions according to their polarizability and polarity. Trauth et al. [26] proposed a simple flow diagram for the stochastic reconstruction of resid using Monte-Carlo (MC)-constructed molecules. Campbell et al. [27] supplemented the diagram with a quadrature method to select an optimal small set of molecules. Verstraete et al. [28] and de Oliveira et al. [29] added a step termed reconstruction by entropy maximization (EM), extending the diagram to two steps, and improved the global optimization method with a genetic algorithm (GA). Even though feedstock molecular reconstruction methods based on SARA analysis are becoming mature and reliable, further correlation of kinetics or of product yield and quality in thermal cracking using SARA data is rarely performed.
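The Monte-Carlo step of such SARA-based reconstructions can be sketched as follows: draw a stochastic molecular library whose class fractions reproduce the SARA analysis and whose per-class carbon numbers are sampled from assumed ranges. The class fractions, carbon-number ranges and the crude CH2-based molecular-weight rule are hypothetical placeholders, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SARA class fractions and carbon-number ranges for a resid-like feed.
classes = {
    "saturate":   {"frac": 0.45, "c_lo": 10, "c_hi": 30},
    "aromatic":   {"frac": 0.30, "c_lo": 12, "c_hi": 35},
    "resin":      {"frac": 0.15, "c_lo": 20, "c_hi": 45},
    "asphaltene": {"frac": 0.10, "c_lo": 30, "c_hi": 60},
}

def sample_library(n_molecules=2000):
    """Monte-Carlo step of a SARA-based reconstruction: draw a stochastic
    molecular library whose class fractions match the SARA analysis."""
    library = []
    names = list(classes)
    probs = [classes[k]["frac"] for k in names]
    for cls in rng.choice(names, size=n_molecules, p=probs):
        lo, hi = classes[cls]["c_lo"], classes[cls]["c_hi"]
        carbon = int(rng.integers(lo, hi + 1))
        # Crude CH2-based molecular-weight placeholder (14 g/mol per carbon).
        library.append((cls, carbon, 14.0 * carbon))
    return library

lib = sample_library()
mean_mw = sum(m[2] for m in lib) / len(lib)
```

In the full two-step diagram, an EM or GA adjustment would then reweight this library until its bulk properties match the measured ones.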

Fig. 1. Development and improvement of molecular characterization models.
In 1999, Peng [30] proposed another molecular-modeling method that uses a matrix to represent the composition of petroleum fractions. Researchers from the University of Manchester improved this method and termed it MTHS analysis. This scheme first generates a molar- or mass-fraction matrix composed of the homologous series and carbon numbers of the molecules. A global optimization algorithm is then usually used to adjust the composition values in the matrix to achieve molecular equivalence. Ahmad et al. [31] combined a group-contribution method for calculating the physical and thermodynamic properties of individual components with MTHS analysis to achieve a fully automated scheme. Pyl et al. [32] attempted three reconstruction approaches, including a method based on the Shannon entropy criterion, an artificial neural network and a multiple linear regression model, to generate the MTHS matrix, and evaluated these approaches using principal component analysis. Pyl et al. [33] imposed probability density functions on the homologous series of MTHS components and proved that the gamma distribution is an adequate approximation of the experimentally measured composition. Bi et al. [34] modified the probability density function by adding regional features, weight features and uncertainties, which were shown to perform well.
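The gamma-distributed MTHS matrix described above can be sketched in a few lines: rows are homologous series, columns are carbon numbers, and each row follows a normalized gamma profile scaled by the series fraction. The series fractions, carbon-number range and gamma parameters below are hypothetical.

```python
import math

# Hypothetical homologous-series fractions for a naphtha-like feed.
series_frac = {"n-paraffin": 0.35, "i-paraffin": 0.30, "naphthene": 0.20, "aromatic": 0.15}
carbons = range(4, 13)

def gamma_pdf(x, shape, scale):
    """Gamma probability density, used as the carbon-number profile."""
    return (x ** (shape - 1) * math.exp(-x / scale)) / (math.gamma(shape) * scale ** shape)

def mths_matrix(shape=6.0, scale=1.2):
    """Build an MTHS mass-fraction matrix: rows are homologous series,
    columns are carbon numbers; each row is a normalized gamma profile
    scaled by the series fraction."""
    matrix = {}
    for series, frac in series_frac.items():
        weights = [gamma_pdf(c, shape, scale) for c in carbons]
        total = sum(weights)
        matrix[series] = [frac * w / total for w in weights]
    return matrix

M = mths_matrix()
```

In a full reconstruction, a global optimizer would tune the shape and scale of each row until measured bulk properties are matched.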
Other advanced equivalent molecular models are being proposed constantly, such as EM hybrid approaches [35,36] and state–space representation methods [37]. Most of these methods are based on SARA and MTHS analysis or their modifications. Equivalent molecular composition characterization is a reasonable approach for analyzing some types of liquid feedstock, such as naphtha, light diesel and hydrocracking tail oil. The generated mass- or molar-composition matrix is convenient for intelligent linkage with subsequent conversion steps. However, equivalent molecular models are ineffective when processing heavy feedstocks, and the kinetics of molecular networks with high carbon numbers are unreliable [38].
Another alternative for molecular characterization is to generate virtual molecules by joining structural fragments. The most successful and widely used approach is structure-oriented lumping (SOL), proposed by Quann and Jaffe [39]. They represented feedstock molecules as vectors of structural increments (single-core and side-chain structures). The auto-vectorization of molecular reconstruction makes the subsequent generation of reaction networks and kinetic parameters more convenient.
Further studies have advanced the SOL method significantly. Jaffe et al. [40] extended the method to heavy petroleum residues by adding a representation of multicore species. Tian et al. applied the SOL method to establish molecular-reconstruction and reaction-network models for steam cracking and delayed coking [41,42], and obtained good agreement with experimental data for product distribution predictions. Pan et al. [43] and Chen et al. [44] proposed a three-step SOL–MC–EM approach and achieved a closer match with actual analytical characteristics. The association rules between SOL structural increments and group-contribution methods in the work of Chen et al. offer a new idea for molecularization and digitization when building a stochastic molecular library.
Different models of virtual molecular composition characterization based on SOL methods raise the level of intelligence of thermal cracking. Vectorized and matrix-based representations make the reaction rules easier to use in modeling and make it feasible to construct reaction networks of various sizes and complexities. SOL methods are of great significance for modeling heavy feedstocks in industrial practice, including in thermal cracking. However, for some types of light feedstock, such as naphtha, the modeling complexity is higher than for the abovementioned equivalent molecular composition characterization, which makes the performance unacceptable for industrial use.
The ethylene cracking reaction (ECR) network describes numerous strongly coupled reactions and associated kinetics, and reveals the chemical essence of the thermal degradation of hydrocarbons [45,46]. Coupled with the transfer process, the ECR network connects the detailed composition of the product with that of the feedstock. Therefore, ECR network generation is central to cracking process modeling and simulation [47]. Adequate simplification of reaction networks is a prerequisite for performance enhancement in subsequent steps. The detailed ECR knowledge and operations are illustrated in Fig. 2.
3.1.1. Chain-reaction mechanism
Foundational thermal reaction theory is required to construct a reliable ECR network for pyrolysis. In 1934, Rice et al. proposed a chain-reaction mechanism to explain the general features of paraffinic hydrocarbon decomposition by the theory of free radicals [48]. To date, the chain-reaction mechanism has been accepted extensively and applied as a foundation of ECR network modeling because of its quantitative precision and broad applicability. Although an ECR network may involve thousands of elementary reactions, these reactions can be classified broadly into three categories by an analogical methodology: initiation, propagation and termination reactions [49].

In a typical unimolecular initiation reaction, a covalent C–C bond is broken to yield two free radicals. It is noteworthy that the concentration of reactive free radicals increases only in initiation reactions.

The propagation reactions comprise three subcategories: hydrogen-abstraction, radical-addition and radical-isomerization reactions. In hydrogen-abstraction reactions, a radical abstracts a hydrogen atom from a reactant molecule to produce a new radical and a new molecule. β-Scission is the reverse of radical addition and often occurs after hydrogen abstraction: the radical that results from hydrogen abstraction decomposes by breaking a β bond to produce a molecule with a double bond [53]. These reactions account for most ethylene production in an ECR network. Besides β-scission, free radicals may also undergo isomerization to form more stable radicals and reach an equilibrium radical distribution, which has a significant influence on the product distribution.

Two radicals combine to form a new molecule in a termination reaction, and the number of radicals decreases.
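The three categories above can be told apart mechanically by the net change in free-radical count across a reaction, as this minimal sketch illustrates (the species representation and the example reactions are our own illustration):

```python
def radical_count(species):
    """Count radicals on one side of a reaction; species are (name, is_radical) pairs."""
    return sum(1 for _, is_rad in species if is_rad)

def classify(reactants, products):
    """Classify an elementary step by the net change in free-radical count,
    following the initiation / propagation / termination categories."""
    delta = radical_count(products) - radical_count(reactants)
    if delta > 0:
        return "initiation"       # radicals created, e.g. C2H6 -> 2 CH3*
    if delta < 0:
        return "termination"      # radicals consumed, e.g. CH3* + C2H5* -> C3H8
    return "propagation"          # radical count conserved, e.g. H-abstraction
```

Rules of this kind are what automatic network generators apply exhaustively over a species list to enumerate an ECR network.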
Chain-reaction mechanisms use radicals to describe the chemical reaction process in ECR networks and provide an essential way to understand complex reaction networks at the molecular level for subsequent modeling and simulation.
3.1.2. Molecular-reaction network

Fig. 2. ECR knowledge and operations when applied in simulation.
The molecular-reaction network was developed on the basis of the chain-reaction mechanism. Initially, Sundaram et al. [54,55] proposed rigorous molecular reaction models for feedstocks with low carbon numbers, including ethane, propane, butane and their mixtures. These models exceeded their rivals at the time because of their excellent agreement with experimental data, and they demonstrated the feasibility of building a thermal cracking model on rigorous molecular reaction schemes. Later, Damme et al. [56] developed molecular ECR models for naphtha in a pilot plant rather than in glass batch equipment under vacuum. The temperature, pressure and conversion profiles along the practical coil reactor were considered in order to establish the influence of heat flux on the product distribution. Kumar et al. [57] presented a molecular-reaction network for overall naphtha decomposition consisting of a first-order primary step and a set of secondary reactions. The model agreed well with experimental results even at varying temperatures, dilution ratios and space times, and exhibited wide applicability. To adapt the flexible Kumar model to a given naphtha, the stoichiometry of the first-order primary reaction must be estimated for precise modeling. Gao et al. [58] established a standard feedstock library by calculating historical samples with measured characteristics, and the stoichiometric coefficients of the first-order reaction in the Kumar model were then calculated with a fuzzy matching algorithm. This contribution expanded the application of the Kumar model in practical naphtha pyrolysis.
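A minimal sketch of the Kumar-type primary step is shown below: naphtha cracks by first-order kinetics, and the converted moles are distributed over products by fixed stoichiometric coefficients. The coefficient values and the `primary_yields` helper are hypothetical illustrations, not Kumar's fitted numbers.

```python
import math

# Hypothetical stoichiometric coefficients (mol product per mol naphtha cracked)
# for the first-order primary step of a Kumar-type model.
primary_stoich = {"ethylene": 0.95, "propylene": 0.40, "methane": 0.60, "hydrogen": 0.50}

def primary_yields(k, t, feed_mol=1.0):
    """First-order primary decomposition: conversion = 1 - exp(-k t);
    products are distributed by the fixed stoichiometric coefficients."""
    converted = feed_mol * (1.0 - math.exp(-k * t))
    return converted, {sp: c * converted for sp, c in primary_stoich.items()}
```

In the full model, these primary products would then feed a set of secondary reactions; fitting the coefficients to a given naphtha is exactly the adaptation step discussed above.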
The molecular-reaction network combines the reaction mechanism with experimental data to simplify the model and improve its generality; it has the advantages of a simple principle, low-cost computation and ease of development. However, imprecise experimental data result in inaccurate regression of kinetic parameters and stoichiometric coefficients, which leads to inconsistencies between theory and practice. Moreover, practical feedstocks vary frequently in their physicochemical properties, and such variations are difficult to handle when the feed is assumed to be a homogeneous mixture.
3.1.3. Radical-reaction network
Owing to the pioneering radical-based modeling of Rice et al. [59], radical-reaction theory facilitated precise mathematical radical-reaction models, which avoid the excessive resource consumption required by their molecular counterparts for measuring primary reaction coefficients. Sundaram et al. [60] built a radical-reaction model with 20 species and 133 reactions for C1–C4 hydrocarbons and their mixtures, which predicted experimental cracking results over a wide range of temperatures. The presented reaction scheme allowed accurate simulation of industrial gas cracking. Aribike et al. [61] extended the radical-reaction network to the thermal decomposition of n-heptane. The inadequacy of the Rice–Kossiakoff theory was rationalized in terms of secondary reactions of higher alpha-olefins. Joo et al. [62] simulated industrial naphtha cracking with radical-reaction networks that included 84 species and 365 reactions. After simplification with eigenvalue–eigenvector decomposition and tuning with real plant data, this model was used to optimize the operating conditions of cracking reactors by predicting product distributions. Given the variety of feedstocks, an automatic reaction-network generator was required for rapid steam-cracking modeling to reduce the manual workload of reaction construction. Song et al. [63] developed a reaction mechanism generator (RMG) implementing advanced technologies such as a graphical representation of reaction families, a hierarchical database for data retrieval and object-oriented technology. Without experimental regression of kinetic parameters, the obtained model fitted practical results of n-heptane cracking well. The radical-reaction model generated by RMG verified traditional assumptions in steam cracking, namely the μ-hypothesis and the quasi-steady-state approximation (QSSA).
Radical-reaction networks model the cracking process from a radical perspective and often involve hundreds of components and thousands of reactions for practical feedstocks. With advanced computational capabilities, radical-reaction-network calculations can be conducted in an acceptable time with precise predictions.
Although generated reaction networks can simulate the pyrolytic process, it is computationally expensive to simulate the hundreds of components and thousands of reactions in ECR networks. For example, the n-hexane ECR network produced by RMG contains 1178 reactions, of which only 55 are needed to simulate the cracking process without an obvious decrease in precision [63]. Thus, it is necessary to reduce the reaction-network complexity to meet the time requirements of practical applications. Conversely, the reduced mechanism offers chemists a way to obtain comprehensive insight into the reaction-network mechanism. Herein, we summarize reaction-network reduction methods as chemical-, mathematical- and mechanism-digitization-based reduction methods.
3.2.1. Chemical-reduction methods
For chemical reaction systems, an in-depth chemical analysis contributes significantly to discriminating critical intermediates and reactions [64], especially for complex ECR networks. QSSA, an extensively used chemical assumption, is applied by setting the time derivatives of some concentrations to zero, which allows these concentrations to be calculated from algebraic equations [65]. Turányi et al. [66] investigated several model reaction systems and empirically concluded that QSSA species are characterized by a high consumption rate, low concentration and short induction period, and are usually radicals. Dente et al. [67] simplified hydrocarbon mixtures by a series of lumping procedures, with intermediate radicals correctly approximated using QSSA. Turányi et al. [68] ranked elementary reactions rigorously based on a reaction rate analysis of propane pyrolysis and eliminated unimportant ones, which reduced the 66 reactions to fewer than 20 important reactions for different reaction times. With progress in computational capabilities, it has become possible to exploit the local curvature information of potential-energy surfaces for various competing reaction pathways in complex chemical networks, after which critical reaction pathways are included in the simplified reaction network [69]. Chemical methods explore the thermodynamic and kinetic properties and use professional knowledge to reduce reactions, which yields an ECR network that conforms to chemical theory with few reactions and largely reduced calculations.
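A toy two-step chain A → R → P illustrates what QSSA buys: when the radical R is consumed much faster than it is formed, setting d[R]/dt = 0 turns its differential balance into an algebraic relation that closely tracks the full ODE solution. The rate constants are arbitrary values chosen to put the system in the QSSA regime.

```python
# Toy two-step chain A -> R -> P with a short-lived radical intermediate R.
k1, k2 = 1.0, 200.0   # k2 >> k1: the regime where QSSA holds

def qssa_radical(a_conc):
    """QSSA: set d[R]/dt = 0, so k1*[A] = k2*[R] and [R] = k1*[A]/k2."""
    return k1 * a_conc / k2

def integrate(t_end, dt=1e-4):
    """Explicit-Euler reference solution of the full ODE system,
    d[A]/dt = -k1*[A],  d[R]/dt = k1*[A] - k2*[R]."""
    a, r = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        da = -k1 * a
        dr = k1 * a - k2 * r
        a += da * dt
        r += dr * dt
    return a, r
```

Replacing the stiff radical ODE with the algebraic relation is exactly how QSSA removes the fastest time scales from an ECR network.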
3.2.2. Mathematical-reduction methods
Mathematical methods analyze the partial-derivative matrix of concentrations with respect to rate coefficients to refine the reaction-network kinetics; they are characterized by the ability to deal with the uncertainty of an ECR network with imprecise parameters. Vajda et al. [70] diagonalized the matrix of squared relative deviations of concentrations over time using eigenvalue–eigenvector decomposition, where the eigenvectors reveal strongly interacting reactions and the corresponding eigenvalues measure the significance of these separate mechanism parts. Lam et al. [71] reported a systematic method of computational singular perturbation that decouples the reaction network into fast and slow subspaces to simplify the kinetic model. This computational singular perturbation method can proceed routinely in the absence of experience and intuition and, when applied recursively, meets a specified threshold of tolerable error [72]. Till et al. [73] reduced lumped reaction networks with global sensitivity analysis, with the kinetic model retaining its fit to experimental data while increasing confidence in the kinetic parameters. Lu et al. [74] applied a directed relation graph method to obtain a skeletal mechanism from a detailed mechanism. The directed relation graph was demonstrated to retain high fidelity for the ethylene oxidation mechanism, with the species reduced from 70 to 33 and 463 elementary reactions replaced with 16 global reactions. Mathematical methods mainly analyze the ECR network by linear algebra and can easily be automated in the absence of chemical knowledge.
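A minimal sketch of the eigenvalue–eigenvector idea: build (here, invent) a local sensitivity matrix S with entries d(ln c_j)/d(ln k_i), diagonalize SᵀS, and rank rate coefficients by their weight in the leading eigenvector; small-weight reactions are candidates for elimination. The matrix values are hypothetical and constructed so that reaction 0 dominates.

```python
import numpy as np

# Hypothetical local sensitivity matrix S[j, i] = d(ln c_j) / d(ln k_i):
# 4 observed concentrations, 3 rate coefficients; reaction 0 dominates.
S = np.array([
    [0.90, 0.05, 0.01],
    [0.80, 0.10, 0.02],
    [0.70, 0.02, 0.01],
    [0.85, 0.04, 0.03],
])

# Principal-component analysis of S^T S: large components of the leading
# eigenvector mark strongly interacting (important) reactions.
evals, evecs = np.linalg.eigh(S.T @ S)
leading = np.abs(evecs[:, np.argmax(evals)])
ranking = np.argsort(leading)[::-1]   # reactions ordered by importance
```

In a real reduction, S would come from integrating the kinetic model's sensitivity equations rather than being written down by hand.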
3.2.3. Mechanism-digitization reduction methods
Recently, approaches that integrate and digitize reaction mechanisms have enlightened novel methods for the model reduction of ECR networks. Prior knowledge such as network topology patterns and mass-flow information can be embedded into the reduction methods. Fang et al. [51], inspired by the analogy with graph structures, proposed a network flow analysis algorithm (NFAA) based on the PageRank algorithm for website-linkage analysis, which determines the importance of websites by the number and quality of links to a page. Similarly, the significance of species and reactions is ranked by analyzing flows in the reaction network, which contain information on the reaction mechanism and process simulation. In an industrial case, 2401 unimportant reactions of an ECR network with 4694 reactions were deleted by the NFAA procedure, with excellent agreement with industrial process data after model adjustment. Hua et al. [52] exploited the structural features of ECR networks and represented chemical reactions as a graph by neighborhood assembly. A total of 34,942 extracted motifs were used in convolutional neural networks (CNNs) for feature extraction. The 992 extracted features were combined with five operating conditions to predict the cracking products. This method ran faster than traditional simulation models, with the error controlled within 5%.
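The NFAA idea can be sketched as a PageRank-style power iteration over a mass-flow graph: a species ranks highly when important species feed large flows into it. The five-species network, the flow values and the `flow_rank` helper below are invented for illustration and are not taken from Fang et al.

```python
import numpy as np

species = ["naphtha", "C2H5*", "CH3*", "ethylene", "methane"]
# Hypothetical mass flows flow[i, j]: from species i to species j via reactions.
flow = np.zeros((5, 5))
flow[0, 1] = 6.0   # naphtha -> C2H5*
flow[0, 2] = 3.0   # naphtha -> CH3*
flow[1, 3] = 5.5   # C2H5* -> ethylene (beta-scission)
flow[2, 4] = 2.8   # CH3* -> methane (H-abstraction)
flow[2, 1] = 0.2   # minor CH3* -> C2H5* path

def flow_rank(flow, damping=0.85, iters=200):
    """PageRank-style power iteration on flow-normalized transitions:
    a species is important if important species feed large flows into it."""
    n = len(flow)
    out = flow.sum(axis=1, keepdims=True)
    # Species with no outflow (products) redistribute uniformly (dangling nodes).
    P = np.where(out > 0, flow / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)
    return r

rank = flow_rank(flow)
```

Species and reactions falling below a rank threshold would then be pruned from the network, which is the reduction step proper.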
Mechanism-digitization reduction methods give a mechanism-knowledge-based insight into the reaction network through structured transformation and feature analysis. Natural-language-like reaction equations and rules are represented by structured data, such as vectors and matrices, for further mathematical reduction operations. The simplified reaction network is more interpretable and applicable for simulation.
The reaction unit is the most essential part of thermal cracking production, implementing the conversion from feedstocks to products. The final product distribution is determined mainly by reaction and heat transfer and can be affected by coking of the reaction unit. Corresponding models have been developed for thermal cracking reactors [46]. For intelligent manufacturing applications, the models built for the various sections should be high-performance, and their compatibility should be suitable for integration and further co-compiling [15]. Given the detailed feedstock composition from the molecular reconstruction model and the reaction network from auto-generation, integrated modeling and simulation of the reaction unit can be achieved through the multiple approaches described in the following section.
The tubular reactor is where the principal reactions occur during cracking. Reactor modeling and simulation can be grouped into three categories: rigorous, surrogate and computational fluid dynamics (CFD) [75] models. Early research on thermal cracking focused on simulation based on conservation equations and kinetics [76,77]; this line of research developed into mechanistic, or rigorous, models. Later, to meet the requirements of optimization and control in plants, trade-offs between fidelity and computational cost led to the emergence of surrogate models. With improvements in computational power, CFD models were applied in thermal-cracking simulation to achieve a high level of accuracy.
4.1.1. Rigorous models
Currently, rigorous models are mathematical differential equations that describe the conservation laws, including the mass, momentum and heat balances [78]. The most widely used and successfully commercialized rigorous model is the plug-flow-reactor model [79,80], in which a one-dimensional simulation along the tube provides a reasonable simplification. The Reynolds number in the reactor, above 250,000, is extremely high; thus radial profiles can be omitted and a quasi-steady-state model can be built to describe the process with the tube length as the coordinate [81]. An example of a rigorous model is given in Eq. (1):

$$
\begin{aligned}
\frac{\mathrm{d}N_m}{\mathrm{d}L} &= \frac{S_{\mathrm{in}}}{q_v}\sum_{i=1}^{N_R}\nu_{im}\,r_i, \qquad
\mathrm{d}t = \frac{S_{\mathrm{in}}}{q_v}\,\mathrm{d}L,\\
-\frac{\mathrm{d}P}{\mathrm{d}L} &= \alpha\,\frac{2f\,q_m^{2}}{\rho\,D_{\mathrm{in}}\,S_{\mathrm{in}}^{2}},\\
\frac{\mathrm{d}T}{\mathrm{d}L} &= \frac{\pi D_o\,q \;-\; S_{\mathrm{in}}\sum_{i=1}^{N_R} r_i \sum_{m=1}^{N_S}\nu_{im}\,\Delta H_{f,m}^{\circ}}{q_v\!\left(\sum_{m=1}^{N_S} N_m\,C_{p,m} \;+\; N_{\mathrm{H_2O}}\,C_{p,\mathrm{H_2O}}\right)}
\end{aligned}
\tag{1}
$$

where N_m is the concentration of species m in the reaction tube; L is the length of the reaction tube; dt is the residence time of the pyrolysis gas in the micro-segment dL; ν_im is the stoichiometric coefficient of species m in reaction i; S_in is the flow area of the in-tube reactor; q_v is the volume flow rate of the pyrolysis gas; r_i is the reaction rate of reaction i; N_R is the total number of reactions; N_S is the total number of species; P is the pressure in the reaction tube; f is the Fanning friction factor; α is the equivalent conversion coefficient of the tube segment; q_m is the mass flow rate of the pyrolysis gas; D_in is the inner diameter of the tubular reactor; ρ is the pyrolysis gas density; T is the temperature in the reaction tube; q is the heat flux from the firebox; D_o is the outer diameter of the tubular reactor; ΔH°_f,m is the standard enthalpy of formation of species m; C_p,m is the heat capacity of species m at constant pressure; C_p,H2O is the heat capacity of water at constant pressure; and N_H2O is the concentration of water in the reaction tube. The kinetics enter the model through the calculated reaction rates r_i, and the coking process is considered via parameters in the calculation of the heat flux q and by updating the inner diameter D_in. Large-scale systems of ordinary differential equations such as this can be solved with numerical methods such as Gear's method [82].
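A minimal numerical sketch of the species balance in Eq. (1), reduced to a single hypothetical first-order reaction A → 2B and marched along the coil with a fourth-order Runge–Kutta scheme (a simple stand-in for Gear's method); all parameter values are illustrative.

```python
import math

# Toy version of the species balance of Eq. (1) for one reaction A -> 2B
# with rate r = k * N_A, integrated along the coil length.
k = 3.0          # 1/s, hypothetical rate constant
S_in = 0.01      # m^2, flow area
q_v = 0.05       # m^3/s, volumetric flow rate
stoich = {"A": -1.0, "B": 2.0}

def rhs(N):
    """dN_m/dL = (S_in/q_v) * sum_i nu_im * r_i, here for one reaction."""
    r = k * N["A"]
    return {m: (S_in / q_v) * nu * r for m, nu in stoich.items()}

def integrate(L_end, dL=1e-3):
    """Fourth-order Runge-Kutta march along the tube length."""
    N = {"A": 100.0, "B": 0.0}   # mol/m^3 at the coil inlet
    for _ in range(int(L_end / dL)):
        k1 = rhs(N)
        k2 = rhs({m: N[m] + 0.5 * dL * k1[m] for m in N})
        k3 = rhs({m: N[m] + 0.5 * dL * k2[m] for m in N})
        k4 = rhs({m: N[m] + dL * k3[m] for m in N})
        N = {m: N[m] + dL / 6 * (k1[m] + 2 * k2[m] + 2 * k3[m] + k4[m]) for m in N}
    return N

outlet = integrate(10.0)
```

The full model couples thousands of such species balances with the pressure and temperature equations of Eq. (1), which is why stiff solvers such as Gear's method are preferred in practice.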
Significant progress has been made on the rigorous mathematical modeling of thermal cracking reactors using plug-flow-reactor models in recent decades. The maturity of these methods provides opportunities for industrial application in plants to achieve intelligent manufacturing. However, these high-fidelity models are computationally complex, and the modeling process requires accumulated experience and some industrial data. Further applications in optimization and control are delayed by the difficult acquisition and high time consumption of these high-fidelity rigorous models.
4.1.2. Surrogate models
The rapid development of numerical fitting algorithms, especially machine-learning techniques, has promoted the application of surrogate models in thermal cracking. Instead of acquiring numerous parameters, building complicated models and solving them laboriously, surrogate models usually use key variables and appropriate prediction methods to provide a sufficiently accurate approximation of the process.
Various modeling techniques have been applied to explore the possibility of establishing a high-performance surrogate model. Wang and Tang [83] constructed a mathematical relationship between the input control variables and the product yields from practical data using the least-squares support vector machine. Sedighi et al. [84] used artificial neural network (ANN), neuro-fuzzy (NF) and polynomial models to investigate the effects of the coil outlet temperature (COT), the steam ratio and the feed flow rate on product yields; compared these models with kinetic models; and found that the ANN and NF models gave better results. Hough et al. [85] showed that ANN and decision-tree models can replace kinetic models, with results that agree acceptably with experimental data.
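The surrogate idea can be sketched in a few lines: fit a simple response surface (here a quadratic in COT, standing in for the ANN/SVM surrogates above) to noisy plant-like data and use it as a cheap stand-in for the kinetic model. The yield-versus-COT relation below is synthetic.

```python
import numpy as np

# Synthetic "plant" response: ethylene yield (wt%) vs coil outlet
# temperature (degC), a hypothetical quadratic with a mild optimum at 860 degC.
cot = np.linspace(820.0, 880.0, 13)
true_yield = -0.002 * (cot - 860.0) ** 2 + 30.0
rng = np.random.default_rng(1)
measured = true_yield + rng.normal(0.0, 0.05, size=cot.size)

# A quadratic response surface as the surrogate model.
coeffs = np.polyfit(cot, measured, deg=2)
surrogate = np.poly1d(coeffs)
```

Evaluating `surrogate` costs microseconds, which is what makes such models attractive inside optimization, control and scheduling loops.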
Innovations in machine learning are boosting the diversified development of surrogate models. Although these models forsake mechanism and accuracy to some extent, convenient modeling, rapid transplantation and time-saving calculation are stimulating researchers toward further development and promotion, especially in the domains of optimization, control, scheduling and planning.
4.1.3. CFD models
Unlike the rigorous and surrogate models, CFD models focus on the detailed profiles of tubular reactors, including the concentration, temperature, heat-flux and velocity distributions. During the past decades, advanced CFD simulations using the Reynolds-averaged Navier–Stokes (RANS) approach, large eddy simulation (LES) and other techniques have been widely applied to reactor models in research.
Habibi et al. [86] combined the renormalization-group k–ε turbulence model, the Finite-Rate/Eddy-Dissipation model and a three-step chemical reaction scheme in a RANS simulation, then compared the results of using different radiation models in the reactor. Reyniers et al. [87] added turbulence–chemistry interaction and dynamic zoning to the RANS simulation, and revealed a difference of 0.1%–0.3% (mass) in light-olefin yield between the previous and newly proposed methods, with a speedup factor of 50–190. van Cauwenberge et al. [88] applied streamwise-periodic boundary conditions in LES using the open-source CFD package OpenFOAM, which succeeded in capturing secondary flows compared with RANS approaches. van Cauwenberge et al. [89] then assessed the accuracy of LES by benchmarking the simulations against reference cases. The applicability of the method for reactor design was demonstrated for the industrially relevant case of steam cracking, where a speedup factor of up to 250 and relative errors below 1% for the major products were obtained.
CFD models are of high accuracy and compare favorably with industrial collaboration data. They enable visualization of and insight into the reactor, and help in understanding the production process and mechanism. The feasibility of solving extremely complex CFD models has increased with improvements in computing hardware in recent years. However, because of the high computational expense (which may be much higher than that of rigorous models), these methods are not used extensively in industrial practice, although several process designs have applied CFD-based approaches.
Combustion of fuel gas and air occurs in a firebox, where fuel-gas energy is converted into heat for feedstock pyrolysis. Classical models, such as Belokon's [90] and the Lobo–Evans [91] methods, are zero- or one-dimensional, in which the flue gas and tube surface are assumed to be isothermal. For industrial application, Hottel and Sarofim [92] proposed a zone method for discrete simulation of the firebox. Discrete zones are set up using isothermal surfaces, and the properties of each zone are assumed to be uniform. Zhou and Qiu [93] applied an adjusted Monte Carlo integral method for direct exchange-area calculation to replace traditional numerical integration, and demonstrated that the proposed method is simpler and agrees with industrial measurement data. Hua et al. [94] designed an intelligent hybrid model using an ANN for zone-model reduction. The time consumption was reduced by 83%, whereas the accuracy remained acceptable compared with industrial data. The zone method and its variants perform well, with extendibility and flexibility, and are used widely by petrochemical enterprises and commercial software packages coupled with tubular reactor models.
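The zone method hinges on exchange areas between isothermal zones, which reduce to geometric radiation integrals that Monte Carlo sampling can estimate. As a toy example (a plain direct-sampling estimate, not the adjusted scheme of [93]), the view factor between two directly opposed parallel unit squares, whose analytic value is about 0.1998, can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
c = 1.0   # separation between two parallel unit squares (two zone surfaces)

# Uniform sample points on each surface.
p1 = rng.random((N, 2))   # (x, y) on surface 1 at z = 0
p2 = rng.random((N, 2))   # (x, y) on surface 2 at z = c

d2 = ((p1 - p2) ** 2).sum(1) + c ** 2       # squared distance between point pairs
# For parallel surfaces, cos(theta1) = cos(theta2) = c / r, so the view-factor
# kernel cos(theta1)*cos(theta2)/(pi*r^2) simplifies to c^2/(pi*r^4).
F12 = np.mean(c ** 2 / (np.pi * d2 ** 2))   # A2 = 1, so the estimator is the mean kernel

print(F12)   # analytic value for this geometry is about 0.1998
```

Direct exchange areas in a real firebox additionally weight this kernel by the gas absorption along each ray, but the sampling structure is the same.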
A more accurate numerical simulation using CFD models can provide detailed profiles of the firebox and even the entire furnace. Hu et al. [95] considered the combustion scheme of a CH4/H2/air mixture in furnaces containing both long-flame and radiation burners, embedded the reactions and kinetics into the Finite Rate/Eddy-Dissipation model, and then coupled it with the reactors. Accurate calculation of the product yields would benefit the design of specific reactors. Hu et al. [96] further evaluated four radiation models, including the adiabatic, P-1, discrete ordinates and discrete transfer radiation models, in coupled furnace/reactor simulations to predict run lengths. The discrete ordinates model was finally recommended for run-length simulations of an industrial naphtha cracking furnace with a 130 kt.a-1 capacity. As with the CFD reactor models, despite their accuracy, similar firebox-modeling approaches are hindered in industrial applications by model complexity and high time consumption.
Firebox models contain multiple types of combustion and heat-transfer processes, and thus various sub-models, such as chemistry–turbulence interaction, radiation and combustion models, have also been proposed. These sub-models are included to varying degrees in the works mentioned above. For the sake of brevity, we do not expand on them here.
Coke is co-produced and deposited on the reactor wall, which leads to poor heat transfer. Coke-formation inhibition methods are being sought continuously to extend the cracking furnace run length [97]. Three types of coke-formation mechanisms have been proposed in previous research [98], namely, pyrolytic coke formation, catalytic coke formation and droplet condensation. For pyrolytic coke formation, which is also known as radical coke formation, Lahaye et al. [99] investigated carbon formation during steam cracking and proposed a reaction route from the polymerization of aromatic components toward coke formation. For catalytic coke formation, catalytic reactions usually occur at the beginning of a cracking cycle; the coke formation is catalyzed by the tube walls and can be prevented by coking inhibitors in industrial practice [100,101]. Droplet condensation usually takes place in transfer-line exchanger (TLE) tubes, where high-boiling hydrocarbons adhering to the low-temperature surface are converted rapidly through dehydrogenation reactions, which is considered the main cause of TLE coking [102]. Numerical and kinetic modeling of these mechanisms is being pursued to find solutions for coke-deposition reduction and elimination [103]. Coke combustion and gasification kinetic models have been introduced for optimal decoking-procedure design [104].
As an important side reaction, coking has an obvious impact on production, affecting the inner reactor diameter, heat-transfer parameters, furnace run length and final product distributions [105]. In mechanism models, the coking model can be embedded by adding the partial differential equations implicated in the coking reactions and updating the coke thickness and heat-transfer coefficients in each infinitesimal segment of space and time. If industrial data on coke thickness are provided in surrogate models, the coking profiles can be plugged directly into the model as numerical values. Further investigations are required to establish an explicit reaction process and kinetics of coke formation.
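To make the embedding concrete, the sketch below marches a coke layer through time on a segmented coil and recomputes the inner diameter and overall heat-transfer coefficient per segment. All rate constants and property values are placeholders, not fitted coking kinetics.

```python
# Illustrative coking update for a discretized reactor coil; every number
# below is a placeholder chosen for readability, not a validated value.
N_SEG = 10            # axial segments
D0 = 0.05             # clean inner diameter, m
K_COKE = 1e-12        # pseudo coking-rate constant, m/s per unit driving force
LAMBDA_COKE = 6.0     # coke thermal conductivity, W/(m*K)
H_CLEAN = 1000.0      # clean-wall heat-transfer coefficient, W/(m^2*K)
dt = 3600.0           # time step: 1 h

def step(coke, t_wall):
    """Grow the coke layer and return updated thicknesses, diameters and U values."""
    new = [s + K_COKE * tw * dt for s, tw in zip(coke, t_wall)]  # crude T-driven growth
    diam = [D0 - 2 * s for s in new]                             # layer shrinks the bore
    # Series resistance: clean-wall film plus conduction through the coke layer.
    U = [1.0 / (1.0 / H_CLEAN + s / LAMBDA_COKE) for s in new]
    return new, diam, U

coke = [0.0] * N_SEG
t_wall = [900.0 + 20.0 * i for i in range(N_SEG)]   # hotter toward the outlet
for _hour in range(30 * 24):                        # 30-day run, hourly steps
    coke, diam, U = step(coke, t_wall)

print(max(coke), min(U))   # the outlet segment cokes fastest; U degrades there
```

A rigorous model would replace the crude growth law with coking kinetics and re-solve the coupled heat and mass balances at each step, but the update structure is the same.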
With models of the reactors, fireboxes and coke formation in hand, the entire thermal cracking process in the furnace can be simulated by appropriate combination of, and information communication among, the selected models (Fig. 3). In the first step, flowsheeting of the pyrolysis production is essential to sort out the model inputs and outputs, the information flow between models and the key variables for iteration if loops exist [106]. Coupled simulation can then be implemented for the joint solving of the models. Over recent decades, significant progress has been made in coupled simulation of the entire furnace. Hu et al. [107] performed a coupled simulation of CFD models for fuel-gas combustion and flue-gas profiles, combined with the software packages COILSIM1D and SimCO for cracking in the reactor coils. Fang et al. [108] established a coupled simulation system, modeling the firebox using a recirculation zone method and the reactor using a one-dimensional rigorous model. Zhang et al. [81] introduced multi-scale modeling of steam cracking, including a process-level model with COT correction, a reaction-level model with automatic generation and reduction of the reaction network, and multi-period optimization with a surrogate coke-formation model. Jin et al. [109] proposed a feedforward neural network as a surrogate model for tubular reactors to complete a simple and rapid pseudo-dynamic simulation that considers the coking process.

Fig. 3. Model integration approaches in EcSOS simulation software packages, plotted by Bi et al. [50].
These integrated models provide a comprehensive description of the cracking furnace and a solid foundation for the intellectualization of the entire thermal cracking process. Further intelligent manufacturing tasks, such as advanced process control (APC)/model-predictive control (MPC), real-time optimization (RTO), cyclic scheduling and product planning, can be implemented on the basis of this research. Additional efforts are being made continuously toward performance improvement and the practical application of these models.
Steam cracking is large in scale, with frequent model switches, which raises several issues in process modeling, such as a heavy computational burden and low portability among various combinations of feedstocks and furnaces [110]. It is recognized widely that robust features extracted from an industrial process can reduce the dimensionality and enhance the interpretability of a simulation model [62]. Thus, feature extraction methods show promise for achieving convenient and high-efficiency modeling of the ethylene cracking process.
We discuss the indispensable role that intelligent feature extraction and processing play in cracking simulation from the perspectives of novel network characterization and task-oriented portable modeling.
The ethylene cracking reaction (ECR) network contains thousands of reactions and components. Therefore, it needs to be characterized carefully for feature identification, so that it can be embedded efficiently in simulation models. Principal component analysis was introduced for reaction-significance extraction and dimension reduction in early research. However, these approaches considered only the reaction-rate information and could not always provide reliable results.
Inspired by the PageRank algorithm [111], Fang et al. [51] combined complex network analysis and chemical reaction kinetics, and proposed NFAA. The algorithm was designed under two core principles: (i) pages (nodes) with more links are usually better resources, and (ii) pages (nodes) with higher rankings have more weight when "voting" with their links. In the first step, NFAA transforms the reaction network into a Petri net [112], and then uses link and transition matrices as iteration coefficients to convert the evaluation vector, initialized by the feedstock composition, into the final ranking values. In this process, the unstructured reaction network is transformed into structured data, which provides a feasible way to add reaction features to steam-cracking models. The ECR network can be visualized using the Gephi software in a more intuitive way, which benefits the analysis of the topological structure of the ECR network and subsequent research on underlying feature extraction (Fig. 4).
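The damped iteration at the heart of such a ranking can be sketched on a toy network. The species, reactions and product patterns below are schematic stand-ins, not a balanced mechanism, and the link/transition matrices only mimic the Petri-net structure rather than reproduce NFAA itself.

```python
import numpy as np

# Toy species-reaction bipartite network (Petri-net style): species feed
# reactions via `link`, reactions emit species via `trans`. Topology is invented.
species = ["C4H10", "C2H5*", "C2H4", "CH3*", "C3H6", "H2"]
link = np.array([       # link[i, j] = 1 if species i is a reactant of reaction j
    [1, 1, 0, 0],       # feedstock initiates reactions r0 and r1
    [0, 0, 1, 0],       # radical consumed in r2
    [0, 0, 0, 0],       # product-only node
    [0, 0, 0, 1],       # radical consumed in r3
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])
trans = np.array([      # trans[j, i] = 1 if reaction j produces species i
    [0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
])

feed = np.array([1.0, 0, 0, 0, 0, 0])   # evaluation vector seeded by feed composition
v = feed.copy()
for _ in range(50):                     # damped, PageRank-style power iteration
    w = v @ link @ trans                # propagate scores species -> reactions -> species
    v = 0.85 * w / max(w.sum(), 1e-12) + 0.15 * feed

ranking = sorted(zip(species, v), key=lambda kv: -kv[1])
print(ranking)
```

The output is a structured ranking vector over species, which is the kind of object that can then be fed into downstream models in place of the raw, unstructured network.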
Bi et al. [50] improved the NFAA and proposed an ingenious network characterization method by implementing a multiple sub-network reconstruction process. The reconstruction process includes redistribution, autofocusing and refining steps, in which multiple sub-networks are constructed from the original reaction network with matrix calculation methods. Through the multiple sub-network reconstruction module, key interactions among component nodes are extracted, which can be used in mechanism analysis to capture evolving trends of reaction rates and product yields. The entire characterization approach can profile various chemical processes and shows great potential for executing reliable plant-wide control and optimization.
One severe problem of cracking-process simulation is the low portability among various combinations of feedstocks and furnaces. Task-oriented portable modeling techniques, with the advantages of convenience, good transferability and resource saving [85], have therefore been proposed and are expected to resolve this problem. However, these so-called "Black-Box" portable models, such as machine-learning models, may lack a theoretical foundation and demand extremely large industrial datasets, which limits their application in industrial practice. Thus, it is essential that features of the processes be used in these models to enhance interpretability and reduce the data demand.
Researchers have made efforts in this respect, especially for detailed effluent prediction. Niaei et al. [113] used ANNs with back propagation to predict the main product yields of the thermal cracking of naphtha. Sedighi et al. [84] applied an ANN trained by the Levenberg–Marquardt optimization algorithm and a five-layer adaptive neuro-fuzzy inference system to model the thermal cracking of heavy liquid hydrocarbons. Plehiers et al. [114] proposed a framework of four deep-learning artificial neural networks for boiling-point forecasting, feedstock reconstruction, detailed effluent prediction and property estimation in the fast and accurate modeling of steam cracking. These models achieved good performance, which may rival typical online analysis equipment or mechanistic programming approaches.
In addition to using ANNs or variant methods for reaction feature extraction, other machine-learning methods exist for task-oriented modeling of steam cracking. The NFAA transforms the unstructured ECR network into a structured matrix, which benefits the development of portable models. On the basis of the reaction and component matrix constructed from NFAA, Hua et al. [52] proposed motif detection, a new approach of substrate-graph neighborhood assembly discrimination. This approach determines the mapping from the entire network to substrate graphs and imitates the Watts–Strogatz model of "small-world" networks. A detected motif is analogous to the two-dimensional pixels of a computer image and is regarded as a receptive field, which is used as input to a convolutional neural network (CNN) model (Fig. 5). Compared with the ANN model, the CNN model learns topological features from the ECR network and improves the prediction precision. Because motifs are extracted from the ECR network, process knowledge is learned and the interpretability is enhanced, which makes the simulation a "Grey-Box" rather than a pure "Black-Box" model.

Fig. 4. Visualization of the network and the highest rankings of the reactions and species, by Fang et al. [51].

Fig. 5. CNN modeling and intensification framework for ethylene thermal cracking proposed by Hua et al. [52].
Because different cracking processes usually share similar reaction networks, reactor models and main-product evolving trends, Bi et al. [115] presented a new transfer-learning-based cracking yield prediction model to improve the modeling portability between different combinations of feedstocks and furnaces. In their work, a motif-feature matrix generated using Hua's method was set as a key input to assimilate the reaction mechanisms in the knowledge-learning stage. The layer-transfer technique was used for knowledge sharing among models. The effective knowledge transmission and parameter reduction benefited the modeling process significantly, making it feasible to generate models with less time and fewer data resources.
As a result of rapid economic development, environmental pollution and high energy consumption are arousing continued attention [116], which motivates investigations into energy-efficiency evaluation and energy-utilization optimization. Geng et al. [117] compared three models for energy optimization and prediction modeling of ethylene production systems, and pointed out that a CNN integrating cross-features showed the best performance and could bring a 6.38% increase in energy-utilization efficiency and a 5.29% reduction in carbon emissions. Furthermore, the research group at Beijing University of Chemical Technology proposed other models, such as a data envelopment analysis cross-model integrated with an interpretive structural model and the analytic hierarchy process [118], and input–output networks considering graphlet-based analysis [119], for extracting energy features from the structure of complex ethylene production systems.
In summary, characterization and modeling based on intelligent feature extraction achieve information mining from reaction processes and subsequent model intensification using that information. Model visualization and interpretation aid in the comprehensive understanding of steam cracking and in drafting further control and optimization strategies.
Starting from this chapter, optimization of the ethylene steam cracking process is introduced from multiple aspects, and the roadmap from industrial application to detailed formulations and solutions is plotted in Fig. 6.

Fig. 6. Roadmap for optimization problems in the ethylene steam cracking process.
The operational level of the ethylene cracking furnace has significant relevance to the product yields and energy consumption of steam cracking, which determine the profit of the plant. With an increasing emphasis on environmental protection and safety, factors such as gas emissions, operational flexibility and stability have been considered, which promotes the development of control and optimization models. As a key step of intelligent manufacturing in ethylene production, connecting plant-level scheduling to operational-level control and optimization, research into operational tuning can be categorized into two parts: optimal operational parameters, which determine the key variables at lower levels, and RTO with a control system, which determines the operational status at upper levels.
The search for the most profitable, safest and cleanest steam-cracking production has always been of interest, and significant effort has been made to determine the optimal operating parameters since the 1970s. Initially, optimization of the operational conditions was limited by calculation speed and modeling techniques, and the focus was on single-objective optimization. Robertson and Hanesian [120] developed a calculation program that could handle 25 simultaneous reactions of up to 25 components, and used it to find the reaction conditions for the optimal annual production of ethylene. Recent research has focused on improving algorithms to achieve higher performance. Nian et al. [121] proposed a hybrid evolutionary algorithm, termed differential evolution group-search optimization, which integrates differential evolution and group search optimization to solve problems with changing feedstock properties. A case study with COILSIM1D as the yield model showed the superior searching performance of the proposed algorithm.
With the continuous enhancement of computing power and modeling-optimization technology, multi-objective optimization problems have attracted increased researcher attention. Comprehensive objectives of higher total profit and lower energy consumption are considered when setting up the mathematical formulations, which raises the intelligence of the production process to a new level. Nabavi et al. introduced the nondominated sorting genetic algorithm-II adapted with the jumping gene operator (NSGA-II-aJG) for the operational-level optimization [122] and furnace design [123] of LPG thermal crackers, maximizing the annual ethylene and propylene production, selectivity and run length, and minimizing the severity and total heat duty per year. Li et al. [124] dealt with the operational optimization of industrial naphtha cracking furnaces using a hybrid of multi-objective particle swarm optimization and an ANN model. They selected the decision variables based on a sensitivity analysis, and the calculation results achieved a tradeoff between the ethylene and propylene yields.
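The common core of these multi-objective methods is Pareto dominance. As a self-contained sketch (with hypothetical yield pairs, not data from the cited studies), the non-dominated filter that NSGA-II-style algorithms apply at each generation looks like:

```python
def pareto_front(points):
    """Return indices of non-dominated points, all objectives to be maximized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and
            any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical (ethylene yield, propylene yield) pairs for candidate operating points.
candidates = [(0.30, 0.14), (0.32, 0.12), (0.28, 0.16), (0.29, 0.13), (0.31, 0.15)]
front = pareto_front(candidates)
print([candidates[i] for i in front])   # the ethylene/propylene tradeoff curve
```

The surviving points form the tradeoff curve from which a decision maker (or a fuzzy evaluation layer, as in [126]) picks an operating point.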
In recent years, fuzzy systems have shown great potential in operational-parameter optimization. Xia et al. [125] developed a fuzzy C-means multiswarm competitive particle swarm optimization (PSO) algorithm for the optimal control of ethylene cracking, integrated with a radial basis function neural network. The addition of fuzzy C-means clustering enhanced swarm diversity, which improved the performance of the optimization algorithm. Geng et al. [126] embedded a dynamic analytic hierarchy process in an adaptive multi-objective particle swarm optimization algorithm, which provides decision makers with alternative Pareto-optimal solutions by fuzzy evaluation.
From simple formulations with a single objective function to comprehensive multi-objective optimization frameworks, extensive progress has been made in recent decades, especially in the development and improvement of intelligent optimization algorithms. To solve multi-objective optimization problems, NSGA, PSO and their variants have been discussed and studied extensively. Advances in optimization frameworks and algorithms benefit the application of various types of steam-cracking simulation models.
RTO is important for connecting scheduling strategies with operational-level implementation, and a control system is applied to achieve the optimal operational parameters in the cracking units. Novel control concepts, such as APC or MPC, are usually combined with RTO approaches, and various frameworks have been set up for industrial practice [127].
Steady-state RTO consists of rigorous simulation models and optimization algorithms, usually with links to the control system and plant databases. Emoto et al. [128] proposed a steady-state closed-loop RTO model for an olefin plant, which achieved economic benefits. In this model, multivariable models and controllers were used for subsequent optimization via successive quadratic programming. Steady-state detection of the facilities is usually required for steady-state RTO [129]; thus, the application of this method is limited to a single apparatus.
Because of the limitations of the steady-state approaches, dynamic RTO was conceptualized and promoted for olefin plants. Dynamic approaches can achieve multi-apparatus reconciled optimization without dependence on steady-state detection and with a shortened time consumption. The compatibility of dynamic RTO with MPC/APC is higher, which implies easier coupling with an industrial control system. Nath and Alzein [130] introduced the implementation of MPC and the online optimization of dynamic processes in an olefin plant. Higher continuous savings and an easier, less expensive application and maintenance were achieved through controller robustness and reasonable nonlinear-model intensification. Manenti et al. [131] proposed a dynamic RTO and demonstrated the feasibility of the approach in an industrial case study of a steam-cracking furnace. A dynamic simulation with models in SPYRO, combined with software such as ROMeo and DynSim, demonstrated that the dynamic RTO had the advantages of higher product quality during process transients, improved computational performance and increased user friendliness.
The increase of industrial information in the new era and frequent database updates may lead to plant–model mismatches [132], so that the optimal solutions of the RTO model do not coincide with practice or may even be infeasible. Thus, modifier adaptation [133,134] is required for model maintenance in RTO, and corresponding models that are easy and convenient to construct and improve should be embedded in RTO formulations. For example, neural-network-based models [135,136] and grey-box approximate models [137,138] can be applied in RTO frameworks; they are adaptable to continuous updates and various datasets because of their extendibility, stability and robustness.
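The idea behind modifier adaptation can be illustrated with a one-variable toy problem: a linear modifier is updated from the mismatch between measured plant gradients and model gradients until the model-based optimum coincides with the plant optimum. The quadratic "model" and "plant" below are stand-ins, not any cited formulation.

```python
# Toy gradient-modifier adaptation: the nominal model peaks at u = 2.0,
# the (unknown to the model) plant peaks at u = 2.5.

def model_profit(u, bias=0.0):
    return -(u - 2.0) ** 2 + bias * u      # nominal model plus linear modifier

def plant_profit(u):
    return -(u - 2.5) ** 2                 # stand-in for measured plant economics

def grad(f, u, h=1e-4):
    return (f(u + h) - f(u - h)) / (2 * h)  # finite-difference gradient estimate

u, bias = 2.0, 0.0
for _ in range(30):
    # "Optimize" the modified model; for this quadratic the argmax is closed-form.
    u = 2.0 + bias / 2.0
    # Update the modifier from the plant/model gradient mismatch at u.
    plant_g = grad(plant_profit, u)
    model_g = grad(lambda x: model_profit(x, bias), u)
    bias += 0.5 * (plant_g - model_g)

print(u)   # converges toward the plant optimum, u = 2.5
```

In a real RTO layer the "optimize" step is a full NLP solve over the process model, and the plant gradient comes from perturbation experiments or estimation, but the correction loop has this shape.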
Researchers are active and productive in model building, algorithm design and development, and the corresponding industrial application of control and optimization for steam cracking. Because of these contributions, the manufacturing level of ethylene plants is changing from automation to digitization and intellectualization, which assists the reformation of the industrial system.
Besides operational tuning within the scale of a single cycle during thermal cracking production, ethylene plants and petrochemical enterprises face optimization problems at the plant and supply-chain scales. Multiple cracking furnaces are assigned for the pyrolysis of various feedstock types, while the time slots of production cycles fluctuate and market information, such as supply and demand, changes rapidly.
For profit-margin improvement, research has been devoted to the strategic decision making of ethylene plants since Jain and Grossmann [139] first proposed the fundamental mixed-integer nonlinear programming (MINLP) model for cracking-furnace-system optimization in 1998. The subsequent research can be divided into cyclic-scheduling and production-planning problems in terms of scale and scope.
Cyclic-scheduling research deals with the allocation of a set of limited resources over time to manufacture one or more products according to a batch recipe [140]. For thermal cracking, the length of a time slot for continuous production is 20–60 days, after which cleanup for decoking takes one or two days. The feedstock allocation strategies and the time arrangement of production cycles and cleanups are the main outputs of cyclic-scheduling models, with conservation, bound and logic constraints on the operational variables [141].
The fundamental MINLP model proposed by Jain and Grossmann [139] addressed the problem of scheduling multiple feeds on parallel units for a single product (ethylene) using exponentially decaying ethylene-conversion correlations. The production conditions, including the feed rate, COT, dilution ratio and processing time for each feedstock, are fixed during optimization. Extensive attention has since been given to model improvement and practical application. Schulz et al. [142] added a downstream plant model to predict an ethane recycle stream, and demonstrated that the recycle-stream flow rate influences the optimal schedule significantly. Gao et al. [143] used Kumar's molecular kinetics and a coke mechanism for reaction modeling, and proposed a novel parallel hybrid multi-objective method that combines NSGA-II with successive quadratic programming for solving the optimization problem. Jin et al. [144] used a surrogate feedforward neural network model for production-process simulation, and presented a mixed-integer dynamic optimization (MIDO) problem to optimize the operating conditions and cyclic scheduling simultaneously. Su et al. [145] investigated cyclic scheduling with the practical serial cracking of different feedstock types in one production run; an improved outer-approximation algorithm was proposed and applied to solve the formulation. These works introduced more concepts and expert knowledge, including feedstock recycling, reaction mechanisms, dynamic optimization and multiple feed modes, into scheduling models, and developed compatible optimization algorithms for problem solving.
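The flavor of these formulations can be seen in a single-furnace toy calculation: with an exponentially decaying yield (in the spirit of the decaying-conversion correlations) and a fixed decoking downtime, the run length that maximizes the average production rate can be found by a coarse search. All numbers are illustrative; the full MINLP treats many furnaces, feeds and cleanup slots jointly.

```python
import math

# Illustrative single-furnace cycle-length tradeoff: longer runs amortize the
# cleanup downtime but operate longer at degraded (coked) yield.
Y0 = 0.30        # fresh-coil ethylene yield
K = 0.01         # yield decay constant, 1/day (coking degrades performance)
FEED = 100.0     # feed rate, t/day
CLEANUP = 2.0    # decoking downtime per cycle, days

def avg_rate(T):
    """Average ethylene production rate over one full cycle, t/day."""
    produced = FEED * Y0 * (1 - math.exp(-K * T)) / K   # integral of decaying yield
    return produced / (T + CLEANUP)                      # include cleanup downtime

# Coarse search over candidate run lengths; a scheduling model would decide
# this jointly with feed allocation across parallel furnaces.
best_T = max(range(5, 121), key=avg_rate)
print(best_T, avg_rate(best_T))
```

The optimum balances the two effects: very short cycles waste a large fraction of the horizon on decoking, while very long cycles spend most of their time at low yield.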
Consecutive studies have been conducted by several research groups for continuous model improvement. Liu et al. [146] imposed non-simultaneous cleanup logic on cyclic-scheduling problems and considered multiple products. Subsequent work by their group at Lamar University [147–150] discussed issues including secondary ethane cracking, dynamic scheduling, emission constraints and inherent process-upset reduction. Yu et al. [151] introduced diverse-learning teaching–learning-based optimization algorithms to solve scheduling problems, and their work [152–154] focused mainly on applying the proposed optimization algorithms. Wang et al. [155] developed a novel synchronized decision-supporting framework that considers inventory management under insufficient supply or an increased production burden. Zhang et al. [156], in the same group at Zhejiang University, proposed robust optimization models for the uncertainty analysis of ethylene plants. New robust formulations induced by flexible uncertainty sets helped to capture real-world performance in thermal cracking production.
Unlike cyclic scheduling, which is also termed short-term planning, production-planning problems are formulated for long-term decisions at a larger scale. Market information, inventory and supply-chain management, and upstream and downstream production should be incorporated into production-planning models for decision making from the perspective of the entire enterprise.
Tjoa et al. [157] developed the main model in the Optience SCMart Suite, an optimization application platform for planning and scheduling. Furnaces and reformers are correlated using nonlinear models to estimate the yields, and the planning tool was demonstrated in different case studies and scenarios for technical and economic decisions in a real petrochemical company. Wang et al. [158] proposed an optimization model that integrates the scheduling of cracking furnaces and downstream units under a synchronized global time scale. Zhao et al. [159] from the same group extended the model by involving the operational parameters of the cracking furnaces in the optimization. Zhao et al. [160] coupled the upstream refinery and downstream ethylene plant, and applied a Lagrangian algorithm to decompose the integrated mathematical model into a mixed-integer linear programming (MILP) problem for the refinery and a small-scale MINLP problem for the ethylene plant.
Significant progress has been made on the scheduling and planning problems of ethylene production from multiple aspects, but omissions remain. Although supply-chain models for refinery or crude-oil scheduling have been discussed thoroughly, the transplantation of these models to ethylene production is rarely implemented. Kwon et al. [161] constructed a recursive two-stage programming framework with a hedging model to predict naphtha prices; naphtha purchasing and production planning were determined under highly changeable chemical market conditions. More work is needed to fill the gap in supply-chain technique applications.
Big-data techniques and machine-learning models are rarely mentioned in related research. With the advent of the information era, the datasets accumulated in plants and enterprises have assumed an increasingly pivotal role in enterprise decision making. The appropriate utilization of these datasets by novel big-data and machine-learning approaches is a promising direction for future work.
Flare minimization during ethylene plant startup and shutdown has always been an important component of steam-cracking research. Simulation software packages for general or specific chemical processes have developed rapidly in recent years, and many researchers have optimized startup and shutdown processes through dynamic-simulation techniques. Progress on the dynamic simulation of startup and shutdown is summarized in the following categories, and the technical diagram is plotted in Fig. 7.
The main approach for the dynamic simulation and optimization of startup and shutdown is the three-step 'steady–dynamic–drive' framework proposed by Lamar University [162]. The methodology comprises three steps: (i) developing and validating steady-state simulation models; (ii) upgrading the steady-state models to dynamic-simulation models with real validations; and (iii) using the validated dynamic-simulation models to examine flare-minimization procedures for operational safety and operability. Xu et al. [163] developed a plant-wide dynamic simulation and tested a startup procedure with total recycles. Flaring during the dynamic-simulation-assisted startup was reduced by ~60% compared with the previously shortest startup. However, the dynamic model was composed mainly of distillation columns, and other parts were simplified to improve the overall convergence. On the basis of parameter modification, Zhao et al. [164] proposed an algorithm to obtain the initial state of a dynamic model of the startup-integrated cryogenic separation system; dynamic simulation was used to validate the proposed startup plans, but optimization of the startup plans was lacking. To depict a superstructure of the startup process, Song et al. [165] designed a short-term scheduling approach based on a resource-task network. Compared with operational experience or traditional steady-state simulation, this 'steady–dynamic–drive' framework provides explicit insight into the dynamic behavior of the process and meaningful suggestions for startup at the operational level. Dynamic simulation can explore possibilities that are not feasible to test in practice and enhance the quantitative evaluation and safety of startup and shutdown processes.

Fig. 7. Technical diagram for the dynamic simulation and optimization of startup and shutdown.
To integrate dynamic simulation and industrial practice, a proactive and cost-effective flare-minimization strategy was proposed by Dinh et al. [166], in which cracked-gas compressor (CGC) startup was identified as the most critical operation during startup. On the basis of this work, Yang et al. [167] established a strict pressure-driven model of the compressor, and considered factors such as anti-surge and process control to improve the operational safety of the CGC system. Song et al. [168] determined the steady-state operational parameters of nitrogen, mixed hydrocarbons and cracked gas in a CGC simulation model, and verified the safety and feasibility of conversion among various working conditions through dynamic simulation. Zhang et al. [169] investigated the startup scheme by providing insight into the apparatus in the compression and refrigeration flowsheet. The interrelations among parameters, including the compressor speed, super-high-pressure steam production, safe operating ranges and the precooling time downstream of the cracking system, were analyzed for ethylene plant startup with different working media. Simulation-based optimization of the pyrolysis-gas compressor reduces flare emissions and ensures system safety and operability.
Other research on dynamic simulation includes cold-box optimization and plant-wide shutdown scheduling. Yang et al. [170] introduced rigorous modeling, sensitivity analysis and operation optimization for the integrated cold-box and demethanizer system to reduce ethylene loss and energy consumption. Xu et al. [171] developed a systematic flare-minimization methodology for olefin-plant shutdown at the plant-wide level and established procedures for characterizing flaring-source generation. In their case study, flared raw materials and emissions were reduced by 90.23%, the estimated economic saving was 91.03%, and the social carbon-cost saving was 90.37%.
Dynamic simulation is of significance for flare-emission reduction and process safety during startup and shutdown, and optimization can yield resource savings and profit-margin improvements for the ethylene plant. These techniques are mature and well-designed, and could be applied in intelligent manufacturing with minimal manual intervention. However, many detailed problems remain to be solved in dynamic simulation, such as the addition of crosslines among devices and the influence of pressure and flow interactions.
Compiling, interfacing and platformization are critical steps toward industrial application and intelligent manufacturing with programmed models; they deliver software packages as final products to petrochemical plants and enterprises [172]. Current software packages cover thermal-cracking production from multiple aspects. For brevity, Table 1 introduces the main software.

Table 1. Software packages for industrial use of thermal cracking.
Other generic software, such as VMGSim, gPROMS and Aspen Plus, can simulate steam cracking using built-in or external model packages [177–179]. Open modeling environments and model libraries, support for importing programs written in other languages, and powerful solvers for nonlinear equations are major advantages of such software, making model establishment and development easy and flexible.
Industrial software packages build tangible bridges between research and the practical production and intelligent manufacturing of thermal cracking, with solid contributions from software developers. Various packages with distinctive features provide multiple choices for petrochemical enterprises. However, substantial modeling demand remains, especially in developing countries and newly built plants, which requires corresponding package versions and regional databases to be established and applied.
Thermal cracking accounts for more than 95% of light-olefin production, and related intelligent-manufacturing research promotes the improvement and development of the entire ethylene industrial chain. PSE techniques, such as modeling and simulation for feedstock characterization, reaction-network auto-generation and integrated furnace-model building; model analysis and intensification with intelligent feature extraction; and model control and optimization for operation tuning, scheduling, planning, startup and shutdown, allow decision makers to acquire process knowledge and achieve profit increments, resource savings, safety enhancements and environmental protection. Advances in these technologies provide industrial reformation with driving force and guiding breakthroughs, accelerating the pace of national intelligent-manufacturing strategies.
With the onset of the big-data era, models and algorithms require updating and innovation to cope with issues of data dimensionality and quantity. Novel computer and internet technologies exhibit great potential but pose challenges amid the reform trend of intelligent manufacturing. We believe the following are promising and significant future directions:
(1) Machine-learning approaches and model intensification for olefin production that integrate process knowledge;
(2) Effective feature extraction from big data and its further application in process analysis and design;
(3) Control and optimization using information embedded in massive historical data;
(4) Proprietary research and development of software packages and their application in the industrial internet.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
The authors gratefully acknowledge the National Natural Science Foundation of China for its financial support (U1462206).
Chinese Journal of Chemical Engineering, 2021, No. 10.