Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Date: Sunday, 06/July/2025
10:00am - 4:00pm: EFCE Meeting
Location: L226
1:00pm - 5:00pm: Registration
Location: Zone 2 - Cafetaria
2:00pm - 5:00pm: Workshop 1: do-mpc
Location: Zone 3 - Room E032
2:00pm - 5:00pm: Workshop 2: Effective course development
Location: Zone 3 - Aula D002
Engineering design activities range from classical process design to product design and manufacturing. This EURECHA-sponsored meeting focuses on discussion of this important feature of the chemical engineering curriculum and is divided into two parts: (a) a panel discussion in which invited panelists will present their approaches to the teaching of process design and associated courses; (b) a hands-on workshop in which participants working in teams will develop master plans for selected courses, with each team being mentored by one of the panel members. The first part of the workshop is open to all participants. The hands-on part will be open only to participants who register in advance (limited to 60 participants in total).
Course participants are invited to review a short video describing how the flipped classroom is used to teach process systems engineering (PSE) subjects; it is recommended to watch the video before attending ESCAPE-35.
2:00pm - 5:00pm: Workshop 3: AVEVA Software
Location: Zone 3 - Room E031
2:00pm - 5:00pm: Workshop 4: Biochemical reactors
Location: Zone 3 - Room E033
The workshop introduces basic optimization and constraint-based methods in metabolic engineering and the development of large-scale kinetic models, with a view to integrating strain and process design. The workshop continues with an introduction to digital twins for bioreactors and a demonstration of physics-informed neural networks in process development and control applications.
5:00pm - 6:30pm: Welcome Reception
Location: Zone 2 - Cafetaria
Date: Monday, 07/July/2025
8:00am - 8:30am: Registration
Location: Zone 2 - Cafetaria
8:30am - 9:00am: Opening Session
Location: Zone 1 - Aula Louisiane
9:00am - 10:00am: Plenary 1 - Prof. Wolfgang Marquardt - Long-Term Achievement Award (CAPE LTA)
Location: Zone 1 - Aula Louisiane
Chair: Zdravko Kravanja
Co-chair: Jan Van Impe

From Systems and Control to Process Systems Engineering and Beyond


This plenary presentation reviews the academic career of Wolfgang Marquardt. He will give a glance at his early scientific contributions during his qualification phase as a young researcher, which set the initial conditions for his later work. The focus of the presentation will be on his scientific activities at RWTH Aachen University. The talk will outline his long-term research strategy and introduce various thematic lines of his multi-faceted work in process systems engineering. He will sketch major ideas and results on selected topics, in particular computer-aided support of mathematical modelling and engineering design processes, the exploitation of adaptivity in numerical methods for monitoring, estimation and control of large-scale process systems, optimal control methods and software and their applications, conceptual design and synthesis of hybrid separation processes, and integrated product, reaction pathway and process design with applications to tailor-made fuels from biomass. He will also sketch his roles and experience in science policy and research management in the last third of his career. The talk concludes with a few reflections on his learnings and a glimpse into the future of the field.
10:00am - 10:30am: Coffee Break
Location: Zone 2 - Cafetaria
10:00am - 10:30am: Poster Session 1
Location: Zone 2 - Cafetaria
 

IMPLEMENTATION AND ASSESSMENT OF FRACTIONAL CONTROLLERS FOR AN INTENSIFIED DISTILLATION SYSTEM

Luis Refugio Flores-Gómez1, Fernando Israel Gómez-Castro1, Francisco López-Villarreal2, Vicente Rico-Ramírez3

1Universidad de Guanajuato, Mexico; 2Instituto Tecnológico de Villahermosa, Mexico; 3Tecnológico Nacional de México en Celaya, Mexico

Process intensification is a strategy in chemical engineering devoted to the development of technologies that enhance the performance of the operations in a chemical process. This is achieved through the implementation of modified and multi-tasking equipment, among other approaches. Although various studies have demonstrated that the dynamic properties of intensified systems can be better than those of conventional configurations, the development of better control structures is still necessary (Wang et al., 2018). The use of fractional controllers can be an alternative to achieve this target. Fractional PID controllers are based on fractional calculus, which increases the flexibility of the controller by allowing fractional orders for the derivative and integral actions; however, this also makes tuning the controller more complex. This work presents an approach to implement and assess fractional controllers in an intensified distillation system. The study is performed in the Simulink environment in Matlab, tuning the controllers through a hybrid optimization approach: first using a genetic algorithm to find an initial point, and then refining the solution with the fmincon algorithm. The calculations also involve the estimation of fractional derivatives and integrals with fractional-order numerical techniques. As a case study, experimental dynamic data for an extractive distillation column are used (Kumar et al., 1984). The data have been fitted to fractional-order functions. Since the number of experimental points is low, a strategy is implemented to interpolate the data and generate a more adequate fit to the fractional-order transfer function. Through this approach, the sum of squared errors is below 2.9x10^-6 for perturbations in the heat duty and 1.2x10^-5 for perturbations in the reflux ratio. Moreover, after controller tuning, a minimum ISE value of 1,278.12 is obtained, which is approximately 8% lower than the value obtained with an integer-order controller.
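
As a rough illustration of the two-stage tuning strategy described above (global evolutionary search followed by local refinement), the following Python sketch tunes an ordinary PI controller on a placeholder first-order process by minimizing the ISE. It is not the authors' Matlab/Simulink implementation; the process model, gain bounds, and the use of scipy's differential_evolution and L-BFGS-B in place of the genetic algorithm and fmincon are illustrative assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def ise(gains, dt=0.1, t_end=50.0):
        # Integral of squared error for a unit setpoint step on a placeholder
        # first-order process G(s) = 1/(10 s + 1) under PI control.
        kp, ki = gains
        y, integ, cost = 0.0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            e = 1.0 - y                    # setpoint = 1
            integ += e * dt
            u = kp * e + ki * integ        # PI control law
            y += dt * (-y + u) / 10.0      # explicit Euler step of the process
            cost += e**2 * dt
        return cost

    # Stage 1: global evolutionary search (stand-in for the genetic algorithm).
    bounds = [(0.0, 20.0), (0.0, 5.0)]
    coarse = differential_evolution(ise, bounds, maxiter=50, seed=1)

    # Stage 2: local refinement of the coarse solution (stand-in for fmincon).
    refined = minimize(ise, coarse.x, bounds=bounds, method="L-BFGS-B")
    print("tuned gains:", refined.x, "ISE:", refined.fun)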

References

Wang, C., Wang, C., Cui, Y., Guang, C., Zhang, Z., 2018. Economics and controllability of conventional and intensified extractive distillation configurations for acetonitrile/ethanol/benzene mixtures. Industrial & Engineering Chemistry Research, 57, 10551-10563.

Kumar, S., Wright, J.D., Taylor, P.A. 1984. Modelling and dynamics of an extractive distillation column. Canadian Journal of Chemical Engineering, 62, 185-192.



Sustainable pathways toward a decarbonized steel industry

Selene Cobo Gutiérrez1, Max Kroppen2, Juan Diego Medrano2, Gonzalo Guillén-Gosálbez2

1University of Cantabria; 2ETH Zurich

The steel industry, responsible for about 7% of global CO2 emissions [1], faces significant pressure to reduce its environmental impact. Various technological pathways are available, but it remains unclear which is the most effective in minimizing CO2 emissions without causing greater environmental harm in other areas. This work conducts a prospective life cycle assessment of five steelmaking pathways to identify the most environmentally sustainable option in terms of global warming impacts and damage to human health, ecosystems, and resources. The studied processes are 1) blast furnace plus basic oxygen furnace (BF-BOF, the dominant steelmaking route at present), 2) BF-BOF with carbon capture and storage (CCS), 3) coal-based direct reduction of iron paired with an electric arc furnace (DRI-EAF), 4) DRI-EAF using natural gas, and 5) the more recently developed low-temperature iron oxide electrolysis (IOE). Life cycle inventories were developed using a detailed Aspen Plus® model for BF-BOF, data from the Ecoinvent V3.8 database [2], and literature for the other processes. The results indicate that the BF-BOF process with CCS, gas-based DRI-EAF, and IOE are the most promising pathways for reducing the steel industry’s carbon footprint while minimizing overall environmental damage. If renewable energy and hydrogen produced via water electrolysis are available at competitive costs, DRI-EAF and IOE show the most promise. However, if low-carbon hydrogen is not available and the main electricity source is the global grid mix, BF-BOF with CCS has the lowest overall impacts. The choice of technology depends on the expected development of the energy system and the current technological stock. Retrofitting existing BF-BOF plants with CCS is a viable option, while constructing new DRI-EAF plants may be more advantageous due to their versatility and higher decarbonization potential. IOE, although promising, is not yet ready for immediate industrial deployment but could be a key technology in the long term. In conclusion, the optimal technology choice depends on regional energy availability and technological readiness levels. These findings underscore the need for a tailored approach to decarbonizing the steel industry, balancing environmental benefits with economic and infrastructural considerations.

References

1. W. Cornwall. Science, 2024, 384(6695), 498-499.

2. G. Wernet, C. Bauer, B. Steubing, J. Reinhard, E. Moreno-Ruiz and B. Weidema, Int. J. Life Cycle Assess., 2016, 21, 1218–1230.



OPTIMIZATION OF HEAT EXCHANGERS THROUGH AN ENHANCED METAHEURISTIC STRATEGY: THE SUCCESS-BASED OPTIMIZATION ALGORITHM

Oscar Daniel Lara-Montaño1, Fernando Israel Gómez-Castro2, Claudia Gutiérrez-Antonio1, Elena Niculina Dragoi3

1Universidad Autónoma de Querétaro, Mexico; 2Universidad de Guanajuato, Mexico; 3Gheorghe Asachi Technical University of Iasi, Romania

The optimal design of the units in a chemical process is commonly challenging due to the high nonlinearity of the models that represent the equipment. This also applies to heat exchangers, where the mathematical equations modeling such units are nonlinear, include nonconvex terms, and require the simultaneous handling of continuous and discrete variables. Finding the global optima of such models is complex, so the optimization strategy must be robust. In this context, metaheuristics are a robust alternative to classical optimization strategies. They are a set of stochastic algorithms that can efficiently find the region of the global optimum when adequately tuned and are suitable for nonconvex functions with several local optima. The literature presents numerous metaheuristics, each with distinct properties, many of which require parameter tuning. However, no universal method exists to solve all optimization problems, as stated by the no-free-lunch theorem (Wolpert and Macready, 1997). This implies that a given algorithm may perform well for some problems but inadequately for others, as reported for the optimal design of heat exchangers by Lara-Montaño et al. (2021). As such, new optimization strategies are still under development, and this work presents an enhanced metaheuristic algorithm, the Success-Based Optimization Algorithm (SBOA). The development of the method takes the concept of success from a social perspective as its initial inspiration. As a case study, the design of a shell-and-tube heat exchanger using the Bell-Delaware method is analyzed to minimize the total annual cost. The algorithm's performance is compared with current state-of-the-art metaheuristic algorithms, such as particle swarm optimization, the grey wolf optimizer, cuckoo search, and differential evolution. Based on the findings, in terms of the standard deviation and mean values, the suggested algorithm outperforms nearly all other approaches except differential evolution. Nevertheless, the SBOA has shown faster convergence than differential evolution and finds best solutions with lower total annual costs.
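
To illustrate the kind of benchmarking setup described in the abstract, the sketch below minimizes a mock total-annual-cost function with one continuous and one discrete design variable using scipy's differential evolution; the cost expression is a placeholder, not the Bell-Delaware model, and the SBOA itself is not reproduced here.

    import numpy as np
    from scipy.optimize import differential_evolution

    def annual_cost(x):
        # Placeholder surrogate for a shell-and-tube total annual cost:
        # x[0] = tube length [m] (continuous), x[1] = number of tube passes
        # (discrete, handled by rounding). Not the Bell-Delaware model.
        length = x[0]
        passes = int(round(x[1]))
        capital = 500.0 * length * passes**0.8        # mock capital cost term
        operating = 2000.0 / (length * passes)        # mock pumping cost term
        return capital + operating

    bounds = [(1.0, 10.0), (1.0, 8.0)]
    result = differential_evolution(annual_cost, bounds, seed=0, maxiter=100)
    print("length:", result.x[0], "passes:", int(round(result.x[1])), "cost:", result.fun)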

References

Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.

Lara-Montaño, O.D., Gómez-Castro, F.I., Gutiérrez-Antonio, C. 2021. Comparison of the performance of different metaheuristic methods for the optimization of shell-and-tube heat exchangers. Computers & Chemical Engineering, 152, 107403.



OPTIMAL DESIGN OF PROCESS EQUIPMENT THROUGH HYBRID MECHANISTIC-ANN MODELS: EFFECT OF HYBRIDIZATION

Zaira Jelena Mosqueda-Huerta1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

Artificial neural networks (ANNs) are data-based structures that allow representing the performance of units in chemical processes. They have been widely used to represent the operation of equipment such as reactors (e.g. Cerinski et al., 2020) and separation units (e.g. Jawad et al., 2020). To develop ANN-based models, it is necessary to obtain data to train the network. Thus, their use for process design represents a challenge, since the equipment does not yet exist and actual data are commonly not available. On the other hand, despite the popularity of artificial neural networks for generating models of chemical processes, there are warnings about the risks of depending completely on these data-based models while ignoring the fundamental knowledge of the phenomena occurring in the units, which is provided by traditional mechanistic models. Thus, hybrid models have arisen to combine the power of ANNs to predict interactions that are difficult to represent through rigorous modelling while maintaining the relevant information provided by the traditional mechanistic approach. However, an open question is which part of the model should be represented through a data-based approach for design applications. To answer this question, this work analyzes the effect of the degree of hybridization on the design and optimization of a shell-and-tube heat exchanger, assessing the performance of a complete ANN model and a hybrid model in terms of computational time and accuracy of the solution. Since data for the heat exchanger are not available, such information is obtained through the solution of the rigorous model for randomly selected conditions. The Bell-Delaware approach is employed to perform the design of the exchanger; this model is characterized by non-linearities and the need to handle discrete and continuous variables. Using these data, a neural network is trained in Python to obtain an approximation for the area and cost of the exchanger. A second neural network is generated to predict the most nonlinear component of the model, namely the calculation of the heat transfer coefficients, while the other calculations are performed with the rigorous model. Both representations are optimized with the differential evolution algorithm. According to the preliminary results, for the same architecture, the hybrid model produces designs whose standard deviation, relative to the areas predicted by the rigorous model, is approximately 30% lower than that of the complete ANN model. However, the hybrid model requires approximately 11 times the computational time of the complete ANN model.
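
The following sketch illustrates the hybridization idea under stated assumptions: a small scikit-learn network stands in for the data-based part (the heat transfer coefficient), while a simple mechanistic energy balance computes the required area. The correlation, numbers, and network settings are placeholders, not the authors' Bell-Delaware model or trained networks.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic training data standing in for rigorous design results: inputs are
    # flow velocity [m/s] and tube diameter [m]; the output is a film heat
    # transfer coefficient h [W/m2 K] from a mock correlation.
    X = rng.uniform([0.5, 0.01], [3.0, 0.05], size=(500, 2))
    h = 800.0 * X[:, 0]**0.8 / X[:, 1]**0.2

    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    ann.fit(X, h)

    def hybrid_area(velocity, diameter, duty_w=5e5, lmtd=30.0, h_shell=1500.0):
        # Mechanistic part: Q = U*A*LMTD with 1/U = 1/h_tube + 1/h_shell.
        # Data-driven part: h_tube predicted by the trained ANN.
        h_tube = ann.predict([[velocity, diameter]])[0]
        u = 1.0 / (1.0 / h_tube + 1.0 / h_shell)
        return duty_w / (u * lmtd)

    print("required area [m2]:", hybrid_area(1.5, 0.02))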

References

Cerinski, D., Baleta, J., Mikulčić, H., Mikulandrić, R., Wang, J., 2020. Dynamic modelling of the biomass gasification process in a fixed bed reactor by using the artificial neural network. Cleaner Engineering and Technology, 1, 100029.

Jawad, J., Hawari, A.H., Zaidi, S. 2020. Modeling of forward osmosis process using artificial neural networks (ANN) to predict the permeate flux. Desalination, 484, 114427.



MODELLING OF A PROPYLENE GLYCOL PRODUCTION PROCESS WITH ARTIFICIAL NEURAL NETWORKS: OPTIMIZATION OF THE ARCHITECTURE

Emilio Alba-Robles1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Jahaziel Alberto Sánchez-Gómez1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

The mathematical models used to represent chemical processes are characterized by high non-linearity, mainly associated with the thermodynamic and kinetic relationships. The inclusion of non-convex bilinear terms is also common when modelling chemical processes. This leads to challenges when optimizing an entire process. In recent years, interest in the development of data-based models to represent processing units has increased; the work of Kwon et al. (2021) on the dynamic performance of distillation columns is one example. Artificial neural networks (ANNs) are among the most relevant strategies to develop data-based models. The accuracy of the predictions of an ANN is highly dependent on the quality of the provided data, the nature of the interactions among the studied variables, and the architecture of the network. Indeed, the selection of an adequate architecture is itself an optimization problem. In this work, two strategies are proposed and assessed for determining the architecture of ANNs that represent the performance of a chemical process. As a case study, a process to produce propylene glycol using glycerol as raw material is analyzed (Sánchez-Gómez et al., 2023). The main units of the process are the chemical reactor and two distillation columns. To generate the data required to train the artificial neural network, random values for the design and operating variables are generated from a simulation in Aspen Plus. To determine the best architecture for the artificial neural network, two approaches are used: (i) the random generation of structures for the ANN, and (ii) the formal optimization of the architecture employing the ant colony algorithm, which is particularly useful for discrete problems (Zhao et al., 2022). In both cases, the decision variables are the number of hidden layers and the number of neurons per layer. The objective function is the minimization of the mean squared error. Both strategies generate ANN-based predictions in good agreement with the data from rigorous simulation, with values of r2 higher than 99.9%. However, the ant colony algorithm achieves the best fit, although with slower convergence.
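
A minimal sketch of strategy (i), the random generation of ANN architectures, is given below using scikit-learn on synthetic data; the ant colony variant and the actual Aspen Plus data are not reproduced, and the variable ranges and network settings are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the flowsheet samples: two scaled inputs, one output.
    X = rng.uniform(0.0, 1.0, size=(400, 2))
    y = np.sin(3 * X[:, 0]) + X[:, 1]**2
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    best_layers, best_mse = None, np.inf
    for _ in range(20):                        # strategy (i): random architectures
        n_layers = rng.integers(1, 4)          # 1 to 3 hidden layers
        layers = tuple(int(n) for n in rng.integers(4, 33, size=n_layers))
        model = MLPRegressor(hidden_layer_sizes=layers, max_iter=3000,
                             random_state=1).fit(X_tr, y_tr)
        mse = mean_squared_error(y_te, model.predict(X_te))
        if mse < best_mse:
            best_layers, best_mse = layers, mse

    print("best architecture:", best_layers, "test MSE:", best_mse)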

References

Kwon, H., Oh, K.C., Choi, Y., Chung, Y.G., Kim, J., 2021. Development and application of machine learning-based prediction model for distillation column. International Journal of Intelligent Systems, 36, 1970-1997.

Sánchez-Gómez, J.A., Gómez-Castro, F.I., Hernández, S. 2023. Design and intensification of the production process of propylene glycol as a high value-added glycerol derivative. Computer Aided Chemical Engineering, 52, 1915-1920.

Zhao, H., Zhang, C., Zheng, X., Zhang, C., Zhang, B. 2022. A decomposition-based many-objective ant colony optimization algorithm with adaptive solution construction and selection approaches. Swarm and Evolutionary Computation, 68, 100977.



CFD Analysis of the Claus Reaction Furnace under Varying Operating Conditions: Temperature and Excess Air for Sulfur Recovery

PABLO VIZGUERRA MORALES1, MIGUEL ANGEL MORALES CABRERA2, FABIAN SALVADOR MEDEROS NIETO1

1INSTITUTO POLITECNICO NACIONAL, MEXICO; 2UNIVERSIDAD VERACRUZANA, MEXICO

In this work, a Claus reaction furnace in a sulfur recovery unit (SRU) of the Abadan Oil Refinery, Iran, was analyzed. The combustion operating temperature is important since it ensures optimal performance of the reactor. The novelty of the research lies in the study of temperature control at 1400, 1500 and 1600 K and excess air of 10, 20 and 30% to improve the reaction yield and H2S conversion. The CFD simulation was carried out in Ansys Fluent in transient state and in three dimensions, considering the standard turbulence model, an energy model with transport by convection, and mass transport with chemical reaction using the Arrhenius finite-rate/eddy-dissipation model for a kinetic model of destruction of the acid gases H2S and CO2. The simulation shows good agreement with the experimental results of the industrial process of the Abadan Oil Refinery: the percentage difference between experimental and simulated results varies between 0.6 and 4% depending on the species. The temperature of 1600 K with excess air of 30% was the best case, giving a mole fraction of 0.065 of S2 at the outlet and a conversion of the acid gas (H2S) of 95.64%, which compares well with the experimental value.



Numerical Analysis of the Hydrodynamics of Proximity Impellers using the SPH Method

MARIA SOLEDAD HERNÁNDEZ-RIVERA1, KAREN GUADALUPE MEDINA-ELIZARRARAZ1, JAZMÍN CORTEZ-GONZÁLEZ1, RODOLFO MURRIETA-DUEÑAS1, JUAN GABRIEL SEGOVIA-HERNÁNDEZ2, CARLOS ENRIQUE ALVARADO-RODRÍGUEZ2, JOSÉ DE JESÚS RAMÍREZ-MINGUELA2

1TECNOLÓGICO NACIONAL DE MÉXICO/ CAMPUS IRAPUATO, DEPARTAMENTO DE INGENIERÍA QUÍMICA; 2UNIVERSIDAD DE GUANAJUATO/DEPARTAMENTO DE INGENIERÍA QUÍMICA

Mixing is a fundamental operation in many industrial processes, typically achieved using agitated tanks for homogenization. However, the design of tanks and impellers is often overlooked during the selection of the agitation system, leading to excessive energy consumption and non-homogeneous mixing. To address these operational inefficiencies, Computational Fluid Dynamics (CFD) can be utilized to analyze the hydrodynamics and mixing times within the tank. CFD employs mathematical modeling of mass, heat, and momentum transport phenomena to simulate fluid behavior. Among the latest methods used for modeling stirred tank hydrodynamics is Smoothed Particle Hydrodynamics (SPH), a mesh-free Lagrangian approach that tracks individual particles carrying physical properties such as mass, position, velocity, and pressure. This method offers advantages over traditional mesh discretization techniques by analyzing particle interactions to simulate fluid behavior more accurately. In this study, we compare the performance of different impellers based on hydrodynamics and mixing times during the homogenization of water and ethanol in a 0.5 L stirred tank. The tank and agitators were rigorously sized, operating at 70% capacity with the following fluid properties: ρ₁ = 1000 kg/m³, ρ₂ = 789 kg/m³, ν₁ = 1E-6 m²/s, and ν₂ = 1.52E-6 m²/s. The simulation, conducted for 2 minutes at a turbulent flow regime with a Reynolds number of 10,000, involved three impellers (double ribbon, paravisc, and hybrid) simulated using the DualSPHysics software at a stirring speed of 34 rpm. The initial particle distance was set to 1 mm, generating 270,232 fluid particles and 187,512 boundary particles representing the tank and agitator. The results included velocity profiles, flow patterns, divergence, vorticity, and density fields to quantify mixing performance. The Q criterion was also applied to identify whether fluid motion was dominated by rotation or deformation and to locate stagnation zones. The double ribbon impeller demonstrated the best performance, achieving 88.28% mixing in approximately 100 seconds, while the paravisc and hybrid impellers reached 12.36% and 11.8% mixing, respectively. The findings highlight SPH as a robust computational tool for linking hydrodynamics with mixing times, allowing for the identification of key parameters that enhance mixing efficiency.



Surrogate Modeling of Twin-Screw Extruders Using a Recurrent Deep Embedding Network

Po-Hsun Huang1, Yuan Yao1, Yen-Ming Chen2, Chih-Yu Chen2, Meng-Hsin Chen2

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan; 2Industrial Technology Research Institute, Hsinchu 30013, Taiwan

Twin-screw extruders (TSEs) are extensively used in the plastics processing industry, with their performance highly dependent on operating conditions and screw configurations. However, optimizing these parameters through experimental trials is often time-consuming and resource-intensive. Although some neural network models have been proposed to tackle the screw arrangement problem [1], they fail to account for the positional information of the screw elements. To overcome this challenge, we propose a recurrent deep embedding network that leverages a deep autoencoder with a recurrent neural network (RNN) structure to develop a surrogate model based on simulation data.

The details are as follows. An autoencoder is a neural network architecture designed to learn latent representations of input data. In this study, we integrate the autoencoder with an RNN to capture the complex physical relationships between the operating conditions, screw configurations of TSEs, and their corresponding performance metrics. To further enhance the model’s ability to represent screw positions, we incorporate an attention layer from the Transformer model architecture. This addition allows the model to more effectively capture the spatial relationships between the screw elements.

The model was trained and evaluated using simulation data generated from the Ludovic software package. The experimental setup included eight screw element arrangements and three key operating variables: temperature, feed rate, and rotation speed. For data collection, we employed two data sampling strategies: progressive Latin hypercube sampling [2] and random sampling.
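
For illustration, the snippet below generates a plain Latin hypercube design and a random design over the three operating variables using scipy.stats.qmc; the variable ranges are assumed for the example, and the progressive LHS variant cited above is not reproduced.

    import numpy as np
    from scipy.stats import qmc

    # Assumed variable ranges for illustration: barrel temperature [deg C],
    # feed rate [kg/h], screw rotation speed [rpm].
    lower = [180.0, 5.0, 100.0]
    upper = [240.0, 25.0, 600.0]

    # Plain Latin hypercube design (not the progressive variant of the paper).
    sampler = qmc.LatinHypercube(d=3, seed=0)
    lhs_points = qmc.scale(sampler.random(n=50), lower, upper)

    # Plain random sampling for comparison.
    rng = np.random.default_rng(0)
    random_points = rng.uniform(lower, upper, size=(50, 3))

    print("LHS design, first rows:\n", lhs_points[:3])
    print("random design, first rows:\n", random_points[:3])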

The results demonstrate that the proposed surrogate model accurately predicts TSE performance across both training and testing datasets. Notably, the model generalizes well to unseen operating conditions, making reliable predictions even for scenarios not encountered during training. This highlights the model’s robustness and versatility as a tool for optimizing TSE configurations.

In conclusion, the recurrent deep embedding surrogate model offers a highly efficient and effective solution for optimizing TSE performance. By integrating this model with optimization algorithms, it is possible to rapidly identify optimal configurations, resulting in improved product quality, enhanced process efficiency, and reduced production costs.



Predicting Final Properties in Ibuprofen Production with Variable Batch Durations

Kuan-Che Huang, David Shan-Hill Wong, Yuan Yao

Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan

This study addresses the challenge of predicting final properties in batch processes with highly uneven durations, using the ibuprofen production process as a case study. A novel methodology is proposed and compared against traditional regression algorithms, which rely on batch trajectory synchronization as a pre-processing step. The performance of each method is evaluated using established metrics.

Batch processes are widely used in the chemical industry. Nevertheless, variability between production runs often leads to differences in batch durations, resulting in unequal lengths of process variable trajectories. Common solutions include time series truncation or time warping. However, truncation risks losing valuable process information, thereby reducing model prediction accuracy. Conversely, time warping may introduce noise or distort trajectories when compressing significantly unequal sequences, causing the model to learn incorrect process information. In multivariate chemical processes, combining time warping with batch-wise unfolding can result in the curse of dimensionality, especially when data is limited, thereby increasing the risk of overfitting in machine learning models.

The data for this study were generated using the Aspen Plus V12 simulation software, focusing on batch reactors. To capture the process characteristics, statistical sampling was employed to strategically position data points within a reasonable process range. The final isobutylbenzene conversion rate for each batch was used to determine batch completion. A total of 1,000 simulation runs were conducted, and the resulting data were used to develop a neural network model. The target variables to predict are: (1) the isobutylbenzene conversion rate, and (2) the accumulated mass of ibuprofen.

To handle the unequal-length trajectories in batch processes, this research constructs a dual-transformer deep neural network with multihead attention and layer normalization mechanisms to extract shared information from the high-dimensional, unequal-length manipulated variable profiles into a latent space, generating equal-dimensional latent codes. As an alternative strategy for feature extraction, a dual-autoencoder framework is also employed to achieve equal-dimensional representations. The representation vectors are then used as inputs for downstream deep learning models to predict the target variables.



Develop a Digital Twin System Based on Physics-Informed Neural Networks for Pipeline Leakage Detection

Wei-Shiang Lin1, Yi-Hsiang Cheng2, Zhen-Yu Hung2, Yuan Yao1

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan; 2Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310401, Taiwan

As the demand for industrial and domestic resources continues to grow, the transportation of water, fossil fuels, and chemical products increasingly depends on pipeline systems. Therefore, monitoring pipeline transportation has become crucial, as leaks can lead to severe environmental disasters and safety risks. To address this challenge, this study is dedicated to developing a pipeline leakage detection system based on digital twin technology.

The core of this research lies in combining existing physical knowledge, such as the continuity and momentum equations, with neural network technology. These physical models are incorporated into the loss function of the neural network, enabling the model to be trained based on physical laws. By integrating physical models with neural networks, we aim to achieve high accuracy in detecting pipeline leakages. An advantage of Physics-informed Neural Networks (PINNs) is that they do not rely on large datasets and can enforce physical constraints during model training, making them a powerful tool for addressing pipeline safety challenges. Using the PINN model, we can more accurately simulate the fluid dynamics within pipelines, thereby significantly enhancing the prediction of potential leaks.

In detail, the system employs a fully connected neural network alongside the continuity and momentum partial differential equations to describe fluid pressure and flow rate variations. These equations not only predict pressure transients and pressure wave propagation but also account for the impact of the pipeline friction coefficient on flow behavior. By integrating data fitting with physical constraints, our model aims to minimize both the prediction loss and the partial differential equation loss, ensuring that predictions align closely with real-world data while adhering to physical laws. This approach provides both interpretability and reliability.
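
A minimal PyTorch-style sketch of this loss construction is shown below, assuming simplified 1-D continuity and momentum residuals and placeholder pipeline constants; in practice a data-fitting term on measured pressures and flow rates would be added to the physics loss, and the exact equations and coefficients here are illustrative, not the authors' formulation.

    import torch

    torch.manual_seed(0)

    # Fully connected network mapping (x, t) to (pressure head H, flow rate Q).
    net = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 2))

    # Placeholder constants: wave speed, gravity, pipe area, friction factor, diameter.
    a, g, A, f, D = 1000.0, 9.81, 0.05, 0.02, 0.25

    def pde_residuals(x, t):
        # Simplified 1-D continuity and momentum residuals (illustrative form only).
        xt = torch.stack([x, t], dim=1)
        H, Q = net(xt).unbind(dim=1)
        grad = lambda u, v: torch.autograd.grad(u, v, torch.ones_like(u),
                                                create_graph=True)[0]
        continuity = grad(H, t) + (a**2 / (g * A)) * grad(Q, x)
        momentum = grad(Q, t) + g * A * grad(H, x) + f * Q * torch.abs(Q) / (2 * D * A)
        return continuity, momentum

    # Collocation points for the physics loss; measured (leak-free) data would
    # contribute an additional data-fitting term in the real system.
    x = torch.rand(256, requires_grad=True)
    t = torch.rand(256, requires_grad=True)

    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        optimizer.zero_grad()
        c_res, m_res = pde_residuals(x, t)
        loss = (c_res**2).mean() + (m_res**2).mean()
        loss.backward()
        optimizer.step()
    print("final physics loss:", float(loss))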

The PINN model is trained on data from normal pipeline operations to describe fluid dynamics in non-leakage conditions. When the input data reflects flow rates and pressures indicative of a leak, the predicted values will exhibit statistically significant deviations from the actual values. The process involves collecting prediction errors from the training data, evaluating their statistical distribution, and establishing a detection statistic using parametric or non-parametric methods. A rejection region and control limits are then defined, followed by the creation of a control chart to detect leaks. Finally, we test the accuracy and efficiency of the control chart using field or experimental data to ensure reliability.



Higher alcohol = higher value? Identifying Promising and Unpromising Synthesis Routes for 1-Propanol

Lukas Spiekermann, Mae McKenna, Luca Bosetti, André Bardow

Energy & Process Systems Engineering, Department of Mechanical and Process Engineering, ETH Zürich

In response to climate change, the chemical industry is investigating synthesis routes using renewable carbon sources (Shukla et al., 2022). CO2 and biomass have been shown to be convertible into 1-propanol, which could serve as a future platform chemical with diverse applications and higher value than traditional bulk chemicals (Jouny et al., 2018, Schemme et al., 2018, Gehrmann and Tenhumberg, 2020, Vo et al., 2021). A variety of potential pathways to 1-propanol have been proposed, but their respective benefits and disadvantages remain unclear, which limits guidance for future innovation.

Here, we aim to identify the most promising routes to produce 1-propanol and establish development targets necessary to become competitive with benchmark technologies. To allow for a comprehensive assessment, we embed 1-propanol into the overall chemical supply chain. For this purpose, we formulate a technology choice model (Kätelhön et al., 2019, Meys et al., 2021) of the chemical industry to evaluate the cost-effectiveness and climate impact of various 1-propanol synthesis routes. The model includes thermo-catalytic, electrocatalytic, and fermentation-based synthesis steps with various intermediates to produce 1-propanol from CO2, diverse biomass feedstocks, and fossil resources. A comprehensive techno-economic analysis coupled with life cycle assessment quantifies both the economic and environmental potentials of new synthesis routes.
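
As a toy illustration of a technology choice model (not the authors' model or data), the linear program below selects production levels of three hypothetical 1-propanol routes to meet a demand at minimum cost under an emission cap, using scipy's linprog; all numbers are invented for the example.

    from scipy.optimize import linprog

    # Toy technology choice model: production levels of three hypothetical
    # 1-propanol routes must meet demand at minimum cost under an emission cap.
    # Costs [$/t] and emissions [t CO2-eq/t] are invented for illustration.
    cost      = [900.0, 1200.0, 1500.0]   # fossil route, CO2 hydrogenation, fermentation
    emissions = [3.0, 0.8, 0.5]
    demand = 100.0                        # t/yr of 1-propanol required
    emission_cap = 120.0                  # t CO2-eq/yr allowed

    res = linprog(
        c=cost,
        A_ub=[emissions], b_ub=[emission_cap],     # climate constraint
        A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],     # demand balance
        bounds=[(0, None)] * 3,
        method="highs")

    print("production per route [t/yr]:", res.x, "total cost [$/yr]:", res.fun)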

Our findings define performance targets for direct conversion of CO2 to 1-propanol via thermo-catalytic hydrogenation or electrocatalysis to become a beneficial synthesis route. If these performance targets are not met, the direct synthesis of 1-propanol is substituted by multi-step processes based on syngas and ethylene from CO2 or biomass.

Overall, our study demonstrates the critical role of synthesis route optimization in guiding the development of new chemical processes. By establishing quantitative benchmarks, we provide a roadmap for advancing 1-propanol synthesis technologies, contributing to the broader effort of reducing the chemical industry’s carbon footprint.

References

P. R. Shukla, et al., 2022, Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, Cambridge University Press, Cambridge, UK and New York, NY, USA)

M. Jouny, et al., 2018, Ind. Eng. Chem. Res. 57(6), 2165–2177

C. H. Vo, et al., 2021, ACS Sustain. Chem. Eng. 9(31), 10591–10600

S. Schemme, et al., 2018, Journal of CO2 Utilization 27, 223–237

S. Gehrmann, N. Tenhumberg, 2020, Chemie Ingenieur Technik 92(10), 1444–1458

A. Kätelhön, et al., 2019, Proceedings of the National Academy of Sciences 116(23), 11187–11194

R. Meys, et al., 2021, Science 374(6563), 71–76



A Python/Numpy-based package to support model discrimination and identification

Seyed Zuhair Bolourchian Tabrizi1,2, Elena Barbera1, Wilson Ricardo Leal da Silva2, Fabrizio Bezzo1

1Department of Industrial Engineering, University of Padova, via Marzolo 9, 35131 Padova PD, Italy; 2FLSmidth Cement, Green Innovation, Denmark

Process design, scale-up, and optimisation require the precise determination of the underlying phenomena and the identification of accurate models to describe them. This task can become complex when multiple rival models exist, uncertainty in the data is high, and the data needed to select and calibrate the models are costly to obtain. Numerical techniques for screening various models and narrowing the pool of candidates without requiring additional experimental effort have been introduced to streamline the pre-discrimination stage [1]. These techniques have been followed by the development of model-based design of experiments (MBDoE) methods, which not only design new experiments to maximize the information for easier discrimination between rival models but also reduce the confidence ellipsoid volume of estimated parameters by enriching the information matrix through optimal experiment design [2].
The value of performing these techniques in an open-source and user-friendly environment has been recognized by the community and has led to the development of several valuable packages, especially in the Python/PYOMO environment, which implement many of these numerical techniques [3,4]. These existing packages have made significant contributions to parameter estimation and calibration of models as well as model-based design of experiments. However, a systematic package that flexibly performs all of these steps, with a clear distinction between model simulation and model identification in an object-oriented approach, is still highly advocated. To address these challenges, we present a new Python package that serves as an independent numerical wrapper around the kernel functions (models and their numerical interpretation). It facilitates crucial model identification steps, including the screening of rival models (through global sensitivity, identifiability, and estimability analyses), parameter estimation, uncertainty analysis, and model-based design of experiments to discriminate and calibrate models. This package not only brings together all the necessary steps but also conducts the analysis in an object-oriented manner, offering the flexibility to adapt to the physical constraints of various processes. It is independent of specific programming structures and relies on Numpy and Python arrays, making it as general as possible while remaining compatible with features available in these packages. The application and advantages are demonstrated through an in-silico approach to a multivariate model identification case.
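
A condensed sketch of two of these steps (parameter estimation with uncertainty analysis, followed by a D-optimal choice of the next experiment) is shown below for a single assumed kernel model using NumPy and SciPy; it only hints at the workflow and is not the package's actual interface.

    import numpy as np
    from scipy.optimize import curve_fit

    # Assumed kernel model: first-order response y = theta1 * (1 - exp(-theta2 * t)).
    def model(t, theta1, theta2):
        return theta1 * (1.0 - np.exp(-theta2 * t))

    # Mock measurements (in the package these would be the user's experiments).
    rng = np.random.default_rng(0)
    t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    y_obs = model(t_obs, 2.0, 0.7) + rng.normal(0.0, 0.05, t_obs.size)

    # Step 1: parameter estimation with covariance-based uncertainty analysis.
    theta, cov = curve_fit(model, t_obs, y_obs, p0=[1.0, 1.0])
    print("estimates:", theta, "standard errors:", np.sqrt(np.diag(cov)))

    # Step 2: choose the next sampling time by a D-optimal (determinant) criterion
    # on a Fisher information matrix built from finite-difference sensitivities.
    def fisher(times, theta, h=1e-5, sigma=0.05):
        S = np.empty((len(times), len(theta)))
        for j in range(len(theta)):
            perturbed = np.array(theta, dtype=float)
            perturbed[j] += h
            S[:, j] = (model(times, *perturbed) - model(times, *theta)) / h
        return S.T @ S / sigma**2

    candidates = np.linspace(0.1, 10.0, 100)
    scores = [np.linalg.det(fisher(np.append(t_obs, tc), theta)) for tc in candidates]
    print("next experiment at t =", candidates[int(np.argmax(scores))])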

References:
[1] Moshiritabrizi, I., Abdi, K., McMullen, J. P., Wyvratt, B. M. & McAuley, K. B. Parameter estimation and estimability analysis in pharmaceutical models with uncertain inputs. AIChE Journal (2023).
[2] Asprey, S. P. & Macchietto, S. Statistical tools for optimal dynamic model building. Comput Chem Eng 24, (2000).
[3] Wang, J. & Dowling, A. W. Pyomo.DOE: An open-source package for model-based design of experiments in Python. AIChE Journal 68, (2022).
[4] Klise, K. A., Nicholson, B. L., Staid, A. & Woodruff, D. L. Parmest: Parameter Estimation Via Pyomo. in 41–46 (2019).



Experiences in Teaching Statistics and Data Science to Chemical Engineering Students at the University of Wisconsin-Madison

VICTOR ZAVALA

UNIVERSITY OF WISCONSIN-MADISON, United States of America

In this talk, I offer a perspective on my recent experiences in designing a course on statistics and data science for chemical engineers at the University of Wisconsin-Madison and in writing a textbook on the subject.

Statistics is one of the pillars of modern science and engineering and of emerging topics such as data science and machine learning; despite this, its scope and relevance have remained stubbornly misunderstood and underappreciated in chemical engineering education (and in engineering education at large). Specifically, statistics is often taught with an emphasis on data analysis. However, statistics is much more than that; statistics is a mathematical modeling paradigm that complements the physical modeling paradigms used in chemical engineering (e.g., thermodynamics, transport phenomena, conservation, reaction kinetics). Specifically, statistics can help model random phenomena that might not be predictable from physics alone (or from deterministic physical laws), can help quantify the uncertainty of predictions obtained with physical models, can help discover physical models from data, and can help create models directly from data (in the absence of physical knowledge).

The desire to design a new course on statistics for chemical engineering came about from my personal experience in learning statistics in college and in identifying the significant gaps in my understanding of statistics throughout my professional career. Similar feelings are often shared with me by professionals working in industry and academia. Throughout my professional career, I have been exposed to a broad range of applications in which knowledge of statistics has proven to be essential: uncertainty quantification, quality control, risk assessment, modeling of random phenomena, process monitoring, forecasting, machine learning, computer vision, and decision-making under uncertainty. These applications are pervasive in industry and academia.

The course that I designed at UW-Madison (and the accompanying textbook) follows a "data-models-decisions" pipeline. The intent of this design is to emphasize that statistics is a modeling paradigm that maps data to decisions; moreover, this design also aims to "connect the dots" between different branches of statistics. The focus on the pipeline is also important in reminding students that understanding the application context matters. Similarly, the nature of the decision and the data available influence the type of model used. The design is also intended to help the student understand the close interplay between statistical and physical modeling; specifically, we emphasize how statistics provides tools to model aspects of a system that cannot be fully predicted from physics. The design is also intended to help the student appreciate how statistics provides a foundation for a broad range of modern tools of data science and machine learning.

The talk also offers insights into experiences in using software as a way to reduce complex mathematical concepts to practice. Moreover, I discuss how statistics provides an excellent framework to teach and reinforce concepts of linear algebra and optimization. For instance, it is much easier to explain the relevance of eigenvalues when they are presented from the perspective of data science (e.g., they measure information).



Rule-based autocorrection of Piping and Instrumentation Diagrams (P&IDs) on graphs

Lukas Schulze Balhorn1, Niels Seijsener2, Kevin Dao2, Minji Kim1, Dominik P. Goldstein1, Ge H. M. Driessen2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, The Netherlands; 2Fluor BV, Amsterdam, The Netherlands

Undetected errors or suboptimal designs in Piping and Instrumentation Diagrams (P&IDs) can cause increased financial costs, hazardous situations, unnecessary emissions, and inefficient operation. These errors are currently captured in extensive design processes leading to safe, operable, and maintainable facilities. However, grassroots engineering projects can involve tens to thousands of P&ID pages, leading to a significant revision workload. With the advent of digitalization and data exchange standards such as the Data Exchange in the Process Industry (DEXPI), there are new opportunities for algorithmic support of P&ID revision.

We propose a rule-based, automatic correction (i.e., autocorrection) of errors in P&IDs represented by the DEXPI data model. Our method detects potential errors, suggests improvements, and provides explanations for these suggestions. Specifically, our autocorrection method represents a DEXPI P&ID as a graph, in which nodes represent DEXPI classes and directed edges represent the connectivity between them. The nodes retain all attributes of the DEXPI classes. Additionally, each rule consists of an erroneous P&ID template and the corresponding correct template, both represented as graphs. The correct template includes the rule explanation as a graph attribute. The rules are then applied at inference time: the autocorrection method searches for the erroneous template via subgraph isomorphism and replaces it with the corresponding correct template in the P&ID graph.
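
The sketch below mimics this rule-application step on a toy graph using networkx: an assumed rule detects a pump discharging directly into a vessel and splices in a check valve carrying the rule explanation. The node classes, rule, and graph structure are illustrative stand-ins, not the DEXPI data model.

    import networkx as nx
    from networkx.algorithms import isomorphism

    # Toy P&ID graph: nodes carry a DEXPI-like class attribute "cls".
    pid = nx.DiGraph()
    pid.add_node("P-101", cls="Pump")
    pid.add_node("V-201", cls="Vessel")
    pid.add_edge("P-101", "V-201")            # pump discharges straight into a vessel

    # Erroneous template: a pump piped directly into a vessel (no check valve).
    erroneous = nx.DiGraph()
    erroneous.add_node("pump", cls="Pump")
    erroneous.add_node("vessel", cls="Vessel")
    erroneous.add_edge("pump", "vessel")

    matcher = isomorphism.DiGraphMatcher(
        pid, erroneous,
        node_match=isomorphism.categorical_node_match("cls", None))

    # For every match, splice in a check valve and attach the rule explanation,
    # mimicking the replacement of the erroneous template by the correct one.
    for mapping in list(matcher.subgraph_isomorphisms_iter()):
        inverse = {v: k for k, v in mapping.items()}   # template node -> P&ID node
        pump, vessel = inverse["pump"], inverse["vessel"]
        valve = f"CV-{pump}"
        pid.remove_edge(pump, vessel)
        pid.add_node(valve, cls="CheckValve",
                     explanation="Pump discharge lines should include a check valve")
        pid.add_edge(pump, valve)
        pid.add_edge(valve, vessel)

    print(list(pid.edges))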

An industry case study demonstrates the method’s accuracy and performance, with rule inference taking less than a second. However, rules can conflict, requiring careful application order, and rules must be extended for specific cases. The explainability of the rule-based approach builds trust in the method and facilitates its integration into existing engineering workflows. Furthermore, DEXPI provides an existing interface between the autocorrection method and industrial P&ID development software.



Deposition rate constants: a DPM approach for particles in pipe flow

Alkhatab Bani Saad, Edward Obianagha, Lande Liu

University of Huddersfield, United Kingdom

Particle deposition is a phenomenon that occurs in many natural and industrial systems. Nevertheless, modelling and understanding particle deposition in flow is still a considerable challenge, especially the determination of the deposition rate constant. This study focuses on the use of the discrete particle model to calculate the deposition rate constant of particles flowing in a horizontal pipe. It was found that increasing the velocity of the flow decreases particle deposition, while deposition increases with particle size. Similarly, the deposition flux was proportional to the concentration of the particles. The deposit per unit area of the inner pipe surface is higher at lower fluid velocity; when the velocity of the continuous phase is increased by a factor of 100, the deposit volume per unit area decreases by half. The deposition rate constant was found to vary nonlinearly with both the axial location along the pipe and the particle size. It was also interesting to see that the constant is substantially higher at the inlet of the pipe and then gradually decreases along the axial direction of the flow. The dependence of the deposition rate constant on particle size was found to be exponential.

The novelty of this research is that, by extracting quantitative parameters (deposition rate constants in this case) from a steady-state Lagrangian simulation, Eulerian unsteady-state population balance modelling can be used to determine the thickness of the particle deposit in a pipe.



Plate heat exchangers: a CFD study on the effect of dimple shape on heat transfer

Mitchell Stolycia, Lande Liu

University of Huddersfield, United Kingdom

This article studies how heat transfer is affected by different dimple shapes on a plate within a plate heat exchanger using computational fluid dynamics (CFD). Four different dimple shapes were designed and studied: spherical, edge-smoothed spherical, normal distribution, and error distribution. In a pipe of 0.1 m diameter, with the dimple height being 0.05 m, located at a distance of 0.3 m from the inlet and under a fluid velocity of 0.5 m s–1, the simulation shows that the normal distribution dimple produced a 0.53 K increase in fluid temperature after 1.5 s. This increase is 10 times that of the spherical shape, 8 times that of the edge-smoothed spherical shape, and 1.13 times that of the error distribution shape in their contributions to elevating the fluid temperature. This was primarily due to the large increase in the intensity and number of eddies that the normal distribution dimple induced in the fluid flow.

The effect that a fully developed velocity profile had on heat transfer was also analysed for an array of normal distribution dimples in a 5 m long pipe. It was found that fully developed flow resulted in the greatest temperature change, which was 9.5% more efficient than half developed flow and 31% more efficient than placing dimples directly next to one another.

The novelty of this research is in demonstrating how a typical plate heat exchanger can be designed and optimised by a computational approach prior to manufacturing.



Modeling and life cycle assessment for ammonia cracking process

Heungseok Jin, Yeonsoo Kim

Kwangwoon University, Korea, Republic of (South Korea)

Ammonia (NH3) is gaining attention as a sustainable hydrogen (H2) carrier for long-distance transportation due to its higher boiling point and lower boil-off issues compared to liquefied hydrogen. These properties make ammonia a practical choice for storing and transporting hydrogen over long distances. However, extracting hydrogen from ammonia requires significant energy due to the endothermic nature of the reaction. Optimizing the operational conditions for this decomposition process is crucial to ensure energy-efficient hydrogen production. In particular, we focus on determining the amount of slipped ammonia that provides the most efficient energy generation through mixed oxidation, where both slipped ammonia (unreacted NH3) and a small amount of hydrogen are used.

Key factors include the temperature and pressure of the ammonia cracking process, the ammonia-to-hydrogen ratio in the fuel mixture, and the catalyst kinetics. By optimizing these conditions, the goal is to maximize hydrogen production while minimizing the hydrogen consumed for fueling and the NH3 consumed for NOx reduction.

In addition to the mass and energy balance derived from process modeling, a comprehensive life cycle assessment (LCA) is conducted to evaluate the sustainability of ammonia as a hydrogen carrier. The LCA considers the entire process, from ammonia production (often through the energy-intensive Haber-Bosch process or renewable energy-driven water electrolysis) to transportation and ammonia cracking for hydrogen extraction. This assessment highlights the environmental and energy impacts at each stage, offering insights into how to reduce the overall carbon footprint of using ammonia as a hydrogen carrier.



Technoeconomic Analysis of a Methanol Conversion Process Using Microwave-Assisted Dry Reforming and Chemical Looping

Omar Almaraz, Srinivas Palanki

West Virginia University, United States of America

The global methanol market size was valued at $28.78 billion in 2020 and is projected to reach $41.91 billion by 2026 [1]. Methanol has traditionally been produced from natural gas by first converting methane to syngas and then converting the syngas to methanol. However, this is a very energy-intensive process and produces a significant amount of the greenhouse gas carbon dioxide. Hence, there is motivation to look for alternative routes to the manufacture of methanol. In this research, a novel microwave reactor is simulated for the dry reforming step of a process that converts methane to methanol. The objective is to produce 14,200 lbmol/h of methanol, which is the current production rate of methanol at Natgasoline LLC, Texas (USA) using the traditional steam reforming process [2].

Dry reforming requires a stream of carbon dioxide as well as a stream of methane to produce syngas. Additional hydrogen is required to achieve the carbon-to-hydrogen ratio necessary to produce methanol from syngas. These streams of carbon dioxide and hydrogen are generated via chemical looping. A three-reactor chemical looping system is developed that utilizes methane as the feed to produce a pure stream of hydrogen and a pure stream of carbon dioxide. The carbon dioxide stream from the chemical looping reactor system is mixed with a desulfurized natural gas stream and is sent to a novel microwave syngas reactor, which operates at a temperature of 800 °C and a pressure of 1 bar to produce a mixture of carbon monoxide and hydrogen. The stream of hydrogen obtained via chemical looping is added to this syngas stream and sent to a methanol reactor train where methanol is produced. These reactors operate at a temperature range of 220-255 °C and a pressure of 76 bar. The reactor outlet stream is sent to a distillation train where the product methanol is separated from methane, carbon dioxide, hydrogen, and other products. The carbon dioxide is recycled back to the microwave reactor.

This process was simulated in ASPEN Plus. The thermodynamic property method used was RK-Soave for the process to convert methane to syngas and NRTL for the process to convert syngas to methanol. The energy requirement for operating the microwave reactor is determined via simulation in COMSOL. Heat integration tools are utilized to reduce the hot utility and cold utility usage in this integrated plant, leading to optimal operation. A technoeconomic analysis is conducted to determine the overall capital cost and the operating cost of this novel process. The simulation results from this study demonstrate the significant potential of utilizing a microwave-assisted reactor for dry reforming of methane.

References

[1] Methanol Market by Feedstock (Natural Gas, Coal, Biomass), Derivative (Formaldehyde, Acetic Acid), End-use Industry (Construction, Automotive, Electronics, Solvents, Packaging), and Region - Global Forecast to 2028, Markets and Markets. (2023). https://www.marketresearch.com/MarketsandMarkets-v3719/Methanol-Feedstock-Natural-Gas-Coal-30408866/

[2] M. E. Haque, N. Tripathi, and S. Palanki, "Development of an Integrated Process Plant for the Conversion of Shale Gas to Propylene Glycol," Industrial & Engineering Chemistry Research, 60 (1), 399-41 (2021)



A Techno-enviro-economic Transparency of a Coal-fired Power Plant: Integrating Biomass Co-firing and CO2 Sequestration Technology in a Carbon-priced Environment

Norhuda Abdul Manaf1, Nilay Shah2, Noor Fatina Emelin Nor Fadzil3

1Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom; 3Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur

The energy industry, as the primary contributor to worldwide greenhouse gas emissions, plays a crucial role in addressing global climate issues. Despite numerous governmental commitments and initiatives aimed at combating the root causes of rising temperatures, carbon dioxide (CO2) emissions from industrial and energy-related activities continue to climb. Coal-fired power plants are significant contributors to this situation. Currently, two promising strategies for mitigating emissions from coal-fired power plants are CO2 capture and storage (CCS) and biomass gasification. CCS is a mature technology in the field, while biomass gasification, a process that converts biomass into gaseous fuel, offers an encouraging avenue for generating sustainable energy resources. While extensive research has explored the techno-economic potential of coal-biomass co-firing with CCS (CB-CCS) retrofit systems, no work has considered the synergistic impact of coal power plant stranded assets, carbon price schemes, and co-firing ratios. This study develops an hourly-resolution optimization model framework using mixed-integer linear programming to predict the operational profile and economic potential of CB-CCS-retrofit systems. Two dynamic scenarios for ten-year operations are evaluated: with and without carbon price imposition, subject to the minimum coal power plant stranded asset and CO2 emissions at different co-firing ratios. These scenarios reflect possible implementations in developed countries with established carbon price schemes, such as the United Kingdom and Australia, as well as developing or middle-income countries without strong carbon policy schemes, such as Malaysia and Indonesia. The outcome of this work will help determine whether retrofitting individual coal power plants is worthwhile for reducing greenhouse gas emissions. It is also significant to comprehend the role of CCS in the retrofit system and the associated co-firing ratio for biomass gasification systems. This work contributes to the international agenda delineated in the International Energy Agency (IEA) report addressing carbon lock-in and stranded assets, which potentially stem from the premature decommissioning of contemporary coal-based electricity generation facilities. This work also aligns with Malaysia's National Energy Transition Roadmap, which focuses on bioenergy and CCS.
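
A toy hourly dispatch model in this spirit is sketched below with the PuLP package (and its bundled CBC solver), choosing coal and biomass shares and an hourly CCS on/off decision under a carbon price; all cost, emission, and capture figures are invented for illustration and do not represent the plant data or the full ten-year model described above.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    hours = range(24)                         # one illustrative operating day
    load = 400.0                              # MWh demanded each hour
    coal_cost, bio_cost = 30.0, 55.0          # $/MWh of electricity (invented)
    coal_ef, bio_ef = 0.9, 0.05               # t CO2 per MWh (invented)
    carbon_price = 80.0                       # $/t CO2
    capture_frac, ccs_cost = 0.9, 20.0        # CCS capture fraction, $/MWh when on
    max_cofire = 0.3                          # maximum biomass co-firing ratio

    m = LpProblem("cb_ccs_dispatch", LpMinimize)
    coal = {h: LpVariable(f"coal_{h}", 0, load) for h in hours}
    bio = {h: LpVariable(f"bio_{h}", 0, load) for h in hours}
    ccs_on = {h: LpVariable(f"ccs_{h}", cat=LpBinary) for h in hours}
    captured = {h: LpVariable(f"cap_{h}", 0) for h in hours}

    for h in hours:
        emitted = coal[h] * coal_ef + bio[h] * bio_ef
        m += coal[h] + bio[h] == load                          # hourly demand balance
        m += bio[h] <= max_cofire * load                       # co-firing ratio limit
        m += captured[h] <= capture_frac * emitted             # capture cannot exceed emissions
        m += captured[h] <= ccs_on[h] * capture_frac * load * coal_ef  # only when CCS is on

    m += lpSum(coal[h] * coal_cost + bio[h] * bio_cost + ccs_on[h] * ccs_cost * load
               + carbon_price * (coal[h] * coal_ef + bio[h] * bio_ef - captured[h])
               for h in hours)
    m.solve()
    print("total daily cost:", m.objective.value())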



Methodology for multi-actor and multi-scale decision support for Water-Food-Energy systems

Amaya Saint-Bois1, Ludovic Montastruc1, Marianne Boix1, Olivier Therond2

1Laboratoire de Génie Chimique, UMR 5503 CNRS, Toulouse INP, UPS, 4 Allée Emile Monso, 31432 Toulouse Cedex 4, France; 2UMR 1121 LAE INRAE- Université de Lorraine – ENSAIA, 54000 Nancy, France

We have designed a generic multi-actor multi-level framework to optimize the management of water-energy-food nexus systems. These are essential systems for human life characterized by water, energy and food synergies and trade-offs at varied spatial and temporal scales. They are managed by cross sector decision-makers at varied decision levels. They are complex and dynamic systems for which the operational level cannot be overlooked to design adequate management strategies.

Our methodology combines spatially explicit, operational multi-agent-based integrated simulations of water-energy-food nexus systems with strategic decision-making methods (Saint-Bois et al., 2024). We have implemented it to allocate land-use alternatives to agricultural plots. The number of possible combinations of parcel land-use allocations in a territory equals the number of land-use alternatives explored for each parcel raised to the power of the number of parcels in the territory. Stochastic multi-criteria decision-making methods have been designed to provide decision support for large territories (more than 1000 parcels). A multi-objective optimization method has been designed to produce optimized regional-level land-use scenarios.

The methodology has been applied to an agricultural watershed of approximately 800 km2 and 15224 parcels situated downstream of the French Aveyron River. The watershed experiences water stress and is located in one of France’s sunniest regions. Renewable energy production on agricultural land appears as a means to meet national renewable energy production targets and to move towards autonomous, sustainable agricultural systems and regions. The installation of renewable energy generation units on agricultural land facing water stress is a perfect illustration of a complex water-energy-food system for which a holistic approach is required. MAELIA (Therond et al., 2014) (modelling of socio-agro-ecological systems for landscape integrated assessment), a multi-agent-based platform developed by French researchers to simulate complex agro-hydrological systems, has been used to simulate the dynamics of water-energy-food nexus systems at the operational level. Three strategic multi-criteria decision-making methods that combine Monte Carlo simulations with the Analytic Hierarchy Process method have been implemented. The first one is local; it selects land-use alternatives that optimize multi-sector parcel-level indicators. The other two are regional; decisions are based on regional indicators. The first regional decision-making method identifies the best uniform regional scenario from those known, and the second regional decision-making method explores combinations of parcel land-use allocations and selects the one that optimizes multi-sector criteria at the regional level. A multi-objective optimization method that combines MILP (Mixed Integer Linear Programming) and goal programming has been implemented with IBM’s ILOG CPLEX optimization studio to find parcel-level land-use allocations that optimize regional multi-sector criteria.
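As a small, self-contained illustration of the Analytic Hierarchy Process step mentioned above, the sketch below derives criteria weights from a pairwise comparison matrix via its principal eigenvector; the 3x3 matrix is invented and does not correspond to the criteria actually used in the study.

# Minimal AHP sketch: weights from a reciprocal pairwise comparison matrix
# (Saaty 1-9 scale). The matrix is a made-up example.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalised criteria weights

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix, Saaty's table).
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), "consistency ratio:", round(ci / 0.58, 3))

In the stochastic decision-making methods of the study, weights of this kind would then score Monte Carlo samples of candidate parcel land-use allocations.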

The three decision-making methods provide the same result: covering all land that is suitable for solar panels with solar panels optimizes parcel and regional multi-criteria performance indicators. Perspectives include simulating scenarios with supportive agricultural government policies, adding social indicators, and designing a game-theory-based strategic decision-making method.



Synergies of Adaptive Learning for Surrogate-Based Flowsheet Model Maintenance

Balázs Palotai1,2, Gábor Kis1, János Abonyi2, Ágnes Bárkányi2

1MOL Group Plc.; 2Faculty of Engineering, University of Pannonia

The integration of digital models with business processes and real-time data access is pivotal for advancing Industry 4.0 and autonomous systems. This evolution necessitates that digital models maintain high fidelity and credibility to ensure reliable decision support in dynamic environments. Flowsheet models, commonly used for process simulation and optimization in such contexts, often face challenges related to convergence issues and computational demands during optimization. Surrogate models, which approximate complex models with simpler mathematical representations, present a promising solution to mitigate these challenges by estimating calibration factors for flowsheet models efficiently. Traditionally, surrogate models are trained using Latin Hypercube Sampling to capture a broad range of system behaviors. However, physical systems in industrial applications are typically operated within specific local regions, where globally trained surrogate models may not perform adequately. This discrepancy limits the effectiveness of surrogate models in accurately calibrating flowsheet models, especially when the system deviates from the conditions used during the surrogate model training.

This paper introduces a novel adaptive calibration methodology that combines the principles of active and adaptive learning to enhance surrogate model performance for flowsheet model calibration. The proposed approach iteratively refines the surrogate model by generating new data points in the local operating regions of interest using the flowsheet model itself. This adaptive retraining process ensures that the surrogate model remains accurate across both local and global domains, thus providing reliable calibration factors for the flowsheet model.
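The sketch below illustrates the adaptive idea in its simplest form, not the authors' implementation: when the plant moves to a new operating region, the expensive flowsheet model (here a stand-in function) is queried locally and the Gaussian-process surrogate is retrained on the augmented data set.

# Adaptive local retraining of a surrogate, minimal sketch with scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def flowsheet_model(x):
    # placeholder for an expensive rigorous simulation returning a calibration factor
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(20, 1))          # initial global (LHS-like) training set
y = flowsheet_model(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)

x_op = np.array([[1.6]])                      # current local operating point
for _ in range(5):                            # adaptive retraining loop
    mean, std = gp.predict(x_op, return_std=True)
    if std[0] < 0.01:                         # surrogate already trusted locally
        break
    x_new = x_op + rng.normal(0, 0.1, size=x_op.shape)   # sample near the operating region
    X = np.vstack([X, x_new])
    y = np.append(y, flowsheet_model(x_new).ravel())
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)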

A case study on a simplified refinery process demonstrates the effectiveness of the proposed methodology. The adaptive surrogate-based calibration significantly reduces the computational time associated with direct simulation-based calibration while maintaining high accuracy in model predictions. The results show an improvement in both the efficiency and precision of the flowsheet model calibration process, highlighting the synergistic benefits of integrating surrogate models into adaptive calibration strategies for industrial process engineering.

In summary, the synergies between adaptive maintenance of surrogate and flowsheet models offer a robust solution for maintaining model fidelity and reducing computational costs in dynamic industrial environments. This research contributes to the field of computer-aided process engineering by presenting a methodology that not only supports real-time decision-making but also enhances the adaptability and performance of digital models in the face of evolving physical systems.



Comparison of Prior Mean and Multi-Fidelity Bayesian Optimization of a Hydroformylation Reactor

Stefan Tönnis, Luise F. Kaven, Eike Cramer

Process Systems Engineering, RWTH Aachen University, Germany

Accurate process models are not always available and can be prohibitively expensive to obtain for model-based optimization. Hence, the process systems engineering (PSE) community has gained an interest in Bayesian Optimization (BO), for it approximates black-box objectives using the probabilistic Gaussian processes (GP) surrogate models [1]. BO fits the surrogate models by iteratively proposing experiments by optimizing over so-called acquisition functions and updating the surrogate model based on the results. Although BO is generally known as sample-efficient, treating chemical engineering design problems as fully black-box problems can still be prohibitively expensive, particularly for high-cost technical-scale experiments. At the same time, there is an extensive knowledge and modeling base for chemical engineering design problems that are fully neglected by black-box algorithms such as BO. One widely known option to include such prior knowledge in BO is prior mean modeling [2], where the user complements the BO algorithm with an initial guess, i.e., the prior mean. Alternatives include hybrid models or compositions of GPs with mechanistic equations [3]. A lesser-known alternative is augmenting the GP with lower fidelity data [4], e.g., from low-cost simulations or approximate models. Such low-fidelity data can give cheap yet valuable insights, which reduces the number of high-cost experiments. In this work, we compare the usage of prior mean and multi-fidelity modeling for BO in PSE design problems. We first review how prior mean and multi-fidelity modeling can be incorporated using multi-fidelity benchmark problems such as the well-known Forrester, Rosenbrock, and Rastrigin test functions. In a second step, we apply the two methods to optimize a multi-phase reaction mini-plant process, including a decanter separation step and a recycle stream. The process is based on the hydroformylation of 1-dodecene in microemulsion systems [5]. Overall, we observe accelerated convergence in the different test functions and the hydroformylation mini-plant. In fact, combining both prior mean and multi-fidelity modeling methods achieves the best overall fit of the GP surrogate models. However, our analysis also reveals how poorly chosen prior mean functions can cause the algorithm to get stuck in local minima or lead to numerical failure.
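A hedged sketch of the simplest form of prior mean modelling follows: the GP is fitted to the residuals of a user-supplied prior mean, so the surrogate falls back to the prior far from data; the Forrester-type objective and the crude grid-based acquisition are illustrative stand-ins, not the authors' setup.

# Prior-mean BO sketch: GP on residuals y - m(x), scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):          # stand-in for a costly experiment (Forrester-like)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def prior_mean(x):                   # cheap mechanistic / low-fidelity guess (assumed)
    return 0.5 * expensive_objective(x) + 10 * (x - 0.5)

X = np.array([[0.1], [0.5], [0.9]])
y = expensive_objective(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y - prior_mean(X).ravel())          # model residuals only

x_test = np.linspace(0, 1, 101).reshape(-1, 1)
mu_res, std = gp.predict(x_test, return_std=True)
mu = mu_res + prior_mean(x_test).ravel()      # add the prior mean back

# Greedy lower-confidence-bound pick over the grid, a crude stand-in for
# optimising a proper acquisition function.
x_next = x_test[np.argmin(mu - 2.0 * std)]
print("next experiment at x =", x_next)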

Bibliography

[1] Roman Garnett. Bayesian Optimization. Cambridge University Press, Cambridge, United Kingdom, 2023.

[2] Aniket Chitre, Jayce Cheng, Sarfaraz Ahamed, Robert C. M. Querimit, Benchuan Zhu, Ke Wang, Long Wang, Kedar Hippalgaonkar, and Alexei A. Lapkin. pHbot: Self-driven robot for pH adjustment of viscous formulations via physics-informed ML. Chemistry–Methods, 4(2), 2024.

[3] Leonardo D. González and Victor M. Zavala. BOIS: Bayesian optimization of interconnected systems. IFAC-PapersOnLine, 58(14):446–451, 2024.

[4] Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In Ryan P. Adams and Vibhav Gogate, editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pages 788–798. PMLR, 2020.

[5] David Müller, Markus Illner, Erik Esche, Tobias Pogrzeba, Marcel Schmidt, Reinhard Schomäcker, Lorenz T. Biegler, Günter Wozny, and Jens-Uwe Repke. Dynamic real-time optimization under uncertainty of a hydroformylation mini-plant. Computers & Chemical Engineering, 106:836–848, 2017.



A global sensitivity analysis for a bipolar membrane electrodialysis capturing carbon dioxide from the air

Grazia Leonzio1, Alexia Thill2, Nilay Shah2

1Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123 Cagliari, Italy , Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK; 2Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK

Global warming and climate change are two critical, current global challenges. For this reason, as the concentration of atmospheric carbon dioxide (CO2) continues to rise, it is becoming increasingly imperative to develop efficient and cost-effective technologies for controlling the atmospheric CO2 concentration. In addition to the capture of CO2 from flue gases and industrial processes, new solutions to capture CO2 from the air have been proposed and investigated in the literature, such as absorption, adsorption, ion exchange resins, mineral carbonation, membranes, photocatalysis, cryogenic separation, and electrochemical and electrodialysis approaches (Leonzio et al., 2022). These are the well-known direct air capture (DAC) or negative emission technologies (NETs).

Among them, in the electrodialysis approach, a bipolar membrane electrodialysis (BPMED) stack is used to regenerate the hydroxide-based solvent (NaOH or KOH aqueous solution) coming from an absorption column capturing CO2 from the air (Sabatino et al., 2020). In this way, it is possible to recycle the solvent to the column and release the captured CO2 for storage or utilization.

Although not yet deployed at an industrial or even pilot scale, CO2 separation through BPMED has already been described and analyzed in the literature (Eisaman et al., 2011; Sabatino et al., 2020, 2022; Vallejo Castano et al., 2024).

Regarding the economic aspect, a preliminary levelized cost of the BPM-based process was estimated at 770 $/tonCO2, owing to the high cost of the membrane, the large electricity consumption, and uncertainties in the lifetime of the materials (Sabatino et al., 2020). Given the relatively early stage of development, process optimization through a mathematical model is therefore useful to support design and development by identifying the best operating conditions and parameters, together with a Global Sensitivity Analysis (GSA) aimed at highlighting the operating parameters with the greatest influence on cost and energy consumption.

In this research, a mathematical model of a BPMED unit capturing CO2 from the air is proposed and used to conduct a GSA identifying the operating conditions that most strongly affect total costs (including capital and operating expenditures) and energy consumption, the considered Key Performance Indicators (KPIs). The investigated uncertain parameters are: current density, concentration in the rich solution, membrane active area, number of cell pairs, CO2 partial pressure in the gas phase, load ratio and carbon loading.
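For illustration, a variance-based (Sobol) GSA over the seven uncertain parameters listed above could be set up with the SALib package as sketched below; the parameter bounds and the placeholder cost function are invented and do not reflect the BPMED model of this work.

# Sobol sensitivity analysis sketch with SALib; ranges and model are placeholders.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 7,
    "names": ["current_density", "rich_conc", "membrane_area",
              "cell_pairs", "pCO2", "load_ratio", "carbon_loading"],
    "bounds": [[200, 1000], [0.5, 2.0], [0.5, 5.0],
               [10, 200], [0.05, 1.0], [0.1, 0.9], [0.2, 1.0]],   # illustrative ranges
}

def kpi_model(x):
    # placeholder for the BPMED techno-economic model returning a total cost KPI
    i, c, a, n, p, lr, cl = x
    return 1e-3 * i * a * n / max(c * lr * cl * p, 1e-6)

X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme
Y = np.array([kpi_model(x) for x in X])
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s} S1={s1:6.3f}  ST={st:6.3f}")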

References

Vallejo Castano, S., Shu, Q., Shi, M., Blauw, R., Loldrup Fosbøl, P., Kuntke, P., Tedesco, M., Hamelers, H.V.M., 2024. Chemical Engineering Journal 488, 150870

Eisaman, M. D.; Alvarado, L.; Larner, D.; Wang, P.; Littau, K.A. 2011. Energy Environ. Sci. 4 (10), 4031.

Leonzio, G., Fennell, P.S., Shah, N., 2022. Appl. Sci., 12(16), 8321

Sabatino, F., Mehta, M., Grimm, A., Gazzani, M., Gallucci, F., Kramer, G.J., and Annaland, M., 2020. Ind. Eng. Chem. Res. 59, 7007−7020

Sabatino, F., Gazzani, M., Gallucci, F., Annaland, M., 2022. Ind. Eng. Chem. Res. 61, 12668−12679



Refrigerant Selection and Cycle Design for Industrial Heat Pump Applications exemplified for Distillation Processes

Jonas Schnurr, Momme Adami, Mirko Skiborowski

Hamburg University of Technology, Institute of Process System Engineering, Germany

Abstract

In the scope of global warming, the essential objectives for industry are the transition to renewable energy and the improvement of energy efficiency. A potential approach to achieving both of these goals in a single step is the implementation of heat pumps, which recover low-temperature waste heat that would otherwise be lost to the environment and elevate it to a higher temperature level where it can be reused within the process. The application range of heat pumps is therefore not limited to new designs: they also have huge potential as retrofit options for existing processes, reducing external energy demand [1] and electrifying industrial processes, thereby promoting a more sustainable industry with an increased share of renewable electricity generation.

Nevertheless, the optimal design of heat pumps depends heavily on the selection of an appropriate refrigerant, as the refrigerant performance is influenced by both thermodynamic properties and the heat pump cycle design, which is typically fixed in current selection approaches. Methods like iterative approaches [2], database screening followed by simulations [3], and optimization of thermodynamic parameters with subsequent identification of real refrigerants [4] are computationally intensive and time-consuming. Although these methods can identify thermodynamically beneficial refrigerants, practical application may be hindered by limitations of the compressor. Additionally, these approaches are challenging to implement in process design tools.

The current work presents a novel approach for fast screening and identification of suitable refrigerants and heat pump cycle designs for specific applications, considering a variety of established refrigerants. The method automatically evaluates the performance of 38 pure refrigerants for any heat pump with a defined heat sink and source, adapting the heat pump design by incorporating an internal heat exchanger in cases where superheating the refrigerant prior to compression is required. By considering practical constraints such as compression ratio and compressor discharge temperature, the remaining suitable refrigerants are ranked based on energy demand or coefficient of performance (COP).
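The screening logic can be illustrated, in heavily simplified form, with the open-source CoolProp property library: rank a handful of candidate refrigerants for a given sink/source pair by an ideal-cycle COP and discard those violating compression-ratio or discharge-temperature limits. The temperatures, constraint values, efficiency and fluid list below are illustrative assumptions, and the internal heat exchanger adaptation of the actual tool is omitted.

# Refrigerant screening sketch (not the authors' tool), using CoolProp.
from CoolProp.CoolProp import PropsSI

T_evap, T_cond = 60 + 273.15, 120 + 273.15      # K, from heat source/sink (assumed)
max_pr, max_T_discharge = 6.0, 180 + 273.15     # practical constraints (assumed)
eta_is = 0.7                                     # isentropic compressor efficiency (assumed)

results = []
for fluid in ["R245fa", "Butane", "Pentane", "Cyclopentane", "Ammonia"]:
    p_evap = PropsSI("P", "T", T_evap, "Q", 1, fluid)
    p_cond = PropsSI("P", "T", T_cond, "Q", 1, fluid)
    if p_cond / p_evap > max_pr:
        continue                                 # compression ratio constraint
    h1 = PropsSI("H", "T", T_evap, "Q", 1, fluid)          # saturated vapour in
    s1 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
    h2s = PropsSI("H", "P", p_cond, "S", s1, fluid)         # isentropic outlet
    h2 = h1 + (h2s - h1) / eta_is
    T2 = PropsSI("T", "P", p_cond, "H", h2, fluid)
    if T2 > max_T_discharge:
        continue                                 # discharge temperature constraint
    h3 = PropsSI("H", "T", T_cond, "Q", 0, fluid)           # saturated liquid out
    results.append((fluid, (h2 - h3) / (h2 - h1)))           # heating COP

for fluid, cop in sorted(results, key=lambda r: -r[1]):
    print(f"{fluid:12s} COP = {cop:.2f}")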

The application of an integrated process design and screening is demonstrated for the evaluation of different distillation processes, by linking the screening tool with an existing shortcut screening framework proposed by Skiborowski [5]. This integration enables the combination of heat pumps with other energy integration methods, like thermal coupling, thereby facilitating a more comprehensive assessment of potential process variants and the identification of the most promising process alternatives.

References

[1] A. A. Kiss, C. A. I. Ferreira, Heat Pumps in Chemical Process Industry, CRC Press, Boca Raton, 2017

[2] J. Jiang, B. Hu, T. Ge, R. Wang, Energy 2022, 241, 1222831.

[3] M. O. McLinden, J. S. Brown, R. Brignoli, A. F. Kazakov, P. A. Domanski, Nature Communications 2017, 8 (1), 1-9.

[4] J. Mairhofer, M. Stavrou, Chemie Ingenieur Technik 2023, 95 (3), 458-466.

[5] M. Skiborowski, Chemical Engineering Transactions 2018, 69, 199-204.



CO2 conversion to polyethylene based on power-to-X technology and renewable resources

Monika Dokl1, Blaž Likozar2, Chunyan Si3, Zdravko Kravanja1, Yee Van Fan3,4, Lidija Čuček1

1Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, 2000 Maribor, Slovenia; 2Department of Catalysis and Chemical Reaction Engineering, National Institute of Chemistry, Hajdrihova 19, Ljubljana 1001, Slovenia; 3Sustainable Process Integration Laboratory, Faculty of Mechanical Engineering, Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech Republic; 4Environmental Change Institute, University of Oxford, Oxford OX1 3QY, United Kingdom

In addition to increasing material and energy efficiency, the plastics sector is already stepping up its efforts to further minimize greenhouse gas emissions during the production phase in order to support the EU's transition to climate neutrality by 2050. These initiatives include expanding the circular economy in the plastics value chain through recycling, increasing the use of renewable raw materials, switching to renewable energy and developing advanced carbon capture and utilization methods. Bio-based plastics have been extensively explored as a potential substitute for plastics derived from fossil fuels. Despite the potential of bio-based plastics, there are concerns about sustainability, including the impact on land use, water resources and biodiversity. An alternative route is to convert CO2 into valuable chemicals using power-to-X technology, which uses surplus renewable energy to transform CO2 into fuels, chemicals and plastics. In this study, the process simulation of polyethylene production using CO2 and renewable electricity is performed to identify feedstocks aligned with climate objectives. CO2-based polyethylene production is compared with conventional fossil-based production, and the burdening and unburdening effects of a potential transition to the production of renewable plastics are evaluated.



Design of Experiments Algorithm for Comprehensive Exploration and Rapid Optimization in Chemical Space

Kazuhiro Takeda1, Kondo Masaru2, Muthu Karuppasamy3,4, Mohamed S. H. Salem3,5, Takizawa Shinobu3

1Shizuoka University, Japan; 2University of Shizuoka, Japan; 3Osaka University, Japan; 4Graduate School of Pharmaceutical Sciences, Osaka University, Japan; 5Suez Canal University, Egypt

1. Introduction

Bayesian Optimization (BO)1) is known for its ability to explore optimal conditions with a limited number of experiments. However, the number of experiments conducted through BO is often insufficient to fully understand the experimental condition space. To address this, various experimental design methods have been proposed. Among these, the Definitive Screening Design (DSD)2) has been introduced as a method that minimizes confounding and requires fewer experiments. This study proposes an algorithm that combines DSD and BO to reduce confounding, ensure sufficient experimentation to understand the experimental condition space, and enable rapid optimization.

2. Fusion Algorithm of DSD and BO

In DSD, each factor is set at three levels (+, 0, -), and experiments are conducted with one factor at 0 and the others at + or -. This process is repeated for the number of factors m, and a final experiment is conducted with all factors set to 0, resulting in a total of 2m+1 experiments. Typically, after conducting experiments based on DSD, a model is created by selecting factors using criteria such as the Akaike information criterion (AIC), followed by additional experiments to optimize the objective function. Using BO allows for optimization with fewer additional experiments.

In this study, the levels (+ and -) required by DSD are determined based on BO, enabling the integration of BO from the DSD experiment stage. The proposed algorithm is outlined as follows (a simplified illustration of the final BO refinement stage is sketched after the list):

1. Formulate a DSD experimental plan with 0, +, and - levels.

2. Conduct experiments using the maximum and minimum ranges (as defined by DSD) until all variables are no longer unique.

3. For the next experimental condition, use BO to search within the range of the original planned values with the same sign.

4. Conduct experiments under the explored conditions.

5. If the experimental plan formulated in Step 1 is complete, proceed to the next step; otherwise, return to Step 3.

6. Use BO to explore the optimal conditions within the range.

7. Conduct experiments under the explored conditions.

8. If the convergence criteria are met, terminate the process; otherwise, return to Step 6.
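The sketch below illustrates only the refinement stage (Steps 6-8) under the assumption that the 2m+1 DSD runs have already been carried out and collected in X and y; the quadratic test objective, the random candidate grid and the convergence threshold are placeholders, not the settings of this study.

# BO refinement sketch (Steps 6-8): GP surrogate with expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                       # stand-in for the real experiment
    return np.sum(x ** 2)

m = 4                                   # number of factors
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(2 * m + 1, m))   # placeholder for the DSD runs
y = np.array([objective(x) for x in X])

for it in range(20):                    # Steps 6-8: iterate until convergence/budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-2, 2, size=(2000, m))
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement (minimisation)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
    if y.min() < 1e-3:                   # crude convergence criterion
        break
print("best found:", y.min())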

3. Numerical Experiment

Numerical experiments were conducted to minimize each objective function. The upper and lower limits of each variable were set at (-2, 2), and the experiment was conducted 10 times. The results indicate that the proposed algorithm converges faster than BO alone. Moreover, the variability in convergence speed was also reduced. Although not shown due to space constraints, the proposed algorithm also demonstrated faster and more stable convergence compared to other experimental design methods combined with BO.

4. Conclusion

This study proposed an algorithm combining DSD and BO to minimize confounding, reduce the required experiments, and enable rapid optimization. Numerical experiments demonstrated that the algorithm converges early and stably. Future work will involve verifying the effectiveness of the proposed algorithm through actual experiments.

References

1. J. Snoek, et al.; arXiv:1206.2944, pp.1-9, 2012

2. B. Jones and C. J. Nachtsheim; J. Qual. Technol., Vol.43, pp.1-15, 2011



Surrogate Modeling for Real-Time Simulation of Spatially Distributed Dynamically Operated Chemical Reactors: A Power-to-X Case Study

Luisa Peterson1, Ali Forootani2, Edgar Ivan Sanchez Medina1, Ion Victor Gosea1, Peter Benner1,3, Kai Sundmacher1,3

1Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, Magdeburg, 39106, Germany; 2Helmholtz Centre for Environmental Research, Permoserstraße 15, Leipzig, 04318 , Germany; 3Otto von Guericke University Magdeburg, Universitaetsplatz 2, Magdeburg, 39106, Germany

Spatially distributed dynamical systems are omnipresent in chemical engineering. These systems are often modeled by partial differential equations (PDEs) to describe complex, coupled processes. However, solving PDEs can be computationally expensive, especially for highly nonlinear systems. This is particularly challenging for outer-loop computations such as optimization, control, and uncertainty quantification, all requiring real-time performance. Surrogate models reduce computational costs and are classified into data-fit, reduced-order, and hierarchical models. Data-fit models use statistical techniques or machine learning to map input-output relationships, while reduced-order models project equations onto a lower-dimensional subspace. Hierarchical models simplify physical or numerical methods to reduce complexity.

In this study, we simulate the dynamic behavior of a catalytic CO2 methanation reactor, critical for Power-to-X applications that convert CO2 and green hydrogen to methane. The reactor must adapt to changing load conditions, which requires real-time executable simulation models. A one-dimensional mechanistic model, calibrated with pilot plant data, simulates temperature and CO2 conversion. We develop and test three surrogate models using load change simulation data. (i) Operator Inference (OpInf) projects the system into a lower dimensional subspace and infers a quadratic polynomial within this space, incorporating stability constraints to improve prediction reliability. (ii) Sparse Identification of Nonlinear Dynamics (SINDy) uncovers the system's governing equations through sparse regression. Our adaptation of SINDy uses Q-DEIM to efficiently select significant data for regression inputs and is implemented within a neural network framework with a Physics-Informed Neural Network (PINN) loss function. (iii) The proposed Graph Neural Network (GNN) uses a windowed graph structure with Graph Attention Networks.
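For readers unfamiliar with SINDy, the toy example below shows the basic workflow with the pysindy package on a two-state ODE; it stands in for the reactor states (CO2 conversion, temperature) and does not reproduce the Q-DEIM selection or PINN-loss adaptation described above.

# Minimal SINDy illustration with pysindy on synthetic data.
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

def rhs(t, x):                                   # toy nonlinear dynamics
    return [-0.5 * x[0] + 0.2 * x[0] * x[1], -0.3 * x[1] + 0.1 * x[0] ** 2]

t = np.linspace(0, 10, 500)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.5], t_eval=t)
X = sol.y.T                                      # snapshots: (time, states)

model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.05),          # sparse regression
    feature_library=ps.PolynomialLibrary(degree=2),
)
model.fit(X, t=t)
model.print()                                    # recovered governing equations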

When reproducing data from the mechanistic model, OpInf achieves a low relative Frobenius norm error of 0.043% for CO2 conversion and 0.030% for temperature. The quadratic, guaranteed stable polynomial provides a good balance between interpretability and performance. SINDy gives relative errors of 2.37% for CO2 conversion and 0.91% for temperature. While SINDy is the most interpretable model, it is also the most computationally intensive to evaluate, requires manual tuning of the regression library, and occasionally experiences stability issues. GNNs produce relative errors of 1.08% for CO2 conversion and 0.81% for temperature. GNNs offer the fastest evaluation and require the least domain-specific knowledge of the three methods, but their black-box nature limits interpretability and they are prone to overfitting and can struggle with extrapolation. All surrogate models reduce computational time while maintaining acceptable accuracy, making them suitable for real-time decision-making in dynamic reactor operations. The choice of model depends on the application requirements, in particular the balance between speed and interpretability. In this case, OpInf provides the best overall balance, while SINDy and GNNs provide useful trade-offs depending on whether interpretability or speed is prioritized [2].


References

[1] R. T. Zimmermann, J. Bremer, and K. Sundmacher, “Load-flexible fixed-bed reactors by multi-period design optimization,” Chemical Engineering Journal, vol. 428, 130771, 2022, DOI: 10.1016/j.cej.2021.130771.

[2] L. Peterson, A. Forootani, E. I. S. Medina, I. V. Gosea, K. Sundmacher, and P. Benner, “Towards Digital Twins for Power-to-X: Comparing Surrogate Models for a Catalytic CO2 Methanation Reactor”, Authorea Preprints, 2024, DOI: 10.36227/techrxiv.172263007.76668955/v1.



Computer Vision for Chemical Engineering Diagrams

Maged Ibrahim Elsayed Eid, Giancarlo Dalle Ave

McMaster University, Canada

This paper details the development of a state-of-the-art object, word, and connectivity detection system tailored for the analysis of chemical engineering diagrams, namely Process Flow Diagrams (PFDs), Block Flow Diagrams (BFDs), and Piping and Instrumentation Diagrams (P&IDs), utilizing cutting-edge computer vision methodologies. Chemical engineering diagrams play a pivotal role in the field, offering visual representations of plant processes and equipment. They are integral to the design, analysis, and operational phases of chemical processes, aiding in process documentation and serving as a foundation for simulating and monitoring the performance of essential equipment operations.

The necessity of automating the interpretation of BFDs, PFDs, and P&IDs arises from their widespread use and the challenges associated with their manual analysis. These diagrams, often stored as image-based PDFs, present significant hurdles in terms of data extraction and interpretation. Manual processing is not only labor-intensive but also prone to errors and inconsistencies. Given the complexity and volume of these diagrams, which include intricate details of plant processes and equipment, manual methods can lead to delays and inaccuracies. Automating this process with advanced computer vision techniques addresses these challenges by providing a scalable, accurate, and efficient means to extract and analyze information.

The primary aim of this project is to automate the interpretation of various chemical engineering diagrams, a task that has traditionally relied on manual expertise. This automation encompasses the precise detection of unit operations, text recognition, and the mapping of interconnections between components. To achieve this, the proposed methodology relies on rule-based and predefined approaches. These approaches are employed to detect unit operations by analyzing visual patterns and shapes, recognizing text using OCR techniques, and mapping the interconnections between components based on spatial relationships. This method specifically avoids deep learning, which can be computationally intensive and often requires extensive labeling to effectively differentiate between various objects; these challenges can complicate implementation and scalability, making deep learning less suitable for this application. The results showed high detection accuracy, successfully identifying unit operations, text, and interconnections with reliable performance, even in complex diagrams.
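A rule-based pipeline of this kind could, for instance, combine contour-based shape detection with OCR, as in the rough sketch below; OpenCV and Tesseract are plausible open-source tools, not necessarily those used by the authors, and the file name, area threshold and symbol-shape rules are illustrative assumptions.

# Rule-based shape detection plus OCR sketch (OpenCV + pytesseract).
import cv2
import pytesseract

img = cv2.imread("diagram.png")                  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 200:                               # skip text strokes and noise
        continue
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    x, y, w, h = cv2.boundingRect(c)
    circularity = 4 * 3.14159 * area / (peri * peri)
    if circularity > 0.8:
        label = "instrument bubble"              # circles often denote instruments
    elif len(approx) == 4:
        label = "vessel/block"                   # rectangles often denote units
    else:
        label = "other symbol"
    text = pytesseract.image_to_string(gray[y:y + h, x:x + w]).strip()
    print(label, (x, y, w, h), text)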



Digital Twin for Operator Training and Real-Time Support for a Pilot Scale Packed Batch Distillation Column

Mads Stevnsborg, Jakob K. Huusom, Krist V. Gernaey

PROSYS DTU, Denmark

Digital Twin (DT) is a frequently used term in industry and academia to describe data-centric models that accurately depict a physical system counterpart. DTs are typically used either in an offline context as Virtual Laboratories (VL) [4, 5] or in real-time applications as predictive toolboxes [2]. In processes restricted by a low degree of automation, which instead rely greatly on operator competence in key decision-making situations, DTs can act as a guiding tool [1, 3]. This work explores the challenge of developing DTs to support operators by developing a combined virtual laboratory and decision support tool for students conducting experiments on a pilot-scale packed batch distillation column at the Technical University of Denmark [2]. Batch distillation is an unsteady operation, controlled by a set of manual valves, which the operator must continuously balance to meet purity constraints without excessive consumption of utilities. The realisation is achieved by leveraging the software development and IT operations (DevOps) methodology with a modular compartmentalisation of DT resources to better leverage model applicability across various projects. The final solution comprises several stand-alone packages that together offer real-time communication with physical equipment through OPC-UA endpoints and a scalable simulation environment through web-based user interfaces (UI). The advantages of this implementation strategy are flexibility and speed, allowing process models to be updated continuously as data is generated and offering process operators the necessary training and knowledge before and during operation to run experiments effectively, enhancing the learning outcome.
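To give a flavour of the real-time link only, the snippet below reads two values over OPC-UA with the python-opcua package; the abstract specifies OPC-UA endpoints but not this library, and the endpoint URL and node identifiers are placeholders rather than the actual pilot-plant tags.

# OPC-UA read sketch (python-opcua, synchronous API); placeholders throughout.
from opcua import Client

client = Client("opc.tcp://localhost:4840/distillation/")   # placeholder endpoint
client.connect()
try:
    reflux_valve = client.get_node("ns=2;s=RefluxValve.Position")   # placeholder tag
    top_temp = client.get_node("ns=2;s=Column.TopTemperature")      # placeholder tag
    print("valve:", reflux_valve.get_value(), "T_top:", top_temp.get_value())
finally:
    client.disconnect()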

References

[1] F. Bähner et al., 2021,” Challenges in Optimization and Control of Biobased Process Systems: An Industrial-Academic Perspective”, Industrial and Engineering Chemistry Research, Volume 60, Issue 42, pp. 14985-15003

[2] M. Jones et al., 2022, “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU)”, Computer-aided Chemical Engineering, Volume 49, pp. 1525-1530

[3] V. Steinwandter et al., 2019, “Data science tools and applications on the way to Pharma 4.0”, Drug Discovery Today, Volume 24, Issue 9, pp. 1795-1805

[4] M. Schueler & T. Mehling, 2022, “Digital Twin- A System for Testing and Training”, Computer Aided Chemical Engineering, Volume 52, pp. 2049-2055

[5] J. Ismite et al., 2019, “A systems engineering framework for the design of bioprocess operator training simulators”, E3S Web of Conferences, Volume 78, pp. 03001

[6] N. Kamihama et al., 2011, “Isobaric Vapor−Liquid Equilibria for Ethanol + Water + Ethylene Glycol and Its Constituent Three Binary Systems”, Journal of Chemical and Engineering Data, Volume 57, Issue 2, pp. 339-344



Hybridizing Neural Networks with Physical Laws for Advanced Process Modeling in Chemical Engineering

Jana Mousa, Stéphane Negny

INP Toulouse, France

Neural networks (NNs) have become indispensable tools for modeling complex systems due to their ability to learn and predict from vast datasets. Their success spans a wide range of applications, including chemical engineering processes. However, one key limitation of NNs is their lack of physical interpretability, which becomes critical when dealing with complex systems governed by known physical laws. In chemical engineering, particularly in unit operations like reactors (considered the heart of any process), the accuracy and reliability of models depend not only on their predictive capabilities, but also on their adherence to physical constraints such as mass and energy balances, reaction kinetics, and equilibrium constants.

This study investigates the integration of neural networks with nonlinear data reconciliation (NDR) as a method to impose physical constraints on predictive models. Nonlinear data reconciliation is a mathematical technique used to adjust measured data to satisfy predefined physical laws, enhancing model consistency and accuracy. By embedding NDR into neural networks, the resulting hybrid models ensure physical realism while retaining the flexibility and learning power of NNs.

The framework first trains an NN to capture nonlinear system relationships, then applies NDR to correct predictions so that key physical metrics, such as conversion, selectivity, and equilibrium constants in reactors, are met. This ensures that the model aligns not only with data but also with fundamental physical laws, enhancing the model's interpretability and reliability. Furthermore, the method's efficacy has been evaluated by comparing it to other hybrid approaches, such as Karush-Kuhn-Tucker Neural Networks (KKT-NN) and Karush-Kuhn-Tucker Physics-Informed Neural Networks (KKT-PINN), both of which aim to enforce physical constraints within neural networks.
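The reconciliation step can be pictured, in its simplest form, as a weighted least-squares adjustment of the NN outputs subject to a physical balance; the toy mass balance, predicted flows and weights below are invented for illustration and do not represent the reactor metrics of the study.

# Data reconciliation sketch: project NN predictions onto a mass balance.
import numpy as np
from scipy.optimize import minimize

y_nn = np.array([100.0, 58.0, 45.0])        # NN predictions: feed, product A, product B (kg/h)
sigma = np.array([2.0, 1.5, 1.5])           # assumed trust (std dev) in each prediction

def balance(y):                             # physical law: feed = A + B
    return y[0] - y[1] - y[2]

res = minimize(
    fun=lambda y: np.sum(((y - y_nn) / sigma) ** 2),   # stay close to the NN output
    x0=y_nn,
    constraints=[{"type": "eq", "fun": balance}],
    method="SLSQP",
)
print("reconciled:", res.x, "balance residual:", balance(res.x))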

In conclusion, the integration of physical interpretability into neural networks through nonlinear data reconciliation significantly enhances modeling accuracy and reliability in engineering applications. Future enhancements may focus on refining the method to accommodate a wider range of engineering challenges, thereby facilitating its application in diverse fields such as process engineering and systems optimization.



Transferring Graph Neural Networks for Soft Sensor Modeling using Process Topologies

Maximilian F. Theisen1, Gabrie M.H. Meesters2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands; 2Product and Process Engineering, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Transfer learning allows, in theory, machine learning models to be re-used and fine-tuned, thus reducing data requirements. However, transferring data-driven soft sensor models is often not possible in practice. In particular, the fixed input structure of standard soft sensor models prohibits transfer if, for example, the sensor information is not identical in all plants.

We propose a process-aware graph neural network approach for transfer learning of soft sensor models across multiple plants. In our method, plants are modeled as graphs: Unit operations are nodes, streams are edges, and sensors are embedded as attributes. Our approach brings two advantages for transfer learning: First, we not only include sensor data but also crucial information on the plant topology. Second, the graph neural network algorithm is flexible with respect to its sensor inputs. We test the transfer learning capabilities of our modeling approach on ammonia synthesis loops with different process topologies (Moulijn, 2013). We build a soft sensor predicting the ammonia concentration in the product. After training on data from several processes, we successfully transfer our soft sensor model to a previously unseen process with a different topology. Our approach promises to extend the use case of data-driven soft sensors to cases where data from similar plants is leveraged.
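The sketch below only illustrates the data representation described above (unit operations as nodes, streams as edges, sensors as node attributes), here with networkx; the actual soft sensor is a graph neural network trained on such graphs, and the tags and values are invented.

# Flowsheet-as-graph sketch for a generic ammonia synthesis loop.
import networkx as nx

plant = nx.DiGraph()
plant.add_node("compressor", type="compressor", sensors={"P_out_bar": 180.0})
plant.add_node("reactor", type="ammonia_converter",
               sensors={"T_in_C": 370.0, "T_out_C": 450.0})
plant.add_node("separator", type="flash", sensors={"level_pct": 55.0})
plant.add_edge("compressor", "reactor", stream="synthesis_gas")
plant.add_edge("reactor", "separator", stream="effluent")
plant.add_edge("separator", "compressor", stream="recycle")   # loop topology

# Because the model consumes a graph rather than a fixed feature vector, a plant
# with extra units or a different sensor set can reuse the same trained GNN.
for node, data in plant.nodes(data=True):
    print(node, data["type"], data["sensors"])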

References

Moulijn, J. A. (2013). Chemical process technology (2nd ed., online ed.) (M. Makkee & A. Diepen, Eds.). Chichester, West Sussex: John Wiley & Sons Inc.



Production scheduling based on Real-time Optimization and Zone Control Nonlinear Model Predictive Controller

José Matias1, Alvaro Marcelo Acevedo Peña2

1KU Leuven, Belgium; 2YPFB Refinación S.A.

The chemical industry has a high demand for process optimization methods and tools that enhance profitability while operating near nominal capacity. Product inventories, both in-process and end-of-process, serve as buffers to mitigate fluctuations in operation and demand while maintaining consistent and predictable production. Efficient product inventory management is crucial for the profitable operation of chemical plants. To ensure optimal operation, various strategies have been proposed that consider in-process storage and aim to satisfy mass balances while avoiding bottlenecks [1].

When final product demand is highly oscillatory with unexpected stoppages, end-of-process inventories must be carefully controlled within minimum and maximum bounds. This prevents plant shutdowns and ensures compliance with legal product supply requirements. In both cases, plant-wide operations should be considered when making in- and end-of-process product inventory level decisions to improve overall profitability [2].

To address this problem, we propose a holistic hierarchical two-layered strategy. The upper layer uses real-time optimization (RTO) to determine optimal plant flow rates from an economic perspective. The lower layer employs a zone control nonlinear model predictive controller (NMPC) to define inventory setpoints. The idea is that the RTO defines setpoints for the flow rates that manipulate plant throughput, while the NMPC maintains inventory levels within desired bounds and keeps the flow rates as close as possible to the RTO-defined setpoints. The use of this two-layered holistic approach is novel for this specific problem; however, our primary contribution lies in introducing an ensemble of optimization problems at the RTO level. Each RTO problem is associated with a different uncertain product demand scenario. This enables us to recompute optimal throughput manipulator setpoints based on the current scenario, improving the overall strategy performance.
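The lower-layer idea can be caricatured for a single product tank as below: keep the level inside a zone via a soft penalty while tracking the flow setpoint received from the RTO layer. All numbers, the one-tank model and the penalty formulation are illustrative assumptions, not the NMPC used in this work.

# Zone-tracking sketch for one tank over a short horizon (soft zone constraint).
import numpy as np
from scipy.optimize import minimize

level0, area = 6.0, 10.0                 # current level (m), tank area (m2), assumed
inflow = 3.0                             # production rate into the tank (m3/h), assumed
u_rto = 2.5                              # outflow setpoint from the RTO layer (m3/h)
lo, hi = 2.0, 8.0                        # level zone bounds (m)
rho = 1e3                                # penalty weight on zone violation
dt, N = 1.0, 6                           # time step (h) and horizon length

def cost(u):
    level, J = level0, 0.0
    for k in range(N):
        level = level + dt * (inflow - u[k]) / area
        viol = max(lo - level, 0.0) + max(level - hi, 0.0)   # zone violation
        J += (u[k] - u_rto) ** 2 + rho * viol ** 2           # track RTO, stay in zone
    return J

res = minimize(cost, np.full(N, u_rto), bounds=[(0.0, 5.0)] * N)
print("planned outflows:", np.round(res.x, 2))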

We tested our strategy on a three-stage distillation column system that separates a mixture of four products, inspired by an LPG production plant with recycle split vapour (RSV) invented by Ortloff Ltd [3]. While the lightest and cheapest product is directly sent to a pipeline, the other three more valuable products are stored in tanks. Demand for these three products fluctuates significantly, but can be forecasted in advance, allowing for proactive measures. We compared the results of our holistic two-layered strategy to typical actions taken by plant operators in various uncertain demand scenarios. Our approach addresses the challenges of mitigating bottlenecks and minimizing inventory fluctuations and is more effective than the operator decisions from an economic perspective.

[1] Skogestad, S., 2004. Computers & Chemical Engineering, 28(1-2), pp.219-234.

[2] Downs, J.J. and Skogestad, S., 2011. Annual Reviews in Control, 35(1), pp.99-110.

[3] Zhang S. et al., 2020. Comprehensive Comparison of Enhanced Recycle Split Vapour Processes for Ethane Recovery, Energy Reports, 6, pp.1819–1837.



Talking like Piping and Instrumentation Diagrams (P&IDs)

Achmad Anggawirya Alimin, Dominik P. Goldstein, Lukas Schulze Balhorn, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Piping and Instrumentation Diagrams (P&IDs) are pivotal in process engineering, serving as comprehensive references across multiple disciplines (Toghraei, 2019). However, the intricate nature of P&IDs and the complexity of the underlying systems make it challenging for engineers to examine flowsheet overviews and details efficiently and accurately. Recent developments in flowsheet digitalization through computer vision and data exchange in the process industry (DEXPI) have opened up the potential for a unified machine-readable format for P&IDs (Theisen et al., 2023). Yet, industrial DEXPI P&IDs are often extremely complex, frequently spanning thousands of pages.

We propose the ChatP&ID methodology, which allows engineers to communicate with P&IDs using natural language. In particular, we represent DEXPI P&IDs as labelled property graphs and integrate them with Large Language Models (LLMs). The approach consists of three main parts: 1) a P&ID graph representation developed following the DEXPI specification via our pyDEXPI Python package (Goldstein et al., n.d.); 2) a tool for generating P&ID knowledge graphs from pyDEXPI; 3) integration of the P&ID knowledge graph with LLMs using graph-based retrieval augmented generation (graph-RAG). This approach allows users to communicate with P&IDs using natural language. It extends the LLM’s ability to retrieve contextual data from P&IDs and mitigates hallucinations. Leveraging the LLM's large corpus, the model is also able to interpret process information in P&IDs, which could support engineers in their daily tasks. In the future, this work also opens up opportunities in the context of other generative Artificial Intelligence (genAI) solutions for P&IDs, such as auto-generation or auto-correction (Schweidtmann, 2024).
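The retrieval idea behind graph-RAG can be pictured as below: store the P&ID as a labelled property graph, pull the neighbourhood of the equipment referenced in a question, and serialise it as textual context for the LLM prompt. This sketch uses networkx as a stand-in for the pyDEXPI-generated knowledge graph, and all tags and attributes are invented.

# Graph-retrieval sketch: neighbourhood of a tag serialised as prompt context.
import networkx as nx

pid = nx.DiGraph()
pid.add_node("P-101", kind="CentrifugalPump", design_pressure_bar=16)
pid.add_node("V-102", kind="Vessel", volume_m3=12)
pid.add_node("LIC-102", kind="LevelController", acts_on="V-102")
pid.add_edge("P-101", "V-102", kind="Pipe", dn="DN80")

def retrieve_context(graph, tag, hops=1):
    nodes = {tag} | set(nx.ego_graph(graph.to_undirected(), tag, radius=hops))
    lines = [f"{n}: {graph.nodes[n]}" for n in nodes]
    lines += [f"{u} -> {v}: {d}" for u, v, d in graph.edges(data=True)
              if u in nodes and v in nodes]
    return "\n".join(lines)

question = "What is the design pressure of the pump feeding V-102?"
context = retrieve_context(pid, "V-102")
prompt = f"Use only this P&ID context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would then be passed to the LLM of choice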

References

Goldstein, D.P., Alimin, A.A., Schulze Balhorn, L., Schweidtmann, A.M., n.d. pyDEXPI: A Python implementation and toolkit for the DEXPI information model.

Schweidtmann, A.M., 2024. Generative artificial intelligence in chemical engineering. Nat. Chem. Eng. 1, 193–193. https://doi.org/10.1038/s44286-024-00041-5

Theisen, M.F., Flores, K.N., Balhorn, L.S., Schweidtmann, A.M., 2023. Digitization of chemical process flow diagrams using deep convolutional neural networks. Digit. Chem. Eng. 6, 100072.

Toghraei, M., 2019. Piping and instrumentation diagram development. Wiley, Hoboken, NJ, USA.



Multi-Objective Optimization and Analytical Hierarchical Process for Sustainable Power Generation Alternatives in the High Mountain Region of Santurbán: case of Pamplona, Colombia

Ana María Rosso-Cerón2, Nicolas Cabrera1, Viatcheslav Kafarov1

1Department of Chemical Engineering, Carrera 27 Calle 9, Universidad Industrial de Santander, Bucaramanga, Colombia; 2Department of Chemical Engineering, Cl. 5 No. 3-93, Kilometro 1 Vía Bucaramanga, Universidad de Pamplona, Norte de Santander, Colombia

This study presents an integrated approach combining the Analytical Hierarchical Process (AHP) and a Mixed-Integer Multi-Objective Linear Programming (MOMILP) model to evaluate and select sustainable power generation alternatives for Pamplona, Colombia. The research focuses on the high mountain region of Santurbán, a páramo ecosystem that provides water to over 2.5 million people and supports rich biodiversity. Given the region’s vulnerability to climate change, sustainable energy solutions are essential to ensure environmental conservation and energy security [1].

The MOMILP model considers several power generation technologies, including photovoltaic panels, wind turbines, biomass, and diesel plants. These alternatives are integrated into the local electrical distribution system with the goal of minimizing two objectives: costs (net present value) and CO₂ emissions, while adhering to design, operational, and budgetary constraints. The ε-constraint method was employed to generate a Pareto-optimal set of solutions, balancing trade-offs between economic and environmental performance. Additionally, the study examines the potential for forming local energy communities by allowing surplus electricity from renewable sources to be sold, promoting local economic growth and energy independence.
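For illustration, the ε-constraint step can be sketched as below (not the full MOMILP of the study): minimise cost subject to an emissions cap ε and sweep ε to trace a Pareto front. Technology costs, emission factors, capacities and demand are invented placeholders.

# Epsilon-constraint sketch with PuLP; all data are illustrative.
import pulp

techs = ["pv", "wind", "biomass", "diesel"]
cost = {"pv": 60, "wind": 55, "biomass": 70, "diesel": 90}          # $/MWh, assumed
co2 = {"pv": 0.02, "wind": 0.01, "biomass": 0.23, "diesel": 0.75}   # t/MWh, assumed
cap = {"pv": 40, "wind": 30, "biomass": 25, "diesel": 100}          # MWh/day limits, assumed
demand = 120                                                        # MWh/day, assumed

pareto = []
for eps in [90, 60, 40, 30]:                    # emissions caps (t/day) to sweep
    prob = pulp.LpProblem("energy_mix", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("gen", techs, lowBound=0)
    prob += pulp.lpSum(cost[t] * x[t] for t in techs)               # objective 1: cost
    prob += pulp.lpSum(x[t] for t in techs) >= demand
    prob += pulp.lpSum(co2[t] * x[t] for t in techs) <= eps         # objective 2 as constraint
    for t in techs:
        prob += x[t] <= cap[t]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    pareto.append((eps, pulp.value(prob.objective)))
print(pareto)                                    # (emissions cap, minimum cost) pairs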

The AHP is used to assess these alternatives based on a set of multi-criteria, including social acceptance, job creation, regional accessibility, technological maturity, reliability, pollutant emissions, land use, and habitat impact. Expert opinions were gathered through the Delphi method, and the criteria were weighted using Saaty’s scale. This comprehensive evaluation ensures that the decision-making process incorporates not only technical and economic aspects but also environmental and social considerations [2].

The analysis revealed that a hybrid solution combining solar, wind, and biomass technologies provides the best balance between economic viability and environmental sustainability. Solar energy, due to its technological maturity and minimal impact on the local habitat, emerged as a highly favourable option. Biomass, although contributing more to emissions than solar and wind, was positively evaluated for its potential to create local jobs and its high social acceptance in the region.

This study contributes to the growing body of literature on the integration of renewable energy sources into power distribution networks, particularly in ecologically sensitive areas like the Santurbán páramo. The combined use of AHP and MOMILP offers a robust framework for decision-makers, allowing for the systematic evaluation of sustainable alternatives based on technical performance and stakeholder priorities. This approach is particularly relevant for policymakers and utility companies engaged in Colombia’s energy transition efforts and sustainable development.

References

[1] Llambí, L. D., Becerra, M. T., Peralvo, M., Avella, A., Baruffol, M., & Díaz, L. J. (2019). Monitoring biodiversity and ecosystem services in Colombia's high Andean ecosystems: Toward an integrated strategy. Mountain Research and Development, 39(3). https://doi.org/10.1659/MRD-JOURNAL-D-19-00020.

[2] A. M. Rosso-Cerón, V. Kafarov, G. Latorre-Bayona, and R. Quijano-Hurtado, "A novel hybrid approach based on fuzzy multi-criteria decision-making tools for assessing sustainable alternatives of power generation in San Andrés Island," Renewable and Sustainable Energy Reviews, vol. 110, 159–173, 2019. https://doi.org/10.1016/j.rser.2019.04.053.



Environmental assessment of the catalytic Arabinose oxidation

Mouad Hachhach, Dmitry Murzin, Tapio Salmi

Laboratory of Industrial Chemistry and Reaction Engineering (TKR), Johan Gadolin Process Chemistry Centre, Åbo Akademi University, Åbo-Turku FI-20500, Finland

Oxidation of arabinose to arabinoic acid presents an innovative way to valorize local biomass into a high added-value product. Experiments on the oxidation of arabinose to arabinoic acid with molecular oxygen were previously carried out to determine the optimum reaction conditions (Kusema et al., 2010; Manzano et al., 2021), and using the obtained results a scaled-up process has been designed and analysed from a techno-economic perspective (Hachhach et al., 2021).

These results are also used to analyse the environmental impact of the scaled-up process over its lifetime using the life cycle assessment (LCA) methodology. SimaPro software combined with the IMPACT 2002+ impact assessment method was used in this work.

The results revealed that heating is the biggest contributor to the environmental impacts, even though the reaction is performed under mild conditions (70 °C), which highlights the importance of reducing energy consumption, for example via efficient heat integration.



A Forest Biomass-to-Hydrocarbon Supply Chain Mathematical Model for Optimizing Carbon Emissions and Economic Metrics

Frank Piedra-Jimenez1, Rishabh Mehta2, Valeria Larnaudie3, Maria Analia Rodriguez1, Ana Inés Torres2

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213; 3Departamento de Bioingeniería, Facultad de Ingeniería, Universidad de la Republica, Julio Herrera y Reissig 565, Montevideo, Uruguay.

Forest supply chains (FSCs) are critical for achieving decarbonization targets (Santos et al., 2019). FSCs are characterized by abundant biomass residues, offering an opportunity to add value to processes while contributing to the production of clean energy products. One particularly interesting aspect is their potential integration with oil refineries to produce drop-in fuels, offering a transformative pathway to mitigate traditional refinery emissions (Barbosa-Povoa and Pinto, 2020).

In this article, a disjunctive mathematical programming approach is presented to optimize the design and planning of the FSC for the production of hydrocarbon products from biomass, optimizing both economic and environmental objectives. Various types of byproducts and residual biomass from forest harvesting activities, sawmill production, and the pulp and paper industries are considered. Alternative processing facilities and technologies can be established over a multi-period planning horizon. The design problem scope involves selecting forest areas for exploitation, identifying biomass sources, and determining the locations, technologies, and capacities of facilities that transform wood-based residues into methanol and pyrolysis oil, which are further processed in biodiesel and petroleum refinery plants, respectively. This problem is challenging due to the complexity of the supply chain network, which involves numerous decisions, constraints, and objectives.

Especially in the case of large geographical areas, transportation becomes a crucial aspect of supply chain design and planning because the low biomass density significantly impacts carbon emissions and costs. Thus, the planning problem scope includes selecting connections and material flows across the supply chain and analyzing the impact of different types of transportation vehicles.

To estimate FSC carbon emissions, the Life Cycle Assessment (LCA) methodology is used. A gate-to-gate analysis is carried out for each activity in the FSC. The predicted LCA results are then integrated as input parameters into a mathematical programming model for FSC design and planning, extending previous work (Piedra-Jimenez et al., 2024). In this article, a multi-objective approach is employed to minimize CO2-equivalent emissions while optimizing net present value from an economic standpoint. A set of efficient Pareto points is obtained and compared in a case study of the Argentine forest industry.

References

Barbosa-Povoa, A.P., Pinto, J.M. (2020). “Process supply chains: perspectives from academia and industry”. Comput. Chem. Eng., 132, 106606, 10.1016/J.COMPCHEMENG.2019.106606

Piedra-Jimenez, F., Torres, A.I., Rodriguez, M.A. (2024), “A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty.” Computers & Chemical Engineering, 108540, ISSN 0098-1354.

Santos, A., Carvalho, A., Barbosa-Póvoa, A.P, Marques, A., Amorim, P. (2019). “Assessment and optimization of sustainable forest wood supply chains – a systematic literature review.” For. Policy Econ., 105, pp. 112-135, 10.1016/J.FORPOL.2019.05.026



Introducing competition in a multi-agent system for hybrid optimization

Veerawat Udomvorakulchai, Miguel Pineda, Eric S. Fraga

University College London, United Kingdom

Process systems engineering optimization problems may be challenging. These problems often exhibit nonlinearity, non-convexity, discontinuity, and uncertainty, and often only the values of objective and constraint functions are accessible. Black-box optimization methods may be appropriate to tackle such problems. The effectiveness of each method differs and is often unknown beforehand. Prior experience has shown that hybrid approaches can lead to better outcomes than using a single optimization method (1).

A general-purpose multi-agent framework for optimization, Cocoa, has recently been developed to automate the configuration and use of hybrid optimization, allowing for any number of optimization solvers, including different instances of the same solver (2). Solvers can share solutions, leading to better outcomes with the same computational effort. However, the computational resource allocated to each solver is inversely proportional to the number of solvers. Allocating equal time to each solver may not be ideal.

This paper describes the implementation of competition to go alongside cooperation: allocating more computational resource to the solvers best suited to a given problem. The allocation is dynamic and evolves as the search progresses. Each solver is assigned a priority which changes based on the results obtained by that solver. Scheduling is priority based. The scheduler is similar to algorithms used by multi-tasking operating systems (3). Individual solvers will be given more or less access to the computational resource, enabling the system to reward those solvers that do well while ensuring that all solvers are allocated some computational resource.
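The framework itself is implemented in Julia; purely to illustrate the scheduling idea, the Python sketch below rewards a solver whose time slice improves the incumbent and decays the priority of one that does not, while priority-weighted sampling keeps every solver running occasionally. The solver names, update factors and stand-in objective values are invented.

# Priority-based, competitive allocation of time slices (illustrative only).
import random

solvers = {"particle_swarm": 1.0, "nelder_mead": 1.0, "random_search": 1.0}
best = float("inf")

def run_for_slice(name):
    # stand-in for letting a solver run for a fixed wall-clock budget and
    # report the best objective value found in that slice
    return random.uniform(0, 10)

for step in range(50):
    # priority-weighted choice: good solvers run more, but none is starved
    name = random.choices(list(solvers), weights=list(solvers.values()))[0]
    value = run_for_slice(name)
    if value < best:
        best = value
        solvers[name] = min(solvers[name] * 1.5, 10.0)   # reward improvement
    else:
        solvers[name] = max(solvers[name] * 0.8, 0.1)    # decay, but never to zero
print(best, solvers)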

The framework allows for the use of both metaheuristic and direct search methods. Metaheuristics explore the full search space while direct search methods are good at exploiting solutions. The framework has been implemented in Julia (4), making full use of multiprocessing.

A case study on the design of a micro-analytic system is presented (5). The model is dynamic and has uncertainties; the selection of designs is based on multiple criteria. This is a good test of the proposed framework as the computational demands are large and the search space is complex. The case study demonstrates the benefits of a multi-solver hybrid optimization approach with both cooperation and competition. The framework adapts to the evolving requirements of the search. Often, a metaheuristic method is allocated more computational resource at the beginning of the search while direct search methods are emphasized later.

1. Fraga ES. Hybrid methods for optimisation. In: Zilinskas J, Bogle IDL, editors. Computer aided methods for optimal design and operations. World Scientific Publishing Co.; 2006. p. 1–14.

2. Fraga ES, Udomvorakulchai V, Papageorgiou L. 2024. DOI: 10.1016/B978-0-443-28824-1.50556-1.

3. Madnick SE, Donovan JJ. Operating systems. McGraw-Hill Book Company; 1974.

4. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A fresh approach to numerical computing. SIAM Rev. 2017;59(1):65–98.

5. Pineda M, Tsaoulidis D, Filho P, Tsukahara T, Angeli P, Fraga E. 2021. DOI: 10.1016/j.nucengdes.2021.111432.



A Component Property Modeling Framework Utilizing Molecular Similarity for Accurate Predictions and Uncertainty Quantification

Youquan Xu, Zhijiang Shao, Anjan Kumar Tula

Zhejiang University, People's Republic of China

In many industrial applications, the demand for high-performance products, such as advanced materials and efficient working media, continues to rise. A key step in developing these products lies in the design of their constituent molecules. Traditional methods, based on expert experience, are often slow, labor-intensive, and prone to overlooking molecules with optimal performance. As a result, computer-aided molecular design (CAMD) has garnered significant attention for its potential to accelerate and improve the design process. One of the major challenges in CAMD is the lack of mechanistic knowledge that accurately links molecular structure to its properties. As a result, machine learning models trained on existing molecular databases have become the primary tools for predicting molecular properties. The typical approach involves using these models to predict the properties of potential molecules and selecting the best candidates based on these predictions. However, prediction errors are inevitable, introducing uncertainty into the reliability of the design. This can result in significant discrepancies between the predicted and experimentally verified properties, limiting the effectiveness of molecular discovery.
To address this issue, we propose a novel molecular property modeling framework based on a similarity coefficient. This framework introduces a new formula for molecular similarity, which considers compound type identification to enable more accurate molecular comparisons. By calculating the similarity between a target molecule and those in an existing database, the framework selects the most similar molecules to form a tailored training dataset, ensuring that only the most informative molecules are selected for the training set, while less relevant or misleading data points are excluded, significantly improving the accuracy of property predictions. In addition to enhancing prediction accuracy, the similarity coefficient also quantifies the confidence in the property predictions. By evaluating the availability and magnitude of the similarity index, the framework provides a measure of uncertainty in the predictions, giving a clearer understanding of how reliable the predicted properties are. This is especially important for molecules where limited similar data is available, allowing for more informed decision-making in the selection process. In tests across various molecular properties, our framework not only enhances the accuracy of predictions but also offers a clear evaluation of prediction reliability, especially for molecules with high similarity. Our framework introduces a two-fold evaluation system for potential molecules, using both predicted properties and the similarity coefficient. This dual criterion ensures that only molecules with both excellent predicted properties and high similarity are selected, enhancing the reliability of the screening process. The improved prediction accuracy, particularly for molecules with high similarity, reduces the need for extensive experimental validation and significantly increases the overall confidence in the molecular design process by explicitly addressing prediction uncertainty.
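The data-selection step can be pictured with standard cheminformatics tooling as below; Morgan fingerprints and Tanimoto similarity via RDKit are a common stand-in and do not reproduce the similarity formula with compound-type identification proposed in this work, and the SMILES strings and threshold are invented.

# Similarity-driven training-set selection sketch (RDKit).
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

database = ["CCO", "CCCO", "CCCCO", "c1ccccc1O", "CC(=O)O"]   # illustrative SMILES
target = "CCCCCO"

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

fp_target = fingerprint(target)
scored = sorted(
    ((DataStructs.TanimotoSimilarity(fp_target, fingerprint(s)), s) for s in database),
    reverse=True,
)
training_set = [s for sim, s in scored if sim > 0.3]     # keep only informative neighbours
confidence = scored[0][0]                                # top similarity as a confidence proxy
print(training_set, "confidence proxy:", round(confidence, 2))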



A simple model for control and optimisation of a produced water re-injection facility

Rafael David de Oliveira1, Edmary Altamiranda2, Gjermund Mathisen2, Johannes Jäschke1

1Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; 2Subsea Technology, AkerBP ASA, Stavanger, Norway

Water injection (or water flooding) is an enhanced oil recovery technique that consists of injecting water into the reservoir to maintain the reservoir pressure. The injected water can come either from the sea or from the water separated from the oil and gas production (produced water). The amount of water injected in each well is typically decided by the reservoir engineers, and many methodologies based on reservoir models have been proposed in the literature for this purpose (Grema et al., 2016). Once the injection targets have been defined, the water injection network system can be optimised. A relevant optimisation problem in this context is the optimal operation of the topside pump system while ensuring the integrity of the subsea water injection system by maximising the lifetime of the equipment. Studies at that stage usually model the system at a macro level, where each unit is represented as a node in a network (Ivo and Imsland, 2022). Simple, lower-level models in which the manipulated and measured variables can be directly connected have proved very useful in the design of new control strategies (Sivertsen et al., 2006), as well as in real-time optimisation formulations where the model parameters can be updated in real time (Matias et al., 2022).

This work proposes a simple model for control and optimisation of a produced water re-injection facility. The model was based on a real facility in operation on the Norwegian continental shelf and consisted of a set of differential-algebraic equations. Data were gathered from the available sensors, pump operation and water injection targets. Model parameters related to equipment dimensions and the valve's flow coefficient were fixed as in the real plant. The remaining parameters were estimated from the field data by solving a nonlinear least-squares problem. Uncertainty quantification was performed to assess the parameters' confidence intervals. Moreover, simulations were performed to evaluate and validate the proposed model. The results show that a simple model can be fitted to the plant and, at the same time, describe the key features of the plant dynamics. The developed model is expected to aid the implementation of strategies such as self-optimising control and real-time optimisation on produced water re-injection facilities in the near future.
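The parameter-estimation step described above can be sketched as follows; the model below is a generic placeholder (not the facility's differential-algebraic system), and the confidence intervals are approximated from the Jacobian at the least-squares optimum:

```python
# Minimal sketch of nonlinear least-squares fitting with approximate confidence
# intervals; the first-order response is only a stand-in for the plant model.
import numpy as np
from scipy.optimize import least_squares
from scipy import stats

def model(t, theta):
    gain, tau = theta
    return gain * (1.0 - np.exp(-t / tau))

def residuals(theta, t, y_meas):
    return model(t, theta) - y_meas

t = np.linspace(0.0, 10.0, 50)
y_meas = model(t, [2.0, 3.0]) + 0.05 * np.random.default_rng(1).normal(size=t.size)

sol = least_squares(residuals, x0=[1.0, 1.0], args=(t, y_meas))

# Covariance approximation sigma^2 * (J^T J)^-1, then 95 % confidence intervals.
dof = t.size - sol.x.size
sigma2 = 2.0 * sol.cost / dof            # sol.cost = 0.5 * sum(residuals**2)
cov = sigma2 * np.linalg.inv(sol.jac.T @ sol.jac)
halfwidths = stats.t.ppf(0.975, dof) * np.sqrt(np.diag(cov))
for name, val, hw in zip(["gain", "tau"], sol.x, halfwidths):
    print(f"{name} = {val:.3f} +/- {hw:.3f}")
```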

References

Grema, A. S., and Yi Cao. 2016. “Optimal Feedback Control of Oil Reservoir Waterflooding Processes.” International Journal of Automation and Computing 13 (1): 73–80.

Ivo, Otávio Fonseca, and Lars Struen Imsland. 2022. “Framework for Produced Water Discharge Management with Flow-Weighted Mean Concentration Based Economic Model Predictive Control.” Computers & Chemical Engineering 157 (January):107604.

Matias, José, Julio P. C. Oliveira, Galo A. C. Le Roux, and Johannes Jäschke. 2022. “Steady-State Real-Time Optimization Using Transient Measurements on an Experimental Rig.” Journal of Process Control 115 (July):181–96.

Sivertsen, Heidi, John-Morten Godhavn, Audun Faanes, and Sigurd Skogestad. 2006. "Control Solutions for Subsea Processing and Multiphase Transport." IFAC Proceedings Volumes, 6th IFAC Symposium on Advanced Control of Chemical Processes, 39 (2): 1069–74.



An optimization-based conceptual synthesis of reaction-separation systems for glucose to chemicals conversion

Syed Ejaz Haider, Ville Alopaeus

Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 16100, 00076 Aalto, Finland.

Abstract

Lignocellulosic biomass has emerged as a promising renewable alternative to fossil resources for the sustainable production of green chemicals [1]. Among the high-value biomass-derived building block chemicals, levulinic acid has gained significant attention due to its wide industrial applications [2]. It serves as a raw material for the synthesis of resins, plasticizers, textiles, animal feed, coatings, antifreeze, pharmaceuticals, and bio-based products [3]. In order to produce levulinic acid on a commercial scale, it is essential to identify the most cost-effective and optimal synthesis route.

Two main methods exist to identify the optimal process structure: hierarchical decomposition and superstructure-based optimization. The hierarchical decomposition method involves making design decisions at each level of detail based on heuristics; however, it struggles to capture interactions among decisions at different levels. In contrast, superstructure-based synthesis is a systematic process systems engineering methodology that evaluates a wide range of structural alternatives simultaneously, using an equation-oriented approach to identify the optimal structure.

This study aims to identify the optimal process structure and parameters for the commercial-scale production of levulinic acid from glucose using a mathematical programming approach. To obtain more meaningful results, the reaction and separation systems were investigated separately under two optimization scenarios with two different objective functions.

Scenario 1 focuses on optimizing the glucose conversion reactor to enhance overall profit and minimize waste disposal. The optimization model includes a rigorous economic objective function that simultaneously considers product selling prices, capital and manufacturing costs over a 20-year project life, and waste disposal costs. A continuous tank reactor model was used as a mass balance constraint, with rate parameters taken from our recent research at the chemical engineering research group, Aalto University. This nonlinear programming (NLP) problem was implemented in GAMS and solved using the BARON solver to determine the optimal operating conditions and reactor size. The optimal reactor volume was found to be 13.2 m3, with an optimal temperature of 197.8°C, for a levulinic acid production capacity of 1593 tonnes/year.
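The abstract states that the NLP was implemented in GAMS and solved with BARON; purely as an illustration of the problem structure (an economic objective subject to a reactor mass balance with Arrhenius kinetics), an analogous formulation could look as follows in Pyomo. All numbers are placeholders, not the values or kinetics used in the study:

```python
# Illustrative Pyomo sketch of a reactor-sizing NLP (placeholder economics and
# kinetics); requires an NLP solver such as IPOPT to be installed.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.T = pyo.Var(bounds=(420.0, 500.0), initialize=460.0)   # temperature [K]
m.V = pyo.Var(bounds=(1.0, 50.0), initialize=10.0)       # reactor volume [m3]
m.x = pyo.Var(bounds=(0.0, 1.0), initialize=0.3)         # glucose conversion [-]

F, C0 = 2.0, 0.8                  # feed flow [m3/h], glucose concentration [kmol/m3]
k0, Ea, R = 5.0e8, 9.0e4, 8.314   # placeholder Arrhenius parameters

# CSTR mass balance with assumed first-order kinetics: F*C0*x = k(T)*C0*(1-x)*V
m.balance = pyo.Constraint(
    expr=F * C0 * m.x == k0 * pyo.exp(-Ea / (R * m.T)) * C0 * (1 - m.x) * m.V)

# Simplified economic objective: revenue minus capital and utility terms.
price, cap_coeff, util_coeff = 800.0, 50.0, 0.02
m.profit = pyo.Objective(
    expr=price * F * C0 * m.x - cap_coeff * m.V**0.6 - util_coeff * m.T,
    sense=pyo.maximize)

pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.V), pyo.value(m.T), pyo.value(m.x))
```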

Scenario 2 addresses the synthesis of distillation-based separation sequences to separate the multicomponent reactor effluent into various product streams. All potential candidates are embedded in a superstructure, which is translated into a mixed-integer nonlinear programming problem (MINLP). Research is progressing towards solving this MINLP problem and identifying the optimal configuration of distillation columns for the desired separation task.

References

[1] F. H. Isikgor and C. R. Becer, "Lignocellulosic biomass: a sustainable platform for the production of bio-based chemicals and polymers," Polymer chemistry, vol. 6, no. 25, pp. 4497-4559, 2015.

[2] T. Werpy and G. Petersen, "Top value added chemicals from biomass: volume I--results of screening for potential candidates from sugars and synthesis gas," National Renewable Energy Lab.(NREL), Golden, CO (United States), 2004.

[3] S. Takkellapati, T. Li, and M. A. Gonzalez, "An overview of biorefinery-derived platform chemicals from a cellulose and hemicellulose biorefinery," Clean technologies and environmental policy, vol. 20, pp. 1615-1630, 2018.



Kinetic modeling of drug substance synthesis considering slug flow characteristics in a liquid-liquid reaction

Shunsei Yayabe1, Junu Kim1, Yusuke Hayashi1, Kazuya Okamoto2, Keisuke Shibukawa2, Hayao Nakanishi2, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Shionogi Pharma Co., Ltd., Japan

In the production of drug substances (or active pharmaceutical ingredients), flow synthesis is increasingly being introduced due to its various advantages, such as a high surface-to-volume ratio and small system size [1]. One promising application of flow synthesis is liquid-liquid reaction [2]. When two immiscible liquids enter a flow reactor together, characteristic flow patterns, especially slug flow, are formed. These patterns are determined by the fluid properties and the reactor specifications, and they have a significant impact on the mass transfer rate. Previous studies have analyzed the effect of slug flow on mass transfer in liquid-liquid reactions using computational fluid dynamics [3, 4]. These studies provide valuable insights into the influence of flow characteristics on the reaction. However, there is a lack of modeling approaches that simultaneously account for flow characteristics and reaction kinetics, which may limit the application of liquid-liquid reactions in flow synthesis.

We developed a kinetic model of drug substance synthesis that incorporates slug flow characteristics in a liquid-liquid reaction, with the aim of determining the feasible range of the process parameters. The target reaction was Stevens oxidation, which is a novel liquid-liquid reaction between organic and aqueous phases, producing the ester via a shorter pathway than the conventional route. To obtain kinetic data, experiments were conducted, varying the inner diameter, reaction temperature, and residence time. In Stevens oxidation, a catalyst was used, and the experimental conditions were adjusted to form slug flow so as to promote the catalyst's mass transfer. Using the obtained data, the model was developed for the change in concentrations of the starting material, desired product, intermediate, dimer, carboxylic acid, and the catalyst. In the catalyst concentration balance, mass transfer was described using the overall volumetric mass transfer coefficient during slug flow formation.
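A structural sketch of this kind of model is given below; the kinetics and parameter values are placeholders (not the Stevens oxidation scheme or the fitted constants), and the point is only to show species balances coupled to a catalyst transfer term governed by an overall volumetric mass transfer coefficient:

```python
# Minimal structural sketch: species ODEs over residence time with a kLa-type
# catalyst mass-transfer term; all kinetics and values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2, kla, cat_eq):
    sm, prod, cat = y                 # starting material, product, catalyst in reacting phase
    r = k1 * sm * cat                 # assumed catalysed main reaction
    return [-r,                       # starting material consumption
            r - k2 * prod,            # product formation minus a loss term
            kla * (cat_eq - cat)]     # catalyst transfer during slug flow

params = dict(k1=0.8, k2=0.05, kla=0.3, cat_eq=0.1)   # placeholder values
sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0, 0.0], args=tuple(params.values()))
print("final product concentration (arbitrary units):", sol.y[1, -1])
```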

The model successfully reproduced the experimental results and demonstrated that, as the inner diameter increases, the efficiency of mass transfer in slug flow decreases, slowing down the reaction. The developed model was used to simulate the yields of the starting material and the dimer, as well as the process mass intensity, in order to determine the feasible region. As a result, it was shown that when the reagent concentration was either too high or too low, the operating conditions fell outside the feasible region. This kinetic model with flow characteristics will be useful for the process design of drug substance synthesis using liquid-liquid reactions. In ongoing work, we are validating the feasible region.

[1] S. Diab, et al., React. Chem. Eng., 2021, 6, 1819. [2] L. Capaldo, et al., Chem. Sci., 2023, 14, 4230. [3] A. Mittal, et al., Ind. Eng. Chem. Res., 2023, 62, 15006. [4] D. Cheng, et al., Ind. Eng. Chem. Res., 2020, 59, 4397.



Learning-based control approach for nanobody-scorpion antivenom optimization

Juan Camilo Acosta-Pavas1, David C Corrales1, Susana M Alonso Villela1, Balkiss Bouhaouala-Zahar2, Georgios Georgakilas3, Konstantinos Mexis4, Stefanos Xenios4, Theodore Dalamagas3, Antonis Kokosis4, Michael O'donohue1, Luc Fillaudeau1, César A. Aceves-Lara1

1TBI, Université de Toulouse, CNRS UMR5504, INRAE UMR792, INSA, Toulouse, France, France; 2Laboratoire des Biomolécules, Venins et Applications Théranostiques (LBVAT), Institut Pasteur de Tunis, 13 Place Pasteur, BP-74, 1002 Le Belvédère, Tunis, Tunisia; 3Athena Research Center, Marousi, Greece; 4School of Chemical Engineering, National Technical University of Athens, Iroon Polytechneiou 9, Zografou, 15780 Athens, Greece

One market focus of bioindustries is the production of recombinant proteins in E. coli for application in serotherapy (Alonso Villela et al., 2023). However, monitoring, control, and optimization of these processes still present limitations. There are different approaches to optimize bioprocess performance; a common one is the use of model-based control strategies such as Model Predictive Control (MPC). Another strategy is learning-based control, such as Reinforcement Learning (RL).

In this work, an RL approach was applied to maximize the production of recombinant proteins in E. coli in induction mode. The aim was to find the optimal substrate feed rate (Fs) applied during induction that maximizes protein productivity. The RL model was trained using the actor-critic Twin-Delayed Deep Deterministic (TD3) Policy Gradient agent. The reward corresponded to the maximum value of the productivity. The environment was represented with a dynamic hybrid model (DHM) published by Corrales et al. (2024). The simulated conditions consisted of a reactor with 2 L of working volume (V) at 37°C for the batch (10 g glucose/L) and fed-batch (feeding with 300 g glucose/L) modes, and 28°C during the induction stage. The first 3.4 h were operated in batch mode. The fed-batch mode was operated with Fs = 1x10^-3 L/h until 8 h. Afterwards, the RL agent was trained in induction mode until the end of the process at 20 h. The agent actions were updated every 2 h. Two types of constraints were considered: 1.49 < V < 5.00 L and 1x10^-3 < Fs ≤ 5x10^-4 L/h. Finally, the results were compared with the MPC approach.

The training options for all the networks were a learning rate of 1x10^-3 for the critic and 1x10^-4 for the actor, a gradient threshold of 1.0, a mini-batch size of 1x10^2, a discount factor of 0.9, an experience buffer length of 1x10^6, and an agent sample time of 0.1 h, with a maximum of 700 episodes.
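For readers who want to reproduce the general setup with open-source tools, the sketch below uses gymnasium and stable-baselines3 as stand-ins for the TD3 agent and the dynamic hybrid model; the induction-phase dynamics, bounds and reward are crude placeholders and do not correspond to the DHM or to the training options reported above:

```python
# Hedged open-source sketch (gymnasium + stable-baselines3 TD3), with toy
# fed-batch dynamics standing in for the dynamic hybrid model of the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3

class InductionEnv(gym.Env):
    """Toy induction-phase environment: state = (V, X, P), action = feed rate Fs."""
    def __init__(self):
        self.observation_space = spaces.Box(low=0.0, high=50.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Box(low=0.0, high=5e-4, shape=(1,), dtype=np.float32)
        self.dt = 2.0                               # action update interval [h]

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.array([2.0, 5.0, 0.0], dtype=np.float32)   # V [L], X [g/L], P [mg]
        self.t = 8.0
        return self.state, {}

    def step(self, action):
        V, X, P = self.state
        Fs = float(action[0])
        V += Fs * self.dt                           # volume increase from feeding
        X += 0.05 * X * self.dt                     # placeholder growth term
        P += 1e-2 * X * self.dt                     # placeholder protein production
        self.state = np.array([V, X, P], dtype=np.float32)
        self.t += self.dt
        reward = P / (self.t - 8.0)                 # productivity-like reward
        return self.state, reward, self.t >= 20.0, False, {}

agent = TD3("MlpPolicy", InductionEnv(), learning_rate=1e-3, batch_size=100,
            gamma=0.9, buffer_size=1_000_000, verbose=0)
agent.learn(total_timesteps=2_000)
```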

The MPC and RL control strategies show similar behaviors. In both cases, the suggested optimal action is to apply the maximum Fs, which increases the protein productivity at the end of the process up to 4.81x10^-2 mg/h. Regarding computation time, the RL agent training took a mean value of 0.3284 s to perform 14.0x10^3 steps at each action update, while the MPC required a mean value of 0.3366 s to solve an optimization problem at every action update. The RL approach proves to be a good alternative for exploring the optimization of recombinant protein production.

References

Alonso Villela, S. M., Kraïem-Ghezal, H., Bouhaouala-Zahar, B., Bideaux, C., Aceves Lara, C. A., & Fillaudeau, L. (2023). Production of recombinant scorpion antivenoms in E. coli: Current state and perspectives. Applied Microbiology and Biotechnology, 107(13), 4133-4152. https://doi.org/10.1007/s00253-023-12578-1

Corrales, D. C., Villela, S. M. A., Cescut, J., Daboussi, F., Fillaudeau, L., & Aceves-Lara, C. A. (2024). Dynamic Hybrid Model for Nanobody-based Antivenom Production (scorpion antivenom) with E. coli CH10-12 and E. coli NbF12-10.



Kinetics modeling of the thermal degradation of densified refuse-derived fuel (d-RDF)

Mohammad Ali Nazari, Juma Haydary

Institute of Chemical and Environmental Engineering, Slovak University of Technology in Bratislava, Slovak Republic

Modern society is currently facing an energy crisis and a massive generation of Municipal Solid Waste (MSW). The conversion of the carbon-containing fraction of MSW, known as refuse-derived fuel (RDF), into energy, fuel, and high-value bio-based chemicals has become a key focus in ongoing discussions on sustainable development, driven by rising energy demand, depleting fossil fuel reserves, and growing environmental concerns. However, a significant limitation of unprocessed RDF lies in its heterogeneous composition, which complicates material handling, reactor feeding, and the accurate prediction of its physical and chemical properties. The densification of RDF (d-RDF) offers a potential solution to these challenges by reducing material variability and generating a more uniform, durable form, thereby enhancing its suitability for processes such as pyrolysis. This research effort involves evaluating the physicochemical characteristics and thermal degradation of d-RDF using a thermogravimetric analyzer (TGA) under controlled conditions at heating rates of 2, 5, 10, and 20 K·min⁻¹. Model-free methods, including Friedman (FRM), Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS), Vyazovkin (VYZ), and Kissinger, were applied to determine the apparent kinetic and thermodynamic parameters within the conversion range of 1% to 85%. The physicochemical properties of d-RDF demonstrated its suitability for various thermochemical conversion applications. Thermal degradation predominantly occurred within the temperature range of 220–500°C, accounting for 98% of the total weight loss. The coefficients of determination (R²) for the fitted plots ranged from 0.90 to 1.00 across all applied models. The average activation energy (Eα) calculated using the FRM, FWO, KAS, and VYZ methods was 260, 247, 247, and 263 kJ·mol⁻¹, respectively. The evaluation of thermodynamic parameters (ΔH, ΔG, and ΔS) indicated the endothermic nature of the process. A statistical F-test was applied to identify the best agreement between experimental and calculated data. According to the F-test, the differences in variance for the FRM and VYZ models were insignificant, indicating the best agreement with the experimental data. Considering all results, including kinetic and thermodynamic parameters, along with the high heating value (HHV) of 25.20 MJ·kg⁻¹, d-RDF shows a strong propensity for thermal degradation under pyrolysis conditions and can be regarded as a suitable feedstock for producing fuel and value-added products. Moreover, it serves as a viable alternative to fossil fuels, contributing to the United Nations 2030 Sustainable Development Goals.
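For reference, the regression at the heart of a KAS-type isoconversional analysis can be illustrated with synthetic data (the numbers below are not the d-RDF measurements): at a fixed conversion, ln(β/T²) is regressed against 1/T over the heating rates, and the apparent activation energy follows from the slope −Eα/R.

```python
# Worked KAS-type regression on synthetic data consistent with Ea = 250 kJ/mol.
import numpy as np

R = 8.314                                    # J/(mol K)
betas = np.array([2.0, 5.0, 10.0, 20.0])     # heating rates [K/min]
Ea_true, C = 250e3, 34.8                     # synthetic "true" values

# Temperatures at which a fixed conversion is reached for each heating rate,
# obtained by solving ln(beta/T^2) = C - Ea/(R*T) with a fixed-point iteration.
T_alpha = np.full_like(betas, 650.0)
for _ in range(50):
    T_alpha = Ea_true / (R * (C - np.log(betas / T_alpha**2)))

# KAS plot: ln(beta/T^2) versus 1/T; the slope equals -Ea/R.
slope, intercept = np.polyfit(1.0 / T_alpha, np.log(betas / T_alpha**2), 1)
print(f"apparent activation energy (KAS): {-slope * R / 1e3:.0f} kJ/mol")
```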



Cost-optimal solvent selection for batch cooling crystallisation of flurbiprofen

Matthew Blair, Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Choosing suitable solvents for crystallisation processes can be very challenging when developing new pharmaceuticals, given the vast number of choices, crystallisation techniques and performance metrics. A high-efficiency solvent must ensure high API recovery, low cost and minimal environmental impact,1 and allow batch (or possibly continuous) operation within an acceptable (not narrow) parameter space. To streamline this task, process and thermodynamic modelling tools2,3 can be used to systematically probe the behaviour of different crystallisation setups in silico prior to conducting lab-scale experiments. In particular, it has been found that we can use thermodynamic models alongside principles from solid-liquid equilibria (SLE) to determine the impact of key process variables (e.g. temperature and solvent choice)1 on the performance of different processes without (or prior to) testing them in the laboratory.2,3

This paper presents the implementation of a modelling framework that can be used to minimise the cost and environmental impact of batch crystallisation processes on the basis of thermodynamic principles. This process modelling framework (implemented in MATLAB®) is employed to study the batch cooling crystallisation of flurbiprofen, a non-steroidal anti-inflammatory drug (NSAID) used against arthritis.4 Moreover, we have used the Non-Random Two-Liquid (NRTL) activity coefficient model to study its thermophysical and solubility properties in twelve (12) common upstream pharmaceutical solvents,4,5 namely three alkanes (n-hexane, n-heptane, n-octane), two (isopropyl, methyl-tert-butyl) ethers, five alcohols (n-propanol, isopropanol, n-butanol, isobutanol, isopentanol), an ester (isopropyl acetate), and acetonitrile, over an adequately wide temperature range (283.15-323.15 K). Established green metrics1 (e.g. E-factor) and costing methodologies are employed to comparatively evaluate process candidates.6
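The screening step rests on the classical solid-liquid equilibrium relation ln(x·γ) = −(ΔHfus/R)·(1/T − 1/Tm); the sketch below solves it for the drug mole fraction with a binary NRTL activity coefficient. The melting properties are only flurbiprofen-like illustrative values and the NRTL parameters are placeholders, not the fitted values of this work:

```python
# Hedged SLE solubility sketch with a binary NRTL activity coefficient model.
import numpy as np
from scipy.optimize import brentq

dHfus, Tm, R = 27.0e3, 387.0, 8.314      # J/mol, K (illustrative values)
alpha, tau12, tau21 = 0.3, 1.2, 0.4      # placeholder NRTL parameters (1 = solute)

def ln_gamma1(x1):
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    return x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                    + tau12 * G12 / (x2 + x1 * G12)**2)

def solubility(T):
    rhs = -dHfus / R * (1.0 / T - 1.0 / Tm)          # ideal-solubility term
    return brentq(lambda x1: np.log(x1) + ln_gamma1(x1) - rhs, 1e-6, 0.9999)

for T in (283.15, 303.15, 323.15):
    print(f"T = {T:.2f} K  ->  x_drug = {solubility(T):.4f}")
```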

LITERATURE REFERENCES

  1. Blair et al., Process modeling, simulation and technoeconomic evaluation of batch vs continuous pharmaceutical manufacturing cephalexin. 2023 AIChE Annual Meeting, Orlando, to appear (2023).
  2. Watson et al., Computer aided design of solvent blends for hybrid cooling and antisolvent crystallization of active pharmaceutical ingredients. Organic Process Research & Development 25(5): 1123 (2021).
  3. Sheikholeslamzadeh et al., Optimal solvent screening for crystallization of pharmaceutical compounds from multisolvent systems. Industrial & Engineering Chemistry Research 51(42): 13792 (2012).
  4. Tian et al., Solution thermodynamic properties of flurbiprofen in twelve solvents (283.15–323.15 K). Journal of Molecular Liquids 296: 111744 (2019).
  5. Prat et al., CHEM21 selection guide of classical and less classical solvents. Green Chemistry 18(1): 288 (2016).
  6. Dafnomilis et al., Multiobjective dynamic optimization of ampicillin batch crystallization: sensitivity analysis of attainable performance vs product quality constraints, Industrial & Engineering Chemistry Research 58(40): 18756 (2019).


A Machine Learning (ML) implementation for beer fermentation optimisation

Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Food and beverage industries receive key feedstocks whose composition is subject to geographic and seasonal variability, and rely on factories whose process conditions have limited manipulation margins but must rightfully meet stringent product quality specifications. Unlike chemicals, most of our favourite foods and beverages are highly sensitive and perishable, with relatively small profit margins. Although manufacturing processes (recipes) have been perfected over centuries or even millennia, quantitative understanding is limited. Predictions about the influence of input (feedstock) composition and manufacturing (process) conditions on final food/drink product quality are hazardous, if not impossible, because small changes can result in extreme variations. A slightly warmer fermentation renders beer undrinkable; similarly, an imbalance among sugar, lipid (fat) and protein can make chocolate unstable.

The representational versatility of Artificial Neural Networks (ANNs) for process systems studies has been well known for decades.2 First-principles knowledge (mass, heat and momentum conservation, chemical reactions), though, is captured via deterministic (ODE/PDE) models, which invariably require laborious parameterisation for each particular process plant. Physics-Informed Neural Networks (PINN)3 combine the best of both worlds: they offer chemistry-compliant NNs with proven extrapolation power to revolutionise manufacturing, circumventing parametric estimation uncertainty and enabling efficient process control. Fermentation for specific products (e.g. ethanol4, biopharmaceuticals5) has been explored by means of ML/ANN (not PINN) tools, thus without embedded first-principles descriptions.3

Though Food Science cannot provide global composition-structure-quality correlations, Artificial Intelligence (AI) can be used to extract valuable process knowledge from factory data. The case of beer, in particular, has been the focus of several of our papers,6-7 offering a sound comparison basis for evaluating model fidelity between these precedents and new PINN approaches. Pursuing PINN modelling caters to greater complexity in terms of plant flowsheet and target product structure and chemistry. We thus revisit the problem with ML/PINN tools to efficiently predict process efficiency, which is instrumental in the computational design and optimisation of key unit operations (e.g. fermentors). Traditional (first-principles) descriptions of these necessitate elaborate (e.g. CFD) submodels of extreme complexity, with at least two severe drawbacks: (1) cumbersome prerequisite parameter estimation with extreme uncertainty, and (2) prohibitively high CPU cost. The complementarity of the two major approaches is thus investigated, and major advantages and shortcomings will be highlighted.
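To make the PINN idea concrete, the toy PyTorch sketch below (not one of the authors' models) trains a network X(t) on a few data points while penalising the residual of a logistic growth ODE, dX/dt = μX(1 − X/Xmax), which is the kind of first-principles constraint a PINN embeds:

```python
# Minimal PINN illustration: data loss on sparse points plus an ODE-residual
# loss evaluated with autograd on collocation points. All values are synthetic.
import torch

torch.manual_seed(0)
mu, Xmax = 0.4, 10.0

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

# Sparse "measurements" from the analytical logistic solution with X(0) = 0.5.
t_data = torch.tensor([[0.0], [5.0], [10.0], [20.0]])
X_data = Xmax / (1 + (Xmax / 0.5 - 1) * torch.exp(-mu * t_data))

t_phys = torch.linspace(0.0, 25.0, 100).reshape(-1, 1).requires_grad_(True)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    opt.zero_grad()
    loss_data = torch.mean((net(t_data) - X_data) ** 2)
    X = net(t_phys)
    dXdt = torch.autograd.grad(X, t_phys, torch.ones_like(X), create_graph=True)[0]
    loss_phys = torch.mean((dXdt - mu * X * (1 - X / Xmax)) ** 2)
    (loss_data + loss_phys).backward()
    opt.step()
```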

LITERATURE REFERENCES

  1. Gerogiorgis & Bakalis, Digitalisation of Food+Beverage Manufacturing, Food & Bioproducts Processing, 128: 259-261 (2021).
  2. Lee et al., Machine learning: Overview of recent progresses and implications for the Process Systems Engineering field, Computers & Chemical Engineering, 114: 111-121 (2018).
  3. Karniadakis et al., Physics-informed machine learning, Nature Reviews Physics, 3(6): 422-440 (2021).
  4. Pereira et al., Hybrid NN modelling and particle swarm optimization for improved ethanol production from cashew apple juice, Bioprocess & Biosystems Engineering 44: 329-342 (2021).
  5. Petsagkourakis et al., Reinforcement learning for batch bioprocess optimization. Computers & Chemical Engineering, 133: 106649 (2020).
  6. Rodman & Gerogiorgis, Multi-objective process optimisation of beer fermentation via dynamic simulation, Food & Bioproducts Processing, 100A: 255-274 (2016).
  7. Rodman & Gerogiorgis, Dynamic optimization of beer fermentation: Sensitivity analysis of attainable performance vs. product flavour constraints, Computers & Chemical Engineering, 106: 582-595 (2017).


Operability analysis of modular heterogeneous electrolyzer plants using system co-simulation

Michael Große1,3, Isabell Viedt2,3, Hannes Lange2,3, Leon Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group; 3TUD Dresden University of Technology, Process-to-Order Lab

In the upcoming decades, the scale-up of hydrogen production will play a crucial role in the integration of renewable energy into future energy systems [1]. One scale-up strategy is the numbering-up of standardized electrolysis units in a modular plant concept [2, 3]. The use of a modular plant concept can support the integration of different electrolyzer technologies into one heterogeneous electrolyzer plant in order to leverage technology-specific advantages and counteract disadvantages [4].

This work focuses on the analysis of technical operability and feasibility of large-scale modular electrolyzer plants in a heterogeneous plant layout using system co-simulation. Developed and available dynamic process models of low-temperature electrolysis components are combined in Simulink as a shared co-simulation environment. Strategies to control relevant process parameters, like temperatures, pressures, flow rates and component mass fractions in the different subsystems and the overall plant, are developed and presented. An operability analysis is carried out to verify the functionality of the presented plant layout and the corresponding control strategies [5].

The dynamic progression of all controlled parameters is presented for different operative states that may occur, such as start-up, continuous operation, load change and hot-standby behavior. It is observed that the exemplary plant is operational, as all relevant process parameters can be held within the allowed operating range during all operative states. However, some limitations regarding the possible operating range of individual technologies are identified. Possible solution approaches for these identified problems are conceptualized.

Additionally, relevant metrics for efficiency and flexibility, such as the specific energy consumption and the expected unserved flexible energy (EUFE) [4], are calculated to prove the feasibility and show the advantages of heterogeneous electrolyzer plant layouts, such as heightened operational flexibility without major reductions in efficiency.

Sources

[1] International Energy Agency, "Global Hydrogen Review 2023", 2023. https://www.iea.org/reports/global-hydrogen-review-2023.

[2] L. Bittorf et al., "Upcoming domains for the MTP and an evaluation of its usability for electrolysis", in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Sep. 2022, pp. 1–4. doi: 10.1109/ETFA52439.2022.9921280.

[3] H. Lange, A. Klose, L. Beisswenger, D. Erdmann, and L. Urbas, "Modularization approach for large-scale electrolysis systems: a review", Sustain. Energy Fuels, vol. 8, no. 6, pp. 1208–1224, 2024, doi: 10.1039/D3SE01588B.

[4] M. Mock, I. Viedt, H. Lange, and L. Urbas, "Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains", in Computer Aided Chemical Engineering, vol. 53, Elsevier, 2024, pp. 1885–1890. doi: 10.1016/B978-0-443-28824-1.50315-X.

[5] V. Gazzaneo, J. C. Carrasco, D. R. Vinson, and F. V. Lima, "Process Operability Algorithms: Past, Present, and Future Developments", Ind. Eng. Chem. Res., vol. 59, no. 6, pp. 2457–2470, Feb. 2020, doi: 10.1021/acs.iecr.9b05181.



High-pressure membrane reactor for ammonia decomposition: Modeling, simulation and scale-up using a Python-Aspen Custom Modeler interface

Leonardo Antonio Cáceres Avilez, Antonio Esio Bresciani, Claudio Augusto Oller do Nascimento, Rita Maria de Brito Alves

Universidade de São Paulo, Brazil

One of the current challenges for hydrogen-related technologies is its storage and transportation. The low volumetric density and low boiling point of hydrogen require high-pressure and low-temperature conditions for effective transport and storage. A potential solution to these challenges involves storing hydrogen in chemical compounds that can be easily transported and stored, with hydrogen being released through decomposition processes [1]. Ammonia is a promising hydrogen carrier due to its high hydrogen content, approximately 17.8% by mass, and its high volumetric hydrogen density of 121 kg/m³ at 10 bar [2]. The objective of this study was to develop a mathematical model to analyze and design a packed bed membrane reactor (PBMR) for large-scale ammonia decomposition. The kinetic model for the Ru-K/CaO catalyst was obtained from the literature and validated with experimental data [3]. This catalyst was selected due to its effective performance under high-pressure conditions, which increases the driving force for hydrogen permeation in the membrane reactor. The model was developed in Aspen Custom Modeler (ACM) using a 1D pseudo-homogeneous approach. The governing equations for mass, energy, and momentum conservation were discretized via a first-order backward finite difference method and solved using a nonlinear solver. An effectiveness factor was incorporated to account for intraparticle mass transfer limitations, which are prevalent with the large particle sizes typically employed in industrial applications. The study further investigated the influence of sweep gas ratio, temperature, relative pressure, and space velocity on ammonia conversion and hydrogen recovery, employing response surface methodology generated through an ACM-Python interface. The proposed multi-tubular membrane reactor achieved approximately 90.4% ammonia conversion and 91% hydrogen recovery, operating at an inlet temperature of 400°C and a pressure of 40 bar. Under the same heat flux, the membrane reactor exhibited approximately 15% higher ammonia conversion compared to a conventional fixed bed reactor. Furthermore, the developed model is easily transferable to Aspen Plus, facilitating subsequent conceptual process design and economic analyses.
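A heavily simplified, isothermal sketch of the kind of axial marching performed by such a 1D model is shown below; it is not the Aspen Custom Modeler model, and the kinetic and permeation coefficients are placeholders chosen only to make the script run:

```python
# Simplified 1D plug-flow sketch of ammonia decomposition with H2 permeation.
# The ACM model solves the backward-difference equations implicitly with a
# nonlinear solver; here the source terms use the upstream node for brevity.
N, L = 200, 2.0                              # axial nodes, reactor length [m]
dz = L / N
F_nh3, F_h2, F_h2_perm = 100.0, 1.0, 0.0     # molar flows [mol/s], placeholders
k_rxn, k_perm = 40.0, 5.0                    # placeholder rate and permeation coefficients

for _ in range(N):
    total = F_nh3 + F_h2 + 1e-9
    r = k_rxn * F_nh3 / total                # pseudo-first-order decomposition term
    j = k_perm * F_h2 / total                # simplistic permeation driving term
    F_nh3 += dz * (-r)
    F_h2 += dz * (1.5 * r - j)               # NH3 -> 0.5 N2 + 1.5 H2
    F_h2_perm += dz * j

conversion = 1.0 - F_nh3 / 100.0
recovery = F_h2_perm / (F_h2 + F_h2_perm)
print(f"NH3 conversion: {conversion:.2%},  H2 recovery: {recovery:.2%}")
```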

[1] I. Lucentini, G. García Colli, C. D. Luzi, I. Serrano, O. M. Martínez, and J. Llorca, ‘Catalytic ammonia decomposition over Ni-Ru supported on CeO2 for hydrogen production: Effect of metal loading and kinetic analysis’, Appl Catal B, vol. 286, p. 119896, 2021.

[2] J. W. Makepeace, T. J. Wood, H. M. A. Hunter, M. O. Jones, and W. I. F. David, ‘Ammonia decomposition catalysis using non-stoichiometric lithium imide’, Chem Sci, vol. 6, no. 7, p. 3805–3815, 2015.

[3] S. Sayas, N. Moerlanés, S. P. Katikaneni, A. Harale, B. Solami, J. Gascon. ‘High pressure ammonia decomposition on Ru-K/CaO catalysts’. Catal. Sci. Technol. vol. 10, p. 5027- 5035, 2020.



Developing a circular economy around jam production wastes

Carlos Sanz, Mariano Martin

Department of Chemical Engineering. Universidad de Salamanca, Plz Caídos 1-5, 37008, Salamanca, Spain

Abstract

The food industry is a significant source of waste. In the EU alone, more than 58 million tons of food waste are generated annually [1], with an estimated market value of 132 billion euros [2]. While over half of this waste is produced at the household level and thus consists of a mixture, one-quarter originates directly from manufacturing facilities. Traditionally, the mixed waste has been managed through municipal solid waste (MSW) treatment and valorization procedures [3]. However, there is an opportunity to valorize the waste produced in the agri-food sector to support the adoption of a circular economy within the food supply chain, beginning at the transformation facilities. This would enable the recovery of value-added products and reduce the need for external resources, creating a circular economy through process integration.

In this work, the valorization of biowaste for a circular economy is explored through the case of jam waste. An integrated process is designed to extract value-added products such as phenolic compounds and pectin, as well as to produce ethanol, a green solvent, for internal use and/or as a final product. The solid residue can then either be gasified (GA) or digested (AD) to produce hydrogen, thermal energy and power. These technologies are systematically compared using a mathematical optimization approach, with units modeled based on first principles and experimental yields. The base case focuses on a real jam production facility from a well-known company.

Waste processing requires an investment of €2.0-2.3 million to treat 37 tons of waste per year, yielding 5.2 kg/t of phenolic compounds and 15.9 kg/t of pectin. After extraction of the valuable products, the solids are subjected to either anaerobic digestion or gasification. The amount of biogas produced (368.1 Nm3/t) is about half that of syngas (660.2 Nm3/t), so the energy produced by the gasification process (5,085.6 kWh/t) is higher than that produced by anaerobic digestion (3,136.3 kWh/t). Both technologies are self-sufficient in terms of energy, although additional thermal energy input is required. However, while gasification produces more energy than anaerobic digestion, the latter is cheaper and has a lower entry barrier, especially as the process scales. As the results show, incorporating such processes into jam production facilities is not only profitable, but also allows the application of circular economy principles, reducing waste and external energy consumption while providing value-added by-products such as phenolic compounds and pectin.

References

[1] Eurostat, Food waste and food waste prevention - estimates, (2023).

[2] SWD, Impact Assessment Report, Brussels, 2023.

[3] EPA, Municipal Solid Waste, (2016). https://archive.epa.gov/epawaste/nonhaz/municipal/web/html/ (accessed April 13, 2024).



Data-driven optimization of chemical dosage in wastewater treatment: A surrogate model approach for enhanced physicochemical phosphorus removal

Florencia Caro1, Jimena Ferreira2,3, José Carlos Pinto4, Elena Castelló1, Claudia Santiviago1

1Biotechnological Processes for the Environment Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 2Chemical & Process Systems Engineering Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 3Heterogeneous Computing Laboratory, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 4Programa de Engenharia Química/COPPE, Universidade Federal do Rio de Janeiro, Cidade Universitária, CP: 68502, Rio de Janeiro, 21941-972 RJ, Brazil

Excessive phosphorus discharge into water bodies can cause severe environmental issues, such as eutrophication [1]. Discharge limits have become more stringent, and operating phosphorus removal systems for wastewater that are economically feasible and allow for regulatory compliance remains a challenge [2]. Physicochemical phosphorus removal (PPR) using metal salts is effective for achieving low phosphorus levels and can supplement biological phosphorus removal (BPR) [3]. PPR offers flexibility, as phosphorus removal can be adjusted by modifying the chemical dosage [4], and is simple, requiring only a chemical dosing system and a clarifier to separate the treated effluent from the resulting precipitate [3]. Proper dosage control is important to avoid under- or overdosing, which affects phosphorus removal efficiency and operational costs. PPR depends on the system design and effluent characteristics [4]. Therefore, dosages are generally established through laboratory experiments, data from other wastewater treatment plants (WWTPs), and dosing charts [3]. Modeling can enhance chemical dosing in WWTPs, and various sequential simulators can perform this task. BioWin exemplifies this capability, incorporating PPR using metal salts and accounting for pH, precipitation processes, and interactions with organic matter measured as chemical oxygen demand (COD). However, BioWin cannot directly optimize chemical dosing for specific WWTP configurations.

This work develops a surrogate model using BioWin's simulated data to create a tool that optimizes chemical dosages based on influent characteristics, thus providing tailored solutions for an edible oil WWTP, which serves as the case study. The industry operates its own WWTP and discharges the treated effluent into a watercourse. Due to the production process, the influent has high and variable phosphorus concentrations. PPR is applied as a supplementary treatment to BPR when phosphorus levels exceed discharge limits. The decision variables in the optimization are the aluminum sulfate dosage for phosphorus removal and the sodium hydroxide dosage for pH adjustment, as aluminum sulfate lowers the effluent pH. The chemical cost is set as the objective function, and the effluent discharge parameters as constraints. The surrogate physicochemical model, which links influent parameters and dosing to effluent outcomes, is also included as a constraint. Data acquisition from BioWin is automated using Bio2Py [5]. The optimization model is implemented in Pyomo.
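A sketch of the optimisation layer is given below: the chemical cost is minimised subject to discharge limits and a surrogate effluent model. The linear surrogate and all coefficients are placeholders standing in for the model regressed from BioWin simulations; the real formulation links influent characteristics, doses and effluent quality as described above.

```python
# Hedged Pyomo sketch of the dosing optimisation (placeholder surrogate and
# costs); requires an NLP/LP solver such as IPOPT to be installed.
import pyomo.environ as pyo

influent_P, influent_COD = 25.0, 1800.0      # mg/L, illustrative influent characterisation
P_limit, pH_min = 5.0, 6.0                   # illustrative discharge constraints

m = pyo.ConcreteModel()
m.alum = pyo.Var(bounds=(0.0, 500.0), initialize=50.0)    # aluminium sulfate dose [mg/L]
m.naoh = pyo.Var(bounds=(0.0, 200.0), initialize=0.0)     # sodium hydroxide dose [mg/L]

# Placeholder surrogates linking doses and influent quality to effluent P and pH.
m.effluent_P = pyo.Expression(
    expr=influent_P - 0.08 * m.alum * (1.0 - 0.0001 * influent_COD))
m.effluent_pH = pyo.Expression(expr=7.2 - 0.01 * m.alum + 0.02 * m.naoh)

m.P_constraint = pyo.Constraint(expr=m.effluent_P <= P_limit)
m.pH_constraint = pyo.Constraint(expr=m.effluent_pH >= pH_min)

cost_alum, cost_naoh = 0.25, 0.60            # illustrative unit costs
m.cost = pyo.Objective(expr=cost_alum * m.alum + cost_naoh * m.naoh, sense=pyo.minimize)

pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.alum), pyo.value(m.naoh), pyo.value(m.cost))
```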

Preliminary results indicate that influent COD significantly affects phosphorus removal and should be considered when determining the chemical dosage. For high COD levels, more aluminum than suggested by a rule of thumb [3] is required, whereas for moderate and low COD levels a lower dosage is needed, leading to potential cost savings. Furthermore, it was found that pH adjustment is only necessary when phosphorus concentrations are high.

[1] V. Smith et al., Environ. Pollut. 100, 179–196 (1999). doi: 10.1016/S0269-7491(99)00091-3.

[2] R. Bashar et al., Chemosphere 197, 280–290 (2018). doi: 10.1016/j.chemosphere.2017.12.169.

[3] Metcalf & Eddy, Wastewater Engineering: Treatment and Resource Recovery (McGraw-Hill, 2014).

[4] A. Szabó et al., Water Environ. Res. 80, 407–416 (2008). doi: 10.2175/106143008x268498.

[5] F. Caro et al., J. Water Process Eng. 63, 105426 (2024). doi: 10.1016/j.jwpe.2024.105426.



Leveraging Machine Learning for Real-Time Performance Prediction of Near Infrared Separators in Waste Sorting Plant

Imam Mujahidin Iqbal1, Xinyu Wang1, Isabell Viedt1,2, Leonhard Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group

Abstract

Many small and medium-sized enterprises (SMEs), including waste sorting facilities, are not fully capitalizing on the data they collect. Recent advances in waste sorting technology are addressing this challenge. For instance, Tanguay-Rioux et al. (2022) used a mixed modelling approach to develop a process model using data from Canadian sorting facilities, while Kroell et al. (2024) leveraged Near Infrared (NIR) data to create a machine learning model that optimizes the NIR setup. A key obstacle for SMEs in utilizing their data effectively is the lack of technical expertise. Wang et al. (2024) demonstrated that the ecoKI platform is a viable solution for SMEs, as it is a low-code platform, requires no prior machine learning knowledge and is simple to use. This work forms part of the EnSort project, which aims to enhance automation and energy efficiency in waste sorting plants by utilizing the collected data. This study explores the application of the ecoKI platform to turn process measurement data into performance monitoring tools. Data, including material composition and belt weigher sensor readings, were collected from an operational waste sorting plant in Northern Europe. The data were processed using the ready-made building blocks provided within the ecoKI platform, avoiding the need for manual coding. The platform's real-time monitoring feature was utilized to continuously track performance. Two neural network architectures, Multilayer Perceptrons (MLP) and Long Short-Term Memory (LSTM) networks, were explored for predicting NIR separation efficiency. The results demonstrated the potential of these data-driven models to accurately capture the essential relationships between input features and NIR performance. This work illustrates how raw measurement data in waste sorting facilities can be transformed into actionable insights for real-time performance monitoring, offering an accessible, user-friendly solution for industries that lack machine learning expertise. By enabling SMEs to leverage their existing data, the platform paves the way for improved operational efficiency and decision-making. Furthermore, this approach can be adapted to various industrial contexts beyond waste sorting applications, setting the stage for future developments in automated, data-driven optimization of equipment performance.
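As a plain-Python illustration of the kind of model described (the actual work uses the ecoKI platform's ready-made building blocks and plant data, which are not reproduced here), a small MLP regressor can be fitted to synthetic features standing in for belt-weigher and composition signals:

```python
# Sketch with synthetic data only: an MLP maps process features to a
# NIR separation-efficiency target.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.uniform(size=(2000, 4))     # e.g. throughput, moisture, target-fraction share, belt speed
y = 0.9 - 0.3 * X[:, 0] + 0.1 * X[:, 2] - 0.05 * X[:, 1] + 0.02 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R2 on held-out data:", round(model.score(X_te, y_te), 3))
```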

References

Tanguay-Rioux, F., Spreutels, L., Héroux, M., & Legros, R. (2022). Mixed modeling approach for mechanical sorting processes based on physical properties of municipal solid waste. Waste Management, 144, 533–542.

Kroell, N., Maghmoumi, A., Dietl, T., Chen, X., Küppers, B., Scherling, T., Feil, A., & Greiff, K. (2024). Towards digital twins of waste sorting plants: Developing data-driven process models of industrial-scale sensor-based sorting units by combining machine learning with near-infrared-based process monitoring. Resources, Conservation and Recycling, 200, 107257.

Wang, X., Rani, F., Charania, Z., Vogt, L., Klose, A., & Urbas, L. (2024). Steigerung der Energieeffizienz für eine nachhaltige Entwicklung in der Produktion: Die Rolle des maschinellen Lernens im ecoKI-Projekt [Increasing energy efficiency for sustainable development in production: The role of machine learning in the ecoKI project] (p. 840).



A Benchmark Simulation Model of Ammonia Production: Enabling Safe Innovation in the Emerging Renewable Hydrogen Economy

Niklas Groll, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), 2800 Kgs.Lyngby, Denmark

The emerging hydrogen economy plays a vital part in transitioning to a sustainable industry. Green hydrogen can be a renewable fuel for process heat and a sustainable feedstock, e.g., for green ammonia. Going forward, producing green ammonia for the food industry and as a platform chemical will be essential [1]. Accordingly, many developments focus on designing and optimizing hydrogen process routes. However, implementing new process ideas and designs also requires testing and ensuring safety.

Safety methodologies can be tested on so-called "benchmark models." Several benchmark processes have been used to innovate new process control and monitoring methods. The Tennessee-Eastman process imitates the behavior of a standard chemical process, the fed-batch fermentation of penicillin serves as a benchmark for biochemical fed-batch operated processes, and with the COST benchmark model, methodologies for wastewater treatment can be evaluated [2], [3], [4]. However, the established benchmark processes do not feature all relevant aspects of the renewable hydrogen pathways, e.g., sustainable feedstocks and energy supply or electrochemical reactions. Thus, the lack of a basic benchmark model for the hydrogen industry creates unnecessary risks when adopting process monitoring and control technologies.

Introducing our unique simulation benchmark model, we pave the way for safer innovations in the hydrogen industry. Our model connects hydrogen production with renewable electricity to the Haber-Bosch process for ammonia production. By integrating electrochemical electrolysis with a standard chemical process, our ammonia benchmark process encompasses all key aspects of innovative hydrogen pathways. The model, built with the versatile Aveva Process Simulator, allows for a seamless transition between steady-state and dynamic simulations and easy adjustments to process design and control parameters. By introducing a set of failures, the model is a benchmark for evaluating risk monitoring and control methods. Furthermore, detecting and eliminating failures can also contribute to the development of new process safety methodologies.

Our new ammonia simulation model is a significant addition to the emerging hydrogen industry, filling the void of a missing benchmark. This comprehensive model serves a dual purpose: It can evaluate and confirm existing process safety methodologies and serve as a foundation for developing new safety methodologies specifically targeting safe hydrogen pathways.

[1] A. G. Olabi et al., ‘Recent progress in Green Ammonia: Production, applications, assessment; barriers, and its role in achieving the sustainable development goals’, Feb. 01, 2023, Elsevier Ltd. doi: 10.1016/j.enconman.2022.116594.

[2] U. Jeppsson and M. N. Pons, ‘The COST benchmark simulation model-current state and future perspective’, 2004, Elsevier Ltd. doi: 10.1016/j.conengprac.2003.07.001.

[3] G. Birol, C. Ündey, and A. Çinar, ‘A modular simulation package for fed-batch fermentation: penicillin production’, Comput Chem Eng, vol. 26, no. 11, pp. 1553–1565, Nov. 2002, doi: 10.1016/S0098-1354(02)00127-8.

[4] J. J. Downs and E. F. Vogel, ‘A plant-wide industrial process control problem’, Comput Chem Eng, vol. 17, no. 3, pp. 245–255, Mar. 1993, doi: 10.1016/0098-1354(93)80018-I.



Thermo-Hydraulic Performance of Pillow-Plate Heat Exchangers with Streamlined Secondary Structures: A Numerical Analysis

Reza Afsahnoudeh, Julia Riese, Eugeny Y. Kenig

Paderborn University, Germany

In recent years, pillow-plate heat exchangers (PPHEs) have gained attention as a promising alternative to conventional shell-and-tube and plate heat exchangers. Their advantages include high pressure resistance, leak-tight construction, and good cleanability. The pillow-like wavy channel structure promotes fluid mixing in the boundary layer, thereby improving heat transfer. However, a significant drawback of PPHEs is boundary layer separation near the welding spots, leading to large recirculation zones. Such zones are the primary cause of increased pressure drop and reduced heat transfer efficiency. Downsizing these recirculation zones is key to improving the thermo-hydraulic performance of PPHEs.

One potential solution is the application of secondary surface structuring [1]. Among others, this can be realized using Electrohydraulic Incremental Forming (EHIF) [2]. Afsahnoudeh et al. [3] demonstrated that streamlined secondary structures, particularly those with ellipsoidal geometries, improved thermo-hydraulic efficiency by up to 6% compared to unstructured PPHEs.

Building upon previous numerical studies, this work investigated the impact of streamlined secondary structures on fluid dynamics and heat transfer within PPHEs. The complex geometries of PPHEs, with and without secondary structures, were generated using forming simulations in ABAQUS 2020. Flow and heat transfer in the inner PPHE channels were simulated using FLUENT 24.1, assuming a single-phase, incompressible, and turbulent system with constant physical properties.

Performance evaluation was based on pressure drop, heat transfer coefficients, and overall thermo-hydraulic efficiency. Additionally, a detailed analysis of the Fanning friction factor and drag coefficient was conducted for various Reynolds numbers to provide deeper insights into the fluid dynamics in the inner channels. The results of these investigations are summarized in this contribution.
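For readers unfamiliar with the quantities involved, the helper below shows the standard definitions typically used in such an evaluation (illustrative operating values, not simulation results): the Reynolds number from the hydraulic diameter and the Fanning friction factor from the channel pressure drop.

```python
# Standard definitions; all numerical values are illustrative.
def reynolds(rho, u, d_h, mu):
    return rho * u * d_h / mu

def fanning_friction_factor(dp, d_h, length, rho, u):
    # From dp = 4 f (L/d_h) (rho u^2 / 2)  =>  f = dp d_h / (2 rho u^2 L)
    return dp * d_h / (2.0 * rho * u**2 * length)

rho, mu = 998.0, 1.0e-3                  # water near 20 degC
u, d_h, L, dp = 0.5, 0.008, 1.0, 1200.0  # velocity [m/s], hydr. diameter [m], length [m], dp [Pa]
print("Re =", round(reynolds(rho, u, d_h, mu)),
      " f =", round(fanning_friction_factor(dp, d_h, L, rho, u), 4))
```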

References

[1] M. Piper, A. Zibart, E. Djakow, R. Springer, W. Homberg, E.Y. Kenig, Heat transfer enhancement in pillow-plate heat exchangers with dimpled surfaces: A numerical study. Appl. Therm. Eng., vol. 153, 142-146, 2019.

[2] E. Djakow, R. Springer, W. Homberg, M. Piper, J. Tran, A. Zibart, E.Y. Kenig, “Incremental electrohydraulic forming - A new approach for the manufacturing of structured multifunctional sheet metal blanks,” Proc. of the 20th International ESAFORM Conference on Material Forming, Dublin, Ireland, vol. 1896, 2017.

[3] R. Afsahnoudeh, A. Wortmeier, M. Holzmüller, Y. Gong, W. Homberg, E.Y. Kenig, "Thermo-hydraulic Performance of Pillow-Plate Heat Exchangers with Secondary Structuring: A Numerical Analysis," Energies, vol. 16 (21), 7284, 2023.



Modular and Heterogeneous Electrolysis Systems: a System Flexibility Comparison

Hannes Lange1,2, Michael Große2,3, Isabell Viedt2,3, Leon Urbas1,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process to Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems

Green hydrogen will play a key role in the decarbonization of the steel sector. As a result, the demand for hydrogen in the steel industry will increase in the coming years due to the direct reduction of iron [1]. As the currently commercially available electrolysis stacks are far too small for the required production of green hydrogen, the scaling strategy of numbering up standardized process units can provide support [2]. In addition, cost-effective production of green hydrogen requires the electrolysis system to be able to follow the electricity load, which necessitates a more efficient and flexible system. The modularization of electrolysis systems can provide an approach for this [3]. The potential to include different electrolysis technologies in one heterogeneous electrolysis system can help exploit technology-specific advantages and reduce disadvantages [4]. In this paper, a design of such a heterogeneous electrolysis system is presented, which is built using the modularization of electrolysis process units and is scaled up for large-scale applications, such as a direct iron reduction process, by numbering up. The impact of different degrees of technological and production capacity-related heterogeneity is investigated using system co-simulation of existing electrolyzer models. The direct reduction of iron for green steel production must be supplied with a constant stream of hydrogen from a fluctuating electricity profile. To reduce cost and storage losses, the hydrogen storage capacity must be minimized. For the presented use case, the distribution of technology and production capacity in the heterogeneous plant layout is optimized with regard to overall system efficiency and the ability to follow flexible electricity profiles. The resulting Pareto front is analyzed and the results are compared with a conventional homogeneous electrolyzer plant layout. First results underline the benefits of combining different technologies and production capacities of individual systems in a large-scale heterogeneous electrolyzer plant.

[1] Wietschel M, Zheng L, Arens M, Hebling C, Ranzmeyer O, Schaadt A, et al. Metastudie Wasserstoff – Auswertung von Energiesystemstudien. Studie im Auftrag des Nationalen Wasserstoffrats. Karlsruhe, Freiburg, Cottbus: Fraunhofer ISI, Fraunhofer ISE, Fraunhofer IEG; 2021.

[2] Lange H, Klose A, Beisswenger L, Erdmann D, Urbas L. Modularization approach for large-scale electrolysis systems: a review. Sustain Energy Fuels 2024:10.1039.D3SE01588B. https://doi.org/10.1039/D3SE01588B.

[3] Lange H, Klose A, Lippmann W, Urbas L. Technical evaluation of the flexibility of water electrolysis systems to increase energy flexibility: A review. Int J Hydrog Energy 2023;48:15771–83. https://doi.org/10.1016/j.ijhydene.2023.01.044.

[4] Mock M, Viedt I, Lange H, Urbas L. Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains. Comput. Aided Chem. Eng., vol. 53, Elsevier; 2024, p. 1885–90. https://doi.org/10.1016/B978-0-443-28824-1.50315-X.



CFD-Based Shape Optimization of Structured Packings for Enhancing Separation Efficiency in Distillation

Sebastian Blauth1, Dennis Stucke2, Mohamed Adel Ashour2, Johannes Schnebele1, Thomas Grützner2, Christian Leithäuser1

1Fraunhofer ITWM, Germany; 2Ulm University, Germany

In recent years, research in the field of structured packing development for laboratory-scale separation processes has intensified, where one of the main objectives is to miniaturize laboratory columns with respect to the column diameter. This reduction has several advantages, such as reduced operational costs and lower safety requirements due to the reduced amount of chemicals being used. However, a reduction in diameter also causes problems due to the increased surface-to-volume ratio, e.g., a stronger impact of heat losses or liquid maldistribution issues. There are many different approaches to designing structured packings, such as using repeatedly stacked unit cells, but all of these approaches have in common that the development of new structures and the improvement of existing ones is based on educated guesses by the engineers.
In this talk, we investigate the novel approach of applying techniques from free-form shape optimization to increase the separation efficiency of structured packings in laboratory-scale distillation columns. A simplified single-phase computational fluid dynamics (CFD) model for the mass transfer in the distillation column is used, and a corresponding shape optimization problem is solved numerically with the optimization software cashocs. The approach uses free-form shape optimization, where the shape is not parametrized, e.g., with the help of a CAD model; instead, all nodes of the computational mesh are moved to alter the shape. In particular, this approach allows for more freedom in the packing design than the classical, parametrized approach. The goal of the shape optimization is to increase the mass transfer in the column by changing the packing's shape. The numerical shape optimization yields promising results and shows a greatly increased mass transfer for the simplified CFD model. To validate our findings, the optimized shape is additively manufactured and investigated experimentally. The experimental results are in very good agreement with the simulation-based prediction and show that the separation efficiency of the packing increased by around 20 % as a consequence of the optimization. Our results show that the proposed approach of using free-form shape optimization for improving structured packings in distillation is extremely promising and will be pursued further in future research.



Multi-Model Predictive Control of a Distillation Column

Mehmet Arıcı1,3, Wachira Daosud2, Jozef Vargan3, Miroslav Fikar3

1Gaziantep Islam Science and Technology University, Gaziantep 27010, Turkey; 2Faculty of Engineering, Burapha University, Chonburi 20131, Thailand; 3Slovak University of Technology in Bratislava, Bratislava 81237, Slovakia

Due to the increasing demand for performance and the rising complexity of systems, classical model predictive control (MPC) techniques are often inadequate, and new applications often require modifications to the predictive control mechanism. These modifications frequently include a reformulation of the optimal control problem in order to cope with system uncertainties, external perturbations and the adverse effects of rapid changes in operating points. Moreover, successful implementation of this optimization-driven control technique is highly dependent on an accurate and detailed model of the process, which is relatively easy to obtain for chemical processes with a simple structure. As the complexity of the system increases, however, the linear approximation used in MPC may result in poor performance or even total failure. In such a case, a nonlinear system model can be used for the optimal control signal calculation, but the lack of a reliable dynamic process model is one of the major challenges in the real-time implementation of MPC schemes. Even when a model representing the complex behavior is available, such a model can be difficult to optimize in real time.
To demonstrate the potential challenges addressed above, a binary distillation column process is chosen as a testbed. The process is multivariable and inherently nonlinear. Furthermore, a linear model approximation for a critical operating point is valid only in a small neighborhood of that operating point. Therefore, we propose to employ multiple models that describe the same process dynamics to a certain degree. In addition to the linear model, a multi-layered feedforward network is used for data-based modeling and constitutes an additional process model. Both models are used in parallel to predict the state variables, and their outputs and constraints are applied in the MPC algorithm. Various cost function formulations are proposed to cope with multiple models. The aim is to enhance efficiency and robustness in process control by compensating for the limitations of each individual model. Additionally, an offset-free technique is applied to eliminate steady-state errors resulting from model-process mismatch.
We compare the performance of the proposed method to MPC using the full nonlinear model and to single-model MPC based on either the linear model or the neural network model. We show that the proposed method is only slightly suboptimal with respect to the best available performance and improves greatly over the individual single-model methods. In addition, the computational load is significantly reduced compared to the full nonlinear MPC.
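
The following is a minimal, hypothetical sketch of the idea of penalizing the tracking error of several process models in one MPC cost; the linear model, the stand-in "neural network" model, the weights, and the tunings are placeholders for illustration only, not the authors' implementation.

    # Toy multi-model MPC sketch: two models predict the output over the horizon,
    # and the cost averages their tracking errors (weights w are placeholders).
    import numpy as np
    from scipy.optimize import minimize

    A, B, C = np.array([[0.9]]), np.array([[0.1]]), np.array([[1.0]])

    def predict_linear(x0, u_seq):
        x, y = x0.copy(), []
        for u in u_seq:
            x = A @ x + B @ np.atleast_1d(u)
            y.append(float(C @ x))
        return np.array(y)

    def predict_nn(x0, u_seq):
        # Placeholder for a trained feedforward network; here a mild
        # nonlinear perturbation of the linear response for illustration.
        y_lin = predict_linear(x0, u_seq)
        return y_lin + 0.05 * np.tanh(y_lin)

    def mpc_cost(u_seq, x0, y_ref, w=(0.5, 0.5), r=0.01):
        y1, y2 = predict_linear(x0, u_seq), predict_nn(x0, u_seq)
        track = w[0] * np.sum((y1 - y_ref) ** 2) + w[1] * np.sum((y2 - y_ref) ** 2)
        return track + r * np.sum(np.diff(np.r_[0.0, u_seq]) ** 2)

    x0, horizon = np.array([0.0]), 10
    y_ref = np.ones(horizon)
    res = minimize(mpc_cost, np.zeros(horizon), args=(x0, y_ref),
                   bounds=[(-2, 2)] * horizon)
    print("first control move:", res.x[0])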



Enhancing Fault Diagnosis for Chemical Processes via MSCNN with Hyperparameter Optimization and Uncertainty Estimation

Jingkang Liang, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark

Fault diagnosis is critical for maintaining the safety and efficiency of chemical processes, as undetected faults can lead to operational disruptions, safety hazards, and significant financial losses. Data-driven fault diagnosis methods, especially deep-learning-based methods, have been widely used in the fault diagnosis of chemical processes [1]. However, these deep learning methods often rely on manually tuning the hyperparameters to obtain an optimal model, which is time-consuming and labor-intensive [2]. Additionally, existing fault diagnosis methods typically do not consider uncertainty in their analysis, which is essential for assessing the confidence in model predictions, especially in safety-critical industries. This underscores the need for reliable methods that not only improve accuracy but also provide uncertainty estimates in fault diagnosis for chemical processes, and it sets the premise for the research focus of this contribution.

To this end, we present an assessment of a new approach that combines a Multiscale Convolutional Neural Network (MSCNN) with hyperparameter optimization and Bootstrap-based uncertainty estimation. The MSCNN is designed to capture complex nonlinear features of chemical processes. The Tree-structured Parzen Estimator (TPE), a Bayesian optimization method, is employed to automatically search for optimal hyperparameters, such as the number of convolutional layers and the kernel sizes in the multiscale module, minimizing manual tuning effort and ensuring higher accuracy when training the deep learning models. Additionally, the Bootstrap technique, which was validated earlier for deep-learning-based property prediction [3], is employed to improve model accuracy and provide uncertainty estimates, making the model more robust and reliable.
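
As a hedged illustration of TPE-based hyperparameter search for a multiscale CNN, the sketch below uses Optuna (which implements TPE); the library choice, the search space, and train_and_validate() are assumptions for illustration, not details from the study.

    # Hypothetical TPE search over MSCNN hyperparameters with Optuna.
    import optuna

    def train_and_validate(n_conv_layers, kernel_sizes, lr):
        # Placeholder: train the MSCNN and return validation accuracy.
        return 0.9 - 0.01 * abs(n_conv_layers - 3) + 0.001 * min(kernel_sizes)

    def objective(trial):
        n_conv_layers = trial.suggest_int("n_conv_layers", 2, 5)
        kernel_sizes = [trial.suggest_int(f"kernel_{i}", 3, 11, step=2)
                        for i in range(3)]          # one kernel size per scale branch
        lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
        return train_and_validate(n_conv_layers, kernel_sizes, lr)

    study = optuna.create_study(direction="maximize",
                                sampler=optuna.samplers.TPESampler(seed=0))
    study.optimize(objective, n_trials=50)
    print(study.best_params)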

A simulation study was carried out on the Tennessee Eastman Process dataset, a widely used benchmark for fault diagnosis in chemical processes. The dataset covers 21 fault types, and each sample is a one-dimensional vector of 52 variables. In total, 26,880 samples were collected and split randomly into training, validation, and testing sets in a 0.6:0.2:0.2 ratio. Other state-of-the-art machine learning methods, including MLP, CNN, LSTM, and WDCNN, were used as benchmarks for the proposed method. Performance is evaluated based on precision, recall, number of parameters, and quality of predictions (i.e., uncertainty estimation).

The benchmarking results showed that the proposed MSCNN with TPE and Bootstrap achieved the highest accuracy among all the methods considered. Ablation studies were carried out to verify the effectiveness of TPE and Bootstrap in enhancing the fault diagnosis of chemical processes. Confusion matrices and uncertainty estimates are presented to further discuss the effectiveness of the proposed method.

This work paves the way for more robust and reliable fault diagnosis systems in the chemical industry, offering a powerful tool to enhance process safety and efficiency.

References

[1] Melo et al. "Data-Driven Process Monitoring and Fault Diagnosis: A Comprehensive Survey." Processes 12.2 (2024): 251.

[2] Qin et al. "Adaptive multiscale convolutional neural network model for chemical process fault diagnosis." Chinese Journal of Chemical Engineering 50 (2022): 398-411.

[3] Aouichaoui et al. "Uncertainty estimation in deep learning‐based property models: Graph neural networks applied to the critical properties." AIChE Journal 68.6 (2022): e17696.



Machine learning-aided identification of flavor compounds with green notes in plant-based foods

Huabin Luo, Simen Akkermans, Thian Ping Wong, Ferdinandus Archie Pangestu, Jan F.M. Van Impe

BioTeC+, Chemical and Biochemical Process Technology and Control, Department of Chemical Engineering, KU Leuven, Ghent, Belgium

Plant-based foods have emerged as a global trend as consumers become increasingly concerned about sustainability and health. Despite their growing demand, the presence of off-flavors, especially green notes, significantly impacts consumer acceptance and preference. This study aims to develop a model using Machine Learning (ML) techniques to identify flavor compounds with green notes based on their molecular structure. To achieve this, a database of green compounds in plant-based foods was established by searching flavor databases and the literature. Additionally, non-green compounds with similar structures and balanced chemical classes relative to the green compounds were collected as a negative set for model training. Subsequently, molecular descriptors (MD) and molecular fingerprints (MF) were calculated from the molecular structures of the collected flavor compounds and used as input for ML. In this study, k-Nearest Neighbors (kNN), Logistic Regression (LR), and Random Forest (RF) were used to develop the models. Afterward, the developed models were optimized and evaluated. Results indicated that green compounds exhibit a wide range of structural variations. Topological structure, electronic properties, and surface area properties were the essential MDs for distinguishing green from non-green compounds. Regarding the identification of flavor compounds with green notes, the LR model performed best, correctly classifying more than 95% of the compounds in the test set, followed by the RF model with an accuracy of more than 92%. In summary, combining MD and MF as input for ML provides a solid foundation for identifying flavor compounds with green notes. These findings provide knowledge and tools for developing strategies to mitigate green off-flavors and control flavor quality in plant-based foods.
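
A minimal sketch of this type of fingerprint-based classification workflow is shown below, assuming RDKit for the fingerprints and scikit-learn for the classifiers; the SMILES strings and labels are placeholders, not data from the study.

    # Hypothetical green/non-green classification from Morgan fingerprints.
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    smiles = ["CCCCCC=O", "CCCCC(=O)C", "c1ccccc1O", "CCO"]   # placeholder molecules
    labels = [1, 1, 0, 0]                                     # 1 = green note (hypothetical)

    def morgan_fp(smi, n_bits=1024):
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        arr = np.zeros((n_bits,), dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)              # bit vector -> numpy array
        return arr

    X = np.vstack([morgan_fp(s) for s in smiles])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                              random_state=0, stratify=labels)

    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(n_estimators=200, random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, "test accuracy:", model.score(X_te, y_te))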



LSTMs and nonlinear State Space Models – are they the same?

Ashwin Chandrasekhar, Prashant Mhaskar

McMaster University, Canada

This manuscript identifies and addresses discrepancies in the implementation of Long Short-Term Memory (LSTM) neural networks for naturally occurring dynamical processes, specifically in cases claiming to capture input-output dynamic relationships using a state-space framework. While LSTMs are well-suited for these kinds of problems, there are two key issues in how LSTMs are currently structured and trained in this context.

First, the hidden and cell states of the LSTM model are often reinitialized or discarded between input-output sequences in the training dataset. This practice essentially results in a framework where the initial hidden and cell states of each sequence are not trained. However, in a typical state-space model identification process, both the model parameters and the states need to be identified simultaneously.

Second, the model structure of LSTMs differs from a traditional state-space (SS) representation. In state-space models, the current state is defined as a function of the previous state and input from the prior time step. In contrast, LSTMs use the input from the same time step, creating a structural mismatch. Moreover, for each LSTM cell, there is a corresponding hidden state and a cell state, representing the short- and long-term memory of a given state, and hence it is necessary to address this difference in structure conceptually.

To resolve these inconsistencies, two changes are proposed in this paper. First, the initial hidden and cell states for the training sequences should be trained. Second, to address the structural mismatch, the hidden and cell states from the LSTM are reformatted to match the state and data pairing that a state-space model would use.
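
A conceptual PyTorch sketch of the two modifications is given below; this is an illustrative reconstruction under stated assumptions (trainable per-sequence initial states and a one-step input shift), not the authors' code, and the dimensions are placeholders.

    # Sketch: (i) trainable initial hidden/cell states per training sequence,
    # (ii) shift inputs so the state at time k is driven by u[k-1], as in a
    # state-space model x[k] = f(x[k-1], u[k-1]).
    import torch
    import torch.nn as nn

    class SSLikeLSTM(nn.Module):
        def __init__(self, n_inputs, n_hidden, n_outputs, n_sequences):
            super().__init__()
            self.lstm = nn.LSTM(n_inputs, n_hidden, batch_first=True)
            self.readout = nn.Linear(n_hidden, n_outputs)
            # One trainable (h0, c0) pair per training sequence, learned
            # jointly with the network weights.
            self.h0 = nn.Parameter(torch.zeros(n_sequences, 1, n_hidden))
            self.c0 = nn.Parameter(torch.zeros(n_sequences, 1, n_hidden))

        def forward(self, u, seq_idx):
            # Shift inputs by one step (zero-padded at k = 0).
            u_shifted = torch.cat([torch.zeros_like(u[:, :1, :]), u[:, :-1, :]], dim=1)
            h0 = self.h0[seq_idx].transpose(0, 1)   # (1, batch, hidden)
            c0 = self.c0[seq_idx].transpose(0, 1)
            states, _ = self.lstm(u_shifted, (h0, c0))
            return self.readout(states)

    model = SSLikeLSTM(n_inputs=1, n_hidden=8, n_outputs=1, n_sequences=4)
    u = torch.randn(2, 20, 1)                       # 2 sequences, 20 steps
    y_hat = model(u, seq_idx=torch.tensor([0, 1]))
    print(y_hat.shape)                              # torch.Size([2, 20, 1])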

The effectiveness of these modifications is demonstrated using data generated from a simple dynamical system modeled by a Linear Time-Invariant (LTI) state-space system. The importance of these corrections is shown by testing them individually. Interestingly, the worst performance was observed in the model with only trained hidden states, followed by the unmodified LSTM model. The model that only corrected the input timing (without trained hidden and cell states) showed a significant improvement. Finally, the best results were achieved when both corrections were applied together.



Simple Regulatory Control Structure for Proton Exchange Membrane Water Electrolysis Systems

Marius Fredriksen, Johannes Jäschke

Norwegian University of Science and Technology, Norway

Effective control of electrolysis systems connected to renewable energy sources (RES) is crucial to ensure efficient and safe plant operation due to the intermittent nature of most RES. Current control architectures for Proton Exchange Membrane (PEM) electrolysis systems primarily use relatively simple control structures such as Proportional-Integral-Derivative (PID) controllers and on/off controllers. Some works introduce more advanced control structures based on Model Predictive Controllers (MPC) and AI-based control methods (Mao et al., 2024). However, few studies have been conducted on advanced regulatory control (ARC) strategies for PEM electrolysis systems. These control structures have several advantages as they offer fast disturbance rejection, are easier to scale, and are less affected by model accuracy than many of the more computationally expensive control methods, such as MPC (Cammann & Jäschke, 2024).

In this work, we proposed an ARC structure for a PEM electrolysis system using the "Top-down" section of Skogestad's plantwide control procedure (Skogestad & Postlethwaite, 2007, p. 384). First, we developed a steady-state model loosely based on the PEM system presented by Crespi et al. (2023). The model was verified by comparing the behavior of the polarization curve under varying pressure and temperature. We performed step responses on different system inputs to assess their impact on the outputs and to determine suitable pairings of the manipulated and controlled variables. Thereafter, we formulated an optimization problem for the plant and evaluated various implementations of the system's cost function. Finally, we mapped the active constraint regions of the electrolysis system to identify the active constraints in relation to the system's power input. From an economic perspective, controlling the active constraints is crucial, as deviating from the optimal constraint values usually results in an economic penalty (Skogestad, 2023).

We have shown that the optimal operation of PEM electrolysis systems is close to fully constrained in all regions. This implies that constraint-switching control may be used to achieve optimal system operation. The active constraint regions found for the PEM system share several similarities with those found for alkaline electrolysis systems by Cammann and Jäschke (2024). Finally, we have presented a simple constraint-switching control structure for the PEM electrolysis system using PID controllers and selectors.
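
To illustrate the general idea of constraint-switching with simple elements (not the specific structure presented in the talk), the toy sketch below combines two PI controllers through a low selector; the models, tunings, and limits are hypothetical placeholders.

    # Toy constraint-switching sketch: the most limiting PI controller sets
    # the manipulated variable via a min (low) selector.
    class PI:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt, self.i = kp, ki, dt, 0.0

        def step(self, setpoint, measurement):
            e = setpoint - measurement
            self.i += self.ki * e * self.dt
            return self.kp * e + self.i

    pi_temp = PI(kp=2.0, ki=0.5, dt=1.0)     # tracks a stack temperature limit
    pi_prod = PI(kp=1.0, ki=0.2, dt=1.0)     # tracks a hydrogen production target

    def select_current_setpoint(T_meas, T_max, rate_meas, rate_target):
        u_temp = pi_temp.step(T_max, T_meas)           # current allowed by temperature
        u_prod = pi_prod.step(rate_target, rate_meas)  # current wanted for production
        return min(u_temp, u_prod)                     # low selector

    print(select_current_setpoint(T_meas=348.0, T_max=353.0,
                                  rate_meas=0.8, rate_target=1.0))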

References

Cammann, L. & Jäschke, J. A simple constraint-switching control structure for flexible operation of an alkaline water electrolyzer. IFAC-PapersOnLine 58, 706–711 (2024).

Crespi, E., Guandalini, G., Mastropasqua, L., Campanari, S. & Brouwer, J. Experimental and theoretical evaluation of a 60 kW PEM electrolysis system for flexible dynamic operation. Energy Conversion and Management 277, 116622 (2023).

Mao, J. et al. A review of control strategies for proton exchange membrane (PEM) fuel cells and water electrolysers: From automation to autonomy. Energy and AI 17, 100406 (2024).

Skogestad, S. Advanced control using decomposition and simple elements. Annual Reviews in Control 56, 100903 (2023).

Skogestad, S. & Postlethwaite, I. Multivariable Feedback Control: Analysis and Design. (John Wiley & Sons, 2007).



Solid streams modelling for process integration of an EAF steel plant.

Maura Camerin, Alexandre Bertrand, Laurent Chion

Luxembourg Institute of Science and Technology (LIST), Luxembourg

Global warming is an urgent matter that involves and heavily influences industrial activities. Steelmaking is one of the largest sources of industrial CO2 emissions globally, with key players setting ambitious targets to reduce these emissions by 2030 and/or achieve carbon neutrality by 2050. A key factor in reaching these goals is the efficient use of waste heat, especially in industries that involve high-temperature processes. Waste heat valorisation (WHV) holds significant potential. McBrien et al. (2016) highlighted that about 28% of the heating needs in a blast furnace plant could be met using existing WHV technologies. This figure could rise to 44% if solid streams, not just gaseous and liquid ones, are included.

At present, heat recovery from solid streams, such as semi-finished products and slag, and its transfer to cold solid streams, such as scrap and DRI, is rather uncommon. Its mathematical formulation for process integration (PI) / mathematical programming (MP) models poses unique challenges due to the need for specialized equipment (Matsuda et al., 2012).

The objective of this work is to propose novel WHV models of such solid streams, specifically formulated for PI/MP problems. In a first step, emerging technologies for slag treatment will be incorporated, and key parameters of the streams will be defined. The heat recovery potential of the slag will be modelled based on its charge weight and the recovery technology used, for example from a heat exchanger below the slag pit or using more advanced treatment technologies. The algorithm will calculate the resulting mass flow and temperature of the heat transfer medium, which can be incorporated into the heat cascade to meet the needs of cold streams such as scrap or DRI preheating.

The expected outcome is an improvement of solid stream models and, as such, more precise process integration results. The improved quantification of waste heat valorisation, especially through the inclusion of previously unconsidered streams, will be of significant benefit in supporting the decarbonization of the steel industry.

References:

Matsuda, K., Tanaka, S., Endou, M., & Iiyoshi, T. (2012). Energy saving study on a large steel plant by total site based pinch technology. Applied Thermal Engineering, 43, 14–19.

McBrien, M., Serrenho, A. C., & Allwood, J. M. (2016). Potential for energy savings by heat recovery in an integrated steel supply chain. Applied Thermal Engineering, 103, 592–606. https://doi.org/10.1016/j.applthermaleng.2016.04.099



Design of Microfluidic Mixers using Bayesian Shape Optimization

Rui Miguel Grunert da Fonseca, Fernando Pedro Martins Bernardo

CERES, Department of Chemical Engineering, University of Coimbra, Portugal

Mixing and mass transfer are fundamental aspects of many chemical and biological processes. For instance, in the synthesis of nanoparticles, where a solute solution is mixed with an antisolvent to induce nanoprecipitation, highly efficient and controlled mixing conditions are required to obtain particles with low size variability. Specialized mixing technologies, such as microfluidic mixing, are therefore used. Microfluidic mixing is a continuous process in which passive mixing of two different streams of fluid takes place in micro-sized channels. The geometry and small volume of the device enable very fast mixing, which in turn reduces mass transfer limitations during the nanoparticle formation process. Several different mixer geometries, such as the toroidal and herringbone micromixer [1], have already been used for nanoparticle production. Since mixer geometry plays such a vital role in mixing performance, mathematical optimization of that geometry is clearly a tool to exploit in order to come up with superior designs.
In this work, a methodology for shape optimization of micromixers using Computational Fluid Dynamics (CFD) and Bayesian Optimization is presented. It consists of the sequential performance evaluation of mixer geometries defined through geometric variables, such as angles and lengths, with predefined bounds. The performance of a given geometry is evaluated through CFD simulation, using the OpenFOAM software, of the Villermaux-Dushman reaction system [2], which consists of two competing reactions: a quasi-instantaneous acid-base reaction and a very fast redox reaction. Mixing time can therefore be inferred by analyzing the reaction selectivity at the mixer's outlet. Using Bayesian Optimization, the geometric domain can be explored with an emphasis on maximizing the defined objective functions. This is done by assigning probabilistic functions to each objective based on previously obtained data. An acquisition function is then optimized in order to determine the next geometry to be evaluated, balancing exploration and exploitation. This approach is especially appropriate when objective function evaluation is expensive, which is the case for CFD simulations. The methodology is very flexible and can be applied to many other equipment design problems. Its main challenge is the definition of the optimization problem and its domain. This is similar to network design problems, where the choice of the system's superstructure has a great impact on problem solvability. The domain must include as many viable solutions as possible while minimizing problem dimensionality and avoiding redundancy of solutions.
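
A hedged sketch of such a Bayesian optimization loop is shown below; scikit-optimize is assumed purely for illustration, evaluate_mixing_time() stands in for the OpenFOAM/Villermaux-Dushman evaluation of a candidate geometry, and the geometric variables and bounds are hypothetical.

    # Sketch: sequential geometry evaluation driven by a GP surrogate and an
    # expected-improvement acquisition function.
    from skopt import gp_minimize

    def evaluate_mixing_time(geometry):
        # Placeholder for: mesh the geometry, run the CFD case, and infer the
        # mixing time from the reaction selectivity at the outlet.
        ring_radius, channel_width, inlet_angle = geometry
        return (ring_radius - 0.4) ** 2 + (channel_width - 0.2) ** 2 + 0.1 * inlet_angle

    bounds = [(0.2, 0.8),    # ring radius, mm   (hypothetical variables and bounds)
              (0.1, 0.4),    # channel width, mm
              (0.0, 1.0)]    # normalized inlet angle

    result = gp_minimize(evaluate_mixing_time, bounds, n_calls=25,
                         acq_func="EI", random_state=0)
    print("best geometry:", result.x, "surrogate mixing-time value:", result.fun)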
In this work, a case-study for the optimization of the toroidal mixer geometry is presented for three different operating conditions and seven geometric degrees of freedom. Both pressure drop and mixing time were considered as objective functions and the respective Pareto fronts were obtained. The trade-offs between objective functions were analyzed for each case and the general design features are presented.

[1] C. Webb et al., "Using microfluidics for scalable manufacturing of nanomedicines from bench to GMP: A case study using protein-loaded liposomes," International Journal of Pharmaceutics, vol. 582, p. 119266, May 2020.

[2] J.-M. Commenge and L. Falk, "Villermaux–Dushman protocol for experimental characterization of micromixers," Chemical Engineering and Processing: Process Intensification, vol. 50, no. 10, pp. 979–990, Oct. 2011.



Solubility prediction of lipid compounds using machine learning

Agustin Porley Santana1, Gabriel Gutierrez1, Soledad Gutiérrez Parodi1, Jimena Ferreira1,2

1Grupo de Ingeniería de Sistemas Químicos y de Procesos, Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Heterogeneous Computing Laboratory, Instituto de Computación, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay

Aligned with the principles of biorefinery and circular economy, biomass waste valorization not only reduces the environmental impact of production processes but also presents economic opportunities for companies. Various natural lipids with complex chemical compositions are recovered from different types of biomass and further processed, such as essential oils from citrus waste and eucalyptus oil from wood.

In this context, wool grease, a complex mixture of esters of steroid and aliphatic alcohols with fatty acids, is a byproduct of wool washing. [1] Its derivatives, including lanolin, cholesterol, and lanosterol, differ in their methods of extraction and market value.

Purification of the high-value products can be achieved using crystallization, chromatography, liquid-liquid extraction, or solid-liquid extraction. The interaction of the selected compound with a liquid phase, known as a solvent or diluent (depending on the case), is a crucial aspect in the design of these processes. To achieve an effective separation of target components, it is crucial to identify the solubility of the compounds in a solvent. Given the practical difficulties in determining solubility and the vast array of natural compounds, a comprehensive bibliographic source for their solubilities in different solvents remains elusive. Employing machine learning [2] is an alternative for predicting the solubility of the target compound in alternative solvents.

This work focuses on the construction of a model to predict the solubility of several lipids in various solvents, using experimental data obtained from scientific articles and handbooks. Almost 800 data points were collected, covering 6 solutes and 34 solvents. As a first step, 21 properties were evaluated as input variables of the model, including properties of the solute, properties of the solvent, and temperature.

After data preprocessing, the feature selection step uses the Pearson and Spearman correlations between input variables to select the relevant ones. The model is obtained using Random Forest and is compared to a linear regression model. The dataset was divided into training and validation sets in an 80-20 split. Two splitting strategies are analysed: using different compounds in the training and validation sets (extrapolation model), and a random separation of the sets (interpolation model).
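
A minimal sketch of this workflow, with placeholder data standing in for the collected solubility dataset, could look as follows (correlation-based feature screening followed by Random Forest versus linear regression, using scikit-learn).

    # Hypothetical solubility-modelling pipeline with synthetic placeholder data.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.normal(size=(800, 21)),
                     columns=[f"prop_{i}" for i in range(21)])   # solute/solvent properties, T
    y = X["prop_0"] * 2 + X["prop_1"] - 0.5 * X["prop_2"] + rng.normal(scale=0.1, size=800)

    # Drop one feature of each highly correlated pair (Spearman could be used analogously).
    corr = X.corr(method="pearson").abs()
    keep = []
    for col in X.columns:
        if all(corr.loc[col, kept] < 0.9 for kept in keep):
            keep.append(col)

    X_tr, X_val, y_tr, y_val = train_test_split(X[keep], y, test_size=0.2, random_state=0)
    for model in (LinearRegression(),
                  RandomForestRegressor(n_estimators=300, random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, "R2 on validation:", round(model.score(X_val, y_val), 3))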

The performance of the models obtained with the full and the reduced input variable sets is compared, as well as that of the interpolation and extrapolation models.

In all cases, the Random Forest model performs better than the linear one. The preliminary results show that the model using the reduced set of input variables performs better than the one using the full set.

References

[1] S. Gutiérrez, M. Viñas (2003). Anaerobic degradation kinetics of a cholesteryl ester. Water Science and Technology, 48(6), 141-147.

[2] P. Daoutidis, J. H. Lee, S. Rangarajan, L. Chiang, B. Gopaluni, A. M. Schweidtmann, I. Harjunkoski, M. Mercangöz, A. Mesbah, F. Boukouvala, F. V. Lima, A. del Rio Chanona, C. Georgakis (2024). Machine learning in process systems engineering: Challenges and opportunities, Computers & Chemical Engineering, 181, 108523.



Refining Equation-Based Model Building for Practical Applications in Process Industry

Shota Kato, Manabu Kano

Kyoto University, Japan

Automating physical model building from literature databases holds significant potential for advancing the process industry, particularly in the rapid development of digital twins. Digital twins, based on accurate physical models, can effectively simulate real-world processes, yielding substantial operational and strategic benefits. We aim to develop an AI system that automatically extracts relevant information from documents and constructs accurate physical models.
One of the primary challenges is constructing practical models from extracted equations. The existing method [Kato and Kano, 2023] builds physical models by combining equations to satisfy two criteria: ensuring all specified variables are included and matching the number of degrees of freedom with the number of input variables. While this approach excels at quickly generating models that meet these requirements, it does not guarantee their solvability, leading to the inclusion of impractical models. This issue underscores the need for a robust validation mechanism.
To address this issue, we propose a filtering method that refines models generated by the approach above to identify solvable models. This method evaluates models by comparing variables across different equations, efficiently identifying redundant or conflicting equations to ensure that only coherent and functional models are retained. Furthermore, we generated an evaluation dataset comprising physical models relevant to chemical engineering and applied our proposed method. The results demonstrated that our method accurately identifies solvable models, significantly enhancing the automated model-building approach from the literature.
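
As a purely illustrative sketch of such a solvability check (not the authors' algorithm), one can combine a degree-of-freedom test with an attempt at an explicit solution of the candidate equation set; the example equations below are hypothetical.

    # Illustrative solvability filter: given specified inputs, can the
    # remaining unknowns actually be solved for?
    import sympy as sp

    T, k, c, p, H = sp.symbols("T k c p H", positive=True)

    candidate_model = [
        sp.Eq(k, sp.exp(-1000 / T)),   # equilibrium constant from temperature
        sp.Eq(c, k * p),               # dissolved concentration from k and pressure
    ]
    inputs = {T: 300.0, p: 2.0}
    unknowns = [k, c]

    def is_solvable(equations, inputs, unknowns):
        # Degree-of-freedom check plus an attempt at an explicit solution.
        if len(equations) != len(unknowns):
            return False
        substituted = [eq.subs(inputs) for eq in equations]
        return len(sp.solve(substituted, unknowns, dict=True)) > 0

    print(is_solvable(candidate_model, inputs, unknowns))   # True

    # A redundant/conflicting candidate, e.g. a second definition of c via
    # Henry's law, fails the degree-of-freedom check.
    conflicting = candidate_model + [sp.Eq(c, p / H)]
    print(is_solvable(conflicting, inputs, unknowns))        # False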
However, our method faces challenges mainly when the same variable is defined differently under varying conditions. For example, the concentration of a gas dissolved in a liquid might be determined by temperature via an equilibrium constant or by pressure using Henry's law. If the extracted equations include both formulations, the model-building algorithm may place both in the output models, and the proposed method may then struggle to filter such models precisely. Another limitation is the necessity of comparing multiple equations to determine the model's solvability. In cases where several reaction rate equations and corresponding rate constants are available, all possible combinations must be evaluated. This strategy can be complex and cannot be handled efficiently by our current methodology without additional enhancements.
In summary, aiming to automate physical model building, we proposed a method for refining the models generated by an existing approach. Our method successfully identified solvable models from sets that included redundant ones. Future work will focus on refining our algorithms to handle complexities such as variables defined under different conditions and integrating advanced natural language processing technologies to standardize notation and interpret nuanced relationships between variables, ultimately achieving truly automated physical model building.

References
Kato and Kano, "Efficient physical model building algorithm using equations extracted from documents," Computer Aided Chemical Engineering, 52, pp. 151–156, 2023.



Solar Desalination and Porphyrin Mediated Vis-Light Photocatalysis in Decolouration of Dyes as Biological Analogues Applied in Advanced Water Treatment

Evans Martin Nkhalambayausi Chirwa, Fisseha Andualem Bezza, Osemeikhain Ogbeifun, Shepherd Masimba Tichapondwa, Wesley Lawrence, Bonhle Manoto

University of Pretoria, South Africa

Engineering can be made simpler and more impactful by observing and understanding how organisms in nature solve eminent problems. For example, scientists around the world have observed green plants thriving without organic food inputs, using the complex photosynthesis process to kick-start a biochemical food chain. Two case studies are presented here based on research under way at the University of Pretoria: solar desalination of seawater using plant-based carbon material as solar absorbers, and solar or vis-light photocatalysis using porphyrin-based BiOCl and BiOIO3 compounds that simulate the function of chlorophyll in advanced water treatment and recovery. In the study on solar desalination using 3D-printed Graphene Oxide (GO), 82% water recovery has thus far been achieved using a simple GO-Black TiO2 monolayer as a solar absorber supported on cellulose nanocubes. In preparation for a possible scale-up of the process, methods are being investigated for the inhibition or reversal of salting on the absorber surface, which inhibits energy transfer. For the vis-light photocatalytic process for the discoloration of dyes, a Porphyrin@Bi12O17Cl2 system was used to successfully degrade methyl blue dye in batch experiments, achieving up to 98% degradation within 120 minutes. These results show that more advanced and more efficient engineered systems can be achieved through observation of nature and of how these systems have survived over billions of years. Based on these observations, the Water Utilisation Group at the University of Pretoria has studied and developed fundamental processes for the degradation and remediation of unwanted compounds such as disinfection byproducts (DBPs), volatile organic compounds (VOCs), and pharmaceutical products in water.



Diagnosing Faults in Wastewater Systems: A Data-Driven Approach to Handle Imbalanced Big Data

Morteza Zadkarami1, Krist Gernaey2, Ali Akbar Safavi1, Pedram Ramin2

1Shiraz University, Iran, Islamic Republic of; 2Technical University of Denmark (DTU), Denmark

Process monitoring is critical in industrial settings to ensure system functionality, making it essential to identify and understand the causes of any faults that occur. Although a considerably broader range of research focuses on fault detection, significantly less attention has been devoted to fault diagnosis. Typically, faults arise either from abnormal instrument behavior, suggesting the need for calibration or replacement, or from process faults indicating a malfunction within the system [1]. A key objective of this study is to apply the proposed process fault diagnosis methodology to a benchmark that closely mirrors real-world conditions. Specifically, we propose a fault diagnosis framework for a wastewater treatment plant (WWTP) that effectively addresses the challenges of the imbalanced big data typically found in large-scale systems. Fault scenarios were simulated using the Benchmark Simulation Model No. 2 (BSM2) [2], a highly regarded tool that closely mimics the operation of a real-world WWTP. Using BSM2, a dataset was generated spanning 609 days and comprising 876,960 data points across 31 process parameters.

In contrast to our previous research [3], [4], which primarily focused on fault detection frameworks for imbalanced big data in the BSM2, this study extends the approach to include a comprehensive fault diagnosis structure. Specifically, it determines whether a fault has occurred and, if so, identifies whether the fault is due to an abnormality in the instrument, the process, or both simultaneously. A major challenge lies in the highly imbalanced nature of the dataset: 87.82% of the data represent normal operating conditions, while 6% reflect instrument faults, 6.14% correspond to process faults, and less than 0.05% involve concurrent faults in both the process and instruments. To address this imbalance, we evaluated multiple deep network architectures and various learning strategies to identify a robust fault diagnosis framework that achieves acceptable accuracy across all fault scenarios.
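
One common way to counter such class imbalance in a deep diagnosis network is inverse-frequency class weighting in the loss; the sketch below is only an illustration of that idea (the network is a placeholder, not one of the evaluated architectures), with the class proportions taken from the figures quoted above.

    # Inverse-frequency class weights for an imbalanced 4-class diagnosis task.
    import torch
    import torch.nn as nn

    class_fractions = torch.tensor([0.8782, 0.06, 0.0614, 0.0004])  # normal, instrument, process, both
    weights = 1.0 / class_fractions
    weights = weights / weights.sum()

    model = nn.Sequential(nn.Linear(31, 64), nn.ReLU(), nn.Linear(64, 4))  # 31 process parameters
    criterion = nn.CrossEntropyLoss(weight=weights)

    x = torch.randn(16, 31)                       # dummy batch
    y = torch.randint(0, 4, (16,))
    loss = criterion(model(x), y)
    loss.backward()
    print(float(loss))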

References:

[1] Liu, Y., Ramin, P., Flores-Alsina, X., & Gernaey, K. V. (2023). Transforming data into actionable knowledge for fault detection, diagnosis and prognosis in urban wastewater systems with AI techniques: A mini-review. Process Safety and Environmental Protection, 172, 501-512.

[2] Al, R., Behera, C. R., Zubov, A., Gernaey, K. V., & Sin, G. (2019). Meta-modeling based efficient global sensitivity analysis for wastewater treatment plants–An application to the BSM2 model. Computers & Chemical Engineering, 127, 233-246.

[3] Zadkarami, M., Gernaey, K. V., Safavi, A. A., & Ramin, P. (2024). Big Data Analytics for Advanced Fault Detection in Wastewater Treatment Plants. In Computer Aided Chemical Engineering (Vol. 53, pp. 1831-1836). Elsevier.

[4] Zadkarami, M., Safavi, A. A., Gernaey, K. V., & Ramin, P. (2024). A Process Monitoring Framework for Imbalanced Big Data: A Wastewater Treatment Plant Case Study. In IEEE Access (Vol. 12, pp. 132139-132158). IEEE.



Industrial Time Series Forecasting for Fluid Catalytic Cracking Process

Qiming Zhao1, Yaning Zhang2, Tong Qiu1

1Department of Chemical Engineering, Tsinghua University, Beijing 100084, China; 2PetroChina Planning & Engineering Institute, Beijing 100083, China

Industrial process systems generate complex time-series data, challenging traditional regression models that assume static relationships and struggle with system uncertainty and process drifts. These models may also be sensitive to noise and disturbances in the training data, potentially leading to unreliable predictions when encountering fluctuating inputs.

To address these limitations, researchers have explored various algorithms in time-series analysis. The wavelet transform (WT) has emerged as a powerful tool for analyzing non-stationary time series by representing them with localized signals. For instance, Hosseini et al. applied WT and feature extraction to improve gas-liquid two-phase flow meters in oil and petrochemical industries, successfully classifying flow regimes and calculating void fraction percentages with low errors. Another approach to modeling uncertainties in observations is through stochastic processes, with the Gaussian process (GP) gaining popularity due to its flexibility. Bradford et al. demonstrated its effectiveness by proposing a GP-based nonlinear model predictive control algorithm that considered state-dependent uncertainty, which they verified in a challenging semi-batch bioprocess case study. Recent research has explored the integration of WT and GP. Band et al. developed a hybrid model combining these techniques, which accurately predicted groundwater levels in arid areas. However, much of the current research focuses on one-step ahead forecasts rather than comprehensive process modeling.

This research explores a novel predictive modeling framework that integrates wavelet features with GP regression, thus creating a more robust predictive model capable of extracting both temporal and cross-variable information from the data while adapting to changing patterns over time. The effectiveness of this hybrid method is verified using an industrial dataset from fluid catalytic cracking (FCC), a complex petrochemical process crucial for fuel production. The results demonstrate the method’s robustness in delivering accurate and reliable predictions despite the presence of noise and system variability typical in industrial settings. Percentage yields are predicted with a mean absolute percentage error (MAPE) of less than 1% for critical products, meeting the requirements for industrial application in modeling and optimization.
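
A hedged sketch of how wavelet features can feed a GP regressor is given below; PyWavelets and scikit-learn are assumed for illustration, and the FCC data are replaced by a synthetic signal.

    # Wavelet energy features + Gaussian process regression (illustrative only).
    import numpy as np
    import pywt
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 512)
    signal = np.sin(t) + 0.1 * rng.normal(size=t.size)      # stand-in for a process variable

    def wavelet_features(window):
        # Energy of each decomposition level as a compact feature vector.
        coeffs = pywt.wavedec(window, "db4", level=3)
        return np.array([np.sum(c ** 2) for c in coeffs])

    window, horizon = 64, 1
    X = np.array([wavelet_features(signal[i:i + window])
                  for i in range(len(signal) - window - horizon)])
    y = signal[window + horizon - 1:-1]                      # one-step-ahead targets

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[:300], y[:300])
    mean, std = gp.predict(X[300:], return_std=True)         # prediction with uncertainty
    print(mean[:3], std[:3])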

References

[1] Band, S. S., Heggy, E., Bateni, S. M., Karami, H., Rabiee, M., Samadianfard, S., Chau, K.-W., & Mosavi, A. (2021). Groundwater level prediction in arid areas using wavelet analysis and Gaussian process regression. Engineering Applications of Computational Fluid Mechanics, 15(1), 1147–1158. https://doi.org/10.1080/19942060.2021.1944913

[2] Bradford, E., Imsland, L., Zhang, D., & del Rio Chanona, E. A. (2020). Stochastic data-driven model predictive control using Gaussian processes. Computers & Chemical Engineering, 139, 106844. https://doi.org/10.1016/j.compchemeng.2020.106844

[3] Hosseini, S., Taylan, O., Abusurrah, M., Akilan, T., Nazemi, E., Eftekhari-Zadeh, E., Bano, F., & Roshani, G. H. (2021). Application of Wavelet Feature Extraction and Artificial Neural Networks for Improving the Performance of Gas-Liquid Two-Phase Flow Meters Used in Oil and Petrochemical Industries. Polymers, 13(21), Article 21. https://doi.org/10.3390/polym13213647



Electrochemical conversion of CO2 into CO. Analysis of the influence of the electrolyzer type, operating parameters, and separation stage

Luis Vaquerizo1,2, David Danaci2,3, Bhavin Siritanaratkul4, Alexander J Cowan4, Benoît Chachuat2

1Institute of Bioeconomy, University of Valladolid, Spain; 2The Sargent Centre for Process Systems Engineering, Imperial College, UK; 3I-X Centre for AI in Science, Imperial College, UK; 4Department of Chemistry, Stephenson Institute for Renewable Energy, University of Liverpool, UK

The electrochemical conversion of CO2 into CO is an opportunity for the decarbonization of the chemical industry, turning the current linear utilization scheme of carbon into a more cyclic scheme. Compared to other existing CO2 conversion technologies, the electrochemical reduction of CO2 to CO benefits from being a room-temperature process, from not depending on the physical location of the plant, and from an energy efficiency in the range of 40-50%. Although some techno-economic analyses have already assessed the potential of this technology, finding that the CO production cost is mainly influenced by the CO2 cost, the availability and price of electricity, and the maturity of the carbon capture technologies, none of them addressed the effect of the electrolyzer type, operating conditions, and separation stage on the final production cost. This work determines the impact of the electrolyzer type (either AEM or BPM), the operating parameters (current density and CO2 inlet flow), and the technology used for product separation (either PSA or, for the first time for this technology, cryogenic distillation) on the annual production cost of CO, using experimental data for CO2 electrolysis. The main findings are that the use of either AEM or BPM electrolyzers and either PSA or cryogenic distillation yields a very similar annual production cost (around 25 MM$/y for a 100 t/d CO plant) and that operation beyond current densities of 150 mA/cm2 and CO2 inlet flowrates of 3.2 (AEM) and 1 (BPM) NmL/min/cm2 only slightly affects the annual production cost. For all the operating cases considered (AEM or BPM electrolyzer, and PSA or cryogenic distillation), the minimum production cost is reached when maximizing the CO productivity in the electrolyzer. Moreover, although the downstream process alternative has minimal influence on the CO production cost, a downstream process based on PSA separation seems preferable, at least at this scale, since the cryogenic distillation alternative also requires a final PSA step to separate the column overhead products. Finally, a minimum selling price of 868 $/t CO is estimated in this work, considering a CO2 cost of 40 $/t and an electricity cost of 0.03 $/kWh. Although this value is higher than the current CO selling price (600 $/t), there is some margin for improvement if the current electrolyzer CAPEX and lifetime are improved.



Enhancing Pharmaceutical Development: Process Modelling and Control Strategy Optimization for Liquids Drug Products Multiphase Mixing and Milling Processes

Noor Al-Rifai, Guoqing Wang, Sushank Sharma, Maxim Verstraeten

Johnson & Johnson Innovative Medicine, Belgium

Recent regulatory trends from health authorities advocate for greater understanding of drug products and processes, enabling more efficient drug development, supply chain agility, and the introduction of new and challenging therapies and modalities. Traditional drug product process development and validation rely on fully experimental design spaces with limited insight into what drives process performance, where every change (in equipment, material attributes, or scale) triggers the requirement for a new experimental design space and a post-approval submission, as well as risking issues with process performance. Quality-by-Design in process development and manufacturing helps to achieve these aims, aided by sufficient mechanistic understanding and resulting in flexible yet robust control strategies.

Mechanistic correlations and computational fluid dynamics simulations provide digital input towards demonstrating process robustness, scale-up and transfer; particularly for pharmaceutical mixing and milling setups involving complex and unconventional geometries.

This presentation will show synergistic workflows, utilizing mechanistic correlations and/or CFD and PAT to gain process understanding, optimize development work and construct control strategies for pharmaceutical multiphase mixing and milling processes.



Assessing operational resilience within the natural gas monetisation network for enhanced production risk management: Qatar as a case study

Noor Yusuf, Ahmed AlNouss, Roberto Baldacci, Tareq Al-Ansari

Hamad Bin Khalifa University, Qatar

The increased turbulence in energy consumer markets has imposed risks on energy suppliers regarding sustaining markets and profits. While risk mitigation strategies are essential when assessing new projects, retrofitting existing, industrially mature infrastructure to adapt to changing market conditions adds cost and time. For the state of Qatar, a gas-dependent economy, the natural gas industry is highly vulnerable to exogenous uncertainties in final markets, including demand and price volatility. On the other hand, endogenous uncertainties could hinder a project's profitability and sustainability targets due to poor proactive planning in the early design stages. Hence, in order to understand the risk management capabilities of the industrially mature network, it is crucial to assess resilience at the production system and overall network levels. This is especially important in natural gas supply chains, as failure in the production part would influence the subsequent components, represented by storage, shipping, and agreed volume sales to markets. This work evaluates the resilience of the existing Qatari natural gas monetisation infrastructure (i.e., production) by addressing the system's failure to satisfy the targeted production capacity due to process-level disruptions and/or final market conditions. The network addressed herein comprises 7 direct and indirect natural gas utilisation industrial clusters (i.e., natural gas liquefaction, ammonia and urea, methanol and MTBE, power, and gas-to-liquids). Process technical data simulated using Aspen Plus, along with calculated emissions and economic data, were used to estimate the resilience of individual processes and of the overall network under different endogenous disruption scenarios. First, historical and forecasted demand and prices were used to simulate the optimal natural gas allocation to processes over a planning period between 2000 and 2032. Secondly, the resilience index of the processes within the baseline allocation strategy was investigated throughout the planning period. Overall, a resilience index value below 100% indicates low process resilience towards changing endogenous and/or exogenous fluctuations. For the methanol and ammonia processes within the investigated network, the annual resilience index was enhanced from 35% to 90% for the ammonia process and from 36% to 84% for the methanol process. The increase in the resilience index was mainly due to the introduction of operational bounds and of forecasted demand and price data, which aided efficient, resilient process management. Finally, qualitative recommendations were summarised to aid decision-makers in planning under different economic and environmental scenarios, in order to maintain the resilience of the network despite the fluctuations imposed by unavoidable external factors, including climate change, policy change, and demand fluctuations.



Membrane-based Blue Hydrogen Production in Sub-Ambient Temperature: Process Optimization, Techno-Economic Analysis and Life Cycle Assessment

JIUN YUN1, BORAM GU1, KYUNHWAN RYU2

1Chonnam National University, Korea, Republic of (South Korea); 2Sunchon National University, Korea, Republic of (South Korea)

In 2022, 62% of hydrogen was produced using natural gas, while only 0.1% came from water electrolysis [1]. This suggests that an immediate shift to green hydrogen is infeasible in the short- to medium-term, which makes blue hydrogen production crucial. Auto-thermal reforming (ATR) processes, which combine steam methane reforming reaction and partial oxidation, offer high energy efficiency by eliminating additional heating. During the ATR process, CO2 can be captured from the shifted syngas, which consists mainly of a CO2/H2 binary mixture.

Recently, gas separation membranes have been gaining significant attention for their high energy efficiency for CO₂ capture. For instance, the Polaris CO₂-selective membrane, specifically designed to separate CO₂/H₂ mixtures, is known to offer a high CO₂ permeance of 1000 GPU and a CO₂/H₂ selectivity of 10. Furthermore, sub-ambient temperatures are reported to enhance its CO₂/H₂ selectivity up to 20, enabling the production of high-purity liquid CO₂ (over 98%) [1].

Hydrogen recovery rates are significantly affected by the H₂ purity at the PSA inlet and the pressure of the tail gas [2], which are dependent on the selected capture location. In the ATR process, CO2 capture can be applied to shifted syngas and PSA tail gas. Therefore, optimal location selection is crucial for improving hydrogen production efficiency.

In this study, an integrated process combining ATR with a sub-ambient-temperature membrane process for CO₂ capture was designed using gPROMS. Two different capture locations were compared, and the economic feasibility of the integrated process was evaluated. The ATR process was developed as a flowsheet-based model, while the membrane unit was built using equation-based custom modeling, consisting of balances and permeation models. Concentration polarization effects were also accounted for, as they play a significant role in performance when membrane permeance is high. In both cases, the CO₂ capture rate was fixed at 90%.
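
As a minimal sketch of the kind of permeation relation typically used in such equation-based membrane models (the study's exact formulation is not given in the abstract, so a solution-diffusion-type permeance form is assumed here), the local flux of component $i$ and the membrane selectivity can be written as

\[ J_i = \mathcal{P}_i \left( p_\mathrm{F}\, x_i - p_\mathrm{P}\, y_i \right), \qquad \alpha_{\mathrm{CO_2/H_2}} = \frac{\mathcal{P}_{\mathrm{CO_2}}}{\mathcal{P}_{\mathrm{H_2}}}, \]

where $\mathcal{P}_i$ is the permeance (e.g., in GPU), $p_\mathrm{F}$ and $p_\mathrm{P}$ are the feed- and permeate-side pressures, and $x_i$, $y_i$ the corresponding mole fractions; component balances along the membrane area, together with a concentration-polarization correction on $x_i$, then close the unit model.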

In the membrane-based CO2 capture process, the inlet gas was cooled to -35°C using a cooling cycle, increasing membrane selectivity up to 20. This enables energy savings and the capture of high-purity liquid CO₂. Our simulation results demonstrated that the H₂ purity at the PSA inlet reached 92% when CO2 was captured from syngas, and this high H₂ purity improved the PSA recovery rate. For PSA tail gas capture, the CO₂ capture rate was 98.8%, with only a slight increase in the levelized cost of hydrogen (LCOH). However, in the syngas capture case, higher capture rates led to increased costs. Overall, syngas capture achieved a lower LCOH due to the higher PSA recovery rate.

Further modeling of the PSA unit will be performed to optimize the integrated process and perform a CO₂ life cycle assessment. Our results will provide insights into the potential of sub-ambient membrane gas separation for blue hydrogen production and guidelines for the design and operation of PSA and gas separation membranes in the ATR process.

References

[1] International Energy Agency, Global Hydrogen Review 2023, 2023.

[2] C.R. Spínola Franco, Pressure Swing Adsorption for the Purification of Hydrogen, Master's Dissertation, University of Porto, 2014.



Dynamic Life Cycle Assessment in Continuous Biomanufacturing

Ada Robinson Medici1, Mohammad Reza Boskabadi2, Pedram Ramin2, Seyed Soheil Mansouri2, Stavros Papadokonstantakis1

1Institute of Chemical, Environmental and Bioscience Engineering TU Wien,1060 Wien, Austria; 2Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Process Systems Engineering (PSE) has seen rapid advancements since its inception in the 1970s. Currently, there is an increasing demand for tools that enable the integration of sustainability metrics into process simulation to cope with today's grand challenges. In recent years, continuous manufacturing has gained attention in biologics production due to its ability to improve process monitoring and ensure consistent product quality. This work introduces a Python-based interface that integrates process simulation and control with cradle-to-gate Life Cycle Assessment, resulting in dynamic process inventories and thus dynamic life cycle inventories and impact assessment (dLCA), with the potential to improve environmental assessment and sustainability metrics in the biopharmaceutical industry.

This framework utilizes the open-source tool Activity Browser, a graphical user interface for Brightway25, which supports the analysis of environmental impacts using LCA (Mutel, 2017). The tool allows real-time tracking of the environmental inventories of the foreground process and of its impact assessment. Unlike traditional sustainability indicators, such as the E-factor, which focuses only on waste generation, the introduced approach can retrieve comprehensive environmental inventories from the ecoinvent 3.9.10 database to calculate mid-point (e.g., global warming potential) and end-point LCA indicators (e.g., damage to ecosystems) according to the ReCiPe framework, a widely recognized method in life cycle impact assessment.
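
A hedged sketch of how a time-resolved foreground inventory could be scored against an ecoinvent background with Brightway is shown below; the project, database, activity, and method names are placeholders, not the study's setup.

    # Scoring a dynamic electricity-demand profile with Brightway2 (illustrative).
    import brightway2 as bw

    bw.projects.set_current("dlca_demo")                 # placeholder project
    ei = bw.Database("ecoinvent 3.9 cutoff")             # placeholder database name
    electricity = ei.search("market for electricity, medium voltage")[0]
    gwp = ("ReCiPe Midpoint (H)", "climate change", "GWP100")   # placeholder method key

    # Electricity demand (kWh) per simulation time step, e.g. streamed from the
    # dynamic process model; values here are dummies.
    electricity_profile = [1.2, 1.1, 1.4, 1.3]

    lca = bw.LCA({electricity: 1.0}, gwp)
    lca.lci()
    lca.lcia()
    score_per_kwh = lca.score                            # kg CO2-eq per kWh

    dynamic_gwp = [e * score_per_kwh for e in electricity_profile]
    print(dynamic_gwp)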

This study utilizes the KTB1 benchmark model as a dynamic simulation model for continuous biomanufacturing, which serves as a decision-support tool for evaluating process design, optimization, monitoring, and control strategies in real-time (Boskabadi et al., 2024). KTB1 is a comprehensive dynamic model developed in MATLAB-Simulink covering upstream and downstream components that provide an integrated production process perspective (Boskabadi, M. R., 2024). The functional unit for this study is the production of the typical maintenance dose commonly found in pharmaceutical formulations, 40 mg of pure Active Pharmaceutical Ingredient (API) Lovastatin, under dynamic manufacturing conditions.

Preliminary results show that control decisions can have a significant impact on the dynamic and integral LCA profile for selected resource and energy-related Life Cycle Impact Assessment (LCIA) categories. By integrating LCIA into the control framework, a multi-objective model predictive control (MO-MPC) is enabled with the potential to dynamically adjust process parameters and optimize process conditions based on real-time environmental and process data (Sohn et al., 2020). This work lays the foundation for an advanced computational platform for assessing sustainability in biomanufacturing, positioning it as a critical tool in the industry's ongoing transition toward more environmentally responsible continuous production methods.

Importantly, open-source tools ensure transparency, adaptability, and accessibility, facilitating collaboration and further development within the scientific community.

References

Boskabadi, M. R., 2024. KT-Biologics I (KTB1): A dynamic simulation model for continuous biologics manufacturing.

Boskabadi, M.R., Ramin, P., Kager, J., Sin, G., Mansouri, S.S., 2024. KT-Biologics I (KTB1): A dynamic simulation model for continuous biologics manufacturing. Computers & Chemical Engineering 188, 108770. https://doi.org/10.1016/j.compchemeng.2024.108770

Mutel, C., 2017. Brightway: An open source framework for Life Cycle Assessment. JOSS 2, 236. https://doi.org/10.21105/joss.00236

Sohn, J., Kalbar, P., Goldstein, B., Birkved, M., 2020. Defining Temporally Dynamic Life Cycle Assessment: A Review. Integr Envir Assess & Manag 16, 314–323. https://doi.org/10.1002/ieam.4235



Multi-level modeling of reverse osmosis process based on CFD results

Yu-hyeok Jeong, Boram Gu

Chonnam National University, Korea, Republic of (South Korea)

Reverse osmosis (RO) is a membrane separation process that is widely used in desalination and wastewater treatment [1]. However, solutes blocked by the membrane can accumulate near the membrane, causing concentration polarization (CP), which hinders RO separation performance and reduces energy efficiency [2]. Structures called spacers are added between membrane sheets to create flow channels, which also induces disturbed flow that mitigates CP. Different types of spacers exhibit different hydrodynamic behavior, and understanding this is essential to designing the optimal spacer.

Computational fluid dynamics (CFD) can be a useful tool for theoretically analyzing the impact of these spacers, through which the effect of the geometric characteristics of each spacer on RO performance can be understood. However, due to its large computing requirements, CFD is limited to small-scale RO units. Alternatively, mathematical modeling of RO modules can help to understand the effect of spacers on process variables and separation performance by incorporating appropriate constitutive model equations. Despite its advantage of low computational demands even for large-scale simulations, the impact of spacers is approximated using simple empirical correlations, usually derived from experimental data over limited ranges of operating and geometric conditions.

To overcome this, we present a novel modeling approach that combines the two methods. First, three-dimensional (3D) CFD models of RO spacer units, at the smallest scale able to represent the periodicity of the spacer geometry, were simulated for various spacers (20 geometries in total) and a wide range of operating conditions. By fitting the relationship between the operating conditions and the simulation results with response surface methodology, a surrogate model with the operating conditions as independent variables and the simulation results as dependent variables was derived for each spacer. Using the surrogate model, the outlet conditions of a single unit were derived from its inlet conditions. These outlet conditions were then iteratively applied as the inlet conditions of the next unit, thereby representing processes at various scales.
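
A conceptual sketch of this multi-level idea is shown below: a response-surface surrogate fitted to CFD results maps unit inlet conditions to outlet conditions, and units are chained so each outlet feeds the next inlet. The data, variables, and coefficients are placeholders, not the study's CFD results.

    # Surrogate fitting and unit-to-unit chaining (illustrative placeholders only).
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical CFD "training" data: inlet (velocity, concentration, pressure)
    # -> outlet (concentration, pressure) for one spacer-filled unit.
    rng = np.random.default_rng(0)
    X_cfd = rng.uniform([0.05, 30.0, 8e5], [0.25, 40.0, 12e5], size=(50, 3))
    y_cfd = np.column_stack([
        X_cfd[:, 1] * (1 + 0.002 / X_cfd[:, 0]),        # slight concentration rise
        X_cfd[:, 2] - 2e3 * X_cfd[:, 0] ** 1.8,          # spacer pressure loss
    ])

    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X_cfd, y_cfd)

    # Chain the surrogate over successive units to represent a longer module.
    velocity, conc, pressure = 0.15, 35.0, 1.0e6
    for unit in range(10):
        conc, pressure = surrogate.predict([[velocity, conc, pressure]])[0]
    print("outlet concentration:", round(conc, 2), "outlet pressure:", round(pressure))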

As expected, the CFD analysis in this study showed varied hydrodynamic behaviors across the spacers, resulting in up to a 10% difference in water flux. The multi-level modeling using the surrogate model showed that the optimal spacer design may vary depending on the size of the process, as the ranking of performance indices, such as recovery and specific energy consumption, changes with process size. In particular, pressure losses were not proportional to process size, and water recovery did not increase linearly. This highlights the need for CFD-derived surrogate models in large-scale process simulations.

By combining 3D CFD simulation with 1D mathematical modeling, the hydrodynamic behavior influenced by the geometric characteristics of the spacer, and the varying effects of spacers at different process scales, can be efficiently captured, providing a platform for large-scale process optimization.

References

[1] Sung, Berrin, Novel technologies for reverse osmosis concentrate treatment: A review, Journal of Environmental Management, 2015.

[2] Haidari, Heijman, Meer, Optimal design of spacers in reverse osmosis, Separation and Purification Technology, 2018.



Optimal system design and scheduling for ammonia production from renewables under uncertainty: Stochastic programming vs. robust optimization

Alexander Klimek1, Caroline Ganzer1, Kai Sundmacher1,2

1Max Planck Institute for Dynamics of Complex Technical Systems, Department of Process Systems Engineering, Sandtorstr. 1, 39106 Magdeburg, Germany; 2Otto von Guericke University, Chair for Process Systems Engineering, Universitätsplatz 2, 39106 Magdeburg, Germany

Production of green ammonia from renewable electricity could play a vital role in a net zero economy, yet the intermittency of wind and solar energy poses challenges to sizing and scheduling of such plants [1]. One approach to investigate the interaction between fluctuating renewables and chemical processes is to model the production network in the form of a large-scale mixed-integer linear programming (MILP) problem [2, 3].

A wide range of parameters is necessary to characterize the chemical production system, including investment costs, wind speeds, solar irradiance, purchase and sales prices. These parameters are usually derived from literature data and fixed before optimization. However, parameters such as costs and capacity factors are not deterministic in reality but rather subject to uncertainty. Mathematical methods of optimization under uncertainty can be applied to deal with such deviations from the nominal parameter values. Stochastic programming (SP) and robust optimization (RO) in particular are widely used to address parameter uncertainty in optimization problems and to identify solutions that satisfy all constraints under all possible realizations of uncertain parameters [4].

In this work, we reformulate a deterministic MILP model for determining the optimal design and scheduling of an ammonia plant based on renewables as a SP and a RO problem. Using the Pyomo extensions mpi-sppy and ROmodel [5, 6], the optimization problems are implemented and solved under parameter uncertainty. The results in terms of plant design and operation are compared with the outcomes of the deterministic formulation. In the case of SP, a two-stage problem is used, whereby Monte Carlo sampling is applied to generate different scenarios. Analysis of the value of the stochastic solution (VSS) shows that when the model is constrained by the nominal scenario's first-stage decisions and subjected to the conditions of other scenarios, the deterministic model cannot handle even a 1% decrease in the wind potential, highlighting the model’s sensitivity. The stochastic approach mitigates this risk with a solution approximately 30% worse in terms of net present value (NPV) but more resilient to fluctuations. For RO, different approaches are chosen with regard to uncertainty sets and formulation. The very conservative approach using box uncertainty sets is relaxed by the use of auxiliary parameters, ensuring that only a certain number of uncertain parameters can take their worst-case value at the same time. The RO framework is extended by the use of adjustable decision variables, requiring a reduction in the time horizon compared to the SP formulation in order to solve these problems within a reasonable time frame.
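
To make the two-stage structure concrete, the sketch below writes out a tiny illustrative extensive-form stochastic design problem directly in Pyomo (rather than via mpi-sppy); the capacity bound, costs, demand, and wind scenarios are placeholders, not the paper's data.

    # Minimal two-stage stochastic sizing/scheduling toy problem in Pyomo.
    import pyomo.environ as pyo

    scenarios = {"low_wind": 0.8, "nominal": 1.0, "high_wind": 1.2}   # capacity-factor multipliers
    prob = {s: 1.0 / len(scenarios) for s in scenarios}
    capex, price, demand = 50.0, 600.0, 60.0                          # placeholder economics

    m = pyo.ConcreteModel()
    m.S = pyo.Set(initialize=list(scenarios))
    m.cap = pyo.Var(bounds=(0, 100))                  # first stage: installed capacity
    m.prod = pyo.Var(m.S, within=pyo.NonNegativeReals)  # second stage: production per scenario

    m.limit = pyo.Constraint(m.S, rule=lambda m, s: m.prod[s] <= scenarios[s] * m.cap)
    m.meet = pyo.Constraint(m.S, rule=lambda m, s: m.prod[s] <= demand)

    m.obj = pyo.Objective(
        expr=capex * m.cap - sum(prob[s] * price * m.prod[s] for s in m.S),
        sense=pyo.minimize)

    pyo.SolverFactory("glpk").solve(m)                # any LP/MILP solver available locally
    print("capacity:", pyo.value(m.cap))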

References:
[1] Wang, H. et al. 2021. ACS Sust. Chem. Eng. 9, 7, 2816–2834.
[2] Ganzer, C. and Mac Dowell, N. 2020. Sust. En. Fuels 4, 8, 3888–3903.
[3] Svitnič, T. and Sundmacher, K. 2022. Appl. En. 326, 120017.
[4] Mavromatidis, G. 2017. PhD Dissertation. ETH Zurich.
[5] Knueven, B. et al. 2023. Math. Prog. Comp. 15, 4, 591–619.
[6] Wiebe, J. and Misener, R. 2022. Optim. & Eng. 23, 4, 1873–1894.



CO2 Sequestration and Valorization to Synthetic Fuels: Multi-criteria Based Process Design and Optimization for Feasibility

Thuy T. Hong Nguyen, Satoshi Taniguchi, Takehiro Yamaki, Nobuo Hara, Sho Kataoka

National Institute of Advanced Industrial Science and Technology, Japan

CO2 capture and utilization/storage (CCU/S) has been considered one of the linchpin strategies to reduce greenhouse gas (CO2 equivalent) emissions. CCS promises to remove large amounts of CO2 but faces high-cost barriers. CCU produces high-value products, thereby gaining some economic benefit, but requires large supplies of energy. Different CCU pathways have been studied to utilize CO2 as a renewable raw material for producing various valuable chemical products and fuels. In particular, many kinds of catalysts and synthesis conditions have been examined to convert CO2 to different types of gaseous and liquid fuels (methane, methanol, gasoline, etc.). As the demand for these synthetic fuels is exceptionally high, such CCU pathways could potentially help mitigate large CO2 emissions. Nevertheless, implementation of these CCU pathways hinges on an ample supply of carbon-free H2 raw material, which is currently not available for large-scale production. Thus, to abate large industrial CO2 emission sources, combining these CCU pathways with sequestration is required.

This study aims to develop a CCUS system that can help remove large CO2 emissions with high economic efficiency. Herein, multiple CCU pathways converting CO2 to different gaseous and liquid synthetic fuels (methane, methanol and Fischer-Tropsch fuels) were examined for integration with CO2 sequestration in an economic manner. A process simulator is employed to design and optimize the operating conditions of all included processes. A multi-objective evaluation model is constructed to optimize the economic benefit and the amount of CO2 reduced. Based on the optimization results, feasible synthetic fuel production processes that can be integrated with the CO2 sequestration process to mitigate large CO2 emission sources can be proposed.

The results showed that the configuration of the CCUS system (the types of CCU pathways and the amounts of CO2 to be utilized and stored) depends heavily on the type and purchasing cost of the hydrogen raw material, product selling prices, and the carbon tax. A CCUS system integrating the CCU pathways converting CO2 to methanol and methane with CO2 sequestration can deliver a large CO2 reduction with low economic loss. The economic benefit of this system is dramatically enhanced when the carbon tax increases to $250/ton CO2. Owing to its exceptionally high energy demand and high initial investment cost, the Fischer-Tropsch fuel synthesis process is the least competitive in terms of both economic benefit and potential CO2 reduction.



Leveraging Pilot-scale Data for Real-Time Analysis of Ion Exchange Chromatography

Søren Villumsen, Jesper Frandsen, Jakob Huusom, Xiaodong Liang, Jens Abildskov

Technical University of Denmark, Denmark

Chromatography processes are key in the downstream processing of bio-manufactured products to attain high-purity products. Chromatographic separation is difficult to operate optimally because the underlying mechanisms are hard to describe, even though they can be partly captured by partial differential equations for convection, diffusion, mass transfer and adsorption. The processes may also be subject to batch-to-batch variation in feed composition and operating conditions. To ensure high product purity, chromatography may be operated conservatively, meaning fraction collection is started later than necessary and terminated prematurely. This results in sub-optimal chromatographic yields in production, as operators are forced to cut the purification process at a point where they know purity is ensured, at the expense of product loss (Kozorog et al. 2023).

If the overall separation process were better understood and monitored, such that the batch-to-batch variation could be better accounted for, it may be possible to secure a higher yield in the separation process (Kumar and Lenhoff 2020). Using mechanistic models or hybrid models of the chromatographic process, the process may be analyzed in real-time leading to potential insights about the current process. These insights could be communicated to operators, allowing them to perform more optimal decision-making, increasing yield without sacrificing purity.

The potential for this real-time process prediction was investigated at a pilot-scale ion-exchange facility at the Technical University of Denmark (DTU). The process is equipped with sensors for real-time data extraction and supports digital twin development (Jones et al. 2022). Using these data, mechanistic and hybrid models were fitted to predict key process events such as breakthrough. The partial differential equations were solved using state-of-the-art discretization methods that are sufficiently computationally fast to allow for real-time prediction of process events (Frandsen et al. 2024). This serves as a proof of concept for real-time analysis through Monte Carlo simulation of chromatographic processes.
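
As a minimal illustration of the kind of transport model referred to above, the sketch below solves a simplified 1-D convection-dispersion equation with a linear adsorption isotherm by the method of lines and a Runge-Kutta integrator. It is not the discretization scheme of the cited work, and all parameters are assumed rather than fitted to the DTU pilot plant.

    # Illustrative breakthrough-curve simulation for a single component:
    # convection-dispersion with linear equilibrium adsorption, solved by
    # finite differences + scipy's adaptive RK45.
    import numpy as np
    from scipy.integrate import solve_ivp

    n, L = 200, 0.1                 # grid cells, column length (m)
    dz = L / n
    u, D = 1e-3, 1e-7               # interstitial velocity (m/s), dispersion (m2/s)
    F, K = 1.5, 4.0                 # phase ratio and linear isotherm slope (assumed)
    c_in = 1.0                      # feed concentration (g/L)
    retard = 1.0 + F * K            # retention factor for linear adsorption

    def rhs(t, c):
        c_up = np.concatenate(([c_in], c))            # inlet boundary (Dirichlet)
        conv = -u * np.diff(c_up) / dz                # first-order upwind convection
        cc = np.concatenate(([c_in], c, [c[-1]]))     # zero-gradient outlet
        disp = D * (cc[2:] - 2 * cc[1:-1] + cc[:-2]) / dz**2
        return (conv + disp) / retard

    sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(n), t_eval=np.linspace(0, 600, 120))
    print("outlet concentration near the end of the run:", sol.y[-1, -5:])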

References

Frandsen, Jesper, Jan Michael Breuer, Eric Von Lieres, Johannes Schmölder, Jakob K. Huusom, Krist V. Gernaey, and Jens Abildskov. 2024. “Discontinuous Galerkin Spectral Element Method for Continuous Chromatography: Application to the Lumped Rate Model Without Pores.” In Computer Aided Chemical Engineering, 53:3325–30. Elsevier.

Jones, Mark Nicholas, Mads Stevnsborg, Rasmus Fjordbak Nielsen, Deborah Carberry, Khosrow Bagherpour, Seyed Soheil Mansouri, Steen Larsen, et al. 2022. “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU).” In Computer Aided Chemical Engineering, 49:1525–30. Elsevier.

Kozorog, Mirijam, Simon Caserman, Matic Grom, Filipa A. Vicente, Andrej Pohar, and Blaž Likozar. 2023. “Model-Based Process Optimization for mAb Chromatography.” Separation and Purification Technology 305 (January): 122528.

Kumar, Vijesh, and Abraham M. Lenhoff. 2020. “Mechanistic Modeling of Preparative Column Chromatography for Biotherapeutics.” Annual Review of Chemical and Biomolecular Engineering 11 (1): 235–55.



Model Based Flowsheet Studies on Cement Clinker Production Processes

Georgios Melitos1,2, Bart de Groot1, Fabrizio Bezzo2

1Siemens Industry Software Limited, 26-28 Hammersmith Grove, W6 7HA London, United Kingdom; 2CAPE-Lab (Computer-Aided Process Engineering Laboratory), Department of Industrial Engineering, University of Padova, 35131 Padova PD, Italy

The cement value chain is responsible for 7-8% of global CO2 emissions [1]. These emissions originate both directly, via chemical reactions (e.g. calcination) taking place in the process, and indirectly, via the process energy demands. Around 90% of these emissions come from the production of clinker, the main constituent of cement [1]. Clinker production comprises high-temperature and carbon-intensive processes, which occur in the pyroprocessing section of a cement plant. The chemical and physical phenomena occurring in such processes are rather complex and, to this day, these processes have mostly been examined and modelled in the literature as standalone unit operations [2-4]. As a result, there is a lack of holistic, model-based approaches to flowsheet simulation of cement plants in the literature.

This paper presents first-principles mathematical models for the simulation of the pyro-process section of a cement production plant; more specifically the preheating cyclones, the calciner, the rotary kiln and the grate cooler. These mathematical models are then combined in an integrated flowsheet model for the production of clinker. The models incorporate the major heat and mass transport phenomena, reaction kinetics and thermodynamic property estimation models. These mathematical formulations have been implemented in the gPROMS® Advanced Process Modelling environment and solved for various reactor geometries and operating conditions.

The final flowsheet is validated against published data, demonstrating the ability to predict accurately operating temperatures, degree of calcination, gas and solids compositions, fuel consumption and overall CO2 emissions. The utilization of several types of alternative fuels is also investigated, to evaluate the potential for avoiding CO2 emissions by replacing part of the fossil-based coal fuel (used as a reference case). Trade-offs between different process KPIs (net energy consumption, conversion efficiency, CO2 emissions) are identified and evaluated for each fuel utilization scenario.

REFERENCES

[1] Monteiro, Paulo JM, Sabbie A. Miller, and Arpad Horvath. "Towards sustainable concrete." Nature materials 16.7 (2017): 698-699.

[2] Iliuta, I., Dam-Johansen, K., & Jensen, L. S. (2002). Mathematical modeling of an in-line low-NOx calciner. Chemical engineering science, 57(5), 805-820.

[3] Pieper, C., Liedmann, B., Wirtz, S., Scherer, V., Bodendiek, N., & Schaefer, S. (2020). Interaction of the combustion of refuse derived fuel with the clinker bed in rotary cement kilns: A numerical study. Fuel, 266, 117048.

[4] Cui, Z., Shao, W., Chen, Z., & Cheng, L. (2017). Mathematical model and numerical solutions for the coupled gas–solid heat transfer process in moving packed beds. Applied energy, 206, 1297-1308.



A Social Life Cycle Assessment for Sustainable Pharmaceutical Supply Chains

Inês Duarte, Bruna Mota, Andreia Santos, Tânia Pinto-Varela, Ana Paula Barbosa-Povoa

Centre for Management Studies of IST (CEG-IST), University of Lisbon, Portugal

The increasing pressure from governments, media, and consumers is driving companies to adopt sustainable practices by reducing their environmental and social impacts. While the economic dimension of sustainable supply chain management is always considered, and the environmental one has been thoroughly addressed, the social dimension remains underdeveloped (Barbosa-Póvoa et al., 2018) despite growing attention to social sustainability issues in recent years (Duarte et al., 2022). This imbalance is particularly concerning in the healthcare sector, especially within the pharmaceutical industry, given the significant impact of pharmaceutical products on public health and well-being. On the other hand, while pharmaceutical products are vital to society, social risks are incurred throughout the entire supply chain, from primary production activities to the manufacturing of the final product and its distribution. Addressing these concerns requires a comprehensive framework that captures the social impacts of every stage of the pharmaceutical supply chain.

Social LCA is a well-established approach to assessing the social performance of supply chains by identifying both the positive and negative social impacts linked to a system's life cycle. By adopting a four-step process as outlined in the ISO 14040 standard (ISO, 2006), Social LCA enables a thorough evaluation of the social sustainability of supply chain activities. This approach allows for the identification and mitigation of key social risks, thus enabling more informed decision-making and promoting sustainable development goals. Hence, in this work, a social life cycle assessment framework is developed and integrated into the pharmaceutical supply chain design and planning model of Duarte et al. (2022), a multi-objective mixed integer linear programming model. The economic objective is measured through the maximization of the Net Present Value, while the social objective maximizes equity in access through a Disability Adjusted Life Years (DALY) metric. The social life cycle assessment will allow a broader social assessment of the whole supply chain activities by evaluating social risks and generating actionable insights for minimizing the most significant social risks within the pharmaceutical supply chain.

A case study based on a global vaccine supply chain is conducted where the main social hotspots are identified, as well as trade-offs between the economic and accessibility objectives. Through this analysis, informed recommendations are developed to mitigate potential social impacts associated with the supply chain under study.

The integration of social LCA into a pharmaceutical supply chain design and planning optimization model constitutes the main contribution of this work, providing a practical tool for decision-makers to enhance the overall sustainability of their operations and address the complex social challenges of global pharmaceutical supply chains.

Barbosa-Póvoa, A. P., da Silva, C., & Carvalho, A. (2018). Opportunities and challenges in sustainable supply chain: An operations research perspective. European Journal of Operational Research, 268(2), 399–431. https://doi.org/10.1016/j.ejor.2017.10.036

Duarte, I., Mota, B., Pinto-Varela, T., & Barbosa-Póvoa, A. P. (2022). Pharmaceutical industry supply chains: How to sustainably improve access to vaccines? Chemical Engineering Research and Design, 182, 324–341. https://doi.org/10.1016/j.cherd.2022.04.001

ISO. (2006). ISO 14040:2006 Environmental management - Life cycle assessment - Principles and framework. Geneva, Switzerland: International Organization for Standardization.



Quantum Computing for Synthetic Bioprocess Data Generation and Time-Series Forecasting

Shawn Gibford1,2, Mohammed Reza Boskabadi2, Seyed Soheil Mansouri1,2

1Sqale; 2Denmark Technical University

Data scarcity in bioprocess engineering, particularly for single-cell organism cultivation in pilot-scale photobioreactors (PBRs), poses significant challenges for accurate model development and process optimization. This issue is especially pronounced in pilot-scale operations (e.g., 20L PBRs), where data acquisition is infrequent and costly. The nonlinear nature of these processes, coupled with various non-idealities, creates a substantial gap between lab-scale and pilot-scale operations, hindering the development of accurate mechanistic models and data-driven approaches.

To address these challenges, we propose a novel approach leveraging quantum computing and machine learning. Specifically, we employ a quantum Generative Adversarial Network (qGAN) to generate synthetic bioprocess time-series data, with a focus on quality indicator variables like Optical Density (OD) and Dissolved Oxygen (DO), key metrics for Dry Biomass estimation. The quantum approach offers potential advantages over classical methods, including better generalization capabilities and faster model training using tensor networks.

Various network and quantum circuit architectures were tested to capture the statistical characteristics of real process data. Our results show high fidelity in synthetic data generation and significant improvement in the performance of forecasting models, such as Long Short-Term Memory (LSTM) networks, when augmented with GAN-generated samples. This approach addresses critical data gaps, enabling better model development and parameter optimization in bioprocess engineering.
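
The following sketch illustrates only the augmentation step described above: an LSTM forecaster trained on real windows combined with synthetic ones. Random-walk surrogates stand in here for both the measured OD/DO series and the qGAN output, and the window length, architecture and forecast target are assumptions for illustration, not the study's settings.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    win, n_real, n_syn = 24, 40, 200
    real = torch.cumsum(torch.randn(n_real, win + 1, 2), dim=1)       # placeholder measured OD/DO
    synthetic = torch.cumsum(torch.randn(n_syn, win + 1, 2), dim=1)   # stand-in for qGAN samples
    data = torch.cat([real, synthetic])
    x, y = data[:, :win, :], data[:, win:, 0]                         # predict the next OD value

    class Forecaster(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])        # forecast from the last hidden state

    model = Forecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print("training MSE on the augmented set:", float(loss))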

The success in generating high-quality synthetic data opens new avenues for bioprocess optimization and scale-up. By addressing the critical issue of data scarcity, this method enables the development of more accurate virtual twins and robust optimization strategies. Furthermore, the ability to continuously update models with newly acquired online data suggests a pathway towards adaptive, real-time process control.

This work not only demonstrates the potential of quantum machine learning in bioprocess engineering but also provides a framework for addressing similar data scarcity issues in other complex scientific domains. Future research will focus on refining the qGAN architectures, exploring integration with real-time sensor data, and extending the approach to other bioprocess systems and scale-up scenarios.

References:

Orlandi, F.; Barbierato, E.; Gatti, A. Enhancing Financial Time Series Prediction with Quantum-Enhanced Synthetic Data Generation: A Case Study on the S&P 500 Using a Quantum Wasserstein Generative Adversarial Network Approach with a Gradient Penalty. Electronics 2024, 13, 2158. https://doi.org/10.3390/electronics13112158



Optimising Crop Schedules and Environmental Impact in Climate-Controlled Greenhouses: A Hydroponics vs. Soil-Based Food Production Case Study

Sarah Namany, Farhat Mahmoud, Tareq Al-Ansari

Hamad bin Khalifa University, Qatar

Optimising greenhouse operations in arid regions is essential for sustainable agriculture due to limited water resources and high energy demands for climate control. This paper proposes a multi-objective optimisation framework aimed at minimising both the operational costs and environmental emissions of a climate-controlled greenhouse. The framework schedules the cultivation of three different crops, namely tomato, cucumber, and bell pepper, throughout the year. These crops are selected for their varying growth conditions, which induce variability in energy and water inputs, providing a comprehensive assessment of the optimisation model. The model integrates factors such as temperature, humidity, light intensity, and irrigation requirements specific to each crop. It is solved using a genetic algorithm combined with Pareto front analysis to address the multi-objective nature effectively. This approach facilitates the identification of optimal trade-offs between cost and emissions, offering a set of efficient solutions for decision-makers. Applied to a greenhouse in an arid region, the model evaluates two scenarios: a hydroponic system and a conventional soil-based system. Results of the study indicate that the multi-objective optimisation effectively reduces operational costs and environmental emissions while fulfilling crop demand. The hydroponic scenario demonstrates higher water-use efficiency and allows for precise nutrient management, resulting in a lower environmental impact compared to the conventional soil system. Moreover, the optimised scheduling balances energy consumption for climate control across different crop requirements, enhancing overall sustainability. This study underscores the potential of advanced optimisation techniques in enhancing the efficiency and sustainability of greenhouse agriculture in challenging environments.
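
The sketch below illustrates only the Pareto-front idea behind the cost/emissions trade-off: random candidate crop schedules are scored with placeholder cost and emission factors and the non-dominated set is kept. It is not the genetic algorithm used in the study, and all factor values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    crops = ["tomato", "cucumber", "bell pepper"]
    cost_f = np.array([1.0, 0.8, 1.2])      # relative monthly cost per crop (assumed)
    emis_f = np.array([0.9, 1.1, 0.7])      # relative monthly emissions per crop (assumed)

    schedules = rng.integers(0, 3, size=(500, 12))          # which crop occupies each month
    cost = cost_f[schedules].sum(axis=1)
    emissions = emis_f[schedules].sum(axis=1)

    def pareto_mask(f1, f2):
        # keep a schedule only if no other schedule is at least as good in both
        # objectives and strictly better in one
        keep = np.ones(len(f1), dtype=bool)
        for i in range(len(f1)):
            dominated = (f1 <= f1[i]) & (f2 <= f2[i]) & ((f1 < f1[i]) | (f2 < f2[i]))
            keep[i] = not dominated.any()
        return keep

    front = pareto_mask(cost, emissions)
    print(f"{front.sum()} non-dominated schedules out of {len(schedules)}")
    print("one non-dominated schedule:", [crops[i] for i in schedules[front][0]])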



Technological Trends towards Sustainable and Circular Process Design

Mauricio Sales-Cruz, Teresa Lopez-Arenas

Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa, Mexico

Current trends in technology are being directed toward the enhancement of teaching methods and the applicability of engineering concepts to industry, especially in the areas of sustainability and circular process design. These shifts signal a transformation in the education of chemical and biological engineering students, who are being equipped with emerging skills through practical, digital-focused approaches that align with evolving industry needs and global sustainability objectives.

Within this educational framework, significant focus is placed on computational modeling and simulation tools, sustainable process design and the circular economy, which are recognized as essential in preparing students to implement efficient and environmentally friendly processes. For instance:

  • The circular economy concept is introduced, in which waste is eliminated by redesigning production systems to enhance or maintain profitability. This model emphasizes product longevity, recycling, reuse, and the valorization of waste.
  • Process integration (the biorefineries concept) is highlighted as a complex challenge requiring advanced techniques in separation, catalysis, and biotechnology, integrating both chemical and biological engineering disciplines.
  • Modeling and simulation tools are essential in engineering education, enabling students to analyze and optimize complex processes without incurring the costs or time associated with experimental setups.
  • The use of programming languages (such as MATLAB or COMSOL), equation-based process simulators (such as gPROMS), and modular process simulators (such as ASPEN or SuperPro Designer) is strongly encouraged.

From a pedagogical viewpoint, primary educational trends for knowledge transfer and meaningful learning include:

  1. Problem-Based Learning (PBL) approaches are promoted, using practical industry-related problems to improve students' decision-making skills and knowledge application.
  2. Virtual Labs offer students remote or simulated access to complex processes, including immersive experiences in industrial plants and laboratory equipment.
  3. Integration of Industry 4.0 and Process Automation tools facilitates the analysis of massive data sets (Big Data) and introduces technologies such as artificial intelligence (AI).
  4. Interdisciplinary and Collaborative Learning fosters integration across disciplines such as biology, chemistry, materials engineering, computer science, and economics.
  5. Blended Learning Models combine traditional teaching methods with digital tools, with online courses, e-learning platforms, and multimedia resources enhancing in-person classes.
  6. Continuing Education and Micro-credentials are encouraged as technologies and approaches evolve rapidly, with short, specialized courses often offered through online platforms.

This paper critically examines these educational trends, emphasizing the shift toward practical and digital approaches that align with changing industry demands and sustainability goals. Additionally, student-led case studies on organic waste revalorization will be included, demonstrating the quantification of environmental impacts, assessments of economic viability in terms of investment and operational costs, and evaluations of innovative solutions grounded in circular economy principles.



From experiment design to data-driven modeling of powder compaction process

Rene Brands1, Vikas Kumar Mishra2, Jens Bartsch1, Mohammad Al Khatib2, Markus Thommes1, Naim Bajcinca2

1RPTU Kaiserslautern, Germany; 2TU Dortmund, Germany

Tableting is a dry granulation process for compacting powder blends into tablets. In this process, a blend of active pharmaceutical ingredients (APIs) and excipients is fed into the hopper of a rotary tablet press via feeders. Inside the tablet press, rotating feed frame paddle wheels fill powder into dies, with tablet mass adjusted by the lower punch position during the die filling process. Pre-compression rolls press air out of the die, while main compression rolls apply the force necessary for compacting the powder into tablets. In this paper, process variables such as feeder screw speeds, feed frame impeller speed, lower punch position during die filling, and punch distance during main compression have been systematically varied. Corresponding responses, including pre-compression force, ejection force, and tablet porosity, have been evaluated to optimize the tableting process. After implementing an OPC UA interface, process variables can be monitored in real time. To enable in-line monitoring of tablet porosity, a novel UV/Vis fiber optic probe has been integrated into the rotary tablet press. To further analyze the overall process, a data-driven modeling approach is adopted. Data-driven modeling is a valuable alternative for modeling real-world processes where, for instance, first-principles modeling is difficult or infeasible. Due to the complex nature of the process, several model classes need to be explored. To begin with, linear autoregressive models with exogenous inputs (ARX) have been considered, followed by nonlinear autoregressive models with exogenous inputs (NARX). Finally, several experiments have been designed to further validate and test the effectiveness of the developed models in real-time scenarios.
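
As a minimal illustration of the ARX idea mentioned above, the sketch below identifies a single-input, single-output ARX model by ordinary least squares. The data are synthetic placeholders (an assumed second-order system), not tablet-press measurements.

    # ARX identification by least squares: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    u = rng.normal(size=n)                                  # exogenous input (e.g. punch distance)
    y = np.zeros(n)
    for k in range(2, n):                                   # "true" system used to create the data
        y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + 0.5 * u[k - 1] + 0.05 * rng.normal()

    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])       # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    print("estimated [a1, a2, b1]:", np.round(theta, 3))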



Taking into account social aspects for the development of industrial ecology

Maud Verneuil, Sydney Thomas, Marianne Boix

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Industrial ecology, in the context of decarbonization, appears to be an important and significant way to reduce carbon dioxide emissions. Eco-industrial parks are also real applications that can help to modify socio-ecological landscapes at the scale of a territory.

In the context of industrial ecology, optimization models make it possible to implement synergies according to economic and environmental criteria. Even though numerous studies have proposed several criteria, such as CO2 emissions, Net Present Value or other economic indicators, to date only a few social indicators have been taken into account in multi-criteria models. Job creation is often used as a social indicator in this type of analysis. However, the social nature of this indicator is debatable.

The first aim of the present work is to question the relevance of job creation as a social indicator with a case study. Afterward, we will evaluate the need to measure the social impact of industrial ecology initiatives and query the meaning and the added value of social indicators in this context.

The case study concerns the development of offshore wind energy expertise in the port of Port-La-Nouvelle, with the port of Sète as a rear base. The aim is to assess the capacity of the port of Sète to host component manufacturing and anchor system storage activities, by evaluating the economic, environmental and social impacts of this approach. We will then highlight the criteria chosen and assess their relevance and limitations, particularly with regard to the social aspect.

The second step is to define the needs and challenges of an industrial and territorial ecology approach. What are the key success factors? In attempting to answer this question, it became clear that an eco-industrial park cannot survive without a climate of trust and cooperation (Diemer & Rubio, 2016). The complexity of this ecosystem, with its interconnections between industrialists at the micro scale, the park at the meso scale and its environment at the macro scale, makes building and maintaining relationships the decisive factor.

Thirdly, we will examine the real added value of social indicators for this relational dimension, in particular by studying the way in which social indicators are implemented. Indeed, beyond the indicator itself, the process chosen for its elaboration has a real influence on the resulting indicator, as well as on the ability of users to appropriate it. We therefore need to consider which process seems most effective in enabling social indicators to provide a new perspective on an industrial and territorial ecology approach.

Finally, we will highlight the limits of metrics based on social indicators, and question their ability to capture a complex, multidimensional social environment. We will also explore the possibility of using other concepts and tools to account for social reality, and assess their relevance to industrial and territorial ecology.



Life cycle impacts characterization of carbon capture technologies for their integration in eco-industrial parks

Agathe Gabrion, Sydney Thomas, Marianne Boix, Stephane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Human activities since the pre-industrial era have been recognized as responsible for climate change. This influence on the climate is primarily driven by the combustion of fossil fuels, which releases significant quantities of carbon dioxide (CO2) and other greenhouse gases into the atmosphere, contributing to the greenhouse effect.

Industrial activities are a major factor in climate change, given the amount of greenhouse gases released into the Earth's atmosphere from fossil fuel burning and from the energy required for industrial processes. In an attempt to reduce the environmental impact of industry on climate change, many methods are being studied and considered.

This study focuses on one of these technologies: carbon capture. Carbon capture refers to the process of trapping CO2 molecules after the combustion of fossil fuels. The carbon is then used or stored in order to prevent it from reaching the atmosphere. This whole process is referred to as Carbon Capture, Utilization and Storage (CCUS). Carbon capture comprises multiple technologies. This study focuses only on the amine-based absorption capture method, because it represents 90% of the operational market. It does not evaluate the utilization and storage steps.

In this study, the carbon capture process is considered as part of a larger project aiming at reducing the CO2 emissions of industry, referred to as an Eco-Industrial Park (EIP). Indeed, the process is studied in the context of an EIP in order to determine whether setting it up is more or less valuable, in terms of ecological impact, than the current situation of releasing the greenhouse gases into the atmosphere. The results will then inform the study of the integration of alternative carbon capture methods in the EIP.

To conduct this study properly, it was necessary to consider various types of ecological impact. While carbon absorption using an amine solvent reduces the amount of CO2 released into the atmosphere, the degradation associated with amine solvents must also be taken into account. Several different criteria were therefore needed to compare the ecological impact of carbon capture with the ecological impact of releasing industry-produced greenhouse gases. The objective is to prevent the transfer of pollution from greenhouse gases to other forms of environmental contamination. To do so, the Life Cycle Assessment (LCA) method was chosen to assess the environmental impacts of both scenarios.

Using the SimaPro© software to conduct the LCA, this study showed that treating the gas stream exiting an industrial site offers environmental advantages compared to its direct release into the atmosphere. Within the framework of an Eco-Industrial Park (EIP), the implementation of a CO2 absorption process could contribute to mitigating climate change impacts. However, it is important to consider that other factors, such as ecotoxicity and resource utilization, may become more significant when the CO2 absorption process is incorporated into the EIP.



Dynamic simulation and life cycle assessment of energy storage systems connecting variable renewable sources with regional energy demand

Ayumi Yamaki, Shoma Fujii, Yuichiro Kanematsu, Yasunori Kikuchi

The University of Tokyo, Japan

Increasing reliance on variable renewable energy (VRE) is crucial to achieving a sustainable and carbon-neutral energy system. However, the inherent intermittency of VRE creates challenges in ensuring a reliable power supply that meets fluctuating electricity demand. Energy storage systems are pivotal in addressing this issue by storing surplus energy and supplying it when needed. This study explores the applicability of different energy storage technologies—batteries, hydrogen (H2) storage, and thermal energy storage (TES)—to control electricity variability from renewable energy sources, focusing on electricity demand and life cycle impacts.
This research aims to evaluate the performance and environmental impacts of the energy storage system when integrated with wind power. A model of an energy storage system connected to wind energy was constructed based on the existing model (Yamaki et al., 2024), and the annual energy flow simulation was conducted. The model assumes that all generated wind energy is stored and subsequently used to supply electricity to consumers. The energy flow was calculated hourly from 0:00 on January 1st to 24:00 on December 31st based on the model made by Yamaki et al. (2023). The amounts of energy storage and VRE installation were set, and then the maximum amount of power to be sold from the energy storage system was estimated. In the simulation, the stored energy was calculated hourly from the charge of VRE-derived power/heat and the discharge of power to be sold.
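
The sketch below gives a minimal hourly energy-flow loop in the spirit of the simulation described above: wind power charges a store subject to a loss term, and a fixed amount of power is sold each hour when enough energy is available. All figures are placeholders, not the parameters of the cited model.

    import numpy as np

    rng = np.random.default_rng(2)
    wind = np.clip(rng.normal(5.0, 3.0, 8760), 0, None)    # MWh generated per hour (assumed)
    capacity, loss_rate, sell = 200.0, 0.001, 4.0           # MWh, fractional loss/h, MWh sold/h

    stored, sold_total, curtailed = 0.0, 0.0, 0.0
    for w in wind:
        stored = stored * (1 - loss_rate) + w               # self-discharge/leakage, then charging
        if stored > capacity:                               # surplus beyond storage is curtailed
            curtailed += stored - capacity
            stored = capacity
        if stored >= sell:                                  # discharge the power to be sold
            stored -= sell
            sold_total += sell

    print(f"power sold: {sold_total:.0f} MWh, curtailed: {curtailed:.0f} MWh")
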
Life cycle assessment (LCA) was employed to quantify the environmental impacts of each storage technology from cradle to grave, considering both the energy storage system infrastructure and operational processes for various wind energy and energy storage scales. This study evaluated GHG emissions and abiotic resource depletion as environmental impacts.
The amount of power sold was calculated by energy flow simulation. The simulation results indicate that the amount of power sold increases as wind energy generation and storage capacity rise. However, when storage capacities are over-dimensioned, the stored energy diminishes due to battery self-discharge, H2 leakage, or thermal losses in TES. This loss of stored energy leads to a reduction in the power sold. The environmental impacts of each energy storage system depended on the specific storage type and capacity. Batteries, H2 storage, and TES exhibited different trade-offs regarding GHG emissions and abiotic resource depletion.
This study highlights the importance of integrating dynamic simulations with LCA to provide a holistic assessment of energy storage systems. By quantifying both the energy supply capacity and the environmental impacts, this research offers valuable insights for designing energy storage solutions that enhance the viability of VRE integration while minimizing environmental impacts. The findings contribute to developing more resilient and sustainable energy storage systems that are adaptable to regional energy supply conditions.

Yamaki, A., et al.; Life cycle greenhouse gas emissions of cogeneration energy hubs at Japanese paper mills with thermal energy storage, Energy, 270, 126886 (2023)
Yamaki, A., et al.; Comparative Life Cycle Assessment of Energy Storage Systems for Connecting Large-Scale Wind Energy to the Grid, J. Chem. Eng. Jpn., 57 (2024)



Optimisation of carbon capture utilisation and storage supply chains under carbon trading and taxation

Hourissa Soleymani Babadi, Lazaros G. Papageorgiou

The Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, University College London (UCL), Torrington Place, London WC1E 7JE, UK

To mitigate climate change, and in particular, the rise of CO2 levels in the atmosphere, ambitious emissions targets have been set by political institutions such as the European Union, which aims to reduce 2050 emissions by 80% versus 1990 levels (Leonzio et al., 2019). One proposed solution to lower CO2 levels in the atmosphere is Carbon Capture, Storage and Utilisation (CCUS). However, studies in the literature to date have largely focused on utilisation and storage separately and neither considered the effects of CO2 taxation nor systematically studied the optimality criteria of the CO2 conversion products (Leonzio et al., 2019; Zhang et al., 2017; Zhang et al., 2020). A systematic study for a realistically large industrial supply chain that considers the aforementioned aspects jointly is necessary to inform political and industrial decision-making.

In this work, a Mixed Integer Linear Programming (MILP) framework for a supply chain network was developed to incorporate storage, utilisation, trading, and taxation as strategies for managing CO2 emissions. Possible CO2 utilisation products were ranked using Multi-Criteria Decision Analysis (MCDA) techniques, and three of the top 10 products were considered to serve as CO2-based products in this supply chain network. The model included several power plants in one of the European countries with the highest CO2 emissions. The goal of the proposed model is to minimise the total cost of the supply chain, taking into account the process and investment decisions. Furthermore, incorporating multi-objective optimisation that simultaneously considers CO2 reduction and supply chain costs can offer both environmental and economic benefits. Therefore, the ε-constraint multi-objective optimisation method was implemented as a solution procedure to minimise the total cost while maximising the CO2 reduction. The game theory Nash approach was applied to determine the fair trade-off between the two objectives. The investigated case study demonstrates the importance of including financial carbon management through tax and trade in addition to the physical capture, storage, and utilisation of CO2.
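
A toy ε-constraint sweep in Pyomo illustrates the multi-objective procedure described above: cost is minimised subject to a minimum CO2 reduction, and the bound is varied to trace the cost/CO2 Pareto front. Two abstract abatement options with placeholder costs and capacities stand in for the full CCUS network.

    import pyomo.environ as pyo

    cost_per_t = {"storage": 40.0, "utilisation": 65.0}     # EUR per t CO2 abated (assumed)
    max_t = {"storage": 8e5, "utilisation": 3e5}            # t CO2 per year capacity (assumed)

    def min_cost(eps):
        m = pyo.ConcreteModel()
        m.x = pyo.Var(list(cost_per_t), within=pyo.NonNegativeReals)
        m.capacity = pyo.Constraint(list(cost_per_t), rule=lambda m, i: m.x[i] <= max_t[i])
        m.reduction = pyo.Constraint(expr=sum(m.x[i] for i in cost_per_t) >= eps)
        m.obj = pyo.Objective(expr=sum(cost_per_t[i] * m.x[i] for i in cost_per_t))
        pyo.SolverFactory("glpk").solve(m)                  # any LP/MILP solver
        return pyo.value(m.obj)

    for eps in (2e5, 5e5, 8e5, 1.0e6):                      # epsilon = required CO2 reduction
        print(f"reduce {eps:,.0f} t/y  ->  minimum cost {min_cost(eps):,.0f} EUR/y")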

References

Leonzio, G., Foscolo, P. U., & Zondervan, E. (2019). An outlook towards 2030: optimization and design of a CCUS supply chain in Germany. Computers & Chemical Engineering, 125, 499-513.

Zhang, D., Alhorr, Y., Elsarrag, E., Marafia, A. H., Lettieri, P., & Papageorgiou, L. G. (2017). Fair design of CCS infrastructure for power plants in Qatar under carbon trading scheme. International Journal of Greenhouse Gas Control, 56, 43-54.

Zhang, S., Zhuang, Y., Liu, L., Zhang, L., & Du, J. (2020). Optimization-based approach for CO2 utilization in carbon capture, utilization and storage supply chain. Computers & Chemical Engineering, 139, 106885.



Impact of energy sources on Global Warming Potential of hydrogen production: Case study of Uruguay

Vitória Olave de Freitas1, José Pineda1, Valeria Larnaudie2, Mariana Corengia3

1Unidad Tecnológica de Energias Renovables, Universidad Tecnologica del Uruguay; 2Depto. de Bioingeniería, Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar; 3Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar

In recent years, several countries have developed strategies to advance green hydrogen as a feedstock or energy carrier. Hydrogen can contribute to the decarbonization of various sectors, with its use in the transport and industry sectors being of particular interest. In 2022, Uruguay launched its green hydrogen roadmap, outlining its plan to promote this market. The country has the potential to become a producer of green hydrogen derivatives for export due to: the availability and complementarity of renewable energies (solar and wind); an electricity matrix with a high share of renewable sources; the availability of water; and favorable logistics.

The energy source for water electrolysis is a key factor in both the final cost and the environmental impact of hydrogen production. In this context, this work performs the life cycle assessment (LCA) of a hydrogen production process by water electrolysis, combining different renewable energy sources available in Uruguay. The system evaluated includes a 50 MW electrolyzer and the installation of 150 MW of new power sources. Three configurations for power production were analyzed: (1) a photovoltaic farm, (2) a wind farm, and (3) a hybrid farm (solar and wind). In all cases, connection to the national power grid is assumed to ensure a reliable and uninterrupted energy supply for plant operation.

Different scenarios for the grid energy mix are analyzed to assess their environmental impact on the hydrogen produced. For the current case, the average generation over the past five years is considered, while for future projections, variations in the shares of fossil and renewable energy sources were evaluated.

To determine the optimal combination of renewable energy sources for the hybrid generation scenario, the complementarity of solar and wind resources was analyzed using the standard deviation, a metric widely used for this purpose. This study was carried out using data from real plants in Uruguay. Seeking the most stable generation, the optimal mix of power generation capacity is 54% solar and 46% wind.
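
The complementarity analysis can be sketched as follows: find the solar share that minimises the standard deviation of the combined hourly capacity factor. The series below are synthetic placeholders, not the Uruguayan plant data used in the study, so the resulting mix will differ from the 54%/46% result reported above.

    import numpy as np

    rng = np.random.default_rng(3)
    hours = np.arange(8760)
    solar = np.clip(np.sin(2 * np.pi * (hours % 24) / 24 - np.pi / 2), 0, None)  # daytime profile
    wind = np.clip(0.4 + 0.25 * rng.standard_normal(8760), 0, 1)                 # noisy wind profile

    shares = np.linspace(0, 1, 101)
    stds = [np.std(s * solar + (1 - s) * wind) for s in shares]
    best = shares[int(np.argmin(stds))]
    print(f"most stable mix: {best:.0%} solar, {1 - best:.0%} wind")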

The environmental impact of the different case studies was evaluated through an LCA using OpenLCA software and the Ecoinvent database. For this analysis, 1 kg of produced hydrogen was considered the functional unit. The system boundaries included power generation and the electrolysis system used for hydrogen production. Among the impact categories that can be analyzed by LCA (human health, environmental, resource depletion, etc.), this work focused on the global warming potential (GWP). As hydrogen is promoted as an alternative fuel or feedstock that may diminish CO2 emissions, its GWP is a particularly relevant metric.

Implementing hybrid solar and wind energy systems increases the stability of the energy produced from renewable sources, thereby reducing the amount of energy taken from the grid. These hybrid plants therefore have the potential to reduce CO2 emissions per kg of hydrogen produced. Still, this benefit diminishes when the electric grid has a higher contribution of renewable energy.



Impact of the share of renewable energy integration in the selection of sustainable natural gas production pathways

Meire Ellen Gorete Ribeiro Domingos, Daniel Florez-Orrego, Oktay Boztas, Soline Corre, François Maréchal

Ecole Polytechnique Federale de Lausanne, Switzerland

Sustainable natural gas (SNG) can be produced via different routes, such as anaerobic digestion and thermal gasification. Other technologies, such as CO2 injection, storage systems (e.g., CH4, CO2) and reversible solid oxide cells (rSOC), can also be integrated in order to handle the seasonal fluctuations of renewable energy supply and market volatility. In this work, the impact of seasonal excess and deficit of electricity generation, and the renewable fraction thereof, on the sustainability metrics of different scenarios for the energy transition in SNG production is evaluated. The analysis considers both the current energy mix scenario and a future energy mix scenario. In the latter, a fully renewable grid is modeled, based on generation that takes into account GIS-based land restrictions, geo-spatial wind speed and irradiation data, and the maximum electricity production from renewable sources considering EU-wide low restrictions. Moreover, the electricity demand considers full electrification of the residential and mobility sectors. The biodigestion process considers a biomethane potential of 300 Nm3 CH4 per t of volatile solids using organic wastes. The upgraded biomethane is marketed and the CO2-rich stream is sent on for further biomethane production. The CO2 from the anaerobic digestion unit can be stored at -50 °C and 7 bar (1,155 kg/m3), so that it can later be regasified and fed to a methanation system. The necessary hydrogen is provided by the rSOC system operating at 1 bar, 800 °C, and 81% water conversion. The rSOC system can also be operated in fuel cell mode, consuming methane to produce electricity. The gasification of the digestate from the anaerobic digestion unit uses steam as the gasification agent, and hydrogen from the electrolyzer is used to adjust the syngas composition so that it is suitable for the methanation reaction. The methanation system is based on the TREMP® process, consisting of intercooled catalytic beds to achieve higher conversion. A mixed integer linear programming method is employed to identify optimal system configurations under different economic scenarios, helping to elucidate the feasibility of the proposed processes as well as the optimal production planning of SNG. As a result, the integration of renewable energy and the combination of different SNG production processes prove to be crucial for strategic planning, enhancing resilience against market volatility and supporting the decarbonization of the energy sector. Improved handling of intermittent renewable energy allows optimal CO2 and waste management to achieve year-round overall process efficiencies above 55%. This systematic approach enables better decision-making, risk management, and investment planning, informing energy providers about the opportunities and challenges linked to the decarbonization of the energy supply.



Decarbonizing the German Aviation Sector: Assessing the feasibility of E-Fuels and their environmental implications

Pablo Silva Ortiz1, Oualid Bouksila2, Agnes Jocher2

1Universidad Industrial de Santander-UIS, Colombia; 2Technical University of Munich-TUM, Germany

The aviation industry is united in its goal of achieving "net-zero" emissions by mid-century, in accordance with global targets like COP21 and European initiatives such as "Fit for 55" and "ReFuelEU Aviation." However, current advancements and capacities may be insufficient to meet these targets on time. Recognizing the critical need to reduce greenhouse gas (GHG) emissions, the German government and the European Commission strongly advocate measures to lower aviation emissions, which is expected to significantly increase the demand for sustainable aviation fuels, especially synthetic fuels. In this sense, import scenarios from North African countries to Germany are under consideration. Hence, the objective of this work is to explore the pathways and the life cycle environmental impacts of e-fuel production and import, focusing on decarbonizing the aviation sector. Through a multi-faceted investigation, this work aims to offer strategic insights into the future of aviation fuel, blending technological advancements with international cooperation for a sustainable aviation industry. Our analysis compares the feasibility of local production in Germany with potential imports from the Maghreb countries (Tunisia, Algeria, and Morocco). To establish a comprehensive view, the study forecasts Germany's aviation fuel demand across three key timelines: the current scenario, 2030, and 2050. These projections account for anticipated advancements in renewable energy, proton exchange membrane (PEM) electrolysis, and Direct Air Capture (DAC) technologies via prospective Life Cycle Assessment (LCA). A technical concept of power-to-liquid fuel production is presented with the corresponding Life Cycle Inventory, reflecting a realistic consideration of the local conditions, including the effect of water desalination. In parallel, the export potential of the Maghreb countries is evaluated, considering both social and economic dimensions. The environmental impacts of two export pathways, direct e-fuel export and hydrogen export as an intermediate product, are then assessed through cradle-to-gate and cradle-to-grave scenarios, offering a detailed analysis of their respective carbon footprints. Finally, the study determines the qualitative cost implications of each pathway, providing a comparative analysis that identifies the most promising approach for sustainable aviation fuel production. The results, related mainly to Global Warming Potential (GWP) and Water Consumption Potential (WCP), suggest that Algeria, endowed with high capacity factors for photovoltaic (PV) solar and wind systems, achieves the largest WCP reductions compared to Germany, ranging from 31.2% to 57.1% in a cradle-to-gate scenario. From a cradle-to-grave perspective, local German PV solar scenarios fail to meet the RED II sustainable fuel requirements, whereas most export scenarios achieve GWP reductions exceeding 70%. Algeria shows the best overall reduction, particularly with wind energy (85% currently to 88% by 2050), while Morocco excels with PV solar (70% currently to 75% by 2050). Although onshore wind shows strong environmental performance, PV solar offers the highest impact reductions and cost advantages, making Morocco's and Algeria's PV systems superior to German and North African wind systems.



Solar-Driven Hydrogen Economy Potential in the Greater Middle East: Geographic, Economic, and Environmental Perspectives

Abiha Abbas1, Muhammad Mustafa Tahir2, Jay Liu3, Rofice Dickson1

1Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 11000, FI-00076 Aalto, Finland; 2Department of Chemistry & Chemical Engineering, SBA School of Science and Engineering, Lahore University of Management Sciences (LUMS), Lahore, 54792, Pakistan; 3Department of Chemical Engineering, Pukyong National University, Busan, Republic of Korea

This study employed advanced GIS spatial analysis to assess land suitability for solar-powered hydrogen production across thirty countries in the GME region. Factors such as PVOUT, proximity to water sources and roads, land slope, land use and cover, and restricted/protected areas were evaluated. An AHP-based MCDM analysis was used to classify land into different suitability levels.

Techno-economic optimization models were then applied to assess the levelized cost of hydrogen (LCOH), production potential, and the levelized costs of ammonia (LCOA) and methanol (LCOM) for 2024 and 2050 under different scenarios. Sensitivity analysis quantified uncertainties, while cradle-to-grave life cycle analysis (LCA) calculated the CO₂ avoidance potential for highly suitable areas.
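
As a worked illustration of the levelised-cost metric used for screening sites, the snippet below annualises CAPEX with a capital recovery factor and divides total annual cost by annual hydrogen production. The function name and all input figures are illustrative assumptions, not the study's techno-economic parameters.

    def lcoh(capex, opex_per_year, h2_per_year, rate=0.08, lifetime=25):
        """Levelised cost of hydrogen in $/kg."""
        crf = rate * (1 + rate) ** lifetime / ((1 + rate) ** lifetime - 1)
        return (capex * crf + opex_per_year) / h2_per_year

    # e.g. a 100 MW solar-electrolysis plant (placeholder figures)
    print(f"LCOH = {lcoh(capex=150e6, opex_per_year=4e6, h2_per_year=3.5e6):.2f} $/kg")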

Key findings include:

  1. Water scarcity is a major factor in site selection for hydrogen production. Fifty-seven percent of the region lacks access to water or is over 10 km away from any source, posing a challenge for hydrogen facility placement. A minimum of 1.7 trillion liters of water is needed to meet conservative hydrogen production estimates, and up to 13 trillion liters for optimistic estimates. A reliable water supply chain is crucial to realize this potential.
  2. Around 14% of the land in the region is unsuitable for hydrogen production due to slopes exceeding 5°. In mountainous countries like Tajikistan, Kyrgyzstan, Lebanon, Armenia, and Türkiye, this figure rises to 50%.
  3. Forty percent of the region is unsuitable due to poor road access, highlighting the need for adequate transportation infrastructure. Roads are essential for the construction, operation, and maintenance of hydrogen facilities, as well as for transporting resources and products.
  4. Only 3.8% of the GME region (1,122,696 km²) is classified as highly suitable for solar hydrogen projects. This land could produce 167 Mt/y and 209 Mt/y of hydrogen in 2024 and 2050 under conservative estimates, with an LCOH of 4.7–7.9 $/kg in 2024 and 2.56–4.17 $/kg in 2050. Under optimistic scenarios, production potential could rise to 1,267 Mt/y in 2024 and 1,590 Mt/y in 2050. Saudi Arabia, Sudan, Pakistan, Iran, and Algeria account for over 50% of the region’s hydrogen potential.
  5. Green ammonia production costs in the region range from 0.96–1.38 $/kg in 2024, decreasing to 0.56–0.79 $/kg by 2050. Green methanol costs range from 1.12–1.59 $/kg in 2024, dropping to 0.67–0.93 $/kg by 2050. Egypt and Libya show the lowest production costs.
  6. LCA reveals significant potential for CO₂ emissions avoidance. In 2024, avoided emissions could range from 119–488 t/y/km² (481 Mt/y), increasing to 477–1952 t/y/km² (3,655 Mt/y) in the optimistic case. By 2050, avoided emissions could reach 4,586 Mt/y. Saudi Arabia and Egypt show the highest potential for CO₂ avoidance.

The study provides a multitude of insights, making a significant contribution to the global hydrogen dialogue and offering a roadmap for policymakers to develop comprehensive strategies for expanding the hydrogen economy in the GME region.

 
10:30am - 11:30amBrewery visit
Location: On-campus brewery
10:30am - 12:30pmT1: Modelling and Simulation - Session 1
Location: Zone 3 - Aula E036
Chair: Simen Akkermans
 
10:30am - 10:50am

Computational Intelligence Applied to the Mathematical Modeling of the Esterification of Fatty Acids with Sugars

Lorenzo Giovanni Tonetti, Ruy de Sousa Junior

Universidade Federal de São Carlos, Brazil

Due to increasing demand for biosurfactants, which are more environmentally friendly than surfactants derived from non-renewable raw materials, there is a growing need for studies proposing new processes for their production or aiming at the optimization of existing ones. In this context, the mathematical modeling of enzymatic reactors for the esterification of fatty acids with sugars in the production of biosurfactants has been a useful tool for studying and optimizing the process. In particular, artificial neural networks and fuzzy systems emerge as promising methods for developing models for those processes [1]. Thus, this work aimed at the development of hybrid-neural models and a fuzzy model for enzymatic esterification reactors associated with biosurfactant production. For the development of artificial neural networks, experimental data provided by Lima et al. [2] were employed, pertaining to the kinetics of xylose ester synthesis obtained through the esterification of oleic or lauric acid in tert-butyl alcohol medium. In the case of the artificial neural network application, the coupling of networks to reactor mass balances was considered in hybrid models to infer reactant concentrations over time. To achieve this, the Runge-Kutta method was employed for the integration of the material balance differential equations. Computationally, an algorithm was constructed incorporating material balances, neural reaction rates and numerical integration. In the case of applying fuzzy logic for modeling and optimizing the conversion of fatty acid esterification with sugars as a function of operational process parameters (time, temperature, molar ratio of substrates and enzyme loading), a study was conducted based on the available set of experimental data [2]. All computational development was carried out using Matlab. When the hybrid-neural models were applied, the neural networks coupled to the reactor mass balances were able to predict the kinetic behavior of the xylose esterification process in biosurfactant synthesis, with R2 values above 0.94, indicating good predictive capacity. The trained fuzzy models were able to simulate the relationships between input variables and the output variable, enabling the construction of various response surface combinations and estimating the optimal operational condition at 60 h of reaction, 55°C, molar ratio of substrates of 5:1 and enzyme loading of 37.5 U/g. The same condition was obtained when applying the particle swarm optimization algorithm. Thus, this study demonstrated the capability of computational intelligence in modeling, simulation and optimization of biosurfactant synthesis.
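
The hybrid-model idea described above can be sketched as follows: a small neural network is trained to return the reaction rate as a function of the acid and sugar concentrations (here on synthetic data generated from an assumed second-order rate law), and the trained network is then embedded in the batch-reactor mass balances integrated with a Runge-Kutta scheme. The code is a Python illustration of the structure, not the Matlab implementation or kinetics of the study.

    import numpy as np
    from scipy.integrate import solve_ivp
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    C = rng.uniform(0.0, 1.0, size=(500, 2))                 # [acid, sugar] training grid (mol/L)
    rate = 0.8 * C[:, 0] * C[:, 1]                           # assumed "true" kinetics for the demo
    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0).fit(C, rate)

    def balances(t, y):
        r = float(net.predict(y.reshape(1, -1))[0])          # neural reaction rate
        return [-r, -r]                                      # dC_acid/dt, dC_sugar/dt (batch)

    sol = solve_ivp(balances, (0.0, 10.0), [1.0, 0.8], method="RK45",
                    t_eval=np.linspace(0, 10, 50))
    print("final concentrations:", np.round(sol.y[:, -1], 3))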

References

[1] Alice de C.L. Torres, Rafael A. Akisue, Lionete N. de Lima, Paulo W. Tardioli, Ruy de Sousa Júnior, Computational intelligence applied to the mathematical modeling of enzymatic syntheses of biosurfactants, Editor(s): Ludovic Montastruc, Stephane Negny, Computer Aided Chemical Engineering, Elsevier, Volume 51, 2022, Pages 139-144, https://doi.org/10.1016/B978-0-323-95879-0.50024-2.

[2] Lima LN, Vieira GNA, Kopp W, Tardioli PW, Giordano RLC. Mono- and heterofunctionalized silica magnetic microparticles (SMMPs) as new carriers for immobilization of lipases. Journal of molecular catalysis B, Enzymatic. 2016 Nov;133:S491–499.



10:50am - 11:10am

Data-Driven Modelling of Biogas Production Using Multi-Task Gaussian Processes

Benaissa Dekhici1,2, Michael Short1,2

1School of Chemistry and Chemical Engineering, University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom; 2Supergen Bioenergy Impact Hub, Energy and Bioproducts Research Institute, UK

This study introduces a data-driven modelling approach using Multi-Task Gaussian Process (MTGP) models to predict biogas production and other key performance indicators in a continuous anaerobic digester fed with hydrothermal carbonization (HTC) products. The feedstock consists of hydrochar and HTC liquor derived from sewage sludge and agro-industrial waste, which exhibit significant variability in composition and degradation profiles, complicating predictive modelling. A Gaussian Process (GP) based approach is utilised because of its capacity to manage intricate and uncertain systems. While GPs have been applied in various fields, their use in predicting biogas production from HTC products has not yet been tested. This problem is critical given the need for reliable models to optimise biogas production from highly variable waste materials. Inputs include operational parameters like dilution rate, soluble chemical oxygen demand (SCOD) at the inlet, and organic loading rate. The GP model predicts biogas production, SCOD in the output, and volatile fatty acid (VFA) concentration. Using an MTGP framework, the model jointly predicts these outputs by leveraging correlations between them, enhancing prediction accuracy. The probabilistic nature of the GP framework allows for the prediction of mean output values along with uncertainties, captured through confidence intervals. This is particularly valuable in dynamic systems like anaerobic digestion (AD), where uncertainties arise from variations in feedstock and microbial activity. The MTGP model extends standard GP regression to handle multiple outputs through the Linear Model of Coregionalization (LMC), which shares a latent structure among tasks via a common kernel, while allowing for task-specific variations. By jointly modelling multiple outputs, the MTGP benefits from shared information across tasks, leading to improved predictions, especially when one output, such as VFA, is difficult to predict independently. The GP model was trained using 164 days of experimental data from a lab-scale anaerobic digester. Results were compared with a previously developed mechanistic model. While the mechanistic model, which incorporates biological kinetics, effectively captures broad trends in biogas production and reactor performance, it is parameter-dependent and assumes specific system dynamics. It struggles with uncertainties in input conditions and process variability. In contrast, the GP model provides a non-parametric, flexible alternative, capable of adapting to complex, nonlinear relationships without prior knowledge of system dynamics. The performance of the GP model was evaluated based on Mean Absolute Error (MAE) and R2 values and compared to the mechanistic model. The GP model yielded SCOD (MAE = 0.108, R2 = 0.984), VFA (MAE = 0.311, R2 = 0.988), and Biogas (MAE = 0.131, R2 = 0.935), indicating a strong fit. In contrast, the mechanistic model produced less accurate results: SCOD (MAE = 0.307, R2 = 0.5), VFA (MAE = 0.2.311, R2 = 0.455), and Biogas (MAE = 0.359, R2 = 0.538). The GP model was able to accurately capture the biogas production trend while providing predictive intervals that encapsulate the observed fluctuations in SCOD and VFA concentrations. In conclusion, this study highlights the potential complementarity between data-driven models like MTGPs and mechanistic models. Combining the flexibility of GPs with mechanistic insights could lead to hybrid models that enhance predictive accuracy and robustness in AD systems.
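
The coregionalization idea can be illustrated with a minimal two-output Gaussian process written in plain numpy: the joint covariance is the Kronecker product of a task covariance B and an RBF kernel over the inputs, so the two outputs share information through B. Data, kernel hyperparameters and the task covariance are placeholders, and this is a hand-rolled sketch rather than the framework used in the study.

    import numpy as np

    def rbf(a, b, ls=1.0):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / ls) ** 2)

    rng = np.random.default_rng(5)
    x = np.sort(rng.uniform(0, 10, 30))                          # e.g. dilution rate values
    y1 = np.sin(x) + 0.05 * rng.standard_normal(30)              # task 1: biogas (placeholder)
    y2 = 0.7 * np.sin(x) + 0.3 + 0.05 * rng.standard_normal(30)  # task 2: VFA (placeholder)
    y = np.concatenate([y1, y2])

    B = np.array([[1.0, 0.8], [0.8, 1.0]])                   # task (coregionalization) covariance
    K = np.kron(B, rbf(x, x)) + 1e-3 * np.eye(2 * len(x))    # joint covariance + noise
    xs = np.linspace(0, 10, 100)
    Ks = np.kron(B, rbf(xs, x))                              # cross-covariance test/train

    mean = Ks @ np.linalg.solve(K, y)                        # joint posterior mean for both tasks
    mean_task1, mean_task2 = np.split(mean, 2)
    print("task-1 prediction at x = 5:", mean_task1[np.argmin(abs(xs - 5))].round(3))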



11:10am - 11:30am

Integration of Yield Gradient Information in Numerical Modeling of the Fluid Catalytic Cracking Process

Wenle Xu1, Baohua Chen1,2, Tong Qiu1

1Department of Chemical Engineering, Tsinghua University, Beijing 100084, China; 2PetroChina Guangxi Petrochemical Company, Qinzhou 535000, China

Abstract

Fluid Catalytic Cracking (FCC) is a crucial process in the refining industry, capable of converting lower-quality feedstocks such as Vacuum Gas Oil (VGO) into higher-value products like gasoline and diesel. Because feedstock properties and product market prices change, the FCC unit needs to be adjusted and optimized in a timely manner. Accurate modelling of the FCC unit facilitates this optimization, potentially leading to significant economic benefits. However, the complexity of the feedstock and of the reactions during the cracking process introduces considerable nonlinearity into the FCC unit (Khaldi et al., 2023). Data-driven modelling approaches are increasingly preferred for their effectiveness in modelling these nonlinear systems. Compared to mechanistic models, deep learning models such as the Multilayer Perceptron (MLP) and Recurrent Neural Network (RNN) generally offer higher accuracy and prediction speed (Yang et al., 2023). However, due to the limited range of actual plant operations and the black-box nature of data-driven models, relying solely on these models for optimization may lead to contradictory decisions.

To address these challenges, we propose incorporating the gradient information of product yields into the training process of data-driven models. Specifically, during model training we not only predict the yields but also calculate the gradients of the yields with respect to key variables using the forward difference method. The deviation between the predicted and actual gradients is then integrated into the loss function. We use the gradients from the mechanistic model Petro-SIM as the ground truth. Given the impracticality of using Petro-SIM to calculate gradients for all points in the training set due to high computational costs, we develop a surrogate model for Petro-SIM to enhance computational speed. Additionally, we employ Bayesian optimization to sample the key variables over a broader range when building the surrogate model, thereby reducing the number of sampling instances needed. The results demonstrate that the surrogate model for Petro-SIM achieves high prediction accuracy, with a mean absolute percentage error (MAPE) of just 0.003, ensuring reliable gradient calculations. More importantly, compared to a purely data-driven model, our hybrid model accurately predicts the gradients of yields with respect to key variables, facilitating the optimization of operating conditions.
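
A minimal sketch of the gradient-informed training idea, assuming a PyTorch yield model and a cheap stand-in for the Petro-SIM surrogate; the forward-difference step size, the weighting factor `lambda_grad`, and the `surrogate_yield` function are illustrative assumptions rather than the authors' settings.

```python
# Sketch: the yield model is trained to match both reference yields and reference
# yield gradients with respect to key operating variables.
import torch
import torch.nn as nn

def surrogate_yield(x):
    # Placeholder for the Petro-SIM surrogate; returns product yields for inputs x.
    return torch.sin(x).sum(dim=1, keepdim=True)

def finite_difference_grad(f, x, eps=1e-3):
    # Forward-difference gradients of f with respect to each key variable.
    grads = []
    for j in range(x.shape[1]):
        x_step = x.clone()
        x_step[:, j] += eps
        grads.append((f(x_step) - f(x)) / eps)
    return torch.cat(grads, dim=1)

model = nn.Sequential(nn.Linear(5, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_grad = 0.1  # weight of the gradient-mismatch term (assumed)

x = torch.rand(256, 5)                               # operating variables (toy data)
y_ref = surrogate_yield(x)                           # reference yields
g_ref = finite_difference_grad(surrogate_yield, x)   # reference gradients

for _ in range(500):
    optimizer.zero_grad()
    y_pred = model(x)
    g_pred = finite_difference_grad(model, x)
    loss = nn.functional.mse_loss(y_pred, y_ref) \
         + lambda_grad * nn.functional.mse_loss(g_pred, g_ref)
    loss.backward()
    optimizer.step()
```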

References

Khaldi, M.K., Al-Dhaifallah, M., Taha, O., 2023. Artificial intelligence perspectives: A systematic literature review on modeling, control, and optimization of fluid catalytic cracking. Alexandria Engineering Journal 80, 294–314. https://doi.org/10.1016/j.aej.2023.08.066

Yang, F., Xu, M., Lei, W., Lv, J., 2023. Artificial Intelligence Methods Applied to Catalytic Cracking Processes. Big Data Mining and Analytics 6, 361–380. https://doi.org/10.26599/BDMA.2023.9020002



11:30am - 11:50am

Hybrid Modelling for Reaction Network Simulation in Syngas Methanol Production

Harry Kay, Fernando Vega-Ramon, Dongda Zhang

University of Manchester, United Kingdom

Sustainability is a pressing global concern: with continued technological progress and rising standards of living, the demand for energy, fuels, chemicals and other products has increased significantly. Methanol is one such chemical that has seen growing demand due to its importance as a precursor to other widely used chemicals, such as formaldehyde, and its use in the solvent industry. As a result, ample research has been conducted to develop new production pathways and to further improve the efficiency of existing production routes. CO and CO2 hydrogenation has shown promise as a sustainable route to CH3OH owing to its potential for high selectivity and for mitigating environmental issues by recycling CO2 into green fuels and reducing greenhouse gas emissions.

In order to gain insight into the reaction mechanisms driving the process, it is beneficial to develop kinetic models that accurately describe the system for several reasons: (i) to develop understanding of variable relationships; (ii) to facilitate control and optimisation; (iii) to conduct model-based design of experiments (MBDoE) and reduce experimental burdens; and (iv) to expedite scale up and scale down of processes. Two commonly used reaction rate models are the power law and Langmuir-Hinshelwood expressions. The former is popular in industrial contexts because it is simple to implement and to extend with further effects such as mass and heat transfer; however, it suffers from reduced generalisability, as the reaction orders may change significantly under different operating conditions. The latter is commonly used within the field of heterogeneous catalysis as it describes the adsorption of reactants on an ideal catalyst surface. The strong assumptions imposed when developing such kinetic models may limit their predictive performance through the introduction of inductive bias (i.e. model structural uncertainty).

A solution to counter these drawbacks is the introduction of a data-driven component within the kinetic modelling framework, such that any complex, less well understood kinetics can instead be learnt from historical data by a machine learning model. This framework is referred to as hybrid modelling and has shown success in the bioprocessing and biotechnology literature, requiring less data and offering greater interpretability than traditional black-box models. It also removes the need to approximate complex, poorly understood kinetics via strong assumptions. However, much less effort has been made to investigate the advantages of hybrid modelling in chemical reaction engineering applications. Therefore, in order to identify the pros and cons associated with kinetic and hybrid modelling strategies for chemical reaction network modelling, a thorough comparison was made in this work using syngas methanol production as a case study. By constructing different kinetic and hybrid models, it was observed that hybrid models offer clear advantages over kinetic models for prediction and uncertainty estimation and show greater capability to generalise to unseen conditions when trained with limited data, thus indicating their potential for use in the field of chemical reaction kinetics.
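
To make the hybrid (serial) structure concrete, the sketch below keeps a mechanistic mole balance and lets a small neural network stand in for the unknown rate expression; the stoichiometry, integration scheme and toy data are assumptions for illustration, not the models compared in the study.

```python
# Hybrid model sketch: mechanistic mole balance + data-driven rate term (PyTorch).
import torch
import torch.nn as nn

# Learned rate term: maps concentrations to a non-negative reaction rate.
rate_net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())

def true_rate(c):
    # Toy "ground-truth" kinetics used only to generate synthetic measurements.
    return 0.3 * c[:, 0:1] * c[:, 1:2] ** 2

def simulate(rate_fn, c0, n_steps=50, dt=0.1):
    # Explicit-Euler integration of the batch mole balance dC/dt = nu * r(C);
    # the mechanistic balance is retained, only the rate law is data-driven.
    nu = torch.tensor([-1.0, -2.0, 1.0])   # CO + 2 H2 -> CH3OH (assumed stoichiometry)
    c, traj = c0, [c0]
    for _ in range(n_steps):
        c = c + dt * rate_fn(c) * nu
        traj.append(c)
    return torch.stack(traj, dim=1)          # (batch, n_steps+1, n_species)

c0 = torch.tensor([[1.0, 2.0, 0.0]])         # initial concentrations (toy)
c_meas = simulate(true_rate, c0) + 0.01 * torch.randn(1, 51, 3)

optimizer = torch.optim.Adam(rate_net.parameters(), lr=1e-2)
for _ in range(300):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(simulate(rate_net, c0), c_meas)
    loss.backward()
    optimizer.step()
```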



11:50am - 12:10pm

Data-driven joint chance-constrained optimization via copulas: Application to MINLP integrated planning and scheduling

Syu-Ning Johnn1, Hasan Nikkhah2, Meng-Lin Tsai3, Styliana Avraamidou3, Burcu Beykal2, Vassilis Charitopoulos1

1University College London, UK; 2University of Connecticut, USA; 3University of Wisconsin-Madison, USA

In real-world applications, finding feasible solutions to many optimisation problems is inherently difficult due to the lack of exact information, the presence of noisy data distributions, and parameter uncertainties. As a result, data-driven optimisation approaches are increasingly adopted to efficiently explore solution spaces and identify improved outcomes. Chance constraint programming (CCP) is an optimisation approach that ensures stochastic constraints are met with a predetermined probability of satisfaction amongst all possible scenarios (Li et al. 2008; Calfa et al. 2015). Numerous studies have successfully integrated CCP with various optimisation problems (Bianco et al. 2019). Copulas are data-driven coupling functions that capture the dependence structure between multiple univariate marginal distributions under certain correlations. Incorporating copula formulations into CCP makes it possible to better model dependencies between variables under different scenarios when the underlying data exhibit complex distributions or non-trivial dependencies, such as correlated risks or non-linear relationships, thereby improving the accuracy of decision-making in optimisation problems with uncertain parameters. In recent years, the integration of copulas and CCP has shown significant promise (Hosseini et al., 2020; Khezri and Khodayifar, 2023).

In this work, we present a copula-based chance-constrained optimisation framework designed to achieve good efficiency and accuracy in estimating demand levels for integrated planning and scheduling problems. Our approach ensures feasible decision-making within a defined risk threshold. We validated this framework within the context of data-driven optimisation, leveraging the DOMINO framework (Beykal et al., 2020), which is a data-driven grey-box algorithm for addressing generic constrained bilevel optimisation problems. Our experiments demonstrate that the proposed approach is capable of identifying robust solutions that result in higher joint satisfaction rates for products and near-optimal performance, all while significantly reducing computational time compared to exact methods and other simulation-based software. The efficiency and effectiveness of our approach are further validated through a number of case studies across a range of optimisation problems.
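
A minimal sketch of the copula idea underpinning the framework, assuming historical demand data for three products: a Gaussian copula is fitted via rank transforms, correlated demand scenarios are sampled, and the joint satisfaction probability of a candidate plan is estimated empirically. The data, product count and candidate plan are placeholders, and the integration with DOMINO and the MINLP planning/scheduling model is not shown.

```python
# Gaussian-copula scenario sampling for a joint chance constraint (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Historical demands for 3 products (toy data; correlated and non-Gaussian).
hist = np.column_stack([
    rng.gamma(5.0, 10.0, 500),
    rng.lognormal(3.0, 0.4, 500),
    rng.gamma(3.0, 15.0, 500),
])
hist[:, 1] += 0.5 * hist[:, 0]   # induce dependence between products 1 and 2

# 1) Transform each margin to uniform via its empirical CDF, then to standard normal.
u = (stats.rankdata(hist, axis=0) - 0.5) / hist.shape[0]
z = stats.norm.ppf(u)
corr = np.corrcoef(z, rowvar=False)     # copula correlation matrix

# 2) Sample correlated scenarios from the copula and map back through the margins.
n_scen = 5000
z_new = rng.multivariate_normal(np.zeros(3), corr, size=n_scen)
u_new = stats.norm.cdf(z_new)
scenarios = np.column_stack([np.quantile(hist[:, j], u_new[:, j]) for j in range(3)])

# 3) Empirical joint satisfaction of a candidate production plan.
plan = np.array([80.0, 70.0, 60.0])      # candidate production levels (assumed)
joint_ok = np.mean(np.all(scenarios <= plan, axis=1))
print(f"Joint demand-satisfaction probability: {joint_ok:.3f}")  # compare to 1 - epsilon
```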

References

Beykal, B., Avraamidou, S., Pistikopoulos, I. P., Onel, M., & Pistikopoulos, E. N. (2020). Domino: Data-driven optimization of bi-level mixed-integer nonlinear problems. J. Glob. Optim., 78, 1-36.

Bianco, L., Caramia, M., & Giordani, S. (2019). A chance constrained optimization approach for resource unconstrained project scheduling with uncertainty in activity execution intensity. Comput. Ind. Eng., 128, 831-836.

Calfa, B. A., Grossmann, I. E., Agarwal, A., Bury, S. J., & Wassick, J. M. (2015). Data-driven individual and joint chance-constrained optimization via kernel smoothing. Comput. Chem. Eng., 78, 51-69.

Hosseini Nodeh, Z., Babapour Azar, A., Khanjani Shiraz, R., Khodayifar, S., & Pardalos, P. M. (2020). Joint chance constrained shortest path problem with Copula theory. J. Comb. Optim., 40, 110-140.

Khezri, S., & Khodayifar, S. (2023). Joint chance-constrained multi-objective multi-commodity minimum cost network flow problem with copula theory. Comput. Oper. Res., 156, 106260.

Li, P., Arellano-Garcia, H., & Wozny, G. (2008). Chance constrained programming approach to process optimization under uncertainty. Comput. Chem. Eng., 32(1-2), 25-45.



12:10pm - 12:30pm

Integrating Thermodynamic Simulation and Surrogate Modeling to Find Optimal Drive Cycle Strategies for Hydrogen-Powered Trucks

Laura Stops1, Alexander Stary1, Johannes Hamacher1, Daniel Siebe1, Thomas Funke2, Sebastian Rehfeldt1, Harald Klein1

1Technical University of Munich, TUM School of Engineering and Design, Department of Energy and Process Engineering, Institute of Plant and Process Technology, Garching, 85748, Germany; 2Cryomotive GmbH, Grasbrunn, 85630, Germany

Hydrogen-powered heavy-duty trucks have a high potential to significantly reduce CO2 emissions in the transportation sector. Therefore, efficient hydrogen storage onboard vehicles is a key enabler for sustainable transportation as achieving high storage densities and extended driving ranges is essential for the competitiveness of hydrogen-powered trucks. Cryo-compressed hydrogen (CcH2), stored at cryogenic temperatures and high pressures, emerges as a promising solution. To fully exploit this technology, understanding the thermodynamic behavior of CcH2 storage systems is critical for optimizing operational strategies. This study presents a comprehensive thermodynamic model implemented in MATLAB, which is capable of simulating the tank system across all operating conditions and, therefore, enables thermodynamic analysis and optimization of drive cycles.

The considered CcH2 tanks consist of an aluminum liner wrapped in carbon fiber and insulation material. Further, these tanks are equipped with two heat exchangers. The first heat exchanger heats up the hydrogen that is discharged from the tank to power the truck in a fuel cell. The second heat exchanger is used for pressure control by providing heat to the hydrogen stored within the tank, which is essential for maintaining the hydrogen at the required minimum pressure level.

With the applied MATLAB model, the thermodynamic state of the hydrogen in the onboard tank system can be simulated in all typical operating scenarios (discharge, refueling, dormancy) and real-life drive cycles consisting of these base operations. The core of the model is a differential-algebraic equation system that describes the thermodynamic state of hydrogen in the tank. Additionally, surrogate models based on artificial neural networks are applied to efficiently describe the heat exchangers integrated into the tank system. These surrogate models accurately replicate the performance of larger individual component models, allowing for fast and flexible simulation. Their integration into the overall tank system enables advanced process analysis and optimization.
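
A minimal sketch of the surrogate idea for one heat exchanger, assuming input-output data generated from a detailed component model; the chosen inputs, the stand-in `detailed_hx_model`, and the network size are illustrative and do not reflect the authors' MATLAB implementation.

```python
# ANN surrogate for a detailed heat-exchanger model (illustrative sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def detailed_hx_model(T_in, p, mdot):
    # Stand-in for the expensive component model (e.g., a discretized exchanger solve).
    return T_in + 40.0 * np.tanh(0.8 * mdot) - 0.05 * p

# Design of experiments over the expected operating envelope (assumed ranges).
X = np.column_stack([
    rng.uniform(30.0, 90.0, 2000),    # hydrogen inlet temperature
    rng.uniform(50.0, 350.0, 2000),   # pressure
    rng.uniform(0.2, 3.0, 2000),      # mass flow
])
y = detailed_hx_model(X[:, 0], X[:, 1], X[:, 2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)
print("Surrogate R^2 on held-out data:", surrogate.score(X_te, y_te))
```

Once trained, such a surrogate replaces the detailed component model inside the tank-system simulation, which is what makes repeated drive-cycle simulations fast enough for optimization.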

Several use cases are explored to demonstrate the model's ability to simulate the thermodynamic behavior during real-life drive cycles and to find optimal operating strategies. For instance, an optimal refueling stop strategy is determined, i.e. when to stop driving and refuel the tank to maximize the overall driving range. Real-life drive cycles are considered, taking into account the limited availability of refueling stations in early market applications as well as longer periods of dormancy. By analyzing the simulation results, preliminary conclusions regarding optimal operation depending on the desired requirements, such as driving range and loss-free holding time, are drawn. For instance, refueling right before a dormancy period will reduce the loss-free holding time of the cryogenic tank, but will also extend the driving range of the subsequent discharge period.

These results provide valuable insights into how operational strategies can be tailored to maximize driving range, minimize hydrogen losses, and improve overall system efficiency, ultimately supporting the adoption of hydrogen in long-haul transportation.

 
10:30am - 12:30pmT2: Sustainable Product Development and Process Design - Session 1
Location: Zone 3 - Aula D002
Chair: Sujit Jogwar
Co-chair: Zdravko Kravanja
 
10:30am - 10:50am

Design and Planning of the Green Hydrogen Supply Chain (GHSC), considering resource availability and evolving hydrogen demand over time: The Portuguese Industrial Sector

Catarina Mansilha1, Ana Barbosa-Povoa2, Luís Tarelho3, André Fonseca4

1Centre for Management Studies (CEG-IST), Instituto Superior Técnico, University of Lisbon; 2Centre for Management Studies (CEG-IST), Instituto Superior Técnico, University of Lisbon; 3Department of Environment and Planning, Centre for Environmental and Marine Studies (CESAM), University of Aveiro; 4Industrial Innovation Centre of Galp, Lisbon

In recent years, sustained and coordinated strategic actions have been established to drastically reduce greenhouse gas (GHG) emissions and to accomplish the emission targets and climate change goals established by the European Union (EU) and the United Nations (UN). The European Commission has introduced the "Fit for 55" package, a comprehensive set of legislative proposals aimed at achieving carbon neutrality by 2050, with an interim target of at least a 55% reduction in GHG emissions by 2030 (European Commission, 2021). In this context, green hydrogen, produced from renewable energy sources (RES), has emerged as an energy carrier and as an alternative fuel and feedstock that could help decarbonize various sectors.

Among the EU member states, Portugal has unveiled its national energy and climate plans, emphasizing its national hydrogen strategies. Due to its abundant renewable energy sources (i.e. weather and climate conditions), Portugal holds a strategic position in the European green hydrogen landscape. In line with these plans, Portugal has also adopted its National Strategy for Hydrogen (EN-H2), aiming to position the country as a major player in the global hydrogen industry. EN-H2's macro-objectives for 2030 include deploying 2% to 5% of green hydrogen in the industrial energy consumption sector (Presidência do Conselho de Ministros, 2020).

Currently, successful integration of green hydrogen into the industrial and energy sectors hinges on cost reduction and optimized infrastructure development. In all types of renewable and sustainable energy systems, the development of a competitive market requires complex design, planning, and optimization methods. To support the development of the green H2 economy and overcome significant hurdles, such as the current lack of infrastructure and the capital investments required for its expansion, it is necessary to effectively design and plan the green hydrogen supply chain (GHSC), considering its evolving demand.

The concept of GHSC design is used to study the implementation of hydrogen infrastructures. The GHSC superstructure is a network representation that includes alternatives for feedstock, production, storage, and transport technologies, and requires a deep understanding of the system network, the trade-offs between technologies and the availability of resources. Mathematical modelling approaches, using mixed integer programming (MIP) formulations, have been extensively used to model the GHSC. Critical parameters that may significantly influence this superstructure, such as water and RES availability and their evolution over long time horizons, should be incorporated into these models. Resource availability plays a significant role in the supply chain because hydrogen production depends on regionally specific resource characteristics.

This work addresses these challenges by developing a multi-period MILP model for the GHSC design and planning. The objective is to minimize the total network costs, while ensuring that industrial demand is met over time. The developed model considers long-term availability and uncertainty for RES and water, considering evolving hydrogen demand. Diverse scenarios encompass different storage and transportation modes, considering market evolution, economies of scale and penetration rates. The model is applied to the Portuguese Industrial Case, supporting the decision-making process and contributing to a practical, cost-effective and sustainable GHSC.



10:50am - 11:10am

Techno-Economic and Environmental Analysis of Biomethane Production from Sewage Sludge in Hydrothermal Gasification Process

Soline Corre, Meire Ellen Ribeiro Domingos, Daniel Florez Orrego, François Maréchal

EPFL, Switzerland

The conventional disposal methods for sewage sludge, which typically involve concentration and incineration, are compared to hydrothermal gasification (HTG) and anaerobic digestion (AD) coupled with syngas upgrading for methane production. Each process route has been modeled using Aspen Plus V11 software to integrate mass and energy balances, while the MILP Osmose software is employed for energy integration and multi-objective optimization, considering environmental impacts, minimum energy requirements, CAPEX, and OPEX. For HTG operating at 500°C with heat integration via a transcritical CO2 power cycle and a carbon conversion rate of 80%, the system achieves an exergy efficiency of 50.4%. Various carbon management strategies are also explored, including periodic storage, co-electrolysis for synthetic natural gas production, and carbon sequestration via serpentine mineralization. Given the biogenic origin of the carbon in the sludge, these strategies can potentially support net-zero emissions goals. From an economic standpoint, even with a carbon tax of 166 CHF/t, direct CO2 emissions remain more cost-effective than sequestration (via mineralization, which is 24% more expensive) or upgrading (via co-electrolysis, which is 20% more expensive). However, when optimizing for environmental impact, particularly in minimizing greenhouse gas emissions, CO2 mineralization is the preferred configuration. In this context, a Pareto curve was generated to illustrate the optimal configurations balancing costs and impacts. Future work will focus on assessing the integration of these systems within a broader energy framework.



11:10am - 11:30am

System scale design and mesoscale modeling for natural gas dehydration process

Zhehao Jin1, Zhongde Dai2, Yiyang Dai1

1School of Chemical Engineering, Sichuan University, Chengdu 610065, PR China; 2School of Carbon Neutrality Future Technology, Sichuan University, Chengdu 610065, PR China

Triethylene glycol (TEG) or mono-ethylene glycol (MEG) absorption are the commercial technologies for natural gas dehydration. Nevertheless, the need to regenerate the solvents at high temperature results in a significant environmental footprint and complex operation. Membrane technology, with its small footprint and feasibility of operation under hostile conditions, is considered a promising alternative for natural gas dehydration. In this work, system-scale design and mesoscale modelling are adopted synchronously to optimize the natural gas dehydration process design. Aspen HYSYS with the MemCal extension is used to simulate the natural gas dehydration process. Taking pressure ratio, stage cut, and the number of stages as decision variables, the total annual cost (TAC) is minimized using the NSGA-II algorithm. A minimum specific cost of < 3.15×10-3 $/m3 natural gas is estimated while achieving the separation requirement of <120 ppm. Then, the module length and membrane thickness of the hollow-fiber membrane design are investigated using computational fluid dynamics (CFD), which further refines the simulation results. The combined system-scale engineering design and mesoscale modelling provide an in-depth insight into the natural gas dehydration process.
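
The sketch below illustrates the system-scale optimization step with pymoo's NSGA-II over pressure ratio, stage cut and stage count; the TAC and water-content expressions are crude placeholders for the HYSYS/MemCal flowsheet, and the 120 ppm specification is applied by filtering the resulting front.

```python
# NSGA-II sketch (pymoo) for the membrane dehydration design variables.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize


class DehydrationDesign(ElementwiseProblem):
    def __init__(self):
        # x = [pressure ratio, stage cut, number of stages]
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([2.0, 0.01, 1.0]),
                         xu=np.array([20.0, 0.30, 3.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        pr, cut, n_stage = x[0], x[1], int(round(x[2]))
        # Placeholder performance model: more stages / higher pressure ratio dry the
        # gas further but raise compression and membrane-area costs.
        water_ppm = 2000.0 * np.exp(-0.2 * pr * n_stage - 5.0 * cut)
        tac = 1.0e-3 * (0.4 * pr + 50.0 * cut + 0.6 * n_stage)   # specific cost, toy
        out["F"] = [tac, water_ppm]


res = minimize(DehydrationDesign(), NSGA2(pop_size=60), ("n_gen", 80),
               seed=1, verbose=False)
# Keep only designs meeting the <120 ppm water specification.
feasible = res.F[res.F[:, 1] <= 120.0]
print("Non-dominated feasible designs (TAC, ppm):\n", feasible[:5])
```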



11:30am - 11:50am

Superstructure optimization of chemical reaction networks uncovers synergies between chemical processes

Dion Jakobs, Lucas F. Santos, Gonzalo Guillén-Gosálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Switzerland

To achieve the chemical industry's transition to net-zero emissions and overall sustainability, the next generation of chemical processes must be both environmentally and economically viable. However, many emerging sustainable processes remain economically unfeasible, often due to technical challenges such as low selectivity or yields, resource competition, or high energy demands [1]. In today's linear economy, where processes typically operate in isolation, potentially crucial synergies between chemical processes that could boost their economic and environmental performance remain untapped. The integration of multiple chemical processes, sometimes referred to as chemical clustering, has been explored in several studies, which showed that the economic and environmental performance of the cluster often outperforms that of the individual processes [2]. However, these studies predominantly focus on mature, high-TRL (Technology Readiness Level) technologies and are largely limited to the production of high-production-volume products, such as methanol, olefins, or fuels. Furthermore, these investigations do not systematically explore and identify candidates with significant synergies, but rather select candidates manually via heuristics or prior knowledge. As a result, there is a significant gap in the literature concerning the identification of synergies between chemical processes involving low- to mid-TRL processes, and the impact of process integration on economic and environmental feasibility remains underexplored.

In this work, we explore the rigorous optimization and integration of chemical clusters by developing a methodology that systematically identifies synergies and selects high-, mid-, and low-TRL chemical reactions using superstructure optimization. A chemical reaction network (CRN), generated by querying reactions from Reaxys, serves as a “low fidelity” proxy for fully developed chemical processes. The CRN is represented as a directed bipartite graph that connects product and reactant chemicals to model possible mass and energy exchanges between chemical processes within the chemical industry. Optimization and property estimation techniques are employed to address critical data gaps for heat and mass integration — such as reaction stoichiometry, process energy requirements, and key performance indicators related to Green Chemistry, Life Cycle Assessment, and Techno-Economic Analysis. We formulate a multi-objective superstructure optimization problem that maximizes both economic and environmental objectives to select and integrate both the mass and energy flows of candidate chemical reactions. This optimization produces a Pareto front of non-dominated chemical clusters with high synergy, which are further analyzed to uncover additional insights, such as the identification of key reactions that enable efficient clustering. Overall, this work highlights the significant potential of process integration, particularly through the inclusion of low- and mid-TRL technologies, to pave the way for the next generation of sustainable and economically viable chemical processes.

References

(1) Gwehenberger, G.; Narodoslawsky, M. Sustainable Processes—The Challenge of the 21st Century for Chemical Engineering. Process Safety and Environmental Protection 2008, 86 (5), 321–327. https://doi.org/10.1016/j.psep.2008.03.004.

(2) Demirhan, C. D.; Tso, W. W.; Powell, J. B.; Pistikopoulos, E. N. A Multi-Scale Energy Systems Engineering Approach towards Integrated Multi-Product Network Optimization. Applied Energy 2021, 281, 116020. https://doi.org/10.1016/j.apenergy.2020.116020.



11:50am - 12:10pm

Conceptual design of energy storage systems for continuous operations in renewable-powered chemical processes

Andrea Isella1, Alfonso Pascarella1, Angelo Matichecchia2, Raffaele Ostuni2, Davide Manca1

1Politecnico di Milano, Italy; 2Casale SA, Switzerland

This work aims to develop an energy storage system that allows fluctuating energy inputs (i.e. from process sections driven by renewable sources) to power two process units that are operated continuously at different temperatures. The system consists of two vessels storing diathermal media: one for the hotter and the other for the colder energy fluxes. The investigated solutions include sensible-heat, latent-heat, and thermochemical TES (Thermal Energy Storage). Organic Rankine Cycles with lithium-ion batteries and thermoelectric generators were also assessed. Indeed, all these technologies allow low-temperature thermal energy to be exploited to supply the high-temperature unit during periods of energy scarcity. Both vessels aim for total self-sufficiency; however, the option to rely on external utilities has been included to meet the energy demand of both units when insufficient process-side power is available. To assess the performance of the proposed storage systems, two energy profiles were investigated: one with high energy inputs (i.e. an optimistic scenario) and the other with low energy inputs (i.e. a pessimistic scenario). Finally, an optimization problem was formulated to estimate the optimal size of both storage vessels, since increasing their capacity leads to higher capital expenses (CAPEX) but lower overall operating expenses (OPEX). The investigated storage systems exhibited significant cost reductions when compared to non-integrated solutions (i.e. those relying on external utilities only).

 
10:30am - 12:30pmT3: Large Scale Design and Planning/Scheduling - Including keynote
Location: Zone 3 - Room E030
Co-chair: Marianne Boix
 
10:30am - 11:10am

Keynote: Joint Optimization of Fair Facility Allocation and Robust Inventory Management for Perishable Consumer Products

Saba Ghasemi Naraghi, Zheyu Jiang

Oklahoma State University, United States of America

Perishable consumer products, such as food, cosmetics, and household chemicals (e.g., pesticides and herbicides), present unique challenges to their supply chain management due to their limited shelf life and uncertainties in demand and transportation. For instance, every year in the U.S., 38% of all food goes unsold or uneaten, which translates to almost 145 billion meals' worth of food, or roughly 1.8% of U.S. GDP. Inefficient warehousing and poor logistics are major factors contributing to product waste. Thus, in this work, we propose a robust optimization framework to jointly optimize facility allocation and inventory management for perishable products. The goal is to determine the optimal locations of product distribution centers, the allocation of customers, and the inventory policies that minimize the total costs associated with the life cycle involving food transportation, distribution, and storage, subject to uncertain demand conditions.

Specifically, we propose a two-stage mixed-integer linear programming (MILP) model that explicitly accounts for product perishability by enforcing a First-In-First-Out (FIFO) inventory policy, which reduces spoilage and ensures freshness upon delivery. To improve computational efficiency, we linearize the bilinear FIFO constraints and show that the linearization is exact. To ensure social equity and fairness in facility allocation, we define a fairness index (FI) and incorporate it in the optimization framework. Furthermore, we propose a robust optimization approach to address demand uncertainty by using affine demand functions to account for a range of potential scenarios and ensure resilience in the supply chain. To efficiently solve this joint optimization problem, we utilize a row and column generation technique within the robust optimization framework. This method enhances scalability and allows for efficient handling of large-scale problem instances, ensuring optimal or near-optimal solutions.

Overall, this robust optimization framework provides a comprehensive solution to the challenges of facility allocation and inventory management associated with perishable products, thereby reducing waste and carbon footprint and enhancing the supply chain’s robustness to uncertainty.



11:10am - 11:30am

Integrating Time-Varying Environmental Indicators into an Energy Systems Modeling and Optimization Framework for Enhanced Sustainability

Marco Pedro De Sousa1,2, Rahul Kakodkar1,2, Betsie Montano Flores1,2, Saatvi Suresh1,2, Harsh Birenkumar Shah1,2, Dustin Kenefake1,2, Iosif Pappas3, Xiao Fu3, C. Doga Demirhan4, Brianna Ruggiero4, Mete Mutlu4, Efstratios N. Pistikopoulos1,2

1Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, TX, USA; 2Texas A&M Energy Institute, Texas A&M University, College Station, TX, USA; 3Shell Global Solutions International B.V., Amsterdam, Netherlands; 4Shell Global Solutions International B.V., Houston, TX, USA

Data-driven decision-making is crucial in the transition to a low-carbon economy, especially as global industries strive to meet stringent sustainability goals. Traditional life cycle assessments (LCAs) often rely on static emission factors, overlooking the dynamic nature of the energy grid [1]. As renewable energy penetration increases, grid carbon intensity fluctuates significantly across time and regions due to the inherent intermittency of renewable sources like wind and solar. This variability introduces discrepancies in emission estimations if time-averaged factors are applied, leading to sub-optimal process designs and unintended environmental consequences.

To this end, we present a real-time emission-aware optimization framework, implemented through a mixed-integer linear programming (MILP) formulation, to determine optimal design configurations and operation schedules while simultaneously mitigating emissions by utilizing electricity price forecasts, time-varying emission factors, sporadic weather data, and supply and demand variability. Furthermore, we emphasize the critical role of battery storage in mitigating the intermittency of renewables, enabling energy from periods with a cleaner grid mix to be stored and dispatched throughout the day. The optimization model integrates life cycle assessment criteria to evaluate various environmental inputs, categorized into direct on-site emissions (Scope 1), indirect emissions from energy use (Scope 2), and upstream, downstream, and construction-related emissions associated with the process system (Scope 3) [2, 3]. The framework, demonstrated through a detailed hydrogen production case study, yields a set of optimal solutions that balance the trade-offs between environmentally sustainable and economically competitive designs and operational strategies, all while complying with stringent carbon reduction targets.
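
A minimal sketch of the emission-aware scheduling idea, assuming hourly demand and grid emission-factor profiles and a fixed battery: a small linear program (cvxpy) shifts grid purchases towards low-carbon-intensity hours via the battery. The full framework's MILP design decisions, price forecasts and Scope 1/3 terms are omitted here.

```python
# Emission-aware hourly dispatch sketch: meet demand from grid plus battery while
# minimizing Scope 2 emissions computed with time-varying emission factors.
import numpy as np
import cvxpy as cp

T = 24
demand = 5.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, T))          # MWh per hour (toy)
ef = 0.5 - 0.3 * np.clip(np.sin(np.linspace(0, np.pi, T)), 0, 1)   # tCO2/MWh, low at midday

grid = cp.Variable(T, nonneg=True)       # electricity purchased each hour
charge = cp.Variable(T, nonneg=True)     # battery charging
discharge = cp.Variable(T, nonneg=True)  # battery discharging
soc = cp.Variable(T + 1, nonneg=True)    # state of charge

eta, cap, p_max = 0.95, 12.0, 4.0        # efficiency, capacity, power limit (assumed)

constraints = [soc[0] == 0.5 * cap, soc <= cap,
               charge <= p_max, discharge <= p_max]
for t in range(T):
    constraints += [
        grid[t] + discharge[t] == demand[t] + charge[t],          # hourly energy balance
        soc[t + 1] == soc[t] + eta * charge[t] - discharge[t],    # storage dynamics
    ]

emissions = cp.sum(cp.multiply(ef, grid))      # Scope 2 with hourly emission factors
cp.Problem(cp.Minimize(emissions), constraints).solve()
print(f"Total Scope 2 emissions: {emissions.value:.2f} tCO2")
```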

References

[1] G. J. Miller, K. Novan, A. Jenn, Hourly accounting of carbon emissions from electricity consumption, Environmental Research Letters 17 (4) (2022) 044073. https://doi.org/10.1088/1748-9326/ac6147
[2] A. Hugo, E. Pistikopoulos, Environmental conscious long-range planning and design of supply chain networks, Journal of Cleaner Production 13 (2005) 1471–1491. doi:10.1016/j.jclepro.2005.04.011.
[3] E. G. Hertwich, R. Wood, The growing importance of scope 3 greenhouse gas emissions from industry, Environmental Research Letters 13 (10) (2018) 104013.



11:30am - 11:50am

Integrating Carbon Value Vectors in the Energy and Materials Transition Nexus: A Case Study on Mobility Optimization

Betsie Sara Monserrat Montano Flores1,2, Rahul Kakodkar1,2, Marco Pedro De Sousa1,2, Shayan Sean Niknezhad1,2, Efstratios N. Pistikopoulos1,2

1Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, TX, USA; 2Texas A&M Energy Institute, Texas A&M University, College Station, TX, USA

The ongoing energy transition involves decarbonization across many sectors. Amongst these, the transportation sector contributes significantly owing to its reliance on traditional fossil fuels as feedstock. Attaining decarbonization goals requires the adoption of novel sustainable technologies such as electric vehicles (EVs) and hydrogen fuel cell vehicles (HFCVs), amongst others [1]. The feedstock transition towards electricity and dense energy carriers is challenged by the requirement for additional infrastructure to manage intermittency, power generation, and grid expansion, which requires both materials and capital investment [2]. By evaluating and redirecting the role of carbon value vectors from fossil fuel production towards the production of polymeric materials to empower the energy transition [3], we can optimize resource allocation and maintain economic viability, all while reducing environmental impact.

To achieve this, we propose modeling the nexus between energy and materials within a circular economy. The multiscale modeling and optimization framework utilizes the resource-task-network (RTN) methodology and a life cycle assessment (LCA) approach, integrating simultaneous design and scheduling, considering future material demand, availability, and production capacities. The framework’s capabilities are demonstrated through a case study on the transition from gasoline-fueled vehicles to EVs, analyzing 1) the role of carbon value vectors in resources and materials production, and 2) electricity generation, storage, and dispatch using intermittent renewables. The study reveals the interactions between energy, material, and mobility value chains and provides configurations where such synergies can be exploited. Moreover, the sensitivity to considered parameters as well as the trade-offs between objectives are highlighted.

Keywords— Energy transition, Material transition, Carbon value vectors

Topics: T2: Sustainable Product Development and Process Design; T3: Large Scale Design and Planning/Scheduling

References

[1] H. C. Lau, S. Ramakrishna, K. Zhang, A. V. Radhamani, The role of carbon capture and storage in the energy transition, Energy & Fuels 35 (9) (2021) 7364–7386.

[2] J. Holechek, H. Geli, M. Sawalhah, R. Valdez, A global assessment: can renewable energy replace fossil fuels by 2050?, Sustainability 14 (2022) 4792.

[3] R. Kakodkar, B. Flores, M. Sousa, Y. Lin, E. Pistikopoulos, Towards energy and material transition integration-a systematic multi-scale modeling and optimization framework, 2024, pp. 461–468. doi:10.69997/sct.171988.



11:50am - 12:10pm

Flow Simulation of Plastic Life Cycle Considering Carbon Renewability and Environmental Impact

Kota Chida1, Heng Yi Teah2, Yuichiro Kanematsu2, Yasunori Kikuchi1,2,3

1Department of Chemical System Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; 2Presidential Endowed Chair for “Platinum Society”, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; 3Institute for Future Initiatives, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan

Biomass and waste-derived plastics are being promoted as sustainable carbon-based materials to replace petroleum-based plastics. However, their environmental advantages are not guaranteed because of the uncertain environmental impact in the biomass supply processes (Lam et al., 2019). In addition, numerous processes in the resin production and recycling are under development, or only operating at small scales; the lack of knowledge organization leads to incomplete understanding among stakeholders. Therefore, it is difficult to conduct a comprehensive assessment and system design of the plastic life cycle. It is essential to examine the ‘true’ societal ripple effects of biomass-based plastics in the context of sustainable development while discussing optimal combinations of resources and technologies.

This study aims to elucidate the carbon flow within the life cycle of biomass- and recycled-material-derived plastics, and to assess the renewability of carbon sources and their environmental impacts. Furthermore, it discusses the potential for biomass and waste as carbon sources and their appropriate introduction pathways. The research progresses through the development of assessment methods, including the metrics and system boundaries, model construction, flow analysis, interpretation of results, and validation of findings. Greenhouse gas (GHG) emissions were chosen as the indicator for environmental impact assessment, while a novel Carbon Circularity Indicator (CCI) was developed to assess the renewability of carbon sources. The CCI extends the Material Circularity Indicator (Ellen MacArthur Foundation, 2019), which measures the recyclability of petroleum-based plastics, to cover the entire life cycle, including the raw material supply stages. Technologies related to plastics were gathered based on keywords extracted through bibliometric analysis. A superstructure was visualized to represent the carbon flow in the life cycle of plastics. A carbon flow analysis method was developed using this superstructure as the system boundary. The inventory data and GHG emissions for each process necessary for the flow analysis were primarily based on existing research and literature, supplemented by process simulations where needed.

Our preliminary assessment of the superstructure and system highlighted that the selection of raw materials, resin applications, and recycling technologies are the hotspots in life cycle design in terms of environmental impact and carbon renewability. The flow analysis revealed the impact of introducing biomass and waste as carbon sources on the overall plastic life cycle considering constraints, such as the availability of biomass feedstocks and the capacity for recycling. We also identified the processes, resins, and applications that should be prioritized for substitution with biomass-derived plastics.

To enhance the interpretability of the model, we address uncertainties in the biomass cultivation process, including regional boundary differences. In addition, we are working on the model towards incorporating variations in end-of-life flows based on different applications, and scenario analysis incorporating temporal factors, such as future technological developments.

W. Y. Lam, M. Kulak, S. Sim, H. King, M. A. J. Huijbregts, R. Chaplin-Kramer, 2019, Greenhouse gas footprints of palm oil production in Indonesia over space and time, Sci. Total Environ., 688, 827-837
Ellen MacArthur Foundation, 2019, Circularity Indicators -An Approach to Measuring Circularity-

 
10:30am - 12:30pmT4: Model Based optimisation and advanced Control - Session 1
Location: Zone 3 - Room E031
Chair: Antonio Espuña
Co-chair: Miroslav Fikar
 
10:30am - 10:50am

Efficient approximation of the Koopman operator for large-scale nonlinear systems

Gajanand Verma1, William Heath2, Constantinos Theodoropoulos1

1University of Manchester, United Kingdom; 2University of Bangor, United Kingdom

Implementing Model Predictive Control (MPC) for large-scale nonlinear systems is often computationally challenging due to the intensive online optimization required. To address this, various reduced-order linearization techniques have been developed [1]. The Koopman operator linearizes a nonlinear system by mapping it into an infinite-dimensional space of observables [2], enabling the application of linear control strategies [3]. While Artificial Neural Networks (ANNs) can approximate the Koopman operator in a data-driven manner [4, 5], training these networks becomes computationally intensive for high-dimensional systems, as lifting into a higher-dimensional observable space significantly increases data size and complexity. In this work, we propose a technique combining Proper Orthogonal Decomposition (POD) with an efficient modified ANN structure to reduce ANN training time for large-scale systems. By first applying POD, we obtain a low-order projection of the system. Subsequently, we train the modified ANNs to approximate the Koopman operator, significantly decreasing training time without sacrificing accuracy. The methodology is demonstrated through an illustrative large-scale chemical engineering case study.
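
For orientation, the sketch below shows the POD projection followed by a plain EDMD least-squares fit of the Koopman matrix on polynomial observables; the modified ANN lifting proposed in the abstract is deliberately replaced here by this simpler lifting, and the snapshot data come from a toy system.

```python
# POD followed by a simple EDMD approximation of the Koopman operator (numpy).
import numpy as np

rng = np.random.default_rng(0)

# Snapshot data from a high-dimensional nonlinear system (toy surrogate).
n_state, n_snap = 200, 500
mix = rng.standard_normal((n_state, 2))
z = np.zeros((2, n_snap))
z[:, 0] = [1.0, 0.5]
for k in range(n_snap - 1):                     # hidden 2-D nonlinear dynamics
    z[0, k + 1] = 0.9 * z[0, k]
    z[1, k + 1] = 0.8 * z[1, k] + 0.2 * z[0, k] ** 2
X = mix @ z + 0.01 * rng.standard_normal((n_state, n_snap))

# 1) POD: project snapshots onto the leading r left singular vectors.
r = 2
U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]
Y = Phi.T @ X                                   # reduced states, shape (r, n_snap)

# 2) Lift reduced states with polynomial observables (stand-in for the ANN lifting).
def lift(y):
    return np.vstack([y, y[0:1] ** 2, y[1:2] ** 2, y[0:1] * y[1:2]])

G, Gp = lift(Y[:, :-1]), lift(Y[:, 1:])
K = Gp @ np.linalg.pinv(G)                      # least-squares Koopman matrix

# One-step prediction in observable space, mapped back to the full state.
y_pred = (K @ lift(Y[:, :1]))[:r]
x_pred = Phi @ y_pred
print("One-step state prediction error:", np.linalg.norm(x_pred[:, 0] - X[:, 1]))
```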

Keywords: model predictive control, data-driven methodology, artificial neural networks

References

  1. Theodoropoulos, C. 2010. ‘Optimisation and Linear Control of Large Scale Nonlinear Systems: A Review and a Suite of Model Reduction-Based Techniques’. Coping with Complexity: Model Reduction and Data Analysis, 37–61.
  2. Koopman, Bernard O. 1931. ‘Hamiltonian Systems and Transformation in Hilbert Space’. Proceedings of the National Academy of Sciences 17 (5): 315–18.
  3. Korda, M, and I Mezić. 2018. ‘Linear Predictors for Nonlinear Dynamical Systems: Koopman Operator Meets Model Predictive Control’. Automatica 93:149–60.
  4. Wang, M., X. Lou, W. Wu, and B. Cui. 2022. ‘Koopman-Based MPC with Learned Dynamics: Hierarchical Neural Network Approach’. IEEE Transactions on Neural Networks and Learning Systems.
  5. Verma G, Heath W, Theodoropoulos C. Robust stability analysis of Koopman based MPC system. In Computer Aided Chemical Engineering 2024 Jan 1 (Vol. 53, pp. 1927-1932). Elsevier.


10:50am - 11:10am

Enhancing Consumer Engagement in Plastic Waste Reduction: A Stackelberg Game Approach

Chunyan Si1, Yee Van Fan1,2, Monika Dokl3, Lidija Čuček3, Zdravko Kravanja3, Petar Sabev Varbanov4

1Brno University of Technology, Czech Republic; 2University of Oxford, United Kingdom; 3University of Maribor, Slovenia; 4Széchenyi István University, Hungary

Recycling plastic waste is considered one of the most effective strategies to promote the circular economy, but its uptake needs to be further improved. Consumer engagement is one of the most influential factors and, as such, is one of the targets of government incentive measures to promote effective recycling of plastic waste. In this study, a Stackelberg game approach is used to investigate how government incentive mechanisms (leaders) can influence the recycling behaviour of consumers (followers). Various incentives are evaluated, including economic, policy and educational measures, to ensure lasting consumer participation. Three scenarios are proposed based on different incentive combinations aligned with circular strategies, namely narrow (use less), slow (use longer) and close (reuse), and are evaluated with the aim of measuring the optimal benefit for both participants, i.e. minimising government costs, such as recycling subsidies, while maximising recycling rates and consumer gains. The comparison of the scenarios highlights possible ways of combining different incentives to maximise consumer engagement. Future work could integrate Internet-of-Things technology to facilitate dynamic strategy optimisation.
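
A minimal leader-follower sketch of the Stackelberg setting, solved by enumeration: the government picks a recycling subsidy, consumers best-respond with a recycling effort, and the leader's payoff trades off diverted waste against subsidy cost. All utility functions and parameters are illustrative assumptions, not the calibrated incentives of the study.

```python
# Stackelberg game sketch by enumeration (numpy): leader = government, followers = consumers.
import numpy as np

efforts = np.linspace(0.0, 1.0, 201)       # consumer recycling rate (0-100%)
subsidies = np.linspace(0.0, 2.0, 101)     # subsidy per unit recycled

def consumer_utility(effort, subsidy):
    benefit = (subsidy + 0.6) * effort      # monetary plus intrinsic benefit (assumed)
    inconvenience = 1.5 * effort ** 2       # convex cost of sorting/returning (assumed)
    return benefit - inconvenience

def government_payoff(effort, subsidy):
    env_value = 3.0 * effort                # value of waste diverted from landfill
    cost = subsidy * effort                 # subsidy expenditure
    return env_value - cost

best = None
for s in subsidies:
    # Follower best response for this subsidy level.
    e_star = efforts[np.argmax(consumer_utility(efforts, s))]
    payoff = government_payoff(e_star, s)
    if best is None or payoff > best[0]:
        best = (payoff, s, e_star)

print(f"Leader payoff {best[0]:.2f} at subsidy {best[1]:.2f}, "
      f"recycling rate {best[2]:.2f}")
```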



11:10am - 11:30am

A decomposition approach to feasibility for decentralized operation of multi-stage processes

Ekundayo Olorunshe, Nilay Shah, Benoit Chachuat, Max Mowbray

Imperial College London, United Kingdom

Certifying feasibility in decision-making is critical in the process industries and can be framed as a constraint satisfaction problem (CSP). Feasible decisions are often identified through mathematical programming, in which case a single feasible decision is returned that minimizes a posed objective. In this research, we consider algorithmic approaches to identify a set of potential feasible decisions. This has been well explored through domain reduction methods in mathematical programming [1], flexibility analysis as proposed in [2], and design space identification as explored widely within the pharmaceutical industry [3].

Specific focus is directed towards a CSP where the task is to find parameter values from a continuous domain that satisfy constraints defined on a directed acyclic graph of constituent functions. This problem setting may be used to describe the operation of a network of process unit operations without recycle streams. In this case, the parameters one would like to identify may represent set-points for the unit operations in the network. We additionally impose the condition that the identification of the feasible space must enable decentralised operation of each of the constituent units. In practice, this means that any given unit set-point must remain feasible regardless of the set-points selected for the other units.

We assume that the constituent unit operations are described by general input-output functions that may not be available in closed form and could result from expensive simulations. The solution approach assumes one is only able to evaluate the input-output behaviour of the functions and the associated constraints. This lends itself to the use of sampling methods to gain an inner approximation of the feasible region. However, sampling faces challenges due to the curse of dimensionality. To address this, a decomposition approach is introduced, leveraging the network structure to break the problem into unit-wise subproblems of reduced dimension. A data-driven tuning step is introduced to maximise the volume of the identified feasible region. The methodology is demonstrated through a two-unit batch reactor network. Future research will extend this approach to robustly account for uncertain parameters in the constituent models.

[1] Puranik, Y., & Sahinidis, N. V. (2017). Domain reduction techniques for global NLP and MINLP optimization. Constraints, 22(3), 338-376.

[2] Swaney, R. E., & Grossmann, I. E. (1985). An index for operational flexibility in chemical process design. Part I: Formulation and theory. AIChE Journal, 31(4), 621-630.

[3] Kusumo, K. P., Gomoescu, L., Paulen, R., García Muñoz, S., Pantelides, C. C., Shah, N., & Chachuat, B. (2019). Bayesian approach to probabilistic design space characterization: A nested sampling strategy. Industrial & Engineering Chemistry Research, 59(6), 2396-2408.



11:30am - 11:50am

Safe Bayesian Optimization in Process Systems Engineering

Donggyu Lee, Ehecatl Antonio del Rio-Chanona

Imperial College London, United Kingdom

Bayesian Optimization (BO) has demonstrated significant promise in enhancing data-driven optimization strategies across various fields. However, both the machine learning and process systems engineering communities face similar challenges when applying BO in safety-critical settings, where model discrepancies, noisy measurements, and stringent safety constraints are prevalent. This has led to the emergence of Safe BO, designed to operate effectively under these constraints. Despite these advancements, there remains a limited comparative understanding of the effectiveness and applicability of these safe BO methods, particularly within process systems engineering. Thus, this work provides a comprehensive examination of state-of-the-art safe BO methods, with our own enhancements, focusing on their performance in process systems.

While safe BO methods have been developed to address limitations of traditional BO methods, they still face significant challenges when applied to process systems. For instance, SafeOpt [1] and GoOSE [2], which perform excellently in ML applications, rely on discretization of the system, suffer from heavy computational expense, and, together with StableOpt [3,4], lack the capability to manage multiple safety constraints. To address these challenges, we introduced modifications that enable optimization over continuous domains, handle multiple constraints, and reduce computational costs, thereby enhancing their practical applicability.

Our study rigorously assesses these safe BO algorithms, including our own enhancements, on a reactor system from the process systems engineering literature, focusing on convergence speed, mitigation of unknown constraints, practicality, and robustness against adversarial perturbations. We found that the performance of SafeOpt is often hindered by inefficiency due to excessive exploration, while GoOSE mitigates this inefficiency by incorporating an oracle to selectively expand the safe set, thus minimizing unnecessary evaluations. The integration of a conventional trust region with BO [5] demonstrates high performance, though its effectiveness is highly sensitive to the initial choice of trust-region parameters. StableOpt, while guaranteeing feasible solutions under adversarial perturbations, often yields suboptimal solutions due to its focus on worst-case scenarios.

Overall, this study highlights the strengths and limitations of safe BO methods in process systems engineering, advancing the field of data-driven approaches for decision-making in safety-critical processes while also identifying areas where further improvements are necessary.
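
As a point of reference for the class of methods compared, the sketch below implements a SafeOpt-flavoured loop on a toy 1-D problem: independent GPs model the objective and a safety constraint, and candidates are admitted only if the constraint's lower confidence bound stays above the safety threshold. It is a simplified illustration, not any of the cited implementations or the authors' enhancements.

```python
# SafeOpt-flavoured safe BO loop (sketch) with scikit-learn Gaussian processes.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):      # e.g. reactor yield (to maximize); toy function
    return np.sin(3 * x) + 0.5 * x

def safety(x):         # e.g. temperature margin; must stay >= 0; toy function
    return 1.0 - 3.0 * (x - 0.4) ** 2

X_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X = np.array([[0.3], [0.45]])                       # initial safe seed points
beta = 2.0                                          # confidence multiplier (assumed)

for _ in range(15):
    y_obj, y_saf = objective(X.ravel()), safety(X.ravel())
    gp_obj = GaussianProcessRegressor(RBF(0.2), alpha=1e-4).fit(X, y_obj)
    gp_saf = GaussianProcessRegressor(RBF(0.2), alpha=1e-4).fit(X, y_saf)

    mu_s, sd_s = gp_saf.predict(X_cand, return_std=True)
    safe = mu_s - beta * sd_s >= 0.0                # conservative safe set
    if not safe.any():
        break
    mu_o, sd_o = gp_obj.predict(X_cand, return_std=True)
    ucb = np.where(safe, mu_o + beta * sd_o, -np.inf)
    x_next = X_cand[np.argmax(ucb)].reshape(1, -1)  # explore/exploit within safe set
    X = np.vstack([X, x_next])

best = X[np.argmax(objective(X.ravel()))]
print("Best safe operating point found:", best)
```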

References

[1] Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with Gaussian processes. In International Conference on Machine Learning (ICML), pages 997–1005, 2015.
[2] Matteo Turchetta, Felix Berkenkamp, and A. Krause. Safe exploration for interactive machine learning. In NeurIPS, 2019.
[3] Ilija Bogunovic, Jonathan Scarlett, Stefanie Jegelka, and Volkan Cevher. Adversarially robust optimization with gaussian processes. In Advances in Neural Information Processing Systems, pages 5765–5775, 2018.
[4] Joel A. Paulson, Georgios Makrygiorgos, Ali Mesbah. Adversarially robust Bayesian optimization for efficient auto-tuning of generic control structures under uncertainty. AIChE Journal. p. e17591, 2022.
[5] E.A. del Rio Chanona, J.E. Alves Graciano, E. Bradford, B. Chachuat. Modifier-Adaptation Schemes Employing Gaussian Processes and Trust Regions for Real-Time Optimization. IFAC-PapersOnLine.



11:50am - 12:10pm

Combined Flexibility and Resilience-aware Design Optimization of Process Systems Using Multi-Parametric Programming

Natasha Jane Chrisandina1,2,3, Eleftherios Iakovou2,4,5, Efstratios N. Pistikopoulos1,2, Mahmoud M. El-Halwagi1,2,3

1Artie McFerrin Department of Chemical Engineering, Texas A&M University, 3122 TAMU, 100 Spence St., College Station, TX 77843, USA; 2Texas A&M Energy Institute, Texas A&M University, College Station, TX, 77843, USA; 3Gas and Fuels Research Center, Texas A&M Engineering Experiment Station, College Station, USA; 4Department of Multidisciplinary Engineering, Texas A&M University, College Station, USA; 5Department of Engineering Technology and Industrial Distribution, Texas A&M University, College Station, USA

A critical part of process design and synthesis is ensuring that the designs generated are resilient against various uncertainties and disruptions that could affect the system during its lifetime. Various approaches have been proposed to address different aspects of this problem. Techniques such as flexibility analysis have been applied to tackle continuous uncertainties inherent in parameter values, such as cost or demand [1,2]. Reliability theory, which focuses on the likelihood that a system or component will perform or fail under specific conditions, has been utilized to design systems that can maintain a specific performance for a desired time duration [3]. To optimize operation under low probability-high impact disruptions, different scenarios are simulated on a given system design to minimize their impact through scheduling decisions [4,5]. While these techniques are well-established individually, a resilient system requires the integration of prevention, mitigation, and recovery actions under a general design framework that brings together consideration of continuous uncertainties and discrete disruption scenarios.

In this work, we propose a two-stage design under uncertainty and disruption framework for flexibility and resilience considerations. In the first stage, a multi-parametric programming reformulation of the flexibility analysis is solved to capture trade-offs among investment cost, design decisions, and flexibility for a desired range of uncertain parameters. In particular, to represent the probability distribution of the uncertain parameters at this stage, the stochastic flexibility method is applied. In the second stage, disruption scenarios are simulated on designs generated in the previous stage with known cost and flexibility index. The resilience performance of each design against disruption scenarios is assessed, providing design strategies that feature both the flexibility to handle parameter fluctuations and the resilience to manage discrete disruptions. The proposed methodology offers a path to exploring trade-offs among cost, flexibility, and resilience at the process design stage through multi-parametric programming. An illustrative case study is presented on an energy system under threat of internal failures as well as fluctuating market conditions.


References

  1. Di Pretoro, Alessandro, et al. "Flexibility assessment of a biorefinery distillation train: Optimal design under uncertain conditions." Computers & Chemical Engineering 138 (2020): 106831.
  2. Tian, Huayu, et al. "Feasibility/Flexibility-based optimization for process design and operations." Computers & Chemical Engineering 180 (2024): 108461.
  3. Ade, Nilesh, et al. "Investigating the effect of inherent safety principles on system reliability in process design." Process Safety and Environmental Protection 117 (2018): 100-110.
  4. Badejo, Oluwadare, and Marianthi Ierapetritou. "Enhancing Pharmaceutical Supply Chain Resilience: A Multi-Objective Study with Disruption Management." Computers & Chemical Engineering (2024): 108769.
  5. Gong, Jian, and Fengqi You. "Resilient design and operations of process systems: Nonlinear adaptive robust optimization model and algorithm for resilience analysis and enhancement." Computers & chemical engineering 116 (2018): 231-252.


12:10pm - 12:30pm

Optimization-Based Methodology for the Design of a Pulsed Fusion Power Plant using a Dynamic Model

Oliver M. G. Ward1, Federico Galvanin1, Nelia Jurado2, Robert J. Warren3, Daniel Blackburn3, Eric S. Fraga1

1Sargent CPSE, Department of Chemical Engineering, UCL, Torrington Place, London, WC1E 7JE, UK; 2Department of Mechanical Engineering, UCL, Torrington Place, London, WC1E 7JE, UK; 3United Kingdom Atomic Energy Authority, Culham Science Centre, Abingdon, OX14 3DB, UK

Spherical Tokamak for Energy Production (STEP) is a project by the UK Atomic Energy Authority to build and demonstrate the viability of a fusion power plant for generating clean electricity. As a heat source, fusion tokamaks present challenges for the design of the power plant, such as multiple heat sources of different qualities and pulsed operation leading to pulsed heat supplies. Pulsed operation means that dynamic modelling is necessary to simulate and evaluate designs. Thermal energy storage is chosen as a solution to mitigate the impact of consequent large thermal power transients on electricity generation, as already used in commercial solar thermal power plants.

This work presents a dynamic model of a power conversion system suitable for use in optimization-based design. The model is implemented in Modelica using OpenModelica. The power conversion system uses three different heat sources as energy inputs. These energy inputs vary in time due to the pulsed nature of the fusion plant operation. A two-tank molten salt sensible heat storage system is used to provide heat during a tokamak dwell to a steam Rankine cycle and to buffer sensitive components, like the turbine, from thermal fluctuations. The variable nature of the operation necessitates the incorporation of a control system to regulate process flows due to the changing tokamak modes. Lumped-parameter models are favoured for their computational efficiency, their robustness against simulation failures relative to more complex models, and their simplicity of parameterisation.

Multi-objective optimization is used to explore the design space, with each design evaluation involving a full dynamic simulation. Two objectives are considered simultaneously: robustness in the presence of pulsed operation and equipment sizing, such as the size of the molten salt tanks, as a proxy for the economics. Design variables include equipment sizing and controller tuning parameters. The latter are considered as different designs may require different controller actions. Integration of an external simulation program into an optimization algorithm poses some challenges, such as handling simulation failures with minimal information on the cause.

Due to the computational expense involved in dynamic simulation, a meta-heuristic optimization approach is used for design. Specifically, a population-based plant propagation meta-heuristic algorithm is used [1]. Populations can consist of both feasible and infeasible solutions. This facilitates handling failed simulations without losing potentially valuable information about the search space. A recently developed multi-agent system [2] is used to support multiple instances of the optimization method working together to more efficiently explore the design space.
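A compact, generic sketch of one plant propagation iteration for a minimisation problem, to illustrate the fitness-dependent number and length of "runners"; the objective, bounds, and parameters below are placeholders, not the authors' implementation [1], where the objective is a full dynamic plant simulation:

```python
import numpy as np

def objective(x):
    # Placeholder objective; in this work it would be a full dynamic simulation.
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(1)
dim, pop_size, n_gen, max_runners = 4, 10, 50, 5
pop = rng.random((pop_size, dim))                 # solutions scaled to [0, 1]

for _ in range(n_gen):
    f = np.array([objective(x) for x in pop])
    # Map fitness to [0, 1]: best solution -> 1, worst -> 0 (minimisation).
    N = (f.max() - f) / (f.max() - f.min() + 1e-12)
    offspring = []
    for x, Ni in zip(pop, N):
        n_run = max(1, int(np.ceil(max_runners * Ni * rng.random())))
        # Good plants send many short runners; poor plants send few long ones.
        dist = 2.0 * (1.0 - Ni) * (rng.random((n_run, dim)) - 0.5)
        offspring.append(np.clip(x + dist, 0.0, 1.0))
    pool = np.vstack([pop] + offspring)
    pop = pool[np.argsort([objective(x) for x in pool])][:pop_size]   # keep the best

print("best found:", pop[0], "objective:", objective(pop[0]))
```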

A case study shows that the methodology can generate a diverse set of non-dominated designs in reasonable time despite using relatively computationally expensive simulations in the objective function. The resulting set of non-dominated designs is shown to capture the trade-off between economics and most efficient use of the heat storage system.

[1] A. Salhi, E. S. Fraga, 2011, Nature-inspired optimisation approaches and the new plant propagation algorithm, Proceedings of ICeMATH 2011

[2] E. S. Fraga, V. Udomvorakulchai, L. G. Papageorgiou, 2024, A multi-agent system for hybrid optimization, Computer Aided Chemical Engineering, 53

 
10:30am - 12:30pmT5: Concepts, Methods and Tools - Session 1
Location: Zone 3 - Room E032
 
10:30am - 10:50am

AutoJSA: A Knowledge-Enhanced Large Language Model Framework for Improving Job Safety Analysis

Shuo Xu, Jinsong Zhao

Tsinghua University, People's Republic of China

Job Safety Analysis (JSA) is a core tool for identifying potential hazards in the workplace, assessing risks, and proposing control measures. It is widely used in various industries. However, traditional JSA methods face many challenges: they are time-consuming and lengthy, lack unified risk assessment standards, and struggle to identify hidden risks arising from the surrounding environment. In addition, the reliance on manpower in the analysis process makes it inefficient, especially in dynamic work environments that require frequent updates. These problems limit the effectiveness of JSA in actual operations and make it difficult to fully realize its potential safety assurance function.

In recent years, the rapid development of natural language processing (NLP) technology, especially the rise of large language models (LLMs), has provided a new way to address these problems. LLMs can process and understand large amounts of complex text data and, with their powerful language generation and knowledge integration capabilities, have gradually been applied across fields to optimize analysis workflows. Despite this, the application of LLMs in job safety analysis is still in its infancy; in particular, research on automated hazard identification and risk assessment remains limited.

To address the limitations of existing JSA methods, this paper proposes AutoJSA, a framework that uses a knowledge-enhanced LLM to support and improve the entire JSA analysis process. AutoJSA employs domain knowledge enhancement by extracting workplace safety insights from a wealth of JSA reports and industry safety documents. This extracted knowledge is then integrated into the LLM, significantly improving its ability to comprehend hazardous situations within specific industry contexts. AutoJSA can generate detailed job analysis reports according to analyst requirements, including job step division, hazard factor identification and evaluation, and suggestions for control measures.
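As a minimal illustration of the retrieval step behind such knowledge enhancement, the sketch below ranks a few stand-in safety snippets against a task description with TF-IDF similarity before assembling an LLM prompt; the documents, task, and prompt format are invented for illustration and are not the AutoJSA pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in knowledge base; a real system would draw on JSA reports and safety documents.
docs = [
    "Hot work near flammable solvent storage requires a gas test and a fire watch.",
    "Confined space entry requires atmosphere monitoring and a standby attendant.",
    "Working at height requires fall protection and exclusion zones below the work area.",
]
task = "Welding repair on a pipe rack above a solvent drum storage area"

vec = TfidfVectorizer().fit(docs + [task])
sims = cosine_similarity(vec.transform([task]), vec.transform(docs))[0]
top = sims.argsort()[::-1][:2]          # keep the two most relevant snippets

prompt = (
    "Relevant safety knowledge:\n"
    + "\n".join(f"- {docs[i]}" for i in top)
    + f"\n\nTask: {task}\nList job steps, hazards and control measures."
)
print(prompt)   # this assembled prompt would then be sent to the LLM
```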

Experimental results show that AutoJSA can significantly shorten JSA analysis time, reduce human errors, and perform well in identifying potential risks and proposing effective control measures. Compared with traditional manual JSA methods, AutoJSA not only improves efficiency but also identifies potential risks in complex tasks more comprehensively and accounts for interference from surrounding operations through its knowledge-enhancement capabilities. Especially in highly dynamic, multi-task work environments, AutoJSA ensures the timeliness and effectiveness of safety measures by updating analysis results promptly and providing real-time suggestions.

Although there are still challenges in the scale and quality of the dataset, AutoJSA demonstrates its great potential in job safety analysis. This paper argues that by leveraging the latest advances in NLP, especially LLM, AutoJSA is expected to promote the automation and intelligence of workplace safety management while improving the efficiency and accuracy of JSA.



10:50am - 11:10am

Phenomena-Based Graph Representations and Applications to Chemical Process Simulation

Yoel Rene Cortes-Pena1, Victor M. Zavala2

1University of Wisconsin Madison, United States of America; 2University of Wisconsin Madison, United States of America

Rapid and robust simulation of a chemical production process is critical to address core scientific questions related to process design, optimization, and sustainability. Efficiently solving a chemical process, however, remains a challenge due to its highly coupled and nonlinear nature. While many algorithmic paradigms for process simulation exist (e.g., sequential modular simulation [1], equation-based simulation [2], pseudo-transient methods [3]), only a limited set of approaches exploits the topology/connectivity of the process at the phenomenological (physical) level [4].

Graph-theoretic representations of process equations and variables can help better understand the topology of a chemical process at the phenomenological level and potentially uncover more efficient decompositions for flowsheet simulation. Decoupling of equations based on phenomena (e.g., mass, energy, thermodynamic, kinetic) is widely used to solve unit operations such as multistage equilibrium columns [5]. Through graph-theoretic representations, it may be possible to generalize and expand phenomena-based decomposition approaches used at the unit operation level towards the complete flowsheet.

In this work, we developed a general graph abstraction of the underlying physical phenomena within unit operations. The abstraction consists of a graph/network of interconnected variable/equation nodes that are systematically generated through PhenomeNode, an open-source library that we developed and implemented in Python. By employing the graph representation on an industrial separation process for purifying glacial acetic acid, we show how partitioning the graph into separate mass, energy, and phenomena subgraphs can help decouple nonlinearities, and we develop a phenomena-based simulation algorithm accordingly. We implemented this new simulation algorithm in BioSTEAM [6]—an open-source process simulation platform in Python—and demonstrate how phenomena-based decomposition enables more efficient simulation of large, highly coupled systems than sequential modular simulation. Using an industrial process for dewatering acetic acid as a representative case for a large and highly coupled system, the new algorithm results in a 76.6% reduction in computation time. These results suggest that phenomena-based decomposition of the flowsheet may open new avenues for more rapid and robust simulation.
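A toy sketch of the kind of variable/equation graph described here, using networkx to tag equation nodes by phenomenon and extract per-phenomenon subgraphs; the flash-unit nodes below are illustrative and are not taken from PhenomeNode, which generates such graphs automatically:

```python
import networkx as nx

G = nx.Graph()
# Equation nodes tagged by phenomenon; variable nodes untagged (illustrative flash unit).
equations = {
    "component_mass_balance": "mass",
    "energy_balance": "energy",
    "vle_equilibrium": "phenomena",   # thermodynamic phase-equilibrium relation
}
for eq, phenom in equations.items():
    G.add_node(eq, kind="equation", phenomenon=phenom)
for var in ["F_in", "F_vap", "F_liq", "T", "H_in", "K_i", "x_i", "y_i"]:
    G.add_node(var, kind="variable")

G.add_edges_from([("component_mass_balance", v) for v in ["F_in", "F_vap", "F_liq", "x_i", "y_i"]])
G.add_edges_from([("energy_balance", v) for v in ["F_in", "F_vap", "F_liq", "T", "H_in"]])
G.add_edges_from([("vle_equilibrium", v) for v in ["K_i", "T", "x_i", "y_i"]])

# Partition: keep each equation node together with the variables it touches.
for phenom in ("mass", "energy", "phenomena"):
    eqs = [n for n, d in G.nodes(data=True) if d.get("phenomenon") == phenom]
    sub = G.subgraph(eqs + [v for e in eqs for v in G.neighbors(e)])
    print(phenom, "subgraph:", sorted(sub.nodes))
```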

References

(1) Motard, R. L.; Shacham, M.; Rosen, E. M. Steady State Chemical Process Simulation. AIChE Journal 1975, 21 (3), 417–436. https://doi.org/10.1002/aic.690210302.

(2) Bogle, I. D. L.; Perkins, J. D. Sparse Newton-like Methods in Equation Oriented Flowsheeting. Computers & Chemical Engineering 1988, 12 (8), 791–805. https://doi.org/10.1016/0098-1354(88)80018-8.

(3) Tsay, C.; Baldea, M. Fast and Efficient Chemical Process Flowsheet Simulation by Pseudo-Transient Continuation on Inertial Manifolds. Computer Methods in Applied Mechanics and Engineering 2019, 348, 935–953. https://doi.org/10.1016/j.cma.2019.01.025.

(4) Ishii, Y.; Otto, F. D. Novel and Fundamental Strategies for Equation-Oriented Process Flowsheeting. Computers & Chemical Engineering 2008, 32 (8), 1842–1860. https://doi.org/10.1016/j.compchemeng.2007.10.004.

(5) Monroy-Loperena, R. Simulation of Multicomponent Multistage Vapor−Liquid Separations. An Improved Algorithm Using the Wang−Henke Tridiagonal Matrix Method. Ind. Eng. Chem. Res. 2003, 42 (1), 175–182. https://doi.org/10.1021/ie0108898.

(6) Cortes-Peña, Y.; Kumar, D.; Singh, V.; Guest, J. S. BioSTEAM: A Fast and Flexible Platform for the Design, Simulation, and Techno-Economic Analysis of Biorefineries under Uncertainty. ACS Sustainable Chem. Eng. 2020, 8 (8), 3302–3310. https://doi.org/10.1021/acssuschemeng.9b07040.



11:10am - 11:30am

Systematic comparison between Graph Neural Networks and UNIFAC-IL for solvent pre-selection in liquid-liquid extraction

Edgar Ivan Sanchez Medina2, Ann-Joelle Minor1, Kai Sundmacher1,2

1Max Planck Institute for Dynamics of Complex Technical Systems, Germany; 2Otto von Guericke University

Solvent selection is a complex decision that involves balancing multiple factors, including economic, environmental, and societal considerations. As industries strive for sustainability, choosing the right solvent has become increasingly important. However, the vast chemical space makes evaluating all possible solvents impractical. To address this, pre-selection strategies are essential, narrowing down the search to the most promising solvent candidates.

Predictive thermodynamic models are commonly used for solvent pre-selection, with the UNIFAC model being one of the most popular. Recently, advancements in deep learning and computational power have led to the development of new models, such as the Gibbs-Helmholtz Graph Neural Network (GH-GNN). This model has demonstrated higher accuracy in predicting infinite dilution activity coefficients over a larger chemical space than UNIFAC [1]. Moreover, GH-GNN has been applied to solvent pre-selection for extractive distillation [2]. However, a systematic comparison of the pre-selection methods has not yet been conducted.

In this work, we present a systematic comparison between solvent pre-selection using GH-GNN and UNIFAC-IL, an extension of the UNIFAC model for ionic liquids. The case study focuses on pre-selecting solvents for a liquid-liquid extraction process involving the ionic liquid 1-ethyl-3-methylimidazolium tetrafluoroborate ([EMIM][BF4]) and caprolactam. Recent research has identified ionic liquids as promising candidates for the solvolytic depolymerization of polyamide 6 into caprolactam [3]. In this process, the resulting mixture, which primarily consists of caprolactam and the ionic liquid, must be separated into pure components to enable the recycling of the ionic liquid and the reuse of caprolactam in a circular economy.

Our results show that, despite differences in solvent rankings across methods, there is a significant correlation in the overall pre-selection. This suggests that deep learning-based models like GH-GNN can be viable alternatives for solvent pre-selection across a broader chemical space compared to traditional group contribution methods. Additionally, we show that chemical similarity metrics, such as the Tanimoto similarity of molecular fingerprints, can be used to assess confidence in the proposed solvent rankings. This allows the model user to decide the level of risk they are willing to tolerate when relying on predictions over a vast chemical space.
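A short example of the similarity check mentioned above, computing the Tanimoto similarity of Morgan fingerprints with RDKit; the molecule pair is chosen purely for illustration and does not come from this study:

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

# Illustrative pair: a candidate solvent and a molecule assumed to be in the training space.
candidate = Chem.MolFromSmiles("CCCCCCO")   # 1-hexanol
reference = Chem.MolFromSmiles("CCCCO")     # 1-butanol

fp1 = AllChem.GetMorganFingerprintAsBitVect(candidate, radius=2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(reference, radius=2, nBits=2048)

similarity = DataStructs.TanimotoSimilarity(fp1, fp2)
print(f"Tanimoto similarity: {similarity:.2f}")   # higher -> more confidence in the ranking
```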

[1] Sanchez Medina, E.I., Linke, S., Stoll, M. and Sundmacher, K., 2023. Gibbs–Helmholtz graph neural network: capturing the temperature dependency of activity coefficients at infinite dilution. Digital Discovery, 2(3), pp.781-798.

[2] Sanchez Medina, E.I. and Sundmacher, K., 2023. Solvent pre-selection for extractive distillation using Gibbs-Helmholtz Graph Neural Networks. In Computer Aided Chemical Engineering (Vol. 52, pp. 2037-2042). Elsevier.

[3] Kamimura, A., Shiramatsu, Y. and Kawamoto, T., 2019. Depolymerization of polyamide 6 in hydrophilic ionic liquids. Green Energy & Environment, 4(2), pp.166-170.



11:30am - 11:50am

Optimizing Individual-based Modelling: A Grid-based Approach to Computationally Efficient Microbial Simulations

Ihab Hashem, Jian Wang, Jan Van Impe

KU Leuven, Belgium

Individual-based Modeling (IbM) is a powerful approach for simulating microbial populations, with applications in biochemical engineering such as wastewater treatment, bioreactor design, and biofilm formation. This bottom-up approach explicitly models individual cell behaviors, allowing population dynamics to emerge from cellular interactions. Despite its strengths, IbM is often limited by high computational demands, especially when scaling to larger populations. The main bottleneck lies in resolving spatial overlaps between cells, which is usually managed through pairwise comparisons using data structures like kd-trees that partition the space recursively. While kd-trees reduce the complexity of neighbor searches to O(N log N), they become less efficient as population sizes increase [1]. To overcome this limitation, we developed the Discretized Overlap Resolution Algorithm (DORA), a grid-based method that transforms the overlap resolution process into a more efficient O(N) operation. Rather than tracking individual cell positions and performing neighbor searches, DORA discretizes the simulation space into grid units, resolves overlaps using a diffusion-like process, and translates the results back into movement vectors for the cells. This approach resulted in substantial improvements in computational efficiency for large-scale colony and biofilm simulations.

DORA was implemented within MICRODIMS, an in-house IbM platform developed for simulating microbial growth [2]. MICRODIMS is built on the Repast Simphony toolkit and allows for the simulation of microbial dynamics, nutrient diffusion, and biofilm formation with detailed control over spatial and temporal resolution. Integrating DORA into MICRODIMS significantly enhanced its ability to handle large-scale growth simulations in a computationally feasible time.
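A highly simplified, two-dimensional sketch of the grid-based idea: cell positions are binned onto a grid and cells in overcrowded bins are pushed towards emptier neighbouring bins in a diffusion-like sweep. All parameters are invented, and the published algorithm is considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, grid_n, capacity = 500, 20, 2        # capacity: cells per grid unit before "overlap"
pos = rng.random((n_cells, 2))                # cell centres in a unit square

for _ in range(20):                           # a few diffusion-like relaxation sweeps
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=grid_n, range=[[0, 1], [0, 1]])
    gx, gy = np.gradient(counts)              # local crowding gradient on the grid
    ix = np.clip((pos[:, 0] * grid_n).astype(int), 0, grid_n - 1)
    iy = np.clip((pos[:, 1] * grid_n).astype(int), 0, grid_n - 1)
    crowded = counts[ix, iy] > capacity
    step = 0.5 / grid_n
    # Push crowded cells "downhill" in occupancy, i.e. towards emptier neighbouring bins.
    pos[crowded, 0] -= step * np.sign(gx[ix[crowded], iy[crowded]])
    pos[crowded, 1] -= step * np.sign(gy[ix[crowded], iy[crowded]])
    pos = np.clip(pos, 0.0, 1.0)

counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=grid_n, range=[[0, 1], [0, 1]])
print("max occupancy after relaxation:", int(counts.max()))
```

Because each sweep only touches every cell once and a fixed-size grid, the cost per step scales linearly with the number of cells rather than with the number of pairwise comparisons.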

We extended DORA by incorporating adaptive grid scaling, which adjusts grid resolution based on local cell density to optimize computational resources and speed up simulations in less dense areas. We also introduced stochastic movement components prior to the overlap resolution step, enhancing the realism of simulations by capturing inherent biological variability, such as microbial motility and environmental fluctuations. Our results showed that these optimizations significantly improved DORA’s performance and enabled more computationally efficient simulations without compromising accuracy. The incorporation of stochasticity also provides flexibility, allowing the model to better reflect the natural variability seen in biological systems, thereby offering a more accurate representation of microbial behavior under diverse conditions.

References

1- Hellweger, F.L. and Bucci, V., 2009. A bunch of tiny individuals—Individual-based modeling for microbes. Ecological Modelling, 220(1), pp.8-22.

2- Tack, I.L., Nimmegeers, P., Akkermans, S., Hashem, I. and Van Impe, J.F., 2017. Simulation of Escherichia coli dynamics in biofilms and submerged colonies with an individual-based model including metabolic network information. Frontiers in microbiology, 8, p.2509.



11:50am - 12:10pm

A Framework Utilizing a Seamless Integration of Python with AspenPlus® for a Multi-Criteria Process Evaluation

Simon Maier, Julia Weyand, Ginif Kaur, Oliver Erdmann, Ralph-Uwe Dietrich

German Aerospace Center (DLR), Germany

Detailed assessment of fuel production processes at an early stage of a project is crucial to identify potential technical challenges, optimize efficiency, and minimize costs and environmental impact. Process simulations are often either rigid and accurate or flexible and imprecise; informed decision-making therefore requires establishing a detailed process model as early as possible in the project lifecycle while keeping the relevant aspects of the process sufficiently flexible.

In this work, we present the development of a framework based on a dynamic interface between AspenPlus® process simulations and Python, enabling enhanced flexibility and automation for process modeling and optimization. This integration leverages the powerful simulation capabilities of AspenPlus® with the versatility of Python for data analysis and optimization, delivering significant improvements in workflow efficiency and process control. By utilizing the dynamic simulation data exchange with Python, extensive parameter studies can be conducted and post-processed for techno-economic and environmental analyses. Furthermore, the interface allows the implementation of complex kinetic models or optimization routines for single process units. An additional extension for heat integration ensures the technical viability of the process route for reliable comparisons of different routes and process configurations.
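A minimal sketch of the kind of COM-based data exchange such a framework builds on (Windows-only, via pywin32). The ProgID, archive file, and variable node paths below are placeholders that depend on the Aspen Plus version and the specific flowsheet, and the framework described here adds substantial functionality on top of this pattern:

```python
import win32com.client as win32   # pywin32

# Connect to Aspen Plus through its automation (COM) interface.
aspen = win32.Dispatch("Apwn.Document")              # ProgID may be version-specific
aspen.InitFromArchive2(r"C:\models\meoh_plant.bkp")  # placeholder backup file

# Example parameter study: vary a reactor temperature and record a product flow.
temp_node = r"\Data\Blocks\REACTOR\Input\TEMP"       # placeholder node paths
prod_node = r"\Data\Streams\MEOH\Output\MOLEFLOW\MIXED\METHANOL"

for T in (220.0, 240.0, 260.0):
    aspen.Tree.FindNode(temp_node).Value = T
    aspen.Engine.Run2()                              # run the simulation
    flow = aspen.Tree.FindNode(prod_node).Value
    print(f"T = {T} C -> methanol flow = {flow}")
```

The results of such sweeps can then be post-processed in Python for techno-economic and environmental analysis, as described above.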

The functionalities are applied to a biomass- and power-based methanol production process covering various process designs and operating conditions. To maintain a high level of detail, additional Python scripts are implemented to ensure proper scaling of process units such as the methanol synthesis reactor system. The process configurations are assessed technically, economically, and environmentally.



12:10pm - 12:30pm

Global Robust Optimisation for Non-Convex Quadratic Programs: Application to Pooling Problems

Asimina Marousi, Vassilis Charitopoulos

The Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, University College London, United Kingdom

Robust optimization is widely used to identify worst-case scenarios, ensuring constraint satisfaction under uncertainty or when statistical data is unavailable, as an alternative to scenario-based approaches [1]. The most prevalent solution algorithms for convex problems are robust reformulation and robust cutting-planes, both of which can be extended to non-convex problems, though they lack guarantees of polynomial-time convergence [2]. Cutting-planes involve sequentially solving an upper-level deterministic problem and lower-level problems to find uncertainty values that cause the maximum constraint violation. For those values, corresponding cuts are added to the upper-level problem. Implementations of cutting-planes can be found in solvers like PyROS [3] and the ROmodel package [4] in Python, which rely on local or global solvers to handle intermediate problems. However, traditional robust cutting-planes are heavily influenced by the performance of the chosen solver, and if the solver fails to converge during a cut-round, the entire algorithm may not converge [3].

With the growing presence of nonlinear functions in chemical engineering problems, especially when data-driven methods are employed, there is a need for systematic techniques that handle non-convex problems [5]. This study proposes a novel spatial Branch-and-Bound algorithm integrated with robust cutting-planes (RsBB) for solving non-convex robust optimization problems. The coupling of global and robust optimization algorithms has been explored in process systems engineering literature [6,7]. In this work, the proposed algorithm is implemented to solve benchmark pooling problems with uncertain feed quality, using McCormick relaxations. In the RsBB algorithm, robust infeasibility checks are performed at each node of the BB tree. The infeasibility cuts are added both on the original and the relaxed problem. The branching tree is separated into different families depending on the number of cuts at each node, and a depth-first search is used, favouring nodes within the same family. A comparison is performed between RsBB and state-of-the-art software in terms of computational efficiency and solution robustness.
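For readers unfamiliar with the relaxation used here, the fragment below shows the four McCormick envelope constraints for a single bilinear term w = x·y in Pyomo. The bounds and objective are arbitrary illustrations; in the RsBB algorithm these envelopes are applied within a spatial branch-and-bound tree with bounds tightened at each node:

```python
import pyomo.environ as pyo

xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0   # illustrative variable bounds

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(xL, xU))
m.y = pyo.Var(bounds=(yL, yU))
m.w = pyo.Var()   # relaxation of the bilinear term w = x*y

# McCormick envelopes: two convex underestimators and two concave overestimators.
m.under1 = pyo.Constraint(expr=m.w >= xL * m.y + yL * m.x - xL * yL)
m.under2 = pyo.Constraint(expr=m.w >= xU * m.y + yU * m.x - xU * yU)
m.over1 = pyo.Constraint(expr=m.w <= xU * m.y + yL * m.x - xU * yL)
m.over2 = pyo.Constraint(expr=m.w <= xL * m.y + yU * m.x - xL * yU)

# Example objective over the relaxed feasible set; an LP solver is needed to solve it.
m.obj = pyo.Objective(expr=m.w - m.x - m.y, sense=pyo.minimize)
# pyo.SolverFactory("glpk").solve(m)   # uncomment if a solver such as GLPK is installed
```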

[1] Dias, L. S., & Ierapetritou, M. G. (2016). Integration of scheduling and control under uncertainties: Review and challenges. Chemical Engineering Research and Design, 116, 98–113.

[2] Wiebe, J., Cecílio, I., & Misener, R. (2019). Robust Optimization for the Pooling Problem. Industrial and Engineering Chemistry Research, 58(28), 12712–12722.

[3] Isenberg, N. M., Akula, P., Eslick, J. C., Bhattacharyya, D., Miller, D. C., & Gounaris, C. E. (2021). A generalized cutting-set approach for nonlinear robust optimization in process systems engineering. AIChE Journal, 67(5), e17175.

[4] Wiebe, J., & Misener, R. (2022). ROmodel: modeling robust optimization problems in Pyomo. Optimization and Engineering, 23(4), 1873–1894.

[5] Schweidtmann, A. M., Bongartz, D., Grothe, D., Kerkenhoff, T., Lin, X., Najman, J., & Mitsos, A. (2021). Deterministic global optimization with Gaussian processes embedded. Mathematical Programming Computation, 13(3), 553–581.

[6] Li, J., Misener, R., & Floudas, C. A. (2011). Scheduling of Crude Oil Operations Under Demand Uncertainty: A Robust Optimization Framework Coupled with Global Optimization. AIChE Journal, 58, 2373–2396.

[7] Zhang, L., Yuan, Z., & Chen, B. (2021). Refinery-wide planning operations under uncertainty via robust optimization approach coupled with global optimization. Computers & Chemical Engineering, 146, 107205.

 
10:30am - 12:30pmT6: Digitalization and AI - Keynote
Location: Zone 3 - Room E033
Chair: Filip Logist
Co-chair: Manabu Kano
 
10:30am - 11:10am

Keynote by BASF: Beyond the hype. How to create impact with digital innovation in chemical production.

Stijn Verbert

BASF

At BASF, we create chemistry for a sustainable future. Our ambition: We want to be the preferred chemical company to enable our customers’ green transformation. Through science and innovation we provide our customers, active in nearly all sectors, with products to meet the current and future needs of business and society. BASF Antwerp is the largest integrated production site in Belgium and the second largest of the BASF Group worldwide.
The chemical industry worldwide, and in Europe in particular, is sailing through rough waters. The combination of high uncertainty due to the geopolitical situation, structurally high costs, regulatory burden and climate change proves a truly challenging cocktail.
For a production site like BASF Antwerp, one of the key levers to master this challenge is the smart use of developments in digitalization, automation and AI to make sure our production plants operate under the most optimal conditions.
However, despite the hype that often accompanies new developments, generating a significant and sustainable impact often proves harder than expected. We will discuss some of the typical challenges that have to be overcome. We will present a number of real-life realizations: i) optimizing production and productivity using data analytics, artificial intelligence and model-based optimization & control, ii) developing digital twins in legacy plants, iii) streamlining workflows using agile co-creation and iv) introducing drones and robots to facilitate daily operations. Through these real-life examples we will illustrate some of the lessons we learned so far on our journey and hint at some of the future opportunities we see.



11:10am - 11:30am

Exploring industrial text data for monitoring chemical manufacturing processes

Eugeniu Strelet1,2, Ivan Castillo2, You Peng2, Swee-Teng Chin2, Anna Zink2, Ricardo Rendall2, Marco S. Reis1

1Univ Coimbra, CERES, Department of Chemical Engineering, Rua Sílvio Lima, Pólo II – Pinhal de Marrocos, 3030-790 Coimbra, Portugal; 2The Dow Chemical Company, Lake Jackson, USA

In the context of the Chemical Processing Industry (CPI), a wide range of sensor technologies and data collection methods is available for use. These sources provide valuable insights into the physical and chemical phenomena occurring throughout the process, the status of equipment, prevailing process conditions, attributes related to raw materials and product quality, emissions data, logistic issues, and more (Ye et al., 2020). Despite the intensive use of sensors in industrial settings, they often miss critical process information. Critical issues such as leaks, corrosion, and insulation degradation may escape the observational space of industrial sensing devices and go undetected.

Therefore, it becomes imperative to seek alternative sources of information for acquiring insights into the state and health of industrial processes. One such alternative is industrial text data available in operators' reports, alarm tags, process memorandums, etc. Despite the existence of numerous text processing methods, there is a notable lack of studies exploring their applicability in the CPI context. This work investigates the efficacy of text processing techniques for information retrieval from industrial text data, specifically for process safety and containment event (PSCE) prediction.

To assess the value of the information contained in industrial text data, two scenarios were considered: one simulated using OpenAI's GPT-3 model, and the other based on real industrial data. The tested NLP approach (Reimers and Gurevych, 2019) proved effective for information retrieval from the simulated text data. On the other hand, extracting information present solely in industrial text data posed challenges such as domain-specific vocabulary and incomplete information with respect to the topic of analysis.
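As an illustration of the Sentence-BERT approach cited above (Reimers and Gurevych, 2019), the snippet below embeds short operator-log style sentences and scores their similarity to a query describing a containment event; the pretrained model name and the example texts are placeholders, not the data or models used in this work:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # example pretrained SBERT model

logs = [
    "Small drip observed at pump P-101 mechanical seal, maintenance notified.",
    "Routine shift handover, no abnormal observations.",
    "Insulation on line 4 wet and discoloured near flange.",
]
query = "possible loss of containment or leak"

log_emb = model.encode(logs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, log_emb)[0]      # cosine similarity of each log to the query
for text, score in zip(logs, scores):
    print(f"{score:.2f}  {text}")
```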

To address the challenges associated with extracting information from industrial text data, one potential solution involves fine-tuning NLP models for specific production contexts. This approach requires a high-quality dataset representative of the targeted manufacturing process. However, it poses limitations in terms of generalizing to other manufacturing processes and sites, since the vocabulary used is site- and process-specific, and it still cannot cope with incomplete information. An alternative approach is integrating industrial text data with available numerical (sensor) data (Strelet et al., 2023), which can mitigate the inherent limitations of text data and enhance scalability across different production environments.

References

Ye, Z., Yang, J., Zhong, N., Tu, X., Jia, J., & Wang, J. (2020). Tackling environmental challenges in pollution controls using artificial intelligence: A review. Science of The Total Environment, 699, 134279. https://doi.org/10.1016/j.scitotenv.2019.134279

Strelet, E., Peng, Y., Castillo, I., Rendall, R., Wang, Z., Joswiak, M., Braun, B., Chiang, L., & Reis, M. S. (2023). Multi-source and Multimodal Data Fusion for Improved Management of a Wastewater Treatment Plant. Journal of Environmental Chemical Engineering, 111530. https://doi.org/10.1016/j.jece.2023.111530

Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3980–3990, Hong Kong, China. Association for Computational Linguistics



11:30am - 11:50am

Text2Model: Generating dynamic chemical reactor models using large language models

Sophia Rupprecht, Yassine Hounat, Monisha Kumar, Giacomo Lastrucci, Artur M. Schweidtmann

Delft University of Technology, Department of Chemical Engineering, Process Intelligence Research Group, Van der Maasweg 9, 2629 HZ, Delft, The Netherlands

As large language models (LLMs) have shown remarkable capabilities in conversing via natural language [1], the question arises in which way LLMs could potentially assist scientists and engineers in research and industry with domain-specific tasks [2]. Existing approaches of leveraging LLMs in science have been employed in a multitude of different domains [3] such as ChemCrow for autonomous chemical synthesis execution [4]. Since state-of-the-art LLMs have shown remarkable capabilities in code generation in various programming languages [5], LLMs could assist scientists with using tools such as modeling environments by converting textual information into structured domain-specific languages [6].
We generate dynamic chemical reactor models in Modelica code format from textual descriptions provided as user input. Firstly, we fine-tune Llama 3 8B Instruct on synthetically generated Modelica code for different reactor scenarios. The supervised fine-tuning procedure is conducted using the parameter-efficient fine-tuning technique low-rank adaptation (LoRA) [7]. Secondly, we compare the performance of our fine-tuned model to the baseline Llama 3 8B Instruct model as well as GPT-4o. A human trained in the chemical engineering domain assesses the models’ predictions manually with regard to the syntactic and semantic accuracy of the generated dynamic models.
Our initial findings show that the fine-tuned model is able to follow the syntax of the Modelica language more accurately than the respective base model Llama 3 8B Instruct and GPT-4o. However, the fine-tuned model reveals shortcomings with respect to the semantic accuracy of the generated systems of equations compared to GPT-4o and Llama 3 8B Instruct. We expect that adapting training and inference settings in successive investigations will significantly improve the fine-tuned model’s Modelica code generation capabilities and thus, in the long run, enable chemical engineers to save time when performing dynamic simulations.
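A condensed sketch of a LoRA set-up of the kind described above, using Hugging Face transformers and peft; model access, the tokenised training data, and the exact hyperparameters used by the authors are not shown, and the ranks and target modules below are typical defaults rather than the paper's values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B-Instruct"      # gated model; requires access approval
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only the low-rank adapters are trained

# Training would then proceed with, e.g., transformers.Trainer or trl.SFTTrainer on
# (reactor description, Modelica code) pairs.
```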

References
[1] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. March 2023. doi: 10.48550/ARXIV.2303.12712.

[2] Artur M. Schweidtmann. Generative artificial intelligence in chemical engineering. Nature Chemical Engineering, 1(3): 193–193, March 2024. ISSN 2948-1198. doi: 10.1038/s44286-024-00041-5.

[3] Lukas Schulze Balhorn, Jana M. Weber, Stefan Buijsman, Julian R. Hildebrandt, Martina Ziefle, and Artur M. Schweidtmann. Empirical assessment of chatgpt’s answering capabilities in natural science and engineering. Scientific Reports, 14(1), February 2024. ISSN 2045-2322. doi: 10.1038/s41598-024-54936-7.

[4] Andres M Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Chemcrow: Augmenting large-language models with chemistry tools. April 2023. doi: 10.48550/ARXIV.2304.05376.

[5] Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi D. Q. Bui, Junnan Li, and Steven C. H. Hoi. Codet5+: Open code large language models for code understanding and generation. May 2023. doi: 10.48550/ARXIV.2305.07922.

[6] Pieter Floris Jacobs and Robert Pollice. Developing large language models for quantum chemistry simulation input generation. September 2024. doi: 10.26434/chemrxiv-2024-9g2w2.

[7] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. June 2021. doi: 10.48550/ARXIV.2106.09685.



11:50am - 12:10pm

Multi-Agent LLMs for Automating Sustainable Operational Decision-Making

Emma Pajak, Abdullah Bahamdan, Klaus Hellgardt, Antonio del Río Chanona

Imperial College London, United Kingdom

The future of Process Systems Engineering (PSE) lies in greater automation, enabling faster and more accurate operational decision-making [1]. This research investigates whether Large Language Model (LLM) Autonomous Agents (LAAs) can be a driving force in this transformation. LAAs are AI-driven systems capable of executing complex, multi-step tasks autonomously, with minimal human intervention [2]. LAAs have demonstrated potential in automating processes in other industries; for instance, ChatDev within software engineering: a framework that leverages task-specific LLM agents to manage software design, coding, and testing [3]. This case study illustrated the potential of using multiple LLM agents to streamline complex processes, automate decision-making, and significantly boost overall performance by dividing the workload among interacting agents [3]. Building on the success of ChatDev, this project seeks to explore whether similar innovations could be beneficial to PSE.

To explore the potential of LAAs within PSE, a framework was developed to automate sustainable operational decision-making in a system of Gas-Oil Separation Plants (GOSPs). Since PSE relies on a combination of models, simulations, and tools, the case study was structured to encompass a variety of techniques. The objective is to determine the optimal sustainable operation of a GOSP system given a production target from upper management. A concise prompt relays this target, initiating the automated workflow: following the Mixture of Experts (MoE) framework [4], a series of expert LLM agents collaborate to complete the task. The workflow integrates multiple tools, such as interfacing with a HYSYS flowsheet, solving multi-objective optimisation problems, and analysing cost-emissions trade-offs from a Pareto frontier. It emulates a realistic series of analyses for operational decision-making, leveraging quick low-fidelity simulations for initial assessments and high-fidelity modelling for refinement. Subsequently, two specialised agents—one focused on economic objectives and the other on environmental considerations—evaluate the Pareto front to negotiate an optimal operating point.
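One simple programmatic proxy for the final step, picking the Pareto point closest to the normalised utopia point of minimum cost and minimum emissions, is sketched below; in this work that choice is negotiated between the economic and environmental LLM agents rather than fixed by such a rule, and the numbers are hypothetical:

```python
import numpy as np

# Hypothetical Pareto front: columns are (operating cost, CO2 emissions), both minimised.
front = np.array([[10.0, 9.0], [11.5, 6.0], [13.0, 4.5], [16.0, 4.0]])

# Normalise each objective to [0, 1] and pick the point closest to the utopia point (0, 0).
norm = (front - front.min(axis=0)) / (front.max(axis=0) - front.min(axis=0))
compromise = front[np.argmin(np.linalg.norm(norm, axis=1))]
print("selected operating point (cost, emissions):", compromise)
```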

Although the case study is framed within the sustainable operation of the oil and gas industry, the broader purpose of this research is to explore how LAAs can be leveraged in PSE. If successful, by automating multi-step processes, LAAs could offer significant improvements in efficiency and flexibility in operational decision-making, extending their benefits beyond just decision-making in PSE. Ultimately, this research aims to demonstrate how LAAs can play a pivotal role in the industry's transition towards the ‘plant of the future’, where diverse models and technologies are seamlessly integrated into a unified, intelligent framework.

References
[1] Gamer, T., Hoernicke, M., Kloepper, B., Bauer, R. and Isaksson, A.J. (2020) 'The autonomous industrial plant – future of process engineering, operations and maintenance', Journal of Process Control, 88, pp. 101–110. doi: 10.1016/j.jprocont.2020.01.012.

[2] Liu, Z. et al. (2023) 'Bolaa: Benchmarking and orchestrating LLM-augmented autonomous agents', arXiv preprint, arXiv:2308.05960.

[3] Qian, C. et al. (2024) 'Chatdev: Communicative agents for software development', in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 1, pp. 15174–15186.

[4] Wang, J., Wang, J., Athiwaratkun, B., Zhang, C. & Zou, J., 2024. Mixture-of-Agents enhances large language model capabilities. arXiv preprint, [online] Available at: https://arxiv.org/abs/2406.04692

 
10:30am - 12:30pmT7: CAPEing with Societal Challenges - Session 1
Location: Zone 3 - Room D016
Co-chair: Ryan Muir
 
10:30am - 10:50am

Waste heat upgrading from alkaline and PEM electrolyzers using heat pumps

Aldwin-Lois Galvan-Cara1,2, Dominik Bongartz1,2

1Department of Chemical Engineering, KU Leuven, 3001 Leuven, Belgium; 2EnergyVille, 3600 Genk, Belgium

The use of waste heat from electrolysis has been shown to significantly increase the efficiency of the process [1]. The most mature electrolyzer technologies are alkaline and PEM electrolyzers, which produce low-temperature waste heat below 90 °C [2]. Therefore, most studies focus on the direct use of this waste heat for low-temperature applications, such as district heating [3]. Another alternative is to upgrade the waste heat, which allows a wider range of applications also in the chemical industry, e.g., for low-pressure steam generation. This is enabled by the recent development of steam generating heat pumps [4]. However, the potential of combining low-temperature electrolysis and emerging heat pump technologies has not been sufficiently explored yet. Furthermore, if heat is to be considered as a valuable output, it is still not clear what changes could be made in the design of the electrolyzers to improve efficiency and economics.

In this work, we analyze the use of heat pumps for waste-heat upgrading from low-temperature electrolyzers using simple models. We evaluate the performance of the combined system, i.e., electrolyzer with a heat pump, under different operating conditions. In addition, we investigate the benefit of co-designing both units compared to the case of adding a heat pump to an electrolyzer designed without waste-heat utilization (i.e., a posteriori coupling). We show that designing for waste-heat utilization changes the preferred operating conditions for the electrolyzer. This new approach leads to a more compact electrolyzer design, which reduces capital costs and sacrifices efficiency, while allowing more heat as useful output. Additionally, we highlight similarities and differences between waste-heat upgrading from PEM and alkaline electrolyzers.
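A back-of-the-envelope version of the temperature-lift argument, estimating the COP of upgrading electrolyser waste heat to low-pressure steam as a Carnot COP scaled by an assumed second-law efficiency; all numbers are illustrative and do not come from the models used in this work:

```python
# Illustrative estimate only.
T_source_C = 70.0        # electrolyser cooling-water return, deg C (assumed)
T_sink_C = 130.0         # low-pressure steam target, deg C (assumed)
eta_II = 0.5             # assumed second-law efficiency of an industrial heat pump

T_source, T_sink = T_source_C + 273.15, T_sink_C + 273.15
cop_carnot = T_sink / (T_sink - T_source)
cop = eta_II * cop_carnot

waste_heat_MW = 5.0      # assumed recoverable waste heat from the stacks
power_MW = waste_heat_MW / (cop - 1.0)   # compressor power: sink heat = waste heat + power
print(f"COP ~= {cop:.2f}; upgrading {waste_heat_MW} MW of waste heat needs ~{power_MW:.2f} MW(e)")
```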

[1] van der Roest, E., Bol, R., Fens, T. & van Wijk, A. Utilisation of waste heat from PEM electrolysers – Unlocking local optimisation. Int J Hydrogen Energy 48, 27872–27891 (2023).

[2] Arsad, S. R. et al. Recent advancement in water electrolysis for hydrogen production: A comprehensive bibliometric analysis and technology updates. Int J Hydrogen Energy 60, 780–801 (2024).

[3] Malcher, X. & Gonzalez-Salazar, M. Strategies for decarbonizing European district heating: Evaluation of their effectiveness in Sweden, France, Germany, and Poland. Energy 306, 132457 (2024).

[4] Klute, S., Budt, M., van Beek, M. & Doetsch, C. Steam generating heat pumps – Overview, classification, economics, and basic modeling principles. Energy Convers Manag 299, 117882 (2024).



10:50am - 11:10am

Model-based Operability and Safety Optimization for PEM Water Electrolysis

Beatriz Dantas1, Sahithi Srijana Akundi2,3,4, Yuanxing Liu2,3,4, Austin Braniff1, Shayan S. Niknezhad2, Faisal Khan3,4, Efstratios N. Pistikopoulos2,4, Fernando V. Lima1, Yuhe Tian1

1Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, USA; 2Texas A&M Energy Institute, Texas A&M University, College Station, TX, USA; 3Mary Kay O’Connor Process Safety Center (MKOPSC), Texas A&M University, College Station, TX, USA; 4Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, TX, USA

In recent years, the transition of the energy grid to hydropower, wind, and solar photovoltaics has gained significant attention. While renewable sources are at the core of the energy transition and remain among the most decisive factors in decarbonizing economies, their output varies on daily and seasonal timescales, which can lead to fluctuations in energy supply. Advancements in electrolysis technologies, such as proton exchange membrane water electrolyzer (PEMWE) systems, are critical in addressing the challenges of integrating renewable energy sources for hydrogen production. PEMWE stands out among other technologies, including solid oxide and alkaline electrolyzers, due to its simplicity, reversible operation, the higher current densities it supports, and its ability to supply highly pure hydrogen. These attributes make PEMWE a promising option for coupling with intermittent renewable energy sources, contributing to the decarbonization of various sectors (e.g., transportation, industry, and power generation).

This study aims to develop a novel systematic approach to quantify the safe operating window of a PEMWE system considering energy intermittency and varying hydrogen demand. The PEMWE model has been developed based on first principles with the polarization curve validated against a lab-scale experimental setup. The impact of key operational variables will be investigated including current density, inlet temperature, and water flow rate (utilized for both feed and system cooling). Emphasis is given to operating temperature, a safety-critical variable, as its elevation can pose significant hydrogen safety risks within both the electrolyzer cells and the storage system. Increased temperatures also have negative effects on the durability of PEMWE, accelerating the degradation processes of the membrane and catalysts and thus increasing the likelihood of reaching hazardous conditions. The impact of temperature will be quantified via a risk index considering the fault probability and consequence severity. Process operability analysis is employed to quantify the achievability of a safe and feasible region through the integration of design and control strategies in early design stages. By calculating the possible output space given a set of inputs, forward mapping offers insights into the capabilities of the system under various operating scenarios (e.g., fluctuating energy supply and demand-driven operations). Inverse mapping computations are then carried out to evaluate the viability of functioning within a certain targeted area, pinpointing the required input configurations to meet the productivity objective while maintaining process safety. By combining forward and inverse mapping, this analysis provides a comprehensive framework to optimize PEMWE systems for enhanced operational flexibility and robust performance with application to modular hydrogen production using renewable energy sources.



11:10am - 11:30am

Modelling and Analysis of CO2 Electrolysers Integrated with Downstream Separation Processes via Heat Pumps

Riccardo Dal Mas, Andrea Carta, Ana Somoza-Tornos, Anton A. Kiss

Delft University of Technology, The Netherlands

The electrification of chemical processes represents a promising approach to improve efficiency and utilize waste heat, while potentially introducing flexible operational frameworks that exploit periods of abundant and/or cheap electricity supply. Electrolysers could be pivotal in this shift, enabling hydrogen production from renewable sources and converting CO2 into value-added products. However, these processes present inefficiencies leading to the release of significant amounts of low-temperature heat (<100 °C) by the electrolysers, which can account for 20% up to 60–70% of the input power to the stacks (Dal Mas, 2024, Front. Energy Res.). This research tackles the challenge of utilising this waste energy by modelling a system that combines electrolysers with downstream separation processes, employing heat pumps to upgrade and use the waste heat for the separation process.

The study utilizes a steady-state model of a system that includes multiple electrolyser stacks, a balance of plant, and a product separation system composed of electrodialysis and distillation, which also allows for the recycling of unreacted water and electrolyte. The case study of choice is formic acid production via direct CO2 electrolysis, a process which could become feasible for large-scale industrial implementation (Dal Mas, 2024, Comput. Aided Chem. Eng.). A key objective is to ensure that the energy needs of the downstream processes are, to the extent possible, satisfied by the upgraded waste heat from the electrolysers, thus optimizing the overall system efficiency and minimising the need for external heating sources.

Through this research, a novel process design integrating electrolysis and downstream separation systems with heat pumps has been developed, exploiting simulations in Aspen Plus. The project offers valuable insights into the efficient integration of electrolysers and downstream processes, emphasizing the role of heat pumps in enhancing system performance and energy utilization (based on 25 MW power input to the electrolyser, a COP of 3 was obtained through the application of vapour compression heat pumps).

In addition to steady-state analysis, the aim of the work is also to conduct dynamic simulations to understand the system's response to various disturbances, including partial shutdowns and subsequent startups of the electrolysers. This dynamic study will evaluate how such fluctuations impact separation performance and energy demand, with a particular interest in the performance of the heat pump and of the separation processes.



11:30am - 11:50am

A Data-Driven Conceptual Approach to Heat Pump Sizing in Chemical Processes with Fluctuating Heat Supply and Demand

Thorben Hochhaus1, Johannes Wloch1, Marcus Grünewald1, Julia Riese2

1Ruhr University Bochum, Germany; 2Paderborn University, Germany

The CO2 emissions of the chemical industry must be reduced significantly to meet Europe’s climate goal of net-zero emissions by 2050. A large proportion of the heat demand in the process industry is still met by burning fossil fuels such as natural gas, which results in a significant amount of CO2 emissions to the atmosphere. Therefore, an effort should be made to reduce the amount of process heat supplied from fossil fuels. This can be achieved by improving the efficiency of chemical processes or by using alternative, low-carbon sources of heat supply. Compression heat pumps offer a possibility of providing heat with low CO2 emissions when powered by electricity with a low CO2 emission factor. Formerly unused waste heat, lifted to a higher temperature level by the heat pump, can then substitute process heat provided from fossil sources.
Despite the advantages of heat pumps for providing process heat with low CO2 emissions, heat pump integration into chemical processes is still limited by major challenges. Among others, these challenges include economic feasibility and a lack of experience regarding appropriate heat pump sizing. Tackling the challenge of appropriate heat pump sizing requires thorough analysis of process data regarding heat supply and waste heat availability. Non-continuous processes with fluctuating waste heat supply and process heat demand present an additional challenge for heat pump sizing. Key evaluation criteria must be derived which support further decision-making in heat pump sizing. Usually, mathematical programming methods, sometimes combined with concepts of pinch analysis, are used for this task [1]. This requires a good understanding of modelling and optimization techniques on the part of the process engineer. A more user-friendly way to quickly estimate and evaluate investment decisions could enhance the broader application of heat pumps in chemical engineering.
In this contribution, a collection of methods and criteria is presented which may be useful for quick decision-making regarding heat pump sizing. These criteria vary for different configurations of heat supply system design, including, e.g., combinations of heat pumps and heat storage. Moreover, the availability of waste heat and the demand for process heat play a crucial role in decision-making for heat pump sizing and therefore need to be included in the derivation of sizing criteria. Using a case study, possible methods to determine heat pump integration configurations for different scenarios are derived. By integrating relevant operational constraints, the sensitivity of the different evaluation criteria is determined.

References
[1] J.V. Walden, B. Wellig, P. Stathopoulos, Heat pump integration in non-continuous industrial processes by Dynamic Pinch Analysis Targeting, Applied Energy 352 (2023) 121933. https://doi.org/10.1016/j.apenergy.2023.121933.



11:50am - 12:10pm

Evaluating Energy Transition Pathways for CO2 Reduction in Industries with Low-Temperature Heat Demand: A Superstructure Optimization Approach

Juliette M.S. Limpach1, Muhammad Salman1, Daniel Flórez-Orrego2, François Maréchal2, Grégoire Léonard1

1Chemical Engineering, University of Liège, Liège Sart Tilman, 4000, Belgium.; 2Federal Polytechnic School of Lausanne, IPESE group, Sion, Switzerland.

The goal of achieving net zero emissions by 2050 has driven industries to intensify their efforts toward implementing CO2 reduction strategies, particularly as CO2 quotas increase. While hard-to-abate sectors such as steel, cement, glass, and lime production, characterized by high-temperature energy demands, receive much of the attention, sectors with lower emissions and typically low- to moderate-temperature heat demands also play a vital role collectively. The objective of this study is to analyze the often overlooked potential of the energy transition in these sectors, given their substantial contribution to the overall emissions reduction strategy necessary to meet long-term climate goals.

Key CO2 reduction strategies include heat recovery, fuel substitution (e.g., electricity, hydrogen, biomass, and biogas), and CO2 capture technologies. A superstructure-optimization-based methodology is developed to evaluate the various transition pathways in each sector, and blueprint (BP) models are built that incorporate detailed mass and energy balances as well as cost parameters for each sector. Afterwards, the Osmose Lua optimization framework, developed at EPFL, is used to solve a mixed-integer linear programming (MILP) formulation with objective functions for total specific cost and emissions. It optimizes the superstructure of the BP model, serving as decision support to identify the most effective CO2 reduction strategy tailored to each sector's specific needs.
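A toy version of the technology-selection MILP at the core of such a superstructure, formulated here with scipy.optimize.milp; the costs, emission factors, and demand are invented numbers, and the real blueprint models include full mass and energy balances and are solved in the Osmose framework rather than SciPy:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Technologies: gas boiler, electric heat pump, biogas boiler (illustrative data).
op_cost = np.array([30.0, 45.0, 55.0])      # EUR per MWh of heat
inv_cost = np.array([0.0, 400.0, 250.0])    # annualised EUR (gas boiler assumed existing)
emis = np.array([0.20, 0.05, 0.02])         # t CO2 per MWh of heat
demand, q_max, co2_cap = 10.0, 10.0, 1.0    # MWh/h demand, unit capacity, CO2 cap

# Variables: q (3 continuous heat duties) followed by y (3 installation binaries).
c = np.concatenate([op_cost, inv_cost])
A_demand = np.concatenate([np.ones(3), np.zeros(3)])    # sum(q) >= demand
A_link = np.hstack([np.eye(3), -q_max * np.eye(3)])     # q_i <= q_max * y_i
A_co2 = np.concatenate([emis, np.zeros(3)])             # sum(e_i * q_i) <= cap

constraints = [
    LinearConstraint(A_demand, demand, np.inf),
    LinearConstraint(A_link, -np.inf, 0.0),
    LinearConstraint(A_co2, -np.inf, co2_cap),
]
res = milp(c=c, constraints=constraints,
           integrality=np.concatenate([np.zeros(3), np.ones(3)]),
           bounds=Bounds(np.zeros(6), np.concatenate([np.full(3, q_max), np.ones(3)])))
print("heat duties (MWh/h):", np.round(res.x[:3], 2), "installed:", res.x[3:].round())
```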

Three industries, namely laundry, frozen vegetable processing, and syrup production, were selected as case studies due to their low-temperature heat demand, predominantly met by natural gas-fired boilers. These industries offer significant opportunities for efficiency improvements through heat recovery and the integration of heat pumps. For instance, in the frozen vegetable processing industry, the implementation of heat pumps to recover waste heat from air cooling processes can reduce specific energy consumption by up to 17%. In sectors producing liquid and solid waste, such as the food processing industry, there is further potential for bio-sourced fuel production. The syrup industry, for example, could cut its CO2 emissions by 50% by converting solid waste into biogas, to fuel its boilers.

The viability of these technological solutions is heavily influenced by future energy scenarios, including the price of energy and CO2 emissions. Hence, this study provides critical insights into how industrial sectors with low-T heat demand can transition toward cleaner energy sources in future scenarios, while balancing economic constraints and technological readiness. The findings underscore the importance of tailored, sector-specific strategies for achieving significant emissions reductions and meeting global climate objectives.

 
10:30am - 12:30pmT10: PSE4BioMedical and (Bio)Pharma - Session 1
Location: Zone 3 - Room D049
Chair: Yusuke Hayashi
Co-chair: Prashant Mhaskar
 
10:30am - 10:50am

Modeling the Impact of Non-Ideal Mixing on Continuous Crystallization: A Non-Dimensional Approach

Jan Trnka, František Štěpánek

University of Chemistry and Technology, Prague, Czech Republic

Mathematical modeling is essential for the effective control of many chemical engineering processes, including crystallization. However, most existing continuous crystallization models used in industry and academia assume ideal mixing. As a result, the effects of imperfect mixing on crystallization are largely unexplored in the literature.

Population balance modeling (PBM) is the standard approach for crystallization processes. While PBM can be integrated within a computational fluid dynamics (CFD) framework, this method is computationally demanding, making it impractical for extensive parametric studies. Alternative modeling approaches are therefore required to address the impact of non-ideal mixing at lower computational cost. In our study, we employ a simplified mixing model based on the engulfment model originally developed by Baldyga and Bourne in the context of reaction engineering [1].

We present an efficient method for nondimensionalizing the model equations. As in other areas of chemical engineering, nondimensionalization offers valuable insights by reducing the number of parameters and revealing characteristic system properties. It also provides appropriate scaling of variables, improving both the precision and efficiency of numerical computations.

Our work focuses on a theoretical study of continuous crystallization, including extensive parametric investigations and attainable-region analysis [2]. We compare the simulation results of our model with those obtained under the assumption of perfect mixing. The primary objective is to assess the effect of mixing intensity on particle size. Interestingly, we demonstrate that increasing mixing intensity can either increase or decrease the mean crystal size, in agreement with experimental observations, and we offer an explanation for this behavior.

To validate the model’s relevance, we use experimental data from the literature to fit the model parameters. Our results indicate that the simplified model performs comparably to more complex CFD-based models, providing a computationally efficient alternative for studying crystallization under non-ideal mixing conditions.

1. Bałdyga, J.; Bourne, J.; Hearn, S., Interaction between chemical reactions and mixing on various scales. Chemical Engineering Science 1997, 52 (4), 457-466.

2. Vetter, T.; Burcham, C. L.; Doherty, M. F., Regions of attainable particle sizes in continuous and batch crystallization processes. Chemical Engineering Science 2014, 106, 167-180.



10:50am - 11:10am

Process Design of an Industrial Crystallization Based on Degree of Agglomeration

Yung-Shun Kang1, Hemalatha Kilari1, Neda Nazemifard2, C. Benjamin Renner2, Yihui Yang2, Charles D. Papageorgiou2, Zoltan Nagy1

1Purdue University, United States of America; 2Process Engineering and Technology, SMPD, Takeda, Cambridge, MA

Agglomeration, often undesirable in crystallization, can lead to impurity incorporation [1,2], longer filtration and drying times, and challenges in achieving uniformity in particle size and content. While controlling supersaturation and agitation rates has been shown to mitigate agglomeration [1,3], more rigorous techniques, such as thermocycles, can promote deagglomeration and dissolve fines resulting from breakage or attrition.

Optimizing thermocycles involves determining parameters like the number of heating-cooling cycles and heating-cooling rates. This presents a complex optimization challenge, where traditional quality-by-control methods relying on process analytical technology become resource-intensive [4,5]. A more efficient alternative is the use of population balance models (PBMs) to monitor and control agglomeration during crystallization [6].

In this study, we propose a model-based approach for optimizing temperature profiles to minimize agglomeration and increase crystal size. A PBM is coupled with the number density of agglomerates to monitor agglomeration during thermocycles. The hybrid PBM incorporates key mechanisms such as nucleation, growth, dissolution, agglomeration, and deagglomeration, and is applied to the crystallization of an industrial active pharmaceutical ingredient, Compound K. Most parameters were estimated through design of experiments (DoE) in a prior study, while additional thermocycle experiments were conducted to refine dissolution parameters.

Results from in-silico DoE simulations indicate that the hybrid PBM approach surpasses traditional methods in accurately evaluating process performance when agglomeration is involved. This approach provides a more robust means of assessing agglomeration control compared to methods based solely on particle bridge formation. Moreover, the simulations reveal that while thermocycles are effective in reducing agglomeration, their efficiency saturates after a certain number of cycles, which was verified experimentally.

In conclusion, this study demonstrates the value of a model-based approach for optimizing thermocycle profiles, leading to improved control over agglomeration compared to linear cooling or quality-by-design recipes. The optimized thermocycles resulted in reduced agglomeration and shorter batch times, making this approach more efficient for crystallization process optimization.

  1. Urwin, Stephanie J., et al. "A structured approach to cope with impurities during industrial crystallization development." Organic process research & development 24.8 (2020): 1443-1456.
  2. Terdenge, Lisa‐Marie, and Kerstin Wohlgemuth. "Impact of agglomeration on crystalline product quality within the crystallization process chain." Crystal Research and Technology 51.9 (2016): 513-523.
  3. Sun, Zhuang, et al. "Use of Wet Milling Combined with Temperature Cycling to Minimize Crystal Agglomeration in a Sequential Antisolvent–Cooling Crystallization." Crystal Growth & Design 22.8 (2022): 4730-4744.
  4. Fujiwara, Mitsuko, et al. "First-principles and direct design approaches for the control of pharmaceutical crystallization." Journal of Process Control 15.5 (2005): 493-504.
  5. Wu, Wei-Lee, et al. "Implementation and application of image analysis-based turbidity direct nucleation control for rapid agrochemical crystallization process design and scale-up." Industrial & Engineering Chemistry Research 61.39 (2022): 14561-14572.
  6. Szilagyi, Botond, et al. "Application of model-free and model-based quality-by-control (QbC) for the efficient design of pharmaceutical crystallization processes." Crystal Growth & Design 20.6 (2020): 3979-3996.


11:10am - 11:30am

Developing a Solvent Selection Framework to Recover Active Pharmaceutical Ingredients from Unused Solid Drugs

Shrivatsa Shrirang Korde1, Aishwarya Menon2, Gintaras Reklaitis1, Zoltan Nagy1

1Davison School of Chemical Engineering, Purdue University, West Lafayette, IN, United States of America; 2Agricultural & Biological Engineering, Purdue University, West Lafayette, IN, United States of America

The rapid population growth and escalating economic burden of human diseases suggest a potential rise in pharmaceutical waste, necessitating proper management strategies to address this challenge effectively. Sources of unused drugs extend beyond consumer waste products that have exceeded their shelf life: they may arise anywhere in the supply chain, including failed batches created during drug product manufacture. Previous research has highlighted the presence of pharmaceutical pollutants in water, soil, and surface environments.1,2

Regardless of the source of unused drugs, it is imperative to ensure appropriate waste management. Given the significant health and environmental impacts of active pharmaceutical ingredients (APIs) in pharmaceutical formulations, prioritizing their separation from various excipients in their formulations is a strategic approach to address this challenge. When recycling APIs, it is crucial to ensure that the recovered APIs possess the desired critical quality attributes (CQAs), such as purity and particle size distribution, while adhering to strict FDA regulations. Traditionally, APIs, after synthesis, are purified through crystallization.3 Solvent selection is the primary step in obtaining high-purity APIs post-synthesis. Likewise, solvent selection plays a crucial role in recovering the APIs from tablets containing a variety of excipients and selectively dissolving and re-crystallizing them.4

This study proposes a robust solvent selection framework for recycling APIs that combines different solvent selection metrics such as solubility, percentage recovery, and process mass intensity. The framework is demonstrated using paracetamol as a model API with six common excipients and ten solvents commonly used in crystallization studies. First, a process flowsheet is modeled to simulate the steady-state recovery process. Next, temperature-dependent equilibrium solubility curves are generated for the API in all the solvents, followed by the generation of excipient-solvent solubility curves. With the solubility curves in hand, the framework is executed to select solvents based on each metric. Solubility-based selection involves mapping the relative solubilities of the API and excipients over a range of temperatures for all the solvents, while recovery-based selection targets maximizing recovery in the final re-crystallization stage. Further, sustainability-based selection includes calculating the process mass intensity for each solvent. The solvents are then ranked under each metric. The results enable solvent selection for each metric while capturing the associated tradeoffs. The framework also enables solvent selection that incorporates multiple metrics as objectives, allowing for a more comprehensive and optimized approach to identifying a suitable solvent. This multi-objective consideration ensures that the chosen solvent balances factors such as solubility, recovery, and environmental impact, leading to a more effective and sustainable recovery process. The developed solvent selection framework lays a foundation for creating a process to recover APIs from their formulations.
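
A minimal sketch of the ranking logic behind such a framework is given below; the solvent names, solubilities, recoveries, and process mass intensities are invented placeholders, and the weighting scheme is only one possible way of combining the metrics.

    # Multi-metric solvent ranking sketch: solubility ratio, recovery, and process mass intensity (PMI).
    # All numbers are illustrative placeholders, not data from the study.
    import numpy as np

    solvents = ["ethanol", "2-propanol", "acetone", "ethyl acetate", "water"]
    api_solubility   = np.array([230.0, 180.0, 150.0,  90.0,  20.0])   # g API / kg solvent at T_hot
    excip_solubility = np.array([ 15.0,  10.0,  30.0,   5.0, 120.0])   # worst-case excipient solubility
    recovery         = np.array([ 0.82,  0.78,  0.74,  0.65,  0.40])   # fraction of API recovered
    pmi              = np.array([ 12.0,  14.0,  16.0,  22.0,  30.0])   # kg material / kg API (lower is better)

    # Normalise each metric to [0, 1] so that 1 is always "best".
    selectivity = api_solubility / excip_solubility
    scores = np.vstack([
        selectivity / selectivity.max(),
        recovery / recovery.max(),
        pmi.min() / pmi,
    ])
    weights = np.array([0.4, 0.4, 0.2])     # relative importance of the three metrics (assumed)
    overall = weights @ scores

    for name, s in sorted(zip(solvents, overall), key=lambda p: -p[1]):
        print(f"{name:14s} composite score = {s:.2f}")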



11:30am - 11:50am

Data-driven Digital Design of Pharmaceutical Crystallization Processes

Yash Ghanashyam Barhate1, Yung-Shun Kang1, Neda Nazemifard2, Ben Renner2, Yihui Yang2, Charles Papageorgiou2, Zoltan K. Nagy1

1Purdue University, United States of America; 2Takeda Pharmaceuticals International Company, United States of America

Mechanistic population-balance modeling (PBM)-based digital design has gained significant traction in the pharmaceutical industry, facilitating the design of crystallization processes to consistently produce active pharmaceutical ingredient (API) crystals with targeted critical quality attributes (CQAs), such as purity, crystal size distribution (CSD), and yield. However, developing accurate PBM models for specific API-solvent systems presents challenges due to the need for well-designed design of experiments (DoE) to decouple crystallization mechanisms, as well as high-quality real-time process data (e.g., concentration, CSD) from process analytical technology tools or offline analyses. The substantial resources—time, material, and labor—required to obtain such measurements, along with fast-paced pharmaceutical process development timelines, make PBM model development resource-intensive and impractical for numerous active compounds studied during clinical and development phases.1

In response, this research explores data-driven modeling as an alternative for building ‘fit-for-purpose’ digital twins of crystallization processes, aiding process development using a quality-by-digital design (QbD2) framework. While data-driven strategies have been actively researched, many existing frameworks have limited industrial applicability due to a lack of training on experimental data or reliance on real-time process measurements, which are not always available.2 A key contribution of this research is the development of a machine learning (ML)-based modeling workflow that leverages industrially available DoE data to link operating conditions with CQAs, addressing the knowledge gap and enhancing process development for industrially relevant API systems.

Given the absence of real-time measurements, this study operates within a low-data regime, making ML model development particularly challenging. To address this challenge, this research explores the augmentation of original experimental DoE data with synthetic data (generated from DoE data) to improve the ML model’s predictive performance. Synthetic data generation techniques, including Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), are employed to enrich the dataset and improve model training. Additionally, model-informed active learning is integrated to generate new, strategically designed DoEs, further improving model accuracy and robustness. This iterative workflow continues until the models achieve the desired accuracy, after which they are used within optimization frameworks to inversely design operating conditions that meet the target CQAs.
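
The sketch below illustrates the general augmentation idea under strong simplifications: a Gaussian mixture model stands in for the GAN/VAE generators named above, the DoE table is random placeholder data, and the comparison is only a crude cross-validation check. An active-learning step would then propose the next DoE point where the surrogate is most uncertain.

    # Low-data augmentation sketch: fit a generative model to DoE records (operating conditions + CQAs),
    # sample synthetic records, and retrain a surrogate. A Gaussian mixture stands in for the GAN/VAE
    # generators named in the abstract; the DoE table itself is random placeholder data.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform([0.1, 20.0, 100.0], [1.0, 60.0, 500.0], size=(24, 3))   # cooling rate, T_final, rpm
    y = 50 + 30 * X[:, 0] - 0.4 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 2, 24)  # e.g. D50 in micron

    # 1) Fit a generative model on the joint (X, y) table and draw synthetic DoE records.
    gm = GaussianMixture(n_components=3, random_state=0).fit(np.column_stack([X, y]))
    synthetic, _ = gm.sample(200)
    X_aug = np.vstack([X, synthetic[:, :3]])
    y_aug = np.concatenate([y, synthetic[:, 3]])

    # 2) Train the condition -> CQA surrogate and compare the original and augmented sets.
    for label, (Xi, yi) in {"original": (X, y), "augmented": (X_aug, y_aug)}.items():
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        score = cross_val_score(model, Xi, yi, cv=4, scoring="r2").mean()
        print(f"{label:9s} surrogate CV R^2 = {score:.2f}")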

The effectiveness of this workflow is demonstrated using industrial data for a commercial API facing challenges related to crystallinity, severe agglomeration tendencies, and slow growth kinetics, which are CQAs that are difficult to describe using mechanistic modeling approaches. The proposed workflow enables the efficient digital design of the crystallization process, identifying operating conditions that achieve the desired CQAs.

References:

1. Nagy, Z. K. & Braatz, R. D. Advances and new directions in crystallization control. Annu. Rev. Chem. Biomol. Eng. 3, 55–75 (2012).

2. Xiouras, C. et al. Applications of Artificial Intelligence and Machine Learning Algorithms to Crystallization. Chem. Rev. 122, 13006–13042 (2022).



11:50am - 12:10pm

Bayesian Optimization for Enhancing Spherical Crystallization Derived from Emulsions: A Case Study on Ibuprofen

Xinyu Cao1, Yifan Song2, Jiayuan Wang2, Linyu Zhu2, Xi Chen1

1State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China; 2College of Chemical Engineering, Zhejiang University of Technology, Hangzhou 310014, China

The pharmaceutical industry is a highly specialized field where strict quality control and accelerated time-to-market are essential for maintaining competitive advantage. Spherical crystallization has emerged as a promising approach in pharmaceutical manufacturing, offering significant potential to reduce equipment and operating costs, enhance drug bioavailability, and facilitate compliance with product quality regulations. Emulsions, as an enabling technology for spherical crystallization, present unique advantages. However, the quality of spherical crystallization products derived from emulsions is significantly influenced by the intricate interactions between crystallization phenomena, formulation variables, and solution hydrodynamics. These complexities pose substantial challenges in determining optimal operational conditions to achieve the desired product characteristics.

In this study, Bayesian optimization (BO) is employed to refine and optimize the operational conditions for the spherical crystallization of a representative drug, ibuprofen. The primary goal is to improve product flowability, measured by a reduced angle of repose, while maintaining the D50 particle size within a specified range. The optimization process focuses on key variables such as temperature, stirring speed, duration, and BSA concentration. With the help of acquisition functions, BO enhances control over crystal growth and aggregation, enabling the identification of a high-quality product with fewer experimental trials compared with traditional design of experiments (DoE) methods.
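
A generic sketch of such a Bayesian optimization loop is shown below, using a Gaussian-process surrogate and expected improvement; the "experiment" is a toy stand-in for the real angle-of-repose and D50 measurements, the variable ranges are assumptions, and the D50 constraint is handled with a simple penalty.

    # Bayesian-optimization sketch: Gaussian-process surrogate + expected improvement, with the
    # D50 constraint handled as a penalty. The objective below is a toy stand-in for the real
    # angle-of-repose / D50 measurements, and the variable ranges are illustrative.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(1)
    lo = np.array([15.0, 200.0, 30.0, 0.1])    # T (C), stirring (rpm), duration (min), BSA (g/L)
    hi = np.array([40.0, 800.0, 180.0, 2.0])

    def run_experiment(x):                     # toy objective: angle of repose + D50 penalty
        T, rpm, t, bsa = x
        angle = 40 - 0.1 * (T - 25) - 0.02 * (rpm - 500) ** 2 / 1000 + 5 * abs(bsa - 1.0)
        d50 = 80 + 0.05 * rpm - 0.3 * t
        penalty = 10.0 * max(0.0, abs(d50 - 100.0) - 20.0)     # keep D50 within 100 +/- 20 micron
        return angle + penalty + rng.normal(0, 0.3)

    X = rng.uniform(lo, hi, size=(6, 4))                       # initial space-filling trials
    y = np.array([run_experiment(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for it in range(15):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(2000, 4))
        mu, sd = gp.predict(cand, return_std=True)
        imp = y.min() - mu                                     # expected improvement (minimisation)
        ei = imp * norm.cdf(imp / (sd + 1e-9)) + sd * norm.pdf(imp / (sd + 1e-9))
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, run_experiment(x_next))

    print("best conditions:", X[np.argmin(y)].round(2), "objective:", y.min().round(2))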



12:10pm - 12:30pm

Model-based approach to template-induced macromolecule crystallisation

Daniele Pessina1,2, Jorge Calderon de Anda4, Claire Heffernan4, Jerry Y.Y. Heng1,3, Maria M. Papathanasiou1,2

1Department of Chemical Engineering, Imperial College London, London, SW7 2AZ; 2Sargent Centre for Process Systems Engineering, Imperial College London, London, SW7 2AZ; 3Institute for Molecular Science and Engineering, Department of Chemical Engineering, Imperial College London, London, SW7 2AZ; 4Chemical Development, Pharmaceutical Technology & Development, Operations, AstraZeneca, Macclesfield, SK10 2NA

Biomacromolecules have intricate crystallisation behaviour due to their size and many interactions in solution, and can often only crystallise in narrow ranges of experimental conditions [1]. High solute concentrations are needed for crystal nucleation and growth, exceeding those eluted upstream and therefore preventing the adoption of crystallisation in downstream separation steps [2]. By promoting molecular aggregation and nucleation via a lowered energy barrier, heterogeneous surfaces or templates can relax the supersaturation requirements and widen the crystallisation operating space [3].

Though templates are promising candidates for process optimisation, their experimental testing has generally been limited to small-volume experiments, and quantification of their impact on process intensification and quality metrics at higher volumes remains unexplored [4]. Computational crystallisation models can support in-vitro experiments and accelerate process development as virtual experiments can be performed quicker and with reduced material costs [5].

To address this knowledge gap, a model-based investigation of template-induced protein crystallisation systems through evaluation of key metrics is presented. Porous silica nanoparticles with three chemical functionalisations (hydroxyl, carboxyl and butyl) are added to 40 ml batch lysozyme crystallisation experiments. Crystallisation population balance models are parametrised with an experimentally validated parameter estimation methodology, and further experiments are simulated. The templates appear to lower the estimated interfacial energy compared to the homogeneous case, leading to nucleation rate profiles that are less dependent on supersaturation. As a result, the templated systems crystallise more quickly than the homogeneous system, particularly at lower initial concentrations. The simulation results highlight the ability of heteronucleants to alter nucleation rate profiles, and their potential to be used as process optimisation and intensification tools for biomacromolecule purification.
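
The qualitative effect described here can be illustrated with classical nucleation theory, where lowering the effective interfacial energy flattens the dependence of the nucleation rate on supersaturation; the sketch below uses assumed values for the interfacial energy, molecular volume, and pre-exponential factor, not the parameters estimated in this work.

    # Classical-nucleation-theory sketch: a lower effective interfacial energy (as templates provide)
    # flattens the dependence of the nucleation rate on supersaturation. Numbers are illustrative,
    # not the estimated lysozyme parameters.
    import numpy as np

    k_B = 1.380649e-23      # J/K
    T = 293.0               # K
    v0 = 3.0e-26            # molecular volume, m^3 (assumed)
    A = 1.0e10              # pre-exponential factor, #/(m^3 s) (assumed)

    def nucleation_rate(S, gamma):
        """CNT rate J = A * exp(-16*pi*gamma^3*v0^2 / (3*(k_B*T)^3 * ln(S)^2))."""
        return A * np.exp(-16 * np.pi * gamma**3 * v0**2 / (3 * (k_B * T) ** 3 * np.log(S) ** 2))

    S = np.linspace(2.0, 8.0, 4)
    for gamma, label in [(0.9e-3, "homogeneous"), (0.6e-3, "templated (lower gamma)")]:
        print(label, ["%.2e" % r for r in nucleation_rate(S, gamma)])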

Acknowledgements: This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) for the Imperial College London Doctoral Training Partnership (DTP) and by AstraZeneca UK Ltd through a CASE studentship award.

1. Hekmat D (2015) Large-scale crystallization of proteins for purification and formulation. Bioprocess Biosyst Eng 38:1209–1231. https://doi.org/10.1007/s00449-015-1374-y

2. McCue C, Girard H-L, Varanasi KK (2023) Enhancing Protein Crystal Nucleation Using In Situ Templating on Bioconjugate-Functionalized Nanoparticles and Machine Learning. ACS Appl Mater Interfaces 15:12622–12630. https://doi.org/10.1021/acsami.2c17208

3. Govada L, Rubio N, Saridakis E, et al (2022) Graphene-Based Nucleants for Protein Crystallization. Adv Funct Mater 32:2202596. https://doi.org/10.1002/adfm.202202596

4. Chen W, Cheng TNH, Khaw LF, et al (2021) Protein purification with nanoparticle-enhanced crystallisation. Sep Purif Technol 255:117384. https://doi.org/10.1016/j.seppur.2020.117384

5. Orosz Á, Szilágyi E, Spaits A, et al (2024) Dynamic Modeling and Optimal Design Space Determination of Pharmaceutical Crystallization Processes: Realizing the Synergy between Off-the-Shelf Laboratory and Industrial Scale Data. Ind Eng Chem Res 63:4068–4082. https://doi.org/10.1021/acs.iecr.3c03954

 
12:30pm - 1:30pmEurecha Meeting - for Eurecha members only
Location: L226
Annual meeting of the Eurecha members.
12:30pm - 2:30pmLunch
Location: Zone 2 - Cafetaria
1:20pm - 2:30pmPoster Session 1
Location: Zone 2 - Cafetaria
 

IMPLEMENTATION AND ASSESSMENT OF FRACTIONAL CONTROLLERS FOR AN INTENSIFIED DISTILLATION SYSTEM

Luis Refugio Flores-Gómez1, Fernando Israel Gómez-Castro1, Francisco López-Villarreal2, Vicente Rico-Ramírez3

1Universidad de Guanajuato, Mexico; 2Instituto Tecnológico de Villahermosa, Mexico; 3Tecnológico Nacional de México en Celaya, Mexico

Process intensification is a strategy applied in chemical engineering devoted to the development of technologies that enhance the performance of the operations in a chemical process. This is achieved through the implementation of modified equipment and multi-tasking equipment, among other approaches. Although various studies have demonstrated that the dynamic properties of intensified systems can be better than those of conventional configurations, the development of better control structures is still necessary (Wang et al., 2018). The use of fractional controllers can be an alternative to achieve this target. Fractional PID controllers are based on fractional calculus, increasing the flexibility of the controller by allowing fractional orders for the derivative and integral actions. However, this makes tuning the controller more complex. This work presents an approach to implement and assess fractional controllers in an intensified distillation system. The study is performed in the Simulink environment in Matlab, tuning the controllers through a hybrid optimization approach: first using a genetic algorithm to find an initial point, and then refining the solution with the fmincon algorithm. The calculations also involve the estimation of fractional derivatives and integrals with fractional-order numerical techniques. As a case study, experimental dynamic data for an extractive distillation column are used (Kumar et al., 1984). The data have been fitted to fractional-order transfer functions. Since the number of experimental points is low, a strategy is implemented to interpolate data and generate a more adequate fit to the fractional-order transfer function. Through this approach, the sum of squared errors is below 2.9×10⁻⁶ for perturbations in heat duty and 1.2×10⁻⁵ for perturbations in the reflux ratio. Moreover, after controller tuning, a minimal ISE value of 1,278.12 is obtained, which is approximately 8% lower than the value obtained for an integer-order controller.
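
As a rough Python sketch of the ingredients involved (the study itself uses Matlab/Simulink), the code below evaluates the ISE of a fractional PI^lambda D^mu controller on a placeholder first-order plant using a Grunwald-Letnikov discretisation, then tunes it with scipy's differential_evolution followed by a local refinement, standing in for the GA + fmincon hybrid.

    # Sketch of a fractional PI^lambda D^mu controller evaluated through a Grunwald-Letnikov
    # discretisation and the integral of squared error (ISE). The first-order plant, setpoint and
    # bounds are placeholders, and scipy's differential_evolution plus a local minimize stand in
    # for the GA + fmincon hybrid used in the study.
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    h, n = 0.02, 1500                                  # time step (s) and horizon (30 s)

    def gl_weights(alpha, n):
        """Grunwald-Letnikov coefficients c_j for a fractional operator of order alpha."""
        c = np.empty(n + 1)
        c[0] = 1.0
        for j in range(1, n + 1):
            c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
        return c

    def ise(params):
        Kp, Ki, Kd, lam, mu = params
        w_int = gl_weights(-lam, n)                    # fractional integral of order lambda
        w_der = gl_weights(mu, n)                      # fractional derivative of order mu
        e = np.zeros(n + 1)
        y, cost = 0.0, 0.0
        for k in range(n):
            e[k] = 1.0 - y                             # unit step in the setpoint
            u = (Kp * e[k]
                 + Ki * h ** lam * np.dot(w_int[:k + 1], e[k::-1])
                 + Kd * h ** (-mu) * np.dot(w_der[:k + 1], e[k::-1]))
            y += h * (-y + 1.5 * u) / 4.0              # first-order plant, gain 1.5, tau 4 s
            if abs(y) > 1e6:                           # penalise unstable candidates
                return 1.0e6
            cost += h * e[k] ** 2
        return cost

    bounds = [(0.0, 10.0), (0.0, 5.0), (0.0, 5.0), (0.5, 1.2), (0.5, 1.2)]
    rough = differential_evolution(ise, bounds, maxiter=15, popsize=8, seed=0, polish=False)
    fine = minimize(ise, rough.x, bounds=bounds, method="L-BFGS-B")
    print("tuned [Kp, Ki, Kd, lambda, mu]:", np.round(fine.x, 3), " ISE:", round(float(fine.fun), 4))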

References

Wang, C., Wang, C., Cui, Y., Guang, C., Zhang, Z., 2018. Economics and controllability of conventional and intensified extractive distillation configurations for acetonitrile/ethanol/benzene mixtures. Industrial & Engineering Chemistry Research, 57, 10551-10563.

Kumar, S., Wright, J.D., Taylor, P.A. 1984. Modelling and dynamics of an extractive distillation column. Canadian Journal of Chemical Engineering, 62, 185-192.



Sustainable pathways toward a decarbonized steel industry

Selene Cobo Gutiérrez1, Max Kroppen2, Juan Diego Medrano2, Gonzalo Guillén-Gosálbez2

1University of Cantabria; 2ETH Zurich

The steel industry, responsible for about 7% of global CO2 emissions1, faces significant pressure to reduce its environmental impact. Various technological pathways are available, but it remains unclear which is the most effective in minimizing CO2 emissions without causing greater environmental harm in other areas. This work aims to conduct the prospective life cycle assessment of five steelmaking pathways to identify the most environmentally sustainable option in terms of global warming impacts and damage to human health, ecosystems, and resources. The studied processes are 1) blast furnace plus basic oxygen furnace (BF-BOF, the dominant steelmaking route at present), 2) BF-BOF with carbon capture and storage (CCS), 3) coal-based direct reduction of iron paired with an electric arc furnace (DRI-EAF), 4) DRI-EAF using natural gas, and 5) the more recently developed low-temperature iron oxide electrolysis (IOE). Life cycle inventories were developed using a detailed Aspen Plus® model for BF-BOF, data from the Ecoinvent V3.8 database2, and literature for the other processes. The results indicate that the BF-BOF process with CCS, gas-based DRI-EAF, and IOE are the most promising pathways for reducing the steel industry’s carbon footprint while minimizing overall environmental damage. If renewable energy and hydrogen produced via water electrolysis are available at competitive costs, DRI-EAF and IOE show the most promise. However, if low-carbon hydrogen is not available and the main electricity source is the global grid mix, BF-BOF with CCS has the lowest overall impacts. The choice of technology depends on the expected development of the energy system and the current technological stock. Retrofitting existing BF-BOF plants with CCS is a viable option, while constructing new DRI-EAF plants may be more advantageous due to their versatility and higher decarbonization potential. IOE, although promising, is not yet ready for immediate industrial deployment but could be a key technology in the long term. In conclusion, the optimal technology choice depends on regional energy availability and technological readiness levels. These findings underscore the need for a tailored approach to decarbonizing the steel industry, balancing environmental benefits with economic and infrastructural considerations.

References

1. W. Cornwall. Science, 2024, 384(6695), 498-499.

2. G. Wernet, C. Bauer, B. Steubing, J. Reinhard, E. Moreno-Ruiz and B. Weidema, Int. J. Life Cycle Assess., 2016, 21, 1218–1230.



OPTIMIZATION OF HEAT EXCHANGERS THROUGH AN ENHANCED METAHEURISTIC STRATEGY: THE SUCCESS-BASED OPTIMIZATION ALGORITHM

Oscar Daniel Lara-Montaño1, Fernando Israel Gómez-Castro2, Claudia Gutiérrez-Antonio1, Elena Niculina Dragoi3

1Universidad Autónoma de Querétaro, Mexico; 2Universidad de Guanajuato, Mexico; 3Gheorghe Asachi Technical University of Iasi, Romania

The optimal design of the units in a chemical process is commonly challenging due to the high nonlinearity of the models that represent the equipment. This also applies to heat exchangers, where the mathematical equations modeling such units are nonlinear, include nonconvex terms, and require simultaneous handling of continuous and discrete variables. Finding the global optimum of such models is complex, so the optimization strategy must be robust. In this context, metaheuristics are a robust alternative to classical optimization strategies. They are a set of stochastic algorithms that, when adequately tuned, can efficiently locate the region of the global optimum and are well suited to nonconvex functions with several local optima. The literature presents numerous metaheuristics, each with distinct properties, many of which require parameter tuning. However, no universal method exists to solve all optimization problems, as stated by the no-free-lunch theorem (Wolpert and Macready, 1997). This implies that a given algorithm may work properly for some problems but perform inadequately for others, as reported for the optimal design of heat exchangers by Lara-Montaño et al. (2021). As such, new optimization strategies are still under development, and this work presents an enhanced metaheuristic algorithm, the Success-Based Optimization Algorithm (SBOA). The development of the method takes the concept of success from a social perspective as its initial inspiration. As a case study, the design of a shell-and-tube heat exchanger using the Bell-Delaware method is analyzed to minimize the total annual cost. The algorithm's performance is compared with current state-of-the-art metaheuristic algorithms, such as particle swarm optimization, the grey wolf optimizer, cuckoo search, and differential evolution. Based on the findings, in terms of standard deviation and mean values, the suggested algorithm outperforms nearly all other approaches except differential evolution. Nevertheless, the SBOA has shown faster convergence than differential evolution and has found best solutions with lower total annual costs.

References

Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.

Lara-Montaño, O.D., Gómez-Castro, F.I., Gutiérrez-Antonio, C. 2021. Comparison of the performance of different metaheuristic methods for the optimization of shell-and-tube heat exchangers. Computers & Chemical Engineering, 152, 107403.



OPTIMAL DESIGN OF PROCESS EQUIPMENT THROUGH HYBRID MECHANISTIC-ANN MODELS: EFFECT OF HYBRIDIZATION

Zaira Jelena Mosqueda-Huerta1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

Artificial neural networks (ANNs) are data-based structures that allow the representation of the performance of units in chemical processes. They have been widely used to represent the operation of equipment such as reactors (e.g. Cerinski et al., 2020) and separation units (e.g. Jawad et al., 2020). To develop ANN-based models, it is necessary to obtain data to train the network. Thus, their use for process design represents a challenge, since the equipment does not yet exist and actual data is commonly unavailable. On the other hand, despite the popularity of artificial neural networks for generating models of chemical processes, there are warnings about the risks of depending completely on these data-based models while ignoring the fundamental knowledge of the phenomena occurring in the units, which is provided by traditional mechanistic models. Thus, hybrid models have arisen to combine the power of ANNs to predict interactions that are difficult to represent through rigorous modelling while maintaining the relevant information provided by the traditional mechanistic approach. However, a rising question is: what part of the model should be represented through a data-based approach for design applications? To answer this question, this work analyzes the effect of the degree of hybridization in the design and optimization of a shell-and-tube heat exchanger, assessing the performance of a complete ANN model and a hybrid model in terms of computational time and accuracy of the solution. Since data for the heat exchanger are not available, such information is obtained by solving the rigorous model for randomly selected conditions. The Bell-Delaware approach is employed to design the exchanger. Such a model is characterized by nonlinearities and the need to handle discrete and continuous variables. Using the data, a neural network is trained in Python to approximate the area and cost of the exchanger. A second neural network is generated to predict the most nonlinear component of the model, namely the calculation of the heat transfer coefficients, while the other calculations are performed with the rigorous model. Both representations are optimized with the differential evolution algorithm. According to the preliminary results, for the same architecture, the hybrid model produces designs with a standard deviation approximately 30% lower than the complete ANN model, relative to the areas predicted by the rigorous model. However, the hybrid model requires approximately 11 times the computational time of the complete ANN model.
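
A compact sketch of the hybridization idea is given below: an ANN learns only the heat-transfer-coefficient part, while sizing and costing stay mechanistic. A Dittus-Boelter correlation is used here as a stand-in for the Bell-Delaware "rigorous model" that generates the training data, and all numbers are illustrative.

    # Hybrid mechanistic-ANN sketch: the ANN replaces only the heat-transfer-coefficient calculation,
    # while area and cost remain mechanistic. A Dittus-Boelter correlation stands in for the
    # Bell-Delaware rigorous model as the data source; all numbers are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def h_rigorous(velocity, d_tube):
        # "Rigorous" data source (stand-in): tube-side coefficient from Dittus-Boelter.
        rho, mu, cp, k = 995.0, 8.0e-4, 4180.0, 0.62          # water properties
        Re = rho * velocity * d_tube / mu
        Pr = cp * mu / k
        return 0.023 * Re**0.8 * Pr**0.4 * k / d_tube          # W/(m^2 K)

    X = rng.uniform([0.5, 0.012], [3.0, 0.025], size=(400, 2))  # velocity (m/s), tube diameter (m)
    y = np.array([h_rigorous(v, d) for v, d in X])

    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=4000, random_state=0).fit(X, np.log(y))

    def design(velocity, d_tube, duty=250e3, lmtd=35.0, h_shell=1500.0):
        # Mechanistic part: LMTD sizing and a simple cost law using the ANN-predicted coefficient.
        h_tube = float(np.exp(ann.predict([[velocity, d_tube]])[0]))
        U = 1.0 / (1.0 / h_tube + 1.0 / h_shell)                # overall coefficient, fouling neglected
        area = duty / (U * lmtd)                                # Q = U * A * LMTD
        cost = 30000.0 + 750.0 * area**0.81                     # placeholder purchase-cost correlation
        return h_tube, area, cost

    print("h_tube, area, cost:", [round(v, 1) for v in design(1.8, 0.019)])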

References

Cerinski, D., Baleta, J., Mikulčić, H., Mikulandrić, R., Wang, J., 2020. Dynamic modelling of the biomass gasification process in a fixed bed reactor by using the artificial neural network. Cleaner Engineering and Technology, 1, 100029.

Jawad, J., Hawari, A.H., Zaidi, S. 2020. Modeling of forward osmosis process using artificial neural networks (ANN) to predict the permeate flux. Desalination, 484, 114427.



MODELLING OF A PROPYLENE GLYCOL PRODUCTION PROCESS WITH ARTIFICIAL NEURAL NETWORKS: OPTIMIZATION OF THE ARCHITECTURE

Emilio Alba-Robles1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Jahaziel Alberto Sánchez-Gómez1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

The mathematical models used to represent chemical processes are characterized by high nonlinearity, mainly associated with the thermodynamic and kinetic relationships. The inclusion of nonconvex bilinear terms is also common when modelling chemical processes. This leads to challenges when optimizing an entire process. In recent years, interest in the development of data-based models to represent processing units has increased; as an example, the work of Kwon et al. (2021) on the dynamic performance of distillation columns can be mentioned. Artificial neural networks (ANNs) are among the most relevant strategies for developing data-based models. The accuracy of an ANN's predictions is highly dependent on the quality of the provided data, the nature of the interactions among the studied variables, and the architecture of the network. Indeed, the selection of an adequate architecture is itself an optimization problem. In this work, two strategies are proposed and assessed for determining the architecture of ANNs that represent the performance of a chemical process. As a case study, a process to produce propylene glycol using glycerol as raw material is analyzed (Sánchez-Gómez et al., 2023). The main units of the process are the chemical reactor and two distillation columns. To generate the data required to train the artificial neural network, random values for the design and operating variables are generated from a simulation in Aspen Plus. To determine the best architecture for the artificial neural network, two approaches are used: (i) the random generation of ANN structures, and (ii) the formal optimization of the architecture employing the ant colony algorithm, which is particularly useful for discrete problems (Zhao et al., 2022). In both cases, the decision variables are the number of hidden layers and the number of neurons per layer. The objective function is the minimization of the mean squared error. Both strategies generate ANN-based predictions in good agreement with the data from rigorous simulation, with r2 values higher than 99.9%. However, the ant colony algorithm achieves the best fit, although it converges more slowly.
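
Strategy (i), the random generation of architectures, can be sketched as below; the training data come from a synthetic placeholder function standing in for the Aspen Plus flowsheet samples, and an ant colony search (strategy ii) would replace the random sampler.

    # Architecture-search sketch (strategy i: random generation of ANN structures). Training data are
    # produced by a synthetic placeholder function standing in for the Aspen Plus flowsheet samples.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform([150.0, 1.0, 10.0], [230.0, 20.0, 40.0], size=(300, 3))  # T, P, reflux (placeholders)
    y = 0.6 + 0.002 * X[:, 0] - 0.01 * X[:, 1] + 0.004 * X[:, 2] + rng.normal(0, 0.01, 300)  # e.g. purity

    best = None
    for _ in range(20):                                   # random architecture candidates
        n_layers = rng.integers(1, 4)                     # 1-3 hidden layers
        layers = tuple(int(rng.integers(4, 64)) for _ in range(n_layers))
        model = MLPRegressor(hidden_layer_sizes=layers, max_iter=3000, random_state=0)
        mse = -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()
        if best is None or mse < best[0]:
            best = (mse, layers)

    print(f"best architecture {best[1]} with CV mean squared error {best[0]:.2e}")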

References

Kwon, H., Oh, K.C., Choi, Y., Chung, Y.G., Kim, J., 2021. Development and application of machine learning-based prediction model for distillation column. International Journal of Intelligent Systems, 36, 1970-1997.

Sánchez-Gómez, J.A., Gómez-Castro, F.I., Hernández, S. 2023. Design and intensification of the production process of propylene glycol as a high value-added glycerol derivative. Computer Aided Chemical Engineering, 52, 1915-1920.

Zhao, H., Zhang, C., Zheng, X., Zhang, C., Zhang, B. 2022. A decomposition-based many-objective ant colony optimization algorithm with adaptive solution construction and selection approaches. Swarm and Evolutionary Computation, 68, 100977.



CFD Analysis of the Claus Reaction Furnace under Different Operating Conditions: Temperature and Excess Air for Sulfur Recovery

PABLO VIZGUERRA MORALES1, MIGUEL ANGEL MORALES CABRERA2, FABIAN SALVADOR MEDEROS NIETO1

1INSTITUTO POLITECNICO NACIONAL, MEXICO; 2UNIVERSIDAD VERACRUZANA, MEXICO

In this work, a Claus reaction furnace in a sulfur recovery unit (SRU) of the Abadan Oil Refinery, Iran, was analyzed. The combustion operating temperature is important since it ensures optimal performance in the reactor. The novelty of the research lies in studying temperatures of 1400, 1500 and 1600 K and excess air of 10, 20 and 30% to improve the reaction yield and H2S conversion. The CFD simulation was carried out in Ansys Fluent, in transient state and in three dimensions, considering the standard turbulence model, an energy model with transport by convection, and mass transport with chemical reaction using the Arrhenius finite-rate/eddy-dissipation model for a kinetic model of destruction of the acid gases H2S and CO2. A good approximation to the experimental results of the industrial process of the Abadan refinery was obtained: the percentage difference between experimental and simulated results varies between 0.6 and 4% depending on the species. The temperature of 1600 K with 30% excess air was the best condition, giving a mole fraction of 0.065 of S2 at the outlet and an acid gas (H2S) conversion of 95.64%, which compares well with the experimental value.



Numerical Analysis of the Hydrodynamics of Proximity Impellers using the SPH Method

MARIA SOLEDAD HERNÁNDEZ-RIVERA1, KAREN GUADALUPE MEDINA-ELIZARRARAZ1, JAZMÍN CORTEZ-GONZÁLEZ1, RODOLFO MURRIETA-DUEÑAS1, JUAN GABRIEL SEGOVIA-HERNÁNDEZ2, CARLOS ENRIQUE ALVARADO-RODRÍGUEZ2, JOSÉ DE JESÚS RAMÍREZ-MINGUELA2

1TECNOLÓGICO NACIONAL DE MÉXICO/ CAMPUS IRAPUATO, DEPARTAMENTO DE INGENIERÍA QUÍMICA; 2UNIVERSIDAD DE GUANAJUATO/DEPARTAMENTO DE INGENIERÍA QUÍMICA

Mixing is a fundamental operation in many industrial processes, typically achieved using agitated tanks for homogenization. However, the design of tanks and impellers is often overlooked during the selection of the agitation system, leading to excessive energy consumption and non-homogeneous mixing. To address these operational inefficiencies, Computational Fluid Dynamics (CFD) can be utilized to analyze the hydrodynamics and mixing times within the tank. CFD employs mathematical modeling of mass, heat, and momentum transport phenomena to simulate fluid behavior. Among the latest methods used for modeling stirred tank hydrodynamics is Smoothed Particle Hydrodynamics (SPH), a mesh-free Lagrangian approach that tracks individual particles carrying physical properties such as mass, position, velocity, and pressure. This method offers advantages over traditional mesh discretization techniques by analyzing particle interactions to simulate fluid behavior more accurately. In this study, we compare the performance of different impellers based on hydrodynamics and mixing times during the homogenization of water and ethanol in a 0.5 L stirred tank. The tank and agitators were rigorously sized, operating at 70% capacity with the following fluid properties: ρ₁ = 1000 kg/m³, ρ₂ = 789 kg/m³, and kinematic viscosities ν₁ = 1E-6 m²/s and ν₂ = 1.52E-6 m²/s. The simulation, conducted for 2 minutes at a turbulent flow regime with a Reynolds number of 10,000, involved three impellers (double ribbon, paravisc, and hybrid) simulated using DualSPHysics software at a stirring speed of 34 rpm. The initial particle distance was set to 1 mm, generating 270,232 fluid particles and 187,512 boundary particles representing the tank and agitator. The results included velocity profiles, flow patterns, divergence, vorticity, and density fields to quantify mixing performance. The Q criterion was also applied to identify whether fluid motion was dominated by rotation or deformation and to locate stagnation zones. The double ribbon impeller demonstrated the best performance, achieving 88.28% mixing in approximately 100 seconds, while the paravisc and hybrid impellers reached 12.36% and 11.8% mixing, respectively. The findings highlight SPH as a robust computational tool for linking hydrodynamics with mixing times, allowing for the identification of key parameters that enhance mixing efficiency.



Surrogate Modeling of Twin-Screw Extruders Using a Recurrent Deep Embedding Network

Po-Hsun Huang1, Yuan Yao1, Yen-Ming Chen2, Chih-Yu Chen2, Meng-Hsin Chen2

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan; 2Industrial Technology Research Institute, Hsinchu 30013, Taiwan

Twin-screw extruders (TSEs) are extensively used in the plastics processing industry, with their performance highly dependent on operating conditions and screw configurations. However, optimizing these parameters through experimental trials is often time-consuming and resource-intensive. Although some neural network models have been proposed to tackle the screw arrangement problem [1], they fail to account for the positional information of the screw elements. To overcome this challenge, we propose a recurrent deep embedding network that leverages a deep autoencoder with a recurrent neural network (RNN) structure to develop a surrogate model based on simulation data.

The details are as follows. An autoencoder is a neural network architecture designed to learn latent representations of input data. In this study, we integrate the autoencoder with an RNN to capture the complex physical relationships between the operating conditions, screw configurations of TSEs, and their corresponding performance metrics. To further enhance the model’s ability to represent screw positions, we incorporate an attention layer from the Transformer model architecture. This addition allows the model to more effectively capture the spatial relationships between the screw elements.
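
An illustrative PyTorch skeleton of this kind of surrogate is shown below: screw-element identifiers are embedded, passed through a GRU, refined by self-attention, pooled, and combined with the operating conditions. The layer sizes, number of element types, and output dimension are assumptions, and the skeleton is not the authors' exact architecture.

    # Skeleton of a recurrent deep-embedding surrogate (not the authors' exact architecture):
    # screw-element IDs are embedded, passed through a GRU, refined with self-attention, pooled,
    # concatenated with the operating conditions, and mapped to performance metrics.
    import torch
    import torch.nn as nn

    class TSESurrogate(nn.Module):
        def __init__(self, n_element_types=12, emb=16, hidden=32, n_ops=3, n_outputs=4):
            super().__init__()
            self.embed = nn.Embedding(n_element_types, emb)
            self.rnn = nn.GRU(emb, hidden, batch_first=True)
            self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
            self.head = nn.Sequential(
                nn.Linear(hidden + n_ops, 64), nn.ReLU(), nn.Linear(64, n_outputs)
            )

        def forward(self, screw_ids, operating_conditions):
            # screw_ids: (batch, seq_len) integer element codes along the screw axis
            # operating_conditions: (batch, n_ops) e.g. temperature, feed rate, rotation speed
            h, _ = self.rnn(self.embed(screw_ids))
            h, _ = self.attn(h, h, h)              # self-attention over the recurrent states
            pooled = h.mean(dim=1)                 # aggregate along the screw length
            return self.head(torch.cat([pooled, operating_conditions], dim=1))

    model = TSESurrogate()
    screws = torch.randint(0, 12, (8, 20))         # 8 configurations, 20 elements each
    ops = torch.rand(8, 3)
    print(model(screws, ops).shape)                # torch.Size([8, 4])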

The model was trained and evaluated using simulation data generated from the Ludovic software package. The experimental setup included eight screw element arrangements and three key operating variables: temperature, feed rate, and rotation speed. For data collection, we employed two data sampling strategies: progressive Latin hypercube sampling [2] and random sampling.

The results demonstrate that the proposed surrogate model accurately predicts TSE performance across both training and testing datasets. Notably, the model generalizes well to unseen operating conditions, making reliable predictions even for scenarios not encountered during training. This highlights the model’s robustness and versatility as a tool for optimizing TSE configurations.

In conclusion, the recurrent deep embedding surrogate model offers a highly efficient and effective solution for optimizing TSE performance. By integrating this model with optimization algorithms, it is possible to rapidly identify optimal configurations, resulting in improved product quality, enhanced process efficiency, and reduced production costs.



Predicting Final Properties in Ibuprofen Production with Variable Batch Durations

Kuan-Che Huang, David Shan-Hill Wong, Yuan Yao

Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan

This study addresses the challenge of predicting final properties in batch processes with highly uneven durations, using the ibuprofen production process as a case study. A novel methodology is proposed and compared against traditional regression algorithms, which rely on batch trajectory synchronization as a pre-processing step. The performance of each method is evaluated using established metrics.

Batch processes are widely used in the chemical industry. Nevertheless, variability between production runs often leads to differences in batch durations, resulting in unequal lengths of process variable trajectories. Common solutions include time series truncation or time warping. However, truncation risks losing valuable process information, thereby reducing model prediction accuracy. Conversely, time warping may introduce noise or distort trajectories when compressing significantly unequal sequences, causing the model to learn incorrect process information. In multivariate chemical processes, combining time warping with batch-wise unfolding can result in the curse of dimensionality, especially when data is limited, thereby increasing the risk of overfitting in machine learning models.

The data for this study were generated using Aspen Plus V12 simulation software, focused on batch reactors. To capture the process characteristics, statistical sampling was employed to strategically position data points within a reasonable process range. The final isobutylbenzene conversion rate for each batch was used to determine batch completion. A total of 1,000 simulation runs were conducted, and the resulting data were used to develop a neural network model. The target variables to predict are: (1) the isobutylbenzene conversion rate, and (2) the accumulated mass of ibuprofen.

To handle the unequal-length trajectories in batch processes, this research constructs a dual-transformer deep neural network with multihead attention and layer normalization mechanism to extract shared information from the high-dimensional, uneven-length manipulated variable profiles into latent space, generating equal-dimensional latent codes. As an alternative strategy for feature extraction, a dual-autoencoder framework is also employed to achieve equal-dimensional representations. The representation vectors are then used as inputs for downstream deep learning models to predict the target variables.



Developing a Digital Twin System Based on Physics-Informed Neural Networks for Pipeline Leakage Detection

Wei-Shiang Lin1, Yi-Hsiang Cheng2, Zhen-Yu Hung2, Yuan Yao1

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan; 2Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310401, Taiwan

As the demand for industrial and domestic resources continues to grow, the transportation of water, fossil fuels, and chemical products increasingly depends on pipeline systems. Therefore, monitoring pipeline transportation has become crucial, as leaks can lead to severe environmental disasters and safety risks. To address this challenge, this study is dedicated to developing a pipeline leakage detection system based on digital twin technology.

The core of this research lies in combining existing physical knowledge, such as the continuity and momentum equations, with neural network technology. These physical models are incorporated into the loss function of the neural network, enabling the model to be trained based on physical laws. By integrating physical models with neural networks, we aim to achieve high accuracy in detecting pipeline leakages. An advantage of Physics-informed Neural Networks (PINNs) is that they do not rely on large datasets and can enforce physical constraints during model training, making them a powerful tool for addressing pipeline safety challenges. Using the PINN model, we can more accurately simulate the fluid dynamics within pipelines, thereby significantly enhancing the prediction of potential leaks.

In detail, the system employs a fully connected neural network alongside the continuity and momentum partial differential equations to describe fluid pressure and flow rate variations. These equations not only predict pressure transients and pressure wave propagation but also account for the impact of pipeline friction coefficients on flow behavior. By integrating data fitting with physical constraints, our model aims to minimize both the prediction loss and the partial differential equation loss, ensuring that predictions align closely with real-world data while adhering to physical laws. This approach provides both interpretability and reliability.
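
A compact PyTorch sketch of the physics-residual part of such a PINN is shown below, written with the classical water-hammer form of the continuity and momentum equations; the wave speed, friction factor, geometry, and collocation ranges are assumed values, and the data-fitting and leak-detection terms are omitted.

    # Sketch of the physics-residual term of a PINN for 1-D pipe flow, written with the classical
    # water-hammer form of the continuity and momentum equations. Wave speed, friction factor and
    # geometry are assumed values; data-fitting terms and leak scenarios are omitted for brevity.
    import torch
    import torch.nn as nn

    a, g, D, f = 1000.0, 9.81, 0.3, 0.02             # wave speed (m/s), gravity, diameter (m), friction
    A = 3.141592653589793 * D**2 / 4.0                # pipe cross-section (m^2)

    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 2))

    def physics_residual(x, t):
        xt = torch.cat([x, t], dim=1)
        H, Q = net(xt).split(1, dim=1)                # head H(x,t) and flow rate Q(x,t)
        grad = lambda u, v: torch.autograd.grad(u, v, torch.ones_like(u), create_graph=True)[0]
        r_cont = grad(H, t) + (a**2 / (g * A)) * grad(Q, x)
        r_mom = grad(Q, t) + g * A * grad(H, x) + f * Q * Q.abs() / (2.0 * D * A)
        return (r_cont**2).mean() + (r_mom**2).mean()

    x = (500.0 * torch.rand(256, 1)).requires_grad_(True)   # collocation points along a 500 m pipe
    t = (10.0 * torch.rand(256, 1)).requires_grad_(True)    # over a 10 s window
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(200):                                  # physics-only training loop (no data term)
        opt.zero_grad()
        loss = physics_residual(x, t)                        # total loss would add measurement misfit
        loss.backward()
        opt.step()
    print("physics residual:", float(loss))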

The PINN model is trained on data from normal pipeline operations to describe fluid dynamics in non-leakage conditions. When the input data reflects flow rates and pressures indicative of a leak, the predicted values will exhibit statistically significant deviations from the actual values. The process involves collecting prediction errors from the training data, evaluating their statistical distribution, and establishing a detection statistic using parametric or non-parametric methods. A rejection region and control limits are then defined, followed by the creation of a control chart to detect leaks. Finally, we test the accuracy and efficiency of the control chart using field or experimental data to ensure reliability.



Higher alcohol = higher value? Identifying Promising and Unpromising Synthesis Routes for 1-Propanol

Lukas Spiekermann, Mae McKenna, Luca Bosetti, André Bardow

Energy & Process Systems Engineering, Department of Mechanical and Process Engineering, ETH Zürich

In response to climate change, the chemical industry is investigating synthesis routes using renewable carbon sources (Shukla et al., 2022). CO2 and biomass have been shown to be convertible into 1-propanol, which could serve as a future platform chemical with diverse applications and higher value than traditional bulk chemicals (Jouny et al., 2018, Schemme et al., 2018, Gehrmann and Tenhumberg, 2020, Vo et al., 2021). A variety of potential pathways to 1-propanol have been proposed, but their respective benefits and disadvantages are unclear in guiding future innovations.

Here, we aim to identify the most promising routes to produce 1-propanol and establish development targets necessary to become competitive with benchmark technologies. To allow for a comprehensive assessment, we embed 1-propanol into the overall chemical supply chain. For this purpose, we formulate a technology choice model (Kätelhön et al., 2019, Meys et al., 2021) of the chemical industry to evaluate the cost-effectiveness and climate impact of various 1-propanol synthesis routes. The model includes thermo-catalytic, electrocatalytic, and fermentation-based synthesis steps with various intermediates to produce 1-propanol from CO2, diverse biomass feedstocks, and fossil resources. A comprehensive techno-economic analysis coupled with life cycle assessment quantifies both the economic and environmental potentials of new synthesis routes.

Our findings define performance targets for direct conversion of CO2 to 1-propanol via thermo-catalytic hydrogenation or electrocatalysis to become a beneficial synthesis route. If these performance targets are not met, the direct synthesis of 1-propanol is substituted by multi-step processes based on syngas and ethylene from CO2 or biomass.

Overall, our study demonstrates the critical role of synthesis route optimization in guiding the development of new chemical processes. By establishing quantitative benchmarks, we provide a roadmap for advancing 1-propanol synthesis technologies, contributing to the broader effort of reducing the chemical industry’s carbon footprint.

References

P. R. Shukla, et al., 2022, Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, Cambridge University Press, Cambridge, UK and New York, NY, USA)

M. Jouny, et al., 2018, Ind. Eng. Chem. Res. 57(6), 2165–2177

C. H. Vo, et al., 2021, ACS Sustain. Chem. Eng. 9(31), 10591–10600

S. Schemme, et al., 2018, Journal of CO2 Utilization 27, 223–237

S. Gehrmann, N. Tenhumberg, 2020, Chemie Ingenieur Technik 92(10), 1444–1458

A. Kätelhön, et al., 2019, Proceedings of the National Academy of Sciences 116(23), 11187–11194

R. Meys, et al., 2021, Science 374(6563), 71–76



A Python/Numpy-based package to support model discrimination and identification

Seyed Zuhair Bolourchian Tabrizi1,2, Elena Barbera1, Wilson Ricardo Leal da Silva2, Fabrizio Bezzo1

1Department of Industrial Engineering, University of Padova, via Marzolo 9, 35131 Padova PD, Italy; 2FLSmidth Cement, Green Innovation, Denmark

Process design, scale-up, and optimisation requires the precise determination of underlying phenomena and the identification of accurate models to describe them. This process can become complex when multiple rival models and higher uncertainty in the data exist, and the data needed to select and calibrate them is costly to obtain. Numerical techniques for screening various models and narrowing the pool of candidates without requiring additional experimental effort have been introduced to streamline the pre-discrimination stage [1]. These techniques have been followed by the development of model-based design of experiment (MBDoE) methods, which not only design new experiments to maximize the information for easier discrimination between rival models but also reduce the confidence ellipsoid volume of estimated parameters by enriching the information matrix through optimal experiment design [2].
The value of performing these techniques in an open-source, user-friendly environment has been recognized by the community and has led to the development of several valuable packages, especially in the Python/Pyomo environment, which implement many of these numerical techniques [3,4]. These existing packages have made significant contributions to parameter estimation and calibration of models as well as model-based design of experiments. However, there is still a strong need for a systematic package that flexibly performs all of these steps, with a clear distinction between model simulation and model identification, in an object-oriented approach. To address these challenges, we present a new Python package that serves as an independent numerical wrapper around the kernel functions (models and their numerical interpretation). It facilitates crucial model identification steps, including the screening of rival models (through global sensitivity, identifiability, and estimability analyses), parameter estimation, uncertainty analysis, and model-based design of experiments to discriminate and calibrate models. This package not only brings together all the necessary steps but also conducts the analysis in an object-oriented manner, offering flexibility to adapt to the physical constraints of various processes. It is independent of specific programming structures and relies on Numpy and Python arrays, making it as general as possible while remaining compatible with features available in these packages. The application and advantages are demonstrated through an in-silico approach to a multivariate model identification case.
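
One building block behind such a package can be sketched with plain Numpy: finite-difference parameter sensitivities, the Fisher information matrix, and a D-optimality score used to compare candidate sampling schedules. The first-order kinetic model and all numbers below are placeholders, not part of the package itself.

    # Numpy sketch of an MBDoE building block: finite-difference parameter sensitivities, the Fisher
    # information matrix, and a D-optimality score used to compare candidate experiments.
    # The first-order kinetic model and numbers are placeholders.
    import numpy as np

    def model(theta, t):
        c0, k = theta
        return c0 * np.exp(-k * t)                      # simple first-order decay

    def sensitivities(theta, t, rel_step=1e-6):
        base = model(theta, t)
        S = np.zeros((t.size, len(theta)))
        for j, p in enumerate(theta):
            pert = np.array(theta, dtype=float)
            pert[j] += rel_step * max(abs(p), 1.0)
            S[:, j] = (model(pert, t) - base) / (pert[j] - p)
        return S

    def d_optimality(theta, t, sigma=0.05):
        S = sensitivities(theta, t)
        fim = S.T @ S / sigma**2                        # Fisher information for i.i.d. Gaussian noise
        return np.linalg.slogdet(fim)[1]                # log det: larger = more informative design

    theta = [1.0, 0.35]
    designs = {"early samples": np.linspace(0.1, 2.0, 6),
               "late samples": np.linspace(5.0, 15.0, 6),
               "spread samples": np.linspace(0.1, 15.0, 6)}
    for name, t in designs.items():
        print(f"{name:15s} log det(FIM) = {d_optimality(theta, t):.2f}")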

References:
[1] Moshiritabrizi, I., Abdi, K., McMullen, J. P., Wyvratt, B. M. & McAuley, K. B. Parameter estimation and estimability analysis in pharmaceutical models with uncertain inputs. AIChE Journal (2023).
[2] Asprey, S. P. & Macchietto, S. Statistical tools for optimal dynamic model building. Comput Chem Eng 24, (2000).
[3] Wang, J. & Dowling, A. W. Pyomo.DOE: An open-source package for model-based design of experiments in Python. AIChE Journal 68, (2022).
[4] Klise, K. A., Nicholson, B. L., Staid, A. & Woodruff, D. L. Parmest: Parameter Estimation Via Pyomo. in 41–46 (2019).



Experiences in Teaching Statistics and Data Science to Chemical Engineering Students at the University of Wisconsin-Madison

VICTOR ZAVALA

UNIVERSITY OF WISCONSIN-MADISON, United States of America

In this talk, I offer a perspective on my recent experiences in designing a course on statistics and data science for chemical engineers at the University of Wisconsin-Madison and in writing a textbook on the subject.

Statistics is one of the pillars of modern science and engineering and of emerging topics such as data science and machine learning; despite this, its scope and relevance have remained stubbornly misunderstood and underappreciated in chemical engineering education (and in engineering education at large). Specifically, statistics is often taught by placing emphasis on data analysis. However, statistics is much more than that; statistics is a mathematical modeling paradigm that complements the physical modeling paradigms used in chemical engineering (e.g., thermodynamics, transport phenomena, conservation, reaction kinetics). Specifically, statistics can help model random phenomena that might not be predictable from physics alone (or from deterministic physical laws), can help quantify the uncertainty of predictions obtained with physical models, can help discover physical models from data, and can help create models directly from data (in the absence of physical knowledge).

The desire to design a new course on statistics for chemical engineering came about from my personal experience in learning statistics in college and in identifying the significant gaps in my understanding of statistics throughout my professional career. Similar feelings are often shared with me by professionals working in industry and academia. Throughout my professional career, I have been exposed to a broad range of applications in which knowledge of statistics has proven to be essential: uncertainty quantification, quality control, risk assessment, modeling of random phenomena, process monitoring, forecasting, machine learning, computer vision, and decision-making under uncertainty. These are applications that are pervasive in industry and academia.

The course that I designed at UW-Madison (and the accompanying textbook) follows a "data-models-decisions" pipeline. The intent of this design is to emphasize that statistics is a modeling paradigm that maps data to decisions; moreover, this design also aims to "connect the dots" between different branches of statistics. The focus on the pipeline is also important in reminding students that understanding the application context matters. Similarly, the nature of the decision and the data available influence the type of model used. The design is also intended for the student to understand the close interplay between statistical and physical modeling; specifically, we emphasize how statistics provides tools to model aspects of a system that cannot be fully predicted from physics. The design is also intended to help the student appreciate how statistics provides a foundation for a broad range of modern tools of data science and machine learning.

The talk also offers insights into experiences in using software, as a way to reduce complex mathematical concepts to practice. Moreover, I discuss how statistics provides an excellent framework to teach and reinforce concepts of linear algebra and optimization. For instance, it is much easier to explain the relevance of eigenvalues when this is explained from the perspective of data science (e.g., they measure information).



Rule-based autocorrection of Piping and Instrumentation Diagrams (P&IDs) on graphs

Lukas Schulze Balhorn1, Niels Seijsener2, Kevin Dao2, Minji Kim1, Dominik P. Goldstein1, Ge H. M. Driessen2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, The Netherlands; 2Fluor BV, Amsterdam, The Netherlands

Undetected errors or suboptimal designs in Piping and Instrumentation Diagrams (P&IDs) can cause increased financial costs, hazardous situations, unnecessary emissions, and inefficient operation. These errors are currently captured in extensive design processes leading to safe, operable, and maintainable facilities. However, grassroots engineering projects can involve tens to thousands of P&ID pages, leading to a significant revision workload. With the advent of digitalization and data exchange standards such as the Data Exchange in the Process Industry (DEXPI), there are new opportunities for algorithmic support of P&ID revision.

We propose a rule-based, automatic correction (i.e., autocorrection) of errors in P&IDs represented by the DEXPI data model. Our method detects potential errors, suggests improvements, and provides explanations for these suggestions. Specifically, our autocorrection method represents a DEXPI P&ID as a graph. Thereby, nodes represent DEXPI classes and directed edges the connectivity between them. The nodes retain all attributes of the DEXPI classes. Additionally, each rule consists of an erroneous P&ID template and the corresponding correct template, represented as a graph. The correct template includes the rule explanation as a graph attribute. Then, we apply the rules at inference time. The autocorrection method searches the erroneous template via subgraph isomorphism and replaces the erroneous with the corresponding correct template in the P&ID graph.

An industry case study demonstrates the method’s accuracy and performance, with rule inference taking less than a second. However, rules can conflict, requiring careful application order, and rules must be extended for specific cases. The explainability of the rule-based approach builds trust in the method and facilitates its integration into existing engineering workflows. Furthermore, DEXPI provides an existing interface between the autocorrection method and industrial P&ID development software.
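
A minimal networkx sketch of the rule-application step is given below: an "erroneous" template is located in the P&ID graph by subgraph isomorphism and rewritten into its corrected counterpart. The node classes and the example rule (a pump discharging without a check valve) are invented for illustration and are not DEXPI classes or rules from the study.

    # Rule-application sketch on a graph-encoded P&ID: find an erroneous template via subgraph
    # isomorphism and rewrite it into the corrected form. Classes and the rule are illustrative.
    import networkx as nx
    from networkx.algorithms import isomorphism

    # P&ID as a directed graph; the node attribute "cls" stands in for the DEXPI class.
    pid = nx.DiGraph()
    pid.add_node("P-101", cls="Pump")
    pid.add_node("T-201", cls="Tank")
    pid.add_node("V-001", cls="GateValve")
    pid.add_edge("P-101", "T-201")        # pump piped straight into the tank (the "error")
    pid.add_edge("T-201", "V-001")

    # Erroneous template: a Pump connected directly to a Tank.
    template = nx.DiGraph()
    template.add_node("pump", cls="Pump")
    template.add_node("tank", cls="Tank")
    template.add_edge("pump", "tank")

    matcher = isomorphism.DiGraphMatcher(
        pid, template, node_match=isomorphism.categorical_node_match("cls", None))
    matches = list(matcher.subgraph_isomorphisms_iter())   # collect before rewriting the graph

    for i, mapping in enumerate(matches, start=1):
        inverse = {v: k for k, v in mapping.items()}        # template node -> P&ID node
        pump, tank = inverse["pump"], inverse["tank"]
        check = f"CV-{i:03d}"                                # corrected template: insert a check valve
        pid.remove_edge(pump, tank)
        pid.add_node(check, cls="CheckValve",
                     explanation="pump discharge lines require a check valve")
        pid.add_edge(pump, check)
        pid.add_edge(check, tank)

    print(sorted(pid.edges()))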



Deposition rate constants: a DPM approach for particles in pipe flow

Alkhatab Bani Saad, Edward Obianagha, Lande Liu

University of Huddersfield, United Kingdom

Particle deposition occurs in many natural and industrial systems. Nevertheless, modelling and understanding particle deposition in flow remain a significant challenge, especially the determination of the deposition rate constant. This study focuses on the use of the discrete particle model to calculate the deposition rate constant of particles flowing in a horizontal pipe. It was found that increasing the flow velocity decreases particle deposition, while deposition increases with particle size. Similarly, the deposition flux was proportional to the concentration of the particles. The deposit per unit area of the inner pipe surface is higher at lower fluid velocity; when the velocity of the continuous phase is increased by a factor of 100, the deposit volume per unit area decreases by half. The deposition rate constant was found to be nonlinear in both the axial location along the pipe and the particle size. It is also notable that the constant is substantially higher at the pipe inlet and then gradually decreases along the axial direction of the flow. The dependence of the deposition rate constant on particle size was found to be exponential.

The novelty of this research is that, by extracting quantitative parameters (deposition rate constants in this case) from a steady-state Lagrangian simulation, Eulerian unsteady-state population balance modelling becomes possible for determining the thickness of the particle deposit in a pipe.



Plate heat exchangers: a CFD study on the effect of dimple shape on heat transfer

Mitchell Stolycia, Lande Liu

University of Huddersfield, United Kingdom

This article uses computational fluid dynamics (CFD) to study how heat transfer is affected by different dimple shapes on a plate within a plate heat exchanger. Four dimple shapes were designed and studied: spherical, edge-smoothed spherical, normal distribution, and error distribution. In a pipe of 0.1 m diameter with a dimple height of 0.05 m located 0.3 m from the inlet and a fluid velocity of 0.5 m s–1, simulation shows that the normal distribution dimple produced a 0.53 K increase in fluid temperature after 1.5 s. This increase is 10 times that of the spherical, 8 times that of the edge-smoothed spherical, and 1.13 times that of the error distribution shape. This was primarily due to the large increase in the intensity and number of eddies that the normal distribution dimple induced in the fluid flow.

The effect that a fully developed velocity profile had on heat transfer was also analysed for an array of normal distribution dimples in a 5 m long pipe. It was found that fully developed flow resulted in the greatest temperature change, which was 9.5% more efficient than half developed flow and 31% more efficient than placing dimples directly next to one another.

The novelty of this research is demonstrating how typical plate heat exchanger equipment can be designed and optimised by a computational approach prior to manufacturing.



Modeling and life cycle assessment for ammonia cracking process

Heungseok Jin, Yeonsoo Kim

Kwangwoon University, Korea, Republic of (South Korea)

Ammonia (NH3) is gaining attention as a sustainable hydrogen (H2) carrier for long-distance transportation due to its higher boiling point and lower boil-off issues compared to liquefied hydrogen. These properties make ammonia a practical choice for storing and transporting hydrogen over long distances. However, extracting hydrogen from ammonia requires significant energy due to the endothermic nature of the reaction. Optimizing the operational conditions for this decomposition process is crucial to ensure energy-efficient hydrogen production. In particular, we focus on determining the amount of slipped ammonia that provides the most efficient energy generation through mixed oxidation, where both slipped ammonia (unreacted NH3) and a small amount of hydrogen are used.
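For reference, the overall endothermic decomposition that the cracker must drive is (standard textbook values, not results from this study)

$$ \mathrm{NH_3} \;\rightarrow\; \tfrac{1}{2}\,\mathrm{N_2} + \tfrac{3}{2}\,\mathrm{H_2}, \qquad \Delta H^{\circ}_{298} \approx +46\ \mathrm{kJ\,mol^{-1}}, $$

so burning slipped NH3 together with a small fraction of the product H2 is one way to supply this reaction heat.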

Key factors include the temperature and pressure of the ammonia cracking process, the ammonia-to-hydrogen ratio in the fuel mixture, and catalyst kinetics. By optimizing these conditions, the goal is to maximize hydrogen production while minimizing hydrogen consumption for fueling and NH3 consumption for NOx reduction.

In addition to the mass and energy balance derived from process modeling, a comprehensive life cycle assessment (LCA) is conducted to evaluate the sustainability of ammonia as a hydrogen carrier. The LCA considers the entire process, from ammonia production (often through the energy-intensive Haber-Bosch process or renewable energy-driven water electrolysis) to transportation and ammonia cracking for hydrogen extraction. This assessment highlights the environmental and energy impacts at each stage, offering insights into how to reduce the overall carbon footprint of using ammonia as a hydrogen carrier.



Technoeconomic Analysis of a Methanol Conversion Process Using Microwave-Assisted Dry Reforming and Chemical Looping

Omar Almaraz, Srinivas Palanki

West Virginia University, United States of America

The global methanol market size was valued at $28.78 billion in 2020 and is projected to reach $41.91 billion by 2026 [1]. Methanol has traditionally been produced from natural gas by first converting methane to syngas and then converting syngas to methanol. However, this is a very energy-intensive process and produces a significant amount of the greenhouse gas carbon dioxide. Hence, there is motivation to look for alternative routes to the manufacture of methanol. In this research, a novel microwave reactor is simulated for the dry reforming step of a process to convert methane to methanol. The objective is to produce 14,200 lbmol/h of methanol, which is the current production rate of methanol at Natgasoline LLC, Texas (USA) using the traditional steam reforming process [2].

Dry reforming requires a stream of carbon dioxide as well as a stream of methane to produce syngas. Additional hydrogen is required to achieve the necessary carbon-to-hydrogen ratio to produce methanol from syngas. These streams of carbon dioxide and hydrogen are generated via chemical looping. A three-reactor chemical looping system is developed that utilizes methane as the feed to produce a pure stream of hydrogen and a pure stream of carbon dioxide. The carbon dioxide stream from the chemical looping reactor system is mixed with a desulfurized natural gas stream and is sent to a novel microwave syngas reactor, which operates at a temperature of 800 °C and pressure of 1 bar to produce a mixture of carbon monoxide and hydrogen. The stream of hydrogen obtained via chemical looping is added to this syngas stream and sent to a methanol reactor train where methanol is produced. These reactors operate at a temperature range of 220-255 °C and pressure of 76 bar. The reactor outlet stream is sent to a distillation train where the product methanol is separated from methane, carbon dioxide, hydrogen, and other products. The carbon dioxide is recycled back to the microwave reactor.
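For orientation, the stoichiometry behind this flowsheet reads (standard textbook reaction enthalpies, not values from the simulation)

$$ \mathrm{CH_4 + CO_2 \;\rightarrow\; 2\,CO + 2\,H_2}, \qquad \Delta H^{\circ}_{298} \approx +247\ \mathrm{kJ\,mol^{-1}}, $$
$$ \mathrm{CO + 2\,H_2 \;\rightarrow\; CH_3OH}, \qquad \Delta H^{\circ}_{298} \approx -91\ \mathrm{kJ\,mol^{-1}}. $$

Dry reforming alone yields an H2/CO ratio of about 1, whereas methanol synthesis from CO consumes two moles of H2 per mole of CO, which is why the additional hydrogen from the chemical looping system is required.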

This process was simulated in Aspen Plus. The thermodynamic property method used was RK-Soave for the process to convert methane to syngas and NRTL for the process to convert syngas to methanol. The energy requirement for operating the microwave reactor is determined via simulation in COMSOL. Heat integration tools are utilized to reduce the hot and cold utility usage in this integrated plant, leading to optimal operation. A technoeconomic analysis is conducted to determine the overall capital cost and the operating cost of this novel process. The simulation results from this study demonstrate the significant potential of utilizing a microwave-assisted reactor for dry reforming of methane.

References

[1] Methanol Market by Feedstock (Natural Gas, Coal, Biomass), Derivative (Formaldehyde, Acetic Acid), End-use Industry (Construction, Automotive, Electronics, Solvents, Packaging), and Region - Global Forecast to 2028, Markets and Markets. (2023). https://www.marketresearch.com/MarketsandMarkets-v3719/Methanol-Feedstock-Natural-Gas-Coal-30408866/

[2] M. E. Haque, N. Tripathi, and S. Palanki, "Development of an Integrated Process Plant for the Conversion of Shale Gas to Propylene Glycol," Industrial & Engineering Chemistry Research, 60 (1), 399-41 (2021)



A Techno-enviro-economic Transparency of a Coal-fired Power Plant: Integrating Biomass Co-firing and CO2 Sequestration Technology in a Carbon-priced Environment

Norhuda Abdul Manaf1, Nilay Shah2, Noor Fatina Emelin Nor Fadzil3

1Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom; 3Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur

The energy industry, as the primary contributor to worldwide greenhouse gas emissions, plays a crucial role in addressing global climate issues. Despite numerous governmental commitments and initiatives aimed at combating the root causes of rising temperatures, carbon dioxide (CO2) emissions from industrial and energy-related activities continue to climb, with coal-fired power plants among the main contributors. Currently, two promising strategies for mitigating emissions from coal-fired power plants are CO2 capture and storage (CCS) and biomass gasification. CCS is a mature technology in the field, while biomass gasification, a process that converts biomass into gaseous fuel, offers an encouraging avenue for generating sustainable energy resources. While extensive research has explored the techno-economic potential of coal-biomass co-firing with CCS (CB-CCS) retrofit systems, no work has considered the synergistic impact of coal power plant stranded assets, carbon price schemes, and co-firing ratios.

This study develops an hourly-resolution optimization model framework using mixed-integer linear programming to predict the operational profile and economic potential of CB-CCS retrofit systems. Two dynamic scenarios for ten-year operations are evaluated, with and without carbon price imposition, subject to the minimum coal power plant stranded asset and CO2 emissions at different co-firing ratios. These scenarios reflect possible implementations in developed countries with established carbon price schemes, such as the United Kingdom and Australia, as well as developing or middle-income countries without strong carbon policy schemes, such as Malaysia and Indonesia.

The outcome of this work will help determine whether retrofitting individual coal power plants is worthwhile for reducing greenhouse gas emissions. It is also important to understand the role of CCS in the retrofit system and the associated co-firing ratio for biomass gasification systems. This work contributes to the international agenda delineated in the International Energy Agency (IEA) report addressing carbon lock-in and stranded assets, which potentially stem from the premature decommissioning of contemporary coal-based electricity generation facilities. It also aligns with Malaysia's National Energy Transition Roadmap, which focuses on bioenergy and CCS.
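An illustrative toy version of an hourly dispatch problem with a carbon price is sketched below with PuLP; the load profile, costs, emission factors, and the single co-firing constraint are placeholders for the much richer CB-CCS MILP described above.

```python
# Toy hourly dispatch with a carbon price (PuLP); all numbers are assumed.
from pulp import LpProblem, LpVariable, lpSum, LpMinimize

HOURS = range(24)
demand = {t: 450.0 for t in HOURS}          # MW, placeholder load profile
fuel_cost = {"coal": 30.0, "bio": 45.0}     # $/MWh, assumed
ef = {"coal": 0.95, "bio": 0.05}            # tCO2/MWh, assumed
carbon_price = 80.0                          # $/tCO2, scenario parameter
cofire_max = 0.3                             # maximum biomass share of output

m = LpProblem("cbccs_dispatch", LpMinimize)
p = {(f, t): LpVariable(f"p_{f}_{t}", lowBound=0)
     for f in ("coal", "bio") for t in HOURS}

# objective: fuel cost plus carbon cost
m += lpSum((fuel_cost[f] + carbon_price * ef[f]) * p[f, t]
           for f in ("coal", "bio") for t in HOURS)

for t in HOURS:
    m += p["coal", t] + p["bio", t] == demand[t]       # meet hourly demand
    m += p["bio", t] <= cofire_max * demand[t]          # co-firing limit

m.solve()
```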



Methodology for multi-actor and multi-scale decision support for Water-Food-Energy systems

Amaya Saint-Bois1, Ludovic Montastruc1, Marianne Boix1, Olivier Therond2

1Laboratoire de Génie Chimique, UMR 5503 CNRS, Toulouse INP, UPS, 4 Allée Emile Monso, 31432 Toulouse Cedex 4, France; 2UMR 1121 LAE INRAE- Université de Lorraine – ENSAIA, 54000 Nancy, France

We have designed a generic multi-actor multi-level framework to optimize the management of water-energy-food nexus systems. These are essential systems for human life, characterized by water, energy and food synergies and trade-offs at varied spatial and temporal scales. They are managed by cross-sector decision-makers at varied decision levels. They are complex and dynamic systems for which the operational level cannot be overlooked when designing adequate management strategies.

Our methodology combines spatial, operational, multi-agent-based integrated simulations of water-energy-food nexus systems with strategic decision-making methods (Saint-Bois et al., 2024). We have implemented it to allocate land-use alternatives to agricultural plots. The number of possible combinations of parcel land-use allocations in a territory equals the number of land-use alternatives explored for each parcel raised to the power of the number of parcels in the territory. Stochastic multi-criteria decision-making methods have been designed to provide decision support for large territories (more than 1000 parcels). A multi-objective optimization method has been designed to produce optimized regional-level land-use scenarios.
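For illustration, the size of this combinatorial space is

$$ N_{\text{combinations}} = a^{\,p}, $$

where $a$ is the number of land-use alternatives explored per parcel and $p$ the number of parcels; with, say, $a = 3$ alternatives (an assumed figure) and the 15224 parcels of the case study below, exhaustive enumeration is clearly impossible.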

The methodology has been applied to an agricultural watershed of approximately 800 km2 and 15224 parcels situated downstream of the French Aveyron River. The watershed experiences water stress and is located in one of France's sunniest regions. Renewable energy production on agricultural land appears as a means to meet national renewable energy production targets and to move towards autonomous sustainable agricultural systems and regions. The installation of renewable energy generation units on agricultural land facing water stress is a perfect illustration of a complex water-energy-food system for which a holistic approach is required. MAELIA (Therond et al., 2014) (modelling of socio-agro-ecological systems for landscape integrated assessment), a multi-agent-based platform developed by French researchers to simulate complex agro-hydrological systems, has been used to simulate the dynamics of water-energy-food nexus systems at the operational level. Three strategic multi-criteria decision-making methods that combine Monte Carlo simulations with the Analytic Hierarchy Process method have been implemented. The first one is local; it selects land-use alternatives that optimize multi-sector parcel-level indicators. The other two are regional; decisions are based on regional indicators. The first regional decision-making method identifies the best uniform regional scenario from those known, and the second regional decision-making method explores combinations of parcel land-use allocations and selects the one that optimizes multi-sector criteria at the regional level. A multi-objective optimization method that combines MILP (Mixed Integer Linear Programming) and goal programming has been implemented with IBM's ILOG CPLEX optimization studio to find parcel-level land-use allocations that optimize regional multi-sector criteria.

The three decision-making methods provide the same result: covering all land that is suitable for solar panels with solar panels optimizes parcel and regional multi-criteria performance indicators. Perspectives include simulating scenarios with positive agricultural governmental studies, adding social indicators, and designing a game-theory-based strategic decision-making method.



Synergies of Adaptive Learning for Surrogate-Based Flowsheet Model Maintenance

Balázs Palotai1,2, Gábor Kis1, János Abonyi2, Ágnes Bárkányi2

1MOL Group Plc.; 2Faculty of Engineering, University of Pannonia

The integration of digital models with business processes and real-time data access is pivotal for advancing Industry 4.0 and autonomous systems. This evolution necessitates that digital models maintain high fidelity and credibility to ensure reliable decision support in dynamic environments. Flowsheet models, commonly used for process simulation and optimization in such contexts, often face challenges related to convergence issues and computational demands during optimization. Surrogate models, which approximate complex models with simpler mathematical representations, present a promising solution to mitigate these challenges by estimating calibration factors for flowsheet models efficiently. Traditionally, surrogate models are trained using Latin Hypercube Sampling to capture a broad range of system behaviors. However, physical systems in industrial applications are typically operated within specific local regions, where globally trained surrogate models may not perform adequately. This discrepancy limits the effectiveness of surrogate models in accurately calibrating flowsheet models, especially when the system deviates from the conditions used during the surrogate model training.

This paper introduces a novel adaptive calibration methodology that combines the principles of active and adaptive learning to enhance surrogate model performance for flowsheet model calibration. The proposed approach iteratively refines the surrogate model by generating new data points in the local operating regions of interest using the flowsheet model itself. This adaptive retraining process ensures that the surrogate model remains accurate across both local and global domains, thus providing reliable calibration factors for the flowsheet model.
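The adaptive retraining loop could look roughly as follows, assuming a Gaussian process surrogate from scikit-learn and a callable flowsheet model; the sampling rule around the operating point, the batch sizes, and the flowsheet interface are illustrative assumptions, not the authors' implementation.

```python
# Sketch of adaptive surrogate retraining around a local operating region (assumed setup).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def adaptive_calibration(flowsheet, x_op, n_iter=10, radius=0.05):
    """Refine the surrogate around the current operating point x_op (normalized inputs)."""
    X = np.random.uniform(0.0, 1.0, size=(20, x_op.size))    # initial global design
    y = np.array([flowsheet(x) for x in X])                   # expensive flowsheet runs
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    for _ in range(n_iter):
        # generate new points in the local operating region of interest
        X_new = x_op + radius * np.random.uniform(-1.0, 1.0, size=(5, x_op.size))
        y_new = np.array([flowsheet(x) for x in X_new])
        X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)   # retrain surrogate
    return gp
```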

A case study on a simplified refinery process demonstrates the effectiveness of the proposed methodology. The adaptive surrogate-based calibration significantly reduces the computational time associated with direct simulation-based calibration while maintaining high accuracy in model predictions. The results show an improvement in both the efficiency and precision of the flowsheet model calibration process, highlighting the synergistic benefits of integrating surrogate models into adaptive calibration strategies for industrial process engineering.

In summary, the synergies between adaptive maintenance of surrogate and flowsheet models offer a robust solution for maintaining model fidelity and reducing computational costs in dynamic industrial environments. This research contributes to the field of computer-aided process engineering by presenting a methodology that not only supports real-time decision-making but also enhances the adaptability and performance of digital models in the face of evolving physical systems.



Comparison of Prior Mean and Multi-Fidelity Bayesian Optimization of a Hydroformylation Reactor

Stefan Tönnis, Luise F. Kaven, Eike Cramer

Process Systems Engineering, RWTH Aachen University, Germany

Accurate process models are not always available and can be prohibitively expensive to obtain for model-based optimization. Hence, the process systems engineering (PSE) community has gained an interest in Bayesian optimization (BO), as it approximates black-box objectives using probabilistic Gaussian process (GP) surrogate models [1]. BO fits the surrogate models by iteratively proposing experiments through optimization of so-called acquisition functions and updating the surrogate model based on the results. Although BO is generally known to be sample-efficient, treating chemical engineering design problems as fully black-box problems can still be prohibitively expensive, particularly for high-cost technical-scale experiments. At the same time, there is an extensive knowledge and modeling base for chemical engineering design problems that is fully neglected by black-box algorithms such as BO. One widely known option to include such prior knowledge in BO is prior mean modeling [2], where the user complements the BO algorithm with an initial guess, i.e., the prior mean. Alternatives include hybrid models or compositions of GPs with mechanistic equations [3]. A lesser-known alternative is augmenting the GP with lower-fidelity data [4], e.g., from low-cost simulations or approximate models. Such low-fidelity data can give cheap yet valuable insights, which reduces the number of high-cost experiments.

In this work, we compare the use of prior mean and multi-fidelity modeling for BO in PSE design problems. We first review how prior mean and multi-fidelity modeling can be incorporated using multi-fidelity benchmark problems such as the well-known Forrester, Rosenbrock, and Rastrigin test functions. In a second step, we apply the two methods to optimize a multi-phase reaction mini-plant process, including a decanter separation step and a recycle stream. The process is based on the hydroformylation of 1-dodecene in microemulsion systems [5]. Overall, we observe accelerated convergence on the different test functions and the hydroformylation mini-plant. In fact, combining the prior mean and multi-fidelity modeling methods achieves the best overall fit of the GP surrogate models. However, our analysis also reveals how poorly chosen prior mean functions can cause the algorithm to get stuck in local minima or lead to numerical failure.
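A minimal sketch of the prior-mean idea is given below, assuming a scikit-learn GP and a placeholder prior mean m(x) standing in for any cheap mechanistic or shortcut model: the GP is fitted to the residual between data and prior, and the prior is added back at prediction time.

```python
# Prior-mean GP sketch: model residuals around a cheap approximate model (assumed).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def m_prior(X):
    """Cheap approximate model supplying the prior mean (placeholder)."""
    return np.sum(X**2, axis=1)

def fit_prior_mean_gp(X, y):
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(X, y - m_prior(X))          # GP models the residual around the prior
    return gp

def predict(gp, X):
    mu, std = gp.predict(X, return_std=True)
    return m_prior(X) + mu, std        # add the prior mean back at prediction time
```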

Bibliography
[1] Roman Garnett. Bayesian Optimization. Cambridge University Press, Cambridge, United Kingdom, 2023.

[2] Aniket Chitre, Jayce Cheng, Sarfaraz Ahamed, Robert C. M. Querimit, Benchuan Zhu, Ke Wang, Long Wang, Kedar Hippalgaonkar, and Alexei A. Lapkin. pHbot: Self-driven robot for pH adjustment of viscous formulations via physics-informed ML. Chemistry–Methods, 4(2), 2024.

[3] Leonardo D. González and Victor M. Zavala. BOIS: Bayesian optimization of interconnected systems. IFAC-PapersOnLine, 58(14):446–451, 2024.

[4] Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In Ryan P. Adams and Vibhav Gogate, editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pages 788–798. PMLR, 2020.

[5] David Müller, Markus Illner, Erik Esche, Tobias Pogrzeba, Marcel Schmidt, Reinhard Schomäcker, Lorenz T. Biegler, Günter Wozny, and Jens-Uwe Repke. Dynamic real-time optimization under uncertainty of a hydroformylation mini-plant. Computers & Chemical Engineering, 106:836–848, 2017.



A global sensitivity analysis for a bipolar membrane electrodialysis capturing carbon dioxide from the air

Grazia Leonzio1, Alexia Thill2, Nilay Shah2

1Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123 Cagliari, Italy, and Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK; 2Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK

Global warming and climate change are two critical, current global challenges. For this reason, as the concentration of atmospheric carbon dioxide (CO2) continues to rise, it is becoming increasingly imperative to develop efficient and cost-effective technologies for controlling the atmospheric CO2 concentration. In addition to the capture of CO2 from flue gases and industrial processes, new solutions to capture CO2 from the air have been proposed and investigated in the literature, such as absorption, adsorption, ion exchange resins, mineral carbonation, membranes, photocatalysis, cryogenic separation, and electrochemical and electrodialysis approaches (Leonzio et al., 2022). These are the well-known direct air capture (DAC) or negative emission technologies (NETs).

Among them, in the electrodialysis approach, a bipolar membrane electrodialysis (BPMED) stack is used to regenerate the hydroxide-based solvent (NaOH or KOH aqueous solution) coming from an absorption column that captures CO2 from the air (Sabatino et al., 2020). In this way, it is possible to recycle the solvent to the column and release the captured CO2 for storage or utilization.

Although not yet deployed at an industrial or even pilot scale, CO2 separation through BPMED has already been described and analyzed in the literature (Eisaman et al., 2011; Sabatino et al., 2020, 2022; Vallejo Castano et al., 2024).

Regarding the economic aspect, a preliminary levelized cost of the BPM-based process was suggested to be 770 $/ton CO2 due to the high cost of the membrane, the large electricity consumption, and uncertainties in the lifetime of the materials (Sabatino et al., 2020). Given the relatively early stage of development, process optimization through a mathematical model is therefore useful to support design and development by identifying the best operating conditions and parameters, together with a Global Sensitivity Analysis (GSA) aimed at identifying the operating parameters that most strongly affect cost and energy consumption.

In this research, a mathematical model of a BPMED unit capturing CO2 from the air is proposed and used to conduct a GSA identifying the most influential operating conditions for the considered Key Performance Indicators (KPIs): total costs (including capital and operating expenditures) and energy consumption. The investigated uncertain parameters are: current density, concentration in the rich solution, membrane active area, number of cell pairs, CO2 partial pressure in the gas phase, load ratio, and carbon loading.
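A hedged sketch of such a variance-based GSA workflow, written with the SALib package, is shown below; the parameter names, bounds, and the bpmed_kpi() placeholder stand in for the actual BPMED model and uncertain ranges.

```python
# Sobol sensitivity analysis sketch (SALib); parameter bounds and model are placeholders.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["current_density", "rich_concentration", "membrane_area", "cell_pairs"],
    "bounds": [[100, 1500], [0.5, 2.0], [0.5, 5.0], [10, 200]],   # assumed ranges
}

def bpmed_kpi(x):
    """Placeholder for the BPMED cost/energy model evaluated at parameter vector x."""
    return float(np.sum(x))

X = saltelli.sample(problem, 512)                 # Saltelli sampling design
Y = np.array([bpmed_kpi(x) for x in X])
Si = sobol.analyze(problem, Y)                    # first-order and total Sobol indices
print(Si["S1"], Si["ST"])
```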

References

Vallejo Castano, S., Shu, Q., Shi, M., Blauw, R., Loldrup Fosbøl, P., Kuntke, P., Tedesco, M., Hamelers, H.V.M., 2024. Chemical Engineering Journal 488, 150870

Eisaman, M. D.; Alvarado, L.; Larner, D.; Wang, P.; Littau, K.A. 2011. Energy Environ. Sci. 4 (10), 4031.

Leonzio, G., Fennell, P.S., Shah, N., 2022, Appl. Sci., 12(16), 8321

Sabatino, F., Mehta, M., Grimm, A., Gazzani, M., Gallucci, F., Kramer, G.J., and Annaland, M., 2020. Ind. Eng. Chem. Res. 59, 7007−7020

Sabatino, F., Gazzani, M., Gallucci, F., Annaland, M., 2022. Ind. Eng. Chem. Res. 61, 12668−12679



Refrigerant Selection and Cycle Design for Industrial Heat Pump Applications exemplified for Distillation Processes

Jonas Schnurr, Momme Adami, Mirko Skiborowski

Hamburg University of Technology, Institute of Process System Engineering, Germany


In the context of global warming, the essential objectives for industry are the transition to renewable energy and the improvement of energy efficiency. A potential approach to achieving both of these goals in a single step is the implementation of heat pumps, which recover low-temperature waste heat that would otherwise be lost to the environment and elevate it to a higher temperature level where it can be reused within the process. The application of heat pumps is not limited to new designs; they also have huge potential as retrofit options for existing processes to reduce the external energy demand [1] and electrify industrial processes, thereby promoting a more sustainable industry with an increased share of renewable electricity generation.

Nevertheless, the optimal design of heat pumps depends heavily on the selection of an appropriate refrigerant, as the refrigerant performance is influenced by both thermodynamic properties and the heat pump cycle design, which is typically fixed in current selection approaches. Methods like iterative approaches [2], database screening followed by simulations [3], and optimization of thermodynamic parameters with subsequent identification of real refrigerants [4] are computationally intensive and time-consuming. Although these methods can identify thermodynamically beneficial refrigerants, practical application may be hindered by limitations of the compressor. Additionally, these approaches are challenging to implement in process design tools.

The current work presents a novel approach for fast screening and identification of suitable refrigerants and heat pump cycle designs for specific applications, considering a variety of established refrigerants. The method automatically evaluates the performance of 38 pure refrigerants for any heat pump with a defined heat sink and source, adapting the heat pump design by incorporating an internal heat exchanger when superheating of the refrigerant prior to compression is required. By considering practical constraints such as compression ratio and compressor discharge temperature, the remaining suitable refrigerants are ranked based on energy demand or COP.
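A rough sketch of one possible screening step, using CoolProp property calls for an idealized single-stage cycle, is shown below; the fluid subset, temperature levels, isentropic efficiency, and constraint limits are assumptions and cover only part of the checks described above.

```python
# Idealized single-stage heat pump COP screening with CoolProp (assumed settings).
from CoolProp.CoolProp import PropsSI

def heating_cop(fluid, t_evap, t_cond, eta_is=0.7):
    p_evap = PropsSI("P", "T", t_evap, "Q", 1, fluid)
    p_cond = PropsSI("P", "T", t_cond, "Q", 0, fluid)
    h1 = PropsSI("H", "T", t_evap, "Q", 1, fluid)          # saturated vapour at evaporator
    s1 = PropsSI("S", "T", t_evap, "Q", 1, fluid)
    h2s = PropsSI("H", "P", p_cond, "S", s1, fluid)        # isentropic discharge enthalpy
    h2 = h1 + (h2s - h1) / eta_is                          # real discharge enthalpy
    h3 = PropsSI("H", "T", t_cond, "Q", 0, fluid)          # saturated liquid at condenser
    t2 = PropsSI("T", "P", p_cond, "H", h2, fluid)         # compressor discharge temperature
    cop = (h2 - h3) / (h2 - h1)
    return cop, p_cond / p_evap, t2

for fluid in ["R1234ze(E)", "R600a", "Ammonia"]:           # illustrative subset of candidates
    cop, pr, t2 = heating_cop(fluid, 308.15, 368.15)       # 35 C source, 95 C sink (assumed)
    if pr < 8 and t2 < 423.15:                             # example practical constraints
        print(fluid, round(cop, 2))
```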

The application of an integrated process design and screening is demonstrated for the evaluation of different distillation processes, by linking the screening tool with an existing shortcut screening framework proposed by Skiborowski [5]. This integration enables the combination of heat pumps with other energy integration methods, like thermal coupling, thereby facilitating a more comprehensive assessment of potential process variants and the identification of the most promising process alternatives.

References

[1] A. A. Kiss, C. A. I. Ferreira, Heat Pumps in Chemical Process Industry, CRC Press, Boca Raton, 2017

[2] J. Jiang, B. Hu, T. Ge, R. Wang, Energy 2022, 241, 1222831.

[3] M. O. McLinden, J. S. Brown, R. Brignoli, A. F. Kazakov, P. A. Domanski, Nature Communications 2017, 8 (1), 1-9.

[4] J. Mairhofer, M. Stavrou, Chemie Ingenieur Technik 2023, 95 (3), 458-466.

[5] M. Skiborowski, Chemical Engineering Transactions 2018, 69, 199-204.



CO2 conversion to polyethylene based on power-to-X technology and renewable resources

Monika Dokl1, Blaž Likozar2, Chunyan Si3, Zdravko Kravanja1, Yee Van Fan3,4, Lidija Čuček1

1Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, 2000 Maribor, Slovenia; 2Department of Catalysis and Chemical Reaction Engineering, National Institute of Chemistry, Hajdrihova 19, Ljubljana 1001, Slovenia; 3Sustainable Process Integration Laboratory, Faculty of Mechanical Engineering, Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech Republic; 4Environmental Change Institute, University of Oxford, Oxford OX1 3QY, United Kingdom

In addition to increasing material and energy efficiency, the plastics sector is already stepping up its efforts to further minimize greenhouse gas emissions during the production phase in order to support the EU's transition to climate neutrality by 2050. These initiatives include expanding the circular economy in the plastics value chain through recycling, increasing the use of renewable raw materials, switching to renewable energy and developing advanced carbon capture and utilization methods. Bio-based plastics have been extensively explored as a potential substitute for plastics derived from fossil fuels. Despite the potential of bio-based plastics, there are concerns about sustainability, including the impact on land use, water resources and biodiversity. An alternative route is to convert CO2 into valuable chemicals using power-to-X technology, which uses surplus renewable energy to transform CO2 into fuels, chemicals and plastics. In this study, the process simulation of polyethylene production using CO2 and renewable electricity is performed to identify feedstocks aligned with climate objectives. CO2-based polyethylene production is compared with conventional fossil-based production, and the burdening and unburdening effects of a potential transition to the production of renewable plastics are evaluated.



Design of Experiments Algorithm for Comprehensive Exploration and Rapid Optimization in Chemical Space

Kazuhiro Takeda1, Kondo Masaru2, Muthu Karuppasamy3,4, Mohamed S. H. Salem3,5, Takizawa Shinobu3

1Shizuoka University, Japan; 2University of Shizuoka, Japan; 3Osaka University, Japan; 4Graduate School of Pharmaceutical Sciences, Osaka University, Japan; 5Suez Canal University, Egypt

1. Introduction

Bayesian Optimization (BO)1) is known for its ability to explore optimal conditions with a limited number of experiments. However, the number of experiments conducted through BO is often insufficient to fully understand the experimental condition space. To address this, various experimental design methods have been proposed. Among these, the Definitive Screening Design (DSD)2) has been introduced as a method that minimizes confounding and requires fewer experiments. This study proposes an algorithm that combines DSD and BO to reduce confounding, ensure sufficient experimentation to understand the experimental condition space and enable rapid optimization.

2. Fusion Algorithm of DSD and BO

In DSD, each factor is set at three levels (+, 0, -), and experiments are conducted with one factor at 0 and the others at + or -. This process is repeated for the number of factors m, and a final experiment is conducted with all factors set to 0, resulting in a total of 2m+1 experiments. Typically, after conducting experiments based on DSD, a model is created by selecting factors using criteria such as AIC (Akaike information criteria), followed by additional experiments to optimize the objective function. Using BO allows for optimization with fewer additional experiments.

In this study, the levels (+ and -) required by DSD are determined based on BO, enabling the integration of BO from the DSD experiment stage. The proposed algorithm is outlined as follows (a schematic code sketch is given after the list):

1. Formulate a DSD experimental plan with 0, +, and - levels.

2. Conduct experiments using the maximum and minimum ranges (as defined by DSD) until all variables are no longer unique.

3. For the next experimental condition, use BO to search within the range of the original planned values with the same sign.

4. Conduct experiments under the explored conditions.

5. If the experimental plan formulated in Step 1 is complete, proceed to the next step; otherwise, return to Step 3.

6. Use BO to explore the optimal conditions within the range.

7. Conduct experiments under the explored conditions.

8. If the convergence criteria are met, terminate the process; otherwise, return to Step 6.
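A schematic code sketch of the combined loop (Steps 1-8), assuming a scikit-learn GP surrogate and an expected-improvement acquisition; the three-factor screening-style plan and the toy objective are illustrative and are not the authors' experimental cases.

```python
# DSD-style plan followed by GP-based BO refinement (illustrative sketch).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                                  # placeholder "experiment"
    return float(np.sum(x**2))

# Step 1: a small three-factor screening-style plan (levels -, 0, +), scaled to [-2, 2]
plan = np.array([[0, 1, 1], [0, -1, -1], [1, 0, -1], [-1, 0, 1],
                 [1, -1, 0], [-1, 1, 0], [0, 0, 0]], dtype=float) * 2.0

X = plan.copy()                                    # Steps 2-5: run the planned experiments
y = np.array([objective(x) for x in X])

def expected_improvement(gp, Xc, y_best):
    mu, sd = gp.predict(Xc, return_std=True)
    z = (y_best - mu) / np.maximum(sd, 1e-9)
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

for _ in range(15):                                # Steps 6-8: BO refinement of the optimum
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = np.random.uniform(-2, 2, size=(2000, 3))
    x_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best found:", X[np.argmin(y)], y.min())
```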

3. Numerical Experiment

Numerical experiments were conducted to minimize each objective function. The upper and lower limits of each variable were set at (-2, 2), and the experiment was conducted 10 times. The results indicate that the proposed algorithm converges faster than BO alone. Moreover, the variability in convergence speed was also reduced. Although not shown due to space constraints, the proposed algorithm also demonstrated faster and more stable convergence compared to other experimental design methods combined with BO.

4. Conclusion

This study proposed an algorithm combining DSD and BO to minimize confounding, reduce the required experiments, and enable rapid optimization. Numerical experiments demonstrated that the algorithm converges early and stably. Future work will involve verifying the effectiveness of the proposed algorithm through actual experiments.

References

1. J. Snoek, et al.; arXiv:1206.2944, pp.1-9, 2012

2. B. Jones and C. J. Nachtsheim; J. Qual. Technol., Vol.43, pp.1-15, 2011



Surrogate Modeling for Real-Time Simulation of Spatially Distributed Dynamically Operated Chemical Reactors: A Power-to-X Case Study

Luisa Peterson1, Ali Forootani2, Edgar Ivan Sanchez Medina1, Ion Victor Gosea1, Peter Benner1,3, Kai Sundmacher1,3

1Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, Magdeburg, 39106, Germany; 2Helmholtz Centre for Environmental Research, Permoserstraße 15, Leipzig, 04318 , Germany; 3Otto von Guericke University Magdeburg, Universitaetsplatz 2, Magdeburg, 39106, Germany

Spatially distributed dynamical systems are omnipresent in chemical engineering. These systems are often modeled by partial differential equations (PDEs) to describe complex, coupled processes. However, solving PDEs can be computationally expensive, especially for highly nonlinear systems. This is particularly challenging for outer-loop computations such as optimization, control, and uncertainty quantification, all requiring real-time performance. Surrogate models reduce computational costs and are classified into data-fit, reduced-order, and hierarchical models. Data-fit models use statistical techniques or machine learning to map input-output relationships, while reduced-order models project equations onto a lower-dimensional subspace. Hierarchical models simplify physical or numerical methods to reduce complexity.

In this study, we simulate the dynamic behavior of a catalytic CO2 methanation reactor, critical for Power-to-X applications that convert CO2 and green hydrogen to methane. The reactor must adapt to changing load conditions, which requires real-time executable simulation models. A one-dimensional mechanistic model, calibrated with pilot plant data, simulates temperature and CO2 conversion. We develop and test three surrogate models using load change simulation data. (i) Operator Inference (OpInf) projects the system into a lower dimensional subspace and infers a quadratic polynomial within this space, incorporating stability constraints to improve prediction reliability. (ii) Sparse Identification of Nonlinear Dynamics (SINDy) uncovers the system's governing equations through sparse regression. Our adaptation of SINDy uses Q-DEIM to efficiently select significant data for regression inputs and is implemented within a neural network framework with a Physics-Informed Neural Network (PINN) loss function. (iii) The proposed Graph Neural Network (GNN) uses a windowed graph structure with Graph Attention Networks.
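As an illustration of the second ingredient, a minimal sequential thresholded least-squares (STLSQ) version of the SINDy idea is sketched below; the polynomial library and threshold are illustrative, and the sketch does not reproduce the Q-DEIM selection or PINN loss described above.

```python
# Minimal SINDy via sequential thresholded least squares (illustrative sketch).
import numpy as np

def poly_library(X):
    """Candidate functions: constant, linear, and quadratic terms."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sparse regression: repeatedly zero out small coefficients and refit the rest."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# usage: X holds state snapshots (rows = time samples), dXdt their time derivatives
# Xi = stlsq(poly_library(X), dXdt)
```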

When reproducing data from the mechanistic model, OpInf achieves a low relative Frobenius norm error of 0.043% for CO2 conversion and 0.030% for temperature. The quadratic, guaranteed stable polynomial provides a good balance between interpretability and performance. SINDy gives relative errors of 2.37% for CO2 conversion and 0.91% for temperature. While SINDy is the most interpretable model, it is also the most computationally intensive to evaluate, requires manual tuning of the regression library, and occasionally experiences stability issues. GNNs produce relative errors of 1.08% for CO2 conversion and 0.81% for temperature. GNNs offer the fastest evaluation and require the least domain-specific knowledge of the three methods, but their black-box nature limits interpretability and they are prone to overfitting and can struggle with extrapolation. All surrogate models reduce computational time while maintaining acceptable accuracy, making them suitable for real-time decision-making in dynamic reactor operations. The choice of model depends on the application requirements, in particular the balance between speed and interpretability. In this case, OpInf provides the best overall balance, while SINDy and GNNs provide useful trade-offs depending on whether interpretability or speed is prioritized [2].


References

[1] R. T. Zimmermann, J. Bremer, and K. Sundmacher, “Load-flexible fixed-bed reactors by multi-period design optimization,” Chemical Engineering Journal, vol. 428, 130771, 2022, DOI: 10.1016/j.cej.2021.130771.

[2] L. Peterson, A. Forootani, E. I. S. Medina, I. V. Gosea, K. Sundmacher, and P. Benner, “Towards Digital Twins for Power-to-X: Comparing Surrogate Models for a Catalytic CO2 Methanation Reactor”, Authorea Preprints, 2024, DOI: 10.36227/techrxiv.172263007.76668955/v1.



Computer Vision for Chemical Engineering Diagrams

Maged Ibrahim Elsayed Eid, Giancarlo Dalle Ave

McMaster University, Canada

This paper details the development of a state-of-the-art object, word, and connectivity detection system tailored for the analysis of chemical engineering diagrams, namely Process Flow Diagrams (PFDs), Block Flow Diagrams (BFDs), and Piping and Instrumentation Diagrams (P&IDs), utilizing cutting-edge computer vision methodologies. Chemical engineering diagrams play a pivotal role in the field, offering visual representations of plant processes and equipment. They are integral to the design, analysis, and operational phases of chemical processes, aiding in process documentation and serving as a foundation for simulating and monitoring the performance of essential equipment operations.

The necessity of automating the interpretation of BFDs, PFDs, and P&IDs arises from their widespread use and the challenges associated with their manual analysis. These diagrams, often stored as image-based PDFs, present significant hurdles in terms of data extraction and interpretation. Manual processing is not only labor-intensive but also prone to errors and inconsistencies. Given the complexity and volume of these diagrams, which include intricate details of plant processes and equipment, manual methods can lead to delays and inaccuracies. Automating this process with advanced computer vision techniques addresses these challenges by providing a scalable, accurate, and efficient means to extract and analyze information.

The primary aim of this project is to automate the interpretation of various chemical engineering diagrams, a task that has traditionally relied on manual expertise. This automation encompasses the precise detection of unit operations, text recognition, and the mapping of interconnections between components. To achieve this, the proposed methodology relies on rule-based and predefined approaches. These approaches are employed to detect unit operations by analyzing visual patterns and shapes, recognizing text using OCR techniques, and mapping the interconnections between components based on spatial relationships. This method specifically avoids deep learning, which can be computationally intensive and often requires extensive labeling to effectively differentiate between various objects. These challenges can complicate implementation and scalability, making deep learning less suitable for this application. The results showed high detection accuracy, successfully identifying unit operations, text, and interconnections with reliable performance, even in complex diagrams.
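A hedged sketch of such a rule-based pipeline is shown below, combining contour-based shape detection (OpenCV) with Tesseract OCR; the size threshold and the vertex-count shape heuristic are simplified placeholders, not the detection rules of the actual system.

```python
# Rule-based diagram analysis sketch: contour shapes plus OCR (assumed heuristics).
import cv2
import pytesseract

def analyze_diagram(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # candidate unit-operation symbols from closed contours
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    symbols = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 500:                       # ignore small specks (assumed threshold)
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            shape = {4: "vessel/block", 8: "round equipment"}.get(len(approx), "other")
            symbols.append((shape, (x, y, w, h)))

    # OCR of tag text; connectivity would be inferred from spatial relationships
    words = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    return symbols, words
```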



Digital Twin for Operator Training and Real-Time Support for a Pilot-Scale Packed Batch Distillation Column

Mads Stevnsborg, Jakob K. Huusom, Krist V. Gernaey

PROSYS DTU, Denmark

Digital Twin (DT) is a frequently used term in industry and academia to describe data-centric models that accurately depict a physical system counterpart. DTs are typically used either in an offline context as Virtual Laboratories (VL) [4, 5] or in real-time applications as predictive toolboxes [2]. In processes with a low degree of automation, which instead rely greatly on operator competence in key decision-making situations, DTs can act as a guiding tool [1, 3]. This work explores the challenge of developing DTs to support operators by developing a combined virtual laboratory and decision support tool for students conducting experiments on a pilot-scale packed batch distillation column at the Technical University of Denmark [2]. This is an unsteady operation, controlled by a set of manual valves, which the operator must continuously balance to meet purity constraints without excessive consumption of utilities. The realisation is achieved by leveraging the software development and IT operations (DevOps) methodology with a modular compartmentalisation of DT resources to better leverage model applicability across various projects. The final solution comprises several stand-alone packages that together offer real-time communication with physical equipment through OPC-UA endpoints and a scalable simulation environment through web-based user interfaces (UI). The advantages of this implementation strategy are flexibility and speed, allowing process models to be updated continuously as data are generated and providing process operators with the necessary training and knowledge before and during operation to run experiments effectively, thereby enhancing the learning outcome.
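As a small illustration of the real-time data link, a read over OPC-UA could look like the following, assuming the python-opcua client; the endpoint URL and node identifier are placeholders, not the pilot plant's actual server configuration.

```python
# Minimal OPC-UA read sketch (python-opcua); endpoint and tag are placeholders.
from opcua import Client

client = Client("opc.tcp://localhost:4840")                      # assumed endpoint
client.connect()
try:
    top_temp = client.get_node("ns=2;s=Column.TopTemperature")   # placeholder tag
    print("column top temperature:", top_temp.get_value())
finally:
    client.disconnect()
```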

References

[1] F. Bähner et al., 2021,” Challenges in Optimization and Control of Biobased Process Systems: An Industrial-Academic Perspective”, Industrial and Engineering Chemistry Research, Volume 60, Issue 42, pp. 14985-15003

[2] M. Jones et al., 2022, “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU)”, Computer-aided Chemical Engineering, Volume 49, pp. 1525-1530

[3] V. Steinwandter et al., 2019, “Data science tools and applications on the way to Pharma 4.0”, Drug Discovery Today, Volume 24, Issue 9, pp. 1795-1805

[4] M. Schueler & T. Mehling, 2022, “Digital Twin- A System for Testing and Training”, Computer Aided Chemical Engineering, Volume 52, pp. 2049-2055

[5] J. Ismite et al., 2019, “A systems engineering framework for the design of bioprocess operator training simulators”, E3S Web of Conferences, 2019, Volume 78, pp. 03001

[6] N. Kamihama et al., 2011, “Isobaric Vapor−Liquid Equilibria for Ethanol + Water + EthyleneGlycol and Its Constituent Three Binary Systems”, Journal of Chemical and Engineering Data, Volume 57, Issue 2, pp. 339-344



Hybridizing Neural Networks with Physical Laws for Advanced Process Modeling in Chemical Engineering

Jana Mousa, Stéphane Negny

INP Toulouse, France

Neural networks (NNs) have become indispensable tools for modeling complex systems due to their ability to learn and predict from vast datasets. Their success spans a wide range of applications, including chemical engineering processes. However, one key limitation of NNs is their lack of physical interpretability, which becomes critical when dealing with complex systems governed by known physical laws. In chemical engineering, particularly in unit operations like reactors, considered the heart of any process, the accuracy and reliability of models depend not only on their predictive capabilities but also on their adherence to physical constraints such as mass and energy balances, reaction kinetics, and equilibrium constants.

This study investigates the integration of neural networks with nonlinear data reconciliation (NDR) as a method to impose physical constraints on predictive models. Nonlinear data reconciliation is a mathematical technique used to adjust measured data to satisfy predefined physical laws, enhancing model consistency and accuracy. By embedding NDR into neural networks, the resulting hybrid models ensure physical realism while retaining the flexibility and learning power of NNs.

The framework first trains an NN to capture nonlinear system relationships, then applies NDR to correct predictions so that key physical metrics, such as conversion, selectivity, and equilibrium constants in reactors, are met. This ensures that the model aligns not only with data but also with fundamental physical laws, enhancing the model's interpretability and reliability. Furthermore, the method's efficacy has been evaluated by comparing it to other hybrid approaches, such as Karush-Kuhn-Tucker Neural Networks (KKT-NN) and Karush-Kuhn-Tucker Physics-Informed Neural Networks (KKT-PINN), both of which aim to enforce physical constraints within neural networks.
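An illustrative reconciliation step is sketched below, assuming weighted least squares subject to equality constraints solved with SciPy; the single mass-balance constraint and measurement weights stand in for the reactor metrics (conversion, selectivity, equilibrium) discussed above.

```python
# Nonlinear data reconciliation sketch: adjust NN outputs to satisfy physical constraints.
import numpy as np
from scipy.optimize import minimize

def reconcile(y_pred, sigma, constraints):
    """Adjust predictions y_pred so that the physical constraints hold."""
    def obj(x):
        return np.sum(((x - y_pred) / sigma) ** 2)     # stay close to the NN prediction
    res = minimize(obj, y_pred, constraints=constraints, method="SLSQP")
    return res.x

# example: three stream flows must close a mass balance F_in = F_out1 + F_out2
y_pred = np.array([10.2, 6.1, 3.5])                    # hypothetical NN outputs
sigma = np.array([0.2, 0.15, 0.15])                    # assumed measurement weights
cons = [{"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]}]
print(reconcile(y_pred, sigma, cons))
```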

In conclusion, the integration of physical interpretability into neural networks through nonlinear data reconciliation significantly enhances modeling accuracy and reliability in engineering applications. Future enhancements may focus on refining the method to accommodate a wider range of engineering challenges, thereby facilitating its application in diverse fields such as process engineering and system optimization.



Transferring Graph Neural Networks for Soft Sensor Modeling using Process Topologies

Maximilian F. Theisen1, Gabrie M.H. Meesters2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands; 2Product and Process Engineering, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Transfer learning allows, in theory, machine learning models to be re-used and fine-tuned, thus reducing data requirements. In practice, however, transferring data-driven soft sensor models is often not possible. In particular, the fixed input structure of standard soft sensor models prohibits transfer if, e.g., the sensor information is not identical in all plants.

We propose a process-aware graph neural network approach for transfer learning of soft sensor models across multiple plants. In our method, plants are modeled as graphs: Unit operations are nodes, streams are edges, and sensors are embedded as attributes. Our approach brings two advantages for transfer learning: First, we not only include sensor data but also crucial information on the plant topology. Second, the graph neural network algorithm is flexible with respect to its sensor inputs. We test the transfer learning capabilities of our modeling approach on ammonia synthesis loops with different process topologies (Moulijn, 2013). We build a soft sensor predicting the ammonia concentration in the product. After training on data from several processes, we successfully transfer our soft sensor model to a previously unseen process with a different topology. Our approach promises to extend the use case of data-driven soft sensors to cases where data from similar plants is leveraged.
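A small sketch of the plant-as-graph representation, assuming PyTorch Geometric; the mini-flowsheet, node features, and sensor attributes are illustrative placeholders rather than the actual ammonia-loop encoding.

```python
# Plant-as-graph sketch for a GNN soft sensor (illustrative mini-flowsheet).
import torch
from torch_geometric.data import Data

# nodes: 0 = feed compressor, 1 = reactor, 2 = separator (assumed topology)
# node features could encode the unit type and the sensor readings attached to that unit
x = torch.tensor([[1.0, 250.0],        # [unit-type id, sensor reading]
                  [2.0, 450.0],
                  [3.0, 300.0]])
edge_index = torch.tensor([[0, 1, 2],  # streams: 0 -> 1, 1 -> 2, 2 -> 0 (recycle)
                           [1, 2, 0]], dtype=torch.long)
plant = Data(x=x, edge_index=edge_index)
# a GNN trained on such graphs can be fine-tuned on a plant with a different topology,
# since message passing does not assume a fixed input dimension or sensor ordering
```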

References

Moulijn, J. A. (2013). Chemical Process Technology (2nd ed., online ed.). (M. Makkee & A. Diepen, Eds.). Chichester, West Sussex: John Wiley & Sons Inc.



Production scheduling based on Real-time Optimization and Zone Control Nonlinear Model Predictive Controller

José Matias1, Alvaro Marcelo Acevedo Peña2

1KU Leuven, Belgium; 2YPFB Refinación S.A.

The chemical industry has a high demand for process optimization methods and tools that enhance profitability while operating near nominal capacity. Product inventories, both in-process and end-of-process, serve as buffers to mitigate fluctuations in operation and demand while maintaining consistent and predictable production. Efficient product inventory management is crucial for the profitable operation of chemical plants. To ensure optimal operation, various strategies have been proposed that consider in-process storage and aim to satisfy mass balances while avoiding bottlenecks [1].

When final product demand is highly oscillatory with unexpected stoppages, end-of-process inventories must be carefully controlled within minimum and maximum bounds. This prevents plant shutdowns and ensures compliance with legal product supply requirements. In both cases, plant-wide operations should be considered when making in- and end-of-process product inventory level decisions to improve overall profitability [2].

To address this problem, we propose a holistic hierarchical two-layered strategy. The upper layer uses real-time optimization (RTO) to determine optimal plant flow rates from an economic perspective. The lower layer employs a zone control nonlinear model predictive controller (NMPC) to define inventory setpoints. The idea is that RTO defines setpoints for flow rates that manipulate plant throughput, while NMPC maintains inventory levels within desired bounds while keeping flow rates as close as possible to the RTO-defined setpoints. The use of this two-layered holistic approach is novel for this specific problem; however, our primary contribution lies in introducing an ensemble of optimization problems at the RTO level. Each RTO problem is associated with a different uncertain product demand scenario. This enables us to recompute optimal throughput plant manipulator setpoints based on the current scenario, improving the overall strategy performance.
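One common way to write the lower-layer zone-control objective, shown here only as an illustrative sketch and not necessarily the exact formulation used in this work, is

$$ \min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \left\| u_k - u^{\mathrm{RTO}} \right\|_R^2 \;+\; \rho \sum_{k=1}^{N} \left( \max\!\left(0,\, h_k - h^{\max}\right)^2 + \max\!\left(0,\, h^{\min} - h_k\right)^2 \right), $$

subject to the inventory dynamics $h_{k+1} = h_k + \Delta t\,(F^{\mathrm{in}}_k - F^{\mathrm{out}}_k)$: flow rates $u_k$ are kept close to the RTO setpoints $u^{\mathrm{RTO}}$, and inventory levels $h_k$ are penalized only when they leave the zone $[h^{\min}, h^{\max}]$.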

We tested our strategy on a three-stage distillation column system that separates a mixture of four products, inspired by an LPG production plant with recycle split vapour (RSV) invented by Ortloff Ltd [3]. While the lightest and cheapest product is directly sent to a pipeline, the other three more valuable products are stored in tanks. Demand for these three products fluctuates significantly, but can be forecasted in advance, allowing for proactive measures. We compared the results of our holistic two-layered strategy to typical actions taken by plant operators in various uncertain demand scenarios. Our approach addresses the challenges of mitigating bottlenecks and minimizing inventory fluctuations and is more effective than the operator decisions from an economic perspective.

[1] Skogestad, S., 2004. Computers & Chemical Engineering, 28(1-2), pp.219-234.

[2] Downs, J.J. and Skogestad, S., 2011. Annual Reviews in Control, 35(1), pp.99-110.

[3] Zhang S. et al., 2020. Comprehensive Comparison of Enhanced Recycle Split Vapour Processes for Ethane Recovery, Energy Reports, 6, pp.1819–1837.



Talking like Piping and Instrumentation Diagrams (P&IDs)

Achmad Anggawirya Alimin, Dominik P. Goldstein, Lukas Schulze Balhorn, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Piping and Instrumentation Diagrams (P&IDs) are pivotal in process engineering, serving as comprehensive references across multiple disciplines (Toghraei, 2019). However, the intricate nature of P&IDs and the complexity of the underlying systems make it challenging for engineers to examine flowsheet overviews and details efficiently and accurately. Recent developments in flowsheet digitalization through computer vision and the Data Exchange in the Process Industry (DEXPI) standard have opened up the potential for a unified machine-readable format for P&IDs (Theisen et al., 2023). Yet, industrial DEXPI P&IDs are often extremely complex, often spanning thousands of pages.

We propose the ChatP&ID methodology, which allows communicating with P&IDs using natural language. In particular, we represent DEXPI P&IDs as labelled property graphs and integrate them with Large Language Models (LLMs). The approach consists of three main parts: 1) a P&ID graph representation developed following the DEXPI specification via our pyDEXPI Python package (Goldstein et al., n.d.); 2) a tool for generating P&ID knowledge graphs from pyDEXPI; 3) integration of the P&ID knowledge graph with LLMs using graph-based retrieval augmented generation (graph-RAG). This approach allows users to communicate with P&IDs using natural language. It extends the LLM's ability to retrieve contextual data from P&IDs and mitigates hallucinations. Leveraging the LLM's large corpus, the model is also able to interpret process information in P&IDs, which could support engineers in their daily tasks. In the future, this work also opens up opportunities in the context of other generative Artificial Intelligence (genAI) solutions for P&IDs, such as auto-generation or auto-correction (Schweidtmann, 2024).
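A hedged sketch of the graph-RAG retrieval step is given below: select the P&ID elements mentioned in the question, extract their graph neighbourhood, and pass it to an LLM as context. The tag-matching rule and the call_llm() helper are illustrative placeholders and are not the pyDEXPI or ChatP&ID APIs.

```python
# Graph-RAG retrieval sketch over a P&ID knowledge graph (assumed attributes).
import networkx as nx

def retrieve_context(pid_graph, question, hops=1):
    """Select nodes whose tag appears in the question plus their neighbourhood."""
    seeds = [n for n, d in pid_graph.nodes(data=True)
             if d.get("tag") and d["tag"].lower() in question.lower()]
    nodes = set(seeds)
    for s in seeds:
        nodes |= set(nx.single_source_shortest_path_length(pid_graph, s, cutoff=hops))
    sub = pid_graph.subgraph(nodes)
    return "\n".join(f"{u} ({pid_graph.nodes[u].get('class')}) -> {v}"
                     for u, v in sub.edges)

def answer(pid_graph, question, call_llm):
    context = retrieve_context(pid_graph, question)
    prompt = f"P&ID context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)      # call_llm is any LLM completion function (assumed)
```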

References

Goldstein, D.P., Alimin, A.A., Schulze Balhorn, L., Schweidtmann, A.M., n.d. pyDEXPI: A Python implementation and toolkit for the DEXPI information model.

Schweidtmann, A.M., 2024. Generative artificial intelligence in chemical engineering. Nat. Chem. Eng. 1, 193–193. https://doi.org/10.1038/s44286-024-00041-5

Theisen, M.F., Flores, K.N., Balhorn, L.S., Schweidtmann, A.M., 2023. Digitization of chemical process flow diagrams using deep convolutional neural networks. Digit. Chem. Eng. 6, 100072.

Toghraei, M., 2019. Piping and instrumentation diagram development. Wiley, Hoboken, NJ, USA.



Multi-Objective Optimization and Analytical Hierarchical Process for Sustainable Power Generation Alternatives in the High Mountain Region of Santurbán: case of Pamplona, Colombia

Ana María Rosso-Cerón2, Nicolas Cabrera1, Viatcheslav Kafarov1

1Department of Chemical Engineering, Carrera 27 Calle 9, Universidad Industrial de Santander, Bucaramanga, Colombia; 2Department of Chemical Engineering, Cl. 5 No. 3-93, Kilometro 1 Vía Bucaramanga, Universidad de Pamplona, Norte de Santander, Colombia

This study presents an integrated approach combining the Analytical Hierarchical Process (AHP) and a Mixed-Integer Multi-Objective Linear Programming (MOMILP) model to evaluate and select sustainable power generation alternatives for Pamplona, Colombia. The research focuses on the high mountain region of Santurbán, a páramo ecosystem that provides water to over 2.5 million people and supports rich biodiversity. Given the region’s vulnerability to climate change, sustainable energy solutions are essential to ensure environmental conservation and energy security [1].

The MOMILP model considers several power generation technologies, including photovoltaic panels, wind turbines, biomass, and diesel plants. These alternatives are integrated into the local electrical distribution system with the goal of minimizing two objectives: costs (net present value) and CO₂ emissions, while adhering to design, operational, and budgetary constraints. The ε-constraint method was employed to generate a Pareto-optimal set of solutions, balancing trade-offs between economic and environmental performance. Additionally, the study examines the potential for forming local energy communities by allowing surplus electricity from renewable sources to be sold, promoting local economic growth and energy independence.

The AHP is used to assess these alternatives based on multiple criteria, including social acceptance, job creation, regional accessibility, technological maturity, reliability, pollutant emissions, land use, and habitat impact. Expert opinions were gathered through the Delphi method, and the criteria were weighted using Saaty's scale. This comprehensive evaluation ensures that the decision-making process incorporates not only technical and economic aspects but also environmental and social considerations [2].
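As a small worked example of the weighting step, criterion weights can be derived from a pairwise comparison matrix via the principal eigenvector (Saaty's method); the 3x3 matrix below is made up for illustration and does not represent the study's expert judgements.

```python
# AHP priority vector from a pairwise comparison matrix (illustrative values).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # pairwise comparisons on Saaty's 1-9 scale
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalised criterion weights

ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
print("weights:", np.round(w, 3), "CI:", round(ci, 3))
```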

The analysis revealed that a hybrid solution combining solar, wind, and biomass technologies provides the best balance between economic viability and environmental sustainability. Solar energy, due to its technological maturity and minimal impact on the local habitat, emerged as a highly favourable option. Biomass, although contributing more to emissions than solar and wind, was positively evaluated for its potential to create local jobs and its high social acceptance in the region.

This study contributes to the growing body of literature on the integration of renewable energy sources into power distribution networks, particularly in ecologically sensitive areas like the Santurbán páramo. The combined use of AHP and MOMILP offers a robust framework for decision-makers, allowing for the systematic evaluation of sustainable alternatives based on technical performance and stakeholder priorities. This approach is particularly relevant for policymakers and utility companies engaged in Colombia’s energy transition efforts and sustainable development.

References

[1] Llambí, L. D., Becerra, M. T., Peralvo, M., Avella, A., Baruffol, M., & Díaz, L. J. (2019). Monitoring biodiversity and ecosystem services in Colombia's high Andean ecosystems: Toward an integrated strategy. Mountain Research and Development, 39(3). https://doi.org/10.1659/MRD-JOURNAL-D-19-00020.

[2] A. M. Rosso-Cerón, V. Kafarov, G. Latorre-Bayona, and R. Quijano-Hurtado, "A novel hybrid approach based on fuzzy multi-criteria decision-making tools for assessing sustainable alternatives of power generation in San Andrés Island," Renewable and Sustainable Energy Reviews, vol. 110, 159–173, 2019. https://doi.org/10.1016/j.rser.2019.04.053.



Environmental assessment of the catalytic arabinose oxidation

Mouad Hachhach, Dmitry Murzin, Tapio Salmi

Laboratory of Industrial Chemistry and Reaction Engineering (TKR), Johan Gadolin Process Chemistry Centre, Åbo Akademi University, Åbo-Turku FI-20500, Finland

Oxidation of arabinose to arabinoic acid presents an innovative way to valorize local biomass into a high added-value product. Experiments on the oxidation of arabinose to arabinoic acid with molecular oxygen were previously performed to determine the optimum reaction conditions (Kusema et al., 2010; Manzano et al., 2021), and using the obtained results a scaled-up process was designed and analysed from a techno-economic perspective (Hachhach et al., 2021).

These results were also used to analyse the environmental impact of the scaled-up process over its lifetime using life cycle assessment (LCA) methodology. SimaPro software combined with the IMPACT 2002+ impact assessment method was used in this work.

The results revealed that heating is the largest contributor to the environmental impacts, even though the reaction is performed under mild conditions (70 °C), which highlights the importance of reducing energy consumption, for example through efficient heat integration.



A FOREST BIOMASS-TO-HYDROCARBON SUPPLY CHAIN MATHEMATICAL MODEL FOR OPTIMIZING CARBON EMISSIONS AND ECONOMIC METRICS

Frank Piedra-Jimenez1, Rishabh Mehta2, Valeria Larnaudie3, Maria Analia Rodriguez1, Ana Inés Torres2

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213; 3Departamento de Bioingeniería, Facultad de Ingeniería, Universidad de la Republica, Julio Herrera y Reissig 565, Montevideo, Uruguay.

Forest supply chains (FSCs) are critical for achieving decarbonization targets (Santos et al., 2019). FSCs are characterized by abundant biomass residues, offering an opportunity to add value to processes while contributing to the production of clean energy products. One particularly interesting aspect is their potential integration with oil refineries to produce drop-in fuels, offering a transformative pathway to mitigate traditional refinery emissions (Barbosa-Povoa and Pinto, 2020).

In this article, a disjunctive mathematical programming approach is presented to optimize the design and planning of the FSC for the production of hydrocarbon products from biomass, considering both economic and environmental objectives. Various types of byproducts and residual biomass from forest harvesting activities, sawmill production, and the pulp and paper industries are considered. Alternative processing facilities and technologies can be established over a multi-period planning horizon. The design problem scope involves selecting forest areas for exploitation, identifying biomass sources, and determining the locations, technologies, and capacities of facilities that transform wood-based residues into methanol and pyrolysis oil, which are further processed in biodiesel and petroleum refinery plants, respectively. This problem is challenging due to the complexity of the supply chain network, which involves numerous decisions, constraints, and objectives.
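As a hint of the disjunctive modelling style (not the paper's actual formulation), the toy Pyomo.GDP fragment below encodes the choice of building or not building a single processing facility; all capacities, costs and the demand value are invented, and a MILP solver such as CBC is assumed to be available.

```python
# Toy generalized disjunctive programming (GDP) sketch in Pyomo: either a facility
# is built, in which case capacity and cost constraints apply, or it is not and its
# flow and cost are zero. Illustrative only; numbers are hypothetical.
import pyomo.environ as pyo
from pyomo.gdp import Disjunct, Disjunction

m = pyo.ConcreteModel()
m.flow = pyo.Var(bounds=(0, 100))          # biomass processed, kt/y (assumed)
m.cost = pyo.Var(bounds=(0, 1e4))

m.build = Disjunct()
m.build.cap = pyo.Constraint(expr=m.flow <= 80)                  # installed capacity
m.build.cost = pyo.Constraint(expr=m.cost == 500 + 12 * m.flow)  # fixed + variable cost

m.skip = Disjunct()
m.skip.no_flow = pyo.Constraint(expr=m.flow == 0)
m.skip.no_cost = pyo.Constraint(expr=m.cost == 0)

m.choice = Disjunction(expr=[m.build, m.skip])
m.demand = pyo.Constraint(expr=m.flow >= 40)                     # downstream demand
m.obj = pyo.Objective(expr=m.cost, sense=pyo.minimize)

pyo.TransformationFactory("gdp.bigm").apply_to(m)                # reformulate to MILP
pyo.SolverFactory("cbc").solve(m)
print(pyo.value(m.flow), pyo.value(m.cost))
```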

Especially in the case of large geographical areas, transportation becomes a crucial aspect of supply chain design and planning because the low biomass density significantly impacts carbon emissions and costs. Thus, the planning problem scope includes selecting connections and material flows across the supply chain and analyzing the impact of different types of transportation vehicles.

To estimate FSC carbon emissions, the Life Cycle Assessment (LCA) methodology is used. A gate-to-gate analysis is carried out for each activity in the FSC. The predicted LCA results are then integrated as input parameters into a mathematical programming model for FSC design and planning, extending previous work (Piedra-Jimenez et al., 2024). In this article, a multi-objective approach is employed to minimize CO2-equivalent emissions while optimizing net present value from an economic standpoint. A set of efficient Pareto points is obtained and compared in a case study of the Argentine forest industry.

References

Barbosa-Povoa, A.P., Pinto, J.M. (2020). “Process supply chains: perspectives from academia and industry”. Comput. Chem. Eng., 132, 106606, 10.1016/J.COMPCHEMENG.2019.106606

Piedra-Jimenez, F., Torres, A.I., Rodriguez, M.A. (2024), “A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty.” Computers & Chemical Engineering, 108540, ISSN 0098-1354.

Santos, A., Carvalho, A., Barbosa-Póvoa, A.P, Marques, A., Amorim, P. (2019). “Assessment and optimization of sustainable forest wood supply chains – a systematic literature review.” For. Policy Econ., 105, pp. 112-135, 10.1016/J.FORPOL.2019.05.026



Introducing competition in a multi-agent system for hybrid optimization

Veerawat Udomvorakulchai, Miguel Pineda, Eric S. Fraga

University College London, United Kingdom

Process systems engineering optimization problems may be challenging. These problems often exhibit nonlinearity, non-convexity, discontinuity, and uncertainty, and often only the values of objective and constraint functions are accessible. Black-box optimization methods may be appropriate to tackle such problems. The effectiveness of each method differs and is often unknown beforehand. Prior experience has shown that hybrid approaches can lead to better outcomes than using a single optimization method (1).

A general-purpose multi-agent framework for optimization, Cocoa, has recently been developed to automate the configuration and use of hybrid optimization, allowing for any number of optimization solvers, including different instances of the same solver (2). Solvers can share solutions, leading to better outcomes with the same computational effort. However, the computational resource allocated to each solver is inversely proportional to the number of solvers. Allocating equal time to each solver may not be ideal.

This paper describes the implementation of competition to go alongside cooperation: allocating more computational resource to the solvers best suited to a given problem. The allocation is dynamic and evolves as the search progresses. Each solver is assigned a priority which changes based on the results obtained by that solver. Scheduling is priority based, and the scheduler is similar to algorithms used by multi-tasking operating systems (3). Individual solvers are given more or less access to the computational resource, enabling the system to reward those solvers that do well while ensuring that all solvers are allocated some computational resource.
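The priority-proportional allocation can be pictured with the small Python sketch below (the Cocoa framework itself is written in Julia, and this is not its code): each solver's priority grows when it improves the incumbent and decays otherwise, while a minimum priority keeps every solver alive; `run_solver` is a random placeholder for an actual solver call.

```python
# Sketch of priority-based time allocation among cooperating solvers (illustrative).
import random

solvers = {"metaheuristic": 1.0, "direct_search": 1.0, "pattern_search": 1.0}
best = float("inf")
total_budget = 10.0   # seconds per scheduling round (assumed)

def run_solver(name, budget):
    """Placeholder: pretend to run a solver for `budget` seconds and return an objective."""
    return best - random.uniform(-0.5, 1.0) * budget / total_budget

for round_ in range(5):
    total_priority = sum(solvers.values())
    for name in solvers:
        share = total_budget * solvers[name] / total_priority   # proportional allocation
        result = run_solver(name, share)
        if result < best:                      # reward improvement
            best = result
            solvers[name] = min(solvers[name] * 1.5, 10.0)
        else:                                  # decay, but keep a minimum share
            solvers[name] = max(solvers[name] * 0.8, 0.2)
    print(round_, round(best, 3), {k: round(v, 2) for k, v in solvers.items()})
```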

The framework allows for the use of both metaheuristic and direct search methods. Metaheuristics explore the full search space while direct search methods are good at exploiting solutions. The framework has been implemented in Julia (4), making full use of multiprocessing.

A case study on the design of a micro-analytic system is presented (5). The model is dynamic and has uncertainties; the selection of designs is based on multiple criteria. This is a good test of the proposed framework as the computational demands are large and the search space is complex. The case study demonstrates the benefits of a multi-solver hybrid optimization approach with both cooperation and competition. The framework adapts to the evolving requirements of the search. Often, a metaheuristic method is allocated more computational resource at the beginning of the search while direct search methods are emphasized later.

1. Fraga ES. Hybrid methods for optimisation. In: Zilinskas J, Bogle IDL, editors. Computer aided methods for optimal design and operations. World Scientific Publishing Co.; 2006. p. 1–14.

2. Fraga ES, Udomvorakulchai V, Papageorgiou L. 2024. DOI: 10.1016/B978-0-443-28824-1.50556-1.

3. Madnick SE, Donovan JJ. Operating systems. McGraw-Hill Book Company; 1974.

4. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A fresh approach to numerical computing. SIAM Rev. 2017;59(1):65–98.

5. Pineda M, Tsaoulidis D, Filho P, Tsukahara T, Angeli P, Fraga E. 2021. DOI: 10.1016/j.nucengdes.2021.111432.



A Component Property Modeling Framework Utilizing Molecular Similarity for Accurate Predictions and Uncertainty Quantification

Youquan Xu, Zhijiang Shao, Anjan Kumar Tula

Zhejiang University, People's Republic of China

In many industrial applications, the demand for high-performance products, such as advanced materials and efficient working media, continues to rise. A key step in developing these products lies in the design of their constituent molecules. Traditional methods, based on expert experience, are often slow, labor-intensive, and prone to overlooking molecules with optimal performance. As a result, computer-aided molecular design (CAMD) has garnered significant attention for its potential to accelerate and improve the design process. One of the major challenges in CAMD is the lack of mechanistic knowledge that accurately links molecular structure to its properties. As a result, machine learning models trained on existing molecular databases have become the primary tools for predicting molecular properties. The typical approach involves using these models to predict the properties of potential molecules and selecting the best candidates based on these predictions. However, prediction errors are inevitable, introducing uncertainty into the reliability of the design. This can result in significant discrepancies between the predicted and experimentally verified properties, limiting the effectiveness of molecular discovery.
To address this issue, we propose a novel molecular property modeling framework based on a similarity coefficient. This framework introduces a new formula for molecular similarity, which considers compound type identification to enable more accurate molecular comparisons. By calculating the similarity between a target molecule and those in an existing database, the framework selects the most similar molecules to form a tailored training dataset, ensuring that only the most informative molecules are selected for the training set, while less relevant or misleading data points are excluded, significantly improving the accuracy of property predictions. In addition to enhancing prediction accuracy, the similarity coefficient also quantifies the confidence in the property predictions. By evaluating the availability and magnitude of the similarity index, the framework provides a measure of uncertainty in the predictions, giving a clearer understanding of how reliable the predicted properties are. This is especially important for molecules where limited similar data is available, allowing for more informed decision-making in the selection process. In tests across various molecular properties, our framework not only enhances the accuracy of predictions but also offers a clear evaluation of prediction reliability, especially for molecules with high similarity. Our framework introduces a two-fold evaluation system for potential molecules, using both predicted properties and the similarity coefficient. This dual criterion ensures that only molecules with both excellent predicted properties and high similarity are selected, enhancing the reliability of the screening process. The improved prediction accuracy, particularly for molecules with high similarity, reduces the need for extensive experimental validation and significantly increases the overall confidence in the molecular design process by explicitly addressing prediction uncertainty.
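Since the paper's own similarity formula and compound-type identification are not reproduced here, the sketch below only illustrates the general idea under simple assumptions: molecules are represented by hypothetical feature sets, a Jaccard coefficient ranks database entries, the most similar ones act as a tailored training set (here reduced to a similarity-weighted average), and the mean neighbour similarity serves as a confidence proxy.

```python
# Generic similarity-based training-set selection and confidence sketch (illustrative).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

database = {
    "mol_A": ({"C-C", "C=O", "O-H"}, 351.0),   # (hypothetical feature set, measured property)
    "mol_B": ({"C-C", "C-O", "O-H"}, 338.0),
    "mol_C": ({"C=C", "C-Cl"},       289.0),
}

def predict(target_features, db, k=2):
    scored = sorted(((jaccard(target_features, f), y) for f, y in db.values()), reverse=True)
    neighbours = scored[:k]                                  # tailored "training set"
    confidence = sum(s for s, _ in neighbours) / k           # mean similarity as confidence
    wsum = sum(s for s, _ in neighbours) or 1.0
    estimate = sum(s * y for s, y in neighbours) / wsum      # stand-in for the trained model
    return estimate, confidence

print(predict({"C-C", "C=O", "C-O"}, database))
```

In the dual screening criterion described above, a candidate would be kept only if both the predicted property and the confidence value pass their respective thresholds.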



A simple model for control and optimisation of a produced water re-injection facility

Rafael David de Oliveira1, Edmary Altamiranda2, Gjermund Mathisen2, Johannes Jäschke1

1Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; 2Subsea Technology, AkerBP ASA, Stavanger, Norway

Water injection (or water flooding) is an enhanced oil recovery technique that consists of injecting water into the reservoir to maintain the reservoir pressure. The injected water can come either from the sea or from the water separated from the oil and gas production (produced water). The amount of water injected into each well is typically decided by the reservoir engineers, and many methodologies have been proposed in the literature, usually based on reservoir models (Grema et al., 2016). Once the injection targets have been defined, the water injection network system can be optimised. A relevant optimisation problem in this context is the optimal operation of the topside pump system while ensuring the integrity of the subsea water injection system by maximising the lifetime of the equipment. Work at this level usually models the system at a macro level, where each unit is represented as a node in a network (Ivo and Imsland, 2022). The use of simple, lower-level models in which the manipulated and measured variables can be directly connected has proved very useful in the design of new control strategies (Sivertsen et al., 2006), as well as in real-time optimisation formulations where the model parameters can be updated in real time (Matias et al., 2022).

This work proposes a simple model for control and optimisation of a produced water re-injection facility. The model was based on a real facility in operation on the Norwegian continental shelf and consists of a set of differential-algebraic equations. Data were gathered from the available sensors, pump operation and water injection targets. Model parameters related to equipment dimensions and the valves' flow coefficients were fixed as in the real plant. The remaining parameters were estimated from the field data by solving a nonlinear least-squares problem. Uncertainty quantification was performed to assess the parameters' confidence intervals. Moreover, simulations were performed to evaluate and validate the proposed model. The results show that a simple model can be fitted to the plant and, at the same time, describe the key features of the plant dynamics. The developed model is expected to aid the implementation of strategies like self-optimising control and real-time optimisation on produced water re-injection facilities in the near future.
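The estimation step can be pictured with a deliberately simplified sketch: a single tank with an outlet valve stands in for the injection facility, an unknown valve coefficient is fitted to synthetic level measurements with scipy's nonlinear least squares, and a rough confidence interval is taken from the Gauss-Newton covariance approximation. All parameter values are invented; this is not the facility model of the paper.

```python
# Illustrative parameter estimation for a simple tank/valve model (not the paper's model).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

A = 2.0          # tank cross-section, m^2 (fixed "equipment dimension", assumed)
q_in = 0.05      # inlet flow, m^3/s (assumed constant)

def simulate(cv, t_eval, h0=1.0):
    rhs = lambda t, h: (q_in - cv * np.sqrt(max(h[0], 1e-9))) / A
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [h0], t_eval=t_eval)
    return sol.y[0]

t = np.linspace(0, 600, 60)
h_meas = simulate(0.04, t) + np.random.normal(0, 0.005, t.size)   # synthetic "field data"

res = least_squares(lambda p: simulate(p[0], t) - h_meas, x0=[0.02], bounds=(0, 1))
cv_hat = res.x[0]

# Rough 95% confidence interval from the Gauss-Newton covariance approximation.
dof = t.size - 1
s2 = 2 * res.cost / dof                     # res.cost is 0.5 * sum of squared residuals
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)
print(f"Cv = {cv_hat:.4f} +/- {1.96 * np.sqrt(cov[0, 0]):.4f}")
```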

References

Grema, A. S., and Yi Cao. 2016. “Optimal Feedback Control of Oil Reservoir Waterflooding Processes.” International Journal of Automation and Computing 13 (1): 73–80.

Ivo, Otávio Fonseca, and Lars Struen Imsland. 2022. “Framework for Produced Water Discharge Management with Flow-Weighted Mean Concentration Based Economic Model Predictive Control.” Computers & Chemical Engineering 157 (January):107604.

Matias, José, Julio P. C. Oliveira, Galo A. C. Le Roux, and Johannes Jäschke. 2022. “Steady-State Real-Time Optimization Using Transient Measurements on an Experimental Rig.” Journal of Process Control 115 (July):181–96.

Sivertsen, Heidi, John-Morten Godhavn, Audun Faanes, and Sigurd Skogestad. 2006. “Control Solutions for Subsea Processing and Multiphase Transport.” IFAC Proceedings Volumes, 6th IFAC Symposium on Advanced Control of Chemical Processes, 39 (2): 1069–74.



An optimization-based conceptual synthesis of reaction-separation systems for glucose to chemicals conversion.

Syed Ejaz Haider, Ville Alopaeus

Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 16100, 00076 Aalto, Finland.

Abstract

Lignocellulosic biomass has emerged as a promising renewable alternative to fossil resources for the sustainable production of green chemicals [1]. Among the high-value biomass-derived building block chemicals, levulinic acid has gained significant attention due to its wide industrial applications [2]. It serves as a raw material for the synthesis of resins, plasticizers, textiles, animal feed, coatings, antifreeze, pharmaceuticals, and bio-based products [3]. In order to produce levulinic acid on a commercial scale, it is essential to identify the most cost-effective and optimal synthesis route.

Two main methods exist to identify the optimal process structure: hierarchical decomposition and superstructure-based optimization. The hierarchical decomposition method involves making design decisions at each detail level based on heuristics; however, it struggles to capture interactions among decisions at different detail levels. In contrast, superstructure-based synthesis is a smart process systems engineering methodology that systematically evaluates a wide range of structural alternatives simultaneously using an equation-oriented approach to identify the optimal structure.

This study aims to identify the optimal process structure and parameters for the commercial-scale production of levulinic acid from glucose using a mathematical programming approach. To obtain more meaningful results, the reaction and separation systems were investigated separately under two optimization scenarios with two different objective functions.

Scenario 1 focuses on optimizing the glucose conversion reactor to enhance overall profit and minimize waste disposal. The optimization model includes a rigorous economic objective function that simultaneously considers product selling prices, capital and manufacturing costs over a 20-year project life, and waste disposal costs. A continuous tank reactor model was used as a mass balance constraint, utilizing rate parameters from our recent research at the Chemical Engineering research group, Aalto University. This nonlinear programming (NLP) problem was implemented in GAMS and solved using the BARON solver to determine the optimal operating conditions and reactor size. The optimal reactor volume was found to be 13.2 m³, with an optimal temperature of 197.8 °C, for a levulinic acid production capacity of 1593 tonnes/year.
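The study itself uses GAMS with BARON; purely as an analogous open sketch, the Pyomo model below couples a steady-state continuous reactor mass balance with a simple profit objective. The first-order Arrhenius kinetics, prices and cost coefficients are all hypothetical, and an NLP solver such as IPOPT is assumed to be installed.

```python
# Analogous NLP sketch (not the GAMS/BARON model of the study): choose reactor volume
# and temperature to maximise profit subject to a steady-state mass balance.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.V = pyo.Var(bounds=(1, 50), initialize=10)          # reactor volume, m^3
m.T = pyo.Var(bounds=(400, 520), initialize=450)      # temperature, K
m.X = pyo.Var(bounds=(0, 0.99), initialize=0.5)       # glucose conversion

F, C0 = 5.0, 2.0          # feed flow (m^3/h) and glucose concentration (kmol/m^3), assumed
k0, Ea, R = 1.0e7, 8.0e4, 8.314                       # assumed Arrhenius parameters
price, waste_cost, capex_coeff = 120.0, 15.0, 40.0    # assumed economics

m.k = pyo.Expression(expr=k0 * pyo.exp(-Ea / (R * m.T)))
# Steady-state continuous-reactor balance, first order in glucose: F*C0*X = k*C0*(1-X)*V
m.balance = pyo.Constraint(expr=F * C0 * m.X == m.k * C0 * (1 - m.X) * m.V)

profit = price * F * C0 * m.X - waste_cost * F * C0 * (1 - m.X) - capex_coeff * m.V
m.obj = pyo.Objective(expr=profit, sense=pyo.maximize)

pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.V), pyo.value(m.T), pyo.value(m.X))
```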

Scenario 2 addresses the synthesis of distillation-based separation sequences to separate the multicomponent reactor effluent into various product streams. All potential candidates are embedded in a superstructure, which is translated into a mixed-integer nonlinear programming problem (MINLP). Research is progressing towards solving this MINLP problem and identifying the optimal configuration of distillation columns for the desired separation task.

References

[1] F. H. Isikgor and C. R. Becer, "Lignocellulosic biomass: a sustainable platform for the production of bio-based chemicals and polymers," Polymer chemistry, vol. 6, no. 25, pp. 4497-4559, 2015.

[2] T. Werpy and G. Petersen, "Top value added chemicals from biomass: volume I--results of screening for potential candidates from sugars and synthesis gas," National Renewable Energy Lab.(NREL), Golden, CO (United States), 2004.

[3] S. Takkellapati, T. Li, and M. A. Gonzalez, "An overview of biorefinery-derived platform chemicals from a cellulose and hemicellulose biorefinery," Clean technologies and environmental policy, vol. 20, pp. 1615-1630, 2018.



Kinetic modeling of drug substance synthesis considering slug flow characteristics in a liquid-liquid reaction

Shunsei Yayabe1, Junu Kim1, Yusuke Hayashi1, Kazuya Okamoto2, Keisuke Shibukawa2, Hayao Nakanishi2, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Shionogi Pharma Co., Ltd., Japan

In the production of drug substances (active pharmaceutical ingredients), flow synthesis is increasingly being introduced due to its various advantages, such as a high surface-to-volume ratio and small system size [1]. One promising application of flow synthesis is liquid-liquid reaction [2]. When two immiscible liquids enter a flow reactor together, characteristic flow patterns, in particular slug flow, are formed. These patterns are determined by the fluid properties and the reactor specifications, and have a significant impact on the mass transfer rate. Previous studies have analyzed the effect of slug flow on mass transfer in liquid-liquid reactions using computational fluid dynamics [3, 4]. These studies provide valuable insights into the influence of flow characteristics on the reaction. However, there is a lack of modeling approaches that simultaneously account for flow characteristics and reaction kinetics, which may limit the application of liquid-liquid reactions in flow synthesis.

We developed a kinetic model of drug substance synthesis that incorporates slug flow characteristics in a liquid-liquid reaction, with the aim of determining the feasible range of the process parameters. The target reaction was Stevens oxidation, a novel liquid-liquid reaction between organic and aqueous phases that produces the ester via a shorter pathway than the conventional route. To obtain kinetic data, experiments were conducted varying the inner diameter, reaction temperature, and residence time. In Stevens oxidation, a catalyst was used, and experimental conditions were adjusted to form slug flow and thereby promote the catalyst's mass transfer. Using the obtained data, a model was developed for the change in concentrations of the starting material, desired product, intermediate, dimer, carboxylic acid, and catalyst. In the model for the catalyst concentration, mass transfer was described using the overall volumetric mass transfer coefficient under slug flow.
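A heavily reduced sketch of this coupling is shown below, assuming hypothetical rate and mass-transfer constants and only three lumped species (the actual model tracks all six components listed above): catalyst enters the organic phase through a kLa-driven transfer term, and the reaction rate depends on the locally available catalyst.

```python
# Reduced liquid-liquid reaction model with slug-flow mass transfer (illustrative).
import numpy as np
from scipy.integrate import solve_ivp

k_rxn = 0.8       # L/(mol*min), assumed rate constant
kla = 0.5         # 1/min, assumed overall volumetric mass-transfer coefficient
c_cat_eq = 0.02   # mol/L, assumed equilibrium catalyst concentration in the organic phase

def rhs(t, y):
    c_sub, c_prod, c_cat = y               # substrate, product, catalyst (organic phase)
    transfer = kla * (c_cat_eq - c_cat)    # catalyst transfer from the aqueous slugs
    rate = k_rxn * c_sub * c_cat
    return [-rate, rate, transfer - 0.1 * rate]   # small catalyst consumption term

sol = solve_ivp(rhs, (0, 60), [0.5, 0.0, 0.0], t_eval=np.linspace(0, 60, 7))
print(sol.t)
print(sol.y.round(4))
```

Lowering kla in this sketch mimics the reported effect of a larger inner diameter: slower catalyst transfer and hence a slower reaction.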

The model successfully reproduced the experimental results and demonstrated that, as the inner diameter increases, the efficiency of mass transfer in slug flow decreases, slowing down the reaction. The developed model was used to simulate the yields of the starting material and the dimer, as well as the process mass intensity, in order to determine the feasible region. It was shown that when the reagent concentration was either too high or too low, the operating conditions fell outside the feasible region. This kinetic model with flow characteristics will be useful for the process design of drug substance synthesis using liquid-liquid reactions. Ongoing work focuses on validating the feasible region.

[1] S. Diab, et al., React. Chem. Eng., 2021, 6, 1819. [2] L. Capaldo, et al., Chem. Sci., 2023, 14, 4230. [3] A. Mittal, et al., Ind. Eng. Chem. Res., 2023, 62, 15006. [4] D. Cheng, et al., Ind. Eng. Chem. Res., 2020, 59, 4397.



Learning-based control approach for nanobody-scorpion antivenom optimization

Juan Camilo Acosta-Pavas1, David C Corrales1, Susana M Alonso Villela1, Balkiss Bouhaouala-Zahar2, Georgios Georgakilas3, Konstantinos Mexis4, Stefanos Xenios4, Theodore Dalamagas3, Antonis Kokosis4, Michael O'donohue1, Luc Fillaudeau1, César A. Aceves-Lara1

1TBI, Université de Toulouse, CNRS UMR5504, INRAE UMR792, INSA, Toulouse, France, France; 2Laboratoire des Biomolécules, Venins et Applications Théranostiques (LBVAT), Institut Pasteur de Tunis, 13 Place Pasteur, BP-74, 1002 Le Belvédère, Tunis, Tunisia; 3Athena Research Center, Marousi, Greece; 4School of Chemical Engineering, National Technical University of Athens, Iroon Polytechneiou 9, Zografou, 15780 Athens, Greece

One market segment of the bioindustry is the production of recombinant proteins in E. coli for application in serotherapy (Alonso Villela et al., 2023). However, the monitoring, control, and optimization of these processes remain challenging. Different approaches exist to optimize bioprocess performance; a common one is the use of model-based control strategies such as Model Predictive Control (MPC). Another strategy is learning-based control, such as Reinforcement Learning (RL).

In this work, an RL approach was applied to maximize the production of recombinant proteins in E. coli during the induction mode. The aim was to find the optimal substrate feed rate (Fs) applied during induction that maximizes protein productivity. The RL agent was trained using the actor-critic Twin-Delayed Deep Deterministic (TD3) Policy Gradient algorithm. The reward corresponded to the maximum value of the productivity. The environment was represented by a dynamic hybrid model (DHM) published by Corrales et al. (2024). The simulated conditions consisted of a reactor with 2 L of working volume (V) at 37 °C for the batch (10 g glucose/L) and fed-batch (feeding with 300 g glucose/L) modes, and 28 °C during the induction stage. The first 3.4 h were operated in batch mode. The fed-batch mode was operated with Fs = 1x10^-3 L/h until 8 h. Afterwards, the RL agent was trained in the induction mode until the end of the process at 20 h. The agent actions were updated every 2 h. Two types of constraints were considered: 1.49 < V < 5.00 L and 1x10^-3 < Fs ≤ 5x10^-4 L/h. Finally, the results were compared with the MPC approach.

The training options for all networks were: a learning rate of 1x10^-3 for the critic and 1x10^-4 for the actor, a gradient threshold of 1.0, a mini-batch size of 1x10^2, a discount factor of 0.9, an experience buffer length of 1x10^6, and an agent sample time of 0.1 h, with a maximum of 700 episodes.
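For readers who want a concrete picture of such a setup, the sketch below mirrors the idea in Python using gymnasium and stable-baselines3 (the authors' own agent configuration above is not reproduced): a toy environment exposes the feed rate as the action and a crude placeholder for the dynamic hybrid model as the state update, and a TD3 agent is trained on it. All dynamics and hyperparameters here are illustrative only.

```python
# Illustrative RL setup for the induction feed-rate problem (placeholder dynamics,
# not the published dynamic hybrid model).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3

class InductionEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Box(low=1e-4, high=5e-4, shape=(1,), dtype=np.float32)   # Fs, L/h
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(2,), dtype=np.float32)  # V, protein
        self.dt = 2.0  # h between action updates

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.V, self.P = 8.0, 2.0, 0.0        # start of induction (assumed state)
        return np.array([self.V, self.P], dtype=np.float32), {}

    def step(self, action):
        fs = float(action[0])
        self.P += 0.05 * fs * 1e3 * self.dt           # placeholder protein formation, mg
        self.V += fs * self.dt
        self.t += self.dt
        reward = self.P / (self.t - 8.0)              # productivity proxy, mg/h
        terminated = self.t >= 20.0 or self.V > 5.0
        obs = np.array([self.V, self.P], dtype=np.float32)
        return obs, reward, terminated, False, {}

env = InductionEnv()
model = TD3("MlpPolicy", env, learning_rate=1e-3, gamma=0.9, batch_size=100, verbose=0)
model.learn(total_timesteps=600)   # small budget, for illustration only
```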

The MPC and RL control strategies show similar behaviour. In both cases, the optimal action is to apply the maximum Fs to increase protein productivity at the end of the process, up to 4.81x10^-2 mg/h. Regarding computation time, the RL agent training took a mean of 0.3284 s to perform 14.0x10^3 steps per action update, while the MPC required a mean of 0.3366 s to solve an optimization problem at every action update. The RL approach is shown to be a good alternative for exploring the optimization of recombinant protein production.

References

Alonso Villela, S. M., Kraïem-Ghezal, H., Bouhaouala-Zahar, B., Bideaux, C., Aceves Lara, C. A., & Fillaudeau, L. (2023). Production of recombinant scorpion antivenoms in E. coli: Current state and perspectives. Applied Microbiology and Biotechnology, 107(13), 4133-4152. https://doi.org/10.1007/s00253-023-12578-1

Corrales, D. C., Villela, S. M. A., Cescut, J., Daboussi, F., Fillaudeau, L., & Aceves-Lara, C. A. (2024). Dynamic Hybrid Model for Nanobody-based Antivenom Production (scorpion antivenom) with E. coli CH10-12 and E. coli NbF12-10.



Kinetics modeling of the thermal degradation of densified refuse-derived fuel (d-RDF)

Mohammad Ali Nazari, Juma Haydary

Institute of Chemical and Environmental Engineering, Slovak University of Technology in Bratislava, Slovak Republic

Currently, modern human life is experiencing an energy crisis and a massive generation of Municipal Solid Waste (MSW). The conversion of the carbon-containing fraction of MSW, known as refuse-derived fuel (RDF), into energy, fuel, and high-value bio-based chemicals has become a key focus in ongoing discussions on sustainable development, driven by rising energy demand, depleting fossil fuel reserves, and growing environmental concerns. However, a significant limitation of unprocessed RDF lies in its heterogeneous composition, which complicates material handling, reactor feeding, and the accurate prediction of its physical and chemical properties. The densification of RDF (d-RDF) offers a potential solution to these challenges by reducing material variability and generating a more uniform, durable form, thereby enhancing its suitability for processes such as pyrolysis. This research effort involves evaluating the physicochemical characteristics and thermal degradation of d-RDF using a thermogravimetric analyzer (TGA) under controlled conditions at heating rates of 2, 5, 10, and 20 K·min⁻¹. Model-free methods, including Friedman (FRM), Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS), Vyazovkin (VYZ), and Kissinger, were applied to determine the apparent kinetic and thermodynamic parameters within the conversion range of 1% to 85%. The physicochemical properties of d-RDF demonstrated its suitability for various thermochemical conversion applications. Thermal degradation predominantly occurred within the temperature range of 220–500 °C, accounting for 98% of the total weight loss. The coefficients of determination (R²) for the fitted plots ranged from 0.90 to 1.00 across all applied models. The average activation energy (Eα) calculated using the FRM, FWO, KAS, and VYZ methods was 260, 247, 247, and 263 kJ·mol⁻¹, respectively. The evaluation of the thermodynamic parameters (ΔH, ΔG, and ΔS) indicated the endothermic nature of the process. A statistical F-test was applied to identify the best agreement between experimental and calculated data. According to the F-test, the differences in variance for the FRM and VYZ models were insignificant, indicating the best agreement with the experimental data. Considering all results, including the kinetic and thermodynamic parameters along with the high heating value (HHV) of 25.20 MJ·kg⁻¹, d-RDF shows a strong affinity for thermal degradation under pyrolysis conditions and can be regarded as a suitable feedstock for producing fuel and value-added products. Moreover, it serves as a viable alternative to fossil fuels, contributing to the United Nations 2030 Sustainable Development Goals.
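To make the isoconversional step concrete, the sketch below applies the standard KAS relation, ln(β/T²) = const − Eα/(R·T): at a fixed conversion, regressing ln(β/T²) against 1/T over the four heating rates gives −Eα/R as the slope. The temperature readings used here are hypothetical stand-ins for the TGA data, chosen only so the result lands in a plausible range.

```python
# Standard KAS isoconversional sketch at one conversion level (hypothetical T data).
import numpy as np

R = 8.314                                          # J/(mol*K)
beta = np.array([2.0, 5.0, 10.0, 20.0])            # heating rates, K/min
T_alpha = np.array([585.0, 594.0, 602.0, 611.0])   # T at a fixed conversion, K (assumed)

y = np.log(beta / T_alpha**2)
x = 1.0 / T_alpha
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R / 1000.0                           # apparent activation energy, kJ/mol
print(f"Apparent activation energy at this conversion: {Ea:.0f} kJ/mol")
```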



Cost-optimal solvent selection for batch cooling crystallisation of flurbiprofen

Matthew Blair, Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Choosing suitable solvents for crystallisation processes can be very challenging when developing new pharmaceuticals, given the vast number of choices, crystallisation techniques and performance metrics. A high-efficiency solvent must ensure high API recovery, low cost and minimal environmental impact,1 and allow batch (or possibly continuous) operation within an acceptable (not narrow) parameter space. To streamline this task, process and thermodynamic modelling tools2,3 can be used to systematically probe the behaviour of different crystallisation setups in silico prior to conducting lab-scale experiments. In particular, it has been found that we can use thermodynamic models alongside principles from solid-liquid equilibria (SLE) to determine the impact of key process variables (e.g. temperature and solvent choice)1 on the performance of different processes without (or prior to) testing them in the laboratory.2,3

This paper presents the implementation of a modelling framework that can be used to minimise the cost and environmental impact of batch crystallisation processes on the basis of thermodynamic principles. This process modelling framework (implemented in MATLAB®) is employed to study the batch cooling crystallisation of flurbiprofen, a non-steroidal anti-inflammatory drug (NSAID) used against arthritis.4 Moreover, we have used the Non-Random Two-Liquid (NRTL) activity coefficient model, to study its thermophysical and solubility properties in twelve (12) common upstream pharmaceutical solvents,4,5 namely three alkanes (n-hexane, n-heptane, n-octane), two (isopropyl, methyl-tert-butyl) ethers, five alcohols (n-propanol, isopropanol, n-butanol, isobutanol, isopentanol), an ester (isopropyl acetate), and acetonitrile, in an adequately wide temperature range (283.15-323.15 K). Established green metrics1 (e.g. E-factor) and costing methodologies are employed to comparatively evaluate process candidates.6
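The core SLE calculation behind such screening can be sketched as follows (the framework itself is implemented in MATLAB; this is only an illustrative Python analogue): the solubility x of the API in a solvent satisfies ln(x·γ(x)) = −(ΔHfus/R)(1/T − 1/Tm), with γ from the binary NRTL model. The melting data and NRTL parameters below are placeholders, not the fitted flurbiprofen values.

```python
# Illustrative SLE solubility calculation with a binary NRTL activity-coefficient model.
import numpy as np
from scipy.optimize import brentq

R = 8.314
dHfus = 27.0e3       # J/mol, assumed enthalpy of fusion
Tm = 387.0           # K, assumed melting temperature
alpha, tau12, tau21 = 0.3, 1.2, 0.4   # assumed NRTL parameters (1 = API, 2 = solvent)

def gamma1(x1):
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    return np.exp(x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                           + tau12 * G12 / (x2 + x1 * G12)**2))

def solubility(T):
    rhs = -(dHfus / R) * (1.0 / T - 1.0 / Tm)      # ideal-solubility term
    f = lambda x1: np.log(x1 * gamma1(x1)) - rhs
    return brentq(f, 1e-6, 0.999)                  # API mole fraction at saturation

for T in (283.15, 298.15, 313.15, 323.15):
    print(T, round(solubility(T), 4))
```

Evaluating the solubility at the crystalliser's start and end temperatures gives the theoretical yield for each solvent, which then feeds the green metrics and costing comparison.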

LITERATURE REFERENCES

  1. Blair et al., Process modeling, simulation and technoeconomic evaluation of batch vs continuous pharmaceutical manufacturing cephalexin. 2023 AIChE Annual Meeting, Orlando, to appear (2023).
  2. Watson et al., Computer aided design of solvent blends for hybrid cooling and antisolvent crystallization of active pharmaceutical ingredients. Organic Process Research & Development 25(5): 1123 (2021).
  3. Sheikholeslamzadeh et al., Optimal solvent screening for crystallization of pharmaceutical compounds from multisolvent systems. Industrial & Engineering Chemistry Research 51(42): 13792 (2012).
  4. Tian et al., Solution thermodynamic properties of flurbiprofen in twelve solvents (283.15–323.15 K). Journal of Molecular Liquids 296: 111744 (2019).
  5. Prat et al., CHEM21 selection guide of classical and less classical solvents. Green Chemistry 18(1): 288 (2016).
  6. Dafnomilis et al., Multiobjective dynamic optimization of ampicillin batch crystallization: sensitivity analysis of attainable performance vs product quality constraints, Industrial & Engineering Chemistry Research 58(40): 18756 (2019).


A Machine Learning (ML) implementation for beer fermentation optimisation

Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Food and beverage industries receive key feedstocks whose composition is subject to geographic and seasonal variability, and rely on factories whose process conditions have limited manipulation margins but must rightfully meet stringent product quality specifications. Unlike chemicals, most of our favourite foods and beverages are highly sensitive and perishable, with relatively small profit margins. Although manufacturing processes (recipes) have been perfected over centuries or even millennia, quantitative understanding is limited. Predictions about the influence of input (feedstock) composition and manufacturing (process) conditions on final food/drink product quality are hazardous, if not impossible, because small changes can result in extreme variations. A slightly warmer fermentation renders beer undrinkable; similarly, an imbalance among sugar, lipid (fat) and protein can make chocolate unstable.

The representational versatility of Artificial Neural Networks (ANN) for process systems studies has been well known for decades.2 First-principles knowledge (mass, heat and momentum conservation, chemical reactions), though, is captured via deterministic (ODE/PDE) models, which invariably require laborious parameterisation for each particular process plant. Physics-Informed Neural Networks (PINN)3 combine the best of both worlds: they offer chemistry-compliant NNs with proven extrapolation power to revolutionise manufacturing, circumventing parametric estimation uncertainty and enabling efficient process control. Fermentation for specific products (e.g. ethanol4, biopharmaceuticals5) has been explored by means of ML/ANN (not PINN) tools, thus without embedded first-principles descriptions.3

Though Food Science cannot provide global composition-structure-quality correlations, Artificial Intelligence/AI can be used to extract valuable process knowledge from factory data. The case of beer, in particular, has been the focus of several of our papers,6-7 offering a sound comparison basis for evaluating model fidelity between precedents and new PINN approaches. Pursuing PINN modelling caters to greater complexity, in terms of plant flowsheet, and target product structure and chemistry. We thus revisit the problem with ML/PINN tools to efficiently predict process efficiency, which is instrumental in computational design and optimisation of key unit operations (e.g. fermentors). Traditional (first-principles) descriptions of these necessitate elaborate (e.g. CFD) submodels of extreme complexity, with at least two severe drawbacks: (1) cumbersome prerequisite parameter estimation with extreme uncertainty, (2) prohibitively high CPU cost. The complementarity of the two major approaches is thus investigated, and major advantages/shortcomings will be highlighted.
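As a minimal picture of the PINN idea (not the beer fermentation model of refs. 6-7), the PyTorch sketch below trains a network X(t) against sparse measurements while also penalising the residual of a placeholder kinetic, logistic biomass growth dX/dt = μ·X·(1 − X/Xmax); the kinetic form, parameters and data are all assumed for illustration.

```python
# Minimal PINN sketch: data loss + ODE-residual loss via automatic differentiation.
import torch
import torch.nn as nn

mu, Xmax = 0.3, 10.0                      # assumed kinetic parameters
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

t_data = torch.tensor([[0.0], [5.0], [10.0], [20.0]])
x_data = torch.tensor([[0.5], [1.9], [5.2], [9.3]])          # sparse "measurements" (invented)
t_coll = torch.linspace(0, 24, 100).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(3000):
    opt.zero_grad()
    loss_data = ((net(t_data) - x_data) ** 2).mean()
    x = net(t_coll)
    dxdt = torch.autograd.grad(x, t_coll, grad_outputs=torch.ones_like(x), create_graph=True)[0]
    residual = dxdt - mu * x * (1 - x / Xmax)                # physics residual of the placeholder ODE
    loss = loss_data + (residual ** 2).mean()
    loss.backward()
    opt.step()

print(float(loss))
```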

LITERATURE REFERENCES

  1. Gerogiorgis & Bakalis, Digitalisation of Food+Beverage Manufacturing, Food & Bioproducts Processing, 128: 259-261 (2021).
  2. Lee et al., Machine learning: Overview of recent progresses and implications for the Process Systems Engineering field, Computers & Chemical Engineering, 114: 111-121 (2018).
  3. Karniadakis et al., Physics-informed machine learning, Nature Reviews Physics, 3(6): 422-440 (2021).
  4. Pereira et al., Hybrid NN modelling and particle swarm optimization for improved ethanol production from cashew apple juice, Bioprocess & Biosystems Engineering 44: 329-342 (2021).
  5. Petsagkourakis et al., Reinforcement learning for batch bioprocess optimization. Computers & Chemical Engineering, 133: 106649 (2020).
  6. Rodman & Gerogiorgis, Multi-objective process optimisation of beer fermentation via dynamic simulation, Food & Bioproducts Processing, 100A: 255-274 (2016).
  7. Rodman & Gerogiorgis, Dynamic optimization of beer fermentation: Sensitivity analysis of attainable performance vs. product flavour constraints, Computers & Chemical Engineering, 106: 582-595 (2017).


Operability analysis of modular heterogeneous electrolyzer plants using system co-simulation

Michael Große1,3, Isabell Viedt2,3, Hannes Lange2,3, Leon Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group; 3TUD Dresden University of Technology, Process-to-Order Lab

In the upcoming decades, the scale-up of hydrogen production will play a crucial role for the integration of renewable energy into future energy systems [1]. One scale-up strategy is the numbering-up of standardized electrolysis units in a modular plant concept, according to [2, 3]. The use of a modular plant concept can support the integration of different electrolyzer technologies into one heterogeneous electrolyzer plant to leverage technology-specific advantages and counteract disadvantages [4].

This work focuses on the analysis of technical operability and feasibility of large-scale modular electrolyzer plants in a heterogeneous plant layout using system co-simulation. Developed and available dynamic process models of low-temperature electrolysis components are combined in Simulink as a shared co-simulation environment. Strategies to control relevant process parameters, like temperatures, pressures, flow rates and component mass fractions in the different subsystems and the overall plant, are developed and presented. An operability analysis is carried out to verify the functionality of the presented plant layout and the corresponding control strategies [5].

The dynamic progression of all controlled parameters is presented for different operative states that may occur, such as start-up, continuous operation, load change and hot-standby behavior. It is observed that the exemplary plant is operational, as all relevant process parameters can be held within the allowed operating range during all operative states. However, some limitations regarding the possible operating range of individual technologies are identified. Possible solution approaches for these identified problems are conceptualized.

Additionally, relevant metrics for efficiency and flexibility, such as the specific energy consumption and the expected unserved flexible energy (EUFE) [4], are calculated to demonstrate feasibility and show the advantages of heterogeneous electrolyzer plant layouts, such as increased operational flexibility without major reductions in efficiency.

References

[1] International Energy Agency, “Global Hydrogen Review 2023”, 2023. https://www.iea.org/reports/global-hydrogen-review-2023.

[2] L. Bittorf et al., “Upcoming domains for the MTP and an evaluation of its usability for electrolysis”, in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Sep. 2022, pp. 1–4. doi: 10.1109/ETFA52439.2022.9921280.

[3] H. Lange, A. Klose, L. Beisswenger, D. Erdmann, and L. Urbas, “Modularization approach for large-scale electrolysis systems: a review”, Sustain. Energy Fuels, vol. 8, no. 6, pp. 1208–1224, 2024. doi: 10.1039/D3SE01588B.

[4] M. Mock, I. Viedt, H. Lange, and L. Urbas, “Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains”, in Computer Aided Chemical Engineering, vol. 53, Elsevier, 2024, pp. 1885–1890. doi: 10.1016/B978-0-443-28824-1.50315-X.

[5] V. Gazzaneo, J. C. Carrasco, D. R. Vinson, and F. V. Lima, “Process Operability Algorithms: Past, Present, and Future Developments”, Ind. Eng. Chem. Res., vol. 59, no. 6, pp. 2457–2470, Feb. 2020. doi: 10.1021/acs.iecr.9b05181.



High-pressure membrane reactor for ammonia decomposition: Modeling, simulation and scale-up using a Python-Aspen Custom Modeler interface

Leonardo Antonio Cáceres Avilez, Antonio Esio Bresciani, Claudio Augusto Oller do Nascimento, Rita Maria de Brito Alves

Universidade de São Paulo, Brazil

One of the current challenges for hydrogen-related technologies is hydrogen storage and transportation. The low volumetric density and low boiling point require high-pressure and low-temperature conditions for effective transport and storage. A potential solution to these challenges involves storing hydrogen in chemical compounds that can be easily transported and stored, with hydrogen being released through decomposition processes [1]. Ammonia is a promising hydrogen carrier due to its high hydrogen content, approximately 17.8% by mass, and its high volumetric hydrogen density of 121 kg/m³ at 10 bar [2]. The objective of this study was to develop a mathematical model to analyze and design a packed bed membrane reactor (PBMR) for large-scale ammonia decomposition. The kinetic model for the Ru-K/CaO catalyst was obtained from the literature and validated with experimental data [3]. This catalyst was selected due to its effective performance under high-pressure conditions, which increases the driving force for hydrogen permeation in the membrane reactor. The model was developed in Aspen Custom Modeler (ACM) using a 1D pseudo-homogeneous approach. The governing equations for mass, energy, and momentum conservation were discretized via a first-order backward finite difference method and solved using a nonlinear solver. An effectiveness factor was incorporated to account for intraparticle mass transfer limitations, which are significant for the large particle sizes typically employed in industrial applications. The study further investigated the influence of sweep gas ratio, temperature, relative pressure, and space velocity on ammonia conversion and hydrogen recovery, employing response surface methodology generated through an ACM-Python interface. The proposed multi-tubular membrane reactor achieved approximately 90.4% ammonia conversion and 91% hydrogen recovery, operating at an inlet temperature of 400 °C and a pressure of 40 bar. Under the same heat flux, the membrane reactor exhibited approximately 15% higher ammonia conversion compared to a conventional fixed bed reactor. Furthermore, the developed model is easily transferable to Aspen Plus, facilitating subsequent conceptual process design and economic analyses.
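The structure of such a membrane-reactor model can be pictured with a heavily reduced, isothermal 1D sketch (the ACM model additionally includes energy and momentum balances, the effectiveness factor and the literature kinetics): molar flows of NH3 → 0.5 N2 + 1.5 H2 evolve along the length with an assumed pseudo-first-order rate, and hydrogen is withdrawn through the membrane following Sieverts' law. All numbers below are hypothetical.

```python
# Reduced 1D, isothermal packed-bed membrane-reactor sketch (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

L, A_c = 2.0, 0.01          # reactor length (m), cross-section (m^2), assumed
P_ret, P_perm = 40.0, 1.0   # retentate / permeate pressure, bar
k = 5.0                      # mol/(m^3*s), assumed lumped rate constant
Q_per_len = 0.02             # mol/(m*s*bar^0.5), assumed permeance x membrane perimeter

def rhs(z, F):
    F_nh3, F_h2_ret, F_n2, F_h2_perm = F
    F_tot = F_nh3 + F_h2_ret + F_n2
    r = k * (F_nh3 / F_tot)                       # pseudo-first-order in NH3 mole fraction
    p_h2_ret = P_ret * F_h2_ret / F_tot
    J = Q_per_len * (np.sqrt(max(p_h2_ret, 0.0)) - np.sqrt(P_perm))  # Sieverts' law
    J = max(J, 0.0)
    return [-r * A_c,                 # NH3
            1.5 * r * A_c - J,        # H2 in the retentate
            0.5 * r * A_c,            # N2
            J]                        # H2 permeated

F0 = [1.0, 1e-3, 1e-3, 0.0]           # inlet molar flows, mol/s (assumed)
sol = solve_ivp(rhs, (0, L), F0)
F_end = sol.y[:, -1]
conversion = 1 - F_end[0] / F0[0]
recovery = F_end[3] / (F_end[1] + F_end[3])
print(f"NH3 conversion: {conversion:.2f}, H2 recovery: {recovery:.2f}")
```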

[1] I. Lucentini, G. García Colli, C. D. Luzi, I. Serrano, O. M. Martínez, and J. Llorca, ‘Catalytic ammonia decomposition over Ni-Ru supported on CeO2 for hydrogen production: Effect of metal loading and kinetic analysis’, Appl Catal B, vol. 286, p. 119896, 2021.

[2] J. W. Makepeace, T. J. Wood, H. M. A. Hunter, M. O. Jones, and W. I. F. David, ‘Ammonia decomposition catalysis using non-stoichiometric lithium imide’, Chem Sci, vol. 6, no. 7, p. 3805–3815, 2015.

[3] S. Sayas, N. Moerlanés, S. P. Katikaneni, A. Harale, B. Solami, J. Gascon. ‘High pressure ammonia decomposition on Ru-K/CaO catalysts’. Catal. Sci. Technol. vol. 10, p. 5027- 5035, 2020.



Developing a circular economy around jam production wastes

Carlos Sanz, Mariano Martin

Department of Chemical Engineering. Universidad de Salamanca, Plz Caídos 1-5, 37008, Salamanca, Spain

Abstract

The food industry is a significant source of waste. In the EU alone, more than 58 million tons of food waste are generated annually [1], with an estimated market value of 132 billion euros [2]. While over half of this waste is produced at the household level and thus consists of a mixture, one-quarter originates directly from manufacturing facilities. Traditionally, the mixed waste has been managed through municipal solid waste (MSW) treatment and valorization procedures [3]. However, there is an opportunity to valorize the waste produced in the agri-food sector to support the adoption of a circular economy within the food supply chain, beginning at the transformation facilities. This would enable the recovery of value-added products and reduce the need for external resources, creating a circular economy through process integration.

In this work, the valorization of biowaste for a circular economy is explored through the case of jam waste. An integrated process is designed to extract value-added products such as phenolic compounds and pectin, as well as to produce ethanol, a green solvent, for internal use and/or as a final product. The solid residue can then either be gasified (GA) or digested (AD) to produce hydrogen, thermal energy and power. These technologies are systematically compared using a mathematical optimization approach, with units modeled based on first principles and experimental yields. The base case focuses on a real jam production facility from a well-known company.

Waste processing requires an investment of €2.0-2.3 million to treat 37 tons of waste per year, yielding 5.2 kg/t of phenolic compounds and 15.9 kg/t of pectin. After extraction of the valuable products, the solids are subjected to either anaerobic digestion or gasification. The amount of biogas produced (368.1 Nm3/t) is about half that of syngas (660.2 Nm3/t), so the energy produced by the gasification process (5,085.6 kWh/t) is higher than that produced by anaerobic digestion (3,136.3 kWh/t). Nevertheless, both technologies are self-sufficient in terms of energy, but require additional thermal energy input. Conversely, although the energy produced by gasification is higher than that produced by anaerobic digestion, the latter is cheaper than the former and has a lower entry barrier, especially as the process scales. As the results show, incorporating such processes into jam production facilities is not only profitable, but also allows the application of circular economy principles, reducing waste and external energy consumption, while providing value-added by-products such as phenolic compounds and pectin.

References

[1] Eurostat, Food waste and food waste prevention - estimates, (2023).

[2] SWD, Impact Assessment Report, Brussels, 2023.

[3] EPA, Municipal Solid Waste, (2016). https://archive.epa.gov/epawaste/nonhaz/municipal/web/html/ (accessed April 13, 2024).



Data-driven optimization of chemical dosage in wastewater treatment: A surrogate model approach for enhanced physicochemical phosphorus removal

Florencia Caro1, Jimena Ferreira2,3, José Carlos Pinto4, Elena Castelló1, Claudia Santiviago1

1Biotechnological Processes for the Environment Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 2Chemical & Process Systems Engineering Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 3Heterogeneous Computing Laboratory, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 4Programa de Engenharia Química/COPPE, Universidade Federal do Rio de Janeiro, Cidade Universitária, CP: 68502, Rio de Janeiro, 21941-972 RJ, Brazil

Excessive phosphorus discharge into water bodies can cause severe environmental issues, such as eutrophication [1]. Discharge limits have become more stringent, and operating phosphorus removal systems that are economically feasible and ensure regulatory compliance remains a challenge [2]. Physicochemical phosphorus removal (PPR) using metal salts is effective for achieving low phosphorus levels and can supplement biological phosphorus removal (BPR) [3]. PPR offers flexibility, as phosphorus removal can be adjusted by modifying the chemical dosage [4], and is simple, requiring only a chemical dosing system and a clarifier to separate the treated effluent from the resulting precipitate [3]. Proper dosage control is important to avoid under- or overdosing, which affects phosphorus removal efficiency and operational costs. PPR depends on the system design and effluent characteristics [4]. Therefore, dosages are generally established through laboratory experiments, data from other wastewater treatment plants (WWTPs), and dosing charts [3]. Modeling can enhance chemical dosing in WWTPs, and various sequential simulators can perform this task. BioWin exemplifies this capability, incorporating PPR using metal salts and accounting for pH, precipitation processes, and interactions with organic matter measured as chemical oxygen demand (COD). However, BioWin cannot directly optimize chemical dosing for specific WWTP configurations.

This work develops a surrogate model from BioWin's simulated data to create a tool that optimizes chemical dosages based on influent characteristics, thus providing tailored solutions for an edible oil WWTP, which serves as the case study. The industry operates its own WWTP and discharges the treated effluent into a watercourse. Due to the production process, the influent has high and variable phosphorus concentrations. PPR is applied as a supplementary treatment to BPR when phosphorus levels exceed discharge limits. The decision variables in the optimization are the aluminum sulfate dosage for phosphorus removal and the sodium hydroxide dosage for pH adjustment, as aluminum sulfate lowers the effluent pH. The chemical cost is set as the objective function, and effluent discharge parameters as constraints. The surrogate physicochemical model, which links influent parameters and dosing to effluent outcomes, is also included as a constraint. Data acquisition from BioWin is automated using Bio2Py [5]. The optimization model is implemented in Pyomo.
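A toy version of this formulation is sketched below in Pyomo: chemical cost is minimised subject to a placeholder linear surrogate linking alum and NaOH dosage (and influent quality) to effluent phosphorus and pH. The real surrogate is fitted to BioWin simulations; every coefficient, limit and influent value here is illustrative, and an LP solver such as CBC is assumed.

```python
# Toy dosing optimisation with a placeholder surrogate model (illustrative only).
import pyomo.environ as pyo

P_in, COD_in = 25.0, 1800.0        # influent P (mg/L) and COD (mg/L), example values
cost_alum, cost_naoh = 0.35, 0.60  # $/kg of dosed chemical (assumed)

m = pyo.ConcreteModel()
m.alum = pyo.Var(bounds=(0, 500))  # aluminium sulfate dose, mg/L
m.naoh = pyo.Var(bounds=(0, 200))  # sodium hydroxide dose, mg/L

# Placeholder surrogate: effluent P falls with alum dose; pH falls with alum, rises with NaOH.
m.P_out = pyo.Expression(expr=P_in - 0.06 * m.alum + 0.0002 * COD_in)
m.pH_out = pyo.Expression(expr=7.4 - 0.004 * m.alum + 0.01 * m.naoh)

m.p_limit = pyo.Constraint(expr=m.P_out <= 5.0)      # discharge limit, mg P/L (assumed)
m.ph_low = pyo.Constraint(expr=m.pH_out >= 6.0)
m.ph_high = pyo.Constraint(expr=m.pH_out <= 9.0)

m.obj = pyo.Objective(expr=cost_alum * m.alum + cost_naoh * m.naoh, sense=pyo.minimize)
pyo.SolverFactory("cbc").solve(m)
print(pyo.value(m.alum), pyo.value(m.naoh), pyo.value(m.P_out), pyo.value(m.pH_out))
```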

Preliminary results indicate that influent COD significantly affects phosphorus removal and should be considered when determining chemical dosage. For high COD levels, more aluminum than the suggested by a rule of thumb [3] is required, whereas for moderate and low COD levels, less dosage is needed, leading to potential cost savings. Furthermore, it was found that pH adjustment is only necessary when phosphorus concentrations are high.

[1]V. Smith et al., Environ. Pollut. 100, 179–196 (1999). doi: 10.1016/S0269-7491(99)00091-3.

[2]R. Bashar, et al., Chemosphere 197, 280–290 (2018). doi: 10.1016/j.chemosphere.2017.12.169.

[3]Metcalf & Eddy, Wastewater Engineering: Treatment and Resource Recovery (McGraw-Hill, 2014).

[4]A. Szabó et al., Water Environ. Res. 80, 407–416 (2008). doi: 10.2175/106143008x268498.

[5]F. Caro et al., J. Water Process Eng. 63, 105426 (2024). doi: 10.1016/j.jwpe.2024.105426.



Leveraging Machine Learning for Real-Time Performance Prediction of Near Infrared Separators in Waste Sorting Plant

Imam Mujahidin Iqbal1, Xinyu Wang1, Isabell Viedt1,2, Leonhard Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group

Abstract

Many small and medium-sized enterprises (SMEs), including waste sorting facilities, are not fully capitalizing on the data they collect. Recent advances in waste sorting technology are addressing this challenge. For instance, Tanguay-Rioux et al. (2022) used a mixed modelling approach to develop a process model using data from Canadian sorting facilities, while Kroell et al. (2024) leveraged Near Infrared (NIR) data to create a machine learning model that optimizes the NIR setup. A key obstacle for SMEs in utilizing their data effectively is the lack of technical expertise. Wang et al. (2024) demonstrated that the ecoKI platform is a viable solution for SMEs, as it is a low-code platform, requires no prior machine learning knowledge and is simple to use. This work forms part of the EnSort project, which aims to enhance automation and energy efficiency in waste sorting plants by utilizing the collected data.

This study explores the application of the ecoKI platform to turn measurement data into performance monitoring tools. Data, including material composition and belt weigher sensor readings, were collected from an operational waste sorting plant in Northern Europe. The data were processed using the ready-made building blocks provided within the ecoKI platform, avoiding the need for manual coding. The platform's real-time monitoring feature was used to continuously track performance. Two neural network architectures, Multilayer Perceptrons (MLP) and Long Short-Term Memory (LSTM) networks, were explored for predicting NIR separation efficiency. The results demonstrated the potential of these data-driven models to accurately capture essential relationships between input features and NIR performance. This work illustrates how raw measurement data in waste sorting facilities can be transformed into actionable insights for real-time performance monitoring, offering an accessible, user-friendly solution for industries that lack machine learning expertise. By enabling SMEs to leverage their existing data, the platform paves the way for improved operational efficiency and decision-making. Furthermore, this approach can be adapted to various industrial contexts beyond waste sorting applications, setting the stage for future developments in automated, data-driven optimization of equipment performance.
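Purely to illustrate the two architectures compared here (the study itself relied on ecoKI's ready-made building blocks rather than hand-written code), the PyTorch sketch below defines an MLP that maps a single time step's features to separation efficiency and an LSTM that consumes a short window of sensor readings; the feature dimension and window length are assumed.

```python
# Minimal PyTorch definitions of the two explored architectures (illustrative only).
import torch
import torch.nn as nn

n_features, seq_len = 6, 12   # assumed sensor feature count and window length

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):                     # x: (batch, n_features)
        return self.net(x)

class LSTMRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.head = nn.Linear(64, 1)
    def forward(self, x):                     # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # predict efficiency from the last hidden state

mlp, lstm = MLP(), LSTMRegressor()
print(mlp(torch.randn(4, n_features)).shape, lstm(torch.randn(4, seq_len, n_features)).shape)
```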

References

Tanguay-Rioux, F., Spreutels, L., Héroux, M., & Legros, R. (2022). Mixed modeling approach for mechanical sorting processes based on physical properties of municipal solid waste. Waste Management, 144, 533–542.

Kroell, N., Maghmoumi, A., Dietl, T., Chen, X., Küppers, B., Scherling, T., Feil, A., & Greiff, K. (2024). Towards digital twins of waste sorting plants: Developing data-driven process models of industrial-scale sensor-based sorting units by combining machine learning with near-infrared-based process monitoring. Resources, Conservation and Recycling, 200, 107257.

Wang, X., Rani, F., Charania, Z., Vogt, L., Klose, A., & Urbas, L. (2024). Steigerung der Energieeffizienz für eine nachhaltige Entwicklung in der Produktion: Die Rolle des maschinellen Lernens im ecoKI-Projekt (p. 840).



A Benchmark Simulation Model of Ammonia Production: Enabling Safe Innovation in the Emerging Renewable Hydrogen Economy

Niklas Groll, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), 2800 Kgs.Lyngby, Denmark

The emerging hydrogen economy plays a vital part in the transition to a sustainable industry. Green hydrogen can serve as a renewable fuel for process heat and as a sustainable feedstock, e.g., for green ammonia. The need to produce green ammonia for the food industry and as a platform chemical is already evident [1]. Accordingly, many developments focus on designing and optimizing hydrogen process routes. However, implementing new process ideas and designs also requires testing and ensuring safety.

Safety methodologies can be tested on so-called "benchmark models." Several benchmark processes have been used to innovate new process control and monitoring methods: the Tennessee-Eastman process imitates the behavior of a standard chemical process, the fed-batch fermentation of penicillin serves as a benchmark for biochemical fed-batch processes, and the COST benchmark model allows methodologies for wastewater treatment to be evaluated [2], [3], [4]. However, the established benchmark processes do not feature all relevant aspects of the renewable hydrogen pathways, e.g., sustainable feedstocks and energy supply or electrochemical reactions. The lack of a basic benchmark model for the hydrogen industry thus creates unnecessary risks when adopting process monitoring and control technologies.

With our simulation benchmark model, we pave the way for safer innovations in the hydrogen industry. The model connects hydrogen production from renewable electricity to the Haber-Bosch process for ammonia production. By integrating water electrolysis with a standard chemical process, our ammonia benchmark process encompasses all key aspects of innovative hydrogen pathways. The model, built with the versatile Aveva Process Simulator, allows for a seamless transition between steady-state and dynamic simulations and easy adjustments to process design and control parameters. Through a set of predefined failure scenarios, the model serves as a benchmark for evaluating risk monitoring and control methods. Furthermore, detecting and eliminating failures can also contribute to the development of new process safety methodologies.

Our new ammonia simulation model is a significant addition to the emerging hydrogen industry, filling the void of a missing benchmark. This comprehensive model serves a dual purpose: It can evaluate and confirm existing process safety methodologies and serve as a foundation for developing new safety methodologies specifically targeting safe hydrogen pathways.

[1] A. G. Olabi et al., ‘Recent progress in Green Ammonia: Production, applications, assessment; barriers, and its role in achieving the sustainable development goals’, Feb. 01, 2023, Elsevier Ltd. doi: 10.1016/j.enconman.2022.116594.

[2] U. Jeppsson and M. N. Pons, ‘The COST benchmark simulation model-current state and future perspective’, 2004, Elsevier Ltd. doi: 10.1016/j.conengprac.2003.07.001.

[3] G. Birol, C. Ündey, and A. Çinar, ‘A modular simulation package for fed-batch fermentation: penicillin production’, Comput Chem Eng, vol. 26, no. 11, pp. 1553–1565, Nov. 2002, doi: 10.1016/S0098-1354(02)00127-8.

[4] J. J. Downs and E. F. Vogel, ‘A plant-wide industrial process control problem’, Comput Chem Eng, vol. 17, no. 3, pp. 245–255, Mar. 1993, doi: 10.1016/0098-1354(93)80018-I.



Thermo-Hydraulic Performance of Pillow-Plate Heat Exchangers with Streamlined Secondary Structures: A Numerical Analysis

Reza Afsahnoudeh, Julia Riese, Eugeny Y. Kenig

Paderborn University, Germany

In recent years, pillow-plate heat exchangers (PPHEs) have gained attention as a promising alternative to conventional shell-and-tube and plate heat exchangers. Their advantages include high pressure resistance, leak-tight construction, and good cleanability. The pillow-like wavy channel structure promotes fluid mixing in the boundary layer, thereby improving heat transfer. However, a significant drawback of PPHEs is boundary layer separation near the welding spots, leading to large recirculation zones. Such zones are the primary cause of increased pressure drop and reduced heat transfer efficiency. Downsizing these recirculation zones is key to improving the thermo-hydraulic performance of PPHEs.

One potential solution is the application of secondary surface structuring [1]. Among others, this can be realized using Electrohydraulic Incremental Forming (EHIF) [2]. Afsahnoudeh et al. [3] demonstrated that streamlined secondary structures, particularly those with ellipsoidal geometries, improved thermo-hydraulic efficiency by up to 6% compared to unstructured PPHEs.

Building upon previous numerical studies, this work investigated the impact of streamlined secondary structures on fluid dynamics and heat transfer within PPHEs. The complex geometries of PPHEs, with and without secondary structures, were generated using forming simulations in ABAQUS 2020. Flow and heat transfer in the inner PPHE channels were simulated using FLUENT 24.1, assuming a single-phase, incompressible, and turbulent system with constant physical properties.

Performance evaluation was based on pressure drop, heat transfer coefficients, and overall thermo-hydraulic efficiency. Additionally, a detailed analysis of the Fanning friction factor and drag coefficient was conducted for various Reynolds numbers to provide deeper insights into the fluid dynamics in the inner channels. The results of these investigations are summarized in this contribution.
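As an illustration of the post-processing referred to above, the following sketch computes a Fanning friction factor from a channel pressure drop and a commonly used thermo-hydraulic efficiency ratio; the property values and the (Nu/Nu0)/(f/f0)^(1/3) criterion are assumptions, not the authors' exact definitions.

import numpy as np

def fanning_friction_factor(dp, d_h, length, rho, u):
    # Fanning friction factor from the channel pressure drop dp [Pa]
    return dp * d_h / (2.0 * rho * u**2 * length)

def thermo_hydraulic_efficiency(nu, f, nu_ref, f_ref):
    # performance relative to an unstructured reference channel
    return (nu / nu_ref) / (f / f_ref) ** (1.0 / 3.0)

f = fanning_friction_factor(dp=120.0, d_h=0.007, length=0.5, rho=998.0, u=0.4)   # water, indicative values
eta = thermo_hydraulic_efficiency(nu=85.0, f=f, nu_ref=80.0, f_ref=1.05 * f)
print(f"f = {f:.4f}, efficiency ratio = {eta:.3f}")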

References

[1] M. Piper, A. Zibart, E. Djakow, R. Springer, W. Homberg, E.Y. Kenig, Heat transfer enhancement in pillow-plate heat exchangers with dimpled surfaces: A numerical study. Appl. Therm. Eng., vol 153, 142-146, 2019.

[2] E. Djakow, R. Springer, W. Homberg, M. Piper, J. Tran, A. Zibart, E.Y. Kenig, “Incremental electrohydraulic forming - A new approach for the manufacturing of structured multifunctional sheet metal blanks,” Proc. of the 20th International ESAFORM Conference on Material Forming, Dublin, Ireland, vol. 1896, 2017.

[3] R. Afsahnoudeh, A. Wortmeier, M. Holzmüller, Y. Gong, W. Homberg, E.Y. Kenig, “Thermo-hydraulic Performance of Pillow-Plate Heat Exchangers with Secondary Structuring: A Numerical Analysis,” Energies, vol. 16 (21), 7284, 2023.



Modular and Heterogeneous Electrolysis Systems: a System Flexibility Comparison

Hannes Lange1,2, Michael Große2,3, Isabell Viedt2,3, Leon Urbas1,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process to Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems

Green hydrogen will play a key role in the decarbonization of the steel sector. As a result, the demand for hydrogen in the steel industry will increase in the coming years due to the direct reduction of iron [1]. As the currently commercially available electrolysis stacks are far too small for the required production of green hydrogen, the scaling strategy of numbering up standardized process units can provide support [2]. In addition, cost-effective production of green hydrogen requires the electrolysis system to be able to follow the electricity load, which necessitates a more efficient and flexible system. The modularization of electrolysis systems offers an approach for this [3]. The possibility of including different electrolysis technologies in one heterogeneous electrolysis system can help exploit technology-specific advantages and reduce disadvantages [4]. In this paper, a design of such a heterogeneous electrolysis system is presented, which is built through the modularization of electrolysis process units and is scaled up for large-scale applications, such as a direct iron reduction process, by numbering up. The impact of different degrees of technological and production capacity-related heterogeneity is investigated using system co-simulation of existing electrolyzer models. The direct reduction of iron for green steel production must be supplied with a constant stream of hydrogen despite a fluctuating electricity profile. To reduce cost and storage losses, the hydrogen storage capacity must be minimized. For the presented use case, the distribution of technology and production capacity in the heterogeneous plant layout is optimized with respect to overall system efficiency and the ability to follow flexible electricity profiles. The resulting Pareto front is analyzed and the results are compared with a conventional homogeneous electrolyzer plant layout. First results underline the benefits of combining different technologies and production capacities of individual systems in a large-scale heterogeneous electrolyzer plant.
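A purely conceptual sketch of the capacity-allocation trade-off is given below; the efficiency and flexibility relations are placeholders, not results of the co-simulation.

import numpy as np

shares = np.linspace(0.0, 1.0, 21)           # fraction of installed capacity that is PEM
efficiency = 0.62 + 0.05 * (1 - shares)      # placeholder: alkaline assumed slightly more efficient
flexibility = 0.4 + 0.5 * shares             # placeholder: PEM assumed to follow load better

def pareto_mask(points):                     # keep designs not dominated in both objectives
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        keep[i] = not dominated
    return keep

objectives = np.column_stack([efficiency, flexibility])
mask = pareto_mask(objectives)
for s, (eff, flex) in zip(shares[mask], objectives[mask]):
    print(f"PEM share {s:.2f}: efficiency {eff:.3f}, flexibility {flex:.2f}")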

[1] Wietschel M, Zheng L, Arens M, Hebling C, Ranzmeyer O, Schaadt A, et al. Metastudie Wasserstoff – Auswertung von Energiesystemstudien. Studie im Auftrag des Nationalen Wasserstoffrats. Karlsruhe, Freiburg, Cottbus: Fraunhofer ISI, Fraunhofer ISE, Fraunhofer IEG; 2021.

[2] Lange H, Klose A, Beisswenger L, Erdmann D, Urbas L. Modularization approach for large-scale electrolysis systems: a review. Sustain Energy Fuels 2024:10.1039.D3SE01588B. https://doi.org/10.1039/D3SE01588B.

[3] Lange H, Klose A, Lippmann W, Urbas L. Technical evaluation of the flexibility of water electrolysis systems to increase energy flexibility: A review. Int J Hydrog Energy 2023;48:15771–83. https://doi.org/10.1016/j.ijhydene.2023.01.044.

[4] Mock M, Viedt I, Lange H, Urbas L. Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains. Comput. Aided Chem. Eng., vol. 53, Elsevier; 2024, p. 1885–90. https://doi.org/10.1016/B978-0-443-28824-1.50315-X.



CFD-Based Shape Optimization of Structured Packings for Enhancing Separation Efficiency in Distillation

Sebastian Blauth1, Dennis Stucke2, Mohamed Adel Ashour2, Johannes Schnebele1, Thomas Grützner2, Christian Leithäuser1

1Fraunhofer ITWM, Germany; 2Ulm University, Germany

In recent years, research in the field of structured packing development for laboratory-scale separation processes has intensified; one of the main objectives is to miniaturize laboratory columns with respect to the column diameter. This reduction has several advantages, such as reduced operational costs and lower safety requirements due to the reduced amount of chemicals being used. However, a reduction in diameter also causes problems due to the increased surface-to-volume ratio, e.g., a stronger impact of heat losses or liquid maldistribution issues. There are many different approaches to designing structured packings, such as using repeatedly stacked unit cells, but all of these approaches have in common that the development of new structures and the improvement of existing ones is based on educated guesses by the engineers.
In this talk, we investigate the novel approach of applying techniques from free-form shape optimization to increase the separation efficiency of structured packings in laboratory-scale distillation columns. A simplified single-phase computational fluid dynamics (CFD) model for the mass transfer in the distillation column is used, and a corresponding shape optimization problem is solved numerically with the optimization software cashocs. The approach uses free-form shape optimization, where the shape is not parametrized, e.g., with the help of a CAD model; instead, all nodes of the computational mesh are moved to alter the shape. In particular, this approach allows for more freedom in the packing design than the classical, parametrized approach. The goal of the shape optimization is to increase the mass transfer in the column by changing the packing's shape. The numerical shape optimization yields promising results and shows a greatly increased mass transfer for the simplified CFD model. To validate our findings, the optimized shape was additively manufactured and investigated experimentally. The experimental results are in good agreement with the simulation-based prediction and show that the separation efficiency of the packing increased by around 20% as a consequence of the optimization. Our results show that the proposed approach of using free-form shape optimization for improving structured packings in distillation is extremely promising and will be pursued further in future research.



Multi-Model Predictive Control of a Distillation Column

Mehmet Arıcı1,3, Wachira Daosud2, Jozef Vargan3, Miroslav Fikar3

1Gaziantep Islam Science and Technology University, Gaziantep 27010, Turkey; 2Faculty of Engineering, Burapha University, Chonburi 20131, Thailand; 3Slovak University of Technology in Bratislava, Bratislava 81237, Slovakia

Due to the increasing demand for performance and the rising complexity of systems, classical model predictive control (MPC) techniques are often inadequate, and new applications often require modifications of the predictive control mechanism. These modifications frequently include a reformulation of the optimal control problem in order to cope with system uncertainties, external perturbations, and the adverse effects of rapid changes in operating points. Moreover, successful implementation of this optimization-driven control technique is highly dependent on an accurate and detailed model of the process, which is relatively easy to obtain for chemical processes with a simple structure. As system complexity increases, however, the linear approximation used in MPC may result in poor performance or even total failure. In such a case, a nonlinear system model can be used for optimal control signal calculation, but the lack of a reliable dynamic process model is one of the major challenges in real-time implementation of MPC schemes. Even when a model representing the complex behavior is available, such a model can be difficult to optimize in real time.
To demonstrate the challenges addressed above, a binary distillation column process is chosen as a testbed. The process is multivariable and inherently nonlinear. Furthermore, a linear model approximation for a critical operating point is valid only in a small neighborhood of that operating point. Therefore, we propose to employ multiple models that describe the same process dynamics to a certain degree. In addition to the linear model, a multi-layered feedforward network is used for data-based modeling and constitutes an additional process model. Both models collaborate to predict the state variables individually, and their outputs and constraints are applied in the MPC algorithm. Various cost function formulations are proposed to cope with multiple models. The aim is to enhance efficiency and robustness in process control by compensating for the limitations of each individual model. Additionally, an offset-free technique is applied to eliminate steady-state errors resulting from model-process mismatch.
We compare the performance of the proposed method to MPC using the full nonlinear model and to single-model MPC methods based on either the linear model or the neural network model. We show that the proposed method is only slightly suboptimal with respect to the best available performance and greatly improves over the individual methods. In addition, the computational load is significantly reduced compared to the full nonlinear MPC.
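A hedged sketch of one possible multi-model cost formulation is shown below, with a linear model and a stand-in nonlinear ("data-based") model both penalized against the setpoint; the system, weights, and horizon are hypothetical.

import numpy as np
from scipy.optimize import minimize

A, B = 0.9, 0.1                                               # linear approximation x+ = A x + B u
def f_linear(x, u): return A * x + B * u
def f_data(x, u):   return 0.9 * x + 0.1 * u - 0.02 * x**2    # stand-in for the trained network

def mpc_cost(u_seq, x0, sp, w=(0.5, 0.5), r=0.01):
    x_lin = x_dat = x0
    cost = 0.0
    for u in u_seq:                                           # both models predict over the horizon
        x_lin, x_dat = f_linear(x_lin, u), f_data(x_dat, u)
        cost += w[0] * (x_lin - sp)**2 + w[1] * (x_dat - sp)**2 + r * u**2
    return cost

horizon, x_now, setpoint = 10, 0.2, 0.8
res = minimize(mpc_cost, np.zeros(horizon), args=(x_now, setpoint),
               method="SLSQP", bounds=[(-1.0, 1.0)] * horizon)
print("first control move:", res.x[0])                        # applied in receding-horizon fashion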



Enhancing Fault Diagnosis for Chemical Processes via MSCNN with Hyperparameter Optimization and Uncertainty Estimation

Jingkang Liang, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark

Fault diagnosis is critical for maintaining the safety and efficiency of chemical processes, as undetected faults can lead to operational disruptions, safety hazards, and significant financial losses. Data-driven fault diagnosis methods, especially deep-learning-based methods, have been widely used for fault diagnosis of chemical processes [1]. However, these deep learning methods often rely on manually tuning the hyperparameters to obtain an optimal model, which is time-consuming and labor-intensive [2]. Additionally, existing fault diagnosis methods typically do not consider uncertainty in their analysis, which is essential for assessing the confidence in model predictions, especially in safety-critical industries. This underscores the need for reliable methods that not only improve accuracy but also provide uncertainty estimation in fault diagnosis for chemical processes, and it sets the premise for the research focus of this contribution.

To this end, we present an assessment of a new approach that combines a Multiscale Convolutional Neural Network (MSCNN) with hyperparameter optimization and bootstrapping for uncertainty estimation. The MSCNN is designed to capture complex nonlinear features of chemical processes. The Tree-structured Parzen Estimator (TPE), a Bayesian optimization method, was employed to automatically search for optimal hyperparameters, such as the number of convolutional layers and the kernel sizes in the multiscale module, minimizing manual tuning effort and ensuring higher accuracy when training the deep learning models. Additionally, the bootstrap technique, which was validated earlier for deep learning applications in property prediction [3], was employed to improve model accuracy and provide uncertainty estimation, making the model more robust and reliable.
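For illustration, a minimal TPE search with Optuna is sketched below; the train_and_validate placeholder stands in for training and validating the MSCNN, and the hyperparameter names and ranges are assumptions.

import optuna

def train_and_validate(n_conv_layers, kernel_size, lr):
    # placeholder for: build MSCNN -> train on the training set -> return validation accuracy
    return 0.9 - 0.05 * abs(n_conv_layers - 3) - 0.02 * abs(kernel_size - 5) - abs(lr - 1e-3)

def objective(trial):
    n_conv_layers = trial.suggest_int("n_conv_layers", 1, 5)
    kernel_size = trial.suggest_categorical("kernel_size", [3, 5, 7])
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    return train_and_validate(n_conv_layers, kernel_size, lr)

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print(study.best_params)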

A simulation study was carried out on the Tennessee Eastman Process dataset, which is a widely used benchmark for fault diagnosis in chemical processes. The dataset consists of 21 types of faults, and each sample is a one-dimensional vector of 52 variables. In total, 26,880 samples were collected and split randomly into training, validation, and testing sets in a 0.6:0.2:0.2 ratio. Other state-of-the-art machine learning methods, including MLP, CNN, LSTM, and WDCNN, were used as benchmarks for the proposed method. Performance is evaluated based on precision, recall, number of parameters, and quality of predictions (i.e., uncertainty estimation).

The benchmarking results showed that the proposed MSCNN with TPE and bootstrapping achieved the highest accuracy among all the methods considered. Ablation studies were carried out to verify the effectiveness of the TPE and the bootstrap technique in enhancing the fault diagnosis of chemical processes. A confusion matrix and uncertainty estimates are presented to further discuss the effectiveness of the proposed method.

This work paves the way for more robust and reliable fault diagnosis systems in the chemical industry, offering a powerful tool to enhance process safety and efficiency.

References

[1] Melo et al. "Data-Driven Process Monitoring and Fault Diagnosis: A Comprehensive Survey." Processes 12.2 (2024): 251.

[2] Qin et al. "Adaptive multiscale convolutional neural network model for chemical process fault diagnosis." Chinese Journal of Chemical Engineering 50 (2022): 398-411.

[3] Aouichaoui et al. "Uncertainty estimation in deep learning‐based property models: Graph neural networks applied to the critical properties." AIChE Journal 68.6 (2022): e17696.



Machine learning-aided identification of flavor compounds with green notes in plant-based foods

Huabin Luo, Simen Akkermans, Thian Ping Wong, Ferdinandus Archie Pangestu, Jan F.M. Van Impe

BioTeC+, Chemical and Biochemical Process Technology and Control, Department of Chemical Engineering, KU Leuven, Ghent, Belgium

Plant-based foods have emerged as a global trend as consumers become increasingly concerned about sustainability and health. Despite their growing demand, the presence of off-flavors, especially green notes, significantly impacts consumer acceptance and preference. This study aims to develop a model using Machine Learning (ML) techniques to identify flavor compounds with green notes based on their molecular structure. To achieve this, a database of green compounds in plant-based foods was established by searching flavor databases and literature. Additionally, non-green compounds with similar structures and balanced chemical classes were collected as a negative set for model training. Subsequently, molecular descriptors (MD) and molecular fingerprints (MF) were calculated from the molecular structures of the collected flavor compounds and then used as input for ML. In this study, k-Nearest Neighbors (kNN), Logistic Regression (LR), and Random Forest (RF) were used to develop the models. Afterward, the developed models were optimized and evaluated. Results indicated that green compounds exhibit a wide range of structural variations. Topological structure, electronic properties, and surface area properties were the essential MDs for distinguishing green from non-green compounds. Regarding the identification of flavor compounds with green notes, the LR model performed best, correctly classifying more than 95% of the compounds in the test set, followed by the RF model with an accuracy of more than 92%. In summary, combining MD and MF as the input for ML provides a solid foundation for identifying flavor compounds with green notes. These findings provide knowledge and tools for developing strategies to mitigate green off-flavors and control flavor quality in plant-based foods.
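A toy sketch of such a pipeline, assuming RDKit Morgan fingerprints and a scikit-learn logistic regression, is given below; the SMILES strings and labels are illustrative only, not the study's dataset.

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression

smiles = ["CCCCCC=O",            # hexanal (labelled green here for illustration)
          "CCC=CCCO",            # 3-hexen-1-ol (labelled green here for illustration)
          "COc1cc(C=O)ccc1O",    # vanillin (labelled non-green here for illustration)
          "CC1=CCC(CC1)C(=C)C"]  # limonene (labelled non-green here for illustration)
labels = np.array([1, 1, 0, 0])

def fingerprint(smi, n_bits=1024):
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(list(fp), dtype=int)

X = np.vstack([fingerprint(s) for s in smiles])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X)[:, 1])    # probability of a green note for each compound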



LSTMs and nonlinear State Space Models - are they the same?

Ashwin Chandrasekhar, Prashant Mhaskar

McMaster University, Canada

This manuscript identifies and addresses discrepancies in the implementation of Long Short-Term Memory (LSTM) neural networks for naturally occurring dynamical processes, specifically in cases claiming to capture input-output dynamic relationships using a state-space framework. While LSTMs are well-suited for these kinds of problems, there are two key issues in how LSTMs are currently structured and trained in this context.

First, the hidden and cell states of the LSTM model are often reinitialized or discarded between input-output sequences in the training dataset. This practice essentially results in a framework where the initial hidden and cell states of each sequence are not being trained. However, in a typical state-space model identification process, both the model parameters and the states need to be identified simultaneously.

Second, the model structure of LSTMs differs from a traditional state-space (SS) representation. In state-space models, the current state is defined as a function of the previous state and input from the prior time step. In contrast, LSTMs use the input from the same time step, creating a structural mismatch. Moreover, for each LSTM cell, there is a corresponding hidden state and a cell state, representing the short- and long-term memory of a given state, and hence it is necessary to address this difference in structure conceptually.

To resolve these inconsistencies, two changes are proposed in this paper. First, the initial hidden and cell states for the training sequences should be trained. Second, to address the structural mismatch, the hidden and cell states from the LSTM are reformatted to match the state and data pairing that a state-space model would use.
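A minimal PyTorch sketch of the two modifications is shown below: trainable initial hidden and cell states, and a state update driven by the previous step's input so that the output at step k depends only on the current state; dimensions and data are hypothetical.

import torch
import torch.nn as nn

class SSLikeLSTM(nn.Module):
    def __init__(self, n_u, n_y, n_hidden=16):
        super().__init__()
        self.cell = nn.LSTMCell(n_u, n_hidden)
        self.out = nn.Linear(n_hidden, n_y)
        self.h0 = nn.Parameter(torch.zeros(1, n_hidden))   # initial states are trained
        self.c0 = nn.Parameter(torch.zeros(1, n_hidden))   # together with the weights

    def forward(self, u):                                   # u: (batch, time, n_u)
        batch, T, _ = u.shape
        h, c = self.h0.expand(batch, -1), self.c0.expand(batch, -1)
        outputs = []
        for k in range(T):
            outputs.append(self.out(h))                     # y_k depends on the current state only
            h, c = self.cell(u[:, k, :], (h, c))            # state advanced with u_k, as in x_{k+1} = f(x_k, u_k)
        return torch.stack(outputs, dim=1)

model = SSLikeLSTM(n_u=1, n_y=1)
print(model(torch.randn(4, 20, 1)).shape)                   # torch.Size([4, 20, 1])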

The effectiveness of these modifications is demonstrated using data generated from a simple dynamical system modeled by a Linear Time-Invariant (LTI) state-space system. The importance of these corrections is shown by testing them individually. Interestingly, the worst performance was observed in the model with only trained hidden states, followed by the unmodified LSTM model. The model that only corrected the input timing (without trained hidden and cell states) showed a significant improvement. Finally, the best results were achieved when both corrections were applied together.



Simple Regulatory Control Structure for Proton Exchange Membrane Water Electrolysis Systems

Marius Fredriksen, Johannes Jäschke

Norwegian University of Science and Technology, Norway

Effective control of electrolysis systems connected to renewable energy sources (RES) is crucial to ensure efficient and safe plant operation due to the intermittent nature of most RES. Current control architectures for Proton Exchange Membrane (PEM) electrolysis systems primarily use relatively simple control structures such as Proportional-Integral-Derivative (PID) controllers and on/off controllers. Some works introduce more advanced control structures based on Model Predictive Controllers (MPC) and AI-based control methods (Mao et al., 2024). However, few studies have been conducted on advanced regulatory control (ARC) strategies for PEM electrolysis systems. These control structures have several advantages as they offer fast disturbance rejection, are easier to scale, and are less affected by model accuracy than many of the more computationally expensive control methods, such as MPC (Cammann & Jäschke, 2024).

In this work, we propose an ARC structure for a PEM electrolysis system using the "Top-down" section of Skogestad's plantwide control procedure (Skogestad & Postlethwaite, 2007, p. 384). First, we developed a steady-state model loosely based on the PEM system presented by Crespi et al. (2023). The model was verified by comparing the behavior of the polarization curve under varying pressure and temperature. We performed step responses on different system inputs to assess their impact on the outputs and to determine suitable pairings of the manipulated and controlled variables. Thereafter, we formulated an optimization problem for the plant and evaluated various implementations of the system's cost function. Finally, we mapped the active constraint regions of the electrolysis system to identify the active constraints in relation to the system's power input. From an economic perspective, controlling the active constraints is crucial, as deviating from the optimal constraint values usually results in an economic penalty (Skogestad, 2023).

We have shown that the optimal operation of PEM electrolysis systems is close to fully constrained in all regions. This implies that constraint-switching control may be used to achieve optimal system operation. The active constraint regions found for the PEM system share several similarities with those found for alkaline electrolysis systems by Cammann and Jäschke (2024). Finally, we have presented a simple constraint-switching control structure for the PEM electrolysis system using PID controllers and selectors.
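A minimal sketch of the PID-plus-selector idea is given below; the pairing (a stack-current setpoint coming from either a power-following controller or a temperature-limit override) and all tuning values are hypothetical, not the structure derived in this work.

class PI:
    def __init__(self, kp, ki, dt=1.0):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def __call__(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += self.ki * error * self.dt
        return self.kp * error + self.integral

power_ctrl = PI(kp=0.8, ki=0.05)       # tracks the available renewable power
temp_ctrl = PI(kp=2.0, ki=0.10)        # becomes active when the temperature limit binds

def stack_current_setpoint(p_avail_kW, p_meas_kW, T_meas_C, T_max_C=80.0):
    u_power = power_ctrl(p_avail_kW, p_meas_kW)
    u_temp = temp_ctrl(T_max_C, T_meas_C)
    return min(u_power, u_temp)        # low-select: the binding constraint takes over

print(stack_current_setpoint(p_avail_kW=950.0, p_meas_kW=900.0, T_meas_C=78.5))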

References

Cammann, L. & Jäschke, J. A simple constraint-switching control structure for flexible operation of an alkaline water electrolyzer. IFAC-PapersOnLine 58, 706–711 (2024).

Crespi, E., Guandalini, G., Mastropasqua, L., Campanari, S. & Brouwer, J. Experimental and theoretical evaluation of a 60 kW PEM electrolysis system for flexible dynamic operation. Energy Conversion and Management 277, 116622 (2023).

Mao, J. et al. A review of control strategies for proton exchange membrane (PEM) fuel cells and water electrolysers: From automation to autonomy. Energy and AI 17, 100406 (2024).

Skogestad, S. Advanced control using decomposition and simple elements. Annual Reviews in Control 56, 100903 (2023).

Skogestad, S. & Postlethwaite, I. Multivariable Feedback Control: Analysis and Design. (John Wiley & Sons, 2007).



Solid streams modelling for process integration of an EAF steel plant

Maura Camerin, Alexandre Bertrand, Laurent Chion

Luxembourg Institute of Science and Technology (LIST), Luxembourg

Global warming is an urgent matter that involves and heavily influences industrial activities. Steelmaking is one of the largest sources of industrial CO2 emissions globally, with key players setting ambitious targets to reduce these emissions by 2030 and/or achieve carbon neutrality by 2050. A key factor in reaching these goals is the efficient use of waste heat, especially in industries that involve high-temperature processes. Waste heat valorisation (WHV) holds significant potential: McBrien et al. (2016) highlighted that about 28% of the heating needs in a blast furnace plant could be met using existing WHV technologies. This figure could rise to 44% if solid streams, not just gaseous and liquid ones, are included.

At present, heat recovery from solid streams, such as semi-finished products and slag, and its transfer to cold solid streams, such as scrap and DRI, is rather uncommon. Its mathematical formulation for process integration (PI) / mathematical programming (MP) models poses unique challenges due to the need for specialized equipment (Matsuda et al., 2012).

The objective of this work is to propose novel WHV models of such solid streams, specifically formulated for PI/MP problems. In a first step, emerging technologies for slag treatment will be incorporated, and key parameters of the streams will be defined. The heat recovery potential of the slag will be modelled based on its charge weight and the recovery technology used, for example from a heat exchanger below the slag pit or using more advanced treatment technologies. The algorithm will calculate the resulting mass flow and temperature of the heat transfer medium, which can be incorporated into the heat cascade to meet the needs of cold streams such as scrap or DRI preheating.
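A back-of-the-envelope sketch of such a solid-stream model is shown below: the sensible heat recoverable from one slag charge and the resulting flow of a heat-transfer medium; all property, efficiency, and charge values are indicative only, not figures from this work.

def slag_heat_duty(m_slag_kg, cp_slag=1.1e3, T_in=1450.0, T_out=300.0, efficiency=0.6):
    # recoverable heat [J] from one slag charge, with a lumped recovery efficiency
    return efficiency * m_slag_kg * cp_slag * (T_in - T_out)

def medium_mass_flow(duty_W, cp_medium=1.0e3, dT_medium=150.0):
    # mass flow [kg/s] of the heat-transfer medium (e.g., air) for a given duty
    return duty_W / (cp_medium * dT_medium)

charge_kg = 20_000.0                                  # slag per charge (hypothetical)
duty_W = slag_heat_duty(charge_kg) / 3600.0           # charge heat released over one hour
print(f"duty = {duty_W / 1e3:.0f} kW, medium flow = {medium_mass_flow(duty_W):.1f} kg/s")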

The expected outcome is an improvement of solid streams models and, as such, more precise process integration results. The improved quantification of waste heat valorisation, especially through the inclusion of previously unconsidered streams, will be of significant benefit to support the decarbonization of the steel industry.

References:

Matsuda, K., Tanaka, S., Endou, M., & Iiyoshi, T. (2012). Energy saving study on a large steel plant by total site based pinch technology. Applied Thermal Engineering, 43, 14–19.

McBrien, M., Serrenho, A. C., & Allwood, J. M. (2016). Potential for energy savings by heat recovery in an integrated steel supply chain. Applied Thermal Engineering, 103, 592–606. https://doi.org/10.1016/j.applthermaleng.2016.04.099



Design of Microfluidic Mixers using Bayesian Shape Optimization

Rui Miguel Grunert da Fonseca, Fernando Pedro Martins Bernardo

CERES, Department of Chemical Engineering, University of Coimbra, Portugal

Mixing and mass transfer are fundamental aspects of many chemical and biological processes. For instance, in the synthesis of nanoparticles, where a solute solution is mixed with an antisolvent to induce nanoprecipitation, highly efficient and controlled mixing conditions are required to obtain particles with low size variability. Specialized mixing technologies, such as microfluidic mixing, are therefore used. Microfluidic mixing is a continuous process in which passive mixing of two different streams of fluid takes place in micro-sized channels. The geometry and small volume of the device enable very fast mixing, which in turn reduces mass transfer limitations during the nanoparticle formation process. Several different mixer geometries, such as the toroidal and herringbone micromixer [1], have already been used for nanoparticle production. Since mixer geometry plays such a vital role in mixing performance, mathematical optimization of that geometry is clearly a tool to exploit in order to come up with superior designs.
In this work, a methodology for shape optimization of micromixers using Computational Fluid Dynamics (CFD) and Bayesian Optimization is presented. It consists of the sequential performance evaluation of mixer geometries defined through geometric variables, such as angles and lengths, with predefined bounds. The performance of a given geometry is evaluated through CFD simulation, using the OpenFOAM software, of the Villermaux-Dushman reaction system [2]. This system consists of two competing reactions: a quasi-instantaneous acid-base reaction and a very fast redox reaction. Mixing time can therefore be inferred by analyzing the reaction selectivity at the mixer's outlet. Using Bayesian Optimization, the geometric domain can be explored with an emphasis on maximizing the defined objective functions. This is done by assigning probabilistic functions to each objective based on previously attained data. An acquisition function is then optimized in order to determine the next geometry to be evaluated, balancing exploration and exploitation. This approach is especially appropriate when objective function evaluation is expensive, which is the case for CFD simulations. The methodology is very flexible and can be applied to many other equipment design problems. Its main challenge is the definition of the optimization problem and its domain. This is similar to network design problems, where the choice of the system's superstructure has a great impact on problem solvability. The domain must include as many viable solutions as possible while minimizing problem dimensionality and avoiding redundancy of solutions.
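A minimal sketch of the Bayesian-optimization loop, assuming scikit-optimize, is given below; the CFD evaluation is replaced by a cheap placeholder, and only a single objective is shown, whereas the study treats mixing time and pressure drop as two objectives.

from skopt import gp_minimize

def evaluate_geometry(x):
    torus_radius, channel_width = x
    # placeholder for: build mesh -> OpenFOAM run -> selectivity -> inferred mixing time
    return (torus_radius - 0.6) ** 2 + (channel_width - 0.3) ** 2

result = gp_minimize(evaluate_geometry,
                     dimensions=[(0.2, 1.0), (0.1, 0.5)],   # bounds of the geometric variables
                     acq_func="EI",                          # expected improvement
                     n_calls=25, random_state=0)
print("best geometry:", result.x, "objective:", result.fun)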
In this work, a case-study for the optimization of the toroidal mixer geometry is presented for three different operating conditions and seven geometric degrees of freedom. Both pressure drop and mixing time were considered as objective functions and the respective Pareto fronts were obtained. The trade-offs between objective functions were analyzed for each case and the general design features are presented.

[1] C. Webb et al, “Using microfluidics for scalable manufacturing of nanomedicines from bench to gmp: A case study using protein-loaded liposomes,” International Journal of Pharmaceutics, vol. 582, p. 119266, May 2020.

[2] J.-M. Commenge and L. Falk, “Villermaux–dushman protocol for experimental characterization of micromixers, ”Chemical Engineering and Processing: Process Intensification, vol. 50, no. 10, pp.979–990, Oct. 2011.



Solubility prediction of lipid compounds using machine learning

Agustin Porley Santana1, Gabriel Gutierrez1, Soledad Gutiérrez Parodi1, Jimena Ferreira1,2

1Grupo de Ingeniería de Sistemas Químicos y de Procesos, Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Heterogeneous Computing Laboratory, Instituto de Computación, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay

Aligned with the principles of biorefinery and circular economy, biomass waste valorization not only reduces the environmental impact of production processes but also presents economic opportunities for companies. Various natural lipids with complex chemical compositions are recovered from different types of biomass and further processed, such as essential oils from citrus waste and eucalyptus oil from wood.

In this context, wool grease, a complex mixture of esters of steroid and aliphatic alcohols with fatty acids, is a byproduct of wool washing [1]. Its derivatives, including lanolin, cholesterol, and lanosterol, differ in their methods of extraction and market value.

Purification of the high-value products can be achieved using crystallization, chromatography, liquid-liquid extraction, or solid-liquid extraction. The interaction of the selected compound with a liquid phase, known as a solvent or diluent (depending on the case), is a crucial aspect in the design of these processes. To achieve an effective separation of target components, it is crucial to identify the solubility of the compounds in a solvent. Given the practical difficulties in determining solubility and the vast array of natural compounds, a comprehensive bibliographic source for their solubilities in different solvents remains elusive. Employing machine learning [2] is an alternative for predicting the solubility of the target compound in alternate solvents.

This work focuses on the construction of a model to predict the solubility of several lipids in various solvents, using experimental data obtained from scientific articles and handbooks. Almost 800 data points were collected for 6 solutes and 34 solvents. As a first step, 21 properties were evaluated as input variables of the model, including properties of the solute, properties of the solvent, and temperature.

After data preprocessing, the feature selection step uses the Pearson and Spearman correlations between input variables to select the relevant ones. The model is obtained using Random Forest and is compared to a linear regression model. The dataset was divided into training and validation sets in an 80-20 split. Two splitting strategies are analysed: using different compounds in the training and validation sets (extrapolation model), and a random separation of the sets (interpolation model).

The performance of the models obtained with the full and the reduced input variable sets is compared, as is that of the interpolation and extrapolation models.
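A minimal sketch of this comparison, assuming scikit-learn and synthetic stand-in data in place of the solubility dataset, is shown below.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 21))                        # 21 candidate input properties (synthetic)
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * X[:, 7] ** 2 + 0.1 * rng.normal(size=800)

# random 80-20 split (interpolation case); grouping by solute would give the extrapolation case
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "validation R2:", round(r2_score(y_val, model.predict(X_val)), 3))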

In all cases, the Random Forest model performs better than the linear one. The preliminary results show that the model using the reduced set of input variables performs better than the one using the full set of input variables.

References

[1] S. Gutiérrez, M. Viñas (2003). Anaerobic degradation kinetics of a cholesteryl ester. Water Science and Technology, 48(6), 141-147.

[2] P. Daoutidis, J. H. Lee, S. Rangarajan, L. Chiang, B. Gopaluni, A. M. Schweidtmann, I. Harjunkoski, M. Mercangöz, A. Mesbah, F. Boukouvala, F. V. Lima, A. del Rio Chanona, C. Georgakis (2024). Machine learning in process systems engineering: Challenges and opportunities, Computers & Chemical Engineering, 181, 108523.



Refining Equation-Based Model Building for Practical Applications in Process Industry

Shota Kato, Manabu Kano

Kyoto University, Japan

Automating physical model building from literature databases holds significant potential for advancing the process industry, particularly in the rapid development of digital twins. Digital twins, based on accurate physical models, can effectively simulate real-world processes, yielding substantial operational and strategic benefits. We aim to develop an AI system that automatically extracts relevant information from documents and constructs accurate physical models.
One of the primary challenges is constructing practical models from extracted equations. The existing method [Kato and Kano, 2023] builds physical models by combining equations to satisfy two criteria: ensuring all specified variables are included and matching the number of degrees of freedom with the number of input variables. While this approach excels at quickly generating models that meet these requirements, it does not guarantee their solvability, leading to the inclusion of impractical models. This issue underscores the need for a robust validation mechanism.
To address this issue, we propose a filtering method that refines models generated by the approach above to identify solvable models. This method evaluates models by comparing variables across different equations, efficiently identifying redundant or conflicting equations to ensure that only coherent and functional models are retained. Furthermore, we generated an evaluation dataset comprising physical models relevant to chemical engineering and applied our proposed method. The results demonstrated that our method accurately identifies solvable models, significantly enhancing the automated model-building approach from the literature.
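A toy illustration of the degrees-of-freedom and solvability check, assuming SymPy, is given below; the equations are a small hypothetical example, not those of the evaluation dataset.

import sympy as sp

T, P, c, H = sp.symbols("T P c H")
equations = [sp.Eq(c, P / H),                          # Henry's law
             sp.Eq(H, 1.0e5 * sp.exp(-1500 / T))]      # temperature-dependent Henry constant

inputs = {T, P}                                        # variables assumed to be specified
unknowns = set().union(*[eq.free_symbols for eq in equations]) - inputs
dof_ok = len(unknowns) == len(equations)               # one equation per remaining unknown

solution = sp.solve(equations, list(unknowns), dict=True) if dof_ok else None
print("degrees of freedom consistent:", dof_ok)
print("solvable:", bool(solution))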
However, our method faces challenges mainly when the same variable is defined differently under varying conditions. For example, the concentration of a gas dissolved in a liquid might be determined by temperature via an equilibrium constant or by pressure using Henry's law. If the extracted equations include both, the model-building algorithm may include both equations in the output models; the proposed method may then struggle to filter the models precisely. Another limitation is the necessity of comparing multiple equations to determine a model's solvability. In cases where several reaction rate equations and corresponding rate constants are available, all possible combinations must be evaluated. This can become complex and cannot be handled efficiently by our current methodology without additional enhancements.
In summary, aiming to automate physical model building, we proposed a method for refining the models generated by an existing approach. Our method successfully identified solvable models from sets that included redundant ones. Future work will focus on refining our algorithms to handle complexities such as variables defined under different conditions and integrating advanced natural language processing technologies to standardize notation and interpret nuanced relationships between variables, ultimately achieving truly automated physical model building.

References
Kato and Kano, "Efficient physical model building algorithm using equations extracted from documents," Computer Aided Chemical Engineering, 52, pp. 151–156, 2023.



Solar Desalination and Porphyrin Mediated Vis-Light Photocatalysis in Decolouration of Dyes as Biological Analogues Applied in Advanced Water Treatment

Evans Martin Nkhalambayausi Chirwa, Fisseha Andualem Bezza, Osemeikhain Ogbeifun, Shepherd Masimba Tichapondwa, Wesley Lawrence, Bonhle Manoto

University of Pretoria, South Africa

Engineering can be made simpler and more impactful by observing and understanding how organisms in nature solve pressing problems. For example, scientists around the world have observed green plants thriving without organic food inputs, using the complex photosynthesis process to kick-start a biochemical food chain. Two case studies are presented based on research under way at the University of Pretoria: solar desalination of sea water using plant-based carbon material as solar absorbers, and solar or vis-light photocatalysis using porphyrin-based BiOCl and BiOIO3 compounds that simulate the function of chlorophyll in advanced water treatment and recovery. In the study on solar desalination using 3D-printed Graphene Oxide (GO), 82% water recovery has thus far been achieved using a simple GO-black TiO2 monolayer as a solar absorber supported on cellulose nanocubes. In preparation for possible scale-up of the process, methods are being investigated for inhibition or reversal of salting on the absorber surface, which inhibits energy transfer. For the vis-light photocatalytic process for decolouration of dyes, a Porphyrin@Bi12O17Cl2 system was used to successfully degrade methyl blue dye in batch experiments, achieving up to 98% degradation within 120 minutes. These results show that more advanced and more efficient engineered systems can be achieved through observation of nature and of how these systems have survived over billions of years. Based on these observations, the Water Utilisation Group at the University of Pretoria has studied and developed fundamental processes for the degradation and remediation of unwanted compounds such as disinfection byproducts (DBPs), volatile organic compounds (VOCs), and pharmaceutical products in water.



Diagnosing Faults in Wastewater Systems: A Data-Driven Approach to Handle Imbalanced Big Data

Morteza Zadkarami1, Krist Gernaey2, Ali Akbar Safavi1, Pedram Ramin2

1Shiraz University, Iran, Islamic Republic of; 2Technical University of Denmark (DTU), Denmark

Process monitoring is critical in industrial settings to ensure system functionality, making it essential to identify and understand the causes of any faults that occur. Although a considerably broader range of research focuses on fault detection, significantly less attention has been devoted to fault diagnosis. Typically, faults arise either from abnormal instrument behavior, suggesting the need for calibration or replacement, or from process faults indicating a malfunction within the system [1]. A key objective of this study is to apply the proposed process fault diagnosis methodology to a benchmark that closely mirrors real-world conditions. Specifically, we propose a fault diagnosis framework for a wastewater treatment plant (WWTP) that effectively addresses the challenges of imbalanced big data typically found in large-scale systems. Fault scenarios were simulated using the Benchmark Simulation Model No. 2 (BSM2) [2], a highly regarded tool that closely mimics the operations of a real-world WWTP. Using BSM2, a dataset was generated spanning 609 days and comprising 876,960 data points across 31 process parameters.

In contrast to our previous research [3], [4], which primarily focused on fault detection frameworks for imbalanced big data in the BSM2, this study extends the approach to include a comprehensive fault diagnosis structure. Specifically, it determines whether a fault has occurred and, if so, identifies whether the fault is due to an abnormality in the instrument, the process, or both simultaneously. A major challenge lies in the highly imbalanced nature of the dataset: 87.82% of the data represent normal operating conditions, while 6% reflect instrument faults, 6.14% correspond to process faults, and less than 0.05% involve concurrent faults in both the process and instruments. To address this imbalance, we evaluated multiple deep network architectures and various learning strategies to identify a robust fault diagnosis framework that achieves acceptable accuracy across all fault scenarios.
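One common way to counteract such imbalance during training is sketched below, assuming PyTorch: class weights inversely proportional to class frequency in the cross-entropy loss. The class shares mirror the proportions quoted above, while the network output is a random stand-in.

import torch
import torch.nn as nn

class_share = torch.tensor([0.8782, 0.0600, 0.0614, 0.0004])  # normal, instrument, process, both
weights = 1.0 / class_share
weights = weights / weights.sum()                              # rare classes weighted more heavily

criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(32, 4)                                    # stand-in for the diagnosis network output
labels = torch.randint(0, 4, (32,))
print(criterion(logits, labels))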

References:

[1] Liu, Y., Ramin, P., Flores-Alsina, X., & Gernaey, K. V. (2023). Transforming data into actionable knowledge for fault detection, diagnosis and prognosis in urban wastewater systems with AI techniques: A mini-review. Process Safety and Environmental Protection, 172, 501-512.

[2] Al, R., Behera, C. R., Zubov, A., Gernaey, K. V., & Sin, G. (2019). Meta-modeling based efficient global sensitivity analysis for wastewater treatment plants–An application to the BSM2 model. Computers & Chemical Engineering, 127, 233-246.

[3] Zadkarami, M., Gernaey, K. V., Safavi, A. A., & Ramin, P. (2024). Big Data Analytics for Advanced Fault Detection in Wastewater Treatment Plants. In Computer Aided Chemical Engineering (Vol. 53, pp. 1831-1836). Elsevier.

[4] Zadkarami, M., Safavi, A. A., Gernaey, K. V., & Ramin, P. (2024). A Process Monitoring Framework for Imbalanced Big Data: A Wastewater Treatment Plant Case Study. In IEEE Access (Vol. 12, pp. 132139-132158). IEEE.



Industrial Time Series Forecasting for Fluid Catalytic Cracking Process

Qiming Zhao1, Yaning Zhang2, Tong Qiu1

1Department of Chemical Engineering, Tsinghua University, Beijing 100084, China; 2PetroChina Planning & Engineering Institute, Beijing 100083, China

Abstract

Industrial process systems generate complex time-series data, challenging traditional regression models that assume static relationships and struggle with system uncertainty and process drifts. These models may also be sensitive to noise and disturbances in the training data, potentially leading to unreliable predictions when encountering fluctuating inputs.

To address these limitations, researchers have explored various algorithms in time-series analysis. The wavelet transform (WT) has emerged as a powerful tool for analyzing non-stationary time series by representing them with localized signals. For instance, Hosseini et al. applied WT and feature extraction to improve gas-liquid two-phase flow meters in oil and petrochemical industries, successfully classifying flow regimes and calculating void fraction percentages with low errors. Another approach to modeling uncertainties in observations is through stochastic processes, with the Gaussian process (GP) gaining popularity due to its flexibility. Bradford et al. demonstrated its effectiveness by proposing a GP-based nonlinear model predictive control algorithm that considered state-dependent uncertainty, which they verified in a challenging semi-batch bioprocess case study. Recent research has explored the integration of WT and GP. Band et al. developed a hybrid model combining these techniques, which accurately predicted groundwater levels in arid areas. However, much of the current research focuses on one-step ahead forecasts rather than comprehensive process modeling.

This research explores a novel predictive modeling framework that integrates wavelet features with GP regression, thus creating a more robust predictive model capable of extracting both temporal and cross-variable information from the data while adapting to changing patterns over time. The effectiveness of this hybrid method is verified using an industrial dataset from fluid catalytic cracking (FCC), a complex petrochemical process crucial for fuel production. The results demonstrate the method’s robustness in delivering accurate and reliable predictions despite the presence of noise and system variability typical in industrial settings. Percentage yields are predicted with a mean absolute percentage error (MAPE) of less than 1% for critical products, meeting the requirements for industrial application in modeling and optimization.
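A hedged sketch of the hybrid idea is shown below, assuming PyWavelets and scikit-learn (not necessarily the tooling used in this work): wavelet-band features of a synthetic signal feed a Gaussian-process regressor for one-step-ahead prediction.

import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.arange(512)
signal = np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)      # stand-in process variable

def wavelet_features(window, wavelet="db4", level=3):
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.mean(np.abs(c)) for c in coeffs])      # one energy-like feature per band

window, step = 64, 8
X = np.array([wavelet_features(signal[i:i + window])
              for i in range(0, signal.size - window, step)])
y = signal[window::step][:len(X)]                               # one-step-ahead target for each window

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
mean, std = gp.predict(X[-5:], return_std=True)
print(np.round(mean, 3), np.round(std, 3))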

References

[1] Band, S. S., Heggy, E., Bateni, S. M., Karami, H., Rabiee, M., Samadianfard, S., Chau, K.-W., & Mosavi, A. (2021). Groundwater level prediction in arid areas using wavelet analysis and Gaussian process regression. Engineering Applications of Computational Fluid Mechanics, 15(1), 1147–1158. https://doi.org/10.1080/19942060.2021.1944913

[2] Bradford, E., Imsland, L., Zhang, D., & del Rio Chanona, E. A. (2020). Stochastic data-driven model predictive control using Gaussian processes. Computers & Chemical Engineering, 139, 106844. https://doi.org/10.1016/j.compchemeng.2020.106844

[3] Hosseini, S., Taylan, O., Abusurrah, M., Akilan, T., Nazemi, E., Eftekhari-Zadeh, E., Bano, F., & Roshani, G. H. (2021). Application of Wavelet Feature Extraction and Artificial Neural Networks for Improving the Performance of Gas-Liquid Two-Phase Flow Meters Used in Oil and Petrochemical Industries. Polymers, 13(21), Article 21. https://doi.org/10.3390/polym13213647



Electrochemical conversion of CO2 into CO. Analysis of the influence of the electrolyzer type, operating parameters, and separation stage

Luis Vaquerizo1,2, David Danaci2,3, Bhavin Siritanaratkul4, Alexander J Cowan4, Benoît Chachuat2

1Institute of Bioeconomy, University of Valladolid, Spain; 2The Sargent Centre for Process Systems Engineering, Imperial College, UK; 3I-X Centre for AI in Science, Imperial College, UK; 4Department of Chemistry, Stephenson Institute for Renewable Energy, University of Liverpool, UK

The electrochemical conversion of CO2 into CO is an opportunity for the decarbonization of the chemical industry, turning the current linear utilization scheme of carbon into a more circular scheme. Compared to other existing CO2 conversion technologies, the electrochemical reduction of CO2 into CO benefits from the fact that it is a room-temperature process, it does not depend on the physical location of the plant, and its energy efficiency is in the range of 40-50%. Although some techno-economic analyses have already assessed the potential of this technology, finding that the CO production cost is mainly influenced by the CO2 cost, the availability and price of electricity, and the maturity of the carbon capture technologies, none of them addressed the effect of the electrolyzer type, operating conditions, and separation stage on the final production cost. This work determines the impact of the electrolyzer type (either AEM or BPM), the operating parameters (current density and CO2 inlet flow), and the technology used for product separation (either PSA or, for the first time for this technology, cryogenic distillation) on the annual production cost of CO using experimental data for CO2 electrolysis. The main findings of this work are that the use of either AEM or BPM electrolyzers and either PSA or cryogenic distillation yields a very similar annual production cost (around 25 MM$/y for a 100 t/d CO plant) and that operation beyond current densities of 150 mA/cm2 and CO2 inlet flowrates of 3.2 (AEM) and 1 (BPM) NmL/min/cm2 only slightly affects the annual production cost. For all the operating cases considered (AEM or BPM electrolyzer, and PSA or cryogenic distillation), the minimum production cost is reached when maximizing the CO productivity in the electrolyzer. Moreover, it is found that although the downstream process alternative has minimal influence on the CO production cost, a downstream process based on PSA separation seems preferable, at least at this scale, since the cryogenic distillation alternative also requires a final PSA step to separate the column overhead products. Finally, a minimum selling price of 868 $/t CO is estimated in this work, considering a CO2 cost of 40 $/t and an electricity cost of 0.03 $/kWh. Although this value is higher than the current CO selling price (600 $/t), there is some margin for improvement if the current electrolyzer CAPEX and lifetime are improved.



Enhancing Pharmaceutical Development: Process Modelling and Control Strategy Optimization for Liquid Drug Product Multiphase Mixing and Milling Processes

Noor Al-Rifai, Guoqing Wang, Sushank Sharma, Maxim Verstraeten

Johnson & Johnson Innovative Medicine, Belgium

Recent regulatory trends from health authorities advocate for greater understanding of drug product and process, enabling more efficient drug development, supply chain agility and the introduction of new and challenging therapies and modalities. Traditional drug product process development and validation relies on fully experimental design spaces with limited insight into what drives process performance, and where every change (in equipment, material attributes, scale) triggers the requirement for a new experimental design space, post-approval submission, as well as risking issues with process performance. Quality-by-Design in process development and manufacturing helps to achieve these aims, aided by sufficient mechanistic understanding and resulting in flexible yet robust control strategies.

Mechanistic correlations and computational fluid dynamics simulations provide digital input towards demonstrating process robustness, scale-up and transfer; particularly for pharmaceutical mixing and milling setups involving complex and unconventional geometries.

This presentation will show synergistic workflows, utilizing mechanistic correlations and/or CFD and PAT to gain process understanding, optimize development work and construct control strategies for pharmaceutical multiphase mixing and milling processes.



Assessing operational resilience within the natural gas monetisation network for enhanced production risk management: Qatar as a case study

Noor Yusuf, Ahmed AlNouss, Roberto Baldacci, Tareq Al-Ansari

Hamad Bin Khalifa University, Qatar

The increased turbulence in energy consumer markets has imposed risks on energy suppliers regarding sustaining markets and profits. While risk mitigation strategies are essential when assessing new projects, retrofitting existing industrially mature infrastructure to adapt to changing market conditions imposes added cost and time. For the State of Qatar, a gas-dependent economy, the natural gas industry is highly vulnerable to exogenous uncertainties in final markets, including demand and price volatility. On the other hand, endogenous uncertainties could hinder a project's profitability and sustainability targets due to poor proactive planning in the early design stages of the project. Hence, in order to understand the risk management capabilities of this industrially mature network, it is crucial to assess resilience at the production system level and at the overall network level. This is especially important in natural gas supply chains, as failure in the production part would influence the subsequent components, represented by storage, shipping, and agreed volume sales to markets. This work evaluates the resilience of the existing Qatari natural gas monetisation infrastructure (i.e., production) by addressing the system's failure to satisfy the targeted production capacity due to process-level disruptions and/or final market conditions. The network addressed herein comprises 7 direct and indirect natural gas utilisation industrial clusters (i.e., natural gas liquefaction, ammonia and urea, methanol and MTBE, power, and gas-to-liquids). Process technical data simulated using Aspen Plus, along with calculated emissions and economic data, were used to estimate the resilience of individual processes and of the overall network under different endogenous disruption scenarios. First, historical and forecasted demand and prices were used to simulate the optimal natural gas allocation to processes over a planning period between 2000 and 2032. Secondly, the resilience index of the processes within the baseline allocation strategy was investigated throughout the planning period. Overall, a resilience index value below 100% indicates low process resilience towards changing endogenous and/or exogenous fluctuations. Within the investigated network, the annual resilience index was enhanced from 35% to 90% for the ammonia process and from 36% to 84% for the methanol process. The increase in the value of the resilience index was mainly due to the introduction of operational bounds and of forecasted demand and price data that aided efficient, resilient process management. Finally, qualitative recommendations were summarised to aid decision-makers with planning under different economic and environmental scenarios to maintain the resilience of the network despite fluctuations imposed by unavoidable external factors, including climate change, policy change, and demand fluctuations.



Membrane-based Blue Hydrogen Production in Sub-Ambient Temperature: Process Optimization, Techno-Economic Analysis and Life Cycle Assessment

Jiun Yun1, Boram Gu1, Kyunhwan Ryu2

1Chonnam National University, Korea, Republic of (South Korea); 2Sunchon National University, Korea, Republic of (South Korea)

In 2022, 62% of hydrogen was produced using natural gas, while only 0.1% came from water electrolysis [1]. This suggests that an immediate shift to green hydrogen is infeasible in the short- to medium-term, which makes blue hydrogen production crucial. Auto-thermal reforming (ATR) processes, which combine steam methane reforming reaction and partial oxidation, offer high energy efficiency by eliminating additional heating. During the ATR process, CO2 can be captured from the shifted syngas, which consists mainly of a CO2/H2 binary mixture.

Recently, gas separation membranes have been gaining significant attention for their high energy efficiency for CO₂ capture. For instance, the Polaris CO₂-selective membrane, specifically designed to separate CO₂/H₂ mixtures, is known to offer a high CO₂ permeance of 1000 GPU and a CO₂/H₂ selectivity of 10. Furthermore, sub-ambient temperatures are reported to enhance its CO₂/H₂ selectivity up to 20, enabling the production of high-purity liquid CO₂ (over 98%) [1].

Hydrogen recovery rates are significantly affected by the H₂ purity at the PSA inlet and the pressure of the tail gas [2], which are dependent on the selected capture location. In the ATR process, CO2 capture can be applied to shifted syngas and PSA tail gas. Therefore, optimal location selection is crucial for improving hydrogen production efficiency.

In this study, an integrated process combining ATR with a sub-ambient temperature membrane process for CO₂ capture was designed using gPROMS. Two different capture locations were compared, and the economic feasibility of the integrated process was evaluated. The ATR process was developed as a flowsheet-based model, while the membrane unit was built using equation-based custom modeling, consisting of balance and permeation models. Concentration polarization effects, which play a significant role in performance when membrane permeance is high, were also accounted for. In both cases, the CO₂ capture rate was fixed at 90%.
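
As an illustration of the kind of permeation model embedded in such a unit, the following is a minimal cross-flow sketch for a CO2/H2 mixture with an assumed CO2/H2 selectivity of 20; the feed composition, pressures, permeances and membrane area are illustrative assumptions (and concentration polarization is neglected), not the authors' gPROMS model:

```python
import numpy as np

# Minimal cross-flow membrane sketch for a CO2/H2 mixture, assuming constant
# permeances and ideal gas behaviour. All numbers below are illustrative.
GPU = 3.35e-10                                 # mol m-2 s-1 Pa-1 per GPU
perm = {"CO2": 1000 * GPU, "H2": 50 * GPU}     # CO2/H2 selectivity = 20 (assumed)
p_feed, p_perm = 30e5, 1e5                     # feed / permeate pressure [Pa] (assumed)
feed = {"CO2": 6.0, "H2": 14.0}                # molar flows [mol/s] (assumed shifted syngas)
area, n_cells = 50.0, 500                      # membrane area [m2], discretisation
dA = area / n_cells

ret = dict(feed)
permeate = {"CO2": 0.0, "H2": 0.0}
for _ in range(n_cells):
    total = ret["CO2"] + ret["H2"]
    x = {k: v / total for k, v in ret.items()}
    y = dict(x)                                # initial guess for local permeate composition
    for _ in range(50):                        # fixed-point iteration on y
        J = {k: perm[k] * (x[k] * p_feed - y[k] * p_perm) for k in ret}
        Jtot = sum(J.values())
        y = {k: J[k] / Jtot for k in ret}
    for k in ret:                              # deplete retentate, accumulate permeate
        ret[k] -= J[k] * dA
        permeate[k] += J[k] * dA

co2_capture = permeate["CO2"] / feed["CO2"]
h2_purity = ret["H2"] / (ret["H2"] + ret["CO2"])
print(f"CO2 capture: {co2_capture:.1%},  retentate H2 purity: {h2_purity:.1%}")
```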

In the membrane-based CO2 capture process, the inlet gas was cooled to -35°C using a cooling cycle, increasing membrane selectivity up to 20. This enables energy savings and the capture of high-purity liquid CO₂. Our simulation results demonstrated that the H₂ purity at the PSA inlet reached 92% when CO2 was captured from syngas, and this high H₂ purity improved the PSA recovery rate. For PSA tail gas capture, the CO₂ capture rate was 98.8%, with only a slight increase in the levelized cost of hydrogen (LCOH). However, in the syngas capture case, higher capture rates led to increased costs. Overall, syngas capture achieved a lower LCOH due to the higher PSA recovery rate.

Further modeling of the PSA unit will be performed to optimize the integrated process and perform a CO₂ life cycle assessment. Our results will provide insights into the potential of sub-ambient membrane gas separation for blue hydrogen production and guidelines for the design and operation of PSA and gas separation membranes in the ATR process.

References

[1] International Energy Agency, Global Hydrogen Review 2023, 2023.

[2] C.R. Spínola Franco, Pressure Swing Adsorption for the Purification of Hydrogen, Master's Dissertation, University of Porto, 2014.



Dynamic Life Cycle Assessment in Continuous Biomanufacturing

Ada Robinson Medici1, Mohammad Reza Boskabadi2, Pedram Ramin2, Seyed Soheil Mansouri2, Stavros Papadokonstantakis1

1Institute of Chemical, Environmental and Bioscience Engineering, TU Wien, 1060 Wien, Austria; 2Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Process Systems Engineering (PSE) has seen rapid advancements since its inception in the 1970s. Currently, there is an increasing demand for tools that enable the integration of sustainability metrics into process simulation to cope with today’s grand challenges. In recent years, continuous manufacturing has gained attention in biologics production due to its ability to improve process monitoring and ensure consistent product quality. This work introduces a Python-based interface that integrates process simulation and control with cradle-to-gate Life Cycle Assessment, resulting in dynamic process inventories and thus in dynamic life cycle inventories and impact assessment (dLCA), with the potential to improve environmental assessment and sustainability metrics in the biopharmaceutical industry.

This framework utilizes the open-source tool Activity Browser, a graphical user interface for Brightway that supports the analysis of environmental impacts using LCA (Mutel, 2017). The tool allows real-time tracking of the environmental inventories of the foreground process and of its impact assessment. Unlike traditional sustainability indicators, such as the E-factor, which focuses only on waste generation, the introduced approach can retrieve comprehensive environmental inventories from the ecoinvent 3.9.10 database to calculate mid-point (e.g. global warming potential) and end-point LCA indicators (e.g. damage to ecosystems) according to the ReCiPe framework, a widely recognized method in life cycle impact assessment.
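
A minimal sketch of the dynamic impact calculation idea is given below: a foreground inventory time series from the process simulation is multiplied by static background characterization factors and integrated over time. The exchange names and factor values are illustrative assumptions; in the actual framework the background data come from ecoinvent via Activity Browser / Brightway.

```python
import numpy as np

# Dynamic LCIA sketch: hourly foreground exchanges times assumed GWP factors.
dt_h = 1.0                                       # simulation output interval [h]
t = np.arange(0, 24, dt_h)

inventory = {                                    # assumed foreground exchanges per hour
    "electricity_kWh": 5.0 + 1.5 * np.sin(2 * np.pi * t / 24),   # control-dependent
    "steam_kg":        2.0 + 0.5 * np.cos(2 * np.pi * t / 24),
}
gwp_factor = {"electricity_kWh": 0.4, "steam_kg": 0.2}           # assumed kg CO2-eq per unit

gwp_profile = sum(gwp_factor[k] * inventory[k] for k in inventory)  # dynamic profile
gwp_integral = np.trapz(gwp_profile, dx=dt_h)                       # integral LCIA

print("peak GWP rate : %.2f kg CO2-eq/h" % gwp_profile.max())
print("24 h integral : %.1f kg CO2-eq" % gwp_integral)
```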

This study utilizes the KTB1 benchmark model as a dynamic simulation model for continuous biomanufacturing, which serves as a decision-support tool for evaluating process design, optimization, monitoring, and control strategies in real time (Boskabadi et al., 2024). KTB1 is a comprehensive dynamic model developed in MATLAB-Simulink covering upstream and downstream components, providing an integrated production-process perspective. The functional unit for this study is the production of a typical maintenance dose commonly found in pharmaceutical formulations, 40 mg of pure Active Pharmaceutical Ingredient (API) Lovastatin, under dynamic manufacturing conditions.

Preliminary results show that control decisions can have a significant impact on the dynamic and integral LCA profile for selected resource and energy-related Life Cycle Impact Assessment (LCIA) categories. By integrating LCIA into the control framework, a multi-objective model predictive control (MO-MPC) is enabled with the potential to dynamically adjust process parameters and optimize process conditions based on real-time environmental and process data (Sohn et al., 2020). This work lays the foundation for an advanced computational platform for assessing sustainability in biomanufacturing, positioning it as a critical tool in the industry's ongoing transition toward more environmentally responsible continuous production methods.

Importantly, open-source tools ensure transparency, adaptability, and accessibility, facilitating collaboration and further development within the scientific community.

References

Boskabadi, M.R., Ramin, P., Kager, J., Sin, G., Mansouri, S.S., 2024. KT-Biologics I (KTB1): A dynamic simulation model for continuous biologics manufacturing. Computers & Chemical Engineering 188, 108770. https://doi.org/10.1016/j.compchemeng.2024.108770

Mutel, C., 2017. Brightway: An open source framework for Life Cycle Assessment. JOSS 2, 236. https://doi.org/10.21105/joss.00236

Sohn, J., Kalbar, P., Goldstein, B., Birkved, M., 2020. Defining Temporally Dynamic Life Cycle Assessment: A Review. Integr Envir Assess & Manag 16, 314–323. https://doi.org/10.1002/ieam.4235



Multi-level modeling of reverse osmosis process based on CFD results

Yu-hyeok Jeong, Boram Gu

Chonnam National University, Korea, Republic of (South Korea)

Reverse osmosis (RO) is a membrane separation process that is widely used in desalination and wastewater treatment [1]. However, solutes blocked by the membrane can accumulate near the membrane, causing concentration polarization (CP), which hinders RO separation performance and reduces energy efficiency [2]. Structures called spacers are added between membrane sheets to create flow channels, which also induces disturbed flow that mitigates CP. Different types of spacers exhibit different hydrodynamic behavior, and understanding this is essential to designing the optimal spacer.

Computational fluid dynamics (CFD) can be a useful tool for theoretically analyzing the impact of these spacers, through which the effect of the geometric characteristics of each spacer on RO performance can be understood. However, due to its large computing resource requirements, CFD is limited to small-scale RO units. Alternatively, mathematical modeling of RO modules can help to understand the effect of spacers on process variables and separation performance by incorporating appropriate constitutive model equations. Despite its advantage of low computational demands even for large-scale simulations, the impact of spacers is usually approximated by simple empirical correlations derived from experimental data over limited ranges of operating and geometric conditions.

To overcome this, we present a novel modeling approach that combines these two methods. First, three-dimensional (3D) CFD models of RO spacer units, at the smallest scale able to represent the periodicity of the spacer geometry, were simulated for various spacers (a total of 20 geometries) and a wide range of operating conditions. By fitting the relationship between the operating conditions and the simulation results with the response surface methodology, a surrogate model with the operating conditions as independent variables and the simulation results as dependent variables was derived for each spacer. Using the surrogate model, the outlet conditions were derived from the inlet conditions for a single unit. These outlet conditions were then iteratively applied as the inlet conditions of the next unit, thereby representing processes at various scales.
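
A minimal sketch of this surrogate-and-chain idea, with a toy function standing in for the CFD unit model and assumed numerical values throughout, is shown below:

```python
import numpy as np

# Sketch: fit a quadratic response surface to "CFD" results for one spacer unit,
# then chain units so the outlet of unit i becomes the inlet of unit i+1.
# The cfd_unit function and all coefficients are illustrative assumptions.

def cfd_unit(inlet):
    """Stand-in for one CFD simulation of a single spacer-filled unit.
    inlet = (pressure [bar], concentration [g/L]); returns outlet values."""
    p, c = inlet
    flux = 0.05 * (p - 0.8 * c)             # toy water flux law
    return np.array([p - 0.02 - 1e-3 * c,   # pressure loss
                     c + 0.1 * flux * c])   # concentration increase

def features(X):
    p, c = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(p), p, c, p * c, p**2, c**2])

# 1) design of experiments over inlet conditions and response-surface fitting
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(10, 60, 200), rng.uniform(5, 40, 200)])
Y = np.array([cfd_unit(x) for x in X])
coef, *_ = np.linalg.lstsq(features(X), Y, rcond=None)

def surrogate(x):
    return (features(np.atleast_2d(x)) @ coef).ravel()

# 2) chain units to represent a larger module
state = np.array([55.0, 10.0])             # module inlet: 55 bar, 10 g/L
for unit in range(20):
    state = surrogate(state)
print("module outlet: %.1f bar, %.1f g/L" % (state[0], state[1]))
```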

As expected, the CFD analysis in this study showed varied hydrodynamic behaviors across the spacers, resulting in up to a 10% difference in water flux. The multi-level modeling using the surrogate model showed that the optimal spacer design may vary depending on the size of the process, as the ranking of performance indices, such as recovery and specific energy consumption, changes with process size. In particular, pressure losses were not proportional to process size, and water recovery did not increase linearly. This highlights the need for CFD-based surrogate models in large-scale process simulations.

By combining 3D CFD simulation with 1D mathematical modeling, the hydrodynamic behavior influenced by the geometric characteristics of the spacer and the varying effects of spacers at different process scales can be efficiently captured, serving as a platform for large-scale process optimization.

References

[1] Sung, Berrin, Novel technologies for reverse osmosis concentrate treatment: A review, Journal of Environmental Management, 2015.

[2] Haidari, Heijman, Meer, Optimal design of spacers in reverse osmosis, Separation and Purification Technology, 2018.



Optimal system design and scheduling for ammonia production from renewables under uncertainty: Stochastic programming vs. robust optimization

Alexander Klimek1, Caroline Ganzer1, Kai Sundmacher1,2

1Max Planck Institute for Dynamics of Complex Technical Systems, Department of Process Systems Engineering, Sandtorstr. 1, 39106 Magdeburg, Germany; 2Otto von Guericke University, Chair for Process Systems Engineering, Universitätsplatz 2, 39106 Magdeburg, Germany

Production of green ammonia from renewable electricity could play a vital role in a net zero economy, yet the intermittency of wind and solar energy poses challenges to sizing and scheduling of such plants [1]. One approach to investigate the interaction between fluctuating renewables and chemical processes is to model the production network in the form of a large-scale mixed-integer linear programming (MILP) problem [2, 3].

A wide range of parameters is necessary to characterize the chemical production system, including investment costs, wind speeds, solar irradiance, purchase and sales prices. These parameters are usually derived from literature data and fixed before optimization. However, parameters such as costs and capacity factors are not deterministic in reality but rather subject to uncertainty. Mathematical methods of optimization under uncertainty can be applied to deal with such deviations from the nominal parameter values. Stochastic programming (SP) and robust optimization (RO) in particular are widely used to address parameter uncertainty in optimization problems and to identify solutions that satisfy all constraints under all possible realizations of uncertain parameters [4].

In this work, we reformulate a deterministic MILP model for determining the optimal design and scheduling of a renewables-based ammonia plant as an SP and an RO problem. Using the Pyomo extensions mpi-sppy and ROmodel [5, 6], the optimization problems are implemented and solved under parameter uncertainty. The results in terms of plant design and operation are compared with the outcomes of the deterministic formulation. In the case of SP, a two-stage problem is used, whereby Monte Carlo sampling is applied to generate different scenarios. Analysis of the value of the stochastic solution (VSS) shows that, when the model is constrained by the nominal scenario's first-stage decisions and subjected to the conditions of other scenarios, the deterministic model cannot handle even a 1% decrease in the wind potential, highlighting the model’s sensitivity. The stochastic approach mitigates this risk with a solution approximately 30% worse in terms of net present value (NPV) but more resilient to fluctuations. For RO, different approaches are chosen with regard to uncertainty sets and formulation. The very conservative approach using box uncertainty sets is relaxed by the use of auxiliary parameters, ensuring that only a certain number of uncertain parameters can take their worst-case value at the same time. The RO framework is extended by the use of adjustable decision variables, requiring a reduction in the time horizon compared to the SP formulation in order to solve these problems within a reasonable time frame.
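
To make the two-stage structure concrete, the following is an illustrative extensive form (deterministic equivalent) of a tiny sizing problem written in plain Pyomo: the first-stage capacity is shared across wind scenarios and second-stage backup purchases adapt per scenario. All numbers are assumptions; the actual study solves a full design/scheduling MILP with mpi-sppy and ROmodel.

```python
import pyomo.environ as pyo

# Toy two-stage stochastic program (extensive form), illustrative numbers only.
yields = {"low": 0.8, "nominal": 1.0, "high": 1.2}   # scenario capacity factors
S = list(yields)
prob = {s: 1.0 / len(S) for s in S}
capex, backup_price, demand = 70.0, 100.0, 100.0     # assumed per-unit costs / demand

m = pyo.ConcreteModel()
m.cap = pyo.Var(within=pyo.NonNegativeReals)             # first-stage wind capacity
m.backup = pyo.Var(S, within=pyo.NonNegativeReals)       # second-stage backup purchase

m.balance = pyo.Constraint(
    S, rule=lambda m, s: yields[s] * m.cap + m.backup[s] >= demand)

m.cost = pyo.Objective(
    expr=capex * m.cap + sum(prob[s] * backup_price * m.backup[s] for s in S),
    sense=pyo.minimize)

pyo.SolverFactory("glpk").solve(m)      # any LP/MILP solver (glpk, cbc, highs, ...)
print("wind capacity:", round(pyo.value(m.cap), 1))
for s in S:
    print(s, "backup purchase:", round(pyo.value(m.backup[s]), 1))
```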

References:
[1] Wang, H. et al. 2021. ACS Sust. Chem. Eng. 9, 7, 2816–2834.
[2] Ganzer, C. and Mac Dowell, N. 2020. Sust. En. Fuels 4, 8, 3888–3903.
[3] Svitnič, T. and Sundmacher, K. 2022. Appl. En. 326, 120017.
[4] Mavromatidis, G. 2017. PhD Dissertation. ETH Zurich.
[5] Knueven, B. et al. 2023. Math. Prog. Comp. 15, 4, 591–619.
[6] Wiebe, J. and Misener, R. 2022. Optim. & Eng. 23, 4, 1873–1894.



CO2 Sequestration and Valorization to Synthetic Fuels: Multi-criteria Based Process Design and Optimization for Feasibility

Thuy T. Hong Nguyen, Satoshi Taniguchi, Takehiro Yamaki, Nobuo Hara, Sho Kataoka

National Institute of Advanced Industrial Science and Technology, Japan

CO2 capture and utilization/storage (CCU/S) has been considered one of the linchpin strategies to reduce greenhouse gas (CO2-equivalent) emissions. CCS promises to contribute to removing large amounts of CO2 but faces high-cost barriers. CCU produces high-value products, thereby gaining some economic benefit, but requires large supplies of energy. Different CCU pathways have been studied to utilize CO2 as a renewable raw material for producing various valuable chemical products and fuels. In particular, many kinds of catalysts and synthesis conditions have been examined to convert CO2 into different types of gaseous and liquid fuels (methane, methanol, gasoline, etc.). As the demand for these synthetic fuels is exceptionally high, such CCU pathways could potentially help mitigate large CO2 emissions. Nevertheless, the implementation of these CCU pathways hinges on an ample supply of carbon-free H2 raw material, which is currently not available for large-scale production. Thus, to remove large industrial CO2 emission sources, combining these CCU pathways with sequestration is required.

This study aims to develop a CCUS system that can contribute to removing large CO2 emissions with high economic efficiency. Herein, multiple CCU pathways converting CO2 to different gaseous and liquid synthetic fuels (methane, methanol and Fischer-Tropsch fuels) were examined for integration with CO2 sequestration in an economic manner. A process simulator is employed to design and optimize the operating conditions of all included processes. A multi-objective evaluation model is constructed to optimize the economic benefit and the amount of CO2 reduction. Based on the optimization results, feasible synthetic fuel production processes that can be integrated with the CO2 sequestration process for mitigating large CO2 emission sources can be proposed.

The results showed that the formulation of the CCUS system (the types of CCU pathways and the amounts of CO2 to be utilized and stored) heavily depends on the type and purchasing cost of the hydrogen raw material, product selling prices, and the carbon tax. A CCUS system integrating the CCU pathways converting CO2 to methanol and methane with CO2 sequestration can contribute to large CO2 reductions with low economic loss. The economic benefit of this system can be dramatically enhanced when the carbon tax increases up to $250/ton CO2. Due to the exceptionally high demand for energy supply and the high initial investment cost, the Fischer-Tropsch fuel synthesis process is the least competitive in terms of both economic benefit and potential CO2 reduction.



Leveraging Pilot-scale Data for Real-Time Analysis of Ion Exchange Chromatography

Søren Villumsen, Jesper Frandsen, Jakob Huusom, Xiadong Liang, Jens Abildskov

Technical University of Denmark, Denmark

Chromatography processes are key in the downstream processing of bio-manufactured products to attain high purity. Chromatographic separation is hard to operate optimally due to complex mechanisms, which can only be partly described by partial differential equations of convection, diffusion, mass transfer and adsorption. The processes may also be subject to batch-to-batch variation in feed composition and operating conditions. To ensure high product purity, chromatography may be operated in a conservative manner, meaning that fraction collection may be started later than necessary and terminated prematurely. This results in sub-optimal chromatographic yields in production, as operators are forced to make the tough decision to cut the purification process at a point where they know purity is ensured at the expense of product loss (Kozorog et al. 2023).

If the overall separation process were better understood and monitored, such that batch-to-batch variation could be better accounted for, it may be possible to secure a higher yield in the separation process (Kumar and Lenhoff 2020). Using mechanistic or hybrid models of the chromatographic process, the process may be analyzed in real time, leading to potential insights about the current batch. These insights could be communicated to operators, allowing them to make better-informed decisions and increase yield without sacrificing purity.

The potential for this real-time process prediction was investigated at a pilot-scale ion-exchange facility at the Technical University of Denmark (DTU). The process is equipped with sensors for real-time data extraction and supports digital twin development (Jones et al. 2022). Using these data, mechanistic and hybrid models were fitted to predict key process events such as breakthrough. The partial differential equations were solved using state-of-the-art discretization methods that are sufficiently fast computationally to allow for real-time prediction of process events (Frandsen et al. 2024). This serves as a proof of concept for real-time analysis of chromatographic processes through Monte Carlo simulation.
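
To illustrate the kind of model involved, the following is a minimal method-of-lines sketch of a lumped-rate column (convection, axial dispersion and linear-driving-force adsorption with a Langmuir isotherm). All parameter values are illustrative assumptions, not the pilot-scale model or the discretization scheme of Frandsen et al.; the point is that such a model solves fast enough for repeated evaluation.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.1, 100                      # column length [m], grid cells
u, Dax, eps = 1e-3, 1e-7, 0.4        # velocity [m/s], dispersion [m2/s], porosity
k, qmax, Keq = 0.5, 5.0, 2.0         # LDF rate [1/s], Langmuir capacity, affinity (assumed)
F = (1 - eps) / eps
dz = L / n
c_feed = 1.0                         # feed concentration (frontal loading)

def rhs(t, y):
    c, q = y[:n], y[n:]
    c_up = np.concatenate(([c_feed], c[:-1]))    # upwind (inlet-side) neighbours
    conv = -u * (c - c_up) / dz
    disp = Dax * (np.roll(c, -1) - 2 * c + c_up) / dz**2
    disp[-1] = Dax * (c_up[-1] - c[-1]) / dz**2  # zero-gradient outlet
    dq = k * (qmax * Keq * c / (1 + Keq * c) - q)
    return np.concatenate([conv + disp - F * dq, dq])

sol = solve_ivp(rhs, (0.0, 1500.0), np.zeros(2 * n), method="BDF",
                t_eval=np.linspace(0.0, 1500.0, 301))
outlet = sol.y[n - 1]                            # breakthrough curve at z = L
i_bt = int(np.argmax(outlet > 0.05 * c_feed))
print("5%% breakthrough predicted at t = %.0f s" % sol.t[i_bt])
```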

References

Frandsen, Jesper, Jan Michael Breuer, Eric Von Lieres, Johannes Schmölder, Jakob K. Huusom, Krist V. Gernaey, and Jens Abildskov. 2024. “Discontinuous Galerkin Spectral Element Method for Continuous Chromatography: Application to the Lumped Rate Model Without Pores.” In Computer Aided Chemical Engineering, 53:3325–30. Elsevier.

Jones, Mark Nicholas, Mads Stevnsborg, Rasmus Fjordbak Nielsen, Deborah Carberry, Khosrow Bagherpour, Seyed Soheil Mansouri, Steen Larsen, et al. 2022. “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU).” In Computer Aided Chemical Engineering, 49:1525–30. Elsevier.

Kozorog, Mirijam, Simon Caserman, Matic Grom, Filipa A. Vicente, Andrej Pohar, and Blaž Likozar. 2023. “Model-Based Process Optimization for mAb Chromatography.” Separation and Purification Technology 305 (January): 122528.

Kumar, Vijesh, and Abraham M. Lenhoff. 2020. “Mechanistic Modeling of Preparative Column Chromatography for Biotherapeutics.” Annual Review of Chemical and Biomolecular Engineering 11 (1): 235–55.



Model Based Flowsheet Studies on Cement Clinker Production Processes

Georgios Melitos1,2, Bart de Groot1, Fabrizio Bezzo2

1Siemens Industry Software Limited, 26-28 Hammersmith Grove, W6 7HA London, United Kingdom; 2CAPE-Lab (Computer-Aided Process Engineering Laboratory), Department of Industrial Engineering, University of Padova, 35131 Padova PD, Italy

The cement value chain is responsible for 7-8% of global CO2 emissions [1]. These emissions originate both directly, via chemical reactions (e.g. calcination) taking place in the process, and indirectly, via the process energy demands. Around 90% of these emissions come from the production of clinker, the main constituent of cement [1]. Clinker production comprises several high-temperature and carbon-intensive processes, which occur in the pyroprocessing section of a cement plant. The chemical and physical phenomena occurring in such processes are rather complex and, to this day, these processes have mostly been examined and modelled in the literature as standalone unit operations [2-4]. As a result, there is a lack of holistic model-based approaches to flowsheet simulations of cement plants in the literature.

This paper presents first-principles mathematical models for the simulation of the pyro-process section of a cement production plant; more specifically the preheating cyclones, the calciner, the rotary kiln and the grate cooler. These mathematical models are then combined in an integrated flowsheet model for the production of clinker. The models incorporate the major heat and mass transport phenomena, reaction kinetics and thermodynamic property estimation models. These mathematical formulations have been implemented in the gPROMS® Advanced Process Modelling environment and solved for various reactor geometries and operating conditions.

The final flowsheet is validated against published data, demonstrating the ability to accurately predict operating temperatures, degree of calcination, gas and solids compositions, fuel consumption and overall CO2 emissions. The utilization of several types of alternative fuels is also investigated, to evaluate the potential for avoiding CO2 emissions by replacing part of the fossil-based coal fuel (used as a reference case). Trade-offs between different process KPIs (net energy consumption, conversion efficiency, CO2 emissions) are identified and evaluated for each fuel utilization scenario.
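
As a pointer to the kind of sub-model such a flowsheet contains, the sketch below integrates the degree of calcination (CaCO3 to CaO and CO2) along the solids residence time with an assumed first-order Arrhenius rate; the kinetic parameters and the isothermal assumption are purely illustrative and are not taken from the validated gPROMS models.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314
A, Ea = 1.0e8, 1.8e5        # pre-exponential [1/s] and activation energy [J/mol] (assumed)

def calcination(t, X, T):
    # dX/dt for degree of calcination X at gas temperature T
    return A * np.exp(-Ea / (R * T)) * (1.0 - X)

for T in (1100.0, 1150.0, 1200.0):               # calciner temperatures [K]
    sol = solve_ivp(calcination, (0.0, 5.0), [0.0], args=(T,), max_step=0.05)
    print(f"T = {T:.0f} K -> calcination degree after 5 s: {sol.y[0, -1]:.2f}")
```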

REFERENCES

[1] Monteiro, Paulo JM, Sabbie A. Miller, and Arpad Horvath. "Towards sustainable concrete." Nature materials 16.7 (2017): 698-699.

[2] Iliuta, I., Dam-Johansen, K., & Jensen, L. S. (2002). Mathematical modeling of an in-line low-NOx calciner. Chemical engineering science, 57(5), 805-820.

[3] Pieper, C., Liedmann, B., Wirtz, S., Scherer, V., Bodendiek, N., & Schaefer, S. (2020). Interaction of the combustion of refuse derived fuel with the clinker bed in rotary cement kilns: A numerical study. Fuel, 266, 117048.

[4] Cui, Z., Shao, W., Chen, Z., & Cheng, L. (2017). Mathematical model and numerical solutions for the coupled gas–solid heat transfer process in moving packed beds. Applied energy, 206, 1297-1308.



A Social Life Cycle Assessment for Sustainable Pharmaceutical Supply Chains

Inês Duarte, Bruna Mota, Andreia Santos, Tânia Pinto-Varela, Ana Paula Barbosa-Povoa

Centre for Management Studies of IST (CEG-IST), University of Lisbon, Portugal

The increasing pressure from governments, media, and consumers is driving companies to adopt sustainable practices by reducing their environmental and social impacts. While the economic dimension of sustainable supply chain management is always considered, and the environmental one has been thoroughly addressed, the social dimension remains underdeveloped (Barbosa-Póvoa et al., 2018) despite growing attention to social sustainability issues in recent years (Duarte et al., 2022). This imbalance is particularly concerning in the healthcare sector, especially within the pharmaceutical industry, given the significant impact of pharmaceutical products on public health and well-being. On the other hand, while vital to society, these supply chains incur social risks throughout their entire length, from primary production activities to the manufacturing of the final product and its distribution. Addressing these concerns requires a comprehensive framework that captures the social impacts of every stage of the pharmaceutical supply chain.

Social LCA is a well-established approach to assessing the social performance of supply chains by identifying both the positive and negative social impacts linked to a system's life cycle. By adopting a four-step process as outlined in the ISO 14040 standard (ISO, 2006), Social LCA enables a thorough evaluation of the social sustainability of supply chain activities. This approach allows for the identification and mitigation of key social risks, thus enabling more informed decision-making and promoting sustainable development goals. Hence, in this work, a social life cycle assessment framework is developed and integrated into the pharmaceutical supply chain design and planning model of Duarte et al. (2022), a multi-objective mixed integer linear programming model. The economic objective is measured through the maximization of the Net Present Value, while the social objective maximizes equity in access through a Disability Adjusted Life Years (DALY) metric. The social life cycle assessment will allow a broader social assessment of the whole supply chain activities by evaluating social risks and generating actionable insights for minimizing the most significant social risks within the pharmaceutical supply chain.

A case study based on a global vaccine supply chain is conducted where the main social hotspots are identified, as well as trade-offs between the economic and accessibility objectives. Through this analysis, informed recommendations are developed to mitigate potential social impacts associated with the supply chain under study.

The integration of social LCA into a pharmaceutical supply chain design and planning optimization model constitutes the main contribution of this work, providing a practical tool for decision-makers to enhance the overall sustainability of their operations and address the complex social challenges of global pharmaceutical supply chains.

Barbosa-Póvoa, A. P., da Silva, C., & Carvalho, A. (2018). Opportunities and challenges in sustainable supply chain: An operations research perspective. European Journal of Operational Research, 268(2), 399–431. https://doi.org/10.1016/j.ejor.2017.10.036

Duarte, I., Mota, B., Pinto-Varela, T., & Barbosa-Póvoa, A. P. (2022). Pharmaceutical industry supply chains: How to sustainably improve access to vaccines? Chemical Engineering Research and Design, 182, 324–341. https://doi.org/10.1016/j.cherd.2022.04.001

ISO. (2006). ISO 14040:2006 Environmental management - Life cycle assessment - Principles and framework. Geneva, Switzerland: International Organization for Standardization.



Quantum Computing for Synthetic Bioprocess Data Generation and Time-Series Forecasting

Shawn Gibford1,2, Mohammed Reza Boskabadi2, Seyed Soheil Mansouri1,2

1Sqale; 2Denmark Technical University

Data scarcity in bioprocess engineering, particularly for single-cell organism cultivation in pilot-scale photobioreactors (PBRs), poses significant challenges for accurate model development and process optimization. This issue is especially pronounced in pilot-scale operations (e.g., 20L PBRs), where data acquisition is infrequent and costly. The nonlinear nature of these processes, coupled with various non-idealities, creates a substantial gap between lab-scale and pilot-scale operations, hindering the development of accurate mechanistic models and data-driven approaches.

To address these challenges, we propose a novel approach leveraging quantum computing and machine learning. Specifically, we employ a quantum Generative Adversarial Network (qGAN) to generate synthetic bioprocess time-series data, with a focus on quality indicator variables like Optical Density (OD) and Dissolved Oxygen (DO), key metrics for Dry Biomass estimation. The quantum approach offers potential advantages over classical methods, including better generalization capabilities and faster model training using tensor networks.

Various network and quantum circuit architectures were tested to capture the statistical characteristics of real process data. Our results show high fidelity in synthetic data generation and significant improvement in the performance of forecasting models, such as Long Short-Term Memory (LSTM) networks, when augmented with GAN-generated samples. This approach addresses critical data gaps, enabling better model development and parameter optimization in bioprocess engineering.
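
The sketch below illustrates only the forecasting side of this workflow: an LSTM is trained on real OD/DO windows augmented with synthetic windows. The synthetic series here is a simple random stand-in for the qGAN output, and the architecture, window length and data are illustrative assumptions, not the study's models.

```python
import numpy as np
import torch
from torch import nn

torch.manual_seed(0)
window, n_feat = 24, 2                           # 24 past samples of (OD, DO)

def make_windows(series):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0:1]                     # next-step optical density
    return torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

t = np.linspace(0, 10, 300)
real = np.column_stack([np.tanh(t / 4), 1 - 0.5 * np.tanh(t / 4)])          # scarce real data
synthetic = real + 0.05 * np.random.default_rng(0).normal(size=real.shape)  # qGAN stand-in

Xr, yr = make_windows(real)
Xs, ys = make_windows(synthetic)
X, y = torch.cat([Xr, Xs]), torch.cat([yr, ys])  # augmented training set

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # predict from the last hidden state

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("training MSE on augmented set:", float(loss))
```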

The success in generating high-quality synthetic data opens new avenues for bioprocess optimization and scale-up. By addressing the critical issue of data scarcity, this method enables the development of more accurate virtual twins and robust optimization strategies. Furthermore, the ability to continuously update models with newly acquired online data suggests a pathway towards adaptive, real-time process control.

This work not only demonstrates the potential of quantum machine learning in bioprocess engineering but also provides a framework for addressing similar data scarcity issues in other complex scientific domains. Future research will focus on refining the qGAN architectures, exploring integration with real-time sensor data, and extending the approach to other bioprocess systems and scale-up scenarios.

References:

Orlandi, F.; Barbierato, E.; Gatti, A. Enhancing Financial Time Series Prediction with Quantum-Enhanced Synthetic Data Generation: A Case Study on the S&P 500 Using a Quantum Wasserstein Generative Adversarial Network Approach with a Gradient Penalty. Electronics 2024, 13, 2158. https://doi.org/10.3390/electronics13112158



Optimising Crop Schedules and Environmental Impact in Climate-Controlled Greenhouses: A Hydroponics vs. Soil-Based Food Production Case Study

Sarah Namany, Farhat Mahmoud, Tareq Al-Ansari

Hamad bin Khalifa University, Qatar

Optimising greenhouse operations in arid regions is essential for sustainable agriculture due to limited water resources and high energy demands for climate control. This paper proposes a multi-objective optimisation framework aimed at minimising both the operational costs and the environmental emissions of a climate-controlled greenhouse. The framework schedules the cultivation of three different crops, namely tomato, cucumber, and bell pepper, throughout the year. These crops are selected for their varying growth conditions, which induce variability in energy and water inputs, providing a comprehensive assessment of the optimisation model. The model integrates factors such as temperature, humidity, light intensity, and irrigation requirements specific to each crop. It is solved using a genetic algorithm combined with Pareto front analysis to address the multi-objective nature effectively. This approach facilitates the identification of optimal trade-offs between cost and emissions, offering a set of efficient solutions for decision-makers. Applied to a greenhouse in an arid region, the model evaluates two scenarios: a hydroponic system and a conventional soil-based system. Results of the study indicate that the multi-objective optimisation effectively reduces operational costs and environmental emissions while fulfilling crop demand. The hydroponic scenario demonstrates higher water-use efficiency and allows for precise nutrient management, resulting in a lower environmental impact compared to the conventional soil system. Moreover, the optimised scheduling balances energy consumption for climate control across different crop requirements, enhancing overall sustainability. This study underscores the potential of advanced optimisation techniques in enhancing the efficiency and sustainability of greenhouse agriculture in challenging environments.
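
A minimal sketch of the Pareto-front step of such a framework is shown below: candidate yearly crop schedules (randomly generated here as stand-ins for genetic-algorithm individuals) are scored on cost and emissions, and the non-dominated set is extracted. The per-crop cost and emission coefficients are assumed values.

```python
import numpy as np

rng = np.random.default_rng(42)
crops = ["tomato", "cucumber", "bell pepper"]
cost_per_month = {"tomato": 1.0, "cucumber": 0.8, "bell pepper": 1.2}   # assumed
co2_per_month = {"tomato": 0.9, "cucumber": 1.1, "bell pepper": 0.7}    # assumed

schedules = rng.integers(0, len(crops), size=(200, 12))   # crop index for each month
cost = np.array([[cost_per_month[crops[c]] for c in s] for s in schedules]).sum(axis=1)
emis = np.array([[co2_per_month[crops[c]] for c in s] for s in schedules]).sum(axis=1)

def pareto_mask(objs):
    """True for points not dominated by any other point (both objectives minimised)."""
    mask = np.ones(len(objs), dtype=bool)
    for i in range(len(objs)):
        dominates_i = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

front = pareto_mask(np.column_stack([cost, emis]))
print(f"{front.sum()} non-dominated schedules out of {len(schedules)}")
```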



Technological Trends towards Sustainable and Circular Process Design

MAURICIO SALES-CRUZ, TERESA LOPEZ-ARENAS

Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa, Mexico

Current trends in technology are being directed toward the enhancement of teaching methods and the applicability of engineering concepts to industry, especially in the areas of sustainability and circular process design. These shifts signal a transformation in the education of chemical and biological engineering students, who are being equipped with emerging skills through practical, digital-focused approaches that align with evolving industry needs and global sustainability objectives.

Within this educational framework, significant focus is placed on computational modeling and simulation tools, sustainable process design and the circular economy, which are recognized as essential in preparing students to implement efficient and environmentally friendly processes. For instance:

  • The circular economy concept is introduced, where waste is eliminated by redesigning production systems to enhance or maintain profitability. This model emphasizes product longevity, recycling, reuse, and the valorization of waste.
  • Process integration (the biorefineries concept) is highlighted as a complex challenge requiring advanced techniques in separation, catalysis, and biotechnology, integrating both chemical and biological engineering disciplines.
  • Modeling and simulation tools are essential in engineering education, enabling students to analyze and optimize complex processes without incurring the costs or time associated with experimental setups.
  • The use of programming languages (such as MATLAB or COMSOL), equation-based process simulators (such as gPROMS), and modular process simulators (such as ASPEN or SuperPro Designer) is strongly encouraged.

From a pedagogical viewpoint, primary educational trends for knowledge transfer and meaningful learning include:

  1. Problem-Based Learning (PBL) approaches are promoted, using practical industry-related problems to improve students' decision-making skills and knowledge application.
  2. Virtual Labs offer students remote or simulated access to complex processes, including immersive experiences in industrial plants and laboratory equipment.
  3. Integration of Industry 4.0 and Process Automation tools facilitates the analysis of massive data (Big Data) and introduces technologies such as artificial intelligence (AI).
  4. Interdisciplinary and Collaborative Learning fosters integration across disciplines such as biology, chemistry, materials engineering, computer science, and economics.
  5. Blended Learning Models combine traditional teaching methods with digital tools, with online courses, e-learning platforms, and multimedia resources enhancing in-person classes.
  6. Continuing Education and Micro-credentials are encouraged as technologies and approaches evolve rapidly, with short, specialized courses often offered through online platforms.

This paper critically examines these educational trends, emphasizing the shift toward practical and digital approaches that align with changing industry demands and sustainability goals. Additionally, student-led case studies on organic waste revalorization will be included, demonstrating the quantification of environmental impacts, assessments of economic viability in terms of investment and operational costs, and evaluations of innovative solutions grounded in circular economy principles.



From experiment design to data-driven modeling of powder compaction process

Rene Brands1, Vikas Kumar Mishra2, Jens Bartsch1, Mohammad Al Khatib2, Markus Thommes1, Naim Bajcinca2

1RPTU Kaiserslautern, Germany; 2TU Dortmund, Germany

Tableting is a dry granulation process for compacting powder blends into tablets. In this process, a blend of active pharmaceutical ingredients (APIs) and excipients is fed into the hopper of a rotary tablet press via feeders. Inside the tablet press, rotating feed frame paddle wheels fill powder into dies, with the tablet mass adjusted by the lower punch position during die filling. Pre-compression rolls press air out of the die, while main compression rolls apply the force necessary for compacting the powder into tablets. In this paper, process variables such as feeder screw speeds, feed frame impeller speed, lower punch position during die filling, and punch distance during main compression have been systematically varied. The corresponding responses, including pre-compression force, ejection force, and tablet porosity, have been evaluated to optimize the tableting process. After implementing an OPC UA interface, process variables can be monitored in real time. To enable in-line monitoring of tablet porosity, a novel UV/Vis fiber-optic probe has been integrated into the rotary tablet press. To further analyze the overall process, a data-driven modeling approach is adopted. Data-driven modeling is a valuable alternative for modeling real-world processes where, for instance, first-principles modeling is difficult or infeasible. Due to the complex nature of the process, several model classes need to be explored. To begin with, linear autoregressive models with exogenous inputs (ARX) have been considered. Thereafter, nonlinear autoregressive models with exogenous inputs (NARX) have been considered. Finally, several experiments have been designed to further validate and test the effectiveness of the developed models in real-time scenarios.
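
As a minimal illustration of the ARX class mentioned above, the sketch below fits a single-output second-order ARX model, y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1) + b2*u(k-2) + e(k), by least squares. The input/output signals are synthetic stand-ins; in the application, u could for instance be a punch distance and y the pre-compression force.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.normal(size=N)                                    # excitation signal
y = np.zeros(N)
for k in range(2, N):                                     # "true" plant (assumed)
    y[k] = 0.7 * y[k-1] - 0.1 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2] + 0.02 * rng.normal()

# regressor matrix and target for least-squares identification
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))

# one-step-ahead prediction error
rmse = np.sqrt(np.mean((Phi @ theta - target) ** 2))
print("one-step RMSE:", round(float(rmse), 4))
```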



Taking into account social aspects for the development of industrial ecology

Maud Verneuil, Sydney Thomas, Marianne Boix

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Industrial ecology, in the context of decarbonization, appears to be an important and significant way to reduce carbon dioxide emissions. Eco-industrial parks are also real applications that can help to modify socio-ecological landscapes at the scale of a territory.

In the context of industrial ecology, optimization models make it possible to implement synergies according to economic and environmental criteria. Although numerous studies have proposed several criteria, such as CO2 emissions, Net Present Value or other economic indicators, to date few social indicators have been taken into account in multi-criteria models. Job creation is often used as a social indicator in this type of analysis. However, the social nature of this indicator is debatable.

The first aim of the present work is to question the relevance of job creation as a social indicator with a case study. Afterward, we will evaluate the need to measure the social impact of industrial ecology initiatives and query the meaning and the added value of social indicators in this context.

The case study concerns the development of offshore wind energy expertise in the port of Port-La-Nouvelle, with the port of Sète as a rear base. The aim is to assess the capacity of the port of Sète to host component manufacturing and anchor system storage activities, by evaluating the economic, environmental and social impacts of this approach. We will then highlight the criteria chosen and assess their relevance and limitations, particularly with regard to the social aspect.

The second step is to define the needs and challenges of an industrial and territorial ecology approach. What are the key success factors? In attempting to answer this question, it became clear that an eco-industrial park cannot survive without a climate of trust and cooperation (Diemer & Rubio, 2016). The complexity of this ecosystem, and the interdependencies between industrialists at the micro scale, the park at the meso scale and its environment at the macro scale, make connection and relationship-building the decisive factor.

Thirdly, we will examine the real added value of social indicators for this relational dimension, in particular by studying the way in which social indicators are implemented. Indeed, beyond the indicator itself, the process chosen for its elaboration has a real influence on the indicator, as well as on the ability of users to appropriate it. We therefore need to consider which process seems most effective in enabling the use of social indicators to provide a new perspective on the context of an industrial and territorial ecology approach.

Finally, we will highlight the limits of metrics based on social indicators, and question their ability to capture a complex, multidimensional social environment. We will also explore the possibility of using other concepts and tools to account for social reality, and assess their relevance to industrial and territorial ecology.



Life cycle impacts characterization of carbon capture technologies for their integration in eco-industrial parks

Agathe Gabrion, Sydney Thomas, Marianne Boix, Stephane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Human activities since the pre-industrial era have been recognized as responsible for climate change. This influence on the climate is primarily driven by the combustion of fossil fuels. The burning of these fuels releases significant quantities of carbon dioxide (CO2) and other greenhouse gases into the atmosphere, contributing to the greenhouse effect.

Industrial activities are a major factor in climate change, given the amount of greenhouse gases released into the Earth’s atmosphere from fossil fuel burning and from the energy required for industrial processes. In an attempt to reduce the environmental impact of the industry on climate change, many methods are studied and considered.

This study focuses on one of these technologies: carbon capture. Carbon capture refers to the process of trapping CO2 molecules after the combustion of fossil fuels. The carbon is then used or stored in order to prevent it from reaching the atmosphere. This whole process is referred to as Carbon Capture, Utilization and Storage (CCUS). Carbon capture comprises multiple technologies. This study focuses only on the amine-based absorption capture method, because it represents 90% of the operational market. It does not evaluate the utilization and storage parts.

In this study, the carbon capture process is seen as part of a larger project aimed at reducing the CO2 emissions of industry, namely an Eco-Industrial Park (EIP). Indeed, the process is studied in the context of an EIP in order to determine whether setting it up is more or less valuable, in terms of ecological impact, than the current situation of releasing the greenhouse gases into the atmosphere. The results will then lead to studying the integration of alternative carbon capture methods into the EIP.

To properly conduct this study, it was necessary to consider various types of ecological impact. While carbon absorption using an amine solvent reduces the amount of CO2 released into the atmosphere, the degradation associated with amine solvents must also be taken into account. Therefore, several different criteria had to be considered in order to compare the ecological impact of carbon capture with that of releasing industry-produced greenhouse gases. The objective is to prevent the transfer of pollution from greenhouse gases to other forms of environmental contamination. To do so, the Life Cycle Assessment (LCA) method was chosen to assess the environmental impacts of both scenarios.

Using the SimaPro© software to conduct the LCA, this study showed that treating the gas stream exiting an industrial site offers environmental advantages compared to its direct release into the atmosphere. Within the framework of an Eco-Industrial Park (EIP), the implementation of a CO2 absorption process could contribute to mitigating climate change impacts. However, it is important to consider that other factors, such as ecotoxicity and resource utilization, may become more significant when the CO2 absorption process is incorporated into the EIP.



Dynamic simulation and life cycle assessment of energy storage systems connecting variable renewable sources with regional energy demand

Ayumi Yamaki, Shoma Fujii, Yuichiro Kanematsu, Yasunori Kikuchi

The University of Tokyo, Japan

Increasing reliance on variable renewable energy (VRE) is crucial to achieving a sustainable and carbon-neutral energy system. However, the inherent intermittency of VRE creates challenges in ensuring a reliable power supply that meets fluctuating electricity demand. Energy storage systems are pivotal in addressing this issue by storing surplus energy and supplying it when needed. This study explores the applicability of different energy storage technologies—batteries, hydrogen (H2) storage, and thermal energy storage (TES)—to control electricity variability from renewable energy sources, focusing on electricity demand and life cycle impacts.
This research aims to evaluate the performance and environmental impacts of the energy storage system when integrated with wind power. A model of an energy storage system connected to wind energy was constructed based on the existing model (Yamaki et al., 2024), and the annual energy flow simulation was conducted. The model assumes that all generated wind energy is stored and subsequently used to supply electricity to consumers. The energy flow was calculated hourly from 0:00 on January 1st to 24:00 on December 31st based on the model made by Yamaki et al. (2023). The amounts of energy storage and VRE installation were set, and then the maximum amount of power to be sold from the energy storage system was estimated. In the simulation, the stored energy was calculated hourly from the charge of VRE-derived power/heat and the discharge of power to be sold.
Life cycle assessment (LCA) was employed to quantify the environmental impacts of each storage technology from cradle to grave, considering both the energy storage system infrastructure and operational processes for various wind energy and energy storage scales. This study evaluated GHG emissions and abiotic resource depletion as environmental impacts.
The amount of power sold was calculated by energy flow simulation. The simulation results indicate that the amount of power sold increases as wind energy generation and storage capacity rise. However, when storage capacities are over-dimensioned, the stored energy diminishes due to battery self-discharge, H2 leakage, or thermal losses in TES. This loss of stored energy leads to a reduction in the power sold. The environmental impacts of each energy storage system depended on the specific storage type and capacity. Batteries, H2 storage, and TES exhibited different trade-offs regarding GHG emissions and abiotic resource depletion.
This study highlights the importance of integrating dynamic simulations with LCA to provide a holistic assessment of energy storage systems. By quantifying both the energy supply capacity and the environmental impacts, this research offers valuable insights for designing energy storage solutions that enhance the viability of VRE integration while minimizing environmental impacts. The findings contribute to developing more resilient and sustainable energy storage systems that are adaptable to regional energy supply conditions.
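
A minimal sketch of the hourly energy-flow bookkeeping described above is given below: generated wind energy is charged into a storage with charging/discharging losses and self-discharge, and a contracted hourly amount is sold whenever the stored energy allows it. The capacities, efficiencies and the wind profile are illustrative assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 8760
wind = np.clip(rng.normal(40.0, 25.0, hours), 0.0, None)   # MWh generated per hour (assumed)

cap = 2000.0             # storage capacity [MWh]
eta_in, eta_out = 0.9, 0.9
self_discharge = 0.001   # fraction of stored energy lost per hour (battery-like)
sell = 30.0              # contracted power to be sold per hour [MWh]

soc, sold, curtailed = 0.0, 0.0, 0.0
for h in range(hours):
    soc *= (1.0 - self_discharge)                 # hourly storage loss
    charge = min(wind[h] * eta_in, cap - soc)     # charge what fits into the storage
    curtailed += wind[h] - charge / eta_in
    soc += charge
    deliver = min(sell, soc * eta_out)            # discharge only if enough is stored
    soc -= deliver / eta_out
    sold += deliver

print(f"energy sold: {sold:.0f} MWh, curtailed: {curtailed:.0f} MWh, final SOC: {soc:.0f} MWh")
```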

Yamaki, A., et al.; Life cycle greenhouse gas emissions of cogeneration energy hubs at Japanese paper mills with thermal energy storage, Energy, 270, 126886 (2023)
Yamaki, A., et al.; Comparative Life Cycle Assessment of Energy Storage Systems for Connecting Large-Scale Wind Energy to the Grid, J. Chem. Eng. Jpn., 57 (2024)



Optimisation of carbon capture utilisation and storage supply chains under carbon trading and taxation

Hourissa Soleymani Babadi, Lazaros G. Papageorgiou

The Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, University College London (UCL), Torrington Place, London WC1E 7JE, UK

To mitigate climate change, and in particular the rise of CO2 levels in the atmosphere, ambitious emissions targets have been set by political institutions such as the European Union, which aims to reduce 2050 emissions by 80% versus 1990 levels (Leonzio et al., 2019). One proposed solution to lower CO2 levels in the atmosphere is Carbon Capture, Utilisation and Storage (CCUS). However, studies in the literature to date have largely focused on utilisation and storage separately, and have neither considered the effects of CO2 taxation nor systematically studied the optimality criteria of the CO2 conversion products (Leonzio et al., 2019; Zhang et al., 2017; Zhang et al., 2020). A systematic study of a realistically large industrial supply chain that considers the aforementioned aspects jointly is necessary to inform political and industrial decision-making.

In this work, a Mixed Integer Linear Programming (MILP) framework for a supply chain network was developed to incorporate storage, utilisation, trading, and taxation as strategies for managing CO2 emissions. Possible CO2 utilisation products were ranked using Multi-Criteria Decision Analysis (MCDA) techniques, and three of the top 10 products were considered to serve as CO2-based products in this supply chain network. The model included several power plants in one of the European countries with the highest CO2 emissions. The goal of the proposed model is to minimise the total cost of the supply chain, taking into account the process and investment decisions. Furthermore, incorporating multi-objective optimisation that simultaneously considers CO2 reduction and supply chain costs can offer both environmental and economic benefits. Therefore, the ε-constraint multi-objective optimisation method was implemented as a solution procedure to minimise the total cost while maximising the CO2 reduction. The game-theoretic Nash approach was applied to determine a fair trade-off between the two objectives. The investigated case study demonstrates the importance of including financial carbon management through tax and trade, in addition to physical CO2 capture, storage, and utilisation.
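
To make the ε-constraint procedure concrete, the toy sweep below minimises the cost of a small CO2 allocation while requiring at least ε tonnes of CO2 reduction, then varies ε to trace a cost versus CO2-reduction Pareto front. The options, costs and capacities are assumed values, not the full supply chain MILP of the study.

```python
import pyomo.environ as pyo

cost = {"storage": 40.0, "methanol": 25.0, "urea": 30.0}     # assumed cost per t CO2 handled
cap = {"storage": 500.0, "methanol": 200.0, "urea": 150.0}   # assumed max t CO2 per option

def build(eps):
    m = pyo.ConcreteModel()
    m.x = pyo.Var(list(cost), bounds=lambda m, i: (0, cap[i]))
    m.reduction = pyo.Constraint(expr=sum(m.x[i] for i in cost) >= eps)   # epsilon constraint
    m.obj = pyo.Objective(expr=sum(cost[i] * m.x[i] for i in cost))       # minimise total cost
    return m

solver = pyo.SolverFactory("glpk")     # any LP solver
for eps in (200.0, 400.0, 600.0, 800.0):
    m = build(eps)
    solver.solve(m)
    print(f"eps = {eps:5.0f} t CO2  ->  cost = {pyo.value(m.obj):8.0f}")
```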

References

Leonzio, G., Foscolo, P. U., & Zondervan, E. (2019). An outlook towards 2030: optimization and design of a CCUS supply chain in Germany. Computers & Chemical Engineering, 125, 499-513.

Zhang, D., Alhorr, Y., Elsarrag, E., Marafia, A. H., Lettieri, P., & Papageorgiou, L. G. (2017). Fair design of CCS infrastructure for power plants in Qatar under carbon trading scheme. International Journal of Greenhouse Gas Control, 56, 43-54.

Zhang, S., Zhuang, Y., Liu, L., Zhang, L., & Du, J. (2020). Optimization-based approach for CO2 utilization in carbon capture, utilization and storage supply chain. Computers & Chemical Engineering, 139, 106885.



Impact of energy sources on Global Warming Potential of hydrogen production: Case study of Uruguay

Vitória Olave de Freitas1, José Pineda1, Valeria Larnaudie2, Mariana Corengia3

1Unidad Tecnológica de Energias Renovables, Universidad Tecnologica del Uruguay; 2Depto. de Bioingeniería, Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar; 3Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar

In recent years, several countries have developed strategies to advance green hydrogen as a feedstock or energy carrier. Hydrogen can contribute to the decarbonization of various sectors, with its use in the transport and industry sectors being of particular interest. In 2022, Uruguay launched its green hydrogen roadmap, outlining its plan to promote this market. The country has the potential to become a producer of green hydrogen derivatives for export due to: the availability and complementarity of renewable energies (solar and wind); an electricity matrix with a high share of renewable sources; the availability of water; and favorable logistics.

The energy source for water electrolysis is a key factor in both the final cost and the environmental impact of hydrogen production. In this context, this work performs the life cycle assessment (LCA) of a hydrogen production process by water electrolysis, combining different renewable energy sources available in Uruguay. The system evaluated includes a 50 MW electrolyzer and the installation of 150 MW of new power sources. Three configurations for power production were analyzed: (1) a photovoltaic farm, (2) a wind farm, and (3) a hybrid farm (solar and wind). In all cases, connection to the national power grid is assumed to ensure a reliable and uninterrupted energy supply for plant operation.

Different scenarios for the grid energy mix are analyzed to assess the environmental impact of the hydrogen produced. For the current case, the average generation over the past five years is considered, while for future projections variations in the shares of fossil and renewable energy sources were evaluated.

To determine the optimal combination of renewable energy sources for the hybrid generation scenario, the complementarity of solar and wind resources was analyzed using the standard deviation, a metric widely used for this purpose. This study was developed using data from real plants in Uruguay. Seeking the most stable generation, the optimal mix of power generation capacity is 54% solar and 46% wind.
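
A minimal sketch of this complementarity analysis is shown below: for hourly capacity-factor series of a solar and a wind plant (random stand-ins here, not the real plant data), the solar capacity share that minimises the standard deviation of the combined normalised generation is found by a simple sweep.

```python
import numpy as np

rng = np.random.default_rng(7)
hours = 8760
t = np.arange(hours)
# assumed capacity-factor series (stand-ins for measured plant data)
solar = np.clip(np.sin(2 * np.pi * ((t % 24) - 6) / 24), 0, None) * rng.uniform(0.6, 1.0, hours)
wind = np.clip(rng.normal(0.35, 0.2, hours), 0, 1)

alphas = np.linspace(0, 1, 101)                      # solar capacity share
stds = [np.std(a * solar + (1 - a) * wind) for a in alphas]
best = alphas[int(np.argmin(stds))]
print(f"capacity share minimising variability: {best:.0%} solar, {1 - best:.0%} wind")
```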

The environmental impact of the different case studies was evaluated through an LCA using OpenLCA software and the Ecoinvent database. For this analysis, 1 kg of produced hydrogen was considered the functional unit. The system boundaries included power generation and the electrolysis system used for hydrogen production. Among the impact categories that can be analyzed by LCA (human health, environmental, resource depletion, etc.), this work focused on the global warming potential (GWP). As hydrogen is promoted as an alternative fuel or feedstock that may diminish CO2 emissions, its GWP is a particularly relevant metric.

Implementing hybrid solar and wind energy systems increases the stability of the energy produced from renewable sources, thereby reducing the amount of energy taken from the grid. These hybrid plants therefore have the potential to reduce CO2 emissions per kg of hydrogen produced. Still, this benefit diminishes as the electric grid itself has higher contributions of renewable energy.



Impact of the share of renewable energy integration in the selection of sustainable natural gas production pathways

Meire Ellen Gorete Ribeiro Domingos, Daniel Florez-Orrego, Oktay Boztas, Soline Corre, François Maréchal

Ecole Polytechnique Federale de Lausanne, Switzerland

Sustainable natural gas (SNG) can be produced via different routes, such as anaerobic digestion and thermal gasification. Other technologies, such as CO2 injection, storage systems (e.g., CH4, CO2) and reversible solid oxide cells (rSOC), can also be integrated in order to handle the seasonal fluctuations of renewable energy supply and market volatility. In this work, the impact of the seasonal excess and deficit of electricity generation, and of the renewable fraction thereof, on the sustainability metrics of different scenarios for the energy transition in SNG production is evaluated. The analysis considers both the current energy mix and a future energy mix scenario. In the latter, a fully renewable grid is modeled based on generation that takes into account GIS-based land restrictions, geo-spatial wind speed and irradiation data, and the maximum electricity production from renewable sources considering EU-wide low restrictions. Moreover, the electricity demand considers full electrification of the residential and mobility sectors.

The biodigestion process considers a biomethane potential of 300 Nm3 CH4 per t of volatile solids using organic wastes. The upgraded biomethane is marketed, and the CO2-rich stream is sent on for further methane production. The CO2 from the anaerobic digestion unit can be stored at -50 °C and 7 bar (1,155 kg/m3), so that it can later be regasified and fed to a methanation system. The necessary hydrogen is provided by the rSOC system operating at 1 bar, 800 °C, and 81% water conversion. The rSOC system can also be operated in fuel cell mode, consuming methane to produce electricity. The gasification of the digestate from the anaerobic digestion unit uses steam as the gasification agent, and hydrogen coming from the electrolyzer is used to adjust the syngas composition to suit the methanation reaction. The methanation system is based on the TREMP® process, consisting of intercooled catalytic beds to achieve higher conversion.

A mixed integer linear programming method is employed to identify optimal system configurations under different economic scenarios, helping to elucidate the feasibility of the proposed processes as well as the optimal production planning of SNG. As a result, the integration of renewable energy and the combination of different SNG production processes prove to be crucial for strategic planning, enhancing resilience against market volatility and also supporting the decarbonization of the energy sector. Improved handling of intermittent renewable energy allows optimal CO2 and waste management, achieving year-round overall process efficiencies above 55%. This systematic approach enables better decision-making, risk management, and investment planning, informing energy providers about the opportunities and challenges linked to the decarbonization of the energy supply.



Decarbonizing the German Aviation Sector: Assessing the feasibility of E-Fuels and their environmental implications

Pablo Silva Ortiz1, Oualid Bouksila2, Agnes Jocher2

1Universidad Industrial de Santander-UIS, Colombia; 2Technical University of Munich-TUM, Germany

The aviation industry is united in its goal of achieving "net-zero" emissions by mid-century, in accordance with global targets like COP21 and European initiatives such as "Fit for 55" and "ReFuelEU Aviation." However, current advancements and capacities may be insufficient to meet these targets on time. Recognizing the critical need to reduce greenhouse gas (GHG) emissions, the German government and the European Commission strongly advocate measures to lower aviation emissions, which is expected to significantly increase the demand for sustainable aviation fuels, especially synthetic fuels. In this context, import scenarios from North African countries to Germany are under consideration. Hence, the objective of this work is to explore the pathways and the life cycle environmental impacts of e-fuel production and import, focusing on decarbonizing the aviation sector. Through a multi-faceted investigation, this work aims to offer strategic insights into the future of aviation fuel, blending technological advancements with international cooperation for a sustainable aviation industry. Our analysis compares the feasibility of local production in Germany with potential imports from Maghreb countries—Tunisia, Algeria, and Morocco. To establish a comprehensive view, the study forecasts Germany’s aviation fuel demand across three key timelines: the current scenario, 2030, and 2050. These projections account for anticipated advancements in renewable energy, proton exchange membrane (PEM) electrolysis, and direct air capture (DAC) technologies via a prospective life cycle assessment (LCA). A technical concept for power-to-liquid fuel production is presented with the corresponding life cycle inventory, reflecting a realistic consideration of the local conditions, including the effect of water desalination. In parallel, the export potential of the Maghreb countries is evaluated, considering both social and economic dimensions. The environmental impacts of two export pathways—direct e-fuel export and hydrogen export as an intermediate product—are then assessed through cradle-to-gate and cradle-to-grave scenarios, offering a detailed analysis of their respective carbon footprints. Finally, the study determines the qualitative cost implications of each pathway, providing a comparative analysis that identifies the most promising approach for sustainable aviation fuel production. The results, related mainly to global warming potential (GWP) and water consumption potential (WCP), suggest that Algeria, endowed with high capacity factors for photovoltaic (PV) solar and wind systems, achieves the most considerable WCP reductions compared to Germany, ranging from 31.2% to 57.1% in a cradle-to-gate scenario. From a cradle-to-grave perspective, local German PV solar scenarios fail to meet RED II sustainable fuel requirements, whereas most export scenarios achieve GWP reductions exceeding 70%. Algeria shows the best overall reduction, particularly with wind energy (85% currently to 88% by 2050), while Morocco excels with PV solar (70% currently to 75% by 2050). Despite the strong environmental performance of onshore wind, PV solar offers the highest impact reductions and cost advantages, making Morocco’s and Algeria’s PV systems superior to German and North African wind systems.



Solar-Driven Hydrogen Economy Potential in the Greater Middle East: Geographic, Economic, and Environmental Perspectives

Abiha Abbas1, Muhammad Mustafa Tahir2, Jay Liu3, Rofice Dickson1

1Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 11000, FI-00076 Aalto, Finland; 2Department of Chemistry & Chemical Engineering, SBA School of Science and Engineering, Lahore University of Management Sciences (LUMS), Lahore, 54792, Pakistan; 3Department of Chemical Engineering, Pukyong National University, Busan, Republic of Korea

This study employed advanced GIS spatial analysis to assess land suitability for solar-powered hydrogen production across thirty countries in the Greater Middle East (GME) region. Factors such as PVOUT, proximity to water sources and roads, land slope, land use and cover, and restricted/protected areas were evaluated. An analytic hierarchy process (AHP)-based multi-criteria decision-making (MCDM) analysis was used to classify land into different suitability levels.

Techno-economic optimization models were then applied to assess the levelized cost of hydrogen (LCOH), production potential, and the levelized costs of ammonia (LCOA) and methanol (LCOM) for 2024 and 2050 under different scenarios. Sensitivity analysis quantified uncertainties, while cradle-to-grave life cycle analysis (LCA) calculated the CO₂ avoidance potential for highly suitable areas.
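As a point of reference, a generic levelized-cost definition of the kind underlying LCOH, LCOA and LCOM is sketched below in LaTeX; the specific cost items and discounting conventions used in the study may differ.

% Generic levelized-cost form (illustrative; not necessarily the study's exact convention)
\[
\mathrm{LCOH} \;=\; \frac{\mathrm{CAPEX}\cdot\mathrm{CRF} \;+\; \mathrm{OPEX}_{\mathrm{annual}}}{\dot{m}_{\mathrm{H_2,\,annual}}},
\qquad
\mathrm{CRF} \;=\; \frac{r\,(1+r)^{n}}{(1+r)^{n}-1}
\]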

Key findings include:

  1. Water scarcity is a major factor in site selection for hydrogen production. Fifty-seven percent of the region lacks access to water or is over 10 km away from any source, posing a challenge for hydrogen facility placement. A minimum of 1.7 trillion liters of water is needed to meet conservative hydrogen production estimates, and up to 13 trillion liters for optimistic estimates. A reliable water supply chain is crucial to realize this potential.
  2. Around 14% of the land in the region is unsuitable for hydrogen production due to slopes exceeding 5°. In mountainous countries like Tajikistan, Kyrgyzstan, Lebanon, Armenia, and Türkiye, this figure rises to 50%.
  3. Forty percent of the region is unsuitable due to poor road access, highlighting the need for adequate transportation infrastructure. Roads are essential for the construction, operation, and maintenance of hydrogen facilities, as well as for transporting resources and products.
  4. Only 3.8% of the GME region (1,122,696 km²) is classified as highly suitable for solar hydrogen projects. This land could produce 167 Mt/y and 209 Mt/y of hydrogen in 2024 and 2050 under conservative estimates, with an LCOH of 4.7–7.9 $/kg in 2024 and 2.56–4.17 $/kg in 2050. Under optimistic scenarios, production potential could rise to 1,267 Mt/y in 2024 and 1,590 Mt/y in 2050. Saudi Arabia, Sudan, Pakistan, Iran, and Algeria account for over 50% of the region’s hydrogen potential.
  5. Green ammonia production costs in the region range from 0.96–1.38 $/kg in 2024, decreasing to 0.56–0.79 $/kg by 2050. Green methanol costs range from 1.12–1.59 $/kg in 2024, dropping to 0.67–0.93 $/kg by 2050. Egypt and Libya show the lowest production costs.
  6. LCA reveals significant potential for CO₂ emissions avoidance. In 2024, avoided emissions could range from 119–488 t/y/km² (481 Mt/y), increasing to 477–1952 t/y/km² (3,655 Mt/y) in the optimistic case. By 2050, avoided emissions could reach 4,586 Mt/y. Saudi Arabia and Egypt show the highest potential for CO₂ avoidance.

The study provides a multitude of insights, contributing to the global hydrogen dialogue and offering policymakers a roadmap for developing comprehensive strategies to expand the hydrogen economy in the GME region.

 
2:00pm - 3:00pmBrewery visit
Location: On-campus brewery
2:30pm - 3:30pmT1: Modelling and Simulation - Session 2
Location: Zone 3 - Aula D002
Chair: Antonios Armaou
Co-chair: Monika Polanska
 
2:30pm - 2:50pm

Synthesis of Liquid Mixture Separation Networks Using Multi-Material Membranes

Harshit Verma1, Christos T. Maravelias1,2

1Department of Chemical and Biological Engineering, Princeton University, United States; 2Andlinger Center for Energy and Environment, Princeton University, United States

Polymeric membranes for liquid separation are recognized as a promising technology in various industrial separation applications. Two important characteristics of polymeric membranes are selectivity and permeability – higher permeability leads to high recovery, while higher selectivity translates to high purity. However, polymeric membranes exhibit an inherent tradeoff between selectivity and permeability [1]. Therefore, simultaneously achieving high recovery and high purity with a single-stage membrane is often impractical or leads to increased operating and capital costs. To address this limitation, a network with multiple membrane stages must be synthesized.

Multiple potential network configurations, with different stream connections and a distinct number of membrane stages, can be designed for a given separation task. However, the operating and capital costs of these configurations can differ significantly [2]. Therefore, the economic feasibility of membrane separation is heavily influenced by the decisions made during network synthesis. Thus, an optimization-based framework can be employed to synthesize globally optimal membrane networks. Additionally, a membrane unit model is a critical element of a membrane network optimization framework. However, nonidealities present in liquid mixtures pose difficulties in describing membrane permeation. As a result, existing computationally tractable unit models are valid only for the separation of either binary liquid mixtures or multicomponent ideal gas mixtures.

In this work, we present a novel approach to design globally optimal membrane networks for multicomponent liquid separation. We propose a generalized optimization framework to recover multiple target components from the feed liquid mixture. First, we present a physics-based nonlinear surrogate unit model to describe membrane permeation for multicomponent liquid mixtures. Second, we formulate a highly interconnected superstructure to represent the broad spectrum of potential network configurations. Third, we propose an optimization model to determine the network configuration, along with the operating conditions, that minimizes the total required membrane area. The resulting optimization model is a nonconvex mixed integer nonlinear programming (MINLP) model, which is generally challenging to solve; hence, we introduce solution methods to improve computational efficiency. Finally, through multiple applications, we showcase how the proposed approach can obtain globally optimal solutions.

[1] L.M. Robeson, The upper bound revisited, J. Membr. Sci. 320 (2008) 390–400. https://doi.org/10.1016/j.memsci.2008.04.030.
[2] R. Spillman, Chapter 13 Economics of gas separation membrane processes, in: R.D. Noble, S.A. Stern (Eds.), Membr. Sci. Technol., Elsevier, 1995: pp. 589–667. https://doi.org/10.1016/S0927-5193(06)80015-X.



2:50pm - 3:10pm

Modelling Internal Diffusion of Pb(II) Adsorption onto Reclaimed Mine Water Sludge (RMWS): A Step Towards Circular Economy Applications

Nokuthula Nothando Nchabeleng, Evans Chirwa, Hendrik Gideon Brink

University of Pretoria, South Africa

In the drive towards a circular economy, the reuse of waste materials for pollutant mitigation is a critical area of research. This study investigates the adsorption of lead, an EPA priority pollutant, onto reclaimed mine water sludge (RMWS), a sustainable and low-cost adsorbent from a water desalination plant. The focus was on modeling the internal diffusion mechanisms governing adsorption kinetics, offering a comprehensive model that tracks the diffusion of lead onto the RMWS adsorbent in both space and time. This was done by exploring the influence of the different transport phenomena at play within the system.

The adsorption process typically involves three steps: transport of the adsorbate from the bulk liquid phase to the boundary layer, movement to the adsorbent surface (external mass transfer), and diffusion through the material's pores (internal mass transfer). This analysis builds on the understanding that in liquid–solid adsorption, fluid film diffusion often plays a secondary role compared to intraparticle diffusion, especially in mesoporous adsorbents such as RMWS. Parameter optimisation previously conducted showed that external mass transfer effects become less significant under sufficient agitation.

Traditional kinetic models used for adsorption modelling typically assume that both external and internal mass transfer are negligible, but this is not always justified. Only in the context of intrinsic kinetics, where the system’s behaviour depends solely on the physical or chemical interactions between the adsorbate and adsorbent, can this assumption hold true. In this study, diffusion was examined in depth. It was assumed that the contaminant concentration at the mouth of the adsorbent’s pore is greater than that inside the pore as adsorption takes place; consequently, the rate varies throughout the particle. The derived model is based on a mole balance on the RMWS as the lead is adsorbed, giving insight into the application of a species balance over volume segments in a packed bed.

Incorporating both surface and pore diffusion effects, our model captures the dynamics of Pb(II) diffusion from the bulk solution to the RMWS surface and further into its pore structure. These findings are crucial for the design of large-scale adsorption units, where oversimplified kinetic models can lead to suboptimal system performance. The validated diffusion model highlights RMWS's potential as a circular economy solution for heavy metal removal, promoting resource efficiency and environmental sustainability.
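For readers unfamiliar with such models, a generic intraparticle diffusion balance for a spherical adsorbent particle, combining pore and surface diffusion with the external film resistance neglected under sufficient agitation, takes the following form (notation is generic and not necessarily the authors'):

% Generic pore/surface diffusion balance for a spherical particle (illustrative notation)
\[
\varepsilon_p \frac{\partial c_p}{\partial t} + \rho_p \frac{\partial q}{\partial t}
= \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left( r^{2}\left[ \varepsilon_p D_p \frac{\partial c_p}{\partial r} + \rho_p D_s \frac{\partial q}{\partial r} \right] \right),
\qquad
\left.\frac{\partial c_p}{\partial r}\right|_{r=0}=0, \quad c_p(R,t)=c_{\mathrm{bulk}}(t)
\]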

 
2:30pm - 3:30pmT2: Sustainable Product Development and Process Design - Session 2
Location: Zone 3 - Room E030
Co-chair: Jose Antonio Abarca
 
2:30pm - 2:50pm

Integrating Chemical Recycling into Brownfield Processes: Waste Polyethylene Pyrolysis and Naphtha Steam Cracking

Marc Caballero, Thanyanart Sroisamut, Anton A. Kiss, Ana Somoza-Tornos

TU Delft, The Netherlands

Circularity in the chemical industry is crucial for achieving sustainability, particularly as it transitions from traditional, linear, fossil-based production models to more sustainable, resource-efficient processes. Circular processes aim to minimize waste, optimize resource use, and close the loop by utilizing renewable or waste resources, which is becoming a key strategy to address environmental and economic challenges [1]. Repurposing existing infrastructure to integrate circular processes offers a cost-effective and resource-efficient solution, particularly in the petrochemical sector, where capital outlay and long-term economic viability pose challenges to adopting new technologies [2].

This work presents a systematic approach to process repurposing, focusing on matching circular processes with existing petrochemical infrastructure through superstructure-based optimization.

First, we evaluate the viability of incorporating ethylene derived from waste PE pyrolysis into existing naphtha-based steam cracking operations. To do so, we developed process simulation models in Aspen Plus, followed by a comprehensive techno-economic assessment (TEA) and life cycle assessment (LCA). The findings indicate that the integrated approach offers a competitive ethylene production cost of 0.745 €/kg of ethylene, compared to the business-as-usual cost of 0.747 €/kg [3] and the 0.839 €/kg cost of standalone pyrolysis. A similar trend is observed in the environmental analysis, with a small reduction in emissions to 1.4396 kg CO₂ eq./kg of ethylene for the best integration scenario, compared to the business-as-usual value of 1.4408 kg CO₂ eq./kg, while standalone pyrolysis is more favourable in this metric at 1.2433 kg CO₂ eq./kg.

In conclusion, this study demonstrates the potential for combining chemical recycling with traditional ethylene production methods, highlighting the importance of balancing economic and environmental factors for sustainable chemical processing. Further research should focus on scaling this approach for industrial use and validating the results with real-world data.

References:

[1] Slootweg, J.; One Earth, 2024. Sustainable chemistry: Green, circular, and safe-by-design.

[2] Télessy, K.; Barner, L.; Holz, F.; International Journal of Hydrogen Energy, 2024. Repurposing natural gas pipelines for hydrogen: Limits and options from a case study in Germany.

[3] Spallina, V., et al. Energy Conversion and Management, 2017. Techno-economic assessment of different routes for olefins production through the oxidative coupling of methane (OCM): Advances in benchmark technologies.



2:50pm - 3:10pm

A System-Dynamics Based Approach for Modeling Circular Economy Networks: Application to the Polyethylene Terephthalate (PET) supply chain

Daniel Pert, Ana Torres

Carnegie Mellon University, United States of America

The transition to a circular economy (CE) requires agents in circular supply chain (SC) networks to take a variety of different initiatives, many of which are dynamic in nature. However, there is a lack of generic mathematical models for circular initiatives that incorporate the time dimension, and their combined effects on different agents and on overall SC circularity are not well understood. We use a system dynamics (SD)-based approach to develop a generic framework for dynamic modeling of CE networks and use it to model the supply chain for Polyethylene Terephthalate (PET) plastic packaging, a significant contributor to pollution in landfills and waterways. Novel contributions include generic quantitative models for material quality loss and a model for a consumer that includes both continuous and discrete product reuse.

We propose a prototypical circular SC network by combining dynamic models for five agents: a manufacturer, consumer, material recovery facility (MRF), recycling facility, and the Earth. We use the planetary boundaries framework to quantify the absolute environmental sustainability of the network while accounting for feedback effects between different Earth-system processes. We apply this framework to the case study of the PET SC by considering different scenarios over a 65-year time horizon in the US, including both “slow-down-the-loop” initiatives (i.e., those that extend product use time through demand reduction or reuse) and “close-the-loop” initiatives (i.e., those that reintroduce product to the supply chain through recycling) by the consumer, as well as capacity expansion of the MRF and recycling facilities.
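As an illustration of the stock-and-flow logic such an SD model encodes, the toy Python sketch below integrates a single use stock with a capacity-limited recycling loop by Euler stepping; the first-order structure and all parameter values are assumptions for illustration, not the authors' calibrated model.

# Minimal system-dynamics sketch (Euler integration of stocks and flows) of a
# circular supply chain with an in-use stock and a recycling loop. All numbers
# below are illustrative assumptions.
dt, years = 0.1, 65
steps = int(years / dt)

demand = 10.0          # Mt/y of PET packaging put on the market (assumed)
use_time = 0.5         # y average product use time ("slow-down-the-loop" lever)
recycle_rate = 0.3     # fraction of discards collected for recycling ("close-the-loop")
recycling_cap = 4.0    # Mt/y mechanical recycling capacity (assumed)
yield_mech = 0.8       # quality/mass yield of mechanical recycling (assumed)

in_use, landfill, recycled_total, virgin = 0.0, 0.0, 0.0, demand
for _ in range(steps):
    discards = in_use / use_time                          # outflow of the in-use stock
    to_recycle = min(recycle_rate * discards, recycling_cap)
    recyclate = yield_mech * to_recycle                   # secondary material produced
    virgin = max(demand - recyclate, 0.0)                 # primary PET still required

    in_use += dt * (demand - discards)                    # stock updates (Euler step)
    landfill += dt * (discards - to_recycle)
    recycled_total += dt * recyclate

circularity = recycled_total / (recycled_total + landfill)
print(f"in-use stock: {in_use:.1f} Mt, landfilled: {landfill:.0f} Mt, "
      f"virgin PET still needed: {virgin:.1f} Mt/y, "
      f"share of discards recirculated: {circularity:.2f}")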

We find that given the current recycling infrastructure in the U.S., “slow-down-the-loop” initiatives are more effective than “close-the-loop” initiatives, which require capacity expansion to accommodate the increased recycle rate and an associated time delay. However, combining the two eliminates the need for capacity expansion and leads to the highest circularity. Sensitivity analyses are performed to analyze the effect of consumer behavior on network circularity. As the consumer recycle rate increases, circularity increases until reaching a plateau. This plateau may be due to recycling capacity limitations or quality loss due to the mechanical recycling process; above this plateau, there may be a trade-off between circularity and sustainability. Overall, we conclude that although chemical recycling technologies have the potential to eliminate quality loss and may be promising long-term solutions, such technologies have a significantly higher cost and environmental impact than mechanical recycling and are not currently widespread in the U.S. Thus, in the short term, “slow-down-the-loop” initiatives are more promising solutions for a CE transition.

This material is based upon work supported by the National Science Foundation under Award No. 2339068 (NSF CAREER Award, PI A. I. Torres). Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.



3:10pm - 3:30pm

The Role of Industrial Symbiosis in Plastic Waste Recycling to achieve Circularity

Christine El Khoury1,2, Laureano Jiménez1, Carlos Pozo1, Ana Somoza-Tornos2

1Universitat Rovira i Virgili, Spain; 2Delft University of Technology, Netherlands

The European Green Deal's objectives for circularity and climate neutrality aim for Europe to achieve net-zero emissions by 2050, proving that economic growth can occur without increasing resource consumption. In the context of plastics, this is potentially achievable by coupling mechanical and chemical recycling to uphold circular economy principles: the former allows plastic waste to be reshaped into plastic pellets, while the latter can restore plastic to its original components, maintaining both quality and mass. Additionally, industrial symbiosis enables the sharing of resources between industries to reduce waste and increase resource efficiency. This promotes sustainability and circularity by turning one company’s waste into another’s raw material.

This study explores several scenarios for the end-of-life of two common plastic types, polyethylene and polypropylene. To this end, we model the two supply chains, from raw material extraction all the way to end-of-life, using input-output data gathered from the literature. Most of the scenarios generated assume the two supply chains operate independently, i.e., exploiting their own resources and showing no interactions with the neighboring system. However, we also explore one scenario implementing industrial symbiosis principles, in which by-products are exchanged between the two supply chains so as to decrease raw material extraction and plastic waste. This is particularly appealing when combined with chemical recycling, which can produce a myriad of products, as not all the products generated by this process can be used to produce the original plastic. For these scenarios, we then perform a detailed techno-economic assessment of the different options, while also calculating the degree of circularity they achieve.

Results reveal that the scenario combining mechanical and chemical recycling within each individual supply chain already leads to a decrease in the extraction of fossil-based resources, while increasing circularity compared to the combination of recycling with incineration or landfill. Some of these benefits are further improved in the industrial symbiosis scenario, achieving a 14% decrease in raw material extraction at the expense of a moderate decrease in revenues due to the exchange of by-products between the systems. The remaining scenarios perform worse in all the metrics assessed and should be considered inferior options.

This holistic approach to plastic production demonstrates that combining recycling strategies and industrial symbiosis can reduce resource consumption. Combining recycling technologies while sharing raw materials between industries can be a key driver for addressing the growing concern of plastic waste accumulation, thereby improving resource use, reducing waste formation and creating closed-loop systems that can ultimately reach the EU Green Deal targets.

 
2:30pm - 3:30pmT4: Model Based Optimisation and Advanced Control - Session 2
Location: Zone 3 - Room E032
Co-chair: Flavio Manenti
 
2:30pm - 2:50pm

Physics-based mechanistic modelling and dynamic adaptive multi-objective optimization of chemical reactors for CO2 capture based on enhanced weathering

Lei Xing

University of Surrey, United Kingdom

Enhanced weathering (EW) of minerals has recently been recognised as a promising strategy for gigaton-level large-scale carbon dioxide removal. However, prior to the practical application of EW-based CO2 capture, significant acceleration is necessary by optimising the local environment where solid, liquid, and gas phases interact. This optimisation is crucial as the drive for more efficient CO2 capture technologies continues to escalate, underscored by the increasing need for operational flexibility to adapt to substantial variations in the flow rate and concentration of CO2-rich flue gases. Additionally, integrating renewable energy sources into CO2 capture strategies enhances environmental benefits; however, the inherently intermittent power output due to meteorological and seasonal variations poses a considerable challenge.

The efficacy of chemical reactors in enhancing mass transport and reaction rates in the EW-based CO2 capture process has not yet been thoroughly evaluated. In response, our work over the past few years has conducted detailed mechanistic modelling, validated rigorously against experimental data, focusing on three distinct reactor configurations designed specifically for EW-based CO2 capture: trickle bed reactors, packed bubble column reactors, and stirred slurry reactors. We selected CO2 capture rate, energy consumption, and water consumption as three pivotal performance indicators under a range of operating conditions. Given the complex computational demands of these mechanistic models, we developed advanced machine learning-based models and optimisation approaches, including Response Surface Methodology (RSM), Support Vector Regression (SVR), and hybrid algorithms. These methodologies facilitated the rigorous optimisation of each reactor type to minimise competing objectives effectively, allowing for the generation and comparative analysis of Pareto fronts for the three types of reactors. Our findings illustrate that the mass transport of CO2 into the aqueous phase, considered the rate-limiting step, substantially influences capture performance. Notably, when substituting calcite with forsterite, the process's controlling mechanism transitions from gas/liquid mass transfer to solid dissolution due to forsterite's lower dissolution rate.

Furthermore, our research extended to the dynamic adaptive optimisation of the CO2 capture process within a renewable energy framework, where CO2 emitted from power plant flue gases is converted into bicarbonate and subsequently stored in the ocean. We developed data-driven surrogate dynamic models to accurately predict the CO2 capture rate and energy consumption of a selected reactor. Utilising the multi-objective NSGA-II genetic algorithm, we pre-emptively optimised reactor conditions based on forecasts of inlet flue gas CO2 concentration and available wind energy. This proactive approach aimed to maximise the carbon capture rate while minimising reliance on non-renewable energy sources. The adaptive multi-objective optimisation framework demonstrated a significant improvement, increasing the CO2 capture rate by 16.7% and reducing the use of non-renewable energy by 36.3%.
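A minimal sketch of this kind of surrogate-based multi-objective optimisation, using the open-source pymoo implementation of NSGA-II with placeholder surrogate functions standing in for the data-driven reactor models, is given below; it is illustrative only and is not the study's actual model.

# Illustrative two-objective NSGA-II run (pymoo). The algebraic "surrogates"
# below are toy placeholders for the data-driven reactor models in the abstract.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class CaptureProblem(ElementwiseProblem):
    def __init__(self):
        # decision variables: liquid flow rate [m3/h], gas flow rate [m3/h] (assumed)
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([1.0, 10.0]), xu=np.array([20.0, 200.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        liq, gas = x
        capture_rate = 0.9 * (1 - np.exp(-0.05 * liq)) * (1 - np.exp(-0.01 * gas))  # toy surrogate
        energy = 0.2 * liq ** 1.5 + 0.05 * gas                                      # toy surrogate
        out["F"] = [-capture_rate, energy]        # pymoo minimises, so negate capture rate

res = minimize(CaptureProblem(), NSGA2(pop_size=40), ("n_gen", 60), seed=1, verbose=False)
print(f"{len(res.F)} Pareto points, best capture rate = {-res.F[:, 0].max():.2f}")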

The methodologies we have developed and refined throughout this study hold considerable potential for widespread application across the industry, thereby advancing progress toward achieving UN Sustainability Goals 7 (Affordable and Clean Energy), 9 (Industry, Innovation, and Infrastructure), 12 (Responsible Consumption and Production), and 13 (Climate Action). This research not only provides a foundation for future innovations in CO2 capture technologies but also serves as a vital stepping stone toward sustainable industrial practices.



2:50pm - 3:10pm

Optimisation of a Haber-Bosch Synthesis Loop for PtA

Joachim Weel Rosbo1, Anker Degn Jensen1, John Bagterp Jørgensen1, Sigurd Skogestad2, Jakob Kjøbsted Huusom1

1Technical University of Denmark, Denmark; 2Norwegian University of Science and Technology, Norway

Power-to-X (PtX) is one of the most promising solutions for long-term storage of renewable energy such as wind and solar power (Miehling et al., 2022). In particular, Power-to-Ammonia (PtA) has attracted significant interest due to its ability to store and recover energy without carbon dioxide emissions. The major challenge for PtA lies in managing the highly variable power supply from renewable sources, as the Haber-Bosch process is designed for stable operation. The fluctuating nature of renewable energy requires the development of new flexible operating strategies across a wide operating envelope from 10 % to 120 % of the nominal load (Armijo & Philibert, 2020). Rosbo et al. (2023) introduced a dynamic model for an adiabatic quench-cooled reactor (AQCR), investigating open-loop transients, optimal operating points, and basic control strategies across a broad operating window. Following this, we extended our model library to include adiabatic indirect-cooled (AICR) and direct-cooled (IDCR) reactors, evaluating the stability and conversion performance of these reactors for a PtA process (Rosbo et al., 2024).

In this work, we expand to a plantwide model of the ammonia synthesis loop, incorporating catalytic beds, heat exchangers, compressors, steam turbines, and flash separators. We define a function for the total electrical power utility of the PtA plant composed of electrolysers, air separation, compressors, separator cooling, and steam turbines. The power function is adopted as the objective function for optimising the PtA plant across its operating envelope. Operating constraints include maximum reactor temperatures, compressor choke and stall, minimum steam temperature, and maximum loop pressure. Six degrees of freedom are considered for optimisation: three reactor temperatures, N2/H2 feed ratio, separator temperature, and loop pressure. We perform the optimisation by minimising the power function for a given hydrogen make-up feed flow. Across the operating envelope, the optimisation results reveal different active constraint regions. We assess the sensitivity of the optimisation results for relevant process disturbances such as feed argon content, catalyst activity, cooling water temperature, and hydrogen production cost. Additionally, we evaluate the loss in the objective function associated with fixing individual optimisation variables across the operating envelope. This identifies potential self-optimising variables that result in minimal loss when controlled to a constant value across the operating range.
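In generic form, the plantwide power objective described above can be written as follows (the exact terms, sign conventions and constraint set of the study may differ):

% Generic plantwide power objective (illustrative form)
\[
\min_{u}\; P_{\mathrm{total}}(u) \;=\; P_{\mathrm{electrolysis}} + P_{\mathrm{air\ separation}} + P_{\mathrm{compressors}} + P_{\mathrm{separator\ cooling}} - P_{\mathrm{steam\ turbines}}
\quad \text{s.t.} \quad g(u) \le 0,\;\; \dot m_{\mathrm{H_2,\,makeup}} \ \text{given}
\]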

References
Armijo, J., & Philibert, C. (2020). Flexible production of green hydrogen and ammonia from variable solar and wind energy: Case study of Chile and Argentina. International Journal of Hydrogen Energy, 45(3), 1541–1558.

Miehling, S., Fendt, S., & Spliethoff, H. (2022). Optimal integration of Power-to-X plants in a future European energy system and the resulting dynamic requirements. Energy Conversion and Management, 251(July 2021),

Rosbo, J. W., Jensen, A. D., Jørgensen, J. B., & Huusom, J. K. (2024). Comparison, operation and cooling design of three general reactor types for Power-to-Ammonia processes. Chemical Engineering Journal, 496, 153660.

Rosbo, J. W., Ritschel, T. K. S., Hørstholt, S., Huusom, J. K., & Jørgensen, J. B. (2023). Flexible operation, optimisation and stabilising control of a quench cooled ammonia reactor for Power-to-Ammonia. Computers & Chemical Engineering, 176(108316).

 
2:30pm - 3:30pmT5: Concepts, Methods and Tools - Session 2
Location: Zone 3 - Room E033
Chair: Eike Cramer
Co-chair: Artur Schweidtmann
 
2:30pm - 2:50pm

Bayesian uncertainty quantification for molecular property prediction with graph neural networks

Qinghe Gao1, Daniel C. Miedema1, Yidong Zhao2, Jana M. Weber3, Qian Tao2, Artur M. Schweidtmann1

1Process Intelligence Research Team, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands; 2Department of Imaging Physics, Delft University of Technology, Delft, the Netherlands; 3Pattern Recognition and Bioinformatics, Department of Intelligent Systems, Delft University of Technology, Van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands

Graph neural networks (GNNs) have demonstrated state-of-the-art performance in molecular property prediction tasks [1]. However, a significant challenge with GNNs is the reliability of their predictions, particularly in critical domains where quantifying model confidence is essential. Therefore, assessing uncertainty in GNN predictions is crucial to improving their robustness. Existing uncertainty quantification methods, such as Deep ensembles and Monte Carlo Dropout (MC-dropout), have been applied to GNNs with some success, but these methods provide only limited approximations of the full posterior distribution.

In this work, we propose a novel approach for scalable uncertainty quantification in molecular property prediction using Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [2] with a cyclical learning rate. This method facilitates sampling from multiple posterior modes and improves posterior exploration within a single training round. Additionally, we compare the proposed methods with MC-dropout [3] and Deep ensembles [4], focusing on error analysis, calibration, and sharpness, considering both epistemic and aleatoric uncertainties. Our experimental results demonstrate that the proposed parallel-SGHMC approach significantly outperforms MC-dropout and Deep ensembles in terms of calibration and sharpness. Specifically, parallel-SGHMC reduces the sum of squared errors (SSE) by 99.4% and 75%, respectively, when compared to MC-dropout and Deep Ensembles. These findings suggest that parallel-SGHMC is a promising method for uncertainty quantification in GNN-based molecular property prediction.
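For context, a cosine cyclical step-size schedule of the kind commonly paired with SG-MCMC samplers such as SGHMC is sketched below; the constants are illustrative and the paper's exact schedule may differ.

# Sketch of a cosine cyclical step-size schedule for SG-MCMC sampling; samples
# are typically collected near the end of each cycle, when the step size is small.
import math

def cyclical_step_size(k, total_iters, n_cycles, eps0):
    """Step size at iteration k (0-based) for a cosine cyclical schedule."""
    cycle_len = math.ceil(total_iters / n_cycles)
    pos = k % cycle_len                     # position within the current cycle
    return eps0 / 2 * (math.cos(math.pi * pos / cycle_len) + 1)

schedule = [cyclical_step_size(k, 10_000, 4, eps0=1e-3) for k in range(10_000)]
print(min(schedule), max(schedule))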

[1] Schweidtmann, A. M., Rittig, J. G., Konig, A., Grohe, M., Mitsos, A., & Dahmen, M. (2020). Graph neural networks for prediction of fuel ignition quality. Energy & fuels, 34(9), 11395-11407.

[2] Chen, T., Fox, E., & Guestrin, C. (2014, June). Stochastic gradient hamiltonian monte carlo. In International conference on machine learning (pp. 1683-1691). PMLR.

[3] Gal, Y., & Ghahramani, Z. (2016, June). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning (pp. 1050-1059). PMLR.

[4] Lakshminarayanan, B., Pritzel, A., & Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30.



2:50pm - 3:10pm

Enhanced Reinforcement Learning-driven Process Design via Quantum Machine Learning

Austin Braniff1, Fengqi You2, Yuhe Tian1

1Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, United States; 2Smith School of Chemical and Biomolecular Engineering, Cornell University, Ithaca, NY, United States

Reinforcement learning (RL)-driven process design methods [1-2] have received recent impetus; they strive to intelligently identify optimal design solutions from an available set of unit operations without any pre-postulation of superstructure or flowsheet configuration. This provides a more systematic and robust strategy for designing optimal processes by minimizing the impact of prior expert knowledge. However, a key challenge for these methods lies in the significantly large combinatorial design space, which can make it highly computationally intensive or even intractable to identify the truly optimal process design. To address this challenge, this work presents a novel approach integrating RL-driven design with quantum machine learning (QML). QML provides a promising alternative to expedite the design search, benefiting from theoretical speed advantages over classical counterparts and the continuous advancement of real-world quantum machines [3-4]. Built on our prior work [5], the quantum-enhanced RL-driven approach starts with a maximum set of unit operations available for constructing the process design. An input-output stream matrix is used to represent the flowsheet structure, serving as the observation for reinforcement learning. A Deep Q-Network (DQN) algorithm is utilized to train a neural network (NN) as the RL agent to generate new flowsheet designs. Herein, the classical NN is replaced with a parameterized quantum circuit (PQC), a state-of-the-art model in QML that is considered the quantum equivalent of an NN [6-7]. The underlying principles and algorithm architecture of DQN are maintained to avoid model divergence and recent sampling bias when updating the PQC. The resulting designs are simulated and optimized using the python-based IDAES-PSE software [8]. The value of the objective function (e.g., cost, productivity) is used as the reward to the agent, continuously improving design optimality. The efficacy and simulated computational tractability of this quantum-enhanced RL-driven process design algorithm will be demonstrated through a hydrodealkylation process case study. The key novelty of this work is the integration of two cutting-edge computing algorithms, QML and RL, aiming to provide an intelligent, efficient, and reliable approach toward automated process design.

References

[1] Stops, L.et al. (2023). Flowsheet generation through hierarchical reinforcement learning and graph neural networks. AIChE Journal, 69(1), e17938.

[2] Wang, D. et al. (2023). A coupled reinforcement learning and IDAES process modeling framework for automated conceptual design of energy and chemical systems. Energy Advances, 2(10), 1735-1751.

[3] Bernal, D. E. et al. (2022). Perspectives of quantum computing for chemical engineering. AIChE Journal, 68(6), e17651.

[4] Ajagekar, A. & You, F. (2022). New frontiers of quantum computing in chemical engineering. Korean Journal of Chemical Engineering, 39(4), 811–820.

[5] Tian, Y. et al. (2024). Reinforcement Learning-Driven Process Design: A Hydrodealkylation Example. 387–393.

[6] Jerbi, S. et al. (2021). Parametrized quantum policies for reinforcement learning (arXiv:2103.05577).

[7] Skolik, A. et al. (2022). Quantum agents in the Gym: A variational quantum algorithm for deep Q-learning. Quantum, 6, 720.

[8] Lee, A. et al. (2021). The IDAES process modeling framework and model library - Flexibility for process simulation and optimization. Journal of advanced manufacturing and processing, 3(3), e10095.



3:10pm - 3:30pm

Solving Complex Combinatorial Optimization Problems Using Quantum Annealing Approaches

Vasileios K. Mappas, Bogdan Dorneanu, Harvey Arellano-Garcia

FG Prozess, und Anlagentechnik, Brandenburgische Technische Universität Cottbus-Senftenberg Burger Chaussee 2, D-03044, Cottbus, Germany

The complexity and demands of optimization problems are increasing due to economic and environmental constraints, as well as resource depletion. Combinatorial optimization (CO) is a critical area within optimization that holds significant importance in both academic and industrial contexts, with a variety of applications (Weinand et al., 2022). Despite recent advancements, solving CO problems remains challenging, primarily because of the growing problem size (NP-hard nature) and issues related to nonconvexity and bilinearity, which can result in multiple local solutions (Peres and Castelli, 2021). Numerous approaches have been suggested in the literature to tackle these problems, utilizing classical methods, heuristics, and neural networks (Blekos et al., 2024). However, these frequently fail to provide solutions or do so only within unrealistic timeframes, even for moderately sized problems (Pop et al., 2024).

To address these limitations, this work introduces an optimization approach based on quantum computing (QC). Over the past decade, QC has advanced rapidly and presents a promising technology for addressing CO problems that are intractable on classical hardware (Truger et al., 2024). This approach is advantageous due to its structural formulation.

In this work, the proposed framework employs quantum annealing (QA) to tackle CO problems through quantum adiabatic computation. As a case study, Haverly's pooling-blending problem (PBP) is presented and solved using both classical and quantum techniques. In applying the QA method to the PBP, the original problem is reformulated as a quadratic unconstrained binary optimization (QUBO) problem, expressed as min_x x Q x^T, where x is the row vector of binary decision variables and Q is a square matrix of constants. The transformation procedure involves several steps: transforming the inequality constraints into equalities, discretizing the continuous variables into binaries, removing bilinear terms, introducing quadratic penalty terms, and constructing the Q matrix. Furthermore, verbose and succinct transformations are employed, with the former expanding discretized variables for greater precision and the latter restricting them to integer values.
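To make the QUBO construction concrete, the toy sketch below folds a single equality constraint into a quadratic penalty and enumerates the resulting binary quadratic form min_x x Q x^T; the numbers are illustrative only and this is not the Haverly problem itself.

# Toy QUBO construction: minimise -x0 - 2*x1 subject to x0 + x1 = 1, x in {0,1}^2.
# The penalised objective -x0 - 2*x1 + P*(x0 + x1 - 1)^2, expanded with x_i^2 = x_i
# and the constant term dropped, gives the upper-triangular Q below.
import itertools
import numpy as np

P = 10.0                                    # penalty multiplier (chosen to dominate the objective)
Q = np.array([[-1.0 - P, 2.0 * P],
              [0.0,      -2.0 - P]])

xs = [np.array(x) for x in itertools.product([0, 1], repeat=2)]
best = min(xs, key=lambda x: x @ Q @ x)
print("optimal binary assignment:", best)   # -> [0 1]: satisfies x0 + x1 = 1 and picks the cheaper option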

The resulting QUBO formulations are effectively embedded and solved, emerging as a promising solution technique. Specifically, QA exhibited the lowest computational time and the most consistent performance, indicating its suitability for embedding and solving this problem type. Moreover, the verbose formulation required larger penalty multipliers than the succinct one for all the examined solvers, and the obtained results align with those previously reported in the literature. The approach will be demonstrated on diverse numerical case studies to highlight its performance.

References

Blekos, K., et al. (2024). A review on quantum approximate optimization algorithm and its variants. Physics Reports, 1068, 1-66.

Peres, F., & Castelli, M. (2021). Combinatorial optimization problems and metaheuristics: Review, challenges, design, and development. Applied Sciences, 11(14), 6449.

Truger, F., et al. (2024). Warm-starting and quantum computing: A systematic mapping study. ACM Computing Surveys, 56(9), 1-31.

Weinand, J. M., et al. (2022). Research trends in combinatorial optimization. International Transactions in Operational Research, 29(2), 667-705.

Pop, P. C., et al. (2024). A comprehensive survey on the generalized traveling salesman problem. European Journal of Operational Research, 314(3), 819-835.

 
2:30pm - 3:30pmT7: CAPEing with Societal Challenges - Keynote: Process integration for industry decarbonization: Enabling a shared database of ex-ante models of industrial processes, decarbonization technologies and energy systems featuring sustainability metrics
Location: Zone 3 - Room D049
Chair: Henrique Matos
Co-chair: Léonard Grégoire

Process integration for industry decarbonization: Enabling a shared database of ex-ante models of industrial processes, decarbonization technologies and energy systems featuring sustainability metrics



The mass adoption of process simulators has accelerated data generation for process synthesis and optimization, which is reflected in growing rates of scientific publication. However, many models are constantly redeveloped due to limited scientific collaboration and confidentiality issues. Poorly documented databases also hinder the sharing of models between publicly funded researchers, and research data is often static and difficult to reproduce, hampering comparability and reliability. To address these challenges, the creation of an open-source database for ex-ante models is crucial, facilitating knowledge transfer and promoting industrial symbiosis. This work aims to enable the procedures and infrastructure needed to build a shared database of ex-ante models of relevant industrial processes, decarbonization technologies and energy systems, with clear definitions of model attributes, adopted syntax, sharing protocols, reporting standards and visualization tools. The set of industrial and technological blueprints summarizes key material and energy flows without revealing confidential data, and is helpful for assisting in energy audits, identifying sources of inefficiency and improving energy integration practices. The database of processes and energy technology models ranges from simple equation-oriented modeling approaches to complex sequential modular simulations with detailed internal flows, featuring a well-established documentation and version control strategy. By gathering information from different actors such as academics, researchers, industry experts, policymakers, opinion leaders, and stakeholders, the models database is continuously being built, validated and maintained, aiming to guide long-term decisions for the energy transition and demonstrate the role of Process Integration (PI) in industry decarbonization. These activities are developed within Task XXIV of the Technology Collaboration Program of the International Energy Agency, whose objective is to share not only models but also experiences that inform decision-makers about the opportunities for industrial decarbonization using renewable electricity, CO2 capture and sequestration, and biomass conversion routes. The shared models also help highlight synergies, such as utilizing waste heat and materials in other energy systems to reduce energy imports and minimize environmental impact. Confidentiality calls for data anonymization and improved methods that employ minimal sensitive information. Finally, continuous education programs are integral to the dissemination strategy, serving as a conduit for training skilled engineers who contribute to the ongoing evolution of the Task. It creates a feedback loop that enlarges and refines the project’s database and models. This iterative process ensures that the project remains relevant and up-to-date, continuously adapting to challenges and opportunities in the industrial decarbonization landscape.
 

Keynote by Daniel Florez Orrego

Daniel Florez Orrego

EPFL

 
 
2:30pm - 3:30pmT8: CAPE Education and Knowledge Transfer - Including keynote by JMP
Location: Zone 3 - Room E031
Chair: Antonio Espuña
Keynote by JMP
Higher education curricula are changing rapidly, shifting to more ML/AI in open-source and open-data environments often requiring coding, while sacrificing essential statistical thinking skills like understanding variation in data or conducting strategic experimentation. However, the ultimate questions are: Will the next generation of professionals in Computer-Aided Process Engineering (CAPE) and other fields be prepared for a data-driven future? Will the students really learn how to learn from data, e.g. to understand and optimize a process or to drive innovation and develop a new high-quality product?

Surprisingly, while students get access to more and more data and see more analytics in action in their daily life, the skill gaps reported by JMP customers in chemical, pharma or biotech industries are increasing. In this 1-hour workshop we will discuss these gaps and recommend some remedy, exploring best practices in teaching multivariate thinking and real-world problem solving. The JMP software used in this session is freely available to students, instructors and academic researchers at jmp.com/student, and all demo content will be shared with the participants.
2:30pm - 3:30pmT9: PSE4Food and Biochemical - Session 1
Location: Zone 3 - Room D016
Chair: Ihab Hashem
 
2:30pm - 2:50pm

Multi-Dimensional Singular Value Decomposition of Scale-Varying CFD Data: Analyzing Scale-Up Effects in Fermentation Processes

Pedro M. Pereira1, Bruno S. Ferreira2, Fernando P. Bernardo1

1CERES, Department of Chemical Engineering, University of Coimbra, R. Sílvio Lima, 3030-790 Coimbra; 2Biotrend SA, Biocant Park Núcleo 04, Lote 2, 3060-197 Cantanhede, Portugal

The scale-up of processes with complex fluid flow presents significant challenges in process engineering, particularly in fermentation. As processes scale up, unfavourable fluid flow conditions lead to limitations in transport phenomena, resulting in spatial concentration gradients as well as mechanical stress gradients, negatively impacting the productivity and selectivity of the process. This leads to inefficiencies and suboptimal performance in large-scale operations. This challenge is particularly crucial in the biotechnology sector, where fermentation conditions can have seemingly unpredictable effects on product yield and quality.

Computational fluid dynamics (CFD) is a crucial tool for accurately modelling the hydrodynamic environment in bioreactors and understanding the effects of scale-up. This study utilizes Higher Order SVD (HOSVD)1, the multidimensional extension of Singular Value Decomposition (SVD), to identify the dominant structures of fluid flow in CFD data of a fermentation process. This method, like Proper Orthogonal Decomposition (POD), which is also based on SVD, can be used to model and identify the dominant structures of fluid flow in fermentation processes, with the added possibility of exploring additional parameter spaces.

We propose a novel application of HOSVD to investigate fluid flow patterns across different process scales, with scale being an explicit parametric component of the analysis. This approach enables us to identify the main coherent structures that emerge at various scales of fermentation and quantify the contribution of each structure to the total energy of the system. Our methodology includes CFD simulations of the fermentation process at multiple scales, using operating parameters determined by traditional scale-up criteria. Through interpolation, we bridged the different CFD meshes and constructed a snapshot tensor of the CFD results across all process scales, standardized on the same spatial reference grid.
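A minimal numpy sketch of the HOSVD step on a synthetic (space x time x scale) snapshot tensor is given below, showing how the factor matrices, core tensor and per-scale energy shares are obtained; the data and dimensions are synthetic, not the study's CFD results.

# Minimal HOSVD sketch: factor matrices from the SVD of each mode unfolding,
# core tensor by projection, and the energy share of the leading spatial mode
# per scale (the kind of quantity behind the 21.5% vs 12.5% comparison below).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40, 5))        # (grid points, snapshots, scales), synthetic

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Left singular vectors of each unfolding are the HOSVD factor matrices.
U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0] for n in range(X.ndim)]

# Core tensor: project X onto the factor matrices along every mode.
G = X.copy()
for n in range(X.ndim):
    G = np.moveaxis(np.tensordot(U[n].T, np.moveaxis(G, n, 0), axes=1), 0, n)

# Share of total energy carried by the leading spatial mode at each scale.
coeffs = np.tensordot(U[0][:, 0], X, axes=([0], [0]))   # (snapshots, scales)
share = (coeffs ** 2).sum(axis=0) / (X ** 2).sum(axis=(0, 1))
print(np.round(share, 3))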

As a first test case, we examined five scales of a reciprocally shaken flask bioreactor, ranging from 125 mL to 1 L, along with a hypothetical 10 L shake flask. Results indicated a common set of spatial modes across all scales, suggesting a degree of dynamic similarity with preserved main flow features. However, notable differences in the relative importance of these spatial modes were observed, particularly at the 10 L scale, where the main mode's contribution dropped to 12.5% of the total energy, compared to 21.5% for the 125 mL scale. This shift highlights how scaling affects flow dynamics, and ultimately the efficiency of the fermentation process, and the limitations of traditional scale-up methods.

These findings illustrate the impact of scale on fluid dynamics in a particular bioreactor flow, but also provide a proof of concept for this methodology, which can give valuable insights for a more rational approach to bioreactor scale-up and optimization. The application of this method can easily be extended beyond fermentation scale-up to the construction of parametric reduced models of CFD for other chemical and biochemical processes.

  1. Lorente, L. S.; Vega, J. M.; Velazquez, A. Generation of aerodynamics databases using high-order singular value decomposition. Journal of Aircraft, 2008, 45.5: 1779-1788.


2:50pm - 3:10pm

CFD Simulations of Mixing Dynamics and Photobioreaction Kinetics in Miniature Bioreactors under Transitional Flow Regimes.

Bovinille Anye Cho1, George Mbella Teke2, Godfrey K. Gakingo3, Robert William McCelland Pott2, Dongda Zhang1

1Department of Chemical Engineering, The University of Manchester, United Kingdom; 2Department of Chemical Engineering, University of Stellenbosch, South Africa; 3Department of Chemical Engineering, Dedan Kimathi University of Technology, Kenya

Miniaturised stirred bioreactors are crucial in high-throughput bioprocesses for their simplicity and cost-effectiveness. To accelerate process optimisation in chemical and bioprocess industries, models that integrate CFD-predicted flow fields with (bio)reaction kinetics are needed. However, conventional two-step coupling methods, which freeze flow fields after solving hydrodynamics and then address (bio)reaction transport, face numerical challenges in miniaturised systems due to unsteady radial flows, recirculation zones, and secondary vortices. These flow fluctuations prevent steady-state hydrodynamic convergence.

This study addresses these challenges by time-averaging the RANS solutions of the transitional SST model to achieve statistical hydrodynamic convergence. This method is particularly effective for internal flow problems at low to midrange Reynolds numbers (<10,000), typical of transitional regimes. Following this, photo-bioreaction transport models were solved using these converged fields, considering the bioreactor’s directional illumination and curvature.

Applied to a 0.7 L Schott bottle photobioreactor mechanically mixed by a magnetic stirrer (100-500 rpm), the model predictions were thoroughly validated against tracer dye experiments and Rhodopseudomonas palustris biomass growth data. The model accurately predicted the swirling vortex fields at 500 rpm within a 7% error margin, and the biomass growth profiles aligned with literature datasets. However, parallel computing efficiency did not scale linearly from 16 to 32 processor cores (4 GB of RAM per core), making time-averaging computationally expensive for simulating scaled-up bioreactors. Improved bioreactor mixing influenced cell light/dark cycles and enhanced biomass productivity, but stirring speeds above 300 rpm required increased light intensity (>100 W/m²) due to light limitation.

This model provides a framework for optimising stirring speeds and refining operational parameters, aiding in the scale-up and scale-down of bioprocesses.



3:10pm - 3:30pm

A Dynamic CFD Model to Replicate Real-Time Dynamics in Commercial Storage Unit of Chicory Roots

Abhishek Bhat K. N., Pieter Verboven, Bart Nicolai

KU Leuven, Belgium

The storage of chicory roots in commercial storage units requires maintaining optimal conditions to ensure product quality and minimize energy consumption. Traditional systems typically rely on a single measurement point to control storage room conditions, which may lead to suboptimal regulation and energy inefficiency. This study introduces a dynamic Computational Fluid Dynamics (CFD) model designed to move beyond single-point measurement by analyzing temperature and airflow gradients within the storage environment.

The model simulates the real-time thermal dynamics, accounting for spatial variations across the storage room. By studying these gradients, the approach aims to provide a more detailed understanding of how local conditions impact the overall system, allowing for more precise and energy-efficient control strategies. This gradient-based analysis represents a significant step forward in advancing storage technology, ensuring that environmental conditions are more uniformly optimized.

The results demonstrate the potential of using CFD to enhance current storage practices by providing insights into the thermal dynamics, which can inform improved control mechanisms. The long-term goal of this work is to transition from single-measurement systems to advanced, gradient-based control systems. Future work will explore the integration of machine learning algorithms to further optimize storage room operations. This advancement has significant implications for both economic and environmental sustainability in post-harvest storage.

 
2:30pm - 3:30pmT10: PSE4BioMedical and (Bio)Pharma - Session 2
Location: Zone 3 - Aula E036
Chair: Zoltan Nagy
Co-chair: Domenica Braile
 
2:30pm - 2:50pm

A hybrid-modeling approach to monoclonal antibody production process design using automated bioreactor equipment

Kosuke Nemoto1, Sara Badr1, Yusuke Hayashi1, Yuki Yoshiyama1, Kozue Okamura1, Mizuki Morisasa2, Junshin Iwabuchi2, Hirokazu Sugiyama1

1Department of Chemical System Engineering, The University of Tokyo, Japan; 2Tech & Biz Development Division, Chitose Laboratory Co., Ltd., Japan

Monoclonal antibody (mAb) drugs offer advantages such as higher affinity and specificity compared to conventional drugs for the treatment of critical diseases. These advantages have led to rapid growth of the global mAb market, along which new production technologies have been developed, including host cell lines such as the high-yield CHO-MK1 with complex metabolism. In the cultivation step, not only the final product, mAb, but also impurities such as host cell proteins (HCP), DNA, and charge variants are produced. These impurities have a significant impact on quality in the time- and resource-intensive cultivation step, making it a major factor influencing the overall production cost, time, and quality. Therefore, mathematical models that can reduce the experimental burden are useful for the design and operation of this critical step.

In the field of process systems engineering, model-based approaches have been applied to mAb production processes. Several studies have worked on improving mechanistic models based on the understanding of phenomena using data-driven models, e.g., improving lactate model equations with clustering2. Other studies have focused on model-based process design, e.g., dynamic optimization for maximizing mAb production while keeping costs low3, and comparison between production processes considering time and costs4. However, the quality-related elements described by previous models are limited, which is a significant barrier to the design of mAb production processes regulated by quality standards.

This work presents a hybrid-model-based approach to CHO-MK cell cultivation process design. Automated cultivation equipment containing 12 parallel 250 mL bioreactors was set up, and three cycles of fed-batch cultivation were performed by varying agitation speed (700, 1200, and 1400 rpm), dissolved oxygen (20 and 50%), and glucose feed rate (6, 15, and 20 g L-1 day-1). Multiple items, including viable cell density, product mAb, metabolites, and impurities, were measured as time-series data. Based on the experimental data, we first worked on model development using mechanistic model equations from the literature5, but the model was unable to reproduce the behavior of lactate and viable cell density. In improving the model, it is difficult to elucidate all the biological phenomena, and simply increasing the number of estimated parameters is not advisable. Therefore, we developed a hybrid model that maintains the mass balance of the original model while estimating the coefficients with a data-driven model. The developed hybrid model accurately and comprehensively described not only the behavior of viable cell density, lactate, and the final product mAb, but also the impurities (HCP, DNA, and charge variants). Using the developed model, we identified the conditions that maximize mAb while keeping impurities low. In ongoing work, we are conducting validation experiments for the developed hybrid model.
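
To illustrate the general structure (not the authors' implementation), the sketch below keeps a mechanistic mass-balance skeleton and lets a data-driven regressor supply the specific rate coefficients from the current process state; the state variables, inputs, and the use of scikit-learn are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Data-driven part: regressors mapping the measured process state to the
# specific rates (growth rate mu, specific mAb production rate q_p).
# In practice these would be trained on the parallel-bioreactor time series.
mu_model = RandomForestRegressor(n_estimators=200, random_state=0)
qp_model = RandomForestRegressor(n_estimators=200, random_state=0)
# mu_model.fit(X_train, mu_train); qp_model.fit(X_train, qp_train)

def hybrid_rhs(t, y, u):
    """Mechanistic mass balances with data-driven rate coefficients.
    y = [Xv, P]: viable cell density and mAb concentration.
    u: dict of inputs (glucose feed rate, dissolved oxygen, agitation)."""
    Xv, P = y
    state = np.array([[Xv, P, u["glc_feed"], u["do"], u["rpm"]]])
    mu = mu_model.predict(state)[0]   # specific growth rate from the data-driven model
    qp = qp_model.predict(state)[0]   # specific production rate from the data-driven model
    return [mu * Xv, qp * Xv]         # mass-balance structure kept mechanistic
```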

  1. K. Masuda, et al, J. Biosci. Bioeng. (2024), 137, 471-479
  2. K. Okamura, et al, Comput. Chem. Eng. (2024), 191, 108822
  3. W. Jones & D. Gerogiorgis, Comput. Chem. Eng. (2022), 165, 107855
  4. S. Badr, et al, Comput. Chem. Eng. (2021), 153, 107422
  5. Z. Xing, et al, Biotechnol. Prog. (2010), 26, 208-219


2:50pm - 3:10pm

Model Predictive Control to Avoid Oxygen Limitations in Microbial Cultivations - A Comparative Simulation Study

Philipp Pably, Jakob Kjøbsted Huusom, Julian Kager

DTU, Denmark

In cell cultivation, the physiological conditions inside the reactor are critical for achieving the best overall process performance. To reach high titers and productivity, bioprocess engineers try to provide the microorganisms with the best possible environment for the given objective. The dissolved oxygen (DO) level is integral to this, as limitations cause shifts in the metabolic activity of the cultivated organisms or even cell death. This process parameter is commonly manipulated by the stirring speed and the aeration flow rate, where controllers are employed to keep it above a certain threshold. Often simple PID algorithms are deployed for this task, which are then extended with feedback linearization, cascaded control or gain scheduling to tackle the inherent nonlinear nature of bioprocesses (Babuška et al. 2003). Still, when faced with abrupt changes in nutrient addition, the purely reactive nature of these systems results in sudden drops of the DO signal and extended periods of oxygen limitation. This problem is encountered in systems where the substrate is added with intermittent bolus shots, such as high-throughput small-scale multi-reactor systems paired with a pipetting robot, as described by Kim et al. (2023). These metabolic challenges can affect the cells' physiological health and, in turn, the productivity of the process, which motivates the need for a more advanced control scheme.

Model Predictive Control (MPC) emerges as a promising alternative to prevent oxygen limitations, using a rather simple model of the nonlinear process dynamics and providing it with the known feed trajectory. The resulting oxygen uptake is modeled by combining first-principles mass balances with a Monod-type kinetic expression for the metabolic activity. The oxygen uptake rate is then connected to the physical oxygen transfer rate from the supplied air into the liquid phase through the kLa correlation proposed by Van’t Riet (1979). The parameter estimation is done with lab-scale experiments, where a combination of offline and online measurements is recorded. The MPC algorithm is then tested in silico with different configurations of the objective function and compared to the performance of a conventional PID controller. The simulated performance of the predictive controller shows that the time of oxygen limitation throughout the process can be minimized by anticipating the needed control action before the next feed pulse is added. These results show that the proposed algorithm can ensure an improved dissolved oxygen level throughout the changing dynamics of the bioprocess, even when challenged with sudden changes in nutrient supply.
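
As a rough illustration of the kind of prediction model described above (with purely illustrative parameters, not those identified in the study), the sketch below combines a Monod-type oxygen uptake with a kLa-based transfer term and shows how intermittent bolus feeding perturbs the dissolved oxygen balance:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the study)
kLa, DO_sat = 150.0, 0.21                 # 1/h, mmol/L
qO2_max, Ks, mu_max, Yxs = 8.0, 0.05, 0.4, 0.5

def reactor(t, y):
    X, S, DO = y                          # biomass, substrate, dissolved oxygen
    mu  = mu_max * S / (Ks + S)           # Monod-type growth kinetics
    OUR = qO2_max * S / (Ks + S) * X      # oxygen uptake rate
    OTR = kLa * (DO_sat - DO)             # transfer rate via a kLa correlation
    return [mu * X, -mu * X / Yxs, OTR - OUR]

# Intermittent bolus feeding: each pulse raises S, which raises OUR and pulls
# DO down before a purely reactive controller can respond; an MPC that knows
# the pulse schedule can raise stirring/aeration in advance.
y = [2.0, 0.2, 0.2]
for k in range(5):                                    # five feeding intervals
    sol = solve_ivp(reactor, (0, 0.5), y, max_step=0.005)
    y = sol.y[:, -1]
    y[1] += 0.3                                       # bolus substrate shot
    print(f"interval {k}: DO just before pulse = {y[2]:.3f} mmol/L")
```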

Babuška, R., Damen, M.R., Hellinga, C., Maarleveld, H., 2003. Intelligent adaptive control of bioreactors. Journal of Intelligent Manufacturing 14, 255–265. https://doi.org/10.1023/A:1022963716905

Kim, J.W., Krausch, N., Aizpuru, J., Barz, T., Lucia, S., Neubauer, P., Cruz Bournazou, M.N., 2023. Model predictive control and moving horizon estimation for adaptive optimal bolus feeding in high-throughput cultivation of E. coli. Computers & Chemical Engineering 172, 108158. https://doi.org/10.1016/j.compchemeng.2023.108158

Van’t Riet, K., 1979. Review of Measuring Methods and Results in Nonviscous Gas-Liquid Mass Transfer in Stirred Vessels. Ind. Eng. Chem. Proc. Des. Dev. 18, 357–364. https://doi.org/10.1021/i260071a001



3:10pm - 3:30pm

Improving drug solubility prediction in in-vitro intestinal fluids through hybrid modelling strategies

Marco Brendolan1, Francesca Cenci2, Konstantinos Stamatopoulos2, Fabrizio Bezzo1, Pierantonio Facco1

1CAPE-Lab – Computer-Aided Process Engineering Laboratory, Department of Industrial Engineering, University of Padova, via Marzolo, 9 - 35131, Padova, PD, Italy; 2GlaxoSmithKline, Park Road, Ware SG12 0DP, United Kingdom

This study aims at accelerating the identification of poorly soluble drugs during drug development and at reducing the time and resources needed for experimentation. Typically, the equilibrium solubility of a drug in the gastrointestinal tract is a key parameter for assessing the availability of the drug throughout the human body. However, the complexity of extracting and manipulating human intestinal fluids means that only a few experiments can be carried out in vivo on patients. For this reason, predicting solubility in vitro in Simulated Intestinal Fluids is of paramount importance. To this purpose, physiologically based pharmacokinetic (PBPK) models are utilized. PBPK models describe the human body mathematically by dividing it into a series of compartments, which correspond to different organs or tissues (Stamatopoulos, 2022). However, several phenomena (e.g. the impact of food) are not accounted for, thus leading to inaccurate predictions.

In this study, a novel hybrid modelling approach is proposed, which exploits Gaussian Process and Multi-Linear regression to increase the predictive accuracy and the physical interpretability of the physiological model. The proposed methodology is applied to an Active Pharmaceutical Ingredient whose solubility is measured in fasted and fed conditions (Stamatopoulos et al. 2023). Results demonstrate that the proposed hybrid model outperforms state-of-the-art literature models, describing both inter- and intra-subject variability of drug solubility in the gastrointestinal tract very accurately: the coefficient of determination in prediction on the test sets is 0.96. The proposed model represents a significant step forward in improving the understanding of the relationship between intestinal components and drug solubility, and in enhancing physiological interpretability.
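
One common way to realize such a hybrid is to let the multi-linear regression capture the main trend and a Gaussian Process model the residuals; whether this matches the authors' exact formulation is not stated, so the scikit-learn sketch below (with synthetic data and made-up features) is illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(80, 4))          # e.g. media composition descriptors (illustrative)
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.3 * np.sin(6 * X[:, 1]) \
    + 0.05 * rng.standard_normal(80)   # synthetic "solubility" response

lin = LinearRegression().fit(X, y)     # multi-linear part: interpretable main effects
resid = y - lin.predict(X)             # what the linear model cannot explain
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True).fit(X, resid)

def predict(X_new):
    # hybrid prediction = linear trend + Gaussian Process correction
    return lin.predict(X_new) + gp.predict(X_new)

X_test = rng.uniform(size=(20, 4))
print(predict(X_test)[:3])
```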

References

Stamatopoulos, K., Ferrini, P., Nguyen, D., Zhang, Y., Butler, J. M., Hall, J., & Mistry, N. (2023). Integrating In Vitro Biopharmaceutics into Physiologically Based Biopharmaceutic Model (PBBM) to Predict Food Effect of BCS IV Zwitterionic Drug (GSK3640254). Pharmaceutics, 15(2), Article 2. https://doi.org/10.3390/pharmaceutics15020521

Stamatopoulos, K. (2022). Integrating Biopharmaceutics to Predict Oral Absorption Using PBPK Modeling. In Biopharmaceutics (pp. 189–203). https://doi.org/10.1002/9781119678366.ch12

 
3:30pm - 4:00pmCoffee Break
Location: Zone 2 - Cafetaria
3:30pm - 4:00pmPoster session 1
Location: Zone 2 - Cafetaria
 

IMPLEMENTATION AND ASSESSMENT OF FRACTIONAL CONTROLLERS FOR AN INTENSIFIED DISTILLATION SYSTEM

Luis Refugio Flores-Gómez1, Fernando Israel Gómez-Castro1, Francisco López-Villarreal2, Vicente Rico-Ramírez3

1Universidad de Guanajuato, Mexico; 2Instituto Tecnológico de Villahermosa, Mexico; 3Tecnológico Nacional de México en Celaya, Mexico

Process intensification is a strategy in chemical engineering devoted to developing technologies that enhance the performance of the operations in a chemical process. This is achieved through the implementation of modified and multi-tasking equipment, among other approaches. Although various studies have demonstrated that the dynamic properties of intensified systems can be better than those of conventional configurations, the development of better control structures is still necessary (Wang et al., 2018). The use of fractional controllers can be an alternative to achieve this target. Fractional PID controllers are based on fractional calculus, which increases the flexibility of the controller by allowing fractional orders for the derivative and integral actions. However, this makes tuning the controller more complex. This work presents an approach to implement and assess fractional controllers in an intensified distillation system. The study is performed in the Simulink environment in Matlab, tuning the controllers through a hybrid optimization approach: a genetic algorithm is first used to find an initial point, and the solution is then refined with the fmincon algorithm. The calculations also involve the estimation of fractional derivatives and integrals with fractional-order numerical techniques. As a case study, the experimental dynamic data for an extractive distillation column have been used (Kumar et al., 1984). The data have been fitted to fractional-order functions. Since the number of experimental points is low, a strategy is implemented to interpolate the data and generate a more adequate fit to the fractional-order transfer function. Through this approach, the sum of squared errors is below 2.9x10-6 for perturbations in heat duty and 1.2x10-5 for perturbations in the reflux ratio. Moreover, after controller tuning, a minimal ISE value of 1,278.12 is obtained, which is approximately 8% lower than the value obtained for an integer-order controller.
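
The two-stage tuning idea (a stochastic global search followed by gradient-based refinement) can be sketched generically; the snippet below uses SciPy's differential evolution in place of the genetic algorithm and a bound-constrained local solver in place of fmincon, with a placeholder ISE function standing in for the fractional-order closed-loop simulation.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def ise(params):
    """Placeholder closed-loop cost for a controller parameter vector
    (Kp, Ki, Kd, lambda, mu). A real evaluation would simulate the
    fractional-order closed loop and integrate e(t)**2 over time."""
    target = np.array([2.0, 0.8, 0.1, 0.9, 1.1])   # purely illustrative optimum
    return float(np.sum((np.asarray(params) - target) ** 2))

bounds = [(0, 10), (0, 5), (0, 2), (0.1, 2.0), (0.1, 2.0)]

# Stage 1: stochastic global search to locate a promising region
coarse = differential_evolution(ise, bounds, seed=1, maxiter=50, polish=False)

# Stage 2: gradient-based local refinement starting from the stochastic solution
refined = minimize(ise, coarse.x, bounds=bounds, method="L-BFGS-B")
print("coarse ISE:", coarse.fun, "refined ISE:", refined.fun)
```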

References

Wang, C., Wang, C., Cui, Y., Guang, C., Zhang, Z., 2018. Economics and controllability of conventional and intensified extractive distillation configurations for acetonitrile/ethanol/benzene mixtures. Industrial & Engineering Chemistry Research, 57, 10551-10563.

Kumar, S., Wright, J.D., Taylor, P.A. 1984. Modelling and dynamics of an extractive distillation column. Canadian Journal of Chemical Engineering, 62, 185-192.



Sustainable pathways toward a decarbonized steel industry

Selene Cobo Gutiérrez1, Max Kroppen2, Juan Diego Medrano2, Gonzalo Guillén-Gosálbez2

1University of Cantabria; 2ETH Zurich

The steel industry, responsible for about 7% of global CO2 emissions1, faces significant pressure to reduce its environmental impact. Various technological pathways are available, but it remains unclear which is the most effective in minimizing CO2 emissions without causing greater environmental harm in other areas. This work aims to conduct the prospective life cycle assessment of five steelmaking pathways to identify the most environmentally sustainable option in terms of global warming impacts and damage to human health, ecosystems, and resources. The studied processes are 1) blast furnace plus basic oxygen furnace (BF-BOF, the dominant steelmaking route at present), 2) BF-BOF with carbon capture and storage (CCS), 3) coal-based direct reduction of iron paired with an electric arc furnace (DRI-EAF), 4) DRI-EAF using natural gas, and 5) the more recently developed low-temperature iron oxide electrolysis (IOE). Life cycle inventories were developed using a detailed Aspen Plus® model for BF-BOF, data from the Ecoinvent V3.8 database2, and literature for the other processes. The results indicate that the BF-BOF process with CCS, gas-based DRI-EAF, and IOE are the most promising pathways for reducing the steel industry’s carbon footprint while minimizing overall environmental damage. If renewable energy and hydrogen produced via water electrolysis are available at competitive costs, DRI-EAF and IOE show the most promise. However, if low-carbon hydrogen is not available and the main electricity source is the global grid mix, BF-BOF with CCS has the lowest overall impacts. The choice of technology depends on the expected development of the energy system and the current technological stock. Retrofitting existing BF-BOF plants with CCS is a viable option, while constructing new DRI-EAF plants may be more advantageous due to their versatility and higher decarbonization potential. IOE, although promising, is not yet ready for immediate industrial deployment but could be a key technology in the long term. In conclusion, the optimal technology choice depends on regional energy availability and technological readiness levels. These findings underscore the need for a tailored approach to decarbonizing the steel industry, balancing environmental benefits with economic and infrastructural considerations.

References

1. W. Cornwall. Science, 2024, 384(6695), 498-499.

2. G. Wernet, C. Bauer, B. Steubing, J. Reinhard, E. Moreno-Ruiz and B. Weidema, Int. J. Life Cycle Assess., 2016, 21, 1218–1230.



OPTIMIZATION OF HEAT EXCHANGERS THROUGH AN ENHANCED METAHEURISTIC STRATEGY: THE SUCCESS-BASED OPTIMIZATION ALGORITHM

Oscar Daniel Lara-Montaño1, Fernando Israel Gómez-Castro2, Claudia Gutiérrez-Antonio1, Elena Niculina Dragoi3

1Universidad Autónoma de Querétaro, Mexico; 2Universidad de Guanajuato, Mexico; 3Gheorghe Asachi Technical University of Iasi, Romania

The optimal design of the units in a chemical process is commonly challenging due to the high nonlinearity of the models that represent the equipment. This also applies to heat exchangers, where the mathematical equations modeling such units are nonlinear, include nonconvex terms, and require the simultaneous handling of continuous and discrete variables. Finding the global optimum of such models is complex, so the optimization strategy must be robust. In this context, metaheuristics are a robust alternative to classical optimization strategies. They are a set of stochastic algorithms that, when adequately tuned, can efficiently locate the region of the global optimum and are well suited to nonconvex functions with several local optima. The literature presents numerous metaheuristics, each with distinct properties, many of which require parameter tuning. However, no universal method exists to solve all optimization problems, as stated by the no-free-lunch theorem (Wolpert and Macready, 1997). This implies that a given algorithm may work well for some problems but perform inadequately on others, as reported for the optimal design of heat exchangers by Lara-Montaño et al. (2021). As such, new optimization strategies are still under development, and this work presents an enhanced metaheuristic algorithm, the Success-Based Optimization Algorithm (SBOA). The development of the method takes the concept of success from a social perspective as its initial inspiration. As a case study, the design of a shell-and-tube heat exchanger using the Bell-Delaware method is analyzed to minimize the total annual cost. The algorithm's performance is compared with current state-of-the-art metaheuristic algorithms, such as particle swarm optimization, the grey wolf optimizer, cuckoo search, and differential evolution. Based on the findings, in terms of standard deviation and mean values, the suggested algorithm outperforms nearly all other approaches except differential evolution. Nevertheless, the SBOA shows faster convergence than differential evolution and yields best solutions with lower total annual costs.
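
The SBOA itself is not reproduced here; the sketch below only illustrates the kind of benchmarking harness used in such comparisons, running one of the reference algorithms (SciPy's differential evolution) repeatedly on a stand-in total-annual-cost objective and reporting mean and standard deviation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def total_annual_cost(x):
    """Stand-in objective for a shell-and-tube exchanger design.
    x = [tube_length_m, shell_diameter_m, baffle_spacing_m]; a real study
    would evaluate the Bell-Delaware method and penalize violated
    pressure-drop or velocity constraints."""
    L, Ds, B = x
    area_cost = 500.0 * (L * Ds) ** 0.8
    pumping_cost = 2.0e3 / (B + 0.05) + 1.0e3 / L
    return area_cost + pumping_cost

bounds = [(2.0, 8.0), (0.3, 1.5), (0.1, 0.8)]
runs = [differential_evolution(total_annual_cost, bounds, seed=s, tol=1e-8).fun
        for s in range(10)]                 # repeated runs, as in the comparison
print("mean TAC:", np.mean(runs), "std:", np.std(runs))
```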

References

Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.

Lara-Montaño, O.D., Gómez-Castro, F.I., Gutiérrez-Antonio, C. 2021. Comparison of the performance of different metaheuristic methods for the optimization of shell-and-tube heat exchangers. Computers & Chemical Engineering, 152, 107403.



OPTIMAL DESIGN OF PROCESS EQUIPMENT THROUGH HYBRID MECHANISTIC-ANN MODELS: EFFECT OF HYBRIDIZATION

Zaira Jelena Mosqueda-Huerta1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

Artificial neural networks (ANNs) are data-based structures that allow the representation of the performance of units in chemical processes. They have been widely used to represent the operation of equipment such as reactors (e.g. Cerinski et al., 2020) and separation units (e.g. Jawad et al., 2020). To develop ANN-based models, it is necessary to obtain data to train the network. Thus, their use for process design represents a challenge, since the equipment does not yet exist and actual data is commonly not available. On the other hand, despite the popularity of artificial neural networks for generating models of chemical processes, there are warnings about the risks of depending completely on these data-based models while ignoring the fundamental knowledge of the phenomena occurring in the units, which is provided by traditional mechanistic models. Thus, hybrid models have arisen to combine the power of ANNs to predict interactions that are difficult to represent through rigorous modelling, while maintaining the relevant information provided by the traditional mechanistic approach. However, an open question is which part of the model should be represented through a data-based approach for design applications. To answer this question, this work analyzes the effect of the degree of hybridization on the design and optimization of a shell-and-tube heat exchanger, assessing the performance of a complete ANN model and a hybrid model in terms of computational time and accuracy of the solution. Since data for the heat exchanger is not available, such information is obtained by solving the rigorous model for randomly selected conditions. The Bell-Delaware approach is employed to design the exchanger. This model is characterized by nonlinearities and the need to handle discrete and continuous variables. Using the data, a neural network is trained in Python to approximate the area and cost of the exchanger. A second neural network is generated to predict the most nonlinear component of the model, namely the heat transfer coefficients, while the other calculations are performed with the rigorous model. Both representations are optimized with the differential evolution algorithm. According to the preliminary results, for the same architecture, the hybrid model produces designs whose standard deviation, relative to the areas predicted by the rigorous model, is approximately 30% lower than that of the complete ANN model. However, the hybrid model requires approximately 11 times the computational time of the complete ANN model.
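
A minimal sketch of this kind of hybridization (not the authors' code, and with an illustrative cost law and placeholder training data) is to let an ANN supply only the heat transfer coefficients while the overall coefficient, area, and cost remain mechanistic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Data-driven component: an ANN trained offline on rigorous Bell-Delaware
# solutions to predict shell- and tube-side heat transfer coefficients from
# geometry and flow conditions. Training data and targets are assumed here.
htc_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
# htc_net.fit(X_geom_flow, np.column_stack([h_shell, h_tube]))

def hybrid_design(x, Q=1.2e6, dT_lm=35.0, fouling=3e-4):
    """Mechanistic part kept explicit: overall coefficient, area, and cost."""
    h_shell, h_tube = htc_net.predict(np.atleast_2d(x))[0]   # W/m2K from the ANN
    U = 1.0 / (1.0 / h_shell + 1.0 / h_tube + fouling)       # overall coefficient
    A = Q / (U * dT_lm)                                       # required area, m2
    capital = 30800.0 + 890.0 * A ** 0.81                     # illustrative cost law
    return A, capital
```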

References

Cerinski, D., Baleta, J., Mikulčić, H., Mikulandrić, R., Wang, J., 2020. Dynamic modelling of the biomass gasification process in a fixed bed reactor by using the artificial neural network. Cleaner Engineering and Technology, 1, 100029.

Jawad, J., Hawari, A.H., Zaidi, S. 2020. Modeling of forward osmosis process using artificial neural networks (ANN) to predict the permeate flux. Desalination, 484, 114427.



MODELLING OF A PROPYLENE GLYCOL PRODUCTION PROCESS WITH ARTIFICIAL NEURAL NETWORKS: OPTIMIZATION OF THE ARCHITECTURE

Emilio Alba-Robles1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Jahaziel Alberto Sánchez-Gómez1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

The mathematical models used to represent chemical processes are characterized by high non-linearity, mainly associated with the thermodynamic and kinetic relationships. The inclusion of non-convex bilinear terms is also common when modelling chemical processes. This leads to challenges when optimizing an entire process. In recent years, interest in the development of data-based models to represent processing units has increased; as an example, the work of Kwon et al. (2021) on the dynamic performance of distillation columns can be mentioned. Artificial neural networks (ANNs) are among the most relevant strategies for developing data-based models. The accuracy of an ANN's predictions is highly dependent on the quality of the provided data, the nature of the interactions among the studied variables, and the architecture of the network. Indeed, the selection of an adequate architecture is itself an optimization problem. In this work, two strategies are proposed and assessed for determining the architecture of ANNs that represent the performance of a chemical process. As a case study, a process to produce propylene glycol using glycerol as raw material is analyzed (Sánchez-Gómez et al., 2023). The main units of the process are the chemical reactor and two distillation columns. To generate the data required to train the artificial neural network, random values for the design and operating variables are generated from a simulation in Aspen Plus. To determine the best architecture for the artificial neural network, two approaches are used: (i) the random generation of structures for the ANN, and (ii) the formal optimization of the architecture employing the ant colony algorithm, which is particularly useful for discrete problems (Zhao et al., 2022). In both cases, the decision variables are the number of hidden layers and the number of neurons per layer. The objective function is the minimization of the mean squared error. Both strategies generate ANN-based predictions in good agreement with the data from the rigorous simulation, with r2 values higher than 99.9%. However, the ant colony algorithm achieves the best fit, although it converges more slowly.
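
Strategy (i), the random generation of architectures, can be sketched in a few lines with scikit-learn; the data below are random placeholders standing in for the Aspen Plus samples, and the search ranges are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# X, y would come from the Aspen Plus samples of the propylene glycol process;
# random placeholders are used here so the snippet runs on its own.
X = rng.uniform(size=(300, 6))
y = X @ rng.uniform(size=6) + 0.1 * np.sin(4 * X[:, 0])

best = None
for _ in range(20):                                   # strategy (i): random architectures
    n_layers = rng.integers(1, 4)                     # 1 to 3 hidden layers
    layers = tuple(int(rng.integers(4, 64)) for _ in range(n_layers))
    net = MLPRegressor(hidden_layer_sizes=layers, max_iter=3000, random_state=0)
    mse = -cross_val_score(net, X, y, scoring="neg_mean_squared_error", cv=3).mean()
    if best is None or mse < best[0]:
        best = (mse, layers)

print("best architecture:", best[1], "cross-validated MSE:", best[0])
```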

References

Kwon, H., Oh, K.C., Choi, Y., Chung, Y.G., Kim, J., 2021. Development and application of machine learning-based prediction model for distillation column. International Journal of Intelligent Systems, 36, 1970-1997.

Sánchez-Gómez, J.A., Gómez-Castro, F.I., Hernández, S. 2023. Design and intensification of the production process of propylene glycol as a high value-added glycerol derivative. Computer Aided Chemical Engineering, 52, 1915-1920.

Zhao, H., Zhang, C., Zheng, X., Zhang, C., Zhang, B. 2022. A decomposition-based many-objective ant colony optimization algorithm with adaptive solution construction and selection approaches. Swarm and Evolutionary Computation, 68, 100977.



CFD Analysis of the Claus Reaction Furnace under Different Operating Conditions: Temperature and Excess Air for Sulfur Recovery

PABLO VIZGUERRA MORALES1, MIGUEL ANGEL MORALES CABRERA2, FABIAN SALVADOR MEDEROS NIETO1

1INSTITUTO POLITECNICO NACIONAL, MEXICO; 2UNIVERSIDAD VERACRUZANA, MEXICO

In this work, a Claus reaction furnace in a sulfur recovery unit (SRU) of the Abadan Oil Refinery, Iran, was analyzed. The combustion operating temperature is important since it ensures optimal performance of the reactor. The novelty of the research lies in controlling the temperature (1400, 1500, and 1600 K) and the excess air (10, 20, and 30%) to improve the reaction yield and H2S conversion. The CFD simulation was carried out in Ansys Fluent, in transient state and in three dimensions, considering the standard turbulence model, an energy model with transport by convection, and mass transport with chemical reaction using the Arrhenius finite-rate/eddy-dissipation model for a kinetic model of the destruction of the acid gases H2S and CO2. The simulation shows good agreement with the experimental results of the industrial process of the Abadan Oil Refinery: the percentage difference between experimental and simulated results varies between 0.6 and 4% depending on the species. The temperature of 1600 K with 30% excess air was the best condition, giving an S2 mole fraction of 0.065 at the outlet and an acid gas (H2S) conversion of 95.64%, in good agreement with the experimental value.



Numerical Analysis of the Hydrodynamics of Proximity Impellers using the SPH Method

MARIA SOLEDAD HERNÁNDEZ-RIVERA1, KAREN GUADALUPE MEDINA-ELIZARRARAZ1, JAZMÍN CORTEZ-GONZÁLEZ1, RODOLFO MURRIETA-DUEÑAS1, JUAN GABRIEL SEGOVIA-HERNÁNDEZ2, CARLOS ENRIQUE ALVARADO-RODRÍGUEZ2, JOSÉ DE JESÚS RAMÍREZ-MINGUELA2

1TECNOLÓGICO NACIONAL DE MÉXICO/ CAMPUS IRAPUATO, DEPARTAMENTO DE INGENIERÍA QUÍMICA; 2UNIVERSIDAD DE GUANAJUATO/DEPARTAMENTO DE INGENIERÍA QUÍMICA

Mixing is a fundamental operation in many industrial processes, typically achieved using agitated tanks for homogenization. However, the design of tanks and impellers is often overlooked during the selection of the agitation system, leading to excessive energy consumption and non-homogeneous mixing. To address these operational inefficiencies, Computational Fluid Dynamics (CFD) can be utilized to analyze the hydrodynamics and mixing times within the tank. CFD employs mathematical modeling of mass, heat, and momentum transport phenomena to simulate fluid behavior. Among the latest methods used for modeling stirred tank hydrodynamics is Smoothed Particle Hydrodynamics (SPH), a mesh-free Lagrangian approach that tracks individual particles carrying physical properties such as mass, position, velocity, and pressure. This method offers advantages over traditional mesh discretization techniques by analyzing particle interactions to simulate fluid behavior more accurately. In this study, we compare the performance of different impellers based on hydrodynamics and mixing times during the homogenization of water and ethanol in a 0.5 L stirred tank. The tank and agitators were rigorously sized, operating at 70% capacity with the following fluid properties: densities ρ₁ = 1000 kg/m³ and ρ₂ = 789 kg/m³, and kinematic viscosities ν₁ = 1E-6 m²/s and ν₂ = 1.52E-6 m²/s. The simulation, conducted for 2 minutes in a turbulent flow regime with a Reynolds number of 10,000, involved three impellers (double ribbon, paravisc, and hybrid) simulated using the DualSPHysics software at a stirring speed of 34 rpm. The initial particle distance was set to 1 mm, generating 270,232 fluid particles and 187,512 boundary particles representing the tank and agitator. The results included velocity profiles, flow patterns, divergence, vorticity, and density fields to quantify mixing performance. The Q criterion was also applied to identify whether fluid motion was dominated by rotation or deformation and to locate stagnation zones. The double ribbon impeller demonstrated the best performance, achieving 88.28% mixing in approximately 100 seconds, while the paravisc and hybrid impellers reached 12.36% and 11.8% mixing, respectively. The findings highlight SPH as a robust computational tool for linking hydrodynamics with mixing times, allowing for the identification of key parameters that enhance mixing efficiency.



Surrogate Modeling of Twin-Screw Extruders Using a Recurrent Deep Embedding Network

Po-Hsun Huang1, Yuan Yao1, Yen-Ming Chen2, Chih-Yu Chen2, Meng-Hsin Chen2

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan; 2Industrial Technology Research Institute, Hsinchu 30013, Taiwan

Twin-screw extruders (TSEs) are extensively used in the plastics processing industry, with their performance highly dependent on operating conditions and screw configurations. However, optimizing these parameters through experimental trials is often time-consuming and resource-intensive. Although some neural network models have been proposed to tackle the screw arrangement problem [1], they fail to account for the positional information of the screw elements. To overcome this challenge, we propose a recurrent deep embedding network that leverages a deep autoencoder with a recurrent neural network (RNN) structure to develop a surrogate model based on simulation data.

The details are as follows. An autoencoder is a neural network architecture designed to learn latent representations of input data. In this study, we integrate the autoencoder with an RNN to capture the complex physical relationships between the operating conditions, screw configurations of TSEs, and their corresponding performance metrics. To further enhance the model’s ability to represent screw positions, we incorporate an attention layer from the Transformer model architecture. This addition allows the model to more effectively capture the spatial relationships between the screw elements.
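
A minimal PyTorch sketch of these ingredients (an embedding of the ordered screw elements, a recurrent encoder, a self-attention layer, and a regression head fed with the operating conditions) is given below; all layer sizes are assumed, and the reconstruction branch of the autoencoder is omitted for brevity.

```python
import torch
import torch.nn as nn

class ScrewSurrogate(nn.Module):
    """Sketch: embed the ordered screw elements, encode them with a GRU,
    refine positional relationships with self-attention, then combine with
    the operating conditions to predict performance metrics."""
    def __init__(self, n_element_types=12, d_embed=16, d_hidden=32, n_ops=3, n_outputs=4):
        super().__init__()
        self.embed = nn.Embedding(n_element_types, d_embed)
        self.rnn = nn.GRU(d_embed, d_hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(d_hidden, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(d_hidden + n_ops, 64), nn.ReLU(), nn.Linear(64, n_outputs))

    def forward(self, screw_ids, operating_conditions):
        x = self.embed(screw_ids)                 # (batch, seq, d_embed)
        h, _ = self.rnn(x)                        # recurrent encoding of the sequence
        a, _ = self.attn(h, h, h)                 # attention over screw positions
        z = a.mean(dim=1)                         # pooled latent representation
        return self.head(torch.cat([z, operating_conditions], dim=1))

model = ScrewSurrogate()
screw = torch.randint(0, 12, (8, 10))             # 8 configurations, 10 screw elements each
ops = torch.rand(8, 3)                            # temperature, feed rate, rotation speed
print(model(screw, ops).shape)                    # torch.Size([8, 4])
```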

The model was trained and evaluated using simulation data generated from the Ludovic software package. The experimental setup included eight screw element arrangements and three key operating variables: temperature, feed rate, and rotation speed. For data collection, we employed two data sampling strategies: progressive Latin hypercube sampling [2] and random sampling.

The results demonstrate that the proposed surrogate model accurately predicts TSE performance across both training and testing datasets. Notably, the model generalizes well to unseen operating conditions, making reliable predictions even for scenarios not encountered during training. This highlights the model’s robustness and versatility as a tool for optimizing TSE configurations.

In conclusion, the recurrent deep embedding surrogate model offers a highly efficient and effective solution for optimizing TSE performance. By integrating this model with optimization algorithms, it is possible to rapidly identify optimal configurations, resulting in improved product quality, enhanced process efficiency, and reduced production costs.



Predicting Final Properties in Ibuprofen Production with Variable Batch Durations

Kuan-Che Huang, David Shan-Hill Wong, Yuan Yao

Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan

This study addresses the challenge of predicting final properties in batch processes with highly uneven durations, using the ibuprofen production process as a case study. A novel methodology is proposed and compared against traditional regression algorithms, which rely on batch trajectory synchronization as a pre-processing step. The performance of each method is evaluated using established metrics.

Batch processes are widely used in the chemical industry. Nevertheless, variability between production runs often leads to differences in batch durations, resulting in unequal lengths of process variable trajectories. Common solutions include time series truncation or time warping. However, truncation risks losing valuable process information, thereby reducing model prediction accuracy. Conversely, time warping may introduce noise or distort trajectories when compressing significantly unequal sequences, causing the model to learn incorrect process information. In multivariate chemical processes, combining time warping with batch-wise unfolding can result in the curse of dimensionality, especially when data is limited, thereby increasing the risk of overfitting in machine learning models.

The data for this study were generated using Aspen Plus V12 simulation software, focused on batch reactors. To capture the process characteristics, statistical sampling was employed to strategically position data points within a reasonable process range. The final isobutylbenzene conversion rate for each batch was used to determine batch completion. A total of 1,000 simulation runs were conducted, and the resulting data were used to develop a neural network model. The target variables to predict are: (1) the isobutylbenzene conversion rate, and (2) the accumulated mass of ibuprofen.

To handle the unequal-length trajectories in batch processes, this research constructs a dual-transformer deep neural network with multihead attention and layer normalization mechanisms to extract shared information from the high-dimensional, uneven-length manipulated variable profiles into a latent space, generating equal-dimensional latent codes. As an alternative strategy for feature extraction, a dual-autoencoder framework is also employed to achieve equal-dimensional representations. The representation vectors are then used as inputs to downstream deep learning models to predict the target variables.



Development of a Digital Twin System Based on Physics-Informed Neural Networks for Pipeline Leakage Detection

Wei-Shiang Lin1, Yi-Hsiang Cheng2, Zhen-Yu Hung2, Yuan Yao1

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan; 2Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310401, Taiwan

As the demand for industrial and domestic resources continues to grow, the transportation of water, fossil fuels, and chemical products increasingly depends on pipeline systems. Therefore, monitoring pipeline transportation has become crucial, as leaks can lead to severe environmental disasters and safety risks. To address this challenge, this study is dedicated to developing a pipeline leakage detection system based on digital twin technology.

The core of this research lies in combining existing physical knowledge, such as the continuity and momentum equations, with neural network technology. These physical models are incorporated into the loss function of the neural network, enabling the model to be trained based on physical laws. By integrating physical models with neural networks, we aim to achieve high accuracy in detecting pipeline leakages. An advantage of Physics-informed Neural Networks (PINNs) is that they do not rely on large datasets and can enforce physical constraints during model training, making them a powerful tool for addressing pipeline safety challenges. Using the PINN model, we can more accurately simulate the fluid dynamics within pipelines, thereby significantly enhancing the prediction of potential leaks.

In detail, the system employs a fully connected neural network alongside the continuity and momentum partial differential equations to describe fluid pressure and flow rate variations. These equations not only predict pressure transients and pressure wave propagation but also account for the impact of pipeline friction coefficients on flow behavior. By integrating data fitting with physical constraints, our model aims to minimize both the prediction loss and the partial differential equation loss, ensuring that predictions align closely with real-world data while adhering to physical laws. This approach provides both interpretability and reliability.
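
As an illustration of how such physics terms enter the loss, the PyTorch sketch below uses the standard one-dimensional water-hammer form of the continuity and momentum equations as one plausible choice; the paper's exact equations and constants are not given here, so all values are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative constants for a single pipe: wave speed a, gravity g,
# cross-section A, diameter D, Darcy friction factor f (assumptions only).
a, g, A, D, f = 1000.0, 9.81, 0.05, 0.25, 0.02

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))            # (x, t) -> (head H, flow rate Q)

def pde_residuals(xt):
    xt = xt.requires_grad_(True)
    H, Q = net(xt).unbind(dim=1)
    grads = lambda u: torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    H_x, H_t = grads(H).unbind(dim=1)
    Q_x, Q_t = grads(Q).unbind(dim=1)
    r_cont = H_t + (a ** 2 / (g * A)) * Q_x                        # continuity
    r_mom = Q_t + g * A * H_x + f * Q * Q.abs() / (2 * D * A)      # momentum with friction
    return r_cont, r_mom

def loss_fn(xt_data, H_obs, Q_obs, xt_colloc):
    pred = net(xt_data)
    data_loss = ((pred[:, 0] - H_obs) ** 2 + (pred[:, 1] - Q_obs) ** 2).mean()
    r1, r2 = pde_residuals(xt_colloc)
    return data_loss + (r1 ** 2).mean() + (r2 ** 2).mean()   # data term + physics terms
```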

The PINN model is trained on data from normal pipeline operations to describe fluid dynamics in non-leakage conditions. When the input data reflects flow rates and pressures indicative of a leak, the predicted values will exhibit statistically significant deviations from the actual values. The process involves collecting prediction errors from the training data, evaluating their statistical distribution, and establishing a detection statistic using parametric or non-parametric methods. A rejection region and control limits are then defined, followed by the creation of a control chart to detect leaks. Finally, we test the accuracy and efficiency of the control chart using field or experimental data to ensure reliability.



Higher alcohol = higher value? Identifying Promising and Unpromising Synthesis Routes for 1-Propanol

Lukas Spiekermann, Mae McKenna, Luca Bosetti, André Bardow

Energy & Process Systems Engineering, Department of Mechanical and Process Engineering, ETH Zürich

In response to climate change, the chemical industry is investigating synthesis routes using renewable carbon sources (Shukla et al., 2022). CO2 and biomass have been shown to be convertible into 1-propanol, which could serve as a future platform chemical with diverse applications and higher value than traditional bulk chemicals (Jouny et al., 2018, Schemme et al., 2018, Gehrmann and Tenhumberg, 2020, Vo et al., 2021). A variety of potential pathways to 1-propanol have been proposed, but their respective benefits and disadvantages remain unclear, making it difficult to guide future innovation.

Here, we aim to identify the most promising routes to produce 1-propanol and establish development targets necessary to become competitive with benchmark technologies. To allow for a comprehensive assessment, we embed 1-propanol into the overall chemical supply chain. For this purpose, we formulate a technology choice model (Kätelhön et al., 2019, Meys et al., 2021) of the chemical industry to evaluate the cost-effectiveness and climate impact of various 1-propanol synthesis routes. The model includes thermo-catalytic, electrocatalytic, and fermentation-based synthesis steps with various intermediates to produce 1-propanol from CO2, diverse biomass feedstocks, and fossil resources. A comprehensive techno-economic analysis coupled with life cycle assessment quantifies both the economic and environmental potentials of new synthesis routes.

Our findings define performance targets for direct conversion of CO2 to 1-propanol via thermo-catalytic hydrogenation or electrocatalysis to become a beneficial synthesis route. If these performance targets are not met, the direct synthesis of 1-propanol is substituted by multi-step processes based on syngas and ethylene from CO2 or biomass.

Overall, our study demonstrates the critical role of synthesis route optimization in guiding the development of new chemical processes. By establishing quantitative benchmarks, we provide a roadmap for advancing 1-propanol synthesis technologies, contributing to the broader effort of reducing the chemical industry’s carbon footprint.

References

P. R. Shukla, et al., 2022, Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, Cambridge University Press, Cambridge, UK and New York, NY, USA)

M. Jouny, et al., 2018, Ind. Eng. Chem. Res. 57(6), 2165–2177

C. H. Vo, et al., 2021, ACS Sustain. Chem. Eng. 9(31), 10591–10600

S. Schemme, et al., 2018, Journal of CO2 Utilization 27, 223–237

S. Gehrmann, N. Tenhumberg, 2020, Chemie Ingenieur Technik 92(10), 1444–1458

A. Kätelhön, et al., 2019, Proceedings of the National Academy of Sciences 116(23), 11187–11194

R. Meys, et al., 2021, Science 374(6563), 71–76



A Python/Numpy-based package to support model discrimination and identification

Seyed Zuhair Bolourchian Tabrizi1,2, Elena Barbera1, Wilson Ricardo Leal da Silva2, Fabrizio Bezzo1

1Department of Industrial Engineering, University of Padova, via Marzolo 9, 35131 Padova PD, Italy; 2FLSmidth Cement, Green Innovation, Denmark

Process design, scale-up, and optimisation require the precise determination of underlying phenomena and the identification of accurate models to describe them. This task becomes complex when multiple rival models exist, uncertainty in the data is high, and the data needed to select and calibrate the models are costly to obtain. Numerical techniques for screening various models and narrowing the pool of candidates without requiring additional experimental effort have been introduced to streamline the pre-discrimination stage [1]. These techniques have been followed by the development of model-based design of experiments (MBDoE) methods, which not only design new experiments to maximize the information for easier discrimination between rival models but also reduce the confidence ellipsoid volume of the estimated parameters by enriching the information matrix through optimal experiment design [2].
The need to perform these techniques in an open-source and user-friendly environment has been recognized by the community and has led to the development of several valuable packages, especially in the Python/Pyomo environment, which implement many of these numerical techniques [3,4]. These existing packages have made significant contributions to parameter estimation and calibration of models as well as model-based design of experiments. However, a systematic package that flexibly performs all of these steps, with a clear distinction between model simulation and model identification in an object-oriented approach, is still highly needed. To address these challenges, we present a new Python package that serves as an independent numerical wrapper around the kernel functions (models and their numerical interpretation). It facilitates crucial model identification steps, including the screening of rival models (through global sensitivity, identifiability, and estimability analyses), parameter estimation, uncertainty analysis, and model-based design of experiments to discriminate and calibrate models. This package not only brings together all the necessary steps but also conducts the analysis in an object-oriented manner, offering flexibility to adapt to the physical constraints of various processes. It is independent of specific programming structures and relies on NumPy and Python arrays, making it as general as possible while remaining compatible with the features available in these packages. The application and advantages are demonstrated through an in-silico approach to a multivariate model identification case.
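
The package itself is not reproduced here; the NumPy/SciPy fragment below only illustrates two of the steps it wraps, weighted least-squares parameter estimation and a Fisher-information-based covariance and t-value calculation, on a made-up first-order kernel function.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    """Kernel function: a simple first-order response used as a stand-in
    for one of the rival candidate models."""
    k, tau = theta
    return k * (1.0 - np.exp(-t / tau))

t = np.linspace(0.1, 10, 25)
y_obs = model([2.0, 1.5], t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

# Parameter estimation (weighted least squares, measurement sigma assumed known)
sigma = 0.05
fit = least_squares(lambda th: (model(th, t) - y_obs) / sigma, x0=[1.0, 1.0])

# Local sensitivities -> Fisher information -> covariance and t-values,
# the quantities used for estimability analysis and MBDoE criteria.
eps = 1e-6
J = np.column_stack([(model(fit.x + eps * np.eye(2)[i], t) - model(fit.x, t)) / eps
                     for i in range(2)]) / sigma
FIM = J.T @ J
cov = np.linalg.inv(FIM)
t_values = fit.x / np.sqrt(np.diag(cov))
print("theta:", fit.x, "t-values:", t_values)
```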

References:
[1] Moshiritabrizi, I., Abdi, K., McMullen, J. P., Wyvratt, B. M. & McAuley, K. B. Parameter estimation and estimability analysis in pharmaceutical models with uncertain inputs. AIChE Journal (2023).
[2] Asprey, S. P. & Macchietto, S. Statistical tools for optimal dynamic model building. Comput Chem Eng 24, (2000).
[3] Wang, J. & Dowling, A. W. Pyomo.DOE: An open-source package for model-based design of experiments in Python. AIChE Journal 68, (2022).
[4] Klise, K. A., Nicholson, B. L., Staid, A. & Woodruff, D. L. Parmest: Parameter Estimation Via Pyomo. in 41–46 (2019).



Experiences in Teaching Statistics and Data Science to Chemical Engineering Students at the University of Wisconsin-Madison

VICTOR ZAVALA

UNIVERSITY OF WISCONSIN-MADISON, United States of America

In this talk, I offer a perspective on my recent experiences in designing a course on statistics and data science for chemical engineers at the University of Wisconsin-Madison and in writing a textbook on the subject.

Statistics is one of the pillars of modern science and engineering and of emerging topics such as data science and machine learning; despite this, its scope and relevance have remained stubbornly misunderstood and underappreciated in chemical engineering education (and in engineering education at large). Specifically, statistics is often taught with an emphasis on data analysis. However, statistics is much more than that; statistics is a mathematical modeling paradigm that complements the physical modeling paradigms used in chemical engineering (e.g., thermodynamics, transport phenomena, conservation, reaction kinetics). Specifically, statistics can help model random phenomena that might not be predictable from physics alone (or from deterministic physical laws), can help quantify the uncertainty of predictions obtained with physical models, can help discover physical models from data, and can help create models directly from data (in the absence of physical knowledge).

The desire to design a new course on statistics for chemical engineering came about from my personal experience learning statistics in college and from identifying significant gaps in my understanding of statistics throughout my professional career. Similar feelings are often shared with me by professionals working in industry and academia. Throughout my professional career, I have been exposed to a broad range of applications in which knowledge of statistics has proven to be essential: uncertainty quantification, quality control, risk assessment, modeling of random phenomena, process monitoring, forecasting, machine learning, computer vision, and decision-making under uncertainty. These applications are pervasive in industry and academia.

The course that I designed at UW-Madison (and the accompanying textbook) follows a "data-models-decisions" pipeline. The intent of this design is to emphasize that statistics is a modeling paradigm that maps data to decisions; moreover, this design also aims to "connect the dots" between different branches of statistics. The focus on the pipeline is also important in reminding students that understanding the application context matters. Similarly, the nature of the decision and the data available influence the type of model used. The design is also intended for the student to understand the close interplay between statistical and physical modeling; specifically, we emphasize how statistics provides tools to model aspects of a system that cannot be fully predicted from physics. The design is also intended to help the student appreciate how statistics provides a foundation for a broad range of modern tools of data science and machine learning.

The talk also offers insights into experiences in using software as a way to reduce complex mathematical concepts to practice. Moreover, I discuss how statistics provides an excellent framework to teach and reinforce concepts of linear algebra and optimization. For instance, it is much easier to explain the relevance of eigenvalues when they are presented from the perspective of data science (e.g., as measures of information).



Rule-based autocorrection of Piping and Instrumentation Diagrams (P&IDs) on graphs

Lukas Schulze Balhorn1, Niels Seijsener2, Kevin Dao2, Minji Kim1, Dominik P. Goldstein1, Ge H. M. Driessen2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, The Netherlands; 2Fluor BV, Amsterdam, The Netherlands

Undetected errors or suboptimal designs in Piping and Instrumentation Diagrams (P&IDs) can cause increased financial costs, hazardous situations, unnecessary emissions, and inefficient operation. These errors are currently captured in extensive design processes leading to safe, operable, and maintainable facilities. However, grassroots engineering projects can involve tens to thousands of P&ID pages, leading to a significant revision workload. With the advent of digitalization and data exchange standards such as the Data Exchange in the Process Industry (DEXPI), there are new opportunities for algorithmic support of P&ID revision.

We propose a rule-based, automatic correction (i.e., autocorrection) of errors in P&IDs represented by the DEXPI data model. Our method detects potential errors, suggests improvements, and provides explanations for these suggestions. Specifically, our autocorrection method represents a DEXPI P&ID as a graph, in which nodes represent DEXPI classes and directed edges the connectivity between them. The nodes retain all attributes of the DEXPI classes. Additionally, each rule consists of an erroneous P&ID template and the corresponding correct template, represented as graphs. The correct template includes the rule explanation as a graph attribute. We then apply the rules at inference time: the autocorrection method searches for the erroneous template via subgraph isomorphism and replaces it with the corresponding correct template in the P&ID graph.
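
A toy prototype of this rule-application step can be written with NetworkX's subgraph matcher; the rule below (a pump discharging directly into a valve gets a check valve inserted) is purely illustrative and reduces DEXPI attributes to a single class label.

```python
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# P&ID as a directed graph; the node attribute "cls" stands in for the DEXPI class.
pid = nx.DiGraph()
pid.add_nodes_from([(1, {"cls": "Tank"}), (2, {"cls": "Pump"}), (3, {"cls": "Valve"})])
pid.add_edges_from([(1, 2), (2, 3)])

# Erroneous template of the illustrative rule: pump connected directly to a valve.
erroneous = nx.DiGraph()
erroneous.add_nodes_from([("a", {"cls": "Pump"}), ("b", {"cls": "Valve"})])
erroneous.add_edge("a", "b")

matcher = DiGraphMatcher(pid, erroneous,
                         node_match=lambda n1, n2: n1["cls"] == n2["cls"])
for mapping in matcher.subgraph_isomorphisms_iter():   # {pid_node: template_node}
    inv = {v: k for k, v in mapping.items()}
    pump, valve = inv["a"], inv["b"]
    check = max(pid.nodes) + 1
    pid.remove_edge(pump, valve)                        # apply the correct template:
    pid.add_node(check, cls="CheckValve",               # insert a check valve and keep
                 explanation="Pumps should discharge through a check valve.")
    pid.add_edge(pump, check)                           # the rule explanation as an attribute
    pid.add_edge(check, valve)
    break

print(list(pid.edges()))
```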

An industry case study demonstrates the method’s accuracy and performance, with rule inference taking less than a second. However, rules can conflict, requiring careful application order, and rules must be extended for specific cases. The explainability of the rule-based approach builds trust in the method and facilitates its integration into existing engineering workflows. Furthermore, DEXPI provides an existing interface between the autocorrection method and industrial P&ID development software.



Deposition rate constants: a DPM approach for particles in pipe flow

Alkhatab Bani Saad, Edward Obianagha, Lande Liu

University of Huddersfield, United Kingdom

Particle deposition occurs in many natural and industrial systems. Nevertheless, modelling and understanding particle deposition in flow remains a significant challenge, especially for the determination of the deposition rate constant. This study focuses on the use of the discrete particle model to calculate the deposition rate constant of particles flowing in a horizontal pipe. It was found that increasing the flow velocity decreases particle deposition, while deposition increases with particle size. Similarly, the deposition flux was proportional to the concentration of the particles. The deposit per unit area of the inner pipe surface is higher at lower fluid velocity; when the velocity of the continuous phase is increased by a factor of 100, the deposit volume per unit area decreases by half. The deposition rate constant was found to vary nonlinearly with both the location along the pipe and the particle size. It was also interesting to see that the constant is substantially higher at the inlet of the pipe and then gradually decreases along the axial direction of the flow. The dependence of the deposition rate constant on particle size was found to be exponential.

The novelty of this research is that, by extracting quantitative parameters (deposition rate constants in this case) from a steady-state Lagrangian simulation, Eulerian unsteady-state population balance modelling can be used to determine the thickness of the particle deposit in a pipe.



Plate heat exchangers: a CFD study on the effect of dimple shape on heat transfer

Mitchell Stolycia, Lande Liu

University of Huddersfield, United Kingdom

This article studies how heat transfer is affected by different dimple shapes on a plate within a plate heat exchanger using computational fluid dynamics (CFD). Four dimple shapes were designed and studied: spherical, edge-smoothed spherical, normal distribution, and error distribution. In a pipe of 0.1 m diameter, with a dimple height of 0.05 m located 0.3 m from the inlet and a fluid velocity of 0.5 m s–1, simulation shows that the normal distribution dimple produced a 0.53 K increase in fluid temperature after 1.5 s. This increase is 10 times that of the spherical shape, 8 times that of the edge-smoothed spherical shape, and 1.13 times that of the error distribution shape in their contributions to elevating fluid temperature. This was primarily due to the large increase in the intensity and number of eddies that the normal distribution dimple induced in the fluid flow.

The effect that a fully developed velocity profile had on heat transfer was also analysed for an array of normal distribution dimples in a 5 m long pipe. It was found that fully developed flow resulted in the greatest temperature change, which was 9.5% more efficient than half developed flow and 31% more efficient than placing dimples directly next to one another.

The novelty of this research lies in demonstrating how typical plate heat exchanger equipment can be designed and optimised by a computational approach prior to manufacturing.



Modeling and life cycle assessment for ammonia cracking process

Heungseok Jin, Yeonsoo Kim

Kwangwoon University, Korea, Republic of (South Korea)

Ammonia (NH3) is gaining attention as a sustainable hydrogen (H2) carrier for long-distance transportation due to its higher boiling point and lower boil-off issues compared to liquefied hydrogen. These properties make ammonia a practical choice for storing and transporting hydrogen over long distances. However, extracting hydrogen from ammonia requires significant energy due to the endothermic nature of the reaction. Optimizing the operational conditions for this decomposition process is crucial to ensure energy-efficient hydrogen production. In particular, we focus on determining the amount of slipped ammonia that provides the most efficient energy generation through mixed oxidation, where both slipped ammonia (unreacted NH3) and a small amount of hydrogen are used.

Key factors include the temperature and pressure of the ammonia cracking process, the ammonia-to-hydrogen ratio in the fuel mixture, and catalyst kinetics. By optimizing these conditions, the goal is to maximize hydrogen production while minimizing hydrogen consumption for fueling and NH3 consumption for NOx reduction.

In addition to the mass and energy balance derived from process modeling, a comprehensive life cycle assessment (LCA) is conducted to evaluate the sustainability of ammonia as a hydrogen carrier. The LCA considers the entire process, from ammonia production (often through the energy-intensive Haber-Bosch process or renewable energy-driven water electrolysis) to transportation and ammonia cracking for hydrogen extraction. This assessment highlights the environmental and energy impacts at each stage, offering insights into how to reduce the overall carbon footprint of using ammonia as a hydrogen carrier.



Technoeconomic Analysis of a Methanol Conversion Process Using Microwave-Assisted Dry Reforming and Chemical Looping

Omar Almaraz, Srinivas Palanki

West Virginia University, United States of America

The global methanol market size was valued at $28.78 billion in 2020 and is projected to reach $41.91 billion by 2026 [1]. Methanol has traditionally been produced from natural gas by first converting methane to syn gas and then converting syn gas to methanol. However, this is a very energy intensive process and produces a significant amount of the greenhouse gas carbon dioxide. Hence, there is motivation to look for alternative routes to the manufacture of methanol. In this research a novel microwave reactor is used for simulating the dry reforming process to convert methane to methanol. The objective is to produce 14,200 lbmol/h of methanol, which is the current production rate of methanol at Natgasoline LLC, Texas (USA) using the traditional steam reforming process [2].

Dry reforming requires a stream of carbon dioxide as well as a stream of methane to produce syn gas. Additional hydrogen is required to achieve the necessary carbon to hydrogen ratio to produce methanol from syn gas. These streams of carbon dioxide and hydrogen are generated via chemical looping. A three-reactor chemical looping system is developed that utilizes methane as the feed to produce a pure stream of hydrogen and a pure stream of carbon dioxide. The carbon dioxide stream from the chemical looping reactor system is mixed with a desulfurized natural gas stream and is sent to a novel microwave syngas reactor, which operates at a temperature of 800 °C and pressure of 1 bar to produce a mixture of carbon monoxide and hydrogen. The stream of hydrogen obtained via chemical looping is added to this syngas stream and sent to a methanol reactor train where methanol is produced. These reactors operate at a temperature range of 220-255°C and pressure of 76 bar. The reactor outlet stream is sent to a distillation train where the product methanol is separated from methane, carbon dioxide, hydrogen, and other products. The carbon dioxide is recycled back to the microwave reactor.

This process was simulated in ASPEN Plus. The thermodynamic property method used was RKSoave for the process to convert methane to syn gas and NRTL for the process to convert syn gas to methanol. The energy requirement for operating the microwave reactor is determined via simulation in COMSOL. Heat integration tools are utilized to reduce the hot utility and cold utility usage in this integrated plant that leads to optimal operation. A technoeconomic analysis is conducted to determine the overall capital cost and the operating cost of this novel process. The simulation results from this study demonstrate the significant potential of utilizing a microwave-assisted reactor for dry reforming of methane.

References

[1] Methanol Market by Feedstock (Natural Gas, Coal, Biomass), Derivative (Formaldehyde, Acetic Acid), End-use Industry (Construction, Automotive, Electronics, Solvents, Packaging), and Region - Global Forecast to 2028, Markets and Markets. (2023). https://www.marketresearch.com/MarketsandMarkets-v3719/Methanol-Feedstock-Natural-Gas-Coal-30408866/

[2] M. E. Haque, N. Tripathi, and S. Palanki, “Development of an Integrated Process Plant for the Conversion of Shale Gas to Propylene Glycol,” Industrial & Engineering Chemistry Research, 60 (1), 399-41 (2021)



A Techno-enviro-economic Transparency of a Coal-fired Power Plant: Integrating Biomass Co-firing and CO2 Sequestration Technology in a Carbon-priced Environment

Norhuda Abdul Manaf1, Nilay Shah2, Noor Fatina Emelin Nor Fadzil3

1Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom; 3Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur

The energy industry, as the primary contributor to worldwide greenhouse gas emissions, plays a crucial role in addressing global climate issues. Despite numerous governmental commitments and initiatives aimed at combating the root causes of rising temperatures, carbon dioxide (CO2) emissions from industrial and energy-related activities continue to climb. Coal-fired power plants are significant contributors to this situation. Currently, two promising strategies for mitigating emissions from coal-fired power plants are CO2 capture and storage (CCS) and biomass gasification. CCS is a mature technology in the field, while biomass gasification, a process that converts biomass into gaseous fuel, offers an encouraging avenue for generating sustainable energy resources. While extensive research has explored the techno-economic potential of coal-biomass co-firing with CCS (CB-CCS) retrofit systems, no work has considered the synergistic impact of coal power plant stranded assets, carbon price schemes, and co-firing ratios. This study develops an hourly-resolution optimization model framework using mixed-integer linear programming to predict the operational profile and economic potential of CB-CCS-retrofit systems. Two dynamic scenarios for ten-year operations are evaluated: with and without carbon price imposition, subject to the minimum coal power plant stranded asset and CO2 emissions at different co-firing ratios. These scenarios reflect possible implementations in developed countries with established carbon price schemes, such as the United Kingdom and Australia, as well as developing or middle-income countries without strong carbon policy schemes, such as Malaysia and Indonesia. The outcome of this work will help determine whether retrofitting individual coal power plants is worthwhile for reducing greenhouse gas emissions. It is also significant to comprehend the role of CCS in the retrofit system and the associated co-firing ratio for biomass gasification systems. This work contributes to the international agenda delineated in the International Energy Agency (IEA) report addressing carbon lock-in and stranded assets, which potentially stem from the premature decommissioning of contemporary coal-based electricity generation facilities. This work also aligns with Malaysia's National Energy Transition Roadmap, which focuses on bioenergy and CCS.
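
To make the hourly MILP idea concrete, the following is a minimal, heavily simplified sketch of such a dispatch model with biomass co-firing, CCS and a carbon price, written with the PuLP library. All numbers, variable names and the single-day horizon are illustrative assumptions, not data or structure from the authors' model.

```python
# Toy hourly dispatch MILP with biomass co-firing, optional CCS and a carbon price.
import pulp

T = range(24)                                        # one illustrative day, hourly resolution
demand = [350 + 50 * (8 <= t <= 20) for t in T]      # MW, made-up load profile
cap = 500.0                                          # plant capacity, MW
cofire = 0.3                                         # biomass co-firing ratio (energy basis), assumed
ef_coal, ef_bio = 0.95, 0.0                          # tCO2/MWh; biomass treated as carbon-neutral here
cost_coal, cost_bio = 30.0, 45.0                     # $/MWh fuel cost, assumed
carbon_price = 50.0                                  # $/tCO2 (set to 0 for the "no carbon price" scenario)
ccs_capture, ccs_penalty = 0.9, 10.0                 # capture rate and $/MWh energy penalty, assumed

m = pulp.LpProblem("CB_CCS_dispatch", pulp.LpMinimize)
gen = pulp.LpVariable.dicts("gen", T, lowBound=0, upBound=cap)   # MW generated each hour
ccs_on = pulp.LpVariable.dicts("ccs_on", T, cat="Binary")        # CCS operating in hour t?
em_vented = pulp.LpVariable.dicts("em_vented", T, lowBound=0)    # tCO2 vented each hour

blended_ef = (1 - cofire) * ef_coal + cofire * ef_bio
for t in T:
    # vented emissions: reduced by up to capture*ef*cap when CCS runs (big-M style linearization)
    m += em_vented[t] >= blended_ef * gen[t] - ccs_capture * blended_ef * cap * ccs_on[t]
    m += gen[t] >= demand[t]                         # meet the (simplified) hourly demand

fuel_cost = pulp.lpSum(((1 - cofire) * cost_coal + cofire * cost_bio) * gen[t] for t in T)
ccs_cost = pulp.lpSum(ccs_penalty * cap * ccs_on[t] for t in T)
carbon_cost = pulp.lpSum(carbon_price * em_vented[t] for t in T)
m += fuel_cost + ccs_cost + carbon_cost              # objective: total operating cost

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(m.objective))
```

Running the same model with carbon_price = 0 versus a positive value mimics, at toy scale, the two scenarios evaluated in the study.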



Methodology for multi-actor and multi-scale decision support for Water-Food-Energy systems

Amaya Saint-Bois1, Ludovic Montastruc1, Marianne Boix1, Olivier Therond2

1Laboratoire de Génie Chimique, UMR 5503 CNRS, Toulouse INP, UPS, 4 Allée Emile Monso, 31432 Toulouse Cedex 4, France; 2UMR 1121 LAE INRAE- Université de Lorraine – ENSAIA, 54000 Nancy, France

We have designed a generic multi-actor, multi-level framework to optimize the management of water-energy-food nexus systems. These are systems essential for human life, characterized by water, energy and food synergies and trade-offs at varied spatial and temporal scales. They are managed by cross-sector decision-makers at varied decision levels. They are complex and dynamic systems for which the operational level cannot be overlooked when designing adequate management strategies.

Our methodology combines spatial, operational, multi-agent-based integrated simulations of water-energy-food nexus systems with strategic decision-making methods (Saint-Bois et al., 2024). We have implemented it to allocate land-use alternatives to agricultural plots. The number of possible combinations of parcel land-use allocations in a territory equals the number of land-use alternatives explored for each parcel raised to the power of the number of parcels in the territory. Stochastic multi-criteria decision-making methods have been designed to provide decision support for large territories (more than 1000 parcels). A multi-objective optimization method has been designed to produce optimized regional-level land-use scenarios.
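
In symbols (notation introduced here only for illustration), the size of the design space is

```latex
\[
N_{\text{combinations}} = A^{P},
\]
```

where A is the number of land-use alternatives per parcel and P the number of parcels. For the case study below, with P = 15,224 parcels, even A = 2 alternatives per parcel already gives 2^15224 combinations, which rules out exhaustive enumeration and motivates the stochastic multi-criteria and MILP-based methods.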

The methodology has been applied to an agricultural watershed of approximately 800 km2 and 15224 parcels situated downstream of the French Aveyron River. The watershed experiences water stress and is located in one of France’s sunniest regions. Renewable energy production on agricultural land appears as a means to meet national renewable energy production targets and to move towards autonomous, sustainable agricultural systems and regions. The installation of renewable energy generation units on agricultural land facing water stress is a perfect illustration of a complex water-energy-food system for which a holistic approach is required. MAELIA (Therond et al., 2014) (modelling of socio-agro-ecological systems for landscape integrated assessment), a multi-agent-based platform developed by French researchers to simulate complex agro-hydrological systems, has been used to simulate the dynamics of water-energy-food nexus systems at the operational level. Three strategic multi-criteria decision-making methods that combine Monte Carlo simulations with the Analytic Hierarchy Process method have been implemented. The first one is local; it selects land-use alternatives that optimize multi-sector parcel-level indicators. The other two are regional; decisions are based on regional indicators. The first regional decision-making method identifies the best uniform regional scenario from those known, and the second explores combinations of parcel land-use allocations and selects the one that optimizes multi-sector criteria at the regional level. A multi-objective optimization method that combines MILP (Mixed Integer Linear Programming) and goal programming has been implemented with IBM’s ILOG CPLEX optimization studio to find parcel-level land-use allocations that optimize regional multi-sector criteria.

The three decision-making methods provide the same result: covering all land that is suitable for solar panels with solar panels optimizes parcel and regional multi-criteria performance indicators. Perspectives include simulating scenarios with supportive agricultural policies, adding social indicators, and designing a game-theory-based strategic decision-making method.



Synergies of Adaptive Learning for Surrogate-Based Flowsheet Model Maintenance

Balázs Palotai1,2, Gábor Kis1, János Abonyi2, Ágnes Bárkányi2

1MOL Group Plc.; 2Faculty of Engineering, University of Pannonia

The integration of digital models with business processes and real-time data access is pivotal for advancing Industry 4.0 and autonomous systems. This evolution necessitates that digital models maintain high fidelity and credibility to ensure reliable decision support in dynamic environments. Flowsheet models, commonly used for process simulation and optimization in such contexts, often face challenges related to convergence issues and computational demands during optimization. Surrogate models, which approximate complex models with simpler mathematical representations, present a promising solution to mitigate these challenges by estimating calibration factors for flowsheet models efficiently. Traditionally, surrogate models are trained using Latin Hypercube Sampling to capture a broad range of system behaviors. However, physical systems in industrial applications are typically operated within specific local regions, where globally trained surrogate models may not perform adequately. This discrepancy limits the effectiveness of surrogate models in accurately calibrating flowsheet models, especially when the system deviates from the conditions used during the surrogate model training.

This paper introduces a novel adaptive calibration methodology that combines the principles of active and adaptive learning to enhance surrogate model performance for flowsheet model calibration. The proposed approach iteratively refines the surrogate model by generating new data points in the local operating regions of interest using the flowsheet model itself. This adaptive retraining process ensures that the surrogate model remains accurate across both local and global domains, thus providing reliable calibration factors for the flowsheet model.
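
A minimal sketch of the adaptive retraining idea is shown below, assuming a placeholder in place of the expensive flowsheet run and an arbitrary operating point; it is not the authors' implementation, only an illustration of local resampling around the region of interest.

```python
# Adaptive local refinement of a surrogate for an expensive flowsheet calibration.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def flowsheet_calibration_factor(x):
    """Placeholder for an expensive flowsheet run returning a calibration factor."""
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

bounds = np.array([[0.0, 1.0], [0.0, 1.0]])
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(30), bounds[:, 0], bounds[:, 1])     # global LHS training set
y = np.array([flowsheet_calibration_factor(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
x_op = np.array([0.7, 0.2])                                       # current operating point (assumed)
for it in range(5):                                               # adaptive refinement iterations
    gp.fit(X, y)
    # generate a few new flowsheet runs around the local operating region of interest
    X_new = np.clip(x_op + 0.05 * np.random.randn(4, 2), bounds[:, 0], bounds[:, 1])
    y_new = np.array([flowsheet_calibration_factor(x) for x in X_new])
    X, y = np.vstack([X, X_new]), np.hstack([y, y_new])

gp.fit(X, y)
pred, std = gp.predict(np.atleast_2d(x_op), return_std=True)
print(f"calibration factor ~ {pred[0]:.3f} +/- {std[0]:.3f}")
```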

A case study on a simplified refinery process demonstrates the effectiveness of the proposed methodology. The adaptive surrogate-based calibration significantly reduces the computational time associated with direct simulation-based calibration while maintaining high accuracy in model predictions. The results show an improvement in both the efficiency and precision of the flowsheet model calibration process, highlighting the synergistic benefits of integrating surrogate models into adaptive calibration strategies for industrial process engineering.

In summary, the synergies between adaptive maintenance of surrogate and flowsheet models offer a robust solution for maintaining model fidelity and reducing computational costs in dynamic industrial environments. This research contributes to the field of computer-aided process engineering by presenting a methodology that not only supports real-time decision-making but also enhances the adaptability and performance of digital models in the face of evolving physical systems.



Comparison of Prior Mean and Multi-Fidelity Bayesian Optimization of a Hydroformylation Reactor

Stefan Tönnis, Luise F. Kaven, Eike Cramer

Process Systems Engineering, RWTH Aachen University, Germany

Accurate process models are not always available and can be prohibitively expensive to obtain for model-based optimization. Hence, the process systems engineering (PSE) community has gained an interest in Bayesian Optimization (BO), since it approximates black-box objectives using probabilistic Gaussian process (GP) surrogate models [1]. BO fits the surrogate model by iteratively proposing experiments through the optimization of so-called acquisition functions and updating the surrogate based on the results. Although BO is generally known as sample-efficient, treating chemical engineering design problems as fully black-box problems can still be prohibitively expensive, particularly for high-cost technical-scale experiments. At the same time, there is an extensive knowledge and modeling base for chemical engineering design problems that is fully neglected by black-box algorithms such as BO. One widely known option to include such prior knowledge in BO is prior mean modeling [2], where the user complements the BO algorithm with an initial guess, i.e., the prior mean. Alternatives include hybrid models or compositions of GPs with mechanistic equations [3]. A lesser-known alternative is augmenting the GP with lower-fidelity data [4], e.g., from low-cost simulations or approximate models. Such low-fidelity data can give cheap yet valuable insights, which reduces the number of high-cost experiments. In this work, we compare the use of prior mean and multi-fidelity modeling for BO in PSE design problems. We first review how prior mean and multi-fidelity modeling can be incorporated, using multi-fidelity benchmark problems such as the well-known Forrester, Rosenbrock, and Rastrigin test functions. In a second step, we apply the two methods to optimize a multi-phase reaction mini-plant process, including a decanter separation step and a recycle stream. The process is based on the hydroformylation of 1-dodecene in microemulsion systems [5]. Overall, we observe accelerated convergence on the different test functions and the hydroformylation mini-plant. In fact, combining both prior mean and multi-fidelity modeling methods achieves the best overall fit of the GP surrogate models. However, our analysis also reveals how poorly chosen prior mean functions can cause the algorithm to get stuck in local minima or lead to numerical failure.
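
For readers unfamiliar with prior mean modeling, the sketch below shows its core ingredient with plain numpy: the GP is conditioned on the residuals between the data and a user-supplied prior mean, so it only has to learn the mismatch. The toy objective and prior are illustrative, not the hydroformylation case.

```python
# GP regression with a non-zero prior mean m(x), the building block of prior-mean BO.
import numpy as np

def kernel(A, B, ls=0.3, var=1.0):                        # squared-exponential kernel
    d = A[:, None, :] - B[None, :, :]
    return var * np.exp(-0.5 * np.sum(d ** 2, axis=-1) / ls ** 2)

def prior_mean(X):                                         # cheap approximate model (assumed)
    return np.sin(5 * X[:, 0])

def objective(X):                                          # expensive "true" objective (toy)
    return np.sin(5 * X[:, 0]) + 0.3 * np.cos(12 * X[:, 0])

X_train = np.random.default_rng(1).uniform(0, 1, (6, 1))   # few expensive evaluations
y_train = objective(X_train)
noise = 1e-6

K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
X_test = np.linspace(0, 1, 200)[:, None]
Ks = kernel(X_test, X_train)

# Condition on residuals y - m(X): the GP models only the prior-mean mismatch.
alpha = np.linalg.solve(K, y_train - prior_mean(X_train))
post_mean = prior_mean(X_test) + Ks @ alpha
post_var = kernel(X_test, X_test).diagonal() - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))

print("posterior mean near x = 0.5:", post_mean[100])
```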

Bibliography
[1] Roman Garnett. Bayesian Optimization. Cambridge University Press, Cambridge, United Kingdom, 2023.

[2] Aniket Chitre, Jayce Cheng, Sarfaraz Ahamed, Robert C. M. Querimit, Benchuan Zhu, Ke Wang, Long Wang, Kedar Hippalgaonkar, and Alexei A. Lapkin. pHbot: Self-driven robot for pH adjustment of viscous formulations via physics-informed ML. Chemistry - Methods, 4(2), 2024.

[3] Leonardo D. González and Victor M. Zavala. BOIS: Bayesian optimization of interconnected systems. IFAC-PapersOnLine, 58(14):446–451, 2024.

[4] Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In Ryan P. Adams and Vibhav Gogate, editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pages 788–798. PMLR, 2020.

[5] David Müller, Markus Illner, Erik Esche, Tobias Pogrzeba, Marcel Schmidt, Reinhard Schomäcker, Lorenz T. Biegler, Günter Wozny, and Jens-Uwe Repke. Dynamic real-time optimization under uncertainty of a hydroformylation mini-plant. Computers & Chemical Engineering, 106:836–848, 2017.



A global sensitivity analysis for a bipolar membrane electrodialysis capturing carbon dioxide from the air

Grazia Leonzio1, Alexia Thill2, Nilay Shah2

1Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123 Cagliari, Italy, and Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK; 2Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK

Global warming and climate change are two critical, current global challenges. For this reason, as the concentration of atmospheric carbon dioxide (CO2) continues to rise, it is becoming increasingly imperative to develop efficient and cost-effective technologies for controlling the atmospheric CO2 concentration. In addition to the capture of CO2 from flue gases and industrial processes, new solutions to capture CO2 from the air have been proposed and investigated in the literature, such as absorption, adsorption, ion-exchange resins, mineral carbonation, membranes, photocatalysis, cryogenic separation, and electrochemical and electrodialysis approaches (Leonzio et al., 2022). These are the well-known direct air capture (DAC) or negative emission technologies (NETs).

Among them, in the electrodialysis approach, a bipolar membrane electrodialysis (BPMED) stack is used to regenerate the hydroxide-based solvent (NaOH or KOH aqueous solution) coming from an absorption column that captures CO2 from the air (Sabatino et al., 2020). In this way, it is possible to recycle the solvent to the column and release the captured CO2 for its storage or utilization.

Although not yet deployed at an industrial or even pilot scale, CO2 separation through BPMED has already been described and analyzed in the literature (Eisaman et al., 2011; Sabatino et al., 2020, 2022; Vallejo Castano et al., 2024).

Regarding the economic aspect, a preliminary levelized cost of the BPM-based process was suggested to be 770 $/tonCO2 due to the high cost of the membrane, the large electricity consumption, and uncertainties on the lifetime of the materials (Sabatino et al., 2020). Due to the relatively early stage of development, process optimization through the use of a mathematical model is therefore useful to support design and development through identification of the best operating conditions and parameters, along with a Global Sensitivity Analysis (GSA) aimed at identifying the operating parameters that most strongly influence cost and energy consumption.

In this research, a mathematical model for a BPMED capturing CO2 from the air is proposed to conduct a GSA to identify the most effective operating condition for total costs (including capital and operating expenditures) and energy consumption, as the considered Key Performance Indicators (KPIs). The investigated uncertain parameters are: current density, concentration in the rich solution, membrane active area, number of cell pairs, CO2 partial pressure in the gas phase, load ratio and carbon loading.
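
A variance-based GSA of this kind can be sketched with the SALib package as below; the parameter bounds and the KPI function are placeholders standing in for the authors' BPMED model, used only to show how first-order and total Sobol indices would be computed over the uncertain parameters listed above.

```python
# Sobol sensitivity indices for a (placeholder) BPMED energy KPI using SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["current_density", "membrane_area", "n_cell_pairs", "load_ratio"],
    "bounds": [[100, 1500], [0.5, 10.0], [10, 200], [0.1, 0.9]],   # assumed ranges
}

def energy_kpi(x):
    """Placeholder KPI surrogate standing in for the BPMED process model."""
    j, area, ncp, lr = x
    return 0.002 * j * area + 5.0 * ncp / (1 + lr) + 0.1 * j * lr

X = saltelli.sample(problem, 1024)                 # N*(2D+2) model evaluations
Y = np.apply_along_axis(energy_kpi, 1, X)
Si = sobol.analyze(problem, Y, print_to_console=False)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s}  S1={s1:6.3f}  ST={st:6.3f}")
```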

References

Vallejo Castano, S., Shu, Q., Shi, M., Blauw, R., Loldrup Fosbøl, P., Kuntke, P., Tedesco, M., Hamelers, H.V.M., 2024. Chemical Engineering Journal 488, 150870

Eisaman, M. D.; Alvarado, L.; Larner, D.; Wang, P.; Littau, K.A. 2011. Energy Environ. Sci. 4 (10), 4031.

Leonzio, G., Fennell, P.S., Shah, N., 2022, Appl. Sci., 12(16), 8321

Sabatino, F., Mehta, M., Grimm, A., Gazzani, M., Gallucci, F., Kramer, G.J., and Annaland, M., 2020. Ind. Eng. Chem. Res. 59, 7007−7020

Sabatino, F., Gazzani, M., Gallucci, F., Annaland, M., 2022. Ind. Eng. Chem. Res. 61, 12668−12679



Refrigerant Selection and Cycle Design for Industrial Heat Pump Applications exemplified for Distillation Processes

Jonas Schnurr, Momme Adami, Mirko Skiborowski

Hamburg University of Technology, Institute of Process System Engineering, Germany


In the context of global warming, the essential objectives for industry are the transition to renewable energy and the improvement of energy efficiency. A potential approach to achieving both of these goals in a single step is the implementation of heat pumps, which effectively recover low-temperature waste heat that would otherwise be lost to the environment by elevating it to a higher temperature level where it can be reused within the process. The application range of heat pumps is therefore not limited to new designs; they also have huge potential as retrofit options for existing processes, reducing the external energy demand [1] and electrifying industrial processes, thereby promoting a more sustainable industry with an increased share of renewable electricity generation.

Nevertheless, the optimal design of heat pumps depends heavily on the selection of an appropriate refrigerant, as the refrigerant performance is influenced by both thermodynamic properties and the heat pump cycle design, which is typically fixed in current selection approaches. Methods like iterative approaches [2], database screening followed by simulations [3], and optimization of thermodynamic parameters with subsequent identification of real refrigerants [4] are computationally intensive and time-consuming. Although these methods can identify thermodynamically beneficial refrigerants, practical application may be hindered by limitations of the compressor. Additionally, these approaches are challenging to implement in process design tools.

The current work presents a novel approach for a fast screening and identification of suitable refrigerant and heat pump cycle designs for specific applications, considering a variety of established refrigerants. The method automatically evaluates the performance of 38 pure refrigerants for any heat pump with defined heat sink and source, adapting the heat pump design by incorporating an internal heat exchanger, in case superheating the refrigerant prior to compression is required. By considering practical constraints such as compression ratio and compressor discharge temperature, the remaining suitable refrigerants are ranked based on energy demand or COP.
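
The flavour of such a screening can be illustrated with a short script built on CoolProp, ranking an assumed candidate list by ideal-cycle heating COP under simple compression-ratio and discharge-temperature limits; the fluids, temperature levels and limits below are illustrative only and not the 38 refrigerants or constraints of the actual tool.

```python
# Ideal-cycle COP screening of a few refrigerants for a given heat sink/source.
from CoolProp.CoolProp import PropsSI

T_evap, T_cond = 60 + 273.15, 120 + 273.15     # K: waste-heat source / heat sink levels (assumed)
candidates = ["R600a", "R1234ze(Z)", "R245fa", "Butane", "Ammonia"]
max_pr, max_T_discharge = 6.0, 140 + 273.15    # compression-ratio / discharge-T limits (assumed)

results = []
for f in candidates:
    try:
        p_evap = PropsSI("P", "T", T_evap, "Q", 1, f)
        p_cond = PropsSI("P", "T", T_cond, "Q", 1, f)
        h1 = PropsSI("H", "T", T_evap, "Q", 1, f)          # saturated vapour at evaporator outlet
        s1 = PropsSI("S", "T", T_evap, "Q", 1, f)
        h2 = PropsSI("H", "P", p_cond, "S", s1, f)         # isentropic compression
        T2 = PropsSI("T", "P", p_cond, "S", s1, f)         # compressor discharge temperature
        h3 = PropsSI("H", "T", T_cond, "Q", 0, f)          # saturated liquid at condenser outlet
        cop = (h2 - h3) / (h2 - h1)                        # heating COP of the ideal cycle
        if p_cond / p_evap <= max_pr and T2 <= max_T_discharge:
            results.append((f, cop, p_cond / p_evap))
    except ValueError:
        pass                                               # state outside the fluid's valid range

for f, cop, pr in sorted(results, key=lambda r: -r[1]):
    print(f"{f:12s} COP={cop:5.2f}  pressure ratio={pr:4.1f}")
```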

The application of an integrated process design and screening is demonstrated for the evaluation of different distillation processes, by linking the screening tool with an existing shortcut screening framework proposed by Skiborowski [5]. This integration enables the combination of heat pumps with other energy integration methods, like thermal coupling, thereby facilitating a more comprehensive assessment of potential process variants and the identification of the most promising process alternatives.

References

[1] A. A. Kiss, C. A. I. Ferreira, Heat Pumps in Chemical Process Industry, CRC Press, Boca Raton, 2017

[2] J. Jiang, B. Hu, T. Ge, R. Wang, Energy 2022, 241, 1222831.

[3] M. O. McLinden, J. S. Brown, R. Brignoli, A. F. Kazakov, P. A. Domanski, Nature Communications 2017, 8 (1), 1-9.

[4] J. Mairhofer, M. Stavrou, Chemie Ingenieur Technik 2023, 95 (3), 458-466.

[5] M. Skiborowski, Chemical Engineering Transactions 2018, 69, 199-204.



CO2 conversion to polyethylene based on power-to-X technology and renewable resources

Monika Dokl1, Blaž Likozar2, Chunyan Si3, Zdravko Kravanja1, Yee Van Fan3,4, Lidija Čuček1

1Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, 2000 Maribor, Slovenia; 2Department of Catalysis and Chemical Reaction Engineering, National Institute of Chemistry, Hajdrihova 19, Ljubljana 1001, Slovenia; 3Sustainable Process Integration Laboratory, Faculty of Mechanical Engineering, Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech Republic; 4Environmental Change Institute, University of Oxford, Oxford OX1 3QY, United Kingdom

In addition to increasing material and energy efficiency, the plastics sector is already stepping up its efforts to further minimize greenhouse gas emissions during the production phase in order to support the EU's transition to climate neutrality by 2050. These initiatives include expanding the circular economy in the plastics value chain through recycling, increasing the use of renewable raw materials, switching to renewable energy and developing advanced carbon capture and utilization methods. Bio-based plastics have been extensively explored as a potential substitute for plastics derived from fossil fuels. Despite the potential of bio-based plastics, there are concerns about sustainability, including the impact on land use, water resources and biodiversity. An alternative route is to convert CO2 into valuable chemicals using power-to-X technology, which uses surplus renewable energy to transform CO2 into fuels, chemicals and plastics. In this study, a process simulation of polyethylene production using CO2 and renewable electricity is performed to identify feedstocks aligned with climate objectives. CO2-based polyethylene production is compared with conventional fossil-based production, and the burdening and unburdening effects of a potential transition to the production of renewable plastics are evaluated.



Design of Experiments Algorithm for Comprehensive Exploration and Rapid Optimization in Chemical Space

Kazuhiro Takeda1, Kondo Masaru2, Muthu Karuppasamy3,4, Mohamed S. H. Salem3,5, Takizawa Shinobu3

1Shizuoka University, Japan; 2University of Shizuoka, Japan; 3Osaka University, Japan; 4Graduate School of Pharmaceutical Sciences, Osaka University, Japan; 5Suez Canal University, Egypt

1. Introduction

Bayesian Optimization (BO)1) is known for its ability to explore optimal conditions with a limited number of experiments. However, the number of experiments conducted through BO is often insufficient to fully understand the experimental condition space. To address this, various experimental design methods have been proposed. Among these, the Definitive Screening Design (DSD)2) has been introduced as a method that minimizes confounding and requires fewer experiments. This study proposes an algorithm that combines DSD and BO to reduce confounding, ensure sufficient experimentation to understand the experimental condition space and enable rapid optimization.

2. Fusion Algorithm of DSD and BO

In DSD, each factor is set at three levels (+, 0, -), and experiments are conducted with one factor at 0 and the others at + or -. This process is repeated for the number of factors m, and a final experiment is conducted with all factors set to 0, resulting in a total of 2m+1 experiments. Typically, after conducting experiments based on DSD, a model is created by selecting factors using criteria such as the AIC (Akaike information criterion), followed by additional experiments to optimize the objective function. Using BO allows for optimization with fewer additional experiments.

In this study, the levels (+ and -) required by DSD are determined based on BO, enabling the integration of BO from the DSD experiment stage. The proposed algorithm is outlined as follows:

1. Formulate a DSD experimental plan with 0, +, and - levels.

2. Conduct experiments using the maximum and minimum ranges (as defined by DSD) until all variables are no longer unique.

3. For the next experimental condition, use BO to search within the range of the original planned values with the same sign.

4. Conduct experiments under the explored conditions.

5. If the experimental plan formulated in Step 1 is complete, proceed to the next step; otherwise, return to Step 3.

6. Use BO to explore the optimal conditions within the range.

7. Conduct experiments under the explored conditions.

8. If the convergence criteria are met, terminate the process; otherwise, return to Step 6.
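
A much-simplified sketch of the DSD-then-BO idea for m = 3 factors on a toy objective is given below (variables scaled to [-2, 2] as in the numerical experiment). It omits Steps 2-5, where BO also chooses the +/- levels within each DSD run, and simply seeds a standard BO loop with a DSD-style plan; all details are illustrative, not the authors' implementation.

```python
# DSD-style initial plan followed by BO refinement (expected improvement) on a toy problem.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                    # toy objective to minimize
    return np.sum((x - 0.5) ** 2) + 0.3 * np.sin(5 * x[0])

lo, hi = -2.0, 2.0
# DSD-style plan: for each factor, a fold-over pair with that factor at 0, plus a centre run.
plan = np.array([[ 0,  1,  1], [ 0, -1, -1],
                 [ 1,  0, -1], [-1,  0,  1],
                 [ 1, -1,  0], [-1,  1,  0],
                 [ 0,  0,  0]], dtype=float) * hi
X = plan.copy()
y = np.array([f(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
rng = np.random.default_rng(0)
for it in range(15):                         # BO refinement loop (Steps 6-8)
    gp.fit(X, y)
    cand = rng.uniform(lo, hi, size=(2000, 3))
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next]); y = np.append(y, f(x_next))

print("best found:", X[np.argmin(y)], y.min())
```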

3. Numerical Experiment

Numerical experiments were conducted to minimize each objective function. The upper and lower limits of each variable were set at (-2, 2), and the experiment was conducted 10 times. The results indicate that the proposed algorithm converges faster than BO alone. Moreover, the variability in convergence speed was also reduced. Although not shown due to space constraints, the proposed algorithm also demonstrated faster and more stable convergence compared to other experimental design methods combined with BO.

4. Conclusion

This study proposed an algorithm combining DSD and BO to minimize confounding, reduce the required experiments, and enable rapid optimization. Numerical experiments demonstrated that the algorithm converges early and stably. Future work will involve verifying the effectiveness of the proposed algorithm through actual experiments.

References

1. J. Snoek, et al.; arXiv:1206.2944, pp.1-9, 2012

2. B. Jones and C. J. Nachtsheim; J. Qual. Technol., Vol.43, pp.1-15, 2011



Surrogate Modeling for Real-Time Simulation of Spatially Distributed Dynamically Operated Chemical Reactors: A Power-to-X Case Study

Luisa Peterson1, Ali Forootani2, Edgar Ivan Sanchez Medina1, Ion Victor Gosea1, Peter Benner1,3, Kai Sundmacher1,3

1Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, Magdeburg, 39106, Germany; 2Helmholtz Centre for Environmental Research, Permoserstraße 15, Leipzig, 04318 , Germany; 3Otto von Guericke University Magdeburg, Universitaetsplatz 2, Magdeburg, 39106, Germany

Spatially distributed dynamical systems are omnipresent in chemical engineering. These systems are often modeled by partial differential equations (PDEs) to describe complex, coupled processes. However, solving PDEs can be computationally expensive, especially for highly nonlinear systems. This is particularly challenging for outer-loop computations such as optimization, control, and uncertainty quantification, all requiring real-time performance. Surrogate models reduce computational costs and are classified into data-fit, reduced-order, and hierarchical models. Data-fit models use statistical techniques or machine learning to map input-output relationships, while reduced-order models project equations onto a lower-dimensional subspace. Hierarchical models simplify physical or numerical methods to reduce complexity.

In this study, we simulate the dynamic behavior of a catalytic CO2 methanation reactor, critical for Power-to-X applications that convert CO2 and green hydrogen to methane. The reactor must adapt to changing load conditions, which requires real-time executable simulation models. A one-dimensional mechanistic model, calibrated with pilot plant data, simulates temperature and CO2 conversion. We develop and test three surrogate models using load change simulation data. (i) Operator Inference (OpInf) projects the system into a lower dimensional subspace and infers a quadratic polynomial within this space, incorporating stability constraints to improve prediction reliability. (ii) Sparse Identification of Nonlinear Dynamics (SINDy) uncovers the system's governing equations through sparse regression. Our adaptation of SINDy uses Q-DEIM to efficiently select significant data for regression inputs and is implemented within a neural network framework with a Physics-Informed Neural Network (PINN) loss function. (iii) The proposed Graph Neural Network (GNN) uses a windowed graph structure with Graph Attention Networks.
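
As a pointer to how the SINDy ingredient works in practice, the sketch below applies the pysindy package to a toy two-state system; the paper's variant additionally uses Q-DEIM data selection and a PINN-style loss within a neural network framework, which are not reproduced here, and the toy dynamics are not the reactor model.

```python
# Sparse identification of governing equations (SINDy) on toy Lotka-Volterra data.
import numpy as np
import pysindy as ps
from scipy.integrate import solve_ivp

def rhs(t, x):                                # toy nonlinear dynamics (not the methanation reactor)
    return [1.0 * x[0] - 0.5 * x[0] * x[1],
            -1.0 * x[1] + 0.3 * x[0] * x[1]]

t = np.linspace(0, 10, 1000)
sol = solve_ivp(rhs, (0, 10), [2.0, 1.0], t_eval=t)
X = sol.y.T                                   # snapshot matrix: (time, states)

model = ps.SINDy(
    feature_library=ps.PolynomialLibrary(degree=2),
    optimizer=ps.STLSQ(threshold=0.05),
)
model.fit(X, t=t)
model.print()                                 # recovered sparse governing equations
X_sim = model.simulate(X[0], t)               # surrogate rollout from the initial state
```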

When reproducing data from the mechanistic model, OpInf achieves a low relative Frobenius norm error of 0.043% for CO2 conversion and 0.030% for temperature. The quadratic, guaranteed stable polynomial provides a good balance between interpretability and performance. SINDy gives relative errors of 2.37% for CO2 conversion and 0.91% for temperature. While SINDy is the most interpretable model, it is also the most computationally intensive to evaluate, requires manual tuning of the regression library, and occasionally experiences stability issues. GNNs produce relative errors of 1.08% for CO2 conversion and 0.81% for temperature. GNNs offer the fastest evaluation and require the least domain-specific knowledge of the three methods, but their black-box nature limits interpretability and they are prone to overfitting and can struggle with extrapolation. All surrogate models reduce computational time while maintaining acceptable accuracy, making them suitable for real-time decision-making in dynamic reactor operations. The choice of model depends on the application requirements, in particular the balance between speed and interpretability. In this case, OpInf provides the best overall balance, while SINDy and GNNs provide useful trade-offs depending on whether interpretability or speed is prioritized [2].


References

[1] R. T. Zimmermann, J. Bremer, and K. Sundmacher, “Load-flexible fixed-bed reactors by multi-period design optimization,” Chemical Engineering Journal, vol. 428, 130771, 2022, DOI: 10.1016/j.cej.2021.130771.

[2] L. Peterson, A. Forootani, E. I. S. Medina, I. V. Gosea, K. Sundmacher, and P. Benner, “Towards Digital Twins for Power-to-X: Comparing Surrogate Models for a Catalytic CO2 Methanation Reactor”, Authorea Preprints, 2024, DOI: 10.36227/techrxiv.172263007.76668955/v1.



Computer Vision for Chemical Engineering Diagrams

Maged Ibrahim Elsayed Eid, Giancarlo Dalle Ave

McMaster University, Canada

This paper details the development of a state-of-the-art object, word, and connectivity detection system tailored for the analysis of chemical engineering diagrams, namely Process Flow Diagrams (PFDs), Block Flow Diagrams (BFDs), and Piping and Instrumentation Diagrams (P&IDs), utilizing cutting-edge computer vision methodologies. Chemical engineering diagrams play a pivotal role in the field, offering visual representations of plant processes and equipment. They are integral to the design, analysis, and operational phases of chemical processes, aiding in process documentation and serving as a foundation for simulating and monitoring the performance of essential equipment operations.

The necessity of automating the interpretation of BFDs, PFDs, and P&IDs arises from their widespread use and the challenges associated with their manual analysis. These diagrams, often stored as image-based PDFs, present significant hurdles in terms of data extraction and interpretation. Manual processing is not only labor-intensive but also prone to errors and inconsistencies. Given the complexity and volume of these diagrams, which include intricate details of plant processes and equipment, manual methods can lead to delays and inaccuracies. Automating this process with advanced computer vision techniques addresses these challenges by providing a scalable, accurate, and efficient means to extract and analyze information.

The primary aim of this project is to automate the interpretation of various chemical engineering diagrams, a task that has traditionally relied on manual expertise. This automation encompasses the precise detection of unit operations, text recognition, and the mapping of interconnections between components. To achieve this, the proposed methodology relies on rule-based and predefined approaches. These approaches are employed to detect unit operations by analyzing visual patterns and shapes, recognizing text using OCR techniques, and mapping the interconnections between components based on spatial relationships. This method specifically avoids deep learning which can be computationally intensive and often requires extensive labeling to effectively differentiate between various objects. These challenges can complicate implementation and scalability, making deep learning less suitable for this application. The results showed high detection accuracy, successfully identifying unit operations, text, and interconnections with reliable performance, even in complex diagrams.
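
A rough sketch of such a rule-based pipeline, using OpenCV template matching for symbols and pytesseract for text, is given below; file names, thresholds and the crude word-to-symbol association are placeholders and stand in for the richer spatial-relationship rules used for connectivity mapping in the actual system.

```python
# Rule-based symbol and text detection on a rasterized diagram page.
import cv2
import numpy as np
import pytesseract

diagram = cv2.imread("pfd_page.png", cv2.IMREAD_GRAYSCALE)      # rasterized diagram (placeholder file)
template = cv2.imread("symbol_pump.png", cv2.IMREAD_GRAYSCALE)  # one library symbol, e.g. a pump

# 1) Symbol detection by normalized cross-correlation against a symbol library.
res = cv2.matchTemplate(diagram, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)                                   # matching threshold (assumed)
boxes = [(x, y, template.shape[1], template.shape[0]) for x, y in zip(xs, ys)]

# 2) Text recognition: OCR with word-level bounding boxes for tags and labels.
ocr = pytesseract.image_to_data(diagram, output_type=pytesseract.Output.DICT)
words = [(w, x, y) for w, x, y, c in
         zip(ocr["text"], ocr["left"], ocr["top"], ocr["conf"]) if w.strip()]

# 3) Association: assign each word to the nearest detected symbol (crude stand-in for
#    the spatial-relationship rules used to map interconnections).
def nearest_symbol(word_xy, symbol_boxes):
    cx, cy = word_xy
    return min(range(len(symbol_boxes)),
               key=lambda i: (symbol_boxes[i][0] - cx) ** 2 + (symbol_boxes[i][1] - cy) ** 2)

print(f"{len(boxes)} symbol matches, {len(words)} words detected")
```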



Digital Twin for Operator Training- and Real-Time Support for a Pilot Scale Packed Batch Distillation Column

Mads Stevnsborg, Jakob K. Huusom, Krist V. Gernaey

PROSYS DTU, Denmark

Digital Twin (DT) is a frequently used term in industry and academia to describe data-centric models that accurately depict a physical system counterpart. DTs are typically used either in an offline context as Virtual Laboratories (VL) [4, 5] or in real-time applications as predictive toolboxes [2]. In processes with a low degree of automation, which instead rely heavily on operator competence in key decision-making situations, DTs can act as a guiding tool [1, 3]. This work explores the challenge of developing DTs to support operators by developing a combined virtual laboratory and decision support tool for students conducting experiments on a pilot-scale packed batch distillation column at the Technical University of Denmark [2]. Batch distillation is an unsteady operation, controlled here by a set of manual valves, which the operator must continuously balance to meet purity constraints without excessive consumption of utilities. The realisation is achieved by leveraging the software development and IT operations (DevOps) methodology with a modular compartmentalisation of DT resources to better leverage model applicability across various projects. The final solution comprises several stand-alone packages that together offer real-time communication with physical equipment through OPC-UA endpoints and a scalable simulation environment through web-based user interfaces (UI). The advantages of this implementation strategy are flexibility and speed, allowing process models to be updated continuously as data is generated and providing process operators with the necessary training and knowledge before and during operation to run experiments effectively, thereby enhancing the learning outcome.

References

[1] F. Bähner et al., 2021, “Challenges in Optimization and Control of Biobased Process Systems: An Industrial-Academic Perspective”, Industrial and Engineering Chemistry Research, Volume 60, Issue 42, pp. 14985-15003

[2] M. Jones et al., 2022, “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU)”, Computer-aided Chemical Engineering, Volume 49, pp. 1525-1530

[3] V. Steinwandter et al., 2019, “Data science tools and applications on the way to Pharma 4.0”, Drug Discovery Today, Volume 24, Issue 9, pp. 1795-1805

[4] M. Schueler & T. Mehling, 2022, “Digital Twin- A System for Testing and Training”, Computer Aided Chemical Engineering, Volume 52, pp. 2049-2055

[5] J.Ismite et al., 2019, “A systems engineering framework for the design of bioprocess operator training simulators”, E3s Web of Conferences, 2019, Volume 78, pp. 03001

[6] N. Kamihama et al., 2011, “Isobaric Vapor−Liquid Equilibria for Ethanol + Water + Ethylene Glycol and Its Constituent Three Binary Systems”, Journal of Chemical and Engineering Data, Volume 57, Issue 2, pp. 339-344



Hybridizing Neural Networks with Physical Laws for Advanced Process Modeling in Chemical Engineering

Jana Mousa, Stéphane Negny

INP Toulouse, France

Neural networks (NNs) have become indispensable tools for modeling complex systems due to their ability to learn and predict from vast datasets. Their success spans a wide range of applications, including chemical engineering processes. However, one key limitation of NNs is their lack of physical interpretability, which becomes critical when dealing with complex systems governed by known physical laws. In chemical engineering, particularly in unit operations like reactors—considered the heart of any process—the accuracy and reliability of models depend not only on their predictive capabilities, but also on their adherence to physical constraints such as mass and energy balances, reaction kinetics, and equilibrium constants.

This study investigates the integration of neural networks with nonlinear data reconciliation (NDR) as a method to impose physical constraints on predictive models. Nonlinear data reconciliation is a mathematical technique used to adjust measured data to satisfy predefined physical laws, enhancing model consistency and accuracy. By embedding NDR into neural networks, the resulting hybrid models ensure physical realism while retaining the flexibility and learning power of NNs.

The framework first trains an NN to capture nonlinear system relationships, then applies NDR to correct predictions so that key physical metrics—such as conversion, selectivity, and equilibrium constants in reactors—are met. This ensures that the model aligns not only with data but also with fundamental physical laws, enhancing the model's interpretability and reliability. Furthermore, this method's efficacy has been evaluated by comparing it to other hybrid approaches, such as Karush-Kuhn-Tucker Neural Networks (KKT-NN) and Karush-Kuhn-Tucker Physics-Informed Neural Networks (KKT-PINN), both of which aim to enforce physical constraints within neural networks.
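
The reconciliation step can be pictured as a constrained projection of the NN outputs, sketched below with scipy; the NN is represented by a placeholder returning hypothetical stream flows, and the single mass-balance constraint is illustrative of the kind of physical law enforced, not the authors' full formulation.

```python
# Project (hypothetical) NN flow predictions onto a mass-balance constraint.
import numpy as np
from scipy.optimize import minimize

def nn_predict(inputs):
    """Placeholder for a trained NN predicting [feed, product, byproduct] flows (kmol/h)."""
    return np.array([100.0, 62.0, 41.0])

y_hat = nn_predict(None)
w = 1.0 / np.array([2.0, 1.0, 1.0]) ** 2           # weights ~ 1 / variance of each prediction

def objective(y):                                   # weighted squared adjustment of the predictions
    return np.sum(w * (y - y_hat) ** 2)

constraints = [{"type": "eq", "fun": lambda y: y[0] - y[1] - y[2]}]   # overall mass balance

res = minimize(objective, y_hat, constraints=constraints)
print("raw NN prediction:", y_hat, " imbalance:", y_hat[0] - y_hat[1] - y_hat[2])
print("reconciled:       ", np.round(res.x, 2), " imbalance:", round(res.x[0] - res.x[1] - res.x[2], 6))
```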

In conclusion, the integration of physical interpretability into neural networks through nonlinear data reconciliation significantly enhances modeling accuracy and reliability in engineering applications. Future enhancements may focus on refining the method to accommodate a wider range of engineering challenges, thereby facilitating its application in diverse fields such as process engineering and system optimization.



Transferring Graph Neural Networks for Soft Sensor Modeling using Process Topologies

Maximilian F. Theisen1, Gabrie M.H. Meesters2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands; 2Product and Process Engineering, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Transfer learning allows, in theory, the re-use and fine-tuning of machine learning models and thus reduces data requirements. However, transferring data-driven soft sensor models is in practice often not possible. In particular, the fixed input structure of standard soft sensor models prohibits transfer if, for example, the sensor information is not identical in all plants.

We propose a process-aware graph neural network approach for transfer learning of soft sensor models across multiple plants. In our method, plants are modeled as graphs: Unit operations are nodes, streams are edges, and sensors are embedded as attributes. Our approach brings two advantages for transfer learning: First, we not only include sensor data but also crucial information on the plant topology. Second, the graph neural network algorithm is flexible with respect to its sensor inputs. We test the transfer learning capabilities of our modeling approach on ammonia synthesis loops with different process topologies (Moulijn, 2013). We build a soft sensor predicting the ammonia concentration in the product. After training on data from several processes, we successfully transfer our soft sensor model to a previously unseen process with a different topology. Our approach promises to extend the use case of data-driven soft sensors to cases where data from similar plants is leveraged.
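
The topology-aware idea can be sketched with PyTorch Geometric as below: unit operations are nodes, streams are edges, sensor readings are node features, and a pooled graph embedding predicts the product concentration. The architecture, feature sizes and toy data are assumptions for illustration, not the authors' model.

```python
# Graph-based soft sensor: pooled GNN embedding of a flowsheet graph -> scalar prediction.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

class PlantSoftSensor(torch.nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, data):
        x = self.conv1(data.x, data.edge_index).relu()
        x = self.conv2(x, data.edge_index).relu()
        x = global_mean_pool(x, data.batch)          # one embedding per flowsheet graph
        return self.head(x).squeeze(-1)

# Toy flowsheet: 4 unit operations (nodes) connected by streams (directed edges),
# each node carrying 3 sensor readings (e.g. T, p, flow) as features.
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]], dtype=torch.long)
x = torch.randn(4, 3)
graph = Data(x=x, edge_index=edge_index, batch=torch.zeros(4, dtype=torch.long))

model = PlantSoftSensor(n_features=3)
print("predicted concentration:", model(graph).item())
```

Because the graph convolution shares weights across nodes, the same trained model can be applied to a plant with a different number of units or sensors, which is what enables the transfer across topologies described above.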

References

Moulijn, J. A. (2013). Chemical Process Technology (2nd ed.). (M. Makkee & A. Diepen, Eds.) Chichester, West Sussex: John Wiley & Sons Inc.



Production scheduling based on Real-time Optimization and Zone Control Nonlinear Model Predictive Controller

José Matias1, Alvaro Marcelo Acevedo Peña2

1KU Leuven, Belgium; 2YPFB Refinación S.A.

The chemical industry has a high demand for process optimization methods and tools that enhance profitability while operating near nominal capacity. Product inventories, both in-process and end-of-process, serve as buffers to mitigate fluctuations in operation and demand while maintaining consistent and predictable production. Efficient product inventory management is crucial for the profitable operation of chemical plants. To ensure optimal operation, various strategies have been proposed that consider in-process storage and aim to satisfy mass balances while avoiding bottlenecks [1].

When final product demand is highly oscillatory with unexpected stoppages, end-of-process inventories must be carefully controlled within minimum and maximum bounds. This prevents plant shutdowns and ensures compliance with legal product supply requirements. In both cases, plant-wide operations should be considered when making in- and end-of-process product inventory level decisions to improve overall profitability [2].

To address this problem, we propose a holistic hierarchical two-layered strategy. The upper layer uses real-time optimization (RTO) to determine optimal plant flow rates from an economic perspective. The lower layer employs a zone control nonlinear model predictive controller (NMPC) to define inventory setpoints. The idea is that the RTO defines setpoints for the flow rates that manipulate plant throughput, while the NMPC maintains inventory levels within desired bounds and keeps flow rates as close as possible to the RTO-defined setpoints. The use of this two-layered holistic approach is novel for this specific problem; however, our primary contribution lies in introducing an ensemble of optimization problems at the RTO level. Each RTO problem is associated with a different uncertain product demand scenario. This enables us to recompute optimal plant throughput manipulator setpoints based on the current scenario, improving the overall strategy performance.
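
A generic form of the lower-layer zone-control objective (schematic notation introduced here, not necessarily the authors' exact formulation) is

```latex
\[
\begin{aligned}
&\min_{u_0,\dots,u_{N-1},\;\epsilon_1,\dots,\epsilon_N}\;
  \sum_{k=0}^{N-1} \left\| u_k - u_k^{\mathrm{RTO}} \right\|_R^2
  \;+\; \sum_{k=1}^{N} \left\| \epsilon_k \right\|_Q^2 \\
&\text{s.t.}\quad x_{k+1} = f(x_k, u_k), \qquad
  x^{\min} - \epsilon_k \le x_k \le x^{\max} + \epsilon_k, \qquad \epsilon_k \ge 0,
\end{aligned}
\]
```

where x_k are the tank inventory levels, u_k the throughput-manipulating flow rates, u_k^RTO the setpoints from the RTO layer, and epsilon_k slack variables that softly penalize leaving the inventory zone while the controller otherwise tracks the economically optimal flow rates.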

We tested our strategy on a three-stage distillation column system that separates a mixture of four products, inspired by an LPG production plant with recycle split vapour (RSV) invented by Ortloff Ltd [3]. While the lightest and cheapest product is directly sent to a pipeline, the other three more valuable products are stored in tanks. Demand for these three products fluctuates significantly, but can be forecasted in advance, allowing for proactive measures. We compared the results of our holistic two-layered strategy to typical actions taken by plant operators in various uncertain demand scenarios. Our approach addresses the challenges of mitigating bottlenecks and minimizing inventory fluctuations and is more effective than the operator decisions from an economic perspective.

[1] Skogestad, S., 2004. Computers & Chemical Engineering, 28(1-2), pp.219-234.

[2] Downs, J.J. and Skogestad, S., 2011. Annual Reviews in Control, 35(1), pp.99-110.

[3] Zhang S. et al., 2020. Comprehensive Comparison of Enhanced Recycle Split Vapour Processes for Ethane Recovery, Energy Reports, 6, pp.1819–1837.



Talking like Piping and Instrumentation Diagrams (P&IDs)

Achmad Anggawirya Alimin, Dominik P. Goldstein, Lukas Schulze Balhorn, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Piping and Instrumentation Diagrams (P&IDs) are pivotal in process engineering, serving as comprehensive references across multiple disciplines (Toghraei, 2019). However, the intricate nature of P&IDs and the complexity of the systems they describe pose challenges for engineers seeking to examine flowsheet overviews and details efficiently and accurately. Recent developments in flowsheet digitalization through computer vision and data exchange in the process industry (DEXPI) have opened up the potential to have a unified machine-readable format for P&IDs (Theisen et al., 2023). Yet, industrial DEXPI P&IDs are often extremely complex, often spanning thousands of pages.

We propose the ChatP&ID methodology that allows users to communicate with P&IDs using natural language. In particular, we represent DEXPI P&IDs as labelled property graphs and integrate them with Large Language Models (LLMs). The approach consists of three main parts: 1) a P&ID graph representation is developed following the DEXPI specification via our pyDEXPI Python package (Goldstein et al., n.d.); 2) a tool generates P&ID knowledge graphs from pyDEXPI; 3) the P&ID knowledge graph is integrated with LLMs using graph-based retrieval-augmented generation (graph-RAG). This approach allows users to communicate with P&IDs using natural language. It extends the LLM’s ability to retrieve contextual data from P&IDs and mitigates hallucinations. Leveraging the LLM's large corpus, the model is also able to interpret process information in the P&ID, which could support engineers' daily tasks. In the future, this work also opens up opportunities in the context of other generative Artificial Intelligence (genAI) solutions for P&IDs, such as auto-generation or auto-correction (Schweidtmann, 2024).
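
The retrieval step behind a graph-RAG setup of this kind can be sketched with networkx: the P&ID is held as a labelled property graph and a local subgraph around an entity mentioned in the user's question is serialized as context for the LLM. The hand-built graph and tag names below are illustrative; the actual work generates the graph from DEXPI via pyDEXPI rather than by hand.

```python
# Serialize the neighbourhood of a P&ID entity as LLM prompt context.
import networkx as nx

G = nx.DiGraph()
G.add_node("P-101", type="CentrifugalPump", tag="P-101")
G.add_node("V-101", type="Vessel", tag="V-101")
G.add_node("LIC-101", type="LevelController", tag="LIC-101")
G.add_edge("V-101", "P-101", type="Pipe", dn="DN80")
G.add_edge("LIC-101", "V-101", type="SignalLine")

def retrieve_context(graph, tag, hops=1):
    """Return a textual description of the neighbourhood of `tag` for LLM prompting."""
    nodes = nx.ego_graph(graph.to_undirected(), tag, radius=hops).nodes
    lines = [f"{n}: {graph.nodes[n]['type']}" for n in nodes]
    lines += [f"{u} -[{d['type']}]-> {v}" for u, v, d in graph.edges(data=True)
              if u in nodes and v in nodes]
    return "\n".join(lines)

prompt_context = retrieve_context(G, "V-101")
print(prompt_context)   # would be prepended to the user's question before calling the LLM
```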

References

Goldstein, D.P., Alimin, A.A., Schulze Balhorn, L., Schweidtmann, A.M., n.d. pyDEXPI: A Python implementation and toolkit for the DEXPI information model.

Schweidtmann, A.M., 2024. Generative artificial intelligence in chemical engineering. Nat. Chem. Eng. 1, 193–193. https://doi.org/10.1038/s44286-024-00041-5

Theisen, M.F., Flores, K.N., Balhorn, L.S., Schweidtmann, A.M., 2023. Digitization of chemical process flow diagrams using deep convolutional neural networks. Digit. Chem. Eng. 6, 100072.

Toghraei, M., 2019. Piping and instrumentation diagram development. Wiley, Hoboken, NJ, USA.



Multi-Objective Optimization and Analytical Hierarchical Process for Sustainable Power Generation Alternatives in the High Mountain Region of Santurbán: case of Pamplona, Colombia

Ana María Rosso-Cerón2, Nicolas Cabrera1, Viatcheslav Kafarov1

1Department of Chemical Engineering, Carrera 27 Calle 9, Universidad Industrial de Santander, Bucaramanga, Colombia; 2Department of Chemical Engineering, Cl. 5 No. 3-93, Kilometro 1 Vía Bucaramanga, Universidad de Pamplona, Norte de Santander, Colombia

This study presents an integrated approach combining the Analytical Hierarchical Process (AHP) and a Mixed-Integer Multi-Objective Linear Programming (MOMILP) model to evaluate and select sustainable power generation alternatives for Pamplona, Colombia. The research focuses on the high mountain region of Santurbán, a páramo ecosystem that provides water to over 2.5 million people and supports rich biodiversity. Given the region’s vulnerability to climate change, sustainable energy solutions are essential to ensure environmental conservation and energy security [1].

The MOMILP model considers several power generation technologies, including photovoltaic panels, wind turbines, biomass, and diesel plants. These alternatives are integrated into the local electrical distribution system with the goal of minimizing two objectives: costs (net present value) and CO₂ emissions, while adhering to design, operational, and budgetary constraints. The ε-constraint method was employed to generate a Pareto-optimal set of solutions, balancing trade-offs between economic and environmental performance. Additionally, the study examines the potential for forming local energy communities by allowing surplus electricity from renewable sources to be sold, promoting local economic growth and energy independence.

The AHP is used to assess these alternatives based on multiple criteria, including social acceptance, job creation, regional accessibility, technological maturity, reliability, pollutant emissions, land use, and habitat impact. Expert opinions were gathered through the Delphi method, and the criteria were weighted using Saaty’s scale. This comprehensive evaluation ensures that the decision-making process incorporates not only technical and economic aspects but also environmental and social considerations [2].
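
The AHP weighting step works, in its standard form, by taking the principal eigenvector of a Saaty pairwise-comparison matrix and checking its consistency; the small numpy sketch below illustrates this with a made-up 3x3 matrix, whereas the study itself uses more criteria and Delphi-elicited judgments.

```python
# AHP criterion weights from a Saaty pairwise-comparison matrix, with consistency check.
import numpy as np

# pairwise comparisons (Saaty 1-9 scale): e.g. emissions vs job creation vs social acceptance
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # Saaty's random index for n = 3 is 0.58
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```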

The analysis revealed that a hybrid solution combining solar, wind, and biomass technologies provides the best balance between economic viability and environmental sustainability. Solar energy, due to its technological maturity and minimal impact on the local habitat, emerged as a highly favourable option. Biomass, although contributing more to emissions than solar and wind, was positively evaluated for its potential to create local jobs and its high social acceptance in the region.

This study contributes to the growing body of literature on the integration of renewable energy sources into power distribution networks, particularly in ecologically sensitive areas like the Santurbán páramo. The combined use of AHP and MOMILP offers a robust framework for decision-makers, allowing for the systematic evaluation of sustainable alternatives based on technical performance and stakeholder priorities. This approach is particularly relevant for policymakers and utility companies engaged in Colombia’s energy transition efforts and sustainable development.

References

[1] Llambí, L. D., Becerra, M. T., Peralvo, M., Avella, A., Baruffol, M., & Díaz, L. J. (2019). Monitoring biodiversity and ecosystem services in Colombia's high Andean ecosystems: Toward an integrated strategy. Mountain Research and Development, 39(3). https://doi.org/10.1659/MRD-JOURNAL-D-19-00020.

[2] A. M. Rosso-Cerón, V. Kafarov, G. Latorre-Bayona, and R. Quijano-Hurtado, "A novel hybrid approach based on fuzzy multi-criteria decision-making tools for assessing sustainable alternatives of power generation in San Andrés Island," Renewable and Sustainable Energy Reviews, vol. 110, 159–173, 2019. https://doi.org/10.1016/j.rser.2019.04.053.



Environmental assessment of the catalytic Arabinose oxidation

Mouad Hachhach, Dmitry Murzin, Tapio Salmi

Laboratory of Industrial Chemistry and Reaction Engineering (TKR), Johan Gadolin Process Chemistry Centre, Åbo Akademi University, Åbo-Turku FI-20500, Finland

Oxidation of arabinose to arabinoic acid presents an innovative way to valorize local biomass into a high added-value product. Experiments on the oxidation of arabinose to arabinoic acid with molecular oxygen were previously performed to determine the optimum reaction conditions (Kusema et al., 2010; Manzano et al., 2021), and using the obtained results a scaled-up process has been designed and analysed from a techno-economic perspective (Hachhach et al., 2021).

These results are also used to analyse the environmental impact of the scaled-up process over its lifetime using life cycle assessment (LCA) methodology. SimaPro software combined with the IMPACT 2002+ impact assessment method was used in this work.

The results revealed that heating is the biggest contributor to the environmental impacts, even though the reaction is performed under mild conditions (70 °C), which highlights the importance of reducing energy consumption, for example via efficient heat integration.



A FOREST BIOMASS-TO-HYDROCARBON SUPPLY CHAIN MATHEMATICAL MODEL FOR OPTIMIZING CARBON EMISSIONS AND ECONOMIC METRICS

Frank Piedra-Jimenez1, Rishabh Mehta2, Valeria Larnaudie3, Maria Analia Rodriguez1, Ana Inés Torres2

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213; 3Departamento de Bioingeniería, Facultad de Ingeniería, Universidad de la Republica, Julio Herrera y Reissig 565, Montevideo, Uruguay.

Forest supply chains (FSCs) are critical for achieving decarbonization targets (Santos et al., 2019). FSCs are characterized by abundant biomass residues, offering an opportunity to add value to processes while contributing to the production of clean energy products. One particularly interesting aspect is their potential integration with oil refineries to produce drop-in fuels, offering a transformative pathway to mitigate traditional refinery emissions (Barbosa-Povoa and Pinto, 2020).

In this article, a disjunctive mathematical programming approach is presented to optimize the design and planning of the FSC for the production of hydrocarbon products from biomass, optimizing both economic and environmental objectives. Various types of byproducts and residual biomass from forest harvesting activities, sawmill production, and the pulp and paper industries are considered. Alternative processing facilities and technologies can be established over a multi-period planning horizon. The design problem scope involves selecting forest areas for exploitation, identifying biomass sources, and determining the locations, technologies, and capacities of facilities that transform wood-based residues into methanol and pyrolysis oil, which are further processed in biodiesel and petroleum refinery plants, respectively. This problem is challenging due to the complexity of the supply chain network, which involves numerous decisions, constraints, and objectives.

Especially in the case of large geographical areas, transportation becomes a crucial aspect of supply chain design and planning because the low biomass density significantly impacts carbon emissions and costs. Thus, the planning problem scope includes selecting connections and material flows across the supply chain and analyzing the impact of different types of transportation vehicles.

To estimate FSC carbon emissions, the Life Cycle Assessment (LCA) methodology is used. A gate-to-gate analysis is carried out for each activity in the FSC. The predicted LCA results are then integrated as input parameters into a mathematical programming model for FSC design and planning, extending previous work (Piedra-Jimenez et al., 2024). In this article, a multi-objective approach is employed to minimize CO2-equivalent emissions while optimizing net present value from an economic standpoint. A set of efficient Pareto points is obtained and compared in a case study of the Argentine forest industry.

References

Barbosa-Povoa, A.P., Pinto, J.M. (2020). “Process supply chains: perspectives from academia and industry”. Comput. Chem. Eng., 132, 106606, 10.1016/J.COMPCHEMENG.2019.106606

Piedra-Jimenez, F., Torres, A.I., Rodriguez, M.A. (2024), “A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty.” Computers & Chemical Engineering, 108540, ISSN 0098-1354.

Santos, A., Carvalho, A., Barbosa-Póvoa, A.P, Marques, A., Amorim, P. (2019). “Assessment and optimization of sustainable forest wood supply chains – a systematic literature review.” For. Policy Econ., 105, pp. 112-135, 10.1016/J.FORPOL.2019.05.026



Introducing competition in a multi-agent system for hybrid optimization

Veerawat Udomvorakulchai, Miguel Pineda, Eric S. Fraga

University College London, United Kingdom

Process systems engineering optimization problems may be challenging. These problems often exhibit nonlinearity, non-convexity, discontinuity, and uncertainty, and often only the values of objective and constraint functions are accessible. Black-box optimization methods may be appropriate to tackle such problems. The effectiveness of each method differs and is often unknown beforehand. Prior experience has shown that hybrid approaches can lead to better outcomes than using a single optimization method (1).

A general-purpose multi-agent framework for optimization, Cocoa, has
recently been developed to automate the configuration and use of
hybrid optimization, allowing for any number of optimization solvers,
including different instances of the same solver (2). Solvers can
share solutions, leading to better outcomes with the same
computational effort. However, the computational resource allocated
to each solver is inversely proportional to the number of solvers.
Allocating equal time to each solver may not be ideal.

This paper describes the implementation of competition to go alongside
cooperation: allocate more computational resource to solvers best
suited to a given problem. The allocation is dynamic and evolves as
the search progresses. Each solver is assigned a priority which
changes based on the results obtained by that solver. Scheduling is
priority based. The scheduler is similar to algorithms used by
multi-tasking operating systems (3). Individual solvers will be given
more or less access to the computational resource, enabling the system
to reward those solvers that do well while ensuring that all solvers
are allocated some computational resource.
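
The Cocoa framework itself is implemented in Julia; purely as an illustration of the priority idea, the following Python sketch rewards solvers that improve their incumbent, decays the priority of those that stall, and keeps a floor so every solver retains some share of the budget. The class names, update factors, and toy objective are assumptions, not the actual implementation.

import random

# Minimal sketch of priority-based allocation of a compute budget among
# cooperating solvers. Names and the update rule are illustrative.
class Solver:
    def __init__(self, name, step_fn):
        self.name = name
        self.step = step_fn          # runs for a time slice, returns best value found
        self.best = float("inf")
        self.priority = 1.0

def run(solvers, rounds=20, total_budget=1.0, floor=0.1):
    for _ in range(rounds):
        weights = [max(s.priority, floor) for s in solvers]
        total = sum(weights)
        for s, w in zip(solvers, weights):
            budget = total_budget * w / total      # share of this round's compute
            value = s.step(budget)
            if value < s.best:                     # reward improvement
                s.best = value
                s.priority = min(s.priority * 1.5, 10.0)
            else:                                  # decay when no progress is made
                s.priority = max(s.priority * 0.8, floor)
    return min(solvers, key=lambda s: s.best)

# toy objective and two "solvers" that sample it with different spreads
f = lambda x: (x - 3.0) ** 2
explore = Solver("metaheuristic", lambda b: min(f(random.uniform(-10, 10)) for _ in range(int(100 * b) + 1)))
exploit = Solver("direct search", lambda b: min(f(3.0 + random.gauss(0, 1)) for _ in range(int(100 * b) + 1)))
print(run([explore, exploit]).name)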

The framework allows for the use of both metaheuristic and direct
search methods. Metaheuristics explore the full search space while
direct search methods are good at exploiting solutions. The framework
has been implemented in Julia (4) making full use of multiprocessing.

A case study on the design of a micro-analytic system is presented
(5). The model is dynamic and has uncertainties; the selection of
designs is based on multiple criteria. This is a good test of the
proposed framework as the computational demands are large and the
search space is complex. The case study demonstrates the benefits of
a multi-solver hybrid optimization approach with both cooperation and
competition. The framework adapts to the evolving requirements of the
search. Often, a metaheuristic method is allocated more computational
resource at the beginning of the search while direct search methods
are emphasized later.

1. Fraga ES. Hybrid methods for optimisation. In: Zilinskas J, Bogle
IDL, editors. Computer aided methods for optimal design and
operations. World Scientific Publishing Co.; 2006. p. 1–14.

2. Fraga ES, Udomvorakulchai V, Papageorgiou L. 2024. DOI: 10.1016/B978-0-443-28824-1.50556-1.

3. Madnick SE, Donovan JJ. Operating systems. McGraw-Hill Book Company;
1974.

4. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A fresh approach
to numerical computing. SIAM rev. 2017;59(1):65–98.

5. Pineda M, Tsaoulidis D, Filho P, Tsukahara T, Angeli P, Fraga
E. 2021. DOI: 10.1016/j.nucengdes.2021.111432.



A Component Property Modeling Framework Utilizing Molecular Similarity for Accurate Predictions and Uncertainty Quantification

Youquan Xu, Zhijiang Shao, Anjan Kumar Tula

Zhejiang University, People's Republic of China

In many industrial applications, the demand for high-performance products, such as advanced materials and efficient working media, continues to rise. A key step in developing these products lies in the design of their constituent molecules. Traditional methods, based on expert experience, are often slow, labor-intensive, and prone to overlooking molecules with optimal performance. As a result, computer-aided molecular design (CAMD) has garnered significant attention for its potential to accelerate and improve the design process. One of the major challenges in CAMD is the lack of mechanistic knowledge that accurately links molecular structure to its properties. As a result, machine learning models trained on existing molecular databases have become the primary tools for predicting molecular properties. The typical approach involves using these models to predict the properties of potential molecules and selecting the best candidates based on these predictions. However, prediction errors are inevitable, introducing uncertainty into the reliability of the design. This can result in significant discrepancies between the predicted and experimentally verified properties, limiting the effectiveness of molecular discovery.
To address this issue, we propose a novel molecular property modeling framework based on a similarity coefficient. This framework introduces a new formula for molecular similarity, which considers compound type identification to enable more accurate molecular comparisons. By calculating the similarity between a target molecule and those in an existing database, the framework selects the most similar molecules to form a tailored training dataset. This ensures that only the most informative molecules enter the training set, while less relevant or misleading data points are excluded, significantly improving the accuracy of property predictions. In addition to enhancing prediction accuracy, the similarity coefficient also quantifies the confidence in the property predictions. By evaluating the availability and magnitude of the similarity index, the framework provides a measure of uncertainty in the predictions, giving a clearer understanding of how reliable the predicted properties are. This is especially important for molecules where limited similar data are available, allowing for more informed decision-making in the selection process. In tests across various molecular properties, our framework not only enhances the accuracy of predictions but also offers a clear evaluation of prediction reliability, especially for molecules with high similarity. Our framework introduces a two-fold evaluation system for potential molecules, using both predicted properties and the similarity coefficient. This dual criterion ensures that only molecules with both excellent predicted properties and high similarity are selected, enhancing the reliability of the screening process. The improved prediction accuracy, particularly for molecules with high similarity, reduces the need for extensive experimental validation and significantly increases the overall confidence in the molecular design process by explicitly addressing prediction uncertainty.
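
The similarity formula proposed here is not reproduced in the abstract; as a generic stand-in, the sketch below uses RDKit Morgan fingerprints and Tanimoto similarity to select the most similar database molecules as a tailored training set and reports the top similarity as a crude confidence indicator. The mini-database and target molecule are hypothetical.

from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

# Hypothetical mini-database: SMILES strings and a measured property value.
database = [("CCO", 78.4), ("CCCO", 97.2), ("CCCCO", 117.7), ("c1ccccc1", 80.1)]
target_smiles = "CCCCCO"  # molecule whose property we want to predict

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

fp_target = fingerprint(target_smiles)
scored = []
for smiles, prop in database:
    sim = DataStructs.TanimotoSimilarity(fp_target, fingerprint(smiles))
    scored.append((sim, smiles, prop))

# Keep only the k most similar molecules as the tailored training set;
# the largest similarity also acts as a crude confidence indicator.
k = 2
scored.sort(reverse=True)
training_set = scored[:k]
confidence = scored[0][0]
print("training set:", training_set, "confidence:", round(confidence, 3))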



A simple model for control and optimisation of a produced water re-injection facility

Rafael David de Oliveira1, Edmary Altamiranda2, Gjermund Mathisen2, Johannes Jäschke1

1Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; 2Subsea Technology, AkerBP ASA, Stavanger, Norway

Water injection (or water flooding) is an enhanced oil recovery technique that consists of injecting water into the reservoir to maintain the reservoir pressure. The injected water can come either from the sea or from the water separated from the oil and gas production (produced water). The amount of water injected in each well is typically decided by the reservoir engineers, and many methodologies, usually relying on reservoir models, have been proposed in the literature (Grema et al., 2016). Once the injection targets have been defined, the water injection network system can be optimised. A relevant optimisation problem in this context is the optimal operation of the topside pump system while ensuring the integrity of the subsea water injection system by maximising the lifetime of the equipment. Works at that stage usually model the system at a macro level, where each unit is represented as a node in a network (Ivo and Imsland, 2022). Simple, lower-level models, in which the manipulated and measured variables can be directly connected, have proved very useful in the design of new control strategies (Sivertsen et al., 2006) as well as in real-time optimisation formulations where the model parameters can be updated in real time (Matias et al., 2022).

This work proposes a simple model for control and optimisation of a produced water re-injection facility. The model was based on a real facility in operation on the Norwegian continental shelf and consisted of a set of differential-algebraic equations. Data were gathered from the available sensors, pump operation and water injection targets. Model parameters related to equipment dimensions and the valve's flow coefficient were fixed as in the real plant. The remaining parameters were estimated from the field data by solving a nonlinear least-squares problem. Uncertainty quantification was performed to assess the parameters' confidence intervals. Moreover, simulations were performed to evaluate and validate the proposed model. The results show that a simple model can be fitted to the plant and, at the same time, describe the key features of the plant dynamics. The developed model is expected to aid the implementation of strategies like self-optimising control and real-time optimisation on produced water re-injection facilities in the near future.
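
As a simplified illustration of the estimation step, the sketch below fits one unknown parameter of a toy tank model to synthetic level data with SciPy's least_squares, keeping the equipment and valve parameters fixed; the model, data, and confidence-interval calculation are placeholders, not the facility DAE model.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy stand-in for the facility model: a single tank with a fixed valve
# coefficient (taken "as in the real plant") and an unknown friction factor
# theta to be estimated from level measurements.
A, CV = 2.0, 0.8            # fixed equipment parameters (illustrative)
t_meas = np.linspace(0, 50, 26)

def simulate(theta, q_in=1.0, h0=1.0):
    rhs = lambda t, h: (q_in - theta * CV * np.sqrt(np.maximum(h, 1e-9))) / A
    sol = solve_ivp(rhs, (t_meas[0], t_meas[-1]), [h0], t_eval=t_meas)
    return sol.y[0]

# synthetic "field data" generated with theta_true = 1.3 plus measurement noise
rng = np.random.default_rng(0)
h_meas = simulate(1.3) + rng.normal(0, 0.01, t_meas.size)

res = least_squares(lambda th: simulate(th[0]) - h_meas, x0=[1.0], bounds=(0.1, 5.0))
# crude confidence interval from the Jacobian at the optimum
J = res.jac
cov = np.linalg.inv(J.T @ J) * np.sum(res.fun**2) / (t_meas.size - 1)
print("theta =", res.x[0], "+/-", 1.96 * np.sqrt(cov[0, 0]))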

References

Grema, A. S., and Yi Cao. 2016. “Optimal Feedback Control of Oil Reservoir Waterflooding Processes.” International Journal of Automation and Computing 13 (1): 73–80.

Ivo, Otávio Fonseca, and Lars Struen Imsland. 2022. “Framework for Produced Water Discharge Management with Flow-Weighted Mean Concentration Based Economic Model Predictive Control.” Computers & Chemical Engineering 157 (January):107604.

Matias, José, Julio P. C. Oliveira, Galo A. C. Le Roux, and Johannes Jäschke. 2022. “Steady-State Real-Time Optimization Using Transient Measurements on an Experimental Rig.” Journal of Process Control 115 (July):181–96.

Sivertsen, Heidi, John-Morten Godhavn, Audun Faanes, and Sigurd Skogestad. 2006. “Control Solutions for Subsea Processing and Multiphase Transport.” IFAC Proceedings Volumes, 6th IFAC Symposium on Advanced Control of Chemical Processes, 39 (2): 1069–74.



An optimization-based conceptual synthesis of reaction-separation systems for glucose to chemicals conversion.

Syed Ejaz Haider, Ville Alopaeus

Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 16100, 00076 Aalto, Finland.

Abstract

Lignocellulosic biomass has emerged as a promising renewable alternative to fossil resources for the sustainable production of green chemicals [1]. Among the high-value biomass-derived building block chemicals, levulinic acid has gained significant attention due to its wide industrial applications [2]. It serves as a raw material for the synthesis of resins, plasticizers, textiles, animal feed, coatings, antifreeze, pharmaceuticals, and bio-based products [3]. In order to produce levulinic acid on a commercial scale, it is essential to identify the most cost-effective and optimal synthesis route.

Two main methods exist to identify the optimal process structure: hierarchical decomposition and superstructure-based optimization. The hierarchical decomposition method involves making design decisions at each detail level based on heuristics; however, it struggles to capture interactions among decisions at different detail levels. In contrast, superstructure-based synthesis is a process systems engineering methodology that systematically evaluates a wide range of structural alternatives simultaneously, using an equation-oriented approach to identify the optimal structure.

This study aims to identify the optimal process structure and parameters for the commercial-scale production of levulinic acid from glucose using a mathematical programming approach. To achieve more valuable results, the reaction and separation systems were separately investigated under two optimization scenarios using two different objective functions.

Scenario 1 focuses on optimizing the glucose conversion reactor to enhance overall profit and minimize waste disposal. The optimization model includes a rigorous economic objective function that simultaneously considers product selling prices, capital and manufacturing costs over a 20-year project life, and waste disposal costs. A continuous tank reactor model was used as a mass balance constraint, utilizing rate parameters from our recent research at the chemical engineering research group of Aalto University. This non-linear programming (NLP) problem was implemented in GAMS and solved using the BARON solver to determine the optimal operating conditions and reactor size. The optimal reactor volume was found to be 13.2 m3, with an optimal temperature of 197.8°C for a levulinic acid production capacity of 1593 tonnes/year.
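
A minimal sketch of this kind of economic reactor optimization is given below in Pyomo, with a first-order CSTR balance and placeholder kinetic and cost parameters (the study itself used GAMS/BARON and its own rate data); it only illustrates the structure of the NLP.

import pyomo.environ as pyo

# Minimal sketch of an economic CSTR optimization: choose volume and
# temperature to maximize profit (sales - capital - waste disposal).
# Kinetic and cost parameters below are placeholders, not the Aalto values.
m = pyo.ConcreteModel()
m.V = pyo.Var(bounds=(1.0, 50.0), initialize=10.0)        # reactor volume [m3]
m.T = pyo.Var(bounds=(420.0, 500.0), initialize=450.0)    # temperature [K]
m.X = pyo.Var(bounds=(0.01, 0.99), initialize=0.5)        # glucose conversion [-]

F, C0 = 10.0, 2.0                                          # feed [m3/h], concentration [kmol/m3]
k0, Ea, R = 5.0e6, 6.0e4, 8.314                            # Arrhenius placeholders

# CSTR mass balance: F*C0*X = k(T)*C0*(1-X)*V  (first-order, illustrative)
m.balance = pyo.Constraint(
    expr=F * C0 * m.X == k0 * pyo.exp(-Ea / (R * m.T)) * C0 * (1 - m.X) * m.V)

price, capex, waste = 50.0, 2.0, 5.0                       # placeholder cost factors
m.profit = pyo.Objective(
    expr=price * F * C0 * m.X - capex * m.V - waste * F * C0 * (1 - m.X),
    sense=pyo.maximize)

pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.V), pyo.value(m.T), pyo.value(m.X))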

Scenario 2 addresses the synthesis of distillation-based separation sequences to separate the multicomponent reactor effluent into various product streams. All potential candidates are embedded in a superstructure, which is translated into a mixed-integer nonlinear programming problem (MINLP). Research is progressing towards solving this MINLP problem and identifying the optimal configuration of distillation columns for the desired separation task.

References

[1] F. H. Isikgor and C. R. Becer, "Lignocellulosic biomass: a sustainable platform for the production of bio-based chemicals and polymers," Polymer chemistry, vol. 6, no. 25, pp. 4497-4559, 2015.

[2] T. Werpy and G. Petersen, "Top value added chemicals from biomass: volume I--results of screening for potential candidates from sugars and synthesis gas," National Renewable Energy Lab.(NREL), Golden, CO (United States), 2004.

[3] S. Takkellapati, T. Li, and M. A. Gonzalez, "An overview of biorefinery-derived platform chemicals from a cellulose and hemicellulose biorefinery," Clean technologies and environmental policy, vol. 20, pp. 1615-1630, 2018.



Kinetic modeling of drug substance synthesis considering slug flow characteristics in a liquid-liquid reaction

Shunsei Yayabe1, Junu Kim1, Yusuke Hayashi1, Kazuya Okamoto2, Keisuke Shibukawa2, Hayao Nakanishi2, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Shionogi Pharma Co., Ltd., Japan

In the production of drug substances (or active pharmaceutical ingredients), flow synthesis is increasingly being introduced due to its various advantages such as a high surface-to-volume ratio and small system size [1]. One promising application of flow synthesis is liquid-liquid reactions [2]. When two immiscible liquids are fed together into a flow reactor, characteristic flow patterns, such as slug flow, are formed. These patterns are determined by the fluid properties and the reactor specifications, and have a significant impact on the mass transfer rate. Previous studies have analyzed the effect of slug flow on mass transfer in liquid-liquid reactions using computational fluid dynamics [3, 4]. These studies provide valuable insights into the influence of flow characteristics on the reaction. However, there is a lack of modeling approaches that simultaneously account for flow characteristics and reaction kinetics, which may limit the application of liquid-liquid reactions in flow synthesis.

We developed a kinetic model of drug substance synthesis by incorporating slug flow characteristics in a liquid-liquid reaction, with the aim of determining the feasible range of the process parameters. The target reaction was Stevens oxidation, which is a novel liquid-liquid reaction of organic and aqueous phases, producing the ester via a shorter pathway than the conventional route. To obtain kinetic data, experiments were conducted, varying the inner diameter, reaction temperature, and residence time. In Stevens oxidation, a catalyst was used, and experimental conditions were adjusted to form slug flow to promote the catalyst's mass transfer. Using the obtained data, the model was developed for the change in concentrations of the starting material, desired product, intermediate, dimer, carboxylic acid, and the catalyst. In the model for the change in catalyst concentration, mass transfer was described using the overall volumetric mass transfer coefficient under slug flow conditions.
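
The structure of such a model can be sketched with a reduced scheme, shown below: a single substrate-to-product reaction in the organic phase coupled to catalyst transfer governed by an overall volumetric mass transfer coefficient, integrated over the residence time with SciPy. The species, rate constant, and kLa value are illustrative, not the fitted model of this study.

import numpy as np
from scipy.integrate import solve_ivp

# Reduced sketch of a kinetic model coupled with slug-flow mass transfer:
# substrate S -> product P in the organic phase, catalyzed by a species that
# transfers from the aqueous phase with overall coefficient kLa. All values
# are placeholders, not the fitted parameters of the study.
k_rxn = 0.8          # reaction rate constant [L/(mol*min)]
kLa = 0.5            # overall volumetric mass transfer coefficient [1/min]
C_cat_eq = 0.05      # equilibrium catalyst concentration in the organic phase [mol/L]

def rhs(t, y):
    S, P, cat = y
    r = k_rxn * S * cat
    dS = -r
    dP = r
    dcat = kLa * (C_cat_eq - cat)   # transfer into the organic slug
    return [dS, dP, dcat]

tau = 30.0                           # residence time [min]
sol = solve_ivp(rhs, (0, tau), [1.0, 0.0, 0.0], t_eval=np.linspace(0, tau, 50))
print("yield of P at the outlet:", sol.y[1, -1])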

The model successfully reproduced the experimental results and demonstrated that, as the inner diameter increases, the efficiency of mass transfer in slug flow decreases, slowing down the reaction. The developed model was used to simulate the yields of the starting material and the dimer, as well as the process mass intensity, in order to determine the feasible region. As a result, it was shown that when the reagent concentration was either too high or too low, operating conditions fell outside the feasible region. This kinetic model with flow characteristics will be useful for the process design of drug substance synthesis using liquid-liquid reactions. In ongoing work, we are validating the feasible region.

[1] S. Diab, et al., React. Chem. Eng., 2021, 6, 1819. [2] L. Capaldo, et al., Chem. Sci., 2023, 14, 4230. [3] A. Mittal, et al., Ind. Eng. Chem. Res., 2023, 62, 15006. [4] D. Cheng, et al., Ind. Eng. Chem. Res., 2020, 59, 4397.



Learning-based control approach for nanobody-scorpion antivenom optimization

Juan Camilo Acosta-Pavas1, David C Corrales1, Susana M Alonso Villela1, Balkiss Bouhaouala-Zahar2, Georgios Georgakilas3, Konstantinos Mexis4, Stefanos Xenios4, Theodore Dalamagas3, Antonis Kokosis4, Michael O'donohue1, Luc Fillaudeau1, César A. Aceves-Lara1

1TBI, Université de Toulouse, CNRS UMR5504, INRAE UMR792, INSA, Toulouse, France, France; 2Laboratoire des Biomolécules, Venins et Applications Théranostiques (LBVAT), Institut Pasteur de Tunis, 13 Place Pasteur, BP-74, 1002 Le Belvédère, Tunis, Tunisia; 3Athena Research Center, Marousi, Greece; 4School of Chemical Engineering, National Technical University of Athens, Iroon Polytechneiou 9, Zografou, 15780 Athens, Greece

One market focus of the bioindustry is the production of recombinant proteins in E. coli for application in serotherapy (Alonso Villela et al., 2023). However, monitoring, control, and optimization of this process remain challenging. There are different approaches to optimizing bioprocess performance; a common one is the use of model-based control strategies such as Model Predictive Control (MPC). Another strategy is learning-based control, such as Reinforcement Learning (RL).

In this work, an RL approach was applied to maximize the production of recombinant proteins in E. coli in induction mode. The aim was to find the optimal substrate feed rate (Fs) applied during induction that maximizes protein productivity. The RL model was trained using the actor-critic Twin-Delayed Deep Deterministic (TD3) Policy Gradient agent. The reward corresponded to the maximum value of the productivity. The environment was represented by a dynamic hybrid model (DHM) published by Corrales et al. (2024). The simulated conditions consisted of a reactor with 2 L of working volume (V) at 37°C for the batch (10 g glucose/L) and fed-batch (feeding with 300 g glucose/L) modes, and 28°C during the induction stage. The first 3.4 h corresponded to batch mode. The fed-batch mode was operated with Fs = 1x10^-3 L/h until 8 h. Afterwards, the RL agent was trained in induction mode until the end of the process at 20 h. The agent actions were updated every 2 h. Two types of constraints were considered: 1.49 < V < 5.00 L and 1x10^-3 < Fs ≤ 5x10^-4 L/h. Finally, the results were compared with the MPC approach.

The training options for all networks were: a learning rate of 1x10^-3 for the critic and 1x10^-4 for the actor, a gradient threshold of 1.0, a mini-batch size of 100, a discount factor of 0.9, an experience buffer length of 1x10^6, and an agent sample time of 0.1 h, with a maximum of 700 episodes.
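
The agent in this study was trained against the dynamic hybrid model; purely as a loose illustration, the sketch below wires a toy induction-phase environment to the TD3 implementation in stable-baselines3, with dynamics, bounds, and reward standing in for the DHM.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3

# Toy induction-phase environment: state = (volume, product mass), action = Fs.
# Dynamics, bounds and reward are placeholders standing in for the DHM.
class InductionEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Box(low=1e-4, high=5e-4, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=0.0, high=10.0, shape=(2,), dtype=np.float32)
        self.dt = 2.0  # action update interval [h]

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.V, self.P = 8.0, 2.0, 0.0
        return np.array([self.V, self.P], dtype=np.float32), {}

    def step(self, action):
        Fs = float(action[0])
        self.V += Fs * self.dt
        self.P += 5.0 * Fs * self.dt * (1.0 - self.P)          # toy productivity model
        self.t += self.dt
        terminated = self.t >= 20.0 or self.V >= 5.0
        reward = self.P / (self.t - 8.0)                       # productivity proxy
        return np.array([self.V, self.P], dtype=np.float32), reward, terminated, False, {}

env = InductionEnv()
model = TD3("MlpPolicy", env, learning_rate=1e-4, batch_size=100,
            gamma=0.9, buffer_size=1_000_000, verbose=0)
model.learn(total_timesteps=2_000)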

The MPC and RL control strategies show similar behavior. In both cases, the suggested optimal action is to apply the maximum Fs, increasing protein productivity to 4.81x10^-2 mg/h at the end of the process. Regarding computation time, the RL agent training spent a mean value of 0.3284 s performing 14.0x10^3 steps at each action update, while the MPC required a mean value of 0.3366 s to solve an optimization problem at every action update. The RL approach thus proves to be a good alternative for optimizing the production of recombinant proteins.

References

Alonso Villela, S. M., Kraïem-Ghezal, H., Bouhaouala-Zahar, B., Bideaux, C., Aceves Lara, C. A., & Fillaudeau, L. (2023). Production of recombinant scorpion antivenoms in E. coli: Current state and perspectives. Applied Microbiology and Biotechnology, 107(13), 4133-4152. https://doi.org/10.1007/s00253-023-12578-1

Corrales, D. C., Villela, S. M. A., Cescut, J., Daboussi, F., Fillaudeau, L., & Aceves-Lara, C. A. (2024). Dynamic Hybrid Model for Nanobody-based Antivenom Production (scorpion antivenom) with E. coli CH10-12 and E. coli NbF12-10.



Kinetics modeling of the thermal degradation of densified refuse-derived fuel (d-RDF)

Mohammad Ali Nazari, Juma Haydary

Institute of Chemical and Environmental Engineering, Slovak University of Technology in Bratislava, Slovak Republic

Modern human life is currently experiencing an energy crisis and a massive generation of Municipal Solid Waste (MSW). The conversion of the carbon-containing fraction of MSW, known as refuse-derived fuel (RDF), into energy, fuel, and high-value bio-based chemicals has become a key focus in ongoing discussions on sustainable development, driven by rising energy demand, depleting fossil fuel reserves, and growing environmental concerns. However, a significant limitation of unprocessed RDF lies in its heterogeneous composition, which complicates material handling, reactor feeding, and the accurate prediction of its physical and chemical properties. The densification of RDF (d-RDF) offers a potential solution to these challenges by reducing material variability and generating a more uniform, durable form, thereby enhancing its suitability for processes such as pyrolysis. This research effort involves evaluating the physicochemical characteristics and thermal degradation of d-RDF using a thermogravimetric analyzer (TGA) under controlled conditions at heating rates of 2, 5, 10, and 20 K·min⁻¹. Model-free methods, including Friedman (FRM), Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS), Vyazovkin (VYZ), and Kissinger, were applied to determine the apparent kinetic and thermodynamic parameters within the conversion range of 1% to 85%. The physicochemical properties of d-RDF demonstrated its suitability for various thermochemical conversion applications. Thermal degradation predominantly occurred within the temperature range of 220–500°C, accounting for 98% of the total weight loss. The coefficients of determination (R²) for the fitted plots ranged from 0.90 to 1.00 across all applied models. The average activation energy (Eα) calculated using the FRM, FWO, KAS, and VYZ methods was 260, 247, 247, and 263 kJ·mol⁻¹, respectively. The evaluation of thermodynamic parameters (ΔH, ΔG, and ΔS) indicated the endothermic nature of the process. A statistical F-test was applied to identify the best agreement between experimental and calculated data. According to the F-test, the differences in variance for the FRM and VYZ models were insignificant, indicating the best agreement with the experimental data. Considering all results, including the kinetic and thermodynamic parameters, along with the high heating value (HHV) of 25.20 MJ·kg⁻¹, d-RDF demonstrates a strong affinity for thermal degradation under pyrolysis conditions and can be regarded as a suitable feedstock for producing fuel and value-added products. Moreover, it serves as a viable alternative to fossil fuels, contributing to the United Nations 2030 Sustainable Development Goals.
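
For reference, the Kissinger-Akahira-Sunose (KAS) isoconversional method fits ln(β/T²) against 1/T at a fixed conversion across heating rates, the slope giving -Eα/R; the short sketch below shows the calculation with illustrative temperatures, not the measured d-RDF data.

import numpy as np

# KAS sketch: at a fixed conversion alpha, regress ln(beta / T^2) on 1/T
# across heating rates; the slope gives -Ea/R. Temperatures are illustrative.
R = 8.314                                         # J/(mol K)
beta = np.array([2.0, 5.0, 10.0, 20.0])           # heating rates [K/min]
T_alpha = np.array([556.0, 566.0, 575.0, 585.0])  # T at alpha = 0.5 [K], illustrative

x = 1.0 / T_alpha
y = np.log(beta / T_alpha**2)
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R / 1000.0                          # apparent activation energy [kJ/mol]
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"Ea = {Ea:.0f} kJ/mol, R^2 = {r2:.3f}")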



Cost-optimal solvent selection for batch cooling crystallisation of flurbiprofen

Matthew Blair, Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Choosing suitable solvents for crystallisation processes can be very challenging when developing new pharmaceuticals, given the vast number of choices, crystallisation techniques and performance metrics. A high-efficiency solvent must ensure high API recovery, low cost and minimal environmental impact,1 and allow batch (or possibly continuous) operation within an acceptable (not narrow) parameter space. To streamline this task, process and thermodynamic modelling tools2,3 can be used to systematically probe the behaviour of different crystallisation setups in silico prior to conducting lab-scale experiments. In particular, it has been found that we can use thermodynamic models alongside principles from solid-liquid equilibria (SLE) to determine the impact of key process variables (e.g. temperature and solvent choice)1 on the performance of different processes without (or prior to) testing them in the laboratory.2,3

This paper presents the implementation of a modelling framework that can be used to minimise the cost and environmental impact of batch crystallisation processes on the basis of thermodynamic principles. This process modelling framework (implemented in MATLAB®) is employed to study the batch cooling crystallisation of flurbiprofen, a non-steroidal anti-inflammatory drug (NSAID) used against arthritis.4 Moreover, we have used the Non-Random Two-Liquid (NRTL) activity coefficient model to study its thermophysical and solubility properties in twelve (12) common upstream pharmaceutical solvents,4,5 namely three alkanes (n-hexane, n-heptane, n-octane), two (isopropyl, methyl-tert-butyl) ethers, five alcohols (n-propanol, isopropanol, n-butanol, isobutanol, isopentanol), an ester (isopropyl acetate), and acetonitrile, in an adequately wide temperature range (283.15-323.15 K). Established green metrics1 (e.g. E-factor) and costing methodologies are employed to comparatively evaluate process candidates.6
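
The underlying solid-liquid equilibrium relation, ln(x·γ) = -(ΔHfus/R)(1/T - 1/Tm), combined with NRTL activity coefficients, can be sketched as below for a single API-solvent pair; the fusion properties and NRTL parameters are placeholders, not the fitted flurbiprofen values.

import numpy as np
from scipy.optimize import brentq

# Sketch of an SLE solubility calculation with the NRTL activity model:
# solve x1 * gamma1(x1) = exp(-(dHfus/R) * (1/T - 1/Tm)) for the API mole
# fraction x1. All parameters are illustrative placeholders.
R = 8.314
dHfus, Tm = 27.0e3, 383.0            # fusion enthalpy [J/mol], melting point [K]
tau12, tau21, alpha = 1.2, 0.4, 0.3  # NRTL binary parameters (placeholders)

def gamma1(x1):
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    term = (tau21 * (G21 / (x1 + x2 * G21)) ** 2
            + tau12 * G12 / (x2 + x1 * G12) ** 2)
    return np.exp(x2 ** 2 * term)

def solubility(T):
    ideal = np.exp(-dHfus / R * (1.0 / T - 1.0 / Tm))   # target for x1*gamma1
    return brentq(lambda x1: x1 * gamma1(x1) - ideal, 1e-6, 0.999)

for T in (283.15, 298.15, 323.15):
    print(T, solubility(T))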

LITERATURE REFERENCES

  1. Blair et al., Process modeling, simulation and technoeconomic evaluation of batch vs continuous pharmaceutical manufacturing cephalexin. 2023 AIChE Annual Meeting, Orlando, to appear (2023).
  2. Watson et al., Computer aided design of solvent blends for hybrid cooling and antisolvent crystallization of active pharmaceutical ingredients. Organic Process Research & Development 25(5): 1123 (2021).
  3. Sheikholeslamzadeh et al., Optimal solvent screening for crystallization of pharmaceutical compounds from multisolvent systems. Industrial & Engineering Chemistry Research 51(42): 13792 (2012).
  4. Tian et al., Solution thermodynamic properties of flurbiprofen in twelve solvents (283.15–323.15 K). Journal of Molecular Liquids 296: 111744 (2019).
  5. Prat et al., CHEM21 selection guide of classical and less classical solvents. Green Chemistry 18(1): 288 (2016).
  6. Dafnomilis et al., Multiobjective dynamic optimization of ampicillin batch crystallization: sensitivity analysis of attainable performance vs product quality constraints, Industrial & Engineering Chemistry Research 58(40): 18756 (2019).


A Machine Learning (ML) implementation for beer fermentation optimisation

Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Food and beverage industries receive key feedstocks whose composition is subject to geographic and seasonal variability, and rely on factories whose process conditions have limited manipulation margins but must rightfully meet stringent product quality specifications. Unlike chemicals, most of our favourite foods and beverages are highly sensitive and perishable, with relatively small profit margins. Although manufacturing processes (recipes) have been perfected over centuries or even millennia, quantitative understanding is limited. Predictions about the influence of input (feedstock) composition and manufacturing (process) conditions on final food/drink product quality are hazardous, if not impossible, because small changes can result in extreme variations. A slightly warmer fermentation renders beer undrinkable; similarly, an imbalance among sugar, lipid (fat) and protein can make chocolate unstable.

The representational versatility of Artificial Neural Networks (ANN) for process systems studies has been well known for decades.2 First-principles knowledge (mass, heat and momentum conservation, chemical reactions), though, is captured via deterministic (ODE/PDE) models, which invariably require laborious parameterisation for each particular process plant. Physics-Informed Neural Networks (PINN)3, in contrast, combine the best of both worlds: they offer chemistry-compliant NN with proven extrapolation power to revolutionise manufacturing, circumventing parametric estimation uncertainty and enabling efficient process control. Fermentation for specific products (e.g. ethanol4, biopharmaceuticals5) has been explored by means of ML/ANN (not PINN) tools, thus without embedded first-principles descriptions.3

Though Food Science cannot provide global composition-structure-quality correlations, Artificial Intelligence/AI can be used to extract valuable process knowledge from factory data. The case of beer, in particular, has been the focus of several of our papers,6-7 offering a sound comparison basis for evaluating model fidelity between precedents and new PINN approaches. Pursuing PINN modelling caters for greater complexity in terms of plant flowsheet and target product structure and chemistry. We thus revisit the problem with ML/PINN tools to efficiently predict process efficiency, which is instrumental in computational design and optimisation of key unit operations (e.g. fermentors). Traditional (first-principles) descriptions of these necessitate elaborate (e.g. CFD) submodels of extreme complexity, with at least two severe drawbacks: (1) cumbersome prerequisite parameter estimation with extreme uncertainty, (2) prohibitively high CPU cost. The complementarity of the two major approaches is thus investigated, and major advantages/shortcomings will be highlighted.
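
As a minimal illustration of the PINN idea, the sketch below trains a small network to satisfy a logistic growth ODE and its initial condition through an autograd residual; the ODE and parameters are generic stand-ins, not the beer fermentation model of the earlier papers.

import torch

# Minimal PINN sketch: a network X(t) trained so that its autograd derivative
# satisfies a logistic growth ODE dX/dt = mu*X*(1 - X/Xmax) with X(0) = X0.
# The ODE and all parameters are illustrative stand-ins for a fermentation model.
mu, Xmax, X0 = 0.25, 10.0, 0.5

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_col = torch.linspace(0.0, 40.0, 200).reshape(-1, 1)   # collocation points [h]

for epoch in range(3000):
    opt.zero_grad()
    t = t_col.clone().requires_grad_(True)
    X = net(t)
    dXdt = torch.autograd.grad(X, t, grad_outputs=torch.ones_like(X), create_graph=True)[0]
    residual = dXdt - mu * X * (1.0 - X / Xmax)          # physics (ODE) loss
    ic = net(torch.zeros(1, 1)) - X0                     # initial-condition loss
    loss = residual.pow(2).mean() + ic.pow(2).mean()
    loss.backward()
    opt.step()

print(net(torch.tensor([[40.0]])).item())                # biomass prediction near the end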

LITERATURE REFERENCES

  1. Gerogiorgis & Bakalis, Digitalisation of Food+Beverage Manufacturing, Food & Bioproducts Processing, 128: 259-261 (2021).
  2. Lee et al., Machine learning: Overview of recent progresses and implications for the Process Systems Engineering field, Computers & Chemical Engineering, 114: 111-121 (2018).
  3. Karniadakis et al., Physics-informed machine learning, Nature Reviews Physics, 3(6): 422-440 (2021).
  4. Pereira et al., Hybrid NN modelling and particle swarm optimization for improved ethanol production from cashew apple juice, Bioprocess & Biosystems Engineering 44: 329-342 (2021).
  5. Petsagkourakis et al., Reinforcement learning for batch bioprocess optimization. Computers & Chemical Engineering, 133: 106649 (2020).
  6. Rodman & Gerogiorgis, Multi-objective process optimisation of beer fermentation via dynamic simulation, Food & Bioproducts Processing, 100A: 255-274 (2016).
  7. Rodman & Gerogiorgis, Dynamic optimization of beer fermentation: Sensitivity analysis of attainable performance vs. product flavour constraints, Computers & Chemical Engineering, 106: 582-595 (2017).


Operability analysis of modular heterogeneous electrolyzer plants using system co-simulation

Michael Große1,3, Isabell Viedt2,3, Hannes Lange2,3, Leon Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group; 3TUD Dresden University of Technology, Process-to-Order Lab

In the upcoming decades, the scale-up of hydrogen production will play a crucial role in the integration of renewable energy into future energy systems [1]. One scale-up strategy is the numbering-up of standardized electrolysis units in a modular plant concept, according to [2, 3]. The use of a modular plant concept can support the integration of different electrolyzer technologies into one heterogeneous electrolyzer plant to leverage technology-specific advantages and counteract disadvantages [4].

This work focuses on the analysis of technical operability and feasibility of large-scale modular electrolyzer plants in a heterogeneous plant layout using system co-simulation. Developed and available dynamic process models of low-temperature electrolysis components are combined in Simulink as a shared co-simulation environment. Strategies to control relevant process parameters, like temperatures, pressures, flow rates and component mass fractions in the different subsystems and the overall plant, are developed and presented. An operability analysis is carried out to verify the functionality of the presented plant layout and the corresponding control strategies [5].

The dynamic progression of all controlled parameters is presented for different operative states that may occur, such as start-up, continuous operation, load change and hot-standby behavior. It is observed that the exemplary plant is operational, as all relevant process parameters can be held within the allowed operating range during all operative states. However, some limitations regarding the possible operating range of individual technologies are identified. Possible solution approaches for these identified problems are conceptualized.

Additionally, relevant metrics for efficiency and flexibility, such as the specific energy consumption and the expected unserved flexible energy (EUFE) [4], are calculated to demonstrate the feasibility and show the advantages of heterogeneous electrolyzer plant layouts, such as heightened operational flexibility without major reductions in efficiency.

Sources

[1] International Energy Agency, "Global Hydrogen Review 2023", 2023, https://www.iea.org/reports/global-hydrogen-review-2023.

[2] L. Bittorf et al., "Upcoming domains for the MTP and an evaluation of its usability for electrolysis", in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Sep. 2022, pp. 1–4. doi: 10.1109/ETFA52439.2022.9921280.

[3] H. Lange, A. Klose, L. Beisswenger, D. Erdmann, and L. Urbas, "Modularization approach for large-scale electrolysis systems: a review", Sustain. Energy Fuels, vol. 8, no. 6, pp. 1208–1224, 2024, doi: 10.1039/D3SE01588B.

[4] M. Mock, I. Viedt, H. Lange, and L. Urbas, "Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains", in Computer Aided Chemical Engineering, vol. 53, Elsevier, 2024, pp. 1885–1890. doi: 10.1016/B978-0-443-28824-1.50315-X.

[5] V. Gazzaneo, J. C. Carrasco, D. R. Vinson, and F. V. Lima, "Process Operability Algorithms: Past, Present, and Future Developments", Ind. Eng. Chem. Res., vol. 59, no. 6, pp. 2457–2470, Feb. 2020, doi: 10.1021/acs.iecr.9b05181.



High-pressure membrane reactor for ammonia decomposition: Modeling, simulation and scale-up using a Python-Aspen Custom Modeler interface

Leonardo Antonio Cáceres Avilez, Antonio Esio Bresciani, Claudio Augusto Oller do Nascimento, Rita Maria de Brito Alves

Universidade de São Paulo, Brazil

One of the current challenges for hydrogen-related technologies is hydrogen storage and transportation. Its low volumetric density and low boiling point require high-pressure and low-temperature conditions for effective transport and storage. A potential solution to these challenges involves storing hydrogen in chemical compounds that can be easily transported and stored, with hydrogen being released through decomposition processes [1]. Ammonia is a promising hydrogen carrier due to its high hydrogen content, approximately 17.8% by mass, and its high volumetric density of H2, which is 121 kg/m³ at 10 bar pressure [2]. The objective of this study was to develop a mathematical model to analyze and design a packed bed membrane reactor (PBMR) for large-scale ammonia decomposition. The kinetic model for the Ru-K/CaO catalyst was obtained from the literature and validated with experimental data [3]. This catalyst was selected due to its effective performance under high-pressure conditions, which increases the driving force for hydrogen permeation in the membrane reactor. The model was developed in Aspen Custom Modeler (ACM) using a 1D pseudo-homogeneous approach. The governing equations for mass, energy, and momentum conservation were discretized via a first-order backward finite difference method and solved using a nonlinear solver. The effectiveness factor was incorporated to account for intraparticle mass transfer limitations, which are prevalent with the large particle sizes typically employed in industrial applications. The study further investigated the influence of sweep gas ratio, temperature, relative pressure, and spatial velocity on ammonia conversion and hydrogen recovery, employing response surface methodology generated through an ACM-Python interface. The proposed multi-tubular membrane reactor achieved approximately 90.4% ammonia conversion and 91% hydrogen recovery, operating at an inlet temperature of 400°C and a pressure of 40 bar. Under the same heat flux, the membrane reactor exhibited approximately 15% higher ammonia conversion compared to a conventional fixed bed reactor. Furthermore, the developed model is easily transferable to Aspen Plus, facilitating subsequent process conceptual design and economic analyses.
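
The numerical strategy (first-order backward differences solved as a nonlinear system) can be illustrated on a stripped-down case, shown below: a steady 1D plug-flow species balance with first-order decomposition and without permeation, energy or momentum equations; all values are placeholders for the ACM model.

import numpy as np
from scipy.optimize import fsolve

# Sketch of the discretization strategy: steady plug-flow species balance
# u * dC/dz = -k * C, discretized with first-order backward differences and
# solved as a nonlinear system. The PBMR of the study additionally includes
# energy, momentum and membrane permeation terms; values here are placeholders.
N, L = 50, 2.0                    # grid points, reactor length [m]
z = np.linspace(0.0, L, N)
dz = z[1] - z[0]
u, k, C_in = 0.5, 1.2, 1.0        # velocity [m/s], rate constant [1/s], inlet concentration

def residuals(C):
    res = np.empty(N)
    res[0] = C[0] - C_in                              # inlet boundary condition
    res[1:] = u * (C[1:] - C[:-1]) / dz + k * C[1:]   # backward-difference balance
    return res

C = fsolve(residuals, np.full(N, C_in))
conversion = 1.0 - C[-1] / C_in
print(f"outlet conversion: {conversion:.3f}")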

[1] I. Lucentini, G. García Colli, C. D. Luzi, I. Serrano, O. M. Martínez, and J. Llorca, ‘Catalytic ammonia decomposition over Ni-Ru supported on CeO2 for hydrogen production: Effect of metal loading and kinetic analysis’, Appl Catal B, vol. 286, p. 119896, 2021.

[2] J. W. Makepeace, T. J. Wood, H. M. A. Hunter, M. O. Jones, and W. I. F. David, ‘Ammonia decomposition catalysis using non-stoichiometric lithium imide’, Chem Sci, vol. 6, no. 7, p. 3805–3815, 2015.

[3] S. Sayas, N. Moerlanés, S. P. Katikaneni, A. Harale, B. Solami, J. Gascon. ‘High pressure ammonia decomposition on Ru-K/CaO catalysts’. Catal. Sci. Technol. vol. 10, p. 5027- 5035, 2020.



Developing a circular economy around jam production wastes

Carlos Sanz, Mariano Martin

Department of Chemical Engineering. Universidad de Salamanca, Plz Caídos 1-5, 37008, Salamanca, Spain

Abstract

The food industry is a significant source of waste. In the EU alone, more than 58 million tons of food waste are generated annually [1], with an estimated market value of 132 billion euros [2]. While over half of this waste is produced at the household level and thus consists of a mixture, one-quarter originates directly from manufacturing facilities. Traditionally, the mixed waste has been managed through municipal solid waste (MSW) treatment and valorization procedures [3]. However, there is an opportunity to valorize the waste produced in the agri-food sector to support the adoption of a circular economy within the food supply chain, beginning at the transformation facilities. This would enable the recovery of value-added products and reduce the need for external resources, creating a circular economy through process integration.

In this work, the valorization of biowaste for a circular economy is explored through the case of jam waste. An integrated process is designed to extract value-added products such as phenolic compounds and pectin, as well as to produce ethanol, a green solvent, for internal use and/or as a final product. The solid residue can then either be gasified (GA) or digested (AD) to produce hydrogen, thermal energy and power. These technologies are systematically compared using a mathematical optimization approach, with units modeled based on first principles and experimental yields. The base case focuses on a real jam production facility from a well-known company.

Waste processing requires an investment of €2.0-2.3 million to treat 37 tons of waste per year, yielding 5.2 kg/t of phenolic compounds and 15.9 kg/t of pectin. After extraction of the valuable products, the solids are subjected to either anaerobic digestion or gasification. The amount of biogas produced (368.1 Nm3/t) is about half that of syngas (660.2 Nm3/t), so the energy produced by the gasification process (5,085.6 kWh/t) is higher than that produced by anaerobic digestion (3,136.3 kWh/t). Nevertheless, both technologies are self-sufficient in terms of energy, but require additional thermal energy input. Conversely, although the energy produced by gasification is higher than that produced by anaerobic digestion, the latter is cheaper than the former and has a lower entry barrier, especially as the process scales. As the results show, incorporating such processes into jam production facilities is not only profitable, but also allows the application of circular economy principles, reducing waste and external energy consumption, while providing value-added by-products such as phenolic compounds and pectin.

References

[1] Eurostat, Food waste and food waste prevention - estimates, (2023).

[2] SWD, Impact Assessment Report, Brussels, 2023.

[3] EPA, Municipal Solid Waste, (2016). https://archive.epa.gov/epawaste/nonhaz/municipal/web/html/ (accessed April 13, 2024).



Data-driven optimization of chemical dosage in wastewater treatment: A surrogate model approach for enhanced physicochemical phosphorus removal

Florencia Caro1, Jimena Ferreira2,3, José Carlos Pinto4, Elena Castelló1, Claudia Santiviago1

1Biotechnological Processes for the Environment Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 2Chemical & Process Systems Engineering Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 3Heterogeneous Computing Laboratory, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 4Programa de Engenharia Química/COPPE, Universidade Federal do Rio de Janeiro, Cidade Universitária, CP: 68502, Rio de Janeiro, 21941-972 RJ, Brazil

Excessive phosphorus discharge into water bodies can cause severe environmental issues, such as eutrophication [1]. Discharge limits have become more stringent, and the operation of wastewater phosphorus removal systems that are economically feasible and allow for regulatory compliance remains a challenge [2]. Physicochemical phosphorus removal (PPR) using metal salts is effective for achieving low phosphorus levels and can supplement biological phosphorus removal (BPR) [3]. PPR offers flexibility, as phosphorus removal can be adjusted by modifying the chemical dosage [4], and is simple, requiring only a chemical dosing system and a clarifier to separate the treated effluent from the resulting precipitate [3]. Proper dosage control is important to avoid under- or overdosing, which affects phosphorus removal efficiency and operational costs. PPR depends on the system design and effluent characteristics [4]. Therefore, dosages are generally established through laboratory experiments, data from other wastewater treatment plants (WWTPs), and dosing charts [3]. Modeling can enhance chemical dosing in WWTPs, and various sequential simulators can perform this task. BioWin exemplifies this capability, incorporating PPR using metal salts and accounting for pH, precipitation processes, and interactions with organic matter measured as chemical oxygen demand (COD). However, BioWin cannot directly optimize chemical dosing for specific WWTP configurations. This work develops a surrogate model using BioWin's simulated data to create a tool that optimizes chemical dosages based on influent characteristics, thus providing tailored solutions for an edible oil WWTP, which serves as the case study. The industry operates its own WWTP and discharges the treated effluent into a watercourse. Due to the production process, the influent has high and variable phosphorus concentrations. PPR is applied as a supplementary treatment to BPR when phosphorus levels exceed discharge limits. The decision variables in the optimization are the aluminum sulfate dosage for phosphorus removal and the sodium hydroxide dosage for pH adjustment, as aluminum sulfate lowers the effluent pH. The chemical cost is set as the objective function, and effluent discharge parameters as constraints. The surrogate physicochemical model, which links influent parameters and dosing to effluent outcomes, is also included as a constraint. Data acquisition from BioWin is automated using Bio2Py [5]. The optimization model is implemented in Pyomo.
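
A minimal sketch of the resulting optimization is given below in Pyomo, with a placeholder linear surrogate linking influent characteristics and dosing to effluent phosphorus and pH; the coefficients, prices, and limits are illustrative and do not come from the BioWin-trained surrogate.

import pyomo.environ as pyo

# Sketch of the dosing optimization: minimize chemical cost subject to a
# surrogate mapping (influent P, COD, alum dose, NaOH dose) -> effluent P, pH.
# The linear surrogate coefficients below are placeholders.
P_in, COD_in = 25.0, 900.0        # influent phosphorus [mg/L] and COD [mg/L]
price_alum, price_naoh = 0.4, 0.6 # chemical prices, illustrative

m = pyo.ConcreteModel()
m.alum = pyo.Var(within=pyo.NonNegativeReals, bounds=(0, 500))   # mg/L
m.naoh = pyo.Var(within=pyo.NonNegativeReals, bounds=(0, 200))   # mg/L

# placeholder surrogate: effluent P and pH as functions of influent and dosing
m.P_out = pyo.Expression(expr=P_in - 0.06 * m.alum + 0.004 * COD_in)
m.pH = pyo.Expression(expr=7.2 - 0.004 * m.alum + 0.01 * m.naoh)

m.cost = pyo.Objective(expr=price_alum * m.alum + price_naoh * m.naoh,
                       sense=pyo.minimize)
m.p_limit = pyo.Constraint(expr=m.P_out <= 5.0)     # discharge limit [mg/L]
m.ph_low = pyo.Constraint(expr=m.pH >= 6.0)
m.ph_high = pyo.Constraint(expr=m.pH <= 9.0)

pyo.SolverFactory("glpk").solve(m)
print(pyo.value(m.alum), pyo.value(m.naoh), pyo.value(m.P_out))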

Preliminary results indicate that influent COD significantly affects phosphorus removal and should be considered when determining the chemical dosage. For high COD levels, more aluminum than suggested by a rule of thumb [3] is required, whereas for moderate and low COD levels, a lower dosage is needed, leading to potential cost savings. Furthermore, it was found that pH adjustment is only necessary when phosphorus concentrations are high.

[1] V. Smith et al., Environ. Pollut. 100, 179–196 (1999). doi: 10.1016/S0269-7491(99)00091-3.

[2] R. Bashar et al., Chemosphere 197, 280–290 (2018). doi: 10.1016/j.chemosphere.2017.12.169.

[3] Metcalf & Eddy, Wastewater Engineering: Treatment and Resource Recovery (McGraw-Hill, 2014).

[4] A. Szabó et al., Water Environ. Res. 80, 407–416 (2008). doi: 10.2175/106143008x268498.

[5] F. Caro et al., J. Water Process Eng. 63, 105426 (2024). doi: 10.1016/j.jwpe.2024.105426.



Leveraging Machine Learning for Real-Time Performance Prediction of Near Infrared Separators in Waste Sorting Plant

Imam Mujahidin Iqbal1, Xinyu Wang1, Isabell Viedt1,2, Leonhard Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group

Abstract

Many small and medium-sized enterprises (SMEs), including waste sorting facilities, are not fully capitalizing on the data they collect. Recent advances in waste sorting technology are addressing this challenge. For instance, Tanguay-Rioux et al. (2022) used a mixed modelling approach to develop a process model using data from Canadian sorting facilities, while Kroell et al. (2024) leveraged Near Infrared (NIR) data to create a machine learning model that optimizes the NIR setup. A key obstacle for SMEs in utilizing their data effectively is the lack of technical expertise. Wang et al. (2024) demonstrated that the ecoKI platform is a viable solution for SMEs, as it is a low-code platform, requires no prior machine learning knowledge and is simple to use. This work forms part of the EnSort project, which aims to enhance automation and energy efficiency in waste sorting plants by utilizing the collected data. This study explores the application of the ecoKI platform to process measurement data into performance monitoring tools. Data, including material composition and belt weigher sensor readings, were collected from an operational waste sorting plant in Northern Europe. The data was processed using the ready-made building blocks provided within the ecoKI platform, avoiding the need for manual coding. The platform's real-time monitoring feature was utilized to continuously track performance. Two neural network architectures, Multilayer Perceptrons (MLP) and Long Short-Term Memory (LSTM) networks, were explored for predicting NIR separation efficiency. Results demonstrated the potential of these data-driven models to accurately capture essential relationships between input features and NIR performance. This work illustrates how raw measurement data in waste sorting facilities is transformed into actionable insights for real-time performance monitoring, offering an accessible, user-friendly solution for industries that lack machine learning expertise. By enabling SMEs to leverage their existing data, the platform paves the way for improved operational efficiency and decision-making. Furthermore, this approach can be adapted to various industrial contexts besides waste sorting applications, setting the stage for future developments in automated, data-driven optimization of equipment performance.
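
As a generic illustration of the MLP-based prediction step (the actual models were built from plant data inside the ecoKI platform), the sketch below trains a scikit-learn MLP on synthetic throughput, composition, and moisture features to predict separation efficiency; all features and the target relation are placeholders.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Sketch of MLP-based prediction of NIR separation efficiency from process
# measurements. Features and the target relation are synthetic placeholders
# for the plant's belt weigher and composition data.
rng = np.random.default_rng(1)
n = 500
throughput = rng.uniform(2.0, 8.0, n)        # belt weigher reading [t/h]
target_frac = rng.uniform(0.05, 0.4, n)      # fraction of target material [-]
moisture = rng.uniform(0.0, 0.2, n)          # moisture content [-]

X = np.column_stack([throughput, target_frac, moisture])
# synthetic efficiency: drops with throughput and moisture, rises with fraction
y = 0.95 - 0.03 * throughput + 0.2 * target_frac - 0.4 * moisture + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))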

References

Tanguay-Rioux, F., Spreutels, L., Héroux, M., & Legros, R. (2022). Mixed modeling approach for mechanical sorting processes based on physical properties of municipal solid waste. Waste Management, 144, 533–542.

Kroell, N., Maghmoumi, A., Dietl, T., Chen, X., Küppers, B., Scherling, T., Feil, A., & Greiff, K. (2024). Towards digital twins of waste sorting plants: Developing data-driven process models of industrial-scale sensor-based sorting units by combining machine learning with near-infrared-based process monitoring. Resources, Conservation and Recycling, 200, 107257.

Wang, X., Rani, F., Charania, Z., Vogt, L., Klose, A., & Urbas, L. (2024). Steigerung der Energieeffizienz für eine nachhaltige Entwicklung in der Produktion: Die Rolle des maschinellen Lernens im ecoKI-Projekt (p. 840).



A Benchmark Simulation Model of Ammonia Production: Enabling Safe Innovation in the Emerging Renewable Hydrogen Economy

Niklas Groll, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), 2800 Kgs.Lyngby, Denmark

The emerging hydrogen economy plays a vital part in transitioning to a sustainable industry. Green hydrogen can be a renewable fuel for process heat and a sustainable feedstock, e.g., for green ammonia. Going forward, producing green ammonia for the food industry and as a platform chemical will be essential [1]. Accordingly, many developments focus on designing and optimizing hydrogen process routes. However, implementing new process ideas and designs also requires testing and ensuring safety.

Safety methodologies can be tested on so-called "benchmark models." Several benchmark processes have been used to innovate new process control and monitoring methods. The Tennessee-Eastman process imitates the behavior of a standard chemical process, the fed-batch fermentation of penicillin serves as a benchmark for biochemical fed-batch operated processes, and the COST benchmark model allows methodologies for wastewater treatment to be evaluated [2], [3], [4]. However, the established benchmark processes do not feature all relevant aspects of the renewable hydrogen pathways, e.g., sustainable feedstocks and energy supply or electrochemical reactions. Thus, the lack of a basic benchmark model for the hydrogen industry creates unnecessary risks when adopting process monitoring and control technologies.

Introducing our unique simulation benchmark model, we pave the way for safer innovations in the hydrogen industry. Our model connects hydrogen production from renewable electricity to the Haber-Bosch process for ammonia production. By integrating electrochemical electrolysis with a standard chemical process, our ammonia benchmark process encompasses all key aspects of innovative hydrogen pathways. The model, built with the versatile Aveva Process Simulator, allows for a seamless transition between steady-state and dynamic simulations and easy adjustments to process design and control parameters. By introducing a set of failures, the model serves as a benchmark for evaluating risk monitoring and control methods. Furthermore, detecting and eliminating failures can also contribute to the development of new process safety methodologies.

Our new ammonia simulation model is a significant addition to the emerging hydrogen industry, filling the void of a missing benchmark. This comprehensive model serves a dual purpose: It can evaluate and confirm existing process safety methodologies and serve as a foundation for developing new safety methodologies specifically targeting safe hydrogen pathways.

[1] A. G. Olabi et al., ‘Recent progress in Green Ammonia: Production, applications, assessment; barriers, and its role in achieving the sustainable development goals’, Feb. 01, 2023, Elsevier Ltd. doi: 10.1016/j.enconman.2022.116594.

[2] U. Jeppsson and M. N. Pons, ‘The COST benchmark simulation model-current state and future perspective’, 2004, Elsevier Ltd. doi: 10.1016/j.conengprac.2003.07.001.

[3] G. Birol, C. Ündey, and A. Çinar, ‘A modular simulation package for fed-batch fermentation: penicillin production’, Comput Chem Eng, vol. 26, no. 11, pp. 1553–1565, Nov. 2002, doi: 10.1016/S0098-1354(02)00127-8.

[4] J. J. Downs and E. F. Vogel, ‘A plant-wide industrial process control problem’, Comput Chem Eng, vol. 17, no. 3, pp. 245–255, Mar. 1993, doi: 10.1016/0098-1354(93)80018-I.



Thermo-Hydraulic Performance of Pillow-Plate Heat Exchangers with Streamlined Secondary Structures: A Numerical Analysis

Reza Afsahnoudeh, Julia Riese, Eugeny Y. Kenig

Paderborn University, Germany

In recent years, pillow-plate heat exchangers (PPHEs) have gained attention as a promising alternative to conventional shell-and-tube and plate heat exchangers. Their advantages include high pressure resistance, leak-tight construction, and good cleanability. The pillow-like wavy channel structure promotes fluid mixing in the boundary layer, thereby improving heat transfer. However, a significant drawback of PPHEs is boundary layer separation near the welding spots, leading to large recirculation zones. Such zones are the primary cause of increased pressure drop and reduced heat transfer efficiency. Downsizing these recirculation zones is key to improving the thermo-hydraulic performance of PPHEs.

One potential solution is the application of secondary surface structuring [1]. Among others, this can be realized using Electrohydraulic Incremental Forming (EHIF) [2]. Afsahnoudeh et al. [3] demonstrated that streamlined secondary structures, particularly those with ellipsoidal geometries, improved thermo-hydraulic efficiency by up to 6% compared to unstructured PPHEs.

Building upon previous numerical studies, this work investigated the impact of streamlined secondary structures on fluid dynamics and heat transfer within PPHEs. The complex geometries of PPHEs, with and without secondary structures, were generated using forming simulations in ABAQUS 2020. Flow and heat transfer in the inner PPHE channels were simulated using FLUENT 24.1, assuming a single-phase, incompressible, and turbulent system with constant physical properties.

Performance evaluation was based on pressure drop, heat transfer coefficients, and overall thermo-hydraulic efficiency. Additionally, a detailed analysis of the Fanning friction factor and drag coefficient was conducted for various Reynolds numbers to provide deeper insights into the fluid dynamics in the inner channels. The results of these investigations are summarized in this contribution.

References

[1] M. Piper, A. Zibart, E. Djakow, R. Springer, W. Homberg, E.Y. Kenig, Heat transfer enhancement in pillow-plate heat exchangers with dimpled surfaces: A numerical study. Appl. Therm. Eng., vol 153, 142-146, 2019.

[2] E. Djakow, R. Springer, W. Homberg, M. Piper, J. Tran, A. Zibart, E.Y. Kenig, “Incremental electrohydraulic forming - A new approach for the manufacturing of structured multifunctional sheet metal blanks,” Proc. of the 20th International ESAFORM Conference on Material Forming, Dublin, Ireland, vol. 1896, 2017.

[3] R. Afsahnoudeh, A. Wortmeier, M. Holzmüller, Y. Gong, W. Homberg, E.Y. Kenig, “Thermo-hydraulic Performance of Pillow-Plate Heat Exchangers with Secondary Structuring: A Numerical Analysis,” Energies, vol. 16 (21), 7284, 2023.



Modular and Heterogeneous Electrolysis Systems: a System Flexibility Comparison

Hannes Lange1,2, Michael Große2,3, Isabell Viedt2,3, Leon Urbas1,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process to Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems

Green hydrogen will play a key role in the decarbonization of the steel sector. As a result, the demand for hydrogen in the steel industry will increase in the coming years due to the direct reduction of iron [1]. As the currently commercially available electrolysis stacks are far too small for the production of green hydrogen, the scaling strategy of numbering up standardized process units can provide support [2]. In addition, cost-effective production of green hydrogen requires the electrolysis system to be able to follow the electricity load, which necessitates a more efficient and flexible system. The modularization of electrolysis systems can provide an approach for this [3]. The potential to include different electrolysis technologies in one heterogeneous electrolysis system can help make use of technology-specific advantages and reduce disadvantages [4]. In this paper, a design of such a heterogeneous electrolysis system is presented, which is built using the modularization of electrolysis process units and is scaled up for large-scale applications, such as a direct iron reduction process, by numbering up. The impact of different degrees of technological and production capacity-related heterogeneity is investigated using system co-simulation of existing electrolyzer models. The direct reduction of iron for green steel production must be provided with a constant stream of hydrogen from a fluctuating electricity profile. To reduce costs and storage losses, the hydrogen storage capacity must be minimized. For the presented use case, the distribution of technology and production capacity in the heterogeneous plant layout is optimized regarding overall system efficiency and the ability to follow flexible electricity profiles. The resulting Pareto front is analyzed and the results are compared with a conventional homogeneous electrolyzer plant layout. First results underline the benefits of combining different technologies and production capacities of individual systems in a large-scale heterogeneous electrolyzer plant.

[1] Wietschel M, Zheng L, Arens M, Hebling C, Ranzmeyer O, Schaadt A, et al. Metastudie Wasserstoff – Auswertung von Energiesystemstudien. Studie im Auftrag des Nationalen Wasserstoffrats. Karlsruhe, Freiburg, Cottbus: Fraunhofer ISI, Fraunhofer ISE, Fraunhofer IEG; 2021.

[2] Lange H, Klose A, Beisswenger L, Erdmann D, Urbas L. Modularization approach for large-scale electrolysis systems: a review. Sustain Energy Fuels 2024:10.1039.D3SE01588B. https://doi.org/10.1039/D3SE01588B.

[3] Lange H, Klose A, Lippmann W, Urbas L. Technical evaluation of the flexibility of water electrolysis systems to increase energy flexibility: A review. Int J Hydrog Energy 2023;48:15771–83. https://doi.org/10.1016/j.ijhydene.2023.01.044.

[4] Mock M, Viedt I, Lange H, Urbas L. Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains. Comput. Aided Chem. Eng., vol. 53, Elsevier; 2024, p. 1885–90. https://doi.org/10.1016/B978-0-443-28824-1.50315-X.



CFD-Based Shape Optimization of Structured Packings for Enhancing Separation Efficiency in Distillation

Sebastian Blauth1, Dennis Stucke2, Mohamed Adel Ashour2, Johannes Schnebele1, Thomas Grützner2, Christian Leithäuser1

1Fraunhofer ITWM, Germany; 2Ulm University, Germany

In recent years, research on structured packing development for laboratory-scale separation processes has intensified, with one of the main objectives being the miniaturization of laboratory columns with respect to the column diameter. This reduction has several advantages, such as reduced operational costs and lower safety requirements due to the reduced amount of chemicals being used. However, a reduction in diameter also causes problems due to the increased surface-to-volume ratio, e.g., a stronger impact of heat losses or liquid maldistribution issues. There are many different approaches to designing structured packings, such as using repeatedly stacked unit cells, but all of these approaches have in common that the development of new structures and the improvement of existing ones are based on educated guesses by the engineers.
In this talk, we investigate the novel approach of applying techniques from free-form shape optimization to increase the separation efficiency of structured packings in laboratory-scale distillation columns. A simplified single-phase computational fluid dynamics (CFD) model for the mass transfer in the distillation column is used, and a corresponding shape optimization problem is solved numerically with the optimization software cashocs. The approach uses free-form shape optimization, where the shape is not parametrized, e.g., with the help of a CAD model, but all nodes of the computational mesh are moved to alter the shape. In particular, this approach allows for more freedom in the packing design than the classical, parametrized approach. The goal of the shape optimization is to increase the mass transfer in the column by changing the packing's shape. The numerical shape optimization yields promising results and shows a greatly increased mass transfer for the simplified CFD model. To validate our findings, the optimized shape is additively manufactured and investigated experimentally. The experimental results are in close agreement with the simulation-based prediction and show that the separation efficiency of the packing increased by around 20 % as a consequence of the optimization. Our results show that the proposed approach of using free-form shape optimization for improving structured packings in distillation is highly promising and will be pursued further in future research.



Multi-Model Predictive Control of a Distillation Column

Mehmet Arıcı1,3, Wachira Daosud2, Jozef Vargan3, Miroslav Fikar3

1Gaziantep Islam Science and Technology University, Gaziantep 27010, Turkey; 2Faculty of Engineering, Burapha University, Chonburi 20131, Thailand; 3Slovak University of Technology in Bratislava, Bratislava 81237, Slovakia

Due to the increasing demand for performance and the rising complexity of systems, classical model predictive control (MPC) techniques are often inadequate, and new applications often require modifications of the predictive control mechanism. These modifications frequently include a reformulation of the optimal control problem in order to cope with system uncertainties, external perturbations, and the adverse effects of rapid changes in operating points. Moreover, successful implementation of this optimization-driven control technique depends strongly on an accurate and detailed model of the process, which is relatively easy to obtain for chemical processes with a simple structure. As the complexity of the system increases, however, the linear approximation used in MPC may result in poor performance or even total failure. In such a case, a nonlinear system model can be used to calculate the optimal control signal, but the lack of a reliable dynamic process model is one of the major challenges in the real-time implementation of MPC schemes. Even when a model representing the complex behavior is available, such a model can be difficult to optimize in real time.
To demonstrate the potential challenges addressed above, a binary distillation column process is chosen as a testbed. The process is multivariable and inherently nonlinear. Furthermore, a linear model approximation around a critical operating point is valid only in a small neighborhood of that point. Therefore, we propose to employ multiple models that can describe the same process dynamics to a certain degree. In addition to the linear model, a multi-layered feedforward network is used for data-based modeling and constitutes an additional process model. Both models predict the state variables individually, and their outputs and constraints are applied in the MPC algorithm. Various cost function formulations are proposed to cope with multiple models. The aim is to enhance efficiency and robustness in process control by compensating for the limitations of each individual model. Additionally, an offset-free technique is applied to eliminate steady-state errors resulting from model-process mismatch.
We compare the performance of the proposed method to MPC using the full nonlinear model and also to single-model MPC methods for both the linear model and neural network model. We show that the proposed method is only slightly suboptimal with respect to the best available performance and greatly improves over individual methods. In addition, the computational load is significantly reduced when compared to the full nonlinear MPC.
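To illustrate the multi-model idea in a minimal, hedged form, the sketch below (Python with NumPy/SciPy) combines the tracking errors predicted by a linear state-space model and a stand-in for a neural-network model in a single finite-horizon cost; the matrices, weights, and the toy nonlinearity are illustrative assumptions, not the models identified in this work.

# Hedged sketch of a multi-model MPC cost: two process models (a linear
# state-space model and a stand-in for a neural-network model) each predict
# the controlled output over the horizon, and the optimizer trades off both
# predictions. Matrices, weights, and the toy "NN" are illustrative only.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.9, 0.1], [0.0, 0.8]])      # assumed linear model dynamics
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])

def predict_linear(x0, u_seq):
    x, y = x0, []
    for u in u_seq:
        x = A @ x + B.flatten() * u
        y.append(float(C @ x))
    return np.array(y)

def predict_nn(x0, u_seq):
    # placeholder for a trained feedforward network; here a mild nonlinearity
    y_lin = predict_linear(x0, u_seq)
    return y_lin + 0.05 * np.tanh(y_lin)

def mpc_cost(u_seq, x0, y_ref, w_models=(0.5, 0.5), r_u=0.01):
    y1 = predict_linear(x0, u_seq)
    y2 = predict_nn(x0, u_seq)
    track = w_models[0] * np.sum((y1 - y_ref) ** 2) + \
            w_models[1] * np.sum((y2 - y_ref) ** 2)
    return track + r_u * np.sum(np.diff(u_seq, prepend=0.0) ** 2)

x0, horizon = np.array([0.2, 0.0]), 10
y_ref = np.ones(horizon) * 0.95              # e.g. a purity set point
res = minimize(mpc_cost, np.zeros(horizon), args=(x0, y_ref))
print("first control move:", res.x[0])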



Enhancing Fault Diagnosis for Chemical Processes via MSCNN with Hyperparameter Optimization and Uncertainty Estimation

Jingkang Liang, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark

Fault diagnosis is critical for maintaining the safety and efficiency of chemical processes, as undetected faults can lead to operational disruptions, safety hazards, and significant financial losses. Data-driven fault diagnosis methods, especially deep-learning-based methods, have been widely used in the field of fault diagnosis of chemical processes [1]. However, these deep learning methods often rely on manual tuning of the hyperparameters to obtain an optimal model, which is time-consuming and labor-intensive [2]. Additionally, existing fault diagnosis methods typically lack consideration of uncertainty in their analysis, which is essential for assessing the confidence in model predictions, especially in safety-critical industries. This underscores the need for research that provides reliable methods which not only improve accuracy but also provide uncertainty estimation in fault diagnosis for chemical processes, and it sets the premise for the research focus of this contribution.

To this end, we present an assessment of a new approach that combines a Multiscale Convolutional Neural Network (MSCNN) with hyperparameter optimization and Bootstrap-based uncertainty estimation. The MSCNN is designed to capture complex nonlinear features of chemical processes. The Tree-Structured Parzen Estimator (TPE), a Bayesian optimization method, was employed to automatically search for optimal hyperparameters, such as the number of convolutional layers and the kernel sizes in the multiscale module, minimizing manual tuning effort and ensuring higher accuracy when training the deep learning models. Additionally, the Bootstrap technique, which was validated earlier for deep learning applications in property prediction [3], was employed to improve model accuracy and provide uncertainty estimation, making the model more robust and reliable.
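As an illustration of a TPE-based hyperparameter search, the following sketch uses the Optuna library (an assumption; the abstract specifies the TPE sampler but not a particular package). train_and_validate() is a placeholder for training the MSCNN on the fault data and returning a validation accuracy.

# Hedged sketch of TPE hyperparameter optimization with Optuna; the search
# space and the placeholder objective are illustrative assumptions.
import optuna

def train_and_validate(n_layers, kernel_sizes, lr):
    # stand-in for the real MSCNN training loop on the Tennessee Eastman data
    return 1.0 - abs(lr - 1e-3) - 0.01 * abs(n_layers - 3)

def objective(trial):
    n_layers = trial.suggest_int("n_conv_layers", 1, 5)
    kernel_sizes = [trial.suggest_categorical(f"kernel_{i}", [3, 5, 7, 9])
                    for i in range(n_layers)]
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
    return train_and_validate(n_layers, kernel_sizes, lr)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)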

A simulation study was carried out on the Tennessee Eastman Process dataset, which is a widely used benchmark for fault diagnosis in chemical processes. The dataset consists of 21 types of faults, and each sample is a one-dimensional vector of 52 variables. In total, 26,880 samples were collected and split randomly into training, validation, and testing sets in a 0.6:0.2:0.2 ratio. Other state-of-the-art machine learning methods, including MLP, CNN, LSTM, and WDCNN, were used for benchmarking the proposed method. Performance is evaluated based on precision, recall, the number of parameters, and the quality of predictions (i.e., uncertainty estimation).

The benchmarking results showed that the proposed MSCNN with TPE and Bootstrap achieved the highest accuracy among all the methods considered. Ablation studies were carried out to verify the effectiveness of the TPE and Bootstrap components in enhancing the fault diagnosis of chemical processes. Confusion matrices and uncertainty estimates are presented to further discuss the effectiveness of the proposed method.

This work paves the way for more robust and reliable fault diagnosis systems in the chemical industry, offering a powerful tool to enhance process safety and efficiency.

References

[1] Melo et al. "Data-Driven Process Monitoring and Fault Diagnosis: A Comprehensive Survey." Processes 12.2 (2024): 251.

[2] Qin et al. "Adaptive multiscale convolutional neural network model for chemical process fault diagnosis." Chinese Journal of Chemical Engineering 50 (2022): 398-411.

[3] Aouichaoui et al. "Uncertainty estimation in deep learning‐based property models: Graph neural networks applied to the critical properties." AIChE Journal 68.6 (2022): e17696.



Machine learning-aided identification of flavor compounds with green notes in plant-based foods

Huabin Luo, Simen Akkermans, Thian Ping Wong, Ferdinandus Archie Pangestu, Jan F.M. Van Impe

BioTeC+, Chemical and Biochemical Process Technology and Control, Department of Chemical Engineering, KU Leuven, Ghent, Belgium

Plant-based foods have emerged as a global trend as consumers become increasingly concerned about sustainability and health. Despite their growing demand, the presence of off-flavors, especially green notes, significantly impacts consumer acceptance and preference. This study aims to develop a model using Machine Learning (ML) techniques to identify flavor compounds with green notes based on their molecular structure. To achieve this, a database of green compounds in plant-based foods was established by searching flavor databases and the literature. Additionally, non-green compounds with similar structures and balanced chemical classes relative to the green compounds were collected as a negative set for model training. Subsequently, molecular descriptors (MD) and molecular fingerprints (MF) were calculated from the molecular structures of the collected flavor compounds and used as input for ML. In this study, k-Nearest Neighbor (kNN), Logistic Regression (LR), and Random Forest (RF) were used to develop the models, which were then optimized and evaluated. Results indicated that green compounds exhibit a wide range of structural variations. Topological structure, electronic properties, and surface area properties were essential MDs for distinguishing green from non-green compounds. Regarding the identification of flavor compounds with green notes, the LR model performed best, correctly classifying more than 95% of the compounds in the test set, followed by the RF model with an accuracy of more than 92%. In summary, combining MD and MF as the input for ML provides a solid foundation for identifying flavor compounds with green notes. These findings provide knowledge and tools for developing strategies to mitigate green off-flavors and control flavor quality in plant-based foods.
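A minimal sketch of the fingerprint-plus-classifier idea, assuming RDKit and scikit-learn: the four molecules and their green/non-green labels below form a tiny illustrative set rather than the curated database used in this study, and only Morgan fingerprints are used here, whereas the study combines MD and MF.

# Hedged sketch: SMILES -> Morgan fingerprint -> logistic regression classifier.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression

def fingerprint(smiles, n_bits=1024):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(fp)

smiles = ["CCCCCC=O",             # hexanal (green, grassy)
          r"CC/C=C\CCO",          # cis-3-hexen-1-ol (green, leafy)
          "COc1cc(C=O)ccc1O",     # vanillin (non-green)
          "CC(=C)C1CCC(C)=CC1"]   # limonene (non-green, citrus)
labels = [1, 1, 0, 0]             # illustrative green / non-green labels

X = np.array([fingerprint(s) for s in smiles])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# illustrative query: another unsaturated aldehyde
print(clf.predict(fingerprint("CCCC/C=C/C=O").reshape(1, -1)))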



LSTMs and Nonlinear State-Space Models – Are They the Same?

Ashwin Chandrasekhar, Prashant Mhaskar

McMaster University, Canada

This manuscript identifies and addresses discrepancies in the implementation of Long Short-Term Memory (LSTM) neural networks for naturally occurring dynamical processes, specifically in cases claiming to capture input-output dynamic relationships using a state-space framework. While LSTMs are well-suited for these kinds of problems, there are two key issues in how LSTMs are currently structured and trained in this context.

First, the hidden and cell states of the LSTM model are often reinitialized or discarded between input-output sequences in the training dataset. This practice essentially results in a framework where the initial hidden and cell states of each sequence are not trained. However, in a typical state-space model identification process, both the model parameters and the states need to be identified simultaneously.

Second, the model structure of LSTMs differs from a traditional state-space (SS) representation. In state-space models, the current state is defined as a function of the previous state and the input from the prior time step. In contrast, LSTMs use the input from the same time step, creating a structural mismatch. Moreover, each LSTM cell carries both a hidden state and a cell state, representing the short- and long-term memory of a given state, and it is therefore necessary to address this structural difference conceptually.

To resolve these inconsistencies, two changes are proposed in this paper. First, the initial hidden and cell states for the training sequences should be trained. Second, to address the structural mismatch, the hidden and cell states from the LSTM are reformatted to match the state and data pairing that a state-space model would use.

The effectiveness of these modifications is demonstrated using data generated from a simple dynamical system modeled by a Linear Time-Invariant (LTI) state-space system. The importance of these corrections is shown by testing them individually. Interestingly, the worst performance was observed in the model with only trained hidden states, followed by the unmodified LSTM model. The model that only corrected the input timing (without trained hidden and cell states) showed a significant improvement. Finally, the best results were achieved when both corrections were applied together.
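The two proposed modifications can be sketched, for instance, in PyTorch as below; the dimensions, toy data, and training loop are illustrative assumptions, while the trainable per-sequence initial states and the one-step input shift follow the description above.

# Hedged sketch: (i) trainable initial hidden/cell state per training sequence,
# (ii) inputs shifted by one step so the state at time k sees u(k-1).
import torch
import torch.nn as nn

class SSLikeLSTM(nn.Module):
    def __init__(self, n_seq, n_u=1, n_y=1, n_h=8):
        super().__init__()
        self.lstm = nn.LSTM(n_u, n_h, batch_first=True)
        self.out = nn.Linear(n_h, n_y)
        # one trainable initial (h, c) pair per training sequence
        self.h0 = nn.Parameter(torch.zeros(1, n_seq, n_h))
        self.c0 = nn.Parameter(torch.zeros(1, n_seq, n_h))

    def forward(self, u, seq_idx):
        # shift inputs: the state at k should depend on u(k-1), so prepend a zero
        u_shift = torch.cat([torch.zeros_like(u[:, :1, :]), u[:, :-1, :]], dim=1)
        h0, c0 = self.h0[:, seq_idx, :], self.c0[:, seq_idx, :]
        h_seq, _ = self.lstm(u_shift, (h0.contiguous(), c0.contiguous()))
        return self.out(h_seq)

# toy usage: 4 sequences of length 50 with one input and one output (assumed data)
n_seq, T = 4, 50
u = torch.randn(n_seq, T, 1)
y = 0.5 * u.cumsum(dim=1)                     # stand-in "plant" data
model = SSLikeLSTM(n_seq)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u, torch.arange(n_seq)), y)
    loss.backward()
    opt.step()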



Simple Regulatory Control Structure for Proton Exchange Membrane Water Electrolysis Systems

Marius Fredriksen, Johannes Jäschke

Norwegian University of Science and Technology, Norway

Effective control of electrolysis systems connected to renewable energy sources (RES) is crucial to ensure efficient and safe plant operation due to the intermittent nature of most RES. Current control architectures for Proton Exchange Membrane (PEM) electrolysis systems primarily use relatively simple control structures such as Proportional-Integral-Derivative (PID) controllers and on/off controllers. Some works introduce more advanced control structures based on Model Predictive Controllers (MPC) and AI-based control methods (Mao et al., 2024). However, few studies have been conducted on advanced regulatory control (ARC) strategies for PEM electrolysis systems. These control structures have several advantages as they offer fast disturbance rejection, are easier to scale, and are less affected by model accuracy than many of the more computationally expensive control methods, such as MPC (Cammann & Jäschke, 2024).

In this work, we propose an ARC structure for a PEM electrolysis system using the "top-down" section of Skogestad's plantwide control procedure (Skogestad & Postlethwaite, 2007, p. 384). First, we developed a steady-state model loosely based on the PEM system presented by Crespi et al. (2023). The model was verified by comparing the behavior of the polarization curve under varying pressure and temperature. We performed step responses on different system inputs to assess their impact on the outputs and to determine suitable pairings of the manipulated and controlled variables. Thereafter, we formulated an optimization problem for the plant and evaluated various implementations of the system's cost function. Finally, we mapped the active constraint regions of the electrolysis system to identify the active constraints as a function of the system's power input. From an economic perspective, controlling the active constraints is crucial, as deviating from the optimal constraint values usually results in an economic penalty (Skogestad, 2023).

We have shown that the optimal operation of PEM electrolysis systems is close to fully constrained in all regions. This implies that constraint-switching control may be used to achieve optimal system operation. The active constraint regions found for the PEM system share several similarities with those found for alkaline electrolysis systems by Cammann and Jäschke (2024). Finally, we have presented a simple constraint-switching control structure for the PEM electrolysis system using PID controllers and selectors.
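A minimal sketch of a selector-based constraint-switching scheme of the kind described, written in Python: two PI controllers each compute a candidate manipulated variable (here, a stack current set point) and a low select passes the more conservative one. The paired variables and tuning values are illustrative assumptions, not the pairings identified in this work.

# Hedged sketch of PI controllers with a low-select for constraint switching.
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.i = kp, ki, dt, 0.0
    def step(self, sp, pv):
        e = sp - pv
        self.i += self.ki * e * self.dt
        return self.kp * e + self.i

dt = 1.0
power_ctrl = PI(kp=2.0, ki=0.1, dt=dt)   # track the available renewable power
temp_ctrl = PI(kp=5.0, ki=0.2, dt=dt)    # keep stack temperature below its limit

def current_setpoint(p_avail, p_meas, t_max, t_meas):
    u_power = power_ctrl.step(p_avail, p_meas)
    u_temp = temp_ctrl.step(t_max, t_meas)
    return min(u_power, u_temp)          # low select: the active constraint wins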

References

Cammann, L. & Jäschke, J. A simple constraint-switching control structure for flexible operation of an alkaline water electrolyzer. IFAC-PapersOnLine 58, 706–711 (2024).

Crespi, E., Guandalini, G., Mastropasqua, L., Campanari, S. & Brouwer, J. Experimental and theoretical evaluation of a 60 kW PEM electrolysis system for flexible dynamic operation. Energy Conversion and Management 277, 116622 (2023).

Mao, J. et al. A review of control strategies for proton exchange membrane (PEM) fuel cells and water electrolysers: From automation to autonomy. Energy and AI 17, 100406 (2024).

Skogestad, S. Advanced control using decomposition and simple elements. Annual Reviews in Control 56, 100903 (2023).

Skogestad, S. & Postlethwaite, I. Multivariable Feedback Control: Analysis and Design. (John Wiley & Sons, 2007).



Solid streams modelling for process integration of an EAF steel plant

Maura Camerin, Alexandre Bertrand, Laurent Chion

Luxembourg Institute of Science and Technology (LIST), Luxembourg

Global warming is an urgent matter that involves and heavily influences industrial activities. Steelmaking is one of the largest sources of industrial CO2 emissions globally, with key players setting ambitious targets to reduce these emissions by 2030 and/or achieve carbon neutrality by 2050. A key factor in reaching these goals is the efficient use of waste heat, especially in industries that involve high-temperature processes. Waste heat valorisation (WHV) holds significant potential: McBrien et al. (2016) highlighted that about 28% of the heating needs in a blast furnace plant could be met using existing WHV technologies. This figure could rise to 44% if solid streams, not just gaseous and liquid ones, are included.

At the current state, heat recovery from solid streams, like semi-finished products and slag, and its transfer to cold solid streams, such as scrap and DRI, is rather uncommon. Its mathematical definition for process integration (PI) / mathematical programming (MP) models poses unique challenges due to the need for specialized equipment (Matsuda et al., 2012).

The objective of this work is to propose novel WHV models of such solid streams, specifically formulated for PI/MP problems. In a first step, emerging technologies for slag treatment will be incorporated, and key parameters of the streams will be defined. The heat recovery potential of the slag will be modelled based on its charge weight and the recovery technology used, for example from a heat exchanger below the slag pit or using more advanced treatment technologies. The algorithm will calculate the resulting mass flow and temperature of the heat transfer medium, which can be incorporated into the heat cascade to meet the needs of cold streams such as scrap or DRI preheating.

The expected outcome is an improvement of solid streams models and, as such, more precise process integration results. The improved quantification of waste heat valorisation, especially through the inclusion of previously unconsidered streams, will be of significant benefit to support the decarbonization of the steel industry.

References:

Matsuda, K., Tanaka, S., Endou, M., & Iiyoshi, T. (2012). Energy saving study on a large steel plant by total site based pinch technology. Applied Thermal Engineering, 43, 14–19.

McBrien, M., Serrenho, A. C., & Allwood, J. M. (2016). Potential for energy savings by heat recovery in an integrated steel supply chain. Applied Thermal Engineering, 103, 592–606. https://doi.org/10.1016/j.applthermaleng.2016.04.099



Design of Microfluidic Mixers using Bayesian Shape Optimization

Rui Miguel Grunert da Fonseca, Fernando Pedro Martins Bernardo

CERES, Department of Chemical Engineering, University of Coimbra, Portugal

Mixing and mass transfer are fundamental aspects of many chemical and biological processes. For instance, in the synthesis of nanoparticles, where a solute solution is mixed with an antisolvent to induce nanoprecipitation, highly efficient and controlled mixing conditions are required to obtain particles with low size variability. Specialized mixing technologies, such as microfluidic mixing, are therefore used. Microfluidic mixing is a continuous process in which passive mixing of two different streams of fluid takes place in micro-sized channels. The geometry and small volume of the device enable very fast mixing, which in turn reduces mass transfer limitations during the nanoparticle formation process. Several different mixer geometries, such as the toroidal and herringbone micromixer [1], have already been used for nanoparticle production. Since mixer geometry plays such a vital role in mixing performance, mathematical optimization of that geometry is clearly a tool to exploit in order to come up with superior designs.
In this work, a methodology for shape optimization of micromixers using Computational Fluid Dynamics (CFD) and Bayesian Optimization is presented. It consists of the sequential performance evaluation of mixer geometries defined through geometric variables, such as angles and lengths, with predefined bounds. The performance of a given geometry is evaluated through CFD simulation, using the OpenFOAM software, of the Villermaux-Dushman reaction system [2]. This system consists of two competing reactions: a quasi-instantaneous acid-base reaction and a very fast redox reaction. Mixing time can therefore be inferred by analyzing the reaction selectivity at the mixer's outlet. Using Bayesian Optimization, the geometric domain can be explored with an emphasis on maximizing the defined objective functions. This is done by assigning probabilistic surrogate functions to each objective based on previously attained data. An acquisition function is then optimized in order to determine the next geometry to be evaluated, balancing exploration and exploitation. This approach is especially appropriate when objective function evaluation is expensive, which is the case for CFD simulations. The methodology is very flexible and can be applied to many other equipment design problems. Its main challenge is the definition of the optimization problem and its domain. This is similar to network design problems, where the choice of the system's superstructure has a great impact on problem solvability. The domain must include as many viable solutions as possible while minimizing problem dimensionality and avoiding redundancy of solutions.
In this work, a case study for the optimization of the toroidal mixer geometry is presented for three different operating conditions and seven geometric degrees of freedom. Both pressure drop and mixing time were considered as objective functions and the respective Pareto fronts were obtained. The trade-offs between the objective functions were analyzed for each case and the general design features are presented.
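A hedged, single-objective sketch of such a Bayesian optimization loop, assuming scikit-optimize; run_cfd() is a placeholder for the OpenFOAM evaluation of the Villermaux-Dushman selectivity and pressure drop for a candidate geometry, and the two objectives are scalarized here for brevity, whereas the study builds full Pareto fronts.

# Hedged sketch of Bayesian optimization over geometric variables; the
# variable names, bounds, and stand-in CFD model are illustrative assumptions.
from skopt import gp_minimize
from skopt.space import Real

space = [Real(0.2, 1.0, name="torus_radius_mm"),
         Real(0.05, 0.4, name="channel_width_mm"),
         Real(10.0, 80.0, name="inlet_angle_deg")]

def run_cfd(geometry):
    r, w, a = geometry
    # stand-in for meshing + OpenFOAM run + post-processing of selectivity
    mixing_time = (w / r) * (1.0 + abs(a - 45.0) / 45.0)
    pressure_drop = 1.0 / (w ** 2)
    return mixing_time, pressure_drop

def objective(geometry, w_dp=1e-3):
    t_mix, dp = run_cfd(geometry)
    return t_mix + w_dp * dp          # scalarized objective for this sketch

res = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best geometry:", res.x, "objective:", res.fun)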

[1] C. Webb et al., "Using microfluidics for scalable manufacturing of nanomedicines from bench to GMP: A case study using protein-loaded liposomes," International Journal of Pharmaceutics, vol. 582, p. 119266, May 2020.

[2] J.-M. Commenge and L. Falk, "Villermaux–Dushman protocol for experimental characterization of micromixers," Chemical Engineering and Processing: Process Intensification, vol. 50, no. 10, pp. 979–990, Oct. 2011.



Solubility prediction of lipid compounds using machine learning

Agustin Porley Santana1, Gabriel Gutierrez1, Soledad Gutiérrez Parodi1, Jimena Ferreira1,2

1Grupo de Ingeniería de Sistemas Químicos y de Procesos, Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Heterogeneous Computing Laboratory, Instituto de Computación, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay

Aligned with the principles of biorefinery and circular economy, biomass waste valorization not only reduces the environmental impact of production processes but also presents economic opportunities for companies. Various natural lipids with complex chemical compositions are recovered from different types of biomass and further processed, such as essential oils from citrus waste and eucalyptus oil from wood.

In this context, wool grease, a complex mixture of esters of steroid and aliphatic alcohols with fatty acids, is a byproduct of wool washing [1]. Its derivatives, including lanolin, cholesterol, and lanosterol, differ in their methods of extraction and market value.

Purification of the high-value products can be achieved using crystallization, chromatography, liquid-liquid extraction, or solid-liquid extraction. The interaction of the selected compound with a liquid phase, known as a solvent or diluent depending on the case, is a crucial aspect in the design of these processes. To achieve an effective separation of target components, it is crucial to identify the solubility of the compounds in a solvent. Given the practical difficulties in determining solubility and the vast array of natural compounds, a comprehensive bibliographic source for their solubilities in different solvents remains elusive. Employing machine learning [2] is an alternative for predicting the solubility of the target compound in alternative solvents.

This work focuses on the construction of a model to predict the solubility of several lipids in various solvents, using experimental data obtained from scientific articles and handbooks. Almost 800 data points covering 6 solutes and 34 solvents were collected. As a first step, 21 properties were evaluated as input variables of the model, including properties of the solute, properties of the solvent, and temperature.

After data preprocessing, the feature selection step uses the Pearson and Spearman correlations between input variables to select the relevant ones. The model is obtained using Random Forest and is compared to a linear regression model. The dataset was divided into training and validation sets with an 80-20 split. Two splitting strategies are analysed: using different compounds in the training and validation sets (extrapolation model) and a random separation of the sets (interpolation model).

The performance of the models obtained with the full and the reduced input variable sets is compared, as is that of the interpolation and extrapolation models.

In all cases, the Random Forest model performs better than the linear one. The preliminary results show that the model using the reduced set of input variables performs better than the one using the full set of input variables.
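A minimal sketch of this workflow, assuming scikit-learn and pandas and a hypothetical data file; the column names, the Spearman correlation threshold, and the model settings are illustrative, not those used in the study.

# Hedged sketch: correlation-based feature reduction, interpolation vs.
# extrapolation splits, and Random Forest vs. linear regression.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, GroupShuffleSplit
from sklearn.metrics import r2_score

df = pd.read_csv("lipid_solubility.csv")          # hypothetical dataset file
X, y = df.drop(columns=["solubility", "solute"]), df["solubility"]

# drop one of each highly correlated feature pair (Spearman |rho| > 0.9, assumed)
corr = X.corr(method="spearman").abs()
drop = [c for i, c in enumerate(corr.columns) if any(corr.iloc[:i][c] > 0.9)]
X_red = X.drop(columns=drop)

# interpolation split (random) ...
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, test_size=0.2, random_state=0)
# ... or extrapolation split (unseen solutes kept for validation)
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X_red, y, groups=df["solute"]))

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)
print("RF R2:", r2_score(y_te, rf.predict(X_te)),
      "Linear R2:", r2_score(y_te, lin.predict(X_te)))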

References

[1] S. Gutiérrez, M. Viñas (2003). Anaerobic degradation kinetics of a cholesteryl ester. Water Science and Technology, 48(6), 141-147.

[2] P. Daoutidis, J. H. Lee, S. Rangarajan, L. Chiang, B. Gopaluni, A. M. Schweidtmann, I. Harjunkoski, M. Mercangöz, A. Mesbah, F. Boukouvala, F. V. Lima, A. del Rio Chanona, C. Georgakis (2024). Machine learning in process systems engineering: Challenges and opportunities, Computers & Chemical Engineering, 181, 108523.



Refining Equation-Based Model Building for Practical Applications in Process Industry

Shota Kato, Manabu Kano

Kyoto University, Japan

Automating physical model building from literature databases holds significant potential for advancing the process industry, particularly in the rapid development of digital twins. Digital twins, based on accurate physical models, can effectively simulate real-world processes, yielding substantial operational and strategic benefits. We aim to develop an AI system that automatically extracts relevant information from documents and constructs accurate physical models.
One of the primary challenges is constructing practical models from extracted equations. The existing method [Kato and Kano, 2023] builds physical models by combining equations to satisfy two criteria: ensuring all specified variables are included and matching the number of degrees of freedom with the number of input variables. While this approach excels at quickly generating models that meet these requirements, it does not guarantee their solvability, leading to the inclusion of impractical models. This issue underscores the need for a robust validation mechanism.
To address this issue, we propose a filtering method that refines models generated by the approach above to identify solvable models. This method evaluates models by comparing variables across different equations, efficiently identifying redundant or conflicting equations to ensure that only coherent and functional models are retained. Furthermore, we generated an evaluation dataset comprising physical models relevant to chemical engineering and applied our proposed method. The results demonstrated that our method accurately identifies solvable models, significantly enhancing the automated model-building approach from the literature.
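A hedged sketch of such a solvability filter, here using SymPy as a stand-in for the paper's own consistency checks: it verifies that the degrees of freedom match the chosen inputs and that the remaining unknowns can actually be solved for. The example equations are illustrative.

# Hedged sketch of a solvability check for a candidate equation-based model.
import sympy as sp

def is_solvable(equations, inputs):
    unknowns = sorted(set().union(*[eq.free_symbols for eq in equations]) - set(inputs),
                      key=lambda s: s.name)
    if len(unknowns) != len(equations):        # degree-of-freedom mismatch
        return False
    try:
        sol = sp.solve(equations, unknowns, dict=True)
    except Exception:
        return False
    return len(sol) > 0                        # empty => redundant or conflicting set

# toy model: ideal-gas law and a density definition, with T, P, R, n as inputs
T, P, rho, n, V, R = sp.symbols("T P rho n V R", positive=True)
eqs = [sp.Eq(P * V, n * R * T), sp.Eq(rho, n / V)]
print(is_solvable(eqs, inputs=[T, P, R, n]))   # True: V and rho can be solved for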
However, our method faces challenges mainly when the same variable is defined differently under varying conditions. For example, the concentration of a gas dissolved in a liquid might be determined by temperature via an equilibrium constant or by pressure using Henry's law. If extracted equations include these equations, the model-building algorithm may include both equations in the output models; then, the proposed method may struggle to filter models precisely. Another limitation is the necessity to compare multiple equations to determine the model's solvability. In cases where several reaction rate equations and corresponding rate constants are available, all possible combinations must be evaluated. This strategy can be complex and cannot be efficiently handled by our current methodology without additional enhancements.
In summary, aiming to automate physical model building, we proposed a method for refining the models generated by an existing approach. Our method successfully identified solvable models from sets that included redundant ones. Future work will focus on refining our algorithms to handle complexities such as variables defined under different conditions and integrating advanced natural language processing technologies to standardize notation and interpret nuanced relationships between variables, ultimately achieving truly automated physical model building.

References
Kato and Kano, "Efficient physical model building algorithm using equations extracted from documents," Computer Aided Chemical Engineering, 52, pp. 151–156, 2023.



Solar Desalination and Porphyrin Mediated Vis-Light Photocatalysis in Decolouration of Dyes as Biological Analogues Applied in Advanced Water Treatment

Evans Martin Nkhalambayausi Chirwa, Fisseha Andualem Bezza, Osemeikhain Ogbeifun, Shepherd Masimba Tichapondwa, Wesley Lawrence, Bonhle Manoto

University of Pretoria, South Africa

Engineering can be made simpler and more impactful by observing and understanding how organisms in nature solve pressing problems. For example, scientists around the world have observed green plants thriving without organic food inputs, using the complex photosynthesis process to kick-start a biochemical food chain. Two case studies are presented in this contribution based on research under way at the University of Pretoria: solar desalination of seawater using plant-based carbon material as solar absorbers, and solar or vis-light photocatalysis using porphyrin-based BiOCl and BiOIO3 compounds mimicking the function of chlorophyll in advanced water treatment and recovery. In the study on solar desalination using 3D-printed Graphene Oxide (GO), 82% water recovery has thus far been achieved using a simple GO-Black TiO2 monolayer as a solar absorber supported on cellulose nanocubes. In preparation for a possible scale-up of the process, methods are being investigated for the inhibition or reversal of salting on the absorber surface, which inhibits energy transfer. For the vis-light photocatalytic process for the decolouration of dye, a Porphyrin@Bi12O17Cl2 system was used to successfully degrade methyl blue dye in batch experiments, achieving up to 98% degradation within 120 minutes. These results show that more advanced and more efficient engineered systems can be achieved through observation of nature and of how these systems have survived over billions of years. Based on these observations, the Water Utilisation Group at the University of Pretoria has studied and developed fundamental processes for the degradation and remediation of unwanted compounds such as disinfection byproducts (DBPs), volatile organic compounds (VOCs), and pharmaceutical products in water.



Diagnosing Faults in Wastewater Systems: A Data-Driven Approach to Handle Imbalanced Big Data

Morteza Zadkarami1, Krist Gernaey2, Ali Akbar Safavi1, Pedram Ramin2

1Shiraz University, Iran, Islamic Republic of; 2Technical University of Denmark (DTU), Denmark

Process monitoring is critical in industrial settings to ensure system functionality, making it essential to identify and understand the causes of any faults that occur. Although a considerably broader range of research focuses on fault detection, significantly less attention has been devoted to fault diagnosis. Typically, faults arise either from abnormal instrument behavior, suggesting the need for calibration or replacement, or from process faults indicating a malfunction within the system [1]. A key objective of this study is to apply the proposed process fault diagnosis methodology to a benchmark that closely mirrors real-world conditions. Specifically, we propose a fault diagnosis framework for a wastewater treatment plant (WWTP) that effectively addresses the challenges of the imbalanced big data typically found in large-scale systems. Fault scenarios were simulated using the Benchmark Simulation Model No. 2 (BSM2) [2], a highly regarded tool that closely mimics the operation of a real-world WWTP. Using BSM2, a dataset was generated spanning 609 days and comprising 876,960 data points across 31 process parameters.

In contrast to our previous research [3], [4], which primarily focused on fault detection frameworks for imbalanced big data in the BSM2, this study extends the approach to include a comprehensive fault diagnosis structure. Specifically, it determines whether a fault has occurred and, if so, identifies whether the fault is due to an abnormality in the instrument, the process, or both simultaneously. A major challenge lies in the highly imbalanced nature of the dataset: 87.82% of the data represent normal operating conditions, while 6% reflect instrument faults, 6.14% correspond to process faults, and less than 0.05% involve concurrent faults in both the process and instruments. To address this imbalance, we evaluated multiple deep network architectures and various learning strategies to identify a robust fault diagnosis framework that achieves acceptable accuracy across all fault scenarios.
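One common way to counter such imbalance, shown here only as an illustrative sketch and not as the strategy adopted in this study, is to weight the loss function by inverse class frequencies; the class shares below are taken from the abstract, while the network outputs are placeholders.

# Hedged sketch of inverse-frequency class weighting in a classification loss.
import numpy as np
import torch
import torch.nn as nn

# approximate class shares: normal, instrument fault, process fault, both (assumed 0.04%)
shares = np.array([0.8782, 0.06, 0.0614, 0.0004])
weights = 1.0 / shares
weights = weights / weights.sum() * len(shares)   # normalize around 1
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))

logits = torch.randn(16, 4)                       # placeholder model outputs
labels = torch.randint(0, 4, (16,))
loss = criterion(logits, labels)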

References:

[1] Liu, Y., Ramin, P., Flores-Alsina, X., & Gernaey, K. V. (2023). Transforming data into actionable knowledge for fault detection, diagnosis and prognosis in urban wastewater systems with AI techniques: A mini-review. Process Safety and Environmental Protection, 172, 501-512.

[2] Al, R., Behera, C. R., Zubov, A., Gernaey, K. V., & Sin, G. (2019). Meta-modeling based efficient global sensitivity analysis for wastewater treatment plants–An application to the BSM2 model. Computers & Chemical Engineering, 127, 233-246.

[3] Zadkarami, M., Gernaey, K. V., Safavi, A. A., & Ramin, P. (2024). Big Data Analytics for Advanced Fault Detection in Wastewater Treatment Plants. In Computer Aided Chemical Engineering (Vol. 53, pp. 1831-1836). Elsevier.

[4] Zadkarami, M., Safavi, A. A., Gernaey, K. V., & Ramin, P. (2024). A Process Monitoring Framework for Imbalanced Big Data: A Wastewater Treatment Plant Case Study. In IEEE Access (Vol. 12, pp. 132139-132158). IEEE.



Industrial Time Series Forecasting for Fluid Catalytic Cracking Process

Qiming Zhao1, Yaning Zhang2, Tong Qiu1

1Department of Chemical Engineering, Tsinghua University, Beijing 100084, China; 2PetroChina Planning & Engineering Institute, Beijing 100083, China


Industrial process systems generate complex time-series data, challenging traditional regression models that assume static relationships and struggle with system uncertainty and process drifts. These models may also be sensitive to noise and disturbances in the training data, potentially leading to unreliable predictions when encountering fluctuating inputs.

To address these limitations, researchers have explored various algorithms in time-series analysis. The wavelet transform (WT) has emerged as a powerful tool for analyzing non-stationary time series by representing them with localized signals. For instance, Hosseini et al. applied WT and feature extraction to improve gas-liquid two-phase flow meters in oil and petrochemical industries, successfully classifying flow regimes and calculating void fraction percentages with low errors. Another approach to modeling uncertainties in observations is through stochastic processes, with the Gaussian process (GP) gaining popularity due to its flexibility. Bradford et al. demonstrated its effectiveness by proposing a GP-based nonlinear model predictive control algorithm that considered state-dependent uncertainty, which they verified in a challenging semi-batch bioprocess case study. Recent research has explored the integration of WT and GP. Band et al. developed a hybrid model combining these techniques, which accurately predicted groundwater levels in arid areas. However, much of the current research focuses on one-step ahead forecasts rather than comprehensive process modeling.

This research explores a novel predictive modeling framework that integrates wavelet features with GP regression, thus creating a more robust predictive model capable of extracting both temporal and cross-variable information from the data while adapting to changing patterns over time. The effectiveness of this hybrid method is verified using an industrial dataset from fluid catalytic cracking (FCC), a complex petrochemical process crucial for fuel production. The results demonstrate the method’s robustness in delivering accurate and reliable predictions despite the presence of noise and system variability typical in industrial settings. Percentage yields are predicted with a mean absolute percentage error (MAPE) of less than 1% for critical products, meeting the requirements for industrial application in modeling and optimization.
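A minimal sketch of the wavelet-plus-GP idea, assuming PyWavelets and scikit-learn: a sliding window of a process variable is decomposed with a discrete wavelet transform and the target is regressed on the concatenated coefficients with a Gaussian process; the db4 wavelet, window length, and toy signal are illustrative assumptions, not the settings used for the FCC dataset.

# Hedged sketch: wavelet features from sliding windows + GP regression.
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def wavelet_features(window, wavelet="db4", level=2):
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.concatenate(coeffs)

# toy stand-in for an industrial series and a lagged target
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.01 * t) + 0.1 * rng.standard_normal(t.size)
y = np.roll(x, -5)

win = 64
X = np.array([wavelet_features(x[i:i + win]) for i in range(0, 1500, win)])
Y = np.array([y[i + win] for i in range(0, 1500, win)])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, Y)
mean, std = gp.predict(X[:3], return_std=True)   # predictive mean and uncertainty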

References

[1] Band, S. S., Heggy, E., Bateni, S. M., Karami, H., Rabiee, M., Samadianfard, S., Chau, K.-W., & Mosavi, A. (2021). Groundwater level prediction in arid areas using wavelet analysis and Gaussian process regression. Engineering Applications of Computational Fluid Mechanics, 15(1), 1147–1158. https://doi.org/10.1080/19942060.2021.1944913

[2] Bradford, E., Imsland, L., Zhang, D., & del Rio Chanona, E. A. (2020). Stochastic data-driven model predictive control using Gaussian processes. Computers & Chemical Engineering, 139, 106844. https://doi.org/10.1016/j.compchemeng.2020.106844

[3] Hosseini, S., Taylan, O., Abusurrah, M., Akilan, T., Nazemi, E., Eftekhari-Zadeh, E., Bano, F., & Roshani, G. H. (2021). Application of Wavelet Feature Extraction and Artificial Neural Networks for Improving the Performance of Gas-Liquid Two-Phase Flow Meters Used in Oil and Petrochemical Industries. Polymers, 13(21), Article 21. https://doi.org/10.3390/polym13213647



Electrochemical conversion of CO2 into CO. Analysis of the influence of the electrolyzer type, operating parameters, and separation stage

Luis Vaquerizo1,2, David Danaci2,3, Bhavin Siritanaratkul4, Alexander J Cowan4, Benoît Chachuat2

1Institute of Bioeconomy, University of Valladolid, Spain; 2The Sargent Centre for Process Systems Engineering, Imperial College, UK; 3I-X Centre for AI in Science, Imperial College, UK; 4Department of Chemistry, Stephenson Institute for Renewable Energy, University of Liverpool, UK

The electrochemical conversion of CO2 into CO is an opportunity for the decarbonization of the chemical industry, turning the current linear carbon utilization scheme into a more circular one. Compared to other existing CO2 conversion technologies, the electrochemical reduction of CO2 into CO benefits from the fact that it is a room-temperature process, it does not depend on the physical location of the plant, and its energy efficiency is in the range of 40-50%. Although some techno-economic analyses have already assessed the potential of this technology, finding that the CO production cost is mainly influenced by the CO2 cost, the availability and price of electricity, and the maturity of the carbon capture technologies, none of them addressed the effect of the electrolyzer type, operating conditions, and separation stage on the final production cost. This work determines the impact of the electrolyzer type (either AEM or BPM), the operating parameters (current density and CO2 inlet flow), and the technology used for product separation (either PSA or, for the first time for this technology, cryogenic distillation) on the annual production cost of CO using experimental data for CO2 electrolysis. The main findings are that the use of either AEM or BPM electrolyzers and either PSA or cryogenic distillation yields a very similar annual production cost (around 25 MM$/y for a 100 t/d CO plant), and that operation beyond current densities of 150 mA/cm2 and CO2 inlet flowrates of 3.2 (AEM) and 1 (BPM) NmL/min/cm2 only slightly affects the annual production cost. For all the possible operating cases (AEM or BPM electrolyzer, and PSA or cryogenic distillation), the minimum production cost is reached when maximizing the CO productivity in the electrolyzer. Moreover, although the downstream process alternative has minimal influence on the CO production cost, a downstream process based on PSA separation seems preferable, at least at this scale, since the cryogenic distillation alternative also requires a final PSA to separate the column overhead products. Finally, a minimum selling price of 868 $/t CO is estimated in this work considering a CO2 cost of 40 $/t and an electricity cost of 0.03 $/kWh. Although this value is higher than the current CO selling price (600 $/t), there is some margin for improvement if the current electrolyzer CAPEX and lifetime are improved.



Enhancing Pharmaceutical Development: Process Modelling and Control Strategy Optimization for Liquid Drug Product Multiphase Mixing and Milling Processes

Noor Al-Rifai, Guoqing Wang, Sushank Sharma, Maxim Verstraeten

Johnson & Johnson Innovative Medicine, Belgium

Recent regulatory trends from health authorities advocate for a greater understanding of the drug product and process, enabling more efficient drug development, supply chain agility, and the introduction of new and challenging therapies and modalities. Traditional drug product process development and validation rely on fully experimental design spaces with limited insight into what drives process performance, where every change (in equipment, material attributes, or scale) triggers the requirement for a new experimental design space and a post-approval submission, as well as risking issues with process performance. Quality-by-Design in process development and manufacturing helps to achieve these aims, aided by sufficient mechanistic understanding and resulting in flexible yet robust control strategies.

Mechanistic correlations and computational fluid dynamics simulations provide digital input towards demonstrating process robustness, scale-up and transfer; particularly for pharmaceutical mixing and milling setups involving complex and unconventional geometries.

This presentation will show synergistic workflows, utilizing mechanistic correlations and/or CFD and PAT to gain process understanding, optimize development work and construct control strategies for pharmaceutical multiphase mixing and milling processes.



Assessing operational resilience within the natural gas monetisation network for enhanced production risk management: Qatar as a case study

Noor Yusuf, Ahmed AlNouss, Roberto Baldacci, Tareq Al-Ansari

Hamad Bin Khalifa University, Qatar

The increased turbulence in energy consumer markets has imposed risks on energy suppliers regarding sustaining markets and profits. While risk mitigation strategies are essential when assessing new projects, retrofitting existing industrially mature infrastructure to adapt to changing market conditions imposes additional cost and time. For the state of Qatar, a gas-dependent economy, the natural gas industry is highly vulnerable to exogenous uncertainties in final markets, including demand and price volatility. On the other hand, endogenous uncertainties can hinder a project's profitability and sustainability targets due to poor proactive planning in the early design stages. Hence, in order to understand the risk management capabilities of an industrially mature network, it is crucial to assess resilience at the production system and overall network levels. This is especially important in natural gas supply chains, as failure in the production part would affect the subsequent components, represented by storage, shipping, and agreed volume sales to markets. This work evaluates the resilience of the existing Qatari natural gas monetisation infrastructure (i.e., production) by addressing the system's failure to satisfy the targeted production capacity due to process-level disruptions and/or final market conditions. The network addressed herein comprises 7 direct and indirect natural gas utilisation industrial clusters (i.e., natural gas liquefaction, ammonia and urea, methanol and MTBE, power, and gas-to-liquids). Process technical data simulated using Aspen Plus, along with calculated emissions and economic data, were used to estimate the resilience of individual processes and of the overall network under different endogenous disruption scenarios. First, historical and forecasted demand and prices were used to simulate the optimal natural gas allocation to processes over a planning period between 2000 and 2032. Secondly, the resilience indices of the processes within the baseline allocation strategy were investigated throughout the planning period. Overall, a resilience index value below 100% indicates low process resilience towards changing endogenous and/or exogenous fluctuations. For the methanol and ammonia processes within the investigated network, the annual resilience index was enhanced from 35% to 90% for the ammonia process and from 36% to 84% for the methanol process. The increase in the resilience index was mainly due to the introduction of operational bounds and forecasted demand and price data that aided efficient, resilient process management. Finally, qualitative recommendations are summarised to aid decision-makers in planning under different economic and environmental scenarios to maintain the resilience of the network despite the fluctuations imposed by unavoidable external factors, including climate change, policy change, and demand fluctuations.



Membrane-based Blue Hydrogen Production in Sub-Ambient Temperature: Process Optimization, Techno-Economic Analysis and Life Cycle Assessment

Jiun Yun1, Boram Gu1, Kyunhwan Ryu2

1Chonnam National University, Korea, Republic of (South Korea); 2Sunchon National University, Korea, Republic of (South Korea)

In 2022, 62% of hydrogen was produced using natural gas, while only 0.1% came from water electrolysis [1]. This suggests that an immediate shift to green hydrogen is infeasible in the short- to medium-term, which makes blue hydrogen production crucial. Auto-thermal reforming (ATR) processes, which combine steam methane reforming reaction and partial oxidation, offer high energy efficiency by eliminating additional heating. During the ATR process, CO2 can be captured from the shifted syngas, which consists mainly of a CO2/H2 binary mixture.

Recently, gas separation membranes have been gaining significant attention for their high energy efficiency for CO₂ capture. For instance, the Polaris CO₂-selective membrane, specifically designed to separate CO₂/H₂ mixtures, is known to offer a high CO₂ permeance of 1000 GPU and a CO₂/H₂ selectivity of 10. Furthermore, sub-ambient temperatures are reported to enhance its CO₂/H₂ selectivity up to 20, enabling the production of high-purity liquid CO₂ (over 98%) [1].

Hydrogen recovery rates are significantly affected by the H₂ purity at the PSA inlet and the pressure of the tail gas [2], which are dependent on the selected capture location. In the ATR process, CO2 capture can be applied to shifted syngas and PSA tail gas. Therefore, optimal location selection is crucial for improving hydrogen production efficiency.

In this study, an integrated process combining ATR with a sub-ambient-temperature membrane process for CO₂ capture was designed using gPROMS. Two different capture locations were compared, and the economic feasibility of the integrated process was evaluated. The ATR process was developed as a flowsheet-based model, while the membrane unit was built using equation-based custom modeling, which consists of balances and permeation models. Concentration polarization effects, which play a significant role in performance when membrane permeance is high, were also accounted for. In both cases, the CO₂ capture rate was fixed at 90%.

In the membrane-based CO2 capture process, the inlet gas was cooled to -35°C using a cooling cycle, increasing membrane selectivity up to 20. This enables energy savings and the capture of high-purity liquid CO₂. Our simulation results demonstrated that the H₂ purity at the PSA inlet reached 92% when CO2 was captured from syngas, and this high H₂ purity improved the PSA recovery rate. For PSA tail gas capture, the CO₂ capture rate was 98.8%, with only a slight increase in the levelized cost of hydrogen (LCOH). However, in the syngas capture case, higher capture rates led to increased costs. Overall, syngas capture achieved a lower LCOH due to the higher PSA recovery rate.

Further modeling of the PSA unit will be performed to optimize the integrated process and perform a CO₂ life cycle assessment. Our results will provide insights into the potential of sub-ambient membrane gas separation for blue hydrogen production and guidelines for the design and operation of PSA and gas separation membranes in the ATR process.

References

[1] International Energy Agency, Global Hydrogen Review 2023, 2023.

[2] C.R. Spínola Franco, Pressure Swing Adsorption for the Purification of Hydrogen, Master's Dissertation, University of Porto, 2014.



Dynamic Life Cycle Assessment in Continuous Biomanufacturing

Ada Robinson Medici1, Mohammad Reza Boskabadi2, Pedram Ramin2, Seyed Soheil Mansouri2, Stavros Papadokonstantakis1

1Institute of Chemical, Environmental and Bioscience Engineering TU Wien,1060 Wien, Austria; 2Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Process Systems Engineering (PSE) has seen rapid advancements since its inception in the 1970s. Currently, there is an increasing demand for tools that enable the integration of sustainability metrics into process simulation to cope with today’s grand challenges. In recent years, continuous manufacturing has gained attention in biologics production due to its ability to improve process monitoring and ensure consistent product quality. This work introduces a Python-based interface that integrates process simulation and control with cradle-to-gate Life Cycle Assessment resulting into dynamic process inventories and thus to dynamic life cycle inventories and impact assessment (dLCA), with the potential to improve environmental assessment and sustainability metrics in the biopharmaceutical industry.

This framework utilizes the open-source tool Activity Browser, a graphical user interface for Brightway25 that supports the analysis of environmental impacts using LCA (Mutel, 2017). The tool allows real-time tracking of the environmental inventories of the foreground process and its impact assessment. Unlike traditional sustainability indicators, such as the E-factor, which focuses only on waste generation, the introduced approach can retrieve comprehensive environmental inventories from the ecoinvent 3.9.10 database to calculate mid-point (e.g., global warming potential) and end-point LCA indicators (e.g., damage to ecosystems) according to the ReCiPe framework, a widely recognized method in life cycle impact assessment.
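As a hedged illustration of how dynamic foreground inventories translate into dynamic impacts, the sketch below multiplies hourly exchanges from a process simulation by static characterization factors; the factor values are placeholders and are not retrieved from ecoinvent via Brightway/Activity Browser.

# Hedged sketch: dynamic inventory (per hour) x static characterization factors
# = dynamic impact profile; all numbers are illustrative assumptions.
import numpy as np
import pandas as pd

hours = pd.RangeIndex(24, name="hour")
inventory = pd.DataFrame({                      # foreground exchanges per hour
    "electricity_kWh": 5.0 + np.random.rand(24),
    "solvent_kg": 0.2 + 0.05 * np.random.rand(24),
}, index=hours)

cf_gwp = pd.Series({                            # kg CO2-eq per unit (assumed values)
    "electricity_kWh": 0.4,
    "solvent_kg": 2.1,
})

gwp_profile = inventory.mul(cf_gwp, axis=1).sum(axis=1)   # dynamic GWP, kg CO2-eq/h
gwp_total = gwp_profile.sum()                             # integral over the day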

This study utilizes the KTB1 benchmark model as a dynamic simulation model for continuous biomanufacturing, which serves as a decision-support tool for evaluating process design, optimization, monitoring, and control strategies in real time (Boskabadi et al., 2024). KTB1 is a comprehensive dynamic model developed in MATLAB-Simulink covering upstream and downstream components that provides an integrated production process perspective (Boskabadi, M. R., 2024). The functional unit for this study is the production of a typical maintenance dose commonly found in pharmaceutical formulations, 40 mg of the pure Active Pharmaceutical Ingredient (API) Lovastatin, under dynamic manufacturing conditions.

Preliminary results show that control decisions can have a significant impact on the dynamic and integral LCA profile for selected resource and energy-related Life Cycle Impact Assessment (LCIA) categories. By integrating LCIA into the control framework, a multi-objective model predictive control (MO-MPC) is enabled with the potential to dynamically adjust process parameters and optimize process conditions based on real-time environmental and process data (Sohn et al., 2020). This work lays the foundation for an advanced computational platform for assessing sustainability in biomanufacturing, positioning it as a critical tool in the industry's ongoing transition toward more environmentally responsible continuous production methods.

Importantly, open-source tools ensure transparency, adaptability, and accessibility, facilitating collaboration and further development within the scientific community.

References

Boskabadi, M. R., 2024. KT-Biologics I (KTB1): A dynamic simulation model for continuous biologics manufacturing.

Boskabadi, M.R., Ramin, P., Kager, J., Sin, G., Mansouri, S.S., 2024. KT-Biologics I (KTB1): A dynamic simulation model for continuous biologics manufacturing. Computers & Chemical Engineering 188, 108770. https://doi.org/10.1016/j.compchemeng.2024.108770

Mutel, C., 2017. Brightway: An open source framework for Life Cycle Assessment. JOSS 2, 236. https://doi.org/10.21105/joss.00236

Sohn, J., Kalbar, P., Goldstein, B., Birkved, M., 2020. Defining Temporally Dynamic Life Cycle Assessment: A Review. Integr Envir Assess & Manag 16, 314–323. https://doi.org/10.1002/ieam.4235



Multi-level modeling of reverse osmosis process based on CFD results

Yu-hyeok Jeong, Boram Gu

Chonnam National University, Korea, Republic of (South Korea)

Reverse osmosis (RO) is a membrane separation process that is widely used in desalination and wastewater treatment [1]. However, solutes blocked by the membrane can accumulate near the membrane, causing concentration polarization (CP), which hinders RO separation performance and reduces energy efficiency [2]. Structures called spacers are added between membrane sheets to create flow channels, which also induces disturbed flow that mitigates CP. Different types of spacers exhibit different hydrodynamic behavior, and understanding this is essential to designing the optimal spacer.

Computational fluid dynamics (CFD) is a useful tool for theoretically analyzing the impact of these spacers, through which the effect of each spacer's geometric characteristics on RO performance can be understood. However, because of its large computational requirements, CFD is limited to small-scale RO units. Alternatively, mathematical modeling of RO modules can help to understand the effect of spacers on process variables and separation performance by incorporating appropriate constitutive model equations. Although such models demand few computing resources even for large-scale simulations, the impact of spacers is approximated with simple empirical correlations, usually derived from experimental data over limited ranges of operating and geometric conditions.

To overcome this, we present a novel modeling approach that combines these two methods. First, three-dimensional (3D) CFD models of small-scale RO spacer units, just large enough to represent the periodicity of the spacer geometry, were simulated for various spacers (20 geometries in total) and a wide range of operating conditions. By fitting the relationship between the operating conditions and the simulation results with response surface methodology, a surrogate model with the operating conditions as independent variables and the simulation results as dependent variables was derived for each spacer. Using the surrogate model, the outlet conditions were derived from the inlet conditions for a single unit. These outlet conditions were then iteratively applied as the inlet conditions for the next unit, thereby representing processes at various scales.
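
A minimal sketch of this unit-chaining idea is given below, assuming a generic second-order response-surface surrogate with placeholder coefficients in place of the CFD-fitted ones.

    import numpy as np

    def unit_surrogate(inlet, coeffs):
        # Second-order response-surface surrogate for one spacer-filled unit.
        # inlet = (pressure [bar], salt concentration [kg/m3], crossflow velocity [m/s])
        p, c, u = inlet
        x = np.array([1.0, p, c, u, p*c, p*u, c*u, p**2, c**2, u**2])
        flux, dp, dc = coeffs @ x                 # permeate flux, pressure drop, concentration rise
        return flux, (p - dp, c + dc, u)          # outlet state fed to the next unit

    def simulate_module(inlet, coeffs, n_units=50):
        # Chain n_units identical spacer units to represent a larger-scale module/process.
        total_flux, state = 0.0, inlet
        for _ in range(n_units):
            flux, state = unit_surrogate(state, coeffs)
            total_flux += flux
        return total_flux, state

    coeffs = np.zeros((3, 10))                    # placeholder coefficients (would come from CFD fits)
    coeffs[:, 0] = [1e-6, 0.01, 0.05]
    total_flux, outlet = simulate_module((15.0, 2.0, 0.15), coeffs)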

As expected, the CFD analysis in this study showed varied hydrodynamic behaviors across the spacers, resulting in up to a 10% difference in water flux. The multi-level modeling using the surrogate model showed that the optimal spacer design may vary with process size, as the ranking of performance indices, such as recovery and specific energy consumption, changes with process scale. In particular, pressure losses were not proportional to process size, and water recovery did not increase linearly. This highlights the need for CFD-derived surrogate models in large-scale process simulations.

By combining 3D CFD simulation with 1D mathematical modeling, the hydrodynamic behavior induced by the geometric characteristics of the spacer and the varied effects of spacers at different process scales can be efficiently captured, providing a platform for large-scale process optimization.

References

[1] Sung, Berrin, Novel technologies for reverse osmosis concentrate treatment: A review, Journal of Environmental Management, 2015.

[2] Haidari, Heijman, Meer, Optimal design of spacers in reverse osmosis, Separation and Purification Technology, 2018.



Optimal system design and scheduling for ammonia production from renewables under uncertainty: Stochastic programming vs. robust optimization

Alexander Klimek1, Caroline Ganzer1, Kai Sundmacher1,2

1Max Planck Institute for Dynamics of Complex Technical Systems, Department of Process Systems Engineering, Sandtorstr. 1, 39106 Magdeburg, Germany; 2Otto von Guericke University, Chair for Process Systems Engineering, Universitätsplatz 2, 39106 Magdeburg, Germany

Production of green ammonia from renewable electricity could play a vital role in a net zero economy, yet the intermittency of wind and solar energy poses challenges to sizing and scheduling of such plants [1]. One approach to investigate the interaction between fluctuating renewables and chemical processes is to model the production network in the form of a large-scale mixed-integer linear programming (MILP) problem [2, 3].

A wide range of parameters is necessary to characterize the chemical production system, including investment costs, wind speeds, solar irradiance, purchase and sales prices. These parameters are usually derived from literature data and fixed before optimization. However, parameters such as costs and capacity factors are not deterministic in reality but rather subject to uncertainty. Mathematical methods of optimization under uncertainty can be applied to deal with such deviations from the nominal parameter values. Stochastic programming (SP) and robust optimization (RO) in particular are widely used to address parameter uncertainty in optimization problems and to identify solutions that satisfy all constraints under all possible realizations of uncertain parameters [4].

In this work, we reformulate a deterministic MILP model for determining the optimal design and scheduling of an ammonia plant based on renewables as a SP and a RO problem. Using the Pyomo extensions mpi-sppy and ROmodel [5, 6], the optimization problems are implemented and solved under parameter uncertainty. The results in terms of plant design and operation are compared with the outcomes of the deterministic formulation. In the case of SP, a two-stage problem is used, whereby Monte Carlo sampling is applied to generate different scenarios. Analysis of the value of the stochastic solution (VSS) shows that when the model is constrained by the nominal scenario's first-stage decisions and subjected to the conditions of other scenarios, the deterministic model cannot handle even a 1% decrease in the wind potential, highlighting the model’s sensitivity. The stochastic approach mitigates this risk with a solution approximately 30% worse in terms of net present value (NPV) but more resilient to fluctuations. For RO, different approaches are chosen with regard to uncertainty sets and formulation. The very conservative approach using box uncertainty sets is relaxed by the use of auxiliary parameters, ensuring that only a certain number of uncertain parameters can take their worst-case value at the same time. The RO framework is extended by the use of adjustable decision variables, requiring a reduction in the time horizon compared to the SP formulation in order to solve these problems within a reasonable time frame.
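
As a toy illustration of the two-stage stochastic formulation (not the authors' full MILP, and written as a plain extensive form rather than through mpi-sppy), a Pyomo sketch with three sampled wind scenarios might look as follows; all cost and capacity-factor numbers are placeholders and a GLPK installation is assumed.

    import pyomo.environ as pyo

    # Toy extensive-form two-stage problem: size a plant (first stage) so that a fixed yearly
    # delivery can be met under sampled wind capacity factors (second stage).
    scenarios = {"low": 0.25, "nominal": 0.35, "high": 0.45}
    prob = {s: 1.0 / len(scenarios) for s in scenarios}
    delivery, capex, op_cost = 1.0e5, 1.0e6, 10.0     # MWh/y, EUR/MW, EUR/MWh (placeholders)

    m = pyo.ConcreteModel()
    m.cap = pyo.Var(within=pyo.NonNegativeReals)                    # first-stage capacity [MW]
    m.prod = pyo.Var(list(scenarios), within=pyo.NonNegativeReals)  # second-stage production [MWh]

    m.avail = pyo.Constraint(list(scenarios),
                             rule=lambda m, s: m.prod[s] <= 8760 * scenarios[s] * m.cap)
    m.demand = pyo.Constraint(list(scenarios),
                              rule=lambda m, s: m.prod[s] >= delivery)

    m.obj = pyo.Objective(expr=capex * m.cap
                               + sum(prob[s] * op_cost * m.prod[s] for s in scenarios),
                          sense=pyo.minimize)

    pyo.SolverFactory("glpk").solve(m)    # the capacity is driven by the worst (low-wind) scenario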

References:
[1] Wang, H. et al. 2021. ACS Sust. Chem. Eng. 9, 7, 2816–2834.
[2] Ganzer, C. and Mac Dowell, N. 2020. Sust. En. Fuels 4, 8, 3888–3903.
[3] Svitnič, T. and Sundmacher, K. 2022. Appl. En. 326, 120017.
[4] Mavromatidis, G. 2017. PhD Dissertation. ETH Zurich.
[5] Knueven, B. et al. 2023. Math. Prog. Comp. 15, 4, 591–619.
[6] Wiebe, J. and Misener, R. 2022. Optim. & Eng. 23, 4, 1873–1894.



CO2 Sequestration and Valorization to Synthetic Fuels: Multi-criteria Based Process Design and Optimization for Feasibility

Thuy T. Hong Nguyen, Satoshi Taniguchi, Takehiro Yamaki, Nobuo Hara, Sho Kataoka

National Institute of Advanced Industrial Science and Technology, Japan

CO2 capture and utilization/storage (CCU/S) is considered one of the linchpin strategies for reducing greenhouse gas (CO2-equivalent) emissions. CCS promises to remove large amounts of CO2 but faces high-cost barriers. CCU produces high-value products, thereby gaining some economic benefit, but requires large supplies of energy. Different CCU pathways have been studied to utilize CO2 as a renewable raw material for producing valuable chemical products and fuels. In particular, many kinds of catalysts and synthesis conditions have been examined to convert CO2 to different types of gaseous and liquid fuels (methane, methanol, gasoline, etc.). As the demand for these synthetic fuels is exceptionally high, such CCU pathways could help mitigate large CO2 emissions. Nevertheless, implementation of these CCU pathways hinges on an ample supply of carbon-free H2 raw material that is currently not available for large-scale production. Thus, to remove large industrial CO2 emission sources, these CCU pathways must be combined with sequestration.

This study aims to develop a CCUS system that can remove large CO2 emissions with high economic efficiency. Herein, multiple CCU pathways converting CO2 to different gaseous and liquid synthetic fuels (methane, methanol and Fischer-Tropsch fuels) were examined for integration with CO2 sequestration in an economic manner. A process simulator is employed to design and optimize the operating conditions of all included processes. A multi-objective evaluation model is constructed to optimize the economic benefit and the amount of CO2 reduced. Based on the optimization results, feasible synthetic fuel production processes that can be integrated with CO2 sequestration to mitigate large CO2 emission sources can be proposed.

The results showed that the configuration of the CCUS system (the types of CCU pathways and the amounts of CO2 to be utilized and stored) depends heavily on the type and purchase cost of the hydrogen raw material, product selling prices, and the carbon tax. A CCUS system integrating the CCU pathways converting CO2 to methanol and methane with CO2 sequestration can deliver a large CO2 reduction at low economic loss. The economic benefit of this system is dramatically enhanced when the carbon tax rises to $250/ton CO2. Due to its exceptionally high energy demand and high initial investment cost, the Fischer-Tropsch fuel synthesis process is the least competitive in terms of both economic benefit and potential CO2 reduction.



Leveraging Pilot-scale Data for Real-Time Analysis of Ion Exchange Chromatography

Søren Villumsen, Jesper Frandsen, Jakob Huusom, Xiaodong Liang, Jens Abildskov

Technical University of Denmark, Denmark

Chromatography is a key step in the downstream processing of bio-manufactured products to attain high purity. Chromatographic separation is hard to operate optimally because the governing mechanisms, convection, diffusion, mass transfer and adsorption, are difficult to describe and only partly captured by partial differential equations. The processes may also be subject to batch-to-batch variation in feed composition and operating conditions. To ensure high product purity, chromatography may be operated conservatively, meaning fraction collection may be started later than necessary and terminated prematurely. This results in sub-optimal chromatographic yields in production, as operators are forced to cut the purification process at a point where purity is ensured at the expense of product loss (Kozorog et al. 2023).

If the overall separation process were better understood and monitored, such that batch-to-batch variation could be better accounted for, it may be possible to secure a higher yield in the separation process (Kumar and Lenhoff 2020). Using mechanistic or hybrid models of the chromatographic process, the process may be analyzed in real time, providing insights about its current state. These insights could be communicated to operators, allowing better-informed decision-making that increases yield without sacrificing purity.

The potential for this real-time process prediction was investigated at a pilot-scale ion-exchange facility at the Technical University of Denmark (DTU). The process is equipped with sensors for real-time data extraction and supports digital twin development (Jones et al. 2022). Using these data, mechanistic and hybrid models were fitted to predict key process events such as breakthrough. The partial differential equations were solved using state-of-the-art discretization methods that are computationally fast enough to allow real-time prediction of process events (Frandsen et al. 2024). This serves as a proof of concept for real-time analysis of chromatographic processes through Monte Carlo simulation.
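
For illustration only, the sketch below solves a strongly simplified convection-dispersion column with a linear isotherm using first-order upwind finite volumes and explicit Euler; this is not the discontinuous Galerkin scheme of Frandsen et al. (2024), and all parameter values are placeholders.

    import numpy as np

    # Convection-dispersion column with a linear isotherm treated as a retardation factor.
    L, N = 0.1, 200                    # column length [m], number of cells
    u, Dax, k_eq = 1e-3, 1e-7, 2.0     # interstitial velocity, axial dispersion, Henry coefficient
    c_in, t_end = 1.0, 400.0           # step feed concentration, simulated time [s]
    dz = L / N
    dt = 0.4 * dz / u                  # CFL-limited time step
    c = np.zeros(N)                    # mobile-phase concentration profile

    def rhs(c):
        cm = np.concatenate(([c_in], c))               # inlet ghost cell (upwind convection)
        conv = -u * np.diff(cm) / dz
        cp = np.concatenate(([c_in], c, [c[-1]]))      # Dirichlet inlet, zero-gradient outlet
        disp = Dax * np.diff(cp, 2) / dz**2
        return (conv + disp) / (1.0 + k_eq)            # linear adsorption as a retardation factor

    for _ in range(int(t_end / dt)):
        c = c + dt * rhs(c)

    breakthrough_fraction = c[-1] / c_in               # outlet concentration relative to feed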

References

Frandsen, Jesper, Jan Michael Breuer, Eric Von Lieres, Johannes Schmölder, Jakob K. Huusom, Krist V. Gernaey, and Jens Abildskov. 2024. “Discontinuous Galerkin Spectral Element Method for Continuous Chromatography: Application to the Lumped Rate Model Without Pores.” In Computer Aided Chemical Engineering, 53:3325–30. Elsevier.

Jones, Mark Nicholas, Mads Stevnsborg, Rasmus Fjordbak Nielsen, Deborah Carberry, Khosrow Bagherpour, Seyed Soheil Mansouri, Steen Larsen, et al. 2022. “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU).” In Computer Aided Chemical Engineering, 49:1525–30. Elsevier.

Kozorog, Mirijam, Simon Caserman, Matic Grom, Filipa A. Vicente, Andrej Pohar, and Blaž Likozar. 2023. “Model-Based Process Optimization for mAb Chromatography.” Separation and Purification Technology 305 (January): 122528.

Kumar, Vijesh, and Abraham M. Lenhoff. 2020. “Mechanistic Modeling of Preparative Column Chromatography for Biotherapeutics.” Annual Review of Chemical and Biomolecular Engineering 11 (1): 235–55.



Model Based Flowsheet Studies on Cement Clinker Production Processes

Georgios Melitos1,2, Bart de Groot1, Fabrizio Bezzo2

1Siemens Industry Software Limited, 26-28 Hammersmith Grove, W6 7HA London, United Kingdom; 2CAPE-Lab (Computer-Aided Process Engineering Laboratory), Department of Industrial Engineering, University of Padova, 35131 Padova PD, Italy

The cement value chain is responsible for 7-8% of global CO2 emissions [1]. These emissions originate both directly, via chemical reactions (e.g. calcination) taking place in the process, and indirectly, via the process energy demands. Around 90% of these emissions come from the production of clinker, the main constituent of cement [1]. Clinker production comprises high-temperature and carbon-intensive processes, which occur in the pyroprocessing section of a cement plant. The chemical and physical phenomena occurring in such processes are rather complex and, to this day, these processes have mostly been examined and modelled in the literature as standalone unit operations [2-4]. As a result, holistic model-based approaches to flowsheet simulation of cement plants are lacking in the literature.

This paper presents first-principles mathematical models for the simulation of the pyro-process section of a cement production plant; more specifically the preheating cyclones, the calciner, the rotary kiln and the grate cooler. These mathematical models are then combined in an integrated flowsheet model for the production of clinker. The models incorporate the major heat and mass transport phenomena, reaction kinetics and thermodynamic property estimation models. These mathematical formulations have been implemented in the gPROMS® Advanced Process Modelling environment and solved for various reactor geometries and operating conditions.

The final flowsheet is validated against published data, demonstrating the ability to accurately predict operating temperatures, degree of calcination, gas and solids compositions, fuel consumption and overall CO2 emissions. The utilization of several types of alternative fuels is also investigated to evaluate the potential for avoiding CO2 emissions by replacing part of the fossil coal fuel (used as the reference case). Trade-offs between different process KPIs (net energy consumption, conversion efficiency, CO2 emissions) are identified and evaluated for each fuel utilization scenario.

REFERENCES

[1] Monteiro, Paulo JM, Sabbie A. Miller, and Arpad Horvath. "Towards sustainable concrete." Nature materials 16.7 (2017): 698-699.

[2] Iliuta, I., Dam-Johansen, K., & Jensen, L. S. (2002). Mathematical modeling of an in-line low-NOx calciner. Chemical engineering science, 57(5), 805-820.

[3] Pieper, C., Liedmann, B., Wirtz, S., Scherer, V., Bodendiek, N., & Schaefer, S. (2020). Interaction of the combustion of refuse derived fuel with the clinker bed in rotary cement kilns: A numerical study. Fuel, 266, 117048.

[4] Cui, Z., Shao, W., Chen, Z., & Cheng, L. (2017). Mathematical model and numerical solutions for the coupled gas–solid heat transfer process in moving packed beds. Applied energy, 206, 1297-1308.



A Social Life Cycle Assessment for Sustainable Pharmaceutical Supply Chains

Inês Duarte, Bruna Mota, Andreia Santos, Tânia Pinto-Varela, Ana Paula Barbosa-Povoa

Centre for Management Studies of IST (CEG-IST), University of Lisbon, Portugal

The increasing pressure from governments, media, and consumers is driving companies to adopt sustainable practices by reducing their environmental and social impacts. While the economic dimension of sustainable supply chain management is always considered, and the environmental one has been thoroughly addressed, the social dimension remains underdeveloped (Barbosa-Póvoa et al., 2018) despite growing attention to social sustainability issues in recent years (Duarte et al., 2022). This imbalance is particularly concerning in the healthcare sector, especially within the pharmaceutical industry, given the significant impact of pharmaceutical products on public health and well-being. At the same time, while vital to society, pharmaceutical supply chains incur social risks throughout, from primary production activities to the manufacturing of the final product and its distribution. Addressing these concerns requires a comprehensive framework that captures the social impacts of every stage of the pharmaceutical supply chain.

Social LCA is a well-established approach to assessing the social performance of supply chains by identifying both the positive and negative social impacts linked to a system's life cycle. By adopting a four-step process as outlined in the ISO 14040 standard (ISO, 2006), Social LCA enables a thorough evaluation of the social sustainability of supply chain activities. This approach allows for the identification and mitigation of key social risks, thus enabling more informed decision-making and promoting sustainable development goals. Hence, in this work, a social life cycle assessment framework is developed and integrated into the pharmaceutical supply chain design and planning model of Duarte et al. (2022), a multi-objective mixed integer linear programming model. The economic objective is measured through the maximization of the Net Present Value, while the social objective maximizes equity in access through a Disability Adjusted Life Years (DALY) metric. The social life cycle assessment will allow a broader social assessment of the whole supply chain activities by evaluating social risks and generating actionable insights for minimizing the most significant social risks within the pharmaceutical supply chain.

A case study based on a global vaccine supply chain is conducted where the main social hotspots are identified, as well as trade-offs between the economic and accessibility objectives. Through this analysis, informed recommendations are developed to mitigate potential social impacts associated with the supply chain under study.

The integration of social LCA into a pharmaceutical supply chain design and planning optimization model constitutes the main contribution of this work, providing a practical tool for decision-makers to enhance the overall sustainability of their operations and address the complex social challenges of global pharmaceutical supply chains.

Barbosa-Póvoa, A. P., da Silva, C., & Carvalho, A. (2018). Opportunities and challenges in sustainable supply chain: An operations research perspective. European Journal of Operational Research, 268(2), 399–431. https://doi.org/10.1016/j.ejor.2017.10.036

Duarte, I., Mota, B., Pinto-Varela, T., & Barbosa-Póvoa, A. P. (2022). Pharmaceutical industry supply chains: How to sustainably improve access to vaccines? Chemical Engineering Research and Design, 182, 324–341. https://doi.org/10.1016/j.cherd.2022.04.001

ISO. (2006). ISO 14040:2006 Environmental management - Life cycle assessment - Principles and framework. Geneva, Switzerland: International Organization for Standardization.



Quantum Computing for Synthetic Bioprocess Data Generation and Time-Series Forecasting

Shawn Gibford1,2, Mohammed Reza Boskabadi2, Seyed Soheil Mansouri1,2

1Sqale; 2Technical University of Denmark

Data scarcity in bioprocess engineering, particularly for single-cell organism cultivation in pilot-scale photobioreactors (PBRs), poses significant challenges for accurate model development and process optimization. This issue is especially pronounced in pilot-scale operations (e.g., 20L PBRs), where data acquisition is infrequent and costly. The nonlinear nature of these processes, coupled with various non-idealities, creates a substantial gap between lab-scale and pilot-scale operations, hindering the development of accurate mechanistic models and data-driven approaches.

To address these challenges, we propose a novel approach leveraging quantum computing and machine learning. Specifically, we employ a quantum Generative Adversarial Network (qGAN) to generate synthetic bioprocess time-series data, with a focus on quality indicator variables like Optical Density (OD) and Dissolved Oxygen (DO), key metrics for Dry Biomass estimation. The quantum approach offers potential advantages over classical methods, including better generalization capabilities and faster model training using tensor networks.

Various network and quantum circuit architectures were tested to capture the statistical characteristics of real process data. Our results show high fidelity in synthetic data generation and significant improvement in the performance of forecasting models, such as Long Short-Term Memory (LSTM) networks, when augmented with GAN-generated samples. This approach addresses critical data gaps, enabling better model development and parameter optimization in bioprocess engineering.
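
The augmentation step itself (independently of how the synthetic windows are generated) can be sketched as below, assuming placeholder tensors for the real and qGAN-generated OD/DO windows and a small PyTorch LSTM as the forecaster.

    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        # One-step-ahead LSTM forecaster for OD/DO windows (2 features).
        def __init__(self, n_features=2, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features)
        def forward(self, x):              # x: (batch, time, features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])

    real = torch.randn(64, 24, 2)          # placeholder real OD/DO windows
    synthetic = torch.randn(256, 24, 2)    # placeholder qGAN-generated windows
    x = torch.cat([real, synthetic])       # augmented training set
    y = x[:, -1] + 0.01 * torch.randn(x.shape[0], 2)   # dummy one-step-ahead targets

    model = Forecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()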

The success in generating high-quality synthetic data opens new avenues for bioprocess optimization and scale-up. By addressing the critical issue of data scarcity, this method enables the development of more accurate virtual twins and robust optimization strategies. Furthermore, the ability to continuously update models with newly acquired online data suggests a pathway towards adaptive, real-time process control.

This work not only demonstrates the potential of quantum machine learning in bioprocess engineering but also provides a framework for addressing similar data scarcity issues in other complex scientific domains. Future research will focus on refining the qGAN architectures, exploring integration with real-time sensor data, and extending the approach to other bioprocess systems and scale-up scenarios.

References:

Orlandi, F.; Barbierato, E.; Gatti, A. Enhancing Financial Time Series Prediction with Quantum-Enhanced Synthetic Data Generation: A Case Study on the S&P 500 Using a Quantum Wasserstein Generative Adversarial Network Approach with a Gradient Penalty. Electronics 2024, 13, 2158. https://doi.org/10.3390/electronics13112158



Optimising Crop Schedules and Environmental Impact in Climate-Controlled Greenhouses: A Hydroponics vs. Soil-Based Food Production Case Study

Sarah Namany, Farhat Mahmoud, Tareq Al-Ansari

Hamad bin Khalifa University, Qatar

Optimising greenhouse operations in arid regions is essential for sustainable agriculture due to limited water resources and high energy demands for climate control. This paper proposes a multi-objective optimisation framework aimed at minimising both the operational costs and the environmental emissions of a climate-controlled greenhouse. The framework schedules the cultivation of three crops, namely tomato, cucumber, and bell pepper, throughout the year. These crops are selected for their differing growth conditions, which induce variability in energy and water inputs and thus provide a comprehensive assessment of the optimisation model. The model integrates factors such as temperature, humidity, light intensity, and irrigation requirements specific to each crop. It is solved using a genetic algorithm combined with Pareto front analysis to address the multi-objective nature effectively. This approach facilitates the identification of optimal trade-offs between cost and emissions, offering a set of efficient solutions for decision-makers. Applied to a greenhouse in an arid region, the model evaluates two scenarios: a hydroponic system and a conventional soil-based system. Results of the study indicate that the multi-objective optimisation effectively reduces operational costs and environmental emissions while fulfilling crop demand. The hydroponic scenario demonstrates higher water-use efficiency and allows for precise nutrient management, resulting in a lower environmental impact compared to the conventional soil system. Moreover, the optimised scheduling balances energy consumption for climate control across different crop requirements, enhancing overall sustainability. This study underscores the potential of advanced optimisation techniques in enhancing the efficiency and sustainability of greenhouse agriculture in challenging environments.
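
A minimal sketch of such a genetic-algorithm Pareto search, assuming the pymoo library and purely illustrative monthly cost and emission weights rather than the study's greenhouse model, could look as follows.

    import numpy as np
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    SOIL_COST, HYDRO_COST = 1.0, 1.6          # placeholder monthly operating costs
    SOIL_EMIS, HYDRO_EMIS = 1.2, 0.7          # placeholder monthly emissions

    class GreenhouseSchedule(ElementwiseProblem):
        # Decision variables: monthly hydroponic share in [0, 1] (toy stand-in for the crop schedule).
        def __init__(self):
            super().__init__(n_var=12, n_obj=2, xl=0.0, xu=1.0)
        def _evaluate(self, x, out, *args, **kwargs):
            cost = float(np.sum((1 - x) * SOIL_COST + x * HYDRO_COST))
            emissions = float(np.sum((1 - x) * SOIL_EMIS + x * HYDRO_EMIS))
            out["F"] = [cost, emissions]

    res = minimize(GreenhouseSchedule(), NSGA2(pop_size=50), ("n_gen", 100), seed=1, verbose=False)
    pareto_front = res.F                      # cost vs. emissions trade-off points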



Technological Trends towards Sustainable and Circular Process Design

Mauricio Sales-Cruz, Teresa Lopez-Arenas

Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa, Mexico

Current trends in technology are being directed toward the enhancement of teaching methods and the applicability of engineering concepts to industry, especially in the areas of sustainability and circular process design. These shifts signal a transformation in the education of chemical and biological engineering students, who are being equipped with emerging skills through practical, digital-focused approaches that align with evolving industry needs and global sustainability objectives.

Within this educational framework, significant focus is placed on computational modeling and simulation tools, sustainable process design and the circular economy, which are recognized as essential in preparing students to implement efficient and environmentally friendly processes. For instance:

  • The circular economy concept is introduced, where waste is eliminated by redesigning production systems to enhance or maintain profitability. This model emphasizes product longevity, recycling, reuse, and the valorization of waste.
  • Process integration (the biorefineries concept) is highlighted as a complex challenge requiring advanced techniques in separation, catalysis, and biotechnology, integrating both chemical and biological engineering disciplines.
  • Modeling and simulation tools are essential in engineering education, enabling students to analyze and optimize complex processes without incurring the costs or time associated with experimental setups.
  • The use of programming languages (such as MATLAB or COMSOL), equation-based process simulators (such as gPROMS), and modular process simulators (such as ASPEN or SuperPro Designer) is strongly encouraged.

From a pedagogical viewpoint, primary educational trends for knowledge transfer and meaningful learning include:

  1. Problem-Based Learning (PBL) approaches are promoted, using practical industry-related problems to improve students' decision-making skills and knowledge application.
  2. Virtual Labs offer students remote or simulated access to complex processes, including immersive experiences in industrial plants and laboratory equipment.
  3. Integration of Industry 4.0 and Process Automation tools facilitate the analysis of massive data (Big Data) and introduce technologies such as artificial intelligence (AI).
  4. Interdisciplinary and Collaborative Learning fosters integration across disciplines such as biology, chemistry, materials engineering, computer science, and economics.
  5. Blended Learning Models combine traditional teaching methods with digital tools, with online courses, e-learning platforms, and multimedia resources enhancing in-person classes.
  6. Continuing Education and Micro-credentials are encouraged as technologies and approaches evolve rapidly, with short, specialized courses often offered through online platforms.

This paper critically examines these educational trends, emphasizing the shift toward practical and digital approaches that align with changing industry demands and sustainability goals. Additionally, student-led case studies on organic waste revalorization will be included, demonstrating the quantification of environmental impacts, assessments of economic viability in terms of investment and operational costs, and evaluations of innovative solutions grounded in circular economy principles.



From experiment design to data-driven modeling of powder compaction process

Rene Brands1, Vikas Kumar Mishra2, Jens Bartsch1, Mohammad Al Khatib2, Markus Thommes1, Naim Bajcinca2

1RPTU Kaiserslautern, Germany; 2TU Dortmund, Germany

Tableting is a dry granulation process for compacting powder blends into tablets. In this process, a blend of active pharmaceutical ingredients (APIs) and excipients is fed into the hopper of a rotary tablet press via feeders. Inside the tablet press, rotating feed frame paddle wheels fill powder into dies, with tablet mass adjusted by the lower punch position during die filling. Pre-compression rolls press air out of the die, while main compression rolls apply the force necessary for compacting the powder into tablets. In this paper, process variables such as feeder screw speeds, feed frame impeller speed, lower punch position during die filling, and punch distance during main compression have been systematically varied. Corresponding responses, including pre-compression force, ejection force, and tablet porosity, have been evaluated to optimize the tableting process. After implementing an OPC UA interface, process variables can be monitored in real time. To enable in-line monitoring of tablet porosity, a novel UV/Vis fiber optic probe has been integrated into the rotary tablet press. To further analyze the overall process, a data-driven modeling approach is adopted. Data-driven modeling is a valuable alternative for real-world processes where, for instance, first-principles modeling is difficult or infeasible. Due to the complex nature of the process, several model classes need to be explored. Linear autoregressive models with exogenous inputs (ARX) were considered first, followed by their nonlinear counterparts (NARX). Finally, several experiments have been designed to further validate and test the effectiveness of the developed models in real-time scenarios.
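
As a simple illustration of the ARX identification step, the sketch below fits a single-output ARX model by least squares on simulated data; the toy plant and signal choices are placeholders, not the tableting data.

    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        # Least-squares fit of y(k) = a1*y(k-1) + ... + ana*y(k-na) + b1*u(k-1) + ... + bnb*u(k-nb)
        n = max(na, nb)
        Phi = np.array([np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]])
                        for k in range(n, len(y))])
        theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
        return theta                      # [a1..ana, b1..bnb]

    rng = np.random.default_rng(0)
    u = rng.normal(size=500)              # e.g. punch distance (excitation signal, placeholder)
    y = np.zeros(500)                     # e.g. pre-compression force (toy plant simulation)
    for k in range(2, 500):
        y[k] = 0.6*y[k-1] - 0.2*y[k-2] + 0.5*u[k-1] + 0.1*u[k-2] + 0.01*rng.normal()

    theta = fit_arx(u, y)                 # recovers approximately [0.6, -0.2, 0.5, 0.1]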



Taking into account social aspects for the development of industrial ecology

Maud Verneuil, Sydney Thomas, Marianne Boix

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Industrial ecology, in the context of decarbonization, appears to be an important and significant way to reduce carbon dioxide emissions. Eco-industrial parks are also real applications that can help modify socio-ecological landscapes at the scale of a territory.

In the context of industrial ecology, optimization models make it possible to implement synergies according to economic and environmental criteria. Even if numerous studies have proposed criteria such as CO2 emissions, Net Present Value or other economic ones, to date few social indicators have been taken into account in multi-criteria models. Job creation is often used as a social indicator in this type of analysis. However, the social nature of this indicator is debatable.

The first aim of the present work is to question the relevance of job creation as a social indicator through a case study. Afterward, we evaluate the need to measure the social impact of industrial ecology initiatives and question the meaning and added value of social indicators in this context.

The case study concerns the development of offshore wind energy expertise in the port of Port-La-Nouvelle, with the port of Sète as a rear base. The aim is to assess the capacity of the port of Sète to host component manufacturing and anchor system storage activities by evaluating the economic, environmental and social impacts of this approach. We then highlight the criteria chosen and assess their relevance and limitations, particularly with regard to the social aspect.

The second step is to define the needs and challenges of an industrial and territorial ecology approach. What are the key success factors? In attempting to answer this question, it became clear that an eco-industrial park cannot survive without a climate of trust and cooperation (Diemer & Rubio, 2016). The complexity of this ecosystem, with its communicating vessels between industrialists at the micro scale, the park at the meso scale and its environment at the macro scale, makes linking actors and building relationships the decisive factor.

Thirdly, we examine the real added value of social indicators for this relational dimension, in particular by studying the way in which social indicators are implemented. Indeed, beyond the indicator itself, the process chosen to construct it strongly influences both the indicator and the ability of users to appropriate it. We therefore need to consider which process seems most effective in enabling social indicators to provide a new perspective on the context of an industrial and territorial ecology approach.

Finally, we will highlight the limits of metrics based on social indicators, and question their ability to capture a complex, multidimensional social environment. We will also explore the possibility of using other concepts and tools to account for social reality, and assess their relevance to industrial and territorial ecology.



Life cycle impacts characterization of carbon capture technologies for their integration in eco-industrial parks

Agathe Gabrion, Sydney Thomas, Marianne Boix, Stephane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Human activities since the pre-industrial era have been recognized as responsible for climate change. This influence on the climate is primarily driven by the combustion of fossil fuels, which releases significant quantities of carbon dioxide (CO2) and other greenhouse gases into the atmosphere, contributing to the greenhouse effect.

Industrial activities are a major factor in climate change, given the amount of greenhouse gases released into the Earth's atmosphere from fossil fuel burning and from the energy required for industrial processes. In an attempt to reduce industry's impact on climate change, many methods are being studied and considered.

This study focuses on one of these technologies: carbon capture. Carbon capture refers to the process of trapping CO2 molecules after the combustion of fossil fuels. The carbon is then used or stored in order to prevent it from reaching the atmosphere. This whole process is referred to as Carbon Capture, Utilization and Storage (CCUS). Carbon capture comprises multiple technologies. This study focuses only on the amine-based absorption capture method, because it represents 90% of the operational market. It does not evaluate the utilization and storage parts.

In this study, the carbon capture process is considered part of a larger project aiming at reducing industrial CO2 emissions: an Eco-Industrial Park (EIP). Indeed, the process is studied in the context of an EIP in order to determine whether setting it up is more or less valuable, in terms of ecological impact, than the current situation, which consists of releasing the greenhouse gases into the atmosphere. The results will then be used to study the integration of alternative carbon capture methods in the EIP.

To conduct this study properly, it was necessary to consider various ecological impact factors. While carbon absorption using an amine solvent reduces the amount of CO2 released into the atmosphere, the degradation associated with amine solvents must also be taken into account. Therefore, several criteria had to be considered in order to compare the ecological impact of carbon capture with that of releasing industry-produced greenhouse gases. The objective is to prevent the transfer of pollution from greenhouse gases to other forms of environmental contamination. To do so, the Life Cycle Assessment (LCA) method was chosen to assess the environmental impacts of both scenarios.

Using the SimaPro© software to conduct the LCA, this study showed that processing the gas stream exiting an industrial site offers environmental advantages compared to its direct release into the atmosphere. Within the framework of an Eco-Industrial Park (EIP), the implementation of a CO2 absorption process could contribute to mitigating climate change impacts. However, it is important to consider that other factors, such as ecotoxicity and resource utilization, may become more significant when the CO2 absorption process is incorporated into the EIP.



Dynamic simulation and life cycle assessment of energy storage systems connecting variable renewable sources with regional energy demand

Ayumi Yamaki, Shoma Fujii, Yuichiro Kanematsu, Yasunori Kikuchi

The University of Tokyo, Japan

Increasing reliance on variable renewable energy (VRE) is crucial to achieving a sustainable and carbon-neutral energy system. However, the inherent intermittency of VRE creates challenges in ensuring a reliable power supply that meets fluctuating electricity demand. Energy storage systems are pivotal in addressing this issue by storing surplus energy and supplying it when needed. This study explores the applicability of different energy storage technologies—batteries, hydrogen (H2) storage, and thermal energy storage (TES)—to control electricity variability from renewable energy sources, focusing on electricity demand and life cycle impacts.
This research aims to evaluate the performance and environmental impacts of an energy storage system integrated with wind power. A model of an energy storage system connected to wind energy was constructed based on an existing model (Yamaki et al., 2024), and an annual energy flow simulation was conducted. The model assumes that all generated wind energy is stored and subsequently used to supply electricity to consumers. The energy flow was calculated hourly from 0:00 on January 1st to 24:00 on December 31st based on the model of Yamaki et al. (2023). The amounts of energy storage and VRE installation were set, and then the maximum amount of power that could be sold from the energy storage system was estimated. In the simulation, the stored energy was calculated hourly from the charging of VRE-derived power/heat and the discharging of power to be sold.
Life cycle assessment (LCA) was employed to quantify the environmental impacts of each storage technology from cradle to grave, considering both the energy storage system infrastructure and operational processes for various wind energy and energy storage scales. This study evaluated GHG emissions and abiotic resource depletion as environmental impacts.
The amount of power sold was calculated by energy flow simulation. The simulation results indicate that the amount of power sold increases as wind energy generation and storage capacity rise. However, when storage capacities are over-dimensioned, the stored energy diminishes due to battery self-discharge, H2 leakage, or thermal losses in TES. This loss of stored energy leads to a reduction in the power sold. The environmental impacts of each energy storage system depended on the specific storage type and capacity. Batteries, H2 storage, and TES exhibited different trade-offs regarding GHG emissions and abiotic resource depletion.
This study highlights the importance of integrating dynamic simulations with LCA to provide a holistic assessment of energy storage systems. By quantifying both the energy supply capacity and the environmental impacts, this research offers valuable insights for designing energy storage solutions that enhance the viability of VRE integration while minimizing environmental impacts. The findings contribute to developing more resilient and sustainable energy storage systems that are adaptable to regional energy supply conditions.
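
A rough sketch of this kind of hourly energy-flow bookkeeping, assuming a battery-like storage with placeholder efficiencies and a synthetic wind profile rather than the study's model, is given below.

    import numpy as np

    def simulate_storage(gen_mwh, capacity_mwh, sell_mw,
                         eta_charge=0.95, eta_discharge=0.95, self_discharge=1e-4):
        # Hourly bookkeeping: charge all generation (up to capacity), then discharge up to the
        # sell limit; the loss terms mimic self-discharge, leakage or thermal losses.
        soc, sold = 0.0, 0.0
        for gen in gen_mwh:
            soc = min(capacity_mwh, soc * (1.0 - self_discharge) + eta_charge * gen)
            discharge = min(sell_mw, soc * eta_discharge)
            soc -= discharge / eta_discharge
            sold += discharge
        return sold

    wind = np.clip(np.random.default_rng(1).normal(20.0, 8.0, 8760), 0.0, None)  # placeholder [MWh/h]
    annual_power_sold = simulate_storage(wind, capacity_mwh=500.0, sell_mw=15.0)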

Yamaki, A., et al.; Life cycle greenhouse gas emissions of cogeneration energy hubs at Japanese paper mills with thermal energy storage, Energy, 270, 126886 (2023)
Yamaki, A., et al.; Comparative Life Cycle Assessment of Energy Storage Systems for Connecting Large-Scale Wind Energy to the Grid, J. Chem. Eng. Jpn., 57 (2024)



Optimisation of carbon capture utilisation and storage supply chains under carbon trading and taxation

Hourissa Soleymani Babadi, Lazaros G. Papageorgiou

The Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, University College London (UCL), Torrington Place, London WC1E 7JE, UK

To mitigate climate change, and in particular, the rise of CO2 levels in the atmosphere, ambitious emissions targets have been set by political institutions such as the European Union, which aims to reduce 2050 emissions by 80% versus 1990 levels (Leonzio et al., 2019). One proposed solution to lower CO2 levels in the atmosphere is Carbon Capture, Storage and Utilisation (CCUS). However, studies in the literature to date have largely focused on utilisation and storage separately and neither considered the effects of CO2 taxation nor systematically studied the optimality criteria of the CO2 conversion products (Leonzio et al., 2019; Zhang et al., 2017; Zhang et al., 2020). A systematic study for a realistically large industrial supply chain that considers the aforementioned aspects jointly is necessary to inform political and industrial decision-making.

In this work, a Mixed Integer Linear Programming (MILP) framework for a supply chain network was developed to incorporate storage, utilisation, trading, and taxation as strategies for managing CO2 emissions. Possible CO2 utilisation products were ranked using Multi-Criteria Decision Analysis (MCDA) techniques, and three of the top 10 products were selected to serve as CO2-based products in this supply chain network. The model included several power plants in one of the European countries with the highest CO2 emissions. The goal of the proposed model is to minimise the total cost of the supply chain, taking into account process and investment decisions. Furthermore, incorporating multi-objective optimisation that simultaneously considers CO2 reduction and supply chain costs can offer both environmental and economic benefits. Therefore, the ε-constraint multi-objective optimisation method was implemented as a solution procedure to minimise the total cost while maximising the CO2 reduction. The game-theoretic Nash approach was applied to determine a fair trade-off between the two objectives. The investigated case study demonstrates the importance of including financial carbon management, through tax and trading, in addition to physical CO2 capture, storage, and utilisation.
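
A minimal sketch of the ε-constraint idea, assuming a two-variable toy model with placeholder costs instead of the full CCUS supply-chain MILP, and a GLPK installation, is shown below.

    import pyomo.environ as pyo

    def build_model(eps):
        # Placeholder model: minimise cost subject to at least eps tonnes of CO2 reduction.
        m = pyo.ConcreteModel()
        m.stored = pyo.Var(within=pyo.NonNegativeReals)      # CO2 sent to storage [t]
        m.utilised = pyo.Var(within=pyo.NonNegativeReals)    # CO2 sent to utilisation [t]
        m.cost = pyo.Objective(expr=40.0 * m.stored + 25.0 * m.utilised, sense=pyo.minimize)
        m.reduction = pyo.Constraint(expr=m.stored + 0.8 * m.utilised >= eps)
        m.capacity = pyo.Constraint(expr=m.utilised <= 5.0e5)
        return m

    pareto = []
    for eps in [2e5, 4e5, 6e5, 8e5, 1e6]:                    # ε grid on the CO2-reduction objective
        m = build_model(eps)
        pyo.SolverFactory("glpk").solve(m)
        pareto.append((eps, pyo.value(m.cost)))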

References

Leonzio, G., Foscolo, P. U., & Zondervan, E. (2019). An outlook towards 2030: optimization and design of a CCUS supply chain in Germany. Computers & Chemical Engineering, 125, 499-513.

Zhang, D., Alhorr, Y., Elsarrag, E., Marafia, A. H., Lettieri, P., & Papageorgiou, L. G. (2017). Fair design of CCS infrastructure for power plants in Qatar under carbon trading scheme. International Journal of Greenhouse Gas Control, 56, 43-54.

Zhang, S., Zhuang, Y., Liu, L., Zhang, L., & Du, J. (2020). Optimization-based approach for CO2 utilization in carbon capture, utilization and storage supply chain. Computers & Chemical Engineering, 139, 106885.



Impact of energy sources on Global Warming Potential of hydrogen production: Case study of Uruguay

Vitória Olave de Freitas1, José Pineda1, Valeria Larnaudie2, Mariana Corengia3

1Unidad Tecnológica de Energias Renovables, Universidad Tecnologica del Uruguay; 2Depto. de Bioingeniería, Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar; 3Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar

In recent years, several countries have developed strategies to advance green hydrogen as a feedstock or energy carrier. Hydrogen can contribute to the decarbonization of various sectors, with its use in the transport and industry sectors being of particular interest. In 2022, Uruguay launched its green hydrogen roadmap, outlining its plan to promote this market. The country has the potential to become a producer of green hydrogen derivatives for export due to the availability and complementarity of renewable energies (solar and wind), an electricity matrix with a high share of renewable sources, the availability of water, and favorable logistics.

The energy source for water electrolysis is a key factor in both the final cost and the environmental impact of hydrogen production. In this context, this work performs the life cycle assessment (LCA) of a hydrogen production process by water electrolysis, combining different renewable energy sources available in Uruguay. The system evaluated includes a 50 MW electrolyzer and the installation of 150 MW of new power sources. Three configurations for power production were analyzed: (1) a photovoltaic farm, (2) a wind farm, and (3) a hybrid farm (solar and wind). In all cases, connection to the national power grid is assumed to ensure a reliable and uninterrupted energy supply for plant operation.

Different scenarios for the grid energy mix are analyzed to assess their environmental impact on the hydrogen produced. For the current case, the average generation over the past five years is considered, while for future projections the variation of fossil and renewable energy sources was evaluated.

To determine the optimal combination of renewable energy sources for the hybrid generation scenario, the complementarity of solar and wind resources was analyzed using the standard deviation, a metric widely used for this purpose. This analysis employed data from real plants in Uruguay. Seeking the most stable generation, the optimal mix of installed power generation capacity is 54% solar and 46% wind.
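
The standard-deviation screening of the solar/wind mix can be sketched as follows, using synthetic placeholder capacity-factor series instead of the Uruguayan plant data.

    import numpy as np

    rng = np.random.default_rng(0)
    hours = np.arange(8760)
    solar = np.clip(np.sin(2 * np.pi * hours / 24), 0.0, None)        # placeholder diurnal profile
    wind = np.clip(0.4 + 0.2 * rng.standard_normal(8760), 0.0, 1.0)   # placeholder capacity factors

    shares = np.linspace(0.0, 1.0, 101)                               # solar share of installed capacity
    stds = [np.std(s * solar + (1.0 - s) * wind) for s in shares]
    best_share = shares[int(np.argmin(stds))]
    print(f"Solar share minimising output variability: {best_share:.2f}")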

The environmental impact of the different case studies was evaluated through an LCA using OpenLCA software and the Ecoinvent database. For this analysis, 1 kg of produced hydrogen was considered the functional unit. The system boundaries included power generation and the electrolysis system used for hydrogen production. Among the impact categories that can be analyzed by LCA (human health, environmental, resource depletion, etc.), this work focused on the global warming potential (GWP). As hydrogen is promoted as an alternative fuel or feedstock that may diminish CO2 emissions, its GWP is a particularly relevant metric.

Implementing hybrid solar and wind energy systems increases the stability of the energy produced from renewable sources, thereby reducing the amount of energy taken from the grid. Hence, these hybrid plants have the potential to reduce CO2 emissions per kg of hydrogen produced. Still, this benefit diminishes when the electric grid has a higher contribution of renewable energy.



Impact of the share of renewable energy integration in the selection of sustainable natural gas production pathways

Meire Ellen Gorete Ribeiro Domingos, Daniel Florez-Orrego, Oktay Boztas, Soline Corre, François Maréchal

Ecole Polytechnique Federale de Lausanne, Switzerland

Sustainable natural gas (SNG) can be produced via different routes, such as anaerobic digestion and thermal gasification. Other technologies, such as CO2 injection, storage systems (e.g., CH4, CO2) and reversible solid oxide cells (rSOC), can also be integrated in order to handle the seasonal fluctuations of renewable energy supply and market volatility. In this work, the impact of the seasonal excess and deficit of electricity generation, and its renewable fraction, on the sustainability metrics of different scenarios for the energy transition in SNG production is evaluated. The analysis considers both the current energy mix scenario and a future energy mix scenario. In the latter, a fully renewable grid is modeled based on the generation potential, taking into account GIS-based land restrictions, geo-spatial wind speed and irradiation data, and the maximum electricity production from renewable sources considering EU-wide low restrictions. Moreover, the electricity demand considers full electrification of the residential and mobility sectors. The biodigestion process considers a biomethane potential of 300 Nm3 CH4 per t of volatile solids using organic wastes. The upgraded biomethane is marketed and the CO2-rich stream is routed to further methane production. The CO2 from the anaerobic digestion unit can be stored at -50 °C and 7 bar (1,155 kg/m3), so that it can later be regasified and fed to a methanation system. The necessary hydrogen is provided by the rSOC system operating at 1 bar, 800 °C, and 81% water conversion. The rSOC system can also be operated in fuel cell mode, consuming methane to produce electricity. The gasification of the digestate from the anaerobic digestion unit uses steam as the gasification agent, and hydrogen from the electrolyzer is used to adjust the syngas composition for the methanation reaction. The methanation system is based on the TREMP® process, consisting of intercooled catalytic beds to achieve higher conversion. A mixed integer linear programming method is employed to identify optimal system configurations under different economic scenarios, helping to elucidate the feasibility of the proposed processes as well as the optimal production planning of SNG. As a result, the integration of renewable energy and the combination of different SNG production processes prove to be crucial for strategic planning, enhancing resilience against market volatility and supporting the decarbonization of the energy sector. Improved handling of intermittent renewable energy allows optimal CO2 and waste management, achieving year-round overall process efficiencies above 55%. This systematic approach enables better decision-making, risk management, and investment planning, informing energy providers about the opportunities and challenges linked to the decarbonization of the energy supply.



Decarbonizing the German Aviation Sector: Assessing the feasibility of E-Fuels and their environmental implications

Pablo Silva Ortiz1, Oualid Bouksila2, Agnes Jocher2

1Universidad Industrial de Santander-UIS, Colombia; 2Technical University of Munich-TUM, Germany

The aviation industry is united in its goal of achieving "net-zero" emissions by mid-century, in accordance with global targets like COP21 and European initiatives such as "Fit for 55" and "ReFuelEU Aviation." However, current advancements and capacities may be insufficient to meet these targets on time. Recognizing the critical need to reduce greenhouse gas (GHG) emissions, the German government and the European Commission strongly advocate measures to lower aviation emissions, which is expected to significantly increase the demand for sustainable aviation fuels, especially synthetic fuels. In this sense, import scenarios from North African countries to Germany are under consideration. Hence, the objective of this work is to explore the pathways and life cycle environmental impacts of e-fuel production and import, focusing on decarbonizing the aviation sector. Through a multi-faceted investigation, this work aims to offer strategic insights into the future of aviation fuel, blending technological advancements with international cooperation for a sustainable aviation industry. Our analysis compares the feasibility of local production in Germany with potential imports from the Maghreb countries: Tunisia, Algeria, and Morocco. To establish a comprehensive view, the study forecasts Germany's aviation fuel demand across three key timelines: the current scenario, 2030, and 2050. These projections account for anticipated advancements in renewable energy, proton exchange membrane (PEM) electrolysis, and Direct Air Capture (DAC) technologies, from a prospective Life Cycle Assessment (LCA) perspective. A technical concept of power-to-liquid fuel production is presented with the corresponding Life Cycle Inventory, reflecting a realistic consideration of the local conditions, including the effect of water desalination. In parallel, the export potential of the Maghreb countries is evaluated, considering both social and economic dimensions. The environmental impacts of two export pathways, direct e-fuel export and hydrogen export as an intermediate product, are then assessed through cradle-to-gate and cradle-to-grave scenarios, offering a detailed analysis of their respective carbon footprints. Finally, the study determines the qualitative cost implications of each pathway, providing a comparative analysis that identifies the most promising approach for sustainable aviation fuel production. The results, related mainly to Global Warming Potential (GWP) and Water Consumption Potential (WCP), suggest that Algeria, endowed with high capacity factors for photovoltaic (PV) solar and wind systems, achieves the largest WCP reductions compared to Germany, ranging from 31.2% to 57.1% in a cradle-to-gate scenario. From a cradle-to-grave perspective, local German PV solar scenarios fail to meet RED II sustainable fuel requirements, whereas most export scenarios achieve GWP reductions exceeding 70%. Algeria shows the best overall reduction, particularly with wind energy (85% currently to 88% by 2050), while Morocco excels with PV solar (70% currently to 75% by 2050). Despite onshore wind showing strong environmental performance, PV solar offers the highest impact reductions and cost advantages, making Morocco's and Algeria's PV systems superior to German and North African wind systems.



Solar-Driven Hydrogen Economy Potential in the Greater Middle East: Geographic, Economic, and Environmental Perspectives

Abiha Abbas1, Muhammad Mustafa Tahir2, Jay Liu3, Rofice Dickson1

1Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 11000, FI-00076 Aalto, Finland; 2Department of Chemistry & Chemical Engineering, SBA School of Science and Engineering, Lahore University of Management Sciences (LUMS), Lahore, 54792, Pakistan; 3Department of Chemical Engineering, Pukyong National University, Busan, Republic of Korea

This study employed advanced GIS spatial analysis to assess land suitability for solar-powered hydrogen production across thirty countries in the GME region. Factors such as PVOUT, proximity to water sources and roads, land slope, land use and cover, and restricted/protected areas were evaluated. An AHP-based MCDM analysis was used to classify land into different suitability levels.
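
As an illustration of the AHP weighting step, the sketch below derives criterion weights from a pairwise comparison matrix via the principal eigenvector and checks consistency; the matrix entries are illustrative, not those used in the study.

    import numpy as np

    criteria = ["PVOUT", "water distance", "road distance", "slope", "land cover"]
    A = np.array([                      # illustrative pairwise comparison matrix (Saaty scale)
        [1,   3,   5,   5,   7],
        [1/3, 1,   3,   3,   5],
        [1/5, 1/3, 1,   1,   3],
        [1/5, 1/3, 1,   1,   3],
        [1/7, 1/5, 1/3, 1/3, 1],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                     # criterion weights
    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)      # consistency index
    cr = ci / 1.12                                      # random index RI = 1.12 for n = 5
    print(dict(zip(criteria, np.round(w, 3))), "CR =", round(cr, 3))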

Techno-economic optimization models were then applied to assess the levelized cost of hydrogen (LCOH), production potential, and the levelized costs of ammonia (LCOA) and methanol (LCOM) for 2024 and 2050 under different scenarios. Sensitivity analysis quantified uncertainties, while cradle-to-grave life cycle analysis (LCA) calculated the CO₂ avoidance potential for highly suitable areas.

Key findings include:

  1. Water scarcity is a major factor in site selection for hydrogen production. Fifty-seven percent of the region lacks access to water or is over 10 km away from any source, posing a challenge for hydrogen facility placement. A minimum of 1.7 trillion liters of water is needed to meet conservative hydrogen production estimates, and up to 13 trillion liters for optimistic estimates. A reliable water supply chain is crucial to realize this potential.
  2. Around 14% of the land in the region is unsuitable for hydrogen production due to slopes exceeding 5°. In mountainous countries like Tajikistan, Kyrgyzstan, Lebanon, Armenia, and Türkiye, this figure rises to 50%.
  3. Forty percent of the region is unsuitable due to poor road access, highlighting the need for adequate transportation infrastructure. Roads are essential for the construction, operation, and maintenance of hydrogen facilities, as well as for transporting resources and products.
  4. Only 3.8% of the GME region (1,122,696 km²) is classified as highly suitable for solar hydrogen projects. This land could produce 167 Mt/y and 209 Mt/y of hydrogen in 2024 and 2050 under conservative estimates, with an LCOH of 4.7–7.9 $/kg in 2024 and 2.56–4.17 $/kg in 2050. Under optimistic scenarios, production potential could rise to 1,267 Mt/y in 2024 and 1,590 Mt/y in 2050. Saudi Arabia, Sudan, Pakistan, Iran, and Algeria account for over 50% of the region’s hydrogen potential.
  5. Green ammonia production costs in the region range from 0.96–1.38 $/kg in 2024, decreasing to 0.56–0.79 $/kg by 2050. Green methanol costs range from 1.12–1.59 $/kg in 2024, dropping to 0.67–0.93 $/kg by 2050. Egypt and Libya show the lowest production costs.
  6. LCA reveals significant potential for CO₂ emissions avoidance. In 2024, avoided emissions could range from 119–488 t/y/km² (481 Mt/y), increasing to 477–1952 t/y/km² (3,655 Mt/y) in the optimistic case. By 2050, avoided emissions could reach 4,586 Mt/y. Saudi Arabia and Egypt show the highest potential for CO₂ avoidance.

The study provides a multitude of insights, contributing significantly to the global hydrogen dialogue and offering a roadmap for policymakers to develop comprehensive strategies for expanding the hydrogen economy in the GME region.

 
4:00pm - 6:00pmT1: Modelling and Simulation - Session 3
Location: Zone 3 - Room E030
 
4:00pm - 4:20pm

Techno-economic evaluation of incineration, gasification and pyrolysis of refuse derived fuel

Matej Koritár, Maroš Križan, Juma Haydary

Slovak University of Technology, Slovak Republic

Refuse-derived fuel (RDF) is produced from municipal solid waste (MSW) by removing inorganics and biodegradables. RDF has a significantly higher heating value compared to MSW and can be used as a raw material in energy and material recovery processes such as incineration, gasification, and pyrolysis. Although several studies have focused on thermochemical conversion in recent years, no studies have been published that comprehensively compare the techno-economic aspects of these processes, owing to the complex nature of combustion, gasification, and pyrolysis.

In this work, a comprehensive techno-economic evaluation of the three thermochemical conversion processes is presented. For processing 10 t/h of RDF, computer simulation models for incineration, gasification, and pyrolysis plants were developed using Aspen Plus. Flue gas cleaning and combined heat and power generation were included in the case of incineration, while syngas cleaning, heat and prioritized power generation were modeled for gasification. For the pyrolysis process, upgrading of char and oil yields was included. Material and energy integration for each thermal conversion plant was performed. In all cases, the treatment of outgoing gaseous streams was set to meet the required limits for contaminants. The input data for the models were obtained through experimental measurements. All three processes were evaluated and compared from technical, environmental, economic, and safety perspectives.

All three processes demonstrated the ability to be used for energy recovery. Pyrolysis showed the greatest potential for material recovery, specifically char and oil. Syngas production via gasification, when intended for purposes other than power and heat generation, requires additional syngas purification. Incineration had the lowest investment costs but the greatest environmental impact due to emissions and greenhouse gas generation. Gasification was the most complex process, with the highest investment costs, but offered higher energy efficiency and lower emissions compared to incineration. Assuming a market exists for upgraded pyrolysis products, pyrolysis appears to be the most profitable thermal conversion process; however, the largest amount of wastewater was produced during the upgrading of pyrolysis products. Based on the calculated Dow Fire and Explosion Index (F&EI), gasification was identified as the most hazardous process, followed by pyrolysis and incineration. The toxicity index (TI) for all three processes was found to be similar, making all three processes hazardous.

In summary, based on this extensive techno-economic analysis, the ranking of processes from an economic standpoint is: pyrolysis, incineration, gasification. From an environmental perspective, the ranking is: gasification, pyrolysis, incineration. In terms of safety, the order is: incineration, pyrolysis, gasification.



4:20pm - 4:40pm

A 2-D Transient State CFD modelling of a fixed-bed reactor for ammonia synthesis

Manuel Figueredo1, Leonardo Bravo1, Camilo Rengifo2

1Energy, Materials and Environment Laboratory, Department of Chemical Engineering, Universidad de La Sabana, Campus Universitario Puente del Común, Km. 7 Autopista Norte, Bogotá Colombia; 2Department of Mathematics, Physics and Statistics, Universidad de La Sabana, Campus Universitario Puente del Común, km. 7 Autopista Norte, Bogotá, Colombia.

Power-to-Ammonia (PtA) technology offers a sustainable and efficient solution by integrating renewable energy sources with carbon-neutral fuel production. This approach allows energy to be stored in the form of ammonia, which can serve as a fuel, energy carrier, or key ingredient in fertilisers [1], [2]. However, the synthesis process presents complex multiscale challenges involving flow, heat, and mass transfer, particularly due to the highly exothermic nature of the reaction. Advanced modelling techniques, such as Computational Fluid Dynamics (CFD), are essential to address these challenges, enabling optimization of reactor performance, catalyst utilization, and overall energy efficiency [3].

Several studies have focused on CFD simulation of the PBR reactor in steady state. Gu et al. [4] explored decentralized ammonia synthesis for hydrogen storage and transport using a CFD model, focusing on a small-scale Haber-Bosch reactor with Ruthenium-based catalysts optimized for mild conditions. The results showed that temperature is the most influential factor affecting ammonia production, while pressure primarily affects the chemical equilibrium. Furthermore, the study identified an optimal gas hourly space velocity (GHSV) for efficient ammonia synthesis. Nikzad et al. [5] compared the performance of three different reactor configurations to find the optimal design to enhance nitrogen conversion and reduce pressure drops, using 2D CFD simulations to analyze and compare the mass, energy and momentum conservation in each reactor type. Tyrański et al. [6] investigated the ammonia synthesis process in an axial-radial bed reactor using CFD, focusing on understanding the influence of catalyst bed parameters, such as particle size and geometry modifications, on the efficiency of the reactor. The simulations demonstrated that smaller catalyst particles (1-2 mm) provided a higher ammonia formation rate, while larger particles showed a slower reaction rate spread throughout the bed.

However, in PtA systems, the intermittent nature of renewable energy sources, such as hydrogen production via electrolysis, introduces additional complexities in reactor performance. Boundary conditions and external disturbances (variations in hydrogen flow or temperature fluctuations) can significantly impact key process variables, including reaction rates, heat transfer, and overall system stability. Therefore, understanding the reactor's transient response under such dynamic conditions is crucial. This study focuses on developing a 2D transient CFD model for ammonia synthesis within a fixed-bed reactor to analyze the effects of boundary conditions and external perturbations on reactor performance. By evaluating these transient states, the research aims to provide insights for reactor optimization and improvements in the scalability and economic viability of PtA technology under fluctuating energy supply conditions.



4:40pm - 5:00pm

Kernel-based estimation of wind farm power probability density considering wind speed and wake effects due to wind direction

Samuel Martínez-Gutiérrez, Daniel Sarabia, Alejandro Merino

University of Burgos, Spain

When planning wind farm projects, it is crucial to quantify and assess the wind resource of the candidate site. This assessment is typically conducted using the wind energy density, which requires the probability distribution of wind speed f_V(v) and the power curve of a wind turbine P_WT(v). Furthermore, based on f_V(v), P_WT(v) and using the change of variable theorem, it is possible to obtain the probability density function of the power of a wind turbine, f_PWT(P_WT), which provides additional insight into the energy that can be produced.

However, to optimize the management of a wind farm, it is necessary to know the probability density function of the power generated by the entire farm, f_PWF(P_WF). This function makes it possible to estimate the variability and availability of the power generated by the farm, facilitating production planning in a given period and supporting the integration of wind energy into the electricity grid, since knowing the probability of obtaining a given power allows optimal decisions to be made that take this probability into account.

One way to obtain f_PWF(P_WF) is to follow the same procedure used before. This requires a simple analytical expression of the wind farm power curve, P_WF(v), that allows the use of the change of variable theorem. For example, the power produced by a wind farm can be taken to be the power of one wind turbine times the number of turbines, P_WF(v) = n_turb·P_WT(v). However, this approach has the disadvantage that it does not consider a significant source of power loss, namely the wake effect (turbines shading each other) due to wind direction. A first alternative to incorporate the wake effect would be to use a wind density function and a wind farm power curve dependent on wind speed and direction θ, f_(V,Θ)(v,θ) and P_WF(v,θ); however, obtaining these expressions and applying the change of variable theorem to distribution functions of several variables is very complicated. For this reason, some authors have tried to keep the P_WF(v) = n_turb·P_WT(v) approximation and add a wake coefficient term (Feijóo & Villanueva, 2017), simplifying a wake model such as Jensen's, but this approximation is only valid for wind farms with n×m rectangular geometry. This paper proposes that, from a sample of historical wind speed and direction data from a wind farm location, a sample of wind farm power output data is generated, using the Katic wake model (Katic et al., 1987) to calculate the effective velocity incident on each wind turbine. Then, the power generated by each wind turbine P_WT(v) is calculated and the total power of the wind farm is obtained as the sum of the individual powers. Finally, the wind farm power probability density function f_PWF(P_WF) is obtained using kernel estimators.
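
A minimal sketch of the sampling-based procedure described above is given below; the Weibull wind sample, generic power curve and direction-dependent wake deficit are illustrative placeholders rather than the site data and full Katic model used in the paper.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    v = rng.weibull(2.0, 10_000) * 8.0          # hypothetical wind-speed sample (m/s)
    theta = rng.uniform(0, 360, v.size)         # hypothetical wind-direction sample (deg)

    def turbine_power(v):                       # simplified generic power curve (kW)
        return np.clip(3000 * ((v - 3) / (12 - 3)) ** 3, 0, 3000) * (v >= 3) * (v <= 25)

    def wake_deficit(theta):
        # crude direction-dependent stand-in for the Katic model; a real implementation
        # computes the effective velocity at each turbine from farm geometry and thrust.
        return 0.05 + 0.10 * np.abs(np.cos(np.radians(theta)))

    n_turb = 20
    v_eff = v * (1 - wake_deficit(theta))       # effective speed after wake losses
    p_farm = n_turb * turbine_power(v_eff)      # wind farm power sample (kW)

    f_PWF = gaussian_kde(p_farm)                # kernel estimate of the power density
    grid = np.linspace(0.0, p_farm.max(), 200)
    density = f_PWF(grid)                       # evaluate the estimated pdf on a grid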

References

Feijóo, A., & Villanueva, D. (2017). Contributions to wind farm power estimation considering wind direction-dependent wake effects. Wind Energy, 20(2), 221–231.

Katic, I., Højstrup, J., & Jensen, N. O. (1987). A Simple Model for Cluster Efficiency. EWEC’86 Proceedings, 1, 407–410.



5:00pm - 5:20pm

Optimisation of Biomass-Energy-Water-Food Nexus under Uncertainty

Md Shamsul Alam, I. David L. Bogle, Vivek Dua

University College London, United Kingdom

Policy makers around the world are moving towards designing systems to foster a sustainable future by emphasising environmental conservation, notably through reduction of carbon footprints within the system. The three systems, water, energy and food, are intertwined, since changes in any one of them can affect the others. The biomass energy-water-food nexus system as a whole is a subject of considerable scholarly inquiry, pursued for diverse purposes including resource allocation, energy, food and water security management, and the formulation of sustainable policy strategies.

Management of this system confronts some uncertainties in terms of parameters, which causes a diverse range of outputs for decision making by policymakers. This work proposes a mathematical model incorporating uncertain parameters in the biomass energy-water-food nexus system. The model is then used for carrying out model-based optimisation for allocating optimal resources, reducing carbon footprint, increasing economic potential and managing resources sustainably in the whole system.

The superstructure of the whole system includes power plants, effluent treatment plants, agricultural field, livestock sectors, deep wells, rainwater harvesting systems and solar energy generation units. The water sources include river water, underground water from aquifers, treated water from effluent treatment plants, water supplied from the power plant and water from rainwater harvesting systems placed in agricultural sectors, livestock sectors, domestic sectors and in effluent treatment plants. The energy system includes power plants utilising biomass and natural gas. Moreover, solar systems, installed in all four sectors, supply the required energy to the system. While energy and water are utilized to produce food in agriculture, biomass from food waste is used to generate energy in the system. Uncertain values of the parameters corresponding to the rainwater precipitation coefficient and solar energy radiation flux in the system are considered.

The novel aspects of this work include formulating and solving the problem as a mixed-integer linear program and addressing the presence of uncertain parameters through a stochastic mathematical programming approach. Additionally, expected values of generated scenarios of uncertain parameters are used to solve the overall optimisation model. The solution of the optimisation problem offers policy makers profound insight into resource allocation across diverse contexts. Taking maximising economic benefit as an objective function, this work compares the results of the deterministic model with the results computed through incorporating uncertainty in the parameters. The results indicate that incorporation of uncertainty leads to lower profitability than the deterministic model, while GHG emissions are reduced. On the other hand, when taking minimising GHG emissions as an objective function, a much greater loss in profitability from the stochastic model is obtained compared to the deterministic model. Apart from economic benefit and GHG emission calculations, the optimisation of the system also provides different structural decisions, such as the number of installations of effluent treatment plants, rainwater harvesting systems and solar panels in the system. The effect of uncertain parameters on economic and environmental objective functions and structural decisions will be discussed.
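
As an illustration of the scenario-based treatment of uncertainty, the expected-value objective of such a two-stage stochastic programme can be written generically as

    \max_{x,\;y_s}\;\; \sum_{s \in S} p_s \,\bigl[\,\mathrm{benefit}(x, y_s, \xi_s) - \mathrm{cost}(x, y_s, \xi_s)\,\bigr]
    \quad \text{s.t. mass, energy and design constraints for every scenario } s,

where x are the design (here-and-now) decisions, y_s the scenario-dependent operating decisions, \xi_s the realisation of the uncertain precipitation and solar-radiation parameters, and p_s the scenario probability; the exact variable split and constraint set of this work may differ.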



5:20pm - 5:40pm

A Century of Data: Thermodynamics and Ammonia Synthesis Kinetics on Various Commercial Iron-based Catalysts

Hilbert Keestra, Yordi Slotboom, Kevin Hendrik Reindert Rouwenhorst, Wim Brilman

University of Twente

This study presents highly accurate thermodynamic and kinetic predictions for ammonia synthesis on commercial iron-based catalysts, based on a century of experimental equilibrium and kinetic data. To address challenges in conventional Equations of State (EOS) that exhibit large deviations at high pressures due to ammonia's polarity, a modified Soave-Redlich-Kwong (SRK) EOS with an additional polarity correction factor is employed and subsequently fitted to equilibrium data. This modification allows the use of a simple Arrhenius equation to predict equilibrium data effectively. A kinetic model is developed using a Langmuir-Hinshelwood approach, considering N* and H* species on the catalyst surface. The model is fitted to 11 datasets and incorporates a Relative Catalytic Activity factor for each catalyst. The model accurately describes all trends across all iron-based catalysts under a wide range of conditions, supporting and de-risking the global trend towards reducing operational pressure and temperature in ammonia production for energy savings.
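
For orientation, a textbook Langmuir-Hinshelwood-type rate expression for iron-catalysed ammonia synthesis with adsorbed N* and H* species takes a form such as

    r_{\mathrm{NH_3}} = \frac{k^{+}\, p_{\mathrm{N_2}}\,(1-\beta)}{\bigl(1 + K_{\mathrm{N}}\, p_{\mathrm{NH_3}}/p_{\mathrm{H_2}}^{3/2} + K_{\mathrm{H}}\, p_{\mathrm{H_2}}^{1/2}\bigr)^{2}}, \qquad \beta = \frac{p_{\mathrm{NH_3}}^{2}}{K_{\mathrm{eq}}\, p_{\mathrm{N_2}}\, p_{\mathrm{H_2}}^{3}},

shown here only as a generic illustration of the model class; the fitted expression and the Relative Catalytic Activity scaling of this work are given in the paper.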



5:40pm - 6:00pm

Mathematical modeling and simulation of multi-feeding rotating packed bed (MF-RPB) absorber for MEA-based carbon capture

Dongkyu Kim, Boram Gu

Chonnam National University, Korea, Republic of (South Korea)

Many efforts are underway to mitigate greenhouse gas emissions due to rising concerns about global warming. Among these gases, carbon dioxide (CO2) is one of the most significant, particularly as it is released in large amounts from power plants and chemical industries.

Researchers have studied post-combustion CO2 capture through chemical absorption in the conventional column for years. However, the large size of the conventional column has been recognized as a hindrance to CO2 capture, which is associated with significantly high capital and operating costs and high energy requirements. To tackle these issues, a rotating packed bed (RPB) has been proposed, where the rotational force enhances mass transfer by increasing the liquid-gas interfacial area, which significantly reduces the size of the bed (Agarwal et al., 2010).

Recently, there have been efforts to improve RPB absorbers. For example, Wu et al. demonstrated a novel multiple-liquid inlet rotating packed bed (MLI-RPB), which showed higher liquid mass transfer compared to conventional RPBs (Wu et al., 2018). Oko et al. showed that the liquid phase temperature could rise significantly, which can be mitigated by installing intercoolers for the RPB (Oko et al., 2018). Although both studies proposed new RPB structures, neither analyzed CO2 capture efficiency with these designs. In this study, we also suggest a multi-feeding strategy in the RPB that combines the concept of intercooling and multi-inlet and analyze the CO2 capture efficiency using mathematical modeling and simulation.

The model for the multi-feeding RPB with monoethanolamine (MEA) was developed using balance equations coupled with the thermodynamic model (electrolyte-non-random two-liquid (eNRTL) model) and the two-film theory for the liquid-gas mass transfer. The developed RPB model was validated for single feeding conditions using experimental data in the literature (Jassim, 2002; Kolawole, 2019).

Our simulation results show that introducing a low-temperature MEA solution in the middle of the absorber improves capture efficiency by 1.3–2.5% compared to the conventional lab-scale RPBs. As the size of the absorber increases, the efficiency of the multi-feeding RPB is expected to improve even further. This is due to the short residence time in the lab-scale RPB absorber. The temperature rise could be higher with a stronger MEA solution as a solvent and industrial-scale RPB absorber (Oko et al., 2018), which implies that the use of the multi-feeding RPB (MF-RPB) might be more beneficial in such situations. Further simulations will be carried out by varying operating variables, such as liquid-to-gas ratios, intercooling location and the ratio of side-feed to main-feed.

References

  1. Agarwal, L., Pavani, V., Rao, D. P., & Kaistha, N. (2010) Industrial and Engineering Chemistry Research, 49(20), 10046–10058.
  2. Wu, W., Luo, Y., Chu, G. W., Liu, Y., Zou, H. K., & Chen, J. F. (2018). Industrial and Engineering Chemistry Research, 57(6), 2031–2040.
  3. Oko, E., Ramshaw, C., & Wang, M. (2018). Applied Energy, 223, 302
  4. Jassim, M. S. (2002).
  5. Kolawole, T. O. (2019).
 
4:00pm - 6:00pmT2: Sustainable Product Development and Process Design - Session 3
Location: Zone 3 - Room E031
Chair: Effie Marcoulaki
Co-chair: Hideyuki Matsumoto
 
4:00pm - 4:20pm

Steady-State Digital Twin Development for Heat and Shaft-Work Integration in a Dual-Stage Pressure Nitric Acid Plant Retrofit

Stanislav Boldyryev, Goran Krajačić

University of Zagreb Faculty of Mechanical Engineering and Naval Architecture, Croatia

Agriculture is a key sector of the Ukrainian economy, relying heavily on large quantities of fertilizers to maintain global competitiveness. Despite domestic production capabilities, Ukraine imports approximately €5.5 million worth of nitrogen fertilizers annually. Nitric acid, a critical raw material in fertilizer production, plays a significant role in this context. Enhancing local nitric acid production is essential for bolstering economic security and diversifying supply chains. Beyond agriculture, nitric acid is also a vital intermediate in various industrial processes. Commercially, nitric acid is typically produced through the stepwise catalytic oxidation of ammonia with air.

This study aims to enhance the heat and shaft-power integration of existing nitric acid production processes to optimize waste heat recovery and identify opportunities for improving process efficiency. The plant under investigation employs a dual-stage pressure nitric acid production process with a capacity of 50 tons per hour of HNO3 (100% equivalent). The process utilizes 3.9 MPa steam and cooling water as utilities. Air and nitrous oxide compressors are powered by waste heat from tail gases, which are harnessed in dual-pressure turbines. The exothermic reaction, conducted under pressure on a catalyst bed to produce nitric oxide, is used for medium-pressure steam generation.

This work investigates the existing nitric acid plant by developing a steady-state digital twin in the Aspen HYSYS environment. A process integration methodology, incorporating a graphical analytical tool, was employed to identify energy inefficiencies in the existing system. The authors concurrently analyzed the thermal energy and expansion potential of tail gases to efficiently meet the heating, cooling, and power demands of the main process, while also increasing steam generation through waste heat recovery, all without compromising plant throughput. As a result, a retrofitted process concept was proposed, featuring an updated process flowsheet and enhanced waste heat utilization. The proposed retrofits result in a 30% reduction in steam and a 34% reduction in cooling water usage, while simultaneously increasing steam generation by 17%. These utility savings translate to a 10% increase in plant throughput, achieved with the existing primary process equipment (including columns, compressors, and turbines) and an updated heat recovery network.



4:20pm - 4:40pm

Sustainable and Intensified Process for Gamma-Valerolactone Production from Levulinic Acid: A Reactive Distillation Approach

Melanie Coronel-Muñoz1, Brenda Huerta-Rosas1, Juan José Quiroz-Ramírez2, Juan Gabriel Segovia-Hernández1, Eduardo Sánchez-Ramírez1

1Universidad de Guanajuato, Mexico; 2CIATEC A.C. Centro de Innovación Aplicada en Tecnologías Competitivas

In the context of Industry 4.0 and the growing emphasis on sustainable chemical production, the development of processes that maximize efficiency and minimize environmental impact is crucial. This study presents a novel, intensified process for the production of γ-valerolactone (GVL) from levulinic acid, utilizing a reactive distillation column that seamlessly integrates reaction and separation in a single unit. GVL is a versatile platform chemical with applications ranging from biofuels to green solvents and polymer precursors, making it an important product in the shift towards renewable and sustainable chemicals. This innovative approach eliminates the need for conventional multi-step operations, significantly reducing equipment footprint, energy consumption, and overall process complexity.

A multi-objective optimization framework, based on the Differential Evolution with Tabu List (DETL) algorithm, was employed to tackle the highly nonlinear and nonconvex nature of the process. Key parameters, including the number of stages, reflux ratio, feed location, and the placement of reactive stages, were optimized to strike a balance between economic feasibility, environmental sustainability, and operational efficiency. The optimization results demonstrated substantial improvements, with a 57% reduction in total annual cost (TAC), a 55% decrease in environmental impact (Eco Indicator 99, EI99), and a 63% reduction in energy demand compared to traditional production methods. Additionally, the optimized process achieved a 25% increase in GVL production, meeting the required product purity specifications. This intensified reactive distillation strategy represents a significant advancement in sustainable chemical processing, offering a more eco-friendly and cost-effective alternative to conventional GVL production technologies while aligning with global goals for greener industrial practices.



4:40pm - 5:00pm

Design and cost analysis of a reactive distillation column to produce ethyl levulinate using excess levulinic acid

Igor Ferreira Fioravante, Rian de Queiroz Nóbrega, Rubens Maciel Filho, Jean Felipe Leal Silva

School of Chemical Engineering, University of Campinas, Brazil

Despite the potential of electrification in transportation, diesel will remain one of the main fuels for decades to come. Total or partial replacement of diesel with biodiesel is one of the solutions to decrease the global warming potential of diesel engines. However, biodiesel has limited performance in cold weather and requires the use of biodiesel additives. In this context, it is important to choose biodiesel additives from non-edible, inexpensive, renewable sources. Ethyl levulinate, an ester derived from levulinic acid that can be produced from sugarcane, is a promising candidate biodiesel additive because it improves the cold flow properties of biodiesel and reduces soot emissions of diesel and biodiesel. In this work, a reactive distillation column was designed and optimized to promote the esterification of levulinic acid and ethanol to produce ethyl levulinate. Because of the volatility order of the components involved in this esterification process (ethanol, water, ethyl levulinate, and levulinic acid), levulinic acid was chosen as the excess reactant. A central composite design of experiments was used to assess the process design. These results were used together with response surface methodology to determine the optimized design conditions. Production cost was calculated based on ethanol price, capital cost, and operating expenses. The results showed that the optimized reactive distillation column using excess levulinic acid provides a production cost lower than that of an equivalent process consisting of a continuous stirred reactor followed by conventional distillation.
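
A minimal sketch of the design-of-experiments workflow described above is shown below: a two-factor central composite design followed by an ordinary least-squares quadratic response surface. The response function is a made-up stand-in for the simulated production cost, and the coded factors are illustrative.

    import numpy as np

    alpha = np.sqrt(2.0)                               # axial distance for 2 factors
    factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
    axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
    center = np.zeros((3, 2))
    design = np.vstack([factorial, axial, center])     # coded design points

    def simulated_cost(x):                             # hypothetical response
        return (5.0 + 0.8 * x[:, 0] - 1.2 * x[:, 1]
                + 0.5 * x[:, 0] ** 2 + 0.3 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 1])

    y = simulated_cost(design)

    # quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                         design[:, 0] ** 2, design[:, 1] ** 2,
                         design[:, 0] * design[:, 1]])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted response-surface coefficients:", np.round(coeffs, 3))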



5:00pm - 5:20pm

Kinetic modelling, technoeconomic analysis and environmental impact of the production of triacetin from purified waste glycerol

Aya Sandid1, Vincenzo Spallina1, Jesús Esteban2,1

1Dept. of Chemical Engineering, The University of Manchester, M13 9PL Manchester (UK); 2Dept. of Chemical Engineering, Complutense University of Madrid, 28040 Madrid (Spain)

As the global energy demand continues to grow, the production of fossil fuel alternatives such as biodiesel has led to an increase in the by-product crude glycerol (Gly). The availability and low cost of crude Gly makes it an attractive feedstock to generate high-value chemicals such as triacetin (TA), a fuel additive [1]. However, crude Gly (30-60 wt.%) requires purification first via physico-chemical treatment to remove impurities such as ashes, water and matter organic non-glycerol (MONG) [2].

Our work explores the use of highly impure animal-byproduct crude Gly, purifying it and using it as a feedstock for esterification with acetic acid (AA) using Amberlyst 36, a commercial catalyst. The in-series reaction network generates the intermediates monoacetin (MA), diacetin (DA) and finally TA, along with water as a by-product. Based on experimental results and using Aspen Custom Modeler (ACM) V12.1, an Eley-Rideal (ER) model with a water adsorption term was developed, with a residual mean square error (RMSE) and variation explained (VE) of 0.0941 and 98.2%, respectively [2].
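
For orientation, an Eley-Rideal rate expression with a water adsorption (inhibition) term for a reversible esterification step typically has the form

    r = \frac{k \,\bigl(C_{\mathrm{AA}}\, C_{\mathrm{Gly}} - C_{\mathrm{MA}}\, C_{\mathrm{W}} / K_{\mathrm{eq}}\bigr)}{1 + K_{\mathrm{W}}\, C_{\mathrm{W}}},

written here only as a generic illustration of the model class; the parameterised expressions for each step of the MA/DA/TA network are reported in [2].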

Process design and simulation in continuous mode was conducted using Aspen Plus V12.1 after exporting the ACM reactor model, which assumed a well-mixed isothermal CSTR operating at steady state. The plant generates approximately 18,000 tons yr-1 of TA (≥99.5 wt.%) assuming 8000 hours of annual operation. The plant begins with Gly pretreatment, in which a purified Gly stream from a physico-chemical treatment plant (82 wt.%) [3] undergoes vacuum distillation to further increase its purity to 96 wt.%. The stream is then mixed with AA and enters the reactors operating at 110ºC and 1.0 atm. The resulting stream is further processed to recover and recycle remaining AA via azeotropic distillation using hexane as an entrainer. The TA is recovered by liquid-liquid extraction using water as a solvent. Any unconverted Gly, MA and DA are also recovered and recycled to the reactors. The process attains a Gly conversion and TA yield of 100% and 73%, respectively [4].

Furthermore, the factorial method [5] was used to calculate the capital expenditure, which was found to be £41.4 million. The total annualised cost (TAC) was found to be £35.3 million yr-1, assuming a plant lifetime of 20 years and an interest rate of 5%. Additionally, a CO2 tax of £50 ton(CO2)-1 was also considered, leading to a minimum selling price of £1.8 kg(TA)-1 required to start making a profit. Finally, an LCA was performed based on a cradle-to-gate approach using Sphera (GaBi), which highlighted the effects on climate change, fossil depletion and freshwater consumption, with values of 44.3 kg(CO2), 13.5 kg(oil), and -2.3 m3 per kg(TA), respectively [4].

[1] A. Sandid, V. Spallina, J. Esteban. Fuel Process. Technol., 253 (2024) 108008.

[2] A. Sandid, T. Attarbachi, R. Navarro-Tovar, M. Pérez-Page, V. Spallina, J. Esteban, Chem. Eng. J., 496 (2024) 153905.

[3] T. Attarbachi, M. Kingsley, V. Spallina, Ind. Eng. Chem. Res., 63 (2024) 4905-4917.

[4] A. Sandid, V. Zurba, S. Zapata-Boada, R. Cuellar-Franca, V. Spallina, J. Esteban, Sustain. Prod. Consump., (2024, Submitted).

[5] R. Sinnott, G. Towler, Chemical Engineering Design, 2020, pp. 275-369.



5:20pm - 5:40pm

Green Industrial-Scale Plant Design for Syngas Fermentation to Isopropyl Alcohol and Acetone: Economic and Environmental Sustainability Assessment

Gijs J.A. Brouwer, Tamara Janković, Adrie J.J. Straathof, Anton A. Kiss, John A. Posada

Delft University of Technology, The Netherlands

Syngas fermentation of steel mill offgas can (1) provide sustainable processes to replace petrochemical isopropyl alcohol (isopropanol) and acetone production and (2) reduce greenhouse gas emissions from the steel industry. Syngas fermentation using Clostridium autoethanogenum can convert the energy-rich steel mill offgas (50% CO, 10% H2, 20% CO2, 20% N2) to isopropyl alcohol, acetone or a mixture. The product yield depends on the genetic modifications made to the microorganism1. Bioprocess development typically focuses on achieving the highest possible titer, rate and yield (TRY)2. However, the possibility of multi-product processes is usually not considered for bioprocesses. Therefore, this study investigated the effects of product titer and yield during gas fermentation on the downstream processing (DSP) and the overall economic and environmental sustainability of the industrial-scale process.

The process design, in Aspen Plus V12.0, was based on the detailed syngas fermentation modelling to IPA3 and the gas fermentation DSP design using vacuum distillation and extractive distillation for the purification of IPA and acetone mixtures4. Three yield scenarios estimated from the pilot results1 were modelled stoichiometrically5 and studied with either (i) 90% product selectivity to IPA, (ii) acetone or (iii) a mixture of isopropyl alcohol (80%) and acetone (10%), with acetate and biomass as other main byproducts. The CO volumetric mass transfer rate during gas fermentation is a key process parameter3 and was increased to 10 g/L/h. In addition, (iv) a high CO volumetric mass transfer rate (14.9 g/L/h) has been studied for (iii) to investigate whether its sustainability effect diminishes. The complete process models have been extended with complete wastewater treatment and recycling of the process streams.

The four scenarios are compared in terms of economic and environmental sustainability. Environmental sustainability was assessed through cradle-to-gate LCA (ReCiPe 2016 (H)) including all three emission scopes for the Global Warming Potential, Stratospheric Ozone Depletion, Fine Particulate Matter Formation, Marine and Freshwater Eutrophication, Human Carcinogenic Toxicity, Land use and Water use.

The impact of syngas fermentation product yields and titers gave insight into the effects and trade-offs for industrial-scale economic and environmental sustainability. This places the laboratory focus on maximising TRY into a large-scale, plant-level perspective. Thus, integration of process modelling and economic and environmental trade-off assessment is required to choose the right product formulation and improve biotechnology parameters that are relevant for industrial-scale sustainability.

1. Liew, F. E. et al. Nat Biotechnol (2022).

2. Noorman, H. J. & Heijnen, J. J. Chem Eng Sci (2017).

3. Brouwer, G. J. A., Shijaz, H. & Posada, J. A. Computer Aided Chemical Engineering (2024).

4. Janković, T., Straathof, A. J. J. & Kiss, A. A. Journal of Chemical Technology and Biotechnology (2024).

5. Heijnen, J. J. & Van Dijken, J. P. Biotechnol Bioeng (1992).



5:40pm - 6:00pm

Sustainable Development Goals Assessment of Alternative Acetic Acid Synthesis Routes

Juan D. Medrano-García, Sachin Jog, Abhinandan Nabera, Gonzalo Guillén-Gosálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 1, 8093 Zurich, Switzerland

Acetic acid is an important chemical, with an annual demand of 17 Mt and many applications in the synthesis of plastics, dyes, insecticides and drugs. The standard fossil route to synthesize acetic acid relies on fossil natural gas-based methanol and carbon monoxide in the so-called Cativa process, involving four reaction steps (Le Berre et al., 2014). Research efforts currently focus on identifying pathways to replace fossil carbon with renewable carbon (i.e., CO2, chemical waste, biomass and biogas) in chemicals production. While this shift to renewable carbon often reduces CO2 emissions, it can lead to burden shifting, by which one environmental category improves at the expense of worsening others (Medrano-García et al., 2022).

In the context of sustainable acetic acid synthesis, the existing environmental studies are scarce and mainly focus on carbon footprint, not delving into other impact categories. To tackle this research gap, here we compare the environmental performance of alternative single-step acetic acid synthesis routes with the fossil, green and biogas-based conventional process. Moreover, for the first time, we analyze the absolute sustainability level of these synthesis pathways based on the Sustainable Development Goals (SDGs), which is computed considering the transgression levels attained relative to the Earth's carrying capacity defined by the Planetary Boundaries (PBs). Recently, the PBs concept was linked to the SDGs (Sala et al., 2020), opening the door for SDG-based assessments of industrial systems. More precisely, we here quantify the transgression level (TL) using the impacts computed with the EF v3.1 and LANCA v2.5 for land use, and the downscaled PBs determined following some sharing principles. We employ process simulation (Aspen Plus v12) and literature data to model three acetic acid synthesis pathways: the business-as-usual (BAU) methanol carbonylation, the novel gas-to-acid (GTA) methane carboxylation and semi-artificial photosynthesis (SAP) using captured CO2 with cysteine and water as potential electron donors.
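
In PB-based absolute sustainability assessments of this kind, the transgression level of an impact category is typically the ratio of the computed life-cycle impact to the share of the planetary boundary allocated to the product system; a generic form (the specific sharing principles applied here are described in the paper) is

    \mathrm{TL}_{c} = \frac{I_{c}}{s\,\mathrm{PB}_{c}}, \qquad \mathrm{TL}_{c} > 1 \;\Rightarrow\; \text{category } c \text{ lies outside its safe operating space},

where I_c is the impact in category c, PB_c the corresponding planetary boundary and s the downscaling (sharing) factor.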

Our results show that all evaluated acetic acid synthesis pathways outperform the fossil BAU in terms of climate change impact. However, burden-shifting is still found in human toxicity, eutrophication and minerals and metals resource use. The absolute sustainability analysis reveals that most instances of this collateral damage occur within the safe operating space (SOS), that is, within the allocated fraction of the PBs to acetic acid production. Despite the overall improvements, all the evaluated scenarios transgress at least one of the impact categories associated with the assessed SDGs. All in all, we show how it is possible to improve the sustainability level of the BAU, and demonstrate that absolute sustainability assessments provide very valuable insights in the evaluation of alternative synthesis pathways.

References

Le Berre, C., Serp, P., Kalck, P., Torrence, G.P., 2014. Acetic Acid, in: Ullmann’s Encyclopedia of Industrial Chemistry. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, pp. 1–34.

Medrano-García, J.D., Charalambous, M.A., Guillén-Gosálbez, G., 2022. Economic and Environmental Barriers of CO2-Based Fischer-Tropsch Electro-Diesel. ACS Sustain. Chem. Eng. 10, 11751–11759.

Sala, S., Crenna, E., Secchi, M., Sanyé-Mengual, E., 2020. Environmental sustainability of European production and consumption assessed against planetary boundaries. J. Environ. Manage. 269.

 
4:00pm - 6:00pmSession in honour of Pedro Castro
Location: Zone 3 - Room E032
Chair: Ignacio E Grossmann
Co-chair: Henrique Matos
 
4:00pm - 4:20pm

Tribute to the contributions of Pedro Castro

Henrique Matos1, Ignacio Grossmann2, Iiro Harjunkoski3

1CERENA – Instituto Superior Técnico, Universidade de Lisboa, Portugal (henrimatos@tecnico.ulisboa.pt); 2Carnegie Mellon University, United States of America; 3Aalto University, Finland




4:20pm - 4:40pm

A Novel Detailed Representation of Batch Processes for Production Scheduling

Alexandros Koulouris1, Georgios Georgiadis1,2

1International Hellenic University, Greece; 2Intelligen, Inc., USA

Optimal production scheduling is crucial for industries aiming to remain competitive, since profit margins are constantly shrinking. Efficient scheduling guarantees that companies can minimize delays, avoid overproduction, and allocate resources efficiently, which boosts profitability and customer satisfaction. However, achieving optimal scheduling is a complex task due to the dynamic and interconnected nature of modern production systems.

Production scheduling is traditionally modeled and solved using mathematical optimization techniques (Georgiadis et al., 2019), such as MILP. However, applying these optimization methods in industry presents significant challenges (Harjunkoski, 2016). To make the scheduling problem computationally solvable, researchers often use approximate representations that do not fully capture the complexity of the actual production environment.

To overcome these limitations, we propose a novel process representation that offers a more granular yet accurate depiction of the timing interrelations between production stages. In traditional representations, each processing stage is treated as a single, rigid block utilizing some resource, and precedence constraints are implemented to sequence these stages. In reality, however, steps in chemical processing do not follow each other in a strict sense; most of the time they overlap, and this overlap is determined by the timing of “finer” operating tasks executed in processing. In addition, these finer subtasks may require the use of auxiliary equipment whose occupancy must also be captured in addition to that of the main equipment.

For those reasons, it is important to break down processing tasks (procedures) into shorter, more primitive steps which will be called operations. Operations can be used to time the start or end of the procedure they belong to, or of other procedures. Both procedures and operations can make use of resources such as equipment or labor. The execution of operations is “tied” by constraints that mandate their sequencing in a strict way. In some cases, there is flexibility in the execution of an operation (e.g. a tank can have a dirty-hold time before it is cleaned). These flexibilities can be embedded in the representation by being modeled as dummy operations with flexible durations. So, even though breaking down the process into procedures and operations seems to make the representation more complex, the implementation of actual constraints in the recipe execution ties the start and end of all tasks within a batch in a deterministic way (with the exception of flexible starts), making the scheduling problem simpler.

This detailed representation generates data that can be fed into a MIP model, improving the accuracy and reliability of scheduling decisions. Equipment-dependent durations, task-equipment matching constraints, connectivity/compatibility constraints can also be incorporated into the model for a more accurate representation of the industrial setting. The developed model can be adapted to different objectives, such as makespan minimization, as will be shown with the help of case studies presented in the paper.
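
A minimal sketch of how such a procedure/operation hierarchy might be encoded before its timing data are passed to a MIP model is shown below; the class names, durations and resources are illustrative assumptions, not the authors' data model.

    from dataclasses import dataclass, field

    @dataclass
    class Operation:                 # a primitive step inside a procedure
        name: str
        duration: float              # h; a dummy operation may get a flexible duration
        resources: list = field(default_factory=list)   # main/auxiliary equipment, labour
        flexible: bool = False       # e.g. a dirty-hold time before cleaning

    @dataclass
    class Procedure:                 # a processing stage, broken down into operations
        name: str
        operations: list

        def offsets(self):
            """Cumulative start offsets of each operation from the procedure start,
            usable as deterministic timing data for a MIP scheduling model."""
            t, out = 0.0, {}
            for op in self.operations:
                out[op.name] = t
                t += op.duration
            return out

    charge = Operation("charge", 0.5, ["reactor R1"])
    react = Operation("react", 4.0, ["reactor R1"])
    hold = Operation("dirty-hold", 0.0, ["reactor R1"], flexible=True)
    clean = Operation("clean", 1.0, ["reactor R1", "CIP skid"])
    reaction_stage = Procedure("reaction", [charge, react, hold, clean])
    print(reaction_stage.offsets())  # offsets tying operation starts within the batch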

Harjunkoski, I. (2016). Deploying scheduling solutions in an industrial environment. Computers & Chemical Engineering, 91, 127-135.

Georgiadis, G., Elekidis, A., Georgiadis, M. (2019). Optimization-based scheduling for the process industries: from theory to real-life industrial applications. Processes, 7, 438.



4:40pm - 5:00pm

Enhancing Large-scale Production Scheduling using Machine-Learning Techniques

Maria E. Samouilidou, Nikolaos Passalis, Georgios P. Georgiadis, Michael C. Georgiadis

Department of Chemical Engineering, Aristotle University of Thessaloniki, Greece

As global competition rises and customer expectations increase, manufacturing industries must embrace digital transformation to remain competitive and operationally effective. In complex production environments, digital technologies can offer solutions to optimize key areas. Fluctuating demands (Hubbs et al., 2020), equipment breakdowns and raw material availability are some of the challenges that require schedules to be flexible and able to adapt quickly to changing conditions without causing order delays or increased costs. This is particularly difficult in the presence of frequent and expensive product changeovers, since production schedules are typically handled manually and the combinatorial complexity of real-life industrial instances does not allow exact methods to deliver optimal solutions quickly.

This study focuses on optimizing production scheduling in multi-product plants with shared resources and costly changeover operations. Specifically, two main challenges are addressed: the unknown changeover behavior of new products and the need for rapid schedule generation when unforeseen events happen. An innovative framework integrating Machine Learning (ML) techniques with Mixed-Integer Linear Programming (MILP) is proposed. Initially, a regression model predicts unknown changeover times based on key product attributes. Then, a representation space (Passalis & Tefas, 2016) in which distances correlate with changeover times is constructed through multidimensional scaling, allowing constrained clustering to group production orders according to available packing lines. Ultimately, the MILP model generates the production schedule within a constrained solution space, utilizing the optimal product-to-line allocation from cluster segmentation.
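
A minimal sketch of the three ML steps (changeover-time regression, distance-preserving embedding, clustering) is given below with made-up data; a random forest and plain k-means are illustrative stand-ins for the regression model and the constrained clustering of the actual framework.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.manifold import MDS
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X_attr = rng.random((30, 4))                # hypothetical product attributes
    t_known = rng.random(30) * 2.0              # known changeover times (h)

    # step 1: regression model predicting unknown changeover times of new products
    reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_attr, t_known)
    X_new = rng.random((5, 4))
    t_pred = reg.predict(X_new)

    # step 2: embedding in which pairwise distances correlate with changeover times
    t_all = np.concatenate([t_known, t_pred])
    D = np.abs(t_all[:, None] - t_all[None, :])       # symmetric dissimilarity matrix
    emb = MDS(n_components=2, dissimilarity="precomputed",
              random_state=0).fit_transform(D)

    # step 3: group production orders per packing line (unconstrained simplification)
    line_assignment = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)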

A case study inspired by a Greek construction materials plant is used to validate the proposed approach. The results showed that this framework improves scheduling efficiency by providing rapid solutions, reducing downtime and facilitating the introduction of new products. Overall, a novel scheduling solution is proposed for manufacturing industries that face unknown production data and need quick schedule alterations without extra changeover costs.

References

Hubbs, C. D., Li, C., Sahinidis, N. V., Grossmann, I. E., & Wassick, J. M. (2020). A deep reinforcement learning approach for chemical production scheduling. Computers & Chemical Engineering, 141, 106982.

Passalis, N., & Tefas, A. (2016). Information clustering using manifold-based optimization of the bag-of-features representation. IEEE transactions on cybernetics, 48(1), 52-63.



5:00pm - 5:20pm

Multiscale analysis through the use of biomass residues and CO2 towards energetic security at country scale via methane production

Guillermo Galán1, Manuel Taifouris1, Mariano Martin1, Ignacio E. Grossmann2

1Department of Chemical Engineering. Universidad de Salamanca. Plz Caídos 1-5, 37008, Salamanca, SPAIN; 2Department of Chemical Engineering. Carnegie Mellon University. 5000 Forbes Ave, Pittsburgh, PA, U.S.A.

The development of industrial and transportation activities has increased CO₂ emissions, raising atmospheric CO₂ from 300 ppm in the late 19th century to 425 ppm in 2024 [1], causing a 1ºC temperature rise. Synthetic methane emerges as an interesting alternative, aligning with circular economy principles to reduce reliance on imported fossil fuels. There are two alternatives for capturing CO2 to produce synthetic natural gas (SNG): biomass growth, and capture from the air and other sources using human-made technologies.

This work develops a systematic, comprehensive comparison to model the production of renewable methane from lignocellulosic dry residues via gasification and anaerobic digestion of wet waste, and synthetic methane production from captured CO2 and renewable electrolytic hydrogen, using a multiscale approach. First, a techno-economic evaluation determines key performance indicators (KPI) of facilities and renewable energy sources. Then, a Facility Location Problem (FLP) identifies production capacities and the optimal facility locations. The decentralized use of lignocellulosic and wet waste, along with CO2 captured from point and dilute sources, is analyzed due to the availability of the raw material and the high transportation costs. The problem is formulated as a mixed-integer linear programming (MILP) model, optimizing waste and CO2 utilization, plant locations, PV panel surface areas, and wind turbines across Spain at the level of its 356 agricultural shires. It considers budget variations and carbon taxes for the years 2022, 2030, and 2050. A maximum of 2% of the shires' surface area, 10,120 km², is employed, with between 20 and 50 wind turbines installed per shire.

Lignocellulosic dry waste and point sources for CO2 capture using MEA are preferred, as predicted by the MILP model. PV panels are mainly selected due to their competitive cost, increasing their surface area from 147 km² to 1,511 km² over the time horizon from 2022 to 2050, with wind turbines generating additional power, from 7.3 GW to 52.8 GW over the same period. The maximum synthetic biomethane production from lignocellulosic dry waste reaches 14,086 kt/year of synthetic methane. The CO2 captured from point sources is prioritized. Southeastern and coastal shires are selected, utilizing 1.46% of the available surface and producing 11,104 kt/year of methane. The required investment amounts to 14,440 M€ for waste treatment, CO2 capture from point sources, and methane synthesis in the year 2022. A sensitivity analysis reveals that methane prices range from 3.818 €/MMBTU to 13.837 €/MMBTU in the period from 2022 to 2050, requiring from 66% to 410% of the budget to achieve 100% methane self-sufficiency. Considering carbon taxes, a price of 3.146 €/MMBTU is projected for 2050, which is competitive with current natural gas prices.

[1] NASA, 2024. Carbon Dioxide. Direct measurements, 1958-Present. https://climate.nasa.gov/vital-signs/carbon-dioxide/?intent=121 (Accessed April 2024).



5:20pm - 5:40pm

A novel global sequence-based mathematical formulation for energy-efficient flexible job shop scheduling problems

Dan Li, Taicheng Zheng, Jie Li

Centre for Process Integration, Department of Chemical Engineering, School of Engineering, The University of Manchester, Manchester, M13 9PL, United Kingdom

With increasing energy awareness, there has been growing recognition of the importance of incorporating energy considerations into the flexible job shop scheduling problem (FJSSP). To accommodate the energy-efficient FJSSP, three strategies have been explored: (i) speed-scaling framework, (ii) incorporation of time-of-use electricity price, and (iii) switching machines to a power-saving mode while idle. The third strategy is significant, as 65% of the total energy consumption (TEC) occurs while machines are idle [1]. Consequently, effective management of machine modes while idle has garnered increasing attention, although it remains in the early stage of exploration.

Many approaches, such as mathematical programming [1-4] and metaheuristics [2,3], have been attempted to tackle the energy-efficient FJSSP with consideration of machine modes. The mathematical programming approach prioritizes guaranteeing solution optimality and evaluating solution quality. Zhang et al. [2] proposed a mixed integer linear programming (MILP) model to minimize TEC. However, their model fails to generate energy-efficient solutions for industrial-scale problems. Subsequently, the same problems were addressed by MILP models from Meng et al. [3] and Rakovitis et al. [1]. Although these models are more effective, they lack the robustness to efficiently address all industrial-scale cases. Moreover, Meng et al. [3] did not incorporate the turn on/off energy strategy, leading to high energy waste during machine idle periods. Li et al. [4] developed a local sequence-based formulation to manage the machine mode selection. However, this formulation introduces excessive binary variables, resulting in computational inefficiency. It appears that no mathematical programming approach has been developed to effectively solve the energy-efficient FJSSP by managing machine modes at an industrial scale.

In this work, we propose a novel global sequence-based mathematical formulation (MG) to optimize the energy-efficient FJSSP. While idle, machines can select a power-saving mode, either standby or turn-off/on. A big-M constraint is introduced to identify the immediate successor operation of a given operation, allowing us to account for the idle duration between them. Computational results demonstrate that the proposed MG is robust, generating feasible solutions for highly complex examples. MG generates better TEC results in 64% of the examples relative to existing models, with a maximum reduction of 27.6%. More importantly, as the complexity of problems increases, the advantage of MG becomes more apparent.
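
As an illustration of the kind of global sequence-based big-M constraint described above (a generic form, not necessarily the exact MG formulation), let y_{o,o',m} = 1 when operation o' is the immediate successor of operation o on machine m, and let S and C denote start and completion times; the idle duration feeding the standby versus turn-off energy terms can then be activated by

    S_{o'} \ge C_{o} - M\,(1 - y_{o,o',m}), \qquad \mathrm{idle}_{o,o',m} \ge \bigl(S_{o'} - C_{o}\bigr) - M\,(1 - y_{o,o',m}),

with M a sufficiently large constant so that both constraints become redundant whenever y_{o,o',m} = 0.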

[1] Rakovitis, N., Li, D., Zhang, N., Li, J., Zhang, L., & Xiao, X. (2022). Novel approach to energy-efficient flexible job-shop scheduling problems. Energy, 238, 121773.

[2] Zhang, L., Tang, Q., Wu, Z., & Wang, F. (2017). Mathematical modeling and evolutionary generation of rule sets for energy-efficient flexible job shops. Energy, 138, 210-227.

[3] Meng, L., Zhang, C., Shao, X., & Ren, Y. (2019). MILP models for energy-aware flexible job shop scheduling problem. Journal of cleaner production, 210, 710-723.

[4] Li, D., Zheng, T., Li, J., & Teymourifar, A. (2023). A hybrid framework integrating machine-learning and mathematical programming approaches for sustainable scheduling of flexible job-shop problems. Chemical Engineering Transactions, 103, 385-390.



5:40pm - 6:00pm

Optimization model and algorithms for the Unit Commitment problem

Javal Vyas1, Carl Laird1, Ignacio Grossmann1, Ricardo Lima4, Iiro Harjunkoski2,3, Marco Giuntoli2, Jan Poland2

1Carnegie Mellon University; 2Hitachi Energy Research; 3Aalto University; 4King Abdullah University of Science and Technology

As electrification becomes a major trend in the area of Process Systems Engineering, the Unit Commitment (UC) problem is a critical optimization challenge that arises in energy systems. The major goal in this problem is to schedule power-generating units while minimizing operational costs and adhering to physical and operational constraints. As energy systems grow more complex and electricity demands continue to rise, solving the UC problem efficiently is paramount to ensuring both cost-effectiveness and grid reliability. Various methodologies have been developed to address this problem, ranging from sophisticated optimization algorithms to heuristic-based methods [1]. Techniques used include genetic algorithms, Lagrangean relaxation, and Mixed-Integer Linear Programming (MILP), with each approach yielding varying degrees of optimality depending on the problem’s complexity [1]. Notably, the transition from Lagrangean relaxation (once employed by PJM Independent System Operators) to MILP has yielded significant cost savings, estimated at $5 billion annually for the energy sector [2].

In this work, we address the UC problem using MILP formulations and leverage the EGRET library to build an efficient, scalable model [3]. EGRET offers two new formulations, a tight and a compact one, which have been shown to be computationally competitive with formulations from the literature. To further enhance computational efficiency, we integrate a Shrinking Horizon strategy. This strategy consists of an iterative decomposition method, where the binary variables are relaxed beyond a rolling time window, so that smaller integer programming subproblems are successively solved in the rolling window.

A significant advantage of the Shrinking Horizon method lies in its ability to balance computational performance with solution quality. By breaking down the problem into manageable chunks, it reduces computational complexity without compromising much on the optimality of the schedules generated. The decomposition framework enables the model to handle larger and more complex power systems, which is crucial as grids become increasingly integrated with renewable energy sources and distributed generation.
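
A toy illustration of this rolling relaxation is sketched below for a two-unit, six-period commitment problem with made-up data; it uses SciPy's MILP interface, omits minimum up/down times and ramping, and is not the EGRET implementation.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    G, T = 2, 6
    demand = np.array([80, 120, 150, 160, 120, 90.0])
    pmin = np.array([20, 30.0]); pmax = np.array([100, 120.0])
    cfix = np.array([50, 80.0]); cvar = np.array([2.0, 1.5])

    nu = G * T                                   # commitment binaries u[g,t]
    c = np.concatenate([np.repeat(cfix, T), np.repeat(cvar, T)])
    idx_u = lambda g, t: g * T + t               # index of u[g,t] in the variable vector
    idx_p = lambda g, t: nu + g * T + t          # index of dispatch p[g,t]

    rows, lbs, ubs = [], [], []
    for t in range(T):                           # demand balance: sum_g p[g,t] = d[t]
        row = np.zeros(2 * nu)
        for g in range(G):
            row[idx_p(g, t)] = 1.0
        rows.append(row); lbs.append(demand[t]); ubs.append(demand[t])
    for g in range(G):
        for t in range(T):                       # pmin*u <= p <= pmax*u
            row = np.zeros(2 * nu); row[idx_p(g, t)] = 1.0; row[idx_u(g, t)] = -pmax[g]
            rows.append(row); lbs.append(-np.inf); ubs.append(0.0)
            row = np.zeros(2 * nu); row[idx_p(g, t)] = 1.0; row[idx_u(g, t)] = -pmin[g]
            rows.append(row); lbs.append(0.0); ubs.append(np.inf)
    cons = LinearConstraint(np.array(rows), lbs, ubs)

    fixed, window = {}, 2                        # binaries fixed in earlier windows
    for start in range(0, T, window):
        integrality = np.zeros(2 * nu)
        lo = np.zeros(2 * nu)
        hi = np.concatenate([np.ones(nu), np.repeat(pmax, T)])
        for g in range(G):
            for t in range(T):
                if (g, t) in fixed:              # keep decisions from earlier windows
                    lo[idx_u(g, t)] = hi[idx_u(g, t)] = fixed[(g, t)]
                elif start <= t < start + window:
                    integrality[idx_u(g, t)] = 1 # binary inside the current window
                # commitments beyond the window stay relaxed within [0, 1]
        res = milp(c, constraints=cons, integrality=integrality, bounds=Bounds(lo, hi))
        for g in range(G):
            for t in range(start, min(start + window, T)):
                fixed[(g, t)] = int(round(res.x[idx_u(g, t)]))
    print("commitment schedule:", dict(sorted(fixed.items())))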

This approach has been tested on several benchmark case studies, including IEEE 118, WP2383, SP3120, and CASE6468RTE, each of which represents a system of varying size and complexity. These tests provide a rigorous evaluation of the method’s scalability, robustness, and practicality. Results demonstrate that the Shrinking Horizon approach achieves a large computational speedup, reducing model-solving time by an average of at least 22.2%. At the same time, the proposed approach does not compromise solution quality, since the solutions obtained are comparable to those from traditional full-horizon methods, making it particularly well-suited for large-scale UC problems in real-world energy systems. These findings suggest that the combination of EGRET’s tight formulations and the Shrinking Horizon method can offer a promising solution for future large-scale UC applications.

References:

[1] N. P. Padhy, "Unit commitment-a bibliographical survey," in IEEE Transactions on Power Systems, vol. 19, no. 2, pp. 1196-1205, May 2004, doi: 10.1109/TPWRS.2003.821611
[2] O’Neill, R. P., Dautel, T., & Krall, E. (2011). Recent ISO software enhancements and future software and modeling plans. Federal Energy Regulatory Commission, Tech. Rep.
[3] Knueven, B., Ostrowski, J., & Watson, J. P. (2020). On mixed-integer programming formulations for the unit commitment problem. INFORMS Journal on Computing, 32(4), 857-876.

 
4:00pm - 6:00pmT4: Model Based optimisation and advanced Control - Session 3
Location: Zone 3 - Room E033
Chair: Adel Mhamdi
Co-chair: Alexander Mitsos
 
4:00pm - 4:20pm

Refinery optimal transitions by iterative linear programming

Michael Mulholland

University of KwaZulu-Natal, South Africa

This paper focuses on the control and dynamics of an oil refinery process on an intermediate level - the flows, masses and compositions of and between units within the refining operation. It aims to elucidate optimal strategies for the routing of streams during upset events imposed on the process. A general flowsheet simulation technique including tunable controllers for flows, compositions, levels and reaction extents is incorporated in a Linear Programming model.

In this work the flow/composition nonlinearity is dealt with by updating stream compositions iteratively in a series of LP solutions until convergence. A standard node represents a mixed receiving tank, with exit streams which can be split, converted and separated. These nodes can be inter-connected arbitrarily in the flowsheet, even allowing recycling.
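
A toy illustration of this successive-LP composition update is sketched below for a single blending node with a recycle stream whose composition is only known after the flows are chosen; prices, specifications and the one-node flowsheet are made-up simplifications of the full refinery model.

    import numpy as np
    from scipy.optimize import linprog

    x1, x2 = 0.60, 0.35            # light-product fraction of two crude cuts
    cost = [3.0, 2.0]              # feed costs to minimise
    x_rec, rec_flow = 0.30, 20.0   # initial recycle-composition guess, fixed recycle flow

    for it in range(50):
        # LP with the recycle composition frozen at the current estimate:
        #   f1 + f2 = 100 (product demand), plus a blend-quality constraint on the tank
        A_eq, b_eq = [[1.0, 1.0]], [100.0]
        A_ub = [[-x1, -x2]]
        b_ub = [-(0.45 * 120.0 - x_rec * rec_flow)]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None), (0, None)], method="highs")
        f1, f2 = res.x
        # update the tank (and hence recycle) composition from the optimal flows
        x_new = (x1 * f1 + x2 * f2 + x_rec * rec_flow) / (f1 + f2 + rec_flow)
        if abs(x_new - x_rec) < 1e-8:
            break
        x_rec = x_new

    print(f"converged in {it + 1} LP solves: f1 = {f1:.1f}, f2 = {f2:.1f}, "
          f"tank fraction = {x_rec:.3f}")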

Many processing industries use intermediate storages as materials advance through the various units. This presents a type of supply-chain problem where the phasing of transitions at various points can be optimized to improve process efficiency and avoid unit shut-downs. In the present work, it was sought to take advantage of the robust features of Linear Programming, which efficiently handles the large number of variables involved. The model includes:

  • Setpoint control of flows, storage masses & compositions, and reaction extents
  • High and Low constraints for flows, storage masses & compositions
  • All setpoints and setpoint weights, as well as constraints, can vary in time
  • A weight (value) can be set on the total mass in a tank for the objective function
  • Move suppression for steady control, by penalty weights on absolute flow changes in the objective function
  • Rate-of-change (ramp) limits for flows
  • Exit streams from a tank can undergo reaction, fractional component separation, or total flow splitting

The oil refining process is modelled with a crude distiller delivering four “straight run” streams (gasoline, naphtha, diesel and fuel oil) (Boucheikhchoukh et al, 2022). These split and combine through a flowsheet which includes catalytic reforming and catalytic cracking in order to maintain specifications for four different-valued products (premium gasoline, regular gasoline, diesel fuel and fuel oil). Intermediate nodes within the process may be set at various storage capacities to alleviate upsets.

Optimal strategies are presented for steady-state operation, crude supply and product demand steps, a crude change, and shut-downs of the catalytic cracker and catalytic reformer. In each case the dynamics and gross profit of the process are monitored over a 20-day period. The gross profit was based on the marginal values of initial and final stock, less operating costs. In the case of the reformer, a comparison is made between a planned shutdown as opposed to an unplanned shutdown.

The novelty of this work lies in the full optimisation of the dynamic changes over the defined horizon, with the process returning to a steady-state economic optimum. Product flows are maintained on specification by re-routing through the flowsheet. The iterative updating of stream compositions facilitates stoichiometric conversions, component separations and stream splitting.
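A minimal sketch of the iterative-LP idea on a two-feed blending toy problem (our example, not the refinery model): stream compositions are frozen, an LP over the flows is solved, the blended composition is recomputed from the mass balance, and the loop repeats with a damped update until the composition converges.

```python
import numpy as np
from scipy.optimize import linprog

XA_FEED_A, XA_FEED_B = 0.9, 0.2   # component-A fractions of the two feeds
COST = np.array([3.0, 1.0])       # cost per unit flow of feed A and feed B

x_tank = 0.5                      # initial guess of the blended tank composition
for it in range(50):
    # LP over flows with the tank composition frozen:
    #   min cost  s.t.  x_tank*(fA + fB) >= 4   (component-A delivery spec)
    res = linprog(COST,
                  A_ub=[[-x_tank, -x_tank]], b_ub=[-4.0],
                  bounds=[(0, 10), (0, 10)])
    fA, fB = res.x
    # Recompute the blended composition from the solved flows.
    x_new = (XA_FEED_A * fA + XA_FEED_B * fB) / (fA + fB)
    if abs(x_new - x_tank) < 1e-6:
        break
    x_tank = 0.5 * x_tank + 0.5 * x_new   # damped update for stability

print(f"{it + 1} LP solves: fA={fA:.2f}, fB={fB:.2f}, xA={x_tank:.3f}")
```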



4:20pm - 4:40pm

Probabilistic Model Predictive Control for Mineral Flotation using Gaussian Processes

Victor Dehon1, Paulina Quintanilla2, Antonio Del Rio Chanona1

1Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom; 2Department of Chemical Engineering, Brunel University London, Uxbridge, UB8 3PH, United Kingdom

Mineral flotation, a critical physicochemical process in the mineral industry, is one of the largest separation techniques in mineral processing [1, 2]. This method extracts desired minerals through a froth phase by leveraging differences in surface properties. Despite its conceptual simplicity, problems such as non-linear dynamics, complex interactions, and multiphase instabilities pose control challenges in these systems [2]. Achieving optimal control is crucial, as even marginal improvements in recovery can yield substantial economic benefits due to the process’s large-scale operation [3]. Model Predictive Control (MPC) is a prevalent strategy, capable of handling non-linear, multi-variable processes and implementing explicit constraints. An integral part of MPC strategies is the process model, which was replaced in this study by a Gaussian Process (GP) representation of the mineral froth flotation system to capture its complex, nonlinear dynamics and provide probabilistic state predictions.

In this study, a GP-MPC strategy is proposed. The motivation is to leverage available process data directly and build a data-driven model that can be used for control. GPs were chosen because their uncertainty quantification and probabilistic nature allow better control of the complex, non-linear dynamics of the mineral froth flotation process, while the approach retains the predictive capabilities and multivariable control advantages of MPC. Google's machine learning framework JAX was used to accelerate the training of the GP, leveraging just-in-time compilation and automatic differentiation for gradient computation. Altogether, the GP model was trained and optimised with JAX, assessed with relevant metrics, and implemented as a surrogate alongside an MPC strategy. This surrogate provides a probabilistic process model that the MPC optimisation queries for future state predictions.
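A minimal sketch, under simplifying assumptions, of fitting a GP surrogate with JAX (our illustration, not the authors' code): the RBF-kernel hyperparameters are tuned by gradient descent on the negative log marginal likelihood using automatic differentiation and just-in-time compilation; the predictive variance, which the MPC can exploit, follows from the same Cholesky factors.

```python
import jax
import jax.numpy as jnp

def rbf_kernel(x1, x2, lengthscale, variance):
    sq_dist = jnp.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return variance * jnp.exp(-0.5 * sq_dist / lengthscale**2)

def neg_log_marginal_likelihood(log_params, X, y, jitter=1e-2):
    lengthscale, variance = jnp.exp(log_params)      # log-parameterisation keeps them positive
    K = rbf_kernel(X, X, lengthscale, variance) + jitter * jnp.eye(X.shape[0])
    L = jnp.linalg.cholesky(K)
    alpha = jax.scipy.linalg.cho_solve((L, True), y)
    return 0.5 * y @ alpha + jnp.sum(jnp.log(jnp.diag(L)))

grad_fn = jax.jit(jax.grad(neg_log_marginal_likelihood))

# Toy training data: two inputs (e.g., air flow and froth depth) -> one state.
key = jax.random.PRNGKey(0)
X = jax.random.uniform(key, (30, 2))
y = jnp.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]

log_params = jnp.zeros(2)     # log lengthscale, log variance
for _ in range(300):          # plain gradient descent; a dedicated optimiser would normally be used
    log_params = log_params - 0.01 * grad_fn(log_params, X, y)

def gp_predict_mean(X_new, X, y, log_params, jitter=1e-2):
    lengthscale, variance = jnp.exp(log_params)
    K = rbf_kernel(X, X, lengthscale, variance) + jitter * jnp.eye(X.shape[0])
    L = jnp.linalg.cholesky(K)
    alpha = jax.scipy.linalg.cho_solve((L, True), y)
    return rbf_kernel(X_new, X, lengthscale, variance) @ alpha

print(gp_predict_mean(X[:3], X, y, log_params))   # queried by the MPC for state predictions
```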

Overall, the results of this GP-MPC strategy are promising. The GP yields accurate predictions with low errors on the testing datasets, with an average root mean squared percentage error of 0.47% and an average mean absolute percentage error of 0.33%. Furthermore, implementing JAX reduced training runtime and improved the accuracy of state predictions during training. The MPC showed small errors between the control setpoints and the actual control actions, and produced a control trajectory that resulted in high recovery of the desired mineral.

Altogether, this GP-MPC approach highlights the potential for integrating data-driven methods into control strategies for mineral flotation. This work aims to encourage the broader adoption of such approaches within the mineral flotation community, demonstrating that data-driven control strategies are a viable and promising option worthy of further, in-depth investigation.

[1] P. Quintanilla, S. J. Neethling, and P. R. Brito-Parada, "Modelling for froth flotation control: A review," Minerals Engineering, vol. 162, 2021.

[2] B. A. Wills, Wills’ Mineral Processing Technology, 8th ed., Butterworth-Heinemann, 2015.

[3] J. P. Ferreira II and B. K. Loveday, "An Improved Model for Simulation of Flotation Circuits," 2000.



4:40pm - 5:00pm

Revenue Optimization for a Hybrid Solar Thermal Power Plant for Dynamic Operation

Dibyajyoti Baidya, Mani Bhushan, Sharad Bhartiya

Indian Institute of Technology Bombay, India

Solar Thermal Power plants (STP) are used for large-scale electricity production from solar energy. However, STPs face significant challenges in operation resulting from (i) supply-side disturbances, namely diurnal and seasonal solar radiation variations and cloud-cover-induced uncertainties, (ii) demand-side disturbances, namely fluctuating electricity prices, and (iii) operational challenges, namely highly dynamic operation and the need for frequent plant shutdowns if adequate energy storage is unavailable.

An optimal operating strategy is needed to operate the STP in such a dynamic scenario. In the literature, revenue maximization has been used as an objective to obtain optimal operating conditions (Camacho and Gallego, 2013). However, existing research relies on simplified steady-state models while generating the optimal operating conditions, which may not adequately capture the dynamic variations of the plant. In contrast, in the current work, we propose a dynamic plant-wide model-based revenue optimization approach to obtain optimal operating conditions. The objective function representing revenue in the proposed approach accounts for changing electricity prices and the power generated by the STP. The objective function is maximized while accounting for plant dynamics in the presence of several operational constraints. Thus, the proposed approach accounts for solar radiation variability, changing electricity demand, and process dynamics, enhancing revenue optimization and operational reliability.

To demonstrate the approach, we conducted a simulation case study on a 1 MW hybrid solar thermal power plant in Gurgaon, India (Kannaiyan et al. 2019). This plant includes two solar fields, namely a Parabolic Trough Collector (PTC) field and a Linear Fresnel Reflector (LFR) field, along with a High-temperature Tank (HT), Low-temperature Tank (LT), Super-Heater (SH), Steam-Generator (SG), Pre-Heater (PH), and Steam Drum (SD). A dynamic model of plant behavior is available in Kannaiyan et al. (2019) and is used in the current work. For short-term revenue optimization (over 1 day of operation), with nominal variation in solar radiation, we focus on optimizing the oil mass flow rate through the PTC. Variation of this flow rate determines the amount of heat gained at the PTC outlet, which in turn affects the steam production by the heat-exchanger assembly (PH, SH, SG). The mass flow rate is allowed to vary at discrete time instants during the operation and is held constant between these time instants. Thus, from an optimization perspective, the flow rates at the specified discrete times become the decision variables. These decision variables affect the revenue (to be maximized) via the plant dynamic model, resulting in a Nonlinear Programming (NLP) problem. These decision variables are optimized in the presence of several operational constraints.
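A sketch of the optimization setup (ours, with a stand-in first-order plant model rather than the plant-wide dynamic model used in the study): the PTC oil flow rate is piecewise constant between switching times, those levels are the NLP decision variables, and revenue is evaluated by simulating the toy dynamics against assumed price and radiation profiles.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

hours = np.arange(24)
price = 3.0 + np.sin(2 * np.pi * (hours - 12) / 24)           # assumed electricity price profile
solar = np.clip(np.sin(np.pi * (hours - 6) / 12), 0.0, None)  # assumed solar radiation profile
n_moves = 6                                                    # flow level changes every 4 hours

def simulate_power(flow_levels):
    """Toy first-order dynamics: generated power lags the product of oil flow and radiation."""
    def rhs(t, P):
        flow = flow_levels[min(int(t // 4), n_moves - 1)]      # piecewise-constant decision
        irradiation = np.interp(t, hours, solar)
        return (flow * irradiation - P) / 2.0                  # 2 h time constant
    return solve_ivp(rhs, (0.0, 24.0), [0.0], t_eval=hours).y[0]

def neg_profit(flow_levels):
    power = simulate_power(flow_levels)
    pumping_cost = 2.0 * np.sum(flow_levels) * 4.0             # assumed cost per unit flow and hour
    return -(np.sum(price * power) - pumping_cost)

res = minimize(neg_profit, x0=np.full(n_moves, 0.5),
               bounds=[(0.0, 1.0)] * n_moves, method="L-BFGS-B")
print("optimal flow level per 4-hour block:", np.round(res.x, 3))
```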

The results show the effectiveness of our optimization strategy in generating an operational profile that significantly boosts the revenue generated by the STP.

References:

1) E. F. Camacho and A. J. Gallego, “Optimal operation in solar trough plants: A case study,” Solar Energy, vol. 95, pp. 106–117, 2013

2) S. Kannaiyan, S. Bhartiya, and M. Bhushan, “Dynamic modeling and simulation of a hybrid solar thermal power plant,” Industrial & Engineering Chemistry Research, vol. 58, pp. 7531–7550, 2019.



5:00pm - 5:20pm

Control of the WWTP Water Line Using Traditional and Model Predictive Control Approaches

Gheorghe Adrian Bodescu1, Romina Gabriela Dărăban1, Norbert Botond Mihály1, Castelia Eugenia Cristea1, Elisabeta Cristina Timiș1, Anton Alexandru Kiss2, Vasile Mircea Cristea1

1Babeş-Bolyai University of Cluj-Napoca; 2Delft University of Technology

Driven by increasing urbanization and industrialization, meeting the demand for clean water has firmly become one of humanity's top challenges. The restoration of wastewater quality and recovery of wastewater resources have emerged as topics of high interest for researchers and practitioners of the wastewater industry. As the typical wastewater treatment plant (WWTP) accounts for 30% to 40% of the total urban energy consumption and greenhouse gas (GHG) emissions from the WWTP sector account for 1% to 2% of global GHG emissions, economic and sustainable operation of WWTPs is of high importance. Modelling and process control have remarkable potential to support the tough energy and GHG emission restrictions and to cope with ever stricter regulations for clean water quality.

This study considered the WWTP's water line with an Anaerobic-Anoxic-Oxic design, which operates according to the most widely used activated sludge technology. The developed model of the WWTP was calibrated with process data from the Cluj-Napoca municipal WWTP, characterized by a treatment capacity of around 420,000 PE.

The paper presents solutions for operating the WWTP based on advanced process control methods, merging the benefits of cooperation between the lower-level decentralized control loops and the higher-level model predictive control strategy with setpoint optimization. The low-level control approach combines feedback and feedforward configurations with cascade control. These loops control nitrification in the aerated bioreactors and denitrification in the anoxic reactor. The nitrate and nitrite concentration (NO) in the anoxic reactor is controlled by manipulating the internal recycle flowrate, while aeration is controlled either by directly controlling the Dissolved Oxygen (DO) or indirectly by controlling the ammonia (NH) concentration in the aerated reactors (in feedback or cascade configuration with DO control), using the air flowrate as the manipulated variable. Traditional PI control of NO, DO, and NH is compared to supervisory control having at the upper level either an optimization layer that provides the optimal setpoint values or a model predictive control layer implementing the traditional setpoints or the optimized setpoints provided by an additional, uppermost optimization layer. The optimized setpoints are computed based on a global optimization index consisting of a weighted sum of spent energy (aeration and pumping), effluent quality, and greenhouse gas (CO2 and N2O) emissions performance sub-indices.
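An illustrative sketch of the setpoint-optimization layer (ours; the quadratic surrogate below is a placeholder for the trained neural network and the real plant model): candidate DO and NO setpoints are mapped to energy, effluent-quality and GHG sub-indices, and the weighted global index is minimised over the setpoint bounds.

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_indices(setpoints):
    """Placeholder for the ANN surrogate: returns (energy, quality, GHG) sub-indices."""
    do, no = setpoints
    energy = 0.8 * do**2                        # more aeration -> more blower/pumping energy
    quality = (2.0 - do)**2 + (1.0 - no)**2     # quality penalty away from nominal targets
    ghg = 0.3 * do + 0.5 * no                   # crude stand-in for CO2/N2O effects
    return energy, quality, ghg

def global_index(setpoints, weights=(0.4, 0.4, 0.2)):
    return float(np.dot(weights, surrogate_indices(setpoints)))

res = minimize(global_index, x0=[1.5, 1.0], bounds=[(0.5, 3.0), (0.2, 2.0)])
print("optimal DO and NO setpoints:", np.round(res.x, 2))
```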

The novel contributions of this work are the comparison of the performance of the various control strategies, the inclusion of the GHG sub-index in the setpoint optimization, and the solution of the optimization task with the support of a specially trained artificial neural network that promptly predicts the performance indices and supports real-time implementation.



5:20pm - 5:40pm

A theoretical Chance-constrained explicit Model Predictive Control based framework for balancing Safety and Operational Efficiency

Sahithi Srijana Akundi1,2,3, Yuanxing Liu1,2,3, Austin Braniff4, Beatriz Dantas4, Shayan S Niknezhad1, Faisal Khan2,3, Yuhe Tian4, Efstratios N Pistikopoulos1,3

1Texas A&M Energy Institute, Texas A&M University, College Station, TX, USA; 2Mary Kay O’Connor Process Safety Center (MKOPSC), Texas A&M University, College Station, TX, USA; 3Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, TX, USA; 4Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, USA

In industrial processes, balancing stringent safety requirements with operational efficiency is a complex challenge, especially when safety constraints conflict with performance objectives. This research introduces a theoretical framework that integrates safety and control objectives through a chance-constrained Explicit Model Predictive Control (eMPC) approach, designed to achieve an optimal trade-off between operational performance and safety assurance. The core innovation of this framework is the introduction of tolerance-based safety constraints, which permit controlled violations of safety-critical limits. This flexibility allows the system to maximize operational efficiency while maintaining a robust level of safety, addressing the inherent trade-off between the two competing goals. Central to this framework is the incorporation of Bayesian-based dynamic risk assessment in a receding horizon manner, enabling the system to continuously update and adjust safety constraints in response to real-time shifts in risk associated with safety-critical variables. This real-time adaptability ensures that safety margins evolve in tandem with the operational context, enhancing the system’s ability to respond to uncertain and fluctuating conditions. Moreover, the eMPC model is equipped with learning/adaptive capabilities, allowing it to retain the knowledge of historical safety incidents and fault data. This learning mechanism enables the control system to proactively mitigate risks in future operations, anticipating safety issues before they escalate into critical failures. As a result, the system progressively refines its decision-making over time, achieving a stronger balance between safety compliance and operational efficiency. The framework's validation is presented through a case study involving a Continuous Stirred Tank Reactor (CSTR) under thermal runaway conditions. The study illustrates how the learning control model forecasts system vulnerabilities, diagnoses emerging faults and proactively adjusts control strategies to preempt safety-critical failures.
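A small sketch (our illustration, not the paper's formulation) of how a tolerance-based chance constraint on a safety-critical variable can be turned into a deterministic back-off inside an MPC constraint set, assuming a Gaussian prediction error: Pr(T <= T_max) >= 1 - delta becomes T_pred + z_(1-delta)*sigma <= T_max, so a larger tolerated violation probability delta yields a looser effective limit and more room for performance.

```python
from scipy.stats import norm

def tightened_bound(T_max, sigma, delta):
    """Deterministic back-off for an individual Gaussian chance constraint."""
    return T_max - norm.ppf(1.0 - delta) * sigma

# Example: reactor temperature limit 390 K, prediction standard deviation 2 K.
# Allowing 5% violation probability gives a looser effective limit than 1%.
for delta in (0.01, 0.05):
    print(delta, round(tightened_bound(390.0, 2.0, delta), 2))
```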



5:40pm - 6:00pm

A Novel Continuous Reformulation for Recipe-based Dynamic Optimization of Batch Processes

Carl Sengoba, Christian Hoffmann, Markus Illner, Jens-Uwe Repke

Technical University of Berlin, Process Dynamics and Operations Group, Straße des 17. Juni 135, Berlin D-10709, Germany

Batch processes are still mainly operated via operation recipes, which are sequential procedures[1] of consecutive operation steps conjoined by logical transition conditions. These recipes are typically derived from expert process knowledge. Their application is advantageous when the batch operation is optimized as it provides a parameterization of the (now restricted) control space and reduces the dimensionality of the optimization problem significantly, especially for nonlinear dynamic process models.

However, the use of heuristically determined recipes leads to an event-driven system, which complicates the implementation as an equation-based, smooth, dynamic optimization (DO) problem. Therefore, previous operation recipe implementations relied on imperative programming (e.g., while loops, if-else statements) to represent the recipe, requiring sequential solution methods.

In the presented novel recipe formulation, the information on the currently active recipe step is stored using auxiliary differential variables, which enables simultaneous optimization, including the sequence of the recipe itself. The decision variables determining active recipe steps are formulated using sigmoidal functions combined with auxiliary differential variables, thus avoiding the presence of binary variables in the DO problem[2]. Although this optimization formulation requires auxiliary variables and equations, the problem size only increases linearly with the number of recipe steps.
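A toy sketch (ours, not the authors' formulation) of the mechanism: an auxiliary differential state ramps from 0 to 1 once a smoothed transition condition is met, and the active recipe step is selected by weighting with this state, so no binary variables appear.

```python
import numpy as np
from scipy.integrate import solve_ivp

T_SWITCH, U_HEAT, EPS, K = 350.0, 5.0, 0.5, 20.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rhs(t, y):
    T, s = y
    trigger = sigmoid((T - T_SWITCH) / EPS)   # smooth transition condition T >= T_SWITCH
    ds = K * trigger * (1.0 - s)              # auxiliary state latches near 1 after the switch
    u = (1.0 - s) * U_HEAT                    # step 1: heat; step 2: hold (no heating)
    dT = u - 0.02 * (T - 300.0)               # simple energy balance with losses
    return [dT, ds]

sol = solve_ivp(rhs, (0.0, 60.0), [300.0, 0.0], max_step=0.1)
print("final T:", round(sol.y[0, -1], 1), " step-2 activation:", round(sol.y[1, -1], 3))
```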

The proposed formulation is first applied and tested on a model of moderate size and solved in Python using a collocation approach. The examined zero-dimensional dynamic model consists of balance equations, kinetic rate equations, and further constitutive equations. For the proposed optimization problem, the computational expense of sequential vs. simultaneous methods is compared using the case process model.

For the moderate problem size in our case study, the computational expense is lower for the applied sequential method than for a simultaneous method. However, our recipe formulation facilitates parallelization for the dynamic optimization of recipe-based batch processes, while its performance against alternative formulations, such as mixed-integer dynamic optimization models, still needs to be benchmarked. In the next step, our formulation will be applied to a more complex case study of in-situ reaction/separation in an ammonia synthesis-sorption unit.

Sources

[1] Brand Rihm, Gerardo & Esche, Erik & Repke, Jens-Uwe. (2023). Efficient dynamic sampling of batch processes through operation recipes. Computers & Chemical Engineering. 179. 108433. 10.1016/j.compchemeng.2023.108433.

[2] Torben Talis, Erik Esche, Jens-Uwe Repke. (2024). A Smooth and Pressure-Driven Rate-Based Model for Batch Distillation in Packed Columns Using Hold-Time Constraints for Bang-Bang Controllers. (Eds.:) Flavio Manenti, Gintaras V. Reklaitis , BoA of (ESCAPE34/PSE24), June 2-6, 2024, Florence, Italy.

Acknowledgements

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. Grant agreement No 101058643.

 
4:00pm - 6:00pmT5: Concepts, Methods and Tools - Session 3
Location: Zone 3 - Room D016
Chair: Arnaud DUJANY
Co-chair: Ludovic Montastruc
 
4:00pm - 4:20pm

Applying Time Series Extrinsic Regression to Parameter Estimation Problems for Dynamic Models – an Alternative to Gradient-Free Approaches?

Torben Talis, John Paul Gerakis, Christian Hoffmann, Jens-Uwe Repke

Technische Universität Berlin, Germany

Time series analysis is a well-established field within the machine learning community, with two prominent applications being time-series forecasting, i.e., surrogate models predicting the next time step of the system's outputs, and time-series classification, where complete time series are mapped to discrete labels, e.g., a sensor is either working or defective. Time-Series Extrinsic Regression (TSER), however, is a method for predicting continuous, time-invariant variables from a time series by learning the relation between these underlying parameters and the complete dynamic time series of the outputs, without focusing on the recent states. For example, it can be used to predict the heart rate from an ECG signal. TSER was only established as a research field in 2021, but it has been gaining traction ever since and is used, for instance, in manufacturing technology to predict steel surface roughness from laser reflection measurements. It is applied when no models are available.

Parameter Estimation (PE) is a common task in chemical engineering. It is used to adjust model parameters to better fit existing dynamic models to experimental time series data. This becomes more challenging in higher dimensions and for dynamic systems, where sensitivity and identifiability may change over time. There already exists a multitude of algorithms to solve the problem, including second-order methods that leverage information from Jacobian and Hessian matrices, as well as gradient-free optimization techniques, such as particle swarm optimization (PSO) or simulated annealing. However, with the growing establishment of machine learning (ML) in an increasing number of domains, the question arises as to whether, and if so, how, ML in general and TSER in particular can be employed to solve PE problems.

This study marks the first application of TSER to PE problems. A comparative analysis is conducted between TSER and PSO in terms of prediction accuracy, computational cost and data efficiency. We investigate whether it is viable to use TSER when a model is available.

Our methodology to regress model parameters via ML builds on the typical assumption that a structurally correct and rigorous model, which can be simulated at low cost, is available. At the beginning, the boundaries of the parameter space are defined. This space is then sampled using Sobol sequences and the model is simulated. The resulting trajectories, along with their corresponding parameters, constitute the training data set. These trajectories are transformed by applying the “RandOm Convolutional Kernel Transform” (ROCKET) method, resulting in new features, which are subsequently used to train the regressor model. This regressor returns predictions for the parameters.
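A compact sketch of this workflow under simplifying assumptions (our toy first-order model, a simplified random-convolution transform standing in for ROCKET, and a ridge regressor):

```python
import numpy as np
from scipy.stats import qmc
from scipy.integrate import solve_ivp
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 5.0, 100)

def simulate(k, tau):
    """Toy dynamic model y' = (k - y)/tau standing in for the rigorous model."""
    sol = solve_ivp(lambda t, y: (k - y[0]) / tau, (0, 5), [0.0], t_eval=t_eval)
    return sol.y[0]

# 1) Sample the parameter space with a Sobol sequence and simulate trajectories.
space = qmc.Sobol(d=2, scramble=True, seed=0)
params = qmc.scale(space.random(256), [0.5, 0.2], [2.0, 1.5])   # bounds on (k, tau)
trajectories = np.array([simulate(k, tau) for k, tau in params])

# 2) Random convolutional kernels -> max/mean pooled features per trajectory.
kernels = [rng.standard_normal(rng.integers(5, 12)) for _ in range(50)]
def featurize(y):
    feats = []
    for w in kernels:
        c = np.convolve(y, w, mode="valid")
        feats += [c.max(), c.mean()]
    return feats
X = np.array([featurize(y) for y in trajectories])

# 3) Train the extrinsic regressor mapping features to the underlying parameters.
reg = Ridge(alpha=1.0).fit(X, params)
estimate = reg.predict(np.array([featurize(simulate(1.3, 0.8))]))
print("estimated (k, tau):", np.round(estimate, 3))
```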

In a case study, the method is applied to predict the heat transfer and kinetic parameters of a batch reactor based on simulated data. However, real measurements are often not continuously available, but are taken only at rare, discrete points in time, and different variables are measured at different, asynchronous intervals. This is also mimicked in the synthetic training data, so the influence of heterogeneity on the results can be shown and over- or undersampling strategies are applied to counteract the effect.



4:20pm - 4:40pm

GRAPSE: Graph-Based Retrieval Augmentation for Process Systems Engineering

Daniel Ovalle1, Arpan Seth2, John Kitchin1, Carl Laird1, Ignacio Grossmann1

1Carnegie Mellon University, United States of America; 2Evonik Corporation, United States of America

Large Language Models (LLMs) have demonstrated impressive capabilities, performing well across a wide range of tasks, including accelerating scientific discovery in different fields.
However, a significant limitation of these models is that they are restricted to the information found in their training data, which can be outdated and not tailored to specific domains.
This dependence on pre-training data means that the accuracy and quality of their responses may vary, particularly for topics that are less common in the training corpus [1].
As a result, general-purpose LLMs often lack the specialized knowledge required for fields like Process Systems Engineering (PSE), a rapidly advancing area of research.

Retrieval-Augmented Generation (RAG) enhances LLMs by integrating domain-specific knowledge through the indexing of extensive text in a separate information retrieval system.
This approach allows retrieved information to be presented to the LLM alongside the user question, facilitating access to up-to-date knowledge relevant to specific domains while improving interpretability and provenance tracking [2].
However, creating an indexed database for RAG from scientific documents presents several challenges. Scientific knowledge, especially in PSE, often involves complex mathematical expressions that standard LLM document parsers struggle to interpret, leading to a loss of essential semantic information [3].
Additionally, traditional indexed databases often overlook the relationships that exist across different papers, which can hinder the comprehensive retrieval of interconnected knowledge [4].

In this work, we propose a reflective-active LLM agent [5] that possesses domain-specific knowledge in PSE.
The agent accesses a graph-based index database where PSE papers are processed using a computer vision model to accurately capture concepts, relationships, and mathematical expressions contained within them.
Within this database, documents are grouped semantically and processed recursively, employing techniques such as embedding, clustering, and summarization to construct a hierarchical tree structure with varying levels of summarization from the bottom up [6].
This approach enables the agent to integrate multiple levels of abstraction effectively. The ultimate goal is to develop a tool that facilitates research and education in PSE, while also exploring the creation of benchmarks to evaluate the performance of LLM models specifically tailored to this field.
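For orientation, a schematic sketch of the retrieval step in a generic RAG pipeline (ours, not GRAPSE's implementation; `embed` and `ask_llm` are hypothetical placeholders for an embedding model and an LLM call): chunks and the question are embedded, the most similar chunks are retrieved, and they are prepended to the prompt.

```python
import numpy as np

def retrieve(question, chunks, embed, k=3):
    q = embed(question)
    scores = [np.dot(q, embed(c)) / (np.linalg.norm(q) * np.linalg.norm(embed(c)))
              for c in chunks]                       # cosine similarity to each chunk
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def answer(question, chunks, embed, ask_llm):
    context = "\n\n".join(retrieve(question, chunks, embed))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)
```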

[1] F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. H. Miller, and S. Riedel, "Language models as knowledge bases?" arXiv preprint arXiv:1909.01066, 2019.
[2] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel et al., "Retrieval-augmented generation for knowledge-intensive NLP tasks," Advances in Neural Information Processing Systems, vol. 33, pp. 9459–9474, 2020.
[3] L. Blecher, G. Cucurull, T. Scialom, and R. Stojnic, "Nougat: Neural optical understanding for academic documents," arXiv preprint arXiv:2308.13418, 2023.
[4] Y. Hu, Z. et al., "GRAG: Graph Retrieval-Augmented Generation," arXiv preprint arXiv:2405.16506, 2024.
[5] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, "ReAct: Synergizing reasoning and acting in language models," arXiv preprint arXiv:2210.03629, 2022.
[6] P. Sarthi, S. Abdullah, A. Tuli, S. Khanna, A. Goldie, and C. D. Manning, "RAPTOR: Recursive abstractive processing for tree-organized retrieval," arXiv preprint arXiv:2401.18059, 2024.



4:40pm - 5:00pm

Flexibility assessment via affine bound evaluation

Diogo Narciso1, Steven Sachio2,3, Maria M. Papathanasiou2,3

1CERENA, Department of Chemical Engineering, Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal; 2The Sargent Centre for Process Systems Engineering, Imperial College London, London, United Kingdom; 3Department of Chemical Engineering, Imperial College London, London, United Kingdom

System design is guided primarily by the system performance with respect to its economic viability[1]. This process entails the selection of a set of variables within a feasible design space to achieve the optimal performance under nominal operating conditions[1]. In real-life systems, however, system operation is often subject to multiple disturbances, which may cause the product output(s) to be off-spec. In such cases, flexibility becomes an important consideration in system design, since it promotes more robust designs and minimises the risk of operating under problematic regimes[2].

Several developments relating to flexibility in system design have been proposed since the 1980s. Swaney and Grossmann (1985) proposed the flexibility index metric, which quantitatively describes the flexibility of a nominal process design based on inscribing hyperrectangles within the feasible space[3]. Since then, extended flexibility index formulations have been proposed to solve stochastic, dynamic, and non-convex problems[2]. However, recent works have moved away from investigating nominal designs to approximating the full feasible space (design space identification)[4].

This work proposes a novel approach for flexibility assessment. In design problems where the design space (DSp) is constrained by a set of affine bounds, flexibility may be expressed either as the minimum or the maximum distance with respect to the feasible (design) space bounds. For any point in the DSp, the minimum distance provides a good indicator of the minimum flexibility, as it corresponds to the direction with the highest risk of violating the constraints. An analogous conclusion can be drawn between the maximum distance and the maximum flexibility. These distances (or flexibilities) can be computed exactly via geometrical considerations, enabling the calculation of minimum-based and maximum-based flexibility metrics for all points in the DSp. These problems are in fact multiparametric programming problems, as the goal is to obtain comprehensive flexibility maps rather than to investigate unique points in the DSp. In the case of minimum flexibility, their solutions comprise: (i) a set of critical regions defining a convex hull within the DSp (each associated with a unique nearest bound of the feasible space), (ii) the corresponding optimizer functions (projection at the nearest bound), and (iii) objective functions (minimum distance).
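A minimal sketch of the underlying distance calculation (our illustration): for a design space bounded by affine constraints A x <= b and a feasible point x, the distance to each bounding hyperplane is (b_i - a_i'x)/||a_i||, and the smallest and largest of these give minimum-based and maximum-based flexibility values at that point.

```python
import numpy as np

def affine_flexibility(A, b, x):
    A, b, x = np.asarray(A, float), np.asarray(b, float), np.asarray(x, float)
    gaps = b - A @ x                              # slack of each bound (>= 0 if feasible)
    dists = gaps / np.linalg.norm(A, axis=1)      # Euclidean distance to each hyperplane
    return dists.min(), dists.max()

# Unit square 0 <= x1, x2 <= 1 written as A x <= b.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
print(affine_flexibility(A, b, [0.8, 0.5]))       # -> (0.2, 0.8)
```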

A detailed mathematical framework was developed for this class of problems. It enables a new paradigm for flexibility assessment, which can be applied to design problems of any dimension. Problem complexity is significantly reduced in comparison with the classic multiparametric programming approaches, since only a limited number of active sets need to be considered during solution calculation.

References:

[1] Smith, R. 2005. Chemical process: design and integration, John Wiley & Sons.

[2] Pistikopoulos, E. N. & Tian, Y. 2024. Advanced Modeling and Optimization Strategies for Process Synthesis. Annual Review of Chemical and Biomolecular Engineering, 15, 81-103.

[3] Swaney, R. E. & Grossmann, I. E. 1985. An index for operational flexibility in chemical process design. Part I: Formulation and theory. AIChE Journal, 31, 621-630.

[4] Geremia, M., Bezzo, F. & Ierapetritou, M. G. 2023. A novel framework for the identification of complex feasible space. Computers & Chemical Engineering, 179, 108427.



5:00pm - 5:20pm

Interval Hessian-based Algorithm for Nonconvex Optimization

Ashutosh Sharma1, Gauransh Dingwani2, Ishan Bajaj1

1Department of Chemical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India; 2Department of Chemical Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, India

Second-order optimization algorithms based on the Hessian of the objective function have been proven to achieve a faster convergence rate than the first-order methods. However, their application to large-scale nonconvex problems is hindered due to the computational cost associated with: (1) Hessian evaluation, (2) Hessian matrix inversion, and (3) modifying the Hessian matrix to ensure its positive-definiteness.

Accordingly, we propose a new search direction based on interval Hessian and incorporate it in a line-search framework to find a local minimum of unconstrained nonconvex optimization problems. The algorithm works as follows.

Step 0: Define p as the index corresponding to the Hessian update and k corresponding to the iterate update.

Step 1: Define a region of size Δ around the current iterate xk=xp, obtain the variable bounds (xk, xp ∈ [xpl, xpu]), and estimate the interval Hessian ([∇2ƒp]).

Step 2: Estimate the lower bound on the minimum eigenvalue (λp) of [∇2ƒp]. We use the Gerschgorin, E-Matrix and Mori-Kokame methods for this step.

Step 3: Obtain a Hessian matrix Hp = ∇2ƒp − λpI that approximates the Hessian of the objective function and is guaranteed to be positive-definite in the interval [xpl, xpu]. The matrix Hp can be viewed as the Hessian of the αBB convex underestimator [1] developed for deterministic global optimization of twice-differentiable nonconvex problems.

Step 4: Obtain the search direction by taking the product of (Hp)-1 and the gradient ∇ƒk.

Step 5: The new iterate is obtained by xk+1 = xk - θk (Hp)-1∇ƒk, where θk is the step length satisfying the Armijo conditions.

Step 6: If ||∇ƒk|| < ε, then terminate the algorithm; otherwise, update the index k ← k+1. If xk+1 ∈ [xpl, xpu], go to step 4. Otherwise, p ←p+1 and go to step 2.

The novelty of the algorithm is that, unlike traditional second-order methods such as the Newton method, we avoid performing expensive operations at all iterations. Specifically, for the iterations with xk ∈ [xpl, xpu], we compute the Hessian only once, and the search direction is obtained by matrix-vector multiplication. On the other hand, at every iteration of the Newton method, the Hessian needs to be computed, and finding the search direction requires O(n³) operations. Moreover, for the Newton method, the Hessian may become indefinite for nonconvex problems and needs to be modified to guarantee a descent direction. However, in our algorithm, the matrix Hp (defined in Step 3 above) is a positive-definite approximation of the Hessian of the original function and need not be modified.
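To make Steps 1-6 concrete, a minimal sketch under simplifying assumptions (ours, not the authors' implementation): the interval Hessian over the box is enclosed crudely by sampling corner Hessians of a test function, a Gerschgorin bound gives λp, the shift is clamped at zero when λp is already positive (αBB-style), and the shifted matrix is reused with an Armijo line search while the iterate stays inside the box.

```python
import itertools
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def rosenbrock_hess(x):
    return np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                     [-400 * x[0], 200.0]])

def interval_hessian(x, delta):
    """Crude enclosure: elementwise min/max of the Hessian sampled at the box corners."""
    corners = [rosenbrock_hess(x + delta * np.array(s))
               for s in itertools.product([-1, 1], repeat=len(x))]
    return np.min(corners, axis=0), np.max(corners, axis=0)

def gerschgorin_lower_bound(H_lo, H_hi):
    n = H_lo.shape[0]
    offdiag = np.maximum(np.abs(H_lo), np.abs(H_hi))
    return min(H_lo[i, i] - sum(offdiag[i, j] for j in range(n) if j != i) for i in range(n))

x, delta = np.array([-1.2, 1.0]), 0.1
for _ in range(500):
    g = rosenbrock_grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    lo, hi = interval_hessian(x, delta)
    lam = gerschgorin_lower_bound(lo, hi)
    Hp = rosenbrock_hess(x) + max(0.0, -lam) * np.eye(len(x))   # positive definite on the box
    Hp_inv = np.linalg.inv(Hp)
    x_box = x.copy()
    for _ in range(50):                                          # reuse Hp while inside the box
        if not np.all(np.abs(x - x_box) <= delta) or np.linalg.norm(g) < 1e-6:
            break
        d = Hp_inv @ g
        theta = 1.0
        while theta > 1e-12 and rosenbrock(x - theta * d) > rosenbrock(x) - 1e-4 * theta * (g @ d):
            theta *= 0.5                                         # Armijo backtracking
        x = x - theta * d
        g = rosenbrock_grad(x)

print("x:", np.round(x, 4), "||grad||:", float(np.linalg.norm(rosenbrock_grad(x))))
```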

We apply our algorithm to a set of 210 problems and show that our method converges to a local minimum for 70% of the problems using Δ = 0.1 in fewer than 1000 Hessian evaluations. We compared the traditional trial-and-error approach of Hessian modification with the proposed algorithm and showed that, for fewer than 21 O(n³) operations, our proposed algorithm could solve 44% of the problems while the traditional method could solve 33%.

References

[1] Adjiman et al. (1998). Computers & Chemical Engineering, 22(9):1137–1158



5:20pm - 5:40pm

Efficient exploration of near-optimal designs by exploiting convexity

Evren Mert Turan, Stefano Moret, André Bardow

ETH Zurich, Switzerland

The transition away from fossil resources necessitates a massive shift in the energy, chemicals, and fuels industry. To facilitate this shift, researchers have developed green-field models of future supply chains, technology mixes, and infrastructure requirements. These predictions are derived from systems models which are typically formulated as large-scale linear programs that minimize the capital and operating cost of the greenfield system while meeting specified sustainability goals. However, Trutnevyte (2016) showed that providing a single cost-optimal solution is both limiting and naïve: Firstly, due to the deep uncertainty of the future and simplifying assumptions, we can be certain that the cost-optimal solution is not the “real life” cost-optimal solution. Secondly, a model usually neglects crucial factors that decision-makers will consider in planning the transition. Some of these factors are challenging to state mathematically, e.g., public acceptance of technologies, and so including these factors directly in the optimization constraints or additional objectives is not feasible.

This realisation has led to the proposal of various algorithms for generating many near-cost-optimal designs (e.g., 10% suboptimal). The designs can then be analysed at a later stage taking all decision factors into account. These algorithms are unified under the name Modelling to Generate Alternatives (MGA, Brill et al. 1982). MGA algorithms solve a sequence of optimization problems to explore the near-optimal space. MGA has two goals: (i) to identify maximally different solutions and (ii) to ensure a diversity of solutions to prevent a bias towards certain combinations. An ideal MGA algorithm will meet these goals while avoiding excessive computational effort and not requiring major changes of existing models. Surprisingly, MGA algorithms are not compared based on their effectiveness at meeting these goals but instead with respect to the computational cost per iteration (Lau et al., 2024).

In this work, we propose tractable metrics to measure MGA goals and propose a prototype MGA algorithm that exploits the convexity of the design problem to efficiently generate near-optimal designs. The algorithm is based on the classic hit-and-run Markov chain Monte Carlo algorithm and involves solving a sequence of feasibility problems, instead of optimization problems, by generating new designs along line segments within the near-optimal region. This greatly reduces the computational cost per new design.
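A sketch of the hit-and-run step (ours): given the near-optimal region written as linear constraints A x <= b (the original feasible set plus the cost constraint c'x <= (1 + eps) * optimal cost), a random direction is drawn from a feasible point, the feasible segment along that line is computed, and the next design is sampled uniformly on it, so each new design costs only a few vector operations instead of an optimization solve.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    A, b, x = np.asarray(A, float), np.asarray(b, float), np.asarray(x0, float)
    samples = []
    for _ in range(n_samples):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        slack, proj = b - A @ x, A @ d
        # Feasible step range along the line x + t*d for every constraint.
        t_hi = np.min(slack[proj > 1e-12] / proj[proj > 1e-12], initial=np.inf)
        t_lo = np.max(slack[proj < -1e-12] / proj[proj < -1e-12], initial=-np.inf)
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x.copy())
    return np.array(samples)

# Near-optimal region of min x1 + x2 s.t. x >= 0, x1 + x2 >= 1, with 10% cost slack:
A = [[-1, 0], [0, -1], [-1, -1], [1, 1]]
b = [0, 0, -1, 1.1]
designs = hit_and_run(A, b, x0=[0.5, 0.55])
```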

We benchmark the proposed MGA metrics and algorithm against alternative methods from the literature. Our results show the benefits of exploiting the convexity of the solution space. The resulting methodology can be directly applied to any design problem stated as a linear program and is applicable, with minor modifications, to any convex design problem.

References

Brill Jr, E. D., Chang, S. Y., & Hopkins, L. D. (1982). Modeling to generate alternatives: The HSJ approach and an illustration using a problem in land use planning. Management Science, 28(3), 221-235.

Trutnevyte, E. (2016). Does cost optimization approximate the real-world energy transition?. Energy, 106, 182-193.

Lau, M., Patankar, N., & Jenkins, J. D. (2024). Measuring Exploration: Review and Systematic Evaluation of Modelling to Generate Alternatives Methods in Macro-Energy Systems Planning Models. arXiv preprint arXiv:2405.17342.



5:40pm - 6:00pm

SNoGloDe: A Structured Nonlinear Global Decomposition Solver

Georgia Stinchfield1, Arsh Bhatia1, Michael Bynum2, Yankai Cao3, Carl Laird1

1Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA; 2Discrete Math and Optimization, Sandia National Laboratories, Albuquerque, NM; 3University of British Columbia, Chemical and Biological Engineering, Vancouver, British Columbia, Canada

Many industrial-scale optimization problems cannot be solved directly with off-the-shelf solvers. These large-scale optimization problems typically require decomposition strategies and customized optimization algorithms to reach a feasible solution within a realistic computational time window. Many decomposition techniques rely on reformulating the problem such that it has a block-angular structure. In this work, we focus on implementing and extending a decomposition algorithm originally posed for nonlinear two-stage stochastic programs (stochastic programs exhibit this block-angular structure), as first proposed by Cao and Zavala [1]. While many decomposition methods are targeted towards linear or convex models, the approach proposed in [1] has global optimality guarantees on a class of nonlinear, non-convex problems.

The general algorithmic framework proposed by [1] involves traversing a spatial branch and bound tree in the space of the first-stage variables of the stochastic program. The lower bounding problem is formulated by removing the linking equality constraints, and the upper bounding problem is typically solved by fixing the first-stage variables, making the major algorithmic steps of this approach trivially parallelizable.

We develop a general framework, utilizing the algebraic modeling language Pyomo [2], to decompose and solve large-scale optimization problems with variants of a block-angular structure; in other words, extending beyond two-stage nonlinear stochastic programs to other cases involving complicating variables. For example, consider problems that are decomposed temporally. Instead of enforcing equality constraints across a subset of first-stage variables, neighboring time periods from the original time horizon must have equality constraints imposed on certain variables from the end of time period N-1 and the start of time period N. These problems can have combinations of discrete and continuous decisions, along with linear and nonlinear constraints. Our tool aims to create a unified framework to apply and extend the general algorithm proposed in [1]: it allows for customization of the upper and lower bounding problems (e.g., problem-specific generation of candidate solutions, custom relaxations at the lower bound), provides a parallelized implementation, and supports a variety of problem structures that exhibit complicating variables.
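A small Pyomo sketch (ours, not the SNoGloDe interface) of the block-angular structure the solver targets: a complicating first-stage variable, one block per scenario with its own nonlinear constraint, and linking constraints whose removal gives the decomposable lower-bounding problem and whose shared variable, when fixed, gives the upper-bounding problem.

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.S = pyo.RangeSet(1, 3)
m.x = pyo.Var(bounds=(0, 5))                     # complicating (first-stage) variable

demand = {1: 1.0, 2: 2.0, 3: 3.0}

def block_rule(b, s):
    b.x_copy = pyo.Var(bounds=(0, 5))            # local copy of the complicating variable
    b.y = pyo.Var(bounds=(0, 10))
    b.nonlinear = pyo.Constraint(expr=b.y * b.x_copy >= demand[s])
    b.cost = pyo.Expression(expr=b.y + 0.1 * b.x_copy**2)
m.block = pyo.Block(m.S, rule=block_rule)

# Linking constraints: removing them yields the decomposable lower-bounding
# problem; fixing m.x yields the trivially parallel upper-bounding problem.
m.link = pyo.Constraint(m.S, rule=lambda mod, s: mod.block[s].x_copy == mod.x)
m.obj = pyo.Objective(expr=sum(m.block[s].cost for s in m.S))
```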

References

[1] Cao, Yankai, and Victor M. Zavala. "A scalable global optimization algorithm for stochastic nonlinear programs." Journal of Global Optimization 75.2 (2019): 393-416.

[2] Bynum, Michael L., et al. Pyomo-optimization modeling in python. Vol. 67. No. s 32. Berlin/Heidelberg, Germany: Springer, 2021.

Acknowledgements & Disclaimers

Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

 
4:00pm - 6:00pmT6: Digitalization and AI - Session 2
Location: Zone 3 - Room D049
Chair: Fatima Rani
Co-chair: Sergio Lucia
 
4:00pm - 4:20pm

Computational Assessment of Molecular Synthetic Accessibility using Economic Indicators

Friedrich Hastedt1, Klaus Hellgardt1, Sophia Yaliraki1, Antonio del Rio Chanona1, Dongda Zhang2

1Imperial College London, United Kingdom; 2University of Manchester, United Kingdom

The field of molecular and drug discovery has made significant progress in generating promising compounds with desired physical properties. However, a significant gap remains in addressing the practical feasibility of synthesizing these compounds. While current research focuses primarily on the prediction of molecular activity and properties, the economic viability of synthesis, including the cost and complexity of producing these compounds at scale, is often overlooked. Without incorporating market-driven constraints such as price into early-stage predictions, many molecules that appear promising in silico are ultimately inviable from a process engineering or market perspective. This results in wasted time and resources.

In recent years, several machine-learning (ML) approaches have been developed to guide virtual screening and de novo molecule generation toward synthesizable compounds [1]. A promising strategy involves computationally efficient scoring functions that classify molecules as “easy-to-synthesize (ES)” or “hard-to-synthesize (HS)”. These functions use either i) complexity-based indicators [2] or ii) retrosynthetic analysis [3] to assess synthetic accessibility. Although both methods have their merits, they face a significant limitation: the inability to generalize to out-of-distribution molecules, which are in fact the molecules of interest. Additionally, these scoring systems are typically based on binary classifications (1 for ES, 0 for HS) or predefined continuous ranges (e.g., 1 to 10, where 10 represents HS), which lack a clear physical interpretation.

To overcome the limitations, we propose a novel molecular synthetic accessibility score based on the market price of a molecule. Our model (MolPrice) is trained on a database of 5 million molecules with associated catalogued prices. Leveraging self-supervised learning, MolPrice differentiates between the prices of synthetically accessible molecules and more complex, out-of-distribution molecules, such as inaccessible natural products or HS compounds. By grounding the score in the market value of molecules, MolPrice provides a physically interpretable metric. Compared to existing models for price prediction, MolPrice demonstrates superior accuracy, speed, and reliability, as well as enhanced generalizability.

We validate MolPrice through multiple case studies, including virtual screening and retrosynthetic planning. In virtual screening, MolPrice steers the search toward synthetically accessible molecules, while preserving molecules with desirable properties. In retrosynthetic planning, MolPrice efficiently prioritizes promising synthetic routes, potentially reducing synthetic complexity and cost. In summary, MolPrice is a versatile tool, offering both accurate molecular price predictions and reliable synthetic accessibility assessments.

1. Stanley, M., and Segler, M. 2023. Fake it until you make it? Generative de novo design and virtual screening of synthesizable molecules. Current Opinion in Structural Biology, 82, p.102658

2. Ertl, P., and Schuffenhauer, A. 2009. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of Cheminformatics, 1(1), p.8.

3. Huang, Q., Li, L.L., & Yang, S.Y. (2011). RASA: A Rapid Retrosynthesis-Based Scoring Method for the Assessment of Synthetic Accessibility of Drug-like Molecules. Journal of Chemical Information and Modeling, 51(10), p. 2768–2777.

4. Sanchez-Garcia, R., Havasi, D., Takács, G., Robinson, M., Lee, A., Delft, F., and Deane, C. 2023. CoPriNet: graph neural networks provide accurate and rapid compound price prediction for molecule prioritisation. Digital Discovery, 2(1), p.103–111.



4:20pm - 4:40pm

ML-based adsorption isotherm prediction of metal-organic frameworks for carbon dioxide and methane separation adsorbent screening

Dongin Jung, Donggeun Kang, Donghyeon Kim, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

Biogas, composed of carbon dioxide and methane, is primarily separated using pressure swing adsorption (PSA) to upgrade it into a high-energy-content gas with high methane purity. Recently, metal-organic frameworks (MOFs) have emerged as promising carbon dioxide adsorbents for PSA in biogas treatment, owing to their high porosity and tunable structures. Estimating the adsorption capacity of MOFs is essential for screening high-performing adsorbents. While molecular simulations are commonly used to estimate the adsorption capacities, their computational intensity acts as a bottleneck in screening MOF adsorbents. This study proposes a new AI-leveraged high-throughput screening (HTS) methodology to rapidly identify high-performance MOF adsorbents for biogas treatment. A graph neural network (GNN) model was developed to predict the adsorption capacities of all MOF candidates, replacing the time-consuming molecular simulations. The GNN model processes the structural graphs of MOFs, capturing their spatial configurations, such as surface structure and pore characteristics, which are closely related to adsorption performance. Based on the adsorption capacities predicted by the GNN model, an isotherm model was derived to characterize the adsorption behavior of the MOFs. In the first stage of screening, we eliminate unsuitable MOFs with disordered structures or those containing precious metals. To ensure the viability of our screening method, we performed PSA process simulations on the remaining MOF candidates under various conditions (i.e., pressures and biogas compositions). We then identify the optimal MOFs, which exhibit high and stable methane recovery. Finally, the identified optimal MOFs outperformed conventional PSA adsorbents, demonstrating the effectiveness of our methodology. The proposed screening methodology not only contributes to the rapid screening of MOFs at the process scale for biogas treatment but also paves the way for broader applications of MOFs in carbon dioxide separation technologies.

References

Ga, S., An, N., Lee, G. Y., Joo, C., & Kim, J. (2024). Multidisciplinary high-throughput screening of metal–organic framework for ammonia-based green hydrogen production. Renewable and Sustainable Energy Reviews, 192, 114275. https://doi.org/10.1016/j.rser.2023.114275

Choudhary, K., DeCost, B. Atomistic Line Graph Neural Network for improved materials property predictions. npj Comput Mater 7, 185 (2021). https://doi.org/10.1038/s41524-021-00650-1



4:40pm - 5:00pm

Predicting Surface Tension of Organic Molecules Using COSMO-RS Theory and Machine Learning

Flora Esposito1, Ulderico Di Caprio1, Bruno Rodrigues3, Florence Vermeire2, Idelfonso Bessa dos Reis Nogueira3, Mumin Enis Leblebici1

1Center for Industrial Process Technology, Department of Chemical Engineering, KU Leuven, Agoralaan Building B, 3590 Diepenbeek, Belgium; 2KU Leuven, Department of Chemical Engineering, Celestijnenlaan 200F-bus 2424, Leuven 3001, Belgium; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim 793101, Norway

Surface tension is a key property at the liquid/gas interface and plays an important role in various chemical engineering processes, including liquid flow and transport through porous media. Experimental measurements of this property require specialized equipment and are time-consuming; therefore, predictive models for surface tension are essential. Available modeling methods rely on parametric fitting of experimental data. However, Gaudin recently proposed a COSMO-RS-based model to predict the surface tension (γX) of pure liquids [1]. The model is described by: γX = −ΔGX,G→X/sX, where γX (mN/m) is the surface tension of a compound X, ΔGX,G→X (kcal/mol) is the Gibbs free energy of self-solvation of a compound X, and sX (Å²) is the molecular surface area of a compound X. The implicit assumption of this model is that any surface segment has an equal probability of being exposed at the surface. While this approach assumes uniform molecular orientation at the surface and was originally tested on a limited set of molecules at 20 °C, this work aims to: 1. test the prediction capabilities of the current approach across a wider range of temperatures and compounds; and 2. develop a corrective machine learning (ML) model correlating the deviations from the theoretical prediction with molecular descriptors. The deviation is expressed as the ratio between experimental and simulated surface tensions: ε = γX,exp/γX,sim. Surface tension data for 93 common organic molecules are obtained from the surface-tension.de website. They include alcohols, amines, amides, nitro compounds, ketones, hydrocarbons, ethers, diols, etc. Their molecular descriptors are computed using the Mordred library in Python. The COSMOtherm software is employed to predict ΔGX,G→X and sX.

A neural network with four hidden layers is optimized using Bayesian techniques to minimize the Mean Squared Error (MSE) on the validation set, achieving an MSE of 0.0056 and an R² of 0.8452 on the test set. Gaudin's model for predicting surface tension exhibited a mean absolute percentage error (MAPE) of approximately 16–17%, a root mean square error (RMSE) of 9–7 mN/m, and a mean absolute error (MAE) of 6–5 mN/m when evaluated across a temperature range of 5–50 °C. However, integrating Gaudin’s model with a machine learning-based corrective approach significantly improved predictive accuracy. The hybrid model reduced the MAPE to 6–7%, the RMSE to 3.7–3.3 mN/m, and the MAE to 2–3 mN/m over the same temperature range. This enhancement represents a reduction in prediction errors of approximately 60.6% in MAPE, 56.3% in RMSE, and 54.5% in MAE compared to the standalone Gaudin model. Nevertheless, the ML model will be retrained on a larger and broader set of molecules, retrieved from the Jasper dataset [2], aiming to improve its prediction capabilities.
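A sketch of how the hybrid correction is assembled (ours; the descriptor and surface-tension arrays below are random placeholders for the real data): the ML model learns the ratio ε from molecular descriptors, and the hybrid prediction multiplies the COSMO-RS estimate by the predicted ratio.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(93, 20))              # Mordred descriptors (placeholder)
gamma_cosmo = rng.uniform(15, 45, size=93)           # COSMO-RS estimates (placeholder)
gamma_exp = gamma_cosmo * rng.uniform(0.8, 1.2, 93)  # experimental values (placeholder)

eps = gamma_exp / gamma_cosmo                         # deviation ratio to be learned
model = MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64), max_iter=2000, random_state=0)
model.fit(descriptors, eps)

gamma_hybrid = model.predict(descriptors) * gamma_cosmo   # corrected surface tensions
```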

Bibliography.

1 Gaudin, T. Chem Phys Lett 706, 308–310 (2018)

2 Jasper, J. J. J Phys Chem Ref Data 1, 841–1010 (1972)



5:00pm - 5:20pm

Hybrid model development for Succinic Acid fermentation: relevance of ensemble learning for enhancing model prediction

Juan Federico Herrera Ruiz, Javier Fontalvo, Oscar Andres Prado-Rubio

Universidad Nacional de Colombia sede Manizales, Colombia

The increasing focus on sustainable development goals has spurred significant research into bioprocess optimization, particularly through technological advancements in process monitoring, data storage, and computational capabilities. These developments, combined with modelling techniques and simulation tools, are driving substantial advances in the digitalization of biomanufacturing. In this context, hybrid modelling has emerged as a powerful approach, combining parametric and non-parametric methods to mitigate their individual drawbacks [1].

This study focuses on developing a hybrid model to harness limited experimental data and improve state predictions for succinic acid fermentation by Escherichia coli [2]. Succinic acid is considered one of the key molecules paving the way for sustainable bio-based production of chemicals. However, parametric kinetic models for succinic acid fermentation tend to be overparameterized and uncertain, leading to poor predictive performance [3]. The present research was conducted in two stages. First, the experimental data were pretreated using established methodologies for removing outliers and noise [4]. In the second stage, different hybrid models were proposed for the system, with varying degrees of hybridization (including from one to all reaction rates). For each hybrid model, the predictive power of different machine learning (ML) algorithms, such as ANN, SVM, and Gaussian Processes, was investigated. Besides, two tuning strategies were tested: a) using the original kinetic parameters and b) recalibrating the remaining kinetic parameters after the ML training. During model validation, high variability in the quality of the predictions was observed. Therefore, an ensemble learning approach was implemented to mitigate this issue.
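A minimal sketch of the ensemble idea (ours; the data and models below are placeholders): several data-driven rate models are trained on the same data and their averaged prediction replaces the single rate expression inside the hybrid balance equations, damping the run-to-run variability observed during validation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
states = rng.uniform(0, 10, size=(80, 2))                          # substrate, biomass (placeholder data)
rates = 0.4 * states[:, 0] / (1.2 + states[:, 0]) * states[:, 1]   # "measured" uptake rates (placeholder)

members = [MLPRegressor((32, 32), max_iter=3000, random_state=0),
           SVR(C=10.0),
           GaussianProcessRegressor()]
for member in members:
    member.fit(states, rates)

def ensemble_rate(state):
    """Averaged rate prediction used inside the hybrid material balances."""
    x = np.atleast_2d(state)
    return float(np.mean([member.predict(x)[0] for member in members]))

print(ensemble_rate([5.0, 2.0]))
```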

The data used for training and validation were the same as in the original research. The results showed that the hybrid models perform better than the parametric model, with a validation RMSE of 3.0456 for models (a) and (b) with the highest degree of hybridization (darkest models), compared to an RMSE of 7.0268 for the parametric model. These results demonstrate the advantages of hybrid modeling in accurately describing succinic acid fermentations even with limited data, which supports the prospects of bioprocess scale-up, digitalization and biorefinery development.

References

[1] de Azevedo CR, Díaz VG, Prado‐Rubio OA, Willis MJ, Préat V, Oliveira R, et al. Hybrid Semiparametric Modeling: A Modular Process Systems Engineering Approach for the Integration of Available Knowledge Sources. Systems Engineering in the Fourth Industrial Revolution, Wiley; 2019, p. 345–73. https://doi.org/10.1002/9781119513957.ch14.

[2] Chaleewong T, Khunnonkwao P, Puchongkawarin C, Jantama K. Kinetic modeling of succinate production from glucose and xylose by metabolically engineered Escherichia coli KJ12201. Biochem Eng J 2022;185:108487. https://doi.org/10.1016/j.bej.2022.108487.

[3] Leonov P. Bio-succinic acid production from alternative feedstock. Denmark Technical University, 2022. PhD Thesis.

[4] Sánchez-Rendón JC, Morales-Rodriguez R, Matallana-Pérez LG, Prado-Rubio OA. Assessing Parameter Relative Importance in Bioprocesses Mathematical Models through Dynamic Sensitivity Analysis, 2020, p. 1711–6. https://doi.org/10.1016/B978-0-12-823377-1.50286-X



5:20pm - 5:40pm

Addressing the bottlenecks in implementing artificial intelligence for decarbonisation of thermal power plants

Waqar Muhammad Ashraf, Vivek Dua

University College London, United Kingdom

Artificial intelligence (AI) has had a transformative impact on many industrial sectors, including healthcare and banking; however, AI adoption in industrial thermal power systems remains relatively slow. This is attributed to the data-centric nature of AI modelling algorithms, which lack interpretability [1, 2], and to the ineffective introduction of system-based constraints into the modelling algorithms and the optimisation problem. As a result, the trained AI models and the solutions estimated from the optimisation problem, though mathematically feasible, may not be suitable for testing on the real-time operation of industrial systems, particularly thermal power plants. To address these challenges impeding the adoption of AI in thermal power plants, we present a comprehensive AI-based analysis toolkit that incorporates interpretable AI models [3], uncertainty quantification of the model-based point predictions [4], and an improved formulation of the optimisation problem for efficient solution estimation for the performance enhancement of thermal power plants.

The developed AI-based analysis toolkit will be implemented on a 660 MW supercritical thermal power plant to maximise thermal efficiency and minimise heat rate during ramp-up and ramp-down of the power plant. We will also demonstrate how AI model-based optimisation without system-specific constraints may produce solutions that are feasible for the formulated optimisation problem but practically ineffective when implemented in plant operation. The improved solution estimation strategy provided by the toolkit can have a transformative impact on AI adoption in industrial systems and promotes the responsible use of AI for enhancing their performance. It is further anticipated that smart operation of thermal power plants can significantly reduce fossil fuel consumption, cutting down large volumes of emissions to the environment and supporting the decarbonisation of thermal power plants.

References

[1] Saleem, R., Yuan, B., Kurugollu, F., Anjum, A., and Liu, L., 2022, "Explaining deep neural networks: A survey on the global interpretation methods," Neurocomputing, 513, pp. 165-180.

[2] Decardi-Nelson, B., Alshehri, A. S., Ajagekar, A., and You, F., 2024, "Generative AI and process systems engineering: The next frontier," Computers & Chemical Engineering, 187, p. 108723.

[3] Ashraf, W. M., and Dua, V., 2024, "Data Information integrated Neural Network (DINN) algorithm for modelling and interpretation performance analysis for energy systems," Energy and AI, p. 100363.

[4] Ashraf, W. M., and Dua, V., 2024, "Storage of weights and retrieval method (SWARM) approach for neural networks hybridized with conformal prediction to construct the prediction intervals for energy system applications," International Journal of Data Science and Analytics, pp. 1-15.



5:40pm - 6:00pm

Thermodynamics-informed graph neural networks for transition enthalpies

Roel Leenhouts1, Sebastien Jankelevitch1, Roel Raike1, Simon Müller2, Florence Vermeire1

1Department of Chemical Engineering, KU Leuven, Leuven, Belgium; 2Institute of Thermal Separation Processes, Hamburg University of Technology, Hamburg, Germany

Phase transition enthalpies represent the amount of heat absorbed or released during phase transitions such as melting, vaporization, and sublimation. The prediction of these enthalpies is essential for early-stage process screening and for modeling the temperature dependence of a range of thermodynamic properties. Despite their importance, measuring phase transition enthalpies can be time-consuming and costly, leading to a growing interest in computational methods that can provide reliable predictions. Graph neural networks (GNNs), known for their ability to learn complex molecular representations, have emerged as state-of-the-art tools for predicting various thermophysical properties. Despite their success, GNNs do not inherently obey thermodynamic laws in their predictions.

In this study, we present a multi-task GNN designed to predict vaporization, fusion, and sublimation enthalpies of organic compounds. The GNN employed in this work utilizes a directed message passing architecture. To train the model, we digitized the extensive Chickos and Acree compendium, which encompasses 32,023 experimentally measured transition enthalpy values collected over 135 years [1]. This dataset serves as a comprehensive resource for developing machine learning models capable of accurately predicting phase transition enthalpies across diverse molecular families. In addition, we modified the loss function of the GNN, based on the thermodynamic cycle shown in Equation (1), to impose thermodynamic consistency between the enthalpies of different phase changes. For the thermodynamics-informed constraints, we explored two approaches: soft constraints, which guide the model toward thermodynamically consistent solutions while maintaining flexibility, and hard constraints, which strictly enforce thermodynamic consistency.

(1) ΔHsub = ΔHfus + ΔHvap
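As an illustration of the soft-constraint idea (a minimal sketch, not the authors' code), the thermodynamic cycle in Equation (1) can be added to a multi-task training loss as a penalty on the model's own predictions; the PyTorch function below assumes the network outputs the three enthalpies per molecule and that missing experimental values are masked.

```python
import torch

def multitask_loss(pred, target, mask, lam=0.1):
    """Multi-task MAE with a soft thermodynamic-consistency penalty (Eq. (1)).

    pred, target: tensors of shape (batch, 3), columns ordered as
                  [dH_sub, dH_fus, dH_vap] in kJ/mol.
    mask:         same shape; 1 where an experimental value exists, 0 otherwise.
    """
    # Supervised term: mean absolute error on the measured entries only
    mae = (torch.abs(pred - target) * mask).sum() / mask.sum()

    # Soft constraint: penalise violations of dH_sub = dH_fus + dH_vap on the
    # predictions, so the term is defined even when some targets are missing
    residual = pred[:, 0] - (pred[:, 1] + pred[:, 2])
    return mae + lam * residual.abs().mean()
```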

The results demonstrated that the multi-task GNN achieved mean absolute errors (MAEs) of 11.0 kJ/mol for sublimation, 6.1 kJ/mol for fusion, and 4.6 kJ/mol for vaporization on the test set. Importantly, incorporating a soft constraint improved the thermodynamic consistency without compromising accuracy, while a hard constraint ensured fully consistent predictions but reduced accuracy. Thus, soft-constrained physics-informed neural networks (PINNs) offer an optimal balance between consistency and accuracy for this application, whereas hard-constrained PINNs prioritize thermodynamic fidelity at the cost of predictive accuracy.

In addition, we compared our GNN against SoluteML, a state-of-the-art method for predicting Abraham solute parameters that can be combined with the LSERs published by Chickos and Acree to calculate transition enthalpies [2]. A modest improvement in prediction accuracy was observed. Analyzing the model's performance across molecular subgroups revealed that prediction accuracy increases as the amount of training data for each subgroup grows, highlighting the importance of expanding experimental datasets. Overall, this work demonstrates the potential of thermodynamics-informed GNNs for accurate and physically consistent prediction of phase transition enthalpies.

[1] W. Acree and J. S. Chickos, “Phase transition enthalpy measurements of organic and organometallic compounds and ionic liquids. sublimation, vaporization, and fusion enthalpies from 1880 to 2015,” Journal of Physical and Chemical Reference Data, 2017.
[2] Y. Chung, F. H. Vermeire, H. Wu, P. J. Walker, M. H. Abraham, and W. H. Green, “Group contribution and machine learning approaches to predict Abraham solute parameters, solvation free energy, and solvation enthalpy,” Journal of Chemical Information and Modeling, 2021.

 
4:00pm - 6:00pmT8: CAPE Education and Knowledge Transfer - Session 1
Location: Zone 3 - Aula E036
Chair: Lidija Cucek
Co-chair: Iqbal M Mujtaba
 
4:00pm - 4:20pm

Beyond ChatGMP: Improving LLM generation through user preferences

Fiammetta Caccavale1, Carina L. Gargalo1, Krist V. Gernaey1, Ulrich Krühne1, Alessandra Russo2

1Technical University of Denmark (DTU), Denmark; 2Imperial College London, UK

Prompt engineering - improving the command given to a large language model (LLM) - is becoming increasingly useful for maximizing the performance of the model and, therefore, the quality of the output. However, in certain circumstances, LLM outputs need to be dynamically personalized to specific users in order to be effective. Prompting should, in this case, adapt dynamically to the needs and preferences of the individual users. It is therefore useful to enrich prompt engineering with models that express users' preferences and implement them directly in the prompt. Logic-based machine learning has recently witnessed the development of human-interpretable, robust, and data-efficient algorithms and systems, such as ILASP, capable of learning preference models from data and background knowledge [1, 2]. These systems can play an important role in the development and advancement of digitalization strategies. They can be used, for instance, to learn individual users' preferences without sacrificing the human interpretability of the learned outcomes.

The Technical University of Denmark (DTU) offers a course on Good Manufacturing Practices (GMP). As part of the course, students are required to participate in an audit exercise, in which they interview a fictional company, represented by teachers, about its good manufacturing practices. In spring 2024, the teachers agreed to test the replacement of the physical teacher with ChatGMP, an AI-powered digital audit tool representing the company. The chatbot, tested on a subset of volunteering groups, was considered a viable alternative by both teachers and students, and it is therefore now regularly used in the course. Currently, the prompt given to the chatbot is the question formulated by the students. However, the prompt could be enriched with other learned features, such as the students' personal preferences.

In this work, we demonstrate, as a proof of concept, how ILASP can be used to learn personal preferences of students to tailor and improve the prompts and generate targeted responses. More specifically, three different cases are investigated: (i) detecting missing questions in the audit, i.e., automatically assessing whether the groups have prepared questions regarding two central topics; (ii) checking for students' question repetition, which would suggest that ChatGMP's answer was inadequate, being either incomplete or not understood by the students; and (iii) learning students' preferences, intended both as general rules to achieve a good performance (e.g., questions about a specific topic are to be preferred to maximize students' performance) and as personal preferences of the groups, such as the length and complexity of the questions or the order of the topics.
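As a minimal, hypothetical sketch of the prompt-enrichment step (not the authors' implementation, and independent of the ILASP syntax actually used), learned group preferences could be injected into the chatbot prompt roughly as follows; all names, fields and values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class GroupPreferences:
    """Hypothetical container for preferences learned per student group,
    e.g. distilled from ILASP-learned rules into plain-language constraints."""
    preferred_topics: list
    max_answer_words: int
    asked_before: list = field(default_factory=list)  # question history

def build_prompt(question: str, prefs: GroupPreferences, company_profile: str) -> str:
    """Enrich the raw student question with learned preferences before it is
    sent to the audit chatbot (illustrative only)."""
    repeated = question.strip().lower() in (q.strip().lower() for q in prefs.asked_before)
    parts = [
        f"You represent the following fictional company in a GMP audit: {company_profile}.",
        f"Keep the answer under {prefs.max_answer_words} words.",
        f"Where relevant, steer the group toward these topics: {', '.join(prefs.preferred_topics)}.",
    ]
    if repeated:
        parts.append("The question repeats an earlier one: say so and add the missing detail.")
    return "\n".join(parts) + f"\n\nStudent question: {question}"

prefs = GroupPreferences(["cleaning validation", "documentation"], 120,
                         asked_before=["How do you store raw materials?"])
print(build_prompt("How do you store raw materials?", prefs, "a sterile filling facility"))
```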

References

[1] Mark Law, Alessandra Russo, and Krysia Broda. Logic-Based Learning of Answer Set Programs. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11810 LNCS:196–231, 2019.

[2] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.



4:20pm - 4:40pm

Teaching Automatic Control for Chemical Engineers

Miroslav Fikar, Lenka Galčíková

STU in Bratislava, Slovak Republic

In this paper, we present our recent advances and achievements in the automatic control course within the engineering study programme in cybernetics at the Faculty of Chemical and Food Technology, STU in Bratislava. We describe the course elements and procedures used to improve the teaching and learning experience. We discuss the online learning management system and various teaching aids, such as e-books with/without solutions to practice examples, computer-generated questions, video lectures, and the choice of computation and simulation tools.
The course is taught in person to about 20 students, but it relies heavily on online tools and methods. Starting from this academic year, a flipped design of the course was introduced. We describe our experience in preparing this change and some initial feedback from the students.
The course concentrates on input/output linear approximation of processes in chemical and food technology and discusses poles/zeros, process dynamics, frequency- and time-domain methods, closed-loop systems, design and tuning of PID controllers, and root-locus analysis.
Although the majority of laboratory exercises are performed using simulation tools, we also provide an easy-to-use laboratory device that can be given to all students to work with, either at school or at home.



4:40pm - 5:00pm

Variations on the Flipped Class: The good, the bad and the surprising

Daniel Lewin1, Cleo Kontoravdi2, Nilay Shah2, Abigail Barzilai1

1Technion, Israel; 2Imperial College London, U. K.

Extended Abstract

In the 2023-24 academic year, the lead author of this paper was on sabbatical at the Department of Chemical Engineering of Imperial College London, where he taught three undergraduate courses, all using variations of the “flipped class” paradigm:

  1. The “good” course, on numerical methods, used flipping in each of the ten weeks of the course: an online lesson, which students needed to complete in advance of a weekly three-hour class meeting, mostly focused on problem-solving exercises by the students themselves with assistance from the course staff. This was the implementation that most closely followed the flipped class paradigm implemented at the Technion (Lewin and Barzilai, 2022, 2023).
  2. The “bad” course consisted of six weeks covering process dynamics, taught by conventional lecturing with no hands-on exercises by the students, followed by five weeks covering process control, taught with a flipped approach, in which each week students completed an online lesson in advance of weekly class meetings of three hours in total, mostly focusing on problem-solving exercises by the students themselves assisted by the course staff.
  3. The “surprising,” one-week workshop was an integral part of the process design course. It involved three online lessons completed by the students in advance of class activity, with each online lesson associated with a hands-on problem-solving class meeting, to a total of seven contact hours held during a single week of the course.

This paper describes these three implementations in detail and presents and analyzes the responses from student surveys intended to ascertain students' perceptions of their satisfaction with the flipped class approach and the degree to which they achieved mastery of the taught materials. The actual learning outcomes are also analyzed: the exam results in the case of the first two courses, and the students' performance on their design project for the last course. The paper ends with conclusions and recommendations on how the flipped class method should be implemented for success, depending on the classroom situation for which it is intended.

Keywords: Chemical engineering education, numerical methods, process control, process design, heat exchanger network synthesis, flipped classroom, active learning.

References

Lewin, D. R. and A. Barzilai (2022). “The Flip Side of Teaching Process Design and Process Control to Chemical Engineering Undergraduates – and Completely Online to Boot,” Education for Chemical Engineers, 39, 44-57.

Lewin, D. R. and A. Barzilai (2023). “A Hybrid-Flipped Course in Numerical Methods for Chemical Engineers,” Comput. Chem. Eng., 172, 108167.



5:00pm - 5:20pm

Integrating Project-Based Learning in Chemical Thermodynamics Education

Marie-Noelle Dumont, Antoine Rouxhet, Nathalie Job, Grégoire Léonard

Université de Liège, Belgium

This Chemical Thermodynamics course is designed for Bachelor of Engineering students as well as first-year master's students in Chemical Engineering and Materials Science.

The approach is not about introducing innovative activities but rather re-phasing existing activities based on the three-phase competency evaluation model described by Carette (2007) and cited by Dierendonck (2014):

  1. Phase 1: Students are given a project, which is a complex task requiring the selection and combination of several elementary tasks they have performed during the tutorials (TDs).
  2. Phase 2: The project remains, but the complex task is broken down into elementary tasks. Each week, at the end of the TD, students receive explicit instructions that allow them to progress in their project using the knowledge they have just acquired. These instructions will enable students to complete the overall complex task. However, for each elementary task, it is up to the student to determine the procedure to implement from those they are supposed to know.
  3. Phase 3: During the TDs, students are presented with a series of simple, decontextualized tasks, which are the exercises proposed. These tasks correspond to the elementary procedures that will need to be mobilized to accomplish the complex task from Phase 1.

The TDs will systematically follow the same structure:

  • Theoretical reminders related to the exercises performed, constructed with the help of students via interactive tools like Wooclap (or others).
  • A specific exercise solved on the board, with each step detailed aloud to allow students to reproduce the reasoning. Pauses are planned during the resolution to enable students to develop their own reasoning, followed by a sharing of ideas.
  • Additional exercises to be solved in pairs, with the teacher available to answer questions and provide hints.
  • Definition of the part of the project that can already be solved thanks to the exercises performed. Students will be invited to solve it for the following week.

The course is divided into two parts:

  1. The first part is dedicated to methods for evaluating the physical and thermodynamic properties of pure substances. It also reviews the main methods for predicting thermodynamic properties and the major families of equations of state.
  2. The second part of the course focuses on these methods for mixtures of different substances. It also covers the description of multiphase equilibria of mixtures and the thermochemical quantities characterizing chemical reactions.

Projects are presented at the beginning of each part and will be subject to formative evaluation at mid-term. The goal is to lead students to a better understanding of various theoretical concepts through the implementation of an authentic project. Additionally, the aim is to support their motivation by specifying the different tasks to be accomplished each week.

References

Carette, V. (2007). L’évaluation au service de la gestion des paradoxes liés à la notion de compétence. Mesure et évaluation en éducation, 30(2), 49-71.

Dierendonck C. et Fagnant A. (2014) Approche par compétences et évaluation à large échelle : deux logiques incompatibles ?



5:20pm - 5:40pm

Exergy in Chemical Engineering Education

Thomas Alan Adams II

Norwegian University of Science and Technology (NTNU), Department of Energy and Process Engineering, Trondheim, Norway

Exergy is a very useful process systems engineering analysis tool, particularly as an aid for process design, synthesis, modelling, and performance benchmarking. At its core, exergy is a form of thermodynamic value that takes into account both the quantity and quality of energy. Although many engineers want to learn about exergy, they often find the topic impenetrable and soon discover that computing exergy values can be too time-consuming to be worthwhile. However, my colleagues and I have developed new educational materials that make it drastically easier not only to learn the concept but also to actually apply it for something useful, gaining insights into your process that you could not easily see without it.

In this talk, I will explain the concept of exergy in plain language, focusing on the thermomechanical (heat/pressure/phase) and chemical (chemical bond/phase) forms of exergy that are most relevant to chemical engineers. I will demonstrate how engineers can use the new book Exergy Tables which contains a compendium of pre-computed exergy values for thousands of chemicals at a wide variety of temperatures and pressures using a rigorous and well-defined referencing system. Moreover, all exergy computation results using Exergy Tables can be directly compared between different applications, researchers, and studies because they all use the same set of reference standards, which is particularly important for chemical exergies relevant for chemical engineering applications. In other words, the book does all the hard parts of the computation up front so that engineers can very quickly apply the results to real problems and immediately see the benefits.

In the talk, I will show how to use the book to compute exergy quickly and easily for substances and systems at high and low temperatures, high and low pressures, and for various chemicals. I will discuss how this material can be integrated into chemical engineering education (especially unit operation and process design) by providing many in-class examples for applications such as fuel combustion, steam and power generation, CO2 capture from power plants, CO2 sequestration, direct air capture, heat exchanger network design, water treatment systems, work-heat integration, and others. I will discuss how to use exergy numbers as a part of systems analyses through methods such as visualizing exergy flows, computing exergy-based efficiencies or other key performance indicators, utility and capital cost estimation, value proposition, and thermodynamic benchmarking, all in an educational context. The result is that with a relatively small amount of effort, you can give your students a powerful tool to use throughout their chemical engineering education.
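As a minimal illustration of the kind of in-class calculation this enables (assuming the standard definition of specific thermomechanical exergy, ex = (h − h0) − T0(s − s0), and using CoolProp purely as a stand-in for the pre-computed values in Exergy Tables):

```python
from CoolProp.CoolProp import PropsSI

T0, P0 = 298.15, 101325.0        # dead-state temperature [K] and pressure [Pa]

def physical_exergy(fluid, T, P):
    """Specific thermomechanical exergy ex = (h - h0) - T0*(s - s0), in kJ/kg."""
    h, s = PropsSI("H", "T", T, "P", P, fluid), PropsSI("S", "T", T, "P", P, fluid)
    h0, s0 = PropsSI("H", "T", T0, "P", P0, fluid), PropsSI("S", "T", T0, "P", P0, fluid)
    return ((h - h0) - T0 * (s - s0)) / 1000.0

# Example: superheated steam at 673 K and 40 bar relative to ambient conditions
print(f"{physical_exergy('Water', 673.0, 40e5):.0f} kJ/kg")
```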



5:40pm - 6:00pm

Closing the loop: embedded customized coding courses and chatbots in a virtual lab to teach bioprocesses

Fiammetta Caccavale, Carina L. Gargalo, Krist V. Gernaey, Ulrich Krühne

Technical University of Denmark (DTU), Denmark

Current progress in digitalization has led to a wide interest in learning more from available data. Advanced data analytics can be achieved with commercially available software; however, learning to program allows for more flexibility and, ultimately, more freedom in tailoring the research. Among programming languages, Python is one of the most requested. However, integrating programming into the curricula might imply fundamental restructuring [1].

To train engineers willing to take on the challenge, we previously implemented sPyCE [2], an open-source series of Python courses tailored to chemical engineers. These courses cover topics such as the design of chemical reactors, stoichiometric relationships, data pre-processing, data analysis, and data science. We also implemented FermentAI [3], a chatbot trained to answer questions about fermentation processes. To intensify these previous efforts, both to create a pedagogical framework for teaching programming to (bio)chemical engineers and to provide students with the opportunity to ask questions in a semi-private and tailored environment, we explore the integration of sPyCE and FermentAI into the BioVL platform [4], a virtual laboratory for teaching (bio)processes.

The main goal of this work is to enable students to (i) learn about bioprocesses and, simultaneously, (ii) learn how to model them, and (iii) ask questions to a chatbot, phrasing their doubts about the teaching content. We believe that the Python tutorials, given the stimulating material used along with gamification, will be an engaging and insightful way to learn to model bioprocesses. Moreover, the chatbot will enable students to ask for clarification right away without any time delay, facilitating their learning progress.
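To give a flavour of the kind of exercise such tutorials could contain (an illustrative sketch, not taken from sPyCE or BioVL), a batch fermentation with Monod growth kinetics can be modelled and solved in a few lines of Python:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Yxs = 0.4, 0.5, 0.5            # 1/h, g/L, g biomass per g substrate

def batch(t, y):
    X, S = y
    mu = mu_max * S / (Ks + S)             # Monod specific growth rate
    return [mu * X, -mu * X / Yxs]         # dX/dt, dS/dt

sol = solve_ivp(batch, (0.0, 24.0), [0.1, 10.0], dense_output=True)
t = np.linspace(0.0, 24.0, 100)
X, S = sol.sol(t)
print(f"final biomass {X[-1]:.2f} g/L, residual substrate {S[-1]:.2f} g/L")
```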

In a small-scale test, we evaluated how the tools are integrated into the educational platform and conducted quantitative interviews with the volunteering students. The results show positive feedback regarding the usefulness of implementing the Python tutorials in BioVL. Moreover, students appreciate the possibility of having a chatbot available to ask questions and clarify doubts. All the collected feedback is used to further improve the platform, with the goal of providing as seamless an experience as possible. Finally, it was suggested that further development of other tailored chatbots could facilitate the students' learning and progress.

References

[1] Hermann J. Feise and Eric Schaer. “Mastering digitized chemical engineering”. In: Education for Chemical Engineers 34 (2021), pp. 78–86. ISSN: 1749-7728. DOI: https://doi.org/10.1016/j.ece.2020.11.011. URL: https://www.sciencedirect.com/science/article/pii/S1749772820300622.

[2] Caccavale, F., Gargalo, C. L., Gernaey, K. V., & Krühne, U. (2023). SPyCE: A structured and tailored series of Python courses for (bio) chemical engineers. Education for Chemical Engineers, 45, 90-103.

[3] Caccavale, F., Gargalo, C. L., Gernaey, K. V., & Krühne, U. (2024). FermentAI: Large Language Models in Chemical Engineering Education for Learning Fermentation Processes. In Computer Aided Chemical Engineering (Vol. 53, pp. 3493-3498). Elsevier.

[4] Caño de las Heras, S., Gargalo, C. L., Caccavale, F., Kensington-Miller, B., Gernaey, K. V., Baroutian, S., & Krühne, U. (2022). From Paper to web: Students as partners for virtual laboratories in (Bio) chemical engineering education. Frontiers in Chemical Engineering, 4, 959188.

 
4:00pm - 6:00pmT9: PSE4Food and Biochemical - Session 2
Location: Zone 3 - Aula D002
Chair: Dimitrios I. Gerogiorgis
Co-chair: Alexandros Koulouris
 
4:00pm - 4:20pm

A generalised optimization approach for the characterization of non-conventional streams

Michaela Vasilaki1, Effie Marcoulaki2, Antonis Kokossis1

1Department of Process Analysis and Plant Design, School of Chemical Engineering, National Technical University of Athens, Athens, Greece; 2System Reliability and Industrial Safety Laboratory, National Center for Scientific Research “Demokritos”, Athens, Greece

Biorefinery facilities are a sustainable alternative to fossil fuel production, in which valuable bio-based materials such as plastics, chemicals and fuels are produced through the conversion of biomass. Green refineries are highly adjustable due to the different combinations of technologies, platforms and substrates available. Appropriate integration of biorefinery systems into industrial facilities would significantly contribute to a more sustainable energy future. However, the adaptability of such facilities complicates process design, often leading to poor decision-making, all the more so because biomass feedstocks are inherently diverse in origin, geographic location and storage conditions. Available tools and flowsheeting techniques are often inadequate for establishing the thermodynamic properties of such complex organic mixtures. Hence, it is essential to develop a generalised approach capable of efficiently and accurately profiling non-conventional material and waste streams.

The purpose of this study is to provide standardized models for the chemical characterization of complex streams, ensuring the necessary adaptations while accounting for the differences in biomass types and forms. This approach provides significant insight into biomass profiling, while allowing the model to be used as a starting point for the design and modelling of biorefinery-associated technologies such as hydrothermal liquefaction (HTL). HTL can accommodate a wide range of feedstocks (solid, liquid or even sludge) regardless of their moisture content, thereby eliminating the need for energy-intensive pre-treatment and making it a viable option for biomass conversion.

This paper conducts a comprehensive analysis of relevant literature to develop an efficient biomass characterization model. Several datasets are gathered and examined to establish a valid representation of the mixture, according to industry accepted standards and laboratory protocols. For reliable property estimation, correlations of key biomass properties are obtained from both computational models and experimental measurements.

A generic mathematical programming approach is followed, using MINLP techniques to efficiently characterize HTL-associated material streams (e.g., biomass, biocrude). The problem is formulated with:

Variables that include

  • Chemicals in available databases
  • Composition in the solution
  • Property estimates

Specification parameters that include

  • Substrate classification (e.g. breakdown of proteins, sugars, lipids etc.)
  • Thermodynamic properties (e.g. densities, HHV, LHV, viscosity etc.)
  • Elemental and stoichiometric composition (e.g. ratios of C:H:O:N:S:P)
  • Experimental measurements (e.g. moisture content, fixed carbon etc.)

Objective functions featuring

  • A vector stream with suitable matching properties and relevance to the nature of the substrate

Integer cuts are implemented to produce classes of solutions with small deviations, resulting in alternative populations of components that match the optimization requirements. Integer cuts provide (i) multiple feasible solution points that can establish key chemical components, (ii) broader potential for data manipulation, and (iii) increased flexibility in choosing the appropriate mixture profile.
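A schematic sketch of this kind of formulation (not the authors' model) is shown below in Pyomo: binary variables select candidate compounds, continuous variables give their fractions, properties are matched against measured targets, and an integer cut excludes each accepted component set before the next solve. The candidate compounds, property values and solver choice are illustrative assumptions.

```python
import pyomo.environ as pyo

candidates = ["glucose", "glycerol", "palmitic_acid", "phenol"]
# Illustrative (not measured) property data: carbon mass fraction and HHV [MJ/kg]
prop_data = {"glucose":       {"C": 0.40, "HHV": 15.6},
             "glycerol":      {"C": 0.39, "HHV": 19.0},
             "palmitic_acid": {"C": 0.75, "HHV": 39.1},
             "phenol":        {"C": 0.77, "HHV": 32.4}}
targets = {"C": 0.55, "HHV": 25.0}          # measured stream properties (illustrative)

m = pyo.ConcreteModel()
m.y = pyo.Var(candidates, domain=pyo.Binary)        # compound selected?
m.x = pyo.Var(candidates, bounds=(0, 1))            # mass fraction in the surrogate mixture
m.link = pyo.Constraint(candidates, rule=lambda m, c: m.x[c] <= m.y[c])
m.sum_x = pyo.Constraint(expr=sum(m.x[c] for c in candidates) == 1)
m.card = pyo.Constraint(expr=sum(m.y[c] for c in candidates) <= 3)

def deviation(m):   # relative squared mismatch between estimated and target properties
    return sum(((sum(prop_data[c][p] * m.x[c] for c in candidates) - targets[p])
                / targets[p])**2 for p in targets)
m.obj = pyo.Objective(rule=deviation, sense=pyo.minimize)

solver = pyo.SolverFactory("bonmin")                # any locally available MINLP solver
m.cuts = pyo.ConstraintList()
for k in range(3):                                  # three alternative mixture profiles
    solver.solve(m)
    chosen = [c for c in candidates if pyo.value(m.y[c]) > 0.5]
    print(k, chosen, round(pyo.value(m.obj), 4))
    # Integer cut: exclude exactly this component selection in the next solve
    m.cuts.add(sum(m.y[c] for c in chosen)
               - sum(m.y[c] for c in candidates if c not in chosen) <= len(chosen) - 1)
```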



4:20pm - 4:40pm

Integrated hybrid modelling of lignin bioconversion

Sidharth Laxminarayan, Lily Cheung, Fani Boukouvala

Georgia Institute of Technology, United States of America

As sustainability gains global importance, bio-manufacturing pipelines have attracted more attention [1]. Of particular interest is the valorization of lignocellulosic biomass materials. Studies have demonstrated that Pseudomonas putida can convert lignin into cis,cis-muconic acid, a bioplastics precursor [2]. This study focuses on the conversion of catechol, a lignin derivative, to muconic acid by P. putida, with the aid of glucose as a growth-promoting substrate.

Cells are extremely complex, with numerous reaction pathways, intermediates, products, and regulatory networks. Precise models of cells are a necessity for optimizing performance and controlling bioprocesses. Current bioprocess phenomenological models, similar to reactor kinetic models, bury information about biomass heterogeneity and intracellular reactions within empirical parameters based on biological intuition [3]. On the other hand, purely machine learning (ML) models have been shown to capture the nonlinear complexity of bioprocesses but struggle with extrapolation and physical interpretability [3]. Experimental datasets are often sparse and noisy, resulting in poor development and calibration of both types of models. To leverage the physical constraints of phenomenological models and the flexibility and practicality of ML models, hybrid models have been proposed [3]. In this work, an embedded hybrid modelling structure is explored wherein parameters such as growth and consumption rates are modelled using ML models. The hybrid modelling approach helps capture the complex relationships between the external metabolites and the bacterial physiology.

A time-variant parameter estimation (TV-PE) technique is employed to train the ML component of the hybrid model. A parameter estimation strategy is explored wherein the errors in both the state space and the derivative space are minimized to reinforce the learning of the underlying physical behavior. This method is compared to the traditional approach of minimizing the error in the state space only. Two variations of the hybridization structure are also explored: (i) a sequential method, where the TV-PE and ML model training are performed separately, and (ii) an integrated method, where the two steps occur simultaneously. Combinations of these methods are evaluated to determine which can best capture the physics of the bioprocess case study under interpolating and extrapolating scenarios. The hybrid model was shown to consistently outperform purely phenomenological and black-box models across various levels of data availability and noise.
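A minimal sketch of such an embedded hybrid structure (illustrative only, not the authors' implementation) is shown below: a tiny neural network provides the specific growth rate inside phenomenological mass balances, and its weights are fitted by minimising the state-space error. The measurement data are synthetic placeholders, and the derivative-space and integrated variants are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def mu_nn(S, w):
    """Tiny one-hidden-layer network: substrate concentration -> specific growth rate."""
    w1, b1, w2, b2 = w[0:3], w[3:6], w[6:9], w[9]
    h = np.tanh(w1 * S + b1)
    return np.log1p(np.exp(w2 @ h + b2))        # softplus keeps the rate non-negative

def hybrid_rhs(t, y, w, Yxs=0.5):
    X, S = y
    mu = mu_nn(S, w)
    return [mu * X, -mu * X / Yxs]              # phenomenological mass balances

def state_space_loss(w, t_meas, X_meas, S_meas, y0):
    sol = solve_ivp(hybrid_rhs, (t_meas[0], t_meas[-1]), y0, t_eval=t_meas, args=(w,))
    X_sim, S_sim = sol.y
    return float(np.mean((X_sim - X_meas)**2 + (S_sim - S_meas)**2))

# Synthetic "measurements" used only to make the sketch runnable
t_meas = np.linspace(0.0, 20.0, 11)
X_meas = 0.1 * np.exp(0.2 * t_meas)
S_meas = np.clip(10.0 - (X_meas - 0.1) / 0.5, 0.0, None)

res = minimize(state_space_loss, x0=0.1 * np.ones(10),
               args=(t_meas, X_meas, S_meas, [0.1, 10.0]), method="Nelder-Mead")
print("fitted state-space loss:", round(res.fun, 4))
```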

Moreover, hybrid models lend themselves to interpretability through sensitivity analyses on the ML components, providing qualitative insight for biologically informed empirical models of specific phenomena. Accurate models will aid process design and optimization and will allow for the implementation of more sound and sophisticated control strategies. These are necessary steps for expediting the widespread scale-up and commercialization of bio-manufacturing processes.

References:

[1] J. Wesseler, et al., “Measuring the Bioeconomy: Economics and Policies,” Annu. Rev. Resour. Econ., 2017, pp. 275–298, Oct. 2017.

[2] N.-Z. Xie, et al., “Biotechnological production of muconic acid: current status and future prospects,” Biotechnol. Adv., vol. 32, no. 3, pp. 615–622, May 2014.

[3] A. Tsopanoglou, et al., “Moving towards an era of hybrid modelling: advantages and challenges of coupling mechanistic and data-driven models for upstream pharmaceutical bioprocesses,” Curr. Opin. Chem. Eng., vol. 32, p. 100691, Jun. 2021.



4:40pm - 5:00pm

Plant-wide Modelling of a Biorefinery: Microalgae for the Valorization of Digestate in Biomethane plants

Davide Carecci1, Elena Ficara1, Gianni Ferretti1, Alberto Leva1, Ignazio Geraci2

1Politecnico di Milano, Italy; 2A2A S.p.A.

Microalgae cultivation on liquid digestate from the anaerobic co-digestion of agricultural feedstocks is an interesting option for digestate nutrient removal (preventing soil/groundwater pollution) and resource recovery, coupled with value-added biomass production. Indeed, some resilient microalgae strains (e.g., green algae such as Scenedesmus and Chlorella) have been found to exhibit plant bio-stimulating properties, making them particularly suitable for integrating algal nutrient recovery into a rapidly growing market.

Although the technical feasibility of the process has already been demonstrated in the literature at small pilot scale, to the best of the authors' knowledge no work has addressed overall process modelling for subsequent scenario analysis and techno-economic design optimization. Indeed, the aim of the authors is also to open novel discussions on plant-wide model implementation for this type of biotechnology.

In this work, a first-principles plant-wide model of the process was developed and is described. Two well-established mechanistic, grey-box, physico-chemical and biological three-phase (liquid, solid, gas) models for anaerobic digestion (IWA ADM1) and algae-based bioremediation processes (ALBA) were considered and modified with the necessary equations and extensions to develop a coherent and comprehensive Copp-like interface between the state variables of the two systems, preserving chemical oxygen demand (COD) and atomic mass conservation.

In particular: (i) ADM1 hydrolysis was modified to cover the co-digestion of agro-zootechnical substrates (e.g., maize silage, cattle slurry, cattle dung); (ii) inorganic phosphorus dynamics were introduced in ADM1; (iii) the precipitation/dissolution of the main salts, as well as the non-ideality of the multiple acid-base equilibria, were included in both biological models; (iv) the ALBA model was extended with a mechanistic sub-model simulating the evolution of the raceway pond temperature (also under a greenhouse), including the possibility of simulating closed-loop feedback control to maintain the culture within strain-specific optimal temperature ranges.

The resulting system is described by a highly non-linear, stiff DAE system of index 6. It also entails some non-smooth equations due to the introduction of sign operators in the salt precipitation/dissolution equations (even though those can easily be smoothed). Due to its complexity, the whole plant was implemented in OpenModelica (an open-source software). The implicit, variable-step DASSL DAE solver was used as the integrator.

Open-loop scenario analysis for different upstream co-digester designs and operating conditions was carried out to assess the impacts on the downstream microalgae outputs. Yearly dynamic trajectories of reactor temperature, bacteria concentration/microalgae productivity, and nutrient removal are reported.

The results highlighted the importance of proper biorefinery design (with particular care for phosphorus limitation) and yet a noteworthy robustness of the system's performance. The use of the model can facilitate: (i) a more realistic assessment of the techno-economic feasibility of the process and (ii) the design of classical and advanced closed-loop control strategies.

Further work involves: (i) the experimental validation of the model for uncertain parameter estimation and (ii) the combination with economic analysis to formulate the techno-economic optimization problem of the process design.



5:00pm - 5:20pm

Control-oriented modelling and parameter estimation for full-scale anaerobic co-digestion

Davide Carecci1, Arianna Catenacci2, Alberto Leva1, Gianni Ferretti1, Elena Ficara2

1DEIB Department, Politecnico di Milano, Italy; 2DICA Department, Politecnico di Milano, Italy

To match the growing demand for biomethane production, anaerobic digesters need optimal management of the input diet. In many cases, co-digestion outcompetes mono-digestion, but it is far more complicated to govern, also considering the very limited availability of measurements in full-scale plants, especially agro-zootechnical ones. The state-of-the-art model of the process, the IWA ADM1, is very useful when it comes to understanding and optimizing the process, but due to its complexity and over-parameterization, the literature agrees on its structural and practical unobservability in real-life conditions. Considering the well-known benefits of advanced model-based control, a structurally identifiable reduced-order model called AM2 was derived. Typically, the uncertain parameters of the latter are first identified by minimizing the simulation error over synthetic ADM1 data, and later refined on real-plant data. Nevertheless, (i) the AM2 model is not suitable for describing the process in the case of coarse/slowly-biodegradable co-substrates, and (ii) data synthetically generated or collected without inhibition-active transients cannot be considered fully informative of the non-linearities of the system, so that at least model adaptation by online recursive parameter estimation would be required.

Few works in the literature derive a control-oriented model suitable for agro-zootechnical co-digestion. In addition, to the best of the authors' knowledge, no systematic, comprehensive and robust procedures for offline/online uncertain parameter identification are available.

The novelty of this work is to present: (i) a performance comparison between two reduced-order models designed as extensions of the AM2 (hereafter AM2HN and AM2HNtan); (ii) the exploitation of a very informative (yet realistic for full-scale applicability) dataset including transients between different diets and batch activity/biomethane potential tests (initialized with a tailor-made recursive tool); (iii) a parameter subset selection (PSS) scheme based on sensitivity and collinearity analysis, used to select only the identifiable parameters for offline identification; (iv) a novel approach to design regularization terms for the online recursive, "moving-horizon"-style update of time-varying parameters.

The uncertain parameters of AM2HN and AM2HNtan were first estimated using synthetic data generated by an extended ADM1 (hereafter agri-AcoDM) with active acetate and ammonia inhibitions. Only the identifiable parameters indicated by the PSS were then corrected on real-plant data.
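For illustration, a collinearity-based subset selection step in the spirit of the PSS scheme could look as follows (the sensitivity matrix here is random placeholder data and the collinearity threshold is indicative only, not the values used in this work):

```python
import numpy as np
from itertools import combinations

def collinearity_index(S_sub):
    """gamma_K = 1 / sqrt(smallest eigenvalue of S_norm^T S_norm) for a parameter subset."""
    S_norm = S_sub / np.linalg.norm(S_sub, axis=0)     # normalise each sensitivity column
    lam_min = np.linalg.eigvalsh(S_norm.T @ S_norm).min()
    return 1.0 / np.sqrt(max(lam_min, 1e-12))

rng = np.random.default_rng(0)
S = rng.normal(size=(200, 8))     # (n_outputs x n_times) rows, one column per parameter
names = [f"p{i}" for i in range(8)]

# Keep subsets whose collinearity index stays below an indicative threshold (~10-15)
ok = [sub for k in range(2, len(names) + 1)
      for sub in combinations(range(len(names)), k)
      if collinearity_index(S[:, list(sub)]) < 15.0]
best = max(ok, key=len) if ok else ()
print("largest identifiable subset:", [names[i] for i in best])
```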

Reasonably, biogas flowrate and composition were the only "continuous" measurements considered. Some spot/"discontinuous" measurements of volatile fatty acids (VFAs), pH and total ammoniacal nitrogen (TAN) were also considered. The batch tests were included in the dataset only for the agri-AcoDM parameter estimation, as they can realistically be performed economically only a few times per year on full-scale inoculum.

Results show narrow 95% confidence intervals from Fisher Information Matrix (FIM) evaluation and good fitting performance. As both the high-fidelity and the reduced-order models fail to fully capture a VFA accumulation in transient operation, time-varying parameter updating was tested and proved effective while limiting the risk of overfitting/catastrophic forgetting.

The developed reduced-order models and parameter identification schemes can be exploited in the design/testing of state observers and, eventually, in the design of a nonlinear model predictive control scheme (NMPC) that can be realistically used to optimally control full-scale agricultural anaerobic co-digestion plants.



5:20pm - 5:40pm

Process design for a novel fungal biomass valorisation approach

Matteo Gilardi1, Theresa Rücker1, Bernd Wittgens1, Thomas Brück2

1SINTEF AS, Norway; 2Technical University of Munich, Germany

Despite the considerable potential of biomass as a renewable resource, only a small proportion is currently being converted effectively into bio-based materials. New technologies are essential to build a robust and economically viable bioeconomy. A key target is to valorize available and underutilized raw materials, in particular wastes. In this context, the VALUABLE project [1] aims at demonstrating an innovative platform for the valorisation of Aspergillus niger biomass derived from the microbial production of citric acid. This biomass, whose market size is expected to reach 3.3 million tons by 2028, is currently sold as low-value animal feed (300 €/t). The focus is the production of multiple value-added products, including yeast oil as a greener alternative to palm oil and non-animal-derived chitosan. Yeast oils will be exploited in cosmetics (i.e., stearates) and alkyd resins (coatings), while non-animal-derived chitosan is a multifunctional, environmentally friendly and vegan-friendly component for applications in food, agriculture, medicine, pharmaceuticals, and cosmetics.

The process can be divided into six main steps. The approach initially involves an enzymatic solubilisation of the fungal biomass. The resulting sugar-rich aqueous solution is conveyed to an on-site acetic acid production. In the main fermenter, a lipid-rich yeast biomass is grown consuming residual glucose and acetic acid as carbon sources. The produced lipids are released in the following hydrolysis step, and the product is separated into three phases: solid, aqueous phase, and oil phase. On the other hand, the non-solubilized, chitin-rich fraction from the enzymatic solubilization step is deacetylated by a combination of enzymatic and chemical treatment to form chitosan.

Data-driven sub-models for the individual units were developed as plug-ins and integrated into COCO-COFE, a CAPE-OPEN process simulator, to characterize the mass and energy balance of the plant. Adjustable coefficients were tuned to experimental data collected within the project under different operating conditions, covering the temperature, residence time, and enzyme content ranges of interest for commercial applications. A representative chemical formula for each compound, including complex structures of bio-organisms such as enzymes and cell mass, was defined based on both in-house data and previous literature to close the mass balance in each conversion step. The process model was used to determine the Key Performance Indicators (KPIs) for both productivity and energy consumption, providing a comprehensive overview of the process' efficiency. The most important finding is that around 180 kg of triglyceride oils and 130 kg of chitosan can be produced from 1 ton of Aspergillus niger. The total enzyme consumption is 130 kg/ton of fungus. In addition, the on-site production of acetic acid through fermentation reduces the need for external acetic acid by 45%.

These estimates illustrate the potential of this innovative approach to producing yeast oil and chitosan. This work establishes the foundational framework necessary for conducting a comprehensive Techno-Economic Analysis (TEA) and Life Cycle Assessment (LCA) of this process, ensuring a thorough evaluation of its economic viability and environmental impact.

[1]: https://valuable-project.eu/



5:40pm - 6:00pm

Valorization of suspended solids from wine effluents through hydrothermal liquefaction: a sustainable solution for residual sludge management

Carlos Eduardo Guzmán Martínez1, Sergio Iván Martínez Guido1, Valeria Caltzontzin Rabell1, Salvador Hernández2, Claudia Gutiérrez Antonio1

1Facultad de Ingeniería, Universidad Autónoma de Querétaro, Mexico; 2Departamento de Ingeniería Química, Universidad de Guanajuato, Mexico

The growing concern over the environmental impacts of the wine industry has driven the search for sustainable technologies to manage its waste, particularly the residual sludge generated during effluent treatment. These sludges, rich in organic matter, represent a significant source of pollution if not properly treated. However, their energy content offers a valuable opportunity to turn this environmental liability into an asset through innovative valorization processes. In this context, hydrothermal liquefaction (HTL) emerges as a promising technology. This process, conducted under subcritical high-temperature and pressure conditions, allows the direct conversion of residual sludge into high-energy-value liquid biofuels. Unlike other treatment methods, HTL can process wet biomass without needing prior drying, making it particularly suitable for managing sludge from wine effluents.

Thus, this research aims to evaluate the conversion of residual sludge derived from wine effluent treatment into biofuels through a hydrothermal liquefaction simulation, integrating this process into a sustainable biorefinery for levulinic acid and bioethanol production. The methodology considers the determination of the composition and treatment of the wine effluent in order to define a case study. The biorefinery processes, as well as the HTL, are designed and simulated in Aspen Plus and evaluated both technically and economically. As a result, levulinic acid, sustainable aviation fuel (as a product derived from HTL and subsequent bio-oil hydrogenation), bioethanol, ethylene glycol, and electrical energy are produced. In addition, the biorefinery reduces the Chemical Oxygen Demand (COD) of the effluent by 99%. In conclusion, valorizing suspended solids from wine effluents through hydrothermal liquefaction is technically and economically feasible. Moreover, this strategy not only provides an efficient solution for waste management, but also contributes to the transition toward a circular economy by turning waste into energy-rich and value-added products. This research highlights the potential to reduce the wine industry's environmental footprint while generating a renewable energy source.

 
8:00pm - 9:30pmConcert at Saint Bavo's Cathedral
Location: Saint Bavo's Cathedral
Date: Tuesday, 08/July/2025
8:00am - 8:30amRegistration
Location: Entrance Hall - Cafetaria
8:30am - 10:30amT1: Modelling and Simulation - Session 4
Location: Zone 3 - Room E031
Chair: Brahim Benyahia
 
8:30am - 8:50am

Wind Turbines Power Coefficient estimation using manufacturer’s information and real data

Carlos Gutiérrez, Daniel Sarabia, Alejandro Merino

Universidad de Burgos, Spain

The dynamic modelling of wind turbines and their simulation (individually or grouped into wind farms) is a very useful tool for studying their behaviour, developing and testing turbine and/or farm control strategies, devising optimal management strategies for wind farm setpoints, etc. One of the key elements of physical models of wind turbines is the power coefficient Cp(λ, β), which acts as an efficiency in the extraction of power from the wind. In modern pitch-controlled turbines with variable rotor speed, this coefficient depends mainly on the wind speed v, the rotor angular velocity ωr and the pitch angle β of the blades. Unfortunately, this coefficient is often unknown a priori, as it does not usually appear in the information provided by manufacturers.

The power coefficient is often modelled as the exponential curve (1), described in [1], where λ = ωr R/v is the tip speed ratio and R is the blade radius.

Ĉp(λ, β) = C1 (C2/λi − C3 β − C4 β^C5 − C6) exp(−C7/λi)    (1)

1/λi = 1/(λ + C8 β) − C9/(β³ + 1)

This article describes a methodology for obtaining the power coefficient (1) of different commercial wind turbine models from the power curve provided by the manufacturer, which gives the theoretical power Pj that the wind turbine is able to produce for each wind speed vj. For this purpose, an optimization problem (2) is solved to minimize the difference between the theoretical power and the power calculated through the power coefficient, where Ck (k = 1,…,9) are the coefficients of equation (1) to be estimated and βj is the pitch angle for each wind speed vj, with both Ck and βj acting as decision variables.

min over {Ck, βj}  Σj (Pj − P̂(λj, βj))²

s.t.:  λj = ωr,j R / vj    (2)

       P̂(λj, βj) = (1/2) ρ π R² vj³ Ĉp(λj, βj)
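A schematic set-up of this fit with scipy is sketched below (the power-curve points, the constant rated rotor speed, and the Heier-type initial coefficients are illustrative assumptions; the decision vector stacks C1–C9 and one pitch angle per power-curve point):

```python
import numpy as np
from scipy.optimize import least_squares

rho, R, omega_r = 1.225, 45.0, 1.8               # air density, blade radius, rated rotor speed
v = np.array([5.0, 7.0, 9.0, 11.0, 13.0])        # power-curve wind speeds [m/s]
P = np.array([0.3, 0.9, 1.9, 2.9, 3.0]) * 1e6    # manufacturer power values [W] (illustrative)

def cp_hat(lam, beta, C):                        # equation (1)
    lam_i_inv = 1.0 / (lam + C[7] * beta) - C[8] / (beta**3 + 1.0)
    return C[0] * (C[1] * lam_i_inv - C[2] * beta - C[3] * beta**C[4] - C[5]) * np.exp(-C[6] * lam_i_inv)

def residuals(x):                                # scaled residuals of problem (2)
    C, beta = x[:9], x[9:]
    lam = omega_r * R / v
    P_hat = 0.5 * rho * np.pi * R**2 * v**3 * cp_hat(lam, beta, C)
    return (P - P_hat) / P.max()

x0 = np.concatenate([[0.5, 116.0, 0.4, 0.0, 2.0, 5.0, 21.0, 0.08, 0.035],   # Heier-type guess
                     np.zeros(v.size)])
lb = np.concatenate([[0, 0, 0, 0, 0.5, 0, 0, 0, 0], np.zeros(v.size)])
ub = np.concatenate([[2, 300, 5, 5, 5, 50, 50, 1, 1], np.full(v.size, 30.0)])
sol = least_squares(residuals, x0, bounds=(lb, ub))
print("estimated C1..C9:", np.round(sol.x[:9], 3))
```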

This methodology has been tested with variable-speed, pitch-controlled wind turbines, showing good results.

In this way, the theoretical behaviour of different commercial models can be characterised and incorporated into detailed simulations. The methodology can also be applied to historical series of turbine operation data, which allows the real behaviour of an already installed turbine to be characterised, opening up other uses of the simulations, such as determining the producible power in real time, assessing whether the turbine is operating properly, updating models in digital twins, etc.

For this purpose, data series from the Kelmarsh [2] and Penmanshiel [3] wind farms (both in the UK) have been used. They store the most important variables, such as generated power, rotational speed, pitch angle and wind speed, every 10 minutes from 2016 to 2021.

References

[1] Slootweg, J. G., de Haan, S. W. H., Polinder, H., & Kling, W. L. (2002). General Model for Representing Variable-Speed Wind Turbines in Power System Dynamics Simulations. IEEE Power Engineering Review, 22(11), 56–56. https://doi.org/10.1109/MPER.2002.4311816

[2] Kelmarsh wind farm data. https://doi.org/10.5281/zenodo.5841834

[3] Penmanshiel wind farm data. https://doi.org/10.5281/zenodo.5946808



8:50am - 9:10am

Techno-economic analysis of a novel small-scale blue hydrogen and nitrogen production system

Adrian Irhamna, George M. Bollas

University of Connecticut, USA

As global energy systems are geared toward cleaner sources, the demand for clean hydrogen is projected to surge, potentially reaching 125–585 million tons annually by 2050 and accounting for over 70% of global hydrogen demand (McKinsey & Company, 2023). While conventional steam methane reforming dominates the current hydrogen production infrastructure, the future lies in blue and green hydrogen technologies. Blue hydrogen, particularly in regions with low natural gas prices, is anticipated to be more cost-competitive than green hydrogen (Ueckerdt et al., 2024). Emerging applications in steel manufacturing, synthetic fuels, and heavy-duty transport are expected to drive demand for cleaner hydrogen production routes. Additionally, distributed small-scale hydrogen production could accelerate the transition to a hydrogen economy by enabling on-site generation at locations such as hydrogen refueling stations (Navarro et al., 2015), addressing challenges in storage and transportation.

This paper presents an economic analysis of a blue hydrogen and nitrogen production system, using a novel intensified reformer previously proposed (Irhamna & Bollas, 2024c), with a hydrogen production efficiency of 80% when integrated with a shift reactor (Irhamna & Bollas, 2024b). The system's capability to produce both high-purity hydrogen and nitrogen opens opportunities for small-scale blue hydrogen and distributed ammonia production (Burrows & Bollas, 2022). We analyze two production scales, 500 kg/day and 5000 kg/day, corresponding to small and large hydrogen fueling stations (Kurtz et al., 2020), respectively. We optimized a system that comprises three identical reforming fixed bed reactors, a heat recovery system, and shift reactors. The system was studied using a dynamic model, simulated and optimized in (Irhamna & Bollas, 2024a). The optimized system was then used to perform a Techno-Economic Analysis (TEA), considering factors affecting both capital and operating expenses. The TEA revealed that the hydrogen production cost for the 500 kg/day system is approximately 3.05 USD/kgH2, while that for the 5000 kg/day system is 2.68 USD/kgH2. These results are consistent with, but higher than, the costs of larger-scale blue hydrogen production systems (2.0-2.5 USD/kg) studied in prior work (Argyris et al., 2023; Spallina et al., 2019; Szima et al., 2019). We also conducted a sensitivity analysis exploring the impact of key factors such as oxygen carrier lifetime, oxygen carrier price, and natural gas price on blue hydrogen costs. This research contributes valuable insights into the economic viability of small-scale blue hydrogen production, potentially facilitating the broader adoption of hydrogen technologies in a cleaner energy future.
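For orientation, the levelized hydrogen cost behind such a TEA can be computed with a capital recovery factor; the sketch below uses purely illustrative numbers, not the values of this study.

```python
def levelized_h2_cost(capex, annual_opex, annual_kg_h2, lifetime_yr=20, discount=0.08):
    """LCOH = (annualized CAPEX + annual OPEX) / annual hydrogen production."""
    crf = discount * (1 + discount)**lifetime_yr / ((1 + discount)**lifetime_yr - 1)
    return (capex * crf + annual_opex) / annual_kg_h2

annual_kg = 500 * 365 * 0.90      # hypothetical 500 kg/day plant at 90% availability
print(f"LCOH ~ {levelized_h2_cost(3.0e6, 2.0e5, annual_kg):.2f} USD/kg H2")
```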



9:10am - 9:30am

A new computational method for the simulation of catalyst deactivation in fluidized bed reactors

Andrea Pappagallo2, Hugo Petremand2, Oliver Krocher2, Flavio Manenti1, Emanuele Moioli1

1Politecnico di Milano, Italy; 2Paul Scherrer Institute, Switzerland

Modelling catalyst deactivation in fluidized bed reactors remains a challenging task. The main difficulties are related to the estimation of the movement of the particles through the reactor, which results in the division of the particles into several classes with different flow patterns. This leads to different populations of particles with diverse deactivation profiles. In this work, we present a new methodology developed to address this challenge. The methodology was developed starting from a pilot plant operating fluidized bed CO2 methanation, from which axial concentration and flow pattern profiles were obtained. Additionally, a model for the solid phase movement was calibrated, determining the different movement patterns of the most represented particle classes. The reference deactivation mechanism used in this study is the decomposition of ethylene, which is often present in the feed streams to methanation reactors. On the basis of these results, the deactivation profiles for the main classes of particles were calculated. Using the knowledge developed with this model, the system was optimized by modifying the flow pattern so that the residence time in the deactivation zone could be reduced. The most promising solutions to decrease catalyst deactivation were validated experimentally in the pilot plant.

A fluidized bed methanation reactor with a throughput of ca. 50 Nm3/h of reactive gases was used to measure concentration and bubble rise profiles. These data were used to validate a fluidized bed reactor model developed on the basis of the two-phase assumption. This assumption considers the reactor as composed of a bubble phase (with plug flow characteristics) and a dense phase containing the catalyst, where the reaction occurs (this phase has CSTR characteristics due to the particle motion). Based on the experimental results, we calibrated a model of the particle movement. The result is a complete description of the fluidization in terms of reactivity and flow patterns. Into this model we embedded a deactivation model developed considering the effect of the presence of ethylene in the methanation feed gas. To characterize the deactivation phenomenon, we performed experiments adding ethylene as a co-feed to the methanation reaction. We performed experiments simulating various bed heights to understand the influence of the change of composition in the reactor on the activation/deactivation pattern of the catalyst. Based on the experimental evidence, we derived a deactivation kinetic model obtained by regression. This was implemented in the global model to understand the deactivation pattern in the fluidized bed reactor and to optimize the catalyst lifetime by changing the flow pattern and/or adding steam to the feed gas.
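As an illustration of the kind of activity-decay law that can be regressed from such co-feed experiments (the functional form and parameter values below are placeholders, not the fitted model of this work):

```python
import numpy as np
from scipy.integrate import solve_ivp

k_d = 2.0e-6          # deactivation rate constant [m3/(mol s)] (illustrative)
C_C2H4 = 2.0          # ethylene concentration seen by the particle class [mol/m3]

def activity(t, a):   # first-order decay in both activity and ethylene concentration
    return -k_d * C_C2H4 * a

t_end = 72 * 3600.0   # three days on stream
sol = solve_ivp(activity, (0.0, t_end), [1.0], t_eval=np.linspace(0.0, t_end, 4))
for t, a in zip(sol.t / 3600.0, sol.y[0]):
    print(f"t = {t:5.0f} h -> relative activity {a:.2f}")
```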



9:30am - 9:50am

Kinetic modelling and optimisation of CO2 capture and utilisation to methane on dual function material

Meshkat Dolat1, Andrew D. Wright2, Mohammadamin Zarei1, Melis S. Duyar1,3, Michael Short1,3

1School of Chemistry and Chemical Engineering, University of Surrey, Guildford, Surrey GU2 7XH, United Kingdom; 2Department of Chemical Engineering, School of Engineering, The University of Manchester, UK; 3Institute for Sustainability, University of Surrey, Guildford, Surrey GU2 7XH, UK

Power-to-Gas (PtG) technology offers a sustainable solution for converting surplus renewable energy into synthetic natural gas (SNG) via the CO2 methanation (Sabatier) reaction. Particularly well-suited for post-combustion carbon capture applications, PtG facilitates efficient energy storage while promoting a carbon-neutral cycle when paired with renewable hydrogen. CO2 adsorption, however, remains a costly process, and the methanation reaction presents significant thermodynamic and kinetic challenges, requiring careful reactor design and effective management of heat and mass transfer. Dual-function material (DFM) technology (Duyar et al., 2015) has emerged as a promising alternative, combining CO2 adsorption and in situ hydrogenation, enabling direct conversion of CO2 from diluted streams without the need for energy-intensive purification steps typically required in cyclic adsorption/absorption processes. To scale this novel technology for industrial applications, comprehensive studies are needed to elucidate the kinetics of CO2 adsorption, purge, and hydrogenation, as well as their interdependencies with process conditions and material properties.

This study builds on kinetic modelling efforts by Bermejo-López et al., (2020) by applying a bespoke reaction rate expression for CO2 methanation using Ni-Ru/CeO2-Al2O3 catalysts developed at the University of Surrey. The model simulates the cyclic adsorption, purge, and hydrogenation processes within a dynamic plug flow reactor framework, employing finite difference methods (FDM) to discretise the coupled partial differential equations (PDEs) governing the transient nature of these processes. Python is used for simulation and parameter estimation, with the least-squares method applied for the latter.
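A simplified method-of-lines sketch of such a discretisation (a single-component adsorption step with a linear-driving-force uptake, not the authors' full cyclic model; all parameters are illustrative) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, L, u, eps = 50, 0.1, 0.05, 0.4        # axial cells, bed length [m], velocity [m/s], voidage
dz = L / N
k_ldf, q_max, K = 0.5, 2.0, 10.0         # LDF constant [1/s], capacity [mol/kg], affinity [m3/mol]
rho_b, c_in = 800.0, 4.0                 # bulk density [kg/m3], inlet CO2 conc. [mol/m3]

def rhs(t, y):
    c, q = y[:N], y[N:]
    q_eq = q_max * K * c / (1.0 + K * c)            # Langmuir equilibrium loading
    dqdt = k_ldf * (q_eq - q)                       # linear driving force uptake
    c_up = np.concatenate(([c_in], c[:-1]))         # first-order upwind differences
    dcdt = -u / eps * (c - c_up) / dz - rho_b / eps * dqdt
    return np.concatenate([dcdt, dqdt])

sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(2 * N), method="BDF", t_eval=[0, 150, 300, 600])
print("outlet CO2 [mol/m3]:", np.round(sol.y[N - 1], 3))
```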

The model explores various process conditions, such as different CO2 and hydrogen concentrations, as well as varying temperature, pressure, and cycle times, to optimise process conditions for industrial applications. This work provides a valuable framework for designing efficient PtG systems, offering insights into CO2 conversion mechanisms and identifying optimal conditions for CO2 utilisation using DFM technology.

Acknowledgments

We would like to acknowledge that this work was supported by the Engineering and Physical Sciences Research Council (EPSRC) [grant number EP/Y005600/1].

References

Bermejo-López, A., Pereda-Ayo, B., González-Marcos, J. A., & González-Velasco, J. R. (2020). Modeling the CO2 capture and in situ conversion to CH4 on dual function Ru-Na2CO3/Al2O3 catalyst. Journal of CO2 Utilization, 42, 101351. https://doi.org/10.1016/J.JCOU.2020.101351

Duyar, M. S., Treviño, M. A. A., & Farrauto, R. J. (2015). Dual function materials for CO2 capture and conversion using renewable H2. Applied Catalysis B: Environmental, 168–169, 370–376. https://doi.org/10.1016/J.APCATB.2014.12.025



9:50am - 10:10am

Real-time carbon accounting and forecasting for reduced emissions in grid-connected processes

Rafael Castro-Amoedo1,2, Alessio Santecchia1, Manuel Oliveira1, François Maréchal3

1Emissium Labs, Portugal; 2Instituto Superior Técnico, Portugal; 3Industrial Process and Energy Systems Engineering (IPESE), École Polytechnique Fédérale de Lausanne, Sion, Switzerland

The lack of granular data in energy systems presents a significant challenge for effectively managing electricity consumption and reducing carbon emissions. As energy systems grow more complex, precise data becomes essential for operating and optimizing energy use. However, current emissions tracking methods often lack the temporal and spatial resolution required for efficient decision-making, leaving industries and energy managers without the insights to align operations with periods of low-carbon electricity generation.

In response to this challenge, we have developed a highly granular electricity emissions tracking system that integrates advanced machine learning algorithms and digital twin models of power grids. Our system forecasts emissions based on real-time grid conditions, enabling industries to schedule energy-intensive operations during periods of low carbon intensity. This innovative approach allows companies to balance operational needs with environmental sustainability, reducing their overall carbon footprint without compromising productivity.

Our methodology leverages the predictive capabilities of machine learning to enhance emissions forecasting, accounting for variables such as energy demand, weather patterns, and grid congestion. Digital twin models replicate the physical power grid, providing a virtual environment for simulating and optimizing energy flows. Preliminary results demonstrate that, in countries with a high penetration of renewable energy, this approach can lead to up to 40% reductions in carbon emissions by intelligently aligning electricity consumption with greener energy availability.
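As a simple illustration of the scheduling idea (with entirely made-up carbon-intensity and load figures, not the system's forecasts), a deferrable load can be shifted into the hours with the lowest forecast grid carbon intensity and compared against a fixed schedule:

# Minimal sketch (illustrative data only): schedule a deferrable load into the
# hours with the lowest forecast grid carbon intensity and compare emissions
# against running it in a fixed daytime window.
import numpy as np

rng = np.random.default_rng(1)
ci = 250 + 150 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 20, 24)  # gCO2/kWh forecast
load_kwh, hours_needed = 500.0, 6                # flexible load: 500 kWh spread over 6 hours

best_hours = np.argsort(ci)[:hours_needed]       # pick the greenest hours
flexible_emissions = load_kwh / hours_needed * ci[best_hours].sum() / 1e3   # kgCO2
baseline_emissions = load_kwh / hours_needed * ci[8:8 + hours_needed].sum() / 1e3

print(f"baseline: {baseline_emissions:.1f} kgCO2, carbon-aware: {flexible_emissions:.1f} kgCO2")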

This research highlights the potential of advanced data analytics and grid simulations for transforming energy management practices. Our findings underline the pressing need for granular emissions tracking to empower industries and grid operators with actionable insights, driving the transition toward a sustainable, low-carbon economy.



10:10am - 10:30am

Liquid Organic Hydrogen Carriers: comparing alternatives through energy and exergy analysis

Elvira Spatolisano, Federica Restelli, Laura Annamaria Pellegrini

Politecnico di Milano, Italy

In the transition towards sustainable energy, green hydrogen has gained attention as a low-emission alternative. However, its transport is hindered by its low volumetric density. To address this challenge, various hydrogen carriers have been proposed as a more practical solution. These hydrogen-bearing compounds can be transported more easily at milder conditions and dehydrogenated upon arrival to release hydrogen. Among all the possible options, liquid organic hydrogen carriers (LOHCs) are considered promising due to their compatibility with existing infrastructure (Lin and Bagnato, 2024). Various LOHCs have been explored in the literature. While some compounds are more suitable for hydrogenation and dehydrogenation, certain criteria help in identifying optimal candidates. The ideal LOHC should have a low melting point and high boiling point to avoid solidification and volatility issues, respectively, a high hydrogen storage capacity, low dehydrogenation enthalpy, low toxicity and be cost-effective at the same time (Pellegrini et al., 2024).

Once the hydrogen carrier is selected, a detailed techno-economic assessment of the entire hydrogen transport value chain is essential to compare alternatives and demonstrate their feasibility. The H2 value chain typically includes: hydrogenation of the organic molecule exploiting green H2, produced where renewable energy sources are extensively available, transport of the hydrogenated compound up to its final destination and, upon arrival, dehydrogenation to release hydrogen.

In this framework, the aim of this work is to present a systematic methodology for comparing different hydrogen value chains. Toluene and dibenzyltoluene (DBT) are selected as representative carriers due to their promising characteristics. A harbor-to-harbor scenario is discussed to study long-distance H2 transport. The hydrogenation and dehydrogenation stages were designed in Aspen Plus V11®. Based on process simulations, a detailed technical assessment is provided. Each stage of the value chain (i.e., hydrogenation, seaborne transport, dehydrogenation) is assessed through energy and exergy analysis. Energy consumption is expressed in terms of equivalent H2, which, in the end, lowers the amount of hydrogen delivered at the utilization hub. In this way, the efficiency of each stage can be easily quantified. Weaknesses and drawbacks are pointed out, to pave the way for future process intensification.

References

Pellegrini, L.A., Spatolisano, E., Restelli, F., De Guido, G., de Angelis, A.R., Lainati, A., 2024. Green H2 Transport through LH2, NH3 and LOHC: Opportunities and Challenges. SpringerBriefs in Applied Sciences and Technology, Part F3263.

Lin, A., Bagnato, G., 2024. Revolutionising energy storage: The Latest Breakthrough in liquid organic hydrogen carriers. International Journal of Hydrogen Energy 63, 315-329.

 
8:30am - 10:30amT2: Sustainable Product Development and Process Design - Session 4
Location: Zone 3 - Room D016
Chair: Cristhian Molina Fernández
Co-chair: Stavros Papadokonstantakis
 
8:30am - 8:50am

Sustainable downstream process design for HMF conversion to value-added chemicals

Norbert-Botond Mihaly1, Miruna Prodan1, Vasile Mircea Cristea1, Anton Alexandru Kiss2

1Babes-Bolyai University, Romania; 2Delft University of Technology

Biomass is the sole renewable organic carbon source found in nature; as such, its conversion to chemical derivatives and essential intermediates is studied as a long-term strategy for the chemical sector. Among the numerous valuable chemicals obtained from biomass, 5-hydroxymethylfurfural (HMF) is considered an industrially relevant compound due to its capacity to be converted into a variety of value-added chemicals. HMF hydrogenation and amination products stand out, owing to their high demand and versatility, as platform chemicals for sustainable polymeric materials and pharmaceuticals, such as 2,5-bis(hydroxymethyl)furan (BHMF) and 5-(aminomethyl)-2-furanmethanol (AMF).

Compared to conventional catalytic synthesis, bio-catalysis has emerged as a potentially greener substitute for HMF conversion to value-added compounds. Enzymatic bio-catalysis operates under milder conditions; however, the reactions are often incomplete, require longer incubation periods and offer lower yields. Hydrogenation of HMF has been thoroughly investigated; however, few studies have focused on amination, and seldom do any of these studies present the separation and purification of the obtained high added-value products. The current study focuses on the enzymatic conversion of HMF to high added-value chemicals, such as BHMF and AMF, with phenethylamine utilized as the amine donor. Considering the full conversion of HMF, four compounds are stoichiometrically obtained in the reaction mixture, i.e., BHMF, AMF, phenethylamine and phenethyl alcohol. The separation and purification processes are designed by means of rigorous simulations carried out in Aspen Plus V11.

The first step in the separation of the mentioned compounds is a cationic ion-exchange process to separate the amines and alcohols, followed by the neutralization of both streams. The neutralization results in the formation of mineral salts, which can be removed by means of crystallization and filtration. The resulting aqueous solution of the two amines is separated through fractional distillation, obtaining AMF as bottom product with >98 wt.% purity and phenethylamine as side product with >98 wt.% purity. As for the separation of the alcohols, a distillation process is utilized to obtain BHMF as bottom product with >99 wt.% purity. The aqueous solution of phenethyl alcohol is introduced into a liquid-liquid extraction section, followed by a distillation process where over 99 wt.% phenethyl alcohol is obtained at the bottom and over 99 wt.% hexane at the top of the column. The global specific energy demand is approximately 1.79 kWh per kg of product.

The novelty of the research consists in the development of a new amination-based option for the enzymatic bio-catalytic conversion of HMF to BHMF and AMF (experimentally proven), and the design by process simulations of a feasible path for the separation of intermediates and products, by a cost effective and eco-efficient combination of crystallization, filtration, distillation, liquid-liquid extraction and ion-exchange unit operations.



8:50am - 9:10am

Sustainable Two-Column Design for the Separation of Ethyl Acetate, Methanol, and Water

Prakhar Srivastava, Nitin Kaistha

Indian Institute of Technology, Kanpur, India

Ethyl Acetate (EtAc) and Methanol (MeOH) are among the most frequently used organic solvents in the manufacturing of pharmaceuticals [1]. With growing environmental and sustainability concerns, newer regulations are pushing the industry to recover organic solvents from often dilute aqueous waste solvent and reuse them for a “green” manufacturing process [2]. In this overall context, this study explores the design and synthesis of a two-column distillation (TCD) process to separate a dilute ternary EtAc-MeOH-water waste solvent into nearly pure components. The separation is complicated by the presence of a homogeneous EtAc-MeOH azeotrope and a heterogeneous EtAc-water azeotrope, which results in a distillation boundary that partitions the ternary composition space into two distinct distillation regions. The proposed flowsheet leverages liquid-liquid phase separation to cross the distillation boundary for separation feasibility. Also, the pressure sensitivity of the distillation boundary is utilized to reduce the total recycle rate for energy efficiency. The basic TCD flowsheet, which consists of a decanter, a high-pressure simple column, and a low-pressure divided-wall column (DWC), is heat-integrated (HI) using external process-to-process heat exchangers as well as vapour-recompression (VR) driven reboilers on the two columns. The resulting energy-efficient HIVR-TCD configuration is significantly cheaper and more energy-efficient than existing literature designs [3]. Specifically, the total annualized cost (TAC) of the proposed HIVR-TCD process design is 15.4% lower than that of a three-column HIVR design recently reported in the literature. Also, the energy consumption and CO2 emissions are lower by 34.3% and 31.4%, respectively, which represents a significant improvement.

References

[1] D. J. C. Constable, C. Jiménez-González and R. K. Henderson, "Perspective on solvent use in the pharmaceutical industry," Organic Process Research & Development, vol. 11, pp. 133-137, 2007.

[2] J. García-Serna, L. Pérez-Barrigón and M. J. Cocero, "New trends for design towards sustainability in chemical engineering: Green engineering," Chemical Engineering Journal, vol. 133, no. 1-3, pp. 7-30, 2007.

[3] A. Yang, S. Sun, Z. Y. Kong, S. Zhu, J. Sunarso and W. Shen, "Energy-efficient heterogeneous triple-column azeotropic distillation process for recovery of ethyl acetate and methanol from waste water," Computers and Chemical Engineering, p. 108618, 2024.



9:10am - 9:30am

Separation Sequencing in Batch Distillation: An Extension of Marginal Vapor Rate Method

Prachi Sharma, Sujit Jogwar

Indian Institute of Technology, Bombay, India

Distillation is an important separation technique widely employed in the chemical industry. With batch distillation, separation of a multi-component mixture is achieved using a single column where products are recovered in a particular sequence. Conventionally, the feed is added to the reboiler and products are withdrawn as distillate in decreasing order of their volatilities (direct sequence). On the other hand, in an inverted configuration, the feed is added to the reflux drum and the products are withdrawn from the bottom in increasing order of volatility (indirect sequence). Although these sequences yield the same products, they exhibit different capital and operating costs. A key question in the design of batch distillation processes is to obtain the optimal sequence of separation. The existing literature in this context either uses heuristics or rigorous optimization [1]. As the number of components to be separated increases, the number of potential separation sequence alternatives increases drastically, necessitating a systematic approach to obtain the best sequence. Motivated by this, the present work aims at obtaining these sequences in a generic and computationally efficient manner.

In continuous distillation of multi-component mixtures, the marginal vapor rate method is used to obtain the best separation sequence [2]. In our approach, this method is extended to batch separation. There are two major challenges for this extension. Firstly, unlike steady-state operation in continuous distillation, the feed as well as product compositions change as a function of time in batch distillation. Secondly, instead of molar flow rates, there are material balance constraints on total moles. These two challenges are addressed in this work by approximating the batch distillation task as a sequence of continuous distillation tasks with varying feed composition and by making simplifying assumptions such as negligible tray holdup [3]. Accordingly, the marginal vapor is computed for each binary separation by integrating the corresponding marginal vapor rate over the batch time. Subsequently, the marginal vapors for each separation task are added together to obtain the total marginal vapor for a particular separation sequence, as illustrated in the sketch below. The sequence with the lowest total marginal vapor is recommended as the best sequence.
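A minimal numerical sketch of this accounting step is given below; the marginal vapor rate profiles are placeholders rather than results of the paper, and only the integration and ranking logic is illustrated.

# Minimal sketch: integrate assumed marginal vapor rate profiles over batch time
# for each binary split and rank candidate separation sequences by total
# marginal vapor. The rate profiles below are placeholders, not the paper's data.
import numpy as np

t = np.linspace(0.0, 5.0, 101)                      # batch time [h]
# marginal vapor rate [kmol/h] vs. time for each candidate binary split (assumed)
rate = {
    "A/BC": 12.0 - 1.0 * t,
    "AB/C": 9.0 - 0.4 * t,
    "A/B":  6.0 - 0.5 * t,
    "B/C":  7.0 - 0.6 * t,
}
# trapezoidal integration of each rate profile over the batch time [kmol]
marginal_vapor = {split: float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t)))
                  for split, r in rate.items()}

sequences = {"direct (A/BC then B/C)": ["A/BC", "B/C"],
             "indirect (AB/C then A/B)": ["AB/C", "A/B"]}
totals = {name: sum(marginal_vapor[s] for s in splits) for name, splits in sequences.items()}
best = min(totals, key=totals.get)
print(totals, "-> recommended:", best)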

The proposed approach offers a computational advantage over rigorous optimization and helps identify optimal non-trivial separation sequences. The methodology is validated with simulation case studies involving the separation of ternary and quaternary mixtures.

References

[1] E. Sørensen and S. Skogestad, “Comparison of regular and inverted batch distillation,” Chemical Engineering Science, p. 13, 1996.

[2] A. K. Modi and A. W. Westerberg, “Distillation column sequencing using marginal price,” Ind. Eng. Chem. Res., p. 9, 1992.

[3] U. Diwekar, Batch Distillation: simulation, optimal design and control, 2011.



9:30am - 9:50am

Energy efficient process designs for acrylonitrile production by propylene ammoxidation

Qing Li1, Alexandre C. Dimian2, Anton A. Kiss1

1Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, 2629 HZ, Delft, The Netherlands; 2Department of Chemical and Biochemical Engineering, University Politehnica of Bucharest, 313 Spl. Independentei, 060042 Bucharest-6, Romania

Acrylonitrile (AN) is a critical commodity chemical used to produce a variety of industrial polymers, such as carbon fibers, plastics, rubber, etc. The Standard Oil of Ohio (SOHIO) process is the current commercial process for AN production based on propylene ammoxidation and accounts for over 90% of global acrylonitrile production. However, this process involves several distillation columns in the downstream separation, which is energy demanding due to the low thermal efficiency of distillation columns. Nonetheless, propylene ammoxidation is highly exothermic (ΔH = –123 kcal/mol). Ideally, this reaction heat from the upstream reactor could be utilized and integrated with the downstream separation. Given the current rise in energy costs and increased environmental concerns, designing an energy-integrated and more sustainable process for acrylonitrile production is of great importance.

This original study is the first to provide a rigorous process design of the full process from a holistic viewpoint, covering all sections of acrylonitrile production: reaction, acid quenching, absorption-desorption, hydrogen cyanide recovery, acrolein recovery, acrylonitrile-acetonitrile-water separation, and acetonitrile recovery. Furthermore, in order to improve the energy efficiency and the sustainability metrics (such as greenhouse gas emissions) of the developed process, four energy integration options with different focuses are developed: (1) Implement process intensification technologies for the downstream separations, including pressure-swing distillation with full heat integration, dividing-wall columns, etc., exploring also the mechanical vapor recompression heat-pumping potential. (2) Synthesize the heat exchanger network (HEN) for simultaneous optimization of direct heat integration and heat pumps, and compare the optimum design with the results from pinch analysis. (3) Remove the reactor surplus heat with hot oil and incorporate it as a new hot utility into the rest of the HEN, optimizing the new HEN to fully utilize this heat source. (4) Use the surplus heat from the reactor to generate power by downgrading the heat from very-high-pressure steam (VHP, 100 bar) to low-pressure steam (LP, 6 bar), and then use the LP steam to satisfy the heating requirements of the process.

Thermodynamic analysis, sensitivity analysis, economic and sustainability assessments are carried out for option 1, and results show that the heat integrated intensified process enables 27.27% energy savings and 28.20% reduction of greenhouse gas emissions. By doing heat integration systematically in option 2, the optimum HEN shows 60% energy savings as compared to the non-heat integrated intensified process. In option 3, by utilizing the heat of reaction, the optimum HEN is provided in which no external hot utility is required. As for the option 4, using the reaction surplus heat can generate 7 MW power (to be exported) and 48 MW LP steam (to be utilized as the downstream hot utility). The advantages and disadvantages of each approach are analyzed and evaluated, leading to industrial guidance. As the first complete and comprehensive description of the design and optimization of the entire acrylonitrile production process via the SOHIO method, this work highlights the potential for improved energy efficiency in the acrylonitrile production, with the proposed process representing a major step towards achieving these goals.



9:50am - 10:10am

Modelling of the crystallization of Ni-Mn-Co hydroxide co-precipitation

Erik Guillermo Resendiz-Mora, Solomon F. Brown

University of Sheffield, United Kingdom

The world's efforts towards achieving net zero include the development of a mix of technologies aiming at the sustainable production and storage of energy. A share of these efforts is focused on the development of Li-ion batteries, whose utilisation has grown in multiple applications such as portable electronic devices, EVs and grid storage due to their ability to store and deliver energy over cycles of use. The performance and cost of the batteries relate to the properties of the cathode materials. Several chemistries have been researched to improve the performance of cathode active materials, including the use of lithium-iron phosphate, nickel-manganese-cobalt and nickel-manganese-aluminium. In this work, we look at the manufacturing of combined hydroxides of nickel-manganese-cobalt.

This investigation addresses the mathematical modelling of the co-precipitation of combined hydroxides of nickel, manganese, and cobalt (NMC), particularly with stoichiometry 8:1:1. The mathematical model considers a lab-scale stirred semi-batch crystalliser fed with solutions of transition metals sulphates (i.e., nickel, manganese, and cobalt), precipitant agent (i.e., sodium hydroxide) and chelating agent (i.e., ammonium hydroxide).

The modelling methodology involved the application of mass balances for the ionic species in the reactor, along with the application of the population balance equation to track the number of particles produced out of the precipitation process as well as the particle size distribution of the product. The following assumptions were considered during the modelling efforts:

  1. The precipitated crystals are spherical.
  2. A clear solution is utilised at initial conditions and no crystals are present in any of the feeds.
  3. The reactor/crystalliser is well mixed; hence, no mass transfer limitations occur during the formation of primary and secondary particles.
  4. Particle growth is independent of crystal size.
  5. The precipitation of individual transition metal hydroxides does not occur.
  6. The reaction rate of both the ionic complexes and the hydroxide formation are fast; the controlling mechanisms of the process are the nucleation and growth of the crystals.
  7. The crystalliser operates isothermally.
  8. Agglomeration and breakage of particles are neglected.

The produced model is a set of 30 integral, partial differential, and algebraic equations (IPDAEs); it is implemented and solved in the commercial solver Aspen Custom Modeler utilising the method of lines and an upwind scheme to discretise the spatial differentials of the population balance equation.
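For illustration, a much simplified version of such a population balance (nucleation plus size-independent growth only, with a prescribed supersaturation trajectory and made-up kinetic constants) can be discretised with an upwind scheme over crystal size and integrated in time in Python; the actual study uses Aspen Custom Modeler and the full 30-equation IPDAE set.

# Simplified sketch (the study itself uses Aspen Custom Modeler): population
# balance dn/dt + G*dn/dL = 0 with nuclei entering at the smallest size class,
# discretised with a first-order upwind scheme over crystal size and integrated
# in time with SciPy. All kinetic constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

M = 200                                   # size classes
L = np.linspace(1e-7, 5e-5, M)            # crystal size grid [m]
dL = L[1] - L[0]
kb, b = 1e8, 2.0                          # nucleation kinetics (assumed)
kg, g = 1e-8, 1.5                         # growth kinetics (assumed)

def supersaturation(t):
    return 0.5 * np.exp(-t / 600.0) + 0.1  # prescribed decaying supersaturation (placeholder)

def rhs(t, n):
    S = supersaturation(t)
    G = kg * S**g                          # size-independent growth rate [m/s]
    B0 = kb * S**b                         # nucleation rate [#/m3/s]
    dndL = np.empty(M)
    dndL[0] = (n[0] - B0 / max(G, 1e-30)) / dL   # boundary condition: n(L0) = B0/G
    dndL[1:] = (n[1:] - n[:-1]) / dL             # first-order upwind differences
    return -G * dndL

sol = solve_ivp(rhs, (0.0, 3600.0), np.zeros(M), method="BDF", t_eval=[3600.0])
n_final = sol.y[:, -1]
mean_size = np.sum(n_final * L * dL) / max(np.sum(n_final * dL), 1e-30)
print(f"number-mean crystal size after 1 h: {mean_size * 1e6:.2f} um")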

Model testing and a sensitivity analysis of the crystallisation kinetic parameters were carried out over the ranges 10^4 < kb < 10^10, 1 < b < 5, 10^-10 < kg < 10^-6 and 1 < g < 2.5, and successfully reproduced the expected features of the process regarding particle size distribution, mean particle size, supersaturation and the evolution of the concentration profiles of the ionic species present in the system. Moreover, the model was qualitatively compared against experimental data available for pH ranges of 10.4 – 11.0 and chelating agent concentrations of 0.3 – 1.5 M, rendering a good agreement with the experimental data.



10:10am - 10:30am

Robust pharmaceutical tableting process through combined probabilistic design space and flexibility analysis

Ashish Yewale, Xuming Yuan, Brahim Benyahia

Loughborough University, United Kingdom

The development and production of pharmaceutical products is governed by stringent regulations that necessitate a comprehensive understanding of manufacturing processes. This understanding encompasses the impact of the critical material attributes (CMAs) as well as the critical process parameters (CPPs), which define the operational conditions during production that significantly influence critical quality attributes (CQAs) of the final product. Establishing a design space (DS), a multidimensional framework that captures acceptable variances in CMAs and CPPs, enables manufacturers to optimize processes while ensuring consistent product quality (Peterson 2008). However, the reliability of any DS determined through a process model is contingent upon the accuracy of that model. If the uncertain parameters of the model follow a specific probability distribution, the design space becomes probabilistic rather than clearly defined (Kusumo et al. 2020). This shift may necessitate changes to the DS and could trigger a regulatory process for postapproval changes; however, such changes are not required as long as the process parameters remain within the limits of the approved DS.

Here, the probabilistic design space is constructed for a tableting process by propagating the model parameter uncertainty. In the process, lubrication extent (27-6736) and porosity (0.09 - 0.30) are identified as the CPPs, while the tablet tensile strength is designated as the CQA. The empirical model developed by Nassar et al. (2019) is used to capture the impact of the CPPs and involves five unknown parameters (θ = [a1 (MPa), a2 (-), b1 (-), b2 (-), g (dm^-1)]). The uncertainty of these model parameters is represented by a sampled distribution (mean = [10, 1.2, -6.8, 0.42, 0.0022], variance = [0.425, 0.2875, 0.3, 0.045, 0.000325]), enabling the application of Monte Carlo and Bayesian techniques to propagate this model parameter uncertainty to the CQAs and estimate the feasibility probability of achieving a reliability value greater than 0.9. This probabilistic design space allows manufacturers to assess the likelihood of meeting the CQAs under varying conditions, further emphasizing its importance in facilitating regulatory compliance. Additionally, integrating flexibility analysis provides a comprehensive assessment of the tableting process's ability to adapt to changes in critical process parameters (CPPs) while still achieving the desired CQAs. Preliminary findings suggest the identification of a robust design space defined by specific combinations of lubrication extent and porosity, which ensure exceptional tableting performance even in the presence of uncertainties.
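A minimal sketch of the Monte Carlo propagation step is given below. The tensile-strength expression is a generic placeholder rather than the Nassar et al. (2019) model, and the acceptance limit is assumed; only the parameter means and variances follow the abstract.

# Minimal sketch of Monte Carlo uncertainty propagation to a probabilistic
# design space. The tensile-strength expression below is a generic placeholder,
# NOT the Nassar et al. (2019) model; means/variances follow the abstract,
# while the 2 MPa acceptance limit is an assumption.
import numpy as np

rng = np.random.default_rng(42)
mean = np.array([10.0, 1.2, -6.8, 0.42, 0.0022])          # [a1, a2, b1, b2, g]
var = np.array([0.425, 0.2875, 0.3, 0.045, 0.000325])
theta = rng.normal(mean, np.sqrt(var), size=(2000, 5))     # parameter samples

lub = np.linspace(27.0, 6736.0, 40)                        # lubrication extent (CPP 1)
por = np.linspace(0.09, 0.30, 40)                          # porosity (CPP 2)
LUB, POR = np.meshgrid(lub, por)

def tensile_strength(a1, a2, b1, b2, g, lub_ext, porosity):
    # placeholder CQA model: strength falls with porosity and lubrication extent
    return a1 * np.exp(b1 * porosity) * a2 * np.exp(b2 * np.exp(-g * lub_ext))

target = 2.0                                               # assumed minimum tensile strength [MPa]
ok = np.zeros_like(LUB)
for a1, a2, b1, b2, g in theta:
    ok += tensile_strength(a1, a2, b1, b2, g, LUB, POR) >= target
prob = ok / len(theta)                                     # feasibility probability per grid point
design_space = prob >= 0.9                                 # probabilistic DS at 0.9 reliability
print("fraction of CPP grid inside the probabilistic design space:",
      round(float(design_space.mean()), 3))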

 
8:30am - 10:30amT3: Large Scale Design and Planning/ Scheduling - Session 3
Location: Zone 3 - Aula E036
Chair: Edwin Zondervan
Co-chair: Iiro Harjunkoski
 
8:30am - 8:50am

Genetic Algorithm-Driven Design and Rolling-Horizon Expansion of CCTS and Hydrogen Pipeline Networks

Joseph Hammond, Solomon Brown

The University of Sheffield, United Kingdom

Shared-use infrastructures, such as CCUS and hydrogen pipeline networks, are expected to be key developments for achieving a low-carbon energy transition by facilitating the decarbonisation of industrial sites and reducing emissions across sectors. Nevertheless, their construction and uptake remain stalled (IEA, 2023; IEA, 2024). Some studies attribute the stagnation to circular dependency in infrastructure dynamics (Brozynski and Leibowicz, 2022), where stakeholders each wait for others to act first. This stalemate creates decision paralysis and deep uncertainties for stakeholders, compounded by the variety of options available for industrial sites, the anticipated first customers of these infrastructures.

These uncertainties are particularly evident in network planning and design, where the location and timing of uptake are unknown. In the literature, pipeline infrastructure is often designed over multiple time periods as a component of a centralised planning strategy where site, equipment, and network planning decisions are determined within an optimisation formulation (Becattini et al., 2022). However, in reality, network planning strategy remains flexible to changes in stakeholder behaviour and mitigates against the risks associated with independent actors’ decisions.

Building on the authors’ previous work (Hammond et al., 2024) using the Steiner tree with Obstacles Genetic Algorithm (StObGA) to route networks across complex cost surfaces, this study proposes a novel modification. The updated algorithm enables infrastructure networks to evolve iteratively, incorporating previous design iterations and adapting to industrial site decisions as they emerge. This offers a flexible and dynamic network planning approach in uncertain environments.
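A rough sketch of the rolling-horizon idea (not the StObGA implementation, and on a toy grid rather than a real cost surface) is given below: at each period, newly committed sites are connected with a Steiner-tree approximation in which already-built pipeline edges are reused at zero marginal cost.

# Rough sketch of the rolling-horizon idea (not the authors' StObGA code):
# at each period, connect newly committed sites with a Steiner-tree
# approximation in which already-built pipeline edges carry zero marginal cost.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Toy cost surface: a grid graph with uniform construction cost per edge
G = nx.grid_2d_graph(10, 10)
nx.set_edge_attributes(G, 1.0, "cost")

built = set()                                  # edges already constructed
terminals = set()                              # sites connected so far
periods = [[(0, 0), (9, 9)],                   # period 1: store and first emitter commit
           [(0, 9)],                           # period 2: another emitter joins
           [(5, 0)]]                           # period 3: late joiner

for k, new_sites in enumerate(periods, start=1):
    terminals |= set(new_sites)
    # existing pipe is sunk cost: reuse it for free when routing new connections
    for u, v in G.edges():
        G[u][v]["cost"] = 0.0 if frozenset((u, v)) in built else 1.0
    T = steiner_tree(G, list(terminals), weight="cost")
    new_length = sum(1 for e in T.edges() if G[e[0]][e[1]]["cost"] > 0)
    built |= {frozenset(e) for e in T.edges()}
    print(f"period {k}: {len(built)} edges in network, {new_length} newly built")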

The novel approach is applied to market-growth scenarios in the Humber industrial cluster in the UK, where many large emitters and potential hydrogen consumers reside. The method’s effectiveness is evaluated over a rolling horizon, providing insights into the impact of internal stakeholder decision-making on the development and costs of transitional infrastructures.

References

IEA (2023). Hydrogen production and infrastructure projects database. [online] Available at: https://www.iea.org/data-and-statistics/data-product/hydrogen-production-and-infrastructure-projects-database [Accessed 16 Jul. 2024].

IEA (2024). CCUS projects database. [online] Available at: https://www.iea.org/data-and-statistics/data-product/ccus-projects-database [Accessed 16 Sep. 2024].

Hammond, J., Rosenberg, M. and Brown, S. (2024). A genetic algorithm-based design for hydrogen pipeline infrastructure with real geographical constraints. In: Proceedings of the 34th European Symposium on Computer Aided Process Engineering / 15th International Symposium on Process Systems Engineering (ESCAPE34/PSE24) Computer Aided Chemical Engineering, Vol. 53, Springer, pp. 631-636. Available at: https://doi.org/10.1016/B978-0-443-28824-1.50106-X

Becattini, V., Gabrielli, P., Antonini, C., Campos, J., Acquilino, A., Sansavini, G. and Mazzotti, M. (2022). Carbon dioxide capture, transport and storage supply chains: Optimal economic and environmental performance of infrastructure rollout. International Journal of Greenhouse Gas Control, [online] 112, p.103635. Available at: https://doi.org/10.1016/j.ijggc.2022.103635.

Brozynski, M.T. and Leibowicz, B.D. (2022). A multi-level optimization model of infrastructure-dependent technology adoption: Overcoming the chicken-and-egg problem. European Journal of Operational Research, 300, pp.755-770.



8:50am - 9:10am

Optimized Power Allocation in Dual-Stack Fuel Cells to Minimize Hydrogen Consumption

Beril Tümer1, Yaman Arkun1, Deniz Şanlı Yıldız2

1Koç University, Turkiye; 2Ford Otosan R&D Center, Turkiye

Fuel cells are devices that convert chemical energy into electrical energy via the transfer of electrons and protons. They possess key characteristics such as high efficiency, high power density, low corrosion, low emissions and moderate operating temperatures, making them highly favorable in the automotive industry. The high power demand of a vehicle often requires several cells that are arranged in a single or multiple ‘Fuel Cell Stacks’. However, each stack may exhibit different efficiencies varying in time due to differences in operating conditions and the rate of cell aging. The efficiency of a stack directly impacts the amount of hydrogen consumed to meet a given power demand. Therefore, careful power distribution between multiple fuel cell stacks is essential to minimize the total hydrogen consumption while fulfilling the vehicle’s power requirements. In this study, we consider two fuel cell stacks, each consisting of 65 parallel cells, with different efficiency profiles (i.e., low and high). The fuel cell stacks are modeled using first principles and simulated in MATLAB Simulink. In a given drive cycle, the power demand changes as a function of time. A constrained optimization is performed, in which hydrogen consumption is minimized by optimally distributing the power to the individual stacks while meeting the total demand. For proper power management, each fuel cell stack has its own power controller, which manipulates the stack current to control the stack power at its desired set-point. Computed power values from the optimization constitute the desired set-points for the local power PID controllers of the individual stacks. Closed-loop simulations are performed by simulating the developed mechanistic model together with the optimization and PID controllers in the Simulink platform. The closed-loop simulations demonstrate how well the power demand of the drive cycle is tracked and the hydrogen consumption is minimized. To assess the impact of the optimization strategy used in this study, hydrogen consumption is compared to that of equal-sharing and daisy-chain power sharing strategies. The results demonstrate that the hydrogen consumption minimization strategy effectively reduces total hydrogen consumption.
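A small sketch of the power-split optimisation is given below, using illustrative concave efficiency curves in place of the first-principles Simulink stack model: total hydrogen consumption is minimised subject to the two stack powers summing to the instantaneous demand.

# Small sketch of the power-split optimisation (illustrative efficiency curves,
# not the Simulink fuel cell stack model): minimise total hydrogen consumption
# subject to the two stack powers summing to the instantaneous demand.
import numpy as np
from scipy.optimize import minimize

LHV_H2 = 120e3                                   # lower heating value of hydrogen [kJ/kg]

def efficiency(p_kw, eta_max, p_rated):
    # assumed concave efficiency curve, peaking at part load
    x = np.clip(p_kw / p_rated, 1e-3, 1.0)
    return eta_max * (1.0 - 0.4 * (x - 0.5) ** 2)

def h2_rate(p_split, demand_kw):
    p1 = p_split[0]
    p2 = demand_kw - p1                              # equality constraint by substitution
    m1 = p1 / (efficiency(p1, 0.55, 80.0) * LHV_H2)  # "healthy" stack
    m2 = p2 / (efficiency(p2, 0.45, 80.0) * LHV_H2)  # aged, less efficient stack
    return m1 + m2                                   # total hydrogen consumption [kg/s]

demand = 100.0                                       # instantaneous power demand [kW]
res = minimize(h2_rate, x0=[demand / 2], args=(demand,),
               bounds=[(0.0, min(demand, 80.0))])
p1_opt = res.x[0]
print(f"stack 1: {p1_opt:.1f} kW, stack 2: {demand - p1_opt:.1f} kW, "
      f"H2 rate: {res.fun * 3600:.3f} kg/h")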



9:10am - 9:30am

Optimal Operation of Water Electrolysis for Clean Hydrogen Production: Case Study for Jeju Island in South Korea

Hongjun Jeon1, Hyojin Lee2, Woohyun Kim2, Kosan Roh1

1Chungnam National University, Korea, Republic of (South Korea); 2Korea Institute of Energy Research, Korea, Republic of (South Korea)

The economic and environmental viability of clean (or low-carbon) hydrogen production through water electrolysis depends on reductions in electricity costs and carbon emissions. To tackle this challenge, we apply demand-side management (DSM) to a proton exchange membrane (PEM) electrolysis system, considering the temporal variation of hourly electricity prices and carbon footprints. DSM is further combined with the purchase of renewable energy certificates and the utilization of curtailed electricity to enhance sustainability. We formulate an optimization problem based on historical data from Jeju Island in South Korea, where renewable energy resources are plentiful. The objective is to minimize the levelized cost of hydrogen (LCOH) while ensuring compliance with the Clean Hydrogen certification system in South Korea (less than 4 kg-CO2eq/kg-H2) by optimizing the hourly operation level. As a result, the optimal operation meets the carbon footprint requirement while reducing the LCOH to below the hydrogen sales price (9,900 KRW/kg-H2).
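A compact sketch of the underlying hourly scheduling problem is given below, with illustrative prices, carbon intensities, specific consumption and targets (the actual LCOH objective also carries capital and fixed costs): electricity cost is minimised for a daily hydrogen target subject to the 4 kg-CO2eq/kg-H2 cap.

# Compact sketch of the hourly DSM problem (illustrative prices, carbon
# intensities and specific consumption; the real LCOH model also carries
# capital and fixed costs): minimise electricity cost for a daily H2 target
# subject to the 4 kg-CO2eq per kg-H2 clean-hydrogen cap.
import numpy as np
from scipy.optimize import linprog

T = 24
rng = np.random.default_rng(3)
price = 80 + 40 * np.sin(np.linspace(0, 2 * np.pi, T)) + rng.normal(0, 5, T)      # KRW/kWh
ci = np.clip(150 + 140 * np.sin(np.linspace(0, 2 * np.pi, T) + 1.0), 10.0, None)  # gCO2/kWh
sec = 55.0                  # specific electricity consumption [kWh per kg H2]
p_max = 2500.0              # electrolyser electricity intake per hour [kWh]
h2_target = 250.0           # daily hydrogen production target [kg]

# decision variables: hourly electricity use e_t [kWh]
c = price                                        # minimise sum(price_t * e_t)
A_eq = np.ones((1, T)) / sec                     # sum(e_t)/sec == h2_target
b_eq = [h2_target]
A_ub = (ci / 1000.0 - 4.0 / sec).reshape(1, T)   # sum(ci_t*e_t)/1000 <= 4 * sum(e_t)/sec
b_ub = [0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, p_max)] * T, method="highs")
e = res.x
print("feasible:", res.success,
      "| average footprint:", round((ci @ e) / 1000.0 / (e.sum() / sec), 2), "kgCO2/kgH2")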



9:30am - 9:50am

Green hydrogen transport across the Mediterranean Sea: a comparative study of liquefied hydrogen and ammonia as carriers

Federica Restelli, Elvira Spatolisano, Laura Annamaria Pellegrini

Politecnico di Milano, Italy

Green hydrogen is commonly regarded as a key player in the transition towards a low-carbon future [1]. It can be efficiently produced in regions with abundant renewable resources, which are often remote and distant from major consumption centers. Therefore, the efficient and cost-effective transportation of H2 from production hubs to end users is essential. However, H2 has an extremely low volumetric density at ambient conditions. To overcome this issue, hydrogen carriers, such as liquefied hydrogen and ammonia, are being explored for its transportation on a large-scale [2, 3].

This study assesses the energy consumption involved in the supply chain of these carriers, focusing on the processes of hydrogen conversion to the carrier and its reconversion back to hydrogen. The analysis employs the “net equivalent hydrogen” method, which is similar to the widely adopted “net equivalent methane” method [4]. This approach evaluates the equivalent amount of hydrogen that would need to be burned to power specific equipment. By using this unified energy basis, the method allows for a fair comparison between different processes. The net delivered hydrogen is then calculated by subtracting both the net equivalent hydrogen consumed and the hydrogen lost due to the boil-off phenomenon during storage and shipping from the total hydrogen fed into the plant.
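A small numerical illustration of this bookkeeping is given below; all figures are placeholders rather than results of the study.

# Small illustration of the "net equivalent hydrogen" bookkeeping (all numbers
# are placeholders, not results of the study): each stage's energy demand is
# converted into the hydrogen that would have to be burned to supply it, and
# subtracted, together with boil-off losses, from the hydrogen fed to the chain.
LHV_H2 = 120.0            # MJ per kg of hydrogen
ETA_THERMAL = 0.55        # assumed efficiency of converting H2 to useful energy

def equivalent_h2(energy_mj):
    """Hydrogen [kg] burned to supply a given useful-energy demand [MJ]."""
    return energy_mj / (LHV_H2 * ETA_THERMAL)

h2_fed = 1000.0                                        # kg H2 entering the chain
stages = {                                             # useful-energy demand per stage [MJ]
    "conversion to carrier": 9000.0,
    "seaborne transport":    4000.0,
    "reconversion to H2":    15000.0,
}
boil_off = 0.02 * h2_fed                               # assumed 2% storage/shipping loss

consumed = sum(equivalent_h2(e) for e in stages.values())
net_delivered = h2_fed - consumed - boil_off
print(f"equivalent H2 consumed: {consumed:.0f} kg, net delivered: {net_delivered:.0f} kg "
      f"({net_delivered / h2_fed:.0%} of feed)")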

A sensitivity analysis is performed on the assumptions used in designing hydrogen supply chains. Different scenarios are examined, considering variable harbor-to-harbor distances and diverse hydrogen end uses. The study identifies the optimal carrier for each case study and highlights critical issues to guide future large-scale implementations.

References

[1] Pellegrini LA, Spatolisano E, Restelli F, De Guido G, de Angelis AR, Lainati A. Green H2: One of the Allies for Decarbonization. In: Pellegrini LA, Spatolisano E, Restelli F, De Guido G, de Angelis AR, Lainati A, editors. Green H2 Transport through LH2, NH3 and LOHC: Opportunities and Challenges. Cham: Springer Nature Switzerland; 2024. p. 1-6.

[2] Restelli F, Spatolisano E, Pellegrini LA, Cattaneo S, De Angelis AR, Lainati A, et al. Liquefied hydrogen value chain: a detailed techno-economic evaluation for its application in the industrial and mobility sectors. International Journal of Hydrogen Energy. 2024;52:454-66.

[3] Aziz M, Wijayanta AT, Nandiyanto ABD. Ammonia as effective hydrogen storage: A review on production, storage and utilization. Energies. 2020;13:3062.

[4] Pellegrini LA, De Guido G, Valentina V. Energy and exergy analysis of acid gas removal processes in the LNG production chain. Journal of Natural Gas Science and Engineering. 2019;61:303-19.



9:50am - 10:10am

Optimisation Under Uncertain Meteorology: Stochastic Modelling of Hydrogen Export Systems

Cameron Aldren, Nilay Shah, Adam Hawkes

Imperial College London, United Kingdom

Due to uncertainty associated with weather forecasting, the production of green hydrogen for energy export requires a broad arsenal of systems engineering modelling tools to derive a realistic view of future prospects. Due to the dearth of full-scale hydrogen export facilities and limited pilot systems in operation, there is currently a substantial impetus to rigorously optimise, forecast and rank different scenarios and design options for the supply chain.

Here we present the findings of a non-deterministic green hydrogen production model. This operations model is focused on synthesising hydrogen for energy export, employing either liquefaction or the Haber-Bosch process to convert the hydrogen to a form suitable for long-distance transport onboard ships. The model expands on existing research efforts, which primarily employ deterministic optimisation models to derive optimal value chain performance under fixed weather profiles of a year's 'representative operation', assumed to repeat for the facility's entire lifetime. Whilst such analyses derive key findings, especially those which employ temporally distributed meteorological profiles, these mixed-integer supply chain models have perfect foresight, so they can make unrealistic premeditated actions in response to knowledge of future events that, in reality, would not be known a priori.

This non-deterministic operations model describes a month's operation of an islanded hydrogen export facility located in Chile, with the objective of maximising the production rate from the liquefaction and Haber-Bosch processes, respectively. In the base case, strategic operational decisions are made on a weekly basis, assuming a perfect meteorological forecast of the next week's weather is available. These findings are compared to a counterfactual deterministic supply chain model, which has also been used to generate equipment sizes for the non-deterministic model, subject to a cost-minimisation objective. Such a comparison allows the robustness of the findings of these deterministic models to be analysed for the first time, especially regarding production volumes, storage requirements, equipment availability and the quantity of supplementary energy purchased from the grid. Robustness to such considerations is crucial, given the capital-intensive nature of certain technologies, such as hydrogen storage at USD 500,000 per tonne.

When subject to the stochastic system, the average availability of the liquefaction plant fell by 20% and that of the Haber-Bosch process by 30%. As this reduction in throughput is symptomatic of the 'overoptimized' nature of the deterministic model, current equipment specifications from deterministic studies are likely to see suboptimal performance in real systems. Furthermore, whilst many studies consider the production of hydrogen in either 'islanded' or 'grid-connected' mode, thereby deriving different equipment sizes for each scenario, it is likely that real systems will need the inherent flexibility to operate in both modes for periods of time. As such, we present a series of Pareto fronts, demonstrating the influence of different configurational limitations on the output of the production facility.

 
8:30am - 10:30amT4: Model Based optimisation and advanced Control - Including keynote
Location: Zone 3 - Aula D002
Chair: Srinivas Palanki
Co-chair: Radoslav Paulen
 
8:30am - 9:10am

Keynote: Principles and Applications of Model-free Extremum Seeking – A Tutorial Review

Laurent Dewasme, Alain Vande Wouwer

University of Mons, Belgium

Extremum seeking has become a large family of perturbation-based methods whose origins can be traced back to the work of the French Engineer Leblanc in 1922, who sought to transmit electrical power to a train car contactlessly. Since then, extremum seeking has gained significant attention, especially in the past three decades. It is a practical approach aimed at achieving optimal performance of a system by continuously seeking the extremum of an online-measured cost function. The method finds various applications in different fields, provided that the required measurement information is available and an optimality principle can be formulated. While many simulation studies have confirmed the effectiveness of extremum seeking, relatively few experimental studies have been conducted to validate its real-world applications. This gap between theory and practice is a common challenge in real-time optimization and control. This presentation will provide an in-depth introduction to the foundational principles and applications of extremum seeking, including variations and extensions of the method. Practical applications will be presented in various fields, such as energy, biotechnology, and robotics, with, among others, a focus on the authors’ experience through several research studies over the last 15 years.



9:10am - 9:30am

Extremum seeking control applied to operation of dividing wall column

Ivar J. Halvorsen1,2, Mark Haring1, Sigurd Skogestad2

1SINTEF Digital, Norway; 2Norwegian University of Science and Technology

The dividing wall column (DWC) is an attractive arrangement since it offers a significant energy-saving potential compared to conventional column sequences. Realising this saving potential requires control structures that can track the optimal operating point despite inevitable changes in feed properties, performance characteristics and other uncertainties. The characteristics of the optimum are known, given a good model and key measurements that provide precise information about the internal states. However, there will always be uncertainties in the model, in the measurements and in the realisation of the manipulative variables, and the most informative measurements, which would be key composition data inside the arrangement, are usually not available, although some information may be inferred via temperature profiles. Extremum seeking control (ESC) is a model-free optimisation technique that, by active perturbation of selected manipulative variables, infers gradient properties of a measured cost function and thereby enables tracking of a moving optimum; a minimal illustration is sketched below. ESC can also be used in combination with other approaches, e.g. self-optimising control (SOC). The key point is that for any kind of model-based control and optimisation technique there will be remaining uncertainties, such that further adjustments by model-free ESC on top will be beneficial. Several ESC structures for the DWC will be analysed and used to plan experiments on a pilot plant.
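The sketch below shows a textbook-style perturbation ESC loop on a toy static cost with a slowly drifting optimum (not the DWC model): a sinusoidal dither is added to the input, the measured cost is demodulated against the dither to estimate the local gradient, and an integrator drives the input towards the optimum.

# Textbook-style sketch of perturbation-based extremum seeking on a toy static
# cost (not the DWC model): a sinusoidal dither is added to the input, the
# measured cost has its slowly varying mean removed and is demodulated against
# the dither to estimate the local gradient, and an integrator drives the input
# toward the (slowly moving) optimum.
import numpy as np

def cost(u, t):
    u_opt = 2.0 + 0.5 * np.sin(0.002 * t)        # slowly drifting optimum
    return (u - u_opt) ** 2 + 1.0                 # measured cost, e.g. boilup per unit product

dt, a, omega = 0.05, 0.1, 5.0                     # time step, dither amplitude and frequency
k_int, alpha = 0.2, 0.05                          # integrator gain, washout filter constant
u_hat = 0.0                                       # initial input estimate
y_avg = cost(u_hat, 0.0)                          # washout (low-pass) state

for i in range(40000):
    t = i * dt
    y = cost(u_hat + a * np.sin(omega * t), t)    # apply dithered input, measure cost
    y_avg += alpha * (y - y_avg)                  # remove the slowly varying mean
    grad_est = 2.0 / a * (y - y_avg) * np.sin(omega * t)   # demodulation ~ local gradient
    u_hat -= k_int * grad_est * dt                # integrate toward lower cost

print(f"final input: {u_hat:.2f}, current optimum: {2.0 + 0.5 * np.sin(0.002 * 40000 * dt):.2f}")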



9:30am - 9:50am

MORL4PC: An Adaptive Multi-Objective Reinforcement Learning approach for Process Control

Niki Kotecha, Max Bloor, Calvin Tsay, Antonio del Rio Chanona

Sargent Centre for Process Systems Engineering, Imperial College London, United Kingdom

Industrial process control presents a complex challenge of balancing multiple objectives, such as optimizing productivity, reducing energy consumption, minimizing environmental impact, and maintaining safe operation [1]. Traditional model-based control methods often struggle to handle these competing goals, particularly when confronted with unexpected disruptions that deviate from their underlying assumptions. These disturbances can render pre-defined models inaccurate, leading to suboptimal control decisions, emergency shutdowns, and costly downtime [2]. Multi-objective reinforcement learning (MORL) offers a powerful solution to this problem by learning a set of Pareto-optimal policies that provide trade-offs between various objectives. With MORL, operators can seamlessly switch between policies in response to system disruptions, preventing shutdowns and ensuring smoother operation under changing conditions [3].

This paper explores the application of MORL in process control systems, where uncertain and time-varying environments present significant challenges for traditional control methods. By utilizing MORL, we enable controllers to optimize multiple competing objectives simultaneously, generating a Pareto front of policies that allow for flexible decision-making. This Pareto front provides operators with a set of pre-learned policies that offer different trade-offs, enabling quick adaptation to disruptions like equipment failure, raw material fluctuations, or unanticipated system variations.

In this work, we integrate multi-objective evolutionary algorithms (MOEA) within a reinforcement learning framework to find a set of adaptable policies that effectively balance two conflicting objectives. We employ the MOEA to adapt the neural network (policy) parameters by applying the evolutionary algorithm in the parameter space, resulting in a Pareto front in the policy space. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is used to evaluate and sort the policies, yielding a final population that represents a Pareto front of non-dominated solutions; a bare-bones illustration of this selection step is sketched below. Our methodology employs an adaptive strategy in which, when a disruption hits the system, the policy switches dynamically to another policy from the Pareto set obtained during training.
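The sketch below illustrates the selection step only, with toy objective functions in place of closed-loop rollouts and plain non-dominated filtering in place of the full NSGA-II ranking with crowding distance.

# Bare-bones sketch of the selection step only (not the full MOEA-RL framework
# and without NSGA-II crowding distance): policy parameter vectors are mutated,
# each candidate is scored on two conflicting objectives, and the non-dominated
# candidates form the Pareto set. The objectives are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(theta):
    # placeholder rollout returning two conflicting objectives to minimise
    f1 = np.sum((theta - 1.0) ** 2)
    f2 = np.sum((theta + 1.0) ** 2)
    return np.array([f1, f2])

def non_dominated(scores):
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(o <= s) and np.any(o < s)
                        for j, o in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

pop = rng.normal(0.0, 1.0, size=(40, 8))                   # 40 policies, 8 parameters each
for gen in range(30):
    children = pop + rng.normal(0.0, 0.3, size=pop.shape)  # Gaussian mutation in parameter space
    combined = np.vstack([pop, children])
    scores = np.array([evaluate(th) for th in combined])
    front = non_dominated(scores)
    # refill the population from the front (plus random survivors if the front is small)
    fill = list(rng.choice(len(combined), size=max(0, 40 - len(front)), replace=False))
    pop = combined[np.array((front + fill)[:40])]

print("Pareto set size:", len(non_dominated(np.array([evaluate(th) for th in pop]))))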

One of the key advantages of MOEA-RL for process control is its ability to enhance system resilience. When a disruption hits, instead of relying on a single static policy, operators can choose the most appropriate policy from the Pareto front to prioritize certain objectives, such as minimizing waste or reducing energy consumption, while maintaining system stability. This adaptability greatly reduces the need for emergency shutdowns, as the system can continue to operate under new conditions with an optimized, situation-specific policy. This results in improved operational efficiency, fewer downtime incidents, and increased overall process stability. The effectiveness of our method is shown through a series of case studies, simulating various disruptions. Throughout these, our approach consistently showcases adaptability and robustness across diverse disruptions.

In conclusion, MOEA-RL allows the system to seamlessly switch between policies from the trained Pareto set in response to disruptions, allowing operators to respond more effectively, minimizing downtime and improving overall system performance. Future work will focus on incorporating curiosity driven exploration and exploring methods to further enhance policy switching in highly dynamic environments.

[1] Simkoff, J. M., et al. "Process control and energy efficiency." Annu. Rev. Chem. Biomol. Eng. 11 (2020): 423–445.

[2] Bloor, Maximilian, et al. "Control-Informed Reinforcement Learning for Chemical Processes." arXiv preprint arXiv:2408.13566 (2024).

[3] Hayes, Conor F., et al. "A practical guide to multi-objective reinforcement learning and planning." Autonomous Agents and Multi-Agent Systems 36.1 (2022): 26.



9:50am - 10:10am

Perturbation methods for Modifier Adaptation with Quadratic approximation

Mohamed Tarek Aboelnour1,2, Sebastian Engell2

1BASF SE, Germany; 2Technical University Dortmund

In recent years, the field of real-time optimization (RTO) has gained significant attention due to the increasing pressure to operate processing plants optimally from both an economic and emissions point of view. A key problem in RTO is that the plant models must be accurate in order to obtain an admissible and optimal operating point of the plant. While the re-estimation of model parameters online can improve performance if the plant model is structurally correct, Modifier Adaptation (MA) can handle both parametric and structural plant-model mismatches. It is an iterative method that adapts the gradients of the cost function and the gradients of the constraints based on measurement information. Upon convergence, the first-order optimality conditions are satisfied.

In this contribution, we employ Modifier Adaptation with Quadratic Approximation (MAWQA) [1]. MAWQA combines modifier adaptation with a quadratic approximation approach adapted from derivative-free optimization, and thereby alleviates the problem of estimating gradients from noisy measurements. The quadratic approximation is computed using information from past operating points. However, the distribution of the points from which the approximation is computed has a significant influence on its quality. It is possible for the optimization to become stuck in a region away from the true optimum because the information from the previous operating points is not sufficiently rich. In such cases, additional trials (or perturbations) are necessary.
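To make the role of the operating-point set concrete, the sketch below shows only the quadratic-approximation step (not the full MAWQA scheme or the poisedness-improvement procedures discussed later): a quadratic surrogate of a noisy plant cost is fitted from past operating points by least squares and its minimiser is taken as the next candidate.

# Small sketch of the quadratic-approximation step only (not the full MAWQA
# scheme or the poisedness-improvement procedures): fit a quadratic surrogate
# of a measured plant cost from past operating points by least squares.
import numpy as np

rng = np.random.default_rng(7)

def plant_cost(u):
    # unknown "plant" with measurement noise (illustrative, two inputs)
    return (u[0] - 1.0) ** 2 + 2.0 * (u[1] + 0.5) ** 2 + 0.3 * u[0] * u[1] + rng.normal(0, 0.01)

def quad_features(u):
    u1, u2 = u
    return np.array([1.0, u1, u2, u1 * u2, u1 ** 2, u2 ** 2])

# past operating points (would come from the RTO history; here random in a box)
U = rng.uniform(-2.0, 2.0, size=(12, 2))
y = np.array([plant_cost(u) for u in U])

Phi = np.array([quad_features(u) for u in U])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # quadratic surrogate coefficients

# minimiser of the surrogate: solve grad = 0 for the fitted quadratic model
c0, c1, c2, c12, c11, c22 = coef
H = np.array([[2 * c11, c12], [c12, 2 * c22]])        # surrogate Hessian
g = np.array([c1, c2])
u_next = -np.linalg.solve(H, g)                       # candidate next operating point
print("surrogate minimiser:", np.round(u_next, 3))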

For gradients computed by finite differences, it was proposed in [2] to solve an additional optimization problem aimed at maximizing the inverse of the condition number of the inputs to compute a trial point that would best improve the geometry. While this strategy proves effective for simplex gradients, it is less well-suited for quadratic approximations.

In this paper, we propose two different methods from derivative-free optimization for measuring the poisedness of the distribution of the observations. In addition, we introduce perturbation methods that are better suited for improving the quality of quadratic surrogate functions, based on the poisedness measures. The first method leverages Lagrange polynomials to compute a set of inputs with an improved geometric distribution. The second method utilizes pivot polynomials in combination with Gaussian elimination to plan the plant trials. We compare these methods with others proposed in the literature and validate their efficiency using the Williams-Otto reactor benchmark, a well-established test case for RTO methodologies.

[1] Gao, W., Wenzel, S., and Engell, S. (2016). A reliable modifier-adaptation strategy for real-time optimization. Computers & Chemical Engineering, 91, 318–328.

[2] Gao, W. and Engell, S. (2005). Iterative set-point optimization of batch chromatography. Computers & Chemical Engineering, 29, 1401–1409.




10:10am - 10:30am

Optimal Energy Scheduling for Battery and Hydrogen Storage Systems Using Reinforcement Learning

Moritz Zebenholzer1, Lukas Kasper1, Alexander Schirrer2, René Hofmann1

1TU Wien, Institute of Energy Systems and Thermodynamics, Austria; 2TU Wien, Institute of Mechanics and Mechatronics, Austria

Due to the energy transition, the share of renewable forms of energy, such as wind and photovoltaics, is steadily increasing. These are highly volatile, resulting in a gap between generation and demand that must be balanced by storage at all times but is difficult to predict. To accomplish this in a highly efficient and reliable way with respect to time and energy, sector-coupled multi-energy systems (MES) combined with battery and hydrogen storage systems are deployed. The optimal and safe operation of such MES requires operational planning, typically done by rule-based controllers (RBC) in industry, whereas more elaborate model predictive control (MPC) strategies are the subject of current research. This form of optimal control is generally seen as delivering the best possible performance, usually based on minimum operating costs in compliance with the system-relevant constraints.
However, the main obstacle to realizing MPC is that the optimization depends heavily on an adequate model of the system dynamics, which requires extensive effort to build. In addition, the uncertain prediction of stochastic fluctuating quantities such as renewable energy generation, demand and electricity prices strongly affect the control performance. Moreover, in use cases that require long prediction horizons and detailed models, the arising mixed-integer MPC problems may require excessive computation effort.
This work aims to use Reinforcement Learning (RL) to overcome these difficulties without applying elaborate mixed-integer linear programming (MILP). A hybrid neural network (NN) combines a classification module for binary variables with a regression layer for continuous values to efficiently model discontinuous system behaviour; a minimal sketch of this structure is given below. The self-learning algorithm, which requires no prior knowledge of the system dynamics, can inherently learn the effect of uncertain input variables on the system purely through interaction with the model. In a case study, it is demonstrated that RL can learn complex system behaviour with a quality comparable to MPC and outperforms the RBC. The trained policy of the RL agent is then deployed with significantly lower computational effort.
Methods shall be developed in future work to enable the RL agent to enhance or outperform the MPC operational strategy. Here, statistical analyses will be used to derive additional information about the predictions of energy generation, demand and electricity prices to obtain an enriched RL agent. In addition, feature and reward engineering will incorporate relevant information into the training process.
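A minimal PyTorch sketch of the hybrid network structure referred to above is given below; layer sizes, the state vector and the numbers of binary and continuous decisions are placeholders, not the configuration used in the study.

# Minimal PyTorch sketch of a hybrid network with a shared trunk, a
# classification head for binary on/off (commitment) decisions and a regression
# head for continuous setpoints. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, n_state=12, n_binary=3, n_cont=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_state, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, n_binary)    # logits for on/off decisions
        self.reg_head = nn.Linear(hidden, n_cont)      # continuous power/storage setpoints

    def forward(self, state):
        h = self.trunk(state)
        on_prob = torch.sigmoid(self.cls_head(h))      # probability of each unit being on
        setpoint = self.reg_head(h)                    # unconstrained continuous actions
        return on_prob, setpoint

policy = HybridPolicy()
state = torch.randn(8, 12)                             # batch of 8 system states (placeholder)
on_prob, setpoint = policy(state)
commit = (on_prob > 0.5).float()                       # thresholded binary decisions at deployment
print(commit.shape, setpoint.shape)                    # torch.Size([8, 3]) torch.Size([8, 4])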

 
8:30am - 10:30amT5: Concepts, Methods and Tools - Session 4
Location: Zone 3 - Room E030
Chair: Gintaras Reklaitis
Co-chair: Guido Sand
 
8:30am - 8:50am

Enhancing Batch Chemical Manufacturing via Development of Deep Learning based Predictive Monitoring with Transfer Learning

Yee Hung Hong, Zhao Jinsong

Tsinghua University, China, People's Republic of

In the specialized field of chemical engineering, particularly in managing batch chemical processes, the necessity for precise and reliable monitoring and control systems is critical. These processes are episodic, traversing various operational phases under fluctuating conditions, which introduce challenges such as high dimensionality, significant batch-to-batch variability, non-linearity, and dynamic behavior. Traditional monitoring systems often fall short in adapting to the unpredictability of batch operations, highlighting the urgent need for an advanced, adaptable solution.

Our study introduces a novel approach integrating a deep learning framework with transfer learning to significantly enhance the accuracy and adaptability of process monitoring in batch chemical processes. This methodology leverages Temporal Convolutional Networks (TCN) for feature extraction and Multi-Layer Perceptrons (MLP) for predicting Quality-Indicative Variables (QIV), forming the core of our innovative process monitoring system. TCNs excel in analyzing temporal data, making them perfectly suited for capturing the complex temporal dependencies and patterns characteristic of batch process operational phases. This meticulous feature extraction by TCNs provides a comprehensive understanding of process dynamics, essential for the predictive modeling that follows. The extracted features are then processed by an MLP architecture, tasked with predicting QIVs. The MLP's predictive capability is crucial for preemptive process monitoring, enabling timely intervention to rectify issues before they develop into significant problems.

To ensure the model's robustness and adaptability across various operational strategies and conditions (a frequent scenario in batch processing where modifications are commonly made), our approach incorporates transfer learning. This allows the model to adjust to new or altered processes with minimal retraining by leveraging previously learned features and patterns, ensuring the fault detection system's effectiveness and reliability even as the monitored process evolves.

A case study using a simulation dataset from a penicillin fermentation simulation process (IndPenSim) validates our method. This simulation offers a complex, dynamic environment that closely simulates real-world batch chemical processing challenges. The successful application of our methodology to the IndPenSim dataset underscores its ability to accurately predict QIVs and detect potential process deviations, affirming the potential of our integrated TCN and MLP framework, augmented with transfer learning, to revolutionize process monitoring and control in the chemical engineering sector.

Our study's integration of TCN-based feature extraction with MLP-based QIV prediction, enhanced by strategic transfer learning, marks a significant advancement in chemical engineering. This comprehensive approach addresses the complexities of monitoring and controlling batch chemical processes and offers a model of unmatched accuracy and flexibility. By providing a solution adaptable to the dynamic nature of batch operations, our study represents a significant step towards improved operational efficiency and safety in the chemical processing industry, heralding a new era of precision and reliability in batch process monitoring and control.
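A condensed PyTorch sketch of the described architecture is given below: dilated 1-D convolutions serve as a TCN-style feature extractor, an MLP head predicts a quality-indicative variable, and transfer learning is realised by freezing the extractor and retraining only the head. All dimensions are placeholders, not the configuration used in the study.

# Condensed PyTorch sketch: dilated 1-D convolutions as a TCN-style feature
# extractor, an MLP head predicting a quality-indicative variable (QIV), and
# transfer learning by freezing the extractor and retraining only the head on a
# modified process. Dimensions are placeholders.
import torch
import torch.nn as nn

class TCNFeatureExtractor(nn.Module):
    def __init__(self, n_vars=20, channels=32, levels=3):
        super().__init__()
        layers, in_ch = [], n_vars
        for i in range(levels):
            d = 2 ** i                                     # exponentially growing dilation
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3, dilation=d, padding=d),
                       nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*layers)

    def forward(self, x):                                  # x: (batch, n_vars, time)
        return self.net(x).mean(dim=-1)                    # pooled feature vector

class QIVPredictor(nn.Module):
    def __init__(self, extractor, channels=32):
        super().__init__()
        self.extractor = extractor
        self.head = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.extractor(x))

model = QIVPredictor(TCNFeatureExtractor())
x = torch.randn(16, 20, 200)                               # 16 batches, 20 variables, 200 time steps
print(model(x).shape)                                      # -> torch.Size([16, 1])

# transfer learning to a modified operating strategy: freeze the extractor and
# retrain only the head with a small amount of new batch data
for p in model.extractor.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)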



8:50am - 9:10am

Soft-Sensor-Enhanced Monitoring of an Alkylation Unit via Multi-Fidelity Model Correction

Rastislav Fáber1, Marco Vaccari2, Riccardo Bacci di Capaci2, Karol Ľubušký3, Gabriele Pannocchia2, Radoslav Paulen1

1Slovak University of Technology in Bratislava, 812 37 Bratislava, Slovakia; 2Department of Civil and Industrial Engineering, University of Pisa, 561 22 Pisa, Italy; 3Slovnaft, a.s., Bratislava 824 12, Slovakia

Accurate dynamic modeling is essential for optimizing refinery operations, such as alkylation units, where process precision dictates performance outcomes. This study investigates multi-fidelity (MF) modeling techniques, utilizing historical measurements from a comprehensive industrial dataset. The dataset, consisting of 1085 online process measurements, serves as the foundation for both static and dynamic modeling approaches. We explore the efficacy of low-fidelity data from online composition analyzers combined with high-fidelity data from infrequent laboratory sampling to improve operational decision making.

We implement a novel correction strategy based on a Gaussian process (GP) (Rasmussen, 2004) to construct an MF model from a low-fidelity (LF) model and high-fidelity (HF) data. We train a static LF model using Principal Component Regression, Partial Least Squares, LASSO and Stepwise regression (SR). These models demonstrate notable practicality and computational efficiency. SR emerges as the most suitable method, balancing predictive accuracy and simplicity, and achieves satisfactory normalized RMSE values on both the training and testing portions of the LF data. The dynamic models, trained using the Systems Identification Package for Python (SIPPY) (Armenise et al., 2018), effectively capture time-dependent behavior, yet require a more complex structure to reach satisfactory prediction performance. The MF models, realized through the GP-based correction, capture the residual error between the LF model and the HF data.

The MF models based on dynamic LF models outperform those based on static LF models, achieving an RMSE of 0.37. For comparison, an HF model trained only on HF data reached an RMSE of 0.58. This result shows a major potential for improving decision making in complex industrial processes through a deeper understanding of the process behavior and an integrated use of different types of data.
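A minimal sketch of the GP-based correction idea (assuming scikit-learn; the low-fidelity model and the data below are synthetic placeholders, not the industrial alkylation dataset):

# Minimal sketch of GP-based multi-fidelity correction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def lf_model(x):
    """Stand-in for the static/dynamic low-fidelity model (e.g., stepwise regression)."""
    return 0.8 * x[:, 0] + 0.1

rng = np.random.default_rng(0)
x_hf = rng.uniform(0, 1, size=(30, 1))                            # inputs where lab (HF) samples exist
y_hf = np.sin(3 * x_hf[:, 0]) + 0.05 * rng.standard_normal(30)    # infrequent HF measurements

# Train the GP on the residual between HF data and LF predictions.
residual = y_hf - lf_model(x_hf)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x_hf, residual)

# Multi-fidelity prediction = LF model + GP correction (with uncertainty).
x_new = np.linspace(0, 1, 5).reshape(-1, 1)
corr_mean, corr_std = gp.predict(x_new, return_std=True)
y_mf = lf_model(x_new) + corr_mean
print(np.round(y_mf, 3), np.round(corr_std, 3))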



9:10am - 9:30am

A data-driven hybrid multi-objective optimization framework for Large-Scale partial differential algebraic equation systems

Siyang Ma, Jie Li

University of Manchester, United Kingdom

Optimizing large-scale partial differential algebraic equation (PDAE) systems has always been a challenging task in chemical engineering. In recent years, there has been an increasing demand for multi-objective optimization of PDAE systems with coupled constraints. Therefore, it is necessary to develop an optimization framework that can be used to solve such problems. The main challenges in optimizing PDAE systems with coupled constraints include:

1. The coupled constraints with PDAEs make it difficult to obtain feasible solutions.

2. Solving PDAEs is very expensive, and satisfactory Pareto fronts need to be obtained within a limited number of solution iterations.

3. The gradient is not available, which greatly restricts the use of gradient-based algorithms.

Pressure swing adsorption (PSA) for gas separation is a typical PDAE problem and the most common research subject. The commonly used method for solving PSA problems is the Nondominated Sorting Genetic Algorithm II (NSGA-II). Although this heuristic algorithm is computationally expensive, it can obtain a good Pareto front. Researchers have also used Artificial Neural Networks to solve PSA problems, which can achieve good results but require a large number of PDAE solutions to generate training samples. Recently, Hao et al. developed a hybrid optimization framework that efficiently solves unconstrained PSA optimization problems using the TSEMO algorithm and DyOS, but it cannot handle PSA problems with coupled constraints. Pini et al. used a penalty function method to handle coupled constraints, but the selection of penalty function parameters becomes a new issue.

To tackle this, we propose a hybrid optimization framework that integrates three steps. In the first step, we establish surrogate models for the constraints using Gaussian processes (GPs) and employ multi-objective Bayesian optimization to search for feasible points that satisfy the constraints. In the second step, we establish surrogate models for the objective functions and constraints using GPs and utilize constrained multi-objective Bayesian optimization to search for an approximate Pareto front. In the third step, we perform a local search starting from the approximate Pareto front. By employing the trust-region filter method, we construct quadratic models for each constraint and objective function and refine the Pareto front to achieve local optimality. This framework combines the efficiency of Bayesian optimization with the local optimality of the trust-region method. A comparison with the popular evolutionary algorithm NSGA-II showed that this framework achieved a higher hypervolume of the Pareto front while halving the runtime and reducing the number of simulations by a factor of 20.
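A minimal sketch of the surrogate-assisted constraint handling used in the first two steps (assuming scikit-learn GPs and a simple probability-of-feasibility filter; the objectives, constraint, and candidate pool are cheap illustrative stand-ins for the PSA simulator):

# Minimal sketch: GP surrogates for objectives and a constraint, with candidate
# selection weighted by the estimated probability of feasibility.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objectives(x):            # placeholders for e.g. purity and recovery (to be maximized)
    return np.column_stack([np.sin(3 * x[:, 0]), np.cos(2 * x[:, 1])])

def constraint(x):            # placeholder coupled constraint, feasible when g(x) <= 0
    return x.sum(axis=1) - 1.2

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(12, 2))                       # initial expensive "simulations"
F, G = objectives(X), constraint(X)

gp_f = [GaussianProcessRegressor(normalize_y=True).fit(X, F[:, i]) for i in range(2)]
gp_g = GaussianProcessRegressor(normalize_y=True).fit(X, G)

cand = rng.uniform(0, 1, size=(2000, 2))                  # cheap candidate pool
mu_g, sd_g = gp_g.predict(cand, return_std=True)
p_feas = norm.cdf((0.0 - mu_g) / np.maximum(sd_g, 1e-9))  # P[g(x) <= 0]

# Scalarized acquisition: random-weight sum of predicted objectives, weighted by feasibility.
w = rng.dirichlet([1, 1])
score = p_feas * sum(w[i] * gp_f[i].predict(cand) for i in range(2))
x_next = cand[np.argmax(score)]
print("next point to simulate:", np.round(x_next, 3))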



9:30am - 9:50am

Computer Vision Approach based on Mutual Information for Measuring Interface Level in Process Equipment

Sakshi Rasanya1, Babji Srinivasan2,3, Rajagopalan Srinivasan1,3

1Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai; 2Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai; 3American Express Lab for Data Analytics, Risk and Technology, Indian Institute of Technology Madras, Chennai

Accurate and continuous monitoring of process variables is crucial for ensuring process control and safety. Interface level is a commonly occurring variable in many processes. Typically, differential pressure cells and nucleonic profilers are used to measure interface level [1]. However, these sensors may provide inaccurate measurements due to malfunctions, potentially leading to equipment downtime and complicating process control and supervision. At worst, sensor failure can lead to disasters, like the 2005 Buncefield incident, where a stuck sensor and a disabled safety switch caused a petrol tank to overfill [2]. Computer vision techniques can be utilized to overcome these issues. Images can provide valuable insights into process equipment and their dynamics. Monitoring cameras can be installed at remote locations to analyze video data from the equipment's sight glass. This serves as motivation to develop robust image-based sensors for interface-level detection.

Previously, liquid level detection has been studied using techniques such as image segmentation and edge detection. Jampana et al. (2010) proposed a simple edge detection method combined with a particle filter to estimate interface level. Liu et al. (2016) proposed a Markov random field-based image segmentation method to convert raw data into binary images and utilized the vertical profile of averaged pixel values for level measurement. Vicente (2019) estimated the froth-middlings interface level using static and dynamic image processing. The performance of such methods degrades when the image quality varies, for example due to lighting changes, occlusion, and artifacts on the sight glass. We seek a robust method that addresses these complexities.

We propose an unsupervised approach to detect interface levels using mutual information within pixels of an image. Our method utilizes extremely local pixel features and baseline rarity between two features to detect crisp and highly localized edges. Affinity is modelled using pointwise mutual information, i.e., the log ratio of the observed joint probability of a feature pair to the probability expected if the features occurred independently in the image. Low affinity is expected across object boundaries and high affinity within similar texture regions. The overall affinity function guides pixel grouping. This is followed by spectral clustering to identify boundaries in an image. After segmentation, we apply vertical profiling, averaging pixel values for each row across all columns to detect abrupt intensity changes that suggest potential level positions in the images. The methodology's effectiveness is evaluated on a lab-scale unit in the IIT Madras process control lab. Various noise factors were introduced, such as lighting variations, stains, artifacts, and occlusion on the sight glass. Results from the proposed level detection are evaluated against ground truth and demonstrate that the model has high accuracy even in the presence of various noise sources.
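A minimal illustration of the pointwise-mutual-information idea on a synthetic sight-glass image (numpy only; the image, intensity binning, and row-affinity profiling are simplified assumptions, not the full pixel-affinity and spectral-clustering pipeline described above):

# Minimal illustration: PMI-based boundary detection on a synthetic two-phase image.
import numpy as np

rng = np.random.default_rng(0)
H, W, LEVEL = 120, 40, 70                                   # image size and true interface row
img = np.where(np.arange(H)[:, None] < LEVEL, 0.3, 0.7)     # dark phase above, bright below
img = img + 0.05 * rng.standard_normal((H, W))              # sensor noise

# Quantize intensities and count co-occurrences of vertically adjacent pixel pairs.
bins = np.clip(np.digitize(img, np.linspace(0, 1, 9)), 0, 9)
counts = np.zeros((10, 10))
for a, b in zip(bins[:-1].ravel(), bins[1:].ravel()):
    counts[a, b] += 1
p_joint = counts / counts.sum()
p_top, p_bottom = p_joint.sum(axis=1), p_joint.sum(axis=0)

# Pointwise mutual information: log of the observed joint probability over the independence baseline.
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_joint / np.outer(p_top, p_bottom))
pmi = np.nan_to_num(pmi, neginf=-5.0)

# Row affinity profile: mean PMI between each row and the next; the minimum marks the boundary.
row_affinity = np.array([pmi[bins[r], bins[r + 1]].mean() for r in range(H - 1)])
print("estimated interface row:", int(np.argmin(row_affinity)), "(true row:", LEVEL, ")")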

References

  1. Jampana et al. 2010. Computer vision-based interface level control in a separation cell.
  2. Ansaldi et al. 2016. Incidents Triggered by Failures of Level Sensors.
  3. Vicente et al. 2019. Computer vision system for froth-middlings interface level detection in the primary separation vessels.


9:50am - 10:10am

Kolmogorov Arnold Networks as surrogate models for process optimization

Tanuj Karia, Giacomo Lastrucci, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology

Surrogate models are widely used for improving the tractability of process optimization (Misener and Biegler, 2023). Some commonly used surrogate models are obtained via machine learning, such as multi-layer perceptrons (MLPs), Gaussian processes, and decision trees. Recently, a new class of machine learning models named Kolmogorov-Arnold Networks (KANs) has been proposed (Liu et al., 2024). Broadly, KANs are similar to MLPs, yet they are based on the Kolmogorov representation theorem instead of the universal approximation theorem underlying MLPs. Compared to MLPs, it was reported that KANs require significantly fewer parameters to approximate a given input/output relationship (Liu et al., 2024). One of the bottlenecks preventing the embedding of MLPs into optimization formulations is that MLPs with a high number of parameters (larger width or depth) are more challenging to optimize globally (Schweidtmann and Mitsos, 2019). We investigate whether the parameter efficiency of KANs relative to MLPs can be translated into computational benefits when embedding them into optimization problems and solving them to global optimality. We apply our recently proposed mixed-integer nonlinear programming formulation of a KAN. Three case studies of varying input dimensions, considering both regression (1 and 2) and classification (3) tasks, are chosen from the literature: (1) optimization of an autothermal reforming process (Bugosen et al., 2024), (2) optimization of a methanol synthesis process (Bampaou et al., 2023), and (3) designing homogeneous solvent mixtures for pharmaceutical crystallization (Karia et al., 2024). We observe that KANs offer significant computational benefits over MLPs, particularly when globally optimizing over surrogate models with fewer than five inputs. For surrogate models with higher input dimensions, stronger formulations must be developed to improve the global optimization of models with KANs embedded.
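A minimal sketch of what a single KAN layer looks like: every edge carries its own learnable univariate function expressed over a fixed spline basis (numpy; the hat-function basis and all sizes are illustrative simplifications of the B-spline edges in Liu et al., 2024):

# Minimal sketch of a KAN layer: each edge applies a learnable 1-D function,
# here a linear combination of piecewise-linear (hat) basis functions.
import numpy as np

def hat_basis(x, grid):
    """Evaluate hat (order-1 spline) basis functions on a fixed grid, shape (..., n_basis)."""
    d = np.abs(x[..., None] - grid) / (grid[1] - grid[0])
    return np.maximum(0.0, 1.0 - d)

class KANLayer:
    def __init__(self, n_in, n_out, n_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = np.linspace(-1, 1, n_basis)
        # One coefficient vector per edge (input i -> output j): the learnable 1-D functions.
        self.coef = 0.1 * rng.standard_normal((n_in, n_out, n_basis))

    def __call__(self, x):                     # x: (batch, n_in), assumed scaled to [-1, 1]
        phi = hat_basis(x, self.grid)          # (batch, n_in, n_basis)
        edge_out = np.einsum("bik,iok->bio", phi, self.coef)   # each edge's f_ij(x_i)
        return edge_out.sum(axis=1)            # node j sums its incoming edge functions

layer = KANLayer(n_in=3, n_out=2)
x = np.random.default_rng(1).uniform(-1, 1, size=(5, 3))
print(layer(x).shape)                          # (5, 2); parameters per edge: n_basis = 8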

References

M. Bampaou, S. Haag, A.-S. Kyriakides, K. Panopoulos, P. Seferlis, 2023. Optimizing methanol synthesis combining steelworks off-gases and renewable hydrogen. Renewable and Sustainable Energy Reviews 171, 113035. URL https://www.sciencedirect.com/science/article/pii/S1364032122009169

S. Bugosen, C. D. Laird, R. B. Parker, 2024. Process Flowsheet Optimization with Surrogate and Implicit Formulations of a Gibbs Reactor. Systems and Control Transactions 3, 113 – 120, The Proceedings of the 10th International Conference on Foundations of Computer Aided Process Design (FOCAPD 2024). URL https://doi.org/10.69997/sct.148498

T. Karia, G. Chaparro, B. Chachuat, C. S. Adjiman, 2024. Classifier surrogates to ensure phase stability in optimisation-based design of solvent mixtures. Available at SSRN 4898054. URL https://dx.doi.org/10.2139/ssrn.4898054

Z. Liu, Y. Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljačić, T. Y. Hou, M. Tegmark, 2024. KAN: Kolmogorov-Arnold Networks. URL https://arxiv.org/abs/2404.19756

R. Misener, L. Biegler, 2023. Formulating data-driven surrogate models for process optimization. Computers & Chemical Engineering 179, 108411. URL https://www.sciencedirect.com/science/article/pii/S0098135423002818

A. M. Schweidtmann, A. Mitsos, 2019. Deterministic Global Optimization with Artificial Neural Networks Embedded. Journal of Optimization Theory and Applications 180 (3), 925–948. URL https://doi.org/10.1007/s10957-018-1396-0



10:10am - 10:30am

Data-driven modeling of dynamic systems via convolutional neural networks

Christian Hoffmann, Joshua Reichert, Janina Deichl, Jens-Uwe Repke

Technische Universität Berlin, Process Dynamics and Operations Group, Straße des 17. Juni 135, 10623 Berlin, Germany

Data-driven models have become a valuable asset for real-time applications in chemical and process engineering. Typical examples include feed-forward neural networks, recurrent neural networks, or networks with long short-term memory. However, these networks may struggle when many dynamic inputs are required. This is typically the case for systems with large time constants where there is a significant time period between an input and its observable consequence for outputs. Within this contribution, we propose the use of convolutional neural networks (CNNs) to counteract this problem.

CNNs are currently used primarily for image processing. They reduce the dimension of the input space via so-called convolutions before a standard neural network is trained. These convolutions are averaging operations on the inputs and outputs. Of particular interest are the filter size, i.e., the number of inputs that are considered when calculating this average, and the stride, i.e., the number of inputs the filter is moved before calculating the next average. The number of filters used determines the number of feature maps. Each filter receives its own filter weights; hence, more filters result in a larger number of trainable parameters. Too many feature maps cause overfitting and poor prediction outside the training data.

To study the potential of CNNs for dynamic models in chemical engineering, training data is generated and normalized. Afterwards, the training algorithm is started and combined with hyperparameter tuning. This hyperparameter tuning determines the optimal number of past states and controls, i.e., how much data from the past is required, the filter size and stride, the number of feature maps, and the activation function for the neural network. The CNN framework is applied to two case studies: the chlor-alkali electrolysis (CAE, a simple, almost linear example) and a capillary tube, which serves as the throttling device in a heat pump (a more complex case with hysteresis).
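A minimal sketch of a CNN of the kind described, mapping a window of past controls and states to the current outputs (assuming PyTorch; the window length, filter size, stride, and variable counts are placeholders for the tuned hyperparameters mentioned above):

# Minimal sketch: a 1D-CNN dynamic model. A window of past signals is compressed
# by a strided convolution, then a small dense network predicts the outputs.
import torch
import torch.nn as nn

class DynamicCNN(nn.Module):
    def __init__(self, n_signals=4, n_outputs=2, window=128,
                 filters=8, kernel_size=9, stride=4):
        super().__init__()
        self.conv = nn.Conv1d(n_signals, filters, kernel_size, stride=stride)
        n_steps = (window - kernel_size) // stride + 1      # length after the convolution
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(filters * n_steps, 32), nn.Tanh(),
                                  nn.Linear(32, n_outputs))

    def forward(self, x):            # x: (batch, window, n_signals), past inputs and states
        return self.head(torch.relu(self.conv(x.transpose(1, 2))))

model = DynamicCNN()
past = torch.randn(16, 128, 4)       # e.g. current density, temperature, dilution, feed
print(model(past).shape)             # torch.Size([16, 2]), e.g. the two predicted mass fractions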

For the CAE, the training and validation data is generated with a rigorous dynamic process model [1]. The dynamic model receives the current density, inlet temperature, water dilution, and inlet anolyte feed as inputs. The mass fractions of sodium ions on both the anolyte and the catholyte side are predicted outputs.

The data of the capillary process was generated within a test rig in our lab. The CNN takes inlet pressure, inlet enthalpy, and mass flow as inputs, whereas outlet pressure and enthalpy are predicted. This second case study shows hysteresis in the measured data, which makes it a more challenging application of CNNs.

To analyze the computational advantages of this network type, the results are compared to those for more classical structures, i.e., a recurrent neural network. It is found that CNNs can reduce the input space for dynamic systems and that CNNs can automatically recognize at which frequency new measurements are required to accurately describe the dynamic profiles.

References

[1] J. Weigert, C. Hoffmann, E. Esche, P. Fischer, J.-U. Repke (2021): Towards demand-side management of the chlor-alkali electrolysis: Dynamic modeling and model validation. Computers & Chemical Engineering 149, 107287. DOI: 10.1016/j.compchemeng.2021.107287.

 
8:30am - 10:30amT6: Digitalization and AI - Session 3
Location: Zone 3 - Room E033
Chair: Rajagopalan Srinivasan
Co-chair: Dongda Zhang
 
8:30am - 8:50am

AI-Driven Automatic Mechanistic Model Transfer Learning for Accelerating Process Development

Alexander William Rogers1, Amanda Lane2, Philip Martin1, Dongda Zhang1

1The University of Manchester, United Kingdom; 2Unilever Research Port Sunlight, United Kingdom

Identifying accurate kinetic models for new biochemical systems is a great challenge. Kinetic models are represented by differential equations, where the parameters and terms of the symbolic expressions hold physical significance. Hybrid modelling and traditional machine-learning transfer learning can leverage previously discovered relations about different but related systems to minimise the time and experimental resources necessary to develop accurate predictive models for new systems. However, these methods are not interpretable: by only updating the data-driven and kinetic parameters of an existing hybrid model, they leave the kinetic model structure unchanged and are therefore unable to provide additional physical insight into the newly investigated system.

To address this challenge, we propose a novel model structural transfer learning methodology that integrates symbolic regression (SR) with artificial neural network (ANN) feature attribution to streamline the discovery of interpretable differential equations for new biochemical reaction systems. The feature attribution technique effectively guides SR towards targeted modifications for existing erroneous or low-fidelity mechanistic models, addressing the traditional challenge of efficiently exploring the large combinatorial space of expression structures. More importantly, this approach, combined with strategic sampling and model-based design of experiments (MbDoE), maximises knowledge extraction while minimising experimental resource requirements.

Through a comprehensive in-silico case study, our framework effectively adapted the structure of a kinetic model taken from one biochemical system to a new but related biochemical system, discovering the underlying kinetic equations. The predictive accuracy and uncertainty are then benchmarked against traditional hybrid modelling techniques. The impact of prior knowledge quantity and fidelity is also explored, demonstrating the framework's ability to either rebuild equations from scratch or make targeted corrections during model structural transfer learning. To glean a physical interpretation of the differences in the underlying process mechanisms, the modified terms can be compared and their physical meaning readily understood, altogether highlighting the framework's significant potential for advancing automated knowledge discovery and novel biochemical process development.
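A minimal sketch of the kind of ANN feature attribution that can guide symbolic regression towards the terms to modify, using input gradients of a network trained on the mismatch between a prior kinetic model and new-system data (PyTorch; the data, network, and state names are hypothetical, not the case-study system):

# Minimal sketch: gradient-based attribution on an ANN fitted to the residual between
# a prior kinetic model and data; large attributions flag which states/terms the
# symbolic regression should target for structural modification.
import torch
import torch.nn as nn

states = ["biomass", "substrate", "product"]
rng = torch.Generator().manual_seed(0)
X = torch.rand(200, 3, generator=rng)                                          # measured states
residual = 0.5 * X[:, 1:2] ** 2 + 0.01 * torch.randn(200, 1, generator=rng)    # model mismatch

net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                                       # fit the residual model
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), residual)
    loss.backward()
    opt.step()

# Attribution: mean absolute gradient of the predicted residual w.r.t. each input state.
Xg = X.clone().requires_grad_(True)
net(Xg).sum().backward()
attribution = Xg.grad.abs().mean(dim=0)
for name, a in zip(states, attribution):
    print(f"{name}: {float(a):.3f}")   # the substrate input should receive the largest attribution here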



8:50am - 9:10am

CompArt: Next-Generation Compartmental Models for Complex Systems Powered by Artificial Intelligence

Antonello Raponi, Zoltan Nagy

Purdue University, United States of America

The CompArt project explores new approaches to modelling complex systems, focusing on the integration of next-generation compartmental models powered by artificial intelligence (AI). Our primary objective is to streamline three-dimensional (3D) Computational Fluid Dynamics (CFD) simulations while preserving critical spatial characteristics and system heterogeneity. By modelling the 3D system as a network of interconnected sub-systems with uniform properties[1], we significantly reduce computational costs and enhance simulation efficiency without compromising accuracy. A central innovation of CompArt lies in the application of AI-driven clustering techniques for the compartmentalization process. This approach enables the AI to autonomously determine the optimal number of compartments based on user-defined parameters, automating a traditionally expert-driven process.
Consequently, our framework becomes accessible to a broader audience, including non-experts, while delivering transparent and user-friendly outputs. Although multiple clustering techniques such as k-means, agglomerative clustering, and DBSCAN were tested, here we report the results obtained using Self-Organizing Maps (SOMs). SOMs are unsupervised neural networks that project high-dimensional data onto a two-dimensional grid while preserving the topological relationships of the input space. This makes them particularly effective for clustering spatial data, such as velocity distributions. The results demonstrate how the choice of input parameters influences clustering. When only the velocity distribution is used, SOMs accurately capture zones of differing velocities, mapping system heterogeneity effectively. However, relying solely on velocity is insufficient in many chemical engineering processes. In reactive crystallization, for instance, a critical variable is the turbulent kinetic energy dissipation rate (ε) [2]. When both velocity and ε are used as inputs, the optimal number of clusters decreases, reflecting a trade-off between the two variables that better describe the process heterogeneity. The key takeaway is that the algorithm automatically determines the optimal hyperparameters, such as the map size and learning rate, through a Silhouette score, removing the need for user intervention or prior knowledge. Additionally, the algorithm can handle multiple controlling variables simultaneously. Beyond velocity and ε, it could incorporate variables such as the saturation field or temperature, critical parameters in processes like pharmaceutical manufacturing. In this regard, CompArt aims to address the critical challenges of scaling processes, particularly where process heterogeneity impacts Critical Quality Attributes (CQA). This capability is vital in pharmaceutical manufacturing applications such as stirred tank reactors, where scaling decisions directly influence product quality. The AI-driven model, capable of integrating key variables, represents a flexible and powerful tool for capturing system heterogeneity, enabling more efficient and accurate process simulations across various industrial sectors.
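A minimal sketch of the clustering-based compartmentalization idea, here using k-means with the Silhouette score selecting the number of compartments (scikit-learn; synthetic velocity/ε cell data stands in for the SOM and the real CFD fields):

# Minimal sketch: automatic compartmentalization by clustering CFD cells on velocity
# magnitude and turbulent dissipation rate, with the Silhouette score picking k.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic "CFD cells": three zones with distinct velocity and epsilon levels.
zones = [(0.2, 0.01), (1.0, 0.10), (2.5, 0.60)]
cells = np.vstack([rng.normal(loc=z, scale=(0.1, 0.02), size=(500, 2)) for z in zones])
features = StandardScaler().fit_transform(cells)           # [|u|, epsilon] per cell

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 8):                                       # candidate numbers of compartments
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    score = silhouette_score(features, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print(f"selected {best_k} compartments (silhouette = {best_score:.2f})")
print("cells per compartment:", np.bincount(best_labels))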

References

(1) Jourdan, N.; Neveux, T.; Potier, O.; Kanniche, M.; Wicks, J.; Nopens, I.; Rehman, U.; Le Moullec, Y.; "Compartmental Modelling in chemical engineering: A critical review", Chemical Engineering Science, 2019, 210, 115196.

(2) Raponi, A.; Achermann, R.; Romano, S.; Trespi, S.; Mazzotti, M.; Cipollina, A.; Buffo, A.; Vanni, M.; Marchisio, D.; "Population balance modelling of magnesium hydroxide precipitation: Full validation on different reactor configurations", Chemical Engineering Journal, 2023, 477, 146540.



9:10am - 9:30am

Large Language models (LLMs) for reverse engineering of perovskite solar cells

Naveen Bhati1, Mohammad Khaja Nazeeruddin2, François Maréchal1

1Industrial Process and Energy Systems Engineering, Ecole Polytechnique Fedérale de Lausanne, Switzerland; 2Institute of Chemical Sciences and Engineering, Ecole Polytechnique Fedérale de Lausanne, Switzerland

With climate change taking a prominent role in driving innovation, a focus on developing novel renewable energy technologies has become imperative. Recently, perovskite solar cells have achieved an efficiency of >26.5% [1] in single-junction devices and >34.5% [1] in silicon/perovskite tandem solar cells. However, issues related to stability [2] still pose the most pressing challenge in taking this novel technology, which has the potential to compete with existing PV technologies in terms of cost and environmental footprints [3], to market. With the huge amount of data in the literature on perovskite solar cells, it is crucial to use this data optimally and efficiently, not only to identify the trends in the existing data but also to generate new recipes that might have better stability than the existing recipes in the literature. Moreover, the advent of large language models (LLMs) and significant progress in using them for specific tasks have opened avenues to test their abilities on more scientific problems, apart from more generic tasks like summarization or text completion [4]. In this research, an attempt has been made to use both open-source LLMs like Llama and closed-source LLMs like OpenAI GPT models to generate recipes that could have similar or better performance compared to the existing ones. This involves testing different data formats, large language models, and hyperparameter fine-tuning to get the best performance from these models. The modeling approach is a first attempt to generate not only material choices for the main layers, such as the electron transport layer, hole transport layer, perovskite layer, and rear and front electrodes, but also the different additives, processing routes, and other process steps involved in the complete fabrication of perovskite solar cells. The detailed architecture for dealing with this problem involves using LLMs both to generate the recipe strings and to predict the properties of the generated strings, so as to fine-tune the generating LLM towards the required stability targets. The architecture is inspired by the framework of tuning instruction-tuned GPT models. Using the proposed framework, the capabilities of LLMs in interpreting causal relationships will help in tackling the challenges of optimizing material-process design problems (MPDPs), along with identifying the limitations of the LLMs.

References:

  1. National Renewable Energy Laboratory, Best Research-Cell Efficiency Chart
  2. Duan, L., Walter, D., Chang, N., Bullock, J., Kang, D., Phang, S. P., ... & Shen, H. (2023). Stability challenges for the commercialization of perovskite–silicon tandem solar cells. Nature Reviews Materials, 8(4), 261-281.
  3. Bhati, N., Nazeeruddin, M. K., & Maréchal, F. (2024). Environmental impacts as the key objectives for perovskite solar cells optimization. Energy, 299, 131492.
  4. Xie, T., Wan, Y., Zhou, Y., Huang, W., Liu, Y., Linghu, Q., ... & Hoex, B. (2024). Creation of a structured solar cell material dataset and performance prediction using large language models. Patterns, 5(5).


9:30am - 9:50am

Enhancing Predictive Maintenance in Used Oil Re-Refining: A Hybrid Machine Learning Approach

Francesco Negri1,2, Andrea Galeazzi3,4, Francesco Gallo1, Flavio Manenti2

1Itelyum Regeneration S.p.A., Via Tavernelle 19, Pieve Fissiraga 26854, Lodi, Italy; 2Politecnico di Milano, CMIC Dept. “Giulio Natta”, Piazza Leonardo da Vinci 32, Milan 20133, Italy; 3Sargent Centre for Process Systems Engineering, Imperial College London, SW7 2AZ, United Kingdom; 4Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom

Maintenance is vital to the smooth operation and safety of any industrial plant. Various maintenance strategies can be employed, often in combination, depending on the industrial sector and the plant's specific operating environment. Common approaches in the European industrial context include corrective, preventive, opportunistic, condition-based, and predictive maintenance (Bevilacqua and Braglia, 2000).

Predictive maintenance is a relatively new concept for used oil refineries, offering the potential to enhance traditional condition-based methods by using machine learning. While scientific literature has applied predictive maintenance based on Gaussian Processes to thermodeasphalting columns with good results, showing reduced runtime in suboptimal regions (Negri et al., 2024), there is room for further improvement through Hybrid Machine Learning (HML).

In this work, an equation-based model is developed to describe the pressure differential (ΔP) along the column, adapting literature models for structured packing (Rocha et al., 1993) to account for fouling through a time-dependent parameter. Given the difficulty of modeling fouling growth due to the changing composition of used oil and the stochastic nature of the phenomenon, this parameter is estimated using Gaussian Process Regression, providing a most-probable growth-rate estimate and a 95% confidence interval. The hybrid model effectively captures the exponential rise in ΔP at the end of the run, which data-driven approaches often missed (Negri et al., 2024).
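A minimal sketch of the Gaussian-process estimation of the time-dependent fouling parameter with its 95% confidence band (assuming scikit-learn; the run-time data below are synthetic placeholders, not the refinery dataset):

# Minimal sketch: GP regression of the fouling parameter over time on stream,
# returning a most-probable trajectory and a 95% confidence band.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 300, 40).reshape(-1, 1)                                # days on stream
fouling = 0.02 * np.exp(t[:, 0] / 120) + 0.01 * rng.standard_normal(40)   # estimated parameter

gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(t, fouling)

t_future = np.linspace(0, 360, 200).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std                       # 95% confidence band
print("predicted fouling parameter at day 360: "
      f"{mean[-1]:.3f} (95% CI {lower[-1]:.3f} to {upper[-1]:.3f})")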

The model is applied to DCS datasets obtained from the Itelyum Regeneration used oil refinery located in Pieve Fissiraga, Lodi, Italy, and predictive maintenance strategies based on it are proposed and evaluated. These strategies significantly reduce suboptimal column runtime while ensuring economically sustainable operations. This approach is innovative, as hybrid machine learning has not been applied to fouling issues in the used oil industry, offering a more robust maintenance tool that adapts to varying feedstock conditions.

References

Bevilacqua, M., Braglia, M., 2000. Analytic hierarchy process applied to maintenance strategy selection. Reliability Engineering and System Safety 70, 71–83. https://doi.org/10.1016/S0951-8320(00)00047-8

Negri, F., Galeazzi, A., Gallo, F., Manenti, F., 2024. Application of a Predictive Maintenance Strategy Based on Machine Learning in a Used Oil Refinery, in: Manenti, F., Reklaitis, G.V. (Eds.), Computer Aided Chemical Engineering, 34 European Symposium on Computer Aided Process Engineering / 15 International Symposium on Process Systems Engineering. Elsevier, pp. 3175–3180. https://doi.org/10.1016/B978-0-443-28824-1.50530-5

Rocha, J.A., Bravo, J.L., Fair, J.R., 1993. Distillation Columns Containing Structured Packings: A Comprehensive Model for Their Performance. 1. Hydraulic Models. Industrial and Engineering Chemistry Research 32, 641–651. https://doi.org/10.1021/ie00016a010

 
8:30am - 10:30amT7: CAPEing with Societal Challenges - Session 2
Location: Zone 3 - Room E032
Co-chair: Gonzalo Guillén-Gosálbez
 
8:30am - 8:50am

On the Economic Uncertainty and Crisis Resiliency of Decarbonization Solutions for the Aluminium Industry

Dareen Dardor1,2, Daniel Flórez-Orrego1, Reginald Germanier3, Manuele Margni2, François Maréchal1

1Industrial Process and Energy Systems Engineering, Ecole Polytechnique Fédérale de Lausanne, EPFL, Sion, Switzerland; 2Institute of Sustainable Energy, School of Engineering, University of Applied Sciences and Arts Western Switzerland (HES-SO), Sion, Switzerland; 3Novelis Switzerland S.A., Sierre, Switzerland

The aluminium industry emits around 1.1 billion tonnes of CO2-eq annually, constituting roughly 2% of global industrial emissions. Currently, the sector is developing decarbonization pathways to achieve net-zero emissions by 2050. Potential solutions include biomass gasification, power-to-gas, energy storage, direct electrification of furnaces, and carbon capture. However, this involves making decisions today, based on economic assumptions that reflect current market conditions, about technologies that will operate over the coming decades. This results in significant economic uncertainties due to the volatility of energy prices. Historically, price forecasting models have often failed to account for major market disruptions, such as the 2022 energy crisis, due to the unpredictability of geopolitical and market factors. In this context, decision-makers require systematic methodologies to manage uncertainty. In this work, Monte-Carlo Analysis (MCA) is used to evaluate the financial stability of various decarbonization configurations by simulating the effects of fluctuating energy prices over the operational lifetime of the technology. This study applies MCA to two decarbonization pathways for the aluminium industry: a biomass-based option, which relies on biomass gasification to generate syngas for furnaces, and an oxyfuel scenario, which uses electrified furnaces in combination with oxyfuel burners and carbon capture and storage. The biomass pathway was found to have a 63-78% probability of negative incremental Net Present Value (iNPV) over 25 years with reference to the base case. Conversely, the oxyfuel option demonstrates a 54% likelihood of economic loss under similar conditions. Further analysis in this work introduces the modelling of economic crisis scenarios where sudden price shocks occur during the plant's lifetime. The results indicate that the oxyfuel solution is 75% more susceptible to financial risk during energy crises than the biomass-based option. This is because the former depends heavily on electricity, whose prices can spike dramatically, while the latter draws on a diversified mix of less volatile energy sources such as biomass. This analysis highlights the importance of considering energy price volatility and the need for diversified energy sources when developing decarbonization strategies for the aluminium industry.
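A minimal sketch of a Monte-Carlo evaluation of incremental NPV under fluctuating energy prices (numpy; the price model, cash flows, crisis probability, and all numbers are illustrative assumptions, not the study's values):

# Minimal sketch: Monte-Carlo analysis of incremental NPV under volatile energy prices,
# reporting the probability of economic loss over the plant lifetime.
import numpy as np

rng = np.random.default_rng(0)
n_runs, years, rate = 10_000, 25, 0.06
capex_increment = 80.0                                      # extra investment vs. base case (M EUR)

# Energy price follows a random walk; an optional "crisis" shock can hit one year per run.
price = np.full((n_runs, years), 60.0)                      # EUR/MWh starting price
price[:, 1:] += np.cumsum(rng.normal(0, 8, size=(n_runs, years - 1)), axis=1)
crisis_year = rng.integers(0, years, n_runs)
crisis_hit = rng.random(n_runs) < 0.2                       # 20% chance of a crisis per run
price[np.arange(n_runs), crisis_year] += crisis_hit * 120.0

# Incremental cash flow vs. base case: avoided CO2 cost minus the extra energy bill.
energy_use = 0.8                                            # TWh/yr of purchased energy
cash_flow = 25.0 - energy_use * (price - 60.0)              # M EUR/yr, simple linear model
discount = (1 + rate) ** -np.arange(1, years + 1)
inpv = cash_flow @ discount - capex_increment

print(f"P(iNPV < 0) = {np.mean(inpv < 0):.0%}")
print(f"median iNPV = {np.median(inpv):.1f} M EUR")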



8:50am - 9:10am

Transition pathways for the Belgian Industry: application to the case of the lime sector

Rafailia Mitraki1, Muhammad Salman1, François Maréchal2, Grégoire Léonard1

1Chemical Engineering, University of Liège, Belgium; 2Federal Polytechnic School of Lausanne, IPESE group, Switzerland

The lime industry, essential for construction, steelmaking, and effluent treatment, contributes significantly to CO2 emissions, accounting for about 1% of global anthropogenic emissions. The calcination process itself emits 0.786 tCO2 per ton of lime and requires high temperatures (900–1100°C), typically achieved by burning fossil fuels. Depending on the kiln technology, total emissions range from 1 to 1.8 tCO2 per ton of lime.

The objective of this study is to analyze various pathways for achieving CO2 emission reduction in the lime sector. For this purpose, a Blueprint (BP) model of the lime sector is developed, consisting of detailed mass and energy balances, as well as economic considerations (i.e., annualized CAPEX and OPEX). This BP model incorporates a superstructure of various energy transition pathways such as fuel switching (towards hydrogen, biogas, solid biomass), kiln electrification (using plasma torches) and CO2 capture (CC) (via chemical absorption with MEA or oxycombustion ('NGOxy')). Furthermore, the OSMOSE tool (an optimization framework developed at EPFL) is utilized to evaluate the superstructure for three different years (2030, 2040, 2050) and three different energy scenarios based on EnergyVille's PATHS2050 study, which impact the utility costs and the CO2 emissions cost. Finally, a comparison between all alternative routes and the base case 'NG' (natural gas-fired kiln without CC) is performed on the basis of three key performance indicators (KPIs): specific energy consumption (kWh/tlime), specific CO2 emissions (kgCO2/tlime) and specific total cost (€/tlime).

The results show that, in 2030, the optimal pathways are 'Biomass-CC' and 'NGOxy-CC'. A significant CO2 emissions reduction (-115% compared to 'NG') and lower fuel costs result in a specific total cost (STC) reduction of 27% compared to 'NG' (€270/tlime) for 'Biomass-CC', despite increased total energy consumption (+120% compared to 'NG'). The 90% lower CO2 emissions of 'NGOxy-CC' enable an STC reduction of 16–18% compared to 'NG', depending on the scenario considered, despite a 16% higher energy consumption. 'Plasma-CC' comes third, with a cost reduction of 12–18% due to an energy consumption reduction of 12% and an emission reduction of 93% compared to 'NG'. Similar trends are observed for 2040, with the economically optimal solution remaining 'Biomass-CC'. In 2050, the STC of 'NG' reaches €476/tlime. 'Biomass-CC' remains among the optimal routes with an STC reduction of 61% compared to 'NG', while lower electricity prices in the 2050 scenarios enable an STC reduction of 51–62% for 'Plasma-CC'. 'NGOxy-CC' also remains a suitable route with an STC 51–54% lower than 'NG'. The use of hydrogen in lime kilns, on the other hand, represents one of the most expensive transition pathways for the sector in all scenarios due to a higher energy consumption, fuel price and CAPEX than the base case.

In conclusion, this study offers a foundation for decision-making based on specific KPIs for future scenarios. CO2 capture coupled with biomass-fired kilns, plasma technology or oxycombustion configuration represent the most cost-effective routes for emissions reduction in the lime sector. However, despite relatively low costs, the problems associated with biomass availability and the low TRL of plasma technology should not be overlooked.



9:10am - 9:30am

Resource and Pathways Analysis for Decarbonizing the Pulp and Paper Sector in Quebec

Marie-Hélène Talbot, Mélissa Lemire, Jean Noël Cloutier

Laboratoire des technologies de l'énergie (LTE), Hydro-Québec, Canada

Decarbonizing industries could significantly increase electricity demand, necessitating strategic grid expansion. This study evaluates the impact of decarbonizing the Pulp and Paper Sector under four 2050 scenarios: carbon capture, biomass-based, direct electrification, and indirect electrification. A bottom-up approach is employed to estimate 2020 final energy demand by heat grade and subsector. Both final and primary energy demand systems are modeled, accounting for the efficiencies of end-use technologies and primary energy transformation processes. The analysis compares primary renewable energy demand (electricity and biomass) normalized per ton of CO2 equivalent avoided against a business-as-usual scenario. It also considers the requirements for wood residues, organic waste, and CO2 storage. The carbon capture scenario, while low in electricity demand, requires significant organic waste for renewable natural gas production and 2.6 Mt of CO2 storage to offset direct and indirect emissions, making it the least feasible due to uncertainties around carbon storage in Quebec. Among the remaining scenarios, direct electrification stands out by offering the lowest primary energy demand. It combines heat pumps with electric boilers for steam production, and lime kilns are converted to a plasma-based solution. The study also includes a sensitivity analysis highlighting the potential of energy efficiency measures to ease the burden of decarbonization.



9:30am - 9:50am

Comparative assessment of chemical absorption-based CO2 capture and injection systems and alternative decarbonization technologies in the cement industry

Muhammad Salman1, Daniel Flórez-orrego2, François Maréchal2, Grégoire Léonard1

1Chemical Engineering, Université de Liège, Belgium; 2IPESE group, Federal Polytechnic School of Lausanne, Sion, Switzerland.

Cement production is one of the largest sources of global industrial CO2 emissions. To achieve the ambitious target of net-zero emissions by 2050, conventional CO2 capture technologies alone are considered insufficient. In fact, although chemical absorption is a mature technology, it suffers from significant issues due to the solvent regeneration process and incomplete carbon separation. For this reason, novel CO2 capture and mineralization approaches must be implemented, which can also provide minerals and additives, thus increasing the economic attractiveness and sustainability of the overall process. In this work, the performance of the process integration between a chemical absorption process and a cement plant is compared to that of a novel CO2 capture and sequestration system based on ex-situ CO2 mineralization. CO2 mineralization offers the potential for permanent carbon sequestration by converting captured CO2 into stable carbonates, utilizing industrial by-products such as slags, fly ash, and other waste materials.

A superstructure-based methodology is developed to explore and evaluate multiple solutions, e.g. chemical absorption, oxycombustion, as well as CO2 mineralization, injection and calcium looping. Evaluated key performance indicators include total annualized costs (€/tcement), CO2 emissions (tCO2/tcement), and specific energy consumption (GJ/tcement), with each pathway evaluated under future energy scenarios for 2030, 2040, and 2050, based on EnergyVille's PATHS2050, to reflect evolving commodity prices and carbon pricing. Preliminary results indicate that CO2 mineralization may become more cost-effective than chemical absorption by 2050, particularly due to its lower energy requirements and the ability to produce marketable by-products. In the near term (2030), chemical absorption remains competitive due to its established infrastructure and relatively higher capture efficiency. Yet, as CO2 pricing escalates and renewable electricity becomes more affordable by 2050, the optimal solutions shift towards mineralization technology. The integration of CO2 capture with mineralization in cement kilns, combined with renewable energy, could offer a more economically viable and environmentally sustainable solution, considering the twofold benefit of emission reduction and the generation of useful materials for other sectors, such as construction.

By comparing emerging technologies like mineralization with traditional chemical absorption, this study identifies key trade-offs between costs, emissions reduction potential, and technology readiness. The insights generated will assist policymakers and industry stakeholders in formulating long-term strategies for achieving climate targets in the cement sector.



9:50am - 10:10am

Participative tool-based method to develop indicators to support a transition to a circular economy

Léa van der Werf1, Gabriel Colletis2, Stéphane Negny1, Ludovic Montastruc1

1Laboratoire de Génie Chimique, CNRS/INP/UPS, Université de Toulouse, France; 2Laboratoire d'Étude et de Recherche sur l'Économie, les Politiques et les Systèmes sociaux, Université de Toulouse, France

A growing number of circular engineering projects are being developed. They propose to reduce, reuse, recycle or recover resources. Their aim is to contribute to the transition to more sustainable systems. However, they are embedded in an economic, social and environmental context. This context will impact the project (e.g., social acceptability), and the project will in turn impact the context (e.g., CO2 emissions). Depending on the context, the project will not necessarily improve global sustainability. It may be necessary to rethink elements of this context, particularly socio-economic ones. In this case, the project is part of a transition to a circular economy, going beyond the scope of circular engineering. A question to be asked in engineering project management is: how to support the development of projects that really contribute to a transition to a more circular economy and thus to more sustainable systems?

Indicators are central decision-making tools, useful also within computer-aided tools. There are numerous indicators relating to the circular economy. However, most of them are specific, biased by an aggregation method, and incomplete (De Pascale et al., 2021). So, what aspects should these indicators represent to support the transition? How? And how can they be adapted to the specificities of a given context?

Studies in ecological economics show that stakeholder participation improves (i) the consistency between the indicators and the context and (ii) their use in decision-making (Fraser et al., 2006). Defining indicators is then also a way to co-define which system to aim for and how to get there. This involvement of stakeholders in the decision-making process seems necessary for real and desirable transitions. But how can this participatory process be supported?

This study proposes a tool-based participatory method for developing indicators that aim to support decision-making on projects of transition to a circular economy. The method aims to be generic (across industrial sectors and scales). The indicator sets developed are context-specific, multi-scale, multi-stakeholder and aim for strong sustainability. The tools used are multidisciplinary (e.g., economics, management, industrial engineering). The central tool of the method is a database of 370 non-aggregated indicators classified in a framework. Both the database and the framework were deduced from the literature. The framework categories are aspects to be potentially considered. Indicators are examples of how these aspects can be represented. The method was tested on a project to set up a food processing plant.

De Pascale, A., Arbolino, R., Szopik-Depczyńska, K., Limosani, M., Ioppolo, G., 2021. A systematic review for measuring circular economy: The 61 indicators. Journal of Cleaner Production 281, 124942. https://doi.org/10.1016/j.jclepro.2020.124942

Fraser, E.D.G., Dougill, A.J., Mabee, W.E., Reed, M., McAlpine, P., 2006. Bottom up and top down: Analysis of participatory processes for sustainability indicator identification as a pathway to community empowerment and sustainable environmental management. Journal of Environmental Management 78, 114–127. https://doi.org/10.1016/j.jenvman.2005.04.009

 
8:30am - 10:30amT9: PSE4Food and Biochemical - Session 3
Location: Zone 3 - Room D049
Chair: Zorka Novak-Pintaric
Co-chair: Ihab Hashem
 
8:30am - 8:50am

Machine learning-enhanced Sensitivity Analysis for Complex Pharmaceutical Systems

Daniele Pessina1,2, Roberto Andrea Abbiati3, Davide Manca4, Maria M. Papathanasiou1,2

1Sargent Centre for Process Systems Engineering, Imperial College London, SW7 2AZ, UK; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, UK; 3Roche Pharma Research and Early Development, Predictive Modeling and Data Analytics, F. Hoffmann-La Roche Ltd, Grenzacherstrasse 124, 4070 Basel, Switzerland; 4PSE-Lab, Process Systems Engineering Laboratory - Dipartimento di Chimica, Materiali e Ingegneria Chimica “Giulio Natta” Politecnico di Milano - Piazza Leonardo da Vinci 32, 20133 Milano, Italy

Sensitivity Analysis (SA) is well established in systems modelling, aiding the identification and quantification of the impact that parametric uncertainty can have on model outputs (Triantafyllou et al., 2023; Kotidis et al., 2019). Beyond that, Global SA (GSA) allows the investigation of so-called "second-order interactions", i.e., the impact that parametric uncertainty can have on other system parameters rather than directly on the outputs (Saltelli et al., 2019; Andalibi et al., 2020). The latter can shed light on underlying, perhaps non-evident, interactions, enhancing system understanding. Despite its potential and usefulness, GSA performance depends on model complexity. In this context, large-scale and nonlinear models can render GSA challenging to perform, requiring excessive computational effort. This is further augmented in cases of large parameter sets. To this end, approaches have been developed that successfully reduce the complexity of the GSA by segmenting the set into smaller groups of parameters (Sheikholeslami et al., 2019). This, however, can limit the potential of a full-scale GSA, as it does not consider the parametric set universally at once. Metamodels have also been used as model surrogates; however, they are prone to overfitting in higher dimensions (Becker, 2015).

In this work, we investigate the potential of Machine Learning (ML) to reduce the complexity of Pharmacokinetic/Pharmacodynamic (PK/PD) models. Such models (Abbiati et al., 2018) are suitable for this analysis given their non-linearity and because they involve parameters reflecting patient characteristics. In this context, understanding the parameter space-model output relationship implies linking patient population heterogeneity to the therapeutic outcome variability. Here, we explore how the level of hybridisation can impact the GSA performance, but also, critically, whether the use of surrogates affects the resulting model sensitivity to parametric uncertainty.
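A minimal sketch of the surrogate-accelerated GSA workflow: train an ML surrogate on a small number of expensive model evaluations, then run Sobol analysis on the cheap surrogate (assuming the SALib and scikit-learn packages; the one-compartment PK model and parameter ranges below are illustrative, not the models studied here):

# Minimal sketch: replace an expensive PK/PD model with an ML surrogate, then run
# Sobol global sensitivity analysis on the surrogate.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from sklearn.ensemble import RandomForestRegressor

def pk_model(theta):
    """Toy PK output: exposure of a one-compartment model with dose/V/CL parameters."""
    dose, volume, clearance = theta
    return dose / clearance + 0.01 * dose / volume

problem = {"num_vars": 3,
           "names": ["dose", "volume", "clearance"],
           "bounds": [[50, 200], [10, 70], [2, 20]]}

# Train the surrogate on a modest number of "expensive" model evaluations.
rng = np.random.default_rng(0)
X_train = rng.uniform([b[0] for b in problem["bounds"]],
                      [b[1] for b in problem["bounds"]], size=(300, 3))
y_train = np.array([pk_model(x) for x in X_train])
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Sobol analysis needs many samples, but only surrogate calls are required.
X_sobol = saltelli.sample(problem, 1024)
Y_sobol = surrogate.predict(X_sobol)
Si = sobol.analyze(problem, Y_sobol)
print("first-order indices:", np.round(Si["S1"], 2))
print("total-order indices:", np.round(Si["ST"], 2))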

Abbiati, R.A., Savoca A., Manca D., (2018) Chapter 2 - An engineering oriented approach to physiologically based pharmacokinetic and pharmacodynamic modeling. Computer Aided Chemical Engineering, 42, 37-63. https://doi.org/10.1016/B978-0-444-63964-6.00002-7.

Andalibi, M.R., Bowen, P., Carino, A. & Testino, A. (2020) Global uncertainty-sensitivity analysis on mechanistic kinetic models: From model assessment to theory-driven design of nanoparticles. Computers & Chemical Engineering. 140, 106971. doi:10.1016/j.compchemeng.2020.106971.

Kotidis, P., Demis, P., Goey, C.H., Correa, E., McIntosh, C., Trepekli, S., Shah, N., Klymenko, O.V. & Kontoravdi, C. (2019) Constrained global sensitivity analysis for bioprocess design space identification. Computers & Chemical Engineering. 125, 558–568. doi:10.1016/j.compchemeng.2019.01.022.

Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S. & Wu, Q. (2019) Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environmental Modelling & Software. 114, 29–39. doi:10.1016/j.envsoft.2019.01.012.

Sheikholeslami, R., Razavi, S., Gupta, H.V., Becker, W. & Haghnegahdar, A. (2019) Global sensitivity analysis for high-dimensional problems: How to objectively group factors and measure robustness and convergence while reducing computational cost. Environmental Modelling & Software. 111, 282–299. doi:10.1016/j.envsoft.2018.09.002.

Triantafyllou, N., Sarkis, M., Shah, N., Kontoravdi, C. & Papathanasiou, M.M. (2023) Integrated Process and Supply Chain Design and Optimization. In: R. Pörtner (ed.). Biopharmaceutical Manufacturing: Progress, Trends and Challenges. Cham, Springer International Publishing. pp. 213–239. doi:10.1007/978-3-031-45669-5_7.



8:50am - 9:10am

Deacidification of Used Cooking Oil: Modeling and Validation of Ethanolic Extraction in a Liquid-Liquid Film Contactor

Sergio Andrés Rojas Prieto, Alvaro Orjuela, Paulo Cesar Narváez

Universidad Nacional de Colombia, Colombia

Used cooking oils (UCOs) or waste cooking oils (WCOs) are widely generated urban residues resulting from food preparation in households, restaurants, hospitality sites, and industry. Although UCOs are highly contaminated with components of different nature, they are mainly composed of triacylglycerols of fatty acids, which can be used as raw materials for the oleochemical industry. However, a significant challenge in the valorization of UCOs is their high content of free fatty acids (FFA), which can lead to equipment corrosion, deactivate alkaline catalysts during transesterification reactions, and reduce yields in various processes. To address these issues, it is necessary to reduce the acidity of UCOs. Industrially, this is typically achieved through energy- and materials-intensive processes such as low-vacuum distillation and neutralization (Noriega et al., 2022). For this reason, alternative processes such as alcoholic extraction have proven to be very effective, exhibiting lower energy consumption, reduced residue generation, and the potential to utilize the recovered free fatty acids (FFAs) through further esterification with the solvent.

In recent studies it was verified that FFAs could be removed from acidic vegetable oils and/or UCOs by using ethanolic extraction in a high-surface-area liquid-liquid contactor under continuous operation (Cárdenas et al., 2022; Noriega et al., 2022). This equipment seeks to maximize the contact area between liquid phases using a semi-structural packing, which in turn allows the equipment to operate under a laminar regime, reducing dispersion and facilitating downstream decantation. In a previous experimental exploration (Cárdenas et al., 2022) it was found that, in a single-stage contactor of 1.07 m, it was possible to reduce the UCO's acidity by 51%, whereas with acidified palm oil (Noriega et al., 2022) it was possible to reduce acidity below 0.1 wt.%. In both cases, separation was carried out in a single-stage contactor at a fixed ethanol-to-oil ratio.

In this regard, this work aimed to develop and correlate a mathematical model to describe the operation of a liquid-liquid film contactor for the extraction of FFAs from UCOs. In previous research (Noriega et al., 2022), it was established that mass transfer in the oil phase was the primary resistance affecting the overall process. Accordingly, the developed model proposes a rigorous description of the liquid-liquid mass transfer in the oil phase based upon previously validated phase equilibria data and reported physicochemical properties of the mixture. Additionally, by applying successive stochastic and deterministic optimization algorithms, the mass transfer parameters were adjusted using reported data from deacidification experiments carried out under different UCO flowrates, ethanol-to-UCO mass flow ratios and contactor lengths (Cárdenas et al., 2022). Once the parameters were regressed and validated, the liquid-liquid extraction model was used to determine the best operating conditions and configuration for the deacidification of UCOs. It was found that a multistage configuration was required to reduce the UCO's acidity below the specification for oleochemical feedstocks (< 0.5 wt.%), and that the contactor intensified the mass transfer process in comparison with traditional configurations of mixed-tank contactors with settlers.
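A minimal sketch of the stochastic-then-deterministic parameter fitting described above, regressing a lumped mass-transfer coefficient of a simplified plug-flow contactor model against acidity-versus-length data (SciPy; the simplified model and the data points are hypothetical placeholders, not the reported experiments):

# Minimal sketch: fit a lumped mass-transfer coefficient to acidity-vs-length data
# using a stochastic global search followed by a deterministic local refinement.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def ffa_profile(kLa, length, ffa_in=5.0, ffa_eq=0.3, velocity=0.02):
    """FFA content (wt.%) along the contactor for a first-order approach to equilibrium."""
    return ffa_eq + (ffa_in - ffa_eq) * np.exp(-kLa * length / velocity)

# Hypothetical deacidification data: contactor length (m) vs. outlet FFA (wt.%).
lengths = np.array([0.0, 0.3, 0.6, 1.07])
ffa_meas = np.array([5.0, 3.6, 2.8, 2.45])

def sse(p):
    return np.sum((ffa_profile(p[0], lengths) - ffa_meas) ** 2)

# Stochastic global search, then deterministic local refinement from the best point.
global_fit = differential_evolution(sse, bounds=[(1e-4, 0.1)], seed=0)
local_fit = minimize(sse, global_fit.x, method="Nelder-Mead")
kLa = local_fit.x[0]
print(f"regressed kLa = {kLa:.4f} 1/s, predicted outlet FFA at 1.07 m: "
      f"{ffa_profile(kLa, 1.07):.2f} wt.%")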



9:10am - 9:30am

Modelling the in vitro FooD Digestion SIMulator FooDSIM

Stylianos Floros, Satyajeet S. Bhonsale, Sotiria Gaspari, Simen Akkermans, Jan F.M. Van Impe

BioTeC+, KU Leuven, Gebroaders De Smetstraat 1, Gent 9000, Belgium

Human digestion is a complex phenomenon that takes place by utilizing multiple resources and processes to ensure longevity and the maximal absorption of essential nutrients. Due to the controversial and costly nature of human medical intervention for research purposes, the demand for in vitro model systems has grown in recent years. A major drawback of these systems, though, is their laborious and expensive operation, as well as the limited extent of experimentation, which results from the aim of accurately mimicking physiologically based processes. Moreover, their susceptibility to environmental conditions (e.g., survival of gut microbiota) highlights the need for in silico models; hence the motivation for creating a FooDSIM digital twin.
The main aim of this work is to create a mathematical digital twin that performs predictive modelling, while a further aim is the future incorporation of the gut microbiota's microbial interactions into the system, under steady-state operating conditions and the intake of different input formulas.
For the formulation of the model, processes such as hydraulics and the biochemical interactions between enzymes and substrates, as well as their dependency on pH, are expressed through modified Ordinary Differential Equations (ODEs), and the simulation is performed over 3 different intervals (6 hours, 24 hours, 7 days). To accomplish this, the digital twin simulates the human gastrointestinal tract (GIT) by employing 4 bioreactors in series, which represent distinct organs of the GIT. The emptying profile of the 1st reactor, representing the stomach, follows the Elashoff exponential equation and influences all subsequent feeding and emptying processes taking place in the system, where the addition of simulated digestion fluids, enzyme solutions and of a food model system as input creates a standardized model environment. Lastly, the absorption process is simulated by expressing the function of a hollow-fibre membrane, which operates with a cut-off limit and restricts the passage of molecules based on their molecular weight.
During the simulations, enzyme concentrations initially increase because enzymes cannot be absorbed during dialysis, thus affecting the kinetic parameters used to simulate the enzyme-substrate interactions, which follow Michaelis-Menten kinetics. The pH's impact on enzyme activity is also emphasized, as pH levels outside the optimal range cause enzyme inactivation. To model this condition, enzyme activity is fitted to two different cardinal models, and their goodness of fit is assessed. For the simulation, MATLAB's ode15s solver is used due to the system's stiffness and complexity; as the simulation interval progresses, the enzyme concentrations change, impacting substrate digestion. Initial results show system stabilization after 3-4 feeding cycles, with substrate concentrations plateauing after 24-30 hours. The dialysis phase causes significant enzyme concentration changes, affecting substrate consumption rates. Cardinal models show good performance (MSE < 0.0114 and 0.0134), while only pancreatic trypsin follows the second model. GSA reveals key interactions between substrates and enzyme activity, especially for starch.
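A minimal sketch of the reactor-in-series structure with Michaelis-Menten digestion kinetics, solved with a stiff solver analogous to MATLAB's ode15s (here SciPy's BDF method; all rates, transfer constants, and the feeding term are illustrative placeholders, not the FooDSIM parameters):

# Minimal sketch: 4 GIT compartments in series with Michaelis-Menten substrate
# digestion and first-order transfer between compartments, solved with a stiff
# BDF integrator (the SciPy analogue of MATLAB's ode15s).
import numpy as np
from scipy.integrate import solve_ivp

n = 4                                          # stomach, duodenum, jejunum, ileum
k_transfer = np.array([0.8, 0.6, 0.5, 0.3])    # 1/h, emptying/transfer out of each compartment
vmax, km = 2.0, 1.5                            # Michaelis-Menten parameters (g/L/h, g/L)
feed = 5.0                                     # g/L/h of substrate fed into the stomach

def rhs(t, s):
    """s[i] = substrate concentration in compartment i."""
    digestion = vmax * s / (km + s)                        # Michaelis-Menten consumption
    inflow = np.concatenate(([feed if t < 0.5 else 0.0],   # 30-min meal into the stomach
                             k_transfer[:-1] * s[:-1]))    # transfer from the upstream compartment
    return inflow - k_transfer * s - digestion

sol = solve_ivp(rhs, (0.0, 24.0), y0=np.zeros(n), method="BDF", max_step=0.1)
print("substrate after 24 h per compartment:", np.round(sol.y[:, -1], 3))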



9:30am - 9:50am

Towards Sustainable Production: Automated Solvent Design for Downstream Processing in Methyl ketone Fermentation

Lukas Rasspe-Lange1, Lukas Polte2, Henry Hilker2, Andreas Jupke2, Kai Leonhard1

1Institute of Technical Thermodynamics, RWTH Aachen University; 2Department of Chemical Engineering, Fluid Process Engineering, RWTH Aachen University

To reduce the global dependence on fossil fuels, large-scale production of biofuels becomes increasingly important. A promising class of potential biofuels are methyl ketones, which can be produced via fermentation [1]. A major challenge is the downstream purification of methyl ketones, as it requires an industrially scalable process for the extraction of a hydrophobic product that is highly diluted in an aqueous fermentation broth. A key factor in ensuring an economically viable process is the selection of a suitable solvent [2].

This study focusses on the simultaneous design of a suitable extraction solvent along with the respective extraction-distillation process to minimize the expected overall cost of the methyl ketone fermentation process [3]. To this end, the study extends and refines a previously developed framework for the simultaneous design of molecules, process parameters and equipment size [4].

The framework is based on a genetic algorithm that iteratively optimizes solvent structure, process parameters and equipment size to minimize the overall costs including the estimation of equipment size [5]. In each generation, the genetic algorithm generates a fixed number of potential solvents from a given set of molecular fragments. All relevant physical properties of the constructed solvents are then predicted via methods of computational chemistry and passed to the process model. The process model optimizes process parameters based on operating and investment cost for all created molecules of a generation and ranks them according to total cost. This information is passed back to the genetic algorithm and used to improve the next generation.

In this study, the framework is refined with three major improvements. First, the Rectification Body Method [6], complemented by a sequencing algorithm, is integrated for a more accurate simulation of the distillation. Second, the framework is extended with a screening-guided warm-start method for CAMD [7]. This method improves the convergence speed of the genetic algorithm and furthermore allows the design of more suitable solvent candidates by improving the library of molecular fragments used to source solvent candidates. Third, we evaluate the potential of predictive methods for considering biocompatibility. We show that the algorithm is able to identify promising solvent candidates outperforming literature solvents, highlighting the potential of molecule design and the importance of early-stage process and equipment design.
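As a rough illustration of the generate-evaluate-rank loop described above, the sketch below shows a toy genetic algorithm over a fragment library; the fragment set, property predictor and cost model are placeholders and do not represent the authors' CAMD framework.

```python
# Illustrative sketch of a generate-evaluate-rank loop for solvent design.
# Fragment set, property model and cost model are stand-ins, not the
# framework used in this study.
import random

FRAGMENTS = ["CH3", "CH2", "OH", "C=O", "O", "C6H5"]   # toy fragment library

def random_solvent(n=4):
    return [random.choice(FRAGMENTS) for _ in range(n)]

def predicted_properties(solvent):
    # placeholder for quantum-chemistry / group-contribution predictions
    return {"partition_coeff": 1.0 + 0.3 * solvent.count("C=O"),
            "boiling_point": 330 + 15 * solvent.count("C6H5")}

def total_cost(solvent):
    # placeholder process model: operating + investment cost of the candidate
    p = predicted_properties(solvent)
    opex = 100.0 / p["partition_coeff"] + 0.2 * p["boiling_point"]
    capex = 50.0 + 5.0 * len(solvent)
    return opex + capex

def evolve(pop, n_keep=10, n_pop=30, generations=20):
    for _ in range(generations):
        pop.sort(key=total_cost)                      # rank by total cost
        parents = pop[:n_keep]
        children = []
        while len(children) < n_pop - n_keep:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                 # crossover
            if random.random() < 0.3:                 # mutation
                child[random.randrange(len(child))] = random.choice(FRAGMENTS)
            children.append(child)
        pop = parents + children
    return min(pop, key=total_cost)

best = evolve([random_solvent() for _ in range(30)])
print(best, round(total_cost(best), 1))
```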

References:

[1] Grütering et al. (2024), DOI: 10.1039/D4SE00035H

[2] Zhou et al. (2020), DOI: 10.1016/j.coche.2019.10.007

[3] Ziegler et al.(2023), DOI: 10.1093/jimb/kuad029.

[4] Polte et al. (2023), DOI: 10.1002/cite.202200144

[5] Douguet et al. (2005), DOI: 10.1021/jm0492296

[6] Bausa et al.(1998), DOI: 10.1002/aic.690441008

[7] Wang et al. (?),DOI: in submission.



9:50am - 10:10am

Computer-Aided Design and Optimization of a Lycopene Production Process from Tomato Waste

Nereyda Vanessa Hernández-Camacho1, Fernando Israel Gómez-Castro1, Mariano Martín2, Ehecatl Antonio del Rio-Chanona3, Oscar Daniel Lara-Montaño4

1Universidad de Guanajuato, Mexico; 2Universidad de Salamanca, Spain; 3Imperial College London, United Kingdom; 4Universidad Autónoma de Querétaro, Mexico

Approximately 13.2% of the world's food is lost before being harvested (United Nations, 2023); food waste management is therefore highly relevant. In Mexico, the agricultural sector contributes 3.4% of the national GDP, with the fruit and vegetable sector standing out with 45% of the sector's exports; tomatoes contribute 8.41% of those exports. Mexico is the main supplier of this product worldwide, with a 19% share of the exported volume in the period 2003-2017 (Montaño Méndez et al., 2021). However, not all tomatoes are used due to their short shelf life, causing approximately 30% of the crop to be wasted (Méndez-Carmona et al., 2022). In addition, waste from tomato sauce production can also be valorized. Tomato waste can be used to isolate high-value compounds such as carotenoids, polyphenols, vitamins, fibers and flavonoids, among others. Among these, lycopene is a well-known carotenoid and the most abundant pigment in tomatoes, responsible for giving them their color. It is used as a raw material in the cosmetic, pharmaceutical and food industries (Kuvendziev et al., 2024). Lycopene from tomato waste is traditionally recovered by solvent extraction, but research on this topic has only been carried out through laboratory-scale experimentation (e.g. Poojary et al., 2015; Catalkaya et al., 2019).

This work addresses the systematic design and optimization of an industrial-scale process to obtain lycopene from tomato waste. A comparison is made between the use of conventional solvents, such as the acetone:hexane mixture, and an ethanol-based extraction. As a result, industrial-scale processes have been proposed and designed to produce lycopene from tomato waste. The processes include drying of the tomato waste, grinding, extraction of lycopene, and recovery of the solvent. Lycopene yields from dried tomato waste of 21.96 mg/kg and 4.03 mg/kg are obtained for acetone:hexane and ethanol, respectively. The acetone:hexane route has been identified as promising in terms of yield, whereas the ethanol route is promising in terms of environmental impact and potential application in the food and pharmaceutical areas.

References

Catalkaya, G., Kahveci, D. (2019). Optimization of enzyme assisted extraction of lycopene from industrial tomato waste. Separation and Purification Technology, 219, 55-63.

Kuvendziev, S., Lisichkov, K., Marinkovski, M., Stojchevski, M., Dimitrovski, D., Andonovikj, V. (2024). Valorization of tomato processing by-products: Predictive modeling and optimization for ultrasound-assisted lycopene extraction. Ultrasonics Sonochemistry, 110, 107055.

Montaño Méndez, I.E., Valenzuela Patrón, I.N., Villavicencio López, K.V. 2021. Competitividad del tomate rojo de México en el mercado internacional: análisis 2003-2017. Revista Mexicana de Ciencias Agrícolas, 12 (7), 1185-1197. (Spanish)

Méndez-Carmona, J.Y., Ascacio-Valdez, J.A, Alvarez-Perez, O.B., Hernández-Almaraz, A-Y., Ramirez-Guzman, N., Sepúlveda, L., Aguilar-González, M.A., Ventura-Sobrevilla, J.M., Aguilar, C.N. 2022. Tomato waste as a bioresource for lycopene extraction using emerging technologies. Food Bioscience, 49, 101966.

United Nations. 2023. The Sustainable Development Goals Report: Special Edition. At https://unstats.un.org/sdgs/report/2023/The-Sustainable-Development-Goals-Report-2023.pdf (accessed September 20, 2024).



10:10am - 10:30am

Kinetic modelling for PHB biosynthesis and biodegradation

Ariyan Amirifar, Constantinos Theodoropoulos

University of Manchester, United Kingdom

It is estimated that 79% of all plastics ever produced have accumulated in the environment, while only 9% have been recycled and a mere 12% incinerated [1]. Furthermore, about 95.5% of plastics are petroleum-derived (PlasticsEurope), and their environmental impact is well documented. To address the dual challenges of plastic pollution and fossil fuel dependence, fully biodegradable plastics made from sustainable biological resources present a promising and environmentally friendly alternative. In the realm of bioplastics, polyhydroxyalkanoates (PHAs) stand out as a prominent class of naturally occurring intracellular microbial polyesters, with poly(3-hydroxybutyrate) (PHB) being the model and most studied member of the family [2]. Large-scale production of PHAs is hindered mainly by high feedstock costs and low PHA productivities. A promising solution to these challenges is integrating PHB production into biodiesel facilities, using crude glycerol, a byproduct of biodiesel production, as the fermentation substrate [3,4]. This promising bioprocess can be systematically designed and optimized in silico, eliminating the need for time-consuming trial-and-error experiments. Furthermore, to the best of our knowledge, no prior research has specifically examined the individual steps of PHB biodegradation or the factors influencing the degradation rate at each stage.

In continuation of previous work conducted in our research group [3,4], we are developing a holistic mechanistic kinetic model for PHB production by Cupriavidus necator DSM 545, using glycerol as the sole carbon source and ammonium sulfate (AS) as the nitrogen source. The model is built on data from various batch and fed-batch bioreactor fermentations conducted at a controlled pH of 6.8 and dissolved oxygen (DO) at 30% saturation, over a range of initial concentrations of carbon, nitrogen and biomass, to derive the kinetic constants. Additionally, the obtained PHB is subjected to biodegradation under different processing conditions to identify the relationship between these parameters and the biodegradation rate.
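As an illustration of what such an unstructured kinetic model can look like, the sketch below assumes a Monod/Luedeking-Piret-type structure for growth on glycerol under nitrogen limitation; the model form and all parameter values are assumptions for demonstration only, not the model being developed in this work.

```python
# Illustrative unstructured kinetic model for PHB accumulation (assumed
# Monod / Luedeking-Piret structure; parameter values are placeholders).
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks_S, Ks_N = 0.25, 1.5, 0.1      # 1/h, g/L, g/L
Y_XS, Y_XN, Y_PS = 0.45, 8.0, 0.4        # assumed yield coefficients
alpha, beta = 0.1, 0.02                  # growth- / non-growth-associated PHB
K_i_N = 0.05                             # storage favoured at low nitrogen

def rhs(t, y):
    X, S, N, P = np.maximum(y, 0.0)      # residual biomass, glycerol, N, PHB
    mu = mu_max * S / (Ks_S + S) * N / (Ks_N + N)
    qP = (alpha * mu + beta * S / (Ks_S + S)) * K_i_N / (K_i_N + N)
    dX = mu * X
    dP = qP * X
    dS = -dX / Y_XS - dP / Y_PS          # glycerol spent on growth and PHB
    dN = -dX / Y_XN
    return [dX, dS, dN, dP]

sol = solve_ivp(rhs, (0, 48), [0.2, 30.0, 1.0, 0.0], method="LSODA")
print(f"PHB after 48 h: {sol.y[3, -1]:.2f} g/L")
```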

References

1. Dimante-Deimantovica, I. et al. Downward migrating microplastics in lake sediments are a tricky indicator for the onset of the Anthropocene. Sci. Adv. 10, eadi8136 (2024).

2. Park, H. et al. PHA is not just a bioplastic! Biotechnol. Adv. 71, 108320 (2024).

3. Pérez Rivero, C., Sun, C., Theodoropoulos, C. & Webb, C. Building a predictive model for PHB production from glycerol. Biochem. Eng. J. 116, 113–121 (2016).

4. Sun, C., Webb, C. & Theodoropoulos, C. Dynamic Metabolic Modelling of Cupriavidus necator DSM 545 in PHB Production from Glycerol. in Computer Aided Chemical Engineering vol. 38 2217–2222 (Elsevier B.V., 2016).

 
10:30am - 11:00amCoffee Break
Location: Zone 2 - Cafetaria
10:30am - 11:00amPoster Session 2
Location: Zone 2 - Cafetaria
 

Rebalancing CAPEX and OPEX to Mitigate Uncertainty and Enhance Energy Efficiency in Renewable Energy-Fed Chemical Processes

Ghida Mawassi, Alessandro Di Pretoro, Ludovic Montastruc

LGC (INP - ENSIACET), France

The conventional approach to process engineering design has always been based on exploiting the degrees of freedom of a process system to optimize the operating conditions with respect to a selected objective function, usually defined as the best compromise between capital and operating expenses. However, although the capital cost item played a major role while the industrial sector was focused on expanding production capacity, the operating aspect is becoming more and more predominant in the current industrial landscape, due to the increasing shift towards carbon-free energy sources and the tighter balance between supply and demand. In essence, reliance on fluctuating and intermittently available energy resources - renewable resources - is increasing, making it essential to maximize product output while minimizing energy consumption.

Based on these observations, it appears evident that accepting higher investments in exchange for improved process performance could be a fruitful opportunity to further increase the efficiency of energy-intensive, renewables-fed chemical processes. To explore the potential of this design paradigm shift from a quantitative perspective, a dedicated biogas-to-methanol case study was set up for a comparative analysis. The reaction and separation sections for grade AA biomethanol production were designed and simulated based on both total annualized cost and utility cost minimization, and the results were compared. The effort focused on the most energy-intensive section of the process, the purification. To this end, the distillation columns were intentionally oversized. Although this approach increased the initial investment cost, it led to significant energy savings.

The investment increase for each layout and the corresponding energy savings were assessed and analyzed. The simulation results show relevant improvements, with energy savings of 15% with respect to the conventional layout. As a consequence, the possibility of establishing a new break-even operating point between equipment- and utility-related expenses as the optimal decision at the design stage is worth analysing in greater detail in future studies. Notably, this break-even point depends strongly on both the cost and the availability of energy. In scenarios where energy availability is limited or costs are higher, the advantages of oversizing become more pronounced.



Operational and Economic Feasibility of Green Solvent-Based Extractive Distillation for 1,3-Butadiene Recovery: A Comparison with Conventional Toxic Solvents

João Pedro Gomes1, Rodrigo Silva2, Clemente Nunes3, Domingos Barbosa1

1LEPABE / ALiCE, Faculdade de Engenharia da Universidade do Porto; 2Repsol Polímeros, S.A., Complexo Petroquímico; 3CERENA, Instituto Superior Técnico

The increasing demand for safer and environmentally friendly processes in the petrochemical industry requires replacing harmful solvents with safer alternatives. One such process, the extractive distillation (ED) of 1,3-butadiene, typically employs potentially toxic solvents such as N,N-dimethylformamide (DMF) and N-methyl-2-pyrrolidone (NMP). Although highly effective, these solvents may pose significant health and environmental risks. This study explores the viability of using propylene carbonate (PC), a green solvent, as a substitute in the butadiene ED process.

A comprehensive simulation study using Aspen Plus® was conducted to model the behavior of PC in comparison with DMF (Figure 1). Due to the scarcity of experimental data for the PC/C4 hydrocarbon system, it was crucial to have a reliable prediction of vapor-liquid equilibrium (VLE) to derive accurate pairwise interaction parameters (bij) and ensure a more realistic representation of molecular interactions. Initially, COSMO-RS (Conductor-like Screening Model for Real Solvents) was employed, leveraging its quantum-chemical foundation to predict VLE based on molecular surface polarization charge densities. Subsequently, new energy interaction parameters were obtained for the Non-Random Two-Liquid (NRTL) model coupled with the Redlich-Kwong (RK) equation of state, a combination that is particularly effective for systems with non-ideal behavior, such as those involving polar compounds, strong molecular interactions (like hydrogen bonding) and highly non-ideal mixtures, and is therefore well suited to the mixtures present in extractive distillation processes. Key operational parameters, such as energy consumption, solvent recovery, and product purity, were evaluated to assess the process efficiency and feasibility. Additionally, an energy analysis of the process with the new solvent was conducted to evaluate its energy-saving potential. This was achieved using the pinch methodology from the Aspen Energy Analysis tool to optimize the existing process for the new solvent. Economic evaluations, including capital (CapEx) and operational (OpEx) costs, were carried out to provide a holistic comparison between the solvents.

The initial analysis of the solvent's selectivity showed slightly lower selectivity compared to the conventional, potentially toxic, solvents, along with a higher boiling point. As a consequence, a higher solvent-to-feed ratio may be required to achieve the desired separation efficiency. The higher boiling point will also require increased heat duties, leading to higher overall energy consumption. Nevertheless, the study underscores the potential of this green solvent to improve the sustainability of petrochemical processes while striving to maintain economic feasibility.



Optimizing Heat Recovery: Advanced Design of Integrated Heat Exchanger Networks with ORCs and Heat Pumps

Zinet Mekidiche Martínez, José Antonio Caballero Suárez, Juan Labarta

Universidad de Alicante, Spain

An advanced model has been developed to facilitate the simultaneous design of heat exchanger networks integrated with organic Rankine cycles (ORCs) and heat pumps, addressing two primary objectives. First, the model utilizes heat pumps to reduce reliance on external services by enhancing heat recovery within the system. Secondly, ORCs capitalize on residual heat streams to generate additional energy, effectively integrating with the existing heat exchanger network.

Effective integration of these components requires careful selection of working fluids for the ORCs and heat pumps, and determination of their optimal operating temperatures to achieve maximum efficiency. The heat exchanger network, in which inlet and outlet temperatures are not necessarily fixed, the number of organic Rankine cycles and heat pumps, and their operating conditions are all optimized simultaneously.

This method aims to minimize costs associated with external services, electricity, and equipment such as compressors and turbines. The approach leads to the design of a heat exchanger network that optimizes both the use of residual heat streams and the integration of other streams within the system. This not only enhances operational efficiency and sustainability but also demonstrates the potential of incorporating an Organic Rankine Cycle (ORC) with various energy streams, not limited solely to residual heat.



CO2 recycling plant for decarbonizing hard-to-abate industries: Empirical modelling and Process design of a CCU plant- A case study

Jose Antonio Abarca, Stephanie Arias-Lugo, Lucia Gomez-Coma, Guillermo Diaz-Sainz, Angel Irabien

Departamento de Ingenierías Química y Biomolecular, Universidad de Cantabria

Achieving a net-zero CO2 society by 2050 is an ambitious target set by the European Commission Green Deal. Reaching this goal will require implementing various strategies to reduce CO2 emissions. Conventional decarbonization approaches are well-established, such as using renewable energies, electrification, and improving energy efficiency. However, different industries, known as "hard-to-abate sectors," face unique challenges due to the inherent CO2 emissions from their processes. For these sectors, alternative strategies must be developed. Carbon Capture and Utilization (CCU) technologies offer a promising and sustainable solution by capturing CO2 and converting it into valuable chemicals, thereby contributing to the circular economy.

This study focuses on designing a CO2 recycling plant for the cement or textile industry as a case study. The proposed plant integrates a CO2 capture process using membrane technology and a utilization stage where CO2 is electrochemically converted into formic acid. During the capture stage, several experiments are carried out at varying inlet concentrations to optimize process parameters and maximize the CO2 output flow. The membrane capture potential is determined by its CO2 permeability and selectivity, making highly selective membranes essential for efficient CO2 separation from the flue gas stream. Key variables affecting the capture process include flue gas concentration, inlet pressure, and total membrane area. Previous laboratory studies have demonstrated that a minimum CO2 concentration of 50% and a flow rate of 15 mL min-1 cm-2 of electrode are required for an efficient CO2 conversion to formic acid [1]. Thus, these variables are crucial for an effective integration of the capture and utilization stages.

For the utilization stage, a three-compartment electrochemical cell is proposed for the direct production of formic acid via CO2 electroreduction. The primary operational variables influencing formic acid production include the CO2 inlet flow rate and composition (determined by the capture stage), applied current density, inlet stream humidity, and water flow rate in the central compartment [2].

The coupling of the capture and utilization stages is necessary for the development of CO2 recycling plants. However, it remains at an early stage, especially for the integration of membrane capture technologies with CO2 electroreduction. This work aims to empirically model both the CO2 capture and electroreduction systems using neural networks, resulting in an integrated predictive model for the entire CO2 recycling plant. This model will optimize the performance of the capture-utilization system, facilitating the design of a sustainable process for CO2 capture and conversion into formic acid. Ultimately, this approach will contribute to reducing the product's carbon footprint.
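A minimal sketch of the envisaged chaining of two data-driven surrogates (capture stage feeding the electroreduction stage) is given below; the synthetic data, network sizes and variable ranges are placeholders, not the experimental datasets or the final model.

```python
# Sketch of chaining two data-driven surrogates (capture -> electroreduction).
# Synthetic training data stand in for the experimental datasets.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Capture-stage surrogate: [flue-gas CO2 %, pressure bar, membrane area m2]
X_cap = rng.uniform([10, 1, 1], [20, 5, 10], size=(200, 3))
y_cap = np.column_stack([                       # synthetic targets
    40 + 2.5 * X_cap[:, 0] + 3 * X_cap[:, 1],   # permeate CO2 purity (%)
    2 + 1.5 * X_cap[:, 2]])                     # permeate flow (mL/min/cm2)
capture_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(X_cap, y_cap)

# Utilization-stage surrogate: [CO2 purity %, flow, current density mA/cm2]
X_use = rng.uniform([50, 10, 50], [100, 30, 200], size=(200, 3))
y_use = 90 - 0.1 * (100 - X_use[:, 0]) - 0.05 * np.abs(X_use[:, 2] - 120)
use_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                         random_state=0).fit(X_use, y_use)

# Integrated prediction: capture conditions -> formic acid Faradaic efficiency
purity, flow = capture_model.predict([[15, 3, 8]])[0]
fe = use_model.predict([[purity, flow, 120]])[0]
print(f"Predicted Faradaic efficiency: {fe:.1f}%")
```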

Acknowledgments

The authors acknowledge the financial support received from the Spanish State Research Agency through the project PLEC2022-009398 MCIN/AEI/10.13039/501100011033 and Unión Europea Next Generation EU/PRTR. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. Jose Antonio Abarca acknowledges the predoctoral research grant (FPI) PRE2021-097200.

[1] G. Diaz-Sainz, J. A. Abarca, M. Alvarez-Guerra, A. Irabien, Journal of CO2 Utilization. 2024, 81, 102735

[2] J. A. Abarca, M. Coz-Cruz, G. Diaz-Sainz, A. Irabien, Computer Aided Chemical Engineering, 2024, 53, pp. 2827-2832



Integration of direct air capture with CO2 utilization technologies powered by renewable energy sources to deliver negative carbon emissions

Calin-Cristian Cormos1, Arthur-Maximilian Bathori1, Angela-Maria Kasza1,2, Maria Mihet2, Letitia Petrescu1, Ana-Maria Cormos1

1Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, Romania; 2National Institute for Research and Development of Isotopic and Molecular Technologies, Romania

Reducing greenhouse gas emissions is an important environmental element in actively combating global warming and climate change. To achieve climate neutrality by the middle of this century, several options are envisaged, such as increasing the share of renewable energy sources (e.g., solar, wind, biofuels) to gradually replace fossil fuels, large-scale implementation of Carbon Capture, Utilization and Storage (CCUS) technologies, and improving the overall energy efficiencies of both production and utilization steps. With respect to reducing the CO2 concentration in the atmosphere, Direct Air Capture (DAC) options are of particular interest and very promising for delivering negative carbon emissions. Negative carbon emissions are a key element for climate neutrality, balancing the remaining positive-emission systems and the hard-to-decarbonize processes. The integration of renewable-powered DAC systems with CO2 utilization technologies can deliver negative carbon emissions while reducing the energy and economic penalties of such promising decarbonization processes.

This work evaluates the innovative, energy-efficient potassium-calcium looping cycle as a promising direct air capture technology integrated with various catalytic transformations of CO2 into basic chemicals (e.g., synthetic natural gas, methanol). The integrated system is powered by renewable energy (for both heat and electricity requirements). The investigated DAC concept is set to capture 1 Mt/y of CO2 with about a 75% carbon capture rate. A fraction of this captured CO2 stream (about 5 - 10%) is catalytically converted into synthetic methane or methanol using green hydrogen produced by water electrolysis, the rest being sent to geological storage. Conceptual design, process modelling and model validation, followed by overall energy optimization through thermal integration analysis, were the engineering tools used to establish the global mass and energy balances for quantifying key techno-economic and environmental performance indicators. As the results show, the integrated DAC - CO2 utilization system, powered by renewable energy, shows promising performance in delivering negative carbon emissions with reduced ancillary energy consumption. However, significant technological developments (e.g., scale-up, reduced solvent and sorbent make-up, better process intensification and integration, improved catalysts) are still needed to advance this innovative technology from the current state of the art to a relevant industrial size.



Repurposing Existing Combined Cycle Power Plants with Methane Production for Renewable Energy Storage

Diego Santamaría, Antonio Sánchez, Mariano Martín

Department of Chemical Engineering, University of Salamanca, Plz Caidos 1-5, 37008, Salamanca, Spain

Nowadays, various technologies exist to generate renewable energy, such as solar, wind and hydroelectric power. However, most of these energy sources fluctuate with the weather. Reliable energy storage is essential to promote a higher share of renewable energy in the current energy system; moreover, energy storage helps maintain energy security. Power-to-Gas technologies consist of storing renewable energy in the form of gaseous chemicals. In this case, Power-to-Methane is the technology of choice, since methane allows the use of existing infrastructure for its transport and storage.

This work proposes the integration and optimization of methane energy storage in existing combined cycle power plants. This involves the introduction of carbon capture systems and methane production, reusing the existing power production section. The process leverages renewable energy to produce hydrogen, which is then transformed into methane for easier storage. When energy demand arises, the stored methane is burned in the combined cycle power plant, producing two waste streams: water and CO2. The water is collected and returned to the electrolyzer, while the CO2 is captured and combined with hydrogen to synthesize methane again (Ghaib & Ben-Fares, 2018). This results in a circular process that repurposes the existing infrastructure.

Two different combustion methods, ordinary and oxy-combustion (Elias et al., 2018), are optimized to evaluate both alternatives and their economic feasibility. In ordinary combustion, air is used as the oxidizer, while in oxy-combustion pure oxygen is employed, including the oxygen produced in the electrolyzer. However, CO2 recirculation is necessary in oxy-combustion to prevent excessive flame temperatures (Stanger et al., 2015). In addition, the potential energy storage capacity of the existing combined cycle power plants in a country, specifically Spain, is also analysed. This would avoid their decommissioning and allow reuse of the natural gas distribution network, adapting it for use in conjunction with a renewable energy storage system.

References

Elias, R. S., Wahab, M. I. M., & Fang, L. (2018). Retrofitting carbon capture and storage to natural gas-fired power plants: A real-options approach. Journal of Cleaner Production, 192, 722–734.

Ghaib, K., & Ben-Fares, F.-Z. (2018). Power-to-Methane: A state-of-the-art review. Renewable and Sustainable Energy Reviews, 81, 433–446.

Stanger, R., Wall, T., Spörl, R., Paneru, M., Grathwohl, S., Weidmann, M., Scheffknecht, G., McDonald, D., Myöhänen, K., Ritvanen, J., Rahiala, S., Hyppänen, T., Mletzko, J., Kather, A., & Santos, S. (2015). Oxyfuel combustion for CO2 capture in power plants. International Journal of Greenhouse Gas Control, 40, 55–125.



Powering chemical processes with variable renewable energy: A case of iron making in steel industry

Dorcas Tuitoek, Daniel Holmes, Binjian Nie, Aidong Yang

University of Oxford, United Kingdom

The steel industry is responsible for ~8% of global energy demand and 7% of CO2 emissions annually [1]. Increased adoption of renewable energy in the iron-making process, the primary step of steel-making, is one of the promising ways to decarbonise the industry. The intermittent nature of renewable energy, as well as the difficulty in storing it, results in a variable energy supply profile, necessitating a shift in the operation modes of manufacturing processes to make efficient use of renewable energy. Through dynamic simulation, this study explores the case of the direct reduction process, in which iron ore is charged to a shaft furnace reactor and reduced to solid iron with green hydrogen.
Existing mathematical modelling and simulation studies of the shaft furnace have only investigated its behaviour assuming constant gas and solid feed rates. Here, we simulate iron ore reduction in a 1D model using COMSOL Multiphysics, with intermittent hydrogen supply, to predict the effects of a time-varying hydrogen feed on the degree of iron ore reduction. The dynamic model of the counter-current moving bed captures chemical reaction kinetics, mass and heat transfer. With settings relevant to industrial-scale operations, our results show that the system can tolerate drops in hydrogen feed rate of up to ~10% without a reduction in the metallisation rate of the product. To tolerate greater fluctuations of the H2 feed rate, strategies were tested that alter the residence time and change the thermal profile in the reactor, to ensure complete metallic iron formation.
These findings show that it is possible to operate a shaft furnace with a certain degree of hydrogen feed variability, hence providing an approach to mitigating the challenges of intermittent renewable energy supply as a solution to decarbonize industries.

1. International Energy Agency (IEA). Iron and Steel Technology Roadmap. Towards More Sustainable Steelmaking. https://www.iea.org/reports/iron-and-steel-technology-roadmap (2020).



Early-Stage Economic and Environmental Assessment for Emerging Chemical Technologies: Back-casting Approach

Yeonguk Kim, Dami Kim, Kosan Roh

Chungnam National University, Korea, Republic of (South Korea)

The emergence of alternative chemical technologies has made reliable economic and environmental assessments indispensable for guiding future research and development. However, these assessments are inherently challenging due to the lack of comprehensive understanding and technical knowledge of such technologies, particularly at low technology readiness levels (TRLs). This knowledge gap complicates accurate predictions of their real-world performance, economics, and potential environmental impacts. To address these challenges, we adopt a back-casting approach to demonstrate a TRL-based early-stage evaluation procedure, as previously proposed by Roh et al. (2020, Green Chem. 22, 3842). In this work, we apply this framework to methanol production based on the reforming of natural gas, a mature chemical technology, to explore its suitability for evaluating emerging chemical technologies. The target technology is assumed to be at three distinct stages of maturity: theoretical, intermediate, and engineering. We analyze the economic and environmental indicators of the technology using the information available at each stage and then examine how closely the indicators calculated at the theoretical and intermediate stages match those at the engineering stage. The analysis shows that the performance indicators are lowest at the theoretical stage, as they rely solely on reaction stoichiometry. The intermediate stage, despite considering various factors, yields slightly higher performance indicators than the engineering stage due to the lack of process optimization. The outcomes of this study enable a proactive assessment of emerging chemical technologies, providing insights into their feasibility at various stages of development.



A White-Box AI Framework for Interpretable Global Warming Potential Prediction

Jaewook Lee, Ethan Errington, Miao Guo

King's College London, United Kingdom

The transformation of the chemical industry towards sustainable manufacturing requires reliable yet robust decision-making tools involving Life Cycle Assessment (LCA). LCA offers a standardised method to evaluate the environmental profiles of chemical processes and products. However, with the emergence of numerous novel chemicals and processes, existing LCA inventory databases are increasingly resource-intensive to develop, often delayed in reporting, and suffer from data gaps. Research efforts have been made to address these knowledge gaps by developing predictive models that estimate LCA properties from chemical structures. However, previously published research has been hampered by limited dataset availability and by reliance on complex black-box models such as Deep Neural Networks (DNNs), which often provide low predictive accuracy and lack the interpretability needed for industrial adoption. Understanding the rationale behind model predictions is crucial, particularly in industrial applications where decision-making relies on both accuracy and transparency. In this study, we introduce a Kolmogorov-Arnold Network (KAN)-based model for LCA predictions of emerging chemicals, designed to bridge the gap between accuracy and interpretability by incorporating domain knowledge into the learning process.

We utilized 15 key LCA categories from the Ecoinvent v3.8 database, comprising 2,239 data points. To address the large variation in data scale, we applied a logarithmic transformation. Using chemical structures represented as SMILES, we converted them into MACCS keys (166-bit fingerprints) and Mordred descriptors (1,825 physicochemical properties), incorporating features such as molecular weight and hydrophobicity. These features were used to train KAN, Random Forest, and DNN models to predict LCA values across all categories. KAN consistently outperformed the Random Forest and DNN models in 12 out of 15 LCA categories, achieving an average R² of 74% compared to 66% and 67% for Random Forest and DNN, respectively. For critical categories such as Global Warming Potential, Terrestrial Ecotoxicity, and Ozone Formation–Human Health, KAN achieved high predictive accuracies (R² of 0.84, 0.86, and 0.87, respectively), demonstrating an 8% improvement in overall accuracy. Our feature analysis indicated that MACCS keys provided nearly the same predictive power as Mordred descriptors, despite containing significantly fewer features. Furthermore, we identified that retaining data points with extremely large LCA values (top 3%) could degrade model performance, underscoring the importance of careful data curation. In terms of model interpretability, the use of Gini importance and SHapley Additive exPlanations (SHAP) revealed that functional groups such as halogens, oxygen, and methyl groups had the most significant impact on LCA predictions, aligning with domain knowledge. The SHAP analysis further highlighted that KAN was able to capture more complex structure-property relationships compared to conventional models.
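A minimal sketch of the featurization and baseline pipeline described above (SMILES to MACCS keys, log-transformed target, Random Forest baseline) is shown below using RDKit and scikit-learn; the molecules and target values are placeholders rather than Ecoinvent data, and the KAN itself is not reproduced here.

```python
# Sketch of the featurization / baseline pipeline described above
# (SMILES -> MACCS keys -> log-transformed target -> Random Forest baseline).
# Molecules and GWP values below are placeholders, not Ecoinvent data.
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCCl", "CCCCCC"]
gwp = np.array([1.6, 2.1, 1.4, 3.0, 1.9])             # kg CO2-eq/kg, dummy

def maccs_features(smi):
    mol = Chem.MolFromSmiles(smi)
    return np.array(MACCSkeys.GenMACCSKeys(mol))       # MACCS bit vector

X = np.vstack([maccs_features(s) for s in smiles])
y = np.log10(gwp)                                       # compress scale variation

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
pred = 10 ** model.predict([maccs_features("CCOC")])[0]
print(f"Predicted GWP: {pred:.2f} kg CO2-eq/kg")
```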

In conclusion, the application of the KAN model for LCA predictions provides a robust and accurate framework for evaluating the environmental impacts of emerging chemicals. By integrating domain-specific knowledge, this approach not only enhances the reliability of LCA prediction but also offers deeper insights into the structural drivers of environmental outcomes. Its demonstrated success in identifying key molecular features makes it a valuable tool for accelerating sustainable innovations in both chemical process transformations and drug development, where precise environmental assessments are essential.



Data-driven approach for reaction mechanism identification using neural ODEs

Junu Kim1,2, Itushi Sakata3, Eitaro Yamatsuta4, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Auxilart Co., Ltd., Tokyo, 104-0061, Japan; 3Institute of Physical and Chemical Research, Hyogo, 660-0813, Japan; 4Independent researcher, Japan

In the fields of reaction engineering and process systems engineering, mechanistic models have traditionally been the focus due to their explainability and extrapolative power, as they are based on fundamental principles governing the system. For chemical reactions, kinetic studies are crucial in developing these mechanistic models, providing insights into reaction mechanisms and estimating model parameters [1, 2]. However, kinetic studies often require extensive cycles of experimental data acquisition, reaction pathway generation, model construction, and parameter estimation, making the process laborious and time-consuming. In response to these challenges, machine learning techniques have gained attention. A recent approach involves using neural network models trained on simulation data to classify reaction mechanisms [3]. While effective, these methods demand vast amounts of training data, and expanding the reaction boundaries further increases the data requirements.

In this study, we present a direct, data-driven approach to identifying reaction mechanisms and constructing mechanistic models from experimental data without the need for large datasets. As an initial attempt, we focused on amination and Grignard reactions, which are widely used in chemical and pharmaceutical synthesis. Since chemical reactions can be expressed as differential equations, our hypothesis is that by calculating first- or higher-order derivatives directly from experimental data, we can estimate the relationships between the different chemical compounds in the system and identify the reaction mechanism, order, and parameter values. The major challenge arises with real experimental data, where the number of data points is often limited (e.g., around ten), making it difficult to estimate differential values directly. To address this, we employed neural ordinary differential equations (neural ODEs) to effectively interpolate these sparse datasets [4]. By applying neural ODEs, we were able to generate interpolated data, which enabled the calculation of derivatives and the development of mechanistic models that accurately reproduce the observed data. For future work, we plan to validate our methodology across a broader range of reactions and further automate the process to enhance efficiency and applicability.
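The sketch below illustrates the general neural-ODE fitting step on synthetic, sparse concentration data using the torchdiffeq implementation of [4]; the data, network architecture and training settings are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: fit a neural ODE to sparse concentration measurements, then query
# the learned right-hand side for derivative estimates. Data are synthetic.
import torch
from torchdiffeq import odeint   # Chen et al., NeurIPS 2018

t_obs = torch.tensor([0., 1., 2., 4., 6., 9., 12., 16., 20., 25.])
c_obs = torch.stack([torch.exp(-0.2 * t_obs),              # reactant A
                     1 - torch.exp(-0.2 * t_obs)], dim=1)  # product B

class RHS(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                       torch.nn.Linear(32, 2))
    def forward(self, t, y):
        return self.net(y)

rhs = RHS()
opt = torch.optim.Adam(rhs.parameters(), lr=1e-2)
y0 = c_obs[0]

for epoch in range(200):
    opt.zero_grad()
    pred = odeint(rhs, y0, t_obs)          # integrate the learned ODE
    loss = torch.mean((pred - c_obs) ** 2)
    loss.backward()
    opt.step()

# Derivatives along the trajectory hint at the underlying rate law
with torch.no_grad():
    print(rhs(None, c_obs)[:3])
```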

References

[1] P. Sagmeister, et al. React. Chem. Eng. 2023, 8, 2818. [2] S. Diab, et al. React. Chem. Eng. 2021, 6, 1819. [3] J. Bures and I. Larrosa, Nature 2023, 613, 689. [4] R. T. Q. Chen et al. NeurIPS 2018.



Generalised Disjunctive Programming for Process Synthesis

Lukas Scheffold, Erik Esche

Technische Universität Berlin, Germany

Automating process synthesis presents a formidable challenge in chemical engineering. Particularly challenging is the development of frameworks that are both general and accurate, while remaining computationally tractable. To achieve generality, a building block-based modelling approach was proposed in previous contributions by Kuhlmann and Skiborowski [1] and Krone et al. [2]. This model formulation incorporates Phenomena-based Building Blocks (PBBs), capable of depicting a wide array of separation processes [1], [3]. To maximize accuracy, the PBBs are interfaced with CAPE-OPEN thermodynamics, allowing for detailed thermodynamic models [2] within the process synthesis problem. However, the pursuit of generality and accuracy introduces increased model complexity and poses the risk of combinatorial explosion. To address this and enhance tractability, [1] developed a structural screening method that forbids superstructures leading to infeasible configurations. These combined innovations allow for general, accurate, and tractable superstructures.

To further increase the solvable problem size, we propose an advanced optimization framework, leveraging generalized disjunctive programming (GDP). It allows for multiple improvements over existing MINLP formulations, aiming at improving feasibility and solution time. This is achieved by deactivation of unused model equations during the solution procedure. Additionally, Grossmann [4] showed that a disjunctive branch-and-bound algorithm can be postulated. This provides tighter bounds for linear problems than those obtained through reformulations used in conventional MINLP solvers, reducing the required solution time.

Building on these insights, it is of interest whether these findings extend to nonlinear systems. To investigate this, we developed a MathML/XML-based automatic code generation tool inside MOSAICmodeling [5], which formulates complex nonlinear GDPs and exports them to conventional optimization environments (Pyomo, GAMS, etc.). These are then coupled with structural screening methods [1] and solved using out-of-the-box functionalities for GDP solution. To validate the proposed approach, a case study is conducted involving two PBBs, previously published by Krone et al. [2]. The study compares the performance of the GDP-based optimization framework against conventional MINLP approaches. Preliminary results suggest that the GDP-based framework offers computational advantages over conventional MINLP formulations. The full paper will present detailed comparisons, offering insights into the practical applicability and benefits of GDP.
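For readers unfamiliar with GDP formulations, the generic Pyomo.GDP sketch below shows how a disjunction between two process alternatives can be declared and reformulated for a conventional solver; it is a minimal illustration, not the MOSAICmodeling export or the PBB superstructure.

```python
# Generic Pyomo.GDP illustration: choose one of two processing alternatives
# via a disjunction, then reformulate (big-M) for a conventional solver.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, TransformationFactory,
                           SolverFactory, minimize, value)
from pyomo.gdp import Disjunct, Disjunction

m = ConcreteModel()
m.feed = Var(within=NonNegativeReals, bounds=(0, 100))
m.product = Var(within=NonNegativeReals, bounds=(0, 100))
m.cost = Var(within=NonNegativeReals, bounds=(0, 500))

# Alternative A: cheap unit, low conversion
m.unitA = Disjunct()
m.unitA.conv = Constraint(expr=m.product == 0.6 * m.feed)
m.unitA.cst = Constraint(expr=m.cost == 10 + 1.0 * m.feed)

# Alternative B: expensive unit, high conversion
m.unitB = Disjunct()
m.unitB.conv = Constraint(expr=m.product == 0.9 * m.feed)
m.unitB.cst = Constraint(expr=m.cost == 40 + 1.2 * m.feed)

m.choice = Disjunction(expr=[m.unitA, m.unitB])   # exactly one unit is built
m.demand = Constraint(expr=m.product >= 50)
m.obj = Objective(expr=m.cost, sense=minimize)

TransformationFactory("gdp.bigm").apply_to(m)      # MI(N)LP reformulation
SolverFactory("glpk").solve(m)                      # any MILP solver works here
chosen = "A" if value(m.unitA.indicator_var) else "B"
print(f"Selected unit {chosen}, cost = {value(m.cost):.1f}")
```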

References

[1] H. Kuhlmann and M. Skiborowski, "Optimization-Based Approach To Process Synthesis for Process Intensification: General Approach and Application to Ethanol Dehydration," Industrial & Engineering Chemistry Research, vol. 56, no. 45, pp. 13461–13481, 2017.

[2] D. Krone, E. Esche, N. Asprion, M. Skiborowski and J.-U. Repke, "Enabling optimization of complex distillation configurations in GAMS with CAPE-OPEN thermodynamic models," Computers & Chemical Engineering, vol. 157, p. 107626, 2022.

[3] H. Kuhlmann, M. Möller and M. Skiborowski, "Analysis of TBA-Based ETBE Production by Means of an Optimization-Based Process-Synthesis Approach," Chemie Ingenieur Technik, vol. 91, no. 3, pp. 336–348, 2019.

[4] I. E. Grossmann, "Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques," Optimization and Engineering, no. 3, pp. 227–252, 2002.

[5] E. Esche, C. Hoffmann, M. Illner, D. Müller, S. Fillinger, G. Tolksdorf, H. Bonart, G. Wozny and J. Repke, "MOSAIC – Enabling Large-Scale Equation-Based Flow Sheet Optimization," Chemie Ingenieur Technik, vol. 89, no. 5, pp. 620–635, 2017.



Optimal Design and Operation of Off-Grid Electrochemical Nitrogen Reduction to Ammonia

Michael Johannes Rix1, Judith M. Schwindling1, Karim Bidaoui1, Alexander Mitsos2,1,3

1RWTH Aachen University, Germany; 2JARA-ENERGY, 52056 Aachen, Germany; 3Energy Systems Engineering (ICE-1), Forschungszentrum Jülich, Germany

Electrochemical processes can aid in defossilizing the chemical industry. When operated off-grid with its own renewable electricity (RE) production, the electrochemical process and the RE plants must be optimized together. We optimize the design and operation of an electrochemical system for nitrogen reduction to ammonia coupled with wind and solar electricity generation to minimize ammonia production costs. Electrochemical nitrogen reduction allows ammonia production from RE, water, and air in one electrolyzer [1]. Comparable design and operation optimizations for coupling RE with electrochemical systems were already performed in the literature for different systems (e.g., for water electrolysis by [2] and others).

We optimize the design and operation of the electrolyzer and RE plant over the scope of one year. Investment costs for the electrolyzer and RE plants are annualized over their respective lifetimes. We calculate the electricity production from weather data at hourly resolution and from the design of the RE plant. From the design of the electrolyzer and the electricity production, we calculate the ammonia production. We investigate three operating strategies: (i) direct coupling of RE and electrolyzer, (ii) curtailment of electricity, and (iii) battery storage with curtailment. In direct coupling, the electrolyzer electricity consumption must follow the RE generation, so the electrolyzer is sized for the peak power of the RE plant. Therefore, it can only be operated at full load at peak electricity generation, which occurs only at one or a few times during the year. Curtailment and battery storage allow the decoupling of electricity production and consumption, so the electrolyzer can be operated at full or high load multiple times during the year.

Operation with curtailment increases the load factor of the electrolyzer and reduces the production cost. The RE plant can be over-designed such that the electrolyzer can operate at full or higher load at off-peak RE generation. Achieving a high load factor and few on/off cycles of the electrolyzer is important since on/off cycles can lead to catalyst degradation due to reverse currents [3]. Implementation of battery storage can further increase the load factor of the electrolyzer. However, battery costs are too high, resulting in increased production costs.

We run the optimization for different locations with different RE potentials. At all locations, operation with curtailment is beneficial, and batteries remain too expensive. The availability of wind and solar determines the optimal design of the electrolyzer and RE plant, the optimal operation, the production cost, and the load factor.
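A toy illustration of the curtailment strategy is sketched below: for a given hourly RE profile, sizing the electrolyzer below peak RE power raises its load factor at the expense of curtailed energy; the profile, specific energy demand and cost figures are placeholders, not results of this study.

```python
# Toy illustration of the curtailment strategy: size the electrolyzer below
# peak RE power, curtail the excess, and compare load factor and unit cost.
# The RE profile and all cost figures are placeholders.
import numpy as np

rng = np.random.default_rng(1)
re_power = np.clip(rng.normal(40, 25, 8760), 0, 100)   # hourly RE output, MW

def evaluate(elec_capacity_mw):
    consumed = np.minimum(re_power, elec_capacity_mw)   # MWh used per hour
    load_factor = consumed.sum() / (elec_capacity_mw * 8760)
    ammonia_t = consumed.sum() / 11.0                    # assumed 11 MWh/t NH3
    elec_capex = 0.9e6 * elec_capacity_mw * 0.1          # EUR/a, annualized
    re_capex = 1.2e6 * 100 * 0.08                        # fixed RE plant cost
    return load_factor, (elec_capex + re_capex) / ammonia_t

for cap in [100, 80, 60, 40]:                            # MW electrolyzer sizes
    lf, c = evaluate(cap)
    print(f"{cap:>3} MW electrolyzer: load factor {lf:.2f}, {c:,.0f} EUR/t NH3")
```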

References
1. MacFarlane, D. R. et al. A Roadmap to the Ammonia Economy. Joule 4, 1186–1205 (2020).
2. Hofrichter, A., et al. Determination of the optimal power ratio between electrolysis and renewable energy to investigate the effects on the hydrogen production costs. International Journal of Hydrogen Energy 48, 1651–1663 (2023).
3. Kojima, H. et al. Influence of renewable energy power fluctuations on water electrolysis for green hydrogen production. International Journal of Hydrogen Energy 48, 4572–4593 (2023).



A Stochastic Techno-Economic Assessment of Emerging Artificial Photosynthetic Bio-Electrochemical Systems for CO₂ Conversion

Haris Saeed, Aidong Yang, Wei Huang

Oxford University, United Kingdom

Artificial Photosynthetic Bioelectrochemical Systems (AP-BES) are a promising technology for converting CO2 into valuable bioproducts, addressing both carbon mitigation and sustainable production challenges. By integrating biological and electrochemical processes to emulate natural photosynthesis, AP-BES offer potential for scalable, renewable biomanufacturing. However, their commercialization faces significant challenges related to process efficiency, system integration, and economic uncertainties. A thorough techno-economic assessment (TEA) is crucial for evaluating the viability and scalability of this technology.

This study employs a stochastic TEA to assess the bioelectrochemical conversion of CO2 to bioproducts, accounting for variability and uncertainty in key technical and economic parameters. Unlike traditional deterministic TEA, which relies on fixed-point estimates, the stochastic approach uses probability distributions to capture a broader range of potential outcomes. Critical factors such as energy consumption, CO2 conversion efficiency, and bioproduct market prices are modeled probabilistically, offering a more accurate reflection of real-world uncertainties.

The novelty of this research lies in its comprehensive application and advanced methodology. This study is one of the first to apply a full-system TEA to AP-BES, covering the entire process from carbon capture to product purification. Moreover, the stochastic approach, utilizing Monte Carlo simulations, enables a more robust analysis by incorporating uncertainties in both technical and economic factors. This combined methodology provides more realistic insights into the system's economic potential and commercial feasibility compared to conventional deterministic models.

Monte Carlo simulations are used to generate probability distributions for key economic metrics, including total annualized cost (TAC), internal rate of return (IRR), and levelized cost of product (LCP). By performing thousands of iterations, the model offers a comprehensive understanding of AP-BES's financial viability, delivering confidence intervals and risk assessments often missing from deterministic approaches. Key variables include electricity price fluctuations, a significant driver of operating costs, and changes in bioproduct market prices due to varying demand. The model also accounts for uncertainties in future technological improvements, such as enhanced CO2 conversion efficiencies and potential economies of scale that could lower both capital expenditure (CAPEX) and operational expenditure (OPEX) per kg of CO2 processed. Sensitivity analyses further identify the most influential factors impacting economic outcomes, guiding future research and development.

The results underscore the critical role of uncertainty in evaluating the economic viability of AP-BES. While the technology demonstrates significant potential for both economic and environmental benefits, substantial risks remain, particularly concerning electricity price volatility and unpredictable bioproduct markets. Compared to static point estimates in deterministic approaches, Monte Carlo simulations provide a more nuanced understanding of the financial risks and opportunities. This stochastic TEA offers valuable insights for optimizing processes, reducing costs, and guiding investment and research decisions in the development of artificial photosynthetic bioelectrochemical systems.
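A minimal sketch of the Monte Carlo propagation step is given below; the probability distributions, plant scale and cost figures are illustrative assumptions, not the values used in the assessment.

```python
# Sketch of the Monte Carlo step: propagate uncertain inputs to a levelized
# cost of product (LCP) distribution. All distributions are placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

elec_price = rng.normal(0.08, 0.02, n)          # EUR/kWh
conv_eff = rng.triangular(0.3, 0.5, 0.7, n)     # CO2 -> product conversion
capex_annual = rng.normal(1.2e6, 0.2e6, n)      # EUR/year, annualized

co2_feed = 5_000_000                            # kg CO2 processed per year
energy_per_kg = 8.0                             # kWh per kg CO2 (assumed)
product_per_kg_co2 = 0.6 * conv_eff             # kg product per kg CO2

opex = co2_feed * energy_per_kg * elec_price
product = co2_feed * product_per_kg_co2
lcp = (capex_annual + opex) / product           # EUR per kg product

p5, p50, p95 = np.percentile(lcp, [5, 50, 95])
print(f"LCP median {p50:.2f} EUR/kg (90% interval {p5:.2f}-{p95:.2f})")
```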



Empowering LLMs for Mathematical Reasoning and Optimization: A Multi-Agent Symbolic Regression System

Shaurya Vats, Sai Phani Chatti, Aravind Devanand, Sandeep Krishnan, Rohit Karanth Kota

Siemens Technology and Services Pvt. Ltd

Understanding data with complex patterns is a significant part of the journey toward accurate data prediction and interpretation. The relationships between input and output variables can unlock diverse advancement opportunities across various processes. However, most AI models attempting to uncover these patterns are not explainable or remain opaque, offering little interpretation. This paper explores an approach to explainable AI by introducing a multi-agent symbolic regression system (MaSR) for extracting equations between features from data.

We developed a novel approach to perform symbolic regression by discovering mathematical functions using a multi-agent system of LLMs. This system addresses the traditional challenges of genetic optimization, such as random seed generation, complexity, and the explainability of the final equation. The agent-based system divides the process into various steps, including initial function generation, loss and complexity calculation, mutation and crossbreeding of equations, and explanation of the final equation to improve the accuracy and decrease the workload.

We utilize the in-context learning capabilities of LLMs trained on vast amounts of data to generate accurate equations more quickly. Additionally, we incorporate methods like retrieval-augmented generation (RAG) with tabular data and web search to further enhance the process. The system creates an explainable model that clarifies each process step leading to the final equation for a given dataset. We also use the capability of the method in combination with existing technologies to develop innovative solutions, such as incorporating physical laws derived from data using multi-agent symbolic regression (MaSR) to reduce illogical predictions and improve extrapolation, and passing the generated equations to LLMs as context for explaining large numbers of simulation results.

Our solution is compared with symbolic regression methods such as GPlearn and PySR against various benchmarks. This study presents research on expanding the reasoning capacities of large language models alongside their mathematical understanding. The paper serves as a benchmark in understanding the capabilities of LLMs in mathematical reasoning and can be a starting point for solving numerous complex tasks using LLMs. The MaSR framework can be applied in various areas where the reasoning capabilities of LLMs are tested for complex and sequential tasks. MaSR can explain the predictions of black-box models, develop data-driven models, identify complex relationships within the data, assist in feature engineering and feature selection, and generate synthetic data equations to address data scarcity, which are explored as further directions for future research in this paper.
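For context, the sketch below shows one of the baselines named above (gplearn's SymbolicRegressor) recovering a hidden relationship from synthetic data; it illustrates what the multi-agent system is benchmarked against, not the MaSR framework itself.

```python
# One of the baselines mentioned above (gplearn) recovering a known law from
# synthetic data; the multi-agent LLM system is benchmarked against this.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(300, 2))
y = 2.0 * X[:, 0] ** 2 / X[:, 1] + rng.normal(0, 0.05, 300)   # hidden law

est = SymbolicRegressor(population_size=2000, generations=20,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)          # best symbolic expression found
```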



Solid Oxide Cells and Hydrogen Storage to Prevent Grid Congestion

Dorsan Lepour, Arthur Waeber, Cédric Terrier, François Maréchal

École Polytechnique Fédérale de Lausanne, Switzerland

The electrification of the heating and mobility sectors, alongside increasing photovoltaic (PV) capacities, places significant pressure on electricity grids, particularly in urban neighborhoods and densely populated zones. High penetration of heat pumps and electric vehicles as well as significant PV deployment can indeed induce supply shortfalls or require curtailment, respectively. Grid reinforcement is a potential solution, but it is costly and involves substantial structural engineering work. Although some local energy storage systems have been extensively studied as an alternative (primarily batteries), the potential of integrating reversible solid oxide cells (rSOC) coupled with hydrogen storage in the context of urban energy systems planning remains underexplored. This study aims to address this gap by investigating the technical and economic feasibility of such systems at building or district scale.

This work uses the framework of REHO (Renewable Energy Hub Optimizer), a decision-support tool for sustainable urban energy system planning. REHO takes into account the endogenous resources of a defined territory, diverse end-use demands (e.g., heating, mobility), and multiple energy carriers (electricity, heat, hydrogen). Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to determine under which circumstances the deployment of rSOC and hydrogen storage becomes relevant.

The study considers several typical districts with their import and export capacities and examines two key scenarios: (1) a closed-loop hydrogen system where hydrogen is produced and consumed locally, and (2) a scenario involving connection to a broader hydrogen network. Results indicate that in areas where grid capacity is strained, rSOCs coupled with hydrogen tanks offer a compelling storage solution. They enhance energy self-consumption by converting surplus electricity into hydrogen for later use, while the heat generated during cell operation can be used to meet building space heating and domestic hot water demands.

These findings suggest that hydrogen-based energy storage can be a viable alternative to traditional grid reinforcement, particularly for areas facing an increased penetration of renewables in a saturated grid. The study highlights that for such regions approaching grid congestion, integrating local hydrogen capacities can provide both flexibility and efficiency gains while reducing the need for expensive grid upgrades.



A Modern Portfolio Theory Approach for Chemical Production with Supply Chain Considerations for Efficient Investment Planning

Mohamad Almoussaoui, Dhabia Al-mohannadi

Texas A&M University at Qatar, Qatar

The integrated supply chains of large chemical commodities and fuels play a major role in energy security. These supply chains are at risk from global shocks such as the COVID-19 pandemic [1]. As such, major natural gas producers and exporters such as Qatar aim to balance their supply chain investment returns against export risks, as the hydrocarbon sector contributes more than one-third of the country's Gross Domestic Product (GDP) [2]. Hence, this work introduces a modern portfolio theory (MPT) model formulation based on chemical commodity and fuel supply chains. The model uses Markowitz's optimization model [3] to meet an exporting country's financial objective of maximizing the investment return and minimizing the associated risk. By defining a supply chain asset as a combination of an exporting country, a traded chemical commodity, and an importing country, the model calculates the return for every supply chain investment and the risk associated with it due to price fluctuations. By solving the optimization problem, a set of Pareto-optimal supply chain portfolios and the efficient frontier are obtained. The model integrates both the chemical process production [4] and the shipping stages of a supply chain. This work's case study showcases the importance of considering the integrated supply chain in building the MPT model and its impact on the number and allocations of the resulting optimal portfolios. The developed model can guide investment planners to achieve their financial goals at minimum risk.
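A minimal mean-variance sketch in the spirit of the formulation described above is given below, tracing an efficient frontier over a few hypothetical supply-chain assets; the returns and covariance matrix are placeholders, not data from the case study.

```python
# Markowitz-style sketch: efficient frontier over hypothetical supply-chain
# assets (exporter-commodity-importer combinations). Numbers are placeholders.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10, 0.15])        # expected returns per asset
cov = np.array([[0.010, 0.002, 0.001, 0.003],
                [0.002, 0.030, 0.004, 0.006],
                [0.001, 0.004, 0.020, 0.005],
                [0.003, 0.006, 0.005, 0.050]])  # price-fluctuation covariance

def min_risk_portfolio(target_return):
    n = len(mu)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},
            {"type": "eq", "fun": lambda w: w @ mu - target_return})
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1 / n),
                   bounds=[(0, 1)] * n, constraints=cons)
    return res.x, np.sqrt(res.fun)

for r in np.linspace(0.09, 0.14, 6):            # trace the efficient frontier
    w, risk = min_risk_portfolio(r)
    print(f"return {r:.3f}  risk {risk:.3f}  weights {np.round(w, 2)}")
```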

References

[1]

M. Shehabi, "Modeling long-term impacts of the COVID-19 pandemic and oil price declines in Gulf oil economies," Economic Modelling, vol. 112, 2022.

[2]

"Qatar - Oil & Gas Field Machinery Equipment," 29 7 2024. [Online]. Available: https://www.trade.gov/country-commercial-guides/qatar-oil-gas-field-machinery-equipment. [Accessed 18 9 2024].

[3]

H. Markowitz, "PORTFOLIO SELECTION*," The Journal of Finance, vol. 7, no. 1, pp. 77-91, 1952.

[4]

S. Shehab, D. M. Al-Mohannadi and P. Linke, "Chemical production process portfolio optimization," Chemical Engineering Research and Design, vol. 167, pp. 207-217, 2021.



Co-gasification of crude glycerol and plastic waste using air/steam mixtures: a modelling approach

BAHIZIRE MARTIN MUKERU, BILAL PATEL

UNIVERSITY OF SOUTH AFRICA, South Africa

There has been an unprecedented growth in plastic waste and current management techniques such as landfilling and incineration are unsustainable, particularly due to the environmental impact associated with these practises [1]. Gasification is considered as one of the most sustainable ways not only to address these issues, but also produce energy from waste plastics [1]. However, issues such as tar and coke formation are associated with plastic waste gasification which reduces syngas quality [1],[2]. Another typical waste in huge quantities is crude glycerol, with low value, which is a by-product from biodiesel production. The cost involved in its purification is exceedingly high and therefore this limits its applications as a purified product [3]. Co-feeding plastic wastes with crude glycerol for syngas production cannot only address issues related to plastic gasification, but also allow the utilization of crude glycerol and enhance syngas quality [3]. This study evaluates the performance of a downdraft gasifier to produce hydrogen and syngas from the co-gasification of crude glycerol and plastic wastes, by means of thermodynamic analysis and modelling using Aspen Plus simulation software. Performance indicators such as cold gas efficiency (CGE), carbon conversion efficiency (CCE) and syngas yield (SY) to determine the technical feasibility of the co-gasification of crude glycerol and plastic wastes at different equivalent ratios (ER). Results demonstrated that an increase in ER increased CGE, CCE and SY. For a blend ratio of 50%, a CCE of 100% was attained at an ER of 0.35 whereas the CGE of 88.29% was attained at ER of 0.3. Increasing the plastic content to 75%, a maximum CCE and CGE of 94.16% and 81.86% were achieved at ER of 0.4. The hydrogen composition reached its maximum of 36.70% and 39.19% at an ER of 0.1 when the plastic ratio increased from 50% to 75% respectively. A 50% plastic bend ratio achieved a syngas ratio (H2/CO) of 1.99 at ER of 0.2 whereas a 75% reached a ratio of 2.05 at an ER of 0.25. At these operating conditions the syngas lower heating value (LHV), SY, CGE and CCE were found to be 6.23 MJ/Nm3, 3.32 Nm3, 66.58%, 76.35% and 6.27 MJ/Nm3, 3.60 Nm3, 59.12%, 53.22% respectively. From these results it can be deduced that the air co-gasification is a promising technology for the sustainable production of energy from waste glycerol and plastic waste.

References

[1] Mishra, R., Shu, C.M., Gollakota, A.R.K. & Pan, S.Y ‘Unveiling the potential of pyrolysis-gasification for hydrogen-rich syngas production from biomass and plastic waste’, Energ. Convers. Manag. 2024: 118997 doi: 10.1016/j.enconman.2024.118997.

[2] Chunakiat,P., Panarmasar,N. & Kuchonthara, P. “Hydrogen Production from Glycerol and Plastics by Sorption-Enhanced Steam Reforming,” Ind. Eng. Chem. Res.2023; 62(49): 21057-21066. doi: 10.1021/acs.iecr.3c02072



COMPARATIVE AND STATISTICAL STUDY ON ASPEN PLUS INTERFACES USED FOR STOCHASTIC OPTIMIZATION

Josue Julián Herrera Velázquez1,3, Erik Leonel Piñón Hernández1, Luis Antonio Vega Vega1, Dana Estefanía Carrillo Espinoza1, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

New research on complex intensified schemes has popularized the use of multiple commercial process simulation software. The interfaces between software and computer systems for process optimization have allowed us to maintain rigor in the models. This type of optimization is mentioned in the literature as "Black Box Optimization" since successive evaluations are taken exploiting the information from the simulator without altering the model that integrates it. The writing/reading results are from the contribution of 1) Process simulation software, 2) Middleware protocol, 3) Wrapper protocol, and 4) Platform (IDE) with the optimization algorithm (Muñóz-López et al., 2017). The middleware protocol allows the automation of the process simulator and the transfer of information in both directions. The Wrapper protocol works to interpret the information transferred by the previous protocol and make it useful for both parties, for the simulator and the optimizer. Aspen Plus ® software has become popular due to the rigor of its models and the reliability of its results, as well as the customization it offers for different conventional and unconventional schemes. Few studies have been reported regarding the efficiency and effectiveness of the various computer systems where the programming of the optimization algorithm or the reported packages is carried out. Santos-Bartolome and Van-Gerven (2022) carried out the study of comparing different computer systems (Microsoft Excel VBA ®, MATLAB ®, Python ®, and Unity ®) with the Aspen Hysys ® software, evaluating the accuracy of communication, information exchange time, and the deviation of the results, reaching the conclusion that the best option is to use VBA ®. Ponce-Rocha et al. (2023) studied the execution time between Aspen Plus ® - MATLAB ® and Aspen Plus ® - Python ® connections in multi-objective optimization using the respective optimization packages, reaching the conclusion that the fastest connection occurs in the Python ® connection.

This work proposes to do a comparative study for the Aspen Plus ® software and its interfaces with Microsoft Excel VBA ®, Python ®, and MATLAB ®. 5 schemes are analyzed (conventional and intensified columns). The optimization of the Total Annual Cost is carried out by a modified Simulated Annealing Algorithm (m-SAA) (Cabrera-Ruiz et al., 2021). This algorithm has the same programming for all platforms, using the respective random number functions to make the study as homogeneous as possible. Each optimization is done ten times doing hypothesis testing to eliminate anomalous cases. The aspects to evaluate are the time per iteration, the standard deviation between each test and the number of feasible solutions. The results indicate that the best option to carry out the optimization is using the interface with VBA ®, however the one carried out with Python ® is not very different from this. There is not much literature on optimization algorithm packages in VBA ®, so, connecting with Python ® may be the most efficient and effective for performing stochastic optimization with Aspen Plus ® software in addition to being an open-source language.



3D simulation and design of MEA-based absorption system for biogas purification

Debayan Mazumdar, Wei Wu

National Cheng Kung University, Taiwan

The shape and geometry design of MEA-based absorption system by using ANSYS Fluent R22 is addressed. By conducting CFD studies for observing the effect of liquid distribution quality on counter current two-phase absorption under different liquid distributor designs. By simulation and analysis, the detailed exploration of fluid dynamics offers critical insights and enabling performance optimization. Unlike previous literature which focused on unstructured packing have been done on structure Mellapak 500X Packing. Demonstrating the overall efficiency for a MEA-based absorption system according to different distributor patterns. The previous model of calculation for liquid distribution quality is used for a detailed understanding between the initial layers of packing and pressure difference.



Enhancing Chemical Process Design: Aligning DEXPI Process with BPMN 2.0 for Improved Efficiency in Data Exchange

Shady Khella1, Markus Schichtel2, Erik Esche1, Frauke Weichhardt2, Jens-Uwe Repke1

1Process Dynamics and Operations Group, Technische Universität Berlin, Berlin, Germany; 2Semtation GmbH, Potsdam, Germany

BPMN 2.0 is a widely adopted standard across various industries, primarily used for business process management outside of the engineering sphere [1]. Its long history and widespread use have contributed to a mature ecosystem, offering advanced software tools for editing and optimizing business workflows.

DEXPI Process, a newly developed standard for early-phase chemical process design, focuses on representing Block Flow Diagrams (BFDs) and Process Flow Diagrams (PFDs), both crucial in the conceptual design phase of chemical plants. It provides a standardized way to document design activity, offering engineers a clear rationale for design decisions [2], which is especially valuable during a plant’s operational phases. While DEXPI Process offers a robust data model, it currently lacks an established serialization format for efficient data exchange. As Cameron et al. note in [2], finding a suitable format for DEXPI Process remains a key research area, essential for enhancing its usability and adoption. So far, Cameron et al. have explored several serialization formats for exchanging DEXPI Process information, including AutomationML, an experimental XML, and UML [2].

This work aims to map the DEXPI Process data model to BPMN 2.0, providing a standardized serialization for the newly developed standard. Mapping DEXPI Process to BPMN 2.0 also unlocks access to BPMN’s extensive software toolset. We investigate and validate the effectiveness of this mapping and the enhancements it brings to the usability of DEXPI Process through a case study based on the Tennessee-Eastman process, described in [3]. We then compare our approach with those of Cameron et al. in [2].

We conclude by presenting our findings and the key benefits of this mapping, such as improved interoperability and enhanced toolset support for chemical process engineers. Additionally, we discuss the challenges encountered during the implementation, including aligning the differences in data structures between the two models. Furthermore, we believe this mapping serves as a bridge between chemical process design engineers and business process management teams, unlocking opportunities for better collaboration and integration of technical and business workflows.

References:

[1] ISO. (2022). Information technology — Object Management Group Business Process Model and Notation. ISO/IEC 19510:2013. https://www.iso.org/standard/62652.html

[2] Cameron, D. B., Otten, W., Temmen, H., Hole, M., & Tolksdorf, G. (2024). DEXPI Process: Standardizing Interoperable Information for process design and analysis. Computers &amp; Chemical Engineering, 182, 108564. https://doi.org/10.1016/j.compchemeng.2023.108564

[3] Downs, J. J., & Vogel, E. F. (1993). A plant-wide industrial process control problem. Computers & chemical engineering, 17(3), 245-255. https://doi.org/10.1016/0098-1354(93)80018-I



Linear and non-linear convolutional approaches and XAI for spectral data: classification of waste lubricant oils

Rúben Gariso, João Coutinho, Tiago Rato, Marco Seabra Reis

University of Coimbra, Portugal

Waste lubricant oil (WLO) is a hazardous residue that requires careful management. Among the available options, regeneration is the preferred approach for promoting a sustainable circular economy. However, regeneration is only viable if the WLO does not coagulate during the process, which can cause operational issues, possibly leading to premature shutdown for cleaning and maintenance. To mitigate this risk, a laboratory analysis using an alkaline treatment is currently employed to assess the WLO coagulation potential before it enters the regeneration process. Nevertheless, such a laboratory test is time-consuming, presents several safety risks, and its outcome is subjective, depending on visual interpretation by the analyst.

To expedite decision-making, process analytics technology (PAT) and machine learning were employed to develop a model to classify WLOs according to their coagulation potential. To this end, three approaches were followed, with a focus on convolutional methodologies spanning both linear and non-linear modeling structures. The first approach (benchmark) uses partial least squares for discriminant analysis (PLS-DA) (Wold, Sjöström and Eriksson, 2001) and interval partial least squares (iPLS) (Nørgaard et al., 2000) combined with standard spectral pre-processing techniques (27 model variants). The second approach applies the wavelet transform (Mallat, 1989) to decompose the spectra into multiple frequency components by convolution with linear filters, and PLS-DA for feature selection (10 model variants). Finally, the third approach consists of a convolutional neural network (CNN) (Yang et al., 2019) to estimate the optimal filter for feature extraction (1 model variant). These models were trained on real industrial data provided by Sogilub, the organization responsible for the management of WLO in Portugal.

The results show that the three modeling approaches can attain high accuracy, with an average accuracy of 91%. The development of the benchmark model requires an exhaustive search over multiple combinations of pre-processing filters since the optimal scheme cannot be defined a priori. On the other hand, implicit spectral filtering using wavelet transform convolution significantly lowers the complexity of the model development task, reducing the computational burden while maintaining the interpretability of linear approaches. The CNN was also capable of circumventing the pre-processing burden by implicitly estimating convolutional filters in the inner layers. Additionally, the use of explainable artificial intelligence (XAI) techniques demonstrated that the relevant features of the CNN model are in good accordance with the linear models. In summary, with an adequate level of expertise and effort, different approaches can provide similar prediction performances. However, the development process can be made faster, simpler, and computationally less demanding through a proper convolutional methodology, namely the one based on the wavelet transform.

References:

Mallat, S.G. (1989) IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), pp. 674–693.

Nørgaard, L., Saudland, A., Wagner, J., Nielsen, J.P., Munck, L. and Engelsen, S.B. (2000) Applied Spectroscopy, 54(3), pp. 413–419.

Wold, S., Sjöström, M. and Eriksson, L. (2001) Chemometrics and Intelligent Laboratory Systems, 58(2), pp. 109–130.

Yang, J., Xu, J., Zhang, X., Wu, C., Lin, T. and Ying, Y. (2019) Analytica Chimica Acta, 1081, pp. 6–17.



Mathematical Modeling of Ammonia Nitrogen Dynamics in RAS Integrated with Bayesian Parameter Optimization

Lingwei Jiang1, Tao Chen1, Bing Guo2, Daoliang Li3

1School of Chemistry and Chemical Engineering, University of Surrey, United Kingdom; 2School of Sustainability, Civil and Environmental Engineering, University of Surrey, United Kingdom; 3National Innovation Center for Digital Fishery, China Agricultural University,China

The concentration of ammonia nitrogen is a critical parameter in aquaculture as excessive levels can be toxic to the aquatic animals, hampering their growth or even resulting in death. Therefore monitoring of ammonia nitrogen concentration in aquaculture is important for productivity and animal welfare. However, commercially available ammonia nitrogen sensors are expensive, have short lifespans thus requiring frequent maintenance, and can provide unreliable results during use. In contrast, sensors for other water quality parameters (e.g., temperature, dissolved oxygen, pH) are well-developed, accurate, and they could provide useful information to help predict ammonia nitrogen concentration through a mathematical model. In this study we present a new mathematical model for predicting ammonia nitrogen, combining fish bioenergetics with mass balance of ammonia nitrogen. We conduct a sensitivity analysis of the model parameters to identify the key ones and then use a Bayesian optimisation algorithm to calibrate these key parameters to data collected from a recirculating aquaculture system in our lab. We demonstrate that the model is able to give reasonable prediction of ammonia nitrogen on the experimental data not used in model calibration.



Computer-Aided Design of a Local Biorefinery Scheme from Water lily (Eichhornia Crassipes) to Produce Power and Bioproducts

Maria de Lourdes Cinco-Izquierdo1, Araceli Guadalupe Romero-Izquierdo2, Ricardo Musule-Lagunes3, Marco Antonio Martínez-Cinco1

1Universidad Michoacana de San Nicolás de Hidalgo, Facultad de Ingeniería Química, México; 2Universidad Autónoma de Querétaro, Facultad de Ingeniería, Mexico; 3Universidad Veracruzana, Instituto de Investigaciones Forestales, México

Lake ecosystems provide valuable services, such as vegetation and fauna, fertile soils, nutrient and climatic regulation, carbon sequestration, and recreation and tourism activities. Nevertheless, some are currently affected by high resource extraction, climatic change, or alien plant invasion (API), which causes the loss of local species and deterioration of ecosystem function. Regarding API, reports have identified 665 invasive exotic plants in México (IMTA, 2020), wherein the Water lily (Eichhornia crassipes) is highlighted due to its quick proliferation rate covering most national aquatic bodies. Thus, some strategies for controlling and using E. crassipes have been proposed (Gutiérrez et al., 1994). Specifically, after extraction, the water hyacinth biomass has been used as raw material for the production of several bioproducts and bioenergy; however, most of them have not covered the region's needs, and their economic profitability has not been reached. In this work, we propose a local biorefinery scheme to produce power and bioproducts from water lilies, using Aspen Plus V.10.0, per the needs of the Patzcuaro Lake community in Michoacán, Mexico. The scheme has been designed to process 197.6 kg/h of water lily, aligned to the extraction region schedule (COMPESCA, 2023), separating the biomass into two main compounds: root (RT, 47 wt % of total plant) and stems-leaves (S-L, 53 wt % of total plant). The power and steam are generated by RT flow (combustion process), while the S-L are separated in two fractions, 50 wt % for each one. The first fraction is the feedstock for an anaerobic digestion process operated to 35 °C to produce a fertilizer stream from the process sludge and biogas, which is converted to power using a turbine. On the other hand, the second fraction of S-L enters to drying equipment to reduce its moisture content; then, the dried biomass is divided in two processing zones: 1) pyrolysis to produce bio-oil, biochar, and high-temperature gases and 2) gasification to generate syngas, which is converted to power. According to the results, the total generated power is capable of covering all the electric requirements of the scheme, producing a super plus of 45 % regarding the total consumption; also, the system covers all heating requirements. On the other hand, fertilizer and biochar are helpful products for regional needs, improving the total annual cost (TAC) of the scheme.

References

COMPESCA. (2023, November 01). Comisión de Pesca del Estado de Michoacán. Informe anual de avances del Programa: Mantenimiento y Rehabilitación de Embalses.

Gutiérrez López, F. Arreguín Cortés, R. Huerto Delgadillo, P. Saldaña Fabela (1994). Control de malezas acuáticas en México. Ingeniería Hidráulica En México, 9(3), 15–34.

IMTA. (2020, July 15). Instituto Mexicano de Tecnología del Agua. Plantas Invasoras.



System analysis and optimization of replacing surplus refinery fuel gas by coprocessing with HTL bio-crude off-gas in oil refineries.

Erik Lopez Basto1,2, Eliana Lozano Sanchez3, Samantha Eleanor Tanzer1, Andrea Ramírez Ramírez4

1Department of Engineering Systems and Services, Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, the Netherlands; 2Cartagena Refinery. Ecopetrol S.A., Colombia; 3Department of Energy, Aalborg University, Aalborg, Denmark.; 4Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, Delft, the

Sustainable production is a critical goal for the oil refining industry supporting the energy transition and reducing climate change impacts. This research uses Ecopetrol, Colombia's state-owned oil and gas company, and one of its high-complexity refineries (processing 11.45 Mtpd of crude oil) as a case study to explore CO2 reduction strategies. Decarbonizing refineries requires a combination of technologies, including low-carbon hydrogen (Low-C H2), sustainable energy, carbon capture, utilization, and storage (CCUS), bio-feedstocks, and product changes.

A key question addressed is the impact of CCUS on refinery performance and the potential to repurpose surplus refinery fuel gas while balancing techno-economic and environmental in the short and long-term goals.

Colombia’s biomass resources offer opportunities for advanced biofuel technologies like Hydrothermal Liquefaction (HTL), which produces bio-crude compatible with existing refinery infrastructure and off-gas with biogenic carbon that can be used in CCU processes. This research is grounded on the opportunity to utilize refinery fuel gas and HTL bio-crude off-gas in conversion processes to produce more valuable and sustainable products (see Figure 1 for the simplified system block diagram).

Our systems optimization approach, using mixed-integer linear programming (MILP) in Linny-R software, evaluates refinery operations and minimizes costs under CO2 emission constraints. Building on optimized low-C H2 and CCS systems (Lopez, E., et al. 2024), the first step assesses surplus refinery fuel gas, and the second screens CCU technologies, selecting steam reforming and autothermal reforming to convert fuel gas into methanol. HTL bio-crude off-gas is integrated into thermocatalytic processes for further methanol production, with techno-economic data sourced from literature and Aspen Plus simulations. Detailed techno-economic assessment presented in the work by Lozano, E., et al. (2024) is used as input for this study.

The objective function in the system analysis focuses on cost minimization while achieving specified CO2 reduction targets.

Results show that CCU technologies and surplus gas utilization can significantly reduce CO2 emissions, offering valuable insights into how refineries can contribute to global decarbonization efforts. Integrating biomass-derived feedstocks and CCU technologies provides a viable path for sustainable refinery operations, advancing the industry's role in a more sustainable energy future.

Figure 1. Simplified system block diagram

References

Lopez, E., et al. (2024). Assessing the impacts of low-carbon intensity hydrogen integration in oil refineries. Manuscript in press.

Lozano, E., et al. (2024). TEA of co-processing refinery fuel gas and biogenic gas streams for methanol synthesis. Manuscript submitted for publication in Escape Conference 2025.



Technical Assessment of direct air capture using piperazine in an advanced solvent-based absorption process

Shengyuan Huang, Olajide Otitoju, Yao Zhang, Meihong Wang

University of Sheffield, United Kingdom

CO2 emissions from power generation and industry increase the concentration of CO2 in the atmosphere to 422ppm, which generates a series of climate change and environmental problems. Carbon capture is one of the effective ways to mitigate global warming. Direct air capture (DAC), as one of the negative emission technologies, has great potential for commercial development to achieve capturing 980Mt CO2 in 2050 by IEA Net Zero Emissions Scenario.

DAC can be achieved through absorption using solvents, adsorption using solid adsorbents or a combination of both. This study is based on liquid phase DAC (L-DAC) because it requires smaller land requirement and specific energy consumption compared with other technologies, which is more suitable for large-scale commercial deployment. In the literature, MEA is widely used in DAC. However, use of MEA in DAC process has two big challenges: high energy consumption 6 to 8.8 GJ/tCO2 and high cost up to $340/tCO2. These are the barriers to prevent DAC deployment.

This research aims to study DAC using Piperazine (PZ) with different configurations and evaluate the technic and economic performance at large scale through process simulation. PZ as the new solvent could improve the absorption capacity and performance. The simulation is implemented in Aspen Plus®. The DAC process using PZ will be compared using simulation data from literature to ensure the model’s accuracy. Different configurations (e.g. standard configuration vs advanced flash stripper), different loadings and carbon capture levels will be studied to achieve better system performance and energy consumption performance. The research outcome from this study can be useful for process design by the industrial practitioners and also policymakers.

Acknowledgement: The authors would like to thank the financial support of the EU RISE project OPTIMAL (Grant Agreement No: 101007963).



TEA of co-processing refinery fuel gas and biogenic gas streams from thermochemical conversion for methanol synthesis

Eliana Lozano Sanchez1, Erik Lopez Basto2, Andrea Ramirez Ramirez2

1Aalborg University, Denmark; 2Delft University of Technology, The Netherlands

Heat decarbonization is a key strategy for fossil refineries to lower their emissions in the short/medium term. Direct electrification and other low carbon heat sources are expected to play a major role, however, current availability of refinery fuel gases (RFG) - mixture of residual gases rich in hydrocarbons used for on-site heat generation - may limit decarbonization if alternative uses for surplus RFG are not explored. Thus, evaluating RFG utilization options is key for refineries, while integration of renewable carbon sources remains crucial to decrease fossil crude dependance.

This study presents a techno-economic assessment of co-processing biogenic CO2 sources from biomass thermochemical conversion with RFG to produce methanol, a key chemical with high demand in industry and as shipping fuel. Hydrothermal liquefaction (HTL) and fast pyrolysis (FP) are the technologies evaluated due to their integration potential in a refinery context: these produce bio-oils with drop-in fuel potential that can use existing infrastructure and a by-product gas rich in CO2/CO to be co-processed with the RFG into methanol, which remains unexplored in literature and stands as the main contribution of this study.

The process is simulated in Aspen HYSYS assuming a fixed gas input of 25 tonne/h, which corresponds to estimated RFG surplus in a refinery case study after some emission reduction measures. The process comprises a reforming step to produce syngas (steam and autothermal reforming -SMR/ATR- are evaluated) followed by methanol synthesis via CO2/CO hydrogenation. The impact of gas co-processing is evaluated for increasing ratios of HTL/FP gas relative to the RFG baseline in terms of hydrogen requirement, carbon conversion to methanol, overall water balance and specific energy consumption.

Preliminary results indicate that the valorization of RFG using SMR allows for an increased share of biogenic gas up to 45 wt% without having a negative impact in the overall carbon conversion to methanol. SMR of the RFG results in a syngas with excess hydrogen, which makes possible to accommodate additional biogenic CO2 to operate at lower stoichiometric numbers without a negative impact in conversion and without additional H2 input, being this a key advantage of this integration. Although overall carbon conversion is not affected, the methanol throughput is reduced by 24-27 % relative to the RFG baseline due to the higher concentration of CO2 in the mix that lowers the carbon content and increases water production during methanol synthesis. The ATR case results in lower energy consumption but produces less hydrogen, limiting the biogenic gas share to only 7 wt% before requiring additional H2 for methanol synthesis.

This study aims to contribute to the discussions on integration of low carbon technologies into refinery operations, highlighting synergies between fossil and biobased feedstocks that expand the state-of-the art of co-processing of bio-feedstocks from thermochemical biomass conversion. Future results include the estimation of trade-offs between production costs and methanol carbon intensity, motivating the integration of these technologies in more comprehensive system analysis of fossil refineries and their net zero pathways.



Surrogate Model-Based Optimisation of Pressure-Swing Distillation Sequences with Variable Feed Composition

Laszlo Hegely, Peter Lang

Department of Building Services and Process Engineering, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, Hungary

For separating azeotropic mixtures, special distillation methods must be used, such as pressure-swing (PSD), extractive or heteroazeotropic distillation. The advantage of PSD is that it does not require the addition of a new component. However, it can only be applied if the azeotrope is pressure-sensitive, and its energy demand can also be high. The configuration of the system depends on not only the type of the homoazeotrope but also the feed composition (z). If z is between the azeotropic compositions at the pressures of the columns, the feed can be introduced into either the low- (LP-HP sequence) or the high-pressure column (HP-LP sequence). Depending on z, one of the sequences will be optimal, whether with respect to energy demand or total annual cost (TAC).

Hegely et al. (2022) studied the separation of a maximum-boiling azeotropic mixture water-ethylenediamine by PSD where z (35 mol% water) was between the azeotropes at 0.1 and 2.02 bar. The TAC of both sequences was minimised without and with heat integration. The LP-HP sequence was more favourable at this composition. The optimisation was performed by two methods: a genetic algorithm (GA) and a surrogate model-based optimisation method (SMBO). By SMBO, algebraic surrogate models were fitted to simulation results by the ALAMO software (Cozad et al., 2014) and the resulting optimisation problem was solved. Different decomposition techniques were tested with the models fitted (1) to elements of TAC (heat duty of LPC, column diameters), (2) to capital and energy costs or (3) to TAC itself. The best results were achieved with the highest level of decomposition. Although TAC obtained by SMBO was lower than that of GA only once, the difference was always within 5 %.

In this work, our aim is to (1) improve the accuracy of surrogate models, thus, the performance of SMBO and (2) study the influence of z on the optimum of the two sequences, using the case study of Hegely et al. (2022). The first goal is achieved by fitting the models to the results of the single columns instead of the two-column system. Achieving the second goal requires repeated optimisation at different feed compositions, which would be very time-consuming with conventional optimisation methods. However, an advantage of SMBO is that z can be included as input variable of the models. This enables quickly finding the optimum for any feed composition.

The novelty of our work consists of determining the optimal PSD system as a function of the feed composition by SMBO. Additionally, this is the first work that uses ALAMO to fit the models to be used in the optimisation to the individual columns.

References

Cozad A., Sahinidis N.V., Miller D.C., 2014. Learning surrogate models for simulation-based optimization. AIChE Journal, 60, 2211–2227.

Hegely L., Karaman Ö.F., Lang P., 2022, Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with the Feed Composition between the Azeotropes. In: Klemeš J.J., Nižetić S., Varbanov P.S. (eds.) Proceedings of the 25th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. PRES22.0188.



Unveiling Probability Histograms from Random Signals using a Variable-Order Quadrature Method of Moments

Menwer Attarakih1, Mark W. Hlawitschka2, Linda Al-Hmoud1, Hans-Jörg Bart3

1The University of Jordan, Jordan, Hashemite Kingdom of; 2Johannes Kepler University; 3RPTU Kaiserslautern

Random signals play crucial role in chemical and process engineering where industrial plants collect and analyse big data for process understanding and decision-making. This requires unveiling the underlying probability histogram from process random signals with a finite number of bins. Unfortunately, finding the optimal number of bins is still based on empirical optimization and general rules of thumb (e.g. Scott and Freedman formula). The disadvantages here are the large number of bins that maybe encountered, and the inconsistency of the histogram with low-order moments of the true data.

In this contribution, we introduce an alternative and general method to unveil the probability histograms based on the Quadrature Method Of Moments (QMOM). As being data compression method, it works using the calculated moments of an unknown weight probability density function. Because of the ill-conditioned inverse moment problem, there is no simple and general inversion algorithm to recover the unknown probability histogram which is usually required in many design applications and real time online monitoring (Thibault et al., 2023). Our method uses a novel variable integration order QMOM which adapts automatically depending on the relevance of the information contained in the random data. The number of bins used to recover the underlying histogram increases as the information entropy does. In the hypothetical limit where the data has zero information entropy, the number of bins is reduced to one. In the QMOM realm, the number of bins is explored in an evolutionary algorithm that assigns the nodes in an optimal manner to sample the unknown function or process from which the data is generated. The algorithm terminates when no more important information is available for assignment to the newly created node up to a user predefined threshold. If the date is coming from a dynamic source with varying mean and variance, the boundaries of the bins will move dynamically to reflect the nature of the data.

The application of the method is applied to many case studies including moment-consistent histogram unveiled from monthly mean maximum air to surface temperature in Amman city from 1901 to 2023 using only 13 bins with a bimodal histogram. In another case study, the diastolic and systolic blood pressure measurements are found to follow a normal distribution histogram using a data series spanning a six-year period with 11 bins. In a unique dynamic case study, batch particle aggregation plus growth is simulated based on an initial 11 bins where the simulation ends with 14 bins after 5 seconds simulation time. The result is a histogram which is consistent with 28 low-order moments. In addition to this, measured droplet distribution from a pilot plant sparger of toluene in water is found to follow a normal distribution histogram with 11 bins.

As a main conclusion, our method is a universal histogram reconstruction method which only needs enough number of moments to work with intensive validation using real-life problems.

References

E. Thibault, Chioua, M., McKay, M., Korbel, M., Patience, G. S., Stuart, P. R. (2023), Cand. J. Chem. Eng., 101, 6055-6078.



Sensitivity Analysis of Key Parameters in LES-DEM Simulations of Fluidized Bed Systems Using generalized polynomial chaos

Radouan Boukharfane, Nabil El Mocayd

UM6P, Morocco

In applications involving fine powders and small particles, the accuracy of numerical simulations—particularly those employing the Discrete Element Method (DEM) for predicting granular material behavior—can be significantly impacted by uncertainties in critical parameters. These uncertainties include coefficients of restitution for particle-particle and particle-wall collisions, viscous damping coefficients, and other related factors. In this study, we utilize stochastic expansions based on point-collocation non-intrusive polynomial chaos to conduct a sensitivity analysis of a fluidized bed system. We consider four key parameters as random variables, each assigned a specific probability distribution over a designated range. This uncertainty is propagated through high-fidelity Large Eddy Simulation (LES)-DEM simulations to statistically quantify its impact on the results. To effectively explore the four-dimensional parameter space, we analyze a comprehensive database comprising over 1200 simulations. Notably, our findings reveal that variations in the particle and wall Coulomb friction coefficients exert a more pronounced influence on streamwise particle velocity than do variations in the particle and wall normal restitution coefficients.



An Efficient Convex Training Algorithm for Artificial Neural Networks by Utilizing Piecewise Linear Approximations and Semi-Continuous Formulations

Ece Serenat Koksal1, Erdal Aydin1, Metin Turkay2,3

1Department of Chemical and Biological Engineering, Koç University, Turkiye; 2Department of Industrial Engineering, Koç University, Turkiye; 3SmartOpt, Turkiye

Artificial neural networks (ANNs) are mathematical models representing the relationships between inputs and outputs, inspired by the structure of neuron connections in the human brain. ANNs consist of input and output layers, along with user-defined hidden layers containing neurons, which are interconnected through activation functions such as rectified linear unit (ReLU), hyperbolic tangent and sigmoid. A feedforward neural network (FNN) is a type of ANN that propagates information in one direction, from input to output. ANNs are widely used as data-driven approaches, especially for complex systems like chemical engineering, where mechanistic modelling poses significant challenges. However, they often encounter issues such as overfitting, insufficient data, and suboptimal training.

To address suboptimal training, piecewise linear approximations of nonlinear activation functions, such as sigmoid and hyperbolic tangent, can be employed. This approach may enable the transformation of the non-convex problem into a convex one, enabling training via a special ordered set type II (SOS2) formulation at the same time (Koksal & Aydin, 2023; Sildir & Aydin, 2022). The resulting formulation is a mixed-integer linear programming (MILP) problem. However, as the number of neurons, number of approximation pieces or dataset size increases, the computational time rises due to the exponential complexity increase associated with binary variables, hyperparameters and data points.

In this work, we propose a novel training algorithm for FNNs by employing SOSX variables, as defined by Keha et al., (2004) instead of the conventional SOS2 formulation. By modifying the branching algorithm, we transform the MILP problem into subsets of linear programming (LP) problems. This transformation also brings about parallelizable properties, which may further reduce the computational time for training the FNNs. Results demonstrate that this change in the branching strategy significantly reduces computational time, making the formulation more efficient for convexifying the FNN training process.

References

Keha, A. B., De Farias, I. R., & Nemhauser, G. L. (2004). Models for representing piecewise linear cost functions. Operations Research Letters, 32(1), 44–48. https://doi.org/10.1016/S0167-6377(03)00059-2

Koksal, E. S., & Aydin, E. (2023). Physics Informed Piecewise Linear Neural Networks for Process Optimization. Computers and Chemical Engineering, 174. https://doi.org/10.1016/j.compchemeng.2023.108244

Sildir, H., & Aydin, E. (2022). A Mixed-Integer linear programming based training and feature selection method for artificial neural networks using piece-wise linear approximations. Chemical Engineering Science, 249. https://doi.org/10.1016/j.ces.2021.117273



Economic evaluation of Solvay processes for sodium bicarbonate production with brine and carbon tax considerations

Dina Ewis, Zeyad Ghazi, Sabla Y. Alnouri, Muftah H. El-Naas

Gas Processing Center, College of Engineering, Qatar University, Doha, Qatar

Reject brine discharge and high CO2 emissions from desalination plants are major contributors to environmental pollution. Managing reject brine involves significant costs, mainly due to the energy-intensive processes required for brine dilution and disposal. In this context, Solvay process represents a mitigation scheme that can effectively reduce reject brine salinity and sequestering CO2 while producing sodium bicarbonates simultaneously. The Solvay process represents a combined approach that can effectively manage reject brine and CO2 in a single reaction while producing an economically feasible product. Therefore, this study reports a systematic techno-economics assessment of conventional and modified Solvay processes, while incorporating brine and carbon tax. The model evaluates the significance of implementing a brine and CO2 tax on the economics of conventional and Ca(OH)2 modified Solvay compared to industries expenditures on brine dilution and treatment before discharge to the sea. The results show that the conventional Solvay process becomes profitable after applying a brine tax of 1 dollar per meter cube of brine and a CO2 tax of 42 dollars per tonne CO2 —both figures lower than the current costs associated with brine treatment and existing carbon taxes. Moreover, the profitability of the Ca(OH)₂-modified Solvay process increases even further with minimal brine and CO₂ taxes. The findings highlight the significance of adopting modified Solvay process as an integrated solution for sustainable brine management and carbon capture.



THE GREEN HYDROGEN SUPPLY CHAIN IN THE BRAZILIAN STATE OF BAHIA: A DETERMINISTIC APPROACH

Leonardo Santana1, Gustavo Santos1, Pessoa Fernando1, Barbosa-Póvoa Ana Paula2

1SENAI CIMATEC university center, Brazil; 2Instituto Superior Técnico – Universidade de Lisboa, Portugal

Hydrogen is increasingly recognized as a pivotal element in decarbonizing energy, transport, chemical industry, and agriculture sectors. However, significant technological challenges related to production, transport, and storage hinder its broader integration into these industries. Overcoming these barriers requires the development of a sustainable hydrogen supply chain (HSC). This paper aims to design and plan a HSC by developing a Mixed-Integer Linear Programming (MILP) for the Brazilian state of Bahia, the fifth largest state of Brazil (as big as France), a region with significant potential for sustainable electricity and electrolytic hydrogen production. The case study utilizes existing road infrastructure, liquefied and compressed hydrogen via trucks or trains are considered. A monetization strategy is employed to consolidate both economic and environmental aspects into a single objective function, translating CO2 emissions into costs using carbon credit prices. Facility locations are selected based on the preference locations for hydrogen production from Bahia’s government report, considering four dimensions: economic, social, environmental, and technical. Wind power, solar PV, and grid electricity are considered energy sources for hydrogen production facilities, and the model aims to select the optimal combination of energy sources for each plant. The outcomes include the selection of specific hydrogen production plants to meet the demand center's requirements, alongside decisions regarding the preferred form of hydrogen storage (liquefied or compressed) and the optimal energy source (solar, wind, or grid) for each facility. This model provides a practical contribution to the implementation of a sustainable green hydrogen supply chain in Bahia, focusing on the industrial sector's needs. The study offers a replicable and accessible computational approach to solving complex supply chain problems, especially in regions with growing interest in green hydrogen production.



A combined approach to optimization of soft sensor architecture and physical sensor configuration

Lukas Furtner1, Isabell Viedt1, Leon Urbas1,2

1Process Systems Engineering Group, TU Dresden, Germany; 2Chair of Process Control Systems, TU Dresden, Germany

In the chemical industry, soft sensors are deployed to reduce equipment cost or allow for a continuous measurement of process variables. Soft sensors monitor parameters not via physical sensors but infer them from other process variables, often by means of parametric equations like balances and thermodynamic or kinetic dependencies. Naturally, the precision of soft sensors is affected by the uncertainty of their input variables. This paper proposes a novel approach to automatically identify the most precise soft sensor based on a set of process system equations and the configuration of physical sensors in the chemical plant. Furthermore, the method assesses the benefit of deploying additional physical sensors to increase a soft sensor’s precision. This enables engineers to derive adjustments of the existing sensor configuration in a chemical plant. Based on approximating the uncertainty of soft sensors to infer a critical process variable via Monte Carlo simulation, the proposed method is insusceptible against dependent, non-Gaussian uncertainties. Additionally, the approach allows to incorporate hybrid semi-parametric soft sensors [1], modelling poorly understood effects and dependencies within the process system with data-driven, non-parametric parts. Applied to the Tennessee Eastman process [2], the method identifies Pareto-optimal sensor configurations, considering sensor cost and monitoring precision for critical process variables. Finally, the method's deployment in real-world chemical plants is discussed.

Sources
[1] J. Sansana et al., “Recent trends on hybrid modeling for Industry 4.0,” Computers & Chemical Engineering, vol. 151, p. 107365, Aug. 2021
[2] J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Computers & Chemical Engineering, vol. 17, no. 3, pp. 245–255, Mar. 1993



Machine Learning Models for Predicting the Amount of Nutrients Required in a Microalgae Cultivation System

Geovani R. Freitas1,2,3,4, Sara M. Badenes3, Rui Oliveira4, Fernando G. Martins1,2

1Laboratory for Process Engineering, Environment, Biotechnology and Energy (LEPABE); 2Associate Laboratory in Chemical Engineering (ALiCE); 3Algae for Future (A4F); 4LAQV-REQUIMTE

Effective prediction of nutrient demands is crucial for optimising microalgae growth, maximising productivity, and minimising resources waste. With the increasing amount of data related to microalgae cultivation systems, data mining (DM) and machine learning (ML) methods to extract additional knowledge has gained popularity over time. In the DM process, models can be evaluated using ML algorithms, such as random forest (RF), artificial neural network (ANN) and support vector regression (SVR). In the development of these ML models, data preprocessing stage is necessary due to the poor quality of data. While cleaning and outlier removal techniques are employed to eliminate missing data or outliers, normalization is used to standardize features, ensuring that no single feature is more relevant to the model due to differences in scale. After this stage, feature selection is employed to identify the most relevant parameters, such as solar irradiance and initial dry weight of biomass. Once the optimal features are identified, data splitting and cross-validation strategies are employed to ensure that the models are trained and evaluated with representative subsets of the dataset. Proper data splitting into training and testing sets prevents overfitting, allowing the models to generalize effectively to new, unseen data. Cross-validation techniques, such as k-fold and repeated k-fold cross-validation, are used to rigorously test model performance across multiple iterations, ensuring that results are not dependent on any single data partition. Principal component analysis (PCA) can also be applied as a dimensionality reduction technique to simplify complex environmental datasets by reducing the number of variables or features in the data while retaining as much information as possible. To further improve prediction capabilities, ensemble methods are incorporated, leveraging multiple models to achieve a higher overall performance. Stacking, a popular ensemble technique, is used to combine the outputs of individual models, such as RF, ANN, and SVR, into a single meta-model. This approach takes advantage of the strengths of each base model, such as the non-linear mapping capabilities of ANN, the robustness of RF against overfitting, and the effectiveness of SVR in handling complex feature interactions. By combining these diverse models, the stacked ensemble method provides more accurate and reliable predictions of nutrient requirements. The application of these ML techniques has been demonstrated using a dataset acquired from the cultivation of the microalgae Dunaliella in a flat-panel photobioreactor (FP-PBR). The results showed that the data mining workflow, in combination with different ML models, was able to describe the nutrients requirements to obtain a good performance of microalgae Dunaliella production in carotenogenic phase, for b-carotene production, in a FP-PBR system.



Dynamical modeling of ultrafine particle classification in tubular bowl centrifuges

Sandesh Athni Hiremath1, Marco Gleiss2, Naim Bajcinca1

1RPTU Kaiserslautern, Germany; 2KIT Karlsruhe, Germany

Ultrafine or colloidal particles are widely used in industry as aerogels, coatings, filtration aids or thin films and require a defined particle size. For this purpose tubular centrifuges are suitable for particle separation and classification due to the high g-forces. The design and optimization of tubular centrifuges requires a large number of pilot tests, which is time-consuming and costly. Additionally, the centrifuge while operating semi-continuously under constant process conditions, results in temporal changes of particle size distribution and solids volume fraction especially at the outlet. Altogether, these aspects makes the task of designing an efficient centrifuge challenging. This work presents a dynamic model for the real-time simulation of the behavior during particle classification in a pilot-scale tubular centrifuge and also provide a novel data-driven algorithm for model validation. The combination of the two greatly facilitates the design and control of the centrifugation process, in particular the tubular centrifuge being considered. First, we discuss the new continuous mathematical model as an improvement over the previously published multi-compartment (discrete) model by Winkler et al. [1]. Based on simulation we show the influence of operational conditions and material behavior on the classification of a colloidal silica-water slurry. Subsequently, we validate the dynamical model by comparing experimental data with the simulations for the temporal change of product loss, grade efficiency and sediment build-up. For validation, we propose a new data driven method which uses neural-odes that incorporates the proposed new centrifugation model thus capable of encoding the physical (transport) laws in the network parameters. In summary, our work provides the following novelties:

1. A continuous dynamical model for a tubular centrifugation process that establishes a strong foundation for continuous and semi-continuous control of the process.

2. A new data-driven validation algorithm that not only allows the use of physics based continuous model thus serving as a base methodology for developing a full-fledged learning based observer model which can be used as a state-estimator during continuous process control.

[1] Marvin Winkler, Frank Rhein, Hermann Nirschl, and Marco Gleiss. Real-time modeling of volume and form dependent nanoparticle fractionation in tubular centrifuges. Nanomaterials, 12(18):3161, 2022.



Towards a multi-scale process optimization coupling custom models for unit operations, process simulator, and environmental impact.

Thomas Hietala1, Sonja Herres-Pawlis2, Pedro S.F. Mendes1

1Centro de Química Estrutural, Instituto Superior Técnico, Portugal; 2Institute of Inorganic Chemistry, RWTH Aachen University, Germany

To achieve utmost process efficiency, all scales, from phenomena within a given unit operation to mass and energy integration, matter. For instance, the way mass transfer and kinetics are optimized in a chemical reactor (e.g., focusing either on activity or selectivity) will impact the downstream separation train and, thus, the process as a whole. Currently, as the design of sustainable processes is mostly performed independently at different scales, the overall influence of design choices at different levels is not assessed in a seamless way, leading to a trial-and-error and inefficient design workflow. In order to consider all scales simultaneously, a multi-scale model has been developed that couples a process model to a complex mass-transfer limited reactor model and to an environmental and/or social impact assessment tool. The production of Polylactic-acid (PLA), the most produced bioplastic to date[1], was chosen as the case study for the development of this tool.

The multi-scale model covers, as of today, the reactor, process and environmental analysis scales. The process model simulating the production process of PLA was developed in Aspen Plus simulation software employing the Polymer Plus module and PolyNRTL as the thermodynamic method, based on literature implementation[2]. The production process consists firstly of an oligomerization reaction step of lactic acid to a PLA pre-polymer. It is followed by a depolymerization step which converts the pre-polymer into lactide. After a purification step, the lactide forms the high molecular weight PLA in a ring-opening polymerization step. The PLA product is obtained after a final purification step. The depolymerization step, in industry, is performed in a devolatilization equipment, which is a mass-transfer limited reactor. As there are no adequate mass-transfer limited reactor models in Aspen Plus, a Python CAPE-Open Unit Operation module[3] was developed to couple a realistic devolatilization reactor model into the process model. If mass-transfer would not be accounted for in the reactor, the ultimate PLA production would be underestimated by 8-times, with the corresponding impact on profitability and environmental.

From the process model, the economic performance of the process can be determined. To determine the environmental performance of the designed process simultaneously and seamlessly, a Life Cycle Analysis (LCA) model, performed in OpenLCA software, is coupled with Aspen Plus using an interface coded in Python. With this multi-scale model built-in, the impact of the design variables at the various scales on the process's overall economic and environmental performance can be determined and optimized.

This multi-scale model creates a basis to develop a multi-objective optimization framework using economic and environmental objective functions directly from Aspen Plus and OpenLCA software. This could enable a reduction in the environmental impact of processes without disregarding the profitability of the process.

[1] - European Bioplastics, Bioplastics market data, 2023, https://www.european-bioplastics.org/news/publications/ accessed on 25/09/2024

[2] - K. C. Seavey and Y. A. Liu, Step-growth polymerization process modeling and product design. New Jersey: Wiley, 2008

[3] - https://www.colan.org/process-modeling-component/python-cape-open-unit-operation/ accessed on 25/09/2025



Enhancing hydrodynamics simulations in Distillation Columns Using Smoothed Particle Hydrodynamics (SPH)

Rodolfo Murrieta-Dueñas1, Jazmin Cortez-González1, Roberto Gutiérrez-Guerra2, Juan Gabriel Segovia-Hernández3, Carlos Enrique Alvarado-Rodríguez3

1Tecnológico Nacional de México / Campus Irapuato, México; 2Universidad de Guanajuato, México; 3Universidad Tecnológica de León, Campus León, México

Distillation is one of the most widely applied unit operations in chemical engineering, renowned for its effectiveness in product purification. However, traditional distillation processes are often hampered by significant inefficiencies, driving efforts to enhance thermodynamic performance in both equipment design and operation. While many alternatives have been evaluated using MESH equations and sequential simulators, comparatively less attention has been given to Computational Fluid Dynamics (CFD) modeling, largely due to its complexity. CFD methodologies typically fall under either Eulerian or Lagrangian approaches. The Eulerian method relies on a mesh to discretize the medium, providing spatial averages at the fluid interfaces. Popular techniques include the finite volume and finite element methods, with the finite volume method commonly employed to simulate the hydrodynamics, mass transfer, and momentum in distillation columns (Haghshenas et al., 2007; Lavasani et al., 2018; Zhao, 2019; Ke, 2022). Despite its widespread use, the Eulerian approach faces challenges such as interface modeling, convergence issues, and selecting appropriate turbulence models for simulating turbulent flows. In contrast, Lagrangian methods, which discretize the continuous medium using non-mesh-based points, offer detailed insights into interfacial phenomena. Among these, Smoothed Particle Hydrodynamics (SPH) stands out for its ability to model discontinuous media and complex geometries without requiring a mesh, making it ideal for studying various systems, including microbial growth (Martínez-Herrera et al., 2022), sea wave dynamics (Altomare et al., 2023), and stellar phenomena (Reinoso et al., 2023). This versatility and robustness make SPH a promising tool for distillation process modeling.

In this study, we present a numerical simulation of a liquid-vapor (L-V) thermal equilibrium stage in a plate distillation column, employing the SPH method. The focus is on sieve and bubble-cap plates, with periodic temperature conditions applied to facilitate thermal equilibrium. Column sizing was performed using Aspen One for an equimolar benzene-toluene mixture, operating under conditions ensuring a condenser cooling water temperature of 120°F. The Chao-Seader thermodynamic model was applied, with both sieve and bubble-cap plates integrated into a ten-stage column. Stage 5 was designated as the feed stage, and a 98% purification and recovery rate for both components was assumed. This setup provided critical operational parameters, including liquid and vapor velocities, viscosity, density, pressure, and column diameter. Three-dimensional CAD models of the distillation column and the plates were generated using SolidWorks and subsequently imported into DualSPHysics (Domínguez et al., 2022) for CFD simulation. Stages 6 and 7 were selected for detailed analysis, as they are positioned just below the feed stage.

The results showed that the sieve plate achieved thermal equilibrium more rapidly than the bubble-cap plate, a difference attributable to the steam injection zone in the bubble-cap design. Moreover, the simulations allowed the calculation of heat transfer coefficients based on plate geometry, providing insights into heat exchange at the fluid interfaces. In conclusion, this study highlights the potential of using periodic temperature conditions to simulate thermal equilibrium in distillation columns. Additionally, the SPH method has demonstrated its utility as a powerful and flexible tool for simulating fluid dynamics and thermal equilibrium in distillation processes.
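For readers unfamiliar with the mesh-free principle that the study relies on, the sketch below shows the core SPH operation: field values at a particle are smoothed sums over neighbouring particles weighted by a kernel. The cubic spline kernel and the toy particle cloud are standard textbook choices and do not reproduce the DualSPHysics setup used here.

```python
# Minimal sketch of the core SPH idea: field values at a particle are smoothed
# sums over neighbours weighted by a kernel. Illustrative only -- not the
# DualSPHysics implementation used in the study.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline kernel W(r, h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def summation_density(positions, masses, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h) over all particles."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(dist, h)).sum(axis=1)

# Toy usage: a small random cloud of particles in a 10 cm box (assumed values)
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 0.1, size=(200, 3))      # m
m = np.full(200, 0.1 / 200)                     # kg per particle, assumed
print(summation_density(pos, m, h=0.02)[:5])    # density estimates, kg/m^3
```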



Electric arc furnace dust waste management: A process synthesis approach

Agustín Porley Santana1, Mayra Doldan1,2, Martín Duarte Guigou2,3, Mauricio Ohanian1, Soledad Gutiérrez Parodi1

1Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Viento Sur Ingeniería, Ruta 61, Km 19, Nueva Helvecia, Colonia, Uruguay.; 3Grupo de Ingeniería de Materiales, Inst. Tecn. Reg. Sur-Oeste, Universidad Tecnológica del Uruguay, Horacio Meriggi 905, CP60000, Paysandú, Uruguay.

The residue from the solid collection system of steel mills is known as electric arc furnace dust (EAFD). It contains significant amounts of iron, zinc, and lead in the form of oxides, silicates, and carbonates, along with minor components such as chromium, tin, nickel, and cadmium. Therefore, most countries classify this residue as hazardous waste. Its management presents scientific and technical challenges that significantly impact the economics of the steelmaking process.

Currently, the management of this waste consists of burying it at the final disposal site. However, there are multiple treatment alternatives to reduce its hazardousness by recovering and immobilizing marketable heavy metals such as Zn and Pb. This process can be carried out through a hydrometallurgical dissolution with selective extraction of Zn, leaving the rest of the metal components in the solid. Zn has amphoteric properties, but it shares this characteristic with Pb, so alkaline extraction solubilizes both metals simultaneously, leaving iron compounds in an insoluble form. At this stage, two streams result, one solid and one liquid. The liquid stream is a zinc-rich solution from which Zn could be electrochemically recovered as a valuable product, ensuring that the electrodeposited material shows characteristics that allow for easy recovery through mechanical means. The solid stream can be stabilized by incorporating it into an alkali-activated inorganic polymer (geopolymer) to obtain a product or waste that captures and immobilizes the heavy metals, or it can be managed by a third party. To avoid lead contamination of the product of interest (pure Zn), the liquid stream can go through a precipitation process with sodium sulfide, removing the lead as lead sulfide, or pure lead can be electrodeposited by controlling the voltage or current before electrodepositing the Zn in a subsequent stage. Pilot-scale testing of these processes has been conducted previously [1].

Each step generates different costs and alternatives for managing this residue. For this reason, the process synthesis approach is considered suitable, allowing for the simultaneous analysis of these alternatives and the selection of the one that generates the greatest benefit.

This work studies the management of steel mill residue with a process synthesis approach, combining experimental data from pilot-scale operations, data collected from metallurgical companies, and data based on expert judgment. The stages to achieve this objective involve: superstructure conception, its translation into mathematical language, and implementation in mathematical programming software (GAMS). The aim is to assist in decision-making at the managerial level, so the objective function chosen was to maximize the commercial value per ton of EAFD to be managed. A superstructure model is proposed that combines binary variables for operations and binary variables for artificial streams, enabling accurate modeling of the various connections involved in this process management network. Artificial streams were used to formally describe disjunctions. Sensitivity analyses are currently being conducted.
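To make the superstructure idea concrete, the toy sketch below selects treatment steps with binary variables and simple precedence logic, maximising value per ton of EAFD. It is written in Pyomo purely for illustration (the actual model is implemented in GAMS), drops the artificial-stream variables, and uses made-up costs, revenues and an arbitrary MILP solver.

```python
# Illustrative toy superstructure in Pyomo (the authors implement theirs in GAMS).
# Treatment steps (alkaline leaching, Pb removal, Zn electrowinning, geopolymer
# stabilisation) are selected with binary variables; all numbers are made up.
import pyomo.environ as pyo

steps = ["leach", "pb_removal", "zn_electrowin", "geopolymer"]
revenue = {"leach": 0, "pb_removal": 5, "zn_electrowin": 120, "geopolymer": 15}   # $/t EAFD, assumed
cost    = {"leach": 30, "pb_removal": 10, "zn_electrowin": 45, "geopolymer": 20}  # $/t EAFD, assumed
requires = {"pb_removal": "leach", "zn_electrowin": "pb_removal", "geopolymer": "leach"}

m = pyo.ConcreteModel()
m.y = pyo.Var(steps, domain=pyo.Binary)          # 1 if the step is included

# A downstream step can only be selected if its upstream step is selected
m.logic = pyo.ConstraintList()
for s, pre in requires.items():
    m.logic.add(m.y[s] <= m.y[pre])

# Maximise commercial value per ton of EAFD managed
m.obj = pyo.Objective(
    expr=sum((revenue[s] - cost[s]) * m.y[s] for s in steps), sense=pyo.maximize)

pyo.SolverFactory("glpk").solve(m)               # any available MILP solver
print({s: int(pyo.value(m.y[s])) for s in steps})
```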

References

[1] M. Doldán, M. Duarte Guigou, G. Pereira, M. Ohanian, Electrodeposition of Zinc and Lead from Electric Arc Furnace Dust Dissolution: A Kinetic Study, in A Closer Look at Chemical Kinetics, Nova Science Publishers, 2022



Network theoretical analysis of the reaction space in biorefineries

Jakub Kontak, Jana Marie Weber

Intelligent Systems Department, Delft University of Technology, Netherlands

Abstract

The large chemical reaction space has been analysed intensively to learn the patterns of chemical reactions (Fialkowski et al., 2005; Jacob & Lapkin, 2018; Llanos et al., 2019; Mann & Venkatasubramanian, 2023) and to understand the wiring structure to be used for network pathway planning problems (Weber et al., 2019; Ulonska et al., 2016). With increasing pressure towards more sustainable production systems, it becomes worthwhile to model the reaction space reachable from biobased feedstocks, e.g. through integrated processing steps in biorefineries.

In this work we focus on a network-theoretical analysis of biorefinery reaction data. We obtain biorefinery reaction data from the REAXYS web interface, propose a directed all-to-all mapping between reactants and products for comparability with related work, and finally compare the reaction space obtained from biorefineries with the network of organic chemistry (NOC) (Jacob & Lapkin, 2018). Our findings indicate that despite having 1000 times fewer molecules, the constructed network resembles the NOC in terms of its scale-free nature and shares similarities regarding its “small-world” property. Our results further suggest that the biorefinery network space has a higher centralisation and clustering coefficient. Additionally, we inspect the coverage rate of our data querying strategy and find that our network covers most of the common second and third intermediates, yet only a few biorefinery end-products and direct feedstock molecules are present.
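The sketch below illustrates, on a toy reactant-product graph, the kind of network statistics compared in this analysis (clustering, shortest paths, degree sequence) and the directed all-to-all mapping between reactants and products; the entries are invented and do not come from the REAXYS data.

```python
# Sketch of the kind of network statistics compared in the study, on a toy
# directed reactant -> product graph (the real data come from REAXYS).
import networkx as nx

reactions = [  # (reactants, products) -- toy entries, not real data
    (["glucose"], ["HMF"]),
    (["HMF"], ["levulinic acid", "formic acid"]),
    (["glucose"], ["lactic acid"]),
    (["lactic acid"], ["acrylic acid"]),
]

G = nx.DiGraph()
for reactants, products in reactions:
    for r in reactants:
        for p in products:          # all-to-all mapping between reactants and products
            G.add_edge(r, p)

print("nodes/edges:", G.number_of_nodes(), G.number_of_edges())
print("avg clustering (undirected view):", nx.average_clustering(G.to_undirected()))
largest_wcc = G.subgraph(max(nx.weakly_connected_components(G), key=len))
print("avg shortest path (undirected view):",
      nx.average_shortest_path_length(largest_wcc.to_undirected()))
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("degree sequence:", degrees)   # a heavy tail here would hint at scale-free structure
```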

References

Fialkowski, M., Bishop, K. J., Chubukov, V. A., Campbell, C. J., & Grzybowski, B. A. (2005). Architecture and evolution of organic chemistry. Angewandte Chemie International Edition, 44(44), 7263-7269.

Jacob, P. M., & Lapkin, A. (2018). Statistics of the network of organic chemistry. Reaction Chemistry & Engineering, 3(1), 102-118.

Llanos, E. J., Leal, W., Luu, D. H., Jost, J., Stadler, P. F., & Restrepo, G. (2019). Exploration of the chemical space and its three historical regimes. Proceedings of the National Academy of Sciences, 116(26), 12660-12665.

Mann, V., & Venkatasubramanian, V. (2023). AI-driven hypergraph network of organic chemistry: network statistics and applications in reaction classification. Reaction Chemistry & Engineering, 8(3), 619-635.

Weber, J. M., Lió, P., & Lapkin, A. A. (2019). Identification of strategic molecules for future circular supply chains using large reaction networks. Reaction Chemistry & Engineering, 4(11), 1969-1981.

Ulonska, K., Skiborowski, M., Mitsos, A., & Viell, J. (2016). Early‐stage evaluation of biorefinery processing pathways using process network flux analysis. AIChE Journal, 62(9), 3096-3108.



Applying Quality by Design to Digital Twin Supported Scale-Up of Methyl Acetate Synthesis

Jessica Ebert1, Amy Koch1, Isabell Viedt1,3, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Chair of Process Control Systems; 3TUD Dresden University of Technology, Process-to-Order Lab

The scale-up from lab to production scale is an essential cost and time factor in the development of chemical processes, especially when high demands are placed on product quality. Quality by Design (QbD) is a common method used in the pharmaceutical industry to ensure product quality throughout the production process (Yu et al., 2014), which is why the QbD methodology could be a useful tool for process development in the chemical industry as well. Concepts from the literature demonstrate how mechanistic models are used for the direct scale-up from laboratory equipment to production equipment by dispensing with intermediate scales in order to shorten the time to process (Furrer et al., 2021). The integration of Quality by Design into a direct scale-up approach promises further advantages, such as a deeper process understanding and the assurance of process safety. Digital twins consisting of simulation models digitally represent the behavior of plants and the processes running on them, enabling model-based scale-up.

In this work, a simulation-based workflow for the digital twin supported scale-up of processes and process plants is proposed, which integrates various aspects of the Quality by Design methodology. The key element is the determination of the design space by defining Critical Quality Attributes and identifying Critical Process Parameters as well as Critical Material Attributes (Yu et al., 2014). The design space is transferred from the laboratory-scale model to the production-scale model. To illustrate the concept, the workflow is implemented for the use case of the synthesis of methyl acetate. The process is scaled from a 2 L laboratory stirred tank reactor to a 50 L production plant, fulfilling each step of the scale-up workflow: modelling, definition of the target product quality, experiments, model adaption, parameter transfer and design space identification. Thereby, the presentation of the results focuses on the design space identification and transfer using global system analysis. Finally, benefits and limitations of the implementation of Quality by Design in the direct scale-up using digital twins are discussed.
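A minimal sampling-based illustration of the design-space idea is given below: critical process parameters are sampled, a surrogate for the quality attribute is evaluated, and the acceptable region is retained. The first-order kinetics, parameter ranges and specification are assumptions made for illustration and do not represent the methyl acetate model or the global system analysis used in this work.

```python
# Minimal sketch of sampling-based design space identification: sample critical
# process parameters, evaluate a simple surrogate for the quality attribute, and
# keep the combinations that meet the specification. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T = rng.uniform(320.0, 360.0, 5000)        # reactor temperature [K]
t = rng.uniform(30.0, 240.0, 5000)         # batch time [min]

k = 5.0e5 * np.exp(-5500.0 / T)            # assumed Arrhenius rate constant [1/min]
conversion = 1.0 - np.exp(-k * t)          # surrogate critical quality attribute

in_design_space = conversion >= 0.95       # CQA specification (assumed)
print(f"{in_design_space.mean():.1%} of sampled operating points meet the CQA")
print("temperature range inside the design space: "
      f"{T[in_design_space].min():.0f}-{T[in_design_space].max():.0f} K")
```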

References

Schindler, Polyakova, Harding, Weinhold, Stenger, Grünewald & Bramsiepe (2020). General approach for technology and Process Equipment Assembly (PEA) selection in process design. Chemical Engineering and Processing – Process Intensification, 159, Article 108223.

T. Furrer, B. Müller, C. Hasler, B. Berger, M. Levis & A. Zogg (2021). New Scale-up Technologies for Hydrogenation Reactions in Multipurpose Pharmaceutical Production Plants. Chimia(75), Article 11.

L. X. Yu, G. Amidon, M. A. Khan, S. W. Hoag, J. Polli, G. K. Raju & J. Woodcock (2014). Understanding Pharmaceutical Quality by Design. The AAPS Journal, 16, 771–783.



Digital Twin supported Model-based Design of Experiments and Quality by Design

Amy Koch1, Jessica Ebert1, Isabell Viedt1,2, Andreas Bamberg4, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process-to-Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems; 4Merck Electronics KGaA, Frankfurter Str. 250, Darmstadt 64293, Germany

In the specialty chemical industries, faster time-to-process is a significant measure of success. One key aspect which supports faster time-to-process is reducing the time required for experimental efforts in the process development phase. Here, Digital Twin workflows based on methods such as global system analysis, model-based design of experiments (MBDoE), and the identification of the design space, as well as leveraging prior knowledge of the equipment capabilities, can be utilized to reduce the experimental load (Koch et al., 2023). MBDoE utilizes prior knowledge (model structure and initial parameter estimates) to optimally design an experiment by identification of optimum process conditions, thereby reducing experimental effort (Franceschini & Macchietto, 2008). Further benefit can be achieved by applying Quality by Design methods (Katz & Campbell, 2012) to these Digital Twin workflows; here, the prior knowledge supplied by the Digital Twin is used to pre-screen combinations of critical process parameters and model parameters to identify suitable parameter combinations for inclusion in the MBDoE optimization problem (Mädler, 2023). In this paper, first a Digital Twin workflow based on incorporating prior knowledge of equipment capabilities into global system analysis and subsequent MBDoE is presented and the relevant methodology explained. This workflow is illustrated with a prototypical implementation using the process simulation tool gPROMS for the specific use case of an esterification process in a stirred tank reactor. As a result, benefits such as improved parameter estimation and reduced experimental effort compared to traditional DoE are illustrated, together with a critical evaluation of the applied methods.
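The toy numpy sketch below illustrates the D-optimality idea underlying MBDoE: with the current parameter estimates, the candidate experiment whose sensitivity matrix maximises det(J^T J) is selected. The first-order decay model and the candidate sampling schedules are assumed stand-ins, not the esterification model implemented in gPROMS.

```python
# Toy illustration of the D-optimal idea behind MBDoE: with current parameter
# estimates, choose the candidate experiment whose sensitivity matrix maximises
# det(J^T J). The first-order decay model is an assumed stand-in.
import numpy as np

C0, k = 1.0, 0.15                      # current parameter estimates (assumed)

def sensitivities(times):
    """Rows: measurements; columns: dy/dC0 and dy/dk for y(t) = C0*exp(-k*t)."""
    times = np.asarray(times, dtype=float)
    e = np.exp(-k * times)
    return np.column_stack([e, -C0 * times * e])

candidate_experiments = {
    "early sampling": [1, 2, 3, 4],
    "spread sampling": [1, 5, 10, 20],
    "late sampling": [15, 20, 25, 30],
}

scores = {name: np.linalg.slogdet(sensitivities(t).T @ sensitivities(t))[1]
          for name, t in candidate_experiments.items()}
best = max(scores, key=scores.get)
print(scores)
print("D-optimal candidate:", best)
```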

References

G. Franceschini & S. Macchietto (2008). Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science, 63(19), 4846–4872.

P. Katz & C. Campbell (2012). FDA 2011 process validation guidance: Process validation revisited. Journal of GXP Compliance, 16(4), 18.

A. Koch, J. Mädler, A. Bamberg & L. Urbas (2023). Digital Twins for Scale-Up in Modular Plants: Requirements, Concept, and Roadmap. In Computer Aided Chemical Engineering, 2063–2068, Elsevier.

J. Mädler (2023). Smarte Process Equipment Assemblies zur Unterstützung der Prozessvalidierung in modularen Anlagen [Smart process equipment assemblies to support process validation in modular plants].



Bioprocess control using hybrid mechanistic and Gaussian process modeling

Lydia Katsini, Satyajeet Sheetal Bhonsale, Jan F.M. Van Impe

BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium

Control of bioprocesses is crucial for achieving optimal yields of various products. In this study, we focus on the fermentation of Xanthophyllomyces dendrorhous, a yeast known for its ability to produce astaxanthin, a high-value carotenoid with applications in pharmaceuticals, nutraceuticals, and aquaculture. Successful application of optimal control requires, however, accurate and robust process models (Bhonsale et al., 2022). Since the system dynamics are non-linear and biological variability is an inherent property of the process, modeling such a system is demanding.

Aiming to tackle the system complexity, our approach to modeling this process follows Vega-Ramon et al. (2021), who combined two distinct methods: mechanistic and machine learning models. On the one hand, mechanistic models, based on existing knowledge, provide valuable insights into the underlying phenomena, but are limited by their demand for accurate parameterization and may struggle to adapt to process disturbances. On the other hand, machine learning models, based on experimental data, can capture the underlying patterns without previous knowledge; however, they are also limited to the domain of the training data utilized to build them.

A key challenge in both modeling approaches is dealing with uncertainty, and more specifically biological variability, which is inherent in biological systems. To address this, we utilize Gaussian Process (GP) modeling, a flexible, non-parametric machine learning technique that provides a framework for uncertainty quantification. In this study, the use of GPs allows for robust control of the fermentation by accounting for the biological variability of the system.
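As a minimal illustration of the uncertainty quantification that GPs provide, the sketch below fits a scikit-learn GP to noisy, synthetic biomass-versus-time data and returns a predictive mean with a confidence band; the data and kernel choice are assumptions and do not reproduce the X. dendrorhous experiments.

```python
# Illustrative Gaussian process fit to noisy, synthetic "biomass vs. time" data,
# showing the predictive mean and the uncertainty band that a hybrid model can
# propagate into robust control. Not the actual fermentation data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 48, 15)[:, None]                     # h
biomass = 10 / (1 + np.exp(-0.2 * (t.ravel() - 24)))    # assumed logistic trend
y = biomass + rng.normal(0, 0.3, biomass.shape)         # "biological variability"

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.1),
                              normalize_y=True).fit(t, y)
t_new = np.linspace(0, 48, 100)[:, None]
mean, std = gp.predict(t_new, return_std=True)
print("prediction at t ~ 36 h: %.2f +/- %.2f g/L" % (mean[75], 1.96 * std[75]))
```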

An optimal control framework is implemented for both the hybrid model and the mechanistic model to identify the optimal sugar feeding strategy for maximizing astaxanthin yield. This study demonstrates how optimal control can benefit from hybrid mechanistic and machine learning bioprocess modeling.

References

Bhonsale, S. et al. (2022). Nonlinear Model Predictive Control based on multi-scale models: is it worth the complexity? IFAC-PapersOnLine, 55(23), 129-134. https://doi.org/10.1016/j.ifacol.2023.01.028

Vega-Ramon, F. et al. (2021). Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty. Biotechnology and Bioengineering, 118, 4854–4866. https://doi.org/10.1002/bit.27950



Tune Decomposition Schemes for Large-Scale Mixed-Integer Programs by Bayesian Optimization

Guido Sand1, Sophie Hildebrandt1, Sina Nunes1, Chung On Yip1, Meik Franke2

1Pforzheim University of Applied Science, Germany; 2University of Twente, The Netherlands

Heuristic decomposition schemes are a common approach to approximately solve large-scale mixed-integer programs (MIPs). A typical example is moving horizon schemes applied to scheduling problems. Decomposition schemes usually exhibit parameters which can be used to tune their performance. Examples of parameters of moving horizon schemes are the horizon length and the step size of its movement. Systematic tuning approaches are seldom reported in the literature.

In a previous paper by the first two authors, Bayesian optimization was proposed as a methodological approach to systematically tune decomposition schemes for mixed-integer programs. This approach is reasonable since the tuning problem is a black-box optimization problem with an expensive-to-evaluate objective function: each evaluation of the objective function of the Bayesian optimization requires the solution of the mixed-integer program using the specifically parametrized decomposition scheme. The mentioned paper demonstrated, by means of an exemplary mixed-integer hoist scheduling model and a moving horizon scheme, that the proposed approach is feasible and effective in principle.
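Conceptually, the tuning loop can be sketched as below with scikit-optimize: each evaluation would launch the moving horizon scheme with the chosen horizon length and step size and return the resulting makespan or computational cost. Here that expensive inner step is replaced by a dummy function, so the snippet only illustrates the structure of the approach, not the hoist scheduling model itself.

```python
# Conceptual sketch of the tuning loop with scikit-optimize: each evaluation
# would solve the full scheduling MIP with the chosen decomposition parameters.
# The expensive inner step is replaced by a placeholder here.
from skopt import gp_minimize
from skopt.space import Integer

def solve_with_decomposition(horizon_length, step_size):
    """Placeholder: run the moving-horizon scheme and return e.g. the makespan."""
    # In the real study this launches the parametrised MIP decomposition.
    return (horizon_length - 12) ** 2 + 3 * abs(step_size - 4) + 100  # dummy surface

def objective(params):
    horizon_length, step_size = params
    return solve_with_decomposition(horizon_length, step_size)

result = gp_minimize(
    objective,
    dimensions=[Integer(4, 30, name="horizon_length"),
                Integer(1, 10, name="step_size")],
    n_calls=25, random_state=0)

print("best parameters:", result.x, "objective:", result.fun)
```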

After the proof of concept in the previous paper, the paper at hand discusses detailed results of three studies of the Bayesian optimization-based approach using the same exemplary hoist scheduling model:

  1. Examine the solution space:
    The graphs of the objective function (makespan or computational cost) of the tuning problem are analysed for small instances of the mixed-integer model, considering the sequences of evaluations of the Bayesian optimization in the integer-valued space of tuning parameters. The results show that the Bayesian optimization converges relatively fast to good solutions, even though visual inspection of the graphs of the objective function exhibits only little structure.
  2. Compare different acquisition functions:
    The type of acquisition function is studied since it is assumed to be a tuning parameter of the Bayesian optimization with a major impact on its performance. Four types of acquisition functions are applied to a set of test cases and compared with respect to the mean performance and its variance. The results show a similar performance of three types and a slightly inferior performance of the fourth type.
  3. Enlarge the tuning-parameter space:
    The scaling behaviour of the Bayesian optimization-based approach with respect to the dimension of the space of tuning-parameters is studied: The number of tuning-parameters is increased from two to four parameters (three integer- and one real-valued). First results indicate that the studied approach is also feasible for real-valued tuning parameters and remains effective in higher dimensional spaces.

The results indicate that Bayesian optimization is a promising approach to tune decomposition schemes for large-scale mixed-integer programs. Future work will investigate the optimization of tuning-parameters for multiple instances in two directions. Direction one is inspired by hyperparameter optimization methods and aims at tuning one decomposition scheme which is on average optimal for multiple instances. Direction two is motivated by algorithm selection methods and aims at predicting good tuning parameters from previously optimized tuning parameters.



Enhancing industrial symbiosis to reduce CO2 emissions in a Portuguese industrial park

Ricardo Nunes Dias1,2, Fátima Nunes Serralha2, Carla Isabel Costa Pinheiro1

1Centro de Química Estrutural, IMS, Department of Chemical Engineering, Instituto Superior Técnico/Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; 2RESILIENCE – Center for Regional Resilience and Sustainability, Escola Superior de Tecnologia do Barreiro, Instituto Politécnico de Setúbal, 2839-001 Lavradio, Portugal

The primary objective of any industry is to generate profit, which often results in a focus on the efficiency of production, without neglecting environmental and social issues. However, it is important to recognise that every process has multiple outlets, including the desired products and residues. In some cases, the effort required to process these residues further may, at first glance, outweigh the benefits, leading to their disposal at a cost to the industry. Many of these residues can be sorted to enhance their value, enabling their sale instead of disposal [1].

This work presents a model developed in GAMS to identify and quantify potential symbioses that are already occurring, or that could occur if the appropriate relations between enterprises were established. A network flow is modelled to establish as many symbioses as possible. The objective function maximises material exchange between enterprises while ensuring that every possible symbiosis is established. This will result in exchanges between enterprises that may involve amounts of waste too small to be implemented. However, this outcome is useful for decision-makers, as having multiple sinks for a given residue can be beneficial [2,3]. EM(n,j,i,n') (exchanged material) is the main decision variable of the model, where the indices are: n and n', the donor and receiver enterprises (respectively); j, the category; and i, the residue. A binary variable, Y(n,j,i,n'), is also used to allow or forbid a given exchange between enterprises. Each residue is categorised according to the role it has in each enterprise: it can be an industrial residue (category 3) or a resource (category 0), while categories 1 and 2 are reserved for products and subproducts, respectively. The wastes produced are converted into CO2eq (carbon dioxide equivalent) as a quantification of environmental impact. Reducing the amount of waste produced can significantly reduce the environmental impact of a given enterprise. This study assesses the largest industrial park in Portugal, which encompasses a refinery and a petrochemical plant as the two largest facilities within the park. The direct CO2 emissions mitigated by the deployment of CO2 utilisation processes can be quantified. The establishment of a methanol plant utilising CO2 can reduce the CO2 emissions from the park by 335,560 tons. A range of CO2 utilisation processes will be evaluated to determine the optimal processes for implementation.
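A toy Pyomo analogue of the exchange-model structure is sketched below: EM variables carry the exchanged amounts, binary Y variables switch individual exchanges on or off via big-M constraints, and the objective maximises the value of the exchanges. The category index j is dropped for brevity and all data are hypothetical; the actual model is implemented in GAMS.

```python
# Toy Pyomo analogue of the exchange model structure (the authors use GAMS):
# EM[n, i, n2] is the exchanged amount of residue i from donor n to receiver n2,
# Y[n, i, n2] switches the exchange on or off. All data are hypothetical.
import pyomo.environ as pyo

donors, receivers = ["refinery", "petrochemical"], ["methanol_plant", "cement"]
residues = ["CO2", "spent_caustic"]
available = {("refinery", "CO2"): 500.0, ("refinery", "spent_caustic"): 20.0,
             ("petrochemical", "CO2"): 300.0, ("petrochemical", "spent_caustic"): 5.0}  # t/day
value = {"CO2": 0.04, "spent_caustic": 0.1}      # arbitrary value per tonne exchanged

m = pyo.ConcreteModel()
m.EM = pyo.Var(donors, residues, receivers, domain=pyo.NonNegativeReals)
m.Y = pyo.Var(donors, residues, receivers, domain=pyo.Binary)

def supply_rule(m, n, i):
    return sum(m.EM[n, i, n2] for n2 in receivers) <= available[n, i]
m.supply = pyo.Constraint(donors, residues, rule=supply_rule)

def link_rule(m, n, i, n2):                       # big-M link between EM and Y
    return m.EM[n, i, n2] <= available[n, i] * m.Y[n, i, n2]
m.link = pyo.Constraint(donors, residues, receivers, rule=link_rule)

m.obj = pyo.Objective(expr=sum(value[i] * m.EM[n, i, n2]
                               for n in donors for i in residues for n2 in receivers),
                      sense=pyo.maximize)
pyo.SolverFactory("glpk").solve(m)                # any available MILP solver
print("exchanged CO2 to methanol plant:",
      pyo.value(m.EM["refinery", "CO2", "methanol_plant"]))
```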

Even though a residue can have impacts on several environmental aspects, in this work we focus on reducing carbon emissions. Furthermore, it was found that cooperation between local enterprises and the announced investments of these enterprises can lead to significant environmental gains in the region studied.

References

[1] J. Patricio, Y. Kalmykova, L. Rosado, J. Cohen, A. Westin, J. Gil, Resour. Conserv. Recycl. 185 (2022). doi:10.1016/j.resconrec.2022.106437

[2] D. C. Y. Foo, Process Integration for Resource Conservation, 2016.

[3] L. Fraccascia, D. M. Yazan, V. Albino, H. Zijm, Int. J. Prod. Econ. 221 (2020). doi:10.1016/j.ijpe.2019.08.006



Blue Hydrogen Plant: Accurate Hybrid Model Based on Component Mass Flows and Simplified Thermodynamic Properties is Practically Linear

Farbod Maghsoudi, Raunak Pandey, Vladimir Mahalec

McMaster University, Canada

Current models of process plants are either rigorous first-principles models based on molar flows and fractions (used for process design or optimization of operating conditions) or simple mass- or volumetric-flow models (used for production planning and scheduling). Detailed models compute stream properties via nonlinear calculations which employ mole fractions, resulting in many nonlinearities and limiting plant-wide models to a single time-period computation. Planning models are flow-based models, usually linear, and therefore solve rapidly, which makes them suitable for a multi-time-period representation of the plant at the expense of lower accuracy.

Once a plant is in operation, most of its streams stay at or close to the normal operating conditions which are maintained by the process control loops. Therefore, each stream can be described by its properties at these normal operating conditions (unit enthalpy, temperature, pressure, density, heat capacity, vapor fraction, etc.). It should be noted that these bulk properties per unit mass are much less sensitive to changes in stream composition if one employs mass units instead of moles (e.g. latent heat of C5 to C10 hydrocarbons varies much less in energy/mass than in energy/mole units).

Based on these observations, this work employs a new plant modelling paradigm which leads to models that have accuracy close to the rigorous models and at the same time the models are (almost) linear, thereby permitting rapid solution of large-scale single-period and multi-period models. Instead of total molar flow and mole fractions, we represent streams by mass flows of components and total mass flow. In addition, we employ simplified thermodynamic properties based on [property value/mass], which eliminates the need to use mole or mass fractions.
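A small example of why this representation stays (almost) linear: once per-unit-mass properties are frozen at normal operating conditions, a stream enthalpy is simply a weighted sum of component mass flows. The property values below are illustrative assumptions, not data from the study.

```python
# Why mass-component flows keep the model (nearly) linear: with per-unit-mass
# properties frozen at normal operating conditions, stream enthalpy is just a
# linear combination of component mass flows. Property values are illustrative.
h_ref = {"H2": 3.5, "CO2": 0.9, "CH4": 2.2, "H2O": 2.6}   # MJ/kg at normal conditions (assumed)

def stream_enthalpy(mass_flows_kg_per_s):
    """H = sum_c m_c * h_c  -- linear in the component mass flows."""
    return sum(m * h_ref[c] for c, m in mass_flows_kg_per_s.items())

feed = {"CH4": 10.0, "H2O": 25.0}                 # kg/s, made-up stream
product = {"H2": 3.0, "CO2": 11.0, "H2O": 18.0}   # kg/s, made-up stream
print("enthalpy change across the unit: %.1f MW"
      % (stream_enthalpy(product) - stream_enthalpy(feed)))
```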

This paradigm has been used to model a blue hydrogen plant described in the NETL report [1]. The plant converts natural gas into hydrogen and CO2 via autothermal reforming (ATR) and water-gas shift (WGS) reactors. Oxygen is supplied from the air separation unit, while steam and electricity are supplied by a combined heat and power (CHP) unit. Stream properties at normal operating conditions have been obtained from the Aspen Plus plant model. Surrogate reactor models employ component mass flows and have only one bilinear term, even though their Aspen Plus counterpart is a highly nonlinear RGIBBS model. The entire plant model has a handful of bilinear terms, and its results are within 1% to 2% of the rigorous Aspen Plus model.

The novelty of our work lies in changing the plant modelling paradigm from molar flows, fractions, and rigorous thermodynamic property calculations to component mass flows and simplified thermodynamic properties. Rigorous property calculation is used to update the simplified properties after the hybrid model converges. This novel plant modelling paradigm greatly reduces the nonlinearities of plant models while maintaining high accuracy. Due to its rapid convergence, the same plant model can be used for optimization of operating conditions, multi-time-period production planning, and scheduling.

References:

  1. Comparison of Commercial State-of-the-art Fossil-based Hydrogen Production Technologies, DOE/NETL-2022/3241, April 12, 2022


Synergies between the distillation of first- and second-generation sugarcane ethanol for sustainable biofuel production

Luiz De Martino Costa1,2, Abhay Athaley3, Zach Losordo4, Adriano Pinto Mariano1, John Posada2, Lee Rybeck Lynd5

1Universidade Estadual de Campinas, Brazil; 2Delft University of Technology, The Netherlands; 3National Renewable Energy Laboratory, United States; 4Terragia Biofuel Incorporated, United States; 5Dartmouth College, United States

Despite the yearly opening of second-generation (2G) sugarcane distilleries in Brazil, the distillation of 2G bagasse ethanol remains a challenging unit operation, because the low-titer ethanol increases heat duty and production costs per mass of ethanol produced. For this reason, and because of the logistics involved in transporting sugarcane bagasse, 2G bagasse ethanol is currently commercially produced in plants annexed to first-generation (1G) ethanol plants, and this configuration will likely become one path of evolution for 2G ethanol production in Brazil.

In the context of 1G2G integrated sugarcane ethanol plants, mixing ethanol beers from both processes may reduce the production costs of 2G ethanol (personal communication with a 2G ethanol producer). However, the energy, process, economic, and environmental advantages of this integrated model compared to its stand-alone counterpart remain unclear. Thus, this work focused on the energy synergies between the distillation of integrated first- and second-generation sugarcane ethanol mills.

For this investigation, integrated and separated 1G2G distillation simulations were conducted using Aspen Plus v.10. The separated distillation arrangement consisted of two RadFrac columns: one to distill 1G beer and another to distill 2G beer to near-azeotropic levels (91.5 wt% ethanol). In the integrated distillation arrangement, two columns were used: one to rectify 2G beer and another to distill 2G vapor and 1G beer to azeotropic levels. The mass flow ratio of 1G to 2G beer was assumed to be 3:1; both mixtures enter the columns as saturated liquids and consist only of water and ethanol. The 1G beer titer was assumed to be 100 g/L, and the 2G beer titer was varied from 10 to 40 g/L to understand and compare the energy impacts of low-titer 2G beer. The energy analysis was conducted by quantifying and comparing the reboilers' duty and the distilled ethanol production to calculate the heating energy demand.

1G2G integration resulted in an overall heating energy demand for ethanol distillation at a near-constant value of 3.25 MJ/kg ethanol, regardless of the 2G ethanol titer. In comparison, the separated scenario had an energy demand ranging from 3.60 (40 g/L 2G beer titer) to 3.80 (10 g/L 2G beer titer) MJ/kg ethanol, meaning that it is possible to obtain energy savings of 9.5% to 14.5%. In addition to the energy savings, the energy demand found for the integrated scenario is almost the same as for 1G beer alone. The main reason for these results is that the energy demand for 2G ethanol is reduced because the reflux ratio necessary for distillation in an integrated 1G2G column is lowered to be near 1G-only conditions. This can be observed in the integrated scenario by the 2G ethanol heat demand in isolation being a near-constant value of 3.35 MJ/kg ethanol over the studied range of 2G ethanol titer, while it changes from 5.81 to 19.92 MJ/kg ethanol in the separated scenario. These results indicate that distillation integration should be chosen for 1G2G sugarcane distilleries to obtain a less energy-demanding process and, therefore, a more sustainable biofuel.



Development of anomaly detection models independent of noise and missing values using graph Laplacian regularization

Yuna Tahashi, Koichi Fujiwara

Department of Materials Process Engineering, Nagoya University, Japan

Process data frequently suffer from imperfections such as missing values or measurement noise due to sensor malfunctions. Such data imperfections pose significant challenges to process fault detection, potentially leading to false positives or to overlooking rare faulty events. Fault detection models with high sensitivity may excessively detect these irregularities, which disturbs the identification of true faulty events.

To address this challenge, we propose a new fault detection model based on an autoencoder architecture with graph Laplacian regularization that considers specific temporal relationships among time-series data. Laplacian regularization assumes that neighboring samples remain similar, imposing significant penalties when neighboring samples lack smoothness. In addition, graph Laplacian regularization can take the smoothness of graph structures into account. Since normal samples in close temporal proximity should keep similar characteristics, a graph can be utilized to represent temporal dependencies between successive samples in a time series. In the proposed model, the nearest correlation (NC) method, a structural learning algorithm that considers the correlation among variables, is used. Using graph Laplacian regularization with the NC method, it is expected that missing values or measurement noise are corrected automatically from the viewpoint of the correlation among variables in the normal process condition, and that only significant changes such as faulty events are detected, because they cannot be corrected sufficiently. The proposed method has broad applicability to various models because the graph regularization term based on the NC method is simply added to the objective function when a model is trained.
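The sketch below shows one way such a training objective can be written: an autoencoder reconstruction loss plus a graph Laplacian smoothness penalty on the latent codes. For brevity, a simple temporal-neighbour chain graph stands in for the graph obtained with the NC method, and the data are random placeholders.

```python
# Sketch of the training objective: reconstruction loss plus a graph Laplacian
# smoothness penalty on the latent codes. A simple temporal-neighbour chain graph
# stands in for the nearest-correlation graph used in the paper.
import torch
import torch.nn as nn

n_samples, n_vars, latent = 64, 10, 3
encoder = nn.Sequential(nn.Linear(n_vars, 16), nn.ReLU(), nn.Linear(16, latent))
decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, n_vars))

# Chain graph over consecutive samples: A[i, i+1] = A[i+1, i] = 1
A = torch.zeros(n_samples, n_samples)
idx = torch.arange(n_samples - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0
L = torch.diag(A.sum(dim=1)) - A                 # graph Laplacian

x = torch.randn(n_samples, n_vars)               # placeholder for normal training data
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(200):
    z = encoder(x)
    x_hat = decoder(z)
    recon = ((x - x_hat) ** 2).mean()
    smooth = torch.trace(z.T @ L @ z) / n_samples  # penalises non-smooth neighbouring codes
    loss = recon + 0.1 * smooth
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", float(loss))
```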

To demonstrate the efficacy of our proposed model, we conducted a case study using simulation data generated from a vinyl acetate monomer (VAM) production process, employing a rigorous process model built on Visual Modeler (Omega Simulation Inc., Japan). In the VAM simulator, six faulty scenarios, such as sudden changes in feed composition and pressure, were generated.

The results show that the fault detection model with graph Laplacian regularization provides higher fault detection accuracy compared to the model without graph Laplacian regularization in some faulty scenarios. The false alarm rate (FAR) and the missed alarm rate (MAR) were improved by up to 0.4% and 50.1%, respectively. In addition, the detection latency (DL) was shortened by up to 1,730 seconds. Therefore, it was confirmed that graph Laplacian regularization with the NC method is particularly effective for fault detection.

The use of graph Laplacian regularization with the NC method is expected to realize a more reliable fault detection model, which would be capable of robustly handling noise and missing values, reducing false positives, and identifying true faulty events rapidly. This advancement promises to enhance the efficiency and reliability of process monitoring and control across various industrial applications.



Comparing incinerator kiln model predictions with measurements of industrial plants

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Because of the heterogeneity of the waste and the lack of local measurements, the industry relies on traditional control strategies, including manual piloting. Advanced modeling strategies have been used to gain insights into the design of such facilities. Despite two decades of scientific efforts, obtaining good model accuracy and reliability is still challenging.
In this work, the predictions of a phenomenological model based on the simplification of literature works are compared with measurements from an industrial incinerator. The model consists of two sub-models, namely the bed model and the freeboard model. The bed refers to the solid waste traveling through the kiln, while the freeboard refers to the gaseous space above the bed where the flame resides.
The bed of waste is simulated with finite volumes and a walking-columns approach, while the freeboard is modeled with the zone method, and the interface with the boiler is taken into account through a three-layer system. The code implementation of the model accounts for various geometries and other important plant characteristics in a way that allows different types of grate kilns to be simulated easily.
The incinerator used as a reference for the development of the model is located in Alsace, France. It features a waste chute, a three-zone grate, water walls in the kiln, four secondary air injection points and a cooling water injection. The simulation results are compared with temperature and gas composition measurements. Except for the oxygen concentration, gas composition data need to be retro-calculated from stack gas analyzers. The simulated bed height is compared with the observable fraction of the actual bed. The model reproduces the static behavior and general dynamic tendencies well.
The very strong sensitivity of the model to particle diameter is discussed. Additionally, the model is configured for two other incinerators and a brief comparison with industrial data is performed to assess the generality of the model.
Despite encouraging results, the need for further work on the solid-phase behavior is highlighted.



Modeling the freeboard of a municipal waste incinerator

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Despite its apparent simplicity, the heterogeneity of the waste and the scarcity of local measurements make waste incineration a challenging process to describe mathematically.
Most of the modeling efforts are concentrated on the bed behavior. However, the gaseous space above the bed, named the freeboard, also needs to be modeled in order to mathematically represent the behavior of the kiln. Indeed, there is a tight coupling between these two spaces, as the bed feeds the freeboard with pyrolysis gases allowing a flame to form in the freeboard, while the flame radiates heat back to the bed, allowing the drying and the pyrolysis to take place.
The freeboard may be modeled using various techniques. The most accurate and commonly used technique is CFD, generally with established commercial software. CFD makes it possible to obtain detailed flow characteristics, which is very valuable for optimizing secondary air injection. However, a CFD setup is quite heavy and harder to interface with the custom codes typically used for bed modeling. In this work, we propose a coarse model, more adapted to operational use. Each grate zone is associated with a freeboard gas space where homogeneous combustion reactions occur. Radiative heat transfer is modeled using the zonal method. Three layers are used to represent the interface with the boiler and the thermal inertia the refractory induces. The flow description is reduced to its minimum and solved through the combination of the continuity equation and the ideal gas law, without a momentum balance.
The resulting mathematical model is a system of ODEs that can be easily solved with general-purpose stiff ODE solvers based on backward differentiation formulas. Steady-state simulation results show good agreement with the few measurements available. Dynamic effects are hard to validate due to the lack of local measurements, but general tendencies seem well represented. The coarse freeboard representation is shown to be sufficient to obtain the radiation profile arriving at the bed.
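The solution strategy can be illustrated with a minimal SciPy example: a stiff two-state toy model (a fast gas temperature coupled to a slow refractory-wall temperature) integrated with a BDF method. The equations and coefficients are illustrative and unrelated to the actual freeboard model.

```python
# Minimal example of the solution strategy: a stiff ODE system integrated with a
# BDF method via SciPy. The two-state toy model (gas and refractory-wall
# temperatures with very different time constants) is illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    T_gas, T_wall = y
    dT_gas = -5.0 * (T_gas - T_wall) + 50.0                         # fast gas-phase dynamics
    dT_wall = -0.001 * (T_wall - 300.0) + 0.01 * (T_gas - T_wall)   # slow refractory inertia
    return [dT_gas, dT_wall]

sol = solve_ivp(rhs, (0.0, 3600.0), [1200.0, 600.0], method="BDF",
                t_eval=np.linspace(0.0, 3600.0, 50))
print("gas and wall temperature at t = 1 h:", sol.y[:, -1])
```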



Superstructure as a communication tool in pre-emptive life cycle design engaging society: Findings from case studies on battery chemicals, plastics, and regional resources

Yasunori Kikuchi1, Ayumi Yamaki1, Aya Heiho2, Jun Nakatani1, Shoma Fujii1, Ichiro Daigo1, Chiharu Tokoro1,3, Shisuke Murakami1, Satoshi Ohara1

1The University of Tokyo, Japan; 2Tokyo City University, Japan; 3Waseda University, Japan

Emerging technologies require sophisticated design and optimization engaging social systems due to their innovative and rapidly advancing characteristics. Although they have a significant capacity to change material flows and life cycles as they penetrate markets, their future development and sociotechnical regimes, e.g., the regulatory environment, societal infrastructure, and markets, are still uncertain and may affect the optimal systems to be implemented in the future. Multiple technologies are being considered simultaneously for a single issue, and appropriate demarcation and synergistic effects are not being evaluated. Superstructures in process systems engineering can visualize all alternative candidates for design problems and contain emerging technologies as such candidates.

In this study, we tackle pre-emptive life cycle design for societal challenges involving the implementation of emerging technologies, with case studies on battery chemicals, plastics, and regional resources. Appropriate alternative candidates were generated with stakeholders in industries and national projects by constructing superstructures. Based on the consensus superstructures, life cycles have been proposed, considering life cycle assessment (LCA) through simulations of applying the emerging technologies.

Regarding the battery chemistry issue, nickel-manganese-cobalt (NMC) type lithium batteries have become dominant, although the lithium iron phosphate (LFP) type has also been considered as a candidate. The battery chemistries and recycling technologies are the emerging technologies in this issue, and superstructures were proposed for recycling systems (Yonetsuka et al., 2024). Through communication with the managers of Japanese national projects on battery technology, scenarios on battery resource circulation have been developed. The issue of plastics has become the design problem of systems applying biomass-derived and recycle-based carbon sources (Meng et al., 2023; Kanazawa et al., 2024). Based on a superstructure (Nakamura et al., 2023), scenario planning and LCA have been conducted and shared with stakeholders for designing future plastic resource circulation. Regional resources could be circulated by implementing multiple technologies (Kikuchi et al., 2023). Through communication with residents and stakeholders, a demonstration test was conducted.

The case studies in this work lead to the following findings. The superstructures with technology assessments could support a common understanding of the applicable technologies and their pros and cons. Because technologies cannot be implemented without social acceptance, CAPE tools should be able to address the sociotechnical and socioeconomic aspects of process systems.

D. Kanazawa et al., 2024, Scope 1, 2, and 3 Net Zero Pathways for the Chemical Industry in Japan, J. Chem. Eng. Jpn., 57 (1). DOI: 10.1080/00219592.2024.2360900.

Y. Kikuchi et al., 2024, Prospective life-cycle design of regional resource circulation applying technology assessments supported by CAPE tools, Comput. Aid. Chem. Eng., 53, 2251-2256

F. Meng et al., 2023, Planet compatible pathways for transitioning the chemical industry, Proc. Natl. Acad. Sci., 120 (8) e2218294120.

T. Nakamura et al., 2024, Assessment of Plastic Recycling Technologies Based on Carbon Resource Circularity Considering Feedstock and Energy Use, Comput. Aid. Chem. Eng., 53, 799-804

T. Yonetsuka et al., 2024, Superstructure Modeling of Lithium-Ion Batteries for an Environmentally Conscious Life-Cycle Design, Comput. Aid. Chem. Eng., 53, 1417-1422



A kinetic model for transesterification of vegetable oils catalyzed by sodium methylate—Insights from inline Raman spectroscopy

Ilias Bouchkira, Mohammad El Wajeh, Adel Mhamdi

Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany

The transesterification of triolein by methanol for biodiesel production is of great interest due to its potential to provide a sustainable and environmentally friendly alternative to fossil fuels. Biodiesel can be produced from renewable sources like vegetable oils, thereby contributing to reducing greenhouse gas emissions and dependency on non-renewable energy. The process also yields glycerol, a valuable by-product that is used in various industries. Given the growing global demand for cleaner energy and sustainable chemical processes, understanding and modeling the kinetics of biodiesel production is critical for improving efficiency, reducing costs, and ensuring scalability of biodiesel production, especially for model-based process design and control (El Wajeh et al., 2023).

We present a kinetic model of the transesterification of triolein by methanol to produce fatty acid methyl esters (FAME), i.e. biodiesel, and glycerol. For parameter estimation, we perform transesterification experiments using an automated lab-scale system consisting of a semi-batch reactor, dosing pumps, a stirring system and a cooling/heating thermostat. An important contribution of this work is that we use inline Raman spectroscopy instead of taking samples for offline analysis. The application of Raman spectroscopy enables continuous concentration monitoring of the key species involved in the reaction, i.e. FAME, triglycerides, methanol, glycerol and the catalyst.

We employ sodium methylate as a catalyst, addressing a gap in the literature, where kinetic parameter values for the transesterification with this catalyst are lacking. To ensure robust parameter estimation, we perform a global sensitivity-based estimability analysis (Bouchkira et al., 2024), confirming that the experimental data is sufficient for accurate model calibration. The parameter estimation is carried out using genetic algorithms, and we determine the confidence intervals of the estimated parameters through Hessian matrix analysis. This approach ensures reliable and meaningful model parameters for a broad range of operating conditions.
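The estimation workflow can be sketched as follows on an intentionally simplified one-step reversible transesterification model with synthetic concentration data, where SciPy's differential evolution stands in for the genetic algorithm; the real study uses a detailed multi-step kinetic scheme, measured Raman spectra and a subsequent Hessian-based confidence analysis.

```python
# Sketch of the estimation workflow on an intentionally simplified one-step
# reversible transesterification model with synthetic concentration data.
# SciPy's differential evolution stands in for the genetic algorithm used here.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def simulate(k1, k2, t_eval):
    def rhs(t, c):
        tg, me, fame, gly = c                    # triglyceride, methanol, FAME, glycerol
        r = k1 * tg * me - k2 * fame * gly       # simplified overall reversible rate
        return [-r, -3 * r, 3 * r, r]
    c0 = [0.9, 5.4, 0.0, 0.0]                    # mol/L, assumed initial charge
    return solve_ivp(rhs, (0, t_eval[-1]), c0, t_eval=t_eval).y

t = np.linspace(0, 60, 20)                       # min
true = simulate(0.05, 0.01, t)
data = true + np.random.default_rng(0).normal(0, 0.02, true.shape)  # synthetic "Raman" data

def sse(params):
    return float(((simulate(*params, t) - data) ** 2).sum())

res = differential_evolution(sse, bounds=[(1e-4, 0.5), (1e-4, 0.5)],
                             seed=0, maxiter=200, tol=1e-6)
print("estimated k1, k2:", res.x)
```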

We perform experiments at several temperatures relevant for industrial application, with a specific focus on the range around 60°C. The Raman probe used inside the reactor is calibrated offline with high precision, achieving excellent calibration accuracy for concentrations (R2 = 0.99). The predicted concentrations from the model align with the experimental data, with deviations generally under 2%, demonstrating the accuracy and reliability of the proposed kinetic model across different operating conditions.

References

El Wajeh, M., Mhamdi, A., & Mitsos, A. (2023). Dynamic modeling and plantwide control of a production process for biodiesel and glycerol. Industrial & Engineering Chemistry Research, 62(27), 10559-10576.

Bouchkira, I., Latifi, A. M., & Benyahia, B. (2024). ESTAN—A toolbox for standardized and effective global sensitivity-based estimability analysis. Computers & Chemical Engineering, 186, 108690.



Integration of renewable energy and reversible solid oxide cells to decarbonize secondary aluminium production and urban systems

Daniel Florez-Orrego1, Dareen Dardor1, Meire Ribeiro Domingos1, Reginald Germanier2, François Maréchal1

1Ecole Polytechnique Federale de Lausanne, Switzerland; 2Novelis Sierre S.A.

The aluminium recycling and remelting industry is a key actor in advancing a sustainable and circular economy within the aluminium sector. Currently, energy conversion processes in secondary aluminium production are largely dependent on natural gas, exposing the industry to volatile market prices and contributing to significant environmental impacts. To mitigate this, efforts are focused on reducing reliance on fossil fuels by incorporating renewable energy and advanced cogeneration systems. Due to the intermittent nature of renewable energy, a combination of technologies can be employed to improve energy integration and enhance process resilience in heavy industry. These technologies include energy storage systems, oxycombustion furnaces, carbon abatement, power-to-gas technologies, and biomass thermochemical conversion. This configuration allows for seasonal storage of renewable energy, optimizing its use during periods of high electricity and natural gas prices.

High-temperature reversible solid oxide cells (rSOC) play a critical role in balancing energy needs while increasing exergy efficiency within the integrated facility, offering advantages over traditional cogeneration systems. When thermally integrated into an aluminium remelting plant, the whole system functions as an industrial battery (i.e. fuel and gas storage), cascading low-grade waste heat to a nearby urban agglomeration. The waste heat temperature from aluminium furnaces and biomass energy conversion technologies supports the integration of high-temperature reversible solid oxide cells. The post-combustion of tail gas from these cells provides heat to the melter furnace, while the electricity generated can be used elsewhere in the system, such as for powering electrical furnaces, rolling processes, ancillary demands, and district heating heat pumps. In fact, by optimally tuning the operating parameters of the rSOC, which in turn depend on the partial load and the utilization factor, the heat-to-power ratio can be modulated to satisfy the energy demands of all the industrial and urban systems involved. The chemically driven heat recovery in the reforming section is also compared to other energy recovery systems, such as supercritical CO2 power cycles and preheater-melter furnace integration. In all cases, the low-grade waste heat, typically rejected to the environment, is recovered and used to supply heat to the city through an anergy district heating network via heat pumping systems.

In this advanced integrated scenario, energy consumption increases by only 30% compared to conventional systems based on natural gas and biomass combustion. However, CO2 emissions are reduced by a factor of three, particularly when combined with a carbon management and sequestration system. Further reductions in emissions can be achieved if higher shares of renewable electricity become available. Moreover, the use of local renewable energy resources promotes the energy security and sustainability of industries traditionally reliant on fossil energy resources.



A Novel Symbol Recognition Framework for Digitization of Piping and Instrumentation Diagrams

Zhiyuan Li1, Zheqi Liu2, Jinsong Zhao1, Huahui Zhou3, Xiaoxin Hu3

1Department of Chemical Engineering, Tsinghua University, Beijing, China; 2Department of Computer Science and Engineering, University of California, San Diego, US; 3Sinopec Ningbo Engineering Co., Ltd, Ningbo, China

Piping and Instrumentation Diagrams (P&IDs) are essential in the chemical industry, but most exist as scanned images, limiting seamless integration into digital workflows. This paper proposes a method to digitize P&IDs and automate unit operation selection for Hazard and Operability (HAZOP) analysis. We combined convolutional neural networks and transformers to detect devices, pipes, instrumentation, and text in image-format P&IDs. Then we reconstructed the process topology and control structures for each P&ID using distance metric learning. Furthermore, multiple P&IDs were integrated into a comprehensive chemical process knowledge graph by stream and equipment identifiers. To facilitate automated HAZOP analysis, we developed a node-merging algorithm that groups equipment according to predefined unit operation categories, thereby identifying specific analysis objects for intelligent HAZOP analysis.

An evaluation conducted on a dataset comprising 500 simulated P&IDs revealed that the device recognition process achieved over 99% precision and recall, with 93% accuracy in text extraction. Processing time was reduced threefold compared to conventional methods, and the node-merging algorithm yielded satisfactory results. This study improves data sharing in chemical process design and facilitates automated HAZOP analysis.



Twin Roll Press Washer Blockage Prediction: A Pulp and Paper Plant Case Study

Bryan Li1,2, Isaac Severinsen1,2, Wei Yu1,2, Timothy Walmsley2, Brent Young1,2

1Department of Chemical and Materials Engineering, The University of Auckland, Auckland 1010, New Zealand; 2Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato, Hamilton 3240, New Zealand

A process fault is considered an unacceptable deviation from the normal state. Process faults can incur significant product and revenue losses, as well as damage to personnel and equipment. The aim of this research is to create a self-learning digital twin that closely replicates and interfaces with a physical plant to appropriately advise plant operators of a potential plant fault in the near future. A key challenge in accurately predicting process faults is the lack of fault data due to the scarcity of fault occurrences. To overcome this challenge, this study creates synthetic data, indistinguishable from the limited real process fault datasets, using generative artificial intelligence, so that deep learning algorithms can better learn the fault behaviours. The model capability is further enhanced with real-time fault library updates employing methods of low computational cost: principal component analysis and transfer learning.

A pulp bleaching and washing process is used as an industrial case study. This process is connected to downstream black liquor evaporators and chemical recovery boilers. Successful development of this model can aid decarbonisation progress in the pulp and paper industry by decreasing energy wastage, water usage, and process downtime.



Addressing Incomplete Physical Models in Chemical Processes: A Novel Physics-Informed Neural Network Approach

Zhiyuan Xie, Feiya Lv, Jinsong Zhao

Tsinghua University, People's Republic of China

In recent years, machine learning—particularly neural networks—has exerted a transformative influence on various facets of chemical processes, including variable prediction, fault detection, and fault diagnosis. However, when data are incomplete or insufficient, purely data-driven neural networks often encounter difficulties in achieving high predictive accuracy. Physics-Informed Neural Networks (PINNs) address these limitations by embedding physical knowledge and prior domain expertise into the neural network framework, thereby constraining the solution space and facilitating effective training with limited data. This methodology offers notable advantages in handling scarce industrial datasets.

Despite these strengths, PINNs depend on explicit formulations of nonlinear partial differential equations (PDEs), which presents significant challenges when modeling the intricacies of complex chemical processes. To overcome these limitations, this study introduces a novel PINN architecture capable of accommodating processes with incomplete PDE descriptions. Experimental evaluations on a Continuous Stirred Tank Reactor (CSTR) dataset, along with real-world industrial datasets, validate the proposed architecture's effectiveness and demonstrate its feasibility in scenarios involving incomplete physical models.
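A compact sketch of the underlying PINN idea on a CSTR mass balance is given below: the loss combines a data-misfit term with the residual of the governing ODE, so the physics constrains the fit where data are scarce. The network size, reactor parameters and data are illustrative and do not reproduce the architecture proposed in this work, which additionally handles incomplete PDE descriptions.

```python
# Compact sketch of the PINN idea on a CSTR mass balance: the loss combines a
# data-misfit term with the residual of the governing ODE, so physics constrains
# the fit where data are scarce. Architecture and parameters are illustrative.
import torch
import torch.nn as nn

q_over_V, C_in, k = 0.5, 1.0, 0.3          # assumed CSTR parameters (1/h, mol/L, 1/h)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

t_data = torch.tensor([[0.0], [1.0], [4.0]])                 # sparse measurement times
C_data = torch.tensor([[1.5], [1.10], [0.70]])               # synthetic measured values
t_phys = torch.linspace(0.0, 10.0, 100).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(2000):
    # Data term
    loss_data = ((net(t_data) - C_data) ** 2).mean()
    # Physics term: residual of dC/dt - q/V * (C_in - C) + k * C = 0
    C = net(t_phys)
    dCdt = torch.autograd.grad(C, t_phys, torch.ones_like(C), create_graph=True)[0]
    residual = dCdt - q_over_V * (C_in - C) + k * C
    loss_phys = (residual ** 2).mean()
    loss = loss_data + loss_phys
    opt.zero_grad(); loss.backward(); opt.step()

print("predicted concentration at t = 8 h:", float(net(torch.tensor([[8.0]]))))
```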



A Physics-based, Data-driven Numerical Framework for Anomalous Diffusion of Water in Soil

Zeyuan Song, Zheyu Jiang

Oklahoma State University, United States of America

Precision modeling and forecasting of soil moisture are essential for implementing smart irrigation systems and mitigating agricultural drought. Agro-hydrological models, which describe irrigation, precipitation, evapotranspiration, runoff, and drainage dynamics in soil, are widely used to simulate the root-zone (top 1 m of soil) soil moisture content. Most agro-hydrological models are based on the standard Richards equation [1], a highly nonlinear, degenerate elliptic-parabolic partial differential equation (PDE) with a first-order time derivative. However, research has shown that the standard Richards equation is unable to model preferential flow in soil with fractal structure. In such a scenario, the soil exhibits anomalous non-Boltzmann scaling behavior. For soils exhibiting non-Boltzmann scaling behavior, the soil moisture content is a function of $\frac{x}{t^{\alpha/2}}$, where $x$ is the position vector, $t$ denotes the time, and $\alpha$ is a soil-dependent parameter indicating subdiffusion ($\alpha \in (0,1)$) or superdiffusion ($\alpha \in (1,2)$). Incorporating this functional form of soil moisture into the Richards equation leads to a generalized, time-fractional Richards equation based on fractional time derivatives. Clearly, solving the time-fractional Richards equation for accurate modeling of water flow dynamics in soil faces extensive theoretical and computational challenges. Naïve approaches typically discretize the time-fractional Richards equation using the finite difference method (FDM). However, the stability of FDM is not guaranteed. Furthermore, the underlying physical laws (e.g., mass conservation) are often lost during the discretization process.

Here, we propose a novel numerical method that synergistically integrates the finite volume method (FVM), an adaptive linearization scheme, global random walk, and a neural network to solve the time-fractional Richards equation. Specifically, the fractional time derivatives are first approximated using a trapezoidal quadrature formula, before discretizing the time-fractional Richards equation by FVM. Leveraging our previous findings [2], we develop an adaptive linearization scheme to solve the discretized equation iteratively, thereby overcoming the stability issues associated with directly solving a stiff and sparse matrix equation. To better preserve the underlying physics during the solution process, we reformulate the linearized equation using a global random walk algorithm. Next, rather than adopting the prevailing assumption that the soil moisture in any discretized cell is proportional to the number of particles, we show that this assumption does not hold and instead propose to use neural networks to model the highly nonlinear relationship between the soil moisture content and the number of particles. We illustrate the accuracy and computational efficiency of our proposed physics-based, data-driven numerical method using numerical examples. Finally, a simple procedure is developed to efficiently identify the parameter $\alpha$ so that the solutions of the time-fractional Richards equation match experimental measurements.
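As a concrete illustration of the quadrature step, the snippet below implements the widely used L1 (piecewise-linear, trapezoidal-type) approximation of a Caputo fractional derivative of order alpha in (0,1) and checks it against the known derivative of f(t) = t; this is a generic textbook scheme, shown for orientation only, not the authors' FVM/random-walk solver.

import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0,1) on a uniform grid."""
    n = len(f_vals)
    coef = dt ** (-alpha) / gamma(2.0 - alpha)
    d = np.zeros(n)
    for m in range(1, n):
        k = np.arange(m)                                   # k = 0 .. m-1
        b = (k + 1.0) ** (1 - alpha) - k ** (1 - alpha)    # quadrature weights
        incr = f_vals[m - k] - f_vals[m - k - 1]           # backward increments of f
        d[m] = coef * np.sum(b * incr)
    return d

# Check against the exact Caputo derivative of f(t) = t, which is t**(1-alpha)/Gamma(2-alpha)
alpha, dt = 0.6, 0.01
t = np.arange(0.0, 1.0 + dt, dt)
approx = caputo_l1(t, dt, alpha)
exact = t ** (1 - alpha) / gamma(2 - alpha)
print(np.max(np.abs(approx[1:] - exact[1:])))              # essentially zero for a linear f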

References

[1] L.A. Richards, Capillary conduction of liquids through porous mediums, Physics, 1931, 1(5): 318-333.

[2] Z. Song, Z. Jiang, A Novel Data-driven Numerical Method for Hydrological Modeling of Water Infiltration in Porous Media, arXiv preprint arXiv:2310.02806, 2023.



Supersaturation Monitoring for Batch Crystallization using Empirical and Machine Learning Models

Mohammad Reza Boskabadi, Merlin Alvarado Morales, Seyed Soheil Mansouri, Gürkan Sin

Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Batch crystallization serves as a downstream process within the pharmaceutical and food industries, providing a high degree of flexibility in the purification of a wide range of products. Effective control over the crystal size distribution (CSD) is essential in these processes to minimize waste and the need for recycling, as crystals falling outside the target size range are typically considered waste or are recycled (Boskabadi et al., 2024). The resulting CSD is significantly influenced by the supersaturation (SS) of the mother liquor, a key parameter driving crystal nucleation and growth. Supersaturation is governed by several nonlinear factors, including concentration, temperature, purity, and other quality parameters of the mother liquor, which are often determined through laboratory analysis. Due to the complexity of these dependencies, no direct measurement method or single instrument exists for supersaturation assessment (Morales et al., 2024). This lack of efficient monitoring contributes to the GHG emissions associated with sugar production, estimated at 1.47 kg CO2/kg sugar (Li et al., 2024).

The primary objective of this study is to develop a machine learning (ML)-based model to predict sugar supersaturation using the sugar solubility dataset provided by Van der Sman (2017), aiming to establish correlations between temperature and sucrose solubility. To this end, different ML models were developed, and each model underwent rigorous statistical evaluations to verify its ability to capture solubility trends effectively. The results were compared to the saturation curve predicted by the Flory-Huggins thermodynamic model. The ML model simplifies predictions by accounting for impurities and temperature dependencies, validated using experimental datasets. The findings indicate that this predictive model allows for more precise dynamic control of the crystallization process. Finally, the effect of the developed model on sustainable sugar production was investigated. It was demonstrated that using this model may reduce the mean batch residence time during the crystallization stage, lowering energy consumption, reducing the CO2 footprint, increasing production capacity, and ultimately contributing to sustainable process development.
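A minimal sketch of the underlying soft-sensor idea: regress the saturation concentration against temperature and compute a supersaturation ratio from it. The solubility points below are rough illustrative values for aqueous sucrose, not the Van der Sman (2017) dataset, and the Gaussian-process regressor is only one of the possible ML models referred to above.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Approximate sucrose solubility (g per 100 g solution) vs temperature (deg C), illustrative only
T = np.array([[20.0], [40.0], [60.0], [80.0]])
c_sat = np.array([67.0, 70.0, 74.0, 78.0])

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.1), normalize_y=True)
gp.fit(T, c_sat)

def supersaturation(concentration, temperature_C):
    """Supersaturation ratio S = c / c_sat(T); S > 1 drives nucleation and crystal growth."""
    c_eq = gp.predict(np.array([[temperature_C]]))[0]
    return concentration / c_eq

print(supersaturation(76.0, 60.0))    # about 1.03 with these placeholder data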

References:

Boskabadi, M. R., Sivaram, A., Sin, G., & Mansouri, S. S. (2024). Machine Learning-Based Soft Sensor for a Sugar Factory’s Batch Crystallizer. In Computer Aided Chemical Engineering (Vol. 53, pp. 1693–1698). Elsevier.

Li, K., Zhao, M., Li, Y., He, Y., Han, X., Ma, X., & Ma, F. (2024). Spatiotemporal Trends of the Carbon Footprint of Sugar Production in China. Sustainable Production and Consumption, 46, 502–511.

Morales, H., di Sciascio, F., Aguirre-Zapata, E., & Amicarelli, A. (2024). Crystallization Process in the Sugar Industry: A Discussion On Fundamentals, Industrial Practices, Modeling, Estimation and Control. Food Engineering Reviews, 1–29.

Van der Sman, R. G. M. (2017). Predicting the solubility of mixtures of sugars and their replacers using the Flory–Huggins theory. Food & Function, 8(1), 360–371.



Role of process integration and renewable energy utilization for the decarbonization of the watchmaking sector

Pullah Bhatnagar1, Daniel Alexander Florez Orrego1, Vibhu Baibhav1, François Maréchal1, Manuele Margni2

1EPFL, Switzerland; 2HES-SO Valai Wallis, Switzerland

Switzerland is the largest exporter of watches and clocks worldwide. The Swiss watch industry contributes 4% to Switzerland's GDP, amounting to CHF 25 billion annually. As governments and international organizations accelerate efforts to achieve net-zero emissions, industries are increasingly pressured to adopt more sustainable practices. Decarbonizing the watch industry is therefore essential. One way to improve sustainability is by enhancing energy efficiency, which can significantly reduce the consumption of various energy sources, leading to lower emissions. Additionally, recovering waste heat from different industrial processes can further enhance energy efficiency.

The watch industry operates across five distinct typical days, each characterized by different levels of average power demand, plant activity, and duration. Among these, typical working days experience the highest energy demand, while vacation periods see the lowest. Adjusting the timing of vacation periods—such as shifting the month when the industry closes—can also improve energy efficiency. This becomes particularly relevant with the integration of decarbonization technologies like photovoltaic (PV) and solar thermal (ST) systems, which generate more energy during the summer months.

This work also explores the techno-economic feasibility of incorporating energy storage solutions (both for heat and electricity) and developing a tailored charging and dispatch strategy. The strategy would be designed to account for the variations in energy demand observed across the different characteristic time periods within a month.



An Integrated Machine Learning Framework for Predicting HPNA Formation in Hydrocracking Units Using Forecasted Operational Parameters

Pelin Dologlu1, Berkay Er1, Kemal Burçak Kaplan1, İbrahim Bayar2

1SOCAR Turkey, Digital Transformation Department, Istanbul 34485, Turkey; 2SOCAR STAR Oil Refinery, Process Department, Aliaga, Izmir 35800, Turkey

The accumulation of heavy polynuclear aromatics (HPNAs) in hydrocracking units (HCUs) poses significant challenges to catalyst performance and process efficiency. This study proposes an integrated machine learning framework that combines ridge regression, K-nearest neighbors (KNN), and long short-term memory (LSTM) neural networks to predict HPNA formation, enabling proactive process management. For the training phase, weighted average bed temperature (WABT), catalyst deactivation phase—classified using unsupervised KNN clustering—and hydrocracker feed (HCU feed) parameters obtained from laboratory analyses are utilized to capture the complex nonlinear relationships influencing HPNA formation. In the simulation phase, forecasted WABT values are generated using a ridge regression model, and future HCU feed changes are derived from planned crude oil blend data provided by the planning department. These forecasted WABT values, predicted catalyst deactivation phases, and anticipated HCU feed parameters serve as inputs to the LSTM model for predicting future HPNA levels. This approach allows us to simulate various operational scenarios and assess their impact on HPNA accumulation before they manifest in the actual process. By identifying critical process parameters and their influence on HPNA formation, the model enhances process engineers' understanding of the hydrocracking operation. The ability to predict HPNA levels in advance empowers engineers to implement corrective actions proactively, such as adjusting feed compositions or operating conditions, thereby mitigating HPNA formation and extending catalyst life. The integrated framework demonstrates high predictive accuracy and robustness, underscoring its potential as a valuable tool for optimizing HCU operations through advanced predictive analytics and informed decision-making.
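A skeletal sketch of how the three model classes could be chained in scikit-learn-style code; the variable names, synthetic data and the use of KMeans as the unsupervised clustering step are assumptions for illustration, and the LSTM stage is only indicated as a comment.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
days = np.arange(365).reshape(-1, 1)                                # historical time index
wabt = 370 + 0.03 * days.ravel() + rng.normal(0, 0.5, 365)          # synthetic WABT trend, deg C

# 1) Ridge regression forecasts WABT over the planning horizon
ridge = Ridge(alpha=1.0).fit(days, wabt)
future_days = np.arange(365, 365 + 60).reshape(-1, 1)
wabt_forecast = ridge.predict(future_days)

# 2) Unsupervised clustering assigns a catalyst deactivation phase label
phase_model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(wabt.reshape(-1, 1))
future_phase = phase_model.predict(wabt_forecast.reshape(-1, 1))

# 3) Forecasted WABT, phase labels and planned feed properties would then be stacked
#    into sequences and passed to an LSTM that predicts future HPNA levels.
planned_feed = rng.normal(size=(60, 4))                             # placeholder feed qualities
X_future = np.column_stack([wabt_forecast, future_phase, planned_feed])
print(X_future.shape)                                               # (60, 6) input matrix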



Towards the Decarbonization of a Conventional Ammonia Plant by the Gradual Incorporation of Green Hydrogen

João Fortunato, Pedro Castro, Diogo A. C. Narciso, Henrique A. Matos

Centro de Recursos Naturais e Ambiente, Department of Chemical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal

As the second most produced chemical worldwide, ammonia (NH3) production depends heavily on fossil fuel consumption. The ammonia production process is highly energy-intensive, accounting for 1-2 % of global carbon dioxide emissions [1] and 2 % of worldwide energy consumption [1]. Ammonia is industrially produced by the Haber-Bosch (HB) process, by reacting hydrogen with nitrogen. Hydrogen can be obtained from a variety of feedstocks, such as coal and naphtha, but is typically obtained from the processing of natural gas via Steam Methane Reforming (SMR) [1]. In the latter case, atmospheric air can be used directly as a nitrogen source without the need for previous separation, since the oxygen is completely consumed by the methane partial oxidation reaction [2].

The ammonia industry is striving for decarbonization, driven by increasing carbon neutrality policies and energy independence targets. In Europe, the Renewable Energy Directive III requires that 42 % of the hydrogen used in industrial processes come from renewable sources by 2030 [3], setting a critical shift towards more sustainable ammonia production methods.

The literature includes many studies focusing on the production of low-carbon ammonia entirely from green hydrogen, without considering its production via SMR. However, this approach could threaten the competitiveness of the current industry and forgo the opportunity to continue valorizing previous investments.

This work addresses the challenges involved in incorporating green hydrogen into a conventional ammonia production plant (methane-fed HB process). An Aspen Plus V14 model was developed, and two different green hydrogen incorporation strategies were tested: S-I and S-II. These were inspired by existing operating procedures at a real-life plant; the main focus of the model simulations is therefore to determine the feasible limits of using an existing conventional NH3 plant and to observe the associated main KPIs when green H2 is available to add.

The S-I strategy reduces the production of grey hydrogen by reducing natural gas and process steam in the SMR. The intake of green hydrogen allows hydrogen and ammonia production to remain fixed.

In strategy S-II, grey hydrogen production remains unchanged, resulting in higher total hydrogen production. By taking in larger quantities of process air, higher NH3 production can be achieved.

These strategies introduce changes to the SMR process and NH3 synthesis, which imply modifications to the operating conditions of the plant. These changes lead to a technical limit for the incorporation of green hydrogen into the conventional HB process. Nevertheless, both strategies make it possible to reduce carbon emissions per quantity of NH3 produced and promote the gradual decarbonization of the current ammonia industry.
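A back-of-the-envelope stoichiometric illustration of strategy S-I (fixed ammonia output, grey hydrogen displaced by green hydrogen), using the ideal overall reactions CH4 + 2H2O -> CO2 + 4H2 and N2 + 3H2 -> 2NH3; it ignores reformer fuel firing, purge losses and conversion limits, and is not the Aspen Plus model described above.

def s1_natural_gas_demand(nh3_kmol_h, green_h2_fraction):
    """Ideal CH4 demand (kmol/h) for a fixed NH3 rate when part of the hydrogen is green (S-I)."""
    h2_total = 1.5 * nh3_kmol_h                    # N2 + 3H2 -> 2NH3
    h2_from_smr = (1.0 - green_h2_fraction) * h2_total
    return h2_from_smr / 4.0                       # CH4 + 2H2O -> CO2 + 4H2 (SMR plus full shift)

for f in (0.0, 0.2, 0.42):
    print(f, round(s1_natural_gas_demand(1000.0, f), 1))   # kmol CH4/h per 1000 kmol NH3/h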

1. IEA International Energy Agency. Ammonia Technology Roadmap - Towards More Sustainable Nitrogen Fertiliser Production. https://www.iea.org/reports/ammonia-technology-roadmap (2021).

2. Appl, M. Ammonia, 2. Production Processes. in Ullmann’s Encyclopedia of Industrial Chemistry (Wiley, 2011). doi:10.1002/14356007.o02_o11.

3. RED III: Directive (EU) 2023/2413 of 18 October 2023.



Gate-to-Gate Life Cycle Assessment of CO₂ Utilisation in Enhanced Oil Recovery: Sustainability and Environmental Impacts in Dukhan Field, Qatar

Razan Sawaly, Ahmad Abushaikha, Tareq Al-Ansari

HBKU, Qatar

This study examines the potential impact of implementing a cap and trade system to reduce CO₂ emissions in Qatar's industrial sector, which is a significant contributor to global emissions. Using data from seven key industries, the research sets emission caps, allocates allowances through a grandfathering method, and allows trading of these allowances to create economic incentives for emission reductions. The study utilizes a model with a carbon price of $12.50 per metric ton of CO₂ and compares baseline emissions with future reduction strategies. Results indicate that while some industrial plants, such as the LNG and methanol plants, achieved substantial emission reductions and financial surpluses through practices like carbon capture and switching to hydrogen, others continued to face deficits. The findings highlight the system's potential to promote sustainable practices, suggesting that tighter caps and auction-based allowance allocations could further enhance the effectiveness of the cap and trade system in Qatar's industrial sector.



Robust Flowsheet Synthesis for Ethyl Acetate, Methanol and Water Separation

Aayush Gupta, Kartavya Maurya, Nitin Kaistha

Indian Institute of Technology Kanpur, India

Ethyl acetate and methanol are commonly used solvents in the pharmaceutical, textile, dye, fine organic, and paint industries [1], [2]. The waste solvent from these industries often contains EtAc and MeOH in water in widely varying proportions. Sustainability concerns, reflected in increasingly stringent waste discharge regulations, now dictate complete recovery, recycle and reuse of the organic species from the waste solvent. For the EtAc-MeOH-water waste solvent, simple distillation cannot be used due to the presence of a homogeneous EtAc-MeOH azeotrope and a heterogeneous EtAc-water azeotrope. Synthesizing a feasible flowsheet structure that separates a given waste solvent mixture into its nearly pure constituents (EtAc, MeOH and water) then becomes challenging. The flowsheet structure, among other things, depends on the waste solvent composition. A flowsheet that is feasible for a dilute waste solvent mixture may become infeasible for a more concentrated waste solvent. Given that the flowsheet structure, once chosen, cannot be changed, and that wide variability in the waste solvent composition is expected, in this work we propose a “robust” flowsheet structure with guaranteed feasibility, regardless of the waste solvent composition. Such a “robust” flowsheet structure has the potential to significantly improve the economic viability of a waste solvent processing plant, as the same equipment can be used to separate the wide range of received waste solvents.

The key to the robust flowsheet design is the use of a liquid-liquid extractor (LLX) with recycled water as the solvent. For a sufficiently high water rate to the LLX, the raffinate composition is close to the EtAc-water edge (nearly MeOH free), on the liquid-liquid envelope and in the EtAc-rich distillation region. The raffinate is distilled to obtain a pure EtAc bottoms product, and the overhead vapour is condensed and decanted, with the organic layer refluxed into the column. The aqueous distillate is mixed with the MeOH-rich extract and stripped to obtain an EtAc-free MeOH-water bottoms. The overhead vapour is condensed and recycled back to the LLX. The MeOH-water bottoms is further distilled to obtain a pure MeOH distillate and pure water bottoms. A fraction of the bottoms is recirculated to the LLX as the solvent feed. Converged designs are obtained for an equimolar waste solvent composition as well as EtAc-rich, MeOH-rich and water-rich compositions to demonstrate the robustness of the flowsheet structure to large changes in the waste solvent composition.

References

[1] C. S. Slater, M. J. Savelski, W. A. Carole and D. J. Constable, "Solvent use and waste issues," Green Chemistry in the Pharmaceutical Industry, pp. 49-82, 2010.

[2] T. S. a. L. Z. a. C. H. a. Z. H. a. L. W. a. F. Y. a. H. X. a. S. Longyan, "Method for separating and recovering ethyl acetate and methanol". China Patent CN102746147B, May 2014.



Integrating offshore wind energy into the optimal deployment of a hydrogen supply chain: a case study in Occitanie

Melissa Cherrouk1,2, Catherine Azzaro-Pantel1, Marie Robert2, Florian Dupriez Robin2

1France Energies Marines / Laboratoire de Génie Chimique, France; 2France Énergies Marines, Technopôle Brest-Iroise, 525 Avenue Alexis de Rochon, 29280, Plouzané, France

The urgent need to mitigate climate change and reduce dependence on fossil fuels has led to the exploration of alternative energy solutions, with green hydrogen emerging as a key player in the global energy transition. Thus, the aim of this study is to assess the feasibility and competitiveness of producing hydrogen at sea using offshore wind energy, evaluating both economic and environmental perspectives.

Offshore wind energy offers several advantages for hydrogen production. These include access to water for electrolysis, potentially lower export costs for hydrogen compared to electricity, and the ability to smooth the variability of wind energy through hydrogen storage systems. Proper storage plays a crucial role in addressing the intermittency of wind power, making the hydrogen output more stable. This positions storage not only as an advantage but also as a key step for the successful coupling of offshore wind with hydrogen production. However, challenges remain, particularly regarding the capacity and cost of such storage solutions, alongside the high capital expenditures (CAPEX) and operational costs (OPEX) required for offshore systems.

This research explores the potential of offshore wind farms (OWFs) to contribute to hydrogen production by extending a techno-economic model based on Mixed-Integer Linear Programming (MILP). The model optimizes the number and type of production units, storage locations, and distribution methods, employing an optimization approach to determine the best hydrogen flows between regional hubs. The case study focuses on the Occitanie region in southern France, where hydrogen could be produced offshore from a 30 MW floating wind farm with three turbines located 30 km from the coast and transported via pipelines. Other energy sources may complement offshore wind energy to meet hydrogen supply demands. The study evaluates two scenarios: minimizing hydrogen production costs and minimizing greenhouse gas emissions over a 30-year period, divided into six five-year phases.
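A toy version of such a model, written with the open-source PuLP package: integer decisions on the number of electrolyser modules per hub, continuous inter-hub flows, and a cost-minimisation objective subject to demand satisfaction. All names and numbers are placeholders, and the formulation is far simpler than the multi-period regional model described above.

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

hubs = ["hub_A", "hub_B"]
demand = {"hub_A": 400.0, "hub_B": 250.0}        # kg H2/day (placeholder)
capex = {"hub_A": 900.0, "hub_B": 950.0}         # cost per installed module (placeholder)
cap_per_module = 150.0                            # kg H2/day per electrolyser module
transport_cost = 2.0                              # cost per kg shipped between hubs

prob = LpProblem("h2_supply_chain_sketch", LpMinimize)
n_mod = {h: LpVariable(f"modules_{h}", lowBound=0, cat="Integer") for h in hubs}
flow = {(i, j): LpVariable(f"flow_{i}_{j}", lowBound=0) for i in hubs for j in hubs if i != j}

# Objective: installation cost plus transport cost
prob += lpSum(capex[h] * n_mod[h] for h in hubs) + lpSum(transport_cost * f for f in flow.values())

# Each hub must cover its demand from local production and net imports
for h in hubs:
    produced = cap_per_module * n_mod[h]
    inflow = lpSum(flow[(i, h)] for i in hubs if i != h)
    outflow = lpSum(flow[(h, j)] for j in hubs if j != h)
    prob += produced + inflow - outflow >= demand[h]

prob.solve()
print({h: n_mod[h].value() for h in hubs}, value(prob.objective))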

Initial findings show that, from an economic standpoint, the Levelized Cost of Hydrogen (LCOH) from offshore wind remains higher than that of traditional hydrogen production methods. However, the Global Warming Potential (GWP) of hydrogen produced from offshore wind ranks it among the most environmentally friendly options. Despite this, the volume of hydrogen produced in the current configuration does not meet the demand required for significant impact in Occitanie's hydrogen market, which highlights the need to test higher power levels for the OWF and potential hybridization with other renewable energy sources.

The results underline the importance of future multi-objective optimization methods to better balance the economic and environmental trade-offs and make offshore wind a more competitive option for hydrogen production.

Reference:
Sofía De-León Almaraz, Catherine Azzaro-Pantel, Ludovic Montastruc, Marianne Boix, Deployment of a hydrogen supply chain by multi-objective/multi-period optimisation at regional and national scales, Chemical Engineering Research and Design, Volume 104, 2015, Pages 11-31, https://doi.org/10.1016/j.cherd.2015.07.005.



Robust Techno-economic Analysis, Life Cycle Assessment, and Quality-by-Design of Three Alternative Continuous Pharmaceutical Tablet Manufacturing Processes

Shang Gao, Brahim Benyahia

Loughborough University, United Kingdom

This study presents a comprehensive comparison of three key downstream tableting manufacturing methods for pharmaceuticals: i) Dry Granulation (DG) through roller compaction, ii) Direct Compaction (DC), and iii) Wet Granulation (WG). First, integrated mathematical models of these downstream (drug product) processes were developed using gPROMS Formulated Products, along with data from the literature and our recent experimental work. The process models were designed and simulated to reliably capture the impact of different design options, process parameters, and material attributes. Uncertainty analysis was conducted using global sensitivity analysis to identify the critical process parameters (CPPs) and critical material attributes (CMAs) that most significantly influence the quality and performance of the final pharmaceutical tablets. These are captured by the critical quality attributes (CQAs), which include tablet hardness, dissolution rate, impurities/residual solvents, and content uniformity—factors crucial for ensuring product safety and efficacy. Based on the identified CPPs and CMAs, combined design spaces that guarantee the attainment of the targeted CQAs were identified and compared. Additionally, techno-economic analyses were conducted alongside life cycle assessments (LCA) based on the process simulation results and inventory data. The LCA provided an in-depth evaluation of the environmental impacts associated with each manufacturing method, considering aspects such as energy consumption, raw material usage, emissions, and waste generation across a cradle-to-gate approach. By integrating CQAs within the LCA framework, this study offers a holistic analysis that captures both the environmental sustainability and product quality implications of the three tableting processes. The findings aim to guide the selection of more sustainable and efficient manufacturing practices in the pharmaceutical industry, balancing trade-offs between environmental impact and product quality.
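The global sensitivity step can be illustrated with the SALib package and a deliberately simple stand-in response surface; the input names, bounds and the algebraic hardness function below are invented for illustration and do not come from the gPROMS flowsheet models.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical tablet-hardness surrogate with three candidate CPPs/CMAs
problem = {
    "num_vars": 3,
    "names": ["compaction_pressure", "lubricant_fraction", "granule_moisture"],
    "bounds": [[50, 300], [0.25, 2.0], [0.5, 5.0]],
}

def hardness(x):
    p, lub, moist = x
    return 0.04 * p - 8.0 * lub - 1.5 * moist + 0.01 * p * lub   # illustrative response only

X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(hardness, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))    # first-order Sobol indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))    # total-order Sobol indices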

Keywords: Dry Granulation, Direct Compaction, Wet Granulation, Life Cycle Assessment (LCA), Techno-economic Analysis (TEA), Quality-by-Design (QbD)

Acknowledgements

The authors acknowledge funding from the UK Engineering and Physical Sciences Research Council (EPSRC), for Made Smarter Innovation – Digital Medicines Manufacturing Research Centre (DM2), EP/V062077/1.



Systematic Model Builder, Model-Based Design of Experiments, and Design Space Identification for A Multistep Pharmaceutical Process

Xuming Yuan, Ashish Yewale, Brahim Benyahia

Loughborough University, United Kingdom

Mathematical models of different processing units are usually established and optimized individually, even when these processes are meant to be combined sequentially in the real world, particularly in continuously operating plants. Although this traditional approach may help reduce complexity, it may deliver suboptimal solutions and/or overlook the interactions between the unit operations. Most importantly, it can dramatically increase the development time, waste, and experimental costs associated with raw materials, solvents, cleaning, etc. This study aims to develop a systematic approach to establish and optimize integrated mathematical models of interactive multistep processes. The methodology starts by suggesting various model candidates for the different unit operations based on prior knowledge. These candidates are then combined, giving several candidate integrated models for the multistep process. A model discrimination based on structural identifiability analysis and model prediction performance (Yuan and Benyahia, 2024) reveals the best integrated model for the multistep process. Afterwards, the refinement of the model, consisting of estimability analysis (Bouchkira and Benyahia, 2023) and model-based design of experiments (MBDoE), is conducted to give the optimal experimental design that guarantees the most information-rich data. With the acquisition of the new experimental data, the reliability and robustness of the multistep mathematical model are dramatically enhanced. The optimized model is subsequently used to identify the design space of the multistep process, which delivers the optimal operating ranges of the critical process parameters (CPPs) that satisfy the targeted critical quality attributes (CQAs). A blending-tableting process of paracetamol is selected as a case study in this work. The methodology applies prior knowledge from Kushner and Moore (2010), Nassar et al. (2021) and Puckhaber et al. (2022) to establish model candidates for this two-unit-operation process, where the effects of lubrication in the blender, as well as of the composition and porosity of the tablet, on the tablet tensile strength are taken into consideration. Model discrimination and model refinement are then performed to identify and improve the optimal integrated model for this two-step process, and the enhanced model is applied for design space identification under specified CQA targets. The results confirm the effectiveness of the proposed methodology and demonstrate its potential in achieving higher optimality for processes involving multiple unit operations.



The role of long-term storage in municipal solid waste treatment systems: Multi-objective resources integration

Julie Dutoit1,2, Jaroslav Hemrle2, François Maréchal1

1École Polytechnique Fédérale de Lausanne (EPFL), Industrial Process Energy Systems Engineering (IPESE), Sion, 1951, Switzerland; 2Kanadevia Inova AG, Zürich, 8005, Switzerland

Estimations for the horizon 2050 predict a significant increase in municipal solid waste (MSW) generation in every world region, whereas important discrepancies remain between the net-zero decarbonization targets of the Paris Agreement and the environmental performance of current waste treatment technologies. This creates an important area of research and development to improve the solutions, especially with regard to circular economy goals for material recovery and transitioning energy supply systems. As shown for plastic chemical recycling by Martínez-Narro et al., 2024, promising technologies may include energy-intensive steps which need to be integrated with renewable energy to be environmentally viable. With growing intra-daily and seasonal variations of power availability due to the increasing share of renewable production, Demand Side Response (DSR) measures play a crucial role alongside energy storage systems in supporting power grid stability. In current research, DSR applicability to industrial process models is under-represented relative to the residential sector, with little attention paid to control strategies or input predictions in system analysis (Bosmann and Eser, 2016, Kirchem et al., 2020).

This contribution presents a framework to evaluate the potential of waste treatment systems to shift energy loads for better integration into the energy systems of industrial clusters or residential areas. The waste treatment system scenarios are modeled, simulated and optimized in a hybrid OpenModelica/Python framework, described by Dutoit et al., 2024. In particular, pinch analysis (Linnhoff and Hindmarsh, 1983) is used for the heat integration assessment. The multi-objective approach relies on key performance indicators covering process, economic and environmental impact aspects.
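For readers unfamiliar with the heat-integration step mentioned above, the snippet below runs the classical problem-table (heat-cascade) algorithm behind pinch analysis on a small invented stream table; the stream data and minimum approach temperature are placeholders, not the waste-to-energy case study.

import numpy as np

# (supply T, target T, heat-capacity flowrate CP in kW/K), placeholder streams
hot_streams = [(250.0, 40.0, 0.15), (200.0, 80.0, 0.25)]
cold_streams = [(20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]
dTmin = 10.0

# Shift hot streams down and cold streams up by dTmin/2
shifted = [(ts - dTmin / 2, tt - dTmin / 2, cp, "hot") for ts, tt, cp in hot_streams]
shifted += [(ts + dTmin / 2, tt + dTmin / 2, cp, "cold") for ts, tt, cp in cold_streams]

bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
deficits = []
for T_hi, T_lo in zip(bounds[:-1], bounds[1:]):
    net_cp = 0.0
    for ts, tt, cp, kind in shifted:
        lo, hi = min(ts, tt), max(ts, tt)
        if lo <= T_lo and hi >= T_hi:                  # stream spans this temperature interval
            net_cp += cp if kind == "cold" else -cp
    deficits.append(net_cp * (T_hi - T_lo))            # net heat deficit of the interval

cascade = np.concatenate([[0.0], np.cumsum(deficits)])
q_hot_min = cascade.max()                               # minimum hot utility, kW
q_cold_min = q_hot_min - cascade[-1]                    # minimum cold utility, kW
pinch_T = bounds[int(cascade.argmax())]                 # shifted pinch temperature
print(q_hot_min, q_cold_min, pinch_T)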

For the case study application, the core technologies included are waste sorting, waste incineration and post-combustion amine-based carbon capture, which are integrated with heat and power utilities to satisfy varying external demand from the power grid and the district heating network. The heterogeneous modeling of the waste flows allows several design options to be defined for the material recovery facility for waste plastic fraction sorting, and scenarios are simulated to evaluate the system performance under the described control strategies. Results provide insights for optimal system operation and integration from an industrial perspective.

References

Bosmann, T., & Eser, E. J. (2016). Model-based assessment of demand-response measures – A comprehensive literature review. Renewable and Sustainable Energy Reviews, 57, 1637–1656. https://doi.org/10.1016/j.rser.2015.12.031.

Dutoit, J., Hemrle, J., Maréchal, F. (2024). Supporting Life-Cycle Impact Assessment Transparency in Waste Treatment Systems Simulation: A Decision-Support Methodology. In preparation.

Kirchem, D., Lynch, M. A., Bertsch, V., & Casey, E. (2020). Modelling demand response with process models and energy systems models: Potential applications for wastewater treatment within the energy-water nexus. Applied Energy, 260, 114321. https://doi.org/10.1016/j.apenergy.2019.114321

Linnhoff, B., & Hindmarsh, E. (1983). The pinch design method for heat exchanger networks. Chemical Engineering Science, 38(5), 745–763. https://doi.org/10.1016/0009-2509(83)80185-7

Martínez-Narro, G., Hassan, S., N. Phan, A. (2024). Chemical recycling of plastic waste for sustainable polymer manufacturing – A critical review. Journal of Environmental Chemical Engineering, 12, 112323. https://doi.org/10.1016/j.jece.2024.112323.



A Comparative Study of Feature Importance in Process Data: Neural Networks vs. Human Visual Attention

Rohit Suresh1, Babji Srinivasan1,3, Rajagopalan Srinivasan2,3

1Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics, Risk and Technology Indian Institute of Technology Madras, Chennai 600036, India

Artificial Intelligence (AI) and automation technologies have revolutionized the way many sectors operate. Specifically, in process industries and power plants there is considerable scope for enhancing production and efficiency with AI through predictive maintenance, condition monitoring, inspection, and quality control. However, despite these advancements, human operators remain the final decision-makers in such major safety-critical systems. Fostering collaboration between human operators and AI systems is the inevitable next step forward. A primary step towards achieving this goal is to capture the representation of information acquired by both human operators and AI-based systems in a mutually comprehensible way. This would aid in understanding the rationale behind their decisions. AI-based systems with deep networks and complex architectures often achieve the best results. However, they are often disregarded by human operators due to a lack of transparency. While eXplainable AI (XAI) is an active research area that attempts to comprehend deep networks, understanding the human rationale behind decision-making is largely overlooked.

Several popular XAI techniques, such as local interpretable model-agnostic explanations (LIME) and Gradient-Weighted Class Activation Mapping (Grad-CAM), provide explanations via feature attribution. In the context of process monitoring, Bhakte et al. (2022) used a Shapley value framework with integrated gradients to estimate the marginal contribution of process variables in fault classification. One popular way to evaluate the explanations provided by various XAI algorithms is through human eye gaze tracking. Human participants' visual attention over the stimulus is estimated using eye tracking and compared with the results of XAI.

Eye tracking also has the potential to characterise the mental models of control room operators during different experimental scenarios (Shahab et al., 2022). In that work, participants acting as control room operators were given tasks of disturbance rejection based on alarm signals and process variable trends in the HMI. Extending that work, we attempt to explain the human operator's rationale behind decision-making through eye tracking. Participants' dynamic attention allocation over the stimulus is objectively captured using various eye gaze metrics, which are further used to extract the major causal factors that influenced the participants' decisions. The effectiveness of the method is demonstrated with a case study. We conduct eye tracking experiments where participants are required to identify the fault in the process. During the experiment, images of the trend panel, with trajectories of all major process variables captured at a specific instant, are shown to the participants. The process variable responsible for the fault is objectively identified using operator knowledge. Our future work will focus on integrating this human rationale with XAI, which will pave the way for human-machine teaming.

References:
Bhakte, A., Pakkiriswamy, V. and Srinivasan, R., 2022. An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks. Chemical Engineering Science, 250, p.117373.
Shahab, M.A., Iqbal, M.U., Srinivasan, B. and Srinivasan, R., 2022. HMM-based models of control room operator's cognition during process abnormalities. 1. Formalism and model identification. Journal of Loss Prevention in the Process Industries, 76, p.104748.



Parameter Estimation and Model Comparison for Mixed Substrate Biomass Fermentation

Tom Vinestock, Miao Guo

King's College London, United Kingdom

Single cell protein (SCP) fermentation is an effective way of transforming carbohydrate-rich substrates into high-protein foodstuffs and is more sustainable than conventional animal-based protein production [1]. However, whereas cows and other ruminants can be fed agricultural residues, such as rice straw, SCP fermentations generally depend on high-purity single-substrate feedstocks as a carbon source, such as starch-derived glucose, which are expensive and compete directly with food crops.

Consequently, there is interest in switching to feedstocks derived from cheap agricultural lignocellulosic residues. However, treatment of such lignocellulosic residues produces a mixed feedstock, typically containing both glucose and xylose [2]. Accurate models of mixed-substrate growth are needed for fermentation decision-makers to understand the trade-offs associated with transitioning to the more sustainable lignocellulosic feedstocks. Such models are also a prerequisite for optimizing the operation and control of mixed-substrate fermentations.

In this work, recently published biomass and substrate concentration time-series data for glucose-xylose batch fermentation of F. venenatum [3] are used to estimate parameters for different unstructured models of diauxic growth. A Bayesian optimisation methodology is employed to identify the best parameters in each case. A novel model for diauxic growth with substrate cross-inhibition, mediated by variable enzyme production, is proposed, based on Nakamura et al. [4], but simplified to reduce the number of states and parameters, and hence improve identifiability and reduce overfitting. This model is shown to have a lower error on both the calibration and validation datasets than the model in Banks et al. [3], itself based on work by Vega-Ramon et al. [5], which models substrate cross-inhibition effects directly. The performance of the model proposed by Kompala and Ramkrishna [6], based on growth-optimised cellular resource allocation, is also evaluated.
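A compact sketch of the parameter-estimation task for an unstructured diauxic model in which glucose represses xylose uptake; the batch "data", parameter bounds and the use of SciPy's differential evolution in place of the Bayesian optimisation described above are all assumptions for illustration.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Placeholder batch data: time (h), biomass, glucose, xylose (g/L), illustrative only
t_obs = np.array([0, 12, 24, 36, 48, 60])
y_obs = np.array([[0.5, 20.0, 10.0],
                  [1.8, 14.0,  9.8],
                  [4.5,  4.0,  9.0],
                  [6.0,  0.1,  7.0],
                  [7.5,  0.0,  3.0],
                  [8.3,  0.0,  0.2]])

def rhs(t, y, mu1, mu2, K1, K2, Y1, Y2, Ki):
    X, S1, S2 = y
    r1 = mu1 * S1 / (K1 + S1)                          # growth on glucose
    r2 = mu2 * S2 / (K2 + S2) * Ki / (Ki + S1)         # glucose represses xylose uptake
    return [(r1 + r2) * X, -r1 * X / Y1, -r2 * X / Y2]

def sse(p):
    sol = solve_ivp(rhs, (0, 60), y_obs[0], t_eval=t_obs, args=tuple(p))
    if not sol.success or sol.y.shape[1] != len(t_obs):
        return 1e6
    return float(np.sum((sol.y.T - y_obs) ** 2))

bounds = [(0.01, 1), (0.01, 1), (0.1, 10), (0.1, 10), (0.1, 1), (0.1, 1), (0.01, 5)]
res = differential_evolution(sse, bounds, seed=0, maxiter=30)
print(res.x, res.fun)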

This work could lead to improved modelling of mixed substrate fermentation, and therefore help address the technical barriers to wider-scale use of lignocellulose-derived feedstocks in fermentation. Future research could test the generalisability of the diauxic growth models considered using data from a continuous or fed-batch mixed substrate fermentation.

References

[1] Good Food Institute, “Fermentation: State of the industry report,” 2021.

[2] L. Qin, L. Liu, A.P. Zeng, and D. Wei, “From low-cost substrates to single cell oils synthesized by oleaginous yeasts,” Bioresource Technology, Dec. 2017.

[3] M. Banks, M. Taylor, and M. Guo, “High throughput parameter estimation and uncertainty analysis applied to the production of mycoprotein from synthetic lignocellulosic hydrolysates,” 2024.

[4] Y. Nakamura, T. Sawada, F. Kobayashi, M. Ohnaga, and M. Suzuki, “Stability analysis of continuous culture in diauxic growth,” Journal of Fermentation and Bioengineering, 1996.

[5] F. Vega-Ramon, X. Zhu, T. R. Savage, P. Petsagkourakis, K. Jing, and D. Zhang, “Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty,” Biotechnology and Bioengineering, Dec. 2021.

[6] D. S. Kompala, D. Ramkrishna, N. B. Jansen, and G. T. Tsao, “Investigation of bacterial growth on mixed substrates: Experimental evaluation of cybernetic models,” Biotechnology and Bioengineering, July 1986.



Identification of Suitable Operational Conditions and Dimensions for Supersonic Water Separation in Exhaust Gases from Offshore Turbines: A Case Study

Jonatas de Oliveira Souza Cavalcante1, Marcelo da Costa Amaral2, Fernando Luiz Pellegrini Pessoa1,3

1SENAI CIMATEC University Center, Brazil; 2Leopoldo Américo Miguez de Mello Research Center (CENPES); 3Federal University of Rio de Janeiro (UFRJ)

Due to space, weight, and energy efficiency constraints in offshore environments, the efficient removal of water from turbine exhaust gases is a crucial step to optimize operational performance in gas treatment processes. In this context, replacing conventional methods, such as molecular sieves, with supersonic separators emerges as a promising alternative. This work aims to determine the optimal operational conditions and dimensions for supersonic water separation from turbine exhaust gases on offshore platforms. The simulation was conducted using a unit operation extension in Aspen HYSYS, based on the compositions of exhaust gases from Brazilian pre-salt wells. Parameters such as operating conditions, separator dimensions, and shock Mach number were optimized to maximize process efficiency and minimize separator size. The results indicated the near-complete removal of water, demonstrating that supersonic separation technology, in addition to being compact, offers a viable and efficient alternative for water removal from exhaust gases, particularly in space-constrained environments.



On optimal hydrogen production pathway selection using the SECA multi-criteria decision-making method

Caroline Kaitano, Thokozani Majozi

University of the Witwatersrand, South Africa

The increasing global population has intensified the demand for energy. Hydrogen offers a potential revolution in energy systems worldwide. Considering its numerous uses, research interest has grown in seeking sustainable production methods. However, hydrogen production must satisfy three factors, i.e. energy security, energy equity, and environmental sustainability, referred to as the energy trilemma. Therefore, this study investigates the sustainability of hydrogen production pathways through the use of a Multi-Criteria Decision-Making model. In particular, a modified Simultaneous Evaluation of Criteria and Alternatives (SECA) model was employed for the prioritization of 19 options for hydrogen production. This model simultaneously determines the overall performance scores of the 19 options and the objective weights for the energy trilemma in a South African context. The results obtained from this study showed that environmental sustainability has the highest objective weight, with a value of 0.37, followed by energy security with a value of 0.32 and energy equity with the lowest at 0.31. Of the 19 options selected, steam reforming of methane with carbon capture and storage was found to have the highest overall performance score, considering the trade-offs in the energy trilemma. This was followed by steam reforming of methane without carbon capture and storage and the autothermal reforming of methane with carbon capture and storage. The results obtained in this study will potentially pave the way for optimally producing hydrogen from different feedstocks while considering the energy trilemma, and serve as a basis for further research in sustainable process engineering.
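A simplified weighted-sum illustration using the objective weights reported above; SECA itself determines criteria weights and alternative scores simultaneously by solving an optimisation problem, so this snippet only shows how final scores and weights combine, and the normalized performance values of the three pathways are placeholders chosen to reproduce the reported ranking.

import numpy as np

# Normalized performance of three pathways (rows) against the energy-trilemma criteria
# (columns: security, equity, environment); placeholder values.
alternatives = ["SMR + CCS", "SMR", "ATR + CCS"]
scores = np.array([[0.85, 0.80, 0.70],
                   [0.90, 0.85, 0.45],
                   [0.75, 0.70, 0.65]])

weights = np.array([0.32, 0.31, 0.37])       # objective weights reported in the study
overall = scores @ weights
for name, s in sorted(zip(alternatives, overall), key=lambda pair: -pair[1]):
    print(f"{name}: {s:.3f}")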



On the role of Artificial Intelligence in Feature oriented Multi-Criteria Decision Analysis

Heyuan Liu1,2, Yi Zhao1, Francois Marechal1

1Industrial Process and Energy Systems Engineering (IPESE), Ecole Polytechnique Fédérale de Lausanne, Sion, Switzerland; 2École Polytechnique, France

In industrial applications, balancing economic and environmental goals is crucial amidst challenges like climate change. To address conflicting objectives, tools like multi-objective optimization (MOO) and multi-criteria decision analysis (MCDA) are utilized. MOO generates a range of viable solutions, while MCDA helps select the most suitable option considering factors like profitability, environmental impact, safety, and efficiency. These tools aid in making informed decisions amidst complex trade-offs and uncertainties.

In this study, we propose a novel approach for MCDA using advanced machine learning techniques and apply the method to analyze decarbonization solutions for a typical European refinery. First, a hybrid dimensionality reduction method combining AutoEncoders and Principal Component Analysis (PCA) is developed to reduce high-dimensional data while retaining key features. The effectiveness of the dimensionality reduction is demonstrated by clustering the reduced data and mapping the clusters back to the original high-dimensional space. The high clustering quality scores indicate that the spatial distribution characteristics were well preserved. Furthermore, geometric analysis techniques, such as Intrinsic Shape Signatures (ISS), Harris Corner Detection, and Mesh Saliency, further refine the identification of typical configurations. Specifically, 15 typical solutions identified by the ISS method are used as baselines to represent distinct regions in the solution space. These solutions serve as a reference set for further comparison.

Building upon this reference set, we utilize Large Language Models (LLMs) to further enhance the decision-making process. First, LLMs are employed to generate and refine ranking criteria for evaluating the identified solutions. We employ an LLM with an iterative self-update mechanism to dynamically adjust weighting strategies, enhancing decision-making capabilities in complex environments. To address the input size limitations encountered in the problem, we apply heuristic design approaches that effectively manage and optimize the information. Additionally, effective prompt engineering techniques are integrated to improve the model's reasoning and adaptability.

In addition to ranking, LLM technology provides comprehensive and interpretable explanations for the selected solutions. This includes breaking down the criteria used for each decision, clarifying trade-offs between competing objectives, and offering insights into how different configurations perform across various key performance indicators. These explanations help stakeholders better understand the rationale behind the chosen solutions, enabling more informed decision-making in practical applications.



Optimizing CO2 Utilization in Reverse Water-Gas Shift Membrane Reactors with Parametric PINNs

Zahir Aghayev1,2, Zhaofeng Li3, Michael Patrascu3, Burcu Beykal1,2

1Department of Chemical & Biomolecular Engineering, University of Connecticut, Storrs, CT 06269, USA; 2Center for Clean Energy Engineering, University of Connecticut, Storrs, CT 06269, USA; 3The Wolfson Department of Chemical Engineering, Technion – Israel Institute of Technology, Haifa 3200003, Israel

With atmospheric CO₂ levels reaching a record 426.91 ppm in June 2024, the urgency for innovative carbon capture and utilization (CCU) strategies to reduce emissions and repurpose CO₂ into valuable products has become even more critical [1]. One promising solution is the reverse water-gas shift (RWGS) reaction, which transforms CO₂ and hydrogen—produced through renewable energy-powered electrolysis—into carbon monoxide, a key precursor for synthesizing fuels and chemicals. By integrating membrane reactors that selectively remove water vapor, the process shifts the equilibrium forward, resulting in higher CO₂ conversion and CO yield at lower temperatures, in accordance with Le Chatelier's principle. However, modeling this intensified system remains challenging due to the complex, nonlinear interaction between reaction kinetics and membrane transport.

In this study, we developed a physics-informed neural network (PINN) model that integrates first-principles physics with machine learning to model the RWGS process within a membrane reactor. This approach embeds governing physical laws into the network's architecture, reducing the computational burden typically associated with solving highly nonlinear ordinary differential equations (ODEs), while maintaining both accuracy and interpretability [2]. Our model demonstrated robust predictive performance, achieving an R² value exceeding 0.95, successfully capturing flow rate changes and reaction dynamics along the reactor length. Using this validated PINN model, we performed data-driven optimization, identifying operational conditions that maximized CO₂ conversion efficiency and reaction yield [3-6]. This hybrid modeling approach enhances prediction accuracy and optimizes the reactor conditions, offering a scalable solution for industries integrating renewable energy into chemical production and reducing carbon emissions. Our findings demonstrate the potential of advanced modeling to intensify CO₂ utilization processes, with significant implications for sustainable chemical production and energy systems.

References

  1. NOAA Global Monitoring Laboratory. (2024). Trends in atmospheric carbon dioxide [online]. Available at: https://gml.noaa.gov/ccgg/trends/ [Accessed 10/13/2024].
  2. Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378, pp.686-707.
  3. Beykal, B. and Pistikopoulos, E.N., 2024. Data-driven optimization algorithms. In Artificial Intelligence in Manufacturing (pp. 135-180). Academic Press.
  4. Boukouvala, F. and Floudas, C.A., 2017. ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems. Optimization Letters, 11, pp.895-913.
  5. Beykal, B., Aghayev, Z., Onel, O., Onel, M. and Pistikopoulos, E.N., 2022. Data-driven Stochastic Optimization of Numerically Infeasible Differential Algebraic Equations: An Application to the Steam Cracking Process. In Computer Aided Chemical Engineering (Vol. 49, pp. 1579-1584). Elsevier.
  6. Aghayev, Z., Voulanas, D., Gildin, E., Beykal, B., 2024. Surrogate-Assisted Optimization of Highly Constrained Oil Recovery Processes Using Classification-Based Constraint Modeling. Industrial & Engineering Chemistry Research (submitted).


Modeling, simulation and optimization of a carbon capture process through a fluidized TSA column

Eduardo dos Santos Funcia1, Yuri Souza Beleli1, Enrique Vilarrasa Garcia2, Marcelo Martins Seckler1, José Luís de Paiva1, Galo AC Le Roux1

1Polytechnic School of the University of Sao Paulo, Brazil; 2Federal University of Ceará, Brazil

Carbon capture technologies have recently emerged as a way to mitigate climate change and global warming by removing carbon dioxide from the atmosphere. Furthermore, by removing carbon dioxide from biomass-originated flue gases, an energy process with a negative carbon footprint can be achieved. Among carbon capture processes, fluidized temperature swing adsorption (TSA) columns are a promising low-pressure alternative, in which carbon dioxide flowing upwards is exothermally adsorbed onto a fluidized solid sorbent flowing downwards and later endothermically desorbed at higher temperatures, regenerating the sorbent for recirculation. Although an interesting venture, the TSA process has been developed only at small scale and remains to be scaled up to become an industrial reality.

This work aims to model, simulate and optimize a TSA multi-stage equilibrium system in order to obtain a conceptual design for future process scale-up. A mathematical model was developed for adsorption using an approach that makes it easy to extend the model to various configurations. The model was extended to include multiple stages, each with a heat exchanger, and was also coupled to the desorption operation. Each column, adsorption and desorption, includes one external heat exchanger at the bottom for a preliminary heat load of the inward gas flow. The system also includes a heat exchanger in the recirculating solid sorbent flow, before the regenerated solid enters the top of the adsorption column. The model is based on molar and energy balances, coupled to pressure drops in a fluidized bed designed to operate close to the minimum fluidization velocity (calculated through semi-empirical correlations), and to the thermodynamics of adsorption equilibrium of a carbon dioxide-nitrogen mixture on solid sorbents. The Toth equilibrium isotherm was used with parameters obtained experimentally in a previous work (which suggested that the heterogeneity parameter for nitrogen should be fixed at unity).

The complete fluidized TSA adsorption/desorption process was optimized to minimize energy, adsorbent and operating costs, as well as equipment investment and installation, considering equilibrium in each fluidized bed stage. The optimal configuration for heat exchangers is determined and a unit cost for carbon dioxide capture was estimated. It was found that 2 stages are sufficient for effective removal of carbon dioxide in the adsorption column, while at least 5 stages are necessary to meet the captured carbon specification of 95% molar purity. It was also possible to conclude that not all stages in the columns needed heat exchangers, with some heat loads being set to 0 during the optimization. The pressure drop for each stage was estimated as smaller than 0.072 bar for a bed 1 m high, and the air velocity was 40-45 cm/s (the minimum fluidization velocity was 10-11 cm/s), with low particle Reynolds numbers of about 17, which indicates the system readily fluidizes. These findings show that the methodology developed here is useful for guiding the conceptual design of fluidized TSA processes for carbon capture.
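The equilibrium description referred to above is the Toth isotherm, q = q_max * b * P / (1 + (b*P)**t)**(1/t); a small sketch with placeholder parameters is given below. The study regressed its own parameters (fixing the heterogeneity exponent t at unity for nitrogen), and the temperature dependence of b is omitted here for brevity.

def toth_loading(P, q_max, b, t):
    """Toth isotherm: adsorbed amount (mol/kg) at partial pressure P (bar)."""
    return q_max * b * P / (1.0 + (b * P) ** t) ** (1.0 / t)

# Placeholder parameters for CO2 and N2 on a generic solid sorbent, illustrative only
P_CO2, P_N2 = 0.12, 0.78                      # bar, typical flue-gas partial pressures
q_CO2 = toth_loading(P_CO2, q_max=3.0, b=25.0, t=0.6)
q_N2 = toth_loading(P_N2, q_max=2.0, b=0.3, t=1.0)
print(f"CO2 loading: {q_CO2:.2f} mol/kg, N2 loading: {q_N2:.2f} mol/kg")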



Unlocking Process Dynamics: Interpretable PDE Solutions via Symbolic Regression

Benjamin G. Cohen, Burcu Beykal, George M. Bollas

University of Connecticut, USA

Physics-informed symbolic regression (PISR) offers an innovative approach to automatically learn explicit, analytical approximate solutions to partial differential equations (PDEs). Chemical processes often involve dynamics that PDEs can effectively capture, providing valuable insights for engineers and scientists to improve process design and control. Traditionally, solving PDEs requires expertise in analytical methods or costly numerical schemes. However, with the advent of AI/ML, tools like physics-informed neural networks (PINNs) have emerged, learning solutions to PDEs by constraining neural networks to satisfy the differential equations and boundary information. Applying PINNs in safety-critical systems is challenging due to the large number of neural network parameters and their black-box nature.

To address these challenges, we explore the effect of replacing the neural network in PINNs with a symbolic regressor to create PISR. Guided by a carefully selected information-theoretic loss function that balances model agreement with differential equations and boundary information against identifiability, PISR can learn approximate solutions to PDEs that are symbolic rather than neural network approximations. This approach yields concise, clear analytical approximate solutions that balance model complexity and fit quality. Using an open-source symbolic regression package in Julia, we demonstrate PISR’s efficacy by learning approximate solutions to several PDEs common in process engineering and compare the learned representations to those obtained using PINNs. The PISR models, when compared to the PINN models, are straightforward, easy to understand, and contain very few parameters, making them ideal for sensitivity analysis and ensuring robust process design and control.
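The core idea, scoring a closed-form candidate by how well it satisfies the governing equation and how complex it is, can be shown in a few lines of SymPy; the candidate expression and the 1-D heat equation below are illustrative and unrelated to the specific benchmark PDEs or the Julia package used in the study.

import sympy as sp

x, t, a = sp.symbols("x t a", positive=True)

# A closed-form candidate as a symbolic regressor might propose it (hypothetical)
u = sp.exp(-a * sp.pi**2 * t) * sp.sin(sp.pi * x)

# Residual of the 1-D heat equation u_t = a * u_xx
residual = sp.simplify(sp.diff(u, t) - a * sp.diff(u, x, 2))
print(residual)                                   # 0: this candidate satisfies the PDE exactly

# A physics-informed fitness could combine a sampled residual norm with a complexity penalty,
# here crudely measured as the number of nodes in the expression tree.
complexity = sum(1 for _ in sp.preorder_traversal(u))
print(complexity)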



Eco-Designing Pharmaceutical Supply Chains: A Process Engineering Approach to Life Cycle Inventory Generation

Indra CASTRO VIVAR1, Catherine AZZARO-PANTEL1, Alberto A. AGUILAR LASSERRE2, Fernando MORALES-MENDOZA3

1Laboratoire de Génie Chimique, Université de Toulouse, CNRS, INPT, UPS, Toulouse, France; 2Tecnologico Nacional de México, Instituto Tecnológico de Orizaba, México; 3Universidad Autónoma de Yucatán, Facultad de Ingeniería Química, Mérida, Yucatán, México

The environmental impact of Active Pharmaceutical Ingredients (APIs) is an increasingly significant research focus, as global pharmaceutical manufacturing practices face heightened scrutiny regarding sustainability. Paracetamol (acetaminophen), one of the most extensively used APIs, requires closer examination due to its current production practices. Most paracetamol is manufactured in large-scale facilities in India and China, with production capacities ranging from 2,000 to 40,000 tons annually.

Offshoring pharmaceutical manufacturing, traditionally a cost-saving strategy, has increased supply chain complexity and dependency on foreign API sources. This reliance has made Europe’s pharmaceutical production vulnerable, especially during global crises or geopolitical tensions, such as the disruptions seen during the COVID-19 pandemic. Consequently, there is growing interest in reshoring pharmaceutical production to Europe. The European Pharmaceutical Strategy (2020)[1] advocates decentralizing production to create shorter, more sustainable value chains. This move seeks to enhance access to high-quality medicines while minimizing the environmental impacts of long-distance transportation.

In France, the government has introduced measures to relocate the production of 50 essential drugs as part of a re-industrialization plan to address medication shortages. Paracetamol sales were restricted in 2022 and early 2023 due to supply chain issues, leading to the relocation of several manufacturing plants.

Yet, pharmaceuticals present unique challenges when assessed using Life Cycle Assessment (LCA), mainly due to a lack of comprehensive life cycle inventory (LCI) data. This scarcity is particularly evident for API synthesis (upstream) and downstream phases such as usage and end-of-life management.

This study aims to apply LCA methodology to evaluate various paracetamol API supply chain scenarios, focusing on the potential benefits of reshoring production to France. A major contribution of this work is the generation of LCI data for paracetamol production through process engineering and chemical process modeling. Aspen Plus software was used to model the paracetamol API manufacturing process, including mass and energy balances. This approach ensures that the datasets generated are robust and validated against available reference data. SimaPro software was used to conduct the LCA using the EcoInvent database and the Environmental Footprint (EF) impact assessment method.

One key finding is the reduction of greenhouse gas emissions for the selected functional unit (FU) of 1 kg of API. Significant differences in electricity use and steam heat generation were observed. According to the EF database, electricity in India results in emissions of 83 g CO₂ eq, while steam heat generation emits 1.38 kg CO₂ eq per FU. In contrast, French emissions are significantly lower, with electricity contributing 5 g CO₂ eq and steam heat generating 1.18 kg CO₂ eq per FU. These results highlight the environmental advantages of relocating production to regions with decarbonized power grids.

This study underscores the value of process modeling in generating LCI data for pharmaceuticals and enhances the understanding of the environmental benefits of reshoring paracetamol manufacturing. The developed methodology can be applied to other chemicals with limited LCI data, supporting more sustainable decision-making in the pharmaceutical sector's eco-design, particularly during re-industrialization efforts.

[1] European Commission, Communication from the Commission: A New Industrial Strategy for Europe, COM(2020) 102, pp. 1-17.



Safe Reinforcement Learning with Lyapunov-Based Constraints for Control of an Unstable Reactor

José Rodrigues Torraca Neto1, Argimiro Resende Secchi1,2, Bruno Didier Olivier Capron1, Antonio del-Rio Chanona3

1Chemical and Biochemical Process Engineering Program/School of Chemistry, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 2Chemical Engineering Program/COPPE, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 3Sargent Centre for Process Systems Engineering, Imperial College London

Safe reinforcement learning (RL) is essential for real-world applications with uncertainty and safety constraints, such as autonomous robotics and chemical reactors. Recent advances (Brunke et al., 2022) focus on integrating control theory with RL to ensure safety during learning and deployment. These approaches include robust RL frameworks, constrained Markov decision processes (CMDPs), and safe exploration strategies. We propose a novel approach where RL algorithms—PPO (Schulman et al., 2017), SAC (Haarnoja et al., 2018), DDPG (Lillicrap et al., 2016), and TD3 (Fujimoto et al., 2018)—are trained with Lyapunov-based constraints to ensure stability. As our reward function, −(x-xSP)², inherently generates negative rewards, we applied penalties to positive critic values and decreases in critic estimates over time.

For off-policy algorithms (SAC, DDPG, TD3), penalties were applied directly to Q-values, discouraging non-negative values and preventing unexpected decreases. On-policy algorithms (PPO) applied these penalties directly to the value function. DDPG used Ornstein-Uhlenbeck noise for exploration, while TD3 used Gaussian noise, with optimized parameters. Hyperparameters, including safe RL constraints, were tuned using Optuna (Akiba et al., 2019), optimizing learning rates, network architectures, and penalty weights.
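
A minimal sketch of how such penalties can be attached to a critic update is shown below (assumed form and weights, not the authors' exact implementation): the TD loss is augmented with one term that discourages non-negative Q-values and one that discourages decreases of the critic estimate along a trajectory.

```python
# Hedged sketch (assumed form of the penalties) of a critic update with Lyapunov-style terms:
# Q-values are pushed to stay negative (rewards are -(x - x_sp)^2 <= 0) and discouraged from
# decreasing over time.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

def safe_critic_loss(sa, sa_next, reward, gamma=0.99, w_pos=1.0, w_dec=1.0):
    q = critic(sa)
    with torch.no_grad():
        target = reward + gamma * critic(sa_next)          # simplified TD target (no target network)
    td_loss = nn.functional.mse_loss(q, target)
    pos_penalty = torch.relu(q).mean()                      # penalize non-negative Q-values
    dec_penalty = torch.relu(q - critic(sa_next)).mean()    # penalize Q decreasing over time
    return td_loss + w_pos * pos_penalty + w_dec * dec_penalty

# One illustrative update on random (state, action) batches
sa, sa_next = torch.randn(32, 4), torch.randn(32, 4)
reward = -torch.rand(32, 1)                                  # negative rewards, as in the setup above
loss = safe_critic_loss(sa, sa_next, reward)
opt.zero_grad(); loss.backward(); opt.step()
```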

Our method was tested on an unstable Continuous Stirred Tank Reactor (CSTR) under random disturbances. Despite challenges posed by disturbances, the Safe RL approach was evaluated for resilience under dynamic conditions. A cosine annealing schedule dynamically adjusted learning rates, ensuring stable training. Base RL algorithms (without safety constraints) were trained on ten parallel environments with disturbances and compared to a Nonlinear Model Predictive Control (NMPC) benchmark. SAC performed best, achieving an optimality gap of 7.73×10⁻⁴ on the training pool and 3.65×10⁻⁴ on new disturbances. DDPG and TD3 exhibited instability due to temperature spikes without safety constraints.

Safe RL significantly improved SAC’s performance, reducing the optimality gap to 2.88×10⁻⁴ on the training pool and 2.36×10⁻⁴ on new disturbances, nearing NMPC performance. Safe RL also reduced instability in DDPG and TD3, preventing temperature spikes and reducing policy noise, though it increased offset from the setpoint, resulting in larger optimality gaps. Despite this tradeoff, Safe RL made these algorithms more reliable, considering unseen disturbances. Overall, Safe RL brought SAC close to optimality across disturbance conditions while it mitigated instability in DDPG and TD3 at the cost of higher setpoint offsets.

References:
L. Brunke et al., 2022, "Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning," Annual Review of Control, Robotics, and Autonomous Systems, Vol. 5, pp. 411–444.
J. Schulman et al., 2017, "Proximal Policy Optimization Algorithms," arXiv:1707.06347.
T. Haarnoja et al., 2018, "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor," Proceedings of the 35th ICML, Vol. 80, pp. 1861-1870.
T.P. Lillicrap et al., 2016, "Continuous Control with Deep Reinforcement Learning," arXiv:1509.02971.
S. Fujimoto et al., 2018, "Addressing Function Approximation Error in Actor-Critic Methods," Proceedings of the 35th ICML, Vol. 80, pp. 1587-1596.
T. Akiba et al., 2019, "Optuna: A Next-generation Hyperparameter Optimization Framework," Proceedings of the 25th ACM SIGKDD, pp. 2623-2631.



A two-level model to assess the economic feasibility of renewable urea production from agricultural wastes

Diego Costa Lopes, Moisés Teles Dos Santos

Universidade de São Paulo, Brazil

Agroindustrial wastes can be an abundant source of chemicals, biofuels and energy. Based on this assumption, this work presents a two-level modeling approach (process models and a supply chain model) and an optimization framework for an integrated biorefinery system to convert agricultural residues into renewable urea via gasification routes, with a possible additional hydrogen input from electrolysis. A process model of the gasification process was developed in Aspen Plus® to identify key performance indicators such as energy consumption and relative urea yields for different biomasses and operating conditions; these key process data were then used in a mixed-integer linear programming (MILP) model designed to identify the optimal combination of energy source, technological route of urea production and plant location that maximizes the net present value of the system. The gasification step was modeled with an equilibrium approach. Besides the gasifier, the plant comprises a conditioning system to adjust the syngas composition, CO2 capture, and ammonia and urea synthesis.

Based on the model’s results, three technological routes (oxygen gasification, air gasification and water electrolysis) were chosen as the most promising, and six different biomasses (rice husks, coffee husks, corn stover, soybean straw, sugarcane straw and bagasse) were identified as representative of the Brazilian agricultural scenario. The country was divided into 5569 cities and 558 micro-regions. Each region's agricultural production was evaluated to estimate biomass supply and urea demand. Electricity prices were also considered based on current tariffs. A MILP model was developed to maximize the net present value, combining energy sources, location and route as decision variables while respecting constraints on biomass supply, urea demand and transport between regions. The model was applied to the whole country at the micro-region level. It was found that the Assis micro-region in São Paulo state is the optimal location for the plant, leveraging the proximity of large sugarcane and soybean crops and cheaper electricity compared to the rest of the country, with a positive NPV for a plant producing 80 tons of urea per hour. Biomass dominates the total costs of the plant (60%), followed by power (25%) and urea transport (10%). Biomass supply was not found to be a major constraint in any region; urea demand is the main limiting factor, with more than 30 micro-regions needed to consume the plant’s production, highlighting the need for close proximity between production and consumption to minimize logistic costs.
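
For illustration, the sketch below shows the flavour of such a location/transport MILP on a three-region toy instance with hypothetical data (the actual model covers 558 micro-regions and several routes); it is written with the open-source PuLP package rather than the authors' tooling.

```python
# Illustrative sketch (hypothetical data, not the paper's Brazilian dataset) of the kind of
# MILP used here: pick one plant location and ship urea to demand regions, maximizing profit
# under biomass supply and urea demand constraints.
import pulp

regions = ["A", "B", "C"]
biomass_supply = {"A": 500.0, "B": 300.0, "C": 200.0}    # kt/y of residues available (assumed)
urea_demand = {"A": 80.0, "B": 120.0, "C": 60.0}         # kt/y of urea demand (assumed)
dist_cost = {"A": {"A": 0, "B": 5, "C": 9},
             "B": {"A": 5, "B": 0, "C": 6},
             "C": {"A": 9, "B": 6, "C": 0}}               # $/t transport cost (assumed)
urea_price, biomass_per_urea, biomass_cost = 350.0, 2.0, 40.0   # assumed economics

prob = pulp.LpProblem("renewable_urea_location", pulp.LpMaximize)
y = pulp.LpVariable.dicts("build", regions, cat="Binary")           # plant location decision
x = pulp.LpVariable.dicts("ship", (regions, regions), lowBound=0)   # urea shipped r -> d

prob += pulp.lpSum(x[r][d] * (urea_price - dist_cost[r][d] - biomass_per_urea * biomass_cost)
                   for r in regions for d in regions)
prob += pulp.lpSum(y[r] for r in regions) == 1                      # build exactly one plant
for r in regions:                                                   # biomass availability at the plant site
    prob += biomass_per_urea * pulp.lpSum(x[r][d] for d in regions) <= biomass_supply[r] * y[r]
for d in regions:                                                   # regional urea demand
    prob += pulp.lpSum(x[r][d] for r in regions) <= urea_demand[d]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({r: y[r].value() for r in regions}, pulp.value(prob.objective))
```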

The model was also constrained to other regions of Brazil to evaluate local feasibility. The north and northeast regions were not found to be viable locations for a plant, with NPVs close to zero, given the lower biomass supplies and urea demands and the larger distances between micro-regions. Meanwhile, in the southern and midwest regions, the large availability of soybean residues also creates good conditions for a renewable urea plant, with NPVs of US$ 105 mil and US$ 103 mil, respectively. The results indicate the feasibility of producing renewable urea from agricultural wastes and the importance of considering a two-level approach to assess the economic performance of the entire system.



Computer-based Chemical Engineering Education for Green and Digital Transformation

Zorka Novak Pintarič, Miloš Bogataj, Zdravko Kravanja

Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, SI-2000 Maribor, Slovenia

The mission of Chemical Engineering Education, particularly Computer-Aided Chemical Engineering, is to equip students with the knowledge and skills they need to drive the green and digital transformation. This involves integrating Chemical Engineering and Process Systems Engineering (PSE) within the Bologna 3-cycle study system. The EFCE Bologna recommendations for Chemical Engineering programs will be reviewed, with a focus on PSE topics, especially those relevant to the green and digital transformation. Key challenges in introducing sustainability and digital knowledge will be highlighted, along with the necessary development of teaching methods and tools.

While chemical engineering programs contain elements of green and digital engineering, their systematic integration into core subjects is limited. The analysis of our study program shows that only a few principles of green engineering, such as maximizing efficiency and energy flow integration, are explicitly addressed. Other principles are indirectly presented through case studies but lack structured inclusion. Digital skills in the current curricula focus mainly on spreadsheets for data processing, basic programming, and process simulation. Green and digital content is more extensively explored in project work and advanced studies, with elective courses and final theses offering deeper engagement.

Artificial Intelligence (AI), as a critical element of digitalization, will significantly impact chemical engineering education, influencing both teaching methods and process optimization. However, the interdisciplinary complexity of AI poses challenges. Students need a solid foundation in programming, data science, and statistics to master AI tools, making its gradual introduction essential. The question therefore arises as to how AI can be effectively integrated into chemical engineering education by striking a balance between technical skills and critical thinking, fostering creativity and ethical awareness while preserving the engineering fundamentals.

Given the rapid pace of change in the industry, chemical engineering education needs to be reformed, particularly at the bachelor's and master's levels. Core challenges include systematically integrating essential green and digital topics into syllabi, introducing new courses like AI and data science, modernizing textbooks with numerical examples, and providing teachers with training to keep pace with technological advancements.



Development of a Hybrid Model for the Paracetamol Batch Dissolution in Ethanol Using Universal Differential Equations

Fernando Arrais Romero Dias Lima1,2, Amyr Crissaff Silva1, Marcellus G. F. de Moraes3,4, Amaro G. Barreto Jr.1, Argimiro R. Secchi1,4, Idelfonso Nogueira2, Maurício B. de Souza Jr.1,4

1School of Chemistry, EPQB, Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco E, 21941-914, Rio de Janeiro, RJ – Brazil; 2Chemical Engineering Department, Norwegian University of Science and Technology, Trondheim, 793101, Norway; 3Instituto de Química, Rio de Janeiro State University (UERJ), Rua São Francisco Xavier, 524, Maracanã, Rio de Janeiro, RJ, 20550-900, Brazil; 4PEQ/COPPE – Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco G, G115, 21941-914, Rio de Janeiro, RJ – Brazil

Crystallization is a relevant process in the pharmaceutical industry for product purification and particle production. An efficient crystallization is characterized by crystals produced with the desired attributes. Therefore, modeling this process is a key point to achieving this goal. In this sense, the objective of this work is to propose a hybrid model to describe paracetamol dissolution in ethanol. The universal differential equations methodology is considered in the development of this model, using a neural network to predict the dissolution rate combined with the population balance equations to calculate the moments of the crystal size distribution (CSD) and the concentration. The model was developed using the experimental batches reported by Kim et al. [1]. The dataset is composed of concentration measurements obtained using attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy. The objective function of the optimization problem is to minimize the difference between the experimental and the predicted concentration. The hybrid model could efficiently predict the concentration compared to the experimental measurements. Moreover, the hybrid approach made predictions of the moments of the CSD similar to those of the population balance model proposed by Kim et al. [1], being able to successfully calculate batches not considered in the training dataset. In addition, the performance of the hybrid model was similar to that of the phenomenological model based on population balances, but without the need to account for solubility information. Therefore, the universal differential equations approach is presented as an efficient methodology for modeling crystallization processes with limited information.

1. Kim, Y., Kawajiri, Y., Rousseau, R.W., Grover, M.A., 2023. Modeling of nucleation, growth, and dissolution of paracetamol in ethanol solution for unseeded batch cooling crystallization with temperature-cycling strategy. Industrial & Engineering Chemistry Research 62, 2866–2881.
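
A minimal sketch of the hybrid structure described above is given below, with assumed parameter values and untrained network weights (the authors' implementation uses universal differential equations in Julia): a small neural network supplies the dissolution rate that drives the moment equations and the solute mass balance.

```python
# Minimal sketch (assumed structure and parameter values, not the authors' Julia/UDE code)
# of a hybrid dissolution model: a small neural network gives a size-independent dissolution
# rate D(c, T) that drives the population balance moments and the solute mass balance.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)) * 0.3, np.zeros(8)      # toy MLP weights (would be trained
W2, b2 = rng.normal(size=(1, 8)) * 0.3, np.zeros(1)      # against measured concentrations)

def dissolution_rate(c, T):
    h = np.tanh(W1 @ np.array([c, T]) + b1)
    return -np.abs(W2 @ h + b2)[0]                        # negative "growth" rate = dissolution

def rhs(t, y, T=320.0, kv=0.5, rho=1.3e-12):              # assumed shape factor and crystal density
    mu0, mu1, mu2, mu3, c = y
    D = dissolution_rate(c, T)
    dmu = [0.0, D * mu0, 2 * D * mu1, 3 * D * mu2]         # moment equations, size-independent rate
    dc = -rho * kv * dmu[3]                                # solute balance follows the 3rd moment
    return dmu + [dc]

y0 = [1e6, 5e7, 5e9, 1e12, 0.20]                           # assumed initial moments and concentration
sol = solve_ivp(rhs, (0.0, 600.0), y0, t_eval=np.linspace(0, 600, 50))
print(sol.y[4, -1])                                        # predicted final concentration
```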



Novel PSE applications and knowledge transfer in joint industry - university energy-related postgraduate education

Athanasios Stefanakis2, Dimitra Kolokotsa2, Evaggelos Kapartzianis2, Ioannis Bonis2, Emilia Kondili1, JOHN K. KALDELLIS3

1Optimisation of Production Systems Lab., University of West Attica; 2Hellenic Energy S.A.; 3Soft Energy Applications and Environmental Protection Lab., University of West Attica

The area of process systems engineering has historically had a profound theoretical involvement with what is today known as artificial intelligence (noting especially Stephanopoulos 1985, but also Himmelblau 1993), with researchers testing these new ideas in all their forms. At the time, however, neither the computer hardware nor the available data had the capacity required by these models.

The situation today, with large amounts of data available in industry and highly available cloud computing, has been essential in making broad machine learning applications viable. In the area of process systems engineering, the types of problems currently addressed with machine learning routines are:

  1. The control system, i.e., in terms of plant equipment, the Distributed Control Systems implemented on servers with a real-time OS. Predictive algorithms with millions of coefficients (or, lately, fewer but more robust ones), such as neural networks and deep learning, are better at addressing larger systems than single pieces of equipment. Plant-wide optimization has not yet happened, but supply chain optimization is an area that is already seeing applications and is studied in academia.
  2. The process safety system (also known as the interlock or emergency shutdown system), implemented in PLCs, has also been augmented by ML through fault prediction and diagnosis methods. Applied mostly to rotating machine performance (asset performance management systems), these methods predict failures in advance so that companies can take timely measures, minimizing the risk of accidents as well as production losses (predictive maintenance).

Teaching this subject involves three challenges:

  1. The process systems engineering subject itself, which requires an understanding of modelling and is already not an easy subject.
  2. The machine learning subject, which also requires modelling understanding but is not a core subject in PSE.
  3. The data engineering subject. As the systems become larger (soon they will be plant-wide), knowledge of databases and cloud operating systems is increasingly required, at a minimum to understand the structure of the models to be used.

These subjects do not share a similar language, not even close, and constitute three separate frames of knowledge. A re-framing of PSE is required to include all three new disciplines in its core, and this needs to be done faster than in the past. The potential of the younger generations is enormous, as they learn hands-on, but for older generations this is already overwhelming.

In the coming period, machine learning is evolving in the form of plant optimizers and fault detection and diagnosis models.

The present article will present the continuous evolution and progress of the cooperation between the largest energy company of Greece and the university in the implementation of knowledge transfer and advanced postgraduate and doctoral education courses. Furthermore, the novel ideas of AI implementation in the process industry mentioned above will be described, and the prospects of this cooperation for both the industry and the university will be highlighted.



Optimal Operation of Middle Vessel Batch Distillation using an Economic MPC

Surendra Beniwal, Sujit Jogwar

IIT Bombay, India

Middle vessel batch distillation (MVBD) is an alternative configuration of conventional batch distillation with an improved sustainability index. MVBD consists of two column sections separated by a (middle) vessel for the separation of a ternary mixture. It works on the principle of multi-effect operation, wherein vapour from one column section (effect) is used to drive the subsequent effect, thus reducing the overall energy consumption [1]. The entire system is operated under total reflux and, at the end of the batch, the three products are accumulated in the three vessels (reflux drum, middle vessel and reboiler).

Previously, it was shown that the performance of the MVBD can be quantified in terms of an overall performance index (OPI), which captures the trade-off between separation and energy efficiency [2]. Furthermore, during the operation, the holdup trajectory of each vessel can be manipulated to maximize the OPI. In order to track these optimal holdup profiles during a batch, a feedback control system is needed.

The present work compares two approaches: sequential (open-loop optimization + closed-loop control) and simultaneous (closed-loop optimization + control). In the sequential approach, the optimal set-point trajectory generated by offline optimization is tracked using a model predictive controller (MPC). Alternatively, in the simultaneous approach, OPI maximization is used as the objective function for the controller. This formulation is similar to an economic MPC. As the prediction horizon for this controller is much smaller than the batch time, the problem is reformulated to ensure feasibility of the end-of-batch constraints. The two approaches are compared in terms of effectiveness, overall performance index, robustness (to disturbances and plant-model mismatch) and associated implementation challenges (computational time). A simulation case study with the separation of a ternary mixture consisting of benzene, toluene and o-xylene is used to illustrate the controller design and performance.
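
The contrast between the two formulations can be sketched as follows; the stage costs and the OPI surrogate below are placeholders rather than the authors' exact expressions.

```python
# Hedged sketch of the two controller objectives (placeholder symbols, not the authors'
# formulation): a tracking MPC penalizes deviation from pre-optimized holdup set-points,
# while the economic MPC maximizes an OPI-like trade-off directly over the horizon.
import numpy as np

def tracking_objective(holdups, setpoints, du, q=1.0, r=0.1):
    """Sequential approach: follow offline-optimized holdup trajectories."""
    return q * np.sum((holdups - setpoints) ** 2) + r * np.sum(du ** 2)

def economic_objective(purities, reboiler_duty, w=0.5):
    """Simultaneous approach: maximize a performance-index surrogate (negated for minimization)."""
    separation_term = np.mean(purities)           # proxy for separation performance
    energy_term = np.mean(reboiler_duty)          # proxy for energy consumption
    return -(separation_term - w * energy_term)

horizon = 10
holdups, setpoints = np.random.rand(horizon, 3), np.full((horizon, 3), 0.33)
du = np.diff(np.random.rand(horizon + 1, 2), axis=0)
print(tracking_objective(holdups, setpoints, du),
      economic_objective(np.random.rand(horizon, 3), np.random.rand(horizon)))
```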

References:

[1] Davidyan, A. G., Kiva, V. N., Meski, G. A., & Morari, M. (1994). Batch distillation in a column with a middle vessel. Chemical Engineering Science, 49(18), 3033-3051.

[2] Beniwal, S., & Jogwar, S. S. (2024). Batch distillation performance improvement through vessel holdup redistribution—Insights from two case studies. Digital Chemical Engineering, 13, 100187.



Recurrent deep learning models for multi-step ahead prediction: comparison and evaluation for a real Electrical Submersible Pump (ESP) system

Vinicius Viena Santana1, Carine de Menezes Rebello1, Erbet Almeida Costa1, Odilon Santana Luiz Abreu2, Galdir Reges2, Téofilo Paiva Guimarães Mendes2, Leizer Schnitman2, Marcos Pellegrini Ribeiro3, Márcio Fontana2, Idelfonso Nogueira1

1Norwegian University of Science and Technology, Norway; 2Federal University of Bahia, Brazil; 3CENPES, Petrobras R&D Center, Brazil

Predicting future states from historical data is crucial for automatic control and dynamic optimization in engineering. Recent advances in deep learning have provided new opportunities to improve prediction accuracy across various engineering disciplines, particularly using Artificial Neural Networks (ANNs). Recurrent Neural Networks (RNNs), in particular, are well suited for time series prediction due to their ability to model dynamic systems through recurrent updates1.

Despite RNNs' high predictive capacity, their potential can be underutilized if the model training does not consider the intended future usage scenario2,3. In applications like Model Predictive Control (MPC), the model must evolve over time, relying only on its own predictions rather than ground truth data. Training a model to predict only one step ahead may result in poor performance when applied to multi-step predictions, as errors compound in the auto-regressive (or generative) mode.

This study focuses on identifying optimal strategies for training deep recurrent neural networks to predict critical operational time series data from a real Electric Submersible Pump (ESP) system. We evaluate the performance of RNNs in multi-step-ahead predictions under two conditions: (1) when trained for single-step predictions and recursively applied to multi-step forecasting, and (2) using a novel training approach explicitly designed for multi-step-ahead predictions. Our findings reveal that the same model architecture can exhibit markedly different performance in multi-step-ahead forecasting, emphasizing the importance of aligning the training process with the model's intended real-time application to ensure reliable predictions.
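
The two training regimes can be sketched as follows (assumed architecture and tensor shapes, not the ESP model itself): a one-step-ahead loss versus a multi-step loss in which the network is rolled out on its own predictions, as it would be inside an MPC.

```python
# Simplified sketch (assumed architecture and data shapes, not the authors' ESP model) of the
# two training regimes compared here: one-step-ahead loss versus a multi-step loss where the
# network is fed its own predictions (generative/auto-regressive mode).
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, n_feat=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_feat)

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return self.head(out[:, -1:]), state           # one-step prediction and hidden state

def one_step_loss(model, window, target):
    pred, _ = model(window)
    return nn.functional.mse_loss(pred, target[:, :1])

def multi_step_loss(model, window, target, horizon=10):
    preds, x, state = [], window, None
    for _ in range(horizon):                            # roll out on the model's own predictions
        pred, state = model(x, state)
        preds.append(pred)
        x = pred                                        # next input is the previous output
    return nn.functional.mse_loss(torch.cat(preds, dim=1), target[:, :horizon])

model = Forecaster()
window = torch.randn(16, 24, 3)                         # batch of 24-sample history windows
target = torch.randn(16, 10, 3)                         # next 10 measurements
print(one_step_loss(model, window, target).item(), multi_step_loss(model, window, target).item())
```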

[1] Himmelblau, D.M. Applications of artificial neural networks in chemical engineering. Korean J. Chem. Eng. 17, 373–392 (2000). https://doi.org/10.1007/BF02706848

[2] Marrocos, P.H., Iwakiri, I.G.I., Martins, M.A.F., Rodrigues, A.E., Loureiro, J.M., Ribeiro, A.M., & Nogueira, I.B.R. (2022). A long short-term memory based Quasi-Virtual Analyzer for dynamic real-time soft sensing of a Simulated Moving Bed unit. Applied Soft Computing, 116, 108318. https://doi.org/10.1016/j.asoc.2021.108318

[3] Nogueira, I.B.R., Ribeiro, A.M., Requião, R., Pontes, K.V., Koivisto, H., Rodrigues, A.E., & Loureiro, J.M. (2018). A quasi-virtual online analyser based on artificial neural networks and offline measurements to predict purities of raffinate/extract in simulated moving bed processes. Applied Soft Computing, 67, 29-47. https://doi.org/10.1016/j.asoc.2018.03.001



Simulation and optimisation of vacuum (pressure) swing adsorption with simultaneous consideration of real vacuum pump data and bed fluidisation

Yangyanbing Liao, Andrew Wright, Jie Li

Centre for Process Integration, Department of Chemical Engineering, School of Engineering, The University of Manchester, United Kingdom

Pressure swing adsorption (PSA) is an essential technology for gas separation and purification. A PSA process where the highest pressure is above atmospheric pressure and the lowest pressure is at a vacuum level is referred to as vacuum pressure swing adsorption (VPSA). In contrast, vacuum swing adsorption (VSA) refers to a PSA process with the highest pressure equal to or slightly above atmospheric pressure and the lowest pressure below atmospheric pressure.

Most computational studies concerning the simulation of V(P)SA processes have assumed a constant vacuum pump efficiency ranging from 60% to 100%. Nevertheless, Krishnamurthy et al. [3] highlighted that 72% is a typical efficiency value for compressors, but not representative of vacuum pumps. They reported a low efficiency value of 30% estimated from their pilot-plant data. As a result, the energy consumption of the vacuum pump could have been underestimated by at least a factor of two in many computational studies.

In addition to assuming a constant vacuum pump efficiency, efficiency correlations have been proposed to more accurately evaluate the vacuum pump performance [4-5]. However, these correlations fail to conform to the trend suggested by the data points at higher pressures or to accurately represent the vacuum pump performance.

Adsorption bed fluidisation is another key factor in designing the PSA process, because bed fluidisation causes rapid adsorbent attrition and eventually results in a substantial decrease in separation performance [6]. However, the impacts of fluidisation on PSA optimisation have not been comprehensively addressed. More importantly, existing studies have not simultaneously incorporated real vacuum pump performance data and bed fluidisation limits into PSA optimisation.

To address the above research gaps, in this work we develop accurate prediction models for the pumping speed and power of the vacuum pump based on real performance curves, using a data-driven modelling approach [7-8]. We then develop a new, comprehensive V(P)SA model that allows an accurate evaluation of the V(P)SA process performance without relying on an estimated vacuum pump energy efficiency or on pressure/flow-rate boundary conditions at the vacuum pump end of the adsorption bed. A new optimisation problem that simultaneously incorporates the proposed V(P)SA model and the bed fluidisation constraints is thus constructed.
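
The data-driven pump models can be illustrated with a simple curve fit such as the one below; the operating points are placeholders standing in for the real performance-curve data used in the study.

```python
# Minimal sketch (placeholder points, not the manufacturer curves used in the study) of fitting
# a data-driven model for pumping speed versus inlet pressure; the same idea applies to power.
import numpy as np

p_inlet = np.array([0.05, 0.1, 0.2, 0.4, 0.7, 1.0])       # bar, assumed operating points
speed = np.array([250., 410., 520., 560., 540., 500.])    # m3/h, assumed curve readings

coeffs = np.polyfit(np.log(p_inlet), speed, deg=3)         # polynomial in log-pressure
pump_speed = np.poly1d(coeffs)

print(pump_speed(np.log(0.3)))                              # interpolated speed at 0.3 bar
```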

The computational results demonstrate that the vacuum pump efficiency falls within 20-40%. Using an estimated vacuum pump efficiency, the optimal cost is underestimated by at least 42% compared to that obtained using the proposed performance models. When the fluidisation constraints are incorporated, a low feed velocity and an exceptionally long cycle time are essential for maintaining a small pressure drop across the bed to prevent fluidisation. The optimal total cost is at least 16% higher than in cases where bed fluidisation constraints are not incorporated. Hence, it is important to incorporate vacuum pump performance prediction models developed from real data and bed fluidisation constraints to accurately evaluate the PSA performance.

References

1. Comput. Aided Chem. Eng. 2012:1217-21.

2. Energy 2017;137:495-509.

3. AIChE J. 2014;60(5):1830-42.

4. Int. J. Greenh. Gas Con. 2020;93:102902.

5. Ind. Eng. Chem. Res. 2019;59(2):856-73.

6. Adsorption 2014;20:757-68.

7. AIChE J. 2016;62(9):3020-40.

8. Appl. Energy 2022;305:117751.



Sociotechnical Transition: An Exploratory Study on the Social Appropriability of Users of Smart Meters in Wallonia.

Elisa Boissézon

Université de Mons, Belgium

Optimal and autonomous daily use of new technologies isn’t a reality for everyone. In a societal context driven by sociotechnical transitions (Markard et al., 2012), many people lack access to digital equipment, do not possess the required digital skills for their use, and, consequently, are unable to participate digitally in social life via e-services. This reality is called digital inequalities (Agence du numérique, 2023) and is even more crucial to consider in the context of the increasing digitalization of society, in all areas, including energy. Indeed, according to the European Union directives, member states are required to develop various means of action, including digital, which are essential to achieving the three strategic axes envisioned by the European energy transition scenario, namely: investment in renewable energies, energy efficiency, and energy sobriety (Dufournet & Marignac, 2018).

In this specific instance, our research focuses on the question of social appropriation (Zélem, 2018) of new technologies in the context of the deployment of smart meters in Wallonia, and the use of associated digital tools by the publics. These individuals, with their unique socio-economic and sociodemographic profiles, are not equally equipped to utilize all the functionalities offered by this new digital system for managing energy consumption (Agence du Numérique, 2023; Van Dijk, 2017; Valenduc, 2013). This exploratory and phenomenological study aims, firstly, to investigate the experiences of the publics concerning the support received during the installation of the new smart metering system and to identify the barriers to the social appropriation of new technologies. Secondly, the field surveys aim to determine to what extent individual participatory forms of support (Benoît-Moreau et al., 2013; Cadenat et al., 2013), developed through the lens of active pedagogies such as experiential learning (Brotcorne & Valenduc, 2008, 2009), and collective forms (Bernaud et al., 2015; Turcotte & Lindsay, 2008) can promote the inclusion of digitally vulnerable users. The central role of field professionals as interfaces (Cihuelo & Jobert, 2015) is also highlighted within the service relationship (Gadrey, 2003) that connects, on one hand, the end consumers and, on the other hand, the organization responsible for deploying the smart meters. Our qualitative investigations were conducted with four types of samples, through semi-structured interviews, considering several determining factors regarding engagement in the use of new technologies, from both individual and collective perspectives. Broadly speaking, our results indicate that while the standardized support protocol applied by field professionals during the installation of smart meters is sufficient for digitally proficient users, the situation is more nuanced for vulnerable populations who have specific needs requiring close support. In this context, collective participatory support in workshops in the form of focus groups seems to have further promoted the digital inclusion of participants.



Optimizing Methane Conversion in a Flow Reactor System Using Bayesian Optimization and Fisher Information Matrix Driven Experimental Design Approaches: A Comparative Study

Michael Aku, Solomon Gajere Bawa, Arun Pankajakshan, Ye Seol Lee, Federico Galvanin

University College London, United Kingdom

Reaction processes are complex systems requiring optimization techniques to achieve optimal performance in terms of key performance indicators (KPIs) such as yield, conversion, and selectivity [1]. Optimisation efforts often rely on accurate modelling of reaction kinetics, thermodynamics and transport phenomena to guide experimental design and improve reactor performance. Bayesian Optimization (BO) and Fisher Information Matrix-driven (FIMD) techniques are two key approaches used in the optimization of reaction systems [2].
BO identifies optimal conditions efficiently by starting from an exploratory sampling of the design space, while FIMD approaches have recently been proposed to maximise the information gained from experiments and progressively improve parameter estimation [3], focusing more on exploitation of the decision space to reduce the uncertainty in kinetic model parameters [4]. Both techniques have been used widely within scientific and industrial domains, but they differ fundamentally in how they balance exploration (gaining new knowledge) and exploitation (using current knowledge to optimize outcomes) during model calibration.

This study presents a comparative assessment of BO and FIMD methods for optimal experimental design, focusing on the complete oxidation of methane in an automated flow reactor system [5]. The performance of both methods is evaluated in terms of methane conversion optimization, experimental efficiency (i.e., the number of runs required to achieve the optimum), and information gained. Our preliminary findings suggest that while BO readily converges to a high methane conversion, FIMD can be a valid alternative to reduce the number of required experiments, offering more insight into the sensitivities of each parameter and the process dynamics. The comparative analysis paves the way towards developing explainable or physics-informed data-driven models to map the relationship between predicted experimental information and KPIs. The comparison also highlights trade-offs between convergence speed and robustness in experimental design, which are key aspects to consider for a comprehensive evaluation of both approaches in online reaction process optimization.
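
The FIMD selection step can be illustrated with a toy kinetic model as below (assumed Arrhenius parameters, not the methane-oxidation model of the study): each candidate experiment is scored by the determinant of the Fisher information it would add about the parameters.

```python
# Hedged sketch (toy first-order kinetic model with assumed parameters) of FIM-driven
# experiment selection: each candidate (T, tau) is scored by the D-optimality of the Fisher
# information it would contribute about the kinetic parameters.
import numpy as np

R = 8.314
theta = np.array([1.0e5, 60_000.0])                      # assumed A [1/s] and Ea [J/mol]

def conversion(T, tau, theta):
    A, Ea = theta
    return 1.0 - np.exp(-A * np.exp(-Ea / (R * T)) * tau)

def sensitivities(T, tau, theta, rel_step=1e-4):
    J = np.zeros(2)
    for i in range(2):                                    # central finite differences w.r.t. parameters
        dt = np.zeros(2); dt[i] = theta[i] * rel_step
        J[i] = (conversion(T, tau, theta + dt) - conversion(T, tau, theta - dt)) / (2 * dt[i])
    return J

def d_optimal_candidate(candidates, fim_prior, sigma=0.01):
    scores = []
    for T, tau in candidates:
        J = sensitivities(T, tau, theta)
        fim = fim_prior + np.outer(J, J) / sigma**2       # information added by this experiment
        scores.append(np.linalg.slogdet(fim)[1])          # log-determinant (D-optimality)
    return candidates[int(np.argmax(scores))]

candidates = [(T, tau) for T in (550., 600., 650., 700.) for tau in (1.0, 2.0, 5.0)]
print(d_optimal_candidate(candidates, fim_prior=1e-6 * np.eye(2)))
```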

References

[1] Taylor, C. J., Pomberger, A., Felton, K. C., Grainger, R., Barecka, M., Chamberlain, T. W., & Lapkin, A. A. (2023). A brief introduction to chemical reaction optimization. Chemical Reviews, 123(6), 3089-3126.

[2] Quirino, P. P. S., Amaral, A. F., Manenti, F., & Pontes, K. V. (2022). Mapping and optimization of an industrial steam methane reformer by the design of experiments (DOE). Chemical Engineering Research and Design, 184, 349-365.

[3] Friso, A., & Galvanin, F. (2024). An optimization-free Fisher information driven approach for online design of experiments. Computers & Chemical Engineering, 187, 108724.

[4] Green, P. L., & Worden, K. (2015). Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 373(2051), 20140405.

[5] Pankajakshan, A., Bawa, S. G., Gavriilidis, A., & Galvanin, F. (2023). Autonomous kinetic model identification using optimal experimental design and retrospective data analysis: methane complete oxidation as a case study. Reaction Chemistry & Engineering, 8(12), 3000-3017.



OPTIMAL CONTROL OF PSA UNITS BASED ON EXTREMUM SEEKING

Beatriz Cambão da Silva1,2, Ana Mafalda Ribeiro1,2, Diogo Filipe Rodrigues1,2, Alexandre Filipe Porfírio Ferreira1,2, Idelfonso Bessa Reis Nogueira3

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The application of real-time optimization (RTO) to dynamic operations is challenging due to the complexity of the nonlinear problems involved, making it difficult to achieve robust solutions [1]. Regarding cyclic adsorption processes, particularly Pressure Swing Adsorption (PSA) and Temperature Swing Adsorption (TSA), real-time control of the process is essential to maintain or increase productivity.

The literature on real-time optimization of PSA units relies on Model Predictive Control (MPC) and Economic Model Predictive Control (EMPC) [2]. These options rely heavily on an accurate model representation of the industrial plant, requiring high computational effort and time to ensure optimal control [3]. Given the importance of PSA and TSA systems in multiple separation operations, establishing alternatives for real-time control and optimization is in order. With that in mind, this work aimed to explore alternative model-free real-time optimization techniques that depend on simple control elements, as is the case of Extremum Seeking Control (ESC).

The chosen case study was syngas upgrading, which is relevant since it precedes the Fischer-Tropsch reactions that enable an alternative to fossil fuels. Syngas upgrading can also provide H2 for ammonia production and reduce CO2 emissions. The operation of the PSA unit for syngas upgrading used as the basis for this study was discussed in the work of Regufe et al. [4].

Extremum-seeking control is a method that aims to control the process by driving an objective's gradient towards zero while estimating that gradient based on persistent perturbations. A High-pass Filter (HF) eliminates the signal's DC component to obtain a clearer response to changes in the system. The input variable u is continually perturbed by a sinusoidal wave, which helps assess the evolution of the objective function by keeping the system in a state of constant perturbation. The integrator determines the necessary adjustment in u to bring the objective function closer to its optimum. This adjustment is often scaled by a gain K to accelerate convergence.
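
A discrete-time sketch of this loop is given below with a toy static objective and illustrative tuning values; in the study the loop is instead wrapped around the gPROMS PSA model.

```python
# Minimal discrete-time extremum-seeking sketch (toy static objective, illustrative tuning):
# a sinusoidal dither perturbs u, a high-pass filter removes the DC component of the objective,
# demodulation estimates the gradient, and an integrator with gain K updates u.
import numpy as np

def objective(u):
    return -(u - 3.0) ** 2 + 10.0                     # unknown plant map with optimum at u = 3

u_hat, y_hp, y_prev = 0.5, 0.0, objective(0.5)         # initial input estimate and filter states
a, omega, K, h = 0.2, 0.8, 0.05, 0.9                   # dither amplitude/frequency, gain, HPF pole

for k in range(2000):
    dither = a * np.sin(omega * k)
    y = objective(u_hat + dither)                      # perturbed measurement
    y_hp = h * y_hp + (y - y_prev)                     # discrete high-pass filter
    y_prev = y
    grad_est = y_hp * np.sin(omega * k)                # demodulation: correlate with the dither
    u_hat += K * grad_est                              # integrator drives the gradient to zero

print(u_hat)                                            # converges near the optimum u = 3
```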

The PSA model, representing the behaviour of the industrial plant, was implemented in gPROMS and communicates with MATLAB and Simulink, where the ESC was implemented.

Extremum Seeking Control successfully optimized the CO2 productivity in PSA units for syngas upgrading/H2 purification. This shows that ESC can be a valuable tool in optimizing and controlling PSA processes and does not require the unit to reach a Cyclic Steady State to adjust the operation.

[1] S. Kameswaran and L. T. Biegler, “Simultaneous dynamic optimization strategies: Recent advances and challenges,” Computers & Chemical Engineering, vol. 30, no. 10, pp. 1560–1575, 2006, doi: 10.1016/j.compchemeng.2006.05.034.

[2] H. Khajuria and E. N. Pistikopoulos, “Optimization and Control of Pressure Swing Adsorption Processes Under Uncertainty,” AIChE Journal, vol. 59, no. 1, pp. 120–131, Jan. 2013, doi: 10.1002/aic.13783.

[3] S. Skogestad, “Advanced control using decomposition and simple elements,” Annual Reviews in Control, vol. 56, p. 100903, 2023, doi: 10.1016/j.arcontrol.2023.100903.

[4] M. J. Regufe et al., “Syngas Purification by Porous Amino-Functionalized Titanium Terephthalate MIL-125,” Energy & Fuels, vol. 29, no. 7, pp. 4654–4664, 2015, doi: 10.1021/acs.energyfuels.5b00975.



Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia – FIND4S

Monika Polanska1, Yoga Pratama2, Setya Budi Abduh2, Ahmad Ni'matullah Al-Baarri2, Jan Van Impe1

1BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium; 2Department of Food Technology, Diponegoro University, Indonesia

The Capacity Building Project entitled “Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia” (FIND4S, “FIND force”) aims to boost the institutional and administrative resources of seven Indonesian higher education institutions (HEIs) in Central Java.

The EU overarching priorities addressed through the FIND4S project include the Green Deal and Digital Transformation, through developing knowledge, competences, skills and values. The modernized, competitive and innovative curricula will stimulate green jobs and pave the way to sustainable food systems in which environmental impact is taken into account. The essential elements of risk assessment, predictive modelling and computational optimization are to be brought together with the sustainability principles of food production and food processing, as well as energy and food chain concepts (Life Cycle Assessment), within one coherent structure. The project will offer a better understanding of ecological and food systems dynamics and offer strategies for regenerating natural systems by using big data and providing predictive tools for the food industry. The predictive modelling tools can be applied to evaluate the effects of climate change on food safety with regard to managing this new threat for all stakeholders. Raising the quality of education through digital technologies will enable learners to acquire essential competences and sector-specific digital skills. The inclusion of data management to address sustainability challenges will reinforce the scientific, technical and innovation capacities of HEIs and foster links between academia, research and industry.

Initially, the FIND4S project will modernize Bachelor's degree curricula to include food systems and technology-oriented programs at partner universities in Indonesia. This modernization aims to meet the desired accreditation standards and better prepare graduates for postgraduate studies. Additionally, at the central hub university, the project will develop a new and innovative Master's degree program in sustainable food systems that integrates sustainability and environmental awareness into graduate education. This program will align with labor market demands and address the challenges that agriculture and food systems are facing, providing insights into potential threats and opportunities for knowledge transfer to Indonesia through education and research.

The recognition and implementation of novel and innovative programs will be tackled via significant improvement of food science education by designing new curricula and upgrading existing ones, training academic staff, creating a research center and equipping laboratories, as well as expanding the network of collaboration with European Higher Education Institutions. The project will utilize big data, quantitative modeling, and engineering tools to engage all stakeholders, including industry partners. The comprehensive MSc program will meet the growing demand for knowledge, experience, and standards in Indonesia, contributing to a greener and more sustainable economy and society. Ultimately, this initiative will support the necessary transformation towards socially, environmentally, and economically sustainable food systems.



Optimization of Specific Heat Transfer Area for Multiple Effects Desalination (MED) Process

Salih Alsadaie1, Sana I Abukanisha1, Iqbal M Mujtaba3, Amhamed A Omar2

1Sirte University, Libya; 2Sirte Oil Company, National Oil Corporation, Libya; 3University of Bradford, United Kingdom

The world population is expected to increase massively in the coming decades, putting more stress on the desalination industry to cope with the increasing demand for fresh water. At the same time, with the increasing cost of living, freshwater production processes face the challenge of producing freshwater at higher quality and lower cost. The best-known techniques for water desalination are thermal-based, such as Multistage Flash desalination (MSF) and Multiple Effect Desalination (MED), and membrane-based, such as Reverse Osmosis (RO). Although the installed capacity of RO remarkably surpasses that of MSF and MED, the MED process is the preferred option for new plants in different locations around the world where waste heat is available. However, MED desalination technology is also required to cut costs further by optimizing its operating and design parameters.

There are several studies in the literature that focus on optimizing the MED process. Most of these studies focus on increasing the production rate or minimizing energy consumption by optimizing operating conditions, using more efficient control systems, integrating with power plants, or hybridizing with other desalination techniques. However, none of the available studies has focused on the optimum design configuration, such as the heat transfer area and the number of effects.

In this paper, a mathematical model describing the MED process is developed and solved using the gPROMS software. For a fixed production rate, the heat transfer area is optimized by varying the seawater temperature and flowrate, the steam temperature and flowrate, and the number of effects. The design and operating data are taken from an almost new existing small MED plant with two large effects and two small effects.

Keywords: MED desalination, gPROMS, optimization, heat transfer area, multiple effects.

References

  1. Mayor, B., 2019. Growth patterns in mature desalination technologies and analogies with the energy field. Desalination, 457, pp.75-84.
  2. Prajapati, M., Shah, M. and Soni, B., 2022. A comprehensive review of the geothermal integrated multi-effect distillation (MED) desalination and its advancements. Groundwater for Sustainable Development, 19, p.100808.


Companies’ operation and trading strategies under the triple trading of electricity, carbon quota and commodities: A game theory optimization modelling

Chenxi Li1, Nilay Shah2, Zheng Li1, Pei Liu1

1State Key Lab of Power System Operation and Control, Department of Energy and Power Engineering, Tsinghua-BP Clean Energy Centre, Tsinghua University, Beijing, 100084, China; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom

Trading has been recognized as an effective measure for decarbonization, especially with the recent global focus on carbon reduction targets. Due to the high overlap in participants and traded goods, carbon and electricity trading are highly coupled, leaving the operational strategies of companies involved in the coupled transactions unclear. Research on coupled trading is therefore essential, as it helps companies identify optimal strategies and enables policymakers to detect potential policy loopholes. This study presents a novel game theory optimization model involving both power generation companies (GenCos) and factories. Aiming to achieve a Nash equilibrium that maximizes each company's benefits, the model explores optimal operation strategies for both power generation and consumption companies under electricity-carbon joint trading. It fully captures the operational characteristics of power generation units and the technical energy consumption of electricity-using enterprises to describe in detail the relationship between renewable energy, fossil fuels, electricity, and carbon emissions. Electricity and carbon prices in the transaction are determined through negotiation between buyers and sellers. Considering the relationship between production volume and the price of the same product, the case actually encompasses three trading systems: electricity, carbon, and commodities. The model's nonlinearity, caused by the Nash equilibrium and the product of price and output, is managed using piecewise linearization and discretization, transforming the problem into a mixed-integer linear problem. Using this triple trading model, this study quantitatively explores three issues based on a virtual case involving three GenCos and four factories: the enterprises' operational strategies under varying emission reduction requirements, the pros and cons of cap and benchmark carbon quota allocation mechanisms, and the impact of integrating zero-emission enterprises into carbon trading. Results indicate that GenCos tend to act as sellers of both electricity and carbon quotas. Meanwhile, since consumers may cut production rather than implement low-carbon technologies to lower emissions, driving up product prices to maintain profits, high electricity and carbon prices become unsustainable for GenCos due to reduced electricity demand. Moreover, while benchmark mechanisms may incentivize production, they can also lower overall system profits, which is undesirable for policymakers. Lastly, under strict carbon reduction targets, zero-emission companies may transform the carbon market into a seller's market by purchasing carbon to raise carbon prices, thereby reducing electricity prices and lowering their own operating costs.
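
The linearization of the price-output product can be illustrated with the standalone toy below (hypothetical numbers, not the paper's model): the price is restricted to discrete levels selected by binaries, and the bilinear revenue term is rewritten with big-M constraints, yielding a MILP.

```python
# Hedged, standalone illustration (hypothetical data) of linearizing a bilinear price*quantity
# term: the price is discretized into levels chosen by binaries, and auxiliary variables with
# big-M constraints replace the product, so the problem becomes a MILP.
import pulp

price_levels = [40.0, 50.0, 60.0]                       # assumed discretized prices
q_max = 100.0

prob = pulp.LpProblem("bilinear_linearization", pulp.LpMaximize)
q = pulp.LpVariable("quantity", lowBound=0, upBound=q_max)
z = pulp.LpVariable.dicts("pick_price", range(len(price_levels)), cat="Binary")
w = pulp.LpVariable.dicts("q_at_price", range(len(price_levels)), lowBound=0)   # w_k = q if level k chosen

prob += pulp.lpSum(price_levels[k] * w[k] for k in w) - 20.0 * q                # revenue minus a unit cost
prob += pulp.lpSum(z[k] for k in z) == 1                                        # exactly one price level
prob += q <= pulp.lpSum((120.0 - 1.5 * price_levels[k]) * z[k] for k in z)      # price-dependent demand
for k in range(len(price_levels)):                                              # big-M linearization of z_k * q
    prob += w[k] <= q
    prob += w[k] <= q_max * z[k]
    prob += w[k] >= q - q_max * (1 - z[k])

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([z[k].value() for k in z], q.value(), pulp.value(prob.objective))
```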



Solvent and emission source dependent amine-based CO2 capture costs estimation methodology for systemic level analysis

Yi Zhao1, Aron Beck1, Hayato Hagi2, Bruno Delahaye2, François Maréchal1

1Ecole Polytechnique Fédérale de Lausanne, Switzerland; 2TotalEnergies OneTech, France

Amine-based carbon capture effectively reduces industrial emissions but faces challenges due to high investment costs and the energy penalty associated with solvent regeneration. Existing cost estimation approaches either rely on complex and costly simulation processes or provide overly general results, limiting their applicability for systemic analysis. This study presents a shortcut approach to estimating amine-based carbon capture costs, considering varying solvents and emission sources in terms of flow rates and CO2 concentrations. The results show that scaling effects significantly impact smaller plants, with costs dropping from 200–500 $/t-CO2 to 50–100 $/t-CO2 as capacity increases from 0.1 to 100 t-CO2/h, with Monoethanolamine (MEA) as the solvent. For larger plants, heat utility costs dominate, representing around 80% of the total costs, assuming a natural gas price of 35 $/MWh (10.2 $/MMBTU). Furthermore, MEA-based plants can be up to 25% more expensive than those with alternative solvents. In short, this study provides a practical method for initial amine-based carbon capture cost estimation, enabling a systemic assessment of its technoeconomic potential and facilitating its comparison with other CO2 abatement technologies.



Energy Planning Toward Absolute Environmental Sustainability: Key Decisions and Actionable Insights Through Interpretable Machine Learning

Nicolas Ghuys1, Diederik Coppitters1, Anne van den Oever2, Maarten Messagie2, Francesco Contino1, Hervé Jeanmart1

1Université catholique de Louvain, Belgium; 2Vrije Universiteit Brussel, Belgium

Energy planning models traditionally support the energy transition by focusing on cost-optimized solutions that limit greenhouse gas emissions. However, this narrow focus risks burden-shifting, where reducing emissions increases other environmental pressures, such as freshwater use, solving one problem while creating others. Therefore, we integrated Planetary Boundary-based Life Cycle Assessment (PB-LCA) into energy planning to identify solutions that respect absolute environmental sustainability limits. However, integrating PB-LCA into energy planning introduces challenges, such as adopting distributive justice principles, interpreting trade-offs across PB indicator impacts, and managing subjective weighting in the objective function. To address these, we employed weight screening and interpretable machine learning to extract key decisions and actionable insights from the numerous quantitative solutions generated. Preliminary results for a single weighting scenario show that the transition scenario exceeds several PB thresholds, particularly for ecosystem quality and mineral resource depletion, underscoring the need for a balanced weighting scheme. Next, we will apply screening and machine learning to pinpoint key decisions and provide actionable insights for achieving absolute environmental sustainability.

 
11:00am - 12:00pmBrewery visit
Location: On-campus brewery
11:00am - 12:00pmT1: Modelling and Simulation - Session 5
Location: Zone 3 - Room D016
Chair: Laurent Dewasme
Co-chair: Flavio Manenti
 
11:00am - 11:20am

Reaction Pathway Optimization Using Reinforcement Learning in Steam Methane Reforming and Associated Parallel Reactions

Martin Rodríguez-Fragoso, Octavio Elizalde-Solis, Edgar Ramirez-Jimenez

Instituto Politecnico Nacional, Mexico

In catalytic processes such as Steam Methane Reforming (SMR), multiple parallel and competing reactions occur, influencing product yields and reactor efficiency. The objective of this work is to develop a methodology based on reinforcement learning (RL) to accurately map the most probable reaction pathways by utilizing experimental data, such as partial pressures of methane (CH₄), hydrogen (H₂), carbon monoxide (CO), and carbon dioxide (CO₂) measured over time and temperature. By leveraging this data, the RL model dynamically selects the reaction pathways that best reflect the underlying reaction kinetics and mechanisms, distinguishing itself from conventional deterministic methods used in the literature.

Unlike traditional reaction modeling approaches, which often rely on predefined mechanisms, this methodology allows the RL agent to explore the reaction space autonomously. It considers a wide array of reactions, including Steam Methane Reforming, Water-Gas Shift (WGS), Dry Methane Reforming (DRM), methane decomposition, the Boudouard reaction, methanation, reverse WGS, and CO hydrogenation. The RL agent is trained using a Q-learning framework with an ε-greedy exploration-exploitation policy, which balances the search for new reaction pathways (exploration) and the optimization of the best-known reactions (exploitation). The algorithm optimizes the selection of these pathways by iteratively improving the match between predicted gas compositions and experimental data, learning which reactions dominate under specific conditions, such as varying temperatures and residence times.
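
The Q-learning and ε-greedy machinery referred to above is sketched below on a placeholder environment; in the study, the actions correspond to candidate reactions and the reward reflects the match between predicted and measured partial pressures.

```python
# Generic tabular Q-learning loop with an epsilon-greedy policy, as a hedged illustration of
# the framework described above; the environment here is a placeholder, whereas in the study
# the actions are candidate reactions and the reward measures agreement with experimental data.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 10, 8                      # e.g., discretized compositions x candidate reactions
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Placeholder environment: returns next state and a reward (higher = better data match)."""
    next_state = (state + action) % n_states
    reward = -abs(next_state - n_states // 2) / n_states
    return next_state, reward

for episode in range(500):
    s = rng.integers(n_states)
    for _ in range(20):
        if rng.random() < eps:                   # exploration of new pathways
            a = rng.integers(n_actions)
        else:                                    # exploitation of the best-known pathway
            a = int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])   # Q-learning update
        s = s_next

print(np.argmax(Q, axis=1))                      # greedy "pathway" choice per state after training
```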

The model is designed to incorporate data-driven adaptability into the pathway synthesis, enabling it to select the optimal reaction scheme that best reflects the behavior of the reactive system under varying operational conditions, such as changes in temperature, pressure, and residence time. This real-time adaptability is crucial for accurately capturing the dynamic nature of catalytic processes, which traditional deterministic models often struggle to account for. Furthermore, the RL model employs a reward function that penalizes the selection of pathways that are either infeasible or deviate from established thermodynamic principles, ensuring that the reaction networks remain physically consistent while accurately representing known reaction kinetics.

Initial results show that this RL-based pathway selection significantly reduces prediction errors and enhances the identification of dominant reaction mechanisms, particularly in complex systems where multiple parallel reactions compete. The model’s ability to adjust dynamically to experimental data demonstrates its superiority over classical methods, providing a flexible and robust tool for optimizing catalytic processes like SMR.

This study demonstrates how the integration of Reinforcement Learning with Reaction Engineering can enhance the understanding and prediction of reaction pathways, offering a scalable solution for both research and industrial applications in reaction mechanism optimization.



11:20am - 11:40am

Modelling of a Heat Recovery System (HRS) integrated with Steam Turbine Combined Heat and Power (CHP) unit in a petrochemical plant

Daniel Sousa1, Miguel Castro Oliveira1,2, Maria Cristina Fernandes1

1Department of Chemical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal; 2Research, Development and Innovation, ISQ, Avenida Prof. Dr. Cavaco Silva, 33 Taguspark, 2740-120 Porto Salvo, Portugal

This work concerns the simulation and optimisation modelling of an integrated Heat Recovery System (HRS) for a petrochemical plant. The system conceptualised in this work includes four combustion-based processes (CBPs) (two using natural gas as fuel and the other two using liquefied petroleum gas) and a condensing steam turbine combined heat and power generation (ST-CHP) system. The conceptualised system further incorporates technologies to recover heat from the exhaust gases of the combustion-based processes, resulting in fuel savings (through preheating of the combustion air at the inlet of the four CBPs and of the water at the inlet of the ST-CHP's boiler), as well as electricity generation via an organic Rankine cycle (ORC).

An existing methodology (Castro Oliveira, 2023), which addresses the highest achievable level of waste heat stream recirculation between a given number of production processes, is expanded to also include energy supply systems (a category to which CHP belongs). In this sense, this work introduces a perspective in which the production processes in a plant (such as CBPs) and the energy-using units within energy supply systems may be analysed in a similar manner, since the same type of improvement technology (in this case, heat exchangers) can be incorporated within the overall system, thereby addressing the most fundamental aim of improving the plant's energy efficiency. The objectives of this work are framed by the aims of the EU Strategy on Energy System Integration (European Commission, 2020), namely its first pillar (the energy efficiency and circular economy nexus).

The simulation model for the proposed system was developed in the Modelica language, using the capabilities of the ThermWatt Modelica library (Castro Oliveira, 2023), which has been specifically designed for the development of models of energy recovery and water systems. An optimisation model based on non-linear programming (NLP) was then developed for this second scenario (using the same set of variables and governing equations as the corresponding simulation model). The Python-based GEKKO optimisation package was used to build this model. The objective was defined as the minimisation of the operational costs associated with the plant's energy consumption (including both fuel and electricity).
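
Since the abstract mentions the Python GEKKO package, a minimal sketch of how an operational-cost NLP could be posed in GEKKO is shown below; the variables, bounds, balances and cost coefficients are illustrative placeholders, not the plant model of this work.

```python
from gekko import GEKKO

m = GEKKO(remote=False)                     # local solve

# Illustrative decision variables (placeholders, not the actual plant model)
fuel = m.Var(value=1.0, lb=0.2, ub=2.0)     # fuel consumption, MW
elec = m.Var(value=0.5, lb=0.0, ub=1.0)     # electricity import, MW
q_rec = m.Var(value=0.3, lb=0.0, ub=0.8)    # recovered exhaust heat via preheating, MW

# Simplified energy balance: the process heat demand must be met
m.Equation(fuel + 0.4 * elec + q_rec >= 1.5)
# Recovered heat is limited by the available exhaust-gas enthalpy
m.Equation(q_rec <= 0.5 * fuel)

# Operational cost with assumed prices (EUR/MWh): fuel 40, electricity 90
m.Minimize(40 * fuel + 90 * elec)

m.options.SOLVER = 3                        # IPOPT for the NLP
m.solve(disp=False)
print(f"fuel = {fuel.value[0]:.3f} MW, electricity = {elec.value[0]:.3f} MW")
```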

A post-processing assessment was then performed to determine the economic and environmental viability of the engineering project for the conceptualised system, which is associated with total fuel savings of 108.9 TJ/year (a relative reduction of 13.8%) and electricity savings of 147.2 MWh/year. On this basis, a payback time of about 2 years and 3 months and a reduction of 5.14 kt/year in equivalent carbon dioxide (CO2,eq) emissions have been estimated, which are reasonable values compared with the respective benchmarks of a 2 – 3 year payback time (Tello and Weerdmeester, 2013) and a 12 – 19 kt CO2,eq/year reduction (ABB, 2023).

References

M. Castro Oliveira, 2023, ThermWatt Home Page, https://fenix.tecnico.ulisboa.pt/homepage/ist178789/thermwatt---ferramenta-amp-servico-de-engenharia.

European Commission, 2020, Powering a climate-neutral economy: An EU Strategy for Energy System Integration.

P. Tello, R. Weerdmeester, 2013, Spire Roadmap, 106.

ABB, 2023, Energy efficiency opportunities in chemical manufacturing.



11:40am - 12:00pm

Application of K-means for the Identification of Multiphase Flows Based on Computational Fluid Dynamics

Patrick Souza Lima1, Leonardo Silva Souza2, Leizer Schnitman2, Idelfonso Bessa dos Reis Nogueira1

1NTNU, Norway; 2UFBA, Brazil

The complexity of describing multiphase systems universally has led to the development of various models tailored to specific industrial flow scenarios (Oran & Boris, 2002). Multiphase flows, commonly found in industries such as oil, gas, and chemical processing, require systems to operate within specific flow regimes. When these regimes are violated, systems can fail, making it critical to identify flow regimes accurately to maintain operational safety and efficiency (Xu et al., 2022).

Existing techniques for identifying flow regimes often rely on data from real systems, requiring expensive infrastructure and offering limited adaptability to new conditions. As an alternative, this study uses computational fluid dynamics (CFD) simulations, which allow for controlled experiments without the need for costly equipment. CFD was applied to simulate water-oil mixtures with different flow regimes, including slug, annular, and dispersed bubble patterns, providing data for flow classification. Typically, multiple sensors or variables are used for flow classification (Wang et al., 2019; El-Sebakhy, 2010), but to minimize costs, this study used only the apparent density in a cross-section of the simulated pipe, which could be measured by a single sensor in real systems.

To simplify the classification process, the numerical integration of the density over the pipe cross-section was proposed as a one-dimensional variable representing the flow characteristics. This simplified representation allowed the use of machine learning techniques, and k-means was chosen as the classification method. Traditional classification of multiphase flows is often done visually, which can lead to subjective interpretations (Wu et al., 2001). To avoid this, k-means, an unsupervised learning method, was used. K-means minimizes a within-cluster variance objective starting from a random initial assignment of cluster centres, making it well suited to problems where labeled data are not available.
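
A minimal sketch of the clustering step is given below, assuming the integrated cross-sectional density has already been extracted from the CFD results into a single feature per sample; the synthetic values stand in for that data and the regime means are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for the integrated apparent density of each CFD sample;
# in the study this single feature is computed from a pipe cross-section.
rng = np.random.default_rng(0)
slug      = rng.normal(650.0, 15.0, 100)     # assumed regime means, kg/m3
annular   = rng.normal(350.0, 15.0, 100)
dispersed = rng.normal(880.0, 15.0, 100)
density = np.concatenate([slug, annular, dispersed]).reshape(-1, 1)

# Unsupervised assignment into three flow-regime clusters
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(density)
print("cluster centres (kg/m3):", np.sort(km.cluster_centers_.ravel()))
print("labels of first five samples:", km.labels_[:5])
```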

Despite the randomness of k-means, the method consistently provided accurate classifications, with none of the results showing an accuracy lower than 80%. This demonstrates that k-means can reliably differentiate between flow regimes using only the integrated density variable. The decision to use k-means was based on its ability to perform well in scenarios without predefined classification criteria, making it suitable for multiphase flow studies where labeled data is scarce or unavailable.

In conclusion, applying k-means for flow regime identification using CFD data presents a cost-effective and reliable solution. By relying on only one variable—apparent density—the method significantly reduces the need for expensive instrumentation, making it practical for real-world applications. Furthermore, this study highlights the potential of CFD simulations to provide valuable data for flow classification, offering an efficient alternative to traditional experimental methods. This approach can improve safety and operational efficiency in industries that deal with complex multiphase flows.

 
11:00am - 12:00pmT2: Sustainable Product Development and Process Design - Session 5
Location: Zone 3 - Room E033
Chair: Maria Papathanasiou
Co-chair: Francois Marechal
 
11:00am - 11:20am

Decarbonisation pathways for packaging bioplastic alternatives

Marie J. Jones1,2, Jana Lukic1, Antoine Astour1, Jeremy Luterbacher2, François Maréchal1

1Industrial Energy Systems Laboratory, École Polytechnique Fédérale de Lausanne (EPFL),1950 Sion, Switzerland.; 2Laboratory of Sustainable and Catalytic Processing, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland.

Decarbonising the packaging plastic industry requires a disruptive change in both our production and consumption habits. Bioplastics produced from non-edible lignocellulosic biomass will become the backbone of a circular plastic economy. However, current processes retrofitting biobased molecules to petrochemicals result in high process complexity and low biomass utilisation efficiency (BUE), therefore hindering economic competitiveness and absolute sustainability due to increased pressure on water and land resources [1], [2]. Alternative bioplastics with similar properties to PET, the main packaging plastic, but retaining as many biogenic atoms as possible in native-like structures, have recently been developed to address this problem [3].

In this study, traditional retrofitting pathways and these novel technologies, which offer improved process efficiency albeit at lower drop-in readiness levels, are compared at several levels of modelling complexity to assess the trade-offs and their large-scale viability.

Inherent mass and energy losses are identified at an early stage using a second-law thermodynamic analysis and complemented with detailed process modelling, techno-economic assessment and life-cycle analysis of four chemo-catalytic processing routes: (1) PET via methanol obtained from biomass gasification, (2) PET via 5-(chloromethyl)furfural (CMF) [4], (3) PEF (polyethylene furanoate) via CMF, and (4) PHX (polyhexylene xylosediglyoxylate), a new polymer recently engineered by our group [5]. The latter is characterised by a high BUE (97%) and chemical exergy efficiency, achieved through aldehyde functionalisation, which results in lower manufacturing costs and CO2 emissions. The additional chemicals represent the main environmental burden, which could be further reduced by producing them from CO2 or biomass.

To systematically investigate such symbiotic relationships within the chemical industry, we developed a superstructure proposing decarbonised production pathways for all major reagents around the processes of interest. Through multi-objective optimisation for cost and carbon footprint minimisation, progressively self-sufficient biorefinery configurations are generated. As the options with the lowest abatement costs are selected first, our methodology helps decision makers prioritise efforts towards the decarbonisation of the whole plastic production supply chain while making the best use of the biomass resource.

References

[1] M. Bachmann et al., “Towards circular plastics within planetary boundaries,” Nat Sustain, vol. 6, no. 5, Art. no. 5, May 2023, doi: 10.1038/s41893-022-01054-9.

[2] P. Gabrielli et al., “Net-zero emissions chemical industry in a world of limited resources,” One Earth, May 2023, doi: 10.1016/j.oneear.2023.05.006.

[3] L. P. Manker, M. J. Jones, S. Bertella, J. Behaghel de Bueren, and J. S. Luterbacher, “Current strategies for industrial plastic production from non-edible biomass,” Current Opinion in Green and Sustainable Chemistry, vol. 41, p. 100780, Jun. 2023, doi: 10.1016/j.cogsc.2023.100780.

[4] M. Mascal, “5-(Chloromethyl)furfural (CMF): A Platform for Transforming Cellulose into Commercial Products,” ACS Sustainable Chem. Eng., vol. 7, no. 6, pp. 5588–5601, Mar. 2019, doi: 10.1021/acssuschemeng.8b06553.

[5] L. P. Manker et al., “Sustainable polyesters via direct functionalization of lignocellulosic sugars,” Nat. Chem., vol. 14, no. 9, Art. no. 9, Sep. 2022, doi: 10.1038/s41557-022-00974-5.



11:20am - 11:40am

Modeling and Simulation of a Novel Process that Converts Low Density Polyethylene to Ethylene

Xiaoyan Wang, Omar Almaraz, Jianli Hu, Srinivas Palanki

West Virginia University, United States of America

Globally, it is estimated that around 70 million tons of polyethylene are produced via polymerization of ethylene [1,2], with the majority (~79%) ending up in landfills or the environment. The current process for making the monomer ethylene involves high-temperature cracking of ethane and is very energy intensive; it also produces a significant amount of greenhouse gases [3]. For this reason, there is significant interest in developing novel depolymerization processes that utilize waste plastics to produce the monomer ethylene.

In this project, a novel process is developed that utilizes low-density polyethylene from plastic waste to produce ethylene. In this process, waste polyethylene is reacted in a microwave reactor to produce ethylene. Preliminary experimental results indicate that it is possible to achieve 41% selectivity to ethylene. A conceptual flowsheet based on this reactor is developed in the Aspen Plus environment. A membrane separator is used to separate the syngas from the ethylene, and the syngas is sent to another reactor to produce additional ethylene. The ethylene is purified to polymer grade via a train of distillation columns. The heat duty of the microwave reactor is computed via simulation in COMSOL. Heat integration tools are used to reduce the hot and cold utilities required by the process. This novel design is compared with the conventional process of making ethylene from ethane via cracking. A techno-economic analysis is conducted to demonstrate the economic feasibility of the process, and a life cycle analysis is conducted to demonstrate its decarbonization potential.

Acknowledgment: This study was supported by the United States Department of Energy.

References:

  1. https://www.statista.com/statistics/1099345/ethylene-demand-globally/
  2. IEA, Technology Roadmap - Energy and GHG Reductions in the Chemical Industry via Catalytic Processes, IEA, Paris, 2013.
  3. CO2 Emissions in 2022, https://www.iea.org/reports/co2-emissions-in-2022
 
11:00am - 12:00pmT3: Large Scale Design and Planning/Scheduling - Session 4
Location: Zone 3 - Room D049
Chair: Zheyu Jiang
Co-chair: Benoit Chachuat
 
11:00am - 11:20am

Methods for Efficient Solutions of Spatially Explicit Biofuels Supply Chain Models

Phuc M. Tran1,2, Eric G. O'Neill1,2, Christos T. Maravelias1,2,3

1Department of Chemical and Biological Engineering, Princeton University, Princeton, NJ 08540, USA; 2DOE Great Lakes Bioenergy Research Center, Madison, WI, 53726, USA; 3Andlinger Center for Energy and the Environment, Princeton University, Princeton, NJ 08540, USA

Biofuels as a renewable energy source play a crucial role in the transition to a low-carbon energy system. Given the recently identified limitations of biofuels produced from food crops, largely related to competition for land and between food and energy uses, greater emphasis is being placed on second-generation biofuels produced from non-food, lignocellulosic sources. Emerging research has pointed to the utilization of marginal lands for bioenergy crop production. Marginal lands, typically characterized by poor soil quality and other unfavorable growing conditions, present a promising opportunity for the cultivation of lignocellulosic crops, contributing to energy security and environmental sustainability without compromising food production. Studies have typically used mathematical programming and simulation to optimize the biofuels supply chain (SC) according to a range of objectives. Nonetheless, limited research has addressed the interactions between supply chain network design (SCND) and the upstream landscape-scale modeling associated with producing biomass. Broadening the SCND system boundary to include landscape design enables better control of feedstock supply and more precise estimates of GHG emissions.

Recent advancements in the fine-scale modeling of field-specific crop productivities and soil organic carbon (SOC) sequestration potentials present an opportunity for the use of integrated models. However, the highly heterogeneous geographic properties of fields introduce additional layers of spatial complexity. The inclusion of such details in optimization models leads to a significant increase in the number of variables and constraints. Models eventually become intractable: too large to be solved in a reasonable amount of time. Consequently, advanced data-processing techniques are needed for these large-scale models to be solved effectively and accurately.

We propose data processing methods to address spatial complexity in integrated biofuels SCND optimization models. Specifically, we employ two methods to deal with the large number of fields for bioenergy feedstock production: composite curves and network reduction. The composite-curve-based approach transforms field-specific decision variables into lower-resolution ones without homogenizing the properties of fields. This is achieved by establishing an order of selection for fields within that lower resolution and approximating the resulting composite curve, reducing the number of field variables. Network reduction is used to create clusters of fields that are close enough for a single transportation arc to dedicated facilities to be assumed for them. This method aims to reduce the number of transportation-related variables in the model while ensuring an accurate representation of small fields. We also introduce an iterative linearization procedure for the estimation of composite curves and a two-step procedure in which the true field-to-facility transportation cost is recovered after a solution is obtained for the network-reduced model. To demonstrate the feasibility and effectiveness of these methods in reducing the size of the model while maintaining its accuracy, a case study was conducted for the SCND optimization of second-generation biofuels in 8 states of the US Midwest. The results reveal that while an optimization-based clustering method for network reduction leads to a more accurate representation of the system, the use of composite curves is able to reduce the model's run time by up to 83%.
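
A minimal numpy sketch of the composite-curve ordering described above, using invented field areas and yields: fields are ranked by productivity, the cumulative supply curve is built, and a handful of breakpoints approximates it so that an optimization model sees far fewer field-level variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fields = 5000
area = rng.uniform(5.0, 40.0, n_fields)      # ha per field (invented)
yld = rng.uniform(2.0, 9.0, n_fields)        # Mg/ha per field (invented)

# Establish an order of selection: the most productive fields are chosen first
order = np.argsort(-yld)
cum_area = np.cumsum(area[order])                     # cumulative area selected, ha
cum_biomass = np.cumsum(area[order] * yld[order])     # cumulative biomass supplied, Mg

# Approximate the composite curve with a few breakpoints, replacing thousands of
# field-level variables with one piecewise-linear supply curve
bp_area = np.linspace(0.0, cum_area[-1], 8)
bp_biomass = np.interp(bp_area,
                       np.concatenate(([0.0], cum_area)),
                       np.concatenate(([0.0], cum_biomass)))

for a, b in zip(bp_area, bp_biomass):
    print(f"select {a:9.0f} ha -> {b:11.0f} Mg biomass")
```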



11:20am - 11:40am

Multi-objective Optimization of Steam Cracking Microgrid for Clean Olefins Production

Saba Ghasemi, Tylee Kareck, Zheyu Jiang

Oklahoma State University, United States of America

Olefins are widely used as crucial precursors and essential building blocks in the manufacturing of chemical products, including plastic, detergent, adhesive, rubber, and food packaging. Ethylene is the most important olefin with global annual production exceeding 200 million metric tons. Currently, ethylene is almost entirely produced via steam cracking of gaseous and liquid hydrocarbon feedstocks such as ethane, propane, and naphtha. Steam cracking is one of the most energy and carbon-intensive processes in the chemical industry. As the U.S. energy landscape continues to transition toward clean, renewable electricity, one promising solution to decarbonize the steam cracking process is to implement electric cracking technology. Nevertheless, due to (1) the sheer size of most ethylene plants in the U.S., (2) the need to run these plants around the clock, and (3) the intermittent nature of variable renewable electricity (VRE) from solar and wind whose proportion in the U.S. electricity generation will continue to increase, it would be economically unrealistic and practically impossible to install massive energy storage systems (e.g., batteries) or perform complete plant reconfiguration to accommodate such a large power demand from electrified crackers.

Accounting for these complications and practical limitations, our vision for using electricity to provide process heat for steam cracking comprises diverse energy sources. We envision that the electrification of steam cracking will take place gradually due to the large capital investment associated with the decommissioning of existing crackers and the installation of new cracker units. Thus, both electrified and conventional crackers are present in the superstructure. Battery storage, electrolyzer, and hydrogen storage are used in conjunction with VRE generated onsite to support round-the-clock ethylene plant operation. Electrified crackers can be powered by electricity from the main grid, electricity generated in-house from dispatchable generators and fuel cell units, as well as from batteries. On the other hand, conventional crackers can be powered by fresh natural gas feedstock as well as the methane fraction byproduct (containing methane and hydrogen) from both conventional and electrified crackers. Essentially, the future ethylene plant becomes a microgrid, a local electric grid that acts as a single controllable entity with respect to the main grid. A microgrid can operate in either grid-connected mode or islanded mode, offering benefits such as improved resilience, economic operation, and flexibility.

In this work, we formulate a multi-objective optimization problem to minimize the total costs and carbon footprint associated with operating the ethane cracking microgrid. We build a differential-algebraic equation (DAE) numerical model that determines the energy requirements of conventional and electrified cracking. The energy demand obtained from this mechanistic model is then used to formulate a deterministic, steady-state operation model of the ethylene plant. We also consider the uncertainties associated with VRE generation and market price predictions. This results in a scenario-based mixed-integer linear programming (MILP) model which is solved to optimality. By considering a hypothetical ethylene plant located on the Texas Gulf Coast, we draw several insights regarding how decarbonized ethylene plants should be operated subject to the trade-offs between economic benefits and environmental impacts.
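
A toy scenario-based MILP in the spirit described above is sketched below using the open-source PuLP package; the scenarios, probabilities, prices, emission factors, weighted-sum scalarization of the two objectives, and all constraints are invented for illustration and bear no relation to the authors' model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

scenarios = {"sunny": 0.5, "cloudy": 0.3, "calm": 0.2}      # probabilities (assumed)
vre_avail = {"sunny": 80.0, "cloudy": 40.0, "calm": 10.0}   # on-site VRE, MW (assumed)
demand = 100.0                                              # cracker heat duty, MW (assumed)
grid_price, gas_price = 60.0, 25.0                          # EUR/MWh (assumed)
grid_co2, gas_co2 = 0.35, 0.20                              # tCO2/MWh (assumed)
co2_weight = 50.0                                           # EUR/tCO2 weight on emissions

prob = LpProblem("cracking_microgrid", LpMinimize)

# First-stage decision: electrify the cracker bank or keep it gas-fired
electrify = LpVariable("electrify", cat=LpBinary)

# Second-stage (per-scenario) energy dispatch, MW
grid = {s: LpVariable(f"grid_{s}", lowBound=0) for s in scenarios}
vre = {s: LpVariable(f"vre_{s}", lowBound=0) for s in scenarios}
gas = {s: LpVariable(f"gas_{s}", lowBound=0) for s in scenarios}

for s in scenarios:
    prob += vre[s] <= vre_avail[s]                     # VRE availability
    prob += grid[s] + vre[s] <= demand * electrify     # electric heat only if electrified
    prob += gas[s] <= demand * (1 - electrify)         # gas heat only if conventional
    prob += grid[s] + vre[s] + gas[s] == demand        # meet the cracker duty

# Expected cost plus weighted expected emissions (one point on the Pareto front)
prob += lpSum(p * (grid_price * grid[s] + gas_price * gas[s]
                   + co2_weight * (grid_co2 * grid[s] + gas_co2 * gas[s]))
              for s, p in scenarios.items())

prob.solve(PULP_CBC_CMD(msg=0))
print("electrify =", int(electrify.value()))
for s in scenarios:
    print(s, "grid:", grid[s].value(), "vre:", vre[s].value(), "gas:", gas[s].value())
```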

 
11:00am - 12:00pmT5: Concepts, Methods and Tools - Including keynote
Location: Zone 3 - Aula E036
Chair: Eike Cramer
Co-chair: Filip Logist
 
11:00am - 11:40am

Keynote: Pimp my distillation sequence – Shortcut-based screening of intensified configurations

Momme Adami, Dennis Espert, Mirko Skiborowski

Hamburg University of Technology, Institute of Process Systems Engineering, Hamburg/Germany


1790-Keynote-Adami_b.pdf
 
11:00am - 12:00pmT7: CAPEing with Societal Challenges - Session 3
Location: Zone 3 - Room E032
Chair: Solomon Brown
Co-chair: Henrique Matos
 
11:00am - 11:20am

Absolute Sustainability Assessment of Sustainable Aviation Fuels

Marina T. Chagas, Juan D. Medrano-García, Lucas F. Santos, Gonzalo Guillén-Gonsálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zürich, Vladimir-Prelog-Weg 1, 8093 Zürich, Switzerland

With 2.5% of current global annual CO2 emissions [1], aviation is one of the most challenging sectors to decarbonize, due to its high-energy-density fuel requirements. Indeed, according to the International Air Transport Association (IATA), more than 99% of the fuel used today is of fossil origin [2]. Ongoing decarbonization efforts include scaling the use of sustainable aviation fuels (SAF), which are drop-in replacements for fossil kerosene. They are expected to provide most of the sector's carbon abatement until 2050, contributing over 60% of the emissions reduction required for aviation to reach a net-zero scenario [2,3].

SAFs can be produced from different carbon feedstocks, such as biomass, organic waste and captured CO2, which, combined with the available array of synthesis technologies, results in several potential pathways. Nevertheless, there is limited literature on the simultaneous assessment and comparison of the alternatives for SAF production, while existing studies provide limited insights into the global environmental implications of their large scale deployment.

In this work, we analyzed different renewable carbon sources and pathways to produce SAF and compared them to fossil jet fuel production routes based on absolute sustainability criteria. Unlike standard life cycle assessments (LCA), which are suitable for comparing different products or processes, absolute sustainability assessments provide insights into environmental performance relative to the planet's ecological limits and carrying capacity [4]. These limits, known as planetary boundaries (PBs), together define the safe operating space (SOS) for anthropogenic activities [5]. As environmental assessment studies of SAF production have so far focused on relative LCA, this is, to the authors' knowledge, the first time that PBs are incorporated in a SAF production study.

Different pathways for SAF production, differing in carbon feedstock and production technology, were evaluated through the PB framework, and the transgression levels relative to the SOS were quantified. The processes were simulated in Aspen Plus v12, and the respective life cycle inventories were modeled using the mass and energy balance results. The environmental assessment was carried out in Brightway2 v2.4.6 using the Ecoinvent database v3.10 and the Environmental Footprint (EF) method v3.1 in combination with LANCA v2.5.
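
For readers unfamiliar with Brightway2, the heavily simplified sketch below shows the generic pattern of scoring one functional unit against one impact method; the project, database, activity search term, and method filter are placeholders and do not reflect the authors' setup, which also requires the licensed Ecoinvent database to be imported locally.

```python
import brightway2 as bw

# Placeholder names: adjust to an existing project/database on your machine
bw.projects.set_current("saf_study")
db = bw.Database("ecoinvent-3.10")                   # assumed to be imported already

# Pick a foreground activity (placeholder search term)
activity = db.search("kerosene production")[0]

# Placeholder method filter; inspect bw.methods to locate the EF 3.1 categories
method = [m for m in bw.methods
          if "EF v3.1" in str(m) and "climate change" in str(m)][0]

lca = bw.LCA({activity: 1.0}, method)                # 1 unit of the chosen activity
lca.lci()                                            # build the life cycle inventory
lca.lcia()                                           # apply the characterisation factors
print(f"{method}: {lca.score:.3f} per functional unit")
```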

Our results showcase the trade-offs between the different strategies for SAF production and the benefits compared to the fossil routes. Moreover, they highlight the importance of policy support to promote SAF production.

References

1. Ritchie, H. & Roser, M. What share of global CO₂ emissions come from aviation? Our World in Data (2024).

2. International Air Transport Association (IATA). Executive Summary Net Zero Roadmaps. (2023).

3. McCausland, R. Net zero 2050: sustainable aviation fuels. (2023).

4. Sala, S., Crenna, E., Secchi, M. & Sanyé-Mengual, E. Environmental sustainability of European production and consumption assessed against planetary boundaries. Journal of Environmental Management 269, 110686 (2020).

5. Rockström, J. et al. A safe operating space for humanity. Nature 461, 472–475 (2009).



11:20am - 11:40am

Techno-economic Assessment of Sustainable Aviation Fuel Production via H2/CO2-Based Methanol Pathway

Pierre Guilloteau1, Hugo Silva2, Anders Andreasen2, Niklas Groll1, Anker Degn Jensen3, Gürkan Sin1

1Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), 2800 Kgs.Lyngby, Denmark; 2Energy Transition - Process Department, Ramboll Energy, Hannemanns Allé 53 2300 København S, Danmark; 3Catalysis and High Temperature Engineering Center (CHEC), Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), 2800 Kgs.Lyngby, Denmark

To reach long-term carbon neutrality in aviation, transitioning from fossil-based jet fuels to Sustainable Aviation Fuels (SAF) derived from renewable resources is essential. This study provides a comprehensive Techno-Economic and Life Cycle Assessment of a SAF production process utilizing renewable hydrogen (H2) and carbon dioxide (CO2) through the Methanol-to-Olefins (MTO) and Mobil Olefins to Gasoline and Distillate (MOGD) pathway.

We investigated the methanol formation kinetics, comparing various models and reactor types and designs. Results indicated that models including both CO2-to-methanol conversion and the reverse water-gas shift reaction showed excellent accuracy. In addition, a boiling water reactor with external utility achieved a higher single-pass conversion (22.6% at 314 m³) than an adiabatic reactor (20.9% at 314 m³), with optimal conditions identified at 450 K and 75 bara. Whereas previous studies mainly examined temperature and pressure effects, the influence of reactor design parameters appears to have been overlooked for industrial implementation. This in-depth study provides new insight into how the methanol reactor design impacts its conversion, and highlights the significant effect of the reactor design on capital and operational costs.

While the MTO and MOGD reactors were designed based on experimental conversion data, this study also emphasizes the final distillation process. We determined that a two-column distillation system is necessary to reach a purity of 97% C8-C16 hydrocarbons for 38 kt/year of kerosene. Detailed sensitivity analyses on distillation parameters, including boil-up ratio, reflux ratio, and column sizing, identified optimal values of 1.1 for the boil-up ratio and 1.7 for the reflux ratio.

Economic evaluation established a Minimum Selling Price of $2.46/kg, higher than the current fossil jet fuel cost of $0.68/kg but consistent with prior SAF studies. Our uncertainty analysis on the Minimum Selling Price showed a standard deviation of 2.30, driven by the uncertain H2 price, which depends strongly on innovations in the performance and cost breakdown of electrolyzers and renewable electricity production technologies. The sensitivity analysis likewise revealed high sensitivity to reactant prices, particularly that of H2. Finally, a Life Cycle Assessment of the plant was performed, yielding a carbon footprint of 0.67 kgCO2eq/MJ, in line with European regulations.

This comprehensive study underscores the economic and environmental potential of the SAF production process. By identifying the optimal methanol reactor design configuration, it demonstrates the importance of developing an advanced methanol formation model. These results offer a novel perspective on SAF production optimization and on the need for realistic assessment and selection of industrial design specifications.



11:40am - 12:00pm

Sustainable Aviation Fuels Production via Biogas Reforming and Fischer-Tropsch Process Integrated with Solid Oxide Electrolysis

Muhammad Nizami, Konstantinos Anastasakis

Department of Biological and Chemical Engineering, Aarhus University, Hangøvej 2, Aarhus 8200, Denmark

The use of sustainable aviation fuels (SAFs) is pivotal in gradually replacing fossil kerosene and lowering carbon emissions without changing the existing infrastructure. One pathway to produce SAFs is the Fischer-Tropsch synthesis (FTS) process. FTS is a catalytic reaction that converts a mixture of CO and H2 (syngas) into a variety of hydrocarbon products. The syngas can be produced by high-temperature reforming of biogas and/or CO2 as the carbon source, which can reduce lifecycle carbon emissions by 50 to 100% (Peacock et al., 2024).

The present work proposes an integrated process for SAF production from biogas through reforming, Fischer-Tropsch synthesis (FTS) and solid oxide electrolysis (SOEC). Aspen Plus v14 is used to develop rigorous kinetic models for biogas reforming, FTS and hydrocracking of the resulting heavy fractions, based on established kinetics from the literature (Park et al., 2014; Todic et al., 2013), followed by distillation into the final fuel cuts (naphtha, kerosene and diesel). Different scenarios for H2 supply (upstream or downstream of the reformer) and tail gas recycling (to the FT reactor or to the reformer) are assessed with the developed integrated process model. The technical performance is evaluated using several key performance indicators, such as carbon efficiency and process efficiency.
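
As a small worked example of the carbon-efficiency KPI mentioned above (the molar flows and lumped fuel cuts are invented; the definition used here, carbon leaving in the fuel products divided by carbon fed as biogas, is a common convention and may differ from the authors' exact formulation):

```python
def carbon_efficiency(feed_mol, product_mol, carbon_atoms):
    """Fraction of the fed carbon that ends up in the fuel products."""
    c_in = sum(flow * carbon_atoms[sp] for sp, flow in feed_mol.items())
    c_out = sum(flow * carbon_atoms[sp] for sp, flow in product_mol.items())
    return c_out / c_in

# Carbon atoms per molecule for the species involved (lumped fuel cuts assumed)
carbon_atoms = {"CH4": 1, "CO2": 1, "naphtha(C7)": 7, "kerosene(C12)": 12, "diesel(C18)": 18}

feed = {"CH4": 100.0, "CO2": 66.0}                                        # kmol/h, invented biogas
products = {"naphtha(C7)": 4.0, "kerosene(C12)": 8.0, "diesel(C18)": 2.0}  # kmol/h, invented

print(f"carbon efficiency = {carbon_efficiency(feed, products, carbon_atoms):.1%}")
```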

The simulation results showed carbon efficiencies between 79.9% and 95.1%, and process efficiencies between 40.2% and 41.7%. The lower process efficiency is due to the high energy consumption required for hydrogen production in the SOEC process. The developed model sets the basis for further design optimization and will facilitate the evaluation of economic and environmental impacts, determining both the production cost of SAFs and their carbon reduction compared with conventional jet fuels. Furthermore, cost projection analyses are essential to predict future decreases in fuel production costs.

References

  1. Peacock J, Cooper R, Waller N, Richardson G. Decarbonising aviation at scale through synthesis of sustainable e-fuel: A techno-economic assessment. Int J Hydrogen Energy 2024;50:869–90.
  2. Park, N., Park, M.-J., Baek, S.-C., Ha, K.-S., Lee, Y.-J., Kwak, G., Park, H.-G., & Jun, K.-W. (2014). Modeling and optimization of the mixed reforming of methane: Maximizing CO2 utilization for non-equilibrated reaction. Fuel, 115, 357–365.
  3. Todic, B., Bhatelia, T., Froment, G. F., Ma, W., Jacobs, G., Davis, B. H., & Bukur, D. B. (2013). Kinetic Model of Fischer–Tropsch Synthesis in a Slurry Reactor on Co–Re/Al2O3 Catalyst. Industrial & Engineering Chemistry Research, 52(2), 669–679.
 
11:00am - 12:00pmT8: CAPE Education and Knowledge Transfer - Session 2
Location: Zone 3 - Aula D002
Chair: Zorka Novak-Pintaric
Co-chair: Miroslav Fikar
 
11:00am - 11:20am

Trends and Challenges in the Teaching of the Capstone Process Design Course

Ana Torres, Ignacio E Grossmann

Carnegie Mellon University, United States of America

The major goal of this presentation is to discuss new trends and challenges in teaching the senior undergraduate design course. Process design has been the traditional capstone course for chemical engineering, in which material learned in previous semesters is integrated and is applied to a major design project where students work in groups. A major component of this course is decision-making since students have to select the process technology, the flowsheet configuration and its operating parameters. Modern educational trends emphasize process invention (i.e., synthesis of flowsheets) and the teaching of systematic techniques for design. These include synthesis strategies (e.g., hierarchical decomposition, separation synthesis, heat exchanger networks, energy integration, and water networks), process simulation tools (e.g. AspenPlus) to perform the mass and energy balances, detailed modeling tools for chemical reactors and separators, and economic evaluation. The recent trend has also been to include topics on energy, decarbonization, and evaluation of sustainability and environmental impact (e.g. through life cycle analysis using EPA Greenscope), topics which are becoming highly relevant in the professional education of chemical engineering students. Clearly, the design course, aside from providing essential professional skills to chemical engineering students, is an ideal vehicle to expose students to these new emerging topics. Furthermore, in our experience students have been highly motivated with design projects that focus on these areas, which in turn can help to increase enrollments in chemical engineering.

Major challenges are faced in teaching the process design course, however. The first is the increasing reduction of fundamental chemical engineering courses (e.g. thermodynamics, fluid mechanics, heat and mass transfer, unit operations) in favor of new courses, especially in emerging areas (e.g. biological engineering and nano-materials). This decrease in fundamentals makes it harder to teach the design course, as one has to review basic topics that would normally be taken for granted. Another important issue is that in many departments the course is increasingly taught either by adjunct faculty (teaching lecturers) or by individuals retired from industry, with essentially no involvement from tenure-track faculty. Only in departments that have faculty in process systems engineering is the design course taught by a regular faculty member. We believe this is a worrying trend because it means that faculty are increasingly unable to teach this core course in chemical engineering. Finally, given the recent emphasis on wellness, university administrators are increasingly discouraging time-consuming and open-ended courses like the design course. In this presentation we discuss some of the challenges currently being faced, and emphasize that the process design course is an ideal vehicle for addressing energy and sustainability issues, which have become major challenges and a source of new opportunities for chemical engineers to make an impact in the real world.



11:20am - 11:40am

Smart Manufacturing Course: Proposed and Executed Curriculum Integrating Modern Digital Tools into Chemical Engineering Education

Montgomery Laky, Gintaras Reklaitis, Zoltan Nagy, Joseph Pekny

Purdue University, Davidson School of Chemical Engineering, West Lafayette, IN, United States of America

The paradigm shift into an era of Industry 4.0 has emphasized the need for intelligent networking between process equipment and industrial processes. This has brought on an age of research and framework development for smart manufacturing in the name of Industry 4.0 [1]. While the physical and digital advancements towards smart manufacturing integration are tangible, the advancement of engineers is equally important. Assante et al. discuss educational efforts in Europe to create and implement a smart manufacturing curriculum for non-traditional or adult learners already in the workforce, but little prior work addresses the development of smart manufacturing curricula for pre-career students [2]. We, the teaching team of CHE 554: Smart Manufacturing at Purdue University, proposed and implemented a curriculum geared towards training undergraduate, graduate, and non-traditional students in methods of smart manufacturing as they apply to industrial applications.

Through this elective course, taught primarily in the context of chemical engineering, we introduce an interdisciplinary body of students to a set of concepts not covered in its entirety by any engineering core curriculum. Our course includes, but is not limited to, material on data reconciliation, machine learning, chemometrics, data-driven fault detection, digital twin development, and process optimization. Further, where available, these concepts are exercised through open-source Python packages, enabling the accessible and practical application of smart manufacturing in assignments and in professional settings [3,4]. With the integration of modern tools and Python libraries, the industrial practicality becomes evident to students who would otherwise be unaware of these resources.
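
As an illustration of the kind of open-source exercise such a course can rely on (a generic sketch, not actual course material): a PCA model fitted on synthetic normal operating data flags faulty samples via the squared prediction error (Q statistic).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 3 latent drivers observed through 10 sensors
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 10))
X_normal = latent @ mixing + 0.1 * rng.normal(size=(500, 10))

# A faulty batch: sensor 4 drifts away from its correlated behaviour
X_fault = X_normal[:50].copy()
X_fault[:, 4] += 3.0

scaler = StandardScaler().fit(X_normal)
pca = PCA(n_components=3).fit(scaler.transform(X_normal))

def q_statistic(X):
    """Squared prediction error of each sample with respect to the PCA model."""
    Z = scaler.transform(X)
    recon = pca.inverse_transform(pca.transform(Z))
    return np.sum((Z - recon) ** 2, axis=1)

threshold = np.percentile(q_statistic(X_normal), 99)     # simple empirical limit
flagged = q_statistic(X_fault) > threshold
print(f"fault detection rate on the drifted batch: {flagged.mean():.0%}")
```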

Purdue University, a rich environment where co-ops, internships, and other opportunities for industrial growth are encouraged, adds a further layer to the preparation of our students for industrial careers through this elective course. Purdue Online extends this industrial reach, adding accessibility for non-traditional and adult students who pursue professional development through the course. Special attention to the asynchronous format ensures that the course remains accessible to both non-traditional and traditional students. The course develops a theoretically grounded and practically trained generation of engineering students interested in industrial applications. Its asynchronous nature relaxes time constraints, while regular and by-appointment remote/in-person office hours let students access expert help when needed.

  1. Davis, J., Edgar, T., Graybill, R., Korambath, P., Schott, B., Swink, D., & Wetzel, J. (2015). Smart Manufacturing. Annual Review of Chemical and Biomolecular Engineering.
  2. Assante, D., Caforio, A., Flamini, M., & Romano, E. (2019). Smart Education in the Context of Industry 4.0. In 2019 IEEE Global Engineering Education Conference.
  3. Garcia-Munoz, S. (2019). https://github.com/salvadorgarciamunoz/pyphi.
  4. Casas-Orozco, D., Laky, D., Wang, V., Abdi, M., Feng, X., Wood, E., ... & Nagy, Z. K. (2021). PharmaPy: An Object-oriented Tool for the Development of Hybrid Pharmaceutical Flowsheets. Computers & Chemical Engineering.


11:40am - 12:00pm

Using realistic process design problems in chemical engineering education

Nagma Zerin

Johns Hopkins University, United States of America

Mass and Energy Balances (MEB) is generally the first core course in the chemical engineering major. It introduces the fundamentals of process design, which primarily involve conducting generation-consumption analysis, drawing block flow diagrams for processes, and performing mass and energy balances on different process units. Although traditional problem-solving through lectures and recitation sessions helps enhance conceptual understanding, students often feel disconnected from the real-life applications of these concepts. To address this issue, project-enhanced learning has been incorporated into a lecture-based MEB course offered in the Chemical and Biomolecular Engineering (ChemBE) department of a large R1 university in the United States. Students collaborate in groups of 3-4 to solve a realistic and meaningful process design problem. An example project is the production of the Active Pharmaceutical Ingredient (API) for the non-steroidal anti-inflammatory drug ibuprofen. The student groups are given a hypothetical scenario in which they work on the design team of a pharmaceutical company. Colleagues from the pilot plant observe that current API production falls short of the target of 10 g of API per hour, and the design team is assigned to analyze the process and identify the issue. Students obtain computational solutions for the flow rates in the different process units using either Excel or Python, which allows them to test the impact of various design changes on the final API production rate. In line with the company's goal of moving towards more sustainable API production, the student groups also assess the environmental impacts associated with the current process and reflect on strategies to mitigate them. Additionally, the student groups address the issue of a potential rise in the API selling price due to a financial loss in the company, which could increase the final cost of the medicine and reduce its accessibility for lower-income individuals and people of color. Collaborating on realistic projects like this not only improves students' critical thinking, problem-solving, and teamwork skills but also raises their ethical and social awareness, which is important training for a rising chemical engineer. Therefore, it is imperative to implement project-enhanced learning on a larger scale in chemical engineering education.
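
A minimal sketch of the kind of Python calculation students might set up is shown below; the simplified flowsheet (reactor with recycle), the 1:1 stoichiometry, the 40% single-pass conversion, and the 90% recycle fraction are hypothetical illustrations, not the actual project specification.

```python
import numpy as np

# Hypothetical basis: limiting reagent A is converted to API with 1:1 stoichiometry,
# 40% single-pass conversion, and 90% of unreacted A is recycled from the separator.
# For simplicity the molar masses of A and API are assumed equal.
X_conv = 0.40          # single-pass conversion (assumed)
recycle_frac = 0.90    # fraction of unreacted A recycled (assumed)
P_target = 10.0        # g/h of API required

# Unknowns: x = [F_fresh, N_reactor_inlet, R_recycle]   (all in g/h of A-equivalents)
A = np.array([
    [1.0, -1.0, 1.0],                            # mixer:     F - N_in + R = 0
    [0.0, X_conv, 0.0],                          # reactor:   X * N_in = P_target
    [0.0, -recycle_frac * (1 - X_conv), 1.0],    # separator: R = 0.9 * (1 - X) * N_in
])
b = np.array([0.0, P_target, 0.0])

F, N_in, R = np.linalg.solve(A, b)
print(f"fresh feed = {F:.2f} g/h, reactor inlet = {N_in:.2f} g/h, recycle = {R:.2f} g/h")
```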

Reference:

Zerin N. Project-Enhanced Learning in the Mass and Energy Balances (MEB) Course. Chemical Engineering Education. 2024; 58 (3): 201-204. DOI: 10.18260/2-1-370.660-134209.

 
11:00am - 12:00pmT9: PSE4Food and Biochemical - Session 4
Location: Zone 3 - Room E031
Chair: Dongda Zhang
Co-chair: Simen Akkermans
 
11:00am - 11:20am

Metabolic network reduction based on Extreme Pathway sets

Wannes Mores, Satyajeet S. Bhonsale, Filip Logist, Jan F.M. Van Impe

BioTeC+, KU Leuven, Belgium

The use of metabolic networks is extremely valuable for the design and optimisation of bioprocesses, as they provide great insight into cellular metabolism. In model-based bioprocess optimisation, they have been used successfully, enabling better (economic) objective performance through more accurate network-based models. One of the drawbacks of using metabolic networks is their underdeterminacy, leading to non-unique flux distributions for a given set of measurements. Flux Balance Analysis (FBA) overcomes this issue by assuming that the cell tries to fulfil a certain objective function. However, for metabolic networks of higher complexity, FBA can still have non-unique solutions to the LP [1]. Metabolic network reduction can greatly reduce this effect, but can be difficult when data are limited.
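
As background for readers less familiar with FBA: it amounts to the linear programme sketched below, maximising an assumed cellular objective subject to steady-state stoichiometry. The toy network is purely illustrative and unrelated to the E. coli model used in this study; note how the optimal flux split is non-unique, which is exactly the underdeterminacy discussed above.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A_ext -> A -> B -> biomass, with a bypass A -> C -> B
# Columns (reactions): v0 uptake, v1 A->B, v2 A->C, v3 C->B, v4 biomass drain of B
# Rows (internal metabolites): A, B, C must be at steady state (S v = 0)
S = np.array([
    [1, -1, -1,  0,  0],   # A
    [0,  1,  0,  1, -1],   # B
    [0,  0,  1, -1,  0],   # C
])
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000), (0, 1000)]  # uptake capped at 10

# FBA: maximise the biomass flux v4 (linprog minimises, hence the sign flip)
c = np.zeros(5)
c[4] = -1.0
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")

print("biomass flux:", -res.fun)
# The split between v1 and the bypass v2/v3 is not unique: any split summing to the
# uptake gives the same objective, illustrating the non-uniqueness mentioned above.
print("one optimal flux distribution:", np.round(res.x, 3))
```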

Structural analysis of the metabolic network through Elementary Flux Modes or Extreme Pathways can help find relevant information in the network. Recently, [2] defined a selection procedure for EFMs to find macroscopic relationships between metabolites. This work expands on this selection concept, presenting a network reduction approach based on the active EPs for a given set of measurements. Many of the pathways present in the network will not be active during the process and a significantly smaller network can therefore be constructed, reducing the underdeterminacy significantly.

The novel approach to network reduction is then applied to a case study of oxygen-limited Escherichia coli. The vast set of EPs is generated for the metabolic network of E. coli. From this set, the most informative EPs are selected based on in-silico data and a smaller network is constructed using only the reactions active in those EPs. This leads to much lower complexity metabolic networks while keeping the necessary information on cellular metabolism for the given process.

References

[1] Mahadevan, R., & Schilling, C. H. (2003). The effects of alternate optimal solutions in constraint-based genome-scale metabolic models. Metabolic engineering, 5(4), 264-276.

[2] Maton, M., Bogaerts, P., & Wouwer, A. V. (2022). A systematic elementary flux mode selection procedure for deriving macroscopic bioreaction models from metabolic networks. Journal of Process Control, 118, 170-184.



11:20am - 11:40am

Context based multi-omics pathway embeddings

Lennart Otte1, Christer Hogstrand2, Miao Guo1, Adil Mardinoglu1,2

1King's College London, United Kingdom; 2Science for Life Laboratory, KTH - Royal Institute of Technology, Stockholm, Sweden

Applications of machine learning algorithms in chemistry and biology have inspired numerous vector embeddings for biological entities (metabolites, proteins, genes, enzymes, etc.), but because of their different specialisations these embeddings often disregard contextual information. Disease and biosynthesis pathways are being elucidated in increasingly complex ways, covering various types of omics data and intricate sequential signalling and reaction pathways. We propose an embedding that relates sequential and multi-modal measurements. Thinking of modalities as directions in space, rather than as unrelated entities of different types, can give a more unified idea of what a molecule or a gene is. Just as in the famous natural language processing example where the vector king – man + woman = queen implies a direction representing gender, we can think of a gene and a protein in a similar way, where a gene implies something about a protein and vice versa.

Using a model architecture inspired by natural language processing, a pathway sequence is broken down into context pairs of the same or different omics, and the model then places neighbouring pathway steps in close proximity in the embedding space. Because the embeddings are produced from pathway sequences, they can be used to optimise reaction sequences (retrosynthesis) in the microbiome or even in human health. In these settings, numerous competing pathways interact and can support or deprive each other of substrates. A model that optimises these processes has to respect the relationships between pathways and between compounds in different pathways. Therefore, an embedding in which interacting proteins, genes and molecules all reside in the same space, making their interactions easy to infer, can achieve higher performance than other embeddings.

Our model aims to extend flux balance analysis (FBA) by transferring it to an ML-based environment in which gene knockouts, but also insertions and novel pathways, are predicted computationally and do not necessarily rely on a pre-existing characterisation of a strain's pathways. Owing to this connection to FBA, we can use validation of FBA results to validate our model: when metabolites, genes, proteins, etc. align closely, we expect them to respond in similar ways to knockouts; that is, when we perturb the system, similarly reacting entities should be aligned, as they must be connected through a pathway. Through benchmark classifiers we compare different embeddings in their ability to identify pathways, predict co-expression and find targets for optimising pathways or identifying drug targets. We find that our embedding performs better in these tasks and can thus lay the groundwork for elucidating new, unknown pathways, optimising product formation and identifying interactions in disease.
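
A minimal sketch of the skip-gram analogy invoked above, using gensim's Word2Vec on invented multi-omics pathway sequences with modality-prefixed tokens; this is a generic illustration of context-pair training, not the authors' architecture.

```python
from gensim.models import Word2Vec

# Hypothetical pathway sequences mixing modalities (gene -> protein -> metabolites)
pathways = [
    ["gene:pfkA", "prot:PFK", "met:F6P", "met:FBP", "met:pyruvate"],
    ["gene:pykF", "prot:PYK", "met:PEP", "met:pyruvate", "met:acetyl-CoA"],
    ["gene:gltA", "prot:CS", "met:acetyl-CoA", "met:citrate"],
    ["gene:pfkA", "prot:PFK", "met:F6P", "met:FBP", "met:DHAP"],
]

# Skip-gram model: entities appearing in the same pathway context end up close in space
model = Word2Vec(sentences=pathways, vector_size=16, window=2, min_count=1,
                 sg=1, epochs=200, seed=0)

print(model.wv["met:pyruvate"][:4])                 # part of one embedding vector
print(model.wv.most_similar("prot:PFK", topn=3))    # nearest neighbours across modalities
```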



11:40am - 12:00pm

Metabolic optimization of Vibrio natriegens based on metaheuristic algorithms and the genome‐scale metabolic model

YiXin Wei1,2, Tong Qiu1,2, Zhen Chen1

1Department of Chemical Engineering, Tsinghua University, Beijing 100084, China; 2Beijing Key Laboratory of Industrial Big Data System and Application, Tsinghua University, Beijing 100084, China

In recent years, the burgeoning interest across various sectors in products derived from microbial production has significantly propelled the evolution of the field of metabolic engineering. This discipline aspires to maximize the production of specific target compounds through optimal designs of microbial cell factories [1]. Escherichia coli, Saccharomyces cerevisiae (commonly employed for ethanol production), and Corynebacterium glutamicum (frequently used for amino acid production) are the most prevalent biological hosts for constructing these cell factories. Vibrio natriegens is a Gram-negative bacterium known for its remarkable growth rate, holding promise as a prospective standard biotechnological host for laboratory and industrial bio-production, specifically tailored to produce target metabolites [2].

A genome-scale metabolic model (GSMM) is a cellular model constructed using mathematical methods, encompassing the known metabolites within a cell, the enzymatic reactions between metabolites, and the genes that express the corresponding enzymes. Widely utilized in the field of metabolic engineering, GSMMs are instrumental in the computation, simulation, and analysis of cellular behavior under different gene expression conditions and environmental variations, predicting cell growth and specific metabolite production rates, and serving as essential tools for gene essentiality analysis and cell viability prediction. With the flourishing development of genome sequencing technologies, the quantity and quality of available GSMMs have been steadily increasing. In 2023, Coppens et al. [2] developed the first GSMM for Vibrio natriegens, a model that showed good consistency with experimental data. Given the progress in the field of bioinformatics, and considering the efficiency and effectiveness of metaheuristic algorithms in reaching global optimal solutions, scientists have begun to employ metaheuristic algorithms in the analysis of GSMMs for hosts such as Escherichia coli [3].

In this study, we combine different metaheuristic algorithms such as particle swarm optimization (PSO) with the GSMM of Vibrio natriegens, using metaheuristic algorithms to explore optimal gene knockout strategies that can achieve the maximum production flux of specific metabolites. The solution of GSMM is based on analysis methods such as flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA). Simulation results demonstrate that the hybrid approach proposed in this study effectively enhances the production capacity of specific target metabolites in Vibrio natriegens, offering strategic guidance for gene knockouts in practical experimental testing.
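
The sketch below only illustrates the mechanics of evaluating knockouts by fixing flux bounds to zero and re-solving an FBA linear programme on a toy network; it is unrelated to the V. natriegens GSMM, and in the study a metaheuristic such as PSO searches the combinatorial knockout space while MOMA replaces the plain FBA evaluation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric network (illustrative only):
# reactions: v0 substrate uptake, v1 S->B, v2 B->biomass, v3 S->P, v4 P secretion
S_mat = np.array([
    [1, -1,  0, -1,  0],   # S
    [0,  1, -1,  0,  0],   # B
    [0,  0,  0,  1, -1],   # P
])
base_bounds = [(0, 10), (0, 100), (0, 100), (0, 100), (0, 100)]

def fba(objective_idx, knockouts=()):
    """Maximise one flux at steady state; knockouts force the given fluxes to zero."""
    bounds = [(0.0, 0.0) if i in knockouts else b for i, b in enumerate(base_bounds)]
    c = np.zeros(5)
    c[objective_idx] = -1.0
    res = linprog(c, A_eq=S_mat, b_eq=np.zeros(3), bounds=bounds, method="highs")
    return -res.fun if res.success else 0.0

# Exhaustive single-knockout screen of the internal reactions; at genome scale a
# metaheuristic (e.g. PSO over binary knockout vectors) replaces this loop
for ko in [(1,), (2,), (3,)]:
    print(f"knockout of reaction {ko[0]}: "
          f"max biomass = {fba(2, ko):.1f}, max product secretion = {fba(4, ko):.1f}")
```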

References:

[1] Bai L, You Q, Zhang C, Sun J, Liu L, et al. 2023. Systems Microbiology and Biomanufacturing 3: 193-206.

[2] Coppens L, Tschirhart T, Leary DH, Colston SM, Compton JR, et al. 2023. Molecular Systems Biology 19: e10523.

[3] Lee MK, Mohamad MS, Choon YW, Mohd Daud K, Nasarudin NA, et al. 2020. A Hybrid of Particle Swarm Optimization and Minimization of Metabolic Adjustment for Ethanol Production of Escherichia Coli, Cham.

 
11:00am - 12:00pmT10: PSE4BioMedical and (Bio)Pharma -Including keynote by JNJ
Location: Zone 3 - Room E030
Chair: Dimitrios I. Gerogiorgis
Co-chair: Satyajeet Bhonsale
Keynote by Geert Crassaerts (JNJ)

Transforming Pharmaceutical Development: The Power of AI and Predictive Modeling

Presenters: Niels Vandervoort (Director digital transformation J&J) and Geert Craessaerts (Director Data&Process engineering)

In today's rapidly evolving pharmaceutical landscape, J&J is presented with exciting opportunities alongside significant challenges. As our new molecules grow in complexity and the development cycles for new medicines become much shorter, it is clear that we must adapt our ways of developing new medicines to thrive in this dynamic environment. We will need a paradigm shift from iterative insights, where we learn by doing, to predictive insights with which we can anticipate product and process behavior at scale or in the patient. Therefore, digital transformation is not only a technological revolution; it is a business transformation in which we rethink how we develop our products and how digital can be an enabler and a catalyst.

By harnessing the power of new and advanced digital technologies like AI, and by using the predictive power of advanced modelling, we will be able to leverage historical insights much better, and we can kick-start our product and process design from a much better place of knowledge. Additionally, when we can generate the right insights faster through predictive models, or by executing the right experiments through model-driven DOE and experimental iterations, we will be able to make better product and process design decisions much earlier in the development lifecycle. Pharmaceutical companies typically make use of a wide range of predictive modelling technologies, including traditional mechanistic methods, data-driven techniques, and hybrid models. We select our modelling technology based on what we know about the science and the physics, and on the availability of data. The presentation will highlight several examples from drug substance and drug product manufacturing, showing how these flexible modelling technologies help us design processes better and faster. There is great momentum for the broader use of AI in pharmaceutical development, given the support from authorities like the FDA. However, when entering the world of AI process modelling, we should emphasize that transparency and explainability in AI systems are crucial for gaining trust from stakeholders and meeting regulatory requirements.
 

Keynote by Geert Crassaerts

Geert Crassaerts

JNJ

 
 
12:00pm - 2:00pmLunch
Location: Zone 2 - Cafetaria
1:00pm - 2:00pmIChemE Panel
Location: Zone 2 - B012
This year, ESCAPE35 will be co-hosting, together with IChemE's CAPE SIG, a panel discussion under the theme "Career Options within the CAPE landscape". The panel will feature a diverse group of well-respected CAPE individuals from industry and academia, across different career stages. The discussion will focus on career possibilities within industry and academia for individuals in the CAPE community, and will be followed by a networking event.
1:00pm - 2:00pmPoster Session 2
Location: Zone 2 - Cafetaria
 

Rebalancing CAPEX and OPEX to Mitigate Uncertainty and Enhance Energy Efficiency in Renewable Energy-Fed Chemical Processes

Ghida Mawassi, Alessandro Di Pretoro, Ludovic Montastruc

LGC (INP - ENSIACET), France

The conventional approach in process engineering design has always been based on exploiting the degrees of freedom of a process system to optimize the operating conditions with respect to a selected objective function, usually defined as the best compromise between capital and operating expenses. However, although the capital cost item played a role of major importance while the industrial sector was focused on expanding production capacity, the operating aspect is becoming ever more predominant in the current industrial landscape, owing to increasing attention to carbon-free energy sources and a tighter balance between supply and demand. In essence, reliance on fluctuating and intermittently available energy resources - renewable resources - is increasing, making it essential to maximize product output while minimizing energy consumption.

Based on these observations, it appears evident that accepting higher investment in exchange for improved process performance could be a fruitful way to further improve the efficiency of energy-intensive, renewables-fed chemical processes. To explore the potential of this reconsideration of the design paradigm from a quantitative perspective, a dedicated biogas-to-methanol case study was set up for comparison. The reaction and separation sections for grade-AA biomethanol production were designed and simulated based on minimization of the total annualized cost on the one hand and of the utility cost on the other, and the resulting layouts were compared. The analysis focused on the most energy-intensive section of the process, the purification section; to this end, the distillation columns were intentionally oversized. Although this approach increased the initial investment cost, it led to significant energy savings.

The investment increase for each layout and the corresponding energy savings were assessed and analyzed. The simulation results show relevant improvements, with energy savings of 15% with respect to the conventional layout. As a consequence, the possibility of establishing a new break-even operating point between equipment-related and utility-related expenses as the optimal decision at the design stage is worth analyzing in greater detail in future studies. Notably, this break-even point depends strongly on both the cost and the availability of energy: in scenarios where energy availability is limited or costs are higher, the advantages of oversizing become more pronounced.
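
To make the CAPEX/OPEX trade-off concrete, a back-of-the-envelope comparison is sketched below; all figures except the 15% utility saving quoted above are invented for illustration and are not the case-study values.

```python
def annualised_cost(capex, opex_per_year, interest=0.08, lifetime_years=10):
    """Total annualised cost = annualised CAPEX (capital recovery factor) + yearly OPEX."""
    crf = interest * (1 + interest) ** lifetime_years / ((1 + interest) ** lifetime_years - 1)
    return capex * crf + opex_per_year

# Invented figures for a purification section (EUR): the oversized layout costs 20%
# more to build but cuts utility spending by 15%, as in the savings quoted above.
conventional = annualised_cost(capex=2.0e6, opex_per_year=1.5e6)
oversized = annualised_cost(capex=2.4e6, opex_per_year=0.85 * 1.5e6)

print(f"conventional layout: {conventional / 1e6:.2f} MEUR/y")
print(f"oversized columns  : {oversized / 1e6:.2f} MEUR/y")
```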



Operational and Economic Feasibility of Green Solvent-Based Extractive Distillation for 1,3-Butadiene Recovery: A Comparison with Conventional Toxic Solvents

João Pedro Gomes1, Rodrigo Silva2, Clemente Nunes3, Domingos Barbosa1

1LEPABE / ALiCE, Faculdade de Engenharia da Universidade do Porto; 2Repsol Polímeros, S.A., Complexo Petroquímico; 3CERENA, Instituto Superior Técnico

The increasing demand for safer and environmentally friendly processes in the petrochemical industry requires replacing harmful solvents with safer alternatives. One such process, extractive distillation (ED) of 1,3-butadiene, typically employs potentially toxic solvents such as N,N-dimethylformamide (DMF) and N-methyl-2-pyrrolidone (NMP). Although highly effective, these solvents may pose significant health and environmental risks. This study explores the viability of using propylene carbonate (PC), a green solvent, as a substitute in the butadiene ED process.

A comprehensive simulation study using Aspen Plus® was conducted to model the PC behavior in comparison with DMF (Figure 1). Due to the scarcity of experimental data for the PC/C4 hydrocarbons system, it was crucial to have a reliable prediction of vapor-liquid equilibrium (VLE) to derive accurate pairwise interaction parameters (bij) and ensure a more realistic representation of molecular interactions. Initially, the COSMO-RS (Conductor-like Screening Model for Real Solvents) was employed, leveraging its quantum chemical foundation to predict VLE based on molecular surface polarization charge densities. Subsequently, new energy interaction parameters were obtained for the Non-Random Two-Liquid (NRTL) model, coupled with the Redlich-Kwong (RK) equation of state, a model that is particularly effective for systems with non-ideal behavior, such as those involving polar compounds, strong molecular interactions (like hydrogen bonding), and highly non-ideal mixtures, making it particularly well suited for the systems encountered in extractive distillation processes. Key operational parameters, such as energy consumption, solvent recovery, and product purity, were evaluated to assess the process efficiency and feasibility. Additionally, an energy analysis of the process with the new solvent was conducted to evaluate its energy-saving potential. This was achieved using the pinch methodology from the Aspen Energy Analysis tool to optimize the existing process for the new solvent. Economic evaluations, including capital (CapEx) and operational costs (OpEx), were carried out to provide a holistic comparison between the solvents.

The initial analysis showed that the green solvent has a slightly lower selectivity than the conventional, potentially toxic, solvents, along with a higher boiling point. As a consequence, a higher solvent-to-feed ratio may be required to achieve the desired separation efficiency. The higher boiling point will also require increased heat duties, leading to higher overall energy consumption. Nevertheless, the study underscores the potential of this green solvent to improve the sustainability of petrochemical processes while striving to maintain economic feasibility.



Optimizing Heat Recovery: Advanced Design of Integrated Heat Exchanger Networks with ORCs and Heat Pumps

Zinet Mekidiche Martínez, José Antonio Caballero Suárez, Juan Labarta

Universidad de Alicante, Spain

An advanced model has been developed to facilitate the simultaneous design of heat exchanger networks integrated with organic Rankine cycles (ORCs) and heat pumps, addressing two primary objectives. First, the model utilizes heat pumps to reduce reliance on external services by enhancing heat recovery within the system. Secondly, ORCs capitalize on residual heat streams or generate additional energy, effectively integrating with the existing heat exchanger network.

Effective integration of these components requires careful selection of the working fluids for the ORCs and heat pumps, as well as determination of their optimal operating temperatures to achieve maximum efficiency. The heat exchanger network, in which inlet and outlet temperatures are not necessarily fixed, the number of organic Rankine cycles and heat pumps, and their operating conditions are all optimized simultaneously.

This method aims to minimize costs associated with external services, electricity, and equipment such as compressors and turbines. The approach leads to the design of a heat exchanger network that optimizes both the use of residual heat streams and the integration of other streams within the system. This not only enhances operational efficiency and sustainability but also demonstrates the potential of incorporating an Organic Rankine Cycle (ORC) with various energy streams, not limited solely to residual heat.



CO2 Recycling Plant for Decarbonizing Hard-to-Abate Industries: Empirical Modelling and Process Design of a CCU Plant - A Case Study

Jose Antonio Abarca, Stephanie Arias-Lugo, Lucia Gomez-Coma, Guillermo Diaz-Sainz, Angel Irabien

Departamento de Ingenierías Química y Biomolecular, Universidad de Cantabria

Achieving a net-zero CO2 society by 2050 is an ambitious target set by the European Commission Green Deal. Reaching this goal will require implementing various strategies to reduce CO2 emissions. Conventional decarbonization approaches are well-established, such as using renewable energies, electrification, and improving energy efficiency. However, different industries, known as "hard-to-abate sectors," face unique challenges due to the inherent CO2 emissions from their processes. For these sectors, alternative strategies must be developed. Carbon Capture and Utilization (CCU) technologies offer a promising and sustainable solution by capturing CO2 and converting it into valuable chemicals, thereby contributing to the circular economy.

This study focuses on designing a CO2 recycling plant for the cement or textile industry as a case study. The proposed plant integrates a CO2 capture process using membrane technology and a utilization stage where CO2 is electrochemically converted into formic acid. During the capture stage, several experiments are carried out at varying inlet concentrations to optimize process parameters and maximize the CO2 output flow. The capture potential of a membrane is determined by its CO2 permeability and selectivity, making highly selective membranes essential for efficient CO2 separation from the flue gas stream. Key variables affecting the capture process include flue gas concentration, inlet pressure, and total membrane area. Previous laboratory studies have demonstrated that a minimum CO2 concentration of 50 % and a flow rate of 15 mL min-1 cm-2 electrode are required for an efficient CO2 conversion to formic acid [1]. Thus, these variables are crucial for an effective integration of the capture and utilization stages.

For the utilization stage, a three-compartment electrochemical cell is proposed for the direct production of formic acid via CO2 electroreduction. The primary operational variables influencing formic acid production include the CO2 inlet flow rate and composition (determined by the capture stage), applied current density, inlet stream humidity, and water flow rate in the central compartment [2].

The coupling of capture and utilization stages is necessary for the development of CO2 recycling plants. However, this coupling remains at an early stage, especially for the integration of membrane capture technologies and CO2 electroreduction. This work aims to empirically model both the CO2 capture and electroreduction systems using neural networks, resulting in an integrated predictive model for the entire CO2 recycling plant. This model will optimize the performance of the capture-utilization system, facilitating the design of a sustainable process for CO2 capture and conversion into formic acid. Ultimately, this approach will contribute to reducing the product's carbon footprint.

Acknowledgments

The authors acknowledge the financial support received from the Spanish State Research Agency through the project PLEC2022-009398 MCIN/AEI/10.13039/501100011033 and Unión Europea Next Generation EU/PRTR. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. Jose Antonio Abarca acknowledges the predoctoral research grant (FPI) PRE2021-097200.

[1] G. Diaz-Sainz, J. A. Abarca, M. Alvarez-Guerra, A. Irabien, Journal of CO2 Utilization. 2024, 81, 102735

[2] J. A. Abarca, M. Coz-Cruz, G. Diaz-Sainz, A. Irabien, Computer Aided Chemical Engineering, 2024, 53, pp. 2827-2832



Integration of direct air capture with CO2 utilization technologies powered by renewable energy sources to deliver negative carbon emissions

Calin-Cristian Cormos1, Arthur-Maximilian Bathori1, Angela-Maria Kasza1,2, Maria Mihet2, Letitia Petrescu1, Ana-Maria Cormos1

1Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, Romania; 2National Institute for Research and Development of Isotopic and Molecular Technologies, Romania

Reducing greenhouse gas emissions is an important environmental element in actively combating global warming and climate change. To achieve climate neutrality by the middle of this century, several options are envisaged, such as increasing the share of renewable energy sources (e.g., solar, wind, biofuels) to gradually replace fossil fuels, large-scale implementation of Carbon Capture, Utilization and Storage (CCUS) technologies, and improving the overall energy efficiency of both production and utilization steps. With respect to reducing the CO2 concentration in the atmosphere, Direct Air Capture (DAC) options are of particular interest and very promising for delivering negative carbon emissions. Negative carbon emissions are a key element for climate neutrality, balancing the remaining positive-emission systems and hard-to-decarbonize processes. The integration of renewable-powered DAC systems with CO2 utilization technologies can deliver negative carbon emissions while reducing the energy and economic penalties of such promising decarbonization processes.

This work evaluates the innovative, energy-efficient potassium-calcium looping cycle as a promising direct air capture technology integrated with various catalytic CO2 transformations into basic chemicals (e.g., synthetic natural gas, methanol). The integrated system is powered by renewable energy (for both heat and electricity requirements). The investigated DAC concept is set to capture 1 Mt/y CO2 with about a 75% carbon capture rate. A fraction of the captured CO2 stream (about 5 - 10%) is catalytically converted into synthetic methane or methanol using green hydrogen produced by water electrolysis, with the rest sent to geological storage. Conceptual design, process modelling and model validation, followed by overall energy optimization through thermal integration analysis, were the engineering tools used to assess the global mass and energy balances and to quantify key techno-economic and environmental performance indicators. The results show that the integrated DAC - CO2 utilization system, powered by renewable energy, performs promisingly in terms of delivering negative carbon emissions with reduced ancillary energy consumption. However, significant technological developments (e.g., scale-up, reduced solvent and sorbent make-up, better process intensification and integration, improved catalysts) are still needed to advance this innovative technology from the current state of the art to a relevant industrial scale.



Repurposing Existing Combined Cycle Power Plants with Methane Production for Renewable Energy Storage

Diego Santamaría, Antonio Sánchez, Mariano Martín

Department of Chemical Engineering, University of Salamanca, Plz Caidos 1-5, 37008, Salamanca, Spain

Nowadays, various technologies exist to generate renewable energy, such as solar, wind and hydroelectric power. However, most of these energy sources fluctuate with the weather. Reliable energy storage is essential to promote a higher share of renewable energy in the current energy system, and it also supports energy security. Power-to-Gas technologies store renewable energy in the form of gaseous chemicals. In this case, Power-to-Methane is the technology of choice since methane allows the use of existing infrastructure for its transport and storage.

This work proposes the integration and optimization of methane energy storage into existing combined cycle power plants. This involves introducing carbon capture systems and methane production while reusing the existing power production section. The process leverages renewable energy to produce hydrogen, which is then transformed into methane for easier storage. When energy demand arises, the stored methane is burned in the combined cycle power plant, producing two by-products: water and CO2. The water is collected and returned to the electrolyzer, while the CO2 is captured and combined with hydrogen to synthesize methane again (Ghaib & Ben-Fares, 2018). This results in a circular process that repurposes the existing infrastructure.

Two combustion methods, ordinary combustion and oxy-combustion (Elias et al., 2018), are optimized to evaluate both alternatives and their economic feasibility. In ordinary combustion, air is used as the oxidizer, while in oxy-combustion pure oxygen is employed, including the oxygen produced in the electrolyzer. However, CO2 recirculation is necessary in oxy-combustion to prevent excessive flame temperatures (Stanger et al., 2015). In addition, the potential energy storage capacity of the existing combined cycle power plants of a whole country, specifically Spain, is also analysed. This would avoid their decommissioning and reuse the natural gas distribution network, adapting it for use in conjunction with a renewable energy storage system.

References

Elias, R. S., Wahab, M. I. M., & Fang, L. (2018). Retrofitting carbon capture and storage to natural gas-fired power plants: A real-options approach. Journal of Cleaner Production, 192, 722–734.

Ghaib, K., & Ben-Fares, F.-Z. (2018). Power-to-Methane: A state-of-the-art review. Renewable and Sustainable Energy Reviews, 81, 433–446.

Stanger, R., Wall, T., Spörl, R., Paneru, M., Grathwohl, S., Weidmann, M., Scheffknecht, G., McDonald, D., Myöhänen, K., Ritvanen, J., Rahiala, S., Hyppänen, T., Mletzko, J., Kather, A., & Santos, S. (2015). Oxyfuel combustion for CO2 capture in power plants. International Journal of Greenhouse Gas Control, 40, 55–125.



Powering chemical processes with variable renewable energy: A case of iron making in steel industry

Dorcas Tuitoek, Daniel Holmes, Binjian Nie, Aidong Yang

University of Oxford, United Kingdom

The steel industry is responsible for ~8% of global energy demand and emits 7% of CO2 emissions annually [1]. Increased adoption of renewable energy in the iron-making process, the primary step of steel making, is one of the promising ways to decarbonise the industry. The intermittent nature of renewable energy, together with the difficulty of storing it, leads to a variable energy supply profile and necessitates a shift in the operating modes of manufacturing processes to make efficient use of renewable energy. Through dynamic simulation, this study explores a case of the direct reduction process, in which iron ore is charged to a shaft furnace reactor and reduced to solid iron with green hydrogen.
Existing mathematical modelling and simulation studies of the shaft furnace have only investigated its behaviour assuming constant gas and solid feed rates. Here, we simulate iron ore reduction in a 1D model using COMSOL Multiphysics, with intermittent hydrogen supply, to predict the effects of a time-varying hydrogen feed on the degree of iron ore reduction. The dynamic model of the counter-current moving bed captures chemical reaction kinetics, mass transfer, and heat transfer. With settings relevant to industrial-scale operations, our results show that the system can tolerate drops in the hydrogen feed rate of up to ~10% without a reduction in the metallisation rate of the product. To tolerate greater fluctuations of the H2 feed rate, strategies were tested that alter the residence time and change the thermal profile in the reactor, to ensure complete metallic iron formation.
These findings show that it is possible to operate a shaft furnace with a certain degree of hydrogen feed variability, hence providing an approach to mitigating the challenges of intermittent renewable energy supply as a solution to decarbonize industries.

1. International Energy Agency (IEA). Iron and Steel Technology Roadmap. Towards More Sustainable Steelmaking. https://www.iea.org/reports/iron-and-steel-technology-roadmap (2020).



Early-Stage Economic and Environmental Assessment for Emerging Chemical Technologies: Back-casting Approach

Yeonguk Kim, Dami Kim, Kosan Roh

Chungnam National University, Korea, Republic of (South Korea)

The emergence of alternative chemical technologies has made their reliable economic and environmental assessments indispensable for guiding future research and development. However, these assessments are inherently challenging due to the lack of comprehensive understanding and technical knowledge of such technologies, particularly at low technology readiness levels (TRLs). This knowledge gap complicates accurate predictions of their real-world performance, economics, and potential environmental impacts. To address these challenges, we adopt a back-casting approach to demonstrate a TRL-based early-stage evaluation procedure, as previously proposed by Roh et al. (2020, Green Chem. 22, 3842). In this work, we apply this framework to methanol production based on the reforming of natural gas, a mature chemical technology, to explore its suitability for evaluating emerging chemical technologies. The target technology is assumed to be at three distinct stages of maturity: the theoretical, intermediate, and engineering stages. We analyze economic and environmental indicators of the technology using the information available at each stage and then examine how closely the indicators calculated at the theoretical and intermediate stages match those obtained at the engineering stage. The analysis shows that the performance indicators are lowest at the theoretical stage because it relies solely on reaction stoichiometry. The intermediate stage, despite considering various factors, yields slightly higher performance indicators than the engineering stage due to the lack of process optimization. The outcomes of this study enable a proactive assessment of emerging chemical technologies, providing insights into their feasibility at various stages of development.



A White-Box AI Framework for Interpretable Global Warming Potential Prediction

Jaewook Lee, Ethan Errington, Miao Guo

King's College London, United Kingdom

The transformation of the chemical industry towards sustainable manufacturing requires reliable and robust decision-making tools involving Life Cycle Assessment (LCA). LCA offers a standardised method to evaluate the environmental profiles of chemical processes and products. However, with the emergence of numerous novel chemicals and processes, existing LCA inventory databases are increasingly resource-intensive to develop, often delayed in reporting, and suffer from data gaps. Research efforts have been made to address these knowledge gaps by developing predictive models that can estimate LCA properties based on chemical structures. However, previously published research has been hampered by limited dataset availability and reliance on complex black-box models such as Deep Neural Networks (DNNs), which often provide low predictive accuracy and lack the interpretability needed for industrial adoption. Understanding the rationale behind model predictions is crucial, particularly in industrial applications where decision-making relies on both accuracy and transparency. In this study, we introduce a Kolmogorov–Arnold Network (KAN) based model for LCA prediction of emerging chemicals, designed to bridge the gap between accuracy and interpretability by incorporating domain knowledge into the learning process.

We utilized 15 key LCA categories from the Ecoinvent v3.8 database, comprising 2,239 data points. To address large data scale variation, we applied logarithmic transformation. Using chemical structures represented as SMILES, we converted them into MACCS keys (166-bit fingerprints) and Mordred descriptors (1,825 physicochemical properties), incorporating features like molecular weight and hydrophobicity. These features were used to train a KAN, Random Forest, and DNN to predict LCA values across all categories. KAN consistently outperformed Random Forest and DNN models in 12 out of 15 LCA categories, achieving an average R² value of 74% compared to 66% and 67% for Random Forest and DNNs, respectively. For critical categories like Global Warming Potential, Terrestrial Ecotoxicity, and Ozone Formation–Human Health, KAN achieved high predictive accuracies of 0.84, 0.86, and 0.87, respectively, demonstrating an 8% improvement in overall accuracy. Our feature analysis indicated that MACCS keys provided nearly the same predictive power as Mordred descriptors, despite containing significantly fewer features. Furthermore, we identified that retaining data points with extremely large LCA values (top 3%) could degrade model performance, underscoring the importance of careful data curation. In terms of model interpretability, the use of Gini importance and SHapley Additive exPlanations (SHAP) revealed that functional groups such as halogens, oxygen, and methyl groups had the most significant impact on LCA predictions, aligning with domain knowledge. The SHAP analysis further highlighted that KAN was able to capture more complex structure-property relationships compared to conventional models.
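To make the featurization and baseline-modelling step concrete, the sketch below uses RDKit MACCS keys and a random forest (one of the baselines mentioned above) on a handful of placeholder SMILES and invented GWP values; it is not the KAN itself nor the Ecoinvent data, only an illustration of the feature pipeline and the Gini-importance readout.

    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import MACCSkeys
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder SMILES and hypothetical GWP values [kg CO2-eq/kg]; the study uses Ecoinvent v3.8 targets
    smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN", "ClCCl", "CCOC(=O)C", "CCCCCC", "Oc1ccccc1"]
    gwp = np.array([1.6, 2.1, 1.4, 2.8, 3.5, 1.9, 1.2, 2.4])

    def featurize(smi):
        # 166-bit MACCS structural fingerprint (RDKit stores 167 bits, bit 0 unused)
        return np.array(list(MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(smi))))

    X = np.array([featurize(s) for s in smiles])
    y = np.log10(gwp)  # log transform to handle the large scale variation of LCA values

    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    top_bits = np.argsort(model.feature_importances_)[::-1][:5]
    print("most influential MACCS bits (Gini importance):", top_bits)

In the full workflow, the same feature matrix would feed the KAN and a DNN for comparison, Mordred descriptors would be added alongside the MACCS keys, and SHAP values would complement the Gini importances shown here.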

In conclusion, the application of the KAN model for LCA predictions provides a robust and accurate framework for evaluating the environmental impacts of emerging chemicals. By integrating domain-specific knowledge, this approach not only enhances the reliability of LCA prediction but also offers deeper insights into the structural drivers of environmental outcomes. Its demonstrated success in identifying key molecular features makes it a valuable tool for accelerating sustainable innovations in both chemical process transformations and drug development, where precise environmental assessments are essential.



Data-driven approach for reaction mechanism identification using neural ODEs

Junu Kim1,2, Itushi Sakata3, Eitaro Yamatsuta4, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Auxilart Co., Ltd., Tokyo, 104-0061, Japan; 3Institute of Physical and Chemical Research, Hyogo, 660-0813, Japan; 4Independent researcher, Japan

In the fields of reaction engineering and process systems engineering, mechanistic models have traditionally been the focus due to their explainability and extrapolative power, as they are based on fundamental principles governing the system. For chemical reactions, kinetic studies are crucial in developing these mechanistic models, providing insights into reaction mechanisms and estimating model parameters [1, 2]. However, kinetic studies often require extensive cycles of experimental data acquisition, reaction pathway generation, model construction, and parameter estimation, making the process laborious and time-consuming. In response to these challenges, machine learning techniques have gained attention. A recent approach involves using neural network models trained on simulation data to classify reaction mechanisms [3]. While effective, these methods demand vast amounts of training data, and expanding the reaction boundaries further increases the data requirements.

In this study, we present a direct, data-driven approach to identifying reaction mechanisms and constructing mechanistic models from experimental data without the need for large datasets. As an initial attempt, we focused on amination and Grignard reactions, which are widely used in chemical and pharmaceutical synthesis. Since chemical reactions can be expressed as differential equations, our hypothesis is that by calculating first- or higher-order derivatives directly from experimental data, we can estimate the relationships between different chemical compounds in the system and identify the reaction mechanism, order, and parameter values. The major challenge arises with real experimental data, where the number of data points is often limited (e.g., around ten), making it difficult to estimate differential values directly. To address this, we employed neural ordinary differential equations (neural ODEs) to effectively interpolate these sparse datasets [4]. By applying neural ODEs, we were able to generate interpolated data, which enabled the calculation of derivatives and the development of mechanistic models that accurately reproduce the observed data. For future work, we plan to validate our methodology across a broader range of reactions and further automate the process to enhance efficiency and applicability.
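A minimal sketch of the interpolation step follows, using torchdiffeq and a synthetic A -> B -> C concentration series as a stand-in for real sparse measurements; the network size, learning rate, and data are all illustrative assumptions, not the authors' settings.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint

    # Sparse synthetic concentration data for a hypothetical A -> B -> C sequence
    t_obs = torch.tensor([0.0, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0])
    y_obs = torch.tensor([[1.00, 0.00, 0.00],
                          [0.61, 0.32, 0.07],
                          [0.37, 0.45, 0.18],
                          [0.14, 0.43, 0.43],
                          [0.05, 0.31, 0.64],
                          [0.01, 0.12, 0.87],
                          [0.00, 0.02, 0.98]])

    class RateNet(nn.Module):
        # Neural ODE right-hand side: dy/dt = f_theta(y)
        def __init__(self, n_species=3):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_species, 32), nn.Tanh(), nn.Linear(32, n_species))
        def forward(self, t, y):
            return self.net(y)

    func = RateNet()
    optimizer = torch.optim.Adam(func.parameters(), lr=1e-2)
    for epoch in range(1500):
        optimizer.zero_grad()
        y_pred = odeint(func, y_obs[0], t_obs)      # integrate the learned dynamics to the sample times
        loss = ((y_pred - y_obs) ** 2).mean()
        loss.backward()
        optimizer.step()

    # Dense interpolation and derivative estimates used to hypothesise mechanism, order, and parameters
    t_dense = torch.linspace(0.0, 8.0, 200)
    with torch.no_grad():
        y_dense = odeint(func, y_obs[0], t_dense)
        dydt = func(t_dense, y_dense)

The derivative estimates dydt, evaluated on the smooth interpolant rather than the raw ten-or-so data points, are what allow the relationships between species to be read off and turned into a candidate mechanistic model.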

References

[1] P. Sagmeister et al., React. Chem. Eng. 2023, 8, 2818. [2] S. Diab et al., React. Chem. Eng. 2021, 6, 1819. [3] J. Bures and I. Larrosa, Nature 2023, 613, 689. [4] R. T. Q. Chen et al., NeurIPS 2018.



Generalised Disjunctive Programming for Process Synthesis

Lukas Scheffold, Erik Esche

Technische Universität Berlin, Germany

Automating process synthesis presents a formidable challenge in chemical engineering. Particularly challenging is the development of frameworks that are both general and accurate, while remaining computationally tractable. To achieve generality, a building block-based modelling approach was proposed in previous contributions by Kuhlmann and Skiborowski [1] and Krone et al. [2]. This model formulation incorporates Phenomena-based Building Blocks (PBBs), capable of depicting a wide array of separation processes [1], [3]. To maximize accuracy, the PBBs are interfaced with CAPE-OPEN thermodynamics, allowing for detailed thermodynamic models [2] within the process synthesis problem. However, the pursuit of generality and accuracy introduces increased model complexity and poses the risk of combinatorial explosion. To address this and enhance tractability, [1] developed a structural screening method that forbids superstructures leading to infeasible configurations. These combined innovations allow for general, accurate, and tractable superstructures.

To further increase the solvable problem size, we propose an advanced optimization framework, leveraging generalized disjunctive programming (GDP). It allows for multiple improvements over existing MINLP formulations, aiming at improving feasibility and solution time. This is achieved by deactivation of unused model equations during the solution procedure. Additionally, Grossmann [4] showed that a disjunctive branch-and-bound algorithm can be postulated. This provides tighter bounds for linear problems than those obtained through reformulations used in conventional MINLP solvers, reducing the required solution time.

Building on these insights, it is of interest whether these findings extend to nonlinear systems. To investigate this, we developed a MathML/XML-based automatic code generation tool inside MOSAICmodeling [5], which formulates complex nonlinear GDPs and exports them to conventional optimization environments (Pyomo, GAMS, etc.). These are then coupled with structural screening methods [1] and solved using out-of-the-box GDP solution functionalities. To validate the proposed approach, a case study is conducted involving two PBBs, previously published by Krone et al. [2]. The study compares the performance of the GDP-based optimization framework against conventional MINLP approaches. Preliminary results suggest that the GDP-based framework offers computational advantages over conventional MINLP formulations. The full paper will present detailed comparisons, offering insights into the practical applicability and benefits of GDP.
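As a minimal illustration of the kind of disjunctive formulation targeted by such an export (a toy activate-or-bypass choice for one hypothetical block with a made-up linear cost model, not the actual PBB superstructure), Pyomo's GDP extension can be used as follows:

    import pyomo.environ as pyo
    from pyomo.gdp import Disjunct, Disjunction

    m = pyo.ConcreteModel()
    m.F = pyo.Var(bounds=(0, 10))           # throughput of a hypothetical building block
    m.cost = pyo.Var(bounds=(0, 100))

    # Disjunct 1: the block is active and incurs a (hypothetical) linear cost
    m.use = Disjunct()
    m.use.cost_model = pyo.Constraint(expr=m.cost >= 2.5 * m.F + 10)
    # Disjunct 2: the block is bypassed, so no throughput and no cost
    m.bypass = Disjunct()
    m.bypass.no_flow = pyo.Constraint(expr=m.F == 0)
    m.bypass.no_cost = pyo.Constraint(expr=m.cost == 0)

    m.select = Disjunction(expr=[m.use, m.bypass])    # exactly one disjunct holds
    m.demand = pyo.Constraint(expr=m.F >= 3)          # forces the block to be selected here
    m.obj = pyo.Objective(expr=m.cost, sense=pyo.minimize)

    pyo.TransformationFactory('gdp.bigm').apply_to(m) # reformulate the disjunction to a MI(N)LP
    pyo.SolverFactory('glpk').solve(m)                # any locally installed MILP solver
    print(pyo.value(m.F), pyo.value(m.cost))

In the actual framework, each disjunct would carry the nonlinear PBB equations and the CAPE-OPEN property calls, and unused model equations are deactivated during the solution procedure as described above.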

References

[1] H. Kuhlmann and M. Skiborowski, "Optimization-Based Approach To Process Synthesis for Process Intensification: General Approach and Application to Ethanol Dehydration," Industrial & Engineering Chemistry Research, vol. 56, no. 45, pp. 13461–13481, 2017.

[2] D. Krone, E. Esche, N. Asprion, M. Skiborowski and J.-U. Repke, "Enabling optimization of complex distillation configurations in GAMS with CAPE-OPEN thermodynamic models," Computers & Chemical Engineering, vol. 157, p. 107626, 2022.

[3] H. Kuhlmann, M. Möller and M. Skiborowski, "Analysis of TBA-Based ETBE Production by Means of an Optimization-Based Process-Synthesis Approach," Chemie Ingenieur Technik, vol. 91, no. 3, pp. 336–348, 2019.

[4] I. E. Grossmann, "Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques," Optimization and Engineering, no. 3, pp. 227–252, 2002.

[5] E. Esche, C. Hoffmann, M. Illner, D. Müller, S. Fillinger, G. Tolksdorf, H. Bonart, G. Wozny and J. Repke, "MOSAIC – Enabling Large-Scale Equation-Based Flow Sheet Optimization," Chemie Ingenieur Technik, vol. 89, no. 5, pp. 620–635, 2017.



Optimal Design and Operation of Off-Grid Electrochemical Nitrogen Reduction to Ammonia

Michael Johannes Rix1, Judith M. Schwindling1, Karim Bidaoui1, Alexander Mitsos2,1,3

1RWTH Aachen University, Germany; 2JARA-ENERGY, 52056 Aachen, Germany; 3Energy Systems Engineering (ICE-1), Forschungszentrum Jülich, Germany

Electrochemical processes can aid in defossilizing the chemical industry. When operated off-grid with its own renewable electricity (RE) production, the electrochemical process and the RE plants must be optimized together. We optimize the design and operation of an electrochemical system for nitrogen reduction to ammonia coupled with wind and solar electricity generation to minimize ammonia production costs. Electrochemical nitrogen reduction allows ammonia production from RE, water, and air in one electrolyzer [1]. Comparable design and operation optimizations for coupling RE with electrochemical systems were already performed in the literature for different systems (e.g., for water electrolysis by [2] and others).

We optimize the design and operation of the electrolyzer and RE plant over a one-year horizon. We calculate investment costs for the electrolyzer and RE plants annualized over their respective lifetimes. We calculate the electricity production from weather data at hourly resolution and from the design of the RE plant. From the design of the electrolyzer and the electricity production, we calculate the ammonia production. We investigate three operating strategies: (i) direct coupling of RE and electrolyzer, (ii) curtailment of electricity, and (iii) battery storage and curtailment. In direct coupling, the electrolyzer electricity consumption must follow the RE generation, so the electrolyzer is sized for the peak power of the RE plant. Therefore, it can only be operated at full load at peak electricity generation, which occurs only once or a few times per year. Curtailment and battery storage allow the decoupling of electricity production and consumption. Thus, the electrolyzer can be operated at full (or even higher) load multiple times during the year.

Operation with curtailment increases the load factor of the electrolyzer and reduces the production cost. The RE plant can be over-designed such that the electrolyzer can operate at full or higher load at off-peak RE generation. Achieving a high load factor and few on/off cycles of the electrolyzer is important since on/off cycles can lead to catalyst degradation due to reverse currents [3]. Implementation of battery storage can further increase the load factor of the electrolyzer. However, battery costs are too high, resulting in increased production costs.

We run the optimization for different locations with different RE potentials. At all locations, operation with curtailment is beneficial, and batteries remain too expensive. The availability of wind and solar determines the optimal design of the electrolyzer and RE plant, the optimal operation, the production cost, and the load factor.
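A stripped-down sketch of the design-and-operation optimization with curtailment is shown below, using a one-day horizon, a hypothetical normalized RE profile, and placeholder cost coefficients in place of the annualized investment data used in the study:

    import numpy as np
    import pyomo.environ as pyo

    hours = range(24)                                                  # illustrative horizon; the study uses a full year
    re_profile = np.clip(np.sin(np.linspace(0, np.pi, 24)), 0, None)   # normalized RE availability (hypothetical)

    m = pyo.ConcreteModel()
    m.cap_re = pyo.Var(bounds=(0, None))     # installed RE capacity [MW]
    m.cap_el = pyo.Var(bounds=(0, None))     # electrolyzer capacity [MW]
    m.p = pyo.Var(hours, bounds=(0, None))   # electrolyzer power draw [MW]

    # Operation with curtailment: draw at most the available RE, and at most the electrolyzer capacity
    m.re_limit = pyo.Constraint(hours, rule=lambda m, t: m.p[t] <= m.cap_re * float(re_profile[t]))
    m.el_limit = pyo.Constraint(hours, rule=lambda m, t: m.p[t] <= m.cap_el)

    m.demand = pyo.Constraint(expr=sum(m.p[t] for t in hours) >= 100.0)  # required MWh of electrolysis (hypothetical)

    c_re, c_el = 50.0, 120.0                 # placeholder annualized unit investment costs
    m.obj = pyo.Objective(expr=c_re * m.cap_re + c_el * m.cap_el, sense=pyo.minimize)
    pyo.SolverFactory('glpk').solve(m)       # any locally installed LP solver
    print("RE capacity:", pyo.value(m.cap_re), "electrolyzer capacity:", pyo.value(m.cap_el))

Because the electrolyzer can be sized below the RE peak, it runs at full load more often; adding a battery would introduce a storage state variable and its own (currently prohibitive) investment cost.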

References
1. MacFarlane, D. R. et al. A Roadmap to the Ammonia Economy. Joule 4, 1186–1205 (2020).
2. Hofrichter, A., et al. Determination of the optimal power ratio between electrolysis and renewable energy to investigate the effects on the hydrogen production costs. International Journal of Hydrogen Energy 48, 1651–1663 (2023).
3. Kojima, H. et al. Influence of renewable energy power fluctuations on water electrolysis for green hydrogen production. International Journal of Hydrogen Energy 48, 4572–4593 (2023).



A Stochastic Techno-Economic Assessment of Emerging Artificial Photosynthetic Bio-Electrochemical Systems for CO₂ Conversion

Haris Saeed, Aidong Yang, Wei Huang

Oxford University, United Kingdom

Artificial Photosynthetic Bioelectrochemical Systems (AP-BES) are a promising technology for converting CO2 into valuable bioproducts, addressing both carbon mitigation and sustainable production challenges. By integrating biological and electrochemical processes to emulate natural photosynthesis, AP-BES offer potential for scalable, renewable biomanufacturing. However, their commercialization faces significant challenges related to process efficiency, system integration, and economic uncertainties. A thorough techno-economic assessment (TEA) is crucial for evaluating the viability and scalability of this technology.

This study employs a stochastic TEA to assess the bioelectrochemical conversion of CO2 to bioproducts, accounting for variability and uncertainty in key technical and economic parameters. Unlike traditional deterministic TEA, which relies on fixed-point estimates, the stochastic approach uses probability distributions to capture a broader range of potential outcomes. Critical factors such as energy consumption, CO2 conversion efficiency, and bioproduct market prices are modeled probabilistically, offering a more accurate reflection of real-world uncertainties.

The novelty of this research lies in its comprehensive application and advanced methodology. This study is one of the first to apply a full-system TEA to AP-BES, covering the entire process from carbon capture to product purification. Moreover, the stochastic approach, utilizing Monte Carlo simulations, enables a more robust analysis by incorporating uncertainties in both technical and economic factors. This combined methodology provides more realistic insights into the system's economic potential and commercial feasibility compared to conventional deterministic models.

Monte Carlo simulations are used to generate probability distributions for key economic metrics, including total annualized cost (TAC), internal rate of return (IRR), and levelized cost of product (LCP). By performing thousands of iterations, the model offers a comprehensive understanding of AP-BES's financial viability, delivering confidence intervals and risk assessments often missing from deterministic approaches. Key variables include electricity price fluctuations, a significant driver of operating costs, and changes in bioproduct market prices due to varying demand. The model also accounts for uncertainties in future technological improvements, such as enhanced CO2 conversion efficiencies and potential economies of scale that could lower both capital expenditure (CAPEX) and operational expenditure (OPEX) per kg of CO2 processed. Sensitivity analyses further identify the most influential factors impacting economic outcomes, guiding future research and development.

The results underscore the critical role of uncertainty in evaluating the economic viability of AP-BES. While the technology demonstrates significant potential for both economic and environmental benefits, substantial risks remain, particularly concerning electricity price volatility and unpredictable bioproduct markets. Compared to static point estimates in deterministic approaches, Monte Carlo simulations provide a more nuanced understanding of the financial risks and opportunities. This stochastic TEA offers valuable insights for optimizing processes, reducing costs, and guiding investment and research decisions in the development of artificial photosynthetic bioelectrochemical systems.
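A minimal numpy sketch of the Monte Carlo step follows; the distributions, the single-product cost model, and every number below are hypothetical placeholders rather than the study's data, and the IRR would additionally require a cash-flow time series that is omitted here.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    elec_price = rng.normal(60, 15, n)                    # $/MWh (hypothetical distribution)
    conv_eff = rng.uniform(0.6, 0.9, n)                   # CO2-to-product conversion efficiency
    product_price = rng.triangular(700, 900, 1200, n)     # $/t of bioproduct (hypothetical)

    energy_per_t = 8.0 / conv_eff                         # MWh per tonne of product (placeholder base value)
    opex = energy_per_t * elec_price + 150.0              # $/t: electricity plus fixed O&M
    capex_annuity = 250.0                                 # $/t: annualized capital charge (placeholder)
    lcp = capex_annuity + opex                            # levelized cost of product, $/t
    margin = product_price - lcp

    p10, p50, p90 = np.percentile(margin, [10, 50, 90])
    print(f"margin $/t  P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f}  P(profit)={(margin > 0).mean():.2f}")

The percentiles and probability of a positive margin are the kind of confidence-interval and risk outputs that distinguish the stochastic TEA from a single deterministic point estimate.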



Empowering LLMs for Mathematical Reasoning and Optimization: A Multi-Agent Symbolic Regression System

Shaurya Vats, Sai Phani Chatti, Aravind Devanand, Sandeep Krishnan, Rohit Karanth Kota

Siemens Technology and Services Pvt. Ltd

Understanding data with complex patterns is a significant part of the journey toward accurate data prediction and interpretation. The relationships between input and output variables can unlock diverse advancement opportunities across various processes. However, most AI models attempting to uncover these patterns are not explainable or remain opaque, offering little interpretation. This paper explores an approach in explainable AI by introducing a multi-agent system (MaSR) for extracting equations between features using data.

We developed a novel approach to perform symbolic regression by discovering mathematical functions using a multi-agent system of LLMs. This system addresses the traditional challenges of genetic optimization, such as random seed generation, complexity, and the explainability of the final equation. The agent-based system divides the process into various steps, including initial function generation, loss and complexity calculation, mutation and crossbreeding of equations, and explanation of the final equation to improve the accuracy and decrease the workload.

We utilize the in-context learning capabilities of LLMs trained on vast amounts of data to generate accurate equations more quickly. Additionally, we incorporate methods like retrieval-augmented generation (RAG) with tabular data and web search to further enhance the process. The system creates an explainable model that clarifies each process step leading to the final equation for a given dataset. We also combine the method with existing technologies to develop innovative solutions, such as incorporating physical laws derived from data via multi-agent symbolic regression (MaSR) to reduce illogical predictions and improve extrapolation, and passing the generated equations to LLMs as context for explaining large numbers of simulation results.

Our solution is compared with symbolic regression methods such as GPlearn and PySR against various benchmarks. This study presents research on expanding the reasoning capacities of large language models alongside their mathematical understanding. The paper serves as a benchmark in understanding the capabilities of LLMs in mathematical reasoning and can be a starting point for solving numerous complex tasks using LLMs. The MaSR framework can be applied in various areas where the reasoning capabilities of LLMs are tested for complex and sequential tasks. MaSR can explain the predictions of black-box models, develop data-driven models, identify complex relationships within the data, assist in feature engineering and feature selection, and generate synthetic data equations to address data scarcity, which are explored as further directions for future research in this paper.
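For reference, a minimal run of the PySR baseline named above on a synthetic relationship looks as follows; the target equation and settings are illustrative, the multi-agent LLM pipeline itself is not reproduced here, and PySR requires its Julia backend to be available.

    import numpy as np
    from pysr import PySRRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(0.5, 2.0, size=(200, 2))
    y = 2.0 * X[:, 0] ** 2 / X[:, 1] + rng.normal(0, 0.01, 200)   # hidden relationship to recover

    model = PySRRegressor(
        niterations=40,
        binary_operators=["+", "-", "*", "/"],
        unary_operators=["square"],
        maxsize=15,                     # complexity cap, mirroring the loss/complexity trade-off
    )
    model.fit(X, y)
    print(model)                        # Pareto front of candidate equations by loss and complexity

In the MaSR setting, the candidate generation, mutation and crossbreeding, and final explanation steps would be carried out by the LLM agents instead of the genetic search shown here.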



Solid Oxide Cells and Hydrogen Storage to Prevent Grid Congestion

Dorsan Lepour, Arthur Waeber, Cédric Terrier, François Maréchal

École Polytechnique Fédérale de Lausanne, Switzerland

The electrification of the heating and mobility sectors, alongside increasing photovoltaic (PV) capacities, places significant pressure on electricity grids, particularly in urban neighborhoods and densely populated zones. High penetration of heat pumps and electric vehicles as well as significant PV deployment can indeed induce supply shortfall or require curtailment, respectively. Grid reinforcement is a potential solution, but is costly and involves substantial structural engineering work. Although some local energy storage systems have been extensively studied as an alternative (primarily batteries), the potential of integrating reversible solid oxide cells (rSOC) coupled with hydrogen storage in the context of urban energy systems planning remains underexplored. This study aims to address this gap by investigating the technical and economic feasibility of such systems at building or district scale.

This work uses the framework of REHO (Renewable Energy Hub Optimizer), a decision-support tool for sustainable urban energy system planning. REHO takes into account the endogenous resources of a defined territory, diverse end-use demands (e.g., heating, mobility), and multiple energy carriers (electricity, heat, hydrogen). Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to determine under which circumstances the deployment of rSOC and hydrogen storage becomes relevant.

The study considers several typical districts with their import and export capacities and examines two key scenarios: (1) a closed-loop hydrogen system where hydrogen is produced and consumed locally, and (2) a scenario involving connection to a broader hydrogen network. Results indicate that in areas where grid capacity is strained, rSOC coupled with hydrogen tanks offer a compelling storage solution. They enhance energy self-consumption by converting surplus electricity into hydrogen for later use, while the heat generated during cell operation can be used to meet building space heating and domestic hot water demands.

These findings suggest that hydrogen-based energy storage can be a viable alternative to traditional grid reinforcement, particularly for areas facing an increased penetration of renewables in a saturated grid. The study highlights that for such regions approaching grid congestion, integrating local hydrogen capacities can provide both flexibility and efficiency gains while reducing the need for expensive grid upgrades.



A Modern Portfolio Theory Approach for Chemical Production with Supply Chain Considerations for Efficient Investment Planning

Mohamad Almoussaoui, Dhabia Al-mohannadi

Texas A&M University at Qatar, Qatar

The integrated supply chains of large chemical commodities and fuels play a major role in energy security. These supply chains are exposed to global shocks such as the COVID-19 pandemic [1]. As such, major natural gas producers and exporters such as Qatar aim to balance their supply chain investment returns with the export risks, as the hydrocarbon sector contributes more than one-third of the country's Gross Domestic Product (GDP) [2]. Hence, this work introduces a modern portfolio theory (MPT) model formulation based on chemical commodity and fuel supply chains. The model uses Markowitz's optimization model [3] to meet an exporting country's financial objective of maximizing the investment return while minimizing the associated risk. By defining a supply chain asset as a combination of an exporting country, a traded chemical commodity, and an importing country, the model calculates the return of every supply chain investment and the risk associated with it due to price fluctuations. Solving the optimization problem yields a set of Pareto-optimal supply chain portfolios and the efficient frontier. The model integrates both the chemical process production [4] and the shipping stages of a supply chain. The case study in this work showcases the importance of considering the integrated supply chain when building the MPT model and its impact on the number and allocations of the resulting optimal portfolios. The developed model can guide investment planners to achieve their financial goals at minimum risk.
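A compact sketch of the Markowitz step for three hypothetical exporter-commodity-importer routes is given below; the returns and covariances are placeholders, and cvxpy is used here for illustration, whereas the paper's implementation may differ.

    import numpy as np
    import cvxpy as cp

    mu = np.array([0.08, 0.12, 0.10])          # expected returns of three supply chain assets (hypothetical)
    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.06]])     # covariance of returns from price fluctuations (hypothetical)

    w = cp.Variable(3)                          # portfolio weights over the supply chain assets
    for r in np.linspace(mu.min(), mu.max(), 5):
        prob = cp.Problem(cp.Minimize(cp.quad_form(w, sigma)),
                          [cp.sum(w) == 1, w >= 0, mu @ w >= r])
        prob.solve()
        print(f"target return {r:.3f} -> risk {np.sqrt(prob.value):.3f}, weights {np.round(w.value, 2)}")

Sweeping the target return and collecting the minimum-risk portfolios traces out the efficient frontier described in the abstract.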

References

[1] M. Shehabi, "Modeling long-term impacts of the COVID-19 pandemic and oil price declines in Gulf oil economies," Economic Modelling, vol. 112, 2022.

[2] "Qatar - Oil & Gas Field Machinery Equipment," 29 July 2024. [Online]. Available: https://www.trade.gov/country-commercial-guides/qatar-oil-gas-field-machinery-equipment. [Accessed 18 September 2024].

[3] H. Markowitz, "Portfolio Selection," The Journal of Finance, vol. 7, no. 1, pp. 77-91, 1952.

[4] S. Shehab, D. M. Al-Mohannadi and P. Linke, "Chemical production process portfolio optimization," Chemical Engineering Research and Design, vol. 167, pp. 207-217, 2021.



Co-gasification of crude glycerol and plastic waste using air/steam mixtures: a modelling approach

Bahizire Martin Mukeru, Bilal Patel

University of South Africa, South Africa

There has been an unprecedented growth in plastic waste, and current management techniques such as landfilling and incineration are unsustainable, particularly due to their environmental impact [1]. Gasification is considered one of the most sustainable ways not only to address these issues, but also to produce energy from waste plastics [1]. However, issues such as tar and coke formation are associated with plastic waste gasification, which reduce syngas quality [1],[2]. Another waste available in huge quantities is crude glycerol, a low-value by-product of biodiesel production. The cost involved in its purification is exceedingly high, which limits its applications as a purified product [3]. Co-feeding plastic waste with crude glycerol for syngas production can not only address issues related to plastic gasification, but also allow the utilization of crude glycerol and enhance syngas quality [3]. This study evaluates the performance of a downdraft gasifier producing hydrogen and syngas from the co-gasification of crude glycerol and plastic waste, by means of thermodynamic analysis and modelling using the Aspen Plus simulation software. Performance indicators such as cold gas efficiency (CGE), carbon conversion efficiency (CCE) and syngas yield (SY) are used to determine the technical feasibility of the co-gasification of crude glycerol and plastic waste at different equivalence ratios (ER). Results demonstrated that an increase in ER increased CGE, CCE and SY. For a blend ratio of 50%, a CCE of 100% was attained at an ER of 0.35, whereas a CGE of 88.29% was attained at an ER of 0.3. Increasing the plastic content to 75%, a maximum CCE and CGE of 94.16% and 81.86% were achieved at an ER of 0.4. The hydrogen composition reached its maximum of 36.70% and 39.19% at an ER of 0.1 when the plastic ratio increased from 50% to 75%, respectively. A 50% plastic blend ratio achieved a syngas ratio (H2/CO) of 1.99 at an ER of 0.2, whereas a 75% blend reached a ratio of 2.05 at an ER of 0.25. At these operating conditions, the syngas lower heating value (LHV), SY, CGE and CCE were found to be 6.23 MJ/Nm3, 3.32 Nm3, 66.58%, 76.35% and 6.27 MJ/Nm3, 3.60 Nm3, 59.12%, 53.22%, respectively. From these results, it can be deduced that air co-gasification is a promising technology for the sustainable production of energy from waste glycerol and plastic waste.
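For reference, the performance indicators named above are commonly defined as follows (standard gasification definitions; the authors' exact basis, e.g. wet or dry syngas, may differ):

    \mathrm{CGE} = \frac{\dot{V}_{\mathrm{syngas}}\,\mathrm{LHV}_{\mathrm{syngas}}}{\dot{m}_{\mathrm{feed}}\,\mathrm{LHV}_{\mathrm{feed}}} \times 100\%, \qquad
    \mathrm{CCE} = \frac{\dot{n}_{\mathrm{C,\,syngas}}}{\dot{n}_{\mathrm{C,\,feed}}} \times 100\%, \qquad
    \mathrm{SY} = \frac{\dot{V}_{\mathrm{syngas}}}{\dot{m}_{\mathrm{feed}}}

with the equivalence ratio ER taken as the actual air-to-fuel ratio divided by the stoichiometric air-to-fuel ratio.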

References

[1] Mishra, R., Shu, C.M., Gollakota, A.R.K. & Pan, S.Y ‘Unveiling the potential of pyrolysis-gasification for hydrogen-rich syngas production from biomass and plastic waste’, Energ. Convers. Manag. 2024: 118997 doi: 10.1016/j.enconman.2024.118997.

[2] Chunakiat, P., Panarmasar, N. & Kuchonthara, P. "Hydrogen Production from Glycerol and Plastics by Sorption-Enhanced Steam Reforming," Ind. Eng. Chem. Res. 2023; 62(49): 21057-21066. doi: 10.1021/acs.iecr.3c02072



Comparative and Statistical Study on Aspen Plus Interfaces Used for Stochastic Optimization

Josue Julián Herrera Velázquez1,3, Erik Leonel Piñón Hernández1, Luis Antonio Vega Vega1, Dana Estefanía Carrillo Espinoza1, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

New research on complex intensified schemes has popularized the use of commercial process simulation software. Interfaces between this software and external computing environments for process optimization make it possible to maintain the rigor of the models. This type of optimization is referred to in the literature as "black-box optimization", since successive evaluations exploit the information from the simulator without altering the model embedded in it. The writing and reading of results relies on the interplay of 1) the process simulation software, 2) a middleware protocol, 3) a wrapper protocol, and 4) the platform (IDE) hosting the optimization algorithm (Muñóz-López et al., 2017). The middleware protocol automates the process simulator and transfers information in both directions. The wrapper protocol interprets the information transferred by the middleware and makes it usable by both parties, the simulator and the optimizer. Aspen Plus ® has become popular due to the rigor of its models and the reliability of its results, as well as the customization it offers for conventional and unconventional schemes. Few studies have examined the efficiency and effectiveness of the various computing environments in which the optimization algorithm or the reported packages are implemented. Santos-Bartolome and Van-Gerven (2022) compared different computing environments (Microsoft Excel VBA ®, MATLAB ®, Python ®, and Unity ®) connected to Aspen Hysys ®, evaluating the accuracy of communication, the information exchange time, and the deviation of the results, and concluded that the best option is VBA ®. Ponce-Rocha et al. (2023) studied the execution time of Aspen Plus ® - MATLAB ® and Aspen Plus ® - Python ® connections in multi-objective optimization using the respective optimization packages, concluding that the Python ® connection is the fastest.

This work proposes a comparative study of the Aspen Plus ® software and its interfaces with Microsoft Excel VBA ®, Python ®, and MATLAB ®. Five schemes are analyzed (conventional and intensified columns). The Total Annual Cost is optimized with a modified Simulated Annealing Algorithm (m-SAA) (Cabrera-Ruiz et al., 2021). The algorithm is programmed identically on all platforms, using the respective random number functions, to make the study as homogeneous as possible. Each optimization is run ten times, with hypothesis testing used to eliminate anomalous cases. The aspects evaluated are the time per iteration, the standard deviation between runs, and the number of feasible solutions. The results indicate that the best option for carrying out the optimization is the VBA ® interface, although the Python ® interface performs very similarly. Since there is little literature on optimization algorithm packages for VBA ®, connecting through Python ®, an open-source language, may be the most efficient and effective way to perform stochastic optimization with the Aspen Plus ® software.
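To make the black-box loop concrete, a minimal Python sketch of driving Aspen Plus ® through its COM automation interface inside a simulated-annealing iteration is given below; the flowsheet file, variable-tree node paths, and bounds are hypothetical, and the actual m-SAA differs in its cooling schedule and move generation.

    import math
    import random
    import win32com.client as win32

    # Attach to Aspen Plus through its COM automation interface (the middleware layer)
    aspen = win32.Dispatch("Apwn.Document")
    aspen.InitFromArchive2(r"C:\models\column.bkp")   # hypothetical flowsheet archive
    aspen.Visible = False

    # Hypothetical variable-tree paths; actual node paths depend on the flowsheet
    NT_PATH = r"\Data\Blocks\COL\Input\NSTAGE"
    RR_PATH = r"\Data\Blocks\COL\Input\BASIS_RR"
    TAC_PATH = r"\Data\Flowsheeting Options\Calculator\TAC\Output\WRITE_VAL\1"

    def evaluate(nt, rr):
        aspen.Tree.FindNode(NT_PATH).Value = nt
        aspen.Tree.FindNode(RR_PATH).Value = rr
        aspen.Engine.Run2()                            # run the simulation (one black-box evaluation)
        return aspen.Tree.FindNode(TAC_PATH).Value

    # Minimal simulated-annealing loop over two decision variables
    x = (30, 2.0)
    cur = best = evaluate(*x)
    T = 1.0
    for it in range(200):
        cand = (max(5, x[0] + random.randint(-2, 2)), max(0.5, x[1] + random.uniform(-0.2, 0.2)))
        val = evaluate(*cand)
        if val < cur or random.random() < math.exp((cur - val) / T):
            x, cur = cand, val
            best = min(best, cur)
        T *= 0.98                                      # geometric cooling (illustrative)
    print("best TAC found:", best)

The same evaluate-and-propose structure is what gets reimplemented in VBA ® and MATLAB ® through their respective COM bindings, which is precisely what the timing comparison measures.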



3D simulation and design of MEA-based absorption system for biogas purification

Debayan Mazumdar, Wei Wu

National Cheng Kung University, Taiwan

The shape and geometry design of an MEA-based absorption system using ANSYS Fluent R22 is addressed. CFD studies are conducted to observe the effect of liquid distribution quality on counter-current two-phase absorption under different liquid distributor designs. Through simulation and analysis, the detailed exploration of the fluid dynamics offers critical insights and enables performance optimization. Unlike previous literature, which focused on unstructured packing, the present work considers the structured Mellapak 500X packing and demonstrates the overall efficiency of an MEA-based absorption system for different distributor patterns. A previously published model for calculating liquid distribution quality is used to gain a detailed understanding of the relationship between the initial layers of packing and the pressure difference.



Enhancing Chemical Process Design: Aligning DEXPI Process with BPMN 2.0 for Improved Efficiency in Data Exchange

Shady Khella1, Markus Schichtel2, Erik Esche1, Frauke Weichhardt2, Jens-Uwe Repke1

1Process Dynamics and Operations Group, Technische Universität Berlin, Berlin, Germany; 2Semtation GmbH, Potsdam, Germany

BPMN 2.0 is a widely adopted standard across various industries, primarily used for business process management outside of the engineering sphere [1]. Its long history and widespread use have contributed to a mature ecosystem, offering advanced software tools for editing and optimizing business workflows.

DEXPI Process, a newly developed standard for early-phase chemical process design, focuses on representing Block Flow Diagrams (BFDs) and Process Flow Diagrams (PFDs), both crucial in the conceptual design phase of chemical plants. It provides a standardized way to document design activity, offering engineers a clear rationale for design decisions [2], which is especially valuable during a plant’s operational phases. While DEXPI Process offers a robust data model, it currently lacks an established serialization format for efficient data exchange. As Cameron et al. note in [2], finding a suitable format for DEXPI Process remains a key research area, essential for enhancing its usability and adoption. So far, Cameron et al. have explored several serialization formats for exchanging DEXPI Process information, including AutomationML, an experimental XML, and UML [2].

This work aims to map the DEXPI Process data model to BPMN 2.0, providing a standardized serialization for the newly developed standard. Mapping DEXPI Process to BPMN 2.0 also unlocks access to BPMN’s extensive software toolset. We investigate and validate the effectiveness of this mapping and the enhancements it brings to the usability of DEXPI Process through a case study based on the Tennessee-Eastman process, described in [3]. We then compare our approach with those of Cameron et al. in [2].

We conclude by presenting our findings and the key benefits of this mapping, such as improved interoperability and enhanced toolset support for chemical process engineers. Additionally, we discuss the challenges encountered during the implementation, including aligning the differences in data structures between the two models. Furthermore, we believe this mapping serves as a bridge between chemical process design engineers and business process management teams, unlocking opportunities for better collaboration and integration of technical and business workflows.

References:

[1] ISO. (2022). Information technology — Object Management Group Business Process Model and Notation. ISO/IEC 19510:2013. https://www.iso.org/standard/62652.html

[2] Cameron, D. B., Otten, W., Temmen, H., Hole, M., & Tolksdorf, G. (2024). DEXPI Process: Standardizing Interoperable Information for process design and analysis. Computers & Chemical Engineering, 182, 108564. https://doi.org/10.1016/j.compchemeng.2023.108564

[3] Downs, J. J., & Vogel, E. F. (1993). A plant-wide industrial process control problem. Computers & chemical engineering, 17(3), 245-255. https://doi.org/10.1016/0098-1354(93)80018-I



Linear and non-linear convolutional approaches and XAI for spectral data: classification of waste lubricant oils

Rúben Gariso, João Coutinho, Tiago Rato, Marco Seabra Reis

University of Coimbra, Portugal

Waste lubricant oil (WLO) is a hazardous residue that requires careful management. Among the available options, regeneration is the preferred approach for promoting a sustainable circular economy. However, regeneration is only viable if the WLO does not coagulate during the process, which can cause operational issues, possibly leading to premature shutdown for cleaning and maintenance. To mitigate this risk, a laboratory analysis using an alkaline treatment is currently employed to assess the WLO coagulation potential before it enters the regeneration process. Nevertheless, such a laboratory test is time-consuming, presents several safety risks, and its outcome is subjective, depending on visual interpretation by the analyst.

To expedite decision-making, process analytics technology (PAT) and machine learning were employed to develop a model to classify WLOs according to their coagulation potential. To this end, three approaches were followed, with a focus on convolutional methodologies spanning both linear and non-linear modeling structures. The first approach (benchmark) uses partial least squares for discriminant analysis (PLS-DA) (Wold, Sjöström and Eriksson, 2001) and interval partial least squares (iPLS) (Nørgaard et al., 2000) combined with standard spectral pre-processing techniques (27 model variants). The second approach applies the wavelet transform (Mallat, 1989) to decompose the spectra into multiple frequency components by convolution with linear filters, and PLS-DA for feature selection (10 model variants). Finally, the third approach consists of a convolutional neural network (CNN) (Yang et al., 2019) to estimate the optimal filter for feature extraction (1 model variant). These models were trained on real industrial data provided by Sogilub, the organization responsible for the management of WLO in Portugal.

The results show that the three modeling approaches can attain high accuracy, with an average accuracy of 91%. The development of the benchmark model requires an exhaustive search over multiple combinations of pre-processing filters since the optimal scheme cannot be defined a priori. On the other hand, implicit spectral filtering using wavelet transform convolution significantly lowers the complexity of the model development task, reducing the computational burden while maintaining the interpretability of linear approaches. The CNN was also capable of circumventing the pre-processing burden by implicitly estimating convolutional filters in the inner layers. Additionally, the use of explainable artificial intelligence (XAI) techniques demonstrated that the relevant features of the CNN model are in good accordance with the linear models. In summary, with an adequate level of expertise and effort, different approaches can provide similar prediction performances. However, the development process can be made faster, simpler, and computationally less demanding through a proper convolutional methodology, namely the one based on the wavelet transform.

References:

Mallat, S.G. (1989) IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), pp. 674–693.

Nørgaard, L., Saudland, A., Wagner, J., Nielsen, J.P., Munck, L. and Engelsen, S.B. (2000) Applied Spectroscopy, 54(3), pp. 413–419.

Wold, S., Sjöström, M. and Eriksson, L. (2001) Chemometrics and Intelligent Laboratory Systems, 58(2), pp. 109–130.

Yang, J., Xu, J., Zhang, X., Wu, C., Lin, T. and Ying, Y. (2019) Analytica Chimica Acta, 1081, pp. 6–17.



Mathematical Modeling of Ammonia Nitrogen Dynamics in RAS Integrated with Bayesian Parameter Optimization

Lingwei Jiang1, Tao Chen1, Bing Guo2, Daoliang Li3

1School of Chemistry and Chemical Engineering, University of Surrey, United Kingdom; 2School of Sustainability, Civil and Environmental Engineering, University of Surrey, United Kingdom; 3National Innovation Center for Digital Fishery, China Agricultural University, China

The concentration of ammonia nitrogen is a critical parameter in aquaculture, as excessive levels can be toxic to the aquatic animals, hampering their growth or even resulting in death. Therefore, monitoring of the ammonia nitrogen concentration in aquaculture is important for productivity and animal welfare. However, commercially available ammonia nitrogen sensors are expensive, have short lifespans thus requiring frequent maintenance, and can provide unreliable results during use. In contrast, sensors for other water quality parameters (e.g., temperature, dissolved oxygen, pH) are well-developed and accurate, and they could provide useful information to help predict the ammonia nitrogen concentration through a mathematical model. In this study we present a new mathematical model for predicting ammonia nitrogen, combining fish bioenergetics with a mass balance of ammonia nitrogen. We conduct a sensitivity analysis of the model parameters to identify the key ones and then use a Bayesian optimisation algorithm to calibrate these key parameters to data collected from a recirculating aquaculture system in our lab. We demonstrate that the model gives reasonable predictions of ammonia nitrogen on experimental data not used in model calibration.
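As a rough illustration of the calibration step, the sketch below fits two parameters of a deliberately simplified total-ammonia-nitrogen balance to placeholder measurements with scikit-optimize's Gaussian-process-based gp_minimize; the actual bioenergetics model, parameters and data of the study differ.

# Minimal sketch: calibrate two parameters of a simplified ammonia-nitrogen (TAN) balance
# against measurements using Bayesian optimisation (scikit-optimize). The ODE below is a
# placeholder, not the bioenergetics model of the study.
import numpy as np
from scipy.integrate import solve_ivp
from skopt import gp_minimize

t_obs = np.linspace(0, 24, 13)                      # hours
tan_obs = 0.8 + 0.4 * np.sin(t_obs / 4)             # placeholder measurements (mg/L)

def simulate(excretion_rate, removal_rate):
    # dTAN/dt = excretion by the fish - removal in the biofilter (both placeholders)
    rhs = lambda t, tan: excretion_rate - removal_rate * tan
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [tan_obs[0]], t_eval=t_obs)
    return sol.y[0]

def loss(params):
    excretion_rate, removal_rate = params
    return float(np.mean((simulate(excretion_rate, removal_rate) - tan_obs) ** 2))

result = gp_minimize(loss, dimensions=[(0.01, 1.0), (0.01, 1.0)], n_calls=40, random_state=1)
print("calibrated parameters:", result.x, "MSE:", result.fun)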



Computer-Aided Design of a Local Biorefinery Scheme from Water Lily (Eichhornia crassipes) to Produce Power and Bioproducts

Maria de Lourdes Cinco-Izquierdo1, Araceli Guadalupe Romero-Izquierdo2, Ricardo Musule-Lagunes3, Marco Antonio Martínez-Cinco1

1Universidad Michoacana de San Nicolás de Hidalgo, Facultad de Ingeniería Química, México; 2Universidad Autónoma de Querétaro, Facultad de Ingeniería, Mexico; 3Universidad Veracruzana, Instituto de Investigaciones Forestales, México

Lake ecosystems provide valuable services, such as vegetation and fauna, fertile soils, nutrient and climatic regulation, carbon sequestration, and recreation and tourism activities. Nevertheless, some are currently affected by high resource extraction, climatic change, or alien plant invasion (API), which causes the loss of local species and deterioration of ecosystem function. Regarding API, reports have identified 665 invasive exotic plants in México (IMTA, 2020), among which the water lily (Eichhornia crassipes) stands out due to its quick proliferation rate, covering most national aquatic bodies. Thus, some strategies for controlling and using E. crassipes have been proposed (Gutiérrez et al., 1994). Specifically, after extraction, the water hyacinth biomass has been used as raw material for the production of several bioproducts and bioenergy; however, most of these schemes have not covered the region's needs, and their economic profitability has not been reached. In this work, we propose a local biorefinery scheme to produce power and bioproducts from water lilies, using Aspen Plus V.10.0, per the needs of the Patzcuaro Lake community in Michoacán, Mexico. The scheme has been designed to process 197.6 kg/h of water lily, aligned with the extraction region schedule (COMPESCA, 2023), separating the biomass into two main fractions: root (RT, 47 wt% of the total plant) and stems-leaves (S-L, 53 wt% of the total plant). Power and steam are generated from the RT stream via combustion, while the S-L stream is split into two equal fractions (50 wt% each). The first fraction is the feedstock for an anaerobic digestion process operated at 35 °C to produce a fertilizer stream from the process sludge and biogas, which is converted to power using a turbine. The second S-L fraction enters drying equipment to reduce its moisture content; the dried biomass is then divided between two processing zones: 1) pyrolysis to produce bio-oil, biochar, and high-temperature gases and 2) gasification to generate syngas, which is converted to power. According to the results, the total generated power covers all the electricity requirements of the scheme, producing a surplus of 45% relative to the total consumption; the system also covers all heating requirements. Furthermore, fertilizer and biochar are helpful products for regional needs, improving the total annual cost (TAC) of the scheme.
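For orientation, the stated feed split can be traced with a few lines of arithmetic (values follow the percentages quoted above):

# Back-of-the-envelope split of the stated 197.6 kg/h water-lily feed into the
# processing routes described above (values follow the percentages given in the text).
feed = 197.6                          # kg/h of water lily
root = 0.47 * feed                    # to combustion (power and steam)
stems_leaves = 0.53 * feed
to_anaerobic_digestion = 0.5 * stems_leaves
to_drying = 0.5 * stems_leaves        # then split between pyrolysis and gasification
print(f"root: {root:.1f} kg/h, S-L: {stems_leaves:.1f} kg/h, "
      f"AD: {to_anaerobic_digestion:.1f} kg/h, drying: {to_drying:.1f} kg/h")
# root: 92.9 kg/h, S-L: 104.7 kg/h, AD: 52.4 kg/h, drying: 52.4 kg/h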

References

COMPESCA. (2023, November 01). Comisión de Pesca del Estado de Michoacán. Informe anual de avances del Programa: Mantenimiento y Rehabilitación de Embalses.

Gutiérrez López, F. Arreguín Cortés, R. Huerto Delgadillo, P. Saldaña Fabela (1994). Control de malezas acuáticas en México. Ingeniería Hidráulica En México, 9(3), 15–34.

IMTA. (2020, July 15). Instituto Mexicano de Tecnología del Agua. Plantas Invasoras.



System analysis and optimization of replacing surplus refinery fuel gas by coprocessing with HTL bio-crude off-gas in oil refineries.

Erik Lopez Basto1,2, Eliana Lozano Sanchez3, Samantha Eleanor Tanzer1, Andrea Ramírez Ramírez4

1Department of Engineering Systems and Services, Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, the Netherlands; 2Cartagena Refinery, Ecopetrol S.A., Colombia; 3Department of Energy, Aalborg University, Aalborg, Denmark; 4Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, Delft, the Netherlands

Sustainable production is a critical goal for the oil refining industry supporting the energy transition and reducing climate change impacts. This research uses Ecopetrol, Colombia's state-owned oil and gas company, and one of its high-complexity refineries (processing 11.45 Mtpd of crude oil) as a case study to explore CO2 reduction strategies. Decarbonizing refineries requires a combination of technologies, including low-carbon hydrogen (Low-C H2), sustainable energy, carbon capture, utilization, and storage (CCUS), bio-feedstocks, and product changes.

A key question addressed is the impact of CCUS on refinery performance and the potential to repurpose surplus refinery fuel gas while balancing techno-economic and environmental performance against short- and long-term goals.

Colombia’s biomass resources offer opportunities for advanced biofuel technologies like Hydrothermal Liquefaction (HTL), which produces bio-crude compatible with existing refinery infrastructure and off-gas with biogenic carbon that can be used in CCU processes. This research is grounded in the opportunity to utilize refinery fuel gas and HTL bio-crude off-gas in conversion processes to produce more valuable and sustainable products (see Figure 1 for the simplified system block diagram).

Our systems optimization approach, using mixed-integer linear programming (MILP) in Linny-R software, evaluates refinery operations and minimizes costs under CO2 emission constraints. Building on optimized low-C H2 and CCS systems (Lopez, E., et al. 2024), the first step assesses surplus refinery fuel gas, and the second screens CCU technologies, selecting steam reforming and autothermal reforming to convert fuel gas into methanol. HTL bio-crude off-gas is integrated into thermocatalytic processes for further methanol production, with techno-economic data sourced from literature and Aspen Plus simulations. Detailed techno-economic assessment presented in the work by Lozano, E., et al. (2024) is used as input for this study.

The objective function in the system analysis focuses on cost minimization while achieving specified CO2 reduction targets.
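The structure of such a cost-minimisation problem under a CO2 cap can be sketched in a few lines, here with the PuLP library and purely illustrative coefficients rather than the Linny-R refinery model:

# Minimal MILP sketch of the decision structure: route surplus refinery fuel gas (RFG)
# either to on-site firing or to methanol synthesis (CCU), minimising cost subject to a
# CO2 cap. All coefficients are illustrative placeholders, not the Linny-R refinery model.
import pulp

rfg_surplus = 25.0                                           # t/h of surplus fuel gas (placeholder)
m = pulp.LpProblem("rfg_routing", pulp.LpMinimize)

to_fuel = pulp.LpVariable("rfg_to_fuel", lowBound=0)         # t/h fired on site
to_ccu = pulp.LpVariable("rfg_to_ccu", lowBound=0)           # t/h sent to methanol synthesis
build_ccu = pulp.LpVariable("build_ccu", cat="Binary")       # invest in the CCU unit?

m += 10 * to_fuel + 35 * to_ccu + 500 * build_ccu            # operating + annualised capital cost
m += to_fuel + to_ccu == rfg_surplus                         # all surplus gas must be routed
m += to_ccu <= rfg_surplus * build_ccu                       # CCU only usable if built
m += 2.8 * to_fuel + 0.4 * to_ccu <= 40                      # CO2 emission cap (tCO2/h)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], to_fuel.value(), to_ccu.value(), build_ccu.value())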

Results show that CCU technologies and surplus gas utilization can significantly reduce CO2 emissions, offering valuable insights into how refineries can contribute to global decarbonization efforts. Integrating biomass-derived feedstocks and CCU technologies provides a viable path for sustainable refinery operations, advancing the industry's role in a more sustainable energy future.

Figure 1. Simplified system block diagram

References

Lopez, E., et al. (2024). Assessing the impacts of low-carbon intensity hydrogen integration in oil refineries. Manuscript in press.

Lozano, E., et al. (2024). TEA of co-processing refinery fuel gas and biogenic gas streams for methanol synthesis. Manuscript submitted for publication in Escape Conference 2025.



Technical Assessment of direct air capture using piperazine in an advanced solvent-based absorption process

Shengyuan Huang, Olajide Otitoju, Yao Zhang, Meihong Wang

University of Sheffield, United Kingdom

CO2 emissions from power generation and industry have increased the atmospheric CO2 concentration to 422 ppm, driving climate change and related environmental problems. Carbon capture is one of the effective ways to mitigate global warming. Direct air capture (DAC), as one of the negative emission technologies, has great potential for commercial development, with 980 Mt of CO2 to be captured in 2050 under the IEA Net Zero Emissions Scenario.

DAC can be achieved through absorption using solvents, adsorption using solid adsorbents, or a combination of both. This study focuses on liquid-phase DAC (L-DAC) because it has a smaller land requirement and lower specific energy consumption than other technologies, making it more suitable for large-scale commercial deployment. In the literature, MEA is widely used in DAC. However, the use of MEA in the DAC process faces two major challenges: high energy consumption (6 to 8.8 GJ/tCO2) and high cost (up to $340/tCO2). These barriers hinder DAC deployment.

This research aims to study DAC using piperazine (PZ) with different configurations and to evaluate the technical and economic performance at large scale through process simulation. PZ as a new solvent could improve the absorption capacity and performance. The simulation is implemented in Aspen Plus®. The DAC process using PZ will be validated against simulation data from the literature to ensure the model’s accuracy. Different configurations (e.g. standard configuration vs advanced flash stripper), different loadings and carbon capture levels will be studied to achieve better system performance and lower energy consumption. The research outcome from this study can be useful for process design by industrial practitioners and also for policymakers.

Acknowledgement: The authors would like to thank the financial support of the EU RISE project OPTIMAL (Grant Agreement No: 101007963).



TEA of co-processing refinery fuel gas and biogenic gas streams from thermochemical conversion for methanol synthesis

Eliana Lozano Sanchez1, Erik Lopez Basto2, Andrea Ramirez Ramirez2

1Aalborg University, Denmark; 2Delft University of Technology, The Netherlands

Heat decarbonization is a key strategy for fossil refineries to lower their emissions in the short/medium term. Direct electrification and other low-carbon heat sources are expected to play a major role; however, the current availability of refinery fuel gases (RFG) - mixtures of residual gases rich in hydrocarbons used for on-site heat generation - may limit decarbonization if alternative uses for surplus RFG are not explored. Thus, evaluating RFG utilization options is key for refineries, while the integration of renewable carbon sources remains crucial to decrease fossil crude dependence.

This study presents a techno-economic assessment of co-processing biogenic CO2 sources from biomass thermochemical conversion with RFG to produce methanol, a key chemical with high demand in industry and as shipping fuel. Hydrothermal liquefaction (HTL) and fast pyrolysis (FP) are the technologies evaluated due to their integration potential in a refinery context: these produce bio-oils with drop-in fuel potential that can use existing infrastructure and a by-product gas rich in CO2/CO to be co-processed with the RFG into methanol, which remains unexplored in literature and stands as the main contribution of this study.

The process is simulated in Aspen HYSYS assuming a fixed gas input of 25 tonne/h, which corresponds to the estimated RFG surplus in a refinery case study after some emission reduction measures. The process comprises a reforming step to produce syngas (steam and autothermal reforming, SMR/ATR, are evaluated) followed by methanol synthesis via CO2/CO hydrogenation. The impact of gas co-processing is evaluated for increasing ratios of HTL/FP gas relative to the RFG baseline in terms of hydrogen requirement, carbon conversion to methanol, overall water balance and specific energy consumption.
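For reference, the underlying methanol synthesis reactions and the stoichiometric number (module) commonly used to characterise syngas quality are, in LaTeX notation:

\mathrm{CO} + 2\,\mathrm{H_2} \rightleftharpoons \mathrm{CH_3OH}, \qquad
\mathrm{CO_2} + 3\,\mathrm{H_2} \rightleftharpoons \mathrm{CH_3OH} + \mathrm{H_2O}, \qquad
SN = \frac{n_{\mathrm{H_2}} - n_{\mathrm{CO_2}}}{n_{\mathrm{CO}} + n_{\mathrm{CO_2}}} \approx 2

Adding a biogenic CO2-rich gas lowers SN, which is why the hydrogen surplus obtained from SMR of the RFG is what allows the biogenic share to grow without extra H2 input, as discussed below.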

Preliminary results indicate that the valorization of RFG using SMR allows for an increased share of biogenic gas of up to 45 wt% without a negative impact on the overall carbon conversion to methanol. SMR of the RFG results in a syngas with excess hydrogen, which makes it possible to accommodate additional biogenic CO2 and operate at lower stoichiometric numbers without a negative impact on conversion and without additional H2 input, a key advantage of this integration. Although the overall carbon conversion is not affected, the methanol throughput is reduced by 24-27 % relative to the RFG baseline due to the higher concentration of CO2 in the mix, which lowers the carbon content and increases water production during methanol synthesis. The ATR case results in lower energy consumption but produces less hydrogen, limiting the biogenic gas share to only 7 wt% before additional H2 is required for methanol synthesis.

This study aims to contribute to the discussion on the integration of low-carbon technologies into refinery operations, highlighting synergies between fossil and biobased feedstocks that expand the state of the art of co-processing bio-feedstocks from thermochemical biomass conversion. Future work includes estimating the trade-offs between production costs and methanol carbon intensity, motivating the integration of these technologies into more comprehensive system analyses of fossil refineries and their net-zero pathways.



Surrogate Model-Based Optimisation of Pressure-Swing Distillation Sequences with Variable Feed Composition

Laszlo Hegely, Peter Lang

Department of Building Services and Process Engineering, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, Hungary

For separating azeotropic mixtures, special distillation methods must be used, such as pressure-swing (PSD), extractive or heteroazeotropic distillation. The advantage of PSD is that it does not require the addition of a new component. However, it can only be applied if the azeotrope is pressure-sensitive, and its energy demand can also be high. The configuration of the system depends not only on the type of the homoazeotrope but also on the feed composition (z). If z is between the azeotropic compositions at the pressures of the columns, the feed can be introduced into either the low- (LP-HP sequence) or the high-pressure column (HP-LP sequence). Depending on z, one of the sequences will be optimal, whether with respect to energy demand or total annual cost (TAC).

Hegely et al. (2022) studied the separation of a maximum-boiling azeotropic mixture water-ethylenediamine by PSD where z (35 mol% water) was between the azeotropes at 0.1 and 2.02 bar. The TAC of both sequences was minimised without and with heat integration. The LP-HP sequence was more favourable at this composition. The optimisation was performed by two methods: a genetic algorithm (GA) and a surrogate model-based optimisation method (SMBO). By SMBO, algebraic surrogate models were fitted to simulation results by the ALAMO software (Cozad et al., 2014) and the resulting optimisation problem was solved. Different decomposition techniques were tested with the models fitted (1) to elements of TAC (heat duty of LPC, column diameters), (2) to capital and energy costs or (3) to TAC itself. The best results were achieved with the highest level of decomposition. Although TAC obtained by SMBO was lower than that of GA only once, the difference was always within 5 %.

In this work, our aim is to (1) improve the accuracy of the surrogate models and thus the performance of SMBO, and (2) study the influence of z on the optimum of the two sequences, using the case study of Hegely et al. (2022). The first goal is achieved by fitting the models to the results of the single columns instead of the two-column system. Achieving the second goal requires repeated optimisation at different feed compositions, which would be very time-consuming with conventional optimisation methods. However, an advantage of SMBO is that z can be included as an input variable of the models. This enables the optimum to be found quickly for any feed composition.
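A minimal sketch of the surrogate idea, with z as an explicit model input: an ordinary polynomial least-squares fit (standing in for ALAMO) is trained on placeholder "simulation" results and then optimised for several feed compositions; all numbers are illustrative.

# Minimal sketch of surrogate-based optimisation with the feed composition z as a model
# input. A quadratic polynomial is fitted by least squares (standing in for ALAMO) to
# placeholder "simulation" results; the surrogate is then minimised for any given z.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Placeholder design data: reflux ratio R, feed composition z, and a fictitious TAC response
R = rng.uniform(1.0, 5.0, 200)
z = rng.uniform(0.20, 0.50, 200)
tac = 3.0 + (R - 2.5) ** 2 + 8.0 * (z - 0.35) ** 2 + 2.0 * R * z + rng.normal(0, 0.05, 200)

def basis(R, z):
    return np.column_stack([np.ones_like(R), R, z, R**2, z**2, R*z])

theta, *_ = np.linalg.lstsq(basis(R, z), tac, rcond=None)   # algebraic surrogate coefficients

def surrogate_tac(R, z):
    return float(basis(np.atleast_1d(R), np.atleast_1d(z)) @ theta)

# Optimise the surrogate for any feed composition without new simulations
for z_feed in (0.25, 0.35, 0.45):
    res = minimize(lambda x: surrogate_tac(x[0], z_feed), x0=[2.0], bounds=[(1.0, 5.0)])
    print(f"z = {z_feed:.2f}: optimal R = {res.x[0]:.2f}, surrogate TAC = {res.fun:.2f}")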

The novelty of our work consists of determining the optimal PSD system as a function of the feed composition by SMBO. Additionally, this is the first work that uses ALAMO to fit the models to be used in the optimisation to the individual columns.

References

Cozad A., Sahinidis N.V., Miller D.C., 2014. Learning surrogate models for simulation-based optimization. AIChE Journal, 60, 2211–2227.

Hegely L., Karaman Ö.F., Lang P., 2022, Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with the Feed Composition between the Azeotropes. In: Klemeš J.J., Nižetić S., Varbanov P.S. (eds.) Proceedings of the 25th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. PRES22.0188.



Unveiling Probability Histograms from Random Signals using a Variable-Order Quadrature Method of Moments

Menwer Attarakih1, Mark W. Hlawitschka2, Linda Al-Hmoud1, Hans-Jörg Bart3

1The University of Jordan, Hashemite Kingdom of Jordan; 2Johannes Kepler University; 3RPTU Kaiserslautern

Random signals play a crucial role in chemical and process engineering, where industrial plants collect and analyse big data for process understanding and decision-making. This requires unveiling the underlying probability histogram from process random signals with a finite number of bins. Unfortunately, finding the optimal number of bins is still based on empirical optimization and general rules of thumb (e.g. the Scott and Freedman-Diaconis rules). The disadvantages here are the large number of bins that may be encountered and the inconsistency of the histogram with the low-order moments of the true data.

In this contribution, we introduce an alternative and general method to unveil probability histograms based on the Quadrature Method Of Moments (QMOM). Being a data-compression method, it works with the calculated moments of an unknown probability density function. Because of the ill-conditioned inverse moment problem, there is no simple and general inversion algorithm to recover the unknown probability histogram, which is usually required in many design applications and in real-time online monitoring (Thibault et al., 2023). Our method uses a novel variable-integration-order QMOM which adapts automatically depending on the relevance of the information contained in the random data. The number of bins used to recover the underlying histogram increases as the information entropy does. In the hypothetical limit where the data have zero information entropy, the number of bins is reduced to one. In the QMOM realm, the number of bins is explored by an evolutionary algorithm that assigns the nodes in an optimal manner to sample the unknown function or process from which the data are generated. The algorithm terminates when no more important information is available for assignment to the newly created node, up to a user-predefined threshold. If the data come from a dynamic source with varying mean and variance, the boundaries of the bins move dynamically to reflect the nature of the data.
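For readers unfamiliar with QMOM, the sketch below shows a fixed-order moment inversion via the classical moment-matrix route (2n raw moments yield n nodes and weights); the variable-order, entropy-driven node assignment described above is not reproduced here.

# Minimal fixed-order QMOM sketch: invert 2n raw moments into an n-node quadrature
# (nodes ~ "bin" locations, weights ~ probabilities). This is the classical moment-matrix
# route; the variable-order, entropy-driven node assignment of the paper is not shown.
import numpy as np

def qmom(moments):
    m = np.asarray(moments, dtype=float)
    n = len(m) // 2
    # Monic orthogonal polynomial of degree n: sum_j c_j m_{j+k} = -m_{n+k}, k = 0..n-1
    H = np.array([[m[j + k] for j in range(n)] for k in range(n)])
    c = np.linalg.solve(H, -m[n:2 * n])
    nodes = np.sort(np.roots(np.concatenate(([1.0], c[::-1]))).real)  # roots are the nodes
    # Weights from matching the first n moments (Vandermonde system)
    V = np.vander(nodes, n, increasing=True).T
    weights = np.linalg.solve(V, m[:n])
    return nodes, weights

# First four raw moments of a standard normal: 1, 0, 1, 0  ->  two-node quadrature
print(qmom([1.0, 0.0, 1.0, 0.0]))   # nodes approx. (-1, +1), weights (0.5, 0.5)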

The method is applied to several case studies, including a moment-consistent histogram unveiled from the monthly mean maximum air-to-surface temperature in Amman city from 1901 to 2023, using only 13 bins and revealing a bimodal histogram. In another case study, diastolic and systolic blood pressure measurements are found to follow a normal-distribution histogram using a data series spanning a six-year period with 11 bins. In a unique dynamic case study, batch particle aggregation plus growth is simulated starting from 11 bins, with the simulation ending with 14 bins after 5 seconds of simulation time. The result is a histogram which is consistent with 28 low-order moments. In addition, the measured droplet distribution from a pilot-plant sparger of toluene in water is found to follow a normal-distribution histogram with 11 bins.

As a main conclusion, our method is a universal histogram-reconstruction method which only needs a sufficient number of moments to work, and it has been intensively validated using real-life problems.

References

Thibault, E., Chioua, M., McKay, M., Korbel, M., Patience, G. S., Stuart, P. R. (2023), Can. J. Chem. Eng., 101, 6055-6078.



Sensitivity Analysis of Key Parameters in LES-DEM Simulations of Fluidized Bed Systems Using generalized polynomial chaos

Radouan Boukharfane, Nabil El Mocayd

UM6P, Morocco

In applications involving fine powders and small particles, the accuracy of numerical simulations—particularly those employing the Discrete Element Method (DEM) for predicting granular material behavior—can be significantly impacted by uncertainties in critical parameters. These uncertainties include coefficients of restitution for particle-particle and particle-wall collisions, viscous damping coefficients, and other related factors. In this study, we utilize stochastic expansions based on point-collocation non-intrusive polynomial chaos to conduct a sensitivity analysis of a fluidized bed system. We consider four key parameters as random variables, each assigned a specific probability distribution over a designated range. This uncertainty is propagated through high-fidelity Large Eddy Simulation (LES)-DEM simulations to statistically quantify its impact on the results. To effectively explore the four-dimensional parameter space, we analyze a comprehensive database comprising over 1200 simulations. Notably, our findings reveal that variations in the particle and wall Coulomb friction coefficients exert a more pronounced influence on streamwise particle velocity than do variations in the particle and wall normal restitution coefficients.
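A minimal point-collocation polynomial-chaos sketch of this kind of analysis is given below using the chaospy package; the cheap analytical function stands in for an LES-DEM evaluation, and the ranges of the four random inputs are placeholders.

# Minimal point-collocation polynomial-chaos sketch with chaospy. The cheap analytical
# model below stands in for an LES-DEM run; in the study the four random inputs are the
# particle/wall restitution and Coulomb friction coefficients.
import numpy as np
import chaospy as cp

joint = cp.J(cp.Uniform(0.7, 0.95),   # particle restitution
             cp.Uniform(0.7, 0.95),   # wall restitution
             cp.Uniform(0.1, 0.5),    # particle Coulomb friction
             cp.Uniform(0.1, 0.5))    # wall Coulomb friction

def model(e_p, e_w, mu_p, mu_w):
    # Placeholder for the streamwise particle velocity returned by an LES-DEM simulation
    return 1.0 + 0.1 * e_p + 0.05 * e_w - 0.8 * mu_p - 0.6 * mu_w + 0.3 * mu_p * mu_w

samples = joint.sample(200, rule="sobol")                 # collocation points
evals = np.array([model(*s) for s in samples.T])
expansion = cp.generate_expansion(2, joint)               # 2nd-order polynomial-chaos basis
surrogate = cp.fit_regression(expansion, samples, evals)  # point collocation = regression fit
print("first-order Sobol indices:", cp.Sens_m(surrogate, joint))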



An Efficient Convex Training Algorithm for Artificial Neural Networks by Utilizing Piecewise Linear Approximations and Semi-Continuous Formulations

Ece Serenat Koksal1, Erdal Aydin1, Metin Turkay2,3

1Department of Chemical and Biological Engineering, Koç University, Turkiye; 2Department of Industrial Engineering, Koç University, Turkiye; 3SmartOpt, Turkiye

Artificial neural networks (ANNs) are mathematical models representing the relationships between inputs and outputs, inspired by the structure of neuron connections in the human brain. ANNs consist of input and output layers, along with user-defined hidden layers containing neurons, which are interconnected through activation functions such as rectified linear unit (ReLU), hyperbolic tangent and sigmoid. A feedforward neural network (FNN) is a type of ANN that propagates information in one direction, from input to output. ANNs are widely used as data-driven approaches, especially for complex systems like chemical engineering, where mechanistic modelling poses significant challenges. However, they often encounter issues such as overfitting, insufficient data, and suboptimal training.

To address suboptimal training, piecewise linear approximations of nonlinear activation functions, such as sigmoid and hyperbolic tangent, can be employed. This approach may transform the non-convex training problem into a convex one, while enabling training via a special ordered set type II (SOS2) formulation (Koksal & Aydin, 2023; Sildir & Aydin, 2022). The resulting formulation is a mixed-integer linear programming (MILP) problem. However, as the number of neurons, the number of approximation pieces or the dataset size increases, the computational time rises due to the exponential complexity associated with binary variables, hyperparameters and data points.
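The piecewise-linear/SOS2 building block for a single activation can be sketched as follows with Pyomo (a tanh approximation via the convex-combination formulation); the full convex training formulation of the paper, with weights, loss terms and many such blocks, is not shown.

# Minimal sketch of one SOS2 piecewise-linear activation block (tanh) in Pyomo. A full
# training model would repeat this for every neuron/data point, add the weight variables
# and the loss, and hand the MILP to a solver supporting SOS2 (e.g. CBC or Gurobi).
import numpy as np
import pyomo.environ as pyo

bp = np.linspace(-4.0, 4.0, 9)           # breakpoints of the approximation
fv = np.tanh(bp)                         # tanh values at the breakpoints

m = pyo.ConcreteModel()
m.K = pyo.RangeSet(0, len(bp) - 1)
m.lam = pyo.Var(m.K, bounds=(0, 1))      # convex-combination weights
m.x = pyo.Var(bounds=(float(bp[0]), float(bp[-1])))   # pre-activation value
m.y = pyo.Var()                          # piecewise-linear surrogate of tanh(x)

m.convexity = pyo.Constraint(expr=sum(m.lam[k] for k in m.K) == 1)
m.x_def = pyo.Constraint(expr=m.x == sum(float(bp[k]) * m.lam[k] for k in m.K))
m.y_def = pyo.Constraint(expr=m.y == sum(float(fv[k]) * m.lam[k] for k in m.K))
# SOS2: at most two adjacent lambdas may be nonzero, which keeps (x, y) on the
# piecewise-linear interpolant of tanh rather than anywhere inside its convex hull.
m.sos2 = pyo.SOSConstraint(var=m.lam, sos=2)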

In this work, we propose a novel training algorithm for FNNs by employing SOSX variables, as defined by Keha et al. (2004), instead of the conventional SOS2 formulation. By modifying the branching algorithm, we transform the MILP problem into subsets of linear programming (LP) problems. This transformation also makes the problem parallelizable, which may further reduce the computational time for training the FNNs. Results demonstrate that this change in the branching strategy significantly reduces computational time, making the formulation more efficient for convexifying the FNN training process.

References

Keha, A. B., De Farias, I. R., & Nemhauser, G. L. (2004). Models for representing piecewise linear cost functions. Operations Research Letters, 32(1), 44–48. https://doi.org/10.1016/S0167-6377(03)00059-2

Koksal, E. S., & Aydin, E. (2023). Physics Informed Piecewise Linear Neural Networks for Process Optimization. Computers and Chemical Engineering, 174. https://doi.org/10.1016/j.compchemeng.2023.108244

Sildir, H., & Aydin, E. (2022). A Mixed-Integer linear programming based training and feature selection method for artificial neural networks using piece-wise linear approximations. Chemical Engineering Science, 249. https://doi.org/10.1016/j.ces.2021.117273



Economic evaluation of Solvay processes for sodium bicarbonate production with brine and carbon tax considerations

Dina Ewis, Zeyad Ghazi, Sabla Y. Alnouri, Muftah H. El-Naas

Gas Processing Center, College of Engineering, Qatar University, Doha, Qatar

Reject brine discharge and high CO2 emissions from desalination plants are major contributors to environmental pollution. Managing reject brine involves significant costs, mainly due to the energy-intensive processes required for brine dilution and disposal. In this context, the Solvay process represents a mitigation scheme that can effectively reduce reject brine salinity and sequester CO2 while simultaneously producing sodium bicarbonate, i.e., a combined approach that manages reject brine and CO2 in a single reaction while producing an economically viable product. Therefore, this study reports a systematic techno-economic assessment of conventional and modified Solvay processes, while incorporating brine and carbon taxes. The model evaluates the significance of implementing a brine and CO2 tax on the economics of the conventional and Ca(OH)2-modified Solvay processes compared with industry expenditures on brine dilution and treatment before discharge to the sea. The results show that the conventional Solvay process becomes profitable after applying a brine tax of 1 dollar per cubic metre of brine and a CO2 tax of 42 dollars per tonne of CO2, both figures lower than the current costs associated with brine treatment and existing carbon taxes. Moreover, the profitability of the Ca(OH)₂-modified Solvay process increases even further with minimal brine and CO₂ taxes. The findings highlight the significance of adopting the modified Solvay process as an integrated solution for sustainable brine management and carbon capture.



THE GREEN HYDROGEN SUPPLY CHAIN IN THE BRAZILIAN STATE OF BAHIA: A DETERMINISTIC APPROACH

Leonardo Santana1, Gustavo Santos1, Fernando Pessoa1, Ana Paula Barbosa-Póvoa2

1SENAI CIMATEC University Center, Brazil; 2Instituto Superior Técnico – Universidade de Lisboa, Portugal

Hydrogen is increasingly recognized as a pivotal element in decarbonizing the energy, transport, chemical industry, and agriculture sectors. However, significant technological challenges related to production, transport, and storage hinder its broader integration into these industries. Overcoming these barriers requires the development of a sustainable hydrogen supply chain (HSC). This paper aims to design and plan an HSC by developing a Mixed-Integer Linear Programming (MILP) model for the Brazilian state of Bahia, the fifth largest state of Brazil (roughly the size of France) and a region with significant potential for sustainable electricity and electrolytic hydrogen production. The case study utilizes the existing road infrastructure; transport of liquefied and compressed hydrogen via trucks or trains is considered. A monetization strategy is employed to consolidate both economic and environmental aspects into a single objective function, translating CO2 emissions into costs using carbon credit prices. Facility locations are selected based on the preferred locations for hydrogen production identified in Bahia’s government report, considering four dimensions: economic, social, environmental, and technical. Wind power, solar PV, and grid electricity are considered as energy sources for the hydrogen production facilities, and the model aims to select the optimal combination of energy sources for each plant. The outcomes include the selection of specific hydrogen production plants to meet the demand center's requirements, alongside decisions regarding the preferred form of hydrogen storage (liquefied or compressed) and the optimal energy source (solar, wind, or grid) for each facility. This model provides a practical contribution to the implementation of a sustainable green hydrogen supply chain in Bahia, focusing on the industrial sector's needs. The study offers a replicable and accessible computational approach to solving complex supply chain problems, especially in regions with growing interest in green hydrogen production.
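Schematically, the monetization strategy folds emissions into the single cost objective, e.g. (in LaTeX notation)

\min \; C_{\mathrm{total}} = \sum_{i}\bigl(\mathrm{CAPEX}_i + \mathrm{OPEX}_i + \mathrm{transport}_i\bigr) + p_{\mathrm{CO_2}} \sum_{i} e_i

where e_i are the CO2 emissions attributed to facility or transport link i and p_CO2 is the carbon-credit price; this is only the generic form, and the exact cost and emission terms of the Bahia model may differ.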



A combined approach to optimization of soft sensor architecture and physical sensor configuration

Lukas Furtner1, Isabell Viedt1, Leon Urbas1,2

1Process Systems Engineering Group, TU Dresden, Germany; 2Chair of Process Control Systems, TU Dresden, Germany

In the chemical industry, soft sensors are deployed to reduce equipment cost or to allow for continuous measurement of process variables. Soft sensors monitor parameters not via physical sensors but infer them from other process variables, often by means of parametric equations such as balances and thermodynamic or kinetic dependencies. Naturally, the precision of soft sensors is affected by the uncertainty of their input variables. This paper proposes a novel approach to automatically identify the most precise soft sensor based on a set of process system equations and the configuration of physical sensors in the chemical plant. Furthermore, the method assesses the benefit of deploying additional physical sensors to increase a soft sensor’s precision. This enables engineers to derive adjustments to the existing sensor configuration in a chemical plant. Because it approximates the uncertainty of soft sensors for a critical process variable via Monte Carlo simulation, the proposed method is robust to dependent, non-Gaussian uncertainties. Additionally, the approach allows the incorporation of hybrid semi-parametric soft sensors [1], modelling poorly understood effects and dependencies within the process system with data-driven, non-parametric parts. Applied to the Tennessee Eastman process [2], the method identifies Pareto-optimal sensor configurations, considering sensor cost and monitoring precision for critical process variables. Finally, the method's deployment in real-world chemical plants is discussed.
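A minimal sketch of the Monte-Carlo propagation step, with a toy parametric soft-sensor equation and placeholder uncertainty magnitudes, comparing two physical-sensor configurations by the spread of the inferred variable:

# Minimal sketch: propagate correlated, non-Gaussian measurement uncertainty through a
# parametric soft sensor by Monte Carlo and compare two physical-sensor configurations.
# The soft-sensor equation and the uncertainty magnitudes are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

def soft_sensor(flow, t_in, t_out):
    # Toy parametric relation (e.g. a heat duty inferred from a flow and two temperatures)
    return 4.18 * flow * (t_out - t_in)

def simulate(flow_std, temp_std):
    # Correlated temperature errors (same probe type) drawn from a skewed distribution
    base = rng.lognormal(mean=0.0, sigma=0.4, size=N) - np.exp(0.08)
    t_in = 60.0 + temp_std * base + temp_std * rng.normal(size=N)
    t_out = 85.0 + temp_std * base + temp_std * rng.normal(size=N)
    flow = 2.0 + flow_std * rng.normal(size=N)
    return soft_sensor(flow, t_in, t_out)

config_a = simulate(flow_std=0.10, temp_std=1.0)   # existing sensor configuration
config_b = simulate(flow_std=0.02, temp_std=1.0)   # upgraded flow meter
print("std of inferred duty, config A vs B:",
      round(config_a.std(), 2), round(config_b.std(), 2))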

Sources
[1] J. Sansana et al., “Recent trends on hybrid modeling for Industry 4.0,” Computers & Chemical Engineering, vol. 151, p. 107365, Aug. 2021
[2] J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Computers & Chemical Engineering, vol. 17, no. 3, pp. 245–255, Mar. 1993



Machine Learning Models for Predicting the Amount of Nutrients Required in a Microalgae Cultivation System

Geovani R. Freitas1,2,3,4, Sara M. Badenes3, Rui Oliveira4, Fernando G. Martins1,2

1Laboratory for Process Engineering, Environment, Biotechnology and Energy (LEPABE); 2Associate Laboratory in Chemical Engineering (ALiCE); 3Algae for Future (A4F); 4LAQV-REQUIMTE

Effective prediction of nutrient demands is crucial for optimising microalgae growth, maximising productivity, and minimising resource waste. With the increasing amount of data related to microalgae cultivation systems, data mining (DM) and machine learning (ML) methods to extract additional knowledge have gained popularity over time. In the DM process, models can be evaluated using ML algorithms, such as random forest (RF), artificial neural network (ANN) and support vector regression (SVR). In the development of these ML models, a data preprocessing stage is necessary due to the poor quality of the data. While cleaning and outlier removal techniques are employed to eliminate missing data or outliers, normalization is used to standardize features, ensuring that no single feature becomes more relevant to the model merely due to differences in scale. After this stage, feature selection is employed to identify the most relevant parameters, such as solar irradiance and initial dry weight of biomass. Once the optimal features are identified, data splitting and cross-validation strategies are employed to ensure that the models are trained and evaluated with representative subsets of the dataset. Proper data splitting into training and testing sets prevents overfitting, allowing the models to generalize effectively to new, unseen data. Cross-validation techniques, such as k-fold and repeated k-fold cross-validation, are used to rigorously test model performance across multiple iterations, ensuring that results are not dependent on any single data partition. Principal component analysis (PCA) can also be applied as a dimensionality reduction technique to simplify complex environmental datasets by reducing the number of variables or features in the data while retaining as much information as possible. To further improve prediction capabilities, ensemble methods are incorporated, leveraging multiple models to achieve higher overall performance. Stacking, a popular ensemble technique, is used to combine the outputs of individual models, such as RF, ANN, and SVR, into a single meta-model. This approach takes advantage of the strengths of each base model, such as the non-linear mapping capabilities of the ANN, the robustness of the RF against overfitting, and the effectiveness of the SVR in handling complex feature interactions. By combining these diverse models, the stacked ensemble method provides more accurate and reliable predictions of nutrient requirements. The application of these ML techniques has been demonstrated using a dataset acquired from the cultivation of the microalga Dunaliella in a flat-panel photobioreactor (FP-PBR). The results showed that the data mining workflow, in combination with different ML models, was able to describe the nutrient requirements needed to obtain good performance of Dunaliella production in the carotenogenic phase, for β-carotene production, in an FP-PBR system.
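A minimal sketch of the stacking step with scikit-learn, using synthetic placeholder features and targets in place of the cultivation data:

# Minimal sketch of the stacking idea with scikit-learn: RF, ANN (MLP) and SVR base
# learners combined by a linear meta-model. The data here are synthetic placeholders for
# the cultivation features (e.g. solar irradiance, initial dry weight) and nutrient demand.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))                                       # placeholder features
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=300)    # placeholder nutrient demand

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("ann", MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)),
        ("svr", SVR(C=10.0)),
    ],
    final_estimator=Ridge(),
    cv=5,
)
print("R^2 (5-fold CV):", cross_val_score(stack, X, y, cv=5, scoring="r2").mean())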



Dynamical modeling of ultrafine particle classification in tubular bowl centrifuges

Sandesh Athni Hiremath1, Marco Gleiss2, Naim Bajcinca1

1RPTU Kaiserslautern, Germany; 2KIT Karlsruhe, Germany

Ultrafine or colloidal particles are widely used in industry as aerogels, coatings, filtration aids or thin films and require a defined particle size. For this purpose, tubular centrifuges are suitable for particle separation and classification due to their high g-forces. The design and optimization of tubular centrifuges require a large number of pilot tests, which is time-consuming and costly. Additionally, the centrifuge, while operating semi-continuously under constant process conditions, exhibits temporal changes of the particle size distribution and solids volume fraction, especially at the outlet. Altogether, these aspects make the task of designing an efficient centrifuge challenging. This work presents a dynamic model for the real-time simulation of the behavior during particle classification in a pilot-scale tubular centrifuge and also provides a novel data-driven algorithm for model validation. The combination of the two greatly facilitates the design and control of the centrifugation process, in particular for the tubular centrifuge considered here. First, we discuss the new continuous mathematical model as an improvement over the previously published multi-compartment (discrete) model by Winkler et al. [1]. Based on simulations, we show the influence of operational conditions and material behavior on the classification of a colloidal silica-water slurry. Subsequently, we validate the dynamical model by comparing experimental data with the simulations for the temporal change of product loss, grade efficiency and sediment build-up. For validation, we propose a new data-driven method which uses neural ODEs incorporating the proposed centrifugation model and is thus capable of encoding the physical (transport) laws in the network parameters. In summary, our work provides the following novelties:

1. A continuous dynamical model for a tubular centrifugation process that establishes a strong foundation for continuous and semi-continuous control of the process.

2. A new data-driven validation algorithm that allows the use of the physics-based continuous model, thus serving as a base methodology for developing a full-fledged learning-based observer model which can be used as a state estimator during continuous process control.
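A minimal sketch of the neural-ODE ingredient of point 2, assuming PyTorch and torchdiffeq: a known first-order transport term plus a learned correction form the right-hand side, trained against placeholder measurements (this is not the authors' centrifuge model).

# Minimal neural-ODE sketch (torchdiffeq): a known physics term plus a neural correction
# form the right-hand side; training against measured trajectories tunes the correction.
# This is an illustrative stand-in for the centrifuge model, not the authors' code.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class HybridRHS(nn.Module):
    def __init__(self, k_settling=0.3):
        super().__init__()
        self.k = k_settling                                   # known transport parameter
        self.correction = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, t, c):
        return -self.k * c + self.correction(c)               # physics term + learned term

rhs = HybridRHS()
t = torch.linspace(0.0, 5.0, 50)
c0 = torch.tensor([[1.0]])                                    # initial solids concentration
c_meas = torch.exp(-0.45 * t).reshape(-1, 1, 1)               # placeholder measurements

optimizer = torch.optim.Adam(rhs.parameters(), lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    c_pred = odeint(rhs, c0, t)                               # integrate the hybrid ODE
    loss = torch.mean((c_pred - c_meas) ** 2)
    loss.backward()
    optimizer.step()
print("final loss:", float(loss))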

[1] Marvin Winkler, Frank Rhein, Hermann Nirschl, and Marco Gleiss. Real-time modeling of volume and form dependent nanoparticle fractionation in tubular centrifuges. Nanomaterials, 12(18):3161, 2022.



Towards a multi-scale process optimization coupling custom models for unit operations, process simulator, and environmental impact.

Thomas Hietala1, Sonja Herres-Pawlis2, Pedro S.F. Mendes1

1Centro de Química Estrutural, Instituto Superior Técnico, Portugal; 2Institute of Inorganic Chemistry, RWTH Aachen University, Germany

To achieve utmost process efficiency, all scales matter, from phenomena within a given unit operation to mass and energy integration. For instance, the way mass transfer and kinetics are optimized in a chemical reactor (e.g., focusing either on activity or selectivity) will impact the downstream separation train and, thus, the process as a whole. Currently, as the design of sustainable processes is mostly performed independently at different scales, the overall influence of design choices at different levels is not assessed in a seamless way, leading to a trial-and-error, inefficient design workflow. In order to consider all scales simultaneously, a multi-scale model has been developed that couples a process model to a complex mass-transfer-limited reactor model and to an environmental and/or social impact assessment tool. The production of polylactic acid (PLA), the most produced bioplastic to date [1], was chosen as the case study for the development of this tool.

The multi-scale model covers, as of today, the reactor, process and environmental analysis scales. The process model simulating the production process of PLA was developed in the Aspen Plus simulation software employing the Polymer Plus module and PolyNRTL as the thermodynamic method, based on a literature implementation [2]. The production process consists firstly of an oligomerization step of lactic acid to a PLA pre-polymer. It is followed by a depolymerization step which converts the pre-polymer into lactide. After a purification step, the lactide forms the high-molecular-weight PLA in a ring-opening polymerization step. The PLA product is obtained after a final purification step. The depolymerization step is performed industrially in devolatilization equipment, which is a mass-transfer-limited reactor. As there are no adequate mass-transfer-limited reactor models in Aspen Plus, a Python CAPE-Open Unit Operation module [3] was developed to couple a realistic devolatilization reactor model into the process model. If mass transfer were not accounted for in the reactor, the ultimate PLA production would be underestimated by a factor of eight, with the corresponding impact on profitability and environmental performance.

From the process model, the economic performance of the process can be determined. To determine the environmental performance of the designed process simultaneously and seamlessly, a Life Cycle Analysis (LCA) model, performed in the OpenLCA software, is coupled with Aspen Plus using an interface coded in Python. With this multi-scale model in place, the impact of the design variables at the various scales on the process's overall economic and environmental performance can be determined and optimized.
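A rough sketch of how such a Python coupling can look on Windows, driving Aspen Plus through its COM automation interface and combining a stream result with an impact factor; the archive path, stream name and variable node path are hypothetical, and in the actual workflow the inventory is exchanged with OpenLCA rather than reduced to a single factor.

# Rough coupling sketch (Windows only): drive Aspen Plus via its COM automation interface,
# read a stream result and combine it with an impact factor. The file name, stream name and
# variable node path are hypothetical; in the actual workflow the inventory is passed to
# OpenLCA instead of using a single emission factor.
import win32com.client as win32

aspen = win32.Dispatch("Apwn.Document")                 # Aspen Plus automation server
aspen.InitFromArchive2(r"C:\models\pla_process.bkp")    # hypothetical archive path
aspen.Engine.Run2()                                     # run the flowsheet

# Hypothetical node path: mixed-substream mass flow of a product stream, kg/h
node = aspen.Tree.FindNode(r"\Data\Streams\PLA\Output\MASSFLMX\MIXED")
pla_flow = node.Value

GWP_FACTOR = 1.8     # placeholder kg CO2-eq per kg PLA; the real value comes from OpenLCA
print(f"PLA flow: {pla_flow:.1f} kg/h, GWP: {pla_flow * GWP_FACTOR:.1f} kg CO2-eq/h")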

This multi-scale model creates a basis to develop a multi-objective optimization framework using economic and environmental objective functions directly from Aspen Plus and OpenLCA software. This could enable a reduction in the environmental impact of processes without disregarding the profitability of the process.

[1] - European Bioplastics, Bioplastics market data, 2023, https://www.european-bioplastics.org/news/publications/ accessed on 25/09/2024

[2] - K. C. Seavey and Y. A. Liu, Step-growth polymerization process modeling and product design. New Jersey: Wiley, 2008

[3] - https://www.colan.org/process-modeling-component/python-cape-open-unit-operation/ accessed on 25/09/2025



Enhancing hydrodynamics simulations in Distillation Columns Using Smoothed Particle Hydrodynamics (SPH)

RODOLFO MURRIETA-DUEÑAS1, JAZMIN CORTEZ-GONZÁLEZ1, ROBERTO GUTIÉRREZ-GUERRA2, JUAN GABRIEL SEGOVIA-HERNÁNDEZ3, CARLOS ENRIQUE ALVARADO-RODRÍGUEZ3

1TECNOLÓGICO NACIONAL DE MÉXICO/ CAMPUS IRAPUATO, MÉXICO; 2UNIVERSIDAD DE GUANAJUATO, MÉXICO; 3UNIVERSIDAD TECNOLÓGICA DE LEÓN, CAMPUS LEÓN, MÉXICO

Distillation is one of the most widely applied unit operations in chemical engineering, renowned for its effectiveness in product purification. However, traditional distillation processes are often hampered by significant inefficiencies, driving efforts to enhance thermodynamic performance in both equipment design and operation. While many alternatives have been evaluated using MESH equations and sequential simulators, comparatively less attention has been given to Computational Fluid Dynamics (CFD) modeling, largely due to its complexity. CFD methodologies typically fall under either Eulerian or Lagrangian approaches. The Eulerian method relies on a mesh to discretize the medium, providing spatial averages at the fluid interfaces. Popular techniques include finite volume and finite element methods, with finite volume commonly employed to simulate the hydrodynamics, mass transfer, and momentum in distillation columns (Haghshenas et al., 2007; Lavasani et al., 2018; Zhao, 2019; Ke, 2022). Despite its widespread use, the Eulerian approach faces challenges such as interface modeling, convergence issues, and selecting appropriate turbulence models for simulating turbulent flows. In contrast, Lagrangian methods, which discretize the continuous medium using non-mesh-based points, offer detailed insights into interfacial phenomena. Among these, Smoothed Particle Hydrodynamics (SPH) stands out for its ability to model discontinuous media and complex geometries without requiring a mesh, making it ideal for studying various systems, including microbial growth (Martínez-Herrera et al., 2022), sea wave dynamics (Altomare et al., 2023), and stellar phenomena (Reinoso et al., 2023). This versatility and robustness make SPH a promising tool for distillation process modeling.

In this study, we present a numerical simulation of a liquid-vapor (L-V) thermal equilibrium stage in a plate distillation column, employing the SPH method. The focus is on sieve and bubble-cap plates, with periodic temperature conditions applied to facilitate thermal equilibrium. Column sizing was performed using Aspen One for an equimolar benzene-toluene mixture, operating under conditions ensuring a condenser cooling water temperature of 120 °F. The Chao-Seader thermodynamic model was applied, with both sieve and bubble-cap plates integrated into a ten-stage column. Stage 5 was designated as the feed stage, and a 98% purification and recovery rate for both components was assumed. This setup provided critical operational parameters, including liquid and vapor velocities, viscosity, density, pressure, and column diameter. Three-dimensional CAD models of the distillation column and the plates were generated using SolidWorks and subsequently imported into DualSPHysics (Domínguez et al., 2022) for CFD simulation. Stages 6 and 7 were selected for detailed analysis, as they are positioned just below the feed stage.

The results showed that the sieve plate achieved thermal equilibrium more rapidly than the bubble-cap plate, a difference attributable to the steam injection zone in the bubble-cap design. Moreover, the simulations allowed the calculation of heat transfer coefficients based on plate geometry, providing insights into heat exchange at the fluid interfaces. In conclusion, this study highlights the potential of using periodic temperature conditions to simulate thermal equilibrium in distillation columns. Additionally, the SPH method has demonstrated its utility as a powerful and flexible tool for simulating fluid dynamics and thermal equilibrium in distillation processes.
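For readers unfamiliar with SPH, the method approximates any field A at particle i as a kernel-weighted sum over neighbouring particles j, which is what allows the stage hydrodynamics and heat exchange to be resolved without a mesh (in LaTeX notation):

A(\mathbf{r}_i) \approx \sum_j \frac{m_j}{\rho_j}\, A_j\, W(\mathbf{r}_i - \mathbf{r}_j, h)

where m_j and \rho_j are the neighbour mass and density, W is the smoothing kernel and h the smoothing length.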



Electric arc furnace dust waste management: A process synthesis approach.

Agustín Porley Santana1, Mayra Doldan1,2, Martín Duarte Guigou2,3, Mauricio Ohanian1, Soledad Gutiérrez Parodi1

1Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Viento Sur Ingeniería, Ruta 61, Km 19, Nueva Helvecia, Colonia, Uruguay.; 3Grupo de Ingeniería de Materiales, Inst. Tecn. Reg. Sur-Oeste, Universidad Tecnológica del Uruguay, Horacio Meriggi 905, CP60000, Paysandú, Uruguay.

The residue from the solid collection system of steel mills is known as electric arc furnace dust (EAFD). It contains significant amounts of iron, zinc, and lead in the form of oxides, silicates, and carbonates, along with minor components such as chromium, tin, nickel, and cadmium. Therefore, most countries classify this waste as hazardous waste. Its management presents scientific and technical challenges that significantly impact the economics of the steelmaking process.

Currently, the management of this waste consists of burying it at the final disposal facility. However, there are multiple treatment alternatives to reduce its hazardousness by recovering and immobilizing marketable heavy metals such as Zn and Pb. This process can be carried out through hydrometallurgical dissolution with selective extraction of Zn, leaving the rest of the metal components in the solid. Zn has amphoteric properties, but it shares this characteristic with Pb, so alkaline extraction solubilizes both metals simultaneously, leaving iron compounds in an insoluble form. At this stage, two streams result, one solid and one liquid. The liquid stream is a zinc-rich solution from which Zn could be electrochemically recovered as a valuable product, ensuring that the electrodeposited material shows characteristics that allow for easy recovery through mechanical means. The solid stream can be stabilized by incorporating it into an alkali-activated inorganic polymer (geopolymer) to obtain a product or waste that captures and immobilizes the heavy metals, or it can be managed by a third party. To avoid lead contamination of the product of interest (pure Zn), the liquid stream can go through a precipitation process with sodium sulfide, removing the lead as lead sulfide, or pure lead can be electrodeposited by controlling the voltage or current before electrodepositing the Zn in a subsequent stage. Pilot-scale testing of these processes has been conducted previously [1].

Each step generates different costs and alternatives for managing this residue. For this, the process synthesis approach is considered suitable, allowing for the simultaneous analysis of these alternatives and the selection of the one that generates the greatest benefit.

This work studies the management of steel mill residue with a process synthesis approach combining experimental data from pilot-scale operations, data collected from metallurgical companies, and data based on expert judgment. The stages to achieve this objective involve superstructure conception, its translation into mathematical language, and implementation in mathematical programming software (GAMS). The aim is to assist in decision-making at the managerial level, so the objective function chosen was to maximize the commercial value per ton of EAFD to be managed. A superstructure model is proposed that combines binary variables for operations and binary variables for artificial streams, enabling accurate modeling of the various connections involved in this process management network. Artificial streams were used to formally describe disjunctions. Sensitivity analyses are currently being conducted.
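Schematically, the linking between stream and operation binaries takes the usual superstructure form, e.g. (in LaTeX notation)

y_s \le y_o \quad \text{for every artificial stream } s \text{ entering or leaving operation } o, \qquad \sum_{s \in S_k} y_s \le 1 \quad \text{for each disjunctive set } S_k

so that a stream can only be selected if the operation it connects is selected and at most one routing choice is taken per disjunction; the exact constraints of the GAMS model may differ.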

References

[1] M.Doldán, M. Duarte Guigou, G. Pereira, M. Ohanian, Electrodeposition of Zinc and Lead from Electric Arc Furnace Dust Dissolution: A Kinetic Study in A Closer Look at Chemical Kinetics, Editorial Nova Science Publishers 2022



Network theoretical analysis of the reaction space in biorefineries

Jakub Kontak, Jana Marie Weber

Intelligent Systems Department, Delft University of Technology, Netherlands

Abstract

The large chemical reaction space has been analysed intensively to learn the patterns of chemical reactions (Fialkowski et al., 2005; Jacob & Lapkin, 2018; Llanos et al., 2019; Mann & Venkatasubramanian, 2023) and to understand its wiring structure for use in network pathway planning problems (Weber et al., 2019; Ulonska et al., 2016). With increasing pressure towards more sustainable production systems, it becomes worthwhile to model the reaction space reachable from biobased feedstocks, e.g. through integrated processing steps in biorefineries.

In this work we focus on a network-theoretical analysis of biorefinery reaction data. We obtain biorefinery reaction data from the REAXYS web interface, propose a directed all-to-all mapping between reactants and products for comparability with related work, and finally compare the reaction space obtained from biorefineries with the network of organic chemistry (NOC) (Jacob & Lapkin, 2018). Our findings indicate that, despite having 1000 times fewer molecules, the constructed network resembles the NOC in terms of its scale-free nature and shares similarities regarding its “small-world” property. Our results further suggest that the biorefinery network space has a higher centralisation and clustering coefficient. Additionally, we inspect the coverage rate of our data querying strategy and find that our network covers most common second and third intermediates, yet only a few biorefinery end-products and direct feedstock molecules are present.
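A minimal sketch of the network construction and the statistics involved, with two placeholder reaction records standing in for the REAXYS query results:

# Minimal sketch of the network construction and statistics used for the comparison with
# the NOC: every reactant is connected to every product of the same reaction (directed
# all-to-all mapping), then degree, clustering and path-length statistics are computed.
# The two reactions below are placeholders for the REAXYS biorefinery query results.
import networkx as nx

reactions = [
    (["glucose"], ["5-HMF", "water"]),            # placeholder reaction records
    (["5-HMF", "hydrogen"], ["2,5-DMF", "water"]),
]

G = nx.DiGraph()
for reactants, products in reactions:
    G.add_edges_from((r, p) for r in reactants for p in products)

print("nodes/edges:", G.number_of_nodes(), G.number_of_edges())
print("average clustering:", nx.average_clustering(G.to_undirected()))
largest_wcc = G.subgraph(max(nx.weakly_connected_components(G), key=len))
print("avg shortest path (largest WCC, undirected):",
      nx.average_shortest_path_length(largest_wcc.to_undirected()))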

References

Fialkowski, M., Bishop, K. J., Chubukov, V. A., Campbell, C. J., & Grzybowski, B. A. (2005). Architecture and evolution of organic chemistry. Angewandte Chemie International Edition, 44(44), 7263-7269.

Jacob, P. M., & Lapkin, A. (2018). Statistics of the network of organic chemistry. Reaction Chemistry & Engineering, 3(1), 102-118.

Llanos, E. J., Leal, W., Luu, D. H., Jost, J., Stadler, P. F., & Restrepo, G. (2019). Exploration of the chemical space and its three historical regimes. Proceedings of the National Academy of Sciences, 116(26), 12660-12665.

Mann, V., & Venkatasubramanian, V. (2023). AI-driven hypergraph network of organic chemistry: network statistics and applications in reaction classification. Reaction Chemistry & Engineering, 8(3), 619-635.

Weber, J. M., Lió, P., & Lapkin, A. A. (2019). Identification of strategic molecules for future circular supply chains using large reaction networks. Reaction Chemistry & Engineering, 4(11), 1969-1981.

Ulonska, K., Skiborowski, M., Mitsos, A., & Viell, J. (2016). Early‐stage evaluation of biorefinery processing pathways using process network flux analysis. AIChE Journal, 62(9), 3096-3108.



Applying Quality by Design to Digital Twin Supported Scale-Up of Methyl Acetate Synthesis

Jessica Ebert1, Amy Koch1, Isabell Viedt1,3, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Chair of Process Control Systems; 3TUD Dresden University of Technology, Process-to-Order Lab

The scale-up from lab to production scale is an essential cost and time factor in the development of chemical processes, especially when high demands are placed on product quality. Quality by Design (QbD) is a common method used in the pharmaceutical industry to ensure product quality throughout the production process (Yu et al., 2014), which is why the QbD methodology could be a useful tool for process development in the chemical industry as well. Concepts from the literature demonstrate how mechanistic models are used for the direct scale-up from laboratory equipment to production equipment by dispensing with intermediate scales in order to shorten the time to process (Furrer et al., 2021). The integration of Quality by Design into a direct scale-up approach promises further advantages, such as a deeper process understanding and the assurance of process safety. Digital twins consisting of simulation models digitally represent the behavior of plants and the processes running on them, enabling model-based scale-up.

In this work a simulation-based workflow for the digital-twin-supported scale-up of processes and process plants is proposed, which integrates various aspects of the Quality by Design methodology. The key element is the determination of the design space by defining Critical Quality Attributes and identifying Critical Process Parameters as well as Critical Material Attributes (Yu et al., 2014). The design space is transferred from the laboratory-scale model to the production-scale model. To illustrate the concept, the workflow is implemented for the use case of the synthesis of methyl acetate. The process is scaled from a 2 L laboratory stirred tank reactor to a 50 L production plant, fulfilling each step of the scale-up workflow: modelling, definition of the target product quality, experiments, model adaption, parameter transfer and design space identification. The presentation of the results focuses on the design space identification and transfer using global system analysis. Finally, benefits and limitations of the implementation of Quality by Design in direct scale-up using digital twins are discussed.
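A minimal sketch of the design-space identification step: sweep two critical process parameters, evaluate a placeholder conversion model, and retain the region where the critical quality attribute meets its target (the methyl acetate model and acceptance criteria of the study are not reproduced here).

# Minimal sketch of design-space identification: sweep the critical process parameters
# (here temperature and residence time), evaluate a placeholder process model, and keep
# the combinations where the critical quality attribute (conversion) meets its target.
import numpy as np

temperature = np.linspace(40.0, 80.0, 41)       # degC, critical process parameter 1
residence_time = np.linspace(10.0, 120.0, 45)   # min, critical process parameter 2
T, tau = np.meshgrid(temperature, residence_time)

# Placeholder CQA model: first-order conversion with an Arrhenius-type rate constant
k = 5e4 * np.exp(-4500.0 / (T + 273.15))
conversion = 1.0 - np.exp(-k * tau)

design_space = conversion >= 0.95               # CQA target
print(f"{design_space.mean():.0%} of the screened T/tau combinations meet the target")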

References

Schindler, Polyakova, Harding, Weinhold, Stenger, Grünewald & Bramsiepe (2020). General approach for technology and Process Equipment Assembly (PEA) selection in process design. Chemical Engineering and Processing – Process Intensification, 159, Article 108223.

T. Furrer, B. Müller, C. Hasler, B. Berger, M. Levis & A. Zogg (2021). New Scale-up Technologies for Hydrogenation Reactions in Multipurpose Pharmaceutical Production Plants. Chimia(75), Article 11.

L. X. Yu, G. Amidon, M. A. Khan, S. W. Hoag, J. Polli, G. K. Raju & J. Woodcock (2014). Understanding Pharmaceutical Quality by Design. The AAPS Journal, 16, 771-783.



Digital Twin supported Model-based Design of Experiments and Quality by Design

Amy Koch1, Jessica Ebert1, Isabell Viedt1,2, Andreas Bamberg4, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process-to-Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems; 4Merck Electronics KGaA, Frankfurter Str. 250, Darmstadt 64293, Germany

In the specialty chemical industries, faster time-to-process is a significant measure of success. One key aspect that supports faster time-to-process is reducing the time required for experimental efforts in the process development phase. Here, Digital Twin workflows based on methods such as global system analysis, model-based design of experiments (MBDoE), and the identification of the design space, as well as leveraging prior knowledge of the equipment capabilities, can be utilized to reduce the experimental load (Koch et al., 2023). MBDoE utilizes prior knowledge (model structure and initial parameter estimates) to optimally design an experiment by identifying optimum process conditions, thereby reducing experimental effort (Franceschini & Macchietto, 2008). Further benefit can be achieved by applying Quality by Design methods (Katz & Campbell, 2012) to these Digital Twin workflows; here, the prior knowledge supplied by the Digital Twin is used to pre-screen combinations of critical process parameters and model parameters to identify suitable parameter combinations for inclusion in the MBDoE optimization problem (Mädler, 2023). In this paper, a Digital Twin workflow that incorporates prior knowledge of equipment capabilities into global system analysis and subsequent MBDoE is first presented, and the relevant methodology is explained. This workflow is illustrated with a prototypical implementation using the process simulation tool gPROMS for the specific use case of an esterification process in a stirred tank reactor. As a result, benefits such as improved parameter estimation and reduced experimental effort compared to traditional DoE are illustrated, and a critical evaluation of the applied methods is provided.
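
To make the MBDoE principle tangible, the sketch below ranks candidate sampling schedules for a deliberately simple first-order decay model by the Fisher information computed from parameter sensitivities (D-optimality reduces to maximizing a scalar here). The model, prior estimate and candidate designs are illustrative assumptions and do not reproduce the gPROMS-based workflow described above.

```python
# Minimal sketch of model-based design of experiments (MBDoE) by D-optimality,
# assuming a simple first-order decay model y(t) = exp(-k t); purely illustrative.
import numpy as np

k_prior = 0.15  # initial parameter estimate (prior knowledge)

def sensitivity(t, k):
    # dy/dk for y = exp(-k t)
    return -t * np.exp(-k * t)

def fisher_information(sampling_times, k, sigma=0.01):
    s = sensitivity(np.asarray(sampling_times, dtype=float), k)
    return np.sum((s / sigma) ** 2)  # scalar FIM for a single parameter

# Candidate experimental designs: where to take measurements (hypothetical)
candidates = {
    "early samples": [1, 2, 3, 4],
    "spread samples": [2, 6, 10, 14],
    "late samples": [15, 20, 25, 30],
}

best = max(candidates, key=lambda name: fisher_information(candidates[name], k_prior))
for name, times in candidates.items():
    print(f"{name:15s} FIM = {fisher_information(times, k_prior):10.1f}")
print(f"D-optimal choice: {best}")
```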

References

G. Franceschini & S. Macchietto (2008). Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science, 63(19), 4846-4872.

P. Katz & C. Campbell (2012). FDA 2011 process validation guidance: Process validation revisited. Journal of GXP Compliance, 16(4), 18.

A. Koch, J. Mädler, A. Bamberg & L. Urbas (2023). Digital Twins for Scale-Up in Modular Plants: Requirements, Concept, and Roadmap. In Computer Aided Chemical Engineering, 2063-2068. Elsevier.

J. Mädler (2023). Smarte Process Equipment Assemblies zur Unterstützung der Prozessvalidierung in modularen Anlagen [Smart process equipment assemblies to support process validation in modular plants].



Bioprocess control using hybrid mechanistic and Gaussian process modeling

Lydia Katsini, Satyajeet Sheetal Bhonsale, Jan F.M. Van Impe

BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium

Control of bioprocesses is crucial for achieving optimal yield of various products. In this study, we focus on the fermentation of Xanthophyllomyces dendrorhous, a yeast known for its ability to produce astaxanthin, a high-value carotenoid with applications in pharmaceuticals, nutraceuticals, and aquaculture. Successful application of optimal control requires, however, accurate and robust process models (Bhonsale et al., 2022). Since the system dynamics are non-linear and biological variability is an inherent property of the process, modeling such a system is demanding.

Aiming to tackle the system complexity, our approach to modeling this process follows Vega-Ramon et al. (2021), who combined two distinct methods: mechanistic and machine learning models. On the one hand, mechanistic models, based on existing knowledge, provide valuable insights into the underlying phenomena but are limited by their demand for accurate parameterization and may struggle to adapt to process disturbances. On the other hand, machine learning models, based on experimental data, can capture the underlying patterns without previous knowledge; however, they are limited to the domain of the training data used to build them.

A key challenge in both modeling approaches is dealing with uncertainty, and more specifically biological variability, which is inherent in biological systems. To address this, we utilize Gaussian Process (GP) modeling, a flexible, non-parametric machine learning technique that provides a framework for uncertainty quantification. In this study, the use of GPs allows for robust control of the fermentation by accounting for the biological variability of the system.

An optimal control framework is implemented for both the hybrid model and the mechanistic model to identify the optimal sugar feeding strategy for maximizing astaxanthin yield. This study demonstrates how optimal control can benefit from hybrid mechanistic and machine learning bioprocess modeling.
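
A minimal sketch of the hybrid idea, assuming a toy logistic growth law as the mechanistic part and synthetic measurements: a Gaussian process (scikit-learn) is trained on the residuals between model and data, so that the hybrid prediction carries an uncertainty estimate that a robust optimal control scheme could exploit. None of the data or kinetics below correspond to the Xanthophyllomyces dendrorhous process.

```python
# Minimal sketch of a hybrid model: a mechanistic prediction corrected by a
# Gaussian process trained on its residuals (scikit-learn). All data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def mechanistic_biomass(t):
    # Simplified logistic growth used as the mechanistic backbone (placeholder)
    return 10.0 / (1.0 + 9.0 * np.exp(-0.35 * t))

# Synthetic "measurements" with batch-to-batch (biological) variability
t_obs = np.linspace(0, 24, 15)
y_obs = mechanistic_biomass(t_obs) * (1 + 0.08 * np.sin(0.5 * t_obs)) + rng.normal(0, 0.1, t_obs.size)

# The GP learns the structured mismatch between mechanistic model and data
residuals = y_obs - mechanistic_biomass(t_obs)
kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs.reshape(-1, 1), residuals)

# Hybrid prediction with uncertainty, usable inside a robust optimal control loop
t_new = np.linspace(0, 30, 61).reshape(-1, 1)
corr_mean, corr_std = gp.predict(t_new, return_std=True)
hybrid_mean = mechanistic_biomass(t_new.ravel()) + corr_mean
print(f"Hybrid prediction at t = 30 h: {hybrid_mean[-1]:.2f} +/- {2*corr_std[-1]:.2f} (approx. 95% band)")
```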

References

Bhonsale, S. et al. (2022). Nonlinear Model Predictive Control based on multi-scale models: is it worth the complexity? IFAC-PapersOnLine, 55(23), 129-134. https://doi.org/10.1016/j.ifacol.2023.01.028

Vega-Ramon, F. et al. (2021). Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty. Biotechnology and Bioengineering, 118, 4854–4866. https://doi.org/10.1002/bit.27950



Tune Decomposition Schemes for Large-Scale Mixed-Integer Programs by Bayesian Optimization

Guido Sand1, Sophie Hildebrandt1, Sina Nunes1, Chung On Yip1, Meik Franke2

1Pforzheim University of Applied Science, Germany; 2University of Twente, The Netherlands

Heuristic decomposition schemes are a common approach to approximately solve large-scale mixed-integer programs (MIPs). A typical example is moving horizon schemes applied to scheduling problems. Decomposition schemes usually exhibit parameters which can be used to tune their performance. Examples of parameters of moving horizon schemes are the horizon length and the step size of its movement. Systematic tuning approaches are seldom reported in the literature.

In a previous paper by the first two authors, Bayesian optimization was proposed as a methodological approach to systematically tune decomposition schemes for mixed-integer programs. This approach is reasonable since the tuning problem is a black-box optimization problem with an expensive-to-evaluate objective function: each evaluation of the objective function of the Bayesian optimization requires the solution of the mixed-integer program using the specifically parametrized decomposition scheme. That paper demonstrated, using an exemplary mixed-integer hoist scheduling model and a moving horizon scheme, that the proposed approach is feasible and effective in principle.

After the proof of concept in the previous paper, the paper at hand discusses detailed results of three studies of the Bayesian optimization-based approach using the same exemplary hoist scheduling model:

  1. Examine the solution space:
    The graphs of the objective function (makespan or computational cost) of the tuning problem are analysed for small instances of the mixed-integer model, considering the sequences of evaluations of the Bayesian optimization in the integer-valued space of tuning parameters. The results show that the Bayesian optimization converges relatively fast to good solutions even though visual inspection of the graphs of the objective function reveals only little structure.
  2. Compare different acquisition functions:
    The type of acquisition function is studied since it is assumed to be a tuning parameter of the Bayesian optimization with a major impact on its performance. Four types of acquisition functions are applied to a set of test cases and compared with respect to the mean performance and its variance. The results show a similar performance of three types and a slightly inferior performance of the fourth type.
  3. Enlarge the tuning-parameter space:
    The scaling behaviour of the Bayesian optimization-based approach with respect to the dimension of the space of tuning-parameters is studied: The number of tuning-parameters is increased from two to four parameters (three integer- and one real-valued). First results indicate that the studied approach is also feasible for real-valued tuning parameters and remains effective in higher dimensional spaces.

The results indicate that Bayesian optimization is a promising approach to tune decomposition schemes for large-scale mixed-integer programs. Future work will investigate the optimization of tuning-parameters for multiple instances in two directions. Direction one is inspired by hyperparameter optimization methods and aims at tuning one decomposition scheme which is on average optimal for multiple instances. Direction two is motivated by algorithm selection methods and aims at predicting good tuning parameters from previously optimized tuning parameters.
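
A minimal sketch of the tuning loop, assuming scikit-optimize is available: the decomposition parameters (two integers and one real value, mirroring the enlarged parameter space above) are optimized with Gaussian-process-based Bayesian optimization and an expected-improvement acquisition function. The objective below is a cheap placeholder; in the actual approach each evaluation would run the parametrized moving horizon scheme on the hoist scheduling MIP.

```python
# Minimal sketch of tuning a moving-horizon decomposition with Bayesian optimization
# (scikit-optimize). The objective below is a cheap placeholder standing in for
# "solve the scheduling MIP with this parametrization and return the makespan".
from skopt import gp_minimize
from skopt.space import Integer, Real

def decomposition_makespan(params):
    horizon, step, overlap_weight = params
    # Placeholder surrogate: mimics a noisy, weakly structured tuning landscape.
    return (horizon - 12) ** 2 / 10 + (step - 4) ** 2 / 5 + 3.0 * abs(overlap_weight - 0.3)

search_space = [
    Integer(4, 30, name="horizon_length"),
    Integer(1, 10, name="step_size"),
    Real(0.0, 1.0, name="overlap_weight"),   # example real-valued tuning parameter
]

result = gp_minimize(
    decomposition_makespan,
    search_space,
    n_calls=30,                  # each call = one (expensive) decomposition run
    acq_func="EI",               # expected improvement; other acquisition functions can be compared
    random_state=0,
)
print("Best tuning parameters:", result.x, "objective:", round(result.fun, 3))
```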



Enhancing industrial symbiosis to reduce CO2 emissions in a Portuguese industrial park

Ricardo Nunes Dias1,2, Fátima Nunes Serralha2, Carla Isabel Costa Pinheiro1

1Centro de Química Estrutural, IMS, Department of Chemical Engineering, Instituto Superior Técnico/Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; 2RESILIENCE – Center for Regional Resilience and Sustainability, Escola Superior de Tecnologia do Barreiro, Instituto Politécnico de Setúbal, 2839-001 Lavradio, Portugal

The primary objective of any industry is to generate profit, which often results in a focus on the efficiency of product production, while environmental and social issues should not be neglected. However, it is important to recognise that every process has multiple outlets, including the desired products and residues. In some cases, the effort required to process these residues further may, at first glance, outweigh the benefits, leading to their disposal at a cost to the industry. Many of these residues can be sorted to enhance their value, enabling their sale instead of disposal [1].

This work presents a model developed in GAMS to identify and quantify potential symbioses that are already occurring, or could occur if the appropriate relations between enterprises were established. A network flow is modelled to establish as much symbiosis as possible. The objective function maximises material exchange between enterprises while ensuring that every possible symbiosis is established. This will result in exchanges between enterprises that may involve amounts of waste too small to be implemented in practice. However, this outcome is beneficial for decision-makers, as having multiple sinks for a given residue can be beneficial [2,3]. EMn,j,i,n' (exchanged material) is the main decision variable of the model, where the indices are: n and n', the donor and receiver enterprises (respectively); j, the category; and i, the residue. A binary variable, Yn,j,i,n', is also used to allow or disallow a given exchange between enterprises. Each residue is categorised according to the role it plays in each enterprise: it can be an industrial residue (category 3) or a resource (category 0); categories 1 and 2 are reserved for products and subproducts, respectively. The wastes produced are converted into CO2eq (carbon dioxide equivalent) as a quantification of environmental impact. Reducing the amount of waste produced can significantly reduce the environmental impact of a given enterprise. This study assesses the largest industrial park in Portugal, which encompasses a refinery and a petrochemical plant as the two largest facilities within the park. The direct CO2 emissions mitigated by the deployment of CO2 utilisation processes can be quantified. The establishment of a methanol plant utilising CO2 can reduce the CO2 emissions from the park by 335,560 tons. A range of CO2 utilisation processes will be evaluated to determine the optimal processes for implementation.

Even though a residue can have impacts on several environmental aspects, in this work we focus on reducing carbon emissions. Furthermore, it was found that cooperation between local enterprises and the announced investments of these enterprises can lead to significant environmental gains in the region studied.
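
The sketch below shows how the core model structure could look in an algebraic modelling layer (Pyomo is used here instead of GAMS for illustration): continuous exchanged-material variables linked to binary activation variables by big-M constraints, with placeholder availability and demand data. The category index j and the requirement that every possible symbiosis be established are omitted for brevity, so this is only a structural illustration of the described formulation.

```python
# Minimal Pyomo sketch of the exchange-model structure: continuous variables
# EM[n, i, n2] coupled to binary activation variables Y[n, i, n2]. All data below
# are illustrative placeholders, not the actual industrial-park dataset.
import pyomo.environ as pyo

m = pyo.ConcreteModel()

m.N = pyo.Set(initialize=["refinery", "petrochemical", "methanol_plant"])  # enterprises
m.I = pyo.Set(initialize=["CO2", "steam_condensate"])                      # residues

# Availability at donors and demand at receivers (t/y) -- placeholder numbers
avail = {("refinery", "CO2"): 500_000, ("petrochemical", "CO2"): 200_000,
         ("refinery", "steam_condensate"): 50_000}
demand = {("methanol_plant", "CO2"): 335_560, ("petrochemical", "steam_condensate"): 30_000}

m.EM = pyo.Var(m.N, m.I, m.N, domain=pyo.NonNegativeReals)   # exchanged material
m.Y = pyo.Var(m.N, m.I, m.N, domain=pyo.Binary)              # exchange established?

def supply_rule(m, n, i):
    return sum(m.EM[n, i, n2] for n2 in m.N if n2 != n) <= avail.get((n, i), 0)
m.supply = pyo.Constraint(m.N, m.I, rule=supply_rule)

def demand_rule(m, n2, i):
    return sum(m.EM[n, i, n2] for n in m.N if n != n2) <= demand.get((n2, i), 0)
m.sink = pyo.Constraint(m.N, m.I, rule=demand_rule)

def link_rule(m, n, i, n2):
    if n == n2:
        return m.EM[n, i, n2] == 0.0                          # no self-exchange
    return m.EM[n, i, n2] <= 1e6 * m.Y[n, i, n2]              # big-M activation link
m.link = pyo.Constraint(m.N, m.I, m.N, rule=link_rule)

m.obj = pyo.Objective(expr=sum(m.EM[n, i, n2] for n in m.N for i in m.I for n2 in m.N),
                      sense=pyo.maximize)
# pyo.SolverFactory("glpk").solve(m)  # any MILP solver can be used
```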

References

[1] J. Patricio, Y. Kalmykova, L. Rosado, J. Cohen, A. Westin, J. Gil, Resour Conserv Recycl 185 (2022). 10.1016/j.resconrec.2022.106437.

[2] D.C.Y. Foo, Process Integration for Resource Conservation, 2016.

[3] L. Fraccascia, D.M. Yazan, V. Albino, H. Zijm, Int J Prod Econ 221 (2020). 10.1016/j.ijpe.2019.08.006.



Blue Hydrogen Plant: Accurate Hybrid Model Based on Component Mass Flows and Simplified Thermodynamic Properties is Practically Linear

Farbod Maghsoudi, Raunak Pandey, Vladimir Mahalec

McMaster University, Canada

Current models of process plants are either rigorous first-principles models based on molar flows and fractions (used for process design or optimization of operating conditions) or simple mass- or volumetric-flow models (used for production planning and scheduling). Detailed models compute stream properties via nonlinear calculations which employ mole fractions, resulting in many nonlinearities and limiting plant-wide models to a single time-period computation. Planning models are flow-based models, usually linear, and therefore solve rapidly, which makes them suitable for multi-time-period representation of the plant at the expense of lower accuracy.

Once a plant is in operation, most of its streams stay at or close to the normal operating conditions which are maintained by the process control loops. Therefore, each stream can be described by its properties at these normal operating conditions (unit enthalpy, temperature, pressure, density, heat capacity, vapor fraction, etc.). It should be noted that these bulk properties per unit mass are much less sensitive to changes in stream composition if one employs mass units instead of moles (e.g. latent heat of C5 to C10 hydrocarbons varies much less in energy/mass than in energy/mole units).

Based on these observations, this work employs a new plant modelling paradigm which leads to models that have accuracy close to the rigorous models and at the same time the models are (almost) linear, thereby permitting rapid solution of large-scale single-period and multi-period models. Instead of total molar flow and mole fractions, we represent streams by mass flows of components and total mass flow. In addition, we employ simplified thermodynamic properties based on [property value/mass], which eliminates the need to use mole or mass fractions.

This paradigm has been used to model a blue hydrogen plant described in the NETL report [1]. The plant converts natural gas into hydrogen and CO2 via autothermal reforming (ATR) and water-gas shift (WGS) reactors. Oxygen is supplied from the air separation unit, while steam and electricity are supplied by a combined heat and power (CHP) unit. Stream properties at normal operating conditions have been obtained from the AspenPlus plant model. Surrogate reactor models employ mass component flows and have only one bilinear term, even though their AspenPlus counterpart is a highly nonlinear RGIBBS model. The entire plant model has a handful of bilinear terms, and its results are within 1% to 2% of the rigorous AspenPlus model.

The novelty of our work is in changing the plant modelling paradigm from molar flows, fractions, and rigorous thermodynamic property calculations to mass component flows and simplified thermodynamic properties. Rigorous property calculation is used to update the simplified properties after the hybrid model converges. This novel plant modelling paradigm greatly reduces nonlinearities of plant models while maintaining high accuracy. Due to its rapid convergence, the same plant model can be used for optimization of operating conditions, multi-time-period production planning, and scheduling.
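
A minimal sketch of the mass-based stream representation, using an invented component list and per-mass enthalpy values: because each stream carries component mass flows and fixed unit-mass properties taken at normal operating conditions, mixing balances and enthalpy bookkeeping become linear in the flows, which is the source of the (almost) linear plant model described above.

```python
# Minimal sketch of a mass-based stream representation: component mass flows plus
# fixed per-mass properties at normal operating conditions make stream enthalpy
# linear in the flows. Property values are illustrative placeholders, not plant data.
import numpy as np

components = ["CH4", "H2", "CO2", "H2O"]

# Specific enthalpy per unit mass at normal operating conditions (kJ/kg), held fixed
h_ref = np.array([1200.0, 4100.0, 950.0, 2700.0])

def stream_enthalpy(mass_flows_kg_s):
    """Linear in component mass flows: H = sum_i m_i * h_i (kW)."""
    return float(np.dot(mass_flows_kg_s, h_ref))

feed = np.array([10.0, 0.5, 1.0, 8.0])       # kg/s per component
recycle = np.array([1.0, 2.0, 0.2, 0.5])

mixed = feed + recycle                         # component mass balances are linear
imbalance_kW = stream_enthalpy(mixed) - stream_enthalpy(feed) - stream_enthalpy(recycle)
print(f"Mixer enthalpy imbalance (exactly 0 because H is linear in the flows): {imbalance_kW:.1f} kW")
# After the (almost linear) plant model converges, a rigorous property call would
# update h_ref and the few remaining bilinear terms.
```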

References:

  1. Comparison of Commercial State-of-the-art Fossil-based Hydrogen Production Technologies, DOE/NETL-2022/3241, April 12, 2022


Synergies between the distillation of first- and second-generation sugarcane ethanol for sustainable biofuel production

Luiz De Martino Costa1,2, Abhay Athaley3, Zach Losordo4, Adriano Pinto Mariano1, John Posada2, Lee Rybeck Lynd5

1Universidade Estadual de Campinas, Brazil; 2Delft University of Technology, The Netherlands; 3National Renewable Energy Laboratory, United States; 4Terragia Biofuel Incorporated, United States; 5Dartmouth College, United States

Despite the yearly opening of second-generation (2G) sugarcane distilleries in Brazil, 2G bagasse ethanol distillation remains a challenging unit operation due to low-titer ethanol having increased heat duty and production costs per ethanol mass produced. For this reason, and because of the logistics involving transporting sugarcane bagasse, 2G bagasse ethanol is currently commercially produced in plants annexed to first-generation (1G) ethanol plants, and this configuration can likely become one path of evolution for 2G ethanol production in Brazil.

In the context of 1G2G integrated sugarcane ethanol plants, mixing ethanol beers from both processes may reduce the production costs of 2G ethanol (personal communication with a 2G ethanol producer). However, the energy, process, economic, and environmental advantages of this integrated model compared to its stand-alone counterpart remain unclear. Thus, this work focused on the energy synergies between the distillation of integrated first- and second-generation sugarcane ethanol mills.

For this investigation, integrated and separated 1G2G distillation simulations were conducted using Aspen Plus v.10. The separated distillation arrangement consisted of two RadFrac columns: one to distill 1G beer and another to distill 2G beer to near-azeotropic levels (91.5 wt% ethanol). In the integrated distillation arrangement, two columns were used: one to rectify 2G beer and another to distill 2G vapor and 1G beer to azeotropic levels. The mass flow ratio of 1G to 2G beer was assumed to be 3:1; both mixtures enter the columns as saturated liquid and consist of only water and ethanol. The 1G beer titer was assumed to be 100 g/L and the 2G beer titer was varied from 10 to 40 g/L to understand and compare the energy impacts of low-titer 2G beer. The energy analysis was conducted by quantifying and comparing the reboilers' duty and distilled ethanol production to calculate the heating energy demand.

1G2G integration resulted in an overall heating energy demand for ethanol distillation at a near-constant value of 3.25 MJ/kg ethanol, regardless of the 2G ethanol titer. In comparison, the separated scenario had an energy demand ranging from 3.60 (40 g/L 2G beer titer) to 3.80 (10 g/L 2G beer titer) MJ/kg ethanol, meaning that it is possible to obtain energy savings of 9.5% to 14.5%. In addition to the energy savings, the energy demand found for the integrated scenario is almost the same as for 1G beer alone. The main reason for these results is that the reflux ratio necessary for distillation of the 2G beer is lowered in an integrated 1G2G column to values close to 1G-only conditions, reducing the energy demand attributable to 2G ethanol. This can be observed in the integrated scenario by the 2G ethanol heat demand in isolation remaining at a near-constant value of 3.35 MJ/kg ethanol over the studied range of 2G ethanol titer, while changing from 5.81 to 19.92 MJ/kg ethanol in the separated scenario. These results indicate that distillation integration should be chosen for 1G2G sugarcane distilleries for a less energy-demanding process and, therefore, a more sustainable biofuel.



Development of anomaly detection models independent of noise and missing values using graph Laplacian regularization

Yuna Tahashi, Koichi Fujiwara

Department of Materials Process Engineering, Nagoya University, Japan

Process data frequently suffer from imperfections such as missing values or measurement noise due to sensor malfunctions. Such data imperfections pose significant challenges to process fault detection, potentially leading to false positives or overlooking rare faulty events. Fault detection models with high sensitivity may excessively detect these irregularities, which disturbs the identification of true faulty events.

To address this challenge, we propose a new fault detection model based on an autoencoder architecture with graph Laplacian regularization that considers specific temporal relationships among time series data. Laplacian regularization assumes that neighboring samples remain similar, imposing significant penalties when neighboring samples lack smoothness. In addition, graph Laplacian regularization can take the smoothness of graph structures into account. Since normal samples in close temporal proximity should keep similar characteristics, a graph can be utilized to represent temporal dependencies between successive samples in a time series. In the proposed model, the nearest correlation (NC) method, a structural learning algorithm that considers the correlation among variables, is used. Using graph Laplacian regularization with the NC method, it is expected that missing values or measurement noise are corrected automatically from the viewpoint of the correlation among variables under normal process conditions, and that only significant changes such as faulty events are detected, because they cannot be corrected sufficiently. The proposed method has broad applicability to various models because the graph regularization term based on the NC method is simply added to the objective function when a model is trained.

To demonstrate the efficacy of our proposed model, we conducted a case study using simulation data generated from a vinyl acetate monomer (VAM) production process, employing a rigorous process model built on Visual Modeler (Omega Simulation Inc., Japan). In the VAM simulator, six faulty scenarios, such as sudden changes in feed composition and pressure, were generated.

The results show that the fault detection model with graph Laplacian regularization provides higher fault detection accuracy compared to the model without graph Laplacian regularization in some faulty scenarios. The false alarm rate (FAR) and the missing alarm rate (MAR) were improved by up to 0.4% and 50.1%, respectively. In addition, the detection latency (DL) was shortened by up to 1,730 seconds. Therefore, it was confirmed that graph Laplacian regularization with the NC method is particularly effective for fault detection.

The use of graph Laplacian regularization with the NC method is expected to realize a more reliable fault detection model, which would be capable of robustly handling noise and missing values, reducing false positives, and identifying true faulty events rapidly. This advancement promises to enhance the efficiency and reliability of process monitoring and control across various industrial applications.
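
A minimal PyTorch sketch of the regularized training objective, assuming a simple chain graph over consecutive samples in place of the nearest-correlation graph and random placeholder data: the autoencoder reconstruction loss is augmented with the graph Laplacian smoothness penalty tr(Z^T L Z) on the latent codes.

```python
# Minimal PyTorch sketch of an autoencoder loss augmented with a graph Laplacian
# penalty over temporally neighbouring samples. The chain graph below is a stand-in
# for the nearest-correlation (NC) graph used in the paper; data are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_vars, latent = 64, 10, 3

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_vars, latent), nn.Tanh())
        self.dec = nn.Linear(latent, n_vars)
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Adjacency linking each sample to its temporal neighbour (chain graph)
A = torch.zeros(n_samples, n_samples)
idx = torch.arange(n_samples - 1)
A[idx, idx + 1] = 1.0
A[idx + 1, idx] = 1.0
L = torch.diag(A.sum(1)) - A            # graph Laplacian

X = torch.randn(n_samples, n_vars)      # placeholder for normal operating data
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 0.1                               # regularization weight

for epoch in range(200):
    opt.zero_grad()
    x_hat, z = model(X)
    recon = ((x_hat - X) ** 2).mean()
    # tr(Z^T L Z): penalizes latent codes that are not smooth along the graph
    smooth = torch.trace(z.T @ L @ z) / n_samples
    loss = recon + lam * smooth
    loss.backward()
    opt.step()

# At monitoring time, a large reconstruction error flags a potential fault.
```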



Comparing incinerator kiln model predictions with measurements of industrial plants

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Because of the heterogeneity of the waste and the lack of local measurements, the industry relies on traditional control strategies, including manual piloting. Advanced modeling strategies have been used to gain insights into the design of such facilities. Despite two decades of scientific effort, obtaining good model accuracy and reliability is still challenging.
In this work, the predictions of a phenomenological model based on simplifications of literature models are compared with measurements from an industrial incinerator. The model consists of two sub-models, namely the bed model and the freeboard model. The bed refers to the solid waste traveling through the kiln, while the freeboard refers to the gaseous space above the bed where the flame resides.
The bed of waste is simulated with finite volumes and a walking-columns approach, while the freeboard is modeled with the zone method, and the interface with the boiler is taken into account through a three-layer system. The code implementation of the model takes into account various geometries and other important plant characteristics in a way that allows different types of grate kilns to be simulated easily.
The incinerator used as a reference for the development of the model is located in Alsace, France. It features a waste chute, a three-zone grate, water walls in the kiln, four secondary air injection points and a cooling water injection. The simulation results are compared with temperature and gas composition measurements. Except for oxygen concentration, gas composition data need to be back-calculated from stack gas analyzers. The simulated bed height is compared with the observable fraction of the actual bed. The model reproduces the static behavior and general dynamic tendencies well.
The very strong model sensitivity to particle diameter is discussed. Additionally, the model is configured for two other incinerators and a preliminary comparison with industrial data is performed to assess the generality of the model.
Despite encouraging results, the need for further work on the solid-phase behavior is highlighted.



Modeling the freeboard of a municipal waste incinerator

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Despite the apparent simplicity of the process, the heterogeneity of the waste and the scarcity of local measurements make waste incineration a challenging process to describe mathematically.
Most modeling efforts are concentrated on the bed behavior. However, the gaseous space above the bed, named the freeboard, also needs to be modeled in order to mathematically represent the behavior of the kiln. Indeed, there is a tight coupling between these two spaces, as the bed feeds the freeboard with pyrolysis gases allowing a flame to form in the freeboard, while the flame radiates heat back to the bed, allowing the drying and the pyrolysis to take place.
The freeboard may be modeled using various techniques. The most accurate and commonly used technique is CFD, generally with established commercial software. CFD allows detailed flow characteristics to be obtained, which is very valuable for optimizing secondary air injection. However, the CFD setup is quite heavy and harder to interface with the custom codes typically used for bed modeling. In this work, we propose a coarse model, more adapted to operational use. Each grate zone is associated with a freeboard gas space where homogeneous combustion reactions occur. Radiative heat transfer is modeled using the zonal method. Three layers are used to represent the interface with the boiler and the thermal inertia the refractory induces. The flow description is reduced to its minimum and solved through the combination of the continuity equation and the ideal gas law, without a momentum balance.
The resulting mathematical model is a system of ODEs that can be easily solved with general-purpose stiff ODE solvers based on backward differentiation formulas. Steady-state simulation results show good agreement with the few measurements available. Dynamic effects are hard to validate due to the lack of local measurements, but general tendencies seem well represented. The coarse freeboard representation is shown to be sufficient to obtain the radiation profile arriving at the bed.
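
As an illustration of the numerical treatment, the sketch below integrates a generic stiff two-state system with SciPy's BDF (backward differentiation formula) solver; the equations are placeholders and not the freeboard model itself.

```python
# Minimal sketch of solving a stiff ODE system with a BDF method, as done for the
# freeboard model; the toy two-state system below is a generic stiff example only.
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Fast "combustion-like" consumption coupled to a slower thermal state
    fuel, temperature = y
    dfuel = -1e3 * fuel * temperature
    dtemp = 1e3 * fuel * temperature - 0.5 * (temperature - 1.0)
    return [dfuel, dtemp]

sol = solve_ivp(rhs, (0.0, 50.0), y0=[1.0, 1.0], method="BDF", rtol=1e-6, atol=1e-9)
print(f"Solver success: {sol.success}, steps taken: {sol.t.size}")
print(f"Final state: fuel = {sol.y[0, -1]:.3e}, temperature = {sol.y[1, -1]:.3f}")
```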



Superstructure as a communication tool in pre-emptive life cycle design engaging society: Findings from case studies on battery chemicals, plastics, and regional resources

Yasunori Kikuchi1, Ayumi Yamaki1, Aya Heiho2, Jun Nakatani1, Shoma Fujii1, Ichiro Daigo1, Chiharu Tokoro1,3, Shisuke Murakami1, Satoshi Ohara1

1The University of Tokyo, Japan; 2Tokyo City University, Japan; 3Waseda University, Japan

Emerging technologies require sophisticated design and optimization engaging social systems due to their innovative and rapidly advancing characteristics. Although they have a significant capacity to change material flows and life cycles as they penetrate the market, their future development and sociotechnical regimes, e.g., the regulatory environment, societal infrastructure, and markets, are still uncertain and may affect the optimal systems to be implemented in the future. Multiple technologies are being considered simultaneously for a single issue, and appropriate demarcation and synergistic effects are not being evaluated. Superstructures in process systems engineering can visualize all alternative candidates for design problems and contain emerging technologies as such candidates.

In this study, we are tackling pre-emptive life cycle design in social challenges implementing emerging technologies with case studies on battery chemicals, plastics, and regional resources. Appropriate alternative candidates were generated with stakeholders in industries and national projects by constructing superstructures. Based on the consensus superstructures, life cycles have been proposed considering life cycle assessment (LCA) by the simulations of applying emerging technologies.

Regarding the battery chemistry issue, nickel-manganese-cobalt (NMC) type lithium batteries have become dominant, although the lithium iron phosphate (LFP) type has also been considered as a candidate. The battery chemistries and recycling technologies are the emerging technologies in this issue, and superstructures were proposed for recycling systems (Yonetsuka et al., 2024). Through communication with the managers of Japanese national projects on battery technology, scenarios for battery resource circulation have been developed. The issue of plastics has become the design problem of systems applying biomass-derived and recycle-based carbon sources (Meng et al., 2023; Kanazawa et al., 2024). Based on a superstructure (Nakamura et al., 2024), scenario planning and LCA have been conducted and shared with stakeholders for designing future plastic resource circulation. Regional resources could be circulated by implementing multiple technologies (Kikuchi et al., 2024). Through communication with residents and stakeholders, a demonstration test was conducted.

The case studies in this work lead to the findings below. Superstructures with technology assessments can support a common understanding of the applicable technologies and their pros and cons. Because technologies cannot be implemented without social acceptance, CAPE tools should be able to address the sociotechnical and socioeconomic aspects of process systems.

D. Kanazawa et al., 2024, Scope 1, 2, and 3 Net Zero Pathways for the Chemical Industry in Japan, J. Chem. Eng. Jpn., 57 (1). DOI: 10.1080/00219592.2024.2360900.

Y. Kikuchi et al., 2024, Prospective life-cycle design of regional resource circulation applying technology assessments supported by CAPE tools, Comput. Aid. Chem. Eng., 53, 2251-2256

F. Meng et al., 2023, Planet compatible pathways for transitioning the chemical industry, Proc. Natl. Acad. Sci., 120 (8) e2218294120.

T. Nakamura et al., 2024, Assessment of Plastic Recycling Technologies Based on Carbon Resource Circularity Considering Feedstock and Energy Use, Comput. Aid. Chem. Eng., 53, 799-804

T. Yonetsuka et al., 2024, Superstructure Modeling of Lithium-Ion Batteries for an Environmentally Conscious Life-Cycle Design, Comput. Aid. Chem. Eng., 53, 1417-1422



A kinetic model for transesterification of vegetable oils catalyzed by sodium methylate—Insights from inline Raman spectroscopy

Ilias Bouchkira, Mohammad El Wajeh, Adel Mhamdi

Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany

The transesterification of triolein by methanol for biodiesel production is of great interest due to its potential to provide a sustainable and environmentally friendly alternative to fossil fuels. Biodiesel can be produced from renewable sources like vegetable oils, thereby contributing to reducing greenhouse gas emissions and dependency on non-renewable energy. The process also yields glycerol, a valuable by-product that is used in various industries. Given the growing global demand for cleaner energy and sustainable chemical processes, understanding and modeling the kinetics of biodiesel production is critical for improving efficiency, reducing costs, and ensuring scalability of biodiesel production, especially for model-based process design and control (El Wajeh et al., 2023).

We present a kinetic model of the transesterification of triolein by methanol to produce fatty acid methyl esters (FAME), i.e. biodiesel, and glycerol. For parameter estimation, we perform transesterification experiments using an automated lab-scale system consisting of a semi-batch reactor, dosing pumps, stirring system and a cooling/heating thermostat. An important contribution in this work is that we use inline Raman spectroscopy instead of taking samples for offline analysis. The application of Raman spectroscopy enables continuous concentration monitoring of key species involved in the reaction, i.e. FAME, triglycerides, methanol, glycerol and catalyst.

We employ sodium methylate as a catalyst, addressing a gap in the literature, where kinetic parameter values for the transesterification with this catalyst are lacking. To ensure robust parameter estimation, we perform a global sensitivity-based estimability analysis (Bouchkira et al., 2024), confirming that the experimental data is sufficient for accurate model calibration. The parameter estimation is carried out using genetic algorithms, and we determine the confidence intervals of the estimated parameters through Hessian matrix analysis. This approach ensures reliable and meaningful model parameters for a broad range of operating conditions.

We perform experiments at several temperatures relevant for industrial application, with a specific focus on the range around 60°C. The Raman probe used inside the reactor is calibrated offline with high precision, achieving excellent calibration accuracy for concentrations (R2 = 0.99). The predicted concentrations from the model align with the experimental data, with deviations generally under 2%, demonstrating the accuracy and reliability of the proposed kinetic model across different operating conditions.
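
A minimal sketch of the estimation step, assuming a lumped one-step second-order rate law and synthetic concentration-time data in place of the Raman measurements; SciPy's differential evolution is used here as a stand-in for the genetic algorithm mentioned above.

```python
# Minimal sketch of kinetic parameter estimation from concentration-time data with
# an evolutionary optimizer. The lumped rate law and "measurements" are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def simulate(k, t_eval, c0=(0.9, 3.0)):
    # d[TG]/dt = -k [TG][MeOH]; methanol consumed 3:1 (lumped one-step model)
    def rhs(t, c):
        tg, meoh = c
        r = k * tg * meoh
        return [-r, -3.0 * r]
    sol = solve_ivp(rhs, (0, t_eval[-1]), list(c0), t_eval=t_eval)
    return sol.y[0]

t_data = np.linspace(0, 60, 13)                   # min
rng = np.random.default_rng(2)
c_data = simulate(0.012, t_data) + rng.normal(0, 0.01, t_data.size)  # synthetic data

def sse(params):
    return float(np.sum((simulate(params[0], t_data) - c_data) ** 2))

result = differential_evolution(sse, bounds=[(1e-4, 0.1)], seed=0, tol=1e-8)
print(f"Estimated rate constant k = {result.x[0]:.4f} L/(mol·min)")
```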

References

El Wajeh, M., Mhamdi, A., & Mitsos, A. (2023). Dynamic modeling and plantwide control of a production process for biodiesel and glycerol. Industrial & Engineering Chemistry Research, 62(27), 10559-10576.

Bouchkira, I., Latifi, A. M., & Benyahia, B. (2024). ESTAN—A toolbox for standardized and effective global sensitivity-based estimability analysis. Computers & Chemical Engineering, 186, 108690.



Integration of renewable energy and reversible solid oxide cells to decarbonize secondary aluminium production and urban systems

Daniel Florez-Orrego1, Dareen Dardor1, Meire Ribeiro Domingos1, Reginald Germanier2, François Maréchal1

1Ecole Polytechnique Federale de Lausanne, Switzerland; 2Novelis Sierre S.A.

The aluminium recycling and remelting industry is a key actor in advancing a sustainable and circular economy within the aluminium sector. Currently, energy conversion processes in secondary aluminium production are largely dependent on natural gas, exposing the industry to volatile market prices and contributing to significant environmental impacts. To mitigate this, efforts are focused on reducing reliance on fossil fuels by incorporating renewable energy and advanced cogeneration systems. Due to the intermittent nature of renewable energy, a combination of technologies can be employed to improve energy integration and enhance process resilience in heavy industry. These technologies include energy storage systems, oxycombustion furnaces, carbon abatement, power-to-gas technologies, and biomass thermochemical conversion. This configuration allows for seasonal storage of renewable energy, optimizing its use during periods of high electricity and natural gas prices. High-temperature reversible solid oxide cells (rSOC) play a critical role in balancing energy needs, while increasing exergy efficiency within the integrated facility, offering advantages over traditional cogeneration systems. When thermally integrated into an aluminium remelting plant, the whole system functions as an industrial battery (i.e. fuel and gas storage), cascading low-grade waste heat to a nearby urban agglomeration. The waste heat temperature from aluminium furnaces and biomass energy conversion technologies supports the integration of high-temperature reversible solid oxide cells. The post-combustion of tail gas from these cells provides heat to the melter furnace, while the electricity generated can be used elsewhere in the system, such as for powering electrical furnaces, rolling processes, ancillary demands, and district heating heat pumps. In fact, by optimally tuning the operating parameters of the rSOC, which in turn depend on the partial load and the utilization factor, the heat-to-power ratio can be modulated to satisfy the energy demands of all the industrial and urban systems involved. The chemically-driven heat recovery in the reforming section is also compared to other energy recovery systems, such as supercritical CO2 power cycles and preheater-melter furnace integration. In all cases, the low-grade waste heat, typically rejected to the environment, is recovered to supply heat to the city via heat pumping systems connected to an anergy district heating network. In this advanced integrated scenario, energy consumption increases by only 30% compared to conventional systems based on natural gas and biomass combustion. However, CO2 emissions are reduced by a factor of three, particularly when combined with a carbon management and sequestration system. Further reductions in emissions can be achieved if higher shares of renewable electricity become available. Moreover, the use of local renewable energy resources promotes the energy security and sustainability of industries traditionally reliant on fossil energy resources.



A Novel Symbol Recognition Framework for Digitization of Piping and Instrumentation Diagrams

Zhiyuan Li1, Zheqi Liu2, Jinsong Zhao1, Huahui Zhou3, Xiaoxin Hu3

1Department of Chemical Engineering, Tsinghua University, Beijing, China; 2Department of Computer Science and Engineering, University of California, San Diego, US; 3Sinopec Ningbo Engineering Co., Ltd, Ningbo, China

Piping and Instrumentation Diagrams (P&IDs) are essential in the chemical industry, but most exist as scanned images, limiting seamless integration into digital workflows. This paper proposes a method to digitize P&IDs and automate unit operation selection for Hazard and Operability (HAZOP) analysis. We combined convolutional neural networks and transformers to detect devices, pipes, instrumentation, and text in image-format P&IDs. Then we reconstructed the process topology and control structures for each P&ID using distance metric learning. Furthermore, multiple P&IDs were integrated into a comprehensive chemical process knowledge graph by stream and equipment identifiers. To facilitate automated HAZOP analysis, we developed a node-merging algorithm that groups equipment according to predefined unit operation categories, thereby identifying specific analysis objects for intelligent HAZOP analysis.

An evaluation conducted on a dataset comprising 500 simulated P&IDs revealed that the device recognition process achieved over 99% precision and recall, with 93% accuracy in text extraction. Processing time was reduced threefold compared to conventional methods, and the node-merging algorithm yielded satisfactory results. This study improves data sharing in chemical process design and facilitates automated HAZOP analysis.
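
A minimal sketch of the node-merging step on a reconstructed topology graph, using networkx and an invented example of a small distillation section: equipment nodes sharing a predefined unit-operation category are collapsed into a single HAZOP analysis object while inter-group connections are preserved. The graph, labels and grouping rule are illustrative assumptions, not the paper's algorithm or dataset.

```python
# Minimal sketch of merging connected equipment nodes into predefined unit-operation
# groups on a P&ID topology graph (networkx). Example graph and categories are invented.
import networkx as nx

# Reconstructed topology: nodes are recognized symbols, edges are piping connections
G = nx.Graph()
G.add_edges_from([
    ("feed_pump", "preheater"), ("preheater", "column_C1"),
    ("column_C1", "condenser_C1"), ("column_C1", "reboiler_C1"),
    ("condenser_C1", "reflux_drum_C1"), ("reflux_drum_C1", "column_C1"),
])

# Predefined unit-operation category for each recognized symbol (placeholder mapping)
category = {
    "column_C1": "distillation_unit", "condenser_C1": "distillation_unit",
    "reboiler_C1": "distillation_unit", "reflux_drum_C1": "distillation_unit",
    "feed_pump": "feed_section", "preheater": "feed_section",
}

def merge_by_category(graph, cat):
    merged = nx.Graph()
    for u, v in graph.edges():
        cu, cv = cat[u], cat[v]
        if cu != cv:
            merged.add_edge(cu, cv)      # keep only inter-group connections
        else:
            merged.add_node(cu)
    return merged

hazop_nodes = merge_by_category(G, category)
print("HAZOP analysis objects:", list(hazop_nodes.nodes()))
print("Connections between objects:", list(hazop_nodes.edges()))
```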



Twin Roll Press Washer Blockage Prediction: A Pulp and Paper Plant Case Study

Bryan Li1,2, Isaac Severinsen1,2, Wei Yu1,2, Timothy Walmsley2, Brent Young1,2

1Department of Chemical and Materials Engineering, The University of Auckland, Auckland 1010, New Zealand; 2Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato, Hamilton 3240, New Zealand

A process fault is considered an unacceptable deviation from the normal state. Process faults can incur significant product and revenue loss, as well as damage to personnel and equipment. The aim of this research is to create a self-learning digital twin that closely replicates and interfaces with a physical plant to appropriately advise plant operators of a potential plant fault in the near future. A key challenge in accurately predicting process faults is the lack of fault data due to the scarcity of fault occurrences. To overcome this challenge, this study uses generative artificial intelligence to create synthetic data indistinguishable from the limited real process fault datasets, so that deep learning algorithms can better learn the fault behaviours. The model capability is further enhanced with real-time fault library updates employing methods of low computational cost: principal component analysis and transfer learning.

A pulp bleaching and washing process is used as an industrial case study. This process is connected to downstream black liquor evaporators and chemical recovery boilers. Successful development of this model can aid decarbonisation progress in the pulp and paper industry by decreasing energy wastage, water usage, and process downtime.
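
As one illustration of the low-computational-cost monitoring ingredients mentioned above, the sketch below builds a PCA model of (synthetic) normal washer data and flags abnormal samples with the Hotelling T^2 statistic; it is a generic example and does not reproduce the study's generative-AI or transfer-learning components.

```python
# Minimal sketch of PCA-based monitoring: a model of normal operation whose
# Hotelling T^2 statistic flags abnormal samples. Data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X_normal = rng.normal(size=(500, 8))                           # normal operating data
X_normal[:, 1] = 0.8 * X_normal[:, 0] + 0.2 * X_normal[:, 1]   # correlated sensors

pca = PCA(n_components=3).fit(X_normal)

def hotelling_t2(pca_model, X):
    scores = pca_model.transform(X)
    return np.sum(scores ** 2 / pca_model.explained_variance_, axis=1)

# Control limit from the training data (simple empirical 99th percentile)
limit = np.percentile(hotelling_t2(pca, X_normal), 99)

x_fault = X_normal[:1].copy()
x_fault[0, 0] += 6.0                                           # simulated blockage-like deviation
print(f"T^2 limit: {limit:.2f}, faulty sample T^2: {hotelling_t2(pca, x_fault)[0]:.2f}")
```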



Addressing Incomplete Physical Models in Chemical Processes: A Novel Physics-Informed Neural Network Approach

Zhiyuan Xie, Feiya Lv, Jinsong Zhao

Tsinghua University, China, People's Republic of

In recent years, machine learning—particularly neural networks—has exerted a transformative influence on various facets of chemical processes, including variable prediction, fault detection, and fault diagnosis. However, when data is incomplete or insufficient, purely data-driven neural networks often encounter difficulties in achieving high predictive accuracy. Physics-Informed Neural Networks (PINNs) address these limitations by embedding physical knowledge and prior domain expertise into the neural network framework, thereby constraining the solution space and facilitating effective training with limited data. This methodology offers notable advantages in handling scarce industrial datasets.

Despite these strengths, PINNs depend on explicit formulations of nonlinear partial differential equations (PDEs), which present significant challenges when modeling the intricacies of complex chemical processes. To overcome these limitations, this study introduces a novel PINN architecture capable of accommodating processes with incomplete PDE descriptions. Experimental evaluations on a Continuous Stirred Tank Reactor (CSTR) dataset, along with real-world industrial datasets, validate the proposed architecture's effectiveness and demonstrate its feasibility in scenarios involving incomplete physical models.
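
A minimal PyTorch sketch of the general idea of training with an incomplete physics description, under invented assumptions: a CSTR-like balance whose transport terms are known while the reaction term is represented by a second small network, so the physics residual can still be penalized alongside sparse data. This is only an illustration of the concept, not the architecture proposed in the paper.

```python
# Minimal sketch of a physics-informed loss with incomplete physics: the known balance
# terms form a residual while the unknown reaction term is itself learned. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

net_c = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # concentration c(t)
net_r = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))   # unknown reaction term r(c)
opt = torch.optim.Adam(list(net_c.parameters()) + list(net_r.parameters()), lr=1e-3)

# Sparse "measurements" (placeholder data)
t_data = torch.tensor([[0.0], [1.0], [3.0], [6.0]])
c_data = torch.tensor([[1.0], [0.7], [0.4], [0.25]])

t_col = torch.linspace(0, 6, 50).reshape(-1, 1).requires_grad_(True)    # collocation points
tau, c_in = 2.0, 0.0   # assumed residence time and inlet concentration (known part)

for step in range(3000):
    opt.zero_grad()
    # Data misfit
    loss_data = ((net_c(t_data) - c_data) ** 2).mean()
    # Physics residual: dc/dt = (c_in - c)/tau - r(c), with r(c) learned
    c = net_c(t_col)
    dcdt = torch.autograd.grad(c, t_col, torch.ones_like(c), create_graph=True)[0]
    residual = dcdt - ((c_in - c) / tau - net_r(c))
    loss_phys = (residual ** 2).mean()
    loss = loss_data + loss_phys
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```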



A Physics-based, Data-driven Numerical Framework for Anomalous Diffusion of Water in Soil

Zeyuan Song, Zheyu Jiang

Oklahoma State University, United States of America

Precision modeling and forecasting of soil moisture are essential for implementing smart irrigation systems and mitigating agricultural drought. Agro-hydrological models, which describe irrigation, precipitation, evapotranspiration, runoff, and drainage dynamics in soil, are widely used to simulate the root-zone (top 1 m of soil) soil moisture content. Most agro-hydrological models are based on the standard Richards equation [1], a highly nonlinear, degenerate elliptic-parabolic partial differential equation (PDE) with a first-order time derivative. However, research has shown that the standard Richards equation is unable to model preferential flow in soil with fractal structure. In such a scenario, the soil exhibits anomalous non-Boltzmann scaling behavior. For soils exhibiting non-Boltzmann scaling behavior, the soil moisture content is a function of $\frac{x}{t^{\alpha/2}}$, where $x$ is the position vector, $t$ denotes the time, and $\alpha$ is a soil-dependent parameter indicating subdiffusion ($\alpha \in (0,1)$) or superdiffusion ($\alpha \in (1,2)$). Incorporating this functional form of soil moisture into the Richards equation leads to a generalized, time-fractional Richards equation based on fractional time derivatives. Clearly, solving the time-fractional Richards equation for accurate modeling of water flow dynamics in soil faces extensive theoretical and computational challenges. Naïve approaches typically discretize the time-fractional Richards equation using the finite difference method (FDM). However, the stability of FDM is not guaranteed. Furthermore, the underlying physical laws (e.g., mass conservation) are often lost during the discretization process.

Here, we propose a novel numerical method that synergistically integrates finite volume method (FVM), adaptive linearization scheme, global random walk, and neural network to solve the time-fractional Richards equation. Specifically, the fractional time derivatives are first approximated using trapezoidal quadrature formula, before discretizing the time-fractional Richards equation by FVM. Leveraging our previous findings [2], we develop an adaptive linearization scheme to solve the discretized equation iteratively, thereby overcoming the stability issues associated with directly solving a stiff and sparse matrix equation. To better preserve the underlying physics during the solution process, we reformulate the linearized equation using global random walk algorithm. Next, as opposed to making the prevailing assumption that, in any discretized cell, the soil moisture is proportional to the number of particles, we show that this assumption does not hold. Instead, we propose to use neural networks to model the highly nonlinear relationships between the soil moisture content and the number of particles. We illustrate the accuracy and computational efficiency of our proposed physics-based, data-driven numerical method using numerical examples. Finally, a simple way to efficiently identify the parameter is developed to match the solutions of time-fractional Richards equation with experimental measurements.
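
For intuition on the fractional-derivative ingredient, the sketch below discretizes a Caputo time derivative with the classical L1 (piecewise-linear quadrature) scheme and checks it against the analytical result for f(t) = t^2; it is a generic illustration, not the authors' trapezoidal formulation or the full FVM/random-walk solver.

```python
# Minimal sketch of discretizing a Caputo fractional time derivative with the
# classical L1 quadrature scheme, checked against the analytical derivative of t^2.
import numpy as np
from math import gamma

alpha = 0.6                     # subdiffusive case, alpha in (0, 1)
dt, n_steps = 0.01, 200
t = dt * np.arange(n_steps + 1)
f = t ** 2

def caputo_l1(f_vals, dt, alpha):
    n = len(f_vals) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)          # quadrature weights
    increments = f_vals[1:][::-1] - f_vals[:-1][::-1]      # f(t_{n-j}) - f(t_{n-j-1})
    return (dt ** (-alpha) / gamma(2 - alpha)) * np.sum(b * increments)

numerical = caputo_l1(f, dt, alpha)
analytical = 2.0 * t[-1] ** (2 - alpha) / gamma(3 - alpha)  # Caputo D^alpha of t^2
print(f"numerical  {numerical:.5f}")
print(f"analytical {analytical:.5f}")
```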

References

[1] L.A. Richards, Capillary conduction of liquids through porous mediums, Physics, 1931, 1(5): 318-333.

[2] Z. Song, Z. Jiang, A Novel Data-driven Numerical Method for Hydrological Modeling of Water Infiltration in Porous Media, arXiv preprint arXiv:2310.02806, 2023.



Supersaturation Monitoring for Batch Crystallization using Empirical and Machine Learning Models

Mohammad Reza Boskabadi, Merlin Alvarado Morales, Seyed Soheil Mansouri, Gürkan Sin

Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Batch crystallization serves as a downstream process within the pharmaceutical and food industries, providing a high degree of flexibility in the purification of a wide range of products. Effective control over the crystal size distribution (CSD) is essential in these processes to minimize waste and the need for recycling, as crystals falling outside the target size range are typically considered waste or are recycled (Boskabadi et al., 2024). The resulting CSD is significantly influenced by the supersaturation (SS) of the mother liquor, a key parameter driving crystal nucleation and growth. Supersaturation is governed by several nonlinear factors, including concentration, temperature, purity, and other quality parameters of the mother liquor, which are often determined through laboratory analysis. Due to the complexity of these dependencies, no direct measurement method or single instrument exists for supersaturation assessment (Morales et al., 2024). This lack of efficient monitoring contributes to the GHG emissions associated with sugar production, estimated at 1.47 kg CO2/kg sugar (Li et al., 2024).

The primary objective of this study is to develop a machine learning (ML)-based model to predict sugar supersaturation using the sugar solubility dataset provided by Van der Sman (2017), aiming to establish correlations between temperature and sucrose solubility. To this end, different ML models were developed, and each model underwent rigorous statistical evaluation to verify its ability to capture solubility trends effectively. The results were compared to the saturation curve predicted by the Flory-Huggins thermodynamic model. The ML model simplifies predictions by accounting for impurities and temperature dependencies and is validated using experimental datasets. The findings indicate that this predictive model allows for more precise dynamic control of the crystallization process. Finally, the effect of the developed model on sustainable sugar production was investigated. It was demonstrated that using this model may reduce the mean batch residence time during the crystallization stage, lowering energy consumption, reducing the CO2 footprint, increasing production capacity, and ultimately contributing to sustainable process development.
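
A minimal sketch of the supersaturation-monitoring idea, assuming an invented placeholder solubility table (standing in for the Van der Sman (2017) data) and a small gradient-boosting regressor from scikit-learn: the fitted solubility correlation converts a measured concentration and temperature into a supersaturation ratio.

```python
# Minimal sketch of an ML solubility correlation used for supersaturation monitoring.
# The solubility table below is a rough placeholder, not the actual dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder solubility data: g sucrose per g water versus temperature (°C)
T_data = np.array([20, 30, 40, 50, 60, 70, 80]).reshape(-1, 1)
solubility = np.array([2.00, 2.19, 2.38, 2.59, 2.89, 3.25, 3.69])

model = GradientBoostingRegressor(n_estimators=200, max_depth=2, learning_rate=0.05)
model.fit(T_data, solubility)

def supersaturation(concentration_g_per_g, temperature_C):
    c_sat = model.predict(np.array([[temperature_C]]))[0]
    return concentration_g_per_g / c_sat

print(f"Predicted saturation at 65 °C: {model.predict(np.array([[65]]))[0]:.2f} g/g")
print(f"Supersaturation of a 3.3 g/g liquor at 65 °C: {supersaturation(3.3, 65):.2f}")
```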

References:

Boskabadi, M. R., Sivaram, A., Sin, G., & Mansouri, S. S. (2024). Machine Learning-Based Soft Sensor for a Sugar Factory’s Batch Crystallizer. In Computer Aided Chemical Engineering (Vol. 53, pp. 1693–1698). Elsevier.

Li, K., Zhao, M., Li, Y., He, Y., Han, X., Ma, X., & Ma, F. (2024). Spatiotemporal Trends of the Carbon Footprint of Sugar Production in China. Sustainable Production and Consumption, 46, 502–511.

Morales, H., di Sciascio, F., Aguirre-Zapata, E., & Amicarelli, A. (2024). Crystallization Process in the Sugar Industry: A Discussion On Fundamentals, Industrial Practices, Modeling, Estimation and Control. Food Engineering Reviews, 1–29.

Van der Sman, R. G. M. (2017). Predicting the solubility of mixtures of sugars and their replacers using the Flory–Huggins theory. Food & Function, 8(1), 360–371.



Role of process integration and renewable energy utilization for the decarbonization of the watchmaking sector.

Pullah Bhatnagar1, Daniel Alexander Florez Orrego1, Vibhu Baibhav1, François Maréchal1, Manuele Margni2

1EPFL, Switzerland; 2HES-SO Valais-Wallis, Switzerland

Switzerland is the largest exporter of watches and clocks worldwide. The Swiss watch industry contributes 4% to Switzerland's GDP, amounting to CHF 25 billion annually. As governments and international organizations accelerate efforts to achieve net-zero emissions, industries are increasingly pressured to adopt more sustainable practices. Decarbonizing the watch industry is therefore essential. One way to improve sustainability is by enhancing energy efficiency, which can significantly reduce the consumption of various energy sources, leading to lower emissions. Additionally, recovering waste heat from different industrial processes can further enhance energy efficiency.

The watch industry operates across five distinct typical days, each characterized by different levels of average power demand, plant activity, and duration. Among these, typical working days experience the highest energy demand, while vacation periods see the lowest. Adjusting the timing of vacation periods—such as shifting the month when the industry closes—can also improve energy efficiency. This becomes particularly relevant with the integration of decarbonization technologies like photovoltaic (PV) and solar thermal (ST) systems, which generate more energy during the summer months.

This work also explores the techno-economic feasibility of incorporating energy storage solutions (both for heat and electricity) and developing a tailored charging and dispatch strategy. The strategy would be designed to account for the variations in energy demand observed across the different characteristic time periods within a month.



An Integrated Machine Learning Framework for Predicting HPNA Formation in Hydrocracking Units Using Forecasted Operational Parameters

Pelin Dologlu1, Berkay Er1, Kemal Burçak Kaplan1, İbrahim Bayar2

1SOCAR Turkey, Digital Transformation Department, Istanbul 34485, Turkey; 2SOCAR STAR Oil Refinery, Process Department, Aliaga, Izmir 35800, Turkey

The accumulation of heavy polynuclear aromatics (HPNAs) in hydrocracking units (HCUs) poses significant challenges to catalyst performance and process efficiency. This study proposes an integrated machine learning framework that combines ridge regression, K-nearest neighbors (KNN), and long short-term memory (LSTM) neural networks to predict HPNA formation, enabling proactive process management. For the training phase, weighted average bed temperature (WABT), catalyst deactivation phase—classified using unsupervised KNN clustering—and hydrocracker feed (HCU feed) parameters obtained from laboratory analyses are utilized to capture the complex nonlinear relationships influencing HPNA formation. In the simulation phase, forecasted WABT values are generated using a ridge regression model, and future HCU feed changes are derived from planned crude oil blend data provided by the planning department. These forecasted WABT values, predicted catalyst deactivation phases, and anticipated HCU feed parameters serve as inputs to the LSTM model for predicting future HPNA levels. This approach allows us to simulate various operational scenarios and assess their impact on HPNA accumulation before they manifest in the actual process. By identifying critical process parameters and their influence on HPNA formation, the model enhances process engineers' understanding of the hydrocracking operation. The ability to predict HPNA levels in advance empowers engineers to implement corrective actions proactively, such as adjusting feed compositions or operating conditions, thereby mitigating HPNA formation and extending catalyst life. The integrated framework demonstrates high predictive accuracy and robustness, underscoring its potential as a valuable tool for optimizing HCU operations through advanced predictive analytics and informed decision-making.
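
A minimal sketch of how the three components could be wired together, on synthetic data and with K-means standing in for the clustering step: ridge regression forecasts WABT from its recent history, the deactivation phase is labelled from WABT level and drift, and an (untrained) LSTM consumes the assembled feature sequence to output an HPNA estimate. Names, data and model sizes are illustrative assumptions rather than the framework's actual configuration.

```python
# Minimal sketch of the wiring: ridge WABT forecast + clustering-based phase labels
# + an LSTM over [WABT, phase, feed property] sequences. All data are synthetic.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
days = 200
wabt = 370 + 0.05 * np.arange(days) + rng.normal(0, 0.3, days)   # slow deactivation drift
feed_density = 0.92 + rng.normal(0, 0.005, days)                 # lab feed property

# 1) Forecast tomorrow's WABT from the last 7 days (ridge regression)
X = np.stack([wabt[i - 7:i] for i in range(7, days)])
y = wabt[7:days]
ridge = Ridge(alpha=1.0).fit(X[:-1], y[:-1])
wabt_forecast = ridge.predict(X[-1:])[0]

# 2) Label the deactivation phase from WABT level and drift
phase_features = np.stack([wabt[7:], np.gradient(wabt)[7:]], axis=1)
phase = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(phase_features)

# 3) LSTM consuming [WABT, phase, feed property] sequences to predict HPNA (wiring only)
class HPNAModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)
    def forward(self, seq):
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])

features = np.stack([wabt[7:], phase.astype(float), feed_density[7:]], axis=1)
seq = torch.tensor(features[-30:], dtype=torch.float32).unsqueeze(0)   # last 30 days
hpna_pred = HPNAModel()(seq)
print(f"Forecasted WABT: {wabt_forecast:.1f}, untrained HPNA output: {hpna_pred.item():.3f}")
```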



Towards the Decarbonization of a Conventional Ammonia Plant by the Gradual Incorporation of Green Hydrogen

João Fortunato, Pedro Castro, Diogo A. C. Narciso, Henrique A. Matos

Centro de Recursos Naturais e Ambiente, Department of Chemical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal

As the second most produced chemical worldwide, ammonia (NH3) production depends heavily on fossil fuel consumption. The ammonia production process is highly energy-intensive, accounting for 1-2% of total carbon dioxide emissions [1] and 2% of the energy consumed worldwide [1]. Ammonia is industrially produced by the Haber-Bosch (HB) process, by reacting hydrogen with nitrogen. Hydrogen can be obtained from a variety of feedstocks, such as coal and naphtha, but is typically obtained from the processing of natural gas via Steam Methane Reforming (SMR) [1]. In the latter case, atmospheric air can be used directly as a nitrogen source without the need for previous separation, since the oxygen is completely consumed by the methane partial oxidation reaction [2].

The ammonia industry is striving for decarbonization, driven by increasing carbon neutrality policies and energy independence targets. In Europe, the Renewable Energy Directive III requires that 42% of the hydrogen used in industrial processes come from renewable sources by 2030 [3], setting a critical shift towards more sustainable ammonia production methods.

The literature includes many studies focusing on the production of low-carbon ammonia entirely from green hydrogen, without considering its production via SMR. However, this approach could threaten the competitiveness of the current industry and forgo the opportunity to continue valorizing previous investments.

This work addresses the challenges involved in the incorporation of green hydrogen into a conventional ammonia production plant (methane-fed HB process). An Aspen Plus V14 model was developed, and two different green hydrogen incorporation strategies were tested: S-I and S-II. These were inspired by existing operating procedures at one real-life plant; therefore, the main focus of the model simulations is to determine the feasible limits of using an existing conventional NH3 plant and to observe the associated main KPIs when green H2 is available to add.

The S-I strategy reduces the production of grey hydrogen by reducing natural gas and process steam in the SMR. The intake of green hydrogen allows hydrogen and ammonia production to remain fixed.

In strategy S-II, grey hydrogen production remains unchanged, resulting in higher total hydrogen production. By taking in larger quantities of process air, higher NH3 production can be achieved.

These strategies introduce changes to the SMR process and NH3 synthesis, which imply modifications to the operating conditions of the plant. These changes lead to a technical limit for the incorporation of green hydrogen into the conventional HB process. Nevertheless, both strategies make it possible to reduce carbon emissions per quantity of NH3 produced and promote the gradual decarbonization of the current ammonia industry.

1. IEA International Energy Agency. Ammonia Technology Roadmap - Towards More Sustainable Nitrogen Fertiliser Production. https://www.iea.org/reports/ammonia-technology-roadmap (2021).

2. Appl, M. Ammonia, 2. Production Processes. in Ullmann’s Encyclopedia of Industrial Chemistry (Wiley, 2011). doi:10.1002/14356007.o02_o11.

3. RED III: Directive (EU) 2023/2413 of 18 October 2023.



Gate-to-Gate Life Cycle Assessment of CO₂ Utilisation in Enhanced Oil Recovery: Sustainability and Environmental Impacts in Dukhan Field, Qatar

Razan Sawaly, Ahmad Abushaikha, Tareq Al-Ansari

Hamad Bin Khalifa University (HBKU), Qatar

This study examines the potential impact of implementing a cap and trade system to reduce CO₂ emissions in Qatar's industrial sector, which is a significant contributor to global emissions. Using data from seven key industries, the research sets emission caps, allocates allowances through a grandfathering method, and allows trading of these allowances to create economic incentives for emission reductions. The study utilizes a model with a carbon price of $12.50 per metric ton of CO₂ and compares baseline emissions with future reduction strategies. Results indicate that while some industrial plants, such as the LNG and methanol plants, achieved substantial emission reductions and financial surpluses through practices like carbon capture and switching to hydrogen, others continued to face deficits. The findings highlight the system's potential to promote sustainable practices, suggesting that tighter caps and auction-based allowance allocations could further enhance the effectiveness of the cap and trade system in Qatar's industrial sector.
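The allowance accounting underlying such a scheme reduces to a simple surplus/deficit calculation per plant at the assumed carbon price; the sketch below uses the study's price of $12.50 per metric ton but entirely hypothetical plant caps and emissions.

```python
# Illustrative allowance accounting at the carbon price used in the study
# ($12.50 per t CO2). Plant names, caps and emissions below are hypothetical.
CARBON_PRICE = 12.50  # USD per metric ton CO2

plants = {
    # name: (allocated cap, actual emissions) in Mt CO2/yr
    "LNG plant":      (10.0, 8.5),
    "Methanol plant": (3.0, 2.6),
    "Cement plant":   (4.0, 4.6),
}

for name, (cap, emissions) in plants.items():
    surplus_t = (cap - emissions) * 1e6      # allowances left over in t (negative = deficit)
    cash_flow = surplus_t * CARBON_PRICE     # value of allowances sold (or bought if negative)
    status = "surplus" if surplus_t >= 0 else "deficit"
    print(f"{name}: {status} of {abs(surplus_t):,.0f} t, cash flow {cash_flow:+,.0f} USD")
```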



Robust Flowsheet Synthesis for Ethyl Acetate, Methanol and Water Separation

Aayush Gupta, Kartavya Maurya, Nitin Kaistha

Indian Institute of Technology Kanpur, India

Ethyl acetate (EtAc) and methanol (MeOH) are commonly used solvents in the pharmaceutical, textile, dye, fine organic, and paint industries [1], [2]. The waste solvent from these industries often contains EtAc and MeOH in water in widely varying proportions. Sustainability concerns, reflected in increasingly stringent waste discharge regulations, now dictate complete recovery, recycle and reuse of the organic species from the waste solvent. For the EtAc-MeOH-water waste solvent, simple distillation cannot be used due to the presence of a homogeneous EtAc-MeOH azeotrope and a heterogeneous EtAc-water azeotrope. Synthesizing a feasible flowsheet structure that separates a given waste solvent mixture into its nearly pure constituents (EtAc, MeOH and water) then becomes challenging. The flowsheet structure, among other things, depends on the waste solvent composition. A flowsheet that is feasible for a dilute waste solvent mixture may become infeasible for a more concentrated waste solvent. Given that the flowsheet structure, once chosen, remains fixed and cannot be changed, and that wide variability in the waste solvent composition is expected, we propose in this work a “robust” flowsheet structure with guaranteed feasibility, regardless of the waste solvent composition. Such a “robust” flowsheet structure has the potential to significantly improve the economic viability of a waste solvent processing plant, as the same equipment can be used to separate the wide range of received waste solvents.

The key to the robust flowsheet design is the use of a liquid-liquid extractor (LLX) with recycled water as the solvent. For a sufficiently high water rate to the LLX, the raffinate composition is close to the EtAc-water edge (nearly MeOH free), on the liquid-liquid envelope and in the EtAc-rich distillation region. The raffinate is distilled to obtain a pure EtAc bottoms product, and the overhead vapour is condensed and decanted, with the organic layer refluxed to the column. The aqueous distillate is mixed with the MeOH-rich extract and stripped to obtain an EtAc-free MeOH-water bottoms. The overhead vapour is condensed and recycled back to the LLX. The MeOH-water bottoms is further distilled to obtain a pure MeOH distillate and pure water bottoms. A fraction of the bottoms is recirculated to the LLX as the solvent feed. Converged designs are obtained for an equimolar waste solvent composition as well as EtAc-rich, MeOH-rich and water-rich compositions to demonstrate the robustness of the flowsheet structure to large changes in the waste solvent composition.

References

[1] C. S. Slater, M. J. Savelski, W. A. Carole, and D. J. Constable, "Solvent use and waste issues," Green Chemistry in the Pharmaceutical Industry, pp. 49-82, 2010.

[2] T. S. a. L. Z. a. C. H. a. Z. H. a. L. W. a. F. Y. a. H. X. a. S. Longyan, "Method for separating and recovering ethyl acetate and methanol". China Patent CN102746147B, May 2014.



Integrating offshore wind energy into the optimal deployment of a hydrogen supply chain: a case study in Occitanie

Melissa Cherrouk1,2, Catherine Azzaro-Pantel1, Marie Robert2, Florian Dupriez Robin2

1France Energies Marines / Laboratoire de Génie Chimique, France; 2France Énergies Marines, Technopôle Brest-Iroise, 525 Avenue Alexis de Rochon, 29280, Plouzané, France

The urgent need to mitigate climate change and reduce dependence on fossil fuels has led to the exploration of alternative energy solutions, with green hydrogen emerging as a key player in the global energy transition. Thus, the aim of this study is to assess the feasibility and competitiveness of producing hydrogen at sea using offshore wind energy, evaluating both economic and environmental perspectives.

Offshore wind energy offers several advantages for hydrogen production. These include access to water for electrolysis, potentially lower export costs for hydrogen compared to electricity, and the ability to smooth the variability of wind energy through hydrogen storage systems. Proper storage plays a crucial role in addressing the intermittency of wind power, making the hydrogen output more stable. This positions storage not only as an advantage but also as a key step for the successful coupling of offshore wind with hydrogen production. However, challenges remain, particularly regarding the capacity and cost of such storage solutions, alongside the high capital expenditures (CAPEX) and operational costs (OPEX) required for offshore systems.

This research explores the potential of offshore wind farms (OWFs) to contribute to hydrogen production by extending a techno-economic model based on Mixed-Integer Linear Programming (MILP). The model optimizes the number and type of production units, storage locations, and distribution methods, employing an optimization approach to determine the best hydrogen flows between regional hubs. The case study focuses on the Occitanie region in southern France, where hydrogen could be produced offshore from a 30 MW floating wind farm with three turbines located 30 km from the coast and transported via pipelines. Other energy sources may complement offshore wind energy to meet hydrogen supply demands. The study evaluates two scenarios: minimizing hydrogen production costs and minimizing greenhouse gas emissions over a 30-year period, divided into six five-year phases.
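A minimal sketch of the hub-level allocation that such a MILP performs is given below, using PuLP with hypothetical hub names, demands, capacities and costs; the real model additionally handles multiple periods, storage and technology selection.

```python
# Toy MILP for hydrogen production and inter-hub flows (PuLP). All data are
# hypothetical; the study's model also covers storage, periods and technology choice.
import pulp

hubs = ["HubA", "HubB", "HubC"]                               # hypothetical regional hubs
demand = {"HubA": 200, "HubB": 150, "HubC": 120}              # t H2 per period
cap = {"HubA": 400, "HubB": 180, "HubC": 150}                 # max production if a unit is built
prod_cost = {"HubA": 4.5, "HubB": 5.0, "HubC": 5.2}           # cost per t produced (scaled)
capex = {"HubA": 900, "HubB": 500, "HubC": 450}               # annualised cost of building a unit
pipe_cost = 0.3                                               # cost per t transferred between hubs

arcs = [(i, j) for i in hubs for j in hubs if i != j]
prob = pulp.LpProblem("H2_supply_chain", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", hubs, cat="Binary")
prod = pulp.LpVariable.dicts("prod", hubs, lowBound=0)
flow = pulp.LpVariable.dicts("flow", arcs, lowBound=0)

# Objective: investment + production + transport cost
prob += (pulp.lpSum(capex[h] * build[h] + prod_cost[h] * prod[h] for h in hubs)
         + pulp.lpSum(pipe_cost * flow[a] for a in arcs))

for h in hubs:
    prob += prod[h] <= cap[h] * build[h]                      # production only if a unit is built
    inflow = pulp.lpSum(flow[(i, h)] for i in hubs if i != h)
    outflow = pulp.lpSum(flow[(h, j)] for j in hubs if j != h)
    prob += prod[h] + inflow - outflow >= demand[h]           # satisfy local demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({h: (int(build[h].value()), prod[h].value()) for h in hubs})
```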

Initial findings show that, from an economic standpoint, the Levelized Cost of Hydrogen (LCOH) from offshore wind remains higher compared to traditional hydrogen production methods. However, the Global Warming Potential (GWP) of hydrogen produced from offshore wind ranks it among the most environmentally friendly options. Despite this, the volume of hydrogen produced in the current configuration does not meet the demand required for significant impact in Occitanie's hydrogen market, which points out the need to test higher power levels for the OWF and potential hybridization with other renewable energy sources.

The results underline the importance of future multi-objective optimization methods to better balance the economic and environmental trade-offs and make offshore wind a more competitive option for hydrogen production.

Reference:
Sofía De-León Almaraz, Catherine Azzaro-Pantel, Ludovic Montastruc, Marianne Boix, Deployment of a hydrogen supply chain by multi-objective/multi-period optimisation at regional and national scales, Chemical Engineering Research and Design, Volume 104, 2015, Pages 11-31, https://doi.org/10.1016/j.cherd.2015.07.005.



Robust Techno-economic Analysis, Life Cycle Assessment, and Quality-by-Design of Three Alternative Continuous Pharmaceutical Tablet Manufacturing Processes

Shang Gao, Brahim Benyahia

Loughborough University, United Kingdom

This study presents a comprehensive comparison of three key downstream tableting manufacturing methods for pharmaceuticals: i) Dry Granulation (DG) through roller compaction, ii) Direct Compaction (DC), and iii) Wet Granulation (WG). First, integrated mathematical models of these downstream (drug product) processes were developed using gPROMS Formulated Products, along with data from the literature and our recent experimental work. The process models were designed and simulated to reliably capture the impact of different design options, process parameters, and material attributes. Uncertainty analysis was conducted using global sensitivity analysis to identify the critical process parameters (CPPs) and critical material attributes (CMAs) that most significantly influence the quality and performance of the final pharmaceutical tablets. These are captured by the critical quality attributes (CQAs), which include tablet hardness, dissolution rate, impurities/residual solvents, and content uniformity—factors crucial for ensuring product safety and efficacy. Based on the identified CPPs and CMAs, combined design spaces that guarantee the attainment of the targeted CQAs were identified and compared. Additionally, techno-economic analyses were conducted alongside life cycle assessments (LCA) based on the process simulation results and inventory data. The LCA provided an in-depth evaluation of the environmental impacts associated with each manufacturing method, considering aspects such as energy consumption, raw material usage, emissions, and waste generation across a cradle-to-gate approach. By integrating CQAs within the LCA framework, this study offers a holistic analysis that captures both the environmental sustainability and product quality implications of the three tableting processes. The findings aim to guide the selection of more sustainable and efficient manufacturing practices in the pharmaceutical industry, balancing trade-offs between environmental impact and product quality.
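The variance-based screening step can be reproduced in outline with SALib; in the sketch below, the surrogate tablet-hardness response and the parameter names and bounds are illustrative placeholders rather than the gPROMS models used in the study.

```python
# Sketch of the global sensitivity step: Sobol indices for a toy tablet-hardness
# surrogate. Parameter names, bounds and the surrogate itself are illustrative.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["compaction_pressure", "binder_fraction", "granule_moisture"],
    "bounds": [[50, 300], [0.01, 0.05], [0.005, 0.03]],
}

def hardness(x):
    p, b, w = x
    return 0.04 * p + 800.0 * b - 300.0 * w + 0.5 * p * b   # toy CQA response

X = saltelli.sample(problem, 1024)                # Saltelli sampling scheme
Y = np.apply_along_axis(hardness, 1, X)
Si = sobol.analyze(problem, Y)                    # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.2f}, ST={st:.2f}")
```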

Keywords: Dry Granulation, Direct Compaction, Wet Granulation, Life Cycle Assessment (LCA), Techno-economic Analysis (TEA), Quality-by-Design (QbD)

Acknowledgements

The authors acknowledge funding from the UK Engineering and Physical Sciences Research Council (EPSRC), for Made Smarter Innovation – Digital Medicines Manufacturing Research Centre (DM2), EP/V062077/1.



Systematic Model Builder, Model-Based Design of Experiments, and Design Space Identification for a Multistep Pharmaceutical Process

Xuming Yuan, Ashish Yewale, Brahim Benyahia

Loughborough University, United Kingdom

Mathematical models of different processing units are usually established and optimized individually, even when these processes are meant to be combined sequentially in the real world, particularly in continuously operating plants. Although this traditional approach may help reduce complexity, it may deliver suboptimal solutions and/or overlook the interactions between the unit operations. Most importantly, it can dramatically increase the development time, waste, and experimental costs inherent to the raw materials, solvents, cleaning, etc. This study aims at developing a systematic approach to establish and optimize integrated mathematical models of interactive multistep processes. The methodology starts by suggesting various model candidates for the different unit operations based on prior knowledge. The model candidates for the different unit operations are then combined, which gives several candidate integrated models for the multistep process. A model discrimination based on structural identifiability analysis and model prediction performance (Yuan and Benyahia, 2024) reveals the best integrated model for the multistep process. Afterwards, the model is refined through estimability analysis (Bouchkira and Benyahia, 2023) and model-based design of experiments (MBDoE), which yields the optimal experimental design that guarantees the most information-rich data. With the acquisition of the new experimental data, the reliability and robustness of the multistep mathematical model are dramatically enhanced. The optimized model is subsequently used to identify the design space of the multistep process, which delivers the optimal operating ranges of the critical process parameters (CPPs) that satisfy the targeted critical quality attributes (CQAs). A blending-tableting process of paracetamol is selected as a case study in this work. The methodology applies prior knowledge from Kushner and Moore (2010), Nassar et al. (2021) and Puckhaber et al. (2022) to establish model candidates for this two-unit-operation process, where the effects of lubrication in the blender as well as the composition and porosity of the tablet on the tablet tensile strength are taken into consideration. Model discrimination and model refinement are then performed to identify and improve the optimal integrated model for this two-step process, and the enhanced model is applied to design space identification under specified CQA targets. The results confirm the effectiveness of the proposed methodology and demonstrate its potential to achieve higher optimality for processes involving multiple unit operations.



The role of long-term storage in municipal solid waste treatment systems: Multi-objective resources integration

Julie Dutoit1,2, Jaroslav Hemrle2, François Maréchal1

1École Polytechnique Fédérale de Lausanne (EPFL), Industrial Process Energy Systems Engineering (IPESE), Sion, 1951, Switzerland; 2Kanadevia Inova AG, Zürich, 8005, Switzerland

Estimates for the 2050 horizon predict a significant increase in municipal solid waste (MSW) generation in every world region, whereas important discrepancies remain between the net-zero decarbonization targets of the Paris Agreement and the environmental performance of current waste treatment technologies. This creates an important area of research and development to improve the solutions, especially with regard to circular economy goals for material recovery and transitioning energy supply systems. As shown for plastic chemical recycling by Martínez-Narro et al., 2024, promising technologies may include energy-intensive steps which need integration with renewable energy to be environmentally viable. With growing intra-daily and seasonal variations of power availability due to the increasing share of renewable production, Demand Side Response (DSR) measures play a crucial role alongside energy storage systems in supporting power grid stability. In current research, the applicability of DSR to industrial process models is under-represented relative to the residential sector, with little attention given to control strategies or input predictions in system analysis (Bosmann and Eser, 2016, Kirchem et al., 2020).

This contribution presents a framework to evaluate the potential of waste treatment systems to shift energy loads for better integration into the energy systems of industrial clusters or residential areas. The waste treatment system scenarios are modeled, simulated and optimized in a hybrid OpenModelica/Python framework, described by Dutoit et al., 2024. In particular, pinch analysis (Linnhoff and Hindmarsh, 1983) is used for the heat integration assessment. The multi-objective approach relies on key performance indicators covering process, economic and environmental impact aspects.
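For readers unfamiliar with the heat-cascade calculation behind pinch analysis, the short problem-table sketch below illustrates it on four made-up streams; the stream data and the minimum approach temperature are not the case-study values.

```python
# Minimal problem-table (heat cascade) sketch of the pinch analysis step.
# Stream data are illustrative, not the case-study values.
DT_MIN = 10.0  # K

# (supply T, target T, CP) in degC and kW/K; hot streams cool, cold streams heat up
streams = [
    {"Ts": 250, "Tt": 40,  "CP": 0.15},   # hot
    {"Ts": 200, "Tt": 80,  "CP": 0.25},   # hot
    {"Ts": 20,  "Tt": 180, "CP": 0.20},   # cold
    {"Ts": 140, "Tt": 230, "CP": 0.30},   # cold
]

def shifted(s):
    hot = s["Ts"] > s["Tt"]
    d = -DT_MIN / 2 if hot else DT_MIN / 2
    return (s["Ts"] + d, s["Tt"] + d, s["CP"], hot)

shifted_streams = [shifted(s) for s in streams]
bounds = sorted({t for s in shifted_streams for t in s[:2]}, reverse=True)

# Interval heat balances: positive value = heat surplus (hot CP exceeds cold CP)
surpluses = []
for hi, lo in zip(bounds[:-1], bounds[1:]):
    cp_net = 0.0
    for Ts, Tt, CP, hot in shifted_streams:
        top, bot = (Ts, Tt) if hot else (Tt, Ts)
        if top >= hi and bot <= lo:          # stream spans the whole interval
            cp_net += CP if hot else -CP
    surpluses.append(cp_net * (hi - lo))

# Cascade: minimum hot utility offsets the most negative cumulative value
cascade, running = [0.0], 0.0
for q in surpluses:
    running += q
    cascade.append(running)
Qh_min = -min(cascade)
Qc_min = cascade[-1] + Qh_min
pinch_T = bounds[cascade.index(min(cascade))]
print(f"Q_hot,min = {Qh_min:.1f} kW, Q_cold,min = {Qc_min:.1f} kW, pinch at {pinch_T:.0f} degC (shifted)")
```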

For the case study application, the core technologies included are waste sorting, waste incineration and post-combustion amine-based carbon capture, which are integrated with heat and power utilities to satisfy varying external demand from the power grid and the district heating network. The heterogeneous modeling of the waste flows allows several design options to be defined for the material recovery facility for waste plastic fraction sorting, and scenarios are simulated to evaluate the system performance under the described control strategies. Results provide insights for optimal system operation and integration from an industrial perspective.

References

Bosmann, T., & Eser, E. J. (2016). Model-based assessment of demand-response measures – A comprehensive literature review. Renewable and Sustainable Energy Reviews, 57, 1637–1656. https://doi.org/10.1016/j.rser.2015.12.031.

Dutoit, J., Hemrle, J., Maréchal, F. (2024). Supporting Life-Cycle Impact Assessment Transparency in Waste Treatment Systems Simulation: A Decision-Support Methodology. In preparation.

Kirchem, D., Lynch, M. A., Bertsch, V., & Casey, E. (2020). Modelling demand response with process models and energy systems models: Potential applications for wastewater treatment within the energy-water nexus. Applied Energy, 260, 114321. https://doi.org/10.1016/j.apenergy.2019.114321

Linnhoff, B., & Hindmarsh, E. (1983). The pinch design method for heat exchanger networks. Chemical Engineering Science, 38(5), 745–763. https://doi.org/10.1016/0009-2509(83)80185-7

Martínez-Narro, G., Hassan, S., & Phan, A. N. (2024). Chemical recycling of plastic waste for sustainable polymer manufacturing – A critical review. Journal of Environmental Chemical Engineering, 12, 112323. https://doi.org/10.1016/j.jece.2024.112323.



A Comparative Study of Feature Importance in Process Data: Neural Networks vs. Human Visual Attention

Rohit Suresh1, Babji Srinivasan1,3, Rajagopalan Srinivasan2,3

1Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics, Risk and Technology Indian Institute of Technology Madras, Chennai 600036, India

Artificial Intelligence (AI) and automation technologies have revolutionized the way many sectors operate. In process industries and power plants specifically, there is considerable scope for enhancing production and efficiency with AI through predictive maintenance, condition monitoring, inspection, and quality control. However, despite these advancements, human operators remain the final decision-makers in such major safety-critical systems. Fostering collaboration between human operators and AI systems is the inevitable next step forward. A primary step towards achieving this goal is to capture the representation of information acquired by both human operators and AI-based systems in a mutually comprehensible way. This would aid in understanding the rationale behind their decisions. AI-based systems with deep networks and complex architectures often achieve the best results. However, they are often disregarded by human operators due to a lack of transparency. While eXplainable AI (XAI) is an active research area that attempts to make deep networks comprehensible, understanding the human rationale behind decision-making is largely overlooked.

Several popular XAI techniques, such as local interpretable model-agnostic explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM), provide explanations via feature attribution. In the context of process monitoring, Bhakte et al. (2022) used a Shapley value framework with integrated gradients to estimate the marginal contribution of process variables in fault classification. One popular way to evaluate the explanations provided by various XAI algorithms is through human eye gaze tracking: participants' visual attention over the stimulus is estimated using eye tracking and compared with the results of the XAI method.
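As a concrete illustration of attribution-based explanation, the sketch below computes integrated gradients for a toy logistic fault classifier; the weights, variable names and fault snapshot are invented for the example and do not correspond to the cited studies.

```python
# Sketch of feature attribution via integrated gradients for a toy logistic
# fault classifier; weights and process-variable names are illustrative.
import numpy as np

w = np.array([1.2, -0.4, 2.0, 0.1])          # trained weights (assumed)
b = -0.5
names = ["reactor_T", "feed_flow", "level", "coolant_T"]

def f(x):                                     # P(fault | x)
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad_f(x):                                # analytical gradient of the sigmoid model
    p = f(x)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline, steps=50):
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad          # one attribution per process variable

x_fault = np.array([2.5, 0.3, 1.8, 0.2])      # snapshot of (scaled) process variables
attrib = integrated_gradients(x_fault, baseline=np.zeros_like(x_fault))
for n, a in sorted(zip(names, attrib), key=lambda t: -abs(t[1])):
    print(f"{n}: {a:+.3f}")
```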

Eye tracking also has the potential to characterise the mental models of control room operators during different experimental scenarios (Shahab et al., 2022). In that work, participants acting as control room operators were given disturbance-rejection tasks based on alarm signals and process variable trends in the HMI. Extending that work, we attempt here to explain the human operator's rationale behind decision making through eye tracking. Participants' dynamic attention allocation over the stimulus is objectively captured using various eye gaze metrics, which are further used to extract the major causal factors that influenced the participants' decisions. The effectiveness of the method is demonstrated with a case study. We conduct eye tracking experiments in which participants are required to identify the fault in the process. During the experiment, images of the trend panel with trajectories of all major process variables, captured at a specific instant, are shown to the participants. The process variable responsible for the fault is objectively identified using operator knowledge. Our future work will focus on integrating this human rationale with XAI, which will pave the way for human-machine teaming.

References:
Bhakte, A., Pakkiriswamy, V. and Srinivasan, R., 2022. An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks. Chemical Engineering Science, 250, p.117373.
Shahab, M.A., Iqbal, M.U., Srinivasan, B. and Srinivasan, R., 2022. HMM-based models of control room operator's cognition during process abnormalities. 1. Formalism and model identification. Journal of Loss Prevention in the Process Industries, 76, p.104748.



Parameter Estimation and Model Comparison for Mixed Substrate Biomass Fermentation

Tom Vinestock, Miao Guo

King's College London, United Kingdom

Single cell protein (SCP) fermentation is an effective way of transforming carbohydrate-rich substrates into high-protein foodstuffs and is more sustainable than conventional animal-based protein production [1]. However, whereas cows and other ruminants can be fed agricultural residues, such as rice straw, SCP fermentations generally depend on high-purity single-substrate feedstocks as a carbon source, such as starch-derived glucose, which are expensive and compete directly with food crops.

Consequently, there is interest in switching to feedstocks derived from cheap agricultural lignocellulosic residues. However, treatment of such lignocellulosic residues produces a mixed feedstock, typically containing both glucose and xylose [2]. Accurate models of mixed-substrate growth are needed for fermentation decision-makers to understand the trade-offs associated with transitioning to the more sustainable lignocellulosic feedstocks. Such models are also a prerequisite for optimizing the operation and control of mixed-substrate fermentations.

In this work, recently published biomass and substrate concentration time-series data for glucose-xylose batch fermentation of F. venenatum [3] are used to estimate parameters for different unstructured models of diauxic growth. A Bayesian optimisation methodology is employed to identify the best parameters in each case. A novel model for diauxic growth with substrate cross-inhibition, mediated by variable enzyme production, is proposed, based on Nakamura et al. [4] but simplified to reduce the number of states and parameters, and hence improve identifiability and reduce overfitting. This model is shown to have a lower error on both the calibration and validation datasets than the model in Banks et al. [3], itself based on work by Vega-Ramon et al. [5], which models substrate cross-inhibition effects directly. The performance of the model proposed by Kompala and Ramkrishna [6], based on growth-optimised cellular resource allocation, is also evaluated.
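A minimal version of such an unstructured diauxic model, and of its calibration, is sketched below; it uses ordinary least squares on synthetic data rather than the Bayesian optimisation and experimental dataset of the study, and all parameter values are illustrative.

```python
# Minimal sketch of an unstructured diauxic growth model (glucose represses
# xylose uptake) and its calibration by least squares. Parameter values and the
# synthetic "data" are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, mu1, mu2, K1, K2, Ki, Y1, Y2):
    X, S1, S2 = y                                   # biomass, glucose, xylose (g/L)
    r1 = mu1 * S1 / (K1 + S1)                       # growth on glucose
    r2 = mu2 * S2 / ((K2 + S2) * (1 + S1 / Ki))     # xylose uptake repressed by glucose
    return [(r1 + r2) * X, -r1 * X / Y1, -r2 * X / Y2]

def simulate(theta, t_eval, y0):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval, args=tuple(theta))
    return sol.y.T

t_data = np.linspace(0, 48, 13)
y0 = [0.1, 10.0, 5.0]
theta_true = [0.30, 0.15, 0.5, 1.0, 0.2, 0.45, 0.40]
data = simulate(theta_true, t_data, y0) + np.random.default_rng(1).normal(0, 0.05, (13, 3))

res = least_squares(lambda th: (simulate(th, t_data, y0) - data).ravel(),
                    x0=[0.2, 0.1, 1.0, 1.0, 0.5, 0.5, 0.5], bounds=(1e-3, 5.0))
print(dict(zip(["mu1", "mu2", "K1", "K2", "Ki", "Y1", "Y2"], res.x.round(3))))
```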

This work could lead to improved modelling of mixed substrate fermentation, and therefore help address the technical barriers to wider-scale use of lignocellulose-derived feedstocks in fermentation. Future research could test the generalisability of the diauxic growth models considered using data from a continuous or fed-batch mixed substrate fermentation.

References

[1] Good Food Institute, “Fermentation: State of the industry report,” 2021.

[2] L. Qin, L. Liu, A.P. Zeng, and D. Wei, “From low-cost substrates to single cell oils synthesized by oleaginous yeasts,” Bioresource Technology, Dec. 2017.

[3] M. Banks, M. Taylor, and M. Guo, “High throughput parameter estimation and uncertainty analysis applied to the production of mycoprotein from synthetic lignocellulosic hydrolysates,” 2024.

[4] Y. Nakamura, T. Sawada, F. Kobayashi, M. Ohnaga, and M. Suzuki, “Stability analysis of continuous culture in diauxic growth,” Journal of Fermentation and Bioengineering, 1996.

[5] F. Vega-Ramon, X. Zhu, T. R. Savage, P. Petsagkourakis, K. Jing, and D. Zhang, “Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty,” Biotechnology and Bioengineering, Dec. 2021.

[6] D. S. Kompala, D. Ramkrishna, N. B. Jansen, and G. T. Tsao, “Investigation of bacterial growth on mixed substrates: Experimental evaluation of cybernetic models,” Biotechnology and Bioengineering, July 1986.



Identification of Suitable Operational Conditions and Dimensions for Supersonic Water Separation in Exhaust Gases from Offshore Turbines: A Case Study

Jonatas de Oliveira Souza Cavalcante1, Marcelo da Costa Amaral2, Fernando Luiz Pellegrini Pessoa1,3

1SENAI CIMATEC University Center, Brazil; 2Leopoldo Américo Miguez de Mello Research Center (CENPES); 3Federal University of Rio de Janeiro (UFRJ)

Due to space, weight, and energy efficiency constraints in offshore environments, the efficient removal of water from turbine exhaust gases is a crucial step to optimize operational performance in gas treatment processes. In this context, replacing conventional methods, such as molecular sieves, with supersonic separators emerges as a promising alternative. This work aims to determine the optimal operational conditions and dimensions for supersonic water separation from turbine exhaust gases on offshore platforms. The simulation was conducted using a unit operation extension in Aspen HYSYS, based on the compositions of exhaust gases from Brazilian pre-salt wells. Parameters such as operational conditions, separator dimensions, and shock Mach number were optimized to maximize process efficiency and minimize separator size. The results indicated near-complete removal of water, demonstrating that supersonic separation technology, in addition to being compact, offers a viable and efficient alternative for water removal from exhaust gases, particularly in space-constrained environments.
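The driving principle can be illustrated with the standard isentropic nozzle relations, which show how strongly the static temperature and pressure fall as the Mach number increases; the inlet conditions and heat-capacity ratio in the sketch below are assumed values, not the case-study data.

```python
# Back-of-the-envelope isentropic relations behind supersonic dewatering:
# static temperature, pressure and area ratio versus Mach number in a Laval
# nozzle. Inlet conditions and gamma are illustrative, not the pre-salt data.
import numpy as np

gamma = 1.33            # exhaust-gas heat-capacity ratio (assumed)
T0, P0 = 320.0, 5.0     # stagnation temperature [K] and pressure [bar] (assumed)

for M in [1.0, 1.5, 2.0, 2.5]:
    f = 1.0 + 0.5 * (gamma - 1.0) * M**2
    T = T0 / f                                   # static temperature
    P = P0 / f**(gamma / (gamma - 1.0))          # static pressure
    A_ratio = (1.0 / M) * ((2.0 / (gamma + 1.0)) * f) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    print(f"M={M:.1f}: T={T:5.0f} K, P={P:4.2f} bar, A/A*={A_ratio:4.2f}")
```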



On optimal hydrogen production pathway selection using the SECA multi-criteria decision-making method

Caroline Kaitano, Thokozani Majozi

University of the Witwatersrand, South Africa

The increasing global population has resulted in a scramble for more energy. Hydrogen offers a potential revolution for energy systems worldwide. Considering its numerous uses, research interest has grown in seeking sustainable production methods. However, hydrogen production must satisfy three factors, i.e. energy security, energy equity, and environmental sustainability, referred to as the energy trilemma. Therefore, this study investigates the sustainability of hydrogen production pathways through the use of a Multi-Criteria Decision-Making model. In particular, a modified Simultaneous Evaluation of Criteria and Alternatives (SECA) model was employed for the prioritization of 19 options for hydrogen production. This model simultaneously determines the overall performance scores of the 19 options and the objective weights for the energy trilemma in a South African context. The results obtained from this study showed that environmental sustainability has the highest objective weight of 0.37, followed by energy security with a value of 0.32 and energy equity with the lowest at 0.31. Of the 19 options selected, steam reforming of methane with carbon capture and storage was found to have the highest overall performance score, considering the trade-offs in the energy trilemma. This was followed by steam reforming of methane without carbon capture and storage and the autothermal reforming of methane with carbon capture and storage. The results obtained in this study will potentially pave the way for optimally producing hydrogen from different feedstocks while considering the energy trilemma and serve as a basis for further research in sustainable process engineering.
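The final scoring step can be illustrated as follows; the sketch uses the objective weights reported above with a hypothetical normalised decision matrix, whereas the actual SECA model determines weights and scores simultaneously by solving a nonlinear programme.

```python
# Sketch of the scoring step only: overall performance scores of hydrogen
# pathways from a normalised decision matrix and the reported objective weights
# (0.32 security, 0.31 equity, 0.37 environment). The pathway scores in the
# matrix are hypothetical; the full SECA nonlinear programme is omitted here.
import numpy as np

weights = np.array([0.32, 0.31, 0.37])                 # energy security, equity, environment
pathways = ["SMR + CCS", "SMR", "ATR + CCS", "PEM electrolysis (wind)"]
# rows: pathways, columns: criteria, all benefit-type and scaled to [0, 1] (assumed)
X = np.array([
    [0.90, 0.85, 0.70],
    [0.92, 0.90, 0.40],
    [0.85, 0.80, 0.68],
    [0.55, 0.45, 0.95],
])

scores = X @ weights
for p, s in sorted(zip(pathways, scores), key=lambda t: -t[1]):
    print(f"{p}: {s:.3f}")
```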



On the role of Artificial Intelligence in Feature oriented Multi-Criteria Decision Analysis

Heyuan Liu1,2, Yi Zhao1, Francois Marechal1

1Industrial Process and Energy Systems Engineering (IPESE), Ecole Polytechnique Fédérale de Lausanne, Sion, Switzerland; 2École Polytechnique, France

In industrial applications, balancing economic and environmental goals is crucial amidst challenges like climate change. To address conflicting objectives, tools like multi-objective optimization (MOO) and multi-criteria decision analysis (MCDA) are utilized. MOO generates a range of viable solutions, while MCDA helps select the most suitable option considering factors like profitability, environmental impact, safety, and efficiency. These tools aid in making informed decisions amidst complex trade-offs and uncertainties.

In this study, we propose a novel approach for MCDA using advanced machine learning techniques and apply it to analyze decarbonization solutions for a typical European refinery. First, a hybrid dimensionality reduction method combining AutoEncoders and Principal Component Analysis (PCA) is developed to reduce high-dimensional data while retaining key features. The effectiveness of the dimensionality reduction is demonstrated by clustering the reduced data and mapping the clusters back to the original high-dimensional space. The high clustering quality scores indicate that the spatial distribution characteristics are well preserved. Furthermore, geometric analysis techniques, such as Intrinsic Shape Signatures (ISS), Harris Corner Detection, and Mesh Saliency, further refine the identification of typical configurations. Specifically, 15 typical solutions identified by the ISS method are used as baselines to represent distinct regions in the solution space. These solutions serve as a reference set for further comparison.
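The reduction-and-clustering idea can be sketched as below on synthetic data; the autoencoder architecture, the number of components and clusters, and the data themselves are illustrative and not the refinery solution set used in the study.

```python
# Sketch of the hybrid reduction/clustering idea on synthetic data: a small
# autoencoder compresses the solutions, PCA reduces the latent codes further,
# and a silhouette score checks that cluster structure survives.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 40)).astype("float32")        # 500 MOO solutions, 40 indicators

inp = tf.keras.Input(shape=(40,))
code = tf.keras.layers.Dense(8, activation="relu")(inp)   # latent bottleneck
out = tf.keras.layers.Dense(40)(code)
autoencoder = tf.keras.Model(inp, out)
encoder = tf.keras.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

Z = PCA(n_components=3).fit_transform(encoder.predict(X, verbose=0))   # 3-D embedding
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)
print("silhouette in reduced space:", round(silhouette_score(Z, labels), 3))
```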

Building upon this reference set, we utilize Large Language Models (LLMs) to further enhance the decision-making process. First, LLMs are employed to generate and refine ranking criteria for evaluating the identified solutions. We employ an LLM with an iterative self-update mechanism to dynamically adjust weighting strategies, enhancing decision-making capabilities in complex environments. To address the input size limitations encountered in the problem, we apply heuristic design approaches that effectively manage and optimize the information. Additionally, effective prompt engineering techniques are integrated to improve the model's reasoning and adaptability.

In addition to ranking, LLM technology provides comprehensive and interpretable explanations for the selected solutions. This includes breaking down the criteria used for each decision, clarifying trade-offs between competing objectives, and offering insights into how different configurations perform across various key performance indicators. These explanations help stakeholders better understand the rationale behind the chosen solutions, enabling more informed decision-making in practical applications.



Optimizing CO2 Utilization in Reverse Water-Gas Shift Membrane Reactors with Parametric PINNs

Zahir Aghayev1,2, Zhaofeng Li3, Michael Patrascu3, Burcu Beykal1,2

1Department of Chemical & Biomolecular Engineering, University of Connecticut, Storrs, CT 06269, USA; 2Center for Clean Energy Engineering, University of Connecticut, Storrs, CT 06269, USA; 3The Wolfson Department of Chemical Engineering, Technion – Israel Institute of Technology, Haifa 3200003, Israel

With atmospheric CO₂ levels reaching a record 426.91 ppm in June 2024, the urgency for innovative carbon capture and utilization (CCU) strategies to reduce emissions and repurpose CO₂ into valuable products has become even more critical [1]. One promising solution is the reverse water-gas shift (RWGS) reaction, which transforms CO₂ and hydrogen—produced through renewable energy-powered electrolysis—into carbon monoxide, a key precursor for synthesizing fuels and chemicals. By integrating membrane reactors that selectively remove water vapor, the process shifts the equilibrium forward, resulting in higher CO₂ conversion and CO yield at lower temperatures, in accordance with Le Chatelier's principle. However, modeling this intensified system remains challenging due to the complex, nonlinear interaction between reaction kinetics and membrane transport.

In this study, we developed a physics-informed neural network (PINN) model that integrates first-principles physics with machine learning to model the RWGS process within a membrane reactor. This approach embeds governing physical laws into the network's architecture, reducing the computational burden typically associated with solving highly nonlinear ordinary differential equations (ODEs), while maintaining both accuracy and interpretability [2]. Our model demonstrated robust predictive performance, achieving an R² value exceeding 0.95, successfully capturing flow rate changes and reaction dynamics along the reactor length. Using this validated PINN model, we performed data-driven optimization, identifying operational conditions that maximized CO₂ conversion efficiency and reaction yield [3-6]. This hybrid modeling approach enhances prediction accuracy and optimizes the reactor conditions, offering a scalable solution for industries integrating renewable energy into chemical production and reducing carbon emissions. Our findings demonstrate the potential of advanced modeling to intensify CO₂ utilization processes, with significant implications for sustainable chemical production and energy systems.
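The mechanics of embedding the governing equations in the loss can be shown on a single, much simpler balance; the sketch below trains a PINN for dX/dz = Da·(1 − X) with X(0) = 0, which stands in for the coupled reaction/permeation balances of the membrane reactor, and the network size, Da value and training settings are arbitrary.

```python
# Minimal PINN sketch for a single reactor ODE, dX/dz = Da*(1 - X), X(0) = 0.
# The analytic solution is X(z) = 1 - exp(-Da*z); network size and settings are illustrative.
import numpy as np
import tensorflow as tf

Da = 3.0
net = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-3)
z_col = tf.constant(np.linspace(0, 1, 101).reshape(-1, 1), dtype=tf.float32)  # collocation points

@tf.function
def train_step():
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(z_col)
            X = net(z_col)
        dXdz = inner.gradient(X, z_col)
        residual = dXdz - Da * (1.0 - X)              # physics residual
        bc = net(tf.zeros((1, 1)))                    # boundary condition X(0) = 0
        loss = tf.reduce_mean(residual**2) + tf.reduce_mean(bc**2)
    grads = outer.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
    return loss

for _ in range(3000):
    loss = train_step()
print("final loss:", float(loss))
print("X(1) predicted:", float(net(tf.constant([[1.0]]))), "analytic:", 1 - np.exp(-Da))
```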

References

  1. NOAA Global Monitoring Laboratory. (2024). Trends in atmospheric carbon dioxide [online]. Available at: https://gml.noaa.gov/ccgg/trends/ [Accessed 10/13/2024].
  2. Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378, pp.686-707.
  3. Beykal, B. and Pistikopoulos, E.N., 2024. Data-driven optimization algorithms. In Artificial Intelligence in Manufacturing (pp. 135-180). Academic Press.
  4. Boukouvala, F. and Floudas, C.A., 2017. ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems. Optimization Letters, 11, pp.895-913.
  5. Beykal, B., Aghayev, Z., Onel, O., Onel, M. and Pistikopoulos, E.N., 2022. Data-driven Stochastic Optimization of Numerically Infeasible Differential Algebraic Equations: An Application to the Steam Cracking Process. In Computer Aided Chemical Engineering (Vol. 49, pp. 1579-1584). Elsevier.
  6. Aghayev, Z., Voulanas, D., Gildin, E., Beykal, B., 2024. Surrogate-Assisted Optimization of Highly Constrained Oil Recovery Processes Using Classification-Based Constraint Modeling. Industrial & Engineering Chemistry Research (submitted).


Modeling, simulation and optimization of a carbon capture process through a fluidized TSA column

Eduardo dos Santos Funcia1, Yuri Souza Beleli1, Enrique Vilarrasa Garcia2, Marcelo Martins Seckler1, José Luís de Paiva1, Galo AC Le Roux1

1Polytechnic School of the University of Sao Paulo, Brazil; 2Federal University of Ceará, Brazil

Carbon capture technologies have recently emerged as a way to mitigate climate change and global warming by removing carbon dioxide from the atmosphere. Furthermore, by removing carbon dioxide from biomass-originated flue gases, an energy process with a negative carbon footprint can be achieved. Among carbon capture processes, fluidized temperature swing adsorption (TSA) columns are a promising low-pressure alternative, in which carbon dioxide flowing upwards is exothermically adsorbed onto a fluidized solid sorbent flowing downwards and later endothermically desorbed at higher temperatures, regenerating the sorbent for recirculation. Although an interesting venture, the TSA process has so far been developed only at small scale and remains to be scaled up to become an industrial reality. This work aims to model, simulate and optimize a TSA multi-stage equilibrium system in order to obtain a conceptual design for future process scale-up. A mathematical model was developed for adsorption using an approach that makes it easy to extend the model to various configurations. The model was extended to include multiple stages, each with a heat exchanger, and was also coupled to the desorption operation. Each column, adsorption and desorption, includes one external heat exchanger at the bottom for a preliminary heat load of the inward gas flow. The system also includes a heat exchanger in the recirculating solid sorbent flow, before the regenerated solid enters the top of the adsorption column. The model is based on molar and energy balances, coupled to pressure drops in a fluidized bed designed to operate close to the minimum fluidization velocity (calculated through semi-empirical correlations) and to the thermodynamics of adsorption equilibrium of a carbon dioxide-nitrogen mixture on solid sorbents. The Toth equilibrium isotherm was used with parameters obtained experimentally in a previous work (which suggested that the heterogeneity parameter for nitrogen should be fixed at unity). The complete fluidized TSA adsorption/desorption process was optimized to minimize energy, adsorbent and operating costs, as well as equipment investment and installation, considering equilibrium in each fluidized bed stage. The optimal configuration of the heat exchangers is determined and a unit cost for carbon dioxide capture was estimated. It was found that 2 stages are sufficient for effective removal of carbon dioxide in the adsorption column, while at least 5 stages are necessary to meet the captured carbon specification of 95% molar purity. It was also possible to conclude that not all stages in the columns needed heat exchangers, with some heat loads being set to 0 during the optimization. The pressure drop in each stage was estimated as smaller than 0.072 bar for a bed 1 m high, and the air velocity was 40-45 cm/s (the minimum fluidization velocity was 10-11 cm/s), with low particle Reynolds numbers of about 17, indicating that the system readily fluidizes. These findings show that the methodology developed here is useful for guiding the conceptual design of fluidized TSA processes for carbon capture.
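Two of the building blocks mentioned above, the semi-empirical minimum fluidization velocity and the Toth isotherm, are easy to sketch; the property and isotherm parameters below are assumed values, not the experimentally fitted ones of the study.

```python
# Order-of-magnitude checks behind the fluidized TSA design: minimum fluidization
# velocity from the Wen & Yu correlation and CO2 loading from a Toth isotherm.
# All physical-property and isotherm parameters are assumed, not the study's values.
import numpy as np

g = 9.81
d_p, rho_p = 300e-6, 1100.0          # sorbent particle diameter [m], density [kg/m3]
rho_g, mu = 1.10, 1.9e-5             # flue-gas density [kg/m3], viscosity [Pa s]

Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu**2          # Archimedes number
Re_mf = np.sqrt(33.7**2 + 0.0408 * Ar) - 33.7              # Wen & Yu (1966) correlation
u_mf = Re_mf * mu / (rho_g * d_p)
print(f"u_mf = {u_mf * 100:.1f} cm/s (Re_mf = {Re_mf:.2f})")

def toth_loading(P, qm=3.0, b=1.5, t=0.5):
    """CO2 loading [mol/kg] at partial pressure P [bar]; qm, b, t are assumed."""
    return qm * b * P / (1.0 + (b * P)**t)**(1.0 / t)

print(f"q(0.15 bar) = {toth_loading(0.15):.2f} mol/kg")
```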



Unlocking Process Dynamics: Interpretable PDE Solutions via Symbolic Regression

Benjamin G. Cohen, Burcu Beykal, George M. Bollas

University of Connecticut, USA

Physics-informed symbolic regression (PISR) offers an innovative approach to automatically learn explicit, analytical approximate solutions to partial differential equations (PDEs). Chemical processes often involve dynamics that PDEs can effectively capture, providing valuable insights for engineers and scientists to improve process design and control. Traditionally, solving PDEs requires expertise in analytical methods or costly numerical schemes. However, with the advent of AI/ML, tools like physics-informed neural networks (PINNs) have emerged, learning solutions to PDEs by constraining neural networks to satisfy differential equations and boundary information. Applying PINNs in safety-critical systems is challenging due to their large number of parameters and black-box nature.

To address these challenges, we explore the effect of replacing the neural network in PINNs with a symbolic regressor to create PISR. Guided by a carefully selected information-theoretic loss function that balances model agreement with differential equations and boundary information against identifiability, PISR can learn approximate solutions to PDEs that are symbolic rather than neural network approximations. This approach yields concise, clear analytical approximate solutions that balance model complexity and fit quality. Using an open-source symbolic regression package in Julia, we demonstrate PISR’s efficacy by learning approximate solutions to several PDEs common in process engineering and compare the learned representations to those obtained using PINNs. The PISR models, when compared to the PINN models, are straightforward, easy to understand, and contain very few parameters, making them ideal for sensitivity analysis and ensuring robust process design and control.
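The flavour of the loss used to guide such a search can be conveyed with a small example that scores hand-written candidate solutions of the 1-D heat equation by PDE residual plus a complexity term; the candidates, the weighting, and the use of Python/SymPy (instead of the Julia package used in the work) are choices made only for this illustration, and no search over expressions is performed.

```python
# Scoring of candidate symbolic solutions for u_t = alpha * u_xx with
# u(x, 0) = sin(pi*x): mean-squared PDE residual plus a complexity penalty.
# Candidates and the complexity weight are illustrative; no expression search here.
import numpy as np
import sympy as sp

x, t = sp.symbols("x t")
alpha = 0.1

def pisr_loss(candidate, w_complexity=1e-3):
    residual = sp.diff(candidate, t) - alpha * sp.diff(candidate, x, 2)
    ic_error = candidate.subs(t, 0) - sp.sin(sp.pi * x)
    r = sp.lambdify((x, t), residual, "numpy")
    e0 = sp.lambdify(x, ic_error, "numpy")
    xs, ts = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
    fit = np.mean(np.atleast_1d(r(xs, ts)) ** 2) + np.mean(np.atleast_1d(e0(xs[0])) ** 2)
    return float(fit) + w_complexity * int(sp.count_ops(candidate))

exact = sp.exp(-alpha * sp.pi**2 * t) * sp.sin(sp.pi * x)   # analytical solution
crude = (1 - t) * sp.sin(sp.pi * x)                          # simpler, less accurate candidate
print("exact-form candidate:", pisr_loss(exact))
print("crude candidate:     ", pisr_loss(crude))
```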



Eco-Designing Pharmaceutical Supply Chains: A Process Engineering Approach to Life Cycle Inventory Generation

Indra CASTRO VIVAR1, Catherine AZZARO-PANTEL1, Alberto A. AGUILAR LASSERRE2, Fernando MORALES-MENDOZA3

1Laboratoire de Génie Chimique, Université de Toulouse, CNRS, INPT, UPS, Toulouse, France; 2Tecnologico Nacional de México, Instituto Tecnológico de Orizaba, México; 3Universidad Autónoma de Yucatán, Facultad de Ingeniería Química, Mérida, Yucatán, México

The environmental impact of Active Pharmaceutical Ingredients (APIs) is an increasingly significant research focus, as global pharmaceutical manufacturing practices face heightened scrutiny regarding sustainability. Paracetamol (acetaminophen), one of the most extensively used APIs, requires closer examination due to its current production practices. Most paracetamol is manufactured in large-scale facilities in India and China, with production capacities ranging from 2,000 to 40,000 tons annually.

Offshoring pharmaceutical manufacturing, traditionally a cost-saving strategy, has increased supply chain complexity and dependency on foreign API sources. This reliance has made Europe’s pharmaceutical production vulnerable, especially during global crises or geopolitical tensions, such as the disruptions seen during the COVID-19 pandemic. Consequently, there is growing interest in reshoring pharmaceutical production to Europe. The European Pharmaceutical Strategy (2020)[1] advocates decentralizing production to create shorter, more sustainable value chains. This move seeks to enhance access to high-quality medicines while minimizing the environmental impacts of long-distance transportation.

In France, the government has introduced measures to relocate the production of 50 essential drugs as part of a re-industrialization plan to address medication shortages. Paracetamol sales were restricted in 2022 and early 2023 due to supply chain issues, leading to the relocation of several manufacturing plants.

Yet, pharmaceuticals present unique challenges when assessed using Life Cycle Assessment (LCA), mainly due to a lack of comprehensive life cycle inventory (LCI) data. This scarcity is particularly evident for API synthesis (upstream) and downstream phases such as usage and end-of-life management.

This study aims to apply LCA methodology to evaluate various paracetamol API supply chain scenarios, focusing on the potential benefits of reshoring production to France. A major contribution of this work is the generation of LCI data for paracetamol production through process engineering and chemical process modeling. Aspen Plus software was used to model the paracetamol API manufacturing process, including mass and energy balances. This approach ensures that the datasets generated are robust and validated against available reference data. SimaPro software was used to conduct the LCA using the EcoInvent database and the Environmental Footprint (EF) impact assessment method.

One key finding is the reduction of greenhouse gas emissions for the selected functional unit (FU) of 1 kg of API. Significant differences in electricity use and steam heat generation were observed. According to the EF database, electricity in India results in emissions of 83 g CO₂ eq, while steam heat generation emits 1.38 kg CO₂ eq per FU. In contrast, French emissions are significantly lower, with electricity contributing 5 g CO₂ eq and steam heat generating 1.18 kg CO₂ eq per FU. These results highlight the environmental advantages of relocating production to regions with decarbonized power grids.

This study underscores the value of process modeling in generating LCI data for pharmaceuticals and enhances the understanding of the environmental benefits of reshoring paracetamol manufacturing. The developed methodology can be applied to other chemicals with limited LCI data, supporting more sustainable decision-making in the pharmaceutical sector's eco-design, particularly during re-industrialization efforts.

[1] European Commission Communication from the Commission: A New Industrial Strategy for Europe, vol.102,COM(2020), pp.1-17



Safe Reinforcement Learning with Lyapunov-Based Constraints for Control of an Unstable Reactor

José Rodrigues Torraca Neto1, Argimiro Resende Secchi1,2, Bruno Didier Olivier Capron1, Antonio del-Rio Chanona3

1Chemical and Biochemical Process Engineering Program/School of Chemistry, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 2Chemical Engineering Program/COPPE, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 3Sargent Centre for Process Systems Engineering, Imperial College London

Safe reinforcement learning (RL) is essential for real-world applications with uncertainty and safety constraints, such as autonomous robotics and chemical reactors. Recent advances (Brunke et al., 2022) focus on integrating control theory with RL to ensure safety during learning and deployment. These approaches include robust RL frameworks, constrained Markov decision processes (CMDPs), and safe exploration strategies. We propose a novel approach where RL algorithms—PPO (Schulman et al., 2017), SAC (Haarnoja et al., 2018), DDPG (Lillicrap et al., 2016), and TD3 (Fujimoto et al., 2018)—are trained with Lyapunov-based constraints to ensure stability. As our reward function, −(x − xSP)², inherently generates negative rewards, we applied penalties to positive critic values and to decreases in critic estimates over time.

For off-policy algorithms (SAC, DDPG, TD3), penalties were applied directly to Q-values, discouraging non-negative values and preventing unexpected decreases. On-policy algorithms (PPO) applied these penalties directly to the value function. DDPG used Ornstein-Uhlenbeck noise for exploration, while TD3 used Gaussian noise, with optimized parameters. Hyperparameters, including safe RL constraints, were tuned using Optuna (Akiba et al., 2019), optimizing learning rates, network architectures, and penalty weights.
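One possible reading of these penalty terms, expressed as an addition to a standard critic loss, is sketched below; the penalty weights and the toy batch are illustrative, and the actual implementation details may differ.

```python
# Sketch of adding the two safety penalties to an off-policy critic loss: one term
# discourages non-negative Q-values (rewards are always negative) and one discourages
# decreases of the critic estimate along a transition. Weights and data are illustrative.
import torch
import torch.nn.functional as F

def safe_critic_loss(q_pred, q_target, q_next, w_pos=1.0, w_dec=1.0):
    """q_pred: Q(s,a); q_target: TD target; q_next: Q(s',a') for the same batch."""
    td_loss = F.mse_loss(q_pred, q_target)
    positive_penalty = torch.relu(q_pred).mean()             # Q should stay <= 0
    decrease_penalty = torch.relu(q_pred - q_next).mean()    # critic should not decrease in time
    return td_loss + w_pos * positive_penalty + w_dec * decrease_penalty

# toy batch
q_pred = torch.tensor([-1.2, -0.3, 0.1])
q_target = torch.tensor([-1.0, -0.4, -0.1])
q_next = torch.tensor([-1.0, -0.2, -0.05])
print(float(safe_critic_loss(q_pred, q_target, q_next)))
```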

Our method was tested on an unstable Continuous Stirred Tank Reactor (CSTR) under random disturbances. Despite challenges posed by disturbances, the Safe RL approach was evaluated for resilience under dynamic conditions. A cosine annealing schedule dynamically adjusted learning rates, ensuring stable training. Base RL algorithms (without safety constraints) were trained on ten parallel environments with disturbances and compared to a Nonlinear Model Predictive Control (NMPC) benchmark. SAC performed best, achieving an optimality gap of 7.73×10⁻⁴ on the training pool and 3.65×10⁻⁴ on new disturbances. DDPG and TD3 exhibited instability due to temperature spikes without safety constraints.

Safe RL significantly improved SAC’s performance, reducing the optimality gap to 2.88×10⁻⁴ on the training pool and 2.36×10⁻⁴ on new disturbances, nearing NMPC performance. Safe RL also reduced instability in DDPG and TD3, preventing temperature spikes and reducing policy noise, though it increased offset from the setpoint, resulting in larger optimality gaps. Despite this tradeoff, Safe RL made these algorithms more reliable, considering unseen disturbances. Overall, Safe RL brought SAC close to optimality across disturbance conditions while it mitigated instability in DDPG and TD3 at the cost of higher setpoint offsets.

References:
L. Brunke et al., 2022, "Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning," Annual Review of Control, Robotics, and Autonomous Systems, Vol. 5, pp. 411–444.
J. Schulman et al., 2017, "Proximal Policy Optimization Algorithms," arXiv:1707.06347.
T. Haarnoja et al., 2018, "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor," Proceedings of the 35th ICML, Vol. 80, pp. 1861-1870.
T.P. Lillicrap et al., 2016, "Continuous Control with Deep Reinforcement Learning," arXiv:1509.02971.
S. Fujimoto et al., 2018, "Addressing Function Approximation Error in Actor-Critic Methods," Proceedings of the 35th ICML, Vol. 80, pp. 1587-1596.
T. Akiba et al., 2019, "Optuna: A Next-generation Hyperparameter Optimization Framework," Proceedings of the 25th ACM SIGKDD, pp. 2623-2631.



A two-level model to assess the economic feasibility of renewable urea production from agricultural wastes

Diego Costa Lopes, Moisés Teles Dos Santos

Universidade de São Paulo, Brazil

Agroindustrial wastes can be an abundant source of chemicals, biofuels and energy. Based on this assumption, this work presents a two-level modeling approach (process models and a supply chain model) and an optimization framework for an integrated biorefinery system to convert agricultural residues into renewable urea via gasification routes, with a possible additional hydrogen input from electrolysis. A process model of the gasification process was developed in Aspen Plus® to identify key performance indicators such as energy consumption and relative urea yields for different biomasses and operating conditions; these key process data were then used in a mixed-integer linear programming (MILP) model designed to identify the optimal combination of energy source, technological route of urea production and plant location that maximizes the net present value of the system. The gasification step was modeled with an equilibrium approach. Besides the gasifier, the plant comprises a conditioning system to adjust syngas composition, CO2 capture, and ammonia and urea synthesis.

Based on the model’s results, three technological routes (oxygen gasification, air gasification and water electrolysis) were chosen as the most promising, and 6 different biomasses (rice husks, coffee husks, corn stover, soybean straw, sugarcane straw and bagasse) were identified as representative of the Brazilian agricultural scenario. The country was divided into 5569 cities and 558 micro-regions. Each region's agricultural production was evaluated to estimate biomass supply and urea demand. Electricity prices were also considered based on current tariffs. A MILP model was developed to maximize the net present value, combining energy source, location and route as decision variables, while respecting constraints on biomass supply, urea demand and transport between regions. The model was applied to the whole country at the micro-region level. It was found that the Assis micro-region in São Paulo state is the optimal location for the plant, leveraging the proximity of large sugarcane and soybean crops and cheaper electricity prices compared to the rest of the country, with a positive NPV for an 80 t/h urea plant. Biomass dominates the total costs of the plant (60%), followed by power (25%) and urea transport (10%). Biomass supply was not found to be a major constraint in any region; urea demand is the main limiting factor, with more than 30 micro-regions needed to consume the plant's production, highlighting the need for close proximity between production and consumption to minimize logistics costs.

The model was also constrained to other regions of Brazil to evaluate local feasibility. The north and northeast regions were not found to be viable locations for a plant, with NPVs close to 0, given the lower biomass supplies and urea demands, and the larger distances between micro-regions. Meanwhile, in the southern and midwest regions, the large availability of soybean residues also creates good conditions for a renewable urea plant, with NPVs of US$ 105 mil and US$ 103 mil respectively. The results indicate the feasibility of producing renewable urea from agricultural wastes and the importance of considering a two-level approach to assess the economic performance of the entire system.



Computer-based Chemical Engineering Education for Green and Digital Transformation

Zorka Novak Pintarič, Miloš Bogataj, Zdravko Kravanja

Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, SI-2000 Maribor, Slovenia

The mission of Chemical Engineering Education, particularly Computer-Aided Chemical Engineering, is to equip students with the knowledge and skills they need to drive the green and digital transformation. This involves integrating Chemical Engineering and Process Systems Engineering (PSE) within the Bologna 3-cycle study system. The EFCE Bologna recommendations for Chemical Engineering programs will be reviewed, with a focus on PSE topics, especially those relevant to the green and digital transformation. Key challenges in introducing sustainability and digital knowledge will be highlighted, along with the necessary development of teaching methods and tools.

While chemical engineering programs contain elements of green and digital engineering, their systematic integration into core subjects is limited. The analysis of our study program shows that only a few principles of green engineering, such as maximizing efficiency and energy flow integration, are explicitly addressed. Other principles are indirectly presented through case studies but lack structured inclusion. Digital skills in the current curricula focus mainly on spreadsheets for data processing, basic programming, and process simulation. Green and digital content is more extensively explored in project work and advanced studies, with elective courses and final theses offering deeper engagement.

Artificial Intelligence (AI), as a critical element of digitalization, will significantly impact chemical engineering education, influencing both teaching methods and process optimization. However, the interdisciplinary complexity of AI poses challenges. Students need a solid foundation in programming, data science, and statistics to master AI tools, making its gradual introduction essential. The question therefore arises as to how AI can be effectively integrated into chemical engineering education by striking a balance between technical skills and critical thinking, fostering creativity and ethical awareness while preserving and not sacrificing the engineering fundamentals.

Given the rapid pace of change in the industry, chemical engineering education needs to be reformed, particularly at the bachelor's and master's levels. Core challenges include systematically integrating essential green and digital topics into syllabi, introducing new courses like AI and data science, modernizing textbooks with numerical examples, and providing teachers with training to keep pace with technological advancements.



Development of a Hybrid Model for the Paracetamol Batch Dissolution in Ethanol Using Universal Differential Equations

Fernando Arrais Romero Dias Lima1,2, Amyr Crissaff Silva1, Marcellus G. F. de Moraes3,4, Amaro G. Barreto Jr.1, Argimiro R. Secchi1,4, Idelfonso Nogueira2, Maurício B. de Souza Jr.1,4

1School of Chemistry, EPQB, Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco E, 21941-914, Rio de Janeiro, RJ – Brazil; 2Chemical Engineering Department, Norwegian University of Science and Technology, Trondheim, 793101, Norway; 3Instituto de Química, Rio de Janeiro State University (UERJ), Rua São Francisco Xavier, 524, Maracanã, Rio de Janeiro, RJ, 20550-900, Brazil; 4PEQ/COPPE – Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco G, G115, 21941-914, Rio de Janeiro, RJ – Brazil

Crystallization is a relevant process in the pharmaceutical industry for product purification and particle production. An efficient crystallization produces crystals with the desired attributes, so modeling this process is key to achieving this goal. In this sense, the objective of this work is to propose a hybrid model to describe paracetamol dissolution in ethanol. The universal differential equations methodology is used in the development of this model, combining a neural network that predicts the dissolution rate with the population balance equations that calculate the moments of the crystal size distribution (CSD) and the concentration. The model was developed using the experimental batches reported by Kim et al. [1]. The dataset is composed of concentration measurements obtained using attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy. The objective function of the optimization problem minimizes the difference between the experimental and the predicted concentrations. The hybrid model predicted the concentration efficiently compared to the experimental measurements. Moreover, the hybrid approach produced predictions of the moments of the CSD similar to those of the population balance model proposed by Kim et al. [1] and was able to successfully calculate batches not included in the training dataset. Its performance was similar to that of the phenomenological model based on population balances, but without the need to account for solubility information. Therefore, the universal differential equations approach is presented as an efficient methodology for modeling crystallization processes with limited information.

1. Kim, Y., Kawajiri, Y., Rousseau, R.W., Grover, M.A., 2023. Modeling of nucleation, growth, and dissolution of paracetamol in ethanol solution for unseeded batch cooling crystallization with temperature-cycling strategy. Industrial & Engineering Chemistry Research 62, 2866–2881.
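As a rough illustration of the universal-differential-equation idea described above, the following sketch embeds a small neural network for the dissolution rate inside simplified moment and concentration balances and trains it against concentration data. The moment equations, initial conditions, temperature profile and data are placeholders chosen for illustration, not the model or dataset of this work.

# Minimal sketch of a universal-differential-equation (UDE) hybrid model:
# a small neural network replaces the unknown dissolution-rate law inside
# otherwise mechanistic moment/concentration balances. Illustrative only.
import torch
import torch.nn as nn

class DissolutionRateNN(nn.Module):
    """NN surrogate for the dissolution rate G = f(C, T) (assumed inputs)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, c, temp):
        x = torch.stack([c, temp], dim=-1)
        return self.net(x).squeeze(-1)   # size-independent rate (illustrative)

def rhs(state, temp, rate_nn, kv=0.5, rho=1.3e-12):
    """Hybrid right-hand side: moment equations driven by the NN rate."""
    mu0, mu1, mu2, mu3, c = state
    g = rate_nn(c, temp)                 # negative during dissolution
    dmu0 = torch.zeros_like(mu0)         # no nucleation assumed
    dmu1 = g * mu0
    dmu2 = 2.0 * g * mu1
    dmu3 = 3.0 * g * mu2
    dc   = -3.0 * rho * kv * g * mu2     # solute released by shrinking crystals
    return torch.stack([dmu0, dmu1, dmu2, dmu3, dc])

def simulate(state0, temps, dt, rate_nn):
    """Explicit-Euler rollout of the hybrid ODE over a temperature profile."""
    states = [state0]
    for temp in temps:
        states.append(states[-1] + dt * rhs(states[-1], temp, rate_nn))
    return torch.stack(states)

# Training loop sketch: fit the NN so the simulated concentration matches data.
rate_nn = DissolutionRateNN()
opt = torch.optim.Adam(rate_nn.parameters(), lr=1e-3)
state0 = torch.tensor([1e6, 1e8, 1e10, 1e12, 0.02])   # hypothetical initial moments and concentration
temps = torch.linspace(300.0, 320.0, 50)               # hypothetical heating ramp (K)
c_meas = torch.linspace(0.02, 0.03, 51)                # placeholder concentration data
for _ in range(200):
    opt.zero_grad()
    traj = simulate(state0, temps, dt=1.0, rate_nn=rate_nn)
    loss = torch.mean((traj[:, 4] - c_meas) ** 2)       # fit only the concentration
    loss.backward()
    opt.step()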



Novel PSE applications and knowledge transfer in joint industry - university energy-related postgraduate education

Athanasios Stefanakis2, Dimitra Kolokotsa2, Evaggelos Kapartzianis2, Ioannis Bonis2, Emilia Kondili1, John K. Kaldellis3

1Optimisation of Production Systems Lab., University of West Attica; 2Hellenic Energy S.A.; 3Soft Energy Applications and Environmental Protection Lab., University of West Attica

The area of process systems engineering has historically had a profound theoretical involvement with what is today known as artificial intelligence (noting especially Stephanopoulos 1985, but also Himmelblau 1993), with researchers testing these new ideas in all their forms. At the time, however, neither the computer hardware nor the available data had the capacity required by these models.

Today, with large amounts of data available in industry and cloud computing widely accessible, broad applications of machine learning models have become feasible. In the area of process systems engineering, the types of problems currently addressed with machine learning routines are:

  1. The control system, or in terms of plant equipment the Distributed Control Systems implemented on servers with real-time operating systems. Predictive algorithms with millions of coefficients (or, more recently, fewer but more robust ones), for example Neural Networks and Deep Learning, are better suited to addressing larger systems than single pieces of equipment. Plant-wide optimization has not yet materialized, but Supply Chain Optimization is an area that has already seen applications and is studied in academia.
  2. The Process Safety System (also known as the interlock system or emergency shutdown system) implemented in PLCs has also been augmented by ML through fault prediction and diagnosis methods. Applied mostly to rotating-machine performance (asset performance management systems), these methods predict failures in advance so that companies can take timely measures, minimizing the risk of accidents as well as production losses (predictive maintenance).

Teaching this subject presents three challenges:

  1. The process systems engineering subject itself, which requires an understanding of modelling and is already a demanding topic.
  2. The machine learning subject, which also requires an understanding of modelling but is not a core subject in PSE.
  3. The data engineering subject. As the systems become larger (soon they will be plant-wide), knowledge of databases and cloud operating systems is becoming necessary, at a minimum to understand the structure of the models to be used.

These subjects do not share a common language, not even close, and constitute three separate frames of knowledge. A re-framing of PSE is required to include all three new disciplines in its core, and this must be done faster than in the past. The potential of the younger generations is enormous, as they learn "hands-on", but for older practitioners this is already overwhelming.

In the coming period, machine learning is evolving in the form of plant optimizers and fault detection and diagnosis models.

The present article presents the continuous evolution and progress of the cooperation between the largest energy company in Greece and the University in implementing knowledge transfer and advanced postgraduate and doctoral courses. Furthermore, the novel ideas for AI implementation in the process industry mentioned above will be described, and the prospects this inspiration offers for both industry and the university will be highlighted.



Optimal Operation of Middle Vessel Batch Distillation using an Economic MPC

Surendra Beniwal, Sujit Jogwar

IIT Bombay, India

Middle vessel batch distillation (MVBD) is an alternative configuration of the conventional batch distillation with an improved sustainability index. MVBD consists of two column sections separated by a (middle) vessel for the separation of a ternary mixture. It works on the principle of multi-effect operation wherein vapour from one column section (effect) is used to drive the subsequent effect, thus reducing the overall energy consumption [1]. The entire system is operated under total reflux and, at the end of the batch, the three products are accumulated in the three vessels (reflux drum, middle vessel and reboiler).

It has previously been shown that the performance of the MVBD can be quantified in terms of an overall performance index (OPI), which captures the trade-off between separation and energy efficiency [2]. Furthermore, during the operation, the holdup trajectory of each vessel can be manipulated to maximize the OPI. In order to track these optimal holdup profiles during a batch, a feedback control system is needed.

The present work compares two approaches: sequential (open-loop optimization + closed-loop control) and simultaneous (closed-loop optimization + control). In the sequential approach, the optimal set-point trajectory generated by offline optimization is tracked using a model predictive controller (MPC). Alternatively, in the simultaneous approach, OPI maximization is used as the objective function of the controller. This formulation is similar to economic MPC. As the prediction horizon of this controller is much smaller than the batch time, the problem is reformulated to ensure feasibility of the end-of-batch constraints. The two approaches are compared in terms of effectiveness, overall performance index, robustness (to disturbances and plant-model mismatch) and associated implementation challenges (computational time). A simulation case study on the separation of a ternary mixture consisting of benzene, toluene and o-xylene is used to illustrate the controller design and performance.

References:

[1] Davidyan, A. G., Kiva, V. N., Meski, G. A., & Morari, M. (1994). Batch distillation in a column with a middle vessel. Chemical Engineering Science, 49(18), 3033-3051.

[2] Beniwal, S., & Jogwar, S. S. (2024). Batch distillation performance improvement through vessel holdup redistribution—Insights from two case studies. Digital Chemical Engineering, 13, 100187.
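As a rough sketch of the difference between the two approaches compared above, the snippet below solves a toy finite-horizon problem twice: once with a tracking objective (following a pre-computed holdup set-point) and once with an economic objective (maximizing a stand-in performance index). The scalar holdup model, the surrogate OPI and all numbers are hypothetical and only illustrate how the two objective formulations differ.

# Minimal sketch contrasting a tracking-MPC objective with an economic-MPC
# objective over a short horizon, using a toy scalar holdup model in place of
# the MVBD column model; the OPI used here is purely illustrative.
import numpy as np
from scipy.optimize import minimize

N, dt, x0 = 10, 1.0, 1.0                   # horizon, step, initial holdup

def simulate(u):
    """Toy holdup dynamics: the manipulated flow u changes the vessel holdup."""
    x = [x0]
    for uk in u:
        x.append(x[-1] + dt * uk)
    return np.array(x[1:])

def tracking_cost(u, x_ref=0.5):
    """Sequential approach: track a pre-computed optimal holdup trajectory."""
    return np.sum((simulate(u) - x_ref) ** 2) + 1e-2 * np.sum(u ** 2)

def economic_cost(u):
    """Simultaneous approach: maximise a (hypothetical) performance index."""
    x = simulate(u)
    opi = np.sum(x * (1.0 - x))            # stand-in for the separation/energy trade-off
    return -opi + 1e-2 * np.sum(u ** 2)

u0 = np.zeros(N)
bounds = [(-0.2, 0.2)] * N
u_track = minimize(tracking_cost, u0, bounds=bounds).x
u_econ = minimize(economic_cost, u0, bounds=bounds).x
print(simulate(u_track)[-1], simulate(u_econ)[-1])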



Recurrent deep learning models for multi-step ahead prediction: comparison and evaluation for real Electrical Submersible Pump (ESP) system.

Vinicius Viena Santana1, Carine de Menezes Rebello1, Erbet Almeida Costa1, Odilon Santana Luiz Abreu2, Galdir Reges2, Téofilo Paiva Guimarães Mendes2, Leizer Schnitman2, Marcos Pellegrini Ribeiro3, Márcio Fontana2, Idelfonso Nogueira1

1Norwegian University of Science and Technology, Norway; 2Federal University of Bahia, Brazil; 3CENPES, Petrobras R&D Center, Brazil

Predicting future states from historical data is crucial for automatic control and dynamic optimization in engineering. Recent advances in deep learning have provided new opportunities to improve prediction accuracy across various engineering disciplines, particularly using Artificial Neural Networks (ANNs). Recurrent Neural Networks (RNNs), in particular, are well-suited for time series prediction due to their ability to model dynamic systems through recurrent updates1.

Despite RNNs' high predictive capacity, their potential can be underutilized if the model training does not consider the intended future usage scenario2,3. In applications like Model Predictive Control (MPC), the model must evolve over time, relying only on its own predictions rather than ground truth data. Training a model to predict only one step ahead may result in poor performance when applied to multi-step predictions, as errors compound in the auto-regressive (or generative) mode.

This study focuses on identifying optimal strategies for training deep recurrent neural networks to predict critical operational time series data from a real Electric Submersible Pump (ESP) system. We evaluate the performance of RNNs in multi-step-ahead predictions under two conditions: (1) when trained for single-step predictions and recursively applied to multi-step forecasting, and (2) using a novel training approach explicitly designed for multi-step-ahead predictions. Our findings reveal that the same model architecture can exhibit markedly different performance in multi-step-ahead forecasting, emphasizing the importance of aligning the training process with the model's intended real-time application to ensure reliable predictions.

[1] Himmelblau, D.M. Applications of artificial neural networks in chemical engineering. Korean J. Chem. Eng. 17, 373–392 (2000). https://doi.org/10.1007/BF02706848

[2] Marrocos, P.H., Iwakiri, I.G.I., Martins, M.A.F., Rodrigues, A.E., Loureiro, J.M., Ribeiro, A.M., & Nogueira, I.B.R. (2022). A long short-term memory based Quasi-Virtual Analyzer for dynamic real-time soft sensing of a Simulated Moving Bed unit. Applied Soft Computing, 116, 108318. https://doi.org/10.1016/j.asoc.2021.108318

[3] Nogueira, I.B.R., Ribeiro, A.M., Requião, R., Pontes, K.V., Koivisto, H., Rodrigues, A.E., & Loureiro, J.M. (2018). A quasi-virtual online analyser based on artificial neural networks and offline measurements to predict purities of raffinate/extract in simulated moving bed processes. Applied Soft Computing, 67, 29-47. https://doi.org/10.1016/j.asoc.2018.03.001
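A minimal sketch of the training-strategy contrast discussed above is given below: the same recurrent model can be trained on one-step-ahead errors or on errors accumulated over a free-run rollout in which its own predictions are fed back in. The GRU architecture, horizon, and random placeholder data are assumptions for illustration, not the configuration used in this study.

# Minimal sketch contrasting single-step training with multi-step (rollout)
# training of a recurrent model. Architecture, horizon and data are placeholders.
import torch
import torch.nn as nn

class GRUPredictor(nn.Module):
    def __init__(self, n_inputs=3, n_hidden=32, n_outputs=1):
        super().__init__()
        self.gru = nn.GRU(n_inputs, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_outputs)

    def step(self, x_t, h):
        out, h = self.gru(x_t.unsqueeze(1), h)   # advance one time step
        return self.head(out[:, -1]), h

def rollout_loss(model, u_seq, y_seq, horizon):
    """Feed the model's own prediction back in for `horizon` steps (free run)."""
    batch, T, _ = u_seq.shape
    h = None
    y_hat = y_seq[:, 0]                          # start from a measured value
    loss = 0.0
    for t in range(min(horizon, T - 1)):
        x_t = torch.cat([u_seq[:, t], y_hat], dim=-1)   # exogenous inputs + own output
        y_hat, h = model.step(x_t, h)
        loss = loss + torch.mean((y_hat - y_seq[:, t + 1]) ** 2)
    return loss / horizon

# Placeholder data: 2 exogenous input channels + 1 autoregressive output channel.
u = torch.randn(8, 40, 2)
y = torch.randn(8, 40, 1)
model = GRUPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    # horizon=1 recovers single-step training; larger horizons penalise the
    # compounding error seen when the model runs in auto-regressive mode.
    loss = rollout_loss(model, u, y, horizon=10)
    loss.backward()
    opt.step()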



Simulation and optimisation of vacuum (pressure) swing adsorption with simultaneous consideration of real vacuum pump data and bed fluidisation

Yangyanbing Liao, Andrew Wright, Jie Li

Centre for Process Integration, Department of Chemical Engineering, School of Engineering, The University of Manchester, United Kingdom

Pressure swing adsorption (PSA) is an essential technology for gas separation and purification. A PSA process where the highest pressure is above the atmospheric pressure and the lowest pressure is at a vacuum level is referred to as vacuum pressure swing adsorption (VPSA). In contrast, vacuum swing adsorption (VSA) refers to a PSA process with the highest pressure equal to or slightly above the atmospheric pressure and the lowest pressure below atmospheric pressure.

Most computational studies concerning simulation of V(P)SA processes have assumed a constant vacuum pump efficiency ranging from 60% to 100%. Nevertheless, Krishnamurthy et al. [3] highlighted that 72% is a typical efficiency value for compressors, but not representative of vacuum pumps. They reported a low efficiency value of 30% estimated based on their pilot-plant data. As a result, the energy consumption of the vacuum pump could have been underestimated by at least a factor of two in many computational studies.

In addition to assuming a constant vacuum pump efficiency, efficiency correlations have been proposed to more accurately evaluate the vacuum pump performance [4-5]. However, these correlations fail to conform to the trend suggested by the data points at higher pressures or to accurately represent the vacuum pump performance.

The adsorption bed fluidisation is another key factor in designing the PSA process. This is because bed fluidisation incurs rapid adsorbent attrition and eventually results in a substantial decrease in the separation performance [6]. However, the impacts of fluidisation on PSA optimisation have not been comprehensively addressed. More importantly, existing studies have not simultaneously incorporated real vacuum pump performance data and bed fluidisation limits into PSA optimisation.

To address the above research gaps, in this work we develop accurate prediction models for the pumping speed and power of the vacuum pump based on real performance curves using a data-driven modelling approach [7-8]. We then develop a new, comprehensive V(P)SA model that allows for an accurate evaluation of the V(P)SA process performance without relying on an estimated vacuum pump energy efficiency or on pressure/flow rate boundary conditions at the vacuum pump end of the adsorption bed. A new optimisation problem that simultaneously incorporates the proposed V(P)SA model and the bed fluidisation constraints is then constructed.

The computational results demonstrate that the vacuum pump efficiency falls within 20%-40%. Using an estimated vacuum pump efficiency, the optimal cost is underestimated by at least 42% compared to that obtained using the proposed performance models. When the fluidisation constraints are incorporated, a low feed velocity and an exceptionally long cycle time are essential for maintaining a small pressure drop across the bed to prevent fluidisation. The optimal total cost increases by at least 16% compared to cases where bed fluidisation constraints are not incorporated. Hence, it is important to incorporate vacuum pump performance prediction models developed using real data, together with bed fluidisation constraints, to accurately evaluate the PSA performance.

References

1. Comput. Aided Chem. Eng. 2012:1217-21.

2. Energy 2017;137:495-509.

3. AIChE J. 2014;60(5):1830-42.

4. Int. J. Greenh. Gas Control 2020;93:102902.

5. Ind. Eng. Chem. Res. 2019;59(2):856-73.

6. Adsorption 2014;20:757-68.

7. AIChE J. 2016;62(9):3020-40.

8. Appl. Energy 2022;305:117751.
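The kind of data-driven vacuum-pump performance model described above can be sketched as follows: performance-curve points are fitted with smooth correlations for pumping speed and power as functions of inlet pressure, which can then replace a constant-efficiency assumption inside a V(P)SA simulation. The data points, units and polynomial form below are hypothetical, not the models developed in this work.

# Minimal sketch of a data-driven vacuum-pump performance model: fit pumping
# speed and shaft power as smooth functions of inlet pressure from
# (hypothetical) digitised performance-curve points.
import numpy as np

p_inlet = np.array([10.0, 50.0, 100.0, 200.0, 400.0, 700.0, 1000.0])   # mbar
speed   = np.array([300.0, 340.0, 350.0, 345.0, 330.0, 310.0, 290.0])  # m3/h
power   = np.array([4.0, 5.5, 6.8, 8.2, 9.5, 10.3, 10.8])              # kW

# Fit low-order polynomials in log10(p); the actual model form in the paper may differ.
logp = np.log10(p_inlet)
speed_coef = np.polyfit(logp, speed, deg=3)
power_coef = np.polyfit(logp, power, deg=3)

def pump_speed(p):   # m3/h at inlet pressure p (mbar)
    return np.polyval(speed_coef, np.log10(p))

def pump_power(p):   # kW at inlet pressure p (mbar)
    return np.polyval(power_coef, np.log10(p))

# Such correlations can replace a constant-efficiency assumption: the molar
# flow drawn by the pump follows from pump_speed(p) and the gas density, and
# the energy use from pump_power(p), both evaluated at the instantaneous bed-end pressure.
if __name__ == "__main__":
    for p in (20.0, 100.0, 500.0):
        print(p, round(pump_speed(p), 1), round(pump_power(p), 2))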



Sociotechnical Transition: An Exploratory Study on the Social Appropriability of Users of Smart Meters in Wallonia.

Elisa Boissézon

Université de Mons, Belgium

Optimal and autonomous daily use of new technologies isn’t a reality for everyone. In a societal context driven by sociotechnical transitions (Markard et al., 2012), many people lack access to digital equipment, do not possess the required digital skills for their use, and, consequently, are unable to participate digitally in social life via e-services. This reality is called digital inequalities (Agence du numérique, 2023) and is even more crucial to consider in the context of the increasing digitalization of society, in all areas, including energy. Indeed, according to the European Union directives, member states are required to develop various means of action, including digital, which are essential to achieving the three strategic axes envisioned by the European energy transition scenario, namely: investment in renewable energies, energy efficiency, and energy sobriety (Dufournet & Marignac, 2018).

In this specific instance, our research focuses on the question of social appropriation (Zélem, 2018) of new technologies in the context of the deployment of smart meters in Wallonia, and the use of associated digital tools by the publics. These individuals, with their unique socio-economic and sociodemographic profiles, are not equally equipped to utilize all the functionalities offered by this new digital system for managing energy consumption (Agence du Numérique, 2023; Van Dijk, 2017; Valenduc, 2013). This exploratory and phenomenological study aims, firstly, to investigate the experiences of the publics concerning the support received during the installation of the new smart metering system and to identify the barriers to the social appropriation of new technologies. Secondly, the field surveys aim to determine to what extent individual participatory forms of support (Benoît-Moreau et al., 2013; Cadenat et al., 2013), developed through the lens of active pedagogies such as experiential learning (Brotcorne & Valenduc, 2008, 2009), and collective forms (Bernaud et al., 2015; Turcotte & Lindsay, 2008) can promote the inclusion of digitally vulnerable users. The central role of field professionals as interfaces (Cihuelo & Jobert, 2015) is also highlighted within the service relationship (Gadrey, 2003) that connects, on one hand, the end consumers and, on the other hand, the organization responsible for deploying the smart meters. Our qualitative investigations were conducted with four types of samples, through semi-structured interviews, considering several determining factors regarding the engagement in the use of new technologies, from both individual and collective perspectives. Broadly speaking, our results indicate that while the standardized support protocol applied by field professionals during the installation of smart meter is sufficient for digitally proficient users, the situation is more nuanced for vulnerable populations who have specific needs requiring close support. In this context, collective participatory support in workshops in the form of focus groups seems to have further promoted the digital inclusion of participants.



Optimizing Methane Conversion in a Flow Reactor System Using Bayesian Optimization and Fisher Information Matrix Driven Experimental Design Approaches: A Comparative Study

Michael Aku, Solomon Gajere Bawa, Arun Pankajakshan, Ye Seol Lee, Federico Galvanin

University College London, United Kingdom

Reaction processes are complex systems requiring optimization techniques to achieve optimal performance in terms of key performance indicators (KPIs) such as yield, conversion, and selectivity [1]. Optimisation efforts often rely on the accurate modelling of reaction kinetics, thermodynamics and transport phenomena to guide experimental design and improve reactor performance. Bayesian Optimization (BO) and Fisher Information Matrix-driven (FIMD) techniques are two key approaches used in the optimization of reaction systems [2].
BO efficiently identifies promising conditions by starting with an exploratory sampling of the design space, while FIMD approaches have been recently proposed to maximise the information gained from experiments and progressively improve parameter estimation [3] by focusing more on exploitation of the decision space to reduce the uncertainty in kinetic model parameters [4]. Both techniques have been widely used in scientific and industrial domains, but they differ fundamentally in how they balance exploration (gaining new knowledge) and exploitation (using current knowledge to optimize outcomes) during model calibration.

This study presents a comparative assessment of BO and FIMD methods for optimal experimental design, focusing on the complete oxidation of methane in an automated flow reactor system [5]. The performance of both methods is evaluated in terms of methane conversion optimization, experimental efficiency (i.e., the number of runs required to achieve the optimum), and information gained. Our preliminary findings suggest that while BO readily converges to a high methane conversion, FIMD can be a valid alternative to reduce the number of required experiments, offering more insight into the sensitivities of each parameter and the process dynamics. The comparative analysis paves the way towards developing explainable or physics-informed data-driven models to map the relationship between predicted experimental information and KPIs. The comparison also highlights trade-offs between convergence speed and robustness in experimental design, which are key aspects to consider for a comprehensive evaluation of both approaches in online reaction process optimization.

References

[1] Taylor, C. J., Pomberger, A., Felton, K. C., Grainger, R., Barecka, M., Chamberlain, T. W., & Lapkin, A. A. (2023). A brief introduction to chemical reaction optimization. Chemical Reviews, 123(6), 3089-3126.

[2] Quirino, P. P. S., Amaral, A. F., Manenti, F., & Pontes, K. V. (2022). Mapping and optimization of an industrial steam methane reformer by the design of experiments (DOE). Chemical Engineering Research and Design, 184, 349-365.

[3] Friso, A., & Galvanin, F. (2024). An optimization-free Fisher information driven approach for online design of experiments. Computers & Chemical Engineering, 187, 108724.

[4] Green, P. L., & Worden, K. (2015). Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 373(2051), 20140405.

[5] Pankajakshan, A., Bawa, S. G., Gavriilidis, A., & Galvanin, F. (2023). Autonomous kinetic model identification using optimal experimental design and retrospective data analysis: methane complete oxidation as a case study. Reaction Chemistry & Engineering, 8(12), 3000-3017.
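To make the exploitation-oriented character of FIM-driven design concrete, the sketch below selects the next experiment for a toy first-order kinetic model by maximizing a D-optimality criterion built from finite-difference parameter sensitivities; a BO step would instead maximize an acquisition function (e.g., expected improvement) over the same candidate grid. The model, parameter values and grids are illustrative and do not reproduce the methane oxidation case study.

# Minimal sketch of a Fisher-information-driven experiment selection step for
# a toy kinetic model. All numbers are illustrative assumptions.
import numpy as np

def conversion(theta, T, tau):
    """Toy first-order conversion model: X = 1 - exp(-k(T) * tau)."""
    k0, Ea = theta
    k = k0 * np.exp(-Ea / (8.314 * T))
    return 1.0 - np.exp(-k * tau)

def sensitivities(theta, T, tau, eps=1e-6):
    """Finite-difference parameter sensitivities of the model output."""
    base = conversion(theta, T, tau)
    s = np.zeros(len(theta))
    for i in range(len(theta)):
        th = np.array(theta, dtype=float)
        th[i] *= (1.0 + eps)
        s[i] = (conversion(th, T, tau) - base) / (theta[i] * eps)
    return s

def next_experiment_fim(theta, fim_prior, candidates, sigma=0.01):
    """Pick the candidate (T, tau) maximising the D-optimality criterion."""
    best, best_det = None, -np.inf
    for T, tau in candidates:
        s = sensitivities(theta, T, tau)
        fim = fim_prior + np.outer(s, s) / sigma**2
        det = np.linalg.det(fim)
        if det > best_det:
            best, best_det = (T, tau), det
    return best

theta_hat = np.array([1.0e5, 6.0e4])          # hypothetical k0 [1/s], Ea [J/mol]
grid = [(T, tau) for T in np.linspace(500.0, 800.0, 7)
                 for tau in np.linspace(0.5, 5.0, 5)]
print(next_experiment_fim(theta_hat, np.eye(2) * 1e-6, grid))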



Optimal Control of PSA Units Based on Extremum Seeking

Beatriz Cambão da Silva1,2, Ana Mafalda Ribeiro1,2, Diogo Filipe Rodrigues1,2, Alexandre Filipe Porfírio Ferreira1,2, Idelfonso Bessa Reis Nogueira3

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The application of RTO to dynamic operations is challenging due to the complexity of the nonlinear problems involved, making it difficult to achieve robust solutions [1]. Regarding cyclic adsorption processes, particularly Pressure Swing Adsorption (PSA) and Temperature Swing Adsorption (TSA), the control of the process in real-time is essential to maintain or increase productivity.

The literature on real-time optimization of PSA units relies on Model Predictive Control (MPC) and Economic Model Predictive Control (EMPC) [2]. These options rely heavily on an accurate model representation of the industrial plant, requiring a high computational effort and time to ensure optimal control [3]. Given the importance of PSA and TSA systems in multiple separation operations, establishing alternatives for real-time control and optimization is in order. With that in mind, this work aimed to explore alternative model-free real-time optimization techniques that depend on simple control elements, as is the case of Extremum Seeking Control (ESC).

The chosen case study was syngas upgrading, which is relevant since it precedes the Fischer‑Tropsch reactions that enable an alternative to fossil fuels. Syngas upgrading can also provide H2 for ammonia production and diminish CO2 emissions. The operation of the PSA unit for syngas upgrading used as the basis for this study was discussed in the work of Regufe et al. [4].

Extremum-seeking control is a method that aims to control the process by driving an objective’s gradient towards zero while estimating that gradient from persistent perturbations. A High-pass Filter (HF) eliminates the signal’s DC component to obtain a clearer response to changes in the system. The input variable 𝑢 is continually perturbed by a sinusoidal wave, which helps assess the evolution of the objective function by keeping the system in a state of constant perturbation. An integration step then determines the adjustment in 𝑢 needed to bring the objective function closer to its optimum. This adjustment is often scaled by a gain 𝐾 to accelerate convergence.

The PSA model was implemented in gPROMS, representing the behaviour of the industrial plant, with communication with MATLAB and Simulink, where the ESC was implemented.

Extremum Seeking Control successfully optimized the CO2 productivity in PSA units for syngas upgrading/H2 purification. This shows that ESC can be a valuable tool in optimizing and controlling PSA processes and does not require the unit to reach a Cyclic Steady State to adjust the operation.

[1] S. Kameswaran and L. T. Biegler, “Simultaneous dynamic optimization strategies: Recent advances and challenges,” Computers & Chemical Engineering, vol. 30, no. 10, pp. 1560–1575, 2006, doi: 10.1016/j.compchemeng.2006.05.034.

[2] H. Khajuria and E. N. Pistikopoulos, “Optimization and Control of Pressure Swing Adsorption Processes Under Uncertainty,” AIChE Journal, vol. 59, no. 1, pp. 120–131, Jan. 2013, doi: 10.1002/aic.13783.

[3] S. Skogestad, “Advanced control using decomposition and simple elements,” Annual Reviews in Control, vol. 56, p. 100903, 2023, doi: 10.1016/j.arcontrol.2023.100903.

[4] M. J. Regufe et al., “Syngas Purification by Porous Amino-Functionalized Titanium Terephthalate MIL-125,” Energy & Fuels, vol. 29, no. 7, pp. 4654–4664, 2015, doi: 10.1021/acs.energyfuels.5b00975.
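The extremum-seeking elements described above (sinusoidal dither, high-pass filtering, demodulation and an integrator with gain K) can be sketched as a short discrete-time loop. In the sketch below, the gPROMS/Simulink PSA plant is replaced by a toy static objective with a known optimum, and the tuning values are arbitrary, so it illustrates the mechanism rather than the authors' implementation.

# Minimal discrete-time extremum-seeking loop: dither, high-pass filter,
# demodulation, integrator with gain K. The "plant" is a toy static map.
import numpy as np

def plant(u):
    """Hypothetical steady objective (e.g. productivity) with optimum at u = 2."""
    return -(u - 2.0) ** 2 + 5.0

dt, a, omega, K, wh = 0.1, 0.05, 1.0, 0.5, 0.5   # step, dither amp/freq, gain, HPF cutoff
u_hat, hp_state, y_prev = 0.5, 0.0, plant(0.5)
for k in range(3000):
    t = k * dt
    dither = a * np.sin(omega * t)
    y = plant(u_hat + dither)                     # perturbed measurement
    # First-order high-pass filter removes the DC component of y.
    hp_state = (1.0 - wh * dt) * hp_state + (y - y_prev)
    y_prev = y
    grad_est = hp_state * np.sin(omega * t)       # demodulation -> gradient estimate
    u_hat += dt * K * grad_est                    # integrator drives the gradient to zero
print(round(u_hat, 2))   # converges near the optimum u = 2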



Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia – FIND4S

Monika Polanska1, Yoga Pratama2, Setya Budi Abduh2, Ahmad Ni'matullah Al-Baarri2, Jan Van Impe1

1BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium; 2Department of Food Technology, Diponegoro University, Indonesia

The Capacity Building Project entitled “Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia” (FIND4S, “FIND force”) aims to boost the institutional and administrative resources of seven Indonesian higher education institutions (HEIs) in Central Java.

The EU overarching priorities addressed through the FIND4S project include the Green Deal and the Digital Transformation, through developing knowledge, competences, skills and values. The modernized, competitive and innovative curricula will stimulate green jobs and pave the way to sustainable food systems in which environmental impact is taken into account. The essential elements of risk assessment, predictive modelling and computational optimization are to be brought together with the sustainability principles of food production and food processing as well as energy and food chain concepts (Life Cycle Assessment) within one coherent structure. The project will offer a better understanding of ecological and food systems dynamics and provide strategies for regenerating natural systems by using big data and providing predictive tools for the food industry. The predictive modelling tools can be applied to evaluate the effects of climate change on food safety with regard to managing this new threat for all stakeholders. Raising the quality of education through digital technologies will enable the learners to acquire essential competences and sector-specific digital skills. The inclusion of data management to address sustainability challenges will reinforce the scientific, technical and innovation capacities of HEIs and foster links between academia, research and industry.

Initially, the FIND4S project will modernize Bachelor’s degree curricula to include food systems and technology-oriented programs at partner universities in Indonesia. This modernization will aim to meet the desired accreditation standards and better prepare graduates for postgraduate studies. Additionally, at the central hub university, the project will develop a new and innovative Master’s degree program in sustainable food systems that integrates sustainability and environmental awareness into graduate education. This program will align with labor market demands and address the challenges that agriculture and food systems are facing, providing insights into potential threats and opportunities for knowledge transfer to Indonesia through education and research.

The recognition and implementation of novel and innovative programs will be tackled via significant improvement of food science education by designing new curricula and upgrading existing ones, training academic staff, creating a research center and equipping laboratories, as well as expanding the network of collaboration with European Higher Education Institutions. The project will utilize big data, quantitative modeling, and engineering tools to engage all stakeholders, including industry partners. The comprehensive MSc program will meet the growing demand for knowledge, experience, and standards in Indonesia, contributing to a greener and more sustainable economy and society. Ultimately, this initiative will support the necessary transformation towards socially, environmentally, and economically sustainable food systems.



Optimization of Specific Heat Transfer Area for Multiple Effects Desalination (MED) Process

Salih Alsadaie1, Sana I Abukanisha1, Iqbal M Mujtaba3, Amhamed A Omar2

1Sirte University, Libya; 2Sirte Oil Company, National Oil Corporation, Libya; 3University of Bradford, United Kingdom

The world population is expected to increase massively in the coming decades, putting more stress on the desalination industry to cope with the increasing demand for fresh water. However, with the increasing cost of living, freshwater production processes face the challenge of producing freshwater at higher quality and lower cost. The best-known techniques for water desalination are thermal based, such as Multistage Flash desalination (MSF) and Multiple Effect Desalination (MED), and membrane based, such as Reverse Osmosis (RO). Although the installed capacity of RO remarkably surpasses that of MSF and MED, the MED process is the preferred option for newly constructed plants in various locations around the world where waste heat is available. However, MED desalination technology is also required to cut costs further by optimizing its operating and design parameters.

There are several studies in the literature that focus on optimizing the MED process. Most of these studies focus on increasing the production rate or minimizing energy consumption by optimizing operating conditions, using more efficient control systems, integrating with power plants, and hybridizing with other desalination techniques. However, none of the available studies has focused on the optimum design configuration, such as the heat transfer area and the number of effects.

In this paper, a mathematical model describing the MED process is developed and solved using the gPROMS software. For a fixed production rate, the heat transfer area is optimized by varying the seawater temperature and flowrate, the steam temperature and flowrate, and the number of effects. The design and operating data are taken from an existing, almost new, small MED plant with two large effects and two small effects.

Keywords: MED desalination, gPROMS, optimization, heat transfer area, multiple effects.



Companies’ operation and trading strategies under the triple trading of electricity, carbon quota and commodities: A game theory optimization modelling

Chenxi Li1, Nilay Shah2, Zheng Li1, Pei Liu1

1State Key Lab of Power System Operation and Control, Department of Energy and Power Engineering, Tsinghua-BP Clean Energy Centre, Tsinghua University, Beijing, 100084, China; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom

Trading has been recognized as an effective measure for decarbonization, especially with the recent global focus on carbon reduction targets. Due to the high overlap in participants and traded goods, carbon and electricity trading are strongly coupled, leaving the operational strategies of companies involved in the coupled transactions unclear. Research on this coupled trading is therefore essential, as it helps companies identify optimal strategies and enables policymakers to detect potential policy loopholes. This study presents a novel game theory optimization model involving both power generation companies (GenCos) and factories. Aiming to achieve a Nash equilibrium that maximizes each company’s benefits, the model explores optimal operation strategies for both power generation and consumption companies under electricity-carbon joint trading. It fully captures the operational characteristics of power generation units and the technical energy consumption of electricity-using enterprises, describing the relationships between renewable energy, fossil fuels, electricity, and carbon emissions in detail. Electricity and carbon prices in the transaction are determined through negotiation between buyers and sellers. Considering the relationship between the production volume and the price of the same product, the case actually encompasses three trading systems: electricity, carbon, and commodities. The model’s nonlinearity, caused by the Nash equilibrium and the product of price and output, is managed using piecewise linearization and discretization, transforming the problem into a mixed-integer linear problem. Using this triple trading model, the study quantitatively explores three issues based on a virtual case involving three GenCos and four factories: the enterprises’ operational strategies under varying emission reduction requirements, the pros and cons of cap and benchmark carbon quota allocation mechanisms, and the impact of integrating zero-emission enterprises into carbon trading. Results indicate that GenCos tend to act as sellers of both electricity and carbon quotas. Meanwhile, since consumers may cut production rather than implement low-carbon technologies to lower emissions, driving up product prices to maintain profits, high electricity and carbon prices become unsustainable for GenCos due to reduced electricity demand. Moreover, while benchmark mechanisms may incentivize production, they can also lower overall system profits, which is undesirable for policymakers. Lastly, under strict carbon reduction targets, zero-emission companies may transform the carbon market into a seller's market by purchasing carbon quotas to raise carbon prices, thereby reducing electricity prices and lowering their own operating costs.



Solvent and emission source dependent amine-based CO2 capture costs estimation methodology for systemic level analysis

Yi Zhao1, Aron Beck1, Hayato Hagi2, Bruno Delahaye2, François Maréchal1

1Ecole Polytechnique Fédérale de Lausanne, Switzerland; 2TotalEnergies OneTech, France

Amine-based carbon capture effectively reduces industrial emissions but faces challenges due to high investment costs and the energy penalty associated with solvent regeneration. Existing cost estimation approaches either rely on complex and costly simulation processes or provide overly general results, limiting their applicability for systemic analysis. This study presents a shortcut approach to estimating amine-based carbon capture costs, considering varying solvents and emission sources in terms of flow rates and CO2 concentrations. The results show that scaling effects significantly impact smaller plants, with costs dropping from 200–500 $/t-CO2 to 50–100 $/t-CO2 as capacity increases from 0.1 to 100 t-CO2/h, with Monoethanolamine (MEA) as the solvent. For larger plants, heat utility costs dominate, representing around 80% of the total costs, assuming a natural gas price of 35 $/MWh (10.2 $/MMBTU). Furthermore, MEA-based plants can be up to 25% more expensive than those with alternative solvents. In short, this study provides a practical method for initial amine-based carbon capture cost estimation, enabling a systemic assessment of its techno-economic potential and facilitating its comparison with other CO2 abatement technologies.
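A generic shortcut estimate of the type discussed above can be sketched as an annualised, capacity-scaled investment term plus a solvent-regeneration heat term. The reference cost, scaling exponent, capital recovery factor and reboiler duty below are placeholder assumptions, not the correlations of this study; they merely reproduce the qualitative behaviour (costs falling steeply with capacity, heat dominating for large plants).

# Generic sketch of a shortcut levelised capture-cost estimate for an amine
# unit: annualised CAPEX with a capacity-scaling exponent plus regeneration heat.
def capture_cost_per_t(capacity_t_per_h,
                       ref_capacity=10.0,       # t-CO2/h (assumed reference plant)
                       ref_capex=30e6,          # $ for the reference plant (assumed)
                       scale_exp=0.65,          # capacity-scaling exponent (assumed)
                       crf=0.1,                 # capital recovery factor (assumed)
                       full_load_h=8000.0,      # operating hours per year
                       reboiler_mwh_per_t=1.0,  # regeneration duty, MWh/t-CO2 (assumed)
                       heat_price=35.0):        # $/MWh (natural gas)
    capex = ref_capex * (capacity_t_per_h / ref_capacity) ** scale_exp
    annual_co2 = capacity_t_per_h * full_load_h
    capex_term = crf * capex / annual_co2       # $/t-CO2 from investment
    heat_term = reboiler_mwh_per_t * heat_price  # $/t-CO2 from regeneration heat
    return capex_term + heat_term

for cap in (0.1, 1.0, 10.0, 100.0):
    print(cap, "t-CO2/h ->", round(capture_cost_per_t(cap), 1), "$/t-CO2")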



Energy Planning Toward Absolute Environmental Sustainability: Key Decisions and Actionable Insights Through Interpretable Machine Learning

Nicolas Ghuys1, Diederik Coppitters1, Anne van den Oever2, Maarten Messagie2, Francesco Contino1, Hervé Jeanmart1

1Université catholique de Louvain, Belgium; 2Vrije Universiteit Brussel, Belgium

Energy planning models traditionally support the energy transition by focusing on cost-optimized solutions that limit greenhouse gas emissions. However, this narrow focus risks burden-shifting, where reducing emissions increases other environmental pressures, such as freshwater use, solving one problem while creating others. Therefore, we integrated Planetary Boundary-based Life Cycle Assessment (PB-LCA) into energy planning to identify solutions that respect absolute environmental sustainability limits. However, integrating PB-LCA into energy planning introduces challenges, such as adopting distributive justice principles, interpreting trade-offs across PB indicator impacts, and managing subjective weighting in the objective function. To address these, we employed weight screening and interpretable machine learning to extract key decisions and actionable insights from the numerous quantitative solutions generated. Preliminary results for a single weighting scenario show that the transition scenario exceeds several PB thresholds, particularly for ecosystem quality and mineral resource depletion, underscoring the need for a balanced weighting scheme. Next, we will apply screening and machine learning to pinpoint key decisions and provide actionable insights for achieving absolute environmental sustainability.

 
1:30pm - 2:30pmBrewery visit
Location: On-campus brewery
2:00pm - 4:00pmT1: Modelling and Simulation - Session 6
Location: Zone 3 - Room D016
Chair: Rofice Dickson
Co-chair: Arnaud DUJANY
 
2:00pm - 2:20pm

A Comparative Evaluation of Complexity in Mechanistic and Surrogate Modeling Approaches for Digital Twins

Shreyas Parbat1, Isabell Viedt1,3, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Chair of Process Control Systems; 3TUD Dresden University of Technology, Process-to-Order Lab

A Digital Twin (DT) is a digital representation of a physical entity that employs data, algorithms, and software to enhance operations, forecast failures, and evaluate new designs through the simulation of real-world scenarios (Attaran et al., 2023). DTs have the potential for real-time monitoring, simulation, and optimization. However, traditional DTs often rely on mechanistic models (Bárkányi et al., 2021). These mechanistic models are complex because of their non-linearity, imposing time and budget constraints (Beisheim et al., 2019). This results in challenges of high computational demands, complex model structures, and slow response times, making DTs both complex and resource-intensive. Surrogate models, on the other hand, are simplified approximations of more complex, higher-order models. These approximations are typically constructed using data-driven approaches, such as Random Forest Regression (Garg et al., 2023), facilitating faster simulations and simpler deployment.

This study aims to analyze the complexity of mechanistic and surrogate modeling approaches in the context of DTs to aid in model selection. A model with reduced complexity is capable of improving computational efficiency, simplifying implementation and maintenance, and enabling ease in real-time monitoring and predictive maintenance. To improve the performance of DTs by selecting a less complex model, a complexity analysis is necessary. This involves evaluating complexity metrics including analytical, structural, space, behavioral, training, and prediction complexity. By assigning complexity scores to models, an overall complexity score can be determined, helping to identify the most suitable model. Using a centrifugal pump as a use case, the mechanistic model is compared to a surrogate model to quantify complexity scores and select a less complex model for DT development.

Future work will focus on accuracy analysis and data augmentation to enhance the framework with additional model selection metrics. The developed complexity evaluation framework can be applied to complex DTs of entire process plants, enabling the identification of components that can be effectively modeled using surrogate models for enhanced efficiency, as well as those that require detailed mechanistic models for greater accuracy and precision.

References

Attaran, M., Attaran, S., Celik, B.G., 2023. The impact of digital twins on the evolution of intelligent manufacturing and Industry 4.0. Advances in Computational Intelligence 3, 11. https://doi.org/10.1007/s43674-023-00058-y

Bárkányi, Á., Chován, T., Németh, S., Abonyi, J., 2021. Modelling for Digital Twins—Potential Role of Surrogate Models. Processes 9. https://doi.org/10.3390/pr9030476

Beisheim, B., Rahimi-Adli, K., Krämer, S., Engell, S., 2019. Energy performance analysis of continuous processes using surrogate models. Energy 183, 776–787. https://doi.org/10.1016/j.energy.2019.05.176

Garg, A., Mukhopadhyay, T., Belarbi, M.O., Li, L., 2023. Random forest-based surrogates for transforming the behavioral predictions of laminated composite plates and shells from FSDT to Elasticity solutions. Composite Structures 309, 116756. https://doi.org/10.1016/j.compstruct.2023.116756
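One simple way to aggregate the complexity metrics mentioned above into an overall score is to normalise each metric across the candidate models and take a weighted sum, as sketched below. The metric names, raw values and weights are purely illustrative assumptions, not the framework's actual scoring scheme.

# Minimal sketch of an aggregated complexity score: normalise heterogeneous
# metrics across models and combine them with user-chosen weights.
def normalise(values):
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

models = ["mechanistic", "surrogate"]
metrics = {                        # hypothetical raw measurements per model
    "equations / parameters": [120, 35],
    "prediction time [ms]":   [850, 4],
    "training time [s]":      [0, 600],
    "memory [MB]":            [40, 15],
}
weights = {"equations / parameters": 0.3, "prediction time [ms]": 0.3,
           "training time [s]": 0.2, "memory [MB]": 0.2}

scores = [0.0, 0.0]
for name, vals in metrics.items():
    for i, v in enumerate(normalise(vals)):
        scores[i] += weights[name] * v           # lower score = less complex
for m, s in zip(models, scores):
    print(f"{m}: overall complexity score = {s:.2f}")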



2:20pm - 2:40pm

Data-Driven Dynamic Process Modeling Using Temporal RNN Incorporating Output Variable Autocorrelation and Stacked Autoencoder

Yujie Hu1, Lingyu Zhu2, Han Gong3, Xi Chen1,4

1Zhejiang University, China, People's Republic of; 2College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, 310014, China.; 3Zhejiang Amino-Chem Co., Ltd, Shaoxing, Zhejiang, 312369, China.; 4Huzhou Institute of Industrial Control Technology, Zhejiang, China.

In chemical production processes, some crucial variables are often difficult to measure in real time through instrumentation, necessitating soft sensing techniques. Data-driven models, such as neural networks, have been widely applied in soft sensing scenarios due to their strong fitting capabilities. However, these models often suffer from limitations such as poor interpretability and limited extrapolation capability, primarily due to their lack of process-specific domain knowledge. This study addresses these challenges by proposing a hybrid modeling approach that integrates mechanistic models into neural network frameworks, with the distillation unit serving as a case study. First, an equilibrium stage model was selected as the mechanistic model, incorporating Murphree efficiency to correct model deviations. Next, parameter estimation of Murphree efficiency was performed using the training dataset, targeting the soft-sensing objective within the equilibrium stage model. Based on the relationship between Murphree efficiency and the column hydraulics, a first-tier network model was developed to predict Murphree efficiency. Subsequently, the estimated Murphree efficiency was used in the equilibrium stage model to produce low-precision soft-sensing results, which were then fed as inputs to a second-tier network model that provided the final soft-sensing predictions. This methodology was applied to an actual distillation system for phenylenediamine, yielding favorable results that demonstrated the applicability of the proposed hybrid model. The integration of mechanistic knowledge within the neural network not only improved predictive accuracy but also enhanced the model's interpretability and robustness in a complex production environment.



2:40pm - 3:00pm

Mathematical Modeling of Electrolyzer-Pressure Retarded Membrane Distillation (E-PRMD) Hybrid Process for Energy-efficient Wastewater Treatment and Multi-functional Desalination System

Sun-young OH1, Kiho Park2, Boram Gu1

1Chonnam National University, Korea, Republic of (South Korea); 2Hanyang University, Korea, Republic of (South Korea)

Ammonia nitrogen from domestic and industrial wastewater can lead to eutrophication, water pollution, and oxygen depletion in aquatic systems when released into the environment without any post-treatment. The removal of ammonia nitrogen from wastewater is particularly challenging and energy-intensive since it requires specialized treatment processes such as biological approaches. These technologies are known to face challenges due to the high sensitivity of microorganisms to environmental changes as well as the need for large areas and significant infrastructure [1]. Recently, alkaline electrolyzers for removing ammonia from aqueous solutions have been investigated. Electrolyzers are compact systems that enhance space efficiency and reduce the use of chemicals, leading to cost savings and a decreased environmental impact. However, due to their direct reliance on electricity, electrolyzers still have high energy consumption and incur substantial electricity costs.

To tackle this problem, we suggest a novel electrolyzer-pressure retarded membrane distillation (E-PRMD) process. The PRMD process is designed to simultaneously produce water and electricity, while the electrolyzer removes ammonia nitrogen and produces hydrogen [2]. The electrical energy produced by the PRMD can power the alkaline electrolysis, resulting in a more economically efficient integrated process. A mathematical model was developed for the E-PRMD process, incorporating reaction kinetics and mass and energy balances. Each unit model was validated using experimental results to ensure accurate model predictions. The developed model for the ammonia electrolyzer was used to predict the N2 and H2 gas generation rates at the cathode and anode sides, which were then used to assess the ammonia removal rates and impurities. The percentage of impurities in the hydrogen produced by the E-PRMD ranges from 0.1% to 3%, depending on the operating current density between 0 and 5 kA/m2. Furthermore, simulation results show that an optimal flow rate exists in terms of maximum net energy density, while the average water flux increases as the feed flow rate in the PRMD system increases.

Therefore, the E-PRMD system, which combines the two systems, not only removes ammonia nitrogen from wastewater but also enhances energy efficiency through the additional electrical energy generated. The additional production of hydrogen and high-quality freshwater further underscores the multi-functionality of the E-PRMD system. Using the developed model for the integrated system, further variations in the process configurations will be explored to investigate the interactions between the PRMD and electrolyzer systems under a wide range of operating conditions. This will allow us to identify the key variables in maximizing ammonia removal, hydrogen production, and energy production through the integrated E-PRMD process.

References

1. Y. Dong, H. Yuan, R. Zhang, N. Zhu, Removal of ammonia nitrogen from wastewater: A review, Trans ASABE 62 (2019).

2. K. Park, D.Y. Kim, D.R. Yang, Theoretical Analysis of Pressure Retarded Membrane Distillation (PRMD) Process for Simultaneous Production of Water and Electricity, Ind Eng Chem Res 56 (2017).



3:00pm - 3:20pm

Cell culture process dynamics and metabolic flux distributions using hybrid models

Rajiv Kailasanathan, Sivaram Abhishek, Mansouri Seyed Soheil

Technical University of Denmark, Denmark

Increasing global demand for bio-based products requires development of efficient bioprocesses that achieve techno-economic feasibility at scale. To achieve this, biopharmaceutical industries have been using model-based methods to understand and control the process. Depending on the available amount of prior process knowledge and the quantum of data, a spectrum of process modelling techniques ranging from purely statistical to purely mechanistic can be used. Purely mechanistic models suffer from the curse of dimensionality, as the amount of data required to fit the model increases exponentially with an increase in parameters to describe the biological phenomena. On the other hand, purely data-driven techniques demonstrate poor performance at low data availability and often fail to provide understanding of the process. Hybrid modelling strategies that combine mechanistic models with data-driven approaches to reap the benefits of both paradigms are receiving much attention1. In this study, we explore a new hybrid modelling framework that combines metabolic networks with latent variable models to provide understanding about the metabolic state of the cells and demonstrate the ability of the framework to predict the time evolution of microalgal growth dynamics in photoautotrophic regime.

Latent variable models are a class of data-driven models that assume a latent structure in the provided data to model the posterior distribution of the observed data. In this study, we use multi-dimensional microalgal growth data to model the distribution of the latent space that is mechanistically mapped to the metabolic state. One key advantage of latent variable models is the ability to generate data (x) from the latent space (z) by modelling the conditional likelihood p(x | z). This methodology has been applied in other scientific fields for anomaly detection and for understanding underlying behavior.

The latent variable is connected to the mechanistic model through a reduced metabolic network. DRUM (dynamic reduction of unbalanced metabolism) is a metabolic modelling framework that can address intracellular metabolite accumulation by dividing the complete metabolic network into subnetworks within which the quasi-steady state assumption is valid. This allows us to construct a structured mechanistic model of microalgal growth in the form of ODEs which operate with relatively limited number of variables2.

In this study, we combine a structured model describing microalgal growth in a photobioreactor with various kinds of latent variable models to construct hybrid models that can accurately predict the growth profile of various state variables. We also explore the potential of these models in describing complex metabolic effects observed in the photoautotrophic regime, e.g., night biomass loss and the diurnal cycle. Future work will explore the usability of these models in process optimization and scale-up.

References:

1. Solle D, Hitzmann B, Herwig C, et al. Between the Poles of Data-Driven and Mechanistic Modeling for Process Operation. Chem Ing Tech. 2017;89(5):542-561. doi:10.1002/cite.201600175

2. Baroukh C, Muñoz-Tamayo R, Steyer JP, Bernard O. DRUM: A New Framework for Metabolic Modeling under Non-Balanced Growth. Application to the Carbon Metabolism of Unicellular Microalgae. PLOS ONE. 2014;9(8):e104499. doi:10.1371/journal.pone.0104499



3:20pm - 3:40pm

Comparative Analysis of Green Methanol Production Systems via Electrochemical Reduction and Hydrogenation

Yuanjing Zhao1, Grazia Leonzio2, Wei Zhang1, Jin Xuan1, Lei Xing1

1University of Surrey, United Kingdom; 2University of Cagliari, Italy

The call to reduce industrial CO2 emissions has driven the development of advanced CO2 capture and utilisation technologies, with electrochemical CO2 reduction (eCO2R) standing out for its ability to convert CO2 into various fuels and chemicals. A promising solution for industrial decarbonisation is the synthesis of green methanol (MeOH) from CO2 and renewable energy. However, eCO2R technology for green methanol production is only at the laboratory stage, making it difficult to cope with large-scale industrial production, especially regarding the development of CO2 electrolysers, process design, system optimisation, economic analysis and environmental assessment. Therefore, it is important to compare different green MeOH production routes based on different system configurations to explore their performance, economic feasibility and environmental sustainability.

A comparative assessment is conducted on various green MeOH synthesis routes using direct air-captured (DAC) CO2 and renewable energy sources, focusing on the technology readiness for near-future deployment. This work provides an overview of techno-economic analysis (TEA) and life-cycle analysis (LCA) relative to the fossil-fuel based conventional process. Four models were designed and analyzed, including (1) one-step electrochemical conversion of CO2 into MeOH, (2) two-step MeOH synthesis from H2O electrolysis to produce H2 followed by CO2 hydrogenation, and (3) three-step synthesis, i.e., H2O electrolysis to produce H2, CO2 electrolysis to produce CO, followed by hydrogenation of CO2 and CO. In addition, a conventional methanol synthesis route using natural gas reforming is set as the benchmark. The surrogate models for the H2 and CO2 electrolysers, developed in MATLAB, are integrated into the main processes modelled in Aspen Plus. The operating temperature and pressure of the methanol reactor are set to 250 °C and 70 bar, respectively. Response surface methodology (RSM) is employed to obtain the surrogate models that characterise the effects of operating temperature, cell voltage, and CO2 residence time on current density, Faraday efficiency (FE), and single-pass conversion in the H2 and CO2 electrolysers. Four key process metrics are selected to evaluate the economic feasibility of a green methanol synthesis technology versus a commercial baseline of natural gas to methanol, including the levelised cost of methanol (LCOM), the levelised amount of CO2 consumed, energy efficiency, and technology readiness level (TRL). The levelised CO2 consumed for the conventional route is 1.63 kg CO2 per kg MeOH produced, compared with 1.88 to 2.35 kg CO2 per kg MeOH produced in the other three cases. Compared to the unit methanol production cost of $0.77 per kg MeOH produced from fossil fuel-based processes, the one-step methanol production method, with a cost of $0.69 per kg MeOH produced, is more economically feasible despite its relatively low single-pass conversion and energy efficiency. The three-step green methanol synthesis (Case 3) demonstrates comparatively improved performance, with a levelised cost of methanol (LCOM) of $0.64 per kg MeOH produced, a levelised CO2 consumption of 2.13 kg CO2 per kg MeOH produced, and a levelised cost of renewable electricity (LCOE) of 6.78 kWh per kg MeOH produced.



3:40pm - 4:00pm

Dynamic analysis for prediction of flow patterns in an oscillatory baffled reactor using machine learning

Hideyuki Matsumoto1, Yuma Kanbayashi1, Shiro Yoshikawa1, Shinichi Ookawara2

1Institute of Science Tokyo, Japan; 2Yasuda Women’s University, Japan

Oscillatory baffled reactors (OBR) are attracting attention for their process intensification effects, such as high mixing performance at low flow rates and long residence times, due to the vortices generated by the interaction between the oscillating flow and the baffles. The oscillatory Reynolds number (Reo) is a dimensionless parameter used for the design of the OBR. On the other hand, our previous studies have shown that when the frequency and amplitude differ at the same Reo, the difference influences the flow pattern inside the reactor and the behavior of the reaction process. Although it is well known that computational fluid dynamics simulations are effective in designing the internal structure of process equipment, calculations for unsteady processes require a high computational load.

Therefore, we propose the application of machine learning using flow visualization data as a method for predicting unsteady flow patterns. In this study, we investigated methods for the dynamic analysis of spatiotemporal data acquired by Particle Image Velocimetry (PIV) to determine the inputs and outputs for a neural network model. The proper orthogonal decomposition (POD) is a numerical method that enables a reduction in the complexity of computationally intensive simulations such as computational fluid dynamics. In order to investigate the applicability of POD to the dynamic analysis of oscillatory flow patterns, we collected PIV measurement data under conditions of low Reo, from 40 to 762. The amplitude of oscillation was varied in the range of 5 to 15 mm.

In the POD analysis of time-series data of velocity vectors in a flow field, the eigenvalues of the covariance matrix calculated from the data matrix are sorted in descending order, and the eigenvector associated with each eigenvalue is called a “Mode”. In the POD analysis of the collected data, the cumulative contribution rate of Modes 1 to 3, which have the largest contribution rates, was about 80%. When Reo was 762 and the amplitude was 5 mm, a periodic time-variation of the mode coefficients was observed for the three modes. It was found that the flow pattern for Mode 1 represents the profile of the vertical flow and the flow pattern for Mode 2 represents the generation of vortices in the upper and lower parts.

Next, we developed a multi-layered neural network model whose three outputs are the coefficients of Modes 1–3 extracted above. In the modeling, Reo, amplitude, frequency, velocity ratio for oscillatory flow and local velocity vectors were set as inputs. When training was performed with the number of hidden layers varied from 1 to 10 and the number of hidden nodes from 3 to 96, high predictive performance was obtained for the generation, size, and movement of vortices. On the other hand, poorer predictive performance was observed at lower Reo. Hence, it was demonstrated that the three sets of modes and mode coefficients extracted by POD can be useful for dynamic analysis and prediction of time-variant flow patterns in an OBR operated at low Reo.
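As an illustration of the POD step described above, the following sketch computes spatial modes, mode coefficients and cumulative contribution rates from a snapshot matrix via the singular value decomposition; the snapshot data here are random placeholders, not the PIV measurements of the study.

# Illustrative POD of velocity snapshots via SVD (placeholder data, not PIV measurements).
import numpy as np

def pod(snapshots, n_modes=3):
    """snapshots: (n_points, n_times) matrix of velocity data.
    Returns spatial modes, time-dependent mode coefficients and cumulative energy."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # subtract temporal mean
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = (s ** 2) / np.sum(s ** 2)                          # contribution rate of each mode
    modes = U[:, :n_modes]                                      # spatial structures ("Modes")
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes, :]             # mode coefficients vs. time
    return modes, coeffs, np.cumsum(energy)[:n_modes]

# Placeholder "PIV" data: 2000 velocity components x 400 time steps
data = np.random.default_rng(1).standard_normal((2000, 400))
modes, coeffs, cum_energy = pod(data, n_modes=3)
print("cumulative contribution of Modes 1-3:", cum_energy[-1])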

 
2:00pm - 4:00pmT1: Modelling and Simulation - Session 7
Location: Zone 3 - Room D049
Chair: Erik Esche
 
2:00pm - 2:20pm

Simulation and Experimental Validation of Biomass Gasification in a Spouted Bed Reactor: Optimization and Troubleshooting Using DWSIM

Cristina Moliner, Valerio Carozzo, Massimo Curti, Elisabetta Arato

Università di Genova, Italy

Simulation plays a crucial role in the design and optimization of gasifiers by providing a detailed understanding of the involved physical processes and complex chemical reactions without the need for extensive trial-and-error experiments. It can also serve as a valuable tool for identifying potential technical issues in experimental devices that operate below expected performance. Simulations can reveal discrepancies between theoretical predictions and actual performance, helping to identify inefficiencies or malfunctions in experimental setups. By comparing simulated outcomes with experimental data, researchers can systematically investigate the root causes of deviations, enabling targeted troubleshooting and refinement of the experimental design. This proactive approach reduces downtime, optimizes performance, and ensures that gasification systems operate closer to their intended efficiency and output.

This study presents a comprehensive simulation of biomass gasification using the open-source software DWSIM. The simulated results were compared with experimental data from a pilot-scale spouted bed reactor, featuring a square-based design with a 20 kWth capacity, using wood pellets as feedstock. The original reactor design [1] was modified to enhance its performance, and a complete experimental campaign was conducted to evaluate the effectiveness of these modifications.

The reactor's thermo-chemical conversion process was simulated using a kinetic approach in DWSIM, accounting for key parameters such as temperature profiles, equivalence ratios, and gas composition. Experimental results revealed that the reactor operated effectively at temperatures exceeding 800°C, maintaining stable conditions across a wide range of equivalence ratios. However, the distribution of products—particularly hydrogen (H2)—did not match the results expected from both literature and simulations. A joint analysis of the experimental data and the behavior expected from simulation helped to explain the observed inefficiencies and to optimize the reactor's performance.

[1] DOI 10.1002/cjce.23223



2:20pm - 2:40pm

Thermodynamic Feasibility of High-Temperature Heat Pumps in CO2 Capture Systems

Brieuc Beguin, Grégoire Léonard

University of Liège, Belgium

Conventional amine-based CO2 capture is burdened by its high energy consumption. As a result, current research efforts notably focus on process intensification. In parallel with the development of alternative solvents, many process modifications have been discussed in the literature. These modifications introduce additional process units into the initial flowsheet to enhance absorption or integrate heat more effectively. Among these modifications, heat pumping is gaining attention as it relieves the reboiler of some energy demand by increasing the temperature of available waste heat at the cost of additional mechanical work. Several configurations are available in the literature and can be broadly divided into two categories. Closed-loop heat pumps, which work with an intermediate refrigerant, operate between different process streams. In contrast, open-loop heat pumps use one of the process streams as the working fluid.

While these modifications lead to energy savings, they increase cost and complexity, requiring costly mechanical work. Therefore, the integration of such process units should be carefully considered. Published results often report different performance indicators, such as reboiler duty, equivalent work, or efficiency loss, making systematic comparison difficult without conversion to a common metric.

With this in mind, this work proposes a methodology to evaluate systematically the inclusion of a heat pump effect within CO2 capture. It relies on the theoretical framework of Pinch Analysis, a process engineering tool used to assess the minimum energy consumption of a process and the possible heat integration improvements.

The proposed methodology starts with an extensive description of the heat requirements of the system. The initial flowsheet is analysed through three successive lenses. At first, the stripping column is split into individual equilibrium stages and each stage is represented as a heat exchanger with mass exchange. This simplified model gives insight into the actual mass transfers and the corresponding energy needs at the various temperature levels. Next, the Column Grand Composite Curves (CGCC) are drawn to compare the previous description to the optimal energy distribution. Finally, conventional Pinch Analysis is used to identify heat integration opportunities between the energy consumer (that is the column) and the background process.

Once the energy consumption is well understood, heat pumping opportunities are identified thanks to the Grand Composite Curve (GCC). Both closed-loop and open-loop cycles are studied to compare their performance and integration potential. The GCC shape, which describes the heat availability at different temperature levels, is interpreted. Identified opportunities are modelled and compared. Closed-loop heat pumps are modelled by a linearised relationship between coefficient of performance (COP) and temperature lift while open-loop heat pumps are directly integrated into the flowsheet. Finally, temperature-enthalpy diagrams are used to evaluate the energy requirements before and after heat pump integration.
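A minimal sketch of the kind of linearised COP-temperature lift relationship mentioned above is shown below; the Carnot-based form, the exergy efficiency and the lift range are assumptions chosen for illustration, not the correlation used in the study.

# Sketch of a linearised COP vs. temperature-lift model (assumed efficiency and lift range).
import numpy as np

def cop_carnot(t_sink_K, lift_K, eta=0.5):
    """Carnot-based COP scaled by a constant exergy efficiency eta (assumption)."""
    return eta * t_sink_K / lift_K

lifts = np.linspace(20.0, 80.0, 13)      # temperature lift range [K] (assumed)
t_sink = 120.0 + 273.15                  # assumed reboiler-level sink temperature [K]
cop = cop_carnot(t_sink, lifts)

slope, intercept = np.polyfit(lifts, cop, 1)   # linear fit, valid only inside the lift range
print(f"COP ~ {intercept:.2f} {slope:+.3f} * lift  (lift in K)")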

To demonstrate the methodology, it is applied to an example flowsheet that models an MEA-based CO2 capture unit that separates CO2 from the flue gases of a biomass cogeneration plant.



2:40pm - 3:00pm

Comparative Assessment of Aspen Plus Modelling Strategies for Biomass Steam Co-gasification

Usman Khan Jadoon, Ismael Diaz, Manuel Rodriguez

Departamento de Ingeniería Química Industrial Y del Medioambiente, Escuela Superior de Ingenieros Industriales, Universidad Politécnica de Madrid

The urgent need to reduce global temperatures, minimize greenhouse gas emissions, and achieve energy independence has driven the search for sustainable energy solutions. Steam co-gasification of biomass and plastic waste presents a vital pathway for promoting renewable energy by offering a clean alternative for producing fuels, sustainable aviation fuels, and alcohols like methanol and ethanol. This process can significantly reduce greenhouse gas emissions and decrease reliance on fossil fuels. Modelling and simulation play a crucial role in understanding gasification behaviours, particularly in optimizing syngas yield. However, despite the importance of modelling for process optimization, there is a notable lack of comprehensive comparative studies on Aspen Plus modelling techniques for steam co-gasification in the literature.

Syngas, the primary product of biomass and plastic waste gasification, is essential for various energy applications, making the accurate prediction of its composition critical. This study addresses the gap by comparing three Aspen Plus modelling strategies—thermodynamic equilibrium modelling (TEM), restricted thermodynamic modelling (RTM), and kinetic modelling (KM)—to simulate the co-gasification of pine sawdust and polyethene using steam as the fluidizing medium. The primary aim of the research is to evaluate the effectiveness of these strategies in predicting syngas composition and to identify the most suitable approach for the co-gasification process under different operating conditions.

The methodology involves developing three separate models based on thermodynamic, restricted thermodynamic, and kinetic principles in Aspen Plus, followed by a comparison of the predicted syngas compositions with experimental data from the published literature [1]. In particular, for the restricted thermodynamic model (RTM), a detailed sensitivity analysis was conducted on 17 experimental syngas compositions to optimize the temperature approaches of the key reactions, with the aim of improving the match between predicted and measured syngas compositions. These approach temperatures were subsequently applied to calculate new syngas compositions, which were compared against the experimental results to assess the accuracy of the models.

The RTM demonstrated the highest accuracy, achieving an average Root Mean Square Error (RMSE) of 0.0296 when compared to experimental syngas compositions. In contrast, the TEM showed the least accuracy, with an RMSE of 0.1234, highlighting its limitations in predicting the syngas composition for this process. The optimal solution derived from the RTM also exhibited improved accuracy, with an RMSE of 0.0880, while the KM performed moderately, with an RMSE of 0.0929. Notably, the optimal solution derived from the RTM presents a good alternative when detailed kinetic data is unavailable, while also offering the advantage of avoiding the computational expense associated with conducting a sensitivity analysis within the RTM framework. The findings of this study contribute to the ongoing efforts to optimize syngas prediction, thereby supporting the development of more efficient and sustainable bioenergy technologies. This research paves the way for a more comprehensive evaluation of predictive accuracy in co-gasification processes, particularly for mixed feedstocks like biomass and plastic waste.
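For illustration, the RMSE metric used to rank the modelling strategies can be computed as in the following sketch; the compositions shown are placeholders, not the study's experimental or predicted data.

# Minimal RMSE comparison sketch (placeholder compositions, not study data).
import numpy as np

def rmse(y_pred, y_exp):
    """Root-mean-square error over syngas mole fractions (e.g. H2, CO, CO2, CH4)."""
    y_pred, y_exp = np.asarray(y_pred), np.asarray(y_exp)
    return float(np.sqrt(np.mean((y_pred - y_exp) ** 2)))

y_exp = [0.52, 0.28, 0.14, 0.06]   # placeholder experimental composition
y_rtm = [0.50, 0.30, 0.13, 0.07]   # placeholder RTM prediction
y_tem = [0.62, 0.20, 0.10, 0.08]   # placeholder TEM prediction
print("RTM RMSE:", rmse(y_rtm, y_exp), " TEM RMSE:", rmse(y_tem, y_exp))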

[1] Pinto, F., et al. "Co-gasification study of biomass mixed with plastic wastes." Fuel 81.3 (2002): 291-297.



3:00pm - 3:20pm

Enhancing the Technical and Economic Performance of Proton Exchange Membrane Fuel Cells Through Three Critical Advancements

Željko Penga1, Jure Penga2, Yuanjing Zhao3, Lei Xing3

1Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Croatia; 2University of Defense and Security "Dr. Franjo Tuđman"; 3University of Surrey, UK

The distribution of operating parameters within PEM fuel cells under dynamic conditions is inherently nonuniform. Traditional designs, employing constant platinum loading and temperature, often encounter operational challenges due to inefficient heat and mass transfer. When humidified reactants are used, large amounts of liquid water accumulate in porous layers of the cell, making it difficult to achieve high current densities without purging. This purging process, however, leads to significant hydrogen fuel wastage. Conversely, if the reactants are not fully humidified and the cell operates at a constant temperature, the ionomer in the membrane and catalyst layers risks dehydration and starvation. To address these issues, three critical advancements were developed by our research group.

Firstly, to address these issues, a spatially variable temperature profile, or a variable temperature flow field, can be employed. This approach helps balance the production of water and the water vapor partial pressure, ensuring fully humidified conditions across the cell's active area without requiring external humidification. This concept was explored through computational fluid dynamics and experimental research, revealing that a variable temperature flow field can be established by using a liquid coolant with a low flow rate. As the coolant heats up from the cell's waste heat, the desired temperature profile is maintained, enhancing operational efficiency.

Further improvements were achieved by combining the variable temperature profile with a graded catalyst design. Experimental and numerical studies identified the optimal pairing of a variable temperature flow field and graded Pt loading in the cathode catalyst, yielding significantly higher performance. In fact, compared to isothermal operation with uniform catalyst loading, the optimal design demonstrated a 260% increase in current density at 0.6 V, with a 19% reduction in Pt utilization and a notably more uniform current density distribution.

To enhance performance further, novel flow fields with secondary channels for water removal near the cathode outlet were developed using 3D metal printing. These designs enabled higher current densities with minimal adjustments, leveraging inertial effects in the diffusion layers caused by high flow velocities, a novel concept for PEM fuel cells, where flow is typically laminar.

Overall, these innovative approaches—combining variable temperature profiles, graded catalyst designs, and advanced flow fields—demonstrate the potential to significantly improve PEM fuel cell performance. These findings point toward the future development of tailored anisotropic heat and mass transfer systems for next-generation fuel cells, offering enhanced performance and durability.



3:20pm - 3:40pm

Multiscale Modeling of Internal Reforming in Solid Oxide Fuel Cells: A Study of Electrode Morphology and Gradient Microstructures

Hamid R. Abbasi, Masoud Babaei, Constantinos Theodoropoulos

University of Manchester, United Kingdom

This work presents a comprehensive multiscale model for Solid Oxide Fuel Cells (SOFCs), integrating microscale and macroscale simulations to analyze internal reforming and its impact on overall cell performance. Macroscopic models have been shown to accurately predict SOFC performance, but require extensive calibration with experimental results [1,2].

Our multiscale model combines a microscale and a macroscale SOFC description. The microscale model [3], [4] captures the intricate mass and charge transport phenomena at the pore scale of porous electrodes, resolving electrochemical reactions at the triple-phase boundaries and modeling chemical reactions at pore spaces. Simultaneously, the macroscale model provides a broader view of the entire cell's behaviour by solving the same transport equations on a much coarser computational mesh. The multiscale approach is particularly useful for addressing the challenges posed by concurrent chemical and electrochemical reactions at the anode, which complicate the modelling of internal reforming. To overcome these challenges, a novel approach is introduced, spatially separating the regions of chemical and electrochemical activity in the pore scale domain by taking the electrochemical active layer thickness into consideration.

The integrated multiscale model is applied to a full-scale internal reforming SOFC to explore how electrode morphology, particularly the use of gradient microstructures on the anode, influences cell performance. By varying the porosity across the anode—linearly from the fuel channel to the electrolyte—this study provides new insights into optimizing SOFC efficiency.

Optimizing the porosity distribution is crucial in internal reforming cells, where mass transport limitations play a key role in determining overall performance. The findings emphasize the importance of connecting micro- and macro-scale behaviours to enhance predictive accuracy and reduce the model's reliance on experimental calibration.

References

[1] K Tseronis, I Bonis, IK Kookos, C Theodoropoulos “Parametric and transient analysis of non-isothermal, planar solid oxide fuel cells” 2012, International journal of hydrogen energy Vol. 37, 530-547

[2] K. Tseronis, IS Fragkopoulos, I. Bonis, C. Theodoropoulos. “Detailed Multi‐dimensional Modeling of Direct Internal Reforming Solid Oxide Fuel Cells” 2016, Fuel Cells, Vol. 16, 294-312

[3] H. R. Abbasi, M. Babaei, A. Rabbani, and C. Theodoropoulos, ‘Multi-scale model of solid oxide fuel cell: enabling microscopic solvers to predict physical features over a macroscopic domain’, in Computer Aided Chemical Engineering, vol. 52, Elsevier, 2023, pp. 1039–1045.

 
2:00pm - 4:00pmT2: Sustainable Product Development and Process Design - Session 6
Location: Zone 3 - Room E032
Chair: Edgar Ramirez
Co-chair: Guido Sand
 
2:00pm - 2:20pm

Decarbonized Hydrogen Production: Integrating Renewable Energy into an Electrified SMR Process with CO₂ Capture

Joohwa Lee1, Haryn Park1, Bogdan Dorneanu2, Jin-Kuk Kim1, Harvey Arellano-Garcia2

1Department of Chemical Engineering, Hanyang University, Republic of Korea (South Korea); 2FG Prozess- und Anlagentechnik, Brandenburgische Technische Universitat Cottbus-Senftenberg, Germany

The increasing recognition of hydrogen as both a clean energy source and a vital chemical feedstock has amplified interest in sustainable hydrogen production. Among the various production methods, Steam Methane Reforming (SMR) based on furnace heating is widely regarded as an economic solution for large-scale hydrogen production. However, it requires significant heat energy, which results in a large amount of CO₂ emissions. To address these challenges, Wismann et al. [1], in collaboration with Haldor Topsoe, developed an innovative Electric Heating Steam Methane Reformer (EH-SMR), providing a promising eco-friendly alternative to the conventional fossil fuel-based furnace heating technology. While several studies, including those by Song et al. [2], Mehanovic et al. [3], and Do et al. [4], have explored the process design and techno-economic evaluation of hydrogen production via electric heating reformers, most focus on the EH-SMR process itself, with limited consideration of system-wide energy integration or decarbonization through the introduction of renewable energy.

This study extends the scope of electrified hydrogen production by systematically evaluating not only the electrification of the SMR process but also its integration with renewable energy systems. When renewable energy sources such as solar and wind are integrated into hydrogen plants, the intermittent nature of their energy production must be systematically considered, owing to its dependence on weather conditions. To ensure a reliable energy supply for the electrified hydrogen plant, battery storage systems are introduced to store surplus energy and to cover deficits. To manage these fluctuations effectively, a system-wide optimization approach is employed to integrate the renewable energy systems, ensuring a reliable energy supply and high energy efficiency.
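A simple, illustrative energy-balance sketch of how battery storage can buffer the mismatch between intermittent renewable supply and the electrified plant demand is given below; the hourly profiles, capacity and efficiency are assumptions, not values from the study.

# Illustrative hourly battery dispatch: charge with surplus renewables, cover deficits.
# All numbers below are assumptions, not study data.

def dispatch(renewable_kw, demand_kw, capacity_kwh, soc_kwh=0.0, eta=0.9):
    """Returns the hourly grid import needed after battery charging/discharging."""
    grid_import = []
    for gen, load in zip(renewable_kw, demand_kw):
        surplus = gen - load
        if surplus >= 0:                                    # charge the battery with surplus
            soc_kwh = min(capacity_kwh, soc_kwh + eta * surplus)
            grid_import.append(0.0)
        else:                                               # discharge first, then import the rest
            from_batt = min(soc_kwh, -surplus / eta)
            soc_kwh -= from_batt
            grid_import.append(-surplus - eta * from_batt)
    return grid_import

solar = [0, 0, 150, 400, 600, 500, 200, 0]                  # placeholder kW profile
load = [300] * 8                                            # assumed constant electrified-SMR demand, kW
print(dispatch(solar, load, capacity_kwh=1000.0))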

In this contribution, a process modelling and simulation framework is developed for an electrified hydrogen plant subject to renewable energy integration. Case studies examine the configurational and operational changes when the electrified SMR hydrogen plant is integrated with renewable energy systems. A techno-economic assessment of these case studies estimates the Cost of Hydrogen (COH) and the CO₂ avoidance cost (CAC), which enhances our understanding of the economic feasibility of renewable-based electrification for hydrogen production. The results of this study provide conceptual guidelines for the clean and sustainable production of hydrogen through renewable-powered electrification, contributing to the decarbonization of the hydrogen industry and the global energy transition.

References

1. S. T. Wismann, J. S. Engbæk, S. B. Vendelbo, F. B. Bendixen, W. L. Eriksen, K. Aasberg-Petersen, C. Frandsen, I. Chorkendorff and P. M. Mortensen, Science, 2019, 364, 756-759.

2. H. Song, Y. Liu, H. Bian, M. Shen and X. Lin, Energy Conversion and Management, 2022, 258, 115513.

3. D. Mehanovic, A. Al-Haiek, P. Leclerc, D. Rancourt, L. Fréchette and M. Picard, Energy Conversion and Management, 2023, 276, 116549.

4. T. N. Do, H. Kwon, M. Park, C. Kim, Y. T. Kim and J. Kim, Energy Conversion and Management, 2023, 279, 116758.



2:20pm - 2:40pm

Optimised integration strategies for PMR-based H2 production with CO2 capture

Donghoi Kim1, Zhongxuan Liu2, Rahul Anantharaman1, Thijs A. Peters3, Truls Gundersen2

1SINTEF Energy Research; 2Norwegian University of Science and Technology (NTNU); 3SINTEF Industry


To achieve low-carbon hydrogen production, a novel protonic membrane reformer (PMR) has been developed that uses electricity to convert and separate natural gas into pure hydrogen, while generating a CO2-rich retentate gas [1,2]. This retentate composition enables efficient low-temperature carbon capture, achieving high CO2 capture rates with reduced complexity and cost [3,4]. Thus, the PMR system produces hydrogen with a low carbon intensity, which can be reduced further with low-carbon electricity.

For process intensification, the integration of PMR with CO2 liquefaction offers several potential configurations that optimise energy efficiency and maximise hydrogen and CO2 recovery. This is mainly related to the handling of impurities in the retentate, such as unconverted methane and carbon monoxide. Earlier studies have focussed on a single configuration of this hybrid system, where a water gas shift reactor is used to convert the carbon monoxide and the other impurities are handled by purging a slip stream from the liquefaction unit [5]. This study therefore extends earlier work by proposing and analysing several hybrid configurations that explore different approaches to integrating hydrogen production and CO2 capture.

The focus of this work is on the management of the residual gas from the CO2 liquefaction process, which has a significant impact on system performance. In particular, (1) the residual gas may contain significant amounts of hydrogen, which directly affects the hydrogen recovery rate, and (2) its flow rate affects both power consumption and capital cost when recycled to the PMR for further hydrogen recovery. Through a comparative analysis, this study aims to identify the optimal configuration that balances energy efficiency, hydrogen and CO2 recovery and economic viability.

References

[1] Malerød-Fjeld H, Clark D, Yuste-Tirados I, Zanón R, Catalán-Martinez D, Beeaff D, et al. Thermo-electrochemical production of compressed hydrogen from methane with near-zero energy loss. Nature Energy 2017;2:923–31. https://doi.org/10.1038/s41560-017-0029-4.

[2] Clark D, Malerød-Fjeld H, Budd M, Yuste-Tirados I, Beeaff D, Aamodt S, et al. Single-step hydrogen production from NH3, CH4, and biogas in stacked proton ceramic reactors. Science 2022;376:390–3. https://doi.org/10.1126/science.abj3951.

[3] Berstad D, Anantharaman R, Nekså P. Low-temperature CO2 capture technologies – Applications and potential. International Journal of Refrigeration 2013;36:1403–16. https://doi.org/10.1016/j.ijrefrig.2013.03.017.

[4] Kim D, Berstad D, Anantharaman R, Straus J, Peters TA, Gundersen T. Low Temperature Applications for CO2 Capture in Hydrogen Production. Computer Aided Chemical Engineering 2020;48:445–50. https://doi.org/10.1016/B978-0-12-823377-1.50075-6.

[5] Kim D, Liu Z, Anantharaman R, Riboldi L, Odsæter L, Berstad D, et al. Design of a novel hybrid process for membrane assisted clean hydrogen production with CO2 capture through liquefaction. Computer Aided Chemical Engineering 2022;49:127–32. https://doi.org/10.1016/B978-0-323-85159-6.50021-X.



2:40pm - 3:00pm

Assessing the Synergies of Thermochemical Energy Storage with Concentrated Solar Power and Carbon Capture

Nitin Dhanenjey R, Ishan Bajaj

Department of Chemical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India

Two classes of technology that can mitigate CO2 emissions are carbon capture and storage (CCS) and renewable energy generation from sources such as solar and wind. These two technologies have been primarily developed independently. However, their hybridization can offer complementary benefits and lower the cost of greenhouse gas abatement.

Concentrated solar power (CSP) with thermal energy storage (TES) is a promising strategy to deliver cost-effective, reliable, and dispatchable renewable power. Among various TES technologies, thermochemical energy storage (TCES) is especially appealing for next-generation CSP plants because of its high energy density and ability to deliver heat at high temperature (> 1000 °C). A recent material screening study concluded that, due to the low energy density of TCES materials, even the most economically favorable reaction system leads to an LCOE more than 50% higher than the monthly average electricity retail price [1].

Accordingly, we propose a novel process that integrates a redox-based TCES system with an energy-dense fossil-fuel power plant and CSP. In contrast to the CSP-TCES process, where TCES is used only for energy storage, we propose utilizing the redox system both for energy storage and as a source of oxygen for fuel combustion. The integrated process is referred to as CSP-TCES-CCS.

The CSP-TCES-CCS process operates as follows. During the day, the heliostats focus sunlight on the receiver, where photons are absorbed and the heat transfer fluid (HTF) is heated. The flow of HTF is split such that one fraction drives the forward endothermic reaction (MxOz ↔ MxOy + (z-y)/2 O2) and the other heats the working fluid. The oxygen released by the forward endothermic reaction is stored and used for fuel combustion during the night or winter months. The exhaust from fuel combustion mainly contains CO2 and H2O, which are cooled and separated using a flash column. The CO2 stream is then compressed and stored. The reverse exothermic reaction, which oxidizes the reduced metal oxide, occurs in the presence of air. Thus, during night operation, energy is obtained from the reverse exothermic reaction and from fuel combustion.
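As a worked stoichiometric example of the oxygen-supply idea described above (assuming natural gas is represented as CH4 and the balanced redox reaction 6 Mn2O3 → 4 Mn3O4 + O2), the amount of reduced oxide needed per mole of fuel burned can be estimated as follows; the study's optimization model handles these balances rigorously.

# Worked stoichiometric sketch linking oxygen release of the Mn2O3/Mn3O4 redox pair
# (6 Mn2O3 -> 4 Mn3O4 + O2) to natural-gas combustion (CH4 + 2 O2 -> CO2 + 2 H2O).
O2_PER_MN2O3 = 1.0 / 6.0     # mol O2 released per mol Mn2O3 reduced
O2_PER_CH4 = 2.0             # mol O2 required per mol CH4 burned

ch4_burned_mol = 1.0         # basis: 1 mol of fuel (as CH4) combusted at night
mn2o3_needed = ch4_burned_mol * O2_PER_CH4 / O2_PER_MN2O3
print(f"{mn2o3_needed:.0f} mol Mn2O3 must be reduced per mol CH4 burned")   # -> 12 mol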

We extend the stochastic programming-based optimization model developed by Bajaj et al. [1] to obtain the optimal design, operating conditions, and system performance of the CSP-TCES-CCS plant. The objective of the optimization model is to minimize the levelized cost of electricity (LCOE). The constraints of the model include mass and energy balances and sizing and performance of the plant components. Our results indicate that the LCOE of the CSP-TCES-CCS process, using natural gas as a fuel and Mn2O3/Mn3O4 as the redox system, is up to 20% lower than the CSP-TCES process. We note that the energy density of natural gas is more than 100 times that of the Mn2O3/Mn3O4 system. While the CSP-TCES-CCS process incurs additional costs for storing and compressing gases compared to CSP-TCES, it requires less Mn2O3/Mn3O4, resulting in lower material and storage tank costs.

References

[1] Bajaj et al. (2024). RSC Sustainability, 2(4), 943-960.



3:00pm - 3:20pm

Material screening and process optimization for membrane-based carbon capture via machine learning models

Romain Birling, Marina Micari

EPFL, Switzerland

Membrane processes have the potential to dramatically reduce the energy consumption and footprint of carbon capture. This potential depends on the selected material: material properties determine the membrane area and the energy required for a given separation, which in turn define the economic and environmental impact of the process.

Therefore, we need to optimize process configuration while taking into account the economic and environmental implications of the materials selected.

We propose a novel design strategy that identifies the optimal operating conditions for a wide range of materials and seamlessly assesses the economic and environmental impact of the proposed combinations of material and process. This method allows fast screening of multiple materials and can drive material selection depending on the application.

To this end, we developed a highly efficient surrogate model for the technical design of membrane processes. The proposed model takes as inputs case-study-specific data, material properties and operating conditions, and returns representative outputs which form the basis for the economic and environmental impact calculations. By integrating the model into an optimization routine based on an evolutionary genetic algorithm, we are able to build Pareto fronts for the key technical outputs, i.e., specific energy consumption and specific membrane area, for given targets of CO2 recovery and purity.

This work focuses on the development of the surrogate model for the membrane process, starting from the detailed first-principles model presented in previous works [1,2].

We followed a surrogate-based optimization approach that has so far been proposed only for adsorption process modelling [3]. This approach consists of performing a design of experiments, in which the decision variables are sampled via Latin Hypercube Sampling (LHS) to cover the entire design space, and of building a dataset by running the detailed model for all sets of decision variables. This dataset is used to train a feedforward multilayer perceptron neural network, which is then able to predict the outputs over the whole design space. The approach is particularly suitable for membrane process modelling because the short time required by the detailed model for each evaluation allows comprehensive datasets to be built that are not biased towards a specific operating region.
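The sampling-and-training workflow described above can be sketched as follows; the decision variables, bounds and the toy "detailed model" are assumptions used only to make the example self-contained.

# Sketch of the surrogate-training workflow: LHS sampling of decision variables,
# evaluation of a placeholder "detailed model", and training of a feedforward MLP.
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

def detailed_model(x):
    """Placeholder for the first-principles membrane model: returns
    [specific energy, specific area] for one set of decision variables."""
    feed_p, perm_p, area_factor = x
    energy = 0.3 * feed_p / max(perm_p, 0.05) + 0.02 * area_factor
    area = 50.0 / max(feed_p - perm_p, 0.1) * area_factor
    return [energy, area]

l_bounds, u_bounds = [2.0, 0.1, 0.5], [10.0, 1.0, 2.0]     # assumed variable bounds
sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(n=500), l_bounds, u_bounds)   # LHS over the design space
y = np.array([detailed_model(x) for x in X])               # dataset from "detailed" runs

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X, y)                                        # feedforward MLP surrogate
print(surrogate.predict(X[:3]))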

The surrogate model is able to produce Pareto fronts for a given material and case study in very short timeframes and is therefore particularly suitable for highly efficient optimization of membrane process design and fast material screening.

1. Micari, M. & Agrawal, K. V. Optimization of High-Performance Membrane Processes for Post-Combustion Carbon Capture. Comput. Aided Chem. Eng. 53, 997–1002 (2024).

2. Micari, M., Dakhchoune, M. & Agrawal, K. V. Techno-economic assessment of postcombustion carbon capture using high-performance nanoporous single-layer graphene membranes. J. Memb. Sci. 624, 119103 (2021).

3. Yan, Y. et al. Harnessing the power of machine learning for carbon capture, utilisation, and storage (CCUS)-a state-of-the-art review. Energy Environ. Sci. 14, 6122–6157 (2021).



3:20pm - 3:40pm

Modeling MEA Solvent Degradation in CO2 Capture: A Comparative Analysis Across Key Industrial Sectors in Belgium

Loris Baggio, So-mang Kim, Grégoire Léonard

ULiège, Belgium

As 2025 approaches, substantial global transformations are essential to align with the climate objectives set for 2030 and 2050 under the Paris Agreement. Several solutions exist to reduce CO2 emissions, even when emissions are unavoidable due to the chemical reactions intrinsic to certain industrial processes. For example, in sectors such as glass, lime, and steel production, the raw materials inherently produce CO2 as part of their decomposition or transformation, regardless of whether fossil or renewable energy sources are used. In these cases, emissions are a direct consequence of the material processing, not solely energy consumption. From this perspective, CO2 capture technologies offer a crucial solution for these sectors, enabling them to contribute meaningfully to global carbon reduction while maintaining long-term competitiveness in a Net Zero Emissions future.

Among the various technologies for CO2 mitigation, the absorption-regeneration process using amine solvents, particularly MEA (Monoethanolamine), remains the benchmark. Despite its widespread use, challenges persist in accurately predicting and managing key factors such as MEA degradation and the formation of degradation by-products during the absorption-regeneration cycle. These degradation issues present significant hurdles to effective solvent management, process efficiency, and reclamation.

This study presents a detailed analysis of CO2 capture processes modelled in Aspen Plus, focusing on the absorption-regeneration process with MEA while accounting for both thermal and oxidative degradation. The analysis covers five industrial cases, representing the glass, lime, phosphorus, and steel sectors, with inlet CO2 concentrations ranging from 7 to 25 vol-% and capture capacities varying between 25 and 200 kt/year. The thermal degradation model is based on the work of Lucas Braakhuis [1], while oxidative degradation is modelled using data from Grégoire Léonard [2].

By integrating these degradation kinetics into the CO2 absorption-regeneration process, this study provides a comprehensive assessment of the energy requirements for CO2 capture, the emissions of degradation products, and the loss of MEA's reactivity in forming carbamates. The study specifically focuses on the 30 wt-% aqueous MEA solvent and ensures that the captured CO2 is purified to 99.8 wt-% and compressed to 35 bar, in line with Fluxys' guidelines for future CO2 transport in Belgium.

The processes are compared to simplified models to evaluate the impact of solvent degradation on key parameters such as energy consumption (GJ/t of captured CO2), emissions of MEA and NH3, and the need for fresh solvent and water supplies. Preliminary results indicate a loss of MEA between 0.159 and 0.540 kg/ton of CO2 captured at a 90% capture rate, which aligns with the range of 0.1 to 0.8 kg/ton reported by Neerup in a recent study [3].

[1] L. Braakhuis, “Development of Solvent Degradation Models for Amine-Based CO2 Capture,” Norwegian University of Science and Technology, Trondheim, 2024.

[2] G. Léonard, “Optimal design of a CO2 capture unit with assessment of solvent degradation," Université de Liège, Liège, 2013.

[3] R. Neerup et al., “Solvent degradation and emissions from a CO2 capture pilot at a waste-to-energy plant,” J. Environ. Chem. Eng., vol. 11, no. 6, p. 111411, Dec. 2023, doi: 10.1016/j.jece.2023.111411.



3:40pm - 4:00pm

Integrating Direct Air Capture and HVAC Systems: A Techno-Economic Perspective on Efficiency and Cost Savings

Ikhlas Ghiat1, Yasser M. Abdullatif1,2, Yusuf Bicer1, Abdulkarem I. Amhamed2, Tareq Al-Ansari1,2

1College of Science and Engineering, Hamad Bin Khalifa University, Qatar; 2Qatar Environment and Energy Institute (QEERI), Hamad Bin Khalifa University, Qatar

Direct Air Capture (DAC) technology has gained significant attention as a promising solution for mitigating CO2 emissions and meeting climate goals. However, its high energy demand, capital costs, and scalability issues present critical challenges to the widespread deployment of DAC systems. One potential solution to these challenges is the integration of DAC with Heating, Ventilation, and Air Conditioning (HVAC) systems in buildings. Such an integration presents an opportunity to enhance indoor air quality while simultaneously capturing CO2, potentially lowering the energy consumption and capital investment associated with standalone DAC systems.

This study investigates the techno-economic performance of a DAC-HVAC integrated system compared to a standalone DAC system. An important focus is scaling up lab-scale adsorbent filters and using experimental data on sorbent efficiency and stability to model the techno-economic feasibility of a full-scale system within an Air Handling Unit (AHU) of buildings. The adsorbent filter used in this study is 3D-printed and consists of amine-functionalized SBA-15, a mesoporous silica material known for its high CO2 adsorption capacity. Previous studies by the authors have demonstrated the stability and performance of amine-functionalized SBA-15, providing key data for modelling the system's energy requirements.

The economic analysis covers capital expenditures, as well as variable and fixed operating expenditures, for both the DAC-HVAC integrated system and the standalone DAC system. Various economic metrics, such as the levelized cost of CO2 capture, internal rate of return, discounted payback period, benefit-cost ratio, and break-even point, are evaluated to provide a comprehensive comparison. Moreover, a detailed sensitivity analysis is conducted to explore the influence of key variables, including discount rates, electricity prices, and CO2 selling prices, on the overall economic performance of the systems.

An important consideration in the study is the trade-off between the thickness of the filter and its impact on both pressure drop and blower power requirements, as well as filter replacement costs. The pressure drop across the filter can vary between 1600 Pa/m and 2100 Pa/m, depending on airflow velocity, which is influenced by the configuration of the two parallel filters. Thicker filters may reduce the need for frequent replacements but increase blower energy consumption due to higher pressure drops. The optimization of filter thickness therefore plays a crucial role in minimizing operational costs.

The results demonstrate the economic advantages of integrating DAC with HVAC systems, including lower energy consumption and capital costs compared to standalone DAC systems, as well as improvements in indoor air quality. For a specific filter thickness, blower power ranges from 18 kW for DAC-HVAC to 71 kW for standalone DAC. The levelized cost of capture for the DAC-HVAC system is approximately 130 $/t CO2, with an estimated payback period of 3 years. This detailed techno-economic comparison highlights the cost savings and enhanced system efficiency of DAC-HVAC integration, as well as the potential for scaling up DAC technologies and incorporating them into existing building infrastructure.
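A back-of-the-envelope sketch of the filter-thickness/blower-power trade-off is given below; only the 1600-2100 Pa/m pressure-drop range is taken from the abstract, while the airflow, filter thicknesses and blower efficiency are assumed values.

# Blower power = volumetric flow * pressure drop / efficiency.
# Only the 1600-2100 Pa/m range comes from the abstract; the rest is assumed.

def blower_power_kw(flow_m3_s, dp_per_m, thickness_m, efficiency=0.65):
    """Blower power in kW for a filter of given thickness and specific pressure drop."""
    return flow_m3_s * dp_per_m * thickness_m / efficiency / 1000.0

flow = 10.0                       # assumed AHU airflow, m3/s
for dp in (1600.0, 2100.0):       # Pa per metre of filter (from the abstract)
    for t in (0.05, 0.10, 0.20):  # assumed filter thicknesses, m
        print(f"dp={dp:.0f} Pa/m, t={t:.2f} m -> {blower_power_kw(flow, dp, t):.1f} kW")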

 
2:00pm - 4:00pmT3: Large Scale Design and Planning/ Scheduling - Session 5
Location: Zone 3 - Aula D002
Chair: Michael Short
Co-chair: Marianne Boix
 
2:00pm - 2:20pm

Materials-Related Challenges of Energy Transition

Fatemeh Rostami1, Piera Patrizio2, Laureano Jimenez1, Carlos Pozo1, Niall Mac Dowell2

1Universitat Rovira i Virgili, Spain; 2Imperial College London, UK

The production and utilization of fossil fuels have long been criticized for their wide-ranging impacts, such as environmental degradation. Conversely, developing clean energy technologies (CETs) is often hailed as a panacea to reduce dependence on fossil fuels and mitigate greenhouse gas emissions. Despite these perceived benefits, limited efforts have been made to evaluate the requirements and implications of the transition to CETs.

With the development of CETs, mining will emerge as a pivotal geostrategic sector. Therefore, estimating the material requirements for these technologies’ development is essential. Our research estimates the material requirements for the widespread adoption of CETs, assesses the capabilities to develop these technologies, and analyzes the recycling rates required to meet the projected capacity of CETs.

We start by translating the capacity of CETs forecasted by eight Integrated Assessment Models (IAMs) into the corresponding requirements for 36 key materials in the development of CETs. These include critical minerals, rare earth elements, platinum group materials, and structural materials. Our calculations reveal that meeting these projections requires scaling material supply chains up at an unprecedented rate. When considering diverse technology types and their material requirements – information missing from IAMs – we find that this may represent a substantial 571-fold surge in selenium demand and a 531-fold increase in gallium demand, figures that seem difficult to achieve [1]. This challenges the capacity of material reserves and the rate at which these can be produced. In turn, this diminishes the practical usefulness of IAMs, which are perceived as crucial tools in guiding academic discussions and shaping policy strategies.

To address this gap, we adopt a novel bottom-up approach in which material availability constraints are considered to estimate the capacity of CETs that could realistically be deployed. We find potential shortages compared to IAM projections that may result in deviations from the Paris Agreement target by 0.06–0.95 °C. At this point, IAM projections for CETs could still be met by increasing material availability through recycling. However, we found that the recycling rate required for some materials, such as lithium, would be above 300%, which does not seem an easy target to achieve.

Overall, this contribution quantifies potential shortages in technology capacities and the need to increase materials production rates. It also emphasizes the crucial role of incorporating these factors into IAMs for more accurate predictions and highlights the materials that developers should focus on. These findings provide crucial insights for evidence-based policymaking, aiming at a seamless transition towards sustainable energy systems.

References

1. Rostami F, Patrizio P, Jimenez Esteller L, Pozo Fernandez C, MacDowell N. Assessing the Realism of Clean Energy Projections. Energy Environ Sci. doi:10.1039/D4EE00747F



2:20pm - 2:40pm

Development of a hybrid, semi-parametric Simulation Model of an AEM Electrolysis Stack Module for large-scale System Simulations

Isabell Viedt1,2,3, Michael Große2,3, Leon Urbas1,2,3

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group; 3TUD Dresden University of Technology, Process-to-Order Lab

A key technology in integrating fluctuating, renewable energy in the process industry is the production of green hydrogen using water electrolysis plants. The scale-up of such electrolysis plant capacity remains a major challenge in achieving a successful transition to renewable energy. With this, the system simulation of these large-scale electrolysis plants can be utilized for process design but also for process monitoring and optimization, and maintenance scheduling (Mock et al., 2024).

Since the underlying processes for the simulation models are often not completely understood, hybrid modeling methods are a promising approach to combine process knowledge with process data for more reliable and precise simulation models (von Stosch et al., 2014). Such hybrid, semi-parametric models have achieved better accuracy than purely knowledge-driven mechanistic models.

In this work, a hybrid, semi-parametric model for an anion exchange membrane (AEM) electrolyzer module is developed. The basis of this model is a mechanistic model of the AEM stack (Große et al., 2024). The heat loss and heat transfer within the stack, pump, heat exchanger and piping cannot be measured directly and are therefore approximated through parameter estimation from real process data. Available sensors collect data for the temperature in the lye storage tank, the outlet temperature after the stack unit, and the flow rate into the stack unit. Using the available sensor data and the mechanistic stack model, the heat transfer coefficient and the heat loss within the peripheral components are estimated. The hybrid, semi-parametric model is then validated against real process plant data of the AEM stack module for different load settings of the electrolysis.
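The parameter-estimation step can be illustrated with the following sketch, which fits a heat-loss coefficient UA of a lumped lye-loop energy balance to measured outlet temperatures; the model form, sensor values and parameter values are assumptions, not the authors' stack model.

# Illustrative least-squares estimation of a heat-loss coefficient UA from
# temperature/flow "sensor" data (placeholder values, simplified lumped model).
import numpy as np
from scipy.optimize import least_squares

CP = 3.1e3        # assumed lye heat capacity, J/(kg K)
T_AMB = 298.15    # ambient temperature, K

def outlet_temperature(params, t_in, m_dot, q_stack_w):
    """Steady-state balance: m_dot*cp*(T_out - T_in) = Q_stack - UA*(T_out - T_amb)."""
    ua = params[0]
    return (m_dot * CP * t_in + q_stack_w + ua * T_AMB) / (m_dot * CP + ua)

# Placeholder "sensor" data for several load settings
t_in = np.array([318.0, 320.0, 322.0, 325.0])        # K
m_dot = np.array([0.20, 0.22, 0.25, 0.28])           # kg/s
q_stack = np.array([4.0e3, 5.5e3, 7.0e3, 9.0e3])     # W of stack waste heat
t_out_meas = np.array([324.1, 327.6, 330.2, 334.5])  # K

res = least_squares(lambda p: outlet_temperature(p, t_in, m_dot, q_stack) - t_out_meas,
                    x0=[20.0], bounds=(0.0, np.inf))
print("estimated UA [W/K]:", res.x[0])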

To evaluate the applicability of the hybrid, semi-parametric AEM model within a large-scale system simulation context, a large-scale electrolysis plant configuration is designed, including multiple AEM stack modules, a water supply module and multiple post-processing steps for the produced hydrogen and oxygen. This plant configuration is then simulated using both the hybrid, semi-parametric AEM model and the purely mechanistic AEM model to compare the accuracy and simulation efficiency of the two model types.

In future work, this hybrid, semi-parametric model could be the basis for creating more efficient system simulations utilizing surrogate models.

References

Große, M., Viedt, I., Lange, H., and Urbas, L., (2024). Electrolysis stack modeling for dynamic system simulation of modular hydrogen plants, International Journal of Hydrogen Energy. (in review)

Mock, M., Viedt, I., Lange, H., & Urbas, L. (2024). Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains. In Computer Aided Chemical Engineering (Vol. 53, pp. 1885-1890). Elsevier. doi: 10.1016/B978-0-443-28824-1.50315-X.

von Stosch, M., Oliveira, R., Peres, J., and de Azevedo, S.F., (2014). Hybrid semi-parametric modeling in process systems engineering: Past, present and future, Computers & Chemical Engineering, 60, 86-101.



2:40pm - 3:00pm

Energy system modelling for studying flexibility on industrial sites

Jon Vegard Venås, Lucas Ferreira Bernardino, Kasper Emil Thorvaldsen, Sigrid Aunsmo, Sigmund Eggen Holm, Halvor Aarnes Krog, Ove Wolfgang, Ingeborg Treu Røe

SINTEF Energy Research, Norway

To meet the ambitious net zero target of the EU by 2050, it is a top priority to transition from fossil fuels to renewable sources. However, unlike traditional fossil fuel power plants, which can adjust output to match demand, non-dispatchable renewable sources like solar and wind are subject to natural variability and cannot be controlled to meet immediate demands. This is a challenge in the industrial sector where consistent and predictable energy usage is crucial. The EU project Flex4Fact aims to develop solutions to leverage energy and process flexibility in industry to meet a future with high renewable energy penetration.

A part of this project seeks to identify optimal investment strategies for enhancing energy flexibility, i.e. the capability of an industry to adapt to variable energy production. The investment strategies arise from energy system modelling, where the key is to understand how different technologies, such as solar power and batteries, complement the process flexibility. To enable these assessments, SINTEF’s open-source energy system model, EnergyModelsX [1], has been further developed to specifically address flexibility requirements at industrial sites. The flexibility aspects considered include process flexibility modelled as energy load shifting, allowing multiple energy carriers to cover the demand of single processes, and energy storage in the form of electric batteries. Moreover, this work focuses on the synergies between these flexibility options and integrated on-site renewable expansion. Lastly, several sensitivity analyses are conducted to assess the robustness of the investment strategies towards changes in market prices or scaled production.

The extended EnergyModelsX model is demonstrated through two case studies in the plastic and polymeric products manufacturing sector to evaluate their potential for increasing renewable generation and flexibility. The first use case, being energy intensive, consumes both natural gas and electricity. The main focus of this use case is heat recovery and utilization, hydrogen blending, on-site hydrogen production, and how these can reduce CO2 emissions. The second use case relies solely on electricity consumption, and the considered flexibility is energy shifting by electric batteries and production flexibility. The focus of this case study is on the interplay between energy storage, on-site energy production and process flexibility to increase the share of self-produced renewable energy in the energy mix. Together, the two case studies demonstrate how the extended EnergyModelsX framework can be used to explore process and energy flexibility in industry and to aid the transition from a fossil-based to a renewable-based society.

[1] L. Hellemo, E. F. Bødal, S. E. Holm, D. Pinel, and J. Straus, “EnergyModelsX: Flexible Energy Systems Modelling with Multiple Dispatch,” Journal of Open Source Software, vol. 9, no. 97, p. 6619, May 2024, doi: 10.21105/joss.06619.



3:00pm - 3:20pm

An MIQCP Reformulation for the Optimal Synthesis of Thermally Coupled Distillation Networks

Kevin Pfau1, Arsh Bhatia1, George Ostace2, Goutham Kotamreddy2, Carl Laird1

1Carnegie Mellon University, United States of America; 2Braskem, 550 Technology Dr., Pittsburgh, PA

Superstructure-based approaches have long been a powerful method for optimal process synthesis problems. Specifically, there have been decades of research into the synthesis of distillation networks, due to their ubiquitous use in industry and the high cost of separation. The mathematical programs that result from such process synthesis problems are often large-scale mixed-integer nonlinear programs (MINLPs). These MINLPs are challenging to solve, even with state-of-the-art commercial solvers. Many solution approaches in the literature rely on decomposition heuristics that exploit the structure of a specific problem formulation.

In this work we present a modeling approach for networks of thermally linked distillation columns. We address two of the major difficulties of previous superstructure approaches: problem generation and solution of the resulting mathematical program. For larger distillation network problems, generating the superstructure and index sets manually is cumbersome and error prone. We develop and employ an algorithmic approach for automatically generating state-task network superstructures and their corresponding index sets. The algorithmic approach allows for the generation of a problem of arbitrarily large size (N components in the system feed), limited only by computational cost.

Using the formulation for thermally coupled columns from Caballero & Grossmann (2004), the separation tasks and heat exchangers for a network of distillation columns are modeled using generalized disjunctive programming. Shortcut equations are used to model column behavior and aid in problem tractability. The FUG (Fenske-Underwood-Gilliland) equations can provide reasonable approximations of column behavior for multicomponent separations under certain assumptions. However, for large MINLPs, even models with Underwood equations can still be challenging to solve. After transformation to an MINLP, commercial solvers can fail to find solutions in reasonable computational times, even for small problem sizes. Through the introduction of intermediate variables, the Underwood equations can be reformulated from a general nonlinear form to bilinear equations. The resulting model is a mixed-integer quadratically constrained program (MIQCP). We can leverage recent advances in solver performance for quadratically constrained programs and solve the reformulated MIQCP using Gurobi where other solvers fail.
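A sketch of the kind of reformulation described, using the standard Underwood equations (the exact formulation in the contribution may differ): with feed component flows f_i, distillate component flows d_i, relative volatilities \alpha_i, feed quality q and Underwood root \theta,

\sum_i \frac{\alpha_i f_i}{\alpha_i - \theta} = (1-q)\,F, \qquad V_{\min} \ge \sum_i \frac{\alpha_i d_i}{\alpha_i - \theta}.

Introducing the intermediate variables w_i = 1/(\alpha_i - \theta) turns these into bilinear (quadratic) constraints,

w_i\,(\alpha_i - \theta) = 1 \;\; \forall i, \qquad \sum_i \alpha_i f_i w_i = (1-q)\,F, \qquad V_{\min} \ge \sum_i \alpha_i d_i w_i,

in which only products of two continuous variables (w_i\theta, f_i w_i, d_i w_i) appear, so the model falls into the MIQCP class that Gurobi can handle.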

The ability to rapidly generate and solve large process synthesis problems is crucial in facilitating early-stage design decisions. Given the significant capital and operating costs associated with distillation networks, as well as their extensive application in the industry, optimizing these networks can lead to substantial energy and monetary savings. Our approach not only enhances the efficiency of solving such complex problems but also provides a scalable solution applicable to a wide range of process synthesis challenges.

References

Caballero, J. A., & Grossmann, I. E. (2004). Design of distillation sequences: From conventional to fully thermally coupled distillation systems. Computers & Chemical Engineering, 28(11), 2307–2329. https://doi.org/10.1016/j.compchemeng.2004.04.010



3:20pm - 3:40pm

A Blockchain-Supported Framework for Transparent Resource Trading and Emission Management in Eco-Industrial Parks (EIPs)

Manar Oqbi2, Dhabia Al-Mohannadi1

1Texas A&M University, Qatar; 2Texas A&M University, College Station

Sustainable industrial development depends on optimizing resource and energy integration within eco-industrial parks (EIPs), combined with stringent carbon emissions reduction policies. The main challenge is ensuring transparency, accountability, and data privacy while optimizing the conversion of raw materials and energy into valuable products and controlling emissions within EIPs. This research introduces an innovative framework to design optimized EIPs and deploy a blockchain-enabled trading platform for resources and emissions management, tackling these key issues. The proposed framework incorporates integrated EIPs combined with emission control policies, supported by two related systems: one for blockchain-based resource trading and the other for emissions control. The resource trading platform fosters transparency, enabling accurate tracking of material and energy flows. Furthermore, the framework integrates a mixed-integer nonlinear programming (MINLP) model with smart contracts on the Ethereum blockchain, ensuring data privacy, traceability, and equitable cost distribution among processes to meet environmental targets. The model also determines emission reductions and investments in carbon capture technology, promoting operational efficiency. Offering a powerful tool to decision-makers and authorities, this framework enhances comprehension of resource and emissions tracking, paving the way for the development of innovative policies and fostering regulatory compliance. This development underscores a leap in promoting sustainable industrial activities and aligning with environmental goals.



3:40pm - 4:00pm

A digital scheduling hub for natural gas processing: a Petrobras case-study using rigorous process simulation

Tayná E. G. de Souza1,2, Letícia C. dos Santos1, Caio R. Soares3,4

1Petrobras – Petróleo Brasileiro S.A., Brazil; 2Chemical Engineering Program/COPPE, Federal University of Rio de Janeiro, Brazil; 3CELIGA Electric Maintenance Ltda, Brazil; 4School of Chemistry, Federal University of Rio de Janeiro, Brazil

Natural gas processing is a crucial step in the Oil & Gas chain, gaining importance due to the high gas-oil ratio in the Pre-Salt layer and its role as a transition energy source. As midstream facilities, processing plants handle dynamic operations, a scenario intensified by Brazil's gas market opening, which allowed third-party access to sites like Petrobras’. This shift increased scheduling demands, requiring thousands of process simulations monthly and further enhancing their role. Short, medium, and long-term scheduling results drive corporate decisions across departments: Logistics, Market, Strategic Outlook, Performance Assessment, and Gas Planning & Optimization.

In this context, this work proposes an innovative digital scheduling tool for industrial application (IntegraGas: Integrated-Gas-Scheduling-Hub), which was tested as a case study and is currently in use at Petrobras for plant modeling, automation, integration and management of three distinct scheduling processes across four industrial gas processing assets. The implemented framework uses Aspen HYSYS for process simulation, VBA to manage two-way communication with the process simulator, Microsoft Excel as the data-transfer and end-user interface, and Power BI as the analytics layer. IntegraGas includes several built-in features tailored to user experience and scheduling needs: 1) import/export of data from/to third-party systems, 2) selection and automatic execution of a sequence of what-if scenarios, 3) checking of legal limits and other product properties (automated warnings upon constraint violations), 4) free configuration of plant setup and operation modes, 5) quick graph visualization and insights.

IntegraGas’s engineering core lies in first-principles process models, a virtual representation developed in the light of each plant’s PFDs and validated against actual industrial data (average deviations of 4-6%). The simulations were specifically tailored for the scheduling application, ensuring an appropriate compromise between model fit, execution time and open-market transparency requirements. In addition, extensive engineering work was carried out to map the key process plant variables, i.e. those to which production scheduling is most sensitive, and to add them as front-end user inputs (manipulated variables in the simulation models).

Historically, scheduling tools have used overly simplified models and manual flows of information into, out of and within the process. This is mainly due to past limitations in computational power, which hindered the use of first-principles, plant-wide modeling and the development of automated digital tools, which were also less needed in the slower-paced market of the past. For current industry dynamics, however, this former path is no longer satisfactory, as new needs have arisen, particularly the integration of scheduling tools with day-to-day strategic corporate decisions and with online optimization tools such as process digital twins and process automation layers. This work enters this new industry reality by providing an integrated, robust and automated solution aligned with industry digitalization. The use of IntegraGas: 1) enabled fulfillment of scheduling processes for open-market contracts, avoiding company exposure to penalties; 2) provided efficiency gains, reducing the daily scheduling execution time by 24 h (30 simulations in less than 1 h) and giving the user appropriate time for critical output review; 3) increased the reliability of engineering results and of data flow between company departments. IntegraGas thus presents itself as a groundbreaking tool towards a novel approach to gas processing scheduling.
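For readers interested in the scenario-batching idea behind feature (2) above, the following minimal Python sketch shows how a sequence of what-if cases could be executed and screened against legal limits. The actual IntegraGas implementation drives Aspen HYSYS through VBA and Excel; here the simulator call is abstracted behind a hypothetical run_case() stub, and the limits are illustrative only.

```python
# Minimal sketch of the "batch what-if scenario" idea described above.
# The real tool uses VBA/Excel to drive Aspen HYSYS; the simulator call is
# abstracted here behind a hypothetical run_case() stub.

LEGAL_LIMITS = {"C5+_dew_point_C": 0.0, "H2S_ppm": 10.0}  # illustrative limits only

def run_case(plant_setup: dict, feed: dict) -> dict:
    """Hypothetical hook that would forward the scenario to the process
    simulator and return stream/product properties; replaced here by a stub."""
    raise NotImplementedError("connect to the process simulator here")

def check_limits(results: dict) -> list[str]:
    """Return warnings for any property violating its legal limit."""
    return [f"{prop} = {results[prop]} exceeds limit {limit}"
            for prop, limit in LEGAL_LIMITS.items()
            if results.get(prop, float("-inf")) > limit]

def run_what_if(scenarios: list[dict]) -> list[dict]:
    """Execute a sequence of what-if scenarios and attach constraint warnings."""
    report = []
    for sc in scenarios:
        results = run_case(sc["plant_setup"], sc["feed"])
        report.append({"name": sc["name"], "results": results,
                       "warnings": check_limits(results)})
    return report
```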

 
2:00pm - 4:00pmT4: Model Based optimisation and advanced Control - Session 5
Location: Zone 3 - Aula E036
Chair: Sigurd Skogestad
Co-chair: Mattia Vallerio
 
2:00pm - 2:20pm

Utilizing ML surrogates in CAPD: Case study for an amine-based carbon capture process

Florian Baakes, Gustavo Chaparro, Thomas Bernet, George Jackson, Amparo Galindo, Claire S. Adjiman

Department of Chemical Engineering, Sargent Centre for Process Systems Engineering, Institute for Molecular Science and Engineering, Imperial College London, SW7 2AZ, London, UK

Reducing our carbon emissions to or below zero must be our main objective to mitigate the impact of climate change and retain a liveable environment for the coming generations. Carbon capture and utilization or storage is a promising approach to achieve this goal [1]. Amine-based solvents are already used in industrial settings owing to their high capacity to absorb carbon dioxide (CO2) from flue gases, combined with a relatively easy regeneration to release and store the captured CO2. However, the regeneration of the CO2-loaded solvent is highly energy intensive, leading to around 30% of a power plant’s energy being lost. Thus, there is a high demand for new amine(s) or amine blends that can lower process costs and energy requirements.

To explore the vast chemical and process design space of possible solvents and operating conditions, we previously developed an integrated algorithm to optimize solvent structures and process conditions [2]. However, the non-linear relationships between structure, properties, and process limit the level of detail that could be considered in the process models. In this work, we use machine learning surrogates to improve the process model while maintaining tractability.

To preserve the structure-property relationship and enable the optimization of the solvent structure, we replaced the flash calculations in both columns with a low-dimensional ANN. Starting with a well-known system (monoethanolamine, MEA), we achieve up to a 50% speed-up in the optimization process while converging to equivalent process conditions. We explore the robustness of the surrogate-based model to the choice of starting point, a common challenge when considering process optimisation.
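As an illustration of the surrogate idea only (not the authors' implementation), the following minimal scikit-learn sketch trains a small ANN mapping flash inputs to an output. In the actual work the labels would come from rigorous flash calculations with the full thermodynamic model; here they are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training data: inputs (T [K], P [bar], CO2 loading [-]) and one
# output (e.g. a CO2 vapour-phase fraction). Real labels would come from
# rigorous flash calculations; this response surface is a placeholder.
X = np.column_stack([rng.uniform(313, 393, 2000),
                     rng.uniform(1, 2, 2000),
                     rng.uniform(0.1, 0.5, 2000)])
y = 0.5 * np.tanh(0.05 * (X[:, 0] - 353)) + 0.3 * X[:, 2]

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=2000, random_state=0))
surrogate.fit(X, y)

# The trained surrogate can then replace the flash call inside the column model.
print(surrogate.predict([[363.0, 1.2, 0.3]]))
```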

Ultimately, this approach will serve as a starting point in the integrated molecular and process design to achieve a fast and robust convergence while preserving the ranking of optimal solvent candidates.

References

[1] M. Bui, C. S. Adjiman, A. Bardow, E. J. Anthony, A. Boston, S. Brown, P. S. Fennell, S. Fuss, A. Galindo, L. A. Hackett, J. P. Hallett, H. J. Herzog, G. Jackson, J. Kemper, S. Krevor, G. C. Maitland, M. Matuszewski, I. S. Metcalfe, C. Petit, G. Puxty, J. Reimer, D. M. Reiner, E. S. Rubin, S. A. Scott, N. Shah, B. Smit, J. P. M. Trusler, P. Webley, J. Wilcox, N. Mac Dowell, Carbon capture and storage (CCS): the way forward, Energy Environ. Sci., 11, 5 (2018)

[2] L. Lee, A. Galindo, G. Jackson, C. S. Adjiman, Enabling the direct solution of challenging computer-aided molecular and process design problems: Chemical absorption of carbon dioxide, Comput. Chem. Eng., 174 (2023)



2:20pm - 2:40pm

Accelerating Solvent Design Optimisation with a Group-Contribution Machine Learning Surrogate Model for Phase Stability

Lifeng Zhang, Benoît Chachuat, Claire S Adjiman

Department of Chemical Engineering, The Sargent Centre for Process Systems Engineering, Imperial College London, London, SW7 2AZ, United Kingdom

The computer-aided mixture/blend design (CAMbD) framework has been widely applied to solvent mixture design problems over the past decades. To obtain the optimal solvent mixture for a specified process, a mixed-integer nonlinear programming (MINLP) model is usually developed by incorporating various constraints, such as property prediction models, phase equilibrium equations, a phase stability check, and design constraints on process conditions and property values. Group-contribution methods, e.g. UNIFAC and SAFT-γ Mie, are often used in the model building, but embedding such thermodynamic model equations can significantly increase the computational cost required to solve these problems to global optimality. In particular, including a phase stability check constraint normally involves embedding an optimisation problem, such as the tangent plane distance criterion, thus leading to a bilevel optimisation formulation. This can be approximated with a local stability check based on the derivatives of the chemical potential, at the cost of adding further to the nonconvexity. To develop more tractable problem formulations, a classifier-based surrogate of the tangent plane distance criterion was proposed in our previous work. The approach yields an accurate and computationally manageable approximation, but it applies to a specific thermodynamic model (UNIFAC in our work) with a predefined solvent mixture set, and it remains to be extended to other models and mixtures.

The potential of machine learning for predicting thermodynamic properties has been widely investigated in recent years. Even though high accuracy has been achieved with machine learning models, they can be difficult to use in CAMbD as the feasibility of the generated molecules is not guaranteed. To address this, the concept of group-contribution machine learning (GC-ML) models was proposed by adopting the functional groups of GC methods as the input features. This approach combines the benefits of reliable predictions from machine learning and of generation of feasible molecules from the group-contribution representation, and it could in principle be applied in design. In this work, we build on the GC-ML concept to develop a surrogate model for phase stability whose inputs consist of the groups present in the mixture and its composition and temperature. This leads to a more general approach in terms of the source for datasets (thermodynamic model or experimental data) and in terms of the solvents that can be modelled. The performance of such a model is evaluated in terms of accuracy and prediction statistics, using the UNIFAC groups to represent the mixture. The surrogate model is then embedded into an optimisation problem which maximises the solubility of an active pharmaceutical ingredient (API), with the UNIFAC model to predict the solubility. Molecular connectivity and feasibility constraints are also included to ensure that physically feasible molecules are generated while satisfying the overall constraints. The computational expense to solve this surrogate-based model is also found to be competitive with other approaches to solving solvent design problems.
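To illustrate the classification idea behind a group-based phase stability surrogate (and only as a generic sketch, not the authors' model), the snippet below trains a scikit-learn classifier on made-up group counts, composition and temperature; real labels would come from tangent-plane-distance calculations with the chosen thermodynamic model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Hypothetical feature matrix: counts of UNIFAC-style groups in the mixture,
# mole fraction of solvent 1, and temperature [K]. Labels (1 = stable single
# liquid phase, 0 = unstable) are generated by a placeholder rule here.
n = 500
groups = rng.integers(0, 4, size=(n, 6))            # e.g. CH3, CH2, OH, H2O, ...
x1 = rng.uniform(0, 1, size=(n, 1))
T = rng.uniform(280, 360, size=(n, 1))
X = np.hstack([groups, x1, T])
y = (groups[:, 2] + 2 * x1[:, 0] > 2.0).astype(int)  # placeholder labelling rule

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict(X[:5]), clf.predict_proba(X[:5])[:, 1])
```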



2:40pm - 3:00pm

Open-loop surrogate modeling for process optimization

Lucas F. Santos, Dion Jakobs, Gonzalo Guillén-Gosálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zürich, Vladimir-Prelog-Weg 1, Zürich 8093, Switzerland

Improving existing tools for process simulation and optimization is a critical task to enable the restructuring of the chemical industry toward sustainability. Recently, there has been a growing trend toward coupling machine-learning models (i.e., surrogate models) with well-established mathematical programming techniques to optimize process simulations [1]. Such approaches can overcome known issues of simulation-optimization approaches, such as the lack of analytical formulations, potentially noisy calculations, unconverged simulations, and high computational expense.

Traditionally, process simulation surrogates can be generated at three levels of abstraction: system, unit, and property [2]. As the complexity of the reference system decreases (from the system to the property level), the mapping between input and output variables is expected to become simpler, at the cost of potentially propagating surrogate modeling errors. Overall, optimizing the surrogate models is often easier than optimizing the original process simulation, yet selecting the appropriate surrogacy level and building accurate surrogates can become challenging.

Here, we propose an alternative method for building surrogates. In essence, we introduce the concept of open-loop plant surrogate building, i.e., building surrogates of the flowsheet with all iterative calculations (e.g., recycles) turned off and then converging the simulation using the surrogates. This alleviates the computationally intensive and failure-prone simulation problems listed above. Similar ideas have been considered in derivative-based optimization, where iterative calculations are added as constraints in the optimization problem [3]. Additionally, we propose using model-based adaptive sampling to enhance the ability of the open-loop surrogates to approximate closed-loop simulations by enforcing sampling near convergence. We use these quasi-closed-loop data to more accurately fit the objective functions and constraints of the process design problem with ReLU (rectified linear unit) neural networks. These are formulated as mixed-integer linear programming (MILP) problems through the OMLT Python package [4] and solved to global optimality.
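For context, OMLT automates the MILP encoding of trained ReLU networks; the following minimal Pyomo sketch shows the underlying big-M formulation for a single ReLU unit with illustrative weight, bias and bound values (an assumption made here for brevity, not the authors' actual surrogate).

```python
import pyomo.environ as pyo

# Big-M encoding of y = max(0, w*x + b) for one ReLU unit; OMLT builds the
# analogous constraints automatically for an entire trained network.
w, b, M = 2.0, -1.0, 100.0   # illustrative weight, bias, and big-M bound

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(-10, 10))
m.pre = pyo.Var()                        # pre-activation w*x + b
m.y = pyo.Var(within=pyo.NonNegativeReals)
m.z = pyo.Var(within=pyo.Binary)         # z = 1 if the unit is active

m.c_pre = pyo.Constraint(expr=m.pre == w * m.x + b)
m.c1 = pyo.Constraint(expr=m.y >= m.pre)
m.c2 = pyo.Constraint(expr=m.y <= m.pre + M * (1 - m.z))
m.c3 = pyo.Constraint(expr=m.y <= M * m.z)

# Example objective: minimise the surrogate output over the input domain.
m.obj = pyo.Objective(expr=m.y, sense=pyo.minimize)
# pyo.SolverFactory("gurobi").solve(m)   # any MILP solver can be used
```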

The proposed open-loop surrogate modeling and optimization approach is applied to mathematical benchmark and sustainable chemical process design problems and compared with system-level and open-loop surrogates without active learning enhancement. We found that the larger datasets generated from the computationally cheaper (orders of magnitude) open-loop simulations lead to more accurate neural network surrogates. Adaptive sampling allowed tailoring data generation and modeling performance in regions closer to the converged simulation and consistently improved the optimization results across all case studies. Moreover, the values for the closed recycle given some simulation degrees of freedom can be accurately estimated using the surrogates. We conclude that avoiding the simulation iterative calculations in an open-loop surrogacy level jointly with adaptive sampling can improve surrogate modeling and be a promising alternative for automated chemical process design.

References

[1] A. Bhosekar and M. Ierapetritou, “Advances in surrogate based modeling, feasibility analysis, and optimization: A review,” Computers & Chemical Engineering, 108, 250–267, 2018.

[2] R. Misener and L. Biegler, “Formulating data-driven surrogate models for process optimization,” Computers & Chemical Engineering, 179, 108411, 2023.

[3] L.T. Biegler, I.E. Grossmann, and A.W. Westerberg, Systematic Methods of Chemical Process Design. Prentice Hall PTR, 1997.

[4] F. Ceccon et al., “OMLT: Optimization & Machine Learning Toolkit,” Journal of Machine Learning Research, 23, 349, 1–8, 2022.



3:00pm - 3:20pm

Optimal design of extraction-distillation hybrid processes by combining equilibrium and rate-based modeling

Kai Fabian Kruber1, Anjali Kabra1, Lukas Polte2, Andreas Jupke2, Mirko Skiborowski1

1Hamburg University of Technology, Institute of Process Systems Engineering, Am Schwarzenberg-Campus 4, 21073 Hamburg, Germany; 2RWTH Aachen University, Chair of Fluid Process Engineering, Forckenbeckstraße 51, 52074 Aachen, Germany

Liquid-liquid extraction (LLX) is a crucial technique for the separation of mixtures that are sensitive to high temperatures, highly diluted, or exhibit azeotropic behavior (Sattler, 2012). Despite its widespread industrial application, the design and optimization of LLX processes remain challenging, especially due to kinetic phenomena, e.g. fluid dynamics and mass transfer limitations (Kampwerth et al., 2020). In contrast to distillation, where equilibrium-based (EQ-based) models with constant values for the height equivalent to a theoretical stage (HETS) are well established and yield accurate results, the aforementioned phenomena in LLX systems can lead to a reduction in model accuracy. Non-equilibrium (NEQ) models, which provide a more detailed description of mass transport and fluid dynamics, offer a superior representation but are associated with a substantial increase in complexity. This results in highly nonlinear and non-convex optimization problems. Additionally, the necessary consideration of solvent recovery in a closed-loop process further increases the challenges for an optimization-based design. This complexity represents a significant obstacle to the frequent use of NEQ models in process optimization, particularly in the context of large-scale industrial applications.

To address these challenges, this work proposes an integrated approach that combines NEQ modeling with EQ-based superstructure optimization for a hybrid extraction-distillation process. The objective is to develop a more practical method for process optimization that captures the essential features of mass transfer and fluid dynamics in the LLX without the computational burden of full rate-based modeling throughout the entire design process. In the initial phase, an NEQ representation of an LLX column is employed to calculate HETS values specific to the selected solvent and contingent on the operational conditions, namely temperature and phase ratio (Kampwerth et al., 2022). Based on the generated data, HETS correlations are developed and incorporated into an EQ-based extraction-distillation superstructure model (Kruber et al., 2018), thereby enabling an accurate yet simplified representation of mass transport phenomena in combination with a rigorous closed-loop process optimization.
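As a minimal illustration of the correlation step, and assuming for the sketch only a simple linear-in-parameters form and made-up NEQ results, HETS values can be regressed against temperature and phase ratio with ordinary least squares before being embedded in the EQ-based superstructure:

```python
import numpy as np

# Hypothetical NEQ simulation results: HETS [m] at several temperatures [K]
# and solvent-to-feed phase ratios [-]; values are illustrative only.
T = np.array([293., 303., 313., 293., 303., 313.])
ratio = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
hets = np.array([0.42, 0.39, 0.37, 0.35, 0.33, 0.31])

# Assume a simple correlation HETS = a0 + a1*T + a2*ratio and fit it by
# ordinary least squares; the coefficients can then be used inside the
# EQ-based extraction-distillation superstructure model.
A = np.column_stack([np.ones_like(T), T, ratio])
coeffs, *_ = np.linalg.lstsq(A, hets, rcond=None)
print("a0, a1, a2 =", coeffs)
print("HETS(308 K, ratio 1.5) ≈", np.array([1.0, 308.0, 1.5]) @ coeffs)
```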

The efficacy of the proposed design approach is illustrated through its application to the separation of a diluted acetone-water mixture. The optimal process designs for a range of solvents are evaluated, with a particular focus on the specific HETS values. Furthermore, the method is benchmarked against a conventional superstructure optimization approach, which employs a constant HETS value across all solvents and operating conditions.

References

Kampwerth, J., Weber, B., Rußkamp, J., Kaminski, S., Jupke, A., 2020. Towards a holistic solvent screening: On the importance of fluid dynamics in a rate-based extraction model. Chem. Eng. Sci. 227, 115905.

Kampwerth, J., Roth, D., Polte, L., Jupke, A., 2022. Model-Based Simultaneous Solvent Screening and Column Design Based on a Holistic Consideration of Extraction and Solvent Recovery. Ind. Eng. Chem. Res. 61 (9), 3374–3382.

Kruber, K.F., Scheffczyk, J., Leonhard, K., Bardow, A., Skiborowski, M., 2018. A hierarchical approach for solvent selection based on successive model refinement. Comput. Aided Chem. Eng. 43, 325–330.

Sattler, K., 2012. Thermische Trennverfahren. John Wiley & Sons, Hoboken.



3:20pm - 3:40pm

Mathematical Modelling and Optimisation of the Cryogenic Distillation Processes used for Hydrogen Isotope Separation in the Fusion Fuel Cycle

Emma Anastasia Barrow1, Iryna Bennett2, Franjo Cecelja1, Eduardo Garciadiego-Ortega2, Megan Thompson2, Dimitrios Tsaoulidis1

1School of Chemistry & Chemical Engineering, University of Surrey, GU2 7XH, Guildford, UK; 2UK Atomic Energy Authority, Culham Science Centre, OX14 3DB, Abingdon, UK

Global distribution of fusion power plants has the potential to provide a limitless supply of low-carbon energy and be revolutionary in tackling today’s climate crisis. Deuterium-Tritium (DT) fusion is currently the leading fusion reaction towards commercialisation of fusion power plants, being able to produce the highest energy output at the “lowest” temperature requirement [1]. In 2022, the National Ignition Facility in California achieved “net” energy production for the first time ever from their breakthrough DT fusion experiments [2]. Unfortunately, tritium - one of the two main feedstocks of this reaction - is a scarcely available, very expensive, and radioactive isotope of hydrogen, which introduces a series of additional feasibility and safety challenges into a commercial fusion power plant’s design.

The fusion fuel cycle is an essential component of a fusion power plant design. The fuel cycle is required to continuously recover unburnt tritium from a fusion reactor’s plasma exhaust gases, and safely recycle it back into the fusion reactor’s fuelling systems as a 50/50 molar mixture of deuterium and tritium with minimal impurities, including protium. Cryogenic distillation is a leading technology candidate for the separation and rebalancing of hydrogen isotope mixtures within the fuel cycle. Cryogenic distillation is advantageous for this application due to having a very high separation efficiency, being a well-established technology, and its design being easily scalable and adaptable for different fuel cycle requirements [3]. However, the main drawback of this technology is that the liquid hold-ups within the columns will contain very large inventories of tritium [4]. High tritium inventory within the fuel cycle is problematic for the feasibility and safety of fusion power plant operation. Therefore, accurate modelling of the fuel cycle, as well as quantification of the tritium inventory requirements, is an essential part of ensuring the feasibility of fusion power plant operation.

In this work, dynamic optimisation of the cryogenic distillation processes for hydrogen isotope separation was performed, and an optimal control framework was proposed to minimise tritium inventory under uncertainty without compromising separation efficiency. The effects of parameters such as the number of stages, side stream flow rates, condenser and boiler heat duties, and feed location were investigated. These optimization problems are complex and demand specialized optimization techniques. Mathematically, they are formulated as mixed-integer nonlinear programming (MINLP) problems and implemented in the General Algebraic Modelling System (GAMS). A key consideration in developing mathematical models for optimization-based design is the handling of uncertainty in the experimental data used for model development.

References:

1. ITER, What is ITER, 2023. Available from: https://www.iter.org/proj/inafewlines.

2. The Indirect Drive ICF Collaboration et al., Achievement of Target Gain Larger than Unity in an Inertial Fusion Experiment. Physical Review Letters, 2024. 132(6): p. 065102.

3. Day, C., et al., The pre-concept design of the DEMO tritium, matter injection and vacuum systems. Fusion Engineering and Design, 2022. 179: p. 113139.

4. Schwenzer, J., et al., Operational tritium inventories in the EU-DEMO fuel cycle. Fusion Science and Technology, 2022. 78(8): p. 664-675.

 
2:00pm - 4:00pmT5: Concepts, Methods and Tools - Session 5
Location: Zone 3 - Room E033
Co-chair: Yasunori Kikuchi
 
2:00pm - 2:20pm

The Paradigm of Water and Energy Integration Systems (WEIS): Methodology and Performance Indicators

Miguel Castro Oliveira1,2, Rita Castro Oliveira3, Pedro M. Castro2, Henrique A. Matos2

1Research, Development and Innovation, Instituto de Soldadura e Qualidade, 2740-120 Porto Salvo, Portugal; 2Department of Chemical Engineering, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal; 3Department of Computer Science and Engineering, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal

The water-energy nexus has been incorporated into the most relevant sustainability policies of every region of the world. This concept deals with all potential interdependencies between water and energy, and a full understanding of its inherent aspects is fundamental for the promotion of simultaneous water- and energy-use-related benefits (Gabbar and Abdelsalam, 2020; Souza et al., 2023; Walsh et al., 2015).

The concept of Water and Energy Integration Systems (WEIS) was recently introduced by Castro Oliveira (2023). It consists of physical systems encompassing a set of water- and energy-using units in a site, together with the water and energy recirculation between these units, with the goal of achieving economic and environmental benefits. The concept was developed to expand the Total Site Integration (TSI) and Combined Water-Energy Integration (CWEI) concepts and to address their limitations.

While TSI focuses only on energy consumers (despite considering all types of energy forms involved in an energy system), CWEI focuses on water consumers, with a limited focus on energy efficiency (only through the reduction of hot/cold utility input in a water system). The WEIS concept, on the other hand, addresses energy and water consumers as a whole, adapting conceptual elements from both TSI and CWEI and introducing new ones. These include technologies for energy recovery from water/wastewater units (such as electrolysis) and the supply of thermal energy (direct waste heat) or electric energy (indirectly produced in thermodynamic cycles) to fulfill existing energy requirements (hot utilities, in this case) and additional ones (wastewater treatment units). These practices necessarily generate new stream recirculation, for instance, additional fuel streams (such as hydrogen from electrolysis) to combustion-based processes and waste heat streams to several points in the water system.

This work details the methodology behind the WEIS concept by elaborating two conceptual superstructure configurations: a standard one (case 1) based on a steady-state perspective (continuous processes only) and a dynamic one (case 2) considering energy storage units (both continuous and batch processes). To demonstrate the innovation and real-life adequacy of the concept, two representative plants within the Portuguese ceramic industry (for which economic and environmental viability has been established) were used as case studies to apply both conceptual configurations. For case 1, an eco-efficiency improvement of 6.46% was estimated, together with 8.57% total energy savings and 23.71% total water savings. For case 2, the corresponding estimates were 4.00%, 6.88% and 38.57%.

References

M. Castro Oliveira, 2023. Simulation and Optimisation of Water and Energy Integration Systems (WEIS): An Innovative Approach for Process Industries.

H.A. Gabbar, A.A. Abdelsalam, 2020. Energy–water nexus: Integration, monitoring, KPIs, tools and research vision, Energies, 13.

R.G. Souza, A. Barbosa, G. Meirelles, 2023, The Water–Energy Nexus of Leakages in Water Distribution Systems, Water, 15.

B.P. Walsh, S.N. Murray, D.T.J. O’Sullivan, 2015. The water energy nexus, an ISO50001 water case study and the need for a water value system, Water Resour. Ind., 10, 15–28.



2:20pm - 2:40pm

Cooperative Price Fixation Strategy to Support the Formation of Industrial Symbiosis Networks

Fabian Lechtenberg, Lluc Aresté-Saló, Antonio Espuña, Moisès Graells

Universitat Politècnica de Catalunya

Industrial symbiosis has emerged as a paradigm to enhance industrial objectives (economic, environmental, etc.) by promoting collaboration between facilities that share resources such as heat and power, but also other tangible or intangible assets, such as intermediate products or client satisfaction. In this context, a shift from intra-company to inter-company integration has emerged, e.g., in the form of eco-industrial parks. However, the proper management of these networks requires reaching consensus among multiple stakeholders with conflicting objectives [1]. To provide a transparent and practical approach, this work develops a novel price fixation strategy for resources exchanged within industrial symbiosis networks. By directly linking economic benefits to resource prices, the strategy enables companies to understand the value of each exchange. This approach simplifies bookkeeping, fosters stakeholder trust, and facilitates the formation of symbiotic networks.

Building upon the Process Integration Multi-Actor Multi-Criteria Optimization (PI-MAMCO) framework [2], which balances multiple objectives and derives fair and stable benefit allocations based on stakeholder preferences and contributions, the proposed strategy begins by identifying all intermediate resources exchanged within the network, including materials and utilities. Price ranges for these resources are established using reference values from market data, ensuring that prices accurately reflect the economic context. The fair prices for these intermediate assets (the price fixation policy) are based on the formulation of the economic balances for each company, incorporating the allocated economic benefits derived from the PI-MAMCO framework. The core of the proposed solution approach then involves selecting an optimization criterion, such as the minimization of the deviation of internal prices from reference market prices, while satisfying the economic balances of each company. These prices can then form the basis for contractual agreements within the symbiotic network, providing a clear and equitable mechanism for economic benefit allocation without the need for side payments.
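A minimal sketch of this price-selection step, using made-up exchange quantities, target revenues and reference prices rather than the PI-MAMCO data of the case study, can be written as a linear program that minimises the total deviation of internal prices from market references subject to each company's economic balance:

```python
import numpy as np
from scipy.optimize import linprog

# Internal resources exchanged in the network (illustrative): r1, r2, r3.
# Net exchange quantities per company (positive = sold, negative = bought) and
# the net internal-transfer revenue each company must receive so that its
# economic balance matches its benefit allocation (made-up, consistent data).
Q = np.array([[10.0,  0.0, -4.0],    # company 1
              [-10.0, 5.0,  0.0],    # company 2
              [0.0,  -5.0,  4.0]])   # company 3
target_revenue = np.array([60.0, -20.0, -40.0])
p_ref = np.array([8.0, 15.0, 12.0])           # market reference prices

# Variables: [p1, p2, p3, d1, d2, d3]; minimise the total deviation sum(d).
n = len(p_ref)
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],     # p - d <= p_ref
                 [-np.eye(n), -np.eye(n)]])   # -p - d <= -p_ref
b_ub = np.concatenate([p_ref, -p_ref])
A_eq = np.hstack([Q, np.zeros((3, n))])       # economic balances: Q @ p = target
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=target_revenue,
              bounds=[(0, None)] * (2 * n), method="highs")
print("internal prices:", res.x[:n])
```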

A case study involving the formation of a palm oil–based industrial complex [3] is presented to demonstrate the approach. The complex comprises three companies that exchange materials, utilities, and effluents. The case study illustrates how the described price fixation strategy effectively allocates economic benefits among the companies, and enhances transparency in the value of resource exchanges, to support the successful establishment of the symbiotic network. This price fixation strategy offers a practical approach for companies considering participation in industrial symbiosis networks.

[1] M. Hiete, J. Ludwig, F. Schultmann (2012). Journal of Industrial Ecology.
[2] F. Lechtenberg, L. Aresté-Saló, A. Espuña, M. Graells (2024). Applied Energy.
[3] Y. D. Tan, J. S. Lim, S. R. Wan Alwi (2022). Energy.



2:40pm - 3:00pm

Decision Support Tool for Sustainable Small to Medium-Volume Natural Gas Utilization

Patience Bello Shamaki, Pedro Henrique Callil-Soares, Galo Antonio Carillo Le Roux

Department of Chemical Engineering, Polytechnic School, University of Sao Paulo, Brazil

Due to safety, economic and logistical challenges, flaring of small to medium volumes of natural gas remains a widespread practice, despite the global focus on carbon emissions reduction. This detrimental practice contributes to climate change and wastes valuable resources. The global effort to curb gas flaring has led to a growing number of innovative technologies for the valorization of these small-to-medium (stranded) NG volumes (Global Gas Flaring Reduction Partnership, 2019).

This study presents a decision-making support tool that allows fast evaluation of sustainable, viable alternatives for the utilization of small to medium-volume natural gas over flaring. The proposed tool first computes the flare intensity and toxicity of the NG profile (gas composition and volume) provided by the user to establish an environmental baseline. It then allows the user to select a desired objective: economic, environmental, or a trade-off between the two. Based on the selected objective, optimization is performed and an analysis of performance indices for the utilization processes is presented. The economic performance indices include the levelized cost of product, return on investment, payback period, and capital and operating costs, while the environmental indices include the CO2 produced, emitted, or avoided by the process, as well as product CO2 emission potentials. The technical performance indices include energy consumption, energy efficiency and feed efficiency. The tool integrates Python with Aspen Plus to provide a user-friendly interface that facilitates decision making. The optimization framework focuses on maximizing economic profit with strict adherence to environmental constraints. Different scenarios, from best to worst, are tested using the proposed tool. Focusing on sustainable processes, existing and new processes with low to zero carbon emissions, as well as new integrated processes with reduced carbon emissions, are presented. The data used for this study are obtained from the World Bank gas flaring data, the Global Gas Flaring Reduction Partnership report (Darani et al., 2021), and other literature sources for process operating conditions. The proposed tool enables stakeholders to evaluate available clean technologies and the possible concessions required to reduce the environmental impact of flaring and optimize natural gas valorization.
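The performance indices mentioned above follow standard definitions; the sketch below shows, with made-up numbers, how the levelized cost of product, simple payback period and return on investment could be computed (the actual tool would derive the cash-flow inputs from Aspen Plus results).

```python
def levelized_cost_of_product(capex, annual_opex, annual_output, lifetime, rate):
    """Discounted total cost per unit of product over the project lifetime."""
    disc = [(1 + rate) ** -t for t in range(1, lifetime + 1)]
    total_cost = capex + annual_opex * sum(disc)
    total_output = annual_output * sum(disc)
    return total_cost / total_output

def simple_payback(capex, annual_revenue, annual_opex):
    """Years needed for undiscounted net cash flow to repay the investment."""
    return capex / (annual_revenue - annual_opex)

def return_on_investment(annual_revenue, annual_opex, capex):
    """Annual net profit as a fraction of the capital invested."""
    return (annual_revenue - annual_opex) / capex

# Illustrative numbers for a small gas-utilization unit (made up):
print(levelized_cost_of_product(capex=50e6, annual_opex=8e6,
                                annual_output=60_000, lifetime=20, rate=0.08))
print(simple_payback(capex=50e6, annual_revenue=20e6, annual_opex=8e6))
print(return_on_investment(annual_revenue=20e6, annual_opex=8e6, capex=50e6))
```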



3:00pm - 3:20pm

Prospective life cycle design enhanced by computer aided process modeling: A case study of Air Conditioners

Shoma Fujii, Yuko Oshita, Ayumi Yamaki, Yasunori Kikuchi

The University of Tokyo, Japan

Prospective lifecycle design of emerging technologies, combining Life Cycle Assessment (LCA), Material Flow Analysis (MFA) and Input-Output Analysis (IOA), plays an important role in the design of sustainable societies and business strategies. However, prospective lifecycle design tends not to be seamlessly linked to technological development. In the example of air conditioners (ACs), state-of-the-art development is taking place for each component, such as heat exchangers for indoor and outdoor units, expansion valves, compressors, piping and refrigerants. However, these development data are too detailed to feed into lifecycle design for building a business strategy. On the other hand, useful indicators such as the COP (coefficient of performance) do not reveal the relationships among design parameters, making it difficult to support decision making. In this study, computer-aided process modeling was positioned as the function that links technology development and system-level analyses, and a fundamental model combining AC process simulation, MFA and LCA was developed.

In the process modeling, a process flow diagram including the indoor and outdoor heat exchangers, a compressor and an expansion valve was modelled; isenthalpic expansion and isentropic compression were assumed in the expansion valve and the compressor, respectively. For the indoor and outdoor heat exchangers, the heat exchange between the refrigerant and the indoor and ambient air was represented on a temperature-enthalpy diagram, and the UA values (the product of the overall heat transfer coefficient and the heat transfer area) of each heat exchanger were quantified to calculate the relative change in heat exchanger size. Using this process model, the operating power and the size of each component could be determined for conventional and natural refrigerants. In the MFA, statistical data on past and estimated future cooling demand were first incorporated; the shipment volume of air conditioners was then calculated by balancing the installed stock and the waste amount estimated with a Weibull distribution. Annual power consumption was estimated using the performance calculated by the process model, and a cradle-to-grave LCA of ACs was conducted up to 2050.
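Assuming the same idealisations stated above (isenthalpic expansion, isentropic compression) and purely illustrative saturation temperatures, the ideal vapour-compression COP for a natural refrigerant can be sketched with CoolProp as follows; this is a simplified stand-in for the full process model, not the authors' simulation.

```python
from CoolProp.CoolProp import PropsSI

fluid = "R290"                     # propane, a natural refrigerant
T_evap, T_cond = 280.15, 318.15    # illustrative evaporating/condensing temperatures [K]

p_cond = PropsSI("P", "T", T_cond, "Q", 0, fluid)

h1 = PropsSI("H", "T", T_evap, "Q", 1, fluid)   # saturated vapour leaving the evaporator
s1 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
h2 = PropsSI("H", "P", p_cond, "S", s1, fluid)  # isentropic compression to condenser pressure
h3 = PropsSI("H", "T", T_cond, "Q", 0, fluid)   # saturated liquid leaving the condenser
h4 = h3                                          # isenthalpic expansion

cop_cooling = (h1 - h4) / (h2 - h1)
print(f"ideal cooling COP ≈ {cop_cooling:.2f}")
```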

Case studies of replacing conventional refrigerants with natural refrigerants show that if air conditioners are designed to maintain their performance, the size of the heat exchanger will increase, and the share of the impact of air conditioner manufacturing in the total environmental burden in 2050 will be larger. On the other hand, if the air conditioner is designed to maintain the size of the heat exchanger, its performance will deteriorate drastically and its electricity consumption will increase, making its environmental impact strongly dependent on the power grid inventory.



3:20pm - 3:40pm

Life-Cycle Assessment of Chemical Sugar Synthesis Based on Process Design for Biomanufacturing

Hiro Tabata1,2, Satoshi Ohara3,4, Yuichiro Kanematsu1, Heng Yi Teah1, Yasunori Kikuchi1,4,5

1Presidential Endowed Chair for “Platinum Society”, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; 2Research Center for Solar Energy Chemistry, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan; 3Research Center for Advanced Science and Technology, LCA Center for Future Strategy, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8904, Japan; 4Department of Chemical System Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; 5Institute for Future Initiatives, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan

Biomass sugars from crops are the typical substrates used in the microbial production of useful substances. These bioproduction processes have been shown to be more environmentally friendly than conventional petrochemical processes. However, the Earth's biomass production capacity is limited by its biophysical boundaries and cannot meet the enormous demand for fuel and chemical production. Additionally, it is important to note that the expansion of industrial agriculture has negative aspects, such as vast land use and massive consumption of depletable resources like water, nitrogen, and phosphorus. Thus, for biomanufacturing technology to be fully implemented, concerns regarding the supply of biomass sugar need to be addressed.

Against this background, research is being conducted to synthesize sugars through an agriculture-independent catalytic process. The chemical sugar synthesis is achieved by integrating (1) the formation of formaldehyde through CO2 reduction and (2) the sugar synthesis using formaldehyde as a substrate. One of the main issues to be addressed is the development of a catalyst that can selectively synthesize sugars from formaldehyde. Consequently, we have been developing catalysts to improve the selectivity of this sugar synthesis. In this study, we have achieved a significant improvement in sugar selectivity by using metal oxometalates as catalysts to suppress side reactions1. Furthermore, by using Corynebacterium glutamicum, bioproduction using chemically synthesized sugars has been realized for the first time globally2.

To implement such emerging technologies in society, it is essential to work in tandem with technological development to design future life cycles that will enable system design and evaluation based on the assumption of future performance improvements and infrastructure development. As a further effort in this study, we have applied the knowledge gained from research and development using actual materials to system design to create a system that can produce sugar at maximum efficiency while considering technical, economic, and social feasibility. More specifically, we have designed a system that includes a series of processes from the procurement of raw materials to the refining of the produced sugar, and we have calculated the material and energy balance using a process simulation. Subsequently, a life-cycle assessment will be conducted based on the inventory data obtained, and guidelines for future technological development will be presented.

The environmental impact of the excessive expansion of biomass sugar utilization has been pointed out in the past. However, sugar production is currently dependent on agriculture, and there have been no solutions to replace it with other methods. In contrast, chemical sugar synthesis is much faster than the agricultural process on which biomass sugars depend, and it consumes far fewer resources, such as land and water. This study quantitatively demonstrates the social significance of supplementing sugar production with chemical synthesis processes. Consequently, it is expected to provide new methodologies and perspectives for the field of biomanufacturing, which has been based on the use of biomass sugars, and ultimately for the entire carbon resource recycling system, including food production.

1. H. Tabata, et al., Chem. Sci. 2023,14, 13475-13484

2. H. Tabata, et al., ChemBioChem 2024, 25, e202300760



3:40pm - 4:00pm

Handling discrete decisions in bilevel optimization via neural network embeddings

Isabela Fons Moreno-Palancas, Raquel Salcedo Díaz, Rubén Ruiz Femenia, José Antonio Caballero Suárez

Institute of Chemical Process Engineering, University of Alicante, Spain

Bilevel optimization is an active area of research within process systems engineering due to its ability to capture the interdependencies between two levels of decisions. This becomes particularly valuable in decentralized supply chain optimization, where participants have conflicting interests and compete against each other. Bilevel models excel in representing such leader-follower relationships by incorporating the follower’s dynamics as a constraint in the leader’s upper-level problem. While continuous bilevel problems are already computationally challenging, the presence of discrete decisions—such as production recipes, technology selection, and capacity levels—further complicates their solution (Jeroslow, 1985).

State-of-the-art optimization solvers like Gurobi and CPLEX now offer extensions to handle mixed-integer linear bilevel problems directly, implementing branch-and-cut algorithms inspired by the original work of Moore and Bard (1990). However, current approaches suffer from scalability issues when applied to large-scale real-life instances (Yue and You, 2017), which evidences the need for innovative strategies, such as neural networks, to better handle these complexities.

In this work, we leverage the approximation abilities of neural networks to develop a single-level reformulation. By properly defining the linking constraints—those that map the neural network’s inputs and outputs to the follower’s variables present in the leader’s problem—just one neural network embedding can be used to represent all possible discrete decisions of the follower and replace the lower-level problem by a set of linear constraints that express the forward pass of the trained neural network. This approach eliminates the need for separate neural network models for each discrete solution, significantly simplifying the reformulation while ensuring that the follower’s decision space is fully captured within the upper-level optimization.

A case study on the optimization of a two-echelon supply chain is solved to demonstrate the viability of the proposed methodology and compare its performance against conventional solution techniques. Our approach allows a significant reduction in problem size compared to classical enumeration methods, in which single-level reformulations require enumerating all feasible integer lower-level solutions and incorporating their collection of KKT conditions into the upper-level problem (Yue and You, 2016). Moreover, we show that solution accuracy is maintained provided the neural network is properly trained.

This data-driven approach offers a scalable and flexible alternative to conventional KKT-based methods and can be applied to a wide range of bilevel problems without requiring assumptions such as the relatively complete response property, thereby expanding the scope of traditional solution strategies in the field.

References:

  • Jeroslow, R.G., 1985. The polynomial hierarchy and a simple model for competitive analysis. Math Program 32, 146–164. https://doi.org/10.1007/BF01586088
  • Moore, J.T., Bard, J.F., 1990. The Mixed Integer Linear Bilevel Programming Problem. Oper Res 38, 911–921.
  • Yue, D., You, F., 2017. Stackelberg-game-based modeling and optimization for supply chain design and operations: A mixed integer bilevel programming framework. Comput Chem Eng 102, 81–95. https://doi.org/10.1016/j.compchemeng.2016.07.026
  • Yue, D., You, F., 2016. Projection-based Reformulation and Decomposition Algorithm for A Class of Mixed-Integer Bilevel Linear Programs. Computer Aided Chemical Engineering 38, 481–486. https://doi.org/10.1016/B978-0-444-63428-3.50085-0
 
2:00pm - 4:00pmT6: Digitalization and AI - Session 4
Location: Zone 3 - Room E030
Chair: Jinsong Zhao
Co-chair: George M. Bollas
 
2:00pm - 2:20pm

Optimal design and control of chemical reactors using PINN-based frameworks

Isabela Fons Moreno-Palancas, Raquel Salcedo Díaz, Rubén Ruiz Femenia, José Antonio Caballero Suárez

Institute of Chemical Process Engineering, University of Alicante, Spain

In today’s chemical industry, the pursuit of more profitable, sustainable and safer processes is of paramount importance, yet it remains a challenging task given the complex nature of chemical processes. Such systems are defined by governing equations—mass, energy and momentum balances—which are often described by Ordinary Differential Equations (ODEs), Partial Differential Equations (PDEs) or Differential Algebraic Equations (DAEs).

Numerical methods have been traditionally used to replace differential constraints with algebraic equations, enabling the use of state-of-the-art optimization solvers. However, these methods are computationally expensive, limiting their applicability to real-world problems. In this study, we explore the capabilities of Physics-Informed Neural Networks (PINNs) (Raissi et al., 2019) to optimize the design and operation of chemical reactors.

PINNs have emerged as a powerful resource to model complex systems by incorporating physical knowledge into the learning process. This technique outperforms purely data-driven strategies in terms of predictive performance and reduces the dependency on large datasets (Ghalambaz et al., 2024)—a key advantage in chemical reactor engineering, where data availability is often scarce. This work expands the applications of PINNs beyond their traditional role as surrogate models, introducing them as an alternative optimization method. Unlike previous studies that have applied a sequential approach (i.e., decoupling training and optimization) (Patel et al., 2023; Ryu et al., 2023), we propose a unified framework where PINNs simultaneously describe the behavior of a reactive system and provide the optimal solutions to a given task (Seo, 2024).

To showcase our methodology, we present two case studies: one focusing on optimal design and the other on optimal control of a reactor. In the former, reactor dynamics and economic goals are integrated into the network architecture to identify the design parameters that minimize capital cost while maintaining performance. The latter case aims at identifying the optimal temperature profile within the reactor. These examples illustrate the potential of PINNs in chemical reactor optimization, offering an alternative for optimization problems that traditional methods often struggle to solve.
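As a minimal, generic sketch of the PINN ingredient (not the authors' unified design/control framework), the PyTorch snippet below fits a network to a first-order batch reaction dC/dt = -kC by penalising the ODE residual and the initial condition; the rate constant and time horizon are assumed for illustration only.

```python
import torch

torch.manual_seed(0)
k, C0 = 0.5, 1.0   # illustrative rate constant [1/h] and initial concentration

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points in time where the physics residual is evaluated.
t_col = torch.linspace(0, 5, 100).reshape(-1, 1).requires_grad_(True)

for it in range(3000):
    opt.zero_grad()
    C = net(t_col)
    dCdt = torch.autograd.grad(C, t_col, torch.ones_like(C), create_graph=True)[0]
    loss_phys = torch.mean((dCdt + k * C) ** 2)            # residual of dC/dt = -k C
    loss_ic = (net(torch.zeros(1, 1)) - C0).pow(2).mean()  # initial condition C(0) = C0
    (loss_phys + loss_ic).backward()
    opt.step()

# Compare against the analytical solution C(t) = C0 * exp(-k t) at t = 2.
print(net(torch.tensor([[2.0]])).item(), C0 * torch.exp(torch.tensor(-k * 2.0)).item())
```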

References:

  • Ghalambaz, M., Sheremet, M.A., Khan, M.A., Raizah, Z., Shafi, J., 2024. Physics-informed neural networks (PINNs): application categories, trends and impact. Int J Numer Methods Heat Fluid Flow. https://doi.org/10.1108/HFF-09-2023-0568
  • Patel, R., Bhartiya, S., Gudi, R., 2023. Optimal temperature trajectory for tubular reactor using physics informed neural networks. J Process Control 128, 103003. https://doi.org/10.1016/J.JPROCONT.2023.103003
  • Raissi, M., Perdikaris, P., Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys 378, 686–707. https://doi.org/10.1016/J.JCP.2018.10.045
  • Ryu, Y., Shin, S., Liu, J.J., Lee, W., Na, J., 2023. Physics-informed neural networks for optimization of polymer reactor design, in: Kokossis, A.C., Georgiadis, M.C., Pistikopoulos, E. (Eds.), 33rd European Symposium on Computer Aided Process Engineering, Computer Aided Chemical Engineering. Elsevier, pp. 493–498. https://doi.org/https://doi.org/10.1016/B978-0-443-15274-0.50079-2
  • Seo, J., 2024. Solving real-world optimization tasks using physics-informed neural computing. Sci Rep 14, 202. https://doi.org/10.1038/s41598-023-49977-3


2:20pm - 2:40pm

Picard-KKT-hPINN: Enforcing nonlinear enthalpy balances for physically consistent neural networks

Giacomo Lastrucci1, Tanuj Karia1, Zoë Gromotka2, Artur M. Schweidtmann1

1Delft University of Technology, Department of Chemical Engineering, Process Intelligence Research Group, Van der Maasweg 9, 2629 HZ, Delft, The Netherlands; 2Delft University of Technology, Delft Institute of Applied Mathematics, Mathematical Physics Group, Mekelweg 4, 2628 CD, Delft, The Netherlands

Artificial neural networks (ANNs) are widely used as surrogate models to represent complex underlying models in process systems engineering [6]. However, it is well known that ANNs do not guarantee physically consistent predictions, which prevents their adoption in various real-world scenarios [1]. To mitigate this limitation, significant research has been carried out to enforce known mechanistic relationships between inputs and predictions in neural networks [4,7]. However, current approaches (1) are limited to specific problems governed by specialized mathematical formulations and (2) rely on external solvers that increase the computational cost of training and evaluating the ANN. Addressing the latter issue, Chen et al. proposed KKT-hPINN: a computationally efficient projection method based on Karush-Kuhn-Tucker (KKT) conditions to enforce linear constraints in ANNs [3]. Yet, the method is limited to linear constraints.

We enforce physical laws that are nonlinear in nature by extending the KKT-hPINN approach. For enforcing nonlinear constraints, we propose two approaches: (1) a Picard-iteration-based method that enforces multiplicatively separable constraints by sequentially fixing one of the participating variables, and (2) an approximation that linearizes the nonlinear constraints via Taylor expansion with minimum deviation. We test both approaches by training ANNs for two case studies from the literature: (1) catalytic packed bed reactors for methanol synthesis [5], and (2) a Gibbs reactor in an autothermal reforming process [2]. We observe that the proposed approaches can efficiently enforce enthalpy balances expressed via nonlinear constraints, ensuring physically consistent predictions while retaining inexpensive training and inference. Additionally, atomic conservation laws expressed via linear constraints are imposed. Enforcing conservation laws allows ANNs to improve accuracy even in data-scarce conditions and when using smaller architectures compared to vanilla ANNs. We expect our method to promote wider adoption of ANNs in real-world applications, especially for scenarios such as large-scale simulation and optimization where the observance of fundamental laws is paramount.
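For reference, the linear-equality projection at the heart of the original KKT-hPINN layer has a closed form; the numpy sketch below shows that projection on a toy "atom balance" (the Picard and Taylor-linearisation extensions for nonlinear enthalpy balances are not reproduced here).

```python
import numpy as np

def kkt_projection(y_raw, A, b):
    """Orthogonal projection of y_raw onto {y : A y = b}, i.e. the closed-form
    solution of the equality-constrained least-squares KKT system."""
    AAT = A @ A.T
    correction = A.T @ np.linalg.solve(AAT, A @ y_raw - b)
    return y_raw - correction

# Toy example: enforce a linear "atom balance" y1 + y2 + y3 = 1 on a raw
# (unconstrained) network prediction.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
y_raw = np.array([0.5, 0.4, 0.3])      # sums to 1.2, i.e. violates the balance
print(kkt_projection(y_raw, A, b))     # projected prediction sums to exactly 1.0
```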

References
[1] P. Bedué, A. Fritzsche, Apr. 2021. Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management 35 (2), 530–549.

[2] S. I. Bugosen, C. D. Laird, R. B. Parker, Jul. 2024. Process Flowsheet Optimization with Surrogate and Implicit Formulations of a Gibbs Reactor 3, 113–120.

[3] H. Chen, G. E. C. Flores, C. Li, Oct. 2024. Physics-Informed Neural Networks with Hard Linear Equality Constraints. Computers & Chemical Engineering 189, 108764.

[4] P. L. Donti, D. Rolnick, J. Z. Kolter, 2021. DC3: A learning method for optimization with hard constraints.

[5] G. Lastrucci, M. F. Theisen, A. M. Schweidtmann, 2024. Physics-informed neural networks and time-series transformer for modeling of chemical reactors, 571–576.

[6] K. McBride, K. Sundmacher, Jan. 2019. Overview of Surrogate Modeling in Chemical Process Engineering. Chemie Ingenieur Technik 91 (3), 228–239.

[7] A. Mukherjee, D. Bhattacharyya, Aug. 2024. On the development of steady-state and dynamic mass-constrained neural networks using noisy transient data. Computers & Chemical Engineering 187, 108722.



2:40pm - 3:00pm

Physics-informed Data-driven control of Electrochemical Separation Processes

Teslim Olayiwola, Kyle Territo, Jose Romagnoli

Louisiana State University, United States of America

Electrochemical Separation (ECS) technologies, including Electrodialysis (ED), Electrodeionization (EDI), and Capacitive Deionization (CDI), are vital for efficient water treatment and desalination processes. However, optimizing the operational conditions of these systems to achieve higher separation efficiency remains a complex challenge due to their nonlinear and dynamic nature. In this paper, we propose a Reinforcement Learning (RL)-based control framework to address this challenge. By applying various RL algorithms, such as model-free and actor-critic methods, we develop an intelligent control strategy that adapts to different system configurations and conditions. This approach autonomously learns the optimal operational parameters, significantly improving ion removal efficiency. The proposed RL-based control system enhances the performance of ECS processes, providing a versatile and adaptive solution for optimizing separation across multiple electrochemical technologies. This work demonstrates the potential of RL in advancing the design and control of sustainable water purification systems.



3:00pm - 3:20pm

Physics-Informed Graph Neural Networks for Spatially Distributed Dynamically Operated Systems

Md Meraj Khalid1, Luisa Peterson1, Edgar Ivan Sanchez Medina2, Kai Sundmacher1,2

1Process Systems Engineering, Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany.; 2Chair for Process Systems Engineering, Otto-von-Guericke University, Universitaetsplatz 2, 39106 Magdeburg, Germany.

Robust modeling of process systems is a complex and lengthy procedure. These spatially distributed dynamical systems, usually modeled as systems of partial differential equations, are dependent on both time and space discretization and exhibit highly non-linear behaviour. An analytical solution of the resulting model is often not possible, and numerical methods are employed, which are computationally expensive and might encounter instabilities, especially in inverse problems such as optimization, state identification or parameter estimation.

Data-driven methods typically have lower computational costs, and their development requires less process insight, resulting in more efficient surrogate models. Deep learning methods have recently shown promise in modeling spatially distributed process systems. Graph Neural Networks (GNNs) are a form of deep learning in which the system is represented as a graph with a set of nodes and their edge relationships [1]. However, these models are often poor at extrapolation and explainability as they do not respect process boundaries/conditions.

The integration of mechanistic and surrogate models balances the strengths and weaknesses of both modeling principles [2]. The resultant hybrid model, once properly tuned, has better prediction accuracy, increased interpretability and tends to respect the governing physical laws better.

This study aims to develop a Physics-Informed Graph Neural Network (PI-GNN) framework tailored to a catalytic CO2 methanation reactor in a Power-to-Methane process. The approach integrates a mechanistic model for a single-tube fixed-bed methanation reactor [3] with a grid-based graph structure utilizing Graph Attention Networks (GATs) [4]. The inclusion of process insights is anticipated to enhance the model’s predictive capabilities and the explainability of its predictions. The performance of the GNN model, a derivative-informed GNN, and the PI-GNN will be compared to demonstrate these improvements. The hybrid nature of the framework is expected to allow the use of a less representative mechanistic model in the PI-GNN, potentially reducing the time and effort required for model development and accelerating the pipeline from model development to online deployment.
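As a minimal sketch of the grid-graph GAT building block (with hypothetical node features and without the physics-informed loss terms of the actual framework), a torch_geometric layer stack over a 1-D chain of reactor grid points could look as follows.

```python
import torch
from torch_geometric.nn import GATConv
from torch_geometric.data import Data

# Hypothetical 1-D grid of 10 axial points in the reactor; each node carries
# two state features (e.g. scaled temperature and CO2 concentration).
x = torch.rand(10, 2)
src = torch.arange(9)
edge_index = torch.stack([torch.cat([src, src + 1]),
                          torch.cat([src + 1, src])])   # bidirectional chain graph
data = Data(x=x, edge_index=edge_index)

class GridGAT(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gat1 = GATConv(2, 16, heads=2, concat=True)
        self.gat2 = GATConv(32, 2, heads=1, concat=False)

    def forward(self, data):
        h = torch.relu(self.gat1(data.x, data.edge_index))
        return self.gat2(h, data.edge_index)   # e.g. predicted state at the next time step

model = GridGAT()
print(model(data).shape)   # torch.Size([10, 2])
```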

References

[1] Gori M., Monfardini G., and Scarselli F., ”A new model for learning in graph domains”, in Proceedings 2005 IEEE International Joint Conference on Neural Networks, vol. 2. IEEE (2005), pp. 729–734, doi: 10.1109/IJCNN.2005.1555942.

[2] von Stosch M., Oliveira R., Peres J., de Azevedo S.F., ”Hybrid semi-parametric modeling in process systems engineering: Past, present and future.”, Computers & Chemical Engineering, 60 (2014), pp. 86-101, doi: 10.1016/j.compchemeng.2013.08.008.

[3] Zimmermann, R. T., Bremer J., and Sundmacher K., ”Load-flexible fixed-bed reactors by multi-period design optimization.” Chemical Engineering Journal 428 (2022): 130771, doi: 10.1016/j.cej.2021.130771.

[4] Peterson, L., Forootani, A., Sanchez Medina, E.I., Gosea, I.V., Sundmacher, K., Benner, P.: Comparison of data-driven approaches for simulating the dynamic operation of a CO2 methanation reactor. Submitted to IEEE Transactions on Automation Science and Engineering (2024).



3:20pm - 3:40pm

A Physics-Informed Approach to Dynamic Modeling and Parameter Estimation in Biotechnology

Konstantinos Mexis, Stefanos Xenios, Nikolaos Trokanas, Antonis Kokossis

National Technical University of Athens, Greece

Digitalization in industrial biotechnology is generally slow, as most processes still rely on experience and regression-based models, which struggle to address increasing complexity. Digital Twins (DTs) are gaining interest for industrial applications due to their potential to enhance process efficiency and resource utilization. The DT is still a fast-evolving concept, aiming at a technology that is modular, generic, and scalable. By leveraging Physics-Informed Neural Networks (PINNs) in the development of DTs for bioreactors, we address challenges related to limited experimental data—one of the key barriers to developing robust DTs in biotechnology. We effectively managed system complexity as the dynamics evolved over time, adapting to significant shifts in behaviour. Additionally, we successfully estimated numerous unknown and uncertain parameters in the dynamic models, further enhancing the model’s accuracy and predictive capabilities.

Starting with a scaffold—an ODE-based system representing bioreactor dynamics and serving as a generic framework—we used PINNs to upgrade the scaffold into a DT by integrating real-world experimental data. Leveraging the power of PINNs to adhere to the underlying physics of the process (i.e., scaffold) while reducing the need for labelled data, we were able to capture state-space knowledge, building predictive potential and incentivizing data-driven intelligence. We demonstrate our twinning approach on both a batch reactor (continuous case) and a fed-batch reactor (discontinuous case). We showcase that even with minimal data (e.g., just 2 points), the integration of process knowledge into PINNs enables successful twinning. Additional data further refines the model, evolving the scaffold into a fully functional DT. Using PINNs, we were able to estimate the unknown kinetic parameters of the bioreactor dynamics and perform accurate short- and long-term predictions, which are particularly valuable in process optimization and control. By leveraging PINNs, we overcame the need for the complete data trajectory and full knowledge of system changes to build an accurate model. Through twinning, we developed a more robust model, even without prior training on discrete changes. The scaffold’s inherited system knowledge allows the model to predict discrete changes and transitions.

In conclusion, our approach demonstrates the potential of PINN powered DTs to transform bioprocess modelling, even in the face of limited data and complex system dynamics. By integrating domain knowledge into the scaffold and allowing the model to adapt to changes in real time, we can develop a flexible and scalable framework for bioreactor optimization. This work highlights the capability of DT not only to enhance predictive accuracy but also to streamline the estimation of complex kinetic parameters, paving the way for more efficient, data-driven biomanufacturing processes.

 
2:00pm - 4:00pmT7: CAPEing with Societal Challenges - Session 4
Location: Zone 3 - Room E031
Chair: Ryan Muir
Co-chair: Stavros Papadokonstantakis
 
2:00pm - 2:20pm

Lignocellulosic Waste Supply Chain Network Design for Sustainable Aviation Fuels Production through Solar Pyrolysis

Stavroula Zervopoulou1, Stavros Papadokonstantakis1, Mika Järvinen2, Muddasser Inayat2

1Research Group Process Systems Engineering for Sustainable Resources, Institute of Chemical, Environmental and Bioscience Engineering, Faculty of Technical Chemistry, Vienna University of Technology, 1060 Vienna, Austria; 2Research Group Energy Conversion and Systems, Department of Mechanical Engineering, School of Engineering, Aalto University, 02150 Espoo, Finland

This study presents a comprehensive approach to optimizing a sustainable aviation fuels (SAF) supply chain network (SAFSCN), with an initial focus on the Czech Republic. Utilizing a Mixed-Integer Linear Programming (MILP) framework, the research considers two distinct modelling scenarios: decentralized and centralized Hydrodeoxygenation (HDO) plants. In the second, centralized scenario, the final upgrading of the pyrolysis oil occurs at HDO plants located within existing refineries in the examined country.

In the present study, wheat straw is selected as the feedstock for producing SAF to replace Jet A-1; wheat straw is included in the Renewable Energy Directive (RED II, Annex IX) and is characterized as a bio-based, low-cost waste material.

The objective function aims to minimize total costs constrained by the mass balance, seasonality, feedstock storage, demand satisfaction, solar panel area, pyrolysis plant capacity, and final HDO plant costs (for the centralized case). Total costs encompass feedstock, transportation, storage, operation, and capital expenditure costs for pyrolysis plants, offset by biochar revenue.
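
As a purely illustrative sketch (not the authors' SAFSCN model), a toy Pyomo MILP with the same cost structure might look as follows; the site names, capacities, costs, and biochar credit are invented placeholders, and seasonality, storage, and solar-area constraints are omitted.

```python
# Toy facility-location MILP with feedstock/operating costs offset by biochar revenue.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.SITES = pyo.Set(initialize=["A", "B"])              # candidate pyrolysis plant sites
m.x = pyo.Var(m.SITES, domain=pyo.NonNegativeReals)   # straw processed per site [t/y]
m.y = pyo.Var(m.SITES, domain=pyo.Binary)             # plant built at site?

cap = {"A": 50_000, "B": 80_000}       # plant capacities [t/y]
var_cost = {"A": 120.0, "B": 110.0}    # feedstock + transport + O&M [EUR/t]
fix_cost = {"A": 4e6, "B": 6e6}        # annualized CAPEX [EUR/y]
biochar_rev = 15.0                     # biochar credit per tonne processed [EUR/t]
demand = 60_000                        # SAF-equivalent feedstock demand [t/y]

m.capacity = pyo.Constraint(m.SITES, rule=lambda m, s: m.x[s] <= cap[s] * m.y[s])
m.demand = pyo.Constraint(expr=sum(m.x[s] for s in m.SITES) >= demand)
m.cost = pyo.Objective(
    expr=sum(var_cost[s] * m.x[s] + fix_cost[s] * m.y[s] - biochar_rev * m.x[s]
             for s in m.SITES),
    sense=pyo.minimize,
)
# pyo.SolverFactory("glpk").solve(m)  # requires an MILP solver to be installed
```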

Both scenarios account for revenue from biochar (a by-product) and the satisfaction of Jet A-1 supply/demand for three time periods (today, 2030, and 2050), considering both potential outcomes: a mixture (up to 50%) or a drop-in fuel.

Finally, the two scenarios are compared by assessing their respective advantages and disadvantages to determine the most economically feasible and efficient option (supported by a sensitivity analysis) for an EU country to begin decoupling from oil production and imports. This approach facilitates informed decision-making in the evolution of sustainable aviation fuels produced from agricultural residues, providing a strategic roadmap for policy reforms and supply chain development. This ensures alignment with existing legislation across EU countries, aiding the achievement of EU legislative targets.



2:20pm - 2:40pm

Optimization of prospective circular economy in sewage sludge to biofuel production pathway via HTL system using P-graph

Safdar Abbas1, Paraskevi Karka2, Stavros Papadokonstantakis1

1Institute of Chemical, Environmental and Bioscience Engineering TU Wien,1060 Wien, Austria; 2Engineering and Technology Institute Groningen (ENTEG), Faculty of Science and Engineering, University of Groningen, Nijenborgh 3, Groningen, 9747 AG, The Netherlands

Hydrothermal liquefaction (HTL) has proven to be a practical approach for converting sewage sludge into a valuable resource for renewable energy generation. This study focuses on a prospective analysis of various scenarios for sewage sludge-to-fuel pathway design configurations via HTL, co-located with a wastewater treatment plant, in support of a circular economy. The circular economy emphasizes evaluating the environmental performance and economic feasibility of emerging technologies, which has gained significant attention recently. Integrated assessment models (IAMs), such as the REMIND model combined with shared socio-economic pathways (SSPs), are used to develop globally consistent future scenarios. This approach supports the development of three prospective scenarios aligned with the Paris Agreement’s climate targets: REMIND-SSP2-Base (projecting a 3.5°C temperature rise by the end of the century), PKBudg1150 (aiming to limit the rise to below 2°C), and PKBudg500 (targeting a cap below 1.5°C) for 2030, 2040, and 2050. The core part of this study is the use of prospective assessment for a process that serves circular economy targets to identify the most suitable production pathway by considering the economic balance between operating costs, the future market value of products, and the externality costs associated with GHG emissions.

The P-graph model is used as an effective decision-support tool to identify optimal and near-optimal solutions for addressing trade-offs between future socio-economic policies and practical implementation for 2030, 2040, and 2050, which are often difficult to monetize. This study includes four foreground scenarios for sewage sludge-to-fuel conversion. Scenario 1 uses natural gas in both the HTL unit for biocrude production and the biocrude upgrading unit, with hydrogen produced via steam reforming. Scenario 2 utilizes onsite-produced biomethane for both biocrude production and the upgrading system. Scenario 3 involves using natural gas for the HTL unit while producing hydrogen through electrolysis. Scenario 4 employs biomethane for the HTL process and uses electrolysis for the biocrude upgrading system. The objective of this study is to maximize profit by accounting for credits from avoided GHG emissions and the market value of recovered products, while subtracting operational costs and GHG emission penalties incurred during the biocrude production and upgrading processes. The P-graph model is employed to solve the superstructure problem using a branch-and-bound approach while also providing a graphical representation (P-graph Studio), which is a key strength of the method. This visual approach makes the model more accessible to stakeholders without a mathematical optimization background. The potential profit is 810 euro per ton of sewage sludge for PKBudg500 under Scenario 4 by 2040, with near-optimal solutions of 696 and 676 euro per ton of sewage sludge for PKBudg1150 and PKBudg500, respectively, under Scenario 4 by 2050. The P-graph approach shows that HTL treatment of sewage sludge provides an alternative production pathway within the circular economy concept.



2:40pm - 3:00pm

A techno-economic optimization approach to an integrated biogas and hydrogen supply chain.

Sandra Cecilia Cerda-Flores1,2, Catherine Azzaro-Pantel1, Fabricio Napoles-Rivera2

1Laboratoire de Génie Chimique, Université de Toulouse, CNRS, INPT, UPS, Toulouse France; 2Posgrado de Ingeniería Química, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, México

A country's energy consumption is linked to its economic growth. Despite efforts to reduce greenhouse gas emissions by integrating green technologies to meet energy needs, energy consumption remains largely reliant on fossil fuels. On a global scale, oil and gas production is projected to increase by 27% and 25% by 2030 (McKinsey & Company, 2022). Nonetheless, current geopolitical conflicts highlight the urgent need to reduce dependency on fossil fuels. As the global energy transition accelerates, and the push to fund clean energy projects intensifies, biogas and hydrogen emerge as strong energy vectors due to the extensive research into their production technologies, versatility, and non-intermittency. Hydrogen production is currently dominated by steam methane reforming (SMR) using natural gas. Since SMR is a well-established technology, using methane obtained from biogas offers an attractive alternative for hydrogen production. Furthermore, using biomass for methane and hydrogen production fosters a circular economy by reincorporating waste into the value chain.

However, several challenges exist due to the direct competition between the use of methane and other biogas by-products and hydrogen production. Most research on the supply chain of these two products caters either to biogas demand [1], hydrogen demand, or power-to-gas concepts [2]. This study aims to identify the synergies and trade-offs between hydrogen and biogas production within a shared biomass supply chain in terms of economic and environmental performance. Two hydrogen sectors in Mexico are identified, the steel industry and the heavy-duty transport sector, while biogas is intended to replace natural gas in domestic applications. The supply chain model considers biomass procurement, biogas production, and upgrading. Hydrogen is produced from the resulting biomethane using SMR, and its storage is considered through the liquefaction and regasification stages up to the refuelling stations, the final node of the supply chain.

The mathematical model for the supply chain consists of steady-state mass balances, formulated as a mixed-integer linear program in the GAMS (General Algebraic Modeling System) environment through a deterministic approach. A single-objective optimization, focused on maximizing profit using fixed and variable costs along each node of the supply chain, was performed. The optimization is approached as an allocation problem, with production divided between the methane and hydrogen demand markets. The hydrogen allocation was gradually increased from 10% to 90% of hydrogen production, yielding a production of 2,072 tons per year, a levelized cost of hydrogen of 14 € per kg (a value expected to drop with higher production yields), and a levelized cost of biogas of 0.98 € per kg. Preliminary results highlight the importance of integrating environmental objectives to evaluate the trade-offs between these energy vectors and promote the decarbonization of key economic sectors. Therefore, life cycle assessment will be employed to pinpoint the impacts of each of the products along the supply chain.
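
For illustration, a levelized-cost calculation of the kind behind the euro-per-kilogram figures above can be sketched as follows; all cost inputs are placeholders, not the values used in the study.

```python
# Generic levelized-cost helper: discounted total cost divided by discounted total production.
def levelized_cost(capex, opex_per_year, production_per_year, lifetime_years, discount_rate):
    disc = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
    total_cost = capex + sum(opex_per_year * d for d in disc)
    total_prod = sum(production_per_year * d for d in disc)
    return total_cost / total_prod

# e.g. an SMR unit producing 2,072 t H2 per year (placeholder cost inputs)
lcoh = levelized_cost(capex=60e6, opex_per_year=8e6,
                      production_per_year=2_072_000,   # kg H2 per year
                      lifetime_years=20, discount_rate=0.08)
print(f"LCOH ~ {lcoh:.1f} EUR/kg")
```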

[1] Díaz-Trujillo, L., Nápoles-Rivera, F. (2019). Optimization of biogas supply chain in Mexico considering economic and environmental aspects. Renewable Energy, 139, 1227-1240.

[2] Carrera, E., Azzaro-Pantel, C. (2021). Bi-objective optimal design of Hydrogen and Methane Supply Chains based on Power-to-Gas systems. Chemical Engineering Science, 246, 116861.



3:00pm - 3:20pm

Socioeconomic Impacts and Land Use Change of Integrating Biofuel Production with Livestock Farming in Brazil: A Computable General Equilibrium (CGE) Approach

Igor Lucas Rodrigues Dias1, Matheus Souza Lacerda1, Terezinha de Fatima Cardoso2, Ana Carolina Medina Jimenez2, Tassia Lopes Junqueira2, Geraldo Bueno Martha Junior3, Flávia Barbosa4, Adriano Pinto Mariano1, Antonio Maria Bonomi2, Marcelo Pereira da Cunha1

1University of Campinas (UNICAMP), Brazil; 2Brazilian Center for Research in Energy and Materials (CNPEM), Brazil; 3Brazilian Agricultural Research Corporation (Embrapa), Brazil; 4INESC TEC - University of Porto, Portugal

Sugarcane bioenergy is a reality in Brazil, comprising the production of bioethanol (partially displacing fossil gasoline consumption) and bioelectricity (partially displacing fossil electricity generation). As sugarcane bioethanol can reduce the greenhouse gas (GHG) emissions of the displaced gasoline by around 68%, there is an opportunity to almost double this reduction, taking into account that around 80% of the Brazilian light vehicle fleet is able to run on any blend of ethanol and gasoline (the so-called flex-fuel cars). On the other hand, there are concerns about the possible implications of the expansion of sugarcane production for indirect land use change, especially when this expansion takes place on pasture land used for livestock. A promising strategy to enlarge sugarcane bioenergy in Brazil without compromising the pasture industry is to integrate both activities, converting extensive livestock farming into an intensive one. In 2020, the Brazilian government joined the UNFCCC (United Nations Framework Convention on Climate Change) Race to Zero call, announcing the commitment to reduce its net GHG emissions to zero by 2050. The objective of this study is to compare and evaluate socioeconomic (such as activity output levels, Gross Domestic Product - GDP, and employment) and environmental (such as GHG emissions) impacts in two scenarios, both of them including the effects of indirect land use change. The first scenario, referred to as Business as Usual (BAU), consists of sugarcane bioenergy and extensive livestock production without integration. The second scenario, Integrated Sugarcane-Bioenergy and Livestock (ISBL) in Brazilian agriculture, considers the same amount of sugarcane bioenergy and livestock production obtained in the BAU scenario, but, given the integration between activities, the projected land use is half that of the BAU scenario. To achieve this goal, a computable general equilibrium (CGE) model was implemented, in which (i) for the BAU scenario an optimized pasture activity and the sugarcane bioenergy industry were introduced separately as single sectors and (ii) for the ISBL scenario an optimized integrated sugarcane bioenergy and livestock sector was considered. For both scenarios, based on the results obtained from an optimization model (in economic and environmental terms), the respective direct technical coefficients were estimated, considering a choice of intermediate consumption and use of primary production factors that maximizes profitability and minimizes GHG emissions. The Brazilian input-output matrix (the main data source for the model) was estimated for 2021, taken as the first recovered economic year after the Covid-19 crisis of 2020. The model's results show that the ISBL scenario is economically more efficient, as it delivers a higher activity output level with fewer jobs. Even though the integration of sugarcane bioenergy and livestock production took place only in these activities, positive socioeconomic impacts were observed in all sectors of the economy.



3:20pm - 3:40pm

Optimization of Sustainable Fuel Station Retrofitting: A Set-Covering Approach considering Environmental and Economic Objectives

Daniel Vázquez Vázquez, Raul Calvo Serrano

IQS School of Engineering, Universitat Ramon Llull, Via Augusta 390, 08017 Barcelona, Spain.

To improve the sustainability of the transport sector, there is a global tendency to promote more environmentally friendly modes of transportation, in alignment with the Sustainable Development Goals (SDGs), particularly SDG 13, which focuses on climate action. This shift is essential given that the transport sector is one of the largest contributors to greenhouse gas emissions globally, and transitioning to cleaner transportation options is a critical step toward mitigating climate change. Electric vehicles (EVs) have emerged as a promising solution due to their potential to reduce emissions significantly when powered by renewable energy sources. In response to these developments, many countries are implementing policies and incentives to encourage the adoption of EVs, including subsidies for EV purchases, investments in charging infrastructure, and stringent emission standards for conventional vehicles. This trend has led to a surge in EV adoption, which may reduce the reliance on traditional fuel stations in the future. However, the existing network of fuel stations, strategically located and familiar to consumers, offers a unique opportunity to support the EV transition. Retrofitting these fuel stations into EV charging stations not only leverages their advantageous locations and existing infrastructure but also facilitates a smoother transition for consumers who are accustomed to using these sites for refuelling (Ghosh et al., 2022).

In this work, we propose an optimization model based on the set-covering problem to determine which fuel stations are best suited for retrofitting and to evaluate the impact of this conversion on economic and environmental objectives compared to a baseline scenario with no retrofitting. The set-covering approach ensures that the retrofitted stations can adequately serve EVs within a specified radius, addressing the limited range issue of current electric vehicles.
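
A minimal sketch of the underlying set-covering formulation (illustrative only, not the authors' model) is shown below in Pyomo: binary variables select stations to retrofit so that every demand node is covered by at least one station within range; the coverage matrix and costs are invented placeholders.

```python
# Toy set-covering model: choose retrofitted stations so every demand node is covered.
import pyomo.environ as pyo

stations = ["s1", "s2", "s3"]
demand_nodes = ["d1", "d2"]
# a[(i, j)] = 1 if station j covers demand node i (i.e. lies within the EV range radius)
a = {("d1", "s1"): 1, ("d1", "s2"): 1, ("d1", "s3"): 0,
     ("d2", "s1"): 0, ("d2", "s2"): 1, ("d2", "s3"): 1}
retrofit_cost = {"s1": 1.0, "s2": 1.4, "s3": 0.9}   # relative cost (or CO2-eq score)

m = pyo.ConcreteModel()
m.y = pyo.Var(stations, domain=pyo.Binary)
m.cover = pyo.Constraint(
    demand_nodes, rule=lambda m, i: sum(a[i, j] * m.y[j] for j in stations) >= 1
)
m.obj = pyo.Objective(expr=sum(retrofit_cost[j] * m.y[j] for j in stations))
# For the bi-objective case, the same model can be re-solved with an epsilon-constraint
# on CO2-eq emissions while minimizing cost (or vice versa).
```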

The proposed methodology is applied to a case study in Spain, utilizing a comprehensive dataset of all existing fuel stations. Given the NP-hard nature of the set-covering problem, an initial filtering step is employed to reduce the problem’s complexity. Subsequent optimization is performed under various assumptions, such as the source of electricity and local population density, using bi-objective optimization techniques. The model aims to minimize economic costs and CO2 equivalent emissions, employing a life-cycle assessment (LCA) framework (Azapagic, 1999).

The results indicate that retrofitting a relatively small fraction (approximately 10%) of the fuel stations can satisfy the set-covering constraints, ensuring sufficient coverage for EV users. This optimal solution accounts for factors such as the source of electricity and the travel-distance demand of EVs. Although the economic costs increase, the environmental benefits are significant, demonstrating that strategic retrofitting of fuel stations can play a crucial role in achieving sustainability goals while supporting the growth of electric mobility.

References

Azapagic, A., 1999. Life cycle assessment and its application to process selection, design and optimisation. Chemical Engineering Journal 73, 1–21. https://doi.org/10.1016/S1385-8947(99)00042-X

Ghosh, N., Mothilal Bhagavathy, S., Thakur, J., 2022. Accelerating electric vehicle adoption: techno-economic assessment to modify existing fuel stations with fast charging infrastructure. Clean Techn Environ Policy 24, 3033–3046. https://doi.org/10.1007/s10098-022-02406-x

 
4:00pm - 4:30pmCoffee Break
Location: Zone 2 - Cafetaria
4:00pm - 4:30pmPoster Session 2
Location: Zone 2 - Cafetaria
 

Rebalancing CAPEX and OPEX to Mitigate Uncertainty and Enhance Energy Efficiency in Renewable Energy-Fed Chemical Processes

Ghida Mawassi, Alessandro Di Pretoro, Ludovic Montastruc

LGC (INP - ENSIACET), France

The conventional approach in process engineering design has always been based on exploiting the degrees of freedom of a process system to optimize the operating conditions with respect to a selected objective function. The latter was usually defined as the best compromise between capital and operating expenses. However, although capital cost played a major role while the industrial sector was focused on expanding production capacity, the operating aspect is becoming increasingly predominant in the current industrial landscape, owing to growing concerns about carbon-free energy sources and the closer balance between supply and demand. In essence, the reliance on fluctuating and intermittently available energy resources - renewable resources - is increasing, making it essential to maximize product output while minimizing energy consumption.

Based on these observations, it appears evident that accepting higher investments in exchange for improved process performance could be a fruitful opportunity to further increase the efficiency of energy-intensified, renewables-fed chemical processes. To explore the potential of this design paradigm shift from a quantitative perspective, a dedicated biogas-to-methanol case study was set up for a comparative analysis. The reaction and separation sections for grade AA biomethanol production were designed and simulated based on both total annualized cost minimization and utility cost minimization, and the resulting layouts were compared. The optimal choice was to focus on the most energy-intensive section of the process, the purification. To this end, the distillation columns were intentionally oversized. Although this approach increased the initial investment cost, it led to significant energy savings.

The investment increase for each layout and the corresponding energy savings were assessed and analyzed. The simulation results show relevant improvements, with energy savings of 15% with respect to the conventional layout. As a consequence, the possibility of establishing a new break-even operating point between equipment- and utility-related expenses as the optimal decision at the design stage is worth analysing in greater detail in future studies. Notably, this break-even point is strongly dependent on both the cost and the availability of energy. In scenarios where energy availability is limited or costs are higher, the advantages of oversizing become more pronounced.



Operational and Economic Feasibility of Green Solvent-Based Extractive Distillation for 1,3-Butadiene Recovery: A Comparison with Conventional Toxic Solvents

João Pedro Gomes1, Rodrigo Silva2, Clemente Nunes3, Domingos Barbosa1

1LEPABE / ALiCE, Faculdade de Engenharia da Universidade do Porto; 2Repsol Polímeros, S.A., Complexo Petroquímico; 3CERENA, Instituto Superior Técnico

The increasing demand for safer and environmentally friendly processes in the petrochemical industry requires replacing harmful solvents with safer alternatives. One such process, the extractive distillation (ED) of 1,3-butadiene, typically employs potentially toxic solvents such as N,N-dimethylformamide (DMF) and N-methyl-2-pyrrolidone (NMP). Although highly effective, these solvents may pose significant health and environmental risks. This study explores the viability of using propylene carbonate (PC), a green solvent, as a substitute in the butadiene ED process.

A comprehensive simulation study using Aspen Plus® was conducted to model the behavior of PC in comparison with DMF (Figure 1). Due to the scarcity of experimental data for the PC/C4-hydrocarbon system, a reliable prediction of vapor-liquid equilibrium (VLE) was crucial to derive accurate pairwise interaction parameters (bij) and ensure a more realistic representation of molecular interactions. Initially, COSMO-RS (Conductor-like Screening Model for Real Solvents) was employed, leveraging its quantum chemical foundation to predict VLE based on molecular surface polarization charge densities. Subsequently, new energy interaction parameters were obtained for the Non-Random Two-Liquid (NRTL) model, coupled with the Redlich-Kwong (RK) equation of state, a combination that is particularly effective for systems with non-ideal behavior, such as those involving polar compounds, strong molecular interactions (like hydrogen bonding), and highly non-ideal mixtures, making it well-suited for the mixtures encountered in extractive distillation processes. Key operational parameters, such as energy consumption, solvent recovery, and product purity, were evaluated to assess process efficiency and feasibility. Additionally, an energy analysis of the process with the new solvent was conducted to evaluate its energy-saving potential, using the pinch methodology from the Aspen Energy Analysis tool to optimize the existing process for the new solvent. Economic evaluations, including capital (CapEx) and operational (OpEx) costs, were carried out to provide a holistic comparison between the solvents.
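
As a hedged illustration of the thermodynamic relation behind the regressed parameters (not the authors' Aspen Plus implementation), the binary NRTL activity-coefficient equations can be evaluated as follows; the tau values chosen for the butadiene/propylene carbonate pair are placeholders.

```python
# Binary NRTL activity coefficients; tau12, tau21 and alpha are illustrative values only.
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) for a binary mixture from the NRTL model."""
    x2 = 1.0 - x1
    G12 = np.exp(-alpha * tau12)
    G21 = np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

# e.g. equimolar 1,3-butadiene (1) / propylene carbonate (2) with invented parameters
print(nrtl_binary(x1=0.5, tau12=1.8, tau21=0.9))
```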

The initial analysis of the solvent's selectivity showed slightly lower selectivity compared to the conventional, potentially toxic, solvents, along with a higher boiling point. As a consequence, a higher solvent-to-feed ratio may be required to achieve the desired separation efficiency. The higher boiling point will also require increased heat duties, leading to higher overall energy consumption. Nevertheless, the study underscores the potential of this green solvent to improve the sustainability of petrochemical processes while striving to maintain economic feasibility.



Optimizing Heat Recovery: Advanced Design of Integrated Heat Exchanger Networks with ORCs and Heat Pumps

Zinet Mekidiche Martínez, José Antonio Caballero Suárez, Juan Labarta

Universidad de Alicante, Spain

An advanced model has been developed to facilitate the simultaneous design of heat exchanger networks integrated with organic Rankine cycles (ORCs) and heat pumps, addressing two primary objectives. First, the model utilizes heat pumps to reduce reliance on external utilities by enhancing heat recovery within the system. Second, ORCs capitalize on residual heat streams or generate additional energy, effectively integrating with the existing heat exchanger network.

Effective integration of these components requires careful selection of the working fluids for the ORCs and heat pumps, as well as determination of the optimal operating temperatures of these cycles to achieve maximum efficiency. The heat exchanger network (in which inlet and outlet temperatures are not necessarily fixed), the number of organic Rankine cycles and heat pumps, and their operating conditions are all optimized simultaneously.

This method aims to minimize costs associated with external services, electricity, and equipment such as compressors and turbines. The approach leads to the design of a heat exchanger network that optimizes both the use of residual heat streams and the integration of other streams within the system. This not only enhances operational efficiency and sustainability but also demonstrates the potential of incorporating an Organic Rankine Cycle (ORC) with various energy streams, not limited solely to residual heat.



CO2 recycling plant for decarbonizing hard-to-abate industries: Empirical modelling and Process design of a CCU plant- A case study

Jose Antonio Abarca, Stephanie Arias-Lugo, Lucia Gomez-Coma, Guillermo Diaz-Sainz, Angel Irabien

Departamento de Ingenierías Química y Biomolecular, Universidad de Cantabria

Achieving a net-zero CO2 society by 2050 is an ambitious target set by the European Commission Green Deal. Reaching this goal will require implementing various strategies to reduce CO2 emissions. Conventional decarbonization approaches are well-established, such as using renewable energies, electrification, and improving energy efficiency. However, different industries, known as "hard-to-abate sectors," face unique challenges due to the inherent CO2 emissions from their processes. For these sectors, alternative strategies must be developed. Carbon Capture and Utilization (CCU) technologies offer a promising and sustainable solution by capturing CO2 and converting it into valuable chemicals, thereby contributing to the circular economy.

This study focuses on designing a CO2 recycling plant for the cement or textile industry as a case study. The proposed plant integrates a CO2 capture process based on membrane technology and a utilization stage in which CO2 is electrochemically converted into formic acid. In the capture stage, several experiments are carried out at varying inlet concentrations to optimize process parameters and maximize the CO2 output flow. The capture potential of a membrane is determined by its CO2 permeability and selectivity, making highly selective membranes essential for efficient CO2 separation from the flue gas stream. Key variables affecting the capture process include flue gas concentration, inlet pressure, and total membrane area. Previous laboratory studies have demonstrated that a minimum CO2 concentration of 50% and a flow rate of 15 mL min-1 cm-2 of electrode are required for efficient CO2 conversion to formic acid [1]. Thus, these variables are crucial for an effective integration of the capture and utilization stages.

For the utilization stage, a three-compartment electrochemical cell is proposed for the direct production of formic acid via CO2 electroreduction. The primary operational variables influencing formic acid production include the CO2 inlet flow rate and composition (determined by the capture stage), applied current density, inlet stream humidity, and water flow rate in the central compartment [2].

The coupling of capture and utilization stages is necessary for the development of CO2 recycling plants. However, it remains at an early stage, especially for the integration of membrane capture technologies and CO2 electroreduction. This work aims to empirically model both the CO2 capture and electroreduction systems using neural networks, resulting in an integrated predictive model for the entire CO2 recycling plant. This model will optimize the performance of the capture-utilization system, facilitating the design of a sustainable process for CO2 capture and conversion into formic acid. Ultimately, this approach will contribute to reducing the product's carbon footprint.
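
A minimal surrogate sketch along these lines (illustrative only, not the authors' model) is shown below: a small feed-forward network maps capture-stage operating variables to permeate CO2 concentration and flow, which could then be checked against the feasibility window of the electroreduction stage; the training data here are synthetic placeholders.

```python
# Toy neural-network surrogate for the membrane capture stage (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Inputs: flue-gas CO2 fraction, inlet pressure [bar], membrane area [m2]
X = rng.uniform([0.05, 1.0, 10.0], [0.30, 5.0, 200.0], size=(200, 3))
# Outputs: permeate CO2 fraction, permeate flow [mL min-1 cm-2] (synthetic relations)
y = np.column_stack([0.4 + 1.5 * X[:, 0], 0.05 * X[:, 1] * np.sqrt(X[:, 2])])

capture_surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
capture_surrogate.fit(X, y)

# The predicted permeate stream can then be checked against the feasibility window
# reported for the electroreduction stage (>= 50% CO2, >= 15 mL min-1 cm-2).
print(capture_surrogate.predict([[0.2, 3.0, 120.0]]))
```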

Acknowledgments

The authors acknowledge the financial support received from the Spanish State Research Agency through the project PLEC2022-009398 MCIN/AEI/10.13039/501100011033 and Unión Europea Next Generation EU/PRTR. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. Jose Antonio Abarca acknowledges the predoctoral research grant (FPI) PRE2021-097200.

[1] G. Diaz-Sainz, J. A. Abarca, M. Alvarez-Guerra, A. Irabien, Journal of CO2 Utilization. 2024, 81, 102735

[2] J. A. Abarca, M. Coz-Cruz, G. Diaz-Sainz, A. Irabien, Computer Aided Chemical Engineering, 2024, 53, pp. 2827-2832



Integration of direct air capture with CO2 utilization technologies powered by renewable energy sources to deliver negative carbon emissions

Calin-Cristian Cormos1, Arthur-Maximilian Bathori1, Angela-Maria Kasza1,2, Maria Mihet2, Letitia Petrescu1, Ana-Maria Cormos1

1Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, Romania; 2National Institute for Research and Development of Isotopic and Molecular Technologies, Romania

Reduction of greenhouse gas emissions is an important environmental element in actively combating global warming and climate change. To achieve climate neutrality by the middle of this century, several options are envisaged, such as increasing the share of renewable energy sources (e.g., solar, wind, biofuels, etc.) to gradually replace fossil fuels, large-scale implementation of Carbon Capture, Utilization and Storage (CCUS) technologies, and improving the overall energy efficiency of both production and utilization steps. With respect to reducing the CO2 concentration in the atmosphere, Direct Air Capture (DAC) options are of particular interest and very promising for delivering negative carbon emissions. Negative carbon emissions are a key element for climate neutrality, balancing the remaining positive-emission systems and the hard-to-decarbonize processes. The integration of renewable-powered DAC systems with CO2 utilization technologies can deliver negative carbon emissions while reducing the energy and economic penalties of such promising decarbonization processes.

This work evaluates the innovative, energy-efficient potassium - calcium looping cycle as a promising direct air capture technology integrated with various catalytic CO2 transformations into basic chemicals (e.g., synthetic natural gas, methanol, etc.). The integrated system will be powered by renewable energy (for both heat and electricity requirements). The investigated DAC concept is set to capture 1 Mt/y of CO2 with about a 75% carbon capture rate. A fraction of this captured CO2 stream (about 5 - 10%) will be catalytically converted into synthetic methane or methanol using green hydrogen produced by water electrolysis, the rest being sent to geological storage. Conceptual design, process modelling, and model validation, followed by overall energy optimization through thermal integration analysis, were the engineering tools used to assess the global mass and energy balances and quantify key techno-economic and environmental performance indicators. As the results show, the integrated DAC - CO2 utilization system, powered by renewable energy, has promising performance in terms of delivering negative carbon emissions and reduced ancillary energy consumption. However, significant technological developments (e.g., scale-up, reducing solvent and sorbent make-up, better process intensification and integration, improved catalysts) are still needed to advance this innovative technology from the current state of the art to a relevant industrial scale.



Repurposing Existing Combined Cycle Power Plants with Methane Production for Renewable Energy Storage

Diego Santamaría, Antonio Sánchez, Mariano Martín

Department of Chemical Engineering, University of Salamanca, Plz Caidos 1-5, 37008, Salamanca, Spain

Nowadays, various technologies exist to generate renewable energy, such as solar, wind, hydroelectric power, etc. However, most of these energy sources fluctuate with weather variations. Reliable energy storage is essential to promote a higher share of renewable energy integration into the current energy system; moreover, energy storage helps maintain energy security. Power-to-Gas technologies consist of storing renewable energy in the form of gaseous chemicals. In this case, Power-to-Methane is the technology of choice, since methane allows the use of existing infrastructures for its transport and storage.

This work proposes the integration and optimization of methane-based energy storage in existing combined cycle power plants. This involves introducing carbon capture systems and methane production while reusing the existing power production section. The process leverages renewable energy to produce hydrogen, which is then transformed into methane for easier storage. When energy demand arises, the stored methane is burned in the combined cycle power plant, producing two by-products: water and CO2. The water is collected and returned to the electrolyzer, while the CO2 is captured and then combined with hydrogen to synthesize methane again (Ghaib & Ben-Fares, 2018). This results in a circular process that repurposes the existing infrastructure.
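
A back-of-envelope sketch of the closed loop (not the authors' model) follows from the Sabatier stoichiometry CO2 + 4 H2 -> CH4 + 2 H2O; the hydrogen flow is an arbitrary placeholder.

```python
# Stoichiometric sizing of the methanation step from a given electrolyzer hydrogen output.
M_H2, M_CO2, M_CH4 = 2.016, 44.01, 16.04   # molar masses [g/mol]

h2_prod = 1000.0                            # kg H2/h from the electrolyzer (placeholder)
h2_mol = h2_prod * 1000 / M_H2              # mol H2/h
ch4_mol = h2_mol / 4                        # CH4 produced (1 mol per 4 mol H2)
co2_mol = ch4_mol                           # CO2 recycled from capture (1:1 with CH4)

print(f"CH4 produced: {ch4_mol * M_CH4 / 1000:.0f} kg/h")
print(f"CO2 recirculated: {co2_mol * M_CO2 / 1000:.0f} kg/h")
# Burning this CH4 in the combined cycle returns the same CO2 and water, which are
# captured and recycled, closing the storage loop described above.
```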

Two different combustion methods, ordinary and oxy-combustion (Elias et al., 2018), are optimized to evaluate both alternatives and their economic feasibility. In ordinary combustion, air is used as the oxidizer, while in oxy-combustion, pure oxygen is employed, including the oxygen produced in the electrolyzer. However, CO2 recirculation is necessary in oxy-combustion to prevent excessive flame temperatures (Stanger et al., 2015). In addition, the potential energy storage capacity of the existing combined cycle power plants of a country, specifically Spain, is analysed. This would avoid their decommissioning and reuse the natural gas distribution network, adapting it for use in conjunction with a renewable energy storage system.

References

Elias, R. S., Wahab, M. I. M., & Fang, L. (2018). Retrofitting carbon capture and storage to natural gas-fired power plants: A real-options approach. Journal of Cleaner Production, 192, 722–734.

Ghaib, K., & Ben-Fares, F.-Z. (2018). Power-to-Methane: A state-of-the-art review. Renewable and Sustainable Energy Reviews, 81, 433–446.

Stanger, R., Wall, T., Spörl, R., Paneru, M., Grathwohl, S., Weidmann, M., Scheffknecht, G., McDonald, D., Myöhänen, K., Ritvanen, J., Rahiala, S., Hyppänen, T., Mletzko, J., Kather, A., & Santos, S. (2015). Oxyfuel combustion for CO2 capture in power plants. International Journal of Greenhouse Gas Control, 40, 55–125.



Powering chemical processes with variable renewable energy: A case of iron making in steel industry

Dorcas Tuitoek, Daniel Holmes, Binjian Nie, Aidong Yang

University of Oxford, United Kingdom

The steel industry is responsible for ~8% of global energy demand and around 7% of annual global CO2 emissions [1]. Increased adoption of renewable energy in the iron-making process, the primary step of steel-making, is one of the promising ways to decarbonise the industry. The intermittent nature of renewable energy, as well as the difficulty in storing it, results in a variable energy supply profile, necessitating a shift in the operating modes of manufacturing processes to make efficient use of renewable energy. Through dynamic simulation, this study explores a case of the direct reduction process, in which iron ore charged to a shaft furnace reactor is reduced to solid iron with green hydrogen.
Existing mathematical modelling and simulation studies of the shaft furnace have only investigated its behaviour assuming constant gas and solid feed rates. Here, we simulate iron ore reduction in a 1D model using COMSOL Multiphysics, with intermittent hydrogen supply, to predict the effects of a time-varying hydrogen feed on the degree of iron ore reduction. The dynamic model of the counter-current moving bed captures chemical reaction kinetics, mass transfer, and heat transfer. With settings relevant to industrial-scale operations, our results show that the system can tolerate drops in hydrogen feed rate of up to ~10% without a reduction in the metallisation rate of the product. To tolerate greater fluctuations of the H2 feed rate, strategies were tested that could alter the residence time and change the thermal profile in the reactor, to ensure complete metallic iron formation.
These findings show that it is possible to operate a shaft furnace with a certain degree of hydrogen feed variability, hence providing an approach to mitigating the challenges of intermittent renewable energy supply as a solution to decarbonize industries.

1. International Energy Agency (IEA). Iron and Steel Technology Roadmap. Towards More Sustainable Steelmaking. https://www.iea.org/reports/iron-and-steel-technology-roadmap (2020).



Early-Stage Economic and Environmental Assessment for Emerging Chemical Technologies: Back-casting Approach

Yeonguk Kim, Dami Kim, Kosan Roh

Chungnam National University, Korea, Republic of (South Korea)

The emergence of alternative chemical technologies has made reliable economic and environmental assessments indispensable for guiding future research and development. However, these assessments are inherently challenging due to the lack of comprehensive understanding and technical knowledge of such technologies, particularly at low technology readiness levels (TRLs). This knowledge gap complicates accurate predictions of their real-world performance, economics, and potential environmental impacts. To address these challenges, we adopt a back-casting approach to demonstrate a TRL-based early-stage evaluation procedure, as previously proposed by Roh et al. (2020, Green Chem. 22, 3842). In this work, we apply the framework to methanol production based on the reforming of natural gas, a mature chemical technology, to explore its suitability for evaluating emerging chemical technologies. The target technology is assumed to be at three distinct stages of maturity: theoretical, intermediate, and engineering. We analyze economic and environmental indicators of the technology using the information available at each stage and then examine how closely the indicators calculated at the theoretical and intermediate stages match those at the engineering stage. The analysis shows that the performance indicators are lowest at the theoretical stage, as it relies solely on reaction stoichiometry. The intermediate stage, despite considering various factors, yields slightly higher performance indicators than the engineering stage due to the lack of process optimization. The outcomes of this study enable a proactive assessment of emerging chemical technologies, providing insights into their feasibility at various stages of development.



A White-Box AI Framework for Interpretable Global Warming Potential Prediction

Jaewook Lee, Ethan Errington, Miao Guo

King's College London, United Kingdom

The transformation of the chemical industry towards sustainable manufacturing requires reliable yet robust decision-making tools involving Life Cycle Assessment (LCA). LCA offers a standardised method to evaluate the environmental profiles of chemical processes and products. However, with the emergence of numerous novel chemicals and processes, existing LCA inventory databases are increasingly resource-intensive to develop, often delayed in reporting, and suffer from data gaps. Research efforts have been made to address these knowledge gaps by developing predictive models that estimate LCA properties from chemical structures. However, previously published research has been hampered by limited dataset availability and by reliance on complex black-box models such as Deep Neural Networks (DNNs), which often provide low predictive accuracy and lack the interpretability needed for industrial adoption. Understanding the rationale behind model predictions is crucial, particularly in industrial applications where decision-making relies on both accuracy and transparency. In this study, we introduce a Kolmogorov–Arnold Network (KAN) model for LCA predictions of emerging chemicals, designed to bridge the gap between accuracy and interpretability by incorporating domain knowledge into the learning process.

We utilized 15 key LCA categories from the Ecoinvent v3.8 database, comprising 2,239 data points. To address the large variation in data scale, we applied a logarithmic transformation. Chemical structures represented as SMILES were converted into MACCS keys (166-bit fingerprints) and Mordred descriptors (1,825 physicochemical properties), incorporating features such as molecular weight and hydrophobicity. These features were used to train KAN, Random Forest, and DNN models to predict LCA values across all categories. KAN consistently outperformed the Random Forest and DNN models in 12 out of 15 LCA categories, achieving an average R² value of 74% compared to 66% and 67% for Random Forest and DNNs, respectively. For critical categories like Global Warming Potential, Terrestrial Ecotoxicity, and Ozone Formation–Human Health, KAN achieved high predictive accuracies of 0.84, 0.86, and 0.87, respectively, demonstrating an 8% improvement in overall accuracy. Our feature analysis indicated that MACCS keys provided nearly the same predictive power as Mordred descriptors, despite containing significantly fewer features. Furthermore, we identified that retaining data points with extremely large LCA values (top 3%) could degrade model performance, underscoring the importance of careful data curation. In terms of model interpretability, the use of Gini importance and SHapley Additive exPlanations (SHAP) revealed that functional groups such as halogens, oxygen, and methyl groups had the most significant impact on LCA predictions, aligning with domain knowledge. The SHAP analysis further highlighted that KAN was able to capture more complex structure-property relationships compared to conventional models.
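
For illustration, the Random Forest baseline referred to above can be sketched as follows, assuming RDKit and scikit-learn are available; the SMILES/value pairs are invented placeholders, not Ecoinvent data.

```python
# Fingerprint-based baseline: MACCS keys from SMILES feeding a Random Forest regressor.
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestRegressor

# Placeholder (SMILES, impact value) pairs
data = [("CCO", 1.6), ("c1ccccc1", 3.2), ("CC(=O)O", 1.4), ("ClCCl", 4.1)]
X = np.array([list(MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(s))) for s, _ in data])
y = np.log10([v for _, v in data])        # log-transform, as described above

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(10 ** model.predict(X[:1]))         # back-transform a prediction
# model.feature_importances_ gives the per-key Gini importances analysed alongside SHAP.
```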

In conclusion, the application of the KAN model for LCA predictions provides a robust and accurate framework for evaluating the environmental impacts of emerging chemicals. By integrating domain-specific knowledge, this approach not only enhances the reliability of LCA prediction but also offers deeper insights into the structural drivers of environmental outcomes. Its demonstrated success in identifying key molecular features makes it a valuable tool for accelerating sustainable innovations in both chemical process transformations and drug development, where precise environmental assessments are essential.



Data-driven approach for reaction mechanism identification using neural ODEs

Junu Kim1,2, Itushi Sakata3, Eitaro Yamatsuta4, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Auxilart Co., Ltd., Tokyo, 104-0061, Japan; 3Institute of Physical and Chemical Research, Hyogo, 660-0813, Japan; 4Independent researcher, Japan

In the fields of reaction engineering and process systems engineering, mechanistic models have traditionally been the focus due to their explainability and extrapolative power, as they are based on fundamental principles governing the system. For chemical reactions, kinetic studies are crucial in developing these mechanistic models, providing insights into reaction mechanisms and estimating model parameters [1, 2]. However, kinetic studies often require extensive cycles of experimental data acquisition, reaction pathway generation, model construction, and parameter estimation, making the process laborious and time-consuming. In response to these challenges, machine learning techniques have gained attention. A recent approach involves using neural network models trained on simulation data to classify reaction mechanisms [3]. While effective, these methods demand vast amounts of training data, and expanding the reaction boundaries further increases the data requirements.

In this study, we present a direct, data-driven approach to identifying reaction mechanisms and constructing mechanistic models from experimental data without the need for large datasets. As an initial attempt, we focused on amination and Grignard reactions, which are widely used in chemical and pharmaceutical synthesis. Since chemical reactions can be expressed as differential equations, our hypothesis is that by calculating first- or higher-order derivatives directly from experimental data, we can estimate the relationships between different chemical compounds in the system and identify the reaction mechanism, order, and parameter values. The major challenge arises with real experimental data, where the number of data points is often limited (e.g., around ten), making it difficult to estimate differential values directly. To address this, we employed neural ordinary differential equations (neural ODEs) to effectively interpolate these sparse datasets [4]. By applying neural ODEs, we were able to generate interpolated data, which enabled the calculation of derivatives and the development of mechanistic models that accurately reproduce the observed data. For future work, we plan to validate our methodology across a broader range of reactions and further automate the process to enhance efficiency and applicability.
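
A minimal interpolation sketch in this spirit (not the authors' implementation, and assuming the torchdiffeq package is available) is shown below: a small network representing dC/dt is fitted to about ten synthetic concentration measurements, after which the trajectory and its derivatives can be evaluated densely for mechanism hypotheses.

```python
# Neural-ODE interpolation of sparse concentration data (synthetic placeholders).
import torch
from torchdiffeq import odeint

class RHS(torch.nn.Module):
    """Learned right-hand side dC/dt = f(C)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
    def forward(self, t, y):
        return self.net(y)

# Sparse measurements of two species (placeholder first-order decay data)
t_obs = torch.linspace(0.0, 1.0, 10)
y_obs = torch.stack([torch.exp(-3 * t_obs), 1 - torch.exp(-3 * t_obs)], dim=1)

rhs, y0 = RHS(), y_obs[0]
opt = torch.optim.Adam(rhs.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = torch.mean((odeint(rhs, y0, t_obs) - y_obs) ** 2)
    loss.backward()
    opt.step()

# Dense interpolation; rhs(t, y) now provides the derivative estimates used to
# hypothesize reaction orders and mechanisms.
t_dense = torch.linspace(0.0, 1.0, 200)
y_dense = odeint(rhs, y0, t_dense)
```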

References

[1] P. Sagmeister, et al. React. Chem. Eng. 2023, 8, 2818. [2] S. Diab, et al. React. Chem. Eng. 2021, 6, 1819. [3] J. Bures and I. Larrosa, Nature 2023, 613, 689. [4] R. T. Q. Chen et al., NeurIPS 2018.



Generalised Disjunctive Programming for Process Synthesis

Lukas Scheffold, Erik Esche

Technische Universität Berlin, Germany

Automating process synthesis presents a formidable challenge in chemical engineering. Particularly challenging is the development of frameworks that are both general and accurate, while remaining computationally tractable. To achieve generality, a building block-based modelling approach was proposed in previous contributions by Kuhlmann and Skiborowski [1] and Krone et al. [2]. This model formulation incorporates Phenomena-based Building Blocks (PBBs), capable of depicting a wide array of separation processes [1], [3]. To maximize accuracy, the PBBs are interfaced with CAPE-OPEN thermodynamics, allowing for detailed thermodynamic models [2] within the process synthesis problem. However, the pursuit of generality and accuracy introduces increased model complexity and poses the risk of combinatorial explosion. To address this and enhance tractability, [1] developed a structural screening method that forbids superstructures leading to infeasible configurations. These combined innovations allow for general, accurate, and tractable superstructures.

To further increase the solvable problem size, we propose an advanced optimization framework, leveraging generalized disjunctive programming (GDP). It allows for multiple improvements over existing MINLP formulations, aiming at improving feasibility and solution time. This is achieved by deactivation of unused model equations during the solution procedure. Additionally, Grossmann [4] showed that a disjunctive branch-and-bound algorithm can be postulated. This provides tighter bounds for linear problems than those obtained through reformulations used in conventional MINLP solvers, reducing the required solution time.

Building on these insights, it is of interest whether these findings extend to nonlinear systems. To investigate this, we developed a MathML/XML-based automatic code generation tool inside MOSAICmodeling [5], which formulates complex nonlinear GDPs and exports them to conventional optimization environments (Pyomo, GAMS etc.). These are then coupled with structural screening methods [1] and solved using out-of-the-box functionalities for GDP solution. To validate the proposed approach, a case study is conducted involving two PBBs, previously published by Krone et al. [2]. The study compares the performance of the GDP-based optimization framework against conventional MINLP approaches. Preliminary results suggest that the GDP-based framework offers computational advantages over conventional MINLP formulations. The full paper will present detailed comparisons, offering insights into the practical applicability and benefits of GDP.
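
As a hedged illustration of the modelling style (not the MOSAICmodeling tool itself), a tiny nonlinear GDP can be written in Pyomo as follows: a disjunction either activates a candidate unit with its nonlinear performance and cost equations or bypasses it, so the unused equations are switched off; all numbers are placeholders.

```python
# Minimal nonlinear GDP in Pyomo: install-or-bypass disjunction for one candidate unit.
import pyomo.environ as pyo
from pyomo.gdp import Disjunct, Disjunction

m = pyo.ConcreteModel()
m.feed = pyo.Var(bounds=(0.0, 10.0), initialize=10.0)   # stream to the candidate unit
m.out = pyo.Var(bounds=(0.0, 20.0))                      # product from the unit
m.cost = pyo.Var(bounds=(0.0, 100.0))                    # annualized unit cost

# Disjunct 1: the unit exists, so its (nonlinear) performance and cost equations hold
m.use_unit = Disjunct()
m.use_unit.perf = pyo.Constraint(expr=m.out == m.feed ** 0.8)
m.use_unit.capex = pyo.Constraint(expr=m.cost == 5 + 0.5 * m.out)

# Disjunct 2: the unit is bypassed and its equations are switched off entirely
m.bypass = Disjunct()
m.bypass.perf = pyo.Constraint(expr=m.out == 0)
m.bypass.capex = pyo.Constraint(expr=m.cost == 0)

m.choice = Disjunction(expr=[m.use_unit, m.bypass])
m.obj = pyo.Objective(expr=m.cost - 2 * m.out, sense=pyo.minimize)

# Either reformulate (big-M / hull) and hand the MINLP to a conventional solver ...
pyo.TransformationFactory("gdp.bigm").apply_to(m, bigM=100)
# ... or solve the GDP directly, e.g. with GDPopt (requires MIP and NLP subsolvers):
# pyo.SolverFactory("gdpopt.loa").solve(m, mip_solver="glpk", nlp_solver="ipopt")
```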

References

[1] H. Kuhlmann and M. Skiborowski, "Optimization-Based Approach To Process Synthesis for Process Intensification: General Approach and Application to Ethanol Dehydration," Industrial & Engineering Chemistry Research, vol. 56, no. 45, pp. 13461–13481, 2017.

[2] D. Krone, E. Esche, N. Asprion, M. Skiborowski and J.-U. Repke, "Enabling optimization of complex distillation configurations in GAMS with CAPE-OPEN thermodynamic models," Computers & Chemical Engineering, vol. 157, p. 107626, 2022.

[3] H. Kuhlmann, M. Möller and M. Skiborowski, "Analysis of TBA-Based ETBE Production by Means of an Optimization-Based Process-Synthesis Approach," Chemie Ingenieur Technik, vol. 91, no. 3, pp. 336–348, 2019.

[4] I. E. Grossmann, "Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques," Optimization and Engineering, no. 3, pp. 227–252, 2002.

[5] E. Esche, C. Hoffmann, M. Illner, D. Müller, S. Fillinger, G. Tolksdorf, H. Bonart, G. Wozny and J. Repke, "MOSAIC – Enabling Large-Scale Equation-Based Flow Sheet Optimization," Chemie Ingenieur Technik, vol. 89, no. 5, pp. 620–635, 2017.



Optimal Design and Operation of Off-Grid Electrochemical Nitrogen Reduction to Ammonia

Michael Johannes Rix1, Judith M. Schwindling1, Karim Bidaoui1, Alexander Mitsos2,1,3

1RWTH Aachen University, Germany; 2JARA-ENERGY, 52056 Aachen, Germany; 3Energy Systems Engineering (ICE-1), Forschungszentrum Jülich, Germany

Electrochemical processes can aid in defossilizing the chemical industry. When operated off-grid with their own renewable electricity (RE) production, the electrochemical process and the RE plants must be optimized together. We optimize the design and operation of an electrochemical system for nitrogen reduction to ammonia coupled with wind and solar electricity generation to minimize ammonia production costs. Electrochemical nitrogen reduction allows ammonia production from RE, water, and air in one electrolyzer [1]. Comparable design and operation optimizations for coupling RE with electrochemical systems have already been performed in the literature for different systems (e.g., for water electrolysis by [2] and others).

We optimize the design and operation of the electrolyzer and RE plant over the scope of one year. We calculate investment costs for the electrolyzer and RE plants annualized over their respective lifetimes. We calculate the electricity production at hourly resolution from weather data and the design of the RE plant. From the design of the electrolyzer and the electricity production, we calculate the ammonia production. We investigate three operating strategies: (i) direct coupling of RE and electrolyzer, (ii) curtailment of electricity, and (iii) battery storage and curtailment. In direct coupling, the electrolyzer electricity consumption must follow the RE generation, thus the electrolyzer is sized for the peak power of the RE plant. Therefore, it can only be operated at full load at peak electricity generation, which occurs at only one or a few times of the year. Curtailment and battery storage allow the decoupling of electricity production and consumption, so the electrolyzer can be operated at full or higher load multiple times of the year.

Operation with curtailment increases the load factor of the electrolyzer and reduces the production cost. The RE plant can be over-designed such that the electrolyzer can operate at full or higher load at off-peak RE generation. Achieving a high load factor and few on/off cycles of the electrolyzer is important since on/off cycles can lead to catalyst degradation due to reverse currents [3]. Implementation of battery storage can further increase the load factor of the electrolyzer. However, battery costs are too high, resulting in increased production costs.
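
The effect of curtailment on the electrolyzer load factor can be illustrated with a simple calculation (not the authors' optimization model); the hourly wind profile below is synthetic.

```python
# Illustration: oversizing the RE plant relative to the electrolyzer raises the
# electrolyzer load factor when surplus electricity is curtailed.
import numpy as np

rng = np.random.default_rng(1)
wind_cf = np.clip(rng.beta(2, 3, 8760), 0, 1)            # synthetic hourly capacity factors

def load_factor(re_capacity_mw, electrolyzer_mw):
    generation = re_capacity_mw * wind_cf                 # MW available each hour
    consumed = np.minimum(generation, electrolyzer_mw)    # surplus above capacity is curtailed
    return consumed.sum() / (electrolyzer_mw * len(wind_cf))

print(load_factor(re_capacity_mw=100, electrolyzer_mw=100))  # direct-coupling sizing
print(load_factor(re_capacity_mw=150, electrolyzer_mw=100))  # oversized RE + curtailment
```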

We run the optimization for different locations with different RE potentials. At all locations, operation with curtailment is beneficial, and battery storage remains too expensive. The availability of wind and solar determines the optimal design of the electrolyzer and RE plant, the optimal operation, the production cost, and the load factor.

References
1. MacFarlane, D. R. et al. A Roadmap to the Ammonia Economy. Joule 4, 1186–1205 (2020).
2. Hofrichter, A., et al. Determination of the optimal power ratio between electrolysis and renewable energy to investigate the effects on the hydrogen production costs. International Journal of Hydrogen Energy 48, 1651–1663 (2023).
3. Kojima, H. et al. Influence of renewable energy power fluctuations on water electrolysis for green hydrogen production. International Journal of Hydrogen Energy 48, 4572–4593 (2023).



A Stochastic Techno-Economic Assessment of Emerging Artificial Photosynthetic Bio-Electrochemical Systems for CO₂ Conversion

Haris Saeed, Aidong Yang, Wei Huang

Oxford University, United Kingdom

Artificial Photosynthetic Bioelectrochemical Systems (AP-BES) are a promising technology for converting CO2 into valuable bioproducts, addressing both carbon mitigation and sustainable production challenges. By integrating biological and electrochemical processes to emulate natural photosynthesis, AP-BES offer potential for scalable, renewable biomanufacturing. However, their commercialization faces significant challenges related to process efficiency, system integration, and economic uncertainties. A thorough techno-economic assessment (TEA) is crucial for evaluating the viability and scalability of this technology.

This study employs a stochastic TEA to assess the bioelectrochemical conversion of CO2 to bioproducts, accounting for variability and uncertainty in key technical and economic parameters. Unlike traditional deterministic TEA, which relies on fixed-point estimates, the stochastic approach uses probability distributions to capture a broader range of potential outcomes. Critical factors such as energy consumption, CO2 conversion efficiency, and bioproduct market prices are modeled probabilistically, offering a more accurate reflection of real-world uncertainties. The novelty of this research lies in its comprehensive application and advanced methodology. This study is one of the first to apply a full-system TEA to AP-BES, covering the entire process from carbon capture to product purification. Moreover, the stochastic approach, utilizing Monte Carlo simulations, enables a more robust analysis by incorporating uncertainties in both technical and economic factors. This combined methodology provides more realistic insights into the system's economic potential and commercial feasibility compared to conventional deterministic models.

Monte Carlo simulations are used to generate probability distributions for key economic metrics, including total annualized cost (TAC), internal rate of return (IRR), and levelized cost of product (LCP). By performing thousands of iterations, the model offers a comprehensive understanding of AP-BES's financial viability, delivering confidence intervals and risk assessments often missing from deterministic approaches. Key variables include electricity price fluctuations, a significant driver of operating costs, and changes in bioproduct market prices due to varying demand. The model also accounts for uncertainties in future technological improvements, such as enhanced CO2 conversion efficiencies and potential economies of scale that could lower both capital expenditure (CAPEX) and operational expenditure (OPEX) per kg of CO2 processed. Sensitivity analyses further identify the most influential factors impacting economic outcomes, guiding future research and development.

The results underscore the critical role of uncertainty in evaluating the economic viability of AP-BES. While the technology demonstrates significant potential for both economic and environmental benefits, substantial risks remain, particularly concerning electricity price volatility and unpredictable bioproduct markets. Compared to static point estimates in deterministic approaches, Monte Carlo simulations provide a more nuanced understanding of the financial risks and opportunities. This stochastic TEA offers valuable insights for optimizing processes, reducing costs, and guiding investment and research decisions in the development of artificial photosynthetic bioelectrochemical systems.
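
For illustration only (not the authors' model), a Monte Carlo TEA of this kind can be sketched as follows: uncertain electricity price and CO2 conversion efficiency are sampled, and a distribution of the levelized cost of product is reported; all distributions and numbers are placeholders.

```python
# Toy Monte Carlo TEA: propagate parameter uncertainty into a levelized-cost distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
elec_price = rng.normal(0.10, 0.03, n)        # electricity price [EUR/kWh]
conv_eff = rng.uniform(0.35, 0.60, n)         # CO2 -> product conversion efficiency
capex_annual = 2.0e6                          # annualized CAPEX [EUR/y] (fixed here)
co2_feed = 5.0e6                              # CO2 processed [kg/y]
energy_per_kg = 8.0                           # electricity demand [kWh per kg CO2]

product = co2_feed * conv_eff * 0.6           # product output [kg/y] (placeholder yield)
opex = co2_feed * energy_per_kg * elec_price  # operating cost [EUR/y], electricity-dominated
lcp = (capex_annual + opex) / product         # levelized cost of product [EUR/kg]

print(f"LCP median {np.median(lcp):.2f} EUR/kg, "
      f"90% interval [{np.percentile(lcp, 5):.2f}, {np.percentile(lcp, 95):.2f}]")
```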



Empowering LLMs for Mathematical Reasoning and Optimization: A Multi-Agent Symbolic Regression System

Shaurya Vats, Sai Phani Chatti, Aravind Devanand, Sandeep Krishnan, Rohit Karanth Kota

Siemens Technology and Services Pvt. Ltd

Understanding data with complex patterns is a significant part of the journey toward accurate data prediction and interpretation. The relationships between input and output variables can unlock diverse advancement opportunities across various processes. However, most AI models attempting to uncover these patterns are not explainable or remain opaque, offering little interpretation. This paper explores an approach in explainable AI by introducing a multi-agent system (MaSR) for extracting equations relating features directly from data.

We developed a novel approach to perform symbolic regression by discovering mathematical functions using a multi-agent system of LLMs. This system addresses the traditional challenges of genetic optimization, such as random seed generation, complexity, and the explainability of the final equation. The agent-based system divides the process into various steps, including initial function generation, loss and complexity calculation, mutation and crossbreeding of equations, and explanation of the final equation to improve the accuracy and decrease the workload.

We utilize the in-context learning capabilities of LLMs trained on vast amounts of data to generate accurate equations more quickly. Additionally, we incorporate methods like retrieval-augmented generation (RAG) with tabular data and web search to further enhance the process. The system creates an explainable model that clarifies each process step leading to the final equation for a given dataset. We also use the method in combination with existing technologies to develop innovative solutions, such as incorporating physical laws derived from data via multi-agent symbolic regression (MaSR) to reduce illogical predictions and improve extrapolations, and passing the generated equations to LLMs as context for explaining large numbers of simulation results.

Our solution is compared with symbolic regression methods such as GPlearn and PySR against various benchmarks. This study presents research on expanding the reasoning capacities of large language models alongside their mathematical understanding. The paper serves as a benchmark in understanding the capabilities of LLMs in mathematical reasoning and can be a starting point for solving numerous complex tasks using LLMs. The MaSR framework can be applied in various areas where the reasoning capabilities of LLMs are tested for complex and sequential tasks. MaSR can explain the predictions of black-box models, develop data-driven models, identify complex relationships within the data, assist in feature engineering and feature selection, and generate synthetic data equations to address data scarcity, which are explored as further directions for future research in this paper.
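
To illustrate one step of such a loop, the sketch below scores candidate symbolic expressions by loss plus a complexity penalty, the kind of ranking that precedes mutation and crossbreeding. It deliberately omits the LLM agents; the candidate expressions, data, and penalty weight are made up for illustration and are not the MaSR implementation.

```python
# Illustrative scoring of candidate symbolic expressions by loss and complexity,
# as a ranking step before mutation/crossover. Candidates and weights are hypothetical.
import numpy as np
import sympy as sp

x = sp.symbols("x")
candidates = [sp.sin(x) + 0.1 * x, x - x**3 / 6, sp.exp(-x) * x]  # proposed equations

# Synthetic "observed" data for the demonstration
x_data = np.linspace(0, 3, 50)
y_data = np.sin(x_data) + 0.1 * x_data

def score(expr, alpha=0.01):
    """Mean squared error plus a complexity penalty (operation count of the expression)."""
    f = sp.lambdify(x, expr, "numpy")
    mse = float(np.mean((f(x_data) - y_data) ** 2))
    complexity = sp.count_ops(expr)
    return mse + alpha * complexity, mse, complexity

ranked = sorted(candidates, key=lambda e: score(e)[0])
for expr in ranked:
    total, mse, comp = score(expr)
    print(f"{str(expr):20s}  mse={mse:.4f}  complexity={comp}  score={total:.4f}")
```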



Solid Oxide Cells and Hydrogen Storage to Prevent Grid Congestion

Dorsan Lepour, Arthur Waeber, Cédric Terrier, François Maréchal

École Polytechnique Fédérale de Lausanne, Switzerland

The electrification of the heating and mobility sectors, alongside increasing photovoltaic (PV) capacities, places significant pressure on electricity grids, particularly in urban neighborhoods and densely populated zones. High penetration of heat pumps and electric vehicles as well as significant PV deployment can indeed induce supply shortfalls or require curtailment, respectively. Grid reinforcement is a potential solution, but is costly and involves substantial structural engineering work. Although some local energy storage systems have been extensively studied as an alternative (primarily batteries), the potential of integrating reversible solid oxide cells (rSOC) coupled with hydrogen storage in the context of urban energy systems planning remains underexplored. This study aims to address this gap by investigating the technical and economic feasibility of such systems at building or district scale.

This work uses the framework of REHO (Renewable Energy Hub Optimizer), a decision-support tool for sustainable urban energy system planning. REHO takes into account the endogenous resources of a defined territory, diverse end-use demands (e.g., heating, mobility), and multiple energy carriers (electricity, heat, hydrogen). Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to determine under which circumstances the deployment of rSOC and hydrogen storage becomes relevant.

The study considers several typical districts with their import and export capacities and examines two key scenarios: (1) a closed-loop hydrogen system where hydrogen is produced and consumed locally, and (2) a scenario involving connection to a broader hydrogen network. Results indicate that in areas where grid capacity is strained, rSOC coupled with hydrogen tanks offer a compelling storage solution. They enhance energy self-consumption by converting surplus electricity into hydrogen for later use, while the heat generated during cell operation can be used to meet building space heating and domestic hot water demands.

These findings suggest that hydrogen-based energy storage can be a viable alternative to traditional grid reinforcement, particularly for areas facing an increased penetration of renewables in a saturated grid. The study highlights that for such regions approaching grid congestion, integrating local hydrogen capacities can provide both flexibility and efficiency gains while reducing the need for expensive grid upgrades.



A Modern Portfolio Theory Approach for Chemical Production with Supply Chain Considerations for Efficient Investment Planning

Mohamad Almoussaoui, Dhabia Al-mohannadi

Texas A&M University at Qatar, Qatar

The integrated supply chains of large chemical commodities and fuels play a major role in energy security. These supply chains are at risk from global shocks such as the COVID-19 pandemic [1]. As such, major natural gas producers and exporters such as Qatar aim to balance their supply chain investment returns with the export risks, as the hydrocarbon sector contributes more than one-third of the country's Gross Domestic Product (GDP) [2]. Hence, this work introduces a modern portfolio theory (MPT) model formulation based on chemical commodity and fuel supply chains. The model uses Markowitz's optimization model [3] to meet an exporting country's financial objective of maximizing the investment return and minimizing the associated risk. By defining a supply chain asset as a combination of an exporting country, a traded chemical commodity, and an importing country, the model calculates the return of every supply chain investment and the risk associated with it due to price fluctuations. Solving the optimization problem yields a set of Pareto-optimal supply chain portfolios and the efficient frontier. The model integrates both the chemical process production [4] and the shipping stages of a supply chain. The case study showcases the importance of considering the integrated supply chain when building the MPT model and its impact on the number and allocations of the resulting optimal portfolios. The developed model can guide investment planners to achieve their financial goals at minimum risk.
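
A minimal sketch of the underlying mean-variance idea is given below: minimum-variance weights are computed for a few target returns to trace an efficient frontier. The three "supply chain assets", their expected returns, and the covariance matrix are invented for illustration and are not the authors' case-study data.

```python
# Minimal mean-variance (Markowitz) portfolio sketch over three hypothetical
# supply-chain assets (exporter-commodity-importer combinations).
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])              # expected returns of the assets
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.030, 0.004],
                [0.001, 0.004, 0.020]])         # covariance of returns (price fluctuations)

def portfolio_risk(w):
    return w @ cov @ w

def efficient_weights(target_return):
    """Minimum-variance weights achieving a given target return (long-only)."""
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target_return})
    res = minimize(portfolio_risk, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
    return res.x

# Trace a small efficient frontier by sweeping the target return
for r in np.linspace(mu.min(), mu.max(), 5):
    w = efficient_weights(r)
    print(f"target return {r:.3f} -> weights {w.round(3)}, risk {np.sqrt(portfolio_risk(w)):.4f}")
```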

References

[1] M. Shehabi, "Modeling long-term impacts of the COVID-19 pandemic and oil price declines in Gulf oil economies," Economic Modelling, vol. 112, 2022.

[2] "Qatar - Oil & Gas Field Machinery Equipment," 29 7 2024. [Online]. Available: https://www.trade.gov/country-commercial-guides/qatar-oil-gas-field-machinery-equipment. [Accessed 18 9 2024].

[3] H. Markowitz, "Portfolio Selection," The Journal of Finance, vol. 7, no. 1, pp. 77-91, 1952.

[4] S. Shehab, D. M. Al-Mohannadi and P. Linke, "Chemical production process portfolio optimization," Chemical Engineering Research and Design, vol. 167, pp. 207-217, 2021.



Co-gasification of crude glycerol and plastic waste using air/steam mixtures: a modelling approach

Bahizire Martin Mukeru, Bilal Patel

University of South Africa, South Africa

There has been an unprecedented growth in plastic waste, and current management techniques such as landfilling and incineration are unsustainable, particularly due to the environmental impacts associated with these practices [1]. Gasification is considered one of the most sustainable ways not only to address these issues but also to produce energy from waste plastics [1]. However, issues such as tar and coke formation are associated with plastic waste gasification, which reduces syngas quality [1],[2]. Another waste generated in huge quantities is crude glycerol, a low-value by-product of biodiesel production. The cost involved in its purification is exceedingly high, which limits its applications as a purified product [3]. Co-feeding plastic wastes with crude glycerol for syngas production can not only address issues related to plastic gasification but also allow the utilization of crude glycerol and enhance syngas quality [3]. This study evaluates the performance of a downdraft gasifier to produce hydrogen and syngas from the co-gasification of crude glycerol and plastic wastes, by means of thermodynamic analysis and modelling using the Aspen Plus simulation software. Performance indicators such as cold gas efficiency (CGE), carbon conversion efficiency (CCE) and syngas yield (SY) are used to determine the technical feasibility of the co-gasification of crude glycerol and plastic wastes at different equivalence ratios (ER). Results demonstrated that an increase in ER increased CGE, CCE and SY. For a blend ratio of 50%, a CCE of 100% was attained at an ER of 0.35, whereas a CGE of 88.29% was attained at an ER of 0.3. Increasing the plastic content to 75%, a maximum CCE and CGE of 94.16% and 81.86% were achieved at an ER of 0.4. The hydrogen composition reached its maximum of 36.70% and 39.19% at an ER of 0.1 as the plastic ratio increased from 50% to 75%, respectively. A 50% plastic blend ratio achieved a syngas ratio (H2/CO) of 1.99 at an ER of 0.2, whereas a 75% blend reached a ratio of 2.05 at an ER of 0.25. At these operating conditions, the syngas lower heating value (LHV), SY, CGE and CCE were found to be 6.23 MJ/Nm3, 3.32 Nm3, 66.58%, 76.35% and 6.27 MJ/Nm3, 3.60 Nm3, 59.12%, 53.22%, respectively. From these results, it can be deduced that air co-gasification is a promising technology for the sustainable production of energy from waste glycerol and plastic waste.

References

[1] Mishra, R., Shu, C.M., Gollakota, A.R.K. & Pan, S.Y., 'Unveiling the potential of pyrolysis-gasification for hydrogen-rich syngas production from biomass and plastic waste', Energ. Convers. Manag. 2024: 118997. doi: 10.1016/j.enconman.2024.118997

[2] Chunakiat, P., Panarmasar, N. & Kuchonthara, P., 'Hydrogen Production from Glycerol and Plastics by Sorption-Enhanced Steam Reforming', Ind. Eng. Chem. Res. 2023; 62(49): 21057-21066. doi: 10.1021/acs.iecr.3c02072



COMPARATIVE AND STATISTICAL STUDY ON ASPEN PLUS INTERFACES USED FOR STOCHASTIC OPTIMIZATION

Josue Julián Herrera Velázquez1,3, Erik Leonel Piñón Hernández1, Luis Antonio Vega Vega1, Dana Estefanía Carrillo Espinoza1, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

New research on complex intensified schemes has popularized the use of multiple commercial process simulation software packages. Interfaces between this software and external computing environments for process optimization make it possible to maintain the rigor of the models. This type of optimization is referred to in the literature as "black-box optimization", since successive evaluations exploit information from the simulator without altering the model it contains. The writing and reading of results rely on the combination of 1) the process simulation software, 2) a middleware protocol, 3) a wrapper protocol, and 4) the platform (IDE) hosting the optimization algorithm (Muñoz-López et al., 2017). The middleware protocol allows the automation of the process simulator and the transfer of information in both directions. The wrapper protocol interprets the information transferred by the middleware and makes it usable by both parties, the simulator and the optimizer. The Aspen Plus ® software has become popular due to the rigor of its models and the reliability of its results, as well as the customization it offers for different conventional and unconventional schemes. Few studies have addressed the efficiency and effectiveness of the various computing environments in which the optimization algorithm or the available packages are implemented. Santos-Bartolome and Van-Gerven (2022) compared different computing environments (Microsoft Excel VBA ®, MATLAB ®, Python ®, and Unity ®) connected to the Aspen Hysys ® software, evaluating the accuracy of communication, the information exchange time, and the deviation of the results, and concluded that the best option is VBA ®. Ponce-Rocha et al. (2023) studied the execution time of Aspen Plus ® - MATLAB ® and Aspen Plus ® - Python ® connections in multi-objective optimization using the respective optimization packages, concluding that the Python ® connection is the fastest.

This work proposes a comparative study of the Aspen Plus ® software and its interfaces with Microsoft Excel VBA ®, Python ®, and MATLAB ®. Five schemes (conventional and intensified columns) are analyzed. The optimization of the Total Annual Cost is carried out by a modified Simulated Annealing Algorithm (m-SAA) (Cabrera-Ruiz et al., 2021). The algorithm has the same programming on all platforms, using the respective random number functions, to make the study as homogeneous as possible. Each optimization is run ten times, with hypothesis testing used to eliminate anomalous cases. The aspects evaluated are the time per iteration, the standard deviation between tests, and the number of feasible solutions. The results indicate that the best option for carrying out the optimization is the VBA ® interface; however, the Python ® interface performs very similarly. Since there is little literature on optimization algorithm packages for VBA ®, connecting through Python ®, an open-source language, may be the most efficient and effective choice for performing stochastic optimization with the Aspen Plus ® software.
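
For illustration, a minimal sketch of the kind of Python-to-Aspen Plus connection compared in this work is given below, using the COM automation interface through pywin32. The .bkp file name and variable node paths are placeholders that must match an actual flowsheet, and the snippet only probes a few candidate values rather than running the full m-SAA.

```python
# Minimal sketch of an Aspen Plus - Python connection via COM middleware (pywin32).
# File name and node paths are placeholders for an actual flowsheet.
import os
import win32com.client as win32  # pywin32

aspen = win32.Dispatch("Apwn.Document")                 # Aspen Plus automation server
aspen.InitFromArchive2(os.path.abspath("column.bkp"))   # open an existing simulation (placeholder)

def evaluate(reflux_ratio):
    """Write a decision variable, run the flowsheet, and read back a result."""
    # Hypothetical node paths; real paths depend on the block and variable names
    aspen.Tree.FindNode(r"\Data\Blocks\COL1\Input\BASIS_RR").Value = reflux_ratio
    aspen.Engine.Run2()
    return aspen.Tree.FindNode(r"\Data\Blocks\COL1\Output\REB_DUTY").Value

# A stochastic optimizer such as the m-SAA would call evaluate() repeatedly;
# here only a few candidate reflux ratios are probed.
for rr in (1.5, 2.0, 2.5):
    print(rr, evaluate(rr))
```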



3D simulation and design of MEA-based absorption system for biogas purification

Debayan Mazumdar, Wei Wu

National Cheng Kung University, Taiwan

The shape and geometry design of an MEA-based absorption system using ANSYS Fluent R22 is addressed. CFD studies are conducted to observe the effect of liquid distribution quality on counter-current two-phase absorption under different liquid distributor designs. Through simulation and analysis, the detailed exploration of the fluid dynamics offers critical insights and enables performance optimization. Unlike previous literature, which focused on unstructured packing, this work is carried out on structured Mellapak 500X packing, demonstrating the overall efficiency of an MEA-based absorption system for different distributor patterns. A previously reported model for calculating liquid distribution quality is used to gain a detailed understanding of the relationship between the initial layers of packing and the pressure difference.



Enhancing Chemical Process Design: Aligning DEXPI Process with BPMN 2.0 for Improved Efficiency in Data Exchange

Shady Khella1, Markus Schichtel2, Erik Esche1, Frauke Weichhardt2, Jens-Uwe Repke1

1Process Dynamics and Operations Group, Technische Universität Berlin, Berlin, Germany; 2Semtation GmbH, Potsdam, Germany

BPMN 2.0 is a widely adopted standard across various industries, primarily used for business process management outside of the engineering sphere [1]. Its long history and widespread use have contributed to a mature ecosystem, offering advanced software tools for editing and optimizing business workflows.

DEXPI Process, a newly developed standard for early-phase chemical process design, focuses on representing Block Flow Diagrams (BFDs) and Process Flow Diagrams (PFDs), both crucial in the conceptual design phase of chemical plants. It provides a standardized way to document design activity, offering engineers a clear rationale for design decisions [2], which is especially valuable during a plant’s operational phases. While DEXPI Process offers a robust data model, it currently lacks an established serialization format for efficient data exchange. As Cameron et al. note in [2], finding a suitable format for DEXPI Process remains a key research area, essential for enhancing its usability and adoption. So far, Cameron et al. have explored several serialization formats for exchanging DEXPI Process information, including AutomationML, an experimental XML, and UML [2].

This work aims to map the DEXPI Process data model to BPMN 2.0, providing a standardized serialization for the newly developed standard. Mapping DEXPI Process to BPMN 2.0 also unlocks access to BPMN’s extensive software toolset. We investigate and validate the effectiveness of this mapping and the enhancements it brings to the usability of DEXPI Process through a case study based on the Tennessee-Eastman process, described in [3]. We then compare our approach with those of Cameron et al. in [2].

We conclude by presenting our findings and the key benefits of this mapping, such as improved interoperability and enhanced toolset support for chemical process engineers. Additionally, we discuss the challenges encountered during the implementation, including aligning the differences in data structures between the two models. Furthermore, we believe this mapping serves as a bridge between chemical process design engineers and business process management teams, unlocking opportunities for better collaboration and integration of technical and business workflows.

References:

[1] ISO. (2022). Information technology — Object Management Group Business Process Model and Notation. ISO/IEC 19510:2013. https://www.iso.org/standard/62652.html

[2] Cameron, D. B., Otten, W., Temmen, H., Hole, M., & Tolksdorf, G. (2024). DEXPI Process: Standardizing Interoperable Information for process design and analysis. Computers & Chemical Engineering, 182, 108564. https://doi.org/10.1016/j.compchemeng.2023.108564

[3] Downs, J. J., & Vogel, E. F. (1993). A plant-wide industrial process control problem. Computers & chemical engineering, 17(3), 245-255. https://doi.org/10.1016/0098-1354(93)80018-I



Linear and non-linear convolutional approaches and XAI for spectral data: classification of waste lubricant oils

Rúben Gariso, João Coutinho, Tiago Rato, Marco Seabra Reis

University of Coimbra, Portugal

Waste lubricant oil (WLO) is a hazardous residue that requires careful management. Among the available options, regeneration is the preferred approach for promoting a sustainable circular economy. However, regeneration is only viable if the WLO does not coagulate during the process, which can cause operational issues, possibly leading to premature shutdown for cleaning and maintenance. To mitigate this risk, a laboratory analysis using an alkaline treatment is currently employed to assess the WLO coagulation potential before it enters the regeneration process. Nevertheless, such a laboratory test is time-consuming, presents several safety risks, and its outcome is subjective, depending on visual interpretation by the analyst.

To expedite decision-making, process analytics technology (PAT) and machine learning were employed to develop a model to classify WLOs according to their coagulation potential. To this end, three approaches were followed, with a focus on convolutional methodologies spanning both linear and non-linear modeling structures. The first approach (benchmark) uses partial least squares for discriminant analysis (PLS-DA) (Wold, Sjöström and Eriksson, 2001) and interval partial least squares (iPLS) (Nørgaard et al., 2000) combined with standard spectral pre-processing techniques (27 model variants). The second approach applies the wavelet transform (Mallat, 1989) to decompose the spectra into multiple frequency components by convolution with linear filters, and PLS-DA for feature selection (10 model variants). Finally, the third approach consists of a convolutional neural network (CNN) (Yang et al., 2019) to estimate the optimal filter for feature extraction (1 model variant). These models were trained on real industrial data provided by Sogilub, the organization responsible for the management of WLO in Portugal.

The results show that the three modeling approaches can attain high accuracy, with an average accuracy of 91%. The development of the benchmark model requires an exhaustive search over multiple combinations of pre-processing filters since the optimal scheme cannot be defined a priori. On the other hand, implicit spectral filtering using wavelet transform convolution significantly lowers the complexity of the model development task, reducing the computational burden while maintaining the interpretability of linear approaches. The CNN was also capable of circumventing the pre-processing burden by implicitly estimating convolutional filters in the inner layers. Additionally, the use of explainable artificial intelligence (XAI) techniques demonstrated that the relevant features of the CNN model are in good accordance with the linear models. In summary, with an adequate level of expertise and effort, different approaches can provide similar prediction performances. However, the development process can be made faster, simpler, and computationally less demanding through a proper convolutional methodology, namely the one based on the wavelet transform.
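
The sketch below illustrates the spirit of the second approach: each spectrum is decomposed by a discrete wavelet transform (convolution with linear filters) and the coefficients are fed to PLS-DA, implemented as PLS regression on a 0/1 class label. The data are synthetic and the wavelet choice, decomposition level, and number of latent variables are arbitrary, not the values used in the study.

```python
# Illustrative wavelet + PLS-DA pipeline on synthetic "spectra" (not the WLO data).
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 600
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)   # synthetic spectra
y = (X[:, 300] > X[:, 300].mean()).astype(float)                 # synthetic binary class

def wavelet_features(spectrum, wavelet="db4", level=4):
    """Convolutional filtering via DWT: concatenate approximation and detail coefficients."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.concatenate(coeffs)

F = np.array([wavelet_features(s) for s in X])
F_train, F_test, y_train, y_test = train_test_split(F, y, test_size=0.3, random_state=0)

plsda = PLSRegression(n_components=5).fit(F_train, y_train)
y_pred = (plsda.predict(F_test).ravel() > 0.5).astype(float)      # threshold the PLS score
print("accuracy:", np.mean(y_pred == y_test))
```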

References:

Mallat, S.G. (1989) IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), pp. 674–693.

Nørgaard, L., Saudland, A., Wagner, J., Nielsen, J.P., Munck, L. and Engelsen, S.B. (2000) Applied Spectroscopy, 54(3), pp. 413–419.

Wold, S., Sjöström, M. and Eriksson, L. (2001) Chemometrics and Intelligent Laboratory Systems, 58(2), pp. 109–130.

Yang, J., Xu, J., Zhang, X., Wu, C., Lin, T. and Ying, Y. (2019) Analytica Chimica Acta, 1081, pp. 6–17.



Mathematical Modeling of Ammonia Nitrogen Dynamics in RAS Integrated with Bayesian Parameter Optimization

Lingwei Jiang1, Tao Chen1, Bing Guo2, Daoliang Li3

1School of Chemistry and Chemical Engineering, University of Surrey, United Kingdom; 2School of Sustainability, Civil and Environmental Engineering, University of Surrey, United Kingdom; 3National Innovation Center for Digital Fishery, China Agricultural University, China

The concentration of ammonia nitrogen is a critical parameter in aquaculture, as excessive levels can be toxic to aquatic animals, hampering their growth or even resulting in death. Therefore, monitoring of the ammonia nitrogen concentration in aquaculture is important for productivity and animal welfare. However, commercially available ammonia nitrogen sensors are expensive, have short lifespans thus requiring frequent maintenance, and can provide unreliable results during use. In contrast, sensors for other water quality parameters (e.g., temperature, dissolved oxygen, pH) are well developed and accurate, and they could provide useful information to help predict the ammonia nitrogen concentration through a mathematical model. In this study, we present a new mathematical model for predicting ammonia nitrogen, combining fish bioenergetics with a mass balance of ammonia nitrogen. We conduct a sensitivity analysis of the model parameters to identify the key ones and then use a Bayesian optimisation algorithm to calibrate these key parameters to data collected from a recirculating aquaculture system in our lab. We demonstrate that the model is able to give reasonable predictions of ammonia nitrogen on experimental data not used in model calibration.
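
A minimal sketch of the calibration step is shown below, using Gaussian-process Bayesian optimisation (scikit-optimize) to fit two rate parameters of a toy first-order ammonia nitrogen model to noisy measurements. The model structure, parameter bounds, and data are placeholders for illustration only, not the bioenergetics/mass-balance model of the study.

```python
# Sketch of Bayesian-optimisation calibration of two toy model parameters (illustrative).
import numpy as np
from skopt import gp_minimize

t = np.linspace(0, 10, 50)                                   # time, days
tan_measured = 2.0 * (1 - np.exp(-0.3 * t)) \
    + np.random.default_rng(0).normal(0, 0.05, t.size)       # synthetic measurements

def tan_model(params):
    """Toy ammonia nitrogen model: production rate k1, first-order removal k2."""
    k1, k2 = params
    return (k1 / k2) * (1 - np.exp(-k2 * t))

def objective(params):
    return float(np.mean((tan_model(params) - tan_measured) ** 2))   # misfit to data

result = gp_minimize(objective, dimensions=[(0.1, 2.0), (0.05, 1.0)],
                     n_calls=40, random_state=0)
print("calibrated (k1, k2):", np.round(result.x, 3), " MSE:", round(result.fun, 5))
```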



Computer-Aided Design of a Local Biorefinery Scheme from Water lily (Eichhornia Crassipes) to Produce Power and Bioproducts

Maria de Lourdes Cinco-Izquierdo1, Araceli Guadalupe Romero-Izquierdo2, Ricardo Musule-Lagunes3, Marco Antonio Martínez-Cinco1

1Universidad Michoacana de San Nicolás de Hidalgo, Facultad de Ingeniería Química, México; 2Universidad Autónoma de Querétaro, Facultad de Ingeniería, Mexico; 3Universidad Veracruzana, Instituto de Investigaciones Forestales, México

Lake ecosystems provide valuable services, such as vegetation and fauna, fertile soils, nutrient and climatic regulation, carbon sequestration, and recreation and tourism activities. Nevertheless, some are currently affected by high resource extraction, climate change, or alien plant invasion (API), which causes the loss of local species and the deterioration of ecosystem function. Regarding API, reports have identified 665 invasive exotic plants in México (IMTA, 2020), among which the water lily (Eichhornia crassipes) stands out due to its rapid proliferation, covering most national water bodies. Thus, several strategies for controlling and using E. crassipes have been proposed (Gutiérrez et al., 1994). Specifically, after extraction, the water hyacinth biomass has been used as raw material for the production of several bioproducts and bioenergy; however, most of these uses have not covered the region's needs, and economic profitability has not been reached. In this work, we propose a local biorefinery scheme to produce power and bioproducts from water lilies, using Aspen Plus V.10.0, according to the needs of the Patzcuaro Lake community in Michoacán, Mexico. The scheme has been designed to process 197.6 kg/h of water lily, aligned with the extraction schedule of the region (COMPESCA, 2023), separating the biomass into two main fractions: root (RT, 47 wt % of the total plant) and stems-leaves (S-L, 53 wt % of the total plant). Power and steam are generated from the RT flow (combustion process), while the S-L fraction is separated into two streams of 50 wt % each. The first stream is the feedstock for an anaerobic digestion process operated at 35 °C to produce a fertilizer stream from the process sludge and biogas, which is converted to power using a turbine. The second S-L stream enters drying equipment to reduce its moisture content; the dried biomass is then divided between two processing zones: 1) pyrolysis to produce bio-oil, biochar, and high-temperature gases, and 2) gasification to generate syngas, which is converted to power. According to the results, the total generated power covers all the electricity requirements of the scheme, producing a surplus of 45 % relative to total consumption; the system also covers all heating requirements. In addition, the fertilizer and biochar are useful products for regional needs, improving the total annual cost (TAC) of the scheme.

References

COMPESCA. (2023, November 01). Comisión de Pesca del Estado de Michoacán. Informe anual de avances del Programa: Mantenimiento y Rehabilitación de Embalses.

Gutiérrez López, F. Arreguín Cortés, R. Huerto Delgadillo, P. Saldaña Fabela (1994). Control de malezas acuáticas en México. Ingeniería Hidráulica En México, 9(3), 15–34.

IMTA. (2020, July 15). Instituto Mexicano de Tecnología del Agua. Plantas Invasoras.



System analysis and optimization of replacing surplus refinery fuel gas by coprocessing with HTL bio-crude off-gas in oil refineries.

Erik Lopez Basto1,2, Eliana Lozano Sanchez3, Samantha Eleanor Tanzer1, Andrea Ramírez Ramírez4

1Department of Engineering Systems and Services, Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, the Netherlands; 2Cartagena Refinery, Ecopetrol S.A., Colombia; 3Department of Energy, Aalborg University, Aalborg, Denmark; 4Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, Delft, the Netherlands

Sustainable production is a critical goal for the oil refining industry supporting the energy transition and reducing climate change impacts. This research uses Ecopetrol, Colombia's state-owned oil and gas company, and one of its high-complexity refineries (processing 11.45 Mtpd of crude oil) as a case study to explore CO2 reduction strategies. Decarbonizing refineries requires a combination of technologies, including low-carbon hydrogen (Low-C H2), sustainable energy, carbon capture, utilization, and storage (CCUS), bio-feedstocks, and product changes.

A key question addressed is the impact of CCUS on refinery performance and the potential to repurpose surplus refinery fuel gas while balancing techno-economic and environmental considerations across short- and long-term goals.

Colombia's biomass resources offer opportunities for advanced biofuel technologies like Hydrothermal Liquefaction (HTL), which produces bio-crude compatible with existing refinery infrastructure and an off-gas with biogenic carbon that can be used in CCU processes. This research is grounded in the opportunity to utilize refinery fuel gas and HTL bio-crude off-gas in conversion processes to produce more valuable and sustainable products (see Figure 1 for the simplified system block diagram).

Our systems optimization approach, using mixed-integer linear programming (MILP) in Linny-R software, evaluates refinery operations and minimizes costs under CO2 emission constraints. Building on optimized low-C H2 and CCS systems (Lopez, E., et al. 2024), the first step assesses surplus refinery fuel gas, and the second screens CCU technologies, selecting steam reforming and autothermal reforming to convert fuel gas into methanol. HTL bio-crude off-gas is integrated into thermocatalytic processes for further methanol production, with techno-economic data sourced from literature and Aspen Plus simulations. Detailed techno-economic assessment presented in the work by Lozano, E., et al. (2024) is used as input for this study.

The objective function in the system analysis focuses on cost minimization while achieving specified CO2 reduction targets.

Results show that CCU technologies and surplus gas utilization can significantly reduce CO2 emissions, offering valuable insights into how refineries can contribute to global decarbonization efforts. Integrating biomass-derived feedstocks and CCU technologies provides a viable path for sustainable refinery operations, advancing the industry's role in a more sustainable energy future.

Figure 1. Simplified system block diagram

References

Lopez, E., et al. (2024). Assessing the impacts of low-carbon intensity hydrogen integration in oil refineries. Manuscript in press.

Lozano, E., et al. (2024). TEA of co-processing refinery fuel gas and biogenic gas streams for methanol synthesis. Manuscript submitted for publication in Escape Conference 2025.



Technical Assessment of direct air capture using piperazine in an advanced solvent-based absorption process

Shengyuan Huang, Olajide Otitoju, Yao Zhang, Meihong Wang

University of Sheffield, United Kingdom

CO2 emissions from power generation and industry have increased the concentration of CO2 in the atmosphere to 422 ppm, causing a series of climate and environmental problems. Carbon capture is one of the effective ways to mitigate global warming. Direct air capture (DAC), as one of the negative emission technologies, has great potential for commercial development, with 980 Mt CO2 to be captured in 2050 under the IEA Net Zero Emissions Scenario.

DAC can be achieved through absorption using solvents, adsorption using solid adsorbents, or a combination of both. This study focuses on liquid-phase DAC (L-DAC) because it requires less land and lower specific energy consumption than other technologies, making it more suitable for large-scale commercial deployment. In the literature, MEA is widely used in DAC. However, the use of MEA in the DAC process faces two major challenges: high energy consumption (6 to 8.8 GJ/tCO2) and high cost (up to $340/tCO2), which are barriers to DAC deployment.

This research aims to study DAC using piperazine (PZ) with different configurations and to evaluate the technical and economic performance at large scale through process simulation. PZ as the new solvent could improve absorption capacity and performance. The simulation is implemented in Aspen Plus®. The DAC model using PZ will be validated against simulation data from the literature to ensure its accuracy. Different configurations (e.g. standard configuration vs advanced flash stripper), different loadings and carbon capture levels will be studied to improve system performance and energy consumption. The outcomes of this study can be useful for process design by industrial practitioners as well as for policymakers.

Acknowledgement: The authors would like to thank the financial support of the EU RISE project OPTIMAL (Grant Agreement No: 101007963).



TEA of co-processing refinery fuel gas and biogenic gas streams from thermochemical conversion for methanol synthesis

Eliana Lozano Sanchez1, Erik Lopez Basto2, Andrea Ramirez Ramirez2

1Aalborg University, Denmark; 2Delft University of Technology, The Netherlands

Heat decarbonization is a key strategy for fossil refineries to lower their emissions in the short/medium term. Direct electrification and other low-carbon heat sources are expected to play a major role; however, the current availability of refinery fuel gases (RFG) - mixtures of residual gases rich in hydrocarbons used for on-site heat generation - may limit decarbonization if alternative uses for surplus RFG are not explored. Thus, evaluating RFG utilization options is key for refineries, while the integration of renewable carbon sources remains crucial to decrease fossil crude dependence.

This study presents a techno-economic assessment of co-processing biogenic CO2 sources from biomass thermochemical conversion with RFG to produce methanol, a key chemical with high demand in industry and as a shipping fuel. Hydrothermal liquefaction (HTL) and fast pyrolysis (FP) are the technologies evaluated due to their integration potential in a refinery context: they produce bio-oils with drop-in fuel potential that can use existing infrastructure, and a by-product gas rich in CO2/CO that can be co-processed with the RFG into methanol, which remains unexplored in the literature and stands as the main contribution of this study.

The process is simulated in Aspen HYSYS assuming a fixed gas input of 25 tonne/h, which corresponds to estimated RFG surplus in a refinery case study after some emission reduction measures. The process comprises a reforming step to produce syngas (steam and autothermal reforming -SMR/ATR- are evaluated) followed by methanol synthesis via CO2/CO hydrogenation. The impact of gas co-processing is evaluated for increasing ratios of HTL/FP gas relative to the RFG baseline in terms of hydrogen requirement, carbon conversion to methanol, overall water balance and specific energy consumption.

Preliminary results indicate that the valorization of RFG using SMR allows for an increased share of biogenic gas of up to 45 wt% without a negative impact on the overall carbon conversion to methanol. SMR of the RFG results in a syngas with excess hydrogen, which makes it possible to accommodate additional biogenic CO2 and operate at lower stoichiometric numbers without a negative impact on conversion and without additional H2 input, a key advantage of this integration. Although overall carbon conversion is not affected, the methanol throughput is reduced by 24-27 % relative to the RFG baseline due to the higher concentration of CO2 in the mix, which lowers the carbon content and increases water production during methanol synthesis. The ATR case results in lower energy consumption but produces less hydrogen, limiting the biogenic gas share to only 7 wt% before additional H2 is required for methanol synthesis.

This study aims to contribute to the discussions on integration of low-carbon technologies into refinery operations, highlighting synergies between fossil and biobased feedstocks that expand the state of the art of co-processing bio-feedstocks from thermochemical biomass conversion. Future work includes the estimation of trade-offs between production costs and methanol carbon intensity, motivating the integration of these technologies in a more comprehensive system analysis of fossil refineries and their net-zero pathways.



Surrogate Model-Based Optimisation of Pressure-Swing Distillation Sequences with Variable Feed Composition

Laszlo Hegely, Peter Lang

Department of Building Services and Process Engineering, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, Hungary

For separating azeotropic mixtures, special distillation methods must be used, such as pressure-swing (PSD), extractive or heteroazeotropic distillation. The advantage of PSD is that it does not require the addition of a new component. However, it can only be applied if the azeotrope is pressure-sensitive, and its energy demand can also be high. The configuration of the system depends not only on the type of the homoazeotrope but also on the feed composition (z). If z is between the azeotropic compositions at the pressures of the columns, the feed can be introduced into either the low- (LP-HP sequence) or the high-pressure column (HP-LP sequence). Depending on z, one of the sequences will be optimal, whether with respect to energy demand or total annual cost (TAC).

Hegely et al. (2022) studied the separation of a maximum-boiling azeotropic mixture water-ethylenediamine by PSD where z (35 mol% water) was between the azeotropes at 0.1 and 2.02 bar. The TAC of both sequences was minimised without and with heat integration. The LP-HP sequence was more favourable at this composition. The optimisation was performed by two methods: a genetic algorithm (GA) and a surrogate model-based optimisation method (SMBO). By SMBO, algebraic surrogate models were fitted to simulation results by the ALAMO software (Cozad et al., 2014) and the resulting optimisation problem was solved. Different decomposition techniques were tested with the models fitted (1) to elements of TAC (heat duty of LPC, column diameters), (2) to capital and energy costs or (3) to TAC itself. The best results were achieved with the highest level of decomposition. Although TAC obtained by SMBO was lower than that of GA only once, the difference was always within 5 %.

In this work, our aim is to (1) improve the accuracy of surrogate models, thus, the performance of SMBO and (2) study the influence of z on the optimum of the two sequences, using the case study of Hegely et al. (2022). The first goal is achieved by fitting the models to the results of the single columns instead of the two-column system. Achieving the second goal requires repeated optimisation at different feed compositions, which would be very time-consuming with conventional optimisation methods. However, an advantage of SMBO is that z can be included as input variable of the models. This enables quickly finding the optimum for any feed composition.

The novelty of our work consists of determining the optimal PSD system as a function of the feed composition by SMBO. Additionally, this is the first work that uses ALAMO to fit the models to be used in the optimisation to the individual columns.
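
The sketch below illustrates the surrogate idea in miniature: an algebraic model of TAC as a function of a design variable and the feed composition z is fitted to "simulation" samples and then optimised for any z without further simulator calls. The "simulator" is a synthetic stand-in and the surrogate is a plain polynomial least-squares fit rather than an ALAMO model.

```python
# Minimal surrogate-model-based optimisation sketch with z as a surrogate input.
import numpy as np
from scipy.optimize import minimize_scalar

def simulator_tac(n_stages, z):
    """Stand-in for a rigorous column simulation returning TAC (arbitrary units)."""
    return (n_stages - (20 + 30 * z)) ** 2 / 50 + 5 + 2 * z

# "Simulation" samples over the design variable and the feed composition
rng = np.random.default_rng(0)
N = rng.uniform(10, 60, 200)
Z = rng.uniform(0.2, 0.5, 200)
TAC = simulator_tac(N, Z)

# Fit a quadratic surrogate TAC ~ f(N, z) by least squares
A = np.column_stack([np.ones_like(N), N, Z, N**2, N * Z, Z**2])
coef, *_ = np.linalg.lstsq(A, TAC, rcond=None)

def surrogate_tac(n_stages, z):
    x = np.array([1.0, n_stages, z, n_stages**2, n_stages * z, z**2])
    return float(x @ coef)

# Optimise the design variable for several feed compositions using only the surrogate
for z in (0.25, 0.35, 0.45):
    res = minimize_scalar(lambda n: surrogate_tac(n, z), bounds=(10, 60), method="bounded")
    print(f"z = {z:.2f}: optimal stages = {res.x:.1f}, surrogate TAC = {res.fun:.2f}")
```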

References

Cozad A., Sahinidis N.V., Miller D.C., 2014. Learning surrogate models for simulation-based optimization. AIChE Journal, 60, 2211–2227.

Hegely L., Karaman Ö.F., Lang P., 2022, Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with the Feed Composition between the Azeotropes. In: Klemeš J.J., Nižetić S., Varbanov P.S. (eds.) Proceedings of the 25th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. PRES22.0188.



Unveiling Probability Histograms from Random Signals using a Variable-Order Quadrature Method of Moments

Menwer Attarakih1, Mark W. Hlawitschka2, Linda Al-Hmoud1, Hans-Jörg Bart3

1The University of Jordan, Jordan, Hashemite Kingdom of; 2Johannes Kepler University; 3RPTU Kaiserslautern

Random signals play a crucial role in chemical and process engineering, where industrial plants collect and analyse big data for process understanding and decision-making. This requires unveiling the underlying probability histogram from process random signals with a finite number of bins. Unfortunately, finding the optimal number of bins is still based on empirical optimization and general rules of thumb (e.g. the Scott and Freedman formulas). The disadvantages here are the large number of bins that may be encountered and the inconsistency of the histogram with the low-order moments of the true data.

In this contribution, we introduce an alternative and general method to unveil probability histograms based on the Quadrature Method Of Moments (QMOM). As a data compression method, it works with the calculated moments of an unknown weight probability density function. Because of the ill-conditioned inverse moment problem, there is no simple and general inversion algorithm to recover the unknown probability histogram, which is usually required in many design applications and real-time online monitoring (Thibault et al., 2023). Our method uses a novel variable-integration-order QMOM which adapts automatically depending on the relevance of the information contained in the random data. The number of bins used to recover the underlying histogram increases as the information entropy does. In the hypothetical limit where the data has zero information entropy, the number of bins is reduced to one. In the QMOM realm, the number of bins is explored in an evolutionary algorithm that assigns the nodes in an optimal manner to sample the unknown function or process from which the data is generated. The algorithm terminates when no more important information is available for assignment to the newly created node, up to a user-predefined threshold. If the data come from a dynamic source with varying mean and variance, the boundaries of the bins move dynamically to reflect the nature of the data.

The method is applied to many case studies, including a moment-consistent bimodal histogram unveiled from monthly mean maximum air-to-surface temperatures in Amman city from 1901 to 2023 using only 13 bins. In another case study, diastolic and systolic blood pressure measurements are found to follow a normal distribution histogram with 11 bins, using a data series spanning a six-year period. In a unique dynamic case study, batch particle aggregation plus growth is simulated starting from 11 bins, with the simulation ending with 14 bins after 5 seconds of simulation time. The result is a histogram which is consistent with 28 low-order moments. In addition, the measured droplet distribution from a pilot-plant sparger of toluene in water is found to follow a normal distribution histogram with 11 bins.

As a main conclusion, our method is a universal histogram reconstruction method which only requires a sufficient number of moments, and it has been intensively validated using real-life problems.
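
As a small illustration of the moment-consistency idea only, the sketch below compares low-order moments computed from raw bimodal data with those reconstructed from a fixed 13-bin histogram. The data and bin placement are illustrative; the paper's adaptive QMOM-based bin assignment is not reproduced here.

```python
# Check how well a binned histogram reproduces the low-order moments of the raw data.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(15, 3, 4000), rng.normal(30, 4, 2000)])   # bimodal sample

counts, edges = np.histogram(data, bins=13)          # fixed 13-bin histogram
centers = 0.5 * (edges[:-1] + edges[1:])
weights = counts / counts.sum()

def moments(values, probs, orders):
    return [float(np.sum(probs * values**k)) for k in orders]

orders = range(5)
raw = moments(data, np.full(data.size, 1 / data.size), orders)
hist = moments(centers, weights, orders)

for k, (m_raw, m_hist) in enumerate(zip(raw, hist)):
    rel_err = abs(m_hist - m_raw) / abs(m_raw)
    print(f"moment {k}: data = {m_raw:12.3f}  histogram = {m_hist:12.3f}  rel. err = {rel_err:.2%}")
```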

References

Thibault, E., Chioua, M., McKay, M., Korbel, M., Patience, G. S., Stuart, P. R. (2023), Can. J. Chem. Eng., 101, 6055-6078.



Sensitivity Analysis of Key Parameters in LES-DEM Simulations of Fluidized Bed Systems Using generalized polynomial chaos

Radouan Boukharfane, Nabil El Mocayd

UM6P, Morocco

In applications involving fine powders and small particles, the accuracy of numerical simulations—particularly those employing the Discrete Element Method (DEM) for predicting granular material behavior—can be significantly impacted by uncertainties in critical parameters. These uncertainties include coefficients of restitution for particle-particle and particle-wall collisions, viscous damping coefficients, and other related factors. In this study, we utilize stochastic expansions based on point-collocation non-intrusive polynomial chaos to conduct a sensitivity analysis of a fluidized bed system. We consider four key parameters as random variables, each assigned a specific probability distribution over a designated range. This uncertainty is propagated through high-fidelity Large Eddy Simulation (LES)-DEM simulations to statistically quantify its impact on the results. To effectively explore the four-dimensional parameter space, we analyze a comprehensive database comprising over 1200 simulations. Notably, our findings reveal that variations in the particle and wall Coulomb friction coefficients exert a more pronounced influence on streamwise particle velocity than do variations in the particle and wall normal restitution coefficients.
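
The sketch below illustrates point-collocation polynomial chaos on a toy problem with two uniform uncertain parameters (a friction and a restitution coefficient): a total-degree-2 Legendre expansion is fitted by least squares and first-order Sobol indices are read off the coefficients. The "model" is a cheap analytical stand-in for an LES-DEM run; all numbers are illustrative.

```python
# Point-collocation non-intrusive polynomial chaos with first-order Sobol indices (toy example).
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)

def model(friction, restitution):
    """Stand-in for the quantity of interest (e.g., streamwise particle velocity)."""
    return 1.0 + 2.0 * friction + 0.3 * restitution + 0.5 * friction * restitution

# Uniform ranges of the uncertain parameters, mapped to the standard interval [-1, 1]
fric = rng.uniform(0.1, 0.6, 200)
rest = rng.uniform(0.7, 1.0, 200)
x1 = 2 * (fric - 0.1) / 0.5 - 1
x2 = 2 * (rest - 0.7) / 0.3 - 1
y = model(fric, rest)

def phi(n, x):
    """Orthonormal Legendre polynomial of order n for a uniform density on [-1, 1]."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

multi_index = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]   # total degree <= 2
Psi = np.column_stack([phi(i, x1) * phi(j, x2) for i, j in multi_index])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)                   # point collocation (least squares)

variance = np.sum(coef[1:] ** 2)
S1 = sum(coef[k] ** 2 for k, (i, j) in enumerate(multi_index) if i > 0 and j == 0) / variance
S2 = sum(coef[k] ** 2 for k, (i, j) in enumerate(multi_index) if j > 0 and i == 0) / variance
print(f"variance = {variance:.4f}, first-order Sobol: friction = {S1:.2f}, restitution = {S2:.2f}")
```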



An Efficient Convex Training Algorithm for Artificial Neural Networks by Utilizing Piecewise Linear Approximations and Semi-Continuous Formulations

Ece Serenat Koksal1, Erdal Aydin1, Metin Turkay2,3

1Department of Chemical and Biological Engineering, Koç University, Turkiye; 2Department of Industrial Engineering, Koç University, Turkiye; 3SmartOpt, Turkiye

Artificial neural networks (ANNs) are mathematical models representing the relationships between inputs and outputs, inspired by the structure of neuron connections in the human brain. ANNs consist of input and output layers, along with user-defined hidden layers containing neurons, which are interconnected through activation functions such as rectified linear unit (ReLU), hyperbolic tangent and sigmoid. A feedforward neural network (FNN) is a type of ANN that propagates information in one direction, from input to output. ANNs are widely used as data-driven approaches, especially for complex systems like chemical engineering, where mechanistic modelling poses significant challenges. However, they often encounter issues such as overfitting, insufficient data, and suboptimal training.

To address suboptimal training, piecewise linear approximations of nonlinear activation functions, such as sigmoid and hyperbolic tangent, can be employed. This approach may transform the non-convex training problem into a convex one, enabling training via a special ordered set type II (SOS2) formulation (Koksal & Aydin, 2023; Sildir & Aydin, 2022). The resulting formulation is a mixed-integer linear programming (MILP) problem. However, as the number of neurons, the number of approximation pieces or the dataset size increases, the computational time rises due to the exponential increase in complexity associated with binary variables, hyperparameters and data points.

In this work, we propose a novel training algorithm for FNNs by employing SOSX variables, as defined by Keha et al., (2004) instead of the conventional SOS2 formulation. By modifying the branching algorithm, we transform the MILP problem into subsets of linear programming (LP) problems. This transformation also brings about parallelizable properties, which may further reduce the computational time for training the FNNs. Results demonstrate that this change in the branching strategy significantly reduces computational time, making the formulation more efficient for convexifying the FNN training process.
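
The sketch below illustrates only the piecewise-linear ingredient behind such formulations: the tanh activation is replaced by segments between fixed breakpoints, which is what a lambda/SOS2-type convex-combination formulation represents inside an optimization model. The breakpoint grid is arbitrary, and the snippet is not the SOSX training algorithm proposed in the paper.

```python
# Piecewise-linear approximation of the tanh activation over a fixed breakpoint grid.
import numpy as np

breakpoints = np.linspace(-3.0, 3.0, 7)          # x-coordinates of the segment ends
values = np.tanh(breakpoints)                    # activation values at the breakpoints

def pwl_tanh(x):
    """Evaluate the piecewise-linear approximation (a convex combination of the two
    adjacent breakpoints, i.e. what the lambda/SOS2 formulation encodes)."""
    x = np.clip(x, breakpoints[0], breakpoints[-1])
    return np.interp(x, breakpoints, values)

for x in np.linspace(-4, 4, 9):
    print(f"x = {x:+.1f}   tanh = {np.tanh(x):+.4f}   pwl = {pwl_tanh(x):+.4f}")

grid = np.linspace(-3, 3, 1000)
print("max abs error on [-3, 3]:", np.max(np.abs(np.tanh(grid) - pwl_tanh(grid))))
```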

References

Keha, A. B., De Farias, I. R., & Nemhauser, G. L. (2004). Models for representing piecewise linear cost functions. Operations Research Letters, 32(1), 44–48. https://doi.org/10.1016/S0167-6377(03)00059-2

Koksal, E. S., & Aydin, E. (2023). Physics Informed Piecewise Linear Neural Networks for Process Optimization. Computers and Chemical Engineering, 174. https://doi.org/10.1016/j.compchemeng.2023.108244

Sildir, H., & Aydin, E. (2022). A Mixed-Integer linear programming based training and feature selection method for artificial neural networks using piece-wise linear approximations. Chemical Engineering Science, 249. https://doi.org/10.1016/j.ces.2021.117273



Economic evaluation of Solvay processes for sodium bicarbonate production with brine and carbon tax considerations

Dina Ewis, Zeyad Ghazi, Sabla Y. Alnouri, Muftah H. El-Naas

Gas Processing Center, College of Engineering, Qatar University, Doha, Qatar

Reject brine discharge and high CO2 emissions from desalination plants are major contributors to environmental pollution. Managing reject brine involves significant costs, mainly due to the energy-intensive processes required for brine dilution and disposal. In this context, the Solvay process represents a mitigation scheme that can effectively reduce reject brine salinity and sequester CO2 while simultaneously producing sodium bicarbonate. It offers a combined approach that manages reject brine and CO2 in a single reaction while producing an economically feasible product. Therefore, this study reports a systematic techno-economic assessment of conventional and modified Solvay processes, incorporating brine and carbon taxes. The model evaluates the significance of implementing a brine and CO2 tax on the economics of the conventional and Ca(OH)2-modified Solvay processes, compared to industry expenditures on brine dilution and treatment before discharge to the sea. The results show that the conventional Solvay process becomes profitable after applying a brine tax of 1 dollar per cubic meter of brine and a CO2 tax of 42 dollars per tonne of CO2, both figures lower than the current costs associated with brine treatment and existing carbon taxes. Moreover, the profitability of the Ca(OH)₂-modified Solvay process increases even further with minimal brine and CO₂ taxes. The findings highlight the significance of adopting the modified Solvay process as an integrated solution for sustainable brine management and carbon capture.



THE GREEN HYDROGEN SUPPLY CHAIN IN THE BRAZILIAN STATE OF BAHIA: A DETERMINISTIC APPROACH

Leonardo Santana1, Gustavo Santos1, Pessoa Fernando1, Barbosa-Póvoa Ana Paula2

1SENAI CIMATEC university center, Brazil; 2Instituto Superior Técnico – Universidade de Lisboa, Portugal

Hydrogen is increasingly recognized as a pivotal element in decarbonizing the energy, transport, chemical industry, and agriculture sectors. However, significant technological challenges related to production, transport, and storage hinder its broader integration into these industries. Overcoming these barriers requires the development of a sustainable hydrogen supply chain (HSC). This paper aims to design and plan an HSC by developing a Mixed-Integer Linear Programming (MILP) model for the Brazilian state of Bahia, the fifth largest state of Brazil (as big as France) and a region with significant potential for sustainable electricity and electrolytic hydrogen production. The case study utilizes the existing road infrastructure; liquefied and compressed hydrogen transported via trucks or trains are considered. A monetization strategy is employed to consolidate both economic and environmental aspects into a single objective function, translating CO2 emissions into costs using carbon credit prices. Facility locations are selected based on the preferred locations for hydrogen production identified in Bahia's government report, considering four dimensions: economic, social, environmental, and technical. Wind power, solar PV, and grid electricity are considered as energy sources for the hydrogen production facilities, and the model selects the optimal combination of energy sources for each plant. The outcomes include the selection of specific hydrogen production plants to meet the demand centers' requirements, alongside decisions regarding the preferred form of hydrogen storage (liquefied or compressed) and the optimal energy source (solar, wind, or grid) for each facility. This model provides a practical contribution to the implementation of a sustainable green hydrogen supply chain in Bahia, focusing on the industrial sector's needs. The study offers a replicable and accessible computational approach to solving complex supply chain problems, especially in regions with growing interest in green hydrogen production.
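
A toy MILP sketch of the kind of decisions described above is shown below (PuLP): open production plants, choose an energy source per plant, and supply a single demand centre at minimum cost including monetized emissions. All numbers (costs, capacities, emission factors, demand) are invented for illustration and are not the Bahia case-study data, and the real model of course covers storage form, multiple demand centres, and transport modes.

```python
# Toy facility-selection and energy-source MILP for a green hydrogen chain (PuLP).
import pulp

plants = ["P1", "P2"]
sources = ["solar", "wind", "grid"]
demand = 80.0                                            # t H2 per period at one demand centre

open_cost = {"P1": 900.0, "P2": 700.0}                   # fixed cost of opening a plant
prod_cost = {"solar": 3.0, "wind": 2.8, "grid": 2.0}     # cost per t H2 by energy source
emission = {"solar": 0.5, "wind": 0.4, "grid": 9.0}      # t CO2 per t H2
carbon_price = 50.0                                      # cost per t CO2 (carbon credit price)
transport_cost = {"P1": 0.6, "P2": 1.1}                  # per t H2 to the demand centre
capacity = {"P1": 60.0, "P2": 90.0}

prob = pulp.LpProblem("green_h2_chain", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", plants, cat="Binary")
x = pulp.LpVariable.dicts("prod", [(p, s) for p in plants for s in sources], lowBound=0)

# Objective: opening costs + production, monetized emissions, and transport costs
prob += pulp.lpSum(open_cost[p] * y[p] for p in plants) + pulp.lpSum(
    (prod_cost[s] + carbon_price * emission[s] + transport_cost[p]) * x[(p, s)]
    for p in plants for s in sources)

prob += pulp.lpSum(x[(p, s)] for p in plants for s in sources) >= demand   # meet demand
for p in plants:
    prob += pulp.lpSum(x[(p, s)] for s in sources) <= capacity[p] * y[p]   # capacity if open

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p in plants:
    for s in sources:
        if x[(p, s)].value() and x[(p, s)].value() > 1e-6:
            print(f"{p} via {s}: {x[(p, s)].value():.1f} t H2")
```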



A combined approach to optimization of soft sensor architecture and physical sensor configuration

Lukas Furtner1, Isabell Viedt1, Leon Urbas1,2

1Process Systems Engineering Group, TU Dresden, Germany; 2Chair of Process Control Systems, TU Dresden, Germany

In the chemical industry, soft sensors are deployed to reduce equipment cost or to allow continuous measurement of process variables. Soft sensors do not monitor parameters via physical sensors but infer them from other process variables, often by means of parametric equations such as balances and thermodynamic or kinetic dependencies. Naturally, the precision of soft sensors is affected by the uncertainty of their input variables. This paper proposes a novel approach to automatically identify the most precise soft sensor based on a set of process system equations and the configuration of physical sensors in the chemical plant. Furthermore, the method assesses the benefit of deploying additional physical sensors to increase a soft sensor's precision. This enables engineers to derive adjustments to the existing sensor configuration in a chemical plant. Because it approximates the uncertainty of soft sensors used to infer a critical process variable via Monte Carlo simulation, the proposed method is robust against dependent, non-Gaussian uncertainties. Additionally, the approach allows the incorporation of hybrid semi-parametric soft sensors [1], which model poorly understood effects and dependencies within the process system with data-driven, non-parametric parts. Applied to the Tennessee Eastman process [2], the method identifies Pareto-optimal sensor configurations, considering sensor cost and monitoring precision for critical process variables. Finally, the method's deployment in real-world chemical plants is discussed.
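
As a small illustration of the core idea only, the sketch below propagates physical-sensor uncertainty through two candidate parametric soft sensors by Monte Carlo sampling and compares the spread of the inferred variable. The balance equations, sensor noise levels, and constants are illustrative placeholders, not taken from the paper or the Tennessee Eastman case study.

```python
# Monte Carlo comparison of two candidate soft sensor architectures (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Nominal readings and standard uncertainties of the available physical sensors
F_in  = rng.normal(10.0, 0.20, n)    # feed flow [kg/s]
F_out = rng.normal(7.0, 0.20, n)     # one outlet flow [kg/s]
T_in  = rng.normal(350.0, 1.0, n)    # inlet temperature [K]
T_out = rng.normal(320.0, 2.5, n)    # outlet temperature [K], noisier sensor

# Candidate soft sensor A: unmeasured flow inferred from a mass balance
flow_A = F_in - F_out

# Candidate soft sensor B: same variable inferred from an (illustrative) energy balance
cp, duty = 4.2, 378.0                 # kJ/kg/K and kW, assumed known
flow_B = duty / (cp * (T_in - T_out))

for name, est in (("mass balance", flow_A), ("energy balance", flow_B)):
    print(f"{name:14s}: mean = {est.mean():.3f}, std = {est.std():.3f}")
# The architecture with the smaller output spread is preferred; repeating the exercise
# with a reduced T_out uncertainty quantifies the benefit of upgrading that sensor.
```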

Sources
[1] J. Sansana et al., “Recent trends on hybrid modeling for Industry 4.0,” Computers & Chemical Engineering, vol. 151, p. 107365, Aug. 2021
[2] J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Computers & Chemical Engineering, vol. 17, no. 3, pp. 245–255, Mar. 1993



Machine Learning Models for Predicting the Amount of Nutrients Required in a Microalgae Cultivation System

Geovani R. Freitas1,2,3,4, Sara M. Badenes3, Rui Oliveira4, Fernando G. Martins1,2

1Laboratory for Process Engineering, Environment, Biotechnology and Energy (LEPABE); 2Associate Laboratory in Chemical Engineering (ALiCE); 3Algae for Future (A4F); 4LAQV-REQUIMTE

Effective prediction of nutrient demands is crucial for optimising microalgae growth, maximising productivity, and minimising resource waste. With the increasing amount of data related to microalgae cultivation systems, the use of data mining (DM) and machine learning (ML) methods to extract additional knowledge has gained popularity over time. In the DM process, models can be evaluated using ML algorithms such as random forest (RF), artificial neural networks (ANN) and support vector regression (SVR). In the development of these ML models, a data preprocessing stage is necessary due to the poor quality of the data. While cleaning and outlier removal techniques are employed to eliminate missing data or outliers, normalization is used to standardize features, ensuring that no single feature dominates the model due to differences in scale. After this stage, feature selection is employed to identify the most relevant parameters, such as solar irradiance and initial dry weight of biomass. Once the optimal features are identified, data splitting and cross-validation strategies are employed to ensure that the models are trained and evaluated with representative subsets of the dataset. Proper data splitting into training and testing sets prevents overfitting, allowing the models to generalize effectively to new, unseen data. Cross-validation techniques, such as k-fold and repeated k-fold cross-validation, are used to rigorously test model performance across multiple iterations, ensuring that results are not dependent on any single data partition. Principal component analysis (PCA) can also be applied as a dimensionality reduction technique to simplify complex environmental datasets by reducing the number of variables or features while retaining as much information as possible. To further improve prediction capabilities, ensemble methods are incorporated, leveraging multiple models to achieve higher overall performance. Stacking, a popular ensemble technique, is used to combine the outputs of individual models, such as RF, ANN, and SVR, into a single meta-model. This approach takes advantage of the strengths of each base model, such as the non-linear mapping capabilities of ANN, the robustness of RF against overfitting, and the effectiveness of SVR in handling complex feature interactions. By combining these diverse models, the stacked ensemble method provides more accurate and reliable predictions of nutrient requirements. The application of these ML techniques is demonstrated using a dataset acquired from the cultivation of the microalga Dunaliella in a flat-panel photobioreactor (FP-PBR). The results showed that the data mining workflow, in combination with different ML models, was able to describe the nutrient requirements for good performance of Dunaliella production in the carotenogenic phase, for β-carotene production, in an FP-PBR system.
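
The stacking scheme described above can be sketched with scikit-learn as follows, combining RF, ANN and SVR base learners under a linear meta-model. The data are synthetic placeholders; in practice the preprocessed and feature-selected cultivation dataset would be used, and hyperparameters would be tuned.

```python
# Illustrative stacked ensemble (RF + ANN + SVR) for a nutrient-demand regression task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                 # e.g., irradiance, temperature, initial DW, pH
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)   # synthetic nutrient demand

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("ann", make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                           random_state=0))),
        ("svr", make_pipeline(StandardScaler(), SVR(C=10.0))),
    ],
    final_estimator=Ridge(alpha=1.0),          # meta-model combining the base predictions
    cv=5,
)

scores = cross_val_score(stack, X, y, cv=5, scoring="r2")
print("stacked ensemble R2 (5-fold):", scores.round(3), "mean:", scores.mean().round(3))
```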



Dynamical modeling of ultrafine particle classification in tubular bowl centrifuges

Sandesh Athni Hiremath1, Marco Gleiss2, Naim Bajcinca1

1RPTU Kaiserslautern, Germany; 2KIT Karlsruhe, Germany

Ultrafine or colloidal particles are widely used in industry as aerogels, coatings, filtration aids or thin films and require a defined particle size. For this purpose, tubular centrifuges are suitable for particle separation and classification due to the high g-forces. The design and optimization of tubular centrifuges requires a large number of pilot tests, which is time-consuming and costly. Additionally, even though the centrifuge operates semi-continuously under constant process conditions, the particle size distribution and solids volume fraction change over time, especially at the outlet. Altogether, these aspects make the task of designing an efficient centrifuge challenging. This work presents a dynamic model for the real-time simulation of the behavior during particle classification in a pilot-scale tubular centrifuge and also provides a novel data-driven algorithm for model validation. The combination of the two greatly facilitates the design and control of the centrifugation process, in particular for the tubular centrifuge being considered. First, we discuss the new continuous mathematical model as an improvement over the previously published multi-compartment (discrete) model by Winkler et al. [1]. Based on simulations, we show the influence of operating conditions and material behavior on the classification of a colloidal silica-water slurry. Subsequently, we validate the dynamical model by comparing experimental data with the simulations for the temporal change of product loss, grade efficiency and sediment build-up. For validation, we propose a new data-driven method based on neural ODEs that incorporates the proposed centrifugation model and is thus capable of encoding the physical (transport) laws in the network parameters. In summary, our work provides the following novelties:

1. A continuous dynamical model for a tubular centrifugation process that establishes a strong foundation for continuous and semi-continuous control of the process.

2. A new data-driven validation algorithm that incorporates the physics-based continuous model and thus serves as a base methodology for developing a full-fledged learning-based observer, which can be used as a state estimator during continuous process control (a minimal sketch follows below).
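
A minimal sketch of the neural-ODE idea behind the validation algorithm in item 2, assuming a PyTorch/torchdiffeq setting; the physics term, state layout and data below are placeholders rather than the centrifuge model itself.

```python
# Sketch of a physics-informed neural ODE: dx/dt = f_physics(x) + f_NN(x).
# Assumes PyTorch and torchdiffeq; physics term and data are placeholders.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class CentrifugeODE(nn.Module):
    def __init__(self, n_states: int):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(n_states, 32), nn.Tanh(), nn.Linear(32, n_states)
        )

    def physics(self, x):
        # Placeholder for the continuous transport model (settling/convection terms).
        return -0.1 * x

    def forward(self, t, x):
        return self.physics(x) + self.correction(x)


model = CentrifugeODE(n_states=8)
t = torch.linspace(0.0, 60.0, 50)                    # measurement times [s]
x0 = torch.rand(8)                                   # initial PSD/holdup state
# Stand-in "measurements" (decaying state plus noise), not real data.
x_meas = x0 * torch.exp(-0.05 * t)[:, None] + 0.01 * torch.randn(50, 8)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = torch.mean((odeint(model, x0, t) - x_meas) ** 2)
    loss.backward()
    opt.step()
```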

[1] Marvin Winkler, Frank Rhein, Hermann Nirschl, and Marco Gleiss. Real-time modeling of volume and form dependent nanoparticle fractionation in tubular centrifuges. Nanomaterials, 12(18):3161, 2022.



Towards a multi-scale process optimization coupling custom models for unit operations, process simulator, and environmental impact.

Thomas Hietala1, Sonja Herres-Pawlis2, Pedro S.F. Mendes1

1Centro de Química Estrutural, Instituto Superior Técnico, Portugal; 2Institute of Inorganic Chemistry, RWTH Aachen University, Germany

To achieve utmost process efficiency, all scales matter, from phenomena within a given unit operation to mass and energy integration. For instance, the way mass transfer and kinetics are optimized in a chemical reactor (e.g., focusing either on activity or selectivity) will impact the downstream separation train and, thus, the process as a whole. Currently, as the design of sustainable processes is mostly performed independently at different scales, the overall influence of design choices at different levels is not assessed in a seamless way, leading to a trial-and-error and inefficient design workflow. In order to consider all scales simultaneously, a multi-scale model has been developed that couples a process model to a complex mass-transfer-limited reactor model and to an environmental and/or social impact assessment tool. The production of polylactic acid (PLA), the most produced bioplastic to date [1], was chosen as the case study for the development of this tool.

The multi-scale model covers, as of today, the reactor, process and environmental analysis scales. The process model simulating the production of PLA was developed in the Aspen Plus simulation software, employing the Polymer Plus module and PolyNRTL as the thermodynamic method, based on a literature implementation [2]. The production process consists firstly of an oligomerization step converting lactic acid into a PLA pre-polymer. It is followed by a depolymerization step which converts the pre-polymer into lactide. After a purification step, the lactide forms the high-molecular-weight PLA in a ring-opening polymerization step. The PLA product is obtained after a final purification step. The depolymerization step is performed, in industry, in devolatilization equipment, which is a mass-transfer-limited reactor. As there are no adequate mass-transfer-limited reactor models in Aspen Plus, a Python CAPE-Open Unit Operation module [3] was developed to couple a realistic devolatilization reactor model into the process model. If mass transfer were not accounted for in the reactor, the ultimate PLA production would be underestimated roughly 8-fold, with the corresponding impact on profitability and environmental performance.

From the process model, the economic performance of the process can be determined. To determine the environmental performance of the designed process simultaneously and seamlessly, a Life Cycle Assessment (LCA) model, performed in the OpenLCA software, is coupled with Aspen Plus using an interface coded in Python. With this multi-scale model in place, the impact of the design variables at the various scales on the process's overall economic and environmental performance can be determined and optimized.
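
As an illustration of the kind of Python "glue" such a coupling relies on, the sketch below automates an Aspen Plus run over COM and exports a small inventory that an LCA tool such as OpenLCA could consume; the flowsheet path, variable-tree nodes and export format are assumptions, not the authors' actual interface.

```python
# Sketch of Python glue between an Aspen Plus flowsheet and an LCA inventory,
# assuming Windows COM automation (win32com). Node paths, stream/block names
# and the output file are illustrative assumptions.
import csv
import win32com.client as win32

sim = win32.Dispatch("Apwn.Document")                 # Aspen Plus COM server
sim.InitFromArchive2(r"C:\models\pla_process.bkp")    # hypothetical flowsheet
sim.Engine.Run2()                                     # run the simulation

def node_value(path: str) -> float:
    """Read a numeric result from the Aspen Plus variable tree."""
    return sim.Tree.FindNode(path).Value

inventory = {
    # exchange name -> amount per functional unit (placeholder node paths)
    "lactic acid feed [kg/h]": node_value(r"\Data\Streams\FEED\Output\MASSFLMX\MIXED"),
    "PLA product [kg/h]":      node_value(r"\Data\Streams\PLA\Output\MASSFLMX\MIXED"),
    "devolatilizer duty [kW]": node_value(r"\Data\Blocks\DEVOL\Output\QCALC"),
}

# Export the inventory so the LCA model (e.g. in OpenLCA) can pick it up.
with open("lci_exchanges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["exchange", "amount"])
    writer.writerows(inventory.items())
```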

This multi-scale model creates a basis to develop a multi-objective optimization framework using economic and environmental objective functions directly from Aspen Plus and OpenLCA software. This could enable a reduction in the environmental impact of processes without disregarding the profitability of the process.

[1] - European Bioplastics, Bioplastics market data, 2023, https://www.european-bioplastics.org/news/publications/ accessed on 25/09/2024

[2] - K. C. Seavey and Y. A. Liu, Step-growth polymerization process modeling and product design. New Jersey: Wiley, 2008

[3] - https://www.colan.org/process-modeling-component/python-cape-open-unit-operation/ accessed on 25/09/2025



Enhancing Hydrodynamics Simulations in Distillation Columns Using Smoothed Particle Hydrodynamics (SPH)

Rodolfo Murrieta-Dueñas1, Jazmin Cortez-González1, Roberto Gutiérrez-Guerra2, Juan Gabriel Segovia-Hernández3, Carlos Enrique Alvarado-Rodríguez3

1Tecnológico Nacional de México / Campus Irapuato, México; 2Universidad de Guanajuato, México; 3Universidad Tecnológica de León, Campus León, México

Distillation is one of the most widely applied unit operations in chemical engineering, renowned for its effectiveness in product purification. However, traditional distillation processes are often hampered by significant inefficiencies, driving efforts to enhance thermodynamic performance in both equipment design and operation. While many alternatives have been evaluated using MESH equations and sequential simulators, comparatively less attention has been given to Computational Fluid Dynamics (CFD) modeling, largely due to its complexity. CFD methodologies typically fall under either Eulerian or Lagrangian approaches. The Eulerian method relies on a mesh to discretize the medium, providing spatial averages at the fluid interfaces. Popular techniques include finite volume and finite element methods, with finite volume commonly employed to simulate the hydrodynamics, mass transfer, and momentum in distillation columns (Haghshenas et al., 2007; Lavasani et al., 2018; Zhao, 2019; Ke, 2022). Despite its widespread use, the Eulerian approach faces challenges such as interface modeling, convergence issues, and selecting appropriate turbulence models for simulating turbulent flows. In contrast, Lagrangian methods, which discretize the continuous medium using non-mesh-based points, offer detailed insights into interfacial phenomena. Among these, Smoothed Particle Hydrodynamics (SPH) stands out for its ability to model discontinuous media and complex geometries without requiring a mesh, making it ideal for studying various systems, including microbial growth (Martínez-Herrera et al., 2022), sea wave dynamics (Altomare et al., 2023), and stellar phenomena (Reinoso et al., 2023). This versatility and robustness make SPH a promising tool for distillation process modeling. In this study, we present a numerical simulation of a liquid-vapor (L-V) thermal equilibrium stage in a plate distillation column, employing the SPH method. The focus is on sieve and bubble-cap plates, with periodic temperature conditions applied to facilitate thermal equilibrium. Column sizing was performed using Aspen One for an equimolar benzene-toluene mixture, operating under conditions ensuring a condenser cooling water temperature of 120°F. The Chao-Seader thermodynamic model was applied, with both sieve and bubble-cap plates integrated into a ten-stage column. Stage 5 was designated as the feed stage, and a 98% purification and recovery rate for both components was assumed. This setup provided critical operational parameters, including liquid and vapor velocities, viscosity, density, pressure, and column diameter. Three-dimensional CAD models of the distillation column and the plates were generated using SolidWorks and subsequently imported into DualSPHysics (Domínguez et al., 2022) for CFD simulation. Stages 6 and 7 were selected for detailed analysis, as they are positioned just below the feed stage. The results showed that the sieve plate achieved thermal equilibrium more rapidly than the bubble-cap plate, a difference attributable to the steam injection zone in the bubble-cap design. Moreover, the simulations allowed the calculation of heat transfer coefficients based on plate geometry, providing insights into heat exchange at the fluid interfaces. In conclusion, this study highlights the potential of using periodic temperature conditions to simulate thermal equilibrium in distillation columns.
Additionally, the SPH method has demonstrated its utility as a powerful and flexible tool for simulating fluid dynamics and thermal equilibrium in distillation processes.
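
For readers unfamiliar with SPH, the sketch below shows the basic mesh-free kernel summation (here, density estimation with the standard cubic spline kernel) on placeholder particle data; it is not extracted from the DualSPHysics setup used in the study.

```python
# Minimal SPH interpolation sketch: particle density by kernel summation with
# the standard 3D cubic spline kernel. Particle data are random placeholders.
import numpy as np

def cubic_spline_kernel(r: np.ndarray, h: float) -> np.ndarray:
    """Standard 3D cubic spline smoothing kernel W(r, h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.zeros_like(q)
    m1 = q <= 1.0
    m2 = (q > 1.0) & (q <= 2.0)
    w[m1] = sigma * (1.0 - 1.5 * q[m1] ** 2 + 0.75 * q[m1] ** 3)
    w[m2] = sigma * 0.25 * (2.0 - q[m2]) ** 3
    return w

rng = np.random.default_rng(0)
pos = rng.random((500, 3)) * 0.05        # particle positions [m], placeholder
mass = np.full(500, 1e-6)                # particle masses [kg], placeholder
h = 0.004                                # smoothing length [m]

# Density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h)
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
rho = (mass[None, :] * cubic_spline_kernel(dist, h)).sum(axis=1)
print("mean SPH density estimate:", rho.mean())
```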



Electric arc furnace dust waste management: A process synthesis approach.

Agustín Porley Santana1, Mayra Doldan1,2, Martín Duarte Guigou2,3, Mauricio Ohanian1, Soledad Gutiérrez Parodi1

1Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Viento Sur Ingeniería, Ruta 61, Km 19, Nueva Helvecia, Colonia, Uruguay.; 3Grupo de Ingeniería de Materiales, Inst. Tecn. Reg. Sur-Oeste, Universidad Tecnológica del Uruguay, Horacio Meriggi 905, CP60000, Paysandú, Uruguay.

The residue from the solid collection system of steel mills is known as electric arc furnace dust (EAFD). It contains significant amounts of iron, zinc, and lead in the form of oxides, silicates, and carbonates, along with minor components such as chromium, tin, nickel, and cadmium. Therefore, most countries classify this waste as hazardous waste. Its management presents scientific and technical challenges that significantly impact the economics of the steelmaking process.

Currently, the management of this waste consists of burying it at a final disposal facility. However, there are multiple treatment alternatives to reduce its hazardousness by recovering and immobilizing marketable heavy metals such as Zn and Pb. This process can be carried out through a hydrometallurgical dissolution with selective extraction of Zn, leaving the rest of the metal components in the solid. Zn has amphoteric properties, but it shares this characteristic with Pb, so alkaline extraction solubilizes both metals simultaneously, leaving iron compounds in an insoluble form. At this stage, two streams result, one solid and one liquid. The liquid stream is a zinc-rich solution from which Zn could be electrochemically recovered as a valuable product, ensuring that the electrodeposited material shows characteristics that allow for easy recovery through mechanical means. The solid stream can be stabilized by incorporating it into an alkali-activated inorganic polymer (geopolymer) to obtain a product or waste that captures the heavy metals, immobilizing them, or it can be managed by a third party. To avoid lead contamination of the product of interest (pure Zn), the liquid stream can go through a precipitation process with sodium sulfide, removing the lead as lead sulfide, or pure lead can be electrodeposited by controlling the voltage or current before electrodepositing the Zn in a subsequent stage. Pilot-scale testing of these processes has been conducted previously [1].

Each step generates different costs and alternatives for managing this residue. For this, the process synthesis approach is considered suitable, allowing for the simultaneous analysis of these alternatives and the selection of the one that generates the greatest benefit.

This work studies the management of steel mill residue with a process synthesis approach combining experimental data from pilot-scale operations, data collected from metallurgical companies, and data based on expert judgment. The stages to achieve this objective involve: superstructure conception, its translation into mathematical language, and implementation in a mathematical programming software (GAMS). The aim is to assist in decision-making at the managerial level, so the objective function chosen was to maximize commercial value per ton of EAFD to be managed. A superstructure model is proposed that combines binary variables for operations and binary variables for artificial streams, enabling accurate modeling of the various connections involved in this process management network. Artificial streams were used to formally describe disjunctions. Sensitivity analyses are currently being conducted.
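
The superstructure itself is implemented in GAMS; the following Pyomo sketch only illustrates the modelling pattern described above (binary selection variables linked to continuous flows, with commercial value maximized per ton of EAFD) on invented data and with a placeholder MILP solver.

```python
# Illustrative superstructure sketch: binary operation selection linked to
# continuous flows, maximizing value per ton of EAFD. Data are placeholders;
# the authors' actual model is written in GAMS.
import pyomo.environ as pyo

ops = ["alkaline_leach", "Zn_electrowin", "Pb_precipitation", "geopolymer", "landfill"]
value = {  # net value contribution per ton of EAFD routed through each operation
    "alkaline_leach": -15.0, "Zn_electrowin": 120.0,
    "Pb_precipitation": 10.0, "geopolymer": 5.0, "landfill": -40.0,
}

m = pyo.ConcreteModel()
m.y = pyo.Var(ops, domain=pyo.Binary)             # operation selected?
m.x = pyo.Var(ops, domain=pyo.NonNegativeReals)   # tons routed (1 ton basis)

# Each ton must end either stabilized (geopolymer) or landfilled.
m.sink = pyo.Constraint(expr=m.x["geopolymer"] + m.x["landfill"] == 1.0)
# Zn electrowinning and Pb removal require the leaching step for all treated mass.
m.leach1 = pyo.Constraint(expr=m.x["Zn_electrowin"] <= m.x["alkaline_leach"])
m.leach2 = pyo.Constraint(expr=m.x["Pb_precipitation"] <= m.x["alkaline_leach"])
# Link flows to the selection binaries (1 ton basis, so big-M = 1).
m.link = pyo.Constraint(ops, rule=lambda m, o: m.x[o] <= m.y[o])

m.obj = pyo.Objective(expr=sum(value[o] * m.x[o] for o in ops), sense=pyo.maximize)
pyo.SolverFactory("cbc").solve(m)   # any available MILP solver
print({o: (pyo.value(m.y[o]), pyo.value(m.x[o])) for o in ops})
```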

References

[1] M. Doldán, M. Duarte Guigou, G. Pereira, M. Ohanian, Electrodeposition of Zinc and Lead from Electric Arc Furnace Dust Dissolution: A Kinetic Study, in: A Closer Look at Chemical Kinetics, Nova Science Publishers, 2022.



Network theoretical analysis of the reaction space in biorefineries

Jakub Kontak, Jana Marie Weber

Intelligent Systems Department, Delft University of Technology, Netherlands


The large space of chemical reactions has been analysed intensively to learn the patterns of chemical reactions (Fialkowski et al., 2005; Jacob & Lapkin, 2018; Llanos et al., 2019; Mann & Venkatasubramanian, 2023) and to understand its wiring structure for use in network pathway planning problems (Weber et al., 2019; Ulonska et al., 2016). With increasing pressure towards more sustainable production systems, it becomes worthwhile to model the reaction space reachable from biobased feedstocks, e.g. through integrated processing steps in biorefineries.

In this work we focus on a network-theoretical analysis of biorefinery reaction data. We obtain biorefinery reaction data from the REAXYS web interface, propose a directed all-to-all mapping between reactants and products for comparability with related work, and finally compare the reaction space obtained from biorefineries with the network of organic chemistry (NOC) (Jacob & Lapkin, 2018). Our findings indicate that, despite having 1000 times fewer molecules, the constructed network resembles the NOC in terms of its scale-free nature and shares similarities regarding its “small-world” property. Our results further suggest that the biorefinery network has a higher centralisation and clustering coefficient. Additionally, we inspect the coverage rate of our data-querying strategy and find that our network covers most of the common second and third intermediates, yet only few biorefinery end-products and direct feedstock molecules are present.
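
A minimal sketch of the directed all-to-all reactant-product mapping and the network statistics discussed above, using NetworkX on a few invented reactions; the actual study uses REAXYS data.

```python
# Sketch of the all-to-all reactant->product mapping and basic network
# statistics (NetworkX). The reactions below are invented placeholders.
import networkx as nx

reactions = [  # (reactants, products) parsed from a reaction database
    (["glucose"], ["HMF", "water"]),
    (["HMF", "hydrogen"], ["DMF", "water"]),
    (["xylose"], ["furfural", "water"]),
]

G = nx.DiGraph()
for reactants, products in reactions:
    for r in reactants:
        for p in products:
            G.add_edge(r, p)          # all-to-all mapping within each reaction

und = G.to_undirected()
print("nodes/edges:", G.number_of_nodes(), G.number_of_edges())
print("average clustering:", nx.average_clustering(und))
print("degree centralization proxy (max degree / (n-1)):",
      max(d for _, d in und.degree()) / (und.number_of_nodes() - 1))
if nx.is_connected(und):
    print("average shortest path ('small-world' check):",
          nx.average_shortest_path_length(und))
```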

References

Fialkowski, M., Bishop, K. J., Chubukov, V. A., Campbell, C. J., & Grzybowski, B. A. (2005). Architecture and evolution of organic chemistry. Angewandte Chemie International Edition, 44(44), 7263-7269.

Jacob, P. M., & Lapkin, A. (2018). Statistics of the network of organic chemistry. Reaction Chemistry & Engineering, 3(1), 102-118.

Llanos, E. J., Leal, W., Luu, D. H., Jost, J., Stadler, P. F., & Restrepo, G. (2019). Exploration of the chemical space and its three historical regimes. Proceedings of the National Academy of Sciences, 116(26), 12660-12665.

Mann, V., & Venkatasubramanian, V. (2023). AI-driven hypergraph network of organic chemistry: network statistics and applications in reaction classification. Reaction Chemistry & Engineering, 8(3), 619-635.

Weber, J. M., Lió, P., & Lapkin, A. A. (2019). Identification of strategic molecules for future circular supply chains using large reaction networks. Reaction Chemistry & Engineering, 4(11), 1969-1981.

Ulonska, K., Skiborowski, M., Mitsos, A., & Viell, J. (2016). Early‐stage evaluation of biorefinery processing pathways using process network flux analysis. AIChE Journal, 62(9), 3096-3108.



Applying Quality by Design to Digital Twin Supported Scale-Up of Methyl Acetate Synthesis

Jessica Ebert1, Amy Koch1, Isabell Viedt1,3, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Chair of Process Control Systems; 3TUD Dresden University of Technology, Process-to-Order Lab

The scale-up from lab to production scale is an essential cost and time factor in the development of chemical processes, especially when high demands are placed on product quality. Quality by Design (QbD) is a common method used in the pharmaceutical industry to ensure product quality through the production process (Yu et al., 2014), which is why the QbD methodology could be a useful tool for process development in the chemical industry as well. Concepts from the literature demonstrate how mechanistic models are used for the direct scale-up from laboratory equipment to production equipment, dispensing with intermediate scales in order to shorten the time to process (Furrer et al., 2021). The integration of Quality by Design into a direct scale-up approach promises further advantages, such as a deeper process understanding and the assurance of process safety. Digital twins, consisting of simulation models that digitally represent the behavior of plants and the processes running on them, enable model-based scale-up.

In this work, a simulation-based workflow for the digital twin supported scale-up of processes and process plants is proposed, which integrates various aspects of the Quality by Design methodology. The key element is the determination of the design space by defining Critical Quality Attributes and identifying Critical Process Parameters as well as Critical Material Attributes (Yu et al., 2014). The design space is transferred from the laboratory-scale model to the production-scale model. To illustrate the concept, the workflow is implemented for the use case of the synthesis of methyl acetate. The process is scaled from a 2 L laboratory stirred tank reactor to a 50 L production plant, fulfilling each step of the scale-up workflow: modelling, definition of the target product quality, experiments, model adaption, parameter transfer and design space identification. The presentation of the results focuses on the design space identification and transfer using global system analysis. Finally, benefits and limitations of the implementation of Quality by Design in direct scale-up using digital twins are discussed.
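
The design space identification in the paper relies on global system analysis in a process simulator; the generic sketch below only illustrates the underlying idea of sampling critical process parameters and flagging the region where a quality attribute is met, using a toy reactor model and invented limits, not the methyl acetate case data.

```python
# Generic design-space sketch: sample critical process parameters (CPPs) on a
# grid and keep the points where the critical quality attribute (CQA) is met.
# The model, ranges and quality limit are toy assumptions.
import numpy as np

def conversion(temperature_K: float, residence_time_min: float) -> float:
    """Toy first-order CSTR conversion model standing in for the process model."""
    k = 5e5 * np.exp(-5000.0 / temperature_K)   # rate constant [1/min]
    return k * residence_time_min / (1.0 + k * residence_time_min)

T_grid = np.linspace(310.0, 350.0, 41)          # CPP 1: temperature [K]
tau_grid = np.linspace(10.0, 120.0, 56)         # CPP 2: residence time [min]
cqa_min = 0.90                                  # CQA: minimum conversion

design_space = [(T, tau) for T in T_grid for tau in tau_grid
                if conversion(T, tau) >= cqa_min]
print(f"{len(design_space)} of {T_grid.size * tau_grid.size} "
      "sampled points lie inside the design space")
```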

References

Schindler, Polyakova, Harding, Weinhold, Stenger, Grünewald & Bramsiepe (2020). General approach for technology and Process Equipment Assembly (PEA) selection in process design. Chemical Engineering and Processing – Process Intensification, 159, Article 108223.

T. Furrer, B. Müller, C. Hasler, B. Berger, M. Levis & A. Zogg (2021). New Scale-up Technologies for Hydrogenation Reactions in Multipurpose Pharmaceutical Production Plants. Chimia(75), Article 11.

L. X. Yu, G. Amidon, M. A. Khan, S. W. Hoag, J. Polli, G. K. Raju & J. Woodcock (2014). Understanding Pharmaceutical Quality by Design. The AAPS Journal, 16, 771–783.



Digital Twin supported Model-based Design of Experiments and Quality by Design

Amy Koch1, Jessica Ebert1, Isabell Viedt1,2, Andreas Bamberg4, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process-to-Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems; 4Merck Electronics KGaA, Frankfurter Str. 250, Darmstadt 64293, Germany

In the specialty chemical industries, faster time-to-process is a significant measure of success. One key aspect which supports faster time-to-process is reducing the time required for experimental efforts in the process development phase. Here, Digital Twin workflows based on methods such as global system analysis, model-based design of experiments (MBDoE), and the identification of the design space, as well as leveraging prior knowledge of the equipment capabilities, can be utilized to reduce the experimental load (Koch et al., 2023). MBDoE utilizes prior knowledge (model structure and initial parameter estimates) to optimally design an experiment by identification of optimum process conditions, thereby reducing experimental effort (Franceschini & Macchietto, 2008). Further benefit can be achieved by applying Quality by Design methods (Katz & Campbell, 2012) to these Digital Twin workflows; here, the prior knowledge supplied by the Digital Twin is used to pre-screen combinations of critical process parameters and model parameters to identify suitable parameter combinations for inclusion in the MBDoE optimization problem (Mädler, 2023). In this paper, a Digital Twin workflow that incorporates prior knowledge of equipment capabilities into global system analysis and subsequent MBDoE is first presented and the relevant methodology explained. This workflow is illustrated with a prototypical implementation using the process simulation tool gPROMS for the specific use case of an esterification process in a stirred tank reactor. As a result, benefits such as improved parameter estimation and reduced experimental effort compared to traditional DoE are illustrated, together with a critical evaluation of the applied methods.
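
A generic sketch of the MBDoE principle referred to above: selecting measurement times that maximize the determinant of the Fisher information matrix (D-optimality) for a simple response model. The model, prior parameter values and noise level are illustrative, not the esterification case implemented in gPROMS.

```python
# Generic D-optimal MBDoE sketch: choose sampling times maximizing det(FIM)
# for y(t) = theta0 * (1 - exp(-theta1 * t)). All values are placeholders.
import itertools
import numpy as np

theta = np.array([1.0, 0.05])                 # prior estimates: [y_max, rate 1/min]
sigma = 0.02                                  # assumed measurement noise std.

def sensitivities(t: float) -> np.ndarray:
    """d y / d theta for the simple response model above."""
    y_max, k = theta
    return np.array([1.0 - np.exp(-k * t), y_max * t * np.exp(-k * t)])

candidate_times = np.arange(5.0, 181.0, 5.0)  # admissible sampling times [min]

best_det, best_design = -np.inf, None
for design in itertools.combinations(candidate_times, 3):   # pick 3 samples
    fim = sum(np.outer(sensitivities(t), sensitivities(t)) for t in design) / sigma**2
    d = np.linalg.det(fim)
    if d > best_det:
        best_det, best_design = d, design

print("D-optimal sampling times [min]:", best_design)
```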

References

G. Franceschini & S. Macchietto (2008). Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science, 63(19), 4846–4872.

P. Katz & C. Campbell, 2012, FDA 2011 process validation guidance: Process validation revisited, Journal of GXP Compliance, 16(4), 18.

A. Koch, J. Mädler, A. Bamberg, and L. Urbas, 2023. Digital Twins for Scale-Up in Modular Plants: Requirements, Concept, and Roadmap. In Computer Aided Chemical Engineering, 2063-2068, Elsevier.

J. Mädler, 2023. Smarte Process Equipment Assemblies zur Unterstützung der Prozessvalidierung in modularen Anlagen.



Bioprocess control using hybrid mechanistic and Gaussian process modeling

Lydia Katsini, Satyajeet Sheetal Bhonsale, Jan F.M. Van Impe

BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium

Control of bioprocesses is crucial for achieving optimal yields of various products. In this study, we focus on the fermentation of Xanthophyllomyces dendrorhous, a yeast known for its ability to produce astaxanthin, a high-value carotenoid with applications in pharmaceuticals, nutraceuticals, and aquaculture. Successful application of optimal control requires, however, accurate and robust process models (Bhonsale et al., 2022). Since the system dynamics are non-linear and biological variability is an inherent property of the process, modeling such a system is demanding.

Aiming to tackle the system complexity, our approach to modeling this process follows Vega-Ramon et al. (2021), who combined two distinct methods: mechanistic and machine learning models. On the one hand, mechanistic models, based on existing knowledge, provide valuable insights into the underlying phenomena but are limited by their demand for accurate parameterization and may struggle to adapt to process disturbances. On the other hand, machine learning models, based on experimental data, can capture the underlying patterns without prior knowledge; however, they are limited to the domain of the training data used to build them.

A key challenge in both modeling approaches is dealing with uncertainty, and more specifically biological variability, which is inherent in biological systems. To address this, we utilize Gaussian Process (GP) modeling, a flexible, non-parametric machine learning technique that provides a framework for uncertainty quantification. In this study, the use of GPs allows for robust control of the fermentation by accounting for the biological variability of the system.
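
A minimal sketch of this hybrid idea, assuming a scikit-learn setting: a GP is trained on the residual between a toy mechanistic growth model and synthetic observations, and its predictive standard deviation provides the uncertainty band. None of the numbers relate to the actual fermentation data.

```python
# Hybrid mechanistic + GP residual model sketch (scikit-learn). The growth
# model and "measurements" below are toy placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def mechanistic_biomass(t):
    """Toy logistic growth model standing in for the mechanistic part."""
    return 10.0 / (1.0 + 9.0 * np.exp(-0.15 * t))

rng = np.random.default_rng(1)
t_obs = np.linspace(0, 60, 25).reshape(-1, 1)                      # time [h]
x_obs = (mechanistic_biomass(t_obs.ravel())
         + 0.4 * np.sin(0.2 * t_obs.ravel())                       # model mismatch
         + 0.1 * rng.standard_normal(25))                          # variability

residual = x_obs - mechanistic_biomass(t_obs.ravel())
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                              normalize_y=True).fit(t_obs, residual)

t_new = np.linspace(0, 60, 200).reshape(-1, 1)
res_mean, res_std = gp.predict(t_new, return_std=True)
hybrid_pred = mechanistic_biomass(t_new.ravel()) + res_mean        # hybrid prediction
print("max predictive std (uncertainty band):", res_std.max())
```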

An optimal control framework is implemented for both the hybrid model and the mechanistic model to identify the optimal sugar feeding strategy for maximizing astaxanthin yield. This study demonstrates how optimal control can benefit from hybrid mechanistic and machine learning bioprocess modeling.

References

Bhonsale, S. et al. (2022). Nonlinear Model Predictive Control based on multi-scale models: is it worth the complexity? IFAC-PapersOnLine, 55(23), 129-134. https://doi.org/10.1016/j.ifacol.2023.01.028

Vega-Ramon, F. et al. (2021). Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty. Biotechnology and Bioengineering, 118, 4854–4866. https://doi.org/10.1002/bit.27950



Tune Decomposition Schemes for Large-Scale Mixed-Integer Programs by Bayesian Optimization

Guido Sand1, Sophie Hildebrandt1, Sina Nunes1, Chung On Yip1, Meik Franke2

1Pforzheim University of Applied Science, Germany; 2University of Twente, The Netherlands

Heuristic decomposition schemes are a common approach to approximately solve large-scale mixed-integer programs (MIPs). Typical examples are moving horizon schemes applied to scheduling problems. Decomposition schemes usually exhibit parameters which can be used to tune their performance. Examples of parameters of moving horizon schemes are the horizon length and the step size of its movement. Systematic tuning approaches are seldom reported in the literature.

In a previous paper by the first two authors, Bayesian optimization was proposed as a methodological approach to systematically tune decomposition schemes for mixed-integer programs. This approach is reasonable since the tuning problem is a black-box optimization problem with an expensive-to-evaluate objective function: each evaluation of the objective function of the Bayesian optimization requires the solution of the mixed-integer program using the specifically parametrized decomposition scheme. Using an exemplary mixed-integer hoist scheduling model and a moving horizon scheme, that paper demonstrated that the proposed approach is feasible and effective in principle.
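
As an illustration of the approach (not the hoist scheduling model itself), the sketch below tunes horizon length and step size with Bayesian optimization via scikit-optimize; a cheap stand-in cost function replaces the expensive MIP solution.

```python
# Sketch of tuning a moving-horizon decomposition with Bayesian optimization
# (scikit-optimize). The "scheduler" below is a stand-in cost function.
from skopt import gp_minimize
from skopt.space import Integer

def solve_with_decomposition(horizon: int, step: int) -> float:
    """Placeholder for: solve the MIP with the given horizon/step and return
    the achieved makespan (the expensive black-box evaluation in practice)."""
    return 100.0 + 0.5 * abs(horizon - 12) + 2.0 * abs(step - 3) + 0.1 * horizon * step

space = [Integer(2, 30, name="horizon_length"),
         Integer(1, 10, name="step_size")]

result = gp_minimize(lambda x: solve_with_decomposition(*x), space,
                     n_calls=30, acq_func="EI", random_state=0)
print("best (horizon, step):", result.x, "best makespan:", result.fun)
```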

After the proof of concept in the previous paper, the paper at hand discusses detailed results of three studies of the Bayesian optimization-based approach using the same exemplary hoist scheduling model:

  1. Examine the solution space:
    The graphs of the objective function (makespan or computational cost) of the tuning problem are analysed for small instances of the mixed-integer model, considering the sequences of evaluations of the Bayesian optimization in the integer-valued space of tuning parameters. The results show that the Bayesian optimization converges relatively fast to good solutions even though visual inspection of the graphs of the objective function reveals only little structure.
  2. Compare different acquisition functions:
    The type of acquisition function is studied since it is assumed to be a tuning parameter of the Bayesian optimization with a major impact on its performance. Four types of acquisition functions are applied to a set of test cases and compared with respect to the mean performance and its variance. The results show a similar performance of three types and a slightly inferior performance of the fourth type.
  3. Enlarge the tuning-parameter space:
    The scaling behaviour of the Bayesian optimization-based approach with respect to the dimension of the space of tuning-parameters is studied: The number of tuning-parameters is increased from two to four parameters (three integer- and one real-valued). First results indicate that the studied approach is also feasible for real-valued tuning parameters and remains effective in higher dimensional spaces.

The results indicate that Bayesian optimization is a promising approach to tune decomposition schemes for large-scale mixed-integer programs. Future work will investigate the optimization of tuning-parameters for multiple instances in two directions. Direction one is inspired by hyperparameter optimization methods and aims at tuning one decomposition scheme which is on average optimal for multiple instances. Direction two is motivated by algorithm selection methods and aims at predicting good tuning parameters from previously optimized tuning parameters.



Enhancing industrial symbiosis to reduce CO2 emissions in a Portuguese industrial park

Ricardo Nunes Dias1,2, Fátima Nunes Serralha2, Carla Isabel Costa Pinheiro1

1Centro de Química Estrutural, IMS, Department of Chemical Engineering, Instituto Superior Técnico/Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; 2RESILIENCE – Center for Regional Resilience and Sustainability, Escola Superior de Tecnologia do Barreiro, Instituto Politécnico de Setúbal, 2839-001 Lavradio, Portugal

The primary objective of any industry is to generate profit, which often results in a focus on the efficiency of product production, not neglecting environmental and social issues. However, it is important to recognise that every process has multiple outlets, including the desired products and residues. In some cases, the effort required to process these residues further may outweigh the benefits (at first glance), leading to their disposal at a cost to the industry. Many of these residues can be sorted to enhance their value, enabling their sale instead of disposal [1].

This work presents a model developed in GAMS to identify and quantify potential symbioses that are already occurring or could occur if the appropriate relations between enterprises were established. A network flow is modelled to establish as many symbioses as possible. The objective function maximises material exchange between enterprises while ensuring that every possible symbiosis is established. This will result in exchanges between enterprises that may involve amounts of waste too small to be implemented. However, this outcome is useful for decision-makers, as having multiple sinks for a given residue can be beneficial [2,3]. EMn,j,i,n' (exchanged material) is the main decision variable of the model, where the indices are: n and n', the donor and receiver enterprises (respectively); j, the category; and i, the residue. A binary variable, Yn,j,i,n', is also used to allow or forbid a given exchange between enterprises. Each residue is categorised according to the role it has in each enterprise: it can be an industrial residue (category 3) or a resource (category 0); categories 1 and 2 are reserved for products and subproducts, respectively. The wastes produced are converted into CO2eq (carbon dioxide equivalent) as a quantification of environmental impact. Reducing the amount of waste produced can significantly reduce the environmental impact of a given enterprise. This study assesses the largest industrial park in Portugal, which encompasses a refinery and a petrochemical plant as the two largest facilities within the park. The direct CO2 emissions mitigated by the deployment of CO2 utilisation processes can be quantified. The establishment of a methanol plant utilising CO2 can reduce the CO2 emissions from the park by 335,560 tons. A range of CO2 utilisation processes will be evaluated to determine the optimal processes for implementation.
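
The model is implemented in GAMS; the Pyomo sketch below only illustrates its structure (continuous exchanged-material variables EM linked to binary allowance variables Y, maximizing total exchange) on invented data, with the category index j omitted for brevity.

```python
# Illustrative exchange-network sketch with EM (continuous) and Y (binary)
# variables. Enterprises, residues and bounds are placeholders; the authors'
# model is implemented in GAMS and includes a category index.
import pyomo.environ as pyo

enterprises = ["refinery", "petrochemical", "methanol_plant"]
residues = ["CO2", "spent_caustic"]

availability = {("refinery", "CO2"): 500.0, ("petrochemical", "CO2"): 300.0,
                ("refinery", "spent_caustic"): 20.0}            # kt/y, placeholder
demand = {("methanol_plant", "CO2"): 335.56,
          ("petrochemical", "spent_caustic"): 15.0}             # kt/y, placeholder

m = pyo.ConcreteModel()
pairs = [(n, i, r) for n in enterprises for i in residues
         for r in enterprises if r != n]
m.EM = pyo.Var(pairs, domain=pyo.NonNegativeReals)   # exchanged material n -> n'
m.Y = pyo.Var(pairs, domain=pyo.Binary)              # exchange allowed?

bigM = 1e4
m.link = pyo.Constraint(pairs, rule=lambda m, n, i, r: m.EM[n, i, r] <= bigM * m.Y[n, i, r])
m.supply = pyo.Constraint(enterprises, residues, rule=lambda m, n, i:
    sum(m.EM[n, i, r] for r in enterprises if r != n) <= availability.get((n, i), 0.0))
m.sink = pyo.Constraint(enterprises, residues, rule=lambda m, r, i:
    sum(m.EM[n, i, r] for n in enterprises if n != r) <= demand.get((r, i), 0.0))

m.obj = pyo.Objective(expr=sum(m.EM[p] for p in pairs), sense=pyo.maximize)
pyo.SolverFactory("cbc").solve(m)   # any available MILP solver
print("total material exchanged [kt/y]:", pyo.value(m.obj))
```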

Even though a residue can have impacts on several environmental aspects, in this work we focus on reducing carbon emissions. Furthermore, it was found that cooperation between local enterprises and the announced investments of these enterprises can lead to significant environmental gains in the region studied.

References

[1] J. Patricio, Y. Kalmykova, L. Rosado, J. Cohen, A. Westin, J. Gil, Resour Conserv Recycl 185 (2022). 10.1016/j.resconrec.2022.106437.

[2] D.C.Y. Foo, Process Integration for Resource Conservation, 2016.

[3] L. Fraccascia, D.M. Yazan, V. Albino, H. Zijm, Int J Prod Econ 221 (2020). 10.1016/j.ijpe.2019.08.006.



Blue Hydrogen Plant: Accurate Hybrid Model Based on Component Mass Flows and Simplified Thermodynamic Properties is Practically Linear

Farbod Maghsoudi, Raunak Pandey, Vladimir Mahalec

McMaster University, Canada

Current models of process plants are either rigorous first-principles models based on molar flows and fractions (used for process design or optimization of operating conditions) or simple mass- or volume-flow models (used for production planning and scheduling). Detailed models compute stream properties via nonlinear calculations which employ mole fractions, resulting in many nonlinearities and limiting plant-wide models to a single time-period computation. Planning models are flow-based models, usually linear, and therefore solve rapidly, which makes them suitable for multi-time-period representation of the plant at the expense of lower accuracy.

Once a plant is in operation, most of its streams stay at or close to the normal operating conditions which are maintained by the process control loops. Therefore, each stream can be described by its properties at these normal operating conditions (unit enthalpy, temperature, pressure, density, heat capacity, vapor fraction, etc.). It should be noted that these bulk properties per unit mass are much less sensitive to changes in stream composition if one employs mass units instead of moles (e.g. latent heat of C5 to C10 hydrocarbons varies much less in energy/mass than in energy/mole units).

Based on these observations, this work employs a new plant modelling paradigm which leads to models that have accuracy close to the rigorous models and at the same time the models are (almost) linear, thereby permitting rapid solution of large-scale single-period and multi-period models. Instead of total molar flow and mole fractions, we represent streams by mass flows of components and total mass flow. In addition, we employ simplified thermodynamic properties based on [property value/mass], which eliminates the need to use mole or mass fractions.
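
A minimal sketch of this stream representation, assuming a simple mixer: component mass flows add up, and the energy balance uses fixed per-mass enthalpies taken at normal operating conditions, so the balance stays linear in the flows. All numbers are placeholders.

```python
# Streams as component mass flows plus per-mass properties fixed at normal
# operating conditions; the mixer energy balance is linear in the flows.
components = ["CH4", "H2", "CO2", "H2O"]

def mix(streams):
    """Mix streams given as dicts of component mass flows [kg/s] and a fixed
    specific enthalpy h [kJ/kg] taken at normal operating conditions."""
    out_flows = {c: sum(s["flows"].get(c, 0.0) for s in streams) for c in components}
    # Energy balance with per-mass enthalpies: H_out = sum_i (m_i * h_i), linear.
    H_out = sum(sum(s["flows"].values()) * s["h"] for s in streams)   # kJ/s
    return {"flows": out_flows, "H": H_out}

feed = {"flows": {"CH4": 3.2, "CO2": 0.1}, "h": -4650.0}        # placeholder values
recycle = {"flows": {"H2": 0.4, "CO2": 0.6, "H2O": 0.2}, "h": -7900.0}
print(mix([feed, recycle]))
```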

This paradigm has been used to model a blue hydrogen plant described in the NETL report [1]. The plant converts natural gas into hydrogen and CO2 via autothermal reforming (ATR) and water-gas shift (WGS) reactors. Oxygen is supplied from the air separation unit, while steam and electricity are supplied by a combined heat and power (CHP) unit. Stream properties at normal operating conditions have been obtained from the Aspen Plus plant model. Surrogate reactor models employ component mass flows and have only one bilinear term, even though their Aspen Plus counterpart is a highly nonlinear RGIBBS model. The entire plant model has a handful of bilinear terms, and its results are within 1% to 2% of the rigorous Aspen Plus model.

Novelty of our work is in changing the plant modelling paradigm from molar flows, fractions, and rigorous thermodynamic properties calculation to mass component flows and simplified thermodynamic properties. Rigorous properties calculation is used to update the simplified properties after the hybrid model converges. This novel plant modelling paradigm greatly reduces nonlinearities of plant models while maintaining high accuracy. Due to its rapid convergence, the same plant model can be used for optimization of operating condition, multi-time period production planning, and for scheduling.

References:

  1. Comparison of Commercial State-of-the-art Fossil-based Hydrogen Production Technologies, DOE/NETL-2022/3241, April 12, 2022


Synergies between the distillation of first- and second-generation sugarcane ethanol for sustainable biofuel production

Luiz De Martino Costa1,2, Abhay Athaley3, Zach Losordo4, Adriano Pinto Mariano1, John Posada2, Lee Rybeck Lynd5

1Universidade Estadual de Campinas, Brazil; 2Delft University of Technology, The Netherlands; 3National Renewable Energy Laboratory, United States; 4Terragia Biofuel Incorporated, United States; 5Dartmouth College, United States

Despite the yearly opening of second-generation (2G) sugarcane distilleries in Brazil, 2G bagasse ethanol distillation remains a challenging unit operation because the low ethanol titer increases heat duty and production costs per unit mass of ethanol produced. For this reason, and because of the logistics involved in transporting sugarcane bagasse, 2G bagasse ethanol is currently commercially produced in plants annexed to first-generation (1G) ethanol plants, and this configuration is likely to become one path of evolution for 2G ethanol production in Brazil.

In the context of 1G2G integrated sugarcane ethanol plants, mixing ethanol beers from both processes may reduce the production costs of 2G ethanol (personal communication with a 2G ethanol producer). However, the energy, process, economic, and environmental advantages of this integrated model compared to its stand-alone counterpart remain unclear. Thus, this work focused on the energy synergies between the distillation of integrated first- and second-generation sugarcane ethanol mills.

For this investigation, integrated and separated 1G2G distillation simulations were conducted using Aspen Plus v.10. The separated distillation arrangement consisted of two RadFrac columns: one to distil 1G beer and another to distil 2G beer to near-azeotropic levels (91.5 wt% ethanol). In the integrated distillation arrangement, two columns were used: one to rectify 2G beer and another to distil 2G vapor and 1G beer to azeotropic levels. The mass flow ratio between 1G and 2G beer was assumed to be 3:1; both mixtures enter the columns as saturated liquid and consist of only water and ethanol. The 1G beer titer was assumed to be 100 g/L and the 2G beer titer was varied from 10 to 40 g/L to understand and compare the energy impacts of low-titer 2G beer. The energy analysis was conducted by quantifying and comparing the reboilers' duty and the distilled ethanol production to calculate the heating energy demand.

1G2G integration resulted in an overall heating energy demand for ethanol distillation at a near-constant value of 3.25 MJ/kg ethanol, regardless of the 2G ethanol titer. In comparison, the separated scenario had an energy demand ranging from 3.60 (40 g/L 2G beer titer) to 3.80 (10 g/L 2G beer titer) MJ/kg ethanol, meaning that energy savings of 9.5% to 14.5% are possible. In addition to the energy savings, the energy demand found for the integrated scenario is almost the same as for 1G beer alone. The main reason for these results is that the reflux ratio necessary for distillation of 2G ethanol drops in an integrated 1G2G column to nearly 1G-only conditions, reducing the energy demand attributable to 2G ethanol. This can be observed in the integrated scenario, where the 2G ethanol heat demand in isolation stays at a near-constant value of 3.35 MJ/kg ethanol over the studied range of 2G ethanol titer, while it changes from 5.81 to 19.92 MJ/kg ethanol in the separated scenario. These results indicate that distillation integration should be chosen for 1G2G sugarcane distilleries for a less energy-demanding process and, therefore, a more sustainable biofuel.



Development of anomaly detection models independent of noise and missing values using graph Laplacian regularization

Yuna Tahashi, Koichi Fujiwara

Department of Materials Process Engineering, Nagoya University, Japan

Process data frequently suffer from imperfections such as missing values or measurement noise due to sensor malfunctions. Such data imperfections pose significant challenges to process fault detection, potentially leading to false positives or to overlooking rare faulty events. Fault detection models with high sensitivity may excessively detect these irregularities, which disturbs the identification of true faulty events.

To address this challenge, we propose a new fault detection model based on an autoencoder architecture with graph Laplacian regularization that considers specific temporal relationships among time-series data. Laplacian regularization assumes that neighboring samples remain similar, imposing significant penalties when neighboring samples lack smoothness. In addition, graph Laplacian regularization can take the smoothness of graph structures into account. Since normal samples in close temporal proximity should keep similar characteristics, a graph can be utilized to represent temporal dependencies between successive samples in a time series. In the proposed model, the nearest correlation (NC) method, a structural learning algorithm that considers the correlation among variables, is used. Using graph Laplacian regularization with the NC method, it is expected that missing values or measurement noise are corrected automatically from the viewpoint of the correlation among variables in the normal process condition, and that only significant changes such as faulty events are detected because they cannot be corrected sufficiently. The proposed method has broad applicability to various models because the graph regularization term based on the NC method is simply added to the objective function when a model is trained.
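
A minimal PyTorch sketch of such a regularized autoencoder: the training loss adds a graph Laplacian penalty tr(Z^T L Z) over latent codes of neighbouring samples. The NC-method graph construction is replaced here by a simple chain graph over consecutive time samples, and the data are random placeholders.

```python
# Autoencoder with a graph Laplacian regularization term added to the loss.
# The graph (here a chain over consecutive samples) stands in for the
# NC-method graph; data are random placeholders.
import torch
import torch.nn as nn

n, d, latent = 256, 20, 4
x = torch.randn(n, d)                           # normal-condition training window

# Chain graph linking temporally neighbouring samples.
A = torch.zeros(n, n)
idx = torch.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0
L = torch.diag(A.sum(1)) - A                    # graph Laplacian

encoder = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, latent))
decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, d))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
lam = 0.1                                       # regularization weight

for _ in range(500):
    z = encoder(x)
    x_hat = decoder(z)
    recon = ((x - x_hat) ** 2).mean()
    smooth = torch.trace(z.T @ L @ z) / n       # penalizes non-smooth neighbours
    loss = recon + lam * smooth
    opt.zero_grad(); loss.backward(); opt.step()

# At monitoring time, the reconstruction error of new samples serves as the
# fault indicator.
print("final training loss:", float(loss))
```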

To demonstrate the efficacy of our proposed model, we conducted a case study using simulation data generated from a vinyl acetate monomer (VAM) production process, employing a rigorous process model built on Visual Modeler (Omega Simulation Inc., Japan). In the VAM simulator, six faulty scenarios, such as sudden changes in feed composition and pressure, were generated.

The results show that the fault detection model with graph Laplacian regularization provides higher fault detection accuracy compared to the model without graph Laplacian regularization in some faulty scenarios. The false alarm rate (FAR) and the missing alarm rate (MAR) were improved by up to 0.4% and 50.1%, respectively. In addition, the detection latency (DL) was shortened by up to 1,730 seconds. Therefore, it was confirmed that graph Laplacian regularization with the NC method is particularly effective for fault detection.

The use of graph Laplacian regularization with the NC method is expected to realize a more reliable fault detection model, which would be capable of robustly handling noise and missing values, reducing false positives, and identifying true faulty events rapidly. This advancement promises to enhance the efficiency and reliability of process monitoring and control across various industrial applications.



Comparing incinerator kiln model predictions with measurements of industrial plants

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Because of the heterogeneity of the waste and the lack of local measurements, the industry relies on traditional control strategies, including manual piloting. Advanced modeling strategies have been used to gain insights into the design of such facilities. Despite two decades of scientific effort, obtaining good model accuracy and reliability is still challenging.
In this work, the predictions of a phenomenological model based on the simplification of literature works are compared with the measurements of an industrial incinerator. The model consists of two sub-models, namely the bed model and the freeboard model. The bed refers to the solid waste traveling through the kiln, while the freeboard refers to the gaseous space above the bed where the flame resides.
The bed of waste is simulated with finite volumes and a walking-columns approach, while the freeboard is modeled with the zone method and the interface with the boiler is taken into account through a three-layer system. The code implementation of the model takes into account the geometry and other important plant characteristics in a way that allows different types of grate kilns to be simulated easily.
The incinerator used as a reference for the development of the model is located in Alsace, France. It features a waste chute, a three-zone grate, water walls in the kiln, four secondary air injection points and a cooling water injection. The simulation results are compared with temperature and gas composition measurements. Except for the oxygen concentration, gas composition data need to be retro-calculated from stack gas analyzers. The simulated bed height is compared with the observable fraction of the actual bed. The model reproduces the static behavior and general dynamic tendencies well.
The very strong sensitivity of the model to particle diameter is discussed. Additionally, the model is configured for two other incinerators and a brief comparison with industrial data is performed to assess the generality of the model.
Despite encouraging results, the need for further work on the solid behavior is highlighted.



Modeling the freeboard of a municipal waste incinerator

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Despite the apparent simplicity of the process, the heterogeneity of the waste and the scarcity of local measurements make waste incineration a challenging process to describe mathematically.
Most modeling efforts concentrate on the bed behavior. However, the gaseous space above the bed, named the freeboard, also needs to be modeled in order to mathematically represent the behavior of the kiln. Indeed, there is a tight coupling between these two spaces, as the bed feeds the freeboard with pyrolysis gases allowing a flame to form in the freeboard, while the flame radiates heat back to the bed, allowing the drying and the pyrolysis to take place.
The freeboard may be modeled using various techniques. The most accurate and commonly used technique is CFD, generally with established commercial software. CFD allows detailed flow characteristics to be obtained, which is very valuable for optimizing secondary air injection. However, the CFD setup is quite heavy and harder to interface with the custom codes typically used for bed modeling. In this work, we propose a coarse model, more adapted to operational use. Each grate zone is associated with a freeboard gas space where homogeneous combustion reactions occur. Radiative heat transfer is modeled using the zonal method. Three layers are used to represent the interface with the boiler and the thermal inertia induced by the refractory. The flow description is reduced to its minimum and solved through the combination of the continuity equation and the ideal gas law, without a momentum balance.
The resulting mathematical model is a system of ODEs that can be easily solved with general-purpose stiff ODE solvers based on backward differentiation formulas. Steady-state simulation results show good agreement with the few measurements available. Dynamic effects are hard to validate due to the lack of local measurements, but general tendencies seem well represented. The coarse freeboard representation is shown to be sufficient to obtain the radiation profile arriving at the bed.
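
A minimal sketch of this solution strategy, assuming SciPy: a stiff toy two-state "freeboard" system integrated with a backward-differentiation-formula (BDF) solver. The right-hand side below is a placeholder, not the zonal model.

```python
# Stiff toy "freeboard" ODE system solved with a BDF solver (SciPy).
# The states and rate expressions are placeholders for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def freeboard_rhs(t, y):
    T, c_fuel = y                                # gas temperature [K], fuel fraction [-]
    r = 1e3 * np.exp(-8000.0 / T) * c_fuel       # homogeneous combustion rate
    dT = 50.0 * r - 0.05 * (T - 1100.0)          # heat release minus radiative loss
    dc = -r + 0.01 * (0.2 - c_fuel)              # consumption plus pyrolysis gas feed
    return [dT, dc]

sol = solve_ivp(freeboard_rhs, (0.0, 3600.0), [1150.0, 0.1],
                method="BDF", rtol=1e-6, atol=1e-9)
print("final state (T, c_fuel):", sol.y[:, -1])
```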



Superstructure as a communication tool in pre-emptive life cycle design engaging society: Findings from case studies on battery chemicals, plastics, and regional resources

Yasunori Kikuchi1, Ayumi Yamaki1, Aya Heiho2, Jun Nakatani1, Shoma Fujii1, Ichiro Daigo1, Chiharu Tokoro1,3, Shisuke Murakami1, Satoshi Ohara1

1The University of Tokyo, Japan; 2Tokyo City University, Japan; 3Waseda University, Japan

Emerging technologies require sophisticated design and optimization engaging social systems due to their innovative and rapidly advancing characteristics. Despite the fact that they have the significant capacity to change material flows and life cycles by their penetration, their future development and sociotechnical regimes, e.g., regulatory environment, societal infrastructure, and market, are still uncertain and may affect the optimal systems to be implemented in the future. Multiple technologies are being considered simultaneously for a single issue, and appropriate demarcation and synergistic effects are not being evaluated. Superstructures in process systems engineering can visualize all alternative candidates for design problems and contain emerging technologies as such candidates.

In this study, we are tackling pre-emptive life cycle design in social challenges implementing emerging technologies with case studies on battery chemicals, plastics, and regional resources. Appropriate alternative candidates were generated with stakeholders in industries and national projects by constructing superstructures. Based on the consensus superstructures, life cycles have been proposed considering life cycle assessment (LCA) by the simulations of applying emerging technologies.

Regarding the battery chemistry issue, the nickel-manganese-cobalt (NMC) type lithium batteries have become dominant, although the lithium iron phosphate (LFP) type has also been considered as a candidate. The battery chemistries and recycling technologies are emerging technologies in this issue and superstructures were proposed for recycling systems (Yonetsuka et al., 2024). Through communication with the managers of Japanese national projects on battery technology, the scenarios on battery resource circulation have been developed. The issue of plastics has become the design problem of systems applying biomass-derived and recycle-based carbon sources (Meng et al., 2023; Kanazawa et al., 2024). Based on superstructure (Nakamura et al., 2023), the scenario planning and LCA have been conducted and shared with stakeholders for designing future plastic resource circulations. Regional resources could be circulated by implementing multiple technologies (Kikuchi et al., 2023). Through communication with residents and stakeholders, the demonstration test was conducted.

The case studies in this study find the facts below. The superstructures with technology assessments could support the common understanding of the applicable technologies and their pros and cons. Because technologies could not be implemented without social acceptance, CAPE tools should be able to discuss the sociotechnical and socioeconomical aspects of process systems.

D. Kanazawa et al., 2024, Scope 1, 2, and 3 Net Zero Pathways for the Chemical Industry in Japan, J. Chem. Eng. Jpn., 57 (1). DOI: 10.1080/00219592.2024.2360900.

Y. Kikuchi et al., 2024, Prospective life-cycle design of regional resource circulation applying technology assessments supported by CAPE tools, Comput. Aid. Chem. Eng., 53, 2251-2256

F. Meng et al., 2023, Planet compatible pathways for transitioning the chemical industry, Proc. Natl. Acad. Sci., 120 (8) e2218294120.

T. Nakamura et al., 2024, Assessment of Plastic Recycling Technologies Based on Carbon Resource Circularity Considering Feedstock and Energy Use, Comput. Aid. Chem. Eng., 53, 799-804

T. Yonetsuka et al., 2024, Superstructure Modeling of Lithium-Ion Batteries for an Environmentally Conscious Life-Cycle Design, Comput. Aid. Chem. Eng., 53, 1417-1422



A kinetic model for transesterification of vegetable oils catalyzed by sodium methylate—Insights from inline Raman spectroscopy

Ilias Bouchkira, Mohammad El Wajeh, Adel Mhamdi

Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany

The transesterification of triolein by methanol for biodiesel production is of great interest due to its potential to provide a sustainable and environmentally friendly alternative to fossil fuels. Biodiesel can be produced from renewable sources like vegetable oils, thereby contributing to reducing greenhouse gas emissions and dependency on non-renewable energy. The process also yields glycerol, a valuable by-product that is used in various industries. Given the growing global demand for cleaner energy and sustainable chemical processes, understanding and modeling the kinetics of biodiesel production is critical for improving efficiency, reducing costs, and ensuring scalability of biodiesel production, especially for model-based process design and control (El Wajeh et al., 2023).

We present a kinetic model of the transesterification of triolein by methanol to produce fatty acid methyl esters (FAME), i.e. biodiesel, and glycerol. For parameter estimation, we perform transesterification experiments using an automated lab-scale system consisting of a semi-batch reactor, dosing pumps, stirring system and a cooling/heating thermostat. An important contribution in this work is that we use inline Raman spectroscopy instead of taking samples for offline analysis. The application of Raman spectroscopy enables continuous concentration monitoring of key species involved in the reaction, i.e. FAME, triglycerides, methanol, glycerol and catalyst.

We employ sodium methylate as a catalyst, addressing a gap in the literature, where kinetic parameter values for the transesterification with this catalyst are lacking. To ensure robust parameter estimation, we perform a global sensitivity-based estimability analysis (Bouchkira et al., 2024), confirming that the experimental data is sufficient for accurate model calibration. The parameter estimation is carried out using genetic algorithms, and we determine the confidence intervals of the estimated parameters through Hessian matrix analysis. This approach ensures reliable and meaningful model parameters for a broad range of operating conditions.
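
A generic sketch of this estimation step, using SciPy's differential evolution as a stand-in for the genetic algorithm, with a simplified irreversible three-step reaction scheme and synthetic data; all values are placeholders rather than the measured Raman concentrations.

```python
# Kinetic parameter estimation sketch for a simplified irreversible
# three-step transesterification scheme, fitted to synthetic data with
# differential evolution (stand-in for the genetic algorithm in the paper).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def rhs(t, y, k1, k2, k3):
    TG, DG, MG, GL, FAME, MeOH = y
    r1, r2, r3 = k1 * TG * MeOH, k2 * DG * MeOH, k3 * MG * MeOH
    return [-r1, r1 - r2, r2 - r3, r3, r1 + r2 + r3, -(r1 + r2 + r3)]

t_meas = np.linspace(0, 60, 20)                     # [min]
y0 = [1.0, 0.0, 0.0, 0.0, 0.0, 6.0]                 # [mol/L], 6:1 methanol:oil
k_true = (0.30, 0.20, 0.25)                         # invented "true" constants
y_meas = solve_ivp(rhs, (0, 60), y0, t_eval=t_meas, args=k_true).y
y_meas += 0.01 * np.random.default_rng(0).standard_normal(y_meas.shape)

def objective(k):
    y_sim = solve_ivp(rhs, (0, 60), y0, t_eval=t_meas, args=tuple(k)).y
    return np.sum((y_sim - y_meas) ** 2)

result = differential_evolution(objective, bounds=[(1e-3, 1.0)] * 3, seed=0, tol=1e-8)
print("estimated rate constants [L/(mol·min)]:", result.x)
```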

We perform experiments at several temperatures relevant for industrial application, with a specific focus on the range around 60°C. The Raman probe used inside the reactor is calibrated offline with high precision, achieving excellent calibration accuracy for concentrations (R2 = 0.99). The predicted concentrations from the model align with the experimental data, with deviations generally under 2%, demonstrating the accuracy and reliability of the proposed kinetic model across different operating conditions.

References

El Wajeh, M., Mhamdi, A., & Mitsos, A. (2023). Dynamic modeling and plantwide control of a production process for biodiesel and glycerol. Industrial & Engineering Chemistry Research, 62(27), 10559-10576.

Bouchkira, I., Latifi, A. M., & Benyahia, B. (2024). ESTAN—A toolbox for standardized and effective global sensitivity-based estimability analysis. Computers & Chemical Engineering, 186, 108690.



Integration of renewable energy and reversible solid oxide cells to decarbonize secondary aluminium production and urban systems

Daniel Florez-Orrego1, Dareen Dardor1, Meire Ribeiro Domingos1, Reginald Germanier2, François Maréchal1

1Ecole Polytechnique Federale de Lausanne, Switzerland; 2Novelis Sierre S.A.

The aluminium recycling and remelting industry is a key actor in advancing a sustainable and circular economy within the aluminium sector. Currently, energy conversion processes in secondary aluminium production are largely dependent on natural gas, exposing the industry to volatile market prices and contributing to significant environmental impacts. To mitigate this, efforts are focused on reducing reliance on fossil fuels by incorporating renewable energy and advanced cogeneration systems. Due to the intermittent nature of renewable energy, a combination of technologies can be employed to improve energy integration and enhance process resilience in heavy industry. These technologies include energy storage systems, oxycombustion furnaces, carbon abatement, power-to-gas technologies, and biomass thermochemical conversion. This configuration allows for seasonal storage of renewable energy, optimizing its use during periods of high electricity and natural gas prices. High-temperature reversible solid oxide cells play a critical role in balancing energy needs, while increasing exergy efficiency within the integrated facility, offering advantages over traditional cogeneration systems. When thermally integrated into an aluminium remelting plant, the whole system functions as an industrial battery (i.e. fuel and gases storage), cascading low-grade waste heat to a nearby urban agglomeration. The waste heat temperature from aluminium furnaces and biomass energy conversion technologies supports the integration of high-temperature reversible solid oxide cells. The post-combustion of tail gas from these cells provides heat to the melter furnace, while the electricity generated can be used elsewhere in the system, such as for powering electrical furnaces, rolling processes, ancillary demands, and district heating heat pumps. In fact, by optimally tuning the operating parameters of the rSOC, which in turn depend on the partial load and the utilization factor, the heat-to-power ratio can be modulated to satisfy the energy demands of all the industrial and urban systems involved. The chemically-driven heat recovery in the reforming section is also compared to other energy recovery systems, such as supercritical CO2 power cycles and preheater-melter furnace integration. In all the cases, the low-grade waste heat recovery, typically rejected to environment, is used to supply the heat to the city using an anergy district heating network via heat pumping systems. In this advanced integrated scenario, energy consumption increases by only 30% compared to conventional systems based on natural gas and biomass combustion. However, CO2 emissions are reduced by a factor of three, particularly when combined with a carbon management and sequestration system. Further reductions in emissions can be achieved if higher shares of renewable electricity become available. Moreover, the use of local renewable energy resources promotes the energy security and sustainability of industries traditionally reliant on fossil energy resources.



A Novel Symbol Recognition Framework for Digitization of Piping and Instrumentation Diagrams

Zhiyuan Li1, Zheqi Liu2, Jinsong Zhao1, Huahui Zhou3, Xiaoxin Hu3

1Department of Chemical Engineering, Tsinghua University, Beijing, China; 2Department of Computer Science and Engineering, University of California, San Diego, US; 3Sinopec Ningbo Engineering Co., Ltd, Ningbo, China

Piping and Instrumentation Diagrams (P&IDs) are essential in the chemical industry, but most exist as scanned images, limiting seamless integration into digital workflows. This paper proposes a method to digitize P&IDs and automate unit operation selection for Hazard and Operability (HAZOP) analysis. We combined convolutional neural networks and transformers to detect devices, pipes, instrumentation, and text in image-format P&IDs. Then we reconstructed the process topology and control structures for each P&ID using distance metric learning. Furthermore, multiple P&IDs were integrated into a comprehensive chemical process knowledge graph by stream and equipment identifiers. To facilitate automated HAZOP analysis, we developed a node-merging algorithm that groups equipment according to predefined unit operation categories, thereby identifying specific analysis objects for intelligent HAZOP analysis.
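
As a purely illustrative aside (not the authors' implementation), the node-merging idea can be sketched with networkx: equipment nodes carrying a predefined unit-operation category are contracted into a single HAZOP analysis node. All node names, categories, and the merge rule below are hypothetical.

```python
# Minimal sketch of grouping P&ID equipment nodes into predefined
# unit-operation categories with networkx. Names and categories are
# illustrative assumptions, not the paper's data or algorithm.
import networkx as nx

# Reconstructed process topology: nodes are devices, edges are pipe connections.
g = nx.Graph()
g.add_nodes_from([
    ("P-101", {"category": "feed_system"}),
    ("E-201", {"category": "reaction_section"}),
    ("R-201", {"category": "reaction_section"}),
    ("C-301", {"category": "separation_section"}),
])
g.add_edges_from([("P-101", "E-201"), ("E-201", "R-201"), ("R-201", "C-301")])

def merge_by_category(graph):
    """Contract all nodes sharing a unit-operation category into one HAZOP node."""
    merged = graph.copy()
    by_cat = {}
    for node, data in graph.nodes(data=True):
        by_cat.setdefault(data["category"], []).append(node)
    for cat, members in by_cat.items():
        keep, *rest = members
        for other in rest:
            merged = nx.contracted_nodes(merged, keep, other, self_loops=False)
        merged.nodes[keep]["hazop_node"] = cat
    return merged

hazop_graph = merge_by_category(g)
print(hazop_graph.nodes(data=True))
print(list(hazop_graph.edges()))
```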

An evaluation conducted on a dataset comprising 500 simulated P&IDs revealed that the device recognition process achieved over 99% precision and recall, with 93% accuracy in text extraction. Processing time was reduced threefold compared to conventional methods, and the node-merging algorithm yielded satisfactory results. This study improves data sharing in chemical process design and facilitates automated HAZOP analysis.



Twin Roll Press Washer Blockage Prediction: A Pulp and Paper Plant Case Study

Bryan Li1,2, Isaac Severinsen1,2, Wei Yu1,2, Timothy Walmsley2, Brent Young1,2

1Department of Chemical and Materials Engineering, The University of Auckland, Auckland 1010, New Zealand; 2Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato, Hamilton 3240, New Zealand

A process fault is considered an unacceptable deviation from the normal state. Process faults can incur significant product and revenue loss, as well as damage to personnel and equipment. The aim of this research is to create a self-learning digital twin that closely replicates and interfaces with a physical plant to advise plant operators of potential faults in the near future. A key challenge in accurately predicting process faults is the lack of fault data due to the scarcity of fault occurrences. To overcome this challenge, this study uses generative artificial intelligence to create synthetic data indistinguishable from the limited real process fault datasets, so that deep learning algorithms can better learn the fault behaviours. The model capability is further enhanced with real-time fault library updates employing methods of low computational cost: principal component analysis and transfer learning.
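
As an illustration only, the low-cost principal component analysis component mentioned above can be sketched as a standard T2/SPE monitoring scheme on assumed data; the generative model and transfer-learning steps are not shown, and all data and thresholds below are synthetic.

```python
# Minimal sketch, assuming a matrix of normal-operation sensor data is available.
# Illustrates only a generic PCA monitoring component (Hotelling's T^2 and SPE);
# not the study's digital twin, GAN, or transfer-learning implementation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 12))          # hypothetical normal-operation data
X_new = rng.normal(size=(50, 12)) + 0.5        # hypothetical new samples to screen

mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
Z = (X_normal - mean) / std

pca = PCA(n_components=5).fit(Z)

def t2_spe(samples):
    """Hotelling's T^2 and squared prediction error for scaled samples."""
    z = (samples - mean) / std
    scores = pca.transform(z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    recon = pca.inverse_transform(scores)
    spe = np.sum((z - recon) ** 2, axis=1)
    return t2, spe

# Simple empirical control limits from the normal data (99th percentile).
t2_lim, spe_lim = (np.percentile(v, 99) for v in t2_spe(X_normal))
t2, spe = t2_spe(X_new)
flagged = (t2 > t2_lim) | (spe > spe_lim)
print(f"{flagged.sum()} of {len(X_new)} samples flagged as abnormal")
```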

A pulp bleaching and washing process is used as an industrial case study. This process is connected to downstream black liquor evaporators and chemical recovery boilers. Successful development of this model can aid decarbonisation progress in the pulp and paper industry by decreasing energy wastage, water usage, and process downtime.



Addressing Incomplete Physical Models in Chemical Processes: A Novel Physics-Informed Neural Network Approach

Zhiyuan Xie, Feiya Lv, Jinsong Zhao

Tsinghua University, People's Republic of China

In recent years, machine learning—particularly neural networks—has exerted a transformative influence on various facets of chemical processes, including variable prediction, fault detection, and fault diagnosis. However, when data is incomplete or insufficient, purely data-driven neural networks often encounter difficulties in achieving high predictive accuracy. Physics-Informed Neural Networks (PINNs) address these limitations by embedding physical knowledge and prior domain expertise into the neural network framework, thereby constraining the solution space and facilitating effective training with limited data. This methodology offers notable advantages in handling scarce industrial datasets. Despite these strengths, PINNs depend on explicit formulations of nonlinear partial differential equations (PDEs), which present significant challenges when modeling the intricacies of complex chemical processes. To overcome these limitations, this study introduces a novel PINN architecture capable of accommodating processes with incomplete PDE descriptions. Experimental evaluations on a Continuous Stirred Tank Reactor (CSTR) dataset, along with real-world industrial datasets, validate the proposed architecture’s effectiveness and demonstrate its feasibility in scenarios involving incomplete physical models.
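
For orientation only, a generic PINN can be sketched as below (PyTorch); this is not the architecture for incomplete PDEs proposed above, and the assumed equation (1-D diffusion), network shape, and data are illustrative.

```python
# Minimal generic PINN sketch: a network u(t, x) is trained against data plus
# the residual of an assumed 1-D diffusion equation u_t = D * u_xx.
# All names, the equation, and the data are illustrative assumptions.
import torch

torch.manual_seed(0)
D = 0.1
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(t, x):
    tx = torch.cat([t, x], dim=1)
    u = net(tx)
    u_t, u_x = torch.autograd.grad(u.sum(), (t, x), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - D * u_xx

# Hypothetical measured data (e.g., a few sensor readings).
t_data = torch.rand(20, 1); x_data = torch.rand(20, 1)
u_data = torch.sin(3.14159 * x_data) * torch.exp(-t_data)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    # Data misfit term.
    loss_data = torch.mean((net(torch.cat([t_data, x_data], 1)) - u_data) ** 2)
    # Physics term on random collocation points.
    t_c = torch.rand(100, 1, requires_grad=True)
    x_c = torch.rand(100, 1, requires_grad=True)
    loss_phys = torch.mean(pde_residual(t_c, x_c) ** 2)
    loss = loss_data + loss_phys
    loss.backward()
    opt.step()
```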



A Physics-based, Data-driven Numerical Framework for Anomalous Diffusion of Water in Soil

Zeyuan Song, Zheyu Jiang

Oklahoma State University, United States of America

Precision modeling and forecasting of soil moisture are essential for implementing smart irrigation systems and mitigating agricultural drought. Agro-hydrological models, which describe irrigation, precipitation, evapotranspiration, runoff, and drainage dynamics in soil, are widely used to simulate the root-zone (top 1 m of soil) soil moisture content. Most agro-hydrological models are based on the standard Richards equation [1], a highly nonlinear, degenerate elliptic-parabolic partial differential equation (PDE) with a first-order time derivative. However, research has shown that the standard Richards equation is unable to model preferential flow in soil with fractal structure. In such a scenario, the soil exhibits anomalous non-Boltzmann scaling behavior. For soils exhibiting non-Boltzmann scaling behavior, the soil moisture content is a function of $\frac{x}{t^{\alpha/2}}$, where $x$ is the position vector, $t$ denotes the time, and $\alpha$ is a soil-dependent parameter indicating subdiffusion ($\alpha \in (0,1)$) or superdiffusion ($\alpha \in (1,2)$). Incorporating this functional form of soil moisture into the Richards equation leads to a generalized, time-fractional Richards equation based on fractional time derivatives. Clearly, solving the time-fractional Richards equation for accurate modeling of water flow dynamics in soil faces extensive theoretical and computational challenges. Naïve approaches typically discretize the time-fractional Richards equation using the finite difference method (FDM). However, the stability of FDM is not guaranteed. Furthermore, the underlying physical laws (e.g., mass conservation) are often lost during the discretization process.

Here, we propose a novel numerical method that synergistically integrates the finite volume method (FVM), an adaptive linearization scheme, global random walk, and a neural network to solve the time-fractional Richards equation. Specifically, the fractional time derivatives are first approximated using a trapezoidal quadrature formula, before discretizing the time-fractional Richards equation by FVM. Leveraging our previous findings [2], we develop an adaptive linearization scheme to solve the discretized equation iteratively, thereby overcoming the stability issues associated with directly solving a stiff and sparse matrix equation. To better preserve the underlying physics during the solution process, we reformulate the linearized equation using a global random walk algorithm. Next, we show that the prevailing assumption that, in any discretized cell, the soil moisture is proportional to the number of particles does not hold. Instead, we propose to use neural networks to model the highly nonlinear relationships between the soil moisture content and the number of particles. We illustrate the accuracy and computational efficiency of our proposed physics-based, data-driven numerical method using numerical examples. Finally, a simple and efficient way to identify the parameter $\alpha$ is developed to match the solutions of the time-fractional Richards equation with experimental measurements.
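
As a side illustration of how a fractional time derivative can be discretized from a solution history, a minimal sketch of the classical L1 quadrature for a Caputo derivative of order alpha in (0, 1) is given below; the paper's trapezoidal quadrature, FVM discretization, random-walk reformulation, and neural network are not reproduced here.

```python
# Minimal sketch (assumptions noted): the classical L1 quadrature for a
# Caputo fractional time derivative of order alpha in (0, 1) on a uniform
# grid. This only illustrates the fractional-derivative discretization idea,
# not the scheme used in the study.
import numpy as np
from math import gamma

def caputo_l1(u_hist, dt, alpha):
    """Approximate d^alpha u / dt^alpha at the latest time from history u_hist."""
    n = len(u_hist) - 1                      # number of completed steps
    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
    increments = u_hist[n - j] - u_hist[n - j - 1]
    return np.sum(b * increments) / (gamma(2 - alpha) * dt ** alpha)

# Sanity check on u(t) = t, whose Caputo derivative is t^(1-alpha)/Gamma(2-alpha).
alpha, dt = 0.6, 1e-3
t = np.arange(0, 1 + dt, dt)
u = t.copy()
approx = caputo_l1(u, dt, alpha)
exact = t[-1] ** (1 - alpha) / gamma(2 - alpha)
print(approx, exact)   # the two values should agree closely
```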

References

[1] L.A. Richards, Capillary conduction of liquids through porous mediums, Physics, 1931, 1(5): 318-333.

[2] Z. Song, Z. Jiang, A Novel Data-driven Numerical Method for Hydrological Modeling of Water Infiltration in Porous Media, arXiv preprint arXiv:2310.02806, 2023.



Supersaturation Monitoring for Batch Crystallization using Empirical and Machine Learning Models

Mohammad Reza Boskabadi, Merlin Alvarado Morales, Seyed Soheil Mansouri, Gürkan Sin

Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Batch crystallization serves as a downstream process within the pharmaceutical and food industries, providing a high degree of flexibility in the purification of a wide range of products. Effective control over the crystal size distribution (CSD) is essential in these processes to minimize waste and the need for recycling, as crystals falling outside the target size range are typically considered waste or are recycled (Boskabadi et al., 2024). The resulting CSD is significantly influenced by the supersaturation (SS) of the mother liquor, a key parameter driving crystal nucleation and growth. Supersaturation is governed by several nonlinear factors, including concentration, temperature, purity, and other quality parameters of the mother liquor, which are often determined through laboratory analysis. Due to the complexity of these dependencies, no direct measurement method or single instrument exists for supersaturation assessment (Morales et al., 2024). This lack of efficient monitoring contributes to the GHG emissions associated with sugar production, estimated at 1.47 kg CO2/kg sugar (Li et al., 2024).

The primary objective of this study is to develop a machine learning (ML)-based model to predict sugar supersaturation using the sugar solubility dataset provided by Van der Sman (2017), aiming to establish correlations between temperature and sucrose solubility. To this end, different ML models were developed, and each model underwent rigorous statistical evaluation to verify its ability to capture solubility trends effectively. The results were compared to the saturation curve predicted by the Flory-Huggins thermodynamic model. The ML model simplifies supersaturation prediction by accounting for impurities and temperature dependencies, and was validated using experimental datasets. The findings indicate that this predictive model allows for more precise dynamic control of the crystallization process. Finally, the effect of the developed model on sustainable sugar production was investigated. It was demonstrated that using this model may reduce the mean batch residence time during the crystallization stage, lowering energy consumption, reducing the CO2 footprint, increasing production capacity, and ultimately contributing to sustainable process development.
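
Purely as an illustration of the underlying idea (not the study's dataset, model choice, or results), supersaturation can be expressed as the ratio of the measured concentration to a regressed saturation concentration; the data and model below are synthetic stand-ins.

```python
# Minimal sketch with synthetic data standing in for the solubility dataset:
# regress saturation concentration against temperature, then express
# supersaturation as the ratio of measured concentration to the predicted
# saturation value. Numbers and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
T = rng.uniform(40, 80, size=(200, 1))                      # temperature, degC (synthetic)
c_sat = 2.0 + 0.02 * T[:, 0] + rng.normal(0, 0.01, 200)     # g sucrose / g water (synthetic)

model = GradientBoostingRegressor().fit(T, c_sat)

def supersaturation(c_measured, temperature_c):
    """Supersaturation as concentration over predicted saturation concentration."""
    c_star = model.predict(np.array([[temperature_c]]))[0]
    return c_measured / c_star

print(supersaturation(3.4, 65.0))   # > 1 indicates a supersaturated liquor
```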

References:

Boskabadi, M. R., Sivaram, A., Sin, G., & Mansouri, S. S. (2024). Machine Learning-Based Soft Sensor for a Sugar Factory’s Batch Crystallizer. In Computer Aided Chemical Engineering (Vol. 53, pp. 1693–1698). Elsevier.

Li, K., Zhao, M., Li, Y., He, Y., Han, X., Ma, X., & Ma, F. (2024). Spatiotemporal Trends of the Carbon Footprint of Sugar Production in China. Sustainable Production and Consumption, 46, 502–511.

Morales, H., di Sciascio, F., Aguirre-Zapata, E., & Amicarelli, A. (2024). Crystallization Process in the Sugar Industry: A Discussion On Fundamentals, Industrial Practices, Modeling, Estimation and Control. Food Engineering Reviews, 1–29.

Van der Sman, R. G. M. (2017). Predicting the solubility of mixtures of sugars and their replacers using the Flory–Huggins theory. Food & Function, 8(1), 360–371.



Role of process integration and renewable energy utilization for the decarbonization of the watchmaking sector

Pullah Bhatnagar1, Daniel Alexander Florez Orrego1, Vibhu Baibhav1, François Maréchal1, Manuele Margni2

1EPFL, Switzerland; 2HES-SO Valais-Wallis, Switzerland

Switzerland is the largest exporter of watches and clocks worldwide. The Swiss watch industry contributes 4% to Switzerland's GDP, amounting to CHF 25 billion annually. As governments and international organizations accelerate efforts to achieve net-zero emissions, industries are increasingly pressured to adopt more sustainable practices. Decarbonizing the watch industry is therefore essential. One way to improve sustainability is by enhancing energy efficiency, which can significantly reduce the consumption of various energy sources, leading to lower emissions. Additionally, recovering waste heat from different industrial processes can further enhance energy efficiency.

The watch industry operates across five distinct typical days, each characterized by different levels of average power demand, plant activity, and duration. Among these, typical working days experience the highest energy demand, while vacation periods see the lowest. Adjusting the timing of vacation periods—such as shifting the month when the industry closes—can also improve energy efficiency. This becomes particularly relevant with the integration of decarbonization technologies like photovoltaic (PV) and solar thermal (ST) systems, which generate more energy during the summer months.

This work also explores the techno-economic feasibility of incorporating energy storage solutions (both for heat and electricity) and developing a tailored charging and dispatch strategy. The strategy would be designed to account for the variations in energy demand observed across the different characteristic time periods within a month.



An Integrated Machine Learning Framework for Predicting HPNA Formation in Hydrocracking Units Using Forecasted Operational Parameters

Pelin Dologlu1, Berkay Er1, Kemal Burçak Kaplan1, İbrahim Bayar2

1SOCAR Turkey, Digital Transformation Department, Istanbul 34485, Turkey; 2SOCAR STAR Oil Refinery, Process Department, Aliaga, Izmir 35800, Turkey

The accumulation of heavy polynuclear aromatics (HPNAs) in hydrocracking units (HCUs) poses significant challenges to catalyst performance and process efficiency. This study proposes an integrated machine learning framework that combines ridge regression, K-nearest neighbors (KNN), and long short-term memory (LSTM) neural networks to predict HPNA formation, enabling proactive process management. For the training phase, weighted average bed temperature (WABT), catalyst deactivation phase—classified using unsupervised KNN clustering—and hydrocracker feed (HCU feed) parameters obtained from laboratory analyses are utilized to capture the complex nonlinear relationships influencing HPNA formation. In the simulation phase, forecasted WABT values are generated using a ridge regression model, and future HCU feed changes are derived from planned crude oil blend data provided by the planning department. These forecasted WABT values, predicted catalyst deactivation phases, and anticipated HCU feed parameters serve as inputs to the LSTM model for predicting future HPNA levels. This approach allows us to simulate various operational scenarios and assess their impact on HPNA accumulation before they manifest in the actual process. By identifying critical process parameters and their influence on HPNA formation, the model enhances process engineers' understanding of the hydrocracking operation. The ability to predict HPNA levels in advance empowers engineers to implement corrective actions proactively, such as adjusting feed compositions or operating conditions, thereby mitigating HPNA formation and extending catalyst life. The integrated framework demonstrates high predictive accuracy and robustness, underscoring its potential as a valuable tool for optimizing HCU operations through advanced predictive analytics and informed decision-making.
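
As a compressed, purely illustrative sketch of the forecasting chain described above (all feature names, shapes, and data are hypothetical stand-ins rather than refinery data), a ridge regression can forecast WABT from lagged values and an LSTM can map recent operating windows to a future HPNA level.

```python
# Compressed sketch of a two-stage forecasting chain: ridge regression for
# WABT forecasting and an LSTM mapping operating windows to an HPNA level.
# Feature names, window lengths, and data are invented for illustration.
import numpy as np
import torch
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# 1) Ridge forecast of WABT from lagged values (toy data).
wabt = rng.normal(400, 5, size=600)
X_lag = np.column_stack([wabt[i:i - 5] for i in range(5)])   # 5 lagged values per sample
ridge = Ridge(alpha=1.0).fit(X_lag, wabt[5:])
print(ridge.predict(wabt[-5:].reshape(1, -1)))               # next-step WABT forecast

# 2) LSTM mapping 24-step windows of 6 features to an HPNA level.
class HPNAModel(torch.nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = torch.nn.LSTM(n_features, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # predict from the last hidden state

model = HPNAModel()
window = torch.randn(1, 24, 6)              # [batch, time, features], synthetic
print(model(window))                         # predicted HPNA level (untrained)
```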



Towards the Decarbonization of a Conventional Ammonia Plant by the Gradual Incorporation of Green Hydrogen

João Fortunato, Pedro Castro, Diogo A. C. Narciso, Henrique A. Matos

Centro de Recursos Naturais e Ambiente, Department of Chemical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal

As the second most produced chemical worldwide, ammonia (NH3) production depends heavily on fossil fuel consumption. The ammonia production process is highly energy-intensive, accounting for 1-2 % of total carbon dioxide emissions [1] and 2 % of the energy consumed worldwide [1]. Ammonia is industrially produced by the Haber-Bosch (HB) process, by reacting hydrogen with nitrogen. Hydrogen can be obtained from a variety of feedstocks, such as coal and naphtha, but is typically obtained from the processing of natural gas, via Steam Methane Reforming (SMR) [1]. In the latter case, atmospheric air can be used directly as a nitrogen source without the need for previous separation, since the oxygen is completely consumed by the methane partial oxidation reaction [2].

The ammonia industry is striving for decarbonization, driven by increasing carbon neutrality policies and energy independence targets. In Europe, the Renewable Energy Directive III requires that 42 % of the hydrogen used in industrial processes come from renewable sources by 2030 [3], setting a critical shift towards more sustainable ammonia production methods.

The literature includes many studies focusing on the production of low-carbon ammonia entirely from green hydrogen, without considering its production via SMR. However, this approach could threaten the competitiveness of the current industry and forfeit the opportunity to continue valorizing previous investments.

This work addresses the challenges involved with the incorporation of green hydrogen into a conventional ammonia production plant (methane-fed HB process). An Aspen Plus V14 model was developed, and two different green hydrogen incorporation strategies were tested: S-I and S-II. These were inspired by existing operating procedures at one real-life plant; the main focus of the model simulations is therefore to determine the feasible limits of using an existing conventional NH3 plant and to observe the associated main KPIs when green H2 is available to add.

The S-I strategy reduces the production of grey hydrogen by reducing natural gas and process steam in the SMR. The intake of green hydrogen allows hydrogen and ammonia production to remain fixed.

In strategy S-II, grey hydrogen production remains unchanged, resulting in higher total hydrogen production. By taking in larger quantities of process air, higher NH3 production can be achieved.

These strategies introduce changes to the SMR process and NH3 synthesis, which imply modifications to the operating conditions of the plant. These changes lead to a technical limit for the incorporation of green hydrogen into the conventional HB process. Nevertheless, both strategies make it possible to reduce carbon emissions per quantity of NH3 produced and promote the gradual decarbonization of the current ammonia industry.

1. IEA International Energy Agency. Ammonia Technology Roadmap - Towards More Sustainable Nitrogen Fertiliser Production. https://www.iea.org/reports/ammonia-technology-roadmap (2021).

2. Appl, M. Ammonia, 2. Production Processes. in Ullmann’s Encyclopedia of Industrial Chemistry (Wiley, 2011). doi:10.1002/14356007.o02_o11.

3. RED III Directive (EU) 2023/2413 of 18 October 2023.



Gate-to-Gate Life Cycle Assessment of CO₂ Utilisation in Enhanced Oil Recovery: Sustainability and Environmental Impacts in Dukhan Field, Qatar

Razan Sawaly, Ahmad Abushaikha, Tareq Al-Ansari

Hamad Bin Khalifa University (HBKU), Qatar

This study examines the potential impact of implementing a cap and trade system to reduce CO₂ emissions in Qatar's industrial sector, which is a significant contributor to global emissions. Using data from seven key industries, the research sets emission caps, allocates allowances through a grandfathering method, and allows trading of these allowances to create economic incentives for emission reductions. The study utilizes a model with a carbon price of $12.50 per metric ton of CO₂ and compares baseline emissions with future reduction strategies. Results indicate that while some industrial plants, such as the LNG and methanol plants, achieved substantial emission reductions and financial surpluses through practices like carbon capture and switching to hydrogen, others continued to face deficits. The findings highlight the system's potential to promote sustainable practices, suggesting that tighter caps and auction-based allowance allocations could further enhance the effectiveness of the cap and trade system in Qatar's industrial sector.



Robust Flowsheet Synthesis for Ethyl Acetate, Methanol and Water Separation

Aayush Gupta, Kartavya Maurya, Nitin Kaistha

Indian Institute of Technology Kanpur, India

Ethyl acetate and methanol are commonly used solvents in the pharmaceutical, textile, dye, fine organic, and paint industries [1], [2]. The waste solvent from these industries often contains EtAc and MeOH in water in widely varying proportions. Sustainability concerns, reflected in increasingly stringent waste discharge regulations, now dictate complete recovery, recycle and reuse of the organic species from the waste solvent. For the EtAc-MeOH-water waste solvent, simple distillation cannot be used due to the presence of a homogeneous EtAc-MeOH azeotrope and a heterogeneous EtAc-water azeotrope. Synthesizing a feasible flowsheet structure that separates a given waste solvent mixture into its nearly pure constituents (EtAc, MeOH and water) then becomes challenging. The flowsheet structure, among other things, depends on the waste solvent composition. A flowsheet that is feasible for a dilute waste solvent mixture may become infeasible for a more concentrated waste solvent. Given that the flowsheet structure, once chosen, cannot be changed, and that wide variability in the waste solvent composition is expected, in this work we propose a “robust” flowsheet structure with guaranteed feasibility, regardless of the waste solvent composition. Such a “robust” flowsheet structure has the potential to significantly improve the economic viability of a waste solvent processing plant, as the same equipment can be used to separate the wide range of received waste solvents.

The key to the robust flowsheet design is the use of a liquid-liquid extractor (LLX) with recycled water as the solvent. For a sufficiently high water rate to the LLX, the raffinate composition is close to the EtAc-water edge (nearly MeOH free), on the liquid-liquid envelope and in the EtAc-rich distillation region. The raffinate is distilled to obtain a pure EtAc bottoms product, and the overhead vapour is condensed and decanted, with the organic layer refluxed into the column. The aqueous distillate is mixed with the MeOH-rich extract and stripped to obtain an EtAc-free MeOH-water bottoms. The overhead vapour is condensed and recycled back to the LLX. The MeOH-water bottoms is further distilled to obtain a pure MeOH distillate and pure water bottoms. A fraction of the bottoms is recirculated to the LLX as the solvent feed. Converged designs are obtained for an equimolar waste solvent composition as well as EtAc-rich, MeOH-rich and water-rich compositions to demonstrate the robustness of the flowsheet structure to large changes in the waste solvent composition.

References

[1] C. S. Slater, M. J. Savelski, W. A. Carole, and D. J. Constable, "Solvent use and waste issues," in Green Chemistry in the Pharmaceutical Industry, pp. 49-82, 2010.

[2] T. S. a. L. Z. a. C. H. a. Z. H. a. L. W. a. F. Y. a. H. X. a. S. Longyan, "Method for separating and recovering ethyl acetate and methanol". China Patent CN102746147B, May 2014.



Integrating offshore wind energy into the optimal deployment of a hydrogen supply chain: a case study in Occitanie

Melissa Cherrouk1,2, Catherine Azzaro-Pantel1, Marie Robert2, Florian Dupriez Robin2

1France Energies Marines / Laboratoire de Génie Chimique, France; 2France Énergies Marines, Technopôle Brest-Iroise, 525 Avenue Alexis de Rochon, 29280, Plouzané, France

The urgent need to mitigate climate change and reduce dependence on fossil fuels has led to the exploration of alternative energy solutions, with green hydrogen emerging as a key player in the global energy transition. Thus, the aim of this study is to assess the feasibility and competitiveness of producing hydrogen at sea using offshore wind energy, evaluating both economic and environmental perspectives.

Offshore wind energy offers several advantages for hydrogen production. These include access to water for electrolysis, potentially lower export costs for hydrogen compared to electricity, and the ability to smooth the variability of wind energy through hydrogen storage systems. Proper storage plays a crucial role in addressing the intermittency of wind power, making the hydrogen output more stable. This positions storage not only as an advantage but also as a key step for the successful coupling of offshore wind with hydrogen production. However, challenges remain, particularly regarding the capacity and cost of such storage solutions, alongside the high capital expenditures (CAPEX) and operational costs (OPEX) required for offshore systems.

This research explores the potential of offshore wind farms (OWFs) to contribute to hydrogen production by extending a techno-economic model based on Mixed-Integer Linear Programming (MILP). The model optimizes the number and type of production units, storage locations, and distribution methods, employing an optimization approach to determine the best hydrogen flows between regional hubs. The case study focuses on the Occitanie region in southern France, where hydrogen could be produced offshore from a 30 MW floating wind farm with three turbines located 30 km from the coast and transported via pipelines. Other energy sources may complement offshore wind energy to meet hydrogen supply demands. The study evaluates two scenarios: minimizing hydrogen production costs and minimizing greenhouse gas emissions over a 30-year period, divided into six five-year phases.
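
As a toy illustration of the flavour of MILP formulation described above (not the study's model; all unit names, costs, capacities, and demands are invented), binary build decisions can be coupled to continuous hydrogen flows as follows.

```python
# Toy supply-chain MILP sketch with PuLP: binary decisions select hydrogen
# production units, continuous variables carry production to a demand hub,
# and the objective minimizes annualized cost. All numbers are hypothetical.
import pulp

units = {   # name: (capacity [t/yr], annualized CAPEX [k euro], OPEX [k euro/t])
    "offshore_wind_electrolysis": (400, 3000, 4.0),
    "onshore_PV_electrolysis": (600, 2500, 3.5),
    "grid_electrolysis": (800, 1500, 5.5),
}
demand = 900  # t/yr

prob = pulp.LpProblem("h2_supply", pulp.LpMinimize)
build = {u: pulp.LpVariable(f"build_{u}", cat="Binary") for u in units}
prod = {u: pulp.LpVariable(f"prod_{u}", lowBound=0) for u in units}

prob += pulp.lpSum(units[u][1] * build[u] + units[u][2] * prod[u] for u in units)
prob += pulp.lpSum(prod[u] for u in units) >= demand
for u in units:
    prob += prod[u] <= units[u][0] * build[u]   # capacity available only if built

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for u in units:
    print(u, build[u].value(), prod[u].value())
```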

Initial findings show that, from an economic standpoint, the Levelized Cost of Hydrogen (LCOH) from offshore wind remains higher compared to traditional hydrogen production methods. However, the Global Warming Potential (GWP) of hydrogen produced from offshore wind ranks it among the most environmentally friendly options. Despite this, the volume of hydrogen produced in the current configuration does not meet the demand required for significant impact in Occitanie's hydrogen market, which points out the need to test higher power levels for the OWF and potential hybridization with other renewable energy sources.

The results underline the importance of future multi-objective optimization methods to better balance the economic and environmental trade-offs and make offshore wind a more competitive option for hydrogen production.

Reference:
Sofía De-León Almaraz, Catherine Azzaro-Pantel, Ludovic Montastruc, Marianne Boix, Deployment of a hydrogen supply chain by multi-objective/multi-period optimisation at regional and national scales, Chemical Engineering Research and Design, Volume 104, 2015, Pages 11-31, https://doi.org/10.1016/j.cherd.2015.07.005.



Robust Techno-economic Analysis, Life Cycle Assessment, and Quality by-Design of Three Alternative Continuous Pharmaceutical Tablet Manufacturing Processes

Shang Gao, Brahim Benyahia

Loughborough University, United Kingdom

This study presents a comprehensive comparison of three key downstream tableting manufacturing methods for pharmaceuticals: i) Dry Granulation (DG) through roller compaction, ii) Direct Compaction (DC), and iii) Wet Granulation (WG). First, integrated mathematical models of these downstream (drug product) processes were developed using gPROMS Formulated Products, along with data from the literature and our recent experimental work. The process models were designed and simulated to reliably capture the impact of different design options, process parameters, and material attributes. Uncertainty analysis was conducted using global sensitivity analysis to identify the critical process parameters (CPPs) and critical material attributes (CMAs) that most significantly influence the quality and performance of the final pharmaceutical tablets. These are captured by the critical quality attributes (CQAs), which include tablet hardness, dissolution rate, impurities/residual solvents, and content uniformity—factors crucial for ensuring product safety and efficacy. Based on the identified CPPs and CMAs, combined design spaces that guarantee the attainment of the targeted CQAs were identified and compared. Additionally, techno-economic analyses were conducted alongside life cycle assessments (LCA) based on the process simulation results and inventory data. The LCA provided an in-depth evaluation of the environmental impacts associated with each manufacturing method, considering aspects such as energy consumption, raw material usage, emissions, and waste generation across a cradle-to-gate approach. By integrating CQAs within the LCA framework, this study offers a holistic analysis that captures both the environmental sustainability and product quality implications of the three tableting processes. The findings aim to guide the selection of more sustainable and efficient manufacturing practices in the pharmaceutical industry, balancing trade-offs between environmental impact and product quality.

Keywords: Dry Granulation, Direct Compaction, Wet Granulation, Life Cycle Assessment (LCA), Techno-economic Analysis (TEA), Quality-by-Design (QbD)

Acknowledgements

The authors acknowledge funding from the UK Engineering and Physical Sciences Research Council (EPSRC), for Made Smarter Innovation – Digital Medicines Manufacturing Research Centre (DM2), EP/V062077/1.



Systematic Model Builder, Model-Based Design of Experiments, and Design Space Identification for A Multistep Pharmaceutical Process

Xuming Yuan, Ashish Yewale, Brahim Benyahia

Loughborough University, United Kingdom

Mathematical models of different processing units are usually established and optimized individually, even when these processes are meant to be combined in a sequential way in the real world, particularly in continuous operating plants. Although this traditional approach may help reduce complexity, it may deliver suboptimal solutions and/or overlook the interactions between the unit operations. Most importantly, it can dramatically increase the development time, waste, and experimental costs inherent to the raw materials, solvents, cleaning, etc. This study aims to develop a systematic approach to establish and optimize integrated mathematical models of interactive multistep processes. This methodology starts with suggesting various model candidates for different unit operations, initially based on prior knowledge. A combination of the model candidates for different unit operations is performed, which yields several integrated model candidates for the multistep process. A model discrimination based on structural identifiability analysis and model prediction performance (Yuan and Benyahia, 2024) reveals the best integrated model for the multistep process. Afterwards, the refinement of the model, consisting of estimability analysis (Bouchkira and Benyahia, 2023) and model-based design of experiments (MBDoE), is conducted to give the optimal experimental design that guarantees the most information-rich data. With the acquisition of the new experimental data, the reliability and robustness of the multistep mathematical model are dramatically enhanced. The optimized model is subsequently used to identify the design space of the multistep process, which delivers the optimal operating range of the critical process parameters (CPPs) that satisfy the targeted critical quality attributes (CQAs). A blending-tableting process of paracetamol is selected as a case study in this work. The methodology applies the prior knowledge from Kushner and Moore (2010), Nassar et al. (2021) and Puckhaber et al. (2022) to establish model candidates for this two-unit-operation process, where the effects of the lubrication in the blender as well as the composition and porosity of the tablet on the tablet tensile strength are taken into consideration. Model discrimination and model refinement are then performed to identify and improve the optimal integrated model for this two-step process, and the enhanced model is applied for the design space identification under specified CQA targets. The results confirm the effectiveness of the proposed methodology, which demonstrates its potential in achieving higher optimality for processes involving multiple unit operations.
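
As a side note on model discrimination in general (not the structural-identifiability and estimability workflow used in this study), candidate model structures fitted to the same data can be ranked with an information criterion such as AIC; the sketch below uses synthetic data and invented candidate forms.

```python
# Minimal sketch of ranking candidate model structures by AIC after a
# least-squares fit (synthetic data; the study's identifiability,
# estimability, and MBDoE steps are not reproduced here).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
x = np.linspace(0.1, 5, 40)
y = 2.0 * x / (1.0 + 0.8 * x) + rng.normal(0, 0.02, x.size)   # synthetic "truth"

candidates = {
    "linear":     lambda x, a: a * x,
    "saturating": lambda x, a, b: a * x / (1 + b * x),
    "power":      lambda x, a, b: a * x ** b,
}

def aic(model, x, y):
    popt, _ = curve_fit(model, x, y, p0=np.ones(model.__code__.co_argcount - 1))
    rss = np.sum((y - model(x, *popt)) ** 2)
    return x.size * np.log(rss / x.size) + 2 * len(popt)

scores = {name: aic(f, x, y) for name, f in candidates.items()}
print(min(scores, key=scores.get), scores)   # lowest AIC = preferred structure
```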



The role of long-term storage in municipal solid waste treatment systems: Multi-objective resources integration

Julie Dutoit1,2, Jaroslav Hemrle2, François Maréchal1

1École Polytechnique Fédérale de Lausanne (EPFL), Industrial Process Energy Systems Engineering (IPESE), Sion, 1951, Switzerland; 2Kanadevia Inova AG, Zürich, 8005, Switzerland

Projections for 2050 predict a significant increase in municipal solid waste (MSW) generation in every world region, whereas important discrepancies remain between the net-zero decarbonization targets of the Paris Agreement and the environmental performance of current waste treatment technologies. This creates an important area of research and development to improve the solutions, especially with regard to circular economy goals for material recovery and transitioning energy supply systems. As shown for plastic chemical recycling by Martínez-Narro et al., 2024, promising technologies may include energy-intensive steps which need integration with renewable energy to be environmentally viable. With growing intra-daily and seasonal variations of power availability due to the increasing share of renewable production, Demand Side Response (DSR) measures play a crucial role alongside energy storage systems in supporting power grid stability. In current research, DSR applicability to industrial process models is under-represented relative to the residential sector, with little attention paid to control strategies or input predictions in system analysis (Bosmann and Eser, 2016, Kirchem et al., 2020).

This contribution presents a framework to evaluate the potential of waste treatment systems to shift energy loads for better integration into the energy systems of industrial clusters or residential areas. The waste treatment system scenarios are modeled, simulated and optimized in a hybrid OpenModelica/Python framework, described by Dutoit et al., 2024. In particular, pinch analysis (Linnhoff and Hindmarsh, 1983) is used for the heat integration assessment. The multi-objective approach relies on key performance indicators covering process, economic and environmental impact aspects.
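
For readers unfamiliar with pinch analysis, the problem-table heat cascade behind utility targeting can be sketched as follows; the stream data and the 10 K minimum approach temperature are illustrative, not the case-study values.

```python
# Minimal problem-table cascade in the spirit of the pinch design method
# (Linnhoff & Hindmarsh, 1983), used here only to illustrate hot/cold
# utility targeting on illustrative stream data.
def pinch_targets(streams, dt_min=10.0):
    """streams: list of (T_supply, T_target, CP [kW/K]); returns (Q_hot, Q_cold) in kW."""
    shifted = []
    for ts, tt, cp in streams:
        if ts > tt:     # hot stream: shift down by dt_min/2
            shifted.append((ts - dt_min / 2, tt - dt_min / 2, cp, "hot"))
        else:           # cold stream: shift up by dt_min/2
            shifted.append((ts + dt_min / 2, tt + dt_min / 2, cp, "cold"))

    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    surpluses = []
    for hi, lo in zip(bounds[:-1], bounds[1:]):
        cp_net = 0.0
        for ts, tt, cp, kind in shifted:
            if min(ts, tt) <= lo and max(ts, tt) >= hi:   # stream spans the interval
                cp_net += cp if kind == "hot" else -cp
        surpluses.append(cp_net * (hi - lo))

    cascade, running = [0.0], 0.0
    for dh in surpluses:
        running += dh
        cascade.append(running)
    q_hot = max(0.0, -min(cascade))       # minimum hot utility
    q_cold = q_hot + cascade[-1]          # minimum cold utility
    return q_hot, q_cold

# Two hot and two cold streams (illustrative data, degC and kW/K).
streams = [(250, 40, 0.15), (200, 80, 0.25), (20, 180, 0.20), (140, 230, 0.30)]
print(pinch_targets(streams))   # expected: (7.5, 10.0)
```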

For the case study application, the core technologies included are waste sorting, waste incineration and post-combustion amine-based carbon capture, which are integrated with heat and power utilities to satisfy varying external demand from the power grid and the district heating network. The heterogeneous modeling of the waste flows makes it possible to define several design options for the material recovery facility for waste plastic fraction sorting, and scenarios are simulated to evaluate the system performance under the described control strategies. Results provide insights for optimal system operation and integration from an industrial perspective.

References

Bosmann, T., & Eser, E. J. (2016). Model-based assessment of demand-response measures – A comprehensive literature review. Renewable and Sustainable Energy Reviews, 57, 1637–1656. https://doi.org/10.1016/j.rser.2015.12.031.

Dutoit, J., Hemrle, J., Maréchal, F. (2024). Supporting Life-Cycle Impact Assessment Transparency in Waste Treatment Systems Simulation: A Decision-Support Methodology. In preparation.

Kirchem, D., Lynch, M. A., Bertsch, V., & Casey, E. (2020). Modelling demand response with process models and energy systems models: Potential applications for wastewater treatment within the energy-water nexus. Applied Energy, 260, 114321. https://doi.org/10.1016/j.apenergy.2019.114321

Linnhoff, B., & Hindmarsh, E. (1983). The pinch design method for heat exchanger networks. Chemical Engineering Science, 38(5), 745–763. https://doi.org/10.1016/0009-2509(83)80185-7

Martínez-Narro, G., Hassan, S., N. Phan, A. (2024). Chemical recycling of plastic waste for sustainable polymer manufacturing – A critical review. Journal of Environmental Chemical Engineering, 12, 112323. https://doi.org/10.1016/j.jece.2024.112323.



A Comparative Study of Feature Importance in Process Data: Neural Networks vs. Human Visual Attention

Rohit Suresh1, Babji Srinivasan1,3, Rajagopalan Srinivasan2,3

1Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics, Risk and Technology Indian Institute of Technology Madras, Chennai 600036, India

Artificial Intelligence (AI) and automation technologies have revolutionized the way many sectors operate. Specifically, in process industries and power plants, there is considerable scope for enhancing production and efficiency with AI through predictive maintenance, condition monitoring, inspection, and quality control. However, despite these advancements, human operators are the final decision-makers in such major safety-critical systems. Fostering collaboration between human operators and AI systems is the inevitable next step forward. A primary step towards achieving this goal is to capture the representation of information acquired by both human operators and AI-based systems in a mutually comprehensible way. This would aid in understanding their rationale behind the decision. AI-based systems with deep networks and complex architectures often achieve the best results. However, they are often disregarded by human operators due to a lack of transparency. While eXplainable AI (XAI) is an active research area that attempts to comprehend deep networks, understanding the human rationale behind decision-making is largely overlooked.

Several popular XAI techniques, such as local interpretable model-agnostic explanations (LIME) and Gradient-Weighted Class Activation Mapping (Grad-CAM), provide explanations via feature attribution. In the context of process monitoring, Bhakte et al. (2022) used the Shapley value framework with integrated gradients to estimate the marginal contribution of process variables in fault classification. One popular way to evaluate the explanations provided by various XAI algorithms is through human eye gaze tracking: participants' visual attention over the stimulus is estimated using eye tracking and compared with the results of XAI.

Eye tracking also has the potential to characterise the mental models of control room operators during different experimental scenarios (Shahab et al., 2022). In that work, participants acting as control room operators were given tasks of disturbance rejection based on alarm signals and process variable trends in an HMI. Extending that work, here we attempt to explain the human operator's rationale behind decision making through eye tracking. Participants' dynamic attention allocation over the stimulus is objectively captured using various eye gaze metrics, which are further used to extract the major causal factors that influenced the decision of participants. The effectiveness of the method is demonstrated with a case study. We conduct eye tracking experiments where participants are required to identify the fault in the process. During the experiment, images of the trend panel with trajectories of all major process variables captured at a specific instant are shown to the participants. The process variable responsible for the fault is objectively identified using operator knowledge. Our future work will focus on integrating this human rationale with XAI, which will pave the way for human-machine teaming.

References:
Bhakte, A., Pakkiriswamy, V. and Srinivasan, R., 2022. An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks. Chemical Engineering Science, 250, p.117373.
Shahab, M.A., Iqbal, M.U., Srinivasan, B. and Srinivasan, R., 2022. HMM-based models of control room operator's cognition during process abnormalities. 1. Formalism and model identification. Journal of Loss Prevention in the Process Industries, 76, p.104748.



Parameter Estimation and Model Comparison for Mixed Substrate Biomass Fermentation

Tom Vinestock, Miao Guo

King's College London, United Kingdom

Single cell protein (SCP) fermentation is an effective way of transforming carbohydrate-rich substrates into high-protein foodstuffs and is more sustainable than conventional animal-based protein production [1]. However, whereas cows and other ruminants can be fed agricultural residues, such as rice straw, SCP fermentations generally depend on high-purity single-substrate feedstocks as a carbon source, such as starch-derived glucose, which are expensive and compete directly with food crops.

Consequently, there is interest in switching to feedstocks derived from cheap agricultural lignocellulosic residues. However, treatment of such lignocellulosic residues produces a mixed feedstock, typically containing both glucose and xylose [2]. Accurate models of mixed-substrate growth are needed for fermentation decision-makers to understand the trade-offs associated with transitioning to the more sustainable lignocellulosic feedstocks. Such models are also a prerequisite for optimizing the operation and control of mixed-substrate fermentations.

In this work, recently published biomass and substrate concentration time-series data for glucose-xylose batch fermentation of F. venenatum [3] are used to estimate parameters for different unstructured models of diauxic growth. A Bayesian optimisation methodology is employed to identify the best parameters in each case. A novel model for diauxic growth with substrate cross-inhibition, mediated by variable enzyme production, is proposed, based on Nakamura et al. [4], but simplified to reduce the number of states and parameters, and hence improve identifiability and reduce overfitting. This model is shown to have a lower error on both the calibration and validation datasets than the model in Banks et al. [3], itself based on work by Vega-Ramon et al. [5], which models substrate cross-inhibition effects directly. The performance of the model proposed by Kompala and Ramkrishna [6], based on growth-optimised cellular resource allocation, is also evaluated.
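
To make the parameter-estimation task concrete, a heavily simplified two-substrate (glucose-xylose) batch model with glucose-repressed xylose uptake can be calibrated by least squares as sketched below; this is not the study's model structure, data, or Bayesian optimisation procedure, and all parameter values are illustrative.

```python
# Minimal sketch: calibrating a simplified unstructured diauxic growth model
# to synthetic batch data by least squares. Equations, parameters, and data
# are illustrative stand-ins for the models compared in the study.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, mu1, mu2, K1, K2, Ki, Y1, Y2):
    X, G, Xy = y
    r1 = mu1 * G / (K1 + G)                       # growth on glucose
    r2 = mu2 * Xy / (K2 + Xy) * Ki / (Ki + G)     # xylose uptake repressed by glucose
    return [(r1 + r2) * X, -r1 * X / Y1, -r2 * X / Y2]

def simulate(params, t_eval, y0=(0.1, 10.0, 5.0)):
    sol = solve_ivp(rhs, (0, t_eval[-1]), y0, t_eval=t_eval, args=tuple(params))
    return sol.y.T

t_obs = np.linspace(0, 30, 16)
true_params = [0.30, 0.15, 0.5, 0.8, 0.2, 0.5, 0.4]
data = simulate(true_params, t_obs) + np.random.default_rng(0).normal(0, 0.05, (16, 3))

res = least_squares(
    lambda p: (simulate(p, t_obs) - data).ravel(),
    x0=[0.2, 0.1, 1.0, 1.0, 0.5, 0.4, 0.4],
    bounds=(1e-6, 5.0),
)
print(res.x)    # fitted parameters (compare with true_params)
```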

This work could lead to improved modelling of mixed substrate fermentation, and therefore help address the technical barriers to wider-scale use of lignocellulose-derived feedstocks in fermentation. Future research could test the generalisability of the diauxic growth models considered using data from a continuous or fed-batch mixed substrate fermentation.

References

[1] Good Food Institute, “Fermentation: State of the industry report,” 2021.

[2] L. Qin, L. Liu, A.P. Zeng, and D. Wei, “From low-cost substrates to single cell oils synthesized by oleaginous yeasts,” Bioresource Technology, Dec. 2017.

[3] M. Banks, M. Taylor, and M. Guo, “High throughput parameter estimation and uncertainty analysis applied to the production of mycoprotein from synthetic lignocellulosic hydrolysates,” 2024.

[4] Y. Nakamura, T. Sawada, F. Kobayashi, M. Ohnaga, and M. Suzuki, “Stability analysis of continuous culture in diauxic growth,” Journal of Fermentation and Bioengineering, 1996.

[5] F. Vega-Ramon, X. Zhu, T. R. Savage, P. Petsagkourakis, K. Jing, and D. Zhang, “Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty,” Biotechnology and Bioengineering, Dec. 2021.

[6] D. S. Kompala, D. Ramkrishna, N. B. Jansen, and G. T. Tsao, “Investigation of bacterial growth on mixed substrates: Experimental evaluation of cybernetic models,” Biotechnology and Bioengineering, July 1986.



Identification of Suitable Operational Conditions and Dimensions for Supersonic Water Separation in Exhaust Gases from Offshore Turbines: A Case Study

Jonatas de Oliveira Souza Cavalcante1, Marcelo da Costa Amaral2, Fernando Luiz Pellegrini Pessoa1,3

1SENAI CIMATEC University Center, Brazil; 2Leopoldo Américo Miguez de Mello Research Center (CENPES); 3Federal University of Rio de Janeiro (UFRJ)

Due to space, weight, and energy efficiency constraints in offshore environments, the efficient removal of water from turbine exhaust gases is a crucial step to optimize operational performance in gas treatment processes. In this context, replacing conventional methods, such as molecular sieves, with supersonic separators emerges as a promising alternative. This work aims to determine the optimal operational conditions and dimensions for supersonic water separation in turbine exhaust gases on offshore platforms. The simulation was conducted using a unit operation extension in Aspen HYSYS, based on the compositions of exhaust gases from Brazilian pre-salt wells. Parameters such as operational conditions, separator dimensions, and shock Mach number were optimized to maximize process efficiency and minimize separator size. The results indicated the near-complete removal of water, demonstrating that supersonic separation technology, in addition to being compact, offers a viable and efficient alternative for water removal from exhaust gases, particularly in space-constrained environments.



On optimal hydrogen production pathway selection using the SECA multi-criteria decision-making method

Caroline Kaitano, Thokozani Majozi

University of the Witwatersrand, South Africa

The increasing global population has resulted in the scramble for more energy. Hydrogen offers a new revolution to energy systems worldwide. Considering its numerous uses, research interest has grown to seek sustainable production methods. However, hydrogen production must satisfy three factors, i.e. energy security, energy equity, and environmental sustainability, referred to as the energy trilemma. Therefore, this study seeks to investigate the sustainability of hydrogen production pathways through the use of a Multi-Criteria Decision-Making model. In particular, a modified Simultaneous Evaluation of Criteria and Alternatives (SECA) model was employed for the prioritization of 19 options for hydrogen production. This model simultaneously determines the overall performance scores of the 19 options and the objective weights for the energy trilemma in a South African context. The results obtained from this study showed that environmental sustainability has the highest objective weight value of 0.37, followed by energy security with a value of 0.32 and energy equity with the lowest at 0.31. Of the 19 options selected, steam reforming of methane with carbon capture and storage was found to have the highest overall performance score, considering the trade-offs in the energy trilemma. This was followed by steam reforming of methane without carbon capture and storage and the autothermal reforming of methane with carbon capture and storage. The results obtained in this study will potentially pave the way for optimally producing hydrogen from different feedstocks while considering the energy trilemma and serve as a basis for further research in sustainable process engineering.



On the role of Artificial Intelligence in Feature oriented Multi-Criteria Decision Analysis

Heyuan Liu1,2, Yi Zhao1, Francois Marechal1

1Industrial Process and Energy Systems Engineering (IPESE), Ecole Polytechnique Fédérale de Lausanne, Sion, Switzerland; 2École Polytechnique, France

In industrial applications, balancing economic and environmental goals is crucial amidst challenges like climate change. To address conflicting objectives, tools like multi-objective optimization (MOO) and multi-criteria decision analysis (MCDA) are utilized. MOO generates a range of viable solutions, while MCDA helps select the most suitable option considering factors like profitability, environmental impact, safety, and efficiency. These tools aid in making informed decisions amidst complex trade-offs and uncertainties.

In this study, we propose a novel approach for MCDA using advanced machine learning techniques and apply it to analyze decarbonization solutions for a typical European refinery. First, a hybrid dimensionality reduction method combining AutoEncoders and Principal Component Analysis (PCA) is developed to reduce high-dimensional data while retaining key features. The effectiveness of dimensionality reduction is demonstrated by clustering the reduced data and mapping the clusters back to the original high-dimensional space. The high clustering quality scores indicate that spatial distribution characteristics were well preserved. Furthermore, geometric analysis techniques, such as Intrinsic Shape Signatures (ISS), Harris Corner Detection, and Mesh Saliency, further refine the identification of typical configurations. Specifically, 15 typical solutions identified by the ISS method are used as baselines to represent distinct regions in the solution space. These solutions serve as a reference set for further comparison.
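
As a small illustration of the dimensionality-reduction and clustering-quality idea only (the autoencoder stage, the geometric saliency methods, and the refinery dataset are not reproduced), reduced solutions can be clustered and the preservation of structure checked with a silhouette score on synthetic data.

```python
# Minimal sketch: reduce a synthetic high-dimensional solution set with PCA,
# cluster it, and check that spatial structure is preserved via silhouette
# scores in the reduced and original spaces. Data are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic "solutions": 3 latent groups embedded in 50 dimensions.
centers = rng.normal(size=(3, 50))
solutions = np.vstack([c + 0.1 * rng.normal(size=(200, 50)) for c in centers])

reduced = PCA(n_components=5).fit_transform(solutions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

print("silhouette (reduced space):", silhouette_score(reduced, labels))
print("silhouette (original space):", silhouette_score(solutions, labels))
```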

Building upon this reference set, we utilize Large Language Models (LLMs) to further enhance the decision-making process. First, LLMs are employed to generate and refine ranking criteria for evaluating the identified solutions. We employ an LLM with an iterative self-update mechanism to dynamically adjust weighting strategies, enhancing decision-making capabilities in complex environments. To address the input size limitations encountered in the problem, we apply heuristic design approaches that effectively manage and optimize the information. Additionally, effective prompt engineering techniques are integrated to improve the model's reasoning and adaptability.

In addition to ranking, LLM technology provides comprehensive and interpretable explanations for the selected solutions. This includes breaking down the criteria used for each decision, clarifying trade-offs between competing objectives, and offering insights into how different configurations perform across various key performance indicators. These explanations help stakeholders better understand the rationale behind the chosen solutions, enabling more informed decision-making in practical applications.



Optimizing CO2 Utilization in Reverse Water-Gas Shift Membrane Reactors with Parametric PINNs

Zahir Aghayev1,2, Zhaofeng Li3, Michael Patrascu3, Burcu Beykal1,2

1Department of Chemical & Biomolecular Engineering, University of Connecticut, Storrs, CT 06269, USA; 2Center for Clean Energy Engineering, University of Connecticut, Storrs, CT 06269, USA; 3The Wolfson Department of Chemical Engineering, Technion – Israel Institute of Technology, Haifa 3200003, Israel

With atmospheric CO₂ levels reaching a record 426.91 ppm in June 2024, the urgency for innovative carbon capture and utilization (CCU) strategies to reduce emissions and repurpose CO₂ into valuable products has become even more critical [1]. One promising solution is the reverse water-gas shift (RWGS) reaction, which transforms CO₂ and hydrogen—produced through renewable energy-powered electrolysis—into carbon monoxide, a key precursor for synthesizing fuels and chemicals. By integrating membrane reactors that selectively remove water vapor, the process shifts the equilibrium forward, resulting in higher CO₂ conversion and CO yield at lower temperatures, in accordance with Le Chatelier's principle. However, modeling this intensified system remains challenging due to the complex, nonlinear interaction between reaction kinetics and membrane transport.

In this study, we developed a physics-informed neural network (PINN) model that integrates first-principles physics with machine learning to model the RWGS process within a membrane reactor. This approach embeds governing physical laws into the network's architecture, reducing the computational burden typically associated with solving highly nonlinear ordinary differential equations (ODEs), while maintaining both accuracy and interpretability [2]. Our model demonstrated robust predictive performance, achieving an R² value exceeding 0.95, successfully capturing flow rate changes and reaction dynamics along the reactor length. Using this validated PINN model, we performed data-driven optimization, identifying operational conditions that maximized CO₂ conversion efficiency and reaction yield [3-6]. This hybrid modeling approach enhances prediction accuracy and optimizes the reactor conditions, offering a scalable solution for industries integrating renewable energy into chemical production and reducing carbon emissions. Our findings demonstrate the potential of advanced modeling to intensify CO₂ utilization processes, with significant implications for sustainable chemical production and energy systems.
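
As a schematic illustration of the data-driven optimization step only, a fast surrogate standing in for the trained PINN can be searched with a bounded global optimizer; the surrogate below is a made-up placeholder, not the RWGS membrane-reactor model, and the variables and bounds are invented.

```python
# Minimal sketch of surrogate-based operating-condition optimization:
# a placeholder function stands in for the trained PINN, and a bounded
# optimizer searches for the conditions maximizing predicted conversion.
import numpy as np
from scipy.optimize import differential_evolution

def surrogate_conversion(x):
    """Placeholder for the trained surrogate: x = [temperature K, H2/CO2 ratio]."""
    T, ratio = x
    return 0.9 / (1 + np.exp(-(T - 900) / 60)) * (1 - np.exp(-ratio))

result = differential_evolution(
    lambda x: -surrogate_conversion(x),        # maximize conversion
    bounds=[(700, 1100), (1.0, 4.0)],
    seed=0,
)
print("best conditions:", result.x, "predicted conversion:", -result.fun)
```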

References

  1. NOAA Global Monitoring Laboratory. (2024). Trends in atmospheric carbon dioxide [online]. Available at: https://gml.noaa.gov/ccgg/trends/ [Accessed 10/13/2024].
  2. Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378, pp.686-707.
  3. Beykal, B. and Pistikopoulos, E.N., 2024. Data-driven optimization algorithms. In Artificial Intelligence in Manufacturing (pp. 135-180). Academic Press.
  4. Boukouvala, F. and Floudas, C.A., 2017. ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems. Optimization Letters, 11, pp.895-913.
  5. Beykal, B., Aghayev, Z., Onel, O., Onel, M. and Pistikopoulos, E.N., 2022. Data-driven Stochastic Optimization of Numerically Infeasible Differential Algebraic Equations: An Application to the Steam Cracking Process. In Computer Aided Chemical Engineering (Vol. 49, pp. 1579-1584). Elsevier.
  6. Aghayev, Z., Voulanas, D., Gildin, E., Beykal, B., 2024. Surrogate-Assisted Optimization of Highly Constrained Oil Recovery Processes Using Classification-Based Constraint Modeling. Industrial & Engineering Chemistry Research (submitted).


Modeling, simulation and optimization of a carbon capture process through a fluidized TSA column

Eduardo dos Santos Funcia1, Yuri Souza Beleli1, Enrique Vilarrasa Garcia2, Marcelo Martins Seckler1, José Luís de Paiva1, Galo AC Le Roux1

1Polytechnic School of the University of Sao Paulo, Brazil; 2Federal University of Ceará, Brazil

Carbon capture technologies have recently emerged as a way to mitigate climate change and global warming by removing carbon dioxide from the atmosphere. Furthermore, by removing carbon dioxide from biomass-originated flue gases, an energy process with a negative carbon footprint can be achieved. Among carbon capture processes, fluidized temperature swing adsorption (TSA) columns are a promising low-pressure alternative, where carbon dioxide flowing upwards is exothermally adsorbed onto a fluidized solid sorbent flowing downwards, and later endothermically extracted at higher temperatures while regenerating the sorbent for recirculation. Although an interesting venture, the TSA process has so far been developed only at small scale and remains to be scaled up to become an industrial reality. This work aims to model, simulate and optimize a TSA multi-stage equilibrium system in order to obtain a conceptual design for future process scale-up. A mathematical model was developed for adsorption using an approach that makes it easy to extend the model to various configurations. The model was extended to include multiple stages, each with a heat exchanger, and was also coupled to the desorption operation. Each column, adsorption and desorption, includes one external heat exchanger at the bottom for a preliminary heat load of the inward gas flow. The system also included a heat exchanger in the recirculating solid sorbent flow, before the regenerated solid enters the top of the adsorption column. The model is based on molar and energy balances, coupled to pressure drops in a fluidized bed designed to operate close to the minimum fluidization velocity (calculated through semi-empirical correlations), and to thermodynamics of adsorption equilibrium of a mixture of carbon dioxide and nitrogen in solid sorbents. The Toth equilibrium isotherm was used with parameters experimentally obtained in a previous work (which suggested that the heterogeneity parameter for nitrogen should be fixed at unity). The complete fluidized TSA adsorption/desorption process has been optimized to minimize energy, adsorbent and operating costs, as well as equipment investment and installation costs, considering equilibrium in each fluidized bed stage. The optimal configuration for heat exchangers is determined and a unit cost for carbon dioxide capture was estimated. It was found that 2 stages are sufficient for an effective removal of carbon dioxide in the adsorption column, while at least 5 stages are necessary to meet the captured carbon specification of 95% molar purity. It was also possible to conclude that not all stages in the columns needed heat exchangers, with some heat loads being set to zero during the optimization. The pressure drop for each stage was estimated to be smaller than 0.072 bar for a bed 1 m high, and air velocity was 40-45 cm/s (the minimum fluidization velocity was 10-11 cm/s), with low particle Reynolds numbers of about 17, which indicates the system readily fluidizes. These findings show that the methodology developed here is useful for guiding the conceptual design of fluidized TSA processes for carbon capture.
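
For illustration, two of the building blocks named above, a Toth adsorption isotherm and a semi-empirical minimum-fluidization-velocity correlation (here the well-known Wen & Yu form), can be sketched as follows; the parameter values are illustrative and are not the experimentally fitted values used in the study.

```python
# Minimal sketch of two standard building blocks: a Toth isotherm and the
# Wen & Yu correlation for minimum fluidization velocity. Parameter values
# are illustrative placeholders.
import numpy as np

def toth_loading(p_bar, q_sat=3.0, b=1.2, t=0.6):
    """CO2 loading [mol/kg] from partial pressure [bar] via the Toth isotherm."""
    return q_sat * b * p_bar / (1.0 + (b * p_bar) ** t) ** (1.0 / t)

def u_mf_wen_yu(d_p, rho_p, rho_g=1.2, mu=1.8e-5, g=9.81):
    """Minimum fluidization velocity [m/s] from the Wen & Yu correlation."""
    ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2          # Archimedes number
    re_mf = np.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * d_p)

print(toth_loading(0.15))                      # loading at 0.15 bar CO2
print(u_mf_wen_yu(d_p=200e-6, rho_p=800.0))    # u_mf for 200 micron particles
```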



Unlocking Process Dynamics: Interpretable PDE Solutions via Symbolic Regression

Benjamin G. Cohen, Burcu Beykal, George M. Bollas

University of Connecticut, USA

Physics-informed symbolic regression (PISR) offers an innovative approach to automatically learn explicit, analytical approximate solutions to partial differential equations (PDEs). Chemical processes often involve dynamics that PDEs can effectively capture, providing valuable insights for engineers and scientists to improve process design and control. Traditionally, solving PDEs requires expertise in analytical methods or costly numerical schemes. However, with the advent of AI/ML, tools like physics-informed neural networks (PINNs) have emerged, learning solutions to PDEs by constraining neural networks to satisfy differential equations and boundary information. Applying PINNs in safety-critical systems is challenging due to the large number of neural network parameters and their black-box nature.

To address these challenges, we explore the effect of replacing the neural network in PINNs with a symbolic regressor to create PISR. Guided by a carefully selected information-theoretic loss function that balances model agreement with differential equations and boundary information against identifiability, PISR can learn approximate solutions to PDEs that are symbolic rather than neural network approximations. This approach yields concise, clear analytical approximate solutions that balance model complexity and fit quality. Using an open-source symbolic regression package in Julia, we demonstrate PISR’s efficacy by learning approximate solutions to several PDEs common in process engineering and compare the learned representations to those obtained using PINNs. The PISR models, when compared to the PINN models, are straightforward, easy to understand, and contain very few parameters, making them ideal for sensitivity analysis and ensuring robust process design and control.
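To make the idea concrete, the sketch below checks a candidate closed-form expression against the residual of a simple PDE on collocation points, which is the kind of physics term a PISR loss combines with boundary mismatch and a complexity measure. The chosen expression, the heat equation, and the loss terms are illustrative assumptions, not the authors' information-theoretic formulation or their Julia implementation.

```python
import sympy as sp
import numpy as np

# Candidate closed-form solution u(x, t), as a symbolic regressor might propose it
x, t = sp.symbols("x t")
u = sp.exp(-t) * sp.sin(sp.pi * x)

# Physics residual for the 1-D heat equation u_t = alpha * u_xx
alpha = 1 / sp.pi**2
residual = sp.diff(u, t) - alpha * sp.diff(u, x, 2)

# Score the candidate on collocation points (for this exact solution the
# residual is identically zero, so the physics loss vanishes)
res_fn = sp.lambdify((x, t), residual, "numpy")
xs, ts = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
physics_loss = np.mean(np.square(res_fn(xs, ts)))

# Boundary mismatch at x = 0 and x = 1 (homogeneous Dirichlet conditions assumed)
bc_fn = sp.lambdify((x, t), u, "numpy")
boundary_loss = np.mean(np.square(bc_fn(np.array([0.0, 1.0]), np.zeros(2))))

complexity = sp.count_ops(u)   # crude proxy for expression complexity
print(physics_loss, boundary_loss, complexity)
```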



Eco-Designing Pharmaceutical Supply Chains: A Process Engineering Approach to Life Cycle Inventory Generation

Indra CASTRO VIVAR1, Catherine AZARO-PANTEL1, Alberto A. AGUILAR LASSERRE2, Fernando MORALES-MENDOZA3

1Laboratoire de Génie Chimique, Université de Toulouse, CNRS, INPT, UPS, Toulouse, France; 2Tecnologico Nacional de México, Instituto Tecnológico de Orizaba, México; 3Universidad Autónoma de Yucatán, Facultad de Ingeniería Química, Mérida, Yucatán, México

The environmental impact of Active Pharmaceutical Ingredients (APIs) is an increasingly significant research focus, as global pharmaceutical manufacturing practices face heightened scrutiny regarding sustainability. Paracetamol (acetaminophen), one of the most extensively used APIs, requires closer examination due to its current production practices. Most paracetamol is manufactured in large-scale facilities in India and China, with production capacities ranging from 2,000 to 40,000 tons annually.

Offshoring pharmaceutical manufacturing, traditionally a cost-saving strategy, has increased supply chain complexity and dependency on foreign API sources. This reliance has made Europe’s pharmaceutical production vulnerable, especially during global crises or geopolitical tensions, such as the disruptions seen during the COVID-19 pandemic. Consequently, there is growing interest in reshoring pharmaceutical production to Europe. The European Pharmaceutical Strategy (2020)[1] advocates decentralizing production to create shorter, more sustainable value chains. This move seeks to enhance access to high-quality medicines while minimizing the environmental impacts of long-distance transportation.

In France, the government has introduced measures to relocate the production of 50 essential drugs as part of a re-industrialization plan to address medication shortages. Paracetamol sales were restricted in 2022 and early 2023 due to supply chain issues, leading to the relocation of several manufacturing plants.

Yet, pharmaceuticals present unique challenges when assessed using Life Cycle Assessment (LCA), mainly due to a lack of comprehensive life cycle inventory (LCI) data. This scarcity is particularly evident for API synthesis (upstream) and downstream phases such as usage and end-of-life management.

This study aims to apply LCA methodology to evaluate various paracetamol API supply chain scenarios, focusing on the potential benefits of reshoring production to France. A major contribution of this work is the generation of LCI data for paracetamol production through process engineering and chemical process modeling. Aspen Plus software was used to model the paracetamol API manufacturing process, including mass and energy balances. This approach ensures that the datasets generated are robust and validated against available reference data. SimaPro software was used to conduct the LCA using the EcoInvent database and the Environmental Footprint (EF) impact assessment method.

One key finding is the reduction of greenhouse gas emissions for the selected functional unit (FU) of 1 kg of API. Significant differences in electricity use and steam heat generation were observed. According to the EF database, electricity in India results in emissions of 83 g CO₂ eq, while steam heat generation emits 1.38 kg CO₂ eq per FU. In contrast, French emissions are significantly lower, with electricity contributing 5 g CO₂ eq and steam heat generating 1.18 kg CO₂ eq per FU. These results highlight the environmental advantages of relocating production to regions with decarbonized power grids.
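For orientation, the quoted utility figures can be combined into a rough per-FU total; this back-of-the-envelope check covers only the two utility contributions mentioned above and ignores all other life-cycle stages.

```python
# Per-FU (1 kg API) comparison of the utility-related emissions quoted above
emissions = {
    "India":  {"electricity_kgCO2eq": 0.083, "steam_kgCO2eq": 1.38},
    "France": {"electricity_kgCO2eq": 0.005, "steam_kgCO2eq": 1.18},
}
for country, e in emissions.items():
    total = e["electricity_kgCO2eq"] + e["steam_kgCO2eq"]
    print(f"{country}: {total:.3f} kg CO2 eq per kg API (utilities only)")
# India:  1.463 kg CO2 eq per kg API
# France: 1.185 kg CO2 eq per kg API  -> roughly 19% lower for these two contributions
```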

This study underscores the value of process modeling in generating LCI data for pharmaceuticals and enhances the understanding of the environmental benefits of reshoring paracetamol manufacturing. The developed methodology can be applied to other chemicals with limited LCI data, supporting more sustainable decision-making in the pharmaceutical sector's eco-design, particularly during re-industrialization efforts.

[1] European Commission, Communication from the Commission: A New Industrial Strategy for Europe, vol. 102, COM(2020), pp. 1-17.



Safe Reinforcement Learning with Lyapunov-Based Constraints for Control of an Unstable Reactor

José Rodrigues Torraca Neto1, Argimiro Resende Secchi1,2, Bruno Didier Olivier Capron1, Antonio del-Rio Chanona3

1Chemical and Biochemical Process Engineering Program/School of Chemistry, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 2Chemical Engineering Program/COPPE, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 3Sargent Centre for Process Systems Engineering, Imperial College London

Safe reinforcement learning (RL) is essential for real-world applications with uncertainty and safety constraints, such as autonomous robotics and chemical reactors. Recent advances (Brunke et al., 2022) focus on integrating control theory with RL to ensure safety during learning and deployment. These approaches include robust RL frameworks, constrained Markov decision processes (CMDPs), and safe exploration strategies. We propose a novel approach where RL algorithms—PPO (Schulman et al., 2017), SAC (Haarnoja et al., 2018), DDPG (Lillicrap et al., 2016), and TD3 (Fujimoto et al., 2018)—are trained with Lyapunov-based constraints to ensure stability. As our reward function, −(x-xSP)², inherently generates negative rewards, we applied penalties to positive critic values and decreases in critic estimates over time.

For off-policy algorithms (SAC, DDPG, TD3), penalties were applied directly to Q-values, discouraging non-negative values and preventing unexpected decreases. For the on-policy algorithm (PPO), these penalties were applied directly to the value function. DDPG used Ornstein-Uhlenbeck noise for exploration, while TD3 used Gaussian noise, with optimized parameters. Hyperparameters, including safe RL constraints, were tuned using Optuna (Akiba et al., 2019), optimizing learning rates, network architectures, and penalty weights.
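A minimal sketch of how such penalties might be attached to a critic loss is shown below (PyTorch-style, illustrative only; the exact penalty form, weights and targets used in the study are not reproduced here).

```python
import torch
import torch.nn.functional as F

def safe_critic_loss(q_t, q_tp1, q_target, w_pos=1.0, w_dec=1.0):
    """Illustrative critic loss with the two penalties described above (a sketch,
    not the authors' exact formulation):
      - TD regression term towards the bootstrapped target,
      - penalty on positive Q-values (rewards -(x - x_sp)^2 are never positive),
      - Lyapunov-like penalty on decreases of the critic estimate along a trajectory.
    q_t, q_tp1: critic outputs at consecutive time steps; q_target: TD target."""
    td_loss = F.mse_loss(q_t, q_target)
    positive_penalty = torch.relu(q_t).pow(2).mean()          # active only when Q > 0
    decrease_penalty = torch.relu(q_t - q_tp1).pow(2).mean()  # active only when Q drops
    return td_loss + w_pos * positive_penalty + w_dec * decrease_penalty
```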

Our method was tested on an unstable Continuous Stirred Tank Reactor (CSTR) under random disturbances. Despite challenges posed by disturbances, the Safe RL approach was evaluated for resilience under dynamic conditions. A cosine annealing schedule dynamically adjusted learning rates, ensuring stable training. Base RL algorithms (without safety constraints) were trained on ten parallel environments with disturbances and compared to a Nonlinear Model Predictive Control (NMPC) benchmark. SAC performed best, achieving an optimality gap of 7.73×10⁻⁴ on the training pool and 3.65×10⁻⁴ on new disturbances. DDPG and TD3 exhibited instability due to temperature spikes without safety constraints.

Safe RL significantly improved SAC’s performance, reducing the optimality gap to 2.88×10⁻⁴ on the training pool and 2.36×10⁻⁴ on new disturbances, nearing NMPC performance. Safe RL also reduced instability in DDPG and TD3, preventing temperature spikes and reducing policy noise, though it increased offset from the setpoint, resulting in larger optimality gaps. Despite this tradeoff, Safe RL made these algorithms more reliable, considering unseen disturbances. Overall, Safe RL brought SAC close to optimality across disturbance conditions while it mitigated instability in DDPG and TD3 at the cost of higher setpoint offsets.

References:
L. Brunke et al., 2022, "Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning," Annual Review of Control, Robotics, and Autonomous Systems, Vol. 5, pp. 411–444.
J. Schulman et al., 2017, "Proximal Policy Optimization Algorithms," arXiv:1707.06347.
T. Haarnoja et al., 2018, "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor," Proceedings of the 35th ICML, Vol. 80, pp. 1861-1870.
T.P. Lillicrap et al., 2016, "Continuous Control with Deep Reinforcement Learning," arXiv:1509.02971.
S. Fujimoto et al., 2018, "Addressing Function Approximation Error in Actor-Critic Methods," Proceedings of the 35th ICML, Vol. 80, pp. 1587-1596.
T. Akiba et al., 2019, "Optuna: A Next-generation Hyperparameter Optimization Framework," Proceedings of the 25th ACM SIGKDD, pp. 2623-2631.



A two-level model to assess the economic feasibility of renewable urea production from agricultural wastes

Diego Costa Lopes, Moisés Teles Dos Santos

Universidade de São Paulo, Brazil

Agroindustrial wastes can be an abundant source of chemicals, biofuels and energy. Based on this assumption, this work presents a two-level modelling (process models and a supply chain model) and optimization framework for an integrated biorefinery system to convert agricultural residues into renewable urea via gasification routes, with a possible additional hydrogen input from electrolysis. A process model of the gasification process was developed in Aspen Plus® to identify key performance indicators, such as energy consumption and relative urea yields, for different biomasses and operating conditions; these key process data were then used in a mixed-integer linear programming (MILP) model designed to identify the optimal combination of energy source, technological route of urea production and plant location that maximizes the net present value of the system. The gasification step was modeled with an equilibrium approach. Besides the gasifier, the plant comprises a conditioning system to adjust syngas composition, CO2 capture, and ammonia and urea synthesis.
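A toy facility-location MILP in the spirit of the second (supply chain) level is sketched below with PuLP; the regions, routes, NPV coefficients and biomass figures are invented for illustration and bear no relation to the Brazilian data used in the study.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Hypothetical sets and coefficients; the actual model covers 558 micro-regions,
# several routes and detailed cost terms.
regions = ["R1", "R2", "R3"]
routes = ["O2_gasification", "air_gasification", "electrolysis"]
npv_coeff = {("R1", "O2_gasification"): 90, ("R1", "air_gasification"): 70,
             ("R1", "electrolysis"): 60, ("R2", "O2_gasification"): 80,
             ("R2", "air_gasification"): 85, ("R2", "electrolysis"): 55,
             ("R3", "O2_gasification"): 40, ("R3", "air_gasification"): 45,
             ("R3", "electrolysis"): 75}
biomass_supply = {"R1": 1.0e6, "R2": 0.6e6, "R3": 0.3e6}   # t/y available nearby
biomass_need = {"O2_gasification": 0.8e6, "air_gasification": 0.7e6, "electrolysis": 0.2e6}

prob = LpProblem("renewable_urea_siting", LpMaximize)
y = LpVariable.dicts("build", [(r, k) for r in regions for k in routes], cat=LpBinary)

prob += lpSum(npv_coeff[r, k] * y[r, k] for r in regions for k in routes)   # NPV proxy
prob += lpSum(y[r, k] for r in regions for k in routes) == 1                # build one plant
for r in regions:                                                           # feedstock limit
    for k in routes:
        prob += biomass_need[k] * y[r, k] <= biomass_supply[r]

prob.solve()
chosen = [(r, k) for r in regions for k in routes if y[r, k].value() == 1]
print("Selected (region, route):", chosen)
```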

Based on the model’s results, three technological routes (oxygen gasification, air gasification and water electrolysis) were chosen as the most promising, and 6 different biomasses (rice husks, coffee husks, corn stover, soybean straw, sugarcane straw and bagasse) were identified as representative of the Brazilian agricultural scenario. The country was divided into 5569 cities and 558 micro-regions. Each region's agricultural production was evaluated to estimate biomass supply and urea demand. Electricity prices were also considered based on current tariffs. A MILP model was developed to maximize the net present value, combining energy sources, location and route as decision variables, while respecting constraints on biomass supply, urea demand and transport between regions. The model was applied to the whole country at the micro-region level. It was found that the Assis micro-region in São Paulo state is the optimal location for the plant, leveraging the proximity of large sugarcane and soybean crops and electricity prices that are cheaper than in the rest of the country, with a positive NPV for a plant producing 80 tons of urea per hour. Biomass dominates the total costs of the plant (60%), followed by power (25%) and urea transport (10%). Biomass supply was not found to be a major constraint in any region; urea demand is the main limiting factor, with more than 30 micro-regions needed to consume the plant’s production, highlighting the need for close proximity between production and consumption to minimize logistic costs.

The model was also restricted to other regions of Brazil to evaluate local feasibility. The north and northeast regions were not found to be viable locations for a plant, with NPVs close to 0, given the lower biomass supplies and urea demands and the larger distances between micro-regions. Meanwhile, in the southern and midwest regions, the large availability of soybean residues also creates good conditions for a renewable urea plant, with NPVs of US$ 105 mil and US$ 103 mil, respectively. The results indicate the feasibility of producing renewable urea from agricultural wastes and the importance of a two-level approach to assess the economic performance of the entire system.



Computer-based Chemical Engineering Education for Green and Digital Transformation

Zorka Novak Pintarič, Miloš Bogataj, Zdravko Kravanja

Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, SI-2000 Maribor, Slovenia

The mission of Chemical Engineering Education, particularly Computer-Aided Chemical Engineering, is to equip students with the knowledge and skills they need to drive the green and digital transformation. This involves integrating Chemical Engineering and Process Systems Engineering (PSE) within the Bologna 3-cycle study system. The EFCE Bologna recommendations for Chemical Engineering programs will be reviewed, with a focus on PSE topics, especially those relevant to the green and digital transformation. Key challenges in introducing sustainability and digital knowledge will be highlighted, along with the necessary development of teaching methods and tools.

While chemical engineering programs contain elements of green and digital engineering, their systematic integration into core subjects is limited. The analysis of our study program shows that only a few principles of green engineering, such as maximizing efficiency and energy flow integration, are explicitly addressed. Other principles are indirectly presented through case studies but lack structured inclusion. Digital skills in the current curricula focus mainly on spreadsheets for data processing, basic programming, and process simulation. Green and digital content is more extensively explored in project work and advanced studies, with elective courses and final theses offering deeper engagement.

Artificial Intelligence (AI), as a critical element of digitalization, will significantly impact chemical engineering education, influencing both teaching methods and process optimization. However, the interdisciplinary complexity of AI poses challenges. Students need a solid foundation in programming, data science, and statistics to master AI tools, making its gradual introduction essential. The question therefore arises as to how AI can be effectively integrated into chemical engineering education by striking a balance between technical skills and critical thinking, fostering creativity and ethical awareness while preserving and not sacrificing the engineering fundamentals.

Given the rapid pace of change in the industry, chemical engineering education needs to be reformed, particularly at the bachelor's and master's levels. Core challenges include systematically integrating essential green and digital topics into syllabi, introducing new courses like AI and data science, modernizing textbooks with numerical examples, and providing teachers with training to keep pace with technological advancements.



Development of a Hybrid Model for the Paracetamol Batch Dissolution in Ethanol Using Universal Differential Equations

Fernando Arrais Romero Dias Lima1,2, Amyr Crissaff Silva1, Marcellus G. F. de Moraes3,4, Amaro G. Barreto Jr.1, Argimiro R. Secchi1,4, Idelfonso Nogueira2, Maurício B. de Souza Jr.1,4

1School of Chemistry, EPQB, Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco E, 21941-914, Rio de Janeiro, RJ – Brazil; 2Chemical Engineering Department, Norwegian University of Science and Technology, Trondheim, 793101, Norway; 3Instituto de Química, Rio de Janeiro State University (UERJ), Rua São Francisco Xavier, 524, Maracanã, Rio de Janeiro, RJ, 20550-900, Brazil; 4PEQ/COPPE – Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco G, G115, 21941-914, Rio de Janeiro, RJ – Brazil

Crystallization is a relevant process in the pharmaceutical industry for product purification and particle production. An efficient crystallization is characterized by crystals produced with the desired attributes; therefore, modeling this process is a key point in achieving this goal. In this sense, the objective of this work is to propose a hybrid model to describe paracetamol dissolution in ethanol. The universal differential equations methodology is considered in the development of this model, using a neural network to predict the dissolution rate combined with the population balance equations to calculate the moments of the crystal size distribution (CSD) and the concentration. The model was developed using the experimental batches reported by Kim et al. [1]. The dataset is composed of concentration measurements obtained using attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy. The objective function of the optimization problem is to minimize the difference between the experimental and the predicted concentration. The hybrid model could efficiently predict the concentration compared to the experimental measurements. Moreover, the hybrid approach made predictions of the moments of the CSD similar to those of the population balance model proposed by Kim et al. [1], being able to successfully calculate batches not considered in the training dataset. In addition, the performance of the hybrid model was similar to the phenomenological one based on population balances, but without the necessity of accounting for solubility information. Therefore, the universal differential equations approach is presented as an efficient methodology for modeling crystallization processes with limited information.

1. Kim, Y., Kawajiri, Y., Rousseau, R.W., Grover, M.A., 2023. Modeling of nucleation, growth, and dissolution of paracetamol in ethanol solution for unseeded batch cooling crystallization with temperature-cycling strategy. Industrial & Engineering Chemistry Research 62, 2866–2881.
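A minimal sketch of the universal-differential-equation idea is given below: a small neural network stands in for the unknown dissolution-rate law inside an otherwise mechanistic balance. The state choice, rate structure and Euler roll-out are illustrative assumptions, not the moment model of the paper.

```python
import torch
import torch.nn as nn

# NN replaces the (unknown) dissolution-rate law; everything else is mechanistic.
rate_nn = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def rhs(state, T):
    """state = (mu0, C): zeroth moment of the CSD and solute concentration (tensors)."""
    mu0, C = state[0], state[1]
    D = rate_nn(torch.stack([C, T]))[0]   # NN-predicted dissolution rate
    dmu0 = -D * mu0                       # crystals disappear as they dissolve
    dC = D * mu0                          # dissolved mass goes into solution (schematic)
    return torch.stack([dmu0, dC])

def simulate(state0, T_profile, dt=1.0):
    """Explicit-Euler roll-out over a 1-D tensor of temperatures;
    in practice a differentiable ODE solver would be used."""
    traj, state = [state0], state0
    for T in T_profile:
        state = state + dt * rhs(state, T)
        traj.append(state)
    return torch.stack(traj)

# Training would minimise (simulated C - measured C)^2 over the batch experiments,
# backpropagating through the solver into rate_nn's parameters.
```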



Novel PSE applications and knowledge transfer in joint industry - university energy-related postgraduate education

Athanasios Stefanakis2, Dimitra Kolokotsa2, Evaggelos Kapartzianis2, Ioannis Bonis2, Emilia Kondili1, JOHN K. KALDELLIS3

1Optimisation of Production Systems Lab., University of West Attica; 2Hellenic Energy S.A.; 3Soft Energy Applications and Environmental Protection Lab., University of West Attica

The area of process systems engineering has historically had a profound theoretical involvement with what is today known as artificial intelligence (noting especially Stephanopoulos, 1985, but also Himmelblau, 1993), with researchers testing these new ideas in all their forms. At the time, however, neither the computer hardware nor the available data had the capacity required by these models.

The situation today is different: the large amounts of data available in industry and the wide availability of cloud computing have been essential in making broad applications of machine learning models viable. In the area of process systems engineering, the types of problems currently or potentially solved with machine learning routines are:

  1. The control system, or, in terms of companies' equipment, the distributed control systems (DCS) implemented on servers with a real-time OS. Predictive algorithms with millions of coefficients (or fewer, but more robust, lately), for example neural networks and deep learning, are better at addressing larger systems than single pieces of equipment. Plant-wide optimization has not yet happened, but supply chain optimization is an area that is already seeing applications and is studied in academia.
  2. The process safety system (also known as the interlock system or emergency shutdown system), implemented in PLCs, has also been augmented by ML through fault prediction and diagnosis methods. Applied mostly to rotating-machine performance (asset performance management systems), these methods predict failures in advance so that companies can take timely measures, minimizing the risk of accidents as well as production losses (predictive maintenance).

The subject has three challenges to be taught:

  1. The process systems engineering subject itself, which requires an understanding of modelling and is already not an easy subject.
  2. The machine learning subject, which also requires an understanding of modelling but is not a core subject in PSE.
  3. The data engineering subject. As the systems become larger (soon they will be plant-wide), knowledge of databases and cloud operating systems is becoming required, at least to understand the structure of the models to be used.

These subjects do not share a similar language, not even close, and are three separate frames of knowledge. A re-framing of PSE is required to include all three new disciplines in its core, and this needs to be done faster than in the past. The potential of the young generations is enormous, as they learn "hands-on", but for the older ones this is already overwhelming.

For the next period, machine learning is evolving in the form of plant optimizers and fault detection and diagnosis models.

The present article will present the continuous evolution and progress of the cooperation between the largest energy company in Greece and the university in the implementation of knowledge transfer and advanced postgraduate and doctoral education courses. Furthermore, the novel ideas for AI implementation in the process industry mentioned above will also be described, and the prospects of this cooperation for both the industry and the university will be highlighted.



Optimal Operation of Middle Vessel Batch Distillation using an Economic MPC

Surendra Beniwal, Sujit Jogwar

IIT Bombay, India

Middle vessel batch distillation (MVBD) is an alternative configuration of conventional batch distillation with an improved sustainability index. MVBD consists of two column sections separated by a (middle) vessel for the separation of a ternary mixture. It works on the principle of multi-effect operation, wherein vapour from one column section (effect) is used to drive the subsequent effect, thus reducing the overall energy consumption [1]. The entire system is operated under total reflux and, at the end of the batch, the three products are accumulated in the three vessels (reflux drum, middle vessel and reboiler).

It was previously shown that the performance of the MVBD can be quantified in terms of an overall performance index (OPI), which captures the trade-off between separation and energy efficiency [2]. Furthermore, during operation, the holdup trajectory of each vessel can be manipulated to maximize the OPI. In order to track these optimal holdup profiles during a batch, a feedback control system is needed.

The present work compares two approaches: sequential (open-loop optimization + closed-loop control) and simultaneous (closed-loop optimization + control). In the sequential approach, the optimal set-point trajectory generated by offline optimization is tracked using a model predictive controller (MPC). Alternatively, in the simultaneous approach, OPI maximization is used as the objective function of the controller. This formulation is similar to economic MPC. As the prediction horizon of this controller is much shorter than the batch time, the problem is reformulated to ensure feasibility of the end-of-batch constraints. The two approaches are compared in terms of effectiveness, overall performance index, robustness (to disturbances and plant-model mismatch) and associated implementation challenges (computational time). A simulation case study with the separation of a ternary mixture of benzene, toluene and o-xylene is used to illustrate the controller design and performance.

References:

[1] Davidyan, A. G., Kiva, V. N., Meski, G. A., & Morari, M. (1994). Batch distillation in a column with a middle vessel. Chemical Engineering Science, 49(18), 3033-3051.

[2] Beniwal, S., & Jogwar, S. S. (2024). Batch distillation performance improvement through vessel holdup redistribution—Insights from two case studies. Digital Chemical Engineering, 13, 100187.



Recurrent deep learning models for multi-step-ahead prediction: comparison and evaluation for a real Electrical Submersible Pump (ESP) system

Vinicius Viena Santana1, Carine de Menezes Rebello1, Erbet Almeida Costa1, Odilon Santana Luiz Abreu2, Galdir Reges2, Téofilo Paiva Guimarães Mendes2, Leizer Schnitman2, Marcos Pellegrini Ribeiro3, Márcio Fontana2, Idelfonso Nogueira1

1Norwegian University of Science and Technology, Norway; 2Federal University of Bahia, Brazil; 3CENPES, Petrobras R&D Center, Brazil

Predicting future states from historical data is crucial for automatic control and dynamic optimization in engineering. Recent advances in deep learning have provided new opportunities to improve prediction accuracy across various engineering disciplines, particularly using Artificial Neural Networks (ANNs). Recurrent Neural Networks (RNNs), in particular, are well suited for time series prediction due to their ability to model dynamic systems through recurrent updates [1].

Despite RNNs' high predictive capacity, their potential can be underutilized if the model training does not consider the intended future usage scenario [2,3]. In applications like Model Predictive Control (MPC), the model must evolve over time, relying only on its own predictions rather than ground-truth data. Training a model to predict only one step ahead may result in poor performance when applied to multi-step predictions, as errors compound in the auto-regressive (or generative) mode.

This study focuses on identifying optimal strategies for training deep recurrent neural networks to predict critical operational time series data from a real Electric Submersible Pump (ESP) system. We evaluate the performance of RNNs in multi-step-ahead predictions under two conditions: (1) when trained for single-step predictions and recursively applied to multi-step forecasting, and (2) using a novel training approach explicitly designed for multi-step-ahead predictions. Our findings reveal that the same model architecture can exhibit markedly different performance in multi-step-ahead forecasting, emphasizing the importance of aligning the training process with the model's intended real-time application to ensure reliable predictions.
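The two training regimes can be contrasted in a few lines of PyTorch-style code, as sketched below; the GRU architecture, loss choices and roll-out scheme are illustrative assumptions rather than the configuration used in the study.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    """Small recurrent forecaster; n_states = number of monitored ESP signals."""
    def __init__(self, n_states, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_states, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x, h=None):
        out, h = self.rnn(x, h)
        return self.head(out), h

def one_step_loss(model, seq):
    """Teacher forcing: predict x_{t+1} from the measured x_t."""
    pred, _ = model(seq[:, :-1])
    return nn.functional.mse_loss(pred, seq[:, 1:])

def multi_step_loss(model, seq, horizon):
    """Auto-regressive roll-out: the model is fed its own predictions."""
    ctx, future = seq[:, :-horizon], seq[:, -horizon:]
    _, h = model(ctx[:, :-1])            # warm up the hidden state on the context
    x, preds = ctx[:, -1:], []
    for _ in range(horizon):
        x, h = model(x, h)               # next input is the previous prediction
        preds.append(x)
    return nn.functional.mse_loss(torch.cat(preds, dim=1), future)
```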

[1] Himmelblau, D.M. Applications of artificial neural networks in chemical engineering. Korean J. Chem. Eng. 17, 373–392 (2000). https://doi.org/10.1007/BF02706848

[2] Marrocos, P.H., Iwakiri, I.G.I., Martins, M.A.F., Rodrigues, A.E., Loureiro, J.M., Ribeiro, A.M., & Nogueira, I.B.R. (2022). A long short-term memory based Quasi-Virtual Analyzer for dynamic real-time soft sensing of a Simulated Moving Bed unit. Applied Soft Computing, 116, 108318. https://doi.org/10.1016/j.asoc.2021.108318

[3] Nogueira, I.B.R., Ribeiro, A.M., Requião, R., Pontes, K.V., Koivisto, H., Rodrigues, A.E., & Loureiro, J.M. (2018). A quasi-virtual online analyser based on artificial neural networks and offline measurements to predict purities of raffinate/extract in simulated moving bed processes. Applied Soft Computing, 67, 29-47. https://doi.org/10.1016/j.asoc.2018.03.001



Simulation and optimisation of vacuum (pressure) swing adsorption with simultaneous consideration of real vacuum pump data and bed fluidisation

Yangyanbing Liao, Andrew Wright, Jie Li

Centre for Process Integration, Department of Chemical Engineering, School of Engineering, The University of Manchester, United Kingdom

Pressure swing adsorption (PSA) is an essential technology for gas separation and purification. A PSA process where the highest pressure is above atmospheric pressure and the lowest pressure is at a vacuum level is referred to as vacuum pressure swing adsorption (VPSA). In contrast, vacuum swing adsorption (VSA) refers to a PSA process with the highest pressure equal to or slightly above atmospheric pressure and the lowest pressure below atmospheric pressure.

Most computational studies concerning the simulation of V(P)SA processes have assumed a constant vacuum pump efficiency ranging from 60% to 100%. Nevertheless, Krishnamurthy et al. [3] highlighted that 72% is a typical efficiency value for compressors, but not representative of vacuum pumps. They reported a low efficiency value of 30% estimated from their pilot-plant data. As a result, the energy consumption of the vacuum pump could have been underestimated by at least a factor of two in many computational studies.

In addition to assuming a constant vacuum pump efficiency, efficiency correlations have been proposed to more accurately evaluate the vacuum pump performance [4-5]. However, these correlations fail to conform to the trend suggested by the data points at higher pressures or to accurately represent the vacuum pump performance.

The adsorption bed fluidisation is another key factor in designing the PSA process. This is because bed fluidisation incurs rapid adsorbent attrition and eventually results in a substantial decrease in the separation performance [6]. However, the impacts of fluidisation on PSA optimisation have not been comprehensively addressed. More importantly, existing studies have not simultaneously incorporated real vacuum pump performance data and bed fluidisation limits into PSA optimisation.

To address the above research gaps, in this work we develop accurate prediction models for the pumping speed and power of the vacuum pump based on real performance curves, using a data-driven modelling approach [7-8]. We then develop a new, comprehensive V(P)SA model that allows for an accurate evaluation of the V(P)SA process performance without relying on an estimated vacuum pump energy efficiency or on pressure/flow-rate boundary conditions at the vacuum pump end of the adsorption bed. A new optimisation problem that simultaneously incorporates the proposed V(P)SA model and the bed fluidisation constraints is then constructed.
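As a simple illustration of the data-driven pump modelling step, the sketch below fits a polynomial in log pressure to a hypothetical pumping-speed curve; the actual study uses real performance data and models pump power in an analogous way.

```python
import numpy as np

# Hypothetical pumping-speed curve: S [m3/h] vs. inlet pressure p [mbar]
p = np.array([1, 5, 10, 50, 100, 300, 600, 1013.0])
S = np.array([80, 180, 230, 290, 300, 295, 285, 270.0])

coeffs = np.polyfit(np.log10(p), S, deg=3)   # polynomial in log10(p)
speed_model = np.poly1d(coeffs)

def pumping_speed(p_mbar):
    """Predicted pumping speed at a given inlet pressure [mbar]."""
    return speed_model(np.log10(p_mbar))

print(pumping_speed(20.0))   # interpolated speed at 20 mbar
```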

The computational results demonstrate that the vacuum pump efficiency falls within 20%-40%. Using an estimated vacuum pump efficiency, the optimal cost is underestimated by at least 42% compared to that obtained using the proposed performance models. When the fluidisation constraints are incorporated, a low feed velocity and an exceptionally long cycle time are essential for maintaining a small pressure drop across the bed to prevent fluidisation. The optimal total cost increases by at least 16% compared to cases where bed fluidisation constraints are not incorporated. Hence, it is important to incorporate vacuum pump performance prediction models developed using real data, together with bed fluidisation constraints, to accurately evaluate the PSA performance.

References

1. Comput. Aided Chem. Eng. 2012:1217-21.

2. Energy 2017;137:495-509.

3. AIChE J. 2014;60(5):1830-42.

4. Int. J. Greenh. Gas Con. 2020;93:102902.

5. Ind. Eng. Chem. Res. 2019;59(2):856-73.

6. Adsorption 2014;20:757-68.

7. AIChE J. 2016;62(9):3020-40.

8. Appl. Energy 2022;305:117751.



Sociotechnical Transition: An Exploratory Study on the Social Appropriability of Users of Smart Meters in Wallonia.

Elisa Boissézon

Université de Mons, Belgium

Optimal and autonomous daily use of new technologies isn’t a reality for everyone. In a societal context driven by sociotechnical transitions (Markard et al., 2012), many people lack access to digital equipment, do not possess the required digital skills for their use, and, consequently, are unable to participate digitally in social life via e-services. This reality is called digital inequalities (Agence du numérique, 2023) and is even more crucial to consider in the context of the increasing digitalization of society, in all areas, including energy. Indeed, according to the European Union directives, member states are required to develop various means of action, including digital, which are essential to achieving the three strategic axes envisioned by the European energy transition scenario, namely: investment in renewable energies, energy efficiency, and energy sobriety (Dufournet & Marignac, 2018).

In this specific instance, our research focuses on the question of social appropriation (Zélem, 2018) of new technologies in the context of the deployment of smart meters in Wallonia, and the use of associated digital tools by the publics. These individuals, with their unique socio-economic and sociodemographic profiles, are not equally equipped to utilize all the functionalities offered by this new digital system for managing energy consumption (Agence du Numérique, 2023; Van Dijk, 2017; Valenduc, 2013). This exploratory and phenomenological study aims, firstly, to investigate the experiences of the publics concerning the support received during the installation of the new smart metering system and to identify the barriers to the social appropriation of new technologies. Secondly, the field surveys aim to determine to what extent individual participatory forms of support (Benoît-Moreau et al., 2013; Cadenat et al., 2013), developed through the lens of active pedagogies such as experiential learning (Brotcorne & Valenduc, 2008, 2009), and collective forms (Bernaud et al., 2015; Turcotte & Lindsay, 2008) can promote the inclusion of digitally vulnerable users. The central role of field professionals as interfaces (Cihuelo & Jobert, 2015) is also highlighted within the service relationship (Gadrey, 2003) that connects, on the one hand, the end consumers and, on the other hand, the organization responsible for deploying the smart meters. Our qualitative investigations were conducted with four types of samples, through semi-structured interviews, considering several determining factors regarding engagement in the use of new technologies, from both individual and collective perspectives. Broadly speaking, our results indicate that while the standardized support protocol applied by field professionals during the installation of smart meters is sufficient for digitally proficient users, the situation is more nuanced for vulnerable populations who have specific needs requiring close support. In this context, collective participatory support in workshops in the form of focus groups seems to have further promoted the digital inclusion of participants.



Optimizing Methane Conversion in a Flow Reactor System Using Bayesian Optimization and Fisher Information Matrix Driven Experimental Design Approaches: A Comparative Study

Michael Aku, Solomon Gajere Bawa, Arun Pankajakshan, Ye Seol Lee, Federico Galvanin

University College London, United Kingdom

Reaction processes are complex systems requiring optimization techniques to achieve optimal performance in terms of key performance indicators (KPIs) such as yield, conversion, and selectivity [1]. Optimisation efforts often rely on accurate modelling of reaction kinetics, thermodynamics and transport phenomena to guide experimental design and improve reactor performance. Bayesian Optimization (BO) and Fisher Information Matrix-driven (FIMD) techniques are two key approaches used in the optimization of reaction systems [2].
BO helps identify optimal conditions efficiently by starting with an exploration of the design space, while FIMD approaches have recently been proposed to maximise the information gained from experiments and progressively improve parameter estimation [3] by focusing more on exploitation of the decision space to reduce the uncertainty in kinetic model parameters [4]. Both techniques have been widely used in scientific and industrial domains, but they exhibit a fundamental difference in how they balance exploration (gaining new knowledge) and exploitation (using current knowledge to optimize outcomes) during model calibration.

This study presents a comparative assessment of BO and FIMD methods for optimal experimental design, focusing on the complete oxidation of methane in an automated flow reactor system [5]. The performance of both methods is evaluated in terms of methane conversion optimization, experimental efficiency (i.e., the number of runs required to achieve the optimum), and information gained. Our preliminary findings suggest that while BO readily converges to a high methane conversion, FIMD can be a valid alternative for reducing the number of required experiments, offering more insight into the sensitivities of each parameter and into the process dynamics. The comparative analysis paves the way towards developing explainable or physics-informed data-driven models to map the relationship between predicted experimental information and KPIs. The comparison also highlights trade-offs between convergence speed and robustness in experimental design, which are key aspects to consider for a comprehensive evaluation of both approaches in online reaction process optimization.
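A minimal Bayesian-optimisation loop of the kind compared here can be sketched with scikit-optimize as below; the conversion surrogate, bounds and settings are invented for illustration, since the real objective is evaluated on the automated flow reactor.

```python
import numpy as np
from skopt import gp_minimize

def negative_conversion(x):
    """Made-up conversion surrogate over temperature [K] and residence time [s]."""
    T, tau = x
    conversion = 1.0 / (1.0 + np.exp(-(T - 650.0) / 25.0)) * (1.0 - np.exp(-tau / 4.0))
    return -conversion                         # gp_minimize minimises, so negate

result = gp_minimize(
    negative_conversion,
    dimensions=[(550.0, 750.0), (0.5, 10.0)],  # bounds on T and tau
    acq_func="EI",                             # expected improvement acquisition
    n_calls=25,
    random_state=0,
)
print("best (T, tau):", result.x, "conversion:", -result.fun)
```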

References

[1] Taylor, C. J., Pomberger, A., Felton, K. C., Grainger, R., Barecka, M., Chamberlain, T. W., & Lapkin, A. A. (2023). A brief introduction to chemical reaction optimization. Chemical Reviews, 123(6), 3089-3126.

[2] Quirino, P. P. S., Amaral, A. F., Manenti, F., & Pontes, K. V. (2022). Mapping and optimization of an industrial steam methane reformer by the design of experiments (DOE). Chemical Engineering Research and Design, 184, 349-365.

[3] Friso, A., & Galvanin, F. (2024). An optimization-free Fisher information driven approach for online design of experiments. Computers & Chemical Engineering, 187, 108724.

[4] Green, P. L., & Worden, K. (2015). Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 373(2051), 20140405.

[5] Pankajakshan, A., Bawa, S. G., Gavriilidis, A., & Galvanin, F. (2023). Autonomous kinetic model identification using optimal experimental design and retrospective data analysis: methane complete oxidation as a case study. Reaction Chemistry & Engineering, 8(12), 3000-3017.



OPTIMAL CONTROL OF PSA UNITS BASED ON EXTREMUM SEEKING

Beatriz Cambão da Silva1,2, Ana Mafalda Ribeiro1,2, Diogo Filipe Rodrigues1,2, Alexandre Filipe Porfírio Ferreira1,2, Idelfonso Bessa Reis Nogueira3

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The application of real-time optimization (RTO) to dynamic operations is challenging due to the complexity of the nonlinear problems involved, making it difficult to achieve robust solutions [1]. Regarding cyclic adsorption processes, particularly Pressure Swing Adsorption (PSA) and Temperature Swing Adsorption (TSA), controlling the process in real time is essential to maintain or increase productivity.

The literature on real-time optimization of PSA units relies on Model Predictive Control (MPC) and Economic Model Predictive Control (EMPC) [2]. These options rely heavily on an accurate model representation of the industrial plant, requiring a high computational effort and time to ensure optimal control [3]. Given the importance of PSA and TSA systems in multiple separation operations, establishing alternatives for control and optimization in real time is warranted. With that in mind, this work aimed to explore alternative model-free real-time optimization techniques that depend on simple control elements, as is the case with Extremum Seeking Control (ESC).

The chosen case study was Syngas Upgrading, which is relevant since it precedes the Fischer‑Tropsch reactions that enable an alternative to fossil fuels. Syngas Upgrading can also provide H2 for ammonia production and diminish CO2 emissions. The operation of the PSA unit for syngas upgrading used as the basis for this study was discussed in the work of Regufe et al. [4].

Extremum-seeking control is a method that aims to control the process by driving an objective’s gradient towards zero while estimating that gradient from persistent perturbations. A high-pass filter (HF) eliminates the signal’s DC component to obtain a clearer response to changes in the system. The input variable 𝑢 is continually perturbed by a sinusoidal wave, which keeps the system in a state of constant excitation and allows the evolution of the objective function to be assessed. An integrator then determines the adjustment in 𝑢 needed to bring the objective function closer to its optimum; this adjustment is often scaled by a gain 𝐾 to accelerate convergence.
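A discrete-time sketch of this loop (dither, high-pass filter, demodulation, integrator with gain K) is given below on a toy static map; the filter, gains and plant are illustrative assumptions, not the gPROMS/Simulink implementation used in the study.

```python
import numpy as np

def plant(u):
    """Toy static objective with a maximum at u = 2 (stands in for the PSA KPI)."""
    return -(u - 2.0) ** 2 + 4.0

dt, omega, a, K, hpf = 0.1, 1.0, 0.2, 0.8, 0.95
u_hat, y_filt_prev, y_prev = 0.0, 0.0, plant(0.0)

for k in range(2000):
    t = k * dt
    dither = a * np.sin(omega * t)
    y = plant(u_hat + dither)                    # perturbed measurement
    y_filt = hpf * (y_filt_prev + y - y_prev)    # first-order high-pass removes the DC component
    grad_est = y_filt * np.sin(omega * t) / a    # demodulation -> gradient estimate
    u_hat += dt * K * grad_est                   # integrator pushes the gradient towards zero
    y_filt_prev, y_prev = y_filt, y

print(f"converged input u ~ {u_hat:.2f} (true optimum at 2.0)")
```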

The PSA model was implemented in gPROMS, representing the behaviour of the industrial plant, with communication with MATLAB and Simulink, where the ESC was implemented.

Extremum Seeking Control successfully optimized the CO2 productivity in PSA units for syngas upgrading/H2 purification. This shows that ESC can be a valuable tool in optimizing and controlling PSA processes and does not require the unit to reach a Cyclic Steady State to adjust the operation.

[1] S. Kameswaran and L. T. Biegler, “Simultaneous dynamic optimization strategies: Recent advances and challenges,” Computers & Chemical Engineering, vol. 30, no. 10, pp. 1560–1575, 2006, doi: 10.1016/j.compchemeng.2006.05.034.

[2] H. Khajuria and E. N. Pistikopoulos, “Optimization and Control of Pressure Swing Adsorption Processes Under Uncertainty,” AIChE Journal, vol. 59, no. 1, pp. 120–131, Jan. 2013, doi: 10.1002/aic.13783.

[3] S. Skogestad, “Advanced control using decomposition and simple elements,” Annual Reviews in Control, vol. 56, p. 100903, 2023, doi: 10.1016/j.arcontrol.2023.100903.

[4] M. J. Regufe et al., “Syngas Purification by Porous Amino-Functionalized Titanium Terephthalate MIL-125,” Energy & Fuels, vol. 29, no. 7, pp. 4654–4664, 2015, doi: 10.1021/acs.energyfuels.5b00975.



Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia – FIND4S

Monika Polanska1, Yoga Pratama2, Setya Budi Abduh2, Ahmad Ni'matullah Al-Baarri2, Jan Van Impe1

1BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium; 2Department of Food Technology, Diponegoro University, Indonesia

The Capacity Building Project entitled “Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia” (FIND4S, “FIND force”) aims to boost the institutional and administrative resources of seven Indonesian higher education institutions (HEIs) in Central Java.

The EU overarching priorities addressed through the FIND4S project include the Green Deal and the Digital Transformation, through developing knowledge, competences, skills and values. The modernized, competitive and innovative curricula will stimulate green jobs and pave the way to sustainable food systems in which environmental impact is taken into account. The essential elements of risk assessment, predictive modelling and computational optimization are to be brought together with the sustainability principles of food production and food processing, as well as energy and food chain concepts (Life Cycle Assessment), within one coherent structure. The project will offer a better understanding of ecological and food systems dynamics and offer strategies for regenerating natural systems by using big data and providing predictive tools for the food industry. The predictive modelling tools can be applied to evaluate the effects of climate change on food safety, with regard to managing this new threat for all stakeholders. Raising the quality of education through digital technologies will enable learners to acquire essential competences and sector-specific digital skills. The inclusion of data management to address sustainability challenges will reinforce the scientific, technical and innovation capacities of HEIs and foster links between academia, research and industry.

Initially, the FIND4S project will modernize Bachelor’s degree curricula to include food systems and technology-oriented programs at partner universities in Indonesia. This modernization aims to meet the desired accreditation standards and better prepare graduates for postgraduate studies. Additionally, at the central hub university, the project will develop a new and innovative Master’s degree program in sustainable food systems that integrates sustainability and environmental awareness into graduate education. This program will align with labor market demands and address the challenges that agriculture and food systems are facing, providing insights into potential threats and opportunities for knowledge transfer to Indonesia through education and research.

The recognition and implementation of novel and innovative programs will be tackled via significant improvement of food science education by designing new curricula and upgrading existing ones, training academic staff, creating a research center and equipping laboratories, as well as expanding the network of collaboration with European Higher Education Institutions. The project will utilize big data, quantitative modeling, and engineering tools to engage all stakeholders, including industry partners. The comprehensive MSc program will meet the growing demand for knowledge, experience, and standards in Indonesia, contributing to a greener and more sustainable economy and society. Ultimately, this initiative will support the necessary transformation towards socially, environmentally, and economically sustainable food systems.



Optimization of Specific Heat Transfer Area for Multiple Effects Desalination (MED) Process

Salih Alsadaie1, Sana I Abukanisha1, Iqbal M Mujtaba3, Amhamed A Omar2

1Sirte University, Libya; 2Sirte Oil Company, National Oil Corporation, Libya; 3University of Bradford, United Kingdom

The world population is expected to increase massively in the coming decades, putting more stress on the desalination industry to cope with the increasing demand for fresh water. However, with the increasing cost of living, freshwater production processes face the challenge of producing freshwater at higher quality and lower cost. The best-known techniques for water desalination are thermal-based, such as Multistage Flash desalination (MSF) and Multiple Effect Desalination (MED), and membrane-based, such as Reverse Osmosis (RO). Although the installed capacity of RO remarkably surpasses that of MSF and MED, the MED process is the preferred option for newly constructed plants in locations around the world where waste heat is available. However, MED desalination technology is also required to cut costs further by optimizing its operating and design parameters.

There are several studies in the literature that focus on optimizing the MED process. Most of these studies focus on increasing the production rate or minimizing energy consumption by optimizing operating conditions, the use of more efficient control systems, integration with power plants, and hybridization with other desalination techniques. However, none of the available studies focused on the optimum design configuration, such as the heat transfer area and the number of effects.

In this paper, a mathematical model describing the MED process is developed and solved using gPROMS software. For a fixed production rate, the heat transfer area is optimized by varying the seawater temperature and flowrate, the steam temperature and flowrate, and the number of effects. The design and operating data are taken from an existing, almost new, small MED plant with two large effects and two small effects.

Keywords: MED desalination, gPROMS, optimization, heat transfer area, multiple effects.



Companies’ operation and trading strategies under the triple trading of electricity, carbon quota and commodities: A game theory optimization modelling

Chenxi Li1, Nilay Shah2, Zheng Li1, Pei Liu1

1State Key Lab of Power System Operation and Control, Department of Energy and Power Engineering, Tsinghua-BP Clean Energy Centre, Tsinghua University, Beijing, 100084, China; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom

Trading has been recognized as an effective measure for decarbonization, especially with the recent global focus on carbon reduction targets. Due to the high correlation in terms of participants and traded goods, carbon and electricity trading are highly coupled, leaving the operational strategies of companies involved in the coupled transactions unclear. Research on the coupled trading is therefore essential, as it helps companies identify optimal strategies and enables policymakers to detect potential policy loopholes. This study presents a novel game-theoretic optimization model involving both power generation companies (GenCos) and factories. Aiming to achieve a Nash equilibrium that maximizes each company’s benefits, the model explores optimal operation strategies for both power generation and consumption companies under electricity-carbon joint trading. It fully captures the operational characteristics of power generation units and the technical energy consumption of electricity-using enterprises to describe the relationship between renewable energy, fossil fuels, electricity, and carbon emissions in detail. Electricity and carbon prices in the transaction are determined through negotiation between buyers and sellers. Considering the relationship between production volume and the price of the same product, the case actually encompasses three trading systems: electricity, carbon, and commodities. The model’s nonlinearity, caused by the Nash equilibrium and by the product of price and output, is managed using piecewise linearization and discretization, transforming the problem into a mixed-integer linear program. Using this triple trading model, this study quantitatively explores three issues based on a virtual case involving three GenCos and four factories: the enterprises’ operational strategies under varying emission reduction requirements, the pros and cons of cap and benchmark carbon quota allocation mechanisms, and the impact of integrating zero-emission enterprises into carbon trading. Results indicate that GenCos tend to act as sellers of both electricity and carbon quotas. Meanwhile, since consumers may cut production rather than implement low-carbon technologies to lower emissions, driving up product prices to maintain profits, high electricity and carbon prices become unsustainable for GenCos due to reduced electricity demand. Moreover, while benchmark mechanisms may incentivize production, they can also lower overall system profits, which is undesirable for policymakers. Lastly, under strict carbon reduction targets, zero-emission companies may transform the carbon market into a seller's market by purchasing carbon to raise carbon prices, thereby reducing electricity prices and lowering their own operating costs.
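The linearisation of the price-output product can be illustrated as below: when the negotiated price is restricted to discrete candidate levels, a binary selection plus big-M bounds turns the bilinear revenue term into a linear one. The numbers are invented, and the demand coupling and equilibrium conditions of the actual model are omitted.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpContinuous

price_levels = [40.0, 60.0, 80.0]      # candidate negotiated prices (illustrative)
q_max = 100.0                          # maximum production volume (illustrative)

prob = LpProblem("bilinear_revenue_linearisation", LpMaximize)
q = LpVariable("q", lowBound=0, upBound=q_max, cat=LpContinuous)
pick = LpVariable.dicts("pick", range(len(price_levels)), cat=LpBinary)
q_at = LpVariable.dicts("q_at", range(len(price_levels)), lowBound=0, cat=LpContinuous)

# Revenue p*q rewritten as a sum of per-level continuous variables
prob += lpSum(price_levels[i] * q_at[i] for i in range(len(price_levels)))
prob += lpSum(pick[i] for i in range(len(price_levels))) == 1          # exactly one price
for i in range(len(price_levels)):
    prob += q_at[i] <= q_max * pick[i]       # q_at[i] nonzero only for the chosen level
prob += lpSum(q_at[i] for i in range(len(price_levels))) == q          # recover total output

prob += q <= 80.0   # stand-in for a demand-side constraint linking price and volume
prob.solve()
print("chosen price:", [price_levels[i] for i in range(len(price_levels)) if pick[i].value() == 1],
      "q =", q.value())
```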



Solvent and emission source dependent amine-based CO2 capture costs estimation methodology for systemic level analysis

Yi Zhao1, Aron Beck1, Hayato Hagi2, Bruno Delahaye2, François Maréchal1

1Ecole Polytechnique Fédérale de Lausanne, Switzerland; 2TotalEnergies OneTech, France

Amine-based carbon capture effectively reduces industrial emissions but faces challenges due to high investment costs and the energy penalty associated with solvent regeneration. Existing cost estimations either rely on complex and costly simulation processes or provide overly general results, limiting their applicability for systemic analysis. This study presents a shortcut approach to estimating amine-based carbon capture costs, considering varying solvents and emission sources in terms of flow rates and CO2 concentrations. The results show that scaling effects significantly impact smaller plants, with costs dropping from 200–500 $/t-CO2 to 50–100 $/t-CO2 as capacity increases from 0.1 to 100 t-CO2/h, with monoethanolamine (MEA) as the solvent. For larger plants, heat utility costs dominate, representing around 80% of the total costs, assuming a natural gas price of 35 $/MWh (10.2 $/MMBTU). Furthermore, MEA-based plants can be up to 25% more expensive than those with alternative solvents. In short, this study provides a practical method for initial amine-based carbon capture cost estimation, enabling a systemic assessment of its technoeconomic potential and facilitating its comparison with other CO2 abatement technologies.
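As a back-of-the-envelope reading of the quoted ranges, the snippet below fits a simple power-law scaling through their midpoints; this is purely illustrative and is not the shortcut costing method itself.

```python
import numpy as np

# Fit cost = c0 * capacity**(-n) through the midpoints of the two quoted ranges
cap = np.array([0.1, 100.0])     # t-CO2/h
cost = np.array([350.0, 75.0])   # $/t-CO2 (midpoints of 200-500 and 50-100)

n = np.log(cost[0] / cost[1]) / np.log(cap[1] / cap[0])
c0 = cost[0] * cap[0] ** n
print(f"apparent scaling exponent n ~ {n:.2f}")
print(f"interpolated cost at 10 t-CO2/h ~ {c0 * 10 ** (-n):.0f} $/t-CO2")
```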



Energy Planning Toward Absolute Environmental Sustainability: Key Decisions and Actionable Insights Through Interpretable Machine Learning

Nicolas Ghuys1, Diederik Coppitters1, Anne van den Oever2, Maarten Messagie2, Francesco Contino1, Hervé Jeanmart1

1Université catholique de Louvain, Belgium; 2Vrije Universiteit Brussel, Belgium

Energy planning models traditionally support the energy transition by focusing on cost-optimized solutions that limit greenhouse gas emissions. However, this narrow focus risks burden-shifting, where reducing emissions increases other environmental pressures, such as freshwater use, solving one problem while creating others. Therefore, we integrated Planetary Boundary-based Life Cycle Assessment (PB-LCA) into energy planning to identify solutions that respect absolute environmental sustainability limits. However, integrating PB-LCA into energy planning introduces challenges, such as adopting distributive justice principles, interpreting trade-offs across PB indicator impacts, and managing subjective weighting in the objective function. To address these, we employed weight screening and interpretable machine learning to extract key decisions and actionable insights from the numerous quantitative solutions generated. Preliminary results for a single weighting scenario show that the transition scenario exceeds several PB thresholds, particularly for ecosystem quality and mineral resource depletion, underscoring the need for a balanced weighting scheme. Next, we will apply screening and machine learning to pinpoint key decisions and provide actionable insights for achieving absolute environmental sustainability.

 
4:30pm - 5:30pmPlenary 2: Prof. Denis Wirtz, Johns Hopkins University, USA
Location: Zone 1 - Aula Louisiane
Chair: Jan Van Impe
Co-chair: Filip Logist
Our group has recently developed CODA, an AI-based platform to map whole diseased and healthy organs and organisms in 3D and at single-cell resolution. CODA solves the challenge of imaging large volume samples, while preserving high spatial resolution. Through integration with other multi-omic approaches – such as spatial transcriptomics and proteomics - CODA allows for unprecedented cellular and molecular profiling of tissues. I will discuss the new biological insights into tumor onset and progression gained from the use of CODA, including ovarian and pancreatic cancers, and associated biomedical implications for early detection of cancer. I will also introduce our new deep learning-based optical flow tool InterpolAI, developed to virtually restore damaged tissues and replace missing input images for enhanced 3D imaging, such as MRI, 3D histology, ssTEM, and light-sheet microscopy. Finally, I will describe our efforts to make CODA accessible to the wider community.
6:15pm - 7:00pmBoat tour to Oude Vismijn
7:00pm - 10:00pmConference Dinner at Oude Vismijn
Location: Oude Vismijn
Date: Wednesday, 09/July/2025
8:00am - 8:30amRegistration & welcome coffee
Location: Entrance Hall - Cafetaria
8:30am - 9:30amPlenary 3: Prof. Maria Papathanasiou, Imperial College London, UK
Location: Zone 1 - Aula Louisiane
Chair: Léonard Grégoire
Co-chair: Satyajeet Bhonsale

Process Systems Engineering through the lens of biopharma: niche challenges or research questions of the future?



Pharmaceutical product and process development rely primarily on time- and cost-intensive experimentation. In recent years, the adaptation and deployment of computer-modelling tools in this space have been gaining increasing interest as a means to inform, accelerate, and optimise the industrial workflow. Despite the modernization of day-to-day workflows, however, the sector is still challenged by endogenous and exogenous uncertainty that often impairs the ability to deliver enough therapeutics on time. This is further highlighted in the case of new modalities, where product and process performance are still unknown. At the same time, manufacturers are pressured to deliver therapies at accessible prices, while also considering the sector’s environmental footprint.

In this talk, we will discuss how Process Systems Engineering (PSE) innovation can enable quantification of uncertainty across the whole product lifecycle, from manufacturing to distribution. We will explore how PSE innovation can be adapted and transferred to cater to the needs of next-generation biopharma, in an effort to increase the sector’s preparedness, responsiveness, and efficiency. The inherent, bi-directional relationship between manufacturing and supply chains will be discussed with a view to understanding how supply chain optimisation can be revisited to satisfy the needs of an ever-changing medical landscape. Lastly, we will discuss how open challenges in today’s biopharmaceutical manufacturing and supply chain operation can motivate PSE research for the future.
9:30am - 10:00amEssenscia – Keynote on Functional Safety
Location: Zone 1 - Aula Louisiane
Functionary
  • MSc. Jan Luyts, Lead Functionary Project, essenscia, BASF
  • MSc. Geert Boogaerts, essenscia, KU Leuven
Functional safety (FS) – an essential cornerstone in the process safety (PS) of chemical processes – is often approached as a purely risk-based challenge although this has several shortcomings:
  • There is a lack of understanding of crucial safety parameters and how they affect one another. Furthermore, although they have a major impact on the classification and performance of FS, risk assessments and quantitative failure rates are often subjectively determined by external stakeholders.
  • The primary goal of a safety instrumented system (SIS) is to ensure safety, which often conflicts with cybersecurity standards and guidelines. It is essential to implement robust cybersecurity measures in the process industry to (cyber)secure the safety systems.
  • A significant uncertainty on cost-determining factors throughout the plant’s safety lifecycle is often observed, resulting in the general trend of mismatching the goals and the means of FS.
  • In the process industry, safety departments have traditionally focused on random failures even though systematic failures are significantly more common, and even dominant in the total number of dangerous failures. Systematic failures, which are associated with human and organizational factors (HOFs), can lead to unacceptable scenarios that can no longer be mitigated. However, they have been researched only to a limited extent to date.
This strategic project aims to tackle these shortcomings by developing an integral methodology for FS design based on an enriched performance- and technology-based approach that will shift the industry-wide paradigm. This new method will further improve the safety of production units at a lower cost, without compromising the overall PS. A more rational and efficient approach will be essential to safeguard the license to operate (LTO) and boost the competitiveness of the Flemish chemical industry.
10:00am - 10:30amCoffee Break
Location: Zone 2 - Cafetaria
10:00am - 10:30amPoster Session 3
Location: Zone 2 - Cafetaria
 

pyDEXPI: A Python framework for piping and instrumentation diagrams using the DEXPI information model

Dominik P. Goldstein, Lukas Schulze Balhorn, Achmad Anggawirya Alimin, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Developing piping and instrumentation diagrams (P&IDs) is a fundamental task in process engineering. For designing complex installations, such as petroleum plants, multiple departments across several companies are involved in refining and updating these diagrams, creating significant challenges in data exchange between different software platforms from various vendors. The primary challenge in this context is interoperability, which refers to the seamless exchange and interpretation of information to collectively pursue shared objectives. To enhance the P&ID creation process, a unified, machine-readable data format for P&ID data is essential. A promising candidate is the Data Exchange in the Process Industry (DEXPI) standard. However, the absence of an open-source software implementation of DEXPI remains a major bottleneck, limiting the interoperability of P&ID data in practice. This lack of interoperability is further hindering the adoption of cutting-edge digital process engineering tools, such as automated data analysis and the integration of generative artificial intelligence (AI), which could significantly improve the efficiency and innovation of engineering design workflows.

We present pyDEXPI, an open-source implementation of the DEXPI format for P&IDs in Python. Currently, pyDEXPI encompasses three main parts. (1) At its core, pyDEXPI implements the classes of the DEXPI information model as Pydantic data classes. The pyDEXPI classes define the class relationships and the data attributes outlined in the DEXPI specification. (2) pyDEXPI provides several possibilities for importing and exporting P&ID data into the data class framework. This includes importing DEXPI data in its Proteus XML exchange format, saving and loading pyDEXPI models as a Python pickle file, and casting pyDEXPI into a graph format. (3) pyDEXPI offers toolkit functionalities to analyze and manipulate pyDEXPI P&IDs. For example, pyDEXPI tools can be used to search through P&IDs for data of interest and add, remove, or change data without violating DEXPI modeling conventions. With this functionality, pyDEXPI makes P&ID data more efficient to handle, more flexible, and more interoperable. We envision that, with further development, pyDEXPI will act as a central scientific computing library for the domain of digital process engineering, facilitating interoperability and the application of data analytics and generative AI on P&IDs.
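
As a minimal, hypothetical illustration of the general idea of representing an information model with Pydantic data classes (the class and field names below are invented for illustration and do not reproduce pyDEXPI's actual API), a sketch could look like this.

# Minimal sketch of representing a DEXPI-like P&ID information model with
# Pydantic data classes. Class and field names are illustrative only and do
# not reproduce pyDEXPI's actual API.
from typing import List, Optional
from pydantic import BaseModel


class Nozzle(BaseModel):
    id: str
    sub_tag_name: Optional[str] = None


class TaggedPlantItem(BaseModel):
    id: str
    tag_name: str
    nozzles: List[Nozzle] = []


class PipingSegment(BaseModel):
    id: str
    from_item: str   # id of the source item
    to_item: str     # id of the target item


class PIDModel(BaseModel):
    items: List[TaggedPlantItem] = []
    segments: List[PipingSegment] = []

    def find_by_tag(self, tag: str) -> Optional[TaggedPlantItem]:
        """Toolkit-style helper: search the P&ID for an item with a given tag."""
        return next((i for i in self.items if i.tag_name == tag), None)


# Usage: validated construction of a tiny P&ID and a simple query over it.
pid = PIDModel(
    items=[TaggedPlantItem(id="T1", tag_name="TK-100", nozzles=[Nozzle(id="N1")])],
    segments=[PipingSegment(id="S1", from_item="T1", to_item="T1")],
)
print(pid.find_by_tag("TK-100"))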

Key references:

M. Theißen et al., 2021. DEXPI P&ID specification. DEXPI Initiative, version 1.3

M. Toghraei, 2019. Piping and instrumentation diagram development, first edition Edition. John Wiley & Sons, Inc, Hoboken, NJ, USA



A Bayesian optimization approach for data-driven Petlyuk distillation columns design.

Alexander Panales-Perez1, Antonio Flores-Tlacuahuac2, Fabian Fuentes-Cortés3, Miguel Angel Gutierrez-Limon4, Mauricio Sales-Cruz5

1Tecnológico Nacional de México, Instituto Tecnológico de Celaya, Departamento de Ingeniería Química Celaya, Guanajuato, México, 38010; 2Escuela de Ingeniería y Ciencias, Tecnológico de Monterrey, Campus Monterrey Ave. Eugenio Garza Sada 2501, Monterrey, N.L, 64849, México; 3Department of Energy Systems and Environment, IMT Atlantique, GEPEA rue Alfred Kastler, Nantes, 44000, France; 4Departamento de Energía, Universidad Autónoma Metropolitana-Azcapotzalco Av. San Pablo 180, C.P. 02200, Ciudad de México, México; 5Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa Av. Vasco de Quiroga 4871, C.P. 05348, Ciudad de México, México

Recently, the focus on increasing process efficiency to lower energy consumption has led to alternative systems, such as Petlyuk distillation columns. It has been proven that, when compared to conventional distillation columns, these systems offer significant energy and cost savings. Therefore, from an economic perspective, achieving high-purity products alone does not define the feasibility of a process; to balance the trade-off between product purity and cost, a multi-objective optimization is needed. Despite the effectiveness of common optimization methods, novel strategies like Bayesian optimization, which do not require an explicit mathematical model, can handle complex systems. Even starting from just one initial point, Bayesian optimization can effectively perform the optimization process. However, as a black-box method, it requires an analysis of the influence of hyperparameters on the optimization process. Thus, this work presents a Petlyuk column case study, including an analysis of hyperparameters such as the acquisition function and the number of initial points.
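
As a generic sketch of how the two hyperparameters mentioned above (acquisition function and number of initial points) appear in a Bayesian optimization run, a scikit-optimize example could look as follows; the objective below is a placeholder, whereas the actual study evaluates a rigorous Petlyuk column simulation.

# Generic Bayesian-optimization sketch with scikit-optimize, exposing the two
# hyperparameters discussed (acquisition function, number of initial points).
# The objective is a placeholder, not the authors' column simulation.
from skopt import gp_minimize
from skopt.space import Real

def objective(x):
    reflux_ratio, side_draw = x
    # Placeholder cost/purity trade-off standing in for a rigorous simulation.
    return (reflux_ratio - 3.0) ** 2 + (side_draw - 0.4) ** 2

space = [Real(1.0, 8.0, name="reflux_ratio"),
         Real(0.1, 0.9, name="side_draw")]

result = gp_minimize(
    objective,
    space,
    acq_func="EI",        # acquisition function: expected improvement
    n_initial_points=5,   # random evaluations before the surrogate takes over
    n_calls=30,
    random_state=0,
)
print(result.x, result.fun)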



Enhancing Energy Efficiency of Industrial Brackish Water Reverse Osmosis Desalination Process using Waste Heat

Mudhar Al-Obaidi1, Alanood Alsarayreh2, Iqbal M Mujtaba3

1Middle Technical University, Iraq; 2Mu’tah University, Jordan; 3University of Bradford, United Kingdom

The Reverse Osmosis (RO) system is a viable technology to produce high-quality water from brackish water sources. However, the progressive increase in water and electricity demands necessitates the development of a sustainable desalination technology. This can be achieved by reducing the specific energy consumption of the process, which will also reduce the environmental footprint. This study proposes the concept of reducing the overall energy consumption of the multistage, multi-pass RO system of the Arab Potash Company (APC) in Jordan by heating the feed brackish water. The utilisation of waste heat generated from different units of the APC production plant, such as steam condensate supplied to a heat exchanger, is a feasible technique to heat the brackish water entering the RO system. To systematically evaluate the influence of water temperature on the performance metrics, including specific energy use, a generic model of the RO system is developed. Model-based simulation is used to evaluate the influence of water temperature. The results indicate a clear improvement in specific energy consumption when using water temperatures close to the manufacturer's maximum recommended temperature. An increase in water temperature from 25 ºC to 40 ºC can result in an overall energy saving of more than 10%.

References

  1. Alanood A. Alsarayreh, M.A. Al-Obaidi, A.M. Al-Hroub, R. Patel, and I.M. Mujtaba. Optimisation of energy consumption in a medium-scale reverse osmosis brackish water desalination plant. Proceedings of the 30th European Symposium on Computer Aided Chemical Engineering (ESCAPE30), May 24-27, 2020, Milano, Italy
  2. Alanood A Alsarayreh, Mudhar A Al-Obaidi, Shekhah K Farag, Raj Patel, Iqbal M Mujtaba, 2021. Performance evaluation of a medium-scale industrial reverse osmosis brackish water desalination plant with different brands of membranes. A simulation study. Desalination, 503, 114927.
  3. Alanood A. Alsarayreh, Mudhar A. Al-Obaidi, Saad S. Alrwashdeh, Raj Patel, Iqbal M. Mujtaba, 2022. Enhancement of energy saving of reverse osmosis system via incorporating a photovoltaic system. Editor(s): Ludovic Montastruc, Stephane Negny, Computer Aided Chemical Engineering, Elsevier, 51, 697-702.


Analysis of Control Properties as a Sustainability Indicator in Intensified Processes for Levulinic Acid Purification

Tadeo Velázquez-Sámano, Heriberto Alcocer-García, Eduardo Sánchez-Ramírez, Carlos Rodrigo Caceres-Barrera, Juan Gabriel Segovia-Hernández

Universidad de Guanajuato, México

Sustainability is one of the greatest challenges humanity has faced. Therefore, there is a special emphasis on improving or redesigning current chemical processes to ensure sustainability for future generations. The chemical industry has successfully implemented process redesign using process intensification. Through process intensification, significant savings in energy consumption, lower production costs, reductions in the size or number of equipment, and reductions in environmental impacts can be achieved. However, one of the disadvantages associated with process intensification is the loss of manipulable variables, due to the increased interactions arising from equipment integration, which can degrade the control properties. In other words, intensified processes can be more sensitive to disturbances in the system, which could become a problem not only for product quality but also for safety. On the other hand, some studies have shown that intensified designs can have better control properties than their conventional counterparts. Therefore, it is important to incorporate the study of control properties into the evaluation of intensified schemes, since it is not known a priori whether intensification will improve or worsen the control properties.

Taking this into account, this study performed an analysis of the control properties of recently proposed schemes for the purification of levulinic acid. Levulinic acid is considered one of the bioproducts from lignocellulosic biomass with the greatest market potential, so the evaluation of control aspects in these schemes is relevant for its possible industrial application. These alternatives include conventional hybrid systems that contemplate liquid-liquid extraction and distillation and intensified schemes using thermal coupling and movement of sections. The studied schemes were obtained through a rigorous multi-objective optimization process taking the total annual cost as an economic criterion and the eco-indicator 99 as an environmental criterion. They were optimized using the differential evolution method with tabu list, which is a hybrid method that has proven to be efficient in complex nonlinear and nonconvex systems. The objective of this study is to identify the dynamic characteristics of the designs studied and to anticipate which could present control problems. Furthermore, each study of intensified distillation schemes contributes to generating guidelines that help the design stage of this type of systems. To analyze the control of the systems, two types of analyses were conducted: closed-loop and open-loop. For the closed-loop analysis, the aim was to minimize the integral of absolute error by identifying the optimal tuning of the controller's gain and integral time. In the open-loop analysis, the condition number, the relative gain array, and the feed sensitivity index were examined. The results reveal that the design comprising a liquid-liquid extraction column, three distillation columns, and thermal coupling between the last two columns exhibits the best dynamic performance. This design demonstrates a lower total condition number, a sensitivity index below the average, a stable control structure, and low values for the integral of absolute error. Additionally, this design shows superior cost and environmental impact indicators, making it the best option among the proposed designs.
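
As a minimal numeric illustration of two of the open-loop metrics named above (the condition number via SVD and the relative gain array), the following sketch uses an assumed 2x2 steady-state gain matrix; the values are illustrative only and are not taken from the studied designs.

# Minimal illustration of two open-loop controllability metrics: the condition
# number (via SVD) and the relative gain array (RGA) of a steady-state gain
# matrix. The 2x2 gains below are illustrative placeholders.
import numpy as np

G = np.array([[12.8, -18.9],
              [ 6.6, -19.4]])   # assumed steady-state gain matrix

# Condition number: ratio of largest to smallest singular value.
singular_values = np.linalg.svd(G, compute_uv=False)
condition_number = singular_values[0] / singular_values[-1]

# RGA: elementwise product of G with the transpose of its inverse.
rga = G * np.linalg.inv(G).T

print("condition number:", condition_number)
print("RGA:\n", rga)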



Reactive Crystallization Modeling for Process Integration Simulation

Zachary Maxwell Hillman, Gintaras Reklaitis, Zoltan K Nagy

Purdue University, United States of America

Reactive crystallization (RC) is a chemical process in which the reaction yields a crystalline product. It is used in various industries such as pharmaceutical manufacturing and water purification (McDonald et al., 2021). In some cases, RC is the only feasible process pathway, such as the precipitation of certain ionic solids from solution. In other cases, a reaction can become an RC by changing the reaction environment to a solvent with low product solubility.

In either case, the process combines reaction with separation, intensifying the overall design. Process intensification leads to different advantages and disadvantages compared to traditional routes and therefore conducting an analysis prior to construction would be valuable (McDonald et al., 2021; Schembecker & Tlatlik, 2003).

Despite the utility and prevalence of RC, it has not been incorporated into any modern process design software, to our knowledge. There are RC models that simulate the inner reactions and dynamics of an RC (Tang et al., 2023; Salami et al., 2020), but each has limiting assumptions, and none has been integrated with the rest of a process line simulation. This modeling gap complicates RC process design and limits both the exploration of the possible benefits of using RC and the ability to optimize a system that relies on it.

To fill this gap, we built a generalized model that can be integrated with other unit operations in the Python process simulator package PharmaPy (Casas-Orozco et al., 2021). This model focuses on the reaction-crystallization interactions and dynamics to predict reaction yield and crystal critical quality attributes given inlet streams and reactor conditions. In this way, RC can be integrated with other unit operations to capture the effects RC has on the process overall.

The model and its assumptions are described in this work. The model space, limitations and capabilities are explored. Finally, the potential benefits of the RC system are shown using three example cases.
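
As an illustrative toy (not the PharmaPy implementation), a method-of-moments sketch of a batch reactive crystallization, with an assumed second-order reaction producing a sparingly soluble product and power-law nucleation and growth, could look like the following; all kinetic parameters are placeholders.

# Toy method-of-moments model of a batch reactive crystallization: A + B -> P,
# with P nucleating and growing once supersaturated. Illustrative only; not the
# PharmaPy model. All parameters are assumed placeholders.
import numpy as np
from scipy.integrate import solve_ivp

k_rxn, k_b, k_g = 1e-2, 1e6, 1e-7    # reaction, nucleation, growth constants (assumed)
c_sat = 0.05                          # solubility of P (mol/L, assumed)
kv, rho_c = 0.5, 1.3e4                # shape factor, crystal molar density (mol/m3, assumed)

def rhs(t, y):
    ca, cb, cp, mu0, mu1, mu2, mu3 = y
    r = k_rxn * ca * cb                       # reaction rate producing P
    s = max(cp - c_sat, 0.0)                  # absolute supersaturation
    B = k_b * s ** 2                          # nucleation rate (#/L/s)
    G = k_g * s                               # growth rate (m/s)
    dmu0, dmu1, dmu2, dmu3 = B, G * mu0, 2 * G * mu1, 3 * G * mu2
    cryst = rho_c * kv * dmu3                 # molar flux of P to the solid phase
    return [-r, -r, r - cryst, dmu0, dmu1, dmu2, dmu3]

y0 = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA")
print("final dissolved P:", sol.y[2, -1], "mol/L")
print("final mean size (mu1/mu0):", sol.y[4, -1] / max(sol.y[3, -1], 1e-12), "m")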

  1. Casas-Orozco, D., Laky, D., Wang, V., Abdi, M., Feng, X., Wood, E., Laird, C., Reklaitis, G. V., & Nagy, Z. K. (2021). PharmaPy: An object-oriented tool for the development of hybrid pharmaceutical flowsheets. Computers & Chemical Engineering, 153, 107408. https://doi.org/10.1016/j.compchemeng.2021.107408
  2. McDonald, M. A., Salami, H., Harris, P. R., Lagerman, C. E., Yang, X., Bommarius, A. S., Grover, M. A., & Rousseau, R. W. (2021). Reactive crystallization: A review. Reaction Chemistry & Engineering, 6(3), 364–400. https://doi.org/10.1039/D0RE00272K
  3. Salami, H., Lagerman, C. E., Harris, P. R., McDonald, M. A., Bommarius, A. S., Rousseau, R. W., & Grover, M. A. (2020). Model development for enzymatic reactive crystallization of β-lactam antibiotics: A reaction–diffusion-crystallization approach. Reaction Chemistry & Engineering, 5(11), 2064–2080. https://doi.org/10.1039/D0RE00276C
  4. Schembecker, G., & Tlatlik, S. (2003). Process synthesis for reactive separations. Chemical Engineering and Processing: Process Intensification, 42(3), 179–189. https://doi.org/10.1016/S0255-2701(02)00087-9
  5. Tang, H. Y., Rigopoulos, S., & Papadakis, G. (2023). On the effect of turbulent fluctuations on precipitation: A direct numerical simulation – population balance study. Chemical Engineering Science, 270, 118511. https://doi.org/10.1016/j.ces.2023.118511


A Machine Learning approach for subvisible particles classification in biotherapeutic formulations

Louis Joos, Anouk Brizzi, Eva-Maria Herold, Erica Ferrari, Cornelia Ziegler

Sanofi, France

Processing steps on biotherapeutics can cause the appearance of Subvisible Particles (SvPs), which are considered a critical quality attribute (CQA) by pharmaceutical regulatory agencies [2,3]. SvPs are usually split between Inherent Particles (protein particles), Intrinsic Particles (silicone oil droplets, glass, cellulose, etc.) and Extrinsic Particles (e.g. clothes fibers). Discrimination between proteinaceous and other particles (generally of size ranging from 2 to 100 µm) is key in assessing product stability and potential risk factors such as immunogenicity or negative effects on the quality and efficacy of the drug product [1].

According to USP <788> [4], the preferred method for determination of SvPs is light obscuration (LO). However, LO is not able to distinguish between particles of different compositions. In contrast, Flow Imaging Microscopy (FIM) has demonstrated high sensitivity in detecting and imaging SvPs [5].

In this study we develop a novel experimental and modeling workflow based on binary supervised classification, which allows a simple and robust classification of silicone oil (SO) droplets and non-silicone oil (NSO) particles. First, we generate experimental data from different therapeutic proteins exposed to various stresses, with some samples mixed with relevant impurities. Data acquisition is performed with IPAC-2 (Occhio), MFI (Protein Simple), and Flowcam (Yokogawa Fluid Imaging Technologies) microscopes that are able to extract different morphological (e.g. circularity, aspect ratio) and intensity-based (e.g. average, standard deviation) features from particle images.

Second, we train tree-based models, particularly Random Forests, on tabular data extracted from the microscopes across different projects and manually labelled by expert scientists. We obtain 97% global accuracy, compared with 85% for the previously used baseline filters, even for particles in the 2–5 µm range, which are usually the hardest to classify.

Finally, we extend these models to multi-class problems with new types of particles (glass and cellulose) with good accuracy (93%), suggesting that this methodology is suited to efficiently classifying many different particle types. Future perspectives include the exploration of new particle classes (air bubbles, protein aggregates, etc.) and a complementary Deep Learning multilabel approach to classify particles by direct image analysis when multiple particles overlap in the same image.
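
As a sketch of the tabular classification step (not the actual Sanofi datasets or pipeline), a random forest trained on synthetic morphological and intensity features could look like the following; the feature distributions are invented placeholders.

# Sketch of the tabular classification step with scikit-learn: a random forest
# separating silicone-oil (SO) droplets from non-silicone-oil (NSO) particles
# based on morphological/intensity features. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic features: circularity, aspect ratio, mean intensity, intensity std.
so  = rng.normal([0.95, 1.05, 120, 10], [0.03, 0.05, 15, 3], size=(n, 4))
nso = rng.normal([0.70, 1.60, 90, 25],  [0.10, 0.30, 20, 8], size=(n, 4))
X = np.vstack([so, nso])
y = np.array([1] * n + [0] * n)   # 1 = SO droplet, 0 = NSO particle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))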

References

[1] Sharma, D. K., & King, D. (2012). Flow imaging microscopy for the characterization of protein particles. Journal of Pharmaceutical Sciences, 101(10), 4046-4059.

[2] International Conference on Harmonisation (ICH). (1999). Q6B: Specifications: Test Procedures and Acceptance Criteria for Biotechnological/Biological Products.

[3] International Conference on Harmonisation (ICH). (2004). Q5E: Comparability of Biotechnological/Biological Products Subject to Changes in Their Manufacturing Process.

[4] United States Pharmacopeia (USP). (2017). <788> Particulate Matter in Injections.

[5] Zölls, S., Weinbuch, D., Wiggenhorn, M., Winter, G., Jiskoot, W., Friess, W., & Hawe, A. (2013). Flow imaging microscopy for protein particle analysis—a comparative evaluation of four different analytical instruments. AAPS Journal, 15(4), 1200-1211.



EVALUATION OF THE CONTROLLABILITY OF DISTILLATION WITH MULTIPLE REACTIVE STAGES

Josué Julián Herrera Velazquez1,3, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2, Salvador Hernández1

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

Different energy alternatives to fossil fuels have been proposed to reduce the greenhouse gas emissions that have contributed to today's deteriorating climate conditions. Despite the collective efforts behind the development of these technologies, it remains a challenge to reduce production costs so that they are accessible to the largest sector of the population. Silicon-based photovoltaic (PV) solar panels are an alternative for electricity generation in homes and industries, and most of their cost lies in obtaining the raw material. Intensified schemes, such as reactive distillation, have been proposed to produce silane (SiH4) while reducing the costs and energy demand of the process. Zeng et al. (2017) propose dividing the reactive section into several reactive zones and, in a parametric study, found that three reactive zones greatly benefit the energy needs of the unit operation. Alcántara-Maciel et al. (2022) solved this problem by stochastic optimization using dynamic limits, evaluating the cases of one, two, and three reactive zones in the same study to determine the optimal number of reactive zones by minimizing the Total Annual Cost (TAC), finding that the best solution is a single reactive zone. Post-optimization controllability studies have been carried out for the reactive distillation column to produce silane with a single reactive zone, but not for the case of multiple reactive zones. Techniques have been proposed to evaluate the controllability of steady-state processes based on previous open-loop analysis and using Singular Value Decomposition (SVD) with simplified first-order transfer function models (Cabrera et al., 2018). In this work, three Pareto solutions from a previous multi-objective study of this reactive distillation column, optimized for the reboiler load (Qh) and the TAC, will be evaluated, as well as the case proposed by Zeng et al. (2017). The condition number from the rigorous SVD-based analysis, with the quantitative measurement proposal Ag+gsm (Cabrera et al., 2018), will be compared with approximations based on first- and second-order transfer function models for a positive perturbation, so that the feasibility of using these simplified models to evaluate steady-state controllability within a multi-objective global optimization of this complex scheme can be assessed. The results of this study show that first- and second-order transfer functions can be effectively used to predict steady-state controllability for frequencies of up to 100 rad/h, which is a new proposal, since only first-order transfer functions are reported in the literature. Simplifying rigorous transfer function models to first- or second-order models helps reduce noise in the stochastic optimization process, as well as helping to achieve shorter computational times in this novel implementation of steady-state controllability evaluation.



Optimization of steam power systems in industrial parks considering distributed heat supply and auxiliary steam turbines

Lingwei Zhang, Yufei Wang

China University of Petroleum (Beijing), China, People's Republic of

Enterprises in industrial parks have dispersed locations and various heat demands. In a steam power system with centralized heat supply, the heat demands of all consumers are satisfied by the energy station, leading to high steam delivery costs caused by individual distant enterprises. Additionally, considering the trade-off between distance-related costs and cascaded heat utilization, the number of steam levels is limited; thus, some consumers are supplied with heat at a higher temperature than required, resulting in low energy efficiency. To deal with these problems, an optimization model of steam power systems in industrial parks considering distributed heat supply and auxiliary steam turbines is proposed. Field-erected boilers can independently supply heat to consumers to avoid excessive pipeline costs, while auxiliary steam turbines are used for the further depressurization of steam received by consumers, which increases the electricity generation capacity and improves the temperature matching between heat supply and demand. A mixed-integer nonlinear programming model is established for the problem, and the steam power systems are optimized with the objective of minimizing the total annual cost (TAC). In this model, the influence of different numbers of steam levels is considered. The saturation temperatures of the steam levels are the continuous decision variables, and the arrangements of field-erected boilers and auxiliary turbines are determined by binary variables. A case study illustrates that there is an optimal number of steam levels that minimizes the TAC of the system. The selective installation of field-erected boilers and auxiliary steam turbines for consumers can effectively reduce the cost of the pipeline network, increase the income from electricity generation, and significantly decrease the TAC.
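
As a toy sketch with the flavor of such a formulation (binary installation decisions for field-erected boilers, continuous steam flows, and a TAC objective), a Pyomo example could look like the following; cost coefficients and demands are assumed placeholders, and the real model is a far larger MINLP.

# Toy Pyomo sketch: binary decision on installing a local field-erected boiler
# at each consumer, continuous steam flows from the central station, and a
# total-annual-cost objective. All coefficients are placeholder assumptions.
import pyomo.environ as pyo

consumers = ["C1", "C2", "C3"]
demand = {"C1": 10.0, "C2": 25.0, "C3": 5.0}       # t/h steam demand (assumed)
pipe_cost = {"C1": 2.0, "C2": 1.0, "C3": 6.0}      # $/t delivered, distance-related (assumed)
boiler_capex = 50.0                                 # annualized cost per local boiler (assumed)
central_cost, local_cost = 3.0, 4.5                 # $/t steam produced (assumed)

m = pyo.ConcreteModel()
m.y = pyo.Var(consumers, domain=pyo.Binary)         # 1 = install local boiler
m.f_central = pyo.Var(consumers, domain=pyo.NonNegativeReals)
m.f_local = pyo.Var(consumers, domain=pyo.NonNegativeReals)

def balance_rule(m, c):
    return m.f_central[c] + m.f_local[c] == demand[c]
m.balance = pyo.Constraint(consumers, rule=balance_rule)

def local_only_if_built(m, c):
    return m.f_local[c] <= demand[c] * m.y[c]
m.link = pyo.Constraint(consumers, rule=local_only_if_built)

m.tac = pyo.Objective(
    expr=sum(boiler_capex * m.y[c]
             + (central_cost + pipe_cost[c]) * m.f_central[c]
             + local_cost * m.f_local[c] for c in consumers),
    sense=pyo.minimize,
)
# pyo.SolverFactory("glpk").solve(m)   # any MILP solver would do for this toy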



Conceptual Modular Design and Optimization for Continuous Pharmaceutical Processes

Tuse Asrav, Merlin Alvarado-Morales, Gurkan Sin

Technical University of Denmark, Denmark

The pharmaceutical industry faces challenges such as high manufacturing costs, strict regulations, and rapidly evolving product portfolios, driving the need for efficient, flexible, and adaptive manufacturing processes. To meet these demands, the industry is shifting toward multiproduct, multiprocess facilities, increasing interest in modular process designs [1].

Modular design is characterized by using standardized, interchangeable, small-scale process units or modules that can be easily rearranged by exchanging units and numbering up for fast adaptation to different products and production scales. The modular approach not only supports the efficient design of multiproduct facilities but also allows for the continuous optimization of processes as new data and technologies become available.

This study presents a systematic framework for the conceptual design of modular pharmaceutical facilities, which allows for reduced engineering cycles, faster time-to-market, and enhanced adaptability to changing market demands. In brief, the proposed framework consists of 1) module definition, 2) process flowsheet design, 3) simulation-based optimization, and 4) uncertainty analysis and robustness evaluation.

The application of the framework is demonstrated through case studies involving the manufacturing of two widely used active pharmaceutical ingredients (APIs), ibuprofen and paracetamol, with distinct production steps following the modular design approach. The standard modules, such as the reaction and separation modules, are defined in terms of equipment type, number, and size. The process flowsheets are then designed and optimized by combining these standardized modules. Simulation-based optimization and uncertainty analysis are integrated to quantify key metrics such as process efficiency, robustness, and flexibility.

This study demonstrates how modular systems offer a cost-efficient, adaptable solution that integrates continuous production with high flexibility. The approach allows pharmaceutical facilities to quickly reconfigure processes to meet changing demands, providing an innovative pathway for future developments in pharmaceutical manufacturing. The results also highlight the importance of integrating stochastic optimization in modular design to enhance robustness and ensure confidence in performance by accounting for uncertainties.

References

[1] Bertran, M. O., & Babi, D. K. (2023). Exploration and evaluation of modular concepts for the design of full-scale pharmaceutical manufacturing facilities. Biotechnology and Bioengineering. https://doi.org/10.1002/bit.28539



Design of a policy framework in support of the Transformation of the Dutch Industry

Jan van Schijndel, Rutger deMare, Nort Thijssen, Jim van der Valk Bouman

QuoMare

The size of the Dutch Energy System in 2022 was approximately 2700 PJ. Some 14% (380 PJ) qualifies as renewable heat & power and the remaining 86% as fossil energy (natural gas, crude oil and coal). A network of power-generation units, refineries and petrochemical complexes converts fossil resources into heat (700 PJ), power (400 PJ), transportation fuels (500 PJ) and high-value chemicals (400 PJ). Some 700 PJ is lost in conversion and transport. The corresponding CO2 emission level in 2022 was some 150 million tonnes of CO2-equivalents.

Transformation of this system into a Net Zero CO2 system by 2050 calls for both decarbonisation and recarbonisation of fossil resources into renewable resources: renewable heat (waste heat, geo- & aqua-thermal heat), renewable power (solar & wind) and renewable carbon (biomass, waste, and CO2).

QuoMare developed a decision support framework TDES to support this Transformation of the Dutch Energy System.

TDES is based on Mixed-Integer Multi-Period Linear Programming mathematics.

TDES evaluates the impact of integer decisions (decarbonization, recarbonisation & infrastructure investment options) on a year-to-year basis simultaneously with continuous variables (unit capacities & interconnecting flows) subject to various constraints (like CO2 targets over time and infrastructure limitations). The objective is to maximize the net present value of the accumulated energy system margin over the 2020-2050 time-horizon.

TDES can help policy makers to develop policies for ‘optimal transition pathways’ that will deliver a Net Zero energy system by 2050.

Decarbonisation of heat & power is well underway. Over 50% of current Dutch power demand already comes from solar and wind. Large-scale waste heat recovery and distribution projects are under development, and residential heat pumps are reaching a high penetration rate. High-level heat supplied to industry by green and blue H2 is projected to be viable from 2035 onwards.

However, the recarbonisation of fossil-based transportation fuels (in particular for shipping and aviation) and chemicals is hampered by the lack of robust business cases. Without a line of sight towards healthy production margins, companies are reluctant to invest in the technologies (such as electrolysis, pyrolysis, gasification, oxy-firing, fermentation, Fischer-Tropsch synthesis, methanol synthesis, auto-thermal reforming and dry reforming) needed to produce the envisaged 800 PJ (some 20 million tonnes) of renewable carbon-based transportation fuels and high-value chemicals by 2050.

The paper will address which set of meaningful policies would steer the energy system transformation towards a Net Zero system in 2050. Such an optimal set of policy measures will be a combination of CO2 emission constraints (prerequisite for any license to operate), CO2 tax levels (imposed on top of ETS), and capital investment subsidies (to ensure a level playing field in cost terms for the production of renewable carbon based transportation fuels and chemicals).

The novelty of this work relates to the application of a MP-MILP approach to the development of optimal policies to drive the energy transition at a country wide level.



Data-Driven Deep Reinforcement Learning for Greenhouse Temperature Control

Farhat Mahmood, Sarah Namany, Rajesh Govindan, Tareq Al-Ansari

College of Science and Engineering, Hamad bin Khalifa University, Qatar

Efficient temperature control in closed greenhouses is essential for optimal plant growth, especially in arid regions where extreme conditions challenge micro-climate management. Maintaining the optimum temperature range directly influences healthy plant development and overall agricultural productivity, impacting crop yields and financial outcomes. However, the greenhouse in the present case study fails to maintain the optimum temperature as it operates based on predefined settings, limiting its ability to adapt to dynamic climate conditions. To address this, the objective is to develop a control system that maintains an ideal temperature range within the greenhouse and dynamically adapts to fluctuating external conditions, ensuring consistent climate control. Therefore, this study presents a control framework using Deep Deterministic Policy Gradient (DDPG), a model-free deep reinforcement learning algorithm, to optimize temperature control in the closed greenhouse. A deep neural network is trained using historical data collected from the greenhouse to accurately represent the nonlinear behavior of the greenhouse system under varying conditions. The DDPG algorithm learns optimal control strategies by interacting with a simulated greenhouse environment, continuously adapting without needing an explicit system dynamics model. Results from the study demonstrate that, for a three-day simulation period, the DDPG-based control system achieves superior temperature control compared to the existing system, with a mean squared error of 0.1459 °C and a mean absolute error of 0.2028 °C. The proposed control system promotes healthier plant growth and improved crop yields, contributing to better resource management and sustainability in controlled-environment agriculture.
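
As a sketch of the surrogate-modelling step only (training a network on historical state/action records to predict the next greenhouse temperature, with which an RL agent can then interact), a scikit-learn example could look like the following; the data and the one-step dynamics below are synthetic placeholders, not the measured greenhouse records, and the actual study uses a deep network with a DDPG agent.

# Sketch of the data-driven surrogate: a neural network regressor trained on
# (state, action) records to predict the next inside temperature. Data are
# synthetic placeholders standing in for the historical greenhouse records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
T_in = rng.uniform(18, 32, n)        # current inside temperature (degC)
T_out = rng.uniform(25, 45, n)       # outside temperature (degC)
cooling = rng.uniform(0, 1, n)       # normalized cooling actuation (the "action")
# Synthetic one-step dynamics standing in for the real greenhouse response.
T_next = T_in + 0.1 * (T_out - T_in) - 3.0 * cooling + rng.normal(0, 0.2, n)

X = np.column_stack([T_in, T_out, cooling])
X_tr, X_te, y_tr, y_te = train_test_split(X, T_next, test_size=0.2, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                         random_state=0).fit(X_tr, y_tr)
print("surrogate R^2 on held-out data:", surrogate.score(X_te, y_te))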



Balancing modelling complexity and experimental effort for conducting QbD on lipid nanoparticles (LNPs) systems

Daniel Vidinha Batista, Marco Seabra Reis

University of Coimbra, CERES, Department of Chemical Engineering

Abstract

Lipid nanoparticles (LNPs) efficiently encapsulate nucleic acids while ensuring successful intracellular delivery and endosomal escape. Therefore, there is increasing interest from the industrial and research communities in exploring the LNPs’ unique properties as a promising drug carrier. To ensure the successful and safe synthesis of these LNPs while maintaining their quality attributes, the pharmaceutical industry typically recommends following a Quality by Design (QbD) approach. One of the key aspects of the QbD approach is the use of Design of Experiments (DOE) to establish the Design Space that guarantees the quality requirements of the LNPs are met [1]. However, before defining a design space, several DOE stages may be necessary for screening the important factors, modelling the system’s behaviour accurately, and finding the optimal operational conditions. As each experiment is expensive due to the high cost of the formulation components, there is a strong concern and interest in making this process as efficient and informative as possible.

In this context, an in silico study provides a suitable test bed to analyse and compare the different DOE strategies that may be adopted and collect insights about a reasonable number of experiments to accommodate within a designated budget, while ensuring a statistically valid analysis. Therefore, we have conducted a systematic study based on the work developed by Karl et al. [2], who provided a simulation model of the LNP synthesis, referred to as the Golden Standard (GS) Model. This model was derived and codified in the JMP Pro software using a recent methodology called self-validated ensemble model (SVEM). The model is quite complex in its structure, and was considered unknown throughout the study.

The objective of this study is to ascertain the efficacy of different DOE alternatives for a selected number of effects. A variety of models with increasing complexity was considered. These models are referred to as Estimated Models (EM) and vary from main-effects-only models to models contemplating third-order non-linear mixture effects. In the development of the EM models, some predictors of the GS model were deliberately not considered, to better reproduce the realistic situation of model mismatch and experimental limitations; this is the case for the type of ionizable lipid and the total flow rate.

We have considered the molar ratio of each lipidic component (Ionizable lipid, Structural lipid, Helper lipid and PEG lipid) and the N/P ratio as factors, and, for the responses, the potency and average size of the LNP. These responses were contaminated with additive white noise with different signal-to-noise ratios (SNRs) to better reflect the reality of having different levels of reproducibility of the measured responses.

Our results revealed that different responses require quite different model structures, with distinct levels of complexity. However, the number of experiments suggested is approximately the same, of the order of 30, a fact that may be anticipated for DOEs with similar factors under analysis.
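
As a tiny in-silico sketch in this spirit (a two-level full factorial on three formulation factors, a noisy simulated response, and a least-squares fit of a main-effects-plus-interaction model), the example below uses an invented "true" response as a placeholder, not the Golden Standard model from the study.

# Tiny in-silico DOE sketch: two-level full factorial on three factors, noisy
# simulated response, and an ordinary-least-squares fit of a main-effects plus
# one-interaction model. The ground truth is a placeholder.
import itertools
import numpy as np

factors = ["ionizable_ratio", "PEG_ratio", "NP_ratio"]
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

rng = np.random.default_rng(2)
def true_response(x):
    # Placeholder ground truth with one interaction, plus measurement noise.
    return 100 + 8 * x[0] - 5 * x[1] + 3 * x[2] + 4 * x[0] * x[1] + rng.normal(0, 1)

y = np.array([true_response(run) for run in design])

# Model matrix: intercept, main effects, and the x0*x1 interaction.
X = np.column_stack([np.ones(len(design)), design, design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["intercept"] + factors + ["ionizable*PEG"], coef):
    print(f"{name:>15s}: {c:6.2f}")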

References:

  1. Gurba-Bryśkiewicz et al. Biomedicines. 2023;11(10):2752. doi:10.3390/biomedicines11102752
  2. Karl et al. JoVE. 2023;(198):65200. doi:10.3791/65200


Decarbonizing Quebec’s Chemical Sector: Bridging sector disparities with simplified modeling

Mélissa Lemire, Marie-Hélène Talbot, Sylvain Larose

Laboratoire des technologies de l’énergie, Institut de Recherche d’Hydro-Québec, Canada

Electric utilities are at a critical juncture where they must proactively anticipate energy consumption and power demand over extended time horizons to support the energy transition. These projections are essential for meeting the expected surge in renewable electricity as we shift away from natural gas to eliminate greenhouse gas (GHG) emissions. Given that a significant portion of these emissions comes from industrial processes, utilities need a comprehensive understanding of the thermal energy requirements of various processes within their service regions in order to navigate this transition effectively.

In Quebec, the chemical sector includes 19 major GHG emitters, each with annual emissions exceeding 10,000 tCO2 equivalent, operating across 11 distinct application areas, excluding refineries from this analysis. The sector is undergoing rapid transformation driven by the closure of aging facilities and the establishment of new plants focused on battery production and renewable fuel generation. The latter aims at decarbonising “hard-to-abate” sectors, which pose significant challenges. It is imperative to establish a clear methodology for characterising the chemical sector to accurately estimate the energy requirements for decarbonisation.

A thorough analysis of existing literature and reported GHG emissions serves as a foundation for estimating the actual energy requirement of each major emitter. Despite the diversity of industrial processes, a trend emerges: alternative end-use technologies can often be identified based on the required thermal temperature levels. With this approach, alternative end-use technologies that closely align with the specific heat levels needed are considered. Furthermore, two key performance indicators for decarbonisation scenarios have been developed. These indicators enable the comparison of various technological solutions and estimation of the uncertainties associated with different decarbonisation pathways. We introduce the Decarbonisation Efficiency Coefficient (DEC), which evaluates the reduction of fossil fuel consumption per unit of renewable energy and relies on the first law efficiency of both existing fossil-fuel technologies and alternative renewable energy technologies. The second indicator, the GHG Performance Indicator (GPI), assesses the reduction of greenhouse gas emissions per unit of renewable energy required, providing a clear metric for assessing the most efficient technological solutions to support decarbonisation efforts.
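
As a rough illustration of how such indicators can be expressed (one plausible reading of the description above, not necessarily the authors' exact definitions), with $Q$ the useful heat delivered, $E_{\mathrm{ren}}$ the renewable energy required, $\Delta E_{\mathrm{fossil}}$ the avoided fossil energy, $\Delta \mathrm{GHG}$ the avoided emissions, and $\eta$ the first-law efficiencies:

$$\mathrm{DEC} = \frac{\Delta E_{\mathrm{fossil}}}{E_{\mathrm{ren}}} = \frac{Q/\eta_{\mathrm{fossil}}}{Q/\eta_{\mathrm{alt}}} = \frac{\eta_{\mathrm{alt}}}{\eta_{\mathrm{fossil}}}, \qquad \mathrm{GPI} = \frac{\Delta \mathrm{GHG}}{E_{\mathrm{ren}}}.$$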

In a status quo market, the decarbonisation of this sector could yield a significant reduction in primary energy consumption, ranging from 10% to 61%, depending on the technologies implemented. Alternative end-use technologies include heat pumps, electric boilers with reheaters, biomass boilers, and green hydrogen utilisation, each presenting unique advantages for a sustainable industrial landscape. Ultimately, for Quebec’s energy transition to succeed, electric utilities must adapt to evolving market conditions and enhance their understanding of industrial energy requirements. By accurately estimating the electricity required for effective decarbonisation, utilities can play a pivotal role in shaping a sustainable future.



Optimizing Green Hydrogen Supply Chains in Portugal: Balancing Economic Efficiency and Water Sustainability

João Imaginário1, Tânia Pinto Varela1, Nelson Chibeles-Martins2,3

1CEG-IST, IST UL, Portugal; 2NOVA Math, NOVA FCT, Portugal; 3Mathematics Department, NOVA FCT, Portugal

As the world intensifies efforts to reduce carbon emissions and combat climate change, green hydrogen has emerged as a pivotal solution for sustainable energy transition. Produced using renewable sources like hydro, wind, and solar energy, green hydrogen holds immense potential for clean energy systems. Portugal, with its abundant renewable resources, is well-positioned to become a leader in green hydrogen production. However, the water-intensive nature of hydrogen production, especially via electrolysis, poses a challenge, particularly in regions facing water scarcity.

In Portugal, water resources are unevenly distributed, with southern regions such as Alentejo and Algarve already experiencing significant water stress. This creates a complex challenge for balancing green hydrogen development with the need to conserve water. To address this, a multi-objective optimization model for the Green Hydrogen Supply Chain (GHSC) in Portugal is proposed. This model aims to minimize both production costs and water stress, offering a more sustainable approach than traditional models that focus solely on economic efficiency.

The model leverages a meta-heuristic algorithm to explore large solution spaces, offering near-optimal solutions for supply chain design/planning. It incorporates regional water availability by analysing hydrographic characteristics of mainland Portugal, allowing for flexible decision-making that balances cost and water stress according to regional constraints. Scenario analysis is employed to evaluate different production strategies under varying conditions of water availability and demand.

By integrating these dual objectives, the model supports the design of green hydrogen supply chains that are both economically viable and environmentally responsible. This approach ensures that hydrogen production does not exacerbate water scarcity, particularly in already vulnerable regions. The findings contribute to the broader goal of creating cleaner, more resilient energy systems, providing valuable insights for sustainable energy planning and policy.

This research is a critical step in ensuring green hydrogen development aligns with long-term sustainability, offering a framework that prioritizes both economic and environmental goals.



Towards net zero carbon emissions and optimal water management within an integrated aquatic and agricultural livestock system

Amira Siniscalchi1,2, Guillermo Durand1,2, Erica Patricia Schulz1,2, Maria Soledad Diaz1,2

1Universidad Nacional del Sur, Argentine Republic; 2Planta Piloto de Ingeniería Química (PLAPIQUI)

We propose an integrated agricultural, livestock, ecohydrological and carbon capture model for the management of extreme climate events within a salt lake basin, while minimizing carbon dioxide emissions. Salt lakes are typical of arid and semiarid zones where annual evaporation exceeds rainfall or runoff. They are particularly vulnerable to climatic changes, and salt and water levels can reach critical values. The mitigation of the consequences of extreme environmental events, such as floods and droughts, has been addressed for an endorheic salt lake in previous work [1].

In the present model, the system is composed of five integrated submodels: ecohydrological, meteorological, agricultural, livestock and carbon emission/capture. In the ecohydrological model, dynamic mass balances are formulated for both a salt lake and an artificial freshwater reservoir. The meteorological model includes surrogate models for meteorological variables, based on daily historical data for air temperature, wind, relative humidity and precipitation. Based on these, wind speed profiles, radiation, vapor saturation, etc. are estimated, as required for the calculation of evaporation and evapotranspiration profiles. The agricultural submodel includes biomass growth for native trees, crops and pasture, as well as the water requirement at each life cycle stage, which is calculated as a function of tree/crop/pasture evapotranspiration and precipitation. Local data are collected for native species and soil types. Carbon capture is calculated as a function of biomass and soil type. The water requirement for cattle is calculated as a function of biomass. The proposed model also accounts for CO2 emissions associated with sowing, electrical consumption for pumps (drip irrigation for crops and pasture, and water diversion to/from the river), methane emissions (CO2-eq) from livestock, as well as CO2 sequestration by trees, pasture, crops and soil. One objective is to carry out the carbon mass balance along a given time horizon (six years or more) and to propose additional activities to achieve net zero carbon under different climate events.

An optimal control problem is proposed, in which an integral objective function aims to keep the salt lake volume (and its associated salinity, as it is an endorheic basin) at a desired value along a given time horizon, so that salinity remains at optimal values for the reproduction of valuable fish species. The control variables are the stream flowrates diverted to/from the tributary of the salt lake from/to an artificial freshwater reservoir during dry/wet periods. The resulting optimal control problem is constrained by a DAE system that represents the above-described system. It has been implemented in gPROMS (Siemens, 2024) and solved with a control vector parameterization approach.

Numerical results show that the system under study can produce meat, quinoa crops and fish, generating significant income, as well as restore native tree species under different extreme events. Net zero carbon goals are approached within the basin while performing optimal water management.
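
As a minimal illustration of the control vector parameterization idea (piecewise-constant controls over a handful of intervals, a single-state water balance, and an integral deviation objective), the toy sketch below stands in for the full DAE model solved in gPROMS; all flows, volumes and bounds are assumed placeholders.

# Toy control-vector-parameterization sketch: the diversion flowrate is
# piecewise constant over N intervals, the lake volume follows one ODE, and
# the objective penalizes deviation from a target volume. Placeholder values.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.optimize import minimize

V_target, V0 = 100.0, 140.0        # target and initial lake volume (hm3, assumed)
inflow, evap = 2.0, 1.5            # natural inflow and evaporation (hm3/day, assumed)
horizon, n_intervals = 180.0, 6    # days, number of piecewise-constant controls

def simulate(u):
    """Integrate dV/dt = inflow - evap - u_k over each control interval."""
    edges = np.linspace(0.0, horizon, n_intervals + 1)
    cost, V = 0.0, V0
    for k in range(n_intervals):
        rhs = lambda t, y: [inflow - evap - u[k]]
        sol = solve_ivp(rhs, (edges[k], edges[k + 1]), [V], dense_output=True)
        ts = np.linspace(edges[k], edges[k + 1], 20)
        cost += trapezoid((sol.sol(ts)[0] - V_target) ** 2, ts)   # integral objective
        V = sol.y[0, -1]
    return cost

u0 = np.full(n_intervals, 0.5)
res = minimize(simulate, u0, bounds=[(0.0, 3.0)] * n_intervals, method="L-BFGS-B")
print("optimal piecewise-constant diversion flows:", np.round(res.x, 3))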

References

Siniscalchi, A., Diaz, M.S., Lara, R.J. (2022). Sustainable long-term mitigation of floods and droughts in semiarid regions: Integrated optimal management strategies for a salt lake basin. Ecohydrology, 15, e2396.



A Modelling and Simulation Software for Polymerization with Microscopic Resolution

Shenhua Jiao1, Xiaowen Lin1,2, Rui Liu1, Xi Chen1,2

1State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University 310027, Hangzhou China; 2Huzhou Institute of Industrial Control Technology 313000, Huzhou China

In the domain of process systems engineering, software embedded with advanced computational methods is in great demand to enhance kinetic understanding and facilitate industrial applications. Polymer production, characterized by complex reaction mechanisms, represents a particularly intricate process industry. In this study, a scientific software package, PolymInsight, is developed for polymerization modelling and simulation with insight at microscopic resolution.

From an algorithmic perspective, PolymInsight offers high-performance solution strategies for polymerization process modelling by utilizing self-developed approaches. At the flowsheet level, the software provides an equation-oriented and a sequential-modular approach to solve for macroscopic information. At the micro-structure level, it provides users with both deterministic and stochastic algorithms to predict polymers' microscopic properties, e.g., the molecular weight distribution (MWD). Users can choose from various methods to model these indices: the stochastic method (Liu, 2023), which proposes the concept of a “buffer pool” to enable multi-step steady-state Monte Carlo simulation for complicated reactions including long-chain branching; the orthogonal collocation method (Lin, 2021), which applies a model reformulation strategy to enable the numerical solution of the large-scale system of equations for calculating the MWD in steady-state polymerizations; and an explicit analytical solution derivation method, which provides analytical expressions of the MWD for specific polymerization mechanisms including FRP with combination termination, chain transfer to polymer, and CRP with reversible reactions.

From a software architecture perspective, PolymInsight is built on a self-developed process modelling platform that allows flexible user customization and is specifically tailored for the macromolecular field. As general-purpose software, it is modularly designed, and each module supports external libraries and secondary development. Pivotal modules, including reaction components, reaction kinetics, standard units, standard streams and solution strategies, are meticulously constructed and seamlessly integrated. The software's versatility is ensured by its support for a wide range of (i) polymerization mechanisms (including Ziegler-Natta polymerization, free radical polymerization, and controlled radical polymerization), (ii) computing algorithms (including deterministic methods solving large-scale equations and stochastic methods utilizing Monte Carlo simulation), (iii) user-defined flowsheets and parameters, and (iv) extensible standard model libraries. The insights gained from this work open up opportunities for optimizing operational conditions, addressing complex computational challenges, and enabling online control with minimal requirements for specialized knowledge.
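
As a minimal, generic illustration of "microscopic resolution" via stochastic simulation (not the algorithms implemented in PolymInsight), the sketch below samples chain lengths under an assumed constant propagation probability, which yields the most-probable (Flory) distribution, and reports Mn, Mw and PDI; the propagation probability and monomer molar mass are assumed values.

# Minimal Monte Carlo illustration of a chain-length distribution: each chain
# propagates with probability p and stops otherwise, giving the most-probable
# (Flory) distribution. Not PolymInsight's algorithm; parameters are assumed.
import numpy as np

rng = np.random.default_rng(3)
p = 0.999            # propagation probability per step (assumed)
n_chains = 200_000

# Chain length is geometrically distributed when growth stops with prob. 1 - p.
lengths = rng.geometric(1.0 - p, size=n_chains)

M0 = 100.0           # monomer molar mass (g/mol, assumed)
Mn = M0 * lengths.mean()
Mw = M0 * (lengths ** 2).sum() / lengths.sum()
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {Mw / Mn:.3f}")
# For the Flory distribution, the PDI approaches 1 + p (about 2 for long chains).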

References:

Lin, X., Chen, X., Biegler, L. T., & Feng, L.-F. (2021). A modified collocation framework for dynamic evolution of molecular weight distributions in general polymer kinetic systems. Chemical Engineering Science, 237, 116519.

Liu, R., Lin, X., Armaou, A., & Chen, X. (2023). A multistep method for steady-state Monte Carlo simulations of polymerization processes. AIChE Journal, 69(3), e17978.

Mastan, E., & Zhu, S. (2015). Method of moments: A versatile tool for deterministic modeling of polymerization kinetics. European Polymer Journal, 68, 139–160.



Regularization and Uncertainty Quantification for Parameter Estimation of NRTL Models

Volodymyr Kozachynskyi1, Christian Hoffmann1, Erik Esche1,2

1Technische Universität Berlin, Process Dynamics and Operations, Straße des 17. Juni 135, 10623 Berlin, Germany; 2Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany

Accurate prediction of vapor-liquid equilibria (VLE) using thermodynamic models is critical to every step of chemical process design. The model’s accuracy and uncertainty can be quantified based on the uncertainty of the estimated parameters. The NRTL model is among the most widely used activity models. The estimation of its binary interaction parameters is usually done using a heuristic that sets the value of the nonrandomness parameter α to a value between 0.1 and 0.47. However, this heuristic can lead to an overestimation of the prediction accuracy of the final thermodynamic model, i.e., the model is actually not as reliable as the process engineer thinks. In this contribution, we present the results of the identifiability analysis of the binary VLE model [1] and argue that regularization should be used instead of simply fixing α.

In this work, the NRTL model with temperature-dependent binary interaction parameters is considered, resulting in five parameters to be estimated: the parameter α and four binary interaction parameters. Twelve binary mixtures with different azeotropic behavior, including no azeotrope and a double azeotrope, are analyzed. A standard Monte Carlo method for describing real parameter and model prediction uncertainty is modified to be used for identifiability analysis and for comparing regularization techniques. Identifiability analysis is a technique used to determine the parameters that can be uniquely estimated based on the model's sensitivity to the parameters. Four different subset-selection regularization techniques are compared: the SVD algorithm, generalized orthogonalization, forward selection, and the eigenvalue algorithm, as they use different identifiability methods to select and remove unidentifiable parameters from the estimation.

The results of our study on 12 binary mixtures show that, depending on the mixture, the number of identifiable parameters varies between 3 and 5, implying that it is crucial to use regularization to efficiently solve the underlying parameter estimation problem. According to the analysis of all mixtures, parameter α, depending on the chosen regularization technique, is usually the most sensitive parameter, suggesting that it is inadvisable to remove this parameter from the estimation – in contradiction to standard heuristics.

In addition to this identifiability analysis, the nonlinearity of the NRTL model with respect to the parameters is analyzed. The actual form of the parameter uncertainty usually indicates nonlinearity and does not follow the normal distribution, which contradicts standard assumptions. Nevertheless, the prediction accuracy estimated using the linearization assumption is sufficiently good, i.e., linearization provides at least a valid underestimation of the real model prediction uncertainty.

In the presentation, we shall demonstrate exemplarily for some of the investigated mixtures that the estimation of NRTL parameters should be performed using regularization techniques, how large the introduced bias is based on a selected regularization technique, and compare actual uncertainty to its linear estimator.
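
As a minimal sketch of the idea behind such an identifiability analysis (not the study's actual workflow, which uses temperature-dependent parameters, Monte Carlo sampling and several subset-selection algorithms), one can compute binary NRTL activity coefficients, build a finite-difference sensitivity matrix with respect to the parameters, and inspect its singular values; parameter values and the composition grid below are placeholders.

# Minimal sketch: binary NRTL activity coefficients, a finite-difference
# sensitivity matrix of the outputs w.r.t. the parameters, and its singular
# values as an identifiability indicator. Parameter values are placeholders.
import numpy as np

def nrtl_gamma(x1, tau12, tau21, alpha):
    """Activity coefficients of a binary mixture from the NRTL model."""
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

theta = np.array([0.8, 1.2, 0.3])          # tau12, tau21, alpha (assumed values)
x_grid = np.linspace(0.05, 0.95, 15)

def outputs(theta):
    g1, g2 = nrtl_gamma(x_grid, *theta)
    return np.concatenate([g1, g2])

# Finite-difference sensitivity matrix (n_outputs x n_parameters).
eps = 1e-6
J = np.column_stack([
    (outputs(theta + eps * np.eye(3)[i]) - outputs(theta)) / eps for i in range(3)
])
print("singular values of the sensitivity matrix:", np.linalg.svd(J, compute_uv=False))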

[1] Kozachynskyi V., Hoffmann C., and Esche E. 2024. Why fixing alpha in the NRTL model might be a bad idea – Identifiability analysis of a binary Vapor-Liquid equilibrium, 10.48550/arXiv.2408.07844. Preprint.

[2] Lopez, C.D.C., Barz, T., Körkel, S., Wozny, G., 2015. Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design. Computers & Chemical Engineering, 10.1016/j.compchemeng.



From Sugar to Bioethanol – Simulation, Optimization, and Process Technology in One Module

Jan Schöneberger1, Burcu Aker2

1Berliner Hochschule für Technik; 2Chemstations

The Green Processes Lab module, part of the Green Engineering study program at BHT, aims to equip students to simulate, optimize, and implement an industrially relevant sustainable process within a single semester. The selected process is bioethanol production, with a minimum purity of 99.8 wt%, using readily available supermarket feedstocks: sugar and yeast.

In earlier modules of the program, students engage with essential unit operations, including the vessel reactor (fermenter), batch distillation, batch rectification, filtration, centrifugation, dryer, and adsorber. These operations are thoroughly covered in theoretical lectures and reinforced through mathematical modeling and predefined experiments, so that students enter the module with comprehensive knowledge of their behavior. The students work in groups and are largely unrestricted in designing their process, apart from safety regulations and two artificial constraints: only existing equipment may be used, and each process step is limited to a duration of 180 minutes including set-up, shutdown, and cleaning. The groups compete to find the economically best process, i.e., the process that produces the maximum amount of bioethanol with the minimum amount of resources, namely sugar, yeast, and electricity. This turns the limit on process step duration into a major challenge, because it requires very detailed simulation and process planning.

For tackling this task, the students use commercial software, namely the flowsheet simulator CHEMCAD. This tool provides basic simulation models for unit operations and a powerful thermodynamic engine to calculate physical properties of pure substances and mixtures. However, the models must still be parametrized based on the existing equipment. Therefore, tools such as reaction rate regression and data reconciliation are used with data from previous experiments and a limited number of individually designed new experiments.
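
As a simple illustration of the reaction-rate regression step (sketched here in Python with made-up data, rather than with the CHEMCAD tools actually used in the course), a first-order sugar consumption model can be fitted to batch measurements as follows.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical batch fermentation data: time in minutes, sugar concentration in g/L
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
c_sugar = np.array([100.0, 76.0, 58.0, 44.0, 33.0, 25.0, 19.0])

def first_order(t, c0, k):
    """Simple first-order decay model c(t) = c0 * exp(-k*t)."""
    return c0 * np.exp(-k * t)

(c0_fit, k_fit), cov = curve_fit(first_order, t, c_sugar, p0=(100.0, 0.01))
print(f"fitted c0 = {c0_fit:.1f} g/L, k = {k_fit:.4f} 1/min")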

The parametrized models are then used to optimize the economic objective function. Due to the stepwise nature of the process, an overall optimization of all process parameters is extremely difficult. Instead, the groups combine different optimization approaches and try to focus on the individual processes without disregarding the other steps. This encourages a high degree of communication within the groups, because each group member is responsible for one process step.

At the end of the semester, each group successfully produced a quantifiable amount of bioethanol and documented the resources utilized throughout the process, as utility consumption was measured at each step. This data allows for the calculation of specific product costs, facilitating comparisons among groups and against commercially available bioethanol.

This work presents insights gained from the course, highlighting both the challenges and the successes. It emphasizes the importance of mathematical modelling and the difficulties in aligning modeled data with measured data. A key finding is that while the models may not perfectly reflect reality, they are essential for successful process design, particularly for inexperienced engineers transitioning from academia to industry.



Aotearoa-New Zealand’s Energy Future: A Model for Industrial Electrification through Renewable Integration

Daniel Jia Sheng Chong1, Timothy Gordon Walmsley1, Martin Atkins1, Botond Bertok2, Michael Walmsley1

1Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato; 2Szechenyi István University, Gyor, Egyetem tér 1, Hungary

Green energy carriers are increasingly proposed as the energy of the future. This study evaluates Aotearoa-New Zealand’s potential to transition to full industrial electrification and produce high-value, green hydrogen-rich compounds, all within the country’s resource constraints. At the core of this research is a regional energy transition system model, developed using the P-graph framework. P-graph is a bipartite graph designed specifically for representing complex process systems, particularly combinatorial problems. The novelty of this research lies in integrating the open-source P-graph Python library with the ArcPy library and the MERRA-2 global dataset API to conduct large-scale energy modelling.

The model integrates renewable energy and biomass resources for green hydrogen production, simulating energy transformation processes on an hourly basis. On the demand side, scenarios consider full electrification of industrial process heat through heat pumps and electrode boilers, complemented by biomass-driven technologies such as bubbling fluidised-bed reactors for biomass residues, straw, and stover, as well as biomass boilers for K-grade logs to meet heat demand. Additionally, the model accounts for projected increases in electricity consumption from the growing use of electric and hydrogen-battery hybrid vehicles, as well as existing residential and commercial energy needs. Aotearoa-New Zealand’s abundant natural wood resources emerge as a viable feedstock for downstream processes, supplying carbon sources for hydrogen-rich compounds such as methanol and urea.

The regional energy transition model framework is structured to minimise overall system costs. To optimise the logistics of biomass transportation, we use the Python-based ArcPy library to calculate cost functions based on the distance to green refineries. The model is designed to be highly dynamic, adapting to spot electricity prices and fluctuating demand across residential, commercial, and industrial sectors, particularly influenced by seasonal and weather variations. It incorporates non-dispatchable energy sources, such as wind and solar, with variable outputs, while utilising hydroelectric power as a stable baseload and energy storage solution to counter peak demand periods. The hourly solar irradiance, wind speed, and precipitation data from the MERRA-2 global dataset are coupled with the model to produce realistic and accurate capacity factors for these renewable energy sources.
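
For illustration, hourly wind capacity factors can be derived from MERRA-2 wind speeds with a generic power curve; the sketch below uses assumed turbine parameters and made-up wind speeds, not the actual dataset or turbine models of this study.

import numpy as np

def wind_capacity_factor(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Hourly capacity factor from wind speed (m/s) using a generic cubic
    power curve between cut-in and rated speed. Assumed turbine parameters,
    not the ones used in the study."""
    v = np.asarray(v, dtype=float)
    cf = np.clip((v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3), 0.0, 1.0)
    cf[(v < v_cut_in) | (v > v_cut_out)] = 0.0
    return cf

# Example: hourly MERRA-2-style wind speeds for one day (hypothetical values)
speeds = np.array([2.5, 4.0, 6.5, 8.0, 11.0, 13.5, 9.0, 5.0,
                   3.5, 7.0, 10.0, 12.5, 14.0, 6.0, 4.5, 3.0,
                   8.5, 9.5, 11.5, 12.0, 7.5, 5.5, 4.0, 2.0])
print("mean daily capacity factor:", wind_capacity_factor(speeds).mean().round(3))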

The study concludes that Aotearoa-New Zealand remains a major player in the Oceanic region with respect to energy-based chemical production. Beyond meeting domestic needs, the country has the potential to become a net exporter of sustainable fuels, comparable to conventional energy sources. This outcome is achievable through the optimisation of diverse renewable energy sources and cross-sector energy integration. The findings provide policymakers with concrete, in-depth analyses of renewable projects to guide New Zealand’s transition to a net-zero hydrogen economy.



Non-Linear Model Predictive Control for Oil Production in Wells Using Electric Submersible Pumps

Carine de Menezes Rebello1, Erbet Almeida Costa1, Marcos Pellegrini Ribeiro4, Marcio Fontana3, Leizer Schnitman2, Idelfonso Bessa dos Reis Nogueira1

1Department of Chemical Engineering, Norwegian University of Science and Technology, Norway; 2Department of Chemical Engineering, Federal University of Bahia, Polytechnic School, Bahia, Brazil; 3Department of Electrical and Computer Engineering, Federal University of Bahia, Polytechnic School, Bahia, Brazil; 4CENPES, Petrobras R&D Center, Av. Horácio Macedo 950, Cidade Universitária, Ilha do Fundão, Rio de Janeiro, Brazil

The optimization of oil production in wells lifted by Electric Submersible Pumps (ESPs) requires precise control of operational parameters, along with strict adherence to safety and efficiency constraints. The stable and safe operation of these wells is guided by physical and safety limits designed to minimize failures, extend equipment lifespan, and reduce costs associated with repairs, maintenance, and operational downtime. Moreover, maintaining operational stability not only lowers repair expenses but also mitigates revenue losses caused by unexpected equipment failures or inefficient production processes.

Process control has become a tool for reducing the frequency of constraint violations and ensuring the continuous optimization of oil production. By keeping operations within a well-defined operational envelope, operators can avoid common issues such as excessive vibrations, which may lead to premature pump wear and tear. Moreover, staying within this envelope prevents the degradation of pump efficiency over time and curbs excessive energy consumption, both of which have significant long-term cost implications.

The key lies in leveraging the available degrees of freedom to operate within the system's inherent constraints and improve operational efficiency. In the case of wells using ESPs, these degrees of freedom are primarily the ESP rotation speed (or frequency) and the opening of the production choke valve.

We propose a Non-Linear Model Predictive Control (NMPC) system tailored for a well equipped with an ESP. The NMPC framework explicitly accounts for the pump's operational limitations and effectively uses the available degrees of freedom to maximize performance. The NMPC's overarching objectives are to maximize oil production while respecting all system constraints, including both physical limitations and operational safety boundaries. This approach presents a more advanced and systematic control method than traditional PID-based systems, particularly in nonlinear, constraint-intensive environments such as oil wells.

The NMPC methodology is fundamentally based on a phenomenological model of the ESP, calibrated to predict key controlled variables accurately. These include the production flow rate and the liquid column height (HEAD). The prediction model consists of a system of three differential equations and a set of algebraic equations representing a stiff, single-phase, and isothermal system.
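
To illustrate the receding-horizon principle behind such an NMPC (not the three-ODE ESP model or the solver used in this work), a minimal sketch with a toy single-state surrogate could look as follows; all parameters, bounds, and setpoints are hypothetical.

import numpy as np
from scipy.optimize import minimize

# Toy single-state surrogate: x[k+1] = x[k] + dt * (-a*x[k] + b*u[k]).
# This is NOT the ESP model of the abstract, only a stand-in for the idea.
a, b, dt, N = 0.5, 1.0, 0.1, 20          # dynamics, step size, horizon length
u_min, u_max = 0.0, 2.0                   # e.g. bounds on a scaled ESP frequency
x_sp = 1.0                                # setpoint for the controlled variable

def simulate(x0, u_seq):
    x, traj = x0, []
    for u in u_seq:
        x = x + dt * (-a * x + b * u)
        traj.append(x)
    return np.array(traj)

def cost(u_seq, x0):
    traj = simulate(x0, u_seq)
    return np.sum((traj - x_sp) ** 2) + 1e-2 * np.sum(np.diff(u_seq) ** 2)

def nmpc_step(x0, u_guess):
    res = minimize(cost, u_guess, args=(x0,),
                   bounds=[(u_min, u_max)] * N, method="SLSQP")
    return res.x[0], res.x                # apply first move, keep warm start

# Closed loop for a few steps
x, u_guess = 0.0, np.full(N, 0.5)
for k in range(5):
    u0, u_guess = nmpc_step(x, u_guess)
    x = x + dt * (-a * x + b * u0)        # "plant" update (same toy model here)
    print(f"k={k}  u={u0:.3f}  x={x:.3f}")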

The system being modeled is a pilot plant located at the Artificial Lift Lab at the Federal University of Bahia. This pilot plant features a 15-stage centrifugal pump installed in a 32-meter-high well, circulating 3,000 liters of mineral oil within a closed-loop system.

In this setup, the controlled variables are the HEAD and the production flow, while the manipulated variables are the ESP frequency and the choke valve opening. The proposed NMPC system has been tested and has demonstrated its effectiveness in rejecting disturbances and accurately tracking setpoints. This guarantees stable and safe pump operation while optimizing oil production, providing a robust solution to the challenges associated with ESP-lifted well operations.



Life Cycle Assessment of Green Hydrogen Electrofuels in India's Transportation Sector

Ankur Singhal, Pratham Arora

IIT Roorkee, India

A transition to low-carbon fuels is integral to addressing the challenge of climate change. An essential transformation is underway in the transportation sector, one of the primary sources of global greenhouse gas emissions. Electrofuels, such as methanol synthesized via power-to-fuel technology, have the potential to decarbonize the sector. This paper presents a comprehensive life cycle assessment of electrofuels, focusing on the production of synthetic methanol from renewable hydrogen obtained by water electrolysis coupled with carbon from the direct air capture (DAC) process. It examines the whole value chain, from raw material extraction to fuel combustion in transportation applications, to provide a cradle-to-grave analysis. The results of this impact assessment will offer a fuller comparison of the merits and shortcomings of the electrofuel pathway relative to conventional methanol. A sensitivity study will determine how influential factors such as electrolyzer performance, carbon capture efficiency, and energy mix affect the overall environmental impact. The study will compare synthetic methanol with traditional methanol across categories such as global warming potential, energy consumption, acidification, and eutrophication, to assess the prospects for scaling synthetic methanol for the transportation industry.



Probabilistic Design Space Identification for Upstream Bioprocesses under Limited Data Availability

Ranjith Chiplunkar, Syazana Mohamad Pauzi, Steven Sachio, Maria M Papathanasiou, Cleo Kontoravdi

Imperial College London, United Kingdom

Design space identification and flexibility analysis are essential in process systems engineering, offering frameworks that enhance the optimization of operating conditions[1]. Such approaches can be broadly categorized into model-based and data-driven methods[2-4]. For complex systems like upstream biopharma processes, developing reliable mechanistic models is challenging, either due to a limited understanding of the underlying mechanisms or the need for simplifying assumptions to reduce model complexity. As a result, data-driven approaches often prove more practical from a modeling perspective. However, they often require extensive experimentation, which can be expensive and impractical, leading to sparse datasets[3]. Such sparsity also means that the data uncertainty becomes a significant factor that needs to be addressed.

We present a novel framework that utilizes a data-driven model to overcome the aforementioned challenges, even with sparse experimental data. Specifically, we utilize Gaussian Process (GP) models to account for real-world data uncertainties, enabling a probabilistic characterization of the design space—a critical generalization beyond traditional deterministic approaches. The framework has two primary components. First, the GP model predicts key performance indicators (KPIs) based on input process variables, allowing for the probabilistic modeling of these KPIs. Based on process performance constraints, a probability of feasibility is calculated, which indicates the likelihood that the constraints will be satisfied for a given input. After achieving a probabilistic design space characterization, the framework conducts a comprehensive quantitative analysis of process flexibility. Alpha shapes are employed to define deterministic boundaries at various confidence levels, allowing for the quantification of volumetric process flexibility and acceptable operational ranges. This enables a detailed examination of trade-offs between process flexibility, performance, and confidence levels.
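
A minimal sketch of the probability-of-feasibility idea, assuming a scikit-learn Gaussian process and made-up osmolality/temperature data (not the experimental dataset of this study), could look like this.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical sparse data: inputs (osmolality, temperature), output = yield KPI.
X = np.array([[300, 36.5], [330, 36.5], [360, 36.5],
              [300, 34.0], [330, 34.0], [360, 34.0]], dtype=float)
y = np.array([1.0, 1.3, 1.1, 0.8, 1.2, 0.9])    # arbitrary illustrative values
y_min = 1.0                                      # feasibility constraint: yield >= y_min

kernel = RBF(length_scale=[30.0, 1.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Probability of feasibility on a grid of candidate operating points
osmo = np.linspace(290, 370, 9)
temp = np.linspace(33.5, 37.0, 8)
grid = np.array([[o, t] for o in osmo for t in temp])
mu, sd = gp.predict(grid, return_std=True)
p_feasible = 1.0 - norm.cdf(y_min, loc=mu, scale=np.maximum(sd, 1e-9))
print(grid[np.argmax(p_feasible)], p_feasible.max().round(2))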

The proposed framework is applied to an experimental dataset designed to study the effects of cell culture osmolality and temperature on the yield and purity of a monoclonal antibody product produced in Chinese hamster ovary cell fed-batch cultures. The results help balance purity-yield trade-offs through probabilistic characterizations that guide further experimentation and process design. The framework visualizes results through probabilistic heat maps and flexibility metrics to provide actionable insights for process development scientists. Being primarily data-driven, the framework is transferable to other types of bioprocesses.

References

[1] Yang, W., Qian, W., Yuan, Z. & Chen, B. 2022. Perspectives on the flexibility analysis for continuous pharmaceutical manufacturing processes. Chinese Journal of Chemical Engineering, 41, 29-41.

[2] Ding, C. and M. Ierapetritou. 2021. A novel framework of surrogate-based feasibility analysis for establishing design space of twin-column continuous chromatography. Int J Pharm, 609: p.121161.

[3] Kasemiire, A., Avohou, H. T., De Bleye, C., Sacre, P. Y., Dumont, E., Hubert, P. & Ziemons, E. 2021. Design of experiments and design space approaches in the pharmaceutical bioprocess optimization. European Journal of Pharmaceutics and Biopharmaceutics, 166, 144-154.

[4] Sachio, S., C. Kontoravdi, and M.M. Papathanasiou. 2023. A model-based approach towards accelerated process development: A case study on chromatography. ChERD, 197: p.800-820.

[5] M. M. Papathanasiou & C. Kontoravdi. 2020. Engineering challenges in therapeutic protein product and process design. Current Opinion in Chemical Engineering, 27, 81-88.



Study of the Base Case in a Comparative Analysis of Recycling Loops for Sustainable Aviation Fuel Synthesis from CO2

Antoine Rouxhet, Alejandro Morales, Grégoire Léonard

University of Liège, Belgium

In the context of the fight against global warming, the EU launched the ReFuelEU Aviation plan as part of the Fit for 55 package. Within this framework, sustainable aviation fuels are identified as a key tool for reducing hard-to-abate CO2 emissions. Power-to-fuel processes offer the potential to synthesise a wide range of fuels by replacing crude oil with captured CO2 as the carbon source. This CO2 is combined with hydrogen produced through water electrolysis, utilizing the reverse water-gas shift (RWGS) reaction:

CO2 + H2 ⇌ CO + H2O   ΔH°(298.15 K) = +41 kJ/mol CO2 (1)

The purpose of this reaction is to convert the CO2 molecule into a less stable one, making it easier to transform into complex molecules, such as the hydrocarbon chains that constitute kerosene. This conversion is carried out through the Fischer-Tropsch (FT) reaction:

n CO + (2n+1) H2 ⇌ CnH2n+2 + n H2O   ΔH°(298.15 K) = -160 kJ/mol CO (2)

In previous work, two kinetic reactor models were developed in Aspen Custom Modeler: one for the RWGS reaction [1] and one for the FT reaction [2]. The next step consists of integrating both models into a single process model built in Aspen Plus. This process includes both reaction units and the subsequent separation steps, yielding three main product fractions: the heavy hydrocarbons, the middle distillates, which contain the kerosene-like fraction, and the light hydrocarbons along with unreacted gases.

This work is part of a broader study aimed at comparing different recycling loops for this process. Indeed, the literature proposes various configurations for recirculating unreacted gases, some of which include additional conversion units to transform light FT gases into reactants. However, there is currently a lack of comprehensive comparisons of these options from both technical and economic perspectives. The objective is therefore to compare these configurations to determine the one best suited for kerosene production.

In particular, this work presents the results for the base case, i.e., the recycling of the gaseous phase leaving the separation units without any transformation of this stream. Three options are considered for the entry point of this recycled stream: at the inlet of the RWGS reactor, at the inlet of the FT reactor, or at both inlets. The present study compares these options on the basis of carbon and energy efficiencies.
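
For reference, the carbon efficiency used in such comparisons is typically the ratio of carbon leaving in the target product fraction to carbon fed as CO2; the short sketch below illustrates the bookkeeping with purely hypothetical stream values.

# Hypothetical stream data, for illustration only (kmol/h of carbon atoms).
carbon_in_co2_feed = 100.0            # carbon entering with the fresh CO2 feed
carbon_in_middle_distillate = 38.0    # carbon leaving in the kerosene-like fraction
carbon_in_heavies = 12.0
carbon_in_lights_purged = 6.0         # carbon lost in the purge after recycling

carbon_efficiency = carbon_in_middle_distillate / carbon_in_co2_feed
overall_carbon_to_products = (carbon_in_middle_distillate + carbon_in_heavies) / carbon_in_co2_feed
print(f"carbon efficiency to kerosene fraction: {carbon_efficiency:.1%}")
print(f"carbon efficiency to all liquid products: {overall_carbon_to_products:.1%}")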

The next step involves adding a transformation unit to the recycling loop, such as a partial combustion unit. This would allow the conversion of light FT gases into process reactants, thereby improving overall efficiency. An economic comparison of the various options is also a goal of the study.

[1] Rouxhet, A., & Léonard, G. (2024). The Reverse Water-Gas Shift Reaction as an Intermediate Step for Synthetic Jet Fuel Production: A Reactor Sizing Study at Two Different Scales. Computer Aided Chemical Engineering, 53, 685-690. doi:10.1016/B978-0-443-28824-1.50115-0

[2] Morales Perez, A., & Léonard, G. (2022). Simulation of a Fischer-Tropsch reactor for jet fuel production using Aspen Custom Modeler. In L. Montastruc & S. Negny, 32nd EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING. Amsterdam, Netherlands: Elsevier. doi:10.1016/B978-0-323-95879-0.50051-5



Electricity Bidding with Variable Loads

Iiro Harjunkoski1,2

1Hitachi Energy Germany AG; 2Aalto University, Finland

The ongoing and planned electrification of many industries and processes also means that any disturbance or change in production directly requires countermeasures at the power grid level to maintain stability. As the electricity infrastructure is already facing increasing volatility on the supply side due to the growing number of renewable energy source (RES) generation units, it is important to tap the potential of electrification as well. Processes will have a strong impact and can also help balance the RES fluctuations, ensuring that demand and supply are balanced at all times. This opportunity has already been recognized [1], and here we further elaborate on the concept by adding a battery energy storage system (BESS) to support the balancing between production targets and grid stability.

Electricity bidding offers more opportunities than merely forecasting the electricity load. Large consumers must participate in the electricity markets ahead of time, and their energy bids affect the market clearing. This mechanism allows power plants to be scheduled to ensure sufficient supply, but with increasing RES participation it becomes a challenge to deliver as promised, and here industrial loads could potentially also help maintain stability. The main vehicle for dealing with unplanned supply variations is ancillary services [3], through which the consumer commits to a potential increase or reduction of its energy consumption if called upon. This raises the practical question of how much the process industries can plan for such volatility, as they must primarily focus on delivering to their own customers.

A common option, also chosen by many RES unit owners, is to invest in a BESS to act as a buffer between the consuming load and the power grid. This can also shield the process owner from unwanted and infeasible power volatility, which can have an immense effect on the more electricity-dependent processes. With such an energy storage system in place, there is an option to use it for offering ancillary services as well as for participating in energy arbitrage trading [4]. However, the key is how to operate such a combined system profitably while also taking into account the uncertainty of electricity prices. In this paper we extend the approach in [5], where a number of energy and ancillary service products are co-optimized taking into account the uncertainty in price developments. The previous approach was aimed at RES/BESS owners, where the forecasted load was relatively stable and mainly focused on keeping the system running. Here the load behavior is no longer a parameter but a variable, linked to a process schedule that is co-optimized with the bidding decisions. Following the concepts in [6], we compare cases with various levels of uncertainty (forecasting quantiles) and different sizes of BESS systems using a simplified stochastic approach, which reduces to a deterministic optimization approach when only one scenario is available. The example process is modeled using the resource task network [7] approach.
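
As a highly simplified illustration of the arbitrage component of such a co-optimization (a single deterministic price scenario, perfect round-trip efficiency, and no ancillary-service products or process schedule), a linear program over the hourly charge and discharge decisions of the BESS could be set up as follows; prices and battery parameters are made up.

import numpy as np
from scipy.optimize import linprog

# Hypothetical day-ahead prices (EUR/MWh) and a small BESS, illustration only.
price = np.array([35, 30, 28, 25, 27, 40, 60, 75, 70, 55, 45, 40,
                  38, 36, 42, 55, 80, 95, 85, 65, 50, 45, 40, 37], dtype=float)
T = len(price)
p_max, e_max, soc0 = 5.0, 10.0, 5.0   # MW charge/discharge limit, MWh capacity, initial SoC

# Decision variables x = [charge_1..T, discharge_1..T]; profit = sum(price*(d - c))
c_obj = np.concatenate([price, -price])   # linprog minimizes, so cost = price*c - price*d
L = np.tril(np.ones((T, T)))              # cumulative-sum operator for the state of charge
A_ub = np.vstack([np.hstack([L, -L]),     # SoC_t <= e_max
                  np.hstack([-L, L])])    # SoC_t >= 0
b_ub = np.concatenate([np.full(T, e_max - soc0), np.full(T, soc0)])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
print(f"optimal arbitrage profit: {-res.fun:.1f} EUR")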



Sodium bicarbonate production from CO2 captured in waste-to-energy plants: an Italian case-study

Laura Annamaria Pellegrini1, Elvira Spatolisano1, Giorgia De Guido1, Elena Riva Redolfi2, Mauro Corradi2, Davide Alberti3, Adriano Carrara3

1Politecnico di Milano, Italy; 2Acinque Ambiente Srl, Italy; 3a2a S.p.A., Italy

Waste-to-energy (WtE) plants, despite offering a sustainable solution to both waste management and energy production, contribute significantly to greenhouse gas emissions (Kearns, 2019). Therefore, integration with CO₂ capture technologies represents a promising approach to enhance sustainability, enabling both waste reduction and climate change mitigation (Otgonbayar and Mazzotti, 2024). Once this CO2 is captured from the flue gas, it can be converted into high value-added products, following circular economy principles. Key conversion technologies include chemical, electrochemical, or biological methods for CO₂ valorization to methanol, syngas, plastics, minerals, or fuels. However, challenges remain regarding the cost-effective implementation of these solutions at commercial scale. Research efforts in this respect are focused on improving efficiency and reducing costs, to allow for the process scale-up to the industrial level.

One of the viable alternatives for carbon dioxide utilization in the waste-to-energy context is its conversion into sodium bicarbonate (NaHCO₃). NaHCO₃, commonly known as baking soda, is often used for waste-to-energy flue gas treatment to abate harmful pollutants such as sulfur oxides (SOₓ) and acidic gases such as hydrogen chloride (HCl). Hence, in situ bicarbonate production from captured carbon dioxide can be an interesting solution for simultaneously lowering the plant's environmental impact and improving the overall economic balance.

To explore sodium bicarbonate production as an alternative for carbon dioxide utilization, its production from sodium carbonate (Na₂CO₃) is analyzed with reference to an existing waste-to-energy context in Italy (Moioli et al., 2024). The technical assessment of the process is performed in Aspen Plus V14®. The inlet CO2 flowrate is fixed to guarantee a bicarbonate output of about 30% of the annual need of the waste-to-energy plant. The effects of the Na2CO3/CO2 ratio (in the range 0.8-1.2 mol/mol) and of temperature (in the range 35-45°C) are analyzed. Performances are evaluated considering the energy consumption for each of these cases. Outlet waste streams as well as water demand are minimized by proper integration between process streams. Direct and indirect CO2 emissions are evaluated to verify the process viability. As a result, optimal operating conditions are identified, in view of the pilot plant engineering and construction.

Given the encouraging outcomes and the ease of integration with existing infrastructure, the potential of carbon dioxide conversion to bicarbonate is demonstrated, showing that it can become a feasible CO2 utilization choice within the waste-to-energy context.

References

Kearns, D. T., 2019. Waste-to-Energy with CCS: A pathway to carbon-negative power generation. ©Global CCS Institute.

Otgonbayar, T., Mazzotti, M., 2024. Modeling and assessing the integration of CO2 capture in waste-to-energy plants delivering district heating. Energy 290, 130087. https://doi.org/10.1016/j.energy.2023.130087.

Moioli, S., De Guido, G., Pellegrini, L.A., Fasola, E., Redolfi Riva, E., Alberti D., Carrara A., 2024. Techno-economic assessment of the CO2 value chain with CCUS applied to a waste-to-energy Italian plant. Chemical Engineering Science 287, 119717.



A Decomposition Approach for Operable Space Maximization

Alberto Saccardo1, Marco Sandrin1,2, Constantinos C. Pantelides1,2, Benoît Chachuat1

1Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom; 2Siemens Industry Software, London W6 7HA, United Kingdom

Model-based design of experiments (MBDoE) is a powerful methodology for improving parameter precision and thus optimising the development of predictive mechanistic models [1]. By leveraging the system knowledge embedded in a mathematical model structure, MBDoE aims to maximise experimental information while minimising experimental time and resources. Recent developments in MBDoE have enabled the computation of robust campaigns of parallel experiments [2], which could in turn be applied repeatedly in a sequential design. Effort-based methods are particularly suited to the design of parallel experiments. They proceed by discretising the experimental design space into a set of candidate experiments and determine the optimal number of replicates (or efforts) for each, aiming to maximise the information content of the overall campaign.

A challenge with MBDoE is that its success ultimately depends on the assumed model structure, which can introduce epistemic errors when the model presents a large structural mismatch. Traditional MBDoE methods rely on Fisher information matrix (FIM)-derived metrics (e.g., the D-optimality criterion), which implicitly assume a correct model structure [3], making them prone to suboptimality in the case of significant structural mismatch. Although such mismatch is common in engineering models, the impact of structural uncertainty on MBDoE has received less attention in the literature than parametric uncertainty [3].

Inspired by [4], we propose to address this issue by appending a secondary, space-filling criterion to the main FIM-based criterion in a bi-objective optimisation framework. The idea is for the space-filling criterion to promote alternative experimental campaigns that explore the experimental design space more broadly, yet without significantly compromising their predicted information content. Within an effort-based approach, we compute such a space-filling criterion as a (minimal or average) distance between the selected experiments in the discretised experimental space and maximise it alongside a D-optimality criterion. We can furthermore apply gradient search to refine the effort-based discretization in a subsequent step [5].
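
Both criteria can be evaluated cheaply for a candidate campaign. The sketch below computes a D-optimality value and a simple average-distance space-filling score for a given effort vector over discretised candidates, using toy sensitivities rather than the microalgae model of the case study.

import numpy as np

def campaign_criteria(candidates, sensitivities, efforts):
    """D-optimality (log det of the effort-weighted FIM) and a simple
    space-filling score (average pairwise distance between candidates that
    receive non-zero effort). Illustrative only; the scaling and constraints
    of the actual effort-based formulation are omitted."""
    fim = sum(e * np.outer(s, s) for e, s in zip(efforts, sensitivities))
    d_opt = np.linalg.slogdet(fim + 1e-9 * np.eye(fim.shape[0]))[1]
    selected = candidates[np.asarray(efforts) > 0]
    if len(selected) < 2:
        return d_opt, 0.0
    dists = [np.linalg.norm(a - b) for i, a in enumerate(selected) for b in selected[i + 1:]]
    return d_opt, float(np.mean(dists))

# Hypothetical 1-D design space with two parameters and 5 candidate experiments
cand = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
sens = [np.array([1.0, x[0]]) for x in cand]          # toy sensitivity vectors
print(campaign_criteria(cand, sens, efforts=[2, 0, 0, 0, 2]))
print(campaign_criteria(cand, sens, efforts=[1, 1, 0, 1, 1]))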

We benchmark the proposed bi-criterion approach against a standard D-optimality approach for a microalgae cultivation system, whereby the (inaccurate) model describes nitrogen consumption with a simple Monod model and the ground truth is simulated using the Droop model.

References

[1] G. Franceschini, S. Macchietto, 2008. Model-based design of experiments for parameter precision: State of the art. Chem Eng Sci 63:4846–4872.

[2] K. P. Kusumo, K. Kuriyan, S. Vaidyaraman, S. García-Muñoz, N. Shah, B. Chachuat, 2022. Risk mitigation in model-based experiment design: a continuous-effort approach to optimal campaigns. Comput Chem Engg 159:107680.

[3] M. Quaglio, E. S. Fraga, F. Galvanin, 2018. Model-Based Design of Experiments in the Presence of Structural Model Uncertainty: An Extended Information Matrix Approach. Chem Engin Res Des 136:129–43.

[4] Q. Chen, R. Paulavičius, C. S. Adjiman, S. García‐Muñoz, 2018. An Optimization Framework to Combine Operable Space Maximization with Design of Experiments. AIChE J 64(11):3944–57.

[5] M. Sandrin, B. Chachuat, C. C. Pantelides, 2024. Integrating Effort- and Gradient-Based Approaches in Optimal Design of Experimental Campaigns. Comput Aided Chem Eng 53:313–18.



Non-invasive Tracking of PPE Usage in Research Lab Settings using Computer Vision-based Approaches: Challenges and Solutions

Haseena Sikkandar, Sanjeevrajan Nagavelu, Pradhima Mani Amudhan, Babji Srinivasan, Rajagopalan Srinivasan

Indian Institute of Technology, Madras, India

Personal Protective Equipment (PPE) protects researchers working in laboratory environments involving biological, chemical, medical, and other hazards. Therefore, monitoring PPE compliance in academic and industrial laboratories is vital. CSB case studies have reported significant injuries and fatalities in university lab settings, highlighting the importance of proper PPE and safety protocols to prevent accidents (https://www.csb.gov/videos). This paper develops a real-time PPE monitoring system using computer vision to ensure lab users wear essential gear such as lab coats, safety gloves, bouffant caps, goggles, masks, and shoes (Arfan et al., 2023).

Current literature indicates substantial advancements in computer vision and object detection for PPE monitoring in industrial settings, though challenges persist due to variable lighting, background noise, and PPE occlusion (Protik et al., 2021). However, consistent real-time effectiveness in dynamic settings still requires further development of more robust solutions.

The non-intrusive detection of PPE usage in laboratory settings requires (1) a suitable hardware system comprising cameras, along with (2) computer vision-based algorithms, which are essential for effective monitoring.

In the hardware system design, the strategic placement of cameras in the donning area, rather than inside the laboratory, is recommended, since it captures individuals as they equip their PPE before entering hazardous zones. Additionally, environments with significant height variations and lighting variability greatly affect detection accuracy. The physical occlusion of PPE items, either by the individual's body or by surrounding objects, further complicates the task of ensuring full compliance. Computer vision-based algorithms face challenges with overlapping objects, which can lead to tracking and identification errors. Variations in individual postures, movements, and PPE appearances also reduce detection accuracy. This problem is exacerbated if the AI model is trained on a limited dataset that does not accurately represent real-world diversity. Additionally, static elements such as posters, or dynamic elements, can be misclassified as PPE, leading to a high rate of false positives.

To address the hardware system design issues, a solution involves strategically placing multiple cameras to cover the entire process, eliminating blind spots, and confirming correct PPE usage before individuals enter sensitive zones. In computer vision-based algorithms, the system uses adaptive image processing techniques to tackle variable lighting, occlusion, and posture variations. Software enhancements include multi-object tracking and pose estimation algorithms, trained on diverse datasets for accurate PPE detection. Incorporating edge cameras that utilize decentralized computing significantly enhances the operational efficiency of real-time PPE detection systems.
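
A minimal sketch of the detection step, assuming an off-the-shelf detector (here the ultralytics YOLO interface) fine-tuned on PPE classes, is given below; the weights file, class names, and image path are placeholders, not the system developed in this work.

from ultralytics import YOLO  # assumes the ultralytics package and a PPE-trained model are available

REQUIRED_PPE = {"lab_coat", "gloves", "goggles"}   # placeholder class names

def check_donning_frame(model, frame):
    """Run the detector on one donning-area frame and report missing PPE items.
    Class names must match those used when the model was fine-tuned."""
    result = model(frame, verbose=False)[0]
    detected = {result.names[int(c)] for c in result.boxes.cls}
    return REQUIRED_PPE - detected

model = YOLO("ppe_detector.pt")                    # hypothetical fine-tuned weights
missing = check_donning_frame(model, "donning_area.jpg")
print("missing PPE:", missing or "none")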

Future conceptual challenges in PPE detection systems include the ability to effectively detect multiple individuals. Each laboratory may also require customized PPE based on specific safety requirements. These variations necessitate the development of highly adaptable AI models capable of recognizing a wide range of PPE and distinguishing between different individuals, even in crowded settings, to ensure compliance and safety.

References:

  • M. Arfan et al., “Advancing Workplace Safety: A Deep Learning Approach for PPE Detection using Single Shot Detector”, International Workshop on Artificial Intelligence and Image Processing, Indonesia, pp. 127-132, 2023.
  • Protik et al., “Real-time PPE Detection Using YOLOv4 and TensorFlow,” IEEE Region 10 Symposium, Jeju, Korea, pp. 1-6, 2021.


Integrating batch operations involving liquid-solid mixtures into continuous process flows

Valeria González Sotelo, Pablo Monzón, Soledad Gutiérrez Parodi

Universidad de la República, Facultad de Ingeniería, Uruguay

While there has been growth in specialized simulators for batch processes, the prevailing trend is towards simple cycle modeling. Batch processes can then be integrated into an overall flowsheet, with the output stream properties calculated based on the established reaction conditions (time, temperature, etc.). To guarantee a continuous flow of material, an accumulation tank is usually incorporated.

Moreover, a wide range of heterogeneous batch processes exist within industry. Examples include sequencing batch reactors in wastewater treatment, solid-liquid extraction processes, adsorption reactors, lignocellulosic biomass hydrolysis, and grain soaking. When processing solid-liquid mixtures, or multiphase mixtures in general, phase separation can be exploited, allowing for savings in resources such as raw materials or energy. In fact, these processes enable the separate discharge of the liquid and solid phases, providing flexibility to selectively retain either phase or a fraction thereof. Sequencing batch reactors retain microbial flocs while periodically discharging a portion of the treated effluent. By treating lignocellulosic biomass with a hot, pressurized aqueous solution, lignin and pentoses can be solubilized, leaving cellulose as the remaining solid phase1. In this case, since cellulose is the fraction of interest, the solid can be separated and most of the liquid phase retained for processing a new batch of biomass, thus saving reagents, water, and energy.

In a heterogeneous batch process, a degree of freedom typically emerges that often becomes a decision variable in the design of these processes: the solid-to-liquid ratio (S/L), which is a critical parameter that influences factors such as reaction rate, heat and mass transfer. Partial phase retention adds a new degree of freedom, the retained fraction, to the process design.

The re-use process is thus inherently dynamic. In a traditional batch process, the time horizon for analysis corresponds to the reaction, loading, and unloading time. For re-use processes, the mass balance will cause reaction products to accumulate in the retained phase from cycle to cycle. To take this into account, the time horizon for mass balances needs to be extended to include as many cycles as necessary. Eventually, a periodic operating condition will be reached.

The primary objective of this work is to incorporate the batch-with-reuse model into flowsheets, similar to traditional batches, by identifying the periodic condition under the given process conditions. A general algorithm to simulate the periodic condition suitable for any kinetics is proposed. It could enable the coupling of these processes in a simulation flowsheet. Regarding the existence of a periodic condition, an analytical study of the involved kinetic expressions and illustrative examples will be included.
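
A minimal sketch of such an algorithm, using an illustrative first-order dissolution kinetics rather than any specific process chemistry, iterates the batch cycle with partial liquid retention until the cycle-to-cycle change vanishes.

def run_cycle(c_liq_in, c_solid_feed, s_over_l, k, t_batch, n_steps=200):
    """Simulate one batch: solid species dissolving into the liquid phase with
    first-order kinetics dC/dt = k*(Cs - C). Purely illustrative kinetics."""
    c = c_liq_in
    cs = c_solid_feed * s_over_l          # equivalent saturation driven by the fresh solid
    dt = t_batch / n_steps
    for _ in range(n_steps):
        c = c + dt * k * (cs - c)
    return c

def find_periodic_state(retained_fraction, c_fresh_liquid=0.0, tol=1e-6, max_cycles=200, **kw):
    """Iterate batch cycles, retaining a fraction of the liquid phase,
    until the cycle-to-cycle change in liquid concentration vanishes."""
    c_start = c_fresh_liquid
    for cycle in range(1, max_cycles + 1):
        c_end = run_cycle(c_start, c_solid_feed=1.0, **kw)
        c_next = retained_fraction * c_end + (1 - retained_fraction) * c_fresh_liquid
        if abs(c_next - c_start) < tol:
            return cycle, c_end
        c_start = c_next
    raise RuntimeError("no periodic condition reached")

cycles, c_periodic = find_periodic_state(retained_fraction=0.7, s_over_l=0.2, k=0.05, t_batch=60.0)
print(f"periodic condition after {cycles} cycles, end-of-batch concentration = {c_periodic:.4f}")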

1 Mangone, F., Gutiérrez, S. A Recycle Model of Spent Liquor in Pre-treatment of Lignocellulosic Biomass, Computer Aided Chemical Engineering, Volume 48, Pages 565-570, 2020, ISSN 1570-7946, ISBN 9780128233771, Elsevier, https://doi.org/10.1016/B978-0-12-823377-1.50095-1



Enhancing decision-making by prospective Life Cycle Assessment linked to Integrated Assessment Models: the roadmap of formic acid production

Marta Rumayor, Javier Fernández-González, Antonio Domínguez-Ramos, Angel Irabien

University of Cantabria, Spain

Formic acid (FA) is gaining attention as a versatile compound used both as a chemical and as an energy carrier. Currently, it is produced by a two-step fossil-based process that includes the reaction between methanol and carbon monoxide to methyl formate, which is then hydrolyzed to form FA. With growing global concerns about climate change, the exploration of new strategies to produce FA from renewable sources has never been more important than today. Several sustainable FA production pathways have emerged in the last decades, including those based on chemocatalytic and electrochemical processes. Their environmental viability has been confirmed through ex-ante life cycle assessment (LCA), provided there are enhancements in energy consumption and consumable durability.1,2 However, these studies have been conducted using static approaches, which may not accurately capture the influence of the background system's evolution, or the long-term reliability of the environmental prospects once other decarbonization pathways in the background processes are taken into account.

Identifying exogenous challenges affecting FA production due to supply changes is just as crucial as targeting the hotspots in the foreground technologies. This study aims to overcome this epistemological uncertainty by performing a dynamic life cycle assessment (d-LCA) utilizing the open-source Python Premise tool with the IMAGE integrated assessment model (IAM). A time-dependent background system was developed, aligned with prospective scenarios based on socio-economic pathways and climate change mitigation targets, and coupled with the ongoing portfolio of emerging renewable technologies together with traditional decarbonization approaches.

Given the substantial energy demands of chemocatalytic- and electro-based technologies, they could not be considered a feasible decarbonization solution under pessimistic policy scenarios. Conversely, a rapid development rate could enhance the feasibility of the electro-based pathway by 2030 within the optimistic background trajectory. A fully renewable electrolytic production approach could significantly reduce carbon emissions (by up to 70%) and fossil fuel dependence (by up to 80%) compared to conventional production by 2050. Other traditional approaches involve an intermediate decarbonization/defossilization synergy. Despite the potential of the electro-based pathway, a complete shift would involve land degradation risks. To facilitate the development of electrolyzers, prioritizing reductions in the use of scarce materials is crucial, aiming to enhance durability to 7 years by 2050. This study enables a comprehensive analysis of the portfolio of production processes, minimizing the overall impact across several regions and time horizons and interlinking them with energy-economy-climate systems.

Acknowledgements

The present work is related to the CAPTUS Project. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. J.F.-G. would like to thank the Spanish Ministry of Science and Innovation (MICIN) for the financial support of the FPU grant (19/05483).

References

(1) Rumayor, M.; Dominguez-Ramos, A.; Perez, P.; Irabien, A. Journal of CO2 Utilization 2019, 34, 490–499.

(2) Rumayor, M.; Dominguez-Ramos, A.; Irabien, A. Sustainable Production and Consumption 2019, 18, 72–82.



Towards Self-Tuning PID Controllers: A Data-Driven, Reinforcement Learning Approach for Industrial Automation

Kyle Territo, Peter Vallet, Jose Romagnoli

LSU, United States of America

As industries transition toward the digitalization and interconnectedness of Industry 4.0, the availability of vast amounts of process data opens new opportunities for optimizing industrial control systems. Traditional Proportional-Integral-Derivative (PID) controllers often require manual tuning to maintain optimal performance in the face of changing process conditions. This paper presents an automated and adaptive method for PID tuning, leveraging historical closed-loop data and machine learning to create a data-driven approach that can continuously evolve over time.

At the core of this method is the use of historical process data to train a plant surrogate model, which accurately mimics the behavior of the real system under various operating conditions. This model allows for safe and efficient exploration of control strategies without interfering with live operations. Once the surrogate model is constructed, a reinforcement learning (RL) agent interacts with it to learn the optimal control policy. This agent is trained to respond dynamically to the current state of the plant, which is defined by a comprehensive set of variables, including operational conditions, system disturbances, and other relevant measurements.
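
To illustrate the offline-tuning-on-a-surrogate idea in its simplest form, the sketch below uses a crude random-search tuner on a first-order surrogate as a stand-in for the RL agent and the data-driven surrogate of this work; all parameters are hypothetical.

import numpy as np

def simulate_pid_on_surrogate(kp, ki, setpoint=1.0, n=600, dt=0.1, tau=5.0, gain=2.0):
    """Closed-loop response of a PI controller on a first-order surrogate
    dy/dt = (gain*u - y)/tau. Returns the integral of squared error."""
    y, integral, ise = 0.0, 0.0, 0.0
    for _ in range(n):
        e = setpoint - y
        integral += e * dt
        u = np.clip(kp * e + ki * integral, 0.0, 10.0)
        y += dt * (gain * u - y) / tau
        ise += e * e * dt
    return ise

# Crude random-search "agent": sample gains, keep the best on the surrogate.
rng = np.random.default_rng(1)
best = min(((simulate_pid_on_surrogate(kp, ki), kp, ki)
            for kp, ki in rng.uniform([0.1, 0.01], [5.0, 1.0], size=(200, 2))),
           key=lambda t: t[0])
print(f"best ISE={best[0]:.3f} at Kp={best[1]:.2f}, Ki={best[2]:.2f}")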

By integrating RL into the tuning process, the system is capable of adapting to a wide range of scenarios without the need for manual intervention. The RL agent learns to adjust the PID controller parameters based on the evolving state of the system, optimizing performance metrics such as stability, response time, and energy efficiency. After the training phase, the agent is deployed online to monitor the real-time state of the plant. If any significant deviations or disturbances are detected, the RL agent is called upon to make real-time adjustments to the PID controller, ensuring that the process remains optimized under new conditions.

One of the unique advantages of this approach is its ability to continuously update and refine the surrogate model and RL agent over time. As the plant operates, real-time data is collected and integrated into the historical dataset, allowing the models to adapt to any long-term changes in the process. This continuous learning capability makes the system highly resilient and scalable, ensuring optimal performance even in the face of new and unforeseen operating conditions.

By combining data-driven modeling with reinforcement learning, this method provides a robust, adaptive, and automated solution for PID tuning in modern industrial environments. The approach not only reduces the need for manual tuning and oversight but also maximizes the use of available process data, aligning with the principles of Industry 4.0. As industrial systems become increasingly complex and data-rich, such methods hold significant potential for improving process efficiency, reliability, and sustainability.



Energy integration of an intensified biorefinery scheme from waste cooking oil to produce sustainable aviation fuel

Ma. Teresa Carrasco-Suárez1, Araceli Guadalupe Romero-Izquierdo2

1Faculty of Engineering, Monash University, Australia; 2Facultad de Ingeniería, Universidad Autónoma de Querétaro, Mexico

Sustainable aviation fuel (SAF) has been proven a viable alternative for reducing the CO2 emissions derived from aviation sector activities, supporting the sector's sustainable growth. However, the reported SAF processes are not economically competitive with fossil-derived jet fuel; thus, strategies to reduce their economic burden have captured the interest of researchers and industry. In this sense, in 2022 Carrasco-Suárez et al. studied the intensification of the SAF separation zone of a biorefinery scheme based on waste cooking oil (WCO), which allowed a 3.07% reduction in CO2 emissions relative to the conventional processing scheme, while also lowering the operating costs of steam and cooling water services. Despite these improvements, the WCO biorefinery scheme is not economically viable and has high energy requirements. For this reason, in this work we present the energy integration of the whole WCO biorefinery scheme, including the intensification of all separation zones involved in the scheme, using Aspen Plus V.10.0. The energy integration of the WCO biorefinery scheme was addressed using the pinch point methodology to minimize its energy requirements. The energy integration (EI-PI-S) results are presented in the form of indicators to compare them with the conventional scheme (CS) and the intensified scheme before energy integration (PI-S). The defined indicators were: total annual cost (TAC), energy investment per unit of energy delivered by the products (EI-P), energy investment per unit mass of the main product (EI-MP, SAF as main product), and CO2 emissions per unit mass of the main product (CO2-MP). According to the results, EI-PI-S shows the best indicators relative to CS and PI-S, reducing the steam and cooling water requirements by 14.34% and 31.06% relative to PI-S; CO2 emissions are reduced by 13.85% and 14.13% relative to CS and PI-S, respectively. However, the TAC for EI-PI-S is 0.5% higher than that of PI-S. The studied integrated and intensified WCO biorefinery scheme emerges as a feasible option to produce SAF and other biofuels, meeting the principle of minimum energy requirements and improving its economic performance.
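
For readers unfamiliar with the pinch targeting step, a minimal problem-table sketch is given below; the streams and the minimum temperature difference are illustrative placeholders, not the WCO biorefinery data.

def problem_table(streams, dt_min=10.0):
    """Problem-table algorithm: streams as (T_supply, T_target, CP in kW/K).
    Returns minimum hot and cold utility targets in kW. Stream data below are
    hypothetical, not the WCO biorefinery streams."""
    shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp) if ts > tt     # hot streams shifted down
               else (ts + dt_min / 2, tt + dt_min / 2, cp)           # cold streams shifted up
               for ts, tt, cp in streams]
    temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade, heat = [0.0], 0.0
    for t_hi, t_lo in zip(temps[:-1], temps[1:]):
        net_cp = sum(cp if ts > tt else -cp
                     for ts, tt, cp in shifted
                     if min(ts, tt) <= t_lo and max(ts, tt) >= t_hi)
        heat += net_cp * (t_hi - t_lo)
        cascade.append(heat)
    q_hot = max(0.0, -min(cascade))       # minimum hot utility
    q_cold = cascade[-1] + q_hot          # minimum cold utility
    return q_hot, q_cold

streams = [(250, 40, 1.5), (200, 80, 2.0),   # hot streams (illustrative)
           (30, 180, 1.8), (60, 220, 1.2)]   # cold streams (illustrative)
print(problem_table(streams))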

References:

M. T. Carrasco-Suárez, A.G. Romero-Izquierdo, C. Gutiérrez-Antonio, F.I. Gómez-Castro, S. Hernández, 2022. Production of renewable aviation fuel by waste cooking oil processing in a biorefinery scheme: Intensification of the purification zone. Chem. Eng. Process. - Process Intensif. 181, 109103. https://doi.org/10.1016/j.cep.2022.109103



Integrating Renewable Energy and CO₂ Utilization for Sustainable Chemical Production: A Superstructure Optimization Approach

Tianen Lim, Yuxuan Xu, Zhihong Yuan

Tsinghua University, China, People's Republic of

Climate change, primarily caused by the extensive emission of greenhouse gases, particularly carbon dioxide (CO₂), has intensified global efforts toward achieving carbon neutrality. In this context, renewable energy and CO₂ utilization technologies have emerged as key strategies for reducing the reliance on fossil fuels and mitigating environmental impacts. In this work, a superstructure optimization model is developed to integrate renewable energy networks and chemical production processes. The energy network incorporates multiple sources, including wind, solar, and biomass, along with energy storage systems to enhance reliability and minimize grid dependence. The reaction network features various pathways that utilize CO₂ as a raw material to produce high value-added chemicals such as polyglycolic acid (PGA), ethylene-vinyl acetate (EVA), and dimethyl carbonate (DMC), allowing for efficient conversion and resource utilization. The optimization is formulated as a mixed-integer linear programming (MILP) model, targeting the minimization of production costs while identifying the most efficient energy and reaction routes. This research supports the green transition of the chemical industry by optimizing a model that integrates renewable energy and CO₂ in chemical processes, contributing to more sustainable production methods.



Sustainable production of L-lactic acid from lignocellulosic biomass using a recyclable buffer: Process development and techno-economic evaluation

Donggeun Kang, Donghyeon Kim, Dongin Jung, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

With growing concerns about energy security and climate change, there is an increasing emphasis on finding solutions for sustainable development. To address this problem, using lignocellulosic biomass (LCB) to produce polymeric materials is one of the promising strategies to reduce dependence on fossil fuels. L-lactic acid (L-LA), a key monomer in biodegradable plastics, is a sustainable alternative that can be derived from LCB. The L-LA production process typically involves several technologies, such as fermentation, filtration, and distillation. Large amounts of buffer are used to maintain the proper pH during fermentation, and conventional buffers (e.g., CaCO3) are often selected because of their low cost. However, these buffers cannot be recycled efficiently, and the potential of recyclable buffers remains uncertain. In this work, we aim to develop and evaluate a novel process for sustainable L-LA production using a recyclable buffer (i.e., KOH). The process involves a series of unit operations such as pretreatment, fermentation, extraction, and electrolysis. In particular, the fermentation step is designed to achieve high yields of L-LA by maximizing the conversion of sugars to L-LA. In addition, an efficient buffer regeneration step using membrane electrolysis is implemented to recycle the buffer with minimal energy input. We then evaluated the viability of the proposed process against the conventional process based on the minimum selling price (MSP) and net CO2 emissions (NCE). The MSP for L-LA was evaluated to be 0.88 USD/kg L-LA, and the NCE was assessed to be 3.31 kg CO₂-eq/kg L-LA. These results represent a 15% reduction in MSP and a 10% reduction in NCE compared to the conventional process. Additionally, a sensitivity analysis was performed with a 20% change in production scale and in LCB composition from the reference values. The sensitivity analysis showed that the MSP varied from -4.4% to 3.6% with production scale, and from -13.0% to 19.0% with LCB composition. The proposed process, as a cost-effective and eco-friendly process, promotes biotechnology practices for the sustainable production of L-LA.




Potential of chemical looping for green hydrogen production from biogas: process design and techno-economic analysis

Donghyeon Kim, Donggeun Kang, Dongin Jung, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

Hydrogen (H₂), as the most promising alternative to conventional fossil fuel-based energy carriers, faces the critical challenge of diversifying its sources and lowering production costs. In general, there are two main technological routes for H2 production: electrolysis using renewable power and catalytic reforming of natural gas. Biogas, produced from organic waste, offers a renewable and carbon-neutral option for H₂ production, but due to its high CO2 content it requires either pre-separation of CO2 from CH4 or a catalyst with different performance before it can be used as a feed gas in existing reforming processes. Chemical looping reforming (CLR), as an advanced H₂ production system, uses an oxygen carrier as the oxidant instead of air, allowing raw biogas to be used directly in the reforming process. Recently, a number of studies on the design and analysis of the CLR process have been reported, and these technological studies have gradually established the economic feasibility of H2 production by CLR. However, for the CLR process to be deployed in the biogas treatment industry, further research is needed to comprehensively analyze the economic, environmental, and technical capabilities of CLR processes under different feed conditions, required capacities, and targeted H2 purities. This study proposes new biogas-based CLR processes and analyzes their capability from techno-economic and environmental perspectives: ⅰ) conventional CLR as a base process, ⅱ) chemical looping steam reforming (CLSR), ⅲ) chemical looping water splitting (CLWS), and ⅳ) chemical looping dry reforming (CLDR). The proposed processes consist of unit operations such as a CLR reactor, a water-gas shift reactor, a pressure swing adsorption (PSA) unit, and a monoethanolamine (MEA) sorbent-based CO₂ absorption unit. Evaluation metrics include unit production cost (UPC), net CO2 equivalent emissions (NCE), and energy efficiency to compare economic, environmental, and technical performance, respectively. Each process is simulated using the commercial process simulator Aspen Plus to obtain mass and energy balance data. The oxygen carrier to fuel ratio and the heat exchanger network (HEN) are optimized through thermodynamic analysis to ensure efficient redox reactions, maximize heat recovery, and achieve autothermal conditions. As a result, we comparatively analyze the economic and environmental capability of the proposed processes by identifying the major cost drivers and CO2 emission contributors. In addition, a sensitivity analysis is performed over various scenarios to provide technical solutions that improve the economic and environmental performance, supporting the real-world implementation of the CLR process.



Data-Driven Soft Sensors for Process Industries: Case Study on a Delayed Coker Unit

Wei Sun1, James G. Brigman2, Cheng Ji1, Pratap Nair2, Fangyuan Ma1, Jingde Wang1

1Beijing University of Chemical technology, China, People's Republic of; 2Ingenero Inc. 4615 Southwest Freeway, Suite 320, Houston TX 77027, USA

Research on data-driven soft sensors has been extensively conducted, yet reports of successful industrial applications are still notably scarce. The reason can be attributed to the variable operating conditions and frequent disturbances encountered during real-time process operation. Industrial data are often nonlinear, dynamic, and highly unbalanced, which poses major challenges in capturing the key characteristics of the underlying processes. Aiming at this issue, this work presents a comprehensive solution for industrial applications of soft sensors, including feature selection, feature extraction, and model updating.

Feature selection aims to identify variables that are both independent of each other and have a significant impact on the performance of concern, including quality and safety. It not only helps in reducing the dimensionality of the data to simplify the model, but also improves prediction performance. Process knowledge can be utilized to initially screen variables; correlation and redundancy analysis then has to be employed, because information redundancy not only increases the computational load of modeling, but also significantly affects prediction accuracy. Therefore, a mutual information-based relevance-redundancy algorithm is introduced for feature selection in this work, in which the relevance and redundancy among process variables are evaluated through a comprehensive correlation function and ranked according to their importance using a greedy search to obtain the optimal variable set [1]. Feature extraction is then performed to capture internal features of the optimal variable set and build the association between latent features and output variables. Considering the complexity of industrial processes, deep learning techniques are often leveraged to handle the intricate patterns and relationships within the data. Long Short-Term Memory (LSTM) networks, a specific type of recurrent neural network (RNN), are particularly well suited for this task due to their ability to capture long-term dependencies in sequential data. In industrial processes, many variables exhibit temporal correlations. LSTM networks can effectively model these temporal dependencies by maintaining a memory state that allows them to learn from sequences of data over extended periods. Meanwhile, a differential unit is embedded in the latent layer of the LSTM network in this work to simultaneously handle the short-term nonstationary features caused by process disturbances [2]. Once trained, the model is updated during online application to incorporate slow drifts in equipment and reaction agents. Some quality-related data become available only after a delay relative to real-time measurements, but can still be utilized to fine-tune the model parameters, ensuring sustained prediction accuracy over an extended period. To verify the effectiveness of this work, a case study on a delayed coker unit is investigated. The results demonstrate promising long-term prediction performance for tube metal temperature, indicating the potential of this work for industrial application.
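
A much-simplified stand-in for the relevance-redundancy selection of [1] (a greedy mutual-information ranking with a redundancy penalty, applied to synthetic data) could look as follows.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Hypothetical process data: 6 candidate variables, one quality target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=500)        # redundant copy of variable 0
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

relevance = mutual_info_regression(X, y, random_state=0)
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    # Greedy pick: highest relevance penalised by redundancy with already-selected variables
    def score(j):
        if not selected:
            return relevance[j]
        red = np.mean([mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0] for s in selected])
        return relevance[j] - red
    best = max(remaining, key=score)
    if score(best) <= 0:
        break
    selected.append(best)
    remaining.remove(best)
print("selected variable indices:", selected)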

[1] Tao, T., Ji, C., Dai, J., Rao, J., Wang, J. and Sun, W. Data-based Health Indicator Extraction for Battery SOH Estimation via Deep Learning, Journal of Energy Storage, 2024

[2] Ji, C., Ma, F., Wang, J., & Sun, W. Profitability Related Industrial-Scale Batch Processes Monitoring via Deep Learning based Soft Sensor Development, Computers and Chemical Engineering, 2022



Retrofitting the AP-X LNG Process Through Mixed Refrigerant Composition Variation: A Sensitivity Analysis Towards a Decarbonization Objective

Mutaman Abdulrahim, Saad Al-Sobhi, Fares Almoamoni

Chemical Engineering Department, Qatar University, Qatar

Despite the promising outlook for the LNG market as a cost-effective energy carrier, associated GHG emissions remain an obstacle toward the net-zero emissions target. This study focuses on the AP-X LNG process, investigating the potential for decarbonization through optimization of the mixed refrigerant (MR) composition. The process simulation is carried out in Aspen HYSYS v12.1 for the large-scale AP-X LNG process, with the Peng-Robinson equation of state as the fluid package. Several reported studies have incorporated ethylene into their MR cycle instead of ethane, which might result in different MR volumes and energy requirements. Different refrigerant compositions are examined through the Aspen HYSYS optimizer, aiming to identify an optimal MR composition that minimizes environmental impact and maximizes profitability without compromising the efficiency and performance of the process. An energy, exergy, economic, and environmental (4E) assessment will be performed to obtain key performance indicators such as specific power consumption, exergy efficiency, and cost of production. This work will contribute to retrofitting activities and sustainability of existing AP-X-based plants, offering insights into pathways for reducing the carbon footprint of the AP-X process.



Performance Evaluation of Gas Turbine Combined Cycle Plants with Hydrogen Co-Firing Under Various Operating Conditions

Hyeonrok Choi1,2, Won Yang1, Youngjae Lee1, Uendo Lee1, Changkook Ryu2, Seongil Kim1

1Korea Institute of Industrial Technology, Korea, Republic of (South Korea); 2School of Mechanical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

In response to global efforts on climate change, countries are advancing low-carbon strategies and aiming for carbon neutrality. To reduce CO2 emissions, fossil fuel power plants are integrating co-firing and combustion technologies centered on carbon-free fuels. Hydrogen has emerged as a promising fuel option, especially for gas turbine combined cycle (GTCC) plants when co-fired with natural gas. Because the Wobbe Index (WI) values of hydrogen and methane are similar, only minimal modifications are required to existing gas turbine nozzles. Furthermore, hydrogen's wide flammability limits allow stable operation even at elevated fuel-air ratios. Gas turbines are also adaptable to changes in ambient conditions, which enables them to accommodate the output variations and operational changes associated with hydrogen co-firing. Hydrogen, having distinct combustion characteristics compared to natural gas, affects gas turbine operation and alters the properties of the exhaust gases. The increased water vapor fraction from hydrogen co-firing results in a higher specific heat capacity of the exhaust gases and a reduced flow rate, leading to changes in turbine power output and efficiency compared to methane combustion. These changes impact the heat transfer behavior of the Heat Recovery Steam Generator (HRSG) in the bottoming cycle, thereby affecting the overall thermal performance of the GTCC plant. Since gas turbine operation varies with seasonal changes in temperature and humidity, it is essential to evaluate the impact of hydrogen co-firing on thermal performance across different seasonal conditions.
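
Since the argument above hinges on the similarity of the Wobbe Index of hydrogen and methane, the minimal sketch below computes WI = HHV / sqrt(relative density to air) for both fuels; the heating values and molar masses are approximate textbook figures, not values from the study.

```python
# Illustrative Wobbe Index comparison for H2 and CH4 (approximate property values).
# WI = HHV / sqrt(relative density to air); similar WI means similar burner heat input.
import math

M_AIR = 28.96  # kg/kmol, molar mass of air
fuels = {
    # name: (higher heating value [MJ/Nm3], molar mass [kg/kmol]) -- approximate values
    "H2":  (12.7, 2.016),
    "CH4": (39.8, 16.04),
}

for name, (hhv, molar_mass) in fuels.items():
    rel_density = molar_mass / M_AIR
    wobbe = hhv / math.sqrt(rel_density)
    print(f"{name}: Wobbe Index ~ {wobbe:.1f} MJ/Nm3")
```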

This study developed an in-house code to evaluate gas turbine performance during hydrogen co-firing and to assess the HRSG and steam turbine cycle based on the heat transfer mechanism, focusing on the impact on thermal performance across different seasonal conditions. Hydrogen co-firing effects on GTCC plant thermal performance were assessed under various ambient conditions. Three ambient cases (-12°C, RH 60%; 5°C, RH 60%; and 32°C, RH 70%) were analyzed for two scenarios: one with fixed Turbine Inlet Temperature (TIT) and one with fixed power output. A 600 MWe-class GTCC plant model consists of two F-class gas turbines and one steam turbine. Compressor performance maps and a turbine choking equation were used to analyze operating point and isentropic efficiency variations. The HRSG model, developed from heat exchanger geometric data, provided results for gas and water-steam side temperatures and heat transfer rates. The GTCC plant models were validated based on manufacturer data for design and off-design conditions.

The study performed process analysis to predict GTCC plant thermal performance and power output under hydrogen co-firing. Thermodynamic and off-design models of the gas turbine, HRSG, and steam turbine were used to analyze changes in exhaust temperature, flow rate, and composition, along with corresponding bottom cycle output variations. The effects of seasonal conditions on thermal performance under hydrogen co-firing were analyzed, providing a detailed evaluation of its impact on GTCC plant efficiency and output across different seasons. This analysis provides insights into the effects of hydrogen co-firing on GTCC plant performance across seasonal conditions, highlighting its role in hydrogen applications for combined cycle plants.



Modelling of Woody Biomass Gasification for Process Optimization

Yu Hui Kok, Yasuki Kansha

The University of Tokyo, Japan

In recent decades, public awareness of climate change has increased significantly due to the accelerating rate of global warming. To align with the Paris Agreement and the 2023 "Green Transformation (GX) Basic Policy", the use of biomass instead of fossil fuels for power generation and biofuel production has increased (Zhou & Tabata, 2024). Biomass gasification is widely used for biomass conversion, as this thermochemical process can satisfy various needs such as producing heat, electricity, fuels and chemical synthesis (Situmorang et al., 2020). To date, extensive research has been conducted on biomass gasification, particularly focusing on reaction models of the process. These models enable computationally efficient predictions of the yield and composition of various gas and tar species, making it feasible to simulate complex reactor configurations without compromising accuracy. However, existing models are too complex to apply in control systems or to optimize process operating conditions effectively, limiting their practical use for industrial applications. To address this, a simple reaction model for biomass gasification was developed in this research. To analyze the gasification reactions of the system and evaluate the model, two feedstocks, Japanese cedar and waste cardboard, were used in steam gasification experiments to gain insight into gasifier behaviour. The reaction model was developed by combining a biomass gasification equilibrium model with the experimental data, and it simulates woody biomass gasification in AspenTech's Aspen Plus chemical process simulator. The model was validated by comparing simulation results with available literature data and the experimental data. As a case study, the model was used for process optimization, examining the effect of varying key operating parameters of the steam gasifier, such as gasification temperature, biomass moisture content and steam-to-biomass ratio (S/B), on conversion performance. The experimental results show that Japanese cedar gives a higher syngas yield and H2/CO ratio than cardboard gasification, indicating a more promising feedstock for conversion to biofuel and bioenergy. The optimal operating condition for maximizing syngas yield was found to be a gasifier temperature of 850°C and an S/B of 2. The process simulation model predicts syngas composition with an absolute error below 4%. This study supports the future development of a control system able to capture the complex interactions between the factors influencing gasifier performance and to optimize them for improved efficiency and scalability in industrial applications.
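
To illustrate the kind of operating-parameter study described above, the sketch below scans gasifier temperature and S/B ratio over a grid; the `syngas_yield` function is a dummy placeholder standing in for the Aspen Plus model and is not fitted to the authors' data.

```python
# Illustrative parametric sweep over gasifier temperature and steam-to-biomass (S/B) ratio.
# `syngas_yield` is a toy response surface, NOT the validated Aspen Plus model.
import itertools

def syngas_yield(temp_C, sb_ratio):
    # Toy surrogate that peaks near 850 C and S/B = 2, purely for illustration.
    return 1.0 - ((temp_C - 850.0) / 200.0) ** 2 - ((sb_ratio - 2.0) / 1.5) ** 2

temperatures = [700, 750, 800, 850, 900]   # degrees C
sb_ratios = [1.0, 1.5, 2.0, 2.5]

best = max(itertools.product(temperatures, sb_ratios),
           key=lambda ts: syngas_yield(*ts))
print(f"Best conditions in the sweep: T = {best[0]} C, S/B = {best[1]}")
```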

Reference

Zhou, J., & Tabata, T. (2024). Research Trends and Future Direction for Utilization of Woody Biomass in Japan. Retrieved from https://www.mdpi.com/2076-3417/14/5/2205

Situmorang, Y. A., Zhao, Z., Yoshida, A., Abudula, A., & Guan, G. (2020). Small-scale biomass gasification systems for power generation (<200 kW class): A review. In Renewable and Sustainable Energy Reviews (Vol. 117). Elsevier Ltd. https://doi.org/10.1016/j.rser.2019.109486



Comparative analysis of conventional and novel low-temperature and hybrid technologies for carbon dioxide removal from natural gas

Federica Restelli, Giorgia De Guido

Politecnico di Milano, Italy

Global electricity consumption is projected to rise in the coming decades. To meet this growing demand sustainably, renewable energy sources and, among fossil fuels, natural gas are expected to see the most significant growth. As natural gas consumption increases, it will also become necessary to extract it from low-quality reserves, which often contain high levels of acid gases such as carbon dioxide and hydrogen sulphide [1].

The aim of this work is to compare various innovative and conventional technologies for the removal of carbon dioxide from natural gas, considered as a binary mixture of methane and carbon dioxide, with carbon dioxide contents ranging from 5 to 70 mol%. It first examines the performance of the physical absorption process using propylene carbonate as a solvent, along with a hybrid process in which it is applied downstream of low-temperature distillation. These results are then compared with previously studied technologies, including conventional chemical absorption with amines, physical absorption with dimethyl ethers of polyethylene glycol (DEPG), low-temperature distillation, and hybrid processes that combine distillation and absorption [2].

Propylene carbonate is particularly advantageous, as noted in the literature [3], when hydrogen sulphide is not present in the raw natural gas. The processes are simulated using Aspen Plus® V9.0 [4] and Aspen HYSYS® V9.0 [5]. The energy analysis is conducted using the "net equivalent methane" method, which enables duties of different kinds to be compared [6]. The processes are compared in terms of methane equivalent consumption, methane losses, and product quality, offering guidance on the optimal process based on the composition of the raw natural gas.
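
A minimal sketch of a "net equivalent methane"-style conversion is shown below: each duty is translated into the methane that would be burned to supply it. The efficiencies and heating value are illustrative assumptions, not the figures used in [6].

```python
# Minimal sketch of a "net equivalent methane" style comparison: each duty is
# converted into the mass flow of methane needed to supply it. Efficiencies below
# are illustrative placeholders, not the values of reference [6].
LHV_CH4 = 50.0          # MJ/kg, lower heating value of methane (approximate)
ETA_THERMAL = 0.90      # assumed boiler/fired-heater efficiency
ETA_ELECTRIC = 0.55     # assumed combined-cycle power generation efficiency

def equivalent_methane(thermal_duty_MW, electric_duty_MW):
    """Return kg/s of methane equivalent for given thermal and electric duties."""
    ch4_thermal = thermal_duty_MW / (ETA_THERMAL * LHV_CH4)
    ch4_electric = electric_duty_MW / (ETA_ELECTRIC * LHV_CH4)
    return ch4_thermal + ch4_electric

# Example: 20 MW of reboiler duty plus 5 MW of compression power
print(f"{equivalent_methane(20.0, 5.0):.3f} kg/s CH4 equivalent")
```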

References

[1] Langé S., Pellegrini L.A. (2016). Energy analysis of the new dual-pressure low-temperature distillation process for natural gas purification integrated with natural gas liquids recovery. Industrial & Engineering Chemistry Research 55, 7742-7767.

[2] De Guido, G., Gilardi, M., Pellegrini, L.A. (2021). Novel technologies for low-quality natural gas purification. In: Computer Aided Chemical Engineering (Vol. 50, pp. 241-246). Elsevier.

[3] Bucklin, R.W., Schendel, R.L. (1984). Comparison of Fluor Solvent and Selexol processes. Energy Prog., United States.

[4] AspenTech (2016). Aspen Plus®, Burlington (MA), United States.

[5] AspenTech (2016). Aspen HYSYS®, Burlington (MA), United States.

[6] Pellegrini, L.A., De Guido, G., Valentina, V. (2019). Energy and exergy analysis of acid gas removal processes in the LNG production chain. Journal of Natural Gas Science and Engineering 61, 303-319.



Development of Chemical Recycling System for NOx Gas from NH3 Combustion

Isshin Ino, Yuka Sakai, Yasuki Kansha

Organization for Programs on Environmental Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Japan

With the SDGs proposed by the United Nations, resource recycling is receiving increasing attention. However, some toxic yet reactive wastes are merely stabilized, using additional resources, before being released into the environment. Converting such pollutants into valuable materials by exploiting their reactivity, an approach known as chemical recycling, raises the recycling ratio in society and reduces environmental impact.

In this study, the potential of chemical recycling for nitrogen oxide (NOx) gases from ammonia (NH3) combustion was evaluated from chemical and economic points of view. Fundamental research for the system was conducted using NOx gas as the case study. As the chemical recycling route for NOx, conversion to potassium nitrate (KNO3), valuable as a fertilizer and as a raw material for gunpowder, was adopted. In this route, the high reactivity that makes NOx toxic is effectively utilized for chemical conversion.

On the other hand, most NOx gas in Japan is currently reduced to nitrogen gas by the Selective Catalytic Reduction (SCR) method using additional ammonia. The nitrogen and water products are neutral and non-toxic but cannot be utilized further. Compared with the SCR method, the adopted route has high economic potential for the chemical recycling of NOx. The conversion ratio of chemical absorption by potassium hydroxide (KOH) was experimentally measured to analyze the environmental protection and economic potential of this route, and the economic value of the system was estimated using the experimental data. The research further focuses on modeling and evaluating the NOx utilization system for NH3 combustion. The study concludes that the waste gas utilization system for NOx is feasible, as it is profit-oriented, enabling further resource utilization and the construction of a nitrogen cycle. Furthermore, applying this approach to other waste gases is promising for realizing a sustainable society.



Hybrid Model: Oxygen Balance for the Development of a Digital Twin

Marc Lemperle1, Pedram Ramin1, Julian Kager1, Benny Cassells2, Stuart Stocks2, Krist Gernaey1

1Technical University of Denmark, Denmark; 2Novonesis, Fermentation Pilot Plant

The oxygen transfer rate (OTR) is often a limiting factor when targeting maximum yield in a fermentation process. Understanding the OTR is therefore critical for improved bioreactor performance, as dissolved oxygen often becomes the limiting factor in aerobic fermentations due to its inherently low solubility in liquids such as fermentation broths [1]. With the long-term aim of establishing a digital twin framework, the initial phase of development involves mathematical modelling of the OTR in a pilot-scale bioreactor hosting the filamentous fungus Aspergillus oryzae, using an elaborate experimental design.

The experimental design is specifically tailored to the interplay of the factors influencing the OTR, e.g., airflow, back-pressure and agitation speed. Through a first set of four fermentations, a full-factorial experimental design with three factors (aeration, agitation, and pressure) and two levels (high and low) was constructed. Completing the 2³ factorial design, eight unique factor patterns and two centre points were investigated across the four fermentation processes.

Since viscosity plays a crucial role in determining mass transfer properties in the chosen fungal process, understanding its effects is essential for modelling the OTR [2]. A further set of similar experiments offered the possibility to investigate on-line viscosity measurement in the fermentation broth. A significant improvement in the description of the volumetric oxygen mass transfer coefficient (kLa), with an R² fit of 92%, together with the unsatisfactory mechanistic understanding of viscosity, therefore led to the development of a hybrid OTR model. The hybrid sequential OTR model includes a light gradient boosting machine model that predicts the online viscosity from both the mechanistic model outputs and the process data. Evaluation of the first series of experiments without online viscosity data showed an improved kLa fit with a normalized mean square error of up to 0.14. Further evaluation with production batches to demonstrate model performance is being considered as a subsequent step.
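
The sketch below illustrates one possible sequential hybrid structure of the kind described: a gradient-boosting model predicts viscosity from process data and feeds a van't Riet-type kLa correlation. All coefficients and the synthetic training data are assumptions, not the authors' fitted model.

```python
# Sketch of a sequential hybrid OTR structure with hypothetical coefficients:
# a data-driven model predicts broth viscosity from process data, and the prediction
# feeds a van't Riet-type kLa correlation extended with a viscosity term.
import numpy as np
from lightgbm import LGBMRegressor

# --- data-driven part: predict viscosity from process data (dummy training data) ---
rng = np.random.default_rng(0)
X_proc = rng.uniform([0.5, 200, 0.5], [2.0, 400, 1.5], size=(200, 3))  # vvm, rpm, bar
mu_offline = 0.05 + 0.1 * X_proc[:, 1] / 400 + rng.normal(0, 0.005, 200)  # Pa*s (toy)
viscosity_model = LGBMRegressor(n_estimators=200).fit(X_proc, mu_offline)

# --- mechanistic part: kLa correlation with hypothetical fitted coefficients ---
def kla(power_per_volume, superficial_gas_velocity, viscosity,
        a=0.02, alpha=0.5, beta=0.4, gamma=-0.4):
    """kLa = a * (P/V)^alpha * u_g^beta * mu^gamma  (illustrative parameter values)."""
    return a * power_per_volume**alpha * superficial_gas_velocity**beta * viscosity**gamma

mu_pred = viscosity_model.predict(X_proc[:1])[0]
print(f"Predicted viscosity: {mu_pred:.3f} Pa*s, kLa estimate: {kla(2000, 0.02, mu_pred):.2f}")
```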

Cell dry weight and off-line viscosity measurements were taken throughout each of the above-mentioned industrially based fermentation processes. The subsequent analysis aims to decipher the relationships between the OTR and agitation, aeration, head pressure and viscosity, thus providing the basis for an accurate and reliable mathematical model of the oxygen balance inside a fermenter.

The hybrid OTR model presents the first step towards developing a digital twin, aiding with operational decisions for fermentation processes.



An integrated approach for the sustainable water resources optimisation

Michaela Zaroula1, Emilia Kondili1, John K. Kaldellis2

1Optimisation of Production Systems Lab, Mechanical Engineering Department, University of West Attica; 2Soft Energy Applications and Environmental Protection Lab., University of West Attica

Unhindered access to clean water and the preservation and strengthening of water reserves are, together with the coverage of energy needs, basic elements of the survival of the human species (and not only) and therefore a top priority of both the UN and the E.U. In particular, the E.U. has set the goal of providing 70 million of its citizens with improved access to clean water by 2030.

On the other hand, the current balance of water supply and demand in the southern Mediterranean is clearly deficient and particularly worrying, and the situation worsens further during the summer season, when tourist flows are excessive. In Greece, for example, the ever-increasing demand for water, especially in the island regions during the summer (tourist) season, combined with prolonged drought, has led to over-exploitation (to the point of exhaustion) of the available water reserves, depriving traditional agricultural crops of the necessary amounts of water. This makes the optimal management of existing water resources, together with the optimal development of new infrastructure and the improvement of existing infrastructure, absolutely imperative.

In particular, the lack of water resources constrains the irrigation of agricultural crops, constantly shrinking the production of local products and drastically reducing the number of people employed in the primary sector.

In this context, and especially in light of the ever-increasing pressure on the area's carrying capacity, the present work highlights the main rationale and methods of our current research in water resources optimisation.

More specifically, the main objectives of the present work are:

The detailed description of the integrated energy – water problem in highly pressed areas

The use of scientific methods for the optimization of the water resource system

The development of a mathematical optimization model for the optimal exploitation of existing water resources, as well as for the optimal planning of new infrastructure projects, that quantitatively takes into account the priorities and the values of water use.

Furthermore, the innovative approach in the present work also considers the need to reduce the demand based on future forecasts so that the water resources are always in balance with the wider environment where they are utilized.

Water resources sustainability is included in the optimization model for the reduction of the environmental impacts and the environmental footprint of the energy-water system.

It is expected that the completion of this research will result in an integrated tool that supports users in the optimal exploitation of water resources.
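
As a minimal illustration of the kind of optimization model envisaged above, the sketch below solves a toy water allocation LP with SciPy; the sources, costs and capacities are invented placeholders, not data from the study.

```python
# Minimal water allocation LP sketch with illustrative data (not from the study):
# choose how much water each source supplies to meet island demand at minimum cost,
# subject to source capacity limits.
from scipy.optimize import linprog

# decision variables: [groundwater, desalination, treated reuse] in 1000 m3/day
cost = [0.3, 1.1, 0.6]            # EUR per m3 (illustrative)
capacity = [20.0, 50.0, 15.0]     # 1000 m3/day (illustrative)
demand = 60.0                     # 1000 m3/day total demand

# minimize cost subject to: sum(x) >= demand  <=>  -sum(x) <= -demand
res = linprog(c=cost,
              A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand],
              bounds=[(0, cap) for cap in capacity])
print("Allocation (1000 m3/day):", res.x, "| daily cost (kEUR):", round(res.fun, 1))
```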



Streamlined Life-Cycle Assessments of Chemicals Based on Chemical Taxonomies

Maximilian Guido Hoepfner, Lucas F. Santos, Gonzalo Guillén-Gosálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 1, 8093 Zurich, Switzerland

Addressing the challenges caused by climate change and the impact of human activities requires tools to evaluate and identify strategies for mitigating climate risk. Life cycle assessment (LCA) has emerged as the prevalent approach for quantifying the impact of industrial systems, providing valuable insights on how to improve their sustainability performance. Still, it remains in most cases a data-intensive and complex tool. Especially for the chemical industry, with its wide variety of products, there is an urgent need for tools that streamline and accelerate environmental impact assessment. As an example, the largest LCA database, Ecoinvent, currently includes only around 700 chemicals [1], most of them bulk chemicals, which highlights the need to cover data gaps and develop streamlined methods to facilitate the widespread adoption of LCA in the chemical sector.

Specifically, LCA data focus mostly on high production volume chemicals, most of them produced in continuous processes operating at high temperature and pressure. Quantifying the impact of fine chemicals, often produced in batch plants and at milder conditions, thus requires time-consuming process simulations [2] or data-driven methods [3]. The latter estimate impacts based on molecular descriptors and are often trained with high production volume chemicals, which might make them less accurate for fine chemicals.

Alternatively, here we explore another approach to streamline LCA calculations based on classifying chemicals according to their molecular structure, e.g., the functional groups occurring in the molecule. By applying a chemical taxonomy, we establish intervals within which impacts are likely to fall and correlations between sustainability metrics within classes. Furthermore, we investigate the use of process metric indicators (PMI), such as waste-mass and energy intensity, as proxies of LCA impacts. Notably, we studied the 783 chemicals found in the Ecoinvent 3.9.1 cutoff database using the taxonomy implemented in the ClassyFire tool [1]. Subsequently, the LCIs of all chemicals were used to estimate simple PMI metrics, while their impacts were computed following the IPCC 2013 GWP 100 and ReCiPe 2016 midpoint methods. Starting with the classification into organic and inorganic chemicals, a subsequent classification into so-called superclasses, representing more complex molecular characteristics, is performed. Furthermore, we applied clustering, principal component analysis (PCA) and data fitting to identify patterns and trends within the superclasses. The calculations were implemented in Brightway and Python 3.11.
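
A small sketch of the per-superclass correlation step is given below using synthetic data; the superclass names, the energy-intensity proxy and all numbers are placeholders, not Ecoinvent values.

```python
# Illustrative sketch: correlate a simple process metric (energy intensity) with GWP
# within chemical superclasses. Data are synthetic placeholders, not Ecoinvent values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
superclasses = ["Organic acids", "Benzenoids", "Organoheterocyclics"]
rows = []
for sc in superclasses:
    energy = rng.uniform(10, 120, 50)                    # MJ per kg product (toy)
    gwp = 0.06 * energy + rng.normal(0, 0.5, 50) + 1.0   # kg CO2-eq per kg (toy)
    rows.append(pd.DataFrame({"superclass": sc, "energy_MJ_per_kg": energy,
                              "GWP_kg_CO2eq_per_kg": gwp}))
df = pd.concat(rows, ignore_index=True)

# Spearman correlation between the PMI proxy and GWP, computed per superclass
corr = (df.groupby("superclass")[["energy_MJ_per_kg", "GWP_kg_CO2eq_per_kg"]]
          .corr(method="spearman")
          .xs("energy_MJ_per_kg", level=1)["GWP_kg_CO2eq_per_kg"])
print(corr)
```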

Preliminary results show that the use of a chemical taxonomy makes it possible to identify stronger correlations between LCA impacts and PMI metrics, opening the door to streamlined LCA methods based on simple metrics and formulas tailored to the specific chemical class.

1. Lucas, E. et al. The need to integrate mass- and energy-based metrics with life cycle impacts for sustainable chemicals manufacture. Green Chem. 26, (2024).

2. Hai, X. et al. Geminal-atom catalysis for cross-coupling. Nature 622, 754–760 (2023).

3. Zhang, D., Wang, Z., Oberschelp, C., Bradford, E. & Hellweg, S. Enhanced Deep-Learning Model for Carbon Footprints of Chemicals. ACS Sustain. Chem. Eng. 12, 2700–2708 (2024).



Aspen Plus Teaching: Spread or Compact Approach

Fernando G. Martins1,2, Henrique A. Matos3

1LEPABE, Laboratory for Process Engineering, Environment, Biotechnology and Energy, Chemical Engineering Department, Faculty of Engineering, University of Porto, Porto, Portugal; 2ALiCE, Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal; 3CERENA, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Portugal

Aspen Plus is a software package for the modelling and simulation of chemical processes, used in chemical engineering courses at different levels worldwide and supported by several books [1-4]. This contribution aims to discuss how its teaching and learning are organised in two Portuguese universities: Instituto Superior Técnico – University of Lisbon (IST.UL) and the Faculty of Engineering – University of Porto (FE.UP).

In 2021, the former integrated master’s in Chemical Engineering, with a duration of 5 years, was split into two courses: the Bachelor, with a duration of 3 years, and the Master, with a duration of 2 years.

With this reformulation, the course coordination at IST.UL decided to spread Aspen Plus teaching across several courses of the 2nd year of the Bachelor, starting with a first introduction to the package in the 1st semester in Chemical Engineering Thermodynamics. The idea is to use Aspen Plus to support learning about compound properties, phase diagrams with different models (IDEAL, NRTL, PR, SRK, etc.), azeotrope identification and activity coefficient calculation. Moreover, based on experimental data, it is possible to obtain binary interaction coefficients by regression, making the package a helpful tool for experimental data analysis. In addition, a Rankine cycle is modelled and simulations are carried out to automatically calculate the COP and other KPIs for different fluids.

The same approach is now being introduced in other courses, such as Process Separation, Transport Phenomena, etc. At IST.UL, there are two Project Design courses (12 ECTS each), at Bachelor and Master levels, that use Aspen Plus as a tool in conceptual project design.

At FE.UP, the introductory teaching of Aspen Plus occurs in the 3rd year of the Bachelor, in a course called Software Tools for Chemical Engineering, which aims at simulating industrial processes of small complexity, properly choosing the applicable thermodynamic and unit operation models, and analysing the influence of design variables and operating conditions. Aspen Plus is also taught, in a more advanced way, in the Engineering Design course (12 ECTS) in the 2nd year of the master's degree, when students develop preliminary designs for industrial chemical processes.

This work analyses how these two teaching strategies influence students' performance in the two Project Design courses at IST.UL and in Engineering Design at FE.UP, given that Aspen Plus is intensively used in these courses.

References:

[1] Schefflan, R. (2016). Teach yourself the basics of ASPEN PLUS, 2nd edition, Wiley & Sons

[2] Al-Malah, K.I.M. (2017). ASPEN PLUS – Chemical Engineering Applications, Wiley & Sons

[3] Sandler, S.I. (2015). Using Aspen Plus in Thermodynamics Instruction: A Step-by-Step Guide, Wiley & Sons

[4] Adams II, T.A. (2022). Learn Aspen Plus in 24 Hours, 2nd Edition, McGraw Hill



Integration of Life Cycle Assessment into the Optimal Design of Hydrogen Infrastructure for Regional-Scale Deployment

Alessandro Poles1, Catherine Azzaro-Pantel1, Henri Schneider2, Renato Luise3

1Laboratoire de Génie Chimique, Université Toulouse, CNRS, INPT, Toulouse, France; 2LAboratoire PLAsma et Conversion d'Énergie, INPT, Toulouse, France; 3European Institute for Energy Research, Emmy-Noether Straße 11, Karlsruhe, Germany

Climate change mitigation is one of the most urgent global challenges. Greenhouse gas (GHG) emissions are the primary drivers of climate change, requiring coordinated international action. However, political and territorial complexities make a uniform global approach difficult. As a result, individual countries are developing their own national policies aligned with international guidelines, such as those from the Intergovernmental Panel on Climate Change (IPCC). These policies often focus solely on emissions generated within national borders, as is the case with France's National Low-Carbon Strategy (SNBC). Focusing solely on territorial emissions in national carbon neutrality strategies may, however, shift environmental impacts to other stages of the life cycle occurring outside the country's borders. To provide a comprehensive assessment of environmental impacts, broader decision-support tools, such as Life Cycle Assessment (LCA), are crucial.

This is particularly important in energy systems, where hydrogen has emerged as a key component of the future energy mix. Hydrogen production technologies - such as Steam Methane Reforming (SMR) and electrolysis - each present distinct trade-offs. Currently, hydrogen is predominantly produced via SMR (>90%), largely due to its established market presence and lower production costs (1-3 $/kg H2). However, SMR entails significant GHG emissions (10-12 kg CO2-eq/kg H2). Electrolysis, on the other hand, presents a lower-carbon alternative when powered by renewable energy, although it is currently more expensive (6 $/kg H2).

Literature shows that most existing hydrogen system optimizations focus on reducing costs and minimizing GHG emissions, often overlooking broader environmental considerations. This highlights the need for a multi-objective framework that addresses not only economic and GHG emission reductions but also the mitigation of other environmental impacts, thus ensuring a more sustainable approach to hydrogen network development.

This study proposes an integrated framework that couples multi-objective optimization for hydrogen networks with LCA. The optimization framework is developed using Mixed Integer Linear Programming (MILP) and an augmented epsilon-constraint method, implemented in the GAMS environment over a multi-year timeline (2022-2050). Evaluated hydrogen production pathways include electrolysis powered by renewable energy sources (wind, PV, hydro, and the national grid) and SMR with Carbon Capture and Storage (CCS). The LCA model is directly integrated into the optimization process, using the ReCiPe2016 method to calculate environmental indicators following a Well-to-Tank approach. A case study of hydrogen deployment in Auvergne-Rhône-Alpes, addressing industrial and mobility demand for hydrogen, will illustrate this framework.
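
The sketch below illustrates the epsilon-constraint idea on a toy two-technology hydrogen problem in PuLP (a small LP stand-in for the full GAMS MILP); the demand, costs and emission factors are illustrative assumptions.

```python
# Toy epsilon-constraint sketch (not the GAMS model from the study): choose hydrogen
# production from SMR+CCS vs. electrolysis to meet demand, minimising cost while the
# life-cycle GWP is capped at a swept epsilon value. All numbers are illustrative.
import pulp

demand = 100.0                                   # kt H2 / year
cost = {"smr_ccs": 2.0, "electrolysis": 5.0}     # EUR per kg H2 (illustrative)
gwp = {"smr_ccs": 4.0, "electrolysis": 1.0}      # kg CO2-eq per kg H2 (illustrative)

for eps in [400.0, 300.0, 200.0, 150.0]:         # kt CO2-eq / year caps
    prob = pulp.LpProblem("h2_network", pulp.LpMinimize)
    x = {t: pulp.LpVariable(t, lowBound=0) for t in cost}
    prob += pulp.lpSum(cost[t] * x[t] for t in x)            # objective: total cost
    prob += pulp.lpSum(x[t] for t in x) == demand            # meet hydrogen demand
    prob += pulp.lpSum(gwp[t] * x[t] for t in x) <= eps      # epsilon constraint on GWP
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(eps, {t: x[t].value() for t in x}, pulp.value(prob.objective))
```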

The current phase of the research focuses on a bi-criteria optimization framework that balances economic objectives with environmental indicators, considered individually, to identify correlated indicators. Future research will explore strategies to reduce dimensionality in multi-objective optimization (MOO) without compromising solution quality, ensuring that decisions are both efficient and environmentally robust.

Reference: [1] Renato Luise, PhD thesis, Développement par approche ascendante de méthodes et d'outils de conception de chaînes logistiques « hydrogène décarboné » : application au cas de la France, Toulouse INP, 4 October 2023, https://theses.fr/2023INPT0083?domaine=theses



Streamlining Catalyst Development through Machine Learning: Insights from Heterogeneous Catalysis and Photocatalysis

Mitra Jafari, Julia Schowarte, Parisa Shafiee, Bogdan Dorneanu, Harvey Arellano-Garcia

Brandenburg University of Technology Cottbus-Senftenberg, Germany

Designing heterogeneous catalysts and optimizing reaction conditions present significant challenges. This process typically involves catalyst synthesis, optimization, and numerous reaction tests, which are not only energy- and time-intensive but also costly. Advances in machine learning (ML) have provided researchers with new tools to predict catalysts' behaviour, reaction conditions, and product distributions without the need for extensive laboratory experiments. Through correlation analysis, ML can uncover relationships between various parameters and catalyst performance. Predictive models, trained on existing data, can forecast the effectiveness of new materials, while data-driven insights help guide catalyst design and optimization. Automating the ML framework further streamlines this process, improving scalability and enabling rapid evaluation of a wider range of candidates, which accelerates the development of solutions to current challenges [1,2].

In this contribution, a proposed ML approach and its potential in catalysis (heterogeneous and photocatalysis) are explored by analysing datasets from different reactions, such as Fischer-Tropsch synthesis and pollutant degradation. These datasets are categorized based on descriptors like catalyst formulation, pretreatment, characteristics, activation, and reaction conditions, with the goal of predicting reaction outcomes. Initially, the data undergo cleaning and labelling using one-hot encoding. Subsequent steps include imputation and normalization for data preparation. In addition, techniques such as Spearman correlation matrices, dendrograms, pair plots, and dimensionality reduction methods like PCA are applied. The datasets are then employed to train and test several models, including ensemble methods, regression techniques, and neural networks. Hyperparameter tuning is performed using GridSearchCV alongside cross-validation. Performance metrics such as R², RMSE, and MAE are used to assess model accuracy, and the AIC is used for model selection, with a simple mean-value model or linear regression serving as a baseline for comparison.
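
A compact sketch of such a preprocessing and hyperparameter-tuning workflow is shown below with scikit-learn on synthetic data; the descriptor names, target and grid values are hypothetical, not those of the study's datasets.

```python
# Minimal sketch of the preprocessing + model-selection workflow described above,
# using scikit-learn on a synthetic dataset (column names are hypothetical).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "promoter": rng.choice(["K", "Na", "none"], 300),     # categorical descriptor
    "temperature_C": rng.uniform(200, 350, 300),
    "pressure_bar": rng.uniform(10, 30, 300),
})
y = 0.2 * df["temperature_C"] + rng.normal(0, 5, 300)     # toy conversion target

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["promoter"]),
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]),
     ["temperature_C", "pressure_bar"]),
])
model = Pipeline([("prep", preprocess), ("gbr", GradientBoostingRegressor())])
search = GridSearchCV(model, {"gbr__n_estimators": [100, 300],
                              "gbr__max_depth": [2, 3]}, cv=5, scoring="r2")
search.fit(df, y)
print(search.best_params_, round(search.best_score_, 3))
```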

Finally, the prediction accuracy of each model is investigated, and the best-performing model is selected. The effect of the different descriptors on the response has also been assessed to identify the parameters with the greatest influence on catalyst performance. With regard to photocatalysis, nonlinear behaviour was observed due to optimization-driven influences, likely because the published results consist solely of optimized data.

References

  1. Tang, Deqi, Rangsiman Ketkaew, and Sandra Luber. "Machine Learning Interatomic Potentials for Catalysis." Chemistry–A European Journal (2024): e202401148.
  2. Schnitzer, Tobias, Martin Schnurr, Andrew F. Zahrt, Nader Sakhaee, Scott E. Denmark, and Helma Wennemers. "Machine Learning to Develop Peptide Catalysts – Successes, Limitations, and Opportunities." ACS Central Science 10, no. 2 (2024): 367-373.


Life Cycle Design of a Novel Energy Crop “Sweet Erianthus” by Backcasting from Process Simulation Integrating Agriculture and Industry

Satoshi Ohara1, Yoshifumi Terajima2, Hiro Tabata3,4, Shoma Fujii5, Yasunori Kikuchi3,5

1Research Center for Advanced Science and Technology, LCA Center for Future Strategy, The University of Tokyo; 2Tropical Agriculture Research Front, Japan International Research Center for Agricultural Sciences; 3Presidential Endowed Chair for “Platinum Society”, The University of Tokyo; 4Research Center for Solar Energy Chemistry, Graduate School of Engineering Science, Osaka University; 5Institute for Future Initiatives, The University of Tokyo

Crops have been developed primarily for food production. Toward decarbonization, it is also essential to design and develop novel crops suitable for new application processes such as biofuels and green chemicals production through backcasting approaches. For example, modifying industrial crops through crossbreeding or genetic modification can change their unit yield, environmental tolerance, and raw material composition (i.e., sugars, starch, and lignocellulose). However, conventional energy crop improvement has been aimed only at high-unit yield with high fiber content, such as Energy cane and Giant Miscanthus, which contain little or no sugar, limiting their use to energy and lignocellulosic applications.

Sweet Erianthus was developed in Japan as a novel energy crop by crossbreeding Erianthus (wild plants with high biomass productivity even in poor environments) and Saccharum spp. hybrids (sugarcane with sugar storage ability). Erianthus has a deep root system to draw up nutrients and water from the deep layers of the soil, making it possible to cultivate crops with low fertilizer and water inputs even in farmland unsuitable for agriculture due to low rainfall or low nutrients and water near the surface. On the other hand, sugarcane accumulates sugars directly in the stalk. Microorganisms can easily convert extracted sugar juice into bioproducts such as ethanol and polylactic acid. Therefore, Sweet Erianthus presents a dual characteristic of both Erianthus and sugarcane.

In this study, we are tackling the design of optimal Sweet Erianthus crop characteristics (unit yield and the compositional balance of sugars and fiber) by backcasting from simulations of the entire life cycle, considering sustainable agriculture, industrial productivity, environmental impact, and resource recycling. As options for industrial applications, ethanol fermentation, biomass combustion, power generation, and torrefaction to produce charcoal, biogas oil, and syngas were selected. Production potentials and energy inputs were calculated using previously reported simulation models (Ouchida et al., 2017; Leonardo et al., 2023). Specifically, the production potential of each energy product per unit area was simulated by multiplying conversion factors with three variables: unit yield, Y [t/ha]; sugar content, S [wt%]; and fiber content, F [wt%]. Each variable was assumed not to exceed the range observed across the various prototypes developed.
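
The production-potential calculation can be sketched as below; the conversion factors are illustrative placeholders rather than the values used with the cited simulation models.

```python
# Sketch of the production-potential calculation described above: the energy yield per
# hectare is obtained by multiplying unit yield, composition fractions, and conversion
# factors. All factor values below are illustrative placeholders, not the study's.
def energy_potential_GJ_per_ha(Y_t_per_ha, S_wt_pct, F_wt_pct,
                               f_sugar_GJ_per_t=8.0, f_fiber_GJ_per_t=15.0):
    """Return an indicative energy potential [GJ/ha] from the sugar and fiber routes."""
    sugar_t = Y_t_per_ha * S_wt_pct / 100.0
    fiber_t = Y_t_per_ha * F_wt_pct / 100.0
    return sugar_t * f_sugar_GJ_per_t + fiber_t * f_fiber_GJ_per_t

# Example prototype: 80 t/ha fresh yield, 10 wt% sugars, 25 wt% fiber
print(f"{energy_potential_GJ_per_ha(80, 10, 25):.0f} GJ/ha (illustrative)")
```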

The simulation results reveal optimal feedstock conditions that maximize energy productivity per unit area or minimize environmental impact. The fiber-to-sugar content (F/S) ratio was found to be especially important. This study thus presents a simulation-based crop design methodology, in which practical crop design on the agricultural side is guided by simulations on the industrial side, which is expected to enable efficient development of new crops.

K. Ouchida et al., 2017, Integrated Design of Agricultural and Industrial Processes: A Case Study of Combined Sugar and Ethanol Production, AIChE Journal, 63(2), 560-581

L. Leonardo et al., 2023, Simulation-based design of regional biomass thermochemical conversion system for improved environmental and socio-economic performance, Comput. Aid. Chem. Eng., 52. 2363-2368



Reversible Solid Oxide Cells and Long-term Energy Storage in Residential Areas

Arthur Waeber, Dorsan Lepour, Xinyi Wei, Shivom Sharma, François Maréchal

EPFL, Switzerland

As environmental concerns intensify and energy demand rises, especially in residential areas, reversible Solid Oxide Cells (rSOC) stand out as a promising technology. Characterized by their reversibility, high electrical efficiency, and fuel flexibility, they also cogenerate high-quality heat. The smart operation of rSOC systems can present interesting opportunities for long-term energy storage, facilitating the penetration of renewable energies at different scales while continuously providing useful heat.

Although the implementation of energy storage systems in residential areas has already been extensively discussed in the literature, the focus is mainly on batteries, often omitting the seasonal dimension. This study aims to address this gap by investigating the technical and economic feasibility of rSOC systems in residential areas alongside various long-term storage options: hydrogen (H2), a hybrid tank (CH4/CO2), and ammonia (NH3).

Each of these molecules requires precise modeling, introducing specific constraints and impacting the rSOC system's performance in terms of electricity or heat output in different ways. To achieve this, the processes are first modeled in Aspen Plus to account for thermodynamic properties before being integrated into the Renewable Energy Hub Optimizer (REHO) framework.

REHO is a decision-support tool designed for sustainable urban energy system planning. It considers the endogenous resources of a specified area, various end-use demands (such as heating and mobility), and multiple energy carriers, including electricity, heat, and hydrogen. Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to facilitate a sound comparison of different storage solutions.

This analysis emphasizes the need for long-term storage technologies to support the penetration of decentralized electricity production. By providing tangible figures, such as CAPEX, storage tank sizes, and renewable energy installed capacity, it enables a fair comparison of the three main scalable long-term storage options. Additionally, it offers guidelines on the optimal storage conditions for each molecule, balancing energy efficiency and storage tank size. The role of the rSOC as an electricity storage technology and as a heat producer for domestic hot water and/or space heating is also evaluated for the different storage options.



A Comparative Analysis of an Industrial Edge MLOps Prototype for ML Application Deployment at the Edge of the Process Industry

Fatima Rani, Lucas Vogt, Prof. Leon Urbas

Technische Universität Dresden, Germany

In the evolving Industry 4.0 revolution, combining the Artificial Intelligence of Things (AIoT) and edge computing represents a significant step forward in innovation and efficiency. This paper introduces a prototype for constructing an edge AI system utilizing the contemporary Machine Learning Operations (MLOps) concept (Rani et al., 2024 & 2023). Employing devices such as the Raspberry Pi and the Nvidia Jetson Nano as hardware, our methodology encompasses data ingestion and machine learning model deployment on edge devices (Antonini et al., 2022). Crucially, the MLOps pipeline is fully developed within the ecoKI platform, a pioneering research initiative focused on making energy-saving solutions available to Small and Medium-sized Enterprises (SMEs). Here, we propose an MLOps pipeline that can be run as either multiple workflows or a single workflow, leveraging a REST API for interaction and customization through the FastAPI web framework in Python. This pipeline enables seamless data processing, model development, and deployment on edge devices. Moreover, real-time AI processing on edge devices enables even resource-constrained hardware to handle tasks in areas such as predictive maintenance, process optimization, quality assurance, and supply chain management. Furthermore, a comparative analysis conducted with Edge Impulse validates the effectiveness of our approach, demonstrating how optimized ML algorithms can be successfully deployed in the process industry (Janapa Reddi et al., 2023). Finally, this study aims to provide a blueprint for advancing edge AI development in the process industry by exploring AI techniques suited for resource-limited environments and addressing key challenges, such as ML algorithm optimization and limited computational power.
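
A minimal FastAPI sketch of exposing one pipeline step over REST is given below; the endpoint, payload fields and placeholder "model" are hypothetical and not part of the ecoKI platform.

```python
# Minimal FastAPI sketch of a single-workflow pipeline step exposed over REST, in the
# spirit of the prototype described above; endpoint names and payload fields are
# hypothetical, not those of the ecoKI platform.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="edge-mlops-sketch")

class SensorWindow(BaseModel):
    values: list[float]      # one window of ingested sensor readings

@app.post("/predict")
def predict(window: SensorWindow) -> dict:
    # Placeholder "model": a moving-average anomaly score standing in for the
    # ML model deployed on the edge device.
    mean = sum(window.values) / max(len(window.values), 1)
    score = max(abs(v - mean) for v in window.values) if window.values else 0.0
    return {"anomaly_score": score}

# Run locally with:  uvicorn <this_module>:app --host 0.0.0.0 --port 8000
```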

References

Rani, F., Chollet, N., Vogt, L., & Urbas, L. (2024). Industrial Edge MLOps: Overview and Challenges. Computer Aided Chemical Engineering, 53, 3019-3024.

Rani, F., Khaydarov, V., Bode, D., Hasan, I. H. & Urbas, L.(2023). MLOps Practice: Overcoming the Energy Efficiency Gap, Empirical Support Through ecoKI Platform in the Case of German SMEs. PAC- Protection, Automation Control, World Global Conference 2023.

Antonini, M., Pincheira, M., Vecchio, M., & Antonelli, F. (2022, May). Tiny-MLOps: A framework for orchestrating ML applications at the far edge of IoT systems. In 2022 IEEE international conference on evolving and adaptive intelligent systems (EAIS) (pp. 1-8). IEEE.

Janapa Reddi, V., Elium, A., Hymel, S., Tischler, D., Situnayake, D., Ward, C., ... & Quaye, J. (2023). Edge Impulse: An MLOps platform for tiny machine learning. Proceedings of Machine Learning and Systems, 5.

Acknowledgments: This work was funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under grant number 03EN2047C.



Energy-Water Nexus Resilience Analysis Using an Integrated Resource Allocation Approach

Hesan Elfaki1, Dhabia Al-Mohannadi2, Mohammad Lameh1

1Texas A&M University, United States of America; 2Hamad Bin Khalifa University, Qatar

Power and water systems are strongly interconnected through exchanged flows of water, electricity, and heat, which are fundamental to maintaining continuous operation and providing functional services that meet demand. These systems are highly vulnerable to climate stressors, which can disrupt their operation. As the services delivered by these systems are vital for community development across all sectors, it is essential to create reliable frameworks and effective methods to assess and enhance the resilience of the energy-water nexus to climate impacts.

This work presents a macroscopic, high-level representation of the interconnected nexus system, utilizing a resource allocation model to capture the interactions between the power and water subsystems. The model is used to assess the performance of the system under various climate impact scenarios, to determine the peak demands the system can withstand, and to quantify losses of functional services, thereby revealing system vulnerabilities. Resilience metrics are incorporated to interpret these results and characterize nexus performance. The overall method is generic, and its capabilities will be demonstrated through a case study of the energy-water nexus in the Gulf Cooperation Council (GCC) region.
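
As an illustration of the kind of resilience metric mentioned above, the sketch below computes a served-to-demanded service ratio over a disruption window; the demand profile and derating are synthetic, not GCC data.

```python
# Illustrative resilience metric: ratio of functional service delivered to service
# demanded over a disruption window (a common area-under-the-curve style measure;
# the numbers below are synthetic).
import numpy as np

hours = np.arange(24)
demand_MW = np.full(24, 1000.0)                    # constant power demand
supply_MW = np.where((hours >= 8) & (hours < 14),  # 6-hour heat-wave derating
                     700.0, 1000.0)
served_MW = np.minimum(supply_MW, demand_MW)

resilience = served_MW.sum() / demand_MW.sum()
print(f"Resilience index over the event: {resilience:.3f}")
```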



Technoeconomic Analysis of a Novel Amine-Free Direct Air Capture System Integrated with HVAC

Yasser Abdellatif1,2, Ikhlas Ghiat1, Riham Surkatti2, Yusuf Bicer1, Tareq AL-ANSARI1,2, Abdulkarem I. Amhamed1,3

1Hamad Bin Khalifa University, College of Science and Engineering, Qatar; 2Qatar Environment and Energy Research Institute (QEERI), Doha, Qatar; 3Corresponding author’s email: aamhamed@hbku.edu.qa

The increasing demand for Direct Air Capture (DAC) technologies has been driven by the need to mitigate rising CO2 levels and address climate change. However, DAC systems face challenges, particularly in humid environments, where high humidity substantially increases the energy required for regeneration. Conventional CO2 physisorption is often hindered by competitive water adsorption, which reduces system efficiency and increases energy demand. Addressing these limitations is crucial for advancing DAC technology and improving commercial viability. This study proposes a novel DAC system integrated with an Air Handling Unit (AHU) to manage these challenges. A key feature of the system is the incorporation of a silica gel wheel for air dehumidification prior to physisorption. This pre-treatment step significantly enhances the physisorbents' performance by reducing water vapor in the air and optimizing the CO2 adsorption process. As a result, physisorbents can compete with conventional chemisorbents, which benefit from water co-adsorption but have limitations such as material degradation and higher energy demands. The study focuses on two adsorbents, NbOFFIVE and SBA-15 functionalized with TEPA, chosen for their promising CO2 capture properties. The system was tailored for the AHU of Doha Tower, a high-rise in a hot, humid climate. The silica gel wheel dehumidifies return air before it enters the CO2 capture stage. The air is then cooled by the existing AHU system to create optimal conditions for adsorption. After CO2 capture, the air is reheated using the AHU's heater to maintain indoor temperatures. The water adsorbed on the silica gel is desorbed using the CO2- and water-free airstream, allowing the system to deliver the required humidity range for indoor areas before supplying the air to the building. This ensures both air quality and operational efficiency. This integrated approach offers significant advantages in energy savings and efficiency. The use of silica gel prior to physisorption reduced energy requirements by 82% for NbOFFIVE and by 39% for SBA-15/TEPA, compared to a DAC-HVAC system without silica gel dehumidification. Physisorbents generally exhibit lower heats of adsorption than chemisorbents, further reducing the system's overall energy demand. The removal of excess moisture also minimizes the energy required for water desorption and addresses key drawbacks of amines, such as instability in indoor environments. Additionally, this approach lowers the cooling load by eliminating the water condensation typically managed by the HVAC system. These factors were evaluated in a technoeconomic analysis, where they played a crucial role in reducing operational costs. Utilizing the existing AHU infrastructure further reduces capital expenditure (CAPEX), making this system a highly attractive solution for large-scale CO2 capture applications.



Computer-Aided Molecular Design for Citrus and Coffee Wastes Valorisation

Giovana Correia de Assis Netto1, Moisés Teles dos Santos1, Vincent Gerbaud2

1University of São Paulo, Brazil; 2Laboratoire de Génie Chimique, France

Brazil is the world's largest producer of both coffee and oranges. These agro-industrial processes generate large quantities of wastes, which are typically discarded in landfills, mixed with animal feed, or incinerated. Such practices not only pose environmental issues but also fail to fully exploit the economic potential of these residues. Brazilian coffee processing predominantly employs the dry method, wherein the coffee fruit is dried and dehulled, resulting in coffee husk as the primary waste (18% w/w fresh fruit). Subsequently, green coffee beans are roasted, generating an additional residue known as silverskin (4.3% w/w fresh fruit). Finally, roasted and ground coffee undergoes extraction, resulting in spent coffee grounds (91% w/w of ground coffee). Altogether, these residues can account for up to 99% of the coffee fruit's mass. Similarly, Brazil leads global orange juice production. This process generates orange peel waste, which comprises 50–60% of the fruit's mass. Coffee and orange peel wastes contain valuable compounds that can be extracted or produced via biological or chemical conversions, making the residues potential sources of chemical platforms. These chemical platforms can be used as molecular building blocks, with multiple functional groups that can be functionalised into useful chemicals. A notable example is furfural, a key bio-based chemical platform that serves as a precursor for various chemicals, offering an alternative to petroleum-based products. Furfural is usually obtained from xylose dehydration and purified by extraction with organic solvents, such as toluene or methyl isobutyl ketone, followed by distillation. The objective of this work is to design alternative solvents for furfural extraction from aqueous solutions, using Computer-Aided Molecular Design (CAMD). A comprehensive literature review identified chemical platforms that can be produced from coffee and orange residues. These molecular structures were then used as molecular building blocks in the chemical library of an in-house CAMD tool. The CAMD tool employed uses molecular graphs for chemical structure representation and modification, group contribution methods for property estimation and a genetic algorithm as the search procedure. The target properties for the screening included Kow (as a measure of toxicity), enthalpy of vaporisation, melting point, boiling point, flash point and Hansen solubility parameters. A further 31 properties, including EHS indicators, were also calculated for reference. From the initial list of 40 building block families, 19 families were identified in coffee wastes and 20 families in orange wastes. Among these, 13 building blocks are common to both types of residues and were evaluated as molecular fragments to design candidate solvents for furfural separation: furoate, geranyl, glucaric acid, glutamic acid, hydroxymethylfurfural, hydroxypropionic acid, levulinic acid, limonene, 5-methylfurfural, oleic acid, succinic acid, glycerol and furfural itself. The results demonstrate that molecular structures derived from citrus and coffee residues have the potential to produce solvents with properties comparable to those of toluene. The findings are promising as they represent an advancement over the use of toluene, a fossil-derived solvent, enhancing sustainability in furfural extraction and avoiding the use of non-renewable chemicals in downstream processes of agro-based biorefineries.
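
To illustrate the group-contribution property estimation step used in CAMD, the sketch below applies a Joback-type boiling point estimate; the group values are approximate and cover only a tiny subset of the full table.

```python
# Minimal group-contribution sketch in the spirit of the CAMD property estimation step:
# a Joback-type normal boiling point estimate, Tb = 198.2 + sum of group contributions.
# Group values are approximate and only a tiny subset of the full table.
JOBACK_TB = {"-CH3": 23.58, "-CH2-": 22.88, "-OH": 92.88}   # K, approximate values

def boiling_point_K(groups: dict) -> float:
    """Estimate the normal boiling point from group counts, e.g. ethanol below."""
    return 198.2 + sum(JOBACK_TB[g] * n for g, n in groups.items())

ethanol = {"-CH3": 1, "-CH2-": 1, "-OH": 1}
print(f"Estimated Tb(ethanol) ~ {boiling_point_K(ethanol):.0f} K")
```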



Introduction of carbon capture technologies in industrial symbioses for decarbonization

Sydney Thomas, Marianne Boix, Stéphane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Climate change is a consequence of human activities, with industrial activities being one of the primary sources of greenhouse gas (GHG) emissions. It is therefore imperative to drastically reduce emissions from the industrial sector in order to effectively address climate change. This endeavor will necessitate multiple actions aimed at enhancing both sufficiency and efficiency.

Eco-industrial parks are among the viable options for increasing efficiency. They operate through the collaboration of industries that choose to cooperate to mutualize or exchange materials, energy, or services. By optimizing these flows, it is possible to reuse a fraction of materials, thus reducing waste and fossil fuel consumption, thereby decreasing GHG emissions.

This study is based on a real eco-industrial park located in South Korea, where some companies are capable of producing steam at different levels while others have a demand for steam (Kim et al., 2010). However, this work also relates to a reindustrialization project in France, requiring that parameters be adapted to French conditions while striving for a general applicability that may extend to other countries. One of the preliminary solutions for reducing GHG emissions involves optimizing the steam network among companies. Additionally, it is feasible to implement carbon capture solutions to mitigate the impact of fuel consumption; however, while these techniques reduce GHG emissions, they may inadvertently increase other types of pollution. The ultimate objective is to optimize the park using a systemic approach.

In this analysis, carbon capture modules are modeled and integrated into an optimization model for steam exchanges previously developed by Mousqué et al. (2018). The multi-period model uses a multi-criteria mixed-integer linear programming (MILP) approach. The constraints of the problem correspond to material and energy balances as well as thermodynamic equations. Three criteria are considered to assess the optimal organization: cost, greenhouse gas (GHG) emissions, and pollution from amines. Subsequently, an epsilon-constraint strategy is employed to delineate the Pareto front. Finally, the TOPSIS method is used to determine the most advantageous solution.
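
A compact TOPSIS sketch over the three criteria named above is given below; the decision matrix and weights are illustrative, not results from the eco-industrial park model.

```python
# Compact TOPSIS sketch for ranking Pareto solutions on [cost, GHG, amine pollution];
# the decision matrix and weights are illustrative placeholders.
import numpy as np

# rows = candidate park configurations, columns = [cost, GHG, amine emissions]
X = np.array([[100.0, 50.0, 5.0],
              [104.0, 20.0, 8.0],
              [110.0, 15.0, 2.0]])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([False, False, False])        # all three criteria are minimised

norm = X / np.linalg.norm(X, axis=0)             # vector normalisation
v = norm * weights
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("Closeness scores:", np.round(closeness, 3), "| best:", int(np.argmax(closeness)))
```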

The preliminary findings indicate that capture through adsorption holds significant promise. Compared to the base-case scenario, this method has the potential to reduce CO2 emissions by a factor of three while increasing the cost by only 0.4% per year. This approach may eliminate the need for amines in carbon capture and reduce the energy requirements compared to absorption-based capture. However, further research is needed to confirm these results.



Temporal Decomposition Scheme for Designing Large-Scale CO2 Supply Chains Using a Neural-Network Based Model for Forecasting CO2 Emissions

Jose A. Álvarez-Menchero, Ruben Ruiz-Femenia, Raquel Salcedo-Díaz, Isabela Fons Moreno-Palancas, Jose A. Caballero

University of Alicante, Spain

The battle against climate change and the search for innovative solutions to mitigate its effects have become the focus of researchers' attention. One potential approach to reducing the impacts of global warming is the design of a Carbon Capture and Storage Supply Chain (CCS SC), as proposed by D'Amore [1]. However, the high complexity of the model requires exploring alternative ways to optimise it.

In this work, a CCS multi-period supply chain for Europe, based on that presented by D'Amore [1], is designed. Data on CO2 emissions have been sourced from the EDGAR database [2], which includes information spanning the last 50 years. Since this problem involves optimising cost and operating decisions over a 10-year time horizon, it is advisable to forecast carbon dioxide emissions to enhance the reliability of the data used. For this purpose, a neural network-based model is implemented for forecasting [3]; the chosen model is N-BEATS.
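
As a simple stand-in for the N-BEATS forecaster, the sketch below fits a lag-feature linear regressor to a synthetic annual emissions series and rolls it forward over a 10-year horizon; it illustrates the forecasting step only, not the model actually used in the study.

```python
# Simple lag-feature forecasting sketch standing in for the N-BEATS model: fit a
# regressor on lagged annual emissions and roll it forward 10 years.
# The series below is synthetic, not EDGAR data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years = np.arange(1975, 2025)
emissions = 3000 + 25 * (years - 1975) + rng.normal(0, 40, years.size)  # Mt CO2 (toy)

n_lags = 3
X = np.column_stack([emissions[i:len(emissions) - n_lags + i] for i in range(n_lags)])
y = emissions[n_lags:]
model = LinearRegression().fit(X, y)

history = list(emissions)
for _ in range(10):                       # roll forward over the planning horizon
    next_val = model.predict([history[-n_lags:]])[0]
    history.append(next_val)
print(np.round(history[-10:], 1))
```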

Furthermore, a temporal decomposition scheme is used to address the intractability issues of the model. The selected method is Lagrangean decomposition, which has been employed in other high-complexity works, demonstrating strong performance and significant computational savings [4,5].

References

[1] D’Amore, F., Bezzo, F., 2017. Economic optimisation of European supply chains for CO2 capture, transport and sequestration.

[2] JRC, 2021. Emission Database for Global Atmospheric Research (EDGAR). Joint Research Centre, European Commission. Available at: https://edgar.jrc.ec.europa.eu/index.php.

[3] Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M., 2019. A Next-generation Hyperparameter Optimization Framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

[4] Jackson, J. R., Grossmann, I. E., 2003. Temporal decomposition scheme for nonlinear multisite production planning and distribution models.

[5] Goel, V., Grossmann, I. E., 2006. A novel branch and bound algorithm for optimal development of gas fields under uncertainty in reserves.



Dynamic simulation of turquoise hydrogen production using a regenerative non-catalytic pyrolysis reactor under various heat sources

Jiseon Park1,2, Youngjae Lee1, Uendo Lee1, Won Yang1, Jongsup Hong2, Seongil Kim1

1Korea Institute of Industrial Technology, Korea, Republic of (South Korea); 2Yonsei University, Korea, Republic of (South Korea)

Hydrogen is widely regarded as a key energy source for reducing carbon emissions and dependence on fossil fuels. As a result, several methods for producing hydrogen have been developed, commonly classified as grey, blue, green, and turquoise hydrogen. Grey hydrogen is produced from natural gas but generates a large amount of CO₂ as a byproduct. Blue hydrogen adds CO₂ capture and storage to overcome this drawback of grey hydrogen production. Green hydrogen is produced through water electrolysis powered by renewable energy and emits almost no CO₂; however, it faces challenges such as intermittent energy supply and high production costs.

In turquoise hydrogen production, methane pyrolysis generates hydrogen and solid carbon at high temperatures. Unlike the other hydrogen production methods, this process does not emit carbon dioxide, thus it offers environmental benefits. Notably, non-catalytic methane pyrolysis has the advantage of avoiding catalyst deactivation issues. While catalytic methane pyrolysis increases operational complexity and costs because of the regular catalyst replacement, the non-catalytic process addresses these challenges. However, non-catalytic processes require maintaining much higher reactor temperatures than steam methane reforming and catalytic methane pyrolysis. Consequently, optimizing the heat supply is critical to maintaining high temperatures.

This study explores various methods of supplying heat to sustain the high temperature inside the reactor. We propose a new method for turquoise hydrogen production based on a regenerative pyrolysis reactor to optimize heat supply. In this system, as methane pyrolysis begins in one reactor, it undergoes an endothermic reaction, causing a decrease in temperature. Meanwhile, another reactor supplies heat by combusting hydrogen, ammonia, or methane to gradually increase the temperature. This system enables continuous heat supply and efficiently uses thermal energy.
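The alternating operating principle can be pictured with a highly simplified lumped temperature model; the heating/cooling rates and half-cycle length below are assumed for illustration only and do not reproduce the dynamic simulation of this study.

import numpy as np
from scipy.integrate import solve_ivp

q_pyrolysis = -2.0      # assumed cooling rate during endothermic pyrolysis, K/s
q_combustion = 3.0      # assumed heating rate during the combustion half-cycle, K/s
t_switch = 300.0        # assumed half-cycle length, s

def dTdt(t, T):
    # reactors alternate between pyrolysis and heat-supply duty every half-cycle
    in_pyrolysis = int(t // t_switch) % 2 == 0
    return [q_pyrolysis if in_pyrolysis else q_combustion]

sol = solve_ivp(dTdt, (0.0, 4 * t_switch), [1400.0], max_step=1.0)  # start at 1400 K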

Therefore, this study conducts dynamic simulation to optimize a regenerative non-catalytic pyrolysis system for continuous turquoise hydrogen production. By utilizing dynamic analysis inside the reactor, optimal operating conditions for this hydrogen production system are determined, which ensures efficient and continuous hydrogen production. Additionally, the study compares hydrogen, ammonia, and methane as heat sources to determine the most effective fuel for maintaining high temperatures in reactors. This comparison utilizes life cycle assessment (LCA) to comprehensively evaluate the energy consumption and CO2 emissions of each fuel source.

The integration of dynamic analysis with LCA provides critical insights into the environmental and operational efficiencies of various heat supply methods used in the regenerative turquoise hydrogen production system. This approach enables the quantification of those impacts and supports the identification of the most suitable fuel. Ultimately, this research contributes to the development of more sustainable and efficient hydrogen production technologies, highlighting the potential for significant reductions in carbon emissions.



Empowering Engineering with Machine Learning: Hybrid Application to Reactor Modeling

Felipe CORTES JARAMILLO1, Julian Per BECKER1, Benoit CELSE1, Thibault FANEY1, Victor COSTA1, Jean-Marc COMMENGE2

1IFP Energies nouvelles, France; 2Université de Lorraine, CNRS, LRGP, France

Hydrocracking is a chemical process that breaks down heavy hydrocarbons into lighter, more valuable products, using feedstocks such as vacuum gas oil (VGO) or renewable sources like vegetable oil and animal fat. Although existing hydrocracking models, developed over years of research, can achieve high accuracy and robustness once calibrated and validated [1-3], significant challenges persist. These include the inherent complexity of the feedstocks (containing billions of molecules), high computational costs, and limitations in analytical techniques, particularly in differentiating between similar compounds such as iso- and normal alkanes. These challenges result in extensive experimentation, higher costs, and considerable discrepancies between physics-based model predictions and actual measurements.

To overcome these limitations, effective approximations are needed that integrate both empirical data and established process knowledge. A preliminary investigation into purely data-driven models revealed difficulties in capturing the fundamental behavior of the hydrocracking reaction, motivating the exploration of a hybrid modeling approach. Among the various hybrid modeling frameworks [4], physics-informed machine learning was selected after in-depth examination, as it can leverage well-established first-order principles, represented by ordinary differential equations (ODEs), to guide data-driven models. This method can improve approximations of real-world reactions, even when the first-order principles do not perfectly match the underlying, complex processes [5].

This work introduces a novel hybrid modeling approach that employs physics-informed neural networks (PINNs) to address the challenges of hydrocracking reactor modeling. The performance is compared against a traditional kinetic model and a range of purely data-driven models, using data from 120 continuous pilot plant experiments as well as simulated scenarios based on the existing first-order behavior models developed at IFPEN [2].
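To make the idea concrete, a minimal physics-informed training sketch is given below; it assumes a toy first-order kinetic ODE (dC/dt = -kC) and synthetic data, and is not the IFPEN hydrocracking model described here.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
k = torch.tensor(0.5)                                   # assumed rate constant

t_data = torch.linspace(0.0, 5.0, 20).unsqueeze(1)      # hypothetical measurement times
c_data = torch.exp(-k * t_data) + 0.01 * torch.randn_like(t_data)        # noisy synthetic data
t_col = torch.linspace(0.0, 5.0, 200).unsqueeze(1).requires_grad_(True)  # collocation points

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss_data = ((net(t_data) - c_data) ** 2).mean()    # data-fit term
    c_col = net(t_col)
    dc_dt = torch.autograd.grad(c_col.sum(), t_col, create_graph=True)[0]
    loss_phys = ((dc_dt + k * c_col) ** 2).mean()       # ODE residual (physics) term
    (loss_data + loss_phys).backward()
    opt.step()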

Multiple criteria including accuracy, trend analysis, extrapolation capabilities, and model development time were used to evaluate the methods. In all scenarios, the proposed approach demonstrated a performance improvement over both the kinetic and purely data-driven models. The results highlight that constraining data-driven models, such as neural networks, with known first-order principles enhances robustness and accuracy. This hybrid methodology offers a new avenue for modeling uncertain reactor processes by effectively combining general a priori knowledge with data-driven insights.

References

[1] Chinesta, F., & Cueto, E. (2022). Empowering engineering with data and AI: a brief review.

[2] Becker, P. J., & Celse, B. (2024). Combining industrial and pilot plant datasets via stepwise parameter fitting. Computer Aided Chemical Engineering, 53, 901-906.

[3] Becker, P. J., Serrand, N., Celse, B., Guillaume, D., & Dulot, H. (2017). Microkinetic model for hydrocracking of VGO. Computers & Chemical Engineering, 98, 70-79.

[4] Bradley, W., et al. (2022). Integrating first-principles and data-driven modeling. Computers & Chemical Engineering, 166, 107898.

[5] Tai, X. Y., Ocone, R., Christie, S. D., & Xuan, J. (2022). Hybrid ML optimization for catalytic processes. Energy and AI, 7, 100134.



Cascade heat pumps as an enabler for solvent-based post-combustion capture in a cement plant

Sarun Kumar Kochunni1, Rahul Anantharaman2, Armin Hafner1

1Department of Energy and Process Engineering, NTNU; 2SINTEF Energy Research

Cement production is a significant source of global CO₂ emissions, contributing about 7-8% of the world's total emissions. This is mainly due to the energy-intensive process of producing clinker (the primary component of cement) and the chemical reaction called calcination, which releases CO₂ when limestone (calcium carbonate) is heated. Around 60% of these direct emissions arise from calcination, while the remaining 40% result from fuel combustion. Thus, capturing CO₂ is essential for decarbonising the industry. Among the various capture techniques, solvent-based post-combustion CO₂ capture stands out due to its maturity and compatibility with existing cement plants. However, this method demands significant heat for solvent regeneration, which is often scarce in many cement facilities that require substantial heat for drying raw materials. Typically, 30-50% of the heat needed for solvent regeneration can be sourced from the excess heat generated within the cement plant. Additional heat can be supplied by burning fuels to create steam or by employing heat pumps to upgrade the low-grade heat available from the capture facility or the subsequent CO₂ liquefaction process.

This study systematically incorporates cascade heat pumps to harness waste heat from the CO₂ liquefaction process for solvent regeneration. The proposed method replaces the conventional ammonia-based refrigeration system for CO₂ liquefaction with a cascade high-temperature heat pump (HTHP), which provides both refrigeration for liquefaction and high-temperature heat for solvent regeneration. The system liquefies CO₂ using the evaporator and applies the heat rejected via the condenser for solvent regeneration. In this cascade HTHP, ammonia or propane is used in the lower cycle, while butane or pentane operates in the upper cycle, targeting operating temperatures of 240 K for liquefaction and 395 K for heat supply.

The system’s thermodynamic performance is evaluated using ASPEN HYSYS simulations across different refrigerant configurations in the integrated setup. The findings indicate that an HTHP system using ammonia and pentane can deliver up to 12.5% of the heat needed for solvent regeneration, resulting in a net COP of 2.0. This efficiency exceeds that of other low-temperature heat sources for solvent regeneration. While adding a pentane cycle raises power consumption, the system remains energy-efficient overall, highlighting its potential for decarbonising cement production through enhanced CO₂ capture and integration strategies.



Agent-Based Simulation of Integrated Process and Energy Supply Chains: A Case Study on Biofuel Production

Farshid Babaei, David B. Robins, Robert Milton, Solomon F. Brown

School of Chemical, Materials and Biological Engineering, University of Sheffield, United Kingdom

Despite the potential benefits of decision-level integration for process and energy supply chains, these systems are traditionally assessed and optimised by incorporating simplified models of unit operations within a spatially distributed network. Such organisational-level integration can hardly be achieved without leveraging Information and Communication Technology (ICT) tools and concepts. In this research work, a multi-scale agent-based model is proposed to facilitate the transition from traditional practices to coordinated supply chains.

The multi-agent system framework proposed incorporates different organisational dimensions of the process and energy supply chains including raw material suppliers, rigorous processing plants, and consumers. Furthermore, the overall behaviour of each agent type in the model and its interaction with other agents are implemented. This allows for the simultaneous assessment and optimisation of process and supply chain decisions. By integrating detailed process models into the supply chain operation, the devised framework goes beyond existing studies in which the behaviour of lower decision levels is neglected.

To demonstrate the application of the proposed multi-agent system, a case study for a biofuel supply chain is presented that captures the underlying dynamics of the supply chain network. The involved actors, comprising farmers, biorefineries, and end-users, seek to increase their payoffs given their interdependencies and intra-organisational variables. The example features distributed and asynchronous decision-making, competition between same-echelon actors, and incomplete information. The aggregated payoff of the supply network is optimised under different scenarios, and the fractions of capacity allocated to biofuel production and consumption, as well as the biofuel production variables, are obtained. According to the results, unit-operation-level decisions, along with the participants' allocated-capacity options, significantly influence supply chain performance. In conclusion, the proposed research provides a more realistic view of multi-scale coordination schemes in process and energy supply chains.
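A stripped-down agent interaction of the kind described above is sketched below; the agent classes, payoff rule, and numerical values are assumptions for illustration and do not reproduce the framework developed in this work.

from dataclasses import dataclass

@dataclass
class Farmer:
    capacity: float            # t/year of biomass
    biofuel_fraction: float    # share of capacity offered to biorefineries

    def supply(self):
        return self.capacity * self.biofuel_fraction

@dataclass
class Biorefinery:
    demand: float              # t/year of biomass required at full load
    margin: float              # assumed profit per tonne processed

    def payoff(self, delivered):
        return self.margin * min(delivered, self.demand)

# One simulation step: farmers decide independently, the refinery reacts to the total supply
farmers = [Farmer(1000.0, 0.4), Farmer(800.0, 0.6)]
refinery = Biorefinery(demand=1500.0, margin=12.0)
delivered = sum(f.supply() for f in farmers)
print("Refinery payoff:", refinery.payoff(delivered))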



Steel Plant Electrification: A Pathway to Sustainable Production and Carbon Reduction

Rachid Klaimi2, Sabla Alnouri1, Vladimir Stijepovic3, Aleksa Miladinovic3, Mirko Stijepovic3

1Qatar University, Qatar; 2Notre Dame University; 3University of Belgrade

Traditional steel processes are energy-intensive and rely heavily on fossil fuels, contributing to significant greenhouse gas emissions. By adopting electrification technologies, such as electric boilers and compressors, particularly when powered by renewable energy, steel plants can reduce their carbon footprint, enhance process flexibility, and lower long-term operational costs. This transition also aligns with increasing regulatory pressure and market demand for greener practices, positioning companies for a more competitive and sustainable future. This work investigates the potential of replacing the conventional, fossil-fuel-fired steam crackers in a steel plant with electrically driven heating systems powered by renewable energy sources. The overall aim is to significantly lower greenhouse gas emissions by integrating electric furnaces and heat pumps into the steel production process. The study evaluates the potential carbon savings from the integration of solar energy in a steel plant with a production capacity of 300,000 tons per month. The solar field required for this integration was found to span an area of 40,764 m². By incorporating solar power into the plant’s energy mix, the analysis reveals a significant reduction in carbon emissions, with an estimated saving of 2,831 tons of CO₂ per year.



INCEPT: Interpretable Counterfactual Explanations for Processes using Timeseries comparisons

Omkar Pote3, Dhanush Majji3, Abhijit Bhakte1, Babji Srinivasan2,3, Rajagopalan Srinivasan1,3

1Department of Chemical Technology, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics and Risk Technology, Indian Institute of Technology Madras, Chennai 600036, India

Advancements in sensors, storage technologies, and computational power have unlocked the potential of AI for process monitoring. AI-based methods can successfully address complex process monitoring involving multivariate time series data. While their classification performance in process monitoring is very good, the decision-making logic of AI models is often difficult for operators and other plant personnel to interpret. In this paper, we propose a novel approach, based on counterfactual explanations, for explaining the results of AI-based process monitoring methods to plant operators.

Explainable AI (XAI) has emerged as a promising field of research, aiming to address these challenges by enhancing the interpretability of AI. XAI has gained significant attention in chemical engineering; much of this research focuses on explainability for tabular and image data. Most XAI methods provide explanations at the sample level, i.e., they assume that a single data point is inherently interpretable, which is an unrealistic assumption for dynamic systems such as chemical processes. There has been limited exploration of explainability for systems characterized by multivariate time series. To address this gap, we propose a novel XAI method that provides counterfactual explanations accounting for the multivariate time-series nature of process data.

A counterfactual explanation is the "smallest change to the feature values that alters the prediction to a predefined output." Ates et al. (2021) developed a method for counterfactual explainability of multivariate time series. Here, we adapt this method and extend it to account for autocorrelation and cross-correlation, which are essential in process monitoring. Our proposed method, called INterpretable Counterfactual Explanations for Processes using Time series comparisons (INCEPT), generates a counterfactual explanation through a four-step methodology. Consider an online process sample given to a neural-network-based fault identification model; the neural network uses a window of data around this sample to predict the state of the process (normal, fault-1, etc.). First, the time-series data are transformed into PC space using Dynamic PCA to address autocorrelation and cross-correlation. Second, the nearest match from the training data is identified in this space for the desired class using the Euclidean distance. Third, a counterfactual sample is generated by adjusting key variables that increase the likelihood of the desired class, guided by a greedy algorithm. Finally, the counterfactual is transformed back to the original space, and the model recalculates the class probabilities until the desired class is achieved. The adjustments to the process variables needed to produce the counterfactual are used as the basis for generating explanations.
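A simplified nearest-neighbour counterfactual search in a reduced space is sketched below; it uses ordinary PCA instead of Dynamic PCA, assumes a generic scikit-learn-style classifier clf, and only illustrates the four-step idea rather than the INCEPT implementation.

import numpy as np
from sklearn.decomposition import PCA

def counterfactual(x_query, X_train, y_train, clf, target_class, n_components=5):
    # Step 1: project samples into a reduced PC space (stand-in for Dynamic PCA on lagged windows)
    pca = PCA(n_components=n_components).fit(X_train)
    z_query = pca.transform(x_query.reshape(1, -1))[0]

    # Step 2: nearest training sample of the desired class, measured in PC space
    Z_target = pca.transform(X_train[y_train == target_class])
    z_near = Z_target[np.argmin(np.linalg.norm(Z_target - z_query, axis=1))]

    # Steps 3-4: greedily move the query towards that neighbour, one component at a time,
    # mapping back to the original space until the classifier predicts the desired class
    z_cf = z_query.copy()
    x_cf = pca.inverse_transform(z_cf.reshape(1, -1))
    for i in np.argsort(-np.abs(z_near - z_query)):        # largest gaps first
        z_cf[i] = z_near[i]
        x_cf = pca.inverse_transform(z_cf.reshape(1, -1))
        if clf.predict(x_cf)[0] == target_class:
            break
    return x_cf[0]                                         # counterfactual in the original space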

The effectiveness of the proposed method will be demonstrated using the Tennessee Eastman case study. The generated explanations can aid model developers in debugging and model enhancement. They can also assist plant operators in understanding the model’s predictions and gaining actionable insights.

References:

[1] Bhakte, A., et al., 2024. Potential for Counterfactual Explanations to Support Digitalized Plant Operations.

[2] Bhakte, A., et al., 2022. An explainable artificial intelligence-based approach for interpretation of fault classification results from deep neural networks.

[3] Ates, E., et al., 2021. Counterfactual Explanations for Multivariate Time Series.



Dynamic Simulation of an Oxy-Fuel Cement Pyro-processing Section

Marc-Daniel Stumm1, Tom Dittrich2, Jost Lemke2, Eike Cramer1, Alexander Mitsos3,1,4

1Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany; 2thyssenkrupp Polysius GmbH, 59269 Beckum, Germany; 3JARA-ENERGY, 52056 Aachen, Germany; 4Institute of Climate and Energy Systems, Energy Systems Engineering (ICE-1), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany

Cement production accounts for 7 % of global greenhouse gas emissions [1]. Tackling these emissions requires carbon capture and storage technologies [1], of which an oxy-fuel combustion process followed by CO2 compression is economically promising [2]. The oxy-fuel process substitutes air with a mixture of O2 and CO2 as the combustion medium. The O2-CO2 mixture requires a partial recirculation of flue gas [3], which increases the complexity of the process dynamics and can lead to inefficient operating conditions, thus necessitating process control. We propose the use of model-based control and state estimation schemes. As the recycle couples the dynamics of the whole pyro-processing section, the process model must include the entire section, namely, the preheater tower, precalciner, rotary kiln, and clinker cooler.

Literature on dynamic cement production models is scarce and focuses on modeling individual units, e.g., the rotary kiln [4,5] or the precalciner [6]. We develop a first-principles dynamic model of the full pyro-processing section, including the preheater tower, precalciner, rotary kiln, and clinker cooler as submodels. The states of the precalciner, rotary kiln, and clinker cooler vary significantly in the axial direction; the corresponding models are therefore spatially discretized using the finite volume method. Parameter values for the model are taken from the literature [6]. We implement the models in Modelica as an aggregation of submodels, so the model can easily be adapted to different cement plants, which vary in configuration.

We simulate the oxy-fuel pyro-processing section outlined in the CEMCAP study [3]. The simulation yields residence times, temperatures, and cement compositions similar to those reported in the literature [2,7], validating our model. The presented dynamic model can therefore form the basis for future model-based control and state estimation applications. Furthermore, the model can be used to investigate carbon reduction measures in the cement industry.
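As a rough illustration of the spatial discretization mentioned above, a minimal explicit finite-volume sketch for a one-dimensional advection-type temperature balance is given below; all dimensions, velocities, and the heat-sink value are assumed, and the sketch is unrelated to the Modelica implementation of this work.

import numpy as np

n, L = 50, 60.0            # number of volumes and assumed kiln length, m
dz = L / n
u = 0.5                    # assumed axial gas velocity, m/s
T_in = 1200.0              # assumed inlet gas temperature, K
q = -0.5                   # assumed net volumetric heat sink (endothermic reactions), K/s
T = np.full(n, 900.0)      # initial temperature profile, K

dt, t_end = 0.1, 600.0
for _ in range(int(t_end / dt)):
    T_up = np.concatenate(([T_in], T[:-1]))       # upwind values at the cell faces
    T += dt * (u * (T_up - T) / dz + q)           # explicit upwind finite-volume update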

References

1. European Commission. Joint Research Centre. Decarbonisation options for the cement industry; Publications Office, 2023.

2. SINTEF Energy Research. CEMCAP D4.6 - Comparative techno-economic analysis of CO2 capture in cement plants 2018.

3. Ditaranto, M.; Bakken, J. Study of a full scale oxy-fuel cement rotary kiln. International Journal of Greenhouse Gas Control 2019, 83, 166–175, doi:10.1016/j.ijggc.2019.02.008.

4. Spang, H.A. A Dynamic Model of a Cement Kiln. Automatica 1972, 309–323, doi:10.1016/0005-1098(72)90050-7.

5. Svensen, J.L.; Da Silva, W.R.L.; Merino, J.P.; Sampath, D.; Jørgensen, J.B. A Dynamical Simulation Model of a Cement Clinker Rotary Kiln, 2024. Available online: http://arxiv.org/pdf/2405.03200v1.

6. Svensen, J.L.; Da Silva, W.R.L.; Jørgensen, J.B. A First-Engineering Principles Model for Dynamical Simulation of a Calciner in Cement Production, 2024. Available online: http://arxiv.org/pdf/2405.03208v1.

7. European Commission - JRC IPTS European IPPC Bureau. Best Available Techniques (BAT) Reference Document for the Production of Cement, Lime and Magnesium Oxide.

8. Mujumdar, K.S.; Ganesh, K.V.; Kulkarni, S.B.; Ranade, V.V. Rotary Cement Kiln Simulator (RoCKS): Integrated modeling of pre-heater, calciner, kiln and clinker cooler. Chemical Engineering Science 2007, 62, 2590–2607, doi:10.1016/j.ces.2007.01.063.



Multi-Objective Optimization for Sustainable Design of Power-to-Ammonia Plants

Andrea Isella, Davide Manca

Politecnico di Milano, Italy

Ammonia synthesis is currently the most carbon-intensive chemical process after oil refining (Isella and Manca, 2022). From this perspective, producing ammonia from renewable-energy-powered electrolysis (i.e., Power-to-Ammonia) is attracting increasing interest and has the potential to make the ammonia industry carbon-neutral (MPP, 2022). This work addresses the process design of such a synthesis pathway through a methodology based on the multi-objective optimization of the so-called “three pillars of sustainability”: economic, environmental, and social. Specifically, we developed a tool that estimates the installed capacities of every main process section typically featured in Power-to-Ammonia facilities (e.g., the renewable power plant, the electrolyzer, and the energy and hydrogen storage systems) so as to maximize the “Global Sustainability Score” of the plant.



Simulating Long-term Carbon Balance on Forestry Management and Woody Biomass Applications in Japan

Ziyi Han1, Heng Yi Teah2, Yuichiro Kanematsu2, Yasunori Kikuchi1,2,3

1Department of Chemical System Engineering, The University of Tokyo; 2Presidential Endowed Chair for “Platinum Society”, The University of Tokyo; 3Institute for Future Initiatives, The University of Tokyo

Forests play a vital role as carbon sinks and renewable resources in mitigating climate change. However, in Japan, insufficient forest management has resulted in a suboptimal age-class distribution of trees. Aging trees are left unattended and underutilized, and their carbon capture becomes less efficient as they age. This underutilization also contributes to a substantial reliance on imported wood products (i.e., low self-sufficiency). To improve carbon sequestration and renew the forest industries, it is crucial to adopt a systematic approach so that the emissions and mitigation opportunities in the transformation of forest resources into usable products along the forest value chain can be identified and optimized.

In this study, we aim to identify an efficient forestry value chain that maximizes the carbon mitigation considering the coordination of the varied interests from diverse stakeholders in Japan. We simulate the long-term carbon balance on forest management and forest resources utilization, incorporating the woody biomass material flow across five modules, with two in wood production and three in wood utilization sectors.

(1) Forest and forestry management: the supply of woody biomass from designed forestry management practices, for example, to homogenize the forest age class distribution within a given simulation period.

(2) Wood processing: the transformation of roundwood into timber, plywood, and wood chips. A different ratio of wood products is determined based on the demand from each application.

(3) Construction sector: using timber and plywood for wood construction; the maximum flow is to satisfy the domestic demand of construction with 100% self-sufficiency rate without the need for imported wood.

(4) Energy sector: using wood chips for direct conversion to heat and electricity; the maximum flow is to reach the saturation of local renewable energy demand provided by local governments.

(5) Chemical sector: using wood chips as sources of cellulose, hemicellulose and lignin for thermochemical conversion to chemicals that serve as versatile energy carriers, considering multiple pathways. The target hydrocarbon products include hydrogen, jet fuels and biodiesel.

We focus on the allocation of woody biomass from modules (1) and (2) to the three utilization modules. The objective is to identify the flows of energy and material production through the various pathways and to evaluate the GHG emissions within the defined system boundary. We evaluate the carbon balance of sequestration and emission from modules (1) and (2), and the cradle-to-gate life-cycle GHG emissions of modules (3), (4) and (5), accounting for the processes of the selected co-production pathways.
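A toy allocation sketch is given below; all coefficients and flows are assumed for illustration only, and it merely shows how sector allocations and emission factors combine into a net GHG balance of the kind computed by the model.

sequestration_ktCO2 = -500.0                    # assumed net forest carbon sink
emission_factors = {"construction": 0.05,       # assumed kt CO2e emitted per kt of wood used
                    "energy": 0.10,
                    "chemical": 0.30}
allocation_kt = {"construction": 800.0, "energy": 600.0, "chemical": 200.0}   # assumed flows

net_ghg = sequestration_ktCO2 + sum(allocation_kt[s] * emission_factors[s]
                                    for s in allocation_kt)
print(f"Net GHG balance: {net_ghg:.0f} kt CO2e")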

Our model shows the overall GHG emissions resulting from the forestry value chain for a given forestry management and processing strategy, as well as the environmentally preferred order of woody biomass utilization. The variables in each module can be set to reflect the interests of each sector, allowing the model to capture the consequences of wood resource allocation and availability and their contribution to climate mitigation. Therefore, the simulation can support policymakers and relevant industry stakeholders in more comprehensive forestry management and biomass application planning in Japan.



Discovering patterns in Food Safety Culture by k-means clustering

Simen Akkermans1, Maria Tsigka2, Jan FM Van Impe1, Efstathia Tsakali1,2

1BioTeC+ KU Leuven; 2University of West Attica, Greece

Food safety (FS) is an ongoing concern, and despite the awareness and major initiatives of recent decades, several outbreaks highlight the need for further action. Prevention, combined with the application of prerequisite programs, constitutes the fundamental principle of any Food Safety Management System (FSMS), with particular emphasis on hygiene, food safety training, and the development and implementation of FSMSs throughout all areas of activity in the food industry. The concept of Food Safety Culture (FSC), on the other hand, distinguishes FS from the FSMS by focusing on human behavior. Food safety managers often do not fully understand the relationship between FS and FSC, resulting in improper practices and further food safety risks. Over the past decade, various tools for enforcing FSC have been proposed for different sectors of the food industry. However, there is no universal assessment tool, as specific aspects of food safety culture and each sector of the food industry require different or customized assessment tools. Although the literature on FS is growing rapidly, research related to FSC remains scarce and fragmented. The aim of this study was to test the potential of machine learning based on questionnaire results to uncover patterns in FSC.

As a case study, surveys were conducted with 103 employees of the Greek food industry. These employees were subdivided over different departments, genders, experience levels, company food hazard levels and company sizes. Each employee filled out a questionnaire consisting of 18 questions based on a Likert scale. After establishing the existence of significant relationships between the answers provided, it was investigated whether specific subgroups of employees exhibited a different FSC. This was done by applying unsupervised k-means clustering to the survey results. It was found that, when the employees were subdivided into just 3 clusters, the clusters differed significantly on all 18 survey questions, as demonstrated by Kruskal-Wallis tests. As such, these 3 clusters represented employee subgroups that adhered to a distinct FSC. This classification provides valuable information on the different cohorts that exist with respect to FSC and thereby enables a targeted approach to improving FSC.
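The analysis pipeline can be illustrated with the short sketch below; the questionnaire answers are randomly generated stand-ins for the actual survey data, so the printed statistics are not results of this study.

import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import kruskal

rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(103, 18))     # 103 employees x 18 Likert items (1-5)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(answers)

# Test whether each of the 18 questions differs significantly across the 3 clusters
for q in range(answers.shape[1]):
    groups = [answers[labels == c, q] for c in range(3)]
    h, p = kruskal(*groups)
    print(f"Q{q + 1}: H = {h:.2f}, p = {p:.3f}")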

This study has demonstrated the potential of machine learning techniques to monitor and control FSC. As such, the proposed approach contributes to the implementation of GFSI and BRC GC standards requirements and the General Principles for Food Hygiene of the 2020 amendment of Codex Alimentarius.



Development and Integration of a Co-Current Hollow Fiber Membrane Unit for Gas Separation in Process Simulators Using CAPE-OPEN Standards

Loretta Salano, Mattia Vallerio, Flavio Manenti

Politecnico di Milano, Italy

Process simulation plays a crucial role in the design, control, and optimization of chemical processes, offering a cost-effective alternative to experimental approaches. This study presents the development and implementation of a custom co-current hollow fiber membrane unit for gas separation using the CAPE-OPEN standard, integrated into Aspen HYSYS®. A one-dimensional model was derived under appropriate physical assumptions, leading to a boundary value problem (BVP) due to the pressure profile along the fiber. The shooting method allows the accurate resolution of BVPs by iteratively adjusting the initial conditions to minimize the residual at the far boundary. This approach ensures convergence to the correct solution, which is critical for complex gas separation processes. The CAPE-OPEN standards allow the model, developed in C++, to be linked to the simulator and to interact with it through input and output ports. To further ensure reliability, error handling has been included to enforce appropriate operational parameters from the user. Furthermore, appropriate output variables are returned to the simulator environment to enable direct optimization within the process simulator. This flexibility provides greater control over key performance indicators, such as energy consumption and separation efficiency, ultimately facilitating a more efficient design process for applications like biogas upgrading, hydrogen purification, and carbon capture. Results from case studies demonstrate that the co-current hollow fiber membrane unit significantly reduces energy consumption compared to traditional methods like pressure swing water absorption (PSWA) for biogas upgrading to biomethane. While membrane technology showed a 21% reduction in energy consumption for biomethane production, PSWA exhibited slightly higher efficiency for biomethanol production. This study not only demonstrates the value of CAPE-OPEN standards in implementing custom unit operations but also lays the groundwork for future developments in process simulation using advanced mathematical modelling and optimization techniques.
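A generic shooting-method sketch for a two-point boundary value problem is given below; the toy problem (y'' = -y with y(0) = 0, y(1) = 1) is chosen only to illustrate the iterative adjustment of the unknown initial condition, whereas the membrane model of this work is far more detailed and written in C++.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def boundary_residual(slope):
    # Integrate the ODE from x = 0 with a guessed initial slope; return the mismatch at x = 1
    sol = solve_ivp(lambda x, y: [y[1], -y[0]], (0.0, 1.0), [0.0, slope], dense_output=True)
    return sol.sol(1.0)[0] - 1.0

slope = brentq(boundary_residual, 0.1, 5.0)   # slope for which the far boundary condition is met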



A Comparative Study of Aspen Plus and Machine Learning Models for Syngas Prediction in Biomass-Plastic Waste Co-Gasification

Usman Khan Jadoon, Ismael Diaz, Manuel Rodriguez

Departamento de Ingeniería Química Industrial Y del Medioambiente, Escuela Superior de Ingenieros Industriales, Universidad Politécnica de Madrid

The transition to cleaner energy sources is critical for addressing global environmental challenges, and the co-gasification of biomass and plastic waste presents a viable solution for sustainable syngas production. Syngas, a crucial component in energy applications, demands precise prediction of its composition to enhance co-gasification efficiency. Traditional modelling techniques, such as those implemented in Aspen Plus, have been instrumental in simulating gasification processes. However, machine learning (ML) models offer the potential to improve predictive accuracy, particularly for complex, non-linear systems. This study explores the comparative performance of Aspen Plus models and surrogate ML models in predicting syngas composition during the steam and air co-gasification of biomass and plastic waste.

The primary focus of this research is on evaluating Aspen Plus-based modelling techniques, such as thermodynamic equilibrium, restricted equilibrium, and kinetic modelling, alongside surrogate models such as Kriging, support vector machines, and artificial neural networks. The novelty of this work lies in the integration of Aspen Plus with machine learning methodologies, providing, for the first time, a comprehensive comparative analysis of both approaches. This study seeks to determine which modelling approach offers superior accuracy for predicting syngas components such as hydrogen, carbon monoxide, carbon dioxide, and methane.

The methodology involves developing Aspen Plus models for the steam and air co-gasification of woody biomass and plastic wastes as feedstocks. These models simulate syngas production under varying operating conditions. Concurrently, machine learning models are trained on experimental datasets to predict syngas composition based on the same input parameters. A comparative analysis is then performed, with the accuracy of each approach measured using performance metrics such as the root mean square error.
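A compact version of such a surrogate comparison is sketched below using scikit-learn; the synthetic input-output data are placeholders for the experimental co-gasification datasets, so the resulting errors are illustrative only.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 4))    # e.g. temperature, steam/air ratio, plastic share, equivalence ratio
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.standard_normal(200)   # stand-in for H2 yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("Kriging", GaussianProcessRegressor()),
                    ("SVM", SVR()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000))]:
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name}: RMSE = {rmse:.3f}")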

ML models are anticipated to better capture the non-linearities of the gasification process, while Aspen Plus models will continue to offer valuable mechanistic insights and process understanding. The potential superiority of ML models suggests that integrating data-driven and process-driven approaches could enhance predictive capabilities and optimize co-gasification processes. This study offers significant contributions to the field of bioenergy and gasification technologies by exploring the potential of machine learning as a powerful predictive tool. By comparing Aspen Plus and machine learning models, this research highlights the potential benefits of combining these methodologies to improve syngas production forecasts. The findings from this comparative analysis are expected to advance the development of more accurate and efficient bioenergy technologies, contributing to the global transition toward sustainable energy systems.



A Fault Detection Method Based on Key Variable Forecasting

Borui Yang, Jinsong Zhao

Department of Chemical Engineering, Tsinghua University, Beijing 100084, China

With the advancement of industrial production toward digitalization and automation, process monitoring has become an essential technical tool for ensuring the safe and efficient operation of chemical processes. Although process engineering has developed greatly, the risk of process faults remains. If such faults are not detected and diagnosed at an early stage, they may grow beyond control. Over the past decades, various fault detection approaches have been proposed, including model-driven, knowledge-driven, and data-driven methods. Data-driven methods, in particular, have gained prominence, as they rely primarily on large amounts of process data, making them especially relevant with the widespread application of the Internet of Things (IoT). Among these, neural-network-based methods have emerged as a prominent approach. By stacking feature extraction layers and applying nonlinear activation functions between them, deep neural networks exhibit a strong capacity to capture complex, nonlinear patterns. This aligns well with the nature of chemical process variables, which are inherently nonlinear, strongly coupled with control loops, multivariate, and subject to time lags.

In industrial applications, fault detection algorithms rely on the time-series data of key variables. However, statistical methods such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) are limited in capturing the temporal dependencies between consecutive data points. To address this, architectures such as Autoencoders (AE), Convolutional Neural Networks (CNN), and Transformers incorporate the relationships between time points through sliding-window sampling. However, this approach can dilute fault signals, leading to delayed fault detection. Inspired by the human decision-making process, in which adverse future trends are considered to enable timely responses to unfavorable outcomes, we propose incorporating key variables that have already entered a fault state at future time points into the fault detection model. This proactive inclusion of future fault indicators can significantly improve the timeliness of fault detection.

Building on the aforementioned concept, this work develops and implements a proactive fault detection method based on key variable forecasting. This approach employs multiple predictive models (such as LSTM, Transformer, and Crossformer) to actively forecast key variables over a future time horizon. The predicted results, combined with historical information, are used as inputs to a variational autoencoder (VAE) to calculate the reconstruction error for fault detection. The detection component of the method is trained using normal operating data, and faults are identified by evaluating the reconstruction error. The forecasting component is trained with mixed data, where the initial part contains normal data, followed by the selective introduction of faults after a certain period, enabling the predictive model to capture both fault evolution trends and normal data characteristics.
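The detection step can be summarised in the short conceptual sketch below; the forecaster and autoencoder are assumed to be pretrained objects with a simple predict interface, so this is an outline of the data flow rather than the actual implementation.

import numpy as np

def detect(window_hist, forecaster, autoencoder, threshold):
    """window_hist: (n_hist, n_vars) recent measurements of the key variables."""
    window_fut = forecaster.predict(window_hist)             # forecast over the future horizon
    window = np.vstack([window_hist, window_fut]).ravel()    # combined history + forecast input
    recon = autoencoder.predict(window[None, :])[0]          # VAE-style reconstruction
    error = float(np.mean((window - recon) ** 2))
    return error > threshold, error      # flag a fault when the error exceeds the normal-data threshold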

The proposed method has been validated on the CSTH dataset and the Tennessee Eastman Process (TEP) dataset, demonstrating that incorporating future information at the current time point significantly enhances early fault detection. However, optimizing the design of the reconstruction loss function and model architecture is necessary to mitigate false alarms. Introducing future expectations into current assessments shows great potential for advancing early fault detection and diagnosis but also poses challenges, requiring higher performance from key variable forecasting models.



Rate-Based Modeling Approach of a Rotating Packed Bed for CO2 Chemisorption in aqueous MEA Solutions

Joshua Orthey, John Paul Gerakis, Markus Illner, Jens-Uwe Repke

Process Dynamics and Operations Group - Technical University of Berlin, Germany

Driven by societal and political pressure for climate action, CO2 capture from flue gases is a primary focus for both academia and industry. Rotating Packed Beds (RPBs) [1][2] are a promising route to process intensification and offer significant advantages for intensifying amine-based absorption processes, including enhanced mass transfer, load flexibility, higher allowable fluid loads, and the ability to use more concentrated amine solutions with higher viscosities. One main focus of our study encompasses both a direct comparison between packed columns and RPBs and the integration of these technologies in a hybrid concept, with the potential to enhance the overall efficiency of the CO₂ capture process. Since there are numerous process configurations combining RPBs and packed columns in CO2 capture, covering gas pretreatment, absorption, and desorption, an initial evaluation of viable candidate configurations is essential. Equally important is the analysis of fundamental process behavior and its limitations, which is crucial for planning effective experimental campaigns and identifying suitable operating conditions. Unlike existing models, our approach offers a more detailed analysis, focusing specifically on the assessment of different process configurations and experimental conditions, enabling a deeper understanding and refinement of the capture process and allowing us to effectively plan and design experiments.

For this purpose, a rate-based approach RPB model for the reactive absorption of CO2 with MEA solutions using the two-film theory was developed. The model is formulated for steady-state operations and encompasses all relevant component species. It addresses multicomponent mass transfer, incorporating equilibrium and kinetic reactions in the liquid phase while considering mass transfer resistances in both the liquid and gas phase.

For the gas bulk phase, ideal gas behavior is assumed, while the non-ideal liquid phase is described with activity coefficients (elecNRTL). The Maxwell-Stefan approach is used to describe diffusion and mass transport in both phases. The model is discretized over an equidistant radial grid [1]; additionally, a film discretization near the interface was implemented. First validation studies show that the model accurately captures the dependencies on rotational speed and varying liquid-to-gas (L/G) ratios with respect to temperature and concentration profiles, and it has been validated against literature data [2].

The CO₂ absorption and desorption process using conventional packed bed columns has been implemented in Aspen Plus. To enable simulations of hybrid configurations, the developed RPB model will be integrated into Aspen Custom Modeler. This study aims to analyze various hybrid process configurations through simulation to identify an efficient configuration, which will be validated by experiments in pilot plants. These results will demonstrate whether integrating RPBs with packed columns enhances energy efficiency and separation performance while reducing operational costs and providing key insights for future scale-up efforts and driving the advancement of hybrid CO₂ capture processes.

[1] Thiels et al. (2016): Modelling and Design of Carbon Dioxide Absorption in Rotating Packed Bed and Packed Column. DOI: 10.1016/j.ifacol.2016.07.303

[2] Hilpert, et al. (2022): Experimental analysis and rate-based stage modeling of multicomponent distillation in a Rotating Packed Bed. DOI: 10.1016/j.cep.2021.108651.



Machine Learning applications in dairy production

Alexandra Petrokolou1, Satyajeet Sheetal Bhonsale2, Jan FM Van Impe2, Efstathia Tsakali1,2

1BioTeC+ KU Leuven; 2University of West Attica, Greece

The dairy sector is one of the most well-developed and prosperous industries at an international level. Owing to several factors, including its high nutritional value, its susceptibility, and its popularity among consumers, milk attracted scientific interest quite early compared to other food products. Likewise, the dairy industry has always been a pioneer in adopting new processing, monitoring, and quality control technologies, from pasteurization heat treatment and canning for shelf-life extension at the beginning of the 20th century to PCR methods for detecting adulteration and, nowadays, machine learning applications.

The dairy industry is closely connected with large-scale production lines and complex processes that require precision and continuous monitoring. The primary target is to meet customer requirements with increased profit while minimizing environmental impact. In this regard, various automated models based on artificial intelligence, particularly machine learning, have been developed to contribute to sustainability and the circular economy. There are three major types of machine learning: supervised learning, which uses labeled data; unsupervised learning, where the algorithm tries to find hidden patterns and relationships; and reinforcement learning, which employs a trial-and-error method. Building a machine learning model requires several steps, starting with relevant and accurate data collection. These smart applications have been extensively introduced to dairy production, from the farm stage and milk processing to final inspection and product distribution. In this paper, the most significant applications of machine learning in the dairy industry are illustrated with actual applications and discussed in terms of their potential. The applications are categorized by production stage and by the purpose they serve.

The most significant applications integrate recognition cameras, smart sensors, thermal imaging cameras, and digitized supply chain systems to facilitate inventory management. During animal rearing, smart environmental sensors can monitor weather conditions in real time. In addition, animals can be fitted with smart collars or other small devices to record parameters such as breathing rate, metabolism, weight, and body temperature. These devices can also track the animals’ location and monitor transitions from lying to standing. By collecting these data, algorithms can detect the potential onset of diseases such as mastitis, minimizing the need for manual processing of repetitive tasks and enabling proactive health management.

Beyond the farm, useful applications emerge in milk processing, particularly in pasteurization, which requires specific temperature and time settings for each production line. Machine learning models can optimize this process, resulting in energy savings. The control of processing conditions through sensors also aids the ripening stage, contributing to the standardization of cheese products. Advancements are also occurring in product packaging, where machine vision technology can identify damage and defects that may compromise product quality, potentially leading to food spoilage and consumer dissatisfaction. Finally, dairy products are particularly vulnerable and necessitate specific conditions throughout the supply chain. By employing machine learning algorithms, it is possible to identify the most efficient distribution routes, thereby reducing operational costs. Additionally, a smart sensor system can monitor temperature and humidity levels, spotting deviations from established safety and quality standards.



Dynamic Modelling of CO2 Capture with Hydrated Lime: Integrating Porosity Evolution, Evaporation, and Temperature Variations

Natalia Vidal de la Peña, Dominique Toye, Grégoire Léonard

University of Liège, Belgium

The construction sector is currently one of the most polluting industries globally. In Europe, over 30% of the EU's environmental footprint is attributed to buildings, making this sector the largest contributor to environmental impact within the European Union. Buildings are responsible for 42% of the EU's annual energy consumption and for 35% of annual greenhouse gas (GHG) emissions.

Considering these data, it is essential to explore methods to reduce the negative environmental impact of this sector. To contribute to the circular economy of the sector, this work proposes mineral carbonation as a means to mitigate the environmental impact of the industry. In particular, we focus on the carbonation of mineral wastes from the construction sector, namely pure hydrated lime (Ca(OH)2), referred to as CH in construction terminology.

This research is part of the Walloon Region's Mineral Loop project, whose objective is to model the carbonation reactions of mineral waste and to optimize the process by improving reactor conditions and material pretreatment. The carbonation of hydrated lime involves a reaction between calcium hydroxide and CO2, combining physical and chemical phenomena. The novelty of this mathematical model lies in the consideration of the porosity evolution during carbonation, as well as the liquid water saturation of the material, by accounting for evaporation phenomena. Furthermore, the model represents the temperature gradient along the reactor. These parameters are important because they affect the carbonation rate of the material. In previous work, we observed that the influence of water in this system is significant and that a good characterization of its behaviour during the process is crucial. Water is needed to initiate the carbonation, but introducing too much can lead to pore blockage; the water released during carbonation can also cause pore blockage if evaporation is not adequately considered. The model therefore accounts for the influence of water, enabling a good correlation between water evaporation and carbonation rates under different carbonation conditions. All parameters are experimentally validated to provide a reliable model that can predict the behaviour of CH during carbonation.

The experimental setup for the carbonation process consists of an aluminium cup filled with CH placed inside a reactor with a capacity of 1.4 L, where pure CO2 is introduced through a hole in the upper part of the reactor. The system is modelled in COMSOL Multiphysics 6.2 by introducing the cup geometry and assuming CO2 is transported axially through the aluminium cup containing hydrated lime particles by dispersion, without convection, and that it diffuses within the material.

In conclusion, the proposed mathematical model accounts for the reaction phenomena, porosity variation, thermal gradient, and evaporation during the carbonation process, providing a solid understanding of the system and an effective tool to contribute to the circular economy of the construction industry. The model has been successfully validated, and the primary objective moving forward is to use it as a tool for predicting the carbonation response of other more complex materials.



Integrated LCA and Eco-design Process for Hydrogen Technologies: Case Study of the Solid Oxide Electrolyser.

Gabriel Magnaval1,2, Tristan Debonnet2, Manuele Margni1,2

1CIRAIG, Polytechnique Montréal, Montréal, Canada; 2HES-SO Valais-Wallis, Sion, Switzerland

Fuel Cell and Electrolyzer Cell hydrogen technologies are promising solutions to support the green transition. To ensure their sustainable development from the early stages of design, it is essential to assess their environmental impacts and define effective ecodesign strategies.

Life Cycle Assessment (LCA) is a widely used methodology for evaluating the environmental impacts of a product or system throughout its entire life cycle, from raw material extraction to disposal. So far, the literature does not provide consistent modelling approaches for the assessment of hydrogen technologies, which limits the interpretation and comparability of LCA results and hinders the interoperability of datasets. A novel modular LCA model has therefore been designed to harmonize assessment models. The modular approach is structured by (supply) tiers, each of them subdivided into common unit processes. Tier 0 represents the operation phase delivering the functional unit. Tier 1 encompasses the stack manufacturing, the balance-of-plant equipment, the operation consumables, and the end-of-life of the stack. Each element is further subdivided into common Tier 2 sub-processes, and so on. This model has been applied to perform a screening LCA of a Solid Oxide Electrolyzer (SOE), based on publicly available literature data, to serve as a baseline for evaluating the technological innovation of SOEs designed for high-pressure applications and developed within an industrial European project. The functional unit has been defined as the production of 1 kg of hydrogen at 30 bar by a 20 kW SOE stack.
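A toy version of the tier-based aggregation is sketched below; the impact values are invented for illustration and are not project data, the point being only that each tier's unit-process impacts sum to the total per functional unit (1 kg H2 at 30 bar).

impacts_kgCO2e = {
    "tier0_operation": {"electricity": 1.8, "heat": 0.4},            # assumed values
    "tier1_stack_and_bop": {"stack_manufacturing": 0.15,
                            "balance_of_plant": 0.10,
                            "stack_end_of_life": 0.02},
}
total = sum(v for tier in impacts_kgCO2e.values() for v in tier.values())
shares = {p: v / total for tier in impacts_kgCO2e.values() for p, v in tier.items()}
print(f"Total: {total:.2f} kg CO2e per kg H2; hotspot shares: {shares}")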

Our findings suggest that hydrogen production through an SOE performs better than steam methane reforming only if it is supplied with electricity from renewable or nuclear sources. The operation consumables (electricity consumption and heat supply) were identified as the most significant contributors to the environmental footprint, emphasizing the importance of energy efficiency and renewable energy sourcing. Critical parameters affecting the life cycle impact scores include the stack's lifespan, the balance-of-plant equipment, and material production.

To further support the environmentally sustainable development of stack technologies, we propose integrating the LCA metrics into an ecodesign process tailored to the development of hydrogen technologies. The deployment of this process aims to ensure environmentally sound development at an early stage of the innovation by improving communication between LCA experts and technology developers, and to accelerate data collection. An ecodesign workshop is organized during the first months of the project to enhance the literacy of the technology developers. It introduces the systemic and quantified approach used to determine the hotspots of the technology, identify sustainable innovations, and evaluate their benefits and the risk of potential burden shifting. Once trained, the project partners receive a parametrized tool that integrates the screening LCA results in a user-friendly interface. It allows technology developers to quickly assess potential innovations, compare different scenarios for improving the environmental performance of their technology, and iterate calculations without the need for LCA experts. The LCA team works throughout the project on updating the tool and explaining the trade-offs.



Decision Support Tool for Technology Selection in Industrial Heat Generation: Balancing Cost and Emissions

Soha Shokry Mousa, Dhabia Al-Mohannadi

Texas A&M University at Qatar, Qatar

Decarbonization of industrial processes, including energy-intensive industries, is essential for the world to meet its sustainability targets. Electrification of heat generation could substantially reduce CO₂ emissions but comes with challenges in balancing cost-efficiency with technical feasibility. A decision support framework is presented for the selection of heat generation technologies in industry, addressing the trade-offs between capital cost, CO₂ emissions, and heat demand across different temperature levels.

A tool was developed to evaluate various heat generation technologies, including high-temperature heat pumps, electrode boilers, and conventional systems. The application of heat integration principles allows the tool to analyse heat demands at different temperature levels and, in turn, assess the suitability of each technology based on parameters such as cost-effectiveness and capacity limits. The framework incorporates multi-criteria analysis, enabling decision-makers to systematically identify technologies that minimize overall cost while achieving emission reduction goals and meeting the total heat demand of the industrial process.

The initial application of the developed tool to real case studies demonstrated the effectiveness of the methodology in supporting the energy transition of the industrial sector.



Assessing Distillation Processes through Sustainability Indicators Aligned with the Sustainable Development Goals

Omer Faruk Karaman, Peter Lang, Laszlo Hegely

Budapest University of Technology and Economics, Hungary

There has been a growing interest in sustainability in chemical engineering as industries aim to reduce their environmental footprint without compromising economic performance. This research proposes a set of sustainability indicators aligned with the United Nations’ Sustainable Development Goals (SDGs) for the evaluation of the sustainability of distillation processes, offering a structured way to assess and improve these systems. The use of these indicators is illustrated in two case studies: (1) a continuous pressure-swing distillation (PSD) of a maximum-azeotropic mixture without and with heat integration and (2) the recovery of acetone from a waste solvent mixture by batch distillation (BD). These processes were selected due to their widespread industrial use, their potential to benefit from improvements in their sustainability, and to show the general applicability of the indicators proposed.

Distillation is one of the most commonly used methods for the separation of liquid mixtures. It is performed in a continuous way when large processing capacities are needed (e.g. refining, petrochemical industry). Batch distillation is also used frequently (e.g. in the pharmaceutical or fine chemical industry) because of its flexibility in separating mixtures with varying quantity and composition, including waste solvent mixtures. However, distillation is very energy-intensive, leading to high operational costs and greenhouse gas emissions.

This study aims to address these issues by developing sustainability indicators (e.g. recovery of components, wastewater generation, greenhouse gas emissions) that account for environmental, economic and social aspects. By aligning these indicators with the SDGs, which are globally recognized sustainability standards, the research also aims to encourage industries towards more sustainable practices.

The novelty of this work is that, to our knowledge, we are the first to propose sustainability indicators aligned with the SDGs in the field of distillation.

The case studies illustrate how to apply the proposed indicators to evaluate the sustainability aspects of distillation processes. In the PSD example (Karaman et al., 2024a), the process was optimised without and with heat integration, which led to a significant decrease in both the total annual cost and environmental impact (CO2 emission). In the acetone recovery by BD case (Karaman et al., 2024b), either the profit or the CO2 emissions were optimised by the Box-complex method. In this work, we determined how the proposed set of sustainability indicators improved due to the optimisation and heat integration performed in our previous works.

This research emphasizes the increasing importance of sustainability in chemical separation processes by integrating sustainability metrics aligned with SDGs into the evaluation of distillation processes. Our work proposes a generally applicable framework to quantify the sustainability aspects of the processes, which could be used to identify how these processes can be improved by balancing cost-effectiveness and environmental impacts.

References

Karaman, O.F.; Lang, P.; Hegely, L. 2024a. Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with Heat Integration. Energy (submitted).

Karaman, O.F.; Lang, P.; Hegely, L. 2024b. Economic and Environmental Optimisation of Acetone Recovery by Batch Distillation. Proceedings of the 27th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. Paper: PRES24.0144.



Strategies for a more Resilient Green Haber-Bosch Process

José M. Pires1,2, Diogo Narciso2, Carla I. C. Pinheiro1

1Centro de Química Estrutural, Institute of Molecular Sciences, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Portugal; 2Centro de Recursos Naturais e Ambiente, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Portugal

With a global production of 183 million metric tons in 2020 [1], ammonia (NH3) stands out as one of the most important commodity chemicals on the global scene, alongside ethylene and propylene. Despite 85% of all ammonia produced being used in fertilizer production [1], its applications extend beyond the agri-food sector. The Haber-Bosch (HB) process has historically enabled large-scale ammonia production, supporting agricultural practices in response to the unprecedented population growth over the past century, but it also accounts for 1.2% of global anthropogenic CO2 emissions [2]. In the ongoing energy transition, Power-to-X systems have emerged as promising solutions for both i) storing renewable energy, and ii) producing chemicals or fuels. The green HB (gHB) process, powered entirely by green electricity, can be viewed as a Power-to-Ammonia (PtA) system. In this process, hydrogen from electrolysis and nitrogen from an air separation unit are compressed and fed into the NH3 synthesis loop, whose general configuration mirrors that of the conventional HB process. However, the intermittent nature of renewable energy means hydrogen production is not constant over time. Therefore, in a PtA system, the NH3 synthesis loop must be operated dynamically, which presents a major operational challenge.

Dynamic operation of NH3 converters is typically associated with reaction extinction and sustained temperature oscillations (known as limit cycles), which can severely damage the catalyst. This work is situated in this context, with the development of a high-fidelity model of the gHB process using gPROMS Process. As various process flexibilization measures have already been proposed in the literature and industrial patents [3,4], this work aims to test some of these measures, or combinations thereof, by quantitatively assessing their impacts on the process. The process is first modelled and simulated at a nominal process load, followed by a flexibility analysis in which partial loads are introduced to observe their effects on process responsiveness and resilience. Essentially, all proposed measures boil down to maintaining high loop pressure, a key aspect consistently addressed in the patents, which can be achieved by exploiting the ammonia synthesis reaction equilibrium. Therefore, measures that shift the equilibrium towards the reactants side are particularly relevant for this analysis, as they lead to an increase in the number of moles leaving the reactor. Increasing the reactor operating temperature and the NH3 fraction in the reactor feed are some of the proposed possibilities, but they are complex, as they affect the intricate reaction dynamics and may cause reactor overheating or even reaction extinction. Other possibilities include reducing reactor flow or, in the worst case, decreasing loop pressure [3].
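
For context, the stoichiometry of ammonia synthesis, N2 + 3 H2 ⇌ 2 NH3 (ΔHr ≈ -92 kJ/mol), explains why measures that shift the equilibrium towards the reactants raise the molar flow leaving the converter: four moles of gas are consumed for every two moles of ammonia formed, and since the reaction is strongly exothermic, a higher operating temperature (or a higher NH3 fraction in the feed) pushes the equilibrium back towards N2 and H2.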

[1] IRENA & AEA. (2022). Innovation outlook: renewable ammonia.

[2] Smith, C. et al. (2020). Current and future role of Haber-Bosch ammonia in a carbon-free energy landscape. Energy Environ. Sci., 13(2), 331-344.

[3] Fahr, S. et al. (2023). Design and thermodynamic analysis of a large-scale ammonia reactor for increased load flexibility. Chemical Engineering Journal, 471, 144612.

[4] Ostuni, R. & Zardi, F. (2016). Method for load regulation of an ammonia plant (U.S. Patent No. 9463983).



Process simulation and thermodynamic analysis of newly synthesized pre-combustion CO2 capture system using novel Ionic liquids for H2 production

Sadah Mohammed, Fadwa Eljack

Qatar University, Qatar

Deploying fossil fuels to meet global energy needs has increased greenhouse gas emissions, mainly CO2, contributing to climate change. Therefore, transitioning toward clean energy sources is crucial for a sustainable low-carbon economy. Hydrogen (H2) is a viable decarbonization option, but its production via steam methane reforming (SMR) emits significant CO2 [1]. Integrating abatement technology, like pre-combustion CO2 capture in the SMR process, can reduce carbon intensity. Pre-combustion systems are effective for high-pressure streams rich in CO2, making them suitable for H2 production. In this regard, solvent selection is crucial in designing effective CO2 capture systems, considering several factors including eco-toxicity, the reduction of irreversibility, and the maximization of energy efficiency. In this context, ionic liquids (ILs) have become increasingly popular for their low regeneration energy, making them well-suited for pre-combustion applications.

The main goal of this work is to synthesize a pre-combustion CO2 capture system using newly designed ILs and conduct a thermodynamic analysis regarding energy requirements and exergy loss. These novel ILs are synthesized using a predictive deep-learning model developed in our previous work [2]. Before assessing the performance of novel ILs, an eco-toxicity analysis is conducted using the ADMETlab 2.0 web tool to ensure their environmental suitability. The novel ILs are then defined in the simulation software Aspen Plus, following the integrated modified translation-rotation-internal coordinate (TRIC) system with the COSMO-based/Aspen approach developed in our previous publication [3]. The proposed steady-state pre-combustion CO2 capture process suggested by Zhai and Rubin [4] is then simulated in Aspen Plus V12 to treat the syngas stream with high CO2 concentration (16.27% CO2). The suggested process configuration is modified to employ an IL-based absorption system suitable for processing large-scale syngas streams, enhancing CO2 removal and H2 purity under high-pressure conditions. Finally, a comprehensive energy and exergy analysis is carried out to quantify the thermodynamic deficiencies of the developed system based on the performance of novel ILs. This work is essential as it provides insights into the overall CO2 capture system efficiency and the sources of irreversibility to ensure an eco-friendly and optimal process design.

Reference

[1] S. Mohammed, F. Eljack, S. Al-Sobhi, and M. K. Kazi, “A systematic review: The role of emerging carbon capture and conversion technologies for energy transition to clean hydrogen,” J. Clean. Prod., vol. 447, no. May 2023, p. 141506, 2024, doi: 10.1016/j.jclepro.2024.141506.

[2] S. Mohammed, F. Eljack, M. K. Kazi, and M. Atilhan, “Development of a deep learning-based group contribution framework for targeted design of ionic liquids,” Comput. Chem. Eng., vol. 186, no. January, p. 108715, 2024, doi: 10.1016/j.compchemeng.2024.108715.

[3] S. Mohammed, F. Eljack, S. Al-Sobhi, and M. K. Kazi, “Simulation and 3E assessment of pre-combustion CO2 capture process using novel Ionic liquids for blue H2 production,” Comput. Aided Chem. Eng., vol. 53, pp. 517–522, Jan. 2024, doi: 10.1016/B978-0-443-28824-1.50087-9.

[4] H. Zhai and E. S. Rubin, “Systems Analysis of Physical Absorption of CO2 in Ionic Liquids for Pre-Combustion Carbon Capture,” Environ. Sci. Technol., vol. 52, no. 8, pp. 4996–5004, 2018, doi: 10.1021/acs.est.8b00411.



Mechanistic and Data-Driven Models for Predicting Biogas Production in Anaerobic Digestion Processes

Rohit Murali1, Benaissa Dekhici1, Michael Short1, Tao Chen1, Dongda Zhang2

1University of Surrey, United Kingdom; 2University of Manchester, United Kingdom

Anaerobic digestion (AD) plays a crucial role in renewable energy production by converting organic waste into biogas in the absence of oxygen. However, accurately forecasting biogas production for real-time applications in AD plants remains a challenge due to the complex and dynamic nature of the AD process. Despite the extensive literature on decision-making in AD, there are currently no industrially applicable tools available to operators that can aid in predicting biogas output for site-specific applications. Mechanistic models are valuable tools for controlling systems, estimating states and parameters, designing reactors, and optimising operations. They can also predict biological system behaviour, reducing the need for time-consuming and expensive experiments. To ensure effective control, state estimation, and future predictions, models that accurately represent the AD process are essential.

In this study, we present a comparative analysis of two modelling approaches, mechanistic models and data-driven models, focusing on their ability to predict biogas production from a lab-scale anaerobic digester. Our work includes the development of a simple mechanistic model based on two states (the concentration of biomass and the concentration of substrate), which incorporates Haldane kinetics to simulate and predict biogas production over time. The model was optimised using experimental data, with key kinetic parameters tuned via non-linear regression methods to minimise prediction error. While the mechanistic model demonstrated reasonable accuracy in predicting output trends, it failed to accurately characterise feedstock and biomass concentration for future predictions. A more robust model, such as the Anaerobic Digestion Model No. 1 (ADM1), could offer a more accurate representation. However, its complexity (35 state variables and over 100 parameters, many of which are rarely measured at AD plants) makes it impractical for real-time applications.
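
A minimal sketch of such a two-state Haldane model is shown below (SciPy); the kinetic parameters, dilution rate and gas-yield coefficient are illustrative assumptions, not the values fitted in this study.

```python
# Two-state (biomass X, substrate S) chemostat-type AD sketch with Haldane
# substrate-inhibition kinetics; parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, KI = 0.35, 2.0, 20.0       # 1/d, g/L, g/L (illustrative)
Y, k_gas = 0.1, 0.5                    # biomass yield, gas yield coefficient
D, S_in = 0.05, 10.0                   # dilution rate (1/d), feed substrate (g/L)

def haldane(S):
    return mu_max * S / (Ks + S + S**2 / KI)

def rhs(t, y):
    X, S = y
    mu = haldane(S)
    dX = (mu - D) * X                          # growth minus washout
    dS = D * (S_in - S) - mu * X / Y           # feed/outflow minus consumption
    return [dX, dS]

sol = solve_ivp(rhs, (0.0, 100.0), [0.5, 5.0], dense_output=True)
t = np.linspace(0.0, 100.0, 200)
X, S = sol.sol(t)
q_gas = k_gas * haldane(S) * X                 # biogas rate, proportional to growth
```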

To address these limitations, we compared the mechanistic model's performance to a data-driven approach using a Long Short-Term Memory (LSTM) neural network. The LSTM model was trained on lab-scale AD data and demonstrated a closer fit to the experimental results than the simple mechanistic model, proving to be a more accurate alternative for predicting biogas production. The LSTM model was also applied to a larger industrial dataset from an AD site, showing strong predictive capabilities and offering a practical alternative to time- and resource-intensive experimental analysis.
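
For illustration, a compact LSTM regressor of the kind described could be set up as follows (TensorFlow/Keras); the window length, architecture and placeholder data are assumptions, not the configuration used in this study.

```python
# Compact LSTM regressor for biogas forecasting from windowed plant data
# (window length, architecture and the random placeholder data are assumptions).
import numpy as np
import tensorflow as tf

window, n_features = 24, 6              # e.g. 24 past samples of 6 process variables
X = np.random.rand(500, window, n_features).astype("float32")   # placeholder inputs
y = np.random.rand(500, 1).astype("float32")                    # placeholder biogas rate

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
```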

The mechanistic model, while valuable for providing insights into the biochemical processes of AD, achieved an R2 value of 0.82, indicating moderate accuracy in capturing methane production. In contrast, the LSTM model for the lab-scale dataset demonstrated significantly better predictive capabilities, with corresponding R2 values ranging between 0.93 and 0.98, indicating a strong fit to the experimental data. When applied to a larger industrial dataset, the LSTM model continued to perform well, with R2 values ranging between 0.95 and 0.97. These results demonstrate the LSTM model's superior ability to capture temporal dependencies and handle both lab-scale and industrial data, making it a promising tool for deployment in large-scale AD plants. Its robust performance across different scales highlights its potential for optimising biogas production in real-world applications.



Application and comparison of optimization methods for an Energy Mix optimization problem

Julien JEAN VICTOR1, Zakaria Adam SOULEYMANE2, Augustin MPANDA2, Philippe TRUBERT3, Laurent FONTANELLI1, Sebastien POTEL1, Arnaud DUJANY1

1UniLaSalle, UPJV, B2R GeNumEr, U2R 7511, 60000 Beauvais, France; 2UniLaSalle, UPJV, B2R GeNumEr, U2R 7511, 80000 Amiens, France; 3Syndicat mixte de l'aéroport de Beauvais-Tillé (SMABT), 1 rue du Pont de Paris - 60000 Beauvais

In the last decades, governmental and intergovernmental policies have evolved in response to the growing global awareness of climate change. In the conception of energy mixes, ecological considerations have taken on a predominant importance, and renewable energy sources are now widely preferred to fossil fuels. Simultaneously, the availability of critical resources such as energy is highly sensitive to geopolitical relationships. It is therefore important for territories at various scales to develop their energy mixes and achieve energy independence [IRENA, 2022]. The development of optimized, renewable and local energy mixes is therefore highly supported by the economic, political and environmental situations [Østergaard and Sperling, 2014].

Multiple studies have aimed to optimize renewable energy technologies and facility locations to develop more renewable and efficient energy mixes. A majority of these optimization problems are solved using MILP, MINLP or heuristic algorithms. This study aims to assess and compare optimization methods for the environmental and economic optimization of an infrastructure's energy mix. It focuses on yearly production potential at a regional scale, and therefore does not consider decomposition or stochastic optimization methods, which are better suited to problems involving temporal variation or multiple time periods. From existing methods in the energy mix literature, Goal Programming, Branch-and-Cut and NSGA-II were selected because they are widely used and cover different problem formulations [Jaber et al, 2024] [Moret et al, 2016]. These methods will be applied to a case study and compared based on their specificities and the solutions they provide.

After a census of the energy resources already in place in the target territory, the available energy mix undergoes a carbon footprint evaluation that serves as the environmental component of the problem. The economic component is an aggregation of operating, maintenance and installation costs. The two components constitute the objectives of the problem, either treated separately or weighted in a single objective function. The three selected methods are applied to the problem, and the results provided by each are gathered and assessed based on criteria including optimality, diversity of solutions, and sensitivity to constraints and settings. The assessed methods are then compared based on these criteria, so that the strengths and weaknesses of each method for this specific problem can be identified. The goal is to identify the best-fitting methods for such a problem, which may lead to the design of a hybrid method ideally suited to the energy mix optimization problem.
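
A minimal sketch of how NSGA-II could be applied to such a two-objective (cost versus carbon footprint) mix problem is given below using pymoo (the >= 0.6 API is assumed); the technologies, costs and emission factors are placeholders, not the case-study data.

```python
# Toy two-objective energy-mix problem solved with NSGA-II via pymoo (>= 0.6 API);
# all numbers are illustrative placeholders, not the case-study data.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

cost = np.array([40.0, 70.0, 55.0])      # cost per unit capacity (illustrative)
co2 = np.array([0.30, 0.02, 0.05])       # emissions per unit energy (illustrative)
yield_ = np.array([1.0, 0.8, 0.9])       # yearly energy per unit capacity
demand = 10.0                            # yearly energy demand to be covered

class EnergyMix(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=3, n_obj=2, n_ieq_constr=1, xl=0.0, xu=20.0)

    def _evaluate(self, x, out, *args, **kwargs):
        energy = yield_ @ x
        out["F"] = [cost @ x, co2 @ (yield_ * x)]   # total cost, total emissions
        out["G"] = [demand - energy]                # <= 0 when demand is met

res = minimize(EnergyMix(), NSGA2(pop_size=50), ("n_gen", 100), seed=1, verbose=False)
# res.X and res.F hold the Pareto-optimal mixes and their cost/emission trade-offs.
```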

International Renewable Energy Agency (IRENA). (2022). Geopolitics of the energy transformation: The hydrogen factor. Retrieved August 2024, from https://www.irena.org/Digital-Report/Geopolitics-of-the-Energy-Transformation

Jaber, A., Younes, R., Lafon, P., Khoder, J. (2024). A review on multi-objective mixed-integer non-linear optimization programming methods. Engineering, 5(3), 1961-1979. https://doi.org/10.3390/eng5030104

Moret, S., Bierlaire, M., Maréchal, F. (2016). Strategic energy planning under uncertainty: A mixed-integer linear programming modeling framework for large-scale energy systems. In Z. Kravanja & M. Bogataj (Eds.), Computer aided chemical engineering (Vol. 38, pp. 1899–1904). Elsevier. https://doi.org/10.1016/B978-0-444-63428-3.50321-0

Østergaard, P. A., Sperling, K. (2014). Towards sustainable energy planning and management. International Journal of Sustainable Energy Planning and Management, 1, 1-10. https://doi.org/10.5278/IJSEPM.2014.1.1



Insights into the Development and Implementation of Soft Sensors in Industrial Settings

Shweta Mohan Nagrale1, Abhijit Bhakte1, Rajagopalan Srinivasan1,2

1Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai, 600036, India; 2American Express Lab for Data Analytics, Risk & Technology, Indian Institute of Technology Madras, Chennai, 600036, India

Soft sensors offer a viable solution for industries where key quality variables cannot be measured frequently. By utilizing readily available process measurements, soft sensors provide frequent estimates of quality variables, thus avoiding the delays typically associated with traditional analyzers. They enhance efficiency and economic performance while improving process control and decision-making.

The literature outlines several challenges in deploying soft sensors within industrial environments. Laboratory measurements are crucial for developing, calibrating, and validating models. Wang et al. (2010) emphasized the mismatch between high-frequency process data and infrequent lab measurements, necessitating down-sampling and, consequently, significant data loss. High dimensionality of process data and multicollinearity complicate model building. Additionally, time delays and varying operational regimes complicate data alignment and model generalization. Without continuous adaptation, soft sensor models risk becoming outdated, reducing their predictive accuracy (Kay et al., 2024). Online learning and model updates are vital for maintaining performance amid changing conditions and sensor drift. Also, effective imputation techniques and outlier management are essential to prevent model distortion. Integrating soft sensors into the distributed control system (DCS) and designing suitable human-machine interaction also present unique challenges.

This work presents practical strategies for developing and implementing soft sensors in real-world refineries. By monitoring key quality parameters like Distillation-95 and Research Octane Number (RON), these sensors provide timely, precise estimations that enhance prediction and process control. We gathered process data at 5-minute intervals and weekly laboratory data over two years. Further, we utilized data preprocessing techniques and clustering methods to distinguish steady-state and transient regimes. Feature engineering strategies were used to address high dimensionality. Also, simpler models like Partial Least Squares (PLS) ensure effective quality prediction due to their balance of accuracy and interpretability. This enables operators to make informed, data-driven decisions and quickly respond to changes without waiting for traditional laboratory analyses. In this paper, we will discuss how the resulting soft sensor can offer numerous benefits, such as detecting quality issues early, minimizing downtime, and optimizing resource allocation. It thus serves as a tool for continuous process improvement. Finally, the user interface can play a significant role in fostering trust among plant personnel, ensuring easy access to predictions, and explicitly highlighting the soft sensor’s confidence in its prediction.
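
A minimal PLS soft-sensor sketch of this kind is shown below (scikit-learn); the number of tags, latent variables and the random placeholder data are assumptions, not the refinery's actual configuration.

```python
# Minimal PLS soft-sensor sketch (scikit-learn); tag count, latent variables and
# placeholder data are illustrative assumptions, not the refinery setup.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 30)     # placeholder: 30 process tags at lab-sample times
y = np.random.rand(200)         # placeholder: lab RON values aligned with X

soft_sensor = make_pipeline(StandardScaler(), PLSRegression(n_components=5))
cv_r2 = cross_val_score(soft_sensor, X, y, cv=5, scoring="r2")   # offline validation
soft_sensor.fit(X, y)

x_now = np.random.rand(1, 30)               # latest process measurements (placeholder)
ron_estimate = soft_sensor.predict(x_now)   # online estimate between lab samples
```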

References

Wang, D., Liu, J., & Srinivasan, R. (2010). Data-Driven Soft Sensor Approach for Quality Prediction in a Refining Process. IEEE Transactions on Industrial Informatics, 6, 11-17. doi: 10.1109/TII.2009.2025124.

Kay, S., Kay, H., Mowbray, M., Lane, A., Mendoza, C., Martin, P., & Zhang, D. (2024). Integrating transfer learning within data-driven soft sensor design to accelerate product quality control. Digital Chemical Engineering, 10, 100142. https://doi.org/10.1016/j.dche.2024.100142.

R. Nian, A. Narang and H. Jiang, "A Simple Approach to Industrial Soft Sensor Development and Deployment for Closed-Loop Control," 2022 IEEE International Symposium on Advanced Control of Industrial Processes (AdCONIP), Vancouver, BC, Canada, 2022, pp. 261-262, doi: 10.1109/AdCONIP55568.2022.9894185.



Synthesis of Distillation Flowsheets with Reinforcement Learning using Transformer Blocks

Niklas Slager, Meik Franke

Faculty of Science and Technology, University of Twente, the Netherlands

Process synthesis is one of the main tasks of chemical engineers and has a major influence on CAPEX and OPEX in the early design phase of a project. Basically, there are two different approaches: heuristic and superstructure optimization approaches. Heuristic approaches provide quick and often satisfying solutions, but, due to their non-quantitative nature, promising options might eventually be overlooked. On the other hand, superstructure optimization approaches are quantitative, but their formulation and solution are difficult and time-consuming. Furthermore, they require the optimal solution to be embedded within the superstructure and cannot be applied to open-ended problems.

Reinforcement learning (RL) offers the potential to solve open-ended process synthesis problems. RL is a type of machine learning (ML) which involves an agent making decisions (actions) at a current state within an environment to maximise an expected reward, e.g., revenue. A few publications have dealt with the design of chemical processes [1,2,3,4]. An overview of reinforcement learning methods for process synthesis is given in [5]. Special attention must be paid to the principle of data input embedding. Data embeddings transform raw data (e.g., states, actions) into a form suitable for model processing. Effective embeddings capture the variance and structure of the data to ensure the model learns meaningful patterns. Most of the authors use Convolutional Neural Networks (CNN) and Graph Neural Networks (GNN). However, CNNs and GNNs generally struggle to capture long-range dependencies.

A fundamentally different methodology for permutation-equivariant data processing comes in the form of transformer blocks [6]. Transformer blocks are built on an attention principle, where relations in input data are weighted, and more attention is paid (a higher weight factor is assigned) to relationships having a stronger effect on the outcome.
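
The core operation of such a block, scaled dot-product attention, is sketched below in plain NumPy; the token and embedding dimensions are generic and not tied to the flowsheet encoding used in this work.

```python
# Scaled dot-product attention in plain NumPy, the core operation of a transformer
# block; shapes are generic placeholders, not the paper's state representation.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise relation strengths
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ V                                   # weighted combination of values

tokens, d_model = 7, 16        # e.g. one token per stream/component in a synthesis state
x = np.random.rand(tokens, d_model)
Wq, Wk, Wv = (np.random.rand(d_model, d_model) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)                  # shape: (tokens, d_model)
```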

To demonstrate the applicability of the method, the separation of an ideal seven-component hydrocarbon mixture is investigated. The RL training session was completed in 2 hours, much faster than the sessions reported for similar five-component problems in [1]. The full recovery of seven components was achieved using a minimum of six separation units designed by the RL agent. However, it cannot be claimed that the learning progress was reliable, as minor deviations in hyperparameters easily led to sub-optimal policies, which will be investigated further.

[1] van Kalmthout, S., Midgley, L. I., Franke, M. B. (2022). https://arxiv.org/abs/2211.04327.

[2] Stops, L., Leenhouts, R., Gao, Q., Schweidtmann, A. M. (2022). AIChE Journal, 69(1).

[3] Goettl, Q., Pirnay, J., Burger, J. Grimm, D. G. (2023). arXiv:2310.06415v1.

[4] Wang, D., et al., (2023). Energy Advances, 2.

[5] Gao, Q., Schweidtmann, A. M. (2024). Current Opinion in Chemical Engineering, 44, 101012.

[6] Vaswani, A., et al. (2023). Attention is all you need. https://arxiv.org/abs/1706.03762.



Machine Learning Surrogate Models for Atmospheric Dispersion: A Time-Efficient Approach to Air Quality Prediction

Omar Hassani Zerrouk1,2, Eva Gallego1, Jose francisco Perales1, Moisès Graells1

1Polytechnic University of Catalonia, Spain; 2Abdelmalek Essaadi University, Morocco

Atmospheric dispersion models are traditionally used to estimate the impact of pollutants on air quality, but they rely on complex formulations and require extensive computational resources. This hinders the development of practical real-time solutions that anticipate the effects of incidental plant emissions. To address these limitations, this study explores the use of machine learning algorithms as surrogate models: faster, less resource-intensive alternatives to traditional dispersion models, with the aim of replicating their results while reducing computational complexity.

Recent studies have explored machine learning as surrogate models for atmospheric dispersion. Kocijan et al. (2023) and Huang et al. (2020) demonstrated the potential of using tree-based techniques to predict air quality, while Gao et al. (2019) used hybrid LSTM-ARIMA models for PM2.5 forecasting. However, most approaches focus on specific algorithms or pollutants. This study provides a broader evaluation of various models, including Regression, Random Forest, Gradient Boosting, and deep learning, across multiple pollutants and meteorological variables.

This study evaluates machine learning models using data from traditional dispersion models for pollutants such as NO₂, NOx, SO₂ and PM, together with meteorological variables. We combined localized meteorological data from municipalities with dispersion data computed using the Eulerian model TAPM (Hurley et al., 2005).

The best-performing models were Gradient Boosting and Random Forest, with MSE values of 1.23 and 1.39, and R² values of 0.94. These models effectively captured nonlinear relationships between meteorological conditions and pollutant concentrations, demonstrating their capacity to handle complex environmental interactions. In contrast, traditional regression models, like Ridge and Lasso, underperformed with MSE values of 14.62 and 17.50, and R² values of 0.40 and 0.28, struggling with data complexity. Similarly, deep learning models such as LSTM and GRU showed weaker performance, with MSE values of 27.43 and 26.73, and R² values of -0.11 and -0.08, suggesting that the data relationships were more influenced by instantaneous features than long-term temporal patterns.

Feature importance was analysed using permutation and standard metrics, revealing that variables related to atmospheric dispersion and stability, such as wind direction, stability, and solar radiation, were the most significant in predicting pollutant concentrations. Time-derived variables, like day or hour, were less relevant, likely because their effects were captured by other environmental factors. This highlights the potential of ML-based surrogate models as efficient alternatives to traditional dispersion models for air quality monitoring.
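
A sketch of this type of surrogate workflow (gradient boosting plus permutation feature importance, scikit-learn) is shown below; the feature names and random placeholder data stand in for the dispersion-model outputs and are not the study's dataset.

```python
# Gradient-boosting surrogate with permutation feature importance (scikit-learn);
# features and data are illustrative placeholders, not the TAPM-derived dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

features = ["wind_dir", "wind_speed", "stability", "solar_rad", "temp", "hour"]
X = pd.DataFrame(np.random.rand(1000, len(features)), columns=features)
y = np.random.rand(1000)                      # placeholder pollutant concentrations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gb = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

pred = gb.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred), "R2:", r2_score(y_te, pred))

imp = permutation_importance(gb, X_te, y_te, n_repeats=10, random_state=0)
ranking = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
```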

References

1. Kocijan, J., Hvala, N., Perne, M. et al. Surrogate modelling for the forecast of Seveso-type atmospheric pollutant dispersion. Stoch Environ Res Risk Assess 37, 275–290 (2023).

2. Huang, Y., Ding, H., & Hu, J. (2020). A review of machine learning methods for air quality prediction: Challenges and opportunities. Environmental Science and Pollution Research, 27(16), 19479-19495.

3. Gao, H., Zhang, H., Chen, X., & Zhang, Y. (2019). A hybrid model based on LSTM neural and ARIMA for PM2.5 forecasting. Atmospheric Environment, 198, 206-213.

4. Hurley, P. J., Physick, W. L., Luhar, A. K (2005). TAPM: a practical approach to prognostic meteorological and air pollution modelling, Environmental Modelling & Software, 20(6), 737-752.



Comparative Analysis of PharmHGT, GCN, and GAT Models for Predicting LogCMC in Surfactants

Gabriela Carolina Theis Marchan, Teslim Olayiwola, Andrew N Okafor, Jose Romagnoli

LSU, United States of America

Predicting the critical micelle concentration (LogCMC) of surfactants is essential for optimizing their applications in various industries, including pharmaceuticals, detergents, and emulsions. In this study, we investigate the performance of graph-based machine learning models, specifically Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), and a graph-transformer model, PharmHGT, for predicting LogCMC values. We aim to determine the most effective model for capturing the structural and physicochemical properties of surfactants. Our results provide insights into the relative strengths of each approach, highlighting the potential advantages of transformer-based architectures like PharmHGT in handling molecular graph representations compared to traditional graph neural networks. This comparative study serves as a step towards enhancing the accuracy of LogCMC predictions, contributing to the efficient design of surfactants for targeted applications.



Short-cut Correlations for CO2 Capture Technologies in Small Scale Applications

So-mang Kim, Joanne Kalbusch, Grégoire Léonard

University of Liege, Belgium

Carbon capture (CC) is crucial for achieving net-zero emissions and mitigating climate change. Despite its critical importance, the current deployment of carbon capture technologies remains insufficient to meet the climate target, indicating an urgency to increase the number of carbon capture applications. Emission sources vary significantly in capture scale, with large-scale emitters benefiting from economies of scale, while smaller-scale applications are often neglected. However, to achieve an economy with net-zero emissions, CC applications at various emission levels are necessary.

While many studies on carbon capture technologies highlight capture cost as a key performance indicator (KPI), there is currently no standardized method in the literature to estimate the cost of carbon capture, leading to inconsistencies and incomparable results. This makes it challenging for decision-makers to fairly compare and identify suitable carbon capture options based on the literature results, hindering the deployment of CC units. In addition, conducting detailed simulations and Techno-Economic Assessments (TEAs) to identify viable capture options across various scenarios can be time-consuming and requires significant effort.

To address the aforementioned challenges, this work develops short-cut correlations describing the total equipment cost (TEC) and energy consumption of selected carbon capture technologies for small-scale capture applications. This will allow exploration of the role of CC in small-scale industries and offer a practical framework for evaluating the technical and economic viability of various CO₂ capture systems. The goal is to provide an efficient approach for decision-makers to estimate the cost of carbon capture without the need for extensive simulations and detailed TEAs, while ensuring consistent assumptions and cost estimation methods are applied across comparison studies. The correlations are also flexible, allowing various cost estimation methods and case-specific assumptions to be used to fine-tune the analyses for different scenarios.
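
As an indication of what such short-cut forms can look like, equipment cost is commonly correlated with capacity through a power law (a generic textbook form, not necessarily the correlation structure developed in this work):

\mathrm{TEC}(Q) \;=\; \mathrm{TEC}_{\mathrm{ref}} \left( Q / Q_{\mathrm{ref}} \right)^{n}

with the exponent n typically between about 0.6 and 0.8 for process equipment (the classical six-tenths rule corresponds to n ≈ 0.6); analogous expressions can be fitted for the specific energy consumption of each capture technology.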

The shortcut correlations can offer valuable insights into small-scale carbon capture (CC) applications by identifying scenarios that enhance their feasibility, such as integrating small-scale carbon capture with waste heat and renewable energy sources. They also facilitate the exploration of various spatial configurations, including the deployment of multiple small-scale capture units versus combining flue gases from small-scale sources into a single larger CC unit. The shortcut correlations are envisioned to improve the accessibility of carbon capture technologies for small-scale industries.



Mixed-Integer Bilevel Optimization Problem Generator and Library for Algorithm Evaluation and Development

Meng-Lin Tsai, Styliani Avraamidou

University of Wisconsin-Madison, United States of America

Bilevel optimization, characterized by nested optimization problems, has gained prominence in modeling two-player interactions across various domains, including environmental policy (Beykal et al. 2020) and hierarchical control (Avraamidou et al., 2017). Despite its wide applicability, bilevel optimization is known to be NP-hard. Mixed-integer bilevel optimization problems are even more challenging to solve (Kleinert et al. 2021), prompting the development of diverse solution methods, such as Benders decomposition (Saharidis et al. 2009), multiparametric optimization (Avraamidou et al. 2019), penalty functions (Dempe et al. 2005), and branch-and-bound/cut algorithms (Fischetti et al. 2018). However, due to the large variety of problem types (type of variables, constraints, objective functions), the field lacks standardized benchmark problems. Random problem generators are commonly used to generate problems for algorithm evaluation (Avraamidou et al. 2019), but they often produce trivial bilevel problems, defined as those whose high-point relaxation solution is already feasible for the bilevel problem.

In this work, we investigate the prevalence of trivial problems across different problem structures (LP-LP, ILP-ILP, MILP-MILP) and sizes (number of upper/lower variables, binary/continuous variables, constraints), and we reveal how problem structure and size influence trivial problem occurrence probabilities. We introduce a new bilevel problem generator, coded in Python using Gurobi as a solver, designed to create non-trivial bilevel problem instances of chosen type and size. A library of 200 randomly generated problems of different sizes and types will also be part of the tool and will be available online. The proposed tool aims to enhance the robustness of bilevel optimization algorithm testing by ensuring that generated problems provide a meaningful challenge to the solver, offering a reliable method for algorithm evaluation, and accelerating the development of efficient solvers for complex, real-world bilevel optimization problems.
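
The triviality check at the heart of such a generator can be illustrated as follows (a gurobipy sketch under stated assumptions, not the released tool): solve the high-point relaxation (HPR), then re-solve the lower-level problem with the upper-level variables fixed; if the HPR's lower-level part already attains the follower's optimum, the instance is trivial.

```python
# Sketch: random LP-LP bilevel instance and a triviality check against the
# high-point relaxation (HPR). Sizes, bounds and data are illustrative only.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(0)
n_x, n_y, m = 3, 3, 5                       # upper vars, lower vars, joint constraints
A = rng.uniform(-1, 1, (m, n_x)); B = rng.uniform(-1, 1, (m, n_y))
b = rng.uniform(1, 5, m)
c_x, c_y = rng.uniform(-1, 1, n_x), rng.uniform(-1, 1, n_y)   # leader objective
d_y = rng.uniform(-1, 1, n_y)                                  # follower objective

def solve_hpr():
    mdl = gp.Model(); mdl.Params.OutputFlag = 0
    x = mdl.addMVar(n_x, lb=0, ub=10); y = mdl.addMVar(n_y, lb=0, ub=10)
    mdl.addConstr(A @ x + B @ y <= b)                 # joint feasibility only
    mdl.setObjective(c_x @ x + c_y @ y, GRB.MINIMIZE)
    mdl.optimize()
    return x.X, y.X

def follower_best(x_fixed):
    mdl = gp.Model(); mdl.Params.OutputFlag = 0
    y = mdl.addMVar(n_y, lb=0, ub=10)
    mdl.addConstr(B @ y <= b - A @ x_fixed)           # lower level with x fixed
    mdl.setObjective(d_y @ y, GRB.MINIMIZE)
    mdl.optimize()
    return mdl.ObjVal

x_hpr, y_hpr = solve_hpr()
trivial = abs(d_y @ y_hpr - follower_best(x_hpr)) <= 1e-6   # HPR already bilevel-feasible?
```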

References

Avraamidou, S., & Pistikopoulos, E. N. (2017). A multi-parametric bi-level optimization strategy for hierarchical model predictive control. In Computer aided chemical engineering(Vol. 40, pp. 1591-1596). Elsevier.

Avraamidou, S., & Pistikopoulos, E. N. (2019). A multi-parametric optimization approach for bilevel mixed-integer linear and quadratic programming problems. Computers & Chemical Engineering, 125, 98-113.

Beykal, B., Avraamidou, S., Pistikopoulos, I. P., Onel, M., & Pistikopoulos, E. N. (2020). Domino: Data-driven optimization of bi-level mixed-integer nonlinear problems. Journal of Global Optimization, 78, 1-36.

Dempe, S., Kalashnikov, V., & Rıos-Mercado, R. Z. (2005). Discrete bilevel programming: Application to a natural gas cash-out problem. European Journal of Operational Research, 166(2), 469-488.

Fischetti, M., Ljubić, I., Monaci, M., & Sinnl, M. (2018). On the use of intersection cuts for bilevel optimization. Mathematical Programming, 172(1), 77-103.

Kleinert, T., Labbé, M., Ljubić, I., & Schmidt, M. (2021). A survey on mixed-integer programming techniques in bilevel optimization. EURO Journal on Computational Optimization, 9, 100007.

Saharidis, G. K., & Ierapetritou, M. G. (2009). Resolution method for mixed integer bi-level linear problems based on decomposition technique. Journal of Global Optimization, 44, 29-51.



Surface Tension Data Analysis for Advancing Chemical Engineering Applications

Ulderico Di Caprio1, Flora Esposito1, Bruno C. L. Rodrigues2, Idelfonso Bessa dos Reis Nogueira2, Mumin Enis Leblebici1

1Center for Industrial Process Technology, Department of Chemical Engineering, KU Leuven, Agoralaan Building B, 3590 Diepenbeek, Belgium; 2Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 6, Kjemiblokk 4, Trondheim 7043, Norway

Surface tension plays a critical role in numerous aspects of chemical engineering, influencing key processes such as mass transfer, fluid dynamics, and the behavior of multiphase systems. Accurate surface tension data are essential for the design of separation processes, reactor optimization, and the development of advanced materials. However, despite its importance, the availability of comprehensive, high-quality experimental data has lagged behind modern research needs, limiting progress in fields where precise interfacial properties are crucial.

In this work, we address this gap by revisiting a vast compilation of experimental surface tension data published in 1972. Originally recognized for its breadth and accuracy, this compilation has remained largely inaccessible to the modern scientific community due to its outdated digital format. The digital version of the original document consists primarily of scanned images, making data extraction difficult and time-consuming for researchers. Manual transcription was often required, increasing the risk of human error and reducing efficiency for those seeking to use the data for new developments in chemical engineering.

Our project involves not only the digitalization of this critical dataset—transforming it into a machine-readable and easily accessible format with experimental measurements of surface tension for over 2000 substances across a wide range of conditions—but also an in-depth analysis aimed at identifying the key physical parameters that influence surface tension behavior. Using modern data extraction tools and statistical techniques, we have studied the relationships between surface tension and various physical properties. By analyzing these factors, we present insights into which features most strongly impact surface tension under different conditions.

This comprehensive dataset and accompanying feature analysis offer researchers a valuable foundation for exploring surface tension behavior across diverse areas of chemical engineering. We believe this will contribute to significant advancements in fields such as phase equilibrium, material design, and fluid mechanics, as well as support innovation in emerging technologies like microfluidics, nanotechnology, and sustainable process design.



Design considerations for hardware based acceleration of molecular dynamics

Joseph Middleton, Joan Cordiner

University of Sheffield, United Kingdom

As demand for long and accurate molecular simulations increases, so too does the computational demand. Beyond using new enterprise-scale processor developments, such as the ARM Neoverse chips, or performing simulations that leverage GPU compute, there exists a potentially faster and more power-efficient option in the form of custom hardware. Using hardware description languages, it is possible to transform existing algorithms into custom, high-performance hardware layouts. This can lead to faster and more efficient simulations, but at the cost of development time and flexibility. To take the greatest advantage of the potential performance gains, the focus should be on transforming the most computationally expensive parts of the algorithms.

When performing molecular dynamics simulations in a polar solvent like water, non-bonded electrostatic calculations dominate each simulation step, as the interactions between the solvent and the molecular structure are calculated. However, simply developing a non-bonded electrostatics co-processor may not be enough, as transferring data between the host program and the FPGA itself incurs a significant time delay. For such an approach to be competitive with existing calculation solutions, the number of data transfers must be reduced. This could be achieved by simulating multiple time-steps between memory transfers, which may impact accuracy, or by performing more calculations in the custom hardware.



A Novel AI-Driven Approach for Adsorption Parameter Estimation in Gas-Phase Fixed-Bed Experiments

Rui D. G. Matias1,2, Alexandre F. P. Ferreira1,2, Idelfonso B.R. Nogueira3, Ana Mafalda Ribeiro1,2

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The need to reduce greenhouse gas emissions has driven the shift toward renewable energy sources such as biogas. To use biogas as a substitute for natural gas, it must undergo a purification process to separate methane from carbon dioxide. Adsorption-based separation processes are standard methods for biogas separation (1).

Developing precise mathematical models that can accurately describe all the phenomena involved in the process is crucial for a deeper understanding and the creation of innovative control and optimization techniques for these systems.

By solving a system of coupled Partial Differential Equations, Ordinary Differential Equations, and Algebraic Equations, it is possible to accurately simulate the fixed-bed units used in these processes. However, a robust simulation and, consequently, a better understanding of the intrinsic phenomena governing these systems - such as adsorption isotherms, film and particle mass transfer, among others - heavily depends on carefully selecting parameters for these equations.

These parameters can be estimated using well-known mathematical correlations or trial and error. However, these methods often introduce significant errors (2). For a more accurate determination of parameters, an optimization algorithm can be employed to find the best set of parameters that minimize the difference between the simulation and experimental data, thereby providing a better representation and understanding of the real process.

Different optimization methodologies can be employed for this purpose. For example, deterministic methods are known for ensuring convergence to an optimal solution, but the starting point selection significantly impacts their performance (3). In contrast, meta-heuristic techniques are often preferred for their adaptability and efficiency since they do not rely on predefined initial conditions (4). However, these approaches may not always guarantee finding the optimal solution for every problem.

A parameter estimation methodology based on Artificial Intelligence (AI) offers several advantages. AI algorithms can handle complex problems by processing high-dimensional data and modelling nonlinear relationships more accurately. Additionally, AI techniques, such as neural networks, do not rely on well-defined initial conditions, making them more robust and efficient in the search for global solutions, avoiding local minima traps. Beyond that, they also have the ability to continuously learn from new data, enabling dynamic adjustments.

This work presents an innovative methodology for estimating the isotherm parameters of a mathematical phenomenological model for fixed-bed experiments involving CO2 and CH4. By integrating Artificial Intelligence tools with the phenomenological model and experimental data, this approach develops an algorithm that generates parameter values for the process’ mathematical model, resulting in simulation data with close-to-optimal fit with the experimental points, leading to more accurate simulations and providing valuable insights about this separation.

1. Ferreira AFP, Ribeiro AM, Kulaç S, Rodrigues AE. Methane purification by adsorptive processes on MIL-53(Al). Chemical Engineering Science. 2015;124:79-95.

2. Weber Jr WJ, Liu KT. DETERMINATION OF MASS TRANSPORT PARAMETERS FOR FIXED-BED ADSORBERS. Chemical Engineering Communications. 1980;6(1-3):49-60.

3. Schwaab JCPM. Análise de Dados Experimentais: I. Fundamentos de Estatística e Estimação de Parâmetros: Editora E-papers.

4. Lin M-H, Tsai J-F, Yu C-S. A Review of Deterministic Optimization Methods in Engineering and Management. Mathematical Problems in Engineering. 2012;2012(1):756023.



Integration of Graph Theory and Machine Learning for enhanced process synthesis and design of wastewater treatment networks

Andres D. Castellar-Freile1, Jean Pimentel2, Alec Guerra1, Pratap M. Kodate3, Kirti M. Yenkie1

1Department of Chemical Engineering, Rowan University, Glassboro, New Jersey, USA; 2Sustainability Competence Center, Széchenyi István University, Győr, Hungary; 3Department of Physics, Indian Institute of Technology, Kharagpur, India

Process synthesis (PS) is the first step in process design. It is crucial for finding the best configuration of unit operations/technologies and stream flows that optimizes the parameters of interest (cost, environmental impact, energy use, etc.). Traditional approaches such as superstructure optimization strongly depend on user-defined technologies, stream connections, and reasonable initial guesses for the unknown variables. This results not only in missing possible structures that can perform better than those selected, but also in neglecting important aspects such as multiple-input multiple-output systems and recycle streams [1]. In this regard, the enhanced P-graph methodology, integrated with insights from machine learning and realistic technology models, is presented as a novel approach for process synthesis. It offers the unique advantage of providing all n-feasible structures through its specific connectivity rules for input, intermediate, and terminal nodes [2]. In addition, a novel two-layer process synthesis algorithm [3] is developed which incorporates combinatorial, linear, and nonlinear solvers to integrate the P-graph with realistic nonlinear model equations. It then performs a feasibility analysis and ranks the solution structures based on chosen metrics, such as cost, scalability, or sustainability. However, the n-feasible solutions identified with the P-graph framework may still not be suitable for the real process because of limitations in their reliability and structural resilience over time. Considering this, applying Machine Learning (ML) methods for regression, classification, and extrapolation allows the accurate prediction of structural reliability and resilience over time [4], [5]. This will support better process design, enable proactive maintenance, and improve overall management.

Many water utility companies use a reactionary (wait-watch-act) methodology to manage their facilities and infrastructure. The proposed method can be applied to these systems offering strong, convergent, and comprehensive solutions for municipalities, water utility companies, and industries, enabling them to make well-informed decisions when designing new facilities or upgrading existing ones, all while minimizing time and financial investment.

Thus, the integration of Graph Theory and ML approaches for optimal design, structural reliability and resilience yields a new framework for Process Synthesis. We demonstrate the Wastewater Treatment Network (WWTN) synthesis as the problem of interest as it is vital in addressing the issues of Water Equity and Public Health. The pipeline network, pumping stations, and the wastewater treatment plant are modeled with the P-graph framework. Detailed and accurate models are developed for the treatment technologies. ML methods such as eXtreme gradient boosting (XGBoost) and Artificial Neural Networks (ANNs) are tested to estimate the resilience and structural reliability of the pumping stations and the pipeline network.

References

[1] K. M. Yenkie, Curr. Opin. Chem. Eng., 2019, doi: 10.1016/j.coche.2019.09.002.

[2] F. Friedler, Á. Orosz, and J. Pimentel Losada, 2022. doi: 10.1007/978-3-030-92216-0.

[3] J. Pimentel et al., Comput. Chem. Eng., 2022, doi: 10.1016/j.compchemeng.2022.108034.

[4] G. Kabir, N. B. C. Balek, and S. Tesfamariam, J. Perform. Constr. Facil., 2018, doi: 10.1061/(ASCE)CF.1943-5509.0001162.

[5] Á. Orosz, F. Friedler, P. S. Varbanov, and J. J. Klemes, 2018, doi: 10.3303/CET1863021.



An Automated CO2 Capture Pilot Plant at ULiège: A Platform for the Validation of Process Models and Advanced Control

Cristhian Molina Fernández, Patrick Kreit, Brieuc Beguin, Sofiane Bekhti, Cédric Calberg, Joanne Kalbusch, Grégoire Léonard

University of Liège, Belgium

As the European Union accelerates its efforts to decarbonize society, the exploration of effective pathways to reduce greenhouse gas emissions is increasingly being driven by digital innovation. Pilot installations play a pivotal role in validating both emerging and established technologies within the field of carbon capture, utilization, and storage (CCUS).

At the University of Liège (ULiège) in Belgium, researchers are developing a "smart campus" that integrates advanced CCUS technologies with cutting-edge computational tools. Supported by the European Union's Resilience Plan, the Products, Environment, and Processes (PEPs) group is leading the construction of several key pilot installations, including a CO2 capture pilot plant, a CO2-to-kerosene conversion unit, and a direct air capture (DAC) test bench. These facilities are designed to support real-time data monitoring and advanced computational modeling, enabling enhanced process optimization.

The CO2 capture pilot plant has a processing capacity of 1 ton of CO2 per day, utilizing a fully automated chemical absorption system. Capable of working with either amine or carbonate solvents, the plant operates under an intelligent control framework that allows for remote and extended operation. This level of automation supports continuous data collection, essential for validating computational models and applying advanced control strategies, such as machine learning algorithms. Extended operation provides critical datasets for optimizing solvent stability, understanding corrosion behavior, and refining process models—key factors for scaling up CCUS technology.

The plant is fully electrified, with a heat pump integrated into the system to enhance energy efficiency by recovering heat from the condenser and upgrading it for reboiler use. The initial commissioning and testing phase will be conducted at ULiège’s Sart Tilman campus, where the plant will capture CO2 from a biomass boiler’s exhaust gases at the central heating station.

The modular design of the installation, housed within three 20-foot shipping containers, supports easy transport and deployment at various industrial locations. The automation and control system is centralized in the third container, allowing for full remote operation and facilitating quick reconfiguration of the plant for different experimental setups.

A key feature of the pilot is its flexible design, which integrates advanced gas pretreatment systems (including NOx and SOx removal) and optimized absorption/desorption columns with intercooling and interheating capabilities. These features allow dynamic adjustment of process conditions, enabling real-time optimization of CO2 capture performance. The solvent feed can be varied at different column heights, allowing researchers to evaluate the effect of column height on separation efficiency without making physical modifications. This flexibility is supported by a modular column design, where flanged segments can be dismantled or reassembled easily.

Overall, this pilot plant is designed to facilitate process optimization through data-driven approaches and intelligent control systems, offering critical insights into the performance and scalability of CCUS technologies. By providing a flexible, automated platform for long-duration experimental campaigns, it serves as a vital resource for advancing decarbonization efforts, especially in hard-to-abate industrial sectors.



A Comparison of Robust Modeling Approaches to Cope with Uncertainty in Independent Terms, Considering the Forest Supply Chain Case Study

Frank Piedra-Jimenez1, Ana Inés Torres2, Maria Analia Rodriguez1

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213

The need to consider uncertainty in the decision-making process is widely acknowledged in the PSE community, which distinguishes three main modelling paradigms for optimization under uncertainty, namely robust optimization (RO), stochastic programming (SP), and chance-constrained programming (CCP). The last two paradigms mentioned are computationally challenging due to the need for complete distributional knowledge (Chen et al., 2018). Instead, RO does not require the probabilistic behavior of uncertain parameters and strikes a good balance between solution quality and computational tractability (Ning and You, 2019).

One widely used method is static robust optimization, initially presented by Bertsimas and Sim (2004). They proposed the budgeted uncertainty set, which allows flexible handling of the level of conservatism of robust solutions in terms of probabilistic limits of constraint violations. It defines for each uncertain parameter a deviation bound from its nominal value and a budget parameter that determines the number of uncertain parameters allowed to take their worst value in each equation. When there is only one uncertain parameter on the right-hand side of each equation, this method may adopt an overly conservative perspective, considering the worst-case scenario for each constraint.
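
For reference, with nominal values \bar{a}_{ij}, maximum deviations \hat{a}_{ij}, uncertain index set J_i and budget \Gamma_i, the protection proposed by Bertsimas and Sim (2004) for a constraint \sum_j a_{ij} x_j \le b_i can be written as (integer budget shown; the notation is generic, not this paper's):

\sum_{j} \bar{a}_{ij}\, x_j \;+\; \max_{S \subseteq J_i,\ |S| \le \Gamma_i} \sum_{j \in S} \hat{a}_{ij}\, \lvert x_j \rvert \;\le\; b_i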

To address this concern, Ghelichi et al. (2018) introduced the “adjustable column-wise robust optimization” (ACWRO) method to define the number of uncertain realizations that the decision-maker is willing to satisfy. Initially presented as a nonlinear model, it is later reformulated to achieve a linear formulation. The present paper proposes an alternative method applying a linear disjunctive formulation, called “disjunctive robust optimization” (DRO). Here, the proposed method is applied to the forest supply chain design problem, extending a previous work by the authors (Piedra-Jimenez et al., 2024). Due to the disjunctive structure of the proposed approach, big-M and hull reformulations are applied to the DRO formulation and compared with the ACWRO approach on a large number of instances, showing the tightness and computational performance of each reformulation for the forest supply chain design case study.

References:

Bertsimas, D., Sim, M., 2004. The Price of Robustness. Oper. Res. 52, 35–53. https://doi.org/10.1287/OPRE.1030.0065

Chen, Y., Yuan, Z., Chen, B., 2018. Process optimization with consideration of uncertainties—An overview. Chinese J. Chem. Eng. 26, 1700–1706.

Ghelichi, Z., Tajik, J., Pishvaee, M.S., 2018. A novel robust optimization approach for an integrated municipal water distribution system design under uncertainty: A case study of Mashhad. Comput. Chem. Eng. 110, 13–34. https://doi.org/10.1016/J.COMPCHEMENG.2017.11.017

Ning, C., You, F., 2019. Optimization under uncertainty in the era of big data and deep learning: When machine learning meets mathematical programming. Comput. Chem. Eng. 125, 434–448. https://doi.org/10.1016/J.COMPCHEMENG.2019.03.034

Piedra-Jimenez, F., Torres, A.I, Rodriguez, M.A., 2024. A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty. Comput. Chem. Eng. 181, 108540.

 
10:30am - 11:30amT7: CAPEing with Societal Challenges - Session 5
Location: Zone 3 - Room D049
 
10:30am - 10:50am

Optimal Hydrogen Flux in a Catalytic Membrane Water-Gas Shift Converter

NABEEL ABOGHANDER1, Filip Logist2

1King Fahd University of Petroleum & Minerals, Saudi Arabia; 2KULeuven, Belgium

Due to the increasing momentum for the development of green processes, hydrogen may play a significant role in the energy sector as a potential future replacement for fossil fuels. Industrially, hydrogen can be produced by several processes, such as steam reforming of hydrocarbons and the water-gas shift reaction. However, both reactions suffer from thermodynamic limitations leading to low conversion and selectivity. Consequently, several improvements and modifications, such as installing hydrogen-permeable membranes in conventional chemical reactors, have been implemented, improving the performance of catalytic membrane reactors compared to conventional reactors. However, these hydrogen membrane reactors have not yet been optimized with respect to the hydrogen flux, which leaves their economic attractiveness in question.

In this work, the performance of a catalytic membrane water-gas shift reactor is optimized using the hydrogen flux profile as a control variable. To achieve this aim, a one-dimensional homogeneous reactor model is developed that accounts for the variation of the component molar flowrates, the temperatures on the reaction and permeate sides, and the pressure drop on the reaction side. The performance of the reactor is assessed using the achievable conversion of carbon monoxide to hydrogen, the hydrogen recovery, the reactor residence time and the temperature control on the reaction side.



10:50am - 11:10am

On Optimisation of Operating Conditions for Maximum Hydrogen Storage in Metal Hydrides

Chizembi Sakulanda, Thokozani Majozi

University of the Witwatersrand, South Africa

As the climate crisis continues to grow as an existential threat, significant efforts are underway to establish reliable energy resources that are both renewable and zero-carbon emitting. Off the back of these efforts, hydrogen has emerged as a critical player for future energy systems due to its high gravimetric energy density and abundant availability. However, hydrogen has its own disadvantages too. Foremost among these are its low volumetric energy density and the challenges associated with its storage and transport. One identified solution to this problem is the metal hydride, a solid-state storage method that addresses the current limitations. Storage is achieved through the chemical absorption of hydrogen into a porous metal alloy’s sublattice. However, despite the abiding promise that metal hydrides bring, their challenging thermodynamics leaves a gap between the ideal storage capacity that industry requires and the limited storage capacity that reusable metal hydrides currently provide.

This work, therefore, uses mathematical modelling to determine optimum operating conditions for the metal hydride in order to maximise hydrogen storage capacity. Computational fluid dynamics is used to simulate the coupled heat and mass transfer that occurs during hydrogen absorption into the metal alloy. An exploration of numerical methods is conducted, namely the simple explicit method, the Crank-Nicolson method and the alternating-direction implicit (ADI) method. The ADI method provides the most stable platform to conduct analyses of the variables affecting storage. The hydride bed thickness, heat transfer coefficient, supply pressure and cooling fluid temperature are the variables of focus. It is demonstrated that bed thickness and supply pressure possess the greatest influence on storage capacity and rate of absorption, respectively. Another, non-physical, variable that bears significant influence is the mesh grid size used in the simulation. The alloy MmNi4.6Al0.4 is used in the investigation.
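For readers unfamiliar with the ADI scheme mentioned above, the fragment below sketches a single Peaceman-Rachford ADI step for plain 2-D heat conduction with fixed-temperature walls; it is an assumed simplification of the coupled heat- and mass-transfer hydride-bed model, and the grid size, properties and temperatures are illustrative only.

```python
# One ADI step for 2-D heat conduction (illustrative stand-in for the hydride bed).
import numpy as np

n, alpha, dx, dt = 21, 1.0e-6, 1.0e-3, 0.5       # grid points, m^2/s, m, s (assumed)
r = alpha * dt / dx**2
T = np.full((n, n), 300.0)                       # initial bed temperature, K
T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 293.0  # cooling-fluid walls, K

m = n - 2                                        # interior nodes per direction
D = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
A_impl = np.eye(m) - 0.5 * r * D                 # implicit half-step operator

def adi_step(T):
    """Implicit in x then implicit in y, explicit in the other direction."""
    Ts = T.copy()
    for j in range(1, n - 1):                    # sweep 1: implicit along x
        rhs = T[1:-1, j] + 0.5 * r * (T[1:-1, j - 1] - 2 * T[1:-1, j] + T[1:-1, j + 1])
        rhs[0] += 0.5 * r * T[0, j]              # known wall values in x
        rhs[-1] += 0.5 * r * T[-1, j]
        Ts[1:-1, j] = np.linalg.solve(A_impl, rhs)
    Tn = Ts.copy()
    for i in range(1, n - 1):                    # sweep 2: implicit along y
        rhs = Ts[i, 1:-1] + 0.5 * r * (Ts[i - 1, 1:-1] - 2 * Ts[i, 1:-1] + Ts[i + 1, 1:-1])
        rhs[0] += 0.5 * r * Ts[i, 0]             # known wall values in y
        rhs[-1] += 0.5 * r * Ts[i, -1]
        Tn[i, 1:-1] = np.linalg.solve(A_impl, rhs)
    return Tn

T = adi_step(T)
print(round(float(T[n // 2, n // 2]), 3))        # centre temperature after one step
```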



11:10am - 11:30am

Methanol and Ammonia as Green Fuels and Hydrogen Carriers: A Comparative Analysis for Fuel Cell Power Generation

Antonio Sánchez, Elena C. Blanco, Mariano Martín

Department of Chemical Engineering, University of Salamanca, Spain, 37008, Spain

Hydrogen is one of the most significant tools in the energy transition to reach a high share of decarbonization. A wide range of applications has been proposed for it in energy storage or energy carrier systems. However, limitations in terms of storage and transportation have restricted its implementation. Therefore, the use of other chemicals derived from hydrogen has emerged. Among them, two liquid options stand out: methanol and ammonia. These chemicals can be used as green fuels or as hydrogen carriers. In the first approach, a direct transformation into power is performed (Salmon & Bañares-Alcántara, 2021). In the second alternative, methanol or ammonia is transformed into hydrogen, which is then used for power generation (Nemmour et al., 2023). At this point, a trade-off emerges between these two routes. On the one hand, direct fuel cell systems for methanol and ammonia present a simpler scheme, but their efficiencies are currently low. On the other hand, hydrogen fuel cells present a higher performance, but the process is more complex.

In this work, a systematic comparison of the two alternative routes to transform methanol and ammonia into power using fuel cell technology is performed. The first one is based on the direct use of these chemicals in a direct ammonia fuel cell (DAFC) or a direct methanol fuel cell (DMFC). In the case of methanol, a PEM fuel cell is selected, followed by a CO2 separation system. For ammonia, an alkaline membrane fuel cell is introduced, followed by an SCR treatment to control the NOx emissions. The second alternative to generate electricity from these chemicals is based on a first stage to produce H2, followed by a hydrogen-fed solid oxide fuel cell (SOFC). In this work, methanol reforming and ammonia decomposition are evaluated. After the SOFC, an organic Rankine cycle is introduced to improve the energy efficiency of the integrated system.

All the cases have been analyzed for a fuel cell capacity of 1000 kW. In the direct use of methanol/ammonia as green fuels in the DMFC or DAFC, the energy efficiency is 26.5% and 16.8%, respectively. Regarding their use as hydrogen carriers, the final conversion of methanol/ammonia is around 97%, reaching a hydrogen yield of 0.11 kg H2/kg methanol and 0.12 kg H2/kg ammonia. The final efficiencies of these integrated systems rise to 35.8% (methanol) and 42.3% (ammonia) due to the better energy performance of hydrogen fuel cells. In terms of economics, the cost of electricity for the use as green fuels is around 1200 €/MWh and, for the use as hydrogen carriers, about 700 €/MWh. A comparison including transportation and production is also included. The use of methanol and ammonia emerges as a competitive option for distances above 3000 km considering the current efficiency values.

References

Nemmour, A., Inayat, A., Janajreh, I., & Ghenai, C. (2023). Green hydrogen-based E-fuels (E-methane, E-methanol, E-ammonia) to support clean energy transition: A literature review. International Journal of Hydrogen Energy, 48(75), 29011-29033.

Salmon, N., & Bañares-Alcántara, R. (2021). Green ammonia as a spatial energy vector: a review. Sustainable Energy & Fuels, 5(11), 2814-2839.

 
10:30am - 12:30pmT1: Modelling and Simulation - Session 8
Location: Zone 3 - Room E032
Chair: Ihab Hashem
Co-chair: Jakob Kjøbsted Huusom
 
10:30am - 10:50am

Development of a virtual CFD model for regulating temperature in a liquid tank

Jinxin Wang1, Feng Xu1, Yuka Sakai1, Hisashi Takahashi2, Ruizi Zhang3, Hiroaki Kanayama3, Daisuke Satou3, Yasuki Kansha1,3

1Organization for Programs on Environmental Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8902, Japan; 2TechnoPro, Inc. TechnoPro R&D Company, Roppongi Hills Mori Tower 35F, 6-10-1 Roppongi, Minato-ku, Tokyo 106-6135, Japan; 3Technology and Innovation Center, Daikin Industries, LTD., 1-1 Nishi-Hitotsuya, Settsu, Osaka 566-8585, Japan

Temperature regulation in liquid tanks is critical in the chemical industry and typically relies on thermometer feedback. However, due to the complexity of the flow and thermal fields, unsensed local temperatures can deviate from desired limits, highlighting the need for improved tank temperature modeling. The absence of internal thermal or flow data, however, presents a challenge for developing and validating predictive or control models.
In this study, a virtual model for regulating liquid tank temperature was developed using computational fluid dynamics (CFD) based on the Navier-Stokes and energy equations. The inlet temperature was set to a constant value (10 °C for cooling and 50 °C for heating) with a steady flow rate of 770 ml/min. The CFD model was validated against experimental data from a water tank for temperature and flow fields in typical heating and cooling modes with adiabatic walls. Using this virtual CFD model, several new cases were simulated, involving two mechanisms: (1) a fuzzy set to trigger the feeding when the temperature of a virtual sensor falls outside [24 °C, 26 °C], and (2) the imposition of unfavorable temperatures on the walls representing ambient influences. The simulations revealed temperature response discrepancies between the sensor and other interior points, which can exceed 2 °C. Thus, the constructed model can be used to generate valuable datasets for temperature regulation in liquid tanks.
This virtual CFD model offers an economical and reliable approach for advancing temperature prediction and control models in the chemical industry, supporting improved material quality control and energy efficiency.
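The feeding trigger described above can be pictured with a small, assumed control fragment: a band trigger around the 24-26 °C window with a fuzzy-style membership degree near the band edges; the margin value and function names are illustrative and not taken from the study.

```python
# Assumed sketch of the virtual-sensor feeding trigger (band + soft membership).
def feed_action(t_sensor, band=(24.0, 26.0), margin=0.5):
    lo, hi = band
    if t_sensor < lo:
        degree = min(1.0, (lo - t_sensor) / margin)   # degree of "too cold"
        return "feed_hot", degree                     # feed the 50 °C inlet stream
    if t_sensor > hi:
        degree = min(1.0, (t_sensor - hi) / margin)   # degree of "too hot"
        return "feed_cold", degree                    # feed the 10 °C inlet stream
    return "idle", 0.0

for t in (23.2, 24.5, 26.8):
    print(t, feed_action(t))
```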



10:50am - 11:10am

A Computational Framework for Cyclic Steady-State Simulation of Dynamic Catalysis Systems: Application to Ammonia Synthesis

Carolina Colombo Tedesco, John Kitchin, Carl Laird

Carnegie Mellon University, United States of America

Dynamic or Programmable Catalysis is an innovative strategy to improve heterogeneous catalysis processes.1 The technique modulates the binding energies (BE) of adsorbates to the catalytic surface, enabling the periodic favoring of different reaction steps. Such forced energetic oscillations can overcome limitations imposed by the Sabatier Principle, allowing for higher overall reaction rates, unattainable through conventional steady-state methods. Researchers confirmed the effects of dynamic catalysis computationally through sequential simulation, i.e., forward integration of the differential-algebraic equation systems that govern catalytic processes.2 In this work, we implemented a simultaneous simulation approach by formulating the problem as a boundary value problem with limit cycle or periodic boundary conditions, directly solving for the cyclic steady state (CSS). The approach was implemented using the optimization algebraic modeling language Pyomo.DAE3, to support automatic transcription of the differential equations, and the solver IPOPT.4 The methodology improved run times by orders of magnitude. The computational efficiency of the simultaneous approach allowed the implementation of derivative-free optimization methods to determine optimal parameters (within bounds) for the shape of the forcing signal that describes the BE oscillations. For the continuous forcing functions, it was possible to implement gradient-based methods and modify the Pyomo/IPOPT framework to determine the waveform parameters that directly optimize the overall rate of reaction (avTOF). For the square wave, we verified an increase of four orders of magnitude in the avTOF compared to the peak of the Sabatier Volcano of the static system. These results demonstrate both the potential of dynamic catalysis and the value of using optimization techniques for waveform design. We now aim to extend the methodologies to more complex systems of industrial interest. Wittreich et al. conducted the most comprehensive study on dynamic catalysis in complex systems, working on ammonia synthesis5. Applying the simultaneous simulation approach to such a system should show that the methodology is extensible to more intricate scenarios. Furthermore, applying waveform optimization to this system could lead to even higher reaction rates than those reported. We also aim to further develop mathematical approaches for more direct analyses of dynamic catalysis system behavior.
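A minimal sketch of the simultaneous (boundary-value) formulation described above, assuming a single forced first-order ODE instead of the catalytic micro-kinetics: the state is discretized with Pyomo.DAE collocation, a periodic boundary condition replaces the initial condition, and IPOPT solves directly for the cyclic steady state.

```python
# Toy cyclic steady state: dx/dt = -k(t) x + 1 with a periodically forced k(t).
import math
import pyomo.environ as pyo
import pyomo.dae as dae

m = pyo.ConcreteModel()
m.t = dae.ContinuousSet(bounds=(0.0, 1.0))        # one forcing period (scaled)
m.x = pyo.Var(m.t, bounds=(0.0, None), initialize=0.5)
m.dxdt = dae.DerivativeVar(m.x, wrt=m.t)

def ode(m, t):
    k = 1.0 + 0.8 * math.sin(2.0 * math.pi * t)   # binding-energy-like modulation
    return m.dxdt[t] == -k * m.x[t] + 1.0
m.ode = pyo.Constraint(m.t, rule=ode)

# Periodic boundary condition in place of an initial condition: x(0) = x(T).
m.periodic = pyo.Constraint(expr=m.x[m.t.first()] == m.x[m.t.last()])
m.obj = pyo.Objective(expr=0.0)                   # pure feasibility problem

pyo.TransformationFactory("dae.collocation").apply_to(m, nfe=20, ncp=3)
pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.x[m.t.first()]), pyo.value(m.x[m.t.last()]))
```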

References

(1) Ardagh, M. A.; Abdelrahman, O. A.; Dauenhauer, P. J. ACS Catal. 2019, 9 (8), 6929–6937. https://doi.org/10.1021/acscatal.9b01606.

(2) Ardagh, M. A.; Birol, T.; Zhang, Q.; Abdelrahman, O. A.; Dauenhauer, P. J. Catal. Sci. Technol. 2019, 9 (18), 5058–5076. https://doi.org/10.1039/C9CY01543D.

(3) Nicholson, B.; Siirola, J. D.; Watson, J.-P.; Zavala, V. M.; Biegler, L. T. Math. Program. Comput. 2018, 10 (2), 187–223. https://doi.org/10.1007/s12532-017-0127-0.

(4) Wächter, A.; Biegler, L. T. Math. Program. 2006, 106 (1), 25–57. https://doi.org/10.1007/s10107-004-0559-y.

(5) Wittreich, G. R.; Liu, S.; Dauenhauer, P. J.; Vlachos, D. G. Sci. Adv. 2022, 8 (4), eabl6576. https://doi.org/10.1126/sciadv.abl6576.



11:10am - 11:30am

Accelerated Process Modeling for Light-Mediated Controlled Radical Polymerization

Rui Liu1, Xi Chen1,2, Antonios Armaou3,4

1State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University 310027, Hangzhou China; 2National Center for International Research on Quality-targeted Process Optimization and Control, Zhejiang University 310027, Hangzhou China; 3Chemical Engineering Department, University of Patras, Patras 26504, Greece; 4Chemical Engineering & Mechanical Engineering Departments, Pennsylvania State University, College Park, PA 16802 USA

Mathematical modeling and simulation are pivotal components in process systems engineering (Vassiliadis, 2024). Focusing on polymerization processes, identifying microscopic properties of polymers is necessary for advancing kinetic comprehension and facilitating industrial applications. Mathematical modeling methods for polymerization processes can be categorized into two types: the deterministic method solving mass balance equations and the stochastic simulation using the Monte Carlo (MC) method. For complex polymerizations, like the controlled radical polymerization (CRP) with reversible reaction pairs, it is challenging to derive a rigorous model and solve it deterministically. Direct solution of the resulting ordinary differential equations is computationally infeasible due to memory resource limitations and model stiffness. These hurdles can be surpassed via coarse graining and order reduction for computational efficiency, albeit with a trade-off in applicability. In contrast, the kinetic Monte Carlo (kMC) method, based on Gillespie’s (1977) stochastic simulation algorithm, is conceptually straightforward, focusing on individual molecules. The kMC method enables characterizing individual polymer chains and the tracking of system state evolution in a mathematically straightforward way. However, in kMC simulations for polymerization processes with complicated mechanisms, computational costs can be prohibitive due to the large number of tracked events and simulated polymeric chains.

The superbasin strategy was originally proposed to speed up molecular simulations (Chatterjee, 2010) and has great potential for addressing the computational bottlenecks of kMC simulations of polymerization processes. This work presents a superbasin-aided kMC model developed to accurately and efficiently simulate polymerization processes featuring complex kinetic mechanisms with microscopic resolution. The core concept of the developed model is to algorithmically regularize the discrepancy in reaction rates between fast and slow reactions, thereby reducing the computational resources demanded by fast reactions. The implementation of the superbasin-aided kMC simulation integrates three critical steps into the standard kMC framework: (i) on-line classification of fast and slow reactions, (ii) feasibility assessment of scaling, and (iii) a scaling operation that adjusts fast reaction rates. Compared to conventional stochastic simulations, the developed model significantly improves computational efficiency by bypassing the meticulous tracking of fast reaction events that have negligible impact on the overall polymer structures.
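The scaling idea in steps (i)-(iii) can be illustrated with a deliberately simplified stochastic simulation (an assumption for exposition, not the authors' algorithm): a Gillespie SSA in which a fast reversible pair is damped once its propensities dwarf those of the slow channel, so far fewer events are spent resolving the fast equilibrium.

```python
# Toy SSA with crude "fast-channel" scaling (illustrative only).
import numpy as np
rng = np.random.default_rng(0)

# species [A, B, C]; reactions: A->B, B->A (fast pair), B->C (slow)
stoich = np.array([[-1, +1, 0], [+1, -1, 0], [0, -1, +1]])
rates = np.array([500.0, 500.0, 1.0])
fast = np.array([True, True, False])

x, t = np.array([1000, 0, 0]), 0.0
while t < 5.0 and x[2] < 900:
    a = rates * np.array([x[0], x[1], x[1]], dtype=float)   # propensities
    scale = np.ones_like(a)
    if a[fast].sum() > 100.0 * a[~fast].sum() > 0.0:        # fast pair dominates
        scale[fast] = 0.01                                  # damp both directions equally
    a_s = a * scale
    a0 = a_s.sum()
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)                          # time to next event
    r = rng.choice(len(a_s), p=a_s / a0)                    # which reaction fires
    x = x + stoich[r]
print(round(t, 3), x)
```

Scaling both directions of the fast pair by the same factor preserves its equilibrium ratio while cutting the number of simulated events; handling the time accounting rigorously is the kind of issue the feasibility assessment in step (ii) presumably guards against.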

Photoiniferter-RAFT (PI-RAFT), a representative light-mediated CRP technique, is used as a case study to demonstrate the superbasin-aided kMC model. The accuracy and efficiency of the developed model are validated against the deterministic method of moments and the conventional kMC simulation. A thousand-fold speedup is achieved for predicting the evolution of microscopic properties of polymers without compromising simulation accuracy. Furthermore, with light irradiation as an external stimulus, light-mediated CRP offers high temporal control over polymer chain growth. The performance of temporal control under intermittent light irradiation is investigated through the proposed model. The kinetic insights gained from this modeling and acceleration work offer potential for further online control and optimization of light-mediated CRP processes.



11:30am - 11:50am

Plantwide control of a formic acid production plant under unsteady green hydrogen supply

Mohammad Mahdi Ghasemi Aliabadi, Alexandros Anagnostou, Francia Gallardo Gonzalez, Shivam Pandey, Farzad Mousazadeh, Anton Kiss

Technische Universiteit Delft, The Netherlands

Corresponding author: a.a.kiss@tudelft.nl

Current trends for sustainability require the chemical industry to migrate from non-renewable feeds to green raw materials. Formic acid (FA) production from green hydrogen and captured CO2 is a good candidate to mitigate greenhouse gas emissions, as FA is a widely applicable chemical. The greatest challenge in the usage of renewable sources is their intermittent nature. For this reason, control is of utmost importance to ensure that product specifications and process parameters are maintained. With processes becoming more complex, the role of plantwide control (PWC) is becoming increasingly prominent.

The original contribution of this work is the design of a new PWC system for a highly intensified FA production plant that maintains a throughput of 50 kta and an FA product purity of 85 wt%. An already established conceptual design was used, which comprises two sections [1]. The process starts with the production of CO from CO2 and green H2, followed by the production of FA using methyl formate as an intermediate. In this context, the COPure unit and the divided wall column (DWC) are identified as the most challenging sub-processes. The project was kick-started by first addressing the robustness of the Aspen Plus V12 steady-state flowsheet to identify the most sensitive pieces of equipment. Additionally, by implementing relative specifications in Aspen Plus, it was possible to achieve a flexible steady-state simulation for different feed flow rates.

When converting to dynamic mode, several modifications to the steady-state flowsheet were needed for successful initialization. These include adding the necessary equipment and pressure gradients to achieve a proper pressure-driven dynamic simulation. From this point on, the designed PWC system was implemented using two levels. The first level focuses on controlling individual equipment (but on its own it was not adequate to handle different feed flow rates), whereas the second level revolves around controlling the whole process from a plant-wide perspective, which allows for a smooth transition between different production capacities.

The evaluation of the PWC scheme showed that the designed control system maintained FA purity and production rate for all throughput disturbances tested (up to ±20% change), despite several intricacies of the FA process, such as multiple recycles and the DWC. The deviation of the FA flow rate in dynamic mode from the steady state simulation for the +20% and -20% change in throughput was +0.6% and -0.2%, respectively.

Overall, the evaluation under unsteady hydrogen supply conditions proved the feasibility of the suggested PWC design for formic acid production at a conceptual level.

[1] N. Kalmoukidis, A. Barus, S. Staikos, M. Taube, F. Mousazadeh, A. A. Kiss, Novel process design for eco-efficient production of green formic acid from CO2, Chemical Engineering Research and Design, 210, 425-436, 2024.



11:50am - 12:10pm

Exploiting Operator Training Systems in chemical plants: learnings from industrial practice at BASF.

Frederic Cuypers, Filip Logist, Tom Boelen

BASF Antwerpen NV, Belgium

Demographic changes in operator populations as well as substantially higher levels of automation in chemical plants are leading to a decline in experience and skill levels required to operate these plants in a safe and efficient manner.

Operator training simulators (OTS) have become essential tools within BASF to enhance and develop the experience levels of operators in the plant.

The OTS consists of a dynamic model describing the real process. Besides the model, the OTS environment includes a mimic of the control system and safety logics which are connected to the model. The operator interacts with the OTS via similar control system graphics as in the real plant or control room. In this way the OTS environment becomes a very realistic environment which creates an immersive training experience.

OTS allows operators to practice and improve their skills in this safe and controlled environment. These simulators offer a range of benefits, including reducing training costs, minimizing operational risks, and increasing overall efficiency and experience levels.

OTSs are extensively used to train operators on various aspects of plant operation, such as process understanding and optimization, procedural training and disturbance handling. By providing a realistic simulation environment of the process, OTS enables operators to gain hands-on experience in handling critical situations, troubleshooting problems, and making informed decisions.

Different levels of training can be handled by different types of OTSs. Where the training of a starting operator is mainly focused on understanding (parts of) the process, an experienced operator will focus on handling critical situations to avoid damage or production loss. Therefore, it is important to define upfront the objective of the training to be performed with the OTS, since this will have a significant impact on the scope, level of detail and setup of the OTS.

Due to the high accuracy of OTS models, OTSs are also used to support HAZOP (Hazard and Operability) studies, debottlenecking and optimization studies, and (advanced) control design.

The integration of OTS into BASF's training programs has led to improvements in operator competency and operational efficiency. These simulators have also facilitated knowledge transfer between experienced and new operators, ensuring a smooth transition and continuity in operations.

In conclusion, operator training simulators have become an indispensable tool within BASF for training operators. They offer a safe and realistic environment for operators to practice and enhance their skills, leading to improved performance, reduced incidents, and increased operational efficiency.



12:10pm - 12:30pm

New Directions and Software Tools Within the Process Systems Engineering-Plus Ecosystem

Stephen Burroughs1, Benjamin Lincoln2, Aleeza Adeel1, Isaac Severinsen2,3, Andrew Lee4,5, Oluwamayowa Amusat6, Daniel Gunter6, Bethany Nicholson7, Mark Apperley1, Brent Young3, John Siirola7, Timothy Gordon Walmsley2

1Ahuora – Centre for Smart Energy Systems, Department of Software Engineering, University of Waikato, Hamilton 3240, New Zealand; 2Ahuora – Centre for Smart Energy Systems, School of Engineering, University of Waikato, Hamilton 3240, New Zealand; 3Department of Chemical and Materials Engineering, University of Auckland, 5 Grafton Road, Auckland, 1010, New Zealand; 4National Energy Technology Laboratory, Pittsburgh, PA 15236, United States of America; 5NETL Support Contractor, Pittsburgh, PA 15236, United States of America; 6Lawrence Berkeley National Laboratory, Berkeley, CA 94720, United States of America; 7Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, 87185, United States of America

The transition to a sustainable industrial sector is a complex, capital-intensive challenge that requires the integration of a diverse array of technologies. As a result, it is crucial to understand when, where, and how to deploy both emerging and mature energy technologies synergistically. This integration must be accomplished while minimising any adverse effects on production and managing inherently and increasingly volatile material and energy supply and demand chains. To achieve this, Process Systems Engineering (PSE) provides the advanced conceptual frameworks and software tools to formulate and optimise well-considered integrated solutions that could accelerate the sustainability transition. However, many of the traditional PSE platforms have struggled to fully embrace modern computing technologies and delve into the broader challenges of designing sustainable multi-scale systems.

The landscape of advanced PSE, or PSE+, is poised to undertake a considerable transformation with the rise in popularity of open-source and script-based software platforms with predictive modelling capabilities based on modern mathematical optimization techniques. This paper provides a review of three leading equation-based platforms—Pyomo/IDAES, Modelica and Gekko—that are increasingly utilised for the modelling, simulation, and optimisation of complex systems within the PSE+ domain. Pyomo/IDAES and Modelica have seen considerable attention, forming the basis of an ecosystem of standard models and extensions. Gekko, in contrast, is a light-weight and fast library with minimal overhead and deployable in many industrial control systems. Each platform is critically examined for its capabilities, strengths, and current limitations, highlighting their roles in addressing both conventional and evolving challenges in process systems analysis and integration.

Beyond the current state, this paper explores potential future directions for the development of the PSE+ ecosystem. In particular, the paper discusses the ongoing development of an online platform featuring a graphical user interface (GUI) that is accessible, user-friendly, and reasonably modelling platform agnostic. This cloud-based platform, called the Ahuora Digital Twin platform, is built on modern software engineering principles, including structured data architectures, database integration, modularity, and containerisation, thereby enhancing scalability, flexibility, and accessibility for both researchers and practitioners. The development of such a platform could significantly reduce the barriers to entry for utilising advanced PSE+ methods, making them more accessible to industries and sectors that have not traditionally employed these techniques.

 
10:30am - 12:30pmT2: Sustainable Product Development and Process Design - Session 7 - Including keynote
Location: Zone 3 - Room D016
Co-chair: Zdravko Kravanja
 
10:30am - 11:10am

Keynote by Angelique Leonard

Angelique Leonard

Uliege

 


11:10am - 11:30am

Hybrid Modeling for Prospective Process Design Aligned with the Sustainable Development Goals

Sachin Jog1, Daniel Vázquez2, Lucas F. Santos1, Juan D. Medrano-García1, Gonzalo Guillén-Gosálbez1

1Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 1, 8093 Zurich, Switzerland; 2IQS School of Engineering, Universitat Ramon Llull, Via Augusta 390, 08017 Barcelona, Spain.

To achieve the ambitious goals set through various international agreements for climate change mitigation, it has become imperative for the chemical industry to transition to more sustainable production routes. Thus, life cycle assessment (LCA) has emerged as a tool to compare the environmental performance of alternative production pathways. However, traditional LCA methodologies (such as the Environmental Footprint (EF) impact assessment method (European Commission, 2017)) provide limited insights on the sustainability performance of a process in absolute terms. Recently, it was proposed to overcome this limitation by capitalizing on the planetary boundaries (PBs) framework, which defines a set of thresholds on nine Earth system processes delimiting the Earth’s ‘safe operating space’ (SOS). Transgressing the SOS could lead to detrimental effects that could shift the current equilibrium state of the Earth (Rockström et al., 2009). Building on this concept, Sala et al. (2020) recently developed a framework linking the PBs and the EF method to five Sustainable Development Goals (SDGs). While the SDGs were adopted in 2015 to assist policymakers in addressing key challenges facing humanity, their inclusion in process design has not been explored yet. Further, while LCAs are often carried out assuming that socio-economic systems will remain the same in future, they may greatly change due to learning curves and ongoing efforts to meet climate goals. In this regard, prospective LCAs leveraging the outcomes of integrated assessment models (IAMs) have recently emerged to assess the sustainability performance of industrial systems in future.

Here, capitalizing on the framework developed by Sala et al. (2020) and our previous work on hybrid modeling using Bayesian symbolic regression (Jog et al., 2024), we develop a process design framework accounting for the environmental process performance in terms of the contribution to attaining the SDGs. This approach is applied to the bi-objective optimization of the CO2 hydrogenation to methanol process, considering economic and SDGs-based performance as the objective functions. Compared with the business-as-usual process, we find that while the CO2 hydrogenation process stays within the SOS linked to SDG 13 (i.e., climate action), burden-shifting leads to a worsening of other impact categories. However, a prospective process design for the year 2050 shows that this collateral damage is drastically reduced due to expected improvements across economic sectors. Overall, this work emphasizes the need to evaluate environmental impacts beyond climate change in the transition to sustainable chemicals production, also showcasing the advantages of using hybrid modeling for efficient computation of the Pareto frontier.

References

European Commission (EC). PEFCR Guidance Document - Guidance for the Development of Product Environmental Footprint Category Rules (PEFCRs), version 6.3; European Commission, 2017, http://ec.europa.eu/environment/eussd/smgp/pdf/PEFCR_guidance_v6.3.pdf. (accessed July 28, 2024).

Jog, S., Vázquez, D., Santos, L.F., Caballero, J.A., Guillén-Gosálbez, G., 2024. Hybrid analytical surrogate-based process optimization via Bayesian symbolic regression. Comput. Chem. Eng. 182, 108563. https://doi.org/10.1016/j.compchemeng.2023.108563

Rockström, J. et al., 2009. A safe operating space for humanity. Nature 461, 472–475. https://doi.org/10.1038/461472a

Sala, S., Crenna, E., Secchi, M., Sanyé-Mengual, E., 2020. Environmental sustainability of European production and consumption assessed against planetary boundaries. J. Environ. Manage. 269, 110686. https://doi.org/10.1016/j.jenvman.2020.110686



11:30am - 11:50am

Applying learning effects for sustainable-by-design clinker production and Power-to-X conversion

Daniel Fozer

The Technical University of Denmark, Denmark

The transition towards climate-neutral societies demands innovative solutions to mitigate carbon emissions, particularly in energy-intensive industries. This study explores sustainable-by-design approaches to clinker production and Power-to-X (PtX) technologies by applying learning effects to forecast their environmental impacts through prospective life cycle assessments (pLCA). Incumbent clinker production practices fall short of meeting carbon-neutral targets, pressing the need to implement waste valorization and CO2 utilization strategies. Yet, knowledge gaps persist on the future environmental performance of alternative clinker manufacturing and emerging PtX solutions.

To address these gaps, this study examines the prospective life cycle impacts of (1) solid recovered fuel (SRF) utilization as a substitute for fossil fuels in clinker production and (2) Power-to-Methanol (PtM) technology, which converts renewable electricity and captured CO2 into methanol. By applying environmental learning effects, the environmental impacts of these systems are projected between 2025 and 2050, considering shared socioeconomic pathways (SSP1, SSP2) and the 1.9 W m−2 representative concentration pathway (SSP2-RCP1.9). First-of-a-kind (FOAK) and nth-of-a-kind (NOAK) PtM plants are modeled using the ASPEN Plus V14 software, capturing environmental learning effects through process optimization and scale-up. SimaPro 9.3.0.3 and the ReCiPe 2016 method are used to conduct life cycle impact assessments. Environmental learning rates, ranging from 4% to 34%, indicate substantial opportunities for improved environmental performance, with NOAK configurations achieving lower energy consumption and reduced production costs. This highlights the importance of integrating technological advancements in both the foreground and background life cycle inventories. The highest decarbonization progress is observed under the SSP2-RCP1.9 trajectory, driven by emissions avoided from waste management systems, the conversion of biogenic carbon-rich municipal solid waste, and CO2 upgrading. The results also highlight the potential for burden-shifting, such as land transformation and eutrophication (+26.9% and +5.1% by 2050), pointing to the need to adjust emission mitigation strategies.
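As a back-of-envelope illustration of how such learning rates propagate (a standard single-factor learning curve, not the study's pLCA machinery), the impact per unit after a number of cumulative production doublings can be estimated as follows; the FOAK impact and the number of doublings are placeholders.

```python
# Learning-curve sketch: impact per unit after n cumulative doublings.
import math

def impact_after(impact_foak, learning_rate, doublings):
    b = math.log2(1.0 - learning_rate)              # progress exponent
    return impact_foak * (2.0 ** doublings) ** b

for lr in (0.04, 0.34):                             # learning-rate range quoted above
    print(lr, round(impact_after(1.0, lr, 5), 3))   # NOAK after 5 doublings (assumed)
```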

The combined findings from these prospective assessments emphasize the critical role of environmental learning effects in driving sustainability performance in early-stage design and process engineering. By incorporating quantified learning rates into pLCA, this study provides essential insights for decision-makers, underscoring the value of continuous innovation and scale-up to achieve climate goals in cement manufacturing and methanol production. Together, these advancements represent a significant step toward a more sustainable and carbon-neutral industrial future.



11:50am - 12:10pm

Environmental Impacts of Trichlorosilane: Process Optimization, Life Cycle Assessment, and the Importance of Processing History

Ethan Errington1, Deniz Etit2, Jerry Heng2, Miao Guo1

1Department of Engineering, King's College London, WC2R 2LS, United Kingdom; 2Imperial College London, Department of Chemical Engineering, SW7 2AZ, United Kingdom

Trichlorosilane (TCS) is a key platform chemical used in the manufacture of silicon metals, silicones, and functional silanes. Despite this, very little information is available on the environmental impact (EI) associated with TCS manufacture. This is highly undesirable given the production volume of TCS and the manufacturing supply chains dependent upon it.

One reason for the lack of information on the EI of TCS is its variable production history1. For instance, technical grade TCS (TG-TCS, ~98wt%) can be produced from a direct chlorination (DC) process, or as a co-product of the Siemens process2. Though EI information is not available for production by either of these processes, a significant amount of process design information is1; thus, process modelling could be used to accurately predict the EI of TG-TCS via Life Cycle Assessment (LCA)3.

This study addresses gaps in understanding for the EI of TG-TCS. An LCA model has been developed to quantify this impact in terms of the Global Warming Potential (ReCiPe 2016 ‘H’ method4) associated with producing TG-TCS; both the direct (DC) and indirect (Siemens) process approaches are considered. The functional unit used is one kilogram of TG-TCS.

To produce robust research findings, a combination of process modelling (Aspen Plus V11) and technoeconomic analysis is used to identify the economic operating conditions at which TG-TCS is likely to be produced. This is coupled with LCA and a bi-objective optimization method (the NSGA-II algorithm5) to simultaneously identify the minimum EI achievable for manufacturing TG-TCS.

Findings demonstrate the ability to manufacture TG-TCS profitably with a GWP in the range of 1.5 to 2.3 kgCO2-eq per functional unit. Moreover, from the results of the bi-objective optimization, Pareto frontiers are used to highlight how the minimum achievable process impact varies as a function of process profitability. Finally, a comparison of the impacts associated with production via the direct (DC) and indirect (Siemens) processing routes is then used to discuss the effect of processing history on the overall environmental impact of the TCS product. Results are of interest to the EI assessment of any field relying on TG-TCS manufacture.
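As an aside for readers less familiar with the bi-objective setting, the helper below (an assumed illustration, not the study's NSGA-II workflow) extracts the non-dominated set from sampled (profitability-related cost, GWP) design points, which is all a Pareto frontier is.

```python
# Pareto filter for two minimized objectives, e.g. (annualized cost, GWP per kg).
def pareto_front(points):
    """Return points not dominated by any other (lower is better in both)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

designs = [(1.00, 2.3), (1.15, 1.9), (1.30, 1.6), (1.10, 2.4), (1.35, 1.8)]
print(pareto_front(designs))   # [(1.0, 2.3), (1.15, 1.9), (1.3, 1.6)]
```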

References

[1] Simmler, W. 2000. “Silicon Compounds, Inorganic”. In Ullmann's Encyclopedia of Industrial Chemistry.

[2] Ramírez-Márquez, César, et al. "Process design and intensification for the production of solar grade silicon." Journal of Cleaner Production 170 (2018): 1579-1593.

[3] Parvatker, A.G. and Eckelman, M.J. 2018. Comparative evaluation of chemical life cycle inventory generation methods and implications for life cycle assessment results. ACS Sustainable Chemistry & Engineering 7.1, pp. 350-367.

[4] Huijbregts, M. A.J., et al. 2016. ReCiPe 2016: a harmonized life cycle impact assessment method at midpoint and endpoint level report I: characterization.

[5] Deb, Kalyanmoy, et al. "A fast and elitist multiobjective genetic algorithm: NSGA-II." IEEE transactions on evolutionary computation 6.2 (2002): 182-197.



12:10pm - 12:30pm

Olefins production through sustainable pathways: techno-economic and environmental assessment

Oktay Boztaş, Meire Ellen Ribeiro Domingos, Daniel Flórez-Orrego, François Maréchal

Industrial Process and Energy Systems Engineering, EPFL, Sion, 1950, Switzerland

Plastics are indispensable materials in modern society, primarily used in packaging, textiles, and the transportation industries. The building blocks of plastics, olefins, are currently produced using fossil fuels, with naphtha—a refinery product—as the main feedstock. The most common process, steam naphtha cracking, demands significant energy inputs, which are supplied by the combustion of natural gas. Consequently, the process is vulnerable to fluctuations in the supply of crude oil and is responsible for 30% of the total CO2 emissions in the chemicals industry. One possible alternative to naphtha cracking is waste gasification, which produces syngas that can then be used for methanol synthesis and further, through the methanol-to-olefins (MTO) process, to produce olefins. Specifically, carbon-rich plastic waste can serve as the feedstock, supporting a circular economy and providing a solution for both plastic waste treatment and olefin production. Plastics can be co-gasified with biomass within autothermal or plasma configuration gasifiers. In addition, other strategies aimed at improving the efficiency and sustainability of this alternative process, such as CO2 capture and storage technologies, are investigated. Seasonal CO2 storage can enable cost-effective production, allowing for increased capacity during periods when renewable energy is abundant and electricity is cheaper. CO2 can be valorized in various ways, such as through injection, methanation, or syngas production via the rWGS reaction or co-electrolysis with water. Each of these options requires different processing conditions, leading to distinct overall process performance. The hydrogen needed for the methanation and rWGS reactions can be supplied by integrating an SOEC. Gasification is also a highly energy-demanding process, after which the waste heat could be recovered for electricity generation through steam cycles. These configurations were compared in light of thermodynamic, economic and environmental key performance indicators. As a result, with the integrated energy systems and CO2 valorization methods, the overall process efficiency can reach up to 99%, nearly doubling the production capacity from the same feedstock amount while eliminating direct CO2 emissions. The specific energy requirement of the suggested configurations increases by a factor of 2 to 2.5 compared to traditional SNC, with the entire energy demand shifted to electricity requirements, making the process independent of fossil fuels. This shift underscores the critical importance of renewable energy sources, as the process depends on the widespread availability of clean electricity to fully realize its environmental benefits. The proposed configurations establish a circular economy with no additional CO2 emissions and maximized efficiency, making them a promising solution in future scenarios of high renewable energy availability.

 
10:30am - 12:30pmT3: Large Scale Design and Planning/ Scheduling - Session 6
Location: Zone 3 - Aula E036
Chair: Meik Franke
Co-chair: Iiro Harjunkoski
 
10:30am - 10:50am

Process Integration and Waste Valorization for Sustainable Biodiesel Production Towards a Transportation Sector Energy Transition

Vibhu Baibhav, Daniel Florez-Orrego, Pullah Bhatnagar, Francois Maréchal

Ecole Polytechnique Fédérale de Lausanne, Switzerland

Fossil fuels for the transportation sector remain the primary contributor to global emissions, prompting an urgent exploration of renewable energy solutions, such as biodiesel. Produced from renewable feedstocks, biodiesel offers a replacement for traditional fossil fuels, helping mitigate global warming, enhance energy independence, and support rural economies. Yet, biodiesel production still faces significant challenges, including issues of energy efficiency, process optimization, and by-product treatment, which drive up production costs and limit its broader adoption. By providing a comprehensive framework for biodiesel process integration and waste heat and material (glycerol) valorization, this work studies the most promising routes supporting the long-term decarbonization of the biofuels production sector. Three key feedstocks, namely refined palm oil, rapeseed oil, and soybean oil, are evaluated and compared in terms of biodiesel yield. The single-step transesterification process has been modified into a two-stage approach to increase the conversion to fatty acid methyl esters under varying methanol and NaOH catalyst split fractions between the two reactors. Moreover, the study addresses the efficient utilization of glycerol, a key by-product of biodiesel production, critical for improving both economic viability and environmental sustainability. To this end, several valorization routes are modeled, including crude glycerol combustion, purification to pharma-grade glycerol, supercritical water gasification, and anaerobic digestion. Mixed-integer linear programming (MILP) is employed to minimize total costs, considering both operational and capital expenditures and the constraints imposed by process integration techniques. Finally, the CO2 emissions savings are compared to fossil fuel counterparts, including the end-use stage, demonstrating the environmental benefits of optimized biodiesel production.
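A minimal MILP in the spirit of the route-selection decision (route names from the abstract, all cost figures assumed) could look as follows; the paper's model additionally accounts for operational expenditures and the constraints imposed by process integration.

```python
# Toy MILP: choose one glycerol valorization route at minimum net annualized cost.
import pyomo.environ as pyo

routes = ["combustion", "pharma_purification", "scw_gasification", "anaerobic_digestion"]
capex = {"combustion": 0.8, "pharma_purification": 3.5,
         "scw_gasification": 2.7, "anaerobic_digestion": 1.6}     # M EUR/y, assumed
revenue = {"combustion": 0.3, "pharma_purification": 2.1,
           "scw_gasification": 1.2, "anaerobic_digestion": 0.9}   # M EUR/y, assumed

m = pyo.ConcreteModel()
m.y = pyo.Var(routes, domain=pyo.Binary)
m.pick_one = pyo.Constraint(expr=sum(m.y[r] for r in routes) == 1)
m.obj = pyo.Objective(
    expr=sum((capex[r] - revenue[r]) * m.y[r] for r in routes), sense=pyo.minimize)

pyo.SolverFactory("glpk").solve(m)
print({r: int(pyo.value(m.y[r])) for r in routes})
```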



10:50am - 11:10am

Optimization-based planning of carbon-neutral strategy: Economic priority between CCU vs CCS

Siuk Roh, Chanhee You, Woochang Jeong, Donggeun Kang, Dongin Jung, Donghyun Kim, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

Power and industrial sectors such as iron, cement, and chemical manufacturing are large-scale CO2 emission sources, accounting for 37% of CO2 emissions in South Korea, and they should cover an 87% CO2 reduction relative to 2018. The carbon capture, utilization, and storage (CCUS) technological framework is recognized as one of the promising strategies until a fully carbon-neutral system is deployed. In contrast to the rapid advance of CCUS R&D (e.g., catalysts and processes), the demonstration and deployment of CCUS technologies are still immature due to the limited studies on designing and planning a CCUS supply chain, especially regarding its integration with existing energy and industrial infrastructure. This study aims to develop a new optimization-based method to design and plan a CCUS supply chain and to analyze the optimal configuration and investment strategies. To achieve this goal, we develop an optimization-based approach to supply chain development using a mixed-integer linear programming (MILP) model. We estimate the technical (production scale and raw material consumption), economic (unit production cost), and environmental (carbon emissions) parameters based on the literature. The objective of the optimization model is to maximize the net present value (NPV) and minimize the net CO2 emission (NCE) of the CCUS supply chain strategies under logical and practical constraints. As a real case study, the future Korean CCUS system was analyzed, which includes the three major CO2-emitting industries in South Korea (power plants, steel, and chemicals) and real road transportation modes and sequestration sites. As a result, we analyzed different design and planning strategies based on various design objectives (e.g., maximizing economic and environmental benefits). In addition, by managing major cost drivers and economic bottlenecks, we identified major decision-making problems in the CCUS framework, such as sequestration vs. utilization, and provided a strategic solution for national-level planning of the CCUS supply chain. The major findings of this study can support industry stakeholders and government policymakers by providing a practical guideline for investing in and planning the deployment of CCUS.

References

Suh-Young Lee, In-Beum Lee, Jeehoon Han. (2019). Design under uncertainty of carbon capture, utilization and storage infrastructure considering profit, environmental impact, and risk preference



11:10am - 11:30am

Integration of MILP and Discrete-Event Simulation for Flow Shop Scheduling Using Benders Cuts

Roderich Wallrath1,2, Edwin Zondervan2, Meik Franke2

1Bayer AG, Kaiser-Wilhelm Allee 1, 51368 Leverkusen, Germany; 2Faculty of Science and Technology, University of Twente, the Netherlands

For companies in the process and chemical industry, optimization-based scheduling is a critical advantage in today’s fast-paced and interconnected world. However, the complexity when optimizing chemical processes is high: a chemical process that requires personnel and processing equipment, consumes raw materials and utilities, and is linked to a complex supply chain is naturally subject to many constraints and objectives [1]. Discrete Event Simulation (DES) models can describe complex real-world processes to a great level of detail, have relatively short computation times, and allow uncertainty parameters to be included. However, since DES models have limited optimization capabilities, their solutions may be far from the optimum. While MILP models enable global optimization, they quickly grow to an intractable size when trying to include all relevant constraints. In addition, MILP models can be difficult to set up, validate, and maintain for real-world applications.

The presentation shows that MILP and DES can be integrated using Benders decomposition, which results in an efficient Benders-DES algorithm (BDES) that combines the strengths of rigorous optimization and high-fidelity modeling. The partial integration of DES and MILP using Benders cuts has been shown in [2,3] and is a promising line of research to combine the strengths of simulation and optimization.

The developed BDES algorithm makes use of the dual information from the DES subproblem to build stronger Benders cuts. With this approach, flow shop scheduling problems with a makespan minimization objective are solved; these are among the most important operational problems in the process and chemical industry [1]. The dual information can be derived from the critical paths of the DES solutions. Critical paths play an important role in scheduling algorithms and have been used, for example, to improve the B&B procedure [4]. It is shown that the BDES algorithm is very efficient in solving random instances with 25 jobs and 5 machines, as it requires fewer DES iterations and less solution time than a genetic algorithm to find near-optimal solutions.
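To make the role of the critical path concrete, the fragment below evaluates a permutation flow-shop sequence with the standard completion-time recursion and backtracks its critical path; this is an assumed stand-in for the DES subproblem, not the BDES implementation, but it is precisely this path information from which such cuts are derived.

```python
# Flow-shop makespan and critical path for a given job sequence (toy data).
def makespan_and_critical_path(seq, p):
    """p[j][m] = processing time of job j on machine m; seq = job order."""
    n_m = len(p[0])
    C = [[0.0] * n_m for _ in seq]                 # completion times
    for i, j in enumerate(seq):
        for m in range(n_m):
            prev_job = C[i - 1][m] if i > 0 else 0.0
            prev_mach = C[i][m - 1] if m > 0 else 0.0
            C[i][m] = max(prev_job, prev_mach) + p[j][m]
    # Backtrack the critical path from the last operation.
    path, i, m = [], len(seq) - 1, n_m - 1
    while True:
        path.append((seq[i], m))
        if i == 0 and m == 0:
            break
        up = C[i - 1][m] if i > 0 else -1.0
        left = C[i][m - 1] if m > 0 else -1.0
        if up >= left:
            i -= 1
        else:
            m -= 1
    return C[-1][-1], list(reversed(path))

p = [[3, 2, 4], [2, 4, 1], [4, 1, 3]]              # 3 jobs x 3 machines, assumed times
print(makespan_and_critical_path([0, 1, 2], p))    # makespan 13.0 and its critical path
```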

The BDES algorithm is also effective in solving a real-world case study. The case study is an agrochemical formulation plant with seven mixing and filling lines and additional resource constraints. The BDES performs similarly to the originally proposed, monolithic-sequential MILP-DES approach [5], while requiring less modeling effort.

[1] Harjunkoski, I. et al. (2014), Computers & Chemical Engineering 62, 177

[2] Zhang et al. (2017). In: 2017 13th IEEE Conference on Automation Science and Engineering (CASE), pp. 1067–1072.

[3] Forbes et al. (2024). European Journal of Operational Research 312.3, pp. 840–854.

[4] Brucker, P. (2007). Scheduling Algorithms. 5th ed., Berlin, Germany, Springer.

[5] Wallrath, R. et al. (2023). Computers & Chemical Engineering 177, 108341.



11:30am - 11:50am

Evolutionary Algorithm Based Real-time Scheduling via Simulation-Optimization for Multiproduct Batch Plants

Engelbert Pasieka1, Sebastian Engell2,3

1INOSIM Software GmbH, Germany; 2Technische Universität Dortmund; 3ZEDO-Zentrum für Beratungssysteme in der Technik Dortmund e.V.

Scheduling in the process industry determines the sequence and timing of operations to optimize objectives such as minimizing order tardiness and improving plant utilization. In research, scheduling problems are traditionally solved “batch-wise”, i.e. for an idle plant and a given set of orders, production recipes and due dates, optimal schedules are computed. However, this does not reflect the reality of production planning and scheduling, which is a continuous process, where new orders arrive periodically or at unknown instances, the real operations take longer or shorter periods of time than specified in the recipes, pieces of equipment break down, or operations cannot be executed as planned because resources are not available. All these aspects could be covered by infinitely fast re-computation of optimal schedules whenever an event happens or new information becomes available, but this is practically impossible for realistic problems due to the required computation time.

In online or real-time scheduling, a continuous exchange of information between the scheduling system and the control system of the production plant is necessary. The scheduling model must be updated frequently to reflect the current state of the production system and of the orders. The scheduling algorithm must react to events and disturbances fast, but also utilize the available computing power such that the schedule is near optimal.

We present an online iterative simulation-optimization approach which is tailored to handle these challenges. It builds on our previous work on simulation-optimization using evolutionary algorithms, as described in [1]. The evolutionary algorithm continuously searches for better schedules while the simulation model is updated with the latest information so that the evaluation of each generation of solutions reflects the current situation. After a pre-specified reaction time, a new solution is available after major disturbances. While the first operations of this solution are started, the schedule is further improved continuously and each assignment and timing of an operation that has not been started is based on the currently best solution.

We validate our approach using a multiproduct, multistage batch plant from the pharmaceutical industry, as in the work of Kopanos et al. [2], and demonstrate that it can generate high-quality solutions in the presence of new order arrivals and disturbances. The results are compared with those provided by an idealized clairvoyant scheduler which has access to the full information before the schedule is computed. The influence of the choice of the reaction time after a disturbance which involves a compromise between a fast reaction and better decisions in the immediate future is studied in detail.

References

[1] C. Klanke, E. Pasieka, D. Bleidorn, C. Koslwoski, C. Sonntag and S. Engell, "Evolutionary Algorithm-based Optimal Batch Production Scheduling," Computer Aided Chemical Engineering, pp. 535-540, 2022.

[2] G. M. Kopanos, C. A. Méndez and L. Puigjaner, "MIP-based decomposition strategies for large-scale scheduling problems in multiproduct multistage batch plants: A benchmark scheduling problem of the pharmaceutical industry," European Journal of Operational Research, pp. 644-655, 2010.



11:50am - 12:10pm

Pipeline Network Growth Optimisation for CCUS: A Case Study on the North Sea Port Cluster

Victoria Brown, Joseph Hammond, Diarmid Roberts, Solomon Brown

University of Sheffield, United Kingdom

By 2050 around 12% of cumulative emissions reductions will come from Carbon Capture, Utilisation and Storage (CCUS) making it an essential component in the path towards net zero [1]. Focus will initially be on the retrofitting of fossil fuel power plants, which will then shift to hard-to-decarbonise industries such as iron, steel, and concrete [1]. Such industries are often grouped together in industrial clusters. Comprising both large and small point sources concentrated over a defined geographical area, industrial clusters offer an opportunity to maximise the impact of CCUS whilst also improving economic feasibility [2]. The North Sea Port (NSP) cluster is one such example of this.

Within the NSP cluster an initial set of five emitters are due to join a capture, conditioning, and transport network by 2030. From there other emitters within the area will be able to join incrementally up to 2050 [3].

However, the particular emitters that join and the timing of their connection will have a significant effect on the evolution of the network. The pipeline network design will therefore have to balance the design requirements of the initial emitters in a backbone network with the requirements for encouraging and enabling expansion.

This study builds on scenarios defined between 2030 and 2050 [3], and applies a multi-period evolutionary-based approach (Steiner tree with Obstacles Genetic Algorithm (StObGA)) to predict year-on-year pipeline network growth in the NSP cluster. This provides a novel approach to the problem. The results are used in an examination of the potential growth of the pipeline network and an investigation of the trade-offs necessary in the infrastructure design.

This work has received funding from the European Union’s Horizon 2020 Research and Innovation program under grant agreements no. 884418 (C4U project).

References:

[1] IEA, “Energy Technology Perspectives 2020,” IEA, 2020.
[2] Realise CCUS , “Industrial Clusters,” 2024. [Online]. Available: https://realiseccus.eu/ccus-and-refineries/industrial-clusters .
[3] J. O. Ejeh, S. Brown and D. Roberts, “D4.8. Report on techno-economic evaluation of the 2030 North Sea Port cluster,” C4U, 2023.



12:10pm - 12:30pm

Pareto optimal solutions for decarbonization of oil refineries under different electricity grid decarbonization Scenarios

Keerthana Karthikeyan1, Sampriti Chattopadhyay1, Rahul Gandhi2, Ignacio E Grossmann1, Ana I Torres1

1Carnegie Mellon University, United States of America; 2Shell USA

In response to growing global efforts to reduce carbon emissions, the oil refining sector—one of the largest contributors to industrial CO2 emissions—has established ambitious decarbonization targets. Previous work [1] has developed a methodology that helps refineries choose an economically optimal decarbonization pathway. This study builds on that work to obtain Pareto optimal solutions for a comprehensive analysis of the trade-off between minimizing CO2 emissions and minimizing costs along the decarbonization journey. We use a superstructure optimization [2] framework to obtain an optimal solution, while systematically evaluating various technological pathways. A bi-criterion optimization framework is employed to generate the Pareto frontier using the epsilon-constraint method [3]. Preliminary results indicate that lower-cost, higher-emission solutions generally rely on natural gas-based technologies combined with carbon capture, while higher-cost, lower-emission solutions are linked to electric power-based technologies. Furthermore, this study incorporates a more detailed assumption regarding the carbon intensity of grid electricity, moving beyond previous assumptions of a fully decarbonized grid. By comparing decarbonization pathways under both fully decarbonized and carbon-intensive grid scenarios, we account for variations in electricity decarbonization projections based on different countries' policies and goals. This approach offers deeper insights into how the carbon profile of the grid influences optimal decarbonization strategies for refineries, with findings suggesting that carbon-intensive grids further catalyze the adoption of carbon capture technologies.
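For intuition on the epsilon-constraint mechanics (a toy model with assumed coefficients, not the refinery superstructure), one can sweep an emissions cap and minimize cost at each cap; the resulting (cap, cost) pairs trace the Pareto frontier.

```python
# Epsilon-constraint sweep on a toy heat-supply choice (coefficients assumed).
import numpy as np
from scipy.optimize import linprog

# options: [NG boiler + CCS, electric boiler, H2 boiler], supplying 100 MW of heat
cost = np.array([40.0, 70.0, 95.0])     # EUR/MWh, assumed
co2 = np.array([0.08, 0.12, 0.01])      # tCO2/MWh with a carbon-intensive grid, assumed
A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([100.0])

frontier = []
for eps in np.linspace(co2.min() * 100.0, co2.max() * 100.0, 5):   # emission caps, tCO2/h
    res = linprog(cost, A_ub=[co2], b_ub=[eps], A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 100.0)] * 3, method="highs")
    if res.success:
        frontier.append((round(eps, 2), round(res.fun, 1)))
print(frontier)    # (CO2 cap, minimum cost) pairs tracing the Pareto frontier
```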

To further explore these trends, we simulate case studies across different locations, considering various projections for grid decarbonization profiles. We compare the results to assess how the Pareto frontier can inform local policy decisions and incentivize specific technologies [4] [5].
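To illustrate the epsilon-constraint idea on a toy bi-objective problem (not the refinery superstructure model of this study), a minimal Pyomo sketch could look as follows; variable names, cost and emission expressions are hypothetical, and an NLP solver such as IPOPT is assumed to be available.

```python
# Toy epsilon-constraint sweep for a bi-objective problem (cost vs. emissions).
# Purely illustrative; the refinery superstructure model is far larger.
import pyomo.environ as pyo

def solve_for_epsilon(eps):
    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(0, 10))           # e.g. electrified capacity share (hypothetical)
    m.y = pyo.Var(bounds=(0, 10))           # e.g. carbon-capture capacity share (hypothetical)
    m.cost = pyo.Expression(expr=2 * m.x + m.y)                   # mock cost objective
    m.emis = pyo.Expression(expr=(10 - m.x)**2 + (8 - m.y)**2)    # mock CO2 objective
    m.obj = pyo.Objective(expr=m.cost, sense=pyo.minimize)
    m.eps_con = pyo.Constraint(expr=m.emis <= eps)                # epsilon constraint
    pyo.SolverFactory("ipopt").solve(m)                           # any local NLP solver
    return pyo.value(m.cost), pyo.value(m.emis)

# Sweep the emissions bound to trace an approximate Pareto frontier.
frontier = [solve_for_epsilon(eps) for eps in (5, 20, 50, 100, 164)]
print(frontier)
```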

References:

[1] S. Chattopadhyay, R. Gandhi, I. E. Grossmann, and A. I. Torres, "Optimization of Retrofit Decarbonization in Oil Refineries," Foundations of Computer-Aided Process Design (FOCAPD 2024), Breckenridge, CO, USA, Jul. 14-18, 2024.
[2] Mencarelli, L., Chen, Q., Pagot, A., & Grossmann, I. E. (2020). A review on superstructure optimization approaches in process system engineering. Computers & Chemical Engineering, 136, 106808. https://doi.org/10.1016/j.compchemeng.2020.106808
[3] M. Bierlaire, Optimization: Principles and Algorithms, 1st ed. Lausanne, Switzerland: EPFL Press, 2015.
[4] Noshchenko, O., & Hagspiel, V. (2024). Environmental and economic multi-objective real options analysis: Effective choices for field development investment planning. Energy, 135, 135037. https://doi.org/10.1016/j.energy.2024.135037
[5] Maigret, J. de, Viesi, D., Mahbub, M. S., Testi, M., Cuonzo, M., Thellufsen, J. Z., Østergaard, P. A., Lund, H., Baratieri, M., & Crema, L. (2022). A multi-objective optimization

Acknowledgements:

This work was financed [in part] by a grant from the Commonwealth of PA, Dept. of Community & Economic Dev.
We acknowledge the support and funding from Shell Global and Shell Polymers Monaca.

 
10:30am - 12:30pmT4: Model Based optimisation and advanced Control - Session 6
Location: Zone 3 - Room E031
Chair: Mumin Enis Leblebici
Co-chair: Jose Alberto Romagnoli
 
10:30am - 10:50am

Subset Selection Strategy for Gaussian Process Q-Learning of Process Optimization and Control

Maximilian Bloor1, Tom Savage1, Calvin Tsay2, Antonio Del Rio Chanona1, Max Mowbray1

1Sargent Centre for Process Systems Engineering, Imperial College London, United Kingdom; 2Department of Computing, Imperial College London, United Kingdom

Reinforcement learning (RL) for chemical process control aims to enable more optimal plant-wide decision-making while maintaining safety in hazardous scenarios. However, process settings are both highly complex and more sample-constrained than other established applications of RL, such as fine-tuning language models or silicon chip design, which both apply neural networks (NNs) for decision-making. As opposed to NNs, using Gaussian processes (GPs) to approximate state-action value functions in sample-constrained applications can mitigate against over-fitting and provide analytical uncertainty estimates, enabling probabilistic constraint handling which is critical for safe learning in industrial processes. Inherent uncertainty quantification also enables automatic exploration and exploitation through an acquisition function. As GPs are a non-parametric distribution over potential functions, they reduce overheads in architecture design and hyperparameter tuning, both ill-defined tasks with small datasets.
Previous work has demonstrated the potential of GP models for sample-efficient reinforcement learning in chemical process control and robotics [1,2]. GPs are used to learn the Q-function, which quantifies the expected future reward from taking an action in a given state. However, these previous attempts do not distinguish inaccurate Q-values generated early in learning from later, more reliable estimates, hindering policy improvement and slowing convergence. While the parameters of NNs continually update throughout RL as the distribution of states, actions, and rewards shifts, non-parametric GPs must analogously be able to 'forget' inaccurate early representations of the Q-function.
In this work, we enable RL for sample-constrained process control using GPs. We introduce a subset selection mechanism that dynamically selects previous trajectories and reward profiles, balancing coverage while maintaining the density of data in high-performing regions of the state-action space, and omitting inaccurate, suboptimal assessments of the Q-function. The mechanism we propose is based on the M-Determinantal Point Process (M-DPP), which defines the probability of a subset's selection according to the determinant of its associated Gram matrix [3]. By applying developments from the sparse Bayesian optimization community [4], we incorporate Monte Carlo estimates of the state-action values of data points into this selection.
By mitigating against 'inaccurate' initial representations of the Q-function, this work addresses a key limitation of current GP Q-learning methods, where early, suboptimal trajectories could unduly influence the Q-function approximation even after the policy has improved. By selectively 'forgetting' data, our proposed approach allows the GP to more accurately model the Q-function for the current policy, leading to faster convergence and improved final performance. The adaptive subset selection introduced in this work represents a key step toward the prevalence of RL for industrial process control. These enhancements address key challenges in sample efficiency and computational scalability, leading to more optimal plant-wide decision-making while maintaining safety and reliability.
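As a rough illustration of determinant-based subset selection (a greedy stand-in for the M-DPP, without the Monte Carlo value weighting described above), one could select a diverse subset of stored state-action pairs by greedily maximizing the log-determinant of the kernel Gram matrix; the data and kernel below are hypothetical.

```python
# Hedged sketch of determinant-based subset selection: pick state-action pairs
# whose kernel Gram matrix has a large determinant, i.e. a diverse, well-spread
# subset of the replay data. Hypothetical data; not the paper's implementation.
import numpy as np

def rbf_gram(X, lengthscale=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def greedy_logdet_subset(X, m, jitter=1e-6):
    K = rbf_gram(X) + jitter * np.eye(len(X))
    selected = []
    for _ in range(m):
        best, best_gain = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            idx = selected + [i]
            _, logdet = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if logdet > best_gain:
                best, best_gain = i, logdet
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
state_actions = rng.uniform(size=(200, 4))   # hypothetical (state, action) pairs
subset = greedy_logdet_subset(state_actions, m=20)
print(subset)
```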
[1] T. Savage, et al. Model-free safe reinforcement learning for chemical processes using Gaussian processes. IFAC-PapersOnLine, 2021.
[2] M. P. Deisenroth, et al. Gaussian Processes for Data-Efficient Learning in Robotics and Control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
[3] D. R. Burt, et al. Convergence of sparse variational inference in Gaussian process regression. JMLR, 2020.
[4] H. B. Moss, et al. Inducing point allocation for sparse Gaussian processes in high-throughput Bayesian optimisation. AISTATS, 2023.



10:50am - 11:10am

Machine Learning-Based Soft Sensor for Hydrogen Sulfide Monitoring in the Amine Gas Treatment at an Industrial Oil Regeneration Plant

Luis Felipe Sánchez1, Eva Carolina Coelho2, Francesco Negri1,3, Francesco Gallo3, Mattia Vallerio1, Henrique A. Matos2, Flavio Manenti1

1CMIC Department "Giulio Natta", Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, 20133, Italy; 2Departamento de Engenharia Química, Instituto Superior Técnico, Avenida Rovisco Pais 1, Lisboa, 1049-001, Portugal; 3Itelyum Regeneration S.p.A., Via Tavernelle, 19, Pieve Fissiraga, 26854, Italy

Process monitoring is crucial in industrial facilities to maintain stability, meet production targets, and fulfill safety and environmental regulations. This typically involves regulating variables such as temperatures, pressures, flow rates or levels. However, when composition measurements are required, their evaluation becomes challenging. Composition sensors usually show several drawbacks, including high capital and operating costs, short lifetimes, sensitivity to harsh environments, low data frequency and frequent calibration requirements. To tackle this problem, several alternatives for indirect composition monitoring have been reported in the literature. Among these, soft sensors have gained attention in recent years due to their accuracy, ease of training, and the potential of integrating widely known machine learning techniques. In this study, we describe the methodology adopted to train a soft sensor for hydrogen sulfide monitoring in an industrial oil regeneration facility located in Pieve Fissiraga, Italy. The plant currently measures the hydrogen sulfide concentration through sampling and subsequent analysis using a gas chromatograph, which has led to significant delays in the composition measurement of up to eight hours. Unfortunately, insufficient historical data was available to correlate the laboratory analyses with measured plant variables. As an alternative, the data was used to develop and validate a rigorous simulation of the process using Aspen HYSYS. Although the simulation demonstrated high accuracy, with errors of around 2% when compared to plant data, its complexity, associated with the Aspen HYSYS acid gas package used, led to long computational times and convergence issues. For this reason, we adopted a data-driven surrogate-modeling approach. The surrogate model, based on Kriging (Gaussian process regression), was trained using data extracted from the simulation in a space-filling design derived from Latin Hypercube Sampling. The model demonstrated high fidelity to the process simulation, with prediction errors of less than 3%, providing a practical and cost-effective soft sensor for real-time hydrogen sulfide monitoring with the potential to significantly reduce off-spec operation times.

The key innovation in this research lies in the combination of process simulation, surrogate modeling, and data pre-processing to develop an accurate soft sensor for hydrogen sulfide monitoring. The historical plant data was pre-treated with an innovative approach to tackle its noisy and unsteady behavior and determine the steady states of the plant. Finally, the developed soft sensor is expected to be validated in the industrial environment, enhancing process control and improving environmental compliance.
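A minimal sketch of the surrogate-modelling step, assuming a stand-in function in place of the rigorous Aspen HYSYS simulation, might combine a Latin Hypercube design with a Gaussian-process (Kriging) regressor as follows; all inputs, bounds and the mock simulator are hypothetical.

```python
# Illustrative surrogate-modelling sketch (not the plant model): sample a
# simulator over a Latin Hypercube design and fit a Gaussian-process (Kriging)
# soft sensor. The "simulator" below is a hypothetical stand-in for Aspen HYSYS.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(X):
    # Placeholder for the rigorous simulation; returns a mock H2S concentration.
    return 50.0 + 30.0 * X[:, 0] - 20.0 * X[:, 1] + 5.0 * X[:, 0] * X[:, 1]

sampler = qmc.LatinHypercube(d=2, seed=1)
X_unit = sampler.random(n=60)                           # space-filling design
X = qmc.scale(X_unit, l_bounds=[0.0, 0.0], u_bounds=[1.0, 1.0])
y = simulator(X)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

X_new = np.array([[0.3, 0.7]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted H2S: {mean[0]:.2f} +/- {std[0]:.2f}")
```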

References

  • S. Chen, C. Yu, Y. Zhu, W. Fan, H. Yu, T. Zhang, 2024. NOx formation model for utility boilers using robust two-step steady-state detection and multimodal residual convolutional auto-encoder. Journal of the Taiwan Institute of Chemical Engineers 155, 105252.
  • A. Galeazzi, F. de Fusco, K. Prifti, F. Gallo, L. Biegler, F. Manenti, 2024. Predicting the performance of an industrial furnace using Gaussian process and linear regression: A comparison. Computers & Chemical Engineering 181, 108513.
  • Y. Jiang, S. Yin, J. Dong, O. Kaynak, 2021. A Review on Soft Sensors for Monitoring, Control, and Optimization of Industrial Processes. IEEE Sensors Journal 21, 12868–12881.


11:10am - 11:30am

Machine Learning-Aided Robust Optimisation for Identifying Optimal Operational Spaces under Uncertainty

Sam Kay1, Mengjia Zhu1, Amanda Lane2, Jane Shaw2, Philip Martin1, Dongda Zhang1

1The University of Manchester, United Kingdom; 2Unilever R&D Port Sunlight, United Kingdom

Process optimisation and quality control are crucial in process industries for minimising product waste and improving overall plant economics. Identifying robust operational regions that ensure both product quality and performance is particularly valued in the chemical and pharmaceutical sectors. However, this task is complicated by uncertainties such as feedstock variability, control disturbances, and model mismatches, which can lead to violations of product quality constraints and significant batch discards. Addressing these uncertainties is essential for maintaining process stability and maximising profitability, as uncontrolled variability introduces stochastic elements into product quality that can result in large-scale wastage if not managed effectively.

We propose a novel robust optimisation strategy that integrates advanced machine learning and process systems engineering techniques to systematically identify optimal operational regions under uncertainty. Our approach begins by using a process model to screen a broad operational space across various uncertainty scenarios, pinpointing promising control trajectories to satisfy process constraints and product quality. Machine learning is then employed to cluster these trajectories into sub-regions. Meanwhile, correlations between key control variables are quantified through interpretable AI methods to reduce the operational space dimensionality. Finally, a two-layer dynamic optimisation framework is employed to determine the optimal control trajectory and its corresponding operable space within each promising sub-region.

To demonstrate the efficiency of our approach, we used a case study focusing on the quality control of a dynamic batch process for formulation product manufacturing. This case study accounts for generic industrial uncertainties such as feedstock variation, control disturbances, and operator human error. The results highlight the advantages of our proposed strategy and its significant promise for industrial application.



11:30am - 11:50am

Deterministic Optimization of Shell and Tube Heat Exchangers with Twisted Tape Turbulence Promoters

Jamel Eduardo Rumbo-Arias1, Fabián Pino2, Martín Picón-Núñez1, Fernando Israel Gómez-Castro1, Jorge Luis García-Castillo1

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de San Luis Potosí, México

Heat transfer enhancement techniques are frequently used in heat recovery systems with shell and tube heat exchangers. One of the most common methods is the incorporation of turbulence promoters, which increase thermal efficiency but also cause flow disturbances that result in an increase in pressure drop [1]. The design of shell and tube heat exchangers with turbulence promoters, such as perforated twisted tapes [2], is determined by several geometric variables, such as the inner and outer diameter of the tube, its length, the shell diameter, the baffle spacing, the baffle cut percentage, and the promoter geometry. A key objective in the design of heat exchangers is to minimize their total cost. The thermohydraulic performance of the exchanger depends directly on the geometry of the shell, tubes, and turbulence promoter, making it crucial to determine the optimal combination of parameters to calculate flow rates, the overall heat transfer coefficient (U), and pressure drops [3]. In this work, a deterministic approach is proposed to optimize the design of the heat exchange device. The objective function is to minimize the total annualized cost (TAC) of the equipment. The model is solved in the software GAMS, employing the solver CONOPT. Three case studies are presented: (1) optimization of a water-water heat exchanger with similar mass flow rates; (2) optimization of a water-water heat exchanger with different flow rates; and (3) optimization of a heat exchanger for a water-waxy residue system. In the first case, it was observed that in the system with turbulence promoters, the overall heat transfer coefficient (U) increased by 53% and the transfer area decreased by 34%, at the expense of a 64.23% increase in pressure drop inside the tubes. For the water-water system with different mass flows, the costs of an optimized smooth tube system and an optimized system with turbulence promoters were similar, regardless of the fluid arrangement, with an average TAC of 1,384 USD/year. In the case of the waxy residue, the TACs were 12,100 USD/year for the smooth tube and 12,709 USD/year for the tube with the promoter, concluding that the integration of the turbulence promoter did not reduce costs due to a 21% increase in pumping costs.

REFERENCES

[1] M. Picón-Núñez, J. I. Minchaca-Mojica, and L. P. Durán-Plazas, Selection of turbulence promoters for retrofit applications through thermohydraulic performance mapping, Thermal Science and Engineering Progress, vol. 42, Jul. 2023, doi: 10.1016/j.tsep.2023.101876.

[2] M. M. K. Bhuiya, M. S. U. Chowdhury, M. Saha, and M. T. Islam, Heat transfer and friction factor characteristics in turbulent flow through a tube fitted with perforated twisted tape inserts, International Communications in Heat and Mass Transfer, vol. 46, pp. 49–57, Aug. 2013, doi: 10.1016/j.icheatmasstransfer.2013.05.012.

[3] R.K. Sinnott, Chemical Engineering Design, Coulson & Richardson’s Chemical Engineering Series, Volume 6, 4th edition, Elsevier, 2005.



11:50am - 12:10pm

Systematic design of structured packings based on shape optimization

Alina Dobschall, Elvis Michaelis, Mirko Skiborowski

Hamburg University of Technology, Institute of Process Systems Engineering, Germany

Abstract

The design of structured packings for thermal separation columns has been the subject of extensive research for 60 years (Spiegel & Meier, 2003). Despite the profound expertise and considerable advances that have already been gained in this field, it remains a challenging but promising task. This is due to the fact that the packing performance depends on a variety of fluid dynamic and mass transfer-related parameters, which exert a mutual and not necessarily beneficial influence on one another. The systematic development of improved packings by means of computational fluid dynamics (CFD) simulations is a promising approach to yield improved designs that can be manufactured on the basis of the derived CAD models.

Various research groups have conducted parameter variations based on CFD simulations to systematically design structured packings (Neukäufer et al., 2019; Zawadzki et al., 2023). However, only a limited number of studies have employed mathematical optimization for this purpose. In previous work, two different CFD-based optimization approaches were developed with the objective of minimizing the pressure drop while maintaining or even maximizing the specific surface area, which is considered an indicator of mass transfer performance (Lange & Fieg, 2022). In topology optimization, the material distribution is varied in a predefined grid structure, making use of an evolutionary algorithm. While this approach provides high potential for innovation, it can only produce rough design drafts for tractable grid sizes. In contrast, shape optimization modifies a given structure gradually based on gradient information of the objective function, as obtained through adjoint simulations (Othmer, 2008). This approach does not enable the generation of new structures, but can be beneficial as a refinement tool for further improvement of already well-performing packings.

In this contribution, shape optimization coupled with single-phase CFD simulations of the gas phase is used to improve two initial packing structures. By starting from a topology-optimized packing structure, the sequential integration of both optimization methods is evaluated, while the transferability of the shape optimization approach to established structured packings is assessed on the basis of an initial Rombopak packing. For both cases, a successful application of the shape optimization was realized, resulting in reshaped packings with constant surface area and reducing the pressure drop by 16% for the topology-optimized packing and 3% for the Rombopak. The analysis of the shape-optimized packings provides further insight into the specific modifications that resulted in these improvements, which include rounding of edges and the closure of dead zones.

References

L. Spiegel, W. Meier, 2003, Chem. Eng. Res. Des., 81, 1, 39-47.

J. Neukäufer, F. Hanusch, M. Kutscherauer, S. Rehfeldt, H. Klein, T. Grützner, 2019, Chem. Eng. Technol., 42, 9, 1970-1977.

D. Zawadzki, M. Blatkiewicz, M. Jaskulski, M. Piątkowski, J. Koop, R. Loll, A. Górak, 2023, Chem. Eng. Res. Des., 195, 508-525.

A. Lange, G. Fieg, Novel Additively Manufacturable Structured Packings Developed by Innovative Design Methods, The 12th international conference Distillation & Absorption, Toulouse, 18.-21. September 2022.

C. Othmer, 2008, Int. J. Numer. Meth. Fluids, 58, 8, 861-877.

 
10:30am - 12:30pmT5: Concepts, Methods and Tools - Session 6
Location: Zone 3 - Aula D002
Chair: Lydia Katsini
Co-chair: Federico Galvanin
 
10:30am - 10:50am

An Objective Reduction Algorithm for Nonlinear Many-Objective Optimization Problems

Hongxuan Wang, Andrew Allman

University of Michigan, United States of America

Recently, challenges such as climate change and social equity have become important considerations for decision making, including for chemical process systems [1]. Traditional decision-making models that prioritize financial objectives alone are insufficient for addressing these complex issues. Instead, finding a solution which balances these tradeoffs requires solving a multi-objective optimization problem (MOP). Results of MOPs are Pareto frontiers, which depict a manifold of solutions representing the best one objective can do without making another one worse. However, challenges arise in dealing with problems that consider four or more objectives (many-objective problems, or MaOPs), where visualization of objective trade-offs becomes less intuitive and rigorously generating a complete set of solution points becomes computationally prohibitive.

In our previous work, we have developed an algorithm capable of systematically reducing objective dimensionality for (mixed integer) linear MaOPs [3]. In this work, we will extend the algorithm to reduce the dimensionality of nonlinear MaOPs. An outer approximation-like method is used to systematically replace nonlinear objectives and constraints with a set of linear approximations that, when the nonlinear problem is convex, provides a relaxation of the original problem [4]. Additional linear outer approximation constraints likely to be active in determining the Pareto frontier are generated by taking random steps within the cone defined by all objective gradient vectors. We demonstrate that identifying correlation strengths based on the linearly relaxed constraint space using our previously developed method can be sufficient for developing correlation strength weights for objective grouping.

The nonlinear objective reduction algorithm is validated through its application to various systems. First, an illustrative example with elliptical constraints and nonlinear objectives is presented. Next, the algorithm is applied to the well-known DTLZ5 benchmark problems, for which the correlations of objectives are known a priori. By tuning the step size for generating outer approximation constraints, the ability of the nonlinear objective reduction algorithm to successfully divide the objectives into the appropriate groupings is demonstrated for problems with up to 12 objectives. Finally, we demonstrate the algorithm's utility in a practical case study involving the optimal design of a hydrogen production system, underscoring its versatility and effectiveness in solving complex and practical many-objective optimization problems.
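As a hedged illustration of the objective-grouping idea (not the authors' algorithm), correlations between objectives can be estimated from sampled solutions and used to cluster redundant objectives; the toy objectives below are constructed so that two pairs are deliberately redundant.

```python
# Hedged sketch of objective grouping: estimate pairwise correlations between
# objectives over sampled decision vectors and cluster strongly correlated
# objectives together. Toy objectives only; not the paper's algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))              # sampled decision vectors

# Five toy objectives; f0/f1 and f2/f3 are deliberately redundant.
F = np.column_stack([
    X[:, 0] ** 2,
    X[:, 0] ** 2 + 0.05 * X[:, 1],
    np.abs(X[:, 1]),
    np.abs(X[:, 1]) + 0.05 * X[:, 2],
    (X[:, 2] - 0.5) ** 2,
])

corr = np.corrcoef(F, rowvar=False)
dist = 1.0 - np.abs(corr)                          # distance = 1 - |correlation|
Z = linkage(dist[np.triu_indices(5, k=1)], method="average")
groups = fcluster(Z, t=0.5, criterion="distance")
print("objective groups:", groups)                 # redundant objectives share a label
```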

[1] Bolis, I., Morioka, S.N. and Sznelwar, L.I., 2017. Are we making decisions in a sustainable way? A comprehensive literature review about rationalities for sustainable development. Journal of cleaner production, 145, pp.310-322.

[3] Russell, J.M., Allman, A., 2023. Sustainable decision making for chemical process systems via dimensionality reduction of many objective problems. AIChE Journal, 69(2), e17692

[4] Viswanathan, J. and Grossmann, I.E., 1990. A combined penalty function and outer-approximation method for MINLP optimization. Computers & Chemical Engineering, 14(7), pp.769-782.

[5] Deb, K., Thiele, L., Laumanns, M. and Zitzler, E., 2005. Scalable test problems for evolutionary multiobjective optimization. In Evolutionary multiobjective optimization: theoretical advances and applications (pp. 105-145). London: Springer London.

[6] Zitzler, E., Deb, K. and Thiele, L., 2000. Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary computation, 8(2), pp.173-195.



10:50am - 11:10am

Knowledge Discovery in Large-Scale Batch Processes through Explainable Boosted Models and Uncertainty Quantification: Application to Rubber Mixing

Louis Berthier1,2, Ahmed Shokry1, Eric Moulines1, Guillaume Ramelet2, Sylvain Desroziers2

1Centre de Mathématiques Appliquées (CMAP), Ecole Polytechnique, France; 2Manufacture Française des Pneumatiques Michelin, France

Rubber mixing is a crucial process in the rubber industry, where raw rubber is combined with various additives to produce a composite with the properties expected for tire performance. Conducted in internal mixers, this batch process poses a significant challenge in tracking the composite quality. Measuring the quality requires costly experimental analysis that can take several hours, while a batch is completed in 30 minutes. Developing physics-based models to predict mixing quality is challenging due to several factors: (i) the complex, non-linear, and heterogeneous processes involved, including mechanical mixing, chemical reactions, and thermal effects; (ii) the distinct chemical and physical properties of additives; and (iii) the difficulty of accounting for the machinery’s degradation that evolves over time.

Machine learning (ML) techniques have recently emerged as soft sensors to estimate composite quality by mapping easily measurable online variables to quality indicators. Common approaches such as just-in-time learning, ensemble methods, and window-based methods have shown promise, but they are limited by their lack of explainability. These methods offer little insight into the underlying physical processes or the influence of conditions on the final quality — an understanding that is crucial for control and optimization. They consider basic process variables (e.g. power and temperature) as the sole input for the ML model, neglecting critical factors like material characteristics and weather conditions, which can be vital, especially when ultra-quality grades with minimal variation must be achieved. Additionally, these approaches lack robust Uncertainty Quantification (UQ) frameworks, affecting their reliability.

This work addresses these challenges by proposing an explainable and robust ML-based method with UQ to analyze the hidden relationships between final batch quality and influencing factors, including process variables, material properties, and weather conditions. Our study focuses on one of Michelin’s rubber mixing lines and utilizes a comprehensive dataset covering 35125 production batches. The dataset has a dimensionality of 329 variables, which includes both direct measurements and features extracted using expert knowledge.

Our methodology centers on an XGBoost model, selected as the most accurate after extensive comparisons. The approach consists of three phases: Feature Selection, Explainability, and UQ. First, we apply recursive feature elimination to reduce dimensionality, retaining the most informative variables, which are then ranked using SHapley Additive exPlanations (SHAP) to further shrink the dimensionality. Subsequently, the explainability phase integrates SHAP values into other techniques like Partial Dependence Plots to ensure consistency across different quality levels and randomized data subsets. Finally, UQ is introduced through conformal predictions and ensemble methods to generate reliable confidence intervals for predictions, providing accurate model predictions and robust uncertainty estimates.

The results of our approach show promising improvements in both accuracy and interpretability. Feature selection eliminated 82% of redundant features, boosting prediction performance by 17% compared to the baseline model. Additionally, our analysis revealed the importance of previously overlooked factors in the literature, such as initial material properties and weather conditions, in predicting composite quality. The UQ module achieved 90% coverage, offering strong mathematical guarantees, and improving the model's robustness and reliability in real-world industrial applications.
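A minimal sketch of such a pipeline, on purely synthetic data rather than the Michelin dataset, could combine gradient boosting, SHAP-based feature ranking, and a split-conformal interval as follows; all data, hyperparameters, and the 90% coverage target are illustrative.

```python
# Minimal sketch of the modelling pipeline described above (hypothetical data):
# gradient-boosted regression, SHAP feature ranking, and split-conformal
# prediction intervals. Not the actual industrial pipeline or data.
import numpy as np
import shap
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))                              # mock process features
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.5 * rng.normal(size=2000)  # mock quality index

X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

# SHAP values rank feature influence on the predicted quality.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("most influential features:", ranking[:5])

# Split-conformal interval targeting ~90% coverage.
alpha = 0.1
residuals = np.abs(y_cal - model.predict(X_cal))
q = np.quantile(residuals, 1 - alpha)
pred = model.predict(X_te)
coverage = np.mean((y_te >= pred - q) & (y_te <= pred + q))
print(f"empirical coverage: {coverage:.2f}")
```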



11:10am - 11:30am

Nonmyopic Bayesian process optimization with a finite budget

Jose Luis Pitarch1, Leopoldo Armesto2, Antonio Sala1

1Instituto de Automática e Informática Industrial (ai2), Universitat Politecnica de Valencia, Spain; 2Instituto de Diseño y Fabricación (IDF), Universitat Politecnica de Valencia, Spain

Process optimization under uncertainty is inherent to many PSE applications such as optimal experiment design, RTO, etc. Extremum seeking, modifier adaptation, policy search, or Bayesian optimization (BO) are typical approaches used in this context to drive the process to the real optimum by acquiring experimental information. But actual experiments involve a cost (economic, resources, time), and a limited budget usually exists. However, none of the above techniques handle the accumulated exploration cost explicitly as part of a tradeoff in the optimization objective.

The problem of finding the best tradeoff on cumulative process performance and experimental cost over a finite budget is a Markov Decision Process (MDP) whose states are uncertain process beliefs. The general way to approach these problems is evaluating belief trees of candidate actions and plausible observations via dynamic programming, as Monte Carlo Tree Search algorithms do. But their computational cost is prohibitive.

If the belief is modeled by a Gaussian process (GP), the nonmyopic BO acquisition functions developed by the machine learning community are a more tractable and smart way to approach the above MDP problem. The key idea of nonmyopic BO is to look ahead several steps and compute the best decisions based on an estimate of the cumulative expected value or improvement over the budget horizon [1]. However, solving one-shot multi-step trees is also challenging, and rollout algorithms with default myopic BO acquisition functions are normally used to complete the value function estimation after the first lookahead steps, which can lead to conservative results.

Recently, we proposed a variant tailored for process experimental optimization [2] in which a few well-known standard BO acquisition functions are dynamically selected in each node of the tree to find the best expected value over the decision horizon. Moreover, we employ Gauss-Hermite quadrature in each observation node as an efficient way to approximate the value function. Although the approach is more tractable than other nonmyopic BO methods, its expected optimality is somewhat lower because it relies only on standard BO acquisition functions.

To remove such conservativeness and obtain optimality comparable with nonmyopic BO approaches in the literature, but at a lower computational cost, here we propose modelling the value function of the first-stage decision also with a GP, whose data correspond to evaluations of our decision tree in [2] for subsequent stages, instead of typical rollouts. In this way, the first-stage decision will be efficiently optimized via the dynamically learned value-function GP and Bayesian Adaptive Direct Search [3].
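As a small illustration of one ingredient mentioned above, Gauss-Hermite quadrature can replace Monte Carlo sampling when taking expectations of an improvement-type quantity under a Gaussian belief; the sketch below assumes a minimisation setting and hypothetical posterior moments.

```python
# Hedged sketch: Gauss-Hermite quadrature to approximate the expected
# improvement of a Gaussian belief N(mu, sigma^2) over an incumbent best value,
# instead of Monte Carlo sampling. Posterior moments below are hypothetical.
import numpy as np

def expected_improvement_gh(mu, sigma, best, n_nodes=16):
    # Nodes/weights for int f(x) exp(-x^2) dx; change of variables to N(mu, sigma^2).
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    y = mu + np.sqrt(2.0) * sigma * nodes
    improvement = np.maximum(best - y, 0.0)        # minimisation convention
    return (weights * improvement).sum() / np.sqrt(np.pi)

print(expected_improvement_gh(mu=1.0, sigma=0.5, best=1.2))
```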

[1] Jiang, S., Jiang, D., Balandat, M., Karrer, B., Gardner, J., & Garnett, R. 2020. Efficient nonmyopic bayesian optimization via one-shot multi-step trees. Advances in Neural Information Processing Systems, 33, 18039-18049.

[2] Pitarch, J.L., Armesto, L., Sala, A. 2024. POMDP non-myopic Bayesian optimization for processes with operation constraints and a finite budget. Revista Iberoamericana de Automática e Informática Industrial 21, 328-338.

[3] Acerbi, L., & Ma, W. J. 2017. Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. Advances in neural information processing systems, 30.



11:30am - 11:50am

A Propagated Uncertainty Active Learning Method for Bayesian Classification Problems

Arun Pankajakshan, Sayan Pal, Maximilian O. Besenhard, Asterios Gavriilidis, Luca Mazzei, Federico Galvanin

University College London, United Kingdom

Bayesian classification (BC) is a powerful supervised machine learning technique for modelling the relationship between a set of continuous variables (causal variables or inputs) and a set of discrete variables (response variables or outputs) that are represented as classes. BC has proven successful in several computational intelligence applications1 (e.g. clinical diagnosis and feasibility analysis). It adopts a probabilistic approach to learning and inference, where the relationship between inputs and outputs is expressed as probability distributions via Gaussian process2 (GP) models. Upon gathering data, posterior GP models are used to predict the class probabilities and to decide the most probable class corresponding to an input.

One way to implement BC efficiently with scarce data is to run the method in closed loop using active learning3 (AL) methods. The existing AL methods are either based on the values of relative class probabilities or based on the prediction uncertainty of the GP model. While the former methods exclude the uncertainty associated with the inference problem, the latter use uncertainty in the latent function values (the GP model predictions). This makes the AL methods converge slowly towards the true decision boundary separating the classes4. Here, we propose an AL method based on the uncertainty propagated from the space of latent function values to the space of relative class probabilities. We compare our method with existing AL methods in a simulated case study motivated by the identification of the feasible (fouling-free) operating region in a flow reactor for drug particle synthesis. Inputs in the case study are antisolvent flowrate, antisolvent-to-solvent flowrate ratio, and additive concentration, while outputs consist of class labels 0 and 1 for infeasible and feasible experiments, respectively.

The true model assumed in the simulated case study provided the true input-space decision boundary between the feasible and infeasible regions, against which the predicted boundaries were compared to evaluate classification performance. The results indicate that the probability-based AL method predicts a decision boundary closest to the assumed true boundary, with violations due to misclassification. The other two approaches, the function uncertainty-based approach in particular, provide a more conservative decision boundary compared to the true one. The propagated uncertainty-based approach provides a boundary that is conservative and close to the assumed true decision boundary. This study helps to design widely applicable adaptive BC methods with improved accuracy and reliability, ideal for autonomous systems applications.
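A hedged sketch of the propagated-uncertainty idea is given below: samples of the latent function at a candidate input are pushed through the link function, and the spread of the resulting class probabilities serves as the acquisition value. The latent posteriors are mocked here; in practice they would come from a fitted GP classifier.

```python
# Hedged sketch of propagated uncertainty for active learning: draw samples of
# the latent GP function at a candidate input, push them through the link
# function, and use the spread of the class probabilities as the acquisition.
import numpy as np

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def propagated_prob_uncertainty(latent_mean, latent_std, n_samples=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    f_samples = rng.normal(latent_mean, latent_std, size=n_samples)
    p_samples = sigmoid(f_samples)          # propagate to class-probability space
    return p_samples.std()                  # uncertainty in p(feasible | x)

# Candidate experiments with hypothetical latent posteriors (mean, std):
candidates = {"x1": (0.1, 0.3), "x2": (2.5, 1.0), "x3": (0.0, 1.5)}
scores = {k: propagated_prob_uncertainty(m, s) for k, (m, s) in candidates.items()}
print(max(scores, key=scores.get), scores)   # next experiment to run
```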

References

1 G. Cosma, D. Brown, M. Archer, M. Khan and A. Graham Pockley, Expert Syst Appl, 2017, 70.

2 C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, 2018.

3 D. D. Lewis and J. Catlett, in Proceedings of the 11th International Conference on Machine Learning, ICML 1994, 1994.

4 D. Khatamsaz, B. Vela, P. Singh, D. D. Johnson, D. Allaire and R. Arróyave, NPJ Comput Mater, 2023.



11:50am - 12:10pm

Modeling climate change impact on dairy: an uncertainty analysis

Lydia Katsini1, Satyajeet Sheetal Bhonsale1, Styliani Roufou2, Vasilis Valdramidis2,3, Monika Polanska1, Jan F. M. Van Impe1

1BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium; 2Department of Food Sciences and Nutrition, University of Malta, Malta; 3National and Kapodistrian University of Athens, Department of Chemistry, Athens, Greece

The global food system contributes to and is at the same time impacted by climate change, making it crucial to ensure climate resilience. Quantitative insights into how climate change affects food production are essential for this effort, and impact modeling serves as a key tool for gaining such insights. Typically, this involves integrating climate projections with impact models. In this study, we develop a machine learning-based impact model to assess the effects of climate change on the food system, with a focus on the dairy sector.

Dairy was chosen for this case study due to the limited research addressing how climate change affects the sector, as most studies focus on its contribution to greenhouse gas emissions. Using a comprehensive dataset of raw bovine milk from multiple farms across three countries (Malta, Spain, Belgium), spanning several years, we built a machine learning model to evaluate the future impacts of climate change. While milk yield is commonly studied, this research places special emphasis on milk fat, another milk attribute likely to be affected by changing climatic conditions.

A key pillar in this work is the uncertainty analysis. The impact model was built using Gaussian process regression, which offers a straightforward quantification of the model uncertainty. We also account for uncertainties in climate models and variability between farms, i.e., inter-farm variability. Therefore our approach yields a robust framework for uncertainty quantification. The results indicate that inter-farm variability contributes the most to the overall uncertainty, suggesting that on-farm measures may be the most effective for climate-proofing the dairy sector.

References

Katsini L., Bhonsale S., Akkermans S., Roufou S., Griffin S., Valdramidis V., Misiou O., Koutsoumanis K., Muñoz López C.A., Polanska M., Van Impe J.F.M., 2022. Quantitative methods to predict the effect of climate change on microbial food safety: A needs analysis. Trends in Food Science & Technology, 126.

Lehner, F., Deser, C., Maher, N., Marotzke, J., Fischer, E. M., Brunner, L., Knutti, R., and Hawkins, E., 2020. Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6. Earth Syst Dynam, 11, 491–508

 
10:30am - 12:30pmT6: Digitalization and AI - Session 5
Location: Zone 3 - Room E033
Chair: Alexander W Dowling
Co-chair: Fernando G. Martins
 
10:30am - 10:50am

The Smart HPLC Robot: Fully autonomous method development empowered by mechanistic model framework

Dian Ning Chia, Fanyi Duanmu, Luca Mazzei, Eva Sorensen, Maximilian Otto Besenhard

University College London, United Kingdom

Developing ultra- or high-performance liquid chromatography (HPLC) methods for analysis or purification can require significant amounts of material and manpower, and typically involves time-consuming iterative lab-based workflows. Autonomous HPLC is a powerful tool for speeding up method development, as demonstrated recently via Bayesian optimisation to identify suitable HPLC settings without any operator interference [1,2]. To allow for autonomy and knowledge-driven decision-making, we incorporate mechanistic models, and automated training of these models, in the workflow. This digital HPLC twin, aka “Smart HPLC Robot”, is an intelligent platform enabling the development of an optimized HPLC method with minimal computer-controlled experiments, while simultaneously delivering a calibrated mechanistic model that provides valuable insights into method robustness.
The Smart HPLC Robot is programmed in Python and integrates seamlessly with Agilent OpenLab software, which controls the Agilent 1260 Infinity II system via a web application interface built in C#. It also interfaces with process simulators to run mechanistic models, ensuring smooth communication between the experimental setup and the simulation environment.
The operation of the Smart HPLC Robot starts with the user configuring the robot, i.e., choosing the mass transfer and isotherm models to start with and the number of components expected (if this information is available), as well as providing the bounds for the operating variables (e.g., gradient program and flow rate) and the objective function for optimisation (e.g., maximize the number of peaks or minimize method time); otherwise default conditions are used. The initial conditions are then sent to the Agilent system. After each experiment, the robot analyses the chromatogram immediately to estimate and update the model parameters, e.g., the adsorption isotherm parameters. Once the parameters are estimated, the now fully-equipped mechanistic model allows for in-silico optimisation of the HPLC method, e.g., the gradient program and flow rate. The new, optimal experimental settings are then sent to the Agilent, and the chromatograms from the optimal simulation and the experiment are compared for validation. If unsatisfactory, these steps are automatically repeated as many times as required.
The results show that the Smart HPLC Robot is a fully automated and efficient framework for optimal HPLC design and operation, as well as for model development based on a mechanistic model and a limited number of experiments, all without any manual interference, thus saving material, time and manpower.
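A schematic of the closed loop described above is sketched below; every helper function is a hypothetical stub standing in for the actual interfaces to Agilent OpenLab, the parameter-estimation routine, and the process simulator, so that only the loop structure is illustrated.

```python
# Schematic of the closed loop described above, with every interface mocked:
# the real robot talks to Agilent OpenLab (via a C# web application) and a
# process simulator, whereas the stubs below just return dummy values so the
# loop structure runs. All function names and values are hypothetical.
import random

def run_experiment(settings):            # stub: send a method to the HPLC, get a chromatogram
    return {"peaks": random.randint(2, 4), "runtime": settings["gradient_time"]}

def estimate_parameters(chromatogram):   # stub: isotherm / mass-transfer parameter fitting
    return {"henry_coeff": 1.2, "mass_transfer": 0.8}

def optimise_in_silico(params, bounds):  # stub: model-based method optimisation
    return {"gradient_time": max(bounds["gradient_time"][0], 10.0 / params["henry_coeff"]),
            "flow_rate": bounds["flow_rate"][1]}

def mismatch(simulated, measured):       # stub: compare simulated vs. measured chromatogram
    return abs(simulated["peaks"] - measured["peaks"])

bounds = {"gradient_time": (5.0, 60.0), "flow_rate": (0.1, 1.0)}
settings = {"gradient_time": 30.0, "flow_rate": 0.5}

for iteration in range(5):
    measured = run_experiment(settings)
    params = estimate_parameters(measured)
    settings = optimise_in_silico(params, bounds)
    simulated = {"peaks": 3, "runtime": settings["gradient_time"]}  # mock model prediction
    if mismatch(simulated, run_experiment(settings)) == 0:
        break
print("final method settings:", settings)
```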

References
[1] Dixon, T.M., Williams, J., Besenhard, M., Howard, R.M., MacGregor, J., Peach, P., Clayton, A.D., Warren, N.J., Bourne, R.A., 2024. Operator-free HPLC automated method development guided by Bayesian optimization, Digital Discovery, 3, 1591–1601

[2] Boelrijk J., Ensing B., Forre P., Pirok B.W.J., Closed-loop automatic gradient design for liquid chromatography using Bayesian optimization, Analytica Chimica Acta, 1242, 340789



10:50am - 11:10am

Hybrid Models Automatic Identification and Training through Evolutionary Algorithms

Ulderico Di Caprio, M. Enis Leblebici

Center for Industrial Process Technology, Department of Chemical Engineering, KU Leuven, Agoralaan Building B, 3590 Diepenbeek, Belgium

Hybrid modeling (HM) techniques have become popular for predicting the behavior of complex chemical systems, especially when purely mechanistic models are insufficient. These models combine mechanistic knowledge, such as conservation laws, with data-driven methods like machine learning to enhance accuracy. However, their development often relies on experts to identify model deviations and apply the appropriate data-driven corrections. This study proposes a new methodology to automatically identify discrepancies between a mechanistic model and real-world data and to select the optimal data-driven function for deviation prediction using minimal data. The approach is designed for dynamic systems and does not require extensive knowledge of data-driven modeling. The identification process is formulated as a mixed-integer programming (MIP) optimization problem, which simultaneously identifies the mechanistic model component that requires adjustment and the best statistical function to describe the deviation. This non-linear optimization is challenging due to the dynamic nature of the system, which creates complex relationships between the parameters and the prediction error. The optimization is solved using a mixed-integer differential evolution (DE) algorithm, with the Bayesian information criterion (BIC) as the loss function to balance model accuracy and complexity.

Several case studies are used to validate the methodology, including chemical reactions, biochemical reactions, and the Lotka-Volterra oscillator. In the following, the results for one example, an equilibrium reaction, are reported. Consider the reaction

A⇋R⇋S,

where each step is first-order in its reactant. The employed kinetic constants are k1D = 3·10⁻¹ min⁻¹, k1I = 1·10⁻¹ min⁻¹, k2D = 2·10⁻² min⁻¹, and k2I = 1·10⁻² min⁻¹, and the initial conditions are CA0 = 10 mol/L, CR0 = 0 mol/L, and CS0 = 0 mol/L. A deviation function was artificially introduced into the mass balance of the first component (r1D), using a multi-layer perceptron to generate deviations from the model. Data was divided into a training set (first 10 minutes) and a test set (next 10 minutes), and model performance was assessed using the coefficient of determination (R²), mean absolute error (MAE), and mean absolute percentage error (MAPE).

The model achieved an R² of 0.917, an MAE of 0.0518, and a MAPE of 0.842% on the test set. These metrics show that the methodology accurately identified both the correct mechanistic equation and its associated parameters. Furthermore, the algorithm correctly identified the deviation in the reaction rate for r1D. When noise (±10%) was added to the data, the algorithm still identified the correct equation and performed well on the training set, but prediction accuracy on the test set decreased, highlighting the methodology's sensitivity to noise. This suggests that noise-reduction techniques should be applied before using the proposed methodology.

In conclusion, this approach offers a novel automated solution for hybrid modeling, improving accuracy with minimal data. While it performs well in dynamic systems, future work will focus on enhancing its robustness in the presence of noise to extend its applicability in real-world scenarios.
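A simplified sketch of the selection idea, assuming synthetic data and hand-picked candidate correction structures (the discrete decision), is shown below: each candidate's parameters are fitted by differential evolution and the candidates are ranked by BIC.

```python
# Hedged sketch of the selection idea: for each candidate correction structure
# (the discrete decision), fit its parameters by differential evolution and
# rank candidates by BIC. Data and candidate functions are hypothetical.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y_obs = 10 * np.exp(-0.3 * t) + 0.5 * np.sin(t) + 0.05 * rng.normal(size=t.size)

candidates = {
    "exp_only":     lambda p, t: p[0] * np.exp(-p[1] * t),
    "exp_plus_sin": lambda p, t: p[0] * np.exp(-p[1] * t) + p[2] * np.sin(t),
    "exp_plus_lin": lambda p, t: p[0] * np.exp(-p[1] * t) + p[2] * t,
}
bounds = {"exp_only": [(0, 20), (0, 1)],
          "exp_plus_sin": [(0, 20), (0, 1), (-2, 2)],
          "exp_plus_lin": [(0, 20), (0, 1), (-2, 2)]}

def bic(sse, n, k):
    # BIC for Gaussian errors: n*ln(SSE/n) + k*ln(n); penalises extra parameters.
    return n * np.log(sse / n) + k * np.log(n)

results = {}
for name, f in candidates.items():
    sse = lambda p, f=f: np.sum((y_obs - f(p, t)) ** 2)
    res = differential_evolution(sse, bounds[name], seed=0)
    results[name] = bic(res.fun, t.size, len(bounds[name]))

print(min(results, key=results.get), results)
```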



11:10am - 11:30am

Hybrid machine-learning for dynamic plant-wide biomanufacturing

Shabnam Shahhoseyni1, Arijit Chakraborty2, Mohammad Reza Boskabadi1, Venkat Venkatasubramanian2, Seyed Soheil Mansouri1

1Department of Chemical and Biochemical Engineering, Technical University of Denmark, DK-2800 Kgs Lyngby, Denmark; 2Department of Chemical Engineering, Columbia University, New York, NY 10027, United States of America

Data-driven modeling has shown great promise in capturing the behavior of complex systems (Venkatasubramanian, 2009). Bioprocesses, as a prime example of such systems, are ideal candidates for data-driven models, which help bridge gaps in both theoretical knowledge and practical modeling. A major challenge in AI and machine learning (ML) modeling for biomanufacturing lies in the lack of high-quality data and the complexity of the processes involved. To overcome this, incorporating domain expertise into the modeling framework is crucial. By combining numeric AI (machine learning) with symbolic AI, a hybrid AI approach can be developed, resulting in robust, interpretable models suitable for complex systems (Chakraborty, Serneels, Claussen, & Venkatasubramanian, 2022).

In this work, we develop a hybrid model, combining a first-principles model and a data-driven approach, for Lovastatin biomanufacturing as the target process study. Using a model discovery engine, we aim to create a detailed, explainable model of the manufacturing process that accounts for its various complexities. Our approach starts with generating a specialized timeseries dataset from the KT-Biologics I (KTB1) model of the production unit (Boskabadi, Ramin, Kager, Sin, & Mansouri, 2024). KTB1 presents a dynamic mechanistic simulation model of continuous biomanufacturing, encompassing the entire plant. The upstream section includes a continuous stirred-tank reactor (CSTR) and a hydrocyclone, while the downstream section incorporates centrifugation and nanofiltration. The dataset includes concentrations of various nutrients (such as lactose and adenine), biomass levels in different streams, and the Lovastatin API produced by the plant. We then apply the AI-DARWIN framework (Chakraborty, Sivaram, & Venkatasubramanian, 2021) to build explainable ML models. This framework allows us to limit the types of functions used during model discovery, ensuring that the resulting models are both accurate and easy to interpret. The models are presented in polynomial form, making it clear how each factor influences the overall system output. The primary objective is to develop a hybrid AI model to predict the API production in the plant under varying nutrient conditions.

References

Chakraborty, A., Sivaram, A., & Venkatasubramanian, V. (2021). AI-DARWIN: A first principles-based model discovery engine using machine learning. Computers and Chemical Engineering, 154, 107470.

Boskabadi, M., Ramin, P., Kager, J., Sin, G., & Mansouri, S. S. (2024). KT-Biologics I (KTB1): A Dynamic Simulation Model for Continuous Biologics Manufacturing. Computers and Chemical Engineering, 108770.

Chakraborty, A., Serneels, S., Claussen, H., & Venkatasubramanian, V. (2022). Hybrid AI Models in Chemical Engineering–A Purpose-driven Perspective. Computer Aided Chemical Engineering. 51, pp. 1507-1512. Elsevier.

Venkatasubramanian, V. (2009). Drowning in data: informatics and modeling challenges in a data‐rich networked world. AIChE Journal, 55(1), 2-8.



11:30am - 11:50am

Physics-Informed Automated Discovery of Kinetic Models

Miguel Ángel de Carvalho Servia1, Ilya Orson Sandoval1, Klaus Hellgardt1, King Kuok {Mimi} Hii2, Dongda Zhang3, Ehecatl Antonio del Rio Chanona1

1Department of Chemical Engineering, Imperial College London, South Kensington, London, SW7 2AZ, United Kingdom; 2Department of Chemistry, Imperial College London, White City, London, W12 0BZ, United Kingdom.; 3Department of Chemical Engineering, The University of Manchester, Oxford Road, Manchester, M13 9P, United Kingdom

The industrialization of catalytic processes requires reliable kinetic models for their design, optimization, and control. Despite being the most sought-after due to their interpretability, white box models are difficult to construct, requiring extensive time and expert knowledge. To alleviate this, automated knowledge discovery techniques, such as SINDy, have gained popularity in dynamic modelling [1]. This study aims to advance previous frameworks, ADoK-S and ADoK-W, by incorporating prior expert knowledge through mathematical constraints and integrating uncertainty quantification techniques, addressing previously identified shortcomings [2].

The research utilizes improved versions of the ADoK-S and ADoK-W frameworks [2], comprising four main steps: (I) a genetic programming algorithm with encoded constraints that foster the generation of physically reasonable candidate models, (II) a sequential optimization algorithm for parameter estimation of promising models, (III) a model selection process using the Akaike Information Criterion (AIC), and (IV) quantification of the uncertainty of the selected model's output. The revised methodology ensures the proposal of physically coherent models and showcases the uncertainty in the final model's output.

The refined methodology successfully embeds prior knowledge, facilitating the discovery of kinetic models with less data than their original counterparts and guaranteeing physically sound proposals. Furthermore, uncertainty quantification enhances the reliability of predictions, aiding in the identification of sensitive parameters and promoting safer and more efficient system development, vital for decision-making and risk management. These enhancements not only allow for a physics-informed reduction in the search space but also improve data efficiency and model reliability, all critical to making automated knowledge discovery methods a serious competitor to classical kinetic modelling approaches.
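To give a flavour of steps (II)-(III), the sketch below estimates parameters for two hand-written candidate rate laws against synthetic concentration data and compares them by AIC; the genetic-programming generation of candidates (step I) and the uncertainty quantification (step IV) are not shown.

```python
# Hedged sketch of parameter estimation and AIC-based selection between two
# candidate rate laws, on synthetic concentration data (not the paper's data).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_obs = np.linspace(0, 10, 15)
# Synthetic data generated from a second-order decay with a little noise.
c_obs = 2.0 / (1 + 2.0 * 0.4 * t_obs) + 0.02 * np.random.default_rng(0).normal(size=t_obs.size)

def simulate(rate, k, c0=2.0):
    sol = solve_ivp(lambda t, c: -rate(k, c), (0, 10), [c0], t_eval=t_obs)
    return sol.y[0]

candidates = {
    "first_order":  lambda k, c: k[0] * c,
    "second_order": lambda k, c: k[0] * c**2,
}

def aic(sse, n, n_params):
    # AIC for Gaussian errors: n*ln(SSE/n) + 2*k.
    return n * np.log(sse / n) + 2 * n_params

scores = {}
for name, rate in candidates.items():
    sse = lambda k, rate=rate: np.sum((c_obs - simulate(rate, k)) ** 2)
    fit = minimize(sse, x0=[0.1], bounds=[(1e-6, 10)])
    scores[name] = aic(fit.fun, t_obs.size, 1)

print(min(scores, key=scores.get), scores)   # the second-order law should win here
```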

References

[1] S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. U.S.A., 113(15):3932–3937, March 2016. doi:10.1073/pnas.1517384113.

[2] Miguel Ángel de Carvalho Servia, Ilya Orson Sandoval, King Kuok (Mimi) Hii, Klaus Hellgardt, Dongda Zhang, and Ehecatl Antonio del Rio Chanona. The automated discovery of kinetic rate models – methodological frameworks. Digit Discov, 3(5):954–968, 2024. ISSN 2635-098X. doi:10.1039/d3dd00212h.

 
10:30am - 12:30pmT9: PSE4Food and Biochemical - Session 5 - Including keynote
Location: Zone 3 - Room E030
Chair: Efstathia Tsakali
Co-chair: Satyajeet Bhonsale
 
10:30am - 11:10am

Keynote: Real-time dynamic optimisation for sustainable biogas production through anaerobic co-digestion with hybrid models

Mohammadamin Zarei1, Oliver Pennington2, Meshkat Dolat1, Rohit Murali1, Mengjia Zhu2, Dongda Zhang2, Michael Short1

1University of Surrey, United Kingdom; 2University of Manchester, United Kingdom

Renewable energy and energy efficiency are crucial for creating new economic opportunities and reducing environmental impacts. Anaerobic digestion (AD) transforms organic materials into a clean, renewable energy source and is recognised as an important part of the UK's net-zero strategy. Co-digestion of various organic wastes and energy crops addresses the disadvantages of single-substrate digestion, increasing production flexibility and enhancing gas yields, but adding complexity and sensitivity to the process. This study employs a model-predictive control (MPC) strategy to optimize biogas production while simultaneously considering global warming potential, finding optimal feeding schedules that meet dynamic gas demands via a nonlinear programming (NLP) model for dynamic optimization of the overall system.

The NLP model incorporates a combined heat and power system to maximize production flexibility and capitalize on dynamic electricity, heat, and gas prices and considers various physical and economic parameters, including biomethane potential, chemical oxygen demand, and substrate density. A cardinal temperature and pH model is utilized to account for substrate degradation and gas production rates under varying conditions. The MPC strategy, implemented using the GEKKO optimization tool, provides fine-grained control over the digester temperature and feeding, accounting for real-world complexities such as time delays in heating/cooling systems, varying ambient conditions, and multiple feed components with different temperatures. We incorporate a hybrid model trained on real plant and experimental data to simulate system responses across varying realistic operating ranges.

Results demonstrate that the integrated model can simultaneously optimize the interaction between biogas generation and CHP operation for real-time profit maximization, while considering system environmental impact. The model provides detailed analyses of substrate utilization, total production volumes, and methane and carbon dioxide production, while offering insights into the dynamic behavior of the digester temperature control system. A case study validates the model's potential for guiding decision-making in biogas production facilities, emphasizing the necessity of strategic feedstock management and precise temperature control for optimizing biogas yield and CHP operations. This integrated approach represents a significant advancement in the modeling and control of anaerobic co-digestion systems, offering a powerful tool for enhancing the efficiency and profitability of biogas production facilities.
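A minimal GEKKO sketch of the digester temperature-control layer, with first-order heating dynamics and hypothetical parameters (the full model additionally handles feeding schedules, gas production, and CHP economics), could look as follows.

```python
# Minimal digester-temperature MPC sketch in GEKKO, in the spirit of the
# temperature-control layer described above. First-order heating dynamics and
# hypothetical parameters only; not the full co-digestion/CHP model.
import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
m.time = np.linspace(0, 24, 25)          # 24 h horizon, hourly steps

T_amb, tau, K = 10.0, 6.0, 4.0           # ambient temp [C], time constant [h], heater gain

Q = m.MV(value=0, lb=0, ub=5)            # heating input (manipulated variable)
Q.STATUS, Q.DCOST = 1, 0.1               # allow moves, penalise aggressive heating

T = m.CV(value=25.0)                     # digester temperature (controlled variable)
T.STATUS, T.SP = 1, 37.0                 # track a mesophilic setpoint

m.Equation(tau * T.dt() == -(T - T_amb) + K * Q)

m.options.IMODE = 6                      # dynamic control (MPC) mode
m.options.CV_TYPE = 2                    # squared tracking error
m.solve(disp=False)
print("heating profile:", Q.value)
```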



11:10am - 11:30am

Optimization-based operational space design for effective bioprocess performance under uncertainty

Mengjia Zhu, Oliver Pennington, Sam Kay, Dongda Zhang

Department of Chemical Engineering, University of Manchester, Manchester, M13 9PL, UK

In bioprocess operations, maintaining consistent product quality and yield is critical, particularly given the inherent uncertainties in biological systems. To achieve effective control over these processes, system identification is typically performed to develop mathematical models that describe the dynamic behavior of the bioprocess. However, uncertainties in model parameters often persist due to biological variability, limitations in measurement accuracy, and simplifications made during model development. These uncertainties, which may span a range or follow a distribution, can lead to deviations in process performance if models rely solely on nominal parameter values. Therefore, it is crucial to develop control strategies that ensure key performance indicators (KPIs) (e.g., final product concentration and yield) are consistently met despite these uncertainties.

Real-time feedback control, commonly employed in industrial applications to manage such uncertainties, can be costly and impractical. This is due to the need for high-speed data processing, robust sensors, and rapid control actions, which can strain existing systems. To address these challenges, this paper proposes an approach that eliminates the dependence on real-time control while accounting for model uncertainties. Specifically, we aim to identify the largest possible operational space for the control variables that serves as a guideline for process operations. If the system operates within this defined space, the KPIs can be reliably achieved, regardless of the uncertainties in the system.

We reformulate the problem as an optimization task aimed at maximizing the operational space of relevant control variables, subject to path and terminal constraints imposed by process dynamics and performance specifications. We integrate symbolic frameworks using CasADi, a software tool for numerical optimization, and solve the formulated optimization problem using IPOPT, an interior-point optimizer. In addition, a novel stage-wise optimization procedure is implemented to effectively reduce the computational burden. Unlike conventional surrogate-based methods, our method avoids surrogate models and thus preserves accuracy and fidelity to the original process dynamics, can easily incorporate path constraints, and still identifies the operational space efficiently.

The proposed method is validated through a case study on astaxanthin production, which involves a time-varying control variable (feed flowrate) and seven uncertain model parameters. Our approach successfully identifies an operational space for the control variable, ensuring that final product specifications are consistently met across a wide range of parameter variations. The operational space is further validated through extensive testing, demonstrating the effectiveness of the proposed control strategy.
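A hedged toy version of the operational-space formulation is sketched below in CasADi: the widest admissible interval for a single (static) control input is sought such that a terminal-product constraint holds at every sampled parameter scenario; the one-line "model" is a placeholder for integrating the real process dynamics.

```python
# Toy operational-space sketch: choose the widest admissible interval [lb, ub]
# for a (static) feed rate such that a terminal-product constraint holds at
# every sampled uncertain-parameter scenario. The one-line "model" is a
# placeholder; values are hypothetical. Requires CasADi and IPOPT.
import casadi as ca

thetas = [0.8, 0.9, 1.0, 1.1, 1.2]    # sampled yield-parameter scenarios
spec_lo, spec_hi = 1.5, 3.0           # product specification window

opti = ca.Opti()
lb = opti.variable()
ub = opti.variable()
opti.subject_to(ub >= lb)
opti.subject_to(opti.bounded(0.0, lb, 5.0))
opti.subject_to(opti.bounded(0.0, ub, 5.0))

def terminal_product(u, theta):
    return theta * u                  # placeholder for integrating the real dynamic model

# The toy model is monotone in u, so enforcing the constraints at the interval
# end points for every scenario guarantees feasibility of the whole interval.
for th in thetas:
    for u in (lb, ub):
        opti.subject_to(terminal_product(u, th) >= spec_lo)
        opti.subject_to(terminal_product(u, th) <= spec_hi)

opti.minimize(-(ub - lb))             # maximise the operational-space width
opti.solver("ipopt")
sol = opti.solve()
print("operational space:", sol.value(lb), "to", sol.value(ub))
```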



11:30am - 11:50am

A MILP model to identify optimal strategies to convert soybean straw into value-added products

Ivaldir José Tamagno Junior1, Bruno F. Santoro2, Omar Guerra3, Moisés Teles dos Santos1

1University of São Paulo, Brazil; 2OP2B - Optimization to Business Ltda, Brazil; 3National Renewable Energy Laboratory, USA

Soybean is a highly valuable global commodity due to its versatility and numerous derivative products. During harvest, all non-seed materials become “straw”: for each ton of soybeans, 1.2 to 1.5 tons of straw are generated. Currently, this waste is primarily used for low-value purposes such as animal feed, landfilling, and incineration. To address this, the present work proposes a conceptual biorefinery aimed at converting soybean straw into higher-value products. The study began with data collection to identify potential conversion routes for this biomass. Based on this information, a superstructure was developed, comprising nine conversion processes: five thermochemical routes (pyrolysis, combustion, hydrothermal gasification, liquefaction, and deoxy-liquefaction), three biological routes (enzymatic hydrolysis, fermentation, and anaerobic fermentation), and one chemical route (alkaline extraction). Each process was evaluated based on product yields, conversion times, and associated costs from the literature. Using this data, a MILP (Mixed-Integer Linear Programming) optimization model was built in Pyomo with the CPLEX solver. The model comprises 23 possible products (biochar, bio-oil, syngas, ethanol, acetic acid, formic acid, hydroxymethylfurfural, furfural, biopolyols, fiber, methane, biohydrogen, propionic acid, iso-butyric acid, n-butyric acid, iso-valeric acid, n-valeric acid, biogas, xylose, glucose, energy, methanol and dimethyl ether). The decision variables are the amounts of biomass processed by each conversion process, and the objective function maximizes the profit of the resulting product mix. The optimization considered a maximum raw material supply of 0.625 tons per year. As a result, fermentation was identified as the most profitable route, yielding $2.45 million in annual revenue. This route utilizes diluted pre-treated biomass and glucose as a supplement to produce five main products: ethanol (5.57 g L-1), acetic acid (1.74 g L-1), formic acid (1.03 g L-1), furfural (0.02 g L-1), and hydroxymethylfurfural (0.01 g L-1). In conclusion, soybean straw offers significant potential for value-added biorefinery applications, with fermentation emerging as the most profitable conversion route. Future research will focus on optimizing other processes and exploring additional soybean biomasses while scaling up biorefinery technologies. This could foster sustainable industrial development to valorize wastes generated in the agro-industrial sector.
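
A minimal Pyomo sketch of this kind of superstructure allocation MILP, with placeholder route names and profits (only the 0.625 t/y supply figure is taken from the abstract), might look like:

    # Illustrative superstructure allocation MILP: route the available straw to the
    # most profitable conversion processes. Yields and prices are placeholders.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, Binary, maximize, SolverFactory)

    processes = ["fermentation", "pyrolysis", "alkaline_extraction"]
    profit_per_ton = {"fermentation": 3.9, "pyrolysis": 2.1, "alkaline_extraction": 1.5}

    m = ConcreteModel()
    m.x = Var(processes, domain=NonNegativeReals)   # tons of straw sent to each route
    m.y = Var(processes, domain=Binary)             # route selected or not

    m.supply = Constraint(expr=sum(m.x[p] for p in processes) <= 0.625)
    m.link = Constraint(processes, rule=lambda m, p: m.x[p] <= 0.625 * m.y[p])
    m.obj = Objective(expr=sum(profit_per_ton[p] * m.x[p] for p in processes),
                      sense=maximize)

    # Any MILP solver works here; the study used CPLEX.
    SolverFactory("cplex").solve(m)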



11:50am - 12:10pm

Fed-batch bioprocess prediction and dynamic optimization from hybrid modelling and transfer learning

Oliver Pennington1, Youping Xie2, Keju Jing3, Dongda Zhang1

1University of Manchester, United Kingdom; 2Fuzhou University, China; 3Xiamen University, China

Bioprocesses are seeing increased use for the production of renewable plastics, fuels, and other valuable bioproducts. Supporting bioprocess development during the ongoing fourth industrial revolution faces many challenges, including low yields in reactor scale-up, significant batch-to-batch variation, and by-product accumulation. Modelling plays an important role in overcoming these challenges through its application to process optimisation and control, as well as process monitoring for fault detection.

However, modelling can be challenging because biosystems are dynamic, with complicated catalytic and inhibitory relationships. Their continued exploration has uncovered a deeper understanding of fundamental bioprocess mechanisms, allowing the development of kinetic models that rely on physical assumptions to explain the dynamic behaviour of the system. In many cases, though, these assumptions only hold true within a narrow operational space, and the biosystem deviates from the assumed behaviour in a manner that cannot be explained by the existing physical assumptions and derivations. In such cases, data-driven modelling may be used to capture complicated nonlinear trends that a physical model cannot describe, but purely data-driven modelling requires extensive data collection to identify the nonlinear model structure. To overcome the limitations of physical assumptions and reduce the data requirement for model training, hybrid modelling can be employed. This uses fundamental physical understanding as the foundation upon which a data-driven model is applied to capture the remaining, simplified nonlinearities, thus improving upon the accuracy of a purely kinetic model while reducing the data requirement of a purely data-driven model.

In this study, an Artificial Neural Network (ANN) is employed as the data-driven component to improve the fundamental kinetic model accuracy. This is done by changing the most uncertain kinetic parameters from constants to time-varying outputs of the ANN. The ANN inputs are state variables to ensure the overall dynamic model is a function of state variables only, thus extending its application to the real-time modelling and prediction of fed-batch operation. Depending on the type of measurements, different ANNs are constructed using either offline data or online data to calculate time-varying parameters. To test the efficiency and accuracy of the modelling approach, a case study involving the production of lutein is used. Lutein is a valuable xanthophyll carotenoid utilized in several industries, including food, cosmetics, and pharmaceuticals. The microalga Chlorella sorokiniana has been previously identified for its potentially high lutein and microalgal biomass production, with a recent study exploring its growth and lutein production (Xie et al., 2022). This study also explores and compares the utilization of offline and online data to maximize the predictive capabilities of the model, while reliably estimating the hybrid model uncertainty. The novelty of this work lies in the application of hybrid modelling to the production of lutein from high-cell-density C. sorokiniana using offline and online data, with the aim of conducting optimization under uncertainty for future fed-batch system design.
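
A structural sketch of such a hybrid model, with an untrained toy network and hypothetical states and kinetics standing in for the lutein system, could look as follows:

    # Illustrative hybrid-model structure: an ANN maps the current states to an
    # uncertain kinetic parameter, which is used inside a simple kinetic ODE.
    # The network weights, states and kinetics are placeholders, not the real model.
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(5, 2)), np.zeros(5)   # tiny 2-input MLP (untrained)
    W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)

    def mu_max(states):
        """ANN-predicted, state-dependent growth parameter (placeholder network)."""
        h = np.tanh(W1 @ states + b1)
        return np.exp((W2 @ h + b2).item())         # keep the parameter positive

    def hybrid_rhs(t, y):
        X, S = y                                    # biomass and substrate (hypothetical)
        mu = mu_max(np.array([X, S])) * S / (S + 0.5)
        return [mu * X, -2.0 * mu * X]

    sol = solve_ivp(hybrid_rhs, (0.0, 10.0), [0.1, 10.0])
    print(sol.y[:, -1])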

References:

Y. Xie et al, 2022, High-cell-density heterotrophic cultivation of microalga Chlorella sorokiniana FZU60 for achieving ultra-high lutein production efficiency, Bioresource Technology, Volume 365, https://doi.org/10.1016/j.biortech.2022.128130



12:10pm - 12:30pm

Individual-based modelling (IbM) reveals emergent stability in microbial competitive interactions

Jian Wang, Ihab Hashem, Satyajeet Bhonsale, Jan Van Impe

KU Leuven, Belgium

Understanding the factors that determine the stability of microbial communities remains a central question in microbial ecology. While ecological principles are increasingly applied to microbiology to study microbial dynamics, the modelling approaches are limited to two extremes: generalized population models that lack spatial detail and genome-scale models that are too detailed for community-level analyses. We advocate for using individual-based modelling (IbM) to overcome these limitations. By explicitly modelling how microbes release toxins to modify their immediate environment, IbM provides a realistic framework to elucidate the principles governing microbial interactions. In our study, we employ IbM to investigate how different interaction types—self-inhibition, amensalism, and rock-paper-scissors (RPS) dynamics—affect community stability in response to varying interaction strengths and growth rates. By comparing the results from IbM with those from stochastic ODE and PDE models, we reveal that simple community-level features can dictate emergent behaviours. Specifically, we find that the type of interaction influences community stability more strongly than the underlying biological processes alone. By capturing these aspects, IbM offers deeper insights into the emergent stability of microbial communities, advancing our understanding of ecological dynamics and potentially informing the management of microbial systems.

 
11:00am - 12:00pmBrewery visit
Location: On-campus brewery
11:30am - 12:30pmT10: PSE4BioMedical and (Bio)Pharma - Session 3
Location: Zone 3 - Room D049
Co-chair: Noor Al-Rifai
 
11:30am - 11:50am

Combining monoculture models of B. subtilis and E. coli in the presence of ampicillin enables prediction of unexpected coculture behaviour

Simen Akkermans1, Meike Wortel2, Ruben Claus1, Stanley Brul2, Jan Van Impe1

1BioTeC+, Chemical and Biochemical Process Technology and Control, KU Leuven Campus Gent, Gent, Belgium; 2Microbiology Theme, MBMFS, Swammerdam Institute for Life Sciences, UvA, Amsterdam, The Netherlands

Bacterial pathogens often reside in environments with rich microbial ecosystems. Therefore, antibiotic treatments of pathogens are influenced by the environmental microbiome. Recent research has illustrated that Bacillus subtilis PY79 and Escherichia coli K12 were respectively susceptible and tolerant to ampicillin in coculture, whereas they exhibited the opposite behaviour in monocultures. This research investigated whether the difference in antibiotic responses between monocultures and cocultures results from direct interactions between these two bacteria or indirectly through their interactions with the antibiotic. The research hypothesis was that coculture kinetics arise from the simple combination of the monoculture kinetics based on the interactions between each species and the antibiotic.

To validate this hypothesis, a model-based approach was followed. First, two population-level models were constructed to describe the microbial kinetics of either B. subtilis or E. coli in monoculture in the presence of ampicillin. Specifically, these models consist of a set of coupled differential equations that describe (i) microbial growth inhibition at low antibiotic concentrations, (ii) microbial inactivation kinetics at high antibiotic concentrations, and (iii) the degradation of the antibiotic due to bacterially produced beta-lactamase enzymes. The model parameters were estimated from a dataset of 36 shake flask experiments that contained measurements of the evolution of the viable cell densities and the concentration of ampicillin. Then, the obtained monoculture models were combined into a coculture model without adding any direct interactions between the bacterial species. This model was validated against another experimental dataset of 17 shake flask experiments on the coculture of B. subtilis and E. coli in the presence of ampicillin.
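
A toy sketch of this model structure (placeholder rate laws and parameter values, not the fitted models) might be:

    # Illustrative structure: growth inhibition at low antibiotic levels, inactivation
    # at high levels, and beta-lactamase-mediated degradation of the antibiotic.
    import numpy as np
    from scipy.integrate import solve_ivp

    def monoculture_rhs(t, y, mu, k_kill, mic, k_deg):
        N, A = y                                   # cell density and ampicillin conc.
        growth = mu * N * max(0.0, 1.0 - A / mic)  # growth inhibition by the antibiotic
        kill = k_kill * N * max(0.0, A - mic)      # inactivation at high antibiotic levels
        dA = -k_deg * N * A                        # enzymatic degradation of ampicillin
        return [growth - kill, dA]

    # Coculture model: simply combine the two monoculture models; both species share
    # (and jointly degrade) the same antibiotic pool, with no direct interaction terms.
    def coculture_rhs(t, y, p_bs, p_ec):
        Nb, Ne, A = y
        dNb, dA_b = monoculture_rhs(t, [Nb, A], *p_bs)
        dNe, dA_e = monoculture_rhs(t, [Ne, A], *p_ec)
        return [dNb, dNe, dA_b + dA_e]

    p_bs = (0.8, 0.5, 0.05, 1e-6)   # placeholder B. subtilis parameters
    p_ec = (0.6, 0.3, 0.50, 1e-8)   # placeholder E. coli parameters
    sol = solve_ivp(coculture_rhs, (0, 24), [1e6, 1e6, 0.3], args=(p_bs, p_ec))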

The obtained monoculture models fitted the experimental data accurately, and the parameters had narrow confidence bounds, indicating high identifiability of the selected model structures. These monoculture models captured specific differences between the behaviour of the two strains. B. subtilis was found to degrade the environmental ampicillin quickly but required a low environmental ampicillin level for growth. E. coli, on the other hand, had limited capability to degrade ampicillin but was able to grow at higher ampicillin levels. The coculture model was constructed by simply combining the monoculture models with their estimated model parameters. This coculture model was found to have good accuracy in predicting the experimentally measured coculture behaviour, as evaluated based on the root mean square error. Therefore, this model proved that the coculture behaviour of B. subtilis and E. coli arises as a simple combination of the monoculture behaviour, without direct interactions between the strains. Moreover, the models demonstrate that the difference in the observed antibiotic susceptibility between monocultures and cocultures is due to the cooperator-cheater dynamics that arise in cocultures because E. coli takes advantage of B. subtilis’ ability to degrade ampicillin.

This research used a data-driven semi-mechanistic population-level modelling approach to study the differences in antibiotic susceptibility observed between strains occurring in pure and mixed species systems. The results highlighted how model-based studies of these microbial dynamics help to understand the interactions that occur in antibiotic resistance between bacteria.



11:50am - 12:10pm

A hybrid model for determining design spaces in freezing processes of human induced pluripotent stem cell-derived spheroids

Yusuke Hayashi1, Masaharu Fujioka1, Yuta Yamaguchi2, Tetsuya Fujii2, Hirokazu Sugiyama1

1Department of Chemical System Engineering, The University of Tokyo, Tokyo, Japan; 2Technology Research & Development Division, Sumitomo Pharma Co., Ltd., Osaka, Japan

Human induced pluripotent stem (hiPS) cells are considered among the most promising cell sources in the field of regenerative medicine due to their various advantages over conventional sources. Along with recent successful clinical studies, e.g., for dilated cardiomyopathy and Parkinson’s disease, the realization of regenerative medicine using hiPS cells is becoming possible.

In hiPS cell manufacturing, the freezing process is one of the most important steps because it is necessary for the transportation and preservation of hiPS cell products. Generally, the products currently used in clinical applications are obtained via spheroids, which are spherical cell aggregates. However, it is difficult to freeze hiPS cell-derived spheroids while maintaining high quality with current freezing technology, which is a major obstacle in product manufacturing [1].

In the field of process systems engineering, model-based approaches have been applied to bio-related processes, e.g., design and evaluation of therapies to modulate neutrophil dynamics [2] and sensitivity analysis of perfusion bioreactors [3]. Some contributions involve stem cell manufacturing processes, e.g., model-based assessment of temperature profiles in slow freezing for human induced pluripotent stem cells [4] and design space determination of mesenchymal stem cell cultivation processes [5]. However, model application for the design of spheroid manufacturing processes is still in its infancy.

This work presents the development of a hybrid model for determining design spaces in freezing processes of hiPS cell-derived spheroids. We first developed a mechanistic model to describe the structure of spheroids and the phenomena involved in freezing. The mechanistic model was then extended to cover the cell survival rate through statistical modeling. Freezing experiments using hiPS cell-derived spheroids were performed to estimate the parameter values necessary for this extension. Given the spheroid radius, the concentration of cryoprotective agent, and the immersion time of the cryoprotective agent into the spheroid, the developed hybrid model can calculate the cell survival rate and the number of living cells in the spheroid after thawing, which are the quality and productivity indicators.

The application of the hybrid model was demonstrated in a case study. As a result, a feasible parameter range for the freezing process was obtained, given a set of constraints such as the cell survival rate and the number of living cells. The result would be useful for the freezing process design of hiPS cell-derived spheroids. In ongoing work, we are investigating the cell toxicity derived from cryoprotective agents.

References
1. Bissoyi A., et al., ACS Appl. Mater. Interfaces., 15, 2630–2638 (2023).
2. Ho T., et al., Comput. Chem. Eng., 51, 187–196 (2013).
3. Nașcu I., et al., Comput. Chem. Eng., 163, 107829 (2022).
4. Hayashi Y., et al., Comput. Chem. Eng., 144, 107150 (2021).
5. Hirono K., et al., AIChE J., 70, e18452 (2024).

 
12:30pm - 2:30pmLunch
Location: Zone 2 - Cafetaria
1:30pm - 2:30pmMDPI Meeting
Location: L226
This year, ESCAPE35 will be co-hosting, together with IChemE’s CAPE SIG, a panel discussion under the theme “Career Options within the CAPE landscape”. The panellists will be a diverse group of well-respected CAPE individuals from industry and academia, across different career stages. The panel discussion will focus on career possibilities within industry and academia for individuals within the CAPE community and will be followed by a networking event. Confirmed panelists: Roberto Abbiati – Roche; Seyed Soheil Mansouri – DTU (possibly); Lauren Lee – UCL.
1:30pm - 2:30pmPoster Session 3
Location: Zone 2 - Cafetaria
 

pyDEXPI: A Python framework for piping and instrumentation diagrams using the DEXPI information model

Dominik P. Goldstein, Lukas Schulze Balhorn, Achmad Anggawirya Alimin, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Developing piping and instrumentation diagrams (P&IDs) is a fundamental task in process engineering. For designing complex installations, such as petroleum plants, multiple departments across several companies are involved in refining and updating these diagrams, creating significant challenges in data exchange between different software platforms from various vendors. The primary challenge in this context is interoperability, which refers to the seamless exchange and interpretation of information to collectively pursue shared objectives. To enhance the P&ID creation process, a unified, machine-readable data format for P&ID data is essential. A promising candidate is the Data Exchange in the Process Industry (DEXPI) standard. However, the absence of an open-source software implementation of DEXPI remains a major bottleneck, limiting the interoperability of P&ID data in practice. This lack of interoperability is further hindering the adoption of cutting-edge digital process engineering tools, such as automated data analysis and the integration of generative artificial intelligence (AI), which could significantly improve the efficiency and innovation of engineering design workflows.

We present pyDEXPI, an open-source implementation of the DEXPI format for P&IDs in Python. Currently, pyDEXPI encompasses three main parts. (1) At its core, pyDEXPI implements the classes of the DEXPI information model as Pydantic data classes. The pyDEXPI classes define the class relationships and the data attributes outlined in the DEXPI specification. (2) pyDEXPI provides several possibilities for importing and exporting P&ID data into the data class framework. This includes importing DEXPI data in its Proteus XML exchange format, saving and loading pyDEXPI models as a Python pickle file, and casting pyDEXPI into a graph format. (3) pyDEXPI offers toolkit functionalities to analyze and manipulate pyDEXPI P&IDs. For example, pyDEXPI tools can be used to search through P&IDs for data of interest and add, remove, or change data without violating DEXPI modeling conventions. With this functionality, pyDEXPI makes P&ID data more efficient to handle, more flexible, and more interoperable. We envision that, with further development, pyDEXPI will act as a central scientific computing library for the domain of digital process engineering, facilitating interoperability and the application of data analytics and generative AI on P&IDs.
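
The data-class pattern can be illustrated with a small Pydantic sketch; the class and attribute names below are simplified stand-ins for the idea, not the actual pyDEXPI or DEXPI classes:

    # Illustrative only: an information model expressed as Pydantic data classes, so
    # that class relationships and attributes are validated on construction.
    from typing import List, Optional
    from pydantic import BaseModel

    class Nozzle(BaseModel):
        id: str
        nominal_diameter_mm: Optional[float] = None

    class TaggedPlantItem(BaseModel):
        tag_name: str
        nozzles: List[Nozzle] = []

    class CentrifugalPump(TaggedPlantItem):
        design_pressure_bar: Optional[float] = None

    # Type validation keeps imported or edited P&ID data consistent with the model.
    pump = CentrifugalPump(tag_name="P-101",
                           nozzles=[Nozzle(id="N1", nominal_diameter_mm=50.0)],
                           design_pressure_bar=10.0)
    print(pump.model_dump())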

Key references:

M. Theißen et al., 2021. DEXPI P&ID specification. DEXPI Initiative, version 1.3

M. Toghraei, 2019. Piping and instrumentation diagram development, first edition. John Wiley & Sons, Inc., Hoboken, NJ, USA



A Bayesian optimization approach for data-driven Petlyuk distillation column design

Alexander Panales-Perez1, Antonio Flores-Tlacuahuac2, Fabian Fuentes-Cortés3, Miguel Angel Gutierrez-Limon4, Mauricio Sales-Cruz5

1Tecnológico Nacional de México, Instituto Tecnológico de Celaya, Departamento de Ingeniería Química Celaya, Guanajuato, México, 38010; 2Escuela de Ingeniería y Ciencias, Tecnológico de Monterrey, Campus Monterrey Ave. Eugenio Garza Sada 2501, Monterrey, N.L, 64849, México; 3Department of Energy Systems and Environment, IMT Atlantique, GEPEA rue Alfred Kastler, Nantes, 44000, France; 4Departamento de Energía, Universidad Autónoma Metropolitana-Azcapotzalco Av. San Pablo 180, C.P. 02200, Ciudad de México, México; 5Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa Av. Vasco de Quiroga 4871, C.P. 05348, Ciudad de México, México

Recently, the focus on increasing process efficiency to lower energy consumption has led to alternative systems, such as Petlyuk distillation columns. It has been proven that, when compared to conventional distillation columns, these systems offer significant energy and cost savings. Therefore, from an economic perspective, achieving high-purity products alone does not define the feasibility of a process; to balance the trade-off between product purity and cost, a multiobjective optimization is needed. Despite the effectiveness of common optimization methods, novel strategies such as Bayesian optimization, which do not require an explicit mathematical model, can handle complex systems. Even starting from just one initial point, Bayesian optimization can perform the optimization effectively. However, as a black-box method, it requires an analysis of the influence of its hyperparameters on the optimization process. This work therefore presents a Petlyuk column case study, including an analysis of hyperparameters such as the acquisition function and the number of initial points.
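
As an illustration of such a Bayesian optimization loop (the abstract does not state the implementation used; scikit-optimize and the placeholder objective below are assumptions, not the authors' code):

    # Illustrative Bayesian optimization of two column design variables; in practice
    # the objective would call a rigorous Petlyuk column simulation and return a
    # scalarized purity/cost trade-off.
    from skopt import gp_minimize
    from skopt.space import Integer, Real

    space = [Integer(20, 60, name="n_stages"),
             Real(1.0, 5.0, name="reflux_ratio")]

    def expensive_objective(x):
        n_stages, reflux = x
        return (n_stages - 42) ** 2 / 100.0 + (reflux - 2.5) ** 2   # smooth placeholder

    res = gp_minimize(expensive_objective, space,
                      n_calls=25,            # total number of (expensive) evaluations
                      n_initial_points=5,    # hyperparameter: size of the initial design
                      acq_func="EI",         # hyperparameter: acquisition function
                      random_state=0)
    print(res.x, res.fun)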



Enhancing Energy Efficiency of Industrial Brackish Water Reverse Osmosis Desalination Process using Waste Heat

Mudhar Al-Obaidi1, Alanood Alsarayreh2, Iqbal M Mujtaba3

1Middle Technical University, Iraq; 2Mu’tah University, Jordan; 3University of Bradford, United Kingdom

The Reverse Osmosis (RO) system has the potential to be a vibrant technology for producing high-quality water from brackish water sources. However, the progressive increase in water and electricity demands necessitates the development of a sustainable desalination technology. This can be achieved by reducing the specific energy consumption of the process, which will also reduce its environmental footprint. This study proposes the concept of reducing the overall energy consumption of the multistage, multi-pass RO system of the Arab Potash Company (APC) in Jordan by heating the feed brackish water. The utilisation of waste heat generated from different units of the APC production plant, such as steam condensate supplied to a heat exchanger, is a feasible technique for heating the brackish water entering the RO system. To systematically evaluate the effect of water temperature on the performance metrics, including specific energy consumption, a generic model of the RO system is developed. Model-based simulation is used to evaluate the influence of water temperature. The results indicate a clear reduction in specific energy consumption when operating at water temperatures close to the maximum recommended by the manufacturer. It has been noticed that an increase in water temperature from 25 ºC to 40 ºC can result in an overall energy saving of more than 10%.

References

  1. Alanood A. Alsarayreh, M.A. Al-Obaidi, A.M. Al-Hroub, R. Patel, and I.M. Mujtaba. Optimisation of energy consumption in a medium-scale reverse osmosis brackish water desalination plant. Proceedings of the 30th European Symposium on Computer Aided Chemical Engineering (ESCAPE30), May 24-27, 2020, Milano, Italy
  2. Alanood A Alsarayreh, Mudhar A Al-Obaidi, Shekhah K Farag, Raj Patel, Iqbal M Mujtaba, 2021. Performance evaluation of a medium-scale industrial reverse osmosis brackish water desalination plant with different brands of membranes. A simulation study. Desalination, 503, 114927.
  3. Alanood A. Alsarayreh, Mudhar A. Al-Obaidi, Saad S. Alrwashdeh, Raj Patel, Iqbal M. Mujtaba, 2022. Enhancement of energy saving of reverse osmosis system via incorporating a photovoltaic system. Editor(s): Ludovic Montastruc, Stephane Negny, Computer Aided Chemical Engineering, Elsevier, 51, 697-702.


Analysis of Control Properties as a Sustainability Indicator in Intensified Processes for Levulinic Acid Purification

Tadeo Velázquez-Sámano, Heriberto Alcocer-García, Eduardo Sánchez-Ramírez, Carlos Rodrigo Caceres-Barrera, Juan Gabriel Segovia-Hernández

Universidad de Guanajuato, México

Sustainability is one of the greatest challenges humanity has faced. Therefore, there is a special emphasis on improving or redesigning current chemical processes to ensure sustainability for future generations. The chemical industry has successfully implemented process redesign using process intensification. Through process intensification, significant savings in energy consumption, lower production costs, reductions in the size or number of equipment items, and reductions in environmental impacts can be achieved. However, one of the disadvantages associated with process intensification is the loss of manipulable variables, owing to the increased interactions that result from equipment integration, which can degrade the control properties. In other words, intensified processes can be more sensitive to disturbances in the system, which could become not only a product quality problem but even a safety problem. On the other hand, some studies have shown that intensified designs can have better control properties than their conventional counterparts. Therefore, it is important to incorporate the study of control properties into intensified schemes, since it is not known a priori whether intensification will improve or worsen them.

Taking this into account, this study performed an analysis of the control properties of recently proposed schemes for the purification of levulinic acid. Levulinic acid is considered one of the bioproducts from lignocellulosic biomass with the greatest market potential, so the evaluation of control aspects in these schemes is relevant for its possible industrial application. These alternatives include conventional hybrid systems that combine liquid-liquid extraction and distillation, and intensified schemes using thermal coupling and movement of sections. The studied schemes were obtained through a rigorous multi-objective optimization process taking the total annual cost as an economic criterion and the eco-indicator 99 as an environmental criterion. They were optimized using the differential evolution method with tabu list, which is a hybrid method that has proven to be efficient in complex nonlinear and nonconvex systems. The objective of this study is to identify the dynamic characteristics of the designs studied and to anticipate which could present control problems. Furthermore, each study of intensified distillation schemes contributes to generating guidelines that support the design stage of this type of system. To analyze the control of the systems, two types of analyses were conducted: closed-loop and open-loop. For the closed-loop analysis, the aim was to minimize the integral of absolute error by identifying the optimal tuning of the controller's gain and integral time. In the open-loop analysis, the condition number, the relative gain array, and the feed sensitivity index were examined. The results reveal that the design comprising a liquid-liquid extraction column, three distillation columns, and thermal coupling between the last two columns exhibits the best dynamic performance. This design demonstrates a lower total condition number, a sensitivity index below the average, a stable control structure, and low values for the integral of absolute error. Additionally, this design shows superior cost and environmental impact indicators, making it the best option among the proposed designs.
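
The open-loop measures mentioned above can be illustrated on a placeholder steady-state gain matrix (the values below are not taken from the levulinic acid designs):

    # Condition number (via SVD) and relative gain array for a toy 2x2 gain matrix.
    import numpy as np

    G = np.array([[1.2, -0.4],
                  [0.3,  0.9]])          # steady-state gains between inputs and outputs

    sigma = np.linalg.svd(G, compute_uv=False)
    cond = sigma[0] / sigma[-1]          # condition number

    rga = G * np.linalg.inv(G).T         # relative gain array (elementwise product)

    print(f"condition number = {cond:.2f}")
    print("RGA =\n", rga)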



Reactive Crystallization Modeling for Process Integration Simulation

Zachary Maxwell Hillman, Gintaras Reklaitis, Zoltan K Nagy

Purdue University, United States of America

Reactive crystallization (RC) is a chemical process in which the reaction yields a crystalline product. It is used in various industries such as pharmaceutical manufacturing or water purification (McDonald et al., 2021). In some cases, RC is the only feasible process pathway, such as the precipitation of certain ionic solids from solution. In other cases, a reaction can become a RC by changing the reaction environment to a solvent with low product solubility.

In either case, the process combines reaction with separation, intensifying the overall design. Process intensification leads to different advantages and disadvantages compared to traditional routes and therefore conducting an analysis prior to construction would be valuable (McDonald et al., 2021; Schembecker & Tlatlik, 2003).

Despite the utility and prevalence of RC, it has not been incorporated into any modern process design software, to our knowledge. There are RC models that simulate the inner reactions and dynamics of a RC (Tang et al., 2023; Salami et al., 2020), but each have limiting assumptions, and none have been integrated with the rest of a process line simulation. This modeling gap complicates RC process design and limits both the exploration of the possible benefits to using RC as well as the ability to optimize a system that relies on it.

To fill this gap, we built a generalized model that can be integrated with other unit operations in the Python process simulator package PharmaPy (Casas-Orozco et al., 2021). This model focuses on the reaction-crystallization interactions and dynamics to predict reaction yield and crystal critical quality attributes given inlet streams and reactor conditions. In this way, RC can be integrated with other unit operations to capture the effects RC has on the process overall.

The model and its assumptions are described in this work. The model space, limitations and capabilities are explored. Finally, the potential benefits of the RC system are shown using three example cases.

  1. Casas-Orozco, D., Laky, D., Wang, V., Abdi, M., Feng, X., Wood, E., Laird, C., Reklaitis, G. V., & Nagy, Z. K. (2021). PharmaPy: An object-oriented tool for the development of hybrid pharmaceutical flowsheets. Computers & Chemical Engineering, 153, 107408. https://doi.org/10.1016/j.compchemeng.2021.107408
  2. McDonald, M. A., Salami, H., Harris, P. R., Lagerman, C. E., Yang, X., Bommarius, A. S., Grover, M. A., & Rousseau, R. W. (2021). Reactive crystallization: A review. Reaction Chemistry & Engineering, 6(3), 364–400. https://doi.org/10.1039/D0RE00272K
  3. Salami, H., Lagerman, C. E., Harris, P. R., McDonald, M. A., Bommarius, A. S., Rousseau, R. W., & Grover, M. A. (2020). Model development for enzymatic reactive crystallization of β-lactam antibiotics: A reaction–diffusion-crystallization approach. Reaction Chemistry & Engineering, 5(11), 2064–2080. https://doi.org/10.1039/D0RE00276C
  4. Schembecker, G., & Tlatlik, S. (2003). Process synthesis for reactive separations. Chemical Engineering and Processing: Process Intensification, 42(3), 179–189. https://doi.org/10.1016/S0255-2701(02)00087-9
  5. Tang, H. Y., Rigopoulos, S., & Papadakis, G. (2023). On the effect of turbulent fluctuations on precipitation: A direct numerical simulation – population balance study. Chemical Engineering Science, 270, 118511. https://doi.org/10.1016/j.ces.2023.118511


A Machine Learning approach for subvisible particles classification in biotherapeutic formulations

Louis Joos, Anouk Brizzi, Eva-Maria Herold, Erica Ferrari, Cornelia Ziegler

Sanofi, France

Processing steps on biotherapeutics can cause the appearance of Subvisible Particles (SvPs), which are considered a critical quality attribute (CQA) by pharmaceutical regulatory agencies [2,3]. SvPs are usually classified into Inherent Particles (protein particles), Intrinsic Particles (silicone oil droplets, glass, cellulose, etc.) and Extrinsic Particles (e.g. clothing fibers). Discrimination between proteinaceous and other particles (generally ranging in size from 2 to 100 µm) is key in assessing product stability and potential risk factors such as immunogenicity or negative effects on the quality and efficacy of the drug product [1].

According to USP <788> [4], the preferred method for determination of SvPs is light obscuration (LO). However, LO is not able to distinguish between particles of different compositions. In contrast, Flow Imaging Microscopy (FIM) has demonstrated high sensitivity in detecting and imaging SvPs [5].

In this study, we develop a novel experimental and modeling workflow based on binary supervised classification, which allows a simple and robust classification of silicone oil (SO) droplets and non-silicone oil (NSO) particles. First, we generate experimental data from different therapeutic proteins exposed to various stresses and some samples mixed with relevant impurities. Data acquisition is performed with IPAC-2 (Occhio), MFI (Protein Simple), and Flowcam (Yokogawa Fluid Imaging Technologies) microscopes, which are able to extract different morphological (e.g. circularity, aspect ratio) and intensity-based (e.g. average, standard deviation) features from particle images.

Second, we train tree-based models, particularly Random Forests, on tabular data extracted from the microscopes across different projects and manually labelled by expert scientists. We obtain 97% global accuracy, compared with 85% for the previously used baseline filters, even for particles in the 2-5 µm range, which are usually the hardest to classify.
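
A minimal sketch of this tree-based classification step, using synthetic data and hypothetical feature names in place of the labelled microscope data, might be:

    # Illustrative only: a Random Forest on tabular particle features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical features: circularity, aspect ratio, mean intensity
    X = rng.uniform(size=(n, 3))
    y = (X[:, 0] > 0.6).astype(int)        # 1 = silicone oil droplet, 0 = other (toy rule)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))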

Finally, we extend these models to multi-class problems with new types of particles (glass and cellulose) with good accuracy (93%), suggesting that this methodology is well suited to classifying many different particle types efficiently. Future perspectives include the exploration of new particle classes (air bubbles, protein aggregates, etc.) and a complementary Deep Learning multilabel approach to classify particles by direct image analysis when multiple particles overlap in the same image.

References

[1] Sharma, D. K., & King, D. (2012). Flow imaging microscopy for the characterization of protein particles. Journal of Pharmaceutical Sciences, 101(10), 4046-4059.

[2] International Conference on Harmonisation (ICH). (1999). Q6B: Specifications: Test Procedures and Acceptance Criteria for Biotechnological/Biological Products.

[3] International Conference on Harmonisation (ICH). (2004). Q5E: Comparability of Biotechnological/Biological Products Subject to Changes in Their Manufacturing Process.

[4] United States Pharmacopeia (USP). (2017). <788> Particulate Matter in Injections.

[5] Zölls, S., Weinbuch, D., Wiggenhorn, M., Winter, G., Jiskoot, W., Friess, W., & Hawe, A. (2013). Flow imaging microscopy for protein particle analysis—a comparative evaluation of four different analytical instruments. AAPS Journal, 15(4), 1200-1211.



Evaluation of the controllability of distillation with multiple reactive stages

Josué Julián Herrera Velazquez1,3, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2, Salvador Hernández1

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

Different energy alternatives to fossil fuels have been proposed to reduce the greenhouse gas emissions that have contributed to today's deteriorated climate conditions. Despite the collective effort invested in developing these technologies, it remains a challenge to reduce production costs so that they are accessible to the largest sector of the population. Silicon-based photovoltaic (PV) solar panels are an alternative for electricity generation in homes and industries, and most of their cost lies in obtaining the raw material. Intensified schemes, such as reactive distillation, have been proposed to produce silane (SiH4) while reducing the cost and energy demand of the process. Zeng et al. (2017) proposed dividing the reactive zone into several reactive zones and, through a parametric study, found that three reactive zones greatly benefit the energy requirements of the unit operation. Alcántara-Maciel et al. (2022) solved this problem by stochastic optimization using dynamic limits, evaluating the cases of one, two, and three reactive zones in the same study to determine the optimal number of reactive zones by optimizing the Total Annual Cost (TAC), and found that the best solution is a single reactive zone. Post-optimization controllability studies have been carried out for the reactive distillation column producing silane with a single reactive zone, but not for the case of multiple reactive zones. Techniques have been proposed to evaluate the controllability of steady-state processes based on prior open-loop analysis, using Singular Value Decomposition (SVD) with simplified first-order transfer function models (Cabrera et al., 2018). In this work, three Pareto solutions from a previous multi-objective study of this reactive distillation column, traded off between the reboiler duty (Qh) and the TAC, will be evaluated, as well as the case proposed by Zeng et al. (2017). The condition number obtained from the SVD of the rigorous models, together with the quantitative measure Ag+gsm proposed by Cabrera et al. (2018), will be compared with approximations based on first- and second-order transfer function models for a positive perturbation, so that the feasibility of using these simplified models to evaluate steady-state controllability within a multi-objective global optimization of this complex scheme can be assessed. The results of this study show that first- and second-order transfer functions can be effectively used to predict steady-state controllability for frequencies of up to 100 rad/h, which is a new proposal, since only first-order transfer functions are reported in the literature. Simplifying rigorous transfer function models to first- or second-order models helps reduce noise in the stochastic optimization process and shortens the computational time in this novel implementation of steady-state controllability evaluation.
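
A small sketch of how a frequency-dependent condition number can be computed from simplified first-order transfer function models (placeholder gains and time constants, not the silane column models):

    # Evaluate G(j*omega) elementwise from first-order models G_ij(s) = K_ij/(tau_ij*s + 1)
    # and compute the condition number from its singular values.
    import numpy as np

    K = np.array([[1.5, -0.3], [0.4, 1.1]])       # steady-state gains
    tau = np.array([[2.0, 1.0], [1.5, 3.0]])      # time constants, h

    def condition_number(omega):
        G = K / (tau * 1j * omega + 1.0)
        s = np.linalg.svd(G, compute_uv=False)
        return s[0] / s[-1]

    for w in [0.01, 1.0, 10.0, 100.0]:            # rad/h, up to the 100 rad/h reported
        print(w, condition_number(w))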



Optimization of steam power systems in industrial parks considering distributed heat supply and auxiliary steam turbines

Lingwei Zhang, Yufei Wang

China University of Petroleum (Beijing), People's Republic of China

Enterprises in industrial parks have dispersed locations and varied heat demands. In a steam power system with centralized heat supply, the heat demands of all consumers are satisfied by the energy station, leading to high steam delivery costs for individual distant enterprises. Additionally, considering the trade-off between distance-related costs and cascaded heat utilization, the number of steam levels is limited; some consumers are therefore supplied with heat at a higher temperature than required, resulting in low energy efficiency. To address these problems, an optimization model of steam power systems in industrial parks that considers distributed heat supply and auxiliary steam turbines is proposed. Field-erected boilers can independently supply heat to consumers to avoid excessive pipeline costs, while auxiliary steam turbines are used for the re-depressurization of steam received by consumers, which increases the electricity generation capacity and improves the temperature matching between heat supply and demand. A mixed-integer nonlinear programming model is established for the problem, and the steam power systems are optimized with the objective of minimizing the total annual cost (TAC). In this model, the influence of different numbers of steam levels is considered. The saturation temperatures of the steam levels are the continuous decision variables, and the arrangements of field-erected boilers and auxiliary turbines are determined by binary variables. A case study illustrates that there is an optimal number of steam levels that minimizes the TAC of the system. The selective installation of field-erected boilers and auxiliary steam turbines for consumers can effectively reduce the cost of the pipeline network, increase the income from electricity generation, and significantly decrease the TAC.



Conceptual Modular Design and Optimization for Continuous Pharmaceutical Processes

Tuse Asrav, Merlin Alvarado-Morales, Gurkan Sin

Technical University of Denmark, Denmark

The pharmaceutical industry faces challenges such as high manufacturing costs, strict regulations, and rapidly evolving product portfolios, driving the need for efficient, flexible, and adaptive manufacturing processes. To meet these demands, the industry is shifting toward multiproduct, multiprocess facilities, increasing interest in modular process designs [1].

Modular design is characterized by using standardized, interchangeable, small-scale process units or modules that can be easily rearranged by exchanging units and numbering up for fast adaptation to different products and production scales. The modular approach not only supports the efficient design of multiproduct facilities but also allows for the continuous optimization of processes as new data and technologies become available.

This study presents a systematic framework for the conceptual design of modular pharmaceutical facilities, which allows for reduced engineering cycles, faster time-to-market, and enhanced adaptability to changing market demands. In brief, the proposed framework consists of 1) module definition, 2) process flowsheet design, 3) simulation-based optimization, and 4) uncertainty analysis and robustness evaluation.

The application of the framework is demonstrated through case studies involving the manufacturing of two widely used active pharmaceutical ingredients (APIs), ibuprofen and paracetamol, with distinct production steps following the modular design approach. The standard modules, such as the reaction and separation modules, are defined in terms of the type, number, and size of equipment. The process flowsheets are then designed and optimized by combining these standardized modules. Simulation-based optimization and uncertainty analysis are integrated to quantify key metrics such as process efficiency, robustness, and flexibility.

This study demonstrates how modular systems offer a cost-efficient, adaptable solution that integrates continuous production with high flexibility. The approach allows pharmaceutical facilities to quickly reconfigure processes to meet changing demands, providing an innovative pathway for future developments in pharmaceutical manufacturing. The results also highlight the importance of integrating stochastic optimization in modular design to enhance robustness and ensure confidence in performance by accounting for uncertainties.

References

[1] Bertran, M. O., & Babi, D. K. (2023). Exploration and evaluation of modular concepts for the design of full-scale pharmaceutical manufacturing facilities. Biotechnology and Bioengineering. https://doi.org/10.1002/bit.28539



Design of a policy framework in support of the Transformation of the Dutch Industry

Jan van Schijndel, Rutger deMare, Nort Thijssen, Jim van der Valk Bouman

QuoMare

The size of the Dutch Energy System in 2022 was approximately 2700 PJ. Some 14% (380 PJ) is classified as renewable heat & power and the remaining 86% as fossil energy (natural gas, crude oil and coal). A network of power-generation units, refineries and petrochemical complexes converts fossil resources into heat (700 PJ), power (400 PJ), transportation fuels (500 PJ) and high-value chemicals (400 PJ). Some 700 PJ is lost in conversion and transport. The corresponding CO2 emission level in 2022 was some 150 million tonnes of CO2-equivalents.

Transformation of this system into a Net Zero CO2 system by 2050 calls for both decarbonisation and recarbonisation of fossil resources into renewable resources: renewable heat (waste heat, geo- & aqua-thermal heat), renewable power (solar & wind) and renewable carbon (biomass, waste, and CO2).

QuoMare developed a decision support framework TDES to support this Transformation of the Dutch Energy System.

TDES is based on Mixed-Integer Multi-Period Linear Programming mathematics.

TDES evaluates the impact of integer decisions (decarbonization, recarbonisation & infrastructure investment options) on a year-to-year basis simultaneously with continuous variables (unit capacities & interconnecting flows) subject to various constraints (like CO2 targets over time and infrastructure limitations). The objective is to maximize the net present value of the accumulated energy system margin over the 2020-2050 time-horizon.
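
A toy multi-period investment MILP in this spirit (Pyomo is used purely for illustration; the single technology, costs and target below are placeholders, not TDES data) might look like:

    # Illustrative multi-period model: binary build decisions per year, continuous
    # output levels, a 2050 requirement, and a discounted-margin (NPV-like) objective.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint, RangeSet,
                               Binary, NonNegativeReals, maximize, SolverFactory)

    m = ConcreteModel()
    m.t = RangeSet(2025, 2050)                       # planning years
    m.build = Var(m.t, domain=Binary)                # invest in a new unit this year?
    m.out = Var(m.t, domain=NonNegativeReals)        # renewable output in year t, PJ

    # Capacity available in year t is the sum of units built up to that year (10 PJ each)
    m.cap = Constraint(m.t, rule=lambda m, t:
                       m.out[t] <= 10 * sum(m.build[k] for k in m.t if k <= t))
    m.target = Constraint(expr=m.out[2050] >= 100)   # placeholder 2050 requirement

    margin, capex, r = 3.0, 20.0, 0.05               # placeholder economics
    m.npv = Objective(expr=sum((margin * m.out[t] - capex * m.build[t]) / (1 + r) ** (t - 2025)
                               for t in m.t), sense=maximize)
    SolverFactory("glpk").solve(m)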

TDES can help policy makers to develop policies for ‘optimal transition pathways’ that will deliver a Net Zero energy system by 2050.

Decarbonisation of heat & power is well underway. Over 50% of current Dutch power demand already comes from solar and wind, large-scale waste heat recovery and distribution projects are under development, and residential heat pumps are reaching high penetration rates. High-level heat supplied to industry by green and blue H2 is projected to be viable from 2035 onwards.

However, progress on the recarbonisation of fossil-based transportation fuels (in particular for shipping and aviation) and chemicals is hampered by the lack of robust business cases. Without a line of sight towards healthy production margins, companies are reluctant to invest in the technologies (such as electrolysis, pyrolysis, gasification, oxy-firing, fermentation, Fischer-Tropsch synthesis, methanol synthesis, auto-thermal reforming and dry reforming) needed to produce the envisaged 800 PJ (some 20 million tonnes) of renewable carbon-based transportation fuels and high-value chemicals by 2050.

The paper will address which set of meaningful policies would steer the energy system transformation towards a Net Zero system in 2050. Such an optimal set of policy measures will be a combination of CO2 emission constraints (prerequisite for any license to operate), CO2 tax levels (imposed on top of ETS), and capital investment subsidies (to ensure a level playing field in cost terms for the production of renewable carbon based transportation fuels and chemicals).

The novelty of this work relates to the application of a MP-MILP approach to the development of optimal policies to drive the energy transition at a country wide level.



Data-Driven Deep Reinforcement Learning for Greenhouse Temperature Control

Farhat Mahmood, Sarah Namany, Rajesh Govindan, Tareq Al-Ansari

College of Science and Engineering, Hamad bin Khalifa University, Qatar

Efficient temperature control in closed greenhouses is essential for optimal plant growth, especially in arid regions where extreme conditions challenge micro-climate management. Maintaining the optimum temperature range directly influences healthy plant development and overall agricultural productivity, impacting crop yields and financial outcomes. However, the greenhouse in the present case study fails to maintain the optimum temperature as it operates based on predefined settings, limiting its ability to adapt to dynamic climate conditions. To address this, the objective is to develop a control system that maintains an ideal temperature range within the greenhouse and dynamically adapts to fluctuating external conditions, ensuring consistent climate control. Therefore, this study presents a control framework using Deep Deterministic Policy Gradient, a model-free deep reinforcement learning algorithm, to optimize temperature control in the closed greenhouse. A deep neural network is trained using historical data collected from the greenhouse to accurately represent the nonlinear behavior of the greenhouse system under varying conditions. The deep deterministic policy gradient algorithm learns optimal control strategies by interacting with a simulated greenhouse environment, continuously adapting without needing an explicit system dynamics model. Results from the study demonstrate that, over a three-day simulation period, the deep deterministic policy gradient-based control system achieves superior temperature control compared to the existing system, with a mean squared error of 0.1459 °C and a mean absolute error of 0.2028 °C. The proposed control system promotes healthier plant growth and improved crop yields, contributing to better resource management and sustainability in controlled environment agriculture.
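
A hedged sketch of this kind of setup, with a toy surrogate environment and an off-the-shelf DDPG implementation (the library choice, dynamics and reward below are assumptions for illustration, not the authors' implementation):

    # Illustrative only: the trained data-driven surrogate would replace the placeholder
    # dynamics inside step(); DDPG then learns a cooling policy against it.
    import numpy as np
    import gymnasium as gym
    from stable_baselines3 import DDPG

    class GreenhouseEnv(gym.Env):
        def __init__(self):
            self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)
            self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
            self.T, self.T_out, self.k = 28.0, 40.0, 0

        def step(self, action):
            # Placeholder dynamics standing in for the trained neural-network surrogate
            self.T += 0.1 * (self.T_out - self.T) - 0.5 * float(action[0])
            self.k += 1
            reward = -(self.T - 24.0) ** 2            # track a 24 degC setpoint
            obs = np.array([self.T, self.T_out], dtype=np.float32)
            return obs, reward, False, self.k >= 200, {}

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.T, self.k = 28.0, 0
            return np.array([self.T, self.T_out], dtype=np.float32), {}

    model = DDPG("MlpPolicy", GreenhouseEnv(), verbose=0)
    model.learn(total_timesteps=5_000)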



Balancing modelling complexity and experimental effort for conducting QbD on lipid nanoparticles (LNPs) systems

Daniel Vidinha Batista, Marco Seabra Reis

University of Coimbra, CERES, Department of Chemical Engineering

Abstract

Lipid nanoparticles (LNPs) efficiently encapsulate nucleic acids while ensuring successful intracellular delivery and endosomal escape. Therefore, there is increasing interest from the industrial and research communities in exploring the LNPs’ unique properties as a promising drug carrier. To ensure the successful and safe synthesis of these LNPs while maintaining their quality attributes, the pharmaceutical industry typically recommends following a Quality by Design (QbD) approach. One of the key aspects of the QbD approach is the use of Design of Experiments (DOE) to establish the Design Space that guarantees the quality requirements of the LNPs are met [1]. However, before defining a design space, several DOE stages may be necessary for screening the important factors, modelling the system’s behaviour accurately, and finding the optimal operational conditions. As each experiment is expensive due to the high cost of the formulation components, there is a strong concern and interest in making this process as efficient and informative as possible.

In this context, an in silico study provides a suitable test bed to analyse and compare the different DOE strategies that may be adopted and to collect insights about a reasonable number of experiments to accommodate within a designated budget, while ensuring a statistically valid analysis. Therefore, we have conducted a systematic study based on the work developed by Karl et al. [2], who provided a simulation model of the LNP synthesis, referred to as the Golden Standard (GS) Model. This model was derived and codified in the JMP Pro software using a recent methodology called self-validated ensemble model (SVEM). The model is quite complex in its structure and was considered unknown throughout the study.

The objective of this study is to ascertain the efficacy of different DOE alternatives for a selected number of effects. A variety of models of increasing complexity was considered. These models are referred to as Estimated Models (EM) and range from main-effects-only models to models contemplating third-order non-linear mixture effects. In the development of the EM models, some predictors of the GS model were deliberately not considered, to better reproduce the realistic situation of model mismatch and experimental limitations. This is the case for the type of ionizable lipid and the total flow rate.

We have considered the molar ratio of each lipidic component (ionizable lipid, structural lipid, helper lipid and PEG lipid) and the N/P ratio as factors, and, as responses, the potency and average size of the LNPs. These responses were contaminated with additive white noise at different signal-to-noise ratios (SNRs) to better reflect the reality of having different levels of reproducibility of the measured responses.

Our results revealed that different responses require quite different model structures, with distinct levels of complexity. However, the suggested number of experiments is approximately the same, of the order of 30, a fact that may be anticipated for a DOE with similar factors under analysis.
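
A small numpy sketch of this kind of in silico DOE comparison (placeholder factors, a toy hidden model, and a main-effects-only estimated model; none of this reproduces the LNP study) could look as follows:

    # Generate a design, simulate noisy responses from a hidden "golden standard"
    # model at a chosen signal-to-noise ratio, and fit a simpler estimated model.
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    levels = [-1.0, 0.0, 1.0]
    design = np.array(list(itertools.product(levels, repeat=2)))   # 3^2 factorial, 2 factors

    def golden_standard(x):                     # hidden "true" model (placeholder)
        return 5 + 2 * x[:, 0] - x[:, 1] + 1.5 * x[:, 0] * x[:, 1]

    signal = golden_standard(design)
    snr = 5.0
    y = signal + rng.normal(scale=signal.std() / snr, size=signal.shape)

    # Estimated model with main effects only (deliberate model mismatch)
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated main-effect coefficients:", coef)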

References:

  1. Gurba-Bryśkiewic et al. Biomedicines. 2023;11(10):2752.doi:10.3390/biomedicines11102752
  2. Karl et al. JoVE. 2023;(198):65200. doi:10.3791/65200


Decarbonizing Quebec’s Chemical Sector: Bridging sector disparities with simplified modeling

Mélissa Lemire, Marie-Hélène Talbot, Sylvain Larose

Laboratoire des technologies de l’énergie, Institut de Recherche d’Hydro-Québec, Canada

Electric utilities are at a critical juncture where they must proactively anticipate energy consumption and power demand over extended time horizons to support the energy transition. These projections are essential for meeting the expected surge in renewable electricity as we shift away from natural gas to eliminate greenhouse gas (GHG) emissions. Given that a significant portion of these emissions comes from industrial processes, utilities need a comprehensive understanding of the thermal energy requirements of various processes within their service regions in order to navigate this transition effectively.

In Quebec, the chemical sector includes 19 major GHG emitters, each with annual emissions exceeding 10,000 tCO2 equivalent, operating across 11 distinct application areas, excluding refineries from this analysis. The sector is undergoing rapid transformation driven by the closure of aging facilities and the establishment of new plants focused on battery production and renewable fuel generation. The latter aims at decarbonising “hard-to-abate” sectors, which pose significant challenges. It is imperative to establish a clear methodology for characterising the chemical sector to accurately estimate the energy requirements for decarbonisation.

A thorough analysis of existing literature and reported GHG emissions serves as a foundation for estimating the actual energy requirement of each major emitter. Despite the diversity of industrial processes, a trend emerges: alternative end-use technologies can often be identified based on the required thermal temperature levels. With this approach, alternative end-use technologies that closely align with the specific heat levels needed are considered. Furthermore, two key performance indicators for decarbonisation scenarios have been developed. These indicators enable the comparison of various technological solutions and estimation of the uncertainties associated with different decarbonisation pathways. We introduce the Decarbonisation Efficiency Coefficient (DEC), which evaluates the reduction of fossil fuel consumption per unit of renewable energy and relies on the first law efficiency of both existing fossil-fuel technologies and alternative renewable energy technologies. The second indicator, the GHG Performance Indicator (GPI), assesses the reduction of greenhouse gas emissions per unit of renewable energy required, providing a clear metric for assessing the most efficient technological solutions to support decarbonisation efforts.
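
Read literally, the two indicators could be formalised as follows (one possible interpretation of the verbal definitions above; the symbols are introduced here for illustration and are not the authors' notation):

    \mathrm{DEC} = \frac{\Delta E_{\mathrm{fossil}}}{E_{\mathrm{renewable}}}, \qquad
    \mathrm{GPI} = \frac{\Delta m_{\mathrm{GHG}}}{E_{\mathrm{renewable}}}

where \Delta E_{\mathrm{fossil}} is the fossil fuel consumption avoided (estimated from the first-law efficiencies of the existing fossil technology and the alternative renewable technology), \Delta m_{\mathrm{GHG}} is the greenhouse gas emission reduction achieved, and E_{\mathrm{renewable}} is the renewable energy required.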

In a status quo market, the decarbonisation of this sector could yield a significant reduction in primary energy consumption, ranging from 10% to 61%, depending on the technologies implemented. Alternative end-use technologies include heat pumps, electric boilers with reheaters, biomass boilers, and green hydrogen utilisation, each presenting unique advantages for a sustainable industrial landscape. Ultimately, for Quebec’s energy transition to succeed, electric utilities must adapt to evolving market conditions and enhance their understanding of industrial energy requirements. By accurately estimating the electricity required for effective decarbonisation, utilities can play a pivotal role in shaping a sustainable future.



Optimizing Green Hydrogen Supply Chains in Portugal: Balancing Economic Efficiency and Water Sustainability

João Imaginário1, Tânia Pinto Varela1, Nelson Chibeles-Martins2,3

1CEG-IST, IST UL, Portugal; 2NOVA Math, NOVA FCT, Portugal; 3Mathematics Department, NOVA FCT, Portugal

As the world intensifies efforts to reduce carbon emissions and combat climate change, green hydrogen has emerged as a pivotal solution for sustainable energy transition. Produced using renewable sources like hydro, wind, and solar energy, green hydrogen holds immense potential for clean energy systems. Portugal, with its abundant renewable resources, is well-positioned to become a leader in green hydrogen production. However, the water-intensive nature of hydrogen production, especially via electrolysis, poses a challenge, particularly in regions facing water scarcity.

In Portugal, water resources are unevenly distributed, with southern regions such as Alentejo and Algarve already experiencing significant water stress. This creates a complex challenge for balancing green hydrogen development with the need to conserve water. To address this, a multi-objective optimization model for the Green Hydrogen Supply Chain (GHSC) in Portugal is proposed. This model aims to minimize both production costs and water stress, offering a more sustainable approach than traditional models that focus solely on economic efficiency.

The model leverages a meta-heuristic algorithm to explore large solution spaces, offering near-optimal solutions for supply chain design/planning. It incorporates regional water availability by analysing hydrographic characteristics of mainland Portugal, allowing for flexible decision-making that balances cost and water stress according to regional constraints. Scenario analysis is employed to evaluate different production strategies under varying conditions of water availability and demand.

By integrating these dual objectives, the model supports the design of green hydrogen supply chains that are both economically viable and environmentally responsible. This approach ensures that hydrogen production does not exacerbate water scarcity, particularly in already vulnerable regions. The findings contribute to the broader goal of creating cleaner, more resilient energy systems, providing valuable insights for sustainable energy planning and policy.

This research is a critical step in ensuring green hydrogen development aligns with long-term sustainability, offering a framework that prioritizes both economic and environmental goals.



Towards net zero carbon emissions and optimal water management within an integrated aquatic and agricultural livestock system

Amira Siniscalchi1,2, Guillermo Durand1,2, Erica Patricia Schulz1,2, Maria Soledad Diaz1,2

1Universidad Nacional del Sur, Argentine Republic; 2Planta Piloto de Ingeneria Quimica (PLAPIQUI)

We propose an integrated agricultural, livestock, ecohydrological and carbon capture model for the management of extreme climate events within a salt lake basin, while minimizing carbon dioxide emissions. Salt lakes are typical of arid and semiarid zones, where annual evaporation exceeds rainfall or runoff. They are particularly vulnerable to climatic changes, and salt and water levels can reach critical values. The mitigation of the consequences of extreme environmental events, such as floods and droughts, has been addressed for an endorheic salt lake in previous work [1].

In the present model, the system is composed of five integrated submodels: ecohydrological, meteorological, agricultural, livestock and carbon emission/capture. In the ecohydrological model, dynamic mass balances are formulated for both a salt lake and an artificial freshwater reservoir. The meteorological model includes surrogate models for meteorological variables, based on historical daily data for air temperature, wind, relative humidity and precipitation. From these, wind speed profiles, radiation, vapor saturation, etc. are estimated, as required for the calculation of evaporation and evapotranspiration profiles. The agricultural submodel includes biomass growth for native trees, crops and pasture, as well as the water requirement at each life cycle stage, calculated as a function of tree/crop/pasture evapotranspiration and precipitation. Local data is collected for native species and soil types. Carbon capture is calculated as a function of biomass and soil type. The water requirement for cattle is calculated as a function of biomass. The proposed model also accounts for CO2 emissions associated with sowing, electrical consumption for pumps (drip irrigation for crops and pasture, and water diversion to/from the river), methane emissions (CO2-eq) from livestock, as well as CO2 sequestration by trees, pasture, crops and soil. One objective is to carry out the carbon mass balance along a given time horizon (six years or more) and to propose additional activities to achieve net zero carbon, depending on different climate events.

An optimal control problem is proposed, in which the objective function is an integral that aims to keep the salt lake volume (and its associated salinity, as it is an endorheic basin) at a desired value along a given time horizon, so as to maintain salinity at optimal values for the reproduction of valuable fish species. Control variables are the stream flowrates diverted to/from the tributary to the salt lake from/to an artificial freshwater reservoir during dry/wet periods. The resulting optimal control problem is constrained by a DAE system that represents the above-described system. The optimal control problem has been implemented in gPROMS (Siemens, 2024) and solved with a control vector parameterization approach.
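To illustrate the control vector parameterization idea in a self-contained way, the sketch below discretizes a diversion flowrate into piecewise-constant intervals and minimizes the integrated deviation of a lake volume from its target. The single-state water balance and its parameters are deliberately simplified assumptions; the actual problem is a full DAE system solved in gPROMS.

```python
# Minimal sketch of control vector parameterization for a problem of this type,
# assuming a deliberately simplified lake volume balance (the actual model is a
# DAE system solved in gPROMS). The diversion flowrate is piecewise constant.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

V_TARGET, T_END, N_INTERVALS = 100.0, 6.0, 12   # hm3, years, control intervals

def inflow(t):                       # hypothetical seasonal tributary inflow
    return 30.0 + 15.0 * np.sin(2 * np.pi * t)

def simulate(u):                     # u: diverted flowrate in each interval
    def rhs(t, y):
        k = min(int(t / (T_END / N_INTERVALS)), N_INTERVALS - 1)
        evaporation = 0.3 * y[0]
        return [inflow(t) - evaporation - u[k]]
    return solve_ivp(rhs, (0, T_END), [V_TARGET], dense_output=True)

def objective(u):                    # integral deviation from the target volume
    sol = simulate(u)
    t = np.linspace(0, T_END, 200)
    dt = t[1] - t[0]
    return float(np.sum((sol.sol(t)[0] - V_TARGET) ** 2) * dt)

res = minimize(objective, x0=np.zeros(N_INTERVALS),
               bounds=[(-20.0, 20.0)] * N_INTERVALS, method="L-BFGS-B")
print("optimal piecewise-constant diversions:", np.round(res.x, 2))
```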

Numerical results show that the system under study can produce meat, quinoa crops and fish, with important incomes, as well as restore native tree species, under different extreme events. Net zero carbon goals are approached within the basin, while performing optimal water management in a salt lake basin.

References

Siniscalchi, A., Diaz, M.S., Lara, R.J. (2022). Sustainable long-term mitigation of floods and droughts in semiarid regions: Integrated optimal management strategies for a salt lake basin. Ecohydrology, 15, e2396.



A Modelling and Simulation Software for Polymerization with Microscopic Resolution

Shenhua Jiao1, Xiaowen Lin1,2, Rui Liu1, Xi Chen1,2

1State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University 310027, Hangzhou China; 2Huzhou Institute of Industrial Control Technology 313000, Huzhou China

In the domain of process systems engineering, software embedded with advanced computational methods is in great demand to enhance kinetic understanding and facilitate industrial applications. Polymer production, characterized by complex reaction mechanisms, represents a particularly intricate process industry. In this study, a scientific software package, PolymInsight, is developed for polymerization modelling and simulation with insight at microscopic resolution.

From an algorithmic perspective, PolymInsight offers high-performance solution strategies for polymerization process modelling by utilizing self-developed approaches. At the flowsheet level, the software provides both an equation-oriented and a sequential-modular approach to solve for macroscopic information. At the micro-structure level, it provides users with both deterministic and stochastic algorithms to predict polymers’ microscopic properties, e.g., the molecular weight distribution (MWD). Users can choose from various methods to model these indices: the stochastic method (Liu, 2023), which introduces the concept of a “buffer pool” to enable multi-step steady-state Monte Carlo simulation of complicated reactions including long-chain branching; the orthogonal collocation method (Lin, 2021), which applies a model reformulation strategy to enable the numerical solution of the large-scale system of equations for calculating the MWD in steady-state polymerizations; and an explicit analytical solution derivation method, which provides analytical expressions of the MWD for specific polymerization mechanisms, including FRP with combination termination, chain transfer to polymer, and CRP with reversible reactions.
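As a generic illustration of an analytical MWD of the kind such tools compute, the sketch below evaluates the classical Flory most-probable distribution and its averages; this is a textbook example, not the specific expressions implemented in PolymInsight.

```python
# Illustration of an analytical molecular weight distribution: the classical
# Flory most-probable distribution for chain growth with disproportionation/
# chain-transfer termination. This is a textbook example, not the specific
# expressions implemented in PolymInsight.
import numpy as np

def flory_weight_fraction(n, p):
    """Weight fraction of n-mers for a probability of propagation p."""
    return n * (1.0 - p) ** 2 * p ** (n - 1)

n = np.arange(1, 20001)
p = 0.999                     # propagation probability -> DPn = 1/(1-p) = 1000
w = flory_weight_fraction(n, p)

dpn = 1.0 / np.sum(w / n)     # number-average degree of polymerization
dpw = np.sum(w * n)           # weight-average degree of polymerization
print(f"DPn = {dpn:.0f}, DPw = {dpw:.0f}, PDI = {dpw / dpn:.2f}")
```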

From a software architecture perspective, PolymInsight is built on a self-developed process modelling platform that allows flexible user customization and is specifically tailored to the macromolecular field. As general-purpose software, it is modularly designed, and each module supports external libraries and secondary development. Pivotal modules, including reaction components, reaction kinetics, standard units, standard streams and solution strategies, are meticulously constructed and seamlessly integrated. The software's versatility is ensured by its support for a wide range of (i) polymerization mechanisms (including Ziegler-Natta polymerization, free radical polymerization, and controlled radical polymerization), (ii) computing algorithms (including deterministic methods solving large-scale equation systems and stochastic methods utilizing Monte Carlo simulation), (iii) user-defined flowsheets and parameters, and (iv) extensible standard model libraries. The insights gained from this work open up opportunities for optimizing operating conditions, addressing complex computational challenges, and enabling online control with minimal requirements for specialized knowledge.

References:

Lin, X., Chen, X., Biegler, L. T., & Feng, L.-F. (2021). A modified collocation framework for dynamic evolution of molecular weight distributions in general polymer kinetic systems. Chemical Engineering Science, 237, 116519.

Liu, R., Lin, X., Armaou, A., & Chen, X. (2023). A multistep method for steady-state Monte Carlo simulations of polymerization processes. AIChE Journal, 69(3), e17978.

Mastan, E., & Zhu, S. (2015). Method of moments: A versatile tool for deterministic modeling of polymerization kinetics. European Polymer Journal, 68, 139–160.



Regularization and Uncertainty Quantification for Parameter Estimation of NRTL Models

Volodymyr Kozachynskyi1, Christian Hoffmann1, Erik Esche1,2

1Technische Universität Berlin, Process Dynamics and Operations, Straße des 17. Juni 135, 10623 Berlin, Germany; 2Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany

Accurate prediction of vapor-liquid equilibria (VLE) using thermodynamic models is critical to every step of chemical process design. A model's accuracy and uncertainty can be quantified based on the uncertainty of the estimated parameters. The NRTL model is among the most widely used activity coefficient models. The estimation of its binary interaction parameters usually relies on a heuristic that fixes the nonrandomness parameter α at a value between 0.1 and 0.47. However, this heuristic can lead to an overestimation of the prediction accuracy of the final thermodynamic model, i.e., the model is actually not as reliable as the process engineer thinks. In this contribution, we present the results of an identifiability analysis of the binary VLE model [1] and argue that regularization should be used instead of simply fixing α.

In this work, the NRTL model with temperature-dependent binary interaction parameters is considered, resulting in five parameters to be estimated: the parameter α and four binary interaction parameters. Twelve binary mixtures with different azeotropic behavior, including no azeotrope and a double azeotrope, are analyzed. A standard Monte Carlo method for describing real parameter and model prediction uncertainty is modified for use in identifiability analysis and in the comparison of regularization techniques. Identifiability analysis is a technique used to determine the parameters that can be uniquely estimated based on the model's sensitivity to the parameters. Four subset-selection regularization techniques are compared: the SVD algorithm, generalized orthogonalization, forward selection, and the eigenvalue algorithm, as they use different identifiability methods to select and remove unidentifiable parameters from the estimation.
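A generic sketch of one of the named ideas, SVD/eigenvalue-based identifiability screening on a scaled local sensitivity matrix, is shown below. The sensitivity matrix here is random and purely illustrative; in the study it would come from the binary VLE model evaluated at the experimental data, and the actual subset-selection algorithms differ in detail.

```python
# Generic sketch of SVD/eigenvalue-based identifiability screening on a scaled
# local sensitivity matrix S (rows: measurements, columns: the five NRTL
# parameters). The matrix below is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(40, 5))
S[:, 4] = 0.3 * S[:, 0] + 0.7 * S[:, 1]    # make one parameter nearly dependent

param_names = ["alpha", "a12", "a21", "b12", "b21"]
_, sing_vals, Vt = np.linalg.svd(S, full_matrices=False)

# Parameters with large components in the right singular vector of the
# smallest singular value dominate the least identifiable direction and are
# candidates for removal/regularization.
sv_ratios = sing_vals[0] / sing_vals
weakest_direction = np.abs(Vt[-1])
order = np.argsort(weakest_direction)[::-1]

print("singular value ratios:", np.round(sv_ratios, 1))
print("parameters most involved in the least identifiable direction:",
      [param_names[i] for i in order])
```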

The results of our study on 12 binary mixtures show that, depending on the mixture, the number of identifiable parameters varies between 3 and 5, implying that it is crucial to use regularization to efficiently solve the underlying parameter estimation problem. According to the analysis of all mixtures, parameter α, depending on the chosen regularization technique, is usually the most sensitive parameter, suggesting that it is inadvisable to remove this parameter from the estimation – in contradiction to standard heuristics.

In addition to this identifiability analysis, the nonlinearity of the NRTL model with respect to the parameters is analyzed. The actual form of the parameter uncertainty usually indicates nonlinearity and does not follow the normal distribution, which contradicts standard assumptions. Nevertheless, the prediction accuracy estimated using the linearization assumption is sufficiently good, i.e., linearization provides at least a valid underestimation of the real model prediction uncertainty.

In the presentation, we shall demonstrate exemplarily for some of the investigated mixtures that the estimation of NRTL parameters should be performed using regularization techniques, how large the introduced bias is based on a selected regularization technique, and compare actual uncertainty to its linear estimator.

[1] Kozachynskyi V., Hoffmann C., and Esche E. 2024. Why fixing alpha in the NRTL model might be a bad idea – Identifiability analysis of a binary Vapor-Liquid equilibrium, 10.48550/arXiv.2408.07844. Preprint.

[2] Lopez, C.D.C., Barz, T., Körkel, S., Wozny, G., 2015. Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design. 10.1016/j.compchemeng.



From Sugar to Bioethanol – Simulation, Optimization, and Process Technology in One Module

Jan Schöneberger1, Burcu Aker2

1Berliner Hochschule für Technik; 2Chemstations

The Green Processes Lab module, part of the Green Engineering study program at BHT, aims to equip students to simulate, optimize, and implement an industrially relevant sustainable process within a single semester. Bioethanol production, with a minimum purity of 99.8 wt%, is the selected process, using readily available supermarket feedstocks: sugar and yeast.

In earlier modules of the program, students engage with essential unit operations, including vessel reactor (fermenter), batch distillation, batch rectification, filtration, centrifugation, dryer, and adsorber. These operations are thoroughly covered in theoretical lectures, reinforced through mathematical modeling and predefined experiments, so that students enter the module with a comprehensive knowledge of their behavior. The students work in groups and are largely unrestricted in designing their process, apart from safety regulations and two artificial constraints: only existing equipment can be used, and each process step is limited to a duration of 180 minutes, including set-up, shutdown and cleaning. The groups compete in finding the economically best process, i.e. the process that produces the maximum amount of bioethanol with the minimum amount of resources, namely sugar, yeast, and electricity. This turns the limit on process step duration into a major challenge, as it requires very detailed simulation and process planning.

To tackle this task, the students use commercial software, namely the flowsheet simulator CHEMCAD. This tool provides basic simulation models for unit operations and a powerful thermodynamic engine to calculate physical properties of pure substances and mixtures. However, the models must still be parametrized based on the existing equipment. Therefore, tools such as reaction rate regression and data reconciliation are used with data from previous experiments and a limited number of individually designed new experiments.

The parametrized models are then used to optimize the economic objective function. Due to the stepwise nature of the process, an overall optimization of all process parameters is extremely difficult. Instead, the groups combine different optimization approaches and focus on individual process steps without disregarding the others. This encourages a high degree of communication within the groups, because each group member is responsible for one process step.

At the end of the semester, each group successfully produced a quantifiable amount of bioethanol and documented the resources utilized throughout the process, as utility consumption was measured at each step. This data allows for the calculation of specific product costs, facilitating comparisons among groups and against commercially available bioethanol.

This work presents insights gained from the course, highlighting both the challenges and the successes. It emphasizes the importance of mathematical modelling and the difficulty of aligning modeled data with measured data. A key finding is that while the models may not perfectly reflect reality, they are essential for a successful process design, particularly for inexperienced engineers transitioning from academia to industry.



Aotearoa-New Zealand’s Energy Future: A Model for Industrial Electrification through Renewable Integration

Daniel Jia Sheng Chong1, Timothy Gordon Walmsley1, Martin Atkins1, Botond Bertok2, Michael Walmsley1

1Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato; 2Szechenyi István University, Gyor, Egyetem tér 1, Hungary

Green energy carriers are increasingly proposed as the energy of the future. This study evaluates Aotearoa-New Zealand’s potential to transition to full industrial electrification and produce high-value, green hydrogen-rich compounds, all within the country’s resource constraints. At the core of this research is a regional energy transition system model, developed using the P-graph framework. P-graph is a bipartite graph designed specifically for representing complex process systems, particularly combinatorial problems. The novelty of this research lies in integrating the open-source P-graph Python library with the ArcPy library and the MERRA-2 global dataset API to conduct large-scale energy modelling.

The model integrates renewable energy and biomass resources for green hydrogen production, simulating energy transformation processes on an hourly basis. On the demand side, scenarios consider full electrification of industrial process heat through heat pumps and electrode boilers, complemented by biomass-driven technologies such as bubbling fluidised-bed reactors for biomass residues, straw and stover, as well as biomass boilers for K-grade logs, to meet heat demand. Additionally, the model accounts for projected increases in electricity consumption from the growing use of electric and hydrogen-battery hybrid vehicles, as well as existing residential and commercial energy needs. Aotearoa-New Zealand’s abundant natural wood resources emerge as a viable feedstock for downstream processes, supplying carbon sources for hydrogen-rich compounds such as methanol and urea.

The regional energy transition model framework is structured to minimise overall system costs. To optimise the logistics of biomass transportation, we use the Python-based ArcPy library to calculate cost functions based on the distance to green refineries. The model is designed to be highly dynamic, adapting to spot electricity prices and fluctuating demand across residential, commercial, and industrial sectors, particularly influenced by seasonal and weather variations. It incorporates non-dispatchable energy sources, such as wind and solar, with variable outputs, while utilising hydroelectric power as a stable baseload and energy storage solution to counter peak demand periods. The hourly solar irradiance, wind speed, and precipitation data from the MERRA-2 global dataset are coupled with the model to produce realistic and accurate capacity factors for these renewable energy sources.
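The sketch below illustrates one step of this coupling: converting an hourly wind-speed series into hourly capacity factors via a power curve. The synthetic wind series and the turbine parameters are assumptions for illustration; the study uses the MERRA-2 API and site-specific data.

```python
# Sketch of turning hourly wind-speed data into capacity factors, as done when
# coupling MERRA-2 data to the energy model. The power-curve parameters and the
# synthetic wind series are assumptions; the study uses the MERRA-2 API and a
# site-specific turbine model.
import numpy as np

def turbine_power_fraction(v, cut_in=3.0, rated=12.0, cut_out=25.0):
    """Fraction of rated power at hub-height wind speed v (m/s)."""
    v = np.asarray(v, dtype=float)
    frac = np.clip((v**3 - cut_in**3) / (rated**3 - cut_in**3), 0.0, 1.0)
    frac[(v < cut_in) | (v > cut_out)] = 0.0
    return frac

hours = 8760
rng = np.random.default_rng(1)
wind_speed = rng.weibull(2.0, hours) * 8.0        # synthetic hourly series

hourly_cf = turbine_power_fraction(wind_speed)
print(f"annual wind capacity factor: {hourly_cf.mean():.2f}")
```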

The study concludes that Aotearoa-New Zealand remains a major player in the Oceanic region with respect to energy-based chemical production. Beyond meeting domestic needs, the country has the potential to become a net exporter of sustainable fuels, comparable to conventional energy sources. This outcome is achievable through the optimisation of diverse renewable energy sources and cross-sector energy integration. The findings provide policymakers with concrete, in-depth analyses of renewable projects to guide New Zealand’s transition to a net-zero hydrogen economy.



Non-Linear Model Predictive Control for Oil Production in Wells Using Electric Submersible Pumps

Carine de Menezes Rebello1, Erbet Almeida Costa1, Marcos Pellegrini Ribeiro4, Marcio Fontana3, Leizer Schnitman2, Idelfonso Bessa dos Reis Nogueira1

1Department of Chemical Engineering, Norwegian University of Science and Technology, Norway; 2Department of Chemical Engineering, Federal University of Bahia, Polytechnic School, Bahia, Brazil; 3Department of Electrical and Computer Engineering, Federal University of Bahia, Polytechnic School, Bahia, Brazil; 4CENPES, Petrobras R&D Center, Av. Horácio Macedo 950, Cid. Universitária, Ilha do Fundão, Rio de Janeiro, Brazil.

The optimization of oil production in wells lifted by Electric Submersible Pumps (ESPs) requires precise control of operational parameters, along with strict adherence to safety and efficiency constraints. The stable and safe operation of these wells is guided by physical and safety limits designed to minimize failures, extend equipment lifespan, and reduce costs associated with repairs, maintenance, and operational downtime. Moreover, maintaining operational stability not only lowers repair expenses but also mitigates revenue losses caused by unexpected equipment failures or inefficient production processes.

Process control has become a tool for reducing the frequency of constraint violations and ensuring the continuous optimization of oil production. By keeping operations within a well-defined operational envelope, operators can avoid common issues such as excessive vibrations, which may lead to premature pump wear and tear. Moreover, staying within this envelope prevents the degradation of pump efficiency over time and curbs excessive energy consumption, both of which have significant long-term cost implications.

The control strategy leverages the available degrees of freedom to overcome the system's inherent constraints and improve operational efficiency. In the case of wells using ESPs, these degrees of freedom are primarily the ESP rotation speed (or frequency) and the opening of the production choke valve.

We propose a Non-Linear Model Predictive Control (NMPC) system tailored to a well equipped with an ESP. The NMPC framework explicitly accounts for the pump's operational limitations and effectively uses the available degrees of freedom to maximize performance. The NMPC's overarching objectives are to maximize oil production while respecting all system constraints, including both physical limitations and operational safety boundaries. This approach offers a more advanced and systematic control method than traditional PID-based systems, particularly in nonlinear, constraint-intensive environments such as oil wells.

The NMPC methodology is fundamentally based on a phenomenological model of the ESP, calibrated to predict key controlled variables accurately. These include the production flow rate and the liquid column height (HEAD). The prediction model consists of a system of three differential equations and a set of algebraic equations representing a stiff, single-phase, and isothermal system.
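For illustration, a single-shooting NMPC step for this kind of system is sketched below. The two-state discrete-time well surrogate, its coefficients, and the setpoints are hypothetical placeholders (the real prediction model comprises three differential equations plus algebraic equations); the decision variables, ESP frequency and choke opening over the horizon, follow the description above.

```python
# Minimal single-shooting NMPC sketch for this kind of system. The well model
# below is a hypothetical two-state surrogate; decision variables are the ESP
# frequency f and choke opening z over the horizon.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 10.0, 5                       # s, number of control moves

def well_step(x, u):
    """Hypothetical discrete-time surrogate: x = [flow, head], u = [f, z]."""
    flow, head = x
    f, z = u
    flow_new = flow + DT * (0.02 * f * z - 0.01 * flow)
    head_new = head + DT * (0.05 * f - 0.03 * z * head**0.5 - 0.01 * head)
    return np.array([flow_new, head_new])

def nmpc_cost(u_flat, x0, sp):
    u_seq = u_flat.reshape(HORIZON, 2)
    x, cost = np.array(x0, dtype=float), 0.0
    for u in u_seq:
        x = well_step(x, u)
        cost += np.sum(((x - sp) / sp) ** 2) + 1e-3 * np.sum(u ** 2)
    return cost

x0, setpoint = [30.0, 25.0], np.array([40.0, 30.0])    # flow m3/h, head m
bounds = [(35.0, 65.0), (0.1, 1.0)] * HORIZON           # Hz, choke fraction
u0 = np.tile([50.0, 0.5], HORIZON)

res = minimize(nmpc_cost, u0, args=(x0, setpoint), bounds=bounds,
               method="L-BFGS-B")
print("first control move (ESP frequency, choke opening):",
      np.round(res.x[:2], 2))
```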

The system being modeled is a pilot plant located at the Artificial Lift Lab at the Federal University of Bahia. This pilot plant features a 15-stage centrifugal pump installed in a 32-meter-high well, circulating 3,000 liters of mineral oil within a closed-loop system.

In this setup, the controlled variables are the HEAD and the production flow, while the manipulated variables are the ESP frequency and the choke valve opening. The proposed NMPC system has been tested and has demonstrated its effectiveness in rejecting disturbances and accurately tracking setpoints. This guarantees stable and safe pump operation while optimizing oil production, providing a robust solution to the challenges associated with ESP-lifted well operations.



Life Cycle Assessment of Green Hydrogen Electrofuels in India's Transportation Sector

Ankur Singhal, Pratham Arora

IIT Roorkee, India

A transition to low-carbon fuels is integral to addressing the challenge of climate change. An essential transformation is underway in the transportation sector, one of the primary sources of global greenhouse gas emissions. Electrofuels, such as methanol synthesized via power-to-fuel technology, have the potential to decarbonize the sector. This paper outlines a comprehensive life cycle assessment of electrofuels, focusing on the production of synthetic methanol from renewable hydrogen generated by water electrolysis coupled with carbon from a direct air capture (DAC) process. It covers the whole value chain, from raw material extraction to fuel combustion in transportation applications, to provide a cradle-to-grave analysis. The results of this impact assessment will offer a fuller comparison of the merits and shortcomings of the electrofuel pathway relative to conventional methanol. A sensitivity study will determine how influential factors such as electrolyzer performance, carbon capture efficiency, and energy mix affect the overall environmental impact. The study will compare synthetic methanol with traditional methanol across categories such as global warming potential, energy consumption, acidification, and eutrophication, to assess the prospects for scaling synthetic methanol for the transportation industry.



Probabilistic Design Space Identification for Upstream Bioprocesses under Limited Data Availability

Ranjith Chiplunkar, Syazana Mohamad Pauzi, Steven Sachio, Maria M Papathanasiou, Cleo Kontoravdi

Imperial College London, United Kingdom

Design space identification and flexibility analysis are essential in process systems engineering, offering frameworks that enhance the optimization of operating conditions [1]. Such approaches can be broadly categorized into model-based and data-driven methods [2-4]. For complex systems like upstream biopharma processes, developing reliable mechanistic models is challenging, either due to a limited understanding of the underlying mechanisms or the need for simplifying assumptions to reduce model complexity. As a result, data-driven approaches often prove more practical from a modeling perspective. However, they often require extensive experimentation, which can be expensive and impractical, leading to sparse datasets [3]. Such sparsity also means that the data uncertainty becomes a significant factor that needs to be addressed.

We present a novel framework that utilizes a data-driven model to overcome the aforementioned challenges, even with sparse experimental data. Specifically, we utilize Gaussian Process (GP) models to account for real-world data uncertainties, enabling a probabilistic characterization of the design space—a critical generalization beyond traditional deterministic approaches. The framework has two primary components. First, the GP model predicts key performance indicators (KPIs) based on input process variables, allowing for the probabilistic modeling of these KPIs. Based on process performance constraints, a probability of feasibility is calculated, which indicates the likelihood that the constraints will be satisfied for a given input. After achieving a probabilistic design space characterization, the framework conducts a comprehensive quantitative analysis of process flexibility. Alpha shapes are employed to define deterministic boundaries at various confidence levels, allowing for the quantification of volumetric process flexibility and acceptable operational ranges. This enables a detailed examination of trade-offs between process flexibility, performance, and confidence levels.
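The probability-of-feasibility step can be illustrated with a short sketch: a Gaussian Process predicts a KPI with its uncertainty, and the probability of meeting a constraint is read from the predictive distribution. The synthetic data, the purity constraint value, and the grid below are assumptions, not the experimental dataset used in the study.

```python
# Sketch of the probabilistic feasibility step: a Gaussian Process predicts a
# KPI with its uncertainty and the probability of meeting a constraint is
# evaluated from the predictive distribution. Data and constraint values are
# illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform([300.0, 33.0], [450.0, 37.0], size=(15, 2))  # osmolality, T
y = (90 - 0.002 * (X[:, 0] - 380) ** 2 - 2 * (X[:, 1] - 35) ** 2
     + rng.normal(0, 0.5, 15))                               # noisy purity %

gp = GaussianProcessRegressor(kernel=RBF([50.0, 1.0]) + WhiteKernel(0.25),
                              normalize_y=True).fit(X, y)

# Probability that purity >= 88 % over a grid of candidate operating points.
grid = np.array([[osm, t] for osm in np.linspace(300, 450, 16)
                 for t in np.linspace(33, 37, 9)])
mean, std = gp.predict(grid, return_std=True)
prob_feasible = 1.0 - norm.cdf(88.0, loc=mean, scale=std)
print(f"{np.mean(prob_feasible > 0.9):.0%} of grid points are feasible "
      "with >= 90 % confidence")
```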

The proposed framework is applied to an experimental dataset designed to study the effects of cell culture osmolality and temperature on the yield and purity of a monoclonal antibody product produced in Chinese hamster ovary cell fed-batch cultures. The results help balance purity-yield trade-offs through probabilistic characterizations that guide further experimentation and process design. The framework visualizes results through probabilistic heat maps and flexibility metrics to provide actionable insights for process development scientists. Being primarily data-driven, the framework is transferable to other types of bioprocesses.

References

[1] Yang, W., Qian, W., Yuan, Z. & Chen, B. 2022. Perspectives on the flexibility analysis for continuous pharmaceutical manufacturing processes. Chinese Journal of Chemical Engineering, 41, 29-41.

[2] Ding, C. and M. Ierapetritou. 2021. A novel framework of surrogate-based feasibility analysis for establishing design space of twin-column continuous chromatography. Int J Pharm, 609: p.121161.

[3] Kasemiire, A., Avohou, H. T., De Bleye, C., Sacre, P. Y., Dumont, E., Hubert, P. & Ziemons, E. 2021. Design of experiments and design space approaches in the pharmaceutical bioprocess optimization. European Journal of Pharmaceutics and Biopharmaceutics, 166, 144-154.

[4] Sachio, S., C. Kontoravdi, and M.M. Papathanasiou. 2023. A model-based approach towards accelerated process development: A case study on chromatography. ChERD, 197: p.800-820.

[5] M. M. Papathanasiou & C. Kontoravdi. 2020. Engineering challenges in therapeutic protein product and process design. Current Opinion in Chemical Engineering, 27, 81-88.



Study of the Base Case in a Comparative Analysis of Recycling Loops for Sustainable Aviation Fuel Synthesis from CO2

Antoine Rouxhet, Alejandro Morales, Grégoire Léonard

University of Liège, Belgium

In the context of the fight against global warming, the EU launched the ReFuelEU Aviation plan as part of the Fit for 55 package. Within this framework, sustainable aviation fuels are identified as a key tool for reducing hard-to-abate CO2 emissions. Power-to-fuel processes offer the potential to synthesise a wide range of fuels by replacing crude oil with captured CO2 as the carbon source. This CO2 is combined with hydrogen produced through water electrolysis, utilizing the reverse water-gas shift (RWGS) reaction:

CO2 + H2 ⇌ CO + H2O   ΔH°(298.15 K) = +41 kJ/mol CO2 (1)

The purpose of this reaction is to convert the CO2 molecule into a less stable one, making it easier to transform into complex molecules, such as the hydrocarbon chains that constitute kerosene. This conversion is carried out through the Fischer-Tropsch (FT) reaction:

n CO + (2n+1) H2 → CnH2n+2 + n H2O   ΔH°(298.15 K) = −160 kJ/mol CO (2)

In previous work, two kinetic reactor models were developed in Aspen Custom Modeler: one for the RWGS reaction [1] and one for the FT reaction [2]. The next step consists of integrating both models into a single process model built in Aspen Plus. This process includes both reaction units and the subsequent separation steps, yielding three main product fractions: the heavy hydrocarbons, the middle distillates, which contain the kerosene-like fraction, and the light hydrocarbons along with unreacted gases.

This work is part of a broader study aimed at comparing different recycling loops for this process. Indeed, the literature proposes various configurations for recirculating unreacted gases, some of which include additional conversion units to transform light FT gases into reactants. However, there is currently a lack of comprehensive comparisons of these options from both technical and economic perspectives. The objective is therefore to compare these configurations to determine the one best suited for kerosene production.

In particular, this work presents the results of the base case, i.e., the recycling of the gaseous phase leaving the separation units without any transformation of this stream. Three options are considered for the entry point of this recycled stream: at the inlet of the RWGS reactor, at the inlet of the FT reactor, or at both inlets. The present study compares these options based on carbon and energy efficiencies.
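For illustration, the two comparison metrics can be computed as in the sketch below. All stream values shown are hypothetical placeholders; in the study they would come from the Aspen Plus flowsheet for each recycle entry point.

```python
# Illustrative calculation of the two comparison metrics used for the base
# case. All stream values are hypothetical placeholders.
def carbon_efficiency(mol_c_in_products, mol_co2_fed):
    """Fraction of carbon fed as CO2 ending up in the liquid product fractions."""
    return mol_c_in_products / mol_co2_fed

def energy_efficiency(lhv_products_mw, electrolysis_mw, heat_mw):
    """Chemical energy in products over total energy input to the process."""
    return lhv_products_mw / (electrolysis_mw + heat_mw)

for case, (c_prod, co2_fed, lhv, elec, heat) in {
        "recycle to RWGS": (620.0, 1000.0, 95.0, 180.0, 25.0),
        "recycle to FT":   (585.0, 1000.0, 90.0, 180.0, 20.0),
        "recycle to both": (640.0, 1000.0, 98.0, 180.0, 27.0)}.items():
    print(f"{case:16s}  carbon eff. = {carbon_efficiency(c_prod, co2_fed):.2f}"
          f"  energy eff. = {energy_efficiency(lhv, elec, heat):.2f}")
```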

The next step involves adding a transformation unit to the recycling loop, such as a partial combustion unit. This would allow the conversion of light FT gases into process reactants, thereby improving overall efficiency. An economic comparison of the various options is also a goal of the study.

[1] Rouxhet, A., & Léonard, G. (2024). The Reverse Water-Gas Shift Reaction as an Intermediate Step for Synthetic Jet Fuel Production: A Reactor Sizing Study at Two Different Scales. Computer Aided Chemical Engineering, 53, 685-690. doi:10.1016/B978-0-443-28824-1.50115-0

[2] Morales Perez, A., & Léonard, G. (2022). Simulation of a Fischer-Tropsch reactor for jet fuel production using Aspen Custom Modeler. In L. Montastruc & S. Negny, 32nd EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING. Amsterdam, Netherlands: Elsevier. doi:10.1016/B978-0-323-95879-0.50051-5



Electricity Bidding with Variable Loads

Iiro Harjunkoski1,2

1Hitachi Energy Germany AG; 2Aalto University, Finland

The ongoing and planned electrification of many industries and processes means that disturbances or changes in production will directly require countermeasures at the power grid level to maintain stability. As the electricity infrastructure is already facing increasing volatility on the supply side due to the growing number of renewable energy source (RES) generation units, it is important also to tap the potential of this electrification. Processes have a strong impact and can also help balance RES fluctuations, ensuring that demand and supply are balanced at all times. This opportunity has already been recognized [1], and here we further elaborate on the concept by adding a battery energy storage system (BESS) to support the balancing between production targets and grid stability.

Electricity bidding offers more opportunities than merely forecasting the electricity load. Large consumers must participate in the electricity markets ahead of time, and their energy bids affect the market clearing. This mechanism allows power plants to be scheduled so as to ensure sufficient supply, but with increasing RES participation it becomes a challenge to deliver on these commitments, and industrial loads could potentially also help to maintain stability. The main vehicle for dealing with unplanned supply variations is ancillary services [3], through which, from the consumer point of view, a participant commits to potentially increasing or lowering its energy consumption if called upon. This raises the practical question of how much the process industries can plan for such volatility, as they must mainly focus on delivering to their own customers.

A common option – also chosen by many RES unit owners – is to invest in a BESS to act as a buffer between the consuming load and the power grid. This can also shield the process owner from unwanted and infeasible power volatility, which can have an immense effect on the more electricity-dependent processes. With such an energy storage system in place, there is an option to use it for offering ancillary services, as well as for participating in energy arbitrage trading [4]. However, the key is how to operate such a combined system profitably while also taking into account the uncertainty of electricity prices. In this paper we extend the approach in [5], where a number of energy and ancillary service products are co-optimized taking into account the uncertainty in price developments. The previous approach was aimed at RES/BESS owners, where the forecasted load was relatively stable and mainly focused on keeping the system running. Here we make the load not a parameter but a variable and link it to a process schedule, which is co-optimized with the bidding decisions. Following the concepts in [6], we compare cases with various levels of uncertainty (forecasting quantiles) and different sizes of BESS systems using a simplified stochastic approach, which reduces to a deterministic optimization approach when only one scenario is available. The example process is modeled using the resource task network [7] approach.
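A deterministic toy version of the co-optimization idea is sketched below: a flexible process load plus a BESS scheduled against day-ahead prices in a single-scenario, energy-only setting (no ancillary-service products or RTN scheduling). Prices, load limits, and battery data are illustrative, and the PuLP library is assumed to be available.

```python
# Toy co-optimization of a flexible load and a BESS against day-ahead prices
# (single scenario, energy only). All data are illustrative; PuLP assumed.
import pulp

prices = [42, 38, 35, 33, 36, 45, 60, 75, 70, 55, 48, 44]   # EUR/MWh, 2-h blocks
T = range(len(prices))
DAILY_ENERGY, LOAD_MIN, LOAD_MAX = 240.0, 5.0, 40.0          # MWh, MW, MW
BESS_E, BESS_P, ETA = 20.0, 10.0, 0.9

m = pulp.LpProblem("bidding_with_variable_load", pulp.LpMinimize)
load = pulp.LpVariable.dicts("load", T, LOAD_MIN, LOAD_MAX)
ch   = pulp.LpVariable.dicts("charge", T, 0, BESS_P)
dis  = pulp.LpVariable.dicts("discharge", T, 0, BESS_P)
soc  = pulp.LpVariable.dicts("soc", T, 0, BESS_E)

m += pulp.lpSum(prices[t] * 2 * (load[t] + ch[t] - dis[t]) for t in T)
m += pulp.lpSum(2 * load[t] for t in T) == DAILY_ENERGY       # production target
for t in T:
    prev = soc[t - 1] if t > 0 else 0.5 * BESS_E
    m += soc[t] == prev + 2 * (ETA * ch[t] - dis[t] / ETA)    # energy balance

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("grid purchase per block (MW):",
      [round(load[t].value() + ch[t].value() - dis[t].value(), 1) for t in T])
```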



Sodium bicarbonate production from CO2 captured in waste-to-energy plants: an Italian case-study

Laura Annamaria Pellegrini1, Elvira Spatolisano1, Giorgia De Guido1, Elena Riva Redolfi2, Mauro Corradi2, Davide Alberti3, Adriano Carrara3

1Politecnico di Milano, Italy; 2Acinque Ambiente Srl, Italy; 3a2a S.p.A., Italy

Waste-to-energy (WtE) plants, despite offering a sustainable solution to both waste management and energy production, significantly contribute to greenhouse gas emissions (Kearns, 2019). Therefore, integration with CO₂ capture technologies represents a promising approach to enhance sustainability, enabling both waste reduction and climate change mitigation (Otgonbayar and Mazzotti, 2024). Once CO2 is captured from the flue gas, it can be converted into high value-added products, following circular economy principles. Key conversion technologies include chemical, electrochemical or biological methods for CO₂ valorization to methanol, syngas, plastics, minerals or fuels. However, challenges remain regarding the cost-effective implementation of these solutions at commercial scale. Research efforts in this respect are focused on improving efficiency and reducing costs, to allow the processes to be scaled up to the industrial level.

One of the viable alternatives for carbon dioxide utilization in the waste-to-energy context is its conversion into sodium bicarbonate (NaHCO₃). NaHCO₃, commonly known as baking soda, is often used in waste-to-energy flue gas treatment to abate various harmful pollutants, such as sulfur oxides (SOₓ) and acidic gases such as hydrogen chloride (HCl). Hence, in-situ bicarbonate production from captured carbon dioxide can be an interesting solution for simultaneously lowering the plant's environmental impact and improving its overall economic balance.

To explore sodium bicarbonate production as an alternative for carbon dioxide utilization, its production from sodium carbonate (Na₂CO₃) is analyzed with reference to an existing waste-to-energy plant in Italy (Moioli et al., 2024). The process technical assessment is performed in Aspen Plus V14®. The inlet CO2 flowrate is fixed to guarantee a bicarbonate output of about 30% of the annual need of the waste-to-energy plant. The effects of the Na2CO3/CO2 ratio (in the range 0.8-1.2 mol/mol) and temperature (in the range 35-45°C) are analyzed. Performances are evaluated considering the energy consumption for each of these cases. Outlet waste streams as well as water demand are minimized by proper integration between process streams. Direct and indirect CO2 emissions are evaluated to verify the process viability. As a result, optimal operating conditions are identified, in view of the pilot plant engineering and construction.
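A back-of-the-envelope stoichiometric sketch of the CO2 demand for a given bicarbonate output, via Na2CO3 + CO2 + H2O → 2 NaHCO3, is shown below. The annual bicarbonate need used here is a hypothetical placeholder; in the study the CO2 feed is fixed to cover about 30 % of the plant's actual annual consumption.

```python
# Stoichiometric sketch: CO2 and Na2CO3 demand for a target NaHCO3 output via
# Na2CO3 + CO2 + H2O -> 2 NaHCO3. The annual need is a placeholder.
M_NAHCO3, M_CO2, M_NA2CO3 = 84.01, 44.01, 105.99   # g/mol

annual_nahco3_need_t = 4000.0                       # t/y, hypothetical
target_fraction = 0.30

nahco3_t = target_fraction * annual_nahco3_need_t
mol_nahco3 = nahco3_t * 1e6 / M_NAHCO3              # tonnes -> grams -> mol
co2_t = 0.5 * mol_nahco3 * M_CO2 / 1e6               # 1 mol CO2 per 2 mol NaHCO3
na2co3_t = 0.5 * mol_nahco3 * M_NA2CO3 / 1e6

print(f"NaHCO3 target: {nahco3_t:.0f} t/y -> CO2: {co2_t:.0f} t/y, "
      f"Na2CO3: {na2co3_t:.0f} t/y")
```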

Owing to the encouraging outcomes and the ease of implementation within existing infrastructure, the potential of carbon dioxide conversion to bicarbonate is demonstrated, proving that it can become a feasible CO2 utilization choice within the waste-to-energy context.

References

Kearns, D. T., 2019. Waste-to-Energy with CCS: A pathway to carbon-negative power generation. ©Global CCS Institute.

Otgonbayar, T., Mazzotti, M., 2024. Modeling and assessing the integration of CO2 capture in waste-to-energy plants delivering district heating. Energy 290, 130087. https://doi.org/10.1016/j.energy.2023.130087.

Moioli, S., De Guido, G., Pellegrini, L.A., Fasola, E., Redolfi Riva, E., Alberti D., Carrara A., 2024. Techno-economic assessment of the CO2 value chain with CCUS applied to a waste-to-energy Italian plant. Chemical Engineering Science 287, 119717.



A Decomposition Approach for Operable Space Maximization

Alberto Saccardo1, Marco Sandrin1,2, Constantinos C. Pantelides1,2, Benoît Chachuat1

1Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom; 2Siemens Industry Software, London W6 7HA, United Kingdom

Model-based design of experiments (MBDoE) is a powerful methodology for improving parameter precision and thus optimising the development of predictive mechanistic models [1]. By leveraging the system knowledge embedded in a mathematical model structure, MBDoE aims to maximise experimental information while minimising experimental time and resources. Recent developments in MBDoE have enabled the computation of robust campaigns of parallel experiments [2], which could in turn be applied repeatedly in a sequential design. Effort-based methods are particularly suited to the design of parallel experiments. They proceed by discretising the experimental design space into a set of candidate experiments and determine the optimal number of replicates (or efforts) for each, aiming to maximise the information content of the overall campaign.

A challenge with MBDoE is that its success ultimately depends on the assumed model structure, which can introduce epistemic errors when the model presents a large structural mismatch. Traditional MBDoE methods rely on Fisher information matrix (FIM)-derived metrics (e.g., the D-optimality criterion), which implicitly assume a correct model structure [3], making them prone to suboptimality in cases of significant structural mismatch. Although such mismatch is common in engineering models, the impact of structural uncertainty on MBDoE has not received as much attention as parametric uncertainty in the literature [3].

Inspired by [4], we propose to address this issue by appending a secondary, space-filling criterion to the main FIM-based criterion in a bi-objective optimisation framework. The idea is for the space-filling criterion to promote alternative experimental campaigns that explore the experimental design space more broadly, yet without significantly compromising their predicted information content. Within an effort-based approach, we compute such a space-filling criterion as a (minimal or average) distance between the selected experiments in the discretised experimental space and maximise it alongside a D-optimality criterion. We can furthermore apply gradient search to refine the effort-based discretization in a subsequent step [5].
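The bi-criterion evaluation of an effort-based campaign can be sketched as below: the D-criterion of the campaign FIM combined, via a weighted sum, with the minimum distance between the selected candidates. The candidate sensitivities are random placeholders standing in for the model-derived atomic information matrices used in the actual framework, and the scalarization weight is an assumption.

```python
# Sketch of a bi-criterion score for an effort-based campaign: log-det of the
# campaign FIM plus a weighted minimum-distance space-filling term. Candidate
# sensitivities are random placeholders.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N_CAND, N_PAR = 30, 4
candidates = rng.uniform(0, 1, size=(N_CAND, 2))              # design space
sensitivities = rng.normal(size=(N_CAND, 6, N_PAR))           # per candidate

def campaign_score(efforts, weight=0.2):
    """Weighted sum of log-det(FIM) and min distance between selected points."""
    fim = sum(e * s.T @ s for e, s in zip(efforts, sensitivities))
    logdet = np.linalg.slogdet(fim)[1]
    selected = candidates[np.array(efforts) > 0]
    min_dist = min(np.linalg.norm(a - b)
                   for a, b in combinations(selected, 2)) if len(selected) > 1 else 0.0
    return logdet + weight * min_dist

# Compare a clustered 4-experiment campaign with a spread-out one.
clustered = np.zeros(N_CAND); clustered[[0, 1, 2, 3]] = 1
spread = np.zeros(N_CAND); spread[[0, 9, 19, 29]] = 1
print("clustered:", round(campaign_score(clustered), 2),
      " spread:", round(campaign_score(spread), 2))
```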

We benchmark the proposed bi-criterion approach against a standard D-optimality approach for a microalgae cultivation system, whereby the (inaccurate) model describes nitrogen consumption with a simple Monod model and the ground truth is simulated using the Droop model.

References

[1] G. Franceschini, S. Macchietto, 2008. Model-based design of experiments for parameter precision: State of the art. Chem Eng Sci 63:4846–4872.

[2] K. P. Kusumo, K. Kuriyan, S. Vaidyaraman, S. García-Muñoz, N. Shah, B. Chachuat, 2022. Risk mitigation in model-based experiment design: a continuous-effort approach to optimal campaigns. Comput Chem Engg 159:107680.

[3] M. Quaglio, E. S. Fraga, F. Galvanin, 2018. Model-Based Design of Experiments in the Presence of Structural Model Uncertainty: An Extended Information Matrix Approach. Chem Engin Res Des 136:129–43.

[4] Q. Chen, R. Paulavičius, C. S. Adjiman, S. García‐Muñoz, 2018. An Optimization Framework to Combine Operable Space Maximization with Design of Experiments. AIChE J 64(11):3944–57.

[5] M. Sandrin, B. Chachuat, C. C. Pantelides, 2024. Integrating Effort- and Gradient-Based Approaches in Optimal Design of Experimental Campaigns. Comput Aided Chem Eng 53:313–18.



Non-invasive Tracking of PPE Usage in Research Lab Settings using Computer Vision-based Approaches: Challenges and Solutions

Haseena Sikkandar, Sanjeevrajan Nagavelu, Pradhima Mani Amudhan, Babji Srinivasan, Rajagopalan Srinivasan

Indian Institute of Technology, Madras, India

Personal Protective Equipment (PPE) protects researchers working in laboratory environments involving biological, chemical, medical, and other hazards. Therefore, monitoring PPE compliance in academic and industrial laboratories is vital. CSB case studies have reported significant injuries and fatalities in university lab settings, highlighting the importance of proper PPE and safety protocols to prevent accidents (https://www.csb.gov/videos). This paper develops a real-time PPE monitoring system using computer vision to ensure lab users wear essential gear such as coats, safety gloves, bouffant caps, goggles, masks, and shoes (Arfan et al., 2023).

Current literature indicates substantial advancements in computer vision and object detection for PPE monitoring in industrial settings, though challenges persist due to variable lighting, background noise, and PPE occlusion (Protik et al., 2021). However, consistent real-time effectiveness in dynamic settings still requires further development of more robust solutions.

The non-intrusive detection of PPE usage in laboratory settings requires (1) a suitable hardware system comprising cameras, along with (2) computer vision-based algorithms, which are essential for effective monitoring.

In hardware system design, the strategic placement of cameras in the donning area, rather than inside the laboratory, is recommended. This is preferred because it captures individuals as they equip their PPE before entering hazardous zones. Additionally, environments with significant height variations and lighting variability greatly affect detection accuracy. The physical occlusion of PPE items, either by the individual's body or by surrounding objects, further complicates the task of ensuring full compliance. Computer vision-based algorithms face challenges with overlapping objects, which can lead to tracking and identification errors. Variations in individual postures, movements, and PPE appearances also reduce detection accuracy. This problem is exacerbated if the AI model is trained on a limited dataset that does not accurately represent real-world diversity. Additionally, static elements such as posters or dynamic elements can be misclassified as PPE, leading to a high rate of false positives.

To address the hardware system design issues, a solution involves strategically placing multiple cameras to cover the entire process, eliminating blind spots, and confirming correct PPE usage before individuals enter sensitive zones. In computer vision-based algorithms, the system uses adaptive image processing techniques to tackle variable lighting, occlusion, and posture variations. Software enhancements include multi-object tracking and pose estimation algorithms, trained on diverse datasets for accurate PPE detection. Incorporating edge cameras that utilize decentralized computing significantly enhances the operational efficiency of real-time PPE detection systems.
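The rule layer wrapped around such a detector can be sketched as below. The detect_ppe() function is a hypothetical placeholder for the trained object-detection model (e.g. an SSD/YOLO-style network), and the per-laboratory PPE requirements are illustrative assumptions.

```python
# Sketch of the compliance-checking logic around the detector. detect_ppe() is
# a hypothetical placeholder for the trained object-detection model; only the
# rule layer is shown here.
REQUIRED_PPE = {
    "chemistry_lab": {"lab_coat", "safety_goggles", "gloves", "closed_shoes"},
    "bio_lab": {"lab_coat", "gloves", "mask", "bouffant_cap"},
}

def detect_ppe(frame):
    """Placeholder: would run the detector and return detected PPE classes."""
    return {"lab_coat", "gloves", "closed_shoes"}

def check_compliance(frame, lab):
    detected = detect_ppe(frame)
    missing = REQUIRED_PPE[lab] - detected
    return (len(missing) == 0, missing)

ok, missing = check_compliance(frame=None, lab="chemistry_lab")
print("compliant" if ok else f"entry blocked, missing: {sorted(missing)}")
```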

Future conceptual challenges in PPE detection systems include the ability to effectively detect multiple individuals. Each laboratory may also require customized PPE based on specific safety requirements. These variations necessitate the development of highly adaptable AI models capable of recognizing a wide range of PPE and distinguishing between different individuals, even in crowded settings, to ensure compliance and safety.

References:

  • M. Arfan et al., “Advancing Workplace Safety: A Deep Learning Approach for PPE Detection using Single Shot Detector”, International Workshop on Artificial Intelligence and Image Processing, Indonesia, pp. 127-132, 2023.
  • Protik et al., “Real-time PPE Detection Using YOLOv4 and TensorFlow,” IEEE Region 10 Symposium, Jeju, Korea, pp. 1-6, 2021.


Integrating batch operations involving liquid-solid mixtures into continuous process flows

Valeria González Sotelo, Pablo Monzón, Soledad Gutiérrez Parodi

Universidad de la República, Facultad de Ingeniería, Uruguay

While there has been a growth in specialized simulators for batch processes, the prevailing trend is towards simple cycle modelling. Batch processes can then be integrated into an overall flowsheet, with the output stream properties calculated based on the established reaction conditions (time, temperature, etc.). To guarantee a continuous flow of material, an accumulation tank is usually incorporated.

Moreover, a wide range of heterogeneous batch processes exists within industry. Examples include sequencing batch reactors in wastewater treatment, solid-liquid extraction processes, adsorption reactors, lignocellulosic biomass hydrolysis and grain soaking. When processing solid-liquid mixtures, or multiphase mixtures in general, phase separation can be exploited, allowing for savings in resources such as raw materials or energy. In fact, these processes enable the separate discharge of the liquid and solid phases, providing flexibility to selectively retain either phase or a fraction thereof. Sequencing batch reactors retain microbial flocs while periodically discharging a portion of the treated effluent. By treating lignocellulosic biomass with a hot, pressurized aqueous solution, lignin and pentoses can be solubilized, leaving cellulose as the remaining solid phase1. In this case, since cellulose is the fraction of interest, the solid can be separated and most of the liquid phase retained for processing a new batch of biomass, thus saving reagents, water, and energy.

In a heterogeneous batch process, a degree of freedom typically emerges that often becomes a decision variable in the design of these processes: the solid-to-liquid ratio (S/L), which is a critical parameter that influences factors such as reaction rate, heat and mass transfer. Partial phase retention adds a new degree of freedom, the retained fraction, to the process design.

The re-use process is thus inherently dynamic. In a traditional batch process, the time horizon for analysis corresponds to the reaction, loading, and unloading time. For re-use processes, the mass balance will cause reaction products to accumulate in the retained phase from cycle to cycle. To take this into account, the time horizon for mass balances needs to be extended to include as many cycles as necessary. Eventually, a periodic operating condition will be reached.

The primary objective of this work is to incorporate the batch-with-reuse model into flowsheets, in the same way as traditional batches, by identifying the periodic condition under the given process conditions. A general algorithm to simulate the periodic condition, suitable for any kinetics, is proposed; it could enable the coupling of these processes in a simulation flowsheet. Regarding the existence of a periodic condition, an analytical study of the involved kinetic expressions and illustrative examples will be included.
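The cycle-to-cycle iteration behind such an algorithm can be sketched as below: simulate one batch with a retained liquid fraction, carry the retained phase into the next cycle, and stop once the cycle-end state repeats within a tolerance. The first-order kinetics, retained fraction, and make-up concentration are stand-in assumptions, not the kinetics studied in the paper.

```python
# Sketch of the cycle-to-cycle fixed-point iteration towards the periodic
# condition of a batch-with-reuse process. First-order consumption of a
# reagent is used as a stand-in kinetics.
import numpy as np
from scipy.integrate import solve_ivp

K, T_BATCH = 0.4, 5.0            # 1/h, h
RETAINED_FRACTION = 0.7          # liquid fraction kept for the next batch
C_FRESH = 1.0                    # reagent concentration of fresh make-up liquid

def one_cycle(c_start):
    sol = solve_ivp(lambda t, c: [-K * c[0]], (0, T_BATCH), [c_start])
    return sol.y[0, -1]

def periodic_condition(tol=1e-6, max_cycles=200):
    c_end_prev, c_in = None, C_FRESH
    for cycle in range(1, max_cycles + 1):
        c_end = one_cycle(c_in)
        # retained liquid mixed with fresh make-up for the next batch
        c_in = RETAINED_FRACTION * c_end + (1 - RETAINED_FRACTION) * C_FRESH
        if c_end_prev is not None and abs(c_end - c_end_prev) < tol:
            return cycle, c_end
        c_end_prev = c_end
    raise RuntimeError("no periodic condition reached")

cycles, c_periodic = periodic_condition()
print(f"periodic condition after {cycles} cycles, end-of-batch C = {c_periodic:.4f}")
```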

1 Mangone, F., Gutiérrez, S. (2020). A Recycle Model of Spent Liquor in Pre-treatment of Lignocellulosic Biomass. Computer Aided Chemical Engineering, 48, 565-570. Elsevier. https://doi.org/10.1016/B978-0-12-823377-1.50095-1



Enhancing decision-making by prospective Life Cycle Assessment linked to Integrated Assessment Models: the roadmap of formic acid production

Marta Rumayor, Javier Fernández-González, Antonio Domínguez-Ramos, Angel Irabien

University of Cantabria, Spain

Formic acid (FA) is gaining attention as a versatile compound used both as a chemical and as an energy carrier. Currently, it is produced by a two-step fossil-based process that involves the reaction of methanol with carbon monoxide to methyl formate, which is then hydrolyzed to form FA. With growing global concerns about climate change, the exploration of new strategies to produce FA from renewable sources has never been more important. Several sustainable FA production pathways have emerged in recent decades, including those based on chemocatalytic and electrochemical processes. Their environmental viability has been confirmed through ex-ante life cycle assessment (LCA), provided there are enhancements in energy consumption and consumable durability.1,2 However, these studies have been conducted using static approaches, which may not accurately reflect the evolution of the background system and hence the long-term reliability of the environmental prospects, given the decarbonization pathways unfolding in the background processes.

Identifying exogenous challenges affecting FA production due to supply changes is just as crucial as targeting the hotspots in the foreground technologies. This study aims to overcome this epistemological uncertainty by performing a dynamic life cycle assessment (d-LCA) utilizing the open-source Python premise tool with the IMAGE integrated assessment model (IAM). A time-dependent background system was developed, aligned with prospective scenarios based on socio-economic pathways and climate change mitigation targets. This was coupled with the ongoing portfolio of emerging renewable technologies together with the traditional decarbonization approaches.
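Conceptually, the scenario sweep behind such a d-LCA can be sketched as below: for each IAM pathway and year, a prospective background database is generated and the FA production routes are re-assessed. The build_background() and assess_gwp() functions are hypothetical placeholders standing in for the premise/LCA calls, and the scenario names and returned values are illustrative only.

```python
# Conceptual sketch of the d-LCA scenario sweep. build_background() and
# assess_gwp() are hypothetical placeholders; values are illustrative.
SCENARIOS = [("IMAGE", "SSP2-Base"), ("IMAGE", "SSP2-RCP19")]
YEARS = [2030, 2040, 2050]
ROUTES = ["fossil (methyl formate)", "chemocatalytic", "electrochemical"]

def build_background(model, pathway, year):
    """Placeholder: would generate a prospective background DB with premise."""
    return {"model": model, "pathway": pathway, "year": year}

def assess_gwp(route, background):
    """Placeholder: would run the LCA and return kg CO2-eq per kg FA."""
    return 2.3 if "fossil" in route else 1.0

results = {}
for model, pathway in SCENARIOS:
    for year in YEARS:
        bg = build_background(model, pathway, year)
        for route in ROUTES:
            results[(pathway, year, route)] = assess_gwp(route, bg)

best = min(results, key=results.get)
print("lowest-impact option across scenarios:", best, results[best])
```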

Given the substantial energy demands of chemocatalytic- and electro-based technologies, they could not be considered a feasible decarbonization solution under pessimistic policy scenarios. Conversely, a rapid development rate could enhance the feasibility of the electro-based pathway by 2030 within the optimistic background trajectory. A fully renewable electrolytic production approach could significantly reduce carbon emissions (up to 70%) and fossil fuel dependence (up to 80%) compared to conventional production by 2050. Other traditional approaches involve an intermediate decarbonization/defossilization synergy. Despite the potential of the electro-based pathway, a complete shift would involve land degradation risks. To facilitate the development of electrolyzers, prioritizing reductions in the use of scarce materials is crucial, aiming to enhance durability to 7 years by 2050. This study enables a comprehensive analysis of the portfolio of production processes, minimizing the overall impact across several regions and time horizons and interlinking them with energy-economy-climate systems.

Acknowledgements

The present work is related to the CAPTUS Project. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. J.F.-G. would like to thank the Spanish Ministry of Science and Innovation (MICIN) for the financial support through the FPU grant (19/05483).

References

(1) Rumayor, M.; Dominguez-Ramos, A.; Perez, P.; Irabien, A. Journal of CO2 Utilization 2019, 34, 490–499.

(2) Rumayor, M.; Dominguez-Ramos, A.; Irabien, A. Sustainable Production and Consumption 2019, 18, 72–82.



Towards Self-Tuning PID Controllers: A Data-Driven, Reinforcement Learning Approach for Industrial Automation

Kyle Territo, Peter Vallet, Jose Romagnoli

LSU, United States of America

As industries transition toward the digitalization and interconnectedness of Industry 4.0, the availability of vast amounts of process data opens new opportunities for optimizing industrial control systems. Traditional Proportional-Integral-Derivative (PID) controllers often require manual tuning to maintain optimal performance in the face of changing process conditions. This paper presents an automated and adaptive method for PID tuning, leveraging historical closed-loop data and machine learning to create a data-driven approach that can continuously evolve over time.

At the core of this method is the use of historical process data to train a plant surrogate model, which accurately mimics the behavior of the real system under various operating conditions. This model allows for safe and efficient exploration of control strategies without interfering with live operations. Once the surrogate model is constructed, a reinforcement learning (RL) agent interacts with it to learn the optimal control policy. This agent is trained to respond dynamically to the current state of the plant, which is defined by a comprehensive set of variables, including operational conditions, system disturbances, and other relevant measurements.

By integrating RL into the tuning process, the system is capable of adapting to a wide range of scenarios without the need for manual intervention. The RL agent learns to adjust the PID controller parameters based on the evolving state of the system, optimizing performance metrics such as stability, response time, and energy efficiency. After the training phase, the agent is deployed online to monitor the real-time state of the plant. If any significant deviations or disturbances are detected, the RL agent is called upon to make real-time adjustments to the PID controller, ensuring that the process remains optimized under new conditions.
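The deployment loop described above can be sketched as follows: an agent (here a placeholder policy) maps the observed plant state to PID gains, and the PID then acts on the surrogate or real plant. The first-order plant and the gain schedule are assumptions for illustration only, not the trained agent or the industrial system from the study.

```python
# Sketch of the online re-tuning loop: a policy maps the plant state to PID
# gains; the PID acts on a first-order plant surrogate. All numbers are
# illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def agent_policy(state):
    """Placeholder for the trained RL agent: returns (kp, ki, kd) gains."""
    setpoint, measurement = state
    aggressive = abs(setpoint - measurement) > 0.5
    return (2.0, 0.8, 0.05) if aggressive else (1.0, 0.3, 0.02)

# Closed-loop simulation on a first-order plant surrogate dy/dt = (-y + u)/tau.
y, setpoint, dt, tau = 0.0, 1.0, 0.1, 2.0
pid = PID(*agent_policy((setpoint, y)))
for k in range(100):
    pid.kp, pid.ki, pid.kd = agent_policy((setpoint, y))   # online re-tuning
    u = pid.step(setpoint, y, dt)
    y += dt * (-y + u) / tau
print(f"output after 10 s: {y:.3f} (setpoint {setpoint})")
```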

One of the unique advantages of this approach is its ability to continuously update and refine the surrogate model and RL agent over time. As the plant operates, real-time data is collected and integrated into the historical dataset, allowing the models to adapt to any long-term changes in the process. This continuous learning capability makes the system highly resilient and scalable, ensuring optimal performance even in the face of new and unforeseen operating conditions.

By combining data-driven modeling with reinforcement learning, this method provides a robust, adaptive, and automated solution for PID tuning in modern industrial environments. The approach not only reduces the need for manual tuning and oversight but also maximizes the use of available process data, aligning with the principles of Industry 4.0. As industrial systems become increasingly complex and data-rich, such methods hold significant potential for improving process efficiency, reliability, and sustainability.



Energy integration of an intensified biorefinery scheme from waste cooking oil to produce sustainable aviation fuel

Ma. Teresa Carrasco-Suárez1, Araceli Guadalupe Romero-Izquierdo2

1Faculty of Engineering, Monash University, Australia; 2Facultad de Ingeniería, Universidad Autónoma de Querétaro, Mexico

Sustainable aviation fuel (SAF) has been proven to be a viable alternative for reducing the CO2 emissions derived from aviation activities, supporting the sector's sustainable growth. However, the reported SAF processes are not economically competitive with fossil-derived jet fuel; thus, the application of strategies to address their economic shortcomings has captured the interest of researchers and industry. In this sense, in 2022 Carrasco-Suárez et al. studied the intensification of the SAF separation zone of a biorefinery scheme based on waste cooking oil (WCO), which allowed a reduction of 3.07 % in CO2 emissions with respect to the conventional processing scheme, while also reducing the operating cost of steam and cooling water services. Despite these improvements, the WCO biorefinery scheme is not economically viable and has high energy requirements. For this reason, in this work we present the energy integration of the whole biorefinery scheme from WCO, including the intensification of all separation zones involved in the scheme, using Aspen Plus V.10.0. The energy integration of the WCO biorefinery scheme was addressed using the pinch point methodology to minimize its energy requirements. The energy integration (EI-PI-S) results are presented in the form of indicators to compare them with the conventional scheme (CS) and the intensified scheme before energy integration (PI-S). The defined indicators are: total annual cost (TAC), energy investment per unit of energy delivered by the products (EI-P), energy investment per mass of the main product (EI-MP, SAF as main product) and CO2 emissions per mass of main product (CO2-MP). According to the results, the EI-PI-S shows the best indicators with respect to the CS and PI-S, reducing the steam and cooling water requirements by 14.34 % and 31.06 %, respectively, relative to the PI-S; in addition, CO2 emissions were reduced by 13.85 % and 14.13 % relative to the CS and PI-S, respectively. However, the TAC of the EI-PI-S is 0.5 % higher than that of the PI-S. The studied integrated and intensified WCO biorefinery scheme emerges as a feasible option to produce SAF and other biofuels, meeting the principle of minimum energy requirements and improving its economic performance.
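The pinch-point targeting step can be illustrated with a minimal problem-table (heat cascade) sketch, as below. The four streams and ΔTmin are illustrative placeholders, not data from the WCO biorefinery flowsheet.

```python
# Minimal problem-table (heat cascade) sketch of the pinch methodology.
# Stream data are illustrative placeholders.
def problem_table(streams, dt_min=10.0):
    """streams: (supply T, target T, mCp). Returns (Qh_min, Qc_min) in kW."""
    shifted = []
    for ts, tt, mcp in streams:
        shift = -dt_min / 2 if ts > tt else dt_min / 2      # hot down, cold up
        shifted.append((ts + shift, tt + shift, mcp, ts > tt))
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)

    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net_mcp = sum(mcp if hot else -mcp
                      for ts, tt, mcp, hot in shifted
                      if min(ts, tt) <= lo and max(ts, tt) >= hi)
        heat += net_mcp * (hi - lo)
        cascade.append(heat)
    qh_min = -min(cascade)            # shift cascade so no interval is negative
    qc_min = cascade[-1] + qh_min
    return qh_min, qc_min

streams = [(250, 40, 15.0), (200, 80, 25.0),    # hot streams (°C, °C, kW/K)
           (20, 180, 20.0), (140, 230, 30.0)]   # cold streams
qh, qc = problem_table(streams)
print(f"minimum hot utility: {qh:.0f} kW, minimum cold utility: {qc:.0f} kW")
```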

References:

M. T. Carrasco-Suárez, A.G. Romero-Izquierdo, C. Gutiérrez-Antonio, F.I. Gómez-Castro, S. Hernández, 2022. Production of renewable aviation fuel by waste cooking oil processing in a biorefinery scheme: Intensification of the purification zone. Chem. Eng. Process. - Process Intensif. 181, 109103. https://doi.org/10.1016/j.cep.2022.109103
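
For readers unfamiliar with the pinch point methodology used above, the following self-contained Python sketch implements the classic problem-table cascade on a small hypothetical stream set; the stream data and ΔTmin are illustrative and are not taken from the WCO biorefinery.

    # Problem-table algorithm (pinch analysis) for a hypothetical stream set.
    # Each stream: (supply T [°C], target T [°C], CP [kW/°C]); hot streams cool down.
    streams = [(250, 120, 2.0), (200, 80, 4.0),    # hot streams
               (90, 220, 3.0), (130, 210, 2.5)]    # cold streams
    dTmin = 10.0

    def shifted(Ts, Tt):
        hot = Ts > Tt
        shift = -dTmin / 2 if hot else dTmin / 2
        return Ts + shift, Tt + shift, hot

    spans = [shifted(Ts, Tt) + (cp,) for Ts, Tt, cp in streams]
    temps = sorted({t for s in spans for t in s[:2]}, reverse=True)

    cascade, heat = [0.0], 0.0
    for Thi, Tlo in zip(temps, temps[1:]):
        net_cp = 0.0
        for s_sup, s_tar, hot, cp in spans:
            lo, hi = sorted((s_sup, s_tar))
            if lo <= Tlo and hi >= Thi:            # stream spans this interval
                net_cp += cp if hot else -cp
        heat += net_cp * (Thi - Tlo)
        cascade.append(heat)

    q_hot_min = max(0.0, -min(cascade))            # minimum hot utility [kW]
    q_cold_min = cascade[-1] + q_hot_min           # minimum cold utility [kW]
    pinch_shifted_T = temps[cascade.index(min(cascade))]
    print(q_hot_min, q_cold_min, pinch_shifted_T)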



Integrating Renewable Energy and CO₂ Utilization for Sustainable Chemical Production: A Superstructure Optimization Approach

Tianen Lim, Yuxuan Xu, Zhihong Yuan

Tsinghua University, China, People's Republic of

Climate change, primarily caused by the extensive emission of greenhouse gases, particularly carbon dioxide (CO₂), has intensified global efforts toward achieving carbon neutrality. In this context, renewable energy and CO₂ utilization technologies have emerged as key strategies for reducing the reliance on fossil fuels and mitigating environmental impacts. In this work, a superstructure optimization model is developed to integrate renewable energy networks and chemical production processes. The energy network incorporates multiple sources, including wind, solar, and biomass, along with energy storage systems to enhance reliability and minimize grid dependence. The reaction network features various pathways that utilize CO₂ as a raw material to produce high value-added chemicals such as polyglycolic acid (PGA), ethylene-vinyl acetate (EVA), and dimethyl carbonate (DMC), allowing for efficient conversion and resource utilization. The optimization is formulated as a mixed-integer linear programming (MILP) model, targeting the minimization of production costs while identifying the most efficient energy and reaction routes. This research supports the green transition of the chemical industry by optimizing a model that integrates renewable energy and CO₂ in chemical processes, contributing to more sustainable production methods.
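
A minimal sketch of the kind of MILP formulation described above, written with the open-source PuLP library; all sources, routes, costs, and yields are invented placeholders, and the authors' actual superstructure is considerably richer.

    import pulp

    # Illustrative data (hypothetical): unit costs and capacities.
    energy_sources = {"wind": 60, "solar": 50, "biomass": 80}        # $/MWh
    capacity = {"wind": 400, "solar": 300, "biomass": 500}           # MWh available
    routes = {"PGA": (900, 2.1), "DMC": (700, 1.6)}                  # (fixed route cost $, MWh per t)
    demand_t = 100.0                                                  # t of product required

    m = pulp.LpProblem("superstructure", pulp.LpMinimize)
    e = {s: pulp.LpVariable(f"e_{s}", 0, capacity[s]) for s in energy_sources}
    y = {r: pulp.LpVariable(f"y_{r}", cat="Binary") for r in routes}
    p = {r: pulp.LpVariable(f"p_{r}", 0) for r in routes}

    # Objective: energy cost plus fixed cost of the selected route(s).
    m += pulp.lpSum(energy_sources[s] * e[s] for s in e) + \
         pulp.lpSum(routes[r][0] * y[r] for r in routes)

    m += pulp.lpSum(p[r] for r in routes) == demand_t                 # meet demand
    for r in routes:
        m += p[r] <= demand_t * y[r]                                  # produce only via selected routes
    m += pulp.lpSum(e[s] for s in e) >= \
         pulp.lpSum(routes[r][1] * p[r] for r in routes)              # energy balance
    m += pulp.lpSum(y[r] for r in routes) == 1                        # single route (illustrative)

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print({r: p[r].value() for r in routes}, {s: e[s].value() for s in e})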



Sustainable production of L-lactic acid from lignocellulosic biomass using a recyclable buffer: Process development and techno-economic evaluation

Donggeun Kang, Donghyeon Kim, Dongin Jung, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

With growing concerns about energy security and climate change, there is an increasing emphasis on finding solutions for sustainable development. To address this problem, using lignocellulosic biomass (LCB) to produce polymeric materials is one of the promising strategies to reduce dependence on fossil fuels. L-lactic acid (L-LA), a key monomer in biodegradable plastics, is a sustainable alternative that can be derived from LCB. The L-LA production process typically involves several technologies such as fermentation, filtration, and distillation. In the L-LA production process, large amounts of buffer are used to maintain proper pH during fermentation, and conventional buffers (e.g., CaCO3) are often selected because of their low cost. However, these buffers cannot be recycled efficiently, and the potential of recyclable buffers remains uncertain. In this work, we aim to develop and evaluate a novel process for sustainable L-LA production using a recyclable buffer (i.e., KOH). The process involves a series of different unit operations such as pretreatment, fermentation, extraction, and electrolysis. In particular, the fermentation process is designed to achieve high yields of L-LA by maximizing the conversion of sugars to L-LA. In addition, an efficient buffer regeneration process using membrane electrolysis is implemented to recycle the buffer with minimal energy input. We then evaluated the viability of the proposed process compared to the conventional process based on the minimum selling price (MSP) and net CO2 emissions (NCE). The MSP for L-LA was evaluated to be 0.88 USD/kg L-LA, and the NCE was assessed to be 3.31 kg CO₂-eq/kg L-LA. These results represent a 15% reduction in MSP and a 10% reduction in NCE compared to the conventional process. Additionally, a sensitivity analysis was performed with a 20% change in production scale and in LCB composition relative to the reference values. The sensitivity analysis results showed that the MSP varied from -4.4% to 3.6% with production scale and from -13.0% to 19.0% with LCB composition. The proposed process, as a cost-effective and eco-friendly process, promotes biotechnology practices for the sustainable production of L-LA.

References

Wang, Yumei, Zengwei Yuan, and Ya Tang. "Enhancing food security and environmental sustainability: A critical review of food loss and waste management." Resources, Environment and Sustainability 4 (2021): 100023.



Potential of chemical looping for green hydrogen production from biogas: process design and techno-economic analysis

Donghyeon Kim, Donggeun Kang, Dongin Jung, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

Hydrogen (H₂), as the most promising alternative to conventional fossil fuel-based energy carriers, faces the critical challenge of diversifying its sources and lowering production costs. In general, there are two main technological routes for H2 production: electrolysis using renewable power and catalytic reforming of natural gas. Biogas, produced from organic waste, offers a renewable and carbon-neutral option for H₂ production, but due to its high CO2 content it requires either pre-separation of CO2 from CH4 or a catalyst of different performance before it can be used as a feed gas in existing reforming processes. Chemical looping reforming (CLR), as an advanced H₂ production system, uses an oxygen carrier as the oxidant instead of air, allowing raw biogas to be used directly in the reforming process. Recently, a number of studies on the design and analysis of the CLR process have been reported, and these technological studies have gradually secured the economic feasibility of H2 production by CLR. However, for the CLR process to be deployed in the biogas treatment industry, further research is needed to comprehensively analyze the economic, environmental, and technical capabilities of CLR processes under different feed conditions, required capacities, and targeted H2 purity. This study proposes new biogas-based CLR processes and analyzes their capability from techno-economic and environmental perspectives: ⅰ) conventional CLR as a base process, ⅱ) chemical looping steam reforming (CLSR), ⅲ) chemical looping water splitting (CLWS), and ⅳ) chemical looping dry reforming (CLDR). The proposed processes consist of unit operations such as a CLR reactor, a water-gas shift reactor, a pressure swing adsorption (PSA) unit, and a monoethanolamine (MEA) sorbent-based CO₂ absorption unit. Evaluation metrics include unit production cost (UPC), net CO2 equivalent emissions (NCE), and energy efficiency to compare economic, environmental, and technical performance, respectively. Each process is simulated using the commercial process simulator Aspen Plus to obtain mass and energy balance data. The oxygen carrier to fuel ratio and the heat exchanger network (HEN) are optimized through thermodynamic analysis to ensure efficient redox reactions, maximize heat recovery, and achieve autothermal conditions. As a result, we comparatively analyzed the economic and environmental capability of the proposed processes by identifying the major cost drivers and CO2 emission contributors. In addition, a sensitivity analysis was performed over various scenarios to provide technical solutions that improve the economic and environmental performance and support the real-world implementation of the CLR process.



Data-Driven Soft Sensors for Process Industries: Case Study on a Delayed Coker Unit

Wei Sun1, James G. Brigman2, Cheng Ji1, Pratap Nair2, Fangyuan Ma1, Jingde Wang1

1Beijing University of Chemical Technology, China, People's Republic of; 2Ingenero Inc., 4615 Southwest Freeway, Suite 320, Houston TX 77027, USA

Research on data-driven soft sensors has been conducted extensively, yet reports of successful industrial applications remain notably scarce. This can be attributed to the variable operating conditions and frequent disturbances encountered during real-time process operation. Industrial data are typically nonlinear, dynamic, and highly unbalanced, which poses major challenges in capturing the key characteristics of the underlying processes. To address this issue, this work presents a comprehensive solution for industrial applications of soft sensors, covering feature selection, feature extraction, and model updating.

Feature selection aims to identify variables that are both independent of each other and have a significant impact on the performance of concern, including quality and safety. It not only helps to reduce the dimensionality of the data and simplify the model, but also improves the prediction performance. Process knowledge can be utilized to initially screen variables, and correlation and redundancy analysis must then be employed, because information redundancy not only increases the computational load of modeling but also significantly affects its prediction accuracy. Therefore, a mutual information-based relevance-redundancy algorithm is introduced for feature selection in this work, in which the relevance and redundancy among process variables are evaluated through a comprehensive correlation function and ranked according to their importance using a greedy search to obtain the optimal variable set [1]. Feature extraction is then performed to capture internal features from the optimal variable set and build the association between latent features and output variables. Considering the complexity of industrial processes, deep learning techniques are often leveraged to handle the intricate patterns and relationships within the data. Long Short-Term Memory (LSTM) networks, a specific type of recurrent neural network (RNN), are particularly well-suited for this task due to their ability to capture long-term dependencies in sequential data. In industrial processes, many variables exhibit temporal correlations, and LSTM networks can effectively model these dependencies by maintaining a memory state that allows them to learn from sequences of data over extended periods. Meanwhile, a differential unit is embedded in the latent layer of the LSTM network in this work to simultaneously handle the short-term nonstationary features caused by process disturbances [2]. Once trained, the model is updated during online application to incorporate slow drifts in equipment and reaction agents. Some quality-related data only become available with a delay relative to real-time measurements, but they can still be utilized to fine-tune the model parameters, ensuring sustained prediction accuracy over an extended period. To verify the effectiveness of this work, a case study on a delayed coker unit is investigated. The results demonstrate promising long-term prediction performance for tube metal temperature, indicating the potential of this approach for industrial application.

[1] Tao, T., Ji, C., Dai, J., Rao, J., Wang, J. and Sun, W. Data-based Health Indicator Extraction for Battery SOH Estimation via Deep Learning, Journal of Energy Storage, 2024

[2] Ji, C., Ma, F., Wang, J., & Sun, W. Profitability Related Industrial-Scale Batch Processes Monitoring via Deep Learning based Soft Sensor Development, Computers and Chemical Engineering, 2022
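
A compact, generic illustration of the two data-driven steps described above (mutual-information-based variable ranking followed by an LSTM regressor on the selected variables), using synthetic data, scikit-learn, and Keras; it is not the authors' relevance-redundancy algorithm or their differential LSTM unit.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from tensorflow import keras

    # Synthetic stand-in for historical process data: 2000 samples, 20 candidate variables.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 20))
    y = 0.7 * X[:, 3] + 0.3 * X[:, 7] + 0.05 * rng.normal(size=2000)   # hidden "quality" target

    # 1) Feature selection: rank variables by mutual information with the target.
    mi = mutual_info_regression(X, y, random_state=1)
    selected = np.argsort(mi)[-5:]                      # keep the 5 most relevant variables

    # 2) Reshape the selected variables into overlapping time windows for the LSTM.
    window = 10
    Xw = np.stack([X[i:i + window, selected] for i in range(len(X) - window)])
    yw = y[window:]

    model = keras.Sequential([
        keras.Input(shape=(window, len(selected))),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(Xw, yw, epochs=5, batch_size=64, verbose=0)
    print("selected variables:", selected, "loss:", model.evaluate(Xw, yw, verbose=0))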



Retrofitting AP-X LNG Process Through Mixed Refrigerant Composition Variation: A Sensitivity Analysis Towards a Decarbonization Objective

Mutaman Abdulrahim, Saad Al-Sobhi, Fares Almoamoni

Chemical Engineering department, Qatar University, Qatar

Despite the promising outlook for the LNG market as a cost-effective energy carrier, associated GHG emissions remain an obstacle toward the net-zero emissions target. This study focuses on the AP-X LNG process, investigating the potential for decarbonization through optimization of the mixed refrigerant (MR) composition. The process simulation is carried out using Aspen HYSYS v.12.1 to simulate the large-scale AP-X LNG process, with the Peng-Robinson equation of state as the fluid package. Several reported studies have incorporated ethylene into their MR cycle instead of ethane, which might result in different MR volumes and energy requirements. Different refrigerant compositions are examined through the Aspen HYSYS optimizer, aiming to identify the optimal MR composition that minimizes environmental impact and maximizes profitability without compromising the efficiency and performance of the process. An Energy, Exergy, Economic, and Environmental (4E) assessment will be performed to obtain key performance indicators such as specific power consumption, exergy efficiency, cost of production, etc. This work will contribute to the existing AP-X-based plant retrofitting activity and sustainability, offering insights into pathways for reducing the carbon footprint of the AP-X process.



Performance Evaluation of Gas Turbine Combined Cycle Plants with Hydrogen Co-Firing Under Various Operating Conditions

Hyeonrok Choi1,2, Won Yang1, Youngjae Lee1, Uendo Lee1, Changkook Ryu2, Seongil Kim1

1Korea Institute of Industrial Technology, Korea, Republic of (South Korea); 2School of Mechanical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

In response to global efforts on climate change, countries are advancing low-carbon strategies and aiming for carbon neutrality. To reduce CO₂ emissions, fossil fuel power plants are integrating co-firing and combustion technologies centered on carbon-free fuels. Hydrogen has emerged as a promising fuel option, especially for gas turbine combined cycle (GTCC) plants when co-fired with natural gas. Due to the similar Wobbe Index (WI) values of hydrogen and methane, minimal modifications are required to the existing gas turbine nozzles. Furthermore, hydrogen's high combustion limit allows stable operation even at elevated fuel-air ratios. Gas turbines are also adaptable to changes in ambient conditions, which enables them to accommodate the output variations and operational changes associated with hydrogen co-firing. Hydrogen, having distinct combustion characteristics compared to natural gas, affects gas turbine operation and alters the properties of the exhaust gases. The increased water vapor fraction from hydrogen co-firing results in a higher specific heat capacity of the exhaust gases and a reduced flow rate, leading to changes in turbine power output and efficiency compared to methane combustion. These changes impact the heat transfer properties of the Heat Recovery Steam Generator (HRSG) in the bottom cycle, thereby affecting the overall thermal performance of the GTCC plant. Since gas turbine operation varies with seasonal changes in temperature and humidity, it is essential to evaluate hydrogen co-firing's impact on thermal performance across different seasonal conditions.

This study developed an in-house code to evaluate gas turbine performance during hydrogen co-firing and to assess the HRSG and steam turbine cycle based on the heat transfer mechanism, focusing on the impact on thermal performance across different seasonal conditions. Hydrogen co-firing effects on GTCC plant thermal performance were assessed under various ambient conditions. Three ambient cases (-12°C, RH 60%; 5°C, RH 60%; and 32°C, RH 70%) were analyzed for two scenarios: one with fixed Turbine Inlet Temperature (TIT) and one with fixed power output. A 600 MWe-class GTCC plant model consists of two F-class gas turbines and one steam turbine. Compressor performance maps and a turbine choking equation were used to analyze operating point and isentropic efficiency variations. The HRSG model, developed from heat exchanger geometric data, provided results for gas and water-steam side temperatures and heat transfer rates. The GTCC plant models were validated based on manufacturer data for design and off-design conditions.

The study performed process analysis to predict GTCC plant thermal performance and power output under hydrogen co-firing. Thermodynamic and off-design models of the gas turbine, HRSG, and steam turbine were used to analyze changes in exhaust temperature, flow rate, and composition, along with corresponding bottom cycle output variations. The effects of seasonal conditions on thermal performance under hydrogen co-firing were analyzed, providing a detailed evaluation of its impact on GTCC plant efficiency and output across different seasons. This analysis provides insights into the effects of hydrogen co-firing on GTCC plant performance across seasonal conditions, highlighting its role in hydrogen applications for combined cycle plants.
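
To make the Wobbe Index argument above concrete, the short sketch below computes the volumetric heating value and Wobbe Index of CH4/H2 blends from rounded pure-component properties; the numerical values are approximate textbook figures used only for illustration.

    import math

    # Approximate higher heating values [MJ/Nm3] and molar masses [g/mol].
    HHV = {"CH4": 39.8, "H2": 12.7}
    MW  = {"CH4": 16.04, "H2": 2.016}
    MW_AIR = 28.96

    def wobbe_index(x_h2):
        """Wobbe Index of a CH4/H2 blend with hydrogen mole fraction x_h2."""
        x = {"H2": x_h2, "CH4": 1.0 - x_h2}
        hhv_mix = sum(x[k] * HHV[k] for k in x)                 # volumetric HHV of the blend
        sg = sum(x[k] * MW[k] for k in x) / MW_AIR              # specific gravity vs. air
        return hhv_mix / math.sqrt(sg)

    for frac in (0.0, 0.3, 0.5):
        print(f"H2 = {frac:.0%}:  WI = {wobbe_index(frac):.1f} MJ/Nm3")

Even at 50 % hydrogen by volume the Wobbe Index of the blend remains within roughly 15 % of that of pure methane in this simplified calculation, which is the basis of the interchangeability argument.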



Modelling of Woody Biomass Gasification for process optimization

Yu Hui Kok, Yasuki Kansha

The University of Tokyo, Japan

In recent decades, public awareness of climate change has been increasing significantly due to the accelerating rate of global warming. To align with the Paris Agreement and the "Green Transformation (GX) Basic Policy" of 2023, the use of biomass instead of fossil fuel for power generation and biofuel production has increased (Zhou & Tabata, 2024). Biomass gasification is widely used for biomass conversion, as this thermochemical process can satisfy various needs such as producing heat, electricity, fuels and chemical synthesis (Situmorang et al., 2020). To date, extensive research has been conducted on biomass gasification, particularly focusing on reaction models of the process. These models enable more computationally efficient predictions of the yield and composition of various gas and tar species, making it feasible to simulate complex reactor configurations without compromising accuracy. However, existing models are too complex to apply to control systems or to optimize the process operating conditions effectively, limiting their practical use in industrial applications. To address this, a simple reaction model for biomass gasification was developed in this research. To analyze the gasification reaction of the system and evaluate the gasification model, two feedstocks, Japanese cedar and waste cardboard, were used in steam gasification experiments to gain insight into the gasifier behaviour. A reaction model is developed by combining a biomass gasification equilibrium model with the experimental data. This model simulates woody biomass gasification using AspenTech's Aspen Plus, a chemical process simulator. The accuracy of the model is validated by comparing simulation results with available literature data and experimental data. As a case study, the model was used for process optimization, examining the effect of varying key operating parameters of the steam gasifier, such as gasification temperature, biomass moisture content and steam-to-biomass ratio (S/B), on the conversion performance. The experimental results show that Japanese cedar has a higher syngas yield and H2/CO ratio than cardboard gasification, indicating that Japanese cedar is more promising for conversion to biofuel and bioenergy. The optimal operating condition for maximizing syngas was found to be a gasifier temperature of 850°C and an S/B of 2. The process simulation model effectively predicts syngas composition with an absolute error below 4%. This study supports the future development of a control system able to capture the complex interactions between the factors that influence gasifier performance and to optimize them for improved efficiency and scalability in industrial applications.

Reference

Zhou, J., & Tabata, T. (2024). Research Trends and Future Direction for Utilization of Woody Biomass in Japan. Retrieved from https://www.mdpi.com/2076-3417/14/5/2205

Situmorang, Y. A., Zhao, Z., Yoshida, A., Abudula, A., & Guan, G. (2020). Small-scale biomass gasification systems for power generation (<200 kW class): A review. In Renewable and Sustainable Energy Reviews (Vol. 117). Elsevier Ltd. https://doi.org/10.1016/j.rser.2019.109486



Comparative analysis of conventional and novel low-temperature and hybrid technologies for carbon dioxide removal from natural gas

Federica Restelli, Giorgia De Guido

Politecnico di Milano, Italy

Global electricity consumption is projected to rise in the coming decades. To meet this growing demand sustainably, renewable energy sources and, among fossil fuels, natural gas are expected to see the most significant growth. As natural gas consumption increases, it will also become necessary to extract it from low-quality reserves, which often contain high levels of acid gases such as carbon dioxide and hydrogen sulphide [1].

The aim of this work is to compare various innovative and conventional technologies for the removal of carbon dioxide from natural gas, considered as a binary mixture of methane and carbon dioxide, with carbon dioxide contents ranging from 5 to 70 mol%. It first examines the performance of the physical absorption process using propylene carbonate as a solvent, along with a hybrid process in which it is applied downstream of low-temperature distillation. These results are, then, compared with previously studied technologies, including conventional chemical absorption with amines, physical absorption with dimethyl ethers of polyethylene glycol (DEPG), low-temperature distillation, and hybrid processes that combine distillation and absorption [2].

Propylene carbonate is particularly advantageous, as noted in the literature [3], when hydrogen sulphide is not present in raw natural gas. The processes are simulated using Aspen Plus® V9.0 [4] and Aspen HYSYS® V9.0 [5]. The energy analysis is conducted using the "net equivalent methane" method, which allows duties of different natures to be compared [6]. The processes are compared in terms of methane equivalent consumption, methane losses, and product quality, offering guidance on the optimal process based on the composition of the raw natural gas.

References

[1] Langé S., Pellegrini L.A. (2016). Energy analysis of the new dual-pressure low-temperature distillation process for natural gas purification integrated with natural gas liquids recovery. Industrial & Engineering Chemistry Research 55, 7742-7767.

[2] De Guido, G., Gilardi, M., Pellegrini, L.A. (2021). Novel technologies for low-quality natural gas purification. In: Computer Aided Chemical Engineering (Vol. 50, pp. 241-246). Elsevier.

[3] Bucklin, R.W., Schendel, R.L (1984). Comparison of Fluor Solvent and Selexol processes. Energy Prog., United States.

[4] AspenTech (2016). Aspen Plus®, Burlington (MA), United States.

[5] AspenTech (2016). Aspen HYSYS®, Burlington (MA), United States.

[6] Pellegrini, L.A., De Guido, G., Valentina, V. (2019). Energy and exergy analysis of acid gas removal processes in the LNG production chain. Journal of Natural Gas Science and Engineering 61, 303-319.
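
As a rough illustration of the "net equivalent methane" idea used in the energy analysis, the sketch below converts heat, power, and refrigeration duties into the methane that would be consumed to supply them; the efficiencies, COP, and duty values are assumptions made for illustration, and the conversion factors of the cited method [6] may differ.

    LHV_CH4 = 50.0e6        # J/kg, lower heating value of methane (approximate)

    # Assumed conversion efficiencies (placeholders, not the values of the cited method)
    ETA_BOILER = 0.90       # thermal duty supplied by a fired heater/boiler
    ETA_POWER  = 0.55       # electricity from a combined cycle
    COP_REFRIG = 2.5        # refrigeration duty per unit of compressor power

    def equivalent_methane(heat_W=0.0, power_W=0.0, refrigeration_W=0.0):
        """Return the equivalent methane consumption [kg/s] of a set of duties."""
        ch4_heat  = heat_W / (ETA_BOILER * LHV_CH4)
        ch4_power = power_W / (ETA_POWER * LHV_CH4)
        ch4_refr  = (refrigeration_W / COP_REFRIG) / (ETA_POWER * LHV_CH4)
        return ch4_heat + ch4_power + ch4_refr

    # Compare two hypothetical CO2-removal schemes on a common basis:
    absorption   = equivalent_methane(heat_W=12e6, power_W=1.5e6)
    distillation = equivalent_methane(power_W=2.0e6, refrigeration_W=8e6)
    print(f"absorption:   {absorption:.3f} kg CH4/s")
    print(f"distillation: {distillation:.3f} kg CH4/s")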



Development of Chemical Recycling System for NOx Gas from NH3 Combustion

Isshin Ino, Yuka Sakai, Yasuki Kansha

Organization for Programs on Environmental Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Japan

Referring to the SDGs suggested by the United Nations, resource recycling is receiving more attention. However, some toxic but reactive wastes are only stabilized using additional resources before being released into the environment. Converting pollutants to valuable materials using their reactivity enhances the recycle ratio in society, leading to a reduction of environmental impact, which is called chemical recycling.

In this study, the potentials of chemical recycling for nitrogen oxides (NOX) gases from ammonia (NH3) combustion were evaluated from the chemical and economic points of view. Fundamental research for the system was conducted using NOX gas as the case study. As a chemical recycling method for NOX, the conversion to potassium nitrate (KNO3), valuable as fertilizer and raw material for gunpowder, was adopted. In this method, the high reactivity of NOX as toxicity was effectively utilized for chemical conversion.

On the other hand, most of the NOX gas in Japan is currently neutralized to nitrogen gas by the Selective Catalytic Reduction (SCR) method using additional ammonia. The nitrogen and water products are neutral and non-toxic but cannot be utilized further. Compared to this SCR method, the adopted method has high economic potential for the chemical recycling of NOX. The conversion ratio of chemical absorption by potassium hydroxide (KOH) was experimentally measured to analyze this method's environmental protection and economic potential. In addition, the system's economic value was estimated using the experimental data. The research further focuses on modeling and evaluating the NOX utilization system for NH3 combustion. The study concluded that the utilization system for NOX waste gas is feasible and profitable, enabling further resource utilization and the construction of a nitrogen cycle. Furthermore, applying this approach to other waste gases is promising for realizing a sustainable society.



Hybrid Model: Oxygen balance for the development of a digital twin

Marc Lemperle1, Pedram Ramin1, Julian Kager1, Benny Cassells2, Stuart Stocks2, Krist Gernaey1

1Technical University Denmark, Denmark; 2Novonesis, Fermentation Pilot Plant

The oxygen transfer rate (OTR) is often a limiting factor when targeting maximum yield in a fermentation process. Understanding the OTR is therefore critical for improved bioreactor performance, as dissolved oxygen often becomes the limiting factor in aerobic fermentations due to its inherent low solubility in liquids such as in fermentation broths1. With the long-term aim of establishing a digital twin framework, the initial phase of development involves mathematical modelling of the OTR in a pilot-scale bioreactor, hosting the filamentous fungus Aspergillus oryzae using an elaborate experimental design.

The experimental design is specifically tailored to the interplay of the factors influencing the OTR, e.g., airflow, back-pressure and agitation speed. In a first set of four fermentations, a full-factorial experimental design with three factors (aeration, agitation, and pressure) at two levels (high and low) was applied. Completing the 2³ factorial design, eight unique factor patterns and two centre points were investigated across the four fermentation processes.

Since viscosity plays a crucial role in determining mass transfer properties in the chosen fungal process, understanding its effects is essential for modelling the OTR2. A second set of experiments with a similar setup made it possible to investigate on-line viscosity measurement in the fermentation broth. The significant improvement in the description of the volumetric oxygen mass transfer coefficient (KLa), with an R2 fit of 92 %, together with the unsatisfactory mechanistic understanding of viscosity, therefore led to the development of a hybrid OTR model. The hybrid sequential OTR model includes a light gradient boosting machine model that predicts the online viscosity from both the mechanistic model outputs and the process data. Evaluation of the first series of experiments without online viscosity data showed an improved KLa fit with a normalized mean square error of up to 0.14. Further evaluation with production batches to demonstrate model performance is planned as a subsequent step.

Cell dry weight and off-line viscosity measurements were taken throughout each of the above-mentioned industrially based fermentation processes. The subsequent analysis aims to decipher the relationships between the OTR and the agitation, aeration, head pressure and viscosity, thus providing the basis for an accurate and reliable mathematical model of the oxygen balance inside a fermentation.

The hybrid OTR model presents the first step towards developing a digital twin, aiding with operational decisions for fermentation processes.
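
A schematic of the serial hybrid structure described above: a LightGBM regressor predicts broth viscosity from process data, and the prediction feeds a simple power-law kLa correlation; the correlation form, exponents, and synthetic data are placeholders rather than the authors' pilot-scale model.

    import numpy as np
    import lightgbm as lgb

    rng = np.random.default_rng(2)
    n = 500
    # Hypothetical process data: agitation [rpm], airflow [vvm], head pressure [bar], biomass [g/L]
    X = np.column_stack([rng.uniform(200, 600, n), rng.uniform(0.5, 1.5, n),
                         rng.uniform(1.0, 2.0, n), rng.uniform(5, 40, n)])
    viscosity = 0.001 * X[:, 3] ** 1.5 + 0.02 * rng.random(n)     # synthetic "measured" viscosity [Pa s]

    # Data-driven part: predict (online) viscosity from process variables.
    visc_model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
    visc_model.fit(X, viscosity)

    # Mechanistic part: power-law kLa correlation using the predicted viscosity.
    def kla(agitation, airflow, pressure, biomass, model=visc_model):
        mu = model.predict(np.array([[agitation, airflow, pressure, biomass]]))[0]
        # kLa = a * N^b * vs^c * P^d * mu^-e with illustrative constants a, b, c, d, e
        return 0.002 * agitation ** 0.6 * airflow ** 0.4 * pressure ** 0.2 * mu ** -0.3

    print("predicted kLa [1/s]:", kla(450, 1.0, 1.5, 25))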



An integrated approach for the sustainable water resources optimisation

Michaela Zaroula1, Emilia Kondili1, John K. Kaldellis2

1Optimisation of Production Systems Lab, Mechanical Engineering Department, University of West Attica; 2Soft Energy Applications and Environmental Protection Lab., University of West Attica

Unhindered access to clean water and the preservation and strengthening of water reserves are, together with the coverage of energy needs, basic elements of the survival of the human species (and not only) and therefore a top priority of both the UN and the E.U. In particular, the E.U. has set the goal of improving access to clean water for 70 million of its citizens by 2030.

On the other hand, the current balance of water supply and demand in the southern Mediterranean is clearly deficient and particularly worrying, and the situation worsens even further during the summer season, when excessive tourist flows add to demand. In Greece, for example, the ever-increasing demand for water, especially in the island regions during the summer (tourist) season, combined with prolonged drought, has led to over-exploitation (to the point of exhaustion) of the available water reserves, depriving traditional agricultural crops of the water they need and making imperative both the optimal management of existing water resources and the optimal development of new, or improvement of existing, infrastructure.

In particular, the lack of water resources severely constrains the irrigation of agricultural crops, steadily shrinking the production of local products and drastically reducing the number of people employed in the primary sector.

In this context, and especially in light of the ever-increasing pressure on the area's carrying capacity, the present work highlights the main rationale and methods of our current research in water resources optimisation.

More specifically, the main objectives of the present work are:

The detailed description of the integrated energy – water problem in highly pressed areas

The use of scientific methods for the optimization of the water resource system

The development of a mathematical optimization model for the optimal exploitation of existing water resources as well as the optimization of new infrastructure projects planning that takes quantitatively into account the priorities and the values of the water use.

Furthermore, the innovative approach in the present work also considers the need to reduce the demand based on future forecasts so that the water resources are always in balance with the wider environment where they are utilized.

Water resources sustainability is included in the optimization model for the reduction of the environmental impacts and the environmental footprint of the energy-water system.

It is expected that the completion of this research will result in an integrated tool that supports users in the optimal exploitation of water resources.



Streamlined Life-Cycle Assessments of Chemicals Based on Chemical Taxonomies

Maximilian Guido Hoepfner, Lucas F. Santos, Gonzalo Guillén-Gosálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 1, 8093 Zurich, Switzerland

Addressing the challenges caused by climate change and the impact of human activities requires a tool to evaluate and identify strategies for mitigating climate risk. Life cycle assessment (LCA) has emerged as the prevalent approach to quantify the impact of industrial systems, providing valuable insights on how to improve their sustainability performance. Still, it remains in most cases a data-intensive and complex tool. Especially for the chemical industry, with its wide variety of products, there is an urgent need for tools that streamline and accelerate environmental impact assessment. As an example, the largest LCA database, Ecoinvent, currently includes only around 700 chemicals1, most of them bulk chemicals, which highlights the need to cover data gaps and develop streamlined methods to facilitate the widespread adoption of LCA in the chemical sector.

Specifically, LCA data focus mostly on high production volume chemicals, most of them produced in continuous processes operating at high temperature and pressure. Quantifying the impact of fine chemicals, often produced in batch plants and at milder conditions, thus requires time-consuming process simulations2 or data-driven methods3. The latter estimate impacts based on molecular descriptors and are often trained with high production volume chemicals, which might make them less accurate for fine chemicals.

Alternatively, here we explore another approach to streamline the LCA calculations based on classifying chemicals according to their molecular structure, e.g., occurring functional groups in the molecule. By applying a chemical taxonomy, we establish intervals within which impacts are likely to fall and correlations between sustainability metrics within classes. Furthermore, we investigate the use of process metric indicators (PMI), such as waste-mass and energy intensity, as proxies of LCA impacts. Notably, we studied the 783 chemicals found in the Ecoinvent 3.9.1. cutoff database by using the taxonomy implemented in the classyfire tool 1. Subsequently, the LCIs for all chemicals were used to estimate simple PMI metrics, while their impacts were computed following the IPCC 2013 GWP 100 and ReCiPe 2016 midpoint methods. Starting with the classification into organic and inorganic chemicals, a subsequent classification into so-called superclasses, representing more complex molecular characteristics, is performed. Furthermore, we applied clustering, principal component analysis (PCA) and data fitting to identify patterns and trends in the superclasses. The calculations were implemented in Brightway and Python 3.11.

Preliminary results show that the use of a chemical taxonomy allows stronger correlations between LCA impacts and PMI metrics to be identified, opening the door to streamlined LCA methods based on simple metrics and formulas tailored to the specific chemical class.

1. Lucas, E. et al. The need to integrate mass- and energy-based metrics with life cycle impacts for sustainable chemicals manufacture. Green Chem. 26, (2024).

2. Hai, X. et al. Geminal-atom catalysis for cross-coupling. Nature 622, 754–760 (2023).

3. Zhang, D., Wang, Z., Oberschelp, C., Bradford, E. & Hellweg, S. Enhanced Deep-Learning Model for Carbon Footprints of Chemicals. ACS Sustain. Chem. Eng. 12, 2700–2708 (2024).
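
A minimal, generic sketch of the statistical workflow outlined above (class-wise correlation between a PMI-type metric and a climate-change impact, plus PCA), using made-up data in place of the Ecoinvent and ClassyFire inputs.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    n = 300
    # Made-up dataset: one row per chemical with a taxonomy superclass label,
    # a simple process-metric indicator (energy intensity) and a GWP100 impact.
    df = pd.DataFrame({
        "superclass": rng.choice(["organoheterocyclic", "benzenoids", "inorganic"], n),
        "energy_MJ_per_kg": rng.uniform(10, 200, n),
    })
    slope = df["superclass"].map({"organoheterocyclic": 0.09, "benzenoids": 0.06, "inorganic": 0.03})
    df["GWP100_kgCO2e_per_kg"] = slope * df["energy_MJ_per_kg"] + rng.normal(0, 0.5, n)

    # Correlation between the PMI proxy and the impact, evaluated per chemical class.
    print(df.groupby("superclass")[["energy_MJ_per_kg", "GWP100_kgCO2e_per_kg"]].corr())

    # PCA on the numeric descriptors to look for class-dependent structure.
    scores = PCA(n_components=2).fit_transform(
        df[["energy_MJ_per_kg", "GWP100_kgCO2e_per_kg"]].to_numpy())
    print("first two principal-component scores:\n", scores[:5])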



Aspen Plus Teaching: Spread or Compact Approach

Fernando G. Martins1,2, Henrique A. Matos3

1LEPABE, Laboratory for Process Engineering, Environment, Biotechnology and Energy, Chemical Engineering Department, Faculty of Engineering, University of Porto, Porto, Portugal; 2ALiCE, Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal; 3CERENA, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Portugal

Aspen Plus is a software package for the modelling and simulation of chemical processes used in several chemical engineering courses of different levels worldwide, with the support of several books [1-4]. This contribution aims to discuss how this teaching and learning is carried out in two Portuguese universities: Instituto Superior Técnico – University of Lisbon (IST.UL) and the Faculty of Engineering – University of Porto (FE.UP).

In 2021, the former integrated master’s in Chemical Engineering, with a duration of 5 years, was split into two courses: the Bachelor, with a duration of 3 years, and the Master, with a duration of 2 years.

With this reformulation, at IST.UL the course coordination decided to spread the Aspen Plus teaching across different courses of the 2nd year of the Bachelor, with the first introduction to the package in the 1st semester in Chemical Engineering Thermodynamics. The idea is to use Aspen Plus to support learning about compound properties, phase diagrams with different models (IDEAL, NRTL, PR, SRK, etc.), azeotrope identification and activity coefficient calculation. Moreover, binary interaction coefficients can be obtained by regression of experimental data, making the package a helpful tool for experimental data analysis. In addition, a Rankine cycle is modelled and simulations are carried out to automatically calculate the COP and other KPIs for different fluids.

The same procedure is now introduced in other courses, such as Process Separation, Transport Phenomena, etc. At IST.UL, there are two Project Design courses (12 ECTS) at Bachelor and Master levels that use Aspen Plus as a tool in conceptual project design.

At FE.UP, the introductory teaching of Aspen Plus occurs in the 3rd year of the Bachelor, in a course called Software Tools for Chemical Engineering, where students simulate industrial processes of limited complexity, properly choosing the applicable thermodynamic and unit operation models, and analyse the influence of design variables and operating conditions. Aspen Plus is also taught, in a more advanced way, in the Engineering Design course (12 ECTS), in the 2nd year of the master's degree, when students develop preliminary designs for industrial chemical processes.

This work analyses how these two teaching strategies influence student performance in the two Project Design courses at IST.UL and in Engineering Design at FE.UP, given that Aspen Plus is used intensively in these courses.

References:

[1] Schefflan, R. (2016). Teach yourself the basics of ASPEN PLUS, 2nd edition, Wiley & Sons

[2] Al-MALAH, K.I.M. (2017). ASPEN PLUS – Chemical Engineering Applications, Wiley & Sons

[3] Sandler, S.I. (2015). Using Aspen Plus in Thermodynamics Instruction: A Step-by-Step Guide, Wiley & Sons

[4] Adams II, T.A. (2022). Learn Aspen Plus in 24 Hours, 2nd Edition, McGraw Hill



Integration of Life Cycle Assessment into the Optimal Design of Hydrogen Infrastructure for Regional-Scale Deployment

Alessandro Poles1, Catherine Azzaro-Pantel1, Henri Schneider2, Renato Luise3

1Laboratoire de Génie Chimique, Université Toulouse, CNRS, INPT, Toulouse, France; 2LAboratoire PLAsma et Conversion d'Énergie, INPT, Toulouse, France; 3European Institute for Energy Research, Emmy-Noether Straße 11, Karlsruhe, Germany

Climate change mitigation is one of the most urgent global challenges. Greenhouse gas (GHG) emissions are the primary drivers of climate change, needing coordinated international action. However, political and territorial complexities make a uniform global approach difficult. As a result, individual countries are developing their own national policies aligned with international guidelines, such as those from the Intergovernmental Panel on Climate Change (IPCC). These policies often focus solely on emissions generated within national borders, as is the case with France’s National Low-Carbon Strategy (SNBC). Focusing solely on territorial emissions in national carbon neutrality strategies may lead to the unintended consequence of shifting environmental impacts to other stages of the life cycle occurring outside the country's borders. To provide a comprehensive assessment of environmental impacts, broader decision-support tools, such as Life Cycle Assessment (LCA), are crucial.

This is particularly important in energy systems, where hydrogen has emerged as a key component of the future energy mix. Hydrogen production technologies - such as Steam Methane Reforming (SMR) and electrolysis - each present distinct trade-offs. Currently, hydrogen is predominantly produced via SMR (>90%), largely due to its established market presence and lower production costs (1-3 $/kgH2). However, SMR brings significant GHG emissions (10-12 kgCO₂-eq / kgH2). Electrolysis, on the other hand, presents a lower-carbon alternative when powered by renewable energy, although it is currently more expensive (6 $/kgH2).

Literature shows that most existing hydrogen system optimizations focus on reducing costs and minimizing GHG emissions, often overlooking broader environmental considerations. This highlights the need for a multi-objective framework that addresses not only economic and GHG emission reductions but also the mitigation of other environmental impacts, thus ensuring a more sustainable approach to hydrogen network development.

This study proposes an integrated framework that couples multi-objective optimization for hydrogen networks with LCA. The optimization framework is developed using Mixed Integer Linear Programming (MILP) and an augmented epsilon-constraint method, implemented in the GAMS environment over a multi-year timeline (2022-2050). Evaluated hydrogen production pathways include electrolysis powered by renewable energy sources (wind, PV, hydro, and the national grid) and SMR with Carbon Capture and Storage (CCS). The LCA model is directly integrated into the optimization process, using the ReCiPe2016 method to calculate environmental indicators following a Well-to-Tank approach. A case study of hydrogen deployment in Auvergne-Rhône-Alpes, addressing industrial and mobility demand for hydrogen, will illustrate this framework.

The current phase of the research focuses on a bi-criteria optimization framework that balances economic objectives with environmental indicators, considered individually, to identify correlated indicators. Future research will explore strategies to reduce dimensionality in multi-objective optimization (MOO) without compromising solution quality, ensuring that decisions are both efficient and environmentally robust.

Reference [1] Thèse Renato Luise, Développement par approche ascendante de méthodes et d'outils de conception de chaînes logistiques « hydrogène décarboné »: application au cas de la France, Toulouse INP, 4 octobre 2023, https://theses.fr/2023INPT0083?domaine=theses
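
The augmented epsilon-constraint scheme mentioned above can be sketched generically as follows, here with the open-source PuLP library and a toy two-technology problem whose cost and impact coefficients are placeholders; the actual multi-period model in GAMS is far richer.

    import pulp

    # Toy data: two hydrogen production technologies, cost [$ per kg] and GWP [kg CO2-eq per kg].
    cost = {"electrolysis": 6.0, "SMR_CCS": 2.0}
    gwp  = {"electrolysis": 1.0, "SMR_CCS": 4.0}
    demand = 100.0                                  # kg H2 to supply

    def solve(eps_gwp):
        """Minimise cost subject to a GWP budget eps_gwp (augmented epsilon-constraint)."""
        m = pulp.LpProblem("h2_network", pulp.LpMinimize)
        x = {t: pulp.LpVariable(f"x_{t}", lowBound=0) for t in cost}
        slack = pulp.LpVariable("slack", lowBound=0)
        total_gwp = pulp.lpSum(gwp[t] * x[t] for t in x)
        # The small augmentation term rewards unused GWP budget, avoiding weakly efficient points.
        m += pulp.lpSum(cost[t] * x[t] for t in x) - 1e-3 * slack
        m += pulp.lpSum(x[t] for t in x) == demand
        m += total_gwp + slack == eps_gwp
        m.solve(pulp.PULP_CBC_CMD(msg=False))
        return pulp.value(total_gwp), {t: x[t].value() for t in x}

    # Sweep the epsilon bound between the single-objective extremes to trace the Pareto front.
    for eps in (400.0, 300.0, 200.0, 100.0):
        print(eps, solve(eps))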



Streamlining Catalyst Development through Machine Learning: Insights from Heterogeneous Catalysis and Photocatalysis

Mitra Jafari, Julia Schowarte, Parisa Shafiee, Bogdan Dorneanu, Harvey Arellano-Garcia

Brandenburg University of Technology Cottbus-Senftenberg, Germany

Designing heterogeneous catalysts and optimizing reaction conditions present significant challenges. This process typically involves catalyst synthesis, optimization, and numerous reaction tests, which are not only energy- and time-intensive but also costly. Advances in machine learning (ML) have provided researchers with new tools to predict catalysts' behaviour, reaction conditions, and product distributions without the need for extensive laboratory experiments. Through correlation analysis, ML can uncover relationships between various parameters and catalyst performance. Predictive models, trained on existing data, can forecast the effectiveness of new materials, while data-driven insights help guide catalyst design and optimization. Automating the ML framework further streamlines this process, improving scalability and enabling rapid evaluation of a wider range of candidates, which accelerates the development of solutions to current challenges [1,2].

In this contribution, a proposed ML approach and its potential in catalysis (heterogeneous and photocatalysis) are explored by analysing datasets from different reactions, such as Fischer-Tropsch synthesis and pollutant degradation. These datasets are categorized based on descriptors like catalyst formulation, pretreatment, characteristics, activation, and reaction conditions, with the goal of predicting reaction outcomes. Initially, the data undergoes cleaning and labelling using one-hot encoding. Subsequent steps include imputation and normalization for data preparation. In addition, techniques such as Spearman correlation matrices, dendrograms, pair plots, and dimensionality reduction methods like PCA are applied. The datasets are then employed to train and test several models, including ensemble methods, regression techniques, and neural networks. Hyperparameters are tuned using GridSearchCV alongside cross-validation. Performance metrics such as R², RMSE, and MAE are used to assess model accuracy, and the AIC is used for model selection, with a simple mean value model or linear regression serving as a baseline for comparison.

Finally, the prediction accuracy of each model is investigated, and the best-performing model is selected. The effect of the different descriptors on the response has also been assessed to identify the parameters with the strongest influence on catalyst performance. Regarding photocatalysis, nonlinear behaviour was observed due to optimization-driven influences. This is likely because the published results consist solely of optimized data.

References

  1. Tang, Deqi, Rangsiman Ketkaew, and Sandra Luber. "Machine Learning Interatomic Potentials for Catalysis." Chemistry–A European Journal (2024): e202401148.
  2. Schnitzer, Tobias, Martin Schnurr, Andrew F. Zahrt, Nader Sakhaee, Scott E. Denmark, and Helma Wennemers. "Machine Learning to Develop Peptide Catalysts─ Successes, Limitations, and Opportunities." ACS Central Science 10, no. 2 (2024): 367-373.
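
For readers less familiar with the model-selection workflow described above, a generic scikit-learn sketch is shown below; the data, column names, and values are invented stand-ins for the catalysis descriptors.

    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(4)
    n = 400
    # Invented catalysis dataset: categorical formulation descriptors + numeric reaction conditions.
    df = pd.DataFrame({
        "promoter": rng.choice(["K", "Na", "none"], n),
        "support": rng.choice(["Al2O3", "SiO2", "TiO2"], n),
        "temperature_C": rng.uniform(180, 350, n),
        "pressure_bar": rng.uniform(1, 30, n),
    })
    y = 0.2 * df["temperature_C"] + 1.5 * df["pressure_bar"] + rng.normal(0, 5, n)   # e.g. conversion

    pre = ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["promoter", "support"]),
        ("num", StandardScaler(), ["temperature_C", "pressure_bar"]),
    ])
    pipe = Pipeline([("prep", pre), ("model", GradientBoostingRegressor(random_state=0))])

    grid = GridSearchCV(pipe, {"model__n_estimators": [100, 300],
                               "model__max_depth": [2, 3]},
                        cv=5, scoring="neg_root_mean_squared_error")
    grid.fit(df, y)
    print("best params:", grid.best_params_, " RMSE:", -grid.best_score_)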


Life Cycle Design of a Novel Energy Crop “Sweet Erianthus” by Backcasting from Process Simulation Integrating Agriculture and Industry

Satoshi Ohara1, Yoshifumi Terajima2, Hiro Tabata3,4, Shoma Fujii5, Yasunori Kikuchi3,5

1Research Center for Advanced Science and Technology, LCA Center for Future Strategy, The University of Tokyo; 2Tropical Agriculture Research Front, Japan International Research Center for Agricultural Sciences; 3Presidential Endowed Chair for “Platinum Society”, The University of Tokyo; 4Research Center for Solar Energy Chemistry, Graduate School of Engineering Science, Osaka University; 5Institute for Future Initiatives, The University of Tokyo

Crops have been developed primarily for food production. Toward decarbonization, it is also essential to design and develop novel crops suitable for new application processes such as biofuels and green chemicals production through backcasting approaches. For example, modifying industrial crops through crossbreeding or genetic modification can change their unit yield, environmental tolerance, and raw material composition (i.e., sugars, starch, and lignocellulose). However, conventional energy crop improvement has been aimed only at high-unit yield with high fiber content, such as Energy cane and Giant Miscanthus, which contain little or no sugar, limiting their use to energy and lignocellulosic applications.

Sweet Erianthus was developed in Japan as a novel energy crop by crossbreeding Erianthus (wild plants with high biomass productivity even in poor environments) and Saccharum spp. hybrids (sugarcane with sugar storage ability). Erianthus has a deep root system to draw up nutrients and water from the deep layers of the soil, making it possible to cultivate crops with low fertilizer and water inputs even in farmland unsuitable for agriculture due to low rainfall or low nutrients and water near the surface. On the other hand, sugarcane accumulates sugars directly in the stalk. Microorganisms can easily convert extracted sugar juice into bioproducts such as ethanol and polylactic acid. Therefore, Sweet Erianthus presents a dual characteristic of both Erianthus and sugarcane.

In this study, we are tackling the design of optimal Sweet Erianthus crop conditions (unit yield, compositional balance of sugars and fiber) by backcasting from simulations of the entire life cycle, considering sustainable agriculture, industrial productivity, environmental impact, and resource recycling. As options for industrial applications, ethanol fermentation, biomass combustion, power generation, and torrefaction to produce charcoal, biogas oil, and syngas were selected. Production potentials and energy inputs were calculated using previously reported simulation models (Ouchida et al., 2017; Leonardo et al., 2023). Specifically, the production potential of each energy product per unit area was simulated by multiplying conversion factors with three variables: unit yield Y [t/ha], sugar content S [wt%], and fiber content F [wt%]. Each variable was assumed not to exceed the range spanned by the various prototypes developed.

The simulation results reveal optimal feedstock conditions that maximize energy productivity per unit area or minimize environmental impact. The fiber-to-sugar content (F/S) ratio was found to be especially important. This study thus presents a simulation-based methodology for practical crop design on the agricultural side, informed by simulations on the industrial side, which is expected to enable efficient development of new crops.

K. Ouchida et al., 2017, Integrated Design of Agricultural and Industrial Processes: A Case Study of Combined Sugar and Ethanol Production, AIChE Journal, 63(2), 560-581

L. Leonardo et al., 2023, Simulation-based design of regional biomass thermochemical conversion system for improved environmental and socio-economic performance, Comput. Aid. Chem. Eng., 52. 2363-2368
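
The per-hectare potential calculation described above follows the structure yield × composition × conversion factor; the sketch below shows that structure with placeholder conversion factors and heating values, which are not the values used in the cited simulation models.

    def energy_potential(Y_t_per_ha, S_wt_frac, F_wt_frac):
        """Schematic per-hectare energy potential of a Sweet Erianthus prototype.
        Y: fresh yield [t/ha]; S: sugar fraction [-]; F: fiber fraction [-]."""
        # Placeholder conversion factors (illustrative only):
        ETHANOL_L_PER_T_SUGAR = 600.0      # fermentable sugar -> ethanol [L/t]
        LHV_ETHANOL_MJ_PER_L = 21.2        # approximate
        LHV_FIBER_MJ_PER_KG = 17.0         # approximate, dry lignocellulose
        ethanol_GJ = Y_t_per_ha * S_wt_frac * ETHANOL_L_PER_T_SUGAR * LHV_ETHANOL_MJ_PER_L / 1000
        fiber_GJ = Y_t_per_ha * F_wt_frac * 1000 * LHV_FIBER_MJ_PER_KG / 1000
        return ethanol_GJ, fiber_GJ

    # Compare two hypothetical prototypes with different fiber-to-sugar (F/S) balances.
    for Y, S, F in [(80, 0.12, 0.15), (80, 0.06, 0.25)]:
        e, f = energy_potential(Y, S, F)
        print(f"Y={Y} t/ha, F/S={F/S:.1f}: ethanol {e:.0f} GJ/ha, fiber {f:.0f} GJ/ha")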



Reversible Solid Oxide Cells and Long-term Energy Storage in Residential Areas

Arthur Waeber, Dorsan Lepour, Xinyi Wei, Shivom Sharma, François Maréchal

EPFL, Switzerland

As environmental concerns intensify and energy demand rises, especially in residential areas, reversible Solid Oxide Cells (rSOC) stand out as a promising technology. Characterized by their reversibility, high electrical efficiency, and fuel flexibility, they also cogenerate high-quality heat. The smart operation of rSOC systems can present interesting opportunities for long-term energy storage, facilitating the penetration of renewable energies at different scales while continuously providing useful heat.

Although the implementation of energy storage systems in residential areas has already been extensively discussed in the literature, the focus is mainly on batteries, often omitting the seasonal dimension. This study aims to address this gap by investigating the technical and economic feasibility of rSOC systems in residential areas alongside various long-term storage options: hydrogen (H2), a hybrid tank (CH4/CO2), and ammonia (NH3).

Each of these molecules requires precise modeling, introducing specific constraints and impacting the rSOC system's performance in terms of electricity or heat output in different ways. To achieve this, the processes are first modeled in Aspen Plus to account for thermodynamic properties before being integrated into the Renewable Energy Hub Optimizer (REHO) framework.

REHO is a decision-support tool designed for sustainable urban energy system planning. It considers the endogenous resources of a specified area, various end-use demands (such as heating and mobility), and multiple energy carriers, including electricity, heat, and hydrogen. Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to facilitate a sound comparison of different storage solutions.

This analysis emphasizes the need for long-term storage technologies to support the penetration of decentralized electricity production. By providing tangible figures, such as CAPEX, storage tank sizes, and renewable energy installed capacity, it enables a fair comparison of the three main scalable long-term storage options. Additionally, it offers guidelines on the optimal storage conditions for each molecule, balancing energy efficiency and storage tank size. The role of rSOC as electricity storage technology and as heat producer for domestic hot water and/or space heating is also evaluated for the different storage options.



A Comparative Analysis of an Industrial Edge MLOps Prototype for ML Application Deployment at the Edge of the Process Industry

Fatima Rani, Lucas Vogt, Prof. Leon Urbas

Technische Universität Dresden, Germany

In the evolving Industry 4.0 revolution, combining the Artificial Intelligence of Things (AIoT) and edge computing represents a significant step forward in innovation and efficiency. This paper introduces a prototype for constructing an edge AI system utilizing the contemporary Machine Learning Operations (MLOps) concept (Rani et al., 2024 & 2023). By employing edge hardware such as the Raspberry Pi and the Nvidia Jetson Nano, our methodology encompasses data ingestion and machine learning model deployment on edge devices (Antonini et al., 2022). Crucially, the MLOps pipeline is fully developed within the ecoKI platform, a pioneering research initiative focused on making energy-saving solutions accessible to Small and Medium-sized Enterprises (SMEs). Here, we propose an MLOps pipeline that can be run as either multiple or single workflows, leveraging a REST API for interaction and customization through the FastAPI web framework in Python. This pipeline enables seamless data processing, model development, and deployment on edge devices. Moreover, real-time AI processing enables edge devices, even those with limited resources, to effectively handle tasks in areas such as predictive maintenance, process optimization, quality assurance, and supply chain management. Furthermore, a comparative analysis conducted with Edge Impulse validates the effectiveness of our approach, demonstrating how optimized ML algorithms can be successfully deployed in the process industry (Janapa Reddi et al., 2023). Finally, this study aims to provide a blueprint for advancing Edge AI development in the process industry by exploring AI techniques suited for resource-limited environments and addressing key challenges, such as ML algorithm optimization and computational power.

References

Rani, F., Chollet, N., Vogt, L., & Urbas, L. (2024). Industrial Edge MLOps: Overview and Challenges. Computer Aided Chemical Engineering, 53, 3019-3024.

Rani, F., Khaydarov, V., Bode, D., Hasan, I. H. & Urbas, L.(2023). MLOps Practice: Overcoming the Energy Efficiency Gap, Empirical Support Through ecoKI Platform in the Case of German SMEs. PAC- Protection, Automation Control, World Global Conference 2023.

Antonini, M., Pincheira, M., Vecchio, M., & Antonelli, F. (2022, May). Tiny-MLOps: A framework for orchestrating ML applications at the far edge of IoT systems. In 2022 IEEE international conference on evolving and adaptive intelligent systems (EAIS) (pp. 1-8). IEEE.

Janapa Reddi, V., Elium, A., Hymel, S., Tischler, D., Situnayake, D., Ward, C., ... & Quaye, J. (2023). Edge impulse: An mlops platform for tiny machine learning. Proceedings of Machine Learning and Systems, 5.

Acknowledgments: This work was funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under grant number 03EN2047C.
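
A minimal sketch of the REST-API-driven pipeline pattern described above, using FastAPI; the endpoint names, payload fields, and the placeholder scoring step are invented for illustration and are not the ecoKI implementation.

    from typing import List
    from fastapi import FastAPI
    from pydantic import BaseModel
    import numpy as np

    app = FastAPI(title="edge-mlops-demo")

    class SensorWindow(BaseModel):
        values: List[float]            # recent sensor readings pushed from the edge device

    # Placeholder "model": a rolling-mean anomaly score standing in for a trained ML model.
    def score(window: List[float]) -> float:
        arr = np.asarray(window)
        return float(abs(arr[-1] - arr.mean()) / (arr.std() + 1e-9))

    @app.post("/pipeline/predict")
    def predict(payload: SensorWindow):
        """Single pipeline step: ingest data, run inference, return a result to the edge device."""
        return {"anomaly_score": score(payload.values)}

    @app.get("/pipeline/health")
    def health():
        return {"status": "ok"}

    # Run locally with:  uvicorn edge_mlops_demo:app --host 0.0.0.0 --port 8000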



Energy Water Nexus Resilience Analysis Using Integrated Resource Allocation Approach

Hesan Elfaki1, Dhabia Al-Mohannadi2, Mohammad Lameh1

1Texas A&M, United States of America; 2Hamad Bin Khalifa University, Qatar

Power and water systems are strongly interconnected through exchanged flows of water, electricity, and heat, which are fundamental to maintaining continuous operation and providing the functional services that meet demand. These systems are highly vulnerable to climate stressors, which can disrupt their operation. As the services delivered by these systems are vital for community development across all sectors, it is essential to create reliable frameworks and effective methods to assess and enhance the resilience of the energy-water nexus to climate impacts.

This work presents a macroscopic, high-level representation of the interconnected nexus system utilizing a resource allocation model to capture the interactions between the power and water subsystems. The model is used to assess the performance of the system under various climate impact scenarios, to determine the peak demands the system can withstand, and to quantify the losses of functional services, which reveals the system's vulnerabilities. Resilience metrics are incorporated to interpret these results and characterize the nexus performance. The overall method is generic, and its capabilities will be demonstrated through a case study on the energy-water nexus in the Gulf Cooperation Council (GCC) region.



Technoeconomic Analysis of a Novel Amine-Free Direct Air Capture System Integrated with HVAC

Yasser Abdellatif1,2, Ikhlas Ghiat1, Riham Surkatti2, Yusuf Bicer1, Tareq AL-ANSARI1,2, Abdulkarem I. Amhamed1,3

1Hamad Bin Khalifa University College of Science and Engineering, Qatar; 2Qatar Environment and Energy Institute (QEERI), Doha, Qatar.; 3Corresponding author’s email: aamhamed@hbku.edu.qa

The increasing demand for Direct Air Capture (DAC) technologies has been driven by the need to mitigate rising CO2 levels and address climate change. However, DAC systems face challenges, particularly in humid environments, where high humidity substantially increases the energy required for regeneration. Conventional CO2 physisorption is often hindered by competitive water adsorption, which reduces system efficiency and increases energy demand. Addressing these limitations is crucial for advancing DAC technology and improving commercial viability. This study proposes a novel DAC system integrated with an Air Handling Unit (AHU) to manage these challenges. A key feature of the system is the incorporation of a silica gel wheel for air dehumidification prior to physisorption. This pre-treatment step significantly enhances the physisorbents' performance by reducing water vapor in the air, optimizing the CO2 adsorption process. As a result, physisorbents can compete more effectively with conventional chemisorbents, which benefit from water co-adsorption but have limitations such as material degradation and higher energy demands. The study focuses on two adsorbents: NbOFFIVE and SBA-15 functionalized with TEPA. These materials were chosen for their promising CO2 capture properties. The system was tailored for the AHU of Doha Tower, a high-rise in a hot, humid climate. The silica gel wheel dehumidifies return air before it enters the CO2 capture stage. The air is then cooled by the existing AHU system to create optimal conditions for adsorption. After CO2 capture, the air is reheated using the AHU's heater to maintain indoor temperatures. The silica gel is regenerated using the CO2- and water-free airstream, allowing the system to deliver the required humidity range for indoor areas before supplying the air to the building. This ensures both air quality and operational efficiency. This integrated approach offers significant advantages in energy savings and efficiency. The use of silica gel prior to physisorption reduced energy requirements by 82% for NbOFFIVE and 39% for SBA-15/TEPA, compared to a DAC-HVAC system without silica gel dehumidification. Physisorbents generally exhibit lower heats of adsorption than chemisorbents, further reducing the system's overall energy demand. The removal of excess moisture also minimizes the energy required for water desorption and addresses key drawbacks of amines, such as instability in indoor environments. Additionally, this approach lowers the cooling load by eliminating water condensation typically managed by the HVAC system. These factors were evaluated in a technoeconomic analysis, where they played a crucial role in reducing operational costs. Utilizing the existing AHU infrastructure further reduces capital expenditures (CAPEX), making this system a highly attractive solution for large-scale CO2 capture applications.



Computer-Aided Molecular Design for Citrus and Coffee Wastes Valorisation

Giovana Correia de Assis Netto1, Moisés Teles dos Santos1, Vincent Gerbaud2

1University of São Paulo, Brazil; 2Laboratoire de Génie Chimique, France

Brazil is the world's largest producer of both coffee and oranges. These agro-industrial processes generate large quantities of wastes, which are typically discarded in landfills, mixed with animal feed, or incinerated. Such practices not only pose environmental issues but also fail to fully exploit the economic potential of these residues. Brazilian coffee processing predominantly employs the dry method, wherein the coffee fruit is dried and dehulled, resulting in coffee husk as the primary waste (18% w/w fresh fruit). Subsequently, green coffee beans are roasted, generating an additional residue known as silverskin (4.3% w/w fresh fruit). Finally, roasted and ground coffee undergoes extraction, resulting in spent coffee grounds (91% w/w of ground coffee). Altogether, these residues can account for up to 99% of the coffee fruit's mass. Similarly, Brazil leads global orange juice production. This process generates orange peel waste, which comprises 50–60% of the fruit's mass. Coffee and orange peel wastes contain valuable compounds that can be extracted or produced via biological or chemical conversions, making the residues potential sources of chemical platforms. These chemical platforms can be used as molecular building blocks, with multiple functional groups that can be functionalised into useful chemicals. A notable example is furfural, a key bio-based chemical platform that serves as a precursor for various chemicals, offering an alternative to petroleum-based products. Furfural is usually obtained from xylose dehydration and purified by extraction with organic solvents, such as toluene or methyl isobutyl ketone, followed by distillation. The objective of this work is to design alternative solvents for furfural extraction from aqueous solutions, using Computer-Aided Molecular Design (CAMD). A comprehensive literature review identified chemical platforms that can be produced from coffee and orange residues. These molecular structures were then used as molecular building blocks in the chemical library of an in-house CAMD tool. The CAMD tool employed uses molecular graphs for chemical structure representation and modification, group contribution methods for property estimation, and a genetic algorithm as the search procedure. The target properties for the screening included Kow (as a measure of toxicity), enthalpy of vaporisation, melting point, boiling point, flash point and Hansen solubility parameters. A further 31 properties, including EHS indicators, were also calculated for reference. From the initial list of 40 building block families, 19 families were identified in coffee wastes, and 20 families were identified in orange wastes. Among these, 13 building blocks are common to both types of residues and were evaluated as molecular fragments to design candidate solvents for furfural separation: furoate, geranyl, glucaric acid, glutamic acid, hydroxymethylfurfural, hydroxypropionic acid, levulinic acid, limonene, 5-methylfurfural, oleic acid, succinic acid, glycerol and furfural itself. The results demonstrate that molecular structures derived from citrus and coffee residues have the potential to produce solvents with properties comparable to those of toluene. The findings are promising as they represent an advancement over the use of toluene, a fossil-derived solvent, enhancing sustainability in furfural extraction and avoiding the use of non-renewable chemicals in downstream processes of agro-based biorefineries.
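
As a schematic of the CAMD search loop only (the fragment names are drawn from the list above, but their group-contribution values and the property targets are invented placeholders, not the in-house tool's database), a genetic-algorithm-style screening over building-block combinations might look like this:

    # Schematic GA-style screening of solvent candidates built from bio-based fragments.
    # Fragment property contributions below are invented placeholders, not real group-contribution data.
    import random

    random.seed(1)
    FRAGMENTS = {           # name: (boiling-point contribution [K], log Kow contribution)
        "furoate": (95.0, 0.5), "levulinic": (80.0, -0.1), "glycerol": (120.0, -1.2),
        "limonene": (60.0, 2.3), "succinic": (110.0, -0.6), "5-methylfurfural": (85.0, 0.7),
    }
    TB_BASE = 150.0         # hypothetical base value of the boiling-point model

    def properties(candidate):
        tb = TB_BASE + sum(FRAGMENTS[f][0] for f in candidate)
        logkow = sum(FRAGMENTS[f][1] for f in candidate)
        return tb, logkow

    def fitness(candidate):
        tb, logkow = properties(candidate)
        # Penalise deviation from target windows (boiling point 380-480 K, log Kow below 2).
        return -max(0.0, 380 - tb) - max(0.0, tb - 480) - 10.0 * max(0.0, logkow - 2.0)

    def random_candidate():
        return [random.choice(list(FRAGMENTS)) for _ in range(random.randint(2, 4))]

    population = [random_candidate() for _ in range(30)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = []
        for _ in range(20):
            a, b = random.sample(parents, 2)
            child = a[: len(a) // 2] + b[len(b) // 2:]          # one-point crossover
            if random.random() < 0.3:                           # mutation: swap one fragment
                child[random.randrange(len(child))] = random.choice(list(FRAGMENTS))
            children.append(child)
        population = parents + children

    best = max(population, key=fitness)
    print("best candidate:", best, "properties (Tb, logKow):", properties(best))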



Introduction of carbon capture technologies in industrial symbioses for decarbonization

Sydney Thomas, Marianne Boix, Stéphane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Climate change is a consequence of human activities, with industrial activities being one of the primary sources of greenhouse gas (GHG) emissions. Therefore, it is imperative to drastically reduce emissions from the industrial sector in order to effectively address climate change. This endeavor will necessitate the implementation of multiple actions aimed at enhancing both sufficiency and efficiency.

Eco-industrial parks are among the viable options for increasing efficiency. They operate through the collaboration of industries that choose to cooperate to mutualize or exchange materials, energy, or services. By optimizing these flows, it is possible to reuse a fraction of materials, thus reducing waste and fossil fuel consumption, thereby decreasing GHG emissions.

This study is based on a real eco-industrial park located in South Korea, where some companies are capable of producing different levels of steam, while others have a demand for steam (Kim et al., 2010). However, this work also pertains to a project for reindustrialization in France, necessitating that parameters are adapted to French conditions while striving for a general applicability that may extend to other countries. One of the preliminary solutions for reducing GHG emissions involves optimizing the steam network among companies. Additionally, it is feasible to implement carbon capture solutions to mitigate the impact of fuel consumption, although these techniques may also contribute to other forms of pollution. Consequently, while they reduce GHG emissions, they may inadvertently increase other types of pollution. The ultimate objective is to optimize the park utilizing a systemic approach.

In this analysis, carbon capture modules are modeled and integrated into an optimization model for steam exchanges that was previously developed by Mousqué et al. (2018). The multi-period model utilizes a multi-criteria mixed-integer linear programming (MILP) approach. The constraints of the problem correspond to material and energy balances as well as thermodynamic equations. Three criteria are considered to assess the optimal organization: cost, GHG emissions, and pollution from amines. Subsequently, an epsilon-constraint strategy is employed to delineate the Pareto front. Finally, the TOPSIS method is utilized to determine the most advantageous solution.

The preliminary findings appear to indicate that capture through adsorption holds significant promise. Compared to the base-case scenario, this method has the potential to reduce CO2 emissions by a factor of three while the cost increases by only 0.4% per year. This approach may eliminate the need for amines in carbon capture and reduce the energy requirements compared to absorption-based capture. However, further research is needed to confirm these results.
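
To show how the TOPSIS step ranks Pareto-optimal configurations, a compact implementation over three illustrative points (all numbers invented) is:

    # Compact TOPSIS ranking of three illustrative Pareto points (cost, GHG, amine pollution; all minimised).
    import numpy as np

    X = np.array([[100.0, 50.0, 5.0],       # each row: one Pareto-optimal park configuration (made-up values)
                  [104.0, 20.0, 8.0],
                  [110.0, 15.0, 2.0]])
    weights = np.array([1/3, 1/3, 1/3])
    benefit = np.array([False, False, False])        # every criterion is to be minimised

    V = (X / np.linalg.norm(X, axis=0)) * weights    # vector-normalised, weighted decision matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    print("closeness coefficients:", closeness, "-> preferred solution:", int(np.argmax(closeness)))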



Temporal Decomposition Scheme for Designing Large-Scale CO2 Supply Chains Using a Neural-Network Based Model for Forecasting CO2 Emissions

Jose A. Álvarez-Menchero, Ruben Ruiz-Femenia, Raquel Salcedo-Díaz, Isabela Fons Moreno-Palancas, Jose A. Caballero

University of Alicante, Spain

The battle against climate change and the search for innovative solutions to mitigate its effects have become the focus of researchers' attention. One potential approach to reducing the impacts of global warming could be the design of a Carbon Capture and Storage Supply Chain (CCS SC), as proposed by D’Amore [1]. However, the high complexity of the model requires exploring alternative ways to optimise it.

In this work, a CCS multi-period supply chain for Europe, based on that presented by D’Amore [1], is designed. Data on CO2 emissions have been sourced from the EDGAR database [2], which includes information spanning the last 50 years. Since this problem involves optimising cost and operating decisions over a 10-year time horizon, it is advisable to forecast carbon dioxide emissions to enhance the reliability of the data used. For this purpose, a neural-network-based model is implemented for forecasting [3]. The chosen model is N-BEATS.

Furthermore, a temporal decomposition scheme is used to address the intractability issues of the model. The selected method is Lagrangean decomposition, which has been employed in other high-complexity works, demonstrating strong performance and significant computational savings [4,5].

References

[1] D’Amore, F., Bezzo, F., 2017. Economic optimisation of European supply chains for CO2 capture, transport and sequestration.

[2] JRC, 2021. Emission Database for Global Atmospheric Research (EDGAR). Joint Research Centre, European Commission. Available at: https://edgar.jrc.ec.europa.eu/index.php.

[3] Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M., 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

[4] Jackson, J. R., Grossmann, I. E., 2003. Temporal decomposition scheme for nonlinear multisite production planning and distribution models.

[5] Goel, V., Grossmann, I. E., 2006. A novel branch and bound algorithm for optimal development of gas fields under uncertainty in reserves.



Dynamic simulation of turquoise hydrogen production using a regenerative non-catalytic pyrolysis reactor under various heat sources

Jiseon Park1,2, Youngjae Lee1, Uendo Lee1, Won Yang1, Jongsup Hong2, Seongil Kim1

1Korea Institute of Industrial Technology, Korea, Republic of (South Korea); 2Yonsei University, Korea, Republic of (South Korea)

Hydrogen is widely regarded as a key energy source for reducing carbon emissions and dependence on fossil fuels. As a result, several pathways for producing hydrogen have been developed, commonly labelled grey, blue, green, and turquoise hydrogen. Grey hydrogen is produced from natural gas but generates a large amount of CO2 as a byproduct. Blue hydrogen captures and stores the CO2 to overcome this drawback of grey hydrogen production. Green hydrogen is produced through water electrolysis powered by renewable energy and emits almost no CO2; however, it faces challenges such as intermittent energy supply and high production costs.

In turquoise hydrogen production, methane pyrolysis generates hydrogen and solid carbon at high temperatures. Unlike the other hydrogen production methods, this process does not emit carbon dioxide and thus offers environmental benefits. Notably, non-catalytic methane pyrolysis has the advantage of avoiding catalyst deactivation issues: while catalytic methane pyrolysis increases operational complexity and costs because of regular catalyst replacement, the non-catalytic process avoids these challenges. However, non-catalytic processes require maintaining much higher reactor temperatures than steam methane reforming and catalytic methane pyrolysis. Consequently, optimizing the heat supply is critical to maintaining these high temperatures.

This study explores various methods of supplying heat to sustain the high temperature inside the reactor. We propose a new method for turquoise hydrogen production based on a regenerative pyrolysis reactor to optimize heat supply. In this system, as methane pyrolysis begins in one reactor, it undergoes an endothermic reaction, causing a decrease in temperature. Meanwhile, another reactor supplies heat by combusting hydrogen, ammonia, or methane to gradually increase the temperature. This system enables continuous heat supply and efficiently uses thermal energy.

Therefore, this study conducts dynamic simulation to optimize a regenerative non-catalytic pyrolysis system for continuous turquoise hydrogen production. By utilizing dynamic analysis inside the reactor, optimal operating conditions for this hydrogen production system are determined, which ensures efficient and continuous hydrogen production. Additionally, the study compares hydrogen, ammonia, and methane as heat sources to determine the most effective fuel for maintaining high temperatures in reactors. This comparison utilizes life cycle assessment (LCA) to comprehensively evaluate the energy consumption and CO2 emissions of each fuel source.

The integration of dynamic analysis with LCA provides critical insights into the environmental and operational efficiencies of various heat supply methods used in the regenerative turquoise hydrogen production system. This approach enables the quantification of those impacts and supports the identification of the most suitable fuel. Ultimately, this research contributes to the development of more sustainable and efficient hydrogen production technologies, highlighting the potential for significant reductions in carbon emissions.



Empowering Engineering with Machine Learning: Hybrid Application to Reactor Modeling

Felipe CORTES JARAMILLO1, Julian Per BECKER1, Benoit CELSE1, Thibault FANEY1, Victor COSTA1, Jean-Marc COMMENGE2

1IFP Energies nouvelles, France; 2Université de Lorraine, CNRS, LRGP, France

Hydrocracking is a chemical process that breaks down heavy hydrocarbons into lighter, more valuable products, using feedstocks such as vacuum gas oil (VGO) or renewable sources like vegetable oil and animal fat. Although existing hydrocracking models, developed over years of research, can achieve high accuracy and robustness once calibrated and validated [1-3], significant challenges persist. These include the inherent complexity of the feedstocks (containing billions of molecules), high computational costs, and limitations in analytical techniques, particularly in differentiating between similar compounds like iso and normal alkanes. These challenges result in extensive experimentation, higher costs, and considerable discrepancies between physics-based model predictions and actual measurements.

To overcome these limitations, effective approximations are needed that integrate both empirical data and established process knowledge. A preliminary investigation into purely data-driven models revealed difficulties in capturing the fundamental behavior of the hydrocracking reaction, motivating the exploration of a hybrid modeling approach. Among various hybrid modeling frameworks [4], physics-informed machine learning was selected after in-depth examination, as it can leverage well-established first-order principles, represented by ordinary differential equations (ODEs), to guide data-driven models. This method can improve approximations of real-world reactions, even when the first-order principles do not perfectly match the underlying, complex processes [5].

This work introduces a novel hybrid modeling approach that employs physics-informed neural networks (PINNs) to address the challenges of hydrocracking reactor modeling. The performance is compared against a traditional kinetic model and a range of purely data-driven models, using data from 120 continuous pilot plant experiments as well as simulated scenarios based on the existing first-order behavior models developed at IFPEN [2].

Multiple criteria including accuracy, trend analysis, extrapolation capabilities, and model development time were used to evaluate the methods. In all scenarios, the proposed approach demonstrated a performance improvement over both the kinetic and purely data-driven models. The results highlight that constraining data-driven models, such as neural networks, with known first-order principles enhances robustness and accuracy. This hybrid methodology offers a new avenue for modeling uncertain reactor processes by effectively combining general a priori knowledge with data-driven insights.
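
A stripped-down sketch of the physics-informed training objective, using a generic first-order lumped kinetic ODE dC/dt = -k*C as the guiding physics (a deliberate simplification, not IFPEN's hydrocracking model), could read:

    # Minimal physics-informed neural network: data loss plus residual of dC/dt = -k*C (toy kinetics).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    k = 1.2                                                     # assumed known rate constant

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 1))

    # A handful of "measurements" (synthetic, generated from the same toy kinetics).
    t_data = torch.tensor([[0.0], [0.5], [1.0], [1.5]])
    c_data = torch.exp(-k * t_data)

    # Collocation points where the ODE residual is enforced.
    t_col = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(3000):
        c_col = net(t_col)
        dc_dt = torch.autograd.grad(c_col, t_col,
                                    grad_outputs=torch.ones_like(c_col),
                                    create_graph=True)[0]
        loss_physics = ((dc_dt + k * c_col) ** 2).mean()        # ODE residual
        loss_data = ((net(t_data) - c_data) ** 2).mean()        # fit to measurements
        loss = loss_data + loss_physics
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("predicted C at t=2:", float(net(torch.tensor([[2.0]]))))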

References

[1] Chinesta, F., & Cueto, E. (2022). Empowering engineering with data and AI: a brief review.

[2] Becker, P. J., & Celse, B. (2024). Combining industrial and pilot plant datasets via stepwise parameter fitting. Computer Aided Chemical Engineering, 53, 901-906.

[3] Becker, P. J., Serrand, N., Celse, B., Guillaume, D., & Dulot, H. (2017). Microkinetic model for hydrocracking of VGO. Computers & Chemical Engineering, 98, 70-79.

[4] Bradley, W., et al. (2022). Integrating first-principles and data-driven modeling. Computers & Chemical Engineering, 166, 107898.

[5] Tai, X. Y., Ocone, R., Christie, S. D., & Xuan, J. (2022). Hybrid ML optimization for catalytic processes. Energy and AI, 7, 100134.



Cascade heat pumps as an enabler for solvent-based post-combustion capture in a cement plant

Sarun Kumar Kochunni1, Rahul Anantharaman2, Armin Hafner1

1Department of Energy and Process Engineering, NTNU; 2SINTEF Energy Research

Cement production is a significant source of global CO₂ emissions, contributing about 7-8% of the world's total emissions. This is mainly due to the energy-intensive process of producing clinker (the primary component of cement) and the chemical reaction called calcination, which releases CO₂ when limestone (calcium carbonate) is heated. Around 60% of these direct emissions arise from calcination, while the remaining 40% result from fuel combustion. Thus, capturing CO₂ is essential for decarbonising the industry. Among the various capture techniques, solvent-based post-combustion CO₂ capture stands out due to its maturity and compatibility with existing cement plants. However, this method demands significant heat for solvent regeneration, which is often scarce in many cement facilities that require substantial heat for drying raw materials. Typically, 30-50% of the heat needed for solvent regeneration can be sourced from the excess heat generated within the cement plant. Additional heat can be supplied by burning fuels to create steam or by employing heat pumps to upgrade the low-grade heat available from the capture facility or the subsequent CO₂ liquefaction process.

This study systematically incorporates cascade heat pumps to harness waste heat from the CO₂ liquefaction process for regenerating solvents. The proposed method replaces the conventional ammonia-based refrigeration system for CO₂ liquefaction with a cascade high-temperature heat pump (HTHP), which provides refrigeration for the liquefaction and high-temperature heat for solvent regeneration. The system liquefies CO₂ using the evaporator and applies the heat rejected via the condenser for solvent regeneration. In this cascade HTHP, ammonia or propane is used in the lower cycle, while butane or pentane operates in the upper cycle, aiming for operational temperatures of 240 K for liquefaction and 395 K for heat supply.

The system’s thermodynamic performance is evaluated using ASPEN HYSYS simulations across different refrigerant configurations in the integrated setup. The findings indicate that an HTHP system using ammonia and pentane can deliver up to 12.5% of the heat needed for solvent regeneration, resulting in a net COP of 2.0. This efficiency exceeds that of other low-temperature heat sources for solvent regeneration. While adding a pentane cycle raises power consumption, the system remains energy-efficient overall, highlighting its potential for decarbonising cement production through enhanced CO₂ capture and integration strategies.



Agent-Based Simulation of Integrated Process and Energy Supply Chains: A Case Study on Biofuel Production

Farshid Babaei, David B. Robins, Robert Milton, Solomon F. Brown

School of Chemical, Materials and Biological Engineering, University of Sheffield, United Kingdom

Despite the potential benefits of decision-level integration for process and energy supply chains, these systems are traditionally assessed and optimised by incorporating simplified models of unit operations within a spatially distributed network. Such organisational-level integration can hardly be achieved without leveraging Information and Communication Technology (ICT) tools and concepts. In this research work, a multi-scale agent-based model is proposed to facilitate the transition from traditional practices to coordinated supply chains.

The multi-agent system framework proposed incorporates different organisational dimensions of the process and energy supply chains including raw material suppliers, rigorous processing plants, and consumers. Furthermore, the overall behaviour of each agent type in the model and its interaction with other agents are implemented. This allows for the simultaneous assessment and optimisation of process and supply chain decisions. By integrating detailed process models into the supply chain operation, the devised framework goes beyond existing studies in which the behaviour of lower decision levels is neglected.

To demonstrate the application of the proposed multi-agent system, a case study of a biofuel supply chain is presented, which captures the underlying dynamics of the supply chain network. The actors involved, comprising farmers, biorefineries, and end-users, seek to increase their payoffs given their interdependencies and intra-organisational variables. The example features distributed and asynchronous decision-making, competition between same-echelon actors, and incomplete information. The aggregated payoff of the supply network is optimised under different scenarios, and the fraction of capacity allocated to biofuel production and consumption, as well as the biofuel production variables, are obtained. According to the results, unit-operation-level decisions, along with the participants' allocated capacity options, significantly influence supply chain performance. In conclusion, the proposed research provides a more realistic view of multi-scale coordination schemes in process and energy supply chains.
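
A highly simplified sketch of such an agent-based loop (placeholder agents and invented numbers, far coarser than the framework described above) is given below:

    # Highly simplified agent-based supply chain: farmers, one biorefinery, one end-user (all numbers invented).
    import random

    random.seed(0)

    class Farmer:
        def __init__(self, capacity, cost):
            self.capacity, self.cost = capacity, cost
        def offer(self, price):                        # sell only if the price covers production cost
            return self.capacity if price >= self.cost else 0.0

    class Biorefinery:
        def __init__(self, capacity, yield_):
            self.capacity, self.yield_ = capacity, yield_
        def produce(self, feedstock):
            return self.yield_ * min(feedstock, self.capacity)

    farmers = [Farmer(capacity=random.uniform(20, 60), cost=random.uniform(30, 70)) for _ in range(5)]
    refinery = Biorefinery(capacity=150.0, yield_=0.35)
    fuel_demand = 40.0
    price = 50.0                                       # initial feedstock price

    for period in range(20):
        supply = sum(f.offer(price) for f in farmers)
        fuel = refinery.produce(supply)
        # Simple price adjustment: raise the feedstock price when fuel production falls short of demand.
        price *= 1.0 + 0.05 * (fuel_demand - fuel) / fuel_demand
        print(f"period {period}: feedstock {supply:.0f}, fuel {fuel:.1f}, price {price:.1f}")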



Steel Plant Electrification: A Pathway to Sustainable Production and Carbon Reduction

Rachid Klaimi2, Sabla Alnouri1, Vladimir Stijepovic3, Aleksa Miladinovic3, Mirko Stijepovic3

1Qatar University, Qatar; 2Notre Dame University; 3University of Belgrade

Traditional steel processes are energy-intensive and rely heavily on fossil fuels, contributing to significant greenhouse gas emissions. By adopting electrification technologies, such as electric boilers and compressors, particularly when powered by renewable energy, steel plants can reduce their carbon footprint, enhance process flexibility, and lower long-term operational costs. This transition also aligns with increasing regulatory pressures and market demand for greener practices, positioning companies for a more competitive and sustainable future. This work investigates the potential of replacing conventional steam crackers in a steel plant that relies on fossil fuels with electrically driven heating systems powered by renewable energy sources. The overall aim was to significantly lower greenhouse gas emissions by integrating electric furnaces and heat pumps into the steel production process. This study evaluates the potential carbon savings from the integration of solar energy in a steel plant with a production capacity of 300,000 tons per month. The solar field required for this integration was found to span an area of 40,764 m². By incorporating solar power into the plant’s energy mix, the analysis reveals a significant reduction in carbon emissions, with an estimated saving of 2,831 tons of CO₂ per year.



INCEPT: Interpretable Counterfactual Explanations for Processes using Timeseries comparisons

Omkar Pote3, Dhanush Majji3, Abhijit Bhakte1, Babji Srinivasan2,3, Rajagopalan Srinivasan1,3

1Department of Chemical Technology, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics and Risk Technology, Indian Institute of Technology Madras, Chennai 600036, India

Advancements in sensors, storage technologies, and computational power have unlocked the potential of AI for process monitoring. AI-based methods can successfully address complex process monitoring involving multivariate time series data. While their classification performance in process monitoring is very good, the decision-making logic of AI models is often difficult for operators and other plant personnel to interpret. In this paper, we propose a novel approach, based on counterfactual explanations, for explaining the results of AI-based process monitoring methods to plant operators.

Explainable AI (XAI) has emerged as a promising field of research, aiming to address these challenges by enhancing the interpretability of AI. XAI has gained significant attention in chemical engineering, but much of this research focuses on explainability for tabular and image data. Most XAI methods provide explanations at the sample level, i.e., they assume that a single data point is inherently interpretable, which is an unrealistic assumption for dynamic systems such as chemical processes. There has been limited exploration of explainability for systems characterized by multivariate time series. To address this gap, we propose a novel XAI method that provides counterfactual explanations accounting for the multivariate time-series nature of process data.

A counterfactual explanation is the "smallest change to the feature values that alters the prediction to a predefined output." Ates et al. (2021) developed a method for counterfactual multivariate time series explainability. Here, we adapt this method and extend it to account for autocorrelation and cross-correlation, which are essential in process monitoring. Our proposed method, called INterpretable Counterfactual Explanations for Processes using Time series comparisons (INCEPT), generates a counterfactual explanation through a four-step methodology. Consider an online process sample given to a neural-network-based fault identification model. The neural network would use a window of data around this sample to predict the state of the process (normal, fault 1, etc.). First, the time series data is transformed into PC space using Dynamic PCA to address autocorrelation and cross-correlation. Second, the nearest match from the training data is identified in this space for the desired class using Euclidean distance. Third, a counterfactual sample is generated by adjusting key variables that increase the likelihood of the desired class, guided by a greedy algorithm. Finally, the counterfactual is transformed back to the original space, and the model recalculates the class probabilities until the desired class is achieved. The adjustments to the process variables required to produce the counterfactual are used as the basis for generating explanations.
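
A condensed sketch of the four steps (with randomly generated data, a logistic-regression classifier as a placeholder for the neural network, and lag-stacked PCA standing in for Dynamic PCA) could be:

    # Schematic counterfactual search over multivariate time-series windows (synthetic data, placeholder model).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_win, win_len, n_var = 200, 10, 4

    # Synthetic training windows: class 0 = "normal", class 1 = "fault" (shifted mean in variable 2).
    X = rng.normal(size=(n_win, win_len, n_var))
    y = rng.integers(0, 2, size=n_win)
    X[y == 1, :, 2] += 2.0

    flat = X.reshape(n_win, -1)
    clf = LogisticRegression(max_iter=1000).fit(flat, y)

    # Step 1: transform windows into a reduced space (plain PCA on flattened windows as a stand-in for DPCA).
    pca = PCA(n_components=5).fit(flat)

    # Query: a fault window; the desired class is 0 (normal).
    query = X[y == 1][0].copy()
    desired = 0

    # Step 2: nearest training window of the desired class in the reduced space.
    cand = flat[y == desired]
    dist = np.linalg.norm(pca.transform(cand) - pca.transform(query.reshape(1, -1)), axis=1)
    nearest = cand[np.argmin(dist)].reshape(win_len, n_var)

    # Step 3: greedily substitute whole variable trajectories from the nearest match
    # until the classifier predicts the desired class.
    cf, changed = query.copy(), []
    while clf.predict(cf.reshape(1, -1))[0] != desired and len(changed) < n_var:
        best_var, best_p = None, -1.0
        for v in range(n_var):
            if v in changed:
                continue
            trial = cf.copy()
            trial[:, v] = nearest[:, v]
            p = clf.predict_proba(trial.reshape(1, -1))[0, desired]
            if p > best_p:
                best_var, best_p = v, p
        cf[:, best_var] = nearest[:, best_var]
        changed.append(best_var)

    # Step 4: the substituted variables form the basis of the explanation.
    print("Variables adjusted to reach the desired class:", changed)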

The effectiveness of the proposed method will be demonstrated using the Tennessee Eastman case study. The generated explanations can aid model developers in debugging and model enhancement. They can also assist plant operators in understanding the model’s predictions and gain actionable insights.

References:

[1] Bhakte, A., et.al., 2024. Potential for Counterfactual Explanations to Support Digitalized Plant Operations.

[2] Bhakte, A., et.al., 2022. An explainable artificial intelligence-based approach for interpretation of fault classification results from deep neural networks.

[3] Ates, E., et.al., 2021. Counterfactual Explanations for Multivariate Time Series.



Dynamic Simulation of an Oxy-Fuel Cement Pyro-processing Section

Marc-Daniel Stumm1, Tom Dittrich2, Jost Lemke2, Eike Cramer1, Alexander Mitsos3,1,4

1Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany; 2thyssenkrupp Polysius GmbH, 59269 Beckum, Germany; 3JARA-ENERGY, 52056 Aachen, Germany; 4Institute of Climate and Energy Systems, Energy Systems Engineering (ICE-1), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany

Cement production accounts for 7 % of global greenhouse gas emissions [1]. Tackling these emissions requires carbon capture and storage technologies [1], of which an oxy-fuel combustion process followed by CO2 compression is economically promising [2]. The oxy-fuel process substitutes air with a mixture of O2 and CO2 as the combustion medium. The O2-CO2 mixture requires a partial recirculation of flue gas [3], which increases the complexity of the process dynamics and can lead to inefficient operating conditions, thus necessitating process control. We propose the use of model-based control and state estimation schemes. As the recycle couples the dynamics of the whole pyro-processing section, the process model must include the entire section, namely the preheater tower, precalciner, rotary kiln, and clinker cooler. Literature on dynamic cement production models is scarce and focuses on modeling individual units, e.g., the rotary kiln [4,5] or the precalciner [6]. We develop a first-principles dynamic model of the full pyro-processing section, including the preheater tower, precalciner, rotary kiln, and clinker cooler as submodels. The states of the precalciner, rotary kiln, and clinker cooler vary significantly in the axial direction; thus, the corresponding models are spatially discretized using the finite volume method. Parameter values for the model are taken from the literature [6]. We implement the models in Modelica as an aggregation of submodels, so the model can easily be adapted to different cement plants, which vary in configuration. We simulate the oxy-fuel pyro-processing section outlined in the CEMCAP study [3]. The simulation shows residence times, temperatures, and cement compositions similar to those reported in the literature [2,7], validating our model. Therefore, the presented dynamic model can form the basis for future model-based control and state estimation applications. Furthermore, the model can be used to investigate carbon reduction measures in the cement industry.
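
As a schematic of the spatial discretization approach (a single transported solid species with first-order calcination kinetics and invented parameter values, far simpler than the full pyro-processing model), a finite-volume, method-of-lines formulation could be written as:

    # Method-of-lines finite-volume sketch: plug flow of solids with first-order calcination (invented parameters).
    import numpy as np
    from scipy.integrate import solve_ivp

    L, n = 60.0, 30                 # reactor length [m], number of finite volumes
    dz = L / n
    v = 0.05                        # solids velocity [m/s]
    k = 0.01                        # first-order calcination rate constant [1/s]
    w_in = 1.0                      # CaCO3 mass fraction at the inlet

    def rhs(t, w):
        w_upwind = np.concatenate(([w_in], w[:-1]))   # first-order upwind convection between cells
        return -(v / dz) * (w - w_upwind) - k * w

    sol = solve_ivp(rhs, (0.0, 3600.0), np.zeros(n), method="BDF")
    print("outlet CaCO3 fraction after one hour:", sol.y[-1, -1])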

References

1. European Commission. Joint Research Centre. Decarbonisation options for the cement industry; Publications Office, 2023.

2. SINTEF Energy Research. CEMCAP D4.6 - Comparative techno-economic analysis of CO2 capture in cement plants 2018.

3. Ditaranto, M.; Bakken, J. Study of a full scale oxy-fuel cement rotary kiln. International Journal of Greenhouse Gas Control 2019, 83, 166–175, doi:10.1016/j.ijggc.2019.02.008.

4. Spang, H.A. A Dynamic Model of a Cement Kiln. Automatica 1972, 309–323, doi:10.1016/0005-1098(72)90050-7.

5. Svensen, J.L.; Da Silva, W.R.L.; Merino, J.P.; Sampath, D.; Jørgensen, J.B. A Dynamical Simulation Model of a Cement Clinker Rotary Kiln, 2024. Available online: http://arxiv.org/pdf/2405.03200v1.

6. Svensen, J.L.; Da Silva, W.R.L.; Jørgensen, J.B. A First-Engineering Principles Model for Dynamical Simulation of a Calciner in Cement Production, 2024. Available online: http://arxiv.org/pdf/2405.03208v1.

7. European Commission - JRC IPTS European IPPC Bureau. Best Available Techniques (BAT) Reference Document for the Production of Cement, Lime and Magnesium Oxide.

8. Mujumdar, K.S.; Ganesh, K.V.; Kulkarni, S.B.; Ranade, V.V. Rotary Cement Kiln Simulator (RoCKS): Integrated modeling of pre-heater, calciner, kiln and clinker cooler. Chemical Engineering Science 2007, 62, 2590–2607, doi:10.1016/j.ces.2007.01.063.



Multi-Objective Optimization for Sustainable Design of Power-to-Ammonia Plants

Andrea Isella, Davide Manca

Politecnico di Milano, Italy

Ammonia synthesis is currently the most carbon-intensive chemical process after oil refining (Isella and Manca, 2022). From this perspective, producing ammonia from renewable-energy-powered electrolysis (i.e., Power-to-Ammonia) is attracting increasing interest and has the potential to bring the ammonia industry to carbon neutrality (MPP, 2022). This work addresses the process design of such a synthetic pathway through a methodology based on the multi-objective optimization of the so-called "three pillars of sustainability": economic, environmental, and social. Specifically, we developed a tool that estimates the installed capacity of every main process section typically featured in Power-to-Ammonia facilities (e.g., the renewable power plant, the electrolyzer, energy and hydrogen storage systems, etc.) so as to maximize the "Global Sustainability Score" of the plant.
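
As a toy illustration of the scoring idea only (the pillar models, weights, and capacity bounds below are all invented), a Global Sustainability Score could be assembled as a weighted aggregation of normalized pillar scores and maximized over the installed capacities:

    # Toy aggregation of the three sustainability pillars into a single score, maximised over capacities.
    import numpy as np
    from scipy.optimize import differential_evolution

    def pillar_scores(x):
        pv, electrolyzer, storage = x                         # installed capacities (MW, MW, MWh), hypothetical
        capex = 0.8 * pv + 1.2 * electrolyzer + 0.3 * storage
        economic = 1.0 / (1.0 + capex / 100.0)                # cheaper designs score higher
        environmental = min(1.0, pv / (electrolyzer + 1e-6))  # crude proxy for renewable coverage
        social = min(1.0, storage / 50.0)                     # crude proxy for supply reliability
        return np.array([economic, environmental, social])

    weights = np.array([0.4, 0.4, 0.2])                       # illustrative pillar weights

    def negative_gss(x):
        return -float(weights @ pillar_scores(x))

    bounds = [(10, 200), (10, 150), (0, 100)]                 # capacity bounds (made up)
    result = differential_evolution(negative_gss, bounds, seed=1)
    print("capacities:", result.x, "Global Sustainability Score:", -result.fun)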



Simulating Long-term Carbon Balance on Forestry Management and Woody Biomass Applications in Japan

Ziyi Han1, Heng Yi Teah2, Yuichiro Kanematsu2, Yasunori Kikuchi1,2,3

1Department of Chemical System Engineering, The University of Tokyo; 2Presidential Endowed Chair for “Platinum Society”, The University of Tokyo; 3Institute for Future Initiatives, The University of Tokyo

Forests play a vital role as carbon sinks and renewable resources in mitigating climate change. However, in Japan, insufficient forest management has resulted in a suboptimal age-class distribution of trees. Aging trees are left unattended and underutilized, and their carbon capture becomes less efficient as they age. This underutilization also contributes to a substantial reliance on imported wood products (lower self-sufficiency). To improve carbon sequestration and renew the forest industries, it is crucial to adopt a systematic approach so that the emissions and mitigation opportunities in the transformation of forest resources into usable products along the forest value chain can be identified and optimized.

In this study, we aim to identify an efficient forestry value chain that maximizes the carbon mitigation considering the coordination of the varied interests from diverse stakeholders in Japan. We simulate the long-term carbon balance on forest management and forest resources utilization, incorporating the woody biomass material flow across five modules, with two in wood production and three in wood utilization sectors.

(1) Forest and forestry management: the supply of woody biomass from designed forestry management practices, for example, to homogenize the forest age class distribution within a given simulation period.

(2) Wood processing: the transformation of roundwood into timber, plywood, and wood chips. A different ratio of wood products is determined based on the demand from each application.

(3) Construction sector: using timber and plywood for wood construction; the maximum flow is to satisfy the domestic demand of construction with 100% self-sufficiency rate without the need for imported wood.

(4) Energy sector: using wood chips for direct conversion to heat and electricity; the maximum flow is to reach the saturation of local renewable energy demand provided by local governments.

(5) Chemical sector: using wood chips as sources of cellulose, hemicellulose and lignin for thermochemical conversion to chemicals that serve as versatile energy carriers, considering multiple pathways. The target products include hydrogen, jet fuels and biodiesel.

We focus on the allocation of woody biomass from modules (1) and (2) to the three utilization modules. The objective is to identify the flows of energy and material production through the various pathways and to evaluate the GHG emissions within the defined system boundary. We evaluate the carbon balance of sequestration and emission in modules (1) and (2), and the cradle-to-gate life-cycle GHG emissions of modules (3), (4) and (5), accounting for the processes of the selected co-production pathways.

Our model shows the overall GHG emissions resulting from the forestry value chain at a given forestry management and processing strategy, and the environmentally preferred order of woody biomass utilization. The variables in each module can be set to reflect the interest of each sector, allowing the model to capture the consequences of wood resource allocation, availability, and its contribution to climate mitigation. Therefore, the simulation can support the policymakers and relevant industry stakeholders for a more comprehensive forestry management and biomass application planning in Japan.



Discovering patterns in Food Safety Culture by k-means clustering

Simen Akkermans1, Maria Tsigka2, Jan FM Van Impe1, Efstathia Tsakali1,2

1BioTeC+ KU Leuven; 2University of West Attica, Greece

Food safety (FS) is an ongoing issue and, despite the awareness and major initiatives taken in recent decades, several outbreaks highlight the need for further action. The key element of prevention, combined with the application of prerequisite programs, constitutes the fundamental principle of any Food Safety Management System (FSMS), with particular emphasis on hygiene, food safety training, and the development and implementation of FSMSs throughout all areas of activity in the food industry. On the other hand, the concept of Food Safety Culture (FSC) separates FS from the FSMS by focusing on human behavior. Food safety managers often do not fully understand the relationship between FS and FSC, resulting in improper practices and further risks to food safety. Over the past decade, various tools for enforcing FSC have been proposed for different sectors of the food industry. However, there is no universal assessment tool, as specific aspects of food safety culture and the different sectors of the food industry require different or customized assessment tools. Although the literature on FS is growing rapidly, existing research related to FSC is virtually non-existent or fragmented. The aim of this study was to test the potential of machine learning, based on questionnaire results, to uncover patterns in FSC.

As a case study, surveys were conducted with 103 employees of the Greek food industry. These employees were subdivided by department, gender, experience level, company food-hazard level, and company size. Each employee filled out a questionnaire consisting of 18 questions based on a Likert scale. After establishing the existence of significant relationships between the answers provided, it was investigated whether specific subgroups of employees had a different FSC. This was done by applying unsupervised k-means clustering to the survey results. It was found that, when the employees were subdivided into just 3 clusters, the clusters differed significantly on all 18 survey questions, as demonstrated by Kruskal-Wallis tests. As such, these 3 clusters represented employee subgroups that adhered to a distinct FSC. This classification provides valuable information on the different cohorts that exist with respect to FSC and thereby enables a targeted approach to improving FSC.
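
The clustering-and-testing workflow can be reproduced with standard tooling; in the sketch below the Likert responses are randomly generated for illustration, so the tests will not show the significant differences found in the actual survey:

    # Clustering questionnaire responses and testing per-question differences (random data for illustration).
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import kruskal

    rng = np.random.default_rng(42)
    answers = rng.integers(1, 6, size=(103, 18))     # 103 respondents x 18 Likert items (scores 1-5)

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(answers)

    for q in range(answers.shape[1]):
        groups = [answers[labels == c, q] for c in range(3)]
        h_stat, p_value = kruskal(*groups)           # Kruskal-Wallis test across the three clusters
        print(f"Question {q + 1}: H = {h_stat:.2f}, p = {p_value:.3f}")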

This study has demonstrated the potential of machine learning techniques to monitor and control FSC. As such, the proposed approach contributes to the implementation of GFSI and BRC GC standards requirements and the General Principles for Food Hygiene of the 2020 amendment of Codex Alimentarius.



Development and Integration of a Co-Current Hollow Fiber Membrane Unit for Gas Separation in Process Simulators Using CAPE-OPEN Standards

Loretta Salano, Mattia Vallerio, Flavio Manenti

Politecnico di Milano, Italy

Process simulation plays a crucial role in the design, control, and optimization of chemical processes, offering a cost-effective alternative to experimental approaches. This study presents the development and implementation of a custom co-current hollow fiber membrane unit for gas separation using the CAPE-OPEN standard, integrated into Aspen HYSYS®. A one-dimensional model was derived under appropriate physical assumptions, resulting in a boundary value problem (BVP) due to the pressure profile along the fiber. The shooting method allows the accurate resolution of BVPs by iteratively adjusting the initial conditions to minimize the boundary error. This approach ensures convergence to the correct solution, critical for complex gas separation processes. The CAPE-OPEN standards make it possible to link the model, developed in C++, to the simulator and to interact with it through input and output ports. To further ensure the reliability of the simulation, error handling has been included to enforce appropriate operational parameters from the user. Furthermore, appropriate output variables are exposed to the simulator environment to enable direct optimization within the process simulator. This flexibility provides greater control over key performance indicators, such as energy consumption and separation efficiency, ultimately facilitating a more efficient design process for applications like biogas upgrading, hydrogen purification, and carbon capture. Results from case studies demonstrate that the co-current hollow fiber membrane unit significantly reduces energy consumption compared to traditional methods like pressure swing water absorption (PSWA) for biogas upgrading to biomethane. While membrane technology showed a 21% reduction in energy consumption for biomethane production, PSWA exhibited slightly higher efficiency for biomethanol production. This study not only demonstrates the value of CAPE-OPEN standards in implementing custom unit operations but also lays the groundwork for future developments in process simulation using advanced mathematical modelling and optimization techniques.
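
A minimal sketch of the shooting strategy for a two-point BVP (a generic toy system, not the actual fiber equations): integrate the ODEs from one end with a guessed initial condition and adjust the guess by root finding until the far-end boundary condition is met.

    # Shooting method sketch: toy two-point BVP solved with an ODE integrator plus root finding.
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def odes(z, y):
        # Toy stand-in for the coupled profile equations along the fiber: y'' = -5*y.
        return [y[1], -5.0 * y[0]]

    def residual(slope_guess):
        sol = solve_ivp(odes, (0.0, 1.0), [0.0, slope_guess])
        return sol.y[0, -1] - 1.0     # boundary condition at the far end: y(1) = 1

    # Bracket the unknown initial slope and refine it until the far-end condition is satisfied.
    slope = brentq(residual, 0.1, 10.0)
    print("converged initial slope:", slope)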



A Comparative Study of Aspen Plus and Machine Learning Models for Syngas Prediction in Biomass-Plastic Waste Co-Gasification

Usman Khan Jadoon, Ismael Diaz, Manuel Rodriguez

Departamento de Ingeniería Química Industrial Y del Medioambiente, Escuela Superior de Ingenieros Industriales, Universidad Politécnica de Madrid

The transition to cleaner energy sources is critical for addressing global environmental challenges, and the co-gasification of biomass and plastic waste presents a viable solution for sustainable syngas production. Syngas is a crucial component in energy applications, and precise prediction of its composition is needed to enhance co-gasification efficiency. Traditional modelling techniques, such as those implemented in Aspen Plus, have been instrumental in simulating gasification processes. However, machine learning (ML) models offer the potential to improve predictive accuracy, particularly for complex, non-linear systems. This study explores the comparative performance of Aspen Plus models and surrogate ML models in predicting syngas composition during the steam and air co-gasification of biomass and plastic waste.

The primary focus of this research is on evaluating Aspen Plus-based modelling techniques, such as thermodynamic equilibrium, restricted equilibrium, and kinetic modelling, alongside surrogate models such as Kriging, support vector machines, and artificial neural networks. The novelty of this work lies in the integration of Aspen Plus with machine learning methodologies, providing a comprehensive comparative analysis of both approaches for the first time. This study seeks to determine which modelling approach offers superior accuracy for predicting syngas components like hydrogen, carbon monoxide, carbon dioxide, and methane.

The methodology involves developing Aspen Plus models for steam and air co-gasification using woody biomasses and plastic wastes as feedstocks. These models simulate syngas production under varying operating conditions. Concurrently, machine learning models are trained on experimental datasets to predict syngas composition based on the same input parameters. A comparative analysis is then performed, with the accuracy of each approach measured using performance metrics such as root mean square error.
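
A compact template for this kind of surrogate comparison (synthetic data standing in for the gasification datasets; in the study the targets are the syngas species fractions) is:

    # Template for comparing surrogate models on synthetic gasification-like data (not the study's datasets).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    # Inputs: temperature [degC], steam-to-feed ratio, plastic fraction in the feed (all made up).
    X = rng.uniform([700.0, 0.2, 0.0], [950.0, 1.5, 0.6], size=(200, 3))
    y = 0.03 * X[:, 0] + 8.0 * X[:, 1] - 15.0 * X[:, 2] + rng.normal(0.0, 1.0, 200)   # synthetic "H2 vol%"

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
    models = {
        "Kriging (GP)": make_pipeline(StandardScaler(), GaussianProcessRegressor()),
        "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
        "ANN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1)),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        rmse = float(np.sqrt(mean_squared_error(y_test, model.predict(X_test))))
        print(f"{name}: RMSE = {rmse:.2f}")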

ML models are anticipated to better capture the non-linearities of the gasification process, while Aspen Plus models will continue to offer valuable mechanistic insights and process understanding. The potential superiority of ML models suggests that integrating data-driven and process-driven approaches could enhance predictive capabilities and optimize co-gasification processes. This study offers significant contributions to the field of bioenergy and gasification technologies by exploring the potential of machine learning as a powerful predictive tool. By comparing Aspen Plus and machine learning models, this research highlights the potential benefits of combining these methodologies to improve syngas production forecasts. The findings from this comparative analysis are expected to advance the development of more accurate and efficient bioenergy technologies, contributing to the global transition toward sustainable energy systems.



A Fault Detection Method Based on Key Variable Forecasting

Borui Yang, Jinsong Zhao

Department of Chemical Engineering, Tsinghua University, Beijing 100084, China

With the advancement of industrial production toward digitalization and automation, process monitoring has become an essential technical tool for ensuring the safe and efficient operation of chemical processes. Although process engineering has developed greatly, the risk of process faults remains. If such faults are not detected and diagnosed at an early stage, they may go beyond control. Over the past decades, various fault detection approaches have been proposed, including model-driven, knowledge-driven, and data-driven methods. Data-driven methods, in particular, have gained prominence, as they rely primarily on large amounts of process data, making them especially relevant with the widespread application of the Internet of Things (IoT). Among these, neural-network-based methods have emerged as a prominent approach. By stacking feature extraction layers and applying nonlinear activation functions between them, deep neural networks exhibit a strong capacity to capture complex, nonlinear patterns. This aligns well with the nature of chemical process variables, which are inherently nonlinear, strongly coupled with control loops, multivariate, and subject to time lags.
In industrial applications, fault detection algorithms rely on the time-series data of key variables. However, statistical methods such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) are limited in capturing the temporal dependencies between consecutive data points. To address this, architectures such as Autoencoders (AE), Convolutional Neural Networks (CNN), and Transformers incorporate the relationships between time points through sliding-window sampling. However, this approach can dilute fault signals, leading to delayed fault detection. Inspired by the human decision-making process, in which adverse future trends are often considered to enable timely responses to unfavorable outcomes, we propose incorporating key variables that have already entered a fault state at future time points into the fault detection model. This proactive inclusion of future fault indicators can significantly improve the timeliness of fault detection.

Building on the aforementioned concept, this work develops and implements a proactive fault detection method based on key variable forecasting. This approach employs multiple predictive models (such as LSTM, Transformer, and Crossformer) to actively forecast key variables over a future time horizon. The predicted results, combined with historical information, are used as inputs to a variational autoencoder (VAE) to calculate the reconstruction error for fault detection. The detection component of the method is trained using normal operating data, and faults are identified by evaluating the reconstruction error. The forecasting component is trained with mixed data, where the initial part contains normal data, followed by the selective introduction of faults after a certain period, enabling the predictive model to capture both fault evolution trends and normal data characteristics.
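
A schematic of the detection logic (with a plain autoencoder in place of the VAE, a single-layer LSTM forecaster, and synthetic normal data only, whereas the paper trains the forecaster on mixed normal/fault data) could be:

    # Schematic forecasting-plus-reconstruction fault detector (simplified stand-in for the proposed method).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_var, hist, horizon = 4, 20, 5

    # Synthetic "normal" operating data: smooth correlated signals plus noise.
    t = torch.linspace(0, 60, 600)
    normal = torch.stack([torch.sin(t + i) for i in range(n_var)], dim=1) + 0.05 * torch.randn(600, n_var)

    def windows(series, length):
        return torch.stack([series[i:i + length] for i in range(len(series) - length)])

    class Forecaster(nn.Module):                     # predicts the next `horizon` steps from the history
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(n_var, 32, batch_first=True)
            self.head = nn.Linear(32, horizon * n_var)
        def forward(self, x):
            _, (h, _) = self.lstm(x)
            return self.head(h[-1]).view(-1, horizon, n_var)

    class AutoEncoder(nn.Module):                    # plain autoencoder standing in for the VAE
        def __init__(self):
            super().__init__()
            d = (hist + horizon) * n_var
            self.enc = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 8))
            self.dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, d))
        def forward(self, x):
            return self.dec(self.enc(x))

    forecaster, ae = Forecaster(), AutoEncoder()
    opt = torch.optim.Adam(list(forecaster.parameters()) + list(ae.parameters()), lr=1e-3)
    data = windows(normal, hist + horizon)           # [N, hist + horizon, n_var]

    for epoch in range(200):
        pred = forecaster(data[:, :hist])
        joint = torch.cat([data[:, :hist], pred], dim=1).flatten(1)
        loss = nn.functional.mse_loss(pred, data[:, hist:]) + nn.functional.mse_loss(ae(joint), joint.detach())
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Online detection: reconstruction error of [history, forecast]; alarm if it exceeds a normal-data threshold.
    with torch.no_grad():
        joint = torch.cat([data[:, :hist], forecaster(data[:, :hist])], dim=1).flatten(1)
        err = ((ae(joint) - joint) ** 2).mean(dim=1)
        threshold = err.mean() + 3.0 * err.std()
    print("alarm threshold on reconstruction error:", float(threshold))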

The proposed method has been validated on the CSTH dataset and the Tennessee Eastman Process (TEP) dataset, demonstrating that incorporating future information at the current time point significantly enhances early fault detection. However, optimizing the design of the reconstruction loss function and model architecture is necessary to mitigate false alarms. Introducing future expectations into current assessments shows great potential for advancing early fault detection and diagnosis but also poses challenges, requiring higher performance from key variable forecasting models.



Rate-Based Modeling Approach of a Rotating Packed Bed for CO2 Chemisorption in aqueous MEA Solutions

Joshua Orthey, John Paul Gerakis, Markus Illner, Jens-Uwe Repke

Process Dynamics and Operations Group - Technical University of Berlin, Germany

Driven by societal and political pressure for climate action, CO2 capture from flue gases is a primary focus for both academia and industry. Rotating Packed Beds (RPBs) [1][2] are a promising route to process intensification and offer significant advantages for amine-based absorption processes, including enhanced mass transfer, load flexibility, higher allowable fluid loads, and the ability to use more concentrated amine solutions with higher viscosities. One main focus of our study encompasses both a direct comparison between packed columns and RPBs and the integration of these technologies in a hybrid concept, with the potential to enhance the overall efficiency of the CO₂ capture process. Since there are numerous process configurations of RPBs and packed columns in CO2 capture, covering gas pretreatment, absorption, and desorption, an initial evaluation of viable candidate configurations is essential. Equally important is the analysis of fundamental process behavior and its limitations, which is crucial for planning effective experimental campaigns and identifying suitable operating conditions. Unlike existing models, our approach offers a more detailed analysis, focusing specifically on the assessment of different process configurations and experimental conditions, enabling a deeper understanding and refinement of the capture process and allowing us to effectively plan and design experiments.

For this purpose, a rate-based RPB model for the reactive absorption of CO2 with MEA solutions, based on the two-film theory, was developed. The model is formulated for steady-state operation and encompasses all relevant component species. It addresses multicomponent mass transfer, incorporating equilibrium and kinetic reactions in the liquid phase while considering mass transfer resistances in both the liquid and gas phases.

For the gas bulk phase, ideal gas behavior is assumed, while the non-ideal liquid phase is described with activity coefficients (elecNRTL). The Maxwell-Stefan approach was used to describe the diffusion processes and mass transport in both phases. The model is discretized over an equidistant radial grid [1]. Additionally, a film discretization near the interface was implemented. First validation studies show that the model accurately depicts the dependencies on rotational speed and varying liquid-to-gas (L/G) ratios with respect to temperature and concentration profiles, and it has been validated against literature data [2].

The CO₂ absorption and desorption process using conventional packed bed columns has been implemented in Aspen Plus. To enable simulations of hybrid configurations, the developed RPB model will be integrated into Aspen Custom Modeler. This study aims to analyze various hybrid process configurations through simulation to identify an efficient configuration, which will be validated by experiments in pilot plants. These results will demonstrate whether integrating RPBs with packed columns enhances energy efficiency and separation performance while reducing operational costs and providing key insights for future scale-up efforts and driving the advancement of hybrid CO₂ capture processes.

[1] Thiels et al. (2016): Modelling and Design of Carbon Dioxide Absorption in Rotating Packed Bed and Packed Column. DOI: 10.1016/j.ifacol.2016.07.303

[2] Hilpert, et al. (2022): Experimental analysis and rate-based stage modeling of multicomponent distillation in a Rotating Packed Bed. DOI: 10.1016/j.cep.2021.108651.



Machine Learning applications in dairy production

Alexandra Petrokolou1, Satyajeet Sheetal Bhonsale2, Jan FM Van Impe2, Efstathia Tsakali1,2

1BioTeC+ KU Leuven; 2University of West Attica, Greece

The dairy sector is one of the most well-developed and prosperous industries at an international level. Due to several factors, including its high nutritional value, its susceptibility, and its popularity among consumers, milk attracted scientific interest quite early compared to other food products. Likewise, the dairy industry has always been a pioneer in adopting new processing, monitoring, and quality control technologies, from pasteurization heat treatment and canning for shelf-life extension at the beginning of the 20th century to PCR methods for detecting adulteration and, nowadays, machine learning applications.

The dairy industry is closely connected with large-scale production lines and complex processes that require precision and continuous monitoring. The primary target is to meet customer requirements with increased profit while minimizing environmental impact. In this regard, various automated models based on artificial intelligence, particularly Machine Learning, have been developed to contribute to sustainability and the circular economy. There are three major types of Machine Learning: Supervised Learning, which uses labeled data; Unsupervised Learning, where the algorithm tries to find hidden patterns and relationships; and Reinforcement Learning, which employs a trial-and-error method. Building a machine learning model requires several steps, starting with relevant and accurate data collection. These smart applications have been extensively introduced into dairy production, from the farm stage and milk processing to final inspection and product distribution. In this paper, the most significant applications of Machine Learning in the dairy industry are illustrated with real-world examples and discussed in terms of their potential. The applications are categorized by production stage and purpose.

The most significant applications integrate recognition cameras, smart sensors, thermal imaging cameras, and digitized supply chain systems to facilitate inventory management. During animal raising, smart environmental sensors can monitor weather conditions in real time. In addition, animals can be fitted with smart collars or other small devices to record parameters such as breathing rate, metabolism, weight, and body temperature. These devices can also track the animals' location and monitor transitions from lying to standing. By collecting these data, algorithms can detect the potential onset of diseases such as mastitis, minimizing the need for manual human processing of repetitive tasks and enabling proactive health management.

Beyond the farm, useful applications emerge in milk processing, particularly in pasteurization, which requires specific temperature and time settings for each production line. Machine learning models can optimize this process, resulting in energy savings. The control of processing conditions through sensors also aids the ripening stage, contributing to the standardization of cheese products. Advancements are also occurring in product packaging, where Machine Vision technology can identify damage and defects that may compromise product quality, potentially leading to food spoilage and consumer dissatisfaction. Finally, dairy products are particularly vulnerable and necessitate specific conditions throughout the supply chain. By employing machine learning algorithms, it is possible to identify the most efficient distribution routes, thereby reducing operational costs. Additionally, a smart sensor system can monitor temperature and humidity levels, spotting deviations from established safety and quality standards.



Dynamic Modelling of CO2 Capture with Hydrated Lime: Integrating Porosity Evolution, Evaporation, and Temperature Variations

Natalia Vidal de la Peña, Dominique Toye, Grégoire Léonard

University of Liège, Belgium

The construction sector is currently one of the most polluting industries globally. In Europe, over 30% of the EU's environmental footprint is attributed to buildings, making this sector the largest contributor to environmental impact within the European Union. Buildings are responsible for 42% of the EU's annual energy consumption and for 35% of annual greenhouse gas (GHG) emissions.

Considering these figures, it is essential to explore methods to reduce the negative environmental impact of this sector. To contribute to its circular economy, this work proposes mineral carbonation as a means to mitigate the environmental impact of this industry. Specifically, we propose the mineral carbonation of mineral wastes from the construction sector, in particular pure hydrated lime (Ca(OH)2), referred to as CH in construction terminology.

This research is part of the Walloon Region's Mineral Loop project, with the objective of modelling the carbonation reactions of mineral waste and optimizing the process by improving reactor conditions and material pretreatment. The carbonation of hydrated lime involves a reaction between calcium hydroxide and CO2, combining physical and chemical phenomena. The novelty of this mathematical model lies in the consideration of porosity evolution during carbonation, as well as the liquid water saturation of the material, by accounting for evaporation phenomena. Furthermore, the model is able to represent the temperature gradient along the reactor. These parameters are important because they affect the carbonation rate of this material. In previous work, we observed that the influence of water in this system is significant, and it is crucial to characterize its behaviour well during this process. First, water is needed to initiate the carbonation, but introducing too much can lead to pore blockage. In addition, the release of water during carbonation can also cause pore blockage if evaporation is not adequately considered. Accordingly, the model accounts for the influence of water, enabling a good correlation between water evaporation and carbonation rates under different carbonation conditions. All parameters are experimentally validated to provide a reliable model that can predict the behaviour of CH during carbonation.

The experimental setup for the carbonation process consists of an aluminium cup filled with CH placed inside a reactor with a capacity of 1.4 L, where pure CO2 is introduced through a hole in the upper part of the reactor. The system is modelled in COMSOL Multiphysics 6.2 by introducing the cup geometry and assuming CO2 is transported axially through the aluminium cup containing hydrated lime particles by dispersion, without convection, and that it diffuses within the material.
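
For illustration only, a one-dimensional dispersion-reaction balance of the kind described above can be sketched with explicit finite differences; the geometry is simplified and all parameter values are placeholders, not the values used in the COMSOL model:

```python
# Illustrative sketch (not the authors' COMSOL model): explicit finite differences
# for axial CO2 dispersion into a bed of hydrated lime with a first-order
# consumption term; parameter values are placeholders.
import numpy as np

L, nz, dt, t_end = 0.02, 50, 0.01, 60.0      # bed depth [m], grid points, time step [s], horizon [s]
D_eff, k_rxn = 1e-6, 5e-3                    # effective dispersion [m2/s], apparent rate constant [1/s]
c_top = 40.0                                  # CO2 concentration above the sample [mol/m3]

dz = L / (nz - 1)
c = np.zeros(nz)                              # CO2 concentration profile along the cup depth
for _ in range(int(t_end / dt)):
    c[0] = c_top                              # pure CO2 supplied at the open top
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dz**2
    c[1:-1] += dt * (D_eff * lap[1:-1] - k_rxn * c[1:-1])   # dispersion + consumption
    c[-1] = c[-2]                             # zero-flux condition at the closed bottom
print("CO2 penetration profile after 1 min:", np.round(c, 2))
```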

In conclusion, the proposed mathematical model accounts for the reaction phenomena, porosity variation, thermal gradient, and evaporation during the carbonation process, providing a solid understanding of the system and an effective tool to contribute to the circular economy of the construction industry. The model has been successfully validated, and the primary objective moving forward is to use it as a tool for predicting the carbonation response of other more complex materials.



Integrated LCA and Eco-design Process for Hydrogen Technologies: Case Study of the Solid Oxide Electrolyser.

Gabriel Magnaval1,2, Tristan Debonnet2, Manuele Margni1,2

1CIRAIG, Polytechnique Montréal, Montréal, Canada; 2HES-SO Valais-Wallis, Sion, Switzerland

Fuel Cell and Electrolyzer Cell hydrogen technologies are promising solutions to support the green transition. To ensure their sustainable development from the early stages of design, it is essential to assess their environmental impacts and define effective ecodesign strategies.

Life Cycle Assessment (LCA) is a widely used methodology for evaluating the environmental impacts of a product or system throughout its entire life cycle, from raw material extraction to disposal. So far, the literature does not provide consistent modelling approaches for the assessment of hydrogen technologies, limiting the interpretation and comparability of LCA results and hindering the interoperability of datasets. A novel modular LCA model has been specifically designed to harmonize assessment models. The modular approach is structured by (supply) tiers, each of them subdivided into common unit processes. Tier 0 represents the operation phase delivering the functional unit. Tier 1 encompasses the stack manufacturing, the balance-of-plant equipment, the operation consumables, and the end-of-life of the stack. Each element is further subdivided into common Tier 2 sub-processes, and so on. This model has been applied to perform a screening LCA of a Solid Oxide Electrolyzer (SOE), based on publicly available literature data, to be used as a baseline for evaluating technological innovations of SOEs designed for high-pressure applications and developed within an industrial European project. The functional unit has been defined as the production of 1 kg of hydrogen at 30 bar by a 20 kW SOE stack.
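
As a rough sketch of how such a tiered, modular inventory can be organized, the structure below follows the tier labels from the abstract; the unit-process names are hypothetical placeholders rather than project data:

```python
# Minimal sketch of the modular, tiered structure; tier labels follow the
# abstract, while unit-process names and figures are hypothetical placeholders.
soe_model = {
    "tier0_operation": {"functional_unit": "1 kg H2 at 30 bar", "stack_power_kW": 20},
    "tier1": {
        "stack_manufacturing":   {"tier2": ["cell_production", "interconnects", "assembly"]},
        "balance_of_plant":      {"tier2": ["power_electronics", "compressor", "heat_exchangers"]},
        "operation_consumables": {"tier2": ["electricity", "heat", "water"]},
        "stack_end_of_life":     {"tier2": ["dismantling", "recycling", "disposal"]},
    },
}

def list_unit_processes(model):
    """Flatten the modular structure into (Tier 1 element, Tier 2 sub-process) pairs."""
    return [(elem, sub) for elem, data in model["tier1"].items() for sub in data["tier2"]]

print(list_unit_processes(soe_model))
```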

Our findings suggest that hydrogen production through SOE performs better than steam methane reforming only if supplied with electricity from renewable or nuclear sources. The operation consumables (electricity consumption and heat supply) have been identified as the most significant contributors to the environmental footprint, emphasizing the importance of energy efficiency and renewable energy sourcing. Critical parameters affecting the life cycle impact scores include the stack's lifespan, the balance-of-plant equipment, and material production.

To further support the environmentally sustainable development of stack technologies, we propose to integrate the LCA metrics within an ecodesign process tailored to the development of hydrogen technologies. The deployment of this process aims to ensure environmentally sound development at an early stage of the innovation by improving communication between LCA experts and technology developers, and to accelerate data collection. An ecodesign workshop is organized during the first months of the project to enhance the literacy of the technology developers. It introduces a systemic and quantified approach to determine the hotspots of the technology, identify sustainable innovations, and evaluate their benefits and the risk of potential burden shifting. Once the developers are trained, a parametrized tool that integrates screening LCA results in a user-friendly interface is distributed to the project partners. It allows technology developers to quickly assess potential innovations, compare different scenarios for improving the environmental performance of their technology, and iterate calculations without the need for LCA experts. The LCA team works throughout the project on updating the tool and explaining the trade-offs.



Decision Support Tool for Technology Selection in Industrial Heat Generation: Balancing Cost and Emissions

Soha Shokry Mousa, Dhabia Al-Mohannadi

Texas A&M University at Qatar, Qatar

Decarbonization of industrial processes is essential for meeting global sustainability targets, particularly in energy-intensive industries. In this context, electrification of heat generation could potentially reduce CO₂ emissions but comes with a set of challenges in balancing cost-efficiency with technical feasibility. A decision support framework is presented for the choice of heat generation technologies in industry, addressing the trade-offs between capital cost, CO₂ emissions, and heat demand across different temperature levels.

A tool was developed to evaluate various heat generation technologies, including high-temperature heat pumps, electrode boilers, and conventional systems. The application of heat integration principles allows the tool to analyse heat demands at different temperature levels and, in turn, the suitability of a technology based on parameters such as cost-effectiveness and capacity limits. The framework incorporates multi-criteria analysis, enabling decision-makers to systematically identify technologies that minimize overall cost while achieving emission-reduction goals and meeting the total heat demand of the industrial process.
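
A minimal sketch of this kind of screening is given below, assuming a temperature-feasibility filter followed by a weighted cost/emission score; the technologies, figures, and weights are invented for illustration and are not outputs of the tool:

```python
# Hedged sketch of a multi-criteria screen: filter technologies by supply
# temperature, then rank by a normalized, weighted cost/emission score.
candidates = [
    # name,                max supply T [C], cost [EUR/MWh], emissions [kgCO2/MWh]
    ("high-T heat pump",   160,              55,             15),
    ("electrode boiler",   350,              80,             20),
    ("natural-gas boiler", 500,              45,             220),
]

def rank(demand_T, w_cost=0.5, w_em=0.5):
    feasible = [c for c in candidates if c[1] >= demand_T]      # capacity/temperature screen
    max_cost = max(c[2] for c in feasible)                      # normalize criteria to [0, 1]
    max_em = max(c[3] for c in feasible)
    score = lambda c: w_cost * c[2] / max_cost + w_em * c[3] / max_em
    return sorted(feasible, key=score)

print([name for name, *_ in rank(demand_T=150)])                # best-ranked technology first
```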

The initial application of the developed tool to real case studies demonstrated the effectiveness of the methodology as part of the energy transition of the industrial sector.



Assessing Distillation Processes through Sustainability Indicators Aligned with the Sustainable Development Goals

Omer Faruk Karaman, Peter Lang, Laszlo Hegely

Budapest University of Technology and Economics, Hungary

There has been a growing interest in sustainability in chemical engineering as industries aim to reduce their environmental footprint without compromising economic performance. This research proposes a set of sustainability indicators aligned with the United Nations’ Sustainable Development Goals (SDGs) for the evaluation of the sustainability of distillation processes, offering a structured way to assess and improve these systems. The use of these indicators is illustrated in two case studies: (1) a continuous pressure-swing distillation (PSD) of a maximum-azeotropic mixture without and with heat integration and (2) the recovery of acetone from a waste solvent mixture by batch distillation (BD). These processes were selected due to their widespread industrial use, their potential to benefit from improvements in their sustainability, and to show the general applicability of the indicators proposed.

Distillation is one of the most commonly used methods for the separation of liquid mixtures. It is performed in a continuous way when large processing capacities are needed (e.g. refining, petrochemical industry). Batch distillation is also used frequently (e.g. in the pharmaceutical or fine chemical industry) because of its flexibility in separating mixtures with varying quantity and composition, including waste solvent mixtures. However, distillation is very energy-intensive, leading to high operational costs and greenhouse gas emissions.

This study aims to address these issues by developing sustainability indicators (e.g. recovery of components, wastewater generation, greenhouse gas emissions) that account for environmental, economic and social aspects. By aligning these indicators with the SDGs, which are globally recognized sustainability standards, the research also aims to encourage industries towards more sustainable practices.

The novelty of this work is that, to our knowledge, we are the first to propose sustainability indicators aligned with the SDGs in the field of distillation.

The case studies illustrate how to apply the proposed indicators to evaluate the sustainability aspects of distillation processes. In the PSD example (Karaman et al., 2024a), the process was optimised without and with heat integration, which led to a significant decrease in both the total annual cost and environmental impact (CO2 emission). In the acetone recovery by BD case (Karaman et al., 2024b), either the profit or the CO2 emissions were optimised by the Box-complex method. In this work, we determined how the proposed set of sustainability indicators improved due to the optimisation and heat integration performed in our previous works.

This research emphasizes the increasing importance of sustainability in chemical separation processes by integrating sustainability metrics aligned with SDGs into the evaluation of distillation processes. Our work proposes a generally applicable framework to quantify the sustainability aspects of the processes, which could be used to identify how these processes can be improved by balancing cost-effectiveness and environmental impacts.

References

Karaman, O.F.; Lang P.; Hegely L. 2024a. Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with Heat Integration. Energy (submitted).

Karaman, O.F.; Lang, P.; Hegely, L. 2024b. Economic and Environmental Optimisation of Acetone Recovery by Batch Distillation. Proceedings of the 27th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. Paper: PRES24.0144.



Strategies for a more Resilient Green Haber-Bosch Process

José M. Pires1,2, Diogo Narciso2, Carla I. C. Pinheiro1

1Centro de Química Estrutural, Institute of Molecular Sciences, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Portugal; 2Centro de Recursos Naturais e Ambiente, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Portugal

With a global production of 183 million metric tons in 2020 [1], ammonia (NH3) stands out as one of the most important commodity chemicals on the global scene, alongside ethylene and propylene. Despite 85% of all ammonia produced being used in fertilizer production [1], its applications extend beyond the agri-food sector. The Haber-Bosch (HB) process has historically enabled large-scale ammonia production, supporting agricultural practices in response to the unprecedented population growth over the past century, but it also accounts for 1.2% of global anthropogenic CO2 emissions [2]. In the ongoing energy transition, Power-to-X systems have emerged as promising solutions for both i) storing renewable energy, and ii) producing chemicals or fuels. The green HB (gHB) process, powered entirely by green electricity, can be viewed as a Power-to-Ammonia (PtA) system. In this process, hydrogen from electrolysis and nitrogen from an air separation unit are compressed and fed into the NH3 synthesis loop, whose general configuration mirrors that of the conventional HB process. However, the intermittent nature of renewable energy means hydrogen production is not constant over time. Therefore, in a PtA system, the NH3 synthesis loop must be operated dynamically, which presents a major operational challenge.

Dynamic operation of NH3 converters is typically associated with reaction extinction and sustained temperature oscillations (known as limit cycles), which can severely damage the catalyst. This work is situated in this context, with the development of a high-fidelity model of the gHB process using gPROMS Process. As various process flexibilization measures have already been proposed in the literature and industrial patents [3,4], this work aims to test some of these measures, or combinations thereof, by quantitatively assessing their impacts on the process. The process is first modelled and simulated at a nominal process load, followed by a flexibility analysis in which partial loads are introduced to observe their effects on process responsiveness and resilience. Essentially, all proposed measures boil down to maintaining high loop pressure, a key aspect consistently addressed in the patents, which can be achieved by exploiting the ammonia synthesis reaction equilibrium. Therefore, measures that shift the equilibrium towards the reactants side are particularly relevant for this analysis, as they lead to an increase in the number of moles leaving the reactor. Increasing the reactor operating temperature and the NH3 fraction in the reactor feed are some of the proposed possibilities, but these are complex, as they affect the intricate reaction dynamics and may cause reactor overheating or even reaction extinction. Other possibilities include reducing the reactor flow or, in the worst case, decreasing the loop pressure [3].

[1] IRENA & AEA. (2022). Innovation outlook: renewable ammonia.

[2] Smith, C. et al. (2020). Current and future role of Haber-Bosch ammonia in a carbon-free energy landscape. Energy Environ. Sci., 13(2), 331-344.

[3] Fahr, S. et al. (2023). Design and thermodynamic analysis of a large-scale ammonia reactor for increased load flexibility. Chemical Engineering Journal, 471, 144612.

[4] Ostuni, R. & Zardi, F. (2016). Method for load regulation of an ammonia plant (U.S. Patent No. 9463983).



Process simulation and thermodynamic analysis of newly synthesized pre-combustion CO2 capture system using novel Ionic liquids for H2 production

Sadah Mohammed, Fadwa Eljack

Qatar University, Qatar

Deploying fossil fuels to meet global energy needs has increased greenhouse gas emissions, mainly CO2, contributing to climate change. Therefore, transitioning toward clean energy sources is crucial for a sustainable low-carbon economy. Hydrogen (H2) is a viable decarbonization option, but its production via steam methane reforming (SMR) emits significant CO2 [1]. Integrating abatement technology, such as pre-combustion CO2 capture, into the SMR process can reduce carbon intensity. Pre-combustion systems are effective for high-pressure streams rich in CO2, making them suitable for H2 production. In this regard, solvent selection is crucial in designing effective CO2 capture systems, considering several factors such as eco-toxicity, reduced irreversibility, and maximized energy efficiency. In this context, ionic liquids (ILs) have become increasingly popular for their low regeneration energy, making them well-suited for pre-combustion applications.

The main goal of this work is to synthesize a pre-combustion CO2 capture system using newly designed ILs and conduct a thermodynamic analysis regarding energy requirements and exergy loss. These novel ILs are synthesized using a predictive deep-learning model developed in our previous work [2]. Before assessing the performance of the novel ILs, an eco-toxicity analysis is conducted using the ADMETlab 2.0 web tool to ensure their environmental suitability. The novel ILs are then defined in the simulation software Aspen Plus, following the integrated modified translation-rotation-internal coordinate (TRIC) system with the COSMO-based/Aspen approach developed in our previous publication [3]. The steady-state pre-combustion CO2 capture process suggested by Zhai and Rubin [4] is then simulated in Aspen Plus V12 to treat the syngas stream with high CO2 concentration (16.27% CO2). The suggested process configuration is modified to employ an IL-based absorption system suitable for processing large-scale syngas streams, enhancing CO2 removal and H2 purity under high-pressure conditions. Finally, a comprehensive energy and exergy analysis is carried out to quantify the thermodynamic deficiencies of the developed system based on the performance of the novel ILs. This work is essential as it provides insights into the overall CO2 capture system efficiency and the sources of irreversibility to ensure an eco-friendly and optimal process design.

Reference

[1] S. Mohammed, F. Eljack, S. Al-Sobhi, and M. K. Kazi, “A systematic review: The role of emerging carbon capture and conversion technologies for energy transition to clean hydrogen,” J. Clean. Prod., vol. 447, no. May 2023, p. 141506, 2024, doi: 10.1016/j.jclepro.2024.141506.

[2] S. Mohammed, F. Eljack, M. K. Kazi, and M. Atilhan, “Development of a deep learning-based group contribution framework for targeted design of ionic liquids,” Comput. Chem. Eng., vol. 186, no. January, p. 108715, 2024, doi: 10.1016/j.compchemeng.2024.108715.

[3] S. Mohammed, F. Eljack, S. Al-Sobhi, and M. K. Kazi, “Simulation and 3E assessment of pre-combustion CO2 capture process using novel Ionic liquids for blue H2 production,” Comput. Aided Chem. Eng., vol. 53, pp. 517–522, Jan. 2024, doi: 10.1016/B978-0-443-28824-1.50087-9.

[4] H. Zhai and E. S. Rubin, “Systems Analysis of Physical Absorption of CO2 in Ionic Liquids for Pre-Combustion Carbon Capture,” Environ. Sci. Technol., vol. 52, no. 8, pp. 4996–5004, 2018, doi: 10.1021/acs.est.8b00411.



Mechanistic and Data-Driven Models for Predicting Biogas Production in Anaerobic Digestion Processes

Rohit Murali1, Benaissa Dekhici1, Michael Short1, Tao Chen1, Dongda Zhang2

1University of Surrey, United Kingdom; 2University of Manchester, United Kingdom

Anaerobic digestion (AD) plays a crucial role in renewable energy production by converting organic waste into biogas in the absence of oxygen. However, accurately forecasting biogas production for real-time applications in AD plants remains a challenge due to the complex and dynamic nature of the AD process. Despite the extensive literature on decision-making in AD, there are currently no industrially applicable tools available for operators that can aid in predicting biogas output for site-specific applications. Mechanistic models are valuable tools for controlling systems, estimating states and parameters, designing reactors, and optimising operations. They can also predict biological system behaviour, reducing the need for time-consuming and expensive experiments. To ensure effective control, state estimation, and future predictions, models that faithfully represent the AD process are essential.

In this study, we present a comparative analysis of two modelling approaches, mechanistic and data-driven, focusing on their ability to predict biogas production from a lab-scale anaerobic digester. Our work includes the development of a simple mechanistic model based on two states, the biomass concentration and the substrate concentration, which incorporates Haldane kinetics to simulate and predict biogas production over time. The model was optimised using experimental data, with key kinetic parameters tuned via non-linear regression to minimise prediction error. While the mechanistic model demonstrated reasonable accuracy in predicting output trends, it fails to accurately characterise feedstock and biomass concentrations for future predictions. A more detailed model, such as the Anaerobic Digestion Model No. 1 (ADM1), could offer a more accurate representation. However, its complexity, with 35 state variables and over 100 parameters, many of which are rarely measured at AD plants, makes it impractical for real-time applications.
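
A minimal sketch of a two-state model with Haldane (substrate-inhibition) kinetics, in the spirit of the simple mechanistic model described above, is shown below; the parameter values and the gas-yield term are illustrative assumptions:

```python
# Two-state chemostat-style AD model with Haldane kinetics; all parameter
# values and the biogas-rate proxy are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, K_i = 0.4, 2.0, 40.0       # 1/d, g/L, g/L
Y_xs, Y_gas, k_d = 0.1, 0.35, 0.02      # biomass yield, gas yield, decay rate
D, S_in = 0.1, 20.0                      # dilution rate [1/d], feed substrate [g/L]

def haldane(S):
    return mu_max * S / (K_s + S + S**2 / K_i)   # growth rate with substrate inhibition

def rhs(t, z):
    X, S = z                                      # biomass and substrate concentrations
    mu = haldane(S)
    dX = (mu - k_d - D) * X
    dS = D * (S_in - S) - mu * X / Y_xs
    return [dX, dS]

sol = solve_ivp(rhs, (0, 60), [0.5, 5.0], dense_output=True)
X, S = sol.y
q_gas = Y_gas * haldane(S) * X / Y_xs             # biogas production-rate proxy, illustrative
print("final biomass, substrate, gas rate:", X[-1], S[-1], q_gas[-1])
```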

To address these limitations, we compared the mechanistic model's performance to a data-driven approach using a Long Short-Term Memory (LSTM) neural network. The LSTM model was trained on lab-scale AD data and demonstrated a closer fit to the experimental results than the simple mechanistic model, proving to be a more accurate alternative for predicting biogas production. The LSTM model was also applied to a larger industrial dataset from an AD site, showing strong predictive capabilities and offering a practical alternative to time- and resource-intensive experimental analysis.

The mechanistic model, while valuable for providing insights into the biochemical processes of AD, achieved an R2 value of 0.82, indicating moderate accuracy in capturing methane production. In contrast, the LSTM model for the lab-scale dataset demonstrated significantly better predictive capabilities, with R2 values ranging between 0.93 and 0.98, indicating a strong fit to the experimental data. When applied to a larger industrial dataset, the LSTM model continued to perform well, with R2 values between 0.95 and 0.97. These results demonstrate the LSTM model's superior ability to capture temporal dependencies and handle both lab-scale and industrial data, making it a promising tool for deployment in large-scale AD plants. Its robust performance across different scales highlights its potential for optimising biogas production in real-world applications.



Application and comparison of optimization methods for an Energy Mix optimization problem

Julien JEAN VICTOR1, Zakaria Adam SOULEYMANE2, Augustin MPANDA2, Philippe TRUBERT3, Laurent FONTANELLI1, Sebastien POTEL1, Arnaud DUJANY1

1UniLaSalle, UPJV, B2R GeNumEr, U2R 7511, 60000 Beauvais, France; 2UniLaSalle, UPJV, B2R GeNumEr, U2R 7511, 80000 Amiens, France; 3Syndicat mixte de l'aéroport de Beauvais-Tillé (SMABT), 1 rue du Pont de Paris - 60000 Beauvais

In the last decades, governmental and intergovernmental policies have evolved in response to the global rise of climate change awareness. Ecological considerations have taken on predominant importance in the conception of energy mixes, and renewable energy sources are now widely preferred to fossil fuels. Simultaneously, the availability of critical resources such as energy is highly sensitive to geopolitical relationships. It is therefore important for territories at various scales to develop their energy mixes and achieve energy independence [IRENA, 2022]. The development of optimized, renewable and local energy mixes is therefore strongly supported by the current economic, political and environmental situation [Østergaard and Sperling, 2014].

Multiple studies have aimed to optimize renewable energy technologies and facility locations to develop more renewable and efficient energy mixes. A majority of these optimization problems are solved using MILP, MINLP or heuristic algorithms. This study aims to assess and compare optimization methods for the environmental and economic optimization of an infrastructure's energy mix. It focuses on yearly production potential at a regional scale and therefore does not consider decomposition or stochastic optimization methods, which are better suited to problems involving temporal variation or multiple time periods. From existing methods in the energy mix literature, Goal Programming, Branch-and-Cut and NSGA-II were selected because they are widely used and cover different problem formulations [Jaber et al, 2024] [Moret et al, 2016]. These methods will be applied to a case study and compared based on their specificities and the solutions they provide.

After a census of the energy resources already in place in the target territory, the available energy mix undergoes a carbon footprint evaluation that serves as the environmental component of the problem. The economic component is an aggregation of operating, maintenance and installation costs. The two components constitute the objectives of the problem, either treated separately or weighted into a single objective function. The three selected methods are applied to the problem, and the results provided by each are gathered and assessed based on criteria including optimality, diversity of solutions, and sensitivity to constraints and settings. The assessed methods are then compared based on these criteria, so that the strengths and weaknesses of each method for this specific problem can be identified. The goal is to identify the best-fitting methods for such a problem, which may lead to the design of a hybrid method ideally suited to the energy mix optimization problem.
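
For illustration, the weighted single-objective formulation mentioned above can be sketched as a small linear program; the technologies, costs, carbon factors, and weights below are invented and are not taken from the case study:

```python
# Weighted-sum sketch of a two-objective energy mix problem solved as an LP;
# all data below are illustrative placeholders.
from scipy.optimize import linprog

techs   = ["solar PV", "wind", "biomass CHP", "grid import"]
cost    = [60.0, 55.0, 90.0, 110.0]     # EUR/MWh
carbon  = [25.0, 12.0, 45.0, 300.0]     # kgCO2/MWh
cap_max = [30.0, 40.0, 20.0, 100.0]     # available yearly potential, GWh
demand  = 80.0                          # yearly demand to cover, GWh

w_cost, w_carbon = 0.5, 0.5             # weights aggregating the two objectives
c = [w_cost * c_ + w_carbon * e_ for c_, e_ in zip(cost, carbon)]

res = linprog(
    c,
    A_ub=[[-1.0] * len(techs)], b_ub=[-demand],      # total production must cover demand
    bounds=list(zip([0.0] * len(techs), cap_max)),
)
print(dict(zip(techs, res.x.round(1))))              # GWh allocated to each technology
```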

International Renewable Energy Agency (IRENA). (2022). Geopolitics of the energy transformation: The hydrogen factor. Retrieved August 2024, from https://www.irena.org/Digital-Report/Geopolitics-of-the-Energy-Transformation

Jaber, A., Younes, R., Lafon, P., Khoder, J. (2024). A review on multi-objective mixed-integer non-linear optimization programming methods. Engineering, 5(3), 1961-1979. https://doi.org/10.3390/eng5030104

Moret, S., Bierlaire, M., Maréchal, F. (2016). Strategic energy planning under uncertainty: A mixed-integer linear programming modeling framework for large-scale energy systems. In Z. Kravanja & M. Bogataj (Eds.), Computer aided chemical engineering (Vol. 38, pp. 1899–1904). Elsevier. https://doi.org/10.1016/B978-0-444-63428-3.50321-0

Østergaard, P. A., Sperling, K. (2014). Towards sustainable energy planning and management. International Journal of Sustainable Energy Planning and Management, 1, 1-10. https://doi.org/10.5278/IJSEPM.2014.1.1



Insights into the Development and Implementation of Soft Sensors in Industrial Settings

Shweta Mohan Nagrale1, Abhijit Bhakte1, Rajagopalan Srinivasan1,2

1Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai, 600036, India; 2American Express Lab for Data Analytics, Risk & Technology, Indian Institute of Technology Madras, Chennai, 600036, India

Soft sensors offer a viable solution for industries where key quality variables cannot be measured frequently. By utilizing readily available process measurements, soft sensors provide frequent estimates of quality variables, thus avoiding the delays typically associated with traditional analyzers. They enhance efficiency and economic performance while improving process control and decision-making.

The literature outlines several challenges in deploying soft sensors within industrial environments. Laboratory measurements are crucial for developing, calibrating, and validating models. Wang et al. (2010) emphasized the mismatch between high-frequency process data and infrequent lab measurements, necessitating down-sampling and, consequently, significant data loss. The high dimensionality of process data and multicollinearity complicate model building. Additionally, time delays and varying operational regimes complicate data alignment and model generalization. Without continuous adaptation, soft sensor models risk becoming outdated, reducing their predictive accuracy (Kay et al., 2024). Online learning and model updates are vital for maintaining performance amid changing conditions and sensor drift. Effective imputation techniques and outlier management are also essential to prevent model distortion. Integrating soft sensors into the DCS and designing suitable human-machine interaction also present unique challenges.

This work presents practical strategies for developing and implementing soft sensors in real-world refineries. By monitoring key quality parameters such as Distillation-95 and Research Octane Number (RON), these sensors provide timely, precise estimations that enhance prediction and process control. We gathered process data at 5-minute intervals and weekly laboratory data over two years. We then utilized data preprocessing techniques and clustering methods to distinguish steady-state and transient regimes. Feature engineering strategies were used to address high dimensionality. Simpler models such as Partial Least Squares (PLS) were chosen for quality prediction because of their balance of accuracy and interpretability. This enables operators to make informed, data-driven decisions and to respond quickly to changes without waiting for traditional laboratory analyses. In this paper, we discuss how the resulting soft sensor can offer numerous benefits, such as detecting quality issues early, minimizing downtime, and optimizing resource allocation; it thus serves as a tool for continuous process improvement. Finally, the user interface can play a significant role in fostering trust among plant personnel, ensuring easy access to predictions, and explicitly highlighting the soft sensor's confidence in its predictions.
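
A minimal sketch of a PLS-based soft sensor of this kind is shown below, using synthetic data in place of the refinery tags and lab values:

```python
# PLS soft-sensor sketch: a matrix of routine process measurements X aligned
# with sparse lab values y; the data here are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                         # 300 aligned samples, 20 process tags
y = X[:, :3] @ np.array([1.2, -0.7, 0.4]) + rng.normal(scale=0.2, size=300)   # e.g. lab RON

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)    # few latent variables handle collinearity
print("hold-out R2:", round(pls.score(X_te, y_te), 3))
```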

References

Wang, D., Liu, J., & Srinivasan, R. (2010). Data-Driven Soft Sensor Approach for Quality Prediction in a Refining Process. IEEE Transactions on Industrial Informatics, 6, 11-17. doi: 10.1109/TII.2009.2025124.

Sam Kay, Harry Kay, Max Mowbray, Amanda Lane, Cesar Mendoza, Philip Martin, Dongda Zhang, Integrating transfer learning within data-driven soft sensor design to accelerate product quality control, Digital Chemical Engineering, Volume 10, 2024, 100142, ISSN 2772-5081, https://doi.org/10.1016/j.dche.2024.100142.

R. Nian, A. Narang and H. Jiang, "A Simple Approach to Industrial Soft Sensor Development and Deployment for Closed-Loop Control," 2022 IEEE International Symposium on Advanced Control of Industrial Processes (AdCONIP), Vancouver, BC, Canada, 2022, pp. 261-262, doi: 10.1109/AdCONIP55568.2022.9894185.



Synthesis of Distillation Flowsheets with Reinforcement Learning using Transformer Blocks

Niklas Slager, Meik Franke

Faculty of Science and Technology, University of Twente, the Netherlands

Process synthesis is one of the main tasks of chemical engineers and has a major influence on CAPEX and OPEX in the early design phase of a project. Broadly, there are two different approaches: heuristic approaches and superstructure optimization approaches. Heuristic approaches provide quick and often satisfying solutions, but due to their non-quantitative nature, promising options might eventually be overlooked. On the other hand, superstructure optimization approaches are quantitative, but their formulation and solution are difficult and time-consuming. Furthermore, they require the optimal solution to be embedded within the superstructure and cannot be applied to open-ended problems.

Reinforcement learning (RL) offers the potential to solve open-ended process synthesis problems. RL is a type of machine learning (ML) in which an agent makes decisions (actions) at a current state within an environment to maximise an expected reward, e.g., revenue. A few publications have dealt with the design of chemical processes [1,2,3,4]. An overview of reinforcement learning methods for process synthesis is given in [5]. Special attention must be paid to the principle of data input embedding. Data embeddings transform raw data (e.g., states, actions) into a form suitable for model processing. Effective embeddings capture the variance and structure of the data to ensure the model learns meaningful patterns. Most authors use Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs). However, CNNs and GNNs generally struggle to capture long-range dependencies.

A fundamentally different methodology for permutation-equivariant data processing comes in the form of transformer blocks [6]. Transformer blocks are built on an attention principle, where relations in input data are weighted, and more attention is paid (a higher weight factor is assigned) to relationships having a stronger effect on the outcome.
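
A minimal numpy sketch of scaled dot-product attention, the weighting mechanism referred to above, is given below; the shapes and embedding dimension are arbitrary choices, not the authors' architecture:

```python
# Scaled dot-product self-attention over a set of input embeddings.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise relevance of inputs to queries
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: higher weight = more attention
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(7, 16))                   # e.g. embeddings of 7 stream states
out, w = attention(tokens, tokens, tokens)          # self-attention over the whole set
print(out.shape, w.shape)                           # (7, 16) (7, 7)
```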

To demonstrate the applicability of the method, the separation of an ideal seven-component hydrocarbon mixture is investigated. The RL training session took 2 hours, much faster than reported sessions on similar five-component problems in [1]. The full recovery of the seven components was achieved using a minimum of six separation units designed by the RL agent. However, it cannot be claimed that the learning progress was reliable, as minor deviations in hyperparameters easily led to sub-optimal policies, which will be investigated further.

[1] van Kalmthout, S., Midgley, L. I., Franke, M. B. (2022). https://arxiv.org/abs/2211.04327.

[2] Stops, L., Leenhouts, R., Gao, Q., Schweidtmann, A. M. (2022). AIChE Journal, 69(1).

[3] Goettl, Q., Pirnay, J., Burger, J. Grimm, D. G. (2023). arXiv:2310.06415v1.

[4] Wang, D., et al., (2023). Energy Advances, 2.

[5] Gao, Q., Schweidtmann, A. M. (2024). Current Opinion in Chemical Engineering, 44, 101012.

[6] Vaswani, A., et al. (2023). Attention is all you need. https://arxiv.org/abs/1706.03762.



Machine Learning Surrogate Models for Atmospheric Dispersion: A Time-Efficient Approach to Air Quality Prediction

Omar Hassani Zerrouk1,2, Eva Gallego1, Jose Francisco Perales1, Moisès Graells1

1Polytechnic University of Catalonia, Spain; 2Abdelmalek Essaadi University, Morocco

Atmospheric dispersion models are traditionally used to estimate the impact of pollutants on air quality, relying on complex models and requiring extensive computational resources. This hinders the development of practical real-time solutions for anticipating the effects of incidental plant emissions. To address these limitations, this study explores the use of machine learning algorithms as surrogate models: faster, less resource-intensive alternatives to traditional dispersion models that aim to replicate their results while reducing computational complexity.

Recent studies have explored machine learning as surrogate models for atmospheric dispersion. Kocijan et al. (2023) and Huang et al. (2020) demonstrated the potential of using tree-based techniques to predict air quality, while Gao et al. (2019) used hybrid LSTM-ARIMA models for PM2.5 forecasting. However, most approaches focus on specific algorithms or pollutants. This study provides a broader evaluation of various models, including Regression, Random Forest, Gradient Boosting, and deep learning, across multiple pollutants and meteorological variables.

This study evaluates machine learning models using data from traditional dispersion models for pollutants like NO₂, NOx, SO₂, PM, and meteorological variables. We combined localized meteorological data from municipalities with dispersion data computed using the Eulerian model TAPM (Hurley et al., 2005).

The best-performing models were Gradient Boosting and Random Forest, with MSE values of 1.23 and 1.39, and R² values of 0.94. These models effectively captured nonlinear relationships between meteorological conditions and pollutant concentrations, demonstrating their capacity to handle complex environmental interactions. In contrast, traditional regression models, like Ridge and Lasso, underperformed with MSE values of 14.62 and 17.50, and R² values of 0.40 and 0.28, struggling with data complexity. Similarly, deep learning models such as LSTM and GRU showed weaker performance, with MSE values of 27.43 and 26.73, and R² values of -0.11 and -0.08, suggesting that the data relationships were more influenced by instantaneous features than long-term temporal patterns.

Feature importance was analysed using permutation and standard metrics, revealing that variables related to atmospheric dispersion and stability, such as wind direction, stability, and solar radiation, were the most significant in predicting pollutant concentrations. Time-derived variables, like day or hour, were less relevant, likely because their effects were captured by other environmental factors. This highlights the potential of ML-based surrogate models as efficient alternatives to traditional dispersion models for air quality monitoring.
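
A sketch of this surrogate-plus-permutation-importance workflow is shown below under stated assumptions: the feature names and the synthetic target stand in for the TAPM-derived dataset and do not reproduce the reported results:

```python
# Gradient-boosting surrogate with permutation feature importance; data synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "wind_dir": rng.uniform(0, 360, 2000), "wind_speed": rng.uniform(0, 12, 2000),
    "stability_class": rng.integers(1, 7, 2000), "solar_rad": rng.uniform(0, 900, 2000),
    "hour": rng.integers(0, 24, 2000),
})
y = 0.02 * X["solar_rad"] + 3 * X["stability_class"] - 0.5 * X["wind_speed"] + rng.normal(0, 1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbr = GradientBoostingRegressor().fit(X_tr, y_tr)
print("hold-out R2:", round(gbr.score(X_te, y_te), 2))
imp = permutation_importance(gbr, X_te, y_te, n_repeats=10, random_state=0)
print(dict(zip(X.columns, imp.importances_mean.round(3))))   # which inputs drive predictions
```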

References

1. Kocijan, J., Hvala, N., Perne, M. et al. Surrogate modelling for the forecast of Seveso-type atmospheric pollutant dispersion. Stoch Environ Res Risk Assess 37, 275–290 (2023).

2. Huang, Y., Ding, H., & Hu, J. (2020). A review of machine learning methods for air quality prediction: Challenges and opportunities. Environmental Science and Pollution Research, 27(16), 19479-19495.

3. Gao, H., Zhang, H., Chen, X., & Zhang, Y. (2019). A hybrid model based on LSTM neural and ARIMA for PM2.5 forecasting. Atmospheric Environment, 198, 206-213.

4. Hurley, P. J., Physick, W. L., Luhar, A. K (2005). TAPM: a practical approach to prognostic meteorological and air pollution modelling, Environmental Modelling & Software, 20(6), 737-752.



Comparative Analysis of PharmHGT, GCN, and GAT Models for Predicting LogCMC in Surfactants

Gabriela Carolina Theis Marchan, Teslim Olayiwola, Andrew N Okafor, Jose Romagnoli

LSU, United States of America

Predicting the critical micelle concentration (LogCMC) of surfactants is essential for optimizing their applications in various industries, including pharmaceuticals, detergents, and emulsions. In this study, we investigate the performance of graph-based machine learning models, specifically Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), and a graph-transformer model, PharmHGT, for predicting LogCMC values. We aim to determine the most effective model for capturing the structural and physicochemical properties of surfactants. Our results provide insights into the relative strengths of each approach, highlighting the potential advantages of transformer-based architectures like PharmHGT in handling molecular graph representations compared to traditional graph neural networks. This comparative study serves as a step towards enhancing the accuracy of LogCMC predictions, contributing to the efficient design of surfactants for targeted applications.
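
For readers unfamiliar with the graph-convolution mechanism the compared models build on, a minimal numpy sketch of one message-passing step on a toy molecular graph is given below; it is illustrative only and not the PharmHGT, GCN, or GAT implementations used in the study:

```python
# One graph-convolution (GCN-style) layer on a toy 4-node molecular graph.
import numpy as np

A = np.array([[0, 1, 0, 0],                          # adjacency matrix of the toy graph
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))     # initial node (atom) features
W = np.random.default_rng(1).normal(size=(8, 8))     # learnable layer weights

A_hat = A + np.eye(4)                                 # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)   # normalized aggregation + ReLU
graph_embedding = H.mean(axis=0)                      # simple readout for property regression
print(graph_embedding.shape)                          # (8,), fed to a head predicting logCMC
```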



Short-cut Correlations for CO2 Capture Technologies in Small Scale Applications

So-mang Kim, Joanne Kalbusch, Grégoire Léonard

University of Liege, Belgium

Carbon capture (CC) is crucial for achieving net-zero emissions and mitigating climate change. Despite its critical importance, the current deployment of carbon capture technologies remains insufficient to meet the climate target, indicating an urgency to increase the number of carbon capture applications. Emission sources vary significantly in capture scale, with large-scale emitters benefiting from economies of scale, while smaller-scale applications are often neglected. However, to achieve an economy with net-zero emissions, CC applications at various emission levels are necessary.

While many studies on carbon capture technologies highlight capture cost as a key performance indicator (KPI), there is currently no standardized method in the literature to estimate the cost of carbon capture, leading to inconsistencies and incomparable results. This makes it challenging for decision-makers to fairly compare and identify suitable carbon capture options based on the literature results, hindering the deployment of CC units. In addition, conducting detailed simulations and Techno-Economic Assessments (TEAs) to identify viable capture options across various scenarios can be time-consuming and requires significant effort.

To address the aforementioned challenges, this work develops short-cut correlations describing the total equipment cost (TEC) and energy consumption of selected carbon capture technologies for small-scale capture applications. This allows exploration of the role of CC in small-scale industries and offers a practical framework for evaluating the technical and economic viability of various CO₂ capture systems. The goal is to provide an efficient approach for decision-makers to estimate the cost of carbon capture without the need for extensive simulations and detailed TEAs, while ensuring that consistent assumptions and cost-estimation methods are applied across comparison studies. The correlations are also flexible, allowing various cost-estimation methods and case-specific assumptions to fine-tune the analyses for different scenarios.
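
As a hedged illustration of the form such short-cut correlations can take, a simple power law in capture capacity can be fitted to a handful of techno-economic points; the numbers below are invented and are not the correlations developed in this work:

```python
# Fit an illustrative power-law short-cut correlation TEC = a * Q^b to dummy data.
import numpy as np
from scipy.optimize import curve_fit

capacity = np.array([0.5, 1.0, 2.0, 5.0, 10.0])       # ktCO2 captured per year (dummy)
tec      = np.array([1.1, 1.7, 2.6, 4.8, 7.5])        # total equipment cost, MEUR (dummy)

def power_law(Q, a, b):
    return a * Q**b                                    # short-cut form: TEC = a * capacity^b

(a, b), _ = curve_fit(power_law, capacity, tec)
print(f"TEC = {a:.2f} * Q^{b:.2f}; 1.5 ktCO2/y gives {power_law(1.5, a, b):.2f} MEUR")
```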

The shortcut correlations can offer valuable insights into small-scale carbon capture (CC) applications by identifying scenarios that enhance their feasibility, such as integrating small-scale carbon capture with waste heat and renewable energy sources. They also facilitate the exploration of various spatial configurations, including the deployment of multiple small-scale capture units versus combining flue gases from small-scale sources into a single larger CC unit. The shortcut correlations are envisioned to improve the accessibility of carbon capture technologies for small-scale industries.



Mixed-Integer Bilevel Optimization Problem Generator and Library for Algorithm Evaluation and Development

Meng-Lin Tsai, Styliani Avraamidou

University of Wisconsin-Madison, United States of America

Bilevel optimization, characterized by nested optimization problems, has gained prominence in modeling two-player interactions across various domains, including environmental policy (Beykal et al. 2020) and hierarchical control (Avraamidou et al., 2017). Despite its wide applicability, bilevel optimization is known to be NP-hard. Mixed-integer bilevel optimization problems are even more challenging to solve (Kleinert et al. 2021), prompting the development of diverse solution methods, such as Benders decomposition (Saharidis et al. 2009), multiparametric optimization (Avraamidou et al. 2019), penalty functions (Dempe et al. 2005), and branch-and-bound/cut algorithms (Fischetti et al. 2018). However, due to the large variety of problem types (type of variables, constraints, objective functions), the field lacks standardized benchmark problems. Random problem generators are commonly used to generate problems for algorithm evaluation (Avraamidou et al. 2019), but they often produce trivial bilevel problems, defined as those where the high-point relaxation solution is already bilevel-feasible.

In this work, we investigate the prevalence of trivial problems across different problem structures (LP-LP, ILP-ILP, MILP-MILP) and sizes (number of upper/lower variables, binary/continuous variables, constraints), and we reveal how problem structure and size influence the probability of trivial problems occurring. We introduce a new bilevel problem generator, coded in Python using Gurobi as a solver, designed to create non-trivial bilevel problem instances of a chosen type and size. A library of 200 randomly generated problems of different sizes and types will also be part of the tool and will be available online. The proposed tool aims to enhance the robustness of bilevel optimization algorithm testing by ensuring that generated problems provide a meaningful challenge to the solver, offering a reliable method for algorithm evaluation, and accelerating the development of efficient solvers for complex, real-world bilevel optimization problems.
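
A sketch of the triviality test underlying such a generator is given below, under stated assumptions: a random ILP-ILP instance with only lower-level constraints, solved with gurobipy; the sizes and coefficient ranges are illustrative and do not reproduce the released library:

```python
# Draw a random ILP-ILP bilevel instance, solve its high-point relaxation (HPR),
# then check whether the HPR solution is already bilevel-feasible (trivial),
# i.e. whether its lower-level part is an optimal follower response.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(0)
n_x, n_y, m_lo = 3, 3, 4
c_x, c_y = rng.integers(-9, 10, n_x), rng.integers(-9, 10, n_y)   # leader objective
d_y = rng.integers(-9, 10, n_y)                                    # follower objective
G_x, G_y = rng.integers(-5, 6, (m_lo, n_x)), rng.integers(-5, 6, (m_lo, n_y))
b_lo = rng.integers(10, 30, m_lo)

def build(fixed_x=None):
    m = gp.Model()
    m.Params.OutputFlag = 0
    x = m.addVars(n_x, vtype=GRB.INTEGER, lb=0, ub=10)
    y = m.addVars(n_y, vtype=GRB.INTEGER, lb=0, ub=10)
    for r in range(m_lo):                                          # lower-level (coupling) constraints
        m.addConstr(gp.quicksum(int(G_x[r, i]) * x[i] for i in range(n_x))
                    + gp.quicksum(int(G_y[r, j]) * y[j] for j in range(n_y)) <= int(b_lo[r]))
    if fixed_x is None:                                            # HPR: leader objective over all constraints
        m.setObjective(gp.quicksum(int(c_x[i]) * x[i] for i in range(n_x))
                       + gp.quicksum(int(c_y[j]) * y[j] for j in range(n_y)), GRB.MINIMIZE)
    else:                                                          # follower's problem for a fixed leader decision
        for i in range(n_x):
            x[i].LB = x[i].UB = fixed_x[i]
        m.setObjective(gp.quicksum(int(d_y[j]) * y[j] for j in range(n_y)), GRB.MINIMIZE)
    m.optimize()
    return m, x, y

hpr, x, y = build()
x_star = [round(x[i].X) for i in range(n_x)]
hpr_follower_value = sum(d_y[j] * y[j].X for j in range(n_y))
follower, _, _ = build(fixed_x=x_star)                             # follower's best response
trivial = abs(hpr_follower_value - follower.ObjVal) < 1e-6
print("trivial" if trivial else "non-trivial")
```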

References

Avraamidou, S., & Pistikopoulos, E. N. (2017). A multi-parametric bi-level optimization strategy for hierarchical model predictive control. In Computer Aided Chemical Engineering (Vol. 40, pp. 1591-1596). Elsevier.

Avraamidou, S., & Pistikopoulos, E. N. (2019). A multi-parametric optimization approach for bilevel mixed-integer linear and quadratic programming problems. Computers & Chemical Engineering, 125, 98-113.

Beykal, B., Avraamidou, S., Pistikopoulos, I. P., Onel, M., & Pistikopoulos, E. N. (2020). Domino: Data-driven optimization of bi-level mixed-integer nonlinear problems. Journal of Global Optimization, 78, 1-36.

Dempe, S., Kalashnikov, V., & Rıos-Mercado, R. Z. (2005). Discrete bilevel programming: Application to a natural gas cash-out problem. European Journal of Operational Research, 166(2), 469-488.

Fischetti, M., Ljubić, I., Monaci, M., & Sinnl, M. (2018). On the use of intersection cuts for bilevel optimization. Mathematical Programming, 172(1), 77-103.

Kleinert, T., Labbé, M., Ljubić, I., & Schmidt, M. (2021). A survey on mixed-integer programming techniques in bilevel optimization. EURO Journal on Computational Optimization, 9, 100007.

Saharidis, G. K., & Ierapetritou, M. G. (2009). Resolution method for mixed integer bi-level linear problems based on decomposition technique. Journal of Global Optimization, 44, 29-51.



Surface Tension Data Analysis for Advancing Chemical Engineering Applications

Ulderico Di Caprio1, Flora Esposito1, Bruno C. L. Rodrigues2, Idelfonso Bessa dos Reis Nogueira2, Mumin Enis Leblebici1

1Center for Industrial Process Technology, Department of Chemical Engineering, KU Leuven, Agoralaan Building B, 3590 Diepenbeek, Belgium; 2Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 6, Kjemiblokk 4, Trondheim 7043, Norway

Surface tension plays a critical role in numerous aspects of chemical engineering, influencing key processes such as mass transfer, fluid dynamics, and the behavior of multiphase systems. Accurate surface tension data are essential for the design of separation processes, reactor optimization, and the development of advanced materials. However, despite its importance, the availability of comprehensive, high-quality experimental data has lagged behind modern research needs, limiting progress in fields where precise interfacial properties are crucial.

In this work, we address this gap by revisiting a vast compilation of experimental surface tension data published in 1972. Originally recognized for its breadth and accuracy, this compilation has remained largely inaccessible to the modern scientific community due to its outdated digital format. The digital version of the original document consists primarily of scanned images, making data extraction difficult and time-consuming for researchers. Manual transcription was often required, increasing the risk of human error and reducing efficiency for those seeking to use the data for new developments in chemical engineering.

Our project involves not only the digitalization of this critical dataset—transforming it into a machine-readable and easily accessible format with experimental measurements of surface tension for over 2000 substances across a wide range of conditions—but also an in-depth analysis aimed at identifying the key physical parameters that influence surface tension behavior. Using modern data extraction tools and statistical techniques, we have studied the relationships between surface tension and various physical properties. By analyzing these factors, we present insights into which features most strongly impact surface tension under different conditions.

This comprehensive dataset and accompanying feature analysis offer researchers a valuable foundation for exploring surface tension behavior across diverse areas of chemical engineering. We believe this will contribute to significant advancements in fields such as phase equilibrium, material design, and fluid mechanics, as well as support innovation in emerging technologies like microfluidics, nanotechnology, and sustainable process design.



Design considerations for hardware based acceleration of molecular dynamics

Joseph Middleton, Joan Cordiner

University of Sheffield, United Kingdom

As demand for long and accurate molecular simulations increases, so too does the computational demand. Beyond using new, enterprise-scale processor developments (such as the ARM Neoverse chips) or performing simulations leveraging GPU compute, there exists a potentially faster and more power-efficient option in the form of custom hardware. Using hardware description languages, it is possible to transform existing algorithms into custom, high-performance hardware layouts. This can lead to faster and more efficient simulations, but at the cost of development time and flexibility. To take the greatest advantage of the potential performance gains, the focus should be on transforming the most computationally expensive parts of the algorithms.

When performing molecular dynamics simulations in a polar solvent such as water, non-bonded electrostatic calculations dominate each simulation step, as the interactions between the solvent and the molecular structure are calculated. However, simply developing a non-bonded electrostatics co-processor may not be enough, as transferring data between the host program and the FPGA itself incurs a significant time delay. For any such design to be competitive with existing calculation solutions, the number of data transfers must be reduced. This could be achieved by simulating multiple time steps between memory transfers, which may impact accuracy, or by performing more of the calculations in the custom hardware.
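
As a toy illustration of why this kernel dominates, the naive pairwise Coulomb sum below scales as O(N²) with particle count; units, cutoffs, and periodic boundaries are deliberately omitted:

```python
# Naive all-pairs Coulomb energy: the O(N^2) hot spot one would target
# for acceleration; positions, charges, and units are arbitrary.
import numpy as np

def coulomb_energy(pos, q):
    """Sum of q_i * q_j / r_ij over all unique particle pairs (no cutoff, no PBC)."""
    e = 0.0
    n = len(q)
    for i in range(n):                      # O(N^2) pair loop dominating each MD step
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            e += q[i] * q[j] / r
    return e

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 3.0, size=(200, 3))  # 200 particles in an arbitrary 3-unit box
q = rng.choice([-1.0, 1.0], size=200)
print(coulomb_energy(pos, q))
```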



A Novel AI-Driven Approach for Adsorption Parameter Estimation in Gas-Phase Fixed-Bed Experiments

Rui D. G. Matias1,2, Alexandre F. P. Ferreira1,2, Idelfonso B.R. Nogueira3, Ana Mafalda Ribeiro1,2

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The need to reduce greenhouse gas emissions has driven the shift toward renewable energy sources such as biogas. To use biogas as a substitute for natural gas, it must undergo a purification process to separate methane from carbon dioxide. Adsorption-based separation processes are standard methods for biogas separation (1).

Developing precise mathematical models that can accurately describe all the phenomena involved in the process is crucial for a deeper understanding and the creation of innovative control and optimization techniques for these systems.

By solving a system of coupled Partial Differential Equations, Ordinary Differential Equations, and Algebraic Equations, it is possible to accurately simulate the fixed-bed units used in these processes. However, a robust simulation and, consequently, a better understanding of the intrinsic phenomena governing these systems - such as adsorption isotherms, film and particle mass transfer, among others - heavily depends on carefully selecting parameters for these equations.

These parameters can be estimated using well-known mathematical correlations or trial and error. However, these methods often introduce significant errors (2). For a more accurate determination of parameters, an optimization algorithm can be employed to find the best set of parameters that minimize the difference between the simulation and experimental data, thereby providing a better representation and understanding of the real process.

Different optimization methodologies can be employed for this purpose. For example, deterministic methods are known for ensuring convergence to an optimal solution, but the selection of the starting point significantly impacts their performance (3). In contrast, meta-heuristic techniques are often preferred for their adaptability and efficiency, since they do not rely on predefined initial conditions (4). However, these approaches may not always guarantee finding the optimal solution for every problem.

A parameter estimation methodology based on Artificial Intelligence (AI) offers several advantages. AI algorithms can handle complex problems by processing high-dimensional data and modelling nonlinear relationships more accurately. Additionally, AI techniques, such as neural networks, do not rely on well-defined initial conditions, making them more robust and efficient in the search for global solutions, avoiding local minima traps. Beyond that, they also have the ability to continuously learn from new data, enabling dynamic adjustments.

This work presents an innovative methodology for estimating the isotherm parameters of a phenomenological mathematical model of fixed-bed experiments involving CO2 and CH4. By integrating Artificial Intelligence tools with the phenomenological model and experimental data, this approach develops an algorithm that generates parameter values for the process's mathematical model, resulting in simulation data with a close-to-optimal fit to the experimental points, leading to more accurate simulations and providing valuable insights into this separation.
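
A generic sketch of the parameter-estimation step described above (minimizing the gap between model and experiment) is shown below; the Langmuir form and the synthetic data are illustrative stand-ins for the full fixed-bed model and the CO2/CH4 experiments:

```python
# Fit isotherm parameters by minimizing the model-experiment mismatch;
# the Langmuir model and synthetic points are illustrative placeholders.
import numpy as np
from scipy.optimize import least_squares

def langmuir(params, p):
    q_sat, b = params
    return q_sat * b * p / (1.0 + b * p)               # adsorbed amount q(p)

p_exp = np.linspace(0.1, 5.0, 15)                       # pressure points [bar]
rng = np.random.default_rng(0)
q_exp = langmuir([3.2, 0.8], p_exp) + rng.normal(0, 0.03, p_exp.size)   # synthetic "data"

fit = least_squares(lambda th: langmuir(th, p_exp) - q_exp,
                    x0=[1.0, 0.1], bounds=([0, 0], [10, 10]))
print("estimated q_sat, b:", fit.x.round(2))            # should recover roughly [3.2, 0.8]
```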

1. Ferreira AFP, Ribeiro AM, Kulaç S, Rodrigues AE. Methane purification by adsorptive processes on MIL-53(Al). Chemical Engineering Science. 2015;124:79-95.

2. Weber Jr WJ, Liu KT. Determination of mass transport parameters for fixed-bed adsorbers. Chemical Engineering Communications. 1980;6(1-3):49-60.

3. Schwaab JCPM. Análise de Dados Experimentais: I. Fundamentos de Estatística e Estimação de Parâmetros: Editora E-papers.

4. Lin M-H, Tsai J-F, Yu C-S. A Review of Deterministic Optimization Methods in Engineering and Management. Mathematical Problems in Engineering. 2012;2012(1):756023.



Integration of Graph Theory and Machine Learning for enhanced process synthesis and design of wastewater treatment networks

Andres D. Castellar-Freile1, Jean Pimentel2, Alec Guerra1, Pratap M. Kodate3, Kirti M. Yenkie1

1Department of Chemical Engineering, Rowan University, Glassboro, New Jersey, USA; 2Sustainability Competence Center, Széchenyi István University, Győr, Hungary; 3Department of Physics, Indian Institute of Technology, Kharagpur, India

Process synthesis (PS) is the first step in process design. It is crucial for finding the configuration of unit operations/technologies and stream flows that best optimizes the parameters of interest (cost, environmental impact, energy use, etc.). Traditional approaches such as superstructure optimization strongly depend on user-defined technologies, stream connections, and reasonable initial guesses for the unknown variables. This results not only in missing possible structures that could perform better than those selected, but also in neglecting important aspects such as multiple-input, multiple-output systems and recycling streams [1]. In this regard, the enhanced P-graph methodology, integrated with insights from machine learning and realistic technology models, is presented as a novel approach for process synthesis. It offers a unique advantage by providing all n feasible structures, thanks to its specific connectivity rules for input, intermediate, and terminal nodes [2]. In addition, a novel two-layer process synthesis algorithm [3] is developed, which incorporates combinatorial, linear, and nonlinear solvers to integrate the P-graph with realistic nonlinear model equations. It then performs a feasibility analysis and ranks the solution structures based on chosen metrics, such as cost, scalability, or sustainability. However, the n feasible solutions identified with the P-graph framework may still not be suitable for the real process because of limitations in their reliability and structural resilience over a certain period. Considering this, applying Machine Learning (ML) methods for regression, classification, and extrapolation will allow for the accurate prediction of structural reliability and resilience over time [4], [5]. This will support better process design, enable proactive maintenance, and improve overall management.

Many water utility companies use a reactionary (wait-watch-act) methodology to manage their facilities and infrastructure. The proposed method can be applied to these systems offering strong, convergent, and comprehensive solutions for municipalities, water utility companies, and industries, enabling them to make well-informed decisions when designing new facilities or upgrading existing ones, all while minimizing time and financial investment.

Thus, the integration of Graph Theory and ML approaches for optimal design, structural reliability, and resilience yields a new framework for Process Synthesis. We demonstrate the framework on Wastewater Treatment Network (WWTN) synthesis, a problem vital to addressing issues of water equity and public health. The pipeline network, pumping stations, and the wastewater treatment plant are modeled with the P-graph framework, and detailed, accurate models are developed for the treatment technologies. ML methods such as eXtreme Gradient Boosting (XGBoost) and Artificial Neural Networks (ANNs) are tested to estimate the resilience and structural reliability of the pumping stations and the pipeline network.
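
As a rough illustration of the ML component, the following Python sketch fits a gradient-boosted regressor to a hypothetical reliability data set; the features, target, and model settings are placeholders rather than the authors' actual data or tuning.

import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
rng = np.random.default_rng(0)
n = 500
# Hypothetical features: pipe age [yr], diameter [mm], flow utilization [-], past failures [count]
X = np.column_stack([rng.uniform(0, 60, n), rng.uniform(50, 600, n),
                     rng.uniform(0.1, 1.0, n), rng.poisson(2, n)])
# Hypothetical reliability score decreasing with age, utilization, and past failures
y = 1.0 - 0.01 * X[:, 0] - 0.2 * X[:, 2] - 0.05 * X[:, 3] + rng.normal(0, 0.02, n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("Hold-out R2:", r2_score(y_te, model.predict(X_te)))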

References

[1] K. M. Yenkie, Curr. Opin. Chem. Eng., 2019, doi: 10.1016/j.coche.2019.09.002.

[2] F. Friedler, Á. Orosz, and J. Pimentel Losada, 2022. doi: 10.1007/978-3-030-92216-0.

[3] J. Pimentel et al., Comput. Chem. Eng., 2022, doi: 10.1016/j.compchemeng.2022.108034.

[4] G. Kabir, N. B. C. Balek, and S. Tesfamariam, J. Perform. Constr. Facil., 2018, doi: 10.1061/(ASCE)CF.1943-5509.0001162.

[5] Á. Orosz, F. Friedler, P. S. Varbanov, and J. J. Klemes, 2018, doi: 10.3303/CET1863021.



An Automated CO2 Capture Pilot Plant at ULiège: A Platform for the Validation of Process Models and Advanced Control

Cristhian Molina Fernández, Patrick Kreit, Brieuc Beguin, Sofiane Bekhti, Cédric Calberg, Joanne Kalbusch, Grégoire Léonard

University of Liège, Belgium

As the European Union accelerates its efforts to decarbonize society, the exploration of effective pathways to reduce greenhouse gas emissions is increasingly being driven by digital innovation. Pilot installations play a pivotal role in validating both emerging and established technologies within the field of carbon capture, utilization, and storage (CCUS).

At the University of Liège (ULiège) in Belgium, researchers are developing a "smart campus" that integrates advanced CCUS technologies with cutting-edge computational tools. Supported by the European Union's Resilience Plan, the Products, Environment, and Processes (PEPs) group is leading the construction of several key pilot installations, including a CO2 capture pilot plant, a CO2-to-kerosene conversion unit, and a direct air capture (DAC) test bench. These facilities are designed to support real-time data monitoring and advanced computational modeling, enabling enhanced process optimization.

The CO2 capture pilot plant has a processing capacity of 1 ton of CO2 per day, utilizing a fully automated chemical absorption system. Capable of working with either amine or carbonate solvents, the plant operates under an intelligent control framework that allows for remote and extended operation. This level of automation supports continuous data collection, essential for validating computational models and applying advanced control strategies, such as machine learning algorithms. Extended operation provides critical datasets for optimizing solvent stability, understanding corrosion behavior, and refining process models—key factors for scaling up CCUS technology.

The plant is fully electrified, with a heat pump integrated into the system to enhance energy efficiency by recovering heat from the condenser and upgrading it for reboiler use. The initial commissioning and testing phase will be conducted at ULiège’s Sart Tilman campus, where the plant will capture CO2 from a biomass boiler’s exhaust gases at the central heating station.

The modular design of the installation, housed within three 20-foot shipping containers, supports easy transport and deployment at various industrial locations. The automation and control system is centralized in the third container, allowing for full remote operation and facilitating quick reconfiguration of the plant for different experimental setups.

A key feature of the pilot is its flexible design, which integrates advanced gas pretreatment systems (including NOx and SOx removal) and optimized absorption/desorption columns with intercooling and interheating capabilities. These features allow dynamic adjustment of process conditions, enabling real-time optimization of CO2 capture performance. The solvent feed can be varied at different column heights, allowing researchers to evaluate the effect of column height on separation efficiency without making physical modifications. This flexibility is supported by a modular column design, where flanged segments can be dismantled or reassembled easily.

Overall, this pilot plant is designed to facilitate process optimization through data-driven approaches and intelligent control systems, offering critical insights into the performance and scalability of CCUS technologies. By providing a flexible, automated platform for long-duration experimental campaigns, it serves as a vital resource for advancing decarbonization efforts, especially in hard-to-abate industrial sectors.



A Comparison of Robust Modeling Approaches to Cope with Uncertainty in Independent Terms, Considering the Forest Supply Chain Case Study

Frank Piedra-Jimenez1, Ana Inés Torres2, Maria Analia Rodriguez1

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213

The need to consider uncertainty in the decision-making process is widely acknowledged in the PSE community, which distinguishes three main modelling paradigms for optimization under uncertainty, namely robust optimization (RO), stochastic programming (SP), and chance-constrained programming (CCP). The last two are computationally challenging because they require complete distributional knowledge (Chen et al., 2018). In contrast, RO does not require knowledge of the probabilistic behaviour of the uncertain parameters and strikes a good balance between solution quality and computational tractability (Ning and You, 2019).

One widely used method is static robust optimization, initially presented by Bertsimas and Sim (2004). They proposed the budgeted uncertainty set, which allows flexible handling of the level of conservatism of robust solutions in terms of probabilistic limits on constraint violation. For each uncertain parameter it defines a deviation bound from its nominal value, and a budget parameter determines the number of uncertain parameters allowed to take their worst value in each equation. When there is only one uncertain parameter on the right-hand side of the equations, this method may become overly conservative, effectively considering the worst-case scenario for each constraint.
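
For reference, Bertsimas and Sim's budgeted formulation (integer budget \Gamma_i; generic notation, not taken from the present paper) protects each constraint as

\sum_j \bar{a}_{ij}\, x_j \;+\; \max_{\{S_i \subseteq J_i \,:\, |S_i| \le \Gamma_i\}} \sum_{j \in S_i} \hat{a}_{ij}\, |x_j| \;\le\; b_i ,

where \bar{a}_{ij} is the nominal coefficient, \hat{a}_{ij} its maximum deviation, J_i the index set of uncertain coefficients in row i, and \Gamma_i the budget controlling how many of them may simultaneously attain their worst-case values.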

To address this concern, Ghelichi et al. (2018) introduced the "adjustable column-wise robust optimization" (ACWRO) method, which defines the number of uncertain realizations the decision-maker is willing to satisfy. Initially presented as a nonlinear model, it was later reformulated to obtain a linear formulation. The present paper proposes an alternative method based on a linear disjunctive formulation, called "disjunctive robust optimization" (DRO). The proposed method is applied to the forest supply chain design problem, extending previous work by the authors (Piedra-Jimenez et al., 2024). Owing to the disjunctive structure of the proposed approach, big-M and hull reformulations are applied to the DRO formulation and compared with the ACWRO approach on a large number of instances, assessing the tightness and computational performance of each reformulation on the forest supply chain design case study.

References:

Bertsimas, D., Sim, M., 2004. The Price of Robustness. Oper. Res. 52, 35–53. https://doi.org/10.1287/OPRE.1030.0065

Chen, Y., Yuan, Z., Chen, B., 2018. Process optimization with consideration of uncertainties—An overview. Chinese J. Chem. Eng. 26, 1700–1706.

Ghelichi, Z., Tajik, J., Pishvaee, M.S., 2018. A novel robust optimization approach for an integrated municipal water distribution system design under uncertainty: A case study of Mashhad. Comput. Chem. Eng. 110, 13–34. https://doi.org/10.1016/J.COMPCHEMENG.2017.11.017

Ning, C., You, F., 2019. Optimization under uncertainty in the era of big data and deep learning: When machine learning meets mathematical programming. Comput. Chem. Eng. 125, 434–448. https://doi.org/10.1016/J.COMPCHEMENG.2019.03.034

Piedra-Jimenez, F., Torres, A.I, Rodriguez, M.A., 2024. A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty. Comput. Chem. Eng. 181, 108540.

 
2:30pm - 3:50pmT6: Digitalization and AI - Session 6
Location: Zone 3 - Room D049
Chair: Marco Seabra Reis
Co-chair: Leonhard Urbas
 
2:30pm - 2:50pm

Application of Artificial Intelligence in process simulation tool

Nikhil Rajeev1, Suresh Kumar Jayaraman1, Prajnan Das2, Srividya Varada1

1AVEVA Group Ltd, United States of America; 2Cognizant Technology Solutions U.S. Corporation, United States of America

Process engineers in the Chemical and Oil & Gas industries extensively use process simulation for the design, development, analysis, and optimization of complex systems. This study investigates the integration of Artificial Intelligence (AI) with AVEVA Process Simulation (APS), a next-generation commercial simulation tool. We propose a framework for a custom chatbot application designed to assist engineers in developing and troubleshooting simulations. This chatbot utilizes a custom-trained model to transform engineer prompts into standardized queries, facilitating access to essential information from APS. The chatbot extracts critical data regarding solvers and thermodynamic models directly from APS to help engineers develop and troubleshoot process simulations. Furthermore, we compare the performance of our custom model against OpenAI technology. Our findings indicate that this integration significantly enhances the usability of process simulation tools, promoting more innovative and cost-effective engineering solutions.



2:50pm - 3:10pm

Reinforcement Learning-Based Optimization of Shell and Tube Heat Exchangers

Luana de Pinho Queiroz1,2,3, Olve Ringstad Bruaset3, Ana Mafalda Ribeiro1,2, Idelfonso Bessa dos Reis Nogueira3

1LSRE-LCM – Laboratory of Separation and Reaction Engineering - Laboratory of Catalysis and Materials, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, Porto, 4200-465, Portugal; 2ALiCE – Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

Heat exchangers play a crucial role in a wide range of industries, facilitating heat transfer between fluids at different temperatures, significantly impacting operational efficiency and energy consumption1. Their application is vital in industrial processes where optimizing heat transfer can substantially reduce operational costs and energy demand2. However, the design of heat exchangers presents several challenges, particularly in areas such as rating, sizing, and overall efficiency3. Due to the inherent complexities involved, traditional design approaches often rely on iterative, manual adjustments that may not guarantee optimal results. To address these limitations, recent research has begun exploring the integration of Scientific Machine Learning (SciML), which combines scientific models with machine learning techniques to streamline and enhance the optimization process4. Although the application of SciML in heat exchanger design is still emerging, early studies show its potential to offer valuable insights into heat transfer optimization.

This research introduces a model for optimizing the design of shell and tube heat exchangers using Q-learning, a reinforcement learning technique. The primary aim is to bridge the gap between heat exchanger optimization and the growing field of SciML. The model was developed by training an agent within a simulated environment, where it iteratively adjusted design configurations to maximize a reward function based on heat transfer effectiveness and pressure drop. A comprehensive database informed the simulation of heat exchanger specifications, parameters, and fundamental heat transfer principles, such as the ɛ-NTU method. The reward function was designed to balance maximizing effectiveness and minimizing pressure drop, ensuring an optimal trade-off between these competing performance factors.
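
A minimal Python sketch of the tabular Q-learning idea is shown below; the state space, reward weights, and the effectiveness/pressure-drop curves are toy stand-ins for the authors' database-driven ɛ-NTU environment.

import numpy as np
rng = np.random.default_rng(0)
n_designs = 10                                        # discretized design configurations
effectiveness = np.linspace(0.5, 0.9, n_designs)      # assumed: rises with design complexity
pressure_drop = np.linspace(5.0, 60.0, n_designs)     # assumed: rises with design complexity [kPa]
def reward(s):
    # Trade-off: reward effectiveness, penalize pressure drop (weights are assumptions)
    return 1.0 * effectiveness[s] - 0.01 * pressure_drop[s]
# State: current design index; actions: 0 = simplify, 1 = keep, 2 = add complexity
Q = np.zeros((n_designs, 3))
alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(500):                                  # training episodes
    s = rng.integers(n_designs)
    for _ in range(20):                               # steps per episode
        a = rng.integers(3) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + (a - 1), 0, n_designs - 1))
        r = reward(s_next)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
best = int(np.argmax(Q.max(axis=1)))
print(f"Design region preferred by the agent: index {best}, "
      f"effectiveness={effectiveness[best]:.2f}, dP={pressure_drop[best]:.1f} kPa")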

During training, the most straightforward design configurations consistently achieved the highest reward across most heat exchanger specifications. While more complex configurations demonstrated superior heat transfer efficiency, the lower pressure drop associated with the simpler designs ultimately proved decisive in performance evaluations. This outcome highlights the potential for machine learning techniques like Q-learning to identify efficient design solutions that traditional methods may otherwise overlook. However, this work represents an early exploration of the approach, and further developments are needed to create a more versatile and practical tool. Future improvements should focus on increasing the model’s adaptability by incorporating a broader range of fluid types, utilizing more detailed heat transfer equations, expanding the set of design configurations, and refining the reward function to account for additional performance parameters.

1 Balaji, C., Srinivasan, B., & Gedupudi, S. (2020). Heat transfer engineering: fundamentals and techniques. Academic Press.

2 Caputo, A. C., Pelagagge, P. M., & Salini, P. (2008). Heat exchanger design based on economic optimisation. Applied thermal engineering, 28(10), 1151-1159.

3 Saxena, R., & Yadav, S. (2013). Designing steps for a heat exchanger. International Journal of Engineering Research & Technology, 2(9), 943-959.

4 Iwema, J. (2023, January 16). Scientific machine learning. Wageningen University & Research. https://sciml.wur.nl/reviews/sciml/sciml.html



3:10pm - 3:30pm

Structural Optimization of Translucent Monolith Reactors through Multi-objective Bayesian Optimization

Onur Can Boy1, Ulderico Di Caprio1, Mumin Enis Leblebici1, Idelfonso Nogueira2

1KU Leuven, Department of Chemical Engineering, Centre for Industrial Process Technology; 2Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU)

Photochemical reactions are a promising alternative to thermal and chemical activation methods, improving process selectivity and energy efficiency. In photochemical systems, monoliths accommodating repetitive structures have proven a successful approach to miniaturize and intensify chemical processes, ensuring high mixing efficiency and surface-area-to-volume ratio while remaining easily scalable. Through multiple stacked channels that enhance light scattering, monoliths avoid the lower photochemical space-time yield (PSTY) caused by the mismatch between the size of the light source and the microreactor, positioning them as a better alternative to microreactors [1]. However, they have many critical design parameters, such as the number, size, and shape of the channels to be stacked, which must be considered together with the light source characteristics to maximize light usage and reactor efficiency. Such multi-parameter optimization problems are currently carried out manually by human designers; optimization algorithms offer an alternative to this approach. This work proposes a methodology to automatically design translucent monoliths for photochemical reactions, leveraging Multiphysics simulations and Bayesian optimization (BO).
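
A minimal sketch of such an optimization loop is given below using scikit-optimize's gp_minimize with a weighted-sum scalarization of STY and PSTY; the evaluate() function is a cheap placeholder for the COMSOL ray-tracing simulation, and all parameter ranges and weights are assumptions.

from skopt import gp_minimize
from skopt.space import Real, Integer, Categorical
space = [Real(0.5, 5.0, name="channel_diameter_mm"),
         Integer(2, 20, name="n_channels"),
         Categorical(["square", "circle", "ellipse", "plus"], name="shape"),
         Real(0.0, 45.0, name="rotation_deg")]
def evaluate(params):
    # Placeholder for the COMSOL ray-tracing model: returns a weighted loss of -STY and -PSTY
    d, n, shape, rot = params
    volume = d**2 * n                                                      # crude volume proxy
    absorbed = n * (1.2 if shape == "plus" else 1.0) * (1 + 0.002 * rot)   # crude absorption proxy
    sty = absorbed / volume
    psty = absorbed / (volume + 50.0)
    w_sty, w_psty = 0.5, 0.5                                               # assumed scalarization weights
    return -(w_sty * sty + w_psty * psty)
result = gp_minimize(evaluate, space, n_calls=40, random_state=0)
print("Best parameters:", result.x, "objective:", result.fun)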

As a demonstration case, the geometry used by Jacobs et al. is optimized through BO using multi-objective cost criteria, namely maximizing both the PSTY and the space-time yield (STY). The ray-tracing module of COMSOL Multiphysics is used to model and simulate light behavior. The optimization is performed with four tunable parameters: the characteristic channel diameter, the number of channels stacked vertically, the channel shape, and the channel rotation, with shapes varying among square, circle, ellipse, and plus sign. A competing relationship between STY and PSTY is observed. Keeping other factors constant, reactor volume is the dominant factor for STY maximization, so maximizing STY requires smaller volumes; however, more energy is then wasted because the absorbed power decreases. PSTY is therefore also considered to prevent this, although maximizing PSTY alone leads to a significant decrease in outlet concentration. The trade-off between the absorbed energy and the outlet concentration makes it necessary to adjust the weights of STY and PSTY according to the desired output. As a result, PSTY and STY are optimized simultaneously to ensure they meet the minimum conditions of the benchmark work. By selecting square-shaped channels and applying a 34° rotation angle, an STY improvement of 25% and a PSTY improvement of 20% are achieved, meaning the same amount of light is absorbed in a smaller reactor volume. Plus-sign channels with a 15° rotation angle improve STY and PSTY by 15%.

This study proposes a methodology to increase the efficiency of already optimized photochemical reactor designs by achieving better light scattering using BO. Results show that improving either STY or PSTY by up to 20% is possible, and the competing relationship between STY and PSTY is again observed, with the materials and light characteristics kept unchanged.

1-Jacobs, M. et al. (2022) ‘Scaling up multiphase photochemical reactions using translucent monoliths’, Chemical Engineering and Processing - Process Intensification, 181, doi:10.1016/j.cep.2022.109138.



3:30pm - 3:50pm

A novel approach to gradient evaluation and efficient deep learning: A hybrid method

Bogdan Dorneanu, Vasileios K. Mappas, Harvey Arellano-Garcia

Brandenburg University of Technology Cottbus-Senftenberg, Germany

Machine learning approaches, and deep learning particularly, continue to face significant challenges in the efficient training of large-scale models and accurate gradient evaluations (Ahmed et al., 2023). These challenges are interconnected, as efficient training often relies on precise and computationally feasible gradient calculations. This work introduces a suite of novel methodologies that enhance both the training process of deep learning networks (DLNs) and improve gradient evaluation in complex systems.

This contribution presents an innovative approach to DLN training by adapting the block coordinate descent (BCD) method (Yu, 2023), which optimizes individual layers sequentially. This method is integrated with a traditional batch-based training method, creating a hybrid approach that leverages the strengths of both methodologies. To further enhance the optimization process, the study explores the use of the Iterated Control Random Search (ICRS) for initial parameter selection and investigates the application of quasi-Newton methods like L-BFGS with restricted iterations.
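
A minimal PyTorch sketch of the hybrid idea, alternating layer-wise block coordinate descent with conventional mini-batch updates, is given below; the network, data, and alternation schedule are illustrative assumptions, and the ICRS initialization and L-BFGS variants are omitted.

import torch
import torch.nn as nn
torch.manual_seed(0)
X = torch.randn(512, 8)
y = torch.sin(X.sum(dim=1, keepdim=True))        # synthetic regression target
layers = nn.ModuleList([nn.Linear(8, 32), nn.Linear(32, 32), nn.Linear(32, 1)])
def forward(x):
    for layer in layers[:-1]:
        x = torch.tanh(layer(x))
    return layers[-1](x)
loss_fn = nn.MSELoss()
data = torch.utils.data.TensorDataset(X, y)
for epoch in range(20):
    if epoch % 2 == 0:
        # (i) BCD phase: update one layer at a time; the other layers are not stepped
        for block in layers:
            opt = torch.optim.SGD(block.parameters(), lr=1e-2)
            for _ in range(10):
                opt.zero_grad()
                loss_fn(forward(X), y).backward()
                opt.step()
    else:
        # (ii) batch phase: conventional mini-batch update of all parameters together
        opt = torch.optim.Adam(layers.parameters(), lr=1e-3)
        for xb, yb in torch.utils.data.DataLoader(data, batch_size=64, shuffle=True):
            opt.zero_grad()
            loss_fn(forward(xb), yb).backward()
            opt.step()
print("Final MSE:", loss_fn(forward(X), y).item())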

Complementing these advancements in DLN training, the study also tackles the challenge of gradient evaluation in large-scale systems, a crucial step for efficient training and optimization (Lwakatare et al., 2020). It introduces a generalized modular strategy based on a novel adjoint-based method, offering a flexible and robust solution for gradient evaluation of complex hierarchical multiscale systems. This approach is particularly valuable for machine learning applications dealing with high-dimensional data or complex model architectures, as it allows for more efficient and accurate gradient computations during the training process.

By addressing both the training efficiency of DLNs and the gradient evaluation in large-scale systems, this research provides a comprehensive set of tools to address some of the most pressing challenges in contemporary machine learning. The proposed framework offers promising avenues for improving scalability, efficiency, and effectiveness of machine learning algorithms, particularly in handling complex high-dimensional problems increasingly common in real-world applications.

Utilizing relevant examples from process systems engineering, it is demonstrated how the integration of these methods can directly contribute to more efficient and effective training of large-scale systems.

References

Ahmed, S.A. et al. 2023. Deep learning modelling techniques : current progress, applications, advantages, and challenges, Artificial Intelligence Review 56, 13521-13617

Li, B. et al. 2016. ICRS-Filter: A randomized direct search algorithm for constrained nonconvex optimization problems, Chemical Engineering Research and Design 106, 178-190

Lwakatare, L.E. et al. 2020. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions, Information and Software Technology 127, 106368

Yu, Z. 2023. Block coordinate type methods for optimization and learning, Analysis and Applications 21, 777-817

 
2:30pm - 4:30pmT1: Modelling and Simulation - Session 9 - Including keynote
Location: Zone 3 - Room D016
Chair: Abderrazak Latifi
Co-chair: Jean Felipe Leal Silva
 
2:30pm - 3:10pm

Keynote: Automated Identification of Kinetic Models for Nucleophilic Aromatic Substitution Reaction via DoE-SINDy

Wenyao Lyu, Federico Galvanin

University College London, United Kingdom

Nucleophilic aromatic substitutions (SNAr) are crucial in medicinal and agrochemistry, especially for modifying pyridines, pyrimidines, and related heterocycles [1]. Identifying reliable, broadly applicable, and high-yielding methods for generating SNAr products remains a significant challenge, particularly at a process scale, due to high reagent costs, the need for elevated temperatures and extended reaction times, poor functional group tolerance, and strict water exclusion requirements [2]. In addressing these challenges, kinetic models play a vital role in providing a deep understanding of reaction mechanisms, which can facilitate the scale-up, optimisation and control of SNAr reactions [3].

Identifying a reliable kinetic model requires confirming the correct model structure before parameter estimation and validation. Conventional model-building approaches require the definition of pre-determined candidate model structures [4]. However, the reaction mechanism of SNAr—whether concerted or two-step—cannot be precisely confirmed, as it depends on the substrate, nucleophile, leaving group, and reaction conditions. This uncertainty makes it difficult to establish the exact mathematical form of the kinetic model [1].

We employ DoE-SINDy [5] to address these challenges, allowing generative modelling without a complete theoretical understanding. The benchmark case study involves the nucleophilic aromatic substitution (SNAr) of 2,4-difluoronitrobenzene with morpholine in ethanol (EtOH), producing a mixture of the desired ortho-substituted product along with para-substituted and bis-adduct side products, formed through parallel and consecutive steps. In-silico measurements were generated using a ground-truth kinetic model, validated by Agunloye et al. [6], to investigate the performance of DoE-SINDy under various measurement conditions, including different noise levels and sampling intervals. Results show that the 'true' kinetic model for the SNAr reaction was successfully identified in a limited number of runs, and the DoE-SINDy framework allowed us to quantify the effect of DoE factors, such as inlet concentrations, residence time, and experimental budget, on its model identification performance.
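
To illustrate the sparse-regression core of SINDy (the inner step of the DoE-SINDy workflow), the following Python sketch applies sequentially thresholded least squares to a toy second-order reaction; the SNAr species, candidate library, and DoE layer of the actual framework are not reproduced here.

import numpy as np
from scipy.integrate import solve_ivp
k_true = 0.35
def rhs(t, c):
    a, b, p = c
    r = k_true * a * b
    return [-r, -r, r]
t_eval = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.8, 0.0], t_eval=t_eval)
C = sol.y.T                                        # concentration profiles of A, B, P
dCdt = np.gradient(C, t_eval, axis=0)              # numerical time derivatives
a, b, p = C[:, 0], C[:, 1], C[:, 2]
# Candidate library of rate terms: [1, A, B, P, A*B, A^2, B^2]
Theta = np.column_stack([np.ones_like(a), a, b, p, a * b, a**2, b**2])
def stlsq(Theta, dX, threshold=0.05, n_iter=10):
    # Sequentially thresholded least squares: prune small coefficients, refit the rest
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dX.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dX[:, j], rcond=None)[0]
    return Xi
Xi = stlsq(Theta, dCdt)
# Rows = library terms, columns = dA/dt, dB/dt, dP/dt; ideally only the A*B row
# survives, with coefficients close to -0.35, -0.35, +0.35.
print(np.round(Xi, 3))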

References

[1] Rohrbach, S., Smith, A. J., Pang, J. H., Poole, D. L., Tuttle, T., Chiba, S., & Murphy, J. A. (2019). Concerted Nucleophilic Aromatic Substitution Reactions. Angewandte Chemie International Edition, 58(46), 16368–16388.

[2] See, Y. Y., Morales-Colón, M. T., Bland, D. C., & Sanford, M. S. (2020). Development of SNAr Nucleophilic Fluorination: A Fruitful Academia-Industry Collaboration. Accounts of Chemical Research, 53(10), 2372–2383.

[3] Hone, C. A., Boyd, A., O’Kearney-Mcmullan, A., Bourne, R. A., & Muller, F. L. (2019). Definitive screening designs for multistep kinetic models in flow. Reaction Chemistry & Engineering, 4(9), 1565–1570.

[4] Asprey, S. P., & Macchietto, S. (2000). Statistical tools for optimal dynamic model building. Computers & Chemical Engineering, 24(2–7), 1261–1267.

[5] Lyu, W., & Galvanin, F. (2024). DoE-integrated Sparse Identification of Nonlinear Dynamics for Automated Model Generation and Parameter Estimation in Kinetic Studies. Computer Aided Chemical Engineering, 53, 169–174.

[6] Agunloye, E., Petsagkourakis, P., Yusuf, M., Labes, R., Chamberlain, T., Muller, F. L., Bourne, R. A., & Galvanin, F. (2024). Automated kinetic model identification via cloud services using model-based design of experiments. Reaction Chemistry & Engineering, 9(7), 1859–1876.



3:10pm - 3:30pm

Mechanistic Modeling of Capacity Fade for Lithium-Metal Batteries

Naeun Choi, Kihun An, Seung-Wan Song, Kosan Roh

Chungnam National University, Korea, Republic of (South Korea)

Lithium-metal batteries (LMBs) are a promising alternative to lithium-ion batteries (LIBs) for energy storage applications due to their high theoretical capacity (3,860 mAh g⁻¹ for lithium metal vs. 372 mAh g⁻¹ for graphite) and low electrochemical potential (-3.404 V vs. SHE) [1, 2]. However, their practical use is limited by poor cycle stability and safety concerns, such as short circuits or explosions during prolonged use. A significant challenge is the instability of the solid electrolyte interphase (SEI) layer, which cracks during repeated lithium deposition and stripping. These cracks enable dendrite growth, leading to further SEI formation and depletion of active lithium [3]. Additionally, as dendrites are stripped during discharge, some become electrochemically inactive, forming "dead lithium." This dead lithium reduces the effective diffusion coefficient, hindering lithium-ion movement and degrading the overall performance of LMBs [4]. Nevertheless, very few studies have addressed these issues in a mathematical modeling context in contrast to LIBs. To bridge this gap, we develop an LMB model based on the Doyle-Fuller-Newman (DFN) model [5] on COMSOL Multiphysics. The key difference from the conventional DFN model is that we model the lithium-metal electrode by considering only the electrode surface. We also express the effective diffusion coefficient as a function of the amount of dead lithium, which captures the tortuous pathways of lithium ions in the electrolyte. We simulate the dynamic behavior of LMBs and interpret the capacity loss over repeated cycles. We validate the model by comparing predicted voltage-capacity curves and cycle retention results with our experimental data from a Li/NMC-811 coin cell, demonstrating its ability to simulate degradation phenomena accurately.
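
The functional form of this dead-lithium dependence is not given in the abstract; a purely illustrative Bruggeman-type relation that conveys the idea would be

D_\mathrm{eff} = D_0\,\varepsilon(n_\mathrm{dead})^{\beta}, \qquad \varepsilon(n_\mathrm{dead}) = \varepsilon_0 - k_\varepsilon\, n_\mathrm{dead},

where n_\mathrm{dead} is the accumulated amount of dead lithium, \varepsilon_0 the initial electrolyte volume fraction, \beta a Bruggeman-type exponent, and k_\varepsilon an assumed proportionality constant converting dead lithium into blocked pore volume.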

1. Hao, F., A. Verma, and P.P. Mukherjee, Mechanistic insight into dendrite-SEI interactions for lithium metal electrodes. Journal of Materials Chemistry A, 2018. 6(40): p. 19664-19671.

2. Liu, G.Y. and W. Lu, A Model of Concurrent Lithium Dendrite Growth, SEI Growth, SEI Penetration and Regrowth. Journal of the Electrochemical Society, 2017. 164(9): p. A1826-A1833.

3. Mao, M.L., et al., Anion-enrichment interface enables high-voltage anode-free lithium metal batteries. Nature Communications, 2023. 14(1).

4. Chen, K.H., et al., Dead lithium: mass transport effects on voltage, capacity, and failure of lithium metal anodes. Journal of Materials Chemistry A, 2017. 5(23): p. 11671-11681.

5. Doyle, M., T.F. Fuller, and J. Newman, Modeling of Galvanostatic Charge and Discharge of the Lithium Polymer Insertion Cell. Journal of the Electrochemical Society, 1993. 140(6): p. 1526-1533.



3:30pm - 3:50pm

A Novel Bayesian Framework for Inverse Problems in Precision Agriculture

Zeyuan Song, Zheyu Jiang

Oklahoma State University, United States of America

An essential problem in precision agriculture is to accurately model and predict root-zone (top 1 m of soil) soil moisture profile given soil properties and precipitation and evapotranspiration information. This is typically achieved by solving agro-hydrological models. Nowadays, most of these models are based on the standard Richards equation (RE) [1], a highly nonlinear, degenerate elliptic-parabolic partial differential equation that describes irrigation, precipitation, evapotranspiration, runoff, and drainage through soils. Recently, the standard RE has been generalized to time-fractional RE by replacing the first-order time derivatives with any fractional order between 0 and 2 [2]. Such generalization allows the characterization of anomalous soil exhibiting non-Boltzmann behavior due to the presence of preferential flow.
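
For orientation, the standard 1-D Richards equation (mixed form, sink terms omitted) and its time-fractional generalization read

\frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right], \qquad \frac{\partial^{\alpha} \theta}{\partial t^{\alpha}} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right], \quad 0 < \alpha < 2,

where \theta is the volumetric moisture content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and \partial^{\alpha}/\partial t^{\alpha} a Caputo fractional derivative of order \alpha.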

This work addresses the pressing issue of inverse modeling of the time-fractional RE; that is, how to accurately estimate the fractional order and soil property parameters of the fractional RE given soil moisture content measurements. Inverse problems are generally ill-posed due to insufficient and/or inaccurate measurements, thereby posing significant computational challenges. In this work, we propose a novel Bayesian variational autoencoder (BVAE) framework that synergistically integrates our in-house physics-based, data-driven global random walk (DRW) fractional RE solver [4] and adaptive Fourier decomposition (AFD) [6] to accurately estimate the parameters of the time-fractional RE. The proposed BVAE framework consists of a probabilistic encoder, latent-to-kernel neural networks, and convolutional neural networks. The probabilistic encoder projects the input data (i.e., soil moisture measurements) onto a latent space. To preserve useful mathematical properties and physical insights, we further restrict the latent space to its reproducing kernel Hilbert space (RKHS) via the latent-to-kernel neural networks. The AFD-based convolutional neural networks are applied to the resulting RKHS as a decoder for parameter estimation. These neural networks are trained end-to-end, with the training data being soil moisture profiles produced by our DRW fractional RE solver. The entire BVAE framework is theoretically justified and explainable using the AFD theory, a novel signal processing technique that achieves superior computational efficiency. Through illustrative examples, we demonstrate the efficiency and reliability of our BVAE framework.

References

[1] L.A. Richards, Capillary conduction of liquids through porous mediums, Physics, 1931, 1(5): 318-333.

[2] Ł. Płociniczak, Analytical studies of a time-fractional porous medium equation: Derivation, approximation, and applications, Communications in Nonlinear Science and Numerical Simulation, 2015, 24(1-3): 169-183.

[3] M.T. Van Genuchten, A closed‐form equation for predicting the hydraulic conductivity of unsaturated soils, Soil Science Society of America Journal, 1980, 44(5): 892-898.

[4] Z. Song, Z. Jiang, A Novel Data-driven Numerical Method for Hydrological Modeling of Water Infiltration in Porous Media, arXiv preprint arXiv:2310.02806, 2023.

[5] D.P. Kingma, M. Welling, Auto-encoding variational bayes, arXiv preprint arXiv:1312.6114, 2013.

[6] W. Qian, W. Sprößig, J. Wang, Adaptive Fourier decomposition of functions in quaternionic Hardy spaces, Mathematical Methods in the Applied Sciences, 2012, 35(1): 43-64.



3:50pm - 4:10pm

Mathematical modelling and optimisation of electrified reverse water gas shift reactor

Dong-Gi Lee1, Seung-Jun Baek2, Yong-Tae Kim2, In-Hyoup Song2, Boram Gu1

1Chonnam National University, Korea, Republic of (South Korea); 2Korea Research Institute of Chemical Technology

The electrification of chemical processes has great potential to reduce global carbon dioxide emissions. Conventional combustion units, which emit large amounts of carbon dioxide in the process of reaching chemical equilibrium, can particularly benefit from electrification. Additionally, CO2 emitted by the industrial and energy sectors can be used for the sustainable production of carbon-based chemicals. This can be achieved via various catalytic reactions, one of which is the reverse water-gas shift (RWGS) reaction, in which CO2 reacts with hydrogen to produce synthesis gas [1].

In this study, we develop a computationally efficient simulation approach for an electrified reverse water gas shift reactor and compare it with experimental results for model validation. The washcoat catalyst allows a relatively uniform temperature across the catalyst, which reduces the possibility of coke formation; however, its suboptimal mass transfer efficiency requires system optimisation. Many studies of electrified reactors use computational fluid dynamics (CFD), which requires large computational resources and long calculation times, making CFD inappropriate for optimisation that requires iterative calculations. Hence, we build a computationally efficient simulation workflow by combining two-dimensional (2D) mass and energy balances with a one-dimensional (1D) kinetic model. Reactions are applied as a source term in the mass balance at the reactor walls, where the catalyst is assumed to be located, by adopting a zero washcoat thickness (ZWT) model for rapid computation with reasonable accuracy. Most studies consider the physical thickness of the catalyst, which produces precise results but requires three computational domains (gas, catalyst, and reactor wall), whereas the ZWT model requires only two (gas and reactor wall) [2].

The model was validated through two experiments: a dry run of the Joule-heating furnace and a packed bed reaction. Experimental data were used to fit the boundary heat flux in the energy balance model, as well as the pre-exponential factor and activation energy of the reaction kinetics. As a result, the axial temperature profile takes a parabolic form in the dry run simulation. When reactions are included, a distorted temperature profile appears due to the heat of reaction, and the diffusion of product from the washcoat to the gas phase can be observed. Furthermore, the calculation time is reduced from 20 minutes to 3 minutes compared to the CFD simulation, owing to the absence of a catalyst-domain calculation. These results demonstrate that our simulation methodology provides fast and accurate outcomes, indicating its potential as a platform for optimisation or control simulations. In future studies, this model will be used to propose operating conditions and reactor designs that maximise syngas production and minimise energy consumption.

Reference

[1] Thor Wismann, Larsen KE, Mølgaard Mortensen. Electrical Reverse Shift: Sustainable CO2 Valorisation for Industrial Scale. Angew Chem Int Ed. 2022

[2] Michael J. Stutz, Dimos Poulikakos. Optimum washcoat thickness of a monolith reactor for syngas production by partial oxidation of methane. Chem Eng Sci. 2008



4:10pm - 4:30pm

Bifurcation Behaviour of CSTR Models Under Parametric Uncertainty: A PCE-Based Approach

Francisca Pizarro Galleguillos, Satyajeet S. Bhonsale, Jan F.M. Van Impe

KU Leuven, Belgium


1799-Bifurcation Behaviour of CSTR Models Under Parametric Uncertainty-Pizarro Galleguillos_b.pdf
 
2:30pm - 4:30pmT2: Sustainable Product Development and Process Design - Session 8
Location: Zone 3 - Room E032
Chair: Zhihong Yuan
Co-chair: Thomas Alan Adams II
 
2:30pm - 2:50pm

Simultaneous Optimization of a Green Ammonia Production System with Heat Integration

Ruitao Sun, Jie Li

The University of Manchester, United Kingdom

The Net Zero target accelerates the energy transition, leading to rapid development of technologies employing renewable energy. Hydrogen, a popular energy carrier for renewable energy storage that alleviates the intermittency of renewables, can be further converted to ammonia, which offers cost-effective transportation and storage and a well-established infrastructure. Moreover, ammonia itself is a valuable chemical, being a vital raw material for fertiliser production, a refrigerant in cryogenic technologies, a solvent for carbon capture processes, etc. We have designed a green ammonia production system driven by renewable energy. This system integrates a hydrogen generation process employing PEM water electrolysis, a nitrogen generation process from flue gas recovery, and ammonia synthesis by the Haber-Bosch process. In particular, the flue gas is treated by an amine-based carbon capture process for nitrogen enrichment and further carbon reduction. The integrated processes were simulated and optimised in Aspen Plus in both sequential modular and equation-oriented modes to find optimal operating conditions; however, convergence difficulties limited the number of variables that could be optimised. In this work, we develop a mathematical model for further optimisation in GAMS. The objective is to minimise the levelized cost of ammonia while considering heat exchanger networks. The resulting outputs will be sent to the model in Aspen Plus for validation.



2:50pm - 3:10pm

Assessing the Economic Viability of Green Methanol Production: The Critical Role of CO₂ Purity in Green Methanol Processes

Franc González-Cazorla1,2, Jordi Pujol1, Oriol Martínez1, Lluís Soler2, Moisès Graells2

1GasN2, Carrer Roure Gros, 23, Sentmenat, Barcelona, 08181, Spain; 2Chemical Engineering Department, Universitat Politècnica de Catalunya, Escola d’Enginyeria de Barcelona Est (EEBE), Av. Eduard Maristany, 16, 08019, Barcelona, Spain

The growing concern over climate change and increasing carbon dioxide (CO₂) emissions has driven the development of advanced strategies for mitigating greenhouse gases in the atmosphere. One promising avenue is the synthesis of green methanol (CH₃OH) through the catalytic hydrogenation of captured CO₂ using renewable hydrogen (H₂). This process not only provides a valuable chemical feedstock with diverse applications in fuel production and industrial processes but also contributes to the reduction of atmospheric CO₂ levels. Recent advancements in CO₂ capture technologies allow for the extraction of CO₂ with purities ranging from 70% to 98% (Raganati et al., 2021). Integrating efficient CO₂ capture technologies with the use of green hydrogen establishes the production of green methanol as a practical and sustainable solution for addressing the challenges posed by climate change.

However, while previous studies have predominantly focused on CO₂ compositions greater than 96% in the synthesis of methanol (Djettene et al., 2024; Pérez-Fortes et al., 2015; Jeong et al., 2022), these high-purity models fail to account for the more variable and lower purity CO₂ streams often encountered in real industrial carbon capture applications. This gap highlights the need for a more comprehensive analysis that reflects actual conditions.

The novelty of this study lies in its detailed exploration of the economic implications of CO₂ purity within the methanol production process. By modeling and simulating the hydrogenation process to methanol using Aspen Hysys V14, this study analyzes the effects of differing CO₂ purities on key performance indicators such as operational cost, yield, and overall profitability. This approach provides a more realistic assessment of methanol production under varying CO₂ conditions, which has not been thoroughly investigated in previous literature.

The findings demonstrate that even small variations in CO₂ purity can significantly impact both operational costs and profitability, underscoring the necessity of optimizing CO₂ capture technologies for methanol production. This study contributes to the existing body of knowledge by quantifying the relationship between CO₂ purity and economic performance, offering critical insights for future optimization strategies. As such, it emphasizes the crucial role that CO₂ purity plays in enhancing both the sustainability and economic viability of green methanol production within the broader context of climate change mitigation.

References:

Raganati, F., Miccio, F., & Ammendola, P. (2021). Adsorption of carbon dioxide for post-combustion capture: A review. Energy & Fuels, 35(16), 12845–12868. https://doi.org/10.1021/acs.energyfuels.1c01618

Djettene, R., Dubois, L., Duprez, M., De Weireld, G., & Thomas, D. (2024). Integrated CO2 capture and conversion into methanol units: Assessing techno-economic and environmental aspects compared to CO2 into SNG alternative. Journal of CO2 Utilization, 85, 102879. https://doi.org/10.1016/j.jcou.2024.102879

Pérez-Fortes, M., Schöneberger, J. C., Boulamanti, A., & Tzimas, E. (2015). Methanol synthesis using captured CO2 as raw material: Techno-economic and environmental assessment. Applied Energy, 161, 718–732. https://doi.org/10.1016/j.apenergy.2015.07.067

Jeong, J. H., Kim, Y., Oh, S., Park, M., & Lee, W. B. (2022). Modeling of a methanol synthesis process to utilize CO2 in the exhaust gas from an engine plant. Korean Journal of Chemical Engineering, 39(8), 1989–1998. https://doi.org/10.1007/s11814-022-1124-1



3:10pm - 3:30pm

Insights on CO2 Utilization through Reverse Water Gas Shift Reaction in Membrane Reactors: A Multi-scale Mathematical Modeling Approach

Zhaofeng Li1, Anan Uziri1, Zahir Aghayev3,4, Burcu Beykal3,4, Michael Patrascu1,2

1Faculty of Chemical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel; 2Grand Technion Energy Program, Technion - Israel Institute of Technology, Haifa, 3200003, Israel; 3Department of Chemical & Biomolecular Engineering, University of Connecticut, Storrs, CT 06269, USA; 4Center for Clean Energy Engineering, University of Connecticut, Storrs, CT 06269, USA

Current environmental challenges necessitate the mitigation of CO2 emissions. However, CO2 emissions from certain industries are projected to remain significant in the foreseeable future. CO2 utilization is a promising approach to reduce atmospheric CO2 by using it as a feedstock to produce valuable products1. Process intensification by in-situ water separation is a promising concept that enables the development of novel CO2 utilization pathways, as most CO2 utilization processes produce water as a byproduct. Packed bed membrane reactors (PBMRs) combine catalytic reactions of CO2 with selective separation through permeable membranes, based on zeolites, such as LTA, carbon or other materials.

Among the various CO2 utilization pathways, the reverse water gas shift (RWGS) reaction is crucial as it produces syngas, which can further be used to synthesize various products such as methanol, DME, and aviation fuels. Despite its importance, the RWGS reaction remains underexplored in terms of rigorous modeling, simulation, and optimization. Some RWGS-PBMR models exist, but they often oversimplify membrane characteristics (i.e., assume constant permeance) and overlook practical aspects (e.g., the use of nitrogen as sweep gas).

In this work we have developed a multi-scale model to study the potential of LTA-membrane reactors for CO2 utilization processes. A detailed microscale membrane permeance model is combined with a reactor-scale model and used as a block in a fully integrated process-scale model. The permeance model predicts the impact of operating temperature, pressure and gas phase composition on water permeance and the perm-selectivity to other relevant species, based on the trans-membrane flux described as the sum of gas translation and surface adsorption diffusion2. Simulations reveal significant changes in membrane permeance under different gas compositions and operating temperatures, highlighting the necessity of incorporating the membrane permeance model into the reactor design.
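
Schematically, the permeance model expresses the trans-membrane flux of each species i as the sum of a gas-translation contribution and a surface-adsorption-diffusion contribution (the detailed temperature- and coverage-dependent expressions follow Zito et al. [2]; the form below is only a simplified sketch):

J_i = J_i^{\mathrm{GT}} + J_i^{\mathrm{S}} = \left[ \Pi_i^{\mathrm{GT}}(T) + \Pi_i^{\mathrm{S}}(T,\theta_i) \right] \left( p_i^{\mathrm{ret}} - p_i^{\mathrm{perm}} \right),

with \Pi_i the corresponding permeances, \theta_i the adsorbed-phase coverage, and p_i^{\mathrm{ret}}, p_i^{\mathrm{perm}} the partial pressures on the retentate and permeate sides.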

The effect of various design and operational parameters is evaluated, including membrane perm-selectivity for different species, pressure and flow rate ratios between the retentate and permeate sides, and sweep gases. It is concluded that high pressure and flow rate ratios generally have a positive effect on reactor performance, but excessively high ratios are not always worthwhile due to diminishing returns. An optimal value of the membrane selectivity is revealed for some configurations, i.e., a higher value is not necessarily better. The membrane reactor model is linked to an Aspen Hysys process simulation to evaluate the energy efficiency, yield, and other integrated-process attributes. A process configuration that recycles the dried retentate flow as sweep gas through the permeate side is proposed and compared to other process configurations suggested in the literature. These process considerations are analyzed and discussed.

Reference:

  1. M. Patrascu, Process intensification for decentralized production, Chem. Eng. Process. - Process Intensif. 184 (2023) 109291, http://dx.doi.org/10.1016/j.cep.2023.109291.
  2. Zito, P. F., Brunetti, A., Caravella, A., Drioli, E., & Barbieri, G. (2019). Water vapor permeation and its influence on gases through a zeolite-4A membrane. Journal of Membrane Science, 574, 154–163. https://doi.org/10.1016/j.memsci.2018.12.065.


3:30pm - 3:50pm

Model-based Optimal Design and Analysis of Thermochemical Storage and Release of Hydrogen via the Reversible Redox of Iron Oxide/Iron

Richard Yentumi, Constantin Jurischka, Bogdan Dorneanu, Harvey Arellano-Garcia

Brandenburg University of Technology Cottbus-Senftenberg, Germany

Global efforts to adopt cleaner-burning, low-CO2 fuels have accelerated, with hydrogen (H2) emerging as a promising option since its only byproduct is water vapor. Green hydrogen, produced via water electrolysis powered by renewable energy sources like solar or wind, has gained significant focus [1]. However, large-scale hydrogen storage faces major challenges due to its limitations. Gaseous compression and liquefaction are both energy-intensive and costly, while compressed hydrogen storage presents safety risks. For hydrogen to become a mainstream fuel, technical hurdles related to safe, energy-efficient storage for both stationary and mobile applications must be overcome.

This contribution introduces a solid-state hydrogen storage and release system based on the reversible iron oxide/iron thermochemical redox mechanism. In this process, magnetite (Fe3O4) undergoes an endothermic reduction with hydrogen, producing pure iron and water vapor. The reaction is reversible, allowing hydrogen recovery when iron reacts with steam to reform magnetite and release H2. Iron oxide/iron is an attractive candidate for large-scale hydrogen storage due to its abundance, low cost, non-toxicity, and lower energy requirements compared to other metal oxides [2]. Despite its potential, the system's high operating temperature (≥ 420°C), low storage density, and slow charging/discharging rates limit its suitability for mobile applications like hydrogen fuel cell vehicles (FCVs) [3].
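
The underlying storage and release chemistry, together with the equilibrium constant for pure solid phases, can be written as

\mathrm{Fe_3O_4} + 4\,\mathrm{H_2} \;\rightleftharpoons\; 3\,\mathrm{Fe} + 4\,\mathrm{H_2O}, \qquad K(T) = \left( \frac{p_{\mathrm{H_2O}}}{p_{\mathrm{H_2}}} \right)^{4},

where the forward (endothermic reduction) direction stores hydrogen in the form of metallic iron, the reverse (steam oxidation) direction releases it, and the equilibrium expression assumes unit activity for the solid phases.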

To address these challenges, a custom thermochemical equilibrium model was developed using NIST thermochemistry data. This model predicted the equilibrium conversion of hydrogen to steam and the corresponding heat input required as a function of reaction temperature. Simulation results revealed a trade-off between the two main objectives: maximising equilibrium conversion and minimising heat input during the forward reaction. A multi-objective optimisation study demonstrated a preference for prioritising energy efficiency. Overall, the findings provided invaluable insights on setting the optimal process conditions and configuration of this thermochemical storage approach.

REFERENCES

[1] Raghu Raman, Vinith Kumar Nair, Veda Prakash, Anand Patwardhan, Prema Nedungadi, Green-hydrogen research: What have we achieved, and where are we going? Bibliometrics analysis, Energy Reports, 2022, 8, 9242–9260.

[2] L. Brinkman, B. Bulfin, and A. Steinfeld, Thermochemical Hydrogen Storage via the Reversible Reduction and Oxidation of Metal Oxides, Energy Fuels, 2021, 35, 18756-18767.

[3] K. Otsuka, C. Yamada, T. Kaburagi, S. Takenaka, Hydrogen storage and production by redox of iron oxide for polymer electrolyte fuel cell vehicles, International Journal of Hydrogen Energy 2003, 28, 335-342.



3:50pm - 4:10pm

Techno-Economic and Prospective Life Cycle Assessment of Sustainable Propanol Production Pathways

Abhinandan Nabera1, Juan D. Medrano-García1, Sachin Jog1, Robert Istrate2, Gonzalo Guillén Gosálbez1

1Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 1, 8093 Zurich, Switzerland; 2Institute of Environmental Sciences (CML), Leiden University, Einsteinweg 2, 2333 CC Leiden, The Netherlands

Abstract

The chemical industry has the highest energy demand across industrial sectors, primarily due to its heavy reliance on fossil fuels for both feedstock and utilities. Specifically, the industry consumes ca. 14% and 8% of the global oil and gas supply, respectively, contributing to 5.6 Gt CO2e emissions annually (including both direct and indirect emissions), which accounts for 10% of global greenhouse gas emissions (Bauer et al., 2023). To meet the ambitious targets set by the Paris Climate Agreement, numerous studies in recent years have focused on reducing CO2 emissions from chemical production. Techno‑economic and environmental studies, in particular, have gained wide attention in identifying more sustainable production pathways, with life cycle assessment emerging as the prevalent tool for environmental impact assessments. However, most of the studies consider fixed background data, neglecting the effects that the future evolution of socio‑economic systems could have on the chemical sector.

Propanol is a platform chemical with an annual demand of 4 Mt and a growth rate of 5%. Currently, propanol production relies on syngas and ethylene derived from fossil fuels, specifically from natural gas and naphtha, respectively (Vo et al., 2021). Consequently, the fossil‑based production route for propanol results in significant environmental burdens. An alternative to the fossil‑based route relies on using syngas from captured CO2 and renewable‑powered electrolytic hydrogen via the reverse water‑gas shift reaction. Alternatively, renewable carbon feedstocks such as biomass, biomethane, and plastics could also be used to produce syngas for propanol production. To date, the economic and environmental benefits of these alternative propanol production routes remain unexplored.

To fill this critical research gap, we analyse sustainable routes for producing propanol by conducting a techno‑economic and prospective life cycle assessment to evaluate both their current and future environmental impacts. To this end, we develop detailed process simulations using Aspen HYSYS® v12.1 to quantify the process’s economic and environmental performance. For the environmental assessment, foreground data are extracted from the process simulation, while background inventories are obtained from Ecoinvent v3.10 using Brightway2.5 v1.0.6. Moreover, to perform a prospective life cycle assessment, we employ the premise v2.1.2 framework to model future background data using the IMAGE Integrated Assessment Model, following the shared socioeconomic pathway SSP2 (i.e., ‘middle‑of‑the‑road’) under different representative concentration pathways (RCPs). Overall, our results indicate that the biomass‑based alternatives demonstrate the best economic and environmental performance. Furthermore, we find that using prospective LCA data can greatly affect the outcome of the analysis, reinforcing the need to accompany standard LCAs with prospective studies to obtain a more comprehensive picture of the process’s potential.

References

Bauer, F., Tilsted, J.P., Pfister, S., Oberschelp, C., Kulionis, V., 2023. Mapping GHG emissions and prospects for renewable energy in the chemical industry. Curr. Opin. Chem. Eng. 39, 100881. https://doi.org/10.1016/j.coche.2022.100881

Vo, C.H., Mondelli, C., Hamedi, H., Pérez-Ramírez, J., Farooq, S., Karimi, I.A., 2021. Sustainability Assessment of Thermocatalytic Conversion of CO2 to Transportation Fuels, Methanol, and 1-Propanol. ACS Sustain. Chem. Eng. 9, 10591–10600. https://doi.org/10.1021/acssuschemeng.1c02805



4:10pm - 4:30pm

Optimizing the Selection of Solvents for the Dissolution and Precipitation of Polyethylene

Riccardo Standish1, Jian Ying2, Jakob Burger3, Mirjana Minceva2, Amparo Galindo1, George Jackson1, Claire S J Adjiman1

1Department of Chemical Engineering, Sargent Centre for Process Systems Engineering, Imperial College London, UK; 2TUM School of Life Sciences, Technical University of Munich, Germany; 3Campus Straubing for Biotechnology and Sustainability, Technical University of Munich, Germany

Plastics are indispensable in modern commerce and industry, with their consumption projected to double in the next 20 years, according to the European Environmental Agency (EEA) [1]. However, the environmental persistence of plastics and associated greenhouse gas (GHG) emissions are escalating concerns.

Currently, most plastic recycling in Europe is mechanical, which is energy-intensive, inefficient at removing contaminants, and produces secondary-grade outputs. Solvent-based polymer dissolution is emerging as a promising solution, potentially reducing CO2 emissions by 65-75% per ton of plastic waste compared to incineration [2].

In this study we present a novel computer-aided molecular design (CAMD) formulation for selecting optimal solvents for polymer recycling via dissolution and precipitation. Polyethylene, which is found in heavily contaminated multilayer plastic films, is chosen as a case study polymer as it is well-suited to recycling using the dissolution and precipitation method. A mixed-integer nonlinear programming (MINLP) model is proposed to minimise the heat of dissolution for commercial polyethylene while considering solvent properties such as latent heat and toxicity.

Solubility plays a key role in determining the performance of the process, but literature data are only available for a limited number of solvents. We employ the predictive SAFT-γ Mie [3] equation of state for the first time to describe polymer-solvent mixtures in the context of plastic recycling. This thermodynamic model, with its group-contribution approach, can accurately model various solvent systems with a minimal number of parameters and experimental data. SAFT-γ Mie is used to predict polyethylene solubility in numerous solvents and to assess polymer-solvent miscibility. We extend the current development of our model to consider green solvents such as cymene and deep eutectic solvents. Our SAFT-γ Mie predictions of polyethylene solubility show good agreement with experimental data.

In the CAMD, we consider a range of organic solvents with diverse molecular structures, including aromatic molecules such as toluene and p-xylene; bi-cyclic compounds such as decalin; and associating molecules such as methyl ethyl ketone (MEK) and ethyl acetate. Additionally, bio-derived solvents such as cymene and dibutoxymethane are included in the design space.

The MINLP is solved to generate a ranked list of potential solvents and associated process conditions for dissolving low-density polyethylene, one of the most prevalent polymers in industrial and municipal plastic waste. This study provides valuable insights into the selection of optimal solvents for polyethylene dissolution, advancing the design of more efficient recycling processes.

[1] European Environment Agency, Reichel, A., Trier, X., Fernandez, R. et al. (2021) Plastics, the circular economy and Europe's environment : a priority for action. Publications Office. https://data.europa.eu/doi/10.2800/5847

[2] I. Vollmer et al., ‘Beyond Mechanical Recycling: Giving New Life to Plastic Waste’, Angew. Chem. Int. Ed., vol. 59, no. 36, pp. 15402–15423, 2020, doi: 10.1002/anie.201915651.

[3] A. J. Haslam et al., ‘Expanding the Applications of the SAFT-γ Mie Group-Contribution Equation of State: Prediction of Thermodynamic Properties and Phase Behavior of Mixtures’, J. Chem. Eng. Data, vol. 65, no. 12, pp. 5862–5890, Dec. 2020, doi: 10.1021/acs.jced.0c00746.

 
2:30pm - 4:30pmT5: Concepts, Methods and Tools - Session 7
Location: Zone 3 - Room E033
Chair: Juan Segovia-Hernandez
Co-chair: Antonio del Rio Chanona
 
2:30pm - 2:50pm

Modified Murphree Efficiency for Realistic Modeling of Liquid-Liquid Extraction Stage Efficiencies

Mahdi Mousavi, Ville Alopaeus

Aalto University, Finland

Liquid-liquid extraction (LLX) is a fundamental separation method in chemical engineering, widely used for the separation of components based on their solubility in two immiscible liquids (Thornton, 1996). Accurately modeling stage efficiencies in LLX processes is essential for reliable process design and optimization. However, conventional process simulation software (e.g., Aspen Plus) often struggles to represent LLX stage efficiencies precisely. The default methods, such as directly applying efficiency factors to distribution coefficients, can distort equilibrium calculations and lead to inaccurate simulation results. This study introduces a modified version of the Murphree efficiency definition to model LLX stage efficiencies more accurately within process simulators, enhancing the reliability and accuracy of process simulations.

To address the limitations of existing efficiency definitions in simulation software, we revisited the concept of stage efficiency, focusing on the Murphree efficiency. While Murphree efficiency is widely used in distillation processes, its direct application to LLX is not straightforward due to the differences in phase behavior. We modified the standard Murphree efficiency by substituting mole flows for mole fractions, aligning the efficiency calculation with the mass transfer characteristics of LLX processes. This modification accounts for deviations from equilibrium caused by factors such as insufficient mixing, mass transfer resistance, and other operational inefficiencies, providing a more realistic representation of the actual performance of extraction stages.
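One plausible reading of this flow-based modification (not the authors' exact equations) is that the stage efficiency interpolates each component's molar flow between its inlet value and the value it would reach at equilibrium; the small sketch below illustrates this with made-up numbers.

```python
# Illustrative flow-based stage efficiency (a plausible reading of the paper's
# modification, not the authors' exact equations): the outlet component flow
# moves from its inlet value toward the equilibrium value in proportion to the
# stage efficiency.
def modified_murphree_outlet(n_in, n_eq, efficiency):
    """Outlet component mole flow [mol/s] of the extract phase for one stage.

    n_in        component mole flow entering the stage in the extract phase
    n_eq        component mole flow the extract would carry at equilibrium
    efficiency  flow-based Murphree efficiency, 0 (no mass transfer) to 1 (equilibrium)
    """
    return n_in + efficiency * (n_eq - n_in)

# Example with made-up numbers: acetone transferred into the extract phase
print(modified_murphree_outlet(n_in=0.0, n_eq=2.5, efficiency=0.7))  # -> 1.75 mol/s
```

Consistent with the validation described below, this expression returns the inlet flow (no extraction) at zero efficiency and the equilibrium flow at full efficiency.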

Implementing this modified efficiency model involved creating a multi-stage LLX column within a custom modeling environment, specifically Aspen Custom Modeler (ACM). Each stage in the model applies the modified Murphree efficiency, allowing for a detailed and realistic simulation of the extraction process. The custom model was then integrated into Aspen Plus, enabling users to perform simulations that reflect real-world inefficiencies and operational conditions in LLX processes. This integration provides greater control over the simulation, including the precise application of stage efficiencies, and overcomes the constraints of default efficiency calculations in standard simulation software.

To demonstrate the effectiveness of the modified efficiency model, we conducted simulations using an acetone-water system with 3-methylhexane as the solvent. The results showed that the modified efficiency definition predicts the LLX process performance more realistically across various efficiency levels. At zero efficiency, the model correctly indicates no extraction or phase separation, while at full efficiency, the system reaches equilibrium conditions. This validation confirms the model's ability to capture the full spectrum of operational efficiencies in LLX processes.

The novelty of this work lies in the modification of the original Murphree efficiency definition specifically for LLX processes. By substituting mole flows for mole fractions, we adapted the standard Murphree efficiency to better align with the mass transfer characteristics of LLX. This modification accounts for deviations from equilibrium due to operational inefficiencies, providing a more realistic representation of extraction stage performance. The modified efficiency model is tested by creating a custom multi-stage LLX column within ACM and integrating it into Aspen Plus. While our implementation utilizes Aspen Plus and ACM, the principles and methodology are applicable to other simulation environments, potentially broadening the impact of this approach within the chemical engineering community.



2:50pm - 3:10pm

Differentiation between Process and Equipment Drifts in Chemical Plants

Linda Eydam, Lukas Furtner, Julius Lorenz, Leon Urbas

TU Dresden, Germany

The performance of chemical plants is inevitably related to knowledge about the underlying process as well as the deployed equipment. However, equipment drifts make it difficult to obtain accurate process information. Measurement deviations caused by equipment malfunction may be misinterpreted as process drifts and vice versa. It gets even more complex to clearly determine the cause of the drift and the proper course of action when such equipment drifts occur in combination with process drifts. Additional information, which can be provided by the second channel [1], has the potential to enable recognition and decoupling of coupled drifts. The second channel is an interface to the automation pyramid, where additional data can be read out and used without impacting the automation system [1].

In this work, a method is presented that uses additional data to detect and decouple coupled drifts. To achieve this goal, a combination of existing approaches is required. Data analysis using statistical methods and quantitative model-based approaches are combined. Statistical approaches divide the data into clusters [2]. Clusters are areas with different characteristics, such as an area with a certain drift or an area without any drift. The problem is that after such a cluster analysis, it is not possible to determine which cluster belongs to which drift just by statistical approaches. For this reason, possible drifts are modeled, theoretical data points are generated, and theoretical clusters are formed. These theoretical clusters are compared to the real clusters. Through this comparison, the real clusters can be assigned to the drifts. The idea of decoupling is to reconstruct the drifts and span drift vectors to characterize them. Data points that are drift couplings become interpretable vectors. Drift decomposition disassembles these data point vectors into individual drifts.
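A minimal numerical sketch of the decoupling step, assuming invented drift signature vectors and measurements (not the authors' data or implementation): an observed deviation from the drift-free operating point is decomposed onto modelled process-drift and equipment-drift directions by non-negative least squares.

```python
# Minimal sketch of the decoupling idea (illustrative only): observed deviations
# from the drift-free operating point are decomposed onto modelled drift
# directions. The signature vectors below are invented.
import numpy as np
from scipy.optimize import nnls

# Columns: modelled "process drift" and "equipment (sensor) drift" directions
# in the space of three measured variables (hypothetical units).
D = np.array([[1.0, 0.0],
              [0.5, 0.0],
              [0.2, 1.0]])

# An observed deviation vector (measurement minus drift-free reference).
delta = np.array([0.9, 0.45, 1.15])

# Non-negative least squares gives the magnitude of each drift component.
magnitudes, residual = nnls(D, delta)
print("process drift:", magnitudes[0], "equipment drift:", magnitudes[1])
```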

The application of the developed method on a use case in a brownfield chemical plant showed that the method successfully detects and distinguishes simultaneously appearing process and equipment drifts.

Sources
[1] J. de Caigny, T. Himmelsbach, and R. Huck, “NOA-Konzept, NE 175,” in NAMUR Open Architecture (NOA) Das Konzept zur Öffnung der Prozessautomatisierung, T. Tauchnitz, Ed., Essen: Vulkan Verlag, 2021, pp. 5–9.
[2] R. Dunia and S. Joe Qin, “Joint diagnosis of process and sensor faults using principal component analysis,” Control Engineering Practice, vol. 6, no. 4, Art. no. 4, Apr. 1998, doi: 10.1016/S0967-0661(98)00027-6.



3:10pm - 3:30pm

A superstructure approach for optimization of Simulated Moving Bed (SMB) chromatography

Eva Sorensen, Dian Ning Chia, Fanyi Duanmu

University College London, United Kingdom

High-performance liquid chromatography (HPLC) is one of the main separation methods in the pharmaceutical industry. HPLC can be operated both in batch and continuous mode, although the latter is only slowly emerging as a processing alternative, mainly due to the complexity of both its design and its operation. The most successful continuous HPLC process for drug manufacturing is the Simulated Moving Bed (SMB). SMB is a multi-column, continuous, chromatographic process that can handle much higher throughputs than regular batch chromatographic processes. The process is initially transient, but eventually arrives at a cyclic steady state, which makes optimization very challenging. SMB usually has four sections, the desorbent, extract, feed and raffinate sections, and simulates the counter-current flow between the stationary and mobile phases through periodical and synchronous switching of the inlet and outlet ports in the direction of fluid flow. Each SMB section can have a different number of columns, which must be determined carefully due to physical limitations (e.g. pressure drop) and the significant effect on separation performance. To the best of our knowledge, however, existing studies either pre-fixed the column configuration (number of columns per section) or optimized each possible configuration individually, which clearly results in a sub-optimal design and/or is very time-consuming. This work therefore proposes a superstructure approach that allows for simultaneous optimization of the number of columns in each section, as well as the column dimensions (length and diameter), the switching times, and the flow rates. A single superstructure optimization, which is typically a mixed integer non-linear programming (MINLP) problem, is therefore sufficient to obtain not only the optimal configuration, but also the entire design as well as the operation procedure.
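For illustration only, the integer core of such a superstructure might look like the Pyomo skeleton below; the column limit and objective are placeholders, and the actual model in this work also carries the column dimensions, switching times, flow rates, and the discretised column equations.

```python
# Skeleton of the integer part of an SMB superstructure (illustrative only).
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.sections = pyo.Set(initialize=["desorbent", "extract", "feed", "raffinate"])

# Number of columns allocated to each SMB section (bounds are placeholders)
m.ncol = pyo.Var(m.sections, domain=pyo.PositiveIntegers, bounds=(1, 4))

# Total number of columns available in the unit (hypothetical limit)
m.total = pyo.Constraint(expr=sum(m.ncol[s] for s in m.sections) <= 8)

# Placeholder objective: in the full superstructure this would be, e.g.,
# desorbent consumption or productivity evaluated from the discretised
# column model, giving an MINLP rather than this trivial integer problem.
m.obj = pyo.Objective(expr=sum(m.ncol[s] for s in m.sections), sense=pyo.minimize)
```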

This work focuses not only on the optimal design of SMB using a superstructure approach, but also on the steps required and the challenges faced when constructing such a superstructure model, taking into account the transient startup and the final cyclic steady state. Depending on the purpose of the study, SMB processes can be modelled either with partial discretization (i.e. only temporal domain discretized) or full discretization (i.e. temporal and spatial domains discretized); with the former being used for studying dynamic behaviour such as start-up conditions while the latter is required for optimization purposes. This work first validates both the partially and fully discretized superstructure models against experimental results reported in the literature. Then, superstructure optimization based on full discretization is considered and compared with individual optimizations of the possible structures for a given case study.
The results show that the superstructure optimization proposed in this work can converge to the best column structure with significantly lower computation time.



3:30pm - 3:50pm

Design Space Exploration via Gaussian Process Regression and Alpha Shape Visualization

Elizaveta Marich, Andrea Galeazzi, Foteini Michalopoulou, Steven Sachio, Maria M. Papathanasiou

Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, United Kingdom

Efficient identification of the design space (DSp) is crucial for optimizing chemical process development, ensuring adherence to industry standards for product quality, safety, and performance. However, traditional methods often struggle with the computational challenges posed by multi-dimensional, non-convex problems [1,2,3]. In response, we propose a novel approach that combines Gaussian Process Regression (GPR) with alpha shape reconstruction to efficiently evaluate and visualize design spaces across varying dimensionalities.

Our methodology focuses on reducing the computational complexity of knowledge space generation by employing GPR surrogate models enhanced through an integrated kernel optimization step. Using a greedy tree search algorithm to identify the optimal composite kernel [4], the approach significantly improves the model's ability to capture intricate, non-linear relationships within the design space.
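The greedy composite-kernel idea of reference [4] can be sketched with scikit-learn as below; this is a simplified illustration on synthetic one-dimensional data, and the paper's implementation and scoring criterion may differ.

```python
# Hedged sketch of a greedy composite-kernel search for GPR (illustrating the
# idea of ref. [4]; not the paper's implementation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

def greedy_kernel_search(X, y, depth=2):
    base = [RBF(), Matern(nu=2.5), RationalQuadratic()]
    best_kernel, best_score = None, -np.inf
    for _ in range(depth):
        if best_kernel is None:
            candidates = list(base)
        else:
            # Grow the current best kernel by one sum or product operation
            candidates = [best_kernel + k for k in base] + [best_kernel * k for k in base]
        improved = False
        for k in candidates:
            gpr = GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X, y)
            score = gpr.log_marginal_likelihood_value_  # a BIC-style penalty could be added
            if score > best_score:
                best_kernel, best_score, improved = gpr.kernel_, score, True
        if not improved:
            break
    return best_kernel, best_score

# Toy 1-D example
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(6 * X).ravel() + 0.05 * np.random.default_rng(0).normal(size=30)
print(greedy_kernel_search(X, y))
```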

To define the boundaries of the feasible region without assuming convexity, we utilize alpha shape reconstruction. This technique extends the concept of convex hulls to handle non-convex and disjoint shapes, providing an accurate representation of complex design spaces [3]. The alpha shape reconstruction is implemented using the 'dside' Python package developed by Sachio et al. [5], which integrates Delaunay Triangulations and a bisection search to determine the largest alpha radius, effectively reconstructing the design space.

We assess the effectiveness of the proposed methodology through case studies involving constrained non-convex functions and engineering design problems across two- to seven-dimensional spaces. The results demonstrate that our approach can accurately reconstruct complex design spaces while requiring significantly fewer computational resources compared to existing surrogate-based methods.

References:

  1. Grossmann, I. E., Halemane, K. P., & Swaney, R. E. (1983). Optimization strategies for flexible chemical processes. Computers and Chemical Engineering, 7, 439–462.
  2. Ierapetritou, M. G., & Pistikopoulos, E. N. (2018). Optimization approaches for design and planning of flexible chemical plants. In Process Systems Engineering: Volume 1: Process Modeling, Simulation and Control (pp. 147-184). Wiley.
  3. Geremia, M., Bezzo, F., Ierapetritou, M.G. (2023). A novel framework for the identification of complex feasible space. Computers & Chemical Engineering, 179, 108427.
  4. Duvenaud, D., Lloyd, J.R., Grosse, R., Tenenbaum, J.B., Ghahramani, Z. (2013). Structure discovery in nonparametric regression through compositional kernel search. In Proceedings of the 30th International Conference on Machine Learning (pp. 1166–1174).
  5. Sachio, S., Kontoravdi, C., Papathanasiou, M.M. (2023). A model-based approach towards accelerated process development: A case study on chromatography. Chemical Engineering Research and Design, 197, 800–820.


3:50pm - 4:10pm

Langmuir.jl: An efficient and composable Julia package for adsorption thermodynamics.

Vinicius Viena Santana1, Andrés Riedemann3, Pierre Walker2, Idelfonso Nogueira1

1Norwegian University of Science and Technology; 2California Institute of Technology; 3Universidad de Concepción

Recent advancements in material design have made adsorption a more energy-efficient alternative to traditional thermally driven separation processes. Accurate modelling of adsorption thermodynamics is crucial for designing and operating equilibrium-limited adsorption systems. While high-quality open-source packages like PyIAST1, PyGAPs2, and Ruptura3 are available for processing adsorption data, they operate in isolated ecosystems with limited integration with other computational tools. For example, calculating the isosteric heat of adsorption for single or multi-component systems requires derivatives, which can be error-prone, time-consuming and challenging to maintain for new isotherms if done manually. Automatic differentiation (AD) frameworks are a potential solution to this problem, but in most AD engines, many package elements must be rewritten to accommodate specific AD object types.

Langmuir.jl addresses these limitations by leveraging Julia's composable and differentiable programming ecosystem. Langmuir.jl includes tools for processing adsorption thermodynamics data—loading data, fitting isotherms with the most commonly used models, predicting multicomponent adsorption through Ideal Adsorption Solution Theory (IAST)—and, importantly, enabling accurate derivative calculations through Julia's automatic differentiation libraries, without requiring extensive code adjustments. Additionally, it integrates seamlessly with Clapeyron.jl4 for rigorous fluid-phase behaviour modelling, an aspect that most implementations neglect and that has become increasingly important for high-pressure gas storage, e.g., hydrogen.
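To illustrate the point about automatic differentiation in Python rather than Julia (this is not the Langmuir.jl API, and the isotherm and its parameters are hypothetical), the JAX sketch below obtains the isosteric heat of a temperature-dependent Langmuir isotherm from the Clausius-Clapeyron relation without deriving the expression by hand.

```python
# Conceptual Python/JAX illustration of AD-based isosteric-heat calculation.
# Not the Langmuir.jl API; isotherm parameters are made up.
import jax
import jax.numpy as jnp

R  = 8.314        # J/(mol K)
M  = 5.0          # saturation loading, mol/kg (hypothetical)
K0 = 1e-7         # affinity pre-exponential, 1/Pa (hypothetical)
dH = -20_000.0    # adsorption enthalpy, J/mol (hypothetical)

def pressure(loading, T):
    """Invert a temperature-dependent Langmuir isotherm: P such that n(P, T) = loading."""
    K = K0 * jnp.exp(-dH / (R * T))
    return loading / (K * (M - loading))

def isosteric_heat(loading, T):
    """q_st = -R * d(ln P)/d(1/T) at constant loading (Clausius-Clapeyron)."""
    ln_p_of_invT = lambda invT: jnp.log(pressure(loading, 1.0 / invT))
    return -R * jax.grad(ln_p_of_invT)(1.0 / T)

print(isosteric_heat(1.0, 300.0))   # approx. 20 kJ/mol, i.e. -dH for this isotherm
```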

[1] Simon, C. M., Smit, B., & Haranczyk, M. (2016). PyIAST: Ideal adsorbed solution theory (IAST) Python package. Computer Physics Communications, 200, 364-380. https://doi.org/10.1016/j.cpc.2015.11.016

[2] Iacomi, P., & Llewellyn, P. L. (2019). pyGAPS: A Python-based framework for adsorption isotherm processing and material characterisation. Adsorption, 25(8), 1533-1542. https://doi.org/10.1007/s10450-019-00168-5

[3] Sharma, S., Balestra, S. R. G., Baur, R., Agarwal, U., Zuidema, E., Rigutto, M. S., Calero, S., Vlugt, T. J. H., & Dubbeldam, D. (2023). RUPTURA: Simulation code for breakthrough, ideal adsorption solution theory computations, and fitting of isotherm models. Molecular Simulation, 49(9), 893-953. https://doi.org/10.1080/08927022.2023.2202757

[4] Walker, P. J., Yew, H.-W., & Riedemann, A. (2022). Clapeyron.jl: An extensible, open-source fluid thermodynamics toolkit. Industrial & Engineering Chemistry Research, 61(20), 7130-7153. https://doi.org/10.1021/acs.iecr.2c00326

 
2:30pm - 4:30pmT7: CAPEing with Societal Challenges - Session 6
Location: Zone 3 - Aula D002
Chair: Carlos Pozo Fernández
Co-chair: Gonzalo Guillén-Gosálbez
 
2:30pm - 2:50pm

Understanding the Impact of the European Chemical Industry Against Planetary Boundaries

Irene Barnosell1, Carlos Pozo Fernández2

1LEQUiA, Institute of the Environment, University of Girona, E-17071 Girona, Spain; 2Departament d'Enginyeria Química, Universitat Rovira i Virgili, Av. Països Catalans 26, 43007 Tarragona, Spain

The European chemical industry plays a critical role in the region's economy, producing essential chemicals for numerous sectors. However, its environmental footprint is substantial, with high energy consumption, significant greenhouse gas emissions, and the release of harmful chemicals. To date, research on the environmental performance of the chemical industry has often been limited to specific processes or activities, lacking a comprehensive sector-wide perspective.

To address this gap, this study evaluates the sector's environmental performance against the planetary boundaries (PB) framework, which defines the ecological limits within which humanity can safely operate. By comparing the sector's environmental impacts to these boundaries, we assess its absolute sustainability and identify key areas of transgression. To do this, we consider the 19 highest-volume chemicals as representative of the entire European chemical sector. These chemicals account for 80% of the industry’s energy consumption and 75% of its greenhouse gas emissions, highlighting their critical role in both production volume and environmental impact. Given that each of these chemicals can be manufactured through multiple processes, our analysis incorporates data from 32 processes across 23 datasets, sourced from the ecoinvent 3.5 database. To avoid double counting impacts, we explore the links between these 23 activities and adjust production volumes accordingly.

Our findings reveal that the European chemical industry significantly exceeds the safe operating limits for multiple PBs, particularly for climate change, ocean acidification, and biosphere integrity. The industry's contribution to atmospheric CO2 concentration and energy imbalance at the top of the atmosphere exceeds safe levels by 15 and 16 times, respectively, while impacts on ocean acidification are 6 times greater than acceptable. The biosphere integrity boundary, assessed here via functional diversity, is also slightly transgressed (3%). Five high-volume chemicals (ammonia, polypropylene, high-density polyethylene, styrene, and benzene) are responsible for 50% of the sector's overall environmental burden across all PBs.

We also explore various mitigation pathways, including the deployment of carbon capture and storage (CCS) technologies, the use of renewable energy, and green hydrogen. Our results indicate that CCS could enable the sector to meet all PBs concurrently; however, burden-shifting to other environmental areas remains a concern. This highlights the necessity of holistic approaches to sustainability, where solutions are evaluated not only within the chemical industry but also in interconnected sectors, such as energy.

In conclusion, while technological solutions such as CCS and green chemistry innovations hold promise, they must be implemented in conjunction with broader systemic changes, including policy interventions and cross-sector collaboration. This study emphasizes the need for integrated, multi-disciplinary strategies to ensure that the European chemical industry can transition toward sustainability within the ecological limits of the planet.



2:50pm - 3:10pm

An optimization-based law of mass action precipitation/dissolution model

Chris Laliwala1, Oluwamayowa O. Amusat2, Ana Inés Torres1

1Carnegie Mellon University, Pittsburgh, PA 15213, USA; 2Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA 94720, USA

As the United States advances its decarbonization goals through electrification initiatives, significant engineering challenges related to the reliance on rare earth elements and many other critical minerals will have to be overcome. Recovery of these critical minerals, either from ores or unconventional feedstocks such as end-of-life products, involves processes where chemical equilibrium calculations are essential. Chemical equilibria problems are typically solved in one of two ways1: either by minimizing the Gibbs free energy of the system (the GEM approach) or by solving a system of equations involving the equilibrium constants (the law of mass action approach, LMA).

However, despite the widespread use and popularity of the LMA approach, it tends to fail when many species are involved, as simultaneous satisfaction of equilibrium between all in-solution and precipitated species is not always possible. Software packages such as PHREEQC2 and MINTEQ3, which utilize LMA approaches, make use of different heuristics based on saturation indices to arrive at a solution.4 The newer GEM methods are more stable, but they also rely on thermodynamic data that is not always available.

In this work, we present an optimization-based approach for solving precipitation/dissolution reactions utilizing equilibrium relations. Our approach models the precipitation reactions as inequality constraints, which relaxes the typical requirement of equilibrium between all precipitated and in-solution species. The objective function is set to minimize the square of the difference between the ion product QP, defined over the actual concentrations in solution, and the equilibrium constants Keq or solubility products Ksp, defined over the equilibrium concentrations in solution. This choice of objective function allows the identification of the species that should precipitate (i.e., QP = Ksp) and those that should not (i.e., QP ≤ Ksp) without the need for saturation-index heuristics. We hypothesize that this model may have advantages over current LMA-based software packages in certain applications as it (i) makes use of commonly available data such as solubility products Ksp and equilibrium constants Keq, and (ii) can be more easily embedded in unit operations’ optimization problems.
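A toy version of this optimisation-based idea for a single sparingly soluble salt (not the authors' full model; all values are illustrative) can be written in Pyomo as follows, assuming an NLP solver such as IPOPT is available.

```python
# Toy optimisation-based LMA sketch for AB(s) <-> A+ + B- (illustrative only).
import math
import pyomo.environ as pyo

Ksp = 1.0e-4      # solubility product (hypothetical)
nA  = 0.02        # total mol of A in the system
nB  = 0.02        # total mol of B in the system
V   = 1.0         # solution volume, L

m = pyo.ConcreteModel()
m.cA = pyo.Var(bounds=(1e-12, None), initialize=nA / V)   # [A+] in solution
m.cB = pyo.Var(bounds=(1e-12, None), initialize=nB / V)   # [B-] in solution
m.s  = pyo.Var(bounds=(0.0, None), initialize=0.0)        # mol of AB precipitated

# Mass balances over A and B (solution + solid)
m.balA = pyo.Constraint(expr=m.cA * V + m.s == nA)
m.balB = pyo.Constraint(expr=m.cB * V + m.s == nB)

# Precipitation as an inequality: the ion product never exceeds Ksp
m.sat = pyo.Constraint(expr=m.cA * m.cB <= Ksp)

# Drive the ion product toward Ksp; if the solution is undersaturated at s = 0,
# the optimum simply leaves Q below Ksp and no solid forms.
m.obj = pyo.Objective(
    expr=(pyo.log(m.cA * m.cB) - math.log(Ksp)) ** 2, sense=pyo.minimize)

pyo.SolverFactory("ipopt").solve(m)   # assumes IPOPT is installed
print(pyo.value(m.cA), pyo.value(m.cB), pyo.value(m.s))   # ~0.01, 0.01, 0.01
```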

As a proof of concept, we apply our model to a novel REE recovery process developed by the Critical Materials Innovation Hub (CMI) to determine whether experimental results reported in the literature for that process could be successfully replicated. The CMI process uses a series of dissolution and precipitation reactions to recover rare earth elements as rare earth oxalates from end-of-life rare earth permanent magnets.5,6 The relative complexity and configurability of the process—having multiple stages and unit operations—makes it an ideal case for study, as the model could have direct impacts for licensees as they scale up and mature the process.

Acknowledgments: This effort was funded by the U.S. Department of Energy’s Process Optimization and Modeling for Minerals Sustainability (PrOMMiS) Initiative, supported by the Office of Fossil Energy and Carbon Management’s Office of Resource Sustainability.


(References and disclaimer omitted due to the abstract length limit.)



3:10pm - 3:30pm

Multi-Stakeholder Optimization for Identification of Relevant Life Cycle Assessment Endpoint Indicators

Dat Huynh1, Oluwadare Badejo1, Borja Hernández2, Marianthi Ierapetritou1

1Department of Chemical and Biomolecular Engineering, University of Delaware, Newark, Delaware, United States of America.; 2Chemical and Energy Technology Department, Universidad Rey Juan Carlos, Calle Tulipan s/n, Móstoles, Madrid, Spain.

Life Cycle Assessment (LCA) evaluates the environmental effects of products and processes. Life Cycle Impact Assessment methods such as TRACI and ReCiPe were developed to quantify impacts1, 2. They employ midpoint indicators relating the impact of an activity to specific environmental sectors. For example, kg SO2-eq relates to acidification in TRACI v2.12. ReCiPe uses endpoint indicators, a linear combination of midpoint indicators with weights assigned based on relative impact. Endpoint indicators aggregate relevant midpoint indicators to reflect broader societal impacts, such as human health, ecosystem quality, and resource depletion. While these endpoint indicators have a physical basis, their weights are mostly subjective and may not align with stakeholder interests. One example is the United Nations Human Development Index (HDI), where three indicators (life expectancy, education index, and income per capita) are grouped into the HDI with fixed weights3. These weights can change between regions and with stakeholders’ preferences. Therefore, we need an endpoint indicator that considers the environmental impacts important to stakeholders. This metric is essential for setting policy that prevents burden-shifting between different environmental impacts. A data-driven, multi-stakeholder framework has been developed to enable the creation of LCA endpoint metrics that accommodate the diverse needs of stakeholders, including businesses, governments, and the public.

To generate stakeholder preferences, a mass allocation approach based on emissions is used for businesses. Government reports are employed to determine gaseous emissions, wastewater generation, and solid waste generation. Scaled ordinal rankings based on mass allocation for the emissions estimated in each indicator are then established. Government stakeholder preferences are estimated from state emission regulations. Public preferences are determined using public survey data, and companies’ preferences are considered to correspond to their profitability.

A risk-based approach and a stakeholder satisfaction approach are used for optimization. In the risk-based approach, there is a probabilistic guarantee that in the worst-case scenario, the stakeholder’s worst option is minimized. To do so, an optimization problem using downside risk as the objective function is proposed. The stakeholder satisfaction approach minimizes deviation from stakeholders’ preferred solutions4. The objective function can be formulated as a risk measure that shapes the distribution of stakeholder dissatisfaction. Specifically, Conditional Value at Risk penalizes high dissatisfaction levels in the (1-α) tail of the distribution. By minimizing dissatisfaction, the model selects a solution that satisfies the top (α) percentile of stakeholders. A case study focusing on the state of Delaware is presented. From the government emissions reports, primary stakeholders are identified and their environmental preferences characterized. The optimization framework is used to calculate LCA endpoint metrics and compare them using these different approaches.
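The stakeholder-satisfaction variant can be sketched as a small linear programme in Pyomo; the preference vectors, indicator set, and dissatisfaction measure (an L1 deviation) below are invented for illustration and are not the authors' data or exact formulation.

```python
# Simplified CVaR-based aggregation of stakeholder preferences into endpoint
# weights (illustrative of the general approach only; all data are invented).
import pyomo.environ as pyo

midpoints = ["climate", "acidification", "eutrophication"]
prefs = {   # each stakeholder's preferred weights (hypothetical)
    "business":   {"climate": 0.6, "acidification": 0.2, "eutrophication": 0.2},
    "government": {"climate": 0.4, "acidification": 0.4, "eutrophication": 0.2},
    "public":     {"climate": 0.3, "acidification": 0.2, "eutrophication": 0.5},
}
alpha = 0.8   # aim to satisfy roughly the top alpha fraction of stakeholders

m = pyo.ConcreteModel()
m.S = pyo.Set(initialize=list(prefs))
m.I = pyo.Set(initialize=midpoints)
m.w   = pyo.Var(m.I, bounds=(0, 1))             # endpoint weights to be chosen
m.dev = pyo.Var(m.S, m.I, bounds=(0, None))     # |w_i - preference_{s,i}|
m.dis = pyo.Var(m.S, bounds=(0, None))          # dissatisfaction of stakeholder s
m.eta = pyo.Var()                               # CVaR auxiliary (value-at-risk level)
m.exc = pyo.Var(m.S, bounds=(0, None))          # tail excess above eta

m.norm = pyo.Constraint(expr=sum(m.w[i] for i in m.I) == 1)
m.abs1 = pyo.Constraint(m.S, m.I, rule=lambda m, s, i:  m.w[i] - prefs[s][i] <= m.dev[s, i])
m.abs2 = pyo.Constraint(m.S, m.I, rule=lambda m, s, i: -(m.w[i] - prefs[s][i]) <= m.dev[s, i])
m.dsum = pyo.Constraint(m.S, rule=lambda m, s: m.dis[s] == sum(m.dev[s, i] for i in m.I))
m.tail = pyo.Constraint(m.S, rule=lambda m, s: m.exc[s] >= m.dis[s] - m.eta)

nS = len(prefs)
m.obj = pyo.Objective(  # Rockafellar-Uryasev form of CVaR of dissatisfaction
    expr=m.eta + sum(m.exc[s] for s in m.S) / ((1 - alpha) * nS), sense=pyo.minimize)

pyo.SolverFactory("glpk").solve(m)   # assumes GLPK is installed
print({i: round(pyo.value(m.w[i]), 3) for i in midpoints})
```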

References:

(1) Huijbregts, M. A. J.; et al. Int J Life Cycle Assess 2017, 22 (2), 138-147. DOI: 10.1007/s11367-016-1246-y

(2) Bare, J. C. Journal of Industrial Ecology 2002, 6 (3‐4), 49-78.

(3) United Nations Development Programme. Human Development Report 2023/2024; United Nations, 2024. DOI: https://doi.org/10.18356/9789213588703.

(4) Dowling, A. W.; et al. Computers & Chemical Engineering 2016, 90. DOI: 10.1016/j.compchemeng.2016.03.034.



3:30pm - 3:50pm

Towards Sustainable Household Organic Waste Management: Modeling and Analysis

Christian Ottini1,2, Gwenola Yannou1,2, Sandra Domenek1,2, Felipe Buendia1,2

1Université Paris-Saclay, INRAE, AgroParisTech, UMR SayFood, 91120 Palaiseau, France; 2Fondation AgroParisTech, Chaire CoPack, 91120 Palaiseau, France

The reduction and recovery of the household organic waste fraction is one of the major challenges for contemporary society. Waste management requires the establishment of one or more strategies that are both economically viable and environmentally sustainable. In 2022, Parisians generated approximately 2.2 million tons of household waste, or 410 kg/capita/year, of which 300 kg/capita/year were residual household waste (RHW)1, a non-recyclable fraction that is typically incinerated or landfilled, resulting in over 1.75 million tons burned for that year alone1. This RHW consists of approximately 30% organic waste, which could be recycled. Following a circular economy path, in 2018 the European Union introduced different obligations through the new Regulation on Packaging and Packaging Waste, replacing the Waste Framework Directive2. As a result, since 2024, separate collection of biowaste has been mandatory across Europe. To optimize and evaluate the environmental impact of this regulatory change, the overall performance of the biowaste treatment system needs to be assessed. The aim of our research is to propose physico-chemical predictive models to enable these assessments.

To build our system simulation, two processes were studied and modelled: i) composting and ii) incineration. Both are sub-systems (represented in their superstructure, Figure 1) that contribute to the Paris waste management system. Due to the lack of an open-access composting model in the literature, and to address common EU objectives, we developed a predictive model to evaluate and simulate the system. The complex 'biowaste' matrix was described as consisting of different sub-fractions: macrocomponents, compostable bags, and inert material resulting from improper sorting. Data collected over several years from 85 zones in Greater Paris, provided by our municipal partner, were used to estimate waste composition.

A predictive model of composting was developed and validated with experimental data. Hydrolysis and bioconversion of the substrate into the final product by different classes of microorganisms (bacteria, actinomycetes, and fungi) were considered. Constraints included microbial growth and death rates, oxygen quantity, temperature, substrate availability, and humidity (Figure 2). For incineration, the modified Dulong equation was used for the estimation of the lower heating value (LHV) of the biowaste using the specialized process engineering software ProSimPlus®. The outcomes of both processes were compared in different municipal configurations, with scenarios exploring the importance of biowaste purity and the impact of mis-sorted materials, with regard to performance in the environmental and techno-economic dimensions of sustainability.
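For orientation, one commonly quoted form of the Dulong correlation is sketched below; the modified Dulong equation used in the study (within ProSimPlus) may use different coefficients, and the waste composition in the example is hypothetical.

```python
# One commonly quoted form of the Dulong correlation, for illustration only;
# the study's "modified Dulong equation" may differ. Elemental fractions are on
# a dry mass basis (wt%), moisture in wt% of the wet waste.
def lhv_dulong(C, H, O, S, N=0.0, moisture=0.0):
    """Rough lower heating value of a waste stream, MJ/kg on a wet basis."""
    hhv_dry = 0.337 * C + 1.419 * (H - O / 8.0) + 0.093 * S + 0.023 * N  # MJ/kg dry
    hhv_wet = hhv_dry * (1.0 - moisture / 100.0)
    # Latent heat of the water formed from H plus the free moisture (approx. 2.442 MJ/kg)
    water = 9.0 * H / 100.0 * (1.0 - moisture / 100.0) + moisture / 100.0
    return hhv_wet - 2.442 * water

# Example with a hypothetical biowaste composition
print(lhv_dulong(C=45.0, H=6.0, O=35.0, S=0.2, N=2.5, moisture=60.0))
```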

REFERENCES

1 Rapport d'activité Syctom, 2022

https://www.syctom-paris.fr/fileadmin/user_upload/Syctom_RA_2022.pdf

2 Packaging and Packaging Waste Regulation, 2024

https://environment.ec.europa.eu/topics/waste-and-recycling/packaging-waste_en



3:50pm - 4:10pm

Assessing the Environmental Impact of Global Hydrogen Supply through the Lens of Planetary Boundaries

Jesmyl Córdova-Córdova, Carlos Pozo

Universitat Rovira i Virgili, Spain

As global decarbonization efforts accelerate, hydrogen is increasingly recognized as a crucial energy carrier for the transition to a low-carbon future. However, the environmental assessment of hydrogen supply chains, including both production and transportation, within the framework of planetary boundaries (PB), remains insufficiently explored on a global scale.

This study addresses this gap by evaluating the environmental impacts of 800 potential hydrogen supply chains, combining 32 production methods and 25 transportation options. Production methods include steam reforming, water electrolysis with bioenergy and carbon capture and storage (WE-BECCS), and aluminum combustion, while transportation methods cover options like compressed hydrogen, liquid hydrogen, and Liquid Organic Hydrogen Carriers (LOHCs) such as ammonia and methanol. Each alternative is evaluated in six regions before results are aggregated at the global level, thus capturing the influence of regional factors while providing a global perspective on hydrogen’s environmental performance.

Using the PB framework in conjunction with Life Cycle Assessment, the study evaluates the global impacts of these potential hydrogen supply chains on nine Earth-system processes. Key findings reveal that current hydrogen demand contributes significantly to several planetary boundaries. Notably, on-site hydrogen production accounts for approximately 22% of total global impacts on CO2 concentration, primarily driven by steam reforming of natural gas and coal gasification. More importantly, if hydrogen demand continues to rise, the current decentralized production might shift to a centralized model. Considering that transporting compressed hydrogen via pipelines over long distances increases energy consumption and greenhouse gas emissions by 15-25% compared to localized production, this shift could further exacerbate impacts on climate change and atmospheric aerosol loading boundaries.

Among the 32 production methods, WE-BECCS emerged as one of the most promising hydrogen alternatives, reducing CO2 emissions by up to 90% compared to conventional steam reforming. Despite its benefits, it also introduces trade-offs, using 20-30% more land than other alternatives, which impacts the land-system change boundary.

Regional discrepancies also influence technological preferences. For instance, dark fermentation is a better option than autothermal reforming of biogas in China, while the opposite holds true in the USA. These differences are due to variations in electricity generation and in waste management practices, alongside distinct processes to obtain the raw materials for hydrogen production.

On the transportation side, ammonia and methanol are very promising alternatives if used directly as fuels, with contributions from transport on a par with those from compressed or liquid hydrogen, but at a lower cost. However, if hydrogen needs to be regenerated at the destination, their impacts increase by 15-48%, indicating that this step is the bottleneck for these pathways.

Given these findings, hydrogen policy must not only focus on production but also address the environmental impacts of transportation, as they could offset production gains. This study highlights the wide range of green hydrogen production alternatives and emphasizes the importance of exploiting domestic resources while applying circular economy principles to meet future hydrogen demand. This approach would allow for maintaining a decentralized production model, diversifying methods, and reducing the risks associated with long-distance transportation.



4:10pm - 4:30pm

Engineering the Final Frontier: The Role of Chemical and Process Systems Engineering in Space Exploration

Edwin Zondervan

University of Twente, Netherlands, The

Space exploration demands the integration of multiple scientific and engineering disciplines, with chemical engineering and process systems engineering playing pivotal roles. This paper examines the critical contributions of these disciplines to propulsion systems, life support mechanisms, and advanced materials essential for space missions. Recent advancements in chemical propellants and rocket fuels, illustrated by SpaceX and NASA missions, have significantly improved propulsion efficiency and safety. Chemical engineering is vital in developing air purification, water recycling, and bioregenerative life support systems, ensuring astronaut survival and mission sustainability. Additionally, creating heat-resistant, lightweight materials enhances spacecraft durability under extreme space conditions. Process systems engineering (PSE) complements these efforts by integrating, simulating, and controlling complex systems. PSE ensures reliable subsystem integration and uses predictive analytics and advanced modeling for mission planning and risk mitigation. Automation and control systems are essential for maintaining operations with minimal human intervention. The synergy between these fields is evident in in-situ resource utilization (ISRU) technologies, which extract and process local resources on extraterrestrial bodies, reducing reliance on Earth supplies and enhancing mission viability. Despite significant progress, challenges remain. Addressing harsh space environments, ensuring long-duration mission sustainability, and advancing energy sources and materials are ongoing research areas. This presentation underscores the indispensable roles of chemical and process systems engineering in overcoming space exploration challenges.

 
2:30pm - 4:30pmT8: CAPE Education and Knowledge Transfer - Session 3
Location: Zone 3 - Aula E036
Chair: Seyed Soheil Mansouri
 
2:30pm - 2:50pm

Teaching computational tools in chemical engineering curriculum in preparation for the capstone design project

Dina Kamel, Aikaterini Tsatse, Sakiru Badmos

Department of Chemical Engineering, University College London, Torrington Place, WC1E 7JE London, UK

UCL Chemical Engineering employs a wide range of teaching strategies to ensure that graduates are digitally literate and have the required knowledge of how to use relevant computational tools (Tsatse and Sorensen, 2023). The curriculum consists of several modules which have a significant computational element, either as part of individual assignments, or as part of group work (Tsatse and Sorensen, 2021). These modules utilize various computational tools and software, including but not limited to gPROMS, AspenPlus, and GAMS.

Starting from Year 1, students use GAMS to solve simple problems such as mass balances, and gPROMS for simple reactor problems including lumped and distributed models. In Year 2, students start using AspenPlus to a) simulate more complex chemical units, b) interpret the behavior and results observed and c) discuss and justify any differences observed between the experimental data and computational results. In addition, they learn how to use gPROMS to model a single distillation column tray and to solve more complex reaction engineering problems, whilst they need to consider the implications of proper initialization procedures, the challenges of incorporating recycle streams, heat integration etc., which gradually expands the students’ knowledge of how Process Systems Engineering (PSE) relates to their studies. Furthermore, in addition to the traditional taught modules, the program includes a number of problem-based activities, a few of which are typically related to Process Systems Engineering, such as the Year 2 IEP Scenarios (Tsatse and Sorensen, 2021). This is an excellent opportunity for them to apply their knowledge from the taught modules, but also to apply their own ideas for the Scenario deliverables.

Moving to Year 3 and their capstone design project, students have acquired the background knowledge to address the deliverables given for the design of a process plant. These deliverables include developing a rigorous model of a complete chemical process using information from literature. They investigate the heat integration possibilities for energy minimization. Moreover, the students work on the detailed design of a specific unit within the plant (reactor or separator) and investigate the optimum conditions, internals and sizing using a comprehensive parametric study.

This work outlines the rationale and strategies for delivering modules with significant computational requirements, and how they are coordinated across the curriculum to prepare students for the third year design project and future professional challenges. It demonstrates how complex process systems engineering (PSE) concepts are introduced through various modules, with a focus on supporting student learning, addressing resource challenges, and incorporating feedback for continuous improvement. The approach ensures that students not only grasp PSE tools but also develop critical engineering thinking, enabling them to excel in these challenges and often exceed expectations.

References

Tsatse, A. and Sorensen, E. (2021) Reflections on the development of scenario and problem-based chemical engineering projects. Computer Aided Chemical Engineering 50, 2033-2038

Tsatse, A. and Sorensen, E. (2023). Teaching strategies for the effective use of computational tools within Chemical Engineering curriculum. Computer Aided Chemical Engineering 52, 3501-3506



2:50pm - 3:10pm

Food for thought: Delicious problems for PSE courses

Daniel Lewin

Technion, Israel

Extended Abstract

Active learning is accepted by most educators as the teaching paradigm that has the best potential to yield improved learning outcomes in classroom settings (Bloom, 1984; Crouch and Mazur, 2001). To adopt active learning, some class time needs to be allocated for students to experiment with the application of the newly acquired knowledge, giving them time to make mistakes, correct their errors, try again, and repeat this process as necessary. This cyclic activity is a variant of Kolb’s (1984) ideas about the cognitive processes involved in learning. One way to allocate time would be to adopt the flipped class paradigm (Lewin and Barzilai, 2022 and 2023).

To be effective, this form of learning relies on the availability of sufficient open-ended problem sets that provide students with a rich source of practice problems. These should encompass a range of difficulty: from introductory level to “final exam level” and beyond. To that end, this paper presents a sample of problems with a common theme close to the author’s heart (“the best way to a man’s heart is through his stomach”), mostly intended to be utilized in a course on numerical methods, with one extra problem designed for a course on “good old” process control. So, with “food, glorious food” as a theme, our four example problems are:

  1. Optimal formulation of Willy Wonka’s new chocolate bar. This is an introductory LP problem, which gives students practice in translating a verbal problem description into a mathematical formulation (a minimal sketch of one possible formulation appears after this list).
  2. Optimal scheduling for the “Matrix Pizza,” a bakery providing quality pizzas to a college campus in mid-West USA. This is a more advanced MILP problem including alternative scenarios that need to be accounted for in the optimal scheduling solution.
  3. Optimal frying time for “fried ice cream.” This is a transient heat transfer problem that is defined as an IVPDE that needs to be solved numerically.
  4. Control system design for Uncle Kane’s continuous pancake batter machine. This is a SISO control problem presented as a set of alternative operating modes, each with its own uncertain model description. The student needs to select the operating mode and design a suitable PI controller that meets required specifications.
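Referring back to problem 1 above, a minimal sketch of how it might be posed as an LP, with invented ingredients, costs, and specification limits (illustrative only, not the actual assignment data):

```python
# Toy LP for a chocolate formulation: choose the mass fraction of each
# ingredient to minimise cost subject to made-up composition limits.
from scipy.optimize import linprog

ingredients = ["cocoa mass", "cocoa butter", "sugar", "milk powder"]
cost = [6.0, 9.0, 1.0, 3.0]            # $/kg (hypothetical)
fat  = [0.55, 1.00, 0.00, 0.26]        # kg fat per kg ingredient (hypothetical)
cocoa_solids = [1.0, 1.0, 0.0, 0.0]

# minimise cost subject to: fractions sum to 1, fat between 30% and 40%,
# cocoa solids at least 35% (all made-up specification limits)
res = linprog(
    c=cost,
    A_ub=[fat, [-f for f in fat], [-c for c in cocoa_solids]],
    b_ub=[0.40, -0.30, -0.35],
    A_eq=[[1.0, 1.0, 1.0, 1.0]],
    b_eq=[1.0],
    bounds=[(0, 1)] * 4,
)
print(dict(zip(ingredients, res.x.round(3))), round(res.fun, 2))
```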

References

Bloom, B. S. (1984). “The 2-sigma problem: The search for methods of instruction as effective as one-to-one tutoring.” Educational Researcher, 13(6): 4-16.

Crouch, C. H. and E. Mazur (2001). “Peer instruction: Ten years of experience and results,” American Journal of Physics, 69 (9): 970-977.

Kolb, D. (1984). Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Englewood Cliffs, NJ.

Lewin, D. R. and A. Barzilai (2022). “The Flip Side of Teaching Process Design and Process Control to Chemical Engineering Undergraduates – and Completely Online to Boot,” Education for Chemical Engineers, 39, 44-57.

Lewin, D. R. and A. Barzilai (2023). “A Hybrid-Flipped Course in Numerical Methods for Chemical Engineers,” Comput. Chem. Eng., 172, 108167.



3:10pm - 3:30pm

Challenges for modelling in chemical engineering education in the Netherlands

Ana Somoza-Tornos1, Meik Franke2, Cees Haringa3, Anton A. Kiss1, Farzad Mousazadeh1, Leyla Özkan1,4, Antoon ten Kate5, J. Ruud van Ommen1, Edwin Zondervan2, Johan Grievink1

1Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, van der Maasweg 9, 2629HZ, Delft, The Netherlands; 2Department of Chemical Engineering, Faculty of Science and Technology, University of Twente, Meander, kamer 216, Postbus 217, 7500 AE, Enschede, The Netherlands; 3Department of Biotechnology, Faculty of Applied Sciences, Delft University of Technology, van der Maasweg 9, 2629HZ, Delft, The Netherlands; 4Electrical Engineering Department, Eindhoven University of Technology, Eindhoven, 5612 AP, The Netherlands; 5Freelancer

This past October 16th, we organized the 1st Workshop on Modelling in Chemical Engineering Education in the Netherlands with the goal of assessing the current state of modelling training in academic Chemical Engineering (ChemE) programs in the Netherlands. The workshop was co-organized by PSE-NL (https://pse-nl.com/) and the ChemE department at Delft University of Technology (TUDelft).

The workshop followed up on an inquiry sent to the different stakeholders: lecturers in Dutch ChemE programs, directors of education of bachelor and master programs, student associations, and industrial practitioners. Respondents shared their experiences with the different ChemE program cycles, their views on the role of modelling as an evolving technology in the future of ChemE academic programs, and modelling knowledge expectations for ChemE students entering the workforce. The workshop was intended to be an explorative journey across the scales and application domains of ChemE modelling, from molecular models to supply chain management.

During the workshop, we identified issues, gaps and opportunities for enhanced teaching and practicing of modelling in Dutch academic Chemical Engineering programs. The stakeholders shared their experiences and expectations, and lecturers analysed the challenges of teaching modelling at different scales. All input was used to prepare a wish list of shared desired improvements in modelling education. A list of actions to address these challenges will be the target of a follow-up workshop.



3:30pm - 3:50pm

Teaching Digital Twins in Process Control using the Temperature Control Lab

Alexander W Dowling, Daniel J Laky, Madelynn Watson, Molly Dougher, Hailey Lynch, Zhicheng Lu

University of Notre Dame, United States of America

Process control should be one of the most exciting chemical engineering undergraduate courses! This presentation describes transforming "Chemical Process Control" into "Data Analytics, Optimization, and Control" at the University of Notre Dame (second-semester core course in the third undergraduate year). In six hands-on experiments, students practice data-centric modeling and analysis using the Arduino-based Temperature Control Lab (TCLab) hardware. Novel innovations in course content include (1) state-space modeling, (2) optimization using Pyomo, (3) uncertainty quantification, including nonlinear regression and design of experiments, and (4) digital twins.

The semester learning goals are:

  1. Develop mathematical models for dynamical systems from data and first principles using modern statistical methods;

  2. Predict dynamical system performance using numerical methods;

  3. Analyze, implement, tune, and debug feedback controllers using the hands-on laboratory;

  4. Formulate and solve optimization problems for decision-making;

  5. Demonstrate mastery of at least two of the above skills in an open-ended group project.

The semester topics are organized into three parts, as described below.

Part 1: Data-Centric Modeling of Dynamical Systems

Classical process control focuses on frequency-domain analysis. While the frequency domain perspective provides beautiful insights into certain aspects of controls (e.g., time delays and responses to periodic inputs), it requires dedicating significant time teaching Laplace transformations. Instead, we emphasize state-space modeling, which naturally complements the (partial) differential (algebraic) equation models taught in transport, kinetics, and thermodynamics. As prerequisites, our students have completed five mathematics courses (Calculus I, II, III, linear algebra and linear ODEs, differential equations) and a numerical methods and data analysis course. Using the TCLab, we build upon this foundation using numerical analysis to perform step tests and nonlinear regression to estimate ODE model parameters. Assessments include:

  • Homework 1 reviews Python programming and statistical/computational methods.

  • Lab 1 fits a first-order linear model to step-test data from the TCLab (a sketch of such a fit follows this list).

  • Lab 3 compares the quality of fit for one and two-component linear models.
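Referring to Lab 1 above, the sketch below shows the kind of first-order step-test fit involved, using synthetic data in place of real TCLab measurements; the gain and time-constant values are invented for illustration.

```python
# Sketch of a Lab 1-style step-test fit (synthetic data, not TCLab measurements):
# estimate gain K and time constant tau of T(t) = T0 + K*u*(1 - exp(-t/tau)).
import numpy as np
from scipy.optimize import curve_fit

u, T0 = 50.0, 21.0                       # heater step (% power) and ambient temperature

def first_order(t, K, tau):
    return T0 + K * u * (1.0 - np.exp(-t / tau))

# Hypothetical measurements: time [s] and temperature [deg C] with noise
t_data = np.arange(0, 600, 30)
rng = np.random.default_rng(1)
T_data = first_order(t_data, K=0.65, tau=150.0) + rng.normal(0, 0.3, t_data.size)

(K_hat, tau_hat), _ = curve_fit(first_order, t_data, T_data, p0=[0.5, 100.0])
print(f"gain K ~ {K_hat:.3f} degC/%   time constant tau ~ {tau_hat:.0f} s")
```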

Part 2: Feedback Control

Next, we introduce feedback control, motivated by various applications. Using the TCLab, we implement and compare control strategies to maintain time-varying temperature setpoints to temper dark chocolate. We emphasize model-based design, developing dynamic models for the TCLab and control system, and examining how changing the control gains impacts the eigenvalues. Assessments include:

  • Lab 2 explores relay (on/off) control.

  • Lab 4 explores proportional-integral (PI) control.

Part 3: Computational Optimization

Finally, we introduce computational optimization in Pyomo using production planning and formulation optimization problems such as gasoline blending. These problems provide a foundation for dynamic optimization problems with the TCLab, including optimal control, state estimation, and parameter estimation. Assessments in Part 3 include:

  • Homework 2 introduces optimization modeling in Pyomo, emphasizing business analytics problems.

  • Lab 5 explores Pyomo-based open-loop optimization, state estimation, and parameter estimation for the TCLab.

  • Lab 6 implements closed-loop model predictive control (MPC) and compares performance to relay, PI, and open-loop optimal controls.

Lectures conclude with an exam (~week 11). During the last four weeks, students focus on open-ended team projects.

Material for this course is available online (https://ndcbe.github.io/controls/Readme.html). Prof. Jeffery Kantor (1954-2023) led many innovations in this course.



3:50pm - 4:10pm

An integrated VR/MR and flipped classroom concept for enhanced chemical and biochemical engineering education

Marcos Fallanza1, Antonio Dominguez-Ramos1, Seyed Soheil Mansouri2

1University of Cantabria, Spain; 2Technical University of Denmark, Denmark

The integration of mixed reality (MR) and virtual reality (VR) technologies into chemical engineering education offers promising avenues for enhancing student engagement, intuition, and comprehension of practical concepts. However, there is an existing risk that MR and VR might supplant the content of chemical engineering courses. These technologies should serve as augmentative tools that enhance, rather than replace, existing teaching methodologies. Thus, they need to be adapted with a human-in-the-loop concept and grounded in social learning theory principles.

Current implementations of MR/VR are often isolated within single topics and lack interoperability with the broader curriculum, resulting in partial educational experiences with ill-defined learning designs that fail to leverage the interconnected nature of chemical engineering disciplines. This approach impedes the development of a cohesive understanding of topics ranging from chemical reactor design to heat transfer and limits the potential for integrated learning.

The effective incorporation of MR/VR necessitates the seamless integration of existing educational materials, including presentations, lecture videos, and textual resources. By embedding and blending MR/VR experiences within the variety of existing pedagogical frameworks, chemical engineering educators can create enriched learning environments that cater to diverse cognitive preferences without discarding proven instructional methodologies.

In our view, a significant challenge lies in bridging the gap between a “low-effort” integration of MR and VR technologies and well-established teaching practices. Often, these MR and VR technologies are introduced as engaging but superficial, low-value-added demonstrations that lack alignment with specific learning outcomes and present unbalanced cost-benefit implications for educational institutions. To transcend these limitations, MR and VR must be deliberately deployed to facilitate deep conceptual understanding and practical skill development, rather than serving as mere visual spectacles. The educator’s role is more important than ever as a “human-in-the-loop,” bridging the gap so that learning outcomes can be effectively achieved.

We propose an integrated framework for designing a learning approach that combines MR/VR technologies with flipped classroom models in chemical engineering education. This framework emphasizes pre-class exposure to traditional content, enabling students to acquire foundational knowledge through established resources. Pre-, post-, and in-class sessions leverage MR/VR to provide immersive, interactive experiences that reinforce and contextualize theoretical concepts. Extending this approach throughout the undergraduate or master's curriculum facilitates a consolidated practice, which in turn meets current student expectations. This holistic educational strategy may align better with industry expectations, preparing students to navigate the multifaceted demands of the professional market.

The integration of MR/VR technologies within a flipped classroom paradigm can enhance learning outcomes by providing experiential learning opportunities that complement, rather than partially displace, traditional content delivery. The educator’s role is more crucial than ever due to the need to integrate and blend new tools into the chemical engineering course mix with a clear learning design that aligns with learning outcomes. By focusing on the alignment of these technologies with curricular objectives, we can cultivate chemical engineering professionals equipped with both the theoretical acumen and practical skills necessary to excel in a technologically advancing field such as chemical engineering.



4:10pm - 4:30pm

Integrated Project in the Master of Chemical Engineering and Materials Science at the University of Liège

Marie-Noelle Dumont, Marc Philippart de Foy, Grégoire Léonard

université de Liège, Belgium

The Integrated Project for the 2024-2025 academic year in the Master of Chemical Engineering and Materials Science at the University of Liège aims to consolidate technical knowledge and promote the acquisition of soft skills by integrating and linking chemical engineering disciplines that are usually taught separately. The key learning outcomes include making connections between different chemical engineering classes, consolidating technical knowledge, developing critical thinking, addressing complex and multidisciplinary topics in the chemical and process industry, and increasing awareness of the role of science and technology in society.

The project focuses on developing technical skills such as project management, meeting deadlines, working in large groups, and communicating effectively in English, both in written and oral forms. The final deliverable is a 15-page article and a presentation.

For the 2024-2025 academic year, the project topic is the synthesis of Vinyl Chloride Monomer (VCM). The project is divided into several parts, each concluding with a final report and presentation:

  1. Part 1: Individual work on mass balances and literature reviews, followed by group consolidation of results and project planning. This phase includes presenting the mass balance, literature review results, and initial process basics.
  2. Part 2: Detailed models for thermodynamics, process techno-economics, kinetics, reactors, separation, and unit operations. Students work in groups and sub-groups to study the chemical system and critical elements in detail.
  3. Part 3: Sensitivity studies to assess key process parameters and evaluate their impact on unit operation results. Students challenge assumptions and discuss model validation.
  4. Part 4: Integration of the process into one model, building a global flowsheet, optimizing its topology, applying optimization and heat integration techniques, and studying process techno-economics and life cycle assessment.
  5. Part 5: Extended literature review and creation of a report for a general audience. Students update the literature review, identify key performance indicators, challenge and validate process assumptions, identify alternative manufacturing pathways and product alternatives, and communicate their findings to a broader audience.

Throughout the project, students will have regular interactions with industry and academic experts, participate in plenary meetings and feedback sessions, and use shared drives for communication and document sharing. The ULiège Soft Skills Team will provide support for group management and soft skills development.

Evaluation of the project includes both technical and soft skills assessments. Each student will receive an individual grade based on group performance and individual contributions. The technical group grade is based on reports and the final presentation, while the technical individual grade is based on Part 1 reports and written assessments. Soft skills evaluation includes group and individual levels, with self and peer-assessment.

This integrated project has been running for more than five years at the University of Liège, and an annual feedback meeting takes place at the end of the project. The general feeling expressed by students is that, although the project requires a lot of effort, it is very rich and instructive and is seen as excellent preparation for their future careers.

 
2:30pm - 4:30pmT9: PSE4Food and Biochemical - Session 6
Location: Zone 3 - Room E030
 
2:30pm - 2:50pm

Application of pqEDMD to modelling and control of bioprocesses

Camilo Garcia-Tenorio, Guilherme Araujo Pimentel, Laurent Dewasme, Alain Vande Wouwer

University of Mons, Belgium

Extended Dynamic Mode Decomposition (EDMD) and its variant, the pqEDMD, which uses a p-q-quasi norm reduction of the polynomial basis functions, are appealing tools to derive linear operators approximating the dynamic behavior of nonlinear systems. This study highlights how this methodology can be applied to data-driven modeling and control of bioprocesses by discussing the selection of several ingredients of the method, such as polynomial basis, order, data sampling, and preparation for training and testing, and ultimately, the exploitation of the model in linear model predictive control.
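For orientation, the following minimal Python sketch illustrates the core EDMD idea on a toy system: state snapshots are lifted with a polynomial dictionary and a linear (Koopman-like) operator is fitted by least squares, yielding the lifted linear model that a linear MPC layer can then exploit. The toy dynamics, dictionary degree and variable names are illustrative assumptions, and the p-q-quasi-norm basis reduction that distinguishes pqEDMD is not reproduced here.

```python
# Minimal EDMD-style sketch (illustration only; the authors' pqEDMD additionally
# prunes the polynomial basis with a p-q-quasi-norm criterion, not reproduced here).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Toy nonlinear discrete-time system (hypothetical, for illustration):
# x1+ = 0.9*x1,  x2+ = 0.5*x2 + x1**2  -- exactly linear in the lifted space {1, x1, x2, x1^2, ...}
def step(x):
    return np.array([0.9 * x[0], 0.5 * x[1] + x[0] ** 2])

X_now = rng.uniform(-1, 1, size=(200, 2))            # snapshot states
X_next = np.array([step(x) for x in X_now])          # one-step successors

# Polynomial dictionary of observables
lift = PolynomialFeatures(degree=2, include_bias=True)
Psi_now = lift.fit_transform(X_now)
Psi_next = lift.transform(X_next)

# Least-squares approximation of the Koopman operator: Psi_now @ K ≈ Psi_next
K, *_ = np.linalg.lstsq(Psi_now, Psi_next, rcond=None)

# One-step prediction in the original coordinates (columns 1:3 of the lifting are x1, x2)
x_pred = (Psi_now @ K)[:, 1:3]
print("one-step RMSE:", np.sqrt(np.mean((x_pred - X_next) ** 2)))
```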



2:50pm - 3:10pm

Incorporating Process Knowledge into Latent-Variable Models: An Application to Root Cause Analysis in Bioprocesses

Tobias Overgaard1,2, Maria-Ona Bertran3, John Bagterp Jørgensen1, Bo Friis Nielsen1

1Technical University of Denmark, Department of Applied Mathematics and Computer Science, Matematiktorvet, Building 303B, DK-2800 Kgs. Lyngby, Denmark; 2Novo Nordisk A/S, PS API Manufacturing, Science & Technology, Smørmosevej 17‐19, DK‐2880 Bagsværd, Denmark; 3Novo Nordisk A/S, PS API Expansions, Hallas Alle 1, DK-4400 Kalundborg, Denmark

Troubleshooting performance variations in batch bioprocesses at a plant-wide level involves identifying the process step responsible for these variations and analyzing the root cause. While root cause analysis is well-documented, pinpointing the specific process step responsible for variations is less explored due to complexities like serial-parallel unit arrangements [1]. In commercial production, measured process variables may not reveal the root cause, as tightly controlled variables show minimal variation, thus hiding critical information [2]. Therefore, incorporating developmental data from smaller-scale experiments is crucial for identifying the cause of variation.

We propose a structured methodology for troubleshooting plant-wide batch bioprocesses in multi-source data environments using latent-variable techniques. Initially, we select a process step where unexplained performance variations manifest, termed the "step of manifestation". Next, a sequential multi-block partial least squares (SMB-PLS) model spanning the process flow diagram up to the step of manifestation is built [3]. This model aims to isolate a potential step where the variation originates, termed the "step of origin". The SMB-PLS model captures connectivity information from a multi-step process by linking data blocks from each step sequentially and uses orthogonalization to separate correlated information between blocks, retaining unique information for each block [4]. To handle parallel units, data blocks are arranged using low-level fusion, where data blocks are concatenated and analyzed as a single block.
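As a rough illustration of the sequential multi-block idea (a simplified sketch, not the cited SMB-PLS implementation [3,4]), the code below fits a PLS model between an upstream data block and the response, orthogonalizes the downstream block against the upstream scores so that only its unique information remains, and then fits a second PLS model on that unique part; the data blocks, dimensions and response are synthetic assumptions.

```python
# Simplified sequential multi-block PLS sketch (illustrative only; the cited SMB-PLS
# method involves additional details not reproduced here).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 50
X1 = rng.normal(size=(n, 6))                    # data block of an upstream step (hypothetical)
X2 = 0.4 * X1[:, :3] @ rng.normal(size=(3, 4)) + rng.normal(size=(n, 4))  # downstream block, partly correlated
y = X1[:, 0:1] + 0.5 * X2[:, 1:2] + 0.1 * rng.normal(size=(n, 1))         # performance indicator

# Step 1: PLS between the first (upstream) block and y
pls1 = PLSRegression(n_components=2)
pls1.fit(X1, y)
T1 = pls1.transform(X1)                          # scores of block 1

# Step 2: orthogonalize the downstream block against the block-1 scores, so that
# only information unique to block 2 remains (correlated information is credited upstream)
P = T1 @ np.linalg.pinv(T1.T @ T1) @ T1.T        # projection onto the span of T1
X2_unique = X2 - P @ X2

# Step 3: PLS between the unique part of block 2 and the residual of y
y_res = y - pls1.predict(X1)
pls2 = PLSRegression(n_components=2)
pls2.fit(X2_unique, y_res)

# Comparing explained variance per block hints at the "step of origin" of the variation
print("R2 block 1:", pls1.score(X1, y))
print("R2 block 2 (unique part):", pls2.score(X2_unique, y_res))
```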

Once the step of origin is isolated using the SMB-PLS model, an in-depth investigation of the step is performed by incorporating knowledge from small-scale experiments. The aim of this data combination is to unveil internal dynamics and variable interactions that cause the performance variation. The joint-Y PLS (JY-PLS) model is used to incorporate knowledge from different scales, capturing the common variable structure across multiple scales [5].

We apply this multi-step, multi-scale methodology to troubleshoot a commercial batch bioprocess producing an active pharmaceutical ingredient. We find that downstream productivity is limited by unexplained variability during cell culture production. To gain further insights, bioreactor data from small-scale developmental studies are paired with commercial-scale data. The output data include quality attributes related to the final product concentration profile along with various metabolites, and the input data include process variables like temperature, pH, pO2, dilution rate, and raw material components such as seed inocula and glucose.

Given the data-driven nature of the methodology, validation of the process improvements is crucial. Identified effects and hypotheses are discussed between process specialists and data scientists, which has been key to obtaining valuable insights. Furthermore, the model's adherence to the flowsheet design and system scale enhances transparency, leading to effective collaboration between process experts and data scientists. In collaboration, various process variable interactions that impact cell culture performance are identified.

References

1. F. Zuecco, et al., Processes 9 1074 (2021).
2. T. Kourti, Crit. Rev. Anal. Chem. 36 257 (2006).
3. J. Lauzon-Gauthier, et al., Chemom. Intell. Lab. Syst. 101 72 (2018).
4. Q. Zhu, et al., Chemom. Intell. Lab. Syst. 252 105192 (2024).
5. S. García-Muñoz, et al., Chemom. Intell. Lab. Syst. 79 101 (2005).



3:10pm - 3:30pm

Future Forecasting of Dissolved Oxygen Concentration in Wastewater Treatment Plants using Machine Learning Techniques

Sena Kurban1, Aslı Yasmal1, Ocan Şahin1, Aycan Sapmaz1, Mustafa Oktay Samur1, Gizem Kuşoğlu Kaya1, Gözde Akkoç2, Mahmut Kutay Atlar2

1Turkish Petroleum Refinery, 41780, Körfez, Kocaeli, Turkey; 2Turkish Petroleum Refinery, 71480, Merkez, Kırıkkale, Turkey

Water is essential to life, yet its quality can be greatly degraded by pollution, which undermines the sustainable and effective use of water resources [1]. The goals of the wastewater treatment plant (WWTP) process are to reuse water and to prevent pollution of water sources [2]. In an oil refinery, the WWTP consists of three steps. In the pretreatment stage, mechanical and physical techniques such as gravity separators, dissolved air flotation, filtration, and sedimentation are used to remove oil and suspended solids from the water. Secondary and tertiary treatments follow this stage to eliminate organic materials and meet the required discharge limits. By the time tertiary treatment, such as biological treatment, is completed, more than 99 percent of the toxic and harmful pollutants have been removed [3].

Predicting water quality is crucial for managing and planning the water environment as well as for preventing and controlling water pollution. Dissolved oxygen (DO) is one such crucial water quality indicator [1]. This study focuses on forecasting dissolved oxygen levels in the activated sludge tanks of a biological treatment unit at an oil refinery’s WWTP. Maintaining a proper oxygen concentration is crucial for microbial activity in the sludge, as insufficient oxygen can disrupt the biological breakdown of pollutants. The study’s aim is to develop predictive models that identify operational risks early on, allowing for better efficiency in the treatment process and optimizing resources such as chemicals, bacterial cultures, and aeration systems. Another key goal is to provide operators and engineers with early warnings about potential problems in the biological treatment stage, reducing reliance on laboratory tests. This proactive approach ensures that optimal oxygen levels are maintained for the bacteria, leading to increased operational efficiency, reduced costs, and enhanced water quality, thereby supporting the sustainability of wastewater treatment.

Influenced by influent flow rates, contaminant levels, chemical conditions, and external factors such as weather and wastewater composition, wastewater treatment processes are complex, non-linear systems that are challenging to model accurately [4]. To tackle these issues, thorough data analysis and advanced modeling techniques are essential. In this study, machine learning models including a Recurrent Neural Network (RNN), a Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) were applied to a two-year real-time dataset, and their performance was evaluated on 8-hour windows of 23 features. The models were trained, validated, and tested on actual process data. Additionally, principal component analysis (PCA) was employed to clarify relationships in the data.
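A minimal PyTorch sketch of the kind of GRU-based forecaster described here is given below; the window length, feature count, and synthetic data are illustrative assumptions and do not correspond to the refinery dataset.

```python
# Minimal GRU forecasting sketch in PyTorch (illustrative; dataset, window length
# and feature count are hypothetical stand-ins for the refinery data described above).
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    def __init__(self, n_features=23, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # predicts dissolved oxygen at the horizon

    def forward(self, x):                          # x: (batch, window, n_features)
        _, h = self.gru(x)                         # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])                    # (batch, 1)

# Synthetic example: windows of 48 samples (e.g. 8 h at 10-min resolution, an assumption)
X = torch.randn(256, 48, 23)
y = torch.randn(256, 1)

model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                             # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final MSE on synthetic data:", loss.item())
```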

Overall, the results show that GRU-based soft sensors can accurately forecast oxygen concentration in the activated sludge ponds used for biological wastewater treatment, with a good performance of R² of 0.7, MSE of 0.01, and MAE of 0.07. Although the model provides effective forecasts for system control, further improvements could be realized by incorporating climate data, broadening the feature set, or optimizing hyperparameters to improve accuracy in this complex, nonlinear environment.



3:30pm - 3:50pm

Exploring Design Space and Optimization of nutrient factors for maximizing lipid production in Metchnikowia pulcherrima with Design of Experiments

Nichakorn Fungprasertkul, James Winterburn, Peter Martin

The University of Manchester, United Kingdom

Unsaturated fatty acids should be a primary source of dietary fat for humans (WHO, 2022) because they can decrease the risk of heart disease by lowering cholesterol levels (NHS UK, 2023). However, as the global population grows, demand for food and for cropland increases (FAO, 2020). Oleaginous yeasts are promising alternative microorganisms for commercial lipid production due to their high volumetric productivity, and Metschnikowia pulcherrima is an under-explored oleaginous yeast with potential as a lipid producer (Abeln, 2021). Nutrient factors are critical to achieving high-productivity lipid production. A sensitivity test identified the carbon and nitrogen sources in nitrogen-limited broth (NLB), namely glucose, yeast extract and ammonium sulphate, as important factors for lipid production in M. pulcherrima. Response Surface Methodology (RSM), using sets of 15 experimental runs in a three-factor, three-level Box-Behnken Design (BBD), was implemented to explore the carbon and nitrogen source design space. Quadratic surfaces were least-squares fitted and used to identify regions of optimal lipid yield. Multiple sets of runs were conducted, with the parameter ranges progressively adapted, until a clear optimum was identified. The highest total lipid production occurred in the low carbon concentration range (2.27-21.5 g/L), which suggests a more productive process compared with NLB medium (30.4 g/L of carbon). The optimal carbon concentration was 14.8 g/L, whereas the dependence on nitrogen was not found to be significant. After validation, the yield at the optimal point (YP/S) was 2.3 times higher than with NLB medium, because glucose depletion at the end of fermentation (72-104 h) led to a large increase in total lipid production at the optimal point (94.5%), about 60 percentage points higher than with NLB medium (34.5%).
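For illustration, the sketch below constructs a three-factor Box-Behnken design in coded units (12 edge runs plus 3 centre points, i.e. 15 runs) and least-squares fits a full quadratic response surface; the synthetic response stands in for the measured lipid yields, and the optimum it returns is purely illustrative.

```python
# Sketch: three-factor Box-Behnken design (coded units) and quadratic surface fit
# (illustrative; responses are synthetic, not the reported lipid yields).
import numpy as np
from itertools import combinations

# Box-Behnken design: each pair of factors at +/-1 with the third at 0, plus centre points
runs = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            x = [0, 0, 0]
            x[i], x[j] = a, b
            runs.append(x)
runs += [[0, 0, 0]] * 3                      # centre points -> 15 runs in total
D = np.array(runs, dtype=float)

# Synthetic response with a maximum inside the region (stand-in for lipid yield)
rng = np.random.default_rng(2)
y = 5 - 1.5 * (D[:, 0] - 0.3) ** 2 - 0.8 * D[:, 1] ** 2 - 0.2 * D[:, 2] ** 2 + 0.05 * rng.normal(size=len(D))

# Full quadratic model: intercept, linear, two-factor interactions, squared terms
def quad_terms(D):
    cols = [np.ones(len(D))] + [D[:, k] for k in range(3)]
    cols += [D[:, i] * D[:, j] for i, j in combinations(range(3), 2)]
    cols += [D[:, k] ** 2 for k in range(3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_terms(D), y, rcond=None)

# Locate the optimum on a coarse grid of coded factor levels
g = np.linspace(-1, 1, 21)
grid = np.array([[a, b, c] for a in g for b in g for c in g])
best = grid[np.argmax(quad_terms(grid) @ beta)]
print("estimated optimum (coded units):", best)
```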

Reference

Abeln, F., Chuck, C.J. The history, state of the art and future prospects for oleaginous yeast research. Microb Cell Fact 20, 221 (2021). https://doi.org/10.1186/s12934-021-01712-1



3:50pm - 4:10pm

Adaptable dividing-wall column design for intensified purification of butanediols after fermentation

Tamara Jankovic, Siddhant Sharma, Adrie J.J. Straathof, Anton A. Kiss

Delft University of Technology, The Netherlands

2,3-, 1,4- and 1,3-butanediol (BDOs) are valuable platform chemicals traditionally produced through petrochemical routes. Alternatively, there is growing interest in synthesizing these chemicals through fermentation processes. Given the substantial research efforts dedicated to developing genetically modified microorganisms for BDO production from renewable sources, fermentation has great potential to become a sustainable alternative to fossil fuel-based processes. Nonetheless, several challenges remain with the fermentation processes that hinder downstream processing, such as low product titers, the presence of microorganisms, the formation of fermentation by-products, etc. Additionally, BDOs are high-boiling components (180 – 228 ˚C) which may lead to energy-intensive recovery. Consequently, the costs associated with the purification process may substantially increase the total production costs. Despite this limitation, there is still potential for improvement in the downstream processing of the BDOs. Thus, the main objective of this original research is to develop a state-of-the-art large-scale downstream processing design (broth processing capacity of 160 ktonne/y with a production capacity of 11 – 15 ktonne/y) that may be easily adapted to purify 2,3- (case 1), 1,4- (case 2) or 1,3-BDO (case 3) after fermentation and conventional filtration and ion-exchange steps.

In all three cases, Aspen Plus was employed as a computer-aided process engineering (CAPE) tool to design BDO purification processes, whereby rigorous simulations were performed for all process operations. Data from the published literature was used to obtain realistic compositions of fermentation broths. Generally, concentrations of BDO and water are 7-9 and 87-91 wt%, respectively, while both light and heavy impurities are present. The developed process design includes an initial preconcentration step in a heat pump-assisted vacuum distillation column to remove most of the water and some light impurities (e.g. ethanol). This step allows a significant reduction in total energy requirements and equipment size in the final purification step. The heart of the developed process is an integrated dividing-wall column that effectively purifies high-purity BDO (>99.4 wt% in all cases) from the remaining light (formic and acetic acids, etc.) and heavy impurities (succinic and lactic acids, glucose, etc.).

Finally, a single process design was proven to cost-effectively (0.252 – 0.291 $/kg BDO) and energy-efficiently (1.667 – 2.075 kWhth/kg BDO) recover over 99% of BDO from different fermentation processes. Implementation of the advanced process intensification and heat integration techniques reduced energy requirements by over 33% compared to the existing literature. Furthermore, the adaptable purification process offers flexibility in developing sustainable business models. Lastly, the results of this novel work highlight the importance of using CAPE tools in developing competitive bioprocesses by demonstrating that computer-aided simulations may play a crucial role in advancing sustainable industrial fermentation.

 
2:30pm - 4:30pmT10: PSE4BioMedical and (Bio)Pharma - Session 4
Location: Zone 3 - Room E031
Chair: Boram Gu
Co-chair: Gintaras Reklaitis
 
2:30pm - 2:50pm

Integrating process and demand uncertainty in capacity planning for next-generation pharmaceutical supply chains

Miriam Sarkis1,2, Nilay Shah1,2, Maria M. Papathanasiou1,2

1The Sargent Centre for Process Systems Engineering, Imperial College London, UK; 2Department of Chemical Engineering, Imperial College London, UK

Pharmaceutical capacity planning is crucial to meet product demands from clinical to commercial stages. In recent years, the market boom in gene therapies and the demand for vaccines in pandemic contexts have highlighted the need to shorten scale-up timelines and improve the responsiveness of pharmaceutical supply chains to demand fluctuations and unforeseen events. To this end, the industry has seen an increasing uptake of single-use (SU) equipment to substitute more inflexible multi-use (MU) stainless-steel facilities, allowing for rapid scale-up and scale-out of manufacturing capacity. In this space, investment planning is challenged by the need to make scale-up decisions before processes are fully intensified and process capabilities are known for certain. Furthermore, process uncertainty in the early stages of planning is combined with uncertainty in future demands. In this context, an overestimation of attainable production targets and sub-optimal demand forecasting can result in shortages and larger costs.

In this work, we consider the integration of early-stage process uncertainty and demand uncertainty in the investment planning problem and account for the different timescales of uncertainty. We develop a planning tool integrating process uncertainty via adaptive robust optimisation (ARO) and demand uncertainty via stochastic programming. Our framework consists of a quantification step, in which process uncertainty and cost-related inputs to the optimisation are quantified, followed by an optimisation step. Given a set of demand scenarios and process realisations along the time horizon based on ARO, the optimisation selects network structures and investments in facilities as first-stage decisions. Production levels at each manufacturing node, transportation flows and shipments are scenario-dependent second-stage decisions. In networks implementing MU equipment, the selection of parallel lines and scale is considered a scenario-independent decision, hence capturing the inflexibility of the equipment and the longer timelines for recourse actions. In SU-based manufacturing, these variables become second-stage decisions and depend on demand realisations.
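To make the two-stage structure concrete, the sketch below solves a toy two-stage stochastic capacity-planning LP with scipy: capacity is the scenario-independent first-stage decision, while production and shortage are scenario-dependent recourse variables. The costs, demand scenarios and probabilities are hypothetical, and the ARO treatment of process uncertainty is not reproduced.

```python
# Minimal two-stage stochastic capacity-planning LP (illustrative sketch with
# hypothetical costs and demand scenarios; the ARO treatment of process
# uncertainty described above is not reproduced here).
import numpy as np
from scipy.optimize import linprog

demands = np.array([80.0, 120.0, 160.0])      # demand scenarios (hypothetical)
probs = np.array([0.3, 0.5, 0.2])
capex, opex, penalty = 10.0, 1.0, 50.0        # per-unit costs (hypothetical)

S = len(demands)
# Variable order: [cap, prod_1..S, short_1..S]
n = 1 + 2 * S
c = np.zeros(n)
c[0] = capex
c[1:1 + S] = probs * opex                      # expected production cost
c[1 + S:] = probs * penalty                    # expected shortage penalty

A_ub, b_ub = [], []
for s in range(S):
    # prod_s <= cap
    row = np.zeros(n); row[1 + s] = 1.0; row[0] = -1.0
    A_ub.append(row); b_ub.append(0.0)
    # prod_s + short_s >= demand_s  ->  -prod_s - short_s <= -demand_s
    row = np.zeros(n); row[1 + s] = -1.0; row[1 + S + s] = -1.0
    A_ub.append(row); b_ub.append(-demands[s])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * n, method="highs")
print("optimal first-stage capacity:", round(res.x[0], 1))
print("scenario production:", np.round(res.x[1:1 + S], 1))
print("scenario shortage:  ", np.round(res.x[1 + S:], 1))
```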

The adoption of the ARO approach leads to conservative decisions in the first stage of the time horizon, with 10-fold larger costs and lower accumulated inventory. Results highlight that SU manufacturing leads to lower expected manufacturing costs and better adaptation after risk-averse decision-making. In contrast, MU results in less flexibility to cater for demands, thus leading to larger expected costs. This highlights that shortening the set-up times for capacity expansion leads to more responsive supply chains. Furthermore, the integration of process uncertainty helps establish more robust initial capacity plans that mitigate shortage risks in the early stages of planning.



2:50pm - 3:10pm

Data-driven modeling of a Continuous Direct Compression Tableting Process using sparse identification

Pau Lapiedra Carrasquer1, Satyajeet S. Bhonsale1, Carlos André Muñoz López2, Kristof Dockx2, Jan F.M. Van Impe1

1KU Leuven, Belgium; 2Janssen Pharmaceutica NV, Belgium

Continuous manufacturing has emerged as a crucial innovation in pharmaceutical tableting production, offering significant advantages in efficiency, scalability, and tablet quality. Understanding the complex dynamics of this process is essential to ensure the quality of the product across the production line. Data-driven modeling offers the opportunity to gain more insight into these types of processes. This study explores the application of the Sparse Identification of Nonlinear Dynamics (SINDy) method to model these dynamics. SINDy is a nonlinear identification technique that can identify the process dynamics in the form of first-order differential equations using only experimental data.

In silico data was generated using a flowsheet model of a Continuous Direct Compression line developed in gPROMS. This approach provided the flexibility to simulate a wide variety of experimental conditions, producing the data needed to train the SINDy model. The mass flow rate of the API feeder was used as the control input, while blend uniformity and content uniformity were defined as the state variables. To incorporate the effects of the mass flow rate, the SINDy with control (SINDYc) algorithm was used. A series of step changes and pulse inputs, along with their corresponding responses were generated to train the model.

An exhaustive exploration of different candidate functions was conducted, and the main hyper-parameter of the model (λ) was fine-tuned to achieve the optimal level of sparsity. Choosing an appropriate data scaling technique was a key step in obtaining good model performance.
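A minimal sketch of SINDy with control, using the open-source pysindy package on synthetic step-response data, is shown below; the toy dynamics, state names and threshold value are assumptions and do not correspond to the gPROMS flowsheet data or the tuned λ reported here.

```python
# Minimal SINDy-with-control sketch using the pysindy package (illustrative;
# synthetic step-response data stands in for the gPROMS flowsheet simulations,
# and the threshold is not the tuned lambda reported by the authors).
import numpy as np
import pysindy as ps

dt, T = 0.1, 400
u = np.zeros(T); u[50:] = 1.0                  # step change in the control input (e.g. API feeder flow)
x = np.zeros((T, 2))                           # two toy states (e.g. blend / content uniformity)
for k in range(T - 1):                         # simple Euler simulation of a linear toy system
    dx1 = -0.5 * x[k, 0] + 0.8 * u[k]
    dx2 = 0.3 * x[k, 0] - 0.4 * x[k, 1]
    x[k + 1] = x[k] + dt * np.array([dx1, dx2])

model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.05),        # lambda-like sparsity threshold
    feature_library=ps.PolynomialLibrary(degree=2),
)
model.fit(x, t=dt, u=u)
model.print()                                  # prints the identified sparse ODEs
```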

The results show that the SINDy method, particularly with careful tuning of hyperparameters and data preprocessing, can effectively capture the key dynamics of a continuous direct compression tableting line. Future work will focus on validating the model with experimental data and investigating the effect of noisy signals.



3:10pm - 3:30pm

Cyber-Physical Systems for Digital Medicines Manufacturing: A Self-Optimising Tableting DataFactory

Mohammad Salehian, Faisal Abbas, Jonathan Goldie, Jonathan Moores, Daniel Markl

Centre for Continuous Manufacturing and Advanced Crystallisation (CMAC), University of Strathclyde, Glasgow, United Kingdom

The pharmaceutical industry is increasingly leveraging digital technologies, such as modelling and optimisation techniques, to enhance the efficiency of drug development processes. However, existing approaches face key limitations: 1) the absence of a comprehensive system of models to predict blend properties and final product attributes based on raw component properties, process conditions, and formulation; and 2) a lack of large-scale optimisation frameworks to achieve desired product quality attributes by combining physical and data-driven models. This study proposes a novel modelling and optimisation framework tailored to develop directly compressed tablets to achieve optimal drug quality while minimizing time and costs.

The hybrid framework integrates mixture and process models, both mechanistic and data-driven, to predict key characteristics like particle size, shape distribution, flowability, tablet porosity, and tensile strength. These models are incorporated into a digital optimisation system that fine-tunes tablet formulation and initial process conditions to meet critical quality attributes (e.g. porosity >15%, tensile strength >2 MPa). The framework's optimisation capabilities are further enhanced through a physics-informed Bayesian optimisation algorithm, which combines experimental data from an automated tablet manufacturing and testing system with physics-based compaction models to optimize process conditions while significantly reducing the number of required experiments.
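The sketch below shows a plain Gaussian-process Bayesian optimisation loop with an expected-improvement acquisition on a hypothetical one-dimensional compaction response; it illustrates only the sequential experiment-selection idea, not the authors' physics-informed variant or the automated rig, and the objective function and parameter range are assumptions.

```python
# Minimal Bayesian optimisation sketch (GP surrogate + expected improvement);
# the objective is a hypothetical compaction response, not the authors'
# physics-informed model or experimental system.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def tensile_strength(p):                       # toy response vs. compaction pressure (assumed)
    return 2.5 * np.exp(-((p - 180.0) / 80.0) ** 2) + 0.05 * np.random.default_rng(int(p)).normal()

rng = np.random.default_rng(0)
X = rng.uniform(50, 350, size=(4, 1))          # initial "experiments"
y = np.array([tensile_strength(p[0]) for p in X])
cand = np.linspace(50, 350, 301).reshape(-1, 1)

for it in range(10):                           # sequential design loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement (maximisation)
    x_next = cand[np.argmax(ei)]
    y_next = tensile_strength(x_next[0])       # would be a robotic experiment in practice
    X = np.vstack([X, x_next]); y = np.append(y, y_next)

print("best pressure found:", float(X[np.argmax(y)][0]), "strength:", float(y.max()))
```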

Incorporating an advanced automated tablet manufacturing and testing system, this framework demonstrates a self-driven, robotics-based approach to conducting experiments. The system is equipped with an automated dosing unit, a bespoke powder transportation unit, and a compaction simulator, enabling precise powder dispensing, tablet production, and subsequent testing of tablet properties (e.g. weight, dimensions, tensile strength). Integrated near-infrared spectroscopy measures blend homogeneity, and a sessile drop system analyzes tablet performance through liquid uptake and swelling kinetics. All processes are digitally and physically integrated, allowing real-time adaptation of process parameters.

The proposed system was validated through several case studies, achieving accurate predictions of new active pharmaceutical ingredients (APIs) and successfully meeting desired quality attributes with up to 60% fewer experiments compared to traditional methods. The high-throughput automated system significantly reduces manual intervention, enhances precision, and mitigates the risk of human error. By integrating data-driven machine learning with physics-based models, the framework enables rapid and efficient process design, representing a transformative advancement in tablet manufacturing and pharmaceutical development.



3:30pm - 3:50pm

Closed-loop data-driven model predictive control for a wet granulation process of continuous pharmaceutical tablet production

Consuelo Del Pilar Vega Zambrano1, Nikolaos A. Diangelakis2, Vassilis M. Charitopoulos1

1Department of Chemical Engineering, The Sargent Centre for Process Systems Engineering, University College London, Torrington Place, London WC1E 7JE, UK; 2School of Chemical and Environmental Engineering, Technical University of Crete, Chania, Crete, GR 73100, Greece

In 2023, the ICH Q13 guideline for the development, implementation, and lifecycle management of continuous manufacturing (CM) was implemented in Europe (ICH, 2023). It promotes quality-by-design (QbD) and quality-by-control (QbC) strategies as well as the appropriate use of mathematical modelling. This urges a harmonised understanding across academia and industry regarding the adoption of interpretable models instead of black-box models, especially when applied in Good Manufacturing Practice (GMP) regulated areas (Altrabsheh et al., 2023). Such models can be obtained employing surrogate reduced-order modelling, which offers an entirely data-driven means to represent highly reliable yet computationally intensive models in a lower-dimensional space (Ierapetritou et al., 2017; Pantelides and Pereira, 2024).

Advancements in data-driven system identification techniques, such as Dynamic Mode Decomposition with control (DMDc), are generating new opportunities for computationally efficient and explainable model development in comparison with complex physics-based models (Schmid, 2022). To this end, we propose a comprehensive model development using DMDc to represent the complex dynamics of CM processes in a lower-dimensional space, disambiguating between the underlying dynamics and actuation effects. Simulation data were collected using a digital twin of an integrated twin-screw granulation and fluidized bed drying process at the Diamond Pilot Plant (DiPP).
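As a minimal illustration of DMDc (not the authors' implementation), the sketch below recovers the state and input matrices of a toy linear system by a single least-squares fit over stacked snapshot and input data; the system matrices are hypothetical stand-ins for the granulation dynamics, and the SVD truncation usually applied for dimensionality reduction is omitted.

```python
# Minimal DMD-with-control (DMDc) sketch: fit X' ≈ A X + B U by least squares
# from snapshot data (synthetic here, standing in for the digital-twin data).
import numpy as np

rng = np.random.default_rng(3)
A_true = np.array([[0.95, 0.02], [0.01, 0.90]])      # hypothetical granulation dynamics
B_true = np.array([[0.10], [0.05]])

T = 300
U = rng.normal(size=(T, 1))                          # excitation of the manipulated input
X = np.zeros((T + 1, 2))
for k in range(T):
    X[k + 1] = A_true @ X[k] + (B_true @ U[k]).ravel()

# Stack [x_k, u_k] row-wise and solve for [A B] in one least-squares problem
Omega = np.hstack([X[:-1], U])                       # (T, n_states + n_inputs)
G, *_ = np.linalg.lstsq(Omega, X[1:], rcond=None)    # Omega @ G ≈ X'
A_hat, B_hat = G[:2].T, G[2:].T

print("A error:", np.max(np.abs(A_hat - A_true)))
print("B error:", np.max(np.abs(B_hat - B_true)))
```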

Our model demonstrates low computational complexity while effectively capturing nonlinear dynamics, with significant improvements observed in the performance metrics (e.g., R² > 0.93 for mean granule size prediction) compared with state-space models obtained with the N4SID algorithm of the MATLAB System Identification Toolbox and with Sparse Identification of Nonlinear Dynamics with control. Finally, we developed a closed-loop workflow that seamlessly connects data exchanges between Python (DMDc), GAMS (MPC optimisation) and gPROMS using the packages gO:Python and GAMSPy, and we evaluate the controller performance through setpoint tracking and disturbance rejection studies. Results indicate high accuracy in real-time monitoring and control of granule size.

This study offers a novel, interpretable control strategy for CM. By integrating DMDc with MPC, we provide a robust framework that aligns with ICH Q13. The results demonstrate the potential for real-time release testing, reduced reliance on end-product testing, and improved process control, supporting the adoption of CM in the pharmaceutical industry.

References

Altrabsheh, E., Heitmann, M., Steinmüller, P., Pastori Vinco, B., 2023. The Road to Explainable AI in GXP-Regulated Areas. ISPE, Pharmaceutical Engineering 43(1).

ICH Q13, 2023. ICH guideline Q13 on continuous manufacturing of drug substances and drug products

Ierapetritou, M., Sebastian Escotet‐Espinoza, M., Singh, R., 2017. Process Simulation and Control for Continuous Pharmaceutical Manufacturing of Solid Drug Products, In: Tekin, F., Schönlau, A. (Eds.), Continuous Manufacturing of Pharmaceuticals. Wiley, pp. 33–105. https://doi.org/10.1002/9781119001348.ch2

Pantelides, C.C., Pereira, F.E., 2024. The future of digital applications in pharmaceutical operations. Curr Opin Chem Eng. https://doi.org/10.1016/j.coche.2024.101038

Schmid, P. J. (2022). Dynamic Mode Decomposition and Its Variants. Annu. Rev. Fluid Mech., 54(1), 225–254. https://doi.org/10.1146/annurev-fluid-030121-015835



3:50pm - 4:10pm

Mechanistic Modelling of Thrombolytic Therapy and Model-based Optimisation of Treatment Protocols

Boram Gu1, Yilin Yang2, Xiao Yun Xu2

1School of Chemical Engineering, Chonnam National University, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea; 2Department of Chemical Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK

Thrombolysis is a medical treatment aimed at dissolving blood clots that obstruct blood vessels, impeding the delivery of oxygen and nutrients. Conditions related to blood clots, such as stroke, heart attacks, and pulmonary embolisms, can be life-threatening. While several drugs are available to treat heart attacks and pulmonary embolisms, alteplase is the only FDA-approved option for thrombolysis in acute ischemic stroke (AIS) [1]. Researchers are currently exploring other thrombolytic drugs as possible alternatives to alteplase.

We have developed mechanistic models that can assess the efficacy and safety of various thrombolytics, including urokinase, pro-urokinase (proUK), alteplase, tenecteplase, and reteplase, for intravenous treatment of AIS [2-4]. These models combine pharmacokinetics and pharmacodynamics with a local fibrinolysis model using one-dimensional convection-diffusion-reaction equations. This approach allows us to predict outcomes such as lysis completion time and risk of intracranial haemorrhage (ICH).
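To indicate the type of model involved, the sketch below solves a single-species one-dimensional convection-diffusion-reaction equation by the method of lines with scipy; the coefficients, boundary values and first-order consumption term are illustrative assumptions and do not reproduce the coupled fibrinolysis kinetics of the cited models.

```python
# Method-of-lines sketch for a 1-D convection-diffusion-reaction equation,
# dc/dt = -v dc/dx + D d2c/dx2 - k*c, as a simplified stand-in for one species
# of the local fibrinolysis model (coefficients and boundary values are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

nx, L = 100, 1.0                      # grid points, clot length (dimensionless)
dx = L / (nx - 1)
v, D, k = 1.0, 1e-3, 0.5              # convection velocity, diffusivity, reaction rate (assumed)
c_in = 1.0                            # inlet (plasma-side) drug concentration

def rhs(t, c):
    # upwind convection + central diffusion - first-order consumption
    c_up = np.concatenate(([c_in], c[:-1]))           # inlet Dirichlet condition
    conv = -v * (c - c_up) / dx
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    diff[0] = D * (c[1] - 2 * c[0] + c_in) / dx**2
    diff[-1] = D * (c[-2] - c[-1]) / dx**2            # zero-gradient outlet
    return conv + diff - k * c

sol = solve_ivp(rhs, (0.0, 2.0), y0=np.zeros(nx), method="BDF", t_eval=[0.5, 1.0, 2.0])
print("penetration depth proxy (c > 0.5) at t = 2:", np.sum(sol.y[:, -1] > 0.5) * dx)
```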

Moreover, studies have shown a synergistic benefit when tissue plasminogen activator (tPA) is combined with pro-urokinase (proUK) in in vitro experiments [5]. Our model has been used to examine the combination of intravenous tPA and m-proUK as a promising treatment for ischemic stroke [6].

When comparing the effectiveness of different drugs in monotherapy, we found that urokinase achieves the fastest clot breakdown but carries the highest ICH risk due to severe depletion of fibrinogen in the bloodstream. Tenecteplase and alteplase have similar thrombolytic efficacy, but tenecteplase offers a lower ICH risk. Reteplase, despite the slowest fibrinolysis rate, maintains fibrinogen levels in systemic plasma during treatment.

For combination therapy, our simulations indicate that the complementary mechanisms of tPA and m-proUK can achieve clot dissolution times comparable to tPA alone while maintaining fibrinogen levels. Varying dose combinations showed that increasing the tPA bolus significantly reduces fibrinogen levels but only moderately improves clot breakdown time. Conversely, higher doses and longer infusion times of m-proUK had a minimal impact on fibrinogen levels but greatly improved clot lysis time.

Future research will focus on optimising treatment protocols by adjusting tPA bolus, m-proUK dosage and infusion rates, as well as exploring additional drug combinations. These adjustments could potentially maximise the therapeutic benefits of both combination and monotherapy for treating ischemic stroke. The full scope of work, from mechanistic modelling to optimisation, will be presented at the conference.

[1] FDA, Center for Drug Evaluation and Research Approval Package for Ivermectin, 1996.

[2] Gu et al., Pharmaceutics, 2019, 11(3), 111

[3] Gu et al., Pharmaceutical Research, 2022, 39(1), 41-56

[4] Yang et al., Pharmaceutics 2023, 15(3), 797

[5] Gurewich, J. Thromb. Thrombolysis, 2015, 40(4), 480-487

[6] Yang et al., Computers in Biology and Medicine, 2024, 171, 108141



4:10pm - 4:30pm

Process analysis of end-to-end continuous pharmaceutical manufacturing using PharmaPy

Mohammad Shahab, Kensaku Matsunami, Zoltan Nagy, Gintaras Reklaitis

Davidson School of Chemical Engineering, Purdue University, USA

Pharmaceutical manufacturing is witnessing a major transition from traditional batch to continuous mode of operation. This is because continuous manufacturing (CM) brings several benefits to the pharmaceutical industry, which include a smaller CM equipment footprint that results in increased controllability and reduced capital cost. Additionally, CM can alleviate the scale-up challenge and reduce the development time. However, there exists a lack of convenient tools for facilitating CM design and development with which the drug substance and drug product unit operations can be readily integrated for the overall evaluation of process and product performance. To that end, the Python-based PharmaPy framework was proposed recently to advance the design, simulation, and analysis of these continuous pharmaceutical processes. However, the initial library of models only addressed upstream drug substance processing.

In this work, new capabilities which include drug product unit operations have been added to the PharmaPy framework that are crucial for the manufacture of final solid oral-dosage products. As a consequence, PharmaPy now enables the end-to-end study and optimization of the effects of the material properties of the drug substance on solid oral dosage products. This is essential for improving product quality and reducing costs in product development and manufacturing. The new capabilities of the PharmaPy platform are demonstrated with process modeling and simulation studies using the sequential-modular approach. The added process design capability includes unit operations such as feeders, blenders, and tablet press that can be integrated with the drug substance unit operations such as reactors, crystallizers, filters, and dryers. The platform allows the development of different mechanistic, data-driven, or hybrid models to study and compare final output to support computational efficiency and model accuracy.

Sensitivity analysis can be performed on the integrated end-to-end simulator to identify the critical input variables (material properties, process conditions, etc.) that influence the product quality. These subsets of input variables are also crucial for the development of control strategies. The analysis lowers the complexity of the model by ranking the significant input variables. Finally, feasibility studies are conducted on the extracted influential input variables to characterize the process design space to achieve desirable output. The accuracy and effectiveness of the feasibility analysis are increased by using a surrogate model technique. The proposed enhanced PharmaPy package can now support decision-making from the early research and development stages through manufacturing.
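As an illustration of this kind of sensitivity screening (using the generic SALib package rather than PharmaPy's own tools, and assuming a recent SALib version), the sketch below computes variance-based Sobol indices for a hypothetical surrogate of a tablet quality attribute; the input names, bounds and surrogate function are assumptions.

```python
# Variance-based sensitivity screening with the SALib package on a hypothetical
# surrogate of the end-to-end flowsheet (illustrative; this is not PharmaPy's
# own sensitivity API, and the inputs/response are stand-ins).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["api_particle_size", "blender_rpm", "compression_force"],   # hypothetical inputs
    "bounds": [[20, 200], [100, 400], [5, 25]],
}

def tablet_quality(x):             # toy surrogate for a critical quality attribute
    d50, rpm, force = x
    return 0.9 - 0.002 * d50 + 0.0004 * rpm + 0.01 * force - 1e-5 * d50 * force

X = saltelli.sample(problem, 256)                  # Saltelli sampling scheme
Y = np.apply_along_axis(tablet_quality, 1, X)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>20s}  first-order {s1:5.2f}  total {st:5.2f}")
```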

 
3:00pm - 4:00pmBrewery visit
Location: On-campus brewery
3:50pm - 4:30pmT4: Model Based optimisation and advanced Control - Session 7
Location: Zone 3 - Room D049
Chair: Nabeel Aboghander
Co-chair: Vasile Mircea Cristea
 
3:50pm - 4:10pm

Cost-effective Process Design and Optimization of Decarbonized Utility Systems Integrated with Renewable Energy and Carbon Capture Systems

Haryn Park1, Joohwa Lee1, Bogdan Dorneanu2, Harvey Arellano-Garcia2, Jin-Kuk Kim1

1Department of Chemical Engineering, Hanyang University, Republic of Korea; 2FG Prozess- und Anlagentechnik, Brandenburgische Technische Universität Cottbus-Senftenberg, Germany

Industrial decarbonization is considered one of the key objectives in the global effort to respond to climate change. According to estimates by the International Energy Agency (IEA) [1], the industrial sector, including the power industry, accounts for a major portion of overall CO2 emissions. To achieve a net-zero industry, reducing the use of fossil fuel-based facilities and replacing them with renewable energy sources must be actively pursued. However, without addressing the intermittent nature of renewable energy sources, a consistently reliable and robust supply of energy to the industrial site is not possible.

Therefore, the integration of renewable energy systems with existing industrial processes, subject to energy storage and main-grid interconnection, should be investigated to improve operational reliability and enhance the energy resilience of the total site. Previous studies [2] were limited in their ability to accurately reflect the flexibility and/or constraints associated with renewable energy production, as the power demand of the utility systems was simply met with electricity imported from external renewable sources.

In this contribution, a novel process design and optimization framework is proposed for industrial utility systems integrated with renewable energy sources. A multi-period approach is adopted to consider variable demand and non-constant availability in renewable energy supply. The model also explores energy integration at the microgrid level, which enables various scenarios for the industrial utility system, including the sales of surplus electricity or steam generated beyond the site demand. Moreover, carbon capture is considered in this work as a viable decarbonization measure, which can be strategically combined with renewable-based electrification.

The optimization model is constructed to evaluate the economic trade-offs of integrating carbon capture, renewable energy, and energy storage. With the proposed approach, design guidelines for the transition of a conventional steady-state utility system to one integrated with renewable energy can be systematically obtained, ensuring economically viable and sustainable energy management in the process industries. In a case study of an industrial utility system, the novel integrated design approach developed in this study was shown to reduce overall energy costs by 6% compared to the conventional approach of purchasing renewable electricity.
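To illustrate the multi-period idea on a toy scale, the sketch below solves a four-period dispatch LP with variable renewable availability and a small battery using scipy; demands, prices and storage parameters are hypothetical, and equipment selection, steam systems, electricity sales and carbon capture are deliberately left out.

```python
# Compact multi-period dispatch LP (illustrative sketch with hypothetical numbers;
# the full design framework above additionally covers equipment selection, steam
# levels, electricity sales and carbon capture, which are not modelled here).
import numpy as np
from scipy.optimize import linprog

T = 4
demand = np.array([90.0, 110.0, 120.0, 100.0])   # site power demand per period (MW, assumed)
renew  = np.array([40.0, 120.0, 30.0, 60.0])     # available renewable generation (MW, assumed)
price  = np.array([60.0, 40.0, 90.0, 70.0])      # grid electricity price ($/MWh, assumed)
eta, soc_max, rate_max, soc0 = 0.9, 80.0, 40.0, 20.0

# Variable layout: [grid_t | charge_t | discharge_t | soc_t], each block of length T
def idx(block, t):
    return block * T + t

n = 4 * T
c_obj = np.zeros(n); c_obj[:T] = price           # minimise grid purchase cost

A_eq, b_eq = [], []
for t in range(T):
    # Power balance: grid + renewable + discharge - charge = demand
    row = np.zeros(n)
    row[idx(0, t)] = 1.0; row[idx(2, t)] = 1.0; row[idx(1, t)] = -1.0
    A_eq.append(row); b_eq.append(demand[t] - renew[t])
    # Storage balance: soc_t - soc_{t-1} - eta*charge_t + discharge_t/eta = 0 (soc_{-1} = soc0)
    row = np.zeros(n)
    row[idx(3, t)] = 1.0; row[idx(1, t)] = -eta; row[idx(2, t)] = 1.0 / eta
    if t > 0:
        row[idx(3, t - 1)] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    else:
        A_eq.append(row); b_eq.append(soc0)

bounds = [(0, None)] * T + [(0, rate_max)] * T + [(0, rate_max)] * T + [(0, soc_max)] * T
res = linprog(c_obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds, method="highs")
print("grid import per period (MW):", np.round(res.x[:T], 1))
print("total cost ($):", round(float(c_obj @ res.x), 1))
```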

Reference

[1] International Energy Agency, CO2 Emissions in 2022, IEA, Paris, 2023. https://www.iea.org/reports/co2-emissions-in-2022

[2] H. Park, J-K. Kim, and S.C. Yi. Optimization of site utility systems for renewable energy integration. Energy, 269, 126799, 2023. https://doi.org/10.1016/j.energy.2023.126799

 
4:30pm - 5:30pmClosing & Award Ceremony (Eurecha Award and presentation of the winning contribution, Frontiers in Energy - Eurecha Award to the best poster, mobility survey prize, ESCAPE|36 announcement)
Location: Zone 1 - Aula Louisiane

 