Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
Poster Session 1
Time:
Monday, 07/July/2025:
10:00am - 10:30am

Location: Zone 2 - Cafetaria

KU Leuven Ghent Technology Campus Gebroeders De Smetstraat 1, 9000 Gent

Presentations

IMPLEMENTATION AND ASSESSMENT OF FRACTIONAL CONTROLLERS FOR AN INTENSIFIED DISTILLATION SYSTEM

Luis Refugio Flores-Gómez1, Fernando Israel Gómez-Castro1, Francisco López-Villarreal2, Vicente Rico-Ramírez3

1Universidad de Guanajuato, Mexico; 2Instituto Tecnológico de Villahermosa, Mexico; 3Tecnológico Nacional de México en Celaya, Mexico

Process intensification is a strategy in chemical engineering devoted to the development of technologies that enhance the performance of the operations in a chemical process. This is achieved through the implementation of modified equipment and multi-tasking equipment, among other approaches. Although various studies have demonstrated that the dynamic properties of intensified systems can be better than those of conventional configurations, the development of better control structures is still necessary (Wang et al., 2018). The use of fractional controllers can be an alternative to achieve this target. Fractional PID controllers are based on fractional calculus, increasing the flexibility of the controller by allowing fractional orders for the derivative and integral actions. However, this makes tuning the controller more complex. This work presents an approach to implement and assess fractional controllers in an intensified distillation system. The study is performed in the Simulink environment in Matlab, tuning the controllers through a hybrid optimization approach: first using a genetic algorithm to find an initial point, and then refining the solution with the fmincon algorithm. The calculations also involve the estimation of fractional derivatives and integrals with fractional-order numerical techniques. As a case study, experimental dynamic data for an extractive distillation column have been used (Kumar et al., 1984). The data have been fitted to fractional-order transfer functions. Since the number of experimental points is low, a strategy is implemented to interpolate the data and generate a more adequate fit to the fractional-order transfer function. Through this approach, the sum of squared errors is below 2.9×10⁻⁶ for perturbations in the heat duty and 1.2×10⁻⁵ for perturbations in the reflux ratio. Moreover, after controller tuning, a minimal ISE value of 1,278.12 is obtained, which is approximately 8% lower than the value obtained for an integer-order controller.
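
As an aside on the fractional-order numerical techniques mentioned above: a common choice is the Grünwald-Letnikov approximation of a fractional derivative. The following minimal Python sketch is our illustration of that approximation on uniformly sampled data, not the authors' Matlab/Simulink implementation:

```python
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov estimate of the order-alpha derivative of the
    samples f, taken on a uniform time grid with spacing h."""
    n = len(f)
    k = np.arange(n)
    w = (-1.0) ** k * binom(alpha, k)  # generalized binomial weights
    d = np.empty(n)
    for j in range(n):
        # D^alpha f(t_j) ~= h**(-alpha) * sum_{k=0..j} w_k * f(t_{j-k})
        d[j] = np.dot(w[: j + 1], f[j::-1]) / h**alpha
    return d

# Sanity check: alpha = 1 recovers an ordinary first derivative
t = np.linspace(0.0, 1.0, 101)
print(gl_fractional_derivative(t, 1.0, t[1] - t[0])[1:5])  # ~= 1.0
```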

References

Wang, C., Wang, C., Cui, Y., Guang, C., Zhang, Z., 2018. Economics and controllability of conventional and intensified extractive distillation configurations for acetonitrile/ethanol/benzene mixtures. Industrial & Engineering Chemistry Research, 57, 10551-10563.

Kumar, S., Wright, J.D., Taylor, P.A. 1984. Modelling and dynamics of an extractive distillation column. Canadian Journal of Chemical Engineering, 62, 185-192.



Sustainable pathways toward a decarbonized steel industry

Selene Cobo Gutiérrez1, Max Kroppen2, Juan Diego Medrano2, Gonzalo Guillén-Gosálbez2

1University of Cantabria; 2ETH Zurich

The steel industry, responsible for about 7% of global CO2 emissions [1], faces significant pressure to reduce its environmental impact. Various technological pathways are available, but it remains unclear which is the most effective in minimizing CO2 emissions without causing greater environmental harm in other areas. This work conducts a prospective life cycle assessment of five steelmaking pathways to identify the most environmentally sustainable option in terms of global warming impacts and damage to human health, ecosystems, and resources. The studied processes are 1) blast furnace plus basic oxygen furnace (BF-BOF, the dominant steelmaking route at present), 2) BF-BOF with carbon capture and storage (CCS), 3) coal-based direct reduction of iron paired with an electric arc furnace (DRI-EAF), 4) DRI-EAF using natural gas, and 5) the more recently developed low-temperature iron oxide electrolysis (IOE). Life cycle inventories were developed using a detailed Aspen Plus® model for BF-BOF, data from the Ecoinvent v3.8 database [2], and literature for the other processes. The results indicate that the BF-BOF process with CCS, gas-based DRI-EAF, and IOE are the most promising pathways for reducing the steel industry’s carbon footprint while minimizing overall environmental damage. If renewable energy and hydrogen produced via water electrolysis are available at competitive costs, DRI-EAF and IOE show the most promise. However, if low-carbon hydrogen is not available and the main electricity source is the global grid mix, BF-BOF with CCS has the lowest overall impacts. The choice of technology depends on the expected development of the energy system and the current technological stock. Retrofitting existing BF-BOF plants with CCS is a viable option, while constructing new DRI-EAF plants may be more advantageous due to their versatility and higher decarbonization potential. IOE, although promising, is not yet ready for immediate industrial deployment but could be a key technology in the long term. In conclusion, the optimal technology choice depends on regional energy availability and technological readiness levels. These findings underscore the need for a tailored approach to decarbonizing the steel industry, balancing environmental benefits with economic and infrastructural considerations.

References

1. W. Cornwall. Science, 2024, 384(6695), 498-499.

2. G. Wernet, C. Bauer, B. Steubing, J. Reinhard, E. Moreno-Ruiz and B. Weidema, Int. J. Life Cycle Assess., 2016, 21, 1218–1230.



OPTIMIZATION OF HEAT EXCHANGERS THROUGH AN ENHANCED METAHEURISTIC STRATEGY: THE SUCCESS-BASED OPTIMIZATION ALGORITHM

Oscar Daniel Lara-Montaño1, Fernando Israel Gómez-Castro2, Claudia Gutiérrez-Antonio1, Elena Niculina Dragoi3

1Universidad Autónoma de Querétaro, Mexico; 2Universidad de Guanajuato, Mexico; 3Gheorghe Asachi Technical University of Iasi, Romania

The optimal design of the units in a chemical process is commonly challenging due to the high nonlinearity of the models that represent the equipment. This also applies to heat exchangers, where the mathematical models of such units are nonlinear, include nonconvex terms, and require simultaneous handling of continuous and discrete variables. Finding the global optimum of such models is complex, so the optimization strategy must be robust. In this context, metaheuristics are a robust alternative to classical optimization strategies: a family of stochastic algorithms that, when adequately tuned, can efficiently locate the global-optimum region and are well suited to nonconvex functions with several local optima. The literature presents numerous metaheuristics, each with distinct properties, many of which require parameter tuning. However, no universal method exists to solve all optimization problems, as stated by the no-free-lunch theorem (Wolpert and Macready, 1997). This implies that a given algorithm may work well for some problems but perform poorly on others, as reported for the optimal design of heat exchangers by Lara-Montaño et al. (2021). As such, new optimization strategies are still under development, and this work presents an enhanced metaheuristic algorithm, the Success-Based Optimization Algorithm (SBOA). The method takes the concept of success, viewed from a social perspective, as its initial inspiration. As a case study, the design of a shell-and-tube heat exchanger using the Bell-Delaware method is analyzed to minimize the total annual cost. The algorithm's performance is compared with current state-of-the-art metaheuristic algorithms, such as particle swarm optimization, grey wolf optimizer, cuckoo search, and differential evolution. In terms of standard deviation and mean values, the suggested algorithm outperforms all other approaches except differential evolution. Nevertheless, the SBOA has shown faster convergence than differential evolution and has found best solutions with lower total annual costs.
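
To illustrate the kind of statistical comparison reported above (mean and standard deviation over repeated runs), here is a minimal sketch using SciPy's differential evolution; `total_annual_cost` is a placeholder multimodal function standing in for the Bell-Delaware cost model, not the authors' objective:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Placeholder multimodal objective; in the study this role is played by
# the Bell-Delaware-based total annual cost of the exchanger.
def total_annual_cost(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

bounds = [(-5.12, 5.12)] * 6
runs = [differential_evolution(total_annual_cost, bounds, seed=s).fun
        for s in range(20)]
print(f"mean = {np.mean(runs):.3e}, std = {np.std(runs):.3e}")
```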

References

Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.

Lara-Montaño, O.D., Gómez-Castro, F.I., Gutiérrez-Antonio, C. 2021. Comparison of the performance of different metaheuristic methods for the optimization of shell-and-tube heat exchangers. Computers & Chemical Engineering, 152, 107403.



OPTIMAL DESIGN OF PROCESS EQUIPMENT THROUGH HYBRID MECHANISTIC-ANN MODELS: EFFECT OF HYBRIDIZATION

Zaira Jelena Mosqueda-Huerta1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

Artificial neural networks (ANNs) are data-based structures that allow representing the performance of units in chemical processes. They have been widely used to represent the operation of equipment such as reactors (e.g., Cerinski et al., 2020) and separation units (e.g., Jawad et al., 2020). Developing ANN-based models requires data to train the network. Thus, their use for process design is challenging, since the equipment does not yet exist and actual data is commonly unavailable. On the other hand, despite the popularity of artificial neural networks for modeling chemical processes, there are warnings about the risks of depending completely on these data-based models while ignoring the fundamental knowledge of the phenomena occurring in the units, which is provided by traditional mechanistic models. Hybrid models have therefore arisen to combine the power of ANNs to predict interactions that are difficult to represent through rigorous modelling, while maintaining the relevant information provided by the traditional mechanistic approach. However, an open question is which part of the model should be represented through a data-based approach for design applications. To answer this question, this work analyzes the effect of the degree of hybridization on the design and optimization of a shell-and-tube heat exchanger, assessing the performance of a complete ANN model and a hybrid model in terms of computational time and solution accuracy. Since data for the heat exchanger is not available, this information is obtained by solving the rigorous model for randomly selected conditions. The Bell-Delaware approach is employed to design the exchanger; this model is characterized by nonlinearities and the need to handle discrete and continuous variables. Using the data, a neural network is trained in Python to approximate the area and cost of the exchanger. A second neural network is generated to predict the most nonlinear component of the model, namely the heat transfer coefficients, while the other calculations are performed with the rigorous model. Both representations are optimized with the differential evolution algorithm. According to preliminary results, for the same architecture, the hybrid model produces designs whose standard deviation, relative to the areas predicted by the rigorous model, is approximately 30% lower than that of the complete ANN model. However, the hybrid model requires approximately 11 times the computational time of the complete ANN model.
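
A minimal sketch of the hybridization idea follows; it is our illustration with synthetic data and a simplified design equation, not the authors' model. The ANN predicts only the shell-side heat transfer coefficient, while the design equation remains mechanistic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training data: two design variables -> shell-side heat
# transfer coefficient (in the paper this comes from Bell-Delaware runs).
rng = np.random.default_rng(0)
X = rng.uniform([0.15, 0.5], [0.45, 3.0], size=(500, 2))  # e.g. baffle cut, velocity
h_shell = 800.0 + 400.0 * X[:, 1] + 50.0 * rng.standard_normal(500)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X, h_shell)

def hybrid_area(x, duty=1.0e6, dT_lm=40.0, h_tube=1500.0):
    """Hybrid step: the ANN supplies only the shell-side coefficient;
    the design equation Q = U*A*dT_lm stays mechanistic (wall and
    fouling resistances neglected for brevity)."""
    h_s = ann.predict(np.atleast_2d(x))[0]
    U = 1.0 / (1.0 / h_s + 1.0 / h_tube)
    return duty / (U * dT_lm)

print(hybrid_area([0.25, 1.5]))  # required area in m^2 for this design
```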

References

Cerinski, D., Baleta, J., Mikulčić, H., Mikulandrić, R., Wang, J., 2020. Dynamic modelling of the biomass gasification process in a fixed bed reactor by using the artificial neural network. Cleaner Engineering and Technology, 1, 100029.

Jawad, J., Hawari, A.H., Zaidi, S. 2020. Modeling of forward osmosis process using artificial neural networks (ANN) to predict the permeate flux. Desalination, 484, 114427.



MODELLING OF A PROPYLENE GLYCOL PRODUCTION PROCESS WITH ARTIFICIAL NEURAL NETWORKS: OPTIMIZATION OF THE ARCHITECTURE

Emilio Alba-Robles1, Oscar Daniel Lara-Montaño2, Fernando Israel Gómez-Castro1, Jahaziel Alberto Sánchez-Gómez1, Manuel Toledano-Ayala2

1Universidad de Guanajuato, Mexico; 2Universidad Autónoma de Querétaro, México

The mathematical models used to represent chemical processes are characterized by high non-linearity, mainly associated with the thermodynamic and kinetic relationships. The inclusion of non-convex bilinear terms is also common when modelling chemical processes. This leads to challenges when optimizing an entire process. In recent years, interest in developing data-based models to represent processing units has increased; see, for example, the work of Kwon et al. (2021) on the dynamic performance of distillation columns. Artificial neural networks (ANNs) are among the most relevant strategies for developing data-based models. The accuracy of an ANN's predictions is highly dependent on the quality of the provided data, the nature of the interactions among the studied variables, and the architecture of the network. Indeed, the selection of an adequate architecture is itself an optimization problem. In this work, two strategies are proposed and assessed for determining the architecture of ANNs that represent the performance of a chemical process. As a case study, a process to produce propylene glycol using glycerol as raw material is analyzed (Sánchez-Gómez et al., 2023). The main units of the process are the chemical reactor and two distillation columns. To generate the data required to train the artificial neural network, random values of the design and operating variables are generated from a simulation in Aspen Plus. To determine the best architecture, two approaches are used: (i) the random generation of structures for the ANN, and (ii) the formal optimization of the architecture employing the ant colony algorithm, which is particularly useful for discrete problems (Zhao et al., 2022). In both cases, the decision variables are the number of hidden layers and the number of neurons per layer, and the objective function is the minimization of the mean squared error. Both strategies generate ANN-based predictions in good agreement with the rigorous simulation data, with r² values higher than 99.9%. However, the ant colony algorithm achieves the best fit, albeit with slower convergence.
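
Strategy (i), the random generation of architectures, can be sketched in a few lines of Python; the data below are synthetic stand-ins for the Aspen Plus samples, and the search ranges are illustrative:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the Aspen Plus samples (4 inputs, 1 output)
rng = np.random.default_rng(1)
X = rng.uniform(size=(400, 4))
y = np.sin(X @ np.array([3.0, -2.0, 1.0, 0.5]))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best_mse, best_arch = np.inf, None
for _ in range(20):  # strategy (i): randomly generated architectures
    n_layers = int(rng.integers(1, 4))          # 1-3 hidden layers
    arch = tuple(int(n) for n in rng.integers(4, 64, size=n_layers))
    mdl = MLPRegressor(hidden_layer_sizes=arch, max_iter=3000,
                       random_state=0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, mdl.predict(X_te))
    if mse < best_mse:
        best_mse, best_arch = mse, arch
print(best_arch, best_mse)
```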

References

Kwon, H., Oh, K.C., Choi, Y., Chung, Y.G., Kim, J., 2021. Development and application of machine learning-based prediction model for distillation column. International Journal of Intelligent Systems, 36, 1970-1997.

Sánchez-Gómez, J.A., Gómez-Castro, F.I., Hernández, S. 2023. Design and intensification of the production process of propylene glycol as a high value-added glycerol derivative. Computer Aided Chemical Engineering, 52, 1915-1920.

Zhao, H., Zhang, C., Zheng, X., Zhang, C., Zhang, B. 2022. A decomposition-based many-objective ant colony optimization algorithm with adaptive solution construction and selection approaches. Swarm and Evolutionary Computation, 68, 100977.



CFD Analysis of the Claus Reaction Furnace under Different Operating Conditions: Temperature and Excess Air for Sulfur Recovery

Pablo Vizguerra Morales1, Miguel Angel Morales Cabrera2, Fabian Salvador Mederos Nieto1

1Instituto Politécnico Nacional, Mexico; 2Universidad Veracruzana, Mexico

In this work, a Claus reaction furnace in a sulfur recovery unit (SRU) of the Abadan Oil Refinery, Iran, was analyzed. The combustion operating temperature is important since it ensures optimal performance in the reactor. The novelty of the research lies in the study of temperature control at 1400, 1500, and 1600 K and excess air levels of 10, 20, and 30% to improve the reaction yield and H2S conversion. The CFD simulation was carried out in Ansys Fluent in transient state and in three dimensions, considering the standard turbulence model, an energy model with transport by convection, and mass transport with chemical reaction using the Arrhenius finite-rate/eddy-dissipation model for the kinetics of the destruction of the acid gases H2S and CO2. A good approximation to the experimental results of the industrial process was obtained: the percentage difference between experimental and simulated results varies between 0.6 and 4%, depending on the species. The temperature of 1600 K with 30% excess air performed best, giving an S2 mole fraction of 0.065 at the outlet and an acid gas (H2S) conversion of 95.64%, in good agreement with the experimental value.



Numerical Analysis of the Hydrodynamics of Proximity Impellers using the SPH Method

Maria Soledad Hernández-Rivera1, Karen Guadalupe Medina-Elizarraraz1, Jazmín Cortez-González1, Rodolfo Murrieta-Dueñas1, Juan Gabriel Segovia-Hernández2, Carlos Enrique Alvarado-Rodríguez2, José de Jesús Ramírez-Minguela2

1Tecnológico Nacional de México / Campus Irapuato, Departamento de Ingeniería Química; 2Universidad de Guanajuato, Departamento de Ingeniería Química

Mixing is a fundamental operation in many industrial processes, typically achieved using agitated tanks for homogenization. However, the design of tanks and impellers is often overlooked during the selection of the agitation system, leading to excessive energy consumption and non-homogeneous mixing. To address these operational inefficiencies, Computational Fluid Dynamics (CFD) can be utilized to analyze the hydrodynamics and mixing times within the tank. CFD employs mathematical modeling of mass, heat, and momentum transport phenomena to simulate fluid behavior. Among the latest methods used for modeling stirred tank hydrodynamics is Smoothed Particle Hydrodynamics (SPH), a mesh-free Lagrangian approach that tracks individual particles carrying physical properties such as mass, position, velocity, and pressure. This method offers advantages over traditional mesh discretization techniques by analyzing particle interactions to simulate fluid behavior more accurately. In this study, we compare the performance of different impellers based on hydrodynamics and mixing times during the homogenization of water and ethanol in a 0.5 L stirred tank. The tank and agitators were rigorously sized, operating at 70% capacity, with the fluids' physical properties as follows: ρ₁ = 1000 kg/m³, ρ₂ = 789 kg/m³, and kinematic viscosities ν₁ = 1×10⁻⁶ m²/s and ν₂ = 1.52×10⁻⁶ m²/s. The simulation, conducted for 2 minutes in a turbulent flow regime with a Reynolds number of 10,000, involved three impellers (double ribbon, paravisc, and hybrid) simulated using the DualSPHysics software at a stirring speed of 34 rpm. The initial particle distance was set to 1 mm, generating 270,232 fluid particles and 187,512 boundary particles representing the tank and agitator. The results included velocity profiles, flow patterns, divergence, vorticity, and density fields to quantify mixing performance. The Q criterion was also applied to identify whether fluid motion was dominated by rotation or deformation and to locate stagnation zones. The double ribbon impeller demonstrated the best performance, achieving 88.28% mixing in approximately 100 seconds, while the paravisc and hybrid impellers reached 12.36% and 11.8% mixing, respectively. The findings highlight SPH as a robust computational tool for linking hydrodynamics with mixing times, allowing for the identification of key parameters that enhance mixing efficiency.
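
For reference, the Q criterion mentioned above compares the rotation and strain-rate parts of the velocity-gradient tensor; a pointwise NumPy sketch (our illustration, not the DualSPHysics implementation) is:

```python
import numpy as np

def q_criterion(grad_u):
    """Q criterion from a local velocity-gradient tensor (3x3):
    Q > 0 marks rotation-dominated flow, Q < 0 deformation-dominated."""
    S = 0.5 * (grad_u + grad_u.T)      # strain-rate tensor
    W = 0.5 * (grad_u - grad_u.T)      # rotation (vorticity) tensor
    return 0.5 * (np.sum(W**2) - np.sum(S**2))

# Solid-body rotation (pure rotation, no strain) gives Q > 0
grad_u = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
print(q_criterion(grad_u))  # 1.0
```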



Surrogate Modeling of Twin-Screw Extruders Using a Recurrent Deep Embedding Network

Po-Hsun Huang1, Yuan Yao1, Yen-Ming Chen2, Chih-Yu Chen2, Meng-Hsin Chen2

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan; 2Industrial Technology Research Institute, Hsinchu 30013, Taiwan

Twin-screw extruders (TSEs) are extensively used in the plastics processing industry, with their performance highly dependent on operating conditions and screw configurations. However, optimizing these parameters through experimental trials is often time-consuming and resource-intensive. Although some neural network models have been proposed to tackle the screw arrangement problem [1], they fail to account for the positional information of the screw elements. To overcome this challenge, we propose a recurrent deep embedding network that leverages a deep autoencoder with a recurrent neural network (RNN) structure to develop a surrogate model based on simulation data.

The details are as follows. An autoencoder is a neural network architecture designed to learn latent representations of input data. In this study, we integrate the autoencoder with an RNN to capture the complex physical relationships between the operating conditions, screw configurations of TSEs, and their corresponding performance metrics. To further enhance the model’s ability to represent screw positions, we incorporate an attention layer from the Transformer model architecture. This addition allows the model to more effectively capture the spatial relationships between the screw elements.

The model was trained and evaluated using simulation data generated from the Ludovic software package. The experimental setup included eight screw element arrangements and three key operating variables: temperature, feed rate, and rotation speed. For data collection, we employed two data sampling strategies: progressive Latin hypercube sampling [2] and random sampling.
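
For illustration, a plain (non-progressive) Latin hypercube sample of the three operating variables can be drawn with SciPy; the bounds below are assumed placeholders, not the paper's actual ranges:

```python
from scipy.stats import qmc

# 50 Latin hypercube samples of the three operating variables
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=50)
lower = [180.0, 5.0, 100.0]   # temperature [C], feed rate [kg/h], speed [rpm]
upper = [240.0, 25.0, 600.0]
samples = qmc.scale(unit, lower, upper)
```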

The results demonstrate that the proposed surrogate model accurately predicts TSE performance across both training and testing datasets. Notably, the model generalizes well to unseen operating conditions, making reliable predictions even for scenarios not encountered during training. This highlights the model’s robustness and versatility as a tool for optimizing TSE configurations.

In conclusion, the recurrent deep embedding surrogate model offers a highly efficient and effective solution for optimizing TSE performance. By integrating this model with optimization algorithms, it is possible to rapidly identify optimal configurations, resulting in improved product quality, enhanced process efficiency, and reduced production costs.



Predicting Final Properties in Ibuprofen Production with Variable Batch Durations

Kuan-Che Huang, David Shan-Hill Wong, Yuan Yao

Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan

This study addresses the challenge of predicting final properties in batch processes with highly uneven durations, using the ibuprofen production process as a case study. A novel methodology is proposed and compared against traditional regression algorithms, which rely on batch trajectory synchronization as a pre-processing step. The performance of each method is evaluated using established metrics.

Batch processes are widely used in the chemical industry. Nevertheless, variability between production runs often leads to differences in batch durations, resulting in unequal lengths of process variable trajectories. Common solutions include time series truncation or time warping. However, truncation risks losing valuable process information, thereby reducing model prediction accuracy. Conversely, time warping may introduce noise or distort trajectories when compressing significantly unequal sequences, causing the model to learn incorrect process information. In multivariate chemical processes, combining time warping with batch-wise unfolding can result in the curse of dimensionality, especially when data is limited, thereby increasing the risk of overfitting in machine learning models.

The data for this study were generated using the Aspen Plus V12 simulation software, focusing on batch reactors. To capture the process characteristics, statistical sampling was employed to strategically position data points within a reasonable process range. The final isobutylbenzene conversion rate for each batch was used to determine batch completion. A total of 1,000 simulation runs were conducted, and the resulting data were used to develop a neural network model. The target variables to predict are: (1) the isobutylbenzene conversion rate, and (2) the accumulated mass of ibuprofen.

To handle the unequal-length trajectories in batch processes, this research constructs a dual-transformer deep neural network with multihead attention and layer normalization mechanisms to extract shared information from the high-dimensional, uneven-length manipulated-variable profiles into a latent space, generating equal-dimensional latent codes. As an alternative feature-extraction strategy, a dual-autoencoder framework is also employed to achieve equal-dimensional representations. The representation vectors are then used as inputs for downstream deep learning models to predict the target variables.
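
A minimal sketch of the core idea, encoding a padded variable-length trajectory into a fixed-size latent code with multihead attention and layer normalization, is shown below; this is our simplified PyTorch illustration, not the authors' dual-transformer network:

```python
import torch
from torch import nn

class TrajectoryEncoder(nn.Module):
    """Encodes a padded variable-length trajectory into a fixed-size
    latent code via attention pooling (simplified illustration)."""
    def __init__(self, n_vars, d_model=32, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(n_vars, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.query = nn.Parameter(torch.randn(1, 1, d_model))  # learned pooling query

    def forward(self, x, pad_mask):
        # x: (batch, T_max, n_vars); pad_mask: (batch, T_max), True = padding
        h = self.proj(x)
        q = self.query.expand(x.size(0), -1, -1)
        z, _ = self.attn(q, h, h, key_padding_mask=pad_mask)
        return self.norm(z.squeeze(1))  # (batch, d_model) latent code

enc = TrajectoryEncoder(n_vars=3)
x = torch.zeros(2, 50, 3)  # two trajectories padded to 50 steps
mask = torch.arange(50)[None, :] >= torch.tensor([[30], [45]])  # true lengths 30, 45
print(enc(x, mask).shape)  # torch.Size([2, 32])
```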



Development of a Digital Twin System Based on Physics-Informed Neural Networks for Pipeline Leakage Detection

Wei-Shiang Lin1, Yi-Hsiang Cheng2, Zhen-Yu Hung2, Yuan Yao1

1Department of Chemical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan; 2Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310401, Taiwan

As the demand for industrial and domestic resources continues to grow, the transportation of water, fossil fuels, and chemical products increasingly depends on pipeline systems. Therefore, monitoring pipeline transportation has become crucial, as leaks can lead to severe environmental disasters and safety risks. To address this challenge, this study is dedicated to developing a pipeline leakage detection system based on digital twin technology.

The core of this research lies in combining existing physical knowledge, such as the continuity and momentum equations, with neural network technology. These physical models are incorporated into the loss function of the neural network, enabling the model to be trained based on physical laws. By integrating physical models with neural networks, we aim to achieve high accuracy in detecting pipeline leakages. An advantage of Physics-informed Neural Networks (PINNs) is that they do not rely on large datasets and can enforce physical constraints during model training, making them a powerful tool for addressing pipeline safety challenges. Using the PINN model, we can more accurately simulate the fluid dynamics within pipelines, thereby significantly enhancing the prediction of potential leaks.

In detail, the system employs a fully connected neural network alongside the continuity and momentum partial differential equations to describe fluid pressure and flow rate variations. These equations not only predict pressure transients and pressure wave propagation but also account for the impact of pipeline friction coefficients on flow behavior. By integrating data fitting with physical constraints, our model aims to minimize both the prediction loss and the partial differential equation loss, ensuring that predictions align closely with real-world data while adhering to physical laws. This approach provides both interpretability and reliability.
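
A minimal PyTorch sketch of such a PDE residual loss is given below, assuming the classic water-hammer form of the continuity and momentum equations with constant wave speed, friction factor, and geometry (our assumption, not necessarily the paper's exact formulation):

```python
import torch

# Constants (illustrative): wave speed a [m/s], gravity g, pipe area A
# [m^2], diameter D [m], Darcy friction factor f
a, g, A, D, f = 1200.0, 9.81, 0.05, 0.25, 0.02

def pde_loss(net, x, t):
    """Residuals of the water-hammer continuity and momentum equations
    for a network mapping (x, t) -> (H, Q)."""
    xt = torch.stack([x, t], dim=1).requires_grad_(True)
    H, Q = net(xt).unbind(dim=1)
    grad = lambda u: torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    H_x, H_t = grad(H).unbind(dim=1)
    Q_x, Q_t = grad(Q).unbind(dim=1)
    r_cont = H_t + (a**2 / (g * A)) * Q_x                       # continuity
    r_mom = Q_t + g * A * H_x + f * Q * Q.abs() / (2 * D * A)   # momentum
    return (r_cont**2).mean() + (r_mom**2).mean()

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 2))
x, t = torch.rand(64), torch.rand(64)   # collocation points
loss = pde_loss(net, x, t)              # added to the data-fitting loss
```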

The PINN model is trained on data from normal pipeline operations to describe fluid dynamics in non-leakage conditions. When the input data reflects flow rates and pressures indicative of a leak, the predicted values will exhibit statistically significant deviations from the actual values. The process involves collecting prediction errors from the training data, evaluating their statistical distribution, and establishing a detection statistic using parametric or non-parametric methods. A rejection region and control limits are then defined, followed by the creation of a control chart to detect leaks. Finally, we test the accuracy and efficiency of the control chart using field or experimental data to ensure reliability.



Higher alcohol = higher value? Identifying Promising and Unpromising Synthesis Routes for 1-Propanol

Lukas Spiekermann, Mae McKenna, Luca Bosetti, André Bardow

Energy & Process Systems Engineering, Department of Mechanical and Process Engineering, ETH Zürich

In response to climate change, the chemical industry is investigating synthesis routes using renewable carbon sources (Shukla et al., 2022). CO2 and biomass have been shown to be convertible into 1-propanol, which could serve as a future platform chemical with diverse applications and higher value than traditional bulk chemicals (Jouny et al., 2018, Schemme et al., 2018, Gehrmann and Tenhumberg, 2020, Vo et al., 2021). A variety of potential pathways to 1-propanol have been proposed, but their respective benefits and disadvantages remain unclear, limiting their ability to guide future innovation.

Here, we aim to identify the most promising routes to produce 1-propanol and establish development targets necessary to become competitive with benchmark technologies. To allow for a comprehensive assessment, we embed 1-propanol into the overall chemical supply chain. For this purpose, we formulate a technology choice model (Kätelhön et al., 2019, Meys et al., 2021) of the chemical industry to evaluate the cost-effectiveness and climate impact of various 1-propanol synthesis routes. The model includes thermo-catalytic, electrocatalytic, and fermentation-based synthesis steps with various intermediates to produce 1-propanol from CO2, diverse biomass feedstocks, and fossil resources. A comprehensive techno-economic analysis coupled with life cycle assessment quantifies both the economic and environmental potentials of new synthesis routes.

Our findings define the performance targets that the direct conversion of CO2 to 1-propanol via thermo-catalytic hydrogenation or electrocatalysis must meet to become a beneficial synthesis route. If these performance targets are not met, the direct synthesis of 1-propanol is substituted by multi-step processes based on syngas and ethylene from CO2 or biomass.

Overall, our study demonstrates the critical role of synthesis route optimization in guiding the development of new chemical processes. By establishing quantitative benchmarks, we provide a roadmap for advancing 1-propanol synthesis technologies, contributing to the broader effort of reducing the chemical industry’s carbon footprint.

References

P. R. Shukla, et al., 2022, Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, Cambridge University Press, Cambridge, UK and New York, NY, USA)

M. Jouny, et al., 2018, Ind. Eng. Chem. Res. 57(6), 2165–2177

C. H. Vo, et al., 2021, ACS Sustain. Chem. Eng. 9(31), 10591–10600

S. Schemme, et al., 2018, Journal of CO2 Utilization 27, 223–237

S. Gehrmann, N. Tenhumberg, 2020, Chemie Ingenieur Technik 92(10), 1444–1458

A. Kätelhön, et al., 2019, Proceedings of the National Academy of Sciences 116(23), 11187–11194

R. Meys, et al., 2021, Science 374(6563), 71–76



A Python/Numpy-based package to support model discrimination and identification

Seyed Zuhair Bolourchian Tabrizi1,2, Elena Barbera1, Wilson Ricardo Leal da Silva2, Fabrizio Bezzo1

1Department of Industrial Engineering, University of Padova, via Marzolo 9, 35131 Padova PD, Italy; 2FLSmidth Cement, Green Innovation, Denmark

Process design, scale-up, and optimisation require the precise determination of underlying phenomena and the identification of accurate models to describe them. This task can become complex when multiple rival models exist, uncertainty in the data is high, and the data needed to select and calibrate the models are costly to obtain. Numerical techniques for screening various models and narrowing the pool of candidates without requiring additional experimental effort have been introduced to streamline the pre-discrimination stage [1]. These techniques have been followed by the development of model-based design of experiments (MBDoE) methods, which not only design new experiments to maximize the information for easier discrimination between rival models but also reduce the confidence ellipsoid volume of the estimated parameters by enriching the information matrix through optimal experiment design [2].
The value of performing these techniques in an open-source and user-friendly environment has been recognized by the community and has led to the development of several valuable packages, especially in the Python/Pyomo environment, which implement many of these numerical techniques [3,4]. These existing packages have made significant contributions to parameter estimation and calibration of models as well as model-based design of experiments. However, a systematic package that flexibly performs all of these steps, with a clear distinction between model simulation and model identification in an object-oriented approach, is still needed. To address these challenges, we present a new Python package that serves as an independent numerical wrapper around the kernel functions (the models and their numerical interpretation). It facilitates the crucial model identification steps, including the screening of rival models (through global sensitivity, identifiability, and estimability analyses), parameter estimation, uncertainty analysis, and model-based design of experiments to discriminate and calibrate models. This package not only brings together all the necessary steps but also conducts the analysis in an object-oriented manner, offering flexibility to adapt to the physical constraints of various processes. It is independent of specific programming structures and relies on NumPy and Python arrays, making it as general as possible while remaining compatible with the features available in these packages. The application and advantages are demonstrated through an in-silico approach to a multivariate model identification case.
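
To give a flavor of the object-oriented separation between kernel functions and identification logic, a minimal sketch is shown below; the class and method names are illustrative, not the package's actual API:

```python
import numpy as np
from scipy.optimize import least_squares

class ModelIdentifier:
    """Keeps the kernel function (the model) separate from the
    identification logic; names are illustrative, not the package API."""
    def __init__(self, kernel, theta0):
        self.kernel, self.theta0 = kernel, np.asarray(theta0)

    def estimate(self, x, y):
        res = least_squares(lambda th: self.kernel(x, th) - y, self.theta0)
        self.theta = res.x
        # Gauss-Newton approximation of the parameter covariance
        dof = max(len(y) - len(res.x), 1)
        s2 = 2.0 * res.cost / dof
        self.cov = s2 * np.linalg.inv(res.jac.T @ res.jac)
        return self.theta

# Toy first-order kinetic kernel: y = theta0 * (1 - exp(-theta1 * t))
kernel = lambda t, th: th[0] * (1.0 - np.exp(-th[1] * t))
t = np.linspace(0.0, 10.0, 25)
y = kernel(t, [2.0, 0.7]) + 0.05 * np.random.default_rng(2).standard_normal(25)
mi = ModelIdentifier(kernel, [1.0, 1.0])
print(mi.estimate(t, y), np.sqrt(np.diag(mi.cov)))  # estimates and std errors
```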

References:
[1] Moshiritabrizi, I., Abdi, K., McMullen, J. P., Wyvratt, B. M. & McAuley, K. B. Parameter estimation and estimability analysis in pharmaceutical models with uncertain inputs. AIChE Journal (2023).
[2] Asprey, S. P. & Macchietto, S. Statistical tools for optimal dynamic model building. Comput Chem Eng 24, (2000).
[3] Wang, J. & Dowling, A. W. Pyomo.DOE: An open-source package for model-based design of experiments in Python. AIChE Journal 68, (2022).
[4] Klise, K. A., Nicholson, B. L., Staid, A. & Woodruff, D. L. Parmest: Parameter Estimation Via Pyomo. in 41–46 (2019).



Experiences in Teaching Statistics and Data Science to Chemical Engineering Students at the University of Wisconsin-Madison

Victor Zavala

University of Wisconsin-Madison, United States of America

In this talk, I offer a perspective on my recent experiences in designing a course on statistics and data science for chemical engineers at the University of Wisconsin-Madison and in writing a textbook on the subject.

Statistics is one of the pillars of modern science and engineering and of emerging topics such as data science and machine learning; despite this, its scope and relevance have remained stubbornly misunderstood and underappreciated in chemical engineering education (and in engineering education at large). Specifically, statistics is often taught by placing emphasis on data analysis. However, statistics is much more than that; statistics is a mathematical modeling paradigm that complements physical modeling paradigms used in chemical engineering (e.g., thermodynamics, transport phenomena, conservation, reaction kinetics). Specifically, statistics can help model random phenomena that might not be predictable from physics alone (or from deterministic physical laws), can help quantify the uncertainty of predictions obtained with physical models, can help discover physical models from data, and can help create models directly from data (in the absence of physical knowledge).

The desire to design a new course on statistics for chemical engineering came about from my personal experience in learning statistics in college and from identifying significant gaps in my understanding of statistics throughout my professional career. Similar feelings are often shared with me by professionals working in industry and academia. Throughout my professional career, I have been exposed to a broad range of applications in which knowledge of statistics has proven to be essential: uncertainty quantification, quality control, risk assessment, modeling of random phenomena, process monitoring, forecasting, machine learning, computer vision, and decision-making under uncertainty. These are applications that are pervasive in industry and academia.

The course that I designed at UW-Madison (and the accompanying textbook) follows a "data-models-decisions" pipeline. The intent of this design is to emphasize that statistics is a modeling paradigm that maps data to decisions; moreover, this design also aims to "connect the dots" between different branches of statistics. The focus on the pipeline is also important in reminding students that understanding the application context matters. Similarly, the nature of the decision and the data available influence the type of model used. The design is also intended to help the student understand the close interplay between statistical and physical modeling; specifically, we emphasize how statistics provides tools to model aspects of a system that cannot be fully predicted from physics. The design is also intended to help the student appreciate how statistics provides a foundation for a broad range of modern tools of data science and machine learning.

The talk also offers insights into experiences in using software as a way to reduce complex mathematical concepts to practice. Moreover, I discuss how statistics provides an excellent framework to teach and reinforce concepts of linear algebra and optimization. For instance, it is much easier to explain the relevance of eigenvalues when they are approached from the perspective of data science (e.g., they measure information).



Rule-based autocorrection of Piping and Instrumentation Diagrams (P&IDs) on graphs

Lukas Schulze Balhorn1, Niels Seijsener2, Kevin Dao2, Minji Kim1, Dominik P. Goldstein1, Ge H. M. Driessen2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, The Netherlands; 2Fluor BV, Amsterdam, The Netherlands

Undetected errors or suboptimal designs in Piping and Instrumentation Diagrams (P&IDs) can cause increased financial costs, hazardous situations, unnecessary emissions, and inefficient operation. These errors are currently captured in extensive design processes leading to safe, operable, and maintainable facilities. However, grassroots engineering projects can involve tens to thousands of P&ID pages, leading to a significant revision workload. With the advent of digitalization and data exchange standards such as the Data Exchange in the Process Industry (DEXPI), there are new opportunities for algorithmic support of P&ID revision.

We propose a rule-based, automatic correction (i.e., autocorrection) of errors in P&IDs represented by the DEXPI data model. Our method detects potential errors, suggests improvements, and provides explanations for these suggestions. Specifically, our autocorrection method represents a DEXPI P&ID as a graph, in which nodes represent DEXPI classes and directed edges the connectivity between them. The nodes retain all attributes of the DEXPI classes. Additionally, each rule consists of an erroneous P&ID template and the corresponding correct template, both represented as graphs. The correct template includes the rule explanation as a graph attribute. The rules are then applied at inference time: the autocorrection method searches for the erroneous template via subgraph isomorphism and replaces it with the corresponding correct template in the P&ID graph.
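
The rule-matching step can be illustrated with NetworkX's subgraph isomorphism machinery; the toy rule, class names, and tags below are illustrative placeholders, not actual DEXPI classes:

```python
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

def find_rule_matches(pid, bad_template):
    """Locate all embeddings of an erroneous template in the P&ID graph."""
    matcher = DiGraphMatcher(pid, bad_template,
                             node_match=lambda n, t: n["cls"] == t["cls"])
    return list(matcher.subgraph_isomorphisms_iter())

# Toy P&ID and rule: a pump discharging directly into a control valve
pid = nx.DiGraph()
pid.add_node("P-101", cls="Pump")
pid.add_node("FV-12", cls="ControlValve")
pid.add_edge("P-101", "FV-12")

bad = nx.DiGraph()
bad.add_node("p", cls="Pump")
bad.add_node("v", cls="ControlValve")
bad.add_edge("p", "v")

for mapping in find_rule_matches(pid, bad):
    print("rule hit:", mapping)  # matched nodes would then be rewritten
```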

An industry case study demonstrates the method’s accuracy and performance, with rule inference taking less than a second. However, rules can conflict, requiring careful application order, and rules must be extended for specific cases. The explainability of the rule-based approach builds trust in the method and facilitates its integration into existing engineering workflows. Furthermore, DEXPI provides an existing interface between the autocorrection method and industrial P&ID development software.



Deposition rate constants: a DPM approach for particles in pipe flow

Alkhatab Bani Saad, Edward Obianagha, Lande Liu

University of Huddersfield, United Kingdom

Particle deposition is a phenomenon that occurs in many natural and industrial systems. Nevertheless, modelling and understanding particle deposition in flow remains a significant challenge, especially the determination of the deposition rate constant. This study focuses on the use of the discrete particle model to calculate the deposition rate constant of particles flowing in a horizontal pipe. It was found that increasing the flow velocity decreases particle deposition, while deposition increases with particle size. Similarly, the deposition flux was proportional to the concentration of the particles. The deposit per unit area of the inner pipe surface is higher at lower fluid velocity; when the velocity of the continuous phase is increased by a factor of 100, the deposit volume per unit area decreases by half. The deposition rate constant was found to vary nonlinearly with both axial location along the pipe and particle size. It was also interesting to see that the constant is substantially higher at the pipe inlet and then gradually decreases along the axial direction of the flow. The dependence of the deposition rate constant on particle size was found to be exponential.

The novelty of this research is that, by extracting quantitative parameters (here, deposition rate constants) from a steady-state Lagrangian simulation, unsteady-state population balance modelling based on the Eulerian approach becomes possible for determining the thickness of the particle deposit in a pipe.



Plate heat exchangers: a CFD study on the effect of dimple shape on heat transfer

Mitchell Stolycia, Lande Liu

University of Huddersfield, United Kingdom

This article studies how heat transfer is affected by different dimple shapes on a plate within a plate heat exchanger, using computational fluid dynamics (CFD). Four different dimple shapes were designed and studied: spherical, edge-smoothed spherical, normal distribution, and error distribution. In a pipe of 0.1 m diameter, with the dimple (0.05 m high) located 0.3 m from the inlet and a fluid velocity of 0.5 m s⁻¹, the simulation shows that the normal distribution dimple produced a 0.53 K increase in fluid temperature after 1.5 s. This increase is 10 times that of the spherical shape, 8 times that of the edge-smoothed spherical shape, and 1.13 times that of the error distribution shape in their contributions to elevating fluid temperature. This was primarily due to the large increase in the intensity and number of eddies that the normal distribution dimple induced in the fluid flow.

The effect of a fully developed velocity profile on heat transfer was also analysed for an array of normal distribution dimples in a 5 m long pipe. It was found that fully developed flow resulted in the greatest temperature change, 9.5% more efficient than half-developed flow and 31% more efficient than placing dimples directly next to one another.

The novelty of this research lies in demonstrating how a typical plate heat exchanger can be designed and optimised through a computational approach prior to manufacturing.



Modeling and life cycle assessment for ammonia cracking process

Heungseok Jin, Yeonsoo Kim

Kwangwoon University, Korea, Republic of (South Korea)

Ammonia (NH3) is gaining attention as a sustainable hydrogen (H2) carrier for long-distance transportation due to its higher boiling point and lower boil-off issues compared to liquefied hydrogen. These properties make ammonia a practical choice for storing and transporting hydrogen over long distances. However, extracting hydrogen from ammonia requires significant energy due to the endothermic nature of the reaction. Optimizing the operational conditions for this decomposition process is crucial to ensure energy-efficient hydrogen production. In particular, we focus on determining the amount of slipped ammonia that provides the most efficient energy generation through mixed oxidation, where both slipped ammonia (unreacted NH3) and a small amount of hydrogen are used.

Key factors include the temperature and pressure of the ammonia cracking process, the ammonia-to-hydrogen ratio in the fuel mixture, and the catalyst kinetics. By optimizing these conditions, the goal is to maximize hydrogen production while minimizing the hydrogen consumed for fueling and the NH3 consumed for NOx reduction.

In addition to the mass and energy balance derived from process modeling, a comprehensive life cycle assessment (LCA) is conducted to evaluate the sustainability of ammonia as a hydrogen carrier. The LCA considers the entire process, from ammonia production (often through the energy-intensive Haber-Bosch process or renewable energy-driven water electrolysis) to transportation and ammonia cracking for hydrogen extraction. This assessment highlights the environmental and energy impacts at each stage, offering insights into how to reduce the overall carbon footprint of using ammonia as a hydrogen carrier.



Technoeconomic Analysis of a Methanol Conversion Process Using Microwave-Assisted Dry Reforming and Chemical Looping

Omar Almaraz, Srinivas Palanki

West Virginia University, United States of America

The global methanol market was valued at $28.78 billion in 2020 and is projected to reach $41.91 billion by 2026 [1]. Methanol has traditionally been produced from natural gas by first converting methane to syngas and then converting the syngas to methanol. However, this is a very energy-intensive process and produces a significant amount of the greenhouse gas carbon dioxide. Hence, there is motivation to look for alternative routes for the manufacture of methanol. In this research, a novel microwave reactor is used to simulate the dry reforming step of a process that converts methane to methanol. The objective is to produce 14,200 lbmol/h of methanol, which is the current production rate of methanol at Natgasoline LLC, Texas (USA) using the traditional steam reforming process [2].

Dry reforming requires a stream of carbon dioxide as well as a stream of methane to produce syngas. Additional hydrogen is required to achieve the necessary carbon-to-hydrogen ratio to produce methanol from syngas. These streams of carbon dioxide and hydrogen are generated via chemical looping. A three-reactor chemical looping system is developed that utilizes methane as the feed to produce a pure stream of hydrogen and a pure stream of carbon dioxide. The carbon dioxide stream from the chemical looping reactor system is mixed with a desulfurized natural gas stream and sent to a novel microwave syngas reactor, which operates at a temperature of 800 °C and a pressure of 1 bar to produce a mixture of carbon monoxide and hydrogen. The stream of hydrogen obtained via chemical looping is added to this syngas stream and sent to a methanol reactor train where methanol is produced. These reactors operate at a temperature range of 220-255 °C and a pressure of 76 bar. The reactor outlet stream is sent to a distillation train where the product methanol is separated from methane, carbon dioxide, hydrogen, and other products. The carbon dioxide is recycled back to the microwave reactor.

This process was simulated in Aspen Plus. The thermodynamic property methods used were RK-Soave for the conversion of methane to syngas and NRTL for the conversion of syngas to methanol. The energy requirement for operating the microwave reactor is determined via simulation in COMSOL. Heat integration tools are utilized to reduce the hot and cold utility usage in this integrated plant, leading to optimal operation. A technoeconomic analysis is conducted to determine the overall capital and operating costs of this novel process. The simulation results from this study demonstrate the significant potential of utilizing a microwave-assisted reactor for the dry reforming of methane.

References

[1] Methanol Market by Feedstock (Natural Gas, Coal, Biomass), Derivative (Formaldehyde, Acetic Acid), End-use Industry (Construction, Automotive, Electronics, Solvents, Packaging), and Region - Global Forecast to 2028, Markets and Markets. (2023). https://www.marketresearch.com/MarketsandMarkets-v3719/Methanol-Feedstock-Natural-Gas-Coal-30408866/

[2] M. E. Haque, N. Tripathi, and S. Palanki, "Development of an Integrated Process Plant for the Conversion of Shale Gas to Propylene Glycol," Industrial & Engineering Chemistry Research, 60 (1), 399-41 (2021)



A Techno-enviro-economic Transparency of a Coal-fired Power Plant: Integrating Biomass Co-firing and CO2 Sequestration Technology in a Carbon-priced Environment

Norhuda Abdul Manaf1, Nilay Shah2, Noor Fatina Emelin Nor Fadzil3

1Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom; 3Department of Chemical and Environmental Engineering, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur

The energy industry, as the primary contributor to worldwide greenhouse gas emissions, plays a crucial role in addressing global climate issues. Despite numerous governmental commitments and initiatives aimed at combating the root causes of rising temperatures, carbon dioxide (CO2) emissions from industrial and energy-related activities continue to climb, and coal-fired power plants are significant contributors to this situation. Currently, two promising strategies for mitigating emissions from coal-fired power plants are CO2 capture and storage (CCS) and biomass gasification. CCS is a mature technology in the field, while biomass gasification, a process that converts biomass into gaseous fuel, offers an encouraging avenue for generating sustainable energy resources. While extensive research has explored the techno-economic potential of coal-biomass co-firing with CCS (CB-CCS) retrofit systems, no work has considered the synergistic impact of coal power plant stranded assets, carbon price schemes, and co-firing ratios. This study develops an hourly-resolution optimization model framework using mixed-integer linear programming to predict the operational profile and economic potential of CB-CCS retrofit systems. Two dynamic scenarios for ten-year operations are evaluated, with and without carbon price imposition, subject to the minimum coal power plant stranded asset and CO2 emissions at different co-firing ratios. These scenarios reflect possible implementations in developed countries with established carbon price schemes, such as the United Kingdom and Australia, as well as in developing or middle-income countries without strong carbon policy schemes, such as Malaysia and Indonesia. The outcome of this work will help determine whether retrofitting individual coal power plants is worthwhile for reducing greenhouse gas emissions. It is also important to understand the role of CCS in the retrofit system and the associated co-firing ratio for biomass gasification systems. This work contributes to the international agenda delineated in the International Energy Agency (IEA) report addressing carbon lock-in and stranded assets, which potentially stem from the premature decommissioning of contemporary coal-based electricity generation facilities. It also aligns with Malaysia's National Energy Transition Roadmap, which focuses on bioenergy and CCS.



Methodology for multi-actor and multi-scale decision support for Water-Food-Energy systems

Amaya Saint-Bois1, Ludovic Montastruc1, Marianne Boix1, Olivier Therond2

1Laboratoire de Génie Chimique, UMR 5503 CNRS, Toulouse INP, UPS, 4 Allée Emile Monso, 31432 Toulouse Cedex 4, France; 2UMR 1121 LAE INRAE- Université de Lorraine – ENSAIA, 54000 Nancy, France

We have designed a generic multi-actor, multi-level framework to optimize the management of water-energy-food nexus systems. These systems, essential for human life, are characterized by water, energy, and food synergies and trade-offs at varied spatial and temporal scales and are managed by cross-sector decision-makers at multiple decision levels. They are complex and dynamic systems for which the operational level cannot be overlooked when designing adequate management strategies.

Our methodology combines spatial, operational, multi-agent-based integrated simulations of water-energy-food nexus systems with strategic decision-making methods (Saint-Bois et al., 2024). We have implemented it to allocate land-use alternatives to agricultural plots. The number of possible territory-level combinations of parcel land-use allocations equals the number of land-use alternatives explored for each parcel raised to the power of the number of parcels in the territory. Stochastic multi-criteria decision-making methods have been designed to provide decision support for large territories (more than 1,000 parcels). A multi-objective optimization method has been designed to produce optimized regional-level land-use scenarios.
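
Restating that search-space size as a formula, with a land-use alternatives per parcel and p parcels:

```latex
N_{\text{combinations}} = a^{\,p},
\qquad \text{e.g. } a = 2,\; p = 15\,224
\;\Rightarrow\; N = 2^{15224} \approx 10^{4582}
```

Even two alternatives per parcel in the case study below (p = 15,224, with a = 2 chosen purely for illustration) already give roughly 10^4582 combinations, which motivates the stochastic and optimization-based methods.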

The methodology has been applied to an agricultural watershed of approximately 800 km² and 15,224 parcels situated downstream of the French Aveyron River. The watershed experiences water stress and is located in one of France's sunniest regions. Renewable energy production on agricultural land appears to be a means to meet national renewable energy production targets and to move towards autonomous, sustainable agricultural systems and regions. The installation of renewable energy generation units on agricultural land facing water stress is a perfect illustration of a complex water-energy-food system for which a holistic approach is required. MAELIA (modelling of socio-agro-ecological systems for landscape integrated assessment; Therond et al., 2014), a multi-agent-based platform developed by French researchers to simulate complex agro-hydrological systems, has been used to simulate the dynamics of water-energy-food nexus systems at the operational level. Three strategic multi-criteria decision-making methods that combine Monte Carlo simulations with the Analytic Hierarchy Process method have been implemented. The first one is local: it selects land-use alternatives that optimize multi-sector parcel-level indicators. The other two are regional: decisions are based on regional indicators. The first regional decision-making method identifies the best uniform regional scenario from those known, and the second explores combinations of parcel land-use allocations and selects the one that optimizes multi-sector criteria at the regional level. A multi-objective optimization method that combines MILP (Mixed-Integer Linear Programming) and goal programming has been implemented with IBM's ILOG CPLEX optimization studio to find parcel-level land-use allocations that optimize regional multi-sector criteria.

The three decision-making methods provide the same result: covering all land that is suitable for solar panels with solar panels optimizes parcel-level and regional multi-criteria performance indicators. Perspectives include simulating scenarios with supportive agricultural government policies, adding social indicators, and designing a game-theory-based strategic decision-making method.



Synergies of Adaptive Learning for Surrogate-Based Flowsheet Model Maintenance

Balázs Palotai1,2, Gábor Kis1, János Abonyi2, Ágnes Bárkányi2

1MOL Group Plc.; 2Faculty of Engineering, University of Pannonia

The integration of digital models with business processes and real-time data access is pivotal for advancing Industry 4.0 and autonomous systems. This evolution necessitates that digital models maintain high fidelity and credibility to ensure reliable decision support in dynamic environments. Flowsheet models, commonly used for process simulation and optimization in such contexts, often face challenges related to convergence issues and computational demands during optimization. Surrogate models, which approximate complex models with simpler mathematical representations, present a promising solution to mitigate these challenges by estimating calibration factors for flowsheet models efficiently. Traditionally, surrogate models are trained using Latin Hypercube Sampling to capture a broad range of system behaviors. However, physical systems in industrial applications are typically operated within specific local regions, where globally trained surrogate models may not perform adequately. This discrepancy limits the effectiveness of surrogate models in accurately calibrating flowsheet models, especially when the system deviates from the conditions used during the surrogate model training.

This paper introduces a novel adaptive calibration methodology that combines the principles of active and adaptive learning to enhance surrogate model performance for flowsheet model calibration. The proposed approach iteratively refines the surrogate model by generating new data points in the local operating regions of interest using the flowsheet model itself. This adaptive retraining process ensures that the surrogate model remains accurate across both local and global domains, thus providing reliable calibration factors for the flowsheet model.
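The retraining loop can be pictured roughly as follows; this is a minimal sketch in which an analytic function stands in for the expensive flowsheet model and a scikit-learn Gaussian process stands in for the surrogate, with new samples drawn where the surrogate is most uncertain near the current operating point (the paper's actual sampling criterion may differ).

```python
# Sketch of adaptive surrogate retraining around the current operating region.
# `flowsheet` stands in for an expensive flowsheet-model evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
flowsheet = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2  # stand-in model

# Global training set (random design over [0, 1]^2, in lieu of an LHS)
X = rng.uniform(0, 1, size=(30, 2))
y = flowsheet(X)
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

x_op = np.array([0.9, 0.1])          # current local operating point
for it in range(5):
    # Candidate points in a small box around the operating point
    cand = np.clip(x_op + rng.normal(0, 0.05, size=(200, 2)), 0, 1)
    _, std = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(std)]     # most uncertain local candidate
    y_new = flowsheet(x_new[None, :])
    X = np.vstack([X, x_new]); y = np.append(y, y_new)
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
```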

A case study on a simplified refinery process demonstrates the effectiveness of the proposed methodology. The adaptive surrogate-based calibration significantly reduces the computational time associated with direct simulation-based calibration while maintaining high accuracy in model predictions. The results show an improvement in both the efficiency and precision of the flowsheet model calibration process, highlighting the synergistic benefits of integrating surrogate models into adaptive calibration strategies for industrial process engineering.

In summary, the synergies between adaptive maintenance of surrogate and flowsheet models offer a robust solution for maintaining model fidelity and reducing computational costs in dynamic industrial environments. This research contributes to the field of computer-aided process engineering by presenting a methodology that not only supports real-time decision-making but also enhances the adaptability and performance of digital models in the face of evolving physical systems.



Comparison of Prior Mean and Multi-Fidelity Bayesian Optimization of a Hydroformylation Reactor

Stefan Tönnis, Luise F. Kaven, Eike Cramer

Process Systems Engineering, RWTH Aachen University, Germany

Accurate process models are not always available and can be prohibitively expensive to obtain for model-based optimization. Hence, the process systems engineering (PSE) community has gained an interest in Bayesian Optimization (BO), as it approximates black-box objectives using probabilistic Gaussian process (GP) surrogate models [1]. BO iteratively proposes experiments by optimizing so-called acquisition functions and updates the surrogate model based on the results. Although BO is generally known as sample-efficient, treating chemical engineering design problems as fully black-box problems can still be prohibitively expensive, particularly for high-cost technical-scale experiments. At the same time, there is an extensive knowledge and modeling base for chemical engineering design problems that is fully neglected by black-box algorithms such as BO. One widely known option to include such prior knowledge in BO is prior mean modeling [2], where the user complements the BO algorithm with an initial guess, i.e., the prior mean. Alternatives include hybrid models or compositions of GPs with mechanistic equations [3]. A lesser-known alternative is augmenting the GP with lower-fidelity data [4], e.g., from low-cost simulations or approximate models. Such low-fidelity data can give cheap yet valuable insights, which reduces the number of high-cost experiments. In this work, we compare the usage of prior mean and multi-fidelity modeling for BO in PSE design problems. We first review how prior mean and multi-fidelity modeling can be incorporated, using multi-fidelity benchmark problems such as the well-known Forrester, Rosenbrock, and Rastrigin test functions. In a second step, we apply the two methods to optimize a multi-phase reaction mini-plant process, including a decanter separation step and a recycle stream. The process is based on the hydroformylation of 1-dodecene in microemulsion systems [5]. Overall, we observe accelerated convergence on the different test functions and the hydroformylation mini-plant. In fact, combining both prior mean and multi-fidelity modeling methods achieves the best overall fit of the GP surrogate models. However, our analysis also reveals how poorly chosen prior mean functions can cause the algorithm to get stuck in local minima or lead to numerical failure.
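As a minimal illustration of the prior-mean idea, the sketch below runs a tiny BO loop on the Forrester function with a hand-coded GP whose prior mean is a crude approximate model; the kernel, the lower-confidence-bound acquisition rule, and the prior mean are all illustrative choices, not the paper's setup.

```python
# Sketch: GP regression with a user-supplied prior mean inside a BO loop.
# The prior mean m(x) encodes an approximate (low-fidelity) process model.
import numpy as np

def kernel(A, B, ls=0.2):
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / ls ** 2)

f = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # Forrester function
m = lambda x: 10 * (x - 0.5)                          # crude prior-mean guess

X = np.array([[0.1], [0.5], [0.9]])                   # initial experiments
y = f(X).ravel()
grid = np.linspace(0, 1, 401)[:, None]

for it in range(8):
    K = kernel(X, X) + 1e-8 * np.eye(len(X))
    a = np.linalg.solve(K, y - m(X).ravel())          # residuals vs prior mean
    ks = kernel(grid, X)
    mu = m(grid).ravel() + ks @ a                     # posterior mean
    var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0))      # acquisition function
    x_new = grid[np.argmin(lcb)]
    X = np.vstack([X, x_new]); y = np.append(y, f(x_new))

print("best observed:", X[np.argmin(y)], y.min())
```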

Bibliography
[1] Roman Garnett. Bayesian Optimization. Cambridge University Press, Cambridge, United Kingdom, 2023.

[2] Aniket Chitre, Jayce Cheng, Sarfaraz Ahamed, Robert C. M. Querimit, Benchuan Zhu, Ke Wang, Long Wang, Kedar Hippalgaonkar, and Alexei A. Lapkin. pHbot: Self-driven robot for pH adjustment of viscous formulations via physics-informed-ML. Chemistry–Methods, 4(2), 2024.

[3] Leonardo D. González and Victor M. Zavala. BOIS: Bayesian optimization of interconnected systems. IFAC-PapersOnLine, 58(14):446–451, 2024.

[4] Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In Ryan P. Adams and Vibhav Gogate, editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pages 788–798. PMLR, 2020.

[5] David Müller, Markus Illner, Erik Esche, Tobias Pogrzeba, Marcel Schmidt, Reinhard Schomäcker, Lorenz T. Biegler, Günter Wozny, and Jens-Uwe Repke. Dynamic real-time optimization under uncertainty of a hydroformylation mini-plant. Computers & Chemical Engineering, 106:836–848, 2017.



A global sensitivity analysis for a bipolar membrane electrodialysis capturing carbon dioxide from the air

Grazia Leonzio1, Alexia Thill2, Nilay Shah2

1Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123 Cagliari, Italy; Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK; 2Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK

Global warming and climate change are two critical, current global challenges. For this reason, as the concentration of atmospheric carbon dioxide (CO2) continues to rise, it is becoming increasingly imperative to develop efficient and cost-effective technologies for controlling the atmospheric CO2 concentration. In addition to the capture of CO2 from flue gases and industrial processes, new solutions to capture CO2 from the air have been proposed and investigated in the literature, such as absorption, adsorption, ion-exchange resins, mineral carbonation, membranes, photocatalysis, cryogenic separation, and electrochemical and electrodialysis approaches (Leonzio et al., 2022). These are the well-known direct air capture (DAC) or negative emission technologies (NETs).

Among them, in the electrodialysis approach, a bipolar membrane electrodialysis (BPMED) stack is used to regenerate the hydroxide-based solvent (aqueous NaOH or KOH solution) coming from an absorption column that captures CO2 from the air (Sabatino et al., 2020). In this way, it is possible to recycle the solvent to the column and release the captured CO2 for its storage or utilization.

Although not yet deployed at an industrial or even pilot scale, CO2 separation through BPMED has already been described and analyzed in the literature (Eisaman et al., 2011; Sabatino et al., 2020, 2022; Vallejo Castano et al., 2024).

Regarding the economic aspect, a preliminary levelized cost of the BPM-based process was suggested to be 770 $/ton CO2, due to the high cost of the membrane, the large electricity consumption, and uncertainties on the lifetime of the materials (Sabatino et al., 2020). Given the relatively early stage of development, process optimization through a mathematical model is therefore useful to support design and development through identification of the best operating conditions and parameters. This can be complemented by a Global Sensitivity Analysis (GSA) aimed at identifying the operating parameters that most influence cost and energy consumption.

In this research, a mathematical model for a BPMED unit capturing CO2 from the air is proposed to conduct a GSA and identify the most effective operating conditions with respect to total costs (including capital and operating expenditures) and energy consumption, the considered Key Performance Indicators (KPIs). The investigated uncertain parameters are: current density, concentration in the rich solution, membrane active area, number of cell pairs, CO2 partial pressure in the gas phase, load ratio and carbon loading.
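A variance-based GSA of this kind can be sketched with the SALib package as below; the parameter bounds and the stand-in cost function are invented placeholders, with the real KPIs coming from the BPMED model.

```python
# Sketch of a variance-based GSA (Sobol indices) over BPMED-type parameters.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["current_density", "membrane_area", "cell_pairs", "load_ratio"],
    "bounds": [[100, 1000], [1, 10], [10, 100], [0.1, 0.9]],  # illustrative
}

def total_cost(p):  # stand-in KPI; replace with the BPMED cost/energy model
    j, A, n, lr = p
    return 0.002 * j * A * n + 50 / lr + 0.1 * j

X = saltelli.sample(problem, 1024)          # Saltelli sampling design
Y = np.apply_along_axis(total_cost, 1, X)
Si = sobol.analyze(problem, Y)              # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.3f}, ST={st:.3f}")
```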

References

Vallejo Castano, S., Shu, Q., Shi, M., Blauw, R., Loldrup Fosbøl, P., Kuntke, P., Tedesco, M., Hamelers, H.V.M., 2024. Chemical Engineering Journal 488, 150870

Eisaman, M. D.; Alvarado, L.; Larner, D.; Wang, P.; Littau, K.A. 2011. Energy Environ. Sci. 4 (10), 4031.

Leonzio, G., Fennell, P.S., Shah, N., 2022. Appl. Sci., 12(16), 8321

Sabatino, F., Mehta, M., Grimm, A., Gazzani, M., Gallucci, F., Kramer, G.J., and Annaland, M., 2020. Ind. Eng. Chem. Res. 59, 7007−7020

Sabatino, F., Gazzani, M., Gallucci, F., Annaland, M., 2022. Ind. Eng. Chem. Res. 61, 12668−12679



Refrigerant Selection and Cycle Design for Industrial Heat Pump Applications exemplified for Distillation Processes

Jonas Schnurr, Momme Adami, Mirko Skiborowski

Hamburg University of Technology, Institute of Process System Engineering, Germany

Abstract

In the context of global warming, the essential objectives for industry are the transition to renewable energy and the improvement of energy efficiency. A potential approach to achieving both of these goals in a single step is the implementation of heat pumps, which effectively recover low-temperature waste heat that would otherwise be lost to the environment by elevating it to a higher temperature level where it can be reused or recycled within the process. The application range of heat pumps is not limited to new designs; they also have huge potential as retrofit options for existing processes, reducing the external energy demand [1] and electrifying industrial processes, thereby promoting a more sustainable industry with an increased share of renewable electricity generation.

Nevertheless, the optimal design of heat pumps depends heavily on the selection of an appropriate refrigerant, as the refrigerant performance is influenced by both thermodynamic properties and the heat pump cycle design, which is typically fixed in current selection approaches. Methods like iterative approaches [2], database screening followed by simulations [3], and optimization of thermodynamic parameters with subsequent identification of real refrigerants [4] are computationally intensive and time-consuming. Although these methods can identify thermodynamically beneficial refrigerants, practical application may be hindered by limitations of the compressor. Additionally, these approaches are challenging to implement in process design tools.

The current work presents a novel approach for fast screening and identification of suitable refrigerants and heat pump cycle designs for specific applications, considering a variety of established refrigerants. The method automatically evaluates the performance of 38 pure refrigerants for any heat pump with defined heat sink and source, adapting the heat pump design by incorporating an internal heat exchanger where superheating the refrigerant prior to compression is required. By considering practical constraints such as compression ratio and compressor discharge temperature, the remaining suitable refrigerants are ranked based on energy demand or COP.
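The screening logic can be sketched with CoolProp property calls, as below: a simple saturated cycle with isentropic compression is evaluated for each candidate, compressor constraints are checked, and survivors are ranked by heating COP. The fluid list, temperature levels, and limits are illustrative only; dry fluids would in practice be superheated before compression, which is what the internal heat exchanger in the proposed design provides.

```python
# Sketch of a constraint-based refrigerant screening with CoolProp.
from CoolProp.CoolProp import PropsSI

candidates = ["R290", "R600a", "R1234ze(E)", "R717", "n-Butane", "n-Pentane"]
T_evap, T_cond = 60 + 273.15, 110 + 273.15    # heat source/sink levels in K
max_pr, max_T_dis = 6.0, 140 + 273.15         # illustrative compressor limits

ranking = []
for fluid in candidates:
    if T_cond >= PropsSI("Tcrit", fluid) - 5:        # subcritical cycles only
        continue
    p_low = PropsSI("P", "T", T_evap, "Q", 1, fluid)
    p_high = PropsSI("P", "T", T_cond, "Q", 1, fluid)
    h1 = PropsSI("H", "T", T_evap, "Q", 1, fluid)    # saturated vapour in
    s1 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
    h2 = PropsSI("H", "P", p_high, "S", s1, fluid)   # isentropic discharge
    T2 = PropsSI("T", "P", p_high, "S", s1, fluid)   # discharge temperature
    h3 = PropsSI("H", "T", T_cond, "Q", 0, fluid)    # saturated liquid out
    if p_high / p_low > max_pr or T2 > max_T_dis:    # compressor constraints
        continue
    cop_heating = (h2 - h3) / (h2 - h1)
    ranking.append((fluid, p_high / p_low, T2 - 273.15, cop_heating))

for f, pr, t2, cop in sorted(ranking, key=lambda r: -r[3]):
    print(f"{f}: PR={pr:.2f}, T_discharge={t2:.0f} C, COP={cop:.2f}")
```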

The application of an integrated process design and screening is demonstrated for the evaluation of different distillation processes, by linking the screening tool with an existing shortcut screening framework proposed by Skiborowski [5]. This integration enables the combination of heat pumps with other energy integration methods, like thermal coupling, thereby facilitating a more comprehensive assessment of potential process variants and the identification of the most promising process alternatives.

References

[1] A. A. Kiss, C. A. I. Ferreira, Heat Pumps in Chemical Process Industry, CRC Press, Boca Raton, 2017

[2] J. Jiang, B. Hu, T. Ge, R. Wang, Energy 2022, 241, 1222831.

[3] M. O. McLinden, J. S. Brown, R. Brignoli, A. F. Kazakov, P. A. Domanski, Nature Communications 2017, 8 (1), 1-9.

[4] J. Mairhofer, M. Stavrou, Chemie Ingenieur Technik 2023, 95 (3), 458-466.

[5] M. Skiborowski, Chemical Engineering Transactions 2018, 69, 199-204.



CO2 conversion to polyethylene based on power-to-X technology and renewable resources

Monika Dokl1, Blaž Likozar2, Chunyan Si3, Zdravko Kravanja1, Yee Van Fan3,4, Lidija Čuček1

1Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, 2000 Maribor, Slovenia; 2Department of Catalysis and Chemical Reaction Engineering, National Institute of Chemistry, Hajdrihova 19, Ljubljana 1001, Slovenia; 3Sustainable Process Integration Laboratory, Faculty of Mechanical Engineering, Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech Republic; 4Environmental Change Institute, University of Oxford, Oxford OX1 3QY, United Kingdom

In addition to increasing material and energy efficiency, the plastics sector is already stepping up its efforts to minimize greenhouse gas emissions during the production phase in order to support the EU's transition to climate neutrality by 2050. These initiatives include expanding the circular economy in the plastics value chain through recycling, increasing the use of renewable raw materials, switching to renewable energy and developing advanced carbon capture and utilization methods. Bio-based plastics have been extensively explored as a potential substitute for plastics derived from fossil fuels. Despite their potential, there are concerns about sustainability, including the impact on land use, water resources and biodiversity. An alternative route is to convert CO2 into valuable chemicals using power-to-X technology, which uses surplus renewable energy to transform CO2 into fuels, chemicals and plastics. In this study, a process simulation of polyethylene production using CO2 and renewable electricity is performed to identify feedstocks aligned with climate objectives. CO2-based polyethylene production is compared with conventional fossil-based production, and the burdening and unburdening effects of a potential transition to the production of renewable plastics are evaluated.



Design of Experiments Algorithm for Comprehensive Exploration and Rapid Optimization in Chemical Space

Kazuhiro Takeda1, Kondo Masaru2, Muthu Karuppasamy3,4, Mohamed S. H. Salem3,5, Takizawa Shinobu3

1Shizuoka University, Japan; 2University of Shizuoka, Japan; 3Osaka University, Japan; 4Graduate School of Pharmaceutical Sciences, Osaka University, Japan; 5Suez Canal University, Egypt

1. Introduction

Bayesian Optimization (BO)1) is known for its ability to explore optimal conditions with a limited number of experiments. However, the number of experiments conducted through BO is often insufficient to fully understand the experimental condition space. To address this, various experimental design methods have been proposed. Among these, the Definitive Screening Design (DSD)2) has been introduced as a method that minimizes confounding and requires fewer experiments. This study proposes an algorithm that combines DSD and BO to reduce confounding, ensure sufficient experimentation to understand the experimental condition space, and enable rapid optimization.

2. Fusion Algorithm of DSD and BO

In DSD, each factor is set at three levels (+, 0, -), and experiments are conducted with one factor at 0 and the others at + or -. This process is repeated for each of the m factors, and a final experiment is conducted with all factors set to 0, resulting in a total of 2m+1 experiments. Typically, after conducting experiments based on DSD, a model is created by selecting factors using criteria such as AIC (Akaike information criterion), followed by additional experiments to optimize the objective function. Using BO allows for optimization with fewer additional experiments.

In this study, the levels (+ and -) required by DSD are determined based on BO, enabling the integration of BO from the DSD experiment stage. The proposed algorithm is outlined as follows; a simplified code sketch follows the numbered steps:

1. Formulate a DSD experimental plan with 0, +, and - levels.

2. Conduct experiments using the maximum and minimum ranges (as defined by DSD) until all variables are no longer unique.

3. For the next experimental condition, use BO to search within the range of the original planned values with the same sign.

4. Conduct experiments under the explored conditions.

5. If the experimental plan formulated in Step 1 is complete, proceed to the next step; otherwise, return to Step 3.

6. Use BO to explore the optimal conditions within the range.

7. Conduct experiments under the explored conditions.

8. If the convergence criteria are met, terminate the process; otherwise, return to Step 6.
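A simplified sketch of this fusion on a toy quadratic objective could look as follows; the design construction follows the 2m+1-run description above, the step-3 level search is reduced to evaluating the planned DSD levels directly, and the GP acquisition in steps 6-8 is an illustrative lower-confidence-bound rule rather than the paper's exact choice.

```python
# Minimal sketch of the DSD+BO fusion on a toy objective to be minimized.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

m, lb, ub = 3, -2.0, 2.0
f = lambda x: np.sum(x ** 2)               # toy objective

# Step 1: DSD-style plan: for each factor, a fold-over pair with that factor
# at 0 and the others at +/-, plus one centre run (2m+1 runs in total).
rng = np.random.default_rng(1)
plan = []
for i in range(m):
    row = rng.choice([-1.0, 1.0], size=m)
    row[i] = 0.0
    plan += [row, -row]                    # mirror pair
plan.append(np.zeros(m))
X = np.array(plan) * ub                    # scale levels to the box
y = np.array([f(x) for x in X])            # steps 2-5: run the plan

# Steps 6-8: BO refinement with a GP and a lower-confidence-bound search
for it in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(lb, ub, size=(500, m))
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmin(mu - 1.5 * sd)]
    X = np.vstack([X, x_new]); y = np.append(y, f(x_new))

print("best:", X[np.argmin(y)], y.min())
```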

3. Numerical Experiment

Numerical experiments were conducted to minimize each objective function. The upper and lower limits of each variable were set at (-2, 2), and the experiment was conducted 10 times. The results indicate that the proposed algorithm converges faster than BO alone. Moreover, the variability in convergence speed was also reduced. Although not shown due to space constraints, the proposed algorithm also demonstrated faster and more stable convergence compared to other experimental design methods combined with BO.

4. Conclusion

This study proposed an algorithm combining DSD and BO to minimize confounding, reduce the required experiments, and enable rapid optimization. Numerical experiments demonstrated that the algorithm converges early and stably. Future work will involve verifying the effectiveness of the proposed algorithm through actual experiments.

References

1. J. Snoek, et al.; arXiv:1206.2944, pp.1-9, 2012

2. B. Jones and C. J. Nachtsheim; J. Qual. Technol., Vol.43, pp.1-15, 2011



Surrogate Modeling for Real-Time Simulation of Spatially Distributed Dynamically Operated Chemical Reactors: A Power-to-X Case Study

Luisa Peterson1, Ali Forootani2, Edgar Ivan Sanchez Medina1, Ion Victor Gosea1, Peter Benner1,3, Kai Sundmacher1,3

1Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, Magdeburg, 39106, Germany; 2Helmholtz Centre for Environmental Research, Permoserstraße 15, Leipzig, 04318 , Germany; 3Otto von Guericke University Magdeburg, Universitaetsplatz 2, Magdeburg, 39106, Germany

Spatially distributed dynamical systems are omnipresent in chemical engineering. These systems are often modeled by partial differential equations (PDEs) to describe complex, coupled processes. However, solving PDEs can be computationally expensive, especially for highly nonlinear systems. This is particularly challenging for outer-loop computations such as optimization, control, and uncertainty quantification, all requiring real-time performance. Surrogate models reduce computational costs and are classified into data-fit, reduced-order, and hierarchical models. Data-fit models use statistical techniques or machine learning to map input-output relationships, while reduced-order models project equations onto a lower-dimensional subspace. Hierarchical models simplify physical or numerical methods to reduce complexity.

In this study, we simulate the dynamic behavior of a catalytic CO2 methanation reactor, critical for Power-to-X applications that convert CO2 and green hydrogen to methane. The reactor must adapt to changing load conditions, which requires real-time executable simulation models. A one-dimensional mechanistic model, calibrated with pilot plant data, simulates temperature and CO2 conversion. We develop and test three surrogate models using load change simulation data. (i) Operator Inference (OpInf) projects the system into a lower dimensional subspace and infers a quadratic polynomial within this space, incorporating stability constraints to improve prediction reliability. (ii) Sparse Identification of Nonlinear Dynamics (SINDy) uncovers the system's governing equations through sparse regression. Our adaptation of SINDy uses Q-DEIM to efficiently select significant data for regression inputs and is implemented within a neural network framework with a Physics-Informed Neural Network (PINN) loss function. (iii) The proposed Graph Neural Network (GNN) uses a windowed graph structure with Graph Attention Networks.
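As a flavour of the SINDy step, the sketch below fits sparse dynamics to a stand-in two-state trajectory with the pysindy package; the reactor study's Q-DEIM sampling and PINN loss function are omitted here.

```python
# Sketch of the SINDy step on simulated trajectory data using pysindy.
import numpy as np
import pysindy as ps

t = np.linspace(0, 10, 1000)
# Stand-in "reactor" trajectories: a damped nonlinear oscillator
x = np.column_stack([np.exp(-0.1 * t) * np.cos(t),
                     np.exp(-0.1 * t) * np.sin(t)])

model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.05),               # sparse regression
    feature_library=ps.PolynomialLibrary(degree=2),   # candidate terms
)
model.fit(x, t=t)
model.print()          # recovered sparse governing equations
```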

When reproducing data from the mechanistic model, OpInf achieves a low relative Frobenius norm error of 0.043% for CO2 conversion and 0.030% for temperature. The quadratic, guaranteed stable polynomial provides a good balance between interpretability and performance. SINDy gives relative errors of 2.37% for CO2 conversion and 0.91% for temperature. While SINDy is the most interpretable model, it is also the most computationally intensive to evaluate, requires manual tuning of the regression library, and occasionally experiences stability issues. GNNs produce relative errors of 1.08% for CO2 conversion and 0.81% for temperature. GNNs offer the fastest evaluation and require the least domain-specific knowledge of the three methods, but their black-box nature limits interpretability and they are prone to overfitting and can struggle with extrapolation. All surrogate models reduce computational time while maintaining acceptable accuracy, making them suitable for real-time decision-making in dynamic reactor operations. The choice of model depends on the application requirements, in particular the balance between speed and interpretability. In this case, OpInf provides the best overall balance, while SINDy and GNNs provide useful trade-offs depending on whether interpretability or speed is prioritized [2].


References

[1] R. T. Zimmermann, J. Bremer, and K. Sundmacher, “Load-flexible fixed-bed reactors by multi-period design optimization,” Chemical Engineering Journal, vol. 428, 130771, 2022, DOI: 10.1016/j.cej.2021.130771.

[2] L. Peterson, A. Forootani, E. I. S. Medina, I. V. Gosea, K. Sundmacher, and P. Benner, “Towards Digital Twins for Power-to-X: Comparing Surrogate Models for a Catalytic CO2 Methanation Reactor”, Authorea Preprints, 2024, DOI: 10.36227/techrxiv.172263007.76668955/v1.



Computer Vision for Chemical Engineering Diagrams

Maged Ibrahim Elsayed Eid, Giancarlo Dalle Ave

McMaster University, Canada

This paper details the development of a state-of-the-art object, word, and connectivity detection system tailored for the analysis of chemical engineering diagrams, namely Process Flow Diagrams (PFDs), Block Flow Diagrams (BFDs), and Piping and Instrumentation Diagrams (P&IDs), utilizing cutting-edge computer vision methodologies. Chemical engineering diagrams play a pivotal role in the field, offering visual representations of plant processes and equipment. They are integral to the design, analysis, and operational phases of chemical processes, aiding in process documentation and serving as a foundation for simulating and monitoring the performance of essential equipment operations.

The necessity of automating the interpretation of BFDs, PFDs, and P&IDs arises from their widespread use and the challenges associated with their manual analysis. These diagrams, often stored as image-based PDFs, present significant hurdles in terms of data extraction and interpretation. Manual processing is not only labor-intensive but also prone to errors and inconsistencies. Given the complexity and volume of these diagrams, which include intricate details of plant processes and equipment, manual methods can lead to delays and inaccuracies. Automating this process with advanced computer vision techniques addresses these challenges by providing a scalable, accurate, and efficient means to extract and analyze information.

The primary aim of this project is to automate the interpretation of various chemical engineering diagrams, a task that has traditionally relied on manual expertise. This automation encompasses the precise detection of unit operations, text recognition, and the mapping of interconnections between components. To achieve this, the proposed methodology relies on rule-based and predefined approaches: unit operations are detected by analyzing visual patterns and shapes, text is recognized using OCR techniques, and interconnections between components are mapped based on spatial relationships. The method deliberately avoids deep learning, which can be computationally intensive and often requires extensive labeling to effectively differentiate between various objects; these challenges can complicate implementation and scalability, making deep learning less suitable for this application. The results showed high detection accuracy, successfully identifying unit operations, text, and interconnections with reliable performance, even in complex diagrams.
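A minimal version of such a rule-based pipeline, using OpenCV contours for symbol candidates and Tesseract OCR for text, might look as follows; the image file, area threshold, and shape rules are illustrative assumptions rather than the paper's tuned values.

```python
# Sketch of a rule-based pipeline: contour-based symbol detection plus OCR.
import cv2
import pytesseract

img = cv2.imread("pfd.png")                     # hypothetical diagram image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 500:                # skip small artefacts/lines
        continue
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    x, y, w, h = cv2.boundingRect(c)
    # Crude shape rule: few polygon vertices -> vessel-like, many -> round unit
    label = "vessel-like symbol" if len(approx) <= 6 else "round unit symbol"
    text = pytesseract.image_to_string(gray[y:y + h, x:x + w]).strip()
    print(label, (x, y, w, h), repr(text))
```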



Digital Twin for Operator Training and Real-Time Support for a Pilot-Scale Packed Batch Distillation Column

Mads Stevnsborg, Jakob K. Huusom, Krist V. Gernaey

PROSYS DTU, Denmark

Digital Twin (DT) is a frequently used term in industry and academia to describe data-centric models that accurately depict a physical system counterpart. DTs are typically used either in an offline context as Virtual Laboratories (VL) [4, 5] or in real-time applications as predictive toolboxes [2]. In processes restricted by a low degree of automation, which instead rely greatly on operator competence in key decision-making situations, DTs can act as a guiding tool [1, 3]. This work explores the challenge of developing DTs to support operators by developing a combined virtual laboratory and decision-support tool for students conducting experiments on a pilot-scale packed batch distillation column at the Technical University of Denmark [2]. Batch distillation is an unsteady operation, controlled here by a set of manual valves, which the operator must continuously balance to meet purity constraints without excessive consumption of utilities. The realisation is achieved by leveraging the software development and IT operations (DevOps) methodology with a modular compartmentalisation of DT resources to better leverage model applicability across various projects. The final solution comprises several stand-alone packages that together offer real-time communication with physical equipment through OPC-UA endpoints and a scalable simulation environment through web-based user interfaces (UI). The advantages of this implementation strategy are flexibility and speed, allowing process models to be updated continuously as data is generated and offering process operators the necessary training and knowledge before and during operation to run experiments effectively, enhancing the learning outcome.
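At its simplest, the real-time side of such a DT reduces to reading equipment tags over OPC-UA, roughly as sketched below with the python-opcua client; the endpoint URL and node identifiers are placeholders, not the actual DTU column tags.

```python
# Sketch of the real-time data link: reading column sensors over OPC-UA.
from opcua import Client

client = Client("opc.tcp://pilot-column.example:4840")   # placeholder endpoint
client.connect()
try:
    top_temp = client.get_node("ns=2;s=Column.TopTemperature")   # placeholder ids
    reflux = client.get_node("ns=2;s=Column.RefluxValve")
    print("T_top =", top_temp.get_value(), "reflux valve =", reflux.get_value())
finally:
    client.disconnect()
```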

References

[1] F. Bähner et al., 2021,” Challenges in Optimization and Control of Biobased Process Systems: An Industrial-Academic Perspective”, Industrial and Engineering Chemistry Research, Volume 60, Issue 42, pp. 14985-15003

[2] M. Jones et al., 2022, “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU)”, Computer-aided Chemical Engineering, Volume 49, pp. 1525-1530

[3] V. Steinwandter et al., 2019, “Data science tools and applications on the way to Pharma 4.0”, Drug Discovery Today, Volume 24, Issue 9, pp. 1795-1805

[4] M. Schueler & T. Mehling, 2022, “Digital Twin- A System for Testing and Training”, Computer Aided Chemical Engineering, Volume 52, pp. 2049-2055

[5] J. Ismite et al., 2019, “A systems engineering framework for the design of bioprocess operator training simulators”, E3S Web of Conferences, Volume 78, 03001

[6] N. Kamihama et al., 2011, “Isobaric Vapor−Liquid Equilibria for Ethanol + Water + Ethylene Glycol and Its Constituent Three Binary Systems”, Journal of Chemical and Engineering Data, Volume 57, Issue 2, pp. 339-344



Hybridizing Neural Networks with Physical Laws for Advanced Process Modeling in Chemical Engineering

Jana Mousa, Stéphane Negny

INP Toulouse, France

Neural networks (NNs) have become indispensable tools for modeling complex systems due to their ability to learn and predict from vast datasets. Their success spans a wide range of applications, including chemical engineering processes. However, one key limitation of NNs is their lack of physical interpretability, which becomes critical when dealing with complex systems governed by known physical laws. In chemical engineering, particularly in unit operations like reactors, considered the heart of any process, the accuracy and reliability of models depend not only on their predictive capabilities but also on their adherence to physical constraints such as mass and energy balances, reaction kinetics, and equilibrium constants.

This study investigates the integration of neural networks with nonlinear data reconciliation (NDR) as a method to impose physical constraints on predictive models. Nonlinear data reconciliation is a mathematical technique used to adjust measured data to satisfy predefined physical laws, enhancing model consistency and accuracy. By embedding NDR into neural networks, the resulting hybrid models ensure physical realism while retaining the flexibility and learning power of NNs.

The framework first trains an NN to capture nonlinear system relationships, then applies NDR to correct predictions so that key physical metrics, such as conversion, selectivity, and equilibrium constants in reactors, are satisfied. This ensures that the model aligns not only with data but also with fundamental physical laws, enhancing the model's interpretability and reliability. Furthermore, the method's efficacy has been evaluated by comparing it to other hybrid approaches, such as Karush-Kuhn-Tucker Neural Networks (KKT-NN) and Karush-Kuhn-Tucker Physics-Informed Neural Networks (KKT-PINN), both of which aim to enforce physical constraints within neural networks.
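The reconciliation step can be illustrated as a constrained projection of the NN output, as in the sketch below: a toy MLP predicts two outlet flows, and the prediction is then minimally adjusted so the flows sum to the known inlet flow. The data, network size, and single mass-balance constraint are illustrative stand-ins for the paper's NDR formulation.

```python
# Sketch of the NN + nonlinear data reconciliation idea.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
F_in = 100.0                                        # total inlet mass flow
X = rng.uniform(0, 1, size=(200, 2))                # e.g. temperature, tau
frac = np.clip(0.3 + 0.4 * X[:, :1] + 0.05 * rng.normal(size=(200, 1)), 0, 1)
Y = np.hstack([F_in * frac, F_in * (1 - frac)])     # product / unreacted flows

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, Y)

def reconcile(y_pred):
    # Smallest adjustment that satisfies the mass balance exactly
    obj = lambda y: np.sum((y - y_pred) ** 2)
    cons = {"type": "eq", "fun": lambda y: np.sum(y) - F_in}
    return minimize(obj, y_pred, constraints=[cons]).x

y_raw = nn.predict(X[:1])[0]
y_rec = reconcile(y_raw)
print("raw:", y_raw, "sum:", y_raw.sum())
print("reconciled:", y_rec, "sum:", y_rec.sum())
```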

In conclusion, the integration of physical interpretability into neural networks through nonlinear data reconciliation significantly enhances modeling accuracy and reliability in engineering applications. Future enhancements may focus on refining the method to accommodate a wider range of engineering challenges, thereby facilitating its application in diverse fields such as process engineering and system optimization.



Transferring Graph Neural Networks for Soft Sensor Modeling using Process Topologies

Maximilian F. Theisen1, Gabrie M.H. Meesters2, Artur M. Schweidtmann1

1Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands; 2Product and Process Engineering, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Transfer learning allows, in theory, machine learning models to be re-used and fine-tuned, thus reducing data requirements. In practice, however, transferring data-driven soft sensor models is often not possible. In particular, the fixed input structure of standard soft sensor models prohibits transfer if, e.g., the sensor information is not identical in all plants.

We propose a process-aware graph neural network approach for transfer learning of soft sensor models across multiple plants. In our method, plants are modeled as graphs: Unit operations are nodes, streams are edges, and sensors are embedded as attributes. Our approach brings two advantages for transfer learning: First, we not only include sensor data but also crucial information on the plant topology. Second, the graph neural network algorithm is flexible with respect to its sensor inputs. We test the transfer learning capabilities of our modeling approach on ammonia synthesis loops with different process topologies (Moulijn, 2013). We build a soft sensor predicting the ammonia concentration in the product. After training on data from several processes, we successfully transfer our soft sensor model to a previously unseen process with a different topology. Our approach promises to extend the use case of data-driven soft sensors to cases where data from similar plants is leveraged.
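The plant-as-graph encoding can be sketched with PyTorch Geometric as below; the four-unit topology, sensor features, and GCN layers are toy choices that show the structure, not the architecture used in the paper.

```python
# Sketch of the plant-as-graph encoding: unit operations as nodes, streams as
# directed edges, sensor readings as node attributes; a small GNN regresses a
# graph-level product concentration.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Nodes: 0 compressor, 1 reactor, 2 separator, 3 recycle mixer (toy loop)
edge_index = torch.tensor([[0, 1, 2, 2, 3],
                           [1, 2, 3, 0, 0]], dtype=torch.long)
x = torch.tensor([[8.0, 310.0],     # per-node sensors, e.g. [pressure, T];
                  [9.5, 720.0],     # missing sensors can be zero-padded
                  [9.0, 450.0],
                  [8.5, 400.0]])
data = Data(x=x, edge_index=edge_index)

class SoftSensor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(2, 16)
        self.conv2 = GCNConv(16, 16)
        self.head = torch.nn.Linear(16, 1)

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        h = torch.relu(self.conv2(h, data.edge_index))
        return self.head(h.mean(dim=0))   # pooled graph-level prediction

print(SoftSensor()(data))                  # untrained toy prediction
```

Because the message passing operates on whatever graph it receives, the same trained model can be applied to a plant with a different topology or sensor set, which is what enables the transfer.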

References

Moulijn, J. A. (2013). Chemical Process Technology (2nd ed.). (M. Makkee & A. van Diepen, Eds.) Chichester, West Sussex: John Wiley & Sons Inc.



Production scheduling based on Real-time Optimization and Zone Control Nonlinear Model Predictive Controller

José Matias1, Alvaro Marcelo Acevedo Peña2

1KU Leuven, Belgium; 2YPFB Refinación S.A.

The chemical industry has a high demand for process optimization methods and tools that enhance profitability while operating near nominal capacity. Product inventories, both in-process and end-of-process, serve as buffers to mitigate fluctuations in operation and demand while maintaining consistent and predictable production. Efficient product inventory management is crucial for the profitable operation of chemical plants. To ensure optimal operation, various strategies have been proposed that consider in-process storage and aim to satisfy mass balances while avoiding bottlenecks [1].

When final product demand is highly oscillatory with unexpected stoppages, end-of-process inventories must be carefully controlled within minimum and maximum bounds. This prevents plant shutdowns and ensures compliance with legal product supply requirements. In both cases, plant-wide operations should be considered when making in- and end-of-process product inventory level decisions to improve overall profitability [2].

To address this problem, we propose a holistic hierarchical two-layered strategy. The upper layer uses real-time optimization (RTO) to determine optimal plant flow rates from an economic perspective. The lower layer employs a zone control nonlinear model predictive controller (NMPC) to define inventory setpoints. The idea is that the RTO defines setpoints for the flow rates that manipulate plant throughput, while the NMPC maintains inventory levels within desired bounds and keeps flow rates as close as possible to the RTO-defined setpoints. The use of this two-layered holistic approach is novel for this specific problem; however, our primary contribution lies in introducing an ensemble of optimization problems at the RTO level. Each RTO problem is associated with a different uncertain product demand scenario. This enables us to recompute optimal throughput manipulator setpoints based on the current scenario, improving the overall strategy performance.
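The zone-control idea in the lower layer can be sketched for a single inventory with CasADi's Opti stack, as below: soft bounds on the level via slack variables, and a heavily weighted penalty on zone violations versus setpoint tracking. The one-tank model and all numbers are illustrative, not the three-column case study.

```python
# Sketch of a one-tank zone-control NMPC tracking an RTO flow setpoint.
import casadi as ca

N, dt, A = 20, 1.0, 5.0                    # horizon, step, tank area
h0, h_min, h_max = 3.0, 2.0, 4.0           # level state and zone bounds
q_in, q_sp = 2.5, 2.3                      # forecast inflow, RTO setpoint

opti = ca.Opti()
q = opti.variable(N)                       # manipulated outflow
h = opti.variable(N + 1)                   # inventory level
s = opti.variable(N + 1)                   # zone-violation slack

opti.subject_to(h[0] == h0)
for k in range(N):
    opti.subject_to(h[k + 1] == h[k] + dt / A * (q_in - q[k]))  # mass balance
    opti.subject_to(opti.bounded(0.0, q[k], 5.0))
opti.subject_to(s >= 0)
opti.subject_to(h >= h_min - s)            # soft zone bounds
opti.subject_to(h <= h_max + s)

# Track the RTO setpoint; penalize zone violations much more heavily
opti.minimize(ca.sumsqr(q - q_sp) + 1e4 * ca.sumsqr(s))
opti.solver("ipopt")
sol = opti.solve()
print(sol.value(q[0]))                     # first move applied to the plant
```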

We tested our strategy on a three-stage distillation column system that separates a mixture of four products, inspired by an LPG production plant with recycle split vapour (RSV) invented by Ortloff Ltd [3]. While the lightest and cheapest product is directly sent to a pipeline, the other three more valuable products are stored in tanks. Demand for these three products fluctuates significantly, but can be forecasted in advance, allowing for proactive measures. We compared the results of our holistic two-layered strategy to typical actions taken by plant operators in various uncertain demand scenarios. Our approach addresses the challenges of mitigating bottlenecks and minimizing inventory fluctuations, and it outperforms the operator decisions from an economic perspective.

[1] Skogestad, S., 2004. Computers & Chemical Engineering, 28(1-2), pp.219-234.

[2] Downs, J.J. and Skogestad, S., 2011. Annual Reviews in Control, 35(1), pp.99-110.

[3] Zhang S. et al., 2020. Comprehensive Comparison of Enhanced Recycle Split Vapour Processes for Ethane Recovery, Energy Reports, 6, pp.1819–1837.



Talking like Piping and Instrumentation Diagrams (P&IDs)

Achmad Anggawirya Alimin, Dominik P. Goldstein, Lukas Schulze Balhorn, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Piping and Instrumentation Diagrams (P&IDs) are pivotal in process engineering, serving as comprehensive references across multiple disciplines (Toghraei, 2019). However, the intricate nature of P&IDs and the complexity of the underlying systems make it challenging for engineers to examine flowsheet overviews and details efficiently and accurately. Recent developments in flowsheet digitalization through computer vision and data exchange in the process industry (DEXPI) have opened up the potential for a unified machine-readable format for P&IDs (Theisen et al., 2023). Yet industrial DEXPI P&IDs are often extremely complex, frequently spanning thousands of pages.

We propose the ChatP&ID methodology, which allows engineers to communicate with P&IDs using natural language. In particular, we represent DEXPI P&IDs as labelled property graphs and integrate them with Large Language Models (LLMs). The approach consists of three main parts: 1) a P&ID graph representation developed following the DEXPI specification via our pyDEXPI Python package (Goldstein et al., n.d.); 2) a tool for generating P&ID knowledge graphs from pyDEXPI; 3) integration of the P&ID knowledge graph with LLMs using graph-based retrieval-augmented generation (graph-RAG). This extends the LLM's ability to retrieve contextual data from P&IDs and mitigates hallucinations. Leveraging the LLM's large corpus, the model is also able to interpret process information in P&IDs, which could support engineers in their daily tasks. In the future, this work also opens up opportunities in the context of other generative Artificial Intelligence (genAI) solutions for P&IDs, such as auto-generation or auto-correction (Schweidtmann, 2024).
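The graph-RAG step can be pictured with a toy labelled property graph, as sketched below using networkx; the node and edge attributes and the naive keyword retrieval are placeholders (this is not the pyDEXPI API), and the final LLM call is left abstract since it is tool-specific.

```python
# Sketch of graph-RAG over a toy P&ID property graph: retrieve the subgraph
# around entities mentioned in a question and serialize it as LLM context.
import networkx as nx

G = nx.DiGraph()
G.add_node("P-101", type="CentrifugalPump")
G.add_node("V-201", type="Vessel")
G.add_node("LIC-201", type="LevelController")
G.add_edge("P-101", "V-201", type="Pipe", line="DN80")
G.add_edge("LIC-201", "V-201", type="SignalLine")

def retrieve_context(graph, question):
    # Naive retrieval: keep nodes whose tags appear in the question,
    # plus their immediate neighbourhood
    hits = [n for n in graph if n.lower() in question.lower()]
    sub = graph.subgraph({m for n in hits for m in
                          [n, *graph.predecessors(n), *graph.successors(n)]})
    return "\n".join(f"{u} -[{d['type']}]-> {v}"
                     for u, v, d in sub.edges(data=True))

question = "What controls the level of V-201 and what feeds it?"
prompt = f"P&ID facts:\n{retrieve_context(G, question)}\n\nQuestion: {question}"
print(prompt)        # pass `prompt` to an LLM of choice (placeholder step)
```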

References

Goldstein, D.P., Alimin, A.A., Schulze Balhorn, L., Schweidtmann, A.M., n.d. pyDEXPI: A Python implementation and toolkit for the DEXPI information model.

Schweidtmann, A.M., 2024. Generative artificial intelligence in chemical engineering. Nat. Chem. Eng. 1, 193–193. https://doi.org/10.1038/s44286-024-00041-5

Theisen, M.F., Flores, K.N., Balhorn, L.S., Schweidtmann, A.M., 2023. Digitization of chemical process flow diagrams using deep convolutional neural networks. Digit. Chem. Eng. 6, 100072.

Toghraei, M., 2019. Piping and instrumentation diagram development. Wiley, Hoboken, NJ, USA.



Multi-Objective Optimization and Analytical Hierarchical Process for Sustainable Power Generation Alternatives in the High Mountain Region of Santurbán: case of Pamplona, Colombia

Ana María Rosso-Cerón2, Nicolas Cabrera1, Viatcheslav Kafarov1

1Department of Chemical Engineering, Carrera 27 Calle 9, Universidad Industrial de Santander, Bucaramanga, Colombia; 2Department of Chemical Engineering, Cl. 5 No. 3-93, Kilometro 1 Vía Bucaramanga, Universidad de Pamplona, Norte de Santander, Colombia

This study presents an integrated approach combining the Analytical Hierarchical Process (AHP) and a Mixed-Integer Multi-Objective Linear Programming (MOMILP) model to evaluate and select sustainable power generation alternatives for Pamplona, Colombia. The research focuses on the high mountain region of Santurbán, a páramo ecosystem that provides water to over 2.5 million people and supports rich biodiversity. Given the region’s vulnerability to climate change, sustainable energy solutions are essential to ensure environmental conservation and energy security [1].

The MOMILP model considers several power generation technologies, including photovoltaic panels, wind turbines, biomass, and diesel plants. These alternatives are integrated into the local electrical distribution system with the goal of minimizing two objectives: costs (net present value) and CO₂ emissions, while adhering to design, operational, and budgetary constraints. The ε-constraint method was employed to generate a Pareto-optimal set of solutions, balancing trade-offs between economic and environmental performance. Additionally, the study examines the potential for forming local energy communities by allowing surplus electricity from renewable sources to be sold, promoting local economic growth and energy independence.
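The ε-constraint scan can be sketched as below with PuLP, tightening an emissions cap stepwise to trace Pareto points; the binary investment decisions of the full MOMILP are omitted, and the technology data are invented placeholders.

```python
# Sketch of the epsilon-constraint method: minimize cost subject to a CO2
# cap that is tightened stepwise, yielding one Pareto point per cap.
import pulp

techs = {"solar": (90, 0.04), "wind": (85, 0.01), "biomass": (70, 0.23),
         "diesel": (60, 0.80)}        # (cost per MWh, tCO2 per MWh), invented
demand = 100.0                        # MWh to supply

pareto = []
for eps in [80, 60, 40, 20, 10, 5]:   # emission caps in tCO2
    m = pulp.LpProblem("eps_constraint", pulp.LpMinimize)
    g = {t: pulp.LpVariable(f"g_{t}", lowBound=0) for t in techs}
    m += pulp.lpSum(g.values()) == demand
    m += pulp.lpSum(techs[t][1] * g[t] for t in techs) <= eps    # CO2 cap
    m += pulp.lpSum(techs[t][0] * g[t] for t in techs)           # cost
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    pareto.append((eps, pulp.value(m.objective)))

for eps, cost in pareto:
    print(f"CO2 cap {eps} t -> min cost {cost:.0f}")
```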

The AHP is used to assess these alternatives against multiple criteria, including social acceptance, job creation, regional accessibility, technological maturity, reliability, pollutant emissions, land use, and habitat impact. Expert opinions were gathered through the Delphi method, and the criteria were weighted using Saaty’s scale. This comprehensive evaluation ensures that the decision-making process incorporates not only technical and economic aspects but also environmental and social considerations [2].
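The AHP weighting step can be sketched as follows: the priority vector is the principal eigenvector of a Saaty pairwise comparison matrix, with a consistency ratio check. The matrix below is an invented example, not the judgements elicited in the study.

```python
# Sketch of AHP criterion weighting from a Saaty pairwise comparison matrix.
import numpy as np

criteria = ["social acceptance", "job creation", "emissions", "land use"]
A = np.array([[1,   3,   1/2, 2],
              [1/3, 1,   1/4, 1],
              [2,   4,   1,   3],
              [1/2, 1,   1/3, 1]], dtype=float)   # illustrative judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority vector

n = len(A)
CI = (eigvals[k].real - n) / (n - 1)           # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print(dict(zip(criteria, w.round(3))), "CR =", round(CI / RI, 3))
```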

The analysis revealed that a hybrid solution combining solar, wind, and biomass technologies provides the best balance between economic viability and environmental sustainability. Solar energy, due to its technological maturity and minimal impact on the local habitat, emerged as a highly favourable option. Biomass, although contributing more to emissions than solar and wind, was positively evaluated for its potential to create local jobs and its high social acceptance in the region.

This study contributes to the growing body of literature on the integration of renewable energy sources into power distribution networks, particularly in ecologically sensitive areas like the Santurbán páramo. The combined use of AHP and MOMILP offers a robust framework for decision-makers, allowing for the systematic evaluation of sustainable alternatives based on technical performance and stakeholder priorities. This approach is particularly relevant for policymakers and utility companies engaged in Colombia’s energy transition efforts and sustainable development.

References

[1] Llambí, L. D., Becerra, M. T., Peralvo, M., Avella, A., Baruffol, M., & Díaz, L. J. (2019). Monitoring biodiversity and ecosystem services in Colombia's high Andean ecosystems: Toward an integrated strategy. Mountain Research and Development, 39(3). https://doi.org/10.1659/MRD-JOURNAL-D-19-00020.

[2] A. M. Rosso-Cerón, V. Kafarov, G. Latorre-Bayona, and R. Quijano-Hurtado, "A novel hybrid approach based on fuzzy multi-criteria decision-making tools for assessing sustainable alternatives of power generation in San Andrés Island," Renewable and Sustainable Energy Reviews, vol. 110, 159–173, 2019. https://doi.org/10.1016/j.rser.2019.04.053.



Environmental assessment of the catalytic Arabinose oxidation

Mouad Hachhach, Dmitry Murzin, Tapio Salmi

Laboratory of Industrial Chemistry and Reaction Engineering (TKR), Johan Gadolin Process Chemistry Centre, Åbo Akademi University, Åbo-Turku FI-20500, Finland

Oxidation of arabinose to arabinoic acid presents an innovative way to valorize local biomass into a high added-value product. Experiments on the oxidation of arabinose to arabinoic acid with molecular oxygen were previously carried out to determine the optimum reaction conditions (Kusema et al., 2010; Manzano et al., 2021), and using the obtained results a scaled-up process has been designed and analysed from a techno-economic perspective (Hachhach et al., 2021).

These results are also used to analyse the environmental impact of the scaled-up process over its lifetime using the life cycle assessment (LCA) methodology. SimaPro software combined with the IMPACT 2002+ impact assessment method was used in this work.

The results revealed that heating appears to be the biggest contributor to the environmental impacts, even though the reaction is performed under mild conditions (70 °C). This highlights the importance of reducing energy consumption, for example via efficient heat integration.



A FOREST BIOMASS-TO-HYDROCARBON SUPPLY CHAIN MATHEMATICAL MODEL FOR OPTIMIZING CARBON EMISSIONS AND ECONOMIC METRICS

Frank Piedra-Jimenez1, Rishabh Mehta2, Valeria Larnaudie3, Maria Analia Rodriguez1, Ana Inés Torres2

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213; 3Departamento de Bioingeniería, Facultad de Ingeniería, Universidad de la Republica, Julio Herrera y Reissig 565, Montevideo, Uruguay.

Forest supply chains (FSCs) are critical for achieving decarbonization targets (Santos et al., 2019). FSCs are characterized by abundant biomass residues, offering an opportunity to add value to processes while contributing to the production of clean energy products. One particularly interesting aspect is their potential integration with oil refineries to produce drop-in fuels, offering a transformative pathway to mitigate traditional refinery emissions (Barbosa-Povoa and Pinto, 2020).

In this article, a disjunctive mathematical programming approach is presented to optimize the design and planning of the FSC for the production of hydrocarbon products from biomass, optimizing both economic and environmental objectives. Various types of byproducts and residual biomass from forest harvesting activities, sawmill production, and the pulp and paper industries are considered. Alternative processing facilities and technologies can be established over a multi-period planning horizon. The design problem scope involves selecting forest areas for exploitation, identifying biomass sources, and determining the locations, technologies, and capacities of facilities that transform wood-based residues into methanol and pyrolysis oil, which are further processed in biodiesel and petroleum refinery plants, respectively. This problem is challenging due to the complexity of the supply chain network, which involves numerous decisions, constraints, and objectives.

Especially in the case of large geographical areas, transportation becomes a crucial aspect of supply chain design and planning because the low biomass density significantly impacts carbon emissions and costs. Thus, the planning problem scope includes selecting connections and material flows across the supply chain and analyzing the impact of different types of transportation vehicles.

To estimate FSC carbon emissions, the Life Cycle Assessment (LCA) methodology is used. A gate-to-gate analysis is carried out for each activity in the FSC. The predicted LCA results are then integrated as input parameters into a mathematical programming model for FSC design and planning, extending previous work (Piedra-Jimenez et al., 2024). In this article, a multi-objective approach is employed to minimize CO2-equivalent emissions while optimizing net present value from an economic standpoint. A set of efficient Pareto points is obtained and compared in a case study of the Argentine forest industry.

References

Barbosa-Povoa, A.P., Pinto, J.M. (2020). “Process supply chains: perspectives from academia and industry”. Comput. Chem. Eng., 132, 106606, 10.1016/J.COMPCHEMENG.2019.106606

Piedra-Jimenez, F., Torres, A.I., Rodriguez, M.A. (2024), “A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty.” Computers & Chemical Engineering, 108540, ISSN 0098-1354.

Santos, A., Carvalho, A., Barbosa-Póvoa, A.P, Marques, A., Amorim, P. (2019). “Assessment and optimization of sustainable forest wood supply chains – a systematic literature review.” For. Policy Econ., 105, pp. 112-135, 10.1016/J.FORPOL.2019.05.026



Introducing competition in a multi-agent system for hybrid optimization

Veerawat Udomvorakulchai, Miguel Pineda, Eric S. Fraga

University College London, United Kingdom

Process systems engineering optimization problems may be challenging. These problems often exhibit nonlinearity, non-convexity, discontinuity, and uncertainty, and often only the values of objective and constraint functions are accessible. Black-box optimization methods may be appropriate to tackle such problems. The effectiveness of each method differs and is often unknown beforehand. Prior experience has shown that hybrid approaches can lead to better outcomes than using a single optimization method (1).

A general-purpose multi-agent framework for optimization, Cocoa, has recently been developed to automate the configuration and use of hybrid optimization, allowing for any number of optimization solvers, including different instances of the same solver (2). Solvers can share solutions, leading to better outcomes with the same computational effort. However, the computational resource allocated to each solver is inversely proportional to the number of solvers. Allocating equal time to each solver may not be ideal.

This paper describes the implementation of competition to go alongside cooperation: allocating more computational resource to the solvers best suited to a given problem. The allocation is dynamic and evolves as the search progresses. Each solver is assigned a priority which changes based on the results obtained by that solver. Scheduling is priority based, and the scheduler is similar to algorithms used by multi-tasking operating systems (3). Individual solvers are given more or less access to the computational resource, enabling the system to reward those solvers that do well while ensuring that all solvers are allocated some computational resource.
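In outline, such a priority-driven scheduler might look like the Python sketch below (the actual framework is implemented in Julia); the reward and decay factors, the budget rule, and the dummy solver slices are illustrative assumptions.

```python
# Sketch of priority-based solver scheduling: priorities are updated from
# recent improvements, and time slices are granted in proportion to priority,
# so every solver keeps some share of the budget.
import random

class SolverSlot:
    """One optimization solver with a dynamic scheduling priority."""
    def __init__(self, name):
        self.name, self.priority, self.best = name, 1.0, 100.0  # dummy incumbent

    def run_slice(self):
        # Placeholder for one time slice of the real solver: returns the
        # best objective value seen in the slice (may or may not improve).
        return self.best - random.uniform(-1.0, 1.0)

solvers = [SolverSlot("GA"), SolverSlot("NelderMead"), SolverSlot("PSO")]
for cycle in range(50):
    total = sum(s.priority for s in solvers)
    for s in solvers:
        for _ in range(max(1, round(3 * s.priority / total))):  # budget share
            result = s.run_slice()
            if result < s.best:                  # reward improvement
                s.best, s.priority = result, min(s.priority * 1.1, 10.0)
            else:                                # decay priority otherwise
                s.priority = max(s.priority * 0.95, 0.1)

for s in solvers:
    print(f"{s.name}: priority={s.priority:.2f}, best={s.best:.2f}")
```

The floor on the priority guarantees that no solver is ever starved, mirroring the requirement that all solvers retain some computational resource.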

The framework allows for the use of both metaheuristic and direct search methods. Metaheuristics explore the full search space while direct search methods are good at exploiting solutions. The framework has been implemented in Julia (4), making full use of multiprocessing.

A case study on the design of a micro-analytic system is presented (5). The model is dynamic and has uncertainties; the selection of designs is based on multiple criteria. This is a good test of the proposed framework as the computational demands are large and the search space is complex. The case study demonstrates the benefits of a multi-solver hybrid optimization approach with both cooperation and competition. The framework adapts to the evolving requirements of the search: often, a metaheuristic method is allocated more computational resource at the beginning of the search, while direct search methods are emphasized later.

1. Fraga ES. Hybrid methods for optimisation. In: Zilinskas J, Bogle IDL, editors. Computer aided methods for optimal design and operations. World Scientific Publishing Co.; 2006. p. 1–14.

2. Fraga ES, Udomvorakulchai V, Papageorgiou L. 2024. DOI: 10.1016/B978-0-443-28824-1.50556-1.

3. Madnick SE, Donovan JJ. Operating systems. McGraw-Hill Book Company; 1974.

4. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A fresh approach to numerical computing. SIAM Rev. 2017;59(1):65–98.

5. Pineda M, Tsaoulidis D, Filho P, Tsukahara T, Angeli P, Fraga E. 2021. DOI: 10.1016/j.nucengdes.2021.111432.



A Component Property Modeling Framework Utilizing Molecular Similarity for Accurate Predictions and Uncertainty Quantification

Youquan Xu, Zhijiang Shao, Anjan Kumar Tula

Zhejiang University, People's Republic of China

In many industrial applications, the demand for high-performance products, such as advanced materials and efficient working media, continues to rise. A key step in developing these products lies in the design of their constituent molecules. Traditional methods, based on expert experience, are often slow, labor-intensive, and prone to overlooking molecules with optimal performance. As a result, computer-aided molecular design (CAMD) has garnered significant attention for its potential to accelerate and improve the design process. One of the major challenges in CAMD is the lack of mechanistic knowledge that accurately links molecular structure to its properties. As a result, machine learning models trained on existing molecular databases have become the primary tools for predicting molecular properties. The typical approach involves using these models to predict the properties of potential molecules and selecting the best candidates based on these predictions. However, prediction errors are inevitable, introducing uncertainty into the reliability of the design. This can result in significant discrepancies between the predicted and experimentally verified properties, limiting the effectiveness of molecular discovery.

To address this issue, we propose a novel molecular property modeling framework based on a similarity coefficient. This framework introduces a new formula for molecular similarity, which incorporates compound-type identification to enable more accurate molecular comparisons. By calculating the similarity between a target molecule and those in an existing database, the framework selects the most similar molecules to form a tailored training dataset. Only the most informative molecules enter the training set, while less relevant or misleading data points are excluded, significantly improving the accuracy of property predictions. In addition to enhancing prediction accuracy, the similarity coefficient also quantifies the confidence in the property predictions. By evaluating the availability and magnitude of the similarity index, the framework provides a measure of uncertainty in the predictions, giving a clearer understanding of how reliable the predicted properties are. This is especially important for molecules where limited similar data is available, allowing for more informed decision-making in the selection process. In tests across various molecular properties, our framework not only enhances the accuracy of predictions but also offers a clear evaluation of prediction reliability, especially for molecules with high similarity. Our framework introduces a two-fold evaluation system for potential molecules, using both predicted properties and the similarity coefficient. This dual criterion ensures that only molecules with both excellent predicted properties and high similarity are selected, enhancing the reliability of the screening process. The improved prediction accuracy, particularly for molecules with high similarity, reduces the need for extensive experimental validation and significantly increases the overall confidence in the molecular design process by explicitly addressing prediction uncertainty.
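The similarity-based selection step can be sketched with RDKit, as below, using Tanimoto similarity on Morgan fingerprints as a stand-in for the paper's similarity coefficient (whose compound-type term is not reproduced here); the molecules and cut-off value are illustrative.

```python
# Sketch of similarity-driven training-set selection with RDKit: pick the
# database molecules closest to the query before fitting a local model.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

database = {"ethanol": "CCO", "propanol": "CCCO", "acetone": "CC(=O)C",
            "benzene": "c1ccccc1", "butanol": "CCCCO"}
query = Chem.MolFromSmiles("CCCO")     # target molecule

fp = lambda m: AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
q_fp = fp(query)

scores = sorted(
    ((name, DataStructs.TanimotoSimilarity(q_fp, fp(Chem.MolFromSmiles(smi))))
     for name, smi in database.items()),
    key=lambda t: -t[1])

train_set = [name for name, s in scores if s > 0.3]   # similarity cut-off
print(scores, "->", train_set)
```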



A simple model for control and optimisation of a produced water re-injection facility

Rafael David de Oliveira1, Edmary Altamiranda2, Gjermund Mathisen2, Johannes Jäschke1

1Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; 2Subsea Technology, AkerBP ASA, Stavanger, Norway

Water injection (or water flooding) is an enhanced oil recovery technique that consists of injecting water into the reservoir to maintain the reservoir pressure. The injected water can come either from the sea or from the water separated from the oil and gas production (produced water). The amount of water injected in each well is typically decided by the reservoir engineers, and many methodologies relying on reservoir models have been proposed in the literature (Grema and Cao, 2016). Once the injection targets have been defined, the water injection network system can be optimised. A relevant optimisation problem in this context is the optimal operation of the topside pump system while ensuring the integrity of the subsea water injection system by maximising the lifetime of the equipment. Works in this phase usually model the system at a macro level, where each unit is represented as a node in a network (Ivo and Imsland, 2022). The use of simple, lower-level models, in which the manipulated and measured variables can be directly connected, has proved very useful in the design of new control strategies (Sivertsen et al., 2006) as well as in real-time optimisation formulations where the model parameters can be updated in real time (Matias et al., 2022).

This work proposes a simple model for control and optimisation of a produced water re-injection facility. The model is based on a real facility in operation on the Norwegian continental shelf and consists of a set of differential-algebraic equations. Data was gathered from the available sensors, pump operation and water injection targets. Model parameters related to equipment dimensions and the valve's flow coefficient were fixed as in the real plant. The remaining parameters were estimated from the field data by solving a nonlinear least-squares problem. Uncertainty quantification was performed to assess the parameters' confidence intervals. Moreover, simulations were performed to evaluate and validate the proposed model. The results show that a simple model can be fitted to the plant and, at the same time, describe the key features of the plant dynamics. The developed model is expected to aid the implementation of strategies like self-optimising control and real-time optimisation on produced water re-injection facilities in the near future.
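The estimation step can be sketched with SciPy's nonlinear least squares, as below, for a deliberately simplified pressure-flow model of the injection line; the model form, data, and Jacobian-based confidence estimate are illustrative stand-ins for the facility's DAE model.

```python
# Sketch of the parameter-estimation step: fit a simple pressure-flow model
# to (synthetic) field data and estimate parameter confidence intervals.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
q_data = np.linspace(50, 300, 25)                       # flow [m3/h]
true = (80.0, 1.2e-3)                                   # head offset, friction
dp_data = true[0] - true[1] * q_data ** 2 + rng.normal(0, 1.0, q_data.size)

def residuals(theta, q, dp):
    a, b = theta
    return (a - b * q ** 2) - dp                        # model minus data

fit = least_squares(residuals, x0=[50.0, 1e-4], args=(q_data, dp_data))

# Approximate parameter covariance from the Jacobian at the optimum
dof = q_data.size - fit.x.size
s2 = 2 * fit.cost / dof                                 # residual variance
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
print(fit.x, "std errors:", np.sqrt(np.diag(cov)))
```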

References

Grema, A. S., and Yi Cao. 2016. “Optimal Feedback Control of Oil Reservoir Waterflooding Processes.” International Journal of Automation and Computing 13 (1): 73–80.

Ivo, Otávio Fonseca, and Lars Struen Imsland. 2022. “Framework for Produced Water Discharge Management with Flow-Weighted Mean Concentration Based Economic Model Predictive Control.” Computers & Chemical Engineering 157 (January):107604.

Matias, José, Julio P. C. Oliveira, Galo A. C. Le Roux, and Johannes Jäschke. 2022. “Steady-State Real-Time Optimization Using Transient Measurements on an Experimental Rig.” Journal of Process Control 115 (July):181–96.

Sivertsen, Heidi, John-Morten Godhavn, Audun Faanes, and Sigurd Skogestad. 2006. “Control Solutions for Subsea Processing and Multiphase Transport.” IFAC Proceedings Volumes, 6th IFAC Symposium on Advanced Control of Chemical Processes, 39 (2): 1069–74.



An optimization-based conceptual synthesis of reaction-separation systems for glucose to chemicals conversion

Syed Ejaz Haider, Ville Alopaeus

Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 16100, 00076 Aalto, Finland.

Abstract

Lignocellulosic biomass has emerged as a promising renewable alternative to fossil resources for the sustainable production of green chemicals [1]. Among the high-value biomass-derived building block chemicals, levulinic acid has gained significant attention due to its wide industrial applications [2]. It serves as a raw material for the synthesis of resins, plasticizers, textiles, animal feed, coatings, antifreeze, pharmaceuticals, and bio-based products [3]. In order to produce levulinic acid on a commercial scale, it is essential to identify the most cost-effective and optimal synthesis route.

Two main methods exist to identify the optimal process structure: hierarchical decomposition and superstructure-based optimization. The hierarchical decomposition method makes design decisions at each level of detail based on heuristics; however, it struggles to capture interactions among decisions at different levels. In contrast, superstructure-based synthesis is a process systems engineering methodology that systematically evaluates a wide range of structural alternatives simultaneously, using an equation-oriented approach to identify the optimal structure.

This study aims to identify the optimal process structure and parameters for the commercial-scale production of levulinic acid from glucose using a mathematical programming approach. To obtain more meaningful results, the reaction and separation systems were investigated separately under two optimization scenarios using different objective functions.

Scenario 1 focuses on optimizing the glucose conversion reactor to enhance overall profit and minimize waste disposal. The optimization model includes a rigorous economic objective function that simultaneously considers product selling prices, capital and manufacturing costs over a 20-year project life, and waste disposal costs. A continuous stirred-tank reactor model was used as a mass balance constraint, with rate parameters taken from our recent work at the chemical engineering research group of Aalto University. This nonlinear programming (NLP) problem was implemented in GAMS and solved using the BARON solver to determine the optimal operating conditions and reactor size. The optimal reactor volume was found to be 13.2 m³, with an optimal temperature of 197.8 °C, for a levulinic acid production capacity of 1593 tonnes/year.
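
The Scenario-1 model was written in GAMS and solved with BARON; the Pyomo sketch below only mirrors its structure (economic objective, CSTR mass balance as constraint), with placeholder kinetics, prices and bounds rather than the study's data:

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.V = pyo.Var(bounds=(1.0, 50.0), initialize=10.0)      # reactor volume, m3
    m.T = pyo.Var(bounds=(420.0, 500.0), initialize=450.0)  # temperature, K

    F, cG0, price, waste_cost = 1.0, 2.0, 800.0, 50.0       # hypothetical data
    k = lambda T: 1e6 * pyo.exp(-8000.0 / T)                # placeholder rate const.

    m.cG = pyo.Var(bounds=(0.0, cG0), initialize=1.0)       # outlet glucose conc.
    # CSTR mass balance as an equality constraint: in - out = reaction
    m.balance = pyo.Constraint(expr=F * (cG0 - m.cG) == k(m.T) * m.cG * m.V)

    production = F * (cG0 - m.cG)                           # converted glucose
    m.profit = pyo.Objective(
        # revenue - annualized capital proxy - waste disposal cost
        expr=price * production - 100.0 * m.V - waste_cost * F * m.cG,
        sense=pyo.maximize)

    pyo.SolverFactory("ipopt").solve(m)  # BARON was used for global optimality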

Scenario 2 addresses the synthesis of distillation-based separation sequences to separate the multicomponent reactor effluent into various product streams. All potential candidates are embedded in a superstructure, which is translated into a mixed-integer nonlinear programming problem (MINLP). Research is progressing towards solving this MINLP problem and identifying the optimal configuration of distillation columns for the desired separation task.

References

[1] F. H. Isikgor and C. R. Becer, "Lignocellulosic biomass: a sustainable platform for the production of bio-based chemicals and polymers," Polymer Chemistry, vol. 6, no. 25, pp. 4497-4559, 2015.

[2] T. Werpy and G. Petersen, "Top value added chemicals from biomass: volume I--results of screening for potential candidates from sugars and synthesis gas," National Renewable Energy Laboratory (NREL), Golden, CO (United States), 2004.

[3] S. Takkellapati, T. Li, and M. A. Gonzalez, "An overview of biorefinery-derived platform chemicals from a cellulose and hemicellulose biorefinery," Clean Technologies and Environmental Policy, vol. 20, pp. 1615-1630, 2018.



Kinetic modeling of drug substance synthesis considering slug flow characteristics in a liquid-liquid reaction

Shunsei Yayabe1, Junu Kim1, Yusuke Hayashi1, Kazuya Okamoto2, Keisuke Shibukawa2, Hayao Nakanishi2, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Shionogi Pharma Co., Ltd., Japan

In the production of drug substances (active pharmaceutical ingredients), flow synthesis is increasingly being introduced due to its various advantages, such as a high surface-to-volume ratio and small system size [1]. One promising application of flow synthesis is the liquid-liquid reaction [2]. When two immiscible liquids are fed together into a flow reactor, characteristic flow patterns, notably slug flow, are formed. These patterns are determined by the fluid properties and the reactor specifications, and have a significant impact on the mass transfer rate. Previous studies have analyzed the effect of slug flow on mass transfer in liquid-liquid reactions using computational fluid dynamics [3, 4]. These studies provide valuable insights into the influence of flow characteristics on the reaction. However, there is a lack of modeling approaches that simultaneously account for flow characteristics and reaction kinetics, which may limit the application of liquid-liquid reactions in flow synthesis.

We developed a kinetic model of drug substance synthesis that incorporates slug flow characteristics in a liquid-liquid reaction, with the aim of determining the feasible range of the process parameters. The target reaction was Stevens oxidation, a novel liquid-liquid reaction between organic and aqueous phases that produces the ester via a shorter pathway than the conventional route. To obtain kinetic data, experiments were conducted varying the inner diameter, reaction temperature, and residence time. Stevens oxidation uses a catalyst, and the experimental conditions were adjusted to form slug flow and thereby promote the catalyst's mass transfer. Using the obtained data, the model was developed for the change in concentrations of the starting material, desired product, intermediate, dimer, carboxylic acid, and catalyst. In the catalyst concentration balance, mass transfer was described using the overall volumetric mass transfer coefficient during slug flow formation.

The model successfully reproduced the experimental results and demonstrated that, as the inner diameter increases, the efficiency of mass transfer in slug flow decreases, slowing down the reaction. The developed model was used to simulate the yields of the starting material and the dimer, as well as the process mass intensity, in order to determine the feasible region. The results showed that when the reagent concentration was either too high or too low, the operating conditions fell outside the feasible region. This kinetic model with flow characteristics will be useful for the process design of drug substance synthesis using liquid-liquid reactions. In ongoing work, we are validating the feasible region.
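
A minimal sketch of the modelling idea, coupling reaction kinetics with catalyst transfer through an overall volumetric coefficient kLa; the species lumping, rate laws and numbers are illustrative, not the paper's fitted model:

    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 0.5, 0.05    # rate constants, 1/min (hypothetical)
    kLa = 0.8             # overall volumetric mass transfer coeff., 1/min
                          # (decreases as the inner diameter grows)
    cat_aq = 0.1          # catalyst conc. in the aqueous slugs, mol/L (hypothetical)

    def rhs(t, y):
        s, p, cat = y                    # starting material, product, catalyst (org.)
        r1 = k1 * s * cat                # catalysed main reaction
        r2 = k2 * p                      # product consumption (e.g. to dimer)
        transfer = kLa * (cat_aq - cat)  # catalyst transfer across the interface
        return [-r1, r1 - r2, transfer]

    sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0])
    print(sol.y[:, -1])   # concentrations after one residence time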

[1] S. Diab, et al., React. Chem. Eng., 2021, 6, 1819. [2] L. Capaldo, et al., Chem. Sci., 2023, 14, 4230. [3] A. Mittal, et al., Ind. Eng. Chem. Res., 2023, 62, 15006. [4] D. Cheng, et al., Ind. Eng. Chem. Res., 2020, 59, 4397.



Learning-based control approach for nanobody-scorpion antivenom optimization

Juan Camilo Acosta-Pavas1, David C Corrales1, Susana M Alonso Villela1, Balkiss Bouhaouala-Zahar2, Georgios Georgakilas3, Konstantinos Mexis4, Stefanos Xenios4, Theodore Dalamagas3, Antonis Kokosis4, Michael O'donohue1, Luc Fillaudeau1, César A. Aceves-Lara1

1TBI, Université de Toulouse, CNRS UMR5504, INRAE UMR792, INSA, Toulouse, France, France; 2Laboratoire des Biomolécules, Venins et Applications Théranostiques (LBVAT), Institut Pasteur de Tunis, 13 Place Pasteur, BP-74, 1002 Le Belvédère, Tunis, Tunisia; 3Athena Research Center, Marousi, Greece; 4School of Chemical Engineering, National Technical University of Athens, Iroon Polytechneiou 9, Zografou, 15780 Athens, Greece

One market scope of the bioindustries is the production of recombinant proteins in E. coli for application in serotherapy (Alonso Villela et al., 2023). However, the monitoring, control, and optimization of these processes remain challenging. Different approaches exist to optimize bioprocess performance; a common one is the use of model-based control strategies such as Model Predictive Control (MPC). Another is learning-based control, such as Reinforcement Learning (RL).

In this work, an RL approach was applied to maximize the production of recombinant proteins in E. coli in induction mode. The aim was to find the optimal substrate feed rate (Fs) applied during induction that maximizes protein productivity. The RL model was trained using the actor-critic Twin-Delayed Deep Deterministic (TD3) Policy Gradient agent. The reward corresponded to the maximum value of the productivity. The environment was represented by a dynamic hybrid model (DHM) published by Corrales et al. (2024). The simulated conditions consisted of a reactor with 2 L working volume (V) at 37°C for the batch (10 g glucose/L) and fed-batch (fed with 300 g glucose/L) modes, and 28°C during the induction stage. The first 3.4 h were operated in batch mode. The fed-batch mode was operated with Fs = 1x10^-3 L/h until 8 h. Afterwards, the RL agent was trained in induction mode until the end of the process at 20 h. The agent actions were updated every 2 h. Two types of constraints were considered: 1.49 < V < 5.00 L and 5x10^-4 ≤ Fs < 1x10^-3 L/h. Finally, the results were compared with the MPC approach.

The training options for all networks were a learning rate of 1x10^-3 for the critic and 1x10^-4 for the actor, a gradient threshold of 1.0, a mini-batch size of 1x10^2, a discount factor of 0.9, an experience buffer length of 1x10^6, and an agent sample time of 0.1 h, with a maximum of 700 episodes.
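
The study used MATLAB's TD3 agent with the DHM as environment; a rough Python analogue with stable-baselines3 and a toy stand-in environment might look as follows (stable-baselines3 exposes a single learning rate, unlike the separate actor/critic rates above; all environment dynamics below are invented):

    import numpy as np
    import gymnasium as gym
    from stable_baselines3 import TD3

    class InductionEnv(gym.Env):
        # Toy stand-in for the dynamic hybrid model during induction mode
        def __init__(self):
            self.observation_space = gym.spaces.Box(low=0.0, high=np.inf, shape=(2,))
            self.action_space = gym.spaces.Box(low=5e-4, high=1e-3, shape=(1,))  # Fs, L/h
        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.V, self.P, self.t = 2.0, 0.0, 8.0   # volume, productivity, time
            return np.array([self.V, self.P], dtype=np.float32), {}
        def step(self, action):
            Fs = float(action[0])
            self.V += Fs * 2.0                       # 2 h between action updates
            self.P += 1e-2 * Fs / self.V             # toy productivity dynamics
            self.t += 2.0
            done = self.t >= 20.0 or self.V >= 5.0   # end of process / V constraint
            return (np.array([self.V, self.P], dtype=np.float32),
                    self.P, done, False, {})         # reward: productivity

    agent = TD3("MlpPolicy", InductionEnv(), learning_rate=1e-3,
                buffer_size=1_000_000, batch_size=100, gamma=0.9)
    agent.learn(total_timesteps=5_000)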

The MPC and RL control strategies showed similar behavior. In both cases, the suggested optimal action is to apply the maximum Fs, increasing the protein productivity at the end of the process to 4.81x10^-2 mg/h. Regarding computation time, RL agent training spent a mean value of 0.3284 s performing 14.0x10^3 steps at each action update, while the MPC required a mean value of 0.3366 s to solve an optimization problem at every action update. The RL approach thus proves to be a good alternative for exploring the optimization of recombinant protein production.

References

Alonso Villela, S. M., Kraïem-Ghezal, H., Bouhaouala-Zahar, B., Bideaux, C., Aceves Lara, C. A., & Fillaudeau, L. (2023). Production of recombinant scorpion antivenoms in E. coli: Current state and perspectives. Applied Microbiology and Biotechnology, 107(13), 4133-4152. https://doi.org/10.1007/s00253-023-12578-1

Corrales, D. C., Villela, S. M. A., Cescut, J., Daboussi, F., Fillaudeau, L., & Aceves-Lara, C. A. (2024). Dynamic Hybrid Model for Nanobody-based Antivenom Production (scorpion antivenom) with E. coli CH10-12 and E. coli NbF12-10.



Kinetics modeling of the thermal degradation of densified refuse-derived fuel (d-RDF)

Mohammad Ali Nazari, Juma Haydary

Institute of Chemical and Environmental Engineering, Slovak University of Technology in Bratislava, Slovak Republic

Modern society currently faces both an energy crisis and the massive generation of Municipal Solid Waste (MSW). The conversion of the carbon-containing fraction of MSW, known as refuse-derived fuel (RDF), into energy, fuel, and high-value bio-based chemicals has become a key focus in ongoing discussions on sustainable development, driven by rising energy demand, depleting fossil fuel reserves, and growing environmental concerns. However, a significant limitation of unprocessed RDF lies in its heterogeneous composition, which complicates material handling, reactor feeding, and the accurate prediction of its physical and chemical properties. Densified RDF (d-RDF) offers a potential solution to these challenges by reducing material variability and producing a more uniform, durable form, thereby enhancing its suitability for processes such as pyrolysis. This work evaluates the physicochemical characteristics and thermal degradation of d-RDF using a thermogravimetric analyzer (TGA) under controlled conditions at heating rates of 2, 5, 10, and 20 K·min⁻¹. Model-free methods, including Friedman (FRM), Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS), Vyazovkin (VYZ), and Kissinger, were applied to determine the apparent kinetic and thermodynamic parameters within the conversion range of 1% to 85%. The physicochemical properties of d-RDF demonstrated its suitability for various thermochemical conversion applications. Thermal degradation predominantly occurred within the temperature range of 220–500°C, accounting for 98% of the total weight loss. The coefficients of determination (R²) for the fitted plots ranged from 0.90 to 1.00 across all applied models. The average activation energy (Eα) calculated using the FRM, FWO, KAS, and VYZ methods was 260, 247, 247, and 263 kJ·mol⁻¹, respectively. The evaluation of thermodynamic parameters (ΔH, ΔG, and ΔS) indicated the endothermic nature of the process. A statistical F-test was applied to identify the best agreement between experimental and calculated data; the variance differences for the FRM and VYZ models were insignificant, indicating the best agreement with the experimental data. Considering all results, including the kinetic and thermodynamic parameters and the high heating value (HHV) of 25.20 MJ·kg⁻¹, d-RDF is highly amenable to thermal degradation under pyrolysis conditions and can be regarded as a suitable feedstock for producing fuel and value-added products. Moreover, it serves as a viable alternative to fossil fuels, contributing to the United Nations 2030 Sustainable Development Goals.
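
As a worked illustration of one of the isoconversional calculations (the KAS method): at a fixed conversion α, ln(β/T²) is regressed against 1/T across the four heating rates, and Eα = -slope·R. The temperatures below are invented for illustration, not the d-RDF data:

    import numpy as np

    R = 8.314                                   # J/(mol K)
    betas = np.array([2.0, 5.0, 10.0, 20.0])    # heating rates, K/min
    # Temperature (K) at which alpha = 0.5 is reached at each heating rate
    # (hypothetical values inside the 220-500 C degradation window):
    T_alpha = np.array([600.0, 615.0, 630.0, 645.0])

    x = 1.0 / T_alpha
    y = np.log(betas / T_alpha**2)
    slope, intercept = np.polyfit(x, y, 1)      # KAS: slope = -Ea/R
    Ea = -slope * R / 1000.0                    # kJ/mol
    print(f"Apparent activation energy at alpha = 0.5: {Ea:.0f} kJ/mol")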



Cost-optimal solvent selection for batch cooling crystallisation of flurbiprofen

Matthew Blair, Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Choosing suitable solvents for crystallisation processes can be very challenging when developing new pharmaceuticals, given the vast number of choices, crystallisation techniques and performance metrics. A high-efficiency solvent must ensure high API recovery, low cost and minimal environmental impact,1 and allow batch (or possibly continuous) operation within an acceptable (not narrow) parameter space. To streamline this task, process and thermodynamic modelling tools2,3 can be used to systematically probe the behaviour of different crystallisation setups in silico prior to conducting lab-scale experiments. In particular, it has been found that we can use thermodynamic models alongside principles from solid-liquid equilibria (SLE) to determine the impact of key process variables (e.g. temperature and solvent choice)1 on the performance of different processes without (or prior to) testing them in the laboratory.2,3

This paper presents the implementation of a modelling framework that can be used to minimise the cost and environmental impact of batch crystallisation processes on the basis of thermodynamic principles. This process modelling framework (implemented in MATLAB®) is employed to study the batch cooling crystallisation of flurbiprofen, a non-steroidal anti-inflammatory drug (NSAID) used against arthritis.4 Moreover, we have used the Non-Random Two-Liquid (NRTL) activity coefficient model to study its thermophysical and solubility properties in twelve (12) common upstream pharmaceutical solvents,4,5 namely three alkanes (n-hexane, n-heptane, n-octane), two (isopropyl, methyl-tert-butyl) ethers, five alcohols (n-propanol, isopropanol, n-butanol, isobutanol, isopentanol), an ester (isopropyl acetate), and acetonitrile, over an adequately wide temperature range (283.15-323.15 K). Established green metrics1 (e.g. E-factor) and costing methodologies are employed to comparatively evaluate process candidates.6
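
A minimal sketch of the underlying SLE solubility calculation, solving ln(x·γ) = (ΔHfus/R)(1/Tm - 1/T) with γ from the binary NRTL model by fixed-point iteration; all parameter values are illustrative, not the fitted flurbiprofen data:

    import numpy as np

    R = 8.314
    dHfus, Tm = 27_000.0, 387.0           # J/mol and K (hypothetical API data)
    tau12, tau21, alpha = 1.2, 0.8, 0.3   # hypothetical NRTL parameters

    def gamma1(x1):
        # Binary NRTL activity coefficient of the solute (component 1)
        x2 = 1.0 - x1
        G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
        return np.exp(x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                               + tau12 * G12 / (x2 + x1 * G12)**2))

    def solubility(T, x1=0.1):
        rhs = np.exp(dHfus / R * (1.0 / Tm - 1.0 / T))  # ideal solubility
        for _ in range(100):                            # fixed-point iteration
            x1 = rhs / gamma1(x1)
        return x1

    for T in (283.15, 303.15, 323.15):
        print(T, solubility(T))   # mole-fraction solubility vs. temperature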

LITERATURE REFERENCES

  1. Blair et al., Process modeling, simulation and technoeconomic evaluation of batch vs continuous pharmaceutical manufacturing of cephalexin. 2023 AIChE Annual Meeting, Orlando, to appear (2023).
  2. Watson et al., Computer aided design of solvent blends for hybrid cooling and antisolvent crystallization of active pharmaceutical ingredients. Organic Process Research & Development 25(5): 1123 (2021).
  3. Sheikholeslamzadeh et al., Optimal solvent screening for crystallization of pharmaceutical compounds from multisolvent systems. Industrial & Engineering Chemistry Research 51(42): 13792 (2012).
  4. Tian et al., Solution thermodynamic properties of flurbiprofen in twelve solvents (283.15–323.15 K). Journal of Molecular Liquids 296: 111744 (2019).
  5. Prat et al., CHEM21 selection guide of classical and less classical solvents. Green Chemistry 18(1): 288 (2016).
  6. Dafnomilis et al., Multiobjective dynamic optimization of ampicillin batch crystallization: sensitivity analysis of attainable performance vs product quality constraints. Industrial & Engineering Chemistry Research 58(40): 18756 (2019).


A Machine Learning (ML) implementation for beer fermentation optimisation

Dimitrios I. Gerogiorgis

University of Edinburgh, United Kingdom

ABSTRACT

Food and beverage industries receive key feedstocks whose composition is subject to geographic and seasonal variability, and rely on factories whose process conditions have limited manipulation margins but must rightfully meet stringent product quality specifications. Unlike chemicals, most of our favourite foods and beverages are highly sensitive and perishable, with relatively small profit margins. Although manufacturing processes (recipes) have been perfected over centuries or even millennia, quantitative understanding is limited. Predictions about the influence of input (feedstock) composition and manufacturing (process) conditions on final food/drink product quality are hazardous, if not impossible, because small changes can result in extreme variations. A slightly warmer fermentation renders beer undrinkable; similarly, an imbalance among sugar, lipid (fat) and protein can make chocolate unstable.

The representational versatility of Artificial Neural Networks (ANN) for process systems studies has been well known for decades.2 First-principles knowledge (mass-heat-momentum conservation, chemical reactions), though, is captured via deterministic (ODE/PDE) models, which invariably require laborious parameterisation for each particular process plant. Physics-Informed Neural Networks (PINN),3 however, combine the best of both worlds: they offer chemistry-compliant NNs with proven extrapolation power to revolutionise manufacturing, circumventing parametric estimation uncertainty and enabling efficient process control. Fermentation for specific products (e.g. ethanol4, biopharmaceuticals5) has been explored by means of ML/ANN (not PINN) tools, thus without embedded first-principles descriptions.3
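
A minimal PINN sketch for a toy fermentation balance dX/dt = μX(1 - X/Xmax), assuming PyTorch; the loss combines a data misfit with the ODE residual so the network stays physics-compliant between sparse measurements (all values illustrative):

    import torch

    mu, Xmax = 0.4, 10.0
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))

    t_data = torch.tensor([[0.0], [5.0], [10.0]])   # sparse measurements
    X_data = torch.tensor([[0.5], [3.1], [7.9]])
    t_phys = torch.linspace(0.0, 12.0, 50).reshape(-1, 1).requires_grad_(True)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(5000):
        opt.zero_grad()
        loss_data = torch.mean((net(t_data) - X_data)**2)
        X = net(t_phys)
        dXdt = torch.autograd.grad(X, t_phys, torch.ones_like(X),
                                   create_graph=True)[0]
        residual = dXdt - mu * X * (1.0 - X / Xmax)   # physics residual
        loss = loss_data + torch.mean(residual**2)
        loss.backward()
        opt.step()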

Though Food Science cannot provide global composition-structure-quality correlations, Artificial Intelligence (AI) can be used to extract valuable process knowledge from factory data. The case of beer, in particular, has been the focus of several of our papers,6-7 offering a sound basis for comparing model fidelity between these precedents and new PINN approaches. Pursuing PINN modelling caters to greater complexity in terms of plant flowsheet and target product structure and chemistry. We thus revisit the problem with ML/PINN tools to efficiently predict process performance, which is instrumental in the computational design and optimisation of key unit operations (e.g. fermentors). Traditional (first-principles) descriptions of these necessitate elaborate (e.g. CFD) submodels of extreme complexity, with at least two severe drawbacks: (1) cumbersome prerequisite parameter estimation with extreme uncertainty, and (2) prohibitively high CPU cost. The complementarity of the two major approaches is thus investigated, and the major advantages and shortcomings of each will be highlighted.

LITERATURE REFERENCES

  1. Gerogiorgis & Bakalis, Digitalisation of Food+Beverage Manufacturing, Food & Bioproducts Processing, 128: 259-261 (2021).
  2. Lee et al., Machine learning: Overview of recent progresses and implications for the Process Systems Engineering field, Computers & Chemical Engineering, 114: 111-121 (2018).
  3. Karniadakis et al., Physics-informed machine learning, Nature Reviews Physics, 3(6): 422-440 (2021).
  4. Pereira et al., Hybrid NN modelling and particle swarm optimization for improved ethanol production from cashew apple juice, Bioprocess & Biosystems Engineering 44: 329-342 (2021).
  5. Petsagkourakis et al., Reinforcement learning for batch bioprocess optimization. Computers & Chemical Engineering, 133: 106649 (2020).
  6. Rodman & Gerogiorgis, Multi-objective process optimisation of beer fermentation via dynamic simulation, Food & Bioproducts Processing, 100A: 255-274 (2016).
  7. Rodman & Gerogiorgis, Dynamic optimization of beer fermentation: Sensitivity analysis of attainable performance vs. product flavour constraints, Computers & Chemical Engineering, 106: 582-595 (2017).


Operability analysis of modular heterogeneous electrolyzer plants using system co-simulation

Michael Große1,3, Isabell Viedt2,3, Hannes Lange2,3, Leon Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group; 3TUD Dresden University of Technology, Process-to-Order Lab

In the upcoming decades, the scale-up of hydrogen production will play a crucial role in the integration of renewable energy into future energy systems [1]. One scale-up strategy is the numbering-up of standardized electrolysis units in a modular plant concept [2, 3]. A modular plant concept can support the integration of different electrolyzer technologies into one heterogeneous electrolyzer plant, leveraging technology-specific advantages and counteracting disadvantages [4].

This work focuses on the analysis of technical operability and feasibility of large-scale modular electrolyzer plants in a heterogeneous plant layout using system co-simulation. Developed and available dynamic process models of low-temperature electrolysis components are combined in Simulink as a shared co-simulation environment. Strategies to control relevant process parameters, like temperatures, pressures, flow rates and component mass fractions in the different subsystems and the overall plant, are developed and presented. An operability analysis is carried out to verify the functionality of the presented plant layout and the corresponding control strategies [5].

The dynamic progression of all controlled parameters is presented for different operative states that may occur, such as start-up, continuous operation, load change and hot-standby behavior. It is observed that the exemplary plant is operational, as all relevant process parameters can be held within the allowed operating range during all operative states. However, some limitations regarding the possible operating range of individual technologies are identified. Possible solution approaches for these identified problems are conceptualized.

Additionally, relevant metrics for efficiency and flexibility, such as the specific energy consumption and the expected unserved flexible energy (EUFE) [4], are calculated to prove the feasibility and show the advantages of heterogeneous electrolyzer plant layouts, such as heightened operational flexibility without major reductions in efficiency.

Sources

[1] International Energy Agency, "Global Hydrogen Review 2023", 2023. https://www.iea.org/reports/global-hydrogen-review-2023.

[2] L. Bittorf et al., "Upcoming domains for the MTP and an evaluation of its usability for electrolysis", in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Sep. 2022, pp. 1–4. doi: 10.1109/ETFA52439.2022.9921280.

[3] H. Lange, A. Klose, L. Beisswenger, D. Erdmann, and L. Urbas, "Modularization approach for large-scale electrolysis systems: a review", Sustain. Energy Fuels, vol. 8, no. 6, pp. 1208–1224, 2024, doi: 10.1039/D3SE01588B.

[4] M. Mock, I. Viedt, H. Lange, and L. Urbas, "Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains", in Computer Aided Chemical Engineering, vol. 53, Elsevier, 2024, pp. 1885–1890. doi: 10.1016/B978-0-443-28824-1.50315-X.

[5] V. Gazzaneo, J. C. Carrasco, D. R. Vinson, and F. V. Lima, "Process Operability Algorithms: Past, Present, and Future Developments", Ind. Eng. Chem. Res., vol. 59, no. 6, pp. 2457–2470, Feb. 2020, doi: 10.1021/acs.iecr.9b05181.



High-pressure membrane reactor for ammonia decomposition: Modeling, simulation and scale-up using a Python-Aspen Custom Modeler interface

Leonardo Antonio Cáceres Avilez, Antonio Esio Bresciani, Claudio Augusto Oller do Nascimento, Rita Maria de Brito Alves

Universidade de São Paulo, Brazil

One of the current challenges for hydrogen-related technologies is storage and transportation. Hydrogen's low volumetric density and low boiling point require high-pressure and low-temperature conditions for effective transport and storage. A potential solution involves storing hydrogen in chemical compounds that can be easily transported and stored, with hydrogen released through decomposition processes [1]. Ammonia is a promising hydrogen carrier due to its high hydrogen content, approximately 17.8% by mass, and its high volumetric H2 density of 121 kg/m³ at 10 bar pressure [2]. The objective of this study was to develop a mathematical model to analyze and design a packed bed membrane reactor (PBMR) for large-scale ammonia decomposition. The kinetic model for the Ru-K/CaO catalyst was taken from the literature and validated with experimental data [3]. This catalyst was selected for its effective performance under high-pressure conditions, which increases the driving force for hydrogen permeation in the membrane reactor. The model was developed in Aspen Custom Modeler (ACM) using a 1D pseudo-homogeneous approach. The governing equations for mass, energy, and momentum conservation were discretized via a first-order backward finite difference method and solved using a nonlinear solver. An effectiveness factor was incorporated to account for intraparticle mass transfer limitations, which are prevalent with the large particle sizes typically employed in industrial applications. The study further investigated the influence of sweep gas ratio, temperature, relative pressure, and space velocity on ammonia conversion and hydrogen recovery, employing response surface methodology through an ACM-Python interface. The proposed multi-tubular membrane reactor achieved approximately 90.4% ammonia conversion and 91% hydrogen recovery, operating at an inlet temperature of 400°C and a pressure of 40 bar. Under the same heat flux, the membrane reactor exhibited approximately 15% higher ammonia conversion than a conventional fixed bed reactor. Furthermore, the developed model is easily transferable to Aspen Plus, facilitating subsequent conceptual process design and economic analyses.
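
A conceptual 1D sketch of the membrane-reactor balances (the actual model was built in Aspen Custom Modeler with backward finite differences and full mass, energy and momentum balances); the rate and permeation constants are placeholders, not the fitted Ru-K/CaO values:

    import numpy as np
    from scipy.integrate import solve_ivp

    k = 5.0   # effective decomposition rate constant (illustrative)
    Q = 0.8   # lumped membrane permeation coefficient (illustrative)

    def rhs(z, F):
        # Molar flows along the dimensionless bed length z:
        # NH3 in the bed, H2 in the bed, H2 in the permeate.
        F_NH3, F_H2, F_perm = F
        r = k * F_NH3                  # simplified decomposition rate
        J = Q * F_H2                   # simplified H2 permeation flux
        return [-r, 1.5 * r - J, J]    # NH3 -> 1.5 H2 + 0.5 N2

    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0])
    F_NH3, F_H2, F_perm = sol.y[:, -1]
    print("conversion:", 1.0 - F_NH3,
          "H2 recovery:", F_perm / (F_perm + F_H2))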

[1] I. Lucentini, G. García Colli, C. D. Luzi, I. Serrano, O. M. Martínez, and J. Llorca, ‘Catalytic ammonia decomposition over Ni-Ru supported on CeO2 for hydrogen production: Effect of metal loading and kinetic analysis’, Appl Catal B, vol. 286, p. 119896, 2021.

[2] J. W. Makepeace, T. J. Wood, H. M. A. Hunter, M. O. Jones, and W. I. F. David, ‘Ammonia decomposition catalysis using non-stoichiometric lithium imide’, Chem Sci, vol. 6, no. 7, pp. 3805–3815, 2015.

[3] S. Sayas, N. Morlanés, S. P. Katikaneni, A. Harale, B. Solami, and J. Gascon, ‘High pressure ammonia decomposition on Ru-K/CaO catalysts’, Catal. Sci. Technol., vol. 10, pp. 5027–5035, 2020.



Developing a circular economy around jam production wastes

Carlos Sanz, Mariano Martin

Department of Chemical Engineering. Universidad de Salamanca, Plz Caídos 1-5, 37008, Salamanca, Spain

Abstract

The food industry is a significant source of waste. In the EU alone, more than 58 million tons of food waste are generated annually [1], with an estimated market value of 132 billion euros [2]. While over half of this waste is produced at the household level and thus consists of a mixture, one-quarter originates directly from manufacturing facilities. Traditionally, the mixed waste has been managed through municipal solid waste (MSW) treatment and valorization procedures [3]. However, there is an opportunity to valorize the waste produced in the agri-food sector to support the adoption of a circular economy within the food supply chain, beginning at the transformation facilities. This would enable the recovery of value-added products and reduce the need for external resources, creating a circular economy through process integration.

In this work, the valorization of biowaste for a circular economy is explored through the case of jam waste. An integrated process is designed to extract value-added products such as phenolic compounds and pectin, as well as to produce ethanol, a green solvent, for internal use and/or as a final product. The solid residue can then either be gasified (GA) or digested (AD) to produce hydrogen, thermal energy and power. These technologies are systematically compared using a mathematical optimization approach, with units modeled based on first principles and experimental yields. The base case focuses on a real jam production facility from a well-known company.

Waste processing requires an investment of €2.0-2.3 million to treat 37 tons of waste per year, yielding 5.2 kg/t of phenolic compounds and 15.9 kg/t of pectin. After extraction of the valuable products, the solids are subjected to either anaerobic digestion or gasification. The amount of biogas produced (368.1 Nm3/t) is about half that of syngas (660.2 Nm3/t), so the energy produced by gasification (5,085.6 kWh/t) is higher than that produced by anaerobic digestion (3,136.3 kWh/t). Both technologies are self-sufficient in terms of power, but require additional thermal energy input. Although gasification produces more energy, anaerobic digestion is cheaper and has a lower entry barrier, especially as the process scales. As the results show, incorporating such processes into jam production facilities is not only profitable, but also allows the application of circular economy principles, reducing waste and external energy consumption while providing value-added by-products such as phenolic compounds and pectin.

References

[1] Eurostat, Food waste and food waste prevention - estimates, (2023).

[2] SWD, Impact Assessment Report, Brussels, 2023.

[3] EPA, Municipal Solid Waste, (2016). https://archive.epa.gov/epawaste/nonhaz/municipal/web/html/ (accessed April 13, 2024).



Data-driven optimization of chemical dosage in wastewater treatment: A surrogate model approach for enhanced physicochemical phosphorus removal

Florencia Caro1, Jimena Ferreira2,3, José Carlos Pinto4, Elena Castelló1, Claudia Santiviago1

1Biotechnological Processes for the Environment Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 2Chemical & Process Systems Engineering Group, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 3Heterogeneous Computing Laboratory, Faculty of Engineering, Universidad de la República, Montevideo, Uruguay, 11300; 4Programa de Engenharia Química/COPPE, Universidade Federal do Rio de Janeiro, Cidade Universitária, CP: 68502, Rio de Janeiro, 21941-972 RJ, Brazil

Excessive phosphorus discharge into water bodies can cause severe environmental issues, such as eutrophication [1]. Discharge limits have become more stringent, and operating wastewater phosphorus removal systems that are economically feasible and ensure regulatory compliance remains a challenge [2]. Physicochemical phosphorus removal (PPR) using metal salts is effective for achieving low phosphorus levels and can supplement biological phosphorus removal (BPR) [3]. PPR offers flexibility, as phosphorus removal can be adjusted by modifying the chemical dosage [4], and is simple, requiring only a chemical dosing system and a clarifier to separate the treated effluent from the resulting precipitate [3]. Proper dosage control is important to avoid under- or overdosing, which affects phosphorus removal efficiency and operational costs. PPR depends on the system design and effluent characteristics [4]. Therefore, dosages are generally established through laboratory experiments, data from other wastewater treatment plants (WWTPs), and dosing charts [3]. Modeling can enhance chemical dosing in WWTPs, and various sequential simulators can perform this task. BioWin exemplifies this capability, incorporating PPR using metal salts and accounting for pH, precipitation processes, and interactions with organic matter measured as chemical oxygen demand (COD). However, BioWin cannot directly optimize chemical dosing for specific WWTP configurations. This work develops a surrogate model from BioWin-simulated data to create a tool that optimizes chemical dosages based on influent characteristics, providing tailored solutions for an edible oil WWTP, which serves as the case study. The industry operates its own WWTP and discharges the treated effluent into a watercourse. Due to the production process, the influent has high and variable phosphorus concentrations. PPR is applied as a supplementary treatment to BPR when phosphorus levels exceed discharge limits. The decision variables in the optimization are the aluminum sulfate dosage for phosphorus removal and the sodium hydroxide dosage for pH adjustment, as aluminum sulfate lowers the effluent pH. The chemical cost is set as the objective function, with the effluent discharge parameters as constraints. The surrogate physicochemical model, which links influent parameters and dosing to effluent outcomes, is also included as a constraint. Data acquisition from BioWin is automated using Bio2Py [5]. The optimization model is implemented in Pyomo.
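
A minimal Pyomo sketch of the optimization structure described above, with a hypothetical linear surrogate standing in for the BioWin-trained model (all coefficients, limits and costs are illustrative):

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.alum = pyo.Var(within=pyo.NonNegativeReals)   # aluminum sulfate dose, mg/L
    m.naoh = pyo.Var(within=pyo.NonNegativeReals)   # sodium hydroxide dose, mg/L

    P_in, COD_in = 20.0, 1500.0                     # influent characteristics
    c_alum, c_naoh = 0.4, 0.6                       # unit chemical costs

    # Surrogate relations fitted to BioWin simulations (hypothetical form):
    m.P_out = pyo.Var(within=pyo.NonNegativeReals)
    m.pH_out = pyo.Var(bounds=(0.0, 14.0))
    m.surrogate_P = pyo.Constraint(
        expr=m.P_out == P_in - 0.08 * m.alum + 0.002 * COD_in)
    m.surrogate_pH = pyo.Constraint(
        expr=m.pH_out == 7.2 - 0.01 * m.alum + 0.02 * m.naoh)

    m.discharge_P = pyo.Constraint(expr=m.P_out <= 1.0)   # discharge limit, mg/L
    m.discharge_pH = pyo.Constraint(expr=m.pH_out >= 6.0)

    m.cost = pyo.Objective(expr=c_alum * m.alum + c_naoh * m.naoh)
    pyo.SolverFactory("glpk").solve(m)  # linear surrogate -> LP; NLP otherwise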

Preliminary results indicate that influent COD significantly affects phosphorus removal and should be considered when determining the chemical dosage. For high COD levels, more aluminum than suggested by a rule of thumb [3] is required, whereas for moderate and low COD levels a lower dosage is needed, leading to potential cost savings. Furthermore, it was found that pH adjustment is only necessary when phosphorus concentrations are high.

[1] V. Smith et al., Environ. Pollut. 100, 179–196 (1999). doi: 10.1016/S0269-7491(99)00091-3.

[2] R. Bashar et al., Chemosphere 197, 280–290 (2018). doi: 10.1016/j.chemosphere.2017.12.169.

[3] Metcalf & Eddy, Wastewater Engineering: Treatment and Resource Recovery (McGraw-Hill, 2014).

[4] A. Szabó et al., Water Environ. Res. 80, 407–416 (2008). doi: 10.2175/106143008x268498.

[5] F. Caro et al., J. Water Process Eng. 63, 105426 (2024). doi: 10.1016/j.jwpe.2024.105426.



Leveraging Machine Learning for Real-Time Performance Prediction of Near Infrared Separators in Waste Sorting Plant

Imam Mujahidin Iqbal1, Xinyu Wang1, Isabell Viedt1,2, Leonhard Urbas1,2

1TUD Dresden University of Technology, Chair of Process Control Systems; 2TUD Dresden University of Technology, Process Systems Engineering Group

Abstract

Many small and medium-sized enterprises (SMEs), including waste sorting facilities, are not fully capitalizing on the data they collect. Recent advances in waste sorting technology are addressing this challenge. For instance, Tanguay-Rioux et al. (2022) used a mixed modelling approach to develop a process model using data from Canadian sorting facilities, while Kroell et al. (2024) leveraged Near Infrared (NIR) data to create a machine learning model that optimizes the NIR setup. A key obstacle for SMEs in utilizing their data effectively is the lack of technical expertise. Wang et al. (2024) demonstrated that the ecoKI platform is a viable solution for SMEs, as it is a low-code platform that requires no prior machine learning knowledge and is simple to use. This work forms part of the EnSort project, which aims to enhance automation and energy efficiency in waste sorting plants by utilizing the collected data. This study explores the application of the ecoKI platform to turn measurement data into performance monitoring tools. Data, including material composition and belt weigher sensor readings, were collected from an operational waste sorting plant in Northern Europe. The data were processed using the ready-made building blocks provided within the ecoKI platform, avoiding the need for manual coding. The platform's real-time monitoring feature was used to continuously track performance. Two neural network architectures, Multilayer Perceptrons (MLP) and Long Short-Term Memory (LSTM) networks, were explored for predicting NIR separation efficiency. The results demonstrated the potential of these data-driven models to accurately capture the essential relationships between input features and NIR performance. This work illustrates how raw measurement data in waste sorting facilities can be transformed into actionable insights for real-time performance monitoring, offering an accessible, user-friendly solution for industries that lack machine learning expertise. By enabling SMEs to leverage their existing data, the platform paves the way for improved operational efficiency and decision-making. Furthermore, this approach can be adapted to various industrial contexts beyond waste sorting, setting the stage for future developments in automated, data-driven optimization of equipment performance.
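
The modelling itself runs through ecoKI's ready-made building blocks; purely to illustrate the underlying task, a plain scikit-learn MLP fit on hypothetical plant features might look like this (feature names and the data file are invented):

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    df = pd.read_csv("sorting_plant_log.csv")        # hypothetical export
    X = df[["belt_weigher_kg_h", "pet_fraction", "film_fraction"]]
    y = df["nir_separation_efficiency"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))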

References

Tanguay-Rioux, F., Spreutels, L., Héroux, M., & Legros, R. (2022). Mixed modeling approach for mechanical sorting processes based on physical properties of municipal solid waste. Waste Management, 144, 533–542.

Kroell, N., Maghmoumi, A., Dietl, T., Chen, X., Küppers, B., Scherling, T., Feil, A., & Greiff, K. (2024). Towards digital twins of waste sorting plants: Developing data-driven process models of industrial-scale sensor-based sorting units by combining machine learning with near-infrared-based process monitoring. Resources, Conservation and Recycling, 200, 107257.

Wang, X., Rani, F., Charania, Z., Vogt, L., Klose, A., & Urbas, L. (2024). Steigerung der Energieeffizienz für eine nachhaltige Entwicklung in der Produktion: Die Rolle des maschinellen Lernens im ecoKI-Projekt [Increasing energy efficiency for sustainable development in production: the role of machine learning in the ecoKI project] (p. 840).



A Benchmark Simulation Model of Ammonia Production: Enabling Safe Innovation in the Emerging Renewable Hydrogen Economy

Niklas Groll, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), 2800 Kgs.Lyngby, Denmark

The emerging hydrogen economy plays a vital part in the transition to a sustainable industry. Green hydrogen can serve as a renewable fuel for process heat and as a sustainable feedstock, e.g., for green ammonia. Producing green ammonia for the food industry and as a platform chemical will be essential from now on [1]. Accordingly, many developments focus on designing and optimizing hydrogen process routes. However, implementing new process ideas and designs also requires testing and ensuring safety.

Safety methodologies can be tested on so-called "benchmark models." Several benchmark processes have been used to innovate new process control and monitoring methods: the Tennessee-Eastman process imitates the behavior of a standard chemical process, the fed-batch fermentation of penicillin serves as a benchmark for biochemical fed-batch processes, and the COST benchmark model allows methodologies for wastewater treatment to be evaluated [2], [3], [4]. However, the established benchmark processes do not feature all the aspects relevant to renewable hydrogen pathways, e.g., sustainable feedstocks and energy supply or electrochemical reactions. The lack of a basic benchmark model for the hydrogen industry thus creates unnecessary risks when adopting process monitoring and control technologies.

Introducing our unique simulation benchmark model, we pave the way for safer innovations in the hydrogen industry. Our model connects hydrogen production from renewable electricity to the Haber-Bosch process for ammonia production. By integrating electrochemical electrolysis with a standard chemical process, our ammonia benchmark process encompasses all key aspects of innovative hydrogen pathways. The model, built with the versatile Aveva Process Simulator, allows for a seamless transition between steady-state and dynamic simulations and easy adjustment of process design and control parameters. By introducing a set of failure modes, the model serves as a benchmark for evaluating risk monitoring and control methods. Furthermore, detecting and eliminating these failures can also contribute to the development of new process safety methodologies.

Our new ammonia simulation model is a significant addition to the emerging hydrogen industry, filling the void of a missing benchmark. This comprehensive model serves a dual purpose: It can evaluate and confirm existing process safety methodologies and serve as a foundation for developing new safety methodologies specifically targeting safe hydrogen pathways.

[1] A. G. Olabi et al., ‘Recent progress in Green Ammonia: Production, applications, assessment; barriers, and its role in achieving the sustainable development goals’, Feb. 01, 2023, Elsevier Ltd. doi: 10.1016/j.enconman.2022.116594.

[2] U. Jeppsson and M. N. Pons, ‘The COST benchmark simulation model-current state and future perspective’, 2004, Elsevier Ltd. doi: 10.1016/j.conengprac.2003.07.001.

[3] G. Birol, C. Ündey, and A. Çinar, ‘A modular simulation package for fed-batch fermentation: penicillin production’, Comput Chem Eng, vol. 26, no. 11, pp. 1553–1565, Nov. 2002, doi: 10.1016/S0098-1354(02)00127-8.

[4] J. J. Downs and E. F. Vogel, ‘A plant-wide industrial process control problem’, Comput Chem Eng, vol. 17, no. 3, pp. 245–255, Mar. 1993, doi: 10.1016/0098-1354(93)80018-I.



Thermo-Hydraulic Performance of Pillow-Plate Heat Exchangers with Streamlined Secondary Structures: A Numerical Analysis

Reza Afsahnoudeh, Julia Riese, Eugeny Y. Kenig

Paderborn University, Germany

In recent years, pillow-plate heat exchangers (PPHEs) have gained attention as a promising alternative to conventional shell-and-tube and plate heat exchangers. Their advantages include high pressure resistance, leak-tight construction, and good cleanability. The pillow-like wavy channel structure promotes fluid mixing in the boundary layer, thereby improving heat transfer. However, a significant drawback of PPHEs is boundary layer separation near the welding spots, leading to large recirculation zones. Such zones are the primary cause of increased pressure drop and reduced heat transfer efficiency. Downsizing these recirculation zones is key to improving the thermo-hydraulic performance of PPHEs.

One potential solution is the application of secondary surface structuring [1]. Among others, this can be realized using Electrohydraulic Incremental Forming (EHIF) [2]. Afsahnoudeh et al. [3] demonstrated that streamlined secondary structures, particularly those with ellipsoidal geometries, improved thermo-hydraulic efficiency by up to 6% compared to unstructured PPHEs.

Building upon previous numerical studies, this work investigated the impact of streamlined secondary structures on fluid dynamics and heat transfer within PPHEs. The complex geometries of PPHEs, with and without secondary structures, were generated using forming simulations in ABAQUS 2020. Flow and heat transfer in the inner PPHE channels were simulated using FLUENT 24.1, assuming a single-phase, incompressible, and turbulent system with constant physical properties.

Performance evaluation was based on pressure drop, heat transfer coefficients, and overall thermo-hydraulic efficiency. Additionally, a detailed analysis of the Fanning friction factor and drag coefficient was conducted for various Reynolds numbers to provide deeper insights into the fluid dynamics in the inner channels. The results of these investigations are summarized in this contribution.
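As a small worked example of two of the metrics named above, with illustrative numbers in place of the CFD results: the Fanning friction factor from the channel pressure drop, and an efficiency index comparing the structured design against an unstructured reference at equal pumping power:

    rho, u, L, d_h = 998.0, 0.5, 0.5, 0.01   # water, mean velocity, length, hydraulic diam.
    dp, dp0 = 1800.0, 2000.0                 # pressure drop: structured vs. reference, Pa
    Nu, Nu0 = 58.0, 55.0                     # Nusselt numbers from the simulations

    f  = dp  * d_h / (2.0 * rho * u**2 * L)  # Fanning friction factor
    f0 = dp0 * d_h / (2.0 * rho * u**2 * L)
    eta = (Nu / Nu0) / (f / f0)**(1.0 / 3.0) # index > 1 means a net improvement
    print(f"f = {f:.4f}, efficiency index = {eta:.3f}")
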

References

[1] M. Piper, A. Zibart, E. Djakow, R. Springer, W. Homberg, E.Y. Kenig, Heat transfer enhancement in pillow-plate heat exchangers with dimpled surfaces: A numerical study. Appl. Therm. Eng., vol. 153, pp. 142-146, 2019.

[2] E. Djakow, R. Springer, W. Homberg, M. Piper, J. Tran, A. Zibart, E.Y. Kenig, “Incremental electrohydraulic forming - A new approach for the manufacturing of structured multifunctional sheet metal blanks,” Proc. of the 20th International ESAFORM Conference on Material Forming, Dublin, Ireland, vol. 1896, 2017.

[3] R. Afsahnoudeh, A. Wortmeier, M. Holzmüller, Y. Gong, W. Homberg, E.Y. Kenig, “Thermo-hydraulic Performance of Pillow-Plate Heat Exchangers with Secondary Structuring: A Numerical Analysis,” Energies, vol. 16 (21), 7284, 2023.



Modular and Heterogeneous Electrolysis Systems: a System Flexibility Comparison

Hannes Lange1,2, Michael Große2,3, Isabell Viedt2,3, Leon Urbas1,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process to Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems

Green hydrogen will play a key role in the decarbonization of the steel sector. As a result, the demand for hydrogen in the steel industry will increase in the coming years due to the direct reduction of iron [1]. Since currently commercially available electrolysis stacks are far too small for large-scale green hydrogen production, the scaling strategy of numbering up standardized process units can provide support [2]. In addition, cost-effective production of green hydrogen requires the electrolysis system to be able to follow the electricity load, which necessitates a more efficient and flexible system. The modularization of electrolysis systems provides an approach for this [3]. The potential to include different electrolysis technologies in one heterogeneous electrolysis system can help exploit technology-specific advantages and reduce disadvantages [4]. In this paper, a design for such a heterogeneous electrolysis system is presented, built from modularized electrolysis process units and scaled up by numbering up for large-scale applications such as a direct iron reduction process. The impact of different degrees of technological and production capacity-related heterogeneity is investigated using system co-simulation of existing electrolyzer models. The direct reduction of iron for green steel production must be supplied with a constant stream of hydrogen from a fluctuating electricity profile. To reduce cost and storage losses, the hydrogen storage capacity must be minimized. For this use case, the distribution of technology and production capacity in the heterogeneous plant layout is optimized with respect to overall system efficiency and the ability to follow flexible electricity profiles. The resulting Pareto front is analyzed, and the results are compared with a conventional homogeneous electrolyzer plant layout. First results underline the benefits of combining different technologies and production capacities of individual systems in a large-scale heterogeneous electrolyzer plant.

[1] Wietschel M, Zheng L, Arens M, Hebling C, Ranzmeyer O, Schaadt A, et al. Metastudie Wasserstoff – Auswertung von Energiesystemstudien. Studie im Auftrag des Nationalen Wasserstoffrats [Meta-study hydrogen – evaluation of energy system studies. Study commissioned by the German National Hydrogen Council]. Karlsruhe, Freiburg, Cottbus: Fraunhofer ISI, Fraunhofer ISE, Fraunhofer IEG; 2021.

[2] Lange H, Klose A, Beisswenger L, Erdmann D, Urbas L. Modularization approach for large-scale electrolysis systems: a review. Sustain Energy Fuels 2024:10.1039.D3SE01588B. https://doi.org/10.1039/D3SE01588B.

[3] Lange H, Klose A, Lippmann W, Urbas L. Technical evaluation of the flexibility of water electrolysis systems to increase energy flexibility: A review. Int J Hydrog Energy 2023;48:15771–83. https://doi.org/10.1016/j.ijhydene.2023.01.044.

[4] Mock M, Viedt I, Lange H, Urbas L. Heterogenous electrolysis plants as enabler of efficient and flexible Power-to-X value chains. Comput. Aided Chem. Eng., vol. 53, Elsevier; 2024, p. 1885–90. https://doi.org/10.1016/B978-0-443-28824-1.50315-X.



CFD-Based Shape Optimization of Structured Packings for Enhancing Separation Efficiency in Distillation

Sebastian Blauth1, Dennis Stucke2, Mohamed Adel Ashour2, Johannes Schnebele1, Thomas Grützner2, Christian Leithäuser1

1Fraunhofer ITWM, Germany; 2Ulm University, Germany

In recent years, research on structured packing development for laboratory-scale separation processes has intensified; one of the main objectives is to miniaturize laboratory columns with respect to the column diameter. This reduction has several advantages, such as reduced operational costs and lower safety requirements due to the smaller amount of chemicals being used. However, a reduction in diameter also causes problems due to the increased surface-to-volume ratio, e.g., a stronger impact of heat losses or liquid maldistribution issues. There are many approaches to designing structured packings, such as using repeatedly stacked unit cells, but all of them have in common that the development of new structures and the improvement of existing ones is based on educated guesses by the engineers.
In this talk, we investigate the novel approach of applying techniques from free-form shape optimization to increase the separation efficiency of structured packings in laboratory-scale distillation columns. A simplified single-phase computational fluid dynamics (CFD) model for the mass transfer in the distillation column is used, and a corresponding shape optimization problem is solved numerically with the optimization software cashocs. The approach uses free-form shape optimization, where the shape is not parametrized, e.g., with the help of a CAD model; instead, all nodes of the computational mesh are moved to alter the shape. This allows for considerably more freedom in the packing design than the classical, parametrized approach. The goal of the shape optimization is to increase the mass transfer in the column by changing the packing's shape. The numerical shape optimization yields promising results and shows a greatly increased mass transfer for the simplified CFD model. To validate our findings, the optimized shape was additively manufactured and investigated experimentally. The experimental results are in very good agreement with the simulation-based prediction and show that the separation efficiency of the packing increased by around 20% as a consequence of the optimization. Our results show that the proposed approach of using free-form shape optimization to improve structured packings in distillation is extremely promising and will be pursued further in future research.
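
A minimal cashocs sketch, modeled on the package's introductory Poisson demo rather than the authors' CFD setup: the PDE constraint enters in weak form, and every mesh node is a design variable of the free-form optimization (the config file and boundary indices are assumptions):

    from fenics import *
    import cashocs

    cfg = cashocs.load_config("config.ini")                 # hypothetical config
    mesh, subdomains, boundaries, dx, ds, dS = cashocs.regular_mesh(25)

    V = FunctionSpace(mesh, "CG", 1)
    u, p = Function(V), Function(V)                         # state and adjoint

    # PDE constraint in weak form (Poisson stand-in for the mass-transfer model)
    e = inner(grad(u), grad(p)) * dx - Constant(1.0) * p * dx
    bcs = cashocs.create_dirichlet_bcs(V, Constant(0.0), boundaries, [1, 2, 3, 4])

    J = cashocs.IntegralFunctional(u * dx)                  # cost to be minimized
    problem = cashocs.ShapeOptimizationProblem(e, bcs, J, u, p, boundaries,
                                               config=cfg)
    problem.solve()   # deforms the mesh nodes to reduce the cost functional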



Multi-Model Predictive Control of a Distillation Column

Mehmet Arıcı1,3, Wachira Daosud2, Jozef Vargan3, Miroslav Fikar3

1Gaziantep Islam Science and Technology University, Gaziantep 27010, Turkey; 2Faculty of Engineering, Burapha University, Chonburi 20131, Thailand; 3Slovak University of Technology in Bratislava, Bratislava 81237, Slovakia

Due to the increasing demand for performance and the rising complexity of systems, classical model predictive control (MPC) techniques are often inadequate, and new applications often require modifications to the predictive control mechanism. The modifications frequently include a reformulation of the optimal control problem in order to cope with system uncertainties, external perturbations and the adverse effects of rapid changes in operating points. Moreover, the successful implementation of this optimization-driven control technique is highly dependent on an accurate and detailed model of the process, which is relatively easy to obtain for chemical processes with a simple structure. However, as the complexity of the system increases, the linear approximation used in MPC may result in poor performance or even total failure. In such a case, a nonlinear system model can be used to calculate the optimal control signal, but the lack of a reliable dynamic process model is one of the major challenges in the real-time implementation of MPC schemes. Even when a model representing the complex behavior is available, such a model can be difficult to optimize in real time.
To demonstrate the potential challenges addressed above, a binary distillation column process is chosen as the testbed. The process is multivariable and inherently nonlinear. Furthermore, a linear model approximation around a critical operating point is valid only in a small neighborhood of that point. Therefore, we propose to employ multiple models that describe the same process dynamics to differing degrees. In addition to the linear model, a multi-layered feedforward network is used for data-based modeling and constitutes an additional process model. Both models predict the state variables individually, and their outputs and constraints are applied in the MPC algorithm. Various cost function formulations are proposed to cope with multiple models. The aim is to enhance efficiency and robustness in process control by compensating for the limitations of each individual model. Additionally, an offset-free technique is applied to eliminate steady-state errors resulting from model-process mismatch.
We compare the performance of the proposed method to MPC using the full nonlinear model and also to single-model MPC methods for both the linear model and neural network model. We show that the proposed method is only slightly suboptimal with respect to the best available performance and greatly improves over individual methods. In addition, the computational load is significantly reduced when compared to the full nonlinear MPC.
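
A conceptual sketch of one possible multi-model cost formulation, in which predictions from a linear model and a network stand-in are penalized jointly against the setpoint; the models, weights and dimensions are illustrative, not the column models used in this work:

    import numpy as np
    from scipy.optimize import minimize

    N = 10                               # prediction horizon
    r = 0.95                             # setpoint (e.g. distillate purity)

    def predict_linear(u_seq, y0=0.9):
        y, out = y0, []
        for u in u_seq:                  # toy first-order linear model
            y = 0.8 * y + 0.5 * u
            out.append(y)
        return np.array(out)

    def predict_nn(u_seq, y0=0.9):
        y, out = y0, []
        for u in u_seq:                  # stand-in for the trained network
            y = 0.8 * y + 0.5 * np.tanh(u)
            out.append(y)
        return np.array(out)

    def cost(u_seq, w_lin=0.5, w_nn=0.5, w_du=0.01):
        # Multi-model objective: both predictions are tracked, plus a move penalty
        du = np.diff(u_seq, prepend=u_seq[0])
        return (w_lin * np.sum((predict_linear(u_seq) - r)**2)
                + w_nn * np.sum((predict_nn(u_seq) - r)**2)
                + w_du * np.sum(du**2))

    u_opt = minimize(cost, np.ones(N), bounds=[(0.0, 2.0)] * N).x
    print(u_opt[0])                      # receding horizon: apply the first move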



Enhancing Fault diagnosis for Chemical Processes via MSCNN with Hyperparameters Optimization and Uncertainty Estimation

Jingkang Liang, Gürkan Sin

Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark

Fault diagnosis is critical for maintaining the safety and efficiency of chemical processes, as undetected faults can lead to operational disruptions, safety hazards, and significant financial losses. Data-driven fault diagnosis methods, especially deep-learning-based methods, have been widely used in the fault diagnosis of chemical processes [1]. However, these deep learning methods often rely on manually tuning the hyperparameters to obtain an optimal model, which is time-consuming and labor-intensive [2]. Additionally, existing fault diagnosis methods typically do not consider uncertainty in their analysis, which is essential for assessing the confidence in model predictions, especially in safety-critical industries. This underscores the need for research into reliable methods that not only improve accuracy but also provide uncertainty estimates in fault diagnosis for chemical processes, and sets the premise for the research focus of this contribution.

To this end, we present an assessment of a new approach that combines a Multiscale Convolutional Neural Network (MSCNN) with hyperparameter optimization and Bootstrap-based uncertainty estimation. The MSCNN is designed to capture complex nonlinear features of chemical processes. The Tree-Structured Parzen Estimator (TPE), a Bayesian optimization method, was employed to automatically search for optimal hyperparameters, such as the number of convolutional layers and the kernel sizes in the multiscale module, minimizing manual tuning effort and ensuring higher accuracy when training the deep learning models. Additionally, the Bootstrap technique, validated earlier for deep learning applications in property prediction [3], was employed to improve model accuracy and provide uncertainty estimation, making the model more robust and reliable.
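
A sketch of the TPE search using Optuna (a common TPE implementation; the abstract does not name a library), with a stub standing in for MSCNN training:

    import math
    import optuna

    def train_mscnn(n_layers, kernel_sizes, lr):
        # Placeholder for the real training loop; returns a synthetic score
        # peaking at mid-sized models and lr near 1e-3 (kernel_sizes unused here).
        return 1.0 - 0.05 * abs(n_layers - 4) - 0.1 * abs(math.log10(lr) + 3.0)

    def objective(trial):
        n_layers = trial.suggest_int("n_conv_layers", 2, 6)
        kernel_sizes = trial.suggest_categorical("kernel_sizes", ["3-5-7", "3-7-11"])
        lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
        return train_mscnn(n_layers, kernel_sizes, lr)   # validation accuracy

    study = optuna.create_study(direction="maximize",
                                sampler=optuna.samplers.TPESampler(seed=42))
    study.optimize(objective, n_trials=100)
    print(study.best_params)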

A simulation study was carried out on the Tennessee Eastman Process dataset, which is a widely used benchmark for fault diagnosis in chemical processes. The dataset covers 21 types of faults, and each sample is a one-dimensional vector of 52 variables. In total, 26,880 samples were collected and split randomly into training, validation, and testing sets in a 0.6:0.2:0.2 ratio. Other state-of-the-art machine learning methods, including MLP, CNN, LSTM, and WDCNN, were run as benchmarks for the proposed method. Performance is evaluated based on precision, recall, number of parameters, and quality of predictions (i.e., uncertainty estimation).

The benchmarking results showed that the proposed MSCNN with TPE and Bootstrap achieved the highest accuracy among all the methods considered. Ablation studies were carried out to verify the effectiveness of TPE and Bootstrap in enhancing the fault diagnosis of chemical processes. Confusion matrices and uncertainty estimates are presented to further discuss the effectiveness of the proposed method.
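The Bootstrap side can be sketched as an ensemble of models trained on resampled data, whose prediction spread serves as the uncertainty estimate (an assumed setup; the classifier below is a placeholder for the MSCNN):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_predict(train_fn, X_tr, y_tr, X_te, n_models=10, seed=0):
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_tr), size=len(X_tr))   # resample with replacement
        probs.append(train_fn(X_tr[idx], y_tr[idx]).predict_proba(X_te))
    probs = np.stack(probs)             # shape: (n_models, n_samples, n_classes)
    return probs.mean(axis=0), probs.std(axis=0)   # prediction and its uncertainty

rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 3, size=200)   # dummy data
mean_p, std_p = bootstrap_predict(
    lambda Xb, yb: LogisticRegression(max_iter=500).fit(Xb, yb), X, y, X[:3])
print(mean_p.round(2), std_p.round(2))
```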

This work paves the way for more robust and reliable fault diagnosis systems in the chemical industry, offering a powerful tool to enhance process safety and efficiency.

References

[1] Melo et al. "Data-Driven Process Monitoring and Fault Diagnosis: A Comprehensive Survey." Processes 12.2 (2024): 251.

[2] Qin et al. "Adaptive multiscale convolutional neural network model for chemical process fault diagnosis." Chinese Journal of Chemical Engineering 50 (2022): 398-411.

[3] Aouichaoui et al. "Uncertainty estimation in deep learning‐based property models: Graph neural networks applied to the critical properties." AIChE Journal 68.6 (2022): e17696.



Machine learning-aided identification of flavor compounds with green notes in plant-based foods

Huabin Luo, Simen Akkermans, Thian Ping Wong, Ferdinandus Archie Pangestu, Jan F.M. Van Impe

BioTeC+, Chemical and Biochemical Process Technology and Control, Department of Chemical Engineering, KU Leuven, Ghent, Belgium

Plant-based foods have emerged as a global trend as consumers become increasingly concerned about sustainability and health. Despite their growing demand, the presence of off-flavors, especially green notes, significantly impacts consumer acceptance and preference. This study aims to develop a model using Machine Learning (ML) techniques to identify flavor compounds with green notes based on their molecular structure. To achieve this, a database of green compounds in plant-based foods was established by searching flavor databases and the literature. Additionally, non-green compounds with similar structures and balanced chemical classes relative to the green compounds were collected as a negative set for model training. Subsequently, molecular descriptors (MD) and molecular fingerprints (MF) were calculated from the molecular structures of these collected flavor compounds and then used as input for ML. In this study, k-Nearest Neighbor (kNN), Logistic Regression (LR), and Random Forest (RF) were used to develop models, which were then optimized and evaluated. Results indicated that green compounds exhibit a wide range of structural variations. Topological structure, electronic properties, and surface area properties were the essential MD for distinguishing green and non-green compounds. Regarding the identification of flavor compounds with green notes, the LR model performed best, correctly classifying more than 95% of the compounds in the test set, followed by the RF model with an accuracy of more than 92%. In summary, combining MD and MF as the input for ML provides a solid foundation for identifying flavor compounds with green notes. These findings provide knowledge tools for developing strategies to mitigate green off-flavors and control flavor quality in plant-based foods.
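A minimal sketch of this kind of pipeline, assuming RDKit for the fingerprints and scikit-learn for the classifier (the SMILES strings and labels below are placeholders, not the curated database):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression

smiles = ["CCCCCC=CC=O", "CC(C)CCO", "c1ccccc1O", "CCOC(C)=O"]  # placeholders
labels = [1, 1, 0, 0]                                           # 1 = green note

def featurize(smi, n_bits=1024):
    """Morgan fingerprint (a common MF choice) as a 0/1 feature vector."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(fp)

X = np.array([featurize(s) for s in smiles])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))                  # in-sample check on the toy set
```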



LSTMs and nonlinear State Space Models - are they the same?

Ashwin Chandrasekhar, Prashant Mhaskar

McMaster University, Canada

This manuscript identifies and addresses discrepancies in the implementation of Long Short-Term Memory (LSTM) neural networks for naturally occurring dynamical processes, specifically in cases claiming to capture input-output dynamic relationships using a state-space framework. While LSTMs are well-suited for these kinds of problems, there are two key issues in how LSTMs are currently structured and trained in this context.

First, the hidden and cell states of the LSTM model are often reinitialized or discarded between input-output sequences in the training dataset. This practice essentially results in a framework where the initial hidden and cell states of each sequence are not trained. However, in a typical state-space model identification process, both the model parameters and the states need to be identified simultaneously.

Second, the model structure of LSTMs differs from a traditional state-space (SS) representation. In state-space models, the current state is defined as a function of the previous state and input from the prior time step. In contrast, LSTMs use the input from the same time step, creating a structural mismatch. Moreover, for each LSTM cell, there is a corresponding hidden state and a cell state, representing the short- and long-term memory of a given state, and hence it is necessary to address this difference in structure conceptually.

To resolve these inconsistencies, two changes are proposed in this paper. First, the initial hidden and cell states for the training sequences should be trained. Second, to address the structural mismatch, the hidden and cell states from the LSTM are reformatted to match the state and data pairing that a state-space model would use.
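In PyTorch, the two corrections might look as follows (an assumed rendering of the idea, not the authors' code): the initial hidden and cell states are nn.Parameter objects that the optimizer updates, and the input sequence is shifted one step so that the state at time k is driven by the input at time k-1:

```python
import torch
import torch.nn as nn

class SSLikeLSTM(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.lstm = nn.LSTM(n_in, n_hidden, batch_first=True)
        self.h0 = nn.Parameter(torch.zeros(1, 1, n_hidden))  # trained initial states
        self.c0 = nn.Parameter(torch.zeros(1, 1, n_hidden))
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, u):                       # u: (batch, T, n_in)
        u_prev = torch.roll(u, shifts=1, dims=1)
        u_prev[:, 0, :] = 0.0                   # shift input: state uses u(k-1)
        b = u.shape[0]
        h0 = self.h0.expand(-1, b, -1).contiguous()
        c0 = self.c0.expand(-1, b, -1).contiguous()
        y, _ = self.lstm(u_prev, (h0, c0))
        return self.out(y)

model = SSLikeLSTM(n_in=1, n_hidden=16, n_out=1)
y = model(torch.randn(4, 50, 1))
print(y.shape)                                  # torch.Size([4, 50, 1])
```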

The effectiveness of these modifications is demonstrated using data generated from a simple dynamical system modeled by a Linear Time-Invariant (LTI) state-space system. The importance of these corrections is shown by testing them individually. Interestingly, the worst performance was observed in the model with only trained hidden states, followed by the unmodified LSTM model. The model that only corrected the input timing (without trained hidden and cell states) showed a significant improvement. Finally, the best results were achieved when both corrections were applied together.



Simple Regulatory Control Structure for Proton Exchange Membrane Water Electrolysis Systems

Marius Fredriksen, Johannes Jäschke

Norwegian University of Science and Technology, Norway

Effective control of electrolysis systems connected to renewable energy sources (RES) is crucial to ensure efficient and safe plant operation due to the intermittent nature of most RES. Current control architectures for Proton Exchange Membrane (PEM) electrolysis systems primarily use relatively simple control structures such as Proportional-Integral-Derivative (PID) controllers and on/off controllers. Some works introduce more advanced control structures based on Model Predictive Controllers (MPC) and AI-based control methods (Mao et al., 2024). However, few studies have been conducted on advanced regulatory control (ARC) strategies for PEM electrolysis systems. These control structures have several advantages as they offer fast disturbance rejection, are easier to scale, and are less affected by model accuracy than many of the more computationally expensive control methods, such as MPC (Cammann & Jäschke, 2024).

In this work, we propose an ARC structure for a PEM electrolysis system using the "Top-down" section of Skogestad's plantwide control procedure (Skogestad & Postlethwaite, 2007, p. 384). First, we developed a steady-state model loosely based on the PEM system presented by Crespi et al. (2023). The model was verified by comparing the behavior of the polarization curve under varying pressure and temperature. We performed step responses on different system inputs to assess their impact on the outputs and to determine suitable pairings of the manipulated and controlled variables. Thereafter, we formulated an optimization problem for the plant and evaluated various implementations of the system's cost function. Finally, we mapped the active constraint regions of the electrolysis system to identify the active constraints as a function of the system's power input. From an economic perspective, controlling the active constraints is crucial, as deviating from the optimal constraint values usually results in an economic penalty (Skogestad, 2023).

We have shown that the optimal operation of PEM electrolysis systems is close to fully constrained in all regions. This implies that constraint-switching control may be used to achieve optimal system operation. The active constraint regions found for the PEM system share several similarities with those found for alkaline electrolysis systems by Cammann and Jäschke (2024). Finally, we have presented a simple constraint-switching control structure for the PEM electrolysis system using PID controllers and selectors.
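A toy rendering of such a selector-based structure (illustrative only; the variable names, gains, and the choice of a min-selector are our assumptions, and industrial loops would add anti-windup):

```python
class PI:
    """Textbook PI controller with internal integral state."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.i = kp, ki, dt, 0.0
    def __call__(self, sp, pv):
        e = sp - pv
        self.i += self.ki * e * self.dt
        return self.kp * e + self.i

dt = 1.0
temp_ctrl = PI(kp=2.0, ki=0.1, dt=dt)   # keeps stack temperature at its limit
prod_ctrl = PI(kp=1.0, ki=0.05, dt=dt)  # tracks the hydrogen production target

def select_current(T_meas, T_max, H2_meas, H2_sp):
    u_T = temp_ctrl(T_max, T_meas)      # candidate move from temperature loop
    u_H2 = prod_ctrl(H2_sp, H2_meas)    # candidate move from production loop
    return min(u_T, u_H2)               # min-selector: most limiting loop wins

print(select_current(T_meas=74.0, T_max=80.0, H2_meas=0.8, H2_sp=1.0))
```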

References

Cammann, L. & Jäschke, J. A simple constraint-switching control structure for flexible operation of an alkaline water electrolyzer. IFAC-PapersOnLine 58, 706–711 (2024).

Crespi, E., Guandalini, G., Mastropasqua, L., Campanari, S. & Brouwer, J. Experimental and theoretical evaluation of a 60 kW PEM electrolysis system for flexible dynamic operation. Energy Conversion and Management 277, 116622 (2023).

Mao, J. et al. A review of control strategies for proton exchange membrane (PEM) fuel cells and water electrolysers: From automation to autonomy. Energy and AI 17, 100406 (2024).

Skogestad, S. Advanced control using decomposition and simple elements. Annual Reviews in Control 56, 100903 (2023).

Skogestad, S. & Postlethwaite, I. Multivariable Feedback Control: Analysis and Design. (John Wiley & Sons, 2007).



Solid streams modelling for process integration of an EAF steel plant

Maura Camerin, Alexandre Bertrand, Laurent Chion

Luxembourg Institute of Science and Technology (LIST), Luxembourg

Global warming is an urgent matter that involves and heavily influences industrial activities. Steelmaking is one of the largest sources of industrial CO2 emissions globally, with key players setting ambitious targets to reduce these emissions by 2030 and/or achieve carbon neutrality by 2050. A key factor in reaching these goals is the efficient use of waste heat, especially in industries that involve high-temperature processes. Waste heat valorisation (WHV) holds significant potential: McBrien et al. (2016) highlighted that about 28% of the heating needs in a blast furnace plant could be met using existing WHV technologies. This figure could rise to 44% if solid streams, not just gaseous and liquid ones, are included.

At present, heat recovery from hot solid streams, like semi-finished products and slag, and its transfer to cold solid streams, such as scrap and DRI, is rather uncommon. Its mathematical formulation for process integration (PI) / mathematical programming (MP) models poses unique challenges due to the need for specialized equipment (Matsuda et al., 2012).

The objective of this work is to propose novel WHV models of such solid streams, specifically formulated for PI/MP problems. In a first step, emerging technologies for slag treatment will be incorporated, and key parameters of the streams will be defined. The heat recovery potential of the slag will be modelled based on its charge weight and the recovery technology used, for example from a heat exchanger below the slag pit or using more advanced treatment technologies. The algorithm will calculate the resulting mass flow and temperature of the heat transfer medium, which can be incorporated into the heat cascade to meet the needs of cold streams such as scrap or DRI preheating.
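A back-of-the-envelope version of such a stream model (all parameter values below are illustrative assumptions, not the paper's data) shows the quantities the algorithm would return, namely the recoverable duty and the required flow of the heat-transfer medium:

```python
def slag_recovery(m_slag_kg, cp_slag=1.2, T_slag_in=1450.0, T_slag_out=400.0,
                  eta=0.6, cp_medium=1.05, T_med_in=25.0, T_med_out=600.0):
    """Return (recovered heat in kJ, required medium mass in kg) per charge.

    cp values in kJ/(kg K), temperatures in deg C, eta = recovery efficiency.
    """
    q = eta * m_slag_kg * cp_slag * (T_slag_in - T_slag_out)   # kJ recovered
    m_medium = q / (cp_medium * (T_med_out - T_med_in))        # e.g. air, in kg
    return q, m_medium

q, m_air = slag_recovery(m_slag_kg=20000)                      # 20 t slag charge
print(f"duty ~ {q/1e6:.1f} GJ, medium mass ~ {m_air/1000:.1f} t per charge")
```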

The expected outcome is an improvement of solid streams models and, as such, more precise process integration results. The improved quantification of waste heat valorisation, especially through the inclusion of previously unconsidered streams, will be of significant benefit to support the decarbonization of the steel industry.

References:

Matsuda, K., Tanaka, S., Endou, M., & Iiyoshi, T. (2012). Energy saving study on a large steel plant by total site based pinch technology. Applied Thermal Engineering, 43, 14–19.

McBrien, M., Serrenho, A. C., & Allwood, J. M. (2016). Potential for energy savings by heat recovery in an integrated steel supply chain. Applied Thermal Engineering, 103, 592–606. https://doi.org/https://doi.org/10.1016/j.applthermaleng.2016.04.099



Design of Microfluidic Mixers using Bayesian Shape Optimization

Rui Miguel Grunert da Fonseca, Fernando Pedro Martins Bernardo

CERES, Department of Chemical Engineering, University of Coimbra, Portugal

Mixing and mass transfer are fundamental aspects of many chemical and biological processes. For instance, in the synthesis of nanoparticles, where a solute solution is mixed with an antisolvent to induce nanoprecipitation, highly efficient and controlled mixing conditions are required to obtain particles with low size variability. Specialized mixing technologies, such as microfluidic mixing, are therefore used. Microfluidic mixing is a continuous process in which passive mixing of two different streams of fluid takes place in micro-sized channels. The geometry and small volume of the device enable very fast mixing, which in turn reduces mass transfer limitations during the nanoparticle formation process. Several different mixer geometries, such as the toroidal and herringbone micromixer [1], have already been used for nanoparticle production. Since mixer geometry plays such a vital role in mixing performance, mathematical optimization of that geometry is clearly a tool to exploit in order to come up with superior designs.
In this work, a methodology for shape optimization of micromixers using Computational Fluid Dynamics (CFD) and Bayesian Optimization is presented. It consists of the sequential performance evaluation of mixer geometries defined through geometric variables, such as angles and lengths, with predefined bounds. The performance of a given geometry is evaluated through CFD simulation, using OpenFOAM software, of the Villermaux-Dushman reaction system [2]. This system consists of two competing reactions: one quasi-instantaneous acid-base reaction and a very fast redox reaction. Mixing time can therefore be inferred by analyzing the reaction selectivity at the mixer's outlet. Using Bayesian Optimization, the geometric domain can be explored with an emphasis on maximizing the defined objective functions. This is done by assigning probabilistic functions to each objective based on previously attained data. An acquisition function is then optimized in order to determine the next geometry to be evaluated, balancing exploration and exploitation. This approach is especially appropriate when objective function evaluation is expensive, which is the case for CFD simulations. This methodology is very flexible and can be applied to many other equipment design problems. Its main challenge is the definition of the optimization problem and its domain. This is similar to network design problems, where the choice of the system's superstructure has a great impact on problem solvability. The domain must include as many viable solutions as possible while minimizing problem dimensionality and avoiding redundancy of solutions.
In this work, a case-study for the optimization of the toroidal mixer geometry is presented for three different operating conditions and seven geometric degrees of freedom. Both pressure drop and mixing time were considered as objective functions and the respective Pareto fronts were obtained. The trade-offs between objective functions were analyzed for each case and the general design features are presented.
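To illustrate the optimization side, the sketch below uses scikit-optimize's gp_minimize in place of the authors' CFD-in-the-loop setup; the three geometric variables, their bounds, and the cheap stand-in for the OpenFOAM evaluation are all assumptions:

```python
from skopt import gp_minimize

def evaluate_geometry(x):
    angle, length, width = x
    # Placeholder for: mesh the mixer, run the Villermaux-Dushman CFD case
    # in OpenFOAM, and infer mixing time from the outlet selectivity.
    return (angle - 35.0) ** 2 / 100 + (length - 2.0) ** 2 + (width - 0.4) ** 2

res = gp_minimize(evaluate_geometry,
                  dimensions=[(10.0, 80.0),   # channel angle, degrees
                              (0.5, 5.0),     # loop length, mm
                              (0.1, 1.0)],    # channel width, mm
                  n_calls=30, acq_func="EI", random_state=0)
print("best geometry:", res.x, "objective:", res.fun)
```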

[1] C. Webb et al, “Using microfluidics for scalable manufacturing of nanomedicines from bench to gmp: A case study using protein-loaded liposomes,” International Journal of Pharmaceutics, vol. 582, p. 119266, May 2020.

[2] J.-M. Commenge and L. Falk, “Villermaux–Dushman protocol for experimental characterization of micromixers,” Chemical Engineering and Processing: Process Intensification, vol. 50, no. 10, pp. 979–990, Oct. 2011.



Solubility prediction of lipid compounds using machine learning

Agustin Porley Santana1, Gabriel Gutierrez1, Soledad Gutiérrez Parodi1, Jimena Ferreira1,2

1Grupo de Ingeniería de Sistemas Químicos y de Procesos, Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Heterogeneous Computing Laboratory, Instituto de Computación, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay

Aligned with the principles of biorefinery and circular economy, biomass waste valorization not only reduces the environmental impact of production processes but also presents economic opportunities for companies. Various natural lipids with complex chemical compositions are recovered from different types of biomass and further processed, such as essential oils from citrus waste and eucalyptus oil from wood.

In this context, wool grease, a complex mixture of esters of steroid and aliphatic alcohols with fatty acids, is a byproduct of wool washing [1]. Its derivatives, including lanolin, cholesterol, and lanosterol, differ in their methods of extraction and market value.

Purification of the high-value products can be achieved using crystallization, chromatography, liquid-liquid extraction, or solid-liquid extraction. The interaction of the selected compound with a liquid phase, known as a solvent or diluent (depending on the case), is a crucial aspect in the design of these processes. To achieve an effective separation of target components, it is crucial to identify the solubility of the compounds in a solvent. Given the practical difficulties in determining solubility and the vast array of natural compounds, a comprehensive bibliographic source for their solubilities in different solvents remains elusive. Employing machine learning [2] is an alternative for predicting the solubility of the target compound in alternative solvents.

This work focuses on the construction of a model to predict the solubility of several lipids in various solvents, using experimental data obtained from scientific articles and handbooks. Almost 800 data points were collected, covering 6 solutes and 34 solvents. As a first step, 21 properties were evaluated as input variables of the model, including properties of the solute, properties of the solvent, and the temperature.

After data preprocessing, the feature selection step uses the Pearson and Spearman correlations between input variables to select the relevant ones. The model is obtained using Random Forest and is compared to a linear regression model. The dataset was divided into training and validation sets in an 80-20 split. Two splitting schemes are analysed: using different compounds in the training and validation sets (extrapolation model), and a random separation of the sets (interpolation model).

The performance of the models obtained with the full and the reduced input variable sets is compared, for both the interpolation and extrapolation settings.

In all cases, the Random Forest model performs better than the linear one. The preliminary results show that the model using the reduced set of input variables performs better than the one using the full set.
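The workflow can be sketched with synthetic data standing in for the collected solubility points (a hedged illustration; the correlation threshold and model settings are assumptions):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 21))                    # 21 candidate input properties
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=800)  # synthetic solubility

# Feature screening: keep inputs whose |Spearman rho| with y exceeds a threshold.
keep = [j for j in range(X.shape[1]) if abs(spearmanr(X[:, j], y)[0]) > 0.1]
Xr = X[:, keep]

X_tr, X_va, y_tr, y_va = train_test_split(Xr, y, test_size=0.2, random_state=0)
for model in (RandomForestRegressor(random_state=0), LinearRegression()):
    print(type(model).__name__, round(model.fit(X_tr, y_tr).score(X_va, y_va), 3))
```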

References

[1] S. Gutiérrez, M. Viñas (2003). Anaerobic degradation kinetics of a cholesteryl ester. Water Science and Technology, 48(6), 141-147.

[2] P. Daoutidis, J. H. Lee, S. Rangarajan, L. Chiang, B. Gopaluni, A. M. Schweidtmann, I. Harjunkoski, M. Mercangöz, A. Mesbah, F. Boukouvala, F. V. Lima, A. del Rio Chanona, C. Georgakis (2024). Machine learning in process systems engineering: Challenges and opportunities, Computers & Chemical Engineering, 181, 108523.



Refining Equation-Based Model Building for Practical Applications in Process Industry

Shota Kato, Manabu Kano

Kyoto University, Japan

Automating physical model building from literature databases holds significant potential for advancing the process industry, particularly in the rapid development of digital twins. Digital twins, based on accurate physical models, can effectively simulate real-world processes, yielding substantial operational and strategic benefits. We aim to develop an AI system that automatically extracts relevant information from documents and constructs accurate physical models.
One of the primary challenges is constructing practical models from extracted equations. The existing method [Kato and Kano, 2023] builds physical models by combining equations to satisfy two criteria: ensuring all specified variables are included and matching the number of degrees of freedom with the number of input variables. While this approach excels at quickly generating models that meet these requirements, it does not guarantee their solvability, leading to the inclusion of impractical models. This issue underscores the need for a robust validation mechanism.
To address this issue, we propose a filtering method that refines models generated by the approach above to identify solvable models. This method evaluates models by comparing variables across different equations, efficiently identifying redundant or conflicting equations to ensure that only coherent and functional models are retained. Furthermore, we generated an evaluation dataset comprising physical models relevant to chemical engineering and applied our proposed method. The results demonstrated that our method accurately identifies solvable models, significantly enhancing the automated model-building approach from the literature.
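A toy illustration of the solvability check (an assumed formulation using SymPy in place of the paper's own filtering logic): given candidate equations and specified inputs, the degrees of freedom are compared and a symbolic solve flags redundant or conflicting combinations:

```python
import sympy as sp

T, P, C, k = sp.symbols("T P C k", positive=True)

candidate_model = [
    sp.Eq(k, sp.exp(-1000 / T)),   # equilibrium constant from temperature
    sp.Eq(C, k * P),               # Henry-type relation for dissolved gas
]
inputs = {T: 300.0, P: 2.0}        # specified variables
unknowns = [k, C]

# Degrees-of-freedom check: number of equations must equal number of unknowns.
assert len(candidate_model) == len(unknowns)

solution = sp.solve([eq.subs(inputs) for eq in candidate_model],
                    unknowns, dict=True)
print("solvable" if solution else "discard: redundant or conflicting")
```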
However, our method faces challenges mainly when the same variable is defined differently under varying conditions. For example, the concentration of a gas dissolved in a liquid might be determined by temperature via an equilibrium constant or by pressure using Henry's law. If extracted equations include these equations, the model-building algorithm may include both equations in the output models; then, the proposed method may struggle to filter models precisely. Another limitation is the necessity to compare multiple equations to determine the model's solvability. In cases where several reaction rate equations and corresponding rate constants are available, all possible combinations must be evaluated. This strategy can be complex and cannot be efficiently handled by our current methodology without additional enhancements.
In summary, aiming to automate physical model building, we proposed a method for refining the models generated by an existing approach. Our method successfully identified solvable models from sets that included redundant ones. Future work will focus on refining our algorithms to handle complexities such as variables defined under different conditions and integrating advanced natural language processing technologies to standardize notation and interpret nuanced relationships between variables, ultimately achieving truly automated physical model building.

References
Kato and Kano, "Efficient physical model building algorithm using equations extracted from documents," Computer Aided Chemical Engineering, 52, pp. 151–156, 2023.



Solar Desalination and Porphyrin Mediated Vis-Light Photocatalysis in Decolouration of Dyes as Biological Analogues Applied in Advanced Water Treatment

Evans Martin Nkhalambayausi Chirwa, Fisseha Andualem Bezza, Osemeikhain Ogbeifun, Shepherd Masimba Tichapondwa, Wesley Lawrence, Bonhle Manoto

University of Pretoria, South Africa

Engineering can be made simpler and more impactful by observing and understanding how organisms in nature solve pressing problems. For example, scientists around the world have observed green plants thriving without organic food inputs, using the complex photosynthesis process to kick-start a biochemical food chain. Two case studies are presented here, based on research under way at the University of Pretoria: solar desalination of seawater using plant-based carbon material as solar absorbers, and solar or vis-light photocatalysis using porphyrin-based BiOCl and BiOIO3 compounds that emulate the function of chlorophyll in advanced water treatment and recovery. In the study on solar desalination using 3D-printed Graphene Oxide (GO), 82% water recovery has thus far been achieved using a simple GO-Black TiO2 monolayer as a solar absorber supported on cellulose nanocubes. In preparation for possible scale-up of the process, methods are being investigated for the inhibition or reversal of salting on the absorber surface, which inhibits energy transfer. For the vis-light photocatalytic process for the discoloration of dye, a Porphyrin@Bi12O17Cl2 system was used to successfully degrade methyl blue dye in batch experiments, achieving up to 98% degradation within 120 minutes. These results show that further advances and more efficient engineered systems can be achieved through observation of nature and of how these systems have survived over billions of years. Based on these observations, the Water Utilisation Group at the University of Pretoria has studied and developed fundamental processes for the degradation and remediation of unwanted compounds such as disinfection byproducts (DBPs), volatile organic compounds (VOCs) and pharmaceutical products in water.



Diagnosing Faults in Wastewater Systems: A Data-Driven Approach to Handle Imbalanced Big Data

Morteza Zadkarami1, Krist Gernaey2, Ali Akbar Safavi1, Pedram Ramin2

1Shiraz University, Iran, Islamic Republic of; 2Technical University of Denmark (DTU), Denmark

Process monitoring is critical in industrial settings to ensure system functionality, making it essential to identify and understand the causes of any faults that occur. Although considerably more research focuses on fault detection, significantly less attention has been devoted to fault diagnosis. Typically, faults arise either from abnormal instrument behavior, suggesting the need for calibration or replacement, or from process faults indicating a malfunction within the system [1]. A key objective of this study is to apply the proposed process fault diagnosis methodology to a benchmark that closely mirrors real-world conditions. In fact, we propose a fault diagnosis framework for a wastewater treatment plant (WWTP) that effectively addresses the challenges of imbalanced big data typically found in large-scale systems. Fault scenarios were simulated using the Benchmark Simulation Model No.2 (BSM2) [2], a highly regarded tool that closely mimics the operations of a real-world WWTP. Using BSM2, a dataset was generated spanning 609 days and comprising 876,960 data points across 31 process parameters.

In contrast to our previous research [3], [4], which primarily focused on fault detection frameworks for imbalanced big data in the BSM2, this study extends the approach to include a comprehensive fault diagnosis structure. Specifically, it determines whether a fault has occurred and, if so, identifies whether the fault is due to an abnormality in the instrument, the process, or both simultaneously. A major challenge lies in the highly imbalanced nature of the dataset: 87.82% of the data represent normal operating conditions, while 6% reflect instrument faults, 6.14% correspond to process faults, and less than 0.05% involve concurrent faults in both the process and instruments. To address this imbalance, we evaluated multiple deep network architectures and various learning strategies to identify a robust fault diagnosis framework that achieves acceptable accuracy across all fault scenarios.
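One standard ingredient for handling such imbalance (shown here as a hedged sketch; the abstract does not state which strategy the study finally selected) is inverse-frequency class weighting of the training loss:

```python
import torch
import torch.nn as nn

# Class shares from the abstract: normal, instrument, process, concurrent.
shares = torch.tensor([0.8782, 0.0600, 0.0614, 0.0005])
weights = 1.0 / shares
weights = weights / weights.sum()      # normalized inverse-frequency weights

criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(8, 4)             # dummy batch: 8 samples, 4 classes
targets = torch.randint(0, 4, (8,))
print(criterion(logits, targets))      # rare classes now dominate the loss
```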

References:

[1] Liu, Y., Ramin, P., Flores-Alsina, X., & Gernaey, K. V. (2023). Transforming data into actionable knowledge for fault detection, diagnosis and prognosis in urban wastewater systems with AI techniques: A mini-review. Process Safety and Environmental Protection, 172, 501-512.

[2] Al, R., Behera, C. R., Zubov, A., Gernaey, K. V., & Sin, G. (2019). Meta-modeling based efficient global sensitivity analysis for wastewater treatment plants–An application to the BSM2 model. Computers & Chemical Engineering, 127, 233-246.

[3] Zadkarami, M., Gernaey, K. V., Safavi, A. A., & Ramin, P. (2024). Big Data Analytics for Advanced Fault Detection in Wastewater Treatment Plants. In Computer Aided Chemical Engineering (Vol. 53, pp. 1831-1836). Elsevier.

[4] Zadkarami, M., Safavi, A. A., Gernaey, K. V., & Ramin, P. (2024). A Process Monitoring Framework for Imbalanced Big Data: A Wastewater Treatment Plant Case Study. In IEEE Access (Vol. 12, pp. 132139-132158). IEEE.



Industrial Time Series Forecasting for Fluid Catalytic Cracking Process

Qiming Zhao1, Yaning Zhang2, Tong Qiu1

1Department of Chemical Engineering, Tsinghua University, Beijing 100084, China; 2PetroChina Planning & Engineering Institute, Beijing 100083, China

Abstract

Industrial process systems generate complex time-series data, challenging traditional regression models that assume static relationships and struggle with system uncertainty and process drifts. These models may also be sensitive to noise and disturbances in the training data, potentially leading to unreliable predictions when encountering fluctuating inputs.

To address these limitations, researchers have explored various algorithms in time-series analysis. The wavelet transform (WT) has emerged as a powerful tool for analyzing non-stationary time series by representing them with localized signals. For instance, Hosseini et al. applied WT and feature extraction to improve gas-liquid two-phase flow meters in oil and petrochemical industries, successfully classifying flow regimes and calculating void fraction percentages with low errors. Another approach to modeling uncertainties in observations is through stochastic processes, with the Gaussian process (GP) gaining popularity due to its flexibility. Bradford et al. demonstrated its effectiveness by proposing a GP-based nonlinear model predictive control algorithm that considered state-dependent uncertainty, which they verified in a challenging semi-batch bioprocess case study. Recent research has explored the integration of WT and GP. Band et al. developed a hybrid model combining these techniques, which accurately predicted groundwater levels in arid areas. However, much of the current research focuses on one-step ahead forecasts rather than comprehensive process modeling.

This research explores a novel predictive modeling framework that integrates wavelet features with GP regression, thus creating a more robust predictive model capable of extracting both temporal and cross-variable information from the data while adapting to changing patterns over time. The effectiveness of this hybrid method is verified using an industrial dataset from fluid catalytic cracking (FCC), a complex petrochemical process crucial for fuel production. The results demonstrate the method’s robustness in delivering accurate and reliable predictions despite the presence of noise and system variability typical in industrial settings. Percentage yields are predicted with a mean absolute percentage error (MAPE) of less than 1% for critical products, meeting the requirements for industrial application in modeling and optimization.
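The hybrid idea can be sketched as follows (our minimal rendering with a synthetic series standing in for the FCC data; the window length, wavelet, and kernel choice are assumptions):

```python
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(600)
series = np.sin(t / 30.0) + 0.1 * rng.normal(size=t.size)   # stand-in signal

win = 64
X, y = [], []
for i in range(series.size - win - 1):
    coeffs = pywt.wavedec(series[i:i + win], "db4", level=3)
    X.append(np.concatenate(coeffs))    # wavelet features of the window
    y.append(series[i + win])           # next value to predict
X, y = np.array(X), np.array(y)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:400], y[:400])
mean, std = gp.predict(X[400:], return_std=True)
print("MAE:", np.abs(mean - y[400:]).mean(), "avg std:", std.mean())
```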

References

[1] Band, S. S., Heggy, E., Bateni, S. M., Karami, H., Rabiee, M., Samadianfard, S., Chau, K.-W., & Mosavi, A. (2021). Groundwater level prediction in arid areas using wavelet analysis and Gaussian process regression. Engineering Applications of Computational Fluid Mechanics, 15(1), 1147–1158. https://doi.org/10.1080/19942060.2021.1944913

[2] Bradford, E., Imsland, L., Zhang, D., & del Rio Chanona, E. A. (2020). Stochastic data-driven model predictive control using Gaussian processes. Computers & Chemical Engineering, 139, 106844. https://doi.org/10.1016/j.compchemeng.2020.106844

[3] Hosseini, S., Taylan, O., Abusurrah, M., Akilan, T., Nazemi, E., Eftekhari-Zadeh, E., Bano, F., & Roshani, G. H. (2021). Application of Wavelet Feature Extraction and Artificial Neural Networks for Improving the Performance of Gas-Liquid Two-Phase Flow Meters Used in Oil and Petrochemical Industries. Polymers, 13(21), Article 21. https://doi.org/10.3390/polym13213647



Electrochemical conversion of CO2 into CO: analysis of the influence of the electrolyzer type, operating parameters, and separation stage

Luis Vaquerizo1,2, David Danaci2,3, Bhavin Siritanaratkul4, Alexander J Cowan4, Benoît Chachuat2

1Institute of Bioeconomy, University of Valladolid, Spain; 2The Sargent Centre for Process Systems Engineering, Imperial College, UK; 3I-X Centre for AI in Science, Imperial College, UK; 4Department of Chemistry, Stephenson Institute for Renewable Energy, University of Liverpool, UK

The electrochemical conversion of CO2 into CO is an opportunity for the decarbonization of the chemical industry, turning the current linear utilization scheme of carbon into a more cyclic scheme. Compared to other existing CO2 conversion technologies, the electrochemical reduction of CO2 into CO benefits from the fact that it is a room-temperature process, it does not depend on the physical location of the plant, and its energy efficiency is in the range of 40-50%. Although some techno-economic analyses have already assessed the potential of this technology, finding that the CO production cost is mainly influenced by the CO2 cost, the availability and price of electricity, and the maturity of the carbon capture technologies, none of them addressed the effect of the electrolyzer type, the operating conditions, and the separation stage on the final production cost. This work determines the impact of the electrolyzer type (either AEM or BPM), the operating parameters (current density and CO2 inlet flow), and the technology used for product separation (either PSA or, for the first time for this technology, cryogenic distillation) on the annual production cost of CO, using experimental data for CO2 electrolysis. The main findings of this work are that the use of either AEM or BPM electrolyzers and either PSA or cryogenic distillation yields a very similar annual production cost (around 25 MM$/y for a 100 t/d CO plant), and that operation beyond current densities of 150 mA/cm2 and CO2 inlet flowrates of 3.2 (AEM) and 1 (BPM) NmL/min/cm2 only slightly affects the annual production cost. For all the possible operating cases (AEM or BPM electrolyzer, and PSA or cryogenic distillation), the minimum production cost is reached when maximizing the CO productivity in the electrolyzer. Moreover, it is found that although the downstream alternative has minimal influence on the CO production cost, a downstream process based on PSA separation seems, at least at this scale, preferable, since the cryogenic distillation alternative also requires a final PSA to separate the column overhead products. Finally, a minimum selling price of 868 $/t CO is estimated in this work, considering a CO2 cost of 40 $/t and an electricity cost of 0.03 $/kWh. Although this value is higher than the current CO selling price (600 $/t), there is some margin for improvement if the current electrolyzer CAPEX and lifetime are improved.



Enhancing Pharmaceutical Development: Process Modelling and Control Strategy Optimization for Liquid Drug Product Multiphase Mixing and Milling Processes

Noor Al-Rifai, Guoqing Wang, Sushank Sharma, Maxim Verstraeten

Johnson & Johnson Innovative Medicine, Belgium

Recent regulatory trends from health authorities advocate for greater understanding of the drug product and process, enabling more efficient drug development, supply chain agility and the introduction of new and challenging therapies and modalities. Traditional drug product process development and validation relies on fully experimental design spaces with limited insight into what drives process performance, where every change (in equipment, material attributes, or scale) triggers the requirement for a new experimental design space and a post-approval submission, and risks issues with process performance. Quality-by-Design in process development and manufacturing helps to achieve these aims, aided by sufficient mechanistic understanding and resulting in flexible yet robust control strategies.

Mechanistic correlations and computational fluid dynamics simulations provide digital input towards demonstrating process robustness, scale-up and transfer; particularly for pharmaceutical mixing and milling setups involving complex and unconventional geometries.

This presentation will show synergistic workflows, utilizing mechanistic correlations and/or CFD and PAT to gain process understanding, optimize development work and construct control strategies for pharmaceutical multiphase mixing and milling processes.



Assessing operational resilience within the natural gas monetisation network for enhanced production risk management: Qatar as a case study

Noor Yusuf, Ahmed AlNouss, Roberto Baldacci, Tareq Al-Ansari

Hamad Bin Khalifa University, Qatar

The increased turbulence in energy consumer markets has imposed risks on energy suppliers regarding sustaining markets and profits. While risk mitigation strategies are essential when assessing new projects, retrofitting existing industrially mature infrastructure to adapt to changing market conditions imposes added cost and time. For the state of Qatar, a gas-dependent economy, the natural gas industry is highly vulnerable to exogenous uncertainties in final markets, including demand and price volatility. On the other hand, endogenous uncertainties could hinder a project's profitability and sustainability targets due to poor proactive planning in the early design stages. Hence, in order to understand the risk management capabilities of an industrially mature network, it is crucial to assess resilience at the production-system and overall network levels. This is especially important in natural gas supply chains, as failure in the production part would influence the subsequent components, represented by storage, shipping, and agreed volume sales to markets. This work evaluates the resilience of the existing Qatari natural gas monetisation infrastructure (i.e., production) by addressing the system's failure to satisfy the targeted production capacity due to process-level disruptions and/or final market conditions. The network addressed herein comprises 7 direct and indirect natural gas utilisation industrial clusters (i.e., natural gas liquefaction, ammonia and urea, methanol and MTBE, power, and gas-to-liquids). Process technical data simulated using Aspen Plus, along with calculated emissions and economic data, were used to estimate the resilience of individual processes and of the overall network under different endogenous disruption scenarios. First, historical and forecasted demand and prices were used to simulate the optimal natural gas allocation to processes over a planning period from 2000 to 2032. Secondly, the resilience indices of the processes within the baseline allocation strategy were investigated throughout the planning period. Overall, a resilience index value below 100% indicates low process resilience towards changing endogenous and/or exogenous conditions. Within the investigated network, the annual resilience index was enhanced from 35% to 90% for the ammonia process, and from 36% to 84% for the methanol process. The increase in the resilience index was mainly due to the introduction of operational bounds and of forecasted demand and price data, which aided efficient, resilient process management. Finally, qualitative recommendations were summarised to aid decision-makers in planning under different economic and environmental scenarios, so as to maintain the resilience of the network despite fluctuations imposed by unavoidable external factors, including climate change, policy change, and demand fluctuations.



Membrane-based Blue Hydrogen Production in Sub-Ambient Temperature: Process Optimization, Techno-Economic Analysis and Life Cycle Assessment

Jiun Yun1, Boram Gu1, Kyunhwan Ryu2

1Chonnam National University, Korea, Republic of (South Korea); 2Sunchon National University, Korea, Republic of (South Korea)

In 2022, 62% of hydrogen was produced using natural gas, while only 0.1% came from water electrolysis [1]. This suggests that an immediate shift to green hydrogen is infeasible in the short- to medium-term, which makes blue hydrogen production crucial. Auto-thermal reforming (ATR) processes, which combine steam methane reforming reaction and partial oxidation, offer high energy efficiency by eliminating additional heating. During the ATR process, CO2 can be captured from the shifted syngas, which consists mainly of a CO2/H2 binary mixture.

Recently, gas separation membranes have been gaining significant attention for their high energy efficiency for CO₂ capture. For instance, the Polaris CO₂-selective membrane, specifically designed to separate CO₂/H₂ mixtures, is known to offer a high CO₂ permeance of 1000 GPU and a CO₂/H₂ selectivity of 10. Furthermore, sub-ambient temperatures are reported to enhance its CO₂/H₂ selectivity up to 20, enabling the production of high-purity liquid CO₂ (over 98%) [1].

Hydrogen recovery rates are significantly affected by the H₂ purity at the PSA inlet and the pressure of the tail gas [2], which are dependent on the selected capture location. In the ATR process, CO2 capture can be applied to shifted syngas and PSA tail gas. Therefore, optimal location selection is crucial for improving hydrogen production efficiency.

In this study, an integrated process combining ATR with a sub-ambient-temperature membrane process for CO₂ capture was designed using gPROMS. Two different capture locations were compared, and the economic feasibility of the integrated process was evaluated. The ATR process was developed as a flowsheet-based model, while the membrane unit was built using equation-based custom modeling, consisting of balance and permeation models. Concentration polarization effects, which play a significant role in performance when membrane permeance is high, were also accounted for. In both cases, the CO₂ capture rate was fixed at 90%.
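For reference, a custom membrane model of this kind typically encodes, at minimum, a solution-diffusion permeation law per component; a generic form (our notation, not necessarily the exact equations of this study) is:

```latex
J_i = P_i \left( x_i\, p_{\mathrm{feed}} - y_i\, p_{\mathrm{perm}} \right),
\qquad
\alpha_{\mathrm{CO_2}/\mathrm{H_2}} = \frac{P_{\mathrm{CO_2}}}{P_{\mathrm{H_2}}}
```

where J_i is the trans-membrane flux of component i, P_i its permeance, x_i and y_i the feed- and permeate-side mole fractions, and p_feed, p_perm the respective pressures.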

In the membrane-based CO2 capture process, the inlet gas was cooled to -35°C using a cooling cycle, increasing membrane selectivity up to 20. This enables energy savings and the capture of high-purity liquid CO₂. Our simulation results demonstrated that the H₂ purity at the PSA inlet reached 92% when CO2 was captured from syngas, and this high H₂ purity improved the PSA recovery rate. For PSA tail gas capture, the CO₂ capture rate was 98.8%, with only a slight increase in the levelized cost of hydrogen (LCOH). However, in the syngas capture case, higher capture rates led to increased costs. Overall, syngas capture achieved a lower LCOH due to the higher PSA recovery rate.

Further modeling of the PSA unit will be performed to optimize the integrated process and perform a CO₂ life cycle assessment. Our results will provide insights into the potential of sub-ambient membrane gas separation for blue hydrogen production and guidelines for the design and operation of PSA and gas separation membranes in the ATR process.

References

[1] International Energy Agency, Global Hydrogen Review 2023, 2023.

[2] C.R. Spínola Franco, Pressure Swing Adsorption for the Purification of Hydrogen, Master's Dissertation, University of Porto, 2014.



Dynamic Life Cycle Assessment in Continuous Biomanufacturing

Ada Robinson Medici1, Mohammad Reza Boskabadi2, Pedram Ramin2, Seyed Soheil Mansouri2, Stavros Papadokonstantakis1

1Institute of Chemical, Environmental and Bioscience Engineering TU Wien,1060 Wien, Austria; 2Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Process Systems Engineering (PSE) has seen rapid advancements since its inception in the 1970s. Currently, there is an increasing demand for tools that enable the integration of sustainability metrics into process simulation to cope with today’s grand challenges. In recent years, continuous manufacturing has gained attention in biologics production due to its ability to improve process monitoring and ensure consistent product quality. This work introduces a Python-based interface that integrates process simulation and control with cradle-to-gate Life Cycle Assessment, resulting in dynamic process inventories and thus in dynamic life cycle inventories and impact assessments (dLCA), with the potential to improve environmental assessment and sustainability metrics in the biopharmaceutical industry.

This framework utilizes the open-source tool Activity Browser, a graphical user interface for Brightway 2.5, which supports the analysis of environmental impacts using LCA (Mutel, 2017). The tool allows real-time tracking of the environmental inventories of the foreground process and of its impact assessment. Unlike traditional sustainability indicators, such as the E-factor, which focuses only on waste generation, the introduced approach can retrieve comprehensive environmental inventories from the ecoinvent 3.9.10 database to calculate mid-point (e.g., global warming potential) and end-point (e.g., damage to ecosystems) LCA indicators according to the ReCiPe framework, a widely recognized method in life cycle impact assessment.
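Schematically, the interface's per-time-step impact query could reduce to a Brightway call of the following shape (a hedged sketch: the project, database, and method names are placeholders and assume a locally licensed ecoinvent installation):

```python
import brightway2 as bw

bw.projects.set_current("biomanufacturing_dlca")    # placeholder project name
act = bw.Database("ecoinvent 3.9.10").random()      # placeholder foreground activity
method = ("ReCiPe Midpoint (H)", "climate change", "GWP100")  # placeholder method id

lca = bw.LCA({act: 1.0}, method)   # demand: 1 unit of the activity at this time step
lca.lci()                          # build the life cycle inventory
lca.lcia()                         # characterize with the chosen impact method
print(lca.score)
```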

This study utilizes the KTB1 benchmark model as a dynamic simulation model for continuous biomanufacturing, which serves as a decision-support tool for evaluating process design, optimization, monitoring, and control strategies in real time (Boskabadi et al., 2024). KTB1 is a comprehensive dynamic model developed in MATLAB-Simulink covering upstream and downstream components, providing an integrated production process perspective. The functional unit for this study is the production of the typical maintenance dose commonly found in pharmaceutical formulations, 40 mg of pure Active Pharmaceutical Ingredient (API) Lovastatin, under dynamic manufacturing conditions.

Preliminary results show that control decisions can have a significant impact on the dynamic and integral LCA profile for selected resource and energy-related Life Cycle Impact Assessment (LCIA) categories. By integrating LCIA into the control framework, a multi-objective model predictive control (MO-MPC) is enabled with the potential to dynamically adjust process parameters and optimize process conditions based on real-time environmental and process data (Sohn et al., 2020). This work lays the foundation for an advanced computational platform for assessing sustainability in biomanufacturing, positioning it as a critical tool in the industry's ongoing transition toward more environmentally responsible continuous production methods.

Importantly, open-source tools ensure transparency, adaptability, and accessibility, facilitating collaboration and further development within the scientific community.

References

Boskabadi, M.R., Ramin, P., Kager, J., Sin, G., Mansouri, S.S., 2024. KT-Biologics I (KTB1): A dynamic simulation model for continuous biologics manufacturing. Computers & Chemical Engineering 188, 108770. https://doi.org/10.1016/j.compchemeng.2024.108770

Mutel, C., 2017. Brightway: An open source framework for Life Cycle Assessment. JOSS 2, 236. https://doi.org/10.21105/joss.00236

Sohn, J., Kalbar, P., Goldstein, B., Birkved, M., 2020. Defining Temporally Dynamic Life Cycle Assessment: A Review. Integrated Environmental Assessment and Management 16, 314–323. https://doi.org/10.1002/ieam.4235



Multi-level modeling of reverse osmosis process based on CFD results

Yu-hyeok Jeong, Boram Gu

Chonnam National University, Korea, Republic of (South Korea)

Reverse osmosis (RO) is a membrane separation process that is widely used in desalination and wastewater treatment [1]. However, solutes blocked by the membrane can accumulate near it, causing concentration polarization (CP), which hinders RO separation performance and reduces energy efficiency [2]. Structures called spacers are added between membrane sheets to create flow channels; they also induce disturbed flow that mitigates CP. Different types of spacers exhibit different hydrodynamic behavior, and understanding this is essential to designing the optimal spacer.

Computational fluid dynamics (CFD) can be a useful tool for theoretically analyzing the impact of these spacers, through which the effect of the geometric characteristics of each spacer on RO performance can be understood. However, due to its large computing-resource requirements, its use is limited to small-scale RO units. Alternatively, mathematical modeling of RO modules can help to understand the effect of spacers on process variables and separation performance by incorporating appropriate constitutive model equations. Despite the advantage of low computing-resource demands even for large-scale simulations, the impact of spacers is then approximated using simple empirical correlations, usually derived from experimental data over limited ranges of operating and geometric conditions.

To overcome this, we present a novel modeling approach that combines these two methods. First, three-dimensional (3D) CFD models of RO spacer units, at the smallest scale able to represent the periodicity of the spacer geometry, were simulated for various spacers (20 geometries in total) over a wide range of operating conditions. By fitting the relationship between the operating conditions and the simulation results with the response surface methodology, a surrogate model with the operating conditions as independent variables and the simulation results as dependent variables was derived for each spacer. Using the surrogate model, the outlet conditions were derived from the inlet conditions for a single unit. These outlet conditions were then iteratively applied as the inlet conditions for the next unit, thereby representing processes at various scales.
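Schematically, the unit-chaining step works as below (a toy surrogate with invented coefficients, not the fitted CFD response surfaces):

```python
def unit_surrogate(p_in, c_in):
    """Toy per-unit surrogate: inlet (pressure bar, salinity g/L) -> outlet."""
    dp = 0.02 + 0.0004 * c_in                   # pressure loss across one unit
    jw = 3e-4 * max(p_in - 0.8 * c_in, 0.0)     # toy water-flux driving term
    return p_in - dp, c_in * (1.0 + jw)         # permeation concentrates the feed

p, c = 55.0, 35.0            # module inlet conditions
for unit in range(100):      # chain outlet -> inlet across 100 units
    p, c = unit_surrogate(p, c)
print(f"after 100 units: p = {p:.1f} bar, c = {c:.1f} g/L")
```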

As expected, the CFD analysis in this study showed varied hydrodynamic behavior across the spacers, resulting in up to a 10% difference in water flux. The multi-level modeling using the surrogate model showed that the optimal spacer design may vary depending on the size of the process, as the ranking of performance indices, such as recovery and specific energy consumption, changes with process size. In particular, pressure losses were not proportional to process size, and water recovery did not increase linearly. This highlights the value of CFD-derived surrogate models in large-scale process simulations.

By combining 3D CFD simulation with 1D mathematical modeling, the hydrodynamic behavior induced by the geometric characteristics of the spacer, and the varying effects of spacers at different process scales, can be efficiently captured, providing a platform for large-scale process optimization.

References

[1] Sung, Berrin, Novel technologies for reverse osmosis concentrate treatment: A review, Journal of Environmental Management, 2015.

[2] Haidari, Heijman, Meer, Optimal design of spacers in reverse osmosis, Separation and Purification Technology, 2018.



Optimal system design and scheduling for ammonia production from renewables under uncertainty: Stochastic programming vs. robust optimization

Alexander Klimek1, Caroline Ganzer1, Kai Sundmacher1,2

1Max Planck Institute for Dynamics of Complex Technical Systems, Department of Process Systems Engineering, Sandtorstr. 1, 39106 Magdeburg, Germany; 2Otto von Guericke University, Chair for Process Systems Engineering, Universitätsplatz 2, 39106 Magdeburg, Germany

Production of green ammonia from renewable electricity could play a vital role in a net zero economy, yet the intermittency of wind and solar energy poses challenges to sizing and scheduling of such plants [1]. One approach to investigate the interaction between fluctuating renewables and chemical processes is to model the production network in the form of a large-scale mixed-integer linear programming (MILP) problem [2, 3].

A wide range of parameters is necessary to characterize the chemical production system, including investment costs, wind speeds, solar irradiance, purchase and sales prices. These parameters are usually derived from literature data and fixed before optimization. However, parameters such as costs and capacity factors are not deterministic in reality but rather subject to uncertainty. Mathematical methods of optimization under uncertainty can be applied to deal with such deviations from the nominal parameter values. Stochastic programming (SP) and robust optimization (RO) in particular are widely used to address parameter uncertainty in optimization problems and to identify solutions that satisfy all constraints under all possible realizations of uncertain parameters [4].

In this work, we reformulate a deterministic MILP model for determining the optimal design and scheduling of an ammonia plant based on renewables as a SP and a RO problem. Using the Pyomo extensions mpi-sppy and ROmodel [5, 6], the optimization problems are implemented and solved under parameter uncertainty. The results in terms of plant design and operation are compared with the outcomes of the deterministic formulation. In the case of SP, a two-stage problem is used, whereby Monte Carlo sampling is applied to generate different scenarios. Analysis of the value of the stochastic solution (VSS) shows that when the model is constrained by the nominal scenario's first-stage decisions and subjected to the conditions of other scenarios, the deterministic model cannot handle even a 1% decrease in the wind potential, highlighting the model’s sensitivity. The stochastic approach mitigates this risk with a solution approximately 30% worse in terms of net present value (NPV) but more resilient to fluctuations. For RO, different approaches are chosen with regard to uncertainty sets and formulation. The very conservative approach using box uncertainty sets is relaxed by the use of auxiliary parameters, ensuring that only a certain number of uncertain parameters can take their worst-case value at the same time. The RO framework is extended by the use of adjustable decision variables, requiring a reduction in the time horizon compared to the SP formulation in order to solve these problems within a reasonable time frame.
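The structure of the two-stage stochastic program can be conveyed by a deliberately small Pyomo toy (our illustration; the real model is a far larger MILP solved via mpi-sppy, and all numbers below are invented): first-stage capacity is shared across wind scenarios, while second-stage operation adapts per scenario:

```python
import pyomo.environ as pyo

scenarios = {"low": 0.7, "nominal": 1.0, "high": 1.2}   # wind capacity factors
prob = {s: 1.0 / 3.0 for s in scenarios}

m = pyo.ConcreteModel()
m.cap = pyo.Var(within=pyo.NonNegativeReals)                     # 1st stage: MW
m.prod = pyo.Var(list(scenarios), within=pyo.NonNegativeReals)   # 2nd stage

m.wind = pyo.Constraint(list(scenarios),
                        rule=lambda m, s: m.prod[s] <= scenarios[s] * m.cap)
m.demand = pyo.Constraint(list(scenarios),
                          rule=lambda m, s: m.prod[s] <= 60.0)   # offtake limit

# Investment cost minus expected operating revenue (toy prices).
m.obj = pyo.Objective(
    expr=50.0 * m.cap - sum(prob[s] * 120.0 * m.prod[s] for s in scenarios),
    sense=pyo.minimize)

pyo.SolverFactory("glpk").solve(m)   # any LP solver works here
print("capacity:", pyo.value(m.cap))
```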

References:
[1] Wang, H. et al. 2021. ACS Sust. Chem. Eng. 9, 7, 2816–2834.
[2] Ganzer, C. and Mac Dowell, N. 2020. Sust. En. Fuels 4, 8, 3888–3903.
[3] Svitnič, T. and Sundmacher, K. 2022. Appl. En. 326, 120017.
[4] Mavromatidis, G. 2017. PhD Dissertation. ETH Zurich.
[5] Knueven, B. et al. 2023. Math. Prog. Comp. 15, 4, 591–619.
[6] Wiebe, J. and Misener, R. 2022. Optim. & Eng. 23, 4, 1873–1894.



CO2 Sequestration and Valorization to Synthetic Fuels: Multi-criteria Based Process Design and Optimization for Feasibility

Thuy T. Hong Nguyen, Satoshi Taniguchi, Takehiro Yamaki, Nobuo Hara, Sho Kataoka

National Institute of Advanced Industrial Science and Technology, Japan

CO2 capture and utilization/storage (CCU/S) has been considered one of the linchpin strategies to reduce greenhouse gas (CO2-equivalent) emissions. CCS promises to remove large amounts of CO2 but faces high-cost barriers. CCU produces high-value products, thereby gaining some economic benefit, but requires large supplies of energy. Different CCU pathways have been studied to utilize CO2 as a renewable raw material for producing various valuable chemical products and fuels. In particular, many kinds of catalysts and synthesis conditions have been examined to convert CO2 to different types of gaseous and liquid fuels (methane, methanol, gasoline, etc.). As the demand for these synthetic fuels is exceptionally high, such CCU pathways can potentially help mitigate large CO2 emissions. Nevertheless, implementation of these CCU pathways hinges on an ample supply of carbon-free H2 raw material, which is currently not available for large-scale production. Thus, to remove large industrial CO2 emission sources, combining these CCU pathways with sequestration is required.

This study aims to develop a CCUS system that can help remove large CO2 emissions with high economic efficiency. Herein, multiple CCU pathways converting CO2 to different gaseous and liquid synthetic fuels (methane, methanol, and Fischer-Tropsch fuels) were examined for integration with CO2 sequestration in an economic manner. A process simulator is employed to design and optimize the operating conditions of all included processes. A multi-objective evaluation model is constructed to optimize the economic benefit and the amount of CO2 reduced. Based on the optimization results, feasible synthetic fuel production processes that can be integrated with a CO2 sequestration process for mitigating large CO2 emission sources are proposed.

The results showed that the configuration of the CCUS system (the types of CCU pathways and the amounts of CO2 to be utilized and stored) depends heavily on the type and purchasing cost of the hydrogen feedstock, product selling prices, and the carbon tax. A CCUS system that integrates the methanol and methane CCU pathways with CO2 sequestration can deliver large CO2 reductions at low economic loss, and its economic benefit is dramatically enhanced when the carbon tax rises to $250/ton CO2. Owing to its exceptionally high energy demand and initial investment cost, the Fischer-Tropsch synthesis process is the least competitive in terms of both economic benefit and potential CO2 reduction.
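
To make the carbon tax effect concrete, the toy calculation below shows how a rising tax can shift the net benefit of a CO2-to-methanol route from negative to positive; all prices, costs, and yields are invented placeholders, not the study's data.

    # Toy net-benefit screen for a CO2-utilization route under a carbon tax.
    def net_benefit(co2_used_t, product_t, price, opex, h2_cost, tax):
        """Revenue plus tax avoided on utilized CO2, minus costs ($)."""
        return product_t * price + co2_used_t * tax - opex - h2_cost

    # ~0.73 t methanol per t CO2 (mass ratio of CO2 + 3H2 -> CH3OH + H2O)
    for tax in (0, 100, 250):                      # $/t CO2
        print(tax, net_benefit(1.0, 0.73, 400, 120, 250, tax))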



Leveraging Pilot-scale Data for Real-Time Analysis of Ion Exchange Chromatography

Søren Villumsen, Jesper Frandsen, Jakob Huusom, Xiaodong Liang, Jens Abildskov

Technical University of Denmark, Denmark

Chromatography processes are key in the downstream processing of bio-manufactured products to attain high-purity products. Chromatographic separation is hard to operate optimally because its governing mechanisms are difficult to describe, being only partly captured by partial differential equations for convection, diffusion, mass transfer, and adsorption. The processes may also be subject to batch-to-batch variation in feed composition and operating conditions. To guarantee purity, chromatography is often operated conservatively: fraction collection may be started later than necessary and terminated prematurely. This results in sub-optimal yields in production, as operators are forced to cut the purification process at a point where purity is ensured at the expense of product loss (Kozorog et al. 2023).

If the overall separation process were better understood and monitored, such that batch-to-batch variation could be better accounted for, it may be possible to secure a higher yield (Kumar and Lenhoff 2020). Using mechanistic or hybrid models of the chromatographic process, the process may be analyzed in real time, yielding insights about its current state. These insights can be communicated to operators, supporting better decision-making and increasing yield without sacrificing purity.

The potential of this real-time process prediction was investigated at a pilot-scale ion-exchange facility at the Technical University of Denmark (DTU). The process is equipped with sensors for real-time data extraction and supports digital twin development (Jones et al. 2022). Using these data, mechanistic and hybrid models were fitted to predict key process events such as breakthrough. The partial differential equations were solved using state-of-the-art discretization methods that are computationally fast enough to allow real-time prediction of process events (Frandsen et al. 2024). This serves as a proof of concept for real-time analysis of chromatographic processes through Monte Carlo simulation.
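
The sketch below illustrates the class of transport PDE involved: a 1-D convection-dispersion column with a linear isotherm, solved by explicit upwind finite differences. It is deliberately simpler than the discontinuous Galerkin solver of Frandsen et al. (2024), and all parameter values are invented.

    # Minimal 1-D convection-dispersion column with linear adsorption.
    import numpy as np

    nz, L = 200, 0.1                   # grid cells, column length [m]
    dz = L / nz
    u, D, k = 1e-3, 1e-7, 1.5          # velocity [m/s], dispersion, retention
    dt = 0.4 * min(dz / u, dz**2 / (2 * D))        # stable explicit step

    c = np.zeros(nz)                   # mobile-phase concentration
    c_in = 1.0                         # step injection at the inlet
    outlet = []
    for _ in range(int(400.0 / dt)):   # simulate 400 s
        left = np.concatenate(([c_in], c[:-1]))
        right = np.concatenate((c[1:], [c[-1]]))
        conv = -u * (c - left) / dz                # upwind convection
        disp = D * (right - 2 * c + left) / dz**2  # axial dispersion
        c = c + dt * (conv + disp) / (1.0 + k)     # retarded transport
        outlet.append(c[-1])           # breakthrough curve at the exit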

References

Frandsen, Jesper, Jan Michael Breuer, Eric Von Lieres, Johannes Schmölder, Jakob K. Huusom, Krist V. Gernaey, and Jens Abildskov. 2024. “Discontinuous Galerkin Spectral Element Method for Continuous Chromatography: Application to the Lumped Rate Model Without Pores.” In Computer Aided Chemical Engineering, 53:3325–30. Elsevier.

Jones, Mark Nicholas, Mads Stevnsborg, Rasmus Fjordbak Nielsen, Deborah Carberry, Khosrow Bagherpour, Seyed Soheil Mansouri, Steen Larsen, et al. 2022. “Pilot Plant 4.0: A Review of Digitalization Efforts of the Chemical and Biochemical Engineering Department at the Technical University of Denmark (DTU).” In Computer Aided Chemical Engineering, 49:1525–30. Elsevier.

Kozorog, Mirijam, Simon Caserman, Matic Grom, Filipa A. Vicente, Andrej Pohar, and Blaž Likozar. 2023. “Model-Based Process Optimization for mAb Chromatography.” Separation and Purification Technology 305 (January): 122528.

Kumar, Vijesh, and Abraham M. Lenhoff. 2020. “Mechanistic Modeling of Preparative Column Chromatography for Biotherapeutics.” Annual Review of Chemical and Biomolecular Engineering 11 (1): 235–55.



Model Based Flowsheet Studies on Cement Clinker Production Processes

Georgios Melitos1,2, Bart de Groot1, Fabrizio Bezzo2

1Siemens Industry Software Limited, 26-28 Hammersmith Grove, W6 7HA London, United Kingdom; 2CAPE-Lab (Computer-Aided Process Engineering Laboratory), Department of Industrial Engineering, University of Padova, 35131 Padova PD, Italy

The cement value chain is responsible for 7-8% of global CO2 emissions [1]. These emissions originate both directly, via chemical reactions (e.g. calcination) taking place in the process, and indirectly, via the process energy demands. Around 90% of them come from the production of clinker, the main constituent of cement [1]. Clinker production comprises high-temperature, carbon-intensive processes, which occur in the pyroprocessing section of a cement plant. The chemical and physical phenomena occurring in these processes are complex, and to this day they have mostly been examined and modelled in the literature as standalone unit operations [2-4]; holistic, model-based flowsheet simulations of cement plants are therefore lacking.

This paper presents first-principles mathematical models for the simulation of the pyroprocessing section of a cement production plant: specifically, the preheating cyclones, the calciner, the rotary kiln, and the grate cooler. These models are then combined into an integrated flowsheet model for the production of clinker. They incorporate the major heat and mass transport phenomena, reaction kinetics, and thermodynamic property estimation models, and have been implemented in the gPROMS® Advanced Process Modelling environment and solved for various reactor geometries and operating conditions.

The final flowsheet is validated against published data, demonstrating the ability to accurately predict operating temperatures, degree of calcination, gas and solids compositions, fuel consumption, and overall CO2 emissions. The utilization of several types of alternative fuels is also investigated, to evaluate the potential for avoiding CO2 emissions by replacing part of the fossil-based coal fuel (used as the reference case). Trade-offs between different process KPIs (net energy consumption, conversion efficiency, CO2 emissions) are identified and evaluated for each fuel utilization scenario.
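
As a pointer to what the calciner model computes, the sketch below estimates the degree of calcination from first-order Arrhenius kinetics over a gas-solid residence time; the kinetic parameters are invented placeholders, not those of the gPROMS models.

    # Degree of calcination (CaCO3 -> CaO + CO2) from first-order kinetics.
    import numpy as np

    R = 8.314                          # gas constant [J/(mol K)]
    A, Ea = 2.0e7, 1.8e5               # pre-exponential [1/s], Ea [J/mol]

    def calcination_degree(T, residence_s):
        kr = A * np.exp(-Ea / (R * T))             # rate constant at T [K]
        return 1.0 - np.exp(-kr * residence_s)

    for T in (1100.0, 1150.0, 1200.0):             # typical calciner range
        print(T, round(calcination_degree(T, 4.0), 3))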

REFERENCES

[1] Monteiro, P. J. M., Miller, S. A., & Horvath, A. (2017). Towards sustainable concrete. Nature Materials, 16(7), 698-699.

[2] Iliuta, I., Dam-Johansen, K., & Jensen, L. S. (2002). Mathematical modeling of an in-line low-NOx calciner. Chemical Engineering Science, 57(5), 805-820.

[3] Pieper, C., Liedmann, B., Wirtz, S., Scherer, V., Bodendiek, N., & Schaefer, S. (2020). Interaction of the combustion of refuse derived fuel with the clinker bed in rotary cement kilns: A numerical study. Fuel, 266, 117048.

[4] Cui, Z., Shao, W., Chen, Z., & Cheng, L. (2017). Mathematical model and numerical solutions for the coupled gas–solid heat transfer process in moving packed beds. Applied Energy, 206, 1297-1308.



A Social Life Cycle Assessment for Sustainable Pharmaceutical Supply Chains

Inês Duarte, Bruna Mota, Andreia Santos, Tânia Pinto-Varela, Ana Paula Barbosa-Povoa

Centre for Management Studies of IST (CEG-IST), University of Lisbon, Portugal

The increasing pressure from governments, media, and consumers is driving companies to adopt sustainable practices by reducing their environmental and social impacts. While the economic dimension of sustainable supply chain management is always considered, and the environmental one has been thoroughly addressed, the social dimension remains underdeveloped (Barbosa-Póvoa et al., 2018) despite growing attention to social sustainability issues in recent years (Duarte et al., 2022). This imbalance is particularly concerning in the healthcare sector, especially within the pharmaceutical industry, given the significant impact of pharmaceutical products on public health and well-being. At the same time, while these supply chains are vital to society, social risks are incurred throughout them, from primary production activities to the manufacturing of the final product and its distribution. Addressing these concerns requires a comprehensive framework that captures the social impacts of every stage of the pharmaceutical supply chain.

Social LCA is a well-established approach to assessing the social performance of supply chains by identifying both the positive and negative social impacts linked to a system's life cycle. By adopting a four-step process as outlined in the ISO 14040 standard (ISO, 2006), Social LCA enables a thorough evaluation of the social sustainability of supply chain activities. This approach allows for the identification and mitigation of key social risks, enabling more informed decision-making and promoting sustainable development goals. Hence, in this work, a social life cycle assessment framework is developed and integrated into the pharmaceutical supply chain design and planning model of Duarte et al. (2022), a multi-objective mixed integer linear programming model. The economic objective is measured through the maximization of the Net Present Value, while the social objective maximizes equity in access through a Disability Adjusted Life Years (DALY) metric. The social life cycle assessment allows a broader assessment of the whole supply chain, evaluating social risks and generating actionable insights for minimizing the most significant ones.

A case study based on a global vaccine supply chain is conducted, in which the main social hotspots are identified, along with trade-offs between the economic and accessibility objectives. Through this analysis, informed recommendations are developed to mitigate potential social impacts associated with the supply chain under study.

The integration of social LCA into a pharmaceutical supply chain design and planning optimization model constitutes the main contribution of this work, providing a practical tool for decision-makers to enhance the overall sustainability of their operations and address the complex social challenges of global pharmaceutical supply chains.
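
To illustrate the kind of aggregation a social LCA performs, the sketch below weights stage-level risk scores by worker hours, the activity variable used in common S-LCA databases such as PSILCA; every stage and number is invented.

    # Activity-variable weighting of social risks across supply chain stages.
    stages = {                       # stage: (worker hours per f.u., risk 0-1)
        "API production": (5.0, 0.6),
        "Formulation":    (3.0, 0.3),
        "Distribution":   (1.5, 0.4),
    }
    total_hours = sum(h for h, _ in stages.values())
    score = sum(h * r for h, r in stages.values()) / total_hours
    hotspot = max(stages, key=lambda s: stages[s][0] * stages[s][1])
    print(f"weighted social risk = {score:.2f}; hotspot = {hotspot}")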

Barbosa-Póvoa, A. P., da Silva, C., & Carvalho, A. (2018). Opportunities and challenges in sustainable supply chain: An operations research perspective. European Journal of Operational Research, 268(2), 399–431. https://doi.org/10.1016/j.ejor.2017.10.036

Duarte, I., Mota, B., Pinto-Varela, T., & Barbosa-Póvoa, A. P. (2022). Pharmaceutical industry supply chains: How to sustainably improve access to vaccines? Chemical Engineering Research and Design, 182, 324–341. https://doi.org/10.1016/j.cherd.2022.04.001

ISO. (2006). ISO 14040:2006 Environmental management - Life cycle assessment - Principles and framework. Geneva, Switzerland: International Organization for Standardization.



Quantum Computing for Synthetic Bioprocess Data Generation and Time-Series Forecasting

Shawn Gibford1,2, Mohammed Reza Boskabadi2, Seyed Soheil Mansouri1,2

1Sqale; 2Technical University of Denmark

Data scarcity in bioprocess engineering, particularly for single-cell organism cultivation in pilot-scale photobioreactors (PBRs), poses significant challenges for accurate model development and process optimization. The issue is especially pronounced at pilot scale (e.g., 20 L PBRs), where data acquisition is infrequent and costly. The nonlinear nature of these processes, coupled with various non-idealities, creates a substantial gap between lab-scale and pilot-scale operations, hindering the development of accurate mechanistic models and data-driven approaches.

To address these challenges, we propose a novel approach leveraging quantum computing and machine learning. Specifically, we employ a quantum Generative Adversarial Network (qGAN) to generate synthetic bioprocess time-series data, with a focus on quality indicator variables like Optical Density (OD) and Dissolved Oxygen (DO), key metrics for Dry Biomass estimation. The quantum approach offers potential advantages over classical methods, including better generalization capabilities and faster model training using tensor networks.

Various network and quantum circuit architectures were tested to capture the statistical characteristics of real process data. Our results show high fidelity in synthetic data generation and significant improvement in the performance of forecasting models, such as Long Short-Term Memory (LSTM) networks, when augmented with GAN-generated samples. This approach addresses critical data gaps, enabling better model development and parameter optimization in bioprocess engineering.

The success in generating high-quality synthetic data opens new avenues for bioprocess optimization and scale-up. By addressing the critical issue of data scarcity, this method enables the development of more accurate virtual twins and robust optimization strategies. Furthermore, the ability to continuously update models with newly acquired online data suggests a pathway towards adaptive, real-time process control.

This work not only demonstrates the potential of quantum machine learning in bioprocess engineering but also provides a framework for addressing similar data scarcity issues in other complex scientific domains. Future research will focus on refining the qGAN architectures, exploring integration with real-time sensor data, and extending the approach to other bioprocess systems and scale-up scenarios.
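
A short classical stand-in for the augmentation step is sketched below: a small PyTorch LSTM forecaster trained on real OD/DO windows plus synthetic ones. The qGAN generator itself is beyond a few lines, so synthetic windows are stubbed here with noisy copies; all shapes and hyperparameters are illustrative.

    # Classical stand-in: LSTM forecaster trained on GAN-augmented windows.
    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        def __init__(self, features=2, hidden=32):   # features: OD and DO
            super().__init__()
            self.lstm = nn.LSTM(features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, features)
        def forward(self, x):                        # x: (batch, time, feat)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])          # next OD/DO sample

    real = torch.randn(64, 24, 2)                    # placeholder real data
    synthetic = real + 0.05 * torch.randn_like(real) # stand-in for qGAN output
    x = torch.cat([real, synthetic])
    y = x[:, -1, :] + 0.01 * torch.randn(x.size(0), 2)  # placeholder targets

    model = Forecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()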

References:

Orlandi, F.; Barbierato, E.; Gatti, A. Enhancing Financial Time Series Prediction with Quantum-Enhanced Synthetic Data Generation: A Case Study on the S&P 500 Using a Quantum Wasserstein Generative Adversarial Network Approach with a Gradient Penalty. Electronics 2024, 13, 2158. https://doi.org/10.3390/electronics13112158



Optimising Crop Schedules and Environmental Impact in Climate-Controlled Greenhouses: A Hydroponics vs. Soil-Based Food Production Case Study

Sarah Namany, Farhat Mahmoud, Tareq Al-Ansari

Hamad bin Khalifa University, Qatar

Optimising greenhouse operations in arid regions is essential for sustainable agriculture due to limited water resources and high energy demands for climate control. This paper proposes a multi-objective optimisation framework aimed at minimising both the operational costs and the environmental emissions of a climate-controlled greenhouse. The framework schedules the cultivation of three crops, tomato, cucumber, and bell pepper, throughout the year. These crops are selected for their differing growth conditions, which induce variability in energy and water inputs and thus provide a comprehensive test of the optimisation model. The model integrates factors such as temperature, humidity, light intensity, and irrigation requirements specific to each crop. It is solved using a genetic algorithm combined with Pareto front analysis to handle the multi-objective nature effectively. This approach facilitates the identification of optimal trade-offs between cost and emissions, offering a set of efficient solutions for decision-makers. Applied to a greenhouse in an arid region, the model evaluates two scenarios: a hydroponic system and a conventional soil-based system. Results of the study indicate that the multi-objective optimisation effectively reduces operational costs and environmental emissions while fulfilling crop demand. The hydroponic scenario demonstrates higher water-use efficiency and allows precise nutrient management, resulting in a lower environmental impact than the conventional soil system. Moreover, the optimised scheduling balances energy consumption for climate control across different crop requirements, enhancing overall sustainability. This study underscores the potential of advanced optimisation techniques for improving the efficiency and sustainability of greenhouse agriculture in challenging environments.
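
The non-dominated filtering at the heart of the Pareto analysis can be shown in a few lines; the sketch below applies it to randomly sampled monthly crop schedules with invented cost and emission coefficients, standing in for the genetic algorithm.

    # Pareto (non-dominated) filter over sampled crop schedules.
    import numpy as np

    rng = np.random.default_rng(1)
    COST = {"tomato": 3.0, "cucumber": 2.4, "pepper": 3.5}   # $/m2/month
    EMIT = {"tomato": 1.1, "cucumber": 1.6, "pepper": 0.9}   # kgCO2e/m2/month
    crops = list(COST)

    def evaluate(schedule):            # schedule: 12 monthly crop choices
        return (sum(COST[c] for c in schedule),
                sum(EMIT[c] for c in schedule))

    points = np.array([evaluate(rng.choice(crops, 12)) for _ in range(500)])

    def pareto(points):
        """Indices of points not dominated in both objectives (minimise)."""
        keep = []
        for i, p in enumerate(points):
            dominated = np.any(np.all(points <= p, axis=1) &
                               np.any(points < p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto(points)             # cost-emissions trade-off set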



Technological Trends towards Sustainable and Circular Process Design

Mauricio Sales-Cruz, Teresa Lopez-Arenas

Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa, Mexico

Current trends in technology are being directed toward the enhancement of teaching methods and the applicability of engineering concepts to industry, especially in the areas of sustainability and circular process design. These shifts signal a transformation in the education of chemical and biological engineering students, who are being equipped with emerging skills through practical, digital-focused approaches that align with evolving industry needs and global sustainability objectives.

Within this educational framework, significant focus is placed on computational modeling and simulation tools, sustainable process design, and the circular economy, which are recognized as essential in preparing students to implement efficient and environmentally friendly processes. For instance:

  • The circular economy concept is introduced, in which waste is designed out of production systems while maintaining or enhancing profitability. This model emphasizes product longevity, recycling, reuse, and the valorization of waste.
  • Process integration (the biorefineries concept) is highlighted as a complex challenge requiring advanced techniques in separation, catalysis, and biotechnology, integrating both chemical and biological engineering disciplines.
  • Modeling and simulation tools are essential in engineering education, enabling students to analyze and optimize complex processes without incurring the costs or time associated with experimental setups.
  • The use of programming languages (such as MATLAB or COMSOL), equation-based process simulators (such as gPROMS), and modular process simulators (such as ASPEN or SuperPro Designer) is strongly encouraged.

From a pedagogical viewpoint, primary educational trends for knowledge transfer and meaningful learning include:

  1. Problem-Based Learning (PBL) approaches are promoted, using practical industry-related problems to improve students' decision-making skills and knowledge application.
  2. Virtual Labs offer students remote or simulated access to complex processes, including immersive experiences in industrial plants and laboratory equipment.
  3. Integration of Industry 4.0 and Process Automation tools facilitates the analysis of massive data sets (Big Data) and introduces technologies such as artificial intelligence (AI).
  4. Interdisciplinary and Collaborative Learning fosters integration across disciplines such as biology, chemistry, materials engineering, computer science, and economics.
  5. Blended Learning Models combine traditional teaching methods with digital tools, with online courses, e-learning platforms, and multimedia resources enhancing in-person classes.
  6. Continuing Education and Micro-credentials are encouraged as technologies and approaches evolve rapidly, with short, specialized courses often offered through online platforms.

This paper critically examines these educational trends, emphasizing the shift toward practical and digital approaches that align with changing industry demands and sustainability goals. Additionally, student-led case studies on organic waste revalorization will be included, demonstrating the quantification of environmental impacts, assessments of economic viability in terms of investment and operational costs, and evaluations of innovative solutions grounded in circular economy principles.



From experiment design to data-driven modeling of powder compaction process

Rene Brands1, Vikas Kumar Mishra2, Jens Bartsch1, Mohammad Al Khatib2, Markus Thommes1, Naim Bajcinca2

1RPTU Kaiserslautern, Germany; 2TU Dortmund, Germany

Tableting is a dry granulation process for compacting powder blends into tablets. In this process, a blend of active pharmaceutical ingredients (APIs) and excipients is fed into the hopper of a rotary tablet press via feeders. Inside the tablet press, rotating feed frame paddle wheels fill powder into dies, with the tablet mass adjusted by the lower punch position during die filling. Pre-compression rolls press air out of the die, while main compression rolls apply the force necessary to compact the powder into tablets. In this paper, process variables such as feeder screw speeds, feed frame impeller speed, lower punch position during die filling, and punch distance during main compression have been systematically varied. The corresponding responses, including pre-compression force, ejection force, and tablet porosity, have been evaluated to optimize the tableting process. After implementation of an OPC UA interface, the process variables can be monitored in real time. To enable in-line monitoring of tablet porosity, a novel UV/Vis fiber-optic probe has been integrated into the rotary tablet press. To further analyze the overall process, a data-driven modeling approach is adopted. Data-driven modeling is a valuable alternative for real-world processes where, for instance, first-principles modeling is difficult or infeasible. Due to the complex nature of the process, several model classes need to be explored: first, linear autoregressive models with exogenous inputs (ARX), and thereafter nonlinear autoregressive models with exogenous inputs (NARX). Finally, several experiments have been designed to further validate and test the effectiveness of the developed models in real-time scenarios.
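
For readers unfamiliar with the model class, the sketch below fits the simplest ARX member, y[k] = a*y[k-1] + b*u[k-1], by ordinary least squares on simulated data; the "plant" and its coefficients are invented stand-ins for the press data.

    # Least-squares identification of a first-order ARX model.
    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, 500)                    # e.g. punch-position input
    y = np.zeros(500)
    for k in range(1, 500):                        # surrogate "true" plant
        y[k] = 0.8 * y[k-1] + 0.5 * u[k-1] + 0.01 * rng.standard_normal()

    Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a_hat, b_hat = theta                           # estimates near (0.8, 0.5)

NARX models extend the same regression with nonlinear features of past inputs and outputs.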



Taking into account social aspects for the development of industrial ecology

Maud Verneuil, Sydney Thomas, Marianne Boix

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Industrial ecology, in the context of decarbonization, appears to be an important and significant way to reduce carbon dioxide emissions. Eco-industrial parks are also real applications that can help modify socio-ecological landscapes at the scale of a territory.

In the context of industrial ecology, optimization models make it possible to implement synergies according to economic and environmental criteria. Although numerous studies have proposed criteria such as CO2 emissions, net present value, and other economic measures, to date few social indicators have been taken into account in multi-criteria models. Job creation is often used as a social indicator in this type of analysis. However, the social nature of this indicator is debatable.

The first aim of the present work is to question the relevance of job creation as a social indicator through a case study. Afterward, we evaluate the need to measure the social impact of industrial ecology initiatives and examine the meaning and added value of social indicators in this context.

The case study concerns the development of offshore wind energy expertise at the port of Port-La-Nouvelle, with the port of Sète as a rear base. The aim is to assess the capacity of the port of Sète to host component manufacturing and anchor system storage activities by evaluating the economic, environmental, and social impacts of this approach. We then highlight the criteria chosen and assess their relevance and limitations, particularly with regard to the social aspect.

The second step is to define the needs and challenges of an industrial and territorial ecology approach. What are the key success factors? In attempting to answer this question, it became clear that an eco-industrial park cannot survive without a climate of trust and cooperation (Diemer & Rubio, 2016). The complexity of this ecosystem, with its interdependencies between industrialists at the micro scale, the park at the meso scale, and its environment at the macro scale, makes relationship-building the decisive factor.

Thirdly, we will examine the real added value of social indicators for this relational dimension, in particular by studying the way in which social indicators are implemented. Indeed, beyond the indicator itself, the process chosen for its construction has a real influence on the result, as well as on the ability of users to appropriate it. We therefore need to consider which process seems most effective in enabling social indicators to provide a new perspective on an industrial and territorial ecology approach.

Finally, we will highlight the limits of metrics based on social indicators and question their ability to capture a complex, multidimensional social environment. We will also explore the possibility of using other concepts and tools to account for social reality, and assess their relevance to industrial and territorial ecology.



Life cycle impacts characterization of carbon capture technologies for their integration in eco-industrial parks

Agathe Gabrion, Sydney Thomas, Marianne Boix, Stephane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Human activities since the pre-industrial era have been recognized as responsible for climate change. This influence on the climate is primarily driven by the combustion of fossil fuels, which releases significant quantities of carbon dioxide (CO2) and other greenhouse gases into the atmosphere, contributing to the greenhouse effect.

Industrial activities are a major factor in climate change, given the amount of greenhouse gases released into the Earth's atmosphere from fossil fuel burning and from the energy required for industrial processes. In an attempt to reduce the impact of industry on climate change, many methods are being studied and considered.

This study focuses on one of these technologies: carbon capture. Carbon capture refers to the process of trapping CO2 molecules after the combustion of fossil fuels. The carbon is then used or stored in order to prevent it from reaching the atmosphere; this whole process is referred to as Carbon Capture, Utilization and Storage (CCUS). Carbon capture encompasses multiple technologies. This study focuses only on the amine-based absorption method, because it represents 90% of the operational market, and does not evaluate the utilization and storage steps.

In this study, the carbon capture process is seen as part of a larger project aimed at reducing the CO2 emissions of industry: an Eco-Industrial Park (EIP). The process is studied in the context of an EIP in order to determine whether setting it up is more or less valuable, in terms of ecological impact, than the current situation of releasing the greenhouse gases into the atmosphere. The results will guide the study of integrating alternative carbon capture methods into the EIP.

To properly conduct this study, it was necessary to consider various kinds of ecological impact. While carbon absorption using an amine solvent reduces the amount of CO2 released into the atmosphere, the degradation associated with amine solvents must also be taken into account. Several criteria were therefore needed to compare the ecological impact of carbon capture with that of releasing industry-produced greenhouse gases, the objective being to prevent the transfer of pollution from greenhouse gases to other forms of environmental contamination. To do so, the Life Cycle Assessment (LCA) method was chosen to evaluate the environmental impacts of both scenarios.

Using the SimaPro® software to conduct the LCA, this study showed that processing the gas stream exiting an industrial site offers environmental advantages compared to its direct release into the atmosphere. Within the framework of an EIP, the implementation of a CO2 absorption process could contribute to mitigating climate change impacts. However, other factors, such as ecotoxicity and resource utilization, may become more significant when the CO2 absorption process is incorporated into the EIP.



Dynamic simulation and life cycle assessment of energy storage systems connecting variable renewable sources with regional energy demand

Ayumi Yamaki, Shoma Fujii, Yuichiro Kanematsu, Yasunori Kikuchi

The University of Tokyo, Japan

Increasing reliance on variable renewable energy (VRE) is crucial to achieving a sustainable and carbon-neutral energy system. However, the inherent intermittency of VRE creates challenges in ensuring a reliable power supply that meets fluctuating electricity demand. Energy storage systems are pivotal in addressing this issue by storing surplus energy and supplying it when needed. This study explores the applicability of different energy storage technologies—batteries, hydrogen (H2) storage, and thermal energy storage (TES)—to control electricity variability from renewable energy sources, focusing on electricity demand and life cycle impacts.

This research aims to evaluate the performance and environmental impacts of an energy storage system integrated with wind power. A model of an energy storage system connected to wind energy was constructed based on an existing model (Yamaki et al., 2024), and an annual energy flow simulation was conducted. The model assumes that all generated wind energy is stored and subsequently used to supply electricity to consumers. The energy flow was calculated hourly, from 0:00 on January 1st to 24:00 on December 31st, based on the model of Yamaki et al. (2023). The amounts of energy storage and VRE installation were set, and the maximum amount of power that could be sold from the energy storage system was then estimated. In the simulation, the stored energy was calculated hourly from the charging of VRE-derived power/heat and the discharging of power to be sold (a minimal sketch of this hourly balance is given after the abstract).

Life cycle assessment (LCA) was employed to quantify the environmental impacts of each storage technology from cradle to grave, considering both the energy storage infrastructure and the operational processes for various wind energy and storage scales. This study evaluated GHG emissions and abiotic resource depletion as environmental impacts.

The simulation results indicate that the amount of power sold increases as wind energy generation and storage capacity rise. However, when storage capacities are over-dimensioned, the stored energy diminishes due to battery self-discharge, H2 leakage, or thermal losses in TES, and this loss reduces the power sold. The environmental impacts of each energy storage system depended on the specific storage type and capacity: batteries, H2 storage, and TES exhibited different trade-offs between GHG emissions and abiotic resource depletion.

This study highlights the importance of integrating dynamic simulations with LCA to provide a holistic assessment of energy storage systems. By quantifying both the energy supply capacity and the environmental impacts, this research offers valuable insights for designing energy storage solutions that enhance the viability of VRE integration while minimizing environmental impacts. The findings contribute to developing more resilient and sustainable energy storage systems that are adaptable to regional energy supply conditions.
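
The sketch below shows the core of such an hourly balance: charge everything that is generated, apply a storage loss, and discharge to meet sales. Capacities, efficiencies, and the wind profile are invented placeholders, not the study's data.

    # Hourly charge/discharge balance with a storage self-loss.
    import numpy as np

    rng = np.random.default_rng(0)
    wind = np.clip(rng.normal(40.0, 25.0, 8760), 0, None)  # MWh per hour
    demand = 35.0                          # MWh sold per hour
    cap, eta, loss = 500.0, 0.85, 0.001    # MWh, charge efficiency, 1/h

    soc, sold = 0.0, 0.0
    for w in wind:
        soc *= 1.0 - loss                  # self-discharge / leakage / heat loss
        soc = min(cap, soc + eta * w)      # store all generation
        d = min(soc, demand)               # discharge to meet sales
        soc -= d
        sold += d
    print(f"annual power sold: {sold:.0f} MWh")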

Yamaki, A., et al.; Life cycle greenhouse gas emissions of cogeneration energy hubs at Japanese paper mills with thermal energy storage, Energy, 270, 126886 (2023)
Yamaki, A., et al.; Comparative Life Cycle Assessment of Energy Storage Systems for Connecting Large-Scale Wind Energy to the Grid, J. Chem. Eng. Jpn., 57 (2024)



Optimisation of carbon capture utilisation and storage supply chains under carbon trading and taxation

Hourissa Soleymani Babadi, Lazaros G. Papageorgiou

The Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, University College London (UCL), Torrington Place, London WC1E 7JE, UK

To mitigate climate change, and in particular the rise of CO2 levels in the atmosphere, ambitious emissions targets have been set by political institutions such as the European Union, which aims to reduce 2050 emissions by 80% versus 1990 levels (Leonzio et al., 2019). One proposed solution to lower CO2 levels in the atmosphere is Carbon Capture, Utilisation and Storage (CCUS). However, studies in the literature to date have largely focused on utilisation and storage separately, and have neither considered the effects of CO2 taxation nor systematically studied the optimality criteria of the CO2 conversion products (Leonzio et al., 2019; Zhang et al., 2017; Zhang et al., 2020). A systematic study of a realistically large industrial supply chain that considers these aspects jointly is necessary to inform political and industrial decision-making.

In this work, a Mixed Integer Linear Programming (MILP) framework for a supply chain network was developed to incorporate storage, utilisation, trading, and taxation as strategies for managing CO2 emissions. Possible CO2 utilisation products were ranked using Multi-Criteria Decision Analysis (MCDA) techniques, and three of the top 10 products were selected to serve as CO2-based products in the supply chain network. The model includes several power plants in one of the European countries with the highest CO2 emissions, and its goal is to minimise the total cost of the supply chain while accounting for process and investment decisions. Furthermore, multi-objective optimisation that simultaneously considers CO2 reduction and supply chain costs can offer both environmental and economic benefits. Therefore, the ε-constraint multi-objective optimisation method was implemented as a solution procedure to minimise the total cost while maximising the CO2 reduction, and the game-theoretic Nash approach was applied to determine a fair trade-off between the two objectives. The investigated case study demonstrates the importance of including financial carbon management through tax and trade in addition to physical CO2 capture, storage, and utilisation.
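
A minimal sketch of the ε-constraint mechanics is given below: minimise cost subject to a CO2-reduction floor swept over a grid of ε values. The three-variable model and its coefficients are placeholders, not the supply chain network.

    # epsilon-constraint sweep on a toy cost-vs-CO2-reduction model.
    import pyomo.environ as pyo

    def solve_for(eps):
        m = pyo.ConcreteModel()
        m.x = pyo.Var(range(3), bounds=(0, 10))    # placeholder activities
        cost = [5.0, 8.0, 12.0]                    # unit costs
        red = [1.0, 2.0, 3.5]                      # CO2 reduction per unit
        m.obj = pyo.Objective(expr=sum(cost[i] * m.x[i] for i in range(3)))
        m.floor = pyo.Constraint(
            expr=sum(red[i] * m.x[i] for i in range(3)) >= eps)
        pyo.SolverFactory("cbc").solve(m)          # any installed LP solver
        return pyo.value(m.obj)

    frontier = [(eps, solve_for(eps)) for eps in (10, 20, 30, 40)]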

References

Leonzio, G., Foscolo, P. U., & Zondervan, E. (2019). An outlook towards 2030: optimization and design of a CCUS supply chain in Germany. Computers & Chemical Engineering, 125, 499-513.

Zhang, D., Alhorr, Y., Elsarrag, E., Marafia, A. H., Lettieri, P., & Papageorgiou, L. G. (2017). Fair design of CCS infrastructure for power plants in Qatar under carbon trading scheme. International Journal of Greenhouse Gas Control, 56, 43-54.

Zhang, S., Zhuang, Y., Liu, L., Zhang, L., & Du, J. (2020). Optimization-based approach for CO2 utilization in carbon capture, utilization and storage supply chain. Computers & Chemical Engineering, 139, 106885.



Impact of energy sources on Global Warming Potential of hydrogen production: Case study of Uruguay

Vitória Olave de Freitas1, José Pineda1, Valeria Larnaudie2, Mariana Corengia3

1Unidad Tecnológica de Energias Renovables, Universidad Tecnologica del Uruguay; 2Depto. de Bioingeniería, Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar; 3Instituto de Ingeniería Química, Facultad de Ingeniería, Udelar

In recent years, several countries have developed strategies to advance green hydrogen as a feedstock or energy carrier. Hydrogen can contribute to the decarbonization of various sectors, its use in the transport and industry sectors being of particular interest. In 2022, Uruguay launched its green hydrogen roadmap, outlining its plan to promote this market. The country has the potential to become a producer of green hydrogen derivatives for export due to the availability and complementarity of renewable energies (solar and wind), an electricity matrix with a high share of renewable sources, the availability of water, and favorable logistics.

The energy source for water electrolysis is a key factor in both the final cost and the environmental impact of hydrogen production. In this context, this work performs the life cycle assessment (LCA) of a hydrogen production process by water electrolysis, combining different renewable energy sources available in Uruguay. The system evaluated includes a 50 MW electrolyzer and the installation of 150 MW of new power sources. Three configurations for power production were analyzed: (1) a photovoltaic farm, (2) a wind farm, and (3) a hybrid farm (solar and wind). In all cases, connection to the national power grid is assumed to ensure a reliable and uninterrupted energy supply for plant operation.

Different scenarios for the grid energy mix are analyzed to assess the environmental impact of the hydrogen produced. For the current case, the average generation over the past five years is considered, while for future projections the variation of fossil and renewable energy sources was evaluated.

To determine the optimal combination of renewable energy sources for the hybrid generation scenario, the complementarity of solar and wind resources was analyzed using the standard deviation, a metric widely used for this purpose, employing data from real plants in Uruguay. Seeking the most stable generation, the optimal mix of power generation capacity is 54% solar and 46% wind.
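
The screening itself reduces to a one-dimensional search, as sketched below with synthetic stand-ins for the hourly plant profiles: pick the solar share that minimises the standard deviation of the combined output.

    # Solar/wind complementarity screen via standard deviation.
    import numpy as np

    t = np.arange(8760)                            # hours in a year
    solar = np.clip(np.sin(2 * np.pi * (t % 24) / 24 - np.pi / 2), 0, None)
    wind = (0.5 + 0.3 * np.sin(2 * np.pi * t / 8760 + 1.0)
            + 0.1 * np.random.default_rng(0).standard_normal(8760))
    wind = np.clip(wind, 0, None)

    shares = np.linspace(0, 1, 101)
    stds = [np.std(s * solar + (1 - s) * wind) for s in shares]
    best = shares[int(np.argmin(stds))]
    print(f"most stable mix: {best:.0%} solar, {1 - best:.0%} wind")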

The environmental impact of the different case studies was evaluated through an LCA using OpenLCA software and the Ecoinvent database. For this analysis, 1 kg of produced hydrogen was considered the functional unit. The system boundaries included power generation and the electrolysis system used for hydrogen production. Among the impact categories that can be analyzed by LCA (human health, environmental, resource depletion, etc.), this work focused on the global warming potential (GWP). As hydrogen is promoted as an alternative fuel or feedstock that may diminish CO2 emissions, its GWP is a particularly relevant metric.

Implementing hybrid solar and wind energy systems increases the stability of the energy produced from renewable sources, thereby reducing the amount of energy taken from the grid. These hybrid plants therefore have the potential to reduce CO2 emissions per kg of hydrogen produced, although this benefit diminishes as the share of renewable energy in the electric grid increases.



Impact of the share of renewable energy integration in the selection of sustainable natural gas production pathways

Meire Ellen Gorete Ribeiro Domingos, Daniel Florez-Orrego, Oktay Boztas, Soline Corre, François Maréchal

Ecole Polytechnique Federale de Lausanne, Switzerland

Sustainable natural gas (SNG) can be produced via different routes, such as anaerobic digestion and thermal gasification. Other technologies, such as CO2 injection, storage systems (e.g., CH4, CO2), and reversible solid oxide cells (rSOC), can also be integrated in order to handle the seasonal fluctuations of renewable energy supply and market volatility. In this work, the impact of the seasonal excess and deficit of electricity generation, and the renewable fraction thereof, on the sustainability metrics of different scenarios for the energy transition in SNG production is evaluated. The analysis considers both the current energy mix and a future energy mix. In the latter, a fully renewable grid is modeled based on GIS-based land restrictions, geo-spatial wind speed and irradiation data, and the maximum electricity production from renewable sources under EU-wide low-restriction assumptions. Moreover, the electricity demand assumes full electrification of the residential and mobility sectors.

The biodigestion process considers a biomethane potential of 300 Nm3 CH4 per t of volatile solids using organic wastes. The upgraded biomethane is marketed, and the CO2-rich stream is routed to further methane production. The CO2 from the anaerobic digestion unit can be stored at -50 °C and 7 bar (1,155 kg/m3), so that it can later be regasified and fed to a methanation system. The necessary hydrogen is provided by the rSOC system operating at 1 bar, 800 °C, and 81% water conversion; the rSOC system can also be operated in fuel cell mode, consuming methane to produce electricity. The gasification of the digestate from the anaerobic digestion unit uses steam as the gasification agent, and hydrogen from the electrolyzer is used to adjust the syngas composition for the methanation reaction. The methanation system is based on the TREMP® process, consisting of intercooled catalytic beds to achieve higher conversion.

A mixed integer linear programming method is employed to identify optimal system configurations under different economic scenarios, helping to elucidate the feasibility of the proposed processes as well as the optimal production planning of SNG. As a result, the integration of renewable energy and the combination of different SNG production processes prove crucial for strategic planning, enhancing resilience against market volatility and supporting the decarbonization of the energy sector. Improved handling of intermittent renewable energy allows optimal CO2 and waste management, achieving year-round overall process efficiencies above 55%. This systematic approach enables better decision-making, risk management, and investment planning, informing energy providers about the opportunities and challenges linked to the decarbonization of the energy supply.



Decarbonizing the German Aviation Sector: Assessing the feasibility of E-Fuels and their environmental implications

Pablo Silva Ortiz1, Oualid Bouksila2, Agnes Jocher2

1Universidad Industrial de Santander-UIS, Colombia; 2Technical University of Munich-TUM, Germany

The aviation industry is united in its goal of achieving "net-zero" emissions by mid-century, in accordance with global targets like COP21 and European initiatives such as "Fit for 55" and "ReFuelEU Aviation." However, current advancements and capacities may be insufficient to meet these targets on time. Recognizing the critical need to reduce greenhouse gas (GHG) emissions, the German government and the European Commission strongly advocate measures to lower aviation emissions, which is expected to significantly increase the demand for sustainable aviation fuels, especially synthetic fuels. In this context, import scenarios from North African countries to Germany are under consideration. The objective of this work is therefore to explore the pathways and the life cycle environmental impacts of e-fuel production and import, focusing on decarbonizing the aviation sector. Through a multi-faceted investigation, this work aims to offer strategic insights into the future of aviation fuel, blending technological advancements with international cooperation for a sustainable aviation industry.

Our analysis compares the feasibility of local production in Germany with potential imports from the Maghreb countries (Tunisia, Algeria, and Morocco). To establish a comprehensive view, the study forecasts Germany's aviation fuel demand across three key timelines: the current scenario, 2030, and 2050. These projections account for anticipated advancements in renewable energy, proton exchange membrane (PEM) electrolysis, and direct air capture (DAC) technologies via prospective life cycle assessment (LCA). A technical concept of power-to-liquid fuel production is presented with the corresponding life cycle inventory, realistically reflecting local conditions, including the effect of water desalination. In parallel, the export potential of the Maghreb countries is evaluated, considering both social and economic dimensions. The environmental impacts of two export pathways (direct e-fuel export and hydrogen export as an intermediate product) are then assessed through cradle-to-gate and cradle-to-grave scenarios, offering a detailed analysis of their respective carbon footprints. Finally, the study determines the qualitative cost implications of each pathway, providing a comparative analysis that identifies the most promising approach for sustainable aviation fuel production.

The results, relating mainly to Global Warming Potential (GWP) and Water Consumption Potential (WCP), suggest that Algeria, endowed with high capacity factors for photovoltaic (PV) solar and wind systems, achieves the largest WCP reductions compared to Germany, ranging from 31.2% to 57.1% in a cradle-to-gate scenario. From a cradle-to-grave perspective, local German PV solar scenarios fail to meet RED II sustainable fuel requirements, whereas most export scenarios achieve GWP reductions exceeding 70%. Algeria shows the best overall reduction, particularly with wind energy (85% currently, rising to 88% by 2050), while Morocco excels with PV solar (70% currently, rising to 75% by 2050). Although onshore wind shows strong environmental performance, PV solar offers the highest impact reductions and cost advantages, making Morocco's and Algeria's PV systems superior to German and North African wind systems.



Solar-Driven Hydrogen Economy Potential in the Greater Middle East: Geographic, Economic, and Environmental Perspectives

Abiha Abbas1, Muhammad Mustafa Tahir2, Jay Liu3, Rofice Dickson1

1Department of Chemical and Metallurgical Engineering, School of Chemical Engineering, Aalto University, P.O. Box 11000, FI-00076 Aalto, Finland; 2Department of Chemistry & Chemical Engineering, SBA School of Science and Engineering, Lahore University of Management Sciences (LUMS), Lahore, 54792, Pakistan; 3Department of Chemical Engineering, Pukyong National University, Busan, Republic of Korea

This study employed advanced GIS spatial analysis to assess land suitability for solar-powered hydrogen production across thirty countries in the GME region. Factors such as PVOUT, proximity to water sources and roads, land slope, land use and cover, and restricted/protected areas were evaluated. An analytic hierarchy process (AHP)-based multi-criteria decision-making (MCDM) analysis was used to classify land into different suitability levels.
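
For readers unfamiliar with AHP, the sketch below derives criterion weights as the principal eigenvector of a pairwise-comparison matrix and checks its consistency; the 3x3 judgments are an invented example, not the study's matrix.

    # AHP weights from a pairwise-comparison matrix.
    import numpy as np

    # criteria: PVOUT, water proximity, road proximity (illustrative)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    w = w / w.sum()                                # normalised weights
    CI = (np.max(np.real(vals)) - 3) / (3 - 1)     # consistency index
    print(w.round(3), f"CI = {CI:.3f}")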

Techno-economic optimization models were then applied to assess the levelized cost of hydrogen (LCOH), production potential, and the levelized costs of ammonia (LCOA) and methanol (LCOM) for 2024 and 2050 under different scenarios. Sensitivity analysis quantified uncertainties, while cradle-to-grave life cycle analysis (LCA) calculated the CO₂ avoidance potential for highly suitable areas.
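
The LCOH metric itself follows a standard annualisation, sketched below with invented inputs; the paper's own techno-economic assumptions are not reproduced here.

    # Levelised cost of hydrogen via a capital recovery factor.
    def lcoh(capex, opex_per_y, h2_t_per_y, rate=0.08, life=25):
        crf = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)
        return (capex * crf + opex_per_y) / (h2_t_per_y * 1000)   # $/kg

    print(f"{lcoh(3.0e8, 9.0e6, 1.0e4):.2f} $/kg")   # ~3.7 $/kg example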

Key findings include:

  1. Water scarcity is a major factor in site selection for hydrogen production. Fifty-seven percent of the region lacks access to water or is over 10 km away from any source, posing a challenge for hydrogen facility placement. A minimum of 1.7 trillion liters of water is needed to meet conservative hydrogen production estimates, and up to 13 trillion liters for optimistic estimates. A reliable water supply chain is crucial to realize this potential.
  2. Around 14% of the land in the region is unsuitable for hydrogen production due to slopes exceeding 5°. In mountainous countries like Tajikistan, Kyrgyzstan, Lebanon, Armenia, and Türkiye, this figure rises to 50%.
  3. Forty percent of the region is unsuitable due to poor road access, highlighting the need for adequate transportation infrastructure. Roads are essential for the construction, operation, and maintenance of hydrogen facilities, as well as for transporting resources and products.
  4. Only 3.8% of the GME region (1,122,696 km²) is classified as highly suitable for solar hydrogen projects. This land could produce 167 Mt/y and 209 Mt/y of hydrogen in 2024 and 2050 under conservative estimates, with an LCOH of 4.7–7.9 $/kg in 2024 and 2.56–4.17 $/kg in 2050. Under optimistic scenarios, production potential could rise to 1,267 Mt/y in 2024 and 1,590 Mt/y in 2050. Saudi Arabia, Sudan, Pakistan, Iran, and Algeria account for over 50% of the region’s hydrogen potential.
  5. Green ammonia production costs in the region range from 0.96–1.38 $/kg in 2024, decreasing to 0.56–0.79 $/kg by 2050. Green methanol costs range from 1.12–1.59 $/kg in 2024, dropping to 0.67–0.93 $/kg by 2050. Egypt and Libya show the lowest production costs.
  6. LCA reveals significant potential for CO₂ emissions avoidance. In 2024, avoided emissions could range from 119–488 t/y/km² (481 Mt/y), increasing to 477–1952 t/y/km² (3,655 Mt/y) in the optimistic case. By 2050, avoided emissions could reach 4,586 Mt/y. Saudi Arabia and Egypt show the highest potential for CO₂ avoidance.

The study provides a multitude of insights, making a significant contribution to the global hydrogen dialogue and offering policymakers a roadmap for developing comprehensive strategies to expand the hydrogen economy in the GME region.



 