Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
Poster Session 3
Time:
Wednesday, 09/July/2025:
10:00am - 10:30am

Location: Zone 2 - Cafetaria

KU Leuven Ghent Technology Campus, Gebroeders De Smetstraat 1, 9000 Gent

Presentations

pyDEXPI: A Python framework for piping and instrumentation diagrams using the DEXPI information model

Dominik P. Goldstein, Lukas Schulze Balhorn, Achmad Anggawirya Alimin, Artur M. Schweidtmann

Process Intelligence Research Group, Department of Chemical Engineering, Delft University of Technology, Van der Maasweg 9, Delft 2629 HZ, The Netherlands

Developing piping and instrumentation diagrams (P&IDs) is a fundamental task in process engineering. For designing complex installations, such as petroleum plants, multiple departments across several companies are involved in refining and updating these diagrams, creating significant challenges in data exchange between different software platforms from various vendors. The primary challenge in this context is interoperability, which refers to the seamless exchange and interpretation of information to collectively pursue shared objectives. To enhance the P&ID creation process, a unified, machine-readable data format for P&ID data is essential. A promising candidate is the Data Exchange in the Process Industry (DEXPI) standard. However, the absence of an open-source software implementation of DEXPI remains a major bottleneck, limiting the interoperability of P&ID data in practice. This lack of interoperability is further hindering the adoption of cutting-edge digital process engineering tools, such as automated data analysis and the integration of generative artificial intelligence (AI), which could significantly improve the efficiency and innovation of engineering design workflows.

We present pyDEXPI, an open-source implementation of the DEXPI format for P&IDs in Python. Currently, pyDEXPI encompasses three main parts. (1) At its core, pyDEXPI implements the classes of the DEXPI information model as Pydantic data classes. The pyDEXPI classes define the class relationships and the data attributes outlined in the DEXPI specification. (2) pyDEXPI provides several possibilities for importing and exporting P&ID data into the data class framework. This includes importing DEXPI data in its Proteus XML exchange format, saving and loading pyDEXPI models as a Python pickle file, and casting pyDEXPI into a graph format. (3) pyDEXPI offers toolkit functionalities to analyze and manipulate pyDEXPI P&IDs. For example, pyDEXPI tools can be used to search through P&IDs for data of interest and add, remove, or change data without violating DEXPI modeling conventions. With this functionality, pyDEXPI makes P&ID data more efficient to handle, more flexible, and more interoperable. We envision that, with further development, pyDEXPI will act as a central scientific computing library for the domain of digital process engineering, facilitating interoperability and the application of data analytics and generative AI on P&IDs.
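To make the data-class idea concrete, the minimal sketch below shows how DEXPI-style classes can be expressed as Pydantic models and cast into a graph. All class and field names here are hypothetical illustrations of the approach, not the actual pyDEXPI API.

```python
# Illustrative sketch of the DEXPI-as-Pydantic idea; class and field names
# are hypothetical and do not reproduce the actual pyDEXPI API.
from typing import List, Optional
from pydantic import BaseModel
import networkx as nx

class TaggedPlantItem(BaseModel):
    id: str
    tag_name: Optional[str] = None

class PipingNetworkSegment(BaseModel):
    id: str
    from_item: str  # id of the source item
    to_item: str    # id of the target item

class PlantModel(BaseModel):
    items: List[TaggedPlantItem]
    segments: List[PipingNetworkSegment]

    def to_graph(self) -> nx.DiGraph:
        """Cast the P&ID into a directed graph: items as nodes, piping as edges."""
        g = nx.DiGraph()
        for item in self.items:
            g.add_node(item.id, tag=item.tag_name)
        for seg in self.segments:
            g.add_edge(seg.from_item, seg.to_item, segment=seg.id)
        return g

# Pydantic validates types on construction, mirroring DEXPI modeling conventions.
plant = PlantModel(
    items=[TaggedPlantItem(id="P-100", tag_name="Feed pump"),
           TaggedPlantItem(id="T-200", tag_name="Buffer tank")],
    segments=[PipingNetworkSegment(id="S-1", from_item="P-100", to_item="T-200")],
)
print(plant.to_graph().edges(data=True))
```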

Key references:

M. Theißen et al., 2021. DEXPI P&ID specification. DEXPI Initiative, version 1.3

M. Toghraei, 2019. Piping and Instrumentation Diagram Development, 1st Edition. John Wiley & Sons, Inc., Hoboken, NJ, USA



A Bayesian optimization approach for data-driven Petlyuk distillation column design

Alexander Panales-Perez1, Antonio Flores-Tlacuahuac2, Fabian Fuentes-Cortés3, Miguel Angel Gutierrez-Limon4, Mauricio Sales-Cruz5

1Tecnológico Nacional de México, Instituto Tecnológico de Celaya, Departamento de Ingeniería Química Celaya, Guanajuato, México, 38010; 2Escuela de Ingeniería y Ciencias, Tecnológico de Monterrey, Campus Monterrey Ave. Eugenio Garza Sada 2501, Monterrey, N.L, 64849, México; 3Department of Energy Systems and Environment, IMT Atlantique, GEPEA rue Alfred Kastler, Nantes, 44000, France; 4Departamento de Energía, Universidad Autónoma Metropolitana-Azcapotzalco Av. San Pablo 180, C.P. 02200, Ciudad de México, México; 5Departamento de Procesos y Tecnología, Universidad Autónoma Metropolitana-Cuajimalpa Av. Vasco de Quiroga 4871, C.P. 05348, Ciudad de México, México

Recently, the focus on increasing process efficiency to lower energy consumption has led to alternative systems such as Petlyuk distillation columns. It has been proven that, compared to conventional distillation columns, these systems offer significant energy and cost savings. From an economic perspective, achieving high-purity products alone therefore does not define the feasibility of a process; to balance the trade-off between product purity and cost, multiobjective optimization is needed. Despite the effectiveness of established optimization methods, novel strategies such as Bayesian optimization can handle complex systems without requiring an explicit mathematical model. Even starting from just one initial point, Bayesian optimization can effectively carry out the optimization. However, as a black-box method, it requires an analysis of the influence of its hyperparameters on the optimization process. This work therefore presents a Petlyuk column case study, including an analysis of hyperparameters such as the acquisition function and the number of initial points.
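As an illustration of the kind of study described above, the sketch below runs Bayesian optimization with scikit-optimize, exposing the two hyperparameters named in the abstract (acquisition function and number of initial points). The objective function and variable ranges are hypothetical stand-ins for a rigorous Petlyuk column simulation.

```python
# Minimal Bayesian-optimization sketch with scikit-optimize; the objective
# below is a dummy placeholder for a Petlyuk column simulation, and the
# variable ranges are hypothetical.
from skopt import gp_minimize
from skopt.space import Integer, Real

def column_objective(x):
    reflux_ratio, side_draw, n_stages = x
    # In the real study this would call the process simulator and return a
    # scalarized purity/cost trade-off; here a dummy quadratic stands in.
    return (reflux_ratio - 3.0) ** 2 + 0.1 * abs(side_draw - 0.4) + 0.01 * n_stages

space = [Real(1.0, 8.0, name="reflux_ratio"),
         Real(0.1, 0.9, name="side_draw"),
         Integer(20, 60, name="n_stages")]

# The abstract's hyperparameters map directly to skopt arguments:
# acquisition function (acq_func) and number of initial points (n_initial_points).
result = gp_minimize(column_objective, space,
                     acq_func="EI",       # try "LCB" or "PI" in a sensitivity study
                     n_initial_points=1,  # BO can start from a single point
                     n_calls=30, random_state=0)
print(result.x, result.fun)
```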



Enhancing Energy Efficiency of Industrial Brackish Water Reverse Osmosis Desalination Process using Waste Heat

Mudhar Al-Obaidi1, Alanood Alsarayreh2, Iqbal M Mujtaba3

1Middle Technical University, Iraq; 2Mu’tah University, Jordan; 3University of Bradford, United Kingdom

Reverse Osmosis (RO) is a proven technology for producing high-quality water from brackish water sources. However, the progressive increase in water and electricity demands necessitates the development of a sustainable desalination technology. This can be achieved by reducing the specific energy consumption of the process, which also reduces the environmental footprint. This study proposes reducing the overall energy consumption of the multistage, multi-pass RO system of the Arab Potash Company (APC) in Jordan by heating the feed brackish water. Utilising waste heat generated by different units of the APC production plant, such as steam condensate supplied to a heat exchanger, is a feasible technique for heating the brackish water entering the RO system. To systematically evaluate the contribution of water temperature to performance metrics including specific energy consumption, a generic model of the RO system is developed, and model-based simulation is used to evaluate the influence of water temperature. The results indicate a clear reduction in specific energy consumption when operating at water temperatures close to the maximum recommended by the membrane manufacturer: increasing the water temperature from 25 ºC to 40 ºC can result in an overall energy saving of more than 10%.
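The back-of-the-envelope sketch below illustrates the mechanism: a higher feed temperature raises membrane permeability (here via a vendor-style temperature correction factor), which lowers the feed pressure needed for a given production rate and hence the specific energy consumption. All numbers are hypothetical, not APC plant data, and the calculation ignores the osmotic-pressure floor, so real savings are smaller.

```python
# Illustrative sketch of how feed temperature lowers specific energy
# consumption (SEC); all values are hypothetical placeholders.
import math

def tcf(temp_c, k=2640.0):
    """Membrane temperature correction factor relative to 25 degC."""
    return math.exp(k * (1.0 / 298.15 - 1.0 / (273.15 + temp_c)))

def sec_kwh_per_m3(dp_bar, q_feed_m3h, q_perm_m3h, pump_eff=0.8):
    """SEC = pump power / permeate flow."""
    power_kw = dp_bar * 1e5 * (q_feed_m3h / 3600.0) / pump_eff / 1000.0
    return power_kw / q_perm_m3h

dp_25 = 12.0                       # hypothetical feed pressure at 25 degC, bar
for t in (25.0, 40.0):
    # Higher permeability at higher T means less pressure for the same flux;
    # in a real plant osmotic pressure sets a floor, shrinking the saving.
    dp = dp_25 / (tcf(t) / tcf(25.0))
    print(f"{t:4.0f} degC: SEC = {sec_kwh_per_m3(dp, 100.0, 70.0):.3f} kWh/m3")
```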

References

  1. Alanood A. Alsarayreh, M.A. Al-Obaidi, A.M. Al-Hroub, R. Patel, and I.M. Mujtaba. Optimisation of energy consumption in a medium-scale reverse osmosis brackish water desalination plant. Proceedings of the 30th European Symposium on Computer Aided Chemical Engineering (ESCAPE30), May 24-27, 2020, Milano, Italy
  2. Alanood A Alsarayreh, Mudhar A Al-Obaidi, Shekhah K Farag, Raj Patel, Iqbal M Mujtaba, 2021. Performance evaluation of a medium-scale industrial reverse osmosis brackish water desalination plant with different brands of membranes: A simulation study. Desalination, 503, 114927.
  3. Alanood A. Alsarayreh, Mudhar A. Al-Obaidi, Saad S. Alrwashdeh, Raj Patel, Iqbal M. Mujtaba, 2022. Enhancement of energy saving of reverse osmosis system via incorporating a photovoltaic system. Editor(s): Ludovic Montastruc, Stephane Negny, Computer Aided Chemical Engineering, Elsevier, 51, 697-702.


Analysis of Control Properties as a Sustainability Indicator in Intensified Processes for Levulinic Acid Purification

Tadeo Velázquez-Sámano, Heriberto Alcocer-García, Eduardo Sánchez-Ramírez, Carlos Rodrigo Caceres-Barrera, Juan Gabriel Segovia-Hernández

Universidad de Guanajuato, México

Sustainability is one of the greatest challenges humanity has faced. There is therefore a special emphasis on improving or redesigning current chemical processes to ensure sustainability for future generations. The chemical industry has successfully implemented process redesign through process intensification, which can deliver significant savings in energy consumption, lower production costs, reductions in the size or number of equipment items, and reduced environmental impacts. However, one of the disadvantages associated with process intensification is the loss of manipulable variables: the increased interactions caused by integrating equipment can deteriorate the control properties. In other words, intensified processes can be more sensitive to disturbances, which can compromise not only product quality but even process safety. On the other hand, some studies have shown that intensified designs can have better control properties than their conventional counterparts. It is therefore important to incorporate the study of control properties into intensified schemes, since it is not known a priori whether intensification will improve or worsen them.

Taking this into account, this study analyzed the control properties of recently proposed schemes for the purification of levulinic acid. Levulinic acid is considered one of the bioproducts from lignocellulosic biomass with the greatest market potential, so the evaluation of control aspects in these schemes is relevant for their possible industrial application. The alternatives include conventional hybrid systems that combine liquid-liquid extraction and distillation, and intensified schemes using thermal coupling and movement of sections. The studied schemes were obtained through a rigorous multi-objective optimization taking the total annual cost as the economic criterion and the eco-indicator 99 as the environmental criterion. They were optimized using the differential evolution method with tabu list, a hybrid method that has proven efficient for complex nonlinear and nonconvex systems. The objective of this study is to identify the dynamic characteristics of the designs studied and to anticipate which could present control problems. Furthermore, each study of intensified distillation schemes contributes to generating guidelines that support the design stage of this type of system. To analyze the control of the systems, two types of analyses were conducted: closed-loop and open-loop. The closed-loop analysis aimed to minimize the integral of absolute error by identifying the optimal tuning of the controller's gain and integral time. In the open-loop analysis, the condition number, the relative gain array, and the feed sensitivity index were examined. The results reveal that the design comprising a liquid-liquid extraction column, three distillation columns, and thermal coupling between the last two columns exhibits the best dynamic performance. This design demonstrates a lower total condition number, a sensitivity index below the average, a stable control structure, and low values of the integral of absolute error. Additionally, this design shows superior cost and environmental impact indicators, making it the best option among the proposed designs.
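For readers unfamiliar with the open-loop metrics, the sketch below computes the condition number and the relative gain array (RGA) from a steady-state gain matrix; the gains are hypothetical, not those of the levulinic acid designs.

```python
# Open-loop screening sketch: condition number and relative gain array (RGA)
# from a steady-state gain matrix. The 3x3 gain values are hypothetical.
import numpy as np

K = np.array([[ 2.3, -0.8,  0.1],
              [-1.1,  1.9, -0.4],
              [ 0.2, -0.6,  1.5]])   # hypothetical steady-state gains

cond = np.linalg.cond(K)             # large values flag directionality problems
rga = K * np.linalg.inv(K).T         # Bristol RGA: elementwise product
print(f"condition number = {cond:.2f}")
print("RGA diagonal (pairings close to 1 are preferred):", np.diag(rga).round(2))
```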



Reactive Crystallization Modeling for Process Integration Simulation

Zachary Maxwell Hillman, Gintaras Reklaitis, Zoltan K Nagy

Purdue University, United States of America

Reactive crystallization (RC) is a chemical process in which a reaction yields a crystalline product. It is used in various industries, such as pharmaceutical manufacturing and water purification (McDonald et al., 2021). In some cases, RC is the only feasible process pathway, such as the precipitation of certain ionic solids from solution. In other cases, a reaction can become an RC by changing the reaction environment to a solvent with low product solubility.

In either case, the process combines reaction with separation, intensifying the overall design. Process intensification leads to different advantages and disadvantages compared to traditional routes and therefore conducting an analysis prior to construction would be valuable (McDonald et al., 2021; Schembecker & Tlatlik, 2003).

Despite the utility and prevalence of RC, it has not been incorporated into any modern process design software, to our knowledge. There are RC models that simulate the inner reactions and dynamics of an RC (Tang et al., 2023; Salami et al., 2020), but each has limiting assumptions, and none has been integrated with the rest of a process line simulation. This modeling gap complicates RC process design and limits both the exploration of the possible benefits of using RC and the ability to optimize a system that relies on it.

To fill this gap, we built a generalized model that can be integrated with other unit operations in the Python process simulator package PharmaPy (Casas-Orozco et al., 2021). This model focuses on the reaction-crystallization interactions and dynamics to predict reaction yield and crystal critical quality attributes given inlet streams and reactor conditions. In this way, RC can be integrated with other unit operations to capture the effects RC has on the process overall.

The model and its assumptions are described in this work. The model space, limitations and capabilities are explored. Finally, the potential benefits of the RC system are shown using three example cases.
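A generic flavor of such a model (not the PharmaPy implementation) couples a reaction producing the solute with method-of-moments crystallization kinetics; all parameters below are hypothetical.

```python
# Generic reactive-crystallization sketch: A + B -> C in solution, with C
# nucleating and growing as crystals. All kinetic parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k_r   = 1e-4     # reaction rate constant, m3/(mol s)
k_b   = 1e8      # nucleation rate constant, #/(m3 s)
k_g   = 2e-7     # growth rate constant, m/s
b, g  = 2.0, 1.0 # nucleation and growth orders
c_sat = 50.0     # solubility of C, mol/m3
rho_m = 8000.0   # crystal molar density, mol/m3
k_v   = 0.5      # volume shape factor

def rhs(t, y):
    ca, cb, cc, m0, m1, m2, m3 = y
    r   = k_r * ca * cb                    # solute generation by reaction
    s   = max(cc / c_sat - 1.0, 0.0)       # relative supersaturation
    b0  = k_b * s**b                       # nucleation rate
    gr  = k_g * s**g                       # growth rate
    dep = 3.0 * k_v * rho_m * gr * m2      # solute consumed by crystal growth
    return [-r, -r, r - dep,               # mass balances
            b0, gr * m0, 2 * gr * m1, 3 * gr * m2]  # moment equations

y0  = [200.0, 200.0, 0.0, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA")
m0, m3 = sol.y[3, -1], sol.y[6, -1]
print(f"crystal yield vs. fed A: {k_v * rho_m * m3 / 200.0:.1%}")
print(f"volume-mean size: {1e6 * (m3 / m0) ** (1 / 3):.1f} um")
```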

  1. Casas-Orozco, D., Laky, D., Wang, V., Abdi, M., Feng, X., Wood, E., Laird, C., Reklaitis, G. V., & Nagy, Z. K. (2021). PharmaPy: An object-oriented tool for the development of hybrid pharmaceutical flowsheets. Computers & Chemical Engineering, 153, 107408. https://doi.org/10.1016/j.compchemeng.2021.107408
  2. McDonald, M. A., Salami, H., Harris, P. R., Lagerman, C. E., Yang, X., Bommarius, A. S., Grover, M. A., & Rousseau, R. W. (2021). Reactive crystallization: A review. Reaction Chemistry & Engineering, 6(3), 364–400. https://doi.org/10.1039/D0RE00272K
  3. Salami, H., Lagerman, C. E., Harris, P. R., McDonald, M. A., Bommarius, A. S., Rousseau, R. W., & Grover, M. A. (2020). Model development for enzymatic reactive crystallization of β-lactam antibiotics: A reaction–diffusion-crystallization approach. Reaction Chemistry & Engineering, 5(11), 2064–2080. https://doi.org/10.1039/D0RE00276C
  4. Schembecker, G., & Tlatlik, S. (2003). Process synthesis for reactive separations. Chemical Engineering and Processing: Process Intensification, 42(3), 179–189. https://doi.org/10.1016/S0255-2701(02)00087-9
  5. Tang, H. Y., Rigopoulos, S., & Papadakis, G. (2023). On the effect of turbulent fluctuations on precipitation: A direct numerical simulation – population balance study. Chemical Engineering Science, 270, 118511. https://doi.org/10.1016/j.ces.2023.118511


A Machine Learning approach for subvisible particle classification in biotherapeutic formulations

Louis Joos, Anouk Brizzi, Eva-Maria Herold, Erica Ferrari, Cornelia Ziegler

Sanofi, France

Processing steps on biotherapeutics can cause the appearance of Subvisible Particles (SvPs), which are considered a critical quality attribute (CQA) by pharmaceutical regulatory agencies [2,3]. SvPs are usually split into Inherent Particles (protein particles), Intrinsic Particles (silicone oil droplets, glass, cellulose, etc.) and Extrinsic Particles (e.g. clothing fibers). Discrimination between proteinaceous and other particles (generally of size ranging from 2 to 100 µm) is key in assessing product stability and potential risk factors such as immunogenicity or negative effects on the quality and efficacy of the drug product [1].

According to USP <788> [4], the preferred method for determination of SvPs is light obscuration (LO). However, LO is not able to distinguish between particles of different compositions. In contrast, Flow Imaging Microscopy (FIM) has demonstrated high sensitivity in detecting and imaging SvPs [5].

In this study we develop a novel experimental and modeling workflow based on binary supervised classification, which allows a simple and robust separation of silicone oil (SO) droplets from non-silicone oil (NSO) particles. First, we generate experimental data from different therapeutic proteins exposed to various stresses, with some samples mixed with relevant impurities. Data acquisition is performed with IPAC-2 (Occhio), MFI (Protein Simple), and Flowcam (Yokogawa Fluid Imaging Technologies) microscopes, which extract different morphological (e.g. circularity, aspect ratio) and intensity-based (e.g. average, standard deviation) features from particle images.

Second, we train tree-based models, particularly Random Forests, on tabular data extracted from the microscopes across different projects and manually labelled by expert scientists. We obtain 97% global accuracy, compared with 85% for the previously used baseline filters, even for particles in the 2-5 µm range, which are usually the hardest to classify.

Finally, we extend these models to multi-class problems with new types of particles (glass and cellulose) with good accuracy (93%), suggesting that this methodology is suited to classifying many different particle types efficiently. Future perspectives include the exploration of new particle classes (air bubbles, protein aggregates, etc.) and a complementary Deep Learning multilabel approach to classify particles by direct image analysis when multiple particles overlap in the same image.
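A minimal sketch of the tree-based classification step is given below; the feature names follow the morphological and intensity descriptors mentioned above, but the data is synthetic, not the FIM datasets of the study.

```python
# Sketch of the Random Forest step on synthetic stand-in data: SO droplets
# are assumed more circular than NSO particles; all values are fabricated
# for illustration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
circularity = np.clip(rng.normal(0.9, 0.05, n) - 0.25 * (rng.random(n) > 0.5), 0, 1)
label = (circularity > 0.75).astype(int)   # 1 = silicone oil, 0 = non-silicone oil
X = pd.DataFrame({
    "circularity": circularity,
    "aspect_ratio": rng.normal(1.2, 0.3, n),
    "intensity_mean": rng.normal(120, 25, n),
    "intensity_std": rng.normal(18, 6, n),
})

X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", dict(zip(X.columns, clf.feature_importances_.round(2))))
```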

References

[1] Sharma, D. K., & King, D. (2012). Flow imaging microscopy for the characterization of protein particles. Journal of Pharmaceutical Sciences, 101(10), 4046-4059.

[2] International Conference on Harmonisation (ICH). (1999). Q6B: Specifications: Test Procedures and Acceptance Criteria for Biotechnological/Biological Products.

[3] International Conference on Harmonisation (ICH). (2004). Q5E: Comparability of Biotechnological/Biological Products Subject to Changes in Their Manufacturing Process.

[4] United States Pharmacopeia (USP). (2017). <788> Particulate Matter in Injections.

[5] Zölls, S., Weinbuch, D., Wiggenhorn, M., Winter, G., Jiskoot, W., Friess, W., & Hawe, A. (2013). Flow imaging microscopy for protein particle analysis—a comparative evaluation of four different analytical instruments. AAPS Journal, 15(4), 1200-1211.



Evaluation of the Controllability of Distillation with Multiple Reactive Stages

Josué Julián Herrera Velazquez1,3, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2, Salvador Hernández1

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

Different energy alternatives to fossil fuels have been proposed to reduce the greenhouse gas emissions that have contributed to today's deteriorated climate. Despite collective efforts to develop these technologies, reducing production costs so that they are accessible to the largest sector of the population remains a challenge. Silicon-based photovoltaic (PV) solar panels are an alternative for electricity generation in homes and industries, and most of their cost lies in obtaining the raw material. Intensified schemes, such as reactive distillation, have been proposed to produce silane (SiH4) while reducing the cost and energy demand of the process. Zeng et al. (2017) proposed dividing the reactive zone into several zones and, through a parametric study, found that three reactive zones greatly benefit the energy needs of the unit operation. Alcántara-Maciel et al. (2022) solved this problem by stochastic optimization using dynamic limits, evaluating the cases of one, two, and three reactive zones in the same study and optimizing the Total Annual Cost (TAC); they found that the best solution is a single reactive zone. Post-optimization controllability studies have been carried out for the reactive distillation column producing silane with a single reactive zone, but not for the case of multiple reactive zones. Techniques have been proposed to evaluate the controllability of steady-state processes based on prior open-loop analysis and Singular Value Decomposition (SVD) with simplified first-order transfer function models (Cabrera et al., 2018). In this work, three Pareto solutions from a previous study of this reactive distillation column, solved in a multi-objective way for the reboiler load (Qh) and the TAC, will be evaluated, as well as the case proposed by Zeng et al. (2017). The condition number from the SVD of rigorous models, together with the quantitative measure Ag+gsm proposed by Cabrera et al. (2018), will be compared against approximations based on first- and second-order transfer function models for a positive perturbation, in order to assess the feasibility of using these simplified models to evaluate steady-state controllability within a multi-objective global optimization of this complex scheme. The results of this study show that first- and second-order transfer functions can effectively predict steady-state controllability for frequencies of up to 100 rad/h, which is a new contribution, since only first-order transfer functions are reported in the literature. Simplifying rigorous transfer function models to first- or second-order models helps reduce noise in the stochastic optimization process and shortens computational time in this novel implementation of steady-state controllability evaluation.
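The frequency-dependent SVD screening can be sketched as follows: a transfer function matrix built from first-order (gain, time constant) fits is evaluated up to 100 rad/h and its condition number tracked. The 2x2 gains and time constants below are hypothetical.

```python
# Sketch of frequency-dependent SVD screening with first-order transfer
# function fits; gains and time constants are hypothetical.
import numpy as np

K   = np.array([[1.8, -0.6], [-0.9, 1.4]])   # steady-state gains
tau = np.array([[0.5,  1.2], [ 0.8, 0.4]])   # time constants, h

for w in (0.1, 1.0, 10.0, 100.0):            # frequency, rad/h
    G = K / (1j * w * tau + 1.0)             # first-order TF matrix, elementwise
    sigma = np.linalg.svd(G, compute_uv=False)
    print(f"w = {w:6.1f} rad/h  condition number = {sigma[0] / sigma[-1]:.2f}")
```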



Optimization of steam power systems in industrial parks considering distributed heat supply and auxiliary steam turbines

Lingwei Zhang, Yufei Wang

China University of Petroleum (Beijing), People's Republic of China

Enterprises in industrial parks are geographically dispersed and have diverse heat demands. In a steam power system with centralized heat supply, the heat demands of all consumers are satisfied by the energy station, and distant individual enterprises cause high steam delivery costs. Additionally, given the trade-off between distance-related costs and cascaded heat utilization, the number of steam levels is limited, so some consumers are supplied with heat at a higher temperature than required, resulting in low energy efficiency. To address these problems, an optimization model of steam power systems in industrial parks considering distributed heat supply and auxiliary steam turbines is proposed. Field-erected boilers can independently supply heat to consumers to avoid excessive pipeline costs, and auxiliary steam turbines re-depressurize the steam received by consumers, which increases electricity generation and improves the temperature matching between heat supply and demand. A mixed-integer nonlinear programming (MINLP) model is established, and the steam power systems are optimized with the objective of minimizing the total annual cost (TAC). The model considers the influence of different numbers of steam levels: the saturation temperatures of the steam levels are decision variables, and the placement of field-erected boilers and auxiliary turbines is determined by binary variables. A case study illustrates that there is an optimal number of steam levels that minimizes the TAC of the system. The selective installation of field-erected boilers and auxiliary steam turbines for consumers can effectively reduce the pipeline network cost, increase electricity generation income, and significantly decrease the TAC.
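A structural sketch of such a model in Pyomo is shown below, with binary variables for steam-level selection and for the placement of field-erected boilers and auxiliary turbines. The data and cost coefficients are hypothetical, and the physics is heavily simplified relative to the actual MINLP.

```python
# Structural Pyomo sketch (hypothetical data; the real model includes
# nonlinear steam properties, turbine behaviour, and pipeline costing).
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.L = pyo.RangeSet(1, 4)                       # candidate steam levels
m.C = pyo.Set(initialize=["c1", "c2", "c3"])   # heat consumers

m.use_level = pyo.Var(m.L, domain=pyo.Binary)        # level selected?
m.t_sat     = pyo.Var(m.L, bounds=(120.0, 350.0))    # saturation T, degC
m.boiler    = pyo.Var(m.C, domain=pyo.Binary)        # field-erected boiler?
m.aux_turb  = pyo.Var(m.C, domain=pyo.Binary)        # auxiliary turbine?

demand    = {"c1": 10.0, "c2": 6.0, "c3": 4.0}   # MW, hypothetical
pipe_cost = {"c1": 0.8, "c2": 2.5, "c3": 0.3}    # M$/y if centrally supplied

def tac_rule(m):
    central = sum((1 - m.boiler[c]) * (pipe_cost[c] + 0.05 * demand[c]) for c in m.C)
    local   = sum(m.boiler[c] * 0.09 * demand[c] for c in m.C)
    power_income = sum(m.aux_turb[c] * 0.02 * demand[c] for c in m.C)
    level_capex  = sum(0.4 * m.use_level[l] for l in m.L)
    return central + local + level_capex - power_income
m.tac = pyo.Objective(rule=tac_rule, sense=pyo.minimize)

# Enforce an ordering of saturation temperatures between selected levels.
def order_rule(m, l):
    if l == m.L.last():
        return pyo.Constraint.Skip
    return m.t_sat[l] >= m.t_sat[l + 1] + 10.0 * m.use_level[l]
m.order = pyo.Constraint(m.L, rule=order_rule)

# Solve with any MINLP-capable solver, e.g. pyo.SolverFactory("bonmin").solve(m)
```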



Conceptual Modular Design and Optimization for Continuous Pharmaceutical Processes

Tuse Asrav, Merlin Alvarado-Morales, Gurkan Sin

Technical University of Denmark, Denmark

The pharmaceutical industry faces challenges such as high manufacturing costs, strict regulations, and rapidly evolving product portfolios, driving the need for efficient, flexible, and adaptive manufacturing processes. To meet these demands, the industry is shifting toward multiproduct, multiprocess facilities, increasing interest in modular process designs [1].

Modular design is characterized by using standardized, interchangeable, small-scale process units or modules that can be easily rearranged by exchanging units and numbering up for fast adaptation to different products and production scales. The modular approach not only supports the efficient design of multiproduct facilities but also allows for the continuous optimization of processes as new data and technologies become available.

This study presents a systematic framework for the conceptual design of modular pharmaceutical facilities, which allows for reduced engineering cycles, faster time-to-market, and enhanced adaptability to changing market demands. In brief, the proposed framework consists of 1) module definition, 2) process flowsheet design, 3) simulation-based optimization, and 4) uncertainty analysis and robustness evaluation.

The application of the framework is demonstrated through case studies involving the manufacturing of two widely used active pharmaceutical ingredients (APIs), ibuprofen and paracetamol, with distinct production steps following the modular design approach. Standard modules, such as reaction and separation modules, are defined in terms of the type, number, and size of equipment. The process flowsheets are then designed and optimized by combining these standardized modules. Simulation-based optimization and uncertainty analysis are integrated to quantify key metrics such as process efficiency, robustness, and flexibility.

This study demonstrates how modular systems offer a cost-efficient, adaptable solution that integrates continuous production with high flexibility. The approach allows pharmaceutical facilities to quickly reconfigure processes to meet changing demands, providing an innovative pathway for future developments in pharmaceutical manufacturing. The results also highlight the importance of integrating stochastic optimization in modular design to enhance robustness and ensure confidence in performance by accounting for uncertainties.

References

[1] Bertran, M. O., & Babi, D. K. (2023). Exploration and evaluation of modular concepts for the design of full-scale pharmaceutical manufacturing facilities. Biotechnology and Bioengineering. https://doi.org/10.1002/bit.28539



Design of a policy framework in support of the Transformation of the Dutch Industry

Jan van Schijndel, Rutger deMare, Nort Thijssen, Jim van der Valk Bouman

QuoMare

The size of the Dutch Energy System in 2022 was approximately 2700 PJ. Some 14% (380 PJ) qualifies as renewable heat and power, and the remaining 86% as fossil energy (natural gas, crude oil and coal). A network of power-generation units, refineries and petrochemical complexes converts fossil resources into heat (700 PJ), power (400 PJ), transportation fuels (500 PJ) and high-value chemicals (400 PJ). Some 700 PJ is lost in conversion and transport. The corresponding CO2 emission level in 2022 was some 150 million tonnes of CO2-equivalents.

Transformation of this system into a Net Zero CO2 system by 2050 calls for both decarbonisation and recarbonisation, replacing fossil resources with renewable ones: renewable heat (waste heat, geo- and aqua-thermal heat), renewable power (solar and wind) and renewable carbon (biomass, waste, and CO2).

QuoMare developed a decision support framework TDES to support this Transformation of the Dutch Energy System.

TDES is based on a multi-period mixed-integer linear programming (MP-MILP) formulation.

TDES evaluates the impact of integer decisions (decarbonization, recarbonisation & infrastructure investment options) on a year-to-year basis simultaneously with continuous variables (unit capacities & interconnecting flows) subject to various constraints (like CO2 targets over time and infrastructure limitations). The objective is to maximize the net present value of the accumulated energy system margin over the 2020-2050 time-horizon.

TDES can help policy makers to develop policies for ‘optimal transition pathways’ that will deliver a Net Zero energy system by 2050.

Decarbonisation of heat and power is well underway. Over 50% of current Dutch power demand is already met by solar and wind, large-scale waste heat recovery and distribution projects are under development, and residential heat pumps are reaching high penetration rates. High-level heat supplied to industry via green and blue H2 is projected to be viable from 2035 onwards.

However, the recarbonisation of fossil-based transportation fuels (in particular for shipping and aviation) and chemicals is lagging due to the lack of robust business cases. Without a line of sight towards healthy production margins, companies are reluctant to invest in the technologies (such as electrolysis, pyrolysis, gasification, oxy-firing, fermentation, Fischer-Tropsch synthesis, methanol synthesis, auto-thermal reforming, and dry reforming) needed to produce the envisaged 800 PJ (some 20 million tonnes) of renewable-carbon-based transportation fuels and high-value chemicals by 2050.

The paper will address which set of meaningful policies would steer the energy system transformation towards a Net Zero system in 2050. Such an optimal set of policy measures will be a combination of CO2 emission constraints (prerequisite for any license to operate), CO2 tax levels (imposed on top of ETS), and capital investment subsidies (to ensure a level playing field in cost terms for the production of renewable carbon based transportation fuels and chemicals).

The novelty of this work lies in the application of an MP-MILP approach to the development of optimal policies to drive the energy transition at a country-wide level.



Data-Driven Deep Reinforcement Learning for Greenhouse Temperature Control

Farhat Mahmood, Sarah Namany, Rajesh Govindan, Tareq Al-Ansari

College of Science and Engineering, Hamad bin Khalifa University, Qatar

Efficient temperature control in closed greenhouses is essential for optimal plant growth, especially in arid regions where extreme conditions challenge micro-climate management. Maintaining the optimum temperature range directly influences healthy plant development and overall agricultural productivity, impacting crop yields and financial outcomes. However, the greenhouse in the present case study fails to maintain the optimum temperature because it operates on predefined settings, limiting its ability to adapt to dynamic climate conditions. The objective is therefore to develop a control system that maintains an ideal temperature range within the greenhouse and dynamically adapts to fluctuating external conditions, ensuring consistent climate control. This study presents a control framework using Deep Deterministic Policy Gradient (DDPG), a model-free deep reinforcement learning algorithm, to optimize temperature control in the closed greenhouse. A deep neural network is trained on historical data collected from the greenhouse to accurately represent the nonlinear behavior of the greenhouse system under varying conditions. The DDPG algorithm learns optimal control strategies by interacting with a simulated greenhouse environment, continuously adapting without needing an explicit model of the system dynamics. Results demonstrate that, over a three-day simulation period, the DDPG-based control system achieves superior temperature control compared to the existing system, with a mean squared error of 0.1459 °C and a mean absolute error of 0.2028 °C. The proposed control system promotes healthier plant growth and improved crop yields, contributing to better resource management and sustainability in controlled-environment agriculture.
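Conceptually, the setup can be sketched as a surrogate greenhouse wrapped as a gym environment with a DDPG agent trained against it, as below. The one-line thermal dynamics stand in for the trained neural network surrogate, and all coefficients are hypothetical.

```python
# Minimal DDPG-for-greenhouse sketch with gymnasium and stable-baselines3;
# the dynamics are a toy stand-in, not the trained surrogate of the study.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DDPG

class GreenhouseEnv(gym.Env):
    """Toy surrogate: indoor temperature driven by outdoor heat gain and cooling."""

    def __init__(self, t_set=24.0):
        super().__init__()
        self.observation_space = spaces.Box(-50.0, 80.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.t_set = t_set

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.k, self.t_in, self.t_out = 0, 30.0, 38.0
        return np.array([self.t_in, self.t_out], dtype=np.float32), {}

    def step(self, action):
        self.k += 1
        cooling = float(action[0])                     # normalized cooling power
        # Surrogate dynamics: relaxation toward outdoor T minus applied cooling.
        self.t_in += 0.1 * (self.t_out - self.t_in) - 1.0 * cooling
        self.t_out = 38.0 + 4.0 * np.sin(2 * np.pi * self.k / 96.0)  # daily cycle
        reward = -abs(self.t_in - self.t_set)          # track the setpoint
        obs = np.array([self.t_in, self.t_out], dtype=np.float32)
        return obs, reward, False, self.k >= 96, {}    # truncate after one day

model = DDPG("MlpPolicy", GreenhouseEnv(), verbose=0)
model.learn(total_timesteps=5_000)                     # far longer in practice
```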



Balancing modelling complexity and experimental effort for conducting QbD on lipid nanoparticles (LNPs) systems

Daniel Vidinha Batista, Marco Seabra Reis

University of Coimbra, CERES, Department of Chemical Engineering


Lipid nanoparticles (LNPs) efficiently encapsulate nucleic acids while ensuring successful intracellular delivery and endosomal escape. There is therefore increasing interest from the industrial and research communities in exploring the LNPs’ unique properties as a promising drug carrier. To ensure the successful and safe synthesis of these LNPs while maintaining their quality attributes, the pharmaceutical industry typically recommends following a Quality by Design (QbD) approach. One of the key aspects of the QbD approach is the use of Design of Experiments (DOE) to establish the Design Space that guarantees the quality requirements of the LNPs are met [1]. However, before defining a design space, several DOE stages may be necessary for screening the important factors, modelling the system’s behaviour accurately, and finding the optimal operating conditions. As each experiment is expensive due to the high cost of the formulation components, there is a strong interest in making this process as efficient and informative as possible.

In this context, an in silico study provides a suitable test bed to analyse and compare the different DOE strategies that may be adopted, and to collect insights about a reasonable number of experiments to accommodate within a designated budget while ensuring a statistically valid analysis. We have therefore conducted a systematic study based on the work of Karl et al. [2], who provided a simulation model of LNP synthesis, referred to here as the Golden Standard (GS) model. This model was derived and codified in the JMP Pro software using a recent methodology called the self-validated ensemble model (SVEM). The model is quite complex in its structure and was treated as unknown throughout the study.

The objective of this study is to assess the efficacy of different DOE alternatives for a selected number of effects. A variety of models of increasing complexity was considered. These models, referred to as Estimated Models (EM), range from main-effects-only models to models contemplating third-order nonlinear mixture effects. In developing the EM models, some predictors of the GS model were deliberately left out, to better reproduce the realistic situation of model mismatch and experimental limitations; this is the case for the type of ionizable lipid and the total flow rate.

We considered the molar ratio of each lipidic component (ionizable lipid, structural lipid, helper lipid and PEG lipid) and the N/P ratio as factors, and the potency and average size of the LNPs as responses. These responses were contaminated with additive white noise at different signal-to-noise ratios (SNRs) to better reflect the reality of having different levels of reproducibility in the measured responses.

Our results revealed that different responses require quite different model structures, with distinct levels of complexity. However, the suggested number of experiments is approximately the same, on the order of 30, a fact that might be anticipated for DOEs with similar factors under analysis.
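The flavor of such an in-silico comparison can be sketched as follows: a hidden "ground truth" is sampled on a design, noise is added at a chosen SNR, and estimated models of increasing complexity are fitted. The ground-truth function below is a hypothetical stand-in for the SVEM-based GS model.

```python
# In-silico DOE sketch: hidden truth + noise at a chosen SNR, then estimated
# models of increasing complexity. The ground truth is hypothetical.
import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)  # 27-run design

def golden_standard(x):          # hidden truth with interaction and curvature
    return 5 + 2 * x[:, 0] - x[:, 1] + 1.5 * x[:, 0] * x[:, 2] + 0.8 * x[:, 1] ** 2

y_true = golden_standard(X)
snr = 5.0
y = y_true + rng.normal(0, y_true.std() / snr, len(y_true))

for degree in (1, 2):            # main effects only vs. full quadratic EM
    Z = PolynomialFeatures(degree).fit_transform(X)
    r2 = LinearRegression().fit(Z, y).score(Z, y)
    print(f"degree-{degree} estimated model: R^2 = {r2:.3f}")
```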

References:

  1. Gurba-Bryśkiewicz et al. Biomedicines. 2023;11(10):2752. doi:10.3390/biomedicines11102752
  2. Karl et al. JoVE. 2023;(198):65200. doi:10.3791/65200


Decarbonizing Quebec’s Chemical Sector: Bridging sector disparities with simplified modeling

Mélissa Lemire, Marie-Hélène Talbot, Sylvain Larose

Laboratoire des technologies de l’énergie, Institut de Recherche d’Hydro-Québec, Canada

Electric utilities are at a critical juncture where they must proactively anticipate energy consumption and power demand over extended time horizons to support the energy transition. These projections are essential for meeting the expected surge in renewable electricity as we shift away from natural gas to eliminate greenhouse gas (GHG) emissions. Given that a significant portion of these emissions comes from industrial processes, utilities need a comprehensive understanding of the thermal energy requirements of various processes within their service regions in order to navigate this transition effectively.

In Quebec, the chemical sector includes 19 major GHG emitters, each with annual emissions exceeding 10,000 tCO2 equivalent, operating across 11 distinct application areas, excluding refineries from this analysis. The sector is undergoing rapid transformation driven by the closure of aging facilities and the establishment of new plants focused on battery production and renewable fuel generation. The latter aims at decarbonising “hard-to-abate” sectors, which pose significant challenges. It is imperative to establish a clear methodology for characterising the chemical sector to accurately estimate the energy requirements for decarbonisation.

A thorough analysis of existing literature and reported GHG emissions serves as a foundation for estimating the actual energy requirement of each major emitter. Despite the diversity of industrial processes, a trend emerges: alternative end-use technologies can often be identified based on the required thermal temperature levels. With this approach, alternative end-use technologies that closely align with the specific heat levels needed are considered. Furthermore, two key performance indicators for decarbonisation scenarios have been developed. These indicators enable the comparison of various technological solutions and estimation of the uncertainties associated with different decarbonisation pathways. We introduce the Decarbonisation Efficiency Coefficient (DEC), which evaluates the reduction of fossil fuel consumption per unit of renewable energy and relies on the first law efficiency of both existing fossil-fuel technologies and alternative renewable energy technologies. The second indicator, the GHG Performance Indicator (GPI), assesses the reduction of greenhouse gas emissions per unit of renewable energy required, providing a clear metric for assessing the most efficient technological solutions to support decarbonisation efforts.
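A worked example of the DEC, with illustrative efficiencies rather than Quebec plant data, is given below for a heat pump (COP 3) replacing a natural gas boiler (85% first-law efficiency).

```python
# Worked DEC example with illustrative efficiencies (not Quebec data).
q_heat = 1.0                    # useful heat delivered, arbitrary unit
fossil_in = q_heat / 0.85       # fossil fuel displaced (boiler efficiency)
renewable_in = q_heat / 3.0     # electricity required (heat pump COP)
dec = fossil_in / renewable_in  # Decarbonisation Efficiency Coefficient
print(f"DEC = {dec:.2f} units of fossil fuel avoided per unit of electricity")
# The GPI would weight the displaced fuel by its emission factor (roughly
# 0.18 tCO2e/MWh thermal for natural gas) per unit of renewable energy.
```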

In a status quo market, the decarbonisation of this sector could yield a significant reduction in primary energy consumption, ranging from 10% to 61%, depending on the technologies implemented. Alternative end-use technologies include heat pumps, electric boilers with reheaters, biomass boilers, and green hydrogen utilisation, each presenting unique advantages for a sustainable industrial landscape. Ultimately, for Quebec’s energy transition to succeed, electric utilities must adapt to evolving market conditions and enhance their understanding of industrial energy requirements. By accurately estimating the electricity required for effective decarbonisation, utilities can play a pivotal role in shaping a sustainable future.



Optimizing Green Hydrogen Supply Chains in Portugal: Balancing Economic Efficiency and Water Sustainability

João Imaginário1, Tânia Pinto Varela1, Nelson Chibeles-Martins2,3

1CEG-IST, IST UL, Portugal; 2NOVA Math, NOVA FCT, Portugal; 3Mathematics Department, NOVA FCT, Portugal

As the world intensifies efforts to reduce carbon emissions and combat climate change, green hydrogen has emerged as a pivotal solution for sustainable energy transition. Produced using renewable sources like hydro, wind, and solar energy, green hydrogen holds immense potential for clean energy systems. Portugal, with its abundant renewable resources, is well-positioned to become a leader in green hydrogen production. However, the water-intensive nature of hydrogen production, especially via electrolysis, poses a challenge, particularly in regions facing water scarcity.

In Portugal, water resources are unevenly distributed, with southern regions such as Alentejo and Algarve already experiencing significant water stress. This creates a complex challenge for balancing green hydrogen development with the need to conserve water. To address this, a multi-objective optimization model for the Green Hydrogen Supply Chain (GHSC) in Portugal is proposed. This model aims to minimize both production costs and water stress, offering a more sustainable approach than traditional models that focus solely on economic efficiency.

The model leverages a meta-heuristic algorithm to explore large solution spaces, offering near-optimal solutions for supply chain design/planning. It incorporates regional water availability by analysing hydrographic characteristics of mainland Portugal, allowing for flexible decision-making that balances cost and water stress according to regional constraints. Scenario analysis is employed to evaluate different production strategies under varying conditions of water availability and demand.

By integrating these dual objectives, the model supports the design of green hydrogen supply chains that are both economically viable and environmentally responsible. This approach ensures that hydrogen production does not exacerbate water scarcity, particularly in already vulnerable regions. The findings contribute to the broader goal of creating cleaner, more resilient energy systems, providing valuable insights for sustainable energy planning and policy.

This research is a critical step in ensuring green hydrogen development aligns with long-term sustainability, offering a framework that prioritizes both economic and environmental goals.



Towards net zero carbon emissions and optimal water management within an integrated aquatic and agricultural livestock system

Amira Siniscalchi1,2, Guillermo Durand1,2, Erica Patricia Schulz1,2, Maria Soledad Diaz1,2

1Universidad Nacional del Sur, Argentine Republic; 2Planta Piloto de Ingeniería Química (PLAPIQUI)

We propose an integrated agricultural, livestock, ecohydrological and carbon-capture model for the management of extreme climate events within a salt lake basin, while minimizing carbon dioxide emissions. Salt lakes are typical of arid and semiarid zones where annual evaporation exceeds rainfall or runoff. They are particularly vulnerable to climatic changes, and salt and water levels can reach critical values. The mitigation of the consequences of extreme environmental events, such as floods and droughts, has been addressed for an endorheic salt lake in previous work [1].

In the present model, the system is composed of five integrated submodels: ecohydrological, meteorological, agricultural, livestock, and carbon emission/capture. In the ecohydrological model, dynamic mass balances are formulated for both a salt lake and an artificial freshwater reservoir. The meteorological model includes surrogate models for meteorological variables, based on daily historical data for air temperature, wind, relative humidity and precipitation. From these, wind speed profiles, radiation, vapor saturation, etc., are estimated, as required for the calculation of evaporation and evapotranspiration profiles. The agricultural submodel includes biomass growth for native trees, crops and pasture, as well as the water requirement at each life-cycle stage, which is calculated as a function of tree/crop/pasture evapotranspiration and precipitation. Local data are collected for native species and soil types. Carbon capture is calculated as a function of biomass and soil type, and the water requirement for cattle is calculated as a function of biomass. The proposed model also accounts for CO2 emissions associated with sowing and with the electrical consumption of pumps (drip irrigation for crops and pasture, and water diversion to/from the river), methane emissions (CO2-eq) from livestock, and CO2 sequestration by trees, pasture, crops and soil. One objective is to carry out the carbon mass balance over a given time horizon (six years or more) and to propose additional activities to achieve net zero carbon, depending on the climate events.

An optimal control problem is formulated, in which the objective function is an integral term that aims to keep the salt lake volume (and its associated salinity, as it is an endorheic basin) at a desired value along a given time horizon, so as to keep salinity at optimal values for the reproduction of valuable fish species. Control variables are the stream flowrates diverted to/from the tributary of the salt lake from/to an artificial freshwater reservoir during dry/wet periods. The optimal control problem is constrained by a DAE system that represents the system described above. It has been implemented in gPROMS (Siemens, 2024) and solved with a control vector parameterization approach.
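Outside gPROMS, the control vector parameterization idea can be sketched generically: the diversion flowrate is piecewise-constant over the horizon, a toy mass balance plays the role of the DAE system, and the objective penalizes deviation from a target volume. All dynamics and numbers below are hypothetical.

```python
# Generic control-vector-parameterization sketch (toy dynamics, not the
# gPROMS model); all values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, n_seg, v_target = 360.0, 6, 100.0         # horizon (days), segments, target hm3

def simulate(u):                              # u: diversion flow per segment
    def rhs(t, y):
        inflow = 1.0 + 0.5 * np.sin(2 * np.pi * t / 360.0)  # seasonal tributary
        evap   = 0.008 * y[0]                               # evaporation loss
        divert = u[min(int(t // (T / n_seg)), n_seg - 1)]   # piecewise-constant control
        return [inflow - evap - divert]
    return solve_ivp(rhs, (0.0, T), [80.0], dense_output=True)

def objective(u):
    t = np.linspace(0.0, T, 200)
    v = simulate(u).sol(t)[0]
    return float(np.mean((v - v_target) ** 2))  # proportional to integral deviation

res = minimize(objective, x0=np.zeros(n_seg),
               bounds=[(-0.5, 0.5)] * n_seg, method="L-BFGS-B")
print("optimal diversion per segment:", res.x.round(3))
```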

Numerical results show that the system under study can produce meat, quinoa crops and fish, generating significant income, as well as restore native tree species, under different extreme events. Net-zero-carbon goals are approached within the basin, while optimal water management is performed in the salt lake basin.

References

Siniscalchi, A., Diaz, M.S., Lara, R.J. (2022). Sustainable long-term mitigation of floods and droughts in semiarid regions: Integrated optimal management strategies for a salt lake basin. Ecohydrology, 15, e2396.



A Modelling and Simulation Software for Polymerization with Microscopic Resolution

Shenhua Jiao1, Xiaowen Lin1,2, Rui Liu1, Xi Chen1,2

1State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University 310027, Hangzhou China; 2Huzhou Institute of Industrial Control Technology 313000, Huzhou China

In the domain of process systems engineering, software embedded with advanced computational methods is in great demand to enhance kinetic understanding and facilitate industrial applications. Polymer production, characterized by complex reaction mechanisms, represents a particularly intricate process industry. In this study, a scientific software package, PolymInsight, is developed for polymerization modelling and simulation with microscopic resolution.

From an algorithm perspective, PolymInsight offers high-performance solution strategies for polymerization process modelling by utilizing self-developed approaches. At the flowsheet level, the software provides equation-oriented and sequential-modular approaches to solve for macroscopic information. At the micro-structure level, it provides users with both deterministic and stochastic algorithms to predict polymers’ microscopic properties, e.g., the molecular weight distribution (MWD). Users can choose among several methods: a stochastic method (Liu, 2023), which introduces the concept of a “buffer pool” to enable multi-step steady-state Monte Carlo simulation of complicated reactions, including long-chain branching; an orthogonal collocation method (Lin, 2021), which applies a model reformulation strategy to enable the numerical solution of the large-scale system of equations for calculating the MWD in steady-state polymerizations; and an explicit analytical derivation method, which provides analytical expressions of the MWD for specific polymerization mechanisms, including FRP with combination termination, chain transfer to polymer, and CRP with reversible reactions.
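As a flavor of the analytical-MWD option, the sketch below evaluates the classical chain-length distributions for free-radical polymerization: the Flory distribution for disproportionation and the coupled distribution for combination termination. The propagation probability is illustrative, and the expressions are the textbook forms, not PolymInsight's internals.

```python
# Textbook chain-length distributions for free-radical polymerization;
# the propagation probability p is illustrative.
import numpy as np

p = 0.999                                 # propagation probability
i = np.arange(1, 20000)

n_disp = (1 - p) * p ** (i - 1)                   # Flory CLD, disproportionation
n_comb = (i - 1) * (1 - p) ** 2 * p ** (i - 2)    # CLD, combination termination

for name, n in (("disproportionation", n_disp), ("combination", n_comb)):
    dpn = np.sum(i * n) / np.sum(n)               # number-average DP
    dpw = np.sum(i**2 * n) / np.sum(i * n)        # weight-average DP
    print(f"{name:>18}: DPn = {dpn:7.0f}, PDI = {dpw / dpn:.2f}")
```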

From a software architecture perspective, PolymInsight is built on a self-developed process modelling platform that allows flexible user customization and is specifically tailored to the macromolecular field. As general-purpose software, it is modularly designed, and each module supports external libraries and secondary development. Pivotal modules, including reaction components, reaction kinetics, standard units, standard streams and solution strategies, are meticulously constructed and seamlessly integrated. The software's versatility is ensured by its support for a wide range of (i) polymerization mechanisms (including Ziegler-Natta polymerization, free radical polymerization, and controlled radical polymerization), (ii) computing algorithms (including deterministic methods solving large-scale equation systems and stochastic methods utilizing Monte Carlo simulation), (iii) user-defined flowsheets and parameters, and (iv) extensible standard model libraries. The insights gained from this work open up opportunities for optimizing operating conditions, addressing complex computational challenges, and enabling online control with minimal requirements for specialized knowledge.

References:

Lin, X., Chen, X., Biegler, L. T., & Feng, L.-F. (2021). A modified collocation framework for dynamic evolution of molecular weight distributions in general polymer kinetic systems. Chemical Engineering Science, 237, 116519.

Liu, R., Lin, X., Armaou, A., & Chen, X. (2023). A multistep method for steady-state Monte Carlo simulations of polymerization processes. AIChE Journal, 69(3), e17978.

Mastan, E., & Zhu, S. (2015). Method of moments: A versatile tool for deterministic modeling of polymerization kinetics. European Polymer Journal, 68, 139–160.



Regularization and Uncertainty Quantification for Parameter Estimation of NRTL Models

Volodymyr Kozachynskyi1, Christian Hoffmann1, Erik Esche1,2

1Technische Universität Berlin, Process Dynamics and Operations, Straße des 17. Juni 135, 10623 Berlin, Germany; 2Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany

Accurate prediction of vapor-liquid equilibria (VLE) using thermodynamic models is critical to every step of chemical process design. A model's accuracy and uncertainty can be quantified based on the uncertainty of the estimated parameters. The NRTL model is among the most widely used activity-coefficient models. The estimation of its binary interaction parameters is usually done using a heuristic that fixes the nonrandomness parameter α at a value between 0.1 and 0.47. However, this heuristic can lead to an overestimation of the prediction accuracy of the final thermodynamic model, i.e., the model is actually not as reliable as the process engineer thinks. In this contribution, we present the results of an identifiability analysis of the binary VLE model [1] and argue that regularization should be used instead of simply fixing α.

In this work, the NRTL model with temperature-dependent binary interaction parameters is considered, resulting in five parameters to be estimated: the parameter α and four binary interaction parameters. Twelve different binary mixtures with different azeotropic behavior, including no azeotrope and a double azeotrope, are analyzed. A standard Monte Carlo method for describing real parameter and model prediction uncertainty is modified for use in identifiability analysis and for the comparison of regularization techniques. Identifiability analysis determines the parameters that can be uniquely estimated based on the model's sensitivity to them. Four different subset-selection regularization techniques are compared: an SVD algorithm, generalized orthogonalization, forward selection, and an eigenvalue algorithm, as they use different identifiability methods to select and remove unidentifiable parameters from the estimation.
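The core of such an identifiability screening can be sketched as follows: build the Jacobian of the NRTL activity-coefficient predictions with respect to the five parameters by finite differences and inspect its singular values. Parameter values and the data grid below are hypothetical.

```python
# Identifiability sketch for binary NRTL with tau_ij = a_ij + b_ij/T;
# parameters theta = (alpha, a12, b12, a21, b21), hypothetical values.
import numpy as np

x1 = np.linspace(0.05, 0.95, 15)           # liquid compositions
T  = np.linspace(330.0, 360.0, 15)         # temperatures, K

def ln_gammas(theta):
    alpha, a12, b12, a21, b21 = theta
    t12, t21 = a12 + b12 / T, a21 + b21 / T
    g12, g21 = np.exp(-alpha * t12), np.exp(-alpha * t21)
    x2 = 1.0 - x1
    lg1 = x2**2 * (t21 * (g21 / (x1 + x2 * g21))**2 + t12 * g12 / (x2 + x1 * g12)**2)
    lg2 = x1**2 * (t12 * (g12 / (x2 + x1 * g12))**2 + t21 * g21 / (x1 + x2 * g21)**2)
    return np.concatenate([lg1, lg2])

theta0 = np.array([0.3, 0.5, 120.0, -0.3, 300.0])   # hypothetical estimates
J = np.empty((2 * len(x1), len(theta0)))
for k in range(len(theta0)):
    h = 1e-6 * max(1.0, abs(theta0[k]))
    d = np.zeros_like(theta0); d[k] = h
    J[:, k] = (ln_gammas(theta0 + d) - ln_gammas(theta0 - d)) / (2 * h)

s = np.linalg.svd(J, compute_uv=False)
print("singular values:", s.round(4))
# Near-zero singular values flag parameter directions the data cannot
# resolve; subset selection removes or regularizes them.
```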

The results of our study on 12 binary mixtures show that, depending on the mixture, the number of identifiable parameters varies between 3 and 5, implying that it is crucial to use regularization to efficiently solve the underlying parameter estimation problem. According to the analysis of all mixtures, parameter α, depending on the chosen regularization technique, is usually the most sensitive parameter, suggesting that it is inadvisable to remove this parameter from the estimation – in contradiction to standard heuristics.

In addition to this identifiability analysis, the nonlinearity of the NRTL model with respect to the parameters is analyzed. The actual form of the parameter uncertainty usually indicates nonlinearity and does not follow the normal distribution, which contradicts standard assumptions. Nevertheless, the prediction accuracy estimated using the linearization assumption is sufficiently good, i.e., linearization provides at least a valid underestimation of the real model prediction uncertainty.

In the presentation, we will demonstrate, for selected investigated mixtures, that the estimation of NRTL parameters should be performed using regularization techniques, quantify how large the introduced bias is for a chosen regularization technique, and compare the actual uncertainty to its linear estimator.

[1] Kozachynskyi V., Hoffmann C., and Esche E. 2024. Why fixing alpha in the NRTL model might be a bad idea – Identifiability analysis of a binary Vapor-Liquid equilibrium, 10.48550/arXiv.2408.07844. Preprint.

[2] Lopez, C.D.C., Barz, T., Körkel, S., Wozny, G., 2015. Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design. 10.1016/j.compchemeng.



From Sugar to Bioethanol – Simulation, Optimization, and Process Technology in One Module

Jan Schöneberger1, Burcu Aker2

1Berliner Hochschule für Technik; 2Chemstations

The Green Processes Lab module, part of the Green Engineering study program at BHT, aims at equipping students to simulate, optimize, and implement an industrially relevant sustainable process within a single semester. Bioethanol production, with a minimum purity of 99.8 wt%, is the selected process, using readily available supermarket feedstocks: sugar and yeast.

In earlier modules of the program, students engage with essential unit operations, including the vessel reactor (fermenter), batch distillation, batch rectification, filtration, centrifugation, drying, and adsorption. These operations are thoroughly covered in theoretical lectures and reinforced through mathematical modeling and predefined experiments, so that comprehensive knowledge of their behavior exists. The students work in groups and are largely unrestricted in designing their process, apart from safety regulations and two artificial constraints: only existing equipment can be used, and each process step is limited to a duration of 180 minutes, including set-up, shutdown and cleaning. The groups compete to find the economically best process, i.e. the process that produces the maximum amount of bioethanol with the minimum amount of resources, namely sugar, yeast, and electricity. This turns the limit on process-step duration into a major challenge, as it requires very detailed simulation and process planning.

To tackle this task, the students use commercial software, namely the flowsheet simulator CHEMCAD. This tool provides basic simulation models for unit operations and a powerful thermodynamic engine to calculate the physical properties of pure substances and mixtures. However, the models must still be parametrized to match the existing equipment. Therefore, tools such as reaction-rate regression and data reconciliation are used with data from previous experiments and a limited number of individually designed new experiments.

The parametrized models are then used to optimize the economic objective function. Due to the stepwise nature of the process, an overall optimization of all process parameters is extremely difficult. Instead, the groups combine different optimization approaches and focus on the individual process steps without disregarding the others. This encourages a high degree of communication within each group, because each group member is responsible for one process step.

At the end of the semester, each group successfully produced a quantifiable amount of bioethanol and documented the resources utilized throughout the process, as utility consumption was measured at each step. This data allows for the calculation of specific product costs, facilitating comparisons among groups and against commercially available bioethanol.

This work presents insights gained from the course, highlighting both the challenges and the successes. It emphasizes the importance of mathematical modelling and the challenges in aligning modeled data with measured data. A key finding is that while the models may not perfectly reflect reality, they are essential for successful process design, particularly for inexperienced engineers transitioning from academia to industry.



Aotearoa-New Zealand’s Energy Future: A Model for Industrial Electrification through Renewable Integration

Daniel Jia Sheng Chong1, Timothy Gordon Walmsley1, Martin Atkins1, Botond Bertok2, Michael Walmsley1

1Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato; 2Széchenyi István University, Győr, Egyetem tér 1, Hungary

Green energy carriers are increasingly proposed as the energy of the future. This study evaluates Aotearoa-New Zealand’s potential to transition to full industrial electrification and produce high-value, green hydrogen-rich compounds, all within the country’s resource constraints. At the core of this research is a regional energy transition system model, developed using the P-graph framework. P-graph represents complex process systems as bipartite graphs and is designed specifically for combinatorial synthesis problems. The novelty of this research lies in integrating the open-source P-graph Python library with the ArcPy library and the MERRA-2 global dataset API to conduct large-scale energy modelling.

The model integrates renewable energy and biomass resources for green hydrogen production, simulating energy transformation processes on an hourly basis. On the demand side, scenarios consider full electrification of industrial process heat through heat pumps and electrode boilers, complemented by biomass-driven technologies such as bubbling fluidised-bed reactors for biomass residues (straw and stover) and biomass boilers for K-grade logs to meet heat demand. Additionally, the model accounts for projected increases in electricity consumption from the growing use of electric and hydrogen-battery hybrid vehicles, as well as existing residential and commercial energy needs. Aotearoa-New Zealand’s abundant natural wood resources emerge as a viable feedstock for downstream processes, supplying carbon sources for hydrogen-rich compounds such as methanol and urea.

The regional energy transition model framework is structured to minimise overall system costs. To optimise the logistics of biomass transportation, we use the Python-based ArcPy library to calculate cost functions based on the distance to green refineries. The model is designed to be highly dynamic, adapting to spot electricity prices and fluctuating demand across residential, commercial, and industrial sectors, particularly influenced by seasonal and weather variations. It incorporates non-dispatchable energy sources, such as wind and solar, with variable outputs, while utilising hydroelectric power as a stable baseload and energy storage solution to counter peak demand periods. The hourly solar irradiance, wind speed, and precipitation data from the MERRA-2 global dataset are coupled with the model to produce realistic and accurate capacity factors for these renewable energy sources.
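
To illustrate how hourly reanalysis data can be turned into capacity factors for such a model, the sketch below maps wind speeds through an idealised turbine power curve; the cut-in, rated, and cut-out speeds and the sample speeds are assumptions, and the MERRA-2 retrieval itself is omitted.

```python
import numpy as np

def wind_capacity_factor(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Idealised turbine power curve: cubic ramp between cut-in and rated speed,
    unity up to cut-out, zero otherwise."""
    v = np.asarray(v, dtype=float)
    cf = np.zeros_like(v)
    ramp = (v >= v_cut_in) & (v < v_rated)
    cf[ramp] = (v[ramp] ** 3 - v_cut_in ** 3) / (v_rated ** 3 - v_cut_in ** 3)
    cf[(v >= v_rated) & (v <= v_cut_out)] = 1.0
    return cf

# Hypothetical hourly wind speeds (m/s), e.g. extracted from a MERRA-2 time series.
hourly_v = [2.1, 5.4, 8.9, 13.2, 7.7, 26.0]
print(wind_capacity_factor(hourly_v))  # hourly capacity factors in [0, 1]
```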

The study concludes that Aotearoa-New Zealand remains a major player in the Oceanic region with respect to energy-based chemical production. Beyond meeting domestic needs, the country has the potential to become a net exporter of sustainable fuels, comparable to conventional energy sources. This outcome is achievable through the optimisation of diverse renewable energy sources and cross-sector energy integration. The findings provide policymakers with concrete, in-depth analyses of renewable projects to guide New Zealand’s transition to a net-zero hydrogen economy.



Non-Linear Model Predictive Control for Oil Production in Wells Using Electric Submersible Pumps

Carine de Menezes Rebello1, Erbet Almeida Costa1, Marcos Pellegrini Ribeiro4, Marcio Fontana3, Leizer Schnitman2, Idelfonso Bessa dos Reis Nogueira1

1Department of Chemical Engineering, Norwegian University of Science and Technology, Norway; 2Department of Chemical Engineering, Federal University of Bahia, Polytechnic School, Bahia, Brazil; 3Department of Electrical and Computer Engineering, Federal University of Bahia, Polytechnic School, Bahia, Brazil; 4CENPES, Petrobras R&D Center, Av. Horácio Macedo 950, Cidade Universitária, Ilha do Fundão, Rio de Janeiro, Brazil

The optimization of oil production in wells lifted by Electric Submersible Pumps (ESPs) requires precise control of operational parameters, along with strict adherence to safety and efficiency constraints. The stable and safe operation of these wells is guided by physical and safety limits designed to minimize failures, extend equipment lifespan, and reduce costs associated with repairs, maintenance, and operational downtime. Moreover, maintaining operational stability not only lowers repair expenses but also mitigates revenue losses caused by unexpected equipment failures or inefficient production processes.

Process control has become a tool for reducing the frequency of constraint violations and ensuring the continuous optimization of oil production. By keeping operations within a well-defined operational envelope, operators can avoid common issues such as excessive vibrations, which may lead to premature pump wear and tear. Moreover, staying within this envelope prevents the degradation of pump efficiency over time and curbs excessive energy consumption, both of which have significant long-term cost implications.

Operational efficiency can be improved by leveraging the available degrees of freedom to work within the system's inherent constraints. In the case of wells using ESPs, these degrees of freedom are primarily the ESP rotation speed (or frequency) and the opening of the production choke valve.

We propose a Non-Linear Model Predictive Control (NMPC) system tailored for a well equipped with an ESP. The NMPC framework explicitly accounts for the pump's operational limitations and effectively uses the available degrees of freedom to maximize performance. The NMPC's overarching objectives are to maximize oil production while respecting all system constraints, including both physical limitations and operational safety boundaries. This approach presents a more advanced and systematic control method than traditional PID-based systems, particularly in nonlinear, constraint-intensive environments like oil wells.

The NMPC methodology is fundamentally based on a phenomenological model of the ESP, calibrated to predict key controlled variables accurately. These include the production flow rate and the liquid column height (HEAD). The prediction model consists of a system of three differential equations and a set of algebraic equations representing a stiff, single-phase, and isothermal system.
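
For readers unfamiliar with the receding-horizon mechanics behind such a controller, the sketch below solves a toy NMPC problem with SciPy; the first-order surrogate dynamics, weights, and bounds are placeholders and not the authors' three-ODE ESP model.

```python
import numpy as np
from scipy.optimize import minimize

def step(x, u, dt=1.0, tau=5.0):
    """Illustrative first-order surrogate: flow responds to ESP frequency u[0]
    and choke opening u[1] (NOT the authors' differential-algebraic ESP model)."""
    return x + dt / tau * (-x + 0.8 * u[0] + 0.4 * u[1])

def nmpc_cost(u_flat, x0, q_sp, horizon):
    """Setpoint-tracking cost with a small move penalty over the horizon."""
    u_seq = u_flat.reshape(horizon, 2)
    x, cost = x0, 0.0
    for u in u_seq:
        x = step(x, u)
        cost += (x - q_sp) ** 2 + 1e-3 * u @ u
    return cost

x0, q_sp, horizon = 0.0, 1.0, 10
bounds = [(0.0, 1.0)] * (2 * horizon)  # actuator limits as hard constraints
res = minimize(nmpc_cost, np.full(2 * horizon, 0.5), args=(x0, q_sp, horizon),
               bounds=bounds, method="SLSQP")
u_now = res.x[:2]  # apply only the first move, then re-solve at the next sample
print("first NMPC move (frequency, choke):", np.round(u_now, 3))
```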

The system being modeled is a pilot plant located at the Artificial Lift Lab at the Federal University of Bahia. This pilot plant features a 15-stage centrifugal pump installed in a 32-meter-high well, circulating 3,000 liters of mineral oil within a closed-loop system.

In this setup, the controlled variables are the HEAD and the production flow, while the manipulated variables are the ESP frequency and the choke valve opening. The proposed NMPC system has been tested and has demonstrated its effectiveness in rejecting disturbances and accurately tracking setpoints. This guarantees stable and safe pump operation while optimizing oil production, providing a robust solution to the challenges associated with ESP-lifted well operations.



Life Cycle Assessment of Green Hydrogen Electrofuels in India's Transportation Sector

Ankur Singhal, Pratham Arora

IIT Roorkee, India

A transition to low-carbon fuels is integral to addressing the challenge of climate change. An essential transformation is underway in the transportation sector, one of the primary sources of global greenhouse gas emissions. Electrofuels, such as methanol synthesized via power-to-fuel technology, have the potential to decarbonize the sector. This paper presents a comprehensive life cycle assessment of electrofuels, focusing on the production of synthetic methanol from renewable hydrogen generated by water electrolysis, coupled with carbon from the direct air capture (DAC) process. The assessment covers the whole value chain, from raw material extraction to fuel combustion in transportation applications, providing a cradle-to-grave analysis. The results of this impact assessment will offer a fuller comparison of the merits and shortcomings of the electrofuel pathway relative to conventional methanol. A sensitivity study will determine how influential factors such as electrolyzer performance, carbon capture efficiency, and energy mix impact the overall environmental performance. The study will compare synthetic methanol with traditional methanol across impact categories including global warming potential, energy consumption, acidification, and eutrophication, to assess the prospects for scaling synthetic methanol in the transportation industry.



Probabilistic Design Space Identification for Upstream Bioprocesses under Limited Data Availability

Ranjith Chiplunkar, Syazana Mohamad Pauzi, Steven Sachio, Maria M Papathanasiou, Cleo Kontoravdi

Imperial College London, United Kingdom

Design space identification and flexibility analysis are essential in process systems engineering, offering frameworks that enhance the optimization of operating conditions [1]. Such approaches can be broadly categorized into model-based and data-driven methods [2-4]. For complex systems like upstream biopharma processes, developing reliable mechanistic models is challenging, either due to a limited understanding of the underlying mechanisms or the need for simplifying assumptions to reduce model complexity. As a result, data-driven approaches often prove more practical from a modeling perspective. However, they often require extensive experimentation, which can be expensive and impractical, leading to sparse datasets [3]. Such sparsity also means that the data uncertainty becomes a significant factor that needs to be addressed.

We present a novel framework that utilizes a data-driven model to overcome the aforementioned challenges, even with sparse experimental data. Specifically, we utilize Gaussian Process (GP) models to account for real-world data uncertainties, enabling a probabilistic characterization of the design space—a critical generalization beyond traditional deterministic approaches. The framework has two primary components. First, the GP model predicts key performance indicators (KPIs) based on input process variables, allowing for the probabilistic modeling of these KPIs. Based on process performance constraints, a probability of feasibility is calculated, which indicates the likelihood that the constraints will be satisfied for a given input. After achieving a probabilistic design space characterization, the framework conducts a comprehensive quantitative analysis of process flexibility. Alpha shapes are employed to define deterministic boundaries at various confidence levels, allowing for the quantification of volumetric process flexibility and acceptable operational ranges. This enables a detailed examination of trade-offs between process flexibility, performance, and confidence levels.
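
A minimal sketch of the probability-of-feasibility idea, assuming a scikit-learn GP surrogate and a single purity constraint; the dataset, kernel, and 95% purity limit are invented for illustration and do not reproduce the authors' framework or the alpha-shape analysis.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical sparse dataset: (osmolality mOsm/kg, temperature degC) -> purity (%).
X = np.array([[300, 35.5], [330, 36.5], [360, 35.0], [390, 36.0], [420, 37.0]], float)
y = np.array([97.2, 96.8, 95.9, 94.7, 93.5])

gp = GaussianProcessRegressor(kernel=RBF([30.0, 1.0]) + WhiteKernel(0.1),
                              normalize_y=True).fit(X, y)

def prob_feasible(x, purity_min=95.0):
    """P(purity >= purity_min) under the GP's predictive normal distribution."""
    mu, sd = gp.predict(np.atleast_2d(x), return_std=True)
    return norm.sf(purity_min, loc=mu, scale=sd)  # survival function = P(Y >= limit)

print(prob_feasible([340.0, 36.0]))  # probability of meeting the purity constraint
```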

The proposed framework is applied to an experimental dataset designed to study the effects of cell culture osmolality and temperature on the yield and purity of a monoclonal antibody product produced in Chinese hamster ovary cell fed-batch cultures. The results help balance purity-yield trade-offs through probabilistic characterizations that guide further experimentation and process design. The framework visualizes results through probabilistic heat maps and flexibility metrics to provide actionable insights for process development scientists. Being primarily data-based, the framework is transferable to other types of bioprocesses.

References

[1] Yang, W., Qian, W., Yuan, Z., Chen, B., 2022. Perspectives on the flexibility analysis for continuous pharmaceutical manufacturing processes. Chinese Journal of Chemical Engineering, 41, 29-41.

[2] Ding, C., Ierapetritou, M., 2021. A novel framework of surrogate-based feasibility analysis for establishing design space of twin-column continuous chromatography. International Journal of Pharmaceutics, 609, 121161.

[3] Kasemiire, A., Avohou, H.T., De Bleye, C., Sacre, P.Y., Dumont, E., Hubert, P., Ziemons, E., 2021. Design of experiments and design space approaches in the pharmaceutical bioprocess optimization. European Journal of Pharmaceutics and Biopharmaceutics, 166, 144-154.

[4] Sachio, S., Kontoravdi, C., Papathanasiou, M.M., 2023. A model-based approach towards accelerated process development: A case study on chromatography. Chemical Engineering Research and Design, 197, 800-820.

[5] Papathanasiou, M.M., Kontoravdi, C., 2020. Engineering challenges in therapeutic protein product and process design. Current Opinion in Chemical Engineering, 27, 81-88.



Study of the Base Case in a Comparative Analysis of Recycling Loops for Sustainable Aviation Fuel Synthesis from CO2

Antoine Rouxhet, Alejandro Morales, Grégoire Léonard

University of Liège, Belgium

In the context of the fight against global warming, the EU launched the ReFuelEU Aviation plan as part of the Fit for 55 package. Within this framework, sustainable aviation fuels are identified as a key tool for reducing hard-to-abate CO2 emissions. Power-to-fuel processes offer the potential to synthesise a wide range of fuels by replacing crude oil with captured CO2 as the carbon source. This CO2 is combined with hydrogen produced through water electrolysis, utilizing the reverse water-gas shift (RWGS) reaction:

CO2 + H2 ⇌ CO + H2O, ΔH°(298.15 K) = +41 kJ/mol CO2 (1)

The purpose of this reaction is to convert the CO2 molecule into a less stable one, making it easier to transform into complex molecules, such as the hydrocarbon chains that constitute kerosene. This conversion is carried out through the Fischer-Tropsch (FT) reaction:

n CO + (2n+1) H2 ⇌ CnH2n+2 + n H2O, ΔH°(298.15 K) = -160 kJ/mol CO (2)

In previous work, two kinetic reactor models were developed in Aspen Custom Modeler: one for the RWGS reaction [1] and one for the FT reaction [2]. The next step consists of integrating both models into a single process model built in Aspen Plus. This process includes both reaction units and the subsequent separation steps, yielding three main product fractions: the heavy hydrocarbons, the middle distillates, which contain the kerosene-like fraction, and the light hydrocarbons along with unreacted gases.

This work is part of a broader study aimed at comparing different recycling loops for this process. Indeed, the literature proposes various configurations for recirculating unreacted gases, some of which include additional conversion units to transform light FT gases into reactants. However, there is currently a lack of comprehensive comparisons of these options from both technical and economic perspectives. The objective is therefore to compare these configurations to determine the one best suited for kerosene production.

In particular, this work presents the results of the base case, i.e., the recycling of the gaseous phase leaving the separation units without any transformation of this stream. Three options are considered for the entry point of this recycled stream: at the inlet of the RWGS reactor, at the inlet of the FT reactor, or at both inlets. The present study compares these options on the basis of carbon and energy efficiencies.
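
As a minimal sketch of the kind of metrics involved, the functions below implement one common definition of carbon and energy efficiency; the definitions and all numbers are assumptions for illustration and may differ from the metrics used in the study.

```python
def carbon_efficiency(n_c_in_product, n_co2_fed):
    """Fraction of the fed CO2 carbon ending up in the kerosene-range product
    (both in mol C/s)."""
    return n_c_in_product / n_co2_fed

def energy_efficiency(lhv_product_kw, p_electrolysis_kw, q_heat_kw):
    """Product chemical energy over total electric and thermal input (all in kW)."""
    return lhv_product_kw / (p_electrolysis_kw + q_heat_kw)

# Hypothetical stream values for one recycle configuration.
print(f"carbon efficiency: {carbon_efficiency(41.0, 50.0):.2f}")
print(f"energy efficiency: {energy_efficiency(820.0, 1500.0, 200.0):.2f}")
```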

The next step involves adding a transformation unit to the recycling loop, such as a partial combustion unit. This would allow the conversion of light FT gases into process reactants, thereby improving overall efficiency. An economic comparison of the various options is also a goal of the study.

[1] Rouxhet, A., & Léonard, G. (2024). The Reverse Water-Gas Shift Reaction as an Intermediate Step for Synthetic Jet Fuel Production: A Reactor Sizing Study at Two Different Scales. Computer Aided Chemical Engineering, 53, 685-690. doi:10.1016/B978-0-443-28824-1.50115-0

[2] Morales Perez, A., & Léonard, G. (2022). Simulation of a Fischer-Tropsch reactor for jet fuel production using Aspen Custom Modeler. In L. Montastruc & S. Negny (Eds.), 32nd European Symposium on Computer Aided Process Engineering. Amsterdam, Netherlands: Elsevier. doi:10.1016/B978-0-323-95879-0.50051-5



Electricity Bidding with Variable Loads

Iiro Harjunkoski1,2

1Hitachi Energy Germany AG; 2Aalto University, Finland

The ongoing and planned electrification of many industries and processes means that any disturbance or change in production will directly require countermeasures at the power grid level to maintain stability. As the electricity infrastructure is already facing increasing volatility on the supply side due to the growing number of renewable energy source (RES) generation units, it is important to also tap the potential of this electrification. Processes will have a strong impact and can also help balance the RES fluctuations, ensuring that demand and supply are matched at all times. This opportunity has already been recognized [1], and here we further elaborate on the concept by adding a battery energy storage system (BESS) to support the balancing between production targets and grid stability.

Electricity bidding offers more opportunities than only forecasting the electricity load. Large consumers must participate in the electricity markets ahead of time, and their energy bids will affect the market clearing. This mechanism allows power plants to be scheduled to ensure sufficient supply, but with increasing RES participation it becomes a challenge to deliver on these promises, and industrial loads could potentially help maintain stability. The main vehicle for dealing with unplanned supply variations is ancillary services [3], through which the consumer commits to a potential increase or lowering of its energy consumption if called upon. This raises the practical question of how much the process industries can plan for such volatility, as they must mainly focus on delivering to their own customers.

A common option, also chosen by many RES unit owners, is to invest in a BESS to act as a buffer between the consuming load and the power grid. This can also shield the process owner from unwanted and infeasible power volatilities, which can have an immense effect on the more electricity-dependent processes. With such an energy storage system in place, there is an option to use it for offering ancillary services, as well as to participate in energy arbitrage trading [4]. However, the key is how to operate such a combined system profitably while taking into account the uncertainty of electricity prices. In this paper we extend the approach in [5], where a number of energy and ancillary service products are co-optimized under uncertainty in price developments. The previous approach was aimed at RES/BESS owners, where the forecasted load was relatively stable and mainly focused on keeping the system running. Here we treat the load not as a parameter but as a variable and link it to a process schedule, which is co-optimized with the bidding decisions. Following the concepts in [6], we compare cases with various levels of uncertainty (forecasting quantiles) and different sizes of BESS systems using a simplified stochastic approach, which reduces to a deterministic optimization approach when only one scenario is available. The example process is modeled using the resource task network [7] approach.
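
To make the co-optimization idea concrete, the sketch below solves a single-scenario (deterministic) toy version with CVXPY: a schedulable process load and a BESS are dispatched against day-ahead prices. All prices, capacities, and efficiencies are invented, and the ancillary-service products and price uncertainty of the actual approach are deliberately omitted.

```python
import numpy as np
import cvxpy as cp

price = np.array([40.0, 35.0, 30.0, 55.0, 80.0, 60.0])  # day-ahead prices, EUR/MWh
T = len(price)
e_cap, p_cap, eta = 4.0, 2.0, 0.95    # BESS energy (MWh), power (MW), efficiency

load = cp.Variable(T, nonneg=True)    # schedulable process consumption (MW)
ch = cp.Variable(T, nonneg=True)      # BESS charging (MW)
dis = cp.Variable(T, nonneg=True)     # BESS discharging (MW)
soc = cp.Variable(T + 1, nonneg=True) # BESS state of charge (MWh)

constraints = [
    cp.sum(load) == 8.0,              # energy the process must consume in total
    load <= 3.0,                      # per-hour process limit
    ch <= p_cap, dis <= p_cap, soc <= e_cap,
    soc[0] == 0.0,
    soc[1:] == soc[:-1] + eta * ch - dis / eta,
]
grid = load + ch - dis                # net market purchase per hour (bid quantity)
problem = cp.Problem(cp.Minimize(price @ grid), constraints)
problem.solve()
print("cost:", round(problem.value, 1), "EUR")
print("net hourly bids (MW):", np.round(grid.value, 2))
```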



Sodium bicarbonate production from CO2 captured in waste-to-energy plants: an Italian case-study

Laura Annamaria Pellegrini1, Elvira Spatolisano1, Giorgia De Guido1, Elena Riva Redolfi2, Mauro Corradi2, Davide Alberti3, Adriano Carrara3

1Politecnico di Milano, Italy; 2Acinque Ambiente Srl, Italy; 3a2a S.p.A., Italy

Waste-to-energy (WtE) plants, despite offering a sustainable solution to both waste management and energy production, significantly contribute to greenhouse gas emissions (Kearns, 2019). Therefore, integration with CO₂ capture technologies represents a promising approach to enhance sustainability, enabling both waste reduction and climate change mitigation (Otgonbayar and Mazzotti, 2024). Once captured from the flue gas, the CO2 can be converted into high value-added products, following circular economy principles. Key conversion technologies include chemical, electrochemical or biological methods for CO₂ valorization to methanol, syngas, plastics, minerals or fuels. However, challenges remain regarding the cost-effective implementation of these solutions at commercial scale. Research efforts in this respect are focused on improving efficiency and reducing costs, to allow for process scale-up to the industrial level.

One of the viable alternatives for carbon dioxide utilization in the waste-to-energy context is its conversion into sodium bicarbonate (NaHCO₃). NaHCO₃, commonly known as baking soda, is often used in waste-to-energy flue gas treatment to abate various harmful pollutants, such as sulfur oxides (SOₓ) and acidic gases such as hydrogen chloride (HCl). Hence, in situ bicarbonate production from captured carbon dioxide can be an interesting solution for simultaneously lowering the plant's environmental impact and improving its overall economic balance.

To explore sodium bicarbonate production as an alternative for carbon dioxide utilization, its production from sodium carbonate (Na₂CO₃) is analyzed with reference to an existing waste-to-energy plant in Italy (Moioli et al., 2024). The technical assessment of the process is performed in Aspen Plus V14®. The inlet CO2 flowrate is fixed to guarantee a bicarbonate output of about 30% of the annual need of the waste-to-energy plant. The effects of the Na2CO3/CO2 ratio (in the range 0.8-1.2 mol/mol) and temperature (in the range 35-45°C) are analyzed, and performance is evaluated in terms of the energy consumption for each case. Outlet waste streams as well as water demand are minimized by proper integration between process streams. Direct and indirect CO2 emissions are evaluated to verify the process viability. As a result, optimal operating conditions are identified, in view of pilot plant engineering and construction.

Given the encouraging outcomes and the easy integration with existing infrastructure, the potential of carbon dioxide conversion to bicarbonate is demonstrated, proving that it can become a feasible CO2 utilization choice within the waste-to-energy context.

References

Kearns, D.T., 2019. Waste-to-Energy with CCS: A pathway to carbon-negative power generation. Global CCS Institute.

Otgonbayar, T., Mazzotti, M., 2024. Modeling and assessing the integration of CO2 capture in waste-to-energy plants delivering district heating. Energy 290, 130087. https://doi.org/10.1016/j.energy.2023.130087.

Moioli, S., De Guido, G., Pellegrini, L.A., Fasola, E., Redolfi Riva, E., Alberti D., Carrara A., 2024. Techno-economic assessment of the CO2 value chain with CCUS applied to a waste-to-energy Italian plant. Chemical Engineering Science 287, 119717.



A Decomposition Approach for Operable Space Maximization

Alberto Saccardo1, Marco Sandrin1,2, Constantinos C. Pantelides1,2, Benoît Chachuat1

1Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom; 2Siemens Industry Software, London W6 7HA, United Kingdom

Model-based design of experiments (MBDoE) is a powerful methodology for improving parameter precision and thus optimising the development of predictive mechanistic models [1]. By leveraging the system knowledge embedded in a mathematical model structure, MBDoE aims to maximise experimental information while minimising experimental time and resources. Recent developments in MBDoE have enabled the computation of robust campaigns of parallel experiments [2], which could in turn be applied repeatedly in a sequential design. Effort-based methods are particularly suited to the design of parallel experiments. They proceed by discretising the experimental design space into a set of candidate experiments and determine the optimal number of replicates (or efforts) for each, aiming to maximise the information content of the overall campaign.

A challenge with MBDoE is that its success ultimately depends on the assumed model structure, which can introduce epistemic errors when the model presents a large structural mismatch. Traditional MBDoE methods rely on Fisher information matrix (FIM)-derived metrics (e.g., the D-optimality criterion), which implicitly assume a correct model structure [3], making them prone to suboptimality in case of significant structural mismatch. Although such mismatch is common in engineering models, the impact of structural uncertainty on MBDoE has not received as much attention in the literature as parametric uncertainty [3].

Inspired by [4], we propose to address this issue by appending a secondary, space-filling criterion to the main FIM-based criterion in a bi-objective optimisation framework. The idea is for the space-filling criterion to promote alternative experimental campaigns that explore the experimental design space more broadly, yet without significantly compromising their predicted information content. Within an effort-based approach, we compute such a space-filling criterion as a (minimal or average) distance between the selected experiments in the discretised experimental space and maximise it alongside a D-optimality criterion. We can furthermore apply gradient search to refine the effort-based discretization in a subsequent step [5].
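
A minimal sketch of the bi-criterion idea over a discretised design space: a D-optimality score is traded off against a minimum-distance space-filling score via a weighted scalarisation. The toy two-parameter sensitivity model, candidate grid, weight, and brute-force enumeration are all illustrative assumptions; the actual effort-based formulation optimises continuous replicate numbers.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(20, 2))  # discretised design space

def fim(x):
    """Per-experiment FIM for a toy linear-in-parameters model y = a*x1 + b*x2."""
    g = x.reshape(-1, 1)          # parameter sensitivity vector dy/dtheta
    return g @ g.T

def d_criterion(idx):
    """log det of the campaign FIM (sum over the selected experiments)."""
    m = sum(fim(candidates[i]) for i in idx) + 1e-9 * np.eye(2)  # ridge for safety
    return np.linalg.slogdet(m)[1]

def space_filling(idx):
    """Minimal pairwise distance between the selected experiments."""
    return min(np.linalg.norm(candidates[i] - candidates[j])
               for i, j in combinations(idx, 2))

w = 0.5  # scalarisation weight on the space-filling objective
best = max(combinations(range(len(candidates)), 3),
           key=lambda idx: d_criterion(idx) + w * space_filling(idx))
print("selected experiments:", best)
```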

We benchmark the proposed bi-criterion approach against a standard D-optimality approach for a microalgae cultivation system, whereby the (inaccurate) model describes nitrogen consumption with a simple Monod model and the ground truth is simulated using the Droop model.

References

[1] G. Franceschini, S. Macchietto, 2008. Model-based design of experiments for parameter precision: State of the art. Chem Eng Sci 63:4846–4872.

[2] K. P. Kusumo, K. Kuriyan, S. Vaidyaraman, S. García-Muñoz, N. Shah, B. Chachuat, 2022. Risk mitigation in model-based experiment design: a continuous-effort approach to optimal campaigns. Comput Chem Eng 159:107680.

[3] M. Quaglio, E. S. Fraga, F. Galvanin, 2018. Model-Based Design of Experiments in the Presence of Structural Model Uncertainty: An Extended Information Matrix Approach. Chem Engin Res Des 136:129–43.

[4] Q. Chen, R. Paulavičius, C. S. Adjiman, S. García‐Muñoz, 2018. An Optimization Framework to Combine Operable Space Maximization with Design of Experiments. AIChE J 64(11):3944–57.

[5] M. Sandrin, B. Chachuat, C. C. Pantelides, 2024. Integrating Effort- and Gradient-Based Approaches in Optimal Design of Experimental Campaigns. Comput Aided Chem Eng 53:313–18.



Non-invasive Tracking of PPE Usage in Research Lab Settings using Computer Vision-based Approaches: Challenges and Solutions

Haseena Sikkandar, Sanjeevrajan Nagavelu, Pradhima Mani Amudhan, Babji Srinivasan, Rajagopalan Srinivasan

Indian Institute of Technology, Madras, India

Personal Protective Equipment (PPE) protects researchers working in laboratory environments involving biological, chemical, medical, and other hazards. Therefore, monitoring PPE compliance in academic and industrial laboratories is vital. CSB case studies have reported significant injuries and fatalities in university lab settings, highlighting the importance of proper PPE and safety protocols to prevent accidents (https://www.csb.gov/videos). This paper develops a real-time PPE monitoring system using computer vision to ensure lab users wear essential gear such as coats, safety gloves, bouffant caps, goggles, masks, and shoes (Arfan et al., 2023).

Current literature indicates substantial advancements in computer vision and object detection for PPE monitoring in industrial settings, though challenges persist due to variable lighting, background noise, and PPE occlusion (Protik et al., 2021). However, consistent real-time effectiveness in dynamic settings still requires further development of more robust solutions.

The non-intrusive detection of PPE usage in laboratory settings requires (1) a suitable hardware system comprising cameras, along with (2) computer vision-based algorithms, which are essential for effective monitoring.

In hardware system design, the strategic placement of cameras in the donning area, rather than inside the laboratory, is recommended. This makes it possible to capture individuals as they don their PPE before entering hazardous zones. Additionally, environments with significant height variations and lighting variability greatly affect detection accuracy. The physical occlusion of PPE items, either by the individual’s body or by surrounding objects, further complicates the task of ensuring full compliance. Computer vision-based algorithms face challenges with overlapping objects, which can lead to tracking and identification errors. Variations in individual postures, movements, and PPE appearances also reduce detection accuracy. This problem is exacerbated if the AI model is trained on a limited dataset that does not accurately represent real-world diversity. Additionally, static elements like posters or dynamic elements in the background can be misclassified as PPE, leading to a high rate of false positives.

To address the hardware system design issues, a solution involves strategically placing multiple cameras to cover the entire process, eliminating blind spots, and confirming correct PPE usage before individuals enter sensitive zones. In computer vision-based algorithms, the system uses adaptive image processing techniques to tackle variable lighting, occlusion, and posture variations. Software enhancements include multi-object tracking and pose estimation algorithms, trained on diverse datasets for accurate PPE detection. Incorporating edge cameras that utilize decentralized computing significantly enhances the operational efficiency of real-time PPE detection systems.
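
A minimal sketch of the detection-and-compliance step, assuming the ultralytics YOLO API and a hypothetical fine-tuned weights file ("ppe_lab.pt") with lab-specific PPE classes; the class names, the required set, and the image path are all illustrative.

```python
from ultralytics import YOLO  # assumes the ultralytics package is installed

# "ppe_lab.pt" is a hypothetical model fine-tuned on lab PPE classes
# (coat, gloves, goggles, mask, bouffant cap, shoes).
model = YOLO("ppe_lab.pt")

REQUIRED = {"coat", "gloves", "goggles", "mask"}

def missing_ppe(frame_path):
    """Return the required PPE classes not detected in one donning-area frame."""
    result = model(frame_path)[0]
    detected = {result.names[int(c)] for c in result.boxes.cls}
    return REQUIRED - detected

print(missing_ppe("donning_area.jpg"))  # empty set means the person is compliant
```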

Future conceptual challenges in PPE detection systems include the ability to effectively detect multiple individuals. Each laboratory may also require customized PPE based on specific safety requirements. These variations necessitate the development of highly adaptable AI models capable of recognizing a wide range of PPE and distinguishing between different individuals, even in crowded settings, to ensure compliance and safety.

References:

  • M. Arfan et al., “Advancing Workplace Safety: A Deep Learning Approach for PPE Detection using Single Shot Detector”, International Workshop on Artificial Intelligence and Image Processing, Indonesia, pp. 127-132, 2023.
  • Protik et al., “Real-time PPE Detection Using YOLOv4 and TensorFlow,” IEEE Region 10 Symposium, Jeju, Korea, pp. 1-6, 2021.


Integrating batch operations involving liquid-solid mixtures into continuous process flows

Valeria González Sotelo, Pablo Monzón, Soledad Gutiérrez Parodi

Universidad de la República, Facultad de Ingeniería, Uruguay

While there has been a growth in specialized simulators for batch processes, the prevailing trend is towards simple cycle modeling. Batch processes can then be integrated into an overall flowsheet, with the output stream properties calculated based on the established reaction conditions (time, temperature, etc.). To guarantee a continuous flow of material, an accumulation tank is usually incorporated.

Moreover, a wide range of heterogeneous batch processes exists within industry. Examples include sequencing batch reactors in wastewater treatment, solid-liquid extraction processes, adsorption reactors, lignocellulosic biomass hydrolysis, and grain soaking. When processing solid-liquid mixtures, or multiphase mixtures in general, phase separation can be exploited, allowing for savings in resources such as raw materials or energy. In fact, these processes enable the separate discharge of the liquid and solid phases, providing flexibility to selectively retain either phase or a fraction thereof. Sequencing batch reactors retain microbial flocs while periodically discharging a portion of the treated effluent. By treating lignocellulosic biomass with a hot, pressurized aqueous solution, lignin and pentoses can be solubilized, leaving cellulose as the remaining solid phase [1]. In this case, since cellulose is the fraction of interest, the solid can be separated and most of the liquid phase retained for processing a new batch of biomass, thus saving reagents, water, and energy.

In a heterogeneous batch process, a degree of freedom typically emerges that often becomes a decision variable in the design of these processes: the solid-to-liquid ratio (S/L), a critical parameter that influences factors such as reaction rate and heat and mass transfer. Partial phase retention adds a new degree of freedom, the retained fraction, to the process design.

The re-use process is thus inherently dynamic. In a traditional batch process, the time horizon for analysis corresponds to the reaction, loading, and unloading time. For re-use processes, the mass balance will cause reaction products to accumulate in the retained phase from cycle to cycle. To take this into account, the time horizon for mass balances needs to be extended to include as many cycles as necessary. Eventually, a periodic operating condition will be reached.

The primary objective of this work is to incorporate the batch-with-reuse model into flowsheets, similarly to traditional batches, by identifying the periodic condition under the given process conditions. A general algorithm to simulate the periodic condition, suitable for any kinetics, is proposed; it could enable the coupling of these processes in a simulation flowsheet. Regarding the existence of a periodic condition, an analytical study of the involved kinetic expressions and illustrative examples will be included.
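
The cycle-to-cycle accumulation and its periodic limit can be illustrated with a simple fixed-point iteration; the toy saturation kinetics and retained fraction below are invented, whereas the proposed general algorithm is kinetics-agnostic.

```python
def run_batch(c_in, retained_frac=0.7, k=0.4, c_sat=80.0):
    """One cycle: retain a fraction of the liquor (fresh make-up carries no
    product), then react; toy kinetics approach a saturation level c_sat (g/L)."""
    c0 = retained_frac * c_in
    return c0 + k * (c_sat - c0)

def periodic_state(c_start=0.0, tol=1e-8, max_cycles=500):
    """Iterate cycles until the cycle-to-cycle change falls below tol."""
    c = c_start
    for n in range(1, max_cycles + 1):
        c_next = run_batch(c)
        if abs(c_next - c) < tol:
            return c_next, n
        c = c_next
    raise RuntimeError("no periodic condition reached within max_cycles")

c_star, n_cycles = periodic_state()
print(f"periodic product concentration {c_star:.4f} g/L after {n_cycles} cycles")
```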

[1] Mangone, F., Gutiérrez, S., 2020. A Recycle Model of Spent Liquor in Pre-treatment of Lignocellulosic Biomass. Computer Aided Chemical Engineering, 48, 565-570. Elsevier. https://doi.org/10.1016/B978-0-12-823377-1.50095-1



Enhancing decision-making by prospective Life Cycle Assessment linked to Integrated Assessment Models: the roadmap of formic acid production

Marta Rumayor, Javier Fernández-González, Antonio Domínguez-Ramos, Angel Irabien

University of Cantabria, Spain

Formic acid (FA) is gaining attention as a versatile compound used both as a chemical and as an energy carrier. Currently, it is produced by a two-step fossil-based process that includes the reaction of methanol with carbon monoxide to methyl formate, which is then hydrolyzed to form FA. With growing global concern about climate change, the exploration of new strategies to produce FA from renewable sources has never been more important. Several sustainable FA production pathways have emerged in recent decades, including those based on chemocatalytic and electrochemical processes. Their environmental viability has been confirmed through ex-ante life cycle assessment (LCA), provided there are enhancements in energy consumption and consumable durability.1,2 However, these studies have been conducted using static approaches, which may not accurately reflect the evolution of the background system or the long-term reliability of the environmental prospects when other decarbonization pathways unfold in the background processes.

Identifying exogenous challenges affecting FA production due to supply changes is just as crucial as targeting the hotspots in the foreground technologies. This study aims to overcome this epistemological uncertainty by performing a dynamic life cycle assessment (d-LCA) utilizing the open-source Python premise tool with the IMAGE integrated assessment model (IAM). A time-dependent background system was developed, aligned with prospective scenarios based on socio-economic pathways and climate change mitigation targets, and coupled with the ongoing portfolio of emerging renewable technologies together with traditional decarbonization approaches.
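
For orientation, the sketch below shows the typical shape of a premise workflow for generating such prospective background databases; the scenario keys, database names, decryption key, and method names are assumptions that vary across premise versions and require an ecoinvent licence, so this is an outline rather than the authors' exact setup.

```python
from premise import NewDatabase  # assumes premise and a brightway2 project exist

# Prospective backgrounds aligned with IMAGE scenarios (keys follow premise's
# documented {model, pathway, year} format; exact strings depend on the version).
ndb = NewDatabase(
    scenarios=[
        {"model": "image", "pathway": "SSP2-RCP26", "year": 2030},
        {"model": "image", "pathway": "SSP2-RCP26", "year": 2050},
    ],
    source_db="ecoinvent 3.9.1 cutoff",  # name of the database in the project
    source_version="3.9.1",
    key="XXXXXXXXXXXXXXXXXXXXXXXXX",     # IAM-file decryption key (placeholder)
)
ndb.update_all()             # transform electricity, steel, cement, ... sectors
ndb.write_db_to_brightway()  # export one prospective database per scenario-year
```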

Given the substantial energy demands of chemocatalytic- and electro-based technologies, they could not be considered a feasible decarbonization solution under pessimistic policy scenarios. Conversely, a rapid development rate could enhance the feasibility of the electro-based pathway by 2030 within the optimistic background trajectory. A fully renewable electrolytic production approach could significantly reduce carbon emissions (up to 70%) and fossil fuel dependence (up to 80%) compared to conventional production by 2050. Other traditional approaches involve an intermediate decarbonization/defossilization synergy. Despite the potential of the electro-based pathway, a complete shift would involve land degradation risks. To facilitate the development of electrolyzers, prioritizing reductions in the use of scarce materials is crucial, aiming to enhance durability to 7 years by 2050. This study enables a comprehensive analysis of the portfolio of production processes, minimizing the overall impact across several regions and time horizons and interlinking them with energy-economy-climate systems.

Acknowledgements

The present work is related to the CAPTUS Project. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. J.F.-G. would like to thank the Spanish Ministry of Science and Innovation (MICIN) for the financial support through the FPU grant (19/05483).

References

(1) Rumayor, M.; Dominguez-Ramos, A.; Perez, P.; Irabien, A. Journal of CO2 Utilization 2019, 34, 490–499.

(2) Rumayor, M.; Dominguez-Ramos, A.; Irabien, A. Sustainable Production and Consumption 2019, 18, 72–82.



Towards Self-Tuning PID Controllers: A Data-Driven, Reinforcement Learning Approach for Industrial Automation

Kyle Territo, Peter Vallet, Jose Romagnoli

LSU, United States of America

As industries transition toward the digitalization and interconnectedness of Industry 4.0, the availability of vast amounts of process data opens new opportunities for optimizing industrial control systems. Traditional Proportional-Integral-Derivative (PID) controllers often require manual tuning to maintain optimal performance in the face of changing process conditions. This paper presents an automated and adaptive method for PID tuning, leveraging historical closed-loop data and machine learning to create a data-driven approach that can continuously evolve over time.

At the core of this method is the use of historical process data to train a plant surrogate model, which accurately mimics the behavior of the real system under various operating conditions. This model allows for safe and efficient exploration of control strategies without interfering with live operations. Once the surrogate model is constructed, a reinforcement learning (RL) agent interacts with it to learn the optimal control policy. This agent is trained to respond dynamically to the current state of the plant, which is defined by a comprehensive set of variables, including operational conditions, system disturbances, and other relevant measurements.

By integrating RL into the tuning process, the system is capable of adapting to a wide range of scenarios without the need for manual intervention. The RL agent learns to adjust the PID controller parameters based on the evolving state of the system, optimizing performance metrics such as stability, response time, and energy efficiency. After the training phase, the agent is deployed online to monitor the real-time state of the plant. If any significant deviations or disturbances are detected, the RL agent is called upon to make real-time adjustments to the PID controller, ensuring that the process remains optimized under new conditions.
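
The sketch below illustrates the surrogate-in-the-loop tuning idea in miniature: a first-order surrogate stands in for the plant, closed-loop episodes are scored by integrated squared error, and a simple random search over gains stands in for the RL policy update. The dynamics, gain ranges, and cost function are all invented for illustration.

```python
import numpy as np

def surrogate_step(y, u, dt=0.1, tau=2.0, gain=1.5):
    """Data-driven stand-in for the plant: a first-order response model."""
    return y + dt / tau * (-y + gain * u)

def episode_cost(kp, ki, sp=1.0, n=200, dt=0.1):
    """Run one closed-loop episode on the surrogate and return the ISE."""
    y = i = cost = 0.0
    for _ in range(n):
        e = sp - y
        i += e * dt
        u = kp * e + ki * i           # PI law (derivative term omitted for brevity)
        y = surrogate_step(y, u, dt)
        cost += e * e * dt
    return cost

# Stand-in for the RL agent: sample candidate gain vectors, keep the best.
rng = np.random.default_rng(1)
gains = rng.uniform([0.1, 0.0], [5.0, 2.0], size=(200, 2))
kp, ki = min(gains, key=lambda g: episode_cost(*g))
print(f"tuned gains: Kp={kp:.2f}, Ki={ki:.2f}, ISE={episode_cost(kp, ki):.4f}")
```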

One of the unique advantages of this approach is its ability to continuously update and refine the surrogate model and RL agent over time. As the plant operates, real-time data is collected and integrated into the historical dataset, allowing the models to adapt to any long-term changes in the process. This continuous learning capability makes the system highly resilient and scalable, ensuring optimal performance even in the face of new and unforeseen operating conditions.

By combining data-driven modeling with reinforcement learning, this method provides a robust, adaptive, and automated solution for PID tuning in modern industrial environments. The approach not only reduces the need for manual tuning and oversight but also maximizes the use of available process data, aligning with the principles of Industry 4.0. As industrial systems become increasingly complex and data-rich, such methods hold significant potential for improving process efficiency, reliability, and sustainability.



Energy integration of an intensified biorefinery scheme from waste cooking oil to produce sustainable aviation fuel

Ma. Teresa Carrasco-Suárez1, Araceli Guadalupe Romero-Izquierdo2

1Faculty of Engineering, Monash University, Australia; 2Facultad de Ingeniería, Universidad Autónoma de Querétaro, Mexico

Sustainable aviation fuel (SAF) has been proven to be a viable alternative for reducing the CO2 emissions derived from the aviation sector's activities, boosting its sustainable growth. However, the reported SAF processes are not economically competitive with fossil-derived jet fuel; thus, the application of strategies to reduce these economic issues has captured the interest of researchers and industry. In this sense, in 2022 Carrasco-Suárez et al. studied the intensification of the SAF separation zone of a biorefinery scheme based on waste cooking oil (WCO), which allowed a 3.07% reduction in CO2 emissions with respect to the conventional processing scheme, while also diminishing the operational cost of steam and cooling water services. Despite these improvements, the WCO biorefinery scheme is not economically viable and possesses high energy requirements. For this reason, in this work we present the energy integration of the whole biorefinery scheme from WCO, including the intensification of all separation zones involved in the scheme, using Aspen Plus V10.0. The energy integration of the WCO biorefinery scheme was addressed through the pinch point methodology to minimize its energy requirements. The energy integration (EI-PI-S) results are presented in the form of indicators to compare them with the conventional scheme (CS) and the intensified scheme before energy integration (PI-S). The defined indicators were: total annual cost (TAC), energy investment per unit of energy delivered by the products (EI-P), energy investment per mass of the main product (EI-MP, with SAF as the main product), and CO2 emissions per mass of the main product (CO2-MP). According to the results, the EI-PI-S achieves the best indicators relative to the CS and PI-S, reducing the steam and cooling water requirements by 14.34% and 31.06%, respectively, with respect to the PI-S; the CO2 emissions were also reduced by 13.85% and 14.13% with respect to the CS and PI-S, respectively. However, the TAC of the EI-PI-S is 0.5% higher than that of the PI-S. The studied integrated and intensified WCO biorefinery scheme thus arises as a feasible option to produce SAF and other biofuels, attending to the principle of minimum energy requirements while improving economic performance.
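
For context, the sketch below implements the classic problem-table step of pinch analysis (minimum hot and cold utility targets) for a made-up four-stream example; the stream data and ΔTmin are invented and unrelated to the biorefinery case.

```python
import numpy as np

# Hypothetical stream data: (supply T degC, target T degC, CP in kW/K).
streams = [(180.0, 60.0, 3.0), (150.0, 30.0, 1.5),   # hot streams (cooling down)
           (30.0, 135.0, 2.0), (80.0, 140.0, 4.0)]   # cold streams (heating up)
dt_min = 10.0                                        # minimum approach temperature

# Shift hot streams down and cold streams up by dt_min/2.
shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp) if ts > tt
           else (ts + dt_min / 2, tt + dt_min / 2, cp)
           for ts, tt, cp in streams]

bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
surplus = []
for hi, lo in zip(bounds, bounds[1:]):
    net_cp = sum(cp * (1 if ts > tt else -1)         # hot adds CP, cold removes it
                 for ts, tt, cp in shifted
                 if min(ts, tt) <= lo and max(ts, tt) >= hi)
    surplus.append(net_cp * (hi - lo))               # interval heat surplus (kW)

cascade = np.concatenate([[0.0], np.cumsum(surplus)])
q_hot = max(0.0, -cascade.min())                     # minimum hot utility (kW)
q_cold = q_hot + cascade[-1]                         # minimum cold utility (kW)
print(f"Q_hot,min = {q_hot:.1f} kW, Q_cold,min = {q_cold:.1f} kW")
```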

References:

M.T. Carrasco-Suárez, A.G. Romero-Izquierdo, C. Gutiérrez-Antonio, F.I. Gómez-Castro, S. Hernández, 2022. Production of renewable aviation fuel by waste cooking oil processing in a biorefinery scheme: Intensification of the purification zone. Chem. Eng. Process. - Process Intensif. 181, 109103. https://doi.org/10.1016/j.cep.2022.109103



Integrating Renewable Energy and CO₂ Utilization for Sustainable Chemical Production: A Superstructure Optimization Approach

Tianen Lim, Yuxuan Xu, Zhihong Yuan

Tsinghua University, China, People's Republic of

Climate change, primarily caused by the extensive emission of greenhouse gases, particularly carbon dioxide (CO₂), has intensified global efforts toward achieving carbon neutrality. In this context, renewable energy and CO₂ utilization technologies have emerged as key strategies for reducing the reliance on fossil fuels and mitigating environmental impacts. In this work, a superstructure optimization model is developed to integrate renewable energy networks and chemical production processes. The energy network incorporates multiple sources, including wind, solar, and biomass, along with energy storage systems to enhance reliability and minimize grid dependence. The reaction network features various pathways that utilize CO₂ as a raw material to produce high value-added chemicals such as polyglycolic acid (PGA), ethylene-vinyl acetate (EVA), and dimethyl carbonate (DMC), allowing for efficient conversion and resource utilization. The optimization is formulated as a mixed-integer linear programming (MILP) model, targeting the minimization of production costs while identifying the most efficient energy and reaction routes. This research supports the green transition of the chemical industry by optimizing a model that integrates renewable energy and CO₂ in chemical processes, contributing to more sustainable production methods.
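
As a minimal sketch of the MILP structure described here, the toy model below selects among three hypothetical CO₂-conversion pathways with binary build decisions and capacity-linking constraints, solved with SciPy's milp; all costs, capacities, and the demand figure are invented.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy superstructure: three CO2-to-chemical pathways with continuous production
# x_i (kt/y) and binary build decisions y_i. Variable vector z = [x1 x2 x3 y1 y2 y3].
op_cost = np.array([120.0, 150.0, 90.0]) * 1e3   # operating cost, $/kt
cap_cost = np.array([8e6, 5e6, 12e6])            # annualised capital if built, $/y
cap_max = np.array([60.0, 40.0, 80.0])           # pathway capacity, kt/y
demand = 100.0                                   # total product demand, kt/y

c = np.concatenate([op_cost, cap_cost])          # minimise operating + capital cost
integrality = np.array([0, 0, 0, 1, 1, 1])       # x continuous, y integer (binary)

A_demand = np.array([[1, 1, 1, 0, 0, 0]])        # sum_i x_i >= demand
A_link = np.hstack([np.eye(3), -np.diag(cap_max)])  # x_i - cap_max_i * y_i <= 0
constraints = [LinearConstraint(A_demand, demand, np.inf),
               LinearConstraint(A_link, -np.inf, 0.0)]
bounds = Bounds(np.zeros(6), np.concatenate([cap_max, np.ones(3)]))

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
x, y = res.x[:3], np.round(res.x[3:])
print("build decisions:", y, "| production (kt/y):", np.round(x, 1))
print("total cost ($/y):", round(res.fun))
```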



Sustainable production of L-lactic acid from lignocellulosic biomass using a recyclable buffer: Process development and techno-economic evaluation

Donggeun Kang, Donghyeon Kim, Dongin Jung, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

With growing concerns about energy security and climate change, there is an increasing emphasis on finding solutions for sustainable development. To address this problem, using lignocellulosic biomass (LCB) to produce polymeric materials is one of the promising strategies to reduce dependence on fossil fuels. L-lactic acid (L-LA), a key monomer in biodegradable plastics, is a sustainable alternative that can be derived from LCB. The L-LA production process typically involves several technologies, such as fermentation, filtration, and distillation. Large amounts of buffer are used to maintain the proper pH during fermentation, and conventional buffers (e.g., CaCO3) are often selected because of their low cost. However, these buffers cannot be recycled efficiently, and the potential of recyclable buffers remains uncertain. In this work, we aim to develop and evaluate a novel process for sustainable L-LA production using a recyclable buffer (i.e., KOH). The process involves a series of unit operations such as pretreatment, fermentation, extraction, and electrolysis. In particular, the fermentation process is designed to achieve high yields of L-LA by maximizing the conversion of sugars to L-LA. In addition, an efficient buffer regeneration process using membrane electrolysis is implemented to recycle the buffer with minimal energy input. We then evaluated the viability of the proposed process compared to the conventional process based on the minimum selling price (MSP) and net CO2 emissions (NCE). The MSP for L-LA was evaluated to be 0.88 USD/kg L-LA, and the NCE was assessed to be 3.31 kg CO₂-eq/kg L-LA. These results represent a 15% reduction in MSP and a 10% reduction in NCE compared to the conventional process. Additionally, a sensitivity analysis was performed with a 20% change in production scale and in LCB composition from the reference values. The results showed that the MSP varied from -4.4% to +3.6% with production scale and from -13.0% to +19.0% with LCB composition. The proposed process, as a cost-effective and eco-friendly route, promotes biotechnological practices for the sustainable production of L-LA.




Potential of chemical looping for green hydrogen production from biogas: process design and techno-economic analysis

Donghyeon Kim, Donggeun Kang, Dongin Jung, Siuk Roh, Jiyong Kim

School of Chemical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

Hydrogen (H2), as the most promising alternative to conventional fossil fuel-based energy carriers, faces the critical challenge of diversifying its sources and lowering production costs. In general, there are two main technological routes for H2 production: electrolysis using renewable power and catalytic reforming of natural gas. Biogas, produced from organic waste, offers a renewable and carbon-neutral option for H2 production, but due to its high CO2 content, it requires either pre-separation of CO2 from CH4 or a catalyst with different performance before it can be used as a feed gas in existing reforming processes. Chemical looping reforming (CLR), as an advanced H2 production system, uses an oxygen carrier as the oxidant instead of air, allowing raw biogas to be used directly in the reforming process. Recently, a number of studies on the design and analysis of the CLR process have been reported, and these technological studies have gradually improved the economic feasibility of H2 production by CLR. However, for the CLR process to be deployed in the biogas treatment industry, further research is needed to comprehensively analyze the economic, environmental, and technical capabilities of CLR processes under different feed conditions, required capacities, and targeted H2 purities. This study proposes new biogas-based CLR processes and analyzes their capabilities from techno-economic and environmental perspectives: i) conventional CLR as a base process, ii) chemical looping steam reforming (CLSR), iii) chemical looping water splitting (CLWS), and iv) chemical looping dry reforming (CLDR). The proposed processes consist of unit operations such as a CLR reactor, a water-gas shift reactor, a pressure swing adsorption (PSA) unit, and a monoethanolamine (MEA) sorbent-based CO2 absorption unit. Evaluation metrics include unit production cost (UPC), net CO2 equivalent emissions (NCE), and energy efficiency to compare economic, environmental, and technical performance, respectively. Each process is simulated using the commercial process simulator Aspen Plus to obtain mass and energy balance data. The oxygen carrier-to-fuel ratio and the heat exchanger network (HEN) are optimized through thermodynamic analysis to ensure efficient redox reactions, maximize heat recovery, and achieve autothermal conditions. As a result, we comparatively analyze the economic and environmental capabilities of the proposed processes by identifying the major cost drivers and CO2 emission contributors. In addition, a sensitivity analysis is performed over various scenarios to provide technical solutions that improve economic and environmental performance, supporting the real-world implementation of the CLR process.



Data-Driven Soft Sensors for Process Industries: Case Study on a Delayed Coker Unit

Wei Sun1, James G. Brigman2, Cheng Ji1, Pratap Nair2, Fangyuan Ma1, Jingde Wang1

1Beijing University of Chemical Technology, China, People's Republic of; 2Ingenero Inc., 4615 Southwest Freeway, Suite 320, Houston TX 77027, USA

Research on data-driven soft sensors has been conducted extensively, yet reports of successful industrial applications are still notably scarce. The reason can be attributed to the variable operating conditions and frequent disturbances encountered during real-time process operations. Industrial data are typically nonlinear, dynamic, and highly unbalanced, which poses major challenges for capturing the key characteristics of the underlying processes. To address this issue, this work presents a comprehensive solution for industrial applications of soft sensors, including feature selection, feature extraction, and model updating.

Feature selection aims to identify variables that are both independent of each other and have a significant impact on the performance of concern, including quality and safety. It not only helps in reducing the dimensionality of the data to simplify the model, but also improves prediction performance. Process knowledge can be utilized to initially screen variables; then correlation and redundancy analysis has to be employed, because information redundancy not only increases the computational load of modeling but also significantly affects prediction accuracy. Therefore, a mutual information-based relevance-redundancy algorithm is introduced for feature selection in this work, in which the relevance and redundancy among process variables are evaluated through a comprehensive correlation function and ranked according to their importance using a greedy search to obtain the optimal variable set [1]. Feature extraction is then performed to capture internal features from the optimal variable set and build the association between latent features and output variables. Considering the complexity of industrial processes, deep learning techniques are often leveraged to handle the intricate patterns and relationships within the data. Long Short-Term Memory (LSTM) networks, a specific type of recurrent neural network (RNN), are particularly well suited to this task due to their ability to capture long-term dependencies in sequential data. In industrial processes, many variables exhibit temporal correlations, and LSTM networks can effectively model these dependencies by maintaining a memory state that allows them to learn from sequences of data over extended periods. Meanwhile, a differential unit is embedded in the latent layer of the LSTM networks in this work to simultaneously handle the short-term nonstationary features caused by process disturbances [2]. Once trained, the model is updated during online application to incorporate slow deviations in equipment and reaction agents. Some quality-related data usually become available only after a delay relative to real-time measurements, but they can still be utilized to fine-tune the model parameters, ensuring sustained prediction accuracy over an extended period. To verify the effectiveness of this work, a case study on a delayed coker unit is investigated. The results demonstrate promising long-term prediction performance for tube metal temperature, indicating the potential of this work for industrial application.
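
A minimal sketch of the relevance-redundancy idea using scikit-learn's mutual information estimator and an mRMR-style greedy search on synthetic data; the authors' comprehensive correlation function is not reproduced here, so the scoring rule is a generic stand-in.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                       # six candidate process variables
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=500)     # variable 3 is redundant with 0
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

relevance = mutual_info_regression(X, y, random_state=0)  # MI(variable; target)

selected, pool = [], list(range(X.shape[1]))
for _ in range(3):                                  # greedy relevance-redundancy search
    def score(j):
        if not selected:
            return relevance[j]
        redundancy = np.mean([mutual_info_regression(X[:, [k]], X[:, j],
                                                     random_state=0)[0]
                              for k in selected])
        return relevance[j] - redundancy            # mRMR-style trade-off
    best = max(pool, key=score)
    selected.append(best)
    pool.remove(best)
print("selected variables:", selected)
```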

[1] Tao, T., Ji, C., Dai, J., Rao, J., Wang, J. and Sun, W. Data-based Health Indicator Extraction for Battery SOH Estimation via Deep Learning, Journal of Energy Storage, 2024

[2] Ji, C., Ma, F., Wang, J., & Sun, W. Profitability Related Industrial-Scale Batch Processes Monitoring via Deep Learning based Soft Sensor Development, Computers and Chemical Engineering, 2022



Retrofitting AP-X LNG Process Through Mixed Refrigerant Composition Variation: A Sensitivity Analysis Towards Decarbonization Objective

Mutaman Abdulrahim, Saad Al-Sobhi, Fares Almoamoni

Chemical Engineering Department, Qatar University, Qatar

Despite the promising outlook for the LNG market as a cost-effective energy carrier, the associated GHG emissions remain an obstacle toward the net-zero emissions target. This study focuses on the AP-X LNG process, investigating the potential for decarbonization through optimizing the mixed refrigerant (MR) composition. The process simulation is carried out using Aspen HYSYS v12.1 to simulate the large-scale AP-X LNG process, with the Peng-Robinson equation of state as the fluid package. Several reported studies have incorporated ethylene into their MR cycle instead of ethane, which might result in different MR volumes and energy requirements. Different refrigerant compositions are examined through the Aspen HYSYS optimizer, aiming to identify the optimal MR composition that minimizes environmental impact and maximizes profitability without compromising the efficiency and performance of the process. An Energy, Exergy, Economic, and Environmental (4E) assessment will be performed to obtain key performance indicators such as specific power consumption, exergy efficiency, and cost of production. This work will contribute to the retrofitting and sustainability of existing AP-X-based plants, offering insights into pathways for reducing the carbon footprint of the AP-X process.



Performance Evaluation of Gas Turbine Combined Cycle Plants with Hydrogen Co-Firing Under Various Operating Conditions

Hyeonrok Choi1,2, Won Yang1, Youngjae Lee1, Uendo Lee1, Changkook Ryu2, Seongil Kim1

1Korea Institute of Industrial Technology, Korea, Republic of (South Korea); 2School of Mechanical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

In response to global efforts on climate change, countries are advancing low-carbon strategies and aiming for carbon neutrality. To reduce CO₂ emissions, fossil fuel power plants are integrating co-firing and combustion technologies centered on carbon-free fuels. Hydrogen has emerged as a promising fuel option, especially for gas turbine combined cycle (GTCC) plants when co-fired with natural gas. Because hydrogen and methane have similar Wobbe Index (WI) values, only minimal modifications to the existing gas turbine nozzles are required. Furthermore, hydrogen's wide flammability limits allow stable operation even at elevated fuel-air ratios. Gas turbines are also adaptable to changes in ambient conditions, which enables them to accommodate the output variations and operational changes associated with hydrogen co-firing. Hydrogen, having distinct combustion characteristics compared to natural gas, affects gas turbine operation and alters the properties of the exhaust gases. The increased water vapor fraction from hydrogen co-firing results in a higher specific heat capacity of the exhaust gases and a reduced flow rate, leading to changes in turbine power output and efficiency compared to methane combustion. These changes affect the heat transfer characteristics of the Heat Recovery Steam Generator (HRSG) in the bottoming cycle, and thereby the overall thermal performance of the GTCC plant. Since gas turbine operation varies with seasonal changes in temperature and humidity, it is essential to evaluate the impact of hydrogen co-firing on thermal performance across different seasonal conditions.
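
The nozzle-interchangeability argument rests on the Wobbe Index, WI = HHV / sqrt(relative density), which can be checked in a few lines. The heating values below are approximate literature figures (they vary slightly with reference conditions), so the sketch illustrates the trend rather than reproducing the study's data.

```python
import numpy as np

# approximate volumetric higher heating values [MJ/Nm3] and molar
# masses [g/mol]; literature values vary with reference conditions
HHV = {"CH4": 39.8, "H2": 12.7}
M = {"CH4": 16.04, "H2": 2.016}
M_AIR = 28.96

def wobbe_index(x_h2):
    """Wobbe Index of a H2/CH4 blend: WI = HHV / sqrt(relative density)."""
    hhv = x_h2 * HHV["H2"] + (1 - x_h2) * HHV["CH4"]
    m_mix = x_h2 * M["H2"] + (1 - x_h2) * M["CH4"]
    return hhv / np.sqrt(m_mix / M_AIR)

for x in (0.0, 0.3, 0.5, 1.0):
    print(f"{x:.0%} H2: WI = {wobbe_index(x):.1f} MJ/Nm3")
```

Despite hydrogen's much lower volumetric heating value, its low density keeps the WI of the blends within roughly ten percent of pure methane's, which is what permits near-unmodified nozzles.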

This study developed an in-house code to evaluate gas turbine performance during hydrogen co-firing and to assess the HRSG and steam turbine cycle based on the heat transfer mechanism, focusing on the impact on thermal performance across different seasonal conditions. The effects of hydrogen co-firing on GTCC plant thermal performance were assessed under various ambient conditions: three ambient cases (-12°C, RH 60%; 5°C, RH 60%; and 32°C, RH 70%) were analyzed for two scenarios, one with fixed Turbine Inlet Temperature (TIT) and one with fixed power output. The 600 MWe-class GTCC plant model consists of two F-class gas turbines and one steam turbine. Compressor performance maps and a turbine choking equation were used to analyze variations in operating point and isentropic efficiency. The HRSG model, developed from heat exchanger geometric data, provided results for gas- and water/steam-side temperatures and heat transfer rates. The GTCC plant models were validated against manufacturer data for design and off-design conditions.

Process analysis was performed to predict GTCC plant thermal performance and power output under hydrogen co-firing. Thermodynamic and off-design models of the gas turbine, HRSG, and steam turbine were used to analyze changes in exhaust temperature, flow rate, and composition, along with the corresponding variations in bottoming cycle output. The effects of seasonal conditions on thermal performance under hydrogen co-firing were analyzed, providing a detailed evaluation of the impact of hydrogen co-firing on GTCC plant efficiency and output across different seasons and highlighting its role in hydrogen applications for combined cycle plants.



Modelling of Woody Biomass Gasification for Process Optimization

Yu Hui Kok, Yasuki Kansha

The University of Tokyo, Japan

In recent decades, public awareness of climate change has increased significantly due to the accelerating rate of global warming. In line with the Paris Agreement and the 2023 "Green Transformation (GX) Basic Policy", the use of biomass instead of fossil fuels for power generation and biofuel production has increased (Zhou & Tabata, 2024). Biomass gasification is widely used for biomass conversion, as this thermochemical process can satisfy various needs such as the production of heat, electricity, fuels, and chemicals (Situmorang et al., 2020). To date, extensive research has been conducted on biomass gasification, particularly focusing on reaction models of the process. These models enable computationally efficient predictions of the yield and composition of various gas and tar species, making it feasible to simulate complex reactor configurations without compromising accuracy. However, existing models are too complex to apply in control systems or to optimize process operating conditions effectively, limiting their practical use in industrial applications.

To address this, a simple reaction model for biomass gasification was developed in this research. To analyze the gasification reactions of the system and evaluate the model, two feedstocks, Japanese cedar and waste cardboard, were used in steam gasification experiments to gain insight into gasifier behaviour. A reaction model was developed by combining a biomass gasification equilibrium model with the experimental data, and implemented in AspenTech's Aspen Plus chemical process simulator. The model was validated by comparing simulation results with available literature and experimental data. As a case study, the model was used for process optimization, examining the effect of key operating parameters of the steam gasifier, such as gasification temperature, biomass moisture content, and steam-to-biomass ratio (S/B), on conversion performance.

The experimental results show that Japanese cedar has a higher syngas yield and H2/CO ratio than cardboard, indicating a more promising feedstock for biofuel and bioenergy conversion. The optimal operating condition for maximizing syngas yield was found to be a gasifier temperature of 850°C and an S/B of 2. The process simulation model predicts syngas composition with an absolute error below 4%. These results support the future development of a control system able to capture the complex interactions between the factors that influence gasifier performance and to optimize them for improved efficiency and scalability in industrial applications.
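
The full Aspen Plus model cannot be reproduced here, but the core equilibrium-model idea can be illustrated compactly: the sketch below solves the water-gas shift equilibrium, using the well-known Moe correlation for the equilibrium constant, for an illustrative pyrolysis-gas/steam feed (feed moles are invented for demonstration).

```python
from math import exp
from scipy.optimize import brentq

def k_wgs(T):
    """Water-gas shift equilibrium constant (Moe correlation), T in K."""
    return exp(4577.8 / T - 4.33)

def wgs_equilibrium(n_co, n_h2o, n_co2, n_h2, T):
    """Solve CO + H2O <-> CO2 + H2 for the equilibrium extent xi [mol];
    Kp is pressure-independent here because moles are conserved."""
    K = k_wgs(T)
    def residual(xi):
        return (n_co2 + xi) * (n_h2 + xi) - K * (n_co - xi) * (n_h2o - xi)
    xi = brentq(residual, -min(n_co2, n_h2) + 1e-9,
                min(n_co, n_h2o) - 1e-9)
    return {"CO": n_co - xi, "H2O": n_h2o - xi,
            "CO2": n_co2 + xi, "H2": n_h2 + xi}

# illustrative feed (mol basis): pyrolysis gas plus steam, S/B around 2
for T in (973.15, 1073.15, 1123.15):          # 700, 800, 850 degC
    comp = wgs_equilibrium(1.0, 2.0, 0.2, 0.5, T)
    print(f"{T - 273.15:.0f} degC: H2/CO = {comp['H2'] / comp['CO']:.2f}")
```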

Reference

Zhou, J., & Tabata, T. (2024). Research Trends and Future Direction for Utilization of Woody Biomass in Japan. Applied Sciences, 14(5), 2205. https://www.mdpi.com/2076-3417/14/5/2205

Situmorang, Y. A., Zhao, Z., Yoshida, A., Abudula, A., & Guan, G. (2020). Small-scale biomass gasification systems for power generation (<200 kW class): A review. In Renewable and Sustainable Energy Reviews (Vol. 117). Elsevier Ltd. https://doi.org/10.1016/j.rser.2019.109486



Comparative analysis of conventional and novel low-temperature and hybrid technologies for carbon dioxide removal from natural gas

Federica Restelli, Giorgia De Guido

Politecnico di Milano, Italy

Global electricity consumption is projected to rise in the coming decades. To meet this growing demand sustainably, renewable energy sources and, among fossil fuels, natural gas are expected to see the most significant growth. As natural gas consumption increases, it will also become necessary to extract it from low-quality reserves, which often contain high levels of acid gases such as carbon dioxide and hydrogen sulphide [1].

The aim of this work is to compare various innovative and conventional technologies for the removal of carbon dioxide from natural gas, treated as a binary mixture of methane and carbon dioxide with carbon dioxide contents ranging from 5 to 70 mol%. It first examines the performance of the physical absorption process using propylene carbonate as a solvent, along with a hybrid process in which it is applied downstream of low-temperature distillation. These results are then compared with previously studied technologies, including conventional chemical absorption with amines, physical absorption with dimethyl ethers of polyethylene glycol (DEPG), low-temperature distillation, and hybrid processes that combine distillation and absorption [2].

Propylene carbonate is particularly advantageous, as noted in the literature [3], when hydrogen sulphide is not present in the raw natural gas. The processes are simulated using Aspen Plus® V9.0 [4] and Aspen HYSYS® V9.0 [5]. The energy analysis is conducted using the "net equivalent methane" method, which allows duties of different natures to be compared [6]. The processes are compared in terms of equivalent methane consumption, methane losses, and product quality, offering guidance on the optimal process based on the composition of the raw natural gas.
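
The "net equivalent methane" idea can be sketched as follows: each duty is converted into the methane that would have to be burned (or fed to a power cycle) to supply it, so that thermal and electrical duties become directly comparable. The efficiencies below are placeholders; the rigorous definitions are given in [6].

```python
LHV_CH4 = 50.0e6   # J/kg, lower heating value of methane
ETA_EL = 0.55      # assumed electrical-generation efficiency from CH4
ETA_TH = 0.90      # assumed thermal (furnace/boiler) efficiency

def equivalent_methane(duty_w, kind):
    """Convert a duty [W] into an equivalent methane flow [kg/s]."""
    eta = ETA_EL if kind == "electrical" else ETA_TH
    return duty_w / (eta * LHV_CH4)

duties = [(5.0e6, "electrical"),   # e.g. compression work
          (12.0e6, "thermal")]     # e.g. reboiler duty
total = sum(equivalent_methane(q, kind) for q, kind in duties)
print(f"net equivalent methane: {total:.3f} kg/s")
```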

References

[1] Langé S., Pellegrini L.A. (2016). Energy analysis of the new dual-pressure low-temperature distillation process for natural gas purification integrated with natural gas liquids recovery. Industrial & Engineering Chemistry Research 55, 7742-7767.

[2] De Guido, G., Gilardi, M., Pellegrini, L.A. (2021). Novel technologies for low-quality natural gas purification. In: Computer Aided Chemical Engineering (Vol. 50, pp. 241-246). Elsevier.

[3] Bucklin, R.W., Schendel, R.L. (1984). Comparison of Fluor Solvent and Selexol processes. Energy Prog., United States.

[4] AspenTech (2016). Aspen Plus®, Burlington (MA), United States.

[5] AspenTech (2016). Aspen HYSYS®, Burlington (MA), United States.

[6] Pellegrini, L.A., De Guido, G., Valentina, V. (2019). Energy and exergy analysis of acid gas removal processes in the LNG production chain. Journal of Natural Gas Science and Engineering 61, 303-319.



Development of Chemical Recycling System for NOx Gas from NH3 Combustion

Isshin Ino, Yuka Sakai, Yasuki Kansha

Organization for Programs on Environmental Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Japan

In line with the Sustainable Development Goals (SDGs) proposed by the United Nations, resource recycling is receiving increasing attention. However, some toxic yet reactive wastes are merely stabilized, using additional resources, before being released into the environment. Converting pollutants into valuable materials by exploiting their reactivity, an approach known as chemical recycling, increases society's recycling ratio and thereby reduces environmental impact.

In this study, the potential of chemical recycling for nitrogen oxide (NOx) gases from ammonia (NH3) combustion was evaluated from chemical and economic points of view. Fundamental research for the system was conducted using NOx gas as the case study. As the chemical recycling route for NOx, conversion to potassium nitrate (KNO3), which is valuable as a fertilizer and as a raw material for gunpowder, was adopted. In this route, the high reactivity of NOx, the very property that makes it toxic, is effectively utilized for chemical conversion.

Currently, most NOx gas in Japan is reduced to nitrogen gas by the Selective Catalytic Reduction (SCR) method using additional ammonia. The nitrogen and water products are neutral and non-toxic but cannot be utilized further. Compared with SCR, the proposed method has high economic potential for the chemical recycling of NOx. The conversion ratio of chemical absorption by potassium hydroxide (KOH) was measured experimentally to analyze the method's environmental and economic potential, and the system's economic value was estimated using the experimental data. The research further focuses on modeling and evaluating the NOx utilization system for NH3 combustion. The study concludes that the utilization system for NOx waste gas is feasible and profitable, enabling further resource utilization and construction of a nitrogen cycle. Furthermore, applying this approach to other waste gases is promising for realizing a sustainable society.



Hybrid Model: Oxygen Balance for the Development of a Digital Twin

Marc Lemperle1, Pedram Ramin1, Julian Kager1, Benny Cassells2, Stuart Stocks2, Krist Gernaey1

1Technical University of Denmark, Denmark; 2Novonesis, Fermentation Pilot Plant

The oxygen transfer rate (OTR) is often a limiting factor when targeting maximum yield in a fermentation process. Understanding the OTR is therefore critical for improved bioreactor performance, as dissolved oxygen often becomes the limiting factor in aerobic fermentations due to its inherently low solubility in liquids such as fermentation broths [1]. With the long-term aim of establishing a digital twin framework, the initial phase of development involves mathematical modelling of the OTR in a pilot-scale bioreactor hosting the filamentous fungus Aspergillus oryzae, using an elaborate experimental design.

The experimental design is specifically tailored to the interplay of the factors influencing the OTR, e.g., airflow, back-pressure, and agitation speed. In a first set of four fermentations, a full-factorial design with three factors (aeration, agitation, and pressure) at two levels (high and low) was applied. Completing the 2³-factorial design, eight unique factor patterns and two centre points were investigated across the four fermentation processes.

Since viscosity plays a crucial role in determining mass transfer properties in the chosen fungal process, understanding its effects is essential for modelling the OTR [2]. A second set of similar experiments made it possible to investigate on-line viscosity measurement in the fermentation broth. A significant improvement in the description of the volumetric oxygen mass transfer coefficient (kLa), with an R² fit of 92%, together with the still unsatisfactory mechanistic understanding of viscosity, led to the development of a hybrid OTR model. The hybrid sequential OTR model includes a light gradient boosting machine (LightGBM) model that predicts the on-line viscosity from both the mechanistic model outputs and the process data. Evaluation of the first series of experiments, without on-line viscosity data, showed an improved kLa fit with a normalized mean square error of up to 0.14. Further evaluation with production batches to demonstrate model performance is being considered as a subsequent step.
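
The sequential hybrid structure, a data-driven viscosity estimator feeding a mechanistic kLa expression, can be sketched generically as below. The van't Riet-type correlation and the synthetic data are stand-ins; the study's fitted model is not reproduced here.

```python
import numpy as np
from lightgbm import LGBMRegressor

# --- data-driven part: predict broth viscosity from process data ---
rng = np.random.default_rng(1)
# columns: specific power P/V [kW/m3], head pressure [bar],
# superficial gas velocity [cm/s]  (synthetic, for illustration)
X = rng.uniform([0.5, 0.5, 1.0], [2.0, 1.5, 3.0], size=(200, 3))
mu = 0.05 + 0.02 * X[:, 0] + rng.normal(0, 0.002, 200)   # viscosity [Pa s]
visc_model = LGBMRegressor(n_estimators=200).fit(X, mu)

# --- mechanistic part: van't Riet-type kLa correlation with a
# viscosity correction; coefficients are placeholders to be fitted ---
def kla(p_per_v, v_super, viscosity, a=0.02, b=0.7, c=0.3, d=0.5):
    return a * p_per_v**b * v_super**c * viscosity**(-d)

# sequential hybrid prediction: viscosity from ML, then kLa mechanistically
x_new = np.array([[1.2, 1.0, 2.0]])
mu_hat = visc_model.predict(x_new)[0]
print(f"viscosity {mu_hat:.4f} Pa s -> "
      f"kLa = {kla(x_new[0, 0], x_new[0, 2], mu_hat):.3f} 1/s")
```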

Cell dry weight and off-line viscosity measurements were taken throughout each of the above-mentioned industrially based fermentation processes. The subsequent analysis aims to decipher the relationships between the OTR and agitation, aeration, head pressure, and viscosity, thus providing the basis for an accurate and reliable mathematical model of the oxygen balance inside a fermentation.

The hybrid OTR model presents the first step towards developing a digital twin, aiding with operational decisions for fermentation processes.



An integrated approach for sustainable water resources optimisation

Michaela Zaroula1, Emilia Kondili1, John K. Kaldellis2

1Optimisation of Production Systems Lab, Mechanical Engineering Department, University of West Attica; 2Soft Energy Applications and Environmental Protection Lab., University of West Attica

Unhindered access to clean water and the preservation and strengthening of water reserves are, together with the coverage of energy needs, basic elements of the survival of the human species (and not only ours), and therefore a top priority of both the UN and the E.U. In particular, the E.U. has set the goal of improving access to clean water for 70 million of its citizens by 2030.

On the other hand, the current balance of water supply and demand in the southern Mediterranean is clearly in deficit and particularly worrying, and the situation worsens further during the summer season, when tourist flows are excessive. In Greece, for example, the ever-increasing demand for water, especially in the island regions during the summer (tourist) season, combined with prolonged drought, has led to over-exploitation (to the point of exhaustion) of the available water reserves, depriving traditional agricultural crops of the necessary amounts of water. This makes the optimal management of existing water resources, as well as the optimal development of new infrastructures and the improvement of existing ones, absolutely imperative.

In particular, the lack of water resources suffocates the irrigation of agricultural crops, constantly shrinking the production of local products and drastically reducing the number of people employed in the primary sector.

In this context and, especially, in light of the ever-increasing pressure on the area's carrying capacity, the present work highlights the main rationale and methods of our current research in water resources optimisation.

More specifically, the main objectives of the present work are:

The detailed description of the integrated energy – water problem in highly pressed areas

The use of scientific methods for the optimization of the water resource system

The development of a mathematical optimization model for the optimal exploitation of existing water resources, as well as for the optimal planning of new infrastructure projects, that quantitatively takes into account the priorities and values of water use.

Furthermore, the innovative approach in the present work also considers the need to reduce the demand based on future forecasts so that the water resources are always in balance with the wider environment where they are utilized.

Water resources sustainability is included in the optimization model for the reduction of the environmental impacts and the environmental footprint of the energy-water system.

It is expected that the completion of the research will result in an integrated tool that supports users in the optimal exploitation of water resources; a minimal illustration of the kind of allocation model envisaged follows below.
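
The core of such an allocation model can be posed as a small linear program; the sources, demands, and priority weights below are invented placeholders standing in for the "priorities and values of water use" mentioned above.

```python
from scipy.optimize import linprog

supply = [60.0, 40.0]             # hm3/yr available from each source
demand = [30.0, 45.0, 20.0]       # hm3/yr requested by each use
value = [3.0, 2.0, 1.0]           # priority weight of each use (assumed)

# decision variables x[i][j]: flow from source i to use j (flattened)
c = [-value[j] for _ in range(2) for j in range(3)]   # maximize value
A_ub, b_ub = [], []
for i in range(2):                 # source capacity constraints
    A_ub.append([1.0 if k // 3 == i else 0.0 for k in range(6)])
    b_ub.append(supply[i])
for j in range(3):                 # do not over-deliver to any use
    A_ub.append([1.0 if k % 3 == j else 0.0 for k in range(6)])
    b_ub.append(demand[j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
print("allocation [source x use]:", res.x.round(1))
```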



Streamlined Life-Cycle Assessments of Chemicals Based on Chemical Taxonomies

Maximilian Guido Hoepfner, Lucas F. Santos, Gonzalo Guillén-Gosálbez

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 1, 8093 Zurich, Switzerland

Addressing the challenges caused by climate change and the impact of human activities requires tools to evaluate and identify strategies for mitigating climate risk. Life cycle assessment (LCA) has emerged as the prevalent approach to quantify the impact of industrial systems, providing valuable insights on how to improve their sustainability performance. Still, in most cases it remains a data-intensive and complex tool. Especially for the chemical industry, with its wide variety of products, there is an urgent need for tools that streamline and accelerate environmental impact assessment. As an example, the largest LCA database, Ecoinvent, currently includes only around 700 chemicals [1], most of them bulk chemicals, which highlights the need to cover data gaps and develop streamlined methods to facilitate the widespread adoption of LCA in the chemical sector.

Specifically, LCA data focus mostly on high production volume chemicals, most of them produced in continuous processes operating at high temperature and pressure. Quantifying the impact of fine chemicals, often produced in batch plants and at milder conditions, thus requires time-consuming process simulations [2] or data-driven methods [3]. The latter estimate impacts based on molecular descriptors and are often trained on high production volume chemicals, which might make them less accurate for fine chemicals.

Alternatively, here we explore another approach to streamline LCA calculations based on classifying chemicals according to their molecular structure, e.g., the functional groups occurring in the molecule. By applying a chemical taxonomy, we establish intervals within which impacts are likely to fall and correlations between sustainability metrics within classes. Furthermore, we investigate the use of process metric indicators (PMI), such as waste-mass and energy intensity, as proxies of LCA impacts. Notably, we studied the 783 chemicals found in the Ecoinvent 3.9.1 cut-off database using the taxonomy implemented in the ClassyFire tool [1]. Subsequently, the LCIs of all chemicals were used to estimate simple PMI metrics, while their impacts were computed following the IPCC 2013 GWP 100 and ReCiPe 2016 midpoint methods. Starting with the classification into organic and inorganic chemicals, a subsequent classification into so-called superclasses, representing more complex molecular characteristics, is performed. Furthermore, we applied clustering, principal component analysis (PCA), and data fitting to identify patterns and trends within the superclasses. The calculations were implemented in Brightway and Python 3.11.
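
The class-wise correlation analysis can be sketched generically with pandas; the superclass labels, PMI values, and impact scores below are invented placeholders, not Ecoinvent data.

```python
import numpy as np
import pandas as pd

# Hypothetical table: each chemical with its ClassyFire superclass, a
# simple process-metric indicator (PMI) and a GWP score (invented data)
df = pd.DataFrame({
    "superclass": ["organooxygen"] * 4 + ["benzenoids"] * 4,
    "energy_intensity": [12, 18, 25, 30, 8, 14, 20, 26],   # MJ/kg
    "gwp": [1.1, 1.6, 2.3, 2.7, 0.9, 1.4, 2.1, 2.6],       # kgCO2-eq/kg
})

# per-superclass correlation and linear fit GWP ~ a * PMI + b
for name, grp in df.groupby("superclass"):
    r = np.corrcoef(grp["energy_intensity"], grp["gwp"])[0, 1]
    a, b = np.polyfit(grp["energy_intensity"], grp["gwp"], deg=1)
    print(f"{name}: r = {r:.2f}, GWP = {a:.3f} * PMI + {b:.2f}")
```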

Preliminary results show that the use of a chemical taxonomy makes it possible to identify stronger correlations between LCA impacts and PMI metrics, opening the door for streamlined LCA methods based on simple metrics and formulas tailored to the specific chemical class.

1. Lucas, E. et al. The need to integrate mass- and energy-based metrics with life cycle impacts for sustainable chemicals manufacture. Green Chem. 26, (2024).

2. Hai, X. et al. Geminal-atom catalysis for cross-coupling. Nature 622, 754–760 (2023).

3. Zhang, D., Wang, Z., Oberschelp, C., Bradford, E. & Hellweg, S. Enhanced Deep-Learning Model for Carbon Footprints of Chemicals. ACS Sustain. Chem. Eng. 12, 2700–2708 (2024).



Aspen Plus Teaching: Spread or Compact Approach

Fernando G. Martins1,2, Henrique A. Matos3

1LEPABE, Laboratory for Process Engineering, Environment, Biotechnology and Energy, Chemical Engineering Department, Faculty of Engineering, University of Porto, Porto, Portugal; 2ALiCE, Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal; 3CERENA, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Portugal

Aspen Plus is a software package for the modelling and simulation of chemical processes, used in chemical engineering courses at different levels worldwide with the support of several books [1-4]. This contribution aims to discuss how this teaching and learning is applied in two Portuguese universities: Instituto Superior Técnico – University of Lisbon (IST.UL) and the Faculty of Engineering – University of Porto (FE.UP).

In 2021, the former integrated master’s in Chemical Engineering, with a duration of 5 years, was split into two courses: the Bachelor, with a duration of 3 years, and the Master, with a duration of 2 years.

With this reformulation, the courses' coordination at IST.UL decided to spread Aspen Plus teaching across different courses in the 2nd year of the Bachelor, starting with a first introduction to the package in the 1st semester in Chemical Engineering Thermodynamics. The idea is to use Aspen Plus to support learning about compound properties, phase diagrams with different models (IDEAL, NRTL, PR, SRK, etc.), azeotrope identification, and activity coefficient calculation. Moreover, binary interaction coefficients can be obtained by regression of experimental data, making the package a helpful tool for experimental data analysis. In addition, a Rankine cycle is modelled, and simulations are carried out to calculate the COP and other KPIs automatically for different fluids.
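
To give a flavour of this type of exercise with open-source tools, the sketch below evaluates an ideal Rankine cycle with the CoolProp property library as a stand-in for the Aspen Plus workflow; the pressures and turbine inlet temperature are illustrative, and the thermal efficiency stands in for the cycle KPIs mentioned above.

```python
from CoolProp.CoolProp import PropsSI

def rankine_efficiency(fluid="Water", p_boiler=50e5, p_cond=0.1e5,
                       T_turbine_in=773.15):
    """Ideal Rankine cycle thermal efficiency (isentropic turbine/pump)."""
    h1 = PropsSI("H", "P", p_cond, "Q", 0, fluid)   # sat. liquid, condenser
    v1 = 1 / PropsSI("D", "P", p_cond, "Q", 0, fluid)
    w_pump = v1 * (p_boiler - p_cond)               # incompressible liquid
    h2 = h1 + w_pump
    h3 = PropsSI("H", "P", p_boiler, "T", T_turbine_in, fluid)
    s3 = PropsSI("S", "P", p_boiler, "T", T_turbine_in, fluid)
    h4 = PropsSI("H", "P", p_cond, "S", s3, fluid)  # isentropic expansion
    return ((h3 - h4) - w_pump) / (h3 - h2)

print(f"thermal efficiency: {rankine_efficiency():.1%}")
```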

The same procedure is now being introduced in other courses, such as Process Separation and Transport Phenomena. At IST.UL, there are two Project Design courses (12 ECTS) at the Bachelor and Master levels that use Aspen Plus as a tool in conceptual project design.

At FE.UP, the introductory teaching of Aspen Plus occurs in the 3rd year of the Bachelor, in a course called Software Tools for Chemical Engineering, which aims to simulate industrial processes of small complexity, properly choosing the applicable thermodynamic and unit operation models and analysing the influence of design variables and operating conditions. Aspen Plus is also taught, in a more advanced way, in the Engineering Design course (12 ECTS) in the 2nd year of the master's degree, when students develop preliminary designs for industrial chemical processes.

This work analyses how these two teaching strategies influence students' performance in the two Project Design courses at IST.UL and in Engineering Design at FE.UP, given that Aspen Plus is used intensively in these courses.

References:

[1] Schefflan, R. (2016). Teach yourself the basics of ASPEN PLUS, 2nd edition, Wiley & Sons

[2] Al-Malah, K.I.M. (2017). Aspen Plus – Chemical Engineering Applications, Wiley & Sons

[3] Sandler, S.I. (2015). Using Aspen Plus in Thermodynamics Instruction: A Step-by-Step Guide, Wiley & Sons

[4] Adams II, T.A. (2022). Learn Aspen Plus in 24 Hours, 2nd Edition, McGraw Hill



Integration of Life Cycle Assessment into the Optimal Design of Hydrogen Infrastructure for Regional-Scale Deployment

Alessandro Poles1, Catherine Azzaro-Pantel1, Henri Schneider2, Renato Luise3

1Laboratoire de Génie Chimique, Université Toulouse, CNRS, INPT, Toulouse, France; 2LAboratoire PLAsma et Conversion d'Énergie, INPT, Toulouse, France; 3European Institute for Energy Research, Emmy-Noether Straße 11, Karlsruhe, Germany

Climate change mitigation is one of the most urgent global challenges. Greenhouse gas (GHG) emissions are the primary driver of climate change, necessitating coordinated international action. However, political and territorial complexities make a uniform global approach difficult. As a result, individual countries are developing their own national policies aligned with international guidelines, such as those from the Intergovernmental Panel on Climate Change (IPCC). These policies often focus solely on emissions generated within national borders, as is the case with France's National Low-Carbon Strategy (SNBC). Focusing solely on territorial emissions in national carbon neutrality strategies may, however, shift environmental impacts to other life cycle stages occurring outside the country's borders. To provide a comprehensive assessment of environmental impacts, broader decision-support tools, such as Life Cycle Assessment (LCA), are crucial.

This is particularly important in energy systems, where hydrogen has emerged as a key component of the future energy mix. Hydrogen production technologies - such as Steam Methane Reforming (SMR) and electrolysis - each present distinct trade-offs. Currently, hydrogen is predominantly produced via SMR (>90%), largely due to its established market presence and lower production costs (1-3 $/kgH2). However, SMR brings significant GHG emissions (10-12 kgCO₂-eq / kgH2). Electrolysis, on the other hand, presents a lower-carbon alternative when powered by renewable energy, although it is currently more expensive (6 $/kgH2).

Literature shows that most existing hydrogen system optimizations focus on reducing costs and minimizing GHG emissions, often overlooking broader environmental considerations. This highlights the need for a multi-objective framework that addresses not only economic and GHG emission reductions but also the mitigation of other environmental impacts, thus ensuring a more sustainable approach to hydrogen network development.

This study proposes an integrated framework that couples multi-objective optimization for hydrogen networks with LCA. The optimization framework is developed using Mixed Integer Linear Programming (MILP) and an augmented epsilon-constraint method, implemented in the GAMS environment over a multi-year timeline (2022-2050). Evaluated hydrogen production pathways include electrolysis powered by renewable energy sources (wind, PV, hydro, and the national grid) and SMR with Carbon Capture and Storage (CCS). The LCA model is directly integrated into the optimization process, using the ReCiPe2016 method to calculate environmental indicators following a Well-to-Tank approach. A case study of hydrogen deployment in Auvergne-Rhône-Alpes, addressing industrial and mobility demand for hydrogen, will illustrate this framework.
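
To illustrate the epsilon-constraint mechanics (the study itself uses the augmented variant in GAMS), the sketch below sweeps an emission cap in a toy two-technology Pyomo model; the cost and GWP coefficients are placeholders, and a GLPK solver is assumed to be installed.

```python
import pyomo.environ as pyo

# placeholder technology data (assumed, for illustration only)
COST = {"electrolysis_pv": 6.0, "smr_ccs": 2.0}   # $/kg H2
GWP = {"electrolysis_pv": 2.0, "smr_ccs": 4.0}    # kgCO2-eq/kg H2
DEMAND = 1000.0                                    # kg H2 to supply

def solve(eps_gwp):
    """Minimise cost subject to a cap on total GWP (epsilon-constraint)."""
    m = pyo.ConcreteModel()
    m.x = pyo.Var(list(COST), domain=pyo.NonNegativeReals)
    m.demand = pyo.Constraint(expr=sum(m.x[t] for t in COST) >= DEMAND)
    m.gwp_cap = pyo.Constraint(
        expr=sum(GWP[t] * m.x[t] for t in COST) <= eps_gwp)
    m.cost = pyo.Objective(
        expr=sum(COST[t] * m.x[t] for t in COST), sense=pyo.minimize)
    pyo.SolverFactory("glpk").solve(m)   # assumes GLPK is installed
    return pyo.value(m.cost), {t: pyo.value(m.x[t]) for t in COST}

# sweeping the emission cap traces the cost-GWP Pareto front
for eps in (4000.0, 3000.0, 2500.0, 2000.0):
    cost, plan = solve(eps)
    print(f"GWP cap {eps:.0f}: cost {cost:.0f} $, plan {plan}")
```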

The current phase of the research focuses on a bi-criteria optimization framework that balances economic objectives with environmental indicators, considered individually, to identify correlated indicators. Future research will explore strategies to reduce dimensionality in multi-objective optimization (MOO) without compromising solution quality, ensuring that decisions are both efficient and environmentally robust.

Reference: [1] Luise, R. (2023). Développement par approche ascendante de méthodes et d'outils de conception de chaînes logistiques « hydrogène décarboné » : application au cas de la France. PhD thesis, Toulouse INP, 4 October 2023. https://theses.fr/2023INPT0083?domaine=theses



Streamlining Catalyst Development through Machine Learning: Insights from Heterogeneous Catalysis and Photocatalysis

Mitra Jafari, Julia Schowarte, Parisa Shafiee, Bogdan Dorneanu, Harvey Arellano-Garcia

Brandenburg University of Technology Cottbus-Senftenberg, Germany

Designing heterogeneous catalysts and optimizing reaction conditions present significant challenges. This process typically involves catalyst synthesis, optimization, and numerous reaction tests, which are not only energy- and time-intensive but also costly. Advances in machine learning (ML) have provided researchers with new tools to predict catalysts' behaviour, reaction conditions, and product distributions without the need for extensive laboratory experiments. Through correlation analysis, ML can uncover relationships between various parameters and catalyst performance. Predictive models, trained on existing data, can forecast the effectiveness of new materials, while data-driven insights help guide catalyst design and optimization. Automating the ML framework further streamlines this process, improving scalability and enabling rapid evaluation of a wider range of candidates, which accelerates the development of solutions to current challenges [1,2].

In this contribution, a proposed ML approach and its potential in catalysis (heterogeneous and photocatalysis) are explored by analysing datasets from different reactions, such as Fischer-Tropsch synthesis and pollutant degradation. These datasets are categorized based on descriptors such as catalyst formulation, pretreatment, characteristics, activation, and reaction conditions, with the goal of predicting reaction outcomes. Initially, the data undergo cleaning and labelling using one-hot encoding; subsequent steps include imputation and normalization. In addition, techniques such as Spearman correlation matrices, dendrograms, pair plots, and dimensionality reduction methods like PCA are applied. The datasets are then employed to train and test several models, including ensemble methods, regression techniques, and neural networks. Hyperparameters are tuned using GridSearchCV alongside cross-validation. Performance metrics such as R², RMSE, and MAE are used to assess model accuracy, and AIC is used for model selection, with a simple mean-value model or linear regression serving as the baseline for comparison.
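
A minimal scikit-learn version of the described preprocessing-plus-tuning workflow might look as follows; the dataset is synthetic and the descriptor names are invented.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# synthetic dataset with invented descriptor names
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "promoter": ["K", "Na", "K", "K", "Na", "K", "Na", "Na"] * 5,
    "temperature": rng.uniform(200, 350, 40),     # degC
    "pressure": rng.uniform(1, 30, 40),           # bar
})
df.loc[::7, "pressure"] = np.nan                  # simulate missing values
y = 0.2 * df["temperature"] + rng.normal(0, 5, 40)  # e.g. conversion [%]

prep = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["promoter"]),
    ("num", Pipeline([("impute", SimpleImputer()),
                      ("scale", StandardScaler())]),
     ["temperature", "pressure"]),
])
model = Pipeline([("prep", prep),
                  ("reg", GradientBoostingRegressor(random_state=0))])

search = GridSearchCV(model,
                      {"reg__n_estimators": [100, 300],
                       "reg__max_depth": [2, 3]},
                      cv=5, scoring="r2")
search.fit(df, y)
print(search.best_params_, f"CV R2 = {search.best_score_:.2f}")
```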

Finally, the prediction accuracy of each model is investigated, and the best-performing model is selected. The effect of different descriptors on the response has also been assessed to identify the parameters with the strongest influence on catalyst performance. Regarding photocatalysis, nonlinear behaviour was observed due to optimization-driven influences, likely because the published results consist solely of optimized data.

References

  1. Tang, Deqi, Rangsiman Ketkaew, and Sandra Luber. "Machine Learning Interatomic Potentials for Catalysis." Chemistry–A European Journal (2024): e202401148.
  2. Schnitzer, Tobias, Martin Schnurr, Andrew F. Zahrt, Nader Sakhaee, Scott E. Denmark, and Helma Wennemers. "Machine Learning to Develop Peptide Catalysts: Successes, Limitations, and Opportunities." ACS Central Science 10, no. 2 (2024): 367-373.


Life Cycle Design of a Novel Energy Crop “Sweet Erianthus” by Backcasting from Process Simulation Integrating Agriculture and Industry

Satoshi Ohara1, Yoshifumi Terajima2, Hiro Tabata3,4, Shoma Fujii5, Yasunori Kikuchi3,5

1Research Center for Advanced Science and Technology, LCA Center for Future Strategy, The University of Tokyo; 2Tropical Agriculture Research Front, Japan International Research Center for Agricultural Sciences; 3Presidential Endowed Chair for “Platinum Society”, The University of Tokyo; 4Research Center for Solar Energy Chemistry, Graduate School of Engineering Science, Osaka University; 5Institute for Future Initiatives, The University of Tokyo

Crops have been developed primarily for food production. Toward decarbonization, it is also essential to design and develop novel crops suitable for new application processes such as biofuels and green chemicals production through backcasting approaches. For example, modifying industrial crops through crossbreeding or genetic modification can change their unit yield, environmental tolerance, and raw material composition (i.e., sugars, starch, and lignocellulose). However, conventional energy crop improvement has been aimed only at high-unit yield with high fiber content, such as Energy cane and Giant Miscanthus, which contain little or no sugar, limiting their use to energy and lignocellulosic applications.

Sweet Erianthus was developed in Japan as a novel energy crop by crossbreeding Erianthus (wild plants with high biomass productivity even in poor environments) and Saccharum spp. hybrids (sugarcane with sugar storage ability). Erianthus has a deep root system to draw up nutrients and water from the deep layers of the soil, making it possible to cultivate crops with low fertilizer and water inputs even in farmland unsuitable for agriculture due to low rainfall or low nutrients and water near the surface. On the other hand, sugarcane accumulates sugars directly in the stalk. Microorganisms can easily convert extracted sugar juice into bioproducts such as ethanol and polylactic acid. Therefore, Sweet Erianthus presents a dual characteristic of both Erianthus and sugarcane.

In this study, we are tackling the design of optimal Sweet Erianthus crop conditions (unit yield and the compositional balance of sugars and fiber) by backcasting from simulations of the entire life cycle, considering sustainable agriculture, industrial productivity, environmental impact, and resource recycling. As options for industrial applications, ethanol fermentation, biomass combustion, power generation, and torrefaction to produce charcoal, biogas oil, and syngas were selected. Production potentials and energy inputs were calculated using previously reported simulation models (Ouchida et al., 2017; Leonardo et al., 2023). Specifically, the production potential of each energy product per unit area was simulated by multiplying conversion factors with three variables: unit yield Y [t/ha], sugar content S [wt%], and fiber content F [wt%]. Each variable was assumed not to exceed the range spanned by the various prototypes developed.
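
A minimal sketch of this production-potential calculation is shown below; the conversion factors are placeholders, not the values used in the study.

```python
# placeholder conversion factors, not the values used in the study
F_SUGAR = 7.0    # GJ recoverable per t of sugar (e.g. via ethanol)
F_FIBER = 15.0   # GJ recoverable per t of fiber (e.g. via combustion)

def energy_potential(Y, S, F):
    """Energy potential [GJ/ha] from unit yield Y [t/ha] and sugar and
    fiber contents S, F [wt%]."""
    return Y * (S / 100 * F_SUGAR + F / 100 * F_FIBER)

# compare hypothetical prototypes with different F/S ratios
for Y, S, F in [(80, 12, 14), (100, 6, 22), (120, 2, 28)]:
    print(f"Y={Y} t/ha, F/S={F / S:.1f}: "
          f"{energy_potential(Y, S, F):.0f} GJ/ha")
```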

The simulation results reveal optimal feedstock conditions that maximize energy productivity per unit area or minimize environmental impact; the fiber-to-sugar content (F/S) ratio was found to be especially important. This study presents a simulation-based crop design methodology in which practical crop design on the agricultural side is informed by simulations on the industrial side, which is expected to enable efficient development of new crops.

K. Ouchida et al., 2017, Integrated Design of Agricultural and Industrial Processes: A Case Study of Combined Sugar and Ethanol Production, AIChE Journal, 63(2), 560-581

L. Leonardo et al., 2023, Simulation-based design of regional biomass thermochemical conversion system for improved environmental and socio-economic performance, Comput. Aid. Chem. Eng., 52, 2363-2368



Reversible Solid Oxide Cells and Long-term Energy Storage in Residential Areas

Arthur Waeber, Dorsan Lepour, Xinyi Wei, Shivom Sharma, François Maréchal

EPFL, Switzerland

As environmental concerns intensify and energy demand rises, especially in residential areas, reversible Solid Oxide Cells (rSOC) stand out as a promising technology. Characterized by their reversibility, high electrical efficiency, and fuel flexibility, they also cogenerate high-quality heat. The smart operation of rSOC systems can present interesting opportunities for long-term energy storage, facilitating the penetration of renewable energies at different scales while continuously providing useful heat.

Although the implementation of energy storage systems in residential areas has already been extensively discussed in the literature, the focus is mainly on batteries, often omitting the seasonal dimension. This study aims to address this gap by investigating the technical and economic feasibility of rSOC systems in residential areas alongside various long-term storage options: hydrogen (H2), a hybrid tank (CH4/CO2), and ammonia (NH3).

Each of these molecules requires precise modeling, introducing specific constraints and impacting the rSOC system's performance in terms of electricity or heat output in different ways. To achieve this, the processes are first modeled in Aspen Plus to account for thermodynamic properties before being integrated into the Renewable Energy Hub Optimizer (REHO) framework.

REHO is a decision-support tool designed for sustainable urban energy system planning. It considers the endogenous resources of a specified area, various end-use demands (such as heating and mobility), and multiple energy carriers, including electricity, heat, and hydrogen. Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to facilitate a sound comparison of different storage solutions.

This analysis emphasizes the need for long-term storage technologies to support the penetration of decentralized electricity production. By providing tangible figures, such as CAPEX, storage tank sizes, and renewable energy installed capacity, it enables a fair comparison of the three main scalable long-term storage options. Additionally, it offers guidelines on the optimal storage conditions for each molecule, balancing energy efficiency and storage tank size. The role of rSOC as electricity storage technology and as heat producer for domestic hot water and/or space heating is also evaluated for the different storage options.



A Comparative Analysis of an Industrial Edge MLOps Prototype for ML Application Deployment at the Edge of the Process Industry

Fatima Rani, Lucas Vogt, Leon Urbas

Technische Universität Dresden, Germany

In the evolving Industry 4.0 revolution, combining the Artificial Intelligence of Things (AIoT) with edge computing represents a significant step forward in innovation and efficiency. This paper introduces a prototype for constructing an edge AI system utilizing the contemporary Machine Learning Operations (MLOps) concept (Rani et al., 2024 & 2023). Employing single-board computers such as the Raspberry Pi and Nvidia Jetson Nano as hardware, our methodology encompasses data ingestion and machine learning model deployment on edge devices (Antonini et al., 2022). Crucially, the MLOps pipeline is fully developed within the ecoKI platform, a pioneering research initiative focused on energy-saving solutions for Small and Medium-sized Enterprises (SMEs). We propose an MLOps pipeline that can be run as either multiple workflows or a single workflow, leveraging a REST API for interaction and customization through the FastAPI web framework in Python. This pipeline enables seamless data processing, model development, and deployment on edge devices. Moreover, real-time AI processing on edge devices enables even resource-limited hardware to handle tasks in areas such as predictive maintenance, process optimization, quality assurance, and supply chain management. Furthermore, a comparative analysis conducted with Edge Impulse validates the effectiveness of our approach, demonstrating how optimized ML algorithms can be successfully deployed in the process industry (Janapa Reddi et al., 2023). Finally, this study aims to provide a blueprint for advancing edge AI development in the process industry by exploring AI techniques suited to resource-limited environments and addressing key challenges such as ML algorithm optimization and computational power.
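
A minimal sketch of the REST-driven pipeline idea using FastAPI is shown below; the endpoint, request schema, and step names are illustrative, not the ecoKI API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="edge-mlops-sketch")

class PipelineRequest(BaseModel):
    dataset_uri: str            # where the edge device ingests data from
    model_name: str = "baseline"

@app.post("/pipeline/run")
def run_pipeline(req: PipelineRequest):
    # In a real deployment each step would be its own workflow stage
    # (possibly distributed across devices): ingestion, preprocessing,
    # training or model loading, and deployment to the edge runtime.
    steps = ["ingest", "preprocess", "train", "deploy_to_edge"]
    return {"model": req.model_name, "dataset": req.dataset_uri,
            "executed": steps, "status": "ok"}

# launch (module name is hypothetical): uvicorn sketch:app --port 8000
```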

References

Rani, F., Chollet, N., Vogt, L., & Urbas, L. (2024). Industrial Edge MLOps: Overview and Challenges. Computer Aided Chemical Engineering, 53, 3019-3024.

Rani, F., Khaydarov, V., Bode, D., Hasan, I. H., & Urbas, L. (2023). MLOps Practice: Overcoming the Energy Efficiency Gap, Empirical Support Through ecoKI Platform in the Case of German SMEs. PAC - Protection, Automation Control, World Global Conference 2023.

Antonini, M., Pincheira, M., Vecchio, M., & Antonelli, F. (2022, May). Tiny-MLOps: A framework for orchestrating ML applications at the far edge of IoT systems. In 2022 IEEE international conference on evolving and adaptive intelligent systems (EAIS) (pp. 1-8). IEEE.

Janapa Reddi, V., Elium, A., Hymel, S., Tischler, D., Situnayake, D., Ward, C., ... & Quaye, J. (2023). Edge Impulse: An MLOps platform for tiny machine learning. Proceedings of Machine Learning and Systems, 5.

Acknowledgments: This work was Funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under the grant number 03EN2047C.



Energy-Water Nexus Resilience Analysis Using an Integrated Resource Allocation Approach

Hesan Elfaki1, Dhabia Al-Mohannadi2, Mohammad Lameh1

1Texas A&M University, United States of America; 2Hamad Bin Khalifa University, Qatar

Power and water systems are strongly interconnected through exchanged flows of water, electricity, and heat, which are fundamental to maintaining continuous operation and providing the functional services that meet demand. These systems are highly vulnerable to climate stressors, which can disrupt their operation. As the services delivered by these systems are vital for community development across all sectors, it is essential to create reliable frameworks and effective methods to assess and enhance the resilience of the energy-water nexus to climate impacts.

This work presents a macroscopic, high-level representation of the interconnected nexus system, utilizing a resource allocation model to capture the interactions between the power and water subsystems. The model is used to assess the performance of the system under various climate impact scenarios, to determine the peak demands the system can withstand, and to quantify losses of functional services, thereby revealing system vulnerabilities. Resilience metrics are incorporated to interpret these results and characterize nexus performance. The overall method is generic, and its capabilities will be demonstrated through a case study on the energy-water nexus in the Gulf Cooperation Council (GCC) region.



Technoeconomic Analysis of a Novel Amine-Free Direct Air Capture System Integrated with HVAC

Yasser Abdellatif1,2, Ikhlas Ghiat1, Riham Surkatti2, Yusuf Bicer1, Tareq Al-Ansari1,2, Abdulkarem I. Amhamed1,3

1Hamad Bin Khalifa University College of Science and Engineering, Qatar; 2Qatar Environment and Energy Research Institute (QEERI), Doha, Qatar; 3Corresponding author's email: aamhamed@hbku.edu.qa

The increasing demand for Direct Air Capture (DAC) technologies has been driven by the need to mitigate rising CO2 levels and address climate change. However, DAC systems face challenges, particularly in humid environments, where high humidity substantially increases the energy required for regeneration. Conventional CO2 physisorption is often hindered by competitive water adsorption, which reduces system efficiency and increases energy demand. Addressing these limitations is crucial for advancing DAC technology and improving its commercial viability.

This study proposes a novel DAC system integrated with an Air Handling Unit (AHU) to manage these challenges. A key feature of the system is the incorporation of a silica gel wheel for air dehumidification prior to physisorption. This pre-treatment step significantly enhances the performance of physisorbents by reducing water vapor in the air, optimizing the CO2 adsorption process. As a result, physisorbents can compete better with conventional chemisorbents, which benefit from water co-adsorption but suffer from limitations such as material degradation and higher energy demands. The study focuses on two adsorbents, NbOFFIVE and SBA-15 functionalized with TEPA, chosen for their promising CO2 capture properties. The system was tailored to the AHU of Doha Tower, a high-rise in a hot, humid climate. The silica gel wheel dehumidifies return air before it enters the CO2 capture stage. The air is then cooled by the existing AHU system to create optimal conditions for adsorption. After CO2 capture, the air is reheated using the AHU's heater to maintain indoor temperatures. The water adsorbed in the silica gel is desorbed using the CO2- and water-free airstream, allowing the system to deliver the required humidity range for indoor areas before supplying the air to the building. This ensures both air quality and operational efficiency.

This integrated approach offers significant advantages in energy savings and efficiency. The use of silica gel prior to physisorption reduced energy requirements by 82% for NbOFFIVE and 39% for SBA-15/TEPA, compared to a DAC-HVAC system without silica gel dehumidification. Physisorbents generally exhibit lower heats of adsorption than chemisorbents, further reducing the system's overall energy demand. The removal of excess moisture also minimizes the energy required for water desorption and addresses key drawbacks of amines, such as instability in indoor environments. Additionally, this approach lowers the cooling load by eliminating the water condensation typically managed by the HVAC system. These factors were evaluated in a technoeconomic analysis, where they played a crucial role in reducing operational costs. Utilizing the existing AHU infrastructure further reduces capital expenditure (CAPEX), making this system a highly attractive solution for large-scale CO2 capture applications.



Computer-Aided Molecular Design for Citrus and Coffee Wastes Valorisation

Giovana Correia de Assis Netto1, Moisés Teles dos Santos1, Vincent Gerbaud2

1University of São Paulo, Brazil; 2Laboratoire de Génie Chimique, France

Brazil is the world's largest producer of both coffee and oranges. These agro-industrial processes generate large quantities of waste, which is typically discarded in landfills, mixed with animal feed, or incinerated. Such practices not only pose environmental issues but also fail to fully exploit the economic potential of these residues. Brazilian coffee processing predominantly employs the dry method, wherein the coffee fruit is dried and dehulled, resulting in coffee husk as the primary waste (18% w/w of fresh fruit). Subsequently, green coffee beans are roasted, generating an additional residue known as silverskin (4.3% w/w of fresh fruit). Finally, roasted and ground coffee undergoes extraction, resulting in spent coffee grounds (91% w/w of ground coffee). Altogether, these residues can account for up to 99% of the coffee fruit's mass. Similarly, Brazil leads global orange juice production, which generates orange peel waste comprising 50-60% of the fruit's mass.

Coffee and orange peel wastes contain valuable compounds that can be extracted or produced via biological or chemical conversions, making these residues potential sources of chemical platforms. Such chemical platforms can be used as molecular building blocks, with multiple functional groups that can be functionalised into useful chemicals. A notable example is furfural, a key bio-based chemical platform that serves as a precursor for various chemicals, offering an alternative to petroleum-based products. Furfural is usually obtained from xylose dehydration and purified by extraction with organic solvents, such as toluene or methyl isobutyl ketone, followed by distillation.

The objective of this work is to design alternative solvents for furfural extraction from aqueous solutions using Computer-Aided Molecular Design (CAMD). A comprehensive literature review identified chemical platforms that can be produced from coffee and orange residues. These molecular structures were then used as molecular building blocks in the chemical library of an in-house CAMD tool, which employs molecular graphs for representing and modifying chemical structures, group contribution methods for property estimation, and a genetic algorithm as the search procedure. The target properties for the screening included Kow (as a measure of toxicity), enthalpy of vaporisation, melting point, boiling point, flash point, and Hansen solubility parameters. A further 31 properties, including EHS indicators, were also calculated for reference.

From the initial list of 40 building block families, 19 were identified in coffee wastes and 20 in orange wastes. Among these, 13 building blocks are common to both types of residue and were evaluated as molecular fragments to design candidate solvents for furfural separation: furoate, geranyl, glucaric acid, glutamic acid, hydroxymethylfurfural, hydroxypropionic acid, levulinic acid, limonene, 5-methylfurfural, oleic acid, succinic acid, glycerol, and furfural itself. The results demonstrate that molecular structures derived from citrus and coffee residues have the potential to produce solvents with properties comparable to those of toluene. The findings are promising as they represent an advancement over the use of toluene, a fossil-derived solvent, enhancing the sustainability of furfural extraction and avoiding the use of non-renewable chemicals in the downstream processes of agro-based biorefineries.
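
The genetic-algorithm search over fragment combinations can be sketched in miniature as below; the per-fragment property values and the single-property fitness are invented placeholders standing in for the group-contribution models and multi-property screening of the in-house tool.

```python
import random

# placeholder "solubility parameter" per fragment [MPa^0.5], invented
DELTA = {"furoate": 21.0, "geranyl": 16.5, "levulinic acid": 23.5,
         "limonene": 16.3, "5-methylfurfural": 22.4, "oleic acid": 17.5,
         "succinic acid": 27.9, "glycerol": 36.2}
TARGET = 18.2   # aim near toluene-like solvent behaviour (illustrative)

def fitness(pair):
    mean = sum(DELTA[f] for f in pair) / len(pair)
    return -abs(mean - TARGET)          # closer to the target = fitter

random.seed(0)
pop = [random.sample(sorted(DELTA), 2) for _ in range(8)]
for _ in range(20):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                   # elitist selection
    children = []
    for _ in range(4):                  # crossover + mutation
        a, b = random.sample(parents, 2)
        child = [random.choice(a), random.choice(b)]
        if random.random() < 0.3:
            child[random.randrange(2)] = random.choice(sorted(DELTA))
        children.append(child)
    pop = parents + children
print("fittest fragment pair:", max(pop, key=fitness))
```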



Introduction of carbon capture technologies in industrial symbioses for decarbonization

Sydney Thomas, Marianne Boix, Stéphane Negny

Laboratoire de Genie Chimique, Toulouse INP, CNRS, Université Paul Sabatier, France

Climate change is a consequence of human activities, with industrial activity being one of the primary sources of greenhouse gas (GHG) emissions. It is therefore imperative to drastically reduce emissions from the industrial sector in order to effectively address climate change. This endeavor will necessitate multiple actions aimed at enhancing both sufficiency and efficiency.

Eco-industrial parks are among the viable options for increasing efficiency. They operate through the collaboration of industries that choose to cooperate to mutualize or exchange materials, energy, or services. By optimizing these flows, it is possible to reuse a fraction of materials, thus reducing waste and fossil fuel consumption, thereby decreasing GHG emissions.

This study is based on a real eco-industrial park located in South Korea, where some companies can produce steam at different levels while others have a demand for steam (Kim et al., 2010). However, this work also pertains to a reindustrialization project in France, necessitating that parameters be adapted to French conditions while striving for a general applicability that may extend to other countries. One of the preliminary solutions for reducing GHG emissions involves optimizing the steam network among companies. Additionally, it is feasible to implement carbon capture solutions to mitigate the impact of fuel consumption, although these techniques may contribute to other forms of pollution; consequently, while they reduce GHG emissions, they may inadvertently increase other types of pollution. The ultimate objective is to optimize the park using a systemic approach.

In this analysis, carbon capture modules are modeled and integrated into an optimization model for steam exchanges previously developed by Mousqué et al. (2018). The multi-period model utilizes a multi-criteria mixed-integer linear programming (MILP) approach, with constraints corresponding to material and energy balances as well as thermodynamic equations. Three criteria are considered to assess the optimal organization: cost, greenhouse gas (GHG) emissions, and pollution from amines. Subsequently, an epsilon-constraint strategy is employed to delineate the Pareto front. Finally, the TOPSIS method is used to determine the most advantageous solution.
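
The final ranking step is standard enough to sketch generically; the Pareto points and weights below are illustrative, not the study's results.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix: (alternatives x criteria); benefit[j] is True if larger is
    better for criterion j, False for cost-type criteria.
    """
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)              # closeness in [0, 1]

# Pareto solutions scored on (cost, GHG, amine pollution), all cost-type
pareto = np.array([[100.0, 50.0, 5.0],
                   [104.0, 30.0, 8.0],
                   [110.0, 20.0, 2.0]])
score = topsis(pareto, weights=np.array([0.4, 0.4, 0.2]),
               benefit=np.array([False, False, False]))
print("closeness:", score.round(3), "-> best:", int(score.argmax()))
```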

The preliminary findings indicate that capture through adsorption holds significant promise: compared to the base-case scenario, this method has the potential to reduce CO2 emissions by a factor of three while increasing cost by only 0.4% per year. This approach may eliminate the need for amines in carbon capture and reduce energy requirements compared to absorption-based capture. However, further research is needed to confirm these results.



Temporal Decomposition Scheme for Designing Large-Scale CO2 Supply Chains Using a Neural-Network Based Model for Forecasting CO2 Emissions

Jose A. Álvarez-Menchero, Ruben Ruiz-Femenia, Raquel Salcedo-Díaz, Isabela Fons Moreno-Palancas, Jose A. Caballero

University of Alicante, Spain

The battle against climate change and the search for innovative solutions to mitigate its effects have become the focus of researchers' attention. One potential approach to reducing the impacts of global warming is the design of a Carbon Capture and Storage Supply Chain (CCS SC), as proposed by D'Amore [1]. However, the high complexity of the model requires exploring alternative ways to optimise it.

In this work, a CCS multi-period supply chain for Europe, based on that presented by D’Amore [1], is designed. Data on CO2 emissions have been sourced from the EDGAR database [2], which includes information spanning the last 50 years. Since this problem involves optimising the cost and operation decisions over a 10-year time horizon, it would be advisable to forecast carbon dioxide emissions to enhance the reliability of the data used. For this purpose, a neural-network based model is implemented for forecasting [3]. The chosen model is N-BEATS.
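
As an indication of what the forecasting step can look like in practice, the sketch below trains an N-BEATS model with the darts library on a synthetic series standing in for the EDGAR data; the hyperparameters are illustrative, not the study's configuration.

```python
import numpy as np
from darts import TimeSeries
from darts.models import NBEATSModel

# synthetic yearly "emissions" standing in for the EDGAR series
years = np.arange(1970, 2021)
emissions = 1000 + 8 * (years - 1970) + 30 * np.sin(0.3 * (years - 1970))
series = TimeSeries.from_values(emissions.astype(np.float32))

model = NBEATSModel(input_chunk_length=20, output_chunk_length=10,
                    n_epochs=50, random_state=0)
model.fit(series)
forecast = model.predict(n=10)      # 10-year horizon of the CCS SC model
print(forecast.values().ravel().round(1))
```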

Furthermore, a temporal decomposition scheme is used to address the intractability issues of the model. The selected method is Lagrangean decomposition, which has been employed in other high-complexity works, demonstrating strong performance and significant computational savings [4,5].

References

[1] D’Amore, F., Bezzo, F., 2017. Economic optimisation of European supply chains for CO2 capture, transport and sequestration.

[2] JRC, 2021. Emission Database for Global Atmospheric Research (EDGAR). Joint Research Centre, European Commission (Available at:). https://edgar.jrc.ec.europa.eu/index.php.

[3] Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M., 2019. Optuna: A next-generation hyperparameter optimization framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

[4] Jackson, J. R., Grossmann, I. E., 2003. Temporal decomposition scheme for nonlinear multisite production planning and distribution models.

[5] Goel, V., Grossmann, I. E., 2006. A novel branch and bound algorithm for optimal development of gas fields under uncertainty in reserves.



Dynamic simulation of turquoise hydrogen production using a regenerative non-catalytic pyrolysis reactor under various heat sources

Jiseon Park1,2, Youngjae Lee1, Uendo Lee1, Won Yang1, Jongsup Hong2, Seongil Kim1

1Korea Institute of Industrial Technology, Korea, Republic of (South Korea); 2Yonsei university, Korea, Republic of (South Korea)

Hydrogen is widely regarded as a key energy carrier for reducing carbon emissions and dependence on fossil fuels. As a result, several hydrogen production routes have been developed, commonly labelled gray, blue, green, and turquoise. Gray hydrogen is produced from natural gas but generates a large amount of CO₂ as a byproduct. Blue hydrogen adds CO2 capture and storage to overcome this problem. Green hydrogen is produced through water electrolysis powered by renewable energy and emits almost no CO2; however, it faces challenges such as intermittent energy supply and high production costs.

In turquoise hydrogen production, methane pyrolysis generates hydrogen and solid carbon at high temperatures. Unlike the other hydrogen production methods, this process does not emit carbon dioxide and thus offers environmental benefits. Notably, non-catalytic methane pyrolysis avoids catalyst deactivation issues: while catalytic methane pyrolysis increases operational complexity and cost because of regular catalyst replacement, the non-catalytic process sidesteps these challenges. However, non-catalytic processes require maintaining much higher reactor temperatures than steam methane reforming or catalytic methane pyrolysis. Consequently, optimizing the heat supply is critical to maintaining high temperatures.

This study explores various methods of supplying heat to sustain the high temperature inside the reactor. We propose a new method for turquoise hydrogen production based on a regenerative pyrolysis reactor to optimize the heat supply. In this system, as methane pyrolysis begins in one reactor, the endothermic reaction causes its temperature to decrease. Meanwhile, another reactor is heated by combusting hydrogen, ammonia, or methane to gradually raise its temperature. This system enables a continuous heat supply and efficient use of thermal energy.

Therefore, this study conducts dynamic simulations to optimize a regenerative non-catalytic pyrolysis system for continuous turquoise hydrogen production. Dynamic analysis of the reactor interior is used to determine optimal operating conditions that ensure efficient and continuous hydrogen production. Additionally, the study compares hydrogen, ammonia, and methane as heat sources to determine the most effective fuel for maintaining high reactor temperatures. This comparison employs life cycle assessment (LCA) to comprehensively evaluate the energy consumption and CO2 emissions of each fuel source.

The integration of dynamic analysis with LCA provides critical insights into the environmental and operational efficiencies of various heat supply methods used in the regenerative turquoise hydrogen production system. This approach enables the quantification of those impacts and supports the identification of the most suitable fuel. Ultimately, this research contributes to the development of more sustainable and efficient hydrogen production technologies, highlighting the potential for significant reductions in carbon emissions.



Empowering Engineering with Machine Learning: Hybrid Application to Reactor Modeling

Felipe CORTES JARAMILLO1, Julian Per BECKER1, Benoit CELSE1, Thibault FANEY1, Victor COSTA1, Jean-Marc COMMENGE2

1IFP Energies nouvelles, France; 2Université de Lorraine, CNRS, LRGP, France

Hydrocracking is a chemical process that breaks down heavy hydrocarbons into lighter, more valuable products, using feedstocks such as vacuum gas oil (VGO) or renewable sources like vegetable oil and animal fat. Although existing hydrocracking models, developed over years of research, can achieve high accuracy and robustness once calibrated and validated [1-3], significant challenges persist. These include the inherent complexity of the feedstocks (containing billions of molecules), high computational costs, and limitations in analytical techniques, particularly in differentiating between similar compounds like iso and normal alkanes. These challenges result in extensive experimentation, higher costs, and considerable discrepancies between physics-based model predictions and actual measurements.

To overcome these limitations, effective approximations are needed that integrate both empirical data and established process knowledge. A preliminary investigation into purely data-driven models revealed difficulties in capturing the fundamental behavior of the hydrocracking reaction, motivating the exploration of a hybrid modeling approach. Among various hybrid modeling frameworks [4], physics-informed machine learning was selected after in-depth examination, as it can leverage well-established first-order principles, represented by ordinary differential equations (ODEs), to guide data-driven models. This method can improve approximations of real-world reactions, even when the first-order principles do not perfectly match the underlying, complex processes [5].

This work introduces a novel hybrid modeling approach that employs physics-informed neural networks (PINNs) to address the challenges of hydrocracking reactor modeling. The performance is compared against a traditional kinetic model and a range of purely data-driven models, using data from 120 continuous pilot plant experiments as well as simulated scenarios based on the existing first-order behavior models developed at IFPEN [2].
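To make the idea concrete, the following minimal physics-informed training sketch is illustrative only (not IFPEN's model): a neural network C(t) is fitted to a few noisy "measurements" while the residual of an assumed first-order rate law, dC/dt + kC = 0, regularizes the fit on collocation points.

# Minimal PINN sketch (illustrative; rate constant and data are invented).
import torch

torch.manual_seed(0)
k = 1.5                                        # assumed first-order rate constant
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

t_data = torch.tensor([[0.0], [0.5], [1.0]])
c_data = torch.exp(-k * t_data) + 0.02 * torch.randn(3, 1)  # noisy samples
t_phys = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    opt.zero_grad()
    # Data term: match the sparse experimental points.
    loss_data = ((net(t_data) - c_data) ** 2).mean()
    # Physics term: penalize the ODE residual on collocation points.
    c = net(t_phys)
    dc = torch.autograd.grad(c, t_phys, torch.ones_like(c), create_graph=True)[0]
    loss_phys = ((dc + k * c) ** 2).mean()
    (loss_data + loss_phys).backward()
    opt.step()

print("C(2.0) from the network:", net(torch.tensor([[2.0]])).item())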

Multiple criteria, including accuracy, trend analysis, extrapolation capability, and model development time, were used to evaluate the methods. In all scenarios, the proposed approach demonstrated a performance improvement over both the kinetic and purely data-driven models. The results highlight that constraining data-driven models, such as neural networks, with known first-order principles enhances robustness and accuracy. This hybrid methodology offers a new avenue for modeling uncertain reactor processes by effectively combining general a priori knowledge with data-driven insights.

References

[1] Chinesta, F., & Cueto, E. (2022). Empowering engineering with data and AI: a brief review.

[2] Becker, P. J., & Celse, B. (2024). Combining industrial and pilot plant datasets via stepwise parameter fitting. Computer Aided Chemical Engineering, 53, 901-906.

[3] Becker, P. J., Serrand, N., Celse, B., Guillaume, D., & Dulot, H. (2017). Microkinetic model for hydrocracking of VGO. Computers & Chemical Engineering, 98, 70-79.

[4] Bradley, W., et al. (2022). Integrating first-principles and data-driven modeling. Computers & Chemical Engineering, 166, 107898.

[5] Tai, X. Y., Ocone, R., Christie, S. D., & Xuan, J. (2022). Hybrid ML optimization for catalytic processes. Energy and AI, 7, 100134.



Cascade heat pumps as an enabler for solvent-based post-combustion capture in a cement plant

Sarun Kumar Kochunni1, Rahul Anantharaman2, Armin Hafner1

1Department of Energy and Process Engineering, NTNU; 2SINTEF Energy Research

Cement production is a significant source of global CO₂ emissions, contributing about 7-8% of the world's total emissions. This is mainly due to the energy-intensive process of producing clinker (the primary component of cement) and the chemical reaction called calcination, which releases CO₂ when limestone (calcium carbonate) is heated. Around 60% of these direct emissions arise from calcination, while the remaining 40% result from fuel combustion. Thus, capturing CO₂ is essential for decarbonising the industry. Among the various capture techniques, solvent-based post-combustion CO₂ capture stands out due to its maturity and compatibility with existing cement plants. However, this method demands significant heat for solvent regeneration, which is often scarce in many cement facilities that require substantial heat for drying raw materials. Typically, 30-50% of the heat needed for solvent regeneration can be sourced from the excess heat generated within the cement plant. Additional heat can be supplied by burning fuels to create steam or by employing heat pumps to upgrade the low-grade heat available from the capture facility or the subsequent CO₂ liquefaction process.

This study systematically incorporates cascade heat pumps to harness waste heat from the CO₂ liquefaction process for solvent regeneration. The proposed method replaces the conventional ammonia-based refrigeration system for CO₂ liquefaction with a cascade high-temperature heat pump (HTHP), which provides refrigeration for liquefaction and high-temperature heat for solvent regeneration. The system liquefies CO₂ in the evaporator and applies the heat rejected by the condenser to solvent regeneration. In this cascade HTHP, ammonia or propane is used in the lower cycle, while butane or pentane operates in the upper cycle, targeting operating temperatures of 240 K for liquefaction and 395 K for heat supply.

The system’s thermodynamic performance is evaluated using Aspen HYSYS simulations across different refrigerant configurations in the integrated setup. The findings indicate that an HTHP system using ammonia and pentane can deliver up to 12.5% of the heat needed for solvent regeneration, with a net coefficient of performance (COP) of 2.0. This efficiency exceeds that of other low-temperature heat sources for solvent regeneration. While adding a pentane cycle raises power consumption, the system remains energy-efficient overall, highlighting its potential for decarbonising cement production through enhanced CO₂ capture and integration strategies.
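For intuition only, a back-of-envelope estimate of a cascade heating COP can be written as below; the intermediate temperature and second-law efficiency are assumed values for illustration, not results from the HYSYS study.

# Back-of-envelope cascade heat-pump COP estimate (illustrative only).
# Stage COPs are taken as a fraction eta of the Carnot heating COP; eta and
# the intermediate temperature are assumptions, not data from the study.
T_cold, T_mid, T_hot = 240.0, 320.0, 395.0  # K (liquefaction / cascade / reboiler)
eta = 0.5                                    # assumed second-law efficiency

def carnot_heating_cop(t_low, t_high):
    return t_high / (t_high - t_low)

cop1 = eta * carnot_heating_cop(T_cold, T_mid)  # lower cycle (e.g. ammonia)
cop2 = eta * carnot_heating_cop(T_mid, T_hot)   # upper cycle (e.g. pentane)

# Cascade heat balance: the lower condenser feeds the upper evaporator.
cop_overall = 1.0 / ((1.0 - 1.0 / cop2) / cop1 + 1.0 / cop2)
print(f"estimated overall heating COP ~ {cop_overall:.2f}")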



Agent-Based Simulation of Integrated Process and Energy Supply Chains: A Case Study on Biofuel Production

Farshid Babaei, David B. Robins, Robert Milton, Solomon F. Brown

School of Chemical, Materials and Biological Engineering, University of Sheffield, United Kingdom

Despite the potential benefits of decision-level integration for process and energy supply chains, these systems are traditionally assessed and optimised by incorporating simplified models of unit operations within a spatially distributed network. Such organisational-level integration can hardly be achieved without leveraging Information and Communication Technology (ICT) tools and concepts. In this work, a multi-scale agent-based model is proposed to facilitate the transition from traditional practices to coordinated supply chains.

The proposed multi-agent system framework incorporates the different organisational dimensions of process and energy supply chains, including raw material suppliers, rigorously modelled processing plants, and consumers. Furthermore, the overall behaviour of each agent type and its interactions with other agents are implemented in the model. This allows for the simultaneous assessment and optimisation of process and supply chain decisions. By integrating detailed process models into supply chain operation, the devised framework goes beyond existing studies, in which the behaviour of lower decision levels is neglected.

To demonstrate the application of the proposed multi-agent system, a case study of a biofuel supply chain is presented that captures the underlying dynamics of the supply chain network. The actors involved, comprising farmers, biorefineries, and end-users, seek to increase their payoffs given their interdependencies and intra-organisational variables. The example features distributed and asynchronous decision-making, competition between same-echelon actors, and incomplete information. The aggregated payoff of the supply network is optimised under different scenarios, and the fractions of capacity allocated to biofuel production and consumption, as well as biofuel production variables, are obtained. According to the results, unit-operation-level decisions, along with the participants’ capacity allocation options, significantly influence supply chain performance. In conclusion, the proposed research provides a more realistic view of multi-scale coordination schemes in process and energy supply chains.



Steel Plant Electrification: A Pathway to Sustainable Production and Carbon Reduction

Rachid Klaimi2, Sabla Alnouri1, Vladimir Stijepovic3, Aleksa Miladinovic3, Mirko Stijepovic3

1Qatar University, Qatar; 2Notre Dame University; 3University of Belgrade

Traditional steelmaking processes are energy-intensive and rely heavily on fossil fuels, contributing significant greenhouse gas emissions. By adopting electrification technologies, such as electric boilers and compressors, particularly when powered by renewable energy, steel plants can reduce their carbon footprint, enhance process flexibility, and lower long-term operational costs. This transition also aligns with increasing regulatory pressure and market demand for greener practices, positioning companies for a more competitive and sustainable future. This work investigates the potential of replacing the conventional, fossil-fuel-fired steam crackers of a steel plant with electrically driven heating systems powered by renewable energy sources. The overall aim is to significantly lower greenhouse gas emissions by integrating electric furnaces and heat pumps into the steel production process. The study evaluates the potential carbon savings from the integration of solar energy in a steel plant with a production capacity of 300,000 tons per month. The solar field required for this integration was found to span an area of 40,764 m². By incorporating solar power into the plant’s energy mix, the analysis reveals a significant reduction in carbon emissions, with an estimated saving of 2,831 tons of CO₂ per year.



INCEPT: Interpretable Counterfactual Explanations for Processes using Timeseries comparisons

Omkar Pote3, Dhanush Majji3, Abhijit Bhakte1, Babji Srinivasan2,3, Rajagopalan Srinivasan1,3

1Department of Chemical Technology, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics and Risk Technology, Indian Institute of Technology Madras, Chennai 600036, India

Advancements in sensors, storage technologies, and computational power have unlocked the potential of AI for process monitoring. AI-based methods can successfully address complex process monitoring involving multivariate time-series data. While their classification performance in process monitoring is very good, the decision-making logic of AI models is often difficult for operators and other plant personnel to interpret. In this paper, we propose a novel approach, based on counterfactual explanations, for explaining the results of AI-based process monitoring methods to plant operators.

Explainable AI (XAI) has emerged as a promising field of research that aims to address these challenges by enhancing the interpretability of AI. XAI has gained significant attention in chemical engineering, but much of this research focuses on explainability for tabular and image data. Most XAI methods provide explanations at the sample level, i.e., they assume that a single data point is inherently interpretable, which is an unrealistic assumption for dynamic systems such as chemical processes. There has been limited exploration of explainability for systems characterized by multivariate time series. To address this gap, we propose a novel XAI method that provides counterfactual explanations accounting for the multivariate time-series nature of process data.

A counterfactual explanation is the "smallest change to the feature values that alters the prediction to a predefined output." Ates et al. (2021) developed a method for counterfactual explainability of multivariate time series. Here, we adapt this method and extend it to account for autocorrelation and cross-correlation, which are essential in process monitoring. Our proposed method, called INterpretable Counterfactual Explanations for Processes using Time series comparisons (INCEPT), generates a counterfactual explanation through a four-step methodology. Consider an online process sample given to a neural-network-based fault identification model. The neural network uses a window of data around this sample to predict the state of the process (normal, fault 1, etc.). First, the time-series data is transformed into principal component (PC) space using dynamic PCA to address autocorrelation and cross-correlation. Second, the nearest match from the training data for the desired class is identified in this space using Euclidean distance. Third, a counterfactual sample is generated by adjusting key variables that increase the likelihood of the desired class, guided by a greedy algorithm. Finally, the counterfactual is transformed back to the original space, and the model recalculates the class probabilities until the desired class is achieved. The adjustments to the process variables needed to produce the counterfactual are used as the basis for generating explanations.
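A simplified sketch of this search logic is given below; the toy data, the logistic-regression classifier, and the swap rule are illustrative stand-ins for the paper's neural network and greedy algorithm, not the authors' implementation.

# Schematic DPCA-based counterfactual search (simplified illustration).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def windows(X, w):
    """Stack lagged copies of the multivariate series (DPCA data matrix)."""
    return np.array([X[i:i + w].ravel() for i in range(len(X) - w + 1)])

# Toy 3-variable process: class 0 = normal, class 1 = fault (shift in var 2).
w = 10
normal = rng.normal(0.0, 1.0, (300, 3))
fault = rng.normal(0.0, 1.0, (300, 3)); fault[:, 2] += 3.0
Xn, Xf = windows(normal, w), windows(fault, w)
X = np.vstack([Xn, Xf]); y = np.r_[np.zeros(len(Xn)), np.ones(len(Xf))]

pca = PCA(n_components=5).fit(X)          # "dynamic PCA" on lagged windows
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X), y)

# Steps 1-2: for a faulty query window, find the nearest normal training
# window in PC space (Euclidean distance).
query = Xf[0]
Zn = pca.transform(Xn)
zq = pca.transform(query[None, :])
target = Xn[np.argmin(np.linalg.norm(Zn - zq, axis=1))]

# Steps 3-4: greedily replace one variable's trajectory at a time by the
# nearest match, keeping the swap that most raises P(normal), until the
# predicted class flips.
cf = query.copy()
for _ in range(3):                         # at most one swap per variable
    if clf.predict(pca.transform(cf[None, :]))[0] == 0:
        break
    best_gain, best_var = -np.inf, None
    for v in range(3):
        trial = cf.copy()
        trial[v::3] = target[v::3]         # columns of variable v in the window
        gain = clf.predict_proba(pca.transform(trial[None, :]))[0, 0]
        if gain > best_gain:
            best_gain, best_var = gain, v
    cf[best_var::3] = target[best_var::3]

changed = [v for v in range(3) if not np.allclose(cf[v::3], query[v::3])]
print("variables adjusted in the counterfactual:", changed)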

The effectiveness of the proposed method will be demonstrated using the Tennessee Eastman case study. The generated explanations can aid model developers in debugging and enhancing models. They can also assist plant operators in understanding the model’s predictions and gaining actionable insights.

References:

[1] Bhakte, A., et al., 2024. Potential for Counterfactual Explanations to Support Digitalized Plant Operations.

[2] Bhakte, A., et al., 2022. An explainable artificial intelligence-based approach for interpretation of fault classification results from deep neural networks.

[3] Ates, E., et al., 2021. Counterfactual Explanations for Multivariate Time Series.



Dynamic Simulation of an Oxy-Fuel Cement Pyro-processing Section

Marc-Daniel Stumm1, Tom Dittrich2, Jost Lemke2, Eike Cramer1, Alexander Mitsos3,1,4

1Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany; 2thyssenkrupp Polysius GmbH, 59269 Beckum, Germany; 3JARA-ENERGY, 52056 Aachen, Germany; 4Institute of Climate and Energy Systems, Energy Systems Engineering (ICE-1), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany

Cement production accounts for 7 % of global greenhouse gas emissions [1]. Tackling these emissions requires carbon capture and storage technologies [1], of which an oxy-fuel combustion process followed by CO2 compression is economically promising [2]. The oxy-fuel process substitutes air with a mixture of O2 and CO2 as the combustion medium. The O2-CO2 mixture requires partial recirculation of the flue gas [3], which increases the complexity of the process dynamics and can lead to inefficient operating conditions, thus necessitating process control. We propose the use of model-based control and state estimation schemes. As the recycle couples the dynamics of the whole pyro-processing section, the process model must include the entire section, namely the preheater tower, precalciner, rotary kiln, and clinker cooler. Literature on dynamic cement production models is scarce and focuses on modeling individual units, e.g., the rotary kiln [4,5] or the precalciner [6]. We develop a first-principles dynamic model of the full pyro-processing section, including the preheater tower, precalciner, rotary kiln, and clinker cooler as submodels. The states of the precalciner, rotary kiln, and clinker cooler vary significantly in the axial direction; thus, the corresponding models are spatially discretized using the finite volume method. Parameter values for the model are taken from the literature [6]. We implement the models in Modelica as an aggregation of submodels, so the model can easily be adapted to cement plants of different configurations. We simulate the oxy-fuel pyro-processing section outlined in the CEMCAP study [3]. The simulation yields residence times, temperatures, and cement compositions similar to those reported in the literature [2,7], validating our model. Therefore, the presented dynamic model can form the basis for future model-based control and state estimation applications. Furthermore, the model can be used to investigate carbon reduction measures in the cement industry.
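As a generic illustration of the finite-volume, method-of-lines treatment (not the authors' Modelica model), the sketch below transports a single reacting state along the kiln axis; the geometry, velocity, and rate constant are invented for the example.

# Minimal finite-volume / method-of-lines sketch for one transported,
# reacting state (e.g. a decomposing mass fraction along the kiln).
import numpy as np
from scipy.integrate import solve_ivp

L, n = 60.0, 40                 # kiln length [m], number of finite volumes
dz = L / n
u, k = 0.05, 2e-3               # transport velocity [m/s], rate constant [1/s]
w_in = 0.8                      # inlet mass fraction

def rhs(t, w):
    # First-order upwind fluxes over the cell faces plus a first-order
    # consumption term in every cell.
    w_up = np.concatenate(([w_in], w[:-1]))
    return u * (w_up - w) / dz - k * w

sol = solve_ivp(rhs, (0.0, 3600.0), np.full(n, w_in), method="BDF")
print("outlet mass fraction after 1 h:", sol.y[-1, -1])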

References

1. European Commission. Joint Research Centre. Decarbonisation options for the cement industry; Publications Office, 2023.

2. SINTEF Energy Research. CEMCAP D4.6 - Comparative techno-economic analysis of CO2 capture in cement plants 2018.

3. Ditaranto, M.; Bakken, J. Study of a full scale oxy-fuel cement rotary kiln. International Journal of Greenhouse Gas Control 2019, 83, 166–175, doi:10.1016/j.ijggc.2019.02.008.

4. Spang, H.A. A Dynamic Model of a Cement Kiln. Automatica 1972, 309–323, doi:10.1016/0005-1098(72)90050-7.

5. Svensen, J.L.; Da Silva, W.R.L.; Merino, J.P.; Sampath, D.; Jørgensen, J.B. A Dynamical Simulation Model of a Cement Clinker Rotary Kiln, 2024. Available online: http://arxiv.org/pdf/2405.03200v1.

6. Svensen, J.L.; Da Silva, W.R.L.; Jørgensen, J.B. A First-Engineering Principles Model for Dynamical Simulation of a Calciner in Cement Production, 2024. Available online: http://arxiv.org/pdf/2405.03208v1.

7. European Commission - JRC IPTS European IPPC Bureau. Best Available Techniques (BAT) Reference Document for the Production of Cement, Lime and Magnesium Oxide.

8. Mujumdar, K.S.; Ganesh, K.V.; Kulkarni, S.B.; Ranade, V.V. Rotary Cement Kiln Simulator (RoCKS): Integrated modeling of pre-heater, calciner, kiln and clinker cooler. Chemical Engineering Science 2007, 62, 2590–2607, doi:10.1016/j.ces.2007.01.063.



Multi-Objective Optimization for Sustainable Design of Power-to-Ammonia Plants

Andrea Isella, Davide Manca

Politecnico di Milano, Italy

Ammonia synthesis is currently the second most carbon-intensive chemical process, behind only oil refining (Isella and Manca, 2022). From this perspective, producing ammonia from renewable-energy-powered electrolysis (i.e., Power-to-Ammonia) is attracting increasing interest and has the potential to make the ammonia industry carbon-neutral (MPP, 2022). This work addresses the process design of such a synthetic pathway through a methodology based on the multi-objective optimization of the so-called “three pillars of sustainability”: economic, environmental, and social. Specifically, we developed a tool that estimates the installed capacities of the main process sections typically featured in Power-to-Ammonia facilities (e.g., the renewable power plant, the electrolyzer, and energy and hydrogen storage systems) so as to maximize the “Global Sustainability Score” of the plant.
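Purely as an illustration of the weighted-score idea, the sketch below maximizes a hypothetical global score over three capacities; the pillar models, weights, and bounds are invented and are not the authors' tool.

# Hypothetical weighted "Global Sustainability Score" maximization sketch.
import numpy as np
from scipy.optimize import minimize

# Decision variables: installed capacities [MW] of PV, electrolyzer, storage.
def pillar_scores(c):
    pv, ely, sto = c
    econ = 1.0 - 0.002 * (pv + ely + 0.5 * sto)   # invented economic pillar
    env = 0.01 * pv - 0.001 * ely                 # invented environmental pillar
    soc = 0.005 * sto                             # invented social pillar
    return np.array([econ, env, soc])

weights = np.array([0.4, 0.4, 0.2])               # assumed pillar weights

def neg_global_score(c):
    return -weights @ pillar_scores(c)

res = minimize(neg_global_score, x0=[100.0, 80.0, 50.0],
               bounds=[(50, 200), (40, 150), (10, 100)])
print("capacities:", res.x, "score:", -res.fun)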



Simulating Long-term Carbon Balance on Forestry Management and Woody Biomass Applications in Japan

Ziyi Han1, Heng Yi Teah2, Yuichiro Kanematsu2, Yasunori Kikuchi1,2,3

1Department of Chemical System Engineering, The University of Tokyo; 2Presidential Endowed Chair for “Platinum Society”, The University of Tokyo; 3Institute for Future Initiatives, The University of Tokyo

Forests play a vital role as carbon sinks and renewable resources in mitigating climate change. However, in Japan, insufficient forest management has resulted in a suboptimal age-class distribution of trees. Aging trees are left unattended and underutilized, and their carbon uptake declines as they age. This underutilization also contributes to a substantial reliance on imported wood products (i.e., lower self-sufficiency). To improve carbon sequestration and renew the forest industries, it is crucial to adopt a systematic approach, so that the emissions and mitigation opportunities in the transformation of forest resources into usable products along the forest value chain can be identified and optimized.

In this study, we aim to identify an efficient forestry value chain that maximizes carbon mitigation while coordinating the varied interests of diverse stakeholders in Japan. We simulate the long-term carbon balance of forest management and forest resource utilization, incorporating the woody biomass material flow across five modules: two in the wood production sector and three in the wood utilization sectors.

(1) Forest and forestry management: the supply of woody biomass from designed forestry management practices, for example, to homogenize the forest age class distribution within a given simulation period.

(2) Wood processing: the transformation of roundwood into timber, plywood, and wood chips. A different ratio of wood products is determined based on the demand from each application.

(3) Construction sector: using timber and plywood for wood construction; the maximum flow is to satisfy the domestic demand of construction with 100% self-sufficiency rate without the need for imported wood.

(4) Energy sector: using wood chips for direct conversion to heat and electricity; the maximum flow is to reach the saturation of local renewable energy demand provided by local governments.

(5) Chemical sector: using wood chips as sources of cellulose, hemicellulose, and lignin for thermochemical conversion to chemicals that serve as versatile energy carriers, considering multiple pathways. The target hydrocarbon products include hydrogen, jet fuels, and biodiesel.

We focus on the allocation of woody biomass from modules (1) and (2) to the three utilization modules. The objective is to identify the flows of energy and material production via the various pathways and to evaluate the GHG emissions within the defined system boundary. We evaluate the carbon balance of sequestration and emissions from modules (1) and (2), and the cradle-to-gate life-cycle GHG emissions of modules (3), (4), and (5), accounting for the processes of the selected co-production pathways.

Our model shows the overall GHG emissions resulting from the forestry value chain for a given forestry management and processing strategy, and the environmentally preferred order of woody biomass utilization. The variables in each module can be set to reflect the interests of each sector, allowing the model to capture the consequences of wood resource allocation and availability, and their contribution to climate mitigation. Therefore, the simulation can support policymakers and relevant industry stakeholders in more comprehensive forestry management and biomass application planning in Japan.



Discovering patterns in Food Safety Culture by k-means clustering

Simen Akkermans1, Maria Tsigka2, Jan FM Van Impe1, Efstathia Tsakali1,2

1BioTeC+ KU Leuven; 2University of West Attica, Greece

Food safety (FS) is an ongoing issue, and despite the awareness and major initiatives of recent decades, several outbreaks highlight the need for further action. Prevention, combined with the application of prerequisite programs, constitutes the fundamental principle of any Food Safety Management System (FSMS), with particular emphasis on hygiene, food safety training, and the development and implementation of FSMSs throughout all areas of activity in the food industry. The concept of Food Safety Culture (FSC), on the other hand, distinguishes FS from FSMS by focusing on human behavior. Food safety managers often do not fully understand the relationship between FS and FSC, resulting in improper practices and further risks to food safety. Over the past decade, various tools for enforcing FSC have been proposed for different sectors of the food industry. However, there is no universal assessment tool, as specific aspects of food safety culture and individual sectors of the food industry require different or customized assessment tools. Although the literature on FS is growing rapidly, research related to FSC remains scarce and fragmented. The aim of this study was to test the potential of machine learning on questionnaire results to reveal patterns in FSC.

As a case study, surveys were conducted with 103 employees of the Greek food industry. The employees were subdivided by department, gender, experience level, company food hazard level, and company size. Each employee filled out a questionnaire consisting of 18 questions based on a Likert scale. After establishing the existence of significant relationships between the answers provided, it was investigated whether specific subgroups of employees had a different FSC. This was done by applying unsupervised k-means clustering to the survey results. It was found that, when the employees were subdivided into just 3 clusters, cluster membership significantly affected the answers to all 18 questions, as demonstrated by Kruskal-Wallis tests. As such, these 3 clusters represented employee subgroups adhering to distinct FSCs. This classification provides valuable information on the different cohorts that exist with respect to FSC and thereby enables a targeted approach to improving FSC.
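The clustering-plus-testing workflow can be sketched as follows on synthetic placeholder data (the real study used the 103 collected responses, which are not reproduced here).

# Sketch of k-means clustering followed by per-question Kruskal-Wallis tests.
import numpy as np
from scipy.stats import kruskal
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
answers = rng.integers(1, 6, size=(103, 18))   # placeholder Likert answers (1-5)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(answers)

# Kruskal-Wallis test per question: does the answer distribution differ
# between the three clusters?
for q in range(answers.shape[1]):
    groups = [answers[labels == c, q] for c in range(3)]
    stat, p = kruskal(*groups)
    print(f"Q{q + 1:02d}: H={stat:.2f}, p={p:.3f}")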

This study has demonstrated the potential of machine learning techniques to monitor and control FSC. As such, the proposed approach contributes to the implementation of GFSI and BRC GC standards requirements and the General Principles for Food Hygiene of the 2020 amendment of the Codex Alimentarius.



Development and Integration of a Co-Current Hollow Fiber Membrane Unit for Gas Separation in Process Simulators Using CAPE-OPEN Standards

Loretta Salano, Mattia Vallerio, Flavio Manenti

Politecnico di Milano, Italy

Process simulation plays a crucial role in the design, control, and optimization of chemical processes, offering a cost-effective alternative to experimental approaches. This study presents the development and implementation of a custom co-current hollow fiber membrane unit for gas separation using the CAPE-OPEN standard, integrated into Aspen HYSYS®. A one-dimensional model was derived under appropriate physical assumptions, resulting in a boundary value problem (BVP) due to the pressure profile along the fiber. The shooting method allows the accurate resolution of BVPs by iteratively adjusting initial conditions to minimize the boundary errors across the domain. This approach ensures convergence to the correct solution, which is critical for complex gas separation processes. The CAPE-OPEN standards allow the model, developed in C++, to be linked to the simulator and to interact with it through input and output ports. To further ensure reliable simulations, error handling has been included to verify that the user supplies appropriate operational parameters. Furthermore, appropriate output variables are exposed to the simulator environment to enable direct optimization within the process simulator. This flexibility provides greater control over key performance indicators, such as energy consumption and separation efficiency, ultimately facilitating a more efficient design process for applications like biogas upgrading, hydrogen purification, and carbon capture. Results from case studies demonstrate that the co-current hollow fiber membrane unit significantly reduces energy consumption compared to traditional methods like pressure swing water absorption (PSWA) for biogas upgrading to biomethane. While membrane technology showed a 21% reduction in energy consumption for biomethane production, PSWA exhibited slightly higher efficiency for biomethanol production. This study not only demonstrates the value of CAPE-OPEN standards for implementing custom unit operations but also lays the groundwork for future developments in process simulation using advanced mathematical modelling and optimization techniques.
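The shooting idea can be illustrated on a generic two-point BVP standing in for the fiber pressure equations, which are not reproduced here: the unknown initial slope is adjusted until the far-end boundary condition is met.

# Shooting-method illustration on u''(z) = -u(z), u(0) = 0, u(1) = 1.
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def integrate(slope0):
    def rhs(z, y):            # y = [u, u']
        return [y[1], -y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope0], dense_output=True)
    return sol.sol(1.0)[0]    # u(1) for this guessed initial slope

def residual(slope0):
    return integrate(slope0) - 1.0   # far-end boundary condition u(1) = 1

slope = brentq(residual, 0.0, 5.0)   # bracket the root and solve
print("converged initial slope:", slope, "(exact: 1/sin(1))")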



A Comparative Study of Aspen Plus and Machine Learning Models for Syngas Prediction in Biomass-Plastic Waste Co-Gasification

Usman Khan Jadoon, Ismael Diaz, Manuel Rodriguez

Departamento de Ingeniería Química Industrial Y del Medioambiente, Escuela Superior de Ingenieros Industriales, Universidad Politécnica de Madrid

The transition to cleaner energy sources is critical for addressing global environmental challenges, and the co-gasification of biomass and plastic waste presents a viable solution for sustainable syngas production. Syngas, a crucial component in energy applications, demands precise prediction of its composition to enhance co-gasification efficiency. Traditional modelling techniques, such as those implemented in Aspen Plus, have been instrumental in simulating gasification processes. However, machine learning (ML) models offer the potential to improve predictive accuracy, particularly for complex, non-linear systems. This study explores the comparative performance of Aspen Plus models and surrogate ML models in predicting syngas composition during the steam and air co-gasification of biomass and plastic waste.

The primary focus of this research is on evaluating Aspen Plus-based modelling techniques, such as thermodynamic equilibrium, restricted-equilibrium, and kinetic modelling, alongside surrogate models such as Kriging, support vector machines, and artificial neural networks. The novelty of this work lies in the integration of Aspen Plus with machine learning methodologies, providing a comprehensive comparative analysis of both approaches for the first time. This study seeks to determine which modelling approach offers superior accuracy for predicting syngas components such as hydrogen, carbon monoxide, carbon dioxide, and methane.

The methodology involves developing Aspen Plus models for the steam and air co-gasification of woody biomass and plastic waste feedstocks. These models simulate syngas production under varying operating conditions. Concurrently, machine learning models are trained on experimental datasets to predict syngas composition from the same input parameters. A comparative analysis is then performed, with the accuracy of each approach measured using performance metrics such as the root mean square error.
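The surrogate-comparison loop might be sketched as below with scikit-learn; the inputs and the response are invented placeholders for the experimental co-gasification datasets.

# Sketch of the surrogate-model comparison loop on placeholder data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 4))   # e.g. T, steam/feed, ER, plastic share (invented)
y = 0.3 + 0.4 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Kriging": GaussianProcessRegressor(),
    "SVM": SVR(),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.4f}")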

ML models are anticipated to better capture the non-linearities of the gasification process, while Aspen Plus models will continue to offer valuable mechanistic insights and process understanding. The potential superiority of ML models suggests that integrating data-driven and process-driven approaches could enhance predictive capabilities and optimize co-gasification processes. This study offers significant contributions to the field of bioenergy and gasification technologies by exploring the potential of machine learning as a powerful predictive tool. By comparing Aspen Plus and machine learning models, this research highlights the potential benefits of combining these methodologies to improve syngas production forecasts. The findings from this comparative analysis are expected to advance the development of more accurate and efficient bioenergy technologies, contributing to the global transition toward sustainable energy systems.



A Fault Detection Method Based on Key Variable Forecasting

Borui Yang, Jinsong Zhao

Department of Chemical Engineering, Tsinghua University, Beijing 100084, China

With the advancement of industrial production toward digitalization and automation, process monitoring has become an essential technical tool for ensuring the safe and efficient operation of chemical processes. Although process engineering has developed greatly, the risk of process faults remains. If such faults are not detected and diagnosed at an early stage, they may escalate beyond control. Over the past decades, various fault detection approaches have been proposed, including model-driven, knowledge-driven, and data-driven methods. Data-driven methods, in particular, have gained prominence, as they rely primarily on large amounts of process data, making them especially relevant with the widespread application of the Internet of Things (IoT). Among these, neural-network-based methods have emerged as a prominent approach. By stacking feature extraction layers and applying nonlinear activation functions between them, deep neural networks exhibit a strong capacity to capture complex, nonlinear patterns. This aligns well with the nature of chemical process variables, which are inherently nonlinear, strongly coupled with control loops, multivariate, and subject to time lags.

In industrial applications, fault detection algorithms rely on the time-series data of key variables. However, statistical methods such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) are limited in capturing the temporal dependencies between consecutive data points. To address this, architectures such as autoencoders (AE), convolutional neural networks (CNN), and Transformers incorporate the relationships between time points through sliding-window sampling. However, this approach can dilute fault signals, leading to delayed fault detection. Inspired by the human decision-making process, in which adverse future trends are anticipated to enable timely responses to unfavorable outcomes, we propose feeding the fault detection model with key variables that will already have entered a fault state at future time points. This proactive inclusion of future fault indicators can significantly improve the timeliness of fault detection.

Building on this concept, this work develops and implements a proactive fault detection method based on key variable forecasting. The approach employs multiple predictive models (such as LSTM, Transformer, and Crossformer) to forecast key variables over a future time horizon. The predicted results, combined with historical information, are used as inputs to a variational autoencoder (VAE), whose reconstruction error serves as the fault detection statistic. The detection component is trained on normal operating data, and faults are identified by evaluating the reconstruction error. The forecasting component is trained with mixed data, in which the initial part contains normal data, followed by the selective introduction of faults after a certain period, enabling the predictive model to capture both fault evolution trends and the characteristics of normal data.
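A simplified numerical sketch of this detection logic is given below, with plain stand-ins for the paper's components: a linear autoregressive forecaster replaces the LSTM/Transformer, PCA reconstruction error replaces the VAE, and the data are synthetic.

# Forecast-augmented reconstruction-error detection (simplified sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
H, F = 20, 5                              # history window, forecast horizon

def make_series(n, drift=0.0):
    x = np.cumsum(rng.normal(0, 0.1, n)) * 0.05 + np.sin(np.arange(n) / 5.0)
    return x + drift * np.arange(n) / n   # optional fault-like drift

train = make_series(2000)                 # normal operating data

# Forecaster: predict the next F points from the last H points.
Xf = np.array([train[i:i + H] for i in range(len(train) - H - F)])
Yf = np.array([train[i + H:i + H + F] for i in range(len(train) - H - F)])
forecaster = LinearRegression().fit(Xf, Yf)

# Detector: reconstruction error on history+forecast windows.
Z = np.hstack([Xf, Yf])                   # augmented (H+F)-point windows
pca = PCA(n_components=5).fit(Z)

def err(z):
    return np.linalg.norm(z - pca.inverse_transform(pca.transform(z)))

threshold = np.quantile([err(z[None, :]) for z in Z], 0.99)

def monitor(history):
    z = np.hstack([history, forecaster.predict(history[None, :])[0]])
    return err(z[None, :]) > threshold    # True -> fault alarm

print("normal window alarms:", monitor(make_series(H)))
print("drifting window alarms:", monitor(make_series(H, drift=5.0)))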

The proposed method has been validated on the CSTH dataset and the Tennessee Eastman Process (TEP) dataset, demonstrating that incorporating future information at the current time point significantly enhances early fault detection. However, the design of the reconstruction loss function and the model architecture must be optimized to mitigate false alarms. Introducing future expectations into current assessments shows great potential for advancing early fault detection and diagnosis, but it also poses challenges, as it demands higher performance from the key variable forecasting models.



Rate-Based Modeling Approach of a Rotating Packed Bed for CO2 Chemisorption in aqueous MEA Solutions

Joshua Orthey, John Paul Gerakis, Markus Illner, Jens-Uwe Repke

Process Dynamics and Operations Group - Technical University of Berlin, Germany

Driven by societal and political pressure for climate action, CO2 capture from flue gases is a primary focus for both academia and industry. Rotating Packed Beds (RPBs) [1,2] are a potential route to process intensification and offer significant advantages for amine-based absorption processes, including enhanced mass transfer, load flexibility, higher allowable fluid loads, and the ability to use more concentrated amine solutions with higher viscosities. One main focus of our study is both a direct comparison between packed columns and RPBs and the integration of these technologies in a hybrid concept, with the potential to enhance the overall efficiency of the CO₂ capture process. Since numerous process configurations of RPBs and packed columns exist for CO2 capture, covering gas pretreatment, absorption, and desorption, an initial evaluation of viable candidate configurations is essential. Equally important is the analysis of fundamental process behavior and its limitations, which is crucial for planning effective experimental campaigns and identifying suitable operating conditions. Unlike existing models, our approach offers a more detailed analysis, focusing specifically on the assessment of different process configurations and experimental conditions. This enables a deeper understanding and refinement of the capture process and allows us to effectively plan and design experiments.

For this purpose, a rate-based RPB model for the reactive absorption of CO2 into MEA solutions was developed using two-film theory. The model is formulated for steady-state operation and encompasses all relevant component species. It addresses multicomponent mass transfer, incorporating equilibrium and kinetic reactions in the liquid phase while considering mass transfer resistances in both the liquid and gas phases.

For the gas bulk phase, ideal gas behavior is assumed, while the non-ideal liquid phase is described with activity coefficients (elecNRTL). The Maxwell-Stefan approach is used to describe the diffusion processes and mass transport in both phases. The model is discretized over an equidistant radial grid [1]. Additionally, a film discretization near the interface was implemented. First validation studies show that the model accurately depicts the dependence of temperature and concentration profiles on rotational speed and varying liquid-to-gas (L/G) ratios, and it has been validated against literature data [2].

The CO₂ absorption and desorption process using conventional packed columns has been implemented in Aspen Plus. To enable simulations of hybrid configurations, the developed RPB model will be integrated into Aspen Custom Modeler. This study aims to analyze various hybrid process configurations through simulation to identify an efficient configuration, which will then be validated by experiments in pilot plants. These results will show whether integrating RPBs with packed columns enhances energy efficiency and separation performance while reducing operational costs, providing key insights for future scale-up efforts and driving the advancement of hybrid CO₂ capture processes.

[1] Thiels et al. (2016): Modelling and Design of Carbon Dioxide Absorption in Rotating Packed Bed and Packed Column. DOI: 10.1016/j.ifacol.2016.07.303

[2] Hilpert, et al. (2022): Experimental analysis and rate-based stage modeling of multicomponent distillation in a Rotating Packed Bed. DOI: 10.1016/j.cep.2021.108651.



Machine Learning applications in dairy production

Alexandra Petrokolou1, Satyajeet Sheetal Bhonsale2, Jan FM Van Impe2, Efstathia Tsakali1,2

1BioTeC+ KU Leuven; 2University of West Attica, Greece

The dairy sector is one of the most well-developed and prosperous industries at the international level. Due to several factors, including its high nutritional value, its susceptibility, and its popularity among consumers, milk attracted scientific interest quite early compared to other food products. Likewise, the dairy industry has always been a pioneer in adopting new processing, monitoring, and quality control technologies, from pasteurization and canning for shelf-life extension at the beginning of the 20th century to PCR methods for detecting adulteration and, nowadays, machine learning applications.

The dairy industry is closely associated with large-scale production lines and complex processes that require precision and continuous monitoring. The primary target is to meet customer requirements with increased profit while minimizing environmental impact. In this regard, various automated models based on artificial intelligence, particularly machine learning, have been developed to contribute to sustainability and the circular economy. There are three major types of machine learning: supervised learning, which uses labeled data; unsupervised learning, where the algorithm tries to find hidden patterns and relationships; and reinforcement learning, which employs a trial-and-error method. Building a machine learning model requires several steps, starting with the collection of relevant and accurate data. These smart applications have been extensively introduced into dairy production, from the farm stage and milk processing to final inspection and product distribution. In this paper, the most significant applications of machine learning in the dairy industry are illustrated with real-world examples and discussed in terms of their potential. The applications are categorized by production stage and purpose.

The most significant applications integrate recognition cameras, smart sensors, thermal imaging cameras, and digitized supply chain systems to facilitate inventory management. During animal raising, smart environmental sensors can monitor weather conditions in real time. In addition, animals can be fitted with smart collars or other small devices to record parameters such as breathing rate, metabolism, weight, and body temperature. These devices can also track the animals’ location and monitor transitions from lying to standing. By collecting these data, algorithms can detect the potential onset of diseases such as mastitis, minimizing the need for manual processing of repetitive tasks and enabling proactive health management.

Beyond the farm, useful applications emerge in milk processing, particularly in pasteurization, which requires specific temperature and time settings for each production line. Machine learning models can optimize this process, resulting in energy savings. The control of processing conditions through sensors also aids the ripening stage, contributing to the standardization of cheese products. Advancements are also occurring in product packaging, where machine vision technology can identify damage and defects that may compromise product quality, potentially leading to food spoilage and consumer dissatisfaction. Finally, dairy products are particularly vulnerable and necessitate specific conditions throughout the supply chain. By employing machine learning algorithms, it is possible to identify the most efficient distribution routes, thereby reducing operational costs. Additionally, smart sensor systems can monitor temperature and humidity levels, spotting deviations from established safety and quality standards.



Dynamic Modelling of CO2 Capture with Hydrated Lime: Integrating Porosity Evolution, Evaporation, and Temperature Variations

Natalia Vidal de la Peña, Dominique Toye, Grégoire Léonard

University of Liège, Belgium

The construction sector is currently one of the most polluting industries globally. In Europe, over 30% of the EU's environmental footprint is attributed to buildings, making this sector the largest contributor to environmental impact within the European Union. Buildings are responsible for 42% of the EU's annual energy consumption and 35% of its annual greenhouse gas (GHG) emissions.

Considering these data, it is essential to explore methods to reduce the negative environmental impact of this sector. To contribute to its circular economy, this work proposes mineral carbonation as a means to mitigate the environmental impact of this industry, specifically the carbonation of mineral wastes from the construction sector, namely pure hydrated lime (Ca(OH)2), referred to as CH in construction terminology.

This research is part of the Walloon Region's Mineral Loop project, whose objective is to model the carbonation reactions of mineral waste and to optimize the process by improving reactor conditions and material pretreatment. The carbonation of hydrated lime involves a reaction between calcium hydroxide and CO2, combining physical and chemical phenomena. The novelty of this mathematical model lies in its consideration of porosity evolution during carbonation, as well as the liquid water saturation of the material, by accounting for evaporation phenomena. Furthermore, the model is able to represent the temperature gradient along the reactor. These parameters are important because they affect the carbonation rate of the material. In previous work, we observed that the influence of water in this system is significant, and a good characterization of its behaviour during the process is crucial. Water is needed to initiate carbonation, but introducing too much can lead to pore blockage. In addition, the water released during carbonation can also cause pore blockage if evaporation is not adequately considered. The model therefore accounts for the influence of water, enabling a good correlation between water evaporation and carbonation rates under different carbonation conditions. All parameters are experimentally validated to provide a reliable model that can predict the behaviour of CH during carbonation.

The experimental setup for the carbonation process consists of an aluminium cup filled with CH placed inside a reactor with a capacity of 1.4 L, where pure CO2 is introduced through a hole in the upper part of the reactor. The system is modelled in COMSOL Multiphysics 6.2 by introducing the cup geometry and assuming CO2 is transported axially through the aluminium cup containing hydrated lime particles by dispersion, without convection, and that it diffuses within the material.

In conclusion, the proposed mathematical model accounts for the reaction phenomena, porosity variation, thermal gradient, and evaporation during the carbonation process, providing a solid understanding of the system and an effective tool to contribute to the circular economy of the construction industry. The model has been successfully validated, and the primary objective moving forward is to use it as a tool for predicting the carbonation response of other more complex materials.



Integrated LCA and Eco-design Process for Hydrogen Technologies: Case Study of the Solid Oxide Electrolyser.

Gabriel Magnaval1,2, Tristan Debonnet2, Manuele Margni1,2

1CIRAIG, Polytechnique Montréal, Montréal, Canada; 2HES-SO Valais-Wallis, Sion, Switzerland

Fuel Cell and Electrolyzer Cell hydrogen technologies are promising solutions to support the green transition. To ensure their sustainable development from the early stages of design, it is essential to assess their environmental impacts and define effective ecodesign strategies.

Life Cycle Assessment (LCA) is a widely used methodology for evaluating the environmental impacts of a product or system throughout its entire life cycle, from raw material extraction to disposal. So far, the literature does not provide consistent modelling approaches for assessing hydrogen technologies, limiting the interpretation and comparability of LCA results and hindering the interoperability of datasets. A novel modular LCA model has therefore been designed specifically to harmonize assessment models. The modular approach is structured by (supply) tiers, each subdivided into common unit processes. Tier 0 represents the operation phase delivering the functional unit. Tier 1 encompasses the stack manufacturing, the balance-of-plant equipment, the operation consumables, and the end-of-life of the stack. Each element is further subdivided into common Tier 2 subprocesses, and so on. This model has been applied to perform a screening LCA of a Solid Oxide Electrolyzer (SOE) based on publicly available literature data, to be used as a baseline for evaluating the technological innovation of an SOE designed for high-pressure applications and developed within an industrial European project. The functional unit has been defined as the production of 1 kg of hydrogen at 30 bar by a 20 kW SOE stack.
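The tiered, modular accounting structure could be represented along the following lines; the module names echo the tiers described above, but the impact numbers are invented for illustration and are not the study's results.

# Hypothetical sketch of a tiered, modular LCA accounting structure.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    gwp: float = 0.0                      # kg CO2-eq per functional unit (invented)
    submodules: list["Module"] = field(default_factory=list)

    def total_gwp(self) -> float:
        # A module's footprint is its own impact plus that of its submodules.
        return self.gwp + sum(m.total_gwp() for m in self.submodules)

tier1 = [
    Module("stack manufacturing", submodules=[
        Module("cell production", 1.2), Module("interconnects", 0.4)]),
    Module("balance of plant", 0.9),
    Module("operation consumables", submodules=[
        Module("electricity", 18.0), Module("heat supply", 2.5)]),
    Module("stack end-of-life", 0.3),
]
system = Module("1 kg H2 at 30 bar (Tier 0)", submodules=tier1)
print("cradle-to-gate GWP:", system.total_gwp(), "kg CO2-eq/kg H2")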

Our findings suggest that hydrogen production through SOE outperforms steam methane reforming only if supplied with electricity from renewable or nuclear sources. The operation consumables (electricity consumption and heat supply) were identified as the most significant contributors to the environmental footprint, emphasizing the importance of energy efficiency and renewable energy sourcing. Critical parameters affecting the life cycle impact scores include the stack's lifespan, the balance-of-plant equipment, and material production.

To further support the environmentally sustainable development of stack technologies, we propose integrating the LCA metrics into an ecodesign process tailored to the development of hydrogen technologies. The deployment of this process aims to ensure environmentally sound development from the early stages of innovation by improving communication between LCA experts and technology developers, and by accelerating data collection. An ecodesign workshop is organized during the first months of the project to enhance the literacy of the technology developers. It introduces a systemic and quantified approach to determine the hotspots of the technology, identify sustainable innovations, and evaluate their benefits and the risk of potential burden shifting. Once the partners are trained, a parametrized tool that integrates the screening LCA results in a user-friendly interface is distributed to them. It allows technology developers to quickly assess potential innovations, compare different scenarios for improving the environmental performance of their technology, and iterate calculations without the need for LCA experts. The LCA team works throughout the project on updating the tool and explaining the trade-offs.



Decision Support Tool for Technology Selection in Industrial Heat Generation: Balancing Cost and Emissions

Soha Shokry Mousa, Dhabia Al-Mohannadi

Texas A&M University at Qatar, Qatar

Decarbonization of industrial processes, including in energy-intensive industries, is essential for the world to meet sustainability targets. Electrification of heat generation could substantially reduce CO₂ emissions, but it comes with a set of challenges in balancing cost-efficiency with technical feasibility. A decision support framework is therefore presented for selecting heat generation technologies in industry, addressing the trade-offs between capital cost, CO₂ emissions, and heat demand across different temperature levels.

A tool was developed to evaluate various heat generation technologies, including high-temperature heat pumps, electrode boilers, and conventional systems. The application of heat integration principles allows the tool to analyse heat demands at different temperature levels and, in turn, assess the suitability of each technology based on parameters such as cost-effectiveness and capacity limits. The framework incorporates multi-criteria analysis, enabling decision-makers to systematically identify technologies that minimize overall cost while achieving emission reduction goals and meeting the total heat demand of the industrial process.
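A toy version of the underlying trade-off can be sketched as below; the technology data and emission cap are invented, and the actual tool additionally applies heat-integration analysis and multi-criteria scoring rather than this brute-force enumeration.

# Toy heat-technology selection: minimize cost subject to an emission budget.
from itertools import product

# (name, max supply temperature [C], cost [$/MWh], emissions [t CO2/MWh])
techs = [("heat pump", 150, 40.0, 0.05),
         ("electrode boiler", 350, 55.0, 0.10),
         ("gas boiler", 500, 30.0, 0.25)]

demands = [(120, 10.0), (200, 8.0), (400, 5.0)]  # (temperature [C], MWh)
cap = 4.0                                        # t CO2 emission budget

best = None
for choice in product(techs, repeat=len(demands)):
    if any(t[1] < d[0] for t, d in zip(choice, demands)):
        continue                                  # temperature level infeasible
    cost = sum(t[2] * d[1] for t, d in zip(choice, demands))
    co2 = sum(t[3] * d[1] for t, d in zip(choice, demands))
    if co2 <= cap and (best is None or cost < best[0]):
        best = (cost, co2, [t[0] for t in choice])

print(best)  # cheapest technology mix that respects the emission budget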

An initial application of the tool to real case studies demonstrated the effectiveness of the methodology as part of the energy transition of the industrial sector.



Assessing Distillation Processes through Sustainability Indicators Aligned with the Sustainable Development Goals

Omer Faruk Karaman, Peter Lang, Laszlo Hegely

Budapest University of Technology and Economics, Hungary

There has been a growing interest in sustainability in chemical engineering as industries aim to reduce their environmental footprint without compromising economic performance. This research proposes a set of sustainability indicators aligned with the United Nations’ Sustainable Development Goals (SDGs) for the evaluation of the sustainability of distillation processes, offering a structured way to assess and improve these systems. The use of these indicators is illustrated in two case studies: (1) a continuous pressure-swing distillation (PSD) of a maximum-azeotropic mixture without and with heat integration and (2) the recovery of acetone from a waste solvent mixture by batch distillation (BD). These processes were selected due to their widespread industrial use, their potential to benefit from improvements in their sustainability, and to show the general applicability of the indicators proposed.

Distillation is one of the most commonly used methods for the separation of liquid mixtures. It is performed in a continuous way when large processing capacities are needed (e.g. refining, petrochemical industry). Batch distillation is also used frequently (e.g. in the pharmaceutical or fine chemical industry) because of its flexibility in separating mixtures with varying quantity and composition, including waste solvent mixtures. However, distillation is very energy-intensive, leading to high operational costs and greenhouse gas emissions.

This study aims to address these issues by developing sustainability indicators (e.g. recovery of components, wastewater generation, greenhouse gas emissions) that account for environmental, economic, and social aspects. By aligning these indicators with the SDGs, which are globally recognized sustainability standards, the research also aims to encourage industries to adopt more sustainable practices.
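For concreteness, two such indicators can be computed as in the following mini-example; all numbers, including the steam emission factor, are invented and are not taken from the case studies.

# Worked mini-example of two indicators (invented numbers).
feed_acetone = 100.0        # kg acetone in the waste solvent charge
distillate_acetone = 92.0   # kg acetone recovered in the product cut
reboiler_duty = 1.8         # GJ of steam used for the batch
steam_ef = 65.0             # kg CO2 per GJ of steam (assumed factor)

recovery = distillate_acetone / feed_acetone    # component recovery indicator
ghg = reboiler_duty * steam_ef                  # GHG indicator, kg CO2 per batch
print(f"acetone recovery: {recovery:.1%}, GHG: {ghg:.0f} kg CO2/batch")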

The novelty of this work is that, to our knowledge, we are the first to propose sustainability indicators aligned with the SDGs in the field of distillation.

The case studies illustrate how the proposed indicators can be applied to evaluate the sustainability of distillation processes. In the PSD example (Karaman et al., 2024a), the process was optimised without and with heat integration, which led to a significant decrease in both the total annual cost and the environmental impact (CO2 emissions). In the acetone recovery by BD case (Karaman et al., 2024b), either the profit or the CO2 emissions were optimised by the Box-complex method. In this work, we determine how the proposed set of sustainability indicators improved due to the optimisation and heat integration performed in our previous works.

This research emphasizes the increasing importance of sustainability in chemical separation processes by integrating sustainability metrics aligned with SDGs into the evaluation of distillation processes. Our work proposes a generally applicable framework to quantify the sustainability aspects of the processes, which could be used to identify how these processes can be improved by balancing cost-effectiveness and environmental impacts.

References

Karaman, O.F.; Lang P.; Hegely L. 2024a. Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with Heat Integration. Energy (submitted).

Karaman, O.F.; Lang, P.; Hegely, L. 2024b. Economic and Environmental Optimisation of Acetone Recovery by Batch Distillation. Proceedings of the 27th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. Paper: PRES24.0144.



Strategies for a More Resilient Green Haber-Bosch Process

José M. Pires1,2, Diogo Narciso2, Carla I. C. Pinheiro1

1Centro de Química Estrutural, Institute of Molecular Sciences, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Portugal; 2Centro de Recursos Naturais e Ambiente, Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Portugal

With a global production of 183 million metric tons in 2020 [1], ammonia (NH3) stands out as one of the most important commodity chemicals on the global scene, alongside ethylene and propylene. Although 85% of all ammonia produced is used in fertilizer production [1], its applications extend beyond the agri-food sector. The Haber-Bosch (HB) process has historically enabled large-scale ammonia production, supporting agricultural practices in response to the unprecedented population growth over the past century, but it also accounts for 1.2% of global anthropogenic CO2 emissions [2]. In the ongoing energy transition, Power-to-X systems have emerged as promising solutions for both i) storing renewable energy and ii) producing chemicals or fuels. The green HB (gHB) process, powered entirely by green electricity, can be viewed as a Power-to-Ammonia (PtA) system. In this process, hydrogen from electrolysis and nitrogen from an air separation unit are compressed and fed into the NH3 synthesis loop, whose general configuration mirrors that of the conventional HB process. However, the intermittent nature of renewable energy means hydrogen production is not constant over time. Therefore, in a PtA system, the NH3 synthesis loop must be operated dynamically, which presents a major operational challenge.

Dynamic operation of NH3 converters is typically associated with reaction extinction and sustained temperature oscillations (known as limit cycles), which can severely damage the catalyst. This work addresses that challenge through the development of a high-fidelity model of the gHB process using gPROMS Process. As various process flexibilization measures have already been proposed in the literature and in industrial patents [3,4], this work aims to test some of these measures, or combinations thereof, by quantitatively assessing their impacts on the process. The process is first modelled and simulated at a nominal process load, followed by a flexibility analysis in which partial loads are introduced to observe their effects on process responsiveness and resilience. Essentially, all proposed measures boil down to maintaining a high loop pressure, a key aspect consistently addressed in the patents, which can be achieved by exploiting the ammonia synthesis reaction equilibrium. Measures that shift the equilibrium towards the reactants side are therefore particularly relevant for this analysis, as they increase the number of moles leaving the reactor. Increasing the reactor operating temperature and the NH3 fraction in the reactor feed are two of the proposed possibilities, but they are complex, as they affect the intricate reaction dynamics and may cause reactor overheating or even reaction extinction. Other possibilities include reducing the reactor flow or, in the worst case, decreasing the loop pressure [3].
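The mole-number argument can be made explicit with the synthesis equilibrium itself:

$$\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}, \qquad \Delta n = 2 - 4 = -2.$$

Since the forward reaction reduces the number of moles, any measure that shifts the equilibrium back toward the reactants (for instance, a higher reactor temperature for this exothermic reaction, or a higher NH3 fraction in the feed) increases the molar flow leaving the reactor, which helps sustain loop pressure at partial load.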

[1] IRENA & AEA. (2022). Innovation outlook: renewable ammonia.

[2] Smith, C. et al. (2020). Current and future role of Haber-Bosch ammonia in a carbon-free energy landscape. Energy Environ. Sci., 13(2), 331-344.

[3] Fahr, S. et al. (2023). Design and thermodynamic analysis of a large-scale ammonia reactor for increased load flexibility. Chemical Engineering Journal, 471, 144612.

[4] Ostuni, R. & Zardi, F. (2016). Method for load regulation of an ammonia plant (U.S. Patent No. 9463983).



Process simulation and thermodynamic analysis of a newly synthesized pre-combustion CO2 capture system using novel ionic liquids for H2 production

Sadah Mohammed, Fadwa Eljack

Qatar University, Qatar

The use of fossil fuels to meet global energy needs has increased greenhouse gas emissions, mainly CO2, contributing to climate change. Therefore, transitioning toward clean energy sources is crucial for a sustainable low-carbon economy. Hydrogen (H2) is a viable decarbonization option, but its production via steam methane reforming (SMR) emits significant CO2 [1]. Integrating abatement technology, such as pre-combustion CO2 capture, into the SMR process can reduce its carbon intensity. Pre-combustion systems are effective for high-pressure streams rich in CO2, making them suitable for H2 production. Solvent selection is crucial in designing effective CO2 capture systems, as it must balance several factors, including eco-toxicity, irreversibility, and energy efficiency. In this context, ionic liquids (ILs) have become increasingly popular for their low regeneration energy, making them well-suited for pre-combustion applications.

The main goal of this work is to synthesize a pre-combustion CO2 capture system using newly designed ILs and to conduct a thermodynamic analysis of its energy requirements and exergy losses. These novel ILs are designed using a predictive deep-learning model developed in our previous work [2]. Before assessing the performance of the novel ILs, an eco-toxicity analysis is conducted using the ADMETlab 2.0 web tool to ensure their environmental suitability. The novel ILs are then defined in the simulation software Aspen Plus, following the integrated modified translation-rotation-internal coordinate (TRIC) system with the COSMO-based/Aspen approach developed in our previous publication [3]. The steady-state pre-combustion CO2 capture process suggested by Zhai and Rubin [4] is then simulated in Aspen Plus V12 to treat a syngas stream with a high CO2 concentration (16.27% CO2). The suggested process configuration is modified to employ an IL-based absorption system suitable for processing large-scale syngas streams, enhancing CO2 removal and H2 purity under high-pressure conditions. Finally, a comprehensive energy and exergy analysis is conducted to quantify the thermodynamic deficiencies of the developed system based on the performance of the novel ILs. This work provides insights into the overall efficiency of the CO2 capture system and the sources of irreversibility, supporting an eco-friendly and optimal process design.
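For reference, the exergy bookkeeping behind such an analysis typically uses the standard steady-state expressions below; the abstract does not state which exergy components (physical, chemical) are included, so this is the generic physical-exergy form only.

$$e_{\mathrm{ph}} = (h - h_0) - T_0 (s - s_0)$$

$$\dot{E}x_{\mathrm{dest}} = T_0\,\dot{S}_{\mathrm{gen}} = \sum_{\mathrm{in}} \dot{m}\, e - \sum_{\mathrm{out}} \dot{m}\, e + \sum_j \left(1 - \frac{T_0}{T_j}\right) \dot{Q}_j - \dot{W},$$

where $T_0$ is the dead-state temperature and the exergy destruction term localizes the irreversibilities that the study sets out to quantify.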

Reference

[1] S. Mohammed, F. Eljack, S. Al-Sobhi, and M. K. Kazi, “A systematic review: The role of emerging carbon capture and conversion technologies for energy transition to clean hydrogen,” J. Clean. Prod., vol. 447, no. May 2023, p. 141506, 2024, doi: 10.1016/j.jclepro.2024.141506.

[2] S. Mohammed, F. Eljack, M. K. Kazi, and M. Atilhan, “Development of a deep learning-based group contribution framework for targeted design of ionic liquids,” Comput. Chem. Eng., vol. 186, no. January, p. 108715, 2024, doi: 10.1016/j.compchemeng.2024.108715.

[3] S. Mohammed, F. Eljack, S. Al-Sobhi, and M. K. Kazi, “Simulation and 3E assessment of pre-combustion CO2 capture process using novel Ionic liquids for blue H2 production,” Comput. Aided Chem. Eng., vol. 53, pp. 517–522, Jan. 2024, doi: 10.1016/B978-0-443-28824-1.50087-9.

[4] H. Zhai and E. S. Rubin, “Systems Analysis of Physical Absorption of CO2 in Ionic Liquids for Pre-Combustion Carbon Capture,” Environ. Sci. Technol., vol. 52, no. 8, pp. 4996–5004, 2018, doi: 10.1021/acs.est.8b00411.



Mechanistic and Data-Driven Models for Predicting Biogas Production in Anaerobic Digestion Processes

Rohit Murali1, Benaissa Dekhici1, Michael Short1, Tao Chen1, Dongda Zhang2

1University of Surrey, United Kingdom; 2University of Manchester, United Kingdom

Anaerobic digestion (AD) plays a crucial role in renewable energy production by converting organic waste into biogas in the absence of oxygen. However, accurately forecasting biogas production for real-time applications in AD plants remains a challenge due to the complex and dynamic nature of the AD process. Despite the extensive literature on decision-making in AD, there are currently no industrially applicable tools available to operators for predicting biogas output in site-specific applications. Mechanistic models are valuable tools for controlling systems, estimating states and parameters, designing reactors, and optimising operations. They can also predict biological system behaviour, reducing the need for time-consuming and expensive experiments. Effective control, state estimation, and future predictions all depend on models that accurately represent the AD process.

In this study, we present a comparative analysis of two modelling approaches, mechanistic and data-driven, focusing on their ability to predict biogas production from a lab-scale anaerobic digester. Our work includes the development of a simple mechanistic model based on two states, biomass concentration and substrate concentration, which incorporates Haldane kinetics to simulate and predict biogas production over time. The model was optimised using experimental data, with key kinetic parameters tuned via non-linear regression to minimise prediction error. While the mechanistic model demonstrated reasonable accuracy in predicting output trends, it failed to accurately characterise feedstock and biomass concentrations for future predictions. A more detailed model, such as the Anaerobic Digestion Model No. 1 (ADM1), could offer a more accurate representation. However, its complexity (35 state variables and over 100 parameters, many of which are rarely measured at AD plants) makes it impractical for real-time applications.
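A minimal sketch of the two-state structure just described is given below, assuming illustrative parameter values and a simple proportionality between substrate consumed and cumulative biogas; these are placeholders, not the fitted values or yield relations from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state AD model with Haldane kinetics. All parameter values are
# illustrative placeholders, not the study's fitted values.
MU_MAX, K_S, K_I = 0.4, 1.2, 8.0   # 1/d, g/L, g/L
Y_XS, Y_GAS = 0.1, 0.35            # biomass yield (g/g), biogas yield (L/g)
X0, S0 = 0.1, 10.0                 # initial biomass and substrate (g/L)

def haldane(s):
    """Substrate-inhibited specific growth rate (Haldane kinetics)."""
    return MU_MAX * s / (K_S + s + s**2 / K_I)

def rhs(t, z):
    x, s = z
    mu = haldane(s)
    return [mu * x,            # biomass growth
            -mu * x / Y_XS]    # substrate consumption

sol = solve_ivp(rhs, (0.0, 30.0), [X0, S0])
biogas = Y_GAS * (S0 - sol.y[1])   # cumulative biogas ~ substrate consumed
print(f"Biogas after 30 d: {biogas[-1]:.2f} L per L of reactor")
```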

To address these limitations, we compared the mechanistic model's performance to a data-driven approach using a Long Short-Term Memory (LSTM) neural network. The LSTM model was trained on lab-scale AD data and demonstrated a closer fit to the experimental results than the simple mechanistic model, proving to be a more accurate alternative for predicting biogas production. The LSTM model was also applied to a larger industrial dataset from an AD site, showing strong predictive capabilities and offering a practical alternative to time- and resource-intensive experimental analysis.

The mechanistic model, while valuable for providing insights into the biochemical processes of AD, achieved an R2 value of 0.82, indicating moderate accuracy in capturing methane production. In contrast, the LSTM model for the lab-scale dataset demonstrated significantly better predictive capabilities, with R2 values ranging between 0.93 and 0.98, indicating a strong fit to the experimental data. When applied to a larger industrial dataset, the LSTM model continued to perform well, with R2 values between 0.95 and 0.97. These results demonstrate the LSTM model's superior ability to capture temporal dependencies and handle both lab-scale and industrial data, making it a promising tool for deployment in large-scale AD plants. Its robust performance across different scales highlights its potential for optimising biogas production in real-world applications.



Application and comparison of optimization methods for an Energy Mix optimization problem

Julien JEAN VICTOR1, Zakaria Adam SOULEYMANE2, Augustin MPANDA2, Philippe TRUBERT3, Laurent FONTANELLI1, Sebastien POTEL1, Arnaud DUJANY1

1UniLaSalle, UPJV, B2R GeNumEr, U2R 7511, 60000 Beauvais, France; 2UniLaSalle, UPJV, B2R GeNumEr, U2R 7511, 80000 Amiens, France; 3Syndicat mixte de l'aéroport de Beauvais-Tillé (SMABT), 1 rue du Pont de Paris - 60000 Beauvais

In recent decades, governmental and intergovernmental policies have evolved in response to growing global awareness of climate change. Ecological considerations have taken on a predominant importance in the conception of energy mixes, and renewable energy sources are now widely preferred to fossil fuels. Simultaneously, the availability of critical resources such as energy is highly sensitive to geopolitical relationships. It is therefore important for territories at various scales to develop their energy mixes and achieve energy independence [IRENA, 2022]. The development of optimized, renewable and local energy mixes is thus strongly supported by the current economic, political and environmental situation [Østergaard and Sperling, 2014].

Multiple studies have aimed to optimize renewable energy technologies and facility locations to develop more renewable and efficient energy mixes. A majority of these optimization problems are solved using MILP, MINLP or heuristic algorithms. This study aims to assess and compare optimization methods for the environmental and economic optimization of an infrastructure's energy mix. It focuses on yearly production potential at a regional scale, and therefore does not consider decomposition or stochastic optimization methods, which are better suited to problems involving temporal variation or multiple time periods. From existing methods in the energy mix literature, Goal Programming, Branch-and-Cut and NSGA-II were selected because they are widely used and cover different problem formulations [Jaber et al, 2024] [Moret et al, 2016]. These methods will be applied to a case study and compared based on their specificities and the solutions they provide.

After a census of the energy resources already in place in the target territory, the available energy mix undergoes a carbon footprint evaluation that serves as the environmental component of the problem. The economic component is an aggregation of operating, maintenance and installation costs. The two components constitute the objectives of the problem, either treated separately or weighted in a single objective function (see the sketch below). The three selected methods are applied to the problem, and the results provided by each are gathered and assessed based on criteria including optimality, diversity of solutions, and sensitivity to constraints and settings. The assessed methods are then compared based on these criteria, so that the strengths and weaknesses of each method for this specific problem can be identified. The goal is to identify the best-fitting methods for such a problem, and this may lead to the design of a hybrid method ideally suited to the energy mix optimization problem.
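As an illustration of the weighted single-objective formulation, the sketch below solves a toy energy-mix LP with SciPy; the technologies, costs, emission factors and demand are invented placeholders, not data from the case study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy weighted bi-objective energy-mix LP. All data are illustrative.
# x_i = annual energy produced by technology i (MWh/yr)
techs = ["solar PV", "wind", "biomass", "gas"]
cost = np.array([55.0, 60.0, 90.0, 70.0])           # EUR/MWh, annualised
co2 = np.array([0.04, 0.012, 0.23, 0.49])           # tCO2/MWh
cap = np.array([40_000, 60_000, 25_000, 100_000])   # yearly potential (MWh)
demand = 90_000                                     # MWh/yr to be covered

w = 0.5                                             # weight between objectives
c = w * cost / cost.max() + (1 - w) * co2 / co2.max()  # normalised objective

res = linprog(c,
              A_ub=-np.ones((1, 4)), b_ub=[-demand],   # sum(x) >= demand
              bounds=[(0, u) for u in cap])
for t, x in zip(techs, res.x):
    print(f"{t:10s}: {x:9.0f} MWh/yr")
```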

International Renewable Energy Agency (IRENA). (2022). Geopolitics of the energy transformation: The hydrogen factor. Retrieved August 2024, from https://www.irena.org/Digital-Report/Geopolitics-of-the-Energy-Transformation

Jaber, A., Younes, R., Lafon, P., Khoder, J. (2024). A review on multi-objective mixed-integer non-linear optimization programming methods. Engineering, 5(3), 1961-1979. https://doi.org/10.3390/eng5030104

Moret, S., Bierlaire, M., Maréchal, F. (2016). Strategic energy planning under uncertainty: A mixed-integer linear programming modeling framework for large-scale energy systems. In Z. Kravanja & M. Bogataj (Eds.), Computer aided chemical engineering (Vol. 38, pp. 1899–1904). Elsevier. https://doi.org/10.1016/B978-0-444-63428-3.50321-0

Østergaard, P. A., Sperling, K. (2014). Towards sustainable energy planning and management. International Journal of Sustainable Energy Planning and Management, 1, 1-10. https://doi.org/10.5278/IJSEPM.2014.1.1



Insights into the Development and Implementation of Soft Sensors in Industrial Settings

Shweta Mohan Nagrale1, Abhijit Bhakte1, Rajagopalan Srinivasan1,2

1Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai, 600036, India; 2American Express Lab for Data Analytics, Risk & Technology, Indian Institute of Technology Madras, Chennai, 600036, India

Soft sensors offer a viable solution for industries where key quality variables cannot be measured frequently. By utilizing readily available process measurements, soft sensors provide frequent estimates of quality variables, thus avoiding the delays typically associated with traditional analyzers. They enhance efficiency and economic performance while improving process control and decision-making.

The literature outlines several challenges in deploying soft sensors within industrial environments. Laboratory measurements are crucial for developing, calibrating, and validating models. Wang et al. (2010) emphasized the mismatch between high-frequency process data and infrequent lab measurements, which necessitates down-sampling and, consequently, significant data loss. The high dimensionality of process data and multicollinearity complicate model building. Additionally, time delays and varying operational regimes complicate data alignment and model generalization. Without continuous adaptation, soft sensor models risk becoming outdated, reducing their predictive accuracy (Kay et al., 2024). Online learning and model updates are vital for maintaining performance amid changing conditions and sensor drift. Effective imputation techniques and outlier management are also essential to prevent model distortion. Integrating soft sensors into distributed control systems (DCS) and designing suitable human-machine interaction present further challenges.

This work presents practical strategies for developing and implementing soft sensors in real-world refineries. By monitoring key quality parameters such as Distillation-95 and Research Octane Number (RON), these sensors provide timely, precise estimates that enhance prediction and process control. We gathered process data at 5-minute intervals and weekly laboratory data over two years. We then applied data preprocessing techniques and clustering methods to distinguish steady-state and transient regimes, and feature engineering strategies to address high dimensionality. Simpler models such as Partial Least Squares (PLS) ensure effective quality prediction owing to their balance of accuracy and interpretability. This enables operators to make informed, data-driven decisions and respond quickly to changes without waiting for traditional laboratory analyses. In this paper, we discuss how the resulting soft sensor offers numerous benefits, such as detecting quality issues early, minimizing downtime, and optimizing resource allocation; it thus serves as a tool for continuous process improvement. Finally, the user interface can play a significant role in fostering trust among plant personnel, ensuring easy access to predictions and explicitly highlighting the soft sensor's confidence in its prediction.
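A minimal sketch of such a PLS soft sensor using scikit-learn is shown below; the synthetic arrays stand in for the refinery's down-sampled process tags and weekly lab values, and the preprocessing steps described above (regime clustering, feature engineering) are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-ins: ~2 years of weekly lab samples, 40 process tags
# aligned to the lab sampling times. Real data and preprocessing omitted.
rng = np.random.default_rng(0)
X = rng.normal(size=(104, 40))
y = X[:, :3] @ [0.8, -0.5, 0.3] + 0.1 * rng.normal(size=104)  # e.g. RON

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=5)   # latent variables handle collinearity
pls.fit(X_tr, y_tr)
print(f"R2 on held-out lab samples: {r2_score(y_te, pls.predict(X_te)):.2f}")
```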

References

Wang, D., Liu, J., & Srinivasan, R. (2010). Data-driven soft sensor approach for quality prediction in a refining process. IEEE Transactions on Industrial Informatics, 6, 11-17. https://doi.org/10.1109/TII.2009.2025124

Kay, S., Kay, H., Mowbray, M., Lane, A., Mendoza, C., Martin, P., & Zhang, D. (2024). Integrating transfer learning within data-driven soft sensor design to accelerate product quality control. Digital Chemical Engineering, 10, 100142. https://doi.org/10.1016/j.dche.2024.100142

Nian, R., Narang, A., & Jiang, H. (2022). A simple approach to industrial soft sensor development and deployment for closed-loop control. 2022 IEEE International Symposium on Advanced Control of Industrial Processes (AdCONIP), 261-262. https://doi.org/10.1109/AdCONIP55568.2022.9894185



Synthesis of Distillation Flowsheets with Reinforcement Learning using Transformer Blocks

Niklas Slager, Meik Franke

Faculty of Science and Technology, University of Twente, the Netherlands

Process synthesis is one of the main tasks of chemical engineers and has a major influence on CAPEX and OPEX in the early design phase of a project. There are essentially two different approaches: heuristics and superstructure optimization. Heuristic approaches provide quick and often satisfying solutions, but due to their non-quantitative nature, promising options might be overlooked. Superstructure optimization approaches, on the other hand, are quantitative, but their formulation and solution are difficult and time-consuming. Furthermore, they require the optimal solution to be embedded within the superstructure and cannot be applied to open-ended problems.

Reinforcement learning (RL) offers the potential to solve open-ended process synthesis problems. RL is a type of machine learning (ML) in which an agent makes decisions (actions) at a current state within an environment to maximise an expected reward, e.g., revenue. A few publications have dealt with the RL-based design of chemical processes [1,2,3,4], and an overview of reinforcement learning methods for process synthesis is given in [5]. Special attention must be paid to the principle of data input embedding. Data embeddings transform raw data (e.g., states, actions) into a form suitable for model processing; effective embeddings capture the variance and structure of the data to ensure the model learns meaningful patterns. Most authors use Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs). However, CNNs and GNNs generally struggle to capture long-range dependencies.

A fundamentally different methodology for permutation-equivariant data processing comes in the form of transformer blocks [6]. Transformer blocks are built on an attention principle, where relations in input data are weighted, and more attention is paid (a higher weight factor is assigned) to relationships having a stronger effect on the outcome.
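Concretely, the scaled dot-product attention of [6] computes these weight factors as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,$$

where $Q$, $K$ and $V$ are the query, key and value matrices derived from the input and $d_k$ is the key dimension; the softmax over query-key similarities assigns higher weights to input relationships with a stronger effect on the outcome, irrespective of their distance in the input.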

To demonstrate the applicability of the method, the separation of an ideal seven-component hydrocarbon mixture is investigated. The RL training session completed in 2 hours, much faster than the sessions reported for similar five-component problems in [1]. Full recovery of the seven components was achieved using a minimum of six separation units designed by the RL agent. However, the learning progress cannot yet be called reliable, as minor deviations in the hyperparameters easily led to sub-optimal policies; this will be investigated further.

[1] van Kalmthout, S., Midgley, L. I., Franke, M. B. (2022). https://arxiv.org/abs/2211.04327.

[2] Stops, L., Leenhouts, R., Gao, Q., Schweidtmann, A. M. (2022). AIChE Journal, 69(1).

[3] Goettl, Q., Pirnay, J., Burger, J., Grimm, D. G. (2023). arXiv:2310.06415v1.

[4] Wang, D., et al., (2023). Energy Advances, 2.

[5] Gao, Q., Schweidtmann, A. M. (2024). Current Opinion in Chemical Engineering, 44, 101012.

[6] Vaswani, A., et al. (2023). Attention is all you need. https://arxiv.org/abs/1706.03762.



Machine Learning Surrogate Models for Atmospheric Dispersion: A Time-Efficient Approach to Air Quality Prediction

Omar Hassani Zerrouk1,2, Eva Gallego1, Jose Francisco Perales1, Moisès Graells1

1Polytechnic University of Catalonia, Spain; 2Abdelmalek Essaadi University, Morocco

Atmospheric dispersion models are traditionally used to estimate the impact of pollutants on air quality, relying on complex models and extensive computational resources. This hinders the development of practical real-time solutions for anticipating the effects of incidental plant emissions. To address these limitations, this study explores machine learning algorithms as surrogate models: faster, less resource-intensive alternatives to traditional dispersion models, with the aim of replicating their results while reducing computational complexity.

Recent studies have explored machine learning as surrogate models for atmospheric dispersion. Kocijan et al. (2023) and Huang et al. (2020) demonstrated the potential of tree-based techniques for predicting air quality, while Gao et al. (2019) used hybrid LSTM-ARIMA models for PM2.5 forecasting. However, most approaches focus on specific algorithms or pollutants. This study provides a broader evaluation of various models, including regression, Random Forest, Gradient Boosting, and deep learning, across multiple pollutants and meteorological variables.

This study evaluates machine learning models trained on data from traditional dispersion models for pollutants such as NO₂, NOx, SO₂ and PM, with meteorological variables as inputs. We combined localized meteorological data from municipalities with dispersion data computed using the Eulerian model TAPM (Hurley et al., 2005).
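A minimal sketch of this model comparison with scikit-learn is given below; the synthetic data stand in for the TAPM-generated concentrations and meteorological features, so the numbers it prints will not match the results reported next.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-ins: X would hold meteorological features (wind, stability,
# radiation, ...), y a TAPM-computed pollutant concentration (e.g. NO2).
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
y = np.sin(X[:, 0]) * X[:, 1] + 0.3 * X[:, 2] + 0.1 * rng.normal(size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
    "Ridge": Ridge(),
    "Lasso": Lasso(alpha=0.01),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name:16s} MSE={mean_squared_error(y_te, pred):6.3f} "
          f"R2={r2_score(y_te, pred):5.2f}")
```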

The best-performing models were Gradient Boosting and Random Forest, with MSE values of 1.23 and 1.39, and R² values of 0.94. These models effectively captured nonlinear relationships between meteorological conditions and pollutant concentrations, demonstrating their capacity to handle complex environmental interactions. In contrast, traditional regression models, like Ridge and Lasso, underperformed with MSE values of 14.62 and 17.50, and R² values of 0.40 and 0.28, struggling with data complexity. Similarly, deep learning models such as LSTM and GRU showed weaker performance, with MSE values of 27.43 and 26.73, and R² values of -0.11 and -0.08, suggesting that the data relationships were more influenced by instantaneous features than long-term temporal patterns.

Feature importance was analysed using permutation importance and standard model-based metrics, revealing that variables related to atmospheric dispersion and stability, such as wind direction, atmospheric stability, and solar radiation, were the most significant in predicting pollutant concentrations. Time-derived variables, such as day or hour, were less relevant, likely because their effects were captured by other environmental factors. These results highlight the potential of ML-based surrogate models as efficient alternatives to traditional dispersion models for air quality monitoring.

References

1. Kocijan, J., Hvala, N., Perne, M. et al. Surrogate modelling for the forecast of Seveso-type atmospheric pollutant dispersion. Stoch Environ Res Risk Assess 37, 275–290 (2023).

2. Huang, Y., Ding, H., & Hu, J. (2020). A review of machine learning methods for air quality prediction: Challenges and opportunities. Environmental Science and Pollution Research, 27(16), 19479-19495.

3. Gao, H., Zhang, H., Chen, X., & Zhang, Y. (2019). A hybrid model based on LSTM neural and ARIMA for PM2.5 forecasting. Atmospheric Environment, 198, 206-213.

4. Hurley, P. J., Physick, W. L., Luhar, A. K (2005). TAPM: a practical approach to prognostic meteorological and air pollution modelling, Environmental Modelling & Software, 20(6), 737-752.



Comparative Analysis of PharmHGT, GCN, and GAT Models for Predicting LogCMC in Surfactants

Gabriela Carolina Theis Marchan, Teslim Olayiwola, Andrew N Okafor, Jose Romagnoli

LSU, United States of America

Predicting the critical micelle concentration (LogCMC) of surfactants is essential for optimizing their applications in various industries, including pharmaceuticals, detergents, and emulsions. In this study, we investigate the performance of graph-based machine learning models, specifically Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), and a graph-transformer model, PharmHGT, for predicting LogCMC values. We aim to determine the most effective model for capturing the structural and physicochemical properties of surfactants. Our results provide insights into the relative strengths of each approach, highlighting the potential advantages of transformer-based architectures like PharmHGT in handling molecular graph representations compared to traditional graph neural networks. This comparative study serves as a step towards enhancing the accuracy of LogCMC predictions, contributing to the efficient design of surfactants for targeted applications.
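For orientation, a minimal GCN regressor for a scalar molecular property of this kind might look as follows (PyTorch Geometric assumed); the atom features, architecture, and tiny placeholder graph are illustrative assumptions, and the GAT and PharmHGT variants differ in their message-passing layers.

```python
import torch
from torch import nn
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

# Minimal GCN regressor for a scalar molecular property such as logCMC.
# Featurisation and the 4-atom "molecule" below are placeholders.
class GCNRegressor(nn.Module):
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        h = global_mean_pool(h, data.batch)      # one vector per molecule
        return self.head(h).squeeze(-1)          # predicted logCMC

x = torch.randn(4, 16)                           # 4 atoms, 16 features each
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],   # undirected bonds as pairs
                           [1, 0, 2, 1, 3, 2]])
mol = Data(x=x, edge_index=edge_index, batch=torch.zeros(4, dtype=torch.long))
print(GCNRegressor()(mol))
```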



Short-cut Correlations for CO2 Capture Technologies in Small-Scale Applications

So-mang Kim, Joanne Kalbusch, Grégoire Léonard

University of Liege, Belgium

Carbon capture (CC) is crucial for achieving net-zero emissions and mitigating climate change. Despite its critical importance, the current deployment of carbon capture technologies remains insufficient to meet the climate target, indicating an urgency to increase the number of carbon capture applications. Emission sources vary significantly in capture scale, with large-scale emitters benefiting from economies of scale, while smaller-scale applications are often neglected. However, to achieve an economy with net-zero emissions, CC applications at various emission levels are necessary.

While many studies on carbon capture technologies highlight capture cost as a key performance indicator (KPI), there is currently no standardized method in the literature to estimate the cost of carbon capture, leading to inconsistencies and incomparable results. This makes it challenging for decision-makers to fairly compare and identify suitable carbon capture options based on the literature results, hindering the deployment of CC units. In addition, conducting detailed simulations and Techno-Economic Assessments (TEAs) to identify viable capture options across various scenarios can be time-consuming and requires significant effort.

To address the aforementioned challenges, this work develops short-cut correlations describing the total equipment cost (TEC) and energy consumption of selected carbon capture technologies for small-scale capture applications. This will allow exploration of the role of CC in small-scale industries and offer a practical framework for evaluating the technical and economic viability of various CO₂ capture systems. The goal is to provide an efficient approach for decision-makers to estimate the cost of carbon capture without the need for extensive simulations and detailed TEAs, while ensuring that consistent assumptions and cost estimation methods are applied across comparison studies. The correlations are also flexible, allowing various cost estimation methods and case-specific assumptions to fine-tune the analyses for different scenarios.
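The abstract does not give the functional form of the correlations; a common short-cut choice for equipment cost, shown here only to illustrate the idea, is a power-law scaling on capture capacity $S$:

$$\mathrm{TEC}(S) = \mathrm{TEC}_{\mathrm{ref}} \left(\frac{S}{S_{\mathrm{ref}}}\right)^{n}, \qquad 0 < n < 1,$$

where an exponent below one encodes the economies of scale that penalize small capture units, and analogous correlations can be written for specific energy consumption.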

The shortcut correlations can offer valuable insights into small-scale carbon capture (CC) applications by identifying scenarios that enhance their feasibility, such as integrating small-scale carbon capture with waste heat and renewable energy sources. They also facilitate the exploration of various spatial configurations, including the deployment of multiple small-scale capture units versus combining flue gases from small-scale sources into a single larger CC unit. The shortcut correlations are envisioned to improve the accessibility of carbon capture technologies for small-scale industries.



Mixed-Integer Bilevel Optimization Problem Generator and Library for Algorithm Evaluation and Development

Meng-Lin Tsai, Styliani Avraamidou

University of Wisconsin-Madison, United States of America

Bilevel optimization, characterized by nested optimization problems, has gained prominence in modeling two-player interactions across various domains, including environmental policy (Beykal et al. 2020) and hierarchical control (Avraamidou et al., 2017). Despite its wide applicability, bilevel optimization is known to be NP-hard, and mixed-integer bilevel optimization problems are even more challenging to solve (Kleinert et al. 2021), prompting the development of diverse solution methods, such as Benders decomposition (Saharidis et al. 2009), multiparametric optimization (Avraamidou et al. 2019), penalty functions (Dempe et al. 2005), and branch-and-bound/cut algorithms (Fischetti et al. 2018). However, due to the large variety of problem types (types of variables, constraints, and objective functions), the field lacks standardized benchmark problems. Random problem generators are commonly used to produce problems for algorithm evaluation (Avraamidou et al. 2019), but they often produce trivial bilevel problems, defined as those for which the solution of the high-point relaxation is already bilevel-feasible.

In this work, we investigate the prevalence of trivial problems across different problem structures (LP-LP, ILP-ILP, MILP-MILP) and sizes (number of upper/lower variables, binary/continuous variables, constraints), and we reveal how problem structure and size influence the probability of generating a trivial problem. We introduce a new bilevel problem generator, coded in Python using Gurobi as a solver, designed to create non-trivial bilevel problem instances of a chosen type and size. A library of 200 randomly generated problems of different sizes and types will also be part of the tool and made available online. The proposed tool aims to enhance the robustness of bilevel optimization algorithm testing by ensuring that generated problems provide a meaningful challenge to the solver, offering a reliable method for algorithm evaluation and accelerating the development of efficient solvers for complex, real-world bilevel optimization problems.
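A minimal sketch of the triviality check is given below, assuming a small random ILP-ILP instance and the gurobipy matrix API; the distributions, bounds and sizes are illustrative choices, not the generator's actual settings.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Random ILP-ILP instance: min_x c'x + d'y s.t. Ax + By <= b,
# with y chosen by a follower minimising f'y. Data are illustrative.
rng = np.random.default_rng(0)
nx, ny, m = 3, 3, 5
A = rng.integers(-5, 6, (m, nx)); B = rng.integers(-5, 6, (m, ny))
b = rng.integers(5, 20, m)
c = rng.integers(-5, 6, nx); d = rng.integers(-5, 6, ny)
f = rng.integers(-5, 6, ny)                      # follower objective

# 1) High-point relaxation: optimise the leader over the joint
#    constraints, ignoring lower-level optimality.
hpr = gp.Model()
hpr.Params.OutputFlag = 0
x = hpr.addMVar(nx, vtype=GRB.INTEGER, lb=0, ub=10)
y = hpr.addMVar(ny, vtype=GRB.INTEGER, lb=0, ub=10)
hpr.addConstr(A @ x + B @ y <= b)
hpr.setObjective(c @ x + d @ y, GRB.MINIMIZE)
hpr.optimize()
x_hpr, y_hpr = x.X, y.X

# 2) Follower's best response with the leader's decision fixed.
fol = gp.Model()
fol.Params.OutputFlag = 0
yf = fol.addMVar(ny, vtype=GRB.INTEGER, lb=0, ub=10)
fol.addConstr(B @ yf <= b - A @ x_hpr)
fol.setObjective(f @ yf, GRB.MINIMIZE)
fol.optimize()

# The instance is trivial if the HPR's y is already follower-optimal.
trivial = abs(f @ y_hpr - fol.ObjVal) < 1e-6
print("trivial instance" if trivial else "non-trivial instance")
```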

References

Avraamidou, S., & Pistikopoulos, E. N. (2017). A multi-parametric bi-level optimization strategy for hierarchical model predictive control. In Computer Aided Chemical Engineering (Vol. 40, pp. 1591-1596). Elsevier.

Avraamidou, S., & Pistikopoulos, E. N. (2019). A multi-parametric optimization approach for bilevel mixed-integer linear and quadratic programming problems. Computers & Chemical Engineering, 125, 98-113.

Beykal, B., Avraamidou, S., Pistikopoulos, I. P., Onel, M., & Pistikopoulos, E. N. (2020). Domino: Data-driven optimization of bi-level mixed-integer nonlinear problems. Journal of Global Optimization, 78, 1-36.

Dempe, S., Kalashnikov, V., & Ríos-Mercado, R. Z. (2005). Discrete bilevel programming: Application to a natural gas cash-out problem. European Journal of Operational Research, 166(2), 469-488.

Fischetti, M., Ljubić, I., Monaci, M., & Sinnl, M. (2018). On the use of intersection cuts for bilevel optimization. Mathematical Programming, 172(1), 77-103.

Kleinert, T., Labbé, M., Ljubić, I., & Schmidt, M. (2021). A survey on mixed-integer programming techniques in bilevel optimization. EURO Journal on Computational Optimization, 9, 100007.

Saharidis, G. K., & Ierapetritou, M. G. (2009). Resolution method for mixed integer bi-level linear problems based on decomposition technique. Journal of Global Optimization, 44, 29-51.



Surface Tension Data Analysis for Advancing Chemical Engineering Applications

Ulderico Di Caprio1, Flora Esposito1, Bruno C. L. Rodrigues2, Idelfonso Bessa dos Reis Nogueira2, Mumin Enis Leblebici1

1Center for Industrial Process Technology, Department of Chemical Engineering, KU Leuven, Agoralaan Building B, 3590 Diepenbeek, Belgium; 2Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 6, Kjemiblokk 4, Trondheim 7043, Norway

Surface tension plays a critical role in numerous aspects of chemical engineering, influencing key processes such as mass transfer, fluid dynamics, and the behavior of multiphase systems. Accurate surface tension data are essential for the design of separation processes, reactor optimization, and the development of advanced materials. However, despite its importance, the availability of comprehensive, high-quality experimental data has lagged behind modern research needs, limiting progress in fields where precise interfacial properties are crucial.

In this work, we address this gap by revisiting a vast compilation of experimental surface tension data published in 1972. Originally recognized for its breadth and accuracy, this compilation has remained largely inaccessible to the modern scientific community due to its outdated digital format. The digital version of the original document consists primarily of scanned images, making data extraction difficult and time-consuming for researchers. Manual transcription was often required, increasing the risk of human error and reducing efficiency for those seeking to use the data for new developments in chemical engineering.

Our project involves not only the digitalization of this critical dataset—transforming it into a machine-readable and easily accessible format with experimental measurements of surface tension for over 2000 substances across a wide range of conditions—but also an in-depth analysis aimed at identifying the key physical parameters that influence surface tension behavior. Using modern data extraction tools and statistical techniques, we have studied the relationships between surface tension and various physical properties. By analyzing these factors, we present insights into which features most strongly impact surface tension under different conditions.

This comprehensive dataset and accompanying feature analysis offer researchers a valuable foundation for exploring surface tension behavior across diverse areas of chemical engineering. We believe this will contribute to significant advancements in fields such as phase equilibrium, material design, and fluid mechanics, as well as support innovation in emerging technologies like microfluidics, nanotechnology, and sustainable process design.



Design considerations for hardware-based acceleration of molecular dynamics

Joseph Middleton, Joan Cordiner

University of Sheffield, United Kingdom

As demand for long and accurate molecular simulations increases, so too does the computational demand. Beyond adopting new enterprise-scale processors (such as the ARM Neoverse chips) or running simulations on GPUs, there exists a potentially faster and more power-efficient option in the form of custom hardware. Using hardware description languages, it is possible to transform existing algorithms into custom, high-performance hardware layouts. This can lead to faster and more efficient simulations, but at the cost of development time and flexibility. To take greatest advantage of the potential performance gains, the focus should be on the most computationally expensive parts of the algorithms.

When performing molecular dynamics simulations in a polar solvent such as water, non-bonded electrostatic calculations dominate each simulation step, as the interactions between the solvent and the molecular structure must be evaluated. However, simply developing a non-bonded electrostatics co-processor may not be enough, as transferring data between the host program and the FPGA incurs a significant time delay. For such a design to be competitive with existing solutions, the number of data transfers must be reduced. This could be achieved by simulating multiple time-steps between memory transfers, which may impact accuracy, or by performing more of the calculation in the custom hardware.
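To illustrate the hotspot, the sketch below computes total pairwise Coulomb energy with the O(N²) loop that dominates such simulations; real MD engines add cutoffs, neighbour lists, or Ewald summation, none of which is shown, and the reduced units are an assumption for brevity.

```python
import numpy as np

def coulomb_energy(positions: np.ndarray, charges: np.ndarray) -> float:
    """Total pairwise Coulomb energy in reduced units (k_e = 1)."""
    n = len(charges)
    energy = 0.0
    for i in range(n - 1):   # O(N^2) pair loop: the candidate FPGA kernel
        r = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
        energy += np.sum(charges[i] * charges[i + 1:] / r)
    return energy

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(1000, 3))     # 1000 charged sites
q = rng.choice([-1.0, 1.0], size=1000)
print(f"E = {coulomb_energy(pos, q):.3f}")
```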



A Novel AI-Driven Approach for Adsorption Parameter Estimation in Gas-Phase Fixed-Bed Experiments

Rui D. G. Matias1,2, Alexandre F. P. Ferreira1,2, Idelfonso B.R. Nogueira3, Ana Mafalda Ribeiro1,2

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The need to reduce greenhouse gas emissions has driven the shift toward renewable energy sources such as biogas. To use biogas as a substitute for natural gas, it must undergo a purification process to separate methane from carbon dioxide. Adsorption-based separation processes are standard methods for biogas separation (1).

Developing precise mathematical models that can accurately describe all the phenomena involved in the process is crucial for a deeper understanding and for the creation of innovative control and optimization techniques for these systems.

By solving a system of coupled partial differential equations, ordinary differential equations, and algebraic equations, it is possible to accurately simulate the fixed-bed units used in these processes. However, a robust simulation, and consequently a better understanding of the intrinsic phenomena governing these systems (such as adsorption isotherms and film and particle mass transfer), heavily depends on carefully selecting the parameters of these equations.

These parameters can be estimated using well-known mathematical correlations or trial and error. However, these methods often introduce significant errors (2). For a more accurate determination of parameters, an optimization algorithm can be employed to find the best set of parameters that minimize the difference between the simulation and experimental data, thereby providing a better representation and understanding of the real process.
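As a point of reference for the conventional route just described, the sketch below fits a Langmuir isotherm to equilibrium data by nonlinear least squares; the functional form and the data points are illustrative placeholders, not the model or measurements used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares estimation of isotherm parameters from equilibrium data.
# Langmuir form and synthetic CO2-like data are used for illustration only.
def langmuir(p, q_max, b):
    """Adsorbed amount q [mol/kg] as a function of pressure p [bar]."""
    return q_max * b * p / (1.0 + b * p)

p_exp = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])        # bar
q_exp = np.array([0.55, 1.10, 1.70, 2.30, 2.80, 3.15])  # mol/kg

(q_max, b), _ = curve_fit(langmuir, p_exp, q_exp, p0=[3.0, 1.0])
print(f"q_max = {q_max:.2f} mol/kg, b = {b:.2f} 1/bar")
```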

Different optimization methodologies can be employed for this purpose. For example, deterministic methods are known for ensuring convergence to an optimal solution, but the starting point selection significantly impacts their performance (3). In contrast, meta-heuristic techniques are often preferred for their adaptability and efficiency, since they do not rely on predefined initial conditions (4). However, these approaches may not always guarantee finding the optimal solution for every problem.

A parameter estimation methodology based on Artificial Intelligence (AI) offers several advantages. AI algorithms can handle complex problems by processing high-dimensional data and modelling nonlinear relationships more accurately. Additionally, AI techniques, such as neural networks, do not rely on well-defined initial conditions, making them more robust and efficient in the search for global solutions, avoiding local minima traps. Beyond that, they also have the ability to continuously learn from new data, enabling dynamic adjustments.

This work presents an innovative methodology for estimating the isotherm parameters of a phenomenological mathematical model of fixed-bed experiments involving CO2 and CH4. By integrating Artificial Intelligence tools with the phenomenological model and experimental data, this approach develops an algorithm that generates parameter values for the process's mathematical model, yielding simulations with a close-to-optimal fit to the experimental points and providing valuable insights into this separation.

1. Ferreira AFP, Ribeiro AM, Kulaç S, Rodrigues AE. Methane purification by adsorptive processes on MIL-53(Al). Chemical Engineering Science. 2015;124:79-95.

2. Weber Jr WJ, Liu KT. Determination of mass transport parameters for fixed-bed adsorbers. Chemical Engineering Communications. 1980;6(1-3):49-60.

3. Schwaab M, Pinto JC. Análise de Dados Experimentais: I. Fundamentos de Estatística e Estimação de Parâmetros [Analysis of Experimental Data: I. Fundamentals of Statistics and Parameter Estimation]. Editora E-papers.

4. Lin M-H, Tsai J-F, Yu C-S. A Review of Deterministic Optimization Methods in Engineering and Management. Mathematical Problems in Engineering. 2012;2012(1):756023.



Integration of Graph Theory and Machine Learning for enhanced process synthesis and design of wastewater treatment networks

Andres D. Castellar-Freile1, Jean Pimentel2, Alec Guerra1, Pratap M. Kodate3, Kirti M. Yenkie1

1Department of Chemical Engineering, Rowan University, Glassboro, New Jersey, USA; 2Sustainability Competence Center, Széchenyi István University, Győr, Hungary; 3Department of Physics, Indian Institute of Technology, Kharagpur, India

Process synthesis (PS) is the first step in process design. It is crucial for finding the configuration of unit operations/technologies and stream flows that optimizes the parameters of interest (cost, environmental impact, energy use, etc.). Traditional approaches such as superstructure optimization depend strongly on user-defined technologies, stream connections, and reasonable initial guesses for the unknown variables. This results not only in missing structures that could perform better than the selected one, but also in neglecting important aspects such as multiple-input, multiple-output systems and recycle streams [1]. In response, the enhanced P-graph methodology, integrated with insights from machine learning and realistic technology models, is presented as a novel approach for process synthesis. It offers the unique advantage of providing all n feasible structures, given its specific connectivity rules for input, intermediate, and terminal nodes [2]. In addition, a novel two-layer process synthesis algorithm [3] is developed that incorporates combinatorial, linear, and nonlinear solvers to integrate the P-graph with realistic nonlinear model equations. It then performs a feasibility analysis and ranks the solution structures based on chosen metrics, such as cost, scalability, or sustainability. However, the n feasible solutions identified with the P-graph framework may still be unsuitable for the real process because of limitations in their reliability and structural resilience over time. To address this, applying Machine Learning (ML) methods for regression, classification, and extrapolation allows the accurate prediction of structural reliability and resilience over time [4], [5]. This will support better process design, enable proactive maintenance, and improve overall management.

Many water utility companies use a reactive (wait-watch-act) methodology to manage their facilities and infrastructure. The proposed method can be applied to these systems, offering robust, convergent, and comprehensive solutions for municipalities, water utility companies, and industries, enabling them to make well-informed decisions when designing new facilities or upgrading existing ones, all while minimizing time and financial investment.

Thus, the integration of Graph Theory and ML approaches for optimal design, structural reliability, and resilience yields a new framework for Process Synthesis. We demonstrate it on Wastewater Treatment Network (WWTN) synthesis, a problem that is vital to addressing issues of water equity and public health. The pipeline network, pumping stations, and the wastewater treatment plant are modeled with the P-graph framework, and detailed, accurate models are developed for the treatment technologies. ML methods such as eXtreme Gradient Boosting (XGBoost) and Artificial Neural Networks (ANNs) are tested to estimate the resilience and structural reliability of the pumping stations and the pipeline network (see the sketch below).
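A minimal sketch of the XGBoost regression step is given below; the pipe-level features and the synthetic reliability target are invented placeholders standing in for the study's dataset, not its actual inputs.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Invented pipe-level features and a synthetic reliability target.
rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(0, 80, 2000),      # pipe age [yr]
    rng.uniform(50, 600, 2000),    # diameter [mm]
    rng.integers(0, 4, 2000),      # material class
    rng.poisson(1.0, 2000),        # past failures
])
reliability = np.clip(1.0 - 0.008 * X[:, 0] - 0.03 * X[:, 3]
                      + 0.05 * rng.normal(size=2000), 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, reliability, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(f"R2 on held-out pipes: {r2_score(y_te, model.predict(X_te)):.2f}")
```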

References

[1] K. M. Yenkie, Curr. Opin. Chem. Eng., 2019, doi: 10.1016/j.coche.2019.09.002.

[2] F. Friedler, Á. Orosz, and J. Pimentel Losada, 2022. doi: 10.1007/978-3-030-92216-0.

[3] J. Pimentel et al., Comput. Chem. Eng., 2022, doi: 10.1016/j.compchemeng.2022.108034.

[4] G. Kabir, N. B. C. Balek, and S. Tesfamariam, J. Perform. Constr. Facil., 2018, doi: 10.1061/(ASCE)CF.1943-5509.0001162.

[5] Á. Orosz, F. Friedler, P. S. Varbanov, and J. J. Klemes, 2018, doi: 10.3303/CET1863021.



An Automated CO2 Capture Pilot Plant at ULiège: A Platform for the Validation of Process Models and Advanced Control

Cristhian Molina Fernández, Patrick Kreit, Brieuc Beguin, Sofiane Bekhti, Cédric Calberg, Joanne Kalbusch, Grégoire Léonard

University of Liège, Belgium

As the European Union accelerates its efforts to decarbonize society, the exploration of effective pathways to reduce greenhouse gas emissions is increasingly being driven by digital innovation. Pilot installations play a pivotal role in validating both emerging and established technologies within the field of carbon capture, utilization, and storage (CCUS).

At the University of Liège (ULiège) in Belgium, researchers are developing a "smart campus" that integrates advanced CCUS technologies with cutting-edge computational tools. Supported by the European Union's Resilience Plan, the Products, Environment, and Processes (PEPs) group is leading the construction of several key pilot installations, including a CO2 capture pilot plant, a CO2-to-kerosene conversion unit, and a direct air capture (DAC) test bench. These facilities are designed to support real-time data monitoring and advanced computational modeling, enabling enhanced process optimization.

The CO2 capture pilot plant has a processing capacity of 1 ton of CO2 per day, utilizing a fully automated chemical absorption system. Capable of working with either amine or carbonate solvents, the plant operates under an intelligent control framework that allows for remote and extended operation. This level of automation supports continuous data collection, essential for validating computational models and applying advanced control strategies, such as machine learning algorithms. Extended operation provides critical datasets for optimizing solvent stability, understanding corrosion behavior, and refining process models—key factors for scaling up CCUS technology.

The plant is fully electrified, with a heat pump integrated into the system to enhance energy efficiency by recovering heat from the condenser and upgrading it for reboiler use. The initial commissioning and testing phase will be conducted at ULiège’s Sart Tilman campus, where the plant will capture CO2 from a biomass boiler’s exhaust gases at the central heating station.

The modular design of the installation, housed within three 20-foot shipping containers, supports easy transport and deployment at various industrial locations. The automation and control system is centralized in the third container, allowing for full remote operation and facilitating quick reconfiguration of the plant for different experimental setups.

A key feature of the pilot is its flexible design, which integrates advanced gas pretreatment systems (including NOx and SOx removal) and optimized absorption/desorption columns with intercooling and interheating capabilities. These features allow dynamic adjustment of process conditions, enabling real-time optimization of CO2 capture performance. The solvent feed can be varied at different column heights, allowing researchers to evaluate the effect of column height on separation efficiency without making physical modifications. This flexibility is supported by a modular column design, where flanged segments can be dismantled or reassembled easily.

Overall, this pilot plant is designed to facilitate process optimization through data-driven approaches and intelligent control systems, offering critical insights into the performance and scalability of CCUS technologies. By providing a flexible, automated platform for long-duration experimental campaigns, it serves as a vital resource for advancing decarbonization efforts, especially in hard-to-abate industrial sectors.



A Comparison of Robust Modeling Approaches to Cope with Uncertainty in Independent Terms, Considering the Forest Supply Chain Case Study

Frank Piedra-Jimenez1, Ana Inés Torres2, Maria Analia Rodriguez1

1Instituto de Investigación y Desarrollo en Ingeniería de Procesos y Química Aplicada (UNC-CONICET), Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales. Av. Vélez Sarsfield 1611, X5016GCA Ciudad Universitaria, Córdoba, Argentina; 2Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh PA 15213

The need to consider uncertainty in the decision-making process is widely acknowledged in the PSE community, which distinguishes three main modelling paradigms for optimization under uncertainty, namely robust optimization (RO), stochastic programming (SP), and chance-constrained programming (CCP). The last two are computationally challenging because they require complete distributional knowledge (Chen et al., 2018). RO, in contrast, does not require knowledge of the probabilistic behavior of uncertain parameters and strikes a good balance between solution quality and computational tractability (Ning and You, 2019).

One widely used method is static robust optimization, initially presented by Bertsimas and Sim (2004). They proposed the budgeted uncertainty set, which allows flexible handling of the level of conservatism of robust solutions in terms of probabilistic limits on constraint violations. For each uncertain parameter, it defines a deviation bound from its nominal value, and a budget parameter determines the number of uncertain parameters that are allowed to take their worst value in each equation. When there is only one uncertain parameter on the right-hand side of the equations, this method may adopt an overly conservative perspective, considering the worst-case scenario for each constraint.
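For concreteness, the budgeted set of Bertsimas and Sim can be written as follows for a single constraint (the integer-budget case is shown for simplicity). With nominal values $\bar a_{ij}$, deviations $\hat a_{ij}$, and budget $\Gamma_i$, the robust counterpart of $\sum_j a_{ij} x_j \le b_i$ is

$$\sum_{j} \bar a_{ij}\, x_j \;+\; \max_{\substack{S_i \subseteq J_i \\ |S_i| \le \Gamma_i}} \sum_{j \in S_i} \hat a_{ij}\, |x_j| \;\le\; b_i,$$

so at most $\Gamma_i$ coefficients in row $i$ are allowed to reach their worst-case values simultaneously.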

To address this concern, Ghelichi et al. (2018) introduced the "adjustable column-wise robust optimization" (ACWRO) method, which lets the decision-maker define the number of uncertain realizations to be satisfied. Initially presented as a nonlinear model, it was later reformulated to achieve a linear formulation. The present paper proposes an alternative method based on a linear disjunctive formulation, called "disjunctive robust optimization" (DRO). The proposed method is applied to the forest supply chain design problem, extending previous work by the authors (Piedra-Jimenez et al., 2024). Owing to the disjunctive structure of the proposed approach, big-M and hull reformulations are applied to the DRO formulation and compared with the ACWRO approach over a large number of instances, showing the tightness of each reformulation and its computational performance on the forest supply chain design case study.

References:

Bertsimas, D., Sim, M., 2004. The Price of Robustness. Oper. Res. 52, 35–53. https://doi.org/10.1287/OPRE.1030.0065

Chen, Y., Yuan, Z., Chen, B., 2018. Process optimization with consideration of uncertainties—An overview. Chinese J. Chem. Eng. 26, 1700–1706.

Ghelichi, Z., Tajik, J., Pishvaee, M.S., 2018. A novel robust optimization approach for an integrated municipal water distribution system design under uncertainty: A case study of Mashhad. Comput. Chem. Eng. 110, 13–34. https://doi.org/10.1016/J.COMPCHEMENG.2017.11.017

Ning, C., You, F., 2019. Optimization under uncertainty in the era of big data and deep learning: When machine learning meets mathematical programming. Comput. Chem. Eng. 125, 434–448. https://doi.org/10.1016/J.COMPCHEMENG.2019.03.034

Piedra-Jimenez, F., Torres, A.I, Rodriguez, M.A., 2024. A robust disjunctive formulation for the redesign of forest biomass-based fuels supply chain under multiple factors of uncertainty. Comput. Chem. Eng. 181, 108540.



 